Out of a 3 minute trailer, I saw 15 seconds of gameplay.
Yeah, I was confused too. Then I found this video: https://www.youtube.com/watch?v=uMOPoAq5vIA - I haven't watched it yet, but it seems to be a new video going through gameplay.
I've noticed that sometimes it takes a long time for posts to show up in results, and sometimes they don't show up at all but work 10-15 minutes later.
I think servers are overloaded atm
My systems:
They're all running in a Kubernetes cluster. The nodes are the primary deployment targets; the sunshine / raspi box is set as not preferred, but will be deployed to if there's no other node with resources. Storage is done on GlusterFS. Services are exposed to the network via MetalLB, and SSL cert handling is done via certbot. Ansible is used to set up and configure the cluster, which makes it pretty easy to add a new node.
In practice this means any one host can go down without services going down. It takes Kubernetes 10-15 minutes to flag a node as down (rather than just rebooting or something) and reschedule the services, but it's more or less self-healing and usually already fixed before I notice there's been a problem.
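The "primary targets, but fall back to the raspi" behaviour above can be expressed with node affinity. A minimal sketch, assuming a hypothetical node label `node-role/preferred: "true"` on the main nodes (the label, deployment and app names are made up for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      affinity:
        nodeAffinity:
          # "Preferred" means the scheduler favours labeled nodes,
          # but will still place the pod elsewhere if it has to.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-role/preferred
                operator: In
                values: ["true"]
```

With a hard requirement (`requiredDuringSchedulingIgnoredDuringExecution`) the pod would stay unscheduled instead of falling back, which is why the soft preference fits this setup.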
As for services.. Some game servers, jellyfin, specialized stream servers for a project, nextcloud, postgres cluster, node red, grafana, influxdb, gotify, proget, a web server, and about 5-10 smaller personal projects.
Because I have an HTTP server on another box (and only one public IP), the Let's Encrypt auto-renewal can't work for the mail server, hence the dreams of setting up VMs. Perhaps I can have the HTTP server share the mail server's certificate over the network, but that sounds risky to me for some reason.
Use a proxy in front of them and let that deal with the certificate. Traefik is relatively easy to set up for that, but you can also use others, e.g. nginx or haproxy.
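A rough sketch of what that looks like with Traefik, as a static config that terminates TLS and answers the ACME HTTP challenge itself (the email, hostname and backend address are placeholders):

```yaml
# traefik.yml - static config (sketch)
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com          # placeholder
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web                 # renewal happens on port 80 via the proxy

# dynamic config (sketch) - route one hostname to a backend box
http:
  routers:
    mail-ui:
      rule: "Host(`mail.example.com`)"  # placeholder hostname
      entryPoints: [websecure]
      tls:
        certResolver: letsencrypt
      service: mail-ui
  services:
    mail-ui:
      loadBalancer:
        servers:
          - url: "http://10.0.0.5:8080" # placeholder backend address
```

The key point is that only the proxy needs port 80/443 on the single public IP; the backends never touch Let's Encrypt themselves. (For the mail protocols themselves, SMTP/IMAP, you'd still need to get the cert onto the mail server or proxy those ports too.)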
Think of a container as a running VM, and the image as the VM's file system. The image itself is static, so when the container restarts, all file system changes get tossed out (though you can map certain paths inside the container to other storage). A Dockerfile is a file that describes how to build the image (for example: use an Ubuntu base image, run these commands, copy this file into this path, expose this network port, and when the container boots, start this file).
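The steps in that example map one-to-one onto Dockerfile instructions; a minimal sketch (`app.sh` and the port are made up):

```dockerfile
# Use an Ubuntu base image
FROM ubuntu:22.04
# Run these commands
RUN apt-get update && apt-get install -y --no-install-recommends curl
# Copy this file into this path
COPY app.sh /opt/app/app.sh
# Expose this network port
EXPOSE 8080
# When the container starts, run this file
CMD ["/opt/app/app.sh"]
```

Building it with `docker build -t myimage .` bakes all of that into a static image you can then run as containers.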
When running a container you specify which image it should run, which network ports to expose to the host network, environment variables to set inside the container, whether it has access to a GPU, mappings of paths to storage, and so on. You can even change the startup command.
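All of those knobs show up as flags on `docker run`; a hypothetical example where the image name, ports, env var and paths are all made up. `-p` maps container port 80 to host port 8080, `-e` sets an environment variable inside the container, `-v` maps a host path to container storage, `--gpus` grants GPU access, and the trailing argument overrides the image's startup command:

```shell
docker run -d \
  --name myapp \
  -p 8080:80 \
  -e APP_ENV=production \
  -v /srv/myapp/data:/data \
  --gpus all \
  myimage:latest /opt/app/app.sh
```

Everything outside `-v` mappings is discarded when the container is removed, which is why the data path is mapped out.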
A docker compose file is a config file that can define all those things, for multiple containers, binding them together into one stack. So you could, for example, have a static web server, an API server, a database server, Redis, and so on, all defined and configured via environment variables. Then you just run "docker compose up" to bring up all the parts in their own Docker namespace and virtual network.
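A sketch of that exact stack as a `docker-compose.yml`; the API image name, ports and credentials are placeholders, and services reach each other by service name on the stack's virtual network:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"              # only the web server is exposed to the host
  api:
    image: myorg/api:latest    # made-up image name
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on: [db, cache]
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # keep data across restarts
  cache:
    image: redis:7
volumes:
  db-data:
```

One `docker compose up -d` brings it all up; `docker compose down` tears the whole stack and its network back down.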
This gives me MongoDB flashbacks. Postgres, if properly set up, should easily handle thousands of users.
Yeah.. Just noticed.. Same here. Could be, could be
Edit: The posts on localllama are marked as English, which is one of the languages I do have as an option.
Also thought so.. until I subscribed to another one, which worked as normal. And speaking of not knowing wtf I'm doing, somehow this got posted in the wrong place.. I think..
Not sure what the bleep is going on. It shows as posted to programmerhumor@lemmy.ml in the UI here and on my profile when checking now, but the post was made to infosecpub@infosec.pub and shows up there.
I am very confused right now
Edit: https://imgur.com/a/y3oT4QQ - but shows correct now...
Could use IPFS for file hosting