I’m trying to fix this annoying slowness when posting to larger communities. (Just try replying here…) I’ll be doing some restarts of the docker stack and nginx.
Sorry for the inconvenience.
Edit: Well, I’ve moved nginx from running in a docker container to running on the host, but that hasn’t solved the posting slowness…
Thank you Ruud for hosting! Your work is much appreciated.
Hey, I just want to echo what everyone else is saying - thanks much for hosting + all the efforts to keep things working well. It’s appreciated 👍
Thanks for hosting this great instance! We appreciate all you’re doing!
Godspeed to you over the coming days man. Really appreciate you putting this together and the extra work it takes when tackling something like this (both being new to the platform and the tech still being in relative infancy) - not to mention the crazy scaling happening. I will definitely be pitching in to help make sure the server stays up!!
Hey. From my own experience - Nginx is awesome and fast when it is working, but the more you want from it, the more difficult it becomes.
Give Caddy a try. This reverse proxy has always been excellent for me. It has HTTP/3 (QUIC) support, automatic ACME, and overall excellent configuration in terms of simplicity and user-friendliness. Caddy is not a good choice if you need a TCP/UDP proxy, though; it only proxies HTTP/HTTPS.
Someone said this about Caddy: “it injects advertising headers into your responses”. Is this true? I don’t know anything about Caddy, but that doesn’t sound too good lol (to be fair, it could be misinformation).
Never heard of that. It’s an open source project, free to use.
In case you want to understand why it’s good, check out a Caddyfile example. Just specify something like this: example.com { reverse_proxy backend:1234 }
And that’s it! It automatically binds 0.0.0.0:80 just to redirect to 0.0.0.0:443, adds TLS via ACME, and does it all behind the scenes.
Add one more line to my example and it adds compression.
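To make that concrete, a minimal sketch of such a Caddyfile might look like this (example.com and backend:1234 are just the placeholders from the example above; the encode line is the “one more line” for compression):

```
example.com {
    # Forward all requests to the backend service
    reverse_proxy backend:1234

    # Compress responses with zstd or gzip
    encode zstd gzip
}
```

With just this, Caddy listens on ports 80 and 443, redirects HTTP to HTTPS, and obtains and renews the certificate on its own.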
I’ve been using it for my self-hosted stuff for prob 1-2 years and it kept working flawlessly all the time. Very satisfied.
Sounds very cool. Does running with that file also handle the SSL certificate and validation automatically? Or are there extra steps?
Everything is automated. As long as you know how ACME works (port 80 needs to be accessible from the internet), everything is done in the background, including TLS (SSL) certificate maintenance.
A minimal config like that will default to provisioning (and periodically renewing) an SSL certificate from Let’s Encrypt automatically, and if there are any issues doing so it will try another free CA.
This requires port 80 and/or 443 to be reachable from the general Internet of course, as that’s where those CAs are.
There’s an optional extra step of putting
{ email admin@emailprovider.com }
(with your actual e-mail address substituted) at the top of the config file, so that Let’s Encrypt knows who you are and can notify you if there are any problems with your certificates. For example, if any of your certificates are about to expire without being renewed [1], or if they have to revoke certificates due to a bug on their side [2].
As long as you don’t need wildcard certificates [3], it’s really that easy.
[1]: I’ve only had this happen twice: once when I had removed a subdomain from the config (so Caddy did not need to renew), and once when Caddy had “renewed” using the other CA due to network issues while contacting Let’s Encrypt.
[2]: Caddy has code to automatically detect revoked certificates and renew or replace them before it becomes an issue, so you can likely ignore this kind of e-mail.
[3]: Wildcard certificates are supported, but require an extra line of configuration and adding in a module to support your DNS provider.
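For illustration, a rough sketch of a Caddyfile that combines the optional e-mail line with a wildcard site. This assumes Caddy was built with a DNS provider module (the Cloudflare module is used here purely as an example; other providers have their own) and that an API token is available in an environment variable:

```
{
    # Optional global options block: contact address for the ACME account
    email admin@emailprovider.com
}

*.example.com {
    # Wildcard certs need the DNS-01 challenge, hence the DNS provider module
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy backend:1234
}
```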
Thank you, Ruud!
Good luck today lol
Thank you! Sounds like a lot of work.
Thanks for putting in the time to make this run smoothly
Well worth any inconvenience, thank you so much for hosting!
2 restarts done already :-)
Hmm. I guess the delay in posting is not related to nginx. I now have the same conf as a server that doesn’t have this issue.
I’m only familiar with the high-level Lemmy architecture, but could it be related to database indices being rebuilt?
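For what it’s worth, if it were index builds or some other long-running statement, Postgres can show that directly. A couple of illustrative queries, assuming PostgreSQL 12 or newer and access to the Lemmy database via psql:

```sql
-- Any CREATE INDEX / REINDEX currently in progress?
SELECT pid, relid::regclass AS table_name, phase, blocks_done, blocks_total
FROM pg_stat_progress_create_index;

-- Longest-running active statements right now
SELECT pid, now() - query_start AS runtime, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC
LIMIT 10;
```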
Somehow I don’t think the slowness when posting or saving is due to the nginx server / reverse proxy running inside the Lemmy container.
I would think it’s related to inserts and updates in the DB, but I haven’t had time to look into it on my instance, sorry!
Edit: Wait! Posting and saving is fast now! What did you change? Nicely done! 👍
No it’s not! Not for me anyway. Yes, I’ll be looking into that, but first I need to migrate the server!
Oh wait, that’s because I’m posting to Lemmy.world from my instance. It’s only slow when posting to Lemmy.world as a Lemmy.world user.
With that in mind, it makes me think it has something to do with some insert or update that happens on save. My local DB is not under load, so my save is fast; Lemmy.world’s DB is under load, so the save is slow.
It might not even be that insert/update itself that is slow. The culprit could be some other insert into another table that gets triggered on save.
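If anyone wants to check that theory, the pg_stat_statements extension (if it’s enabled on the server) can show which writes are actually slow on average. Something along these lines; the column names are for PostgreSQL 13+, older versions use mean_time/total_time:

```sql
-- Slowest INSERT/UPDATE statements by average execution time
SELECT calls,
       round(mean_exec_time::numeric, 2)  AS avg_ms,
       round(total_exec_time::numeric, 2) AS total_ms,
       query
FROM pg_stat_statements
WHERE query ILIKE 'insert%' OR query ILIKE 'update%'
ORDER BY mean_exec_time DESC
LIMIT 10;
```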
Posting seems faster when posting to a non-local federated community. Maybe that’s what you experienced?
You are right. Good catch.
Hehe, the joys of troubleshooting and profiling. Isn’t it fun?
Hmm if it takes too long the fun disappears… ;-)
You got this. <3
I don’t have experience scaling Lemmy, but I do have experience scaling stuff in general. I’m sure you’ve got a few people here who’d be willing to talk things through with you if you get too frustrated.
And don’t forget to breathe and step back if you have to. Your well being is more important.
Thank you so much for this amazing instance!
You’re welcome!
Thanks for hosting.