At the time of writing, Lemmy.world has the second-highest number of active users of any Lemmy instance.

Also at the time of writing, Lemmy.world has >99% uptime.

By comparison, other Lemmy instances with as many users as Lemmy.world keep going down.

What optimizations has Lemmy.world made to its hosting configuration that make it more resilient than other instances’ configurations?

See also “Does Lemmy cache the frontpage by default (read-only)?” on !lemmy_support@lemmy.ml

  • andrew@radiation.party

    Ensuring there’s no data leakage in those cached calls can be tricky, especially if any API calls return anything sensitive (login tokens, authentication information, etc.), but I can see caching all read-only endpoints that return the same data regardless of permissions for a second or two being helpful for the larger servers.
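
    As a rough sketch of the kind of nginx micro-caching I mean (the endpoint path, upstream address, and the jwt cookie name below are placeholders, not anyone’s actual config):

        # Cache anonymous, read-only API responses for one second.
        # Everything here is illustrative; adjust names and paths to taste.
        proxy_cache_path /var/cache/nginx/lemmy levels=1:2 keys_zone=lemmy_ro:10m
                         max_size=256m inactive=10s use_temp_path=off;

        upstream lemmy_backend {
            server 127.0.0.1:8536;   # assumed default Lemmy backend port
        }

        server {
            listen 80;

            location /api/v3/post/list {
                proxy_pass http://lemmy_backend;

                proxy_cache         lemmy_ro;
                proxy_cache_valid   200 1s;     # identical responses are reused for 1s
                proxy_cache_methods GET HEAD;   # never cache writes
                proxy_cache_lock    on;         # collapse concurrent misses into one upstream call
                proxy_cache_key     "$scheme$host$request_uri";

                # Skip the cache whenever credentials are present, to avoid the
                # data-leakage problem described above.
                proxy_cache_bypass  $http_authorization $cookie_jwt;
                proxy_no_cache      $http_authorization $cookie_jwt;
            }
        }

    Even a one-second TTL means a burst of identical requests to a hot endpoint collapses into roughly one backend/database query per second.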

    It’s also worth noting that Postgres does its own query-level caching, quite aggressively too. I’ve worked in some places where we had to add a SELECT RANDOM() to a query to ensure it was pulling the latest data.

    • maltfield@monero.house (OP)

      In my experience, the biggest gains from caching come from caching in front of the backend, in RAM, so the request never even reaches those services at all. I’ve used Varnish for this (which is also what the big CDN providers use). In Lemmy, I imagine that would be the nginx proxy that sits in front of the backend.
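
      One wrinkle: nginx’s proxy cache is file-backed rather than in-memory like Varnish’s default storage, so to keep cached responses in RAM you’d point the cache directory at a tmpfs mount. A rough sketch (paths and sizes are placeholders):

          # /etc/fstab: a 512 MB RAM-backed mount for the cache (illustrative).
          #   tmpfs  /var/cache/nginx/ram  tmpfs  size=512m,mode=0700  0  0

          # nginx http{} context: store the cache zone on the tmpfs mount so
          # cached responses are served from memory, never from disk.
          proxy_cache_path /var/cache/nginx/ram keys_zone=lemmy_ram:10m
                           max_size=512m inactive=60s use_temp_path=off;

      (In practice the kernel’s page cache keeps hot cache files in memory anyway, but tmpfs makes the in-RAM behaviour explicit and bounds its size.)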

      • PriorProject@lemmy.world

        I haven’t heard admins discussing web-proxy caching, which may have something to do with the fact that the Lemmy API currently runs almost entirely over websockets. I’m not an expert in websockets, and I don’t want to say that websocket API responses absolutely can’t be cached… but it’s not like caching a RESTful API. They are working on moving away from websockets, btw… but it’s not there yet.

        The comments from Lemmy devs in https://github.com/LemmyNet/lemmy/issues/2877 make me think that there’s a lot of low-hanging fruit in database query optimization, and that admins are frequently focusing on app settings like worker counts and db tuning to maximize the effectiveness of db-level caches, indexes, and other optimizations.

        Which isn’t to say there aren’t gains to be had in the direction you’re suggesting, but I haven’t seen evidence that anyone’s secret sauce is effective web-proxy caching.

        • s900mhz@beehaw.org

          I may be wrong, but there is a branch in the works (in the UI repo) that pulls the websocket out and replaces it all with HTTP calls, so the websocket may not be here for long.

          • PriorProject@lemmy.world

            You’re correct: the devs are already committed to deprecating the websocket API. This may make caching easier in the future, and people may use it more as a result. I’m a little skeptical, as most of the heavy requests come from authenticated users, and web-proxy caching authenticated requests without risking serving them to the wrong user is also non-trivial. But caching is not my area of expertise; there may be straightforward solutions here.

            But my comment was in reference to the current releases in use on real-world Lemmy servers.
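
            To illustrate what I mean about authenticated requests, the two usual patterns at the web-proxy layer look roughly like this (the jwt cookie name and the API path are assumptions on my part, not Lemmy’s documented behaviour):

                location /api/v3/ {
                    proxy_pass http://lemmy_backend;   # assumed upstream name
                    proxy_cache       lemmy_ro;        # zone assumed defined via proxy_cache_path
                    proxy_cache_valid 200 1s;

                    # Option A (safe and simple): never cache requests that carry
                    # credentials, so only anonymous traffic is served from cache.
                    proxy_cache_bypass $http_authorization $cookie_jwt;
                    proxy_no_cache     $http_authorization $cookie_jwt;

                    # Option B (per-user cache): include the credential in the cache
                    # key so one user's response can never be served to another.
                    # Uses far more memory and still needs care with Set-Cookie/Vary.
                    # proxy_cache_key "$scheme$host$request_uri$cookie_jwt";
                }

            Option A is the easy win for anonymous traffic, but it does nothing for the logged-in requests that make up most of the heavy load, which is the source of my skepticism.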

            • s900mhz@beehaw.org

              Yes, I didn’t intend to downplay your comment. Caching at the proxy layer with auth is something I’m not familiar with; I’ve never had to implement it in my career (so far 😅). I just wanted to make it known that the websocket may be a thing of Lemmy’s past, for anyone unaware.

              • PriorProject@lemmy.world

                Yes, I didn’t intend to downplay your comment.

                I never interpreted it that way. Your comment was helpful, and I was expanding on it with more context. Lemmy on, friend.

        • maltfield@monero.house (OP)

          Yeah, that’s exactly why I’m asking this question. All the effort seems to be going into the DB – but you can have a horribly shitty DB and backend and still have a massively performant webserver just by caching the reads in RAM.

          I didn’t see any tickets about this on GitHub, which is why I’m asking around to see if there’s actually some very low-hanging fruit for improving all the instances with a frontend RAM cache.

          • PriorProject@lemmy.world

            Yeah, that’s exactly why I’m asking this question. All the effort seems to be going into the DB – but you can have a horribly shitty DB and backend and still have a massively performant webserver just by caching the reads in RAM.

            Much of your post seemed to focus on the techniques employed by lemmy.world, and caching websocket responses in the web proxy does not seem to feature prominently among those techniques.

            If you’re interested in advancing the state of the discussion around web-proxy caching, I’d consider standing up an instance to experiment with it and report your own findings. You wouldn’t necessarily have to take on the ongoing expense and moderation headache of a public instance: you could set it up with new-user registrations closed, create your own test users, and write a small load generator powered by https://join-lemmy.org/api/ to investigate the effect of caching common API queries.

        • Yours Truly@dataterm.digital

          I work on nginx cache modules for a CDN provider.

          While websockets can be proxied, they’re impractical to cache. There are no turnkey solutions for this that I’m aware of, but an interesting approach might be to build something on top of Nchan with some custom logic in ngx_lua.

          I agree with you that web-proxy caches aren’t a silver bullet. They need to be part of a more holistic approach, which should start with optimizing the database queries.

          Caching with auth is possible, but it’s a whole can of worms that should be a last resort, not a first one.