This site is currently struggling to handle the number of new users. I have already upgraded the server, but it will go down regardless if half of Reddit tries to join.

However, Lemmy is federated software, meaning you can interact seamlessly with communities on other instances like beehaw.org or lemmy.one. The documentation explains in more detail how this works. Use the instance list to find one where you can register, then use the Community Browser to find interesting communities. Paste the community URL into the search field to follow it.

You can help other Reddit refugees by inviting them to the same Lemmy instance where you joined. This way we can spread the load across many different servers. And users with similar interests will end up together on the same instances. Others on the same instance can also automatically see posts from all the communities that you follow.

Edit: If you moderate a large subreddit, do not link your users directly to lemmy.ml in your announcements; that will only make the server go down sooner.

  • aksdb@feddit.de · +17 / -2 · 1 year ago

    I think lemmy will be bitten in the ass by not having considered clustering/horizontal scaling from the start. Federation alone as a scaling mechanism is only feasible for “nerds”. But if the network wants to grow, we will need a few scalable, large hosted instances. And if their only choice is to scale vertically, there will be a hard limit (unless we put a good old mainframe somewhere ^^).

    Another downside of this design is: you can’t run it with high availability. If there’s only one process per instance, updating it will mean the whole instance is down. Sure, if all goes well this downtime is under a second. But if it doesn’t go well or if a migration is needed, this might quickly become hours.

    • PriorProject@lemmy.world · +10 / -1 · 1 year ago

      I think you probably underestimate how far one can get with “vertical” scaling. Here’s the compose file: https://raw.githubusercontent.com/LemmyNet/lemmy/release/v0.17/docker/prod/docker-compose.yml

      • It includes 4 different containers… so there’s a way to scale out to 4 machines right away. Maybe not every container is doing an equal amount of work… but there’s some amount of immediately available machine-splitting.
      • I’m no expert, but I believe that at least the lemmy and lemmy-ui containers are stateless. If so, they’re horizontally scalable already (a rough sketch of what that could look like follows this list).
      • Postgres then would likely be the main bottleneck. But postgres offers read-replicas, so again the write-load and the read-load can be hosted on separate machines. And if there’s enough read-load, you can have many replicas.
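
      A rough sketch of what that could look like with a Compose override, assuming the lemmy and lemmy-ui services really are stateless. The replica counts and the idea of a separate reverse proxy in front are illustrative assumptions on my part, not anything from the official Lemmy setup:

          # docker-compose.override.yml -- illustrative only, not the official file.
          # Assumes lemmy and lemmy-ui keep no local state, so extra copies can run
          # side by side behind whatever reverse proxy fronts the instance.
          services:
            lemmy:
              deploy:
                replicas: 3        # three backend workers instead of one
            lemmy-ui:
              deploy:
                replicas: 3        # three frontend renderers

      With Compose v2, “docker compose up -d” honours deploy.replicas as long as the scaled services don’t publish host ports themselves; the proxy reaches them through Docker’s built-in service DNS, which resolves the service name to all of the replicas.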

      Other comments from the admins have shown that lemmy.ml today is running on a single eight-core box and it’s currently hosting 30k registered users and over 1k active. So how much more compute capacity can we throw at “vertical” scaling on the current software architecture?

      • Just by going to a bigger single box, we can get 128 cores with no problem, a 16x bump in capacity. Does that get us to at least 300k registered + 10k active?
      • Splitting the containers onto 4 separate machines. Does that get us 2x more?
      • Adding PG read-replicas and additional lemmy/lemmy-ui containers would let us expand our instance footprint to maybe 6 physical machines, which should get us another 2x or more in performance.

      Conservatively, that’s roughly a 64x (16 × 2 × 2) increase over the computing capacity of the current hardware and could potentially support 1m registered users and 50k active. Now, I don’t REALLY expect this to be possible today; there will be many software bottlenecks found along the way to scaling a single instance this large. But my point is that there’s already a medium amount of horizontal scalability built into lemmy, and if the software doesn’t fall over for algorithmic reasons (which it will at first), the current infrastructure architecture allows quite a lot of growth. There’s plenty of time between now and a federation of million-user instances to adopt a truly distributed storage backend if needed.

      • aksdb@feddit.de · +7 · 1 year ago

        Doesn’t solve the availability issues, though. I know of no seriously hosted system that doesn’t have at least two replicas in different availability zones. I don’t expect any hobby instance to offer any kind of availability guarantee. But if we want to have one or two central instances that the typical reddit user can flock to, this would IMO be essential to have.

        Also, in my experience it is FAR cheaper to have a few low- to mid-range systems and scale horizontally than to throw one high-end machine at the problem and scale vertically. If you look at the pricing, the monthly cost for vertical scaling goes up steeply once you want much more RAM and CPU cores (and storage, and so on).

        Being able to scale horizontally solves both issues: hardware is cheaper and reliability is higher.

        That lemmy is so damn efficient would then simply mean that we can achieve excellent results with few resources, where Reddit would already struggle and need to put many more machines in place. That would be a nice “business” advantage.

        • PriorProject@lemmy.world · +6 · 1 year ago

          Doesn’t solve the availability issues, though. I know of no seriously hosted system that doesn’t have at least two replicas in different availability zones.

          I’m not sure why you think the setup I’ve described can’t have coverage in multiple availability zones. If the lemmy and lemmy-ui containers are stateless as I suspect, you can autoscale them. Pictrs is new to me, not sure there… but it appears to support object storage, which would likely make it stateless, and the object storage can replicate to multiple AZs. Postgres read replicas can be placed in multiple AZs as well. The only component that presents an issue is the Postgres write-leader, and failovers there can be done in minutes. Many, many popular sites run with an infrastructure like this and achieve excellent uptimes.
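
          For the stateless pieces, here is a minimal Kubernetes sketch of what “coverage in multiple availability zones” could look like. The image tag, replica count and labels are illustrative assumptions rather than a documented Lemmy deployment; only the zone topology key is a Kubernetes standard:

              # Illustrative only: spread stateless Lemmy backend pods across zones.
              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: lemmy
              spec:
                replicas: 4
                selector:
                  matchLabels:
                    app: lemmy
                template:
                  metadata:
                    labels:
                      app: lemmy
                  spec:
                    topologySpreadConstraints:
                      - maxSkew: 1
                        topologyKey: topology.kubernetes.io/zone   # standard zone label
                        whenUnsatisfiable: DoNotSchedule
                        labelSelector:
                          matchLabels:
                            app: lemmy
                    containers:
                      - name: lemmy
                        image: dessalines/lemmy:0.17.4   # assumed published backend image
                        ports:
                          - containerPort: 8536          # Lemmy’s default backend port

          pictrs (on replicated object storage) and the Postgres read replicas can be spread across zones the same way; the write-leader is the one piece that needs a failover story rather than extra copies.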

          I do get the power of horizontal scalability; I specialize in distributed databases. But they come at a cost in flexibility relative to something like Postgres… and we’re very far from “needing” horizontally scaling database writes here. Everything else looks like it can be scaled horizontally if someone wants to take on the headache of doing so.

          • aksdb@feddit.de · +1 · 1 year ago

            Well, one could try to swap Postgres for CockroachDB. But a GitHub ticket asking for clustering support was closed as out of scope, so it might be that Lemmy is not stateless. I haven’t checked the code yet, though.

            • PriorProject@lemmy.world · +6 · 1 year ago

              If Cockroach is truly PG-compatible, Lemmy admins can swap it in without developer support. I suspect Cockroach constrains some SQL features and has poor performance on others, but that or AWS Aurora are things you can experiment with without dev support if you’re passionate about proving out the value of scale-out.
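
              As a sketch of that kind of admin-side experiment: the CockroachDB bits below are real, LEMMY_DATABASE_URL is the variable the docker examples use as far as I can tell, and whether Lemmy’s queries and Diesel migrations actually run unchanged against Cockroach is exactly the open question:

                  # Illustrative only: point Lemmy at CockroachDB over the Postgres wire protocol.
                  services:
                    db:
                      image: cockroachdb/cockroach:latest-v23.1
                      command: start-single-node --insecure   # toy single node, not a real cluster
                    lemmy:
                      image: dessalines/lemmy:0.17.4
                      environment:
                        # Postgres-style connection string; the "lemmy" database still
                        # has to be created on the Cockroach side first.
                        - LEMMY_DATABASE_URL=postgres://root@db:26257/lemmy?sslmode=disable
                      depends_on:
                        - db

              If it falls over, it will most likely be in the migrations or in Postgres-specific SQL, which would be useful information in itself.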

              The statement that spawned my response though was this:

              I think lemmy will be bitten in the ass by not having considered clustering/horizontal scaling from the start. Federation alone as a scaling mechanism is only feasible for “nerds”. But if the network wants to grow, we will need a few scalable, large hosted instances.

              I still don’t think it’s true that we need horizontal scaling to support sufficiently large instances. The amount of vertical and horizontal scaling ability built into Lemmy today is both useful, and likely to outstrip the current ability of its code to scale a single instance. Any algorithms that scale super-linearly with respect to comment-count, post-count, user-count, or community-count, will fail just as hard with distributed backends as they do with an RDBMS. And as you note, PG-compatible distributed systems provide a potential lower-engineering-cost on-ramp to distributed systems once the codebase is efficient-enough to warrant such a transition to scale further. I suspect I’ve contributed everything of use I have to this thread though, and don’t expect to respond further.

              • iouapizza@infosec.pub · +5 · 1 year ago

                As someone not versed in DBs and scaling for web architecture, this was a super fun read through, appreciate the comment chains from both users.

              • aksdb@feddit.de · +4 · 1 year ago

                Thank you for your thorough explanations and input. It definitely gave me a few things to think about. And if I have some spare time I might even try to spin up lemmy in some local k8s to see how it reacts to being scaled up and down.
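
                If I get to it, the scaling half of that experiment would probably start with something like the sketch below, assuming a Deployment named “lemmy” along the lines discussed further up; the thresholds are arbitrary, and it needs metrics-server plus a CPU request on the pods to do anything:

                    # Illustrative HorizontalPodAutoscaler for a local scale-up/down test.
                    # Manual version: kubectl scale deployment/lemmy --replicas=5
                    apiVersion: autoscaling/v2
                    kind: HorizontalPodAutoscaler
                    metadata:
                      name: lemmy
                    spec:
                      scaleTargetRef:
                        apiVersion: apps/v1
                        kind: Deployment
                        name: lemmy
                      minReplicas: 1
                      maxReplicas: 5
                      metrics:
                        - type: Resource
                          resource:
                            name: cpu
                            target:
                              type: Utilization
                              averageUtilization: 70   # arbitrary threshold for the test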

    • federico3@lemmy.ml · +7 · 1 year ago

      Indeed. If a big instance like lemmy.ml were to be shut down, all of its communities would be lost. This is simply not sustainable. Why would users put effort into building a community if it could be gone at any time?

      • aksdb@feddit.de · +3 · 1 year ago

        That however would be a different problem. A horizontally scaled instance would be able to cope with more users, but if it shuts down for monetary, personal, or whatever reason, it’s still down.

        Protecting a community from this is what the decentralized part is for. That is already in place.

        (Although there is a middle ground where you could design the system in a way that one instance is mirrored and load-balanced across different hosting providers. That would actually also be quite interesting to have. But that’s another layer of complexity on top.)

        • d3Xt3r@lemmy.ml · +4 / -1 · 1 year ago

          Protecting a community from this is what the decentralized part is for. That is already in place.

          What? How is it solved exactly? If say lemmy.ml is down, what’s the point of other servers existing, if most of the content and users are here? Like, I created a few new communities on lemmy.ml, which don’t exist on, say, Beehaw, because for some strange reason the Beehaw admins don’t allow users to create communities. So how does going to Beehaw help me if lemmy.ml is unavailable? Okay, so you tell me I should go to a different server then. Maybe even make a new server. Done and done. But there are very few to zero users on that server, so those new communities and content created there might as well not exist. Also, even though Lemmy is federated, the homepage defaults to “local”, so all the new users coming in may miss out on all the other federated communities, and, if I’m reading this correctly, federation isn’t even a fully automatic process, and some admins may even choose to put their server in a whitelist mode. All of this makes the whole “advantage” of federation, or at least Lemmy’s version of it, seem kind of pointless.

          It’s like saying, “Hey, Gmail is down, so you should just use Hotmail instead.” Okay, so I can still send and receive emails, but I can’t access any of my old emails for context, none of my contacts can reach me at my Gmail address, and none of my filters, address book, and other content are available, so I may not even be able to reach out to my contacts and let them know what my new address is.

          IMO the way federation should’ve been designed is to use something like blockchain technology, so every instance basically has all the content and there’s only one source of truth for user accounts and data (a distributed ledger), or maybe even to just implement the whole thing as a plain old high-availability cluster with load balancing.

          Unless I’m missing something fundamental, I don’t see how this decentralization is of any use if the content isn’t there.

          • federico3@lemmy.ml · +6 · 1 year ago

            If say lemmy.ml is down, what’s the point of other servers existing, if most of the content and users are here?

            There is no replication and failover so the problem is not solved.

            blockchain technology

            Urgh, no way. Replication and some basic message signing would be enough.

          • aksdb@feddit.de · +1 · 1 year ago

            What? How is it solved exactly? If say lemmy.ml is down, what’s the point of other servers existing, […]

            Because otherwise you have to rely on someone else’s instance. The idiomatic solution would be for a community to host its own Lemmy/ActivityPub instance and join the federation. Then the community has control over its own data, in every sense. If they want to delete something (for breaching law, protocol, or whatever), they are free to do so and don’t have to ask anyone else.

            IMO the way the way the federation should’ve been designed is to use something like blockchain technology […]

            Please, no. I mean, there is IPFS out there that works somewhat like that, but I don’t really like it. First, the ever-growing amount of data means that every instance has to keep up with it. And if instances didn’t replicate the data, the deletion of a single instance would still eliminate it, even if references remained in a blockchain.

            Also: the ability to “forget” is important. Not everything needs to live on forever. That it currently does can already be a big problem. Look how people’s lives got almost ruined because someone dug up stupid tweets from 10 years ago. Solving the issue of data ownership is IMO one of the bigger things we need to keep in mind when designing a better web. Federation, with the ability to “just” bring along your own instance where you are the owner, is one of those options.

            • d3Xt3r@lemmy.ml · +3 / -1 · 1 year ago

              Fair point, but my original point/issue still stands. The admin here is saying “lemmy.ml is overloaded, use other instances instead”, and that advice isn’t really helpful, at least in the present state of things. Right now we have an influx of novice users coming in from Reddit, other servers are either not accepting applications at the moment or are too niche/specific (or inflexible, like Beehaw), and, finally, the majority of the content is on lemmy.ml. So the end result is that lemmy.ml is one of the main viable servers.

              If people join some random server which doesn’t have the content they’re after, they’ll either lose interest, OR they may continue to consume the content on lemmy.ml via federation, but then that’s not really going to solve the load issue, since the content on lemmy.ml isn’t distributed/replicated.

              I understand your point about ever-growing data and how it may be better if that data is transient and not there forever, but for a news-aggregator and forum type of social network like Reddit (and now Lemmy), data is everything. If that data isn’t available, or isn’t going to be available in the future, or won’t be visible to audiences because it sits on some random server, it’s not going to give content creators much incentive to create content, and no content == no users. This sort of model/thinking will be doomed to failure, or be forever relegated to niche/enthusiast status, where only niche communities will thrive on specific servers targeting that niche. Which I guess is the ultimate goal of federation, where every topic/community has its own server? But to get there, you’ll need interested users, and to get users to be interested you need a stable, singular place you can point them to, where they can post content knowing it will still be there later. And maybe, as that server grows, the admin could start splitting off the larger communities into their own individual instances?

            • solairusrising@lemmy.ml · +2 · 1 year ago

              This is how I am understanding it. Please correct me if I am wrong.

              I’m going to use Reddit as an example, since we all understand that…

              So the way I understand this is that the backbone is now the whole of the internet instead of just reddit.com.

              Each instance would be somewhat akin to a self-hosted subreddit. We can reach any sub from any other sub, since the backbone is now spread across the whole internet instead of just reddit.com.

              These subs (instances) are also like old style BB forums in that there can be different categories (communities) hosted by that instance, but those are also still visible across other instances.

              So basically people who are making communities here are making a sub in a sub (in Reddit terms).

              Do I have that correct?

              • Parsnip8904@beehaw.org · +2 · 1 year ago

                Mostly. I try to think of an instance not as a subreddit but as a loose collection of them, like a multireddit.

                What is kind of nice, in my understanding, is that text content is replicated across federated instances when a user is using both. So if you’re on Beehaw and comment on lemmy.ml, both of those servers will have your comments. That’s already slightly more redundancy than Reddit provides.

        • GuyDudeman@lemmy.ml · +1 · 1 year ago

          I still don’t quite understand how the community is replicated…

          Are you saying that if Lemmy.ml/tiki exists and someone creates Beehaw.org/tiki that they are the same community? They would show the same posts and comments?

          Or are they completely separate communities that would just have the same name… users could subscribe to both if they wanted, but the posts and comments would be stuck on their respective instances?

          Or - Is it the case that Lemmy.ml’s tiki community and posts and comments are also stored on Beehaw.org somehow?

          If I deleted the tiki community on Lemmy.ml, would users from both communities lose their posts and comments from the Lemmy.ml instance of that community?

          • Pigeon@beehaw.org · +1 · 1 year ago

            The current state is that they are separate communities, but I believe the person you’re replying to is proposing something like the other option, where some communities would be the same across instances so that the community and its post history would survive if one of the instances went down (not currently the case).

            Currently, if you deleted the tiki community on lemmy.ml, only the lemmy.ml tiki community posts/comments would be gone. Any other tiki communities on other instances would remain.

              • Jakob :lemmy:@lemmy.schuerz.at · +1 · 1 year ago

                If there’s a community serverA/tiki, you can search on serverB for serverA/tiki and join the community serverA/tiki from serverB. Content is replicated to serverB and back.

                serverB/tiki@serverA is the replica you can fully use on serverB. This can exist beside serverB/tiki, which is a different community.

                If someone writes a posting or comment on serverA/tiki, you can see it in serverB/tiki@serverA.

                If someone writes a posting or comment on serverB/tiki@serverA, you can see it in serverA/tiki. (And even on serverC/tiki@serverA)

          • Mac@lemmy.world · +1 / -1 · 1 year ago

            The Tiki community should simply run a Tiki server, no? Problem solved.

            • GuyDudeman@lemmy.ml · +3 · 1 year ago

              Great idea, but then I’d have to get into the whole hosting thing and all of that which I don’t want to do.

              • Mac@lemmy.world · +3 · 1 year ago

                There may be someone in the community that’s interested and/or willing.

                But I agree, it’s not as simple as it sounds.

    • _NetNomad @ DXC@forum.dxcomplex.com · +3 · 1 year ago

      i’ve been saying we need a COBOL/CICS implementation of ActivityPub for YEARS and it’s always the same “where the hell am i supposed to get a 3270 in 2023” and “what do you mean i can’t shitpost during the batch window”