I do the majority of my Lemmy browsing on my own personal instance, and I’ve noticed that some threads are missing comments; some large threads are missing large quantities of them. Now, I’m not talking about comments not being present when you first subscribe to or discover a community from your instance. In this case, I noticed it with a lemmy.world thread that popped up less than a day ago, well after I subscribed.

At the time of writing, that thread has 361 comments. When I view the same thread on my instance, I can see 118. That’s a large swathe of missing content for just one thread. I can use the search feature to forcibly resolve a particular comment to my instance and reply to it, but that defeats a lot of the purpose of having my own instance.
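
If anyone wants to script that workaround instead of clicking through the search UI, here’s a rough sketch against the Lemmy HTTP API. The instance URL and credentials are placeholders, and it assumes a 0.18-era API where the JWT is passed as an `auth` query parameter:

```python
# Minimal sketch: ask your own instance to fetch (resolve) a remote comment.
# Placeholder instance/credentials; assumes a 0.18-era Lemmy HTTP API.
import requests

LOCAL = "https://lemmy.example.com"  # your personal instance (placeholder)

def login(username: str, password: str) -> str:
    """Log in to the local instance and return a JWT."""
    resp = requests.post(
        f"{LOCAL}/api/v3/user/login",
        json={"username_or_email": username, "password": password},
    )
    resp.raise_for_status()
    return resp.json()["jwt"]

def resolve(remote_url: str, jwt: str) -> dict:
    """Have the local instance pull in a remote post/comment by its URL."""
    resp = requests.get(
        f"{LOCAL}/api/v3/resolve_object",
        params={"q": remote_url, "auth": jwt},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = login("alice", "correct horse battery staple")  # placeholders
    found = resolve("https://lemmy.world/comment/123456", token)  # placeholder URL
    print(found.get("comment", found))
```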

So has anyone else noticed something similar happening? I know my instance hasn’t gone down since I created it, so it couldn’t be that.

  • EskueroA · 3 points · edited · 2 years ago

    This arises from the good ol’ issue of everybody migrating to the same three or four big servers, which end up overloaded with their own users and can’t send updates to other instances.

    I remember the same thing happening to Mastodon during the first few exoduses, until a combination of people not staying, stronger servers, and software improvements settled the issue.

    I can barely get updates from lemmy.ml, and lemmy.world isn’t much better.

    Beehaw seems to perform okay.

    • exu@feditown.com · 1 point · 2 years ago

      About half of the communities on lemmy.ml I subscribed to are on “Subscribe Pending” and have been since I started this server.

  • delcake@lemmy.songsforno.one · 1 point · edited · 2 years ago

    I’ve noticed the same situation in some threads on my own instance too, but I’m under the impression that it might just be a backlog on the instance responsible for sending out the federated content. I’ve noticed this when having my home feed set to New and suddenly seeing something like thirty posts from lemmy.world come across all at once with widely varied timestamps.

    I suppose the best way to test whether this is the case would be to note down any threads that are missing substantial numbers of comments on your local server, then check back on them periodically to see if and when they start to fill in.
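
    Something like this rough sketch could automate the check; the instance URL and post ID are placeholders:

    ```python
    # Poll a local post's comment count periodically to watch federation
    # fill the thread in (or not). Placeholder instance URL and post ID.
    import time
    import requests

    LOCAL = "https://lemmy.example.com"  # your instance (placeholder)
    POST_ID = 1234                       # local ID of the thread to watch (placeholder)

    while True:
        resp = requests.get(f"{LOCAL}/api/v3/post", params={"id": POST_ID})
        resp.raise_for_status()
        count = resp.json()["post_view"]["counts"]["comments"]
        print(f"{time.strftime('%Y-%m-%d %H:%M')}  comments: {count}")
        time.sleep(3600)  # check once an hour
    ```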

  • Freeman@lemmy.pub · 1 point · 2 years ago

    I haven’t noticed it happening, but I haven’t checked much.

    What I have noticed is that some of the larger, overloaded instances can be slow to post comments to, slow to subscribe to, slow to post threads on, and so on, especially from a separate federated instance.

    Lemmy.world is easily one I have noticed, along with lemmy.ml and, occasionally, beehaw (but much less so).

    My guess is that in general those instances may be slow to sync/update data or respond.

    • chiisana@lemmy.chiisana.net · 0 points · 2 years ago

      I’m also seeing this issue with the two instances you’ve mentioned. I’m not sure if it’s just an overload issue, or if there’s a more fundamental issue with the way I’ve set things up. One way around it: if I see a comment I really want to interact with from my own instance, I can copy the link from the fediverse icon and then search for it. The comment (along with its parents) will eventually pop up on my instance. Not ideal, as I’d still have to venture out of my own instance to discover said comment chain, but at least it provides a way to interact, for now.

      • Freeman@lemmy.pub · 0 points · 2 years ago

        I would just give it time. I think those instances have some scaling issues and things take time to sync.

        Do you have other users on your instance?

        I noticed it took a day or two to “catch up” as I added and federated with new communities on these instances.

        Again, I haven’t really dug in. They have seemed okay (I do have accounts on those instances too). It seems that once everything is “caught up” and updates are just incremental, things go more smoothly.

        But are you seeing any resource constraints on your instance, like CPU or RAM?
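
        A quick way to check on the host, as a sketch using psutil (`docker stats` gives a per-container view too):

        ```python
        # Snapshot of host CPU, RAM, and load average; requires `pip install psutil`.
        import psutil

        print("CPU %:", psutil.cpu_percent(interval=1))
        mem = psutil.virtual_memory()
        print(f"RAM: {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB ({mem.percent}%)")
        print("Load avg (1/5/15 min):", psutil.getloadavg())
        ```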

        • chiisana@lemmy.chiisana.net · 0 points · 2 years ago

          It’s all by myself, with plenty of room for activity. We’ll see if it catches up or just ends up creating a larger divergence! And yeah, I do have an account on lemmy.world as well, so it’s just an extra song and dance for now.

  • darmok@darmok.xyz · 1 point · 2 years ago

    I’ve noticed something similar on my instance in some cases as well, with nothing obvious logged as errors either. It just seems like the comment was never sent. In my case, CPU usage is minimal, so it doesn’t seem like a resource issue on the receiving side.

    I suspect it may be a resource issue on the sending side: potentially it’s not able to keep up with the number of subscribers. I know there was some discussion from the devs about the number of federation workers needing to be increased to keep up, so that’s another possibility.

    It’s definitely problematic, though. I was contemplating implementing some kind of resync that would pull this entire post and all of its comments via the Lemmy API to get things back in sync. But if it is a resource issue on the sending server, I’m also hesitant to add a bunch more API calls to the mix. I think some kind of resync functionality will be necessary in the end.
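
    The resync I have in mind would look roughly like this: page through the thread’s comments on the origin instance, then resolve each comment’s `ap_id` on the local instance. A sketch only; the URLs, IDs, and JWT are placeholders, and it assumes a 0.18-era API:

    ```python
    # Brute-force resync sketch: list a thread's comments on the origin
    # instance, then make the local instance resolve each one by its ap_id.
    # Placeholder URLs/IDs/JWT; assumes a 0.18-era Lemmy HTTP API.
    import time
    import requests

    REMOTE = "https://lemmy.world"       # instance the thread lives on (placeholder)
    LOCAL = "https://lemmy.example.com"  # your instance (placeholder)
    REMOTE_POST_ID = 5678                # thread ID on the origin instance (placeholder)
    JWT = "local-instance-jwt"           # JWT for your instance (placeholder)

    page = 1
    while True:
        resp = requests.get(
            f"{REMOTE}/api/v3/comment/list",
            params={"post_id": REMOTE_POST_ID, "limit": 50, "page": page},
        )
        resp.raise_for_status()
        comments = resp.json()["comments"]
        if not comments:
            break
        for view in comments:
            # Resolving the comment's ActivityPub ID pulls it (and its
            # parents) into the local database.
            requests.get(
                f"{LOCAL}/api/v3/resolve_object",
                params={"q": view["comment"]["ap_id"], "auth": JWT},
            )
            time.sleep(1)  # throttle: the sending side may already be overloaded
        page += 1
    ```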

  • Roman0@lemmy.world · 1 point · edited · 2 years ago

    I seriously thought I was alone with this issue, but it seems it’s fairly common for people hosting on their own. Same as you guys: it won’t sync everything, and some communities are even “stuck” with posts from a day back, even though many new ones have been posted since.

    Kind of an off-topic question, but I guess it’s related: is there anyone else who can’t pull a certain community from an instance? I can’t seem to pull !asklemmy@lemmy.world or anything from that community, including posts and comments. No matter how many times I try, it won’t populate on my instance.

    EDIT: Caught this in my logs:

    lemmy | 2023-06-20T08:48:21.353798Z ERROR HTTP request{http.method=GET http.scheme="https" http.host=versalife.duckdns.org http.target=/api/v3/ws otel.kind="server" request_id=cf48b226-cba2-434a-8011-12388c351a7c http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Failed to resolve actor for asklemmy@lemmy.world

    EDIT2: Apparently it’s a known issue with !asklemmy@lemmy.world, and a bug to be fixed in a future release.

  • Guadin@k.fe.derate.me · -1 points · 2 years ago

    Does your server have enough power and workers to handle all the federated messages, or is it constantly at 100% CPU?

    • Jamie@jamie.moe (OP) · 0 points · 2 years ago

      The machine is a dedicated server with 6 cores/12 threads, all of which are usually under 10% utilization. Load averages are currently 0.35, 0.5, 0.6. Maybe I need to add more workers? There should be plenty of raw power to handle it.

      • Guadin@k.fe.derate.me · 0 points · 2 years ago

        Yeah, that sounds like enough to handle the load. How many workers do you use? And do you see any errors in your logs about handling messages? You could also try searching for that particular thread to see whether all replies are handled correctly.

        • Jamie@jamie.moe (OP) · 1 point · 2 years ago

          I had it at the default 64, but I’ll try 512 and see if that helps. Nginx is configured to use 768, so I doubt there’s any bottleneck there. I did notice the troubleshooting page mentions searching the logs for “Activity queue stats,” but a grep of the docker logs shows no results for that string.
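
          For reference, the setting I’m bumping lives in lemmy.hjson; a rough excerpt, assuming a 0.17/0.18-era config (key names have moved between releases, so check the docs for your version):

          ```hjson
          {
            federation: {
              enabled: true
              # Outgoing ActivityPub federation workers; the default was 64.
              # Raising this is supposed to help when outbound activity
              # can't keep up with the number of subscribed instances.
              worker_count: 512
            }
          }
          ```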