Reddit refugee, eccentric engineer, and serial hobbyist.

  • 3 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • czardestructo@lemmy.world to Selfhosted@lemmy.world · Nextcloud appreciation post · 2 months ago

    I understand that not everyone has a perfect experience, but I’ve been using the same instance of Nextcloud for over 8 years; I just keep upgrading and migrating, and it just works. The only issues I’ve had are when Debian holds back PHP updates for too long, or when the update finally lands and all the PHP config files get fucked and I have to redo them all.

  • I’m going to have to give this a shot tonight; I need to make a pfSense rule to allow the server out and then change its DNS. Regarding PHP, my current config is the following, because I have over 64 GB of RAM and went to great lengths to get Nextcloud to cache MORE into RAM (back-of-envelope memory math after the block):

    ; recycle each worker after 50k requests to guard against memory leaks
    pm.max_requests = 50000
    pm.max_children = 1000
    pm.start_servers = 60
    pm.min_spare_servers = 30
    pm.max_spare_servers = 120
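
    Back-of-envelope check on those numbers, assuming roughly 50 MB of resident memory per php-fpm worker (that figure is an assumption; measure your own pool before trusting it):

    # worst case: all pm.max_children workers busy at the same time
    per_worker_mb = 50      # assumed average RSS per worker; yours will vary
    max_children = 1000     # matches pm.max_children above
    print(f"worst case: ~{per_worker_mb * max_children / 1024:.0f} GiB")

    That lands around 49 GiB fully saturated, which is why it only makes sense with 64 GB of RAM to spare.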
    
  • I’ve been running the Joplin server for over a year, with clients on four laptops and three phones, and I share notes with my wife; it’s wonderful. There are certainly quirks and occasional sync issues, but by and large I’m really happy with it. One cluster of notes always trips up a fresh client sync and shows up as about 50 conflicts, but I work through it. Also, my notebooks are huge and the first sync can take an hour; it’s a lot slower than I’d expect.

  • I think a full resync then re-index will go fine. My setup is different in that I sync everything through Nextcloud and run a script that looks for changes and triggers an indexing scan in PhotoPrism (rough sketch below). That said, I’ve absolutely mutilated some PhotoPrism databases (migrating servers, different folder names with the same content), run full indexing, and never ended up with duplicates. It’s very good at stacking and cleaning up the same files in the DB, so long as there aren’t actual duplicates in the original storage. But again, it might take 3 or more full scans to find and purge duplicates.
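
    Roughly the idea of that script, as a minimal sketch (the paths and the docker container name are placeholders, not my actual setup; the indexing itself is just PhotoPrism’s photoprism index CLI command):

    # trigger a PhotoPrism index pass only when the synced originals changed
    import subprocess
    from pathlib import Path

    WATCH_DIR = Path("/srv/nextcloud/data/photos")   # placeholder path
    STAMP = Path("/var/tmp/photoprism-last-index")   # records the last run

    def newest_mtime(root: Path) -> float:
        # newest modification time anywhere under the photo tree
        return max((p.stat().st_mtime for p in root.rglob("*")), default=0.0)

    last = STAMP.stat().st_mtime if STAMP.exists() else 0.0
    if newest_mtime(WATCH_DIR) > last:
        # "photoprism" here is an assumed container name
        subprocess.run(["docker", "exec", "photoprism", "photoprism", "index"],
                       check=True)
        STAMP.touch()

    Run it from cron every few minutes and PhotoPrism only rescans when Nextcloud actually dropped off new files.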

  • I am far from a PhotoPrism expert, but I can safely say the indexing algorithm is weird and takes multiple runs to settle. Logically I’d expect one scan to do everything, but I’ve found it can take 3 to 5 full scans to properly catch up after major changes. It’s almost as if it acknowledges big changes and documents them, but waits multiple passes before committing. It also does a really good job during a scan of looking for duplicate images and stacking/repointing to the valid file. I’d advise running the indexing another 2 or 3 times if you’re confident the 31k files are actually in storage and just not showing up in the database; a loop like the one below makes that painless.
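
    If you want those repeat passes to be hands-off, a trivial loop does it (again, the container name is a placeholder; drop the docker exec part if PhotoPrism runs on bare metal):

    # run a few back-to-back full index passes so stacking/cleanup converges
    import subprocess

    PASSES = 3  # 2-3 extra passes has been enough in my experience
    for n in range(PASSES):
        print(f"index pass {n + 1}/{PASSES}")
        subprocess.run(["docker", "exec", "photoprism", "photoprism", "index"],
                       check=True)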