Hi, I’m your customer base.
I’m a complete novice, no network or coding experience, but not afraid of computers either. I’m pretty worried about messing up something serious due to lack of knowledge.
In the end, I didn’t choose Synology or the like due to:
lack of robust community support. I’ve noodled around with Linux for years and learned that community support is essential.
price. I’d pay 10% or even 50% more for a good pre-configured system, but not 3-4x more (which is the general feeling I get from Synology’s pricing).
lack of configurability. I’m still not sure what I’d like to do (or be able to do). I know I want to replace some storage services, replace some streaming services, control my smart home, maaaaybe access my files remotely, and probably some other stuff. I may want to host email or a website in the future, but that’s not on my radar right now.
If there were some plug-and-play hardware/software solution that was still affordable and open, it would be a good choice for me.
Just getting started but yeah, I have basically no technology background. Mostly I’m too stubborn to know when to quit something so here I am lol.
What is grade?
Speed, as I understand it, is the physical speed the drive spins (RPM), and is directly proportional (?) to the read/write speed.
How does a beginner know which is which? What should I look for, and how do I know if it’s a good investment or overkill for a home setup?
DownloadThemAll seems to be helping. I’ll update the original post with the details once I have success. In this case, I first started the downloads normally in the browser, then copied each download link and added it to DtA using the link. Someone smarter than me will be able to explain why the extra step was necessary, or how to avoid it.
I couldn’t get it working, but I didn’t try too hard. I may give it another shot. I’m trying a different approach right now.
Yeah, that introduces an issue of queuing and monitoring dozens of downloads rather than just a few. I had similar results.
As my family is continuing to add photos over the week, I see no way to verify that previously downloaded parts are identical to the same parts in another takeout. If that makes sense.
Thanks, I’ll give it a shot. The download links are a little weird due to the Google authentication, so they can only be used from a logged-in account.
This route may be the answer. I haven’t had success so far setting up a download manager that offers any real improvement over the browser. I wanted to avoid having my photos on two corporate services, but as you say, in theory everything is deletable.
Great, any suggestions?
It could absolutely be worse. The main problem is the lack of flexibility: if I could ask for an extension after downloading 80% of the files over a week, for example, that would be helpful. I’m also beginning to suspect that they cap the download speed, because I’m seeing similar speeds on my home and work networks…
It’s not the speed - it’s the interruptions. If I could guarantee an uninterrupted download for 12 hours, then I could do it over the course of 3-4 days. I’m looking into some of the download management tools that people here have suggested.
The part that is Google’s fault is that they limit the number of download attempts and the files expire after 1 week. That should be clear from the post.
I used it for my music collection not that long ago and had no issues. The family’s photo library is an order of magnitude larger, so it’s putting me up against some limitations I didn’t run into before.
Well then, read it as “shitty rural internet.” Use context clues.
Thank you! The goal is to set up immich. It’s my first real foray into self-hosting, and it seems close enough to feature parity with Google Photos that the family will go for it. I ran a test with my local photos and it works great, so this is the next step.
Looked promising until:
When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on “Google Photos” as a backup of your photos. You will not be able to use rclone to redownload original images. You could use ‘google takeout’ to recover the original photos as a last resort
I’m not really looking forward to that step either
I’m currently about halfway through setting up a home server on an old refurbished Dell PC. It has enough compute to transcode if needed, but not much more. I’ll have to upgrade the storage to set up RAID. For software, I’m running Xubuntu, which offers Ubuntu’s great community and documentation while being a bit simpler and lighter than GNOME. It’s very beginner friendly. I’m running everything I can as Docker containers.
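For anyone wondering what “everything I can as Docker containers” looks like in practice, here’s a minimal docker-compose sketch of the pattern. To be clear, the service name, image, ports, and paths below are placeholders I made up for illustration, not any project’s official compose file (immich ships its own):

```yaml
# Hypothetical example -- names, image, ports, and paths are placeholders.
services:
  someapp:
    image: example/selfhosted-app:latest   # placeholder image
    restart: unless-stopped                # come back up after reboots/crashes
    ports:
      - "8080:8080"                        # host:container
    volumes:
      - /srv/someapp/config:/config        # config survives container rebuilds
      - /srv/photos:/photos:ro             # media mounted read-only
```

The appeal for a beginner is that each app stays in its own box: the host OS stays clean, and `docker compose up -d` / `docker compose down` recreate or remove a service without touching the data in the bind-mounted volumes.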