• 0 Posts
  • 36 Comments
Joined 1 year ago
Cake day: June 1st, 2023





  • I’m far from an expert, but I don’t know of rclone doing versioning or continuous sync the way Syncthing does. I also haven’t used Proton, so take my thoughts with a grain of salt.

    Stage 1

    Run rclone config to set up the Proton remote. rclone config will take you through a wizard and eventually ask you to authenticate with the remote. Once that’s done and saved, you’ll exit the wizard and be back at the command line.

    Then you would run a test command like: rclone ls remotename:

    If it worked, you should see a list of files/folders on Proton. If not, you’ll have to go back to rclone config and edit the remote to fix whatever went wrong.
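
    A minimal sketch of that first test, assuming the remote was named "proton" in the wizard (use whatever name you actually chose):

        # List all files on the remote named "proton"
        rclone ls proton:

        # Or list just the top-level directories, which is faster on big remotes
        rclone lsd proton: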

    Stage 2

    Test out copying the folders with a command something like: rclone copy /path/to/local/folder remotename:remotepath

    Do some testing to get the hang of the command, but it is pretty straightforward.
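
    For example, with made-up paths and the assumed remote name "proton":

        # Preview what would be transferred, without copying anything
        rclone copy ~/Documents proton:backups/Documents --dry-run

        # Then do it for real, with progress output
        rclone copy ~/Documents proton:backups/Documents --progress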

    Stage 3

    I don’t know how many files or how big the files are, but I assume not too many and not too big. I also don’t know which version of Linux you have, but I assume you have access to systemd, cron, or both.

    You’ll make a basic shell script that runs the command you practiced in stage 2. Easy peasy, put it in a text file with a shebang at the beginning, make it executable, and give it a go. It should run exactly how it did when you typed the command out manually.
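
    As a sketch, with placeholder paths and the assumed remote name "proton":

        #!/bin/sh
        # proton-backup.sh - copy local folders to Proton with rclone.
        # Assumes a remote named "proton" was set up with rclone config.
        rclone copy /home/you/Documents proton:backups/Documents --log-file "$HOME/proton-backup.log"
        rclone copy /home/you/Pictures proton:backups/Pictures --log-file "$HOME/proton-backup.log"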

    Finally, you will write a systemd timer or a cron/crontab entry to execute that script at some frequency.
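
    For example, the cron route is a single crontab -e entry, and the systemd route is a small pair of user units. Both sketches below assume the script lives at /home/you/bin/proton-backup.sh and run it every 6 hours:

        # crontab entry
        0 */6 * * * /home/you/bin/proton-backup.sh

        # ~/.config/systemd/user/proton-backup.service
        [Unit]
        Description=Copy local folders to Proton with rclone

        [Service]
        Type=oneshot
        ExecStart=/home/you/bin/proton-backup.sh

        # ~/.config/systemd/user/proton-backup.timer
        [Unit]
        Description=Run proton-backup.service every 6 hours

        [Timer]
        OnCalendar=*-*-* 00/6:00:00
        Persistent=true

        [Install]
        WantedBy=timers.target

        # enable with: systemctl --user enable --now proton-backup.timer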

    So just to summarize:

    1. Set up the Proton remote in rclone using rclone config
    2. Test out copying files to Proton through rclone
    3. Write a basic shell script that runs the command to copy files from the desired local folders to the desired Proton folders.
    4. Use one of the Linux tools for scheduling script execution (cron or a systemd timer) to run your copy-to-Proton script as frequently as makes sense to you.




  • I did back-of-the-envelope math a while back: if the active users of large self-hosted communities (such as /r/selfhosted) put $5/month into donations to the most-used self-hosted software projects, ~20 projects could each get ~$75,000/year of income. Not enough to build a company around, but enough to live well on in many parts of the world as a sole developer, or enough for a maintainer to pay for other developers’ contributions.

    I think the open source community fails to organize around the fact that development and maintenance aren’t free, and that as a massive user group, it takes only a minimal contribution from each of us to make an impact. Can better messaging and “structure” break the free-rider problem?






  • Nothing gets solved overnight. I realize that F-Droid isn’t the be-all and end-all, but hey, be the change you want to see in the world. Maybe Aurora is a stopgap, but maximize your use of F-Droid alternatives and support the developers however you can. Be active in alternatives to the big-tech sites, like posting in Beehaw communities.

    Poaching something I posted on Reddit /r/selfhosted at the beginning of the year:

    Back-of-the-envelope math: assuming 30,000 active users here on /r/selfhosted x $50/yr = $1.5M, which split across 20 projects is $75,000 per year each. In other words, if /r/selfhosted gave $50 per person per year, “we” could contribute $1.5M to open source projects we use. Some projects probably wouldn’t know what to do with the resources and/or don’t have the infrastructure in place to receive anything, so it’s not a panacea. But for the well-organized and well-developed projects?

    It’s sort of wild if you think about it. There are probably 10-12 very popular self-hosted applications with a very long tail behind them, but 20-25 probably captures a very healthy cross-section of use. Not every project or developer can accept monetary donations or use them effectively. But $75,000/year is roughly the median household income in the US.

    There are almost certainly many more open source software and app users than there are self-hosters - I’d imagine the self-hosting crowd is a subset. So what if we add open source software and mobile apps to the collective pool we could financially contribute to - again, at $50/year per able user - maybe the number of supportable applications goes up to 50 or 80.

    Taking the thought experiment to its logical conclusion: if 80 open source projects each received $75,000/year in donation income - at a minor cost to those able to pay and none to the vast majority - enough in most parts of the world to support a person and possibly a family, we would have many more amazing, privacy-respecting options. It doesn’t solve everything: most people naturally free-ride, and organizing many small contributions at massive scale isn’t a solved problem itself. But my point is that users collectively have far more power than we realize.






  • I would note that ChromeOS is mainstream with normal users, and it is effectively a well-curated, highly-opinionated Linux distribution. Distros like openSUSE Aeon and Kalpa, and Fedora Silverblue, are taking well-established platforms in that same highly-curated, highly-opinionated direction. They offer a limited set of options that work well out of the box, aren’t prone to breaking, and are explicitly not for tinkerers. I tend to think that if Linux is ever going to reach mainstream users (outside of ChromeOS), it will be through these bulletproof, opinionated distros that put bubble wrap around the user.


  • I have a server at home, so this might not apply to you. But I used this software (https://github.com/bluenviron/mediamtx) to proxy the RTSP streams from my cameras. It makes the streams available in multiple formats, so I stick the WebRTC stream from each security camera into a little, super-simple HTML page I threw together. Bookmarked and on my Android home screen, I’m one click from seeing all of my cameras streaming while I’m on the LAN.
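
    For reference, the mediamtx side is just a config file that lists each camera as a path; the names and RTSP URLs below are made up. With default settings, mediamtx then serves a WebRTC player for each path at http://yourserver:8889/<pathname>, which is what my HTML page points at:

        # mediamtx.yml (sketch, placeholder cameras)
        paths:
          cam-front:
            source: rtsp://user:pass@192.168.1.50:554/stream1
          cam-back:
            source: rtsp://user:pass@192.168.1.51:554/stream1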

    Then I wrote a simple bash script that calls ffmpeg to take a snapshot from each RTSP stream at a set interval. I reworked the landing page to show a table of those snapshots, where each image is a hyperlink to the direct WebRTC stream for that camera, and the HTML page refreshes itself every X seconds.
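
    The snapshot script is just a loop over the streams. A sketch, with made-up camera names and output paths, pulling from mediamtx’s RTSP output on its default port 8554:

        #!/bin/bash
        # Grab one frame per camera and overwrite the snapshot that the
        # landing page displays. Paths and URLs are placeholders.
        OUT=/var/www/html/snapshots
        for cam in cam-front cam-back; do
            ffmpeg -y -rtsp_transport tcp -i "rtsp://127.0.0.1:8554/$cam" \
                   -frames:v 1 "$OUT/$cam.jpg"
        done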

    I’m happy with this approach so far. The streams are now easily available on Android, Windows, and Linux devices, with no app beyond a web browser required.

    What I plan to do next:

    1. Make the web server and proxied streams available over my mesh VPN so that the landing page and cameras are reachable from outside the LAN.

    2. Start throwing images at DOODS (https://github.com/snowzach/doods2) to identify objects, then pass the detection and image to a messenger like XMPP, Matrix, Telegram, or even an IRC channel to push alerts to me (rough sketch below).
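
    I haven’t built that part yet, but my understanding from the doods2 README is that it exposes an HTTP detect endpoint that takes a base64-encoded image, so the glue could be as small as this (endpoint, port, and JSON field names are my assumptions, not tested):

        #!/bin/bash
        # Hypothetical sketch: send one snapshot to a local doods2 instance
        # and print any "person" detections above 50% confidence.
        IMG=$(base64 -w0 /var/www/html/snapshots/cam-front.jpg)
        curl -s http://127.0.0.1:8080/detect \
             -H 'Content-Type: application/json' \
             -d "{\"detector_name\":\"default\",\"data\":\"$IMG\",\"detect\":{\"person\":50}}"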