  • I recently upgraded three of my Proxmox hosts with SSDs to make use of Ceph. While researching I faced the same question: everyone said you need an enterprise SSD, or Ceph would eat it alive. The feature that apparently matters most in my case is Power Loss Protection (PLP). It’s not even primarily about surviving an outage: a drive with PLP can safely acknowledge sync writes straight from its cache, while a consumer drive has to flush each one to flash, so Ceph’s constant sync writes slow to a crawl.
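
    If you want to see the effect before buying anything, fio can benchmark exactly this pattern. A minimal sketch, assuming a scratch file path of your choosing (the path below is made up, and the test writes real data):

        # 4k random writes with an fsync after every write,
        # roughly the kind of load Ceph puts on its WAL
        fio --name=sync-write-test \
            --filename=/mnt/scratch/fio.test \
            --rw=randwrite --bs=4k --fsync=1 \
            --size=1G --runtime=60 --time_based \
            --iodepth=1 --numjobs=1

    A PLP drive will typically sustain thousands of these IOPS, while consumer drives often drop to a few hundred.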

    There are SSDs marketed for data center use; these are generally enterprise models. They are often classified as “Mixed Use” (read and write) or “Read Intensive”. Other interesting metrics are Drive Writes Per Day (DWPD) and, obviously, TBW (terabytes written) and IOPS. As a rough conversion: a 960 GB drive rated for 1 DWPD over a 5-year warranty works out to about 1.75 PB TBW.

    In the end I went with used Samsung PM883s.

    But before you fall down this rabbit hole, you might check whether you really need an enterprise SSD. If all you’re doing is running a few VMs in a homelab, I would expect consumer SSDs to work just fine.

  • Yeah, the quality is really good. It’s also not cheap. I bought this case mostly because it’s rather shallow and fit into my previous server rack.

    I’m now at a point where I should buy another drive cage, but I’m a bit hesitant to spend 150€ on it. Well…

    Edit: Any reason you decided to go with a non-server mainboard without IPMI and ECC support?

  • “I’ve been working in IT for about six or seven years now and I’ve been selfhosting for about five. And in all this time, in my work environment or at home, I’ve never bothered with backups.”

    That really is quite a confession to make, especially in a professional context. But good for you to finally come around!

    I can’t really recommend a solution with a GUI, but I can tell you a bit about how I back up my homelab. Like you, I have a Proxmox cluster with several VMs and a NAS. I’ve mounted some storage from my NAS into Proxmox via NFS, and that’s where I let Proxmox store backups of all VMs.
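
    In case the NFS part is useful to anyone: it can be set up in the GUI or with pvesm. A sketch with made-up names, IP and paths:

        # register an NFS export from the NAS as backup storage in Proxmox
        pvesm add nfs nas-backup \
            --server 192.168.1.20 \
            --export /mnt/tank/proxmox-backups \
            --content backup

    The storage then shows up as a backup target, and vzdump jobs can be scheduled against it under Datacenter → Backup.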

    On my NAS I use restic to back up to two targets: an offsite NAS, which receives full backups, and Wasabi S3 for the stuff I really don’t want to lose. I like restic a lot and found it rather easy to use, coming from borg/borgmatic. It supports many different storage backends and multithreading (looking at you, borg).
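
    Stripped down, my setup is not much more than the following; repo URLs, paths and the retention policy are placeholders, and each repo needs a one-time restic init first:

        # full backup to the offsite NAS over sftp
        export RESTIC_PASSWORD_FILE=/root/.restic-password
        restic -r sftp:backup@offsite-nas:/backups/homelab backup /mnt/tank

        # the critical subset additionally goes to Wasabi S3
        export AWS_ACCESS_KEY_ID=...
        export AWS_SECRET_ACCESS_KEY=...
        restic -r s3:s3.wasabisys.com/my-backup-bucket backup /mnt/tank/important

        # expire old snapshots (run against both repos; shown for one)
        restic -r sftp:backup@offsite-nas:/backups/homelab forget \
            --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune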

    I run TrueNAS, so I make use of ZFS snapshots too. This way I have multiple layers of defense against data loss, with varying restore times for different scenarios.
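
    The snapshot side is handled by TrueNAS periodic snapshot tasks, but under the hood it boils down to plain ZFS; the dataset name here is made up:

        # recursive, dated snapshot of a dataset tree
        zfs snapshot -r tank/data@auto-$(date +%F)

        # inspect what exists, e.g. for rollbacks or per-file restores
        zfs list -t snapshot -r tank/data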

  • One simple way to pull the new image into your cluster is to overwrite the latest tag, specify imagePullPolicy: Always in your deployment, and then run kubectl rollout restart deployment my-static-site from within your pipeline. Kubernetes will then replace the pods in a rolling fashion, and each new pod pulls the fresh image.
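
    Roughly like this, assuming a deployment of this shape (the names follow your my-static-site example; replicas and labels are filler):

        # deployment.yaml: pinned to :latest, with a forced pull on every pod start
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-static-site
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: my-static-site
          template:
            metadata:
              labels:
                app: my-static-site
            spec:
              containers:
              - name: site
                image: my/image:latest
                imagePullPolicy: Always

    And in the pipeline, once the new :latest image is pushed:

        kubectl rollout restart deployment my-static-site
        kubectl rollout status deployment my-static-site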

    You can also work with versioned tags and kubectl set image deployment/my-static-site site=my/image:version. This might be a bit nicer and allows imagePullPolicy: IfNotPresent, but you have to get the version number into your pipeline somehow, e.g. from git tags.
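
    A sketch of that variant with GitLab CI, where CI_COMMIT_TAG is the built-in variable set for tag pipelines (substitute whatever your CI system provides):

        # build and push an image named after the git tag
        docker build -t my/image:"$CI_COMMIT_TAG" .
        docker push my/image:"$CI_COMMIT_TAG"

        # point the deployment at the new version and wait for the rollout
        kubectl set image deployment/my-static-site site=my/image:"$CI_COMMIT_TAG"
        kubectl rollout status deployment/my-static-site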

  • Those are probably some of the most complex and sophisticated machines humanity has ever built. Each one costs around €300 million, consists of around 100,000 parts, weighs almost 200 tons, and needs dozens of specialized suppliers to be built.

    Good luck reverse engineering something like that.

    Even if you had the exact blueprints and enough knowledge, creating a functional production chain would be extremely hard, take years, and cost billions.