• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • I wanted to implement something like that with my 1920R UPS for my rack, but I haven’t found the time to commit to such antiquated hardware.

    It was enough of a hassle dealing with the expired SSL certs on the management card, let alone getting software running on one of my machines to communicate with the UPS.

    Honestly, you should just bypass Dell’s management software and use NUT (Network UPS Tools). It supports your UPS’s management card if you enable SNMP, or you can bypass the card altogether and just run off the USB/serial connection.
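    For reference, a minimal ups.conf sketch for the SNMP route (the section names, IP, and community string here are placeholders; adjust them to match your card):

    ```
    # /etc/nut/ups.conf (hypothetical entry for the 1920R's management card)
    [dell1920r]
        driver = snmp-ups
        port = 192.168.1.50      # IP of the management card (placeholder)
        community = public       # community string configured on the card
        snmp_version = v1

    # Or skip the card entirely and run off the USB port:
    [dell1920r-usb]
        driver = usbhid-ups
        port = auto
    ```

    Once the driver is up, `upsc dell1920r` should dump the UPS status, and upsmon can handle the shutdown logic from there.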

    All things considered, my two servers chew through around 60 W on average while idling, not taking into account my PoE cameras or other devices. The UPS should run for over a day without getting close to draining its batteries (it has a half-populated EBM too).

    I’m pretty surprised I can run my whole network, with three switches and a handful of PoE devices, for an hour off my 1500 VA UPS. I’m still thinking about replacing it with a rack-mount unit so I can lock it inside my rack, as I’ve been having issues with unauthorized people messing with it.




  • In the US I can confirm both GE (freight and passenger) and Siemens passenger locomotives run Linux. Some passenger trainsets/cars still run embedded XP.

    Pretty much all locomotives running out there today have a plethora of computers for managing fuel economy, brakes, and positive train control (rules compliance). Fun fact: Union Pacific’s 4014 “Big Boy” steam engine was fitted with Wabtec’s I-ETMS PTC, which runs Linux, so there’s literally a steam-powered locomotive running Linux.




  • Sorta… If the array was built with hybrid ZFS within Unraid, which is what the majority of Unraid users go with since it allows mixing drives of various sizes and makes later expansion of the array easier (in other words, the main selling points of Unraid), then you do not get any of the safety nets ZFS provides. What Unraid is essentially doing is making a single-drive ZFS vdev out of each drive in the array. In Unraid’s own words: “ZFS-formatted disks in Unraid’s array do not offer inherent self-healing protection.”
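    To illustrate the difference (a sketch with placeholder device and pool names):

    ```
    # Roughly what Unraid does per array disk: a lone single-drive vdev.
    # Checksums still detect corruption, but there is no second copy to
    # repair from.
    zpool create lonepool /dev/sdb

    # A mirrored vdev is what gives ZFS its self-healing: bad blocks get
    # rewritten from the good copy automatically.
    zpool create mirrorpool mirror /dev/sdc /dev/sdd
    zpool scrub mirrorpool
    zpool status -v mirrorpool   # reports (and repairs) checksum errors
    ```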


  • FYI, you probably shouldn’t say you feel really comfortable with your data safety while suggesting Unraid. The way Unraid handles its storage will lead to data loss at some point: it only locks down an array and protects it once SMART starts issuing warnings that a drive has failed. SMART isn’t magic, though, and a dying drive can write garbage data for days, if not weeks, before SMART catches on. If a drive writes garbage for long enough, there’s nothing you can do to fix it, due to the way Unraid handles arrays. This is why ZFS is such a popular option: it treats hard drives with a level of skepticism, verifying that data was actually written correctly and re-verifying it from time to time.
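    That ongoing verification is just a scrub, e.g. (pool name is a placeholder):

    ```
    # Read every block in the pool, compare it against its checksum, and
    # repair from redundancy where possible:
    zpool scrub tank
    zpool status tank   # shows scrub progress and any errors found
    ```

    Most setups run this on a monthly timer.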

    That’s not even mentioning that Unraid charges for what other software does for free.






  • There are some great testimonials from exit-node operators out there. Basically, it’s only a matter of time before some form of police knocks on your door and asks questions, but depending on your jurisdiction you won’t be liable for the traffic.

    Your IP will also be added to most spam blocklists, which (unjustly) pull in the master list of Tor exit nodes, so do not use your home internet connection to host an exit node.
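    If you do run one (from a datacenter IP, not at home), a reduced exit policy in torrc cuts down on abuse complaints. A sketch using standard torrc options (nickname and contact are placeholders):

    ```
    # /etc/tor/torrc
    Nickname myExitRelay             # placeholder
    ContactInfo admin@example.org    # placeholder
    ExitRelay 1
    # Allow common web traffic, reject everything else:
    ExitPolicy accept *:80
    ExitPolicy accept *:443
    ExitPolicy reject *:*
    ```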




  • I’ve spent the last two weeks getting a k3s cluster working and I’ve had nothing but problems, but it has been a great catalyst for learning new tools like Ansible and load balancers. I finally got the cluster working last night. If anyone else is having weird issues with the cluster timing out: etcd needs fast storage. Moving my VMs from spinning rust to a cheap SSD fixed all my problems.
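    If you want to test a disk before tearing things apart, the commonly cited fio benchmark for etcd mimics its write pattern; the directory here is an assumption, so point it at wherever your etcd data actually lives:

    ```
    # etcd commits every write with fdatasync and wants the 99th percentile
    # fdatasync latency under roughly 10 ms:
    fio --name=etcd-bench --rw=write --ioengine=sync --fdatasync=1 \
        --directory=/var/lib/rancher/k3s --size=22m --bs=2300
    # Check the fsync/fdatasync latency percentiles in the output.
    ```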


  • The classic choice would be Fractal Design’s Node 304, which fits six 3.5" drives in an ITX form factor. There’s also SilverStone’s CS381, which, while larger, can fit eight hot-swappable 3.5" drives and a micro-ATX motherboard.

    Even if you go ITX you don’t have to feel limited by the lack of PCIe slots. Since M.2 uses the PCIe protocol, it’s very easy to adapt it to your needs, such as breaking it out into an additional PCIe x4 slot. There are even M.2 10GbE cards in both Intel and Realtek flavors.
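    A quick way to sanity-check that an M.2-adapted card actually negotiated its lanes (the device address is hypothetical; find yours with plain `lspci` first):

    ```
    # Compare what the card supports (LnkCap) with what it negotiated (LnkSta):
    sudo lspci -vv -s 04:00.0 | grep -E 'LnkCap:|LnkSta:'
    # Illustrative output, not a real capture:
    #   LnkCap: Speed 8GT/s, Width x4
    #   LnkSta: Speed 8GT/s, Width x4
    ```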

    Side question: which Coral TPU do you have? It was my understanding that they use M.2, mini PCIe, or USB, not a full-size PCIe slot.