I’m not the OP, but you can get 8 TiB SSDs. They’re spendy, but doable, and no spinning disks are required. The benefit of a NAS-based solution is that you can put in a bunch of cheap SSDs.
Not directly an answer, but the CRT guy has a series of industrial computers for different environments, which could provide inspiration.
Some of them have direct DC inputs, some have anti-vibration designs, some have massive passive cooling!
The Little Guys series: https://www.youtube.com/watch?v=EP3aKEG79DM&list=PLec1d3OBbZ8LGjvbb0GQwlQxWXmI2PA88
I think a Synology box would work for you, or a TrueNAS design: you could just build out one of their motherboards in your own ITX case. These are good, robust, anti-vibration, mobile low-power CPUs, with hardware selected for robustness and minimal heat. Stick it in a cupboard and forget about it; they run containers and VMs.
IPv6
Depending on your gateway, you may be able to override the DNS settings for a few domains that you use internally
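As a sketch, assuming your gateway (or a Pi-hole in front of it) runs dnsmasq, the per-domain override looks something like this; the file path and addresses are assumptions you’d adapt to your setup:

```
# /etc/dnsmasq.d/internal.conf (path varies by router/distro)

# Answer queries for an internal domain with a fixed address:
address=/nas.home.lan/192.168.1.10

# Or forward only that domain to an internal DNS server:
server=/corp.example/192.168.1.53
```

Most consumer router firmwares (OpenWrt, many ISP boxes) expose some equivalent of these two directives in their DNS settings.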
Watching the video.
“Source first”, meaning source-available: that’s what they do, and it’s a good term.
I do like the discussion and the motivation illustrated.
I now have a better feel for how FUTO is trying to do open-source capitalism.
IPMI interface?
Wow. Where is all this hate coming from?
People like to experiment, and tinker, and try things in their home lab that would scale up in a business, just to prove they can do it. That’s innovation. We should celebrate it, not quash people.
Okay. Do you want to debug your situation?
What’s the operating system of the host? What’s the hardware in the host?
What’s the operating system in the client? What’s the hardware in the client?
What does the network look like between the two, including every cable and switch?
Do you get a sufficient experience if you’re just streaming a single monitor instead of multiple monitors?
Remember, the original poster here was talking about running their own self-hosted GPU VM, so they’re not paying anybody else for the privilege of using their hardware.
I personally stream with Moonlight on my own network and have no issues; from my perspective it’s just like being at the computer.
If it doesn’t work for you, fair enough, but it can work for other people, and I think the original poster’s idea makes sense. They should absolutely run a GPU VM cluster, have fun with it, and it would be totally usable.
Fair enough. If you know it doesn’t work for your use case that’s fine.
As demonstrated elsewhere in this discussion, GPU HEVC encoding only adds about 10 ms of latency, and the stream can then transit fiber-optic networking at very low latency.
Many GPUs have HEVC decoders on board, including the ones in cell phones. Most newer Intel and AMD CPUs have a hardware HEVC decode pipeline as well.
I don’t think anybody’s saying a self-hosted GPU VM is for everybody, but it does make sense for a lot of use cases. And that’s where I think our schism is coming from.
As far as the $2,000 transducer-to-fiber goes, it’s doing the same exact thing, just with more specialized equipment and maybe a little lower latency.
Can you define what acceptable latency would be?
Local network ping (as on corporate networks): 1-2 ms
Encoding and decoding delay: 10-15 ms
So about ~20 ms of latency total.
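That budget can be sketched as simple arithmetic; the component values below are just the rough figures quoted above, not measurements:

```python
# Rough remote-desktop latency budget, using the ballpark figures above.
lan_ping_ms = 2        # local/corporate network round trip (1-2 ms)
encode_decode_ms = 15  # GPU HEVC encode + client decode (10-15 ms)

total_ms = lan_ping_ms + encode_decode_ms
print(f"~{total_ms} ms end to end")  # in line with the ~20 ms estimate
```

Anything under roughly a frame time at 60 Hz (16.7 ms) per stage tends to feel local, which is why the hardware encode/decode path matters so much.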
Real world example
Fiber isn’t some exotic, never-seen technology; it’s everywhere nowadays.
Moonlight literally does what you want, today, using HEVC encoding straight on the GPU.
Try it out on your own network now.
Yes, for some definition of ‘low latency’.
GeForce Now, Shadow.tech, and Luna all demonstrate that this is done at scale every day.
Do your own VM hosting in your own datacenter and you can knock 10-30 ms off the latency.
However you define low latency, there is a way to approach it iteratively at different cost points. As technology marches on, more and more use cases will become ‘good enough’ for virtualization.
Quite frankly, if you have an all-optical network, being 1 m away or 30 km away doesn’t matter.
Just so we are clear, local isn’t always the clear winner: there are limits on how much power, cooling, noise, storage, and size people find acceptable in their work environment. So every application weighs some tradeoff function between all-local and distributed.
100% ^^^ This.
You could do everything with OpenStack, and it would be a great learning experience, but expect to dedicate about 30% of your life to running and managing it. When it just works, it’s great… when it doesn’t… oh boy, it’s like a CRPG that unlocks your hardware only after you finish the adventure.
This is a terrible idea. No, really.
Any system that shares power and grounds (i.e. components on the same bus) should be kept on the same power supply/domain.
Even if (!!) it doesn’t fry your computer when one power system goes off while the other stays on, the system will absolutely not be stable and will behave in unexpected ways.
DO NOT DO THIS.
It’s not satire if it’s what most people do by default :)
Developers are responsible for their own testing.
Test coverage and end-to-end tests will be assigned to someone no longer at the company, or on vacation.
What is a renewed drive? Do they have a datasheet with MTBF defined?
Spinning disks, or consumable flash?
What is the use case? RAID 5? Ceph? JBOD?
What is your human capital cost of monitoring and replacing bad disks?
Let’s say you have a data lake with Ceph or something. It costs you $2-5 a month to monitor all your disks for errors, predictive failures, slow I/O, and so on. The human cost of identifying a bad disk, pulling it, replacing it, and then destroying it is something like 15-30 minutes. The cost of destroying a drive is $5-50 (depending on your vendor, on-site destruction, etc.).
A higher predicted failure rate for “used” drives has to factor in your fixed costs and human costs. If the drive only lasts 70% as long as a new drive, the math is fairly easy.
If the drive gets progressively slower (as older SSDs do), then the actual cost of the used drive becomes harder to model (you need a metric for service responsiveness, etc.).
If it’s a hobby project and you’re throwing drives into a self-healing system, then take any near-free disks you can get and just watch your power bill.
If you make money from this, or the downside of losing data is bad, then model the higher failure rate into your cost model.
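A minimal sketch of that math; every price, capacity, and lifetime below is a made-up assumption for illustration, not a quote:

```python
def cost_per_tb_year(price_usd: float, capacity_tb: float, lifetime_years: float,
                     replace_labor_usd: float = 15.0, destroy_usd: float = 20.0) -> float:
    """Effective cost per TB-year, folding in the fixed human/disposal
    costs that every drive incurs when it eventually fails."""
    total = price_usd + replace_labor_usd + destroy_usd
    return total / (capacity_tb * lifetime_years)

# Hypothetical 16 TB drives: a new one lasting 5 years,
# vs a used one expected to last 70% as long.
new = cost_per_tb_year(price_usd=280, capacity_tb=16, lifetime_years=5.0)
used = cost_per_tb_year(price_usd=120, capacity_tb=16, lifetime_years=5.0 * 0.7)
print(f"new:  ${new:.2f}/TB-year")
print(f"used: ${used:.2f}/TB-year")
```

With these particular numbers the used drive still wins, but the fixed labor and disposal costs eat into the discount as the lifetime shrinks, which is the whole point of modeling it.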
It probably does! Android split screen has been built into AOSP for a very long time.
I believe Android very aggressively swaps apps out of memory and just keeps a very memory-efficient bookmark of the app’s state, to reload it at the position you were at.
If you want to ensure an app keeps working in the background, you might want to split-screen it, keeping your browser open while you do something else.