• 2 Posts
  • 78 Comments
Joined 1 year ago
Cake day: July 6th, 2023








  • Okay. Do you want to debug your situation?

    What’s the operating system of the host? What’s the hardware in the host?

    What’s the operating system in the client? What’s the hardware in the client?

    What does the network look like between the two, including every piece of cable and every switch?

    Do you get an acceptable experience if you’re streaming just a single monitor instead of multiple monitors?


  • Remember, the original poster here was talking about running their own self-hosted GPU VM, so they’re not paying anybody else for the privilege of using their hardware.

    I personally stream with Moonlight on my own network. I have no issues; from my perspective it’s just like being at the computer.

    If it doesn’t work for you, fair enough, but it can work for other people, and I think the original poster’s idea makes sense. They should absolutely run a GPU VM cluster and have fun with it; it would be totally usable.


  • Fair enough. If you know it doesn’t work for your use case, that’s fine.

    As demonstrated elsewhere in this discussion, GPU HEVC encoding only adds about 10ms of latency, and the encoded stream can then transit fiber optic networking at very low latency.

    Many GPUs have HEVC decoders on board, as do cell phones. Most newer Intel and AMD CPUs have a hardware HEVC decode pipeline as well.
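
    To put rough numbers on the whole pipeline: only the ~10ms encode figure comes from this discussion; the other stage timings in the sketch below are illustrative assumptions for a wired local network, not measurements.

```python
# Back-of-the-envelope game-streaming latency budget (milliseconds).
# Only the ~10 ms encode figure is quoted from the discussion above;
# the remaining numbers are illustrative assumptions for a local network.
budget_ms = {
    "capture + HEVC encode (GPU)": 10.0,            # figure from the discussion
    "network transit (local fiber/ethernet)": 1.0,  # assumption
    "HEVC decode on the client (GPU/ASIC)": 5.0,    # assumption
    "render + display": 8.0,                        # assumption
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:42s} {ms:5.1f} ms")
print(f"{'total added latency':42s} {total:5.1f} ms")  # ~1.5 frames at 60 Hz
```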

    I don’t think anybody’s saying a self-hosted GPU VM is for everybody, but it does make sense for a lot of use cases. And that’s where I think our schism is coming from.


    As far as the $2,000 fiber transducer goes… it’s doing the exact same thing, just with more specialized equipment and maybe a little lower latency.




  • jet@hackertalks.com to Selfhosted@lemmy.world · Fully Virtualized Gaming Server? (edited; 16 days ago)

    Yes, for some definition of ‘low latency’.

    GeForce Now, shadow.tech, and Luna all demonstrate that this is done at scale every day.

    Do your own VM hosting in your own datacenter and you can knock off 10-30ms of latency.

    However you define low latency, there is a way to iteratively approach it at different cost points. As technology marches on, more and more use cases are going to be ‘good enough’ for virtualization.

    Quite frankly, if you have an all-optical network, being 1m away or 30km away doesn’t matter.
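
    A quick sanity check on that claim; the ~200,000 km/s figure for light in glass fiber is standard, and the distances are just example values.

```python
# Propagation delay in optical fiber: light in glass travels at
# roughly 200,000 km/s (about two thirds of c), i.e. ~200 km per ms.
FIBER_KM_PER_MS = 200.0

for distance_km in (0.001, 0.03, 1.0, 30.0):  # 1 m, 30 m, 1 km, 30 km
    delay_ms = distance_km / FIBER_KM_PER_MS
    print(f"{distance_km:>7} km one way -> {delay_ms:.4f} ms")
# Even 30 km is only ~0.15 ms - negligible next to encode/decode time.
```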

    Just so we are clear, local isn’t always the clear winner: there are limits on how much power, cooling, noise, storage, and size people find acceptable in their work environment. So every application weighs some tradeoff function between all-local and distributed.






  • jet@hackertalks.com to datahoarder@lemmy.ml · Renewed drives (edited; 21 days ago)

    What is a renewed drive? Do they have a datasheet with MTBF defined?

    Spinning disks, or consumable flash?

    What is the use case? RAID 5? Ceph? JBOD?

    What is your human capital cost of monitoring and replacing bad disks?

    Let’s say you have a data lake with Ceph or something: it costs you $2-5 a month to monitor all your disks for errors and predictive failures, debug slow I/O, etc. The human cost of identifying a bad disk, pulling it, replacing it, then destroying it is something like 15-30 minutes. The cost of destroying a drive is $5-50 (depending on your vendor, onsite destruction, etc.).

    The higher predicted failure rate of “used” drives has to be weighed against your fixed costs and human costs. If the drive only lasts 70% as long as a new drive, then the math is fairly easy.
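
    As a sketch of that math (all prices, lifetimes, and handling costs below are made-up example numbers, not figures from this thread):

```python
# Toy cost model: cost per drive-year of service, including the
# monitoring/handling/destruction overhead described above.
# Every number here is an illustrative assumption.

def cost_per_drive_year(price, expected_years, handling=25.0, destruction=20.0):
    """Purchase price plus fixed human/handling costs, amortized per year."""
    return (price + handling + destruction) / expected_years

new_drive  = cost_per_drive_year(price=200.0, expected_years=5.0)
used_drive = cost_per_drive_year(price=90.0,  expected_years=5.0 * 0.7)  # 70% lifetime

print(f"new:  ${new_drive:.2f} per drive-year")
print(f"used: ${used_drive:.2f} per drive-year")
# The used drive only wins if its discount outweighs the extra churn
# (handling and destruction costs recur more often per year of service).
```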

    If the drive gets progressively slower (e.g. older SSDs), then the actual cost of the used drive becomes more difficult to model (you need a metric for service responsiveness, etc.).

    • If it’s a hobby project and you’re throwing drives into a self-healing system, then take any near-free disks you can get and just watch your power bill.

    • If you make money from this, or the downside of losing data is severe, then factor the higher failure rate into your cost model.