Found the spreadsheet https://goo.gl/z8nt3A
And the source: https://www.hardwareluxx.de/community/threads/die-sparsamsten-systeme-30w-idle.1007101/
Still, you can calculate how much you would actually save from a 2 W power reduction before selling this one and buying a different NAS (rough sketch below).
You can also reduce the disk spin-down timeout to 5-15 minutes after the last access for better power saving.
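As a sketch of that calculation (the electricity price here is an assumption, plug in your local rate):

```python
# Back-of-the-envelope: yearly savings from a 2 W idle-power reduction
# on a NAS that runs 24/7. The price per kWh is an assumed value.
power_saved_w = 2
hours_per_year = 24 * 365
price_per_kwh = 0.30  # assumed EUR/kWh, adjust for your region

kwh_saved = power_saved_w * hours_per_year / 1000
print(f"{kwh_saved:.1f} kWh/year saved, about {kwh_saved * price_per_kwh:.2f} EUR/year")
# ~17.5 kWh/year, i.e. roughly 5 EUR/year at 0.30 EUR/kWh
```

At a few euros per year, the difference usually doesn't justify selling and rebuying hardware.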
Maybe you are looking at the wrong thing. The idle states of the CPU and motherboard controllers matter more than spun-down HDDs.
I saw a spreadsheet somewhere listing a lot of CPU + motherboard combinations with their idle power consumption, aimed at ultra-low-energy NAS optimisation.
This actually seems interesting. The training data will be archived on a blockchain, so tech bros can throw money at the training costs and artists could get some stake in the model.
Both sides are against free commercial use of the model, as they want the money, so we will see.
There are many areas where this could help artists immensely, since certain steps could be automated, like auto-shading or automatic picture-to-vector conversion.
A modem translates the fiber or DSL signal into Ethernet over twisted-pair cable.
An access point translates twisted-pair Ethernet into WiFi.
I think you are looking for an all-in-one router.
For AI/ML workloads, VRAM is king.
Since you are starting out, something older with lots of VRAM would be better than something faster with less VRAM at the same price.
The RTX 4060 Ti is a good baseline to compare against, as it has a 16 GB variant.
The “minimum” VRAM for ML is around 10 GB, and the more the better; less VRAM can be usable, but with sacrifices in speed and quality (rough sketch below).
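To get a feel for why ~10 GB is a practical floor, here is a rough sketch of how much VRAM model weights alone take at different precisions (illustrative numbers, ignoring activations and framework overhead):

```python
# Rough VRAM needed just for the model weights.
# Ignores activations, KV cache and framework overhead, which add more on top.
def weights_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, params, bpp in [
    ("7B model, fp16", 7, 2),
    ("7B model, 4-bit quantized", 7, 0.5),
    ("13B model, 4-bit quantized", 13, 0.5),
]:
    print(f"{name}: ~{weights_vram_gb(params, bpp):.1f} GB for weights alone")
```

A 7B model at fp16 already needs ~13 GB just for the weights, which is why cards with less VRAM push you towards quantization or smaller models.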
If you still like that stuff in a couple of months, you could sell the GPU you buy now and swap it for a 4090 Super.
For AMD the situation is confusing, as there is no official ROCm support for mid-range GPUs on Linux, but someone said it works anyway.
There is also the new ZLUDA, which enables running CUDA workloads on ROCm:
https://www.xda-developers.com/nvidia-cuda-amd-zluda/
I don’t have enough info to recommend AMD cards.
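If you do end up testing a card, one quick sanity check (a sketch, assuming a PyTorch install; the ROCm build of PyTorch reuses the torch.cuda API, so the same check covers both vendors):

```python
import torch

# Works with both the CUDA and ROCm builds of PyTorch:
# the ROCm build exposes the GPU through the torch.cuda namespace.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM via {backend}")
else:
    print("No supported GPU visible to PyTorch")
```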
Every 5 minutes the graphical stack crashed for me.
https://lemmy.ml/post/10284661
No clear reason stated.
I think a short event or campaign pushing for donations, with a pop-up that you can actually dismiss, like an ad-style banner. The biggest problem would be community organization, since Lemmy isn’t only decentralized horizontally but also vertically: different front ends, different apps, different instances. Most of them wouldn’t want to implement an ad that doesn’t benefit them directly, and they also have costs from running their piece of Lemmy, so some cut for them should be included.
I think a dedicated, trustworthy person should be responsible for organizing this campaign, as developer time is best spent elsewhere.
If you want to feel like a real hackerman, you could try to doctor the website with inspect element.
Can’t go wrong with “black hole foundry”
I think compute per watt and idle power consumption matter more than raw maximum compute power (illustrative sketch below).
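To illustrate with made-up numbers (both machines and the 5% busy fraction are assumptions), the yearly energy of a mostly idle box is dominated by its idle draw:

```python
# Illustrative only: two hypothetical machines and an assumed duty cycle.
hours_per_year = 24 * 365
busy_fraction = 0.05  # assumed: under load 5% of the time

machines = {
    "fast but power-hungry": {"idle_w": 40, "load_w": 200},
    "slower but efficient":  {"idle_w": 8,  "load_w": 60},
}

for name, m in machines.items():
    avg_w = m["idle_w"] * (1 - busy_fraction) + m["load_w"] * busy_fraction
    print(f"{name}: ~{avg_w * hours_per_year / 1000:.0f} kWh/year")
```

With those numbers the slower, efficient machine uses roughly a quarter of the energy per year, even though its peak compute is far lower.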