

The fact that it recommends popular stuff is a useful add-on feature; it's a good way to see what others are watching.


It's a big problem. I also dump projects that don't automatically migrate their own SQLite schemas and instead require manual intervention. That is a terrible way to treat the customer: just update the file. Separate databases always run into versioning issues at some point, require manual intervention and data migration, and waste a massive amount of the user's time.
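For anyone building one of these apps, automatic migration doesn't have to be complicated. A minimal sketch using SQLite's built-in `user_version` pragma; the table and columns here are made-up examples, not from any particular project:

```python
import sqlite3

# Ordered list of schema changes; each entry bumps user_version by one.
MIGRATIONS = [
    "CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)",
    "ALTER TABLE items ADD COLUMN watched INTEGER DEFAULT 0",
]

def migrate(conn: sqlite3.Connection) -> None:
    """Apply any migrations the database file hasn't seen yet."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, sql in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(sql)
        conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # brings a fresh or old database up to the latest schema
```

Running `migrate()` on startup means an old database file is silently upgraded in place, which is exactly the "just update the file" behaviour: no separate database server, no manual steps.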


31 containers in all. I have been up as high as ~60 and have pared it back, removing the things I wasn't using.
I also tend to remove anything that uses appreciable CPU at idle, and I rarely run applications that require further containers in a stack just to boot; my needs aren't that heavy.


I reject a lot of apps whose docker compose file pulls in a database, caching infrastructure, etc. All I need is the process, and they ought to use SQLite by default because my needs are never going to exceed its capabilities. A lot of these self-hosted apps are overbuilt and ship with missing or poor defaults, causing a lot of extra work to deploy them.


On the one hand they were talking about self-hosting, and then they pull out multiple rack servers costing tens of thousands of dollars. People don't need a data centre at home to sync some files, pictures and email, and play some media!


Everyone always says XMPP, and there were a lot of recommendations for ejabberd. I tried it recently and it was a total disaster; I do not have a working chat server. If I followed the Docker instructions, the server would just crash with no details of what went wrong. Where it should have been creating a default server config file, it was instead creating a directory with the wrong permissions and then promptly crashing. I tried following their documentation, but after about 6 hours of messing about and adding more and more configuration I still couldn't get a client to log in. I have no idea how to make this work.
So whatever the solution ultimately is, I can't recommend ejabberd.


Most technology adoption follows an S-curve; it can often take a long time to get going. Linux has been gradually and steadily improving, especially for games and other desktop uses, while at the same time Microsoft has been making Windows worse. I feel this is more Microsoft's fault: they have abandoned the development of desktop Windows and the advancement of support for modern processor designs and gaming hardware. This has, for the first time, let Linux catch up and in many cases exceed Windows's capabilities, especially in gaming, which has always been a stubborn issue. It's still a problem, especially in hardware support for VR and other peripherals, but that's the sort of thing that might sort itself out once the user base grows and companies start producing software for Linux instead.
It might not be enough, but the end of Windows 10 support is causing a shift that Microsoft might really regret in a few years.


Initially a lot of AI was trained on lower-class GPUs, and none of these AI-specific cards/blades existed. The problem is that the models are quite large and hence require a lot of VRAM to work on, or you split them up and pay enormous latency penalties going across the network. Putting it all into one giant package costs a lot more, but it also performs a lot better, because AI is not an embarrassingly parallel problem that can be easily split across many GPUs without penalty. So the goal is often to reduce the number of GPUs you need to get a result quickly enough, and that brings its own set of problems of power density in server racks.


It's low power that is still making small ARM computers popular. It's impossible to get a PC down into the 2-5 watt power consumption range, and over time it's the electrical costs that add up. I would suggest the RPi 5 is not the thing to get, because it's expensive for what it is and more performance is available from other options supported by Armbian.
I use a 5600G on a B450 ITX board with 4x 8TB Seagate drives and see about 35W idle and about 40W average. It used to be 45W because I was forced to use a GPU in addition to a 3600 just to boot (even though it's headless; just a bad BIOS setup that I can't fix), and getting a CPU with integrated graphics dropped my idle consumption quite a bit. I suspect the extra wattage for your machine is probably the bigger motherboard and the less efficient CPU.
It is possible to get the machine part down into single-digit wattage, and then about 5W a drive is the floor without spinning them down, so the minimum you could likely see with a much less powerful CPU is about 30-35W.
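To put numbers on why idle watts matter, here is a quick back-of-the-envelope calculation; the $0.30/kWh price is just an assumed example rate, substitute your own tariff:

```python
def annual_cost(watts: float, price_per_kwh: float = 0.30) -> float:
    """Yearly electricity cost of a device drawing a constant load 24/7."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

# A 40W home server vs a 5W ARM board, both running around the clock.
print(round(annual_cost(40), 2))  # 105.12
print(round(annual_cost(5), 2))   # 13.14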
Make sure none of the exceptions are ticked and the Minimum number of articles to keep per feed is also 25 or below. Then its up to the cron when that runs so you might have to manually purge it and optimise the database to see what it will actually keep.
I can’t say I have ever worried about it, been running FreshRSS for years and it seems to keep its database size in check fairly well and the defaults have worked fine for me and it rarely gets above 100MB. So I know it “loosely” works in that old articles are absolutely getting purged in time but have no idea how strictly it follows these rules.
Everyone has given Linux answers, its also worth knowing quite a lot of UEFI’s contain the ability to secure erase as well. There are a number of USB bootable disk management tools that can do secure erase as well.


The DMZ for the ISPs router forward to the second router, then everything that hits your outside IP will be forwarded to router 2. Then on Router 2 you open the ports for your service and forward to the internal machine. That should all work fine.


Its quite complicated to setup as well, just went through the instructions and its a long way from just add to docker and run unfortunately. Would be nice to be able to just get a runner in the same or different docker and it just works easily without a lot of manual setup in Linux of directories and users and pipes etc.


I did the same move from contabo to Netcup. Contabo I had all sorts of weird bandwidth limiting problems that I couldn’t explain and which the continued to deny they were throttling. Netcup worked perfectly.


The problem is the information asymmetry, there is always another person for a fraudulent company to exploit due to a dysfunctionally expensive court system. Its why we need market level regulations and public institutions that recover peoples money and fine the organisations for their breaches. This sort of thing works a lot better in the EU than in the US due to the sales laws, the ability to return within 2 weeks, default warranty on goods out to 12 months and expectations of goods to be as advertised forced onto the retailers. They work, they need more enforcement from regulatory bodies but retailers do follow them for the most part and quickly change tune when you go to take legal action when they don’t because courts know these laws inside and out.


Mine are only 25k hours or so, around 3 years. My prior set of disks had a single failure at 6 years but I replaced them all and went to bigger capacity. There is also the power saving aspect of going down to 2 drives as well, it definitely saves some power not spinning 4 extra drives all the time.


I sub to channels and use Youtubes recommendations and new for you to find additional channels etc but I don’t watch them I use Metube and a browser plugin and download the videos to a directory. I don’t get all the privacy but I also am not giving them much watch data and I can avoid the ads.
I still had some issues with the mouse speed on cachyos even after I disabled acceleration. I felt off on its default and I ended up boosting it. Thing is my mouse has its speed inbuilt so I don’t need external software or anything else to configure it on Linux so I don’t understand why I had to boost the speed to make it behave a bit better, it felt like there was some latency as well.