Funny, I tweaked my Linux PC at work to look like Windows XP. It’s so cursed, I love it.
Y’know what? I’m gonna be even more of a furry now, just to spite you.
I haven’t accidentally deleted a bunch of data yet (which, considering 99% of my interaction with Linux is when I’m SSH’d into a user’s server, I am very paranoid about not doing), but I have run fsck on a volume without mounting the read/write flashcache with dirty blocks on it first.
Oops.
Nah, I got that, you’re all Gucci
Yeah, as someone in a tech job whose primary function is “parsing and interpreting logs” sometimes even the repeated flood of seemingly useless logs can be helpful. If nothing else, they explain why there aren’t any useful logs and that can guide how I respond to the problem.
Oh that’s easy! I have this friendly multi-page PDF that assumes you have an Active Directory domain already (god rest your soul if you’re raw-dogging Kerberos and LDAP) that walks you through the instructions step by step and…
mount.nfs4: access denied by server
I mean, even a cursory Google search shows that smart TVs can gather a hell of a lot more data than just that, up to and including analyzing the actual video being displayed to figure out what you’re watching.
Smart TVs will collect your personal info and viewing habits and send it to the manufacturer if they’re given half a chance.
Some scummy brands will even configure their TVs to automatically and silently connect to open wifi networks to phone home.
When IT folks say devs don’t know about hardware, in my experience they’re usually talking about the forest-level overview: stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality. It may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment.

Similarly, a program may run fine on an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning the way flash media is when it dies. Then, once the program is in production, it turns out it’s making a bunch of random I/O calls that could have been optimized into more sequential requests or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it.

And that’s not even accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
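To make the batching point concrete, here’s a minimal hypothetical sketch in Python using stdlib SQLite (the table and data are made up for illustration): per-row commits force a flush per write, which on a spinning-disk array means a seek per write, while wrapping the same inserts in one transaction turns thousands of scattered random writes into a single sequential burst.

```python
# Sketch: batched writes vs. one-transaction-per-row.
# Uses an in-memory SQLite DB so it runs anywhere; names are illustrative.
import sqlite3

rows = [(i, f"event-{i}") for i in range(10_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# Anti-pattern: one transaction (and one flush/seek) per row.
# for row in rows:
#     conn.execute("INSERT INTO events VALUES (?, ?)", row)
#     conn.commit()

# Batched: same data, one transaction, one flush.
with conn:  # opens a transaction, commits on clean exit
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 10000
```

The logical result is identical either way; the difference only shows up at the storage layer, which is exactly why it slips past testing on a local NVMe workstation.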
Game dev is unique because you’re explicitly targeting a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into devs much more heavily than in business software dev, especially in-house dev. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else, like performance, security, cleanliness, and resource optimization, is given bare lip service at best.