Mossy Feathers (They/Them)

  • 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: July 20th, 2023


  • RollerCoaster Tycoon 1 and 2; Need for Speed 2 and 3; SimCity 3k.

    Also, check your monitor properties. Afaik most CRT monitors (not TVs; those run at 60 Hz or 50 Hz depending on region) are meant to run at 75~85 Hz. If it’s running at 60 Hz when it’s meant to run at a higher refresh rate, that might be why it’s nauseating (my CRT has a very noticeable flicker at 60 Hz, but that goes away at 75 Hz).

    Edit: to expand on this for any late-comers: CRTs work by using an electron gun (aka a particle accelerator, aka a motherfucking PARTICLE CANNON) to fire an electron beam at red, green and blue phosphors. When the beam hits a phosphor, the phosphor emits light in its color. The beam sweeps over the phosphors at a speed dictated by the display’s refresh rate, illuminating them one by one until it has covered the entire screen. This is why taking a picture or video of a CRT requires you to sync your shutter speed with the CRT’s refresh rate; if your shutter isn’t synced, the monitor will appear to be strobing or flickering (because it is, just very, very quickly). There’s a quick back-of-the-envelope timing sketch at the end of this comment.

    These phosphors have a set glow duration, which varies based on the intended refresh rate. A refresh rate that’s too low lets the phosphors dim before the electron beam comes back around to them (hence the flicker), while a refresh rate that’s too high can cause ghosting, smearing, etc., because the phosphors haven’t had a chance to “cool off”. TVs are designed to run at 60 Hz/50 Hz depending on the region, so their phosphors have a longer glow duration to eliminate flickering at that refresh rate. Computer monitors, on the other hand, were high-quality tubes typically geared for 75 Hz and up. The result is that if you run them at 60 Hz, you’ll get flickering because the phosphors have a shorter glow duration than a TV’s.

    Note: this is a place where LCD/LED panels solidly beat CRTs, because they can refresh the image without de-illuminating the panel, avoiding flicker at low refresh rates.

    Edit 2: oh! Also, use game consoles with CRT TVs, not computer monitors. Old consoles, especially pre-3D consoles, “cheated” on sprites and took advantage of the low resolution and scanline blur of a standard CRT TV to blend pixels. The result is that you may actually lose detail if you play them on a CRT computer monitor or a modern display. That’s why a lot of older sprite-based games unironically look better on a real CRT TV or with a decent CRT filter in an emulator.
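    If it helps, here’s a rough back-of-the-envelope sketch (Python) of the timing involved. The numbers are purely illustrative - nothing here is a measured phosphor spec - but it shows how much longer the beam takes to come back around at 60 Hz versus 75+ Hz, and why a camera shutter only looks clean when its exposure covers a whole number of refresh periods.

        # Rough timing numbers for the refresh-rate / shutter discussion above.
        # Purely illustrative; none of these values are measured phosphor specs.

        def frame_period_ms(refresh_hz: float) -> float:
            """Time between passes of the electron beam over any given phosphor."""
            return 1000.0 / refresh_hz

        for hz in (50, 60, 75, 85):
            print(f"{hz} Hz -> beam returns every {frame_period_ms(hz):.1f} ms")

        # A camera avoids visible banding only if its exposure spans a whole
        # number of refresh periods (exposure time = n / refresh_hz).
        def shutter_is_synced(exposure_s: float, refresh_hz: float, tol: float = 0.01) -> bool:
            cycles = exposure_s * refresh_hz
            return round(cycles) >= 1 and abs(cycles - round(cycles)) < tol

        print(shutter_is_synced(1 / 60, 60))   # True: exactly one full refresh per exposure
        print(shutter_is_synced(1 / 100, 60))  # False: 0.6 of a refresh -> partial scan captured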


  • I’m… honestly kinda okay with it crashing. It’d suck, because AI has a lot of potential outside of generative tasks, like science and medicine. However, we don’t really have the corporate ethics or morals for it, nor do we have the economic structure for it.

    AI at our current stage is guaranteed to cause problems even when used responsibly, because its entire goal is to do human tasks better than a human can. No matter how hard you try to avoid it, even if you do your best to think carefully and hire humans whenever possible, AI will end up replacing human jobs. What’s the point in hiring a bunch of people with a hyper-specialized understanding of a specific scientific field if an AI can do their work faster and better? If I’m not mistaken, normally having some form of hyper-specialization would be advantageous for the scientist because it means they can demand more for their expertise (so long as it’s paired with a general understanding of other fields).

    However, if you have to choose between 5 hyper-specialized and potentially expensive human scientists, or an AI designed to do the hyper-specialized task with 2~3 human generalists to design the input and interpret the output, which do you go with?

    So long as the output is the same or similar, the no-brainer would be to go with the 2~3 generalists and AI; it would require less funding and possibly less equipment - and that’s ignoring that, from what I’ve seen, AI tends to be better than human scientists in hyper-specialized tasks (though you still need scientists to design the input and parse the output). As such, you’re basically guaranteed to replace humans with AI.

    We just don’t have the society for that. We should be moving in that direction, but we’re not even close to being there yet. So, again, as much potential as AI has, I’m kinda okay if it crashes. There aren’t enough people who possess a brain capable of handling an AI-dominated world yet. There are too many people who see things like money, government, economics, etc as some kind of magical force of nature and not as human-made systems which only exist because we let them.


  • I love playing with new technologies. I wish graphics card prices had stayed down, because RT is too heavy nowadays for my first-gen RT card. I play newer games with RT off and most settings turned down because of it.

    I also wish they’d stayed down because VR has the potential to bring back CrossFire/SLI. Nvidia’s GameWorks already has support for using two GPUs to render different eyes, and supposedly, when properly implemented, it results in a nearly 2x increase in fps (rough numbers in the sketch at the end of this comment). However, GPUs are way too expensive right now for people to buy two of them, so afaik there aren’t any VR games that support splitting rendering between two GPUs.

    VR games could be a hell of a lot cooler if having two GPUs were widely affordable and developers built for them, but instead VR is being held back by single-GPU performance.
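    For what it’s worth, here’s a toy calculation (Python) of where the “nearly 2x” comes from. The millisecond figures are made up for illustration (they’re not official Nvidia benchmarks); the point is just that the per-eye GPU work gets split across two cards while the serial CPU/sync work doesn’t.

        # Toy math behind the "nearly 2x" claim above. The millisecond figures
        # are made up for illustration; they're not official Nvidia benchmarks.

        def fps(frame_time_ms: float) -> float:
            return 1000.0 / frame_time_ms

        per_eye_render_ms = 8.0  # GPU work that can be split, one eye per GPU
        serial_ms = 2.0          # CPU submit, sync, compositing, etc. that can't be split

        single_gpu = serial_ms + 2 * per_eye_render_ms  # one GPU renders both eyes
        dual_gpu = serial_ms + per_eye_render_ms        # each GPU renders one eye in parallel

        print(f"single GPU: {fps(single_gpu):.0f} fps")     # ~56 fps
        print(f"dual GPU:   {fps(dual_gpu):.0f} fps")       # 100 fps
        print(f"speedup:    {single_gpu / dual_gpu:.2f}x")  # 1.80x, i.e. "nearly 2x"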


  • Not gonna lie, raytracing is cooler on older games than it is on newer ones. Newer games use a lot of smoke and mirrors (baked lighting, screen-space reflections, and so on) to fake what raytracing does for real, which means raytracing isn’t as obvious an upgrade, and can even be a downgrade depending on the scene. Older games, however, don’t have as much smoke and mirrors, so raytracing can offer more of an improvement.

    Also, stylized games with raytracing are 10/10. Idk why, but applying RTX to highly stylized games always looks way cooler than applying it to games with realistic graphics.


  • Not exactly. Maxis was with EA for a long time; both SimCity 3k and SimCity 4 were published under them, and The Sims would never have happened if it weren’t for EA.

    However, you’re right that EA (or more specifically, John Riccitiello) did eventually fuck Maxis over. SimCity 2013 got fucked over by always-online DRM and tiny cities, while The Sims 4 - which wasn’t originally meant to be a mainline Sims game but rather a successor to The Sims Online - got cannibalized into a mainline entry because Riccitiello wanted a new Sims game ASAP.

    Why do I ascribe blame to Riccitiello and not EA as a whole? Because EA seems to have improved since Riccitiello left. That’s not saying much, but the quality of their games honestly does seem better since he left the company.


  • This. I recently hooked my Steam Deck up to a CRT (I’ve been playing a lot of games that were made for a 4:3 ratio). All it takes is an active adapter (in my case, active HDMI-to-VGA) and setting the Deck to output a 4:3 aspect ratio.

    That said, if that was a smart TV then OP would probably have a lot more issues. My parents have a (Samsung) smart TV and I’ve had nothing but headaches trying to get my deck to work with that, and they’re both modern devices. Some days it takes one try, some days it takes five tries, some days I just give up.


  • Thanks! I’ve been looking harder at Linux, but the thing that’s holding me back is that I’m not sure how well the modeling and texturing tools I use will run on Linux, and dual-booting is a headache.

    Have you ever tried running Windows in a VM, and if so, how well does it run? The only reason I’m considering this is that I’ve heard some VM tools can do hardware passthrough to significantly increase VM performance. If the stuff I need to run works on Windows in a VM, then I might do that.

    Edit: you might check Cyberpunk again; I’ve heard the new update has it performing significantly better on Linux than on Windows.