

That would depend on the network environment. If your VM is on a /28 subnet and you set a /24, it won’t be valid.
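A quick way to see the mismatch, as a minimal Python sketch (the address ranges are made-up examples):

import ipaddress

# The subnet the VM actually sits on: a /28 with 16 addresses (assumed example range)
real_subnet = ipaddress.ip_network("192.0.2.16/28")

# A VM misconfigured with a /24 believes its on-link network is much bigger
vm_view = ipaddress.ip_network("192.0.2.0/24")

print(real_subnet.subnet_of(vm_view))  # True
print(real_subnet.num_addresses)       # 16
print(vm_view.num_addresses)           # 256

# The VM will try to reach the other 240 addresses directly (via ARP)
# instead of sending them to the gateway, so that traffic silently fails.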
that’s just how they are made.
Can confirm: even the little training compiler we made at uni for a subset of Java (Javali) had a frontend and a backend.
I can’t imagine trying to spit out machine code while parsing the input without an intermediary AST stage. It was complicated enough with the proper split.
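To make that split concrete, here is a toy sketch in Python (not the actual Javali compiler, just an illustration of why the intermediary AST helps): the frontend’s only job is to produce the tree, and the backend runs once the tree is complete.

# Frontend output: a tiny AST for expressions like 1 + 2 * 3
class Num:
    def __init__(self, value):
        self.value = value

class BinOp:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

# Backend: walk the finished tree and emit stack-machine code.
# Both operands must be emitted before their operator, which is easy
# on a complete tree but awkward to arrange while still parsing.
def emit(node):
    if isinstance(node, Num):
        print(f"PUSH {node.value}")
    else:
        emit(node.left)
        emit(node.right)
        print(node.op.upper())

emit(BinOp("add", Num(1), BinOp("mul", Num(2), Num(3))))
# PUSH 1 / PUSH 2 / PUSH 3 / MUL / ADD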
All these naysayers in the comments here… It’s obvious you have to keep the development pipeline moving. Just because one free codec has now reached the stage of hardware support does not mean the world can stop. There are always multiple codecs out there at various stages of adoption; that’s just normal.
Not sure about the encoding
Right click on video -> Stats for Nerds
And Eiffel is in a different plane of existence entirely
The original goes up to 1024 x 768 and has different zoom levels. I’ve played most of the original parks this year and that does not seem too high-res to me. Give me a sec, I’ll take a screenshot of mine in a minute.
Edit: here it is. It’s the GOG version, which launches fullscreen, so the 1024 x 768 image is stretched onto the center of my 1920 x 1080 screen.
As @shane@feddit.nl says, you can use the same public port for many different destination addresses; vendors may call it something like “port overloading”.
I just responded to him on that point, while you were typing to me. I didn’t know this existed, thanks for pointing it out!
More importantly, you can install a large pool of public addresses on your CGNAT. For instance, if you install a /20 pool (4,096 addresses) and multiplex 100 users per public address, you can serve around 400,000 users on that CGNAT. 100 users per address is a comfortable ratio that will not affect most users. 1,000 users per address would be pushing it, but I’m sure some ISP will try it.
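The back-of-the-envelope math, as a quick Python sketch (the /20 pool and the ratios are just the example numbers from above):

# A /20 pool contains 2^(32-20) public addresses
addresses = 2 ** (32 - 20)     # 4096

print(addresses * 100)         # 409600 users at the comfortable ratio
print(addresses * 1000)        # 4096000 users at the "pushing it" ratio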
Sure, yeah, I have seen a few threads on NANOG about the NAT address ratios people are using. I also remember someone saying he was forced to use 1,000 and it kind of worked, as long as he pulled the heaviest users out of the pool. But if I recall correctly, he also said he made IPv6 available in parallel to reduce the CGNAT load.
But the point that made this post ridiculous and an obvious joke is that it said “one address” :-)
A TCP session is a unique combination of client IP, client port, server IP, and server port. So you can use the same IP and port as long as the destination is a different IP or port.
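Put differently, a NAT’s session table is keyed by the full 4-tuple, so two mappings can share the same public source IP and port as long as the destination differs. A minimal sketch with made-up addresses:

# Session table keyed by (public_ip, public_port, dst_ip, dst_port)
sessions = set()

# Two subscribers mapped to the SAME public IP and source port...
sessions.add(("198.51.100.1", 40000, "93.184.216.34", 443))
sessions.add(("198.51.100.1", 40000, "203.0.113.7", 443))

# ...remain distinguishable, because the destinations differ
print(len(sessions))  # 2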
Fair point! I wasn’t aware of any NAT working that way, but they could exist, I agree. It does blow up the session table a bit, but we are talking about a hell of a large theoretical system here anyway, so it’s not impossible.
This wouldn’t help with popular destinations, since a lot of people are going to the same IP address and port, but for many (most?) of those you probably have some sort of CDN servers in your data centers anyway.
Actually, we have recently seen a few content providers not upgrading their cache servers, preferring instead to fall back to our PNIs (which, to be fair, are plenty fast and have good enough latencies). On the other hand, others have made new cache servers available recently. It seems there isn’t a universal best strategy the industry is converging on at the moment.
Funny how many here took this to be real, judging from the reactions. To me it’s an obvious joke.
Question to you guys: How do you suppose 200 million customers will share the fewer than 65,536 ports that are available on that one address?
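Just to spell the absurdity out, a one-liner’s worth of Python:

customers = 200_000_000
ports = 65_536            # total ports on one address; the usable ones are even fewer
print(customers / ports)  # ~3051.8 customers contending for every single port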
2022-12-28 is actually about 2.6 years ago.
Did you misread? They wrote “the only reason left to boot from a DVD”, so the use case you replied with has nothing to do with the topic.
This week I heard from a network group lead at a university hospital that they have a similar issue. Some medical devices come with control computers that can’t be upgraded, because they were only certified for medical use with the specific software they shipped with.
They just isolate those devices as much as possible on the network; there isn’t much else to do when there is no official support and no recertification path for upgrades. And of course nobody wants to spend half a million on a new imaging device when the old one is still fine except for the OS of the control computer.
Sounds like a shitty place to be, I pity those guys.
That said, if you were talking about normal client computers then it’s inexcusable.
Pre-UEFI they were fighting over the boot sector, sure, but now that everything is better defined and every OS can read the FAT32 ESP? Never seen it…
At worst the UEFI boot entry is replaced. There are some really shitty UEFI implementations out there which only want to load \EFI\Microsoft\Boot\bootmgfw.efi or the fallback \EFI\Boot\bootx64.efi, or keep resetting you back to those.
Assuming you were suddenly dumped into Windows, you can check whether you still have the necessary boot entries with bcdedit and its firmware option:
bcdedit /enum firmware
If you just have a broken order you can fix it with
bcdedit /set {fwbootmgr} displayorder {<GUID>} /addfirst
If you actually need a new entry for Linux, it’s a bit more annoying: you need to copy one of the Windows entries and then modify it.
bcdedit /copy {<GUID1>} /d "Fedora"
bcdedit /set {<GUID2>} path \EFI\FEDORA\SHIM.EFI
bcdedit /set {fwbootmgr} displayorder {<GUID2>} /addfirst
Where GUID1 is a suitable entry from Windows, and GUID2 is the one the copy command returns as the identifier of the new entry. Of course you will have to adjust the description and the path according to your distro and where it puts its shim or the GRUB EFI binary, depending on which you’d like to start.
Edit: Using DiskGenius might be a little more comfortable.
Actually Solaris is still squirming while the first shovels of dirt are being heaped on.
For me things actually became easier when I got myself a native Linux install instead of Windows. But I guess it depends on your college.
Okay, that’s fair. I fricked around with some C++ numerics/BLAS header library (I think it was Eigen) on Linux before, and that was complicated and annoying too. The ARM Fast Models simulator was also a pain. Maybe I just don’t like C++ development, now that I think about it.
C mostly worked okay for me though.
Hm. I’ve always found it harder to compile stuff on Windows.
The size difference is not significant. This is about the maintenance burden. When you need to change some of the code where CPU-architecture-specific things happen, you always have to consider what to do with the code paths or compiler flags that concern 486 CPUs.
Here is the announcement by the maintainer Ingo Molnar where he lists some of the things he can now remove and stop worrying about: https://lore.kernel.org/lkml/20250425084216.3913608-1-mingo@kernel.org/
It’s quite cruel of that compiler to not be happy until you’re exhausted.
Oh! Okay, that’s interesting to me! What was the input language? I imagine it might be a little more doable if it’s closer to hardware?
I don’t remember that well, but I think the object oriented stuff with dynamic dispatch was hard to deal with.