And those configs are clearly the result of someone else stitching together three different examples from different versions, with some settings that are silently ignored in the latest version or only exist when compiled with special flags.
Here is a basic way to configure the service:
…
But this method has significant drawbacks and probably won’t work for most use cases, so do what works for you.
Seems like a pointless waste of time for a first-party effort, when they could be… Idk, implementing the audio manipulation APIs that Discord relies on, or something.
But I don’t necessarily see direct harm from it either. Just useless.
They did issue a fix: “Buy a new CPU please!”
That’s why they don’t mind the reputation hit. If 1 person swears allegiance to Intel as a result but 2 people buy new AMD chips, they’re still ahead. And people will forget eventually. But AMD won’t forget the Q3 2024 sales figures.
As a developer, my default definition of “slow” is whether it’s slow on my machine. Not ideal, but chimp brain do chimp brain things. My eyes see my own screen all day, not yours.
I use Linux because Hackintosh is a dying platform and it only takes about 800 hours to get it almost as good.
Visual Studio: 😳
I started on MacVim, so I could just use cmd+q. And by the time I used vim on the terminal I knew all about :commands
Least toxic Linux thread
Use netboot.xyz and let us know how it goes. I’ve always been curious.
I’m talking about user interactions, not deployments.
In a monolith with a transactional data store, you can have a nice and clean atomic state transition from one complete, valid state to the next in a single request/response.
With a distributed system, you’ll often have scenarios where the component which receives the initial request can’t guarantee the final state of the system by the time it needs to produce a response.
If it tried to, it would spend most of its effort orchestrating the other components. That would couple them together and be no more useful than a monolith, just with new and exciting failure modes. So really the best it can do is tell the client, “Here’s a token you can use to check back on the state of this operation later.”
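That “check back later” pattern is simple to sketch. Here’s a minimal, hypothetical in-memory version (all names illustrative; a real system would persist the operations table and have downstream components update it asynchronously):

```python
import uuid

# Hypothetical in-memory store. In production this would be a durable
# table that downstream services update as the operation progresses.
operations: dict[str, str] = {}

def start_operation() -> str:
    """Accept the request and hand back a token instead of a final state."""
    token = str(uuid.uuid4())
    operations[token] = "pending"  # the final state isn't known yet
    return token

def check_operation(token: str) -> str:
    """The client polls with the token until the state settles."""
    return operations.get(token, "unknown")

def complete_operation(token: str, state: str) -> None:
    """Called later, e.g. by an event consumer, once the real outcome arrives."""
    operations[token] = state
```

The client calls `start_operation`, polls `check_operation`, and some worker eventually calls `complete_operation` when the rest of the system has caught up.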
And because data is often partitioned between different services, you can end up having partially-applied state changes. This leaves the data in an otherwise-invalid state, which must be accounted for – simply because of an implementation detail, not because it’s semantically meaningful to the client.
In operations that have irreversible or non-idempotent external side-effects, this can be especially difficult to manage. You may want to allow the client to resume from immediately before or after the side-effect if there is a failure later on. Or you may want to schedule the side-effect, from the perspective of an earlier component in the chain, so that it happens even if a middle component fails (like the equivalent of a catch or finally block).
If you try to cut corners by representing these things as special cases where the later components send data back to earlier ones, you end up introducing cycles in the data flow of your microservices. And then you’re in for a world of hurt. It’s better if you can represent it as a finite state machine, from the perspective of some coordinator component that’s not part of the data flow itself. But that’s a ton of work.
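The coordinator-owned state machine can be as small as a transition table. A sketch, with made-up states for a reserve/charge/ship style flow (the point is the shape, not the specific states):

```python
# Hypothetical states and events for a multi-service operation. The
# coordinator owns this table; the services doing the work never call
# each other, they only report events back to the coordinator.
TRANSITIONS: dict[tuple[str, str], str] = {
    ("created", "reserve_ok"): "reserved",
    ("created", "reserve_failed"): "failed",
    ("reserved", "charge_ok"): "charged",
    ("reserved", "charge_failed"): "rolling_back",
    ("rolling_back", "release_ok"): "failed",
    ("charged", "ship_ok"): "done",
}

def advance(state: str, event: str) -> str:
    """Apply an event; reject anything the machine doesn't allow.

    Rejecting unknown (state, event) pairs is what stops duplicate or
    out-of-order events from re-applying a side-effect.
    """
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")
```

Because the coordinator sits outside the data flow, the worker services stay acyclic: they consume commands and emit events, and only the table above decides what happens next.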
It complicates every service that deals with it, and it gets really messy to just manage the data stores to track the state. And if you have queues and batching and throttling and everything else, along with granular permissions… Things can break. And they can break in really horrible ways, like infinitely sending the same data to an external service because the components keep tossing an event back to each other.
There are general patterns – like sagas, distributed transactions, and event-sourcing – which can… kind of ease this problem. But they’re fundamentally limited by the CAP Theorem. And there isn’t a universally-accepted clean way to implement them, so you’re pretty much doing it from scratch each time.
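For the saga pattern specifically, the simplest form is: run steps forward, and on failure run compensating actions for the completed steps in reverse. A bare-bones sketch (names illustrative) that also shows the fundamental limitation:

```python
from typing import Callable

# Each step pairs a forward action with its compensating action.
Step = tuple[Callable[[], None], Callable[[], None]]

def run_saga(steps: list[Step]) -> bool:
    """Run each step; on failure, compensate completed steps in reverse.

    Note the catch: a compensation is itself a *new* action that can fail
    or race with other writers. This approximates a rollback; it is not
    an atomic transaction, which is the CAP trade-off showing through.
    """
    done: list[Callable[[], None]] = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for undo in reversed(done):
                undo()
            return False
        done.append(compensate)
    return True
```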
Don’t get me wrong. Sometimes “Here’s a token to check back later” and modeling interactions as a finite state machine rather than an all-or-nothing is the right call. Some interactions should work that way. But you should build them that way on purpose, not to work around the downsides of a cool buzzword you decided to play around with.
Microservices can be useful, but yeah, working in a codebase where every little function ends up having to make a CAP Theorem trade-off is exhausting, and it creates sooo many weird UX situations.
I’m sure tooling will mature over time to ease the pain of representing in-flight, rolling-back, undone, etc. states across an entire system, but right now it feels like doing reactive programming without observables.
And also just… not everything needs to scale like whoa. And the things that do can scale in different ways: queueing up-front, data replication afterwards, syncing ledgers of CRDTs… Scaling in-flight operations is often the worst option, but it feels familiar, so it’s often the default choice.
Could also be that the HTTP server lied about the content length.
but it comes at the cost of short term agility
Often long-term agility, as well.
Big teams are faster on straightaways. Small teams go through the corners better. Upgrading from a go-kart to a dragster may just send your project 200mph into a wall. Sometimes a go-kart is really what you need.
Reminds me of the throne from Fullmetal Alchemist: Brotherhood.
Pointer moved to Hollywood to become a character star. They had a string of interviews, but it ended in nothing.
And if they settle on they/them pronouns, you could have an inverted non-binary tree.
I agree wholeheartedly, and I think I failed to drive my point all the way home because I was typing on my phone.
I’m not worried that libs like left-pad
will disappear. My comment that many devs will copy-paste stuff for “group by key” instead of bringing in e.g. lodash
was meant to illustrate that devs often fail to find FOSS implementations even when the problem has an unambiguously correct solution with no transitive dependencies.
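For the record, “group by key” really is that kind of problem: one unambiguously correct answer, zero transitive dependencies. A Python equivalent of lodash’s `groupBy` fits in a few lines:

```python
from collections import defaultdict
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")
K = TypeVar("K")

def group_by(items: Iterable[T], key: Callable[[T], K]) -> dict[K, list[T]]:
    """Group items into lists keyed by key(item), preserving input order."""
    groups: dict[K, list[T]] = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return dict(groups)
```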
Frameworks are, of course, the higher-value part of FOSS. But they also require some buy-in, so it’s hard to knock devs for not using them; sometimes there are completely valid reasons for going without.
But here’s the connection: Frameworks are made of many individual features, but they have some unifying abstractions that are shared across these features. If you treat every problem the way you treat “group by key”, and just copy-paste the SO answer for “How do I cache the result of a GET?” over and over again, you may end up with a decent approximation of those individual features, but you’ll lack any unifying abstraction.
Doing that manually, you’ll quickly find it to be so painful that you can’t help but find a framework to help you (assuming it’s not too late to stop painting yourself into a corner). With AI helping you do this? You could probably get much, much farther in your hideous hoard of ad-hoc solutions without feeling the pain that makes you seek out a framework.
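The copy-pasted answer to “How do I cache the result of a GET?” usually looks something like this sketch (the `fetch` callable is a hypothetical stand-in for whatever HTTP client you’re using):

```python
import time
from typing import Callable

def make_cached_get(fetch: Callable[[str], str], ttl: float = 60.0):
    """The ad-hoc answer: a closure holding a dict and a TTL.

    Pasted once, it's fine. Pasted twenty times, each copy grows its own
    invalidation quirks, and that is exactly the missing unifying
    abstraction a framework's caching layer would have provided.
    """
    cache: dict[str, tuple[float, str]] = {}

    def cached_get(url: str) -> str:
        now = time.monotonic()
        hit = cache.get(url)
        if hit is not None and now - hit[0] < ttl:
            return hit[1]  # fresh enough, skip the network
        body = fetch(url)
        cache[url] = (now, body)
        return body

    return cached_get
```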
I did not think there was that much to tightening. I read the whole damn thing.