Migration is seamless. Uninstall one binary and install the other.
Part of it might be that I’m often having similar arguments with the team I run about introducing dependencies.
Engineers have a tendency to want to use the perfect tool for a job at the expense of other concerns: ease of maintenance, availability of the skill-set, user experience, or whatever. If there's pushback, it's normally because they are putting their own priorities above other people's equally valid concerns.
Often I'm telling people to step back. Stop pushing, listen to the resistance, and learn from it. Maybe I'm on a bit of a crusade when I see similar situations in open source.
I think for Python tooling the choice is Python vs Rust. C isn't really in the mix.
people like and want to program in rust
I think there's a survivorship bias going on here. Those who have tried Rust and stuck with it naturally like it. Far more people in the Python community haven't tried it, or have tried it and not stuck with it. I like and want to program in Haskell; I'm not going to write Python tools in it, because the community won't appreciate it.
Tools should be maintained by those who use them. Python doesn't want to rely on the portion of the Venn diagram who are both Rust and Python users, because that pool of people is much smaller.
Those languages bring different things though:
Python is the language the tool is for
C is the implementation language of Python and is always going to be there.
Cython is a very similar language to Python and designed to be very familiar to Python writers.
Fortran is the language that BLAS and similar libraries have historically been implemented in since the 70s. Nobody in the Python community has to write Fortran today; those libraries are wrapped.
Rust is none of the above. Bringing it into the mix adds a new barrier.
I don’t think it’s a dream of “everything in python”, but “python tools for python development”. It means users of the language can contribute to the tooling.
…and people worry about the name of a git branch.
OP was listing different LSP servers such as jedi, pyright, etc. All of those things should really integrate with a single server.
Sounds like things are going very wrong in lsp land. The point of a language server is to support lots of types of tools through an abstracted server. Not to have one server per tool.
Otherwise, just use flycheck. It can even combine information from multiple tools at once.
Oh, “incident post-mortem” was ambiguous. I read “Incident that happened after death” not “analysis after incident”.
I thought OP had a necrophiliac blowjob fantasy.
I have a lot of respect for this project. I lurk on the discussion forum and issues and I’ve always seen mature discussion even though the project was born out of issues which could have been quite emotive.
It’s also a lot nicer to run than any other git forge that I’ve had experience with.
Do you think a style guide is enough for an open source code base? Contributions could be coming from lots of directions, and the code review process to enforce a style guide is going to be a lot of work. Even rejecting something takes time.
Furthermore there are many changes to NumPy internals, including continuing to migrate code from C to C++, that will make it easier to improve and maintain NumPy in the future.
I realise that C can be rather low level a lot of the time, but I’m not sure I’d pick C++ to help keep things easy to maintain. It opens up a Pandora’s box of possibilities.
Does it have higher-order functions? Yes, therefore you can use it to do functional programming.
Everything else is syntactic sugar.
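A quick sketch of that claim in Python (the helper names here are mine, purely illustrative, not any library's API): with first-class, higher-order functions alone you get the usual functional idioms, and constructs like comprehensions are just sugar over them.

```python
# Functional style only needs functions you can pass around and return.

def compose(f, g):
    """Return a new function that applies g, then f."""
    return lambda x: f(g(x))

square = lambda n: n * n
increment = lambda n: n + 1

# map/filter are just higher-order functions over iterables.
squares_of_evens = list(map(square, filter(lambda n: n % 2 == 0, range(6))))
print(squares_of_evens)  # [0, 4, 16]

inc_then_square = compose(square, increment)
print(inc_then_square(3))  # 16
```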
…or the research is flawed. Gender identity was inferred from social media accounts, so maybe it's a general bias against social media users (half joking).
Comparing base model to base model, I think Cascade is quite a lot better than SDXL, but, and it's an enormous but, it seems to have been shunned by the community.
Maybe nobody with the resources to do training is interested in a model with commercial restrictions, or the multi-model flow was just too different for people. Not sure, but the output of the base model can be really nice. Not always, but I find the biggest errors are people taking on a painterly/waxy appearance, rather than the arm-turns-into-a-leg body horror you can get with SDXL. I think the "compressed" Stage C works to keep the composition together across the whole image more.
That’s a big fucking “if”
Does this mean that, because you're now liberated from the dimensions of the training data, all training data will apply to all sizes? E.g. generated portrait images will be influenced by landscape training data.
total_armageddon = launch_nuclear_missile <$> [1..]
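The gag hinges on laziness: mapping over an infinite list builds a description of the computation without running anything. A rough Python analog using lazy iterators (the function names are just part of the joke, not any real API):

```python
import itertools

def launch_nuclear_missile(n):
    # Stand-in for the side-effecting action in the Haskell joke.
    raise RuntimeError(f"missile {n} launched")

# Lazily map over an infinite sequence: nothing is launched yet.
total_armageddon = map(launch_nuclear_missile, itertools.count(1))

# Armageddon only starts if someone forces evaluation:
# next(total_armageddon)  # would raise RuntimeError
```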
Soft forks try to keep the code compatible so that changes can be applied to both code bases. It's normally done when there's hope of a future merging of the code lines. They rarely work, as eventually things get hard.