• 0 Posts
  • 22 Comments
Joined 11 months ago
Cake day: August 8th, 2023

  • You are almost on point here, but seem to be missing the primary point of my work. I work as a researcher at a university, doing more-or-less fundamental research on topics that are relevant to industry.

    As I wrote: We develop our libraries for in-house use, and release them to the public because we know that they are valuable to the industry. If what I do is to be considered “industry subsidies”, then all of higher education is industry subsidies. (You could make the argument that spending taxpayer money to educate skilled workers is effectively subsidising industry.)

    We respond to issues that are related either to bugs that we need to fix for our own use, or features that we ourselves want. We don’t spend time implementing features others want unless they give us funding for some project that we need to implement it for.

    In short: I don’t work for industry, I work in research and education, and the libraries my group develops happen to be of interest to the industry. Most of my co-workers do not publish their code anywhere, because they aren’t interested in spending the time required to turn hacky academic code into a usable library. I do, because I’ve noticed how much time it saves me and my team in the long run to have production-quality libraries that we can build on.


  • You’re not seeing the whole picture: I’m paid by the government to do research, and in doing that research my group develops several libraries that can benefit not only other research groups, but also industry. We license these libraries under MIT, because otherwise industry would be far more hesitant to integrate our libraries with their proprietary production code.

    I’m also an idealist of sorts. The way I see it, I’m developing publicly funded code that can be used by anyone, no strings attached, to boost productivity and make the world a better place. The fact that this gives us publicity and incentivises the industry to collaborate with us is just a plus. Calling it a self-imposed unpaid internship, when I’m literally hired full time to develop this and just happen to have the freedom to be able to give it out for free, is missing the mark.

    Also, we develop these libraries primarily for our own in-house use, and see the adoption of the libraries by others as a great way to uncover flaws and improve robustness. Others creating closed-source derivatives does not harm us or anyone else in any way as far as I can see.


  • I do exactly this: Write code/frameworks that are used in academic research, which is useful to industry. Once we publish an article, we publish our models open-source under the MIT license. That is because companies that want to use it can then embed our models into their proprietary software, with essentially no strings attached. This gives them an incentive to support our research in terms of collaborative projects, because they see that our research results in stuff they can use.

    If we had used the GPL, our main collaborators would probably not have been interested.


  • CapeWearingAeroplane@sopuli.xyz to linuxmemes@lemmy.world · Security · 2 months ago

    Honestly: Yes. It’s an example that perfectly encapsulates how Windows “as a concept” actively babies and dumbs down its users. In the ’00s, nobody had a problem with file extensions, but now that we’re working with users who have grown up with computers, we suddenly need to remove them because they’re “too confusing”?



    Well yes, I get the difference between an interface and a class, and what I write is typically a class, which contains properties and functionality that may or may not be overridden in derived classes.

    For example, calling a parent class implementation can be useful when I have a derived model that needs to validate its input in some specific way, but otherwise does the same as the base class (roughly as in the sketch at the end of this comment).

    What I don’t understand is why this makes OOP bad?
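    For concreteness, this is roughly the pattern I mean above. It’s just a minimal Python sketch with made-up class and method names, not code from anything I actually maintain:

    ```python
    import math


    class BaseModel:
        def evaluate(self, x: float) -> float:
            # Shared behaviour used by every model in the family.
            return math.exp(-x) * x


    class PositiveInputModel(BaseModel):
        def evaluate(self, x: float) -> float:
            # Derived model: validate the input in a model-specific way,
            # then defer to the parent class implementation.
            if x < 0:
                raise ValueError("this model is only defined for x >= 0")
            return super().evaluate(x)
    ```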


    I’ve seen this thing where people dislike inheritance a lot, and I have to admit that I kind of struggle with seeing the issue when it’s used appropriately. I write a bunch of models that all share a large amount of core functionality, so of course I write an abstract base class in which a couple of methods are overridden by derived models. I think it’s beautiful in the way that I can say “This model will do X, Y, Z, as long as there exists an implementation of methods A, B, C, which have these signatures”. Then I can inherit that base class and implement A, B, and C for a bunch of different cases. In short, I think it’s a very useful way to express the purpose of the code without focusing on the implementation of specific details, and a very natural way of expressing that two classes are closely related models with the same functionality, as expressed by the base class (roughly the structure sketched below).

    I honestly have a hard time seeing how not using inheritance would make such a code base cleaner, but please tell me, I would love to learn.
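    Here is a minimal sketch of the kind of structure I’m describing (Python, with made-up model names; the real code obviously does something more interesting than this):

    ```python
    from abc import ABC, abstractmethod


    class ModelBase(ABC):
        """Core functionality shared by all models: 'this model will do X, Y, Z
        as long as implementations of A, B, C exist'."""

        def solve(self, state: float, dt: float, steps: int) -> float:
            # The shared algorithm lives here, expressed only in terms of
            # the abstract methods below.
            for _ in range(steps):
                rate = self.rate_of_change(state)
                state = self.apply_step(state, rate, dt)
            return self.postprocess(state)

        @abstractmethod
        def rate_of_change(self, state: float) -> float: ...

        @abstractmethod
        def apply_step(self, state: float, rate: float, dt: float) -> float: ...

        @abstractmethod
        def postprocess(self, state: float) -> float: ...


    class DecayModel(ModelBase):
        # A derived model only implements the three abstract methods;
        # everything else is inherited from the base class.
        def rate_of_change(self, state: float) -> float:
            return -0.5 * state

        def apply_step(self, state: float, rate: float, dt: float) -> float:
            return state + dt * rate

        def postprocess(self, state: float) -> float:
            return max(state, 0.0)
    ```

    Calling `DecayModel().solve(1.0, 0.01, 100)` then runs the shared algorithm with the derived implementations plugged in.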


    There are plenty of cases where I would like to do some large calculation that can potentially give a NaN at many intermediate steps. I prefer to check for the NaN at the end of the calculation, rather than have a bunch of checks in every intermediate step (see the sketch at the end of this comment).

    How I handle the failed calculation is rarely dependent on which intermediate step gave a NaN.

    This feels like people want to take away a tool that makes development in the engineering world a whole lot easier because “null bad”, or because they can’t see the use of multiplying 1e27 by 1e-30.
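    To illustrate what I mean, here’s a rough numpy sketch (not code from any of our libraries; the specific operations are made up). Several steps can each go NaN, the NaN just propagates, and one check at the end is enough:

    ```python
    import numpy as np


    def run_calculation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Any of these intermediate steps may produce NaN (0/0, inf - inf, ...),
        # and the NaN simply propagates through the arithmetic.
        with np.errstate(divide="ignore", invalid="ignore"):
            x = a / b                     # 0/0 -> nan, 1/0 -> inf
            y = x * 1e27 * 1e-30          # scaling; stays finite, inf or nan
            z = np.sqrt(np.abs(y)) - x    # inf - inf -> nan

        # A single check at the end tells us whether anything upstream failed.
        if np.isnan(z).any():
            raise ValueError("calculation produced NaN in an intermediate step")
        return z
    ```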


  • I have to admit, I’ve never touched the kind of issue where I need to load a bunch of binaries I can’t automatically trust as part of a build process, so I won’t speak on that.

    On the part about OS updates being a PITA, yes: I’ll admit that I put off updating the macOS major version for as long as possible. As long as my major version is maintained / gets security updates, and the newer versions are backwards compatible enough that I can compile stuff for them without any hassle, I’ll stay on macOS 13. Judging by historical data, that means I have about two more years before I might need to spend an hour or two fixing up stuff that bugs out with the eventual major update.


    I can agree that fighting Apple’s UIs can get frustrating (i.e. playing the “try to find the right button” game). What makes me think Macs are great is that you get all the freedom you could wish for in a terminal that is UNIX-compliant, while also getting the reliability of a hugely widespread OS that a bunch of good developers are paid to maintain. With the new Macs you also get the Apple Silicon hardware, which is great.

    I think most people that use Macs do indeed need the safety rails, but at the same time they bother me. I know how to disable them within 15 mins of setting up my computer, but if I’m helping someone with an issue, I sometimes first need to spend some time disabling safety nets and installing the tools I need. Also: shoving iCloud storage down my throat is shit. They should stop that.


    I was starting to get issues with a MacBook from 2012 (specifically with Homebrew / Xcode) when I upgraded. I’m going to be honest: having had a powerhouse of a machine for 10 years before it became obsolete, I’m not going to complain for one second. Got myself a new MacBook, and it runs like the wind. It works seamlessly with all the tools I need in an environment where we rely on gfortran / gcc, and a lot of my coworkers use Linux.

    To be fair: Part of the reason I waited for so long before upgrading was that I was waiting for them to ditch the butterfly keyboard / touchbar, and get some ports back into the machine. Once they did that I was sold. My only issue with macbooks would be the absurd price for an adequate amount of RAM, but as far as having a good computer, once it’s paid for it’s fantastic.