  • I once developed an electronic program guide for a cable TV company in New Zealand, and I’d have lost my mind if I’d had to juggle timezones internally. The basic rules of thumb were:

    a) Internally you use UTC religiously. UTC is the same everywhere on Earth, time always goes forward, and most languages have classes that represent instants, durations, etc. In addition you make damned sure your server clock is correct and set to UTC.

    b) You only deal with timezones when presenting something to a user or taking input from a user.

    Prior to that I had worked for a US trading company that set all its servers to EST and received trades through the system with ambiguously expressed dates & times. The code just had to assume EST was the default everywhere. It was dumb programming, and I bet to this day every piece of code they develop has time bugs.
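
    As a minimal sketch of that rule (using the chrono crate here, which is an assumption - any instant/duration library works the same way): keep UTC instants internally and only apply an offset at the presentation edge.

    // Cargo.toml: chrono = "0.4" (assumed third-party crate)
    use chrono::{DateTime, FixedOffset, Utc};

    fn main() {
        // Internally: always a UTC instant.
        let aired_at: DateTime<Utc> = Utc::now();

        // Only at the presentation edge do we apply a zone offset,
        // e.g. New Zealand standard time (UTC+12).
        let nzst = FixedOffset::east_opt(12 * 3600).unwrap();
        let shown_to_user = aired_at.with_timezone(&nzst);

        println!("stored:    {}", aired_at.to_rfc3339());
        println!("displayed: {}", shown_to_user.to_rfc3339());
    }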


  • Rust isn’t really OOP like C#, Java or C++ - it has structs with associated functions that you could consider an “object”, but there is no inheritance. Instead Rust uses traits, which are a little bit like interfaces in some languages (see the sketch below).

    The way the kernel is using Rust at the moment is to produce safe bindings so that modules can be written in Rust, i.e. you can create a module in Rust source which will be correctly loaded up, whose code is safe by default, and which has access to kernel services via bindings. I expect over time that more of the kernel will become Rust, but the biggest impediment right now is that Rust relies on LLVM, and LLVM only supports a subset of the targets that a kernel could potentially reach with another compiler like gcc.
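
    To illustrate the trait point, a minimal sketch (plain Rust, nothing kernel-specific):

    // A trait declares behaviour, a bit like an interface; types opt in
    // by implementing it, rather than inheriting from a base class.
    trait Greet {
        fn greet(&self) -> String;
    }

    struct Server;

    impl Greet for Server {
        fn greet(&self) -> String {
            "hello from a struct, no inheritance involved".to_string()
        }
    }

    // Generic code accepts any type that implements the trait.
    fn announce(g: &impl Greet) {
        println!("{}", g.greet());
    }

    fn main() {
        announce(&Server);
    }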



  • arc@lemm.ee to Programmer Humor@programming.dev, re: “Is this a Nut?” (8 months ago)

    The only reason people use JS is because it’s the de facto language of browsers. As a language it’s dogshit, filled with all kinds of unpleasant traps.

    Here is a fun one I discovered the other day:

    // '2022-10-9' is not valid ISO 8601, so it falls back to legacy,
    // implementation-defined parsing as *local* time (a UTC+1 zone here)
    new Date('2022-10-9').toUTCString() === 'Sat, 08 Oct 2022 23:00:00 GMT'
    // '2022-10-09' is a valid ISO 8601 date-only string, parsed as UTC midnight
    new Date('2022-10-09').toUTCString() === 'Sun, 09 Oct 2022 00:00:00 GMT'
    

    So padding the day of the month with a 0 (or not) changes the result by an hour - here by the local UTC+1 offset - because the spec only defines parsing for properly padded ISO 8601 strings and leaves anything else implementation-defined. Every browser does the same, so I assume this is a legacy thing. Any sane language would throw an exception on malformed input. Not JavaScript.



  • Concerning logs:

    1. You can still log to text if you want by configuration (e.g. forward stuff to syslog) and use any tools you like to read those files. So if you like text logs you can get them. You can even invoke journalctl to output logs on an ad hoc / scheduled basis in a variety of text formats and delimited fields (see the examples below).
    2. Binary allows structured logging (i.e. each log message is composed of fields in a record), plus indexing and search options that make queries faster, just like in a database. E.g. if you want to search by date range, or for a particular user, it’s easy and fast.
    3. Binary also allows the log to be signed & immutable to prevent tampering and to support auditing, intrusion detection etc. E.g. if someone broke into a system they could not delete records without it being obvious.
    4. You can also use Splunk with systemd.

    So people object to systemd writing binary logs, and yet they can get text, or throw it into Splunk, or do whatever they like. The purpose of the binary format is to make security, auditing and forensics better than they are for text.
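
    For instance, the text output and indexed queries mentioned above might look like this (standard journalctl flags):

    # Classic syslog-style text output, or JSON for machine processing
    journalctl -o short
    journalctl -o json

    # Indexed queries: by date range, by unit, or by user id
    journalctl --since "2023-07-01" --until "2023-07-02"
    journalctl -u nginx.service
    journalctl _UID=1000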

    As for scripts, the point I’m making is systemd didn’t supplant sysvinit, it supplanted upstart. Upstart recognized that writing massive scripts to start/stop/restart a process was stupid and chose an event-driven model for running stuff in a more declarative way. Basically upstart used a job system triggered by events, e.g. a runlevel change triggers a job that might kick off a process. Systemd chose a dependency-based model for starting stuff. It seems distros preferred the latter and moved over to it. Solaris has SMF, which serves a similar purpose to systemd.

    So systemd is declarative - you describe a unit in a .service file - the process to start, the user id to run it with, what other units it depends on etc. - and allow the system to figure out how to launch it and take care of other issues. It means stuff happens in the right order, and in parallel where it can be. It’s fairly simple to write a unit file as opposed to a script, but if you needed to invoke a script you could do that too: write a unit file that invokes it, even a pre-existing init script.
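
    A minimal sketch of such a unit file (the service name and path are made up for illustration):

    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/exampled
    User=exampled
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target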


  • arc@lemm.ee to linuxmemes@lemmy.world, re: “Systemd controversy be like” (8 months ago)


    Kind of sad there are still people raging over systemd. When it flares up in discussions there is the usual debunked nonsense:

    • it only logs information to binary and this is somehow bad. Except it can be configured to log to text as well, and it uses binary so it can forward-secure-sign records to prevent tampering, as well as offering database-style query operations.
    • it’s insecure because the repo has millions of lines of code. Except that they compile into hundreds of small binaries running with least privilege, often replacing far more dangerous processes (e.g. there is an NTP client in systemd which sets the time and does nothing else).
    • various rants about the primary author

    What is more bizarre is the nostalgia and hearkening back to sysvinit scripts when systemd didn’t replace sysvinit! Systemd replaced upstart, which replaced sysvinit. Writing hundreds of lines of script to stop/start/restart a process sucked - insecure, slow, didn’t scale, didn’t capture dependencies - and everyone knew it. Upstart was the first attempt to solve the issue and was used in Debian / Ubuntu, Fedora / Red Hat, openSUSE and others until systemd came along.


  • The problem is that most languages have no native support for anything other than 32- or 64-bit floats, and some representations on the wire don’t either. Most underlying processors don’t have arbitrary-precision support either.

    So either you choose speed and sacrifice precision, or you choose precision and sacrifice speed. The architecture might not support arbitrary precision, but most languages have a bignum / bigdecimal library that will do it more slowly. It might be necessary to marshal or store those values in databases or over the wire in whatever hacky way is necessary (e.g. encapsulating values in a string).
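
    A small sketch of the trade-off (using the third-party bigdecimal crate, which is an assumption - any arbitrary-precision decimal library behaves similarly):

    // Cargo.toml: bigdecimal = "0.4" (assumed)
    use bigdecimal::BigDecimal;
    use std::str::FromStr;

    fn main() {
        // Fast path: hardware f64, but 0.1 + 0.2 is not exactly 0.3.
        let fast = 0.1_f64 + 0.2_f64;
        println!("f64:        {:.17}", fast); // 0.30000000000000004

        // Precise path: software arbitrary-precision decimal, slower.
        let a = BigDecimal::from_str("0.1").unwrap();
        let b = BigDecimal::from_str("0.2").unwrap();
        let exact = a + b;
        println!("BigDecimal: {}", exact); // 0.3

        // Over the wire: encapsulate the value in a string.
        let wire = exact.to_string();
        println!("marshalled: \"{}\"", wire);
    }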


  • We had tens of thousands of lines in our rake files to build a bunch of targets, none of which were even Ruby. If I needed to build another complex build system that was a directed acyclic graph, I’d use Gradle, for several reasons: we had some Java targets so we’d save on an additional developer runtime, it would run faster, and Gradle is more mainstream with plugins & documentation that are easier to come by.


  • It probably wasn’t a big deal while it was a niche project, but then Twitter imploded, all the public instances got overloaded with new users, and the limits became obvious.

    A better design is Lemmy’s: it’s written in Rust, so it has far more scalability. It’s compiled, and because it’s tokio / actix based it can also do a lot more stuff asynchronously, so it isn’t spawning thousands of threads to cope with concurrent requests.
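
    A rough sketch of that async model (a toy tokio echo server, not Lemmy’s actual code):

    // Cargo.toml: tokio = { version = "1", features = ["full"] } (assumed)
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8080").await?;
        loop {
            let (mut socket, _) = listener.accept().await?;
            // Each connection becomes a cheap task multiplexed onto a
            // small thread pool, not a dedicated OS thread.
            tokio::spawn(async move {
                let mut buf = [0u8; 1024];
                if let Ok(n) = socket.read(&mut buf).await {
                    let _ = socket.write_all(&buf[..n]).await;
                }
            });
        }
    }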


  • There is a lot of magic in Java. Try Spring Boot, for example: things magically connect together with annotations, methods somehow get injected onto interfaces on the fly, or an HTTP endpoint maps onto a function with parameters because the runtime is doing it. This is most evident when you set a breakpoint in some class and there might be 4 or 5 mystery functions between it and where you thought the call came from. Slf4j, Lombok and Hibernate do the same kind of thing.


  • I wrote extensively in Ruby, but for Rake - using Ruby as a build system. Can’t say I liked the language, although it was okay for how we used it. We had 20 sub-projects with some very complex build targets and dependency scanning going on, and the Rake syntax was okay. Personally I think its biggest shortcoming was that the documentation was very poor, and stuff like gems felt primitive compared to other package management systems. One thing I liked about the language was that blocks could evaluate to a value, which I really use a lot in Rust too (sketch below).
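
    What that looks like in Rust - a block is an expression whose value is its final expression:

    fn main() {
        // The block's last line (no semicolon) is its value.
        let label = {
            let n = 3;
            if n % 2 == 0 { "even" } else { "odd" }
        };
        println!("3 is {}", label);
    }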

    I think if I were doing an acyclic-dependency build system these days I’d probably use Gradle.

    As for Rails, I expect it failed to catch on because, even compared to Python, Ruby is a slow language - and Python isn’t fast by any stretch. Projects that started with Rails hit the performance brick wall and moved to something else.