• 2 Posts
  • 156 Comments
Joined 1 year ago
Cake day: June 23rd, 2024


  • How would a new format be backwards-compatible? At least JPEG-XL can losslessly compress standard jpg for a bit of space savings, and servers can choose to deliver the decompressed jpg to clients that don’t support JPEG-XL.

    Also from Wikipedia:

    Computationally efficient encoding and decoding without requiring specialized hardware: JPEG XL is about as fast to encode and decode as old JPEG using libjpeg-turbo

    Being a JPEG superset, JXL provides efficient lossless recompression options for images in the traditional/legacy JPEG format that can represent JPEG data in a more space-efficient way (~20% size reduction due to the better entropy coder) and can easily be reversed, e.g. on the fly. Wrapped inside a JPEG XL file/stream, it can be combined with additional elements, e.g. an alpha channel.
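    The recompression round trip described above is a one-liner each way with the libjxl reference tools. A hedged sketch (assumes cjxl/djxl are installed; the function names and filenames are placeholders):

```shell
#!/usr/bin/env bash
# Losslessly repack a legacy JPEG into JPEG XL (~20% smaller, fully reversible).
recompress_jpeg() {
  cjxl --lossless_jpeg=1 "$1" "$1.jxl"  # stores the JPEG data bit-exactly
}

# Reconstruct the original JPEG bytes for clients without JXL support.
restore_jpeg() {
  djxl "$1" "${1%.jxl}"                 # reverses the recompression
}
```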




  • Bash has its upsides too, like the fact that it has arrays / lists and dictionaries / hashmaps. In my opinion, it gets iffy though when you need to do stuff with IFS; at that point one might be better off just using specialized tools.

    Not saying working bash isn’t good enough, but in my experience it can break in very surprising ways.
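    The iffy part can be shown in a couple of lines: unquoted expansion gets re-split on IFS, which silently mangles elements containing whitespace. A small bash sketch (the array contents are made up):

```shell
#!/usr/bin/env bash
files=("a file.txt" "b.txt")   # two elements, one containing a space

unquoted=(${files[@]})         # unquoted: re-split on IFS into three words
quoted=("${files[@]}")         # quoted: the two elements survive intact

echo "${#unquoted[@]} vs ${#quoted[@]}"   # prints "3 vs 2"

declare -A ports=([http]=80 [https]=443)  # the dictionary / hashmap side
echo "${ports[https]}"                    # prints "443"
```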



  • Laser@feddit.org to Programmer Humor@programming.dev · functions · 4 days ago

    Not sure I’d call what bash has functions. They’re closer to subroutines in Basic than to functions in other languages, in that you can’t return a value from them: they can only return an exit code, though you can capture their stdout and stderr, and capturing output that way runs them in a subshell. It’s one of the reasons I don’t really like Bash; unless you declare everything local, you end up with globally or at least broadly scoped variables. Oh, and I have no clue right now how to find where in your pipeline you got a non-zero exit code.

    It’s not a big problem for simple scripting, but it makes things cumbersome once you try to do more.
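    The workarounds most scripts settle on for the limitations above: “return” values over stdout, scope with `local`, and read PIPESTATUS right after the pipeline. A minimal bash sketch (function and variable names are illustrative):

```shell
#!/usr/bin/env bash
# "Return" a value by writing it to stdout and capturing it.
add() {
  local sum=$(( $1 + $2 ))  # `local` keeps sum out of the enclosing scope
  echo "$sum"
}

result=$(add 2 3)           # the $( ) capture is where the subshell happens

# PIPESTATUS records the exit code of every stage of the last pipeline;
# copy it immediately, since the next command overwrites it.
true | false | true
codes=("${PIPESTATUS[@]}")  # here: 0 1 0, i.e. the middle stage failed
```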


  • In all seriousness though, the core of the technical stack has become very robust in my opinion (DNS being the exception). From a hobbyist’s perspective, things work much better than when the Web was still young. I can run multiple sites (some of them being what would today be called apps) on a domain with subdomains, everything fast, HTTP/3-capable, secured via valid free TLS certs, reverse proxied, all of it running on a system deployed in minutes…

    If you focus on the part of the Internet that you have control over, it’s a lot better than back in the simple days.






  • Similarly here. Have an Odroid with that platform; it wasn’t cheap, but it came with several advantages:

    • 4 SATA ports in addition to the M.2 slot
    • Intel QSV
    • 2 x 2.5 Gbit Ethernet (I only have gigabit at home though)

    Very powerful machine for its power usage. I ran a really old Athlon before (from 2010 or so, retrofitted with 16 GB RAM) that did most stuff just fine, but I wanted some transcoding and also possibly a smaller case.
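    For the transcoding part, Intel QSV is usually driven through ffmpeg’s qsv decoder/encoder pair. A hedged sketch (assumes an ffmpeg build with QSV support; the function name, filenames, and quality value are placeholders):

```shell
#!/usr/bin/env bash
# Hardware-accelerated H.264 -> HEVC transcode on Intel Quick Sync Video.
transcode_qsv() {
  ffmpeg -hwaccel qsv -c:v h264_qsv -i "$1" \
         -c:v hevc_qsv -global_quality 25 "$2"
}
```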

    I run everything bare metal though.