• 1 Post
  • 21 Comments
Joined 2 years ago
Cake day: July 4th, 2023



  • They probably do use lots of NoSQL DBs too, which perform better for non-relational “data lake” style architectures, where you just wanna dump mountains of data into storage as fast as possible, to be perused later.

    When you have a very, very high volume of data coming in, but a very low need to query it (some potential need, just very low), NoSQL DBs excel.

    Stuff like census data that you’re legally required to store for historical reasons, where very rarely someone will wanna query it for a study or something.

    Keep in mind, when I talk about low need to query, the opposite, high need, is on the scale of “this DB gets queried multiple times per minute”.

    For stuff like logins to a website, data that gets queried many times per minute or even per second, NoSQL DBs sometimes fall off.

    Depends what is queried.

    Super basic “lookup by ID” stuff that operates as just a big ole KeyValuePair mapping ID -> Value? And that’s all you gotta query?

    NoSQL is still the right tool for the job.

    The moment any kind of JOIN enters the discussion though, chances are you actually wanna use SQL.
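The distinction can be sketched in a few lines of Python; the schema, keys, and payloads here are purely hypothetical, with a plain dict standing in for a key-value store and sqlite for the relational side:

```python
import sqlite3

# NoSQL-style access: one big ID -> value map (a dict stands in for the
# store; the key and payload are made up for illustration).
kv_store = {"user:42": '{"name": "Ada", "plan": "pro"}'}
profile = kv_store["user:42"]  # O(1) fetch by key, no query planner involved

# Relational access: the moment a JOIN shows up, SQL earns its keep.
# Totals are stored in cents so the arithmetic stays exact.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, cents INTEGER);
    INSERT INTO users  VALUES (42, 'Ada');
    INSERT INTO orders VALUES (1, 42, 999), (2, 42, 1999);
""")
rows = conn.execute("""
    SELECT u.name, SUM(o.cents)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
""").fetchall()
print(rows)  # [('Ada', 2998)]
```

The dict lookup needs no schema at all, which is exactly the write-fast, query-rarely sweet spot; the aggregate-over-a-JOIN is the kind of question a pure key-value layout makes painful.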



  • pixxelkick@lemmy.world to Selfhosted@lemmy.world: static website generator
    6 months ago

    I use Hugo; it’s not super complicated.

    You basically just define templates in pseudo-HTML for common content (header, nav panel, footer, etc.), then write your articles in Markdown, and Hugo combines the two and outputs actual HTML files.

    You also have a static folder for JS, CSS, and images, which get copied to the output as-is.

    That’s about all there is to it, it’s a pretty minimalist static site generator.

    Hosting-wise, you can just put it on GitHub Pages for free.
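To give a feel for the template side, here’s a minimal, hypothetical layout file; `.Title`, `.Content`, and `partial` are standard Hugo template constructs, but the file paths and partial names are just illustrative:

```html
<!-- layouts/_default/single.html -- a hypothetical minimal Hugo template -->
<html>
  <body>
    {{ partial "header.html" . }}   <!-- shared header partial -->
    <main>
      <h1>{{ .Title }}</h1>
      {{ .Content }}                <!-- your Markdown, rendered to HTML -->
    </main>
    {{ partial "footer.html" . }}
  </body>
</html>
```

Each Markdown file under the content directory gets run through a template like this, producing one static HTML page per article.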



  • Nowadays it’s less of an issue with Docker and whatnot.

    Just set the image to refresh every night at midnight, and if they tried to make manual changes, it’ll just revert back to its original state at midnight.

    Customers don’t really get direct access to deployed code anymore; it’s buried under like 4 layers of abstraction on most CDNs.

    Simply deploying to Azure already layers so much access control and RBAC on top that it’s hard enough for me, the dev, to answer the question of “what is actually deployed atm?”, let alone for the customer to get in there and meddle.
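The nightly reset could be as simple as a cron entry like this (hypothetical image and container names, assuming plain Docker plus cron rather than any particular orchestrator):

```cron
# At midnight, pull the image fresh and recreate the container,
# discarding any manual changes made inside the old one.
0 0 * * * docker pull registry.example.com/site:stable && docker rm -f site && docker run -d --name site registry.example.com/site:stable
```

Anything a customer changed inside the running container lives only in its writable layer, so recreating the container from the image wipes it.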







  • I’d expect it’s something akin to an average half-life or whatnot, such that you can make multiple backups and further improve that number.

    Honestly I’m curious how something could last for over a few thousand years and not be, effectively speaking, eternal.

    Like, at a certain point, if it hasn’t failed by 5,000 years, what on earth would cause it to fail after another 5,000 years? What process is slow enough to “erode” the perfectly preserved object that it can’t get the job done in 5,000 years, but can get it done in 10,000?


  • “what if you lack the fragments needed to reverse engineer/reconstruct a means to access the information?”

    In this case the “fragment” isn’t even a fragment; it would be a completely intact, start-to-finish, monstrous amount of data.

    The larger the “fragment” is, and the more complete it is, the more trivial it becomes to decode.

    And since this data is being purposefully stored in a manner intended for future use, it’s very likely it will be encoded in a manner that makes decoding it as easy and intuitive as possible.

    I’d strongly suspect every individual “glass” would have some form of “clue” or “how-to” at the start, serving as a guide to help the consumer know they are decoding it right.

    Off the top of my head, one example would be encoding a bunch of digits of the Fibonacci sequence at the start as character literals (so, text form), which, even in binary form inspected physically with a microscope, any scientist would recognize: “oh hey, that’s Fibonacci!”

    Then after that a large blank, followed by perhaps the entire ASCII character set, in order, from 0 to wherever it ends. Or perhaps Unicode.

    The whole thing is only like a megabyte or two, so it would be less than 0.1% of the storage, but having those two items at the start of every disk would be an easy way for the consumer to sanity-check that they are “reading” the data right, and would clue them in very fast that “yo, there’s data stored on here”.
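A toy sketch of generating such a clue header; the exact layout (20 Fibonacci terms, a run of null bytes as the gap, then printable ASCII) is my own invention, just to make the idea concrete:

```python
def fibonacci(n):
    """First n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def build_clue_header():
    # Part 1: Fibonacci digits as character literals, so even a raw
    # binary inspection shows a recognizable mathematical pattern.
    fib_text = " ".join(str(x) for x in fibonacci(20))
    # Part 2: a gap of null bytes, then the printable ASCII range in
    # order, giving the reader a built-in table to sanity-check against.
    charset = "".join(chr(c) for c in range(32, 127))
    return fib_text.encode("ascii") + b"\x00" * 16 + charset.encode("ascii")

header = build_clue_header()
print(header[:40])  # starts with b'1 1 2 3 5 8 13 21 ...'
```

A future decoder who recovers the bit stream can check their framing assumptions against the Fibonacci run, then use the ordered character set as a decoding key for everything after it.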


  • This is awesome. I was talking about this with some friends, debating what is truly the best way to store data long-term (on the scale of thousands of years).

    Backing up all of human knowledge and history onto such plates actually seems like a worthy endeavor.

    Imagine if we had such detailed records about civilizations thousands of years ago!

    We have demonstrated time and time again that if you have a bunch of unencrypted data, it is actually quite trivial to reverse engineer and decipher it.

    The Dead Sea Scrolls, the Rosetta Stone, etc.

    This would be terabytes of data, likely organized in a way that makes it very intuitive to reverse engineer, even by someone who has no idea how it works.

    We could even case-study this: give one of these loaded slates to some scientists who have no idea how the data on them is stored, and have them try to decipher it.

    If they can reasonably succeed quickly with no knowledge of how it works, then it should be easy for someone thousands of years from now, too.



  • I was able to connect to the DB with CloudBeaver, but it straight up wasn’t providing the Diagram tab the way the picture said it ought to. The example pic even specifically uses a Postgres DB as its example!

    I pretty much had the exact same view, but no Diagram tab. Unfortunately the wiki article doesn’t go into much detail; it just says:

    “(if the tab is not presented then the object does not support the diagram presentation)”

    With no further information listing off what is, and is not, supported for diagram presentation.

    Lack of documentation, it seems, which is unfortunate. It seemed like it had potential, but I spent a good 20 minutes fiddling with it, trying different configurations and settings, and nothing made it start working. It seems like (as is the case with a few of these tools) the ERD tooling is often a bit of an afterthought and poorly supported.

    Many of the tools are SQL first, ERD… third? Fourth? Forgotten and lacking most features :(


  • Trying it out: the wiki says it has an ERD editor, but its documentation is kind of lacking.

    Its example image here: https://github.com/dbeaver/cloudbeaver/wiki/Entity-Diagrams

    Shows it interacting with a Postgres database, but when I try the same, I am not getting a Diagram tab. It’s also proving to be pretty awkward to work with.

    So far the best I have found is Azimutt, which is pretty close to what I want, but its interface is lacking atm, and I couldn’t get it to successfully connect to my Postgres database in the end (it kept giving NOT FOUND errors, even though I tested from inside the Docker container to validate the connection, and it could indeed reach the Postgres database’s TCP port).



  • I have a k3OS cluster built out of a bunch of Raspberry Pis, and it works well.

    The big reason I like Kubernetes is that once it is up and running with GitOps-style management, adding another service becomes trivial.

    I just copy-paste one of my existing services, tweak the names/namespaces, and then change the specifics for the pods to match what their Docker configuration needs, i.e. what folders need mounting and any other secrets or configs.

    I then just commit the changes to GitHub and apply them to the cluster.

    Being able to roll back changes via git is awesome.
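The copy-and-tweak pattern amounts to editing a manifest like this one; the service name, namespace, image, and paths below are all hypothetical stand-ins for whatever the new service actually needs:

```yaml
# deploy/jellyfin.yaml -- hypothetical service copied from an existing
# manifest; only names, image, and volume paths were changed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          volumeMounts:
            - name: media
              mountPath: /media        # folder the container needs mounted
      volumes:
        - name: media
          hostPath:
            path: /mnt/storage/media   # hypothetical folder on the Pi
```

Once committed, `kubectl apply -f deploy/jellyfin.yaml` brings the service up, and rolling back is just a `git revert` followed by another apply.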