
  • Right now when updates get applied to the NAS, if it gets powered off during the update window, that would be really bad and inconvenient and require manual intervention.

    You sure? I mean, sure, it’s possible; there are devices out there that can’t deal with power loss during update. But others can: they’ll typically have space for two firmware versions, write out the new version into the inactive slot, and only when the new version is committed to persistent storage, atomically activate it.

    Last device I worked on functioned that way.

    you might lose data in flight if you’re not careful.

    That’s the responsibility of the application. If it relies on the data being persistent at some point, it needs to be written to deal with the fact that there may be in-flight data that doesn’t make it to the disk; if it intends to take other actions that depend on the data being durable, it needs to call fsync() (or whatever its OS provides) before doing so.

    Normally, there will always be a period where some data being written out is only partially persistent: a write() can complete once the data has been handed off to the OS’s buffer cache. A local drive can report completion while the data is still in its own cache. The app could make multiple write() calls, and the first could complete without the second. With a NAS, that window might be a little longer than it otherwise would be, but something like a DBMS will do the fsync(); at any point, the OS could hypothetically crash or power could be lost.
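
    To make that concrete, here’s a rough sketch of the difference (the filenames and mountpoint are made up); GNU dd’s conv=fsync flag does an fsync() on the output before reporting success, so the command doesn’t return until the data has been pushed to stable storage rather than just into the buffer cache:

    # Without conv=fsync, dd can return as soon as the data is in the OS's
    # buffer cache; with it, dd fsync()s the output file before exiting.
    $ dd if=backup.tar of=/mnt/nas/backup.tar bs=1M conv=fsync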

    The real problem that I need a NAS for is not the loss of some data; it’s that when storms hit and there’s flooding, the power can go up and down and cycle quite rapidly. And that’s really bad for sensitive hardware like hard disks. So I want the NAS to shut off when the power starts getting bad, and not turn on for a really long time, but still turn on automatically when things stabilize.

    Like I said in the above comment, you’ll get that even without a clean shutdown; you’ll actually get a bit more time if you don’t do a clean shutdown.

    Because this device runs a bunch of VMs and containers

    Ah, okay, it’s not just a file server? Fair enough – then that brings case #2 back up again, which I didn’t expect to apply to the NAS itself.



  • I’m assuming that your goal here is automatic shutdown when the UPS battery gets low so you don’t actually have the NAS see unexpected power loss.

    This isn’t an answer to your question, but stepping back and getting a big-picture view: do you actually need a clean, automatic shutdown on your Synology server if the power goes out?

    I’d assume that the filesystems that the things are set up to run are power-loss safe.

    I’d also assume that there isn’t server-side state that needs to be cleanly flushed prior to power loss.

    Historically, UPSes providing a clean shutdown were important on personal computers for two reasons:

    • Some filesystems couldn’t deal with power loss and could produce a corrupted filesystem: FAT, for example, or HFS on the Mac. That’s not much of an issue today, and I can’t imagine that a Synology NAS would be doing that unless you’re explicitly choosing to use an old filesystem.

    • Some applications maintain state in memory and, when told to shut down, will dump it to disk. So if someone is writing a document in Microsoft Word and hasn’t saved for a long time, a few minutes of warning gives them time to save it (or gives the application time to auto-save); auto-save usually only partially mitigates this. I don’t have a Synology system, but AFAIK they don’t run anything like that.

    Like, I’d think that the NAS could probably survive a power loss just fine, even with an unclean shutdown.

    If you have an attached desktop machine, maybe case #2 would apply, but I’d think that hooking the desktop up to the UPS and having it do a clean shutdown would address the issue – I mean, the NAS can’t force apps on computers using the NAS to dump state out to the NAS, so hooking the NAS up that way won’t solve case #2 for any attached computers.

    If all you want is more time before the NAS goes down uncleanly, you can just leave the USB and RS-232 connection out of the picture and let the UPS run until the battery is exhausted and then have the NAS go down uncleanly. Hell, that’d be preferable to an automated shutdown, as you’d get a bit more runtime before the thing goes down.


  • Yes. I wouldn’t be preemptively worried about it, though.

    Your scan is going to try to read and maybe write each sector and see if the drive returns an error for that operation. In theory, the adapter could respond with a read or write error even if the read or write actually worked, or could even return some kind of bogus data instead of an error.

    But I wouldn’t expect this to actually arise, and I wouldn’t be particularly worried about the prospect. It’s sort of a “could my grocery store checkout counter person murder me” thing: theoretically yes, but I wouldn’t worry about it unless I had some reason to believe that that was the case.
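
    If you do want to sanity-check the drive through the adapter, a plain read scan is cheap to run. A minimal sketch (the device name is a placeholder; check which one is yours first):

    # Find the disk that's behind the USB adapter
    $ lsblk -o NAME,SIZE,MODEL,TRAN
    # Read-only surface scan; -s shows progress, -v is verbose.
    # Don't use -w on a disk with data you care about; it's destructive.
    $ sudo badblocks -sv /dev/sdX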


  • tal@lemmy.today to Selfhosted@lemmy.world · Server for a boat

    What hardware and Linux distro would you use in this situation?

    The distro isn’t likely to be a factor here. Any (non-super-specialized) distro will be able to solve issues in about the same way.

    I mean, any recommendation is going to just be people mentioning their preferred distro.

    I don’t know whether saltwater exposure is a concern. If so, that may impose some constraints on heat generation (if you have to have it and storage hardware in a waterproof case).


  • If there’s a better way to configure Docker, I’m open to it, as long as it doesn’t require rebuilding everything from scratch.

    You could try using lvmcache (block-device-level caching) or bcachefs (filesystem-level caching) or something like that: have rotational storage be the primary form of storage, but let the system use an SSD as a cache. Dunno what kind of performance improvement you might expect, though.
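
    If the Docker data already lives on an LVM logical volume, the lvmcache route is fairly non-invasive. A rough sketch, assuming a volume group named vg0, a data LV named docker_data, and the SSD at /dev/nvme0n1p1 (all of those names are placeholders):

    # Add the SSD to the existing volume group
    $ sudo vgextend vg0 /dev/nvme0n1p1
    # Carve out a cache volume on the SSD
    $ sudo lvcreate -L 100G -n docker_cache vg0 /dev/nvme0n1p1
    # Attach it as a cache to the LV holding the Docker data
    $ sudo lvconvert --type cache --cachevol vg0/docker_cache vg0/docker_data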


  • I would suggest, unless you have a very unusual situation, that you’re going to have an easier time of it with a keyboard and display.

    If your computer can do HDMI out, you can use a television as display.

    In all seriousness, unless this is some kind of super-exotic situation (like, you’re on a sailboat in the middle of the Pacific and are suddenly needing to set up a Debian server) I would probably get an inexpensive USB keyboard to keep around. Even if you don’t normally need it (like, you use a laptop or something) there are a number of situations that it solves, like “one of my laptop keys has just stopped working” or “I actually need to work on some kind of computer that doesn’t have an integrated keyboard”.

    kagis

    https://www.amazon.com/sgmedila-Waterproof-Foldable-Flexible-Dustproof/dp/B0CXTHH7QS/

    That’s not gonna be a very pleasant typing experience, but it’s under $4 for two, if you’re determined to spend as little as possible.

    If you can’t get access to a television, here’s a small, 640x480 USB/HDMI display under $50:

    https://www.amazon.com/Capacitive-Compatible-Raspberry-Resolution-Interface/dp/B0CFJDTM5X/

    I’d probably get a larger display, maybe used – I mean, maybe you think that you’re never gonna need to look at a computer’s output again, but you might find yourself troubleshooting a machine like this one, and 640x480 is a kind of significant limitation – but that’s at least a baseline.

    If you specifically don’t want a keyboard, and if you have some other device with a display and text input and USB (well, or serial) support, I’d bet that the Debian installer can probably handle an RS-232 serial console install.

    kagis

    Yup.

    https://p5r.uk/blog/2020/instaling-debian-over-serial-console.html
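
    For reference, the serial route mostly comes down to two pieces: telling the installer’s kernel to use the serial port, and attaching to that port from whatever machine you’re typing on. A rough sketch (the port names and baud rate are assumptions):

    # Kernel parameter to add at the installer's boot prompt so that the
    # installer talks to the first serial port instead of the display:
    console=ttyS0,115200n8

    # On the machine you're typing from, attach to the USB-serial adapter:
    $ screen /dev/ttyUSB0 115200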

    But I’m guessing that you don’t have the serial hardware. A USB-to-serial adapter is another thing I keep around, because every now and then I need to work on headless devices that have a serial interface, but I’ll concede that the serial port is getting pretty elderly.

    I’d probably get a USB-to-serial male and USB-to-serial female adapter if neither end has an existing serial port (which these days, with desktop hardware, may be very possible). Something like this:

    https://www.amazon.com/OIKWAN-Adapter-Converter-Compatible-Windows/dp/B0BL1MRV6H/

    and

    https://www.amazon.com/Serial-Adapter-Chipset-Compatible-Windows/dp/B0CT8MRT5B/

    But then you have to be sure that you can get your machine to boot into the Debian install media. On machines that are designed to be run headless, routers and such, it’s common for the BIOS to support a serial interface. On desktop machines…not so much. So if it’s already configured to boot off USB, that may be fine, but if it’s not, well…

    Debian also has a fully-automated installer, as long as you can set your machine up to boot into it without a keyboard or display:

    https://wiki.debian.org/FAI

    That kind of thing is normally more used to set up VMs or manufacture hardware.

    I would be very careful with that thing and probably wipe it after you use it, since it’s gonna be a USB key that wipes any computer that gets rebooted while it’s set to boot off USB.

    It almost certainly isn’t a great fit for your use case – like, the time you’re probably going to expend setting it up isn’t going to be worth whatever you’d save spending on hardware – but mentioning it for completeness.



  • tal@lemmy.today to Selfhosted@lemmy.world · Alternatives to CloudFlare?

    I’d probably use a VPS myself.

    I seem to recall db0 saying that lemmy.dbzer0.com is behind some sort of reverse proxy. I assume that they’re in the same boat as OP.

    looks

    $ host -t a lemmy.dbzer0.com
    lemmy.dbzer0.com has address 51.77.203.116
    $ whois 51.77.203.116
    [snip]
    role:           OVH Technical Contact
    address:        OVH SAS
    address:        2 rue Kellermann
    address:        59100 Roubaix
    address:        France
    admin-c:        OK217-RIPE
    tech-c:         GM84-RIPE
    tech-c:         SL10162-RIPE
    nic-hdl:        OTC2-RIPE
    abuse-mailbox:  abuse@ovh.net
    mnt-by:         OVH-MNT
    created:        2004-01-28T17:42:29Z
    last-modified:  2014-09-05T10:47:15Z
    source:         RIPE # Filtered
    
    % Information related to '51.77.0.0/16AS16276'
    
    route:          51.77.0.0/16
    origin:         AS16276
    mnt-by:         OVH-MNT
    created:        2018-03-07T09:24:45Z
    last-modified:  2018-03-07T09:24:45Z
    source:         RIPE
    $
    

    I don’t know if that’s a VPS, but looks like they’re using OVH.


  • Well, there’s the obvious answer, that you actually have an Nvidia card. I think I’d probably consider taking a look at the card and at photos of new cards of both models and see which it looks like.

    From a software standpoint, I have a hard time believing that you’re misdetecting the type of card.

    I don’t know anything about Proxmox, but I understand that it’s some sort of platform used to virtualize systems. Based on a quick search, it apparently has some kind of support for Nvidia passthrough, called vGPU. If you’re looking from inside a virtualized environment, is it possible that you’re looking at a virtual GPU? That seems like a long shot, since I assume that if your physical GPU is AMD, a virtual Nvidia GPU would be non-functional – it doesn’t look like this vGPU thing can use a host AMD GPU – but I can’t think of any other way that you’d wind up detecting an Nvidia card that you don’t have.
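
    One quick way to see what the OS actually sees, both on the Proxmox host and inside the guest (the grep pattern is just a convenience):

    # List GPUs the kernel sees, plus the driver bound to each one
    $ lspci -nnk | grep -iA3 'vga\|3d controller'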


  • What do you want to do with it? I mean, that really determines the hardware.

    Consider the following use cases:

    • If you’re trying to do a media server to serve video and audio files to other devices around the house, then access time basically doesn’t matter, rotational drives are fine, and CPU capacity is mostly irrelevant; you only need to stream at the media’s bitrate, there isn’t a whole lot of seeking, and there’s no real computation. You do need the system to be running at all times. Expandability, other than storage, doesn’t really matter.

    • If you want a backup server, then you’re probably in a similar situation.

    • If you’re trying to do a box to run LLMs, like a headless Stable Diffusion server, then you probably want a very beefy GPU, and enough storage space to store the relevant content, but you don’t need massive amounts of storage. CPU doesn’t matter much.

    • If you’re trying to do a firewall, then unless you have really elaborate processing requirements, CPU probably doesn’t matter. You are going to want at least two network ports. Keeping power usage low is probably desirable.

    • If you’re doing a home automation server, probably similar (though you don’t need network ports).

    • If you’re trying to have a box that runs VMs, then a bunch of memory, a beefy CPU, and probably SSDs are likely desirable. Limiting power use probably isn’t that important.

    There are applications for which a Pi is completely reasonable, where you’re using very little power and just need to keep the box always available. But there are applications for which it’s unreasonable, too – it’d make a bad VM-hosting box.

    Like, if you say “I plan to do X, and Y and I’m thinking that I might do Z”, and maybe give some kind of a desired budget, that’ll probably get you more-useful advice.

    First I wanted to do it on a Raspberry Pi with an external hard-drive but then I read USB connected drives are unreliable and so on.

    I don’t know about unreliable. I’ve never had problems with USB-attached storage just not working. But I do have one enclosure with about five drive bays that doesn’t have an option to return to the previous power state on power loss – one has to tap the power button – which is incredibly obnoxious: if it loses power while I’m away, I can’t bring it back up. That wasn’t something I’d anticipated being an issue, and I’d suggest that anyone getting an enclosure for a system they intend to use remotely check that it does have that functionality.


  • Compatibility aside, I’d say that .tar.pxz aka .tpxz is probably my vote.

    LZMA is probably what I’d want to use. xz and 7zip use that. It’s a bit slow to compress, but it has good compression ratios, and it’s faster to decompress than bzip2.

    pixz permits parallel LZMA compression/decompression. On present-day processors with a lot of cores, that’s desirable.

    https://github.com/vasi/pixz

    It also can use .tar as its container format, which is desirable; that’s everywhere.

    The major drawback to plain .tar is that it doesn’t support indexed access, so extracting a single file isn’t fast; pixz adds an index, so .tar.pxz does.
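
    For reference, a quick sketch of what that looks like in practice (the filenames are made up); GNU tar’s -I option runs the archive through an external compressor:

    # Create a parallel-compressed archive
    $ tar -Ipixz -cf backup.tpxz mydir/
    # Extract the whole thing
    $ tar -Ipixz -xf backup.tpxz
    # Use the index to pull out one file without decompressing the rest
    $ pixz -x mydir/some/file < backup.tpxz | tar x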




  • I don’t have an answer for you, partly because there isn’t enough information about your aims. However, you can probably work this out yourself by comparing prices for different hardware. You’d need some of that missing information to run the numbers, though.

    I would imagine that an important input here is your expected usage.

    If you just want to set up a box to run a chatbot occasionally and you get maybe 1% utilization of the thing, the costs are different from if you intend to have the thing doing batch-processing jobs 24/7. The GPU is probably the dominant energy consumer in the thing, so if it’s running 24/7, the compute efficiency of the GPU in terms of energy is going to be a lot more important.

    If you have that usage figure, you can estimate the electricity consumption of your GPU.
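
    As a rough back-of-the-envelope example (every number here is made up; plug in your own wattage, utilization, and electricity rate):

    # 350 W GPU at 10% average utilization, $0.15/kWh, 30-day month:
    # 350 * 0.10 * 24 * 30 / 1000 = 25.2 kWh, times $0.15 is about $3.78/month
    $ echo '350 * 0.10 * 24 * 30 / 1000 * 0.15' | bc -l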

    A second factor here, especially if you want interactive use, is what level of performance is acceptable to you. That may, depending upon your budget and use, be the dominant concern. You’ve got a baseline to work with.

    If you have those figures – how much performance you want, and what your usage rate is – you can probably estimate and compare various hardware possibilities.

    I’d throw a couple of thoughts out there.

    First, if what you want is sustained, 24/7 compute, you probably can look at what’s in existing, commercial data centers as a starting point, since people will have similar constraints. If what you care about is much less frequent, it may look different.

    Second, if you intend to use this for intermittent LLM use and have the budget and interest in playing games, you may want to make a game-oriented machine. Having a beefy GPU is useful both for running LLMs and playing games. That may differ radically from a build intended just to run LLMs. If you already have a desktop, just sticking a more-powerful GPU in may be the “best” route.

    Third, if performance is paramount, depending upon your application, it may be able to make use of multiple GPUs.

    Fourth, what applications you want to run may affect what hardware is acceptable (it sounds like you may have decided on Nvidia already). There’s the AMD-versus-Nvidia question, but also, many applications have minimum VRAM requirements – the size of the model imposes constraints. If the GPU doesn’t have enough VRAM to run what you want to run, you can’t run the model at all.

    Fifth, if you have not already, you may want to consider the possibility of not self-hosting at all, if you expect your use to be particularly intermittent and you have high hardware requirements. Something like vast.ai lets you rent hardware with beefy compute cards, which can be cheaper if your demands are intermittent, because the costs are spread across multiple users. If your use is a very occasional chatbot, you care a lot about performance, and you want to run very large models, you could use a system with an H100 for about $3/hour. An H100 costs about $30k and has 80GB of VRAM. If you want to run a chatbot one weekend a month for fun and you want to run a model that requires 80GB – an extreme case – renting is going to be a lot more economical than buying the same hardware yourself.

    Sixth, electricity costs where you are are going to be a factor. And if this system is going to be indoors and you live somewhere warm, you can multiply the cost for increased air conditioning load.



  • considers

    I think that the mount(1) command is probably calling the mount(2) system call, and it’s returning ENOENT (error 2). The mount(2) man page says: “ENOENT A pathname was empty or had a nonexistent component.”

    Hmm. So, I expect from the cyan color there that that “luks-d8…” thing is a symlink that points at some device file that LUKS creates when that luksOpen command runs.

    Maybe ls -l /dev/mapper/luks-d8... and see what it points at and whether that exists as a first step? It’s probably gonna be some device file somewhere in /dev.
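
    Something like this as a first step (keeping the truncated name from your output; substitute the full one):

    # See where the symlink points
    $ ls -l /dev/mapper/luks-d8...
    # Or ask cryptsetup directly what backs the mapping
    $ sudo cryptsetup status luks-d8...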



  • Okay, it looks like you posted this prior to me posting my comment above. I’m not familiar with this graphical utility, but I’m assuming that it means that your disk is visible (like, if you run ls /dev/sda, you see your disk).

    So what you’ve probably got is a functioning hard drive, with a functioning partition table, and on the first partition (/dev/sda1), a LUKS layer.

    I haven’t used LUKS, but it’s a block-level encryption layer for Linux. It’ll have some command to expose an unencrypted layer, and you can mount that.

    Let’s try walking through this in a terminal.

    From https://superuser.com/questions/1702871/how-to-do-cryptsetup-luksopen-and-mount-in-a-single-command, it looks like the way this works is that one runs:

    $ sudo cryptsetup luksOpen <encrypted-device-name> <unencrypted-block-device-name>
    

    Your encrypted partition name is presently at /dev/sda1. So try running:

    $ sudo cryptsetup luksOpen /dev/sda1 my-unencrypted
    

    That should prompt you for a password. If it can decrypt it, it looks like it creates a block device at /dev/mapper/my-unencrypted.

    You can then create a directory to use as a mountpoint:

     $ sudo mkdir -p /mnt/my-mount-point
    

    And try mounting it (assuming that it’s just a filesystem):

    $ sudo mount /dev/mapper/my-unencrypted /mnt/my-mount-point
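
    When you’re done with it, the reverse order (same made-up names as above):

    $ sudo umount /mnt/my-mount-point
    $ sudo cryptsetup luksClose my-unencrypted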