TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad practice?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but has only a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course running off hard drives is slower than running off an SSD, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate compute and storage this way? Isn’t that pretty much what a data center does, too?
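
For reference, the volumes are wired up roughly like the sketch below, using the NFS support built into Docker’s local volume driver (the NAS address 192.168.1.20, the export path, and the names are placeholders, not my real ones):

    # Create a named volume backed by an NFS export on the NAS
    # (address, export path, and volume name are examples only)
    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.20,rw,nfsvers=4.1 \
      --opt device=:/volume1/docker/appdata \
      appdata

    # Containers then mount it like any other named volume
    docker run -d --name someapp -v appdata:/data someimage

Compose supports the same driver_opts, so the volume can also be declared in a docker-compose.yml instead of on the command line.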

  • a_fancy_kiwi@lemmy.world · 13 points · 1 year ago

    Would it be bad practice?

    No, it’s fine. Especially for people who self-host. Use what you have available to you as best you can.

    Why would it be bad practice?

    Depends on your use case. A gigabit connection and hard drives are fine for something like a personal media server or simple file storage, but if you want to edit video or play games from the NAS, you might look into upgrading to SSDs and getting a faster connection to the PC.
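
    If you want a rough idea of where the bottleneck actually is, it helps to measure the network and the disks separately. A minimal sketch, assuming iperf3 is available on both machines and using an example address and mount path:

        # On the NAS (or any box on the same switch): start an iperf3 server
        iperf3 -s

        # On the homelab server: measure raw TCP throughput over the gigabit link
        iperf3 -c 192.168.1.20

        # Then test sequential write speed through the NFS mount itself
        # (conv=fdatasync makes dd flush to the server before reporting a speed)
        dd if=/dev/zero of=/mnt/nas/testfile bs=1M count=1024 conv=fdatasync

    If iperf3 reports close to line rate (roughly 940 Mbit/s) but the dd figure is much lower, the drives or the NFS settings are the limit rather than the network.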

    • SoNick@readit.buzz · 2 points · 1 year ago

      @a_fancy_kiwi Exactly! In a business environment where you need to squeeze every possible penny and every second of downtime is money lost, OP is introducing additional potential points of failure.

      In a homelab where downtime is just an inconvenience? Go for it! Try it for yourself and see how you like it!

      @PlutoniumAcid