Canadian software engineer living in Europe.

  • 1 Post
  • 32 Comments
Joined 1 year ago
Cake day: June 7th, 2023


  • So my first impression is that the requirement to copy-paste that elaborate SQL to get the schema is clever but not sufficiently intuitive. Rather than saying “Run this query and paste the output”, you say “Run this script in your database” and print out a bunch of text that is not a query at all, but a one-liner Bash script that relies on the existence of pbcopy – something that (a) doesn’t exist on many default installs, (b) is a red flag for something that’s meant to be self-hosted (why am I talking to a pasteboard?), and (c) is totally unnecessary anyway.

    Instead, you could just say: “Run this query and paste the result in this box” and print out the raw SQL only. Leave it up to the user to figure out how they want to run it.

    Alternatively, you could do something like “Run this on your machine and copy/paste the output”:

    $ curl 'https://app.chartdb.io/superquery.sql' | psql --username USERNAME --host HOSTNAME DBNAME
    

    In the case of the cloud service, it’s also not clear if the data is being stored on the server or client side in LocalStorage. I would think that the latter would be preferable.


  • Generally, I agree. I think what I meant by the above is “how would you tell someone how to use the thing”. My favourite example is email vs email-with-PGP.

    How do you send an email?

    1. Open client
    2. Click “send new email”
    3. Type your email
    4. Click send

    How do you send a PGP-encrypted email?

    Let’s first talk about this thing called a “keyserver”. Once you know what that is, you’ll have to go out and find some keys to add to it. We’re not going to talk about styling your message 'cause that’s not something you should be able to do… etc. etc.


  • This is a common problem with Free software, and honestly I think it’s our biggest one: we build stuff for ourselves and stop there. If we want our stuff to be adopted (which, for things that rely on network effects, we do) then we need to pay more attention to usability.

    Here’s a suggestion for anyone starting a project they think they might share. Before you start writing any code, write the documentation. Then rewrite it from the perspective of the least tech-literate person you know who you’d still want to use the project. Only after you’ve worked out how easy it should be for this person to get started, then you can start writing the thing.


  • I’ve been self-hosting my blog for 21 years, if you can believe it, and much of that time it’s been on a server in my house. I’ve hosted it on everything from a dusty old 200MHz Pentium with 16MB of RAM (that’s MB, not GB!) to a shared web host (Webfaction), to a proper VPS (Hetzner), to a Raspberry Pi Kubernetes cluster, which is where it is now.

    The site is currently running Python/Django on a few Kubernetes pods on a few Raspberry Pi 4s, so the total power consumption is tiny, and since they’re fanless, it’s all very quiet in my office upstairs.

    In terms of safety, there’s always a risk, since you’re opening a port to the world for someone to talk directly to software running in your home. You can mitigate that by (a) keeping your software up to date, and (b) if you’re maintaining the software yourself (like I am), keeping on top of any dependencies that may have known exploits. Like, don’t just stand up an instance of WordPress and forget about it. That shit’s going to get compromised :-). You should also isolate the network from the rest of your LAN if you can. Docker sort of does this for you (though I hear it can be broken out of), but a proper demarcation between your laptop and a server on the open web is a good idea.
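
    If you want a concrete picture of the Docker part, here’s a minimal sketch (the image name and port are hypothetical stand-ins):

    $ docker network create blog-net
    $ docker run -d --name blog \
        --network blog-net \
        -p 443:8000 \
        --restart unless-stopped \
        my-blog-image:latest

    That’s not a full demarcation – the container can still reach outward – but it keeps the exposed service off your other containers’ networks, and only one port crosses the boundary.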

    The safest option is probably to use a static site generator like Hugo, since then your attack surface is limited to whatever you’re using to serve the static site (probably Nginx), whereas if you’re running a full-blown application that does publishing etc., that’s a lot of stuff that could have holes you don’t know about. You may also want to set up something like Cloudflare in front of your site to prevent a DoS attack or something from crippling your home internet, though that may be overkill.
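
    To give a sense of how small the static route is, the whole publish step can look like this (the server name and path are made up):

    $ hugo --minify
    $ rsync -av public/ myserver:/var/www/blog/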

    But yeah, the bandwidth requirements of running a blog are negligible, and the experience of running your own stuff on your own hardware in your own house is pretty great. I recommend it :-)


  • But there’s nothing stopping you from loading realistic (or even real) data into a system like this. They’re entirely different concepts. Indeed, I’ve loaded gigabytes of production data into systems similar to what I’m proposing here (taking all necessary precautions, of course). At one company, I even built a system that pulled production data into a developer-friendly snapshot while simultaneously pseudo-anonymising it so it could safely (for some value of ${safe}) be tinkered with in development.

    In fact, adhering to a system like this makes such things easier, since you don’t have to make any concessions to “this is how we do it in development”. You just pull a snapshot from the environment you want to work with and load it into your Compose session.
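
    As a sketch of what that can look like in practice (hostnames, credentials, and the service name are all hypothetical, assuming a Postgres service called db in your compose.yaml):

    $ pg_dump --host prod-db.example.com --username readonly mydb > snapshot.sql
    $ docker compose up -d db
    $ docker compose exec -T db psql --username postgres mydb < snapshot.sql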



  • I feel like you must have read an entirely different post, which I suppose is a failing in my writing.

    I would never condone baking secrets into a compose file, which is why the values in compose.yaml aren’t secrets. The idea is that your compose file is used exclusively for testing and development, where the data isn’t real, and the priority is easing development. When you deploy, you don’t use that compose file because your environment is populated by whatever you use in production (typically Kubernetes these days).

    You should not store your development database password in a .env file because it’s not a secret. The AWS keys listed in the compose file are meant to be exactly as they are there: XXX, because LocalStack doesn’t care what these values are, only that they exist.
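
    For illustration, this is the shape of the thing: a hypothetical fragment rather than the actual file from the post, with throwaway values that are safe to commit.

    $ cat compose.yaml
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: password  # dev-only value, not a secret
      localstack:
        image: localstack/localstack
        environment:
          AWS_ACCESS_KEY_ID: XXX       # LocalStack only checks that these exist
          AWS_SECRET_ACCESS_KEY: XXX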

    As for the CLI thing, again I think you’ve missed the point. The idea is to start from a position of “I’m building images” and therefore never have a “local app, (Django, sqlite)”, because SQLite shouldn’t be used unless that’s what’s used in production. There should be little to no difference between development and production, so scripting a bridge between the two doesn’t make a lot of sense to me.


  • Daniel Quinn@lemmy.ca OP to Python@programming.dev · Developing with Docker · 1 month ago

    I don’t mean to be snarky, but I feel like you didn’t actually read the post 'cause pretty much everything you’ve suggested is the opposite of what I was trying to say.

    • A CLI to make things simple sounds nice, but given that the whole idea is to harmonise the develop/test/deploy process, writing a whole program to hide the differences is counterproductive.
    • Config settings should be hard-coded into your docker-compose file and absolutely not stored in .json or .env files. The litmus test here is: “How many steps does it take to get this project running?” If it’s more than one (docker compose up), it’s too many.
    • Suggesting that one package Django into a single Lambda seems like an odd take on a post about Docker.


  • It’s a tough one, but there are a few options.

    For AWS, my favourite is LocalStack, a Docker image that you can stand up like any other service and then tell to emulate common AWS services: S3, Lambda, etc. They claim to support 80 different services, which is… nuts. They’ve got a strange licensing model though: last time I used it, they supported some of the more common services for free, but if you want more, you gotta pay… and they aren’t cheap. I don’t know if anything like this exists for Azure.
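
    To give a flavour of it (the bucket name is made up; 4566 is LocalStack’s standard edge port, and the credential values only need to exist):

    $ docker run -d --name localstack -p 4566:4566 localstack/localstack
    $ export AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXX AWS_DEFAULT_REGION=us-east-1
    $ aws --endpoint-url http://localhost:4566 s3 mb s3://dev-bucket
    $ aws --endpoint-url http://localhost:4566 s3 ls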

    The next-best choice is to use a stand-in. Many cloud services are just managed+branded Free software projects. RDS is either PostgreSQL or MySQL, ElastiCache is just Redis, etc. For these, you can just stand up a copy of the actual service, and since the APIs are identical, you should be fine. Where it gets tricky is when the cloud provider has messed with the API or added functionality that doesn’t exist elsewhere. SQS, for example, is kind of like RabbitMQ, but not quite.
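
    Where the APIs do match, standing up a stand-in is a one-liner (the versions and password here are arbitrary):

    $ docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:16  # RDS stand-in
    $ docker run -d -p 6379:6379 redis:7                                    # ElastiCache stand-in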

    Where the API has diverged, it’s a question of how your application interacts with the service. If it’s by way of an external package (say, Celery talking to SQS), then using RabbitMQ locally and SQS in production is probably fine, because it’s Celery that’s managing the distinction and not you. They’ve done the work of testing compatibility, so theoretically you don’t have to.
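
    Concretely, assuming your app reads its broker URL from the environment (the variable name here is your choice, not anything Celery mandates), the swap is pure configuration:

    # Development: RabbitMQ
    $ export CELERY_BROKER_URL='amqp://guest:guest@localhost:5672//'
    # Production: SQS, with credentials coming from IAM
    $ export CELERY_BROKER_URL='sqs://'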

    If however your application is the kind of thing that interacts with this service on a low level, opening a direct connection and speaking its protocol yourself, that’s probably not a good idea.

    That leaves the third option, which isn’t great, but I’ve done it and it’s not so bad: use the cloud service in development. Normally this is done by having separate services spun up per user, or even with a role account. When your app writes to an S3 bucket locally, it’s actually writing to a real bucket called companyname-username-projectbucket. With tools like Terraform, the fiddly process of setting all this up can be drastically simplified, so it’s not so bad – just make sure the developers are aware that their actions can incur real costs.
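
    Using the naming scheme above, the per-developer setup can be as simple as this (the bucket name and environment variable are hypothetical):

    $ aws s3 mb "s3://companyname-$(whoami)-projectbucket"
    $ export BUCKET_NAME="companyname-$(whoami)-projectbucket"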

    If none of the above are suitable, then it’s probably time to stub out the service and rely more heavily on a QA or staging environment that better reflects production.



  • Daniel Quinn@lemmy.ca to Selfhosted@lemmy.world · Port Forwarding/Redirecting · 2 months ago

    At the firewall level, port forwarding forwards traffic bound for one port to another machine on your network on an arbitrary port, but the UI built on top of it in your router may not expose that option.

    If it’s not an option in your Fritzbox, your options are:

    • Make the service running on your internal network listen on one of those high-number ports instead.
    • Introduce another machine on the network that also performs NAT between your router and your machine.
    • Try to access the underlying firewall in your router to tweak the rules manually. Some routers have an admin console accessible via telnet or SSH that may allow this.
    • Get a new router.

    The first and last options on this list are probably the best.
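
    If you do go down the third, manual route, the underlying rule on a Linux-based router looks roughly like this (the interface name, ports, and addresses are all made up):

    $ iptables -t nat -A PREROUTING -i wan0 -p tcp --dport 8443 \
        -j DNAT --to-destination 192.168.1.10:443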


  • It would be absolutely bizarre if you couldn’t connect with WireGuard port and WireGuard obfuscation both set to Automatic. Things to try first:

    1. Connect without your VPN and try to access a single website like theguardian.com.
    2. Once that’s working, enable your VPN and that should do it.
    3. If you still can’t get connected, try switching between different countries. Each country listed corresponds to an IP to which your machine will try to connect over a benign port like 443 – so blocking that sort of traffic would be mad unless the IP is explicitly blocked. Switching to a different country target therefore gives you a different IP every time; they’d have to know Mullvad’s whole list and block them all.

    If the above somehow doesn’t work, Mullvad offers support, through which you can get a temporary server IP override. You can enter that in the bottom portion of your app’s settings.


  • Daniel Quinn@lemmy.ca to Python@programming.dev · uv: Unified Python packaging · 3 months ago

    Having used it for work, I really don’t understand the appeal, especially when compared to tools like Poetry. uv still depends on requirements.txt, doesn’t streamline the publishing process, and contrary to the claims, it’s not a drop-in replacement for pip, as the command-line API is different.
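
    To illustrate the command-line difference (commands as they exist at the time of writing):

    $ pip install requests     # pip
    $ uv pip install requests  # uv: pip-compatible, but behind a subcommand
    $ poetry add requests      # Poetry: records the dependency in pyproject.toml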

    It’s really fast, which is nice if you’re working on a nightmare codebase with 3000 dependencies, but most of us aren’t, and Poetry is pretty damned fast.

    If uv offered some of what Poetry does for me, if at the very least we could finally do away with requirements.txt and adopt something more usable – baked into pyproject.toml of course – then I’d be sold. But this is just faster pip.