Now only have to wait for:
- pyenv release
- pycharm update (including terminal)
- 3rd party libraries
to catch up…
If you use github pages, you can create, deploy, and host static websites for free. The only cost, if you want your own URL, is a custom domain name.
You can use their default Jekyll static rendering engine and create the content using Markdown. And with github actions, all you need to do to update the content is write the markdown and push the change to the same repo. After a few minutes, the new content shows up.
Hugo can also be used, but it takes a few extra steps: https://gohugo.io/hosting-and-deployment/hosting-on-github/
You can also find ‘themes’ to customize the look and feel of the site, specific to the site generation tool.
If you want a lot of extra features, Docusaurus is pretty much as good as it gets, and you can set it up to push out to GH pages: https://docusaurus.io/docs/deployment
OK, you wanted a conversation… :-)
I did read the post, but I assumed it was the starting point of a system or mechanism, not the end-point. Wanting to just run “docker compose up” is fine, but there is more to developing and deploying to production (and continuing post-launch).
That's why I mentioned the CLI. It lets you go from a simple local app (Django on sqlite) to a Docker one (postgres, celery, redis, etc.), all the way out to the cloud (ECS/EKS/serverless lambda/RDS), without having to remember which commands do what or manage lots of separate docker-compose files.
I can see we are VERY far apart on how docker should be used in moving toward a production-ready system.
For one thing, recommending putting secrets inside docker-compose is an instantly disqualifying piece of advice. There’s a whole ‘secrets’ section of docker compose that is there to prevent people from inadvertently including those in cleartext and baking them into images: https://docs.docker.com/compose/how-tos/use-secrets/.
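To make it concrete, here's a minimal sketch of the app side (the helper name and the env-var fallback are my own convention, not from the docs). Compose mounts each `secrets:` entry as a file under /run/secrets/, so nothing sensitive ever lands in the compose file or the image:

```python
import os
from pathlib import Path

def read_secret(name: str, default: str | None = None) -> str | None:
    # Docker Compose mounts each `secrets:` entry at /run/secrets/<name>.
    secret_file = Path("/run/secrets") / name
    if secret_file.exists():
        return secret_file.read_text().strip()
    # Fallback for runs outside Compose (e.g. plain local development):
    # read the value from an environment variable instead.
    return os.environ.get(name.upper(), default)

db_password = read_secret("db_password")
```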
Github itself has a secret scanning mechanism to prevent leakage: https://docs.github.com/en/code-security/secret-scanning/introduction/about-secret-scanning. For gitlab, there's also Blackbox or HashiCorp Vault. Putting an AWS key/secret pair inside a repo can be VERY expensive and opens one up to legal liability if the account is misused. Repeated infractions could lead to AWS banning one's account.
I really recommend you take down that part of your post, instead of proliferating bad practices.
As for the rest, to each their own.
Good stuff.
A few things I’d change:
One Docker env variable and one line of code. Not a heavy lift, really. And next time I shell into the container I don’t need to remind everyone to activate the venv.
Creating a venv in Docker just for the hell of it is like creating a symlink to something that never changes or moves.
I can think of only two reasons to have a venv inside a container:
- If you're running third-party services inside a container, pinned to different Python versions.
- If you do local development without Docker, with scripts that activate the venv from inside the script. If you move those scripts inside the container, there's no venv anymore. But then it's easy to check an environment variable and skip activation when inside Docker (see the sketch below).
For most applications, it seems like an unnecessary extra step.
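For that last case, here's roughly what the check could look like (a sketch; RUNNING_IN_DOCKER and the .venv path are hypothetical conventions of mine, with the Dockerfile setting ENV RUNNING_IN_DOCKER=1):

```python
import os
import subprocess
import sys

# Hypothetical convention: the Dockerfile sets ENV RUNNING_IN_DOCKER=1,
# so scripts can tell a container apart from a local checkout.
IN_DOCKER = os.environ.get("RUNNING_IN_DOCKER") == "1"

def python_executable() -> str:
    # Inside the container there is no venv: use the system interpreter.
    # Locally, point at the venv's interpreter instead of activating it.
    if IN_DOCKER:
        return sys.executable
    return os.path.join(".venv", "bin", "python")

# Run a management command with whichever interpreter applies.
subprocess.run([python_executable(), "manage.py", "migrate"], check=True)
```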
Wait until AGI!
AGI: Yes.
Wait until the sentient robots!
Sentient robots: Yes.
Wait until biological…
Biologics: Glub, glub. Yes.
The impression I had was that subinterpreters could be launched from the Python side. This writeup implies you need to write a C extension to make use of them.
Will have to do more research…
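For what it's worth: CPython 3.12 does let you poke at subinterpreters from pure Python via a private, unstable module (PEP 734 proposes a public `interpreters` module to replace it). A rough sketch, with the caveat that these internal names may change or disappear:

```python
# Private, unstable CPython 3.12 internals; PEP 734 proposes a public API.
import _xxsubinterpreters as interpreters

interp_id = interpreters.create()  # launch a fresh subinterpreter
try:
    interpreters.run_string(interp_id, "print('hello from a subinterpreter')")
finally:
    interpreters.destroy(interp_id)  # tear it down when finished
```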
Nice writeup! It'd be good if, at some point, they covered how threading and multiprocessing impact opcode processing, especially when it comes to those globals.
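In the meantime, the stdlib `dis` module at least shows which opcodes are involved when a function touches a global; a quick sketch:

```python
import dis

x = 10

def f():
    return x + 1

# LOAD_GLOBAL is the opcode that fetches `x` from the module globals;
# those lookups are the part threading would have to contend with.
dis.dis(f)
```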
Great stuff. Waiting on the ‘W’ edition.
Thanks. After your note I went back and re-checked with my friend. I mixed up his comments with those from another friend with a different setup. Updated my original comment.
I have a closet full of old routers (including Linksys), extenders, and switches to be able to handle dead spots. They all sucked. Then I heard about mesh routers when they first came out. Tried two, saw that they worked well, and got a third one. A few months later, a new ISP showed up in our neighborhood with unmetered Gig fiber and I happily drop-kicked Comcast to the curb. It was gratifying that the fiber connection came with a single mesh device of the same brand I already had. Since then, I’ve upgraded to the next-gen routers, and gotten a few smaller ‘wall-wart’ units for extending the range outdoors.
I don't really have to fuss with configurations like I had to before. It's amazing how much of a time drain it was to go screw around with settings whenever a new device came in that didn't work, or to replace a router when one died. I haven't had to do anything in years. Every once in a while I set up a DHCP reservation, but that's it. The firmware updates auto-install while everyone's asleep, and I get pretty decent bandwidth in places that used to have constant dropoffs. When I switched the actual routers out for the new gen, the whole thing took 10 minutes, and the network was only down for maybe 2 minutes while the new ones booted up. No end devices had to be modified or restarted.
Where the fiber comes in, there's a single router node with two Ethernet ports. One goes to the fiber ONT, the other to a 10-port gig switch that feeds the rest of the wired setup. Elsewhere, the farthest mesh unit has no incoming physical connection, but it has a small wired switch hanging off it for the other wired devices near there. I didn't have to change any router configuration to make this work; just plugged it all in. Common devices go on the main network, and janky IOT devices (and visitors) go on the guest network.
For external access for self-hosting, you can take a domain name and set up a free Cloudflare tunnel to access your in-home services remotely. Pay Cloudflare a fee and you get extra rules-based access control. The router also has a premium service where it comes with a family bundle of security software. One other thing I like is that the mobile app sends a notification whenever a new device joins the network, so if I see one I don’t recognize, I can block them. Hasn’t happened yet, but if it does, I’ll know to go rotate the wifi passwords.
Anyway, highly recommend mesh routers. I happened to get eeros (before they were acquired), but there are a few other brands around. Some people don't like that Amazon bought eero, but they appear to have been left to run as an independent outfit. It's been pretty solid so far.
P.S. A friend with a more complicated setup than mine got Ubiquitis. It’s anecdotal, but he recently asked about switching away and I told him pretty much what I’ve written here. YMMV.
Edit: checked back with friend. He said he was very happy with his Ubiquiti gear. I mixed up his review from years ago with another friend’s networking setup.
You’ve been using cheap cables.
Next step up is a JCAT: https://audiobacon.net/2019/11/02/the-jcat-signature-lan-a-1000-ethernet-cable/amp/
/s if not obvious.
Thanks for the tips. Will try again next week. It’s exciting tech, especially when modeling custom objects.
I tried it with a stock image of red, see-through dice against a solid backdrop. It removed the background, then generated a 2D cutout surface with the dice images projected on the front. Didn’t have time to try any more, but this one result was no good.
I tried to hand-solder a Hirose 0.35 mm-pitch connector onto a custom OSHPark board once. Let's just say it was a humbling experience. Thanks to a generous friend, I learned the value of solder paste stencils and owning a home reflow oven.
Respect to whoever can do this sort of thing, but life is too damn short and my eyesight and hands don’t need the abuse.
How you solder those without dropping a blob and causing a short is a mystery.
There’s also: https://www.modular.com/max/mojo
All 418 error codes. We good.
I’m feeling attacked.