Apologies - I should have been clearer. I was not referring to Rumelhart et al., but to work that points to "optimizing the thrusts of the Apollo spaceships" using backprop.
One thing AI has been great for recently is searching for obscure or indirect references like this: things one step removed from whatever you're actually searching for, or tip-of-the-tongue searches where you've forgotten a phrase or know you're using the wrong wording.
It's cool that you can trace the work of these rocket scientists all the way to state-of-the-art AI.
I don't know if there is one particular paper exactly, but Ben Recht has a discussion of the relationship between techniques in optimal control that became prominent in the 1960s and backpropagation:
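For concreteness, here is the usual sketch of that connection, in standard discrete-time optimal control notation (my summary, not a quote from Recht). Given dynamics $x_{t+1} = f_t(x_t, u_t)$ and total cost $J = \sum_t \ell_t(x_t, u_t)$, the costate (adjoint) equations run backwards in time:

$$
\lambda_t = \left(\frac{\partial f_t}{\partial x_t}\right)^{\top} \lambda_{t+1} + \frac{\partial \ell_t}{\partial x_t},
\qquad
\frac{\partial J}{\partial u_t} = \left(\frac{\partial f_t}{\partial u_t}\right)^{\top} \lambda_{t+1} + \frac{\partial \ell_t}{\partial u_t}.
$$

Read $f_t$ as a network layer, $u_t$ as its weights, and $\lambda_t = \partial J / \partial x_t$ as the gradient flowing backwards, and this is exactly reverse-mode differentiation, i.e. backprop.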
I'm considering moving my reverse proxying to Traefik for my self-hosted stuff. Unlike the article's author, I'm running containerized workloads with Docker Compose, currently fronted by Caddy with the excellent caddy-docker-proxy plugin. What that gets me currently:
- Reverse proxying, with Docker labels for configuration (see the sketch after this list). New workloads are picked up automatically, though I do need to attach them to Caddy's network bridge.
- TLS certificates
- Automatic DNS configuration (using yet another plugin, caddy-dynamicdns), so I don't have to worry too much about losing access to my stuff if my ISP decides to hand me a different IP address (which hasn't happened yet)
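A minimal sketch of what those labels look like, for anyone curious (hostname, image, and network name are placeholders, not my actual config):

```yaml
# docker-compose.yml for a workload behind caddy-docker-proxy (sketch)
services:
  whoami:
    image: traefik/whoami
    networks:
      - caddy                                  # must join Caddy's network bridge
    labels:
      caddy: whoami.example.com                # site address Caddy will serve
      caddy.reverse_proxy: "{{upstreams 80}}"  # proxy to this container on port 80

networks:
  caddy:
    external: true                             # the network the Caddy container is on
```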
There are a few things about my current setup I'm not entirely happy with:
- Any new/restarting workload makes Caddy restart entirely, resulting in temporary loss of access to my stuff. Caddy doesn't hand off existing connections to a new instance, unfortunately.
- Using wildcard certs isn't as simple as it could/should be. Since I don't want every workload advertised to the world through certificate transparency logs, I use wildcard certs, which means I can't use the simple Caddyfile syntax I otherwise would with one cert per hostname. I know this is being worked on in Caddy, but still.
Anyway, I've used Traefik in k8s environments before, and it's been fairly pleasant, so I think I'll give it a go for my personal stuff too!
PS: Don't let this comment discourage you from trying Caddy, it's actually really good!
I use Caddy for single-purpose hosts and the like, but I would 100% throw Traefik at the problems you're describing, and I do: it's my k8s cluster ingress, and it runs in my dev environments to enable using `localtest.me` with hostnames.
It's worth kicking the tires on. Both are great at different things.
I use (rootless) Docker Compose + Traefik, precisely because wildcard certs were really painless with it. I do use my own DNS server, though, with RFC2136 dynamic DNS updates for the Let's Encrypt DNS challenge.
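Roughly what that looks like, for reference (a sketch with placeholder domain and resolver names, not my actual config; the RFC2136_* variables are the ones documented by lego, Traefik's ACME library):

```yaml
# Traefik with a Let's Encrypt wildcard cert via the RFC2136 DNS challenge (sketch)
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=me@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.dnschallenge.provider=rfc2136
    environment:
      RFC2136_NAMESERVER: ns.example.com:53   # DNS server accepting dynamic updates
      RFC2136_TSIG_KEY: traefik               # TSIG key name
      RFC2136_TSIG_SECRET: ${TSIG_SECRET}     # TSIG shared secret
      RFC2136_TSIG_ALGORITHM: hmac-sha256.    # algorithm name in dotted form
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  whoami:
    image: traefik/whoami
    labels:
      traefik.http.routers.whoami.rule: "Host(`whoami.example.com`)"
      traefik.http.routers.whoami.tls.certresolver: le
      traefik.http.routers.whoami.tls.domains[0].main: example.com
      traefik.http.routers.whoami.tls.domains[0].sans: "*.example.com"
```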
No plugins needed, really. I have basically one Ansible playbook to set all this up on a VM, including templating out the compose files, and another playbook that can remove everything from the server again (besides data/mounts). For backups I use restic with a custom script that can back up files, various DBs, etc. to multiple locations.
In the past I deployed k3s, but I realized that was too much and too complicated for my self-hosted stuff. I just want to deploy things quickly and not have to handle the certs myself.
I have not used Caddy. I use Traefik, and it discovers configuration from Docker labels and handles TLS certificates with automatic renewal. Not sure about dynamic DNS - I don't use it from Traefik. Adding and removing containers does not need a restart, AFAIR.
Hmm, I'll have to take a better look at my setup then, because it's a daily occurrence for me. Either I'm "holding it wrong" (which is admittedly possible, perhaps even likely given the comments here), or I have a ticket to open soon-ish.
Those are giant limitations. This is the first I've heard of any reverse proxy that has to restart and drop connections to update its configuration. Graceful reloads are usually the first, most fundamental part of any such server's design.
That is absolutely not the case. Caddy config reloads are graceful and lightweight. I have no idea why this person is stopping their server instead of reloading the config.
Caddy doesn't have to restart; I think it's related to the specifics of their setup. The simple/easy path that gets a lot of people into Caddy is: run Caddy, job done. The next level is: give Caddy a simple configuration file and reload it with `caddy reload --config /etc/caddy/Caddyfile`. After that, you use the REST API to make changes to the server while it is running, which uses a JSON configuration definition instead of a Caddyfile, so it ends up being a jump for users.
> After that, you use the REST API to make changes to the server while it is running, which uses a JSON configuration definition instead of a Caddyfile, so it ends up being a jump for users.
You can, in fact, use any configuration format with the API as long as Caddy has its adapter compiled in; you just have to use the correct value in the `Content-Type` header. For instance, you can use Caddyfile format by setting `Content-Type` to `text/caddyfile`. This is documented[0].
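For example (a sketch, assuming the default admin endpoint on `localhost:2019`): `curl localhost:2019/load -H "Content-Type: text/caddyfile" --data-binary @Caddyfile` should swap in the new config gracefully, without dropping connections.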
Unfortunately, I tried the demo and even though it’s supposed to work in Chrome, it didn’t for me. Nor with Safari. Have you gotten it working with favicons anywhere?
Isn't ArgoCD more of a GitOps tool? The pretty UI is mostly secondary to its main purpose for me, which is to keep the declarative "truth" in source control and have ArgoCD be the control loop that keeps the cluster in sync with that truth. Accidentally nuked a namespace? No worries, ArgoCD (or an alternative like Flux) has got your back!
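For illustration, the whole contract fits in one small manifest (a sketch; repo URL, path, and names are placeholders):

```yaml
# ArgoCD Application: git is the source of truth, ArgoCD is the control loop
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/me/infra.git
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources that disappear from git
      selfHeal: true   # revert manual drift, e.g. a nuked namespace's contents
```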
Exactly this. The critical difference is who deploys your resources and when/how they're deployed - a human, or a machine.
Having git be the source of truth of your production environment is a blessing and a curse, and which one you get is directly related to the maturity of your deployment system.
If you have low confidence in your deployments (as in, you don't deploy very often and don't have full e2e tests & monitoring), GitOps is nightmarishly scary compared to classic, battle-tested Ops-team CLI scripts.
I don’t accidentally do anything in production. Sorry. Nor do my teams or any developers who have access to my clouds. These kinds of failures don’t happen in my world. You will not have a mutable environment outside of data storage. Period. Nuke all you want, it will repair and redeploy itself.
Is this discontent coming from the embedded player on the linked page? Because yes, that's a Spotify player, but in case you missed it: there are three links at the top of the page, for Google, Apple, and Spotify. And if it's an old-school RSS feed you want, here you go: https://cowenconvos.libsyn.com/rss
Thank you. Yes, I played around with the widget looking for the feed link and couldn't find it; I should have searched around the page for the other links. So the podcast isn't a Spotify exclusive - I had assumed the bad embedded player was the result of being forced to use it, rather than some other reason.
You should look at Hetzner [0]. They offer unmetered bandwidth on their dedicated servers with a 1 Gbps uplink (I personally run a Tor relay on one, averaging a sustained 15+ Mbps over the past year), likewise for their "Storage Share" offering, and 20 TB/month at 1 Gbps on their cloud VMs.
I'm not affiliated with them, just a happy customer.