I mean for one thing your garden variety LLM has been substantially trained on Django. That means less context for it to bootstrap every time you summon it.
Just like rolling your own shitty homebrew framework is a bad idea because only you understand it, the same is probably true with LLMs. Sure, they’ll scan the bejesus out of your codebase every time they need to make a change and probably figure it out eventually… but that is just a poor use of limited context. With something mainstream, the LLM already has a lot about the universe in its training. Not to mention an ecosystem of plugins, skills, MCP servers, wizbango-hashers, and claberdashers. All there for the LLM to use instead of wasting tons of time, tokens, and money perpetually relearning your oddball, one-off, rat-infested homebrew framework.
There is much more engineering and testing (and probably AI training) in Python and a web browser than there is in Django. Same with, e.g., bash and Linux vs. Ansible. That is what I mean by 2010s-era frameworks: JSON/YAML easy wrappers with opinionated defaults and consistent interfaces.
AI has no problem going from programming language -> runtime without human-convenient middleware. So I am NOT implying you should create your own Django on the way to creating your CRUD app. I think you can make a CRUD app just by listing all the features you want. Including, if you really want an in-band administration feature like phpMyAdmin or the Django admin, you could have AI generate something that pipes any system command to the web app.
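A minimal sketch of what such an in-band admin endpoint might look like, using only Python's stdlib `http.server`. Everything here (the `run_command` helper, the `ALLOWED` whitelist, the port) is an invented illustration, not an existing API:

```python
# Hypothetical sketch: pipe whitelisted system commands to a web endpoint.
# All names (run_command, ALLOWED, AdminHandler) are made up for illustration.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Only expose commands you explicitly allow -- never raw user input.
ALLOWED = {"uptime": ["uptime"], "disk": ["df", "-h"]}

def run_command(cmd: list) -> str:
    """Run a whitelisted system command and capture its output."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

class AdminHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?cmd=disk pipes `df -h` output back as plain text
        query = parse_qs(urlparse(self.path).query)
        name = query.get("cmd", [""])[0]
        if name not in ALLOWED:
            self.send_error(404, "unknown command")
            return
        body = run_command(ALLOWED[name]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8080), AdminHandler).serve_forever()
```

The whitelist is the whole point: piping *any* command to the web is the part an LLM will happily generate and the part you'd want to lock down first.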
Suit yourself, really. Maybe there's more training data for CRUD apps in Python than in C, but I don't think it's too hard to implement the fundamentals of a web app in any language if you're also using a web server.
Most web apps aren't that popular and therefore don't use that much computation anyway, so there's a point of diminishing returns on making your CRUD app as efficient as scientifically possible. Some prefer a managed runtime so that a bug causes, e.g., Python to crash instead of the consequences of a bug in native code, but that can be mitigated easily enough as well.
But the LLM will figure it out, so why not take free speed?
BTW, if we're getting rid of the web framework and letting the LLM write specialized code for the various CRUD operations, why not also get rid of Postgres/MySQL/Redis and let the LLM write specialized code for reading, writing, and querying the various business objects?
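For what it's worth, the querying half of that is the easy part to imagine. A hedged sketch of "specialized code instead of SQL", with an invented `orders` schema and function name:

```python
# Illustrative only: a hand-written query over in-memory business objects
# standing in for SELECT * FROM orders WHERE customer = ? AND open
# ORDER BY total DESC. The data and names are made up.
from operator import itemgetter

ORDERS = [
    {"id": 1, "customer": "acme",   "total": 120.0, "open": True},
    {"id": 2, "customer": "acme",   "total": 80.0,  "open": True},
    {"id": 3, "customer": "globex", "total": 250.0, "open": True},
    {"id": 4, "customer": "acme",   "total": 60.0,  "open": False},
]

def open_orders_by_total(orders, customer):
    """Filter, then sort descending by total -- the WHERE and ORDER BY."""
    matches = [o for o in orders if o["customer"] == customer and o["open"]]
    return sorted(matches, key=itemgetter("total"), reverse=True)
```

The hard parts a real database gives you for free are concurrency, crash recovery, and indexes, which is presumably why the question is rhetorical.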
Once all interactions appear instantaneous to a human (which is usually possible even with Python et al.), reducing CPU usage doesn't matter in the 99.9% of cases where the app never gets popular enough for the savings in running it to add up to even the cost of an LLM subscription.
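Back-of-envelope math for that claim, with every figure an assumption pulled out of the air:

```python
# Made-up numbers: how much compute an unpopular CRUD app actually burns.
requests_per_day = 5_000          # generous for a niche app
cpu_seconds_per_request = 0.05    # a slow Python handler
vcpu_hours_per_month = requests_per_day * cpu_seconds_per_request * 30 / 3600
cost_per_vcpu_hour = 0.04         # ballpark cloud pricing
compute_cost = vcpu_hours_per_month * cost_per_vcpu_hour

# Even a 10x-faster rewrite only recovers 90% of a tiny number:
savings = compute_cost * 0.9
```

With these assumptions the monthly compute bill is pennies, so the optimization savings never approach a typical LLM subscription price.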
Also, in most instances CRUD apps could run with their own data structures and filesystem data persistence. Not to say it's a good idea, but I'd wager you could get on the front page with "Show HN: I built a todo app that's 10x cheaper to run on AWS than Django".
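A sketch of what that filesystem persistence could look like: a todo store backed by one JSON file. The class and file layout are invented for illustration, assuming single-process access:

```python
# Illustrative only: CRUD over a single JSON file instead of a database.
import json
import os

class TodoStore:
    def __init__(self, path):
        self.path = path

    def _load(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def _save(self, data):
        # Write-then-rename so a crash mid-write can't corrupt the file.
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(data, f)
        os.replace(tmp, self.path)

    def create(self, todo_id, text):
        data = self._load()
        data[todo_id] = {"text": text, "done": False}
        self._save(data)

    def read(self, todo_id):
        return self._load().get(todo_id)

    def update(self, todo_id, **fields):
        data = self._load()
        data[todo_id].update(fields)
        self._save(data)

    def delete(self, todo_id):
        data = self._load()
        data.pop(todo_id, None)
        self._save(data)
```

No locking, no indexes, no concurrent writers: exactly the trade-offs that make it cheap on AWS and a bad idea past one user.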
In reality, SQL databases, along with programming languages, OS utilities, web servers, cryptography, and probably a few other technologies, are basically bedrock technologies that LLMs build upon and that have durable value, unlike dev tools / frameworks / simplified-human-interface wrapper projects such as Django, Ansible, and the thousands of similar projects.
The more LLMs *CAN* code, the more worthless human-oriented coding tools and concerns become.
I mean docs are largely written for an LLM-in-a-harness. That’s how it goes! If the LLM bootstraps with the right understanding of the universe and knows how to quickly build specific context flavors… life is good.
Assuming you mean crap like “school book bans”, climate change denialism, or some dude coal rolling… You realize that is actually bait targeted at you specifically, right? It wouldn’t work as bait if it were shit you agreed with! It’s actually left-wing rage bait!
If you were immersed in the “right wing echo chamber” your flavor of rage bait would be about a school introducing a neutral bathroom policy, or some college student struggling to define what a woman is. Every Christmas you’d see articles about cities banning Christmas lights in town hall and Starbucks no longer using Christmas themed cups. It’s all fucking made up nonsense. No real human acts the way these algorithms portray us.
Honestly even ‘right-wing’ and ‘left-wing’ are part of the trick. Real people don’t exist on a binary axis. We’re all a weird mess of values and experiences that don’t fit neatly into two boxes. But the algorithm needs two teams, because you can’t sell outrage without an enemy.
The first step to detox is seeing everyone as human, not as a contrived label.
I actually mean the second kind of stuff - I don't know why it fed it to me except that the family connections I have on social media are all on FB and they tend to lean more conservative/evangelical.
Trolls do as well. Very often if a comment is "bad", it comes from a relatively new account. Then it gets banned and a new account is created. Technically it's ban evasion, but dang doesn't really want to change anything at this point.
That sort of rage bait is literally targeted to rile up people sitting on the opposite side of the kind of people watching that other media site that rhymes with socks. It’s all fake bullshit algorithmically optimized to divide.
Everybody thinks their tribe is immune to this sort of stuff but it isn’t. It’s all the same nonsense packaged for different echo chambers.
At the end of the day, everybody is human. It isn’t us vs them, it’s just us.
The number of non-technical people in my orbit that could successfully pull up Claude Code and one-shot a basic todo app is zero. They couldn’t do it before and won’t be able to now.
You go to ChatGPT and say "produce a detailed prompt that will create a functioning todo app", then put that output into Claude Code, and you now have a todo app.
This is still a stumbling block for a lot of people. Plenty of people could've found an answer to a problem they had if they had just googled it, but they never did. Or they did, but they googled something weird and gave up. AI use is absolutely going to be similar to that.
Maybe I’m biased working in insurance software, but I don’t get the feeling much programming happens where the code can be completely stochastically generated, never reviewed, and still be okay with users/customers/governments/etc.
Even if all sandboxing is done right, programs will be depended on to store data correctly and to show correct outputs.
Insurance is complicated, not frequently discussed online, and all code depends on a ton of domain knowledge and proprietary information.
I'm in a similar domain; the AI is like a very energetic intern. Getting a good result requires a prompt so clear and detailed that I could practically write an expression to turn it into code myself. Even then, after a little back and forth it loses the plot and starts producing gibberish.
But in simpler domains, or ones with lots of examples online (for instance, I had an image recognition problem that looked a lot like a typical machine learning contest), it really can rattle stuff off in seconds that would take a mid-level engineer weeks or months, and often at higher quality.
You don't need to draw the line between tech experts and the tech-naive. Plenty of people have the capability but not the time or discipline to execute such a thing by hand.
3D printing is something I think about. LLMs do their best work against text, and 3D printers consume G-code. I’ve had Sonnet spit out perfectly good single-layer test prints. Obviously it won’t have the context window to hold much more G-code, BUT…
If there were a text-based file format for models, it could generate those and you could hand that to the slicer. I’ve never looked: are STL files text or binary? Or those 3MF files?
If Gemini can generate a good-looking pelican-on-a-bicycle SVG, it can probably help design some fairly useful functional parts, given a good design language it was trained on.
And honestly if the slicer itself could be driven via CLI, you could in theory do the entire workflow right to the printer.
It makes me wonder if we are going to see a real push toward text-based file formats. Markdown is the lingua franca of output for LLMs. Same with JSON, CSV, etc. Things that are easy to “git diff” are also easy for LLMs…
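The single-layer test print mentioned above is small enough to sketch directly. A hedged example of Python emitting G-code for one square perimeter; the feed rate and extrusion-per-mm values are illustrative guesses, not numbers tuned for any real printer:

```python
# Illustrative only: G-code for a single-layer square perimeter.
# e_per_mm and F values are placeholder guesses, not calibrated settings.
def square_layer(size_mm=20.0, z=0.2, e_per_mm=0.033):
    corners = [(0.0, 0.0), (size_mm, 0.0), (size_mm, size_mm),
               (0.0, size_mm), (0.0, 0.0)]
    lines = [
        "G21 ; millimeters",
        "G90 ; absolute positioning",
        f"G0 X{corners[0][0]:.2f} Y{corners[0][1]:.2f} Z{z:.2f} ; travel to start",
    ]
    e = 0.0
    prev = corners[0]
    for x, y in corners[1:]:
        # All moves here are axis-aligned, so Manhattan distance is exact.
        dist = abs(x - prev[0]) + abs(y - prev[1])
        e += dist * e_per_mm
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f} F1200 ; extrude edge")
        prev = (x, y)
    return "\n".join(lines)
```

Seven lines of G-code for one square makes the context-window problem obvious: a real multi-layer print is hundreds of thousands of such lines, which is exactly why generating the *model* and letting a slicer emit the G-code is the more plausible pipeline.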
There is a text-based file format for models. It's called OpenSCAD. It's also much more information-dense than a mesh format like STL: in OpenSCAD you describe the curve, while in a mesh file like STL you explicitly state every element of it.
It's just gimped to the point that you can basically only use it for hobbyist projects; anything reasonably professional-looking uses STEP-compatible files, and that is much more complex to try to emulate and get right. STEP is a bit different: it's more like a mesh in that it contains the final geometry, but in BRep, which is pretty close to machining grade, while OpenSCAD is more like what you're asking about, a textual recipe for generating curves that you pass into an engine that turns it into the actual geometry. It's just that OpenSCAD is so wholly insufficient to express what professional designs need that it never gets used in the professional world.
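The compactness gap is easy to make concrete. In OpenSCAD a disc is a one-liner like `cylinder(r=10, h=1);`, while an ASCII STL must enumerate every triangle. A sketch (illustrative only, emitting just the flat top face as a triangle fan with n segments):

```python
# Illustrative only: the explicit-triangle bloat of ASCII STL for a shape
# OpenSCAD describes in one line. Emits a fan over the top face of a disc.
import math

def circle_face_stl(radius=10.0, n=64, z=1.0):
    pts = [(radius * math.cos(2 * math.pi * i / n),
            radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    out = ["solid face"]
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        out += [
            "  facet normal 0 0 1",
            "    outer loop",
            f"      vertex 0 0 {z}",
            f"      vertex {x1:.4f} {y1:.4f} {z}",
            f"      vertex {x2:.4f} {y2:.4f} {z}",
            "    endloop",
            "  endfacet",
        ]
    out.append("endsolid face")
    return "\n".join(out)
```

One face at 64 segments is already hundreds of lines, which is the point: the OpenSCAD recipe is what an LLM can plausibly write, and the mesh is what an engine should expand it into.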
Nothing has changed really…