Hacker News | skydhash's comments

Most people who don't know how to program have no real desire to code with AI (except to pose as a SWE and get that sweet money). Most of them don't even like computers. Yes, they do some tasks on them, but they're not that attached to the tool and its capabilities.

The other 80% is spent on the following:

- A lot of research: library documentation, best practices, sample solutions, code history,... That could easily be 60% of the time. Even when you're familiar with the project, you're always checking other parts of the codebase and your notes.

- Communication. Most projects involve a team, and there's a dependency graph between everyone's work. There may also be a project manager dictating things, and support wanting your input on some cases.

- Thinking. Code is just the written version of a solution; the solution needs to exist first. So you spend a lot of time wrangling with the problem and trying to balance tradeoffs. This also involves a lot of the other points.

Coding is a breeze compared to the rest. And if you have set up a good environment, it's even enjoyable.


That's very much an echo chamber you find yourself in. I'm far from any technological center, and the main uses of LLMs for people here are the web search widget, spell checking, and generating letters. Also kids cheating on their homework.

They can't tell you (not everyone is eloquent), but they sure know why. Struggling to put something into words is not the same as not knowing.

The DOM is very ill-suited for most UI. Too complex and lots of missing features. It’s a whole bag of unneeded code and the resulting UI doesn’t fit anywhere.

> The DOM is very ill-suited for most UI. Too complex and lots of missing features

Can you expand on this? I'm not seeing it myself. The DOM, with HTML+CSS, is very flexible and easily covers most UI. Most UI is some kind of data display: lists, trees, tables, forms.

The need for JS might be what you're complaining about. I think we might be stuck with it as a UI control language forever.


The DOM (and CSS) is primarily built for documents and forms, even with recent additions like Flexbox and Grid layouts. It is closer to typesetting tools like troff, LaTeX, or Texinfo than to any UI engine you can think of. And some of its distinctions are not needed, like the difference between <i>, <a>, <span>, <strong>,...

Also, in most GUI frameworks there's a distinction between widgets like label, button, menu, checkbox,... and containers that do layout management. And neither set has many elements. This is why React Native ships a very sparse component library. With a simpler implementation you get a simpler rendering path, and the developer has fewer elements to deal with.

Some frameworks also ship ready-made implementations of really useful widgets: trees, grids, tables, lists, and other dynamic things. You can find libraries for those on the web, but the web's implementation of scrollable containers is janky.


But floating point errors manifest in different ways. Most people only care about 2 to 4 decimals, which even the cheapest calculators handle well over a good number of consecutive ordinary computations. Anyone who cares about better precision will choose a better calculator. So floating point error is remediable.
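A minimal Python sketch of this point: the raw binary representation error is always there, but rounding to the few decimals most people care about hides it.

```python
# Classic binary floating point error: 0.1 has no exact base-2 representation.
total = 0.1 + 0.2
print(total)             # 0.30000000000000004

# Rounding to the 2-4 decimals most people care about hides the error.
print(round(total, 4))   # 0.3

# Even after several consecutive operations, the drift stays far below 4 decimals.
tenth = sum([0.1] * 10)
print(tenth)             # 0.9999999999999999
print(round(tenth, 4))   # 1.0
```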

We have already proven that the computing mechanisms from which those languages derive their semantics are all equivalent to the Turing machine. So C and Prolog differ only in notation, not in the results they can compute.
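A toy illustration of "different notation, same result", written in Python rather than actual C and Prolog: a declarative recursive definition and an imperative loop are two notations for the same function.

```python
def fact_recursive(n: int) -> int:
    # Declarative, Prolog-flavored notation: define the result by recursion.
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_iterative(n: int) -> int:
    # Imperative, C-flavored notation: mutate an accumulator in a loop.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Same function, two notations: the results agree everywhere.
assert all(fact_recursive(n) == fact_iterative(n) for n in range(15))
```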

Pretty obvious when you think that neural networks operate on numbers and very complex formulas (built by combining many simple formulas with various weights). You can map a lot of things to numbers (words, colors, music notes,…), but that does not mean the NN is going to provide useful results.

Everything is obvious if you ignore enough of the details/problem space. I’ll read the paper rather than rely on my own thought experiments and assumptions.

Current civilization is very complex. And it's also fragile in some parts. When you build systems around instant communication and the on-schedule availability of goods built on the other side of the world, they're very easy to disrupt.

> 4. People will eventually get the hang of using AI to do the optimum amount of delegation such that they still retain what is necessary and delegate what is not necessary. People who don't do this optimally will get outcompeted

Then they’ll be at the mercy of the online service’s availability and of the company itself. There are also the non-deterministic results. I can delegate my understanding of some problems to a library, a piece of software, or a framework, because their operation is deterministic. Not so with LLMs.
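A minimal sketch of that distinction (`slugify` and `fake_llm` are made-up examples): a library call returns the same output for the same input, every time, while a sampling-based generator offers no such contract.

```python
import random

def slugify(title: str) -> str:
    # A deterministic "library" call: same input, same output, every time.
    return title.lower().replace(" ", "-")

def fake_llm(prompt: str) -> str:
    # A stand-in for a sampling-based generator: same input, varying output.
    return prompt + random.choice([" indeed.", " perhaps.", " who knows."])

# Determinism lets you delegate understanding: the behavior is a fixed contract.
assert slugify("Hello World") == slugify("Hello World") == "hello-world"

# No such contract here: repeated calls can disagree with each other.
outputs = {fake_llm("The answer is") for _ in range(20)}
print(len(outputs))  # may be 1, 2, or 3 -- not guaranteed
```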


I have been able to produce 20x the amount of useful output, both in my day job and in my free time, using a popular coding agent in 2026. Part of me is uncomfortable at seeing my hard-won knowledge of how to write English, code, and design systems partly commoditized. Part of me is amazed and grateful to be in this timeline. I am now learning and building things I only dreamed about for years. The sky is the limit.

When technology progressed enough to allow for

1. outsourcing and offshoring (non-deterministic, easy to disrupt)

2. cloud computing (mercy of the online service availability)

we had the same dilemma.

Outsource exactly what you think is not critical to the business. Offshore enough that you gain good talent across the globe. Use cloud computing so that your company doesn't spend time solving problems that have already been solved. Assess which skills are required and which aren't: an e-commerce company doesn't need deep expertise in Linux and Postgres.

Companies that do this well outcompete companies that obsess over details that are not core to their value proposition. This is how modern startups work: finding the critical balance between buying products externally and building only the crucial skills internally.


Students are given student-level problems not because someone wants the results, but so they can learn how solving problems works. Solving those easy problems with an LLM doesn't help anyone.
