Hacker News | rented_mule's comments

I saw hints of this ~20 years ago. I was working on software for a consumer device. For manufacturing it, we chose Foxconn. One non-negotiable point from their end was that they had to write some of the software on the device. They didn't care which part or how small.

The device had a physical keyboard with a microcontroller that managed it, and they ended up writing the code that ran on that micro, as it was largely independent of the code we were writing and easy for us to test. The first versions were not great, but they got better quickly.

As we talked amongst ourselves about why they were so emphatic about this, it became clear to us that they were taking a long term view of the importance of moving into the intellectual property side of things. Dustin points out that, in some areas, they are there.


Something not unlike this happened to me when moving some batch processing code from C++ to Python 1.4 (this was 1997). The batch started finishing about 10x faster. We refused to believe it at first and started looking to make sure the work was actually being done. It was.

The port had been done in a weekend just to see if we could use Python in production. The C++ code had taken a few months to write. The port was pretty direct, function for function. It was even line for line where language and library differences didn't offer an easier way.

A couple of us worked together for a day to find the reason for the speedup. Just looking at the code didn't give us any clues, so we started profiling both versions. We found out that the port had accidentally fixed a previously unknown bug in some code that built and compared cache keys. After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.

We immediately started moving the rest of our back end to Python. Most things were slower, but not by much because most of our back end was i/o bound. We soon found out that we could make algorithmic improvements so much more quickly, so a lot of the slowest things got a lot faster than they had ever been. And, most importantly, we (the software developers) got quite a bit faster.


My experience is the exact opposite.

This was particularly true for one of the projects I've worked with in the past, where Python was chosen as the main language for a monitoring service.

In short, it proved itself to be a disaster: just the Python process collecting and parsing the metrics of all programs consumed 30-40% of the processing power of the lower end boxes.

In the end, the project went ahead for a while more, and we had to do all sorts of mitigations to get the performance impact to be less of an issue.

We did consider replacing it all by a few open source tools written in C and some glue code, the initial prototype used few MBs instead of dozens (or even hundreds) of MBs of memory, while barely registering any CPU load, but in the end it was deemed a waste of time when the whole project was terminated.


Ditto for me. I had gotten so used to building web backends in Ruby and running at 700MB minimum. When I finally got around to writing a rust backend, it registered in the metrics as 0MB, so I thought for sure the application had crashed.

Turns out the metrics just rounded to the nearest 5MB


> but in the end it was deemed a waste of time when the whole project was terminated.

The main lesson of the story. Just pick Python and move fast, kids. It doesn’t matter how fast your software is if nobody uses it.


This is it. Getting something on the table for stakeholders to look at trumps anything else.


It would have taken the same time, if not less, given the extra time for mitigations, trying different optimization techniques, runtimes, etc.

One of the reasons the project was killed was that we couldn't port it to our line of low powered devices without a full rewrite in C.

Please note this was more than a decade ago, well before Rust was the language it is today. I wouldn't choose anything else besides Rust today, since it gives the best of both worlds: a truly high-level language with low-level resource control.


I would agree except for the python part. Sure, you gotta move fast, but if you survive a year you still gotta move fast, and I’ve never seen a python code base that was still coherent after a year. Expert pythonistas will claim, truthfully, that they have such a code base but the same can be said of expert rustaceans. I would stick to typescript or even Java. It will still be a shitshow after a year but not quite as fucked as python.


https://github.com/polarsource/polar/tree/main/server

If you're writing FastAPI (and you should be if you're doing a greenfield REST API project in Python in 2026), just s/copy/steal/ what those guys are doing and you'll be fine.


You can use Go and get the best of both worlds.


One of the slowest, most inefficient code bases I've ever worked on was in Go.

The mentality was "the language is fast, so as long as it compiles we're good"... Yeah that worked out about as well as you'd expect.


But that has nothing to do with the language.


Absolutely, and it's a good language when used properly. This was more of a problem with the hype surrounding it.


> Just pick Python and move fast, kids. It doesn’t matter how fast your software is if nobody uses it.

The reason nobody uses your software could be that it is too slow. As an example, if you write a video encoder or decoder, using pure Python might work for postage-stamp sized video because today's hardware is insanely fast, but even so, it likely will be easier to get the same speed in a language that's better suited to the task.


Learning that it’s too slow takes users.


They were the users and it was too slow for them so they switched to Python. Not C++ of course; what they meant was "the libraries we wrote in C++ were so buggy and slow that using them was slower than if we just used Python."


In some cases, common sense of developers can do that, too.


And this is why pretty much all commercial software is terrible and runs slower than the equivalent 20 years ago despite incredible advance in hardware.


For lots of software there wasn't an equivalent 20 years ago because there wasn't a language that would let developers explore semi-specified domains fast enough to create something useful. Unless it was visual basic, but we can't use that, because what would all the UX people be for?


Python itself is 30 years old. What are you talking about?

Almost every mainstream language (except Go, Swift, Kotlin and Rust) is more than 30 years old, by the way.


if input() == "dynamic scope?": defined = "happyhappy" print(defined)

I'd rather not use python. The ick gets me every time.


It killed my formatting

    if input() == "dynamic scope?":
        defined = "happyhappy"
    print(defined)


Terrible advice. Really.

Most of the business I do is rewriting old working Python prototypes in C++. Python sucks, is slow, and leaks. The new C++ code does not leak, meets our performance requirements, processes items in 8 hours instead of 36, and so on.

We are also rewriting all the old Python UI in TypeScript. That has not gone as easily yet.

And when there are still old simple Python helpers, I rewrite them in Perl, because that will continue to run in the coming years, unlike Python.


Another anecdote: the team couldn't improve concurrency reliably in Python, so they rewrote the service in about a month (ten years ago) in Go, and everything ran about 20x faster.


> just the Python process collecting and parsing the metrics of all programs consumed 30-40% of the processing power of the lower end boxes.

Just write the parsing loop in something faster like C or Rust, instead of the whole thing.


He struggled with the algorithms, you struggled with the runtime.

You are not the same.


> After identifying the small misbehaving function, we had to study the C++ code pretty hard to even understand what the problem was. I don't remember the exact nature of the bug, but I do remember thinking that particular type of bug would be hard to express in Python, and that's exactly why it was accidentally fixed.

Pure speculation, but I would guess this has something to do with a copy constructor getting invoked in a place you wouldn't guess, that ends up in a critical path.


Given the context, I'm thinking bad cache keys resulting in spurious cache misses, where the keys are built in some low-level way. Cache misses almost certainly have a bigger asymptotic impact than extra copies, unless that copy constructor is really heavy.


I'm just remembering a performance issue I heard of eons ago where a sorting function comparison callback inadvertently allocated memory. It made sorting very slow. Someone said in a meeting that sorting was slow, and we all had a laugh about "shouldn't have used the bubble sort!" But it was the key comparison doing something stupid.


good ol' shallow-vs-deep copy


My guess would be bad hashing, resulting in too many collisions.


One advantage of Python is that it is so slow that if you choose the wrong algorithm or data structure, it soon becomes obvious. And complicated stuff is exactly where I find the LLMs struggle. So I make a first version in Python, and only when I am happy with the results and the speed feels reasonable compared to the problem complexity do I ask Claude Code to port the critical parts to Rust.


The last part is really interesting. It feels like the whole world will soon become Python/JS because that's what LLMs are good at. Very few people will then take the pain of optimizing it.


The LLMs are pretty good at optimising.

Not because they are brilliant, but because they are pretty good at throwing pretty much all known techniques at a problem. And they also don't tire of profiling and running experiments.


If there's one thing LLMs are really, really good at, it's having a target and then hitting / improving upon that target.

If you have a comprehensive test suite or a realistic benchmark, saying "make tests pass" or "make benchmark go up" works wonders.

LLMs are really good at knowing patterns, we still need programmers to know which pattern to apply when. We'll soon reach a point where you'll be able to say "X is slow, do autoresearch on X" and X will just magically get faster.

The reason we can't yet isn't because LLMs are stupid, it's because autoresearch is a relatively new (last month or so) concept and hasn't yet entered into LLM pretraining corpora. LLMs can already do this, you just need to be a little bit more explicit in explaining exactly what you need them to do.


> The reason we can't yet isn't because LLMs are stupid, it's because autoresearch is a relatively new (last month or so) concept [...]

I'm not so sure. People have been doing stuff like (hyper) parameter search for ages. And profiling and trying out lots of things systematically has been the go-to approach for performance optimisation since forever; making an LLM instead of a human do that is the obvious thing to try?

The concept of 'autoresearch' might bring with it some interesting and useful new wrinkles, but on a fundamental level it's not rocket science.


I've not tried this yet, but doesn't it use up loads of tokens? How do you do it efficiently?


It uses a lot of minutes on your computer(s), since you need to run lots and lots of experiments.

I'm not sure if it's particularly token hungry.


Not just profiling, but decoding protocols too.

Recently I tried Codex/GPT-5 with updating a Bluetooth library for batteries, and it was able to start capturing Bluetooth packets and comparing them with the library's other models. It was indefatigable. I didn't even know it was so easy to capture BLE packets.


Could you ask the LLM to do a write-up on the process and post it? (Or you can write a blog post by hand. Like a caveman. ;)


I find writing by hand is the best. LLMs spit out such linked-in writing that I don’t even want to read it. ;)

But that would be a good blog post and I got some travel coming up. But honestly it was just “oh here’s a BLE python library, see if we can get it running”. I prefer Codex because it seems to do well for guiding the LLMs for complete engineering changes.


Wireshark would do that. But you need to understand low-level tools, because in case of some BGP attack, all you LLM developers will be fired on the spot.

Flakey internet connection: most of today's 'soy devs' would be useless. Even more so with boosted-up chatbots.


> Flakey internet connection: most of current 'soy devs' would be useless.

We used to make the same jokes about Googling Stackoverflow since before many users on this site were born.


And it's partially true. Offline documentation should be mandatory everywhere. Networks could be degraded tomorrow in the current second Cold War we are living through. And, yes, states and governments have private backbones for the military/academia/healthcare and so on, but the rest is screwed.

When the blackout hit, the only protocols which worked fine were IRC, Gopher and Gemini. I could resort to using IRC->Bitlbee to chat with different people around the world, read news, and proxy web sites over Gemini (the proto, not the shitty AI). But for the rest, the average folk? Half an hour to fetch a non-working page.

That was with a newspaper; go figure with the rest. And today tons of projects use sites with tons of JS and unnecessary trackers and data. In case of a small BGP attack, most projects done with LLMs will be damned, because their authors won't even have experience coding without LLMs. Without docs it's game over.

Also, tons of languages pull dependencies. Linux distros spanning tons of DVDs can survive offline with Python, but good luck deploying NPM, Python, and the rest of the projects to different OSes. If you are lucky, you can resort to the bundled Go dependencies in Debian and cross-compile, and the same with MinGW cross-compiling against Windows with some Win32, SDL, DX support, but that's it.

With Qt Creator and MinGW, well, yes, you could build something reliable enough (being cross-platform), and the same with Lazarus/Free Pascal, but forget about current projects downloading 20,000 dependencies.


Heh, my preferred language is Nim which has good docs for the stdlib. It also does static binaries and runs on esp32 like a dream. I’m not worried about some internet downtime, but I also enjoy what I can guide LLMs to build for me.

The BLE battery syncing was a nice-to-have for an IoT prototype. Not something I wanted to spend hours digging through wireshark to figure out but fine for some LLM hacking.


> Offline documentation should be mandatory everywhere. Networks can be degraded tomorrow in the current 2nd Cold War we are living.

Eh? It's all about trade-offs. If our infrastructure is degraded enough that the internet goes down, I have more important things to do than work through a few more Jira tickets.

Especially since a lot of the work me and a lot of other folks are doing is delivered to customers via the internet anyway.


Not in my experience. They're pretty good at getting average performance which is often better than most programmers seem to be willing to aim for.


What kind of 'average' is this, if it's better than what seems to be typical?


> JS because thats what LLMs are good at.

That has not been my experience. JS/TS requires the most hand-holding, by far. LLMs are no doubt assumed to be good at JS due to the sheer amount of training data, but a lot of those inputs are of really poor quality, and even among the high quality inputs there isn't a whole lot of consistency in how they are written. That seems to trip up the LLMs. If anything, LLMs might finally be what breaks the JS camel's back. Although browser dominance still makes that unlikely.

> Very few people will then take the pain of optimizing it

Today's LLMs rarely take the initiative to write benchmarks, but if you ask it will and then will iterate on optimizing using the benchmark results as feedback. It works fairly well. There is a conceivable near future where LLMs or LLM tools will start doing this automatically.


My experience is from trying to get the React Native example to work with OpenUI. I felt Sonnet/Opus was much better at figuring out what's wrong with the current React implementation and fixing it than it was with React Native.

But yes, I see what you mean, and I think people are trying to solve it with skills and harnesses at the application layer, but it's not there yet.


Nope. The world runs on code written in C and C++. Including Python itself. There is a reason why there are literally millions of C/C++ programmers out there working on C/C++ code every day.

> We soon found out that we could make algorithmic improvements so much more quickly

It's true that writing code in C doesn't automatically make it faster.

For example, string manipulation. 0-terminated strings (the default in C) are, frankly, an abomination. String processing code is a tangle of strlen, strcpy, strncpy, strcat, all of which require repeated passes over the string looking for the 0. (Even worse, reloading the string into the cache just to find its length makes things even slower.)

Worse is the problem that, in order to slice a string, you have to malloc some memory and copy the string. And then carefully manage the lifetime of that slice.

The fix is simple - use length-delimited strings. D relies on them to great effect. You can do them in C, but you get no succor from the language. I've proposed a simple enhancement for C to make them work https://www.digitalmars.com/articles/C-biggest-mistake.html but nobody in the C world has any interest in it (which baffles me, it is so simple!).

Another source of slowdown in C is one I've discovered over the years: C is not a plastic language, it is a brittle one. The first algorithm you select for a C project gets so welded into it that it cannot be changed without great difficulty. (And we all know that algorithms are the key to speed, not coding details.) Why isn't C plastic?

It's because one cannot switch back and forth between a reference type and a value type without extensively rewriting every use of it. For example:

    struct S { int a; }
    int foo(struct S s) { return s.a; }
    int bar(struct S *s) { return s->a; }
If you want to switch between reference and value, you've got to go through all your code swapping . and ->. It's just too tedious and never happens. In D:

    struct S { int a; }
    int foo(S s) { return s.a; }
    int bar(S *s) { return s.a; }
I discovered while working on D that there is no reason for the C and C++ -> operator to even exist, the . operator covers both bases!


What an honor to have Walter Bright respond to my comment! I used Zortech C++ extensively in the late 1980s and early 1990s on OS/2 and Windows. That beautiful black and purple cube-shaped box sat prominently on my bookshelf for many years. Thanks Walter!

Well clearly there is use for these - how do you distinguish what you are accessing in smart-pointer-like types.


You'd still use the "." operator. Value, reference, or smart pointer use the same syntax. This means you can refactor them easily.


This is the difference between scripting and programming. If you use C++ as a scripting language you're gonna have a bad time. Of course a scripting language is faster for scripting! That doesn't mean you go full Graham and throw away real programming languages, it just means you aren't writing systems software.

The usual strategy is to write a script, then if it's slow see how you could design a program that would do better.

The usual strategy in the real world is to copy paste thousands of lines of C++ code until someone comes along and writes a proper direct solution to the problem.

Of course there are ideas on how to fix this: writing your own scripting libraries (stb), packages (go/rust/ts), metaprogramming (lisp/jai). As for bugs, those are a function of how you choose to write code; the standard way of writing shell is bug prone, the standard way of writing Python less so, and not using overloading and going wider in C++ generally helps.


I use C++ instead of so called 'scripting' languages all the time. I have zero problems doing that and it is lightning fast.

Fun story! Performance is often highly unintuitive, and even counterintuitive (e.g. going from C++ to Python). Very much an art as well as a science.

Crazy how many stories like this I’ve heard of how doing performance work helped people uncover bugs and/or hidden assumptions about their systems.


It doesn't come off as unintuitive by my read. They had a bug that led to a massive performance regression. Rewriting the code didn't have that bug so it led to a performance improvement.

They found that they had fewer bugs in Python so they continued with it.


I think a lot of people (especially those who are only peripherally involved in development, like management) don't really consider performance regressions at all when thinking about how to get software to go faster.

Meanwhile my experience has been that whenever there has been a performance issue severe enough to actually matter, it's often been the result of some kind of performance bug, not so much language, runtime, or even algorithm choices for that matter.

Hence whenever the topic of how to improve performance comes up, I always, always insist that we profile first.


My experience has been that performance bugs show up in lots of places and I'm very lucky when it's just a bug. The far more painful performance issues are language and runtime limitations.

But, of course, profiling is always step one.


I ported Python to C++ one time and it ran 10x faster with 10x less memory usage, with no architectural changes


[dead]


This comment comes from a bot account. One of the more clever ones I’ve seen that avoids some of the usual tells, but the comment history taken together exposes it.

I hit the flag button on the comment and suggest others do too.


Huh? How am I a bot account?

Thanks, Programming History Facts Bot

I was not actually sure this one was a bot, despite LLM-isms and, sadly, being new. But you can look at the comment history and see.


Definitely not a bot, I am however super interested in programming history!

I don't think the better software part is playing out


There’s a lot of really great software out there right now, and a lot that’s terrible and I think powerful abstractions enable both.


You're thinking of the programs in low-level langs that survived their higher-level-lang competitors. If you plot the programs on your machine by age, how does the low quartile compare on reliability between programs written in each group?


Survivorship bias is exactly right.

The C and assembly programs we still use are the ones that were good enough to last. The thousands that weren't are gone.

Nobody counts the programs that were never finished because the language made them too hard to write in the first place.


Until at some point, in a language like Python, all the things that let you write software faster start to slow you down: the lack of static typing, typing errors, spending time figuring out whether the foo method works with ducks or quacks or foovars, or whether the latest refactoring silently broke it because now you need bazzes instead of ducks. Yeah.


[flagged]


AI account


I suspect that you used highly optimized algorithms written for python, like the vector algorithms in numpy? You will struggle to write better code, at least I would.


Python 1.4 would be the mid-to-late '90s, long before NumPy and vector algorithms were available.

I suspect it’s more likely to be something like passing std::string by value, not realising that would copy the string every time, especially given the statement that the mistake would be hard to express in Python.


Everything is new to the uninitiated. :P


> We immediately started moving the rest of our back end to Python. Most things were slower, but not by much because most of our back end was i/o bound.

Would be kind of cool if, e.g., Python or Ruby could be as fast as C or C++.

I wonder if this could be possible, assuming we could modify both to achieve that as outcome. But without having a language that would be like C or C++. Right now there is a strange divide between "scripting" languages and compiled ones.


@dang this is an ai slop account, check his other comments


As a kid, I had the hardest time understanding what a computer was. At 9 years old in 1977, I had a friend whose dad was a computer programmer. The friend tried to explain to me what a computer was, but I just couldn't understand it. We even took a field trip to a National Weather Service office where they talked a lot about their computers and showed us one that filled a room, but I still didn't understand. None of the explanations made it sound like anything other than magic happening in a big set of boxes.

At 12 years old in 1980, I bought Atari BASIC Programming (it wasn't yet called the 2600). Minutes after plugging it in, the idea of a computer clicked for me. That quickly led to getting bored with that game system and convincing my parents to buy me a "real" computer. Eventually that led to a long career as a software developer. Thanks for opening that door for me Atari BASIC Programming!


Wow! I'm so glad this actually did open the door for someone. I remember nerding out over how lame it was, but I was fortunate enough to have a TRS-80 Model III in my household. And I didn't fully grasp the hardware limitations then.

Someone really should DIY a real Atari "VCS," as in adequate bank-switched RAM and something not unlike TRS-80 BASIC, achieved without outlandish hardware for the time...

OMG! Someone pulled it off back in the day! I had no idea. So much for my retirement project idea:

https://en.wikipedia.org/wiki/CompuMate

That's ok, I have a lot of years to go before I need one LOL


My story is similar. I loved playing video games and all, but after I wrote my first program, I became obsessed with computers. The infinite canvas for interactive human experience and problem solving felt out of this world.


Randomly, I spent an afternoon with a team of loom engineers long ago. In 1989, I took a month-long trip to the USSR. Trips for Americans back then were guided / chaperoned by the Soviet government, with the clear intention of showing off what the Soviet system was capable of. To see their manufacturing prowess, we spent an entire afternoon touring an automated bed-sheet factory and talking with the team that designed and maintained the machines. I don't remember much other than the intense noise and the large number of machines with white cotton sheets coming out.

All the sheets we saw in that factory, and in our hotels, were noticeably thicker and stiffer than American sheets, somewhere between American sheets and denim. When we asked about that, they seemed to feel sorry that we only had thin, flimsy sheets.


If you know about this ahead of time, and can be a contractor, then forming a corporation that is its own tax entity gives you this in the US. I did exactly this in the mid-90s when 9 months of work was suddenly going to yield 4-5x what I had made the previous year, then I would lose the position immediately after, and I knew I would need a break at that point. I paid myself the regular salary whether the corporation was making a lot or making nothing. The corporation had to pay taxes on the profit the first year, but it could deduct the losses against those profits in later years. The same thing happened 5 years later. All my work for a decade was through the corporation I had formed. You can also do things like have a company car (with the company deducting insurance, gas, maintenance, etc.), rent your home office to the company, and much more. US tax laws are much better for corporations and their owners than for individuals.


> Basically thought crime

Let's go in the opposite direction...

>> Amanda was 10 years old. she went into the bathroom and had sex with a 30 year old man.

If the story was real, should Amanda be banned from publishing her own account of her experience later in life? Should she be able to write about the impact it had on her? I think she should have that freedom.

What if she was 17 years 364 days old and the adult was 18 years 1 day old, assuming the age of consent is 18, and she writes about it being a good experience for her? 16 years old and 20? 4 and 40? Those are increasingly grotesque to me, but I don't know where to draw the line.

Wait, have I crossed the line in what I've written in this reply? Have we all?


I have no idea about Australia, but in USA it's pretty well established it is a crime to publish CSAM of yourself. Children are prosecuted for sending their own provocative images to others. I can only imagine the punishment would be worse if they distributed them after they were an adult.

So I would think hypothetically if the words were CSAM, the fact they are the victim publishing their own account would be immaterial to their defense.


IANAL, but written materials about sexual abuse don't seem to be illegal in the US. For recent-ish publications, see My Dark Vanessa by Russell and Tampa by Nutting.

(I liked the former which took a thoughtful approach whereas I didn't finish the latter because it just felt like erotica for pedophiles which isn't what I was looking for.)


I think one additional objection to AI generated depictions is that photo-realistic AI generated content gives plausible deniability to those who create/possess real life CSAM.


And it would make authorities waste time finding the real CSAM to investigate, or mistakenly investigating AI CSAM (under the hypothetical that AI CSAM is decriminalized).


Oh wow, what a bad memory. This exact thing happened in a building I lived in several years ago, a couple of floors above me. It looked like waterfalls outside our windows and water was rushing in under the baseboards. All while every fire alarm in the building was going off and fire truck sirens were blaring outside. Understandably, the fire department would not turn off the water until they had been to every floor to check for fire. On the upside, it's impressive how much water can be delivered by fire sprinklers.

Closer to the topic, the building's management company tried to come after me (a renter) for the expense of the restoration people who were brought in to rip out my drywall and carpet so mold wouldn't form. Maybe they figured tenants were an easier target than the contractor's insurance? Oh, and the management company were the ones who selected and hired the contractors. I had to get very aggressive, with plenty of threats of legal action, to get them to back down. That was fairly easy to do, as my state's laws specify liability rules for flooding in multi-tenant buildings. They never did do repairs while I was there; I moved out when my lease expired nearly a year later, as they were trying to raise the rent, with drywall still missing.


Oh man, multi-tenant housing sounds like the worst case scenario for this sort of thing. I’m glad you were able to avoid any liability, trying to pin liability for rebuilding a unit on a tenant is insane.

And yeah, the volume of water a fire pump can move is astounding. Electrical code requires the fire pump to be wired so that it can run at its locked rotor amp rating without tripping overcurrent protection and it’s usually tapped directly off the utility transformer separately from the rest of the electrical service. There’s also a smaller jockey pump that maintains water pressure in the system so that when the main pump turns on, there’s no lag with water coming out. The pump motor will keep spinning even if there’s a dead short if it’s fused right above locked rotor amps, since replacing a motor is cheaper than replacing a fully burned out structure and keeping the water flowing allows as many people to escape as possible. The feeder has to be encased in concrete or it has to be fire-resistant cable.


A fascinating takeaway from that video for me... If you take the US land that is dedicated to growing corn for ethanol that is put in gasoline, and replace all the corn on that land with solar panels, how much energy would it produce? Twice today's total electrical generation in the US, from all sources. And that's in the corn belt, which is far from ideal for solar. It would be billions of panels, but it's a pretty interesting perspective on the questions about the land use requirements of solar.
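A rough back-of-envelope check, using my own round numbers rather than the video's: ~30 million acres of US corn go to ethanol, utility-scale solar needs very roughly 2.8 acres per GWh/year, and US generation is about 4,200 TWh/year.

```python
# All inputs are assumed round figures for a sanity check, not sourced data.
corn_ethanol_acres = 30e6     # US corn acreage grown for ethanol (approx.)
acres_per_gwh_year = 2.8      # utility-scale solar land use (rough estimate)
us_generation_twh = 4200      # annual US electricity generation, all sources

solar_twh = corn_ethanol_acres / acres_per_gwh_year / 1000  # GWh -> TWh
print(solar_twh / us_generation_twh)  # roughly 2.5x current generation
```

It lands in the same ballpark as the video's "twice total generation" claim, even before accounting for the corn belt's mediocre insolation.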


Another genuine question: I wonder how that would change the climate in those areas. I live in Iowa and "corn sweat" is a thing that never fails to make several weeks of summer completely unbearable.


It shows that bioenergy is very land inefficient.

There was a book about renewable energy in Britain from about 17 years ago, "Sustainable Energy -- Without the Hot Air", that tried to make the argument that renewables could not power Britain because there wasn't enough land. But if you drilled down, that conclusion was due to the use of biofuels.


The significant problem with that book is that it commits the primary energy fallacy. It sees that we need X GWh of chemical energy from fuels and says we have to replace it with X GWh of electricity. That is of course completely wrong, since it ignores the efficiencies of the processes and conflates two different things simply because they are measured in the same units.
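As a concrete illustration, with made-up but typical efficiency figures:

```python
# Hypothetical illustration of the primary-energy fallacy.
fuel_gwh = 100.0        # chemical energy burned, e.g. gasoline in ICE cars
ice_efficiency = 0.25   # fuel -> useful motion, typical combustion engine
ev_efficiency = 0.85    # electricity -> useful motion, typical EV drivetrain

useful_gwh = fuel_gwh * ice_efficiency
electricity_needed = useful_gwh / ev_efficiency
print(electricity_needed)  # ~29.4 GWh of electricity replaces 100 GWh of fuel
```

Counting the full 100 GWh as electricity to be replaced overstates the requirement by more than 3x.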


Genuine question: how much energy, minerals, transportation, manufacturing, etc. go into making the panels? And how much will the panels make back, percentage-wise, over their lifetime versus the cost to make, transport, and install them?

Corn kind of reproduces itself every year (if you don't get the GMO kind), so you only need natural resources to keep growing it, right? Water, sunlight, and labor?


He goes over that in the video. It's long, but very much worth watching.


> Corn kind of reproduces itself every year (If you don't get the GMO kind), so you only need natural resources to continue to grow it right? Water, sunlight and labor?

At industrial scale, it has a huge petro-chemical fertiliser input.


Total energy input to agriculture in the US is less than 2% of total energy consumption. So "huge" there has to be taken in context.

All the energy inputs to agriculture could be replaced with non-fossil inputs. Fertilizer in particular needs hydrogen to make ammonia, but that can be produced from non-fossil sources.


Germany uses less land for energy crops and is further north, but still could satisfy most of its electricity needs if it replaced the plants with solar panels.


20 years ago, I was working on a consumer device, doing indexing and searching of books. The indexer had about 1 MB of RAM available, and had to work in the background on a very slow, single core CPU, without the user noticing any slowdown. A lot of the optimization work involved trying to get algorithmic complexity and memory use closer to a function of the distinct words in books than to a function of the total words in books. Typical novels have on the order of 10,000 distinct words and 100,000 total words.

If you're indexing numbers, which we did, this book has little difference between total words and distinct words because it has so many distinct numbers in it. It ended up being a regular stress test to make sure our approach to capping memory use was working. But, because it constantly triggered that approach to capping memory usage, it took far longer to index than more typical books, including many that were much larger.


Over 30 years ago, I was working on presentation software that shipped with a bunch of (vector) clip art. I remember using the (raster) graphics from the CIA World Factbook as a base to create vector (WMF) versions of the flags of various 'new' countries of the time (following the breakup of Yugoslavia) that were missing from the set our art vendor provided to us.

The Croatia flag in particular took quite a while to trace/draw (by hand).


Bit confused, what's this to do with the CIA World Factbook?


> this book has little difference between total words and distinct words because it has so many distinct numbers in it. It ended up being a regular stress test to make sure our approach to capping memory use was working


So the factbook is an actual book too? That's what I missed, I thought it was a webpage so this was referring to some other post.


The CIA Factbook, publicly available since 1971, has existed longer than the internet.

https://en.wikipedia.org/wiki/The_World_Factbook


That's a great logo. What a travesty.

