The idea was to transition from coal to natural gas while using solar and wind to reduce fuel consumption, thereby significantly reducing CO2 emissions. Any claims of hydrogen being burned were either lies to the public to get the gas plants built despite the non-green optics or lies to investors as part of a fraud scheme.
Hydrogen burning could have a place in an all-renewable grid: it could be much more economical than batteries for very long duration storage. Covering the last 5-10% of the grid with renewables becomes much cheaper if something like hydrogen (or other e-fuels) is available.
A competitor that might be even better is very long duration high temperature thermal storage, if capex minimization is the priority.
> it could be much more economical for very long duration storage than using batteries
Yes, but that's not the only option you have. With the awful round-trip efficiency of burning hydrogen (roughly a third of the energy survives the power-to-hydrogen-to-power cycle) you'd need to build a massive amount of additional wind and solar - which in turn means you'll also have additional capacity available during cloudy, wind-calm days, which means you'll need to burn substantially less hydrogen to generate power.
This leads to the irony that building the power-generation infrastructure for generating enough hydrogen means you won't even need to bother with the hydrogen part: you're basically just building enough solar that their overcast supply is enough to meet the average demand. As a bonus, you've now got a massive oversupply during sunny winter days and even more during summer days, so most of the year electricity will essentially be free.
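A back-of-the-envelope sketch of that overbuild argument; the ~35% power-to-hydrogen-to-power efficiency and the 20% overcast output fraction are illustrative assumptions, not measurements:

```python
# Does overbuilding enough to make hydrogen already cover overcast demand directly?
# All numbers are illustrative assumptions.

demand = 1.0                 # average demand, normalized
h2_round_trip = 0.35         # assumed power -> H2 -> power round-trip efficiency
overcast_output_frac = 0.20  # assumed overcast-day output vs. average output

# To deliver `demand` through hydrogen, you must generate demand / 0.35:
overbuild = demand / h2_round_trip
print(f"overbuild factor: {overbuild:.1f}x")

# On an overcast day, that overbuilt fleet directly produces:
overcast_supply = overbuild * overcast_output_frac
print(f"direct overcast supply: {overcast_supply:.2f} of demand")

# A large share of demand is met directly, shrinking how much
# hydrogen actually has to be burned.
```

Under these assumed numbers the overbuilt fleet covers over half of demand directly even on an overcast day, which is the core of the argument; whether it fully eliminates the hydrogen step depends on the actual capacity-factor figures.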
Efficiency is not very important for very long duration storage. What's important is minimizing cost, which is dominated by capex, not by the cost of the energy used to charge the storage system. Paying more to charge it can make sense if that greatly reduces capex.
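A sketch of why capex dominates for rarely cycled storage; all prices and cycle counts here are made-up round numbers for illustration, not quotes:

```python
# Cost per delivered kWh ~= amortized capex per cycle + charging cost / efficiency.
# Illustrative, assumed numbers only.

def cost_per_kwh_delivered(capex_per_kwh, lifetime_cycles,
                           charge_price_per_kwh, round_trip_eff):
    amortized_capex = capex_per_kwh / lifetime_cycles
    charging = charge_price_per_kwh / round_trip_eff
    return amortized_capex + charging

# Battery: efficient and cheap to charge, but expensive capacity.
# Seasonal storage only cycles ~once a year, so assume ~20 lifetime cycles.
battery = cost_per_kwh_delivered(capex_per_kwh=150, lifetime_cycles=20,
                                 charge_price_per_kwh=0.02, round_trip_eff=0.90)

# Hydrogen (e.g. in salt caverns): terrible efficiency, very cheap capacity.
hydrogen = cost_per_kwh_delivered(capex_per_kwh=5, lifetime_cycles=20,
                                  charge_price_per_kwh=0.02, round_trip_eff=0.35)

print(f"battery:  ${battery:.2f}/kWh delivered")   # capex term dominates
print(f"hydrogen: ${hydrogen:.2f}/kWh delivered")  # cheap despite 35% efficiency
```

With so few cycles, the capex term swamps the charging term, so paying nearly three times as much energy to charge the hydrogen store barely registers in the delivered cost.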
Servers with ECC generally report zero correctable memory errors until a chip starts failing, at which point there are increasingly many. The average server therefore experiences zero cosmic-ray-related memory errors during its lifetime, despite having many times more memory than 256MB.
I don't hate journald because it's not plaintext, I hate it because it's worse than plaintext.
Somehow journald manages to provide a database which is 40x slower to query than running grep on a compressed text file.
I'm all in favour of storing logs in an indexed structured format but journald ain't it.
Journald is an odd one. I don't think it being a binary log/database makes sense. If you have a tiny operation with a single server, the binary database doesn't really make sense: plain text is just easier and faster. If you're a bigger operation, you'll have a central logging solution, in which case you need journald to store the logs as plain text as well before you can do log shipping.
The only use case where the binary format might make sense is if you ship journald logs to another central journald instance. That's just very much an edge case.
Doesn't that still involve a conversion? I believe that rsyslog can read the journald database, but you're typically not querying syslog data directly, so there's a conversion between rsyslog and Logstash, Splunk, Datadog, whatever.
SQLite resolves lock contention between processes with exponential backoff. When the WAL reaches 4MB (the default autocheckpoint threshold) it stops all writes while it gets checkpointed back into the main database. Once the checkpoint is over, all the waiting processes probably have retry intervals in the hundred-millisecond range, and as they exit they are immediately replaced by new processes with shorter initial retry intervals. I don't know enough queueing theory to state this nicely or prove it, but I imagine the tail latency for the existing processes goes up quickly as the throughput of new processes approaches the limit of the database.
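For reference, the knobs involved are visible from Python's built-in sqlite3: the 4MB figure falls out of the default wal_autocheckpoint of 1000 pages times the default 4KB page size, and the connection timeout bounds how long the busy handler keeps retrying. This is just a sketch of the settings, not a reproduction of the contention scenario:

```python
import os
import sqlite3
import tempfile

# Open a database in WAL mode and inspect the checkpoint threshold.
path = os.path.join(tempfile.mkdtemp(), "queue.db")
conn = sqlite3.connect(path, timeout=5.0)  # timeout = how long the busy handler retries

mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
autockpt = conn.execute("PRAGMA wal_autocheckpoint").fetchone()[0]
page_size = conn.execute("PRAGMA page_size").fetchone()[0]

print(mode)                  # wal
print(autockpt * page_size)  # ~4 MB of WAL before a checkpoint is attempted

conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO jobs (body) VALUES ('hello')")
conn.commit()
conn.close()
```

Raising wal_autocheckpoint trades longer checkpoint pauses (less often) for a bigger WAL file; lowering it does the opposite.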
That is interesting, I’ll have to look into that further. I would expect Go to have similar issues because the RPS isn’t that much less. But maybe there is some knife edge here.