Databases and traffic are going to eat your lunch anyway. Good developers cost a good deal more than extra infrastructure. Adding people to help you cut cloud cost is easier and more predictable than re-implementing.
> It's not safe to grow veggies and grains with human manures.
Not true. My wife does research on this, and post-sewage-treatment dried sludge actually has better values than some of the food they used as reference samples.
Have you explored humanure production? It strikes me that properly composting the waste ought to be enough to destroy any problematic bacteria. Would love to know what you ran into that may have prevented doing that. I've been thinking about incorporating regenerative systems to build a food forest and trying to figure out how to safely handle biowaste. Even goose poop is potentially toxic.
Yes, you can properly compost human manure to make it safe... But it's more complicated and involves steps which cannot be skipped. I would rather do less work and be safer.
This means I simply don't use human manure on my agriculture crops, I use it in my forested areas.
That said, I do use urine in my grass/leaf compost, and recently I started charging biochar with human urine. Urine is much safer than feces.
It's just a superstition but I wouldn't eat veggies grown directly in human manure (although the guy has done and hasn't gotten the sponge-brain: https://humanurehandbook.com/ )
Being unfit for human consumption is actually a feature, not a drawback. We need to trigger growth and get humans off the sand so that nature can do her thing.
I met someone once who, after sinking tons of his own money into projects, argued you need poop and some kind of toxins to keep people and their goats away for at least 30-50 years. His experience was that people destroying everything scales much faster than constructive effort. You turn your back and everything is gone.
Of course something is to be said for economically viable greening but if you just want to restore nature you should aim to do just that.
I don't have the numbers here but I found calculating how many large trees you need to sustain 1 human vs how much electrolysis it would take pretty mind boggling.
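I don't have the thread's numbers either, but as a very rough illustration of the mismatch, using commonly cited ballpark figures that are my own assumptions, not the commenter's:

```javascript
// All figures are rough assumptions for illustration only.
const exhaledKgPerDay = 1.0;        // CO2 a resting human exhales, ~1 kg/day
const treeUptakeKgPerYear = 22;     // CO2 a mature tree absorbs per year, ballpark
const trees = (exhaledKgPerDay * 365) / treeUptakeKgPerYear;
console.log(Math.ceil(trees) + " large trees per person"); // ~17, before any fossil-fuel footprint
```

That's just breathing; add a typical fossil-fuel footprint and the tree count grows by an order of magnitude, which is what makes the comparison mind-boggling.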
Both use very sparse ground-based measurements, which makes good emissions localization hard. Satellite measurements provide additional confirmation.
Productivity in this case being in the eye of the beholder? I’d argue that people experienced in how node works wouldn’t have to think too much, since async is the default. I agree with your general sentiment though.
If it is single-threaded, and we are talking about shared variables, I thought I could assume the runtime is not going to pause my code execution, switch context, and run other code midway through my callback handler.
If we are talking about shared external resources (e.g. who can update a cloud blob) then we could have a proxy for that as a variable. You might need retry logic, and it could get tricky in that respect.
You can rely on node preventing data races, but those are distinct from race conditions. A race condition in the logic of your code can happen any time two "threads" of execution (in this case a thread could be considered a chain of asynchronous callbacks) interleave their operations. It's possible for one of them to do something with a resource the other was using unless you use some kind of synchronization to prevent the other from using the resource until the first thread is done with it. For example, two callback chains could start using the same database connection object. Perhaps one chain was in the middle of setting up a transaction when it needed to wait for some other async resource to load, and the other chain comes in and does something with it. Now it's in an unintended state because the object was allowed to be used by two different "threads" of callback chains.
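A minimal sketch of that interleaving (hypothetical names; a shared balance stands in for the database connection object): Node never preempts the synchronous parts, but every `await` is a point where another callback chain can run against the same shared state.

```javascript
// Sketch, not real library code: two async "threads" share one variable.
// Node is single-threaded, so there is no data race, but the logic still
// has a race condition because each await yields to the event loop.
let balance = 100;

async function withdraw(amount) {
  const current = balance;                   // read shared state
  await new Promise(r => setTimeout(r, 10)); // e.g. wait on the database
  balance = current - amount;                // write back based on a stale read
}

async function main() {
  await Promise.all([withdraw(30), withdraw(30)]);
  console.log(balance); // 70, not 40 -- one withdrawal was silently lost
}

main();
```

Both chains read 100 before either writes, so the second write clobbers the first. Some kind of lock, queue, or transaction around the read-modify-write section is the synchronization needed to prevent this.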
Yes sorry I was thinking in terms of code that uses callbacks, not the async keyword.
What I mean is that distinct from threading, where the following code could be interrupted between the first and second line of the function, by something that updates global:
var global = 0
function addTheseToGlobal(a, b) {
  let im = a + global
  return im + b
}
I’d then have node handle the “simple” socket part, and have the race-condition-prone complexity handled by a language better suited for that, responding to the async node implementation?
Edit: spelling
Does not have to be copper. I've done several of these using galvanized steel hammered into the ground. We'd hammer in 2-meter rods until the resistance was low enough (around 1600 ohms IIRC). This would ensure the HPFI (residual-current device) tripped below 50 mA.
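For scale, assuming 230 V mains (my assumption; the comment doesn't say), Ohm's law gives the fault current through a 1600 ohm electrode:

```javascript
// Ohm's law sketch; 230 V is my assumption, 1600 ohms is from the comment above.
const volts = 230;
const ohms = 1600;
const amps = volts / ohms;
console.log((amps * 1000).toFixed(0) + " mA"); // ~144 mA, comfortably above a 30-50 mA trip threshold
```

So even a fairly high electrode resistance still passes enough fault current to trip the device, which is why hammering until the measured resistance crossed that threshold was sufficient.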
Kid brains are a sponge. I don't get why the author is either-or in his mindset. Sure, problem solving and curiosity are more important than syntax, but there is ample room for both.
Mafintosh created hypercore and hyperdrive, which are part of datproject.org, a nice p2p option to consider too. It is used by Beaker browser among others.
What would be the point of the passive-aggressive behaviour? If the intention is to get Apple to pony up some cash, he should reach out directly to them with valid concerns over traffic etc. If the point is self-promotion, he inadvertently got that now. I think the approach chosen is the right one.
Higher structural costs presumably. In the idea presented here you only need to move a small amount of mass at a time, so you don't need an especially strong crane.
If the giant block were literally just suspended, and a crane hoisted it up and lowered it to recover energy, then this would put continuous stress on the crane structure, much higher than a series of smaller blocks being lifted intermittently.
I was thinking the same but with chains or some heavy contiguous body. The problem is that you need something that can lift one super heavy load, as opposed to many lighter ones. I guess that's what you mean by hydraulics, but then you need hydraulics.
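The E = mgh arithmetic makes the trade-off concrete; block mass and lift height here are my own assumed numbers, not from the thread:

```javascript
// E = m * g * h, with assumed numbers for a single block.
const massKg = 35000;   // one ~35 t block (assumption)
const g = 9.81;         // m/s^2
const liftM = 100;      // lift height in meters (assumption)
const kWh = (massKg * g * liftM) / 3.6e6; // joules to kWh
console.log(kWh.toFixed(1) + " kWh per block per cycle"); // ~9.5 kWh
```

Many small blocks store the same total energy as one contiguous body of equal combined mass and height, but each individual lift only needs a crane rated for one block's weight, which is the structural-cost point above.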