Hacker News | pheo's comments

Let's all just agree to use Perl. It's just dynamic, functional, OOPy C, isn't it?

J/K. I think this is an interesting problem in that it's a sandbox for allocation and GC in pretty much any dynamic interpreter's implementation. My qualm is that it would be "easy" to tune for the test. Consider the difference between dynamic blocks of a small but fixed size getting alloc'd/freed asynchronously (a network stack?) versus a pool of variable-length strings getting shuffled around (a key/value store?). Those are simple, but drastically different, strategies for your heap. There won't be a "best" answer beyond the limits of your problem domain.


Been there, done that. Even went to YAPC EU in Pisa and had a whiff of Larry. Not for me, that's all I can say. It's too loose, too much shooting from the hip. There are a lot of good ideas in there, though.

Agreed. Which is why the 'one size fits all' approach might not be the best way to go. That's the main reason I decided to launch the challenge and to encourage a more combinatory approach, with local special-purpose allocators.


I think this site got slashdotted.


By my understanding of the event, a wall of mud and ash buried the city 10+ feet deep almost instantly. Fossils were found with food in their mouths. What timeline was this "simulation" built on?


Ancient Pompeii is much closer to Vesuvius than photos make it easy to gather... The eruption took time; when the pyroclastic flow finally came down and hit the city, the city was buried quite quickly. Before the pyroclastic flow arrived, however, many people had plenty of time to escape, and many others died of respiratory problems or were hit by debris.

The key thing to bear in mind is that, back then, very few people would have had any idea of what was happening or how dangerous it was, even less so that there was an actual need to escape or what a safe distance was... (Consider this: with a lot of stuff falling from the sky and low visibility because of the dust in the air, would you run outside or seek cover indoors?)

Here's a more accurate and comprehensive description of the events: http://www.bbc.co.uk/history/ancient/romans/pompeii_portents...


I had the same reaction as you, so I googled it. According to Wikipedia[1] the eruption did indeed take two days, with shocks starting in the morning and ash starting to fall in earnest at 1 p.m., and pyroclastic flows beginning in the middle of the night. People were able to escape and be rescued during the afternoon.

[1] https://en.wikipedia.org/wiki/Eruption_of_Mount_Vesuvius_in_...


Probably the diehards - the volcano-deniers who refused to believe anything of significant magnitude was about to happen.

You'll find 'em in every era...


Sigh. Was it really necessary to preface it with: "This is not a flippant comment"?

How else to explain such sudden deaths, even considering there was enough warning of impending catastrophe, and time to escape? Why else stick it out until that final blast of pyroclastic fury?


I'd like to add that a few "anti-business pro-freedom-everywhere radical[s]" thrown in the mix might be just what consumer software needs...


Woah.

I think that many "servers" today aren't going to benefit, though, from the level of optimization that you're talking about. Network bandwidth is at least an order of magnitude slower than memory and PCI throughput. Only for the most heavyweight computational tasks (e.g. gaming, finance, scientific computing) is this kind of multiplexing optimization very helpful. If you need that kind of performance, you would map the NIC into userspace.

With how fast modern processors (CPUs and GPUs) are, what's much more important nowadays is not stalling on the flow from the NIC. The original author's solution seems to me like a very simple fix for the old naive paradigm of state-machine switching off a single thread.

I think you might be (impressively) too close to the metal, and guilty of premature optimization. For example, a simple database application would benefit much more from keeping the slow I/O bottlenecks flowing than from the compute multiplexing.


Ah, true; this whole thing makes very little sense for the general case. Most workloads aren't like this.

The real goal here, in my mind, was to allow people in those particular high-performance domains (gaming especially, though the other ones would probably benefit too) to code at a higher abstraction level.

In my ideal world, a physics engine, for example, would be implemented in code as actors sending messages—with one actor for each physical (e.g. collide-able) object. Not only is that really expensive; the naive approach wouldn't even work! (You need "tick" events spammed to every physics-actor once per frame, which is just absolutely ridiculous.) But then the compiler transparently takes that and spits out something that isn't actor-modelled at all, but rather is a modern-day physics engine doing GPGPU SIMD to arrays of quaternions.

Or, in short: I want to code games in Elixir, without an impedance mismatch in the places where one of my actors' components turns out to need groupwise synchronous shared-memory evaluation rather than piecewise asynchronous reduction-scheduled evaluation. (That sounds like something that should be followed up with "and a pony!", but—given that the Erlang BEAM VM already has the infrastructure of "dirty" schedulers for CPU-bound tasks, and an LLVM IR step in its JIT which could be shunted over to a PTX-ISA target—this actually shouldn't even be that hard an extension to make.)


Read the whole thing. There are some well-developed points near the end, but I feel many of these premises are fundamentally flawed. For example, that mobile devices will gain keyboards and become our primary computers.

Mobile is for content consumption, workstations are for content creation. Missing that key point leads many of the points here astray. Solving the disparity is one of the major open areas in HCI research.


TL;DR. How do you export to JSON?


"Fast," "Breeder," or "Salt" reactors breed weapons grade fissionable material (Ie. Plutonium 239) from relatively un-enriched materials (Ie. Uranium 238 and Thorium).

Fast reactors make nuclear weapons as a byproduct. That's why we don't use them.


You're probably being downvoted for saying that reactors make nuclear weapons. They don't. They might be able to produce materials to build a weapon, but it's not like they pop out little bombs. Considering some folks equate "nuclear power plant problem" with "nuclear bomb explosion", this is a nontrivial distinction.


The question though is whether the fissionable material is usable for weapons in any practical sense.

In the Integral Fast Reactor, for example, you end up with a mixture of the four isotopes of plutonium. It's impossible to use that mixture for weapons, and it's much harder to isolate the Pu239 than it is to enrich natural uranium.

In some thorium designs, the fissionable U233 (bred from thorium) is mixed with U232, which is also very hard to separate and makes the material unworkable for weapons. (However, this is not true of all thorium designs. Chemically separate the protactinium and it will decay to pure U233.)

If you had one of the nonproliferating designs and were silly enough to attempt using it for weapons production, you would need much larger and more sophisticated enrichment facilities than if you just enriched natural uranium. If you get your startup fuel from other countries, you can forgo enrichment facilities entirely, making it very clear that you don't have a weapons program.


Absolutely awesome. This is how far we've come.


So it's dronabinol?

