Hacker News | dahele's comments

Slightly apples to oranges comparison.

The brain has an incredibly complex architecture, which evolved over millions of years. On top of that, it then develops throughout a human's lifespan. The brain we observe is a "finished product", and even then it has ~150 trillion synapses to do computations [0].

Even massive neural networks have a relatively simple architecture before they are trained. Part of the training process is effectively learning more complex architectures, which are manifested by changing weights.

What I'm getting at is that artificial neural networks aren't equivalent to the brain - ANNs are learning their own structure on top of the circuits actually doing the computations. They are doing the work of millions of years of evolution, genetics, developmental biology, interaction with the environment, etc. Perhaps it's to be expected that ANNs will need orders of magnitude more parameters than a brain.

An interesting development is meta-learning, where we separate the process for learning the architecture (this could use deep learning, but not necessarily) from the network actually doing the computation (equivalent to the brain).
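A toy sketch of that separation (my own illustration, not a real meta-learning algorithm): an outer loop searches over "architectures" - here just polynomial degree - while an inner loop fits the parameters of each candidate. Held-out data guides the architecture choice.

```python
import numpy as np

# Noisy samples from an unknown function.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.1, size=x.shape)

# Random train/validation split.
idx = rng.permutation(len(x))
tr, va = idx[:30], idx[30:]

best_degree, best_err = None, float("inf")
for degree in range(1, 10):                    # outer loop: "architecture" search
    coeffs = np.polyfit(x[tr], y[tr], degree)  # inner loop: fit the "weights"
    err = np.mean((np.polyval(coeffs, x[va]) - y[va]) ** 2)
    if err < best_err:
        best_degree, best_err = degree, err

print(best_degree, best_err)
```

The inner loop never sees the validation set; the outer loop never touches the coefficients directly - a crude analogue of evolution shaping architecture while lifetime learning sets the weights.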

> A language model must be based on the fundamental notion that there are nouns (things), verbs (processes) and adjectives (attributes).

I agree, but how does the brain represent these concepts? Some would argue that ANNs do have these concepts, just hidden away in abstract vector representations. Take the visual system, which has been extensively studied - we see the brain represents contrast, edges, shapes and so on very similarly to convolutional NNs.
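To make the edge-detection point concrete, here's a minimal sketch (illustrative only) of the kind of oriented edge filter that both early visual cortex neurons and first-layer CNN filters approximately compute - a Sobel-style kernel responding where intensity changes horizontally:

```python
import numpy as np

# Sobel-like kernel: responds strongly to vertical edges
# (left-to-right intensity changes).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation (no padding), plain numpy."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic image: dark on the left, bright on the right.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

response = convolve2d(img, sobel_x)
print(response)  # peaks only where the dark/bright boundary sits
```

In a trained CNN these kernels aren't hand-written; they emerge from the weights - which is the sense in which the "concepts" are hidden in learned representations.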

[0] It's likely that this number doesn't come close to capturing the brain's complexity, as it doesn't incorporate parameters like long-term potentiation/depression, synchronization, firing rates, habituation vs sensitization, immunomodulation and likely so much more we haven't yet discovered.


Very well put. Take AI as an example - foundational research in deep learning and reinforcement learning was first done in academia. Everyone knows the story of how long neural networks languished in the shadows, ahead of their time. Even commercial research labs only really took note after the potential of these methods was demonstrated.

Government funding is a good thing, but it'd be even better if we could harness the free market. One solution could be to equip research institutions with the means to capture value from IP that follows on from basic research.


I was thinking in a similar direction. What about adaptive patent law, where the duration of protection in a particular area can be set by e.g. a committee (with a strict ruleset and a long time constant, of course)? When there is not much progress in a certain area (e.g. cancer or nuclear fusion), patent protection is extended to incentivise investment. An obvious problem with this approach is the unknown unknowns, which might prevent the system from incentivising inventions like the transistor. So it can probably only help in problem-driven areas, not in cases where a tech innovation gives rise to new problems/solutions.


"Reimagining Capitalism" by Rebecca Henderson is an eye-opening book that outlines very clearly defined steps that would force corporations to take more social responsibility.

Here's my take on the book:

Capitalism at its best works wonderfully, but the checks and balances that are needed for capitalism to work for the good of society are broken. Most countries across the world do not have a truly free press or even semi-functional democracy. We're hurtling towards a climate disaster, with no signs of slowing down, as well as facing massive global inequality.

Corporations are succeeding at present, but the single-minded focus on pleasing shareholders above all else is not sustainable and _will_ come back to hurt everyone, not least corporations. It's a classic case of the prisoner's dilemma - each corporation individually does better by ignoring societal damage, but if all of them do so, the whole system will eventually collapse - whether due to climate disaster, a collapse of buying power from impoverished masses, or a global pandemic.
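The prisoner's dilemma structure can be made explicit with a toy payoff table (illustrative numbers of my own, not from the book): each firm chooses to "ignore" or "address" societal damage, and ignoring dominates individually even though mutual cooperation is better for both.

```python
# Payoffs are (firm A, firm B); higher is better.
payoffs = {
    ("ignore",  "ignore"):  (1, 1),   # mutual defection: everyone worse off
    ("ignore",  "address"): (4, 0),   # free-rider gains at the other's expense
    ("address", "ignore"):  (0, 4),
    ("address", "address"): (3, 3),   # cooperation beats mutual defection
}

def best_response(opponent_action):
    """Firm A's payoff-maximizing action, given firm B's action."""
    return max(["ignore", "address"],
               key=lambda a: payoffs[(a, opponent_action)][0])

# Ignoring is a dominant strategy for each firm individually...
print(best_response("ignore"), best_response("address"))
# ...yet both firms would prefer the (address, address) outcome.
print(payoffs[("ignore", "ignore")], payoffs[("address", "address")])
```

This is exactly why Henderson's argument leans on external coordination (regulation or collective action) rather than on individual corporate virtue.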

Henderson argues that "only the relentless pressure of the free market can drive the kind of transformative innovation at scale [that we require]". The solution is to first fix democracy - get money out of politics, and fix the voting system. A representative government can agree on common goals and implement policies that harness the free market to work towards these agreed goals - e.g. a carbon tax. Henderson also puts hope in "collective action" - corporations working together voluntarily towards common stated goals. I'm a little sceptical of this last part.

Azeem Azhar (Exponential View) did a great interview with Henderson - https://hbr.org/podcast/2020/06/reimagining-capitalism-for-a...


Thanks for sharing, will check out the podcast.


Unofficial worked solutions are available here: https://github.com/goropikari/SolutionQCQINielsenChuang


This is great. Thank you!


Interesting point, as this is actually the topic of some debate.

The position you're referring to is known as the Trendelenburg Position. The benefits were thought to lie not so much with increasing perfusion to the brain, but bringing more blood back to the heart (known as preload), thereby increasing cardiac output and increasing blood flow to vital organs. If you're interested in this, there's some neat physiology called the Frank-Starling law [0].

However, it turns out that in practice, cardiac output doesn't actually improve. In fact, this position also increases the risk of fluid build up in the lungs (pulmonary oedema) and can worsen cerebral perfusion [1]. Therefore, it's falling out of favour in the context of shock. A compromise might be to elevate solely the legs, but its efficacy is also being questioned.

This illustrates a recurring theme in modern medicine: the human body and its diseases are so complex and so poorly modelled that it's often not possible to translate intuition or first principles of physiology into treatment. Instead, one relies on real-world studies like those reviewed in [1] - AKA 'evidence-based medicine'.

[0] https://en.wikipedia.org/wiki/Frank%E2%80%93Starling_law

[1] https://www.cambridge.org/core/services/aop-cambridge-core/c...


Deep vein thrombosis (DVT) is unlikely to cause strokes, as DVT causes clots to form in the venous system, while strokes involve clots in the arterial system. Clots from veins are more likely to get lodged in the lungs (pulmonary embolism) than pass across the lungs into arteries.

The higher prevalence of strokes in COVID patients isn't fully understood, but it might be due to a generally increased inflammatory state in the body, with complement (a cascade of proteins involved in the immune response) seeming to play a significant role. It's known that inflammatory states generally increase the risk of clotting (e.g. cancer, surgery), but it remains to be seen whether COVID is having a similar effect or a more direct influence.


Interesting idea but unfortunately this isn’t feasible, unless you also have access to a full suite of diagnostic equipment at home. Even if someone at home could self-diagnose a stroke with any accuracy, around 10% of strokes are actually caused by haemorrhage (bleeding) rather than a clot. A CT scan is almost always carried out to rule this out, before administering thrombolytic drugs or aspirin, as these drugs can significantly worsen existing bleeding.

