Sorry, but the test hasn't been shown to be 94.5% accurate. The study was structured as:

1. A neurologist makes a clinical diagnosis using the usual tools and records the most likely diagnosis and their confidence in it.
2. The new p-tau test is done.
3. The neurologists reassess their confidence, changing their minds in many cases.

There is no "gold standard" here (tough to have in AD), and not even a period of clinical follow-up to assess for any interval change. The study shows that patients got reclassified, but it never showed whether, or by how much, that reclassification was more accurate than the initial one. The implication is that because the test showed some accuracy in a different, retrospective study, it would be helpful in this new situation. But that's about it, and an implication is different from directly showing this to be the case.
I've seen this idea that "LLMs are just guessing the next token" repeated everywhere. It is true that accuracy at that task is what the training algorithms aim at. That is not, however, what the output of the model represents in use, in my opinion. I suspect the process is better understood as predicting the next concept, not the next token. As processing passes from one layer to the next, this concept morphs from a simple token into an ever more abstract representation of an idea. That representation interacts with all the others being created elsewhere from the text to form the next, even more abstract concept. In this way ideas "close" to each other become combined and can fuse into each other, until an "intelligent" final output is generated. It is true that the present configuration doesn't offer the LLM a very good way to look back at what its own output has been doing, and I suspect that kind of feedback will be necessary for big improvements in performance. Clearly, there is an integration of information occurring, and it is interesting to contemplate how that plays into Giulio Tononi's definition of consciousness in his "integrated information theory".
Also, as far as hallucinations go, no symbolic representation of a set of concepts can distinguish reality from fantasy. Disconnect a human from their senses and they will hallucinate too. For progress here, the LLM will have to be connected in some way to the reality of the world, the way our senses and physical bodies connect us. Only then can they compare their "thoughts" and "beliefs" to reality. Insisting that they at least check their output against facts as recorded by what we already consider reliable sources is the obvious first step. For example, I made a GPT called "Medicine in Context" to educate users; I wanted to call it "Reliable Knowledge: Medicine" because of the desperate need for ordinary people to get reliable medical information, but of course I wouldn't dare. It would be very irresponsible. It is clear that the GPT would have to be built to check every substantive fact against reality, and ideally to remember such established facts going forward. Over time, it would accumulate true expertise.
I just bought a MacBook Pro with the M3 Max chip and installed MATLAB R2023b, running Sonoma 14.3. I also had to install Amazon Corretto 8, since MathWorks only supports the Java 8 JRE included with it. I am already having several problems in MATLAB with this new setup. Can I assume that updating to Sonoma 14.4 might very well cause even more problems? I really don't understand any of this.
They may not fix it, but my understanding is they are relying on undocumented features, and that's always a crapshoot. My company does low-level language stuff and we've been burned like this, too. Over the last decade or two, we decided not to trade compatibility for performance.
EDIT: maybe not undocumented, but undefined behavior?
When a kernel update breaks every JVM version from Java 8 onward, the kernel devs fucked up. Even worse when the breaking change appears only in the final production release and not in the beta. It's completely obvious that this is a bug.
A segmentation fault should trigger SIGSEGV, not SIGKILL. They changed the behaviour of the kernel, which broke the JVM and any other application designed according to the POSIX standard: https://pubs.opengroup.org/onlinepubs/9699919799/functions/V...
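For anyone who wants to see the contract at issue, here is a minimal C sketch of the POSIX behaviour being described (a toy, not the JVM's actual handler, which is far more involved): an invalid memory access is supposed to be delivered as a catchable SIGSEGV to any registered handler, which is exactly what cannot happen if the kernel sends an uncatchable SIGKILL instead.

    /* Toy demonstration of the POSIX contract: a segfault is delivered
     * as a catchable SIGSEGV. HotSpot registers a handler like this
     * internally (e.g. for implicit null checks), so an uncatchable
     * SIGKILL in its place breaks the JVM.
     * Build: cc segv_demo.c -o segv_demo */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void on_segv(int sig, siginfo_t *info, void *ctx) {
        (void)sig; (void)info; (void)ctx;
        /* Only async-signal-safe calls in a handler: write, then _exit. */
        const char msg[] = "caught SIGSEGV as POSIX requires\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(0);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_segv;
        sigemptyset(&sa.sa_mask);
        if (sigaction(SIGSEGV, &sa, NULL) != 0) {
            perror("sigaction");
            return 1;
        }
        volatile int *p = NULL;
        *p = 42;   /* Faults here; the kernel should run on_segv. */
        return 1;  /* Unreachable if the signal is delivered correctly. */
    }

On a conforming kernel this prints the "caught SIGSEGV" line and exits 0; under the behaviour described upthread, the process would presumably be killed before the handler ever ran.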
This post resonates with me as a radiologist with 40 years' experience, and as the father of a son who founded and runs his own company centered on machine learning and, now, LLMs. I frequently hear that "AI" is going to replace radiologists any day now, but I do not believe it, for some of the same reasons described by the author, though in a different context.
Throughout my education, I was told that the natural world is a collection of molecules mediated by physical forces. Adherents of this “materialist” view may acknowledge that some dimensions of reality remain hidden, but the governing idea is that we are things. This was certainly the teaching emphasis in medical school, and it is also a common perspective in contemporary philosophy, under which notions of the "soul" or the "spirit" tend to be deprecated.
We will pass over the objection that materialism leans heavily on the laws of physics even though those laws are themselves not physical. How to account for the existence of the laws is left out of the picture. The fact that our inner lives also remain unaccounted for under this scheme is an even bigger objection, but doesn't seem to bother its supporters much (see Daniel Dennett's attempt to argue consciousness away as a sort of epiphenomenon).
Before considering living beings, we can ask: Are physical objects really just a collection of molecules and the forces that bind them?
This essay explores Charles Sanders Peirce's four methods of moving from doubt to belief, as applied to medicine, and how being "pleasing to reason" sometimes trumps empirical evidence.
I am sorry to hear about your diagnosis. I looked up the UpToDate information on current treatments and would be happy to send you a copy. Let me know; you can find my email in my user profile. Also, you should be aware of the clinicaltrials.gov site run by the US National Institutes of Health, which includes European trials as well.
I read that last one, on "Telautomatics". He mentions his attempt to interest auto manufacturers in a self-driving car. But he also describes some kind of odd flying machine that uses no wings, propellers, or the like, relying purely on "reactions". What could he have been talking about? Inertial propulsion? Some kind of reaction against electromagnetic fields? There is a great illustration of this thing. Weird. (Incidentally, you could draw a straight line from all those magazine ads to today's YouTube versions.)