There are multiple problems. The biggest, in my mind, is that outcomes are not tracked well. That means there's no medical feedback loop to measure the long-term effectiveness of... anything, and no good way to do retrospective studies to validate new measurement techniques. On top of that, patient-protection laws make it incredibly difficult for researchers to get even anonymized data.
The next biggest problem is that there's a gold standard of treatment, and it's hard to get enough research to move that gold standard. In this case, the gold standard is essentially a diameter measurement, taken by picking the visibly largest slice of a data set whose axes are determined by the patient's somewhat random orientation in the CT scanner. Axial, coronal, sagittal - those are not precise anatomical orientations; they're rough. Most radiologists won't take the time to create Multi-Planar Reformats along oblique or double-oblique angles. And even if they did, they'd still be eyeballing the measurement.
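To see why a single eyeballed diameter is a blunt instrument, consider an idealized spherical lesion: diameter scales as the cube root of volume, so substantial volume growth shows up as a small diameter change. A minimal sketch (the sphere model and all numbers are illustrative, not clinical):

```python
import math

def sphere_diameter_mm(volume_mm3):
    # For an idealized spherical lesion: d = (6V / pi)^(1/3)
    return (6.0 * volume_mm3 / math.pi) ** (1.0 / 3.0)

v0 = 4000.0      # baseline lesion volume in mm^3 (made-up number)
v1 = v0 * 1.30   # 30% volume growth
d0 = sphere_diameter_mm(v0)
d1 = sphere_diameter_mm(v1)
diameter_growth = d1 / d0 - 1.0   # = 1.3^(1/3) - 1, roughly 9%
```

A 30% increase in volume shows up as only about a 9% increase in diameter, which is easy to lose in the noise of a hand-drawn caliper measurement on a slightly different slice.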
It's not very hard from a regulatory standpoint to create tools to allow a radiologist to click here, click there, and show a measurement of distance. Then the radiologist can look at a similar measurement from a different radiologist on the previous scan, and say whether it's bigger or smaller now. (But some PACS still don't handle anisotropic MPR correctly, which just blows my mind.)
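For what it's worth, the arithmetic behind a correct click-to-click measurement is not hard; the failure is in tooling, not math. A minimal sketch of a distance measurement that respects anisotropic voxel spacing (the spacing values are typical but made up):

```python
import math

def physical_distance_mm(p, q, spacing):
    """Distance in mm between two voxel indices in an anisotropic volume.

    p, q:    (x, y, z) integer voxel indices of the two clicks.
    spacing: per-axis voxel spacing in mm, e.g. (0.7, 0.7, 5.0) for a CT
             with 0.7 mm in-plane pixels and 5 mm slice spacing.
    """
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p, q, spacing)))

spacing = (0.7, 0.7, 5.0)
in_plane = physical_distance_mm((0, 0, 0), (10, 0, 0), spacing)       # ~7 mm
through_plane = physical_distance_mm((0, 0, 0), (0, 0, 10), spacing)  # ~50 mm
```

Ten voxels in-plane and ten voxels across slices are physically very different distances; a viewer that treats voxels as cubes gets the through-plane measurement badly wrong.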
It's vastly harder from a regulatory standpoint to say in a medical product for commercial use, "Hey, we automatically measured the volume of this tumor by automatically finding the boundary of the tissue and automatically ignoring the connected blood vessels and neighboring bone, and then we automatically found the largest cross-sectional diameter, and then we automatically compared the same measurements against the automatically non-linearly-registered longitudinal previous scan... diagnosis: the tumor is not growing, just changing shape slightly."
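To make the contrast concrete, here is the core arithmetic of a volumetric measurement, stripped of everything that makes it hard (and regulated): a real product needs robust segmentation, vessel and bone exclusion, and registration, none of which is shown. The HU window and all numbers below are illustrative only:

```python
def lesion_volume_mm3(ct_hu, spacing, lo=20, hi=80):
    """Crude volume estimate: count voxels inside a Hounsfield-unit window,
    then multiply by the physical volume of one voxel.

    ct_hu:   nested list indexed [z][y][x] of Hounsfield units.
    spacing: (x, y, z) voxel spacing in mm.
    lo, hi:  illustrative soft-tissue HU window (not clinical values).
    """
    voxel_mm3 = spacing[0] * spacing[1] * spacing[2]
    n = sum(1 for sl in ct_hu for row in sl for hu in row if lo <= hu <= hi)
    return n * voxel_mm3

# Synthetic 'scan': a 10x10 patch of soft tissue (40 HU) on 4 of 8 slices,
# everything else air (-1000 HU).
air = [[-1000] * 32 for _ in range(32)]
tissue = [row[:] for row in air]
for y in range(10, 20):
    for x in range(10, 20):
        tissue[y][x] = 40
scan = [air] * 2 + [tissue] * 4 + [air] * 2
vol = lesion_volume_mm3(scan, (0.7, 0.7, 5.0))  # 400 voxels * 2.45 mm^3, ~980
```

The twenty-line toy is easy; proving to a regulator that the automated boundary-finding is safe across every scanner, protocol, and pathology is the expensive part.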
People think their CT scans are examined for hours by a team of doctors, like on "House." In reality, you're lucky if your CT scan gets 5 minutes of investigation, and that's by a radiologist who was trained in med school to read X-rays, not CT. And your oncology head CT may be read by a radiologist who specializes in abdominal pediatric ultrasound. And there's almost certainly no double-reading for quality assurance. And there's no rating system to tell you which radiologists make the most accurate diagnoses. And he's probably using 2D measuring tools on 3D data - because that's what he's familiar with, that's what the reconstruction software manufacturers provide, and that's the medical gold standard. On a shared computer, where some other radiologist (or resident!) might have changed the display settings in a way he didn't notice. And he's probably been reading images for 14 hours straight.
Medicine has a data problem. It is multifaceted and far more difficult to fix than it should be. I've worked on these problems for years and it is immensely frustrating. The incentives just aren't set up to solve them. For example, with outcomes tracking, even if you had a good software system for it, you wouldn't get reimbursed for it.
> For example, with outcomes tracking, even if you had a good software system for it, you wouldn't get reimbursed for it.
It's worse than that: it would be sued out of existence by the people who rank badly (i.e., exactly the bad practitioners who need to have their asses kicked).
In a related example: a friend wrote a book about her first-hand experience dealing with a nursing home for her aging parent. There were many instances of problems, but one in particular was that a doc had prescribed the wrong drug.
The publisher loved the book, and made the publication offer contingent on a number of changes. One of those changes: you can't implicate a person or their institution, because of the liability. It doesn't matter how good your proof is; the jury system is so broken that the publisher just can't accept the risk.
Now just imagine if there were a searchable database of outcomes tied to individual practitioners or institutions.
Medical Practitioners and Medical Institutions should be required by law to document outcomes, provide information about disputed outcomes, and update the information on a periodic basis.
And tampering with the information should carry heavy punishment, including jail time for the practitioner and officers of the institution.
Actions like the recent cover-ups in the VA hospitals about wait times leading to deaths should be nearly impossible to hide from the public eye.
You'd need to adjust for differences in patients, lest you discourage doctors from treating the sickest ones. You'd also need to change the way we define and punish malpractice.
Didn't the ACA put incentives in place specifically for outcome tracking? I was just talking with a guy at MIT working on post-stroke outcome tracking and automated scoring using the myriad sensors on a high-end smartphone, and he mentioned that the ACA, now or in the future, will tie a significant portion of reimbursement to outcomes - which forms the basis of his business model.
It's intended to. Even if the ACA isn't the ultimate form, most healthcare reform involves paying for performance rather than procedures, and that implies outcomes tracking of some kind. You've got some serious chicken-and-egg problems, though.
This. 1000 times this. It's not even about automated diagnosis. The software used by most radiologists is more than two decades behind the state of the art, lacking even basic image registration and segmentation capabilities. (source: I've worked with radiologists who review brain images at several tier-one, top-10 hospitals).
If you're lucky, some eager, tech-savvy resident will use 3D Slicer or OsiriX on their personal laptop to coregister and segment images and do a longitudinal volumetric comparison. But those basic tools are not available on the platforms used to read most images.
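Once segmentation and coregistration are done (the genuinely hard parts), the longitudinal comparison itself is trivial. A toy sketch of that last step; the percentage thresholds are invented for illustration, and real response criteria such as RECIST are diameter-based and defined by the published guidelines:

```python
def volume_change(prior_mm3, current_mm3):
    """Fractional change between two lesion volume measurements."""
    return (current_mm3 - prior_mm3) / prior_mm3

def classify(change, grow=0.20, shrink=-0.30):
    # Hypothetical thresholds, for illustration only.
    if change >= grow:
        return "progression"
    if change <= shrink:
        return "response"
    return "stable"

status = classify(volume_change(1000.0, 1050.0))  # "stable"
```

The point is that the clinically interesting computation is a few lines; everything upstream of it - segmentation quality, registration, scanner variability - is where the work and the risk live.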
Major vendors are trying to catch up, but from what I've seen, demand (and thus willingness to invest) is surprisingly limited.
> Major vendors are trying to catch up, but from what I've seen, demand (and thus willingness to invest) is surprisingly limited.
You may not be able to get insurance to reimburse for it, but my impression is that there is no shortage of people ready to pay cash money, easily into 4 figures, for someone providing this service in a 'personalized medicine' context.
As for packaging a tool chain that actually makes these state-of-the-art measurements and applying it in this domain, it's not clear what the FDA approval requirements would be.