Senior Vox writer Kelsey Piper believes that powerful AI is an existential threat to humanity, that current approaches to aligning AI with human interests are probably inadequate, and that a six-month moratorium on training state-of-the-art AI systems might not be enough. The idea that the end is nigh is a little unusual, but I agree with both her premises and her conclusion. If you’re interested in this predicament, here is a summary of some of the problems we have yet to overcome.
In a recent blog post, Piper says that transformative AI could accelerate medical research and economic progress by leaps and bounds, in addition to having a weirder and less quantifiable effect on society. As she puts it, “Some of the impacts of that are mostly straightforward to predict. We will almost certainly cure a lot of diseases and make many important goods much cheaper. Some of the impacts are pretty close to unimaginable.”
This framing sounds about right. Transformative AI could stand in for a legion of highly trained specialist researchers, outputting more cognitive labour than every human scientist combined. Presuming we aren’t haplessly imitating the sorcerer’s apprentice, animating our tools to do our work for us with no way to stop them when it all goes wrong, we should expect fast developments in every field.
But there’s one part of this scenario that gives me pause:
If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.
I do not think we are on track to cure cancer in 2200.
This is only in part because cancer is two hundred diseases wearing a trench coat, and only in part because cancer is highly optimized for evading treatment. The problem is that cancer research is not bottlenecked by cognitive labour.
Researchers start by coming up with a theory about how a molecular target they’ve been studying contributes to the development of cancer. They use high-throughput screening to hunt through chemical libraries for druglike compounds that they think will accomplish the effect they want, then test the most promising candidates in an animal model. If those work, they move on to humans. AI and robotics have already accelerated the theorizing and screening steps, but those were already the fastest and least expensive parts of the whole process. The real barrier to curing cancer is logistical.
Bringing new drugs to market is a billion-dollar endeavour. Getting thousands of cancer patients into clinical trials while complying with all of the healthcare regulations in dozens of countries is expensive beyond belief. Worse, fewer than 5% of adult cancer patients participate in clinical trials, owing to factors like oncologists not believing the trial is the best option for their patient or a lack of applicable trials. Since every phase of a clinical trial lasts for months or years, the rate-limiting step for testing new cancer treatments is the amount of patient-time available to researchers.
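To put rough numbers on that bottleneck, here is a back-of-envelope sketch. Every input is an illustrative assumption of mine (a rounded US diagnosis count, a hypothetical enrollment size and trial length), not sourced data, but it shows how patient-time caps the number of treatments we can evaluate at once:

```python
# Back-of-envelope model of the clinical trial bottleneck.
# Every number here is an illustrative assumption, not a sourced figure.

new_patients_per_year = 1_900_000  # rough annual US cancer diagnoses
participation_rate = 0.05          # fewer than 5% of adult patients enroll
patients_per_trial = 300           # hypothetical enrollment for one mid-sized trial
trial_duration_years = 3           # each phase runs for months to years

# Patients entering trials each year, and the steady-state number of
# trials that patient pool can keep running at once.
enrollees_per_year = new_patients_per_year * participation_rate
concurrent_trials = enrollees_per_year * trial_duration_years / patients_per_trial

print(f"{enrollees_per_year:,.0f} trial enrollees per year")
print(f"~{concurrent_trials:,.0f} trials runnable concurrently")
# No amount of faster theorizing or screening raises this ceiling;
# only more enrollees or shorter trials do.
```

However you tweak the inputs, the ceiling only moves when enrollment or trial length moves, which is exactly the part of the pipeline that cognitive labour doesn’t touch.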
(Is this assertion backed by data? Sort of. There’s a correlation between the number of relevant clinical trials and improvement in 5-year cancer survival rates, but realistically I think this effect is swamped by greater research effort going towards more treatable cancers and by physicians preferring to treat recalcitrant cancers with aggressive chemotherapy rather than experimental drugs. I plan to cover this argument in more detail in the future.)
The standards for efficacy and toxicity are relaxed for cancer treatments, but even with the bar lowered, most new treatments are not approved. Marginal improvements in these rates make a difference, and the cancer death rate in America is on the decline, but extrapolating this trend all the way through to cancer being curable by the end of the century is optimistic. Much of the low-hanging fruit has already been picked: smoking and asbestos are less prevalent today than they were in the 20th century, and aggressive population screening for breast, prostate, and colorectal cancer is now the norm, leaving few broad environmental causes left to eliminate and few cancers that are easy to catch at a treatable stage.
Restricted clinical trial capacity, low efficacy, ruinous cost. Overcoming these hurdles in cancer research, and in some other areas of medicine with similar problems, will take resources that current AI technology can’t provide. I don’t believe the likeliest trajectories of the next 177 years get us there either: the structural barriers to success are spread across governments, oncologists, cancer patients, and healthcare organizations around the world. The relatively low rate at which new clinical data is produced means that huge gains earlier in the process will have limited effect on progress towards a cure.
The minimum viable cancer-curing AI is very, very powerful. I highly recommend Bender 2020 as a summary of the advances AI modelling has made in drug discovery. In short, the effect of making the preclinical stages faster and cheaper is completely dominated by even modest improvements in the clinical failure rate, the part of the process that machine learning is least equipped to help with. The preclinical stage relies on proxy metrics to estimate what that failure rate might be, but as Bender notes, these proxies are necessary but far from sufficient. The kind of computer modelling you’d need to genuinely speed up clinical research is currently out of reach.
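To see why the clinical failure rate dominates, consider a toy expected-cost model. The dollar figures and success probability below are illustrative assumptions of mine, not numbers from Bender 2020, but the structure of the argument is the same: every failed candidate pays the full bill, so a modest bump in the success rate outweighs even a drastic cut to preclinical costs.

```python
# Toy model: expected spend per approved drug. All inputs are
# illustrative assumptions, not figures from Bender 2020.

def cost_per_approval(preclinical_cost_m, clinical_cost_m, p_success):
    """Expected cost (in $M) to get one approval, assuming every candidate
    pays the preclinical bill and failures pay the clinical bill too."""
    candidates_needed = 1 / p_success
    return candidates_needed * (preclinical_cost_m + clinical_cost_m)

baseline = cost_per_approval(100, 400, 0.08)            # $6,250M per approval
cheaper_preclinical = cost_per_approval(10, 400, 0.08)  # 90% preclinical cut: $5,125M
better_odds = cost_per_approval(100, 400, 0.12)         # success 8% -> 12%: $4,167M

print(f"baseline:                ${baseline:,.0f}M")
print(f"90% cheaper preclinical: ${cheaper_preclinical:,.0f}M")
print(f"modest clinical gains:   ${better_odds:,.0f}M")
```

With these toy inputs, slashing preclinical costs by 90% saves less than nudging the clinical success rate from 8% to 12%; halving the expected cost outright would require doubling the success rate, which no amount of preclinical savings can match.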
Even if we threw caution to the wind and pushed to create world-changing transformative AI as soon as possible, and even if we succeeded at aligning it with our true desires, the weirder and less quantifiable effects will dramatically change how the world works before we get our cure for cancer.