The impact of deploying artificial intelligence (AI) for radiation cancer therapy in a real-world clinical setting has been tested by Princess Margaret Cancer Centre researchers in a unique study involving physicians and their patients.
A team of researchers directly compared physician evaluations of radiation treatments generated by a machine learning (ML) algorithm with evaluations of conventional treatments generated by humans.
They found that for the majority of the 100 patients studied, physicians deemed the ML-generated treatments clinically acceptable for patient use.
Results from the study team, led by Drs. Chris McIntosh, Leigh Conroy, Ale Berlin, and Tom Purdie, were published in Nature Medicine on June 3.
"We have shown that AI can be better than human judgement for curative-intent radiation therapy treatment, says Dr. McIntosh, scientist at the Peter Munk Cardiac Centre, Techna Institute, and chair of Medical Imaging and AI at the Joint Department of Medical Imaging and University of Toronto. "In fact, it is amazing that it works so well.
"A major finding is what happens when you actually deploy it in a clinical setting in comparison to a simulated one."
Adds Dr. Purdie, medical physicist at the Princess Margaret Cancer Centre: "There has been a lot of excitement generated by AI in the lab, and the assumption is that those results will translate directly to a clinical setting. But we sound a cautionary alert in our research that they may not.
"Once you put ML-generated treatments in the hands of people who are relying upon it to make real clinical decisions about their patients, that preference towards ML may drop," says Dr. Purdie, who is also an Associate Professor, Department of Radiation Oncology, University of Toronto.
"There can be a disconnect between what's happening in a lab-type of setting and a clinical one."
In the study, treating radiation oncologists were asked to evaluate two different radiation treatments – either ML or human-generated ones – with the same standardized criteria in two groups of patients who were similar in demographics and disease characteristics.
The difference was that one group of patients had already received treatment so the comparison was a "simulated" exercise. The second group of patients were about to begin radiation therapy treatment, so if AI-generated treatments were judged to be superior and preferable to their human counterparts, they would be used in the actual treatments.
Overall, 89 per cent of ML-generated treatments were considered clinically acceptable, and 72 per cent were selected over conventional human-generated treatments in head-to-head comparisons.
Moreover, the ML radiation treatment process was 60 per cent faster than the conventional human-driven process, reducing the overall time from 118 hours to 47 hours. In the long term, this could represent substantial cost savings through improved efficiency while at the same time improving quality of clinical care, a rare win-win.
The study also has broader implications for AI in medicine.
While the ML treatments were overwhelmingly preferred when evaluated outside the clinical environment, as in most scientific studies, physician preference for them changed when the chosen treatment, ML- or human-generated, would actually be used to treat the patient.
In that situation, the number of ML treatments selected for patient treatment fell significantly, a note of caution for teams considering deploying inadequately validated AI systems.
Oncologists were not told whether each radiation treatment had been designed by a human or a machine.
Human-generated treatments were created individually for each patient, as per normal protocol, by a specialized radiation therapist. By contrast, each ML treatment was developed by a computer algorithm trained on a high-quality, peer-reviewed database of radiation therapy plans from 99 patients previously treated for prostate cancer at the Princess Margaret.
For each new patient, the ML algorithm automatically identifies the most similar patients in the database using similarity metrics learned from thousands of features drawn from patient images and the delineated target and healthy organs that are a standard part of the radiation therapy treatment process. The complete treatment for the new patient is then inferred from those most similar patients, according to the ML model.
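At its core, the retrieval step described above resembles a nearest-neighbour search under a learned similarity metric. The following minimal sketch is purely illustrative and not drawn from the study: the function name, the toy feature values, and the fixed weight vector standing in for the learned metric are all hypothetical.

```python
import numpy as np

def most_similar(new_features, db_features, weights, k=3):
    """Return indices of the k database patients closest to the
    new patient under a weighted Euclidean distance.

    In the real system the metric is learned from thousands of
    image and contour features; here `weights` is a stand-in."""
    diffs = db_features - new_features           # (n_patients, n_features)
    dists = np.sqrt(((diffs * weights) ** 2).sum(axis=1))
    return np.argsort(dists)[:k]

# Toy database: 5 prior patients, 4 features each (e.g. target
# volume, distances to healthy organs) -- illustrative values only.
db = np.array([
    [1.0, 0.2, 3.1, 0.9],
    [0.9, 0.3, 3.0, 1.1],
    [2.5, 1.0, 1.2, 0.4],
    [0.8, 0.2, 2.9, 1.0],
    [2.4, 1.1, 1.0, 0.5],
])
weights = np.array([1.0, 2.0, 0.5, 1.0])  # hypothetical learned weights

new_patient = np.array([0.95, 0.25, 3.05, 1.0])
neighbours = most_similar(new_patient, db, weights, k=3)
# The full radiation plan would then be inferred from the plans
# of these most-similar prior patients.
```

In this sketch the weighted distance plays the role of the learned similarity metric; the actual system infers a complete, deliverable treatment plan from the retrieved neighbours rather than simply returning their indices.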
Although ML-generated treatments were rated highly in both patient groups, the results in the pre-treatment group diverged from the post-treatment group.
In the group of patients who had already received treatment, ML-generated treatments were selected over human ones 83 per cent of the time. This dropped to 61 per cent in the group evaluated before treatment, when the chosen plan would actually be delivered.
"In this study, we're saying researchers need to pay attention to a clinical setting," says Dr. Purdie. "If physicians feel that patient care is at stake, then that may influence their judgement, even though the ML treatments are thoroughly evaluated and validated."
Dr. Conroy, medical physicist at the Princess Margaret, points out that following the highly successful study, ML-generated treatments are now used to treat the majority of prostate cancer patients at the cancer centre.
That success is due to careful planning, judicious stepwise integration into the clinical environment, and involvement of many stakeholders throughout the process of establishing a robust ML program, she explains. She adds that the program is constantly refined: oncologists are continuously consulted and give feedback, and data on how well the ML treatments meet clinical standards are shared with them.
"We were very systematic in how we integrated this into the clinic at Princess Margaret," says Dr. Berlin, clinician-scientist and radiation oncologist at the cancer centre. "To build this novel software, it took about six months, but to get everyone on board and comfortable with the process, it took more than two years.
"Vision, audacity and tenacity are key ingredients, and we are fortunate at Princess Margaret to have leaders across disciplines that embody these attributes," says Dr. Berlin, who is also an Assistant Professor, Department of Radiation Oncology, University of Toronto.
The success of launching a study of this calibre relied heavily on the commitment of the entire genitourinary radiation cancer group at the Princess Margaret, including radiation oncologists, medical physicists, and radiation therapists. It was a large multidisciplinary team effort with the shared goal of improving radiation cancer treatment for patients at the Princess Margaret.
The team is also expanding its work to other cancer sites, including lung and breast cancer, with the goal of reducing cardiotoxicity, a possible side effect of treatment.
Other authors who contributed to this paper include: Michael C. Tjong, Tim Craig, Andrew Bayley, Charles Catton, Mary Gospodarowicz, Joelle Helou, Naghmeh Isfahanian, Vickie Kong, Tony Lam, Srinivas Raman, Padraig Warde, and Peter Chung.
This work was supported by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and the Princess Margaret Cancer Foundation.
Competing Interests
Chris McIntosh and Thomas Purdie receive royalties from RaySearch Laboratories in relation to ML radiation treatment technologies. The remaining authors report no competing interests with this study.
This story first appeared on UHN News.