Combined Crowd Innovation and AI in Producing Algorithms for Radiotherapy Targeting

Key Points

  • A crowd innovation contest produced automated AI algorithms that matched the skills of an expert radiation oncologist in segmenting lung tumors for radiotherapy targeting.
  • An ensemble of the top 5 algorithms exhibited accuracy within the benchmark of mean interobserver variability among 6 experts.

In a study reported in JAMA Oncology, Mak et al found that a crowd innovation contest produced automated artificial intelligence (AI) algorithms “that replicated the skills of a highly trained physician” in segmenting lung tumors for radiotherapy targeting. The investigators also noted that the existing radiation oncologist workforce does not meet growing global demand for radiotherapy.

Study Details

The study consisted of a crowd innovation contest involving an international community of programmers who were asked to produce automated AI algorithms capable of replicating the manual lung tumor segmentations of an expert radiation oncologist. Prizes totaling $55,000 were awarded for the top algorithms.

The contest comprised three phases, with the final phase open by invitation only. Contestants were provided a training set of computed tomography (CT) scans with expert contours and a validation set without expert segmentations to use in developing their algorithms. They also received feedback throughout the contest, including feedback from the study expert.

Final algorithms were evaluated by the study team on a separate (holdout) data set not available to the contestants. Each algorithm was scored by comparing its volumetric segmentation of a given patient’s CT scan against the expert’s segmentation; this comparison generated a custom segmentation score (S score), with a higher score indicating that the automated segmentation of the patient’s entire tumor had a high level of both relative and absolute overlap with the expert’s segmentation. Algorithm performance was also benchmarked against interobserver and intraobserver variation among the study expert and five additional radiation oncologists.
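
The exact formula for the custom S score is not given in this summary; it is described only as combining relative and absolute overlap. As a rough sketch of how volumetric overlap between an algorithm’s tumor mask and an expert’s mask can be measured, the Python snippet below computes the Dice coefficient (relative overlap) and a count of shared tumor voxels (absolute overlap). The function names and the report structure are illustrative assumptions, not the study’s actual scoring code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Relative volumetric overlap between two binary tumor masks (1 = tumor voxel)."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 0.0

def segmentation_report(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Report relative overlap (Dice) and absolute overlap (shared tumor voxels).

    Hypothetical helper: the study's actual S score combines relative and
    absolute overlap, but its exact formula is not given in this summary.
    """
    shared = int(np.logical_and(pred, truth).sum())
    return {"dice": float(dice_coefficient(pred, truth)), "shared_voxels": shared}

# Toy example: two random 3D masks standing in for an algorithm's and an
# expert's contours on the same CT grid.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 32)) > 0.5
truth = rng.random((64, 64, 32)) > 0.5
print(segmentation_report(pred, truth))
```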

Performance of AI Algorithms

Overall, 564 contestants from 62 countries registered for the contest, with 34 (6%) submitting algorithms. In total, 244 algorithm submissions were made in phase 1, 164 in phase 2, and 180 in phase 3. Of the 45 algorithms submitted for final scoring, 10, developed by 9 contestants, were selected as the winning algorithms.

When combined using an ensemble model, the automated segmentations produced by the top 5 AI algorithms exhibited accuracy (Dice coefficient = 0.79) within the benchmark of mean interobserver variation among the study expert and 5 additional radiation oncologists. In phase 1, the top algorithms had average S scores ranging from 0.15 to 0.38. In phase 2, the average S scores of the top algorithms increased to a range of 0.53 to 0.57. In phase 3, the performance of the top algorithm increased by a further 9%. Combining the top 5 algorithms from phases 2 and 3 in ensemble models yielded an additional 9% to 12% improvement, for a final S score of 0.68.
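
The article does not specify how the top algorithms’ outputs were combined. A common, simple ensembling approach for binary segmentations is per-voxel majority voting, sketched below under the assumption that each algorithm outputs a binary tumor mask on the same CT grid; the function name and voting threshold are hypothetical, not the study’s method.

```python
import numpy as np

def majority_vote_ensemble(masks: list[np.ndarray]) -> np.ndarray:
    """Combine binary segmentation masks by per-voxel majority vote.

    A voxel is labeled tumor when more than half of the input masks agree.
    Illustrative only: the study's actual ensembling method is not detailed
    in this summary.
    """
    votes = np.stack(masks).astype(np.int32).sum(axis=0)
    return (votes > len(masks) / 2).astype(np.uint8)

# e.g., consensus = majority_vote_ensemble([m1, m2, m3, m4, m5])
# where each m_i is a binary (Z, Y, X) mask from one of the top 5 algorithms.
```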

Approximately 75% of the ensemble segmentations had an S score higher than 0.60, the lower threshold of expert intraobserver performance. This finding indicated that the contest produced algorithms capable of matching expert performance.

The investigators concluded, “A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health-care settings.”

Raymond H. Mak, MD, of the Department of Radiation Oncology, Brigham and Women’s Hospital/Dana-Farber Cancer Institute, is the corresponding author for the JAMA Oncology article.

Disclosure: This study was funded by the Laura and John Arnold Foundation, Harvard Catalyst, a National Institutes of Health grant to The Harvard Clinical and Translational Science Center, and the Division of Research and Faculty Development at Harvard Business School. For full disclosures of the study authors, visit jamanetwork.com.

The content in this post has not been reviewed by the American Society of Clinical Oncology, Inc. (ASCO®) and does not necessarily reflect the ideas and opinions of ASCO®.

