Scientists from Yale and the University of Cologne have shown that statistical models created by artificial intelligence (AI) predict very accurately whether people with schizophrenia respond to a medication. However, the models are highly context-dependent and cannot be generalized.
In a recent study, the scientists investigated how accurately AI models predict whether people with schizophrenia will respond to antipsychotic medication.
Statistical models from the field of artificial intelligence (AI) have great potential to improve decision-making related to medical treatment. However, the treatment data that can be used to train these models are not only scarce, but also expensive to collect. As a result, the predictive accuracy of such statistical models has so far only been demonstrated in a few data sets of limited size. In the current work, the scientists examined the potential of AI models by testing how accurately they predict the response to antipsychotic medication for schizophrenia across several independent clinical trials.
The results of the new study, conducted by researchers from the Faculty of Medicine of the University of Cologne and Yale University, show that the models predicted patient outcomes with high accuracy within the trial in which they were developed. When applied outside the original trial, however, they performed no better than random predictions, and pooling data across trials did not improve the predictions either. The study ‘Illusory generalizability of clinical prediction models’ was published in Science.
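To illustrate the evaluation contrast described above, the following Python sketch compares within-trial cross-validation with leave-one-trial-out evaluation on purely synthetic data. It is not the study's actual pipeline; the data, model choice and variable names (n_trials, trial_id) are illustrative assumptions.

```python
# Illustrative sketch only (synthetic data, not the study's pipeline):
# within-trial cross-validation vs. leave-one-trial-out evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_trials, n_per_trial, n_features = 5, 200, 20

# Synthetic patients: each "trial" gets its own feature-outcome relationship,
# mimicking the context dependence seen between clinical studies (assumption).
X, y, trial_id = [], [], []
for t in range(n_trials):
    w = rng.normal(size=n_features)                      # trial-specific weights
    Xt = rng.normal(size=(n_per_trial, n_features))
    yt = (Xt @ w + rng.normal(size=n_per_trial) > 0).astype(int)
    X.append(Xt); y.append(yt); trial_id.append(np.full(n_per_trial, t))
X, y, trial_id = np.vstack(X), np.concatenate(y), np.concatenate(trial_id)

model = LogisticRegression(max_iter=1000)

# Within-trial cross-validation: train and test folds come from the same trial.
within = [cross_val_score(model, X[trial_id == t], y[trial_id == t],
                          cv=5, scoring="balanced_accuracy").mean()
          for t in range(n_trials)]

# Leave-one-trial-out: train on all other trials, test on the held-out trial.
across = cross_val_score(model, X, y, groups=trial_id,
                         cv=LeaveOneGroupOut(), scoring="balanced_accuracy")

print(f"within-trial accuracy: {np.mean(within):.2f}")
print(f"cross-trial accuracy:  {np.mean(across):.2f}")   # near 0.5 = chance level
```

On data constructed this way, accuracy is high when the model is evaluated inside the trial it was fitted on and drops to around chance when it is applied to a held-out trial, which mirrors the pattern the study reports.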
The study was led by leading scientists in the field of precision psychiatry, an area of psychiatry that uses data-driven models to identify targeted therapies and suitable medications for individual patients or patient groups.
“Our goal is to use novel models from the field of AI to treat patients with mental health problems in a more targeted manner,” says Dr Joseph Kambeitz, Professor of Biological Psychiatry at the Faculty of Medicine of the University of Cologne and the University Hospital Cologne. “Although numerous initial studies attest to the success of such AI models, the robustness of these models has not yet been demonstrated.”
And this robustness is of great importance for everyday clinical use.
“We have strict quality requirements for clinical models, and we also have to ensure that the models provide good predictions in different contexts,” says Kambeitz. The models should provide equally good predictions whether they are used in a hospital in the USA, Germany or Chile.
The results of the study show that the generalization of AI model predictions across different study centres cannot be ensured at the moment. This is an important signal for clinical practice and shows that further research is needed to actually improve psychiatric care. In ongoing studies, the researchers hope to overcome these obstacles. In cooperation with partners from the USA, England and Australia, they are working on the one hand to examine large patient groups and data sets in order to improve the accuracy of AI models, and on the other hand to use additional data modalities such as biological samples and new digital markers such as speech, movement profiles and smartphone usage.