The history of decision-support systems in clinical medicine dates to the 1950s, when the first articles appeared that analyzed the relevance of formal probabilistic analysis to medical diagnosis and the potential role of computers in assisting with the relevant calculations. By the 1960s many investigators were exploring the application of Bayes' theorem (a formula that relates the prevalence of a disease to the characteristics of the tests used to diagnose it), and it became clear that computers could correctly diagnose complex diseases if they were provided with all the appropriate prevalence data and corresponding conditional probabilities. But the era of Bayesian exploration, which continued well into the 1980s, led to the identification of many weaknesses in the approach. Most challenging was its appetite for data that often were not available, but the formal approach also rested on assumptions that limited its applicability. Notable among these was its inability to diagnose multiple simultaneous diseases, since it sought to explain all abnormalities by a single unifying hypothesis. Equally limiting was its inability to explain the basis for its assessments (except in mathematical terms) or to meld smoothly with the practice styles of busy practitioners. Such limitations provoked, in the 1970s, a set of projects that explored the use of artificial intelligence methods in medical decision-making settings. The importance of cognitive processing by clinicians was increasingly appreciated, and some scientists sought to model such reasoning in computer programs that would assess a patient’s disease state or recommend therapeutic interventions. Key examples from that era were the Internist-1 program, a diagnostic tool from the University of Pittsburgh, and the MYCIN system, a program to assist with antimicrobial selection, developed at Stanford University. These “expert systems” were soon generalized to many other applications, both within and outside clinical medicine, leading to explosive interest in the area, with many articles in the lay media as well as in scientific journals.
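To see why the Bayesian approach was so data-hungry, it helps to write out the theorem as the diagnostic programs of that era used it. The following is a generic sketch; the notation is illustrative rather than drawn from any particular system:

$$
P(D_i \mid E) \;=\; \frac{P(E \mid D_i)\,P(D_i)}{\sum_{j=1}^{m} P(E \mid D_j)\,P(D_j)}
$$

Here $D_1, \ldots, D_m$ are the candidate diseases (assumed mutually exclusive and exhaustive, which is precisely the single-disease limitation noted above), $P(D_i)$ is each disease's prevalence, and $E$ is the observed evidence. Even under the common simplifying assumption that the $n$ findings are conditionally independent given the disease, so that $P(E \mid D_i) = \prod_{k=1}^{n} P(e_k \mid D_i)$, a program needs $m$ prevalence figures and on the order of $m \times n$ conditional probabilities; without the independence assumption, the number of probabilities required grows exponentially with $n$.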
The 1980s was a period of reflection, as it became clear that the ambitious predictions of how AI would revolutionize medicine had not come to pass. There was increasing focus on how to integrate decision support with clinicians’ workflow, leading many investigators to turn to clinical data systems and electronic medical records that could support tight integration with decision-support functionality. The most visible and effective systems in routine use were those that integrated “warnings” or “alerts” into clinical data management environments, notifying clinicians when elements in a patient’s electronic chart indicated that some kind of intervention or change should be considered. There were also efforts to represent and implement the increasingly popular clinical guidelines, which provided consensus-based, and sometimes evidence-based, recommendations regarding the workup or management of patients with specific complaints or syndromes.
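The logic behind such alerts was typically simple and rule-based: a trigger in the patient's chart, such as a new laboratory value or medication order, was checked against a condition, and a warning was issued on a match. The sketch below is a minimal illustration of the idea; the chart data model and the drug–lab rule are hypothetical, invented here for illustration rather than taken from any actual system of the period.

```python
from dataclasses import dataclass

@dataclass
class LabResult:
    name: str
    value: float
    units: str

@dataclass
class PatientChart:
    active_medications: list[str]
    recent_labs: list[LabResult]

def hyperkalemia_alert(chart: PatientChart) -> str | None:
    """Warn when a potassium-sparing drug is active and the most recent
    serum potassium exceeds a (deliberately simplified) threshold."""
    potassium = next(
        (lab for lab in reversed(chart.recent_labs)
         if lab.name == "serum_potassium"),
        None,
    )
    if (potassium is not None
            and potassium.value > 5.0
            and "spironolactone" in chart.active_medications):
        return (f"ALERT: serum potassium {potassium.value} {potassium.units} "
                f"while on spironolactone; consider reviewing the order.")
    return None

# A hypothetical chart that should trigger the rule.
chart = PatientChart(
    active_medications=["spironolactone", "lisinopril"],
    recent_labs=[LabResult("serum_potassium", 5.6, "mmol/L")],
)
print(hyperkalemia_alert(chart) or "No alert.")
```

Systems of this kind ran inside the clinical data management environment itself, so rules fired as new chart data arrived rather than only when a clinician asked for advice.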
In the last two decades the focus has been on integration with electronic health records (EHRs), with capabilities that range from alerts during order entry to “infobuttons” that provide access to pertinent background information directly from within the EHR. But the long-sought goal of well-integrated, patient-specific decision support, focused on diagnosis or complex disease management, remains to be achieved. The good news, however, is that the infrastructure is finally in place to facilitate such integrated capabilities. Questions remain about the willingness of EHR vendors either to implement such capabilities themselves or to provide mechanisms that let investigators who seek to offer information and advice at the point of care integrate their own decision-support tools.
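As a concrete illustration of the infobutton mechanism, HL7's context-aware knowledge retrieval ("Infobutton") standard defines a URL-based interface: the EHR encodes the patient context, such as a coded diagnosis, as query parameters on a link to a knowledge resource. The sketch below builds such a request against the publicly documented MedlinePlus Connect service; the diagnosis code shown is illustrative, standing in for whatever the EHR would supply from the chart.

```python
# Building an HL7 infobutton-style request. The parameter names follow
# the MedlinePlus Connect documentation; the diagnosis is illustrative.
from urllib.parse import urlencode

MEDLINEPLUS_CONNECT = "https://connect.medlineplus.gov/service"

params = {
    "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.90",  # ICD-10-CM code system (OID)
    "mainSearchCriteria.v.c": "J45.909",                  # unspecified asthma, uncomplicated
    "mainSearchCriteria.v.dn": "Unspecified asthma, uncomplicated",
    "knowledgeResponseType": "application/json",          # request a machine-readable response
}

url = f"{MEDLINEPLUS_CONNECT}?{urlencode(params)}"
print(url)  # an EHR would open this link in a panel beside the chart
```

One reason infobuttons spread more readily than patient-specific diagnostic advice is visible in this design: the EHR only has to assemble a URL from context it already holds, while all of the knowledge lives on the other side of the link.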