To date, most applications of Artificial Intelligence (AI) in healthcare have addressed clinical questions about diseases and patient care. Now that many hospitals have electronic health records (EHRs), there is potential to use AI for operational purposes. Patient flow in hospitals is difficult to plan for because hospitals are highly connected systems in which capacity constraints in one area (for example, a lack of ward beds) block the flow of patients from other locations. Management of emergency admissions is particularly complex, requiring bed managers to balance a known quantity (planned inpatient admissions) with an unknown one (an uncertain number of emergency patients requiring beds).
In hospitals with EHRs, staff record patient data at the point of care, creating an opportunity to use real-time data for operational planning. In this study, we present a prediction pipeline that uses live EHR data for patients in a hospital emergency department (ED) to generate short-term, probabilistic forecasts of emergency admissions.
One advantage of using real-time EHR data for operational planning is that it can make use of accumulating data about each patient. To calculate each patient’s probability of admission, we trained 12 Machine Learning (ML) models on data available at successively longer elapsed times after the patient visit began. Figure 1 shows which features in the data were most important in each of the 12 models. At the outset, little is recorded about the patient other than age, arrival method, prior admission history and triage scores, so these are the most important. As the elapsed time increases, these diminish in importance as other signals of likely admission (like vital signs, lab results and consult requests) become available.
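At prediction time, each patient's elapsed time in the ED determines which of the successively trained models applies to them. A minimal sketch of that lookup, assuming hypothetical elapsed-time cutoffs in minutes (the study used 12 models; three are shown for brevity):

```python
from bisect import bisect_right

# Hypothetical cutoffs (minutes since arrival) at which successive models
# were trained; the actual 12 cutoffs are described in the paper.
CUTOFFS = [0, 90, 180]

def model_index_for(elapsed_minutes):
    """Pick the latest model whose training cutoff this visit has reached."""
    return bisect_right(CUTOFFS, elapsed_minutes) - 1
```

A patient who has been in the ED for 45 minutes would be scored by the first model (only arrival-time features available), while one at 200 minutes would be scored by the third, which can also draw on vital signs, lab results and consult requests.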
We worked closely with hospital bed managers to understand how to make these individual-level predictions most useful to them. From their point of view, knowing the probability that a particular patient will be admitted is less valuable than knowing, in aggregate, how many patients to plan for. In this respect, a prediction tool that can provide a probability distribution for the number of admissions in a given prediction window is more useful than one that solely estimates the probability of admission at the patient level. Such projections must also allow for patients who are not in the ED at the prediction time but will arrive later and be admitted within the window.
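One standard way to turn patient-level probabilities into a distribution over the total count (assuming admissions are independent) is the Poisson-binomial distribution, built by repeated convolution; the exact construction used in the pipeline is described in the paper, but a minimal sketch of the idea is:

```python
import numpy as np

def admission_count_distribution(probs):
    """Convolve independent per-patient admission probabilities into a
    probability distribution over the total number of admissions."""
    dist = np.array([1.0])  # with no patients, P(0 admissions) = 1
    for p in probs:
        # Adding one patient: either not admitted (1 - p) or admitted (p).
        dist = np.convolve(dist, [1.0 - p, p])
    return dist

# Three illustrative patients currently in the ED:
dist = admission_count_distribution([0.9, 0.5, 0.2])
# dist[k] is the probability that exactly k of them are admitted.
```

Bed managers can then read off, for example, the probability of needing more than a given number of beds, rather than juggling a list of per-patient scores.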
We therefore developed a pipeline that begins by applying ML to live data for each patient currently in the ED, as described above, and follows a series of steps to convert the ML predictions into aggregate predictions for the total number of admissions to plan for. The seven steps are shown in Figure 2, using a real example of patients in the ED at 16:00 on 11 May 2021.
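The later steps must also fold in patients who have not yet arrived. A common modelling choice (an assumption here, not necessarily the paper's exact method) is to treat admissions among later arrivals as Poisson-distributed and convolve that with the count distribution for patients already in the ED:

```python
import math
import numpy as np

def poisson_pmf(rate, max_k):
    """Truncated Poisson probabilities P(K = 0), ..., P(K = max_k)."""
    return np.array([math.exp(-rate) * rate**k / math.factorial(k)
                     for k in range(max_k + 1)])

def total_admissions_distribution(in_ed_dist, arrival_rate, max_arrivals=20):
    """Combine the count distribution for patients already in the ED with a
    Poisson distribution for patients who arrive after the prediction time
    and are admitted within the window."""
    return np.convolve(in_ed_dist, poisson_pmf(arrival_rate, max_arrivals))

# Illustrative inputs: a small in-ED count distribution and an expected
# two later admissions within the window.
combined = total_admissions_distribution(np.array([0.3, 0.5, 0.2]), 2.0)
```

The arrival rate itself would be estimated from historical arrival and admission patterns for the relevant time of day.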
Our predictions outperformed a six-week rolling average, the benchmark conventionally used in the sector to predict daily admission numbers, as shown in Figure 3. They also improved on the benchmark (which only projects up to midnight) by enabling predictions for short time horizons of 4 and 8 hours at various times throughout the day.
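For concreteness, the rolling-average benchmark amounts to predicting each day's admissions as the mean of the preceding six weeks; a minimal sketch (the sector's exact windowing convention may differ):

```python
import numpy as np

def rolling_average_benchmark(daily_admissions, window=42):
    """Predict each day's admissions as the mean of up to the previous
    `window` (six weeks = 42) days; undefined for the first day."""
    counts = np.asarray(daily_admissions, dtype=float)
    return np.array([counts[max(0, i - window):i].mean() if i > 0 else np.nan
                     for i in range(len(counts))])
```

Such a benchmark cannot respond to who is actually in the ED right now, which is precisely the information the pipeline exploits.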
There are challenges associated with developing models for real-time implementation. The models cannot rely on data collated at the end of each day (as is often the case in hospital management reporting), or on data that is coded retrospectively (as often happens with the coding of patient diseases). Our models achieved performance comparable to other studies while using only data available in real time. If models are to be used operationally, their performance needs to be sustained even as care provision, patient characteristics and the systems used to capture data evolve. We found that operational variations during the first year of the Covid-19 pandemic in how long patients took to be admitted affected our time-window predictions. The staged nature of the seven-step pipeline enabled us to identify where data drift was occurring and retrain models to allow for it.
Real-time operational models need to cover the ‘last mile’ of AI deployment; this means that the applications can run end-to-end without human intervention. This last mile is the most neglected, leading to calls for a delivery science for AI, in which AI is viewed as an enabling component within an operational workflow, rather than an end in itself. We created an application that runs four times daily, applying each step of the pipeline to generate predictions and emailing these to bed managers. An example of the email output is shown in the paper. Our work provides a practical example of an ML-based modelling approach that is designed and fit for the purpose of informing real-time operational management of emergency admissions.
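The scheduling side of such an application can be very simple; a sketch of picking the next of four daily runs, with hypothetical run times (the actual schedule is agreed with the hospital):

```python
from datetime import datetime, time, timedelta

# Hypothetical run times for the four daily prediction runs.
RUN_TIMES = [time(6, 0), time(12, 0), time(16, 0), time(22, 0)]

def next_run(now):
    """Return the next scheduled prediction run at or after `now`."""
    for t in RUN_TIMES:
        candidate = datetime.combine(now.date(), t)
        if candidate >= now:
            return candidate
    # Past the last run of the day: roll over to tomorrow's first run.
    return datetime.combine(now.date() + timedelta(days=1), RUN_TIMES[0])
```

At each run time the application executes the full seven-step pipeline against live EHR data and emails the resulting distributions to bed managers.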
 Yu, K. H., Beam, A. L. & Kohane, I. S. Artificial intelligence in healthcare. Nat. Biomed. Eng. 2, 719–731 (2018).
 Pianykh, O. S. et al. Improving healthcare operations management with machine learning. Nat. Mach. Intell. 2, 266–273 (2020).
 Cabitza, F., Campagner, A. & Balsano, C. Bridging the “last mile” gap between AI implementation and operation: “data awareness” that matters. Ann. Transl. Med. 8, 501 (2020).
 Li, R. C., Asch, S. M. & Shah, N. H. Developing a delivery science for artificial intelligence in healthcare. Npj Digit. Med. 3, 1–3 (2020).
 The code for this project can be viewed at https://github.com/zmek/real-time-admissions
The contributors to this research are Zella King, Joseph Farrington, Martin Utley, Enoch Kung, Samer Elkhodair, Steve Harris, Richard Sekula, Jonathan Gillham, Kezhi Li, and Sonya Crowe.
The work was funded by grants from the Wellcome Institutional Strategic Support Fund (ISSF) UCL and Partner Hospitals and the NIHR UCLH Biomedical Research Centre. Some of the contributors were funded by the National Institute for Health Research and NHSX (see paper for award reference numbers). The views expressed in this publication are those of the authors and not necessarily those of the National Institute for Health Research, NHSX or the Department of Health and Social Care.