At D&D we have started working on a whole new batch of machine learning projects. Whilst we are cognisant that we need to improve our core stock, we are excited about the possibility that we can start to broaden our portfolio and range of solutions.
This is where this solution comes in.
Why Radiology Turnaround Times?
Bottlenecks in radiology testing are one of the key constraints on effective flow through a hospital pathway. Moreover, research in the GM Journal highlights that delays in scan turnaround also have a significant influence on patient care (see: https://www.gmjournal.co.uk/delays-in-radiology-scans-affecting-patient-care). Opponents of this article suggest that the problem is simply one of capacity, and that a recruitment push is needed to bring in more radiologists (see: https://www.rcr.ac.uk/posts/nhs-does-not-have-enough-radiologists-keep-patients-safe-say-three-four-hospital-imaging).
Whatever the cause, there is a clear need for more effective and efficient ways to speed up the turnaround of these tests, and of diagnostics more broadly.
Development of a tool to predict and optimise turnaround times
We have decided to develop a tool to tackle the problem described above. Our aim is to let healthcare providers and services predict the turnaround time for each patient and plan accordingly. This should lead to more effective planning, more efficient use of dwindling radiologist capacity and, ultimately, better patient outcomes.
How are we doing this?
The first answer is that we are using state-of-the-art machine learning algorithms to detect the underlying patterns that drive variance in the turnaround of these tests.
That said, because the problem is complex, we expect this research and product to need further iteration and development.
We decided to use an algorithm called xgbDART, which applies dropout to the tree ensemble: at each boosting round, a random subset of the existing trees is temporarily dropped before the next tree is fitted, so no single tree can dominate the final prediction. To understand the full technicals see: https://xgboost.readthedocs.io/en/latest/tutorials/dart.html.
We say this is state-of-the-art as it is quite truly fresh off the shelf (developed by researchers from the Department of Electrical Engineering and Computer Science at UC Berkeley). It builds on the logic behind multiple additive regression trees (MART; see: http://statweb.stanford.edu/~jhf/R-MART) and addresses one of their weaknesses: trees added late in the ensemble tend to become too specialised at detecting the underlying patterns in the data; in other words, they tend to over-fit on the training data.
In our use case this algorithm outperformed all of the tried and tested, commonly used models, such as Random Forest, Gradient Boosting Machines, Recursive Partitioning Trees and even MART itself.
What factors do we look at in making the prediction?
The factors we use in predicting TAT (turnaround time) are those available before the diagnostic examination is performed, such as patient age, patient type, modality and urgency, amongst others.
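As a sketch of how such pre-exam factors can be shaped into model inputs, the snippet below one-hot encodes the categorical fields with pandas. The column names and category values are hypothetical examples, not our real schema.

```python
import pandas as pd

# Hypothetical pre-exam request records; fields mirror the factors
# listed above (age, patient type, modality, urgency).
requests = pd.DataFrame({
    "patient_age": [67, 34, 81],
    "patient_type": ["inpatient", "outpatient", "emergency"],
    "modality": ["CT", "MRI", "X-ray"],
    "urgency": ["routine", "urgent", "routine"],
})

# One-hot encode the categorical factors so a tree ensemble can use them.
features = pd.get_dummies(requests,
                          columns=["patient_type", "modality", "urgency"])
print(features.columns.tolist())
```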
Further iterations and improvement
The model does need further development, but we are excited about this new project.
At D&D we are always aiming for excellence in improving our scripts and systems. The data science team is researching new methods to improve accuracy and prediction strength. We are on a never-ending quest for better models to solve complex healthcare problems.
However, combining the model with our renowned D&D Feature Importance Locator Engine (FILE), we can obtain predictions that are trustworthy and well tested, and that allow preventative actions to be put into effect sooner.
We will cover the FILE method in more detail in a future post.