Bilal Mateen

[Photo: bilal_mateen.jpg]

Bilal is a clinical academic at King's College Hospital (KCH) and a clinical data science fellow at The Alan Turing Institute (the UK's national institute for data science and artificial intelligence). He spends the vast majority of his time coming up with 'fun' acronyms for his research, e.g. Project E.A.R. (Evaluating Algorithms Responsibly) and the T.R.E.E. (Transparent, Reproducible, Ethical, and Effective) framework for assessing machine learning (ML) and artificial intelligence (AI)-based predictive models in medicine [we did in fact draw a picture of a tree for the publication, but the journal thought it added little value; we disagree… let us know what you think].

Legend: (Left) A visualisation of a prediction modelling workflow, containing the core elements of the process, from the data sources and well-described methods to the modelled outcomes. (Right) An artistic demonstration of what happens when the four core principles of high-quality ML/AI work are corrupted. The lack of Transparency in reporting leads to an obscured representation of the work, which is not Reproducible due to the absence of clearly stated assumptions or available data. Disreputable Ethics regarding the true goals result in rotten fruit, which in turn is not Effective in creating a thriving environment that provably improves health outcomes.