We combined four machine learning techniques (MLTs) and four data preprocessing strategies for class imbalance to identify the best-performing strategy for screening PubMed articles for inclusion in systematic reviews (SRs). We used textual data from 14 systematic reviews as case studies. Meta-analytic fixed-effect models were used to pool delta AUCs separately by classifier and strategy. Resampling techniques slightly improved the performance of the investigated machine learning techniques; from a computational perspective, random undersampling at a 35:65 ratio may be preferred.
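As a minimal sketch of the two computational steps summarized above, assuming a binary relevance label where 1 marks included articles, the snippet below illustrates random undersampling of the majority class to a 35:65 ratio and inverse-variance (fixed-effect) pooling of per-review delta AUCs. The function names and the NumPy-based implementation are illustrative assumptions, not the original code.

```python
import numpy as np

def undersample_35_65(X, y, seed=0):
    """Randomly drop majority-class (label 0) rows until the
    minority:majority ratio is roughly 35:65 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    n_major = min(len(majority), int(round(len(minority) * 65 / 35)))
    keep = np.concatenate([minority, rng.choice(majority, n_major, replace=False)])
    rng.shuffle(keep)
    return X[keep], y[keep]

def fixed_effect_pool(delta_aucs, variances):
    """Inverse-variance weighted fixed-effect pooling of per-review delta AUCs."""
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * np.asarray(delta_aucs, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled estimate
    return pooled, se
```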
Five hundred Random Forests were trained on bootstrap samples of the whole dataset of 1789 emergency department (ED) visits to perform the classification task. MLTs appear to be a promising means of exploiting the unstructured information reported in ED records in low- and middle-income Spanish-speaking countries.
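The sketch below outlines the bootstrap ensemble described above, assuming a numeric feature matrix already extracted from the ED records and scikit-learn's RandomForestClassifier; the number of trees per forest and the probability-averaging step are illustrative assumptions rather than the original configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bootstrap_forests(X, y, n_models=500, seed=0):
    """Train one Random Forest per bootstrap resample of the full dataset
    (e.g. the 1789 ED visits) and return the fitted models."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for i in range(n_models):
        idx = rng.choice(n, size=n, replace=True)  # bootstrap sample, same size as the data
        clf = RandomForestClassifier(n_estimators=100, random_state=i)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models

def ensemble_scores(models, X_new):
    """Average the predicted positive-class probabilities across all forests."""
    return np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)
```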
The proposed machine learning instrument has the potential to help researchers identify relevant studies in the SR process, reducing workload without losing sensitivity and at a small cost in specificity.
Six different examples were used to illustrate and compare the performance of MLTs and non-MLT techniques; this exercise served to develop a decision tree that, based on a set of predefined criteria, provides a guideline for selecting a fit-for-purpose MLT.