Synergistic human-machine prediction: active error analysis and mitigation with Gaussian process regression
Date
2020
Authors
Publisher
University of Delaware
Abstract
Before deployment, a machine learning model is evaluated on both the training and validation sets. Assuming the latter is a representative sample, the validation performance offers an estimate of how well the model will perform on a test set. Even if the performance meets specifications, systematic errors may remain, caused by model underfitting, poor model design, or even overfitting. We propose performing error analysis on the training and validation sets so that, during deployment, a user is alerted when instances similar to previous systematic errors arise. Triggering user vigilance during deployment improves the synergistic operation of the machine and the user. Our model-agnostic approach interpolates the distribution of errors (taking cues from both the training and validation sets), optimizes the threshold for alerting a user, and requests verification for possibly erroneous predictions that exceed the threshold. Under the assumption that the user would make the correct decision, the approach is evaluated by the reduction in loss while seeking to maintain a budget of interventions. The framework is tested on illustrative examples and real-world data sets where machine learning models have systematic errors. We conclude with a discussion of the limitations and areas for future work.
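The workflow described in the abstract (interpolate the error distribution with Gaussian process regression, set an alert threshold under an intervention budget, and request verification for instances whose predicted error exceeds it) can be sketched as follows. The base model, the use of absolute validation residuals as the error signal, scikit-learn, the synthetic data, and the quantile rule for the budget are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch, assuming scikit-learn, a 1-D synthetic regression task,
# absolute validation residuals as the error signal, and a quantile-based
# threshold chosen to respect a fixed intervention budget.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Base model trained on data it systematically underfits (linear fit to a sine).
X_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(2 * X_train[:, 0]) + 0.1 * rng.standard_normal(200)
base_model = LinearRegression().fit(X_train, y_train)

# Validation residuals serve as the error signal to interpolate.
X_val = rng.uniform(-3, 3, size=(100, 1))
y_val = np.sin(2 * X_val[:, 0]) + 0.1 * rng.standard_normal(100)
val_errors = np.abs(y_val - base_model.predict(X_val))

# Gaussian process regression interpolates the error distribution over inputs.
error_gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
    normalize_y=True,
).fit(X_val, val_errors)

# Choose the alert threshold so the expected alert rate matches a budget of
# user interventions (here: flag roughly 10% of inputs).
budget = 0.10
predicted_val_errors = error_gp.predict(X_val)
threshold = np.quantile(predicted_val_errors, 1.0 - budget)

# At deployment, request user verification when the interpolated error for a
# new instance exceeds the threshold.
X_test = rng.uniform(-3, 3, size=(50, 1))
predicted_errors = error_gp.predict(X_test)
alerts = predicted_errors > threshold
print(f"Flagged {alerts.sum()} of {len(X_test)} instances for verification.")
```

Under the abstract's assumption that the user makes the correct decision on flagged instances, the benefit of such a scheme would be measured by how much the overall loss drops while the alert rate stays within the intervention budget.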
Keywords
Collaborative AI, Machine learning, Safe AI