We hypothesize that deep learning can also yield state-of-the-art performance for small data sets, by meta-learning how to regularize the networks to avoid overfitting and by further improving the resulting models through ensembling. We aim to develop the fundamental methods for doing so and to apply them to various data modalities. We will extend the advances made for rather large tabular datasets to “scale down” deep learning so that it is also effective in the small-data regime. Specifically, we will develop approaches that search for optimal combinations of regularization methods, using a meta-learning approach across many small datasets, and that ensemble different combinations of regularization methods. We will also tackle the more structured data modalities of longitudinal data and image data.
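To illustrate the general idea (not the project's actual method), the following minimal Python sketch randomly searches combinations of regularization settings for a small neural network across several small datasets, selects configurations by average rank, and ensembles the predictions of the top configurations. It uses scikit-learn and synthetic data purely for illustration; the regularizers shown (L2 weight decay via alpha, early stopping, network capacity) stand in for the much richer set of modern regularization methods the project targets.

import numpy as np
from itertools import product
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier

# A handful of small synthetic "datasets" standing in for a meta-learning collection.
datasets = [make_classification(n_samples=150, n_features=20, random_state=s) for s in range(5)]

# Candidate combinations of regularization settings.
grid = list(product([1e-4, 1e-3, 1e-2],      # alpha: L2 weight decay
                    [True, False],           # early stopping on a held-out split
                    [(32,), (64, 32)]))      # network capacity

def evaluate(config, X, y):
    alpha, early_stop, hidden = config
    clf = MLPClassifier(hidden_layer_sizes=hidden, alpha=alpha,
                        early_stopping=early_stop, max_iter=500, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Score every configuration on every dataset, then average its rank across datasets.
scores = np.array([[evaluate(cfg, X, y) for cfg in grid] for X, y in datasets])
ranks = (-scores).argsort(axis=1).argsort(axis=1)        # lower rank = better on that dataset
top_cfgs = [grid[i] for i in np.argsort(ranks.mean(axis=0))[:3]]

# Ensemble the top configurations on a new small dataset by averaging predicted probabilities.
X, y = make_classification(n_samples=150, n_features=20, random_state=99)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
probas = []
for alpha, early_stop, hidden in top_cfgs:
    clf = MLPClassifier(hidden_layer_sizes=hidden, alpha=alpha,
                        early_stopping=early_stop, max_iter=500, random_state=0)
    probas.append(clf.fit(Xtr, ytr).predict_proba(Xte))
ensemble_acc = (np.mean(probas, axis=0).argmax(axis=1) == yte).mean()
print(f"ensemble accuracy: {ensemble_acc:.3f}")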
Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center – University of Freiburg