On-line and Incremental Learning with Convolutional Neural Networks


Many approaches that use convolutional neural networks are designed under the assumption that all training data is available at training time, but in many real-life scenarios this is not the case. Examples are web search and facial recognition, where a fixed model cannot be used because the number of categories (or objects) keeps growing or changing. This type of learning, in which data becomes available over time and the system learns gradually, is called on-line learning; a more restricted version is incremental learning. These settings bring their own challenges: the system has to produce a reliable model at each time step, and it does not know in advance what future data will look like or how it will change during training.
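To make the incremental setting concrete, the small self-contained Python sketch below illustrates the constraint described above: the label space grows over time, each step only exposes the data of the newly arrived classes, yet the model is expected to recognise every class seen so far. The synthetic dataset and the two-classes-per-step schedule are illustrative assumptions, not part of the original work.

```python
# Illustration of class-incremental data arrival: at each step only the
# newly arrived classes' samples are available, but the model must still
# cover all classes seen so far. Dataset and schedule are synthetic.

dataset = [(f"img_{i}", i % 6) for i in range(60)]   # (sample, label) pairs, 6 classes
increments = [{0, 1}, {2, 3}, {4, 5}]                # classes arrive two at a time

seen_classes = set()
for step, new_classes in enumerate(increments):
    seen_classes |= new_classes
    # only the newly arrived classes' samples are available at this step
    available = [(x, y) for (x, y) in dataset if y in new_classes]
    print(f"step {step}: train on {len(available)} samples of classes "
          f"{sorted(new_classes)}, must still recognise {sorted(seen_classes)}")
```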

In the literature, there is much work on this problem for neural networks in general, but little for convolutional neural networks. The work we found in our research shows that the approaches using convolutional neural networks rely on fine-tuning, on boosting, or on a combination of both.

In this work, we propose three main approaches: fine-tuning, combining convolutional neural networks, and a boosting-based method. We compared several setups of these approaches in our experiments. Our results show that all approaches have trouble retaining old information when no images of previously seen classes are available at the current time step. From these results, we conclude that none of our proposed approaches works well in a strictly incremental environment. The accuracies improve considerably when a subset of the images of each class remains available, with the fine-tuning and boosting-based methods achieving the best results. However, these approaches are not yet ready for a real-life environment; more research is needed, and this work provides a starting point for it.
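As a rough illustration of the fine-tuning approach under these assumptions, the sketch below (not the exact code of this work) extends the classifier head for newly arrived classes and fine-tunes on the new data, optionally mixed with a small retained subset of old images, which is the setting in which accuracy stops collapsing in the results above. The PyTorch backbone is assumed to expose its final linear layer as `model.fc` (as torchvision ResNets do); optimiser settings, batch size, and the replay buffer are illustrative choices.

```python
# Hedged sketch of incremental fine-tuning of a CNN: grow the output layer
# for new classes, then fine-tune on new data plus an optional replay subset.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, ConcatDataset

def extend_classifier(model, num_new_classes):
    """Replace the final linear layer (assumed at model.fc) with a wider one,
    copying the weights for the already-known classes."""
    old = model.fc
    new = nn.Linear(old.in_features, old.out_features + num_new_classes)
    with torch.no_grad():
        new.weight[: old.out_features] = old.weight
        new.bias[: old.out_features] = old.bias
    model.fc = new
    return model

def fine_tune_increment(model, new_data, replay_data=None,
                        num_new_classes=2, epochs=5, lr=1e-3):
    """One incremental step: grow the head, then fine-tune on new (+ replayed) data."""
    model = extend_classifier(model, num_new_classes)
    train_set = ConcatDataset([new_data, replay_data]) if replay_data else new_data
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    optimiser = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()
    return model
```

Without `replay_data`, this corresponds to the strictly incremental setting in which all compared approaches lose old classes; passing even a small stored subset of earlier images corresponds to the setting where accuracy improves markedly.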
