Abstract:
In recent years, neural network models have achieved great success in fields such as image segmentation, object detection, and natural language processing (NLP). However, key problems with neural network models remain unsolved, catastrophic forgetting among them. Humans can learn continually without catastrophic forgetting, but neural network models cannot: when a model adapts to a new task, it almost completely forgets previously learned tasks. Many methods have been proposed to address this problem, and this paper surveys them to promote further research on the issue. Existing methods for mitigating catastrophic forgetting in neural network models are introduced in detail and grouped into four categories: exemplar-based methods, parameter-based methods, distillation-based methods, and other methods. Evaluation schemes for measuring how well different methods alleviate catastrophic forgetting are then described. Finally, the paper offers an open discussion of the catastrophic forgetting problem in neural network models and suggests directions for future research.