Review | Paradigm Shift in Natural Language Processing

In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. However, the paradigm shifts emerging across various NLP tasks have not been systematically reviewed and analyzed. The research team of Prof. Qiu Xipeng (Fudan University) summarizes recent advances and trends in this line of research, namely paradigm shift, or paradigm transfer. The paper reviews the phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve a wide range of NLP tasks. The work is published in the third issue of Machine Intelligence Research in 2022. Download the full text for free now!



In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms.

For example, the sequence labeling paradigm is typically adopted to solve tasks such as POS tagging, named entity recognition (NER), and chunking, while the classification paradigm is adopted for tasks like sentiment analysis.
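To make the contrast concrete, here is a minimal sketch of the input/output formats of these two paradigms (the examples are illustrative and not taken from the paper):

```python
# Sequence labeling (SeqLab): predict one tag per token.
# Here, BIO tags mark named entities for NER.
tokens = ["Barack", "Obama", "visited", "Paris", "."]
tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]

# Classification (Class): predict a single label for the whole input.
sentence = "The movie was surprisingly good."
label = "positive"  # drawn from a fixed label set, e.g. {positive, negative}

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
print(f"{sentence} -> {label}")
```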

 

With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift, i.e., solving one NLP task in a new paradigm by reformulating it.

Paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model that handles diverse tasks.
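One widely studied example of such a shift is recasting sentiment classification (the Class paradigm) as masked language modeling (the (M)LM paradigm) via prompting. The sketch below assumes the HuggingFace transformers library and a BERT-style masked LM; the prompt and verbalizer are illustrative choices, not ones prescribed by the paper:

```python
# Sketch of a Class -> (M)LM paradigm shift: wrap the input in a prompt
# and let a pre-trained masked LM fill in a label word.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "The plot was thin, but the acting saved the film."
# Prompt template with the model's mask token to be filled in.
prompt = f"{review} Overall, it was [MASK]."

# Verbalizer (illustrative): map label words back to task labels.
verbalizer = {"great": "positive", "terrible": "negative"}

# Restrict predictions to the label words and take the most probable one.
predictions = fill_mask(prompt, targets=list(verbalizer))
print(verbalizer[predictions[0]["token_str"]])  # e.g. "positive"
```

The verbalizer maps label words in the LM's vocabulary back to task labels, so the pre-trained model can be reused with little or no task-specific architecture.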

 

Despite their success, these paradigm shifts, scattered across various NLP tasks, have not been systematically reviewed and analyzed. The research team of Prof. Qiu Xipeng (Fudan University) summarizes recent advances and trends in this line of research, namely paradigm shift, or paradigm transfer.

This paper reviews the phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve a wide range of NLP tasks.

This paper is organized as follows. Section 2 gives formal definitions of the seven paradigms (Class, Matching, SeqLab, MRC, Seq2Seq, Seq2ASeq, and (M)LM) and introduces their representative tasks and instance models. Section 3 reviews the paradigm shifts that have recently happened in different NLP tasks. Section 4 discusses the designs and challenges of several highlighted paradigms that have great potential to unify most existing NLP tasks. Section 5 concludes with a brief discussion of recent trends and future directions.

Download full text

Paradigm Shift in Natural Language Processing

Tian-Xiang Sun, Xiang-Yang Liu, Xi-Peng Qiu, Xuan-Jing Huang

https://link.springer.com/article/10.1007/s11633-022-1331-6

https://www.mi-research.net/en/article/doi/10.1007/s11633-022-1331-6

Release Date: 2022-06-15