Domain-Specific Language Model Pre-Training for Korean Tax Law Classification

Yeong Hyeon Gu, Xianghua Piao, Helin Yin, Dong Jin, Ri Zheng, Seong Joon Yoo

Research output: Contribution to journal › Article › peer-review


Owing to the growing number of amendments to tax laws and their increasing complexity, most taxpayers lack the knowledge of tax law required in everyday situations. To use an online tax counseling service, a person must first select the category of tax law corresponding to their question; however, a layperson without prior knowledge of tax laws may not know which category to select in the first place. A model capable of automatically classifying tax-law categories is therefore needed. BERT-based models are now widely used for text classification, but they are generally trained on open-domain text, and their performance often degrades on domain-specific terminology such as that of tax law. Furthermore, because BERT is a large-scale model, a significant amount of time is required to train it. To address these issues, this study proposes Korean tax law BERT (KTL-BERT) for the automatic classification of tax questions into categories. For the proposed KTL-BERT, a new pre-trained language model was built from scratch by applying a static masking method to a DistilRoBERTa-based architecture. The pre-trained language model was then fine-tuned to classify five categories of tax law. A total of 327,735 tax law questions were used to verify the performance of the proposed KTL-BERT. Its F1-score was approximately 91.06%, exceeding that of the benchmark models by approximately 1.07%-15.46%, and its training speed was approximately 0.89%-56.07% higher.
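The article record does not include source code; the following is a minimal sketch, assuming the Hugging Face transformers and datasets libraries, of how the described pipeline could be reproduced: pre-training a DistilRoBERTa-sized encoder from scratch with static masking on a Korean tax-law corpus, then fine-tuning it to classify questions into five tax-law categories. The tokenizer path, corpus and dataset files, category count handling, and hyperparameters are hypothetical placeholders, not the authors' actual configuration.

# Minimal sketch (not the authors' released code) of a KTL-BERT-style pipeline:
# (1) pre-train a DistilRoBERTa-sized encoder from scratch with static masking,
# (2) fine-tune it for 5-category tax-question classification.
# File paths, hyperparameters, and dataset layouts are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaForSequenceClassification,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Tokenizer assumed to have been trained beforehand on the tax-law corpus.
tokenizer = RobertaTokenizerFast.from_pretrained("./ktl-tokenizer")

# --- 1) Pre-training from scratch (DistilRoBERTa-sized: 6 layers) ---
config = RobertaConfig(
    vocab_size=tokenizer.vocab_size,
    num_hidden_layers=6,
    hidden_size=768,
    num_attention_heads=12,
    max_position_embeddings=514,
)
mlm_model = RobertaForMaskedLM(config)

corpus = load_dataset("text", data_files={"train": "tax_corpus.txt"})["train"]
corpus = corpus.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Static masking: the masked positions are sampled once, up front, and reused in
# every epoch (unlike dynamic masking, which re-samples masks on the fly).
masker = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

def apply_static_mask(batch):
    masked = masker([{"input_ids": ids} for ids in batch["input_ids"]])
    return {"input_ids": masked["input_ids"].tolist(), "labels": masked["labels"].tolist()}

corpus = corpus.map(apply_static_mask, batched=True)

pretrainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="ktl-bert-pretrain",
                           per_device_train_batch_size=32, num_train_epochs=10),
    train_dataset=corpus,
)
pretrainer.train()
pretrainer.save_model("ktl-bert-pretrain")

# --- 2) Fine-tuning for 5-category tax-question classification ---
clf = RobertaForSequenceClassification.from_pretrained("ktl-bert-pretrain", num_labels=5)
questions = load_dataset("csv", data_files={"train": "tax_questions.csv"})["train"]  # columns: text, label
questions = questions.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=512),
    batched=True,
)

finetuner = Trainer(
    model=clf,
    args=TrainingArguments(output_dir="ktl-bert-finetune",
                           per_device_train_batch_size=32, num_train_epochs=3),
    train_dataset=questions,
)
finetuner.train()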

Original language: English
Pages (from-to): 46342-46353
Number of pages: 12
Journal: IEEE Access
State: Published - 2022


Keywords
  • BERT
  • Domain-specific
  • Korean tax law
  • Pre-trained language model
  • Text classification

