InternLM-Law: An Open-Sourced Chinese Legal Large Language Model

Zhiwei Fei, Songyang Zhang, Xiaoyu Shen, Dawei Zhu, Xiao Wang, Jidong Ge, and Vincent Ng
Proceedings of the 31st International Conference on Computational Linguistics, pp. 9376-9392, 2025.


Abstract

We introduce InternLM-Law, a large language model (LLM) tailored for addressing diverse legal tasks related to Chinese laws. These tasks range from responding to standard legal questions (e.g., legal exercises in textbooks) to analyzing complex real-world legal situations. Our work contributes to Chinese Legal NLP research by (1) conducting one of the most extensive evaluations to date of state-of-the-art general-purpose and legal-specific LLMs, involving an automatic evaluation on the 20 legal NLP tasks in LawBench, a human evaluation on a challenging version of the Legal Consultation task, and an automatic evaluation of a model's ability to handle very long legal texts; (2) presenting a methodology for training a Chinese legal LLM that offers superior performance to all of its counterparts in our extensive evaluation; and (3) facilitating future research in this area by making all of our code and model publicly available at https://github.com/InternLM/InternLM-Law.

BibTeX entry

@InProceedings{Fei+etal:25a,
  author = {Zhiwei Fei and Songyang Zhang and Xiaoyu Shen and Dawei Zhu and Xiao Wang and Jidong Ge and Vincent Ng},
  title = {InternLM-Law: An Open-Sourced Chinese Legal Large Language Model},
  booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
  pages = {9376--9392},
  year = {2025}}