Abstract:
Large Language Models (LLMs) are among the most critical technologies for developing AI-generated content (AIGC). While LLMs promote the development of AIGC, they also create risks of generating illegal and harmful information. The causes of such generation are complex and the degree of risk is more serious than with earlier technologies, which poses challenges to the legal governance of internet information content. To address these challenges, Chinese legislators have refined the obligations of the relevant subjects in the process of internet information content generation and created a new obligation to label AIGC. However, the relevant rules still need further improvement. In the future, it is necessary to clarify the tort liability rules for damage caused by AIGC, determining the subject of tort liability and the principle of liability, and constructing a reasonable interpretive theory based on current law. It is also necessary to reasonably define the duty of care of internet information content service platforms, taking the development of AIGC technology and industry into account. Last, legislators should perfect the labeling requirements for AIGC, specifying different requirements for various scenarios and imposing labeling obligations on AIGC service users and content disseminators.