Legal Governance of Internet Information Content in the AIGC Era: Taking Large Language Models as an Example

Abstract: Large language models (LLMs) are among the most critical technologies for developing AI-generated content (AIGC). While LLMs drive the development of AIGC, they also create the risk of generating illegal and harmful information. The causes of such generation are complex and the resulting risks are more severe, which poses challenges to the legal governance of internet information content. In response, Chinese legislation has refined the obligations that relevant parties bear in the process of generating internet information content and has added a new obligation to label AI-generated content, but the relevant rules still leave room for improvement. Going forward, China should clarify the tort liability rules for harm caused by AI-generated content, determining the liable parties and the applicable principle of liability attribution and constructing a reasonable interpretive approach based on existing law; reasonably define the duty of care that internet information content service platforms owe with respect to content, taking the state of AIGC technology and industry into account; and improve the labeling requirements for AI-generated content, differentiating those requirements by scenario and imposing labeling obligations on service users and content disseminators.
