Tracing the Endogenous Hallucination Problem in AIGC Large Models and Paths of Copyright Regulation

    • Abstract: As an engine of new quality productive forces, artificial intelligence is driving profound change in the field of creative works. However, the output of an AIGC large model is shaped by the provider's value concepts, manually screened content, and training input data, and is often accompanied by endogenous hallucination problems such as fabrication and bias. To resolve the inherent hallucination problem of AIGC large models and to balance the interests of AIGC large models and human authors, at the level of theoretical revision, value rationality should set the keynote for the development of AIGC large models, and a copyright incentive mechanism should run through the entire AIGC data-training process. In terms of concrete measures: first, to address the difficulty of algorithmic bias, the model's internal reasoning logic can be supervised through its output works; second, in litigation, the AIGC provider should bear the burden of proof to demonstrate that its hallucination-elimination conduct is compliant; finally, the guidance of competition policy should be upheld so that market competition and technology for good coexist in harmony.

       
