Abstract:
Generative artificial intelligence (AI), exemplified by ChatGPT, has introduced new legal risks even as it optimizes industrial structure and promotes economic development. Risks in the training of language models mainly involve infringement of personal information and of copyright in works, while risks in the content-generation process include misinformation caused by AI “hallucination”, algorithmic bias, loss of algorithmic control caused by “emergence”, and algorithmic misuse in the interaction between humans and AI. Although the Interim Measures for the Management of Generative Artificial Intelligence Services provide a basic governance framework, some of their provisions and the specific governance approaches they adopt remain deficient. Because the performance of generative AI depends on the size and quality of the training dataset, data governance in model training must respect this technical logic, and practice should expand the boundaries of reasonable use for publicly available copyrighted works and publicly available personal data. Content governance can draw on the basic concept of “Constitutional AI” to build a dynamic mechanism for content feedback and evaluation.