Abstract: Core technologies such as synthetic data, instruction fine-tuning, tree-of-thought reasoning, and process supervision empower large models in a "parallel" fashion, intensifying industry concerns that the singularity of artificial general intelligence is approaching, while the development of embodied artificial intelligence opens avenues for risks to materialize early and pervasively. The key to regulating artificial general intelligence and embodied artificial intelligence lies in steering large models toward the good, with model trustworthiness and value alignment becoming the "meta-rules" of risk prevention. Compared with technology-oriented regulatory legislation, constraint-oriented legislation targeting companies has the efficiency advantage of regulating organizational behavior. Corporate law contains a range of institutional tools for guiding the healthy development and application of artificial intelligence. To promote model trustworthiness, scenarios for disregarding corporate personality (piercing the corporate veil), activating corporate purpose clauses, optimizing directors' fiduciary duties, and utilizing dual-class share structures can be considered. To promote value alignment, the principle of diversified protection, the implementation of the concept of technology for good, the establishment of technology ethics committees, and the strengthening of ethics-oriented ESG disclosures can be introduced.
Keywords: large models; artificial general intelligence; embodied artificial intelligence; corporate law; ESG; technology ethics
Author: TANG Liyao, Associate Research Fellow, CASS Institute of Law.
Source: Oriental Law, No. 2 (2024).