In 2023, interest in AI ran hotter than ever. Sparked by ChatGPT in particular, big tech companies raced to research and develop Generative AI.
AI makes our lives more convenient, but it also introduces unexpected risks. Examples include security issues such as AI-generated false information, discriminatory speech, personal information leakage, and deepfake phishing. Because of these problems, countries around the world are introducing regulations to prevent harm caused by AI and to ensure it is used properly.
The biggest problem with Generative AI is Hallucination. Hallucination refers to Generative AI producing information unrelated to the facts, and it is a typical error that occurs when AI processes information. For example, when ChatGPT is asked about a historical event that never happened, it generates a plausible but incorrect answer based on similar events in its training data. Hallucination can lead to the spread of misinformation and to ethical and moral problems.
In response, Nature, a world-renowned international academic journal, announced in June that it would ban the publication of photos, videos, illustrations, and graph images created with generative AI. Nature banned the use of data or images obtained through Generative AI because of legal copyright issues and the risk of accelerating the spread of false information.
There is also the risk of spreading false information through fake news produced with generative AI. In practice, a certain group manipulated President Biden's voice using Generative AI: in the original video he spoke about supporting Ukraine with tanks, but voice-generating AI technology transformed it into a statement criticizing transgender people.
Europe's AI Act
In 2021, the European Commission (EC) first proposed a regulatory and legal framework for AI. Then, on December 8, 2023, the European Commission, the European Parliament, and representatives of the 27 EU member states reached agreement on the AI bill, making the EU the first in the world to pass an AI Act regulating artificial intelligence.
The AI Act, which will take effect in 2026, is the first law specifically targeting AI. It covers Generative AI such as ChatGPT as well as biometric identification tools such as facial recognition and fingerprint scanning.
The AI Act classifies AI systems by risk level, strengthens transparency requirements, and imposes fines on companies that fail to comply. Companies must also meet comprehensive AI obligations, including preparing technical documentation, complying with EU copyright law, and providing detailed summaries of the content used for training.
Companies that violate the rules face fines ranging from 7.5 million euros (about 10.7 billion won) up to 35 million euros (about 49.7 billion won) or 7% of global sales. Applied to companies such as Google and Microsoft, the fines alone could amount to billions of dollars (trillions of won).
Regulatory Trends Related to Artificial Intelligence in Korea
The Personal Information Protection Commission announced the Policy Direction for the Safe Use of Personal Information in the Age of Artificial Intelligence in 2023. This policy focuses on safely using the data necessary for AI development while minimizing the risk of privacy infringement through AI. Bills have also been proposed in response to the need to strengthen personal information protections for AI training data and to regulate high-risk AI.
Meanwhile, the U.S. has announced federal measures to reduce the social and economic toll of AI. These are expected to include studying which jobs may be replaced by AI and drafting guidelines to prevent AI-driven hiring systems from discriminating. They will also require the federal government to disclose how it uses AI technology to collect citizens' information, in the interest of protecting personal data. Separately, seven AI companies, including Google, Meta, and Microsoft, have announced that they will develop a "digital watermarking" system to help users distinguish audio and video content created or altered by AI.
Digital watermarking is a technique that embeds information, such as copyright details, into data like photos so that the content can be tracked and managed. Because fake photos or videos made with AI could affect the upcoming U.S. presidential election, Google will require disclosure whenever AI technology is used in presidential election content.
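To illustrate the basic idea, here is a minimal sketch of one classic watermarking approach, least-significant-bit (LSB) embedding, written in Python with NumPy. This is only a toy illustration of hiding a provenance tag in image pixels; the function names and the tag string are hypothetical, and the systems announced by these companies use far more robust, tamper-resistant schemes.

```python
# Toy sketch of digital watermarking via least-significant-bit (LSB)
# embedding. Illustration only; real watermarks survive compression,
# cropping, and re-encoding, which this does not.
import numpy as np

def embed_watermark(pixels: np.ndarray, message: str) -> np.ndarray:
    """Hide a UTF-8 message in the least significant bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> str:
    """Read back `length` bytes of hidden message from the pixel LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Usage: tag a synthetic "image" with a provenance marker and recover it.
tag = "AI-generated:2023"  # hypothetical provenance tag
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_watermark(img, tag)
print(extract_watermark(marked, len(tag)))  # -> AI-generated:2023
```

Because only the lowest bit of each value changes, the marked image is visually indistinguishable from the original, which is exactly what makes watermarks useful for labeling AI-generated content without degrading it.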