Scholars suggest Taiwan should establish a secure database to address AI risks

On August 26, 2024, several civic groups in Taiwan jointly organized a seminar titled “AI and Risk Society.” Invited scholars and experts discussed the risks behind the development of artificial intelligence (AI) and proposed that the Taiwanese government establish a secure, localized database to mitigate AI-related risks.

The seminar, hosted by Chu Fu-ming, CEO of the Black Bear Academy, was organized by the Taiwan Independence Alliance, the Taiwan Security Association, the Modern Culture Foundation, and the New Taiwan Peace Foundation.

Professor Chen Hui-rong of the Department of Journalism at the University of Culture discussed the impact of AI on the news industry and society. Likening the risks behind AI to “attacking giants,” she suggested constructing three walls of risk prevention: sovereign database construction, responsible application and development, and AI literacy.

Chen Hui-rong urged the government, businesses, and civil society to continue researching AI and to develop more robust security regulations. She emphasized the difference between the machine learning that underlies AI and human language learning, and highlighted the need for society to work out how to prevent future AI systems from going out of control.

Referring to concerns raised by Geoffrey Hinton, the former Google researcher widely known as a “godfather of AI,” Chen Hui-rong outlined several AI risks, including misuse by malicious actors, technological unemployment, and existential threats to human society.

She linked these risks to the EU’s AI Act, which entered into force on August 1, 2024 and categorizes AI systems into four risk levels (Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk) to guide the regulation of AI technology across sectors.

Chen Hui-rong noted that the databases behind AI models often overlook Taiwan’s local culture, producing AI-generated content that does not reflect Taiwan’s distinct characteristics. She proposed developing TAIDE, a “trustworthy AI dialogue engine” built on Taiwan’s own language databases, to ensure the authenticity of AI models.

Sun You-lian, Secretary-General of the Taiwan Labor Front, pointed out that even ChatGPT acknowledges its own biases and prejudices, and argued that extensive AI use in service industries such as banking and cafés may lead to a loss of human warmth.

Sun You-lian emphasized that advancing technology is reshaping the labor market and could lead to “technological unemployment” as AI replaces human labor across occupations. He also highlighted the challenges AI could pose for the labor sector in areas such as gender discrimination, employee management, facial recognition, and workplace safety.

He urged the government not to overlook social insurance and labor laws when drafting AI legislation, stressing the importance of institutional safeguards to protect individuals from being marginalized or socially excluded by AI advancement.

Professor Wu Feng-wei of the Department of Philosophy at the University of Culture examined the risks and moral issues of AI from a philosophical perspective, analyzing how AI legislation establishes legal norms through risk assessment.

Wu Feng-wei highlighted the EU’s risk-based approach to regulatory frameworks and advised Taiwan’s government to adopt similar measures to promote AI development while safeguarding people’s health, safety, and basic rights from potential AI-related harm.

He further urged governments to prohibit AI manipulation and decision-making that exploit vulnerable populations, collect sensitive biometric data, or assign social scores, so that excessively developed AI systems cannot harm civil society or national sovereignty.

Presenting the ethical features of the EU’s AI Act, he emphasized the obligations AI systems carry regarding privacy, dignity, and autonomy, and characterized the risk assessment mechanism underpinning AI legislation as a form of “negative utilitarianism” that aims to minimize harm.

Wu Feng-wei stressed that AI databases can cause cultural displacement because different databases embody different values, underscoring the need to prioritize questions of subjectivity and sovereignty. He proposed that Taiwan’s databases reflect the country’s own sovereign and distinctive values, so that misleading information or values do not distort the use of AI.

He warned of the risks should AI development evolve into “superintelligence” capable of decisions harmful to humanity: if future AI systems reach that level and judge humans to be a long-term threat to the Earth, humans could become targets of attack.