AI Is Everywhere, and Authoritarian Governments Are Exploiting It

**Tech Giants Shift Focus to Personal Devices and Social Platforms for AI Training**

Technology giants have nearly exhausted the public, and even some private, English-language data available on the internet for AI model training, leading them to target individual electronic devices and social platforms instead. This shift has raised widespread concerns about the potential misuse or abuse of personal privacy.

Apple held its 2024 WWDC keynote on June 11 and announced significant progress in integrating AI into its products. The presentation highlighted updates to Siri that allow it to understand natural language, much as ChatGPT does, and to perform tasks such as quick photo edits, composing or editing email, and generating emoji and images from simple voice commands.

Moreover, Siri will draw on information from the user’s phone, articles, documents, and other sources to provide answers like a “personal assistant.” These features are available only on the iPhone 15 Pro and later, iPads with M-series chips, and Mac computers, with the AI functions expected to expand gradually.

Additionally, Apple has partnered with OpenAI to integrate ChatGPT directly into Siri, enabling users to call on GPT-4 as the AI engine. Apple emphasized that these AI features will not rely on cloud servers for computation; all functions will run on the device’s own chips, which the company says strongly protects personal privacy and prevents user data from being stored by OpenAI.

Even with these assurances, many people worry that personal information could be exploited by tech companies for AI training or undisclosed experiments, given how valuable such data is.

**Elon Musk Criticizes Apple’s AI Integration and Privacy Concerns**

Following Apple’s 2024 WWDC, Tesla CEO Elon Musk took to the social media platform X to voice his objections. His criticism of Apple’s approach and its collaboration with OpenAI resonated with many internet users, most of whom expressed alarm at Apple’s plans.

Musk stated, “If Apple integrates OpenAI into the operating system, all Apple devices will be banned from our company due to unacceptable security violations. Furthermore, all visitors must have their Apple devices checked at the entrance and place them in a Faraday pouch to block the devices’ electromagnetic waves.”

He further criticized Apple and OpenAI over privacy, arguing that it is absurd for Apple, which cannot build its own AI, to claim it can ensure that OpenAI will safeguard users’ security and privacy. Once Apple hands data over to OpenAI, he warned, Apple loses control, and users risk being sold down the river.

Musk also questioned the terms and conditions of such data-sharing arrangements and their implications for the concentration of AI power.

**Concerns Over Personal Data Usage for AI Training and Social Platform Activities**

Beyond worries about personal data harvested from mobile devices for AI training, concern has shifted to data usage on social media platforms. Earlier revelations that Google and OpenAI transcribed YouTube videos for AI training had already raised alarms. Meta then announced that data from Facebook and Instagram users in the UK and Europe would be used to train its Llama AI language model starting June 26.

Meta states that the training data comprises public posts, photos, interactions with AI chatbots, and similar material, excluding the content of private messages, and that users may object. In practice, however, objecting can require filling out complex forms and supplying a personal email address, and Meta may still reject the request.

Several European digital rights advocacy groups and individuals have challenged Meta’s practices, lodging complaints with privacy regulators in more than ten countries, including the Irish Data Protection Commission, to ensure Meta complies with EU data law; the complaints have sparked an investigation.

**Concerns Extend Beyond Privacy to AI Misuse by Authoritarian Governments and Malevolent Individuals**

Beyond tech companies using personal data for AI training, there are fears that authoritarian governments or malicious individuals could misuse AI technology to promote distorted and nefarious values, fabricate false information, and deceive and brainwash people in service of hidden agendas.

**US Congressional Figures and ASPI Report Highlight AI Misuse**

US Senate Intelligence Committee Chairman Mark Warner, Representative Raja Krishnamoorthi, the top Democrat on the House Select Committee on the CCP, and House Republican Conference Chair Elise Stefanik jointly urged increased scrutiny of the NewsBreak platform after multiple instances of it disseminating false or severely inaccurate news.

NewsBreak drew criticism for using AI to rewrite and redistribute news articles from mainstream outlets such as Reuters, AP, CNN, and Fox News, introducing significant content errors in the process; the platform also faces suspicions over its ties to Chinese investors and CCP-linked entities.

Additionally, the Australian Strategic Policy Institute (ASPI) released a report exposing CCP propaganda methods: deploying private Chinese companies to develop mobile games, AI, VR technology, and overseas e-commerce platforms, using every available means to collect data, profile individuals, and distort reality in order to expand the CCP’s influence worldwide.

**AI Misuse by Foreign Entities**

In the past three months, OpenAI uncovered five covert influence campaigns that used its AI models in attempts to manipulate global discourse, sway public opinion, and influence geopolitical outcomes. Entities from Russia, China, Iran, and Israel misused OpenAI’s language models to generate fake news and propaganda in multiple languages across various platforms.

The use of AI by operations such as China’s “Spamouflage,” Russia’s “Bad Grammar,” and Iran’s “International Union of Virtual Media” to spread false narratives and extremist ideologies across social media underscores the dangers of AI-driven propaganda.

**Conclusion**

The increased integration of AI into personal devices and social platforms raises substantial concerns about privacy, data security, and the potential misuse of AI technology by both tech giants and state actors for manipulative purposes. Vigilance, regulatory oversight, and public awareness are essential to safeguard individual rights and the integrity of information dissemination in the digital age.