May 31, 2024 Epoch Times report:
This May has been an intense month of competition in the field of artificial intelligence (AI). Not only have OpenAI and Google unveiled AI models with enhanced functionality and more human-like capabilities, but Musk's xAI has also joined the fray with a significant $6 billion funding boost. At the same time, Musk's predictions about AI replacing human jobs have raised concerns among many.
On May 27, Musk’s xAI company announced that it had secured $6 billion in funding during its Series B round, valuing the company at $24 billion post-fundraising. This round received substantial support from major investment firms and banks such as Andreessen Horowitz, Sequoia Capital, and Kingdom Holding.
The statement mentioned that the company released Grok-1 on the X platform in November last year, made it open-source in March this year, and subsequently introduced the more powerful Grok-1.5 and Grok-1.5V models. xAI said it expects to announce further product updates in the coming months as it brings its first offerings to market and builds out advanced infrastructure.
The company emphasized that xAI focuses on developing advanced AI systems that are useful, capable, and maximally beneficial to all of humanity, and that its mission is to understand the true nature of the universe.
On the same day, Igor Babuschkin, a leading AI researcher at xAI, shared the fundraising news on the X platform and invited those interested in contributing to the development of AGI (artificial general intelligence) and understanding the universe to join xAI. Musk later reposted the message, saying that people should join xAI if they believe in its mission of pursuing the truth about the universe without regard for political correctness or popularity.
In addition, Musk told investors that he plans to build a supercomputer to support xAI's development. The machine would be built from 100,000 Nvidia H100 GPUs and is expected to be operational by the fall of next year. He had previously said that training the Grok-2 model required around 20,000 H100 chips, while Grok-3 and later versions would need 100,000.
Musk said he would personally ensure the computer is delivered on time and indicated that xAI might partner with Oracle to develop it. However, neither xAI nor Oracle has commented on the matter.
On May 24, Musk joined the "Viva Technology" startup conference in Paris by video, where he shared views and predictions about AI that unsettled some in the audience.
Musk remarked, "In the future, people will need to communicate with computers through chips to speed up human thinking and keep up with AI, because the current communication speed between human brains and computers is too slow." In fact, Musk's Neuralink is developing brain-implant chips intended to increase the bandwidth between the human brain and computers so that people can compete with, or at least keep pace with, AI.
However, in the video, he warned, “Google and OpenAI are teaching AI to lie and serve political correctness, rather than encouraging AI to learn or pursue truth because truth-seeking AI is unpopular. This approach is very dangerous.”
Regarding AI safety, he said, "I have spent a considerable amount of time thinking about AI safety, which poses a significant programming challenge. AI must be given clear ethical norms; it should not invert physical, logical, or moral standards. At present, AI is being pushed toward dishonesty under the pressure of political correctness."
Musk emphasized, "Regulators need to pay attention to this issue, and to whether AI is treating things known to be false as true. xAI is currently avoiding such problems, but there is still much on our agenda to address and resolve."
However, Japanese computer engineer Jin Kiyohara questioned whether AI can develop benignly, given that human ethics are lagging behind.
He told Epoch Times, “From the current trend, the faster AI develops, the quicker humans’ jobs are being replaced. If human brains are implanted with chips, people will be entirely under machine control. To say AI is benignly developing, I think it’s premature because human ethics are not keeping up.”
Musk had earlier vehemently opposed the rapid advance of AI and urged governments to legislate regulations swiftly. With AI's continued progress, however, his views seem to be evolving. Asked at the conference about AI's impact on humanity, he said, "In the benign scenario, people will not need to look for jobs or struggle to find work, because everyone will have a high income at that time, though it will not be a universal basic income." He added, "In the future, people will work as a kind of hobby, with a variety of choices available, but work will no longer be essential. AI will supply most of the goods and services people need."
Yet his next remark was thought-provoking: "If computers and AI can do everything better than humans, then what meaning does your life have? That is indeed a problem, even in the benign scenario."
Musk also discussed AI's impact on children and education. He said, "I believe parents will still be responsible for children's values and ethics, but AI will heavily influence their education, because AI will become a knowledgeable, patient, and always-correct teacher that may tailor courses to each child in the future."
However, he expressed concerns about children being influenced by social media and AI algorithms. “These algorithms are stimulating their brains and affecting their way of thinking, so parents should limit or monitor their social media usage.”
Musk's predictions about how AI will affect humanity and children echo those of Zack Kass, a former senior go-to-market executive at OpenAI. In a January interview, Kass made similar predictions about the trajectory of AI development, suggesting that in the future people will have almost no work and will largely depend on AI to sustain their lives.
This vision of a future in which people do almost no work and rely heavily on AI resembles the one OpenAI CEO Sam Altman outlined in 2021, when he argued that humans could lead good lives without having to work for survival, as long as they enjoyed the conveniences brought by technology.
However, speaking at an AI and geopolitics video conference hosted by the Brookings Institution in May, Altman voiced his concern: "I am worried that people are not taking the threat of AI to employment and the economy seriously, which is a significant issue."
Japanese electronics engineer Satoru Ogino believes that getting something for nothing would be dreadful for humanity.
He told Epoch Times, “Humans having high income without the need to work is merely a utopian fantasy. Only things obtained through hard labor hold real meaning and are cherished. If everything becomes easily accessible, people will lose their sense of happiness and the meaning of survival, leading to even greater crises and problems.”
Altman's May remarks reflect the predicament many now face: a growing number of executives and employees fear their jobs will be replaced, even as they feel compelled to use or learn AI to keep up with the trend.
According to the annual Work Trend Index released by Microsoft and LinkedIn on May 8, which surveyed 31,000 people in 31 countries including the U.S., U.K., Germany, France, India, and Australia, 75% of employees are already using AI in the workplace. More than half of the respondents admitted to concealing their use of AI for important tasks, fearing it would make them look replaceable. Even so, they felt they had to learn and use AI to improve their work efficiency.
Such displacement has already sparked protests. As filmmakers and studios increasingly relied on AI in production, directors, voice actors, screenwriters, and others grew concerned about their livelihoods. Last year, Hollywood screenwriters and actors went on strike in part over studios' use of AI, which they feared would lead to layoffs or pay cuts; the strikes ended only after the unions reached agreements with the producers' alliance.
Earlier, AND Digital, a company providing IT services and consulting, surveyed 600 business leaders on the factors accelerating or slowing the growth of enterprise value. The results showed that nearly 43% of CEOs believed they could be replaced by a digital "AI CEO," while 45% admitted to using AI tools such as ChatGPT for various tasks and passing the output off as their own work.
The rapid development of AI unsettles Satoru Ogino. He remarked, "Whether AI's development is benign or negative, it will bring disaster to humanity. In the negative scenario, the very continuation of human survival will face serious challenges; in the benign scenario, humans will rely on AI for every decision, cease to labor and to think critically, and ultimately become puppets whose thinking and behavior are manipulated or reshaped by AI."
(Reporters Jia-Yi Wang and Chung-Yuan Zhang contributed to this article.)