Musk Joins the Fray, Launching Grok-3 to Compete with DeepSeek

Elon Musk, the American billionaire entrepreneur, has launched Grok-3, the latest version of the chatbot developed by his artificial intelligence startup xAI. The company claims that Grok-3 outperforms competing models.

Competition in artificial intelligence has grown increasingly fierce, with established players such as Microsoft-backed OpenAI and Alphabet's Google now joined in the arena by the Chinese company DeepSeek.

The xAI team announced that Grok-3 would roll out in the United States starting on Tuesday, February 18, initially to Premium+ subscribers on Musk's social media platform X. Users can also access the model through a separate subscription to the web version or the app.

During a live event on Monday with three xAI engineers, Musk stated that Grok-3 is leading the pack in its category. He further noted that the model outperforms its predecessor, Grok-2.

Musk said Grok-3 was built with ten times the computing power of the previous generation. Early testing results indicate that the model excels on the Chatbot Arena platform, surpassing models from OpenAI and DeepSeek, particularly in standardized tests of mathematics, science, and coding. Chatbot Arena is a crowdsourced testing website where different AI models compete in blind comparisons.

In addition to Grok-3, the xAI team introduced an intelligent search engine named DeepSearch, billed as a "next-generation search engine." The reasoning-based chatbot can display its thinking process as it responds to user queries.

As the product demonstration neared its end, Musk emphasized that the company will continue to improve the model. "We should emphasize that this is just a test version, so there may be some imperfections initially, but we will quickly improve, updating almost daily," he added, noting that the model's voice assistant feature would be rolled out later.

Musk has long warned about the potential risks posed by artificial intelligence. In 2023, he founded xAI, entering a generative AI market dominated by OpenAI's ChatGPT.

In September 2024, OpenAI launched o1, then its most advanced model, with reasoning capabilities for solving complex scientific, coding, and mathematical problems.

It is noteworthy that Musk co-founded OpenAI with Sam Altman in 2015, when the organization was a non-profit entity.

However, tensions between Musk and OpenAI's leadership have grown in recent years. Musk recently led an investor group in a $97.4 billion bid for OpenAI's non-profit parent company, an offer Altman rejected.

OpenAI recently introduced its latest AI model, GPT-5, which further improves reasoning, context comprehension, and multimodal processing. The company also announced deeper collaboration with Microsoft to strengthen AI integration in enterprise applications and to offer enhanced API tools for developers, supporting a wider range of use cases.

Gil Luria, Managing Director at D.A. Davidson, told Reuters: "The launch of Grok-3 puts xAI back into the competition among large language models (LLMs). The model's performance on certain benchmark tests surpasses the current most advanced models, regaining market attention for xAI."

xAI operates Colossus, one of the world's largest supercomputer clusters, built specifically for AI training. The company said last year that the machine was running 100,000 advanced Nvidia GPUs for AI training. On Tuesday, xAI said the GPU cluster used to train Grok-3 had doubled in size.

xAI is currently raising billions of dollars to expand data center capacity in Memphis, Tennessee, to train more advanced AI models.

With the continuous advancement of AI technology, major tech companies are intensifying their competition, striving to secure a leading position in the field of artificial intelligence.

Also on Tuesday, the Chinese AI company DeepSeek announced "Native Sparse Attention" (NSA), a technology designed for ultra-fast long-context training and inference. According to the company's published paper, NSA is optimized for modern hardware, speeding up inference and reducing pre-training costs without compromising performance.
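The specifics of NSA are laid out in DeepSeek's paper; as a rough illustration of the general sparse-attention idea only (not NSA itself), the Python sketch below has each query attend to a few selected blocks of keys rather than the entire sequence, which is how such methods cut long-context compute. The function name, block sizes, and block-selection heuristic here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def blockwise_sparse_attention(q, k, v, block_size=4, top_k_blocks=2):
    """Toy sketch: each query attends only to its top-k key blocks
    (scored by mean key similarity) instead of the full sequence.
    Illustrates the general sparse-attention idea, not DeepSeek's NSA."""
    seq_len, d = k.shape
    n_blocks = seq_len // block_size
    # Mean key per block serves as a cheap relevance score for block selection
    block_keys = k[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    outputs = []
    for qi in q:
        # Pick the most relevant key blocks for this query
        scores = block_keys @ qi
        chosen = np.argsort(scores)[-top_k_blocks:]
        idx = np.concatenate(
            [np.arange(b * block_size, (b + 1) * block_size) for b in chosen]
        )
        # Standard softmax attention restricted to the selected positions
        weights = np.exp(k[idx] @ qi / np.sqrt(d))
        weights /= weights.sum()
        outputs.append(weights @ v[idx])
    return np.stack(outputs)

# Example: 16 tokens, 8-dim heads; per-query cost scales with selected blocks only
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
print(blockwise_sparse_attention(q, k, v).shape)  # (16, 8)
```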