Grok 4 passes Claude, DeepSeek in LLM rankings despite safety concerns

July 16, 2025

Grok 4 by xAI was released on July 9, and it’s surged ahead of competitors like DeepSeek and Claude at LMArena, a leaderboard for ranking generative AI models. However, these types of AI rankings don’t factor in potential safety risks.

New AI models are commonly judged on a variety of metrics, including their ability to solve math problems, answer text questions, and write code. The big AI companies use a range of standardized assessments to measure the effectiveness of their models, such as Humanity's Last Exam, a 2,500-question test designed for AI benchmarking. Typically, when a company like Anthropic or OpenAI releases a new model, it shows improvements on these tests. Unsurprisingly, Grok 4 scores higher than Grok 3 on some key metrics, but it also has to battle in the court of public opinion.



LMArena is a community-driven website that lets users test AI models side by side in blind tests. (LMArena has been accused of bias against open models, but it's still one of the most popular AI ranking platforms.) Per its testing, Grok 4 scored in the top three in every category in which it was tested except for one. Here are the overall placements in each category:

  • Math: Tied for first

  • Coding: Tied for second

  • Creative Writing: Tied for second

  • Instruction Following: Tied for second

  • Hard Prompts: Tied for third

  • Longer Query: Tied for second

  • Multi-Turn: Tied for fourth

And in its latest overall rankings, Grok 4 is tied for third place, sharing the spot with OpenAI’s gpt-4.5. The ChatGPT models o3 and 4o are tied for the second position, while Google’s Gemini 2.5 Pro has the top spot.

LMArena says it used grok-4-0709, the API version of Grok 4 used by developers. Per Bleeping Computer, these results may actually understate Grok 4's potential, because LMArena tests the regular version of the model. The Grok 4 Heavy model uses multiple agents acting in concert to produce better responses, but it isn't available in API form yet, so LMArena can't test it.


However, while this all sounds like good news for Elon Musk and xAI, some Grok 4 users are reporting major safety problems. And, no, we’re not even talking about Mecha Hitler or NSFW anime avatars.

Does Grok 4 have sufficient safety guardrails?

While some users tested Grok 4’s capabilities, others wanted to see if Grok 4 had acceptable safety guardrails. xAI advertises that Grok will give “unfiltered answers,” but some Grok users have reported receiving extremely distressing responses.

X user Eleventh Hour decided to put Grok through its paces from a safety perspective, concluding in an article that “xAI’s Grok 4 has no meaningful safety guardrails.”



Eleventh Hour asked the chatbot for help creating a nerve agent called Tabun, and Grok 4 typed out a detailed answer purporting to explain how to synthesize it. For the record, synthesizing Tabun is not only dangerous but completely illegal. Popular AI chatbots from OpenAI and Anthropic have specific safety guardrails to avoid discussing CBRN topics (chemical, biological, radiological, and nuclear threats).

In addition, Eleventh Hour was able to get Grok 4 to explain how to make VX nerve agent and fentanyl, and even the basics of building a nuclear bomb. The model was also willing to assist in cultivating a plague, though it couldn't find enough information to do so. With some basic prompting, suicide methods and extremist views were also fairly easy to obtain.

xAI is aware of these problems, and the company has since updated Grok to deal with “problematic responses.”


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Topics
Artificial Intelligence

    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.
