
OpenAI knew. It chose not to call the police. Now Sam Altman is sorry.

April 25, 2026

TL;DR

Sam Altman apologised to the community of Tumbler Ridge, British Columbia, for OpenAI’s failure to alert police after its own systems flagged a ChatGPT user who went on to kill eight people and injure 27 in Canada’s deadliest school shooting since 1989. Approximately a dozen OpenAI employees had reviewed the flagged account in June 2025, and some recommended reporting to law enforcement, but leadership overruled them, applying a “higher threshold” that the conversations did not meet. OpenAI has since lowered its reporting threshold and established contact with the RCMP, but all changes are voluntary, and Canada has no law requiring AI companies to report identified threats.

Sam Altman published an open letter to the community of Tumbler Ridge, British Columbia, on Thursday, apologising for OpenAI’s failure to alert law enforcement after its own systems flagged a user who went on to carry out the deadliest school shooting in Canada in nearly four decades. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” The letter, dated April 23 and released publicly a day later, arrived 72 days after Jesse Van Rootselaar, 18, killed eight people and injured 27 others in a shooting that began at a family home and ended at Tumbler Ridge Secondary School on February 10. OpenAI’s automated abuse detection had flagged Van Rootselaar’s ChatGPT account eight months earlier, in June 2025. Approximately a dozen employees reviewed the flagged conversations, which described scenarios involving gun violence, and some recommended contacting Canadian police. Company leadership decided against it. The account was banned. No one was told. Van Rootselaar created a second account and was not detected until after the RCMP released her name.

The decision

The Wall Street Journal first reported the internal debate at OpenAI. The employees who reviewed Van Rootselaar’s flagged account saw what they described as signs of “an imminent risk of serious harm to others.” They escalated their recommendation to report the conversations to law enforcement. Leadership applied what an OpenAI spokesperson later called a “higher threshold” for credible and imminent threat reporting and concluded the activity did not meet it. The account was terminated. The conversations were preserved internally. The police were not contacted. Eight months later, Van Rootselaar killed her mother, Jennifer Strang, 39, and her 11-year-old half-brother, Emmett Jacobs, at the family home, then drove to the secondary school and opened fire with a modified rifle, killing education assistant Shannda Aviugana-Durand, 39, and five students aged 12 and 13: Zoey Benoit, Ticaria Lampert, Kylie Smith, Abel Mwansa, and Ezekiel Schofield. Twenty-seven people were injured. Maya Gebala, 12, was shot three times in the head and neck while shielding classmates and sustained what doctors described as a “catastrophic, traumatic brain injury” with permanent cognitive and physical disability. Van Rootselaar died by suicide at the school.

The civil lawsuit filed in BC Supreme Court in March by Cia Edmonds on behalf of her daughter Maya alleges that ChatGPT provided “information, guidance, and assistance to plan a mass casualty event, including the types of weapons to be used, and describing precedents from other mass casualty events or historical acts of violence.” The specific content of the conversations has not been made public. BC Premier David Eby said he deliberately did not ask what was in the chat logs to avoid compromising the RCMP investigation. What is known is that OpenAI’s own system identified the conversations as potentially dangerous, that OpenAI’s own employees recommended action, and that OpenAI’s leadership chose not to act. The apology is not for a failure of detection. The detection worked. The apology is for what happened after detection worked.

The letter

Altman’s letter was addressed to the Tumbler Ridge community and released after BC Premier Eby disclosed that Altman had agreed to apologise during earlier discussions about OpenAI’s handling of the case. “I have been thinking of you often over the past few months,” Altman wrote. “I cannot imagine anything worse in the world than losing a child.” He added: “I reaffirm the commitment I made to the mayor and premier to find ways to prevent tragedies like this in the future. Going forward, our focus will continue to be working with all levels of government to help ensure something like this never happens again.” The letter contained no specific policy commitments, no description of what OpenAI would change, and no acknowledgement that employees had recommended reporting the account and been overruled. Eby called the apology “necessary” but “grossly insufficient for the devastation done to the families of Tumbler Ridge.” Tumbler Ridge Mayor Darryl Krakowka acknowledged receipt and asked for “care and consideration” while the community navigates the grieving process.

The policy commitments came separately, in a letter from OpenAI vice-president of global policy Ann O’Leary to Canadian federal ministers. O’Leary wrote that OpenAI had lowered its reporting threshold so that a user no longer needs to discuss “the target, means, and timing” of planned violence for a conversation to be flagged for law enforcement referral. The company has enlisted mental health and behavioural experts to help assess flagged cases and established a direct point of contact with the RCMP. O’Leary stated that under the updated policies, Van Rootselaar’s interactions “would have been referred to police” if discovered today. The changes are voluntary. They are not legally binding. They can be reversed at any time. Canada has no law requiring AI companies to report threats identified through their platforms, and the federal government has not yet introduced one.

The pattern

Tumbler Ridge is not an isolated case. Florida has opened the first criminal investigation into an AI company after ChatGPT allegedly advised the gunman in a mass shooting at Florida State University, including on how to make a firearm operational moments before the attack that killed two people and injured five. NPR reported on April 23 that “OpenAI is under scrutiny after two mass shooters used ChatGPT to plan attacks.” Seven families have separately sued OpenAI over ChatGPT acting as what their attorneys describe as a “suicide coach,” with documented deaths in Texas, Georgia, Florida, and Oregon. In another case, OpenAI is being sued for allegedly ignoring three warnings about a dangerous user, including its own internal mass-casualty flag. The number of reported AI safety incidents rose from 149 in 2023 to 233 in 2024, a 56% increase, and the 2025 and 2026 figures are expected to be significantly higher.

The pattern that connects these cases is not that AI systems are spontaneously generating violence. It is that AI companies are identifying dangerous behaviour on their platforms and making internal decisions about whether to act on it, decisions that carry life-and-death consequences but are governed by no external standard, no legal obligation, and no regulatory oversight. The deeper risks of emotional dependency on AI chatbots, including the phenomenon researchers have termed “AI psychosis,” raise questions about what happens when systems optimised to sustain engagement become confidantes for users in crisis. OpenAI’s “higher threshold” for reporting was a business judgement, not a legal standard. The employees who recommended contacting police applied their own moral reasoning. The executives who overruled them applied a different calculus, one that presumably weighed the reputational and legal risks of reporting against the reputational and legal risks of not reporting, and got it catastrophically wrong.

The safety question

OpenAI announced an external safety fellowship hours after a New Yorker investigation reported it had dissolved its internal safety team, a sequence that captures the company’s approach to safety governance with uncomfortable precision. The superalignment team, led by Ilya Sutskever before his departure, was disbanded. The AGI-readiness team was dissolved. Safety was dropped from OpenAI’s IRS filings when the company converted from a nonprofit to a for-profit structure. OpenAI’s own robotics chief resigned over safety governance concerns, specifically objecting that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” The external fellowship, the voluntary policy changes, and Altman’s letter all share a common characteristic: they are gestures that OpenAI controls. They can be announced, modified, or withdrawn without external approval. They create the appearance of accountability without the mechanism of it.

OpenAI’s recent release of open-source safety policies for teen users covers graphic violence, dangerous activities, and other harm categories. OpenAI itself described these as a “meaningful safety floor,” not a comprehensive solution. The gap between floor and ceiling is where Tumbler Ridge happened. The system flagged a teenager describing gun violence scenarios. The policy said that was not enough to report. The teenager went on to kill eight people. A lower threshold would have triggered a report to the RCMP. Whether the RCMP would have acted on it, whether Canadian law would have permitted intervention based on ChatGPT conversations, whether any of that would have prevented the shooting are questions that cannot be answered because the report was never made. OpenAI’s updated policy now says it would make the report. But the updated policy is still voluntary, still internal, and still subject to the same leadership override that prevented the original report from being filed.

The gap

Canada’s AI minister, Evan Solomon, said OpenAI’s commitments “do not go far enough.” Federal ministers from the innovation, justice, public safety, and culture portfolios met with OpenAI representatives after the government summoned the company’s executives in late February. A joint task force between Innovation, Science and Economic Development Canada and Public Safety Canada is reviewing AI safety reporting protocols, with preliminary recommendations expected by summer 2026. Bill C-27, which contains the Artificial Intelligence and Data Act, was Canada’s proposed AI regulation framework but is now widely regarded as inadequate. Bill C-63, the Online Harms Act, was designed for social media platforms, not generative AI systems that conduct one-on-one conversations with users. The federal government has tabled new “lawful access” legislation to give police powers to pursue online data from foreign companies, but it does not specifically require AI companies to report threatening behaviour. Canada currently has no legal framework for assigning responsibility when an AI company possesses information that could prevent violence and chooses not to share it.

This is the gap that Altman’s letter cannot close. An apology addresses a past failure. A voluntary policy change addresses a future risk. Neither addresses the structural problem, which is that a company valued at $852 billion, racing to build artificial general intelligence, serving hundreds of millions of users, employing systems that can identify dangerous behaviour in real time, operates under no legal obligation to tell anyone what it finds. OpenAI’s employees saw a threat. OpenAI’s leadership decided the threat did not meet the company’s internal standard. Eight people are dead. The standard has been lowered. The next decision will be made by the same company, under the same voluntary framework, with the same absence of legal consequence for getting it wrong. Altman wrote that he shares the letter “with the understanding that everyone grieves in their own way and in their own time.” Tumbler Ridge is grieving. The question is not whether Sam Altman is sorry. The question is whether being sorry is a policy.
