Canadian officials demand answers after an AI company banned Jesse Van Rootselaar’s account in June 2025 but did not alert police, eight months before she killed eight people in British Columbia.
OTTAWA — Canadian officials have summoned senior safety executives from OpenAI to Ottawa for an emergency meeting following revelations that the company failed to report a ChatGPT user who later committed one of the deadliest mass shootings in Canadian history.
Artificial Intelligence Minister Evan Solomon confirmed that the senior safety team from the U.S.-based company will meet with him on Tuesday to explain why it did not inform law enforcement about suspicious activity on the account of Jesse Van Rootselaar, the 18-year-old responsible for the February 10 massacre in Tumbler Ridge, British Columbia.
A missed opportunity?
OpenAI has acknowledged that its abuse-detection systems flagged Van Rootselaar’s account in June 2025—eight months before the attack—for “misuses of our models in furtherance of violent activities.” The account was banned at that time, but the company made no attempt to notify Canadian authorities.
The company maintains that the decision was consistent with its internal policies, which require a “very high bar” for involving law enforcement. According to OpenAI, Van Rootselaar’s ChatGPT usage did not point toward “credible or imminent planning of an attack.”
But for British Columbia Premier David Eby, that explanation rings hollow.
“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” Eby told reporters Monday. “I’m angry about that. I’m trying hard not to rush to judgement.”
The tragedy
On February 10, Van Rootselaar shot and killed her mother and 11-year-old half-brother at their home before driving to Tumbler Ridge Secondary School, where she murdered five children and an educational assistant. The shooter died at the school from a self-inflicted gunshot wound as police responded. Two other students were injured, one of whom remains in serious condition at a Vancouver children’s hospital.
The small mining community, located 1,180 kilometers north of Vancouver, has been left reeling. Abel Mwansa, whose 12-year-old son Abel was killed in the attack, has spoken of meeting another victim’s father at the hospital where 12-year-old Maya Gebala remains unconscious after being shot in the head.
“We were told we only had hours and yet here you are, still fighting, still with us,” Maya’s father David Gebala wrote in a recent update.
What did OpenAI know and when?
According to reports, the company’s internal debate about Van Rootselaar’s account was more extensive than previously known. About a dozen OpenAI staffers discussed whether to take action on the troubling posts, which included scenarios involving gun violence. Some employees reportedly encouraged leaders to alert authorities about the potential real-world threat.
The company ultimately decided against any notification to law enforcement.
OpenAI has stated that it balances public safety concerns against user privacy and the risk of causing distress by triggering unnecessary police visits. The company also trains ChatGPT to discourage imminent real-world harm and refuse assistance for illegal activities.
Following the shooting, OpenAI did reach out to the Royal Canadian Mounted Police (RCMP) to provide information about Van Rootselaar’s account activity. But critics note that this contact came only after the company learned of the massacre—and only after it had met with British Columbia officials about opening a provincial office without mentioning its prior knowledge of the shooter.
“Reports that allege OpenAI had related intelligence before the shootings in Tumbler Ridge took place are profoundly disturbing for the victims’ families and all British Columbians,” Eby said in a statement.
Regulatory questions
Tuesday’s meeting in Ottawa will focus on understanding OpenAI’s safety protocols, including “when they escalate, and their threshold of escalation to police,” according to Minister Solomon.
Solomon declined to specify what actions or new legislation Ottawa might consider, but emphasized that “all options are on the table” regarding future protections from online harm.
Eby has urged the federal government to create a national standard for when AI companies must report users plotting violence on their platforms.
“It will have to be done carefully, but ensuring a consistent standard for all AI companies across the country is required,” he said.
Alan Mackworth, a professor emeritus with the University of British Columbia’s department of computer science who focuses on AI safety and ethics, drew parallels to professional obligations in other fields.
“These obligations are enshrined in law and/or professional ethics,” Mackworth said. “Similar obligations should be placed on social media and AI companies.”
Broader implications
The case has intensified scrutiny of tech companies’ responsibilities to report threatening user activity. While Canada has strict gun laws and mass shootings are extremely rare—making the Tumbler Ridge killings particularly shocking—the role of AI platforms in potentially preventing violence is now firmly in the spotlight.
OpenAI has said it is “constantly reviewing our referral criteria with experts and we are reviewing this case to see what we can learn and improve.” In a statement, the company expressed condolences and affirmed its cooperation with investigators.
“This was a devastating tragedy, and we are doing all we can to support the ongoing investigation,” said OpenAI spokesperson Jamie Radice.
The RCMP continues its investigation, with Staff Sergeant Kris Clark confirming that a “thorough review of the content on electronic devices, as well as social media and online activities” of Van Rootselaar is taking place.
As Canadian officials prepare to confront OpenAI’s leadership, the central question remains whether current thresholds for reporting potential threats are adequate—and whether, in this case, following the rules meant missing the chance to save lives.