
Nearly a week after a New Year’s Day explosion in front of the Trump Hotel in Las Vegas, local law enforcement has released further details about its investigation, including how generative AI may have figured into the incident.
The investigation confirmed that the suspect, an active-duty US Army soldier named Matthew Livelsberger, had a “possible manifesto” saved on his phone, along with an email to a podcaster and various letters. Video evidence shows him preparing for the explosion by pouring fuel onto the Cybertruck at a stop before driving to the hotel. Although he kept a log of what he believed was surveillance of him, officials said he had no prior criminal record and was not under investigation.
The Las Vegas Metro Police also shared several slides highlighting queries Livelsberger had posed to ChatGPT in the days before the explosion, including questions about explosives, how to detonate them, and where he could legally buy firearms and explosive materials along his route.
Asked about the queries, OpenAI spokesperson Liz Bourgeois said:
We are saddened by this incident and committed to ensuring AI tools are used responsibly. Our models are designed to refuse harmful instructions and minimize harmful content. In this case, ChatGPT provided information already available online and included warnings against harmful or illegal activities. We are cooperating with law enforcement to assist in their investigation.
Authorities are still investigating what set off the explosion, which they characterized as a deflagration, a reaction that travels more slowly than the detonation of high explosives; a true detonation would have caused far more extensive damage. They have not ruled out other possibilities, such as an electrical short. However, one plausible explanation, consistent with the queries and the available evidence, is that the muzzle flash from a gunshot ignited fuel vapor or fireworks fuses inside the Cybertruck, setting off a larger explosion of the fireworks and other explosive materials.
Notably, the same queries still work in ChatGPT today, as the information sought does not appear to be restricted and could also be found through ordinary web searches. Nonetheless, the suspect’s use of a generative AI tool, and investigators’ ability to trace those queries, puts questions about AI chatbot guardrails, safety, and privacy in a very real-world context.