An ICE agent shot and killed 37-year-old Renee Good in Minneapolis on Wednesday, just the latest instance of federal authorities terrorizing communities with lethal force under President Donald Trump. The ICE agent can be seen shooting at Good's car in three separate viral videos, though the shooter hasn't yet been publicly identified. Internet sleuths are asking AI tools to remove the ICE agent's face mask. The problem is that AI chatbots can't do that with any accuracy.
Video from the scene of the shooting on Wednesday was tough to watch, but it immediately flooded all the major social media platforms. The video was damning, appearing to show Good initially trying to wave the ICE agents on before the masked men gave conflicting orders. They first told her to move on, according to eyewitnesses who spoke with Minnesota Public Radio, before trying to get her out of the car. Video shows Good moved the car forward, with her wheels turned away from the agents, but one of the men can be seen shooting at the car multiple times.
Homeland Security Secretary Kristi Noem claimed that Good was trying to run over the ICE agents and committed an act of "domestic terrorism." Vice President JD Vance called it "basic terrorism" on Thursday. Visual investigations from Bellingcat and the New York Times contradicted their account.
Fake ICE agents created with AI
Not long after the videos went viral, social media users on platforms like X started asking the AI chatbot Grok to unmask the agent who shot Good. Fake images created by unknown AI tools also spread on sites like TikTok and Instagram.
But AI simply can't do that. It just creates an image from scratch that doesn't show the actual face of that person. It's roughly as useful as picking a random photo from the internet.
Some of the images have gotten enormous traction, attracting over a million views on a single tweet, and have spread widely across many networks, driven by an ignorance of what AI tools are capable of producing.
@Grok is this true
Unfortunately, AI is also not good at identifying whether images were created with AI. The image above is not real, but when Gizmodo asked Gemini whether it was created by AI, the chatbot said it wasn't.
Google recently launched the SynthID watermark detector in Gemini, but that's only useful when the image was actually created with a Google tool like Nano Banana Pro. The watermark is invisible to the naked eye, but Gemini has no way to definitively rule on an image created with a different company's tools.
The image above is AI but was not created with Google's tools, and Gemini replied: "Based on my analysis, the image is likely a real photograph, not AI-generated." AI detection software similarly struggles to determine whether text was written with artificial intelligence tools like ChatGPT, leading to false accusations against students who swear they didn't get AI to write their papers.
Steve Grove isn’t an ICE agent
Unmasking people is simply beyond the capabilities of AI tools at the moment. These fake images are currently going viral, and people appear to be running them through facial recognition and getting false positives. One common name that's cropped up on sites like Reddit and X is "Steve Grove," a real person who owns a gun shop in Springfield, Missouri.
The Springfield Daily Citizen spoke with the real Steve Grove, who said that his Facebook account has been inundated with messages. "I never go by 'Steve,'" Grove told the news outlet on Thursday. "And then, of course, I'm not in Minnesota. I don't work for ICE, and I've, you know, 20 inches of hair on my head, but whatever."
Steve Grove is also the name of the CEO of the Star Tribune newspaper in Minneapolis, which may be where this claim originated.
Fake Renee Good images
Other fake images created on Wednesday attempted to show Good in her car before the shooting. One AI-generated image spread widely on Bluesky in a cropped form, but also appeared on Facebook in a wider shot. Notably, the fake image doesn't show anyone behind the wheel, with the woman supposedly meant to represent Good sitting in the passenger seat. The cropped version has been flipped so that it looks more like she's in the driver's seat.

Most disturbingly, one X user took a screenshot of Good, seen slumped over dead in her car, and told Grok to put her in a bikini. Grok dutifully complied, mirroring the chatbot's recent pattern of producing non-consensual sexualized images of women and young girls. It's a federal crime to create child sexual abuse material, but Grok continues to do it at the request of users.
AI can't do that
We've seen this reliance on AI as an investigative tool again and again over the past year. When security camera images of the suspect in the Charlie Kirk shooting were released by the FBI, people ran them through AI tools in an attempt to get a clearer picture of the person without sunglasses. When a suspect was eventually arrested, some people were confused because Tyler Robinson's mugshot didn't look anything like the AI-altered images they had seen circulating on social media.
When Trump appeared unwell over Labor Day weekend last year, social media users tried to "enhance" grainy photos of the president using generative artificial intelligence tools. The enhancement added a gigantic lump to his head.

But AI just introduces flaws; it doesn't create a clearer picture. All you need to do to understand what's happening is look at the flag on Trump's hat. The AI didn't create a more accurate American flag. It looked for patterns and extrapolated from those patterns, sharpening the focus but not producing a more accurate picture of reality.
Old-fashioned misinformation
And you can't always blame internet sleuths alone for some of the dumbest comments in these situations. Greg Kelly, an anchor at Newsmax, tried to suggest on Wednesday that the stickers on the back of Good's car were somehow suspicious.
"TOTALLY JUSTIFIED SHOOTING!!!!!! NOT EVEN CLOSE!!! (Curious about these Stickers on the Back of the Car. Various WACK JOB groups and affiliations? )" Kelly wrote on X.
The stickers clearly appear to be from the National Parks. And a report from the Associated Press suggests she was simply dropping her son off at school and got caught up in the middle of the ICE incident, according to her ex-husband. There's no evidence that Good was some kind of left-wing radical. And even if she was, that wouldn't have justified her killing.
Good had two children from her first marriage, ages 15 and 12, according to Minnesota Public Radio, and a 6-year-old son from her second marriage. A GoFundMe campaign for Good's surviving spouse and son had raised over $600,000 at the time of this writing.
