ICE’s Use of AI Will Lead to Big Mistakes. Maybe That’s the Point
U.S. Immigration and Customs Enforcement, under orders from the Trump administration to abduct thousands of people daily, has lately conducted sweeping crackdowns in heavily Democratic cities, with escalated operations in Minneapolis resulting in the killing of two Americans by federal agents this month. And though the deaths of Renee Good and Alex Pretti have shocked the nation and fueled resistance to ICE, the agency has continued to detain anyone it believes might be undocumented. Despite the government’s insistence that it’s targeting criminals who are residing in the country illegally, this largely untrained and unskilled mercenary force is scooping up migrants with no prior arrests as well as U.S. citizens.
ICE is anything but transparent about its haphazard work on the streets and how it picks its targets. The agency’s deployments seem to be organized around everything from trending topics in the right-wing media ecosystem (the increased aggression in Minnesota followed Trump’s racist rhetoric about Somali immigrants there supposedly defrauding benefits systems) to stereotypes about where migrant day laborers congregate (such as Home Depot stores). Of course, ICE also relies on many forms of surveillance and data analysis, much of it evidently powered by artificial intelligence — which may partly account for its pattern of wrongful arrests.
“They’ve initiated a lot of contracts with data providers and providers of AI solutions,” says Damon McCoy, a professor of computer science and engineering who serves as the co-director of the New York University Center for Cybersecurity. “So you can see what contracts they’ve established, and that gives you some insight into likely capabilities that they either have or that they’re building out.”
Among the tech companies with ICE contracts is Palantir, the data-analytics firm co-founded by Silicon Valley Trump supporters Peter Thiel and Joe Lonsdale and led by CEO Alex Karp, who donated $1 million to Trump’s inauguration fund. ICE uses Palantir’s tools to process and summarize tips sent to the agency, as well as a Palantir system called ELITE that sources and collates information from government agencies to create neighborhood maps that lead agents to probable deportation targets. There’s also Clearview AI, which provides algorithm-driven facial recognition software. “They seem to be trying to build some platform,” McCoy says, that would integrate these various systems.
“Instead of being used to figure out what pair of sneakers you might buy, [phone data] might be used to figure out what protest you attended”
Yet the way in which this network of AI services collects and synthesizes information for the agency remains rather mysterious. ICE officials themselves may not really grasp how these tools function or the range of material they’re combing through. The American Immigration Council, a nonprofit human rights group, warned in December: “Ultimately, AI-driven outputs begin to shape enforcement decisions in ways that are difficult to challenge. As the system grows more opaque, enforcement decisions end up driven by the final output rather than any understandable process.”
McCoy says that individuals are vulnerable to AI-powered surveillance without necessarily realizing their exposure. “There’s a pretty rich stream of location data that’s coming off of your phone,” he says; software as seemingly innocuous as your flashlight app is “collecting fine-grained GPS data on you.” He notes that ICE has also begun to explore acquisitions of data from online advertising brokers, a source of potential insight into details such as a person’s health or financial history. “That data is very powerful,” McCoy says. “There’s a lot of location data, there’s a lot of browser history data, there’s a lot of purchasing data. It’s a very rich, powerful data source that’s been built out. Instead of being used to figure out what pair of sneakers you might buy, it might be used to figure out what protest you attended.”
The open availability of this kind of material poses a threat not only to migrants and activists but to anyone with a digital footprint, because depending on AI to sort through it will inevitably lead to mistakes. “If they’re not vetting things well, and they’re just heavily relying on the AI, there’s more than likely a lot of issues that they’re going to encounter,” McCoy says. “The same issues that one encounters whenever you overly rely on AI — the AI starts hallucinating.” Biometric security analyses like facial recognition and iris scanning have also been shown to exhibit biases that undermine their accuracy.
“Pointing out that AI is prone to giving ICE bad information is missing the entire point of ICE”
Eva Galperin, director of cybersecurity for the Electronic Frontier Foundation, believes ICE is fundamentally indifferent as to whether its AI outputs can be trusted. “I think that pointing out that AI is prone to giving ICE bad information is missing the entire point of ICE,” she says. “They don’t care if the information they have is good. There is an enormous amount of pressure from the Trump administration on ICE to simply make arrests and to detain people, to deport people, and to do it in large numbers, and you cannot get numbers that big by adhering to the rule of law, as we have seen.”
Galperin also regards the Trump administration as the perfect gullible customer for overleveraged AI giants controlled by Trump’s billionaire tech-executive allies. “These companies are often in an enormous amount of debt, and one of the big problems that they’re having right now is that there’s simply not enough uptake by paying customers for all of these products that they’re building in order to justify the enormous cost of running them,” she says. “Leaving the U.S. government holding the bag is a way around that.”
“You also have to keep in mind that a lot of the people who are making these [contract] decisions [at federal agencies] have no technical understanding of the things that they are buying, how they work, whether or not they work, the advantages or the disadvantages,” Galperin adds. “They’re very easily bamboozled. And the AI industry is absolutely chock-full of snake-oil salesmen promising you all kinds of results that you are never going to get. The people in charge of procurement in the Trump administration are the ultimate suckers. Of course they’re buying all of this.”
The irresponsible use of AI by the administration, Galperin says, from the so-called Department of Government Efficiency cutting a swath of destruction through Washington with these programs to the White House posting digitally altered propaganda imagery, reflects a post-truth ideology. “It’s predicated on this notion that you can do so much with fewer people, and never mind if it’s not as good,” she says. “Never mind if you get some things wrong. You’re just going to go in there and shake things up and make things different. It doesn’t matter if you’re correct. You can just make claims after the fact.”
Indeed, as MAGA leaders and mouthpieces have offered dubious excuses to justify the killing of Good and Pretti and the detention of young children, that habit of inventing a flimsy pretext once a worst-case scenario has already unfolded has never been more obvious. And when AI dictates where the next raid should take place on the basis of probabilities, not specific evidence, you get randomized violence that must later be explained as if federal law enforcement had legitimate cause to be there and wasn’t acting on the guesses of unreliable software. For a regime more preoccupied with the appearance of an all-out assault on its perceived enemies than anything like justice, that is clearly good enough.

