United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photographs against billions of images scraped from the web.
The deal extends access to Clearview tools to Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disrupt, degrade, and dismantle” people and networks seen as security threats.
The contract states that Clearview provides access to “over 60+ billion publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis,” indicating the service is meant to be embedded in analysts’ day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a “variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.
The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as face images, and requires nondisclosure agreements for contractors who have access. It does not specify what kinds of photographs agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.
The Clearview contract lands as the Department of Homeland Security faces mounting scrutiny over how face recognition is used in federal enforcement operations far beyond the border, including large-scale actions in US cities that have swept up US citizens. Civil liberties groups and lawmakers have questioned whether face-search tools are being deployed as routine intelligence infrastructure, rather than limited investigative aids, and whether safeguards have kept pace with expansion.
Last week, Senator Ed Markey introduced legislation that would bar ICE and CBP from using face recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview would be integrated into its systems, what types of images agents are authorized to upload, and whether searches may include US citizens.
Clearview’s business model has drawn scrutiny because it relies on scraping photographs from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.
Clearview also appears in DHS’s recently released artificial intelligence inventory, linked to a CBP pilot initiated in October 2025. The inventory entry ties the pilot to CBP’s Traveler Verification System, which conducts face comparisons at ports of entry and other border-related screenings.
CBP states in its public privacy documentation that the Traveler Verification System does not use information from “commercial sources or publicly available data.” It is more likely, at launch, that Clearview access would instead be tied to CBP’s Automated Targeting System, which links biometric galleries, watch lists, and enforcement records, including files tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.
Clearview AI did not immediately respond to a request for comment.
Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on “high-quality visa-like photographs” but falter in less controlled settings. Images captured at border crossings that were “not originally intended for automated face recognition” produced error rates that were “much higher, often in excess of 20 percent, even with the more accurate algorithms,” federal scientists say.
The testing underscores a central limitation of the technology: NIST found that face-search systems cannot reduce false matches without also increasing the risk that the systems fail to recognize the correct person.
As a result, NIST says agencies may operate the software in an “investigative” setting that returns a ranked list of candidates for human review rather than a single confirmed match. When systems are configured to always return candidates, however, searches for people not already in the database will still generate “matches” for review. In those cases, the results will always be one hundred percent wrong.
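That dynamic is easier to see in a toy example. The Python sketch below is purely illustrative, not CBP’s, Clearview’s, or NIST’s software: random vectors stand in for face embeddings, and the `search` function and its parameters are invented for the demonstration. It shows why a system that always returns a ranked candidate list produces “matches” even for someone who is not in the gallery, and why a similarity threshold that suppresses those false matches also rejects genuine ones.

```python
# Illustrative sketch only. Toy random vectors stand in for face
# embeddings; no real face recognition model or agency system is used.
import numpy as np

rng = np.random.default_rng(0)

# A small "gallery" of enrolled embeddings, one row per person,
# normalized so that a dot product gives cosine similarity.
gallery = rng.normal(size=(1000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(probe, threshold=None, top_k=5):
    """Return (gallery_index, score) candidates for a probe embedding.

    threshold=None mimics an "investigative" configuration that always
    returns the top_k ranked candidates, even when the person searched
    for is not enrolled in the gallery at all.
    """
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe                  # cosine similarity to each entry
    order = np.argsort(scores)[::-1][:top_k]  # best candidates first
    if threshold is not None:
        order = [i for i in order if scores[i] >= threshold]
    return [(int(i), float(scores[i])) for i in order]

# A probe for someone who is NOT in the gallery still yields
# top-ranked "matches" when no threshold is applied; every one of
# those candidates is necessarily wrong.
unknown_person = rng.normal(size=128)
print(search(unknown_person))                 # always returns 5 candidates
print(search(unknown_person, threshold=0.6))  # a strict threshold may return
                                              # nothing, but raising it also
                                              # rejects true matches
```

The trade-off NIST describes lives in that single threshold: lower it and out-of-gallery searches flood reviewers with false candidates; raise it and the system increasingly misses people who really are enrolled.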
