That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again being shown, with crystal clarity, how severe this has become.
The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.
The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture of, for example, a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan, it seems, every dog in view and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.
But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product long pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contrast between the public understanding of Ring and what Amazon was now boasting it could do.

Many people were not just surprised but quite shocked and alarmed to learn that what they thought was merely their own personal security system now has the ability to link with countless other Ring cameras to form a neighborhood-wide (or city-wide, or state-wide) surveillance dragnet. That Amazon emphasized that this feature is available (for now) only to those who “opt-in” did not assuage concerns.
Numerous media outlets sounded the alarm. The online privacy group Electronic Frontier Foundation (EFF) condemned Ring’s program as previewing “a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, and otherwise.”
Many private citizens who previously used Ring also reacted negatively. “Viral videos online show people removing or destroying their cameras over privacy concerns,” reported USA Today. The backlash became so severe that, just days later, Amazon — seeking to assuage public anger — announced the termination of a partnership between Ring and Flock Safety, a police surveillance tech company (while Flock is unrelated to Search Party, public backlash made it impossible, at least for now, for Amazon to send Ring’s user data to a police surveillance firm).
The Amazon ad seems to have triggered a long-overdue spotlight on how the combination of ubiquitous cameras, AI, and rapidly advancing facial recognition software will render the term “privacy” little more than a quaint concept from the past. As EFF put it, Ring’s program “could already run afoul of biometric privacy laws in some states, which require explicit, informed consent from individuals before a company can just run face recognition on someone.”
Those concerns escalated just a few days later in the context of the Tucson disappearance of Nancy Guthrie, mother of long-time TODAY Show host Savannah Guthrie. At her home, Nancy Guthrie used Google’s Nest cameras for security, a product similar to Amazon’s Ring.
Guthrie, however, did not pay Google for a subscription for those cameras, instead using them solely for real-time monitoring. As CBS News explained, “with a free Google Nest plan, the video should have been deleted within 3 to 6 hours — long after Guthrie was reported missing.” Even professional privacy advocates have understood that customers who use Nest without a subscription will not have their cameras connected to Google’s data servers, meaning that no recordings will be stored or available beyond a few hours.
For that reason, Pima County Sheriff Chris Nanos announced early on “that there was no video available in part because Guthrie didn’t have an active subscription to the company.” Many people, for obvious reasons, prefer not to store with Google a comprehensive, permanent daily video record of when they leave and return to their own home, or of who visits them, when, and for how long.
Despite all this, FBI investigators on the case were somehow magically able to “recover” this video from Guthrie’s camera many days later. FBI Director Kash Patel was essentially forced to admit this when he released still images of what appears to be the masked perpetrator who broke into Guthrie’s home. (The Google user agreement, which few users read, does protect the company by stating that images may be stored even in the absence of a subscription.)
While the “discovery” of footage from this home camera by Google engineers is obviously of great value to the Guthrie family and the law enforcement agents searching for Guthrie, it raises serious questions about why Google, contrary to common understanding, was storing the video footage of unsubscribed users. Patrick Johnson, a former NSA data researcher and CEO of a cybersecurity firm, told CBS: “There's kind of this old saying that data is never deleted, it's just renamed.”

It is rather remarkable that Americans are being led, more or less willingly, into a state-corporate, Panopticon-like domestic surveillance state with relatively little resistance, though the widespread backlash to Amazon’s Ring ad is encouraging. Much of that muted resistance may be due to a lack of realization of how severe the evolving privacy threat has become. Beyond that, privacy and other core rights can seem abstract and less of a priority than more material concerns, at least until they are gone.
It is always the case that there are benefits available from relinquishing core civil liberties: allowing infringements on free speech may reduce false claims and hateful ideas; allowing searches and seizures without warrants will likely help the police catch more criminals, and do so more quickly; giving up privacy may, in fact, enhance security.
But the core premise of the West generally, and the U.S. in particular, is that those trade-offs are never worthwhile. Americans are still taught to admire the iconic (if not apocryphal) 1775 words of Patrick Henry, which came to define the core ethos of the Revolutionary War and the American Founding: “Give me liberty or give me death.” It is hard to express in more definitive terms on which side of the liberty-versus-security trade-off the U.S. was intended to fall.
These recent events emerge in a broader context of this new Silicon Valley-driven destruction of individual privacy. Palantir’s federal contracts for domestic surveillance and domestic data management continue to expand rapidly, with more and more intrusive data about Americans consolidated under the control of this one sinister corporation.
Facial recognition technology — now fully in use for an array of purposes, from Customs and Border Protection screening at airports to ICE’s patrolling of American streets — means that fully tracking one’s movements in public spaces is easier than ever, and it becomes easier by the day. It was only three years ago that we interviewed New York Times reporter Kashmir Hill about her new book, “Your Face Belongs to Us.” The warnings she issued about the dangers of this proliferating technology have not only come true with startling speed but appear to have already gone beyond what even she envisioned.
On top of all this are advances in AI, whose effects on privacy cannot yet be quantified but will not be good. I have tried most AI programs simply to remain abreast of how they function.
After just a few weeks, I had to stop using Google’s Gemini because it was compiling not just isolated data points about me but a wide array of information forming what could reasonably be described as a dossier on my life, including information I had not wittingly provided it. It would answer questions I asked with creepy, unrelated references to the far-too-complete picture it had managed to assemble of many aspects of my life (at one point, it commented, somewhat judgmentally or out of feigned “concern,” on the late hours I was keeping while working, a topic I never raised).
Many of these unnerving developments have happened without much public notice because we are often distracted by what appear to be more immediate and proximate events in the news cycle. The lack of sufficient attention to these privacy dangers over the last couple of years, including at times from me, should not obscure how consequential they are.
All of this is particularly remarkable, and particularly disconcerting, since we are barely more than a decade removed from the disclosures about mass domestic surveillance enabled by the courageous whistleblower Edward Snowden. Although most of our reporting focused on state surveillance, one of the first stories featured the joint spying framework built by the U.S. security state in conjunction with Silicon Valley giants.
The Snowden stories sparked years of anger, attempts at reform, changes in diplomatic relations, and even genuine (albeit forced) improvements in Big Tech’s user privacy. But the calculation of the U.S. security state and Big Tech was that at some point, attention to privacy concerns would disperse and then virtually evaporate, enabling the state-corporate surveillance state to march on without much notice or resistance. At least as of now, the calculation seems to have been vindicated.


