My insurance company spied on my house with a drone. Then the real nightmare started.

It was already a hectic day when my insurance agent left me a panicked voicemail saying my homeowners insurance had lapsed. I felt sick and naked. Suddenly, any leak, any fire, any tree branch falling on the century-old Hudson Valley home that has been in my family for nearly 40 years could drain my bank account. I was overcome with shame. How had I let this happen? Had I forgotten to update a credit card? Had I missed a bill? Had I done something wrong with the policy? But when I checked my records, and even Travelers’ website, there was nothing.

A few hours later, my panic turned to bewilderment. When I finally got through to my insurance broker, he told me the reason Travelers had revoked my policy: AI-driven drone surveillance. My finances were in jeopardy, it seemed, because of a bad piece of code.

I take privacy and surveillance very seriously—so seriously that I started one of the leading think tanks on the subject, the Surveillance Technology Oversight Project. But while my job involves studying surveillance threats across the country, I had no idea that my own insurance company was using my premium dollars to spy on me. Travelers is not only using aerial photography and AI to monitor its customers’ rooftops, it has also filed patents on the technology—nearly 50, in fact. And it may not be the only insurer spying from the sky.

This didn’t just feel creepy and invasive — it felt wrong. Literally wrong: There was nothing wrong with my roof.

I’m a lazy homeowner. I hate gardening and I don’t clean as often as I should. But I still take care of the essentials. Whether it’s upgrading the electrical or installing new HVAC, I try to make sure my home is safe. But to Travelers’ AI, my laziness seemed like too big a risk to insure. The algorithm didn’t detect a foundation issue or a leaky pipe problem. Instead, as my agent revealed, the ominous threat that canceled my insurance was nothing more than moss.

Where there’s moisture, there’s moss, and if you let a large amount of it sit for an extended period of time, it can undermine the life of your roof. A small amount is largely harmless. Still, it couldn’t be easier to treat. Sure, I could have removed the moss sooner, but life got busy and it just kept falling (and growing) through the cracks. Finally, in June, weeks before I knew my roof was being monitored, I went to the hardware store, spent $80 on moss killer, connected the white bottle of chemical to the garden hose, and sprayed it on the roof. The whole process took about five minutes. A few days later, to my great relief, the moss was dying. I thought that was the end of a completely unmemorable story.

Who knows. If I had done that a month earlier, Travelers’ technology might never have noticed me, never have told me I was an insurance risk. But one of the great frustrations of the AI surveillance era is that as companies and governments track more and more of our lives in ever-increasing detail, we rarely know we’re being watched. At least not until it’s too late to change our minds.

While there’s no way to know exactly how many other Travelers customers have been targeted by the company’s surveillance program, I’m certainly not the first. In February, ABC’s Boston affiliate reported on a customer who was threatened with non-renewal unless she replaced her roof. The roof was well within its expected lifespan, and she had never had a leak; even so, she was told that without a replacement she would lose coverage. She was left facing a $30,000 bill to replace a slate roof that experts estimated could last another 70 years.

Insurers have a vested interest in being overly cautious in how they build their AI models. No one can use AI to predict the future; you train the technology to make guesses based on changes in roof color and grainy aerial photos. But even the best AI models will get many predictions wrong, especially at scale and especially when you’re trying to guess about the future of radically different roof designs on countless buildings in different environments. For the insurance companies designing the algorithms, that leaves a lot of questions about when to put a thumb on the scale in favor of or against the homeowner. And insurance companies will have enormous incentives to side against the homeowner every time.

Think about it: Every time the AI gives the green light for a roof that actually has something wrong with it, the insurance company foots the bill. Every time that happens, the company can add that data point to its model and train it to be even more risk-averse. But when homeowners are threatened with cancellation, they foot the bill for repairs, even if the repairs are unnecessary. If a homeowner in Boston throws out a slate roof that has 70 years of life left, the insurance company never learns it was wrong to remove it. It never updates the model to be less aggressive on similar homes.

Over time, insurance companies will have every incentive to make the models increasingly unforgiving, causing more Americans to lose coverage and potentially causing millions or billions of dollars in unnecessary home repairs. And as insurers continue to suffer losses due to the climate crisis and inflation, the pressure to force unnecessary preventative repairs on customers will only increase.

A confusing coda to this whole ordeal was what Travelers said when I contacted them with a detailed list of fact-checking questions and a request for an interview. In response, a spokesperson sent a terse denial: “Artificial intelligence analysis/modeling and drone surveillance are not part of our underwriting process. When available, our underwriters may reference high-resolution aerial imagery as part of a holistic assessment of the property’s condition.”

How did this square with what was written on Travelers’ own website and in its patent applications? Then the precision and smoothness of the language started to stand out. What exactly counts as the “underwriting process”? When Travelers brags online that its employees “rely on algorithms and aerial imagery to identify the shape of a roof — a typically time-consuming process for customers — with nearly 90% accuracy,” doesn’t that classification count as part of underwriting? And even though Travelers has flown tens of thousands of drone flights, aren’t those part of underwriting? And if AI and drones don’t really affect customers, why is the company filing so many patent applications for “Systems and Methods for Artificial Intelligence (AI) Roof Deterioration Analysis”? It felt like Travelers was trying to have it both ways: bragging about using the latest and greatest technology while avoiding liability for its mistakes. When I asked the company these follow-up questions, Travelers didn’t respond.


Fortunately, my own roof isn’t going anywhere, at least not for a while. A few hours after my panicked ordeal with Travelers began and I started looking for new coverage, the situation was resolved. Travelers admitted that it had screwed up. It never admitted that the AI was wrong to tag me. But it did reveal the reason I couldn’t find my cancellation message: the company never sent it.

Travelers may have invested vast sums in neural networks and drones, but the company apparently never updated its billing software to handle the basics reliably. Without a notice of non-renewal, it couldn’t legally cancel coverage. Bad advanced technology screwed me; bad basic software saved me.

Part of what’s so disturbing about the whole episode is how opaque it was. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no idea why or how. As more and more companies use increasingly opaque forms of AI to chart the course of our lives, we’re all at risk. AI may offer companies a quick way to save some money, but when these systems use our data to make decisions about our lives, we’re the ones bearing the risk. As maddening as it is to deal with a human insurance agent, it’s clear that AI and surveillance are not adequate substitutes. And unless lawmakers take action, the situation will only get worse.

The reason I still have insurance is because of simple consumer protection laws. New York State won’t let Travelers cancel my insurance without notice. But why do we let companies like Travelers use AI on us in the first place without any protections? A century ago, lawmakers saw the need to regulate the insurance market and make policies more transparent, but now we need updated laws to protect us from the AI that’s trying to control our fate. Without them, the future looks bleak. Insurance is one of the few things that protects us from the risks of modern life. Without AI safeguards, algorithms will eat away at what little peace of mind our policies give us.


Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP, a New York-based civil rights and privacy group.

Read the original article on Business Insider
