Welcome to Infinite Curiosity, a weekly newsletter that explores the intersection of Artificial Intelligence and Startups. Tech enthusiasts across 200 countries have been reading what I write. Subscribe to this newsletter for free to receive it in your inbox every week:
The rapid integration of robots and AI into various industries is no longer a vision of the future. It's happening right now! From autonomous vehicles navigating our streets to AI agents managing our schedules, robots are becoming indispensable. But one question looms large: Do robots need their own insurance?
Why do robots need insurance?
Like any other asset or employee, robots can cause damage or be damaged themselves. They can malfunction, be hacked, or make decisions that lead to unintended consequences. For instance, a self-driving Uber vehicle struck and killed a pedestrian in Arizona in 2018.
This tragic incident highlighted the complex liability issues surrounding autonomous systems. Who is at fault — the operator, the manufacturer, the software developer, or the AI itself?
As robots become more autonomous, the traditional lines of responsibility blur. Insurance is needed not just to protect the owners and operators of robots, but also to provide a framework for accountability and compensation when things go wrong.
The most important point is the simplest: robots are new. They won’t simply be machines that help us lift things; they promise to be problem solvers. But they also promise, implicitly, to be problem creators.
As the Saturday Night Live skit above and the endless jokes about Skynet suggest, no one will want powerful, unpredictable machines without some form of protection!
In other words, if you’re a company that is trying to sell robots, you almost certainly need a way to provide robot insurance.
What's the problem with current insurance products?
A skeptic would say that robots are no different than the mid-20th century advent of “labor-saving devices” like washing machines, dishwashers, or food processors. After all, the first food processors didn’t have the safety measures that a modern Cuisinart does. I’m always a little impressed that the machine simply can’t turn on until the top is properly in place.
Any machine that combines physical power and computational power will require underwriting of operational malfunctions and cybersecurity threats. Their design and manufacturing must meet industry standards and regulations. But most importantly, traditional insurance also offers financial protection against damage to third-parties.
The important traditional liability insurance products (whether home, auto, or commercial) are based on human error and predictable risk patterns. In insurance, we understand that these liability policies cover a combination of physical and behavioral attributes: a house with a pool will be more expensive to insure than one without. A ski resort with a history of customer injuries around its ski lifts will find insurance more expensive than one with a spotless safety record.
Robots operating with AI algorithms and decision-making autonomy will mix all these risk categories together, and they will do so in confusing and hard-to-predict ways. Manufacturing and operational malfunctions will require us to expand behavioral analysis to include robot behavior. Cybersecurity risks can now cause physical malfunctions, and the bad outcomes are no longer limited to the simple logic of button pushing.
For our society to make use of autonomous and semi-autonomous robots, we must simultaneously parse the downsides and allocate the responsibilities. Without these commercial arrangements, the uncertainty will be too much to bear and the logical decision will be to simply avoid the risk.
How do robots change the risks?
If you haven’t yet, watch Figure AI’s YouTube videos: the robot puts away dishes, hands the user an apple, and uses an OpenAI integration to handle voice prompts.
Robot insurance will most likely start from the frameworks we’ve already developed for industrial robots, covering risks like injury to workers or product damage. But the surface area of risk is already much wider than the mental model we might have of a controlled factory floor. In 2019, a Swiss Post drone crashed near a group of children due to a malfunction. Insurance for such drones should cover potential injuries and property damage.
For example, cyber insurance might cover data breaches but not physical damage caused by a hacked robot. In 2017, the FDA recalled St. Jude's cardiac devices due to vulnerabilities that could allow hackers to remotely control them. Cyber insurance doesn't typically cover damages resulting from the actions or the breakdowns of hacked devices.
Similarly, property insurance might cover equipment failure due to mechanical issues but not damages resulting from a robot's decision-making process. This gap leaves operators and manufacturers exposed to significant financial risks.
Creators and buyers of these robots may also seek to cover more mundane commercial concerns, like warranties to cover repair costs or business interruption if a robot plays a critical role in a commercial process. Operators and owners will also need new types of professional liability coverage to handle the added complexity of AI-mediated instructions.
Most regulatory approaches to AI will fall into a copilot framework. In this framework, the expectations are that humans will supervise AI and humans will continue to bear the obligations of any action performed. But it’s not clear that robots acting in the world can operate according to this logic of human responsibility.
What challenges do insurers face to create Robot Insurance?
Creating a net-new insurance product comes with a lot of headaches.
At the philosophical level, insurance exists to put a financial price on the priceless: theft, car accidents, hurricanes, death. These are things we find unthinkable in our own lives, and insurance processes and transforms them into the bloodless language of incidence, severity, and premiums.
To perform these transformations, the insurance industry needs data, ideally data that spans long periods of time under stable conditions. The novelty of robots means there's insufficient historical data to predict future losses accurately. Without a robust data set, actuarial models struggle to estimate risk. And robots and AI technologies evolve quickly, rendering existing data obsolete: an actuarial model based on last year's technology may not apply today.
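To make the data problem concrete, actuaries typically start from a frequency-severity model: expected losses are claim frequency times average claim size, grossed up for expenses and a margin for uncertainty. Here is a minimal sketch; all of the numbers and parameter names are hypothetical, not real underwriting figures:

```python
# Minimal frequency-severity premium sketch (illustrative numbers only).
def expected_premium(claim_frequency, avg_severity, expense_load=0.25, risk_margin=0.10):
    """Pure premium = expected claim frequency x expected claim severity,
    grossed up for expenses and a margin for uncertainty."""
    pure_premium = claim_frequency * avg_severity
    return pure_premium * (1 + expense_load + risk_margin)

# A hypothetical fleet of delivery robots: 2% annual claim rate and a
# $15,000 average claim. Sparse data is why the risk margin is wide.
premium = expected_premium(claim_frequency=0.02, avg_severity=15_000)
print(round(premium, 2))  # → 405.0
```

The point of the sketch is the dependency: every term comes from historical data, and for robots both the frequency and the severity estimates are still guesses, which is exactly why early policies will carry fat risk margins.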
Despite the fact that AI systems aren’t purely mechanistic, there have been leaps in research on AI explainability. Moreover, the sheer volume of data coming from these systems should help tech-forward insurers build risk models more quickly while helping creators and operators rapidly diagnose and fix missteps.
Take the drone example above. If delivery drones provide data on flight paths, weather conditions, and mechanical status, insurers can more accurately price premiums based on actual risk exposure. Similar steps should allow insurers to analyze operational data to determine the likelihood of malfunctions or accidents.
New tech-driven claims processing will also have much richer data than before. Instead of photos of car damage and analysis of tire tracks, we now have data logs of telemetry, onboard cameras, and “decision events”. These must be analyzed to process claims and provide incontrovertible evidence of what occurred. In autonomous vehicle accidents, like the 2016 Tesla crash in which a driver was killed while using Autopilot, data played a crucial role in understanding the circumstances.
Utilizing robot-collected data enhances transparency and accuracy in insurance. However, it also raises privacy concerns. Insurers must balance data usage with ethical considerations to maintain trust.
How can insurers partner with startups to cross the chasm?
Robotic systems have the potential to create more direct connections between the creator of the systems and the insurer bearing the risk. After all, no one has more incentive to make consumers comfortable with robots than the people and companies creating them.
Deep partnerships can make sure that insurance coverage begins with risk mitigation.
Continuous monitoring of a robot’s performance can help prevent harm through overrides, constraints, or “kill switches”.
Scenario simulations and digital twins during training can be adjusted to be risk-aware, and insurers can help set these standards, perhaps borrowing from the FDA framework of testing safety separately from testing efficacy or performance.
Once in contact with the real world, these same tools could help adjust premiums dynamically, which will incentivize safe operation and regular updates. Insurers must be proactive rather than reactive in their pricing strategies to accommodate the evolving nature of AI decision-making.
Moreover, the pricing of risk creates a market signal around AI safety that can naturally ripple through the value chain. If it really is true that one LLM company has more commitment to creating “Safe AI”, then insurers’ pricing will naturally drive manufacturers toward the safer option by making it cheaper to insure. This can promote transparency and explainability as well: both insurers and regulators will prefer predictable and understandable systems over black boxes with the same statistical profile.
In this context, insurance can be a useful adjunct to regulation. For example, insurers were important advocates for airbags even before federal regulation began to require them. Insurance costs will act faster than regulation to incentivize the developers of AI models to take one more step in ensuring the right controls are in place. In an industry that is moving this quickly, it may be that insurance (not waiting for regulation) is our best hope of avoiding the Terminator.
How can robot insurance get started?
Embedded insurance programs can help build the bridge toward larger capacity and coverage over time. Initially, data limitations and limited experience will likely mean smaller coverage limits and large exclusions. Significant events are more likely to be handled through negligence litigation than through insurance coverage.
But we see already that innovators can fill this gap by risking their own balance sheet while still using insurance structures. Companies can use a mix of their own balance sheets, captive insurance arrangements, and more traditional carrier partnerships to bring novel products to market.
Even when innovators do put their own capital at risk, structuring the protections to be “legible” to an insurer or reinsurer is critical to scale. A commercial “satisfaction guaranteed” promise, for instance, is one no insurer could backstop without a track record of underwriting, claims, and payouts to understand and price the real risk over time.
The balance sheets of startups ultimately won’t be able to support the size of risks or create the diversification necessary for good outcomes. So insurers and reinsurers will need to invest in the systems to handle data, underwriting, claims, and partnerships necessary to enable a robotic future.
There are also established approaches to make the first policies easier to structure and underwrite:
Usage-based insurance tailors policies to actual robot usage and performance data, an approach already established through telematics in auto insurance. It’s also likely that AI underwriting will need to be embraced; for example, Lloyd's of London is exploring AI to underwrite cyber risks associated with robotics. Making proper and effective use of data as systems rapidly change won’t work without automation, and AI strategies will likely be essential to picking out patterns in usage, accident, and claims data.
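As a sketch of what usage-based pricing could look like for a robot, here is a toy adjustment function. The telemetry factors and weights are hypothetical, chosen only to illustrate the mechanics, not drawn from any actual insurer's model:

```python
# Sketch of usage-based pricing: scale a base premium by observed
# telemetry. All factor names and weights here are hypothetical.
def usage_adjusted_premium(base_premium, hours_operated, incident_count, autonomy_share):
    """More operating hours, more incidents, and more unsupervised
    autonomy push the premium up; a clean record pulls it down."""
    exposure_factor = 1 + 0.001 * hours_operated   # more use, more exposure
    incident_factor = 1 + 0.5 * incident_count     # surcharge per incident
    autonomy_factor = 1 + 0.2 * autonomy_share     # share of unsupervised time, 0.0-1.0
    return base_premium * exposure_factor * incident_factor * autonomy_factor

# A robot that ran 500 hours with no incidents, 40% of them unsupervised:
print(round(usage_adjusted_premium(1_000, hours_operated=500,
                                   incident_count=0, autonomy_share=0.4)))  # → 1620
```

The design choice mirrors auto telematics: the premium is recomputed each period from the robot's own logs, so safe operation is rewarded automatically rather than through an annual renewal negotiation.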
Parametric insurance offers pre-defined payouts when specific triggers occur. It may enable early insurance datasets to be established, though it may work more effectively for commercial coverages like business interruption than for third-party damages.
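A parametric structure is simple enough to sketch in a few lines, which is much of its appeal: the payout keys off a pre-agreed trigger rather than a loss adjuster's assessment. The downtime trigger, payout rate, and cap below are hypothetical:

```python
# Parametric sketch: payout is a pure function of a measurable trigger,
# with no claims investigation. Trigger and amounts are hypothetical.
def parametric_payout(downtime_hours, trigger_hours=24, payout_per_hour=500, cap=50_000):
    """Pay a fixed rate for every hour of robot downtime beyond the
    agreed trigger, up to a cap."""
    if downtime_hours <= trigger_hours:
        return 0
    return min((downtime_hours - trigger_hours) * payout_per_hour, cap)

print(parametric_payout(10))  # below the trigger → 0
print(parametric_payout(72))  # (72 - 24) * 500 → 24000
```

Because the trigger is objective and machine-readable, payouts can settle in days, and every trigger event becomes a clean data point for the early loss datasets the industry currently lacks.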
Modular policies can also be developed to cover these risks in a crawl, walk, run cadence. Insurance startups building software for AI-governance might offer a template for gradual expansion across a variety of novel risks.
All of these approaches will require a collaborative insurance framework involving all parties to ensure comprehensive coverage. Shared liabilities reflect the interconnected nature of robotic technologies and their deployment.
What will a good insurance product look like?
Machines are already embedded in daily life, and many of us are excited by the productivity enhancements that could come in a Jetsons-like future. On top of that, there are the significant safety gains that have already come from increased automation in factories, and the promise that robots could take on both our tedious and dangerous tasks, from folding laundry to defusing bombs.
While AI and robotic safety are paramount, the real world is messy. Accidents and tragedies will happen. So this future also will require the kinds of promises that insurance has been making for hundreds of years.
An effective insurance product for robots should:
Cover both physical damages and intangible risks like data loss or reputational harm.
Adapt to the rapid advancements in technology and the evolving nature of AI behavior.
Clearly define the liability and responsibilities of manufacturers, operators, and developers to avoid legal ambiguities.
Encourage best practices in cybersecurity and regular maintenance to minimize risks.
Utilize data to assess risks accurately and price premiums fairly.
As robots continue to permeate various aspects of society, the need for specialized insurance becomes increasingly critical. It's not just about mitigating financial losses but also about fostering trust in robotic technologies. Insurers, manufacturers, and regulators must collaborate to develop products that address the unique challenges posed by robots. This will ensure that innovation does not outpace our ability to manage its risks.
If you're a founder or an investor who has been thinking about this, I'd love to hear from you.
If you are getting value from this newsletter, consider subscribing for free and sharing it with 1 friend who’s curious about AI: