"Regulatory, Legal and Safety Headwinds" Tesla’s Autonomous Driving Ambitions Pressured, Breakthrough Hinges on Data Accumulation?
- Tesla’s autonomous driving business faces a “triple risk” of regulatory scrutiny, consumer backlash and safety concerns
- Recurring accidents over several years, with the Austin robotaxi program posting elevated incident rates
- Incomplete technological maturity, with gradual improvement anticipated through long-term driving data accumulation

Tesla’s autonomous driving business has encountered mounting headwinds. Persistent friction stemming from regulatory scrutiny and consumer disputes, compounded by recurring accidents, has constrained the company’s growth trajectory. Industry experts contend that Tesla’s autonomous driving technology remains underdeveloped, arguing that meaningful performance gains will require the accumulation of extensive real-world driving data over a prolonged period.
Regulatory and Consumer Backlash Intensifies
On the 18th (local time), Tesla announced it would cease using the terms “Autopilot” and “Full Self-Driving (FSD)” in California. The move appears to follow a determination by the California Department of Motor Vehicles (DMV) that the company’s marketing language may have violated state regulations. The DMV recently concluded that Tesla’s use of the terms Autopilot and FSD implied a level of vehicle autonomy beyond the systems’ actual capabilities, thereby creating a risk of consumer deception. The agency subsequently warned it could revoke Tesla’s sales license and granted a 60-day grace period.
Consumer dissatisfaction surrounding Tesla’s autonomous driving program has also intensified. In February last year, controversy over Tesla’s FSD function in Australia escalated into a class-action lawsuit. The vehicles at issue were the Tesla Model 3 and Model Y equipped with HW3, the company’s third-generation self-driving computer. Since 2016, Tesla had promoted that every vehicle it sold included the hardware necessary for FSD capability, yet it later acknowledged that HW3 could not fully support FSD functions. Elon Musk, Tesla’s Chief Executive Officer, pledged in 2024 to enable FSD functionality through hardware upgrades for vehicles equipped with HW3, but no definitive upgrade roadmap has been disclosed to date. Some owners who purchased or leased the affected models have filed lawsuits seeking financial compensation for Tesla’s alleged failure to honor its commitments.
In South Korea, 98 Tesla owners have filed a lawsuit against Tesla Korea seeking refunds, alleging the company sold vehicles incapable of supporting the FSD option. Tesla Model S and Model X vehicles manufactured in the United States may utilize FSD domestically if they satisfy U.S. safety standards under the Korea-U.S. Free Trade Agreement’s equivalency clause. By contrast, Model 3 and Model Y vehicles imported from China are unable to deploy FSD due to regulatory constraints. Of the 59,916 Tesla vehicles sold domestically last year, only 719 units, or 1.2%, were U.S.-manufactured models eligible for FSD use. Nevertheless, Tesla reportedly sold the FSD option to buyers of China-made vehicles for approximately $7,500, asserting that domestic regulatory restrictions would soon be lifted.
Persistent Safety Concerns
Safety concerns surrounding Tesla’s autonomous robotaxi service have spread rapidly. According to the National Highway Traffic Safety Administration (NHTSA), Tesla robotaxis that began pilot operations in Austin, Texas, in June last year have been involved in a total of 14 collisions, five of which occurred this year. Reported incidents include a collision with a stationary object while traveling at 17 miles per hour, a stopped robotaxi struck by a bus, a crash with a large truck at 4 miles per hour, and low-speed collisions with a pole or tree while reversing at 1 mile per hour and with another stationary object at 2 miles per hour.
Based on Tesla’s fourth-quarter 2025 earnings materials, cumulative paid mileage logged by the Austin robotaxi fleet is estimated to have reached approximately 800,000 miles as of mid-January this year. On that basis, the average mileage per accident stands at roughly 57,000 miles. The figure significantly exceeds benchmarks cited in Tesla’s own Vehicle Safety Report, which states that the average U.S. driver experiences a minor accident every 229,000 miles and a major accident every 699,000 miles. By that measure, the robotaxi fleet’s accident frequency is approximately four times higher than that of human drivers in minor-accident terms.
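The comparison above reduces to two divisions. A minimal sketch, using only the figures cited in the article (the mileage estimate and collision count are the article’s, not official Tesla disclosures):

```python
# Accident-rate comparison from the figures cited above.
cumulative_miles = 800_000      # estimated Austin robotaxi paid mileage
collisions = 14                 # NHTSA-reported collisions

miles_per_accident = cumulative_miles / collisions
print(round(miles_per_accident))   # roughly 57,000 miles per accident

# Benchmark from Tesla's own Vehicle Safety Report: one minor
# accident per 229,000 miles for the average U.S. driver.
human_minor_interval = 229_000
ratio = human_minor_interval / miles_per_accident
print(round(ratio, 1))             # roughly 4x higher accident frequency
```

Dividing the human-driver minor-accident interval by the fleet’s miles-per-accident figure yields the roughly fourfold gap the article describes.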
Concerns over the safety of Tesla’s autonomous systems predate the Austin pilot program. A fatal accident in 2019 involving Tesla’s driver-assistance system, Autopilot, remains emblematic. In Florida, a Tesla Model S operating at night collided with an SUV parked roadside, striking a man and woman standing nearby. The woman died and the man sustained serious injuries. The vehicle owner alleged that Autopilot failed to properly detect road boundaries and forward obstacles, filing suit against Tesla.
Tesla asserted that the driver bore full responsibility due to inattentiveness. However, a federal jury in Miami ruled in August last year that Tesla was 33% liable for the accident and ordered the company to pay $242.5 million in damages, including punitive damages, to the victims’ families. The ruling marked the first instance of a U.S. court assigning manufacturer liability to Tesla in a crash involving the Autopilot function. Tesla has indicated it will appeal the decision.

Technology Refinement Through Real-World Deployment
As safety incidents continue to surface, experts argue that Tesla’s technological approach lags behind competitors employing sensor-fusion architectures. Companies such as Waymo, which have commercialized fully autonomous vehicles, typically integrate lidar, radar and camera systems to process environmental data in combination. Lidar uses laser beams to precisely map surrounding objects, while radar calculates the distance and speed of targets over a broader range. By contrast, Tesla has adopted its “Tesla Vision” approach since 2021, eliminating radar in favor of camera-based computer vision.
Missy Cummings, a researcher at George Mason University’s Autonomy and Robotics Lab and a former fighter pilot, underscored the risks of this strategy in an interview with nonprofit outlet More Perfect Union. “If computer vision is 97% accurate, that means three errors out of every 100 attempts,” she said. “If we divide autonomy levels from kindergarten to high school graduation, Tesla’s ‘self-driving’ would be around first- or second-grade level,” she added, arguing that Tesla’s methodology diverges from what is taught in university robotics programs.
Nevertheless, some observers project incremental improvement in Tesla’s autonomous driving capability. Tesla’s FSD system employs an end-to-end learning architecture, forming its decision-making criteria through data gathered during real-world driving. Unlike rule-based systems that rely on predefined If-Then logic, end-to-end models refine performance as mileage expands and data accumulates. A market specialist noted, “Tesla has adopted a strategy of refining imperfect technology through the accumulation of real-world data. Technical completeness will improve over time, yet the process of acquiring training data for unpredictable edge cases carries accident risks that may translate into legal and reputational liabilities.”