The fatal crash of the Xiaomi SU7 has cast a harsh spotlight on intelligent driving systems in China. As accidents mount, public trust is eroding...
On March 29, a tragic accident involving a Xiaomi SU7—a popular electric vehicle model launched by the Chinese consumer electronics giant Xiaomi—claimed the lives of three university students in East China. The car, reportedly operating under Xiaomi's semi-autonomous system at the time, veered into a concrete divider and caught fire.
The mother of one of the three victims recalled that she had once driven the SU7 with her daughter from Shenzhen to Wuhan. During the 1,000-kilometer journey, her daughter kept telling her that intelligent driving was "convenient and safe." Despite her daughter's confidence, the mother had voiced concerns, cautioning that the technology was not yet reliable enough to be trusted completely. "I told her she would regret it someday, but she disagreed, insisting there were plenty of reasons to trust the system," the mother said.
This generational divide in trust toward emerging technologies has become increasingly evident. Younger users are often quick to embrace innovation, sometimes influenced by online reviewers who hype features like “zero takeover.” Older individuals may approach such systems more cautiously—but as this case shows, caution alone wasn't enough to prevent the tragedy.
The crash reignited national debate over the risks of intelligent driving. Two weeks later, the Ministry of Industry and Information Technology of China summoned about 60 representatives of car companies, demanding that they avoid overstating the capabilities of driver-assistance technology. The Ministry also introduced standards for over-the-air (OTA) software updates to ensure greater transparency and traceability, according to sources familiar with the matter.
As part of these efforts, automakers are now prohibited from using terms like “autonomous driving” or “advanced smart driving” to market systems that only meet Level 2 automation standards.
Lessons Unlearned: What Past Smart Driving Collisions Tell Us
In fact, the Xiaomi SU7 is not the first car whose intelligent-driving accident has sparked national debate over the safety of the technology. According to the latest data from the US National Highway Traffic Safety Administration (NHTSA), there have been 736 crashes involving Tesla's Autopilot mode in the US since 2019, resulting in 17 fatalities.
☆Tesla Model S, Florida, USA, 2016: The first widely publicized fatal incident involving Tesla's Autopilot occurred in the US when the car's sensor system failed to detect a large white 18-wheel truck and trailer crossing the highway. The vehicle attempted to pass beneath the trailer at full speed, and the roof was torn off by the impact. The driver was killed.
☆NIO ES8, Fujian, China, 2021: A 31-year-old entrepreneur died in a rear-end collision on the Shenhai Expressway while using NIO's Navigate on Pilot (NOP) assisted driving feature. The vehicle crashed into a maintenance truck that was collecting traffic cones. Reports indicate that the NOP system failed to detect and respond to the slow-moving and stationary objects ahead.
☆XPeng P7, Ningbo, China, 2022: An XPeng P7 crashed into a car stopped in front of it, killing the driver of the stationary vehicle. The XPeng owner later confirmed that the XPILOT 2.5 intelligent driving assistance system was engaged at the time. He admitted to being "just distracted" and said the system gave no warning before the crash.
☆AITO M7 Plus, Shanxi, China, 2024: The vehicle's automatic emergency braking (AEB) system failed to activate at a speed of 115 km/h, above its effective operating range of 4 to 85 km/h (a simplified sketch of this speed-range logic appears below). As a result, it rear-ended a road maintenance vehicle working in the fast lane of the expressway. Three people lost their lives in the crash.
☆Li Auto L9 Pro, Hubei, China, 2024: With the assisted driving system activated on the highway, the car misidentified an image of a small truck on a high billboard as a real vehicle and braked suddenly, leading to a rear-end collision. The owner sustained minor injuries to his hands and face. Experts noted that millimeter-wave radar has difficulty detecting stationary objects, and onboard cameras are highly susceptible to environmental interference.
These incidents point to a recurring issue: current sensor systems often struggle to detect complex or stationary objects, revealing critical limitations in today’s autonomous driving technology.
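To make the AITO case concrete, here is a minimal, hypothetical sketch in Python of how an AEB activation guard with a rated speed envelope behaves. The constant names, function, and thresholds are illustrative assumptions for this article, not any automaker's actual implementation; real AEB stacks involve far more signals and logic.

```python
# Hypothetical AEB activation guard (illustrative only; not vendor code).
# Assumption: the system is rated to operate between 4 and 85 km/h,
# matching the range reported in the AITO M7 Plus case above.

AEB_MIN_SPEED_KMH = 4.0
AEB_MAX_SPEED_KMH = 85.0

def aeb_should_engage(ego_speed_kmh: float, obstacle_detected: bool) -> bool:
    """Engage emergency braking only if an obstacle is detected AND
    the vehicle's speed lies inside the rated operating envelope."""
    in_envelope = AEB_MIN_SPEED_KMH <= ego_speed_kmh <= AEB_MAX_SPEED_KMH
    return obstacle_detected and in_envelope

# At 115 km/h the guard never fires, no matter what the sensors see:
print(aeb_should_engage(115.0, obstacle_detected=True))  # False
print(aeb_should_engage(80.0, obstacle_detected=True))   # True
```

The sketch shows why a design limit that is invisible to the driver can feel like a failure: outside the rated envelope, the system is working exactly as specified, yet it does nothing.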
So, What Exactly Is 'Intelligent Driving'?
According to the Society of Automotive Engineers (SAE), autonomous driving capabilities are classified on a scale from Level 0 to Level 5:
☆L0: No automation. The human driver performs all driving tasks.
☆L1-L2: Driver assistance and partial automation. The driver must remain engaged at all times.
☆L3: Conditional automation. The car can handle most driving, but the driver must be ready to intervene.
☆L4-L5: High to full automation. The vehicle can operate independently in most or all conditions, with minimal or no human intervention.
Credit: SAE International
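For readers who think in code, the ladder above can be summarized in a short Python sketch. The enum and helper below were written for this article as an illustration; they are assumptions, not SAE's definitions in software or any vendor's API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 automation levels, simplified for illustration."""
    L0 = 0  # no automation: the human does all the driving
    L1 = 1  # driver assistance: steering OR speed control assisted
    L2 = 2  # partial automation: steering AND speed, driver fully engaged
    L3 = 3  # conditional automation: driver must be ready to take over
    L4 = 4  # high automation: no driver needed within a defined domain
    L5 = 5  # full automation: no driver needed anywhere

def driver_must_supervise(level: SAELevel) -> bool:
    # The safety-critical dividing line: at L2 and below, the human
    # is driving at all times, whatever the marketing name says.
    return level <= SAELevel.L2

print(driver_must_supervise(SAELevel.L2))  # True: you are still the driver
print(driver_must_supervise(SAELevel.L4))  # False
```

By this yardstick, an L2/L2+ system like the SU7's sits firmly on the "human must supervise" side of the line.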
The Xiaomi SU7 falls somewhere between L2 and L2+, meaning it’s still a driver-assistance system—not autonomous. Yet, confusion persists.
“It’s not just about what the car can do. It’s about how the user perceives what the car can do,” one blogger noted. “If drivers overestimate the system’s intelligence, accidents become almost inevitable.”
This gap between human expectation and machine capability is a serious concern. In the SU7 crash, investigations suggest the driver may have either misused or overly trusted the system. Was a takeover request issued in time? Was the system responding accurately to its surroundings? These questions remain unanswered.
Is the Real Problem the Tech, or Who's Regulating It?
One thing is clear: smart cars are no longer just a technological issue; they are a public safety concern.
In recent years, China has seen a surge in EVs and smart cars, with tech giants like Huawei, Xiaomi, and Baidu entering the market. Yet regulations have lagged behind. Unlike the aviation industry, which mandates rigorous testing, most intelligent driving systems in China are tested under limited or self-defined criteria, leaving regulators without a unified framework to enforce.
The result is a patchwork of oversight.
While the Ministry of Industry and Information Technology (MIIT) has proposed guidelines, key areas—such as accident liability, black box data transparency, and uniform safety standards—remain unsettled and continue to evolve.
Moreover, companies often use ambiguous language: buzzwords like "City NOA," "intelligent cruise," and "autonomous navigation" are marketed with little clarity, leading to serious consumer misunderstandings.
After the Crash: Confidence, Confusion, and Backlash
The Xiaomi SU7 crash has left many wondering: can we really trust smart driving anymore? In interviews, owners express a blend of pride, concern, and confusion.
“I was proud to own the SU7,” says a tech enthusiast on Weibo. “Now I’m just cautious. I’ve stopped using Mi Pilot altogether.”
Others are less concerned. “Technology always needs time to mature,” an XPeng owner on the AutoHome Forum said. “Blaming the system without clear evidence is unfair.”
Online, the sentiment is more volatile. On Chinese social media platforms like Weibo and Xiaohongshu, hashtags such as "Promotion of Intelligent Driving Can No Longer Be Ambiguous" and "Intelligent Driving and Autonomous Driving Cannot Be Equated" have gone viral. Posts range from memes mocking "AI drivers" to serious calls for a pause on autonomous driving technology deployment.
A survey by AutoHome found that over 60% of respondents now feel "less confident" about autonomous features following recent accidents. Nearly 40% said they were unaware of the difference between "assisted driving" and "autonomous driving" prior to the Xiaomi crash.
Still the Future? A Crossroads for Intelligent Driving
Many market insiders believe the SU7 incident may put pressure on Xiaomi's automotive business and stock price, but how deep the impact will go remains to be seen. Safety concerns continue to challenge fast-growing EV startups as they go head-to-head with century-old automakers. Such accidents draw plenty of public attention, yet the market has a short memory and tends to move on quickly.
Despite these setbacks, the industry remains committed to the autonomous future.
From Tesla's Dojo supercomputer to Huawei’s MDC platform, R&D into full-stack autonomous systems is accelerating. The core belief driving this race: smarter AI and better sensors will eventually overcome current limitations.
But optimism alone isn’t enough—it needs to be backed by meaningful regulation.
“I don’t doubt that full self-driving will be safer than human drivers one day,” says Dr. Chen Ming, an AI ethics expert at Peking University. “But until then, companies must stop pretending we’re already there.”
Industry leaders are now facing growing calls to adopt voluntary safety standards, share incident data, and participate in government-led testing zones. At the same time, regulators need to tighten rules and clearly define who’s responsible when things go wrong.
Above all, consumers deserve transparency. They should know exactly what their car can—and can’t—do. Companies can’t keep hiding behind vague language and legal fine print.
As for Xiaomi, the SU7’s fate may depend less on its next software update and more on whether it can rebuild public trust through openness and accountability.
Smarter Roads Ahead?
Across the industry, there is a broad consensus: the "first chapter" of the new automotive revolution is focused on electrification, while the "second" will center on intelligence. As a key battleground in the global automotive industry, and in the broader tech competition, the race for "intelligent driving" is becoming increasingly intense.
Autonomous driving is the ultimate goal of the entire automotive industry. Its widespread adoption isn’t just about convenience—freeing drivers from the wheel and pedals—it’s also about safety. Machines don’t get drunk, drowsy, or distracted. Elon Musk has repeatedly used data to argue that, despite occasional accidents, driving with Full Self-Driving (FSD) actually reduces overall risk.
Humans have a natural aversion to repetitive, mechanical tasks, and commercial forces will overcome many obstacles to push forward automotive driver assistance technologies, ultimately achieving fully autonomous driving. A few dozen—or even a few hundred—accidents won’t slow that momentum. But the real challenge lies in how to move forward while minimizing the cost—especially when that cost is measured in lives and public trust.