For years, carmakers have been promising a future where drivers can finally stop driving. We’ve seen “hands-free,” “semi-autonomous,” and “driver-assist” systems come and go—each one bringing us a few inches closer to that sci-fi dream of a car smart enough to handle the road without our help. Now, General Motors says it’s ready to take the next leap. Its new “eyes-off” system, powered by artificial intelligence, claims to let you do something no other production vehicle legally allows: take your hands and your eyes off the road while AI does the rest.
It’s an audacious move. While Tesla continues to push “Full Self-Driving” as a beta product and Mercedes experiments with Level 3 autonomy in small markets, GM wants to be the first American automaker to make “eyes-off” autonomy mainstream. This new system—an evolution of the company’s Super Cruise and Ultra Cruise platforms—blends computer vision, lidar-based mapping, and generative AI to anticipate not just traffic, but the subtle patterns of human driving behavior.
In other words, the car doesn’t just see what’s ahead. It’s learning how you drive, so it can mimic you when you’re too busy doomscrolling to notice the light just turned green.
The AI Behind the Wheel
At its core, GM’s “eyes-off” system operates at what the SAE calls Level 3 autonomy: the car handles all driving tasks within approved conditions, but the human must be ready to take control when prompted. The difference here is in the intelligence layer. GM’s system uses onboard neural processors to interpret sensor data in real time, while connecting to the cloud for predictive modeling. The company says this allows the car to “understand intent”—an AI buzzphrase meaning it can predict whether a pedestrian will cross, or whether a vehicle ahead is about to change lanes, based on patterns it has learned.
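To make “understanding intent” concrete, here is a deliberately toy sketch of one such prediction—flagging a likely lane change from a neighboring vehicle’s lateral drift. Every threshold and parameter name here is invented for illustration; a production system would use learned models over rich sensor data, not a hand-written rule like this.

```python
# Toy "intent" heuristic: is an observed vehicle likely starting a lane change?
# All thresholds and field names are hypothetical illustrations.

def likely_lane_change(lateral_offset_m: float, lateral_velocity_mps: float) -> bool:
    """
    lateral_offset_m: distance from lane center (positive = toward left marking)
    lateral_velocity_mps: lateral speed (positive = moving left)
    Returns True if the vehicle is both noticeably off-center and still
    drifting in that same direction -- a crude proxy for lane-change intent.
    """
    drifting_same_direction = lateral_offset_m * lateral_velocity_mps > 0
    return (drifting_same_direction
            and abs(lateral_offset_m) > 0.5      # more than 0.5 m off-center
            and abs(lateral_velocity_mps) > 0.3)  # still moving that way
```

The point of the real system is that it replaces brittle rules like this with patterns learned from fleet data.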
GM has been training this model for years using data from its existing fleet of Super Cruise vehicles, which have collectively logged millions of semi-autonomous miles across North America. By analyzing those trips, the system’s AI has learned to interpret thousands of micro-movements that human drivers make instinctively: the gentle swerve to avoid potholes, the tap of the brake before a curve, or the quick glance toward a merge lane. It’s these behaviors, GM argues, that separate real-world driving from mere automation.
The result, according to GM engineers who demonstrated the system to select media outlets this month, is a ride that feels surprisingly natural. The AI doesn’t just follow the lane; it flows with traffic. When merging, it gauges not only space but social cues—how fast the car behind might accelerate, or how much room the next driver is willing to give. “It’s the closest we’ve come to giving a machine human driving intuition,” one engineer said.
From Super Cruise to Super Chill
So what does “eyes-off” actually mean in practice? It’s not carte blanche to nap behind the wheel or binge a Netflix show just yet. The system is geofenced to specific highways and urban corridors that GM has mapped in high detail—down to lane curvature and guardrail placement. On those roads, a driver can take their hands off the wheel, their eyes off the road, and let the car take over. Cameras track the driver’s face to ensure they’re still conscious and able to respond to alerts if needed.
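The geofencing described above amounts to a simple gate: the feature is available only when the vehicle’s position falls inside a pre-mapped corridor. A minimal sketch of that check, with an invented corridor (GM’s actual map data and software interfaces are not public):

```python
# Hypothetical geofence check for "eyes-off" eligibility.
# Corridor name and coordinates are invented for illustration.

def point_in_polygon(lat: float, lon: float, polygon) -> bool:
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does the ray from the point cross this polygon edge?
        if (lon1 > lon) != (lon2 > lon):
            cross_lat = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < cross_lat:
                inside = not inside
    return inside

# Example corridor: a rough rectangle around one mapped highway segment
APPROVED_CORRIDORS = {
    "example highway segment": [
        (42.30, -83.10), (42.30, -83.00),
        (42.40, -83.00), (42.40, -83.10),
    ],
}

def eyes_off_allowed(lat: float, lon: float) -> bool:
    """True if the vehicle's position lies inside any approved corridor."""
    return any(point_in_polygon(lat, lon, poly)
               for poly in APPROVED_CORRIDORS.values())
```

In practice the gate would also consider weather, sensor health, and the driver-monitoring camera, but position is the hard boundary.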
When activated, the system shifts into a kind of automotive meditation mode. The steering wheel subtly retracts, the seat adjusts for comfort, and the central display transforms into a multitasking hub. You can check your messages, join a video call, or, as GM’s demo playfully suggested, “scroll endlessly through your social feeds.” Hence the tongue-in-cheek nickname among testers: the doomscrolling drive mode.
GM insists it’s safe. In a closed-course test, the car handled lane changes, merges, and stop-and-go traffic with eerie confidence. Even when the AI faced unusual scenarios—a cyclist weaving along the shoulder or a broken-down vehicle—the system smoothly recalibrated its path. When it needed human input, it provided clear, escalating alerts: first a visual cue, then a gentle steering vibration, and finally an audible warning.
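The escalating-alert sequence described above can be sketched as a simple staged lookup: visual cue first, then steering vibration, then an audible warning if the driver still hasn’t responded. The stage names come from the demo; the timing thresholds below are invented for illustration.

```python
# Sketch of the escalating takeover-alert sequence: visual cue, then
# steering vibration, then audible warning. Timings are hypothetical.

ALERT_STAGES = [
    (0.0, "visual cue"),          # shown as soon as input is needed
    (4.0, "steering vibration"),  # if the driver has not yet responded
    (8.0, "audible warning"),     # final escalation before fallback
]

def alert_stage(seconds_since_request: float, driver_responded: bool):
    """Return the currently active alert, or None once the driver takes over."""
    if driver_responded:
        return None
    active = None
    for threshold, stage in ALERT_STAGES:
        if seconds_since_request >= threshold:
            active = stage  # keep the highest stage whose threshold has passed
    return active
```

A real Level 3 system would pair this with a fallback maneuver (such as slowing to a stop in lane) if the final warning goes unanswered.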
The Road to Trust
Of course, all this raises the perennial question: do we trust cars to think for us? Public confidence in self-driving technology remains shaky after years of overpromises and high-profile crashes. Tesla’s “Full Self-Driving” misnomer hasn’t helped, and regulators have grown wary of companies using real roads as AI training labs.
GM is taking a more conservative route. The company says every “eyes-off” activation is geo-limited, data-logged, and backed by redundant safety systems that can intervene instantly if sensors or AI models misbehave. The AI, it adds, doesn’t learn on the fly—it improves only through validated updates pushed by GM engineers.
That’s a key distinction. Where Tesla’s Autopilot is an evolving experiment, GM’s approach feels more like an appliance: carefully controlled, cautiously deployed, and wrapped in layers of corporate liability insurance. Whether that’s comforting or disappointing depends on what kind of futurist you are.
The Bigger Picture
The real story here isn’t just about GM’s tech; it’s about how AI is quietly rewriting what “driving” means. The car is no longer just a machine for movement—it’s becoming a digital space, an extension of your online life. When the vehicle is capable of piloting itself, your attention becomes the product again. You’re free to work, text, stream, or, yes, doomscroll.
That blurring of digital and physical worlds is what excites automakers and advertisers alike. If your car becomes another screen, the battle for your eyeballs just moved onto the highway.
GM, for its part, seems aware of the irony. The company insists that its ultimate goal is safety—reducing fatigue, distraction, and human error. But as cars grow smarter and drivers grow less engaged, the philosophical question remains: at what point do we stop being “drivers” altogether?
Eyes Off, Mind On
The “eyes-off” system is slated to roll out in select Cadillac and Chevrolet models by late 2026, pending regulatory approval. Pricing hasn’t been announced, but GM says it will be positioned as a premium feature alongside its existing driver-assist options.
In the meantime, the company is framing the technology as a bridge—between today’s semi-autonomy and the full self-driving cars of tomorrow. Whether that bridge leads to freedom or complacency will depend on how responsibly both the AI and the humans behind it behave.
Because even if you can take your eyes off the road, it doesn’t mean you should stop paying attention.
