Anyone who visited a CES tech event and didn’t feel overwhelmed by the sheer volume and variety of gadgets and gizmos on display would probably feel somewhat short-changed. At the 2026 edition of the show, there was no danger of this happening.
Indeed, anyone visiting CES this year, at its traditional and rather fitting home of Las Vegas, certainly had plenty to get to grips with, as audiovisual and consumer electronics giants such as Sony, Samsung, Panasonic, TCL and Hisense were all there in force, as ever.
Visitors would also have been very aware of a couple of key themes: one being the continued importance of connected and software-defined vehicles (SDVs); the other being the march – in some cases quite literally – of robotics in industrial and lifestyle settings, and the use of artificial intelligence (AI) in these use cases, an application set known as physical AI.
Moreover, the show demonstrated the clear synergies between robotics and automotive use cases. Indeed, one learned observer went so far as to describe a connected vehicle as a “special kind of robot” in a world where screens are the new horsepower.
Yet while the advances in technology regarding connected vehicles and robotics at CES could be justifiably described as “revolutionary”, this revolution is one very much foretold, especially as regards SDVs. At CES 2024, in what was probably the first true post-Covid show of its type, it was hugely apparent that the main attractions had four wheels.
This point was made very strongly in one of the keynote addresses by CES stalwart Liz Claman of the Fox Business Network, who noted – almost a decade after former Ford CEO Alan Mulally became the first auto executive of his status to be invited to speak at the show – that CES is now effectively a tech version “on steroids” of the Detroit Auto Show – the flagship event for the automotive industry as a whole.
The key driver of this transition in 2024, so to speak, was car manufacturers’ rapid increase in the electrification of their product lines, leading to a sharp increase in in-car electronics such as digital cockpits, advanced driver assistance systems (ADAS) and in-vehicle infotainment (IVI) systems. In an interview with Claman this year, Qualcomm CEO Cristiano Amon agreed, suggesting that vehicles are no less than a new space for computing, and that the overarching strategy for the company would be to tap into the growing world of AI in local devices.
From prototypes to production
While the 2024 edition of CES did see leading automotive manufacturers show off their progress as regards connected vehicles – with BMW, Mercedes, Jeep, John Deere and Hyundai at the front of the grid – the event was mainly centred on prototypes and potential. The 2026 show is when things truly got on the road.
While two years ago Qualcomm offered a view of what could be, in 2026 it revealed a number of real deployments. The company has teamed with Chinese startup Leapmotor for what it calls the world’s first cross-domain integrated service powered by its Snapdragon Cockpit Elite and Snapdragon Ride Elite automotive platforms. It has also expanded its technology relationship with Google to provide what the firms call a leading foundation for transforming the automotive industry, and announced a collaboration with manufacturing group ZF to provide a scalable ADAS that combines advanced AI compute and perception capabilities.
Leapmotor’s D19 will become the first mass-production vehicle powered by a dual Snapdragon Elite automotive platform, part of the manufacturer’s push towards a more centralised vehicle architecture, making cars easier and more efficient to build, as well as delivering more responsive features for drivers and passengers.
The manufacturer believes the dual‑chipset architecture can deliver the required compute performance to streamline vehicle electronics, reduce system complexity and enable more advanced AI capabilities across the entire vehicle. It has been deployed to unify systems, including Wi-Fi 6 and 5G mobile communications.
The central domain controller runs an intelligent cockpit, driver assistance, body controls (including lighting, climate, doors and windows), the vehicle gateway, and also provides the compute headroom needed for real‑time coordination and advanced AI, including emerging agentic AI workloads. It also enables over-the-air updates, remote diagnostics and remote vehicle control, with its service‑oriented architecture offering more than 200 modular capabilities for flexible, user‑defined experiences.
Driver assistance is delivered through up to 13 cameras and multiple sensors using LiDAR, millimetre-wave radar, ultrasonic sensors and a high-precision inertial measurement unit (IMU), to deliver reliable L2 driver assistance. Other supported features include assisted parking, with the controller helping vehicles handle complex daily and urban driving scenarios.
In-vehicle connectivity supports voice calling, emergency services, Bluetooth, Wi‑Fi and precise location services such as a global navigation satellite system (GNSS). The D19 can also support up to eight displays, including multiple 3K and 4K screens, and up to 18‑channel audio for immersive in‑car entertainment. A true example of screens being the new horsepower.
It was a point taken up by Qualcomm Europe’s senior director of product marketing, Thomas Dannemann. “Are screens the new horsepower? I totally agree. A car with a display from the very left to the very right [creates] a luxury vehicle. Big display sets [are in demand]. I’ve seen the new Mercedes with the display that looks awesome. For me, this says ‘this is really a luxury car’. So definitely the number of displays sells in the future.”
Yet Dannemann also noted that there was a vehicle technology that could supersede even the most advanced screen consoles: voice activation. “Basic functions need to have buttons, but as soon as it comes to complex functions, such as turning on the Wi-Fi inside the car, why should I have to find, somewhere in sub-menus, the option to turn it on? What if I just tell the car, ‘Turn on the Wi-Fi?’ You can really cut out a lot of pain,” he said.
Qualcomm showed at CES how the technology supporting such voice recognition has changed with the introduction of large language models (LLMs). These can process natural language well enough that drivers no longer need to know the precise wording of a command, a capability that has only recently become possible. Dannemann noted that such technology is evolving rapidly, and said he expected the car industry to adopt these features in the near future, easing the way drivers – and passengers – interact with vehicles.
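The gap an LLM closes can be seen in a minimal sketch of the older approach: a rigid, exact-match command table of the kind in-car voice systems used before natural-language understanding. All command names and phrases below are hypothetical illustrations, not taken from any real in-car system.

```python
# Illustrative only: a rigid, exact-match voice-command table. An LLM-based
# interface removes the need for this, because it maps intent, not keywords.
COMMANDS = {
    "turn on the wi-fi": "wifi_on",
    "turn off the wi-fi": "wifi_off",
}

def rigid_parse(utterance: str):
    """Return a command only if the driver uses the exact expected phrase."""
    return COMMANDS.get(utterance.strip().lower())

# The exact phrase works...
print(rigid_parse("Turn on the Wi-Fi"))                       # wifi_on
# ...but a natural paraphrase fails, which is precisely the pain point
# Dannemann describes an LLM-based interface removing.
print(rigid_parse("Can you get the internet working in here?"))  # None
```

The design point is that every paraphrase a driver might utter would need its own table entry, which does not scale; a model that resolves intent from free-form speech sidesteps the problem entirely.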
A personalised ride
Bringing things back to the present day, Qualcomm also took advantage of CES to get its technology onto the roadways of Las Vegas in the form of a Lincoln Aviator. The three-row SUV seats up to seven people and has a three-litre V6 engine producing 400 horsepower. Advanced technology in the car is offered through the Lincoln Digital Experience system, supported by an embedded Snapdragon platform.
When in drive mode, the car’s driver monitoring system kicks in and the ADAS view becomes active. In the test drive, the system monitored the driver’s face to make sure he was looking forward, and was set up to display a visual notification if he became distracted.
The driver’s scene can be highly personalised. The Qualcomm Ride Assist stack, running on a single system-on-chip (SoC), can scan and display ride information, including other vehicles and objects such as stop signs and traffic lights, pedestrians, cyclists, motorbikes, boundaries, lanes and speeds.
Safety technology includes the ability to keep the vehicle centred and a safe distance from the car in front, with pre-collision assist offering automatic forward collision warning, dynamic brake support and pedestrian detection. There are also 360-degree cameras to provide a clear view of what’s around the vehicle, with specific blind spot information.
Using Google technology, drivers can use a mapping system featuring globally sourced, real-time data and voice commands through Google Assistant. A 5G embedded modem with Wi-Fi hotspot within the Snapdragon platform enables internet access and the onboard GNSS.
In the test drive, Qualcomm stressed that the management system was acting more like a co-pilot than a fully autonomous driver. Certain key features, such as the Google interaction, were performed in the cloud, while other tasks were performed locally, particularly those that are latency-dependent and rely on speed of decision-making. The need for local AI processing is simply down to the reality that connectivity with a car cannot be guaranteed at all times. One use case cited in this scenario is repair information, which has to be available anywhere, as it is just typical bad luck that breakdowns tend to happen in the middle of nowhere.
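The co-pilot split described above can be sketched as a simple routing rule. This is our own illustration of the general principle, not Qualcomm's actual architecture: latency-critical work stays on the vehicle, and anything else falls back to local processing whenever connectivity drops.

```python
# Illustrative sketch (not Qualcomm's implementation): decide where a task
# runs, mirroring the cloud/local split described in the test drive.
def route_task(latency_critical: bool, connected: bool) -> str:
    # Safety- and latency-critical work (e.g. ADAS perception) must run on
    # the vehicle; and connectivity can never be guaranteed on the road.
    if latency_critical or not connected:
        return "local"
    # Everything else (e.g. cloud mapping, assistant queries) can go out.
    return "cloud"

print(route_task(latency_critical=True, connected=True))    # local
print(route_task(latency_critical=False, connected=False))  # local (e.g. repair info at a breakdown)
print(route_task(latency_critical=False, connected=True))   # cloud
```

The second case captures the repair-information example: the data must be resident in the vehicle precisely because breakdowns happen where there is no signal.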
Another of the company’s key technology launches at CES was the A10 5G Modem-RF, its first 5G RedCap modem for the automotive industry. Designed to accelerate the industry’s migration to 5G, it is said to combine a lower complexity design with capabilities based on the 3GPP Release 17 wireless standard, allowing automakers to launch advanced connectivity features across more vehicle tiers.
At CES, Enrico Salvatori, senior vice-president and president of Qualcomm Europe, revealed the technology building blocks, spanning CPU, GPU and NPU architectures, being deployed across a number of sectors. He highlighted the importance of the digital cockpit, where he feels the impact of AI functionality at the edge will be most significant, because the technology will take user experiences to another level.
“When the driver and passengers are in the car, the AI can help with recognition, and use that to select user interfaces, etc. It can also help with entertainment in the car, [support] safety, help passengers and the driver understand what’s going on outside the car, and take action to protect the drivers and make them more comfortable. So far, GenAI answers our questions in the car: ‘I see this light blinking in the cockpit. Why? What’s going on?’ The next step is for the AI to ask us questions: ‘How do you feel? Where do you want to go? Do you want to go have some food?’ That interaction will be more two-way, enabling many more applications, services and scenarios.”
A real example of the use of advanced AI in automotive was offered in the form of the Mercedes‑Benz GLC 400 4MATIC with EQ Technology car line, which is credited with advancing SDVs and setting new benchmarks for intelligent, connected and user-centric mobility. Digital cockpit and connectivity experiences are powered by Snapdragon Digital Chassis solutions, covering computer vision, the perception stack and the drive policy stack. The AI experience is based on fleet data taken from cars on the road to improve accuracy and performance.
Going forward, as the manufacturer takes more from AI, Salvatori revealed that the company would be introducing vision-language-action (VLA) models. These will rely more on multimodal AI, expanding training capability by collecting more in-drive data. Mercedes uses a platform that supports both IVI and ADAS functionality.
Regarding the move towards on-device AI and processing data at the edge rather than in the cloud, it was recognised as imperative that applications such as ADAS perform processing with minimum latency. While inference will stay at the edge, model training will remain in the cloud, due to what Salvatori called the need to pass “a magic threshold”: processing the billions of parameters in AI models within the trillions of operations per second (TOPS) that edge or cloud hardware can provide.
“At the moment, most of the models are so complex that only a datacentre in the cloud can manage them. But the industry is improving the performance of hardware at the edge. Model complexity is being reduced [through] optimised software. The magic threshold at the moment in terms of processing is 10, 12 billion parameters per model [so that] we can manage with good accuracy on the edge. The big models [require] 50 billion, [which] we cannot manage at the edge. But the good news is that the quality and the performance of the new-generation models in the 10, 12 billion parameters range is good enough for acceptable performance, so that we can run all the inference at the edge.”
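Some back-of-the-envelope arithmetic (ours, not Salvatori's) shows why the threshold sits where it does: even before any computation, a model's weights must fit in the edge device's memory, and the footprint scales directly with parameter count and numeric precision.

```python
# Rough illustration of why ~10-12 billion parameters is the edge threshold:
# weight storage alone scales with parameter count times bytes per parameter.
def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 12B-parameter model quantised to 8-bit weights needs roughly 12 GB,
# conceivable for high-end automotive silicon...
print(weights_gb(12, 1))  # 12.0
# ...while a 50B model at 16-bit precision needs around 100 GB, which in
# practice only datacentre hardware provides comfortably.
print(weights_gb(50, 2))  # 100.0
```

This also explains why shrinking models through quantisation and optimised software, as Salvatori describes, moves the threshold rather than removing it: halving bytes per parameter roughly doubles the model size that fits at the edge.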
These processing numbers, he emphasised, are only going up, which will mean more essential applications can be processed at the edge in use cases such as parking, vehicle recognition, video surveillance, interacting with a driver and giving advice in real time. The smart car is getting smarter.
In part two of our look at the world of CES 2026, we find out how far AI and connected technologies are spreading to not just automotive use cases, but to the internet of things in general, including robotics.