This session presents a software-defined vehicle architecture that demonstrates how vehicle data can be transformed into actionable intelligence using AI, with relevance for connected commercial and fleet vehicles as well as adjacent industries. We present a fully vertically integrated in-vehicle AI demonstrator, spanning from hardware signals to the infotainment layer, shown from both an OEM and an aftermarket perspective.
Based on AOSP Android Automotive, the architecture uses the Vehicle HAL (VHAL) as a standardized interface for in-vehicle signal access and control. Vehicle signals are transported via CAN-over-Ethernet using an Open1722 proxy, enabling scalable vehicle data access.
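To make the signal path concrete, the following minimal sketch shows a VHAL-style property store: the transport layer pushes a decoded CAN signal into the store, and a subscribed consumer is notified. The property ID, integer value type, and class names here are simplified placeholders, not the actual AOSP VHAL or Open1722 interfaces.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntConsumer;

// Minimal sketch of a VHAL-style property store. The property ID and the
// value type below are illustrative placeholders, not real VHAL definitions.
public class VhalSketch {
    static final int PROP_VEHICLE_SPEED = 0x1001; // hypothetical property ID

    private final Map<Integer, Integer> values = new HashMap<>();
    private final Map<Integer, IntConsumer> listeners = new HashMap<>();

    // A consumer (e.g., an app in the infotainment layer) subscribes to a property.
    void subscribe(int propId, IntConsumer onChange) {
        listeners.put(propId, onChange);
    }

    // Called by the transport layer when a decoded CAN signal arrives.
    void set(int propId, int value) {
        values.put(propId, value);
        IntConsumer l = listeners.get(propId);
        if (l != null) l.accept(value);
    }

    public static void main(String[] args) {
        VhalSketch vhal = new VhalSketch();
        vhal.subscribe(VhalSketch.PROP_VEHICLE_SPEED,
                v -> System.out.println("speed=" + v));
        vhal.set(VhalSketch.PROP_VEHICLE_SPEED, 42); // prints "speed=42"
    }
}
```

The design mirrors the abstract's separation of concerns: the transport (CAN-over-Ethernet via the Open1722 proxy) only needs to call `set`, while consumers see a uniform property interface regardless of the underlying bus.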
The system integrates live camera streams from both OEM and aftermarket perspectives, illustrating how AI-enabled vehicle data can be exposed beyond OEM boundaries. Video data is processed by AI models for object detection, face recognition, and gaze estimation, turning raw sensor data into meaningful semantic information.
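The "raw sensor data into semantic information" step can be sketched as a post-processing stage: the model's raw detections are filtered into a list of semantic labels the rest of the system can act on. The `Detection` type, the example labels, and the confidence threshold are hypothetical illustrations, not taken from the demonstrator's code.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of turning raw model output into semantic information:
// low-confidence detections are dropped, labels are kept.
public class SemanticFilter {
    record Detection(String label, double confidence) {}

    static List<String> semanticLabels(List<Detection> raw, double minConf) {
        return raw.stream()
                .filter(d -> d.confidence() >= minConf)
                .map(Detection::label)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Detection> raw = List.of(
                new Detection("pedestrian", 0.91),
                new Detection("driver_gaze_on_road", 0.87),
                new Detection("bicycle", 0.40)); // below threshold, dropped
        System.out.println(semanticLabels(raw, 0.5));
        // prints [pedestrian, driver_gaze_on_road]
    }
}
```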
A newly introduced AI HAL enables structured configuration and orchestration of AI workloads and supports adaptive AI behavior by combining vehicle context (e.g., vehicle start, door events) with AI model execution. This highlights the importance of vehicle data as a core asset for adaptive AI.
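The context-driven behavior described above can be sketched as an event-to-workload mapping: vehicle events select which AI workloads run. The event names, workload names, and orchestrator interface here are hypothetical, since the abstract does not specify the AI HAL's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an AI-HAL-style orchestrator: vehicle context events
// (door open, vehicle start) trigger AI workloads. All names are illustrative.
public class AiHalOrchestrator {
    enum VehicleEvent { DOOR_OPEN, VEHICLE_START }

    private final List<String> running = new ArrayList<>();

    void onVehicleEvent(VehicleEvent e) {
        switch (e) {
            case DOOR_OPEN -> start("face_recognition");    // identify the driver
            case VEHICLE_START -> start("gaze_estimation"); // monitor attention
        }
    }

    private void start(String workload) {
        if (!running.contains(workload)) running.add(workload);
    }

    List<String> runningWorkloads() { return running; }

    public static void main(String[] args) {
        AiHalOrchestrator hal = new AiHalOrchestrator();
        hal.onVehicleEvent(VehicleEvent.DOOR_OPEN);
        hal.onVehicleEvent(VehicleEvent.VEHICLE_START);
        System.out.println(hal.runningWorkloads());
        // prints [face_recognition, gaze_estimation]
    }
}
```

The point of the sketch is the coupling the abstract describes: vehicle data (here, the events) is the input that adapts AI behavior, rather than models running unconditionally.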