Traditional cybersecurity falls short against prompt injection, tool misuse, and cross-domain actions, requiring a risk-based edge AI security architecture that preserves performance and trust.
As in-vehicle AI systems evolve from passive voice interfaces to multimodal, agentic applications, the security model of the smart cockpit must be re-evaluated. Unlike earlier infotainment systems, these AI architectures reason, plan, invoke tools, and increasingly operate across vehicle domains. This expanded capability fundamentally changes the system’s risk profile, introducing threats that traditional automotive cybersecurity approaches were not designed to address. Prompt injection and jailbreak techniques manipulate a model’s tool-use decisions rather than exploit code vulnerabilities, enabling attackers to hijack AI models and agents. Meanwhile, the erosion of domain-isolation assumptions increases the potential impact of compromised AI behavior across vehicle services and user data.
This presentation will explore emerging attack paths, including tool misuse, agent context manipulation, and unintended cross-domain actions, and explain why model-level safeguards alone are inadequate. It will examine why cloud-centric AI guardrail models are insufficient for in-vehicle deployments, where security cannot come at the cost of real-time performance or interaction quality. Finally, it will outline a forward-looking, risk-based security architecture for edge AI systems: an approach that preserves performance while enforcing trust boundaries, validating tool outputs, protecting memory integrity, and inspecting AI inputs and outputs at the point where decisions are made.
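To make the architectural ideas concrete, the enforcement points named above (trust boundaries, tool-output validation, and input inspection) can be sketched as a thin guard layer sitting between the agent and the vehicle's tool interfaces. This is a minimal illustrative sketch, not a proposed implementation: the domain names, tool names, injection markers, and result schema are all assumptions invented for the example.

```python
# Illustrative sketch of an edge-side guard layer for an in-vehicle AI agent.
# All domains, tools, markers, and schemas below are hypothetical examples.
from dataclasses import dataclass, field

# Trust boundary: each cockpit domain may invoke only an allowlisted set of tools,
# so a hijacked infotainment agent cannot reach into other vehicle domains.
TOOL_ALLOWLIST = {
    "infotainment": {"play_media", "set_volume"},
    "navigation": {"set_destination", "get_route"},
}

# Coarse input inspection: screen obvious injection markers before the model
# sees the text (a real system would use far richer detection than substrings).
INJECTION_MARKERS = ("ignore previous instructions", "system prompt", "developer mode")


@dataclass
class ToolCall:
    """A tool invocation proposed by the agent, attributed to a vehicle domain."""
    domain: str
    tool: str
    args: dict = field(default_factory=dict)


def inspect_input(user_text: str) -> bool:
    """Return True if the input passes the (illustrative) injection screen."""
    lowered = user_text.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)


def authorize(call: ToolCall) -> bool:
    """Enforce the domain trust boundary on a proposed tool invocation."""
    return call.tool in TOOL_ALLOWLIST.get(call.domain, set())


def validate_output(result: dict) -> bool:
    """Check a tool result against a minimal schema before it re-enters the agent's context."""
    return isinstance(result, dict) and result.get("status") in {"ok", "error"}
```

Because each check runs on the edge device alongside the agent, inspection happens where decisions are made, without a round trip to a cloud guardrail service.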