OpenSSL Conference

Session

10-09
11:00
40min
Model-Side Confidential Inference: Leveraging OpenSSL for End-to-End Encrypted AI Inference Pipelines
Tarique Aman Aziz, Navinya

With the rise of large language models (LLMs) and inference platforms, privacy concerns have intensified around how user prompts and contextual data are handled. TLS secures the transport channel, but inference data still passes in plaintext through multiple layers of model-serving infrastructure before it reaches the model, including logging layers, proxies, and orchestration frameworks.

In this talk, we propose and demonstrate a practical framework that extends OpenSSL beyond transport encryption to enable model-side confidential inference. Input data is encrypted at the client using OpenSSL’s AES encryption suite and transmitted over standard protocols. Decryption occurs only within the model process, inside its trusted memory space. We integrate this with the Model Context Protocol (MCP), a lightweight protocol increasingly used to orchestrate model input/output streams in modern LLM inference engines.

The result is a secure, auditable, and privacy-preserving pipeline in which prompts remain encrypted until the moment the model begins inference. We’ll showcase working code examples using OpenSSL, compare performance trade-offs, and explore integration patterns for LLM stacks such as llama.cpp and vLLM.

This design opens new frontiers for OpenSSL, enabling cryptographic protection not just in web services, but inside the AI inference layer itself.

Technical Deep Dive & Innovation
Krakow / Business Value & Enterprise Adoption