2025-10-09 – Krakow / Business Value & Enterprise Adoption
With the rise of large language models (LLMs) and inference platforms, privacy concerns have intensified regarding how user prompts and contextual data are handled. Although TLS provides secure communication channels, plaintext inference data is still exposed to multiple layers of model-serving infrastructure before it reaches the model, including logging layers, proxies, and orchestration frameworks.
In this talk, we propose and demonstrate a practical framework that extends OpenSSL beyond transport encryption to enable model-side confidential inference. Input data is encrypted at the client using OpenSSL’s AES encryption suite and transmitted through standard protocols. Decryption occurs only within the model process, inside trusted memory space. We integrate this with the Model Context Protocol (MCP), a lightweight protocol increasingly used for orchestrating model input/output streams in modern LLM inference engines.
The result is a secure, auditable, and privacy-preserving pipeline where prompts remain encrypted until the moment the model begins inference. We'll showcase working code examples using OpenSSL, compare performance trade-offs, and explore integration patterns for LLM stacks such as Llama.cpp and vLLM.
This design opens new frontiers for OpenSSL, enabling cryptographic protection not just in web services, but inside the AI inference layer itself.
What if your AI model could decrypt your data only at the very last moment, inside its runtime memory, and never expose it anywhere else?
In this talk, we explore how OpenSSL can power confidential inference pipelines, where user prompts are encrypted on the client and decrypted only by the model itself, just before inference. We'll demonstrate a working proof of concept that utilizes OpenSSL’s AES encryption in a model-serving setup and show how this approach integrates with modern AI serving protocols, such as MCP.
If you're interested in cryptography, AI, privacy, or practical security architecture, this session will open your eyes to how OpenSSL can secure the next frontier: AI inference itself.
Tarique Aman Aziz is a Software Engineering Manager at Red Hat, currently working in the Data and AI team with a strong emphasis on Applied AI and secure inference systems. He brings deep expertise in Model Context Protocol (MCP), Multi-Agent orchestration, and the architectural foundations of modern AI applications.
Having formerly led Red Hat’s Innovation Office, Tarique has a proven track record in designing scalable, production-grade systems using technologies like Quarkus, Apache Camel, and Kogito.
His current focus blends applied research and real-world engineering, particularly around AI, model orchestration, and privacy-preserving inference using open tools like OpenSSL.