Artificial Intelligence: Full Day or Half Day Workshop

Cryptographic Security of AI APIs and Inference Infrastructure

AI inference APIs handle proprietary prompts, model outputs, and authentication tokens over TLS connections whose key exchange a cryptographically relevant quantum computer running Shor's algorithm could break. This workshop maps the cryptographic surface of AI-as-a-service architectures and builds a concrete PQC migration plan.

Full day (6 hours) or half day
In person or online
Max 30 delegates

Proud to recommend our expert members

Qrypto Cyber
Eclypses
Arqit
QuantBond
Krown
Applied Quantum
Quantum Bitcoin
Venari Security
QuStream
BHO Legal
Census
QSP
IDQ
Patero
Entopya
Belden
Atlant3D
Zenith Studio
Qudef
Aries Partners
GQI
Upperside Conferences
Austrade
Arrise Innovations
CyberRST
Triarii Research
QSysteme
WizzWang
DeepTech DAO
Xyberteq
Viavi
Entrust
Qsentinel
Nokia
Gopher Security
Quside

Workshop Description

For AI platform security leads and MLOps engineers. Covers quantum cryptographic exposure of AI inference APIs, model serving TLS, API key authentication, OWASP LLM Top 10 intersections, and PQC migration using FIPS 203/204/205 for AI-as-a-service architectures.

Every AI inference call traverses a TLS session. The key exchange protecting that session (X25519 or ECDHE P-256) is vulnerable to a cryptographically relevant quantum computer running Shor's algorithm. For organisations serving AI APIs externally, the harvest-now-decrypt-later threat is immediate: an adversary recording encrypted API traffic today could decrypt proprietary prompts, model outputs, and customer data once quantum capability arrives. Long-lived API keys used in agentic AI systems compound the exposure.

This workshop maps the full cryptographic dependency chain of a typical AI inference pipeline, from API gateway through load balancer to model server and vector database, then builds a phased PQC migration plan using the NIST FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA) standards. We address the specific performance constraints of AI workloads: handshake latency impact on inference SLAs, ciphertext size overhead on high-throughput API endpoints, and hybrid scheme deployment for backward compatibility.
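The harvest-now-decrypt-later calculus can be made concrete with Mosca's inequality: traffic recorded today is at risk if the time the data must remain confidential, plus the time needed to migrate, exceeds the time until a CRQC exists. A minimal sketch, where all horizon figures are illustrative assumptions rather than predictions:

```python
# Mosca's inequality for harvest-now-decrypt-later risk: data recorded today
# is exposed if it must stay confidential past the (assumed) arrival of a
# cryptographically relevant quantum computer (CRQC), accounting for how long
# the migration itself takes.

def hndl_at_risk(shelf_life_years: float,
                 migration_years: float,
                 years_to_crqc: float) -> bool:
    """True if recorded traffic could still be sensitive when a CRQC arrives."""
    return shelf_life_years + migration_years > years_to_crqc

# Example: API traffic carrying prompts that stay sensitive for 10 years,
# a 5-year migration programme, and a hypothetical 12-year CRQC horizon.
print(hndl_at_risk(10, 5, 12))  # True: exposure window opens before migration completes
```

The inequality is deliberately simple; its value in a workshop setting is forcing teams to write down explicit figures for data shelf life and migration lead time.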

What participants will cover

  • TLS key exchange vulnerability in AI inference APIs: X25519 and ECDHE P-256 exposure to Shor's algorithm and harvest-now-decrypt-later risk
  • API authentication cryptography: long-lived API keys, OAuth tokens, and JWT signing in agentic AI systems under quantum threat
  • OWASP LLM Top 10 intersections: where AI-specific vulnerabilities (prompt injection, supply chain, information disclosure) compound quantum cryptographic exposure
  • FIPS 203/204/205 for AI infrastructure: ML-KEM for API gateway TLS, ML-DSA for model artefact signing, SLH-DSA for long-lived verification
  • Performance impact assessment: handshake latency, ciphertext size, and throughput overhead for high-volume inference endpoints
  • Regulatory alignment: EU AI Act Article 15, NIST AI RMF, ENISA and BSI guidance on quantum-safe AI infrastructure

Preliminary Agenda

Full-day session structure with scheduled breaks. Content is configurable to your AI deployment architecture, API infrastructure, and existing security tooling.

Session topics

1. AI Inference Infrastructure and Its Cryptographic Surface: where cryptography sits in modern AI deployment
2. TLS, mTLS, and API Authentication Under Quantum Threat: protocol-level exposure in AI-as-a-service architectures
  • TLS 1.3 key exchange (X25519, ECDHE P-256) and its vulnerability to Shor's algorithm
  • Long-lived API keys and OAuth tokens: harvest-now-decrypt-later risk in agentic AI systems
  • mTLS certificate chains for model serving: ECDSA and RSA signing exposure
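As preparation for the audit exercise later in the agenda, the exposure sketched above can be captured in a simple classification table. The set and function names below are our own illustration; the vulnerability assignments follow the standard analysis (Shor's algorithm breaks ECC and RSA, while symmetric primitives and the FIPS 203/204/205 algorithms are considered quantum-resistant):

```python
# Illustrative classification of algorithms commonly found in AI API stacks.
# Groupings are examples, not an exhaustive inventory.

QUANTUM_VULNERABLE = {
    "X25519", "ECDHE-P256",       # TLS key exchange
    "ECDSA-P256", "RSA-2048",     # certificate and JWT signing
}
QUANTUM_RESISTANT = {
    "ML-KEM-768",                 # FIPS 203 key encapsulation
    "ML-DSA-65",                  # FIPS 204 lattice signatures
    "SLH-DSA-SHA2-128s",          # FIPS 205 hash-based signatures
    "AES-256-GCM", "SHA-384",     # symmetric/hash: Grover-affected only
}

def classify(algorithm: str) -> str:
    """Triage an algorithm name into a migration action."""
    if algorithm in QUANTUM_VULNERABLE:
        return "migrate"
    if algorithm in QUANTUM_RESISTANT:
        return "keep"
    return "investigate"

print(classify("X25519"))      # migrate
print(classify("ML-KEM-768"))  # keep
```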
Break after 50 minutes
3. OWASP LLM Top 10 and Quantum Security Intersections: where AI-specific threats compound quantum exposure
  • LLM01 (Prompt Injection) and LLM06 (Sensitive Information Disclosure): encrypted context windows at risk
  • LLM09 (Supply Chain): model registries, container signing, and pipeline integrity under PQC
  • EU AI Act Article 15 (accuracy/robustness) and NIST AI RMF GM.3 intersection with quantum readiness
4. Interactive Demonstration: AI API Cryptographic Audit (full-day format only)
  • Mapping cryptographic dependencies across an AI inference pipeline (API gateway, load balancer, model server, vector DB)
  • Identifying harvest-now-decrypt-later exposure windows for API traffic containing proprietary prompts and outputs
  • Prioritising migration: which endpoints move to ML-KEM/ML-DSA first and why
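The audit exercise above amounts to an inventory-and-prioritise pass. The component names, fields, and ordering rule below are hypothetical illustrations; the principle is to rank quantum-vulnerable endpoints by harvest-now-decrypt-later exposure, with externally recordable traffic carrying long-lived data first:

```python
# Hypothetical inventory of an AI inference pipeline. Each entry records the
# key exchange in use, whether its traffic crosses a trust boundary (and is
# therefore recordable by an outside adversary), and how long the payloads
# remain sensitive.

PIPELINE = [
    {"component": "api-gateway",   "kex": "X25519",     "external": True,  "data_life_yrs": 10},
    {"component": "load-balancer", "kex": "X25519",     "external": False, "data_life_yrs": 10},
    {"component": "model-server",  "kex": "ECDHE-P256", "external": False, "data_life_yrs": 2},
    {"component": "vector-db",     "kex": "X25519",     "external": False, "data_life_yrs": 10},
]

VULNERABLE_KEX = {"X25519", "ECDHE-P256"}

def migration_order(pipeline):
    """Rank quantum-vulnerable components: external traffic first,
    then longest data lifetime."""
    at_risk = [c for c in pipeline if c["kex"] in VULNERABLE_KEX]
    return sorted(at_risk,
                  key=lambda c: (not c["external"], -c["data_life_yrs"]))

print([c["component"] for c in migration_order(PIPELINE)])
# api-gateway ranks first: external traffic with decade-long data sensitivity
```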
Break after 60 minutes
5. PQC Migration for AI Infrastructure: FIPS 203/204/205 in practice for AI deployments
  • ML-KEM-768 for API gateway TLS: performance impact on inference latency (handshake overhead, ciphertext size)
  • ML-DSA-65 for model artefact signing and container image verification
  • Hybrid schemes (ML-KEM + X25519) for phased migration without breaking existing client integrations
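The size overhead behind these bullets can be estimated from published parameter sizes: an X25519 key share is 32 bytes in each direction, while ML-KEM-768 (FIPS 203) uses a 1,184-byte encapsulation key and a 1,088-byte ciphertext. A back-of-envelope sketch (TLS record framing, extensions, and certificates excluded):

```python
# Rough per-handshake key-share byte counts, using FIPS 203 parameter sizes
# for ML-KEM-768. A hybrid handshake carries both groups' shares.

KEYSHARE_SIZES = {
    "X25519":     {"client": 32,   "server": 32},    # raw public keys
    "ML-KEM-768": {"client": 1184, "server": 1088},  # encaps key / ciphertext
}

def handshake_keyshare_bytes(groups) -> int:
    """Total key-share bytes exchanged for the given list of groups."""
    return sum(KEYSHARE_SIZES[g]["client"] + KEYSHARE_SIZES[g]["server"]
               for g in groups)

classical = handshake_keyshare_bytes(["X25519"])                 # 64 bytes
hybrid    = handshake_keyshare_bytes(["X25519", "ML-KEM-768"])   # 2336 bytes
print(hybrid - classical)  # 2272 extra bytes per handshake
```

For a high-volume inference endpoint the per-handshake cost is amortised by connection reuse, which is why session resumption and keep-alive policy belong in the migration plan alongside the algorithm choice.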
6. Vendor and Standards Landscape: independent guidance on PQC readiness across AI platforms
  • Cloud AI platform PQC readiness: current TLS support in major inference APIs
  • ENISA and BSI guidance on quantum-safe AI infrastructure
  • Timeline alignment: CNSA 2.0, ANSSI, BSI migration schedules and their implications for AI deployments
7. Q&A and Migration Planning

Designed and Delivered By

Workshops are designed and delivered by QSECDEF in collaboration with sector specialists. All facilitators have direct experience in both quantum technologies and AI infrastructure security.


Quantum Security Defence

Workshop design and delivery

QSECDEF brings world-leading expertise in post-quantum cryptography, quantum computing strategy, and defence-grade security assessment. Our advisory membership spans 600+ organisations and 1,200+ professionals working at the intersection of quantum technologies and critical infrastructure security.


AI Security Partners

Domain expertise and operational validation

AI workshops are co-delivered with sector specialists who bring direct operational experience in AI platform security, inference infrastructure, and MLOps. This ensures workshop content is grounded in the operational realities of production AI deployments.

Commission This Workshop

Sessions are configured around your AI deployment architecture, API infrastructure, authentication mechanisms, and existing security tooling. Get in touch to discuss requirements and schedule a date.

Contact Us