Workshop Description
For AI platform security leads and MLOps engineers. Covers quantum cryptographic exposure of AI inference APIs, model serving TLS, API key authentication, OWASP LLM Top 10 intersections, and PQC migration using FIPS 203/204/205 for AI-as-a-service architectures.
Every AI inference call traverses a TLS session. The key exchange protecting that session (X25519 or ECDHE P-256) is vulnerable to a cryptographically relevant quantum computer running Shor's algorithm. For organisations serving AI APIs externally, the harvest-now-decrypt-later threat is immediate: an adversary recording encrypted API traffic today could decrypt proprietary prompts, model outputs, and customer data once quantum capability arrives. Long-lived API keys used in agentic AI systems compound the exposure.

This workshop maps the full cryptographic dependency chain of a typical AI inference pipeline, from API gateway through load balancer to model server and vector database, then builds a phased PQC migration plan using the NIST FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA) standards. We address the specific performance constraints of AI workloads: handshake latency impact on inference SLAs, ciphertext size overhead on high-throughput API endpoints, and hybrid scheme deployment for backward compatibility.
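The ciphertext-size overhead is easy to estimate from the published parameter sizes alone. A minimal back-of-envelope sketch, using the key-share sizes from FIPS 203 (ML-KEM-768) and RFC 7748 (X25519); the handshake-volume figure is an illustrative assumption, not a measurement:

```python
# Key-exchange bytes on the wire per TLS handshake, classical vs hybrid.
# Sizes from FIPS 203 (ML-KEM-768) and RFC 7748 (X25519).
X25519_SHARE = 32      # bytes, sent in each direction
MLKEM768_EK = 1184     # encapsulation key (client -> server)
MLKEM768_CT = 1088     # ciphertext (server -> client)

def key_exchange_bytes(hybrid: bool) -> int:
    classical = 2 * X25519_SHARE
    if not hybrid:
        return classical
    # Hybrid: both component key shares travel alongside each other.
    return classical + MLKEM768_EK + MLKEM768_CT

classical = key_exchange_bytes(hybrid=False)
hybrid = key_exchange_bytes(hybrid=True)
overhead = hybrid - classical
print(f"classical: {classical} B, hybrid: {hybrid} B, overhead: {overhead} B")
# -> classical: 64 B, hybrid: 2336 B, overhead: 2272 B

# Assumed load for illustration: 10M new TLS handshakes per day.
handshakes_per_day = 10_000_000
print(f"~{overhead * handshakes_per_day / 1e6:.0f} MB/day of extra key-exchange traffic")
```

The per-handshake overhead (~2.2 KB) is negligible for long-lived connections but matters for short-lived, high-volume inference endpoints that renegotiate frequently, which is why connection reuse and session resumption feature in the migration plan.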
What participants will cover
- TLS key exchange vulnerability in AI inference APIs: X25519 and ECDHE P-256 exposure to Shor's algorithm and harvest-now-decrypt-later risk
- API authentication cryptography: long-lived API keys, OAuth tokens, and JWT signing in agentic AI systems under quantum threat
- OWASP LLM Top 10 intersections: where AI-specific vulnerabilities (prompt injection, supply chain, information disclosure) compound quantum cryptographic exposure
- FIPS 203/204/205 for AI infrastructure: ML-KEM for API gateway TLS, ML-DSA for model artefact signing, SLH-DSA for long-lived verification
- Performance impact assessment: handshake latency, ciphertext size, and throughput overhead for high-volume inference endpoints
- Regulatory alignment: EU AI Act Article 15, NIST AI RMF, ENISA and BSI guidance on quantum-safe AI infrastructure
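The hybrid-deployment idea in the bullets above can be sketched concretely: the TLS hybrid key-exchange drafts concatenate the ML-KEM and X25519 shared secrets and feed the result into the key schedule, so the session key stays safe if either component resists attack. A minimal illustration using an HKDF (RFC 5869) built from the Python standard library; the 32-byte random values stand in for real key-exchange outputs:

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF with SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()        # HKDF-Extract
    okm, block = b"", b""
    for i in range((length + 31) // 32):                      # HKDF-Expand
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Stand-ins for the two component shared secrets (illustrative only):
mlkem_ss = os.urandom(32)    # would come from ML-KEM-768 decapsulation
x25519_ss = os.urandom(32)   # would come from the X25519 exchange

# Concatenate, then derive the session key material.
session_key = hkdf_sha256(mlkem_ss + x25519_ss, salt=b"", info=b"hybrid-kex")
assert len(session_key) == 32
```

Recovering the session key then requires breaking both components: a quantum attacker who recovers the X25519 secret still lacks the ML-KEM secret, and a flaw in the newer ML-KEM deployment does not degrade security below today's classical baseline.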