Securing the AI-Native Software Development Lifecycle (Jozsef Ottucsak)
February 27th, 2026 | Level: Technical
Abstract:
Imagine a development pipeline where code writes itself, tests itself, and deploys itself—all before you finish your morning coffee. This is the promise of agentic IDEs and agent-driven development. But for security professionals, it’s a potential nightmare. When an LLM generates thousands of lines of code in minutes, human review cycles become the bottleneck, and the uncertainty of the output becomes the primary risk.
Traditional SDLC security is built for human velocity; it breaks under AI velocity. This session examines the collision of AI agents and application security. We will move beyond the hype to dissect real-world implementations: where AI-native development shines, where it fails, and where it introduces terrifying new risks.
We will contrast the old world with the new, exploring the dual nature of AI-native coding: the massive productivity gains versus the anxiety of deploying code generated by a probabilistic engine. We will discuss how to build new verification layers and processes that don’t just “review” code, but validate it at the speed of the agent.
Join us to learn how to build security guardrails capable of handling high-velocity uncertainty, ensuring that the agent doesn’t just ship code faster, but ships it safely.
Bio:
Jozsef Ottucsak is a seasoned Product Security Architect with over a decade of experience in secure software development lifecycle (SDLC) initiatives for on-premise, hybrid, and cloud-native applications.
Currently serving as a Staff Product Security Architect at Diligent, he specializes in enabling developers to build secure products by establishing security requirements, designing secure-by-design processes, and providing technical guidance.