Forty-eight percent of cybersecurity professionals now rank agentic AI and autonomous systems as the single most dangerous attack vector heading into 2026, according to a Dark Reading readership poll cited by Kiteworks. That number outranks deepfake threats, board-level cyber recognition gaps, and passwordless adoption concerns.
The poll’s timing aligns with a detailed analysis published by The Hacker News, which breaks down why the security industry is now treating agentic AI as a first-order operational risk rather than a policy question.
Three Agent Categories, Three Risk Profiles
The Hacker News analysis identifies three distinct categories of agents operating in enterprise environments, each carrying different security implications.
The first is general-purpose coding and productivity agents like Claude Code and GitHub Copilot. According to the analysis, “whether they have been formally approved or not, they are being used.” The question for security teams is what data these agents can access, how they interact with codebases, and what actions they can take.
The second is vendor-built agents powered by Model Context Protocol (MCP). The analysis flags a specific attack vector: an agent managing a user’s calendar can receive a malicious calendar invite carrying hidden instructions in the event description. “The agent reads it, interprets the embedded prompt, and executes,” according to The Hacker News. This is prompt injection through a side channel, and it works because MCP-connected agents treat incoming data from integrated services as trusted input.
The third is custom agents built by individual users. The analysis makes a blunt observation: “With agentic AI, anyone in the organization can build functional tools, automations, workflows, agents with real system access, without writing traditional code.” Marketing, finance, and operations teams are building agents that go live without security review. The article frames this as “a supply chain problem in a different form.”
The Lateral Movement Problem
The core security concern isn’t that agents exist. It’s the breadth of access they require to be useful.
“Broad permissions are what make agents useful: access to calendars, communication platforms, file systems, code repositories, internal APIs,” the Hacker News analysis states. “That access is also what makes the blast radius significant when something goes wrong.”
The article identifies a specific lateral movement path: “An agent with access to both a terminal and an email inbox can be manipulated through either channel to act in the other.” A compromised email input can trigger terminal commands. A malicious prompt injected through a calendar event can cause an agent to exfiltrate data through an API call. Each integration an agent has access to becomes both a capability and an attack surface.
Kiteworks’ analysis adds that “every AI agent introduced into an organization creates a non-human identity requiring API access and machine-to-machine authentication, challenges that legacy identity management systems were never designed to handle.”
SANS Formalizes Agentic Security Training
The security training industry is now responding. SANS Institute offers SEC545: GenAI and LLM Application Security, a five-day course covering the full GenAI stack from RAG pipelines and vector databases to MLOps workflows and agentic AI security.
The course covers how agents consume inputs, chain tools together, and produce outputs, and what a session with an MCP-connected agent looks like from an access control standpoint. It is scheduled for SANSFIRE 2026 in July.
The Hacker News article frames the SANS course as a response to a specific gap: “Security teams that cannot speak the language of AI engineering, that cannot challenge design decisions, propose workable controls, or ask informed questions, get bypassed. Business units move forward without them.”
The Security Team’s Shrinking Window
The pattern the analysis describes is familiar from previous technology shifts. Cloud computing followed the same arc: business units adopted first, security teams scrambled to catch up, and the gap created years of misconfigured environments that still produce breaches today.
The difference with agentic AI is speed. According to The Hacker News, “the same dynamic is playing out with AI, at a faster pace and with higher stakes.” The article’s recommendation for security teams is direct: “Try building an agent. Experiment with the tools your developers are already using. This hands-on familiarity is where real understanding begins.”
The window for security teams to get ahead of agentic deployments is narrowing. The agents are already running.