SA Enterprises Race to Deploy Agentic AI, But Security Gaps Remain
Agentic AI is moving from aspiration to enterprise reality in South Africa. Enterprises across the country are racing to deploy autonomous agents that can plan, act and make decisions independently, but most are doing so without the security foundations to match. Globally, three out of four agentic AI projects are exposing organisations to major security risks, and the local market, projected to reach USD 152 million by 2030, is no exception.
Agentic AI represents a massive leap forward. Sixty-two percent of organisations worldwide are already experimenting with AI agents, moving beyond simple conversation to autonomous action that will redefine productivity.
Controlling What AI Does, Not Just What It Says
“The challenge is that most enterprises can observe what AI says but remain blind to what AI does,” says Justin Lee, Regional Director for Southern Africa at Palo Alto Networks. “Traditional security tools were not designed for a world where software agents hold identities, make decisions and take action independently.” As autonomous agents plan tasks, execute actions and coordinate with other agents without continuous human oversight, the attack surface shifts fundamentally. Reasoning paths become targets for manipulation. Memory becomes a surface for poisoning. Tools become entry points for unintended actions.
“This shift from ‘AI that talks’ to ‘AI that acts’ introduces new risks, from unmanaged agentic identities to unpredictable runtime behaviours,” adds Rynier Schoeman, Cyber Architecture Specialist and Solutions Consultant at Palo Alto Networks. “Prisma AIRS 3.0 provides a comprehensive platform to discover, assess and protect agentic AI, giving our customers the unique ability to confidently and securely scale the AI-powered enterprise.”
For security teams, the infrastructure layer is where the foundational challenges live: limited visibility across agent deployments, difficulty assessing risk as agent behaviour evolves, and insufficient governance over agent identity and runtime activity.
“SA organisations need a clear picture of their full agent environment before they can begin to secure it,” explains Lee. “That means knowing which agents are running, what they are connected to, and whether they have been assessed for vulnerabilities. Without that foundation, scaling agentic AI introduces risk that compounds quickly.”
Defending the Layer Where Work Actually Happens
At the user layer, where agents increasingly operate on behalf of employees through the browser, the risk profile is equally complex. Shadow AI agents, prompt injection attacks and agent hijacking represent a new class of threats that conventional endpoint and network security cannot address.
“Organisations are unleashing a new workforce of agents, but you cannot give autonomy without security,” notes Schoeman. “By embedding AI-powered data protection and securing AI interactions directly in the browser, leaders can now greenlight strategic AI initiatives that were previously stalled. We are not just securing an interface. We are securing a new way of work.”
Lee points out that the browser is where most work happens today, and it is fast becoming the central hub for agentic AI interactions. “Organisations need the flexibility to use the AI tools that work best for their teams, while maintaining visibility over what those tools are doing, keeping agents within their intended scope, and being able to demonstrate accountability for both human and non-human actions. That last point is becoming increasingly relevant as AI regulations evolve globally and locally.”
As South African enterprises move from experimentation to scaled agentic AI deployment, closing the security gap requires protection at every layer: from the infrastructure where agents are built and deployed, to the browser where they interact with the world. Organisations that secure agentic AI early will scale it faster. Those that don’t will spend more time cleaning up than moving forward.