As organizations grapple with digital transformation, artificial intelligence (AI) is emerging not merely as a competitive advantage but as a fundamental business imperative. With its transformative potential across operations, AI promises significant returns—but not without challenges. Ahead of TechEx North America on June 4-5, insights from Kieran Norton, Deloitte’s US Cyber AI & Automation Leader, offer a sharp lens into the complex interplay of AI deployment, cybersecurity, and governance.
Norton, with over 25 years in cybersecurity and a leading voice at Deloitte, articulates what many in enterprise leadership are only beginning to grasp: AI’s rewards can be immense, but so can its risks. The current imperative? Move beyond experimental deployments and hype to create AI strategies grounded in security, governance, and real-world return on investment (ROI).
AI in the Enterprise: Opportunities and Evolution
AI has moved rapidly from conceptual use to tangible application. Initially introduced through tools like chatbots or customer segmentation models, AI is now being integrated more deeply into core business processes—from supply chain forecasting to cybersecurity triage. This evolution means that AI is no longer optional or experimental. It must be deployed responsibly, securely, and in ways that deliver measurable ROI.
Organizations are seeing AI-driven efficiency in areas such as:
- SOC (Security Operations Center) automation: AI reduces time spent triaging tickets by 60–80%, freeing human analysts for higher-level tasks.
- Predictive maintenance: Manufacturing firms are leveraging AI to analyze sensor data, reducing downtime and saving millions annually.
- Customer service optimization: Intelligent agents are providing 24/7 support, decreasing resolution times and improving satisfaction.
However, these benefits come with strings attached: new attack vectors, deeper integration complexities, and pressing governance demands.
The Dual Role of AI in Cybersecurity
AI is simultaneously a powerful ally in defending against threats and a potential new tool in the arsenal of malicious actors. Norton highlights how AI-enhanced tools are helping companies detect phishing attacks and network anomalies in real time. But he also underscores a rapidly shifting threat landscape.
AI as a Defense Mechanism
Modern security platforms now use machine learning (ML) to analyze enormous datasets and spot abnormal behavior long before a human analyst could. For example, AI-driven tools can:
- Detect lateral movement within networks.
- Flag data exfiltration attempts.
- Monitor endpoints for non-standard usage patterns.
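The endpoint-monitoring idea above can be sketched in miniature. The example below is a toy statistical stand-in for the ML models real platforms use: it flags endpoints whose event volume is a strong outlier against the fleet using a robust modified z-score (median and median absolute deviation, so one noisy host cannot inflate the baseline). All endpoint names, counts, and thresholds are illustrative assumptions, not taken from any real product.

```python
from statistics import median

def flag_anomalous_endpoints(event_counts, threshold=3.5):
    """Flag endpoints whose event volume is a strong outlier vs. the fleet.

    event_counts maps endpoint name -> events observed in the window.
    Uses a modified z-score (0.6745 * |x - median| / MAD), a common robust
    outlier rule; this is only a sketch, not a production detector.
    """
    counts = list(event_counts.values())
    if len(counts) < 3:
        return set()  # too little data to establish a baseline
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return set()  # no spread to measure deviation against
    return {
        name for name, c in event_counts.items()
        if 0.6745 * abs(c - med) / mad > threshold
    }

# Hypothetical hourly outbound-connection counts per workstation.
window = {"ws-01": 42, "ws-02": 38, "ws-03": 45, "ws-04": 40, "ws-05": 910}
print(flag_anomalous_endpoints(window))  # ws-05 stands far outside the baseline
```

The median-based score is chosen here because extreme outliers (the very thing being hunted) would otherwise distort a simple mean-and-standard-deviation baseline.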
According to IBM’s Cost of a Data Breach Report 2023, organizations using AI and automation extensively were able to identify and contain breaches 108 days faster on average than those without.
AI as a Threat Vector
Just as defenders use AI, so do attackers. We’re witnessing the emergence of AI-powered malware capable of evading traditional detection systems, and sophisticated phishing scams created using generative AI models. This asymmetry forces businesses to stay ahead not only technologically but also strategically.
The Governance Imperative: Moving Beyond Checklists
Norton draws parallels between the current state of AI adoption and the early days of cloud computing. Then, as now, businesses saw the promise but underestimated the structural changes needed to integrate new technologies securely and at scale.
Today, integrating AI requires:
- Updated governance frameworks: Traditional cybersecurity policies often fail to address model bias, hallucination, or prompt injection risks.
- Cross-functional oversight: Legal, compliance, risk, and IT must collaborate to create policies that address data sovereignty, consent, and fairness.
- Model lifecycle management: Monitoring performance degradation, re-training schedules, and audit trails are now central to AI governance.
He cautions against creating parallel governance structures solely for AI. Instead, organizations should evolve their existing cybersecurity and risk management programs to handle AI-specific nuances.
“You shouldn’t create another programme just for AI security on top of what you’re already doing,” Norton advises. “You should be modernising your programme to address the nuances associated with AI workloads.”
Data: The Lifeblood of AI and a New Security Frontier
Data remains at the heart of both AI’s value and its risk. For AI systems to deliver, they must have access to high-quality, clean, and comprehensive datasets. But with this access comes responsibility.
Enterprises need to:
- Map their data environments to ensure visibility into what data is being used by AI, and where it resides.
- Apply rigorous access controls and encryption protocols to prevent leaks and unauthorized access.
- Ensure compliance with data privacy laws like GDPR, CCPA, and emerging AI-specific regulations.
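The first two requirements—mapping what data AI consumes and gating access to it—can be combined into a single enforcement point. The sketch below is a hypothetical label-based access gate, not any specific product's API: dataset names, sensitivity labels, and role clearances are all invented for illustration, and unmapped data is denied by default.

```python
# Illustrative data-access gate for an AI pipeline. Every dataset carries a
# sensitivity label from the data map, and a consumer (model, agent, or user)
# must hold a role cleared for that label before data is released.

DATASET_LABELS = {
    "customer_emails": "confidential",
    "sensor_telemetry": "internal",
    "public_docs": "public",
}

# Which sensitivity labels each role may read.
ROLE_CLEARANCE = {
    "support_bot": {"public", "internal"},
    "fraud_model": {"public", "internal", "confidential"},
}

def authorize_read(role, dataset):
    """Return True only if the role is cleared for the dataset's label."""
    label = DATASET_LABELS.get(dataset)
    if label is None:
        return False  # unmapped data is denied by default
    return label in ROLE_CLEARANCE.get(role, set())

print(authorize_read("support_bot", "customer_emails"))  # False: not cleared
print(authorize_read("fraud_model", "customer_emails"))  # True
```

The deny-by-default branch is the point of the exercise: data that has not been mapped cannot be silently swept into a model's training or retrieval set.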
Failing to secure data not only risks compliance breaches but also undermines the integrity of AI outputs—especially when data is poisoned or corrupted at the source.
Starting Small: The Practical Roadmap to AI ROI
One of Norton’s most valuable recommendations is to start with smaller, well-bounded AI implementations that present low operational and reputational risks. Chatbots, document summarization tools, and internal recommendation engines are ideal starting points.
He differentiates between:
- Chatbots: Lower risk, primarily designed to surface information based on training data.
- Agents (agentic AI): Higher risk, particularly if they’re involved in executing transactions or making consequential decisions (e.g., in finance or healthcare).
“If you plug 5, 6, 10, 50, a hundred agents together, you’re getting into a network of agency,” says Norton. “The interactions become quite complex and present different issues.”
To avoid risk overload, businesses must first test AI in narrow contexts with clearly defined KPIs and oversight. Only after validating efficacy and understanding the risks should AI be scaled.
Case Study: Deloitte’s SOC AI Implementation
A standout example of AI delivering tangible value is Deloitte’s internal deployment of AI to handle Level I security ticket triage. With thousands of events flagged daily, human analysts were stretched thin. AI was trained to handle preliminary ticket triage, classifying and escalating only those incidents that warranted human attention.
Results included:
- A significant reduction in analyst fatigue.
- Quicker response times to genuine threats.
- A high-confidence operational prototype with measurable ROI.
This AI was not customer-facing, reducing reputational risk, and experts were embedded in the process to validate decisions—illustrating the principle of responsible augmentation, not replacement.
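A triage flow of this shape can be sketched as follows. To be clear, this is not Deloitte's implementation—their system uses trained models—and the keywords, weights, and escalation threshold below are invented for illustration. The sketch only shows the pattern: score each ticket, auto-close confident low-severity ones, and escalate anything high-risk or unrecognized to a human analyst.

```python
# Hypothetical keyword-scored Level I triage gate (illustrative weights).
HIGH_RISK_TERMS = {"exfiltration": 3, "ransomware": 3, "privilege": 2,
                   "lateral": 2, "failed login": 1}

def triage(ticket_text, escalate_at=2):
    """Score a ticket by risk keywords; escalate when the score is high,
    or when nothing matched at all (unknown pattern: fail safe)."""
    text = ticket_text.lower()
    score = sum(w for term, w in HIGH_RISK_TERMS.items() if term in text)
    if score >= escalate_at:
        return "escalate"
    if score == 0:
        return "escalate"  # no signal: route to a human rather than guess
    return "auto-close"

tickets = [
    "Repeated failed login on VPN gateway",
    "Possible data exfiltration from finance share",
    "Printer driver update notification",
]
print([triage(t) for t in tickets])  # ['auto-close', 'escalate', 'escalate']
```

Note the fail-safe default: tickets the system cannot score go to an analyst, mirroring the article's point that experts stayed embedded to validate decisions.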
Building the Future: Secure AI by Design
For AI to deliver on its promise sustainably, organizations must bake security and governance into the design and deployment lifecycle from the outset.
Key recommendations include:
- Establish multidisciplinary AI governance committees to guide responsible development and usage.
- Integrate secure AI development practices akin to DevSecOps—“SecAIops”—to ensure vulnerabilities are caught early.
- Educate stakeholders across the business on risks such as bias, hallucinations, and adversarial attacks.
- Use sandboxed environments for agentic AI experiments to minimize unintended consequences.
- Continuously monitor AI systems using real-time telemetry and establish clear escalation paths for anomalous behavior.
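The last recommendation—continuous telemetry with a clear escalation path—can be sketched as a rolling-window monitor. The window size, alert threshold, and the notion of a "flagged" output (e.g., a hallucinated or policy-violating response) are all illustrative assumptions; a real deployment would wire the escalation into an on-call rotation.

```python
from collections import deque

class AIOutputMonitor:
    """Toy telemetry monitor: track whether recent model outputs were flagged
    as anomalous, and signal escalation once the flagged rate over a full
    rolling window crosses a threshold."""

    def __init__(self, window=100, alert_rate=0.05):
        self.events = deque(maxlen=window)  # oldest events fall off
        self.alert_rate = alert_rate

    def record(self, flagged):
        """Record one output; return True when escalation should fire."""
        self.events.append(bool(flagged))
        window_full = len(self.events) == self.events.maxlen
        rate = sum(self.events) / len(self.events)
        return window_full and rate >= self.alert_rate

# Simulated stream where every third output is flagged.
monitor = AIOutputMonitor(window=10, alert_rate=0.3)
alerts = [monitor.record(i % 3 == 0) for i in range(20)]
```

Waiting for a full window before alerting avoids paging on a single early flag; the threshold then defines the "anomalous behavior" boundary the escalation path is built around.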
As Norton emphasizes, companies must avoid the allure of theoretical value and instead focus on solving real problems with real AI—where success can be measured and trust can be built.
Conclusion: AI ROI Comes with Responsibility
As AI continues to mature and its business impact deepens, it will no longer be confined to IT departments or innovation labs. Every function, from operations to legal to the C-suite, will need to understand and adapt to the implications of AI. The real ROI comes not from simply deploying AI, but from doing so securely, ethically, and strategically.
Norton’s thoughts provide a blueprint: proceed cautiously in integrating AI, start small, evolve security and governance models, and build real-world impact, not just possibility. Organizations that embrace this approach will be the ones to reap AI’s rewards while avoiding its pitfalls.