Introducing Microsoft Sentinel MCP and Sentinel Graph
With the Sentinel MCP server, Microsoft Sentinel becomes an agentic security platform, and Sentinel graph models security data as interconnected nodes and edges.
On September 30, 2025, Microsoft announced a transformative evolution of Microsoft Sentinel from a cloud-native SIEM into a comprehensive agentic security platform, introducing the Model Context Protocol (MCP) server in public preview. This announcement marks a fundamental shift in how security operations centers can leverage AI agents to defend against sophisticated threats at machine speed. The Sentinel MCP server, combined with the general availability of the Sentinel data lake and public preview of Sentinel graph, positions Microsoft’s security platform as the industry’s first fully integrated environment where autonomous AI agents can access unified security context, reason over complex relationships, and execute automated responses across 84 trillion daily security signals. For security teams drowning in alerts and facing critical talent shortages, this represents the most significant architectural advancement in SIEM technology since the category’s inception—enabling junior analysts to perform at expert levels and reducing incident response times by more than 40 percent.
The announcement arrives as organizations struggle with overwhelming alert volumes, fragmented security tools averaging 45 per enterprise, and one in three cybersecurity positions remaining vacant. Microsoft’s solution fundamentally reimagines security operations by providing AI agents with standardized access to rich security context through an open protocol, eliminating the need for custom integrations and enabling both Microsoft and third-party agents to collaborate seamlessly within a unified platform.
Understanding the Model Context Protocol and why it matters for security
Model Context Protocol emerged as an open-source standard introduced by Anthropic to solve a fundamental challenge: connecting AI assistants to external systems where data lives. Microsoft has adopted and extended this protocol specifically for cybersecurity applications, creating what the company describes as a “USB-C port for AI applications”—just as USB-C standardized device connectivity, MCP standardizes how AI models connect to data sources and tools.
The protocol defines a client-server architecture with three core components working in concert. The MCP host serves as the AI application that coordinates multiple MCP clients, such as Visual Studio Code or Security Copilot. The MCP client maintains connections to MCP servers and obtains context for the host application. The MCP server provides the actual context, tools, and resources that AI agents need to perform their work. This architecture enables something previously impossible at scale: AI agents can autonomously discover what capabilities are available from each connected system, interact using natural language, and leverage standardized communication protocols that work across any AI application that implements MCP.
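The host/client/server split can be pictured with a small sketch. This is illustrative only: real MCP exchanges JSON-RPC 2.0 messages over a transport rather than direct Python calls, and the tool name and classes below are invented for the example.

```python
# Minimal sketch of the MCP host/client/server pattern (illustrative only;
# real MCP uses JSON-RPC 2.0 messages, not direct Python method calls).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str              # natural-language description the agent reads
    handler: Callable[..., str]

class MCPServer:
    """Provides context and tools (the role the Sentinel server plays)."""
    def __init__(self):
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool):
        self._tools[tool.name] = tool

    def list_tools(self):         # the discoverable "menu" of actions
        return [(t.name, t.description) for t in self._tools.values()]

    def call(self, name: str, **kwargs):
        return self._tools[name].handler(**kwargs)

class MCPClient:
    """Maintains the connection to one server on behalf of the host."""
    def __init__(self, server: MCPServer):
        self.server = server

    def discover(self):
        return self.server.list_tools()

# The host (e.g., VS Code or Security Copilot) coordinates one client per server.
server = MCPServer()
server.register(Tool(
    name="query_security_data",
    description="Run a natural-language query against security telemetry",
    handler=lambda prompt: f"results for: {prompt}",
))
client = MCPClient(server)
print(client.discover())
print(client.server.call("query_security_data", prompt="top risky users"))
```

The key property the sketch captures is that the client never hard-codes the server's capabilities: it discovers them at runtime from the descriptions the server publishes.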
Rather than security teams spending weeks building custom connectors for each new integration, the MCP server presents a discoverable “menu” of available actions in language that AI agents inherently understand. This eliminates the fragmentation that has plagued security operations for decades, where data sits siloed across dozens of tools with incompatible APIs and query languages. The open standard nature ensures long-term compatibility and prevents vendor lock-in, enabling security teams to build once and deploy across multiple AI platforms.
For cybersecurity specifically, this standardization proves transformative because security context is everything. An alert about a failed login attempt means nothing without understanding whether that user account has elevated privileges, whether the source IP has appeared in threat intelligence feeds, whether similar patterns occurred across other users, and whether the endpoint involved shows other suspicious behaviors. Gathering this context manually across disparate tools can take analysts 25 to 40 minutes per alert—time that multiplies across thousands of daily alerts and during which attackers are actively moving laterally through compromised environments.
The Microsoft Sentinel MCP server brings unified security context to AI agents
Microsoft Sentinel MCP server launched in public preview as a fully managed cloud service that requires no infrastructure deployment from security teams. Built on the MCP standard, it provides standardized, secure access to the complete security context stored in the Sentinel data lake, including tabular telemetry spanning years of historical data, graph-based relationship mappings between entities, and vector embeddings for unstructured security signals. The server uses Microsoft Entra for authentication and is available to all Sentinel data lake customers, with data stored in the same region as the connected workspace to meet data residency requirements.
The technical implementation delivers three critical layers of access that transform how security operations function. First, data exploration tools enable security analysts to query Sentinel data using natural language without knowing which tables to access, understanding complex schemas, or writing Kusto Query Language (KQL) scripts. An analyst can simply prompt “Find the top 3 users at risk and explain why they are at risk” or “Identify devices that showed an unusually high volume of outgoing network connections,” and the MCP server translates these requests into optimized queries against the appropriate data sources, returning actionable insights in plain language.
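To make the translation step concrete, the toy function below stands in for the server-side natural-language-to-KQL step. The actual translation logic is internal to the service, and the table and column names in the generated query (IdentityInfo, SecurityAlert, AccountUPN) are assumptions used for illustration.

```python
# Toy stand-in for the server-side natural-language-to-KQL translation.
# The real MCP data-exploration tools do this with an AI model; the KQL
# below is an illustrative guess at the kind of query that comes out.
def translate_to_kql(prompt: str) -> str:
    examples = {
        "find the top 3 users at risk": (
            "IdentityInfo\n"
            "| join kind=inner (SecurityAlert) on "
            "$left.AccountUPN == $right.CompromisedEntity\n"
            "| summarize RiskScore = count() by AccountUPN\n"
            "| top 3 by RiskScore desc"
        ),
    }
    return examples.get(prompt.lower().rstrip("?"), "-- no translation available --")

print(translate_to_kql("Find the top 3 users at risk"))
```

The point of the pattern is that the analyst supplies intent, not syntax; schema knowledge lives on the server side.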
Second, graph-based context models relationships across users, devices, activities, and threat indicators, enabling AI agents to understand how entities connect rather than viewing them as isolated data points. This proves essential for threat investigations where attackers exploit trust relationships and legitimate credentials to move laterally. The graph capabilities support both pre-breach analysis identifying vulnerable attack paths and post-breach investigations tracing how compromises spread across environments. When Security Copilot or other AI agents query through the MCP server, they receive not just individual data points but the full relationship context needed for accurate threat assessment.
Third, natural language agent creation allows SOC engineers to describe their intent in conversational language to rapidly build custom security agents. The MCP server automatically selects appropriate tools, configures the right AI model instructions, and establishes connections to security data—tasks that previously required deep technical expertise and weeks of development time. Security teams can now create organization-specific agents tailored to their unique workflows, compliance requirements, and threat landscapes in minutes rather than months.
The server operates at the URL https://sentinel.microsoft.com/mcp/data-exploration and integrates with multiple platforms including Security Copilot (with native integration coming), VS Code with GitHub Copilot, and any MCP-compatible development environment. Authentication requires at least the Security Reader role through Microsoft Entra ID, supporting Azure role-based access control for least-privilege security. All data remains encrypted at rest using Microsoft-managed keys by default, with customer-managed key options available for organizations with specific compliance requirements. The transport mechanism uses HTTP with Server-Sent Events, ensuring real-time streaming of security context to connected agents.
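In code, a connection to the server comes down to an Entra ID bearer token plus a JSON-RPC request accepted as a Server-Sent Events stream. The endpoint URL comes from the announcement; the message shape and header choices below are a sketch of typical MCP-over-HTTP usage, not the documented contract, and no network call is actually made here.

```python
# Sketch of what an MCP client request to the Sentinel server involves:
# an Entra ID bearer token and a JSON-RPC body streamed back over SSE.
# The payload shape is an assumption based on generic MCP conventions.
import json

SERVER_URL = "https://sentinel.microsoft.com/mcp/data-exploration"

def build_request(token: str, method: str, params: dict) -> tuple[dict, bytes]:
    headers = {
        "Authorization": f"Bearer {token}",   # Microsoft Entra ID access token
        "Content-Type": "application/json",
        "Accept": "text/event-stream",        # Server-Sent Events transport
    }
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params,
    }).encode()
    return headers, body

headers, body = build_request(
    "<token>", "tools/call",
    {"name": "query", "arguments": {"prompt": "top risky users"}},
)
print(headers["Accept"], len(body))
```

In practice the token would be acquired through an Entra ID flow (for example, via the azure-identity library) under an account holding at least the Security Reader role.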
Microsoft Sentinel’s evolution from SIEM to security data platform
Microsoft Sentinel has transformed from its origins as a cloud-native Security Information and Event Management system into what the company positions as “the security platform for the agentic era.” This evolution centers on three major architectural innovations that reached milestone releases on September 30, 2025, creating an integrated foundation for AI-driven security operations.
The Sentinel data lake reached general availability as a fully managed, cloud-native security data repository purpose-built for AI operations at scale. Unlike traditional SIEM architectures that force organizations to choose between data retention and cost, the data lake implements a two-tier storage model. The analytics tier provides “hot” storage optimized for real-time queries, analytics rules, threat hunting, and alerting, with a default retention of 30 days, extendable up to two years. The data lake tier offers “cold” storage for long-term forensics, compliance, historical analysis, and machine learning model training, supporting up to 12 years total retention at less than 15 percent of traditional analytics log costs.
This architectural separation of storage and compute enables unprecedented scale and flexibility. Security teams can retain comprehensive telemetry from 350+ native connectors spanning Microsoft 365, Azure, AWS, GCP, and third-party security tools without the prohibitive costs that previously limited retention to 30 to 90 days. The data lake uses open-format Parquet files, ensuring interoperability and enabling advanced analytics through multiple modalities: full KQL support for traditional queries, Python-based Jupyter notebooks for machine learning development, and natural language interaction through the MCP server. When historical data is needed for investigation or analysis, teams can promote specific datasets from the lake tier to analytics tier on-demand, paying only for what they actively analyze.
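A back-of-the-envelope calculation shows why the two-tier model changes the retention economics. The per-GB prices below are hypothetical placeholders, not Microsoft list prices; only the "less than 15 percent" ratio comes from the announcement.

```python
# Hypothetical cost comparison: all-hot retention vs. two-tier retention.
# Prices are invented placeholders; only the <=15% ratio is from the source.
analytics_price_per_gb = 1.00                        # hypothetical hot-tier price
lake_price_per_gb = 0.15 * analytics_price_per_gb    # <=15% per the announcement

daily_gb = 500
hot_days, total_days = 90, 365 * 12                  # 90 days hot, 12 years total

all_hot = daily_gb * total_days * analytics_price_per_gb
tiered = (daily_gb * hot_days * analytics_price_per_gb
          + daily_gb * (total_days - hot_days) * lake_price_per_gb)

print(f"all-hot: ${all_hot:,.0f}  tiered: ${tiered:,.0f}  "
      f"savings: {1 - tiered / all_hot:.0%}")
```

Under these assumed prices, keeping 12 years of telemetry tiered costs a small fraction of keeping it all hot, which is what makes multi-year retention practical at all.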
The Sentinel graph capabilities entered public preview to address a fundamental limitation of traditional SIEM systems: they excel at analyzing individual events but struggle to understand relationships and context. Graph technology models security data as interconnected nodes and edges representing users, identities, devices, cloud resources, data flows, activities, and threat intelligence indicators. This graph-based approach matches how attackers actually operate—exploiting trust relationships, legitimate credentials, and interconnected systems to achieve their objectives.
Graph-powered experiences transform security operations across multiple domains. The incident graph in the Microsoft Defender portal provides blast radius analysis that visualizes vulnerable paths an attacker could traverse from a compromised entity to reach critical assets, enabling SOC teams to prioritize remediation based on actual risk rather than alert severity scores. The hunting graph enables visual traversal of complex relationships to proactively identify privileged access paths to sensitive resources before attacks escalate. Data risk graphs in Microsoft Purview Insider Risk Management connect users, assets, and risky activities across SharePoint and OneDrive to show the full blast radius of insider threats and prevent data exfiltration.
For AI agents and Security Copilot, graph context proves transformative because it enables reasoning over interconnected data with precision. Rather than analyzing isolated alerts, agents can answer complex questions like “which paths could an attacker take from this compromised user account to reach our financial database?” The graph automatically correlates alerts with relationship context, prioritizes incidents by actual business impact, and enables automated response actions that understand the full scope of affected systems. This shift from defenders thinking in isolated event lists to thinking in relationship graphs fundamentally changes the speed and accuracy of threat detection and response.
How Sentinel graph capabilities enhance security operations and investigation
The practical impact of graph-based security operations manifests across the entire security lifecycle, from proactive threat hunting to incident response and compliance investigations. Traditional SIEM queries return flat tables of events that analysts must manually correlate, a time-consuming process prone to missing subtle connections that span weeks or months. Graph queries return relationship networks that immediately surface patterns invisible in tabular data.
Consider a real-world scenario: an analyst investigates a suspicious login from an unusual location. Traditional SIEM analysis requires manually querying user account history, checking device inventory for the source system, searching for similar authentications across other users, reviewing historical threat intelligence on the source IP address, and examining network traffic patterns—each query crafted separately with results mentally correlated by the analyst. With Sentinel graph, a single query returns the complete relationship network: how the user connects to sensitive resources, what lateral movement paths exist from the source device, which other users share similar authentication patterns, how the IP address relates to known threat actor infrastructure, and what cascading impacts could occur if the account is compromised.
This contextual understanding enables pre-breach security where organizations identify and remediate vulnerable attack paths before adversaries exploit them. Security teams can visualize privilege escalation routes showing how an attacker with initial access to a low-privilege account could reach domain administrator credentials through a chain of delegated access rights, misconfigured permissions, and trust relationships. Remediating these paths proactively—before any compromise occurs—dramatically reduces attack surface and eliminates entire classes of attack techniques.
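The underlying idea can be reduced to a small example: entities are nodes, trust and permission relationships are edges, and finding escalation routes is a path search. Sentinel graph's actual query surface is not shown here; this is plain breadth-first search over a made-up relationship graph.

```python
# Toy attack-path search: BFS over an entity-relationship graph.
# All entities and relationship names below are hypothetical.
from collections import deque

edges = [
    ("intern-account", "can-log-on-to", "workstation-7"),
    ("workstation-7", "has-cached-creds-of", "helpdesk-admin"),
    ("helpdesk-admin", "can-reset-password-of", "domain-admin"),
    ("domain-admin", "administers", "financial-database"),
]

def attack_paths(start: str, target: str):
    """Return every relationship chain from start to target (breadth-first)."""
    graph: dict[str, list[tuple[str, str]]] = {}
    for src, rel, dst in edges:
        graph.setdefault(src, []).append((rel, dst))
    paths, queue = [], deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if node == target:
            paths.append(path)
            continue
        for rel, nxt in graph.get(node, []):
            if nxt not in path:                # avoid revisiting nodes
                queue.append((nxt, path + [f"--{rel}-->", nxt]))
    return paths

for p in attack_paths("intern-account", "financial-database"):
    print(" ".join(p))
```

The output chain is exactly the kind of privilege escalation route described above: a low-privilege account reaching a critical asset through cached credentials and delegated reset rights, each hop individually legitimate.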
During active incidents, graph analysis accelerates response by tracing attack paths across the environment. When ransomware encrypts file servers, graph queries instantly reveal how the malware spread from the initial infection vector, which systems remain at risk, what lateral movement techniques the attacker used, and which accounts or credentials need immediate remediation. This comprehensive impact assessment replaces hours of manual investigation with minutes of graph traversal, enabling security teams to contain threats before they cause catastrophic damage.
The integration with Microsoft Purview creates unique capabilities for data security investigations that traditional SIEMs cannot provide. Data security teams can trace how sensitive files move across collaboration platforms, who accessed confidential information before it appeared in unauthorized locations, and what risk indicators correlate with data exfiltration attempts. The unified audit logs from SharePoint, OneDrive, Teams, and Exchange combine with Entra audit logs and threat intelligence to create a complete picture of data security incidents spanning structured and unstructured repositories.
Integration with Security Copilot transforms SOC analyst productivity
Microsoft Security Copilot represents the industry’s first security product combining OpenAI’s GPT-4 architecture with a security-specific language model trained on Microsoft’s 84 trillion daily security signals. Generally available since April 2024, Security Copilot integrates deeply with Sentinel through two complementary mechanisms: direct plugin integration for existing Sentinel capabilities and the new MCP server for advanced agentic workflows.
The immediate productivity gains prove substantial and measurable. Forrester’s Total Economic Impact study analyzing four organizations using Security Copilot projected returns on investment ranging from 112 percent in conservative scenarios to 457 percent in high-impact implementations over three years. Organizations achieved 22 percent faster incident response and seven percent more accurate security decisions. Perhaps most significantly, junior analysts achieved senior-level output quality with AI assistance, addressing the critical skills gap where one in three cybersecurity positions remains vacant.
The integration manifests across the security operations lifecycle. For incident analysis and triage, Security Copilot automatically generates comprehensive incident summaries available in both Azure and Defender portals, synthesizing dozens of alerts and logs into actionable briefs that previously required 25 to 40 minutes of manual analysis. Guided response recommendations provide step-by-step remediation instructions tailored to the specific incident context. Context enrichment automatically correlates alerts with historical data and threat intelligence, while entity investigation provides automated analysis of all users, devices, IP addresses, and files involved.
Threat hunting capabilities transform from manual to AI-assisted through natural language to KQL translation. Security analysts describe hunt hypotheses in plain English, and Security Copilot generates optimized hunting queries across both Sentinel and Defender XDR tables in the unified Defender portal. Graph-powered hunting leverages Sentinel graph for relationship-based queries that identify attack paths and privilege escalation routes. Pre-built promptbooks provide investigation workflows for common scenarios like “Microsoft Sentinel incident investigation” that guide analysts through comprehensive analysis even for unfamiliar attack types.
The September 2025 announcement introduced autonomous agent capabilities that shift from reactive assistance to proactive automation. The no-code agent builder enables security teams to create custom Security Copilot agents using natural language descriptions of desired functionality. The system automatically generates agent code, provides an “autotune” feature that refines instructions for optimal performance, and enables one-click deployment to Security Copilot workspaces. Agents can execute complete end-to-end workflows without human intervention—analyzing phishing emails, optimizing conditional access policies, triaging data loss prevention alerts, and remediating vulnerabilities based on organizational context.
The Microsoft Security Store launched as a centralized marketplace where teams discover and deploy agents created by Microsoft, partners like Accenture, ServiceNow, Zscaler, BlueVoyant, OneTrust, and Aviatrix, and the community. Available agents include threat intelligence briefing agents that curate relevant intelligence based on organizational attributes, user-submitted phishing triage agents that automatically analyze reported emails, conditional access optimization agents identifying policy gaps, and access review agents empowering reviewers to make fast, accurate decisions in Microsoft Entra. This ecosystem approach prevents organizations from building the same capabilities repeatedly, enabling rapid deployment of proven automation for common security workflows.
Real-world deployments demonstrate transformative impact. NCC Group saved 50 hours per week for their SOC team through Security Copilot automation. Organizations implementing full Sentinel and Security Copilot integration achieved more than 40 percent reduction in incident service level agreements. Field studies show 87 to 92 percent time reduction per alert investigation when agents handle Tier 1 and Tier 2 triage autonomously. The shift enables human analysts to focus on strategic threat hunting, security architecture improvements, and complex edge cases requiring creative problem-solving rather than repetitive alert processing.
GitHub Copilot bridges development and security operations through MCP
The Sentinel MCP server creates an unexpected but powerful integration between GitHub Copilot and security operations, enabling security teams with development skills to build highly customized organization-specific agents within familiar coding environments. This developer-focused security automation pathway complements the no-code agent builder in Security Copilot, providing maximum flexibility for technical teams.
Security engineers can now open VS Code, enable the Sentinel MCP server through the command palette by adding the server URL https://sentinel.microsoft.com/mcp/data-exploration, and immediately access MCP tools through GitHub Copilot. The integration supports what Microsoft calls “vibe-coding”—describing desired agent functionality in natural language while GitHub Copilot generates the implementation code. Developers can write security detection rules, create response automation scripts, build custom integrations between security tools, and generate optimized KQL queries for Sentinel with AI assistance throughout the development process.
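A typical setup registers the server in the workspace's MCP configuration. The exact file location and schema depend on the VS Code version; the fragment below follows VS Code's `.vscode/mcp.json` convention, with the server name chosen for illustration.

```json
{
  "servers": {
    "sentinel-data-exploration": {
      "type": "http",
      "url": "https://sentinel.microsoft.com/mcp/data-exploration"
    }
  }
}
```

With the server registered, GitHub Copilot's agent mode can discover and invoke the Sentinel tools directly from chat, authenticating through the signed-in Microsoft Entra ID account.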
This workflow enables rapid iteration and testing. Developers build agents against the Sentinel data lake, test functionality with real security data, refine behavior based on results, and deploy production-ready agents to Security Copilot workspaces—all within the same environment. Version control through GitHub maintains audit trails and enables collaboration across distributed security engineering teams. The approach bridges the historical gap between security operations and software development, acknowledging that modern security increasingly requires programmatic automation rather than just manual analysis.
The technical workflow supports both simple automation scripts and sophisticated multi-agent systems. Security teams can create agents that automatically enrich alerts with context from internal threat intelligence platforms, custom agents that integrate proprietary security tools not yet in the Microsoft ecosystem, and specialized agents implementing organization-specific playbooks codifying institutional knowledge and compliance requirements. The MCP standard ensures these custom agents work seamlessly alongside Microsoft-provided and partner-created agents, enabling mixed ecosystems where the right tool handles each specific task.
The broader Microsoft security strategy positions Sentinel within unified operations
Microsoft Sentinel operates as a central component within Microsoft’s comprehensive security ecosystem spanning identity, endpoints, cloud infrastructure, data governance, and application security. The Secure Future Initiative represents what Microsoft describes as the largest cybersecurity engineering project in history, with the equivalent of 34,000 full-time engineers working for 11 months to strengthen security posture across all Microsoft products and services. This initiative establishes security as every employee’s core priority tied to performance reviews, with 99 percent completing Security Foundations training.
The unified security architecture implements Zero Trust principles across the entire Microsoft cloud. Azure Active Directory, now Microsoft Entra ID, manages identity and access with conditional access policies, dynamic risk-based controls, and phishing-resistant multi-factor authentication deployed to 92 percent of employee accounts. Microsoft Defender XDR integrates endpoint, email, identity, and cloud app security through the unified Defender portal where Sentinel incidents synchronize bidirectionally. Microsoft Defender for Cloud provides Cloud-Native Application Protection Platform capabilities securing multicloud workloads across Azure, AWS, and GCP with unified risk analysis identifying an average of 351 exploitable attack paths to high-value assets per organization.
Microsoft Purview completes the data security layer, providing data loss prevention, insider risk management, and information protection across structured and unstructured data repositories. The September 2025 announcements positioned Purview as a comprehensive Data Security Posture Management platform for AI, with 99 percent of organizations experiencing sensitive data exposure through AI tools requiring robust safeguards. The integration between Sentinel graph and Purview creates unique capabilities for investigating data security incidents that span security events and data access patterns.
This ecosystem approach delivers practical benefits beyond architectural elegance. Security teams operate from the unified Defender portal rather than switching between multiple consoles, with consistent interfaces for incident response whether threats originate from email phishing, cloud misconfigurations, or compromised identities. Sentinel provides the long-term data repository and advanced analytics engine, while Defender XDR delivers real-time detection and automated response for Microsoft-native threats. Together, they create comprehensive coverage from initial access through impact, with AI agents and Security Copilot providing the orchestration layer that enables machine-speed operations.
The platform processes 84 trillion security signals daily across Microsoft’s global infrastructure, creating unparalleled visibility into emerging threats and attack patterns. This telemetry feeds Microsoft’s Threat Intelligence platform, which expanded in June 2025 to all 27 EU member states, EFTA members, the United Kingdom, Monaco, and Vatican City. Real-time threat intelligence tailored to national threat environments enables government and private sector organizations to anticipate attacks before they materialize. The Cybercrime Threat Intelligence Program supports coordinated law enforcement, while the Microsoft Threat Analysis Center monitors foreign influence operations enhanced by AI-powered detection of deepfake synthetic media.
Significance for security teams and SOC operations in the agentic era
The transformation from traditional security operations centers to AI-augmented environments addresses critical problems that have plagued cybersecurity for decades. Modern SOC analysts face thousands of alerts daily with false positive rates exceeding 50 percent in many environments, creating alert fatigue that causes genuine threats to blend into noise. The average enterprise security stack contains 45 separate tools from a market exceeding 3,000 vendors, with each tool requiring specialized expertise and manual correlation with other systems. Meanwhile, one in three cybersecurity positions remains vacant due to talent shortages, and the analysts who fill the remaining roles face relentless pressure during a period when cyber attacks have increased in frequency and sophistication.
Agentic AI fundamentally reimagines this paradigm by shifting from human analysts manually investigating every alert to AI agents autonomously handling routine tasks while humans supervise and manage exceptions. Organizations deploying these capabilities report transformative results: 87 to 92 percent time reduction per alert investigation, 40 percent reduction in incident service level agreements, and mean time to resolution improvements enabling threats to be contained in minutes rather than hours. The IBM Cost of a Data Breach Report 2024 found organizations without AI and automation experienced average breach costs of $5.72 million compared to $3.84 million for those with extensive AI and automation—a savings of $1.88 million per incident.
The operational transformation manifests across the security lifecycle. For alert triage and investigation, agents automatically deduplicate alerts, perform parallel context enrichment across all integrated security tools, correlate with threat intelligence feeds, map machine and account relationships through Sentinel graph, calculate risk scores with explanations, execute automated containment for confirmed threats, and generate complete documentation—all in approximately three minutes compared to 25 to 40 minutes for manual analysis. This speed and consistency ensures 100 percent alert coverage where no potential threat goes uninvestigated due to analyst workload or fatigue.
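The triage sequence just described (deduplicate, enrich, score, contain, document) can be reduced to a toy pipeline. The field names, enrichment stubs, thresholds, and scoring weights are all invented for illustration; a real agent would draw enrichment from threat intelligence feeds and the Sentinel graph.

```python
# Toy automated-triage pipeline: dedupe -> enrich -> score -> route.
# All fields, weights, and thresholds are invented for illustration.
def triage(alerts):
    # 1. Deduplicate on (entity, alert type)
    unique = {(a["entity"], a["type"]): a for a in alerts}.values()
    results = []
    for alert in unique:
        # 2. Context enrichment (stubbed: privileged account? known-bad IP?)
        enriched = {**alert,
                    "privileged": alert["entity"].startswith("admin"),
                    "ti_match": alert.get("src_ip") in {"203.0.113.7"}}
        # 3. Risk score with a traceable explanation
        enriched["risk"] = (40 + 30 * enriched["privileged"]
                               + 30 * enriched["ti_match"])
        # 4. Pre-approved containment only above a threshold
        enriched["action"] = ("isolate-endpoint" if enriched["risk"] >= 70
                              else "monitor")
        results.append(enriched)
    return sorted(results, key=lambda a: -a["risk"])

alerts = [
    {"entity": "admin-kim", "type": "failed-login", "src_ip": "203.0.113.7"},
    {"entity": "admin-kim", "type": "failed-login", "src_ip": "203.0.113.7"},
    {"entity": "user-lee", "type": "failed-login", "src_ip": "198.51.100.2"},
]
for a in triage(alerts):
    print(a["entity"], a["risk"], a["action"])
```

Even at this toy scale the structure shows where human oversight plugs in: the containment threshold and the set of pre-approved actions are the guardrails analysts tune.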
For incident response orchestration, autonomous workflows detect anomalous activity from unified data sources, automatically gather context from all integrated tools, correlate events across timelines and affected entities, assess impact using graph relationships to understand blast radius, execute pre-approved containment actions like endpoint isolation or account disablement, implement remediation fixes based on best practices, and generate complete incident reports while continuously monitoring for persistence mechanisms or lateral movement. Human oversight focuses on reviewing high-risk actions, approving major changes, handling complex edge cases requiring creative problem-solving, and conducting strategic post-incident analysis to improve future responses.
Proactive threat hunting shifts from occasional exercises to continuous operations where AI agents autonomously search for indicators of compromise, generate hypotheses based on threat intelligence, create and execute advanced hunting queries, identify subtle patterns across massive datasets spanning months or years, surface hidden threats missed by signature-based detection, and correlate seemingly unrelated events across time and organizational boundaries. Security teams that previously conducted monthly or quarterly hunt operations now maintain persistent hunt missions running 24 hours per day without analyst fatigue or cognitive load limitations.
The vulnerability management workflow demonstrates end-to-end automation potential. Agents receive vulnerability scanner output, assess exploitability in the specific organizational environment considering compensating controls and network segmentation, prioritize based on actual risk rather than generic CVSS scores, determine optimal remediation approaches whether through patching, configuration changes, or compensating controls, generate deployment plans that account for change management and business continuity requirements, monitor remediation progress, and validate fixes through testing. Organizations using Microsoft Intune Vulnerability Remediation Agent reduce remediation timeframes from weeks to minutes for critical vulnerabilities.
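The "actual risk rather than generic CVSS scores" step can be sketched as a scoring function. The weights, fields, and placeholder CVE identifiers below are invented; the point is only that exposure, compensating controls, and asset criticality can invert a CVSS-based ranking.

```python
# Toy contextual risk scoring: rank findings by exploitability in context
# rather than raw CVSS. Weights and fields are invented for illustration.
def contextual_risk(v: dict) -> float:
    risk = v["cvss"]
    if v["internet_exposed"]:
        risk *= 1.5
    if v["compensating_control"]:      # e.g., network segmentation, WAF rule
        risk *= 0.4
    risk *= {"low": 0.5, "medium": 1.0, "critical": 2.0}[v["asset_tier"]]
    return round(risk, 1)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_exposed": False,
     "compensating_control": True,  "asset_tier": "low"},
    {"id": "CVE-B", "cvss": 7.5, "internet_exposed": True,
     "compensating_control": False, "asset_tier": "critical"},
]
for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f["id"], contextual_risk(f))
```

Here the lower-CVSS finding ranks first because it is internet-exposed, unmitigated, and sits on a critical asset, which is the prioritization behavior the workflow above describes.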
Perhaps most importantly, these capabilities enable human analysts to focus on high-value strategic work rather than repetitive tasks. Analysts become “SOC pilots” overseeing teams of AI agents, establishing automation guardrails, tuning detection logic, developing hunt hypotheses, improving security architecture, and solving novel problems requiring human creativity and intuition. This role evolution addresses job satisfaction concerns while simultaneously improving organizational security posture through better allocation of scarce human expertise.
European regulatory perspectives on AI security and compliance considerations
The European Union established the world’s first comprehensive AI regulatory framework through the AI Act (Regulation (EU) 2024/1689), which entered into force August 1, 2024, with full applicability required by August 2, 2026. The regulation implements a risk-based approach categorizing AI systems into four tiers: unacceptable risk systems that are banned entirely, high-risk systems requiring strict compliance, limited-risk systems with transparency obligations, and minimal or no-risk systems facing few requirements.
Article 15 of the EU AI Act establishes specific cybersecurity requirements for high-risk AI systems, mandating appropriate levels of accuracy, robustness, and cybersecurity throughout the AI system lifecycle. Technical cybersecurity solutions must prevent, detect, respond to, resolve, and control attacks including data poisoning where training data is manipulated, model poisoning targeting pre-trained components, model evasion using adversarial examples, and confidentiality attacks extracting sensitive information. High-risk AI systems must demonstrate resilience against unauthorized tampering, implement security measures appropriate to relevant circumstances and risks, maintain continuous risk assessment, apply security by design principles, and provide comprehensive technical documentation.
The penalty structure creates substantial incentives for compliance. Organizations face fines of up to €35 million or seven percent of global annual turnover, whichever is higher, for prohibited AI practices; up to €15 million or three percent for high-risk AI non-compliance; and up to €7.5 million or one percent for providing incorrect information to authorities. These penalties rival or exceed GDPR fines, signaling the EU’s serious commitment to AI governance and safety.
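Because each tier is the higher of a fixed cap and a turnover percentage, the turnover prong dominates for large firms. A quick calculation makes this concrete; the €2 billion turnover figure is hypothetical.

```python
# AI Act fines are the higher of a fixed cap and a turnover percentage.
# The turnover figure is hypothetical; caps/percentages follow the tiers above.
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    return max(fixed_cap, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2B global annual turnover
print(max_fine(turnover, 35_000_000, 0.07))   # prohibited-practices tier
print(max_fine(turnover, 15_000_000, 0.03))   # high-risk non-compliance tier
```

At this turnover, the prohibited-practices exposure is €140 million, four times the fixed cap, which is why compliance programs at large enterprises size the risk off turnover rather than the headline figure.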
GDPR integration with AI security creates overlapping compliance obligations where AI systems processing personal data must satisfy both frameworks simultaneously. The European Data Protection Board Opinion 28/2024 clarifies that GDPR establishes no priority among legal bases for AI data processing, requiring controllers to conduct thorough assessments when relying on legitimate interest justification. The three-step legitimate interest assessment requires identifying the legitimate interest, demonstrating processing necessity, and balancing organizational interests against data subject rights and freedoms. While AI-powered cybersecurity improvements and conversational assistance constitute valid legitimate interests, processing must be strictly necessary and balancing tests properly documented.
Data subject rights present particular challenges for AI systems. The right to explanation for automated decisions requires meaningful information about the logic involved, yet many AI systems operate as black boxes where decision-making processes prove difficult to articulate. The right to object to automated decision-making under Article 22 prohibits decisions based solely on automated processing with legal or similarly significant effects unless explicit consent, contractual necessity, or legal authorization applies. Organizations deploying AI security systems must provide high-level explanations enabling users to contest detrimental outcomes while protecting proprietary algorithms and security methodologies.
ENISA (European Union Agency for Cybersecurity) developed the Framework for AI Cybersecurity Practices providing practical implementation guidance through a three-layer approach. Layer I establishes cybersecurity foundations covering basic practices for ICT-hosted ecosystems based on confidentiality, integrity, and availability principles. Layer II addresses AI-specific cybersecurity challenges including the dynamic, socio-technical nature of AI systems, AI lifecycle considerations from requirements analysis through decommissioning, AI supply chain risks, asset identification and protection, and detailed threat taxonomy classification. Layer III provides sector-specific cybersecurity guidance tailored to healthcare, automotive, finance, and other domains with unique risk profiles.
ENISA’s AI Threat Landscape Report identifies critical vulnerabilities across the AI lifecycle and maps threat actors to specific stages. The standardization assessment reveals gaps in current AI cybersecurity standards, recommending enhanced EU cybersecurity certification schemes, software layer standards applicable to AI components, system-specific analysis requirements, and rigorous traceability mechanisms for data and testing procedures. Research priorities emphasize both AI for cybersecurity applications enabling enhanced threat detection and automated response, and securing AI systems against data poisoning, model manipulation, and adversarial attacks.
The interaction between multiple European regulations creates a complex compliance landscape where organizations must simultaneously address EU AI Act requirements, GDPR data protection obligations, NIS2 Directive network and information security mandates, Digital Operational Resilience Act requirements for financial sector operational resilience, and Cyber Resilience Act horizontal cybersecurity requirements for digital products. These frameworks complement rather than replace each other, requiring integrated compliance strategies rather than siloed approaches that address each regulation independently.
Microsoft’s European commitments and sovereign cloud strategy
Microsoft responded to European regulatory requirements and sovereignty concerns with comprehensive initiatives including dedicated governance structures, technical solutions, and policy commitments. In April 2025, Microsoft announced five core European Digital Commitments: building a broad AI and cloud ecosystem across Europe, upholding Europe’s digital resilience during geopolitical volatility, protecting privacy of European data, helping protect and defend Europe’s cybersecurity, and strengthening Europe’s economic competitiveness including through open source contributions.
The European Security Program launched June 2025 expanded Microsoft’s AI-based threat intelligence sharing to all 27 EU member states, EFTA members, the United Kingdom, Monaco, and Vatican City. Real-time threat intelligence tailored to national threat environments provides governments and critical infrastructure operators with actionable intelligence on nation-state cyber activity, with particular focus on Russian and Chinese threat actors, support for Ukraine and nations providing assistance, and monitoring of Iranian and North Korean espionage objectives. The Cybercrime Threat Intelligence Program enables coordinated law enforcement operations against transnational cybercrime networks, while the Microsoft Threat Analysis Center provides regular intelligence briefings on state-affiliated actors and AI-enhanced detection of deepfake synthetic media.
Cybersecurity capacity investments increased European datacenter capacity by 40 percent over two years with commitments to double capacity between 2023 and 2027. Designated European partners receive operational continuity arrangements ensuring service availability, while contingency plans address potential geopolitical scenarios that could disrupt cloud services. These infrastructure investments provide European customers with data residency options, reduced latency, and resilience against supply chain disruptions or political interference.
The Deputy CISO for Europe position established dedicated accountability for compliance with European regulations including the Digital Operational Resilience Act, NIS2 Directive, Cyber Resilience Act, and EU AI Act. This role reports directly to Microsoft’s Global CISO as part of the Cybersecurity Governance Council, ensuring European regulatory requirements receive executive-level attention and resource allocation. Microsoft views the Cyber Resilience Act as a “new gold standard for cybersecurity” with transformative impact comparable to GDPR’s influence on privacy practices, dedicating substantial engineering resources to compliance and participating in the European Commission Expert Group on Cybersecurity.
For EU AI Act compliance, Microsoft established cross-functional governance including working groups spanning AI governance, engineering, legal, and public policy functions. The company conducted thorough reviews of existing systems for prohibited practices through internal surveys distributed via central tooling, with expert review and follow-up for engineering teams. Microsoft’s Restricted Use Policy incorporates EU AI Act prohibited practices company-wide, preventing development, deployment, or marketing of banned AI systems, with contract updates preventing customers from improperly using Microsoft AI services. Active engagement with the EU AI Office, participation in Member State discussions, and contributions to the Code of Practice for general-purpose AI models demonstrate proactive compliance efforts. Microsoft publishes ongoing guidance on the Trust Center to help customers navigate their own compliance obligations.
Microsoft Sovereign Cloud provides technical solutions for data sovereignty, operational control, and regulatory compliance tailored to European requirements. The Sovereign Public Cloud operates across all European datacenter regions with data remaining in Europe under European law, operations and access controlled by European personnel, and customer-controlled encryption preventing Microsoft from accessing protected data without explicit authorization. The architecture requires no migration for existing workloads, enabling gradual adoption of sovereignty controls. Microsoft 365 Local deploys productivity services in private cloud environments within customer datacenters or sovereign cloud infrastructure, providing full customer control over security, compliance, and governance for highly regulated industries.
The Sovereign Private Cloud serves governments and critical infrastructure operators requiring the highest standards of data residency, operational autonomy, and disconnected access capability. Built on Azure Local architecture, this model enables air-gapped deployments where national security requirements mandate complete isolation from public internet connectivity. A partner ecosystem including 1,800+ AI models from providers like Hugging Face and Mistral ensures European organizations can leverage open-source and regional models rather than exclusively US-based providers. The Microsoft Sovereign Cloud specialization within the AI Cloud Partner Program enables national Partner Clouds supporting country-specific sovereignty requirements.
AI-driven security automation trends shaping the future of cybersecurity
The cybersecurity industry underwent explosive transformation in 2024-2025 as AI and machine learning tool usage grew nearly sixfold, from 521 million transactions in April 2023 to 3.1 billion monthly transactions by January 2024. Generative AI investment surged to $25.2 billion despite overall AI private investment declining, demonstrating market conviction that generative AI represents a fundamental technology shift rather than incremental improvement. The market expanded to 2,826 AI companies in cybersecurity worldwide, spanning major vendors like Splunk, Palo Alto Networks, Darktrace, CrowdStrike, and Fortinet, with the broader AI market predicted to exceed $3 trillion by 2034.
Security operations center transformation accelerated rapidly with 66 percent of organizations now using security AI and automation in SOCs, representing a 10 percent year-over-year increase. The business case proves compelling: organizations deploying extensive AI and automation in SOCs average $1.88 million lower breach costs than those without AI capabilities. Defensive AI demonstrates particular strength in cloud security, data security, and network security, with 71 percent of security stakeholders confident AI-powered solutions outperform traditional tools and 69 percent of enterprise executives believing AI is necessary to respond effectively to modern cyberattacks.
Agentic AI security emerged as the dominant trend for 2025, representing a paradigm shift from static tools requiring human prompts to autonomous agents that independently detect, investigate, and respond to threats. Multi-agent systems—sometimes called agent swarms—enable teams of specialized AI agents to collaborate on complex security tasks, with each agent handling specific domains like network security, endpoint protection, identity security, or cloud security. Coordination through central orchestrators prevents conflicts while enabling sophisticated workflows that previously required extensive manual security operations platform integrations. Microsoft Sentinel positions itself as the unifying platform for these agents through the MCP server, but major competitors including CrowdStrike Charlotte AI, Palo Alto Networks XSIAM, and numerous startups pursue similar autonomous SOC visions.
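The orchestration pattern described above can be sketched in a few dozen lines. The following is a hypothetical, simplified illustration (the agent names, event fields, and severity scale are invented for the example, not drawn from any vendor's API): domain agents each propose a response, and a central orchestrator resolves conflicts by keeping only the highest-severity action per asset:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    asset: str
    action: str      # proposed response, e.g. "isolate" or "monitor"
    severity: int    # 1 (low) .. 5 (critical)

class Orchestrator:
    """Central coordinator for domain-specialized agents (illustrative).

    Each registered agent inspects an event from its own domain's
    perspective; the orchestrator collects proposals and keeps only the
    highest-severity action per asset, preventing conflicting responses.
    """
    def __init__(self):
        self.agents = {}

    def register(self, name, handler):
        self.agents[name] = handler

    def dispatch(self, event):
        proposals = [f for h in self.agents.values() if (f := h(event))]
        # Conflict resolution: one winning action per asset
        decisions = {}
        for p in proposals:
            current = decisions.get(p.asset)
            if current is None or p.severity > current.severity:
                decisions[p.asset] = p
        return list(decisions.values())

# Two toy domain agents that disagree about the same host
def endpoint_agent(event):
    if event.get("process") == "mimikatz.exe":
        return Finding("endpoint", event["host"], "isolate", 5)

def network_agent(event):
    if event.get("bytes_out", 0) > 10_000:
        return Finding("network", event["host"], "monitor", 2)

orch = Orchestrator()
orch.register("endpoint", endpoint_agent)
orch.register("network", network_agent)
result = orch.dispatch({"host": "srv01", "process": "mimikatz.exe", "bytes_out": 50_000})
print(result[0].action)  # → isolate (the higher-severity proposal wins)
```

Production orchestrators add far more, such as approval workflows and rollback, but the core design choice is the same: agents propose, a single coordinator decides, so no two agents act on the same asset in contradictory ways.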
The shift from detection to prevention represents a strategic evolution enabled by AI’s pattern recognition at scale. Rather than primarily responding to attacks after they occur, AI agents enable predictive security posture management that identifies and remediates vulnerabilities before exploitation. Proactive threat hunting based on behavioral baselines and anomaly detection surfaces threats during reconnaissance and initial access stages rather than after data exfiltration or ransomware deployment. Anticipatory defenses adjust security controls based on emerging threat intelligence and attack pattern predictions. This prevention-first approach proves substantially more cost-effective than incident response, with studies consistently showing a 10X cost difference between preventing breaches versus responding to them.
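Behavioral baselining of the kind described above can be illustrated with a rolling statistical window. This is a deliberately minimal sketch under invented assumptions (the metric, window size, and threshold are examples, not recommendations): an entity's recent observations form the baseline, and a new value far above the learned mean is surfaced for hunting before it matures into exfiltration:

```python
import statistics
from collections import deque

class BehaviorBaseline:
    """Rolling per-entity baseline for proactive anomaly surfacing.

    Keeps a sliding window of an entity's metric (e.g. hourly outbound
    bytes) and flags new observations more than `k` standard deviations
    above the learned mean.
    """
    def __init__(self, window=24, k=3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 5:  # require a minimal baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = value > mean + self.k * stdev
        self.history.append(value)
        return anomalous

baseline = BehaviorBaseline()
normal_hours = [100, 110, 95, 105, 102, 98, 101]
flags = [baseline.observe(v) for v in normal_hours]   # all False
spike = baseline.observe(900)  # exfiltration-scale outlier
print(spike)  # → True
```

Real deployments learn seasonality and use richer models, but even this simple form shows why baselining surfaces reconnaissance-stage activity: the anomaly is defined relative to the entity's own history, not a global signature.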
Data security for generative AI drives significant spending increases as organizations recognize that most security historically focused on structured databases while generative AI requires protecting unstructured data comprising 80 to 90 percent of organizational information assets. Gartner predicts through 2025 a 15 percent or greater increase in application and data security spending specifically for generative AI protection. The challenge intensifies because 80 percent of data experts agree AI makes data security more challenging, as models trained on or accessing sensitive data can inadvertently expose that information through prompt injection, model inversion, or simple oversharing when users lack appropriate context about information sensitivity.
Machine identity management emerged as critical with the proliferation of generative AI, cloud automation, and DevOps practices dramatically increasing machine accounts and credentials. Unmanaged machine identities significantly expand attack surfaces, yet IAM teams report responsibility for only 44 percent of their organization’s machine identities according to 2024 surveys. The gap creates substantial risk as adversaries increasingly target service accounts, API keys, and automation credentials that often possess elevated privileges without the monitoring applied to human accounts. Organizations need coordinated enterprise-wide machine identity and access management strategies integrated with privileged access management platforms.
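An inventory-and-audit pass is typically the first step of the machine identity strategy described above. The sketch below is purely illustrative (the record fields are invented, not any IAM product's schema; real deployments would pull records from a PAM or secrets platform rather than a static list), flagging identities that are unowned or overdue for credential rotation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical machine-identity records with illustrative fields
identities = [
    {"name": "svc-backup", "type": "service_account",
     "last_rotated": datetime(2023, 1, 10, tzinfo=timezone.utc), "owner": None},
    {"name": "ci-deploy-key", "type": "api_key",
     "last_rotated": datetime(2025, 8, 1, tzinfo=timezone.utc), "owner": "platform-team"},
]

def audit_machine_identities(records, now, max_age_days=90):
    """Flag machine identities that are unowned or overdue for rotation.

    A minimal sketch of the inventory-and-audit step in a machine IAM
    program; both checks target the gaps adversaries exploit in
    service accounts and automation credentials.
    """
    findings = []
    for r in records:
        issues = []
        if r["owner"] is None:
            issues.append("no accountable owner")
        if now - r["last_rotated"] > timedelta(days=max_age_days):
            issues.append("credential rotation overdue")
        if issues:
            findings.append((r["name"], issues))
    return findings

now = datetime(2025, 10, 1, tzinfo=timezone.utc)
for name, issues in audit_machine_identities(identities, now):
    print(name, "->", "; ".join(issues))
```

Running this flags only `svc-backup`, which has no owner and a credential last rotated in 2023, while the recently rotated, owned `ci-deploy-key` passes.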
AI-powered attacks evolved from theoretical concerns to operational reality, with important distinctions between AI-assisted and AI-powered threats. AI-assisted attacks represent the current state: malware variants automatically generated, more convincing phishing emails leveraging language models, and reconnaissance automation using AI to profile targets. AI-powered attacks remain largely emerging but include deepfake scams impersonating executives for business email compromise, automated exploit generation discovering and weaponizing zero-day vulnerabilities through AI-based fuzzing, and adaptive malware that modifies behavior based on victim environment analysis. Security leaders anticipate these capabilities maturing rapidly, with 93 percent expecting daily AI-powered attacks by late 2025.
The industry simultaneously grapples with shadow AI where employees use unsanctioned AI models without proper governance, creating data exposure risks and compliance gaps. Post-quantum cryptography preparations accelerated following NIST’s release of initial post-quantum cryptography standards, with crypto agility becoming essential as organizations must rapidly adapt cryptographic mechanisms when quantum computers threaten current encryption schemes. Zero Trust architecture advancement continues as perimeter-based security proves inadequate for cloud-centric environments, with micro-segmentation, continuous user context checks, and session monitoring becoming standard practices preventing lateral movement during breaches.
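The crypto agility mentioned above is, at its core, an indirection layer: callers reach algorithms through a policy-driven registry, so a post-quantum migration becomes a configuration change rather than a codebase-wide rewrite. The sketch below illustrates the pattern with hash functions as stand-ins (the registry keys and policy names are invented for the example; an actual migration would target signature and key-exchange schemes from the NIST post-quantum standards):

```python
import hashlib

# Crypto-agility sketch: code names algorithms indirectly through a
# policy, so swapping in a new scheme is one registry/policy change.
REGISTRY = {
    "hash.current": hashlib.sha256,
    "hash.next": hashlib.sha3_256,  # placeholder for a future migration target
}
POLICY = {"default_hash": "hash.current"}

def digest(data: bytes) -> str:
    """All call sites resolve the algorithm via POLICY, never directly."""
    algo = REGISTRY[POLICY["default_hash"]]
    return algo(data).hexdigest()

before = digest(b"payload")
POLICY["default_hash"] = "hash.next"   # the one-line "migration"
after = digest(b"payload")
print(before != after)  # → True: same call sites, new algorithm
```

The design cost is paid up front; the payoff comes when a deployed algorithm is weakened and every dependent service can be moved without touching application code.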
The promise and complexity of machine-speed security operations
The transformation described in Microsoft’s September 30, 2025 announcement represents far more than incremental SIEM improvement or feature addition to existing security tools. The convergence of the Sentinel data lake providing unified security context, Sentinel graph enabling relationship-based reasoning, and the MCP server standardizing AI agent access creates the technical foundation for fundamentally reimagining security operations at machine speed and scale. Organizations that successfully adopt these capabilities position themselves to defend against increasingly sophisticated adversaries leveraging similar AI technologies for offensive operations.
The significance extends beyond efficiency gains measured in hours saved or costs reduced. Agentic AI security enables qualitative transformation in what becomes possible for security operations. Junior analysts supervising AI agents achieve outcomes previously requiring senior expertise, addressing critical talent shortages without compromising security effectiveness. Security teams shift from reactive firefighting to proactive hunt operations and strategic architecture improvements. Comprehensive alert coverage becomes achievable regardless of SOC team size or analyst workload. Most critically, response times compress from hours to minutes during active intrusions, potentially preventing catastrophic breaches through rapid containment before lateral movement and data exfiltration occur.
Yet the transformation brings substantial challenges requiring thoughtful navigation. Organizations deploying AI security capabilities must establish robust governance frameworks ensuring agents operate within appropriate boundaries, maintain human oversight for high-risk decisions, implement comprehensive audit logging of agent actions, and develop response procedures when agents make errors or face adversarial manipulation. The skills required for security operations evolve from manual investigation and tool operation toward AI supervision, prompt engineering, agent development, and strategic security architecture—requiring significant training investments and cultural adaptation. Trust building proves essential as security teams must develop confidence in AI recommendations through gradual rollout, transparent decision-making, and demonstrated reliability before delegating critical security functions to autonomous agents.
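The governance requirements above (boundaries, human oversight for high-risk decisions, audit logging) translate directly into a guardrail wrapper around agent actions. The sketch below is a hypothetical illustration; the action lists and policy split are invented for the example and would in practice be organization-specific configuration:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Illustrative policy: low-risk actions run autonomously, high-risk
# actions require a named human approver.
AUTONOMOUS_ACTIONS = {"enrich_alert", "tag_incident", "query_logs"}
APPROVAL_REQUIRED = {"isolate_host", "disable_account", "block_ip"}

def execute_agent_action(agent_id, action, target, approved_by=None):
    """Guardrail wrapper: audit every agent action and gate high-risk ones.

    Returns True if the action may proceed. Every decision, allowed or
    blocked, is written to the audit log for later review.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "action": action,
        "target": target, "approved_by": approved_by,
    }
    if action in AUTONOMOUS_ACTIONS:
        record["decision"] = "allowed"
    elif action in APPROVAL_REQUIRED and approved_by:
        record["decision"] = "allowed_with_approval"
    else:
        record["decision"] = "blocked_pending_approval"
    audit_log.info(json.dumps(record))
    return record["decision"] != "blocked_pending_approval"

assert execute_agent_action("triage-01", "enrich_alert", "INC-4211")
assert not execute_agent_action("triage-01", "isolate_host", "srv01")
assert execute_agent_action("triage-01", "isolate_host", "srv01",
                            approved_by="analyst.jane")
```

Two properties matter in this pattern: the deny-by-default branch (anything not explicitly autonomous and unapproved is blocked) and the fact that logging happens on every path, so error analysis and adversarial-manipulation review have a complete trail to work from.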
The European regulatory landscape adds complexity for organizations operating in or serving European markets. Compliance with the EU AI Act’s Article 15 cybersecurity requirements, GDPR’s data protection obligations, and sector-specific regulations like DORA and NIS2 requires integrated governance strategies spanning technical security, legal compliance, and organizational policy. Microsoft’s proactive compliance efforts including the Deputy CISO for Europe, Sovereign Cloud offerings, and EU AI Act implementation provide blueprints other vendors will likely follow as European regulations influence global AI security standards. Organizations should anticipate European requirements becoming de facto global standards given the market’s size and regulatory sophistication.
The competitive landscape will continue rapid evolution as every major security vendor pursues autonomous security capabilities. Microsoft’s integration advantages from controlling identity through Entra ID, endpoints through Defender, cloud infrastructure through Azure, and collaboration through Microsoft 365 create comprehensive visibility and control that multi-vendor environments struggle to match. However, the MCP standard’s open nature enables interoperability that could allow best-of-breed approaches if vendors embrace standards-based integration rather than proprietary ecosystems. Organizations should evaluate whether unified platforms or integrated ecosystems better serve their specific requirements, existing investments, and compliance obligations.
Looking forward, the industry will likely see autonomous security agents handling 30 percent or more of tedious and repetitive security tasks within 12 to 18 months based on current development trajectories. Security analysts will increasingly operate as supervisors managing teams of AI agents rather than directly investigating every alert. Traditional manual SOC operations will become obsolete for organizations with sufficient resources to deploy AI capabilities, while those unable to adopt face a growing disadvantage against adversaries leveraging AI for attacks. The fundamental question facing security leaders is not whether to adopt AI-driven security but how quickly they can integrate these capabilities while maintaining proper oversight, developing the necessary skills, and establishing governance frameworks that keep agents operating safely and effectively within organizational risk tolerance.
The Microsoft Sentinel MCP announcement of September 30, 2025 will likely be recognized as a pivotal moment when enterprise security operations fundamentally transformed from human-led manual processes to AI-augmented machine-speed defense. Organizations that navigate this transformation successfully while addressing governance, skills, and compliance challenges will achieve substantial competitive advantages protecting against increasingly sophisticated threats that similarly leverage AI capabilities for offensive operations.
🚀 Ready to Master Microsoft 365 and Microsoft Copilot?
Join us at the European Collaboration Summit to dive deeper into cutting-edge technologies and transform your organization’s approach to modern work.
Join 3,000+ Microsoft 365, Copilot, SharePoint, Viva, and Teams practitioners, technology leaders, and innovators from across Europe at the premier event where the future of modern work is shaped.