Generated on: November 20, 2025
Target period: Within the last 24 hours
Processing mode: Details Mode
Number of updates: 10 items
Published: November 19, 2025 21:00:38 UTC Link: Generally Available: Azure Managed Lustre Support for Azure MCP Server
Update ID: 529381 Data source: Azure Updates API
Categories: Launched, Storage, Azure Managed Lustre
Summary:
What was updated
Azure Managed Lustre is now fully integrated with Azure MCP Server, and this integration is generally available.
Key changes or new features
Developers and IT professionals can now manage Azure Managed Lustre file systems directly through Azure MCP Server. This integration allows unified management of Azure resources using native Azure tools and APIs, streamlining deployment, configuration, and monitoring of Lustre file systems within Azure environments.
Target audience affected
Developers, infrastructure engineers, and IT operations teams who use Azure Managed Lustre for high-performance file storage and want centralized management via Azure MCP Server.
Important notes if any
This general availability release ensures production-ready support for managing Lustre file systems alongside other Azure resources, improving operational efficiency and automation capabilities. Users should leverage Azure MCP Server APIs for consistent resource management workflows.
Details:
The recent general availability of Azure Managed Lustre support for Azure MCP Server marks a significant enhancement in managing high-performance file systems within Azure’s cloud infrastructure. Azure Managed Lustre is a fully managed, high-throughput, low-latency file system optimized for HPC (High Performance Computing) and data-intensive workloads. Azure MCP Server implements the Model Context Protocol (MCP) for Azure, exposing Azure resource operations as tools so that infrastructure teams, developers, and AI agents can automate and streamline resource management.
Background and Purpose
Prior to this update, managing Azure Managed Lustre file systems required using separate management interfaces or APIs, which could complicate automation and integration within broader cloud management workflows. The purpose of integrating Azure Managed Lustre with Azure MCP Server is to centralize and simplify resource management, enabling users to provision, monitor, and control Lustre file systems alongside other Azure resources through a consistent management plane. This integration supports operational efficiency, reduces management overhead, and enhances automation capabilities for HPC and data analytics environments.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Azure MCP Server acts as a centralized management layer that communicates with Azure Resource Manager (ARM) APIs under the hood. The integration exposes Azure Managed Lustre resources through MCP Server’s management API endpoints, translating user commands into ARM operations. This abstraction allows users to manage Lustre file systems without directly interacting with ARM or Lustre-specific APIs. The implementation leverages Azure’s role-based access control (RBAC) to ensure secure and granular permissions when managing Lustre resources. Additionally, MCP Server supports scripting and automation via PowerShell and REST APIs, enabling integration into CI/CD pipelines and operational scripts.
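The translation from a user command into an ARM operation can be illustrated by the URL such a call would target. A minimal sketch, assuming the `Microsoft.StorageCache/amlFilesystems` resource type and the `2023-05-01` API version (both are assumptions for illustration; check the ARM reference for the current values):

```python
# Sketch: translating a "list Lustre file systems" command into an ARM REST call.
# Resource provider path and api-version are assumptions for illustration only.
from urllib.parse import urlencode

ARM_ENDPOINT = "https://management.azure.com"

def amlfs_list_url(subscription_id: str, api_version: str = "2023-05-01") -> str:
    """Build the ARM URL that lists Azure Managed Lustre file systems in a subscription."""
    path = (f"/subscriptions/{subscription_id}"
            f"/providers/Microsoft.StorageCache/amlFilesystems")
    return f"{ARM_ENDPOINT}{path}?{urlencode({'api-version': api_version})}"

url = amlfs_list_url("00000000-0000-0000-0000-000000000000")
print(url)
```

An authenticated GET against this URL (with an Azure AD bearer token, scoped by RBAC as described above) would return the file systems visible to the caller.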
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Published: November 19, 2025 21:00:38 UTC Link: Generally Available: CSI Dynamic Provisioning for Azure Managed Lustre
Update ID: 529368 Data source: Azure Updates API
Categories: Launched, Storage, Azure Managed Lustre, Features, Management, Open Source, Services, Microsoft Ignite
Summary:
What was updated
Azure Managed Lustre (AMLFS) now supports Container Storage Interface (CSI) Dynamic Provisioning, and this feature is generally available.
Key changes or new features
Developers can now dynamically provision AMLFS-backed persistent volumes directly within Kubernetes clusters using standard StorageClass and PersistentVolumeClaim objects. This eliminates the need for pre-provisioning storage volumes manually, streamlining storage management for high-performance Lustre file systems in containerized environments.
Target audience affected
Kubernetes developers and IT professionals managing containerized applications requiring high-throughput, scalable shared storage on Azure. DevOps engineers and cloud architects integrating Lustre storage with Kubernetes workloads will benefit from simplified storage provisioning.
Important notes if any
Dynamic provisioning requires configuring appropriate StorageClass definitions referencing the AMLFS CSI driver. This update enhances automation and scalability for HPC and data-intensive workloads on Azure Kubernetes Service (AKS) or other Kubernetes platforms using AMLFS. Users should review the updated CSI driver documentation to implement dynamic provisioning correctly.
Details:
The recent general availability of CSI Dynamic Provisioning for Azure Managed Lustre (AMLFS) introduces a significant enhancement for Kubernetes workloads requiring high-performance, scalable file storage. Previously, provisioning Azure Managed Lustre file systems for Kubernetes involved manual steps to create and manage persistent volumes (PVs), which limited agility and automation. This update enables Kubernetes clusters to dynamically provision AMLFS-backed persistent volumes on demand using standard Kubernetes StorageClass and PersistentVolumeClaim (PVC) objects, streamlining storage management and improving developer productivity.
Background and Purpose
Azure Managed Lustre is a fully managed, high-performance parallel file system optimized for compute-intensive workloads such as HPC, AI/ML, and big data analytics. Kubernetes users deploying such workloads often require Lustre’s low-latency, high-throughput shared storage. Prior to this update, integrating AMLFS with Kubernetes involved static provisioning, where administrators had to manually create Lustre file systems and PVs before pods could consume them. This manual approach hindered DevOps automation and dynamic scaling. The introduction of CSI (Container Storage Interface) dynamic provisioning addresses these challenges by automating volume lifecycle management, aligning Lustre storage consumption with Kubernetes-native workflows.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
The Azure Managed Lustre CSI driver acts as a Kubernetes external provisioner that interfaces with Azure Resource Manager (ARM) APIs to create and manage AMLFS instances. When a PVC referencing an AMLFS-backed StorageClass is submitted, the driver provisions a matching file system and binds the resulting PersistentVolume to the claim.
This integration leverages Kubernetes CSI specifications for volume lifecycle operations and Azure’s REST APIs for resource provisioning, ensuring a cloud-native and declarative approach.
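The Kubernetes objects involved can be sketched as plain dicts. The driver name and the `parameters` keys below are assumptions for illustration; consult the AMLFS CSI driver documentation for the exact StorageClass schema:

```python
# Sketch: StorageClass + PVC for dynamic AMLFS provisioning, as plain dicts.
# Driver name and parameters are assumptions; check the AMLFS CSI driver docs.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "amlfs-dynamic"},
    "provisioner": "azurelustre.csi.azure.com",   # assumed driver name
    "parameters": {
        "sku-name": "AMLFS-Durable-Premium-40",   # hypothetical SKU parameter
        "zones": "1",
    },
    "reclaimPolicy": "Delete",
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "training-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],          # Lustre is a shared file system
        "storageClassName": storage_class["metadata"]["name"],
        "resources": {"requests": {"storage": "48Ti"}},
    },
}

# The external provisioner watches for unbound claims that reference the class
# and calls ARM to create a matching AMLFS instance.
print(pvc["spec"]["storageClassName"])
```

Applied as YAML via `kubectl`, the claim triggers provisioning with no pre-created PV, which is exactly the manual step this update removes.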
Use Cases and Application Scenarios
Important Considerations and Limitations
Published: November 19, 2025 21:00:38 UTC Link: Public Preview: 20 MB/s/TiB Performance Tier for Azure Managed Lustre
Update ID: 529359 Data source: Azure Updates API
Categories: In preview, Storage, Azure Managed Lustre
Summary:
What was updated
Azure Managed Lustre introduced a new 20 MB/s/TiB performance tier, now available in public preview.
Key changes or new features
This new tier significantly enhances throughput, allowing up to 20 MB/s per TiB of provisioned storage. It supports file systems up to 25 PiB in size, catering to very large-scale storage needs. The update is optimized for demanding AI and HPC workloads that require high throughput and massive parallelism.
Target audience affected
Developers and IT professionals working with large-scale AI, machine learning, and high-performance computing applications will benefit most. Organizations needing scalable, high-throughput shared file systems for data-intensive workloads will find this tier especially useful.
Important notes if any
As this feature is in public preview, users should evaluate it in non-production environments and provide feedback. Pricing and SLA details may differ from generally available tiers. Integration with existing Azure Managed Lustre deployments is seamless, but performance tuning may be required to maximize benefits.
For more details, visit: https://azure.microsoft.com/updates?id=529359
Details:
Azure Managed Lustre has introduced a new 20 MB/s/TiB performance tier, now available in public preview, aimed at addressing the performance requirements of large-scale AI and high-performance computing (HPC) workloads. This update significantly enhances throughput capacity by allowing customers to provision file systems up to 25 PiB in size, delivering a scalable and high-performance parallel file system optimized for demanding data-intensive applications.
Background and Purpose
Azure Managed Lustre is a fully managed, high-performance file system based on the open-source Lustre file system, widely used in HPC environments for its ability to provide low-latency, high-throughput access to shared data. As AI and HPC workloads grow in scale and complexity, the need for higher throughput per unit of storage has become critical. The introduction of the 20 MB/s/TiB tier addresses this by increasing the data throughput capability, enabling faster data processing and reduced job runtimes for large datasets.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
The increased throughput is achieved by enhancing the underlying infrastructure and tuning Lustre file system parameters to make better use of Azure’s high-speed networking and storage hardware.
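Because the tier is expressed per TiB, aggregate throughput scales linearly with provisioned capacity. A worked example using the figures from the announcement (20 MB/s per TiB, up to 25 PiB):

```python
# Worked example: aggregate throughput scales linearly with provisioned capacity
# at 20 MB/s per TiB (figures from the announcement; preview limits may change).

def aggregate_throughput_mbps(capacity_tib: float, per_tib_mbps: float = 20.0) -> float:
    """Throughput in MB/s for a file system of the given size in TiB."""
    return capacity_tib * per_tib_mbps

# A 500 TiB file system:
print(aggregate_throughput_mbps(500))        # 10000.0 MB/s (~10 GB/s)

# The maximum 25 PiB file system (25 * 1024 TiB):
print(aggregate_throughput_mbps(25 * 1024))  # 512000.0 MB/s
```

At the 25 PiB ceiling this works out to roughly half a terabyte per second of aggregate bandwidth, which is the scale the tier targets for AI and HPC workloads.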
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, the new 20 MB/s/TiB performance tier for Azure Managed Lustre in public preview offers a substantial throughput increase and enhanced scalability, designed for large-scale AI and HPC workloads that demand high aggregate bandwidth.
Published: November 19, 2025 20:00:49 UTC Link: Public Preview: Auto-import for Azure Managed Lustre
Update ID: 529342 Data source: Azure Updates API
Categories: In preview, Storage, Azure Managed Lustre, Features, Microsoft Ignite, Services
Summary:
What was updated
Azure Managed Lustre File System (AMLFS) now supports the Auto-import feature, released in public preview.
Key changes or new features
Auto-import enables automatic, policy-driven synchronization of data from Azure Blob Storage containers into AMLFS clusters. This feature continuously monitors Blob Storage for new or modified files and imports them into the Lustre file system without manual intervention, ensuring data consistency and reducing operational overhead. It simplifies workflows that rely on up-to-date data in high-performance Lustre environments by automating data refresh processes.
Target audience affected
Developers and IT professionals managing high-performance computing (HPC) workloads, data engineers, and architects using Azure Managed Lustre for scalable, high-throughput file storage integrated with Blob Storage.
Important notes if any
As a public preview feature, Auto-import should be used with caution in production environments. Users should review the preview limitations and provide feedback to Microsoft. Integration requires proper configuration of synchronization policies and permissions between AMLFS and Blob Storage. This feature enhances data orchestration capabilities in Azure HPC scenarios.
For more details, visit: https://azure.microsoft.com/updates?id=529342
Details:
The Azure update announces the public preview release of the Auto-import feature for Azure Managed Lustre File System (AMLFS), designed to enhance data synchronization workflows between Azure Blob Storage and AMLFS clusters. This feature addresses the need for automated, policy-driven data consistency in high-performance computing (HPC) and big data analytics environments that leverage Lustre’s parallel file system capabilities alongside the scalability of Azure Blob Storage.
Background and Purpose:
Azure Managed Lustre is a fully managed, high-throughput, low-latency file system optimized for HPC and large-scale data processing workloads. Traditionally, users manually synchronize data between Azure Blob Storage and AMLFS, which can be error-prone and operationally intensive. The Auto-import feature was introduced to automate this synchronization, ensuring that AMLFS clusters automatically reflect changes—such as additions, modifications, or deletions—in the underlying Blob Storage containers. This reduces manual intervention, improves data freshness, and streamlines workflows.
Specific Features and Detailed Changes:
Technical Mechanisms and Implementation Methods:
Auto-import leverages event-driven architecture and Azure Blob Storage change feed or event grid notifications to detect changes in the source containers. Upon detecting a change, the AMLFS control plane initiates incremental synchronization operations that import or update files in the Lustre file system namespace. The synchronization respects Lustre’s metadata and file system semantics, ensuring consistency and coherence. The feature is implemented as a background service integrated with the AMLFS management layer, orchestrating data movement and metadata updates without impacting cluster performance.
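The synchronization loop described above can be reduced to a small sketch: consume change events from the source container and apply them to the file-system namespace. The event shape below is modeled loosely on Blob Storage change-feed records; the field names are illustrative, not the actual schema:

```python
# Sketch: applying Blob change events to a Lustre namespace view.
# Event field names are illustrative, not the real change-feed schema.

def apply_events(namespace: dict, events: list) -> dict:
    """Apply create/update/delete events from a blob container to a file-system view."""
    for event in events:
        blob, kind = event["blob"], event["type"]
        if kind in ("BlobCreated", "BlobUpdated"):
            namespace[blob] = event["etag"]       # import or refresh the file
        elif kind == "BlobDeleted":
            namespace.pop(blob, None)             # mirror the deletion
    return namespace

fs = apply_events({}, [
    {"type": "BlobCreated", "blob": "raw/run1.dat", "etag": "0x1"},
    {"type": "BlobUpdated", "blob": "raw/run1.dat", "etag": "0x2"},
    {"type": "BlobCreated", "blob": "raw/run2.dat", "etag": "0x1"},
    {"type": "BlobDeleted", "blob": "raw/run2.dat"},
])
print(fs)   # {'raw/run1.dat': '0x2'}
```

The real service performs this incrementally in the AMLFS control plane, respecting Lustre metadata semantics rather than a simple key/value view, but the event-driven shape is the same.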
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
In summary, Auto-import brings automated, policy-driven synchronization from Azure Blob Storage into AMLFS clusters, now available in public preview.
Published: November 19, 2025 17:45:05 UTC Link: Public Preview: Recommended alerts for Azure Monitor Workspace
Update ID: 515505 Data source: Azure Updates API
Categories: In preview, Compute, Containers, DevOps, Management and governance, Azure Kubernetes Service (AKS), Azure Monitor, Features
Summary:
What was updated
Azure Monitor introduced a public preview feature that allows one-click enablement of recommended alerts within the Azure Portal, specifically for Azure Monitor Managed Service for Prometheus customers.
Key changes or new features
The update provides pre-configured alerts focused on monitoring ingestion limits for Azure Monitor Workspaces. These recommended alerts help users proactively track and manage data ingestion to prevent hitting workspace limits, ensuring uninterrupted monitoring and metric collection. The alerts can be enabled quickly without manual configuration, streamlining alert setup for Prometheus users.
Target audience affected
This update primarily targets developers and IT professionals using Azure Monitor Managed Service for Prometheus who need to monitor their workspace ingestion metrics effectively. It also benefits monitoring and operations teams responsible for maintaining Azure Monitor health and performance.
Important notes if any
The feature is currently in public preview, so users should evaluate it in non-production environments before full adoption. Enabling these alerts helps avoid data loss or monitoring gaps caused by ingestion limit breaches. Users should monitor Azure updates for GA announcements and additional alert recommendations.
Link: https://azure.microsoft.com/updates?id=515505
Details:
The recent Azure update introduces a Public Preview feature enabling one-click activation of recommended alerts within the Azure Portal specifically for Azure Monitor Managed Service for Prometheus users. This enhancement primarily targets improved observability and proactive management of Azure Monitor Workspace ingestion limits, thereby helping IT professionals prevent metric ingestion bottlenecks and ensure uninterrupted monitoring workflows.
Background and Purpose of the Update
Azure Monitor Managed Service for Prometheus allows customers to ingest Prometheus metrics into Azure Monitor Workspaces, facilitating unified monitoring across cloud-native and traditional workloads. However, ingestion limits on Azure Monitor Workspaces can lead to dropped metrics or throttling if not properly managed. Prior to this update, configuring alerts to track ingestion thresholds required manual setup and expertise, potentially leading to delayed detection of ingestion issues. The update aims to simplify this process by providing pre-configured, recommended alerts that can be enabled with a single click, enhancing operational efficiency and reducing the risk of missing critical metric data.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Under the hood, these recommended alerts leverage Azure Monitor’s alerting framework, which supports metric alerts based on workspace-level telemetry. The system monitors ingestion-related metrics exposed by the Azure Monitor Workspace resource provider, such as ingestion volume and throttling events. When enabled, the alert rules continuously evaluate these metrics against predefined thresholds. Upon breach, alerts trigger notifications or automated actions as configured by the user. The one-click enablement feature is implemented as a portal-integrated workflow that programmatically creates these alert rules with appropriate scopes and conditions, streamlining deployment.
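The evaluation an alert rule performs can be shown in miniature: compare an ingestion metric against a fraction of the workspace limit. The 80% threshold here is illustrative; the actual thresholds are set by the portal workflow that creates the rules:

```python
# Sketch: the core evaluation behind an ingestion-limit alert.
# The 80% warning threshold is illustrative, not the product default.

def evaluate_ingestion(samples_per_min: int, limit_per_min: int,
                       threshold: float = 0.80) -> bool:
    """Fire when ingestion volume crosses the configured fraction of the limit."""
    return samples_per_min >= limit_per_min * threshold

print(evaluate_ingestion(850_000, 1_000_000))   # True  -> alert fires
print(evaluate_ingestion(300_000, 1_000_000))   # False -> quiet
```

Firing below the hard limit rather than at it is the point of these rules: it leaves time to request a limit increase or reduce metric cardinality before ingestion is throttled.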
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, this update streamlines the configuration of critical ingestion limit alerts for Azure Monitor Workspaces used with the Managed Service for Prometheus, reducing the risk of undetected throttling or metric loss.
Published: November 19, 2025 17:30:18 UTC Link: Public Preview: Azure Managed Redis integration with Microsoft Foundry
Update ID: 532188 Data source: Azure Updates API
Categories: In preview, AI + machine learning, Microsoft Foundry
Summary:
What was updated
Azure Managed Redis is now integrated into the Microsoft Foundry MCP tools catalog and is available in public preview.
Key changes or new features
This integration enables developers to easily connect Azure Managed Redis as a knowledge store or memory store for AI agents built on the Foundry platform. It simplifies the process of leveraging Redis for fast, scalable caching and state management within AI workloads, enhancing agent performance and responsiveness.
Target audience affected
Developers building AI agents using Microsoft Foundry, IT professionals managing AI infrastructure, and architects designing scalable AI solutions that require efficient memory or knowledge storage.
Important notes if any
The feature is currently in public preview, so users should evaluate it in non-production environments and provide feedback. Integration streamlines Redis connectivity but requires familiarity with both Azure Managed Redis and Microsoft Foundry MCP tools. Monitoring and scaling Redis instances remain critical for optimal AI agent performance.
For more details, visit: https://azure.microsoft.com/updates?id=532188
Details:
The recent public preview announcement of Azure Managed Redis integration with Microsoft Foundry introduces a significant enhancement for developers building AI agents within the Foundry environment by enabling seamless use of Redis as a high-performance knowledge or memory store. This update reflects Microsoft’s strategic effort to streamline state management and data caching for AI workloads, leveraging Redis’s low-latency, in-memory data capabilities directly within the Foundry MCP (Model Context Protocol) tools catalog.
Background and Purpose:
Microsoft Foundry is a platform designed to accelerate the development and deployment of AI agents by providing a modular, scalable environment with integrated tools and services. AI agents often require fast, reliable access to transient or persistent state information—such as session memory, knowledge bases, or contextual data—to operate effectively. Azure Managed Redis, a fully managed, scalable, and secure Redis service, is widely recognized for its sub-millisecond latency and support for complex data structures. Integrating Managed Redis into Foundry addresses the need for a native, performant memory store that can be easily provisioned and managed alongside AI agent components, reducing architectural complexity and operational overhead.
Specific Features and Changes:
Technical Mechanisms and Implementation:
Under the hood, this integration leverages Azure Managed Redis’s REST APIs and SDKs, combined with Foundry’s orchestration and deployment pipelines. When a developer selects Redis from the Foundry catalog, the platform automates the creation of a Redis instance within the user’s Azure subscription, configures network security groups or private endpoints for secure access, and injects connection strings and credentials into the agent runtime environment. The Foundry agent runtime includes Redis client libraries compatible with common programming languages used in AI development (e.g., Python, C#, Node.js), enabling direct interaction with Redis data stores. Additionally, telemetry and monitoring hooks are integrated to track Redis performance and usage metrics within the Foundry monitoring dashboards.
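The session-memory access pattern an agent would use against Redis can be sketched with a minimal dict-backed stand-in, so it runs without a provisioned instance. In a real Foundry agent, `FakeMemory` would be replaced by a Redis client (e.g. redis-py) configured with the connection string the platform injects:

```python
# Sketch: agent session memory using Redis-style list commands.
# FakeMemory is a dict-backed stand-in so the example runs without a server;
# the command semantics mirror Redis RPUSH/LRANGE.

class FakeMemory:
    """Implements just the list commands the example needs, Redis-style."""
    def __init__(self):
        self._data = {}

    def rpush(self, key, value):
        self._data.setdefault(key, []).append(value)

    def lrange(self, key, start, stop):
        items = self._data.get(key, [])
        end = None if stop == -1 else stop + 1    # Redis stop indexes are inclusive
        return items[start:end]

memory = FakeMemory()
session = "agent:session:42"
memory.rpush(session, "user: summarize the Q3 report")
memory.rpush(session, "agent: the report shows 12% growth")

# Rebuild conversation context for the agent's next turn:
context = memory.lrange(session, 0, -1)
print(len(context))   # 2
```

Keying memory by session identifier, as above, is what lets a stateless agent runtime recover context on every turn from the shared Redis store.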
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Published: November 19, 2025 17:00:30 UTC Link: Generally Available: TLS and TCP termination on Azure Application Gateway
Update ID: 532202 Data source: Azure Updates API
Categories: Launched, Networking, Security, Application Gateway
Summary:
What was updated
Azure Application Gateway support for TLS and TCP protocol termination is now generally available.
Key changes or new features
This update enables Application Gateway to load balance and securely terminate non-HTTP(S) traffic using TCP and TLS protocols. Previously focused on HTTP/HTTPS, the gateway can now handle encrypted and unencrypted TCP-based workloads, expanding its applicability to a broader range of applications. This includes decrypting TLS traffic at the gateway, allowing inspection, routing, and policy enforcement before forwarding to backend servers.
Target audience affected
Developers and IT professionals managing applications that rely on TCP or TLS protocols beyond HTTP/HTTPS will benefit. Network architects and security teams can leverage this to simplify infrastructure by consolidating load balancing and TLS termination within Application Gateway.
Important notes if any
Ensure backend pools and health probes are configured appropriately for TCP/TLS workloads. Review security policies to accommodate decrypted traffic at the gateway. This feature is now generally available, so it is production-ready for critical workloads.
For more details, visit: https://azure.microsoft.com/updates?id=532202
Details:
Azure Application Gateway’s general availability of TLS and TCP termination marks a significant enhancement in its capability to handle non-HTTP(S) traffic, extending its traditional role beyond Layer 7 (application layer) load balancing to also support Layer 4 (transport layer) protocols. This update addresses the growing need for secure and scalable load balancing of applications that communicate over TCP or TLS protocols but do not necessarily use HTTP(S).
Background and Purpose of the Update
Historically, Azure Application Gateway has been optimized for HTTP and HTTPS traffic, providing advanced Layer 7 load balancing features such as URL-based routing, SSL offloading, and Web Application Firewall (WAF) integration. However, many enterprise and cloud-native applications rely on other protocols over TCP or TLS, such as MQTT, FTP over TLS, or custom TCP-based protocols. Prior to this update, handling such traffic required alternative load balancing solutions like Azure Load Balancer or third-party appliances, which lack the integrated security and routing capabilities of Application Gateway. The introduction of TLS and TCP termination enables Application Gateway to natively manage these protocols, simplifying architecture and improving security posture.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Application Gateway operates as a reverse proxy and load balancer. With this update, it listens on specified frontend IPs and ports configured for TCP or TLS protocols. When TLS termination is enabled, the gateway uses uploaded SSL certificates to decrypt incoming traffic. For TCP termination, it simply proxies the TCP stream to backend pool members based on configured load balancing rules. Backend pools are defined by IP addresses or FQDNs, and health probes use TCP SYN or TLS handshake checks to verify backend health. Configuration is managed via Azure Portal, ARM templates, CLI, or PowerShell, where listeners can be set to TCP or TLS protocols instead of HTTP/HTTPS.
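The key configuration distinction the paragraph above describes — a TLS listener must reference a certificate for termination, while a plain TCP listener just proxies the stream — can be captured in a small validation sketch. The field names are illustrative, not the ARM schema:

```python
# Sketch: validating a Layer 4 listener definition. A TLS listener needs a
# certificate for termination; a TCP listener does not. Field names are
# illustrative, not the actual ARM resource schema.

def validate_listener(listener: dict) -> list:
    """Return a list of configuration problems for a Layer 4 listener."""
    problems = []
    if listener["protocol"] not in ("TCP", "TLS"):
        problems.append("protocol must be TCP or TLS")
    if listener["protocol"] == "TLS" and not listener.get("certificate"):
        problems.append("TLS termination requires an SSL certificate")
    if not (1 <= listener.get("port", 0) <= 65535):
        problems.append("port out of range")
    return problems

print(validate_listener({"protocol": "TLS", "port": 8883}))
# ['TLS termination requires an SSL certificate']
print(validate_listener({"protocol": "TCP", "port": 5671}))
# []
```

The port numbers chosen (8883 for MQTT over TLS, 5671 for AMQPS) match the kinds of non-HTTP protocols the update targets.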
Use Cases and Application Scenarios
Important Considerations and Limitations
Published: November 19, 2025 17:00:30 UTC Link: Public Preview: Microsoft Foundry data connection for Azure Databricks
Update ID: 527678 Data source: Azure Updates API
Categories: In preview, AI + machine learning, Analytics, Azure Databricks
Summary:
What was updated
Azure Databricks Genie now supports integration with Microsoft Foundry via a new data connection, currently in public preview.
Key changes or new features
Developers and data engineers can connect Genie spaces directly to Microsoft Foundry agents using the Model Context Protocol (MCP). This enables seamless, secure access to trusted and governed data within Foundry, simplifying data ingestion and enhancing data lineage and context for AI and analytics workloads in Azure Databricks.
Target audience affected
This update primarily benefits developers, data scientists, and IT professionals working with Azure Databricks and Microsoft Foundry who require streamlined, governed data access for building intelligent applications and analytics pipelines.
Important notes if any
As this feature is in public preview, users should evaluate it in non-production environments and provide feedback. Integration relies on MCP, so familiarity with this protocol and Foundry agent configuration is recommended to maximize benefits. Keep an eye on Azure updates for general availability and additional enhancements.
Details:
The recent public preview announcement of the Microsoft Foundry data connection for Azure Databricks introduces a direct integration between Azure Databricks Genie and Microsoft Foundry, enabling seamless, secure access to trusted enterprise data within Databricks environments. This update is designed to streamline data workflows by leveraging the Model Context Protocol (MCP) to connect Genie spaces to Foundry agents, facilitating a more efficient and governed data analytics process.
Background and Purpose
Azure Databricks Genie is a collaborative environment that simplifies data engineering and machine learning workflows on Databricks. Microsoft Foundry is a data operations platform that provides data governance, cataloging, and operationalization capabilities. Prior to this update, accessing Foundry-managed data within Databricks required complex manual integration or data duplication, which could lead to data inconsistencies and governance challenges. The purpose of this update is to bridge these platforms natively, enabling Databricks users to directly query and utilize trusted data assets managed by Foundry without leaving the Databricks environment, thereby enhancing productivity and data governance.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
The integration leverages MCP to establish a secure, authenticated communication channel between Azure Databricks Genie and Microsoft Foundry. MCP acts as a protocol layer that standardizes the interaction with Foundry’s data agents, exposing datasets and metadata in a consumable format. Implementation involves configuring Genie spaces with Foundry agent endpoints and appropriate authentication credentials, typically using Azure Active Directory (AAD) for identity management. Once connected, Databricks notebooks can invoke MCP APIs to query datasets, retrieve schema information, and maintain data context throughout the analytics lifecycle.
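MCP is built on JSON-RPC 2.0, so the request a Genie space sends to a Foundry agent has a predictable shape. A sketch of a `tools/call` message — the tool name and arguments below are hypothetical, not part of any published Foundry catalog:

```python
# Sketch: the JSON-RPC 2.0 shape of an MCP tools/call request.
# The tool name and arguments are hypothetical examples.
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

payload = mcp_tool_call(1, "query_dataset",                 # hypothetical tool
                        {"dataset": "sales_governed", "limit": 100})
request = json.loads(payload)
print(request["method"])   # tools/call
```

Because the protocol layer is standardized, the same request shape works against any Foundry agent endpoint; only the tool names and argument schemas differ per agent.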
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Published: November 19, 2025 17:00:30 UTC Link: Public Preview: Azure Databricks Genie in Copilot Studio
Update ID: 527668 Data source: Azure Updates API
Categories: In preview, AI + machine learning, Analytics, Azure Databricks
Summary:
What was updated
Azure Databricks Genie has been integrated into Microsoft Copilot Studio and is now available in public preview.
Key changes or new features
This integration enables users to access intelligent, enterprise-grade AI-driven answers and conversational analytics directly across their entire Azure Databricks data estate. It leverages advanced AI capabilities to provide natural language querying and insights, enhancing data exploration and decision-making workflows within Copilot Studio.
Target audience affected
Developers, data engineers, data scientists, and IT professionals who work with Azure Databricks and seek to streamline data analytics through AI-powered conversational interfaces will benefit from this update.
Important notes if any
As this feature is in public preview, users should evaluate it in non-production environments and provide feedback. Integration requires appropriate Azure Databricks and Copilot Studio configurations and permissions to access enterprise data securely. This update represents a step toward more intuitive, AI-driven data analytics within Azure’s ecosystem.
Details:
The recent public preview release of Azure Databricks Genie within Microsoft Copilot Studio represents a significant advancement in enabling conversational AI-driven analytics directly on enterprise data housed in Azure Databricks. This integration is designed to empower data engineers, data scientists, and business analysts by providing an intelligent, natural language interface to query, analyze, and derive insights from large-scale data environments without requiring deep SQL or Spark expertise.
Background and Purpose
Azure Databricks is a leading unified analytics platform optimized for Apache Spark, widely used for big data processing and machine learning workloads. However, interacting with complex data lakes and data warehouses often requires specialized skills in query languages and data engineering. Microsoft Copilot Studio is a platform that integrates AI capabilities to facilitate natural language interactions with enterprise data. By embedding Azure Databricks Genie—an AI-powered conversational analytics engine—into Copilot Studio, Microsoft aims to democratize access to advanced analytics, reducing the barrier to entry for data exploration and accelerating decision-making processes.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Azure Databricks Genie operates by interfacing with the Azure Databricks REST APIs and SQL endpoints. When a user submits a natural language query via Copilot Studio, the system uses an LLM-based natural language understanding (NLU) layer to parse and interpret the intent. The query is then translated into Spark SQL or PySpark code optimized for the underlying data schema and executed on the Databricks cluster. Results are returned and rendered within Copilot Studio’s conversational UI. The system incorporates enterprise security and compliance features, including Azure Active Directory (AAD) authentication, role-based access control (RBAC), and data masking where applicable. Additionally, Genie can leverage metadata from the Databricks Unity Catalog to understand data lineage and schema, improving query accuracy and governance adherence.
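The translation step can be shown in miniature. The real NLU layer is an LLM; this toy intent table only illustrates the contract (question in, SQL out) and the use of catalog metadata to fully qualify table names. All names below are hypothetical:

```python
# Sketch: natural language -> SQL, in miniature. A real system uses an LLM for
# intent parsing; this lookup table just illustrates the contract. The catalog
# entry stands in for a Unity Catalog lookup. All names are hypothetical.

CATALOG = {"orders": "main.sales.orders"}   # governed, fully qualified name

INTENTS = {
    "how many orders": "SELECT COUNT(*) FROM {table}",
    "latest orders":   "SELECT * FROM {table} ORDER BY created_at DESC LIMIT 10",
}

def to_sql(question: str) -> str:
    """Map a recognized question to SQL against the governed table name."""
    for phrase, template in INTENTS.items():
        if phrase in question.lower():
            return template.format(table=CATALOG["orders"])
    raise ValueError("intent not recognized")

print(to_sql("How many orders did we get last week?"))
# SELECT COUNT(*) FROM main.sales.orders
```

Resolving table names through the catalog, rather than letting the model emit them freely, is what keeps generated queries inside the governance boundary the paragraph above describes.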
Use Cases and Application Scenarios
Important Considerations and Limitations
Published: November 19, 2025 17:00:30 UTC Link: Public Preview: Azure API Management adds support for A2A Agent APIs
Update ID: 527635 Data source: Azure Updates API
Categories: In preview, Integration, Internet of Things, Mobile, Web, API Management
Summary:
What was updated
Azure API Management now publicly supports Agent-to-Agent (A2A) APIs, enabling unified management of these APIs alongside existing API types.
Key changes or new features
The update introduces the ability to onboard, manage, and govern A2A APIs within the Azure API Management platform. This includes APIs used for AI model interactions, Model Context Protocol (MCP) tools, and traditional APIs. Organizations can now apply consistent policies, security, monitoring, and lifecycle management to agent APIs, improving operational efficiency and governance.
Target audience affected
Developers building AI-driven or agent-based applications, IT professionals managing API ecosystems, and architects designing integrated API strategies will benefit from this enhancement.
Important notes if any
This feature is currently in public preview, so users should evaluate it in non-production environments and provide feedback. As a preview, some functionalities may evolve before general availability. Users should review Azure’s documentation for any limitations or prerequisites.
This update streamlines API management for emerging agent-based architectures, supporting broader AI and automation integration scenarios within Azure.
Details:
The recent public preview announcement of Azure API Management (APIM) support for Agent-to-Agent (A2A) APIs introduces a significant enhancement enabling organizations to manage and govern agent APIs alongside existing API types such as AI model APIs, Model Context Protocol (MCP) tools, and traditional RESTful APIs. This update addresses the growing need to streamline API governance in environments where autonomous agents or AI-driven components communicate directly, facilitating unified lifecycle management, security, and monitoring within a single API management platform.
Background and Purpose
As enterprises increasingly adopt AI agents and autonomous systems that interact programmatically, the complexity of managing these agent APIs separately from traditional APIs has grown. Previously, Azure APIM focused on managing external-facing or internal REST APIs but lacked native support for agent-specific API protocols and communication patterns. The introduction of A2A API support aims to bridge this gap by enabling organizations to onboard, secure, and monitor agent APIs with the same rigor and tooling as other API types, thereby simplifying governance and operational consistency.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Under the hood, Azure APIM extends its API gateway capabilities to recognize and route agent API calls, which may involve message queues, event hubs, or protocol adapters. The service abstracts the underlying communication protocols, exposing a consistent management interface for onboarding, securing, and monitoring agent APIs alongside other API types.
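The governance idea — one gateway resolving a per-type policy chain before forwarding a call — can be sketched as follows. The policy names and API types are illustrative; APIM expresses the real equivalents as policy XML applied at the API or operation scope:

```python
# Sketch: resolving a policy chain per onboarded API type before forwarding.
# Policy names and API types are illustrative; APIM's real policies are
# configured as XML at the API or operation scope.

POLICY_SETS = {
    "rest": ["validate-jwt", "rate-limit"],
    "mcp":  ["validate-jwt", "rate-limit", "tool-allowlist"],
    "a2a":  ["validate-jwt", "rate-limit", "agent-identity-check"],
}

def policies_for(api_type: str) -> list:
    """Resolve the policy chain for an onboarded API of the given type."""
    try:
        return POLICY_SETS[api_type]
    except KeyError:
        raise ValueError(f"unsupported api type: {api_type}")

print(policies_for("a2a"))
# ['validate-jwt', 'rate-limit', 'agent-identity-check']
```

The shared prefix of every chain is the point of the update: agent APIs inherit the same authentication and throttling baseline as traditional APIs, with agent-specific checks layered on top.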
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
This report was automatically generated - 2025-11-20 03:05:56 UTC