Generated on: November 07, 2025
Target period: Within the last 24 hours
Processing mode: Details Mode
Number of updates: 6 items
Published: November 06, 2025 17:00:51 UTC Link: Generally Available: Ultra Disk’s new flexible provisioning model
Update ID: 526635 Data source: Azure Updates API
Categories: Launched, Storage, Azure Disk Storage
Summary:
What was updated
Azure Ultra Disk now supports a new flexible provisioning model, announced as generally available.
Key changes or new features
The flexible provisioning model allows independent configuration of disk capacity, IOPS, and throughput (MBps). This decoupling enables developers and IT professionals to precisely tailor disk performance parameters to specific workload requirements without over-provisioning capacity. It improves cost efficiency by allowing optimization of performance and storage separately. Users can scale IOPS and throughput dynamically based on application demands while maintaining the desired disk size.
Target audience affected
This update primarily benefits developers, IT administrators, and cloud architects managing high-performance workloads on Azure VMs that require fine-tuned disk performance, such as databases, analytics, and I/O-intensive applications.
Important notes if any
Existing Ultra Disk customers can migrate to the flexible provisioning model to gain these benefits. It is recommended to evaluate workload performance needs carefully to optimize cost and performance. This enhancement aligns with Azure’s goal to provide more granular and cost-effective storage performance options.
For more details, visit: https://azure.microsoft.com/updates?id=526635
Details:
The Azure Ultra Disk flexible provisioning model has reached General Availability. It decouples capacity, IOPS, and throughput settings, allowing IT professionals to configure each parameter independently to better align with specific workload requirements and optimize the cost-performance balance.
Background and Purpose of the Update
Previously, Azure Ultra Disks required users to provision capacity, IOPS, and throughput in a fixed ratio, which often led to over-provisioning or under-utilization of resources. This rigid coupling limited the ability to fine-tune performance characteristics for diverse workloads, especially those with fluctuating or non-linear I/O demands. The new flexible provisioning model addresses this by enabling independent scaling of capacity, IOPS, and throughput, thereby providing greater control and efficiency in storage resource allocation.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Under the hood, the Ultra Disk service architecture has been enhanced to abstract the underlying physical resource allocation, allowing independent throttling and scaling of IOPS and throughput separate from capacity. This is achieved through a more granular resource management layer that dynamically allocates backend storage and network bandwidth based on the provisioned parameters. The provisioning model leverages Azure’s distributed storage fabric to ensure consistent low latency and high throughput while maintaining data durability and availability. The update also includes validation logic to ensure provisioned IOPS and throughput values are within supported limits relative to the chosen capacity.
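The validation logic described above can be sketched in a few lines. This is an illustrative model only; the per-GiB IOPS ceiling and per-IOPS throughput allowance below are assumed values for the sketch, not Azure's actual Ultra Disk limits.

```python
# Illustrative sketch of validation that checks provisioned IOPS and
# throughput against the chosen capacity. The two limit constants are
# assumptions for illustration, not Azure's published Ultra Disk limits.

MAX_IOPS_PER_GIB = 300     # assumed IOPS allowance per GiB of capacity
MAX_MBPS_PER_IOPS = 0.25   # assumed MBps allowance per provisioned IOPS

def validate_provisioning(capacity_gib: int, iops: int, throughput_mbps: int):
    """Return a list of validation errors; an empty list means the request is valid."""
    errors = []
    iops_limit = capacity_gib * MAX_IOPS_PER_GIB
    if iops > iops_limit:
        errors.append(f"IOPS {iops} exceeds {iops_limit} allowed for {capacity_gib} GiB")
    mbps_limit = iops * MAX_MBPS_PER_IOPS
    if throughput_mbps > mbps_limit:
        errors.append(f"Throughput {throughput_mbps} MBps exceeds {mbps_limit:.0f} allowed for {iops} IOPS")
    return errors
```

Because each parameter is now provisioned independently, a check of this shape is what keeps a request for, say, very high IOPS on a very small disk within supported bounds.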
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, the General Availability of Azure Ultra Disk’s flexible provisioning model empowers IT professionals to independently configure capacity, IOPS, and throughput, optimizing both cost and performance for demanding workloads.
Published: November 06, 2025 16:00:54 UTC Link: Generally Available: Object Replication Metrics
Update ID: 520201 Data source: Azure Updates API
Categories: Launched, Storage, Azure Blob Storage
Summary:
What was updated
Azure Blob Storage Object Replication metrics for pending operations and pending bytes are now generally available across all Azure regions.
Key changes or new features
The update introduces GA availability of detailed replication metrics that track the number of pending replication operations and the volume of pending bytes. These metrics enable real-time monitoring of replication health and performance, helping identify and troubleshoot replication delays effectively. Developers and IT professionals can leverage these insights to optimize replication throughput and ensure data consistency and availability across storage accounts.
Target audience affected
This update primarily benefits developers, IT administrators, and DevOps engineers managing Azure Blob Storage with Object Replication enabled, especially those responsible for data synchronization, disaster recovery, and high-availability scenarios.
Important notes if any
The metrics are accessible via Azure Monitor and API endpoints, allowing integration into existing monitoring and alerting workflows. Users should incorporate these metrics into their operational dashboards to proactively manage replication performance and address potential bottlenecks before they impact applications.
Details:
The recent Azure update announces the general availability of Object Replication metrics for Blob storage, specifically focusing on pending operations and pending bytes across all Azure regions. This enhancement provides IT professionals with critical telemetry to monitor and manage the replication status of objects between storage accounts, enabling improved operational visibility and performance optimization.
Background and Purpose of the Update
Object Replication (OR) in Azure Blob storage is a feature that asynchronously replicates blobs between two storage accounts, typically across regions, to support scenarios such as disaster recovery, data migration, and compliance. Prior to this update, while Object Replication ensured eventual consistency, there was limited visibility into the replication process’s internal state, particularly regarding delays or backlogs. The introduction of detailed replication metrics addresses this gap by providing actionable insights into pending replication operations and the volume of data yet to be replicated, thereby empowering administrators to proactively detect and troubleshoot replication issues.
Specific Features and Detailed Changes
The update delivers two primary metrics exposed via Azure Monitor for Object Replication: pending operations, the count of replication operations not yet completed on the destination account, and pending bytes, the volume of data still awaiting replication.
These metrics are available at the storage account level and can be accessed through Azure Monitor metrics APIs, Azure Portal, or integrated into custom monitoring solutions. The metrics are updated in near real-time, allowing for timely detection of replication delays or bottlenecks.
Technical Mechanisms and Implementation Methods
Object Replication operates by asynchronously copying blob data from a source storage account to a destination account based on configured replication policies. Internally, the replication engine tracks replication requests and their completion status. The newly exposed metrics derive from this internal state, aggregating counts and sizes of pending replication tasks.
From an implementation perspective, these metrics are surfaced via the Azure Monitor platform, leveraging the existing metrics pipeline. This means users can query them using Azure Monitor REST APIs, Azure CLI (az monitor metrics), or integrate with Azure Event Hubs and Log Analytics for advanced alerting and visualization. No additional configuration is required to enable these metrics once Object Replication is set up.
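A simple consumer of these metrics is a threshold check over the latest samples. The sketch below is a hedged illustration: the sample format (plain lists of numbers) and the threshold values are assumptions, standing in for what a real Azure Monitor query would return.

```python
# Hedged sketch: decide whether replication lag warrants an alert, given
# recent samples of the pending-operations and pending-bytes metrics.
# Sample shapes and thresholds are assumptions for illustration.

def replication_alert(pending_ops_samples, pending_bytes_samples,
                      max_ops=1000, max_bytes=5 * 1024**3):
    """Return a list of alert reasons based on the most recent sample of each metric."""
    latest_ops = pending_ops_samples[-1]
    latest_bytes = pending_bytes_samples[-1]
    reasons = []
    if latest_ops > max_ops:
        reasons.append(f"pending operations {latest_ops} exceeds {max_ops}")
    if latest_bytes > max_bytes:
        reasons.append(f"pending bytes {latest_bytes} exceeds {max_bytes}")
    return reasons
```

In practice the same decision would typically be expressed as an Azure Monitor alert rule rather than custom code; the sketch only shows the logic.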
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, the general availability of Object Replication metrics for pending operations and pending bytes gives administrators the telemetry needed to monitor replication health, detect backlogs early, and keep cross-account replication performing as expected across all Azure regions.
Published: November 06, 2025 15:45:17 UTC Link: Generally Available: Azure MCP Server
Update ID: 526881 Data source: Azure Updates API
Categories: Launched, Compute, Mobile, Web, AI + machine learning, Containers, DevOps, Analytics, App Service, Azure AI Foundry, Azure Container Apps, GitHub Enterprise, Microsoft Fabric
Summary:
What was updated
Azure MCP Server is now generally available, introducing a new way for agents to interact securely and efficiently with Azure services.
Key changes or new features
Built on the Model Context Protocol (MCP), Azure MCP Server establishes a secure, standards-based bridge enabling seamless communication between agents and various Azure services such as Azure Kubernetes Service (AKS), Azure Container Apps (ACA), App Service, Cosmos DB, SQL Database, and AI Foundry. This facilitates enhanced developer workflows by providing consistent, protocol-driven access to cloud resources, improving integration, automation, and management capabilities.
Target audience affected
Developers building cloud-native applications, DevOps engineers, and IT professionals managing Azure infrastructure and services will benefit from simplified, secure interactions with Azure resources through MCP Server.
Important notes if any
Azure MCP Server’s adoption can streamline agent-based operations and improve security posture by leveraging standardized communication protocols. Users should evaluate integration scenarios to maximize benefits and ensure compatibility with existing workflows. Further technical details and implementation guidance are available in the official Azure documentation.
Details:
The Azure MCP Server has reached general availability, introducing a new cloud-native framework designed to enhance developer interaction with Azure services by leveraging the Model Context Protocol (MCP). This update aims to provide a secure, standardized communication bridge that simplifies and streamlines connectivity between diverse Azure resources such as Azure Kubernetes Service (AKS), Azure Container Apps (ACA), App Service, Cosmos DB, Azure SQL Database, and AI Foundry.
Background and Purpose:
As cloud environments grow increasingly complex, developers require more efficient and secure methods to integrate and orchestrate multiple Azure services. Traditional approaches often involve disparate APIs and custom integration layers, which increase development overhead and potential security risks. The Azure MCP Server addresses these challenges by implementing MCP, a protocol designed to standardize context sharing and communication between services, thereby enabling seamless interoperability and reducing integration complexity.
Specific Features and Detailed Changes:
Technical Mechanisms and Implementation Methods:
The MCP Server operates by maintaining a context model that represents the state and metadata relevant to a given operation or workflow. When an agent or service connects, it registers its context model with the MCP Server. The server then mediates context propagation using a secure, standardized protocol that supports authentication tokens, encryption, and role-based access control. This ensures that only authorized services can access or modify context data. The MCP Server can be deployed as a managed Azure service or containerized within AKS or ACA, providing flexibility in deployment topology.
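The register-then-mediate flow described above can be sketched with a minimal in-memory registry. This is a conceptual illustration only: the data shapes, role names, and access policy below are assumptions, and the actual Model Context Protocol is a richer, JSON-RPC-based protocol.

```python
# Conceptual sketch of the context-registration flow: agents register a
# context model, and reads are mediated by a role check. Role names and
# the access policy are assumptions for illustration, not MCP semantics.

from dataclasses import dataclass, field

@dataclass
class ContextRegistry:
    _contexts: dict = field(default_factory=dict)  # agent id -> context model
    _roles: dict = field(default_factory=dict)     # agent id -> owning role

    def register(self, agent_id: str, role: str, context: dict) -> None:
        """An agent registers its context model under a role."""
        self._roles[agent_id] = role
        self._contexts[agent_id] = context

    def read(self, requester_role: str, agent_id: str):
        """Assumed policy: only 'admin' or the owning role may read a context."""
        owner_role = self._roles.get(agent_id)
        if requester_role in ("admin", owner_role):
            return self._contexts.get(agent_id)
        return None
```

The real server adds authentication tokens and encryption around this mediation step; the sketch captures only the role-gated access pattern.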
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
Azure MCP Server integrates deeply with Azure identity and access management (Azure AD) for authentication and authorization, ensuring secure context sharing. It complements Azure DevOps and Azure Monitor by providing contextual metadata that can enhance pipeline automation and observability. The server’s compatibility with containerized environments like AKS and ACA allows it to fit naturally into modern cloud-native architectures, while integration with data services like Cosmos DB and SQL Database supports complex data workflows. Additionally, its support for AI Foundry enables advanced AI scenarios by maintaining consistent context across agents and services.
Published: November 06, 2025 15:45:17 UTC Link: Public Preview: GitHub Copilot in SQL Server Management Studio (SSMS)
Update ID: 520729 Data source: Azure Updates API
Categories: In preview
Summary:
What was updated
GitHub Copilot integration is now available in public preview within SQL Server Management Studio (SSMS).
Key changes or new features
The integration enables AI-assisted coding for Transact-SQL (T-SQL), helping developers write queries faster and with improved accuracy. It leverages database schema and connection context to provide relevant code completions and can answer general SQL questions directly within SSMS. This enhances productivity by reducing manual coding and debugging efforts.
Target audience affected
SQL developers and database administrators using SSMS for T-SQL development and management will benefit from this update. IT professionals involved in database development and maintenance can leverage AI assistance to streamline workflows.
Important notes if any
As a public preview feature, GitHub Copilot in SSMS may have limitations and is subject to ongoing improvements. Users should evaluate it in non-production environments initially. Access requires appropriate GitHub Copilot licensing and an active internet connection for AI services.
For more details, visit: https://azure.microsoft.com/updates?id=520729
Details:
The recent public preview release of GitHub Copilot integration within SQL Server Management Studio (SSMS) introduces an AI-powered coding assistant designed to enhance the efficiency and accuracy of writing Transact-SQL (T-SQL) queries. This update leverages GitHub Copilot’s AI capabilities directly inside SSMS, enabling database professionals to generate context-aware code suggestions and receive natural language explanations based on the connected database schema and session context.
Background and Purpose:
Writing complex T-SQL queries often requires deep familiarity with database schema, syntax, and best practices, which can slow down development and increase the risk of errors. GitHub Copilot, powered by OpenAI’s Codex model, has been widely adopted in software development for code completion and generation. Integrating Copilot into SSMS aims to bring these productivity gains to database developers and administrators by providing intelligent code assistance tailored to SQL Server environments. The public preview phase allows users to evaluate and provide feedback on this integration before general availability.
Specific Features and Changes:
Technical Mechanisms and Implementation:
GitHub Copilot in SSMS operates by sending anonymized code context and user prompts to the GitHub Copilot service, which runs on OpenAI’s Codex model hosted in the cloud. The integration within SSMS captures the active database connection details and schema metadata to provide context-rich prompts to the AI model. Returned suggestions are displayed inline within the SSMS query editor, allowing users to accept, reject, or modify them. The extension respects security boundaries by not transmitting sensitive data beyond what is necessary for code generation and adheres to Microsoft’s compliance standards.
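The context-assembly step described above, combining schema metadata with the user's request before it reaches the AI service, can be sketched as follows. The prompt layout is purely an assumption for illustration; the extension's actual wire format is not documented here.

```python
# Hedged sketch of context assembly: fold schema metadata and the user's
# request into one prompt string. The layout is assumed for illustration.

def build_prompt(user_request: str, tables: dict) -> str:
    """tables maps table name -> list of column names (illustrative shape)."""
    schema_lines = [
        f"TABLE {name} ({', '.join(cols)})"
        for name, cols in sorted(tables.items())
    ]
    return "\n".join(["-- Database schema:", *schema_lines,
                      f"-- Request: {user_request}"])
```

Grounding suggestions in the live schema this way is what lets the assistant complete against real table and column names rather than guessing.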
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
While GitHub Copilot in SSMS primarily enhances the local SQL Server development experience, it complements Azure SQL Database and Azure SQL Managed Instance workflows by enabling faster query authoring that can be deployed to these cloud platforms. Additionally, it aligns with Azure DevOps pipelines where T-SQL scripts are version-controlled and automated. The AI-assisted coding experience can be combined with Azure Data Studio extensions and Azure Synapse Analytics for broader data platform productivity.
Published: November 06, 2025 15:45:17 UTC Link: Public Preview: Azure SQL updates for early November 2025
Update ID: 520715 Data source: Azure Updates API
Categories: In preview, Databases, Hybrid + multicloud, Azure SQL Database
Summary:
What was updated
Azure SQL Hyperscale received an update in early November 2025.
Key changes or new features
The primary enhancement is the support for multiple geo-secondary replicas. This allows developers and IT professionals to create disaster recovery architectures that span multiple geographic regions more easily and with greater flexibility.
Target audience affected
This update primarily impacts database administrators, developers, and IT professionals responsible for designing, deploying, and managing Azure SQL Hyperscale environments, especially those focused on high availability and disaster recovery.
Important notes if any
The addition of multiple geo-secondary replicas improves resilience and failover capabilities but may require updated configuration and monitoring strategies. Users should review best practices for geo-replication and disaster recovery planning to fully leverage this feature.
For more details, visit: https://azure.microsoft.com/updates?id=520715
Details:
In early November 2025, Azure SQL introduced a significant enhancement to its Hyperscale service tier by enabling support for multiple geo-secondary replicas, aimed at improving disaster recovery and high availability strategies across geographically distributed environments.
Background and Purpose:
Azure SQL Hyperscale is designed to provide highly scalable and performant cloud-native SQL database capabilities, supporting rapid growth and large workloads. Traditionally, Hyperscale allowed a single geo-secondary replica for disaster recovery (DR), which limited flexibility in multi-region failover architectures. The update addresses this limitation by allowing multiple geo-secondary replicas, thereby enhancing resilience and enabling more complex DR topologies that span multiple Azure regions.
Specific Features and Detailed Changes:
Technical Mechanisms and Implementation Methods:
Azure SQL Hyperscale uses a log-based asynchronous replication mechanism to maintain geo-secondary replicas. Each replica maintains a copy of the database’s log stream and data pages, applying changes asynchronously to keep in sync with the primary. With multiple geo-secondary replicas, the primary streams transaction logs independently to each replica. The system ensures consistency and durability through write-ahead logging and checkpointing. Failover processes are enhanced to allow selection of the most appropriate geo-secondary replica based on health and region.
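The failover-target selection described above, choosing the most appropriate geo-secondary based on health and region, can be sketched as a simple ranking. The field names and selection policy below are assumptions for illustration, not Azure's actual failover algorithm.

```python
# Sketch of failover-target selection among geo-secondary replicas:
# keep only healthy replicas, prefer a given region if one qualifies,
# then pick the lowest replication lag. Field names are illustrative.

def choose_failover_target(replicas, preferred_region=None):
    """replicas: list of dicts with 'region', 'healthy', 'lag_seconds'."""
    healthy = [r for r in replicas if r["healthy"]]
    if not healthy:
        return None
    if preferred_region:
        in_region = [r for r in healthy if r["region"] == preferred_region]
        if in_region:
            healthy = in_region
    return min(healthy, key=lambda r: r["lag_seconds"])
```

With multiple geo-secondaries, a policy of this shape is what turns the single-replica failover decision into a choice among candidates.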
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
Published: November 06, 2025 15:45:17 UTC Link: Public Preview: Azure Database for PostgreSQL read replicas with Premium SSD v2
Update ID: 520710 Data source: Azure Updates API
Categories: In preview, Databases, Hybrid + multicloud, Azure Database for PostgreSQL
Summary:
What was updated
Azure Database for PostgreSQL flexible server now supports configuring read replicas (both in-region and geo-replicas) with the Premium SSD v2 storage tier, currently in public preview.
Key changes or new features
Developers and IT professionals can leverage Premium SSD v2 storage for read replicas, which offers improved performance and lower latency compared to previous storage options. This enhancement allows better scaling of read-heavy workloads by offloading read operations to replicas with faster, more reliable storage. The update supports both in-region and geo-replicas, enabling flexible disaster recovery and global read scaling scenarios.
Target audience affected
This update primarily impacts developers, database administrators, and IT professionals managing Azure Database for PostgreSQL flexible servers who require high-performance read scaling and improved storage performance for read replicas.
Important notes if any
As this feature is in public preview, users should evaluate it in non-production environments before deploying to critical workloads. Pricing and SLA details may differ from generally available tiers. Users should monitor Azure updates for GA announcements and additional feature enhancements.
Reference: https://azure.microsoft.com/updates?id=520710
Details:
The recent Azure update introduces Public Preview support for configuring read replicas in Azure Database for PostgreSQL flexible server with the Premium SSD v2 storage tier. This enhancement enables IT professionals to leverage the improved performance and cost-efficiency of Premium SSD v2 for read replicas, both in-region and geo-replicas, thereby optimizing read scalability and workload distribution.
Background and Purpose:
Azure Database for PostgreSQL flexible server supports read replicas to offload read-heavy workloads from the primary server, enhancing application responsiveness and throughput. Previously, read replicas could be provisioned with certain storage tiers, but the introduction of Premium SSD v2 storage for read replicas addresses the need for higher IOPS, lower latency, and better cost-performance balance. Premium SSD v2 is designed to deliver scalable and consistent performance with improved durability, making it well-suited for read-intensive database operations.
Specific Features and Detailed Changes:
Technical Mechanisms and Implementation Methods:
Read replicas in Azure Database for PostgreSQL flexible server operate by asynchronously replicating data from the primary server to one or more secondary servers. The replication mechanism ensures eventual consistency for read operations, allowing applications to direct read queries to replicas without impacting primary write performance. With this update, the underlying storage for replicas uses Premium SSD v2, which leverages Azure’s latest generation of SSD storage technology featuring enhanced caching, lower latency, and higher durability. Configuration is done via Azure Portal, CLI, or ARM templates, where users specify the storage tier during replica creation or upgrade existing replicas to Premium SSD v2.
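The read-offloading pattern described above, sending writes to the primary while rotating reads across replicas, can be sketched at the application level. Endpoint names and the round-robin policy are assumptions for illustration; real deployments often delegate this to a connection pooler instead.

```python
# Sketch of application-side read/write routing: writes always hit the
# primary, reads rotate round-robin across read replicas. Endpoint names
# are illustrative placeholders.

from itertools import cycle

class ReadWriteRouter:
    def __init__(self, primary: str, replicas: list):
        self.primary = primary
        self._replicas = cycle(replicas) if replicas else None

    def endpoint_for(self, is_write: bool) -> str:
        # Fall back to the primary when no replicas are configured.
        if is_write or self._replicas is None:
            return self.primary
        return next(self._replicas)
```

Because replication is asynchronous, reads routed this way are eventually consistent; queries that must see their own writes should still target the primary.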
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
This report was automatically generated - 2025-11-07 03:03:27 UTC