Generated on: March 26, 2026 Target period: Within the last 24 hours Processing mode: Details Mode Number of updates: 8 items
Published: March 25, 2026 20:00:39 UTC Link: Generally Available: Cosmos DB Mirroring in Microsoft Fabric with private endpoints
Update ID: 558836 Data source: Azure Updates API
Categories: Launched, Databases, Internet of Things, Azure Cosmos DB, Features
Summary:
What was updated
Private endpoint support for Azure Cosmos DB Mirroring in Microsoft Fabric is now generally available.
Key changes or new features
Developers and IT professionals can now use private endpoints to securely connect Microsoft Fabric to Azure Cosmos DB when using the Mirroring feature. This enables secure, private network connectivity (via Azure Private Link) for analytics workloads on operational data, reducing exposure to the public internet and enhancing data security.
Target audience affected
Azure Cosmos DB users, Microsoft Fabric administrators, developers building analytics solutions, and IT professionals managing data security and network configurations.
Important notes if any
Enabling private endpoints helps organizations meet strict security and compliance requirements by ensuring data traffic remains within the Azure backbone network. This update allows seamless integration of operational and analytical workloads without compromising network security. Existing Cosmos DB Mirroring setups can now be enhanced with private endpoints; configuration may require updates to network and DNS settings.
Read more on the official update page.
Details:
Background and Purpose of the Update
This update announces the general availability of private endpoint support for Azure Cosmos DB Mirroring in Microsoft Fabric. The primary purpose is to enhance network security for organizations leveraging Cosmos DB Mirroring within Microsoft Fabric by enabling private connectivity, thus allowing secure analytics on operational data without exposing resources to the public internet.
Specific Features and Detailed Changes
With this release, users can now configure private endpoints for Cosmos DB Mirroring in Microsoft Fabric. This means that data traffic between Microsoft Fabric and Azure Cosmos DB can traverse a secure, private Azure network path rather than the public internet. This update enables organizations to maintain a strong security posture while performing analytics on their operational data mirrored from Azure Cosmos DB.
Technical Mechanisms and Implementation Methods
Private endpoints in Azure are network interfaces that connect you privately and securely to a service powered by Azure Private Link. When private endpoint support is enabled for Cosmos DB Mirroring in Microsoft Fabric, the mirrored data is accessed over a private IP address within your Azure Virtual Network (VNet). This ensures that data in transit remains within the Azure backbone network, minimizing exposure to potential threats from the public internet. The implementation involves configuring private endpoints for both the source Cosmos DB account and the Microsoft Fabric workspace, ensuring that all data synchronization and analytics operations are routed securely via Azure Private Link.
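Once a private endpoint and its private DNS zone are in place, a common validation step is to confirm that the account's FQDN now resolves to a private IP from the VNet address space rather than a public address. The sketch below uses only the Python standard library; the hostname shown is a hypothetical example, not a real account.

```python
import ipaddress
import socket

def resolves_privately(hostname: str) -> bool:
    """Return True if every resolved address for the hostname is private.

    After a private endpoint and its DNS zone are configured, the account
    FQDN should resolve to a private IP from the VNet's address space.
    """
    infos = socket.getaddrinfo(hostname, None)
    addresses = {info[4][0] for info in infos}
    return all(ipaddress.ip_address(addr).is_private for addr in addresses)

# Hypothetical account FQDN; replace with your own Cosmos DB endpoint.
# resolves_privately("myaccount.documents.azure.com")
```

If this check returns False after enabling the private endpoint, DNS settings (for example, a missing private DNS zone link) are the usual culprit, which matches the note above about updating network and DNS configuration.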
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
This update strengthens integration between Azure Cosmos DB, Microsoft Fabric, and Azure networking services such as Private Link and Virtual Networks. It allows organizations to build end-to-end secure analytics pipelines, leveraging Cosmos DB as the operational data store, Microsoft Fabric for analytics, and Azure networking for secure data transport.
Summary Sentence
Private endpoint support for Azure Cosmos DB Mirroring in Microsoft Fabric is now generally available, enabling secure, private analytics on operational data while maintaining enhanced network security.
Published: March 25, 2026 20:00:39 UTC Link: Public Preview: Blue-green agent pool upgrade in AKS
Update ID: 557862 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) now offers a public preview of blue-green agent pool upgrades.
Key changes or new features
This update introduces blue-green upgrade support for AKS node pools. Instead of upgrading nodes in-place, AKS creates a parallel (green) node pool with the desired configuration. Workloads can be validated on the new pool before traffic is shifted from the existing (blue) pool. This approach reduces upgrade risks and allows for easy rollback if issues are detected.
Target audience affected
AKS cluster administrators, DevOps engineers, and developers managing Kubernetes workloads on Azure.
Important notes if any
The blue-green upgrade process provides a safer and more controlled upgrade path for AKS node pools, minimizing downtime and disruption. It is currently in public preview, so it is not recommended for production workloads yet. Users should review AKS documentation for limitations and best practices before adoption.
For more details, see the official update.
Details:
Azure Update Technical Report
Title: Public Preview: Blue-green agent pool upgrade in AKS
Link: Azure Update
Traditionally, in-place node pool upgrades in Azure Kubernetes Service (AKS) apply updates directly to existing nodes within a running environment. This process can introduce operational risk, as any misconfiguration or failure during the upgrade may impact live workloads. The blue-green agent pool upgrade feature addresses this risk by enabling a safer, more controlled upgrade path for AKS node pools.
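The blue-green flow described above — create a green pool at the target version, validate it, shift traffic, and keep the blue pool available for rollback — can be sketched as a small simulation. This is illustrative logic only, not AKS API code; the pool names and the validation callback are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class NodePool:
    name: str
    version: str
    serving: bool = False

def blue_green_upgrade(blue: NodePool, target_version: str,
                       validate: Callable[[NodePool], bool]) -> NodePool:
    """Simulate a blue-green node pool upgrade.

    A parallel 'green' pool is created at the target version; traffic is
    shifted only if validation passes, otherwise the blue pool keeps serving.
    """
    green = NodePool(name=f"{blue.name}-green", version=target_version)
    if validate(green):          # run workload checks against the new pool
        green.serving = True     # cut over traffic to green
        blue.serving = False     # blue is retained briefly for rollback, then drained
        return green
    return blue                  # rollback path: blue keeps serving unchanged

blue = NodePool(name="nodepool1", version="1.29", serving=True)
result = blue_green_upgrade(blue, "1.30", validate=lambda pool: True)
print(result.name, result.version, result.serving)
```

The key design point is that the original pool is never mutated until validation succeeds, which is what gives the approach its low-risk rollback path.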
Summary:
The blue-green agent pool upgrade feature in AKS (public preview) enables safer, parallel node pool upgrades by allowing validation of new configurations before shifting workloads, reducing operational risk and providing a clear rollback path.
Published: March 25, 2026 17:45:33 UTC Link: Public Preview: Fabric Mirroring integration for Azure Database for MySQL
Update ID: 558841 Data source: Azure Updates API
Categories: In preview, Databases, Azure Database for MySQL, Features
Summary:
What was updated
Public preview of Fabric Mirroring integration for Azure Database for MySQL – Flexible Server.
Key changes or new features
Developers and IT professionals can now replicate MySQL operational data into Microsoft Fabric in near real time. This integration eliminates the need to build or maintain custom ETL pipelines, streamlining data movement from Azure Database for MySQL to Microsoft Fabric. The mirroring process is managed natively, enabling faster and more reliable data synchronization for analytics and reporting.
Target audience affected
Developers, data engineers, and IT professionals who manage data integration, analytics, or reporting workflows using Azure Database for MySQL and Microsoft Fabric.
Important notes if any
This feature is currently in public preview and may not be suitable for production workloads. Users can leverage this integration to simplify data replication scenarios and accelerate time-to-insight. Review the official documentation for setup instructions, supported scenarios, and any preview limitations.
Details:
Background and Purpose of the Update
This Azure update introduces the public preview of Fabric Mirroring integration for Azure Database for MySQL – Flexible Server. The primary goal is to enable seamless replication of MySQL operational data into Microsoft Fabric environments in near real time. This integration is designed to eliminate the need for building or maintaining custom ETL (Extract, Transform, Load) pipelines, thereby simplifying data movement and accelerating analytics workflows.
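To see what the native mirroring removes, here is a minimal sketch of the kind of hand-rolled polling ETL loop teams previously had to build and maintain: periodically query rows past a watermark and push them downstream. The table, columns, and in-memory sink are hypothetical, and SQLite stands in for a MySQL connection since the pattern is identical.

```python
import sqlite3  # stand-in for a MySQL connection; the pattern is the same

def poll_changes(conn, last_seen_id: int, sink: list) -> int:
    """One ETL polling pass: copy rows newer than the watermark to the sink.

    Fabric Mirroring replaces this kind of loop with managed, near
    real-time replication, so no watermark bookkeeping is needed.
    """
    rows = conn.execute(
        "SELECT id, payload FROM orders WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    sink.extend(rows)
    return rows[-1][0] if rows else last_seen_id  # advance the watermark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])
sink: list = []
watermark = poll_changes(conn, 0, sink)
print(watermark, len(sink))  # → 3 3
```

Beyond the loop itself, such pipelines also need scheduling, retry handling, and schema-drift management — all of which the managed mirroring path absorbs.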
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Summary Sentence
The public preview of Fabric Mirroring integration for Azure Database for MySQL – Flexible Server enables near real-time replication of MySQL operational data into Microsoft Fabric, streamlining analytics and reporting workflows by eliminating the need for custom ETL pipelines.
Published: March 25, 2026 17:30:45 UTC Link: Public Preview: Azure SQL Managed Instance change event streaming
Update ID: 558884 Data source: Azure Updates API
Categories: In preview, Databases, Hybrid + multicloud, Azure SQL Database, Azure SQL Managed Instance, Features
Summary:
What was updated
Azure SQL Managed Instance now supports change event streaming (CES) to Azure Event Hubs in public preview.
Key changes or new features
You can stream row-level data changes—including inserts, updates, and deletes—from Azure SQL Managed Instance directly to Azure Event Hubs in near real time. Changes are published as transactions commit, minimizing latency and reducing the need for custom polling or ETL solutions. This enables seamless integration with downstream analytics, monitoring, or event-driven applications.
Target audience affected
Developers building real-time data pipelines, analytics solutions, or event-driven architectures; IT professionals managing data integration, replication, or monitoring between SQL Managed Instance and other Azure services.
Important notes if any
This feature is currently in public preview and may not be suitable for production workloads. It simplifies real-time data movement but requires configuration of both SQL Managed Instance and Event Hubs. Review preview limitations and pricing implications before adoption.
Details:
Azure Update Report: Public Preview – Azure SQL Managed Instance Change Event Streaming
Background and Purpose of the Update
This update introduces change event streaming (CES) for Azure SQL Managed Instance, enabling the streaming of row-level data changes—specifically inserts, updates, and deletes—to Azure Event Hubs in near real time. The primary purpose is to facilitate low-latency, scalable integration between transactional SQL data and downstream event-driven architectures or analytics platforms. By publishing changes as transactions commit, the solution minimizes latency and supports modern data processing requirements.
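On the consuming side, CES events arrive from Event Hubs like any other event stream and are typically routed by operation type. The sketch below shows a hypothetical consumer-side dispatcher; the JSON field names ("operation", "table", "row") are illustrative assumptions, not the documented CES payload schema.

```python
import json

def dispatch_change_event(raw: bytes, handlers: dict) -> str:
    """Route one change event to a handler keyed by operation type.

    The event shape here (operation/table/row) is a hypothetical example;
    consult the CES documentation for the actual payload schema.
    """
    event = json.loads(raw)
    op = event["operation"]          # e.g. "insert", "update", "delete"
    handlers.get(op, lambda e: None)(event)
    return op

seen = []
handlers = {
    "insert": lambda e: seen.append(("ins", e["table"])),
    "delete": lambda e: seen.append(("del", e["table"])),
}
raw = json.dumps({"operation": "insert", "table": "dbo.Orders",
                  "row": {"OrderId": 42}}).encode()
dispatch_change_event(raw, handlers)
```

Because changes are published as transactions commit, a dispatcher like this can drive caches, materialized views, or alerting without any polling against the source database.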
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Summary Sentence
Azure SQL Managed Instance now supports streaming row-level data changes to Azure Event Hubs in near real time, enabling low-latency integration with event-driven architectures and analytics platforms through change event streaming in public preview.
Published: March 25, 2026 17:30:45 UTC Link: Generally Available: Custom time zone support for pg_cron via cron.timezone in Azure Database for PostgreSQL
Update ID: 558870 Data source: Azure Updates API
Categories: Launched, Databases, Hybrid + multicloud, Azure Database for PostgreSQL, Features
Summary:
What was updated
Custom time zone support for pg_cron is now generally available in Azure Database for PostgreSQL. You can now configure the cron.timezone server parameter.
Key changes or new features
The new cron.timezone parameter allows you to set the time zone used by pg_cron for scheduled jobs. Previously, pg_cron jobs always used the database server’s time zone, but now you can specify a custom time zone for job evaluation and execution. This enables more flexible and region-specific scheduling for automated tasks.
Target audience affected
Developers and IT professionals who use Azure Database for PostgreSQL and rely on pg_cron for scheduling maintenance, data processing, or other automated jobs.
Important notes if any
Changing the cron.timezone parameter affects all pg_cron jobs on the server. Ensure that scheduled job timings are reviewed and updated as needed to align with the new time zone setting. This feature helps support global applications with region-specific scheduling requirements.
For more details, see the official update: https://azure.microsoft.com/updates?id=558870
Details:
Azure Update Technical Report
Title: Generally Available: Custom time zone support for pg_cron via cron.timezone in Azure Database for PostgreSQL
Link: Azure Update
Background and Purpose of the Update
Azure Database for PostgreSQL provides managed PostgreSQL database services, including support for the pg_cron extension, which enables scheduling of database jobs. Previously, scheduled jobs using pg_cron operated based on the server’s default time zone, which could lead to discrepancies for global teams or applications requiring job execution aligned with specific local times. The purpose of this update is to introduce flexibility by allowing users to modify the time zone used by pg_cron, ensuring scheduled jobs run according to the desired local time zone.
Specific Features and Detailed Changes
The update introduces the ability to configure the cron.timezone server parameter in Azure Database for PostgreSQL. This parameter directly controls the time zone reference for pg_cron job scheduling. Users can now set cron.timezone to any valid time zone supported by PostgreSQL, such as ‘UTC’, ‘America/New_York’, or ‘Asia/Kolkata’. This change enables jobs scheduled via pg_cron to be evaluated and executed according to the specified time zone, rather than the server’s default.
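The effect of cron.timezone can be illustrated by evaluating the same wall-clock schedule under two zone settings. The sketch below uses only the Python standard library to show the timezone arithmetic involved; it does not reproduce pg_cron itself, and the dates are examples.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def scheduled_instant_utc(date_str: str, hour: int, minute: int, tz: str) -> datetime:
    """Return the UTC instant at which an 'HH:MM daily' job would fire
    on a given date, if cron.timezone were set to `tz`."""
    local = datetime.fromisoformat(date_str).replace(
        hour=hour, minute=minute, tzinfo=ZoneInfo(tz))
    return local.astimezone(timezone.utc)

# A '0 2 * * *' job (02:00 daily) fires at different UTC instants
# depending on the cron.timezone setting:
utc_run = scheduled_instant_utc("2026-03-25", 2, 0, "UTC")
ny_run = scheduled_instant_utc("2026-03-25", 2, 0, "America/New_York")
print(utc_run.isoformat())  # 2026-03-25T02:00:00+00:00
print(ny_run.isoformat())   # 2026-03-25T06:00:00+00:00 (EDT, UTC-4)
```

Note that zones with daylight saving time shift the UTC firing instant across the year, which is exactly the behavior to review when changing cron.timezone on a server with existing jobs.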
Technical Mechanisms and Implementation Methods
The cron.timezone parameter is a server-level setting that can be modified through the Azure portal, Azure CLI, or ARM templates. Once the cron.timezone parameter is set, pg_cron uses this value internally to interpret the timing of scheduled jobs.
Use Cases and Application Scenarios
Important Considerations and Limitations
The cron.timezone parameter affects all pg_cron jobs on the server; individual jobs cannot have separate time zones. Changing cron.timezone may require a server restart, depending on the configuration method.
Integration with Related Azure Services
The cron.timezone parameter can be managed as part of infrastructure-as-code workflows.
Summary Sentence
Azure Database for PostgreSQL now supports custom time zone configuration for pg_cron scheduled jobs via the cron.timezone parameter, enabling precise control over job execution timing to match local requirements and operational needs.
Published: March 25, 2026 17:30:45 UTC Link: Generally Available: PostgreSQL migration service supports compatible EDB workloads into Azure Database for PostgreSQL
Update ID: 558865 Data source: Azure Updates API
Categories: Launched, Databases, Hybrid + multicloud, Azure Database for PostgreSQL, Features
Summary:
What was updated
Azure Database for PostgreSQL migration service now supports migrations from EDB PostgreSQL (including EDB Postgres Extended Server) to Azure Database for PostgreSQL.
Key changes or new features
Developers and IT professionals can now securely and reliably migrate EDB PostgreSQL workloads to Azure Database for PostgreSQL. This update enables consolidation of PostgreSQL estates, including those running on EDB Postgres Extended Server, using Azure’s migration workflows.
Target audience affected
This update is relevant for database administrators, IT professionals, and developers managing EDB PostgreSQL databases who are planning to move workloads to Azure Database for PostgreSQL.
Important notes if any
The migration service supports secure and reliable workflows, helping organizations streamline their move to Azure. Ensure compatibility and review migration prerequisites before starting. This feature is generally available and suitable for production use.
For more details, see the official update:
https://azure.microsoft.com/updates?id=558865
Details:
Azure Update Technical Explanation
Title: Generally Available: PostgreSQL migration service supports compatible EDB workloads into Azure Database for PostgreSQL
Source: Azure Updates
This update announces the general availability of support for migrating EDB PostgreSQL workloads into Azure Database for PostgreSQL. The primary objective is to enable organizations to seamlessly migrate and consolidate their PostgreSQL estates, specifically those running on EDB Postgres Extended Server, into Azure’s managed PostgreSQL platform. This enhancement addresses the need for secure, reliable, and streamlined migration workflows for enterprises leveraging EDB PostgreSQL in their on-premises or other cloud environments.
Summary:
Azure Database for PostgreSQL now supports secure, reliable migration of compatible EDB Postgres Extended Server workloads, enabling organizations to consolidate and modernize their PostgreSQL estates using Azure’s managed database platform.
Published: March 25, 2026 17:30:45 UTC Link: Generally Available: PostgreSQL migration service support for Google AlloyDB into Azure Database for PostgreSQL
Update ID: 558851 Data source: Azure Updates API
Categories: Launched, Databases, Hybrid + multicloud, Azure Database for PostgreSQL, Features
Summary:
What was updated
The Azure Database for PostgreSQL migration service now offers generally available support for migrations from Google AlloyDB.
Key changes or new features
Google AlloyDB can now be used as a source for migrating PostgreSQL workloads directly into Azure Database for PostgreSQL. This update enables secure and reliable migration workflows, allowing organizations to consolidate their PostgreSQL databases from Google AlloyDB to Azure. The migration service supports minimal downtime and is designed to handle enterprise-scale migrations efficiently.
Target audience affected
This update is relevant for developers, database administrators, and IT professionals managing PostgreSQL workloads who are planning to move databases from Google AlloyDB to Azure Database for PostgreSQL.
Important notes if any
Ensure compatibility between the source (Google AlloyDB) and target (Azure Database for PostgreSQL) versions before migration. Review Azure’s migration documentation for best practices and prerequisites. This feature streamlines cloud migration projects, especially for organizations consolidating multi-cloud PostgreSQL estates into Azure. For more details, refer to the official Azure Update.
Details:
Background and Purpose of the Update
This update announces the general availability of support for Google AlloyDB as a source in Azure’s PostgreSQL migration service. The primary purpose is to enable organizations to migrate and consolidate their PostgreSQL workloads from Google AlloyDB to Azure Database for PostgreSQL. This enhancement addresses the need for seamless and secure migration paths for enterprises seeking to transition from Google Cloud’s managed PostgreSQL-compatible databases to Azure’s managed PostgreSQL offerings.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Summary Sentence
Google AlloyDB is now generally supported as a migration source for Azure Database for PostgreSQL, enabling secure and reliable workflows for organizations to migrate and consolidate their PostgreSQL workloads from Google Cloud to Azure.
Published: March 25, 2026 17:30:45 UTC Link: Generally Available: Online migration now uses the pgoutput plugin
Update ID: 558846 Data source: Azure Updates API
Categories: Launched, Databases, Hybrid + multicloud, Azure Database for PostgreSQL, Features
Summary:
What was updated
Azure Database for PostgreSQL online migration now uses the pgoutput plugin for logical replication.
Key changes or new features
The migration process now leverages PostgreSQL’s native pgoutput logical replication plugin, enhancing reliability and performance for online (minimal-downtime) migrations. This change improves compatibility with modern PostgreSQL deployments and tooling, aligning Azure’s migration process with the broader PostgreSQL ecosystem.
Target audience affected
Developers and IT professionals responsible for database migrations to Azure Database for PostgreSQL, especially those requiring minimal downtime and compatibility with PostgreSQL-native tools and workflows.
Important notes if any
Switching to pgoutput may require reviewing your migration workflows to ensure compatibility with the new logical replication method. This update is generally available and recommended for all new online migrations. Existing migrations that use older plugins can transition to benefit from improved performance and ecosystem alignment.
For more details, see the official update: https://azure.microsoft.com/updates?id=558846
Details:
Azure Update: Generally Available – Online migration now uses the pgoutput plugin
Background and Purpose of the Update
This update introduces the use of the pgoutput plugin for online (minimal-downtime) migrations in Azure Database for PostgreSQL. The primary goal is to enhance migration reliability and performance by leveraging PostgreSQL’s native logical replication framework. This aligns Azure’s migration tooling with the standard mechanisms used in modern PostgreSQL deployments, improving compatibility and reducing friction during migration processes.
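For context, pgoutput is PostgreSQL's built-in logical decoding plugin: a subscriber reads committed changes through a replication slot bound to the plugin, scoped by a publication. The migration service manages this setup for you; the sketch below merely composes the underlying SQL statements to illustrate the mechanism, with example object names.

```python
def pgoutput_setup_statements(publication: str, slot: str, tables: list) -> list:
    """Compose the server-side objects that pgoutput-based logical
    replication relies on: a publication listing the tables to stream,
    and a replication slot bound to the built-in 'pgoutput' plugin.

    These would run on the source server; the migration service creates
    equivalent objects automatically, so this is illustrative only.
    """
    return [
        f"CREATE PUBLICATION {publication} FOR TABLE {', '.join(tables)};",
        f"SELECT pg_create_logical_replication_slot('{slot}', 'pgoutput');",
    ]

for stmt in pgoutput_setup_statements("migration_pub", "migration_slot",
                                      ["public.orders", "public.customers"]):
    print(stmt)
```

Because pgoutput ships with the PostgreSQL server itself, no third-party decoding extension has to be installed on the source, which is the compatibility gain this update refers to.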
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Summary Sentence:
Azure Database for PostgreSQL online migrations now use the pgoutput plugin, enabling minimal-downtime migrations with improved reliability, performance, and native PostgreSQL compatibility.
This report was automatically generated - 2026-03-26 03:05:02 UTC