Generated on: November 21, 2025 Target period: Within the last 24 hours Processing mode: Details Mode Number of updates: 12 items
Published: November 20, 2025 20:00:10 UTC Link: Public Preview: Container network metrics filtering in Advanced Container Networking Services (ACNS) for AKS
Update ID: 523076 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS)
Summary:
What was updated
Advanced Container Networking Services (ACNS) for AKS now supports container network metrics filtering, available in public preview.
Key changes or new features
This update enables selective ingestion of network metrics in containerized environments, allowing developers and IT professionals to filter out unnecessary or excessive network telemetry data. This helps reduce storage costs and prevents dashboards from becoming cluttered with irrelevant metrics, improving monitoring efficiency and cost management.
Target audience affected
Developers and IT professionals managing Azure Kubernetes Service (AKS) clusters using ACNS who require optimized network monitoring and cost control.
Important notes if any
As this feature is in public preview, users should evaluate it in non-production environments before full deployment. Filtering capabilities can be configured to tailor network metric collection to specific operational needs, enhancing observability while controlling data volume and associated costs.
For more details, visit: https://azure.microsoft.com/updates?id=523076
Details:
The recent public preview release of container network metrics filtering in Azure Container Networking Services (ACNS) for Azure Kubernetes Service (AKS) addresses the challenge of managing voluminous network telemetry data generated by containerized workloads. In modern AKS environments, continuous collection of detailed network metrics from containers can result in excessive data ingestion, leading to inflated monitoring storage costs and reduced operational clarity due to dashboard clutter. This update introduces granular filtering capabilities that enable IT professionals to selectively collect and retain only relevant network metrics, optimizing both cost and observability.
Background and Purpose
As container adoption grows, network observability becomes critical for diagnosing connectivity, performance, and security issues within microservices architectures. ACNS provides the underlying networking infrastructure for AKS, including network policy enforcement and telemetry. However, the default behavior of collecting all available network metrics can overwhelm monitoring systems such as Azure Monitor and Log Analytics, increasing storage consumption and complicating data analysis. The purpose of this update is to empower users to define filters that limit metric ingestion to a subset of interest, thereby reducing noise and cost while maintaining actionable insights.
Specific Features and Detailed Changes
The key feature introduced is the ability to configure metric filters at the ACNS level. Users can specify criteria such as namespaces, pods, or specific network interfaces to include or exclude from metric collection. This filtering applies to network-related metrics like packet counts, byte transfer, connection states, and latency measurements. The configuration is exposed via AKS cluster settings or through Azure CLI and ARM templates, allowing integration into Infrastructure as Code (IaC) workflows. The filtering logic operates before metrics are sent to Azure Monitor, ensuring only filtered data is ingested and stored.
Technical Mechanisms and Implementation Methods
Under the hood, ACNS integrates with the Azure Monitor Metrics pipeline. The filtering mechanism is implemented as a pre-ingestion processing step within the ACNS telemetry agent running on AKS nodes. This agent intercepts network telemetry emitted by container network interfaces and applies user-defined filter rules. The filters are declarative, supporting label selectors and resource identifiers, enabling precise targeting. The filtered metrics are then forwarded to Azure Monitor Metrics and Log Analytics workspaces configured for the cluster. This approach minimizes network overhead and storage usage by reducing the volume of telemetry data transmitted and retained.
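The pre-ingestion, label-selector-based filtering described above can be sketched as follows. This is a simplified illustration of the decision logic only; the rule shape and field names are assumptions, not ACNS's actual configuration schema.

```python
# Sketch of pre-ingestion metric filtering with label selectors.
# The rule shape here is hypothetical; ACNS's real schema may differ.

def matches(selector: dict, labels: dict) -> bool:
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def filter_metrics(metrics: list, include: dict, exclude: dict) -> list:
    """Keep metrics that match `include` (if set) and do not match `exclude`."""
    kept = []
    for m in metrics:
        labels = m.get("labels", {})
        if include and not matches(include, labels):
            continue  # drop metrics outside the include scope
        if exclude and matches(exclude, labels):
            continue  # drop explicitly excluded metrics
        kept.append(m)
    return kept

metrics = [
    {"name": "packets_total", "labels": {"namespace": "payments", "pod": "api-0"}},
    {"name": "packets_total", "labels": {"namespace": "dev-sandbox", "pod": "scratch-1"}},
]
# Exclude the noisy sandbox namespace before anything is sent to Azure Monitor.
kept = filter_metrics(metrics, include={}, exclude={"namespace": "dev-sandbox"})
```

Because the drop happens before transmission, excluded series never reach Azure Monitor, which is what produces the storage-cost savings described above.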
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
This update tightly integrates with Azure Monitor Metrics and Log Analytics, the primary telemetry ingestion and analysis services in Azure. Filtered metrics are sent to these services, enabling continued use of Azure Monitor dashboards, alerts, and workbooks with reduced data volume. The filtering configuration can be managed via Azure CLI, ARM templates, or Azure Policy, facilitating automation and governance. Additionally, this feature complements Azure Network Watcher and Azure Security Center by refining the scope of network telemetry feeding into these services.
In summary, the public preview of container network metrics filtering in ACNS for AKS gives teams granular, pre-ingestion control over network telemetry, reducing monitoring storage costs and dashboard clutter while preserving the observability signals that matter.
Published: November 20, 2025 18:45:04 UTC Link: Generally Available: MCP support for AI toolchain operator add-on in AKS
Update ID: 523152 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS)
Summary:
What was updated
The AI Toolchain Operator add-on for Azure Kubernetes Service (AKS) now has Generally Available (GA) support for Model Context Protocol (MCP) within KAITO inference workspaces.
Key changes or new features
The update enables seamless integration of dynamic model management and tool calling via MCP, enhancing AI workload orchestration in AKS. This addresses challenges related to dynamic model context handling during inference, improving operational efficiency and scalability for AI deployments.
Target audience affected
Developers and IT professionals working with AI/ML workloads on AKS, particularly those leveraging KAITO inference workspaces and requiring robust model lifecycle and tool integration capabilities.
Important notes if any
The GA release signifies production readiness, encouraging adoption in enterprise environments. Users should review MCP integration best practices to maximize benefits and ensure compatibility with existing AI toolchain components.
Details:
The recent Azure update announces the general availability (GA) of Model Context Protocol (MCP) support for the AI Toolchain Operator add-on within Azure Kubernetes Service (AKS). This enhancement is designed to streamline and optimize AI model deployment and inference workflows by integrating MCP into KAITO inference workspaces, thereby addressing critical challenges in dynamic model management and tool interoperability.
Background and Purpose
As AI workloads become increasingly complex, managing inference pipelines that involve multiple models and tools dynamically is a significant challenge. The AI Toolchain Operator add-on in AKS facilitates the orchestration of AI model lifecycle and inference tasks within Kubernetes environments. However, prior to this update, there were limitations in dynamically linking models with associated tools and context during inference. The introduction of MCP support aims to standardize and simplify communication between models and tools, enabling more flexible, context-aware AI workflows.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
The implementation leverages Kubernetes Custom Resource Definitions (CRDs) to represent AI models, tools, and their contexts. The AI Toolchain Operator watches these CRDs and orchestrates the deployment and execution of inference workloads. MCP metadata is embedded within these resources, enabling tools to query and utilize model context dynamically. Communication between components uses standard Kubernetes APIs and custom MCP-compliant interfaces, ensuring extensibility and interoperability. The operator also integrates with Azure Container Registry and Azure Monitor for image management and telemetry, respectively.
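For orientation, a KAITO inference workspace is declared through the operator's Workspace custom resource. The manifest below follows KAITO's published examples; the instance type and preset name are illustrative, and the MCP-specific configuration surface is not shown because the announcement does not detail its schema.

```yaml
# Illustrative KAITO inference workspace (field names follow KAITO's
# upstream examples; MCP-specific fields are omitted, as their schema
# is not described in this announcement).
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-phi-3-mini
resource:
  instanceType: Standard_NC24ads_A100_v4   # example GPU SKU
  labelSelector:
    matchLabels:
      apps: phi-3-mini
inference:
  preset:
    name: phi-3-mini-4k-instruct           # example model preset
```

The operator watches resources of this kind and provisions the GPU nodes and inference deployment; with MCP support, the resulting endpoint can additionally expose model context to MCP-aware tools.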
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Published: November 20, 2025 18:30:02 UTC Link: Generally Available: Cluster-wide Cilium network policy with Azure CNI powered by Cilium for AKS
Update ID: 523120 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS)
Summary:
What was updated
Azure Kubernetes Service (AKS) now offers Generally Available (GA) support for cluster-wide Cilium network policies with Azure CNI powered by Cilium.
Key changes or new features
This update enables platform teams to define and enforce consistent network policies across all Kubernetes namespaces within an AKS cluster using Cilium’s advanced networking and security capabilities integrated with Azure CNI. It addresses the complexity of managing multi-tenant network policies by providing a unified, cluster-wide policy model. This improves security posture, simplifies policy management, and enhances observability and troubleshooting through Cilium’s eBPF-based datapath.
Target audience affected
Developers, DevOps, and IT professionals managing AKS clusters, especially those operating multi-tenant or large-scale Kubernetes environments requiring consistent and scalable network security policies.
Important notes if any
Users should ensure their AKS clusters are running compatible versions to leverage this feature. Transitioning to cluster-wide policies may require reviewing existing namespace-scoped policies to avoid conflicts. This GA release signals production readiness and Microsoft’s commitment to integrating Cilium’s capabilities deeply with Azure networking.
Reference: https://azure.microsoft.com/updates?id=523120
Details:
The recent general availability of cluster-wide Cilium network policy support for Azure Kubernetes Service (AKS) clusters using Azure CNI powered by Cilium addresses a critical challenge in Kubernetes network security management by enabling consistent, scalable, and fine-grained network policy enforcement across all namespaces within a cluster. Traditionally, managing network policies on a per-namespace basis in multi-tenant or large-scale Kubernetes environments has been complex and error-prone, often leading to inconsistent security postures and operational overhead for platform teams. This update introduces a unified approach to defining and enforcing network policies cluster-wide, simplifying security governance and improving operational efficiency.
From a feature perspective, this update extends the Azure CNI integration with Cilium to support cluster-wide network policies, allowing administrators to define network policies that apply uniformly across all namespaces rather than duplicating namespace-scoped policies in every namespace. Cilium, an open-source eBPF-based networking and security solution, leverages the Linux kernel’s extended Berkeley Packet Filter (eBPF) technology to provide high-performance, programmable packet processing. By integrating Cilium’s advanced capabilities with Azure CNI, AKS clusters benefit from enhanced network observability, security, and scalability. The cluster-wide policies can specify ingress and egress rules that control traffic flows between pods, namespaces, and external endpoints, enabling zero-trust network segmentation at scale.
Technically, the implementation relies on Cilium’s eBPF datapath programs that run within the Linux kernel on each node, intercepting and enforcing network policies at the packet level with minimal latency. The Azure CNI plugin, responsible for IP address management and routing in AKS, now works in tandem with Cilium’s datapath to apply these cluster-wide policies consistently across nodes and namespaces. The policy definitions use Kubernetes Custom Resource Definitions (CRDs) extended for cluster-wide scope, allowing declarative management via standard Kubernetes tools (kubectl, Helm, GitOps pipelines). This architecture ensures that network policies are enforced natively within the kernel, avoiding the overhead of proxy-based solutions and enabling scalability to large clusters with thousands of nodes and pods.
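Concretely, cluster-wide scope is expressed through Cilium's CiliumClusterwideNetworkPolicy CRD, which carries no namespace and selects endpoints across the whole cluster. The baseline rule below is a generic example drawn from Cilium's documented policy model, not from the announcement itself:

```yaml
# Cluster-wide baseline: allow ingress only from endpoints inside the
# cluster, for every pod in every namespace. Generic example, not taken
# from the Azure announcement.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: baseline-intra-cluster-only
spec:
  description: Allow ingress only from endpoints inside the cluster
  endpointSelector: {}        # empty selector = all pods, all namespaces
  ingress:
    - fromEntities:
        - cluster
```

Because the resource is cluster-scoped, a single manifest replaces what would otherwise be one NetworkPolicy object per namespace, which is the management simplification described above.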
Use cases for this update include multi-tenant AKS clusters where platform teams need to enforce baseline security policies across all namespaces to prevent lateral movement and unauthorized access, as well as large-scale environments requiring consistent network segmentation without the complexity of managing numerous namespace-scoped policies. It also benefits DevOps teams seeking to implement zero-trust networking models by defining global ingress and egress controls that apply uniformly, simplifying compliance and audit processes.
Important considerations include ensuring that cluster-wide policies are carefully designed to avoid unintended traffic disruptions, as these policies override or complement namespace-scoped policies. Platform teams should validate policy rules in staging environments before production rollout. Additionally, while Cilium’s eBPF-based enforcement offers high performance, it requires Linux kernel versions and node configurations compatible with eBPF features; thus, cluster node OS and kernel versions should be verified for compatibility. Monitoring and troubleshooting tools provided by Cilium and Azure Monitor should be leveraged to gain visibility into policy enforcement and network flows.
Integration with related Azure services is seamless: AKS clusters using Azure CNI powered by Cilium can integrate with Azure Monitor for container insights, enabling detailed network telemetry and alerting. Azure Policy can be used to enforce compliance of network policy CRDs, and Azure Active Directory integration supports RBAC for managing network policy resources. Furthermore, this update complements Azure Security Center’s Kubernetes threat detection capabilities by providing stronger network segmentation controls.
In summary, the general availability of cluster-wide Cilium network policy support in AKS with Azure CNI powered by Cilium delivers a robust, scalable, and efficient solution for managing Kubernetes network security at the cluster level, empowering platform and security teams to implement consistent, high-performance network policies across multi-tenant and large-scale AKS environments with enhanced observability and integration into Azure’s security and monitoring ecosystem.
Published: November 20, 2025 18:30:02 UTC Link: Generally Available: Local redirect policy in Azure CNI powered by Cilium for AKS
Update ID: 523081 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS)
Summary:
What was updated
Azure CNI powered by Cilium for Azure Kubernetes Service (AKS) now offers generally available support for the local redirect policy feature.
Key changes or new features
The local redirect policy optimizes network traffic by redirecting pod-to-pod communication locally within the same node, reducing cross-node traffic. This significantly lowers latency and improves performance for high-scale AKS workloads by minimizing unnecessary routing through other nodes or network hops.
Target audience affected
Developers and IT professionals managing large-scale AKS clusters with performance-sensitive applications will benefit most. Network engineers and cloud architects focusing on Kubernetes networking and Azure CNI configurations should consider adopting this feature.
Important notes if any
Enabling local redirect policy requires Azure CNI powered by Cilium integration in AKS clusters. It is particularly valuable for workloads with heavy east-west traffic patterns. Users should validate compatibility and test workloads to ensure optimal performance gains. For detailed implementation guidance, refer to the official Azure documentation linked in the update.
Details:
The recent general availability of the local redirect policy in Azure CNI powered by Cilium for Azure Kubernetes Service (AKS) addresses critical performance challenges faced by high-scale AKS workloads, particularly those arising from inefficient cross-node traffic routing. Traditionally, when pods communicate across nodes in AKS clusters, network traffic often traverses multiple hops, leading to increased latency and reduced throughput. This update introduces a local redirect policy that optimizes traffic flow by ensuring that intra-node pod-to-pod communications are handled locally whenever possible, thereby minimizing unnecessary cross-node routing.
From a feature standpoint, the local redirect policy enhances the Azure CNI plugin integrated with Cilium, an open-source networking and security layer for Kubernetes. This policy dynamically redirects traffic destined for pods on the same node to local endpoints, bypassing the default routing path that would otherwise send traffic through the node’s network stack and potentially across nodes. This results in significant reductions in latency and improvements in network throughput for pod-to-pod communications within the same node. The implementation leverages Cilium’s eBPF (extended Berkeley Packet Filter) capabilities, which allow for high-performance, kernel-level packet processing and redirection without the overhead of user-space proxies or additional hops.
Technically, the local redirect policy is configured as part of the Azure CNI configuration in AKS clusters using Cilium as the CNI provider. When enabled, Cilium programs eBPF hooks into the Linux kernel networking stack on each node, intercepting traffic flows and applying the redirect logic based on pod IP addresses and node locality. This mechanism ensures that traffic destined for pods residing on the same node is locally redirected, reducing the need for encapsulation or routing through the Azure virtual network infrastructure. The policy is managed through Kubernetes Custom Resource Definitions (CRDs) and can be fine-tuned via Cilium network policies, providing granular control over traffic redirection behavior.
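Cilium exposes this capability through the CiliumLocalRedirectPolicy CRD. The canonical upstream example redirects cluster DNS traffic to a node-local DNS cache; the names and labels below follow that documented example and are illustrative rather than specific to this announcement:

```yaml
# Redirect traffic for the kube-dns service to a DNS cache pod running on
# the same node, avoiding a cross-node hop. Names/labels follow Cilium's
# upstream node-local DNS example and are illustrative.
apiVersion: cilium.io/v2
kind: CiliumLocalRedirectPolicy
metadata:
  name: nodelocaldns
  namespace: kube-system
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: kube-dns
      namespace: kube-system
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
```

When a matching backend pod exists on the node, the eBPF datapath rewrites the destination to that local endpoint; otherwise traffic follows the normal service path.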
Use cases for this update are particularly relevant for large-scale AKS deployments running latency-sensitive applications such as real-time analytics, financial services, gaming, and microservices architectures where inter-pod communication is frequent and performance-critical. By reducing network latency and improving throughput, the local redirect policy can enhance overall application responsiveness and resource efficiency. It also benefits scenarios involving service meshes or complex network policies where minimizing network hops can reduce overhead and simplify troubleshooting.
Important considerations include ensuring that the AKS cluster is running a compatible version of Azure CNI and Cilium that supports the local redirect policy feature. Network administrators should validate that enabling local redirect does not conflict with existing network policies or security configurations, as the redirection occurs at the kernel level and may affect packet inspection or monitoring tools. Additionally, while the policy optimizes intra-node traffic, cross-node traffic still follows standard routing paths, so overall cluster network design and node distribution remain important for performance tuning.
Integration-wise, this update complements other Azure networking services such as Azure Virtual Network, Azure Network Security Groups (NSGs), and Azure Monitor for network insights. The local redirect policy works seamlessly within the Azure VNet infrastructure, ensuring that pod networking remains consistent and secure while benefiting from enhanced performance. It also integrates with Azure Policy and Azure Arc for governance and compliance in hybrid or multi-cloud Kubernetes deployments.
In summary, the general availability of the local redirect policy in Azure CNI powered by Cilium for AKS provides a kernel-level traffic optimization that significantly improves intra-node pod communication performance by leveraging eBPF-based local redirection, making it a valuable enhancement for high-scale, latency-sensitive Kubernetes workloads on Azure.
Published: November 20, 2025 18:15:25 UTC Link: Generally Available: Layer 7 policy with Advanced Container Networking Services (ACNS) for AKS
Update ID: 523115 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS)
Summary:
What was updated
The Layer 7 policy feature in Advanced Container Networking Services (ACNS) for Azure Kubernetes Service (AKS) is now generally available.
Key changes or new features
The Layer 7 Policy enables granular, application-layer traffic control within microservices architectures. Developers and IT professionals can now define and enforce fine-grained routing, filtering, and security policies based on HTTP/S attributes such as headers, methods, paths, and query parameters. This enhances traffic management capabilities beyond traditional Layer 3/4 controls, supporting scenarios like canary deployments, A/B testing, and zero-trust security models within AKS clusters.
Target audience affected
This update primarily impacts developers building microservices on AKS and IT/network administrators responsible for securing and managing containerized workloads. It benefits teams requiring advanced traffic governance and security at the application protocol level.
Important notes if any
Since Layer 7 Policy is now generally available, it is production-ready and supported by Microsoft. Users should review the updated ACNS documentation to implement these policies effectively and consider potential impacts on cluster networking and application performance. Integration with existing AKS networking configurations and security practices is recommended to maximize benefits.
Details:
The recent Azure update announces the general availability of Layer 7 policy enforcement within Advanced Container Networking Services (ACNS) for Azure Kubernetes Service (AKS), addressing the critical need for granular traffic control in microservices-based architectures. This enhancement enables IT professionals to implement fine-grained, application-layer (Layer 7) traffic policies directly within the AKS networking stack, improving security, compliance, and operational control over containerized workloads.
Background and Purpose
As organizations increasingly adopt microservices and containerized applications on AKS, managing east-west traffic between services becomes complex. Traditional Layer 3/4 network policies (IP and port-based) are often insufficient for controlling traffic based on application-level attributes such as HTTP methods, URLs, headers, or payload content. To meet these requirements, Azure introduced Layer 7 policy capabilities in ACNS, allowing detailed inspection and enforcement of traffic rules at the application layer, thus enhancing security posture and traffic governance.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
The Layer 7 policy feature is implemented as part of ACNS, which extends the Azure CNI plugin for AKS. ACNS operates at the pod network interface level, intercepting and inspecting traffic flows. For Layer 7 inspection, ACNS integrates with a lightweight proxy or eBPF-based filtering mechanism capable of parsing HTTP/S traffic inline. Policies are compiled from Kubernetes CRDs into efficient filtering rules applied at the datapath, ensuring minimal performance overhead. TLS traffic inspection is supported via integration with service mesh sidecars or by terminating TLS at ingress points, depending on deployment architecture.
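As an illustration of the HTTP-attribute matching described above, a Layer 7 rule in Cilium's policy model attaches HTTP conditions to a port rule. The manifest below is a generic CiliumNetworkPolicy example of this pattern (labels, port, and paths are invented for illustration):

```yaml
# Allow only GET requests to /api/v1/* from frontend pods to backend pods;
# all other HTTP requests on the port are denied. Labels, port, and paths
# are illustrative.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-api
  namespace: shop
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/v1/.*"
```

Tightening the `method` or `path` expressions is how scenarios such as canary routing or zero-trust API allowlisting are expressed at Layer 7 rather than with IP/port rules alone.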
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Published: November 20, 2025 18:15:25 UTC Link: Generally Available: Azure NetApp Files single file restore from backup
Update ID: 522077 Data source: Azure Updates API
Categories: Launched, Storage, Azure NetApp Files
Summary:
What was updated
Azure NetApp Files now supports single file restore directly from backup vaults.
Key changes or new features
Users can restore individual files from backups without restoring the entire volume. This granular restore capability reduces recovery time and operational costs by avoiding full volume restores when only specific files are needed.
Target audience affected
Developers and IT professionals managing Azure NetApp Files workloads who require efficient data recovery options, including those responsible for backup and disaster recovery strategies.
Important notes if any
This feature is generally available, ensuring production readiness and support. It enhances data protection workflows by enabling faster, more cost-effective restores, improving business continuity and minimizing downtime.
Details:
The recent Azure update announces the general availability of the single file restore capability from Azure NetApp Files (ANF) backups, enabling IT professionals to recover individual files directly from backup snapshots without restoring entire volumes. This enhancement addresses the operational inefficiencies and resource overhead associated with volume-level restores, providing a more granular, cost-effective, and time-saving data recovery option.
Background and Purpose:
Azure NetApp Files is a high-performance, enterprise-grade file storage service optimized for workloads requiring low latency and high throughput. Previously, backup and restore operations in ANF were volume-centric, meaning that to recover lost or corrupted data, administrators had to restore entire volumes from backup snapshots. This approach often resulted in unnecessary downtime, increased storage consumption, and operational complexity, especially when only a few files needed recovery. The introduction of single file restore from backup aims to streamline data recovery processes by allowing selective restoration of individual files, thereby minimizing disruption and resource usage.
Specific Features and Detailed Changes:
Technical Mechanisms and Implementation Methods:
Under the hood, Azure NetApp Files backups are based on snapshot technology that captures point-in-time, read-only copies of volumes. The single file restore feature extends this by enabling file-level access to these snapshots. When a restore request is made for specific files:
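A single file restore request of this shape might be issued through the Azure CLI. The command and parameter names below are illustrative assumptions to show the granularity of the operation; consult the `az netappfiles` reference for the exact syntax before use.

```shell
# Illustrative sketch only: restore two specific files from an ANF backup
# into the live volume, without restoring the whole volume.
# Command group, flags, and names below are assumptions, not verified syntax.
az netappfiles volume backup restore-files \
  --resource-group rg-anf \
  --account-name anf-account \
  --pool-name pool1 \
  --name vol1 \
  --backup-name dailybackup-2025-11-19 \
  --file-list "/data/report.xlsx" "/data/archive/notes.txt" \
  --destination-path "/restored"
```

The key point is that the request names individual file paths and a destination inside the volume, so only those files are copied out of the backup snapshot.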
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
Published: November 20, 2025 17:00:13 UTC Link: Generally Available: DNS security policy Threat Intelligence feed
Update ID: 530183 Data source: Azure Updates API
Categories: Launched, Networking, Azure DNS
Summary:
What was updated
Azure DNS Security Policy now includes a generally available Threat Intelligence feed.
Key changes or new features
The update introduces a Microsoft-managed Threat Intelligence feed integrated into DNS security policies. This feed helps detect and block DNS queries to known malicious domains, enhancing protection against cyberattacks that often start with DNS lookups. The feed is continuously updated by Microsoft, so protection keeps pace with newly identified threats.
Target audience affected
Developers and IT professionals responsible for network security, DNS management, and threat mitigation in Azure environments will benefit from this feature. It is particularly useful for security teams aiming to reduce exposure to phishing, malware, and other DNS-based threats.
Important notes if any
Implementing the Threat Intelligence feed requires enabling it within Azure DNS security policies. This feature leverages Microsoft’s extensive threat data, reducing the need for manual threat list management. It is recommended to monitor alerts and logs generated by this feed to respond promptly to potential threats. For detailed configuration and best practices, refer to the official Azure documentation.
Link for more information: https://azure.microsoft.com/updates?id=530183
Details:
The Azure update announcing the general availability of the DNS Security Policy Threat Intelligence feed introduces a managed, continuously updated threat intelligence feed designed to enhance DNS-level security by blocking access to known malicious domains. This feature leverages Microsoft’s extensive threat intelligence to proactively protect enterprise environments from attacks that typically begin with DNS queries, such as phishing, malware distribution, and command-and-control communications.
Background and Purpose:
DNS is a critical component of network infrastructure, translating domain names into IP addresses. However, it is also a frequent vector for cyberattacks, as adversaries use DNS queries to reach malicious domains or exfiltrate data. Traditional DNS filtering solutions require manual updates or third-party feeds, which may lag behind emerging threats. The purpose of this update is to provide Azure customers with a native, integrated, and automatically maintained threat intelligence feed that blocks DNS requests to domains identified as malicious by Microsoft’s global security telemetry.
Specific Features and Detailed Changes:
Technical Mechanisms and Implementation Methods:
The Threat Intelligence feed operates by maintaining a list of domains categorized as malicious based on Microsoft’s global threat telemetry, including signals from Microsoft Defender, Azure Sentinel, and other Microsoft security products. When a DNS query is processed by Azure Firewall DNS Proxy or Azure DNS Private Resolver configured with DNS security policies, the queried domain is checked against the threat intelligence feed. If a match is found, the configured policy action (e.g., block) is enforced. This mechanism leverages Azure’s scalable cloud infrastructure to perform DNS inspection with minimal latency impact. The feed is delivered as a managed service, abstracting complexity from customers and ensuring continuous updates.
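The decision path described above, matching a queried domain (and its parent domains) against a threat list and applying the configured action, can be sketched as follows. The feed contents and matching behavior here are illustrative, not a description of Microsoft's internal implementation.

```python
# Sketch of a DNS security policy decision: check the queried name and each
# parent domain against a threat set, then apply the configured action.
# Feed contents and suffix-matching behavior are illustrative assumptions.

BLOCK, ALLOW = "block", "allow"

def evaluate(query: str, threat_feed: set, action: str = BLOCK) -> str:
    """Return the policy action for a DNS query."""
    labels = query.lower().rstrip(".").split(".")
    # Check "c2.malicious.example", then "malicious.example", then "example".
    for i in range(len(labels)):
        if ".".join(labels[i:]) in threat_feed:
            return action
    return ALLOW

feed = {"malicious.example"}
verdicts = [evaluate(q, feed) for q in ("c2.malicious.example", "contoso.com")]
```

Checking parent domains as well as the exact name means a feed entry for a malicious apex domain also covers subdomains an attacker generates beneath it.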
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
Published: November 20, 2025 16:00:12 UTC Link: Generally Available: Azure Sphere OS version 25.10 is now available
Update ID: 522390 Data source: Azure Updates API
Categories: Launched, Internet of Things, Azure Sphere
Summary:
What was updated
Azure Sphere OS has been updated to version 25.10 and is now generally available in the Retail feed.
Key changes or new features
This release focuses solely on updates to the Azure Sphere OS itself. No updates to the Azure Sphere SDK are included in this version. Devices connected to the internet will automatically receive the OS update via the cloud.
Target audience affected
Developers and IT professionals managing Azure Sphere devices who rely on the OS for secure IoT device operation.
Important notes if any
Since the SDK remains unchanged, developers do not need to update their development environment to support this OS version. Ensure devices maintain internet connectivity to receive the update seamlessly. Review device-specific release notes for any security or performance improvements included in this OS update.
Details:
Azure Sphere OS version 25.10 has reached general availability and is now accessible via the Retail feed, representing a targeted update to the operating system component of Azure Sphere without accompanying SDK changes. This update is designed to enhance the security, reliability, and functionality of Azure Sphere devices by delivering incremental OS improvements directly through cloud-based updates to internet-connected devices.
Background and Purpose:
Azure Sphere is a secured, high-level application platform with built-in communication and security features for IoT devices. The OS is a critical element that ensures device integrity, security, and connectivity. Version 25.10 continues Microsoft’s commitment to providing robust security and operational stability by refining the OS layer, addressing vulnerabilities, and improving system performance without requiring developers to update their development environment or SDK.
Specific Features and Detailed Changes:
While the update does not introduce new SDK features or APIs, it includes important under-the-hood enhancements such as security patches, kernel updates, improved device management capabilities, and possibly optimizations in networking stacks or system services. These changes help protect devices from emerging threats, improve system responsiveness, and enhance compatibility with the Azure Sphere Security Service. The release notes typically detail fixes for known vulnerabilities, performance improvements, and reliability enhancements.
Technical Mechanisms and Implementation Methods:
Azure Sphere OS updates are delivered over-the-air (OTA) via the Azure Sphere Security Service. Devices connected to the internet automatically receive the update from the cloud, ensuring seamless deployment without manual intervention. The update process is designed to be secure and resilient, employing cryptographic verification to prevent tampering and rollback protections to maintain device integrity. The update mechanism supports staged rollouts and can be monitored through Azure Sphere CLI or Azure Sphere Security Service dashboards.
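The verify-then-apply flow described above can be sketched in a few lines. This is an illustrative model only: real Azure Sphere updates use asymmetric signatures anchored in a hardware root of trust, and the HMAC key, image payload, and version numbers below are invented for the example.

```python
import hashlib
import hmac

# Illustrative only: Azure Sphere uses asymmetric signatures and a hardware
# root of trust; this sketch substitutes an HMAC to show the shape of the checks.
SIGNING_KEY = b"demo-key"  # hypothetical key, for the sketch only

def sign(payload: bytes) -> str:
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

class Device:
    def __init__(self, os_version: tuple):
        self.os_version = os_version

    def apply_update(self, payload: bytes, signature: str, new_version: tuple) -> bool:
        # 1. Cryptographic verification: reject tampered images.
        if not hmac.compare_digest(sign(payload), signature):
            return False
        # 2. Rollback protection: never install an older OS version.
        if new_version <= self.os_version:
            return False
        self.os_version = new_version
        return True

device = Device(os_version=(25, 8))
image = b"azure-sphere-os-25.10"
assert device.apply_update(image, sign(image), (25, 10))          # valid update
assert not device.apply_update(image, "bad-signature", (25, 11))  # tampered image rejected
assert not device.apply_update(image, sign(image), (25, 9))       # rollback blocked
assert device.os_version == (25, 10)
```

The two guards mirror the properties the OTA mechanism provides: tampered images never install, and a device cannot be downgraded to a version with known vulnerabilities.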
Use Cases and Application Scenarios:
This update is critical for enterprises deploying Azure Sphere-based IoT solutions in environments requiring continuous security compliance, such as industrial automation, smart appliances, and critical infrastructure monitoring. By maintaining devices on the latest OS version, organizations can reduce the risk of security breaches, ensure compliance with regulatory standards, and improve device uptime and reliability. The update is particularly relevant for large-scale deployments where manual updates would be impractical.
Integration with Related Azure Services:
Azure Sphere OS version 25.10 continues to integrate tightly with the Azure Sphere Security Service, which manages device authentication, update distribution, and threat detection. The OS improvements enhance the overall security posture when combined with Azure IoT Hub for device telemetry and Azure Defender for IoT for advanced threat protection. Organizations leveraging Azure Sphere alongside Azure IoT services benefit from a comprehensive, secure IoT solution stack that simplifies device lifecycle management and security compliance.
In summary, Azure Sphere OS version 25.10 delivers essential security and reliability enhancements through a cloud-managed OS update process, reinforcing the secure foundation of Azure Sphere devices without requiring SDK changes, thereby enabling IT professionals to maintain secure and stable IoT deployments efficiently.
Published: November 20, 2025 15:00:59 UTC Link: Generally Available: Trusted Launch is now supported for Arm64 Marketplace Images
Update ID: 529797 Data source: Azure Updates API
Categories: Launched, Compute, Virtual Machines
Summary:
What was updated
Azure Marketplace now offers Arm64 virtual machine images with Trusted Launch support generally available.
Key changes or new features
Trusted Launch, a security feature that provides secure boot, virtual TPM, and integrity monitoring, is now enabled for Arm64-based Marketplace images. This allows users to leverage Arm64 VMs’ cost and performance advantages while maintaining enhanced security protections.
Target audience affected
Developers and IT professionals deploying Azure VMs who require both cost-efficient Arm64 architecture and advanced security features. This is particularly relevant for workloads that benefit from Arm64 performance and need compliance or security hardening through Trusted Launch.
Important notes if any
This update enables a combination of security and efficiency previously unavailable, facilitating secure deployment of Arm64 workloads in Azure. Users should verify compatibility of their applications with Arm64 architecture and consider Trusted Launch requirements when selecting VM images.
For more details, visit: https://azure.microsoft.com/updates?id=529797
Details:
The recent Azure update announces the general availability of Trusted Launch support for Arm64-based Marketplace images, enabling customers to deploy Arm64 virtual machines (VMs) with enhanced security guarantees provided by Trusted Launch. This update combines the cost-efficiency and performance advantages of Arm64 architecture with Azure’s advanced VM security features, addressing the growing demand for secure, high-performance cloud workloads on Arm64 platforms.
Background and Purpose:
Azure has been expanding its Arm64 VM offerings to provide customers with more cost-effective and energy-efficient compute options, particularly suited for scale-out workloads, web servers, and containerized applications. However, security remains a paramount concern, especially for enterprise and regulated workloads. Trusted Launch is an Azure security feature that provides a secure boot process, virtualized Trusted Platform Module (vTPM), and integrity monitoring to protect VMs from firmware and boot-level malware attacks. Prior to this update, Trusted Launch was primarily available for x86 VMs. Extending Trusted Launch to Arm64 Marketplace images addresses the need to secure Arm64 workloads with the same robust protections, enabling broader adoption of Arm64 in sensitive environments.
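The protections Trusted Launch combines can be illustrated with a toy measured-boot chain: each boot stage is hashed and "extended" into a measurement register, so any tampered stage changes the final value that integrity monitoring compares against a known-good baseline. The stage payloads and register size below are invented for the sketch; a real vTPM uses dedicated PCRs and attestation protocols.

```python
import hashlib

# Toy measured-boot chain in the spirit of what Trusted Launch's vTPM enables.
def extend(register: bytes, stage: bytes) -> bytes:
    # TPM-style extend: new = H(old || H(stage))
    return hashlib.sha256(register + hashlib.sha256(stage).digest()).digest()

def measure_boot(stages: list) -> bytes:
    register = b"\x00" * 32  # register starts zeroed at power-on
    for stage in stages:
        register = extend(register, stage)
    return register

good_chain = [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]
expected = measure_boot(good_chain)  # recorded "golden" measurement

# Integrity monitoring compares the measured value against the expected one.
assert measure_boot(good_chain) == expected
tampered = [b"firmware-v1", b"bootloader-EVIL", b"kernel-v1"]
assert measure_boot(tampered) != expected
```

Because each extend folds the previous value into the next, modifying any single stage, early or late, changes the final measurement, which is what makes boot-level malware detectable.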
Technical Mechanisms and Implementation Methods:
Trusted Launch integrates several security technologies: UEFI secure boot, which validates boot loaders and OS kernels against trusted signatures; a virtualized Trusted Platform Module (vTPM), which enables measured boot and attestation; and boot integrity monitoring, which detects firmware- and boot-level tampering. With this release, these protections apply to Arm64 Marketplace images in the same way as to their x86 counterparts.
Published: November 20, 2025 13:45:47 UTC Link: Public Preview: Azure NetApp Files migration assistant (portal support)
Update ID: 525620 Data source: Azure Updates API
Categories: In preview, Storage, Azure NetApp Files
Summary:
What was updated
Azure NetApp Files (ANF) migration assistant now supports portal integration in public preview.
Key changes or new features
The migration assistant leverages ONTAP’s SnapMirror replication engine to enable efficient, cost-effective data migration from on-premises environments, Cloud Volumes ONTAP (CVO), or other cloud providers directly to Azure NetApp Files. The new portal support simplifies management by allowing users to initiate and monitor migrations through the Azure portal UI, improving usability and operational visibility.
Target audience affected
Developers, IT professionals, and cloud architects responsible for data migration, storage management, and hybrid cloud deployments using Azure NetApp Files.
Important notes if any
This feature is currently in public preview, so users should evaluate it in test environments before production use. Familiarity with ONTAP SnapMirror and Azure NetApp Files is recommended to maximize migration efficiency and minimize downtime. The portal integration aims to streamline migration workflows but may have limitations compared to CLI or API-based approaches during preview.
Details:
The Azure NetApp Files migration assistant public preview introduces portal-based support for leveraging NetApp ONTAP’s SnapMirror replication technology to facilitate efficient, reliable, and cost-effective data migration from on-premises NetApp systems, Cloud Volumes ONTAP (CVO), or other cloud providers directly into Azure NetApp Files (ANF). This update aims to simplify and streamline the migration process by integrating migration management into the Azure portal, enabling IT professionals to orchestrate and monitor data replication tasks within a familiar Azure management interface.
Background and Purpose
Enterprises increasingly adopt Azure NetApp Files for high-performance, enterprise-grade file storage in the cloud. Migrating large datasets from existing NetApp ONTAP environments—whether on-premises or in other clouds—can be complex, time-consuming, and costly. Traditionally, migrations required manual setup of SnapMirror relationships and external tooling. This update addresses these challenges by embedding migration assistant capabilities directly into the Azure portal, reducing operational overhead and accelerating cloud adoption.
Technical Mechanisms and Implementation Methods
The migration assistant leverages ONTAP SnapMirror, a block-level replication engine that asynchronously copies data between source and destination volumes. The process involves an initial baseline transfer of the full source volume, followed by periodic incremental transfers of only the changed blocks, and a final synchronization and cutover once clients are ready to switch to the Azure NetApp Files volume.
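The baseline-plus-incremental pattern behind this kind of replication can be sketched as follows. The block map and volume representation are simplified stand-ins for ONTAP's actual block-level engine; the point is that only the delta crosses the wire after the baseline.

```python
# Illustrative sketch of baseline-plus-incremental replication in the style
# of SnapMirror: a full baseline copy first, then only changed blocks on
# each incremental pass. Volumes are modeled as block-id -> data maps.
def baseline_transfer(source: dict) -> dict:
    return dict(source)  # full copy of every block

def incremental_transfer(source: dict, dest: dict) -> int:
    changed = {bid: data for bid, data in source.items() if dest.get(bid) != data}
    dest.update(changed)
    # drop blocks that were deleted at the source
    for bid in set(dest) - set(source):
        del dest[bid]
    return len(changed)  # how many blocks actually crossed the wire

source = {0: b"aaa", 1: b"bbb", 2: b"ccc"}
dest = baseline_transfer(source)
source[1] = b"BBB"        # one block changes after the baseline
source[3] = b"ddd"        # one block is added
sent = incremental_transfer(source, dest)
assert sent == 2          # only the delta is transferred
assert dest == source     # destination converges; ready for cutover
```

Repeated incremental passes keep the destination close to the source, which is why the final cutover window, and the associated downtime, can be kept short.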
Published: November 20, 2025 13:45:47 UTC Link: Public Preview: Azure NetApp Files cache volumes
Update ID: 523917 Data source: Azure Updates API
Categories: In preview, Storage, Azure NetApp Files
Summary:
What was updated
Azure NetApp Files has introduced cache volumes, now available in public preview. This feature leverages NetApp’s ONTAP FlexCache technology to provide persistent, high-performance caching within Azure.
Key changes or new features
Cache volumes provide a local, read-only cache of data stored on ONTAP-based storage systems, reducing latency and improving performance for workloads accessing remote or central data repositories. This persistent cache accelerates data access without duplicating data, optimizing throughput and reducing network traffic. It supports seamless integration with existing Azure NetApp Files deployments.
Target audience affected
Developers and IT professionals managing high-performance file storage workloads in Azure, particularly those using ONTAP-based storage systems who require low-latency access and improved data throughput for distributed applications.
Important notes if any
As this feature is in public preview, users should evaluate it in non-production environments and provide feedback. Pricing and SLA details may change upon general availability. Integration requires familiarity with ONTAP FlexCache concepts and Azure NetApp Files configurations.
For more details, visit: https://azure.microsoft.com/updates?id=523917
Details:
The recent public preview release of Azure NetApp Files (ANF) cache volumes introduces a significant enhancement designed to optimize data access performance for ONTAP-based storage workloads in Azure by leveraging NetApp’s proven FlexCache technology. This update addresses the growing demand for low-latency, high-throughput access to shared datasets distributed across multiple geographic locations or cloud environments.
Background and Purpose
Azure NetApp Files is a high-performance, enterprise-grade file storage service built on NetApp’s ONTAP technology, widely used for mission-critical applications requiring SMB and NFS protocols. Traditionally, accessing data stored in a central ONTAP volume from remote locations or multiple clients can introduce latency and bandwidth constraints. The cache volumes feature aims to mitigate these issues by providing a persistent, local cache that accelerates read operations and reduces network load, thereby improving application responsiveness and scalability.
Technical Mechanisms and Implementation Methods
The cache volumes feature is implemented using ONTAP FlexCache technology, which creates a distributed caching layer that maintains cache coherence with the source volume. When a client reads data from the cache volume, ONTAP checks if the data is current; if not, it fetches updated data from the source volume and updates the cache. Write operations are not permitted on cache volumes, ensuring data integrity by funneling all writes to the authoritative source volume. The cache volumes reside within the same Azure region or can be deployed in different regions to optimize cross-region data access. The underlying synchronization uses ONTAP’s metadata and data consistency protocols to ensure cache freshness and minimize stale reads.
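The read-through, read-only behavior described above can be modeled in a short sketch. The per-path version counters and file names are invented stand-ins for ONTAP's actual consistency metadata; the sketch only shows the shape of the semantics: serve current data locally, refetch when stale, and reject writes at the cache layer.

```python
# Illustrative model of a read-only, read-through cache with freshness
# checks, in the spirit of FlexCache.
class SourceVolume:
    def __init__(self):
        self.data = {}
        self.version = {}  # stand-in for ONTAP consistency metadata

    def write(self, path: str, contents: str):
        self.data[path] = contents
        self.version[path] = self.version.get(path, 0) + 1

class CacheVolume:
    def __init__(self, source: SourceVolume):
        self.source = source
        self.cached = {}         # path -> (version, contents)
        self.remote_fetches = 0

    def read(self, path: str) -> str:
        current = self.source.version[path]
        entry = self.cached.get(path)
        if entry is None or entry[0] != current:  # missing or stale: fetch
            self.remote_fetches += 1
            self.cached[path] = (current, self.source.data[path])
        return self.cached[path][1]

    def write(self, path: str, contents: str):
        # All writes must go to the authoritative source volume.
        raise PermissionError("cache volumes are read-only")

src = SourceVolume()
src.write("/vol/a.txt", "v1")
cache = CacheVolume(src)
assert cache.read("/vol/a.txt") == "v1" and cache.remote_fetches == 1
assert cache.read("/vol/a.txt") == "v1" and cache.remote_fetches == 1  # local hit
src.write("/vol/a.txt", "v2")
assert cache.read("/vol/a.txt") == "v2" and cache.remote_fetches == 2  # refreshed
```

The second read is served entirely from the cache, which is the latency and network-traffic saving the feature targets; only a change at the source forces another remote fetch.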
Integration with Related Azure Services
Cache volumes integrate seamlessly with Azure NetApp Files management tools and monitoring via Azure Monitor and Azure Resource Manager. They can be combined with Azure Virtual Machines, Azure Kubernetes Service (AKS), and Azure HPC clusters to accelerate file-based workloads.
Published: November 20, 2025 08:00:32 UTC Link: Public Preview: Azure Monitor for Azure Arc-enabled Kubernetes with OpenShift and Azure Red Hat OpenShift
Update ID: 530174 Data source: Azure Updates API
Categories: In preview, Hybrid + multicloud, Compute, Containers, DevOps, Management and governance, Azure Arc, Azure Kubernetes Service (AKS), Azure Monitor
Summary:
What was updated
Azure Monitor now supports monitoring for Azure Arc-enabled Kubernetes clusters running OpenShift, including Azure Red Hat OpenShift (ARO), currently in public preview.
Key changes or new features
This update extends Azure Monitor’s capabilities to provide comprehensive health and performance monitoring across Kubernetes infrastructure layers and workloads on OpenShift clusters managed via Azure Arc. Developers and IT professionals can collect metrics, logs, and alerts from these hybrid and multicloud Kubernetes environments, enabling unified observability alongside native Azure Kubernetes Service (AKS) clusters.
Target audience affected
Developers, DevOps engineers, and IT operations teams managing Kubernetes workloads on Azure Arc-enabled OpenShift clusters, including Azure Red Hat OpenShift users, who require integrated monitoring and diagnostics within the Azure ecosystem.
Important notes if any
This feature is currently in public preview, so users should evaluate it in non-production environments and expect ongoing improvements. Integration requires Azure Arc-enabled Kubernetes clusters with OpenShift configured and appropriate Azure Monitor agents deployed. Refer to official documentation for setup details and limitations during preview.
Details:
The recent public preview announcement of Azure Monitor support for Azure Arc-enabled Kubernetes clusters running OpenShift and Azure Red Hat OpenShift (ARO) extends Azure’s comprehensive monitoring capabilities to hybrid and multi-cloud Kubernetes environments, enabling IT professionals to gain unified observability across on-premises, edge, and cloud deployments.
Background and Purpose:
Azure Monitor is a native Azure service designed to provide end-to-end monitoring of applications and infrastructure. Traditionally, Azure Monitor has supported Azure Kubernetes Service (AKS) clusters, but with the increasing adoption of hybrid cloud strategies and Kubernetes distributions like OpenShift, there is a need for consistent monitoring across diverse environments. Azure Arc enables management of Kubernetes clusters outside Azure, including on-premises and other clouds. This update aims to bridge the observability gap by integrating Azure Monitor with Azure Arc-enabled Kubernetes clusters running OpenShift and ARO, allowing organizations to leverage Azure’s monitoring stack uniformly.
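Once clusters report into a shared workspace, the same rule can be evaluated uniformly across AKS and Arc-enabled OpenShift clusters, which is the unified-observability benefit described above. The sketch below is purely illustrative: the cluster names, CPU samples, and 80% threshold are invented, and real alerting is configured through Azure Monitor rather than hand-rolled code.

```python
from statistics import mean

# Illustrative only: one alert rule applied uniformly to metrics collected
# from both AKS and Arc-enabled OpenShift clusters in a shared workspace.
def clusters_over_threshold(samples: dict, threshold: float) -> list:
    """Return the names of clusters whose mean CPU sample exceeds the threshold."""
    return sorted(name for name, values in samples.items() if mean(values) > threshold)

samples = {
    "aks-prod":        [55.0, 60.0, 58.0],  # native AKS cluster
    "arc-openshift-1": [85.0, 90.0, 88.0],  # on-premises ARO/OpenShift via Arc
    "arc-openshift-2": [70.0, 72.0, 69.0],
}
assert clusters_over_threshold(samples, 80.0) == ["arc-openshift-1"]
```

The value of the update is that the hybrid clusters appear in the same data plane as native AKS, so one rule, dashboard, or query covers the whole fleet.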
This report was automatically generated - 2025-11-21 03:08:01 UTC