Generated on: August 07, 2025
Target period: Within the last 24 hours
Processing mode: Details Mode
Number of updates: 20 items
Published: August 06, 2025 16:00:45 UTC Link: Generally Available: Azure Data Box Next Gen is now generally available in additional regions
Update ID: 499945 Data source: Azure Updates API
Categories: Launched, Migration, Storage, Azure Data Box, Features
Summary:
What was updated
Azure Data Box Next Gen devices are now generally available (GA) in additional global regions.
Key changes or new features
The service has expanded GA availability to new regions including Australia, Japan, Singapore, Brazil, Hong Kong, UAE, Switzerland, and Norway. Both Azure Data Box models—120 TB and 525 TB capacity—are now available in these regions, enabling large-scale offline data transfer with enhanced capacity options.
Target audience affected
Developers, data engineers, and IT professionals responsible for large data migrations, hybrid cloud scenarios, and edge data transfer in the newly supported regions will benefit from this update.
Important notes if any
This expansion improves data transfer flexibility and compliance by allowing customers in these regions to leverage Azure Data Box Next Gen locally. Organizations planning large-scale data import/export workflows should evaluate these new regional availabilities to optimize data movement strategies and reduce network dependency.
Details:
The recent Azure update announces the general availability (GA) of Azure Data Box Next Gen devices in additional global regions, including Australia, Japan, Singapore, Brazil, Hong Kong, UAE, Switzerland, and Norway, expanding the geographic reach of this secure, large-scale data transfer solution. This update also confirms that both Azure Data Box 120 TB and 525 TB capacity devices are now GA, enabling customers to choose the appropriate device size for their data migration or edge computing needs.
Background and Purpose:
Azure Data Box Next Gen is designed to address the challenges of securely and efficiently transferring large volumes of data to Azure, particularly when network bandwidth is limited, unreliable, or costly. It supports offline data migration scenarios, edge data collection, and hybrid cloud workflows. By expanding GA availability to new regions, Microsoft aims to provide localized, compliant, and performant data transfer options to customers worldwide, reducing data ingress latency and meeting regional data sovereignty requirements.
Technical Mechanisms and Implementation Methods:
Azure Data Box Next Gen devices operate as ruggedized, network-attached storage appliances shipped to the customer site. Customers connect the device to their local network and transfer data using SMB or NFS protocols. The device encrypts data at rest and in transit internally. Once data ingestion is complete, the device is shipped back to Microsoft data centers, where data is uploaded to the specified Azure storage account. The entire process is orchestrated through the Azure portal, which handles job creation, device activation, key management, and data ingestion monitoring. The devices support integration with Azure Active Directory for access control and logging.
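As a rough sketch of the ingestion step described above, a client on the local network mounts one of the device's SMB shares and copies data into a folder that maps to a Blob container. The IP address, share name, credentials, and paths below are placeholders; real values come from the device's local web UI.

sudo mkdir -p /mnt/databox
# Data Box exposes one share per storage account and data type (e.g. <account>_BlockBlob).
sudo mount -t cifs //10.128.0.4/mystorageacct_BlockBlob /mnt/databox \
  -o vers=3.0,username=databoxuser,password='<device-password>'
# Each top-level folder under the share becomes a Blob container on upload.
cp -r /data/archive/* /mnt/databox/backups/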
Integration with Related Azure Services:
Azure Data Box Next Gen integrates seamlessly with Azure Storage services such as Blob Storage, Azure Data Lake Storage Gen2, and Azure Files, so ingested data lands directly in the customer's target storage account.
Published: August 06, 2025 15:00:18 UTC Link: Public Preview: Azure Storage Discovery
Update ID: 499143 Data source: Azure Updates API
Categories: In preview, Storage, Storage Accounts, Features, Management, Services
Summary:
What was updated
Azure Storage Discovery has been introduced in public preview, offering enhanced visibility and analytics across your entire Azure Storage estate.
Key changes or new features
This service enables enterprise-wide discovery and deep analysis of Azure Storage usage, including capacity consumption and activity patterns. It provides actionable insights to optimize storage costs, improve security posture, and better manage data. The tool surfaces previously unavailable metrics and usage details, helping organizations identify underutilized resources, potential security risks, and opportunities for operational efficiency.
Target audience affected
Developers, IT professionals, and cloud architects responsible for managing and optimizing Azure Storage resources at scale will benefit from this update. It is particularly useful for enterprises with large or complex storage environments seeking centralized monitoring and governance.
Important notes if any
Azure Storage Discovery is currently in public preview, so users should evaluate it in non-production environments and provide feedback. Integrating it with existing monitoring and management workflows may require working directly with its APIs, since insights are also exposed programmatically. Users should watch for updates as the feature matures toward general availability.
Link: https://azure.microsoft.com/updates?id=499143
Details:
Azure Storage Discovery, now available in public preview, is a new service designed to provide enterprise-wide visibility and actionable insights into an organization’s entire Azure Storage data estate. This update addresses the growing complexity and scale of storage resources across multiple subscriptions and regions, enabling IT professionals and storage administrators to better understand usage patterns, optimize costs, and enhance security posture.
Background and Purpose:
As enterprises increasingly adopt Azure Storage services (Blob, File, Queue, Table), managing and governing these distributed storage accounts becomes challenging. Prior to this update, visibility into storage capacity, activity, and security was fragmented and often limited to individual storage accounts or subscriptions. Azure Storage Discovery aims to centralize and aggregate storage telemetry and metadata, providing a holistic view that supports operational efficiency, cost management, and compliance.
Technical Mechanisms and Implementation:
Azure Storage Discovery leverages Azure Resource Graph and Azure Monitor to collect metadata and telemetry from storage accounts across an organization. It integrates with Azure Active Directory (AAD) for secure, role-based access control, ensuring that only authorized users can view sensitive storage data. The service uses scalable data ingestion pipelines to continuously update storage metrics and activity logs. Data is presented through Azure Portal dashboards and can be accessed via APIs for integration with custom reporting tools or automation workflows.
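Storage Discovery surfaces its insights through portal dashboards and its own APIs; as a hedged illustration of the Azure Resource Graph layer it builds on, a quick estate-wide storage inventory can already be approximated from the CLI (the query is illustrative and requires the resource-graph extension):

az extension add --name resource-graph --upgrade
# Count storage accounts per region and kind across all readable subscriptions.
az graph query -q "resources | where type == 'microsoft.storage/storageaccounts' | summarize accounts=count() by location, kind" --output table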
Integration with Related Azure Services:
Azure Storage Discovery complements Azure Cost Management by providing granular storage-specific insights that feed into broader cost optimization strategies. It integrates with Azure Security Center (Microsoft Defender for Storage) to enhance threat detection capabilities. Additionally, it works alongside Azure Policy for enforcing governance rules and Azure Monitor for alerting on storage metrics. The API access allows integration with third-party IT service management (ITSM) and reporting tools, enabling automated workflows and custom dashboards.
In summary, Azure Storage Discovery in public preview empowers IT professionals with comprehensive, centralized insights into their Azure Storage environments, facilitating improved governance, cost control, security, and operational efficiency across large-scale, multi-subscription deployments.
Published: August 06, 2025 04:00:09 UTC Link: Generally Available: AKS support for Advanced Container Networking: L7 Policies
Update ID: 499274 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Layer 7 (L7) network policies are now generally available in Azure Kubernetes Service (AKS) clusters that use Advanced Container Networking Services with Cilium.
Key changes or new features
This update introduces the ability to enforce fine-grained, application-layer (L7) network policies, allowing developers and IT professionals to define security rules based on HTTP attributes such as methods, paths, headers, and hostnames. This enhances traffic control beyond traditional Layer 3/4 policies, improving security posture and traffic management in containerized applications.
Target audience affected
Developers and IT professionals managing AKS clusters with Advanced Container Networking and Cilium who require sophisticated network security controls at the application layer.
Important notes if any
L7 policies require AKS clusters to be configured with Advanced Container Networking and Cilium. This feature enables more granular security enforcement but may require updates to existing network policy definitions and testing to ensure compatibility with application traffic patterns.
For more details, visit: https://azure.microsoft.com/updates?id=499274
Details:
The recent Azure Kubernetes Service (AKS) update introduces general availability support for Layer 7 (L7) network policies within Advanced Container Networking Services (ACNS) clusters using Cilium, significantly enhancing network security and traffic management capabilities at the application layer.
Background and Purpose
Traditionally, Kubernetes network policies operate primarily at Layer 3 (IP) and Layer 4 (TCP/UDP port) levels, which limits the granularity of traffic control to IP addresses and ports. However, modern microservices architectures often require more nuanced security controls based on application-layer protocols (HTTP, gRPC, etc.) to enforce policies such as URL path restrictions, HTTP methods, or header-based filtering. The introduction of L7 policies in AKS with ACNS and Cilium addresses this gap by enabling fine-grained, application-aware network security controls, improving both security posture and traffic governance.
Technical Mechanisms and Implementation Methods
The implementation relies on Cilium’s use of eBPF (extended Berkeley Packet Filter) technology, which allows dynamic insertion of code into the Linux kernel to perform deep packet inspection and filtering without the overhead of user-space proxies. This enables real-time L7 policy enforcement directly at the network interface level of pods. The policies are translated into eBPF programs that inspect packet payloads for HTTP/gRPC attributes, applying allow or deny decisions accordingly. The integration with AKS ensures that these policies can be managed declaratively via Kubernetes manifests and are compatible with existing Kubernetes RBAC and namespace isolation mechanisms.
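A minimal sketch of such a policy, assuming a cluster with Advanced Container Networking Services and Cilium enabled (the namespace, labels, port, and path are illustrative): it admits only GET requests on /api/ paths from frontend pods to backend pods, while other HTTP traffic on that port is denied.

kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-get-only
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api/.*"
EOF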
Published: August 06, 2025 03:30:15 UTC Link: Private Preview: Agentic experience for AKS in the Azure CLI
Update ID: 499377 Data source: Azure Updates API
Categories: In development, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) introduced a new agentic AI-powered experience integrated into the Azure CLI via the new command group “az aks agent.”
Key changes or new features
The “az aks agent” command enables developers and operators to leverage agentic reasoning capabilities directly within the CLI. This AI-driven feature assists with intelligent automation, troubleshooting, and management tasks for AKS clusters, streamlining workflows and reducing manual effort.
Target audience affected
This update primarily targets developers, DevOps engineers, and IT professionals managing AKS clusters who seek enhanced productivity and smarter CLI interactions.
Important notes if any
The feature is currently in private preview, so access may require enrollment or invitation. Users should expect ongoing improvements and provide feedback to shape the final release. Being AI-powered, it is recommended to validate critical outputs before applying changes in production environments.
For more details, visit: https://azure.microsoft.com/updates?id=499377
Details:
The Azure Kubernetes Service (AKS) team has introduced a private preview of an agentic AI-powered command-line interface (CLI) experience via the new “az aks agent” command, designed to embed intelligent agentic reasoning capabilities directly into the Azure CLI. This update aims to streamline and enhance the operational and development workflows for AKS users by leveraging AI-driven automation and contextual assistance within the CLI environment.
Background and Purpose
Managing Kubernetes clusters in AKS often involves complex, multi-step procedures requiring deep expertise in Kubernetes concepts, Azure resource management, and troubleshooting. The purpose of this update is to reduce cognitive load and accelerate productivity by integrating an AI agent that can understand user intents, provide contextual recommendations, automate routine tasks, and assist in complex cluster management scenarios directly from the CLI. This aligns with the broader Azure strategy of embedding AI to simplify cloud operations and improve developer experience.
Technical Mechanisms and Implementation Methods
The agentic experience leverages underlying AI models trained on Kubernetes and Azure operational data, combined with natural language processing (NLP) techniques to parse user inputs. The CLI extension acts as a client that sends user queries to the AI backend service, which performs intent recognition, reasoning, and command synthesis. The generated commands are then presented to the user for review and execution, ensuring control and transparency. The architecture ensures secure communication with Azure APIs and respects role-based access controls (RBAC) and subscription boundaries. The preview likely uses containerized microservices for scalability and modular updates.
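Because the feature is in private preview, the exact command syntax may still change; a purely hypothetical invocation, with resource names as placeholders, might look like this:

# Hypothetical example; confirm flags with 'az aks agent --help' once enrolled in the preview.
az aks agent "why are pods in the payments namespace stuck in CrashLoopBackOff?" \
  --resource-group my-rg --name my-cluster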
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Managed Namespaces in AKS
Update ID: 499371 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) introduced Public Preview support for Managed Namespaces.
Key changes or new features
Managed Namespaces enable users to enumerate the namespaces they have access to across subscriptions, resource groups, and AKS clusters. Users can then retrieve deployment credentials scoped to those namespaces, simplifying access management and deployment workflows. This feature centralizes namespace access control and credential management, reducing the need for cluster-admin privileges and improving security posture.
Target audience affected
Developers and IT professionals managing AKS clusters, particularly those responsible for multi-tenant environments or role-based access control (RBAC) configurations. It benefits teams needing streamlined namespace-level access and deployment capabilities without broad cluster permissions.
Important notes if any
This feature is currently in Public Preview, so users should evaluate it in non-production environments and provide feedback. Integration requires appropriate Azure RBAC and AKS configurations. Documentation and API references are available for implementation guidance.
Details:
The recent Azure update introduces the Public Preview of Managed Namespaces in Azure Kubernetes Service (AKS), designed to enhance multi-tenant Kubernetes cluster management by enabling fine-grained namespace access control and simplified credential management across subscription, resource group, and cluster scopes.
Background and Purpose:
In multi-tenant or large organizational environments, managing access to Kubernetes namespaces within AKS clusters can be complex and error-prone. Traditionally, administrators manually configure Role-Based Access Control (RBAC) and distribute kubeconfig files per namespace, which can lead to security risks and operational overhead. This update aims to streamline namespace access management by providing a centralized, managed mechanism to list accessible namespaces and retrieve deployment credentials securely, improving governance and developer productivity.
Technical Mechanisms and Implementation Methods:
Managed namespaces leverage Azure AD for authentication and integrate with Kubernetes RBAC for authorization. The service exposes APIs that allow users to list namespaces they are authorized to access based on their Azure AD identity and assigned roles. Upon request, the system generates scoped kubeconfig files or tokens with limited permissions tied to the namespace, using Kubernetes service accounts and role bindings managed by AKS control plane components. This approach abstracts credential management from cluster admins and automates secure access provisioning.
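Conceptually, the feature automates the namespace-scoped RBAC objects an administrator would otherwise hand-author and distribute. A sketch of those underlying primitives, with illustrative names and a placeholder Azure AD object ID:

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-deployer
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-deployer-binding
  namespace: team-a
subjects:
- kind: User
  name: "<aad-user-object-id>"  # Azure AD object ID when AAD integration is enabled
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-deployer
  apiGroup: rbac.authorization.k8s.io
EOF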
Integration with Related Azure Services:
Managed namespaces tightly integrate with Azure Active Directory for identity management and Azure RBAC for access control, ensuring consistent policy enforcement. They complement AKS’s existing security features such as Azure Policy for Kubernetes and Azure Monitor for auditing. Additionally, integration with Azure DevOps or GitHub Actions can leverage managed namespace credentials for secure CI/CD workflows.
In summary, the Public Preview of Managed Namespaces in AKS provides a robust framework for scalable, secure, and manageable namespace-level access control, significantly improving multi-tenant Kubernetes cluster operations by automating credential management and enforcing least privilege principles through Azure AD and Kubernetes RBAC integration.
Published: August 06, 2025 03:30:15 UTC Link: Generally Available: AKS Security Dashboard
Update ID: 499366 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) Security Dashboard is now generally available within the Azure Portal.
Key changes or new features
The dashboard offers a centralized, integrated view of your AKS cluster’s security posture. It consolidates insights on software vulnerabilities, critical security issues, compliance gaps, and active runtime threats. This enables real-time monitoring and quicker identification of security risks directly from the Azure Portal without needing separate tools.
Target audience affected
Developers, DevOps engineers, and IT security professionals managing AKS clusters will benefit from enhanced visibility into cluster security and compliance. It supports teams responsible for securing containerized workloads and maintaining regulatory compliance.
Important notes if any
The dashboard leverages Azure Security Center capabilities and requires appropriate permissions to access cluster security data. It is recommended to regularly review the dashboard to proactively address vulnerabilities and threats. Integration with existing security workflows can help improve overall AKS cluster security management.
For more details, visit: https://azure.microsoft.com/updates?id=499366
Details:
The Azure Kubernetes Service (AKS) Security Dashboard, now generally available, delivers a centralized, integrated security management experience within the Azure Portal, designed to enhance the security posture and runtime threat protection of AKS clusters. This update addresses the growing complexity and security challenges faced by organizations deploying containerized workloads at scale by consolidating critical security insights into a single pane of glass.
Background and Purpose:
As Kubernetes adoption accelerates, securing containerized applications and their underlying infrastructure has become paramount. AKS clusters are often distributed and dynamic, making it difficult to maintain continuous visibility into vulnerabilities, compliance, and active threats. Prior to this update, security information was fragmented across multiple tools and portals, complicating incident response and remediation efforts. The AKS Security Dashboard was introduced to provide IT professionals and security teams with a unified view that simplifies monitoring and hardening of AKS environments.
Specific Features and Detailed Changes:
The AKS Security Dashboard integrates data from Azure Defender for Kubernetes and Azure Security Center to present actionable security insights directly within the AKS resource blade in the Azure Portal. Key features include consolidated views of software vulnerabilities, critical security issues, compliance gaps, and active runtime threats, with drill-down from cluster-level posture to individual findings.
Technical Mechanisms and Implementation Methods:
The dashboard aggregates telemetry collected by Azure Defender for Kubernetes, which deploys monitoring agents and leverages Kubernetes audit logs, network traffic analysis, and container image scanning. It uses Azure Security Center’s continuous assessment engine to evaluate cluster configurations against security policies. The data is processed and correlated in Azure Monitor and Security Center backend services, then surfaced in the portal with rich visualizations and drill-down capabilities. The dashboard is accessible without additional setup beyond enabling Azure Defender for Kubernetes on the AKS cluster, ensuring seamless integration.
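The enablement step referenced above is a single cluster update; a minimal sketch with placeholder resource names:

# Enable the Defender profile on an existing cluster; the dashboard then populates automatically.
az aks update --resource-group my-rg --name my-cluster --enable-defender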
Integration with Related Azure Services:
The AKS Security Dashboard tightly integrates with Azure Defender for Kubernetes for threat detection and vulnerability scanning, Azure Security Center for continuous security assessment and policy enforcement, and Azure Monitor for telemetry collection and alerting. It complements Azure Policy by surfacing compliance gaps and supports exporting findings to Azure Sentinel for advanced security analytics and incident investigation. This integration enables a comprehensive security management workflow within the Azure ecosystem.
In summary, the generally available AKS Security Dashboard provides IT professionals with a powerful, centralized tool to monitor and improve the security posture of AKS clusters by consolidating vulnerability data, compliance status, and runtime threat insights within the Azure Portal, enabling faster detection, investigation, and remediation of security risks.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Azure Virtual Network Verifier (VNV) for AKS
Update ID: 499361 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Virtual Network Verifier (VNV) for Azure Kubernetes Service (AKS) is now available in public preview via the Azure Portal.
Key changes or new features
VNV enables detection and troubleshooting of outbound network connectivity issues within AKS clusters. It provides developers and IT professionals with diagnostic capabilities to verify virtual network configurations affecting AKS egress traffic, helping identify misconfigurations or network policy problems that impact cluster connectivity.
Target audience affected
This update primarily targets AKS developers, network engineers, and IT administrators responsible for managing AKS cluster networking and ensuring reliable outbound connectivity for containerized applications.
Important notes if any
As a public preview feature, VNV may have limited SLA and should be used with caution in production environments. Users can access it directly through the Azure Portal without additional setup. This tool enhances troubleshooting efficiency, reducing time to resolution for network-related issues in AKS clusters.
For more details, visit: https://azure.microsoft.com/updates?id=499361
Details:
The Azure Virtual Network Verifier for AKS (VNV), now available in public preview via the Azure Portal, is a diagnostic tool designed to detect and troubleshoot outbound connectivity issues within Azure Kubernetes Service (AKS) clusters by analyzing virtual network configurations and network policies.
Background and Purpose:
AKS clusters often face complex networking challenges, especially related to outbound connectivity from pods to external services or endpoints. Misconfigurations in network security groups (NSGs), user-defined routes (UDRs), or Azure Firewall rules can cause connectivity failures that are difficult to diagnose. The purpose of the Azure Virtual Network Verifier for AKS is to provide a streamlined, integrated solution that automates the detection of such issues, reducing manual troubleshooting efforts and improving cluster reliability.
Technical Mechanisms and Implementation Methods:
The Virtual Network Verifier operates by simulating outbound network traffic from pod IP addresses through the virtual network infrastructure. It performs path analysis by querying the effective routes, NSG rules, and firewall policies applied to the subnet and node level. The tool leverages Azure Resource Manager APIs and network diagnostic APIs to gather configuration data and simulate packet flows. It then correlates this data to identify where traffic is being dropped or blocked. This approach allows it to pinpoint exact rule conflicts or missing routes causing connectivity failures.
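VNV itself runs from the Azure Portal, but the raw configuration data it correlates can be inspected manually, which is useful for cross-checking its findings. The node resource group and NIC name below are placeholders, and VMSS-backed node pools may require addressing the NIC through the scale-set instance instead:

# Effective routes and NSG rules on a node NIC: the inputs VNV analyzes.
az network nic show-effective-route-table \
  --resource-group MC_my-rg_my-cluster_eastus --name aks-node-nic-0 --output table
az network nic list-effective-nsg \
  --resource-group MC_my-rg_my-cluster_eastus --name aks-node-nic-0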
In summary, the Azure Virtual Network Verifier for AKS in public preview provides IT professionals with a powerful, integrated tool to diagnose and resolve outbound connectivity issues in AKS clusters by analyzing network configurations and policies.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Multiple Standard Load Balancers support in AKS
Update ID: 499356 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) now supports multiple Standard Load Balancers (SLBs) per cluster, available in public preview.
Key changes or new features
This update enables AKS clusters to use more than one Standard Load Balancer simultaneously, overcoming the previous limit of 300 inbound rules per node NIC. It allows assigning different SLBs to separate agent pools, facilitating traffic isolation and improved scalability for large or complex workloads.
Target audience affected
Developers and IT professionals managing AKS clusters who require higher load balancing capacity, better traffic segmentation, or need to scale applications beyond the previous SLB rule limits.
Important notes if any
As this feature is in public preview, it should be used with caution in production environments. Users should test workloads thoroughly and monitor SLB configurations. Refer to official Azure documentation for configuration details and limitations during the preview phase.
Details:
The recent Azure Kubernetes Service (AKS) update introduces public preview support for multiple Standard Load Balancers (SLBs) per AKS cluster, addressing key scalability and traffic isolation challenges inherent in single SLB deployments. Traditionally, AKS clusters were limited to one Standard Load Balancer per cluster, which imposed a maximum of 300 inbound NAT or load balancing rules per node network interface card (NIC). This constraint restricted large-scale or multi-tenant workloads that require extensive load balancing configurations or traffic segregation.
With this update, AKS enables assigning multiple SLBs to a single cluster, allowing each agent pool to be associated with a distinct Standard Load Balancer. This architectural enhancement effectively multiplies the available inbound rule capacity by distributing load balancing rules across multiple SLBs, thereby overcoming the 300-rule limit per NIC. It also facilitates logical traffic isolation, as different SLBs can be configured with unique frontend IPs, health probes, and load balancing rules tailored to the specific needs of each agent pool or workload type.
Technically, the implementation leverages Azure’s native Standard Load Balancer resource model, where each SLB is provisioned and managed independently but integrated into the AKS cluster’s networking fabric. During cluster or node pool creation and updates, users can specify which SLB to associate with each node pool. The AKS control plane orchestrates the provisioning and configuration of these SLBs, ensuring that Kubernetes services of type LoadBalancer are correctly mapped to the appropriate SLB based on node pool assignments. This requires enhancements in the AKS networking plugin and API server to support multi-SLB awareness and rule management.
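Application manifests do not change under this model. A standard LoadBalancer Service, like the sketch below (namespace, labels, and ports are illustrative), is mapped by the control plane to whichever SLB the owning node pool is associated with:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-prod
  namespace: prod
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF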
Use cases benefiting from this feature include large-scale AKS deployments requiring thousands of inbound rules, multi-tenant clusters where workloads must be isolated at the network load balancer level, and scenarios demanding differentiated traffic routing policies per node pool. For example, a cluster hosting both production and development workloads can assign separate SLBs to each environment’s node pools, enabling independent scaling, monitoring, and security configurations. Additionally, this supports hybrid scenarios where different SLBs expose services on distinct public IPs or subnets, facilitating compliance and operational separation.
Important considerations include the preview nature of the feature, which may involve limitations or evolving API behaviors. Users must plan for increased management complexity due to multiple SLBs and ensure proper IP address management to avoid conflicts. The feature currently applies to Standard Load Balancers only, as Basic Load Balancers do not support this multi-instance model. Furthermore, integration with Azure Firewall, Application Gateway, or third-party network virtual appliances requires careful architecture to maintain consistent traffic flows and security postures across multiple SLBs.
Integration with related Azure services remains seamless, as each Standard Load Balancer functions as a first-class Azure resource compatible with Azure Monitor for metrics and diagnostics, Azure Policy for governance, and Azure Resource Manager templates for infrastructure as code. This update complements Azure’s broader networking enhancements in AKS, such as advanced networking (Azure CNI), user-defined routing, and private clusters, enabling more granular and scalable network architectures.
In summary, the public preview of multiple Standard Load Balancers per AKS cluster significantly enhances cluster scalability and traffic isolation by allowing distinct SLBs per node pool, overcoming previous inbound rule limits, and enabling sophisticated multi-tenant and large-scale deployment scenarios while maintaining deep integration with Azure’s networking ecosystem.
Published: August 06, 2025 03:30:15 UTC Link: Generally Available: Static egress gateway public prefix support in AKS
Update ID: 499351 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Static egress gateway public IP prefix support in Azure Kubernetes Service (AKS) is now generally available.
Key changes or new features
AKS customers can create a dedicated gateway node pool that routes outbound traffic from specific annotated pods through a static public IP prefix ranging from /28 to /31. This enables predictable, stable egress IP addresses for outbound traffic, improving network security and compliance by allowing firewall rules and IP whitelisting based on fixed IP ranges.
Target audience affected
Developers and IT professionals managing AKS clusters who require consistent outbound IP addresses for workloads, especially those integrating with external services that enforce IP-based access controls.
Important notes if any
The feature requires pods to be annotated to route traffic through the egress gateway. The static public prefix size can be configured between /28 and /31, providing flexibility in IP allocation. This update enhances network traffic control without impacting cluster scalability or performance. For implementation details and best practices, refer to the official Azure documentation.
Details:
The recent general availability of static egress gateway public prefix support in Azure Kubernetes Service (AKS) introduces a significant enhancement for managing outbound traffic with predictable and secure IP addressing. This update enables AKS users to create a dedicated gateway node pool that routes outbound traffic from specifically annotated pods through a static public IP prefix, ranging from /28 to /31, thereby improving network control, compliance, and integration capabilities.
Background and Purpose
In Kubernetes environments, controlling outbound traffic IP addresses is critical for scenarios such as firewall whitelisting, compliance auditing, and secure service integrations. Traditionally, AKS clusters use dynamic outbound IP addresses for egress traffic, which can complicate network security policies and external service configurations. The static egress gateway public prefix feature addresses this by allowing customers to assign a fixed set of public IP addresses for outbound traffic, ensuring consistent and predictable egress IPs.
Technical Mechanisms and Implementation Methods
The implementation involves creating a dedicated node pool configured as an egress gateway with a static public IP prefix attached via Azure’s networking infrastructure. Outbound traffic from annotated pods is routed through this gateway using Kubernetes network policies and routing rules. The gateway nodes perform SNAT (Source Network Address Translation) to translate pod IPs to the static public IPs within the assigned prefix. This setup leverages Azure’s native VNet and public IP resource management, ensuring high availability and scalability.
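A sketch of the pod-side opt-in, assuming a gateway node pool named egress-gateway-pool already exists. The annotation key follows the static egress gateway documentation but should be verified there, and the image is arbitrary:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: egress-client
  namespace: default
  annotations:
    # Assumed annotation key; confirm the exact key in the AKS static egress gateway docs.
    kubernetes.azure.com/static-gateway: "egress-gateway-pool"
spec:
  containers:
  - name: app
    image: nginx:1.25
EOF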
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Increase ingestion quota for Azure Managed Prometheus with an ARM API
Update ID: 499346 Data source: Azure Updates API
Categories: In preview, DevOps, Management and governance, Azure Monitor, Features
Summary:
What was updated
Azure Monitor now offers a public preview of an Azure Resource Manager (ARM) API that allows customers to request an increase in ingestion quotas for Azure Managed Prometheus metrics within Azure Monitor workspaces.
Key changes or new features
Previously, ingestion limits for Managed Prometheus metrics were fixed by default quotas. With this update, developers and IT professionals can programmatically request higher ingestion quotas via the new ARM API, enabling better scalability and flexibility for monitoring large-scale Prometheus metric workloads.
Target audience affected
This update primarily impacts developers, DevOps engineers, and IT professionals who use Azure Managed Prometheus for monitoring containerized applications and require higher metric ingestion capacity in Azure Monitor workspaces.
Important notes if any
The feature is currently in public preview, so users should test and validate in non-production environments before broad adoption. Approval of quota increase requests is subject to Azure Monitor’s capacity and policies. Users must have appropriate permissions to call the ARM API and manage workspace quotas.
Link: https://azure.microsoft.com/updates?id=499346
Details:
The recent Azure update introduces a public preview feature that enables customers to increase the ingestion quota for Azure Managed Prometheus metrics into Azure Monitor Workspaces via an Azure Resource Manager (ARM) API. This enhancement addresses the default ingestion limits imposed on Azure Monitor Workspaces, providing greater flexibility and scalability for monitoring large-scale Prometheus environments.
Background and Purpose
Azure Monitor integrates with Azure Managed Prometheus to collect and analyze Prometheus metrics at scale. By default, Azure Monitor Workspaces enforce ingestion quotas to maintain service stability and performance. However, organizations with extensive Prometheus deployments often require higher ingestion capacities to capture detailed telemetry data without data loss or throttling. Prior to this update, quota increases typically involved manual support requests, which could delay scaling operations. The introduction of an ARM API for quota increase requests automates and streamlines this process, enabling faster and programmatic quota management.
Technical Mechanisms and Implementation Methods
The ARM API exposes operations to submit quota increase requests, which include parameters such as the desired ingestion limit and the target Azure Monitor Workspace resource ID. Upon submission, the request undergoes validation and approval processes managed by Azure’s backend systems. Once approved, the ingestion quota for the specified workspace is updated accordingly. The API leverages Azure’s role-based access control (RBAC) to ensure that only authorized users can request quota changes. Users interact with the API via REST calls or through Azure CLI/PowerShell extensions that wrap these API operations.
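The announcement does not spell out the request schema, so the following is a purely hypothetical shape showing how such an ARM call is typically issued with az rest; the resource path, limit name, API version, and payload must be taken from the preview API reference:

# Hypothetical request shape for illustration only.
az rest --method put \
  --url "https://management.azure.com/<azure-monitor-workspace-resource-id>/providers/Microsoft.Quota/quotas/<limit-name>?api-version=<preview-api-version>" \
  --body '{"properties": {"limit": {"limitObjectType": "LimitValue", "value": 2000000}}}'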
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: LocalDNS for AKS
Update ID: 499341 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) now offers LocalDNS in public preview.
Key changes or new features
LocalDNS deploys a DNS proxy on each AKS node, significantly improving DNS resolution speed and reliability. This approach reduces DNS query bottlenecks common in large clusters and lowers latency by handling DNS requests locally on each node instead of routing them through centralized DNS services.
Target audience affected
Developers and IT professionals managing or developing applications on AKS clusters, especially those operating large-scale or latency-sensitive workloads, will benefit from enhanced DNS performance and stability.
Important notes if any
As this feature is in public preview, it may not be fully supported in production environments. Users should test LocalDNS in development or staging clusters before adopting it broadly. Monitoring and validation of DNS behavior are recommended to ensure compatibility with existing cluster configurations.
For more details, visit: https://azure.microsoft.com/updates?id=499341
Details:
The Azure Kubernetes Service (AKS) Public Preview of LocalDNS introduces a node-local DNS proxy designed to optimize DNS resolution within AKS clusters, addressing performance bottlenecks and reliability issues inherent in large-scale Kubernetes deployments. Traditionally, AKS clusters rely on centralized DNS services (like CoreDNS) that can become overwhelmed under heavy query loads, leading to increased latency and potential resolution failures. LocalDNS mitigates these challenges by deploying a DNS proxy daemon on each node, enabling DNS queries to be resolved locally rather than routing all requests through a shared cluster DNS service.
Technically, LocalDNS operates by running a lightweight DNS proxy on every AKS node. This proxy intercepts DNS queries from pods on the node and either resolves them from a local cache or forwards them to the cluster DNS service if necessary. By caching DNS responses locally, LocalDNS significantly reduces the number of queries sent to the centralized DNS service, thereby lowering query latency and improving overall cluster DNS reliability. This architecture also reduces network hops and congestion, which is especially beneficial in large clusters with thousands of nodes and high DNS query volumes. The implementation leverages Kubernetes DaemonSets to ensure the DNS proxy runs on each node, integrating seamlessly with the existing CoreDNS service without requiring changes to pod configurations or application code.
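A simple way to observe which resolver pods are actually using, before and after enabling LocalDNS, is to inspect resolv.conf and issue a lookup from a throwaway pod (the image and names are illustrative):

# The nameserver in resolv.conf shows whether queries target the cluster DNS
# service IP or a node-local proxy endpoint.
kubectl run dns-check --rm -it --image=busybox:1.36 --restart=Never -- \
  sh -c 'cat /etc/resolv.conf && nslookup kubernetes.default.svc.cluster.local'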
From a feature perspective, LocalDNS provides per-node DNS caching, transparent interception of pod DNS queries with no pod or application configuration changes, and automatic forwarding of cache misses to the cluster CoreDNS service.
Use cases for LocalDNS include large-scale AKS clusters where DNS query volume can cause performance degradation, latency-sensitive applications requiring faster DNS resolution, and scenarios where network reliability is critical. It is particularly useful in microservices architectures with frequent inter-service communication relying on DNS.
Important considerations include that LocalDNS is currently in public preview, so it should be used with caution in production environments. Users should monitor DNS performance and cluster behavior after enabling LocalDNS. Compatibility with custom DNS configurations and network policies should be validated, as LocalDNS modifies the DNS query path. Additionally, while LocalDNS reduces latency and load on CoreDNS, it introduces a new component on each node that requires resource allocation and monitoring.
Regarding integration, LocalDNS works alongside AKS’s existing CoreDNS service and leverages Kubernetes native constructs such as DaemonSets for deployment. It complements Azure networking features by reducing DNS-related network traffic, potentially improving overall cluster network efficiency. It also aligns with Azure Monitor and Azure Policy for observability and governance, allowing administrators to track DNS performance metrics and enforce configuration standards.
In summary, the LocalDNS public preview for AKS enhances DNS resolution by deploying a node-local DNS proxy that reduces latency, increases reliability, and alleviates load on centralized DNS services, making it a valuable update for optimizing DNS performance in large or latency-sensitive AKS clusters.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Azure Bastion integration with AKS
Update ID: 499335 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Bastion integration with Azure Kubernetes Service (AKS) is now available in public preview.
Key changes or new features
This integration allows seamless, secure, and continuous access to private AKS clusters without exposing them via public IPs. It also supports AKS clusters configured with API server authorized IP ranges, including public clusters. By leveraging Azure Bastion, developers and IT professionals can connect to AKS nodes and manage clusters securely through the Azure portal without needing VPNs or jump servers.
Target audience affected
Developers and IT professionals managing AKS clusters, especially those prioritizing secure access to private or restricted AKS environments.
Important notes if any
As this feature is in public preview, it should be used with caution in production environments. Users should monitor Azure updates for general availability and any changes in capabilities or pricing. Integration requires proper Azure Bastion and AKS configuration to ensure secure connectivity.
For more details, visit: https://azure.microsoft.com/updates?id=499335
Details:
The Azure Bastion integration with Azure Kubernetes Service (AKS), now available in public preview, provides a seamless and secure method for IT professionals to access private AKS clusters without exposing them to the public internet or managing jump servers. This update addresses the critical need for secure, persistent connectivity to AKS API servers, especially in environments with stringent security requirements.
Background and Purpose
Traditionally, accessing AKS clusters—particularly private clusters—requires complex network configurations such as VPNs, jump boxes, or exposing API servers with authorized IP ranges, which can increase attack surfaces and operational overhead. Azure Bastion, a managed PaaS service, offers secure RDP/SSH connectivity directly through the Azure Portal without public IP exposure. Integrating Azure Bastion with AKS aims to simplify and harden cluster access by leveraging Bastion’s secure connectivity model, thus reducing reliance on traditional, less secure methods.
Technical Mechanisms and Implementation Methods
Azure Bastion acts as a secure jump host within the virtual network hosting the AKS cluster. When integrated, Bastion establishes a secure tunnel to the AKS API server endpoint within the cluster's virtual network. This is achieved by tunneling API server traffic from the operator's machine through the Bastion host over TLS, so the cluster requires no public endpoint, VPN, or self-managed jump box.
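As a hedged sketch of that flow, using the pre-existing generic Bastion tunnel command as a stand-in (the AKS-specific preview experience may wrap this differently, and all names, IDs, and ports are placeholders):

# Open a local tunnel to the cluster's API server through Bastion, then point
# kubectl at it. TLS verification is relaxed here purely for illustration.
az network bastion tunnel \
  --resource-group my-rg --name my-bastion \
  --target-resource-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.ContainerService/managedClusters/my-cluster" \
  --resource-port 443 --port 8443
kubectl --server=https://127.0.0.1:8443 --insecure-skip-tls-verify=true get nodes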
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: AKS Model Context Protocol (MCP) server
Update ID: 499326 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) now offers the Model Context Protocol (MCP) server in public preview as an open source release.
Key changes or new features
The MCP server provides a standardized protocol enabling AI agents to interact directly with AKS clusters. This foundational component simplifies cluster management by offering a consistent interface for orchestrating workloads, monitoring cluster state, and automating operational tasks. It facilitates integration of AI-driven tools and workflows with AKS, enhancing automation and operational efficiency.
Target audience affected
Developers building AI agents or automation tools that manage AKS clusters, as well as IT professionals and DevOps engineers responsible for AKS cluster operations and automation.
Important notes if any
As a public preview feature, the MCP server is open source and may undergo changes before general availability. Users should evaluate it in test environments and provide feedback. Integration requires familiarity with AKS internals and AI agent development. The protocol aims to standardize cluster interaction, potentially influencing future AKS management paradigms.
For more details, visit: https://azure.microsoft.com/updates?id=499326
Details:
The Azure Kubernetes Service (AKS) Model Context Protocol (MCP) server has been released as an open source public preview, introducing a standardized communication layer designed to facilitate AI agents’ interaction with AKS clusters. This update addresses the growing need for seamless, programmatic cluster management and operational automation within AI-driven workflows.
Background and Purpose
As Kubernetes adoption expands, managing cluster state and orchestrating workloads programmatically has become increasingly complex, especially when integrating AI agents or automated systems that require contextual awareness of the cluster environment. The MCP server aims to simplify this by providing a consistent protocol that exposes cluster metadata, state, and operational controls in a structured manner. This foundational component is intended to serve as a bridge between AI-driven tools and AKS, enabling more intelligent, context-aware automation and management.
Technical Mechanisms and Implementation Methods
The MCP server operates as a service within the AKS cluster, exposing RESTful or gRPC endpoints that adhere to the Model Context Protocol specification. It aggregates data from Kubernetes API servers, metrics endpoints, and cluster telemetry sources, transforming this information into a coherent context model consumable by AI agents. The server supports authentication and authorization aligned with AKS security best practices, ensuring secure access to cluster context data. Deployment is facilitated via Helm charts or Kubernetes manifests, enabling easy integration into existing AKS environments.
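Given the Helm-based distribution, installation plausibly looks like the sketch below; the repository URL, chart, and release names are hypothetical placeholders, and the project's README is the authoritative source:

# Hypothetical chart coordinates for illustration only.
helm repo add aks-mcp https://example.org/charts/aks-mcp
helm repo update
helm install mcp-server aks-mcp/aks-mcp --namespace mcp --create-namespace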
Integration with Related Azure Services
MCP server complements Azure Monitor and Azure Policy by providing a richer context model that these services can consume for enhanced monitoring and governance. It can integrate with Azure Machine Learning pipelines to enable AI models that adapt based on cluster state. Additionally, MCP’s open source nature allows integration with Azure DevOps and GitOps workflows, facilitating automated deployment and lifecycle management driven by AI insights.
In summary, the AKS Model Context Protocol server introduces a standardized, extensible interface that empowers AI agents and automation tools with rich, real-time cluster context, streamlining management and operational workflows within AKS environments while fostering integration with Azure’s broader ecosystem.
Published: August 06, 2025 03:30:15 UTC Link: Generally Available: Control Plane Improvements in AKS
Update ID: 499313 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) control plane API server has been enhanced to improve resiliency and efficiency.
Key changes or new features
Implementation of Kubernetes Enhancement Proposal (KEP) 5116: Streaming Encoding for LIST Responses. This change reduces API server memory consumption by about 10x during large LIST API calls by streaming response data instead of buffering it entirely in memory.
Target audience affected
Developers and IT professionals managing AKS clusters, especially those running workloads or tools that perform large-scale LIST operations against the Kubernetes API.
Important notes if any
This update improves cluster stability and scalability under heavy API usage, reducing the risk of API server memory pressure and potential outages. No changes to API behavior or client compatibility are expected, but users may notice improved performance during large LIST queries.
For more details, visit: https://azure.microsoft.com/updates?id=499313
Details:
The recent Azure Kubernetes Service (AKS) update introduces generally available control plane improvements focused on enhancing the resiliency and efficiency of the Kubernetes API server, specifically through the implementation of Kubernetes Enhancement Proposal (KEP) 5116, titled “Streaming Encoding for LIST Responses.” This update addresses critical performance bottlenecks encountered during large LIST API calls, which are common in large-scale Kubernetes clusters.
Background and Purpose
In Kubernetes, the API server acts as the central management entity, handling all RESTful API requests including LIST operations that retrieve collections of resources. In large clusters, LIST calls can return extensive datasets, causing significant memory consumption spikes on the API server. This can lead to degraded performance or instability in the control plane. KEP 5116 was introduced to optimize how LIST responses are encoded and transmitted, aiming to reduce memory usage and improve API server responsiveness and stability.
Specific Features and Detailed Changes
The core feature of this update is the adoption of streaming encoding for LIST responses. Traditionally, the API server would marshal the entire list of resources into memory before sending the response to the client, resulting in high peak memory usage proportional to the size of the data set. With streaming encoding, the API server serializes and transmits the response incrementally, streaming chunks of the list as they are encoded rather than buffering the entire response in memory. This approach reduces peak memory usage by approximately 10x during large LIST calls, significantly lowering the risk of API server memory exhaustion.
Technical Mechanisms and Implementation Methods
Streaming encoding leverages Kubernetes’ internal API machinery enhancements that allow the API server to write serialized objects directly to the HTTP response stream. This is implemented by modifying the encoding pipeline to support chunked transfer encoding, enabling the server to send partial data without waiting for the complete response to be ready. The API server’s LIST handlers were updated to iterate over resource items and encode them on-the-fly. This method also improves latency for clients, as data begins arriving sooner rather than after full serialization. The implementation adheres to Kubernetes API conventions and maintains backward compatibility with existing clients.
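The server-side change needs nothing from clients, but a complementary client-side control already exists: kubectl can paginate its own LIST calls, bounding response sizes from the other end.

# Fetch pods in batches of 500 objects per request (kubectl's default chunk size);
# KEP 5116 streaming additionally keeps server memory flat within each response.
kubectl get pods --all-namespaces --chunk-size=500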
Use Cases and Application Scenarios
This improvement is particularly beneficial for large-scale AKS clusters with thousands of nodes and tens of thousands of pods or other resources, where LIST operations are frequent and data volumes are substantial. Scenarios include cluster monitoring tools, custom controllers, and operators that perform LIST calls to reconcile state or gather metrics. It also benefits CI/CD pipelines and automation scripts that query cluster state extensively. By reducing API server memory pressure, cluster stability and responsiveness improve, enabling smoother operations and scaling.
Important Considerations and Limitations
While streaming encoding reduces memory usage, it does not change the total data volume transmitted over the network. Network bandwidth and client-side processing remain factors to consider. Clients must support standard HTTP chunked transfer encoding, which is broadly supported but should be verified for custom tooling. Additionally, this update focuses on LIST responses; other API operations are unaffected. Monitoring and alerting on API server memory usage should continue to ensure that other factors do not cause resource exhaustion. The update is available by default in the latest AKS control plane versions, and no manual configuration is required.
Integration with Related Azure Services
This control plane enhancement integrates seamlessly with Azure Monitor for containers, Azure Policy for Kubernetes, and Azure Arc-enabled Kubernetes, all of which rely heavily on LIST API calls for cluster state inspection. Improved API server performance directly benefits these services by reducing latency and increasing reliability of data collection and policy enforcement. Additionally, Azure DevOps pipelines and GitOps tools that interact with AKS clusters will experience more stable API interactions. This update complements other AKS control plane improvements aimed at scalability and operational excellence.
In summary, the GA release of streaming encoding for LIST responses in the AKS control plane significantly optimizes API server memory usage during large LIST operations by implementing incremental response streaming per KEP 5116, enhancing cluster stability and performance in large-scale Kubernetes environments without requiring any client-side changes or manual configuration.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Web Application Firewall on Application Gateway for Containers
Update ID: 499308 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Application Gateway for Containers now supports Web Application Firewall (WAF) policy in public preview.
Key changes or new features
The integration enables AKS administrators and developers to apply WAF Default Ruleset directly on Application Gateway for Containers. This provides enhanced protection against common web vulnerabilities and malicious attacks such as SQL injection, cross-site scripting, and other OWASP top 10 threats. The feature leverages Azure’s managed WAF capabilities tailored for containerized workloads, improving security posture without requiring additional infrastructure.
Target audience affected
Azure Kubernetes Service (AKS) administrators, developers managing containerized applications, and IT security professionals responsible for workload protection in container environments.
Important notes if any
This feature is currently in public preview, so it should not be used in production environments without appropriate testing. Users should monitor Azure updates for GA announcements and review WAF policy configurations to optimize security rules for their specific workloads. Integration simplifies securing container ingress traffic by combining Application Gateway’s Layer 7 load balancing with WAF protections.
Details:
The recent Azure update announces the public preview of Web Application Firewall (WAF) support on Application Gateway for Containers, specifically targeting workloads running in Azure Kubernetes Service (AKS). This enhancement enables AKS administrators and developers to apply WAF policies directly to containerized applications exposed via Application Gateway, thereby strengthening security posture against common web vulnerabilities and attacks.
Background and Purpose
Application Gateway is a Layer 7 load balancer that offers advanced routing and security features, including WAF capabilities to protect web applications from threats such as SQL injection, cross-site scripting, and other OWASP Top 10 vulnerabilities. Traditionally, WAF was available on Application Gateway for VM-based or App Service workloads. With the growing adoption of containerized applications orchestrated by AKS, there was a need to extend WAF protection to these environments. This update addresses that gap by enabling WAF policies on Application Gateway for Containers, allowing seamless integration of security controls in containerized web application deployments.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Application Gateway for Containers functions as an ingress controller within AKS clusters, managing inbound HTTP/HTTPS traffic. The WAF module inspects incoming requests against the configured ruleset before forwarding them to backend pods. The inspection includes pattern matching for attack signatures, anomaly scoring, and request blocking or logging based on policy settings. The public preview supports the Default Ruleset, which is regularly updated by Microsoft to address emerging threats. Deployment involves creating a WAF policy and associating it with the gateway’s ingress configuration, as sketched below.
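As a rough illustration of that flow, the sketch below first creates an Azure WAF policy carrying the managed Default Ruleset and then attaches it through a controller custom resource. The resource kind, API group, and field names in the manifest are assumptions for illustration, not confirmed preview API; consult the preview documentation for the exact shapes:

```bash
# 1) Create a WAF policy with the managed Default Ruleset
#    (resource group and policy name are illustrative).
az network application-gateway waf-policy create \
  --resource-group my-rg \
  --name my-waf-policy

# 2) Attach the policy to the gateway's traffic via an ALB controller
#    custom resource. Kind, group, and fields below are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: alb.networking.azure.io/v1   # assumed API group/version
kind: WebApplicationFirewallPolicy       # assumed kind
metadata:
  name: my-waf-attachment
  namespace: my-app
spec:
  targetRef:                             # assumed: the traffic to protect
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-gateway
  webApplicationFirewall:
    id: /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/my-waf-policy
EOF
```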
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
Published: August 06, 2025 03:30:15 UTC Link: Generally Available: Deployment safeguards in AKS
Update ID: 499299 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) has introduced generally available deployment safeguards designed to enforce Kubernetes best practices during application deployment.
Key changes or new features
The deployment safeguards help prevent common misconfigurations and bugs by validating Kubernetes manifests and configurations before deployment. This feature automatically checks for compliance with recommended security and operational standards, reducing the risk of deployment failures and runtime issues. It integrates seamlessly into the AKS deployment pipeline, providing real-time feedback and blocking unsafe deployments.
Target audience affected
This update primarily benefits Kubernetes developers, DevOps engineers, and IT professionals managing AKS clusters who seek to improve deployment reliability and security by minimizing human errors and misconfigurations.
Important notes if any
The deployment safeguards are now generally available, meaning they are fully supported for production use. Users should review their existing deployment workflows to incorporate these safeguards for enhanced cluster stability and security. Further customization and policy tuning options may be available to tailor the safeguards to specific organizational requirements.
For more details, visit: https://azure.microsoft.com/updates?id=499299
Details:
The Azure Kubernetes Service (AKS) deployment safeguards feature, now generally available, addresses common challenges in Kubernetes application lifecycle management by enforcing best practices and preventing misconfigurations that often cause deployment failures or runtime issues. This update aims to enhance cluster reliability and operational stability by integrating automated validation and policy enforcement directly into the AKS deployment pipeline.
Background and Purpose
Kubernetes environments are complex and prone to misconfigurations, especially during rapid development and deployment cycles. Common issues include invalid resource definitions, missing required fields, or configurations that violate security or operational guidelines. These errors can lead to failed deployments, degraded application performance, or security vulnerabilities. The deployment safeguards feature was introduced to proactively detect and block such problematic configurations before they reach the cluster, reducing downtime and operational overhead.
Specific Features and Detailed Changes
The deployment safeguards capability in AKS provides a set of built-in validation checks and policy enforcement mechanisms that run during the deployment process. Key features include automated manifest validation against recommended security and operational standards, real-time feedback during deployment, and admission-time blocking of unsafe configurations.
Technical Mechanisms and Implementation Methods
Under the hood, deployment safeguards leverage admission controllers and Azure Policy for Kubernetes. Admission controllers are Kubernetes components that intercept API requests to the cluster and can mutate or reject resources based on defined rules. AKS integrates custom admission controllers that perform validation checks aligned with best practices and organizational policies. Azure Policy for Kubernetes extends this by enabling declarative policy definitions that are enforced cluster-wide. The safeguards feature is configurable via the Azure portal, CLI, or ARM templates, allowing teams to tailor validation rules to their environment.
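For illustration, a possible CLI flow is shown below; the --safeguards-level flag follows the preview-era syntax and should be treated as an assumption to verify against current AKS documentation:

```bash
# Enable deployment safeguards on an existing cluster in "Warning" mode,
# which surfaces violations without blocking deployments.
# Flag name follows preview syntax and may differ at GA.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --safeguards-level Warning

# Switch to "Enforcement" mode so the admission controller rejects
# non-compliant manifests outright.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --safeguards-level Enforcement
```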
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, the generally available deployment safeguards in AKS provide a robust mechanism to enforce Kubernetes best practices and organizational policies during deployment, reducing errors and improving cluster reliability. By leveraging admission controllers and Azure Policy integration, this feature supports a wide range of deployment workflows and environments, making it a valuable tool for IT professionals managing Kubernetes at scale in Azure.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Encryption in Transit for Azure Files NFS shares in AKS
Update ID: 499294 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) now supports Encryption in Transit (EiT) for Azure Files NFS v4.1 shares through the Azure File CSI driver, currently in public preview.
Key changes or new features
This update extends EiT capabilities, previously made generally available for Azure Files NFS shares in June, by enabling TLS 1.2 encryption for NFS volumes mounted within AKS clusters. Developers and IT professionals can now secure data moving between AKS pods and Azure Files NFS storage, ensuring confidentiality and integrity of file data in transit.
Target audience affected
Kubernetes developers, DevOps engineers, and IT administrators managing AKS workloads that utilize Azure Files NFS shares for persistent storage will benefit from enhanced security. This is particularly relevant for those with compliance or security requirements around data encryption.
Important notes if any
This feature is currently in public preview, so it should be used with caution in production environments. Users need to update their Azure File CSI driver and configure their AKS clusters accordingly to enable EiT. The underlying TLS 1.2 protocol ensures compatibility with existing security standards. Further documentation and best practices are available on Azure’s official update page.
Details:
The recent Azure update announces the public preview of Encryption in Transit (EiT) support for Azure Files NFS v4.1 shares within Azure Kubernetes Service (AKS) clusters using the Azure File Container Storage Interface (CSI) driver. This enhancement extends the security posture of AKS workloads by enabling TLS 1.2-based encryption for data transmitted between AKS pods and Azure Files NFS volumes, building upon the prior general availability of EiT for Azure Files NFS shares announced in June.
Background and Purpose:
Azure Files provides fully managed file shares accessible via SMB and NFS protocols, widely used for stateful workloads in containerized environments. NFS v4.1 shares are particularly favored for Linux-based AKS clusters due to their compatibility and performance characteristics. However, until now, data in transit between AKS pods and Azure Files NFS shares was not encrypted, posing potential exposure risks on the network. This update addresses this gap by introducing EiT, which encrypts network traffic using TLS, thereby enhancing data confidentiality and compliance with security standards.
Specific Features and Changes:
Technical Mechanisms and Implementation:
The Azure File CSI driver has been enhanced to support TLS 1.2 for NFS mounts. When EiT is enabled, the CSI driver negotiates a secure TLS session with the Azure Files backend, encrypting all NFS protocol traffic. This is achieved by leveraging the underlying Linux kernel’s support for NFS over TLS, combined with Azure Files’ backend infrastructure that supports TLS termination and certificate management. Administrators enable EiT by specifying mount options such as tls, or equivalent parameters, in the CSI driver’s volume manifest. The encryption process is transparent to applications, requiring no code changes.
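A minimal sketch of a StorageClass that passes the documented tls mount option through the Azure File CSI driver; the class name is illustrative and the exact option spelling is an assumption to verify against the driver's documentation:

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-nfs-eit        # illustrative name
provisioner: file.csi.azure.com  # Azure File CSI driver
parameters:
  protocol: nfs                  # provision NFS v4.1 shares
mountOptions:
  - tls                          # assumed option enabling TLS 1.2 in transit
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
```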
Use Cases and Application Scenarios:
Important Considerations and Limitations:
Integration with Related Azure Services:
In summary, the public preview of Encryption in Transit for Azure Files NFS v4.1 shares in AKS closes a long-standing gap by encrypting NFS traffic between AKS pods and Azure Files with TLS 1.2, improving data confidentiality and compliance without requiring application changes.
Published: August 06, 2025 03:30:15 UTC Link: Generally Available: Confidential VMs for Ubuntu 24.04 in AKS
Update ID: 499289 Data source: Azure Updates API
Categories: Launched, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) node pools backed by Confidential Virtual Machines (CVM) running Ubuntu 24.04 are now generally available.
Key changes or new features
Confidential VMs provide hardware-based security and memory encryption to protect sensitive workloads. With this update, AKS users can create node pools backed by Ubuntu 24.04 CVMs, enabling enhanced confidentiality for containerized applications. This facilitates the migration and deployment of highly sensitive container workloads in AKS with improved data protection and isolation.
Target audience affected
Developers and IT professionals managing containerized applications on AKS who require strong security guarantees for sensitive workloads. Security-focused teams looking to leverage confidential computing capabilities in Azure Kubernetes environments will benefit most.
Important notes if any
This feature is now generally available, indicating production readiness and full Azure support. Users should consider compatibility with Ubuntu 24.04 and validate their workloads against CVM constraints. Leveraging CVMs may involve specific VM SKU selections and potential cost implications due to enhanced security hardware.
For more details, visit: https://azure.microsoft.com/updates?id=499289
Details:
The recent general availability of Confidential Virtual Machines (CVM) for Ubuntu 24.04 in Azure Kubernetes Service (AKS) represents a significant advancement in securing containerized workloads by leveraging hardware-based trusted execution environments. Confidential VMs utilize AMD SEV-SNP or Intel TDX technologies to provide memory encryption and isolate workloads from the host OS, hypervisor, and other tenants, ensuring data confidentiality and integrity even in multi-tenant cloud environments. This update specifically enables AKS node pools to run on Ubuntu 24.04 CVMs, facilitating the migration and deployment of highly sensitive container workloads that require stringent security guarantees.
From a feature perspective, this update introduces support for Ubuntu 24.04 as the OS image for CVM node pools within AKS clusters. Ubuntu 24.04, being a long-term support (LTS) release, offers enhanced security, updated kernel features, and improved container runtime compatibility, which aligns well with the security posture of Confidential VMs. The integration allows AKS users to create node pools explicitly backed by CVM hardware, ensuring that all containers scheduled on these nodes benefit from hardware-enforced memory encryption and isolation. This is a critical enhancement for workloads handling regulated data, intellectual property, or any sensitive information requiring compliance with data protection standards such as GDPR, HIPAA, or PCI DSS.
Technically, the implementation leverages Azure’s underlying confidential computing infrastructure, which uses hardware extensions like AMD SEV-SNP or Intel TDX to encrypt VM memory and protect against privileged malware or insider threats. When a node pool is provisioned with CVM and Ubuntu 24.04, the AKS control plane orchestrates the deployment of nodes running the confidential VM image, with the container runtime configured to operate within this trusted execution environment. The Kubernetes scheduler can then place sensitive pods on these nodes, ensuring workload isolation at the hardware level. This update also ensures compatibility with AKS features such as node autoscaling, monitoring, and security policies, while maintaining the confidentiality guarantees.
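A minimal sketch of provisioning such a node pool and steering a sensitive pod onto it. The DCas_v5-series size shown is one of Azure's AMD SEV-SNP confidential VM families, but the --os-sku value for Ubuntu 24.04 and the image are assumptions to confirm against current AKS documentation:

```bash
# Add a CVM-backed node pool; the Ubuntu 24.04 os-sku value is assumed.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name cvmpool \
  --node-vm-size Standard_DC4as_v5 \
  --os-sku Ubuntu2404 \
  --node-count 2

# Pin a sensitive workload to the CVM pool via the agentpool label,
# which AKS applies to every node in the pool.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-app
spec:
  nodeSelector:
    kubernetes.azure.com/agentpool: cvmpool
  containers:
  - name: app
    image: mcr.microsoft.com/azuredocs/aks-helloworld:v1  # placeholder image
EOF
```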
Use cases for this update are primarily centered around enterprises and organizations that need to process sensitive data in containers without exposing it to cloud infrastructure operators or other tenants. Examples include financial services running confidential transaction processing, healthcare applications managing protected health information (PHI), and government workloads requiring classified data handling. Additionally, software vendors delivering SaaS solutions can leverage CVM node pools to assure customers that their data remains encrypted and isolated throughout processing, enhancing trust and compliance.
Important considerations include the fact that confidential VM node pools may have specific hardware requirements and availability constraints depending on the Azure region and VM SKU. Performance overhead due to memory encryption and attestation processes should be evaluated relative to workload sensitivity. Additionally, not all container workloads may be compatible with the CVM environment, especially those requiring direct hardware access or specific kernel modules unsupported in the confidential VM image. Proper configuration of Kubernetes security contexts and pod security policies is necessary to fully leverage the confidentiality features. Monitoring and logging should be adapted to respect confidentiality boundaries without exposing sensitive data.
This update integrates seamlessly with related Azure services such as Azure Policy for governance, Azure Monitor for observability, and Azure Security Center for threat detection, enabling a comprehensive security posture for confidential container workloads. It also complements Azure Confidential Ledger and Azure Key Vault Managed HSM, providing end-to-end confidential computing and key management capabilities. By combining these services, organizations can architect highly secure, compliant, and scalable containerized applications on AKS with hardware-backed confidentiality assurances.
In summary, the general availability of Confidential VMs for Ubuntu 24.04 in AKS empowers IT professionals to deploy and manage highly sensitive container workloads with robust hardware-enforced confidentiality, leveraging the latest Ubuntu LTS features and Azure’s confidential computing infrastructure to meet stringent security and compliance requirements in production Kubernetes environments.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Confidential VMs for Azure Linux
Update ID: 499284 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure has announced the public preview of Confidential Virtual Machines (CVM) support for Azure Linux nodes within Azure Kubernetes Service (AKS).
Key changes or new features
This update enables AKS node pools to run on Confidential VMs, providing hardware-based trusted execution environments that protect data in use. It allows developers and IT professionals to deploy highly sensitive containerized workloads with enhanced confidentiality and security guarantees. The integration leverages AMD SEV-SNP technology to isolate and encrypt memory, ensuring tenant data remains protected even from cloud operators.
Target audience affected
This update primarily targets developers and IT professionals managing containerized applications on AKS who require strong data confidentiality and compliance for sensitive workloads. Security-conscious organizations looking to migrate or run confidential workloads in Kubernetes on Azure will benefit.
Important notes if any
As this feature is in public preview, it is recommended to test workloads thoroughly before production use. Some limitations and region availability restrictions may apply. Users should review the official documentation and preview terms for details on support and SLA.
Link: https://azure.microsoft.com/updates?id=499284
Details:
The recent Azure update announces the public preview of Confidential Virtual Machines (CVM) support for Azure Linux nodes within Azure Kubernetes Service (AKS), enabling enhanced security and confidentiality for containerized workloads running on Linux-based node pools.
Background and Purpose
Confidential Computing is a growing priority in cloud security, addressing concerns around data and code protection even from cloud infrastructure operators. Azure Confidential VMs leverage hardware-based Trusted Execution Environments (TEEs), such as AMD SEV-SNP or Intel TDX, to isolate and encrypt data in use, not just at rest or in transit. Prior to this update, Confidential VMs were available for Windows workloads and standalone Linux VMs, but integration with AKS node pools running Azure Linux was lacking. This update aims to fill that gap by enabling confidential compute capabilities directly within AKS, facilitating the migration of highly sensitive container workloads to managed Kubernetes with strong confidentiality guarantees.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
Confidential VMs in Azure use hardware TEEs to create isolated execution environments. For Linux CVMs in AKS, node pools are provisioned on AMD SEV-SNP-capable VM sizes so that node memory is encrypted in hardware and workloads scheduled to those nodes inherit the isolation guarantees; a hedged provisioning sketch follows below.
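A sketch under the same pattern as other CVM node pools, assuming the AzureLinux os-sku can be combined with an AMD SEV-SNP-capable VM size during the preview; the exact supported size/os-sku combinations and regions are assumptions to verify:

```bash
# Add an Azure Linux node pool backed by an AMD SEV-SNP confidential
# VM size. Supported combinations in the preview may be narrower.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name cvmlinux \
  --node-vm-size Standard_DC4as_v5 \
  --os-sku AzureLinux \
  --node-count 1
```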
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, the public preview of Confidential VMs for Azure Linux in AKS extends Azure’s confidential computing capabilities to Linux-based node pools, enabling organizations to run highly sensitive containerized workloads on managed Kubernetes with hardware-enforced memory encryption and isolation.
Published: August 06, 2025 03:30:15 UTC Link: Public Preview: Disable HTTP proxy
Update ID: 499279 Data source: Azure Updates API
Categories: In preview, Compute, Containers, Azure Kubernetes Service (AKS), Features
Summary:
What was updated
Azure Kubernetes Service (AKS) introduced a public preview feature to disable the HTTP proxy configuration.
Key changes or new features
Previously, AKS clusters supported an HTTP proxy feature that configured both nodes and pods to route network traffic through a specified HTTP proxy, facilitating secure communication in proxy-dependent environments. The new update allows users to disable this HTTP proxy setting, providing greater flexibility in network configurations and troubleshooting scenarios where proxy usage is not desired or causes issues.
Target audience affected
This update primarily affects developers and IT professionals managing AKS clusters in environments that utilize HTTP proxies for network security or compliance. It is especially relevant for those needing to customize or bypass proxy settings for cluster nodes and pods.
Important notes if any
As this feature is in public preview, users should test the disable HTTP proxy functionality in non-production environments before deploying broadly. Disabling the proxy may impact network traffic routing and security policies, so careful validation is recommended. For full details and implementation guidance, refer to the official Azure update link.
Details:
The recent Azure update introduces a Public Preview feature allowing users to disable the HTTP proxy functionality in Azure Kubernetes Service (AKS) clusters. This update addresses the need for greater flexibility in managing network traffic in proxy-dependent environments by enabling users to opt out of the automatic HTTP proxy configuration applied to both AKS nodes and pods.
Background and Purpose
AKS clusters often operate in enterprise environments where outbound network traffic must traverse HTTP proxies for security, compliance, or monitoring reasons. Previously, AKS introduced an HTTP proxy feature that automatically configures nodes and pods to route traffic through a specified HTTP proxy, simplifying cluster deployment in such environments. However, some scenarios require disabling this proxy behavior—such as when troubleshooting network issues, optimizing performance, or when proxy usage is no longer necessary. This update provides the ability to disable the HTTP proxy feature, enhancing operational control and flexibility.
Specific Features and Detailed Changes
Technical Mechanisms and Implementation Methods
The HTTP proxy feature in AKS works by injecting environment variables (such as HTTP_PROXY, HTTPS_PROXY, and NO_PROXY) into node and pod configurations, and by configuring the underlying container runtime and node networking stack to route traffic accordingly. Disabling the feature effectively removes these environment variables and associated routing rules from the cluster nodes and pods. This can be controlled via AKS cluster configuration parameters or CLI commands during cluster creation or update. The implementation ensures that disabling the proxy does not disrupt other networking components or cluster stability.
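As an illustration only, the flow might look like the following. The --disable-http-proxy flag shown here is a hypothetical name, since the announcement does not spell out the exact parameter; check 'az aks update --help' for the actual preview syntax before use:

```bash
# Hypothetical flag name -- verify against the documented preview command.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --disable-http-proxy

# Reimage node pools afterwards so nodes pick up the removal of the
# HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment variables.
az aks nodepool upgrade \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name nodepool1 \
  --node-image-only
```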
Use Cases and Application Scenarios
Important Considerations and Limitations
Integration with Related Azure Services
In summary, the Public Preview of the ability to disable the HTTP proxy feature in AKS clusters provides IT professionals with enhanced control over network traffic routing, enabling flexible adaptation to evolving enterprise network architectures and troubleshooting needs while maintaining compatibility with Azure’s broader ecosystem of security and monitoring services.
This report was automatically generated - 2025-08-07 03:07:31 UTC