
Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
Knowledge Bases for Amazon Bedrock now lets you configure Guardrails
Knowledge Bases for Amazon Bedrock (KB) securely connects foundation models (FMs) to internal company data sources for Retrieval Augmented Generation (RAG), delivering more relevant and accurate responses. We are excited to announce that Guardrails for Amazon Bedrock is now integrated with Knowledge Bases. Guardrails let you implement safeguards customized to your RAG application requirements and responsible AI policies, leading to a better end-user experience. Guardrails provides a comprehensive set of policies to protect your users from undesirable responses and interactions with a generative AI application. First, you can customize a set of denied topics to avoid within the context of your application. Second, you can filter content across prebuilt harmful categories such as hate, insults, sexual content, violence, misconduct, and prompt attacks. Third, you can define a set of offensive and inappropriate words to be blocked in your application. Finally, you can filter user inputs containing sensitive information (e.g., personally identifiable information) or redact confidential information in model responses based on your use case. Guardrails can be applied to the input sent to the model as well as the content generated by the foundation model. This capability within Knowledge Bases is now available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), US East (N. Virginia), and US West (Oregon) Regions. To learn more, refer to the Knowledge Bases for Amazon Bedrock documentation. To get started, visit the Amazon Bedrock console or use the RetrieveAndGenerate API.
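As a minimal sketch of attaching a guardrail to a RetrieveAndGenerate call: the helper below builds the request body. The nesting of `guardrailConfiguration` under `generationConfiguration`, and the field names `guardrailId`/`guardrailVersion`, are assumptions modeled on Bedrock API conventions — verify against the API reference before use.

```python
def retrieve_and_generate_request(kb_id, model_arn,
                                  guardrail_id, guardrail_version, query):
    """Build a RetrieveAndGenerate request body that attaches a guardrail.

    The guardrailConfiguration shape below is an assumption based on
    Bedrock API conventions; check the API reference for the exact fields.
    """
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "generationConfiguration": {
                    "guardrailConfiguration": {
                        "guardrailId": guardrail_id,
                        "guardrailVersion": guardrail_version,
                    }
                },
            },
        },
    }

# Sending it (requires boto3 and AWS credentials; IDs are placeholders):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**retrieve_and_generate_request(
#     "KB12345", "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
#     "gr-abc123", "1", "What is our refund policy?"))
```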
Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20240408
Amazon Relational Database Service (RDS) for MySQL announces Amazon RDS Extended Support minor version 5.7.44-RDS.20240408. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of MySQL. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide. Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL and PostgreSQL databases on Aurora and RDS after the community ends support for a major version. You can run your MySQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide and the Pricing FAQs. Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. See Amazon RDS for MySQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
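The upgrade itself goes through the standard ModifyDBInstance API. A sketch of the parameters (the instance identifier is a placeholder):

```python
def extended_support_upgrade_params(instance_id,
                                    target_version="5.7.44-RDS.20240408"):
    """Parameters for upgrading an RDS for MySQL instance to the
    Extended Support minor version via ModifyDBInstance."""
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": target_version,
        # Apply during the next maintenance window instead of immediately.
        "ApplyImmediately": False,
    }

# import boto3
# rds = boto3.client("rds")
# rds.modify_db_instance(**extended_support_upgrade_params("my-mysql-db"))
```

Enabling automatic minor version upgrades on the instance picks up future Extended Support minors during maintenance windows without a manual call.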
Knowledge Bases for Amazon Bedrock now lets you configure inference parameters
We are excited to announce that Knowledge Bases for Amazon Bedrock (KB) now lets you configure inference parameters for greater control over personalizing the responses generated by a foundation model (FM). With this launch you can optionally set inference parameters such as the randomness and length of the response generated by the foundation model. You can control how random or diverse the generated text is by adjusting a few settings, such as temperature and top-p. The temperature setting makes the model more or less likely to choose unusual or unexpected words; a lower temperature value yields more expected and common word choices. The top-p setting limits how many word options the model considers; reducing this number restricts consideration to a smaller set of word choices, making the output more conventional. In addition to randomness and diversity, you can restrict the length of the foundation model output through maxTokens and stopSequences. You can use the maxTokens setting to specify the maximum number of tokens to return in the generated response, and the stopSequences setting to configure strings that signal the model to stop generating further tokens. The inference parameters capability within Knowledge Bases is now available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), US East (N. Virginia), and US West (Oregon) Regions. To learn more, refer to the Knowledge Bases for Amazon Bedrock documentation. To get started, visit the Amazon Bedrock console or use the RetrieveAndGenerate API.
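The four settings above can be sketched as a single config block. The field names (`inferenceConfig`, `textInferenceConfig`, `topP`, `maxTokens`, `stopSequences`) are assumptions following Bedrock API naming conventions; confirm the exact shape in the API reference.

```python
def text_inference_config(temperature=0.2, top_p=0.9,
                          max_tokens=512, stop_sequences=None):
    """Inference parameters for a Knowledge Bases RetrieveAndGenerate call.

    Field names are assumptions modeled on Bedrock API conventions;
    this block would nest under generationConfiguration in the request.
    """
    return {
        "inferenceConfig": {
            "textInferenceConfig": {
                "temperature": temperature,   # lower => more predictable word choices
                "topP": top_p,                # smaller => narrower candidate set
                "maxTokens": max_tokens,      # cap on generated response length
                "stopSequences": stop_sequences or [],  # strings that halt generation
            }
        }
    }
```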
AWS HealthImaging now supports retrieval of DICOM Part 10 instances
AWS HealthImaging now supports retrieval of DICOM Part 10 data, enabling customers to download instance-level binaries. The retrieve DICOM instance API is built in conformance to the DICOMweb WADO-RS standard for web-based medical imaging. With this feature launch, customers taking advantage of HealthImaging’s cloud-native interfaces can better interoperate with systems that utilize DICOM Part 10 binaries. You can retrieve a DICOM instance from a HealthImaging data store by specifying the Series, Study, and Instance UIDs associated with the resource. You can also provide an optional image set ID as a query parameter to specify the image set from which the instance resource should be retrieved. Customers can specify the Transfer Syntax, such as uncompressed (ELE) or compressed (High-throughput JPEG 2000). To learn more about how to retrieve DICOM P10 binaries, see the AWS HealthImaging Developer Guide.  AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). To learn more, visit AWS HealthImaging.
Amazon MSK now supports the removal of brokers from MSK provisioned clusters
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports removing brokers from MSK provisioned clusters. Administrators can optimize costs of their Amazon MSK clusters by reducing broker count to meet the changing needs of their streaming workloads, while maintaining cluster performance, availability, and data durability. Customers use Amazon MSK as the core foundation to build a variety of real-time streaming applications and high-performance event-driven architectures. As their business needs and traffic patterns change, they often adjust their cluster capacity to optimize their costs. Amazon MSK Provisioned provides flexibility for customers to change their provisioned clusters by adding brokers or changing the instance size and type. With broker removal, Amazon MSK Provisioned now offers an additional option to right-size cluster capacity. Customers can remove multiple brokers from their MSK provisioned clusters to meet the varying needs of their streaming workloads without any impact to client connectivity for reads and writes. By using the broker removal capability, administrators can adjust a cluster’s capacity, eliminating the need to migrate to another cluster to reduce broker count. Brokers can be removed from Amazon MSK provisioned clusters configured with M5 or M7g instance types. The feature is available in all AWS Regions where MSK Provisioned is supported. To learn more, visit our launch blog and the Amazon MSK Developer Guide.
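A sketch of right-sizing a cluster, assuming removal reuses the existing UpdateBrokerCount API with a lower target (the ARN is a placeholder; in practice you would first reassign partitions off the brokers to be removed, per the Developer Guide):

```python
def broker_removal_params(cluster_arn, current_version, target_brokers):
    """Parameters for Amazon MSK's UpdateBrokerCount API. With this launch
    the target count may be lower than the current broker count."""
    return {
        "ClusterArn": cluster_arn,
        # Cluster metadata version, as returned by DescribeCluster.
        "CurrentVersion": current_version,
        "TargetNumberOfBrokerNodes": target_brokers,
    }

# import boto3
# kafka = boto3.client("kafka")
# info = kafka.describe_cluster(ClusterArn=arn)["ClusterInfo"]
# kafka.update_broker_count(
#     **broker_removal_params(arn, info["CurrentVersion"], 6))
```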
Bottlerocket now supports NVIDIA Fabric Manager for Multi-GPU Workloads
Today, AWS has announced that Bottlerocket, the Linux-based operating system purpose-built for containers, now supports NVIDIA Fabric Manager, enabling users to harness the power of multi-GPU configurations for their AI and machine learning workloads. With this integration, Bottlerocket users can now seamlessly leverage their connected GPUs as a high-performance compute fabric, enabling efficient and low-latency communication between all the GPUs in each of their P4/P5 instances. The growing sophistication of deep learning models has led to an exponential increase in the computational resources required to train them within a reasonable timeframe. To address this increase in computational demands, customers running AI and machine learning workloads have turned to multi-GPU implementations, leveraging NVIDIA's NVSwitch and NVLink technologies to create a unified memory fabric across connected GPUs. The Fabric Manager support in the Bottlerocket NVIDIA variants allows users to configure this fabric, enabling all GPUs to be used as a single, high-performance pool rather than individual units. This enables Bottlerocket users to run multi-GPU setups on P4/P5 instances, significantly accelerating the training of complex neural networks. To learn more about Fabric Manager support in the Bottlerocket NVIDIA variants, please visit the official Bottlerocket GitHub repo.
Amazon MWAA now available in additional Regions
Amazon Managed Workflows for Apache Airflow (MWAA) is now available in five new AWS Regions: Europe (Milan), Africa (Cape Town), US West (N. California), Asia Pacific (Hong Kong), and Middle East (Bahrain). Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform as you do today to orchestrate your workflows and enjoy improved scalability, availability, and security without the operational burden of having to manage the underlying infrastructure. Learn more about using Amazon MWAA on the product page. Please visit the AWS region table for more information on AWS regions and services. To learn more about Amazon MWAA visit the Amazon MWAA documentation. Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
AWS announces Amazon DocumentDB zero-ETL integration with Amazon OpenSearch Service
Amazon DocumentDB zero-ETL integration with Amazon OpenSearch Service provides customers advanced search capabilities, such as fuzzy search, cross-collection search, and multilingual search, on their Amazon DocumentDB documents using the OpenSearch API. With a few clicks in the AWS Console, customers can now seamlessly synchronize their data from Amazon DocumentDB to Amazon OpenSearch Service, eliminating the need to write any custom code to extract, transform, and load the data. This integration extends the existing text search and vector search capabilities in Amazon DocumentDB, providing customers greater flexibility for searching their JSON-based documents. This zero-ETL integration uses Amazon OpenSearch Ingestion to synchronize the data from Amazon DocumentDB collections to Amazon OpenSearch Service. Amazon OpenSearch Ingestion automatically understands the format of the data in Amazon DocumentDB collections and maps the data to your index mapping templates in Amazon OpenSearch Service to yield the most performant search results. Customers can synchronize data from multiple Amazon DocumentDB collections via multiple pipelines into one Amazon OpenSearch managed cluster or serverless collection to offer holistic insights across several applications. Amazon DocumentDB zero-ETL integration with Amazon OpenSearch Service is now available in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), and Canada (Central). To learn more and get started with this zero-ETL integration, visit the developer guides for Amazon DocumentDB and Amazon OpenSearch Service and the launch blog.
Amazon WorkSpaces Core now supports Windows Server bundles
Amazon WorkSpaces Core now offers new bundles powered by Windows Server 2019 and Windows Server 2022. With these bundles, customers and partners can take advantage of the latest license-included Windows Server instances. This new feature allows customers to minimize getting-started time by providing staged images. In addition, this feature enables customers and partners to run multi-session VDI workloads on WorkSpaces Core desktops. You can get started using the managed Windows Server 2019 and Windows Server 2022 WorkSpaces Core bundles or create your own custom bundle and image tailored to your requirements. For more information on Amazon WorkSpaces Core’s new Windows Server bundles, visit the Amazon WorkSpaces Core FAQs. The new WorkSpaces Core Windows Server 2019 and Windows Server 2022 support is available in all AWS Regions where Amazon WorkSpaces Core is available. For pricing information, visit the Amazon WorkSpaces Core pricing page.
Announcing General Availability of Amazon Redshift Serverless in the South America (São Paulo) AWS region
Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the South America (São Paulo) AWS Region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications. With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, or other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
Amazon Managed Service for Prometheus now supports inline editing of alert manager and rules configuration
Amazon Managed Service for Prometheus now supports inline editing of rules and alert manager configuration directly from the AWS console. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a popular Cloud Native Computing Foundation open-source project for monitoring and alerting on metrics from compute environments such as Amazon Elastic Kubernetes Service. Previously, customers could define alerting and recording rules, or the alert manager definition, by importing the respective configuration from a YAML file via the AWS console. Now, they can import, preview, and edit existing rules or alert manager configurations from YAML files or create them directly from the AWS console. The inline editing experience allows customers to preview their rules and alert manager configuration prior to setting them. This feature is now available in all regions where Amazon Managed Service for Prometheus is generally available. To learn more about Amazon Managed Service for Prometheus, visit the product page and pricing page.
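For illustration, here is the kind of rules YAML you can now paste and edit inline in the console; the same document can be uploaded programmatically via the service's PutRuleGroupsNamespace API (the workspace ID and rule values are placeholders):

```python
def alerting_rules_yaml():
    """A minimal Prometheus alerting-rules document of the form
    accepted by the inline editor and the rules-namespace APIs."""
    return """\
groups:
  - name: example
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status="500"}[5m]) > 0.1
        for: 10m
        labels:
          severity: page
"""

# import boto3
# amp = boto3.client("amp")
# amp.put_rule_groups_namespace(workspaceId="ws-...", name="example",
#                               data=alerting_rules_yaml().encode())
```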
Amazon OpenSearch Ingestion launches new user interface for easy blueprint discovery
Amazon OpenSearch Ingestion now offers a new user interface that enables searching for blueprints using full-text search in the AWS Console, helping you easily discover all the sources that you can ingest data from into Amazon OpenSearch Service. Blueprints are pre-filled OpenSearch Ingestion configuration files that help you quickly get started with ingesting data from popular sources like Amazon S3, DynamoDB, and Security Lake. The new interface now also offers visual tiles with icons for all your favorite blueprints, providing you a bird’s-eye view of all the sources and sinks supported by Amazon OpenSearch Ingestion. The new user interface for blueprints offers a customized getting-started guide for each source, detailing key steps in setting up a successful end-to-end integration. As part of the visual overhaul, Amazon OpenSearch Ingestion now also offers support for specifying the pipeline configuration in JSON format in the AWS Console in addition to the existing YAML support. This will allow you to confidently copy and paste configurations from your text or code editors without having to worry about formatting errors due to inconsistent whitespace. These features are available in all the AWS commercial regions where Amazon OpenSearch Ingestion is currently available. To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.
Amazon Connect now supports creating rules for monitoring and alerting on Flow metrics
You can now configure rules to automatically create a task, send an email, or generate an Amazon EventBridge event whenever a Flows or Flow Modules metric breaches the threshold you define. For example, you can create a rule to assign a task to a contact center administrator whenever the dropped rate (i.e., the percentage of contacts that dropped from a flow) for your inbound welcome flow exceeds 10% over a trailing 4-hour window. These features are available in all AWS regions where Amazon Connect is available. To learn more about Contact Lens Rules and Amazon Connect, the AWS contact center as a service solution on the cloud, please see the Amazon Connect Administrator Guide or visit the Amazon Connect website.
Application Load Balancer launches IPv6 only support for internet clients
Application Load Balancer (ALB) now allows customers to provision load balancers without IPv4 addresses for clients that can connect using IPv6 only. To connect, clients resolve AAAA DNS records that are assigned to the ALB. The ALB remains dual stack for communication between the load balancer and targets. With this new capability, you have the flexibility to use IPv4 or IPv6 addresses for your application targets, while avoiding IPv4 charges for clients that don’t require it. To get started, you can either create a new dual-stack ALB without a public IPv4 address or modify existing ALBs to use dual stack without public IPv4 using the AWS APIs or console. There are no additional charges for using this feature. The internet-facing IPv6-only ALB is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions.
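Modifying an existing ALB boils down to one SetIpAddressType call. A sketch, assuming the new IP address type value is `dualstack-without-public-ipv4` (derived from the feature name; verify against the ELBv2 API reference):

```python
def ipv6_only_params(load_balancer_arn):
    """Parameters for switching an internet-facing ALB to dual stack
    without a public IPv4 address via SetIpAddressType.

    The IpAddressType value is an assumption based on the feature name.
    """
    return {
        "LoadBalancerArn": load_balancer_arn,
        "IpAddressType": "dualstack-without-public-ipv4",
    }

# import boto3
# elbv2 = boto3.client("elbv2")
# elbv2.set_ip_address_type(**ipv6_only_params(alb_arn))
```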
Amazon OpenSearch Serverless now available in Europe (London) and Asia Pacific (Mumbai)
We are excited to announce the availability of Amazon OpenSearch Serverless in the Europe (London) and Asia Pacific (Mumbai) regions. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless automatically provisions and scales resources to provide consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). OpenSearch Serverless is now available in 11 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), and Asia Pacific (Mumbai). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
Amazon MWAA now supports Airflow REST API with web server auto scaling
Amazon Managed Workflows for Apache Airflow (MWAA) now supports the Airflow REST API along with web server auto scaling, allowing customers to programmatically monitor and manage their Apache Airflow environments at scale.  Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. With Airflow REST API support, customers can now monitor workflows, trigger new executions, manage connections, and perform other administration tasks with ease via scalable API calls. Web server auto scaling enables MWAA to automatically scale out the Airflow web servers to handle increased demand, whether from REST API requests, Command Line Interface (CLI) usage, or more concurrent Airflow User Interface (UI) users. You can launch or upgrade an Apache Airflow environment to include web server auto scaling on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about Airflow REST API and web server auto scaling, visit the Launch Blog. To learn more about Amazon MWAA visit the Amazon MWAA documentation.
Amazon Detective adds support for EKS audit logs in Security Lake integration
Amazon Detective now supports retrieving Amazon Elastic Kubernetes Service (Amazon EKS) audit logs from Amazon Security Lake. With this launch, Detective customers leveraging the Security Lake integration can query and analyze Amazon EKS audit logs in addition to AWS CloudTrail and Amazon VPC Flow Logs. This enhancement enables more comprehensive investigations into potential security issues involving Amazon EKS workloads. By integrating Amazon EKS audit logs, Detective provides security analysts with deeper visibility into Kubernetes API calls and activities within EKS clusters. Amazon Detective is a managed security service that simplifies the investigation process by building data aggregations, summaries, and visualizations based on security findings and activity logs. Alongside EKS support, Detective now supports OCSF v1.1.0, enhancing query performance for your security analytics. This allows for more effective threat detection, incident response, and compliance auditing for containerized applications. The integration seamlessly surfaces relevant Amazon EKS logs during investigations, accelerating the analysis process without the need to switch between multiple tools. This new capability is available in all AWS Regions where both Amazon Detective and Amazon Security Lake are available. For the list of supported Regions, refer to the AWS Regional Services list. To get started, visit the Detective console and enable the Security Lake integration. You can find guidance on querying Amazon EKS audit logs in the Amazon Detective User Guide. For more information about Amazon Detective, visit the service page.
AWS Shield Advanced is now available in Canada West (Calgary) Region
Starting today, you can use AWS Shield Advanced in the AWS Canada West (Calgary) Region. AWS Shield Advanced is a managed application security service that safeguards applications running on AWS from distributed denial of service (DDoS) attacks. Shield Advanced provides always-on detection and automatic inline mitigations that minimize application downtime and latency. It also provides protections against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. To learn more, visit the AWS Shield Advanced product page. For a full list of AWS regions where AWS Shield Advanced is available, visit the AWS Regional Services page. AWS Shield Advanced pricing may vary between regions. For more information about pricing, visit the AWS Shield Pricing page.
Amazon Managed Grafana now supports Grafana version 10.4
Customers can now run Amazon Managed Grafana workspaces with Grafana version 10.4. This release includes features that were launched as part of open source Grafana versions 9.5 to 10.4, including Correlations, Subfolders, and new visualization panels such as Data Grid, XY chart, and Trend panel. This release also introduces new configuration APIs to manage service accounts and tokens for Amazon Managed Grafana workspaces. Service accounts replace API keys as the primary way to authenticate applications with Grafana APIs, using service account tokens. These new APIs eliminate the need to manually create service accounts, enabling customers to fully automate their provisioning workflows. With Correlations, customers can define relationships between different data sources, rendered as interactive links in Explore visualizations that trigger queries on the related data source, carrying forward data like namespace, host, or label values and enabling root cause analysis across a diverse set of data sources. Subfolders enable a nested hierarchy of folders with nested layers of permissions, allowing customers to organize their dashboards to reflect their organization’s hierarchy. To explore the complete list of new features, please refer to our user documentation. Grafana version 10.4 is supported in all AWS regions where Amazon Managed Grafana is generally available. You can create a new Amazon Managed Grafana workspace or upgrade your existing 9.4 workspace to 10.4 from the AWS Console, SDK, or CLI. Check out the Amazon Managed Grafana user guide and Amazon Managed Grafana API Reference for detailed documentation.
Amazon VPC Lattice now supports TLS Passthrough
Today, AWS announces the general availability of TLS Passthrough for Amazon VPC Lattice, which allows customers to enable end-to-end authentication and encryption using their existing TLS/mTLS implementations. Prior to this launch, VPC Lattice supported only HTTP and HTTPS listener protocols, which terminate TLS and perform request-level routing and load balancing based on information in HTTP headers. With this launch, you can configure a TLS listener, which routes traffic based on the server name indication (SNI) field of a TLS/mTLS connection, allowing you to perform end-to-end authentication and encryption between your TCP and HTTP services without terminating TLS in VPC Lattice. For more information, visit the Amazon VPC Lattice product detail page and TLS passthrough documentation. For details on pricing, please visit the VPC Lattice pricing page.
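As a sketch of configuring such a listener via the VPC Lattice CreateListener API: the protocol value `TLS_PASSTHROUGH` and the exact request shape are assumptions inferred from the feature name and the service's lowerCamelCase API conventions; verify against the VPC Lattice API reference.

```python
def tls_passthrough_listener_params(service_id, target_group_id):
    """Assumed parameters for a VPC Lattice TLS passthrough listener.

    SNI-based routing means VPC Lattice forwards the encrypted stream
    to the target group without terminating TLS itself.
    """
    return {
        "serviceIdentifier": service_id,
        "name": "tls-listener",
        "protocol": "TLS_PASSTHROUGH",  # assumed value; see API reference
        "port": 443,
        "defaultAction": {
            "forward": {
                "targetGroups": [{"targetGroupIdentifier": target_group_id}]
            }
        },
    }

# import boto3
# lattice = boto3.client("vpc-lattice")
# lattice.create_listener(**tls_passthrough_listener_params("svc-123", "tg-456"))
```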
AWS HealthImaging supports cross account data imports
AWS HealthImaging now supports cross-account and cross-region import jobs. With this release, customers can directly import DICOM data from any S3 bucket owned by their organization, owned by collaborators, or from publicly available sources like the Registry of Open Data on AWS (RODA). Customers can import data from an S3 bucket in a different region than their data stores as long as that bucket is in a region where HealthImaging is available. To run cross-account DICOM import jobs, the S3 bucket owner must grant the data store owner s3:ListBucket and s3:GetObject permissions, and the data store owner must add the bucket to their IAM ImportJobDataAccessRole. This makes it easy to load publicly available open data sets like the Imaging Data Commons (IDC) Collections. Medical imaging SaaS products can now easily import DICOM data from their customers’ accounts. Large organizations can populate one HealthImaging data store from many S3 input buckets distributed across their multi-account environment, and researchers can easily and securely share data across multi-institution clinical studies. AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing infrastructure costs. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). To learn more, visit AWS HealthImaging.
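Once the bucket policy and data-access role are in place, the import itself uses the standard StartDICOMImportJob API, now pointed at a bucket in another account or region. A sketch (bucket names and ARNs are placeholders):

```python
def cross_account_import_params(datastore_id, role_arn,
                                source_bucket, output_bucket):
    """Parameters for a HealthImaging StartDICOMImportJob reading DICOM
    data from an S3 bucket owned by another account (or in another region)."""
    return {
        "datastoreId": datastore_id,
        # Role must be granted s3:ListBucket/s3:GetObject on the source bucket.
        "dataAccessRoleArn": role_arn,
        "inputS3Uri": f"s3://{source_bucket}/dicom-input/",
        "outputS3Uri": f"s3://{output_bucket}/import-results/",
    }

# import boto3
# mi = boto3.client("medical-imaging")
# mi.start_dicom_import_job(**cross_account_import_params(
#     "datastore-id", "arn:aws:iam::111122223333:role/ImportJobDataAccessRole",
#     "collaborator-bucket", "my-output-bucket"))
```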
Amazon RDS for PostgreSQL announces Extended Support minor 11.22-RDS.20240418
Amazon Relational Database Service (RDS) for PostgreSQL announces Amazon RDS Extended Support minor version 11.22-RDS.20240418. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of PostgreSQL. Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL and PostgreSQL databases on Aurora and RDS after the community ends support for a major version. You can run your PostgreSQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide. You are able to leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide. Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
AWS CodeBuild now supports connecting to an Amazon VPC from reserved capacity
AWS CodeBuild now supports connecting your fleet of reserved Linux hosts to your Amazon VPC. Reserved capacity allows you to provision a fleet of CodeBuild hosts that persist your build environment. These hosts remain available to receive subsequent build requests, which reduces build start-up latencies. With this feature, you can use reserved capacity to compile your software within your VPC and access resources such as Amazon Relational Database Service, Amazon ElastiCache, or any service endpoints that are only reachable from within a specific VPC. Configuring reserved capacity to connect to your VPC also secures your builds by applying the same network access controls as defined in your security groups. This feature is available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt).  To learn more about CodeBuild’s reserved capacity, see running builds on reserved capacity. To learn more about CodeBuild’s support for connecting to VPC, see configuring builds with VPC.
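A sketch of attaching a VPC to a reserved-capacity fleet. The assumption here is that fleet VPC settings mirror the project-level `vpcConfig` shape (`vpcId`, `subnets`, `securityGroupIds`) and can be applied through the fleet APIs; check the CodeBuild API reference for the exact parameter names.

```python
def fleet_vpc_config(vpc_id, subnet_ids, security_group_ids):
    """Assumed vpcConfig block for a CodeBuild reserved-capacity fleet,
    modeled on the project-level vpcConfig structure."""
    return {
        "vpcId": vpc_id,
        "subnets": subnet_ids,
        "securityGroupIds": security_group_ids,
    }

# import boto3
# codebuild = boto3.client("codebuild")
# codebuild.update_fleet(
#     arn=fleet_arn,
#     vpcConfig=fleet_vpc_config("vpc-0abc", ["subnet-1", "subnet-2"], ["sg-1"]))
```

The security groups listed here are what gate network access for the builds, so the same controls you apply to other VPC workloads carry over.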
Amazon EKS announces native support for autoscaling CoreDNS Pods
Today, AWS announces general availability of CoreDNS autoscaling capabilities for Amazon EKS clusters. This feature allows you to scale capacity of DNS server instances to meet the ever-changing capacity needs of your services without the overhead of managing custom solutions. Organizations are standardizing on Kubernetes as their compute infrastructure platform to build scalable, containerized applications. Scaling CoreDNS Pods is key to ensuring reliable DNS resolution by distributing the query load across multiple instances and providing high availability for applications and services. With this launch, you no longer need to pre-configure the scaling parameters and deploy a client on each cluster to monitor the capacity and scale accordingly. EKS manages the autoscaling of DNS resources when you use the CoreDNS EKS add-on. This feature works for CoreDNS v1.9 and EKS release version 1.25 and later. For more information about which versions are compatible with CoreDNS autoscaling, visit the Amazon EKS documentation. You can benefit from the simplified out-of-box managed option that requires minimal configuration and helps improve the resiliency of your applications. We recommend using this feature in conjunction with other EKS Cluster Autoscaling best practices to improve overall application availability and cluster scalability. Autoscaling capabilities for CoreDNS Pods are available in all regions where Amazon EKS is available. To get started, visit the Amazon EKS documentation.
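Enabling it amounts to passing configuration values to the CoreDNS EKS add-on. A sketch, assuming the add-on accepts an `autoScaling` block with `enabled`/`minReplicas`/`maxReplicas` keys (verify the schema with `describe-addon-configuration` in the EKS documentation):

```python
import json

def coredns_autoscaling_config(min_replicas=2, max_replicas=10):
    """Build the configurationValues JSON for the CoreDNS EKS add-on.

    The autoScaling key names are assumptions; confirm the add-on's
    configuration schema before applying.
    """
    return json.dumps({
        "autoScaling": {
            "enabled": True,
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
        }
    })

# import boto3
# eks = boto3.client("eks")
# eks.update_addon(clusterName="my-cluster", addonName="coredns",
#                  configurationValues=coredns_autoscaling_config())
```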
Amazon Connect Contact Lens now provides analytics for Flows and Flow Modules
Amazon Connect Contact Lens now offers analytics for Flows and Flow Modules, enabling you to identify emergent issues (e.g., a spike in contacts unexpectedly dropping from a flow), monitor usage patterns (e.g., most used flows or modules, an increasing trend in duration), and measure the impact of configuration changes across your customer or agent experiences including guides and task automation. From the Flows performance dashboard, you can view and compare real-time and historical aggregated performance, trends, and insights over custom-defined time periods (e.g., week over week), helping you answer questions such as “how many contacts dropped out of my contact center before reaching a queue?” or “how long does it take for contacts to navigate through my end-customer self-service voice flow?” These metrics are also available programmatically via the existing GetMetricDataV2 API. These features are available in all AWS regions where Amazon Connect is available. To learn more about flow analytics and the flows performance dashboard, see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
Amazon EventBridge now supports Customer Managed Keys (CMK) for Event Buses
Amazon EventBridge announces support for AWS Key Management Service (KMS) Customer Managed Keys (CMKs) on Event Buses. This capability allows you to encrypt your events using your own keys instead of an AWS owned key (which is used by default). With support for CMKs, you now have more fine-grained security control over your events, satisfying your company’s security requirements and governance policies. Amazon EventBridge Event Bus is a serverless event router that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and AWS services. You can set up rules to determine where to send your events, allowing applications to react to changes in your events as they occur. Customer Managed Keys (CMKs) are KMS keys that you create and manage yourself. You can also audit and track usage of your keys via CloudTrail when keys are used for encryption in EventBridge. You can encrypt your custom and partner events by enabling a CMK on custom, partner, or default buses, and you will only be charged by KMS for the customer managed key. Optionally, you can also add Dead Letter Queues (DLQs) for your event buses to persist events that could not be decrypted for rule matching because of misconfigured permissions. CMK support is now available in all AWS Commercial Regions where EventBridge is available. To learn more, read the EventBridge documentation and KMS documentation.
Amazon Virtual Private Cloud (VPC) flow logs extends support for Amazon Elastic Container Service (ECS)
You can now turn on Amazon Virtual Private Cloud (VPC) Flow Logs for your Amazon Elastic Container Service (ECS) workloads running on both Amazon EC2 and AWS Fargate to export detailed telemetry information for all network flows. Amazon ECS helps you deploy and manage your containerized applications easily and efficiently. VPC Flow Logs enable you to capture and log information about your VPC network traffic. With this launch, you can include the service name, ECS cluster name, and other ECS metadata in your flow log subscriptions. These additional flow log fields make it easier for you to monitor your ECS workloads and troubleshoot any issues. VPC Flow Logs for ECS is available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Northern California, Oregon), Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Israel (Tel Aviv), Middle East (Bahrain, UAE), South America (Sao Paulo), China (Beijing) operated by Sinnet, China (Ningxia) operated by NWCD, and AWS GovCloud (US-East, US-West) Regions. To get started, see the VPC Flow Logs public documentation and this blog post.
Introducing Amazon EC2 C7i-flex instances
AWS announces the general availability of Amazon EC2 C7i-flex instances that deliver up to 19% better price performance compared to C6i instances. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price performance benefits for a majority of compute intensive workloads. The new instances are powered by the 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i. C7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances. C7i-flex instances are available in the following AWS Regions: US West (N. California), Europe (Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Mumbai, Singapore), and South America (São Paulo). To learn more, visit Amazon EC2 C7i-flex instances.
Amazon EBS direct APIs now support VPC endpoint policies
Amazon Elastic Block Store (EBS) direct APIs now support Virtual Private Cloud (VPC) endpoint policies in all AWS Regions. This newly supported capability provides granular access control over your EBS resources for an improved data protection and security posture. Previously, customers had full access to EBS direct APIs through an interface VPC endpoint, powered by AWS PrivateLink. Now, customers can attach a VPC endpoint policy to an interface VPC endpoint and manage which EBS direct API actions (GetSnapshotBlock, ListSnapshotBlocks, ListChangedBlocks, PutSnapshotBlock) may be performed, the principals that may perform the actions, and the resources on which the actions may be performed. VPC endpoint policy support is available in all AWS Regions where EBS direct APIs are available. To learn more, visit our documentation.
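As a hedged sketch of what such a policy can look like (the account ID, role name, and Region below are placeholders; the action names are the EBS direct API actions listed above), a VPC endpoint policy granting read-only snapshot access might be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/snapshot-reader" },
      "Action": [
        "ebs:GetSnapshotBlock",
        "ebs:ListSnapshotBlocks",
        "ebs:ListChangedBlocks"
      ],
      "Resource": "arn:aws:ec2:us-east-1::snapshot/*"
    }
  ]
}
```

Because PutSnapshotBlock is omitted, write requests through this endpoint would be denied even for principals that otherwise hold that IAM permission.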
AWS Fault Injection Service is now available in Europe (Spain) Region
Starting today, customers can use AWS Fault Injection Service (FIS) in Europe (Spain) Region. FIS is a fully managed service for running fault injection experiments to improve an application’s performance, observability, and resilience. FIS simplifies the process of setting up and running controlled fault injection experiments across a range of AWS services, so teams can build confidence in their application behavior. With the addition of this region, AWS FIS is now available in 21 AWS commercial regions and the AWS GovCloud (US) regions. For a list of regions where AWS FIS is available, see the FIS Service endpoints. To get started, you can log into the AWS Management Console, or you can use the AWS API, SDK, or CLI. To learn more about FIS, visit the AWS FIS page and see the FIS User Guide for more details. 
Amazon EC2 M7gd instances are now available in South America (Sao Paulo) region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in the South America (Sao Paulo) region. These Graviton3-based instances with DDR5 memory are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. They offer up to 45% better real-time NVMe storage performance than comparable Graviton2-based instances. Graviton3-based instances also use up to 60% less energy than comparable EC2 instances for the same performance, enabling you to reduce your carbon footprint in the cloud. M7gd instances are now available in the following AWS regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Spain, Stockholm, Ireland, Frankfurt, Paris), Asia Pacific (Tokyo, Mumbai, Singapore, Sydney), and South America (Sao Paulo). To learn more, see Amazon EC2 M7gd instances. To get started, see the AWS Management Console.
Amazon EC2 M6id instances are now available in Europe (London) region
Starting today, Amazon EC2 M6id instances are available in the Europe (London) Region. These instances are powered by 3rd generation Intel Xeon Scalable (Ice Lake) processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. M6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage to scale the performance of applications such as video encoding, image manipulation, other forms of media processing, data logging, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics. These instances are generally available today in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Tokyo, Sydney, Mumbai, Seoul, Singapore), Europe (Ireland, Frankfurt, Zurich, London), Israel (Tel Aviv), Canada (Central), Canada West (Calgary), South America (Sao Paulo), and AWS GovCloud (US-West) Regions. Customers can purchase the new instances via Savings Plans, Reserved Instances, On-Demand, and Spot Instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit our product page for M6id.
Amazon EMR 7.1 now supports Trino 435, Python 3.11
Amazon EMR is the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto. Today, we are excited to announce that Amazon EMR 7.1 release is now generally available and includes the latest versions of popular open-source software. Amazon EMR 7.1 includes Trino 435, PrestoDB 0.284, Apache Zookeeper 3.9.1, Apache Livy 0.8, Apache Flink 1.18.1, Apache Hudi 0.14.1, and Apache Iceberg 1.4.3. In addition, Amazon EMR 7.1 introduces support for Python 3.11 for Apache Spark 3.5 applications. Amazon EMR release 7.1 is now available in all regions where Amazon EMR is available. See Regional Availability of Amazon EMR, and our release notes for more detailed information.
Amazon S3 will no longer charge for several HTTP error codes
Amazon S3 will make a change so unauthorized requests that customers did not initiate are free of charge. With this change, bucket owners will never incur request or bandwidth charges for requests that return an HTTP 403 (Access Denied) error response if initiated from outside their individual AWS account or AWS Organization. To see the full list of error codes that are free of charge, visit Billing for Amazon S3 error responses. This billing change requires no changes to customer applications and applies to all S3 buckets. These billing changes will apply in all AWS Regions, including the AWS GovCloud (US) Regions and the AWS China Regions. This deployment is starting today and we will post another update in a few weeks when it is completed. To learn more, visit Billing for Amazon S3 error responses and Error Responses in the S3 User Guide.
Amazon EMR 7.1 now supports additional metrics for enhanced monitoring
Amazon EMR 7.1 introduces the capability to configure the Amazon CloudWatch agent to publish additional metrics for Apache Hadoop, YARN, and Apache HBase applications running on your Amazon EMR on EC2 clusters. This feature provides comprehensive monitoring capabilities, allowing you to track the performance and health of your cluster more effectively. Amazon EMR automatically publishes a set of free metrics every five minutes for monitoring cluster activity and health. Starting with Amazon EMR release 7.0, you can install the Amazon CloudWatch agent to publish 34 paid metrics to Amazon CloudWatch every minute. With Amazon EMR 7.1, you can now configure the agent to send additional paid metrics, providing even deeper visibility into your cluster's performance. Furthermore, you can opt to send these metrics to an Amazon Managed Service for Prometheus endpoint if you are already using Prometheus to monitor your enterprise metrics. Additional metrics are available with Amazon EMR release 7.1 in all regions where Amazon EMR is available. See Regional Availability of Amazon EMR, and our release notes for more details. You will be charged separately for any metrics published by the Amazon CloudWatch agent to Amazon CloudWatch or Amazon Managed Service for Prometheus. To learn how to configure additional metrics for the Amazon CloudWatch agent, read the documentation in the Amazon EMR Release Guide.
AWS IAM Identity Center adds PKCE-based authorization for AWS applications
AWS IAM Identity Center now supports OAuth 2.0 authorization code flows using the Proof Key for Code Exchange (PKCE) standard. This provides AWS applications, such as Amazon Q Developer Pro, a simple and safe way to authenticate users and obtain their consent to access AWS resources from desktops and mobile devices with web browsers.  IAM Identity Center is the recommended service for managing workforce access to AWS applications and multiple AWS accounts. It can be used with an existing identity source or by creating a new directory. It provides your workforce access to your selected AWS managed applications, and a scalable option for you to manage access to AWS accounts. AWS IAM Identity Center is available at no additional cost in AWS Regions. Learn more about session duration here.
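The PKCE mechanism itself is small and standardized. As an illustrative sketch (this is generic RFC 7636 logic, not an Identity Center-specific API), the client generates a random code verifier and derives the S256 code challenge it sends with the authorization request:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-character base64url verifier (allowed length: 43-128)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA-256(code_verifier)), without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The client sends the challenge when starting the flow and the verifier when redeeming the authorization code, so an intercepted code alone cannot be exchanged for tokens.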
AWS Security Hub announces support for version 3.0 of the CIS AWS Foundations Benchmark
Today, AWS Security Hub announces support for version 3.0 of the Center for Internet Security (CIS) AWS Foundations Benchmark. The CIS v3.0 standard contains 37 security controls, including 7 new controls which are unique to this standard. Security Hub has satisfied the requirements of the CIS Security Software Certification and has been awarded the certification for levels 1 and 2 of version 3.0 of the CIS AWS Foundations Benchmark. To quickly enable the new standard across your AWS environment, you should use central configuration. This allows you to enable the standard in some or all of your organization accounts, and across all AWS Regions that are linked to Security Hub, with a single action. By using central configuration, you can also carry over the enablement settings of individual controls from previous versions of the CIS standard to this newer version. Alternatively, if you are not using central configuration, you may enable the standard and configure its controls on an account-by-account and Region-by-Region basis. To learn more about using central configuration, visit the AWS security blog. To get started with Security Hub, consult the following resources:
- Learn more about Security Hub capabilities and features, and the Regions in which they are available, in the AWS Security Hub user guide
- Subscribe to the Security Hub SNS topic to receive notifications about new Security Hub features and controls
- Try Security Hub at no cost for 30 days
Amazon RDS for PostgreSQL supports pgvector 0.7.0
Amazon Relational Database Service (RDS) for PostgreSQL now supports pgvector 0.7.0, an open-source extension for PostgreSQL for storing vector embeddings in your database, letting you use retrieval-augmented generation (RAG) when building your generative AI applications. This release of pgvector includes features that increase the number of dimensions of vectors you can index, reduce index size, and add support for using CPU SIMD in distance computations. pgvector 0.7.0 adds two new vector data types: halfvec for storing dimensions as 2-byte floats, and sparsevec for storing up to 1,000 nonzero dimensions, and now supports indexing binary vectors using the PostgreSQL-native bit type. These additions let you use scalar and binary quantization for the vector data type using PostgreSQL expression indexes, which reduces the storage size of the index and lowers the index build time. Quantization lets you increase the maximum dimensions of vectors you can index: 4,000 for halfvec and 64,000 for binary vectors. pgvector 0.7.0 also adds functions to calculate both Hamming and Jaccard distance for binary vectors. pgvector 0.7.0 is available on database instances in Amazon RDS running PostgreSQL 16.3 and higher, 15.7 and higher, 14.12 and higher, 13.15 and higher, and 12.19 and higher in all applicable AWS Regions, including the AWS GovCloud (US) Regions. Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
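As a sketch of the new quantization options (the table and column names are hypothetical; the syntax follows the pgvector 0.7.0 documentation and should be verified there), scalar and binary quantization are expressed through expression indexes:

```sql
-- Hypothetical table holding full-precision embeddings
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(1024));

-- Scalar quantization: index the embedding at half precision (halfvec)
CREATE INDEX ON items USING hnsw ((embedding::halfvec(1024)) halfvec_l2_ops);

-- Binary quantization: index only the sign bits and compare with Hamming distance
CREATE INDEX ON items USING hnsw ((binary_quantize(embedding)::bit(1024)) bit_hamming_ops);
```

Because the quantization happens in the index expression, the stored column keeps full precision, so queries can re-rank candidate rows with an exact distance after the approximate index scan.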
Amazon Managed Service for Prometheus collector integrates with Amazon EKS access management controls
Amazon Managed Service for Prometheus collector, a fully managed agentless collector for Prometheus metrics, now integrates with Amazon EKS access management controls. Starting today, the collector utilizes EKS access management controls to create a managed access policy that allows the collector to discover and collect Prometheus metrics. Amazon Managed Service for Prometheus collector with support for EKS access management controls is available in all regions where Amazon Managed Service for Prometheus is available. To learn more about Amazon Managed Service for Prometheus collector, visit the user guide or product page.
Amazon SageMaker notebooks now support G6 instance types
We are pleased to announce general availability of Amazon EC2 G6 instances on SageMaker notebooks. Amazon EC2 G6 instances are powered by up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. G6 instances offer 2x better performance for deep learning inference compared to EC2 G4dn instances. Customers can use G6 instances to interactively test model deployment and for interactive model training for use cases such as generative AI fine-tuning and inference workloads, natural language processing, language translation, computer vision, and recommender engines. Amazon EC2 G6 instances are available for SageMaker notebooks in the AWS US East (N. Virginia and Ohio) and US West (Oregon) regions. Visit developer guides for instructions on setting up and using JupyterLab and CodeEditor applications on SageMaker Studio and SageMaker notebook instances.
Amazon MQ now supports RabbitMQ version 3.12
Amazon MQ now provides support for RabbitMQ version 3.12.13, which includes several fixes and performance improvements over the previous versions of RabbitMQ supported by Amazon MQ. Starting from RabbitMQ 3.12.13, all Classic Queues on Amazon MQ brokers are upgraded to Classic Queues version 2 (CQv2) automatically, and all queues on RabbitMQ 3.12 now behave similarly to lazy queues. These changes provide a significant improvement in throughput and lower memory usage for most use cases. If you are running earlier versions of RabbitMQ, such as 3.8, 3.9, 3.10, or 3.11, we strongly encourage you to upgrade to RabbitMQ 3.12.13. This can be accomplished with just a few clicks in the AWS Management Console. We also encourage you to enable automatic minor version upgrades on RabbitMQ 3.12.13 to help ensure your brokers take advantage of future fixes and improvements. Amazon MQ for RabbitMQ will end support for RabbitMQ versions 3.8, 3.9, and 3.10 as indicated in the version support calendar. To learn more about upgrading, see Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide. To learn more about the changes in RabbitMQ 3.12, see the Amazon MQ release notes. This version is available in all regions where Amazon MQ is available. For a full list of available regions see the AWS Region Table.
Amazon Cognito introduces tiered pricing for machine-to-machine (M2M) usage
Amazon Cognito introduces pricing for machine-to-machine (M2M) authentication to better support continued growth and expand capabilities. There is no change to Amazon Cognito's user-based pricing (monthly active users, or MAUs). Customer accounts currently using Amazon Cognito for M2M use cases will be exempt from this pricing for 12 months. M2M pricing is based on the number of application clients configured for M2M authentication and the number of tokens requested for them. You can find details on our pricing page. Amazon Cognito makes it easier to add authentication, authorization, and identity management to your web and mobile apps. In addition to supporting human identities, Cognito's M2M authentication enables developers to leverage machine identities to secure interactions between their services or across organizations. Developers can define machine identities and generate OAuth 2.0 tokens to authenticate them using Cognito user pools that are configured with the OAuth 2.0 client credentials grant. This pricing change applies only to user pools configured in this way and is not applicable to any other OAuth 2.0 flows. Amazon Cognito is available in 29 AWS Regions globally. To learn more about Amazon Cognito’s support for OAuth 2.0 standards, visit the product documentation page. To get started, visit the Amazon Cognito home page.
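As an illustrative sketch of what an M2M token request looks like under the client credentials grant (this is generic OAuth 2.0 per RFC 6749; the Cognito domain, client ID, secret, and scope below are placeholders), the client authenticates with HTTP Basic auth and posts a form-encoded body to the user pool domain's `/oauth2/token` endpoint:

```python
import base64
from urllib.parse import urlencode

def build_client_credentials_request(token_url, client_id, client_secret, scopes):
    """Build the pieces of an OAuth 2.0 client credentials token request."""
    # Client authenticates with HTTP Basic auth: base64(client_id:client_secret)
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {basic}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    # Form-encoded body requesting the client credentials grant
    body = urlencode({"grant_type": "client_credentials", "scope": " ".join(scopes)})
    return token_url, headers, body
```

The response to such a POST carries the access token the machine identity then presents to downstream services; under the new model, each of these token requests is what gets metered.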
Amazon QuickSight launches SPICE capacity auto-purchase API
Amazon QuickSight is excited to announce the launch of SPICE capacity auto-purchase API. Previously, customers were required to manually turn on SPICE auto-purchase via the console UI. Now with this API enhancement, QuickSight users can programmatically turn on the SPICE capacity auto-purchase, seamlessly integrating it into their adoption and migration pipeline. Once turned on, users don’t need to estimate SPICE usage and manually purchase capacity each time. Instead, they can seamlessly ingest data and use SPICE worry free, as QuickSight will automatically acquire the necessary capacity to meet their usage requirements. For further details, visit here. The new SPICE capacity auto-purchase API is now available in Amazon QuickSight Enterprise Editions in all QuickSight regions - US East (N. Virginia and Ohio), US West (Oregon), Canada, Sao Paulo, Europe (Frankfurt, Stockholm, Paris, Ireland and London), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), and the AWS GovCloud (US-West) Region.
Amazon ECR adds pull through cache support for GitLab.com
Amazon Elastic Container Registry (ECR) now includes GitLab Container Registry as a supported upstream registry for ECR’s pull through cache feature. With today’s release, customers using GitLab’s software-as-a-service offering, GitLab.com, can automatically sync images from the newly supported upstream registry to their private ECR repositories. ECR customers can create a pull through cache rule that maps an upstream registry to a namespace in their private ECR registry. Using Amazon ECR pull through cache support with GitLab Container Registry requires authentication: customers can provide credentials that are stored in AWS Secrets Manager and are used to authenticate to the upstream registry. Once a rule is configured, images can be pulled through ECR from GitLab Container Registry. ECR automatically creates new repositories for cached images and keeps them in sync with the upstream registry. Additionally, customers can use repository creation templates (in preview) to specify initial configurations for the new repositories created via pull through cache. Using pull through cache with other registries, customers can be assured of having the latest images from upstream sources in ECR, while also benefiting from the availability, performance, and security of ECR. Pull through cache rules are supported in all AWS regions, excluding the AWS GovCloud (US) Regions and AWS China Regions. To learn more about creating a pull through cache rule in ECR, please visit our user guide.
Amazon RDS for PostgreSQL supports minor versions 16.3, 15.7, 14.12, 13.15, and 12.19
Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions PostgreSQL 16.3, 15.7, 14.12, 13.15, and 12.19. This release of RDS for PostgreSQL also includes support for pgvector 0.7.0, which lets you index vectors larger than 2,000 dimensions and adds support for scalar and binary quantization through expression indexes. The PostgreSQL community released the 16.3 minor version today. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community. You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during your scheduled maintenance window. Learn more about upgrading your database instances in the Amazon RDS User Guide. Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
Amazon ElastiCache updates minimum TLS version to 1.2
Today we are updating the minimum supported TLS version to 1.2 on Amazon ElastiCache compatible with open-source Redis version 6 and above, across all regions. This update is designed to help you meet security, compliance, and regulatory requirements. Amazon ElastiCache supports the Transport Layer Security (TLS) encryption protocol, which is used to secure data in-transit over the network. TLS versions 1.0 and 1.1 are no longer recommended as a security best practice, and we have historically supported them to maintain backward compatibility for customers that have older or difficult to update clients. ElastiCache will continue to support TLS 1.0 and 1.1 until May 8, 2025, and customers must update their client software before that date. For more information about ElastiCache and in-transit encryption (TLS), see our documentation.
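On the client side, you can stop relying on deprecated protocol versions ahead of the deadline by pinning a TLS 1.2 floor. A minimal sketch using Python's standard `ssl` module (generic TLS configuration, not an ElastiCache-specific API; most Redis client libraries accept such a context or an equivalent minimum-version option):

```python
import ssl

def make_tls12_context():
    """Create a client-side TLS context that refuses anything older than TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Connections made through this context to an endpoint that only offers TLS 1.0 or 1.1 fail during the handshake, which surfaces incompatible clients before the May 8, 2025 cutoff rather than after it.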
Amazon OpenSearch Serverless now available in Europe (Paris) region
We are excited to announce that Amazon OpenSearch Serverless is expanding availability to the Europe (Paris) Region (eu-west-3). OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). OpenSearch Serverless is now available in 9 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Paris). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
Amazon Connect launches AWS CloudTrail support for flow management pages
Amazon Connect now provides AWS CloudTrail support for flow management pages on the Connect admin website. When you add, update, or delete a flow from a flow management page, a record of that activity is available in AWS CloudTrail for visibility, reporting, and compliance, helping you answer questions such as, “who last updated this flow?” or “when was this flow last saved?” These features are supported in all AWS regions where Amazon Connect is offered. To learn more about Connect Flows and AWS see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about AWS CloudTrail support see the AWS CloudTrail Documentation.
Amazon SageMaker now integrates with Amazon DataZone to help unify governance across data and ML assets
Amazon SageMaker now integrates with Amazon DataZone, making it easier for customers to access machine learning (ML) infrastructure, data, and ML assets. This integration unifies governance across data and ML workflows. ML administrators can set up the infrastructure controls and permissions for ML projects in Amazon DataZone. Project members can collaborate on business use cases and share assets with one another. Data scientists and ML engineers can then create a SageMaker environment and kick-start their development process inside SageMaker Studio. Data scientists and ML engineers can also search, discover, and subscribe to data and ML assets in their business catalog within SageMaker Studio. They can consume these assets for ML tasks such as data preparation, model training, and feature engineering in SageMaker Studio and SageMaker Canvas. Upon completing the ML tasks, data scientists and ML engineers can publish data, models, and feature groups to the business catalog for governance and discoverability. This integration is supported in the following AWS Regions where SageMaker and Amazon DataZone are available: Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), South America (São Paulo), US East (Ohio), US East (N. Virginia), and US West (Oregon). To learn more, see the Amazon SageMaker ML governance web page and the Amazon SageMaker developer guide.
Amazon Connect launches UI and API support for enhanced search capabilities for Flows and Flow Modules
Amazon Connect now provides enhanced search capabilities for flows and flow modules on the Connect admin website and programmatically using APIs. You can now search for flows and flow modules by name, description, type, status, and tags, making it easy to filter and identify a specific flow when managing your Connect instances. For example, you can now search for all flows tagged with the Department:Help_Desk key value pair to filter your set of flows down to the specific ones you are looking for. This feature is supported in all AWS regions where Amazon Connect is offered. To learn more about Connect Flows and AWS see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
AWS Resilience Hub expands application resilience drift detection capabilities
AWS Resilience Hub has expanded its drift detection capabilities by introducing a new type of drift detection — application resource drift. Following last year’s release of application resilience drift detection, this new enhancement detects changes, such as the addition or deletion of resources within the application's input sources. For both drift detection types, you can enable AWS Resilience Hub scheduled assessment and notification services to receive a notification when a drift occurs. The latest resiliency assessment identifies the drifts and presents remediation actions to bring the application back into compliance with your resilience policy. These detection capabilities, combined with AWS Resilience Hub's scheduled assessments and notification services, empower customers to continuously oversee and manage the resilience of their applications. These capabilities are available in all of the AWS Regions where AWS Resilience Hub is supported. For the most up-to-date availability information, see the AWS Regional Services List. To learn more about drift detection, visit our product page. To get started with AWS Resilience Hub, sign into the AWS console.
Amazon EC2 Inf2 instances, optimized for generative AI, now in new regions
Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instances are generally available in the Asia Pacific (Sydney), Europe (London), Europe (Paris), Europe (Stockholm), and South America (Sao Paulo) Regions. These instances deliver high performance at the lowest cost in Amazon EC2 for generative AI models. You can use Inf2 instances to run popular applications such as text summarization, code generation, video and image generation, speech recognition, personalization, and more. Inf2 instances are the first inference-optimized instances in Amazon EC2 to introduce scale-out distributed inference supported by NeuronLink, a high-speed, nonblocking interconnect. Inf2 instances offer up to 2.3 petaflops and up to 384 GB of total accelerator memory with 9.8 TB/s bandwidth. The AWS Neuron SDK integrates natively with popular machine learning frameworks, so you can continue using your existing frameworks to deploy on Inf2. Developers can get started with Inf2 instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Amazon SageMaker. Inf2 instances are now available in four sizes (inf2.xlarge, inf2.8xlarge, inf2.24xlarge, and inf2.48xlarge) in 13 AWS Regions as On-Demand Instances, Reserved Instances, and Spot Instances, or as part of a Savings Plan. To learn more about Inf2 instances, see the Amazon EC2 Inf2 Instances webpage and the AWS Neuron Documentation.
New Generative Engine with three synthetic English Polly voices
Today, we are excited to announce the general availability of the highly expressive generative engine with three English Amazon Polly voices: two American English voices, Ruth and Matthew, and one British English voice, Amy. Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and to build speech-enabled products tailored to your business needs. The generative engine is Amazon Polly's most advanced text-to-speech (TTS) model. It has been trained on a variety of voices, languages, and styles, and renders context-dependent prosody, pausing, spelling, dialectal properties, foreign word pronunciation, and more with high precision. Generative synthetic voices are emotionally engaging, assertive, and highly colloquial in a way that makes them remarkably similar to a human voice. Despite the powerful abilities of the new voices, they are also suitable for low-latency, online conversational use cases. Customers can use a generative voice persona as a knowledgeable customer assistant, a virtual trainer, or an advertiser with near-human synthetic speech. The Ruth, Matthew, and Amy generative voices are available in the US East (N. Virginia) Region and complement the other English voices already available for developing speech products for a variety of use cases. For more details, please read the Amazon Polly documentation and visit our pricing page.
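A minimal sketch of how the new engine is selected: the SynthesizeSpeech API accepts an Engine parameter, and the voice IDs come from the announcement above. The request is built as a plain dict here; the helper name is illustrative.

```python
# Sketch: request parameters for synthesizing speech with the new
# generative engine via Polly's SynthesizeSpeech API.
def build_synthesize_request(text, voice_id="Ruth"):
    return {
        "Engine": "generative",   # Polly's most advanced TTS model
        "VoiceId": voice_id,      # Ruth, Matthew, or Amy
        "OutputFormat": "mp3",
        "Text": text,
    }

params = build_synthesize_request("Hello! How can I help you today?")
# With credentials, in the launch Region:
# boto3.client("polly", region_name="us-east-1").synthesize_speech(**params)
```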
Amazon Connect launches granular access controls (using resource tags) for flows and flow modules
Amazon Connect now provides granular access controls using resource tags to define who can access specific flows and flow modules from the Connect admin website. For example, you can now tag flows with Department:Support from the flow designer UI, restricting access to only administrators from your support line of business. These features are supported in all AWS regions where Amazon Connect is offered. To learn more about Connect Flows and tag-based access controls, see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
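The Department:Support example above maps to a standard tag-based IAM policy. This is a sketch using the generic aws:ResourceTag condition key; the specific connect: actions you scope will depend on your use case, so check the Amazon Connect Administrator Guide for the actions that support resource tags.

```python
import json

# Sketch: an IAM policy that allows describing flows only when the flow
# resource carries the Department=Support tag. The action shown is one
# example; scope the action list to your own requirements.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["connect:DescribeContactFlow"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/Department": "Support"}
            },
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
```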
AWS Cost Anomaly Detection reduces anomaly detection latency by up to 30%
Starting today, AWS Cost Anomaly Detection will detect cost anomalies up to 30% faster. Customers can now identify and respond to spending changes more quickly. Cost Anomaly Detection leverages advanced machine learning to identify unusual changes in spend, enabling customers to quickly take action to avoid unexpected costs. With this new capability, AWS Cost Anomaly Detection analyzes cost and usage data up to three times a day, instead of daily, to detect anomalies. This means customers can receive notifications, understand root causes, and take action on unexpected spend changes much faster to avoid unplanned spend and optimize cost. The reduced anomaly detection latency applies automatically for all AWS Cost Anomaly Detection customers across all commercial AWS regions globally at no additional cost. To learn more about AWS Cost Anomaly Detection and how to reduce your risk of spend surprises, visit the AWS Cost Anomaly Detection product page.
AWS Budgets now supports resource and tag-based access controls
AWS Budgets now supports resource-level and tag-based access controls. You can add tags to your AWS Budgets resources and define AWS Identity and Access Management (IAM) policies that specify fine-grained permissions for AWS Budgets resources based on their resource names and tags, improving governance and information security. With resource-level access controls, you can configure IAM policies that reference budgets using Amazon Resource Names (ARNs) or wildcards, and specify the users, roles, and actions that are permitted on those resources. Using tag-based permissions, you can define IAM policies that specify permissions for tagged budgets. For example, you can tag a budget based on a business unit and limit control over that budget to members of that business unit. Resource and tag-based access controls for AWS Budgets are available in all AWS commercial Regions, excluding China, at no additional cost. You can get started using the AWS Budgets console or programmatically via the public APIs. To get started, visit AWS Budgets; to learn more, visit Using Resource and Tag based access control for budgets. Budget tagging is available in all AWS Regions where AWS Budgets is available and is integrated with AWS CloudTrail to monitor and troubleshoot API activity.
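As a sketch, both controls described above can be combined in one IAM policy: one statement scoped to a single budget by ARN, and one granted by tag. The account ID, budget name, and tag values are placeholders; the budgets: actions shown are illustrative, so verify them against the AWS Budgets documentation.

```python
# Sketch: an IAM policy combining resource-level and tag-based access
# controls for AWS Budgets. Account ID and budget name are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ByResourceName",  # scope to one budget via its ARN
            "Effect": "Allow",
            "Action": ["budgets:ViewBudget"],
            "Resource": "arn:aws:budgets::123456789012:budget/retail-monthly",
        },
        {
            "Sid": "ByTag",  # grant access to budgets tagged for the business unit
            "Effect": "Allow",
            "Action": ["budgets:ModifyBudget"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/BusinessUnit": "Retail"}
            },
        },
    ],
}
```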
Amazon Titan Text Premier is now available in Amazon Bedrock
Amazon Titan Text Premier, the latest addition to the Amazon Titan family of large language models (LLMs), is now generally available in Amazon Bedrock. Amazon Titan Text Premier is an advanced, high-performance, and cost-effective LLM engineered to deliver superior performance for enterprise-grade text generation applications, including optimized performance for retrieval-augmented generation (RAG) and Agents. The model incorporates safe, secure, and trustworthy responsible AI practices and excels in delivering exceptional generative AI text capabilities at scale. Exclusive to Amazon Bedrock, Amazon Titan Text models support a wide range of text-related tasks, including summarization, text generation, classification, question-answering, and information extraction. With Titan Text Premier, you can unlock new levels of efficiency and productivity for your text generation needs. This new model offers optimized performance for key features like RAG on Knowledge Bases for Amazon Bedrock, and function calling on Agents for Amazon Bedrock. Such integrations enable advanced applications like building interactive AI assistants that leverage your APIs and interact with your documents. With Titan Text Premier being available via Amazon Bedrock’s serverless experience, you can easily access the model using a single API and without managing any infrastructure. Amazon Titan Text Premier is now available in Amazon Bedrock in the US East (N. Virginia) AWS Region. To learn more, read the AWS News launch blog, Amazon Titan product page, and documentation. To get started with Titan Text Premier in Amazon Bedrock, visit the Amazon Bedrock console.
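A minimal sketch of the single-API access mentioned above: an InvokeModel request body using the Titan text request schema. The model ID follows the announcement; the prompt and generation-config values are illustrative, so check the Amazon Bedrock documentation for the current schema.

```python
import json

# Sketch: an InvokeModel request for Titan Text Premier via the
# Bedrock runtime. Prompt and config values are illustrative.
body = {
    "inputText": "Summarize the key benefits of retrieval-augmented generation.",
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.7,
        "topP": 0.9,
    },
}
request = {
    "modelId": "amazon.titan-text-premier-v1:0",
    "contentType": "application/json",
    "accept": "application/json",
    "body": json.dumps(body),
}
# With credentials, in US East (N. Virginia):
# boto3.client("bedrock-runtime", region_name="us-east-1").invoke_model(**request)
```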
AWS Global Accelerator launches new edge location in Türkiye
AWS Global Accelerator now supports traffic through a new AWS edge location in Istanbul in Türkiye. With the addition of the edge location, Global Accelerator is now available through 117 Points of Presence globally and supports application endpoints in 29 AWS Regions. AWS Global Accelerator is a service that is designed to improve the availability, security, and performance of your internet-facing applications. By using the congestion-free AWS network, end-user traffic to your applications benefits from increased availability, DDoS protection at the edge, and higher performance relative to the public internet. Global Accelerator provides static IP addresses that act as fixed entry endpoints for your application resources in one or more AWS Regions, such as your Application Load Balancers, Network Load Balancers, Amazon EC2 instances, or Elastic IPs. Global Accelerator continually monitors the health of your application endpoints and offers deterministic fail-over for multi-region workloads without any DNS dependencies. To get started, visit the AWS Global Accelerator website and review its documentation.
Amazon RDS for SQL Server Supports Minor Versions 2019 CU26 and 2022 CU12 GDR
Two new minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2019 and 2022 across the Express, Web, Standard, and Enterprise editions. We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include SQL Server 2019 CU26 -15.0.4365.2 and 2022 CU12 GDR - 16.0.4120.1. These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, including the AWS GovCloud (US) Regions. Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
Announcing a larger instance bundle for Amazon Lightsail
Amazon Lightsail now offers a larger instance bundle with 16 vCPUs and 64 GB of memory. The new instance bundle is available with Linux operating system (OS) and application blueprints, for both IPv6-only and dual-stack networking types. You can create instances using the new bundle with pre-configured Linux OS and application blueprints including WordPress, Drupal, Magento, MEAN, LAMP, Node.js, Amazon Linux, Ubuntu, CentOS Stream, and AlmaLinux. The new larger instance bundle enables you to scale your web applications and run more compute- and memory-intensive workloads in Lightsail. This higher-performance instance bundle is ideal for general purpose workloads that require the ability to handle large spikes in load. Using this new bundle, you can run web and application servers, large databases, virtual desktops, batch processing, enterprise applications, and more. This new bundle is now available in all AWS Regions where Amazon Lightsail is available. For more information on pricing, or to get started with your free account, click here.
Amazon MemoryDB now supports condition keys for user authentication and encryption in transit
Today, Amazon MemoryDB launched two new condition keys for IAM policies that enable you to control user authentication and encryption in transit settings during cluster creation. The new condition keys let you create IAM policies or Service Control Policies (SCPs) to enhance security and meet compliance requirements. The first condition key, memorydb:TLSEnabled, enables you to require a specific encryption in transit setting in your AWS accounts. For example, you can use it to enforce that MemoryDB clusters can only be created with encryption in transit enabled. The second condition key, memorydb:UserAuthenticationMode, enables you to enforce that MemoryDB users have a specific user authentication setting. For example, you can use it to require that MemoryDB users have IAM authentication enabled. Amazon MemoryDB condition keys are now available in all Regions where MemoryDB is generally available. To learn more about using condition keys with MemoryDB, please refer to our documentation.
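The TLS example above can be sketched as a deny-style SCP: creating a cluster is denied whenever encryption in transit is disabled. This is a sketch assuming the Bool condition operator applies to memorydb:TLSEnabled; confirm the operator and action name in the MemoryDB documentation.

```python
# Sketch: an SCP that blocks creating MemoryDB clusters with
# encryption in transit disabled, using the new condition key.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireTLSOnMemoryDB",
            "Effect": "Deny",
            "Action": "memorydb:CreateCluster",
            "Resource": "*",
            # Deny when the cluster would be created with TLS turned off
            "Condition": {"Bool": {"memorydb:TLSEnabled": "false"}},
        }
    ],
}
```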
Agents for Amazon Bedrock now supports Provisioned Throughput pricing model
Agents for Amazon Bedrock enable developers to create generative AI-based applications that can complete complex tasks for a wide range of use cases and deliver answers based on company knowledge sources. As agentic applications scale, they require higher input and output model throughput than on-demand limits allow. Today, we are launching support for Provisioned Throughput with Agents for Amazon Bedrock. With Provisioned Throughput, you can purchase model units for a specific base model. A model unit provides a guaranteed throughput, measured by the maximum number of input or output tokens processed per minute. You are charged by the hour for each model unit, with the flexibility to choose between no commitment and 1-month or 6-month commitment terms. To learn more about the new capabilities of Bedrock Agents, visit the documentation page.
Announcing Amazon Bedrock Studio preview
Today, we are announcing the preview launch of Amazon Bedrock Studio, an SSO-enabled web interface that provides the easiest way for developers across an organization to collaborate and build generative AI applications. Developers can log in to Bedrock Studio using their company credentials to build, evaluate, and share generative AI apps. Bedrock Studio offers a rapid prototyping environment and streamlines access to multiple foundation models (FMs) and tools like Knowledge Bases, Agents, and Guardrails. To enable Bedrock Studio, AWS administrators can configure one or more workspaces for their organization in the AWS Management Console for Bedrock, and grant permissions to individuals or groups to use a workspace. Once the workspace is set up, developers can log in to Bedrock Studio using their SSO credentials and immediately start interacting with FMs and other Bedrock tooling in a playground setting. There is no additional cost to use Bedrock Studio; customers only pay for Bedrock usage (for example, API calls to FMs and hosting of Knowledge Bases) in the AWS account. To learn more, visit the Amazon Bedrock Studio page. Amazon Bedrock Studio is now available in preview in the AWS Regions US East (N. Virginia) and US West (Oregon). For more information, see the AWS Region table.
Amazon RDS Performance Insights now supports RDS for Oracle Multitenant
Amazon RDS (Relational Database Service) Performance Insights now supports the Oracle Multitenant configuration on Amazon RDS for Oracle. An Amazon RDS for Oracle Multitenant instance operates as a container database (CDB) hosting one or more pluggable databases (PDBs). With this release, Performance Insights has introduced a new PDB dimension to help you visualize and analyze the distribution of the load on individual PDBs within the CDB on an RDS for Oracle instance. Now, you can slice the database load metric by the “PDB” and “SQL” dimensions to identify the top queries running on each of the PDBs. Before this launch, you could visualize the database load only at the CDB level. This PDB-level granular information helps you diagnose database performance issues quickly for instances with an Oracle Multitenant configuration. Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully managed performance monitoring solution to your Amazon RDS database. To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
AWS IoT TwinMaker announces Knowledge Graph optimization for efficient entity-metadata query capabilities
AWS IoT TwinMaker makes it easier to create digital twins of real-world systems such as buildings, factories, industrial equipment, and production lines. Today, AWS announced enhancements to AWS IoT TwinMaker Knowledge Graph that enable faster and more flexible entity-metadata query capabilities for industrial customers. Industrial customers often deal with large numbers of entities and require quick, flexible ways to search and retrieve information about them. The new Knowledge Graph optimizations address this need: you can now perform full-text searches across your entity data to quickly find relevant information, and use wildcard characters in your searches to match and retrieve data more flexibly. These enhancements build upon the existing features of AWS IoT TwinMaker Knowledge Graph, which structures and organizes information about digital twins for easier access and understanding. This feature is available in all Regions where AWS IoT TwinMaker is generally available. To learn more, visit the AWS IoT TwinMaker developer guide and API reference. Use the AWS Management Console to get started.
Amazon Connect Cases now provides APIs for managing attachments
Amazon Connect Cases now provides APIs that make it easy to upload files, check file details, and delete files from cases. Contact center administrators can use these APIs to automate the attachment of files to cases. In addition, these APIs also enable you to use case attachments in a custom agent desktop. Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London). To learn more and get started, visit the webpage and documentation.
Amazon EMR Serverless announces detailed performance monitoring of Apache Spark jobs with Amazon Managed Service for Prometheus
Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce detailed performance monitoring of Apache Spark jobs with Amazon Managed Service for Prometheus, allowing you to analyze, monitor, and optimize your jobs using job-specific engine metrics and information about Spark event timelines, stages, tasks, and executors. Apache Spark provides detailed performance metrics for the driver and executors, such as JVM heap memory, garbage collection, and shuffle information. These metrics can be used for performance troubleshooting and workload characterization. Amazon Managed Service for Prometheus is a secure, serverless, fully managed monitoring and alerting service. With the EMR Serverless integration with Amazon Managed Service for Prometheus, you can now monitor these performance metrics for multiple applications and jobs in a single view, making it easier for centralized teams to identify performance bottlenecks and historical trends. This feature is generally available on EMR release versions 7.1.0 and later and in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Stockholm, Paris, Frankfurt, Ireland, London), South America (São Paulo) and Asia Pacific (Tokyo, Seoul, Singapore, Mumbai, Sydney). To get started, visit the Monitor Spark metrics with Amazon Managed Service for Prometheus page in the Amazon EMR Serverless User Guide.
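As a sketch, the integration is enabled through the monitoring section of a StartJobRun request pointing at an Amazon Managed Service for Prometheus workspace. The field names below reflect my reading of the EMR Serverless User Guide and the workspace URL is a placeholder; verify both before use.

```python
# Sketch: configurationOverrides for an EMR Serverless StartJobRun call
# that remote-writes Spark engine metrics to a Prometheus workspace.
# The remote-write URL is a placeholder for your workspace endpoint.
configuration_overrides = {
    "monitoringConfiguration": {
        "prometheusMonitoringConfiguration": {
            "remoteWriteUrl": (
                "https://aps-workspaces.us-east-1.amazonaws.com/"
                "workspaces/ws-EXAMPLE/api/v1/remote_write"
            )
        }
    }
}
# With credentials: boto3.client("emr-serverless").start_job_run(
#     applicationId=..., executionRoleArn=..., jobDriver=...,
#     configurationOverrides=configuration_overrides)
```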
Amazon EC2 R7i instances are now available in AWS GovCloud (US-East) Region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the AWS GovCloud (US-East) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. R7i instances deliver up to 15% better price performance compared to the prior-generation R6i instances. They offer larger instance sizes, up to 48xlarge, support attaching up to 128 EBS volumes, and include two bare-metal sizes (metal-24xl, metal-48xl). The bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads. In addition, R7i instances support the new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. To learn more, visit the Amazon EC2 R7i instance page.
AWS Firewall Manager is now available in the AWS Canada West (Calgary) region
AWS Firewall Manager is now available in the AWS Canada West (Calgary) region, enabling customers to create policies for AWS WAF and manage web application security for applications running in this region. Support for other policy types will be available in the coming months. Firewall Manager is now available in a total of 31 AWS commercial regions, 2 GovCloud regions, and all Amazon CloudFront edge locations. AWS Firewall Manager is a security management service that enables customers to centrally configure and manage firewall rules across their accounts and resources. Using AWS Firewall Manager, customers can manage AWS WAF rules, AWS Shield Advanced protections, AWS Network Firewall, Amazon Route53 Resolver DNS Firewall, VPC security groups, and VPC network access control lists (NACLs) across their AWS Organizations. AWS Firewall Manager makes it easier for customers to ensure that all firewall rules are consistently enforced and compliant, even as new accounts and resources are created. To get started, see the AWS Firewall Manager documentation for more details and the AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Amazon RDS for Oracle now supports April 2024 Release Update
Amazon Relational Database Service (Amazon RDS) for Oracle now supports the April 2024 Release Update (RU) for Oracle Database versions 19c and 21c. To learn more about Oracle RUs supported on Amazon RDS for each engine version, see the Amazon RDS for Oracle Release notes. If the auto minor version upgrade (AmVU) option is enabled, your DB instance is upgraded to the latest quarterly RU six to eight weeks after it is made available by Amazon RDS for Oracle in your AWS Region. These upgrades will happen during the maintenance window. To learn more, see the Amazon RDS maintenance window documentation. For more information about the AWS Regions where Amazon RDS for Oracle is available, see the AWS Region table.
Amazon Connect Contact Lens now provides support for PII redaction in Spanish
Amazon Connect Contact Lens now provides support for personally identifiable information (PII) redaction in the Spanish (US) language, enabling contact centers to help identify and redact sensitive information on contact transcripts such as social security numbers, credit card details, bank account information, and personal contact information (i.e. name, email address, phone number, and mailing address).  PII redaction for Spanish (US) language is available in all regions where Contact Lens conversational analytics is supported. To learn more, please visit our documentation. This feature is included with Contact Lens conversational analytics at no additional charge. For information about Contact Lens pricing, please visit our pricing page. 
Amazon EMR Studio is now available in two additional AWS Regions
Starting today, you can use Amazon EMR Studio in the Asia Pacific (Melbourne) and Israel (Tel Aviv) regions to run interactive workloads on EMR. Amazon EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug big data and analytics applications written in PySpark, Python, Scala, and R. EMR Studio provides fully managed Jupyter Notebooks and tools such as Spark UI and YARN Timeline Service to simplify debugging. You can also enable single sign-on using AWS IAM Identity Center that allows you to log in directly with your corporate credentials without logging into the AWS console. You can learn more by reading the Amazon EMR Studio documentation, visiting the Amazon EMR Studio home page, or watching the Amazon EMR Studio demos.
AWS Amplify Gen 2 is now generally available
AWS Amplify Gen 2, the code-first developer experience for building full-stack apps using TypeScript, is now generally available. Amplify Gen 2 enables developers to express app requirements like data models, business logic, and authorization rules in TypeScript. The necessary cloud infrastructure is then automatically provisioned, without needing explicit infrastructure definitions. This streamlined approach accelerates full-stack development for teams of all sizes. Since the public preview, we have added a number of features:
- Storage support with revamped authorization capabilities and a file manager
- TypeScript Functions support with environment variables
- Custom queries and mutations support for more flexibility with data operations
- A new Amplify console with features such as custom domains, data management, and PR previews
- Integration guides for AI/ML services including Bedrock, Translate, Polly, and Rekognition
- Improved relationship modeling behavior across one-to-one, one-to-many, and many-to-many associations
- Auth enhancements like multiple OIDC providers, user groups support, and granting access to other AWS resources
- Support for connecting to existing MySQL and PostgreSQL databases
This is in addition to all the features launched during the preview. For more information about the AWS Regions where AWS Amplify’s code-first DX (Gen 2) is available, see the AWS Region table. Get started with Gen 2 by visiting the launch blog.
Amazon FSx for Lustre is now available in the AWS Canada West (Calgary) Region
Customers can now create Amazon FSx for Lustre file systems in the AWS Canada West (Calgary) Region. Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for Lustre provides fully managed shared storage built on the world’s most popular high-performance file system, designed for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA).  To learn more about Amazon FSx for Lustre, visit our product page, and see the AWS Region Table for complete regional availability information.
Amazon FSx for NetApp ONTAP is now available in the AWS Canada West (Calgary) Region
Customers can now create Amazon FSx for NetApp ONTAP file systems in the AWS Canada West (Calgary) Region. Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for NetApp ONTAP provides the first and only complete, fully managed NetApp ONTAP file systems in the cloud. It offers the familiar features, performance, capabilities, and APIs of ONTAP with the agility, scalability, and simplicity of an AWS service. To learn more about Amazon FSx for NetApp ONTAP, visit our product page, and see the AWS Region Table for complete regional availability information.
Amazon FSx for OpenZFS is now available in the AWS Canada West (Calgary) Region
Customers can now create Amazon FSx for OpenZFS file systems in the AWS Canada West (Calgary) Region. Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for OpenZFS provides fully managed, cost-effective, shared file storage powered by the popular OpenZFS file system, and is designed to deliver sub-millisecond latencies and multi-GB/s throughput along with rich ZFS-powered data management capabilities (like snapshots, data cloning, and compression). To learn more about Amazon FSx for OpenZFS, visit our product page, and see the AWS Region Table for complete regional availability information.
Amazon FSx for Windows File Server is now available in the AWS Canada West (Calgary) Region
Customers can now create Amazon FSx for Windows File Server file systems in the AWS Canada West (Calgary) Region. Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for Windows File Server provides fully managed, highly reliable file storage built on Windows Server and can be accessed via the industry-standard Server Message Block (SMB) protocol.  To learn more about Amazon FSx for Windows File Server, visit our product page, and see the AWS Region Table for complete regional availability information.
Amazon DynamoDB introduces configurable maximum throughput for On-demand tables
Amazon DynamoDB on-demand is a serverless, pay-per-request billing option that can serve thousands of requests per second without capacity planning. Previously, the on-demand request rate was limited only by the default throughput quota (40K read request units and 40K write request units), which applied uniformly to all tables within the account and could not be customized or tailored for diverse workloads and differing requirements. Since on-demand mode scales instantly to accommodate varying traffic patterns, a piece of hastily written or unoptimized code could rapidly scale up and consume resources, making it difficult to keep costs and usage bounded. Starting today, you can optionally configure maximum read or write (or both) throughput for individual on-demand tables and associated secondary indexes, making it easy to balance costs and performance. Throughput requests in excess of the maximum table throughput are automatically throttled, but you can modify the table-specific maximum throughput at any time based on your application requirements. Customers can use this feature for predictable cost management, protection against accidental surges in consumed resources and excessive use, and safeguarding downstream services with fixed capacities from potential overloading and performance bottlenecks. On-demand throughput maximums are available in all AWS Regions. See the Amazon DynamoDB pricing page for on-demand pricing. See the Developer Guide to learn more.
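The capability above can be sketched as an UpdateTable request with the OnDemandThroughput setting. The table name and limits are illustrative; check the DynamoDB API reference for the current parameter shape.

```python
# Sketch: capping an on-demand table's throughput with the new
# OnDemandThroughput setting on UpdateTable. Requests beyond these
# maximums are automatically throttled.
def build_update_table_request(table_name, max_read, max_write):
    return {
        "TableName": table_name,
        "OnDemandThroughput": {
            "MaxReadRequestUnits": max_read,
            "MaxWriteRequestUnits": max_write,
        },
    }

request = build_update_table_request("orders", 10000, 5000)
# With credentials: boto3.client("dynamodb").update_table(**request)
```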
IP prefix visibility on Amazon CloudWatch Internet Monitor console
Amazon CloudWatch Internet Monitor, an internet traffic monitoring service for AWS applications that gives you a global view of traffic patterns and health events, now displays IPv4 prefixes in its console dashboard. Using this data in Internet Monitor, you can get more details about your application traffic and health events. For example, you can do the following:
  • View the IPv4 prefixes associated with a client location that is impacted by a health event
  • View the IPv4 prefixes associated with a client location
  • Filter and search traffic data by the network associated with an IPv4 prefix or IPv4 address
This feature gives you added flexibility for learning more about the specific impact of internet traffic health events on your application users, by searching and filtering drill-down data. You can also get insights into which IPv4 prefixes and addresses are used by clients of your application at different locations. To learn more, visit the Internet Monitor user guide documentation.
Amazon Pinpoint introduces country rules to precisely control SMS message delivery
Amazon Pinpoint now offers country rules, a new feature that allows developers to control the specific countries they send SMS and voice messages to. This enhancement helps organizations align their message sending activities to the precise list of countries where they operate. With country rules, developers can tightly manage their message delivery by setting configurations at the account level or by using dedicated configuration sets. This flexibility allows developers to use different country configurations for each API request made through Amazon Pinpoint, ensuring messages are only sent to their approved destinations. The country rules feature is now available in 29 AWS regions, empowering developers to streamline their messaging operations. To get started with Amazon Pinpoint country rules, simply log in to the Amazon Pinpoint SMS console, or refer to the Amazon Pinpoint SMS protect documentation.
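Country rules are managed through a protect configuration. A hedged sketch of the update request is below; the configuration ID and country choices are illustrative, and the parameter names follow my reading of the Pinpoint SMS (pinpoint-sms-voice-v2) API, so confirm them against the protect documentation:

```python
# Hypothetical request to update a country rule set on an assumed protect
# configuration. Country codes and statuses are illustrative only.
country_rules_request = {
    "ProtectConfigurationId": "protect-abc123example",  # assumed ID
    "NumberCapability": "SMS",                          # rules apply to SMS sends
    "CountryRuleSetUpdates": {
        "IN": {"ProtectStatus": "ALLOW"},   # permit sending to India
        "RU": {"ProtectStatus": "BLOCK"},   # block sending to Russia
    },
}

# With boto3 (not executed here):
#   boto3.client("pinpoint-sms-voice-v2") \
#        .update_protect_configuration_country_rule_set(**country_rules_request)
```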
Amazon Route 53 Resolver DNS Firewall now available in the Canada West (Calgary) Region
Starting today, you can use Amazon Route 53 Resolver DNS Firewall in the Canada West (Calgary) Region. Route 53 Resolver DNS Firewall is a managed firewall that enables customers to block DNS queries made for domains identified as low-reputation or suspected to be malicious, and to allow queries for trusted domains. DNS Firewall is a feature of Route 53 Resolver, which is a recursive DNS server that is available by default in all Amazon Virtual Private Clouds (VPCs) and that responds to DNS queries from AWS resources within a VPC for public DNS records, VPC-specific domain names, and Route 53 private hosted zones. DNS Firewall provides more granular control over the DNS querying behavior of resources within your VPCs by letting you create “blocklists” for domains you don’t want your VPC resources to communicate with via DNS, or take a stricter, “walled-garden” approach by creating “allowlists” that permit outbound DNS queries only to domains you specify. Visit the AWS Region Table to see all AWS Regions where Amazon Route 53 is available. Please visit our product page and documentation to learn more about Amazon Route 53 Resolver DNS Firewall. 
AWS Transfer Family is now available in the AWS Canada West (Calgary) Region
Customers in AWS Canada West (Calgary) Region can now use AWS Transfer Family. AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over Secure File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS) and Applicability Statement 2 (AS2). In addition to file transfers, Transfer Family offers common file processing steps and enables event-driven automation to modernize managed file transfer (MFT) workflows, helping customers to simplify and migrate their business-to-business file transfer workflows to AWS.   To learn more about AWS Transfer Family, visit our product page and user-guide. See the AWS Region Table for complete regional availability information.
Amazon Personalize now makes it easier than ever to delete users from datasets
Amazon Personalize now makes it easier than ever to remove users from your datasets with a new deletion API. Amazon Personalize uses datasets provided by customers to train custom personalization models on their behalf. This new capability allows you to delete records about users from your datasets including user metadata and user interactions. This helps to maintain data for your compliance programs and keep your data current as your user base changes. Once the deletion is complete, Personalize will no longer store information about the deleted users and therefore will not consider the user for model training.  Using the new deletion API is easy. Simply upload a CSV file with the userIDs for the users you wish to delete to S3. Then create a deletion job and specify the file’s S3 location. Amazon Personalize enables you to personalize your website, app, ads, emails, and more, using the same machine learning technology used by Amazon, without requiring any prior machine learning experience. To get started with Amazon Personalize, visit our documentation.
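The CSV-then-job flow above can be sketched as follows. All names, ARNs, and the S3 location are hypothetical placeholders, and the call itself is shown only in a comment:

```python
# Hypothetical parameters for a Personalize data deletion job. The CSV at the
# S3 location is assumed to contain the userIds to remove; bucket, role, and
# dataset group ARN are illustrative.
deletion_job_params = {
    "jobName": "delete-departed-users",
    "datasetGroupArn": "arn:aws:personalize:us-east-1:111122223333:dataset-group/example",
    "dataSource": {"dataLocation": "s3://example-bucket/users-to-delete.csv"},
    "roleArn": "arn:aws:iam::111122223333:role/PersonalizeS3Access",  # must allow S3 read
}

# With boto3 (not executed here):
#   boto3.client("personalize").create_data_deletion_job(**deletion_job_params)
```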
AWS announces a new Amazon EC2 API to retrieve the public endorsement key from NitroTPM
Today, AWS introduces a new EC2 API to retrieve the public endorsement key (EkPub) for the Nitro Trusted Platform Module (NitroTPM) of an Amazon EC2 instance. Amazon EC2 customers can now programmatically retrieve the unique public endorsement key from the NitroTPM of their EC2 instance using the GetInstanceTpmEkPub API. There is no additional cost for using this API other than the cost of the EC2 instance itself. NitroTPM EkPub retrieval is available in the AWS GovCloud (US) Regions and all AWS commercial Regions, with the exception of the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. To learn more about NitroTPM and how to get started with this feature, visit the NitroTPM user guide.
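A minimal sketch of the request follows. The instance ID is a placeholder, and the key type and format values reflect my understanding of the API's options (an RSA-2048 key in DER encoding), so verify them in the EC2 API reference:

```python
# Hypothetical GetInstanceTpmEkPub request parameters; the instance ID is
# illustrative, and KeyType/KeyFormat values are assumptions from the docs.
ekpub_request = {
    "InstanceId": "i-0123456789abcdef0",
    "KeyType": "rsa-2048",   # the endorsement key algorithm to return
    "KeyFormat": "der",      # DER-encoded public key
}

# With boto3 (not executed here):
#   resp = boto3.client("ec2").get_instance_tpm_ek_pub(**ekpub_request)
#   resp["KeyValue"] would then hold the public endorsement key material.
```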
AWS Control Tower is now available in AWS Canada West (Calgary) Region
Starting today, customers can use AWS Control Tower in the AWS Canada West (Calgary) Region. With this launch, AWS Control Tower is available in 29 AWS Regions and the AWS GovCloud (US) Regions. AWS Control Tower offers the easiest way to set up and govern a secure, multi-account AWS environment. It simplifies AWS experiences by orchestrating multiple AWS services on your behalf while maintaining the security and compliance needs of your organization. You can set up a multi-account AWS environment within 30 minutes or less, govern new or existing account configurations, gain visibility into compliance status, and enforce controls at scale. If you are new to AWS Control Tower, you can launch it today in any of the supported Regions to build and govern your multi-account environment. If you are already using AWS Control Tower and want to extend its governance features to the newly supported Region, you can do so with AWS Control Tower’s Landing Zone APIs, or go to the settings page in your AWS Control Tower dashboard, select your Regions, and update your landing zone. You must then update all accounts governed by AWS Control Tower; once that is complete, your entire landing zone, all accounts, and OUs will be under governance in the new Region(s). For a full list of Regions where AWS Control Tower is available, see the AWS Region Table. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide.
Amazon RDS for SQL Server Supports SSAS Multidimensional for SQL Server 2019
Amazon RDS for SQL Server now supports SQL Server Analysis Services (SSAS) in Multidimensional mode for SQL Server 2019. There is no additional cost to install SSAS directly on your Amazon RDS for SQL Server DB instance. SSAS is a Microsoft Business Intelligence tool for developing enterprise-level Online Analytical Processing (OLAP) solutions. You can now configure Microsoft SQL Server Analysis Services (SSAS) in Multidimensional mode on Amazon RDS for SQL Server with SQL Server 2019 version 15.00.4153 or higher. Amazon RDS supports SSAS in the Single-AZ configuration for SQL Server Standard and Enterprise Editions.  Learn more about how to configure SSAS in RDS from this Blog post, and see the Amazon Relational Database Services User Guide for more information. Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
Amazon Bedrock now available in the Asia Pacific (Mumbai) Region
Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Mumbai) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools. Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as Amazon, via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
Announcing automated 997 and TA1 acknowledgements for AWS B2B Data Interchange
AWS B2B Data Interchange now automatically generates 997 functional acknowledgements and TA1 interchange acknowledgements in response to all relevant inbound X12 electronic data interchange (EDI) transactions. These acknowledgements are used to confirm receipt of individual transactions and to report errors. With this launch, you can now automate delivery of 997 and TA1 acknowledgements to trading partners that require them. Each acknowledgement generated by AWS B2B Data Interchange is stored in Amazon S3, alongside your transformed EDI, and emits an Amazon EventBridge event. You can use these events to automatically send the acknowledgement generated by AWS B2B Data Interchange to your trading partners via SFTP or AS2 using AWS Transfer Family or any other EDI connectivity solution. Support for automated acknowledgements is available in all AWS Regions where AWS B2B Data Interchange is available and provided at no additional cost. To get started with AWS B2B Data Interchange for building and running your event-driven EDI workflows, take the self-paced workshop or visit the documentation.
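The EventBridge events mentioned above can drive an outbound delivery workflow. A hedged sketch of a rule pattern is below; "aws.b2bi" is my assumed event source name for AWS B2B Data Interchange, and the exact detail-types emitted for acknowledgements should be confirmed in the service documentation:

```python
import json

# Hypothetical EventBridge rule pattern matching all events from B2B Data
# Interchange; "aws.b2bi" is an assumed source name. In practice you would
# narrow this with the documented detail-type for acknowledgement events.
ack_event_pattern = {
    "source": ["aws.b2bi"],
}

# With boto3 (not executed here):
#   boto3.client("events").put_rule(
#       Name="b2bi-ack-events",
#       EventPattern=json.dumps(ack_event_pattern),
#   )
```

A target on that rule (for example, a Lambda function invoking an AWS Transfer Family connector) could then forward the generated 997/TA1 file from S3 to the trading partner.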
Amazon Personalize launches new recipes supporting larger item catalogs with lower latency
Today, Amazon Personalize announces the general availability of two new recipes, User-Personalization-v2 and Personalized-Ranking-v2 (v2 recipes). Built on the Transformer architecture, these new recipes support catalogs with up to 5 million items with lower inference latency. Amazon Personalize testing showed that v2 recipes improved recommendation accuracy by up to 9% and recommendation coverage by up to 1.8x compared to previous versions. A higher coverage means Amazon Personalize recommends more of your catalog. These new recipes also support item metadata like genres and descriptions in inference responses, allowing customers to easily enrich recommendations in their user interfaces. Amazon Personalize enables customers to personalize their website, app, emails, and more, using the same machine learning (ML) technology used by Amazon, without requiring any ML expertise. Using recipes - algorithms for specific use cases - provided by Amazon Personalize, customers can deliver a wide array of personalization, including product or content recommendations and personalized ranking. To get started with Amazon Personalize, provide an activity stream – clicks, page views, signups, purchases, and so forth – as well as a catalog of the items you want to recommend, such as videos, products, articles, or music. You can also provide demographic information from your users. Amazon Personalize will process the data, train and optimize your custom model, then host it for your applications. User-Personalization-v2 and Personalized-Ranking-v2 are available in all supported Regions. You can access them through the Amazon Personalize console or API. To get started, refer to our documentation.
Amazon Chime SDK Voice Connector now supports audio streaming G.711 A-Law encoded audio
Amazon Chime SDK Voice Connector audio streaming now supports G.711 A-law encoded audio. With this update, companies can stream audio to AWS from their phone systems using session initiation protocol recording (SIPREC) with G.711 A-law encoded audio. Businesses use Amazon Chime SDK Voice Connector audio streaming to send call audio from their phone systems to AWS for real-time and post-call analytics, machine learning (ML) and natural language processing, call recording, and other business processes. This helps businesses build analytics and compliance solutions that help improve customer experiences using real-time ML-powered conversation insights derived from their streamed phone calls. G.711 A-law encoded audio streaming is available in all AWS regions where Amazon Chime SDK Voice Connector audio streaming is available. To get started or to learn more about Amazon Chime SDK Voice Connector streaming, refer to the Administrator Guide or visit the Amazon Chime SDK website.
Amazon Connect Contact Lens now provides generative AI-powered agent performance evaluations (preview)
Amazon Connect Contact Lens now provides managers with generative AI-powered recommendations for answers to questions in agent evaluation forms, enabling them to perform evaluations faster and more accurately. Managers now receive additional agent behavioral insights (e.g., did the agent show empathy while delivering bad news?) and get context and justification for the recommended answers (reference points from the transcript that were used to provide answers). These generative AI-powered recommendations are built using Amazon Bedrock and are available in the US West (Oregon) and US East (N. Virginia) Regions in the English language. To learn more, please visit our documentation and our webpage. During preview, this feature is included with Contact Lens performance evaluations at no additional charge. For information about Contact Lens pricing, please visit our pricing page.
Introducing file commit history in Amazon CodeCatalyst
Today, AWS announces the general availability of file commit history in Amazon CodeCatalyst. Customers can now view the git commit history of a file in the CodeCatalyst console. Amazon CodeCatalyst helps teams plan, code, build, test, and deploy applications on AWS. Viewing the commit history for a file reduces the effort developers spend understanding the history of changes in a codebase. To learn more, check out our documentation or visit our webpage.
AWS Trusted Advisor now supports API to exclude resources
AWS Trusted Advisor introduces a new API that enables you to programmatically exclude resources from recommendations associated with Trusted Advisor best practice checks. On Nov 17, 2023, Trusted Advisor announced the launch of new APIs available to Business, Enterprise On-Ramp, and Enterprise Support customers. This launch adds the BatchUpdateRecommendationResourceExclusions API to that suite, allowing users to exclude specific resources from specific checks programmatically. This capability is available only via the AWS Trusted Advisor APIs, not through the AWS Support API (SAPI). Trusted Advisor continuously evaluates your AWS environment using best practice checks in the categories of cost optimization, performance, resilience, security, operational excellence, and service quotas, and provides recommendations to remediate any deviations from best practices. Based on your business and architectural context, you can now use this exclusion capability within your operational tooling to increase the relevance of recommendations shown by AWS Trusted Advisor. Trusted Advisor APIs are generally available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Sydney), and Europe (Ireland) Regions. To learn more about the Trusted Advisor API, please refer to the user guide.
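A hedged sketch of an exclusion request follows. The resource ARN is a made-up placeholder, and the lowerCamelCase parameter names reflect my reading of the newer Trusted Advisor API, so check the current SDK reference before use:

```python
# Hypothetical batch exclusion request for the Trusted Advisor API.
# The ARN below is an illustrative placeholder, and the member-name casing
# (lowerCamelCase) is an assumption to verify against the SDK docs.
exclusion_request = {
    "recommendationResourceExclusions": [
        {
            "arn": "arn:aws:trustedadvisor::111122223333:recommendation-resource/example",
            "isExcluded": True,   # hide this resource from the check's results
        },
    ],
}

# With boto3 (not executed here):
#   boto3.client("trustedadvisor") \
#        .batch_update_recommendation_resource_exclusions(**exclusion_request)
```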
Knowledge Bases for Amazon Bedrock now supports MongoDB Atlas for vector storage
Knowledge Bases for Amazon Bedrock securely connects foundation models (FMs) to internal company data sources for Retrieval Augmented Generation (RAG), to deliver more relevant and accurate responses. Today, we are announcing vector storage support for MongoDB Atlas in Knowledge Bases (KB) for Amazon Bedrock. Knowledge Bases’ native integration with vector databases allows you to innovate and create unique vector-search-based experiences, mitigating the need to build custom data source integrations. Vector search allows you to generate deep and accurate insights and find specific information from a corpus of documents. With this launch, your MongoDB Atlas vector database can now take advantage of Knowledge Bases capabilities such as adding metadata to source data to retrieve a filtered list of relevant passages, customizing prompts, and configuring the number of retrieval results. You can connect your AWS account to MongoDB Atlas over the public internet as well as through AWS PrivateLink for added security. This integration adds to the list of vector databases supported by Knowledge Bases, including Amazon Aurora, Amazon OpenSearch Serverless, Pinecone, and Redis. You can also use this integration with KB’s Retrieve API and the RetrieveAndGenerate API. The MongoDB integration for Amazon Bedrock Knowledge Bases is now generally available in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, refer to the MongoDB integration feature blog and Knowledge Bases documentation. To get started, please visit MongoDB Atlas on AWS Marketplace and the Amazon Bedrock console.
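The metadata filtering and retrieval-count configuration mentioned above can be sketched as a RetrieveAndGenerate request. The knowledge base ID, model ARN, and metadata key are illustrative placeholders:

```python
# Hypothetical RetrieveAndGenerate request against a knowledge base backed by
# MongoDB Atlas. IDs, ARN, and the "department" metadata key are illustrative.
rag_request = {
    "input": {"text": "What is our parental leave policy?"},
    "retrieveAndGenerateConfiguration": {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234EXAMPLE",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
            "retrievalConfiguration": {
                "vectorSearchConfiguration": {
                    "numberOfResults": 5,  # configurable retrieval count
                    # metadata filter narrowing results to one document subset
                    "filter": {"equals": {"key": "department", "value": "HR"}},
                }
            },
        },
    },
}

# With boto3 (not executed here):
#   boto3.client("bedrock-agent-runtime").retrieve_and_generate(**rag_request)
```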
New feature for Amazon Connect Contact Lens agent screen recording
You can now enable agent screen recording when your VDI environment is configured to allow multiple agents to connect concurrently to the same Windows instance (multi-session VDI). This makes it even easier and more cost effective for you to help agents improve their performance when using Amazon Connect in a multi-session VDI environment. You can enable agent screen recording for multi-session VDI by deploying the latest version of the screen recording client application. Amazon Connect automatically creates separate agent screen recordings for each agent that is logged into the same desktop, which you can view directly in the Amazon Connect web UI, alongside call recordings, agent evaluations, transcripts, and call summaries. Agent screen recording is available in all the AWS Regions where Amazon Connect Contact Lens is already available. To learn more about agent screen recording, please visit the documentation and webpage. For more information about agent screen recording pricing, visit the Amazon Connect pricing page.
Amazon EMR Serverless introduces Shuffle-optimized disks delivering improved performance for I/O intensive workloads
Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. An EMR Serverless application uses workers to execute workloads, allowing users to configure ephemeral storage per worker based on the workload's needs. Today, we are excited to introduce Shuffle-optimized disks on Amazon EMR Serverless, offering increased storage capacity (up to 2TB) and higher IOPS, delivering better performance for I/O-intensive Spark and Hive workloads. Shuffle is a fundamental step in an Apache Spark or Apache Hive job, involving I/O-intensive operations that redistribute or reorganize data for parallel computations during operations like joins, aggregations, or transformations. Complex workloads with large datasets to shuffle require sufficient disk capacity and I/O performance for optimized shuffle processing. Shuffle-optimized disks offer up to 2TB of storage capacity and higher baseline IOPS, enabling you to efficiently run shuffle-heavy and I/O-intensive Spark and Hive workloads. Shuffle-optimized disks are generally available on EMR release version 7.1.0 in all AWS Regions where EMR Serverless is available, excluding the AWS GovCloud (US) and China Regions. For more information on Shuffle-optimized disks, visit the EMR Serverless User Guide. For pricing info on Shuffle-optimized disks, visit the EMR Serverless pricing page.
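Shuffle-optimized disks are selected per worker through job configuration. A hedged sketch of a Spark submit-parameters fragment is below; the property names follow my reading of the EMR Serverless documentation and the sizes are illustrative, so confirm them in the User Guide:

```python
# Hypothetical sparkSubmitParameters fragment requesting shuffle-optimized
# executor disks on EMR Serverless. Property names are assumptions from the
# docs; the 2000g size is the stated maximum.
spark_submit_parameters = " ".join([
    "--conf spark.emr-serverless.executor.disk.type=shuffle_optimized",
    "--conf spark.emr-serverless.executor.disk=2000g",  # up to 2TB per worker
])

# Passed (not executed here) as
# jobDriver.sparkSubmit.sparkSubmitParameters in a StartJobRun request.
```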
Amazon EC2 now protects your AMIs from accidental deregistration
Starting today, you can prevent Amazon Machine Images (AMIs) from accidental deregistration by marking them as protected. A protected AMI cannot be deregistered until you explicitly disable deregistration protection. Prior to today, you could recover accidentally deregistered AMIs by onboarding onto Recycle Bin. However, if the AMIs were actively being used to launch instances, unintentional deregistrations could lead to production outages until you recovered those AMIs from Recycle Bin. Now by marking your critical AMIs as protected, you can proactively safeguard your AWS environments against accidental AMI deregistrations. To further safeguard your environments, you can optionally enable a 24-hour cooldown period during which a protected AMI can’t be deregistered even after you disable protection. AMI deregistration protection is now available in all AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions, and can be enabled through EC2 Console, CLI, and APIs. To learn more, please visit documentation here.
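A minimal sketch of enabling protection with the cooldown follows. The AMI ID is a placeholder and the call is shown only in a comment:

```python
# Hypothetical request enabling deregistration protection on an AMI, with the
# optional 24-hour cooldown after protection is later disabled. The image ID
# is illustrative.
protect_request = {
    "ImageId": "ami-0123456789abcdef0",
    "WithCooldown": True,   # deregistration stays blocked for 24h after disable
}

# With boto3 (not executed here):
#   ec2 = boto3.client("ec2")
#   ec2.enable_image_deregistration_protection(**protect_request)
#   # and later, to lift it:
#   ec2.disable_image_deregistration_protection(ImageId=protect_request["ImageId"])
```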
Amazon EC2 simplifies visibility into your active AMIs
Starting today, you can check when your Amazon Machine Images (AMIs) were last used to launch EC2 instances by simply describing your AMIs, enabling you to efficiently filter and track your active AMIs. Prior to today, you had to write complex scripts to query which of your AMIs were actively used to launch instances. These scripts became increasingly cumbersome and error-prone as the number of AMIs grew. Now you can easily verify if your AMIs are active - based on when they were used to launch instances - by simply describing those AMIs. You can now easily create advanced filtering scripts through which you can track your active AMIs at-scale and make informed decisions about deprecating, disabling, and deregistering your AMIs. This feature is now available in all AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions. You can view the last launched time for your AMIs via EC2 Console, CLI, and API. Learn more by visiting the documentation here.
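The last-launched timestamp is exposed as an image attribute. A brief sketch, with a placeholder AMI ID and the call shown only in a comment:

```python
# Hypothetical request for an AMI's last launched time via the image-attribute
# API; the image ID is illustrative.
attr_request = {
    "ImageId": "ami-0123456789abcdef0",
    "Attribute": "lastLaunchedTime",
}

# With boto3 (not executed here):
#   resp = boto3.client("ec2").describe_image_attribute(**attr_request)
#   # resp["LastLaunchedTime"]["Value"] would hold the timestamp of the most
#   # recent instance launch from this AMI (empty if never launched recently).
```

Note that the value may take some time to populate after a launch, so treat it as an approximation when scripting deprecation decisions.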
Amazon CloudWatch launches resource filtering for cross-account observability
Amazon CloudWatch is excited to announce a resource filtering capability for cross-account observability, providing customers with the flexibility to share a subset of their logs or metrics across multiple AWS accounts using configurable filters. Cross-account observability in Amazon CloudWatch allows you to seamlessly search, visualize, and analyze all your metrics and logs, without any account boundaries. With today's launch, you can include or exclude logs and metrics shared across accounts, allowing you to make analysis specific to certain logs and metrics to identify trends and insights. This helps customers efficiently monitor and troubleshoot issues affecting the health of their applications by sharing only the required logs and metrics with the monitoring account. Resource filtering for cross-account observability in Amazon CloudWatch is available in all commercial AWS Regions at no extra cost. With a few clicks in the AWS Management Console, you can start using CloudWatch resource filtering to share a subset of logs and metrics to your monitoring account. Alternatively, you can use the AWS Command Line Interface (AWS CLI), AWS SDKs, and AWS CloudFormation. To learn more about cross-account observability, please refer to our documentation. You can learn more about CloudWatch cross-account pricing here.
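Filters are attached to the Observability Access Manager (OAM) link from the source account. A hedged sketch follows; the sink ARN, label template, and filter expressions are illustrative, and the exact filter syntax should be confirmed in the cross-account observability documentation:

```python
# Hypothetical OAM link parameters sharing only a subset of logs and metrics
# with a monitoring account. The sink ARN and filter expressions are
# illustrative assumptions.
link_params = {
    "LabelTemplate": "$AccountName",
    "ResourceTypes": ["AWS::Logs::LogGroup", "AWS::CloudWatch::Metric"],
    "SinkIdentifier": "arn:aws:oam:us-east-1:111122223333:sink/example-sink-id",
    "LinkConfiguration": {
        # share only Lambda log groups
        "LogGroupConfiguration": {"Filter": "LogGroupName LIKE '/aws/lambda/%'"},
        # share only EC2 and ELB metric namespaces
        "MetricConfiguration": {"Filter": "Namespace IN ('AWS/EC2', 'AWS/ELB')"},
    },
}

# With boto3 (not executed here):
#   boto3.client("oam").create_link(**link_params)
```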
Amazon Route 53 Resolver DNS Firewall now supports Domain Redirection
Starting today, you can enable Route 53 Resolver DNS Firewall to automatically skip the inspection of domains included in a domain redirection chain, such as Canonical Name (CNAME) and Delegation Name (DNAME), thus avoiding the need to explicitly specify each domain from the chain in your Route 53 DNS Firewall rules when allow-listing domains. Before today, when allow-listing domains, Route 53 DNS Firewall compared every DNS query from your VPC against the domains in the allow-list associated with a DNS Firewall rule. If an incoming query was for a domain present in a redirection chain (e.g. CNAME) that was not included in your allow-list of domains, DNS Firewall would block the DNS resolution for this domain, thereby requiring you to explicitly add each domain in the redirection chain to the allow-list. With this release, you can now configure the DNS Firewall rule to automatically apply to all domains in a redirection chain, such as CNAME or DNAME, without requiring you to add each domain in the chain to the allow-list. Route 53 Resolver DNS Firewall support for domain redirection is available in all Regions where Route 53 is available, including the AWS GovCloud (US) Regions. Visit the AWS Region Table to see all AWS Regions where Amazon Route 53 is available. You can get started by using the AWS Console or Route 53 API. For more information, visit the Route 53 Resolver product detail page, the feature documentation, or the step-by-step guide in the AWS News Blog. For details on pricing, visit the pricing page.
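The redirection behavior is set per firewall rule. A brief sketch follows; the rule group and domain list IDs are placeholders, and the redirection-action value reflects my reading of the Route 53 Resolver API, so confirm it in the API reference:

```python
# Hypothetical firewall rule update trusting all domains in a redirection
# chain (CNAME/DNAME) once the first domain matches the allow-list. IDs are
# illustrative placeholders.
firewall_rule_update = {
    "FirewallRuleGroupId": "rslvr-frg-0123456789example",
    "FirewallDomainListId": "rslvr-fdl-0123456789example",
    "Action": "ALLOW",
    # skip inspection of the rest of the redirection chain
    "FirewallDomainRedirectionAction": "TRUST_REDIRECTION_DOMAIN",
}

# With boto3 (not executed here):
#   boto3.client("route53resolver").update_firewall_rule(**firewall_rule_update)
```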
