Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
Announcing expanded DICOMweb retrievals with AWS HealthImaging

AWS HealthImaging has launched two additional DICOMweb APIs for retrieving medical imaging metadata and image frames. This launch offers customers greater flexibility in how they access data stored on HealthImaging, and expanded interoperability with legacy applications.

Customers can now download DICOM instance metadata from HealthImaging with GetDICOMInstanceMetadata. Customers can also retrieve one or more image frames from a DICOM instance stored on HealthImaging with GetDICOMInstanceFrames. Both of these APIs are built in conformance with the DICOMweb WADO-RS standard for web-based medical imaging. These APIs make it easy to connect DICOMweb-enabled applications, such as medical imaging viewers, to HealthImaging data stores.
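
Because both APIs conform to WADO-RS, clients address instance metadata and image frames using the standard's resource-path conventions. The sketch below is illustrative only: the study/series/instance UIDs are placeholders, and the actual HealthImaging endpoint, authentication, and request signing are not shown.

```python
# Illustrative sketch of WADO-RS resource paths, as used by DICOMweb
# retrieval APIs. UIDs below are placeholders; endpoint and signing omitted.

def wado_metadata_path(study_uid: str, series_uid: str, instance_uid: str) -> str:
    """WADO-RS path for retrieving a DICOM instance's metadata."""
    return f"/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}/metadata"

def wado_frames_path(study_uid: str, series_uid: str,
                     instance_uid: str, frames: list[int]) -> str:
    """WADO-RS path for retrieving one or more image frames from an instance."""
    frame_list = ",".join(str(f) for f in frames)
    return f"/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}/frames/{frame_list}"

metadata_path = wado_metadata_path("1.2.3", "4.5.6", "7.8.9")
frames_path = wado_frames_path("1.2.3", "4.5.6", "7.8.9", [1, 2])
```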

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing your total cost of ownership.

Announcing the July 2024 updates to Amazon Corretto

Corretto 22.0.2, 21.0.4, 17.0.12, 11.0.24, and 8u422, the quarterly security and critical updates for the Long-Term Support (LTS) and Feature Release (FR) versions of OpenJDK, are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK.

Visit the Corretto home page to download Corretto 8, Corretto 11, Corretto 17, Corretto 21, or Corretto 22. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repository.
 

AWS Artifact now supports enhanced search capability for reports

We are excited to announce enhanced search capability in the AWS Artifact Reports console that allows you to quickly find the compliance reports you need. You can now perform targeted searches for reports based on individual columns, including report title, category, series, description, and ARN. This empowers you to easily locate specific reports. For example, if you need to find all SOC (System and Organization Controls) reports, you can now search the "Title" column using the "contains" operator and the term "SOC". The new column-specific search helps you narrow down your results and save time.

Targeted search is available for customers who are opted-in to use the AWS Artifact reports console with fine-grained permissions. By default, all AWS Artifact customers are opted-in to use the new console. If you had previously opted out of fine-grained permissions, you can easily opt back in by clicking the URL in the banner displayed on top of the AWS reports list.

The feature is available in all AWS commercial regions. To learn more about AWS Artifact and how to start downloading AWS compliance reports, refer to the product page and documentation.

Amazon QuickSight improves controls performance

Previously, when readers interacted with controls in Amazon QuickSight, they had to wait for all relevant controls to reload after each change. With this release, loading has moved to the background, so most controls are immediately accessible. Readers may still see a loading indicator in a control's sample value list, but QuickSight prioritizes the control being interacted with to reduce the loading time experienced.

Readers can start interacting with controls as soon as the dashboard loads. The improvement is most noticeable on dashboards with many controls, or with controls that previously loaded slowly.

This improvement to controls is supported in all Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland and London), South America (São Paulo) and AWS GovCloud (US-West). See here for QuickSight regional endpoints.

Amazon OpenSearch Serverless levels up speed and efficiency with smart caching

We are excited to announce the new smart caching feature for indexing in Amazon OpenSearch Serverless. This innovative caching mechanism automatically fetches and intelligently manages data, leading to faster data retrieval, efficient storage usage, and cost savings.

OpenSearch Serverless has a built-in caching tier for indexing and search compute, measured in OpenSearch Compute Units (OCUs). Prior to this feature, the OCUs consumed by indexing were predominantly determined by the size of the workload. The new caching feature optimizes data management on the indexing compute by keeping only the most recently or frequently used data readily available in the cache, so cost is determined by the rate of data ingestion rather than by total data size. When requested data is not in the cache, it is automatically fetched from Amazon Simple Storage Service (Amazon S3). This approach reduces overall OCU usage, making the indexing process more efficient and cost-effective while ensuring faster data retrieval.
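
To make the caching principle concrete, here is a minimal, illustrative least-recently-used (LRU) cache: frequently accessed entries stay in memory, and a miss falls back to a backing store standing in for Amazon S3. This is a sketch of the general technique only, not OpenSearch Serverless internals.

```python
from collections import OrderedDict

class LruCache:
    """Minimal LRU cache: keeps the most recently used entries in memory and
    falls back to a backing store (a stand-in for Amazon S3) on a miss."""

    def __init__(self, capacity: int, backing_store: dict):
        self.capacity = capacity
        self.backing_store = backing_store  # stand-in for durable S3 storage
        self.cache = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing_store[key]      # fetch from durable storage
        self.cache[key] = value
        if len(self.cache) > self.capacity:  # evict least recently used entry
            self.cache.popitem(last=False)
        return value

store = {f"doc-{i}": i for i in range(10)}
cache = LruCache(capacity=3, backing_store=store)
```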

Support for the new smart caching feature in OpenSearch Serverless is automatically enabled for all collections and is now available in 13 AWS Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (São Paulo), and Canada (Central).

Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability.

To learn more about OpenSearch Serverless, see the documentation.
 

Amazon RDS for MariaDB supports Long-Term Support version 11.4 in Amazon RDS Database Preview Environment

Amazon RDS for MariaDB now supports version 11.4 in the Amazon RDS Database Preview Environment, allowing you to evaluate the latest Long-Term Support Release on Amazon RDS for MariaDB. You can deploy MariaDB 11.4 in the Amazon RDS Database Preview Environment and get the benefits of a fully managed database, making it simpler to set up, operate, and monitor databases.

MariaDB 11.4 is the latest Long-Term Support Release from the MariaDB community. MariaDB Long-Term Support Releases include bug fixes and security patches, as well as new features. Please refer to the MariaDB 11.4 release notes for more details about this release.

The Amazon RDS Database Preview Environment supports both Single-AZ and Multi-AZ deployments on the latest generation of instance classes. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment.

Amazon RDS Database Preview Environment database instances are priced the same as production RDS instances created in the US East (Ohio) Region.

AWS Elemental MediaConnect supports individual output stopping capability

You can now disable individual outputs on an AWS Elemental MediaConnect flow, temporarily stopping them from transmitting data. This makes it easier to manage content sharing by allowing you to suspend and restart distribution of live video to a single destination without having to delete and then reconfigure output settings or flows.

To learn more about stopping individual flow outputs, visit the AWS Elemental MediaConnect documentation page.

AWS Elemental MediaConnect is a reliable, secure, and flexible transport service for live video that enables broadcasters and content owners to build live video workflows and securely share live content with partners and customers. MediaConnect helps customers transport high-value live video streams into, through, and out of the AWS Cloud. MediaConnect can function as a standalone service or as part of a larger video workflow with other AWS Elemental Media Services, a family of services that form the foundation of cloud-based workflows to transport, transcode, package, and deliver video.

Visit the AWS Region Table for a full list of AWS Regions where MediaConnect is available. To learn more about MediaConnect, please visit here.

AWS Cloud Control API now supports IPv6

AWS Cloud Control API now allows customers to use Internet Protocol version 6 (IPv6) addresses for their new and existing service endpoints. Customers moving to IPv6 can simplify their network stack by running their AWS Cloud Control API endpoints on a network that supports both IPv4 and IPv6.

The continued growth of the Internet, particularly in mobile applications, connected devices, and IoT, has spurred an industry-wide move to IPv6. IPv6 increases the number of available addresses by several orders of magnitude, so customers no longer need to manage overlapping address spaces in their VPCs. Customers can standardize their applications on the new version of the Internet Protocol by moving their AWS Cloud Control API endpoints to IPv6 using the AWS CLI.
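
One common way to opt SDK and CLI traffic onto dual-stack (IPv4 and IPv6) endpoints, where a service exposes them, is the shared AWS config setting sketched below. Whether this setting or an explicit endpoint URL is the right mechanism for Cloud Control API in your setup should be confirmed against the service documentation.

```ini
# ~/.aws/config -- request dual-stack endpoints for AWS SDK/CLI calls
[default]
use_dualstack_endpoint = true
```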

Support for IPv6 on AWS Cloud Control API is available in all Regions where AWS Cloud Control API is available. See here for a full listing of our Regions. To learn more about AWS Cloud Control API, please refer to our user guide.
 

Amazon FSx for OpenZFS now supports additional deployment options in two AWS Regions

Customers can now create Amazon FSx for OpenZFS Single-AZ 2 file systems in two additional AWS Regions: Canada (Central) and Asia Pacific (Mumbai). Customers can now also create new Multi-AZ file systems in these two regions with higher throughput capacity than they could previously, up to a new maximum of 10,240 MBps.

Amazon FSx for OpenZFS provides fully managed, cost-effective, shared file storage powered by the popular OpenZFS file system, and is designed to deliver sub-millisecond latencies and multi-GB/s of throughput along with rich ZFS-powered data management capabilities (like snapshots, data cloning, and compression). It offers two deployment types: Single-AZ (built in a single AZ) and Multi-AZ (built with storage volumes in two different AZs with automatic replication for data resiliency across AZs). Within the Single-AZ deployment type, the latest generation, Single-AZ 2, offers double the performance scalability as compared to the previous generation (Single-AZ 1), and delivers a high-speed NVMe read cache that automatically caches your most recently-accessed data, making it suitable for high-performance workloads such as media processing and rendering, financial analytics, and machine learning. Starting today, customers can launch Single-AZ 2 file systems, as well as Multi-AZ file systems with up to 10,240 MBps of throughput, in the two additional AWS Regions mentioned above.

To learn more about Amazon FSx for OpenZFS, visit our product page, and see the AWS Region Table for complete regional availability information.

AWS Security Hub launches 24 new security controls

AWS Security Hub has released 24 new security controls, increasing the number of controls offered to 418. Security Hub now supports controls for additional AWS services such as Amazon Inspector, Amazon Data Firehose, and AWS Service Catalog. Security Hub has also released new controls for previously supported services like Amazon GuardDuty and Amazon DynamoDB. For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide.

To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without taking any additional action.


AWS Application Migration Service achieves FedRAMP High authorization

Starting today, you can use AWS Application Migration Service with workloads that require FedRAMP High categorization level in the AWS GovCloud (US-East and US-West) Regions.

In addition to achieving FedRAMP High authorization in the AWS GovCloud (US-East and US-West) Regions, AWS Application Migration Service is in scope for numerous compliance programs and standards, including HIPAA (Health Insurance Portability and Accountability Act), PCI DSS (Payment Card Industry Data Security Standard), ISO (International Organization for Standardization), and SOC 1, 2, and 3 (System and Organization Controls). To learn more about AWS Application Migration Service compliance validation, visit the documentation here.

Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS. It also helps simplify modernization of your migrated applications by allowing you to select preconfigured and custom optimization options during migration.

To start using Application Migration Service for free, sign in through the AWS Management Console. For more information, visit the Application Migration Service product page.
 

AWS Identity and Access Management simplifies management of OpenID Connect identity providers

Today, AWS Identity and Access Management (IAM) is announcing improvements that simplify how customers manage OpenID Connect (OIDC) identity providers (IdPs) in their AWS accounts. These improvements include increased availability when handling federated user logins through existing IdPs and a streamlined process for provisioning new OIDC IdPs.

IAM now secures communication with OIDC IdPs by trusting the root certificate authority (CA) anchoring the IdP’s SSL/TLS server certificate. This aligns with current industry standards and removes the need for customers to update certificate thumbprints when rotating SSL/TLS certificates. For customers using less common root CAs or a self-signed SSL/TLS server certificate, IAM will continue to rely on the certificate thumbprint set in your IdP configuration. This change automatically applies to new and existing OIDC IdPs, and no action is required from customers.

Additionally, when customers configure a new OIDC IdP using either the IAM console or the API/CLI, they no longer need to supply the IdP's SSL/TLS server certificate thumbprint; IAM retrieves it automatically. This thumbprint is maintained with the IdP configuration, but is not used if the IdP relies on a trusted root CA.
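
For context, an IAM OIDC provider thumbprint is the hex-encoded SHA-1 hash of the IdP's DER-encoded server certificate. The sketch below shows only that computation, on placeholder bytes; fetching the IdP's real certificate chain over TLS is not shown.

```python
import hashlib

def certificate_thumbprint(der_bytes: bytes) -> str:
    """Hex-encoded SHA-1 digest of a DER-encoded X.509 certificate,
    the format IAM expects for an OIDC provider thumbprint."""
    return hashlib.sha1(der_bytes).hexdigest()

# Placeholder bytes stand in for a real DER-encoded certificate.
thumbprint = certificate_thumbprint(b"placeholder-der-certificate-bytes")
```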

Amazon Q introduces support for scanned PDFs and embedded images in PDF documents

Amazon Q Business is a fully managed, generative AI-powered assistant that enhances employee productivity by answering questions, providing summaries, generating content, and completing tasks based on a customer's enterprise data. Across various industries, users want to derive insights from document types such as invoices and tax statements, which are frequently in scanned PDF format. Starting today, Amazon Q Business users can get answers from text content in scanned PDFs and from images embedded in PDF documents.

Prior to today, customers who wanted to derive insights from scanned PDFs and images in PDF documents would first have to do preprocessing to extract the text from these documents using Optical Character Recognition (OCR) followed by ingestion into Amazon Q Business. Starting today, customers can directly feed these documents into Q Business, and search and act on them without the need for preprocessing of any kind. With this launch, customers can simplify the process of building their own generative AI assistants using Q Business APIs or web applications.

This feature is available in all AWS Regions where Amazon Q Business is available. To learn more about the support for scanned PDFs and embedded images, visit the documentation page or refer to the blog Improve productivity when processing scanned PDFs using Amazon Q Business. To explore Amazon Q, visit the Amazon Q website.

Announcing IDE workspace context awareness in Q Developer chat

Today, AWS announces IDE workspace context awareness in Amazon Q Developer chat. Users can now add @workspace to their chat message to Amazon Q Developer to ask questions about the code in the project they currently have open in the integrated development environment (IDE). Developers can ask questions like “@workspace what does this codebase do?” or “how does this @workspace implement authentication and authorization?”.

Previously, Amazon Q Developer chat in the IDE could only answer questions about your currently opened code file. Now, Q Developer automatically ingests and indexes all code files, configurations, and project structure, giving the chat comprehensive context across your entire application within the IDE. The index is stored locally and is created the first time you mention @workspace.

To get started, make sure you are using the most up-to-date version of the Amazon Q Developer IDE extension, open the Q chat in your IDE, and just ask a question that includes @workspace.

AWS Secrets Manager announces open source release of Secrets Manager Agent

AWS Secrets Manager today announces the Secrets Manager Agent, a language-agnostic local HTTP service that you can install and use in your compute environments to read secrets from Secrets Manager and cache them in memory. With this launch, you can simplify and standardize the way you read secrets across compute environments without the need for custom code.

Secrets Manager Agent is an open source release that your applications can use to retrieve secrets from a local HTTP service instead of making a network call to Secrets Manager. With customizable configuration options such as time to live, cache size, maximum connections, and HTTP port, you can adapt the agent based on your application needs. The agent also offers built-in protection against Server Side Request Forgery (SSRF) to ensure security when calling the agent within your compute environment.
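
The agent serves responses in the same JSON shape as the Secrets Manager GetSecretValue API, so application code only needs to call the local endpoint and read the SecretString field. The sketch below illustrates that parsing step; the localhost port and query format shown are assumptions to verify against the agent's documentation, and the HTTP call itself is not performed here.

```python
import json

# Assumed local endpoint shape -- check the Secrets Manager Agent docs for
# the port and path your configuration actually uses.
AGENT_URL = "http://localhost:2773/secretsmanager/get?secretId={secret_id}"

def extract_secret_string(agent_response_body: str) -> str:
    """The agent returns JSON in the shape of the GetSecretValue API
    response; the secret value itself is in the SecretString field."""
    return json.loads(agent_response_body)["SecretString"]

# Example response body (shape only; values are placeholders).
sample = json.dumps({"Name": "my-app/db-password", "SecretString": "s3cr3t"})
secret = extract_secret_string(sample)
```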

The Secrets Manager Agent open source code is available on GitHub and can be used in all AWS Regions where AWS Secrets Manager is available. To learn more about how to use Secrets Manager Agent, visit our documentation.

Amazon ECS now enforces software version consistency for containerized applications

Amazon Elastic Container Service (Amazon ECS) now enforces software version consistency for your containerized applications, helping you ensure all tasks in your application are identical and that all code changes go through the safeguards defined in your deployment pipeline.

Customers deploy long-running applications such as HTTP-based microservices as Amazon ECS services and often use container image tags to configure these services. Although container images are immutable, image tags aren't immutable by default, and there is no standard mechanism to prevent different versions from being unintentionally deployed when you configure a containerized application using image tags. To prevent such inconsistencies, Amazon ECS now resolves container image tags to the image digest (the SHA-256 hash of the image manifest) when you deploy an update to your Amazon ECS service, and enforces that all tasks in the service are identical and launched with the same image digests. This means that even if you use a mutable image tag like 'LATEST' in your task definition and your service scales out after the deployment, new tasks are launched with the correct image: the one resolved when the service was deployed.
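
An image digest is simply a content hash of the image manifest, so a given digest can only ever refer to one image version, unlike a tag. The sketch below shows how such a digest is derived in the OCI style (registries hash the canonical manifest bytes; the manifest content here is a placeholder).

```python
import hashlib

def manifest_digest(manifest_bytes: bytes) -> str:
    """OCI-style image digest: 'sha256:' plus the hex SHA-256 digest of the
    image manifest bytes. Identical manifests always yield the same digest."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

digest = manifest_digest(b"")  # placeholder manifest content
```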
 

Amazon RDS for SQL Server supports minor version 2019 CU27

A new minor version of Microsoft SQL Server is now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports the latest minor version of SQL Server 2019 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances in the Amazon RDS User Guide. The new minor version is SQL Server 2019 CU27, build 15.0.4375.4.

The minor version is available in all AWS regions where Amazon RDS for SQL Server databases are available, including the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Announcing availability of AWS Outposts in Senegal

AWS Outposts can now be shipped and installed at your data center and on-premises locations in Senegal.

AWS Outposts is a family of fully managed solutions that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience. Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, and migration of applications with local system interdependencies. Outposts can also help meet data residency requirements. Outposts is available in a variety of form factors, from 1U and 2U Outposts servers to 42U Outposts racks, and multiple rack deployments.

With the availability of Outposts in Senegal, you can use AWS services to run your workloads and data in country in your on-premises facilities and connect to your nearest AWS Region for management and operations.

To learn more about Outposts, read the product overview and user guide. For the most updated list of countries and territories where Outposts is supported, check out the Outposts rack FAQs page and the Outposts servers FAQs page.

AWS Batch now supports gang-scheduling on Amazon EKS using multi-node parallel jobs

Today, AWS announces the general availability of Multi-Node Parallel (MNP) jobs in AWS Batch on Amazon Elastic Kubernetes Service (Amazon EKS). With AWS Batch MNP jobs you can run tightly-coupled High Performance Computing (HPC) applications like training multi-layer AI/ML models. AWS Batch helps you to launch, configure, and manage nodes in your Amazon EKS cluster without manual intervention.

You can configure MNP jobs using the RegisterJobDefinition API or via the job definitions section of the AWS Batch management console. With MNP jobs you can run AWS Batch on Amazon EKS workloads that span multiple Amazon Elastic Compute Cloud (Amazon EC2) instances. AWS Batch MNP jobs support any IP-based inter-instance communication framework, such as NVIDIA Collective Communications Library (NCCL), Gloo, Message Passing Interface (MPI), or Unified Collective Communication (UCC), as well as machine learning and parallel computing libraries such as PyTorch and Dask. For more information, see the Multi-Node Parallel jobs page in the AWS Batch User Guide.
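
A multi-node job definition declares how many nodes to launch, which node coordinates the job, and which node ranges receive which settings. The sketch below builds a RegisterJobDefinition-style request payload as a plain dictionary; the exact properties for EKS-backed node ranges are elided and should be checked against the AWS Batch API reference before use.

```python
def mnp_job_definition(name: str, num_nodes: int) -> dict:
    """Sketch of a multi-node parallel (MNP) job definition payload.
    Node 0 is the main node; the range '0:' targets all nodes in the job."""
    return {
        "jobDefinitionName": name,
        "type": "multinode",
        "nodeProperties": {
            "numNodes": num_nodes,
            "mainNode": 0,  # rank that coordinates the tightly coupled job
            "nodeRangeProperties": [
                {
                    "targetNodes": "0:",  # applies to every node in the job
                    # Container or EKS settings for this range would go here;
                    # consult the AWS Batch API reference for the exact fields.
                }
            ],
        },
    }

payload = mnp_job_definition("hpc-training", num_nodes=4)
```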

AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. Multi-Node Parallel jobs are available in any AWS Region where AWS Batch is available.
 

Chatting about your AWS resources is now generally available for Amazon Q Developer

Today, AWS announces the general availability of Amazon Q Developer’s capability to chat about your AWS account resources. With this capability, you can use natural language prompts to list resources in your AWS account, get specific resource details, and ask about related resources.

From the Amazon Q Developer chat panel in the AWS Management Console, you can ask Q to “list my S3 buckets” or “show my running EC2 instances in us-east-1” and Amazon Q returns a list of resource details, along with a summary. You can ask what Amazon EC2 instances an Amazon CloudWatch alarm is monitoring or ask “what related resources does my ec2 instance <id> have?” and Amazon Q Developer shows attached Amazon EBS volumes, configured Amazon VPCs, and AWS IAM roles for Amazon EC2 instances automatically.

To learn more, visit Amazon Q Developer or the documentation.

AWS Neuron introduces Flash Attention kernel enabling high performance and large sequence lengths

Today, AWS announces the release of Neuron 2.19, introducing support for flash attention kernel to enable performant LLM model training and inference with large sequence lengths.

AWS Neuron is the SDK for AWS Inferentia and Trainium based instances purpose-built for generative AI. Neuron integrates with popular ML frameworks like PyTorch. It includes a compiler, runtime, tools, and libraries to support high performance training and inference of AI models on Trn1 and Inf2 instances.

This release adds new features and performance improvements for both training and inference and new Ubuntu 22 Neuron DLAMIs for PyTorch 2.1 and PyTorch 1.13. Neuron 2.19 adds support for Flash Attention kernel to enable training for large sequence lengths (greater than or equal to 8K), Llama3 model training, and interleaved pipeline parallelism to enhance training efficiency and resource utilization. For inference, this release adds Flash Attention kernel support to enable LLM inference for context lengths of up to 32k. Neuron 2.19 additionally adds support for Llama3 model inference and adds beta support for continuous batching with Mistral-7B-v0.2 models. Neuron 2.19 introduces new tools: Neuron Node Problem Detector and Recovery plugin in EKS and Neuron Monitor for EKS to enable enhanced Neuron metrics monitoring in Kubernetes.

You can use the AWS Neuron SDK to train and deploy models on Trn1 and Inf2 instances, available in AWS Regions as On-Demand Instances, Reserved Instances, or Spot Instances, or as part of a Savings Plan.

For a list of features in Neuron 2.19, visit Neuron Release Notes. To get started with Neuron, see:
AWS Neuron
Inf2 Instances
Trn1 Instances
 

Amazon ECS now provides enhanced stopped task error messages for easier troubleshooting

Amazon Elastic Container Service (Amazon ECS) now makes it easier to troubleshoot task launch failures with enhanced stopped task error messages. When your Amazon ECS task fails to launch, you see the stopped task error messages in the AWS Management Console or in the ECS DescribeTasks API response. With today's launch, Amazon ECS stopped task error messages are more specific and actionable.

Amazon ECS is designed to help easily launch and scale your applications. When your Amazon ECS task fails to launch, you can use the Amazon ECS stopped task error message to identify the failure reason and resolve the failure. With this launch, stopped task error messages from common task launch failures now include more specific failure reasons and remediation recommendations. Amazon ECS documentation for these failures additionally provides in-depth root cause details and steps to mitigate the failure. If you manage your applications running on Amazon ECS using the AWS Management Console, error messages now include a direct link to the relevant Amazon ECS troubleshooting documentation page such as this Troubleshooting Amazon ECS ResourceInitializationError errors page, making it easier for you to access detailed information and resolve failures faster.

The new experience is now automatically enabled in all AWS Regions. See more details in Amazon ECS stopped task error messages updates.
 

RDS Performance Insights provides support for AWS PrivateLink and IPv6

Amazon RDS (Relational Database Service) Performance Insights now provides support for AWS PrivateLink and Internet Protocol Version 6 (IPv6). Customers can now access Performance Insights API/CLI privately, without going through the public Internet. Additionally, Performance Insights includes support for IPv6 connectivity and for a dual stack configuration (IPv4 and IPv6).

AWS PrivateLink provides private, secure, and scalable connectivity between virtual private clouds (VPCs) and AWS services. Customers can now prevent sensitive data, such as SQL text, from traversing the Internet, helping them maintain compliance with regulations such as HIPAA and PCI DSS. With IPv6 support, scaling an application on AWS is no longer constrained by the number of IPv4 addresses in the VPC. This eliminates the need for complex architectures to work around the limits of public IPv4 addresses.

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.

To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
 

Amazon MWAA now available in nine additional Regions

Amazon Managed Workflows for Apache Airflow (MWAA) is now available in nine new AWS Regions: Asia Pacific (Jakarta), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Middle East (UAE), Europe (Spain), Europe (Zurich), Canada West (Calgary), Israel (Tel Aviv), and Asia Pacific (Osaka).

Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform as you do today to orchestrate your workflows and enjoy improved scalability, availability, and security without the operational burden of having to manage the underlying infrastructure. Learn more about using Amazon MWAA on the product page.

Please visit the AWS region table for more information on AWS regions and services. To learn more about Amazon MWAA visit the Amazon MWAA documentation.

Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

AWS License Manager now integrates with Red Hat Subscription Manager

AWS License Manager now integrates with Red Hat Subscription Manager (RHSM) to provide greater insight into use of Red Hat Enterprise Linux (RHEL) on Amazon EC2. With instance and subscription data from RHSM accessible directly in License Manager, you can better manage cost optimization and compliance of your RHEL usage on AWS.

You can already use License Manager to discover and track RHEL instances on Amazon EC2 launched from AWS provided Amazon Machine Images (AMIs). License Manager can now integrate with RHSM to show information about instances launched from custom RHEL images. The new feature will help customers discover RHEL instances and subscriptions in use on AWS, and identify cases of double payment where an instance has subscriptions purchased from both AWS and Red Hat assigned to it.

This feature is available in all AWS Regions where AWS License Manager is available.

To get started, visit the AWS License Manager console and select the Linux subscriptions tab in the left navigation. First-time users are directed to AWS License Manager settings to select the Regions from which Linux subscription data should be gathered and to set up linking with AWS Organizations for a cross-account view. Once this is complete, you will be asked to provide your RHSM API token to finish the integration. To learn more, see the Linux subscriptions section in the AWS License Manager user guide.

Amazon QuickSight launches a 20x higher limit for SPICE JOIN

Amazon QuickSight is excited to announce an increase in the table size limit for joining SPICE datasets from 1GB to 20GB. Previously, when customers prepared their data and joined tables from various sources, including SPICE, the combined secondary tables had to be less than 1GB. This limitation often forced QuickSight customers to find workarounds in their upstream data pipeline to handle large datasets and build complex data models. With the new 20GB limit for secondary tables, users can now join SPICE tables with 20 times the previous capacity, significantly enhancing data preparation capabilities in QuickSight. This upgrade also enables large cross-source join tasks by leveraging SPICE ingestion. For further details, visit here.

The new SPICE JOIN with the 20GB limit is now available in Amazon QuickSight Enterprise Edition in all QuickSight Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Europe (Frankfurt, Stockholm, Paris, Ireland, London, Zurich, Milan), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo, Jakarta), Africa (Cape Town), and the AWS GovCloud (US-West) Region.

Guardrails for Amazon Bedrock can now detect hallucinations & safeguard apps using any FM

Guardrails for Amazon Bedrock enables customers to implement safeguards based on their application requirements and responsible AI policies. Today, guardrails adds contextual grounding checks and introduces a new ApplyGuardrail API to build trustworthy generative AI applications using any foundation model (FM).

Customers rely on the inherent capabilities of FMs to generate grounded (credible) responses based on their company’s source data. However, FMs can conflate multiple pieces of information, producing incorrect or fabricated content and undermining the reliability of the application. With contextual grounding checks, Guardrails can now detect hallucinations in model responses for RAG (retrieval-augmented generation) and conversational applications. This safeguard helps detect and filter responses that are factually incorrect based on a reference source, or that are irrelevant to the user’s query. Customers can configure confidence thresholds to filter out responses with low grounding or relevance scores.

In addition, to support choice of safeguarding applications using different FMs, Guardrails now supports an ApplyGuardrail API to evaluate user inputs and model responses for any custom and third-party FM, in addition to FMs already supported in Amazon Bedrock. The ApplyGuardrail API now enables centralized safety and governance for all your generative AI applications.
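For illustration, a minimal sketch of building a request for the new API follows. The field names and content qualifiers reflect the ApplyGuardrail API as described at launch, but treat the exact shapes as assumptions and check them against the current bedrock-runtime SDK:

```python
def build_apply_guardrail_request(guardrail_id, version, model_response,
                                  grounding_source, query):
    """Build an ApplyGuardrail request that checks a model response for
    grounding against a reference source and relevance to a user query.

    Field names mirror the bedrock-runtime ApplyGuardrail API; verify
    them against the current SDK before use.
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": "OUTPUT",  # evaluating a model response, not user input
        "content": [
            # Reference text the response should be grounded in
            {"text": {"text": grounding_source, "qualifiers": ["grounding_source"]}},
            # The user's question, for the relevance check
            {"text": {"text": query, "qualifiers": ["query"]}},
            # The model response under evaluation
            {"text": {"text": model_response}},
        ],
    }
```

A request built this way could then be passed to `boto3.client("bedrock-runtime").apply_guardrail(...)`, which returns the guardrail’s assessments and any filtered output, regardless of which FM produced the response.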

Guardrails is the only offering from a major cloud provider that provides safety, privacy, and truthfulness protections in a single solution. Contextual grounding checks and the ApplyGuardrail API are supported in all AWS Regions where Guardrails for Amazon Bedrock is available.

To learn more about Guardrails for Amazon Bedrock, visit the feature page and read the news blog.
 

AWS Backup now supports Amazon Elastic Block Store (EBS) Snapshots Archive in backup policies

Today, AWS Backup announces support for Amazon EBS Snapshots Archive in backup policies, allowing customers to automatically move Amazon EBS snapshots created by AWS Backup to Amazon EBS Snapshots Archive at the AWS Organizations level. Amazon EBS Snapshots Archive is a low-cost, long-term storage tier for rarely accessed snapshots that do not need frequent retrieval. You can now use your organization’s management account to set an Amazon EBS snapshot archival policy across accounts.

To get started, create a new or edit an existing AWS Backup policy from your AWS Organizations’ management account. You can use AWS Backup policies to transition your Amazon EBS Snapshots to Amazon EBS Snapshots Archive and manage their lifecycle, alongside AWS Backup’s other supported resources. Amazon EBS Snapshots are incremental, storing only the changes since the last snapshot and making them cost effective for daily and weekly backups that need to be accessed frequently. You may also have Amazon EBS snapshots that you only need to access every few months, retaining them for long-term regulatory requirements. For these long-term snapshots, you can now transition your Amazon EBS snapshots managed by AWS Backup to Amazon EBS Snapshots Archive tier to store full snapshots at lower costs.

AWS Backup support for Amazon EBS Snapshots Archive in backup policies is available in all commercial and AWS GovCloud (US) Regions, where AWS Backup, AWS Backup policies and EBS Snapshots Archive are available. You can get started by using the AWS Organizations API, or CLI. For more information, visit our documentation.
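As an illustrative sketch, an Organizations backup policy that archives EBS snapshots might look like the following. The JSON key names, including `opt_in_to_archive_for_supported_resources`, are assumptions modeled on the AWS Backup lifecycle settings; consult the backup policy syntax reference before deploying anything like this:

```python
def build_backup_policy(move_to_cold_days=90, delete_after_days=455):
    """Build a sketch of an Organizations backup-policy document that
    transitions EBS snapshots to the archive tier.

    All key names are assumptions; a complete policy also needs
    resource selections and a vault, omitted here for brevity.
    """
    return {
        "plans": {
            "ebs-archive-plan": {
                "rules": {
                    "monthly": {
                        "schedule_expression": {"@@assign": "cron(0 5 1 * ? *)"},
                        "target_backup_vault_name": {"@@assign": "Default"},
                        "lifecycle": {
                            # Move to the archive (cold) tier after N days
                            "move_to_cold_storage_after_days": {"@@assign": str(move_to_cold_days)},
                            "delete_after_days": {"@@assign": str(delete_after_days)},
                            # Opt EBS snapshots into Snapshots Archive
                            "opt_in_to_archive_for_supported_resources": {"@@assign": "true"},
                        },
                    }
                }
            }
        }
    }
```

A policy document like this would be attached from the management account, e.g. via the Organizations `create_policy`/`attach_policy` APIs with the backup policy type.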

Amazon Cognito is now available in Canada West (Calgary) Region

Starting today, customers can use Amazon Cognito in Canada West (Calgary) Region. Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps. The service scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

With the addition of this region, Amazon Cognito is now available in 30 AWS Regions globally. For a list of regions where Amazon Cognito is available, see the AWS Region Table. To learn more about Amazon Cognito, visit the product documentation page. To get started, visit the Amazon Cognito home page.
 

Amazon Q Developer is now available in SageMaker Studio

Amazon SageMaker, a fully managed machine learning service, announces the general availability of Amazon Q Developer in SageMaker Studio. SageMaker Studio customers now get generative AI assistance powered by Q Developer right within their JupyterLab Integrated Development Environment (IDE). With Q Developer, data scientists and ML engineers can access expert guidance on SageMaker features, code generation, and troubleshooting. This improves productivity by eliminating tedious online searches and documentation review, leaving more time to deliver differentiated business value.

Data scientists and ML engineers using JupyterLab in SageMaker Studio can kick off their model development lifecycle with Amazon Q Developer. They can use the chat capability to discover and learn how to apply SageMaker features to their use case without sifting through extensive documentation. Users can also generate code tailored to their needs to jump-start development, get in-line code suggestions, and use conversational assistance to edit, explain, and document their code in JupyterLab. When they run into errors, Q Developer provides step-by-step troubleshooting guidance. This integration empowers data scientists and ML engineers to accelerate their workflows, enhance productivity, and deliver ML models more efficiently, streamlining the machine learning development process.

This feature is available in all commercial AWS regions where SageMaker Studio is available.

For additional details, see our product page and documentation.

Amazon Cognito is now available in Asia Pacific (Hong Kong) Region

Starting today, customers can use Amazon Cognito in Asia Pacific (Hong Kong) Region. Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps. The service scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

With the addition of this region, Amazon Cognito is now available in 29 AWS Regions globally. For a list of regions where Amazon Cognito is available, see the AWS Region Table. To learn more about Amazon Cognito, visit the product documentation page. To get started, visit the Amazon Cognito home page.

Customize Amazon Q Developer code recommendations, and receive chat responses in the IDE (Preview)

Today, AWS announces the general availability of customized Amazon Q Developer inline code recommendations. You can now securely connect Amazon Q Developer to your private code bases and generate more precise suggestions by including your organization’s internal APIs, libraries, classes, methods, and best practices. In preview, you can also use Amazon Q Developer chat in the IDE to ask questions about how your internal code base is structured, where and how certain functions or libraries are used, or what specific functions, methods, or APIs do. With these capabilities, Amazon Q Developer can save builders hours typically spent examining previously written code or internal documentation to understand how to use internal APIs, libraries, and more. 

 

To get started, you first need to securely connect your organization’s private repositories to Amazon Q Developer in the AWS Management Console. Amazon Q Developer administrators can select which repositories to use to customize recommendations, applying strict access control. Your administrators can decide which customization to activate, and they can manage access to a private customization from the console so only specific developers have access. Each customization is isolated from other customers, and none of the customizations built with these new capabilities will be used to train the foundation models underlying Amazon Q Developer.

 

Customized code recommendations and chat in the IDE are available as part of the Amazon Q Developer Pro subscription. To learn more about pricing, visit Amazon Q Developer Pricing. To learn more about these capabilities, see Amazon Q Developer or read the announcement blog post.

Agents for Amazon Bedrock now support code interpretation (Preview)

Amazon Web Services, Inc. (AWS) today announced a new code interpretation capability on Agents for Amazon Bedrock. Code interpretation allows agents to dynamically generate and execute code snippets within a secure sandboxed environment, extending the capabilities of Agents for complex use cases such as data analysis, data visualization, and optimization problems.

This new capability allows developers to move beyond the predefined capabilities of the large language model (LLM) and tackle more complex, data-driven use cases. Agents can now generate and execute code, process files with diverse data types and formatting, and even generate graphs to enhance the user experience. Also, the iterative code execution capabilities allow Agents to work through challenging data science problems, giving them the ability to orchestrate increasingly complex tasks.
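As a sketch, code interpretation is enabled on an agent through a built-in action group. The request below follows the bedrock-agent CreateAgentActionGroup API, with the `AMAZON.CodeInterpreter` signature taken from the launch materials; treat both as assumptions to verify against the current API reference:

```python
def build_code_interpreter_action_group(agent_id, agent_version="DRAFT"):
    """Build a CreateAgentActionGroup request that turns on the
    built-in code interpreter for an agent.

    Field names and the AMAZON.CodeInterpreter signature are
    assumptions; check the bedrock-agent API reference.
    """
    return {
        "agentId": agent_id,
        "agentVersion": agent_version,
        "actionGroupName": "code-interpreter",
        # Built-in signature that delegates code generation/execution
        # to the managed sandboxed environment
        "parentActionGroupSignature": "AMAZON.CodeInterpreter",
        "actionGroupState": "ENABLED",
    }
```

The request would then be passed to `boto3.client("bedrock-agent").create_agent_action_group(...)` before preparing the agent.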

Code interpretation is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) AWS Regions.

Learn more here.

Agents for Amazon Bedrock now retain memory (Preview)

Amazon Web Services, Inc. (AWS) today announced Agents for Amazon Bedrock can retain memory across multiple interactions over time, allowing developers to build generative AI applications that seamlessly adapt to user context and preferences, enhancing personalized experiences and automating complex business processes more efficiently.

By retaining memory, AI assistants remember historical knowledge and learn from user interactions over time. For example, if a user is booking a flight, the application can remember the user's travel preferences for future bookings. This capability is crucial for complex multi-step tasks such as insurance claims processing, where continuity and context retention significantly improve the user experience.
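In practice, memory is keyed to an identifier that persists across sessions. As a minimal sketch, an InvokeAgent request that reuses a memory ID might be built like this; the field names follow the bedrock-agent-runtime API but should be confirmed against the current SDK:

```python
def build_invoke_agent_request(agent_id, alias_id, session_id, memory_id, text):
    """Build an InvokeAgent request that attaches a persistent memory ID.

    Reusing the same memoryId across different sessionIds lets the agent
    recall prior interactions (e.g. travel preferences). Field names are
    assumptions; verify against the bedrock-agent-runtime SDK.
    """
    return {
        "agentId": agent_id,
        "agentAliasId": alias_id,
        "sessionId": session_id,   # scoped to one conversation
        "memoryId": memory_id,     # persists across conversations
        "inputText": text,
    }
```

A request built this way would be passed to `boto3.client("bedrock-agent-runtime").invoke_agent(...)`; memory must also be enabled in the agent’s configuration.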

Memory retention is available in all AWS Regions where Agents for Amazon Bedrock supports the Claude 3 Sonnet and Haiku models.

Learn more about memory retention on Agents for Amazon Bedrock here.

Knowledge Bases for Amazon Bedrock now supports advanced RAG capabilities

Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver relevant and accurate responses. Chunking enables processing of long documents by breaking them into smaller chunks, enabling accurate knowledge retrieval from a user’s question. Today, we are launching advanced chunking options. The first is custom chunking: customers can write their own chunking code as a Lambda function, and even use off-the-shelf components from frameworks like LangChain and LlamaIndex. We are also launching built-in chunking options such as semantic and hierarchical chunking.
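A custom chunking Lambda ultimately applies splitting logic of your choice to each document. As a minimal illustration of such logic (the Lambda event and response contract is not shown here; see the Knowledge Bases documentation for it), a fixed-size chunker with overlap might look like:

```python
def chunk_text(text, max_words=50, overlap=10):
    """Split text into overlapping word-window chunks.

    Overlap preserves context at chunk boundaries so a retrieved chunk
    is less likely to cut a sentence's meaning in half. This is a
    simple stand-in for the logic a custom-chunking Lambda would run.
    """
    words = text.split()
    if not words:
        return []
    chunks, start = [], 0
    step = max(1, max_words - overlap)  # advance less than a full window
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        start += step
    return chunks
```

Frameworks like LangChain and LlamaIndex provide more sophisticated splitters (recursive, semantic) that can be dropped into the same Lambda.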

Additionally, customers can enable smart parsing to extract information from more complex data such as tables. This capability uses Amazon Bedrock foundation models to parse tabular content in file formats such as PDF to improve retrieval accuracy. You can customize parsing prompts to extract data in the format of your choice. Knowledge Bases now also supports query reformulation. This capability breaks down queries into simpler sub-queries, retrieves relevant information for each, and combines the results into a final comprehensive answer. With these new accuracy improvements for chunking, parsing, and advanced query handling, Knowledge Bases empowers users to build highly accurate and relevant knowledge resources suited for enterprise use cases.

These capabilities are supported in all AWS Regions where Knowledge Bases is available. To learn more about these features and how to get started, refer to the Knowledge Bases for Amazon Bedrock documentation and visit the Amazon Bedrock console.

Knowledge Bases for Amazon Bedrock now supports additional data sources (preview)

Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver relevant and accurate responses. Today, we are launching a new feature that allows customers to securely ingest data from various sources into their knowledge bases. First, Knowledge Bases now supports the web data source, allowing you to index public web pages. Second, Knowledge Bases adds three data connectors: Atlassian Confluence, Microsoft SharePoint, and Salesforce. You can connect directly to these data sources to build your RAG applications. These new capabilities reduce the time and cost associated with data movement, while ensuring that your knowledge bases stay up to date with the latest changes in the connected data sources.

Customers can set up these new data sources through the AWS Management Console for Amazon Bedrock or the CreateDataSource API. To get started, visit the Knowledge Bases documentation.
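For illustration, a CreateDataSource request for the new web data source might be built as follows. The nested configuration shape follows the bedrock-agent API at launch; treat the exact field names as assumptions to verify against the current SDK:

```python
def build_web_data_source(kb_id, seed_urls):
    """Build a bedrock-agent CreateDataSource request for a web crawler
    data source that indexes public web pages.

    The nested configuration shape is an assumption; check the
    CreateDataSource API reference.
    """
    return {
        "knowledgeBaseId": kb_id,
        "name": "public-docs-crawler",  # hypothetical data source name
        "dataSourceConfiguration": {
            "type": "WEB",
            "webConfiguration": {
                "sourceConfiguration": {
                    "urlConfiguration": {
                        # Pages the crawler starts from
                        "seedUrls": [{"url": u} for u in seed_urls]
                    }
                }
            },
        },
    }
```

The request would then be passed to `boto3.client("bedrock-agent").create_data_source(...)`, followed by a sync to ingest the crawled pages.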

This capability is supported in all AWS Regions where Knowledge Bases is available. To learn more about these features and how to get started, refer to the Knowledge Bases for Amazon Bedrock documentation and visit the Amazon Bedrock console.

Amazon Bedrock Prompt Management and Prompt Flows now available in preview

Today, we are announcing the preview launch of Amazon Bedrock Prompt Management and Prompt Flows. Amazon Bedrock Prompt Management simplifies the creation, evaluation, versioning, and sharing of prompts to help developers and prompt engineers get the best responses from foundation models for their use cases. Developers can use the Prompt Builder to experiment with multiple FMs, model configurations, and prompt messages. They can test and compare prompts in place using the Prompt Builder, without any deployment. To share a prompt for use in downstream applications, they simply create a version and make an API call to retrieve it. In addition, Bedrock Prompt Flows accelerates the creation, testing, and deployment of workflows through an intuitive visual builder. Developers can drag and drop different components, such as prompts, Knowledge Bases, and Lambda functions, to automate a workflow.

Announcing AWS App Studio preview

AWS App Studio, a generative artificial intelligence (AI)-powered service that uses natural language to create enterprise-grade applications, is now available in preview. App Studio opens up application development to technical professionals without software development skills (such as IT project managers, data engineers, and enterprise architects), empowering them to quickly build business applications, eliminating the need for operational expertise. This allows users to focus on building applications that help solve business problems and increase productivity in their roles, while removing the heavy lifting of building and running applications.

App Studio is the fastest and easiest way for technical professionals to build enterprise-grade applications that previously only professional developers could build. App Studio’s generative AI-powered assistant accelerates the application creation process. To get started, builders write a basic prompt describing the application they want; App Studio generates an outline to verify the user’s intent and then builds an application with a multi-page UI, a data model, and business logic. Builders can then ask clarifying questions, and App Studio gives detailed answers on how to make changes using the point-and-click interface. Users can also easily connect their applications to internal data sources using built-in connectors for AWS services (such as Amazon Aurora, Amazon DynamoDB, and Amazon S3) and Salesforce, along with hundreds of third-party services (such as HubSpot, Twilio, and Zendesk) via an API connector. With App Studio, users do not have to think about the underlying code at all: App Studio handles all the deployment, operations, and maintenance.

It is free to build with App Studio, and customers only pay for the time employees spend using the published applications, saving up to 80% compared to other low-code offerings.

App Studio is now available in preview in the US West (Oregon) AWS Region.

To learn more and get started, visit AWS App Studio, review the documentation, and read the announcement blog post.

AWS announces the general availability of vector search for Amazon MemoryDB

Vector search for Amazon MemoryDB, an in-memory database with multi-AZ durability, is now generally available. This capability helps you to store, index, retrieve, and search vectors. Amazon MemoryDB delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. Vector search for MemoryDB supports storing millions of vectors with single-digit millisecond query and update latencies at the highest levels of throughput with >99% recall. You can generate vector embeddings using AI/ML services, such as Amazon Bedrock and Amazon SageMaker, and store them within MemoryDB. 

With vector search for MemoryDB, you can develop real-time machine learning (ML) and generative AI applications that require the highest throughput at the highest recall rates with the lowest latency using the MemoryDB API or orchestration frameworks such as LangChain. For example, a bank can use vector search for MemoryDB to detect anomalies, such as fraudulent transactions during periods of high transactional volumes, with minimal false positives. 

Vector search for MemoryDB is available in all AWS Regions that MemoryDB is available—at no additional cost. 

To get started, create a new MemoryDB cluster using MemoryDB version 7.1 and enable vector search through the AWS Management Console or AWS Command Line Interface (CLI). To learn more, check out the vector search for MemoryDB documentation.
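Vector search in MemoryDB is driven through `FT.*` commands (as in RediSearch) over a standard Redis client. As a sketch, the helper below assembles an `FT.CREATE` command for an HNSW index over hash keys; the argument order and option names are assumptions to confirm against the MemoryDB vector search documentation:

```python
def build_vector_index_cmd(index="idx:docs", dim=1024):
    """Assemble an FT.CREATE command defining an HNSW vector index over
    hashes with the key prefix 'doc:'.

    Command syntax follows RediSearch-style FT.CREATE; verify the exact
    options against the MemoryDB vector search documentation.
    """
    return [
        "FT.CREATE", index, "ON", "HASH", "PREFIX", "1", "doc:",
        "SCHEMA", "embedding", "VECTOR", "HNSW", "6",
        "TYPE", "FLOAT32", "DIM", str(dim), "DISTANCE_METRIC", "COSINE",
    ]
```

With a connected client, this would be issued as `client.execute_command(*build_vector_index_cmd())`, after which vectors stored in `doc:*` hashes can be queried with `FT.SEARCH` KNN queries.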

Fine-tuning for Anthropic's Claude 3 Haiku in Amazon Bedrock (Preview)

Fine-tuning for Anthropic's Claude 3 Haiku model in Amazon Bedrock is now available in preview. Amazon Bedrock is the only fully managed service that provides you with the ability to fine tune Claude models. Claude 3 Haiku is Anthropic’s most compact model, and is one of the most affordable and fastest options on the market for its intelligence category according to Anthropic. By providing your own task-specific training dataset, you can fine tune and customize Claude 3 Haiku to boost model accuracy, quality, and consistency to further tailor generative AI for your business.

Fine-tuning allows Claude 3 Haiku to excel in areas crucial to your business compared to more general models by encoding company and domain knowledge. Within your secure AWS environment, use Amazon Bedrock to customize Claude 3 Haiku with your own data to build applications specific to your domain, organization, and use case. By fine-tuning Haiku and adapting its knowledge to your exact business requirements, you can create unique user experiences that reflect your company’s proprietary information, brand, products, and more. You can also enhance performance for domain-specific actions such as classification, interactions with custom APIs, or industry-specific data interpretation. Amazon Bedrock makes a separate copy of the base foundation model that is accessible only by you and trains this private copy of the model.
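As an illustrative sketch, a fine-tuning job is submitted through the bedrock CreateModelCustomizationJob API. The base-model identifier, bucket paths, and hyperparameter keys below are assumptions for illustration; verify them against the fine-tuning documentation:

```python
def build_haiku_finetune_job(role_arn, bucket):
    """Build a CreateModelCustomizationJob request for fine-tuning
    Claude 3 Haiku on your own task-specific dataset.

    The base-model identifier, dataset paths, and hyperparameter keys
    are assumptions; check the Bedrock fine-tuning documentation.
    """
    return {
        "jobName": "haiku-finetune-demo",
        "customModelName": "my-haiku-custom",
        "roleArn": role_arn,  # IAM role with access to the S3 data
        "baseModelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0",
        # Task-specific training data in JSON Lines format
        "trainingDataConfig": {"s3Uri": f"s3://{bucket}/train.jsonl"},
        "outputDataConfig": {"s3Uri": f"s3://{bucket}/output/"},
        "hyperParameters": {"epochCount": "2", "learningRateMultiplier": "1.0"},
    }
```

The request would be passed to `boto3.client("bedrock").create_model_customization_job(...)`; the resulting private model copy is then invoked like any other Bedrock model via provisioned throughput.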

Fine-tuning for Anthropic's Claude 3 Haiku in Amazon Bedrock is now available in preview in the US West (Oregon) AWS Region. To learn more, read the launch blog and documentation. To request to be considered for access to the preview of Anthropic's Claude 3 Haiku fine-tuning in Amazon Bedrock, contact your AWS account team or submit a support ticket via the AWS Management Console. When creating the support ticket, select Bedrock as the Service and Models as the Category.

Announcing the general availability of Amazon Q Apps

Today, AWS announces the general availability of Amazon Q Apps, an Amazon Q Business capability that has been in public preview since April 2024.
 
Amazon Q Apps empowers organizational users to quickly turn their ideas into apps, all in a single step from their conversation with Amazon Q Business or by describing the app that they want to build in their own words. With Amazon Q Apps, users can effortlessly build, share, and customize apps on enterprise data to streamline tasks and boost individual and team productivity. Users can also publish apps to the admin-managed library and share them with their coworkers. Amazon Q Apps inherit user permissions, access controls, and enterprise guardrails from Amazon Q Business for secure sharing and adherence to data governance policies. 

Amazon Q Apps enhances the business user experience and collaboration with new and improved capabilities. Customers can now bring the power of Amazon Q Apps into their tools of choice and application environments through APIs for creating Amazon Q Apps and consuming their outputs. App creators can now review the original app creation prompt to refine and improve new app versions without starting from scratch, and pick data sources to improve output quality.

Amazon Q Business and Amazon Q Apps are available in the US East (N. Virginia) and US West (Oregon) AWS Regions.
 
For more information, check out Amazon Q Business and read the AWS News Blog.

Announcing the next generation of Amazon FSx for NetApp ONTAP file systems

Today, we’re announcing next-generation Amazon FSx for NetApp ONTAP file systems that provide higher scalability and enhanced flexibility compared to previous-generation file systems. Previous-generation file systems consisted of a single highly-available (HA) pair of file servers with up to 4 GB/s of throughput. Next-generation file systems can be created or expanded with up to 12 HA pairs, allowing you to scale up to 72 GB/s of total throughput (up to 6 GB/s per pair), giving you the flexibility to scale performance and storage to meet the needs of your most demanding workloads.

With next-generation FSx for ONTAP file systems, a single HA pair can now deliver up to 6 GB/s of throughput, giving workloads running on a single HA pair even more room to grow. However, customers with the most compute-intensive workloads need the higher throughput provided by a file system with multiple HA pairs. Before today, these customers could create a file system with multiple HA pairs but couldn’t add HA pairs or adjust its throughput later. Now, next-generation file systems allow you to add HA pairs and adjust their throughput capacity, giving you additional flexibility to optimize your workload’s performance over time.

Next-generation file systems are available today in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney). You can create next-generation Multi-AZ file systems with a single HA pair, and Single-AZ file systems with up to 12 HA pairs. To learn more, visit the FSx for ONTAP user guide.
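As a sketch, a multi-HA-pair Single-AZ file system might be requested through the fsx CreateFileSystem API as below. The parameter names (`HAPairs`, `ThroughputCapacityPerHAPair`, the `SINGLE_AZ_2` deployment type) and sizing are assumptions to check against the FSx API reference:

```python
def build_ontap_file_system(subnet_ids, ha_pairs=4, throughput_per_pair=6144):
    """Build an fsx CreateFileSystem request for a Single-AZ FSx for
    ONTAP file system with multiple HA pairs.

    Parameter names and the deployment type are assumptions; throughput
    is expressed in MB/s (6144 MB/s = 6 GB/s per HA pair).
    """
    return {
        "FileSystemType": "ONTAP",
        "StorageCapacity": 1024 * ha_pairs,  # GiB, illustrative sizing
        "SubnetIds": subnet_ids,
        "OntapConfiguration": {
            "DeploymentType": "SINGLE_AZ_2",  # multi-HA-pair Single-AZ type
            "HAPairs": ha_pairs,
            "ThroughputCapacityPerHAPair": throughput_per_pair,
        },
    }
```

The request would be passed to `boto3.client("fsx").create_file_system(...)`; total throughput scales as pairs × per-pair throughput (e.g. 12 × 6 GB/s = 72 GB/s).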

Amazon FSx for NetApp ONTAP now supports NVMe-over-TCP for simpler, lower-latency shared block storage

Amazon FSx for NetApp ONTAP, a service that provides fully managed shared storage built on NetApp’s popular ONTAP file system, today announced support for the NVMe-over-TCP (NVMe/TCP) block storage protocol. Using NVMe/TCP, you can accelerate your block storage workloads such as databases and Virtual Desktop Infrastructure (VDI) with lower latency compared to traditional iSCSI block storage, and simplify multi-path IO (MPIO) configuration relative to iSCSI.

FSx for ONTAP provides you with multi-protocol access to fully managed shared storage, including the iSCSI protocol for deploying applications such as databases and VDI that rely on shared block storage. NVMe/TCP is an implementation of the NVMe protocol that transports data over TCP using traditional Ethernet as a fabric. With this launch, you have the option of using NVMe/TCP to provide shared block storage for these applications in order to take advantage of NVMe/TCP’s lower latency and simplified setup.

NVMe/TCP is available on all second-generation Amazon FSx for ONTAP file systems in all AWS Regions where they’re available. To learn more, visit the FSx for ONTAP user guide.

AWS Glue Data catalog now supports generating statistics for Apache Iceberg tables

AWS Glue Data Catalog now supports generating column-level aggregated statistics for Apache Iceberg tables. These statistics are now integrated with cost-based optimizer (CBO) from Amazon Redshift Spectrum, resulting in improved query performance and potential cost savings.

Apache Iceberg supports statistics such as null counts and minimum and maximum values, but lacks support for generating aggregation statistics such as the number of distinct values (NDV). With this launch, you now have an integrated end-to-end experience in which NDVs are collected on the columns of an Apache Iceberg table and stored in Apache Iceberg Puffin files. Amazon Redshift uses these aggregation statistics to optimize queries by applying the most restrictive filters as early as possible in query processing, limiting memory usage and the number of records read to produce the query results.

To get started, you can generate statistics for an Apache Iceberg table using the AWS Glue console or AWS Glue APIs. With each run, the Glue Data Catalog computes statistics for the current Iceberg table snapshot and stores them in an Iceberg Puffin file and the Glue Data Catalog. As you run queries from Amazon Redshift Spectrum, you automatically get the query performance improvements through the built-in integration with Apache Iceberg.
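Programmatically, a statistics run maps to the glue StartColumnStatisticsTaskRun API. As a sketch, the request might be built like this; the optional parameters shown are assumptions to confirm against the Glue API reference:

```python
def build_stats_task_request(database, table, role_arn, columns=None):
    """Build a glue StartColumnStatisticsTaskRun request for an
    Iceberg table registered in the Glue Data Catalog.

    The Role grants Glue access to the table data; ColumnNameList is
    shown as an assumed optional parameter to restrict which columns
    statistics are computed for.
    """
    req = {
        "DatabaseName": database,
        "TableName": table,
        "Role": role_arn,
    }
    if columns:
        req["ColumnNameList"] = columns
    return req
```

The request would be passed to `boto3.client("glue").start_column_statistics_task_run(...)`; each run computes statistics against the table’s current snapshot.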

Amazon FSx for NetApp ONTAP now allows you to read data during backup restores

Amazon FSx for NetApp ONTAP, a fully managed shared storage service built on NetApp’s popular ONTAP file system, now allows you to read data from a volume while it is being restored from a backup. The feature “read-access during backup restores” allows you to improve Recovery Time Objectives by up to 17x for read-only workloads that rely on backup restores for business continuity, such as media streaming and compliance verification.

You can restore an FSx for ONTAP backup into a new volume at any time. Before today, when you restored a backup, Amazon FSx provided read-write access to data once the backup was fully downloaded onto the volume. The restore process typically took minutes to hours—depending on the backup size. Starting today, Amazon FSx enables read access to data within minutes of initiating a restore, enabling you to browse through your backup and retrieve critical data to resume operations faster in the event of accidental data modification or deletion. The volume becomes writable automatically once data has been fully restored. Now, you can reduce time to recover media streaming applications when the primary volume becomes unavailable by serving reads from a volume being restored, and compliance teams can initiate audits sooner by accessing data without waiting for the restore to complete.

This feature is available on all new and existing FSx for ONTAP second-generation file systems in all AWS Regions where FSx for ONTAP second-generation file systems are available. See the FSx for ONTAP product documentation for more details.

AWS Glue Studio now offers a no code data preparation authoring experience

Today, AWS Glue Studio Visual ETL announces the general availability of data preparation authoring, a new no-code data preparation experience for business users and data analysts. It offers a spreadsheet-style UI that runs data integration jobs at scale on AWS Glue for Spark, making it easier for data analysts and data scientists to clean and transform data to prepare it for analytics and machine learning (ML). Within this new experience, you can choose from hundreds of prebuilt transformations to automate data preparation tasks, all without writing any code.

Business analysts can now collaborate with data engineers to build data integration jobs. Data engineers can use the Glue Studio visual flow-based view to define connections to the data and set the ordering of the data flow, while business analysts use the data preparation experience to define the data transformations and output. Additionally, DataBrew customers can import their existing data cleansing and preparation “recipes” into the new AWS Glue data preparation experience, continue to author them directly in AWS Glue Studio, and scale up recipes to process petabytes of data at the lower price point of AWS Glue jobs.

The feature is available in all commercial AWS Regions where AWS Glue DataBrew is available. To learn more, refer to the documentation and read the blog post.

Amazon SageMaker introduces a new generative AI inference optimization capability

Today, Amazon SageMaker announced the general availability of a new inference capability that delivers up to ~2x higher throughput while reducing costs by up to ~50% for generative AI models such as Llama 3, Mistral, and Mixtral. For example, with a Llama 3 70B model, you can achieve up to ~2,400 tokens/sec on an ml.p5.48xlarge instance, versus ~1,200 tokens/sec previously without optimization.

With this new capability, customers can choose from a menu of the latest model optimization techniques, such as speculative decoding, quantization, and compilation, and apply them to their generative AI models. SageMaker will do the heavy lifting of provisioning required hardware to run the optimization recipe, along with deep learning frameworks and libraries. Customers get out-of-the-box support for a speculative decoding solution from SageMaker that has been tested for performance at scale for various popular open source models, or they can bring their own speculative decoding solution. For quantization, SageMaker ensures compatibility and support for precision types on different model architectures. For compilation, the runtime infrastructure of SageMaker ensures efficient loading and caching of optimized models to reduce auto-scaling time.

Customers can leverage this new capability from AWS SDK for Python (Boto3), SageMaker Python SDK, or the AWS Command Line Interface (AWS CLI). This capability is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (Sao Paulo) Regions.

Amazon RDS Data API for Aurora PostgreSQL is now available in 10 additional AWS regions

RDS Data API for Aurora Serverless v2 and Aurora provisioned PostgreSQL-Compatible database instances is now available in the Asia Pacific (Sydney), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Europe (Ireland), Europe (London), Europe (Paris), US West (N. California), US East (Ohio), and Canada (Central) Regions. RDS Data API allows you to access these Aurora clusters via a secure HTTP endpoint and run SQL statements without database drivers and without managing connections.

Data API eliminates the use of drivers and improves application scalability by automatically pooling and sharing database connections (connection pooling) rather than requiring customers to manage connections. Customers can call Data API via AWS SDK and CLI. Data API also enables access to Aurora databases via AWS AppSync GraphQL APIs. API commands supported in the Data API for Aurora Serverless v2 and Aurora provisioned are backwards compatible with Data API for Aurora Serverless v1 for easy customer application migrations.

Data API supports Aurora PostgreSQL 15.3, 14.8, 13.11, and higher versions. Customers currently using Data API for Aurora Serverless v1 are encouraged to migrate to Aurora Serverless v2 to take advantage of the new Data API. To learn more, read the documentation.
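As a sketch of how an application might call the Data API with the AWS SDK for Python (Boto3), the following assembles ExecuteStatement parameters and shows the call; the ARNs and region are placeholders, not real resources:

```python
# Build the keyword arguments for the rds-data ExecuteStatement call.
def build_execute_statement_params(cluster_arn, secret_arn, sql, database="postgres"):
    return {
        "resourceArn": cluster_arn,  # Aurora cluster ARN (placeholder below)
        "secretArn": secret_arn,     # Secrets Manager secret with DB credentials
        "database": database,
        "sql": sql,
    }

def demo():
    # Not invoked here; requires AWS credentials and network access.
    import boto3
    client = boto3.client("rds-data", region_name="ca-central-1")
    params = build_execute_statement_params(
        "arn:aws:rds:ca-central-1:123456789012:cluster:my-cluster",
        "arn:aws:secretsmanager:ca-central-1:123456789012:secret:my-secret",
        "SELECT now()",
    )
    return client.execute_statement(**params)["records"]
```

Because the call goes over HTTPS, no driver installation or connection pool management is needed in the application.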

Amazon MWAA now supports Apache Airflow version 2.9

You can now create Apache Airflow version 2.9 environments on Amazon Managed Workflows for Apache Airflow (MWAA). Apache Airflow 2.9 is the latest minor release of the popular open-source tool that helps customers author, schedule, and monitor workflows.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. Apache Airflow 2.9 introduces several notable enhancements, such as new API endpoints for improved dataset management, custom names in dynamic task mapping for better readability, and advanced scheduling options including conditional expressions for dataset dependencies and the combination of dataset and time-based schedules.

Amazon EC2 R8g instances powered by AWS Graviton4 now generally available

AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) R8g instances. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge. 

Amazon OpenSearch Service announces Natural Language Query Generation for log analysis

Amazon OpenSearch Service has added support for AI-powered Natural Language Query Generation in the OpenSearch Dashboards Log Explorer. With Natural Language Query Generation, you can accelerate analysis by asking log exploration questions in plain English, which are automatically translated into the relevant Piped Processing Language (PPL) queries and executed to fetch the requested data.

With this new natural language support, you can get started with log analysis quickly, without first having to be proficient in PPL. Further, it opens up log analysis to a wider set of team members, who can explore their log data simply by asking questions like "show me the count of 5xx errors for each of the pages on my website" or "show me the throughput by hosts". It also helps advanced users construct complex queries by allowing iterative refinement of both the natural language questions and the generated PPL.

This feature is available at no cost for customers running managed clusters with OpenSearch 2.13 or above in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), AWS GovCloud (US-East), and AWS GovCloud (US-West).

AWS Partner Central now supports multi-factor authentication

AWS Partner Central now supports multi-factor authentication (MFA) capabilities at login. Users will be prompted to enter a one-time passcode, sent to their registered email address, along with their login credentials to confirm their identity.

MFA adds an additional layer of protection, reducing the risk of unauthorized access to AWS Partner Central. Additionally, it ensures only active users are able to access AWS Partner Central, as the registered email address must be accessible. AWS Partners will be automatically enrolled in MFA, but alliance leads and cloud admins have the ability to disable the feature if desired for all AWS Partner Central users.

To learn more, visit the AWS Partner Central Getting Started Guide.

Simplified service terms for AWS Marketplace sellers

AWS Partners can now register as sellers in AWS Marketplace with a simplified one-click experience. We have removed the need for AWS Partners to review and accept a separate set of terms to sell in AWS Marketplace by incorporating the AWS Marketplace terms into the AWS Service Terms. Instead, partners simply sign in to their AWS account and click to register as an AWS Marketplace seller in the AWS Marketplace Management Portal.

AWS Partners such as independent software vendors (ISVs), data providers, and consulting partners can sell their software, services, and data in AWS Marketplace to AWS customers. AWS Marketplace, jointly with AWS Partner Network (APN), helps ISVs and consulting partners to build, market, and sell their AWS offerings by providing valuable business, technical, and marketing support. AWS Marketplace is available to customers globally. Partners can discover the benefits of becoming an AWS Marketplace seller and get started on their AWS Marketplace journey.

To learn more, review the new simplified Terms for selling in AWS Marketplace.

Amazon EventBridge Schema Registry now supports AWS PrivateLink VPC endpoints

Amazon EventBridge Schema Registry now supports AWS PrivateLink, allowing you to access the registry from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet. With today’s launch, you can leverage EventBridge Schema Registry features from a private subnet without the need to deploy an internet gateway, configure firewall rules, or set up proxy servers.

Amazon EventBridge lets you use events to connect application components, making it easier to build scalable event-driven applications. EventBridge Schema Registry allows you to centrally store schemas, representing the structure of your events, so other teams can discover and consume them. You can add schemas to the registry yourself or use the Schema Discovery feature to capture the schemas of events sent to an EventBridge Event Bus. Once schemas are in your registry, you can download code bindings for those schemas in Java, Python, TypeScript, and Golang and use them in your preferred Integrated Development Environment (IDE) to take advantage of IDE features such as code validation and auto-completion.
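As an illustration, the interface endpoint could be created with the EC2 CreateVpcEndpoint API. The service-name pattern `com.amazonaws.<region>.schemas` and all resource IDs below are assumptions to verify against your environment (for example with `describe-vpc-endpoint-services`):

```python
# Build the assumed PrivateLink service name for EventBridge Schema Registry.
def schema_registry_service_name(region):
    return f"com.amazonaws.{region}.schemas"  # assumed pattern; verify per region

def demo():
    # Not invoked here; requires AWS credentials and real VPC resources.
    import boto3
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",            # placeholder
        ServiceName=schema_registry_service_name("us-east-1"),
        VpcEndpointType="Interface",
        SubnetIds=["subnet-0123456789abcdef0"],   # placeholder private subnet
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
    return resp["VpcEndpoint"]["VpcEndpointId"]
```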

Amazon FSx for OpenZFS introduces a highly available Single-AZ deployment option

Amazon FSx for OpenZFS now supports highly available (HA) Single-AZ deployments, offering high availability and consistent sub-millisecond latencies for use cases like data analytics, machine learning, and semiconductor chip design that can benefit from high availability but do not require multi-zone resiliency. Single-AZ HA file systems provide a lower-latency and lower-cost storage option than Multi-AZ file systems for these use cases, while offering all the same data management capabilities and features.

Before today, FSx for OpenZFS offered Single-AZ non-HA file systems, which provide sub-millisecond read and write latencies, and Multi-AZ file systems, which provide high availability and durability by replicating data synchronously across AZs. With Single-AZ HA file systems, customers can now achieve both high availability and consistent sub-millisecond latencies at a lower cost relative to Multi-AZ file systems for workloads such as data analytics, machine learning, and semiconductor chip design that do not need multi-zone resiliency because they're operating on a secondary copy of the data or data that can be regenerated.

You can create Single-AZ HA file systems in the following AWS regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Stockholm), Middle East (Bahrain). To learn more about Single-AZ HA file systems, please visit FSx for OpenZFS documentation.
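A minimal sketch of the CreateFileSystem request for a Single-AZ HA file system follows; the deployment-type string `SINGLE_AZ_HA_2` and the sizing numbers are assumptions to confirm against the FSx API reference:

```python
# Assemble CreateFileSystem parameters for an FSx for OpenZFS HA file system.
def build_openzfs_ha_request(subnet_id, storage_gib=1024, throughput_mbps=160):
    return {
        "FileSystemType": "OPENZFS",
        "StorageCapacity": storage_gib,
        "SubnetIds": [subnet_id],
        "OpenZFSConfiguration": {
            "DeploymentType": "SINGLE_AZ_HA_2",   # assumed enum value for Single-AZ HA
            "ThroughputCapacity": throughput_mbps,
        },
    }

def demo():
    # Not invoked here; requires AWS credentials and a real subnet.
    import boto3
    fsx = boto3.client("fsx")
    fs = fsx.create_file_system(**build_openzfs_ha_request("subnet-0123456789abcdef0"))
    return fs["FileSystem"]["FileSystemId"]
```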

Amazon Q Business now provides responses that are personalized to users

Amazon Q Business now offers personalization capabilities that help customers further increase employee productivity by considering a user's profile to provide more useful responses. Q Business uses information such as an employee's location, department, and role to improve the relevance of responses.

Q Business' personalization capabilities are automatically enabled and will use your enterprise's employee profile data to improve the user experience, with no additional setup needed. Q Business receives employee profile information from the identity provider your organization has connected to AWS IAM Identity Center.

Amazon Q Business revolutionizes the way that employees interact with organizational knowledge and enterprise systems. It helps users get comprehensive answers to complex questions and take actions in a unified, intuitive web-based chat experience—all using an enterprise’s existing content, data, and systems. Personalization capability is available in all AWS Regions where Q Business is available. For more information, see Amazon Q Business User Guide.
 

Announcing Playlist page for PartyRock

Today, PartyRock is announcing a Playlist page to help you showcase a collection of PartyRock apps curated by you. Everyone can build, use, and share generative AI-powered apps using PartyRock, which uses foundation models from Amazon Bedrock.

On November 26th, 2023, we announced a Discover page to showcase top community-created PartyRock apps. With this release, you can now add apps to a personalized Playlist page, making it convenient for others to view and use your apps. Previously, PartyRock apps were available in two modes: Private, where only you could view, use, and edit your apps, and Shared using links, where you could share links with anyone to view and use your apps. Starting today, you have an additional Public mode, where apps are automatically displayed on your Playlist page, making it easy for anyone to view and use them.

Set up your playlist by navigating to the Playlist page from the side navigation bar on PartyRock, where you can review your current apps and add them to your playlist. Once created, your playlist will be available at https://partyrock.aws/u/<PartyRock username>. Playlists also come with 'app views', which display the number of times other users viewed or used your apps, whether via a shared link or directly from your Playlist page.

For a limited time, AWS offers new PartyRock users a free trial without the need to provide a credit card or sign up for an AWS account. To get hands-on with generative AI, visit PartyRock.

Amazon S3 Express One Zone now supports logging of all events in AWS CloudTrail

With Amazon S3 Express One Zone support for logging of all data plane API actions in AWS CloudTrail, you can get details on who made API calls to S3 Express One Zone and when API calls were made, thereby enhancing data visibility for governance, compliance, and operational auditing. Now, you can use AWS CloudTrail to log S3 Express One Zone object-level activity such as PutObject and GetObject, in addition to directory-bucket level actions such as CreateBucket and DeleteBucket that were already supported.

With logging of all events in AWS CloudTrail, you can quickly determine which S3 Express One Zone objects were created, read, updated or deleted and identify the source of the API calls. If you detect unauthorized S3 Express One Zone object access, you can take immediate action to restrict access. In addition, you can use CloudTrail features such as advanced event selectors for granular control over which events are logged and CloudTrail integration with Amazon EventBridge to create rule-based workflows for event-driven architectures.

You can enable AWS CloudTrail data events logging for S3 Express One Zone in all AWS Regions where S3 Express One Zone is available. Get started with CloudTrail event logging for S3 Express One Zone by using the CloudTrail console, AWS CLI, or AWS SDKs. For pricing information, visit the CloudTrail pricing page. To learn more, see the S3 User Guide, S3 Express One Zone product page and the AWS News Blog.
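A sketch of an advanced event selector for these data events follows; the resource type string `AWS::S3Express::Object` is an assumption drawn from the CloudTrail data-event documentation and should be verified, and the trail name is a placeholder:

```python
# Build an advanced event selector that logs S3 Express One Zone data events.
def s3express_data_event_selector(name="Log S3 Express data events"):
    return {
        "Name": name,
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::S3Express::Object"]},  # assumed type
        ],
    }

def demo():
    # Not invoked here; requires AWS credentials and an existing trail.
    import boto3
    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.put_event_selectors(
        TrailName="my-trail",  # placeholder
        AdvancedEventSelectors=[s3express_data_event_selector()],
    )
```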

Amazon OpenSearch Serverless expands support for time-series workloads up to 30TB

We are excited to announce that Amazon OpenSearch Serverless now supports workloads up to 30TB of data for time-series collections. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple for you to run search and analytics workloads without having to think about infrastructure management. With the support for larger datasets, OpenSearch Serverless now enables more data-intensive use cases such as log analytics, security analytics, real-time application monitoring, and more.

OpenSearch Serverless compute capacity used for indexing and search is measured in OpenSearch Compute Units (OCUs). To accommodate larger datasets, OpenSearch Serverless now allows customers to independently scale indexing and search operations to use up to 500 OCUs. In addition, this release introduces a new data hydration mechanism that improves scaling and lowers query latency. You can configure the maximum OCU limits for search and indexing independently to manage costs, and monitor real-time OCU usage with CloudWatch metrics to gain a better perspective on your workload's resource consumption.
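As a sketch, the account-level OCU caps might be raised through the UpdateAccountSettings API; the `capacityLimits` field names below are assumptions to confirm against the OpenSearch Serverless API reference:

```python
# Build the assumed capacityLimits payload for opensearchserverless
# UpdateAccountSettings.
def build_capacity_limits(max_indexing_ocu=500, max_search_ocu=500):
    return {
        "maxIndexingCapacityInOCU": max_indexing_ocu,  # assumed field name
        "maxSearchCapacityInOCU": max_search_ocu,      # assumed field name
    }

def demo():
    # Not invoked here; requires AWS credentials.
    import boto3
    aoss = boto3.client("opensearchserverless")
    return aoss.update_account_settings(capacityLimits=build_capacity_limits())
```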

Announcing Valkey GLIDE, an open source client library for Valkey and Redis open source

Today, we're introducing Valkey General Language Independent Driver for the Enterprise (GLIDE), an open source Valkey client library. Valkey is an open source key-value data store that supports a variety of workloads, such as caching and message queues. Valkey GLIDE is one of the official client libraries for Valkey, and it supports all Valkey commands. GLIDE supports Valkey 7.2 and above, and Redis open source 6.2, 7.0, and 7.2. Application programmers can use GLIDE to safely and reliably connect their applications to services that are Valkey- and Redis OSS-compatible.

Valkey GLIDE is designed for reliability, optimized performance, and high-availability, for Valkey- and Redis OSS- based applications. It is supported by AWS, and is preconfigured with best practices learned from over a decade of operating Redis OSS-compatible services used by thousands of customers. To help ensure consistency in application development and operations, GLIDE is implemented using a core driver framework, written in Rust, with language specific extensions. This design ensures consistency in features across languages, and reduces overall complexity. In this release, GLIDE is available for Java and Python, with support for additional languages actively under development.

Valkey GLIDE is open source, permissively licensed (Apache 2.0), and can be used with any Valkey- or Redis OSS-compatible distribution supporting versions 6.2, 7.0, and 7.2, including Amazon ElastiCache and Amazon MemoryDB. You can get started by downloading it from the major open source package managers. Learn more in the blog post, and submit contributions on the Valkey GLIDE GitHub repository.

Amazon CloudFront announces managed cache policies for web applications

Amazon CloudFront announces two new managed cache policies, UseOriginCacheControlHeaders and UseOriginCacheControlHeaders-QueryStrings, for dynamically generated websites and applications that return Cache-Control headers. With the new managed cache policies, CloudFront caches content based on the Cache-Control headers returned by the origin, and defaults to not caching when no Cache-Control header is returned. This functionality was previously available to customers that created custom cache policies, but is now available out-of-the-box for all customers as a managed cache policy.

Cache policies instruct CloudFront when and how to cache, including which request attributes to include in the cache key. Previously, customers had two common options for managed cache policies: CachingOptimized to always cache unless disallowed by a caching directive, and CachingDisabled to disable all caching. For all other cases, customers had to create custom cache policies. Now, customers can use a single managed cache policy for websites backed by content management systems like WordPress or dynamically generated content that has a mix of cacheable and non-cacheable content.

The new managed cache policies are available for immediate use at no additional cost. This feature can be enabled via the CloudFront Console, SDK, and CLI. The CloudFront console automatically provides recommendations on cache policies based on your origin type. For more information, refer to the CloudFront Developer Guide. To get started with CloudFront, visit the CloudFront Product Page.
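To attach one of the new managed policies programmatically, an application first needs its policy ID. A sketch of looking the ID up by name follows; the response shape assumed here follows the CloudFront ListCachePolicies API:

```python
# Find a managed cache policy's ID by name in a ListCachePolicies response.
def find_cache_policy_id(list_response, name):
    for item in list_response["CachePolicyList"]["Items"]:
        policy = item["CachePolicy"]
        if policy["CachePolicyConfig"]["Name"] == name:
            return policy["Id"]
    return None  # named policy not present in this page of results

def demo():
    # Not invoked here; requires AWS credentials.
    import boto3
    cf = boto3.client("cloudfront")
    resp = cf.list_cache_policies(Type="managed")
    return find_cache_policy_id(resp, "UseOriginCacheControlHeaders")
```

The returned ID can then be set as the `CachePolicyId` of a distribution's cache behavior.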

FreeRTOS releases new Long Term Support version

Today, AWS announced the third release of FreeRTOS Long Term Support (LTS): FreeRTOS 202406 LTS. FreeRTOS LTS releases provide feature stability with security updates and critical bug fixes for two years. The new LTS release includes the latest FreeRTOS kernel v11.1, which supports Symmetric Multiprocessing (SMP) and Memory Protection Units (MPU), and recently updated libraries, such as the FreeRTOS-Plus-TCP v4.2.1 library and the Over-the-Air (OTA) library, providing improved IPv6 and OTA capabilities. With FreeRTOS LTS releases, you can continue to maintain your existing FreeRTOS code base and avoid potential disruptions resulting from FreeRTOS version upgrades.

Similar to the previous FreeRTOS LTS release, FreeRTOS 202406 LTS includes libraries that have been validated for memory safety with the C Bounded Model Checker (CBMC) and have undergone specific code quality checks, including MISRA-C compliance and Coverity static analysis to help improve code safety, portability, and reliability in embedded systems. For more information, refer to the LTS Code Quality Checklist.

The support period for the previous LTS release will end in November 2024, providing you with an overlapping time window to migrate your projects to the new LTS release. Refer to the migration guide and corresponding validation tests to upgrade your project to FreeRTOS 202406 LTS. If you prefer not to upgrade and want to continue receiving critical fixes on the previous LTS version beyond its expiry, consider the FreeRTOS Extended Maintenance Plan.

To learn more, visit the FreeRTOS LTS page and FreeRTOS LTS GitHub repository.

Migration Acceleration Program (MAP) visualizations available in the AWS Partner Central Analytics and Insights Dashboard

Today, Amazon Web Services, Inc. (AWS) announces new 2024 AWS Migration Acceleration Program (MAP) data visualizations in the Analytics and Insights Dashboards, accessible via AWS Partner Central. AWS Partners with the Migration Competency and in Differentiated or Validation stages of any Partner Path can now access these new visualizations.

Previously, Partners realized their MAP benefits based on project or milestone completion. With the 2024 MAP program changes oriented toward revenue-based outcomes, Analytics and Insights now surfaces the information Partners need to maintain visibility into these outcomes and claim their MAP funding benefits in the AWS Partner Funding Portal (APFP).

To get started, log into your AWS Partner Central account and navigate to the Investments tab within the Analytics & Insights dashboard. Learn more about the dashboard's functionality and the latest updates in the Analytics and Insights User Guide and FAQs (login required). To learn more about the new 2024 MAP funding template, refer to the link and blog post.

Amazon EMR support for backup and restore for Apache HBase Tables available in Asia Pacific (Seoul)

We are excited to announce that backup and restore for Apache HBase Tables is now available in the Asia Pacific (Seoul) AWS Region.

The Apache HBase Write Ahead Log (WAL) records all changes to data in file-based storage. With today's launch, customers can now write their Apache HBase write-ahead logs to the Amazon EMR WAL, a durable managed storage layer. This ensures business continuity in case of a disaster and offers higher resilience for workloads. If a customer's cluster, or in rare cases the Availability Zone, becomes unhealthy or unavailable, customers can create a new cluster, point it to the same Amazon S3 root directory and Amazon EMR WAL workspace, and automatically recover data from Amazon EMR WAL within a few minutes. This feature also allows Apache HBase administrators to easily perform common operational tasks such as upgrading to the latest versions, rotating clusters, and cleaning up old write-ahead logs.

Customers who enroll in the feature will be charged based on usage: for Amazon EMR WAL data storage, writes, and reads during recovery operations. This feature is available for Amazon EMR releases 6.15, 7.0, and later. To learn how to get started, please refer to our documentation, and for pricing information please refer to the pricing section.
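As an illustration, enabling the EMR WAL is typically done through a cluster configuration classification; the property name `hbase.emr.wal.enabled` is an assumption drawn from the EMR HBase documentation and should be verified for your release:

```python
# Build an EMR configuration list that turns on the Amazon EMR WAL for HBase.
def hbase_wal_configuration(enabled=True):
    return [{
        "Classification": "hbase",
        "Properties": {"hbase.emr.wal.enabled": str(enabled).lower()},  # assumed property
    }]

def demo():
    # Not invoked here; pass the configuration when creating the cluster, e.g.
    # as the Configurations argument of emr.run_job_flow (requires credentials).
    import boto3
    emr = boto3.client("emr", region_name="ap-northeast-2")
    return hbase_wal_configuration()
```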
 

Amazon Connect launches automated rotation of agent shifts

Amazon Connect now supports automated rotation of agent shifts, making it easier for contact center managers to administer schedules and ensure that agents receive a business-defined sequence of shifts.

Automated shift rotations speed up the week-to-week scheduling process for contact center managers, eliminating the need to manually move agents between different shifts. With shift rotation, contact center managers can now create a pattern of shifts that agents will repeatedly rotate through (e.g., morning shift, afternoon shift, night shift) and define how many weeks each shift should be scheduled before moving to the next one in the rotation. These shift rotation patterns are automatically applied when new schedules are created, eliminating the need for contact center managers to manually assign shifts to groups of agents. Additionally, contact center managers can bulk upload and download shift rotation and shift profile assignments, making it easy to set up and update shift rotations for thousands of agents.

There is no additional charge for this feature, and it is available in all AWS Regions where Amazon Connect agent scheduling is available. To get started with Amazon Connect agent scheduling, click here.

Amazon ECR adds EventBridge support for ECR's replication feature

Amazon Elastic Container Registry (Amazon ECR) now emits Amazon EventBridge events when customers successfully replicate images using ECR’s replication capability. Amazon EventBridge is a serverless service that makes it easy for customers to connect their applications using events generated from a variety of sources.

ECR's replication feature enables you to easily copy container images across multiple AWS accounts and Regions and benefit from in-region pulls, allowing your applications to start faster because pull latency is reduced. Geographically dispersed images also help you meet backup and disaster recovery requirements for your applications. With today's launch, ECR automatically generates events when image replication completes successfully across configured source and destination Regions and/or accounts. This lets you know when your replicated images are available for use in the destination registry and trigger automated actions following replication completion. To receive ECR events, you can configure a rule using the EventBridge console and include automated actions that should be triggered when an event matches the rule.

EventBridge support for ECR replication is available in all commercial AWS Regions. To get started and learn more about EventBridge events with Amazon ECR, see our user guide.
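A sketch of a matching EventBridge rule follows; the detail-type string used below is an assumption, so confirm the exact value in the ECR EventBridge events documentation before relying on it:

```python
import json

# Build an event pattern that matches ECR replication events.
def ecr_replication_event_pattern():
    return json.dumps({
        "source": ["aws.ecr"],
        "detail-type": ["ECR Image Replication"],  # assumed detail-type; verify in docs
    })

def demo():
    # Not invoked here; requires AWS credentials.
    import boto3
    events = boto3.client("events")
    events.put_rule(
        Name="ecr-replication-complete",
        EventPattern=ecr_replication_event_pattern(),
    )
```

A target (for example, a Lambda function or SNS topic) can then be attached to the rule with `put_targets`.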
 

Amazon SNS now supports sending SMS from Canada West (Calgary) region

Customers that use Amazon Simple Notification Service (Amazon SNS) can now host their applications in Canada West (Calgary) region, and send text messages (SMS) to consumers in more than 200 countries and territories. Using Amazon SNS, customers can send a message directly to one phone number, or multiple phone numbers at once by subscribing those phone numbers to a topic and sending messages to the topic. With this expansion, Amazon SNS now supports the ability to send SMS from 29 AWS regions.
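A minimal sketch of sending an SMS from the new Region with the AWS SDK for Python follows; the phone number and message are placeholders:

```python
# Assemble the parameters for a direct-to-phone-number SNS publish.
def build_sms_publish_params(phone_number, message):
    return {"PhoneNumber": phone_number, "Message": message}

def demo():
    # Not invoked here; requires AWS credentials and SMS spending limits.
    import boto3
    sns = boto3.client("sns", region_name="ca-west-1")  # Canada West (Calgary)
    sns.publish(**build_sms_publish_params("+15555550123", "Your code is 123456"))
```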


Amazon SQS introduces new Amazon CloudWatch metrics for FIFO queues

Amazon Simple Queue Service (Amazon SQS) introduces two new Amazon CloudWatch metrics to improve visibility into FIFO queue usage. Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

With the new metrics, SQS customers have greater visibility into their FIFO usage. The new metrics are:

  • NumberOfDeduplicatedSentMessages - The number of messages sent to a queue that were deduplicated. This metric helps in determining if a producer is sending duplicate messages to an SQS FIFO queue.
  • ApproximateNumberOfGroupsWithInflightMessages - The approximate number of message groups with in-flight messages, where a message is considered in flight after it has been received from a queue by a consumer but not yet deleted. This metric helps you troubleshoot and optimize your FIFO queue throughput by either increasing the number of FIFO message groups or scaling your consumers.


Metrics are available on a per-queue basis, and can be accessed through SQS and CloudWatch consoles. These metrics are available in all AWS regions where Amazon SQS is available.

To learn more, see SQS Metrics Documentation.
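As a sketch, the new metrics can be read programmatically with the CloudWatch GetMetricStatistics API; the queue name below is a placeholder:

```python
from datetime import datetime, timedelta, timezone

# Build a GetMetricStatistics query for one of the new SQS FIFO metrics.
def build_metric_query(queue_name, metric="NumberOfDeduplicatedSentMessages"):
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/SQS",
        "MetricName": metric,
        "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
        "StartTime": now - timedelta(hours=1),  # last hour of data
        "EndTime": now,
        "Period": 300,                          # 5-minute buckets
        "Statistics": ["Sum"],
    }

def demo():
    # Not invoked here; requires AWS credentials.
    import boto3
    cloudwatch = boto3.client("cloudwatch")
    return cloudwatch.get_metric_statistics(**build_metric_query("my-queue.fifo"))
```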

Amazon RDS Snapshot Export to S3 now available in eight additional AWS regions

Snapshot Export to S3 for Amazon Aurora and Amazon RDS snapshots is now available in Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) regions. Snapshot export to S3 exports snapshot data as Apache Parquet, an efficient open columnar storage format. Snapshot export to S3 allows ingestion of data stored in Amazon RDS and Amazon Aurora for purposes such as populating data lakes for analytics or for training machine learning models.

You can create an export with just a few clicks on the Amazon RDS Management Console or using the AWS SDK or CLI. Extracting data from a snapshot doesn’t impact the performance of your database, as the export operation is performed on your snapshot and not your database. The extracted data in Apache Parquet format is portable, so you can consume it with other AWS services such as Amazon Athena, Amazon SageMaker, or Amazon Redshift Spectrum or with big data processing frameworks such as Apache Spark.

Amazon RDS Snapshot Export to S3 can export data from Amazon RDS for PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon Aurora PostgreSQL-Compatible Edition and Amazon Aurora MySQL snapshots. For more information, including instructions on getting started, read the Aurora documentation or Amazon RDS documentation.
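A sketch of starting an export with the AWS SDK for Python follows; all ARNs, the bucket name, and the KMS key below are placeholders:

```python
# Assemble the parameters for an RDS StartExportTask call.
def build_export_task(task_id, snapshot_arn, bucket, role_arn, kms_key_id):
    return {
        "ExportTaskIdentifier": task_id,
        "SourceArn": snapshot_arn,   # snapshot to export, not the live database
        "S3BucketName": bucket,
        "IamRoleArn": role_arn,      # role with write access to the bucket
        "KmsKeyId": kms_key_id,      # key used to encrypt the Parquet output
    }

def demo():
    # Not invoked here; requires AWS credentials and real resources.
    import boto3
    rds = boto3.client("rds", region_name="ca-west-1")
    rds.start_export_task(**build_export_task(
        "my-export",
        "arn:aws:rds:ca-west-1:123456789012:snapshot:my-snapshot",
        "my-export-bucket",
        "arn:aws:iam::123456789012:role/export-role",
        "arn:aws:kms:ca-west-1:123456789012:key/abcd-ef01",
    ))
```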
 

Amazon Q Developer is now generally available (GA) in the Visual Studio IDE

Today, Amazon Web Services, Inc. announces the general availability of Amazon Q Developer in the Visual Studio IDE, available as part of the AWS Toolkit extension. You can now chat with Amazon Q about your project and ask Amazon Q to scan your project for security vulnerabilities.

Amazon Q Developer helps simplify the software development lifecycle by answering questions about technical topics, generating code, and explaining code. You can ask Amazon Q to answer questions such as “How do I debug issues with my Lambda functions locally before deploying to AWS?”. You can also request Amazon Q to generate code with prompts like “Generate test cases for [function name]”, where you reference a function name in an open file.

Amazon Q Developer can also help keep your software secure by highlighting security vulnerabilities. You can click "Run Security Scan" from the margin menu, which returns a list of vulnerabilities. Currently, security scans only support C#.

You can ask Amazon Q questions, update your code, and initiate actions with quick commands all from the Amazon Q chat panel in your IDE. When you ask Amazon Q a question, it uses the current file that is open in your IDE as context, including the programming language and the file path.

AWS Lambda introduces new controls to make it easier to search, filter, and aggregate Lambda function logs

AWS Lambda announces advanced logging controls that enable you to natively capture logs in JSON structured format, adjust log levels, and select the Amazon CloudWatch log group for your Lambda functions.

You can now capture your Lambda logs in JSON structured format without having to bring your own logging libraries. JSON format structures logs as a series of key-value pairs, enabling you to quickly search, filter, and analyze your function logs. You can also control the log level (e.g., ERROR, DEBUG, INFO) of your Lambda logs without making any code changes. This lets you choose the desired logging granularity for your function, eliminating the need to sift through large volumes of logs while debugging and troubleshooting critical errors. Lastly, you can choose the CloudWatch log group to which Lambda sends your logs. This allows you to easily aggregate logs from multiple functions within an application in one place, and apply security, governance, and retention policies at the application level rather than individually for every function.

To get started, you can specify advanced logging controls for your Lambda functions using Lambda API, Lambda console, AWS CLI, AWS Serverless Application Model (SAM), and AWS CloudFormation. To learn more, visit the launch blog post or Lambda Developer Guide.

Lambda advanced logging controls are now available in AWS GovCloud (US) Regions, at no additional cost. For more information, see the AWS Region table.
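A sketch of applying these controls with the Lambda UpdateFunctionConfiguration API follows; the function name and log-group name are placeholders:

```python
# Build a LoggingConfig that switches logs to JSON, sets log levels, and
# routes output to a shared CloudWatch log group.
def build_logging_config(log_group, app_level="INFO", system_level="WARN"):
    return {
        "LogFormat": "JSON",
        "ApplicationLogLevel": app_level,  # level for logs your code emits
        "SystemLogLevel": system_level,    # level for Lambda platform logs
        "LogGroup": log_group,             # shared, application-level log group
    }

def demo():
    # Not invoked here; requires AWS credentials and an existing function.
    import boto3
    lam = boto3.client("lambda")
    lam.update_function_configuration(
        FunctionName="my-function",  # placeholder
        LoggingConfig=build_logging_config("/myapp/lambda-logs"),
    )
```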

Amazon DataZone introduces fine-grained access control

Today, Amazon DataZone has introduced fine-grained access control, providing data owners granular control over their data at row and column levels. Customers use Amazon DataZone to catalog, discover, analyze, share, and govern data at scale across organizational boundaries with governance and access controls. Data owners can now restrict access to specific records of data, instead of granting access to the entire dataset. For example, if your table contains data for multiple regions, you can create row filters to grant access to rows with different regions to different projects. Additionally, column filters allow you to restrict access to specific columns, such as those containing Personally Identifiable Information (PII), ensuring that subscribers can only access the necessary and less sensitive data.

To get started, you can create row and column filters within the Amazon DataZone portal. When a user requests access to your data asset, you can approve the subscription by applying the appropriate filters. Amazon DataZone enforces these filters using AWS Lake Formation and Amazon Redshift, ensuring that the subscriber can only access the rows and columns that you have authorized.
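As a hedged sketch of what such filters look like programmatically, the payloads below follow the shape of DataZone's `CreateAssetFilter` API; all identifiers, column names, and values are placeholders, and field shapes should be checked against the API reference:

```python
# Row filter: subscribers see only rows where region = 'EMEA'.
row_filter_params = {
    "domainIdentifier": "dzd_example",     # placeholder domain ID
    "assetIdentifier": "asset_example",    # placeholder asset ID
    "name": "emea-rows-only",
    "configuration": {
        "rowConfiguration": {
            "rowFilter": {
                "expression": {"equalTo": {"columnName": "region", "value": "EMEA"}}
            }
        }
    },
}

# Column filter: subscribers see only the listed, non-PII columns.
column_filter_params = {
    "domainIdentifier": "dzd_example",
    "assetIdentifier": "asset_example",
    "name": "no-pii-columns",
    "configuration": {
        "columnConfiguration": {"includedColumnNames": ["order_id", "region", "amount"]}
    },
}
# With boto3:
#   boto3.client("datazone").create_asset_filter(**row_filter_params)
```

A filter is then attached when you approve a subscription request, and DataZone enforces it through Lake Formation or Redshift.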

Fine-grained access control support for both Amazon Redshift and AWS Lake Formation is now generally available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (London), and South America (São Paulo).

Learn more about fine-grained access control in the user documentation.
 

AWS Launch Wizard now adds programmatic deployments through APIs and CloudFormation templates

AWS Launch Wizard now enables customers to programmatically deploy workloads on AWS using Application Programming Interfaces (APIs) or CloudFormation templates, while still leveraging its built-in automation and best-practice recommendations. With this launch, customers can deploy supported third-party applications on AWS, such as SQL Server (single node, high availability, or failover cluster instance) and SAP, through AWS Launch Wizard APIs or CloudFormation resources, in addition to the existing console-based experience. AWS Launch Wizard has also introduced APIs to programmatically retrieve application specifications for a simplified deployment experience.

AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications, such as Microsoft SQL Server Always On and HANA-based SAP systems, without the need to manually identify and provision individual AWS resources.
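A minimal sketch of a programmatic deployment, following the shape of the Launch Wizard `CreateDeployment` API; the pattern name, key pair, and VPC ID are hypothetical, and the required specification keys for a given workload can be retrieved with `GetWorkloadDeploymentPattern`:

```python
# Parameters for a programmatic Launch Wizard deployment.
deployment_params = {
    "workloadName": "SAP",                     # a workload as listed by ListWorkloads
    "deploymentPatternName": "SapHanaSingle",  # hypothetical pattern name
    "name": "my-sap-deployment",
    # specifications are key-value settings specific to the chosen pattern;
    # query GetWorkloadDeploymentPattern for the authoritative list of keys
    "specifications": {
        "KeyName": "my-keypair",               # placeholder EC2 key pair
        "VpcId": "vpc-0123456789abcdef0",      # placeholder VPC
    },
}
# With boto3:
#   boto3.client("launch-wizard").create_deployment(**deployment_params)
```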

AWS Launch Wizard is available in 29 Regions including US East (N. Virginia, Ohio), Europe (Frankfurt, Ireland, London, Paris, Stockholm and Milan), South America (Sao Paulo), US West (N. California, Oregon), Canada (Central), Asia Pacific (Mumbai, Seoul, Tokyo, Hong Kong, Hyderabad, Singapore, and Sydney), Middle East (Bahrain, UAE), Africa (Cape Town), Europe (Spain), Europe (Zurich), Asia Pacific (Melbourne), China (Beijing, operated by Sinnet), China (Ningxia, operated by NWCD), and the AWS GovCloud (US) Regions.

To learn more about AWS Launch Wizard, visit the Launch Wizard Page. To get started, check out the Launch Wizard User Guide and the API Page.
 

Amazon RDS now supports M6gd and R6gd database instances in the AWS GovCloud (US) Regions

Amazon Relational Database Service (Amazon RDS) for PostgreSQL, MySQL, and MariaDB now supports AWS Graviton2-based M6gd and R6gd database instances in the AWS GovCloud (US) Regions.

With this regional expansion, M6gd instances are now available for Amazon RDS for PostgreSQL, MySQL, and MariaDB in US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Calgary, Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Middle East (Bahrain, UAE), and the AWS GovCloud (US) Regions. R6gd instances are available for Amazon RDS for PostgreSQL, MySQL, and MariaDB in US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Jakarta, Mumbai, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, Paris), and the AWS GovCloud (US) Regions. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page.

M6gd and R6gd database instances are available on Amazon RDS for PostgreSQL version 16.1 and higher, 15.2 and higher, 14.5 and higher, and 13.4 and higher. M6gd and R6gd database instances are available on Amazon RDS for MySQL version 8.0.32 and higher, and Amazon RDS for MariaDB version 10.11.4 and higher, 10.6.13 and higher, 10.5.20 and higher, and 10.4.29 and higher. Get started by creating a fully managed M6gd database instance using the Amazon RDS Management Console. For more details, refer to the Amazon RDS User Guide.
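As an illustrative sketch, creating an M6gd instance programmatically uses the standard RDS `CreateDBInstance` API with a `db.m6gd.*` instance class; the identifier and sizing below are placeholders:

```python
# Parameters for a Graviton2 M6gd RDS instance (placeholder values).
db_params = {
    "DBInstanceIdentifier": "my-postgres-m6gd",
    "DBInstanceClass": "db.m6gd.large",   # Graviton2-based class with local NVMe storage
    "Engine": "postgres",
    "EngineVersion": "16.1",              # M6gd/R6gd need 16.1+, 15.2+, 14.5+, or 13.4+
    "AllocatedStorage": 100,              # GiB
    "MasterUsername": "postgres",
    "ManageMasterUserPassword": True,     # let RDS store the password in Secrets Manager
}
# With boto3:
#   boto3.client("rds").create_db_instance(**db_params)
```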
 

AWS Lambda adds support for runtime management controls in the AWS GovCloud (US) Regions

AWS Lambda now supports runtime management controls in the AWS GovCloud (US) Regions. The operational simplicity of automatic runtime updates is one of the features customers most like about Lambda. This release provides customers running critical production workloads with more visibility and control over when runtime updates are applied to their functions.

For each runtime, Lambda provides a managed execution environment, which includes the underlying Amazon Linux OS, the programming language runtime, and SDKs. Lambda takes care of applying patches and security updates to all of these components, allowing customers to delegate patching responsibility to Lambda. With this release, the updates Lambda makes to its managed runtimes are now visible to customers as distinct runtime versions. Customers also have more control over when Lambda updates their functions to a new runtime version: either automatically, or synchronized with customer-driven function updates. In the rare event of an unexpected runtime incompatibility with an existing function, customers can also roll back to an earlier runtime version.
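The three update modes described above correspond to the `UpdateRuntimeOn` setting of Lambda's `PutRuntimeManagementConfig` API. A sketch, with a placeholder function name:

```python
# Choose when Lambda applies runtime updates to a function.
runtime_config = {
    "FunctionName": "my-function",        # placeholder function name
    # "Auto" (default): Lambda applies runtime updates automatically.
    # "FunctionUpdate": updates apply only when you update the function.
    # "Manual": pin to a specific runtime version (used for rollback);
    #           also requires a RuntimeVersionArn parameter.
    "UpdateRuntimeOn": "FunctionUpdate",
}
# With boto3:
#   boto3.client("lambda").put_runtime_management_config(**runtime_config)
```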

Visit our product documentation for more information about runtime management controls. Sign in to the AWS Lambda console to get started.

AWS Private CA is now available in the Beijing and Ningxia Regions in China

AWS Private Certificate Authority (AWS Private CA) is now available in the AWS China (Beijing) Region, operated by Sinnet, and in the AWS China (Ningxia) Region, operated by NWCD. AWS Private CA is a managed, highly available, cloud certificate authority (CA) with private keys secured in AWS-managed hardware security modules (HSMs). By using AWS Private CA, you can reduce the operational costs and complexity of running public key infrastructure (PKI) at scale across industries and use cases, including financial services, automotive, manufacturing, healthcare, electronics, technology, energy, and smart home.

AWS Private CA enables you to create customizable private certificates for a broad range of scenarios. AWS services such as AWS Certificate Manager, Amazon Managed Streaming for Apache Kafka (MSK), AWS IAM Roles Anywhere and Amazon Elastic Kubernetes Service (EKS) can all leverage private certificates from AWS Private CA. You can use AWS Private CA to create private certificates for Internet of Things (IoT) devices including Matter smart home devices.
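As a sketch of getting started, a private root CA is created with the ACM PCA `CreateCertificateAuthority` API; the subject name below is a placeholder:

```python
# Minimal configuration for a private root CA (placeholder subject).
ca_params = {
    "CertificateAuthorityConfiguration": {
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "example.internal"},  # hypothetical internal domain
    },
    "CertificateAuthorityType": "ROOT",  # or "SUBORDINATE" for an intermediate CA
}
# With boto3:
#   boto3.client("acm-pca").create_certificate_authority(**ca_params)
```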

For regions where AWS Private CA is available, see AWS Services by Region.

To learn more about AWS Private CA, visit the product page and documentation. To learn how to use AWS Private CA to create and operate Matter-compliant CAs, see Using the Matter standard.
 

RDS for Db2 now supports Private Offers on licensing through AWS Marketplace

Amazon RDS for Db2 customers using hourly Db2 licensing through AWS Marketplace can now obtain customized contract terms from IBM, using AWS Marketplace Private Offers. This complements the existing options of using the public hourly license price to get started instantly, or using the Bring-Your-Own-License (BYOL) option.

Customers can request a Private Offer from IBM and, if available, get individualized quotes on Db2 hourly licensing through AWS Marketplace.

Amazon RDS for Db2 makes it easy to set up, operate, and scale Db2 databases in the cloud. See the Amazon RDS for Db2 Pricing page for pricing and regional availability information. To learn more about the AWS Marketplace license option, visit the AWS Documentation to get started.
 

AWS Managed Services (AMS) Accelerate now includes Trusted Remediator

AWS Managed Services (AMS) Accelerate customers can now use Trusted Remediator to automatically remediate recommendations based on Trusted Advisor checks. By eliminating the human effort required to fix misconfigurations on accounts, Trusted Remediator improves security, fault tolerance, and performance, while simultaneously reducing cost for AMS customers. AMS helps you operate AWS efficiently and securely. It extends your operations team’s capabilities with operational and security monitoring, incident detection and management, patching, backup, and cost optimization.

Trusted Remediator uses tested and proven automation solutions that not only minimize security risks but also scale operations processes by consistently delivering quality remediations. Examples of supported checks include "Underutilized Amazon EBS Volumes" and "Amazon Redshift should have automatic upgrades to major versions enabled." Trusted Remediator gives customers the flexibility to configure remediations across single or multiple accounts. Customers can choose to remediate checks at the account level or the resource level, with the ability to apply tag-based exceptions.

To learn more, see Trusted Remediator. Learn more about AMS here.

Elastic Fabric Adapter (EFA) now supports cross-subnet communication

We are excited to announce that AWS now supports cross-subnet communication between Elastic Fabric Adapter (EFA) interfaces for Amazon EC2 instances within the same Availability Zone (AZ). This enhancement makes it possible to communicate with EC2 instances across subnets while benefiting from the low latency and high throughput provided by EFA. EFA is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and Machine Learning (ML) applications.

Previously, EFA traffic was limited to the same subnet, thus requiring all instances to be configured within a single subnet. With this update, you have the option to send traffic over EFA across subnets for both existing and new instances. To take advantage of this capability, you need to configure your security group rules to allow traffic to and from security groups of instances from other subnets. You will also need to ensure your application configuration and orchestration logic accounts for hosts from other subnets when provisioning and managing EFA-enabled instances to communicate across subnets over EFA.
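A sketch of the security group change described above, using the EC2 `AuthorizeSecurityGroupIngress` API; the group IDs are placeholders for the security groups used by EFA instances in each subnet:

```python
# Allow inbound traffic on sg-subnet-a from EFA instances in the other subnet.
ingress_params = {
    "GroupId": "sg-0aaaaaaaaaaaaaaaa",  # placeholder: SG of instances in subnet A
    "IpPermissions": [{
        "IpProtocol": "-1",             # all protocols, as EFA traffic requires
        "UserIdGroupPairs": [
            {"GroupId": "sg-0bbbbbbbbbbbbbbbb"}  # placeholder: SG used in subnet B
        ],
    }],
}
# With boto3:
#   boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
# A matching egress rule, and the mirror-image rules on the subnet-B group,
# are also needed so traffic flows in both directions.
```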

Cross-subnet communication support over EFA is available in all AWS commercial Regions, the AWS GovCloud (US) Regions, the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. For more information about EFA, please visit the EFA documentation page.
 

Amazon MQ now supports RabbitMQ version 3.13

Amazon MQ now provides support for RabbitMQ version 3.13, which includes several fixes and performance improvements over the previous versions of RabbitMQ supported by Amazon MQ. Starting with RabbitMQ 3.13, Amazon MQ will manage patch version upgrades for your brokers. All brokers on version 3.13 will be automatically upgraded to the latest compatible and secure patch version during your scheduled maintenance window.

If you are running earlier versions of RabbitMQ, such as 3.8, 3.9, 3.10, 3.11 or 3.12, we strongly encourage you to upgrade to RabbitMQ 3.13. This can be accomplished with just a few clicks in the AWS Management Console. Amazon MQ for RabbitMQ will soon end support for RabbitMQ versions 3.8, 3.9 and 3.10. To learn more about upgrading, see Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide. To learn more about the changes in RabbitMQ 3.13, see the Amazon MQ release notes. This version is available in all the regions Amazon MQ is available in. For a full list of available regions see the AWS Region Table.
 

Amazon EKS natively supports EC2 capacity blocks for ML

You can now use Amazon EC2 instances reserved through Capacity Blocks for ML natively in Amazon EKS clusters with managed node groups. EKS managed node groups make it easy to run highly available and secure Kubernetes clusters by automating the provisioning and lifecycle of cluster worker nodes. EC2 Capacity Blocks provide you with assured and predictable access to GPU instances for your artificial intelligence/machine learning (AI/ML) workloads.

Customers increasingly choose Kubernetes as the platform for their AI/ML workloads and Amazon EKS lets them combine the benefits of Kubernetes with the security, scalability, and availability of the AWS cloud. Native support for EC2 Capacity Blocks in Amazon EKS simplifies capacity planning for cutting-edge AI/ML workloads in Kubernetes clusters, helping to ensure that GPU capacity is available when and where it’s needed. To get started, create an EKS managed node group with an EC2 Launch Template targeting a Capacity Block reservation so that the reserved GPU capacity will be accessible in the EKS cluster when the reservation becomes active. EC2 Capacity Blocks can be reserved up to eight weeks in advance and for just the amount of time that you require the instances.
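A sketch of the node group creation step described above, following the EKS `CreateNodegroup` API; all names, ARNs, and IDs are placeholders, and the referenced launch template is assumed to target the Capacity Block reservation:

```python
# Managed node group backed by an EC2 Capacity Block reservation.
nodegroup_params = {
    "clusterName": "ml-cluster",               # placeholder cluster name
    "nodegroupName": "gpu-capacity-block",
    "capacityType": "CAPACITY_BLOCK",          # use the Capacity Block reservation
    # start at 0 and scale up when the reservation becomes active
    "scalingConfig": {"minSize": 0, "maxSize": 4, "desiredSize": 0},
    "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
    "nodeRole": "arn:aws:iam::111122223333:role/eksNodeRole",  # placeholder role
    # the launch template points at the Capacity Block via its
    # CapacityReservationSpecification / CapacityReservationTarget settings
    "launchTemplate": {"id": "lt-0123456789abcdef0", "version": "1"},
}
# With boto3:
#   boto3.client("eks").create_nodegroup(**nodegroup_params)
```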

Native EKS support for EC2 Capacity Blocks via managed node groups is available in the US East (Ohio), US East (N. Virginia), and US West (Oregon) AWS Regions. Read more about Amazon EKS support for EC2 Capacity Blocks for ML in the Amazon EKS User Guide.
 

AWS Direct Connect announces native 400 Gbps Dedicated Connections at select locations

AWS Direct Connect now offers native 400 Gbps Dedicated Connections to support your private connectivity needs to the cloud.

AWS Direct Connect provides private, high-bandwidth connectivity between AWS and your data center, office, or colocation facility. Native 400 Gbps connections provide higher bandwidth, without the operational overhead of managing multiple 100 Gbps connections in a link aggregation group. The increased capacity delivered by 400 Gbps connections is particularly beneficial to applications that transfer large-scale datasets, such as for machine learning and large language model training or advanced driver assistance systems for autonomous vehicles.

For production workloads, AWS recommends using connections in more than one AWS Direct Connect location to ensure resilience against device or colocation failure. To get started, follow our Resiliency Recommendations to determine the best resiliency model for your use case. After selecting a resiliency model, the AWS Direct Connect Resiliency Toolkit can guide you through the process for ordering redundant connectivity through the AWS Direct Connect Console or CLI/APIs. AWS encourages you to use the Resiliency Toolkit failover test feature to test your configurations before going live and set up active health monitoring using Amazon CloudWatch Network Monitor.

Starting today, 400 Gbps Dedicated Connections are available at these locations. This list will be updated as 400 Gbps Dedicated Connections are made available at additional locations. The AWS Direct Connect pricing page has pricing information for 400 Gbps Dedicated Connections and the Direct Connect User Guide provides setup instructions.

Sign into the Direct Connect Console today to order your 400 Gbps Dedicated Connection!

Amazon OpenSearch Ingestion adds support for ingesting data from self-managed sources

Amazon OpenSearch Ingestion now allows you to ingest data from self-managed OpenSearch, Elasticsearch, and Apache Kafka clusters, eliminating the need to run and manage third-party tools like Logstash to migrate your data from self-managed sources into Amazon OpenSearch Service. You can now seamlessly migrate or continuously replicate data from all OpenSearch versions and Elasticsearch 7.x versions, running either on Amazon EC2 or in on-premises environments, into Amazon OpenSearch Service managed clusters or Serverless collections.

You can now migrate data from all indices, or just specific indices, from one or more self-managed OpenSearch/Elasticsearch clusters to one or more Amazon OpenSearch Service managed clusters or Serverless collections. Amazon OpenSearch Ingestion will continually detect new indices in the self-managed source cluster that need to be processed and can even be scheduled to reprocess indices at a configurable interval to pick up on new documents. Similarly, Amazon OpenSearch Ingestion pipelines can consume data from one or more topics in your self-managed Kafka cluster and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. You can check out the complete list of features in this blog post.
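As a rough sketch of what such a pipeline looks like, the definition below follows the shape of Data Prepper's `opensearch` source and sink plugins, wrapped in an OpenSearch Ingestion `CreatePipeline` call; all hostnames, index patterns, and the exact configuration keys are assumptions to verify against the documentation:

```python
# A pipeline definition (YAML body) that reads from a self-managed cluster
# and writes to an Amazon OpenSearch Service domain. Placeholders throughout.
pipeline_body = """\
version: "2"
migration-pipeline:
  source:
    opensearch:
      # self-managed source cluster (placeholder hostname)
      hosts: ["https://self-managed.example.com:9200"]
      indices:
        include:
          - index_name_regex: "logs-.*"
  sink:
    - opensearch:
        # Amazon OpenSearch Service destination (placeholder endpoint)
        hosts: ["https://my-domain.us-east-1.es.amazonaws.com"]
        index: "migrated-logs"
"""

pipeline_params = {
    "PipelineName": "self-managed-migration",
    "MinUnits": 1,
    "MaxUnits": 4,
    "PipelineConfigurationBody": pipeline_body,
}
# With boto3:
#   boto3.client("osis").create_pipeline(**pipeline_params)
```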

Announcing streamlined Migration Acceleration Program (MAP) funding and approval process in AWS Partner Central

Today, Amazon Web Services, Inc. (AWS) announces a new Migration Acceleration Program (MAP) template in AWS Partner Central with streamlined funding and approval processes to better support partner-led migrations. Eligible AWS Partners can leverage MAP to accelerate more customer migration opportunities, now scaling to support migrations up to $10M in annual recurring revenue, with a simple approval workflow and access to new Strategic Partner Incentives (SPIs).

The new MAP template helps partners accelerate migration opportunities by providing faster speed to market with fewer AWS approval stages. The simplified partner experience for submitting fund requests and cash claims in the AWS Partner Funding Portal (APFP) automatically creates claim milestones and associates them with realized revenue outcomes for the MAP Mobilize phase. This improves overall partner productivity by avoiding the need to manually create individual claim milestones. The new MAP template also supports additional SPIs for new customer engagement and modernization opportunities. Partners can now easily get visibility into their funding investment status through the Analytics tab in AWS Partner Central.

The MAP template is available to all partners at the Validated stage in AWS Partner Central, as well as to AWS Migration Competency Partners. To learn more, review the 2024 MAP program guide.
 

Amazon API Gateway WebSocket APIs now available in 7 additional AWS Regions

Today, Amazon API Gateway has expanded the availability of WebSocket APIs to 7 additional AWS Regions: Asia Pacific (Jakarta), Europe (Zurich), Europe (Spain), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), and Canada West (Calgary). With this launch, customers can build APIs with real-time bi-directional communication across all commercial AWS Regions.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. WebSocket APIs enable real-time bi-directional communication, resulting in richer client-server interactions where services can push data to clients without requiring clients to make an explicit request. They are often used in real-time applications such as chat applications, collaboration platforms, multiplayer games, and financial trading platforms. WebSocket APIs have routes that can be integrated with backend HTTP endpoints, Lambda functions, or other AWS services.

AWS Application Migration Service supports Dynatrace post-launch action

Starting today, AWS Application Migration Service (AWS MGN) provides an action for installing the Dynatrace agent on your migrated instances. For each migrated server, you can choose to automatically install the Dynatrace agent to support your observability needs.

Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS. It also helps simplify modernization of your migrated applications by allowing you to select preconfigured and custom optimization options during migration.

This feature is now available in all of the Commercial regions where Application Migration Service is available. Access the AWS Regional Services List for the most up-to-date availability information.

To start using Application Migration Service for free, sign in through the AWS Management Console. For more information, visit the Application Migration Service product page.

For more information on Dynatrace and to create a trial account, visit the Dynatrace sign up page.

RDS for PostgreSQL supports PL/Rust crates serde, serde_json, regex, and url

Amazon RDS for PostgreSQL now supports additional PL/Rust crates. The serde and serde_json crates allow you to exchange information between server and client, or between servers, by serializing and deserializing data structures in your PL/Rust user-defined functions. The release also includes support for the regex crate, which allows you to search strings for matches of a regular expression, and the url crate, which implements the URL standard to provide parsing and serialization of URL strings. With support for these additional crates, you can build more types of extensions on RDS for PostgreSQL using Trusted Language Extensions for PostgreSQL (pg_tle).

pg_tle is an open source development kit to help you build extensions written in a trusted language, such as PL/Rust, that run safely on PostgreSQL. Support for serde, serde_json, regex, and url crates is available on database instances in Amazon RDS running PostgreSQL 16.3-R2 and higher, 15.7-R2 and higher, 14.12-R2 and higher, and 13.15-R2 and higher in all applicable AWS Regions. To learn more about using pg_tle, see our documentation.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Amazon S3 Access Grants now integrate with open source Python frameworks

Amazon S3 Access Grants now integrate with open source Python frameworks using the AWS SDK for Python (Boto3) plugin. S3 Access Grants help you to map identities in Identity Providers (IdPs) such as Active Directory, or AWS Identity and Access Management (IAM) principals, to your datasets in S3. Importing the Boto3 plugin to your client replaces any custom code required to manage data permissions, so you can use S3 Access Grants in open source Python frameworks such as Django, TensorFlow, NumPy, Pandas, and more.

Get started with S3 Access Grants using the AWS SDK for Python by importing the Boto3 plugin as a module in your Python code. The Boto3 plugin now has the ability to automatically request, cache, and refresh temporary credentials issued by S3, based on an Access Grant. As a result, the permissions for your Python-based S3 clients will be determined based on user group membership in an IdP.
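Before the plugin can vend credentials, a grant must exist that maps an IdP group (or IAM principal) to an S3 prefix. A sketch of that mapping via the S3 Control `CreateAccessGrant` API; the account ID, location ID, prefix, and group identifier are all placeholders:

```python
# Map an IdP directory group to read access on an S3 prefix.
grant_params = {
    "AccountId": "111122223333",               # placeholder account
    "AccessGrantsLocationId": "default",       # the registered S3 location
    "AccessGrantsLocationConfiguration": {"S3SubPrefix": "analytics/*"},
    "Grantee": {
        "GranteeType": "DIRECTORY_GROUP",      # a group from the IdP via Identity Center
        "GranteeIdentifier": "group-id-placeholder",
    },
    "Permission": "READ",                      # READ, WRITE, or READWRITE
}
# With boto3:
#   boto3.client("s3control").create_access_grant(**grant_params)
# The Boto3 plugin then transparently requests, caches, and refreshes the
# temporary credentials (via GetDataAccess) for matching S3 requests.
```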

Amazon S3 Access Grants are available in all AWS Regions where AWS IAM Identity Center is available. To learn more about the Boto3 plugin, visit the GitHub repository. For pricing details, visit Amazon S3 pricing. To learn more, refer to the documentation.

Amazon Connect launches the ability to preferentially route contacts to specific agents within a queue

Amazon Connect now supports the ability to preferentially route a contact within a queue to specific agents. Using this new feature, you can now set the preferred agent(s) for a given contact, and if that agent is unavailable, fall back to the next set of routing criteria. You can also use this feature to integrate Amazon Connect’s routing with your own custom business logic or machine learning models to personalize matching each contact to the most suitable agent, resulting in better business outcomes and increased customer satisfaction. For example, you could route repeat contacts to the agent who previously handled the customer, and if that specific agent isn’t available, offer the contact to another available agent within the same queue.

This feature is available in all AWS regions where Amazon Connect is offered. To learn more about routing criteria, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.

Amazon S3 Access Grants now integrate with Amazon SageMaker Studio

Amazon S3 Access Grants now integrate with Amazon SageMaker Studio for machine learning (ML) model training. S3 Access Grants help you to map identities in Identity Providers (IdPs), such as Active Directory, or AWS Identity and Access Management (IAM) principals, to your ML datasets in S3. Using the AWS SDK for Python (Boto3) plugin within Amazon SageMaker Studio notebooks helps you easily use S3 Access Grants for ML training and inference.

Get started with S3 Access Grants in SageMaker Studio by launching a JupyterLab notebook. Next, import the Amazon S3 Access Grants Boto3 plugin into your notebook to start accessing your ML datasets in S3. The Boto3 plugin automatically requests, caches, and refreshes temporary credential tokens for all S3 requests that you run in your notebook. S3 Access Grants automatically update S3 permissions based on end-user group membership as users are added and removed from groups in the IdP.

Amazon S3 Access Grants with Amazon SageMaker Studio are available in all AWS Regions where SageMaker Studio is available. For pricing details, visit Amazon S3 pricing and Amazon SageMaker pricing. To learn more about S3 Access Grants, refer to the documentation.

Amazon GuardDuty EC2 Runtime Monitoring now supports Ubuntu and Debian OS

The Amazon GuardDuty EC2 Runtime Monitoring eBPF security agent now supports Amazon Elastic Compute Cloud (Amazon EC2) workloads that use the Ubuntu (20.04 and 22.04) and Debian (11 and 12) operating systems. If you use GuardDuty EC2 Runtime Monitoring with automated agent management, GuardDuty will automatically upgrade the security agent for your Amazon EC2 workloads. If you are not using automated agent management, you are responsible for upgrading the agent manually. You can view the current agent version running on your Amazon EC2 instances in the EC2 runtime coverage page of the GuardDuty console. If you are not yet using GuardDuty EC2 Runtime Monitoring, you can enable the feature for a 30-day free trial in a few steps.

GuardDuty Runtime Monitoring helps you identify and respond to potential threats, including instances or self-managed containers in your AWS environment associated with suspicious network activity, such as querying IP addresses associated with cryptocurrency-related activity, or connections to a Tor network as a Tor relay. Threats to compute workloads often involve remote code execution that leads to the download and execution of malware. GuardDuty Runtime Monitoring provides visibility into suspicious commands that involve malicious file downloads and execution across each step, providing earlier discovery of threats during initial compromise—before they become business-impacting events.

EvolutionaryScale’s ESM3, a frontier language model family for biology, now available on AWS

EvolutionaryScale’s ESM3 1.4B open source language model is now generally available on AWS through Amazon SageMaker JumpStart and AWS HealthOmics, with the full model family coming soon. Amazon SageMaker JumpStart is an ML hub with foundation models, built-in algorithms, and prebuilt ML solutions that can be deployed with just a few clicks. AWS HealthOmics is a purpose-built service that helps healthcare and life science organizations analyze biological data.

EvolutionaryScale, a frontier AI research lab and Public Benefit Corporation dedicated to developing AI for biology’s most complex problems, has released the cutting-edge ESM3 family of models. ESM3 is a biological frontier model family capable of generating entirely new proteins that have never existed in nature. ESM3 can generate proteins based on sequence, structure, and/or functional constraints – a novel "programmable biology" approach. Trained on billions of protein sequences spanning 3.8 billion years of evolution, ESM3 is one of the largest and most advanced generative AI models ever applied to biology.

EvolutionaryScale’s ESM3 1.4B open source model is available in Amazon SageMaker JumpStart initially in US East (Ohio) and in all available AWS HealthOmics regions, except Asia Pacific (Singapore). To learn more, read the blog and press release. To get started with ESM3, visit SageMaker JumpStart website and AWS HealthOmics GitHub repository.

Amazon EventBridge announces new console dashboard

Amazon EventBridge announces a new console dashboard providing you with a centralized view of your EventBridge resources, metrics, and quotas. The dashboard leverages CloudWatch metrics, allowing you to monitor account level metrics such as PutEvents, Matched Events, and Invocations for Buses, Concurrency and Throttles for Pipes, and Invocations and Errors for ScheduledGroups. Additionally, the dashboard allows you to view your default and applied quotas and navigate to the Service Quotas page to request increases, enabling you to respond quickly to changes in usage.

The Amazon EventBridge Event Bus is a serverless event router that enables you to create scalable event-driven applications by routing events between your own applications, SaaS applications, and AWS services. EventBridge Pipes provides a consistent, and cost-effective way to create point-to-point integrations between event producers and consumers. The EventBridge Scheduler makes it simple for developers to create, execute, and manage scheduled tasks at scale.

The new console dashboard surfaces account level metrics, providing deeper insight into your event-driven applications and allowing you to quickly identify and resolve issues as they arise. You can use the dashboard to answer basic questions such as “How many Buses and Pipes have I configured in my account?”, “What was my PutEvent traffic pattern for the last 3 hours?” or “What is the concurrency of my Pipe?”. You can further analyze and customize these dashboards in CloudWatch.

Amazon EC2 High Memory instances now available in Asia Pacific (Hong Kong) Region

Starting today, Amazon EC2 High Memory instances with 3 TiB of memory are available in the Asia Pacific (Hong Kong) Region. Customers can start using these new High Memory instances with On-Demand and Savings Plan purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS on what this launch means for our SAP customers, you can read his launch blog.
 

AWS ParallelCluster 3.10 with support for Amazon Linux 2023 and Terraform

AWS ParallelCluster 3.10 is now generally available. Key features of this release include support for Amazon Linux 2023 and Terraform. With Terraform support, customers can automate the deployment and management of clusters, similar to how they use Terraform to automate other parts of their AWS infrastructure. Other important features in this release include:

  1. Support for connecting clusters to an external Slurm database daemon (Slurmdbd) to follow best practices of enabling Slurm accounting in a multi-cluster environment.
  2. A new allocation strategy configuration to allocate EC2 Spot instances from the lowest-priced, highest-capacity availability pools to minimize job interruptions and save costs.

For more details on the release, review the AWS ParallelCluster 3.10.0 release notes.

AWS ParallelCluster is a fully-supported and maintained open-source cluster management tool that enables R&D customers and their IT administrators to operate high-performance computing (HPC) clusters on AWS. AWS ParallelCluster is designed to automatically and securely provision cloud resources into elastically-scaling HPC clusters capable of running scientific, engineering, and machine-learning (ML/AI) workloads at scale on AWS.

Amazon SageMaker Model Registry now supports cross-account machine learning (ML) model sharing

Today, we're excited to announce that Amazon SageMaker Model Registry now integrates with AWS Resource Access Manager (AWS RAM), making it easier to securely share and discover machine learning (ML) models across your AWS accounts.

Data scientists, ML engineers, and governance officers need access to ML models across different AWS accounts, such as development, staging, and production, to make the relevant decisions. With this launch, customers can now seamlessly share and access ML models registered in SageMaker Model Registry between different AWS accounts. Customers can simply go to the AWS RAM console or CLI, specify the Amazon SageMaker Model Registry model that needs to be shared, and grant access to specific AWS accounts or to everyone in the organization. Authorized users can then instantly discover and use those shared models in their own AWS accounts. This streamlines ML workflows, enables better visibility and governance, and accelerates the adoption of ML models across the organization.
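Programmatically, the sharing step described above could be sketched as an AWS RAM `CreateResourceShare` call. The model package group ARN and consumer account ID below are placeholders:

```python
# Hedged sketch: sharing a SageMaker Model Registry model package group
# through AWS RAM. The ARN, account ID, and share name are placeholders.
import json

def build_share_request(group_arn: str, consumer_account: str) -> dict:
    """Build an AWS RAM CreateResourceShare request payload."""
    return {
        "name": "shared-model-registry",
        "resourceArns": [group_arn],
        "principals": [consumer_account],   # account ID or organization ARN
        "allowExternalPrincipals": False,   # keep sharing inside the organization
    }

request = build_share_request(
    "arn:aws:sagemaker:us-east-1:111122223333:model-package-group/my-models",
    "444455556666",
)
# With credentials configured you would then run:
#   boto3.client("ram").create_resource_share(**request)
print(json.dumps(request, indent=2))
```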

Amazon EventBridge Pipes now supports AWS PrivateLink

Amazon EventBridge Pipes now supports AWS PrivateLink, allowing you to access Pipes from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet. With today’s launch, you can leverage EventBridge Pipes features from a private subnet without the need to deploy an internet gateway, configure firewall rules, or set up proxy servers.

Amazon EventBridge lets you use events to connect application components, making it easier to build scalable event-driven applications. EventBridge Pipes provides a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers. Pipes enables you to send data from one of 7 different event sources to any of the 20+ targets supported by the EventBridge Event Bus, including HTTPS endpoints through EventBridge API Destinations and event buses themselves. Today’s release of AWS PrivateLink support further reduces the amount of integration code you need to write and infrastructure you need to maintain when building event-driven applications.

AWS PrivateLink support for EventBridge Pipes is available in all AWS Regions where EventBridge Pipes is available.

To get started, follow the directions provided in the AWS PrivateLink documentation. To learn more about Amazon EventBridge Pipes, visit the EventBridge documentation.
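As a rough illustration, connecting a private subnet to Pipes means creating a VPC interface endpoint. The service name below follows the usual PrivateLink convention (`com.amazonaws.<region>.pipes`) and the IDs are placeholders; verify the exact service name in the AWS PrivateLink documentation:

```python
# Hedged sketch: creating a VPC interface endpoint for EventBridge Pipes.
# Service name is assumed from PrivateLink naming conventions; VPC, subnet,
# and security group IDs are placeholders.

def build_endpoint_request(region: str, vpc_id: str,
                           subnet_ids: list, sg_ids: list) -> dict:
    """Build a CreateVpcEndpoint payload for the Pipes service."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.pipes",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        "PrivateDnsEnabled": True,  # resolve the Pipes endpoint privately
    }

request = build_endpoint_request(
    "us-east-1", "vpc-0123456789abcdef0",
    ["subnet-0123456789abcdef0"], ["sg-0123456789abcdef0"],
)
# With credentials configured you would then run:
#   boto3.client("ec2").create_vpc_endpoint(**request)
```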
 

Amazon SageMaker now supports SageMaker Studio Personalization

We are excited to announce that Amazon SageMaker now allows admins to personalize the SageMaker Studio experience for their end users. Admins can choose to hide applications and ML tools in SageMaker Studio based on their end users' preferences.

Starting today, admins can use the new personalization capability while setting up domains and user profiles in the SageMaker console or using APIs, and tailor the SageMaker Studio interface. They can curate experiences by selectively showing or hiding specific ML tools, applications, and IDEs for specific personas to align closely with how users interact with the platform. This improves SageMaker Studio usability and provides a more intuitive and user-friendly experience. Data scientists and ML engineers can now easily discover and select the ML features required to complete their workflows, leading to better developer productivity.

You can get started by creating or editing a domain or a user profile in the SageMaker console or by using SageMaker APIs. This feature is available in all AWS Regions where SageMaker Studio is currently available. To learn more, visit the documentation.
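Via the API, hiding specific tools for a persona might look like the following sketch. The `StudioWebPortalSettings` field names and the hidden app/tool values are assumptions to be confirmed against the SageMaker API reference; the domain ID and profile name are placeholders:

```python
# Hedged sketch: creating a user profile that hides selected Studio apps
# and ML tools. Field names and values are assumptions; IDs are placeholders.

def build_user_profile_request(domain_id: str, profile_name: str) -> dict:
    """Build a CreateUserProfile payload with a personalized Studio view."""
    return {
        "DomainId": domain_id,
        "UserProfileName": profile_name,
        "UserSettings": {
            "StudioWebPortalSettings": {
                "HiddenAppTypes": ["CodeEditor"],   # hide the Code Editor IDE
                "HiddenMlTools": ["DataWrangler"],  # hide Data Wrangler
            }
        },
    }

request = build_user_profile_request("d-0123456789ab", "data-scientist-profile")
# With credentials configured you would then run:
#   boto3.client("sagemaker").create_user_profile(**request)
```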
 

Amazon Q in Connect now recommends step-by-step guides

Amazon Q in Connect, a generative-AI powered assistant for contact center agents, now recommends step-by-step guides in real-time, which agents use to quickly take action to resolve customers' issues. Amazon Q in Connect uses the real-time conversation with a customer to detect the customer's intent and provides a guided workflow that leads an agent through each step needed to solve the issue, reducing handle time and increasing first contact resolution rates and customer satisfaction. For example, when a customer contacts a financial services company, Amazon Q in Connect analyzes the conversation and detects the customer wants to open a retirement plan. Amazon Q in Connect then provides the agent with a guide that enables the agent to collect the necessary information, deliver the required disclosures, and automatically open the account. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.

Announcing Data Quality Definition Language (DQDL) enhancements for AWS Glue Data Quality

Customers use AWS Glue Data Quality, a feature of AWS Glue, to measure and monitor the quality of their data. They author data quality rules using DQDL to ensure their data is accurate. Customers need the ability to author rules for complex business scenarios that include filter conditions, exclusion conditions, validations for empty values, and composite rules. Previously, customers authored SQL in the CustomSQL rule type to perform these data quality validations. Today, AWS Glue announces a set of enhancements to DQDL that lets data engineers easily author complex data quality rules using native rule types. DQDL now supports:

  • A NOT operator, allowing customers to exclude certain values in their rules.
  • New keywords such as NULL, EMPTY, and WHITESPACES_ONLY to author rules that capture missing values without complex regular expressions.
  • Composite rules that let customers author sophisticated business rules, with options to manage the evaluation order of those rules.
  • A WHERE clause in DQDL to filter data before applying rules.
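An illustrative ruleset exercising these enhancements might look like the following. This is a hedged sketch: the column names are hypothetical and the exact syntax should be checked against the DQDL guide:

```
Rules = [
    # NOT operator excluding a value
    NOT ColumnValues "status" in ["cancelled"],
    # Keyword-based check for missing values
    ColumnValues "email" != EMPTY,
    # Composite rule combining two checks
    (RowCount > 0) and (IsComplete "order_id"),
    # WHERE clause filtering rows before the rule is applied
    ColumnValues "amount" > 0 where "currency = 'USD'"
]
```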

Refer to the DQDL guide for more information.

AWS Glue Data Quality is available in all commercial regions where AWS Glue is available. To learn more, visit the AWS Glue product page and our documentation.
