
Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
Announcing the general availability of AWS Backup logically air-gapped vault

Today, AWS Backup announces the general availability of logically air-gapped vault, a new type of AWS Backup vault that allows secure sharing of backups across accounts and organizations, supporting direct restore to help reduce recovery time from a data loss event. Logically air-gapped vault stores immutable backup copies that are locked by default, and isolated with encryption using AWS owned keys.

You can get started with logically air-gapped vault using the AWS Backup console, API, or CLI. Target backups to a logically air-gapped vault by specifying it as a copy destination in your backup plan. Share the vault for recovery or restore testing with other accounts using AWS Resource Access Manager (RAM). Once shared, you can initiate direct restore jobs from that account, eliminating the overhead of copying backups first.
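The copy-destination setup described above can be sketched with the AWS SDK for Python. This is a minimal sketch, not the definitive workflow: the plan, vault names, and account ID below are hypothetical placeholders, and the full parameter shapes are in the AWS Backup API reference.

```python
# Sketch only: vault/plan names and the account ID are hypothetical placeholders.
# The guarded call at the bottom requires boto3 and AWS credentials.

# ARN of a logically air-gapped vault, used as a copy destination.
AIR_GAPPED_VAULT_ARN = (
    "arn:aws:backup:us-east-1:111122223333:backup-vault:example-air-gapped-vault"
)

# A backup plan whose daily rule copies recovery points into the
# air-gapped vault by listing it under CopyActions.
BACKUP_PLAN = {
    "BackupPlanName": "daily-with-air-gapped-copy",
    "Rules": [
        {
            "RuleName": "daily",
            "TargetBackupVaultName": "example-standard-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "CopyActions": [
                {"DestinationBackupVaultArn": AIR_GAPPED_VAULT_ARN},
            ],
        }
    ],
}


def create_plan():
    """Create the backup plan (needs AWS credentials and permissions)."""
    import boto3

    return boto3.client("backup").create_backup_plan(BackupPlan=BACKUP_PLAN)
```

Sharing the vault for direct restore is then done separately through AWS RAM, as described above.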

AWS Backup support for logically air-gapped vault is available in the following Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Middle East (Bahrain, UAE), Israel (Tel Aviv) and South America (Sao Paulo). It currently supports Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Aurora, Amazon DocumentDB, Amazon Neptune, AWS Storage Gateway, Amazon Simple Storage Service (S3), Amazon Elastic File System (EFS), Amazon DynamoDB, Amazon Timestream, AWS CloudFormation, and VMware. For more information visit the AWS Backup product page, documentation, and launch blog.

Claude 3.5 Sonnet and Claude 3 Haiku now available in more regions

Beginning today, Amazon Bedrock customers in US West (Oregon), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore) can access Claude 3.5 Sonnet. Additionally, Amazon Bedrock customers in Asia Pacific (Tokyo) and Asia Pacific (Singapore) can now access Claude 3 Haiku.

Claude 3.5 Sonnet is Anthropic’s latest foundation model and ranks among the most intelligent in the world. With Claude 3.5 Sonnet, you can now get intelligence better than Claude 3 Opus, at one fifth the cost. Claude 3 Haiku is Anthropic’s most compact model, and one of the fastest, most affordable options on the market for its intelligence category.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other foundation models from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustainable growth from generative AI while maintaining privacy and security.

To learn more, read the Claude in Amazon Bedrock product page and documentation. To get started with Claude 3.5 Sonnet and Claude 3 Haiku in Amazon Bedrock, visit the Amazon Bedrock console.
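As a minimal sketch of calling the model from code (assuming the AWS SDK for Python and model access already granted in Bedrock), a request to the Bedrock Converse API might look like the following; the prompt and region are illustrative choices.

```python
# Sketch only: requires boto3, AWS credentials, and Claude model access
# granted in the Amazon Bedrock console.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # Claude 3.5 Sonnet

REQUEST = {
    "modelId": MODEL_ID,
    "messages": [
        {"role": "user", "content": [{"text": "Explain Amazon Bedrock in one sentence."}]}
    ],
    "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
}


def ask_claude():
    """Send the request via the Bedrock Converse API (needs credentials)."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-west-2")
    response = client.converse(**REQUEST)
    return response["output"]["message"]["content"][0]["text"]
```

The Converse API gives one request shape across the Bedrock model catalog, so swapping in Claude 3 Haiku is a one-line change to `MODEL_ID`.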
 

Announcing delegated administrator for Cost Optimization Hub

Cost Optimization Hub is an AWS Billing and Cost Management feature that helps you consolidate and prioritize cost optimization recommendations, so that you can get the most out of your AWS spend. Starting today, you can designate a member account as the delegated administrator, allowing that account to view cost optimization recommendations in the Cost Optimization Hub with administrator privileges, giving you greater flexibility to identify resource optimization opportunities centrally.

Delegating an administrator allows you to manage Cost Optimization Hub independently of the management account and incorporate AWS security best practices, which recommend delegating responsibilities outside of the management account where possible. Cost Optimization Hub delegated administrators can easily identify, filter, and aggregate over 15 types of AWS cost optimization recommendations, such as EC2 instance rightsizing recommendations, idle resource recommendations, and Savings Plans recommendations, across your AWS accounts and AWS Regions through a single console page.
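Designating the delegated administrator happens through AWS Organizations from the management account. The sketch below assumes the AWS SDK for Python; the account ID is a placeholder, and the service principal string is an assumption to verify against the user guide.

```python
# Sketch only: the member-account ID is a placeholder, and the service
# principal string is an assumption -- confirm it in the user guide.
DELEGATION = {
    "AccountId": "111122223333",  # member account to receive delegated admin
    "ServicePrincipal": "cost-optimization-hub.bcm.amazonaws.com",
}


def register_delegated_admin():
    """Run from the Organizations management account (needs credentials)."""
    import boto3

    return boto3.client("organizations").register_delegated_administrator(
        **DELEGATION
    )
```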

Delegated administrator for Cost Optimization Hub is available in all AWS Regions where Cost Optimization Hub and AWS Organizations are available.

To learn more about delegated administrator for Cost Optimization Hub, see the user guide.

Amazon EC2 High Memory instances now available in Europe (Paris) Region

Starting today, Amazon EC2 High Memory instances with 9 TiB of memory (u-9tb1.112xlarge) are available in the Europe (Paris) Region. Customers can start using these new High Memory instances with On-Demand and Savings Plans purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS on what this launch means for our SAP customers, you can read his launch blog.
 

Amazon EFS now supports up to 30 GiB/s (a 50% increase) of read throughput

Amazon EFS provides serverless, fully elastic file storage that makes it simple to set up and run file workloads in the AWS cloud. In March 2024, we increased the Elastic Throughput read throughput limit to 20 GiB/s (from 10 GiB/s) to support the growing demand for read-heavy workloads such as AI and machine learning. Now, we are further increasing the read throughput to 30 GiB/s, extending EFS’s simple, fully elastic, and provisioning-free experience to support throughput-intensive AI and machine learning workloads for model training, inference, financial analytics, and genomic data analysis.

The increased throughput limits are immediately available for EFS file systems using the Elastic Throughput mode in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. To learn more, see the Amazon EFS Documentation or create a file system using the Amazon EFS Console, API, or AWS CLI.
 

Amazon QuickSight now includes nested filters

Amazon QuickSight includes a new advanced filter type: nested filters. Authors can use a nested filter to filter one field in a dataset by the values of another field in the same dataset. You may know this concept by other names: in SQL it corresponds to a correlated subquery, and in shopping analysis it underpins market basket analysis.

Nested filtering enables authors to show additional contextual data rather than filtering it out when it doesn’t meet an initial condition. This is useful in many scenarios, including market basket analysis: for example, you can now find sales quantity by product for the customers who purchased (or did not purchase) a specific product, identify the group of customers who purchased none of a selected list of products, or find those who only purchased products from a specific list.

Nested filters are now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland and London), South America (São Paulo) and AWS GovCloud (US-West). See here for QuickSight regional endpoints.

For more on how to set up a nested filter, go to our documentation here and blog post here.
 

Announcing G4dn WorkSpaces Personal bundles with WSP for Windows

Amazon WorkSpaces is introducing G4dn WorkSpaces Personal bundles with WSP (WorkSpaces Streaming Protocol) for Windows. Now you can run your graphics-intensive and accelerated applications on Windows using G4dn WorkSpaces with WSP on AWS.


The Graphics G4dn bundles offer cost-effective solutions for graphics applications optimized for NVIDIA GPUs, using NVIDIA libraries such as CUDA, cuDNN, OptiX, and Video Codec SDK. With WSP, your users benefit from a highly responsive remote desktop experience with 4K resolution and support for multiple monitors. G4dn WorkSpaces with WSP is available for Windows Server 2022. Alternatively, you can bring your own Windows desktop licenses for Windows 10/11.


You can deploy the Graphics G4dn bundles for Windows with WSP in AWS Regions where WorkSpaces is available, except for the Africa (Cape Town) and Israel (Tel Aviv) Regions. You can launch G4dn Graphics bundles for Windows with WSP from the AWS Management Console, AWS API, or AWS CLI. See the Amazon WorkSpaces pricing page for more information.

Amazon EC2 High Memory instances now available in Europe (Frankfurt) Region

Starting today, Amazon EC2 High Memory instances with 18 TiB of memory (u-18tb1.112xlarge) are available in the Europe (Frankfurt) Region. Customers can start using these new High Memory instances with On-Demand and Savings Plans purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS on what this launch means for our SAP customers, you can read his launch blog.
 

Amazon Titan Image Generator v2 now available in Amazon Bedrock

We’re excited to announce the launch of Amazon Titan Image Generator v2, a new image generation model that brings customers additional control and flexibility, including image conditioning using ControlNet, subject consistency, and background removal.

With Amazon Titan Image Generator v2 image conditioning, you can use a reference image to guide the generation of the output image. Image conditioning uses ControlNet techniques, like canny edge to preserve edges, and segmentation to identify regions in the reference image, to guide image generation. Amazon Titan Image Generator v2 also lets you fine-tune the model for subject consistency, allowing you to preserve specific images, objects, or themes in the output. Amazon Titan Image Generator v2’s background removal supports intelligent detection and segmentation, allowing it to work with multiple foreground objects and overlapping elements in the image. With Amazon Titan Image Generator v2, you can now shape your visual narratives like never before, guiding the generative process with precision and infusing your artistic vision into every pixel.

Amazon Titan Image Generator v2 is now available in the US East (N. Virginia) and US West (Oregon) Regions. To learn more, read about Amazon Titan Image Generator v2 in the AWS News launch blog, visit the Amazon Titan product page, or refer to our documentation. To get started with Amazon Titan Image Generator v2 in Amazon Bedrock, visit the Amazon Bedrock console.

Large language models powered by Amazon SageMaker JumpStart available in Redshift ML

Amazon Redshift ML enables customers to create, train, and deploy machine learning models on their Redshift data using familiar SQL commands. Now, you can leverage pretrained, publicly available LLMs in Amazon SageMaker JumpStart as part of Redshift ML. For example, you can use LLMs to summarize feedback, perform entity extraction, and conduct sentiment analysis on data in your Redshift tables. Large language models in Redshift ML are now generally available, empowering you to bring the power of generative AI to your data warehouse.

With this capability, Amazon Redshift ML removes the complexities of building custom machine learning pipelines to perform generative AI tasks like text summarization or categorization. To get started, create an endpoint using one of the supported text-based LLMs in SageMaker JumpStart, create a Redshift ML model that references the endpoint, and then invoke the LLM endpoint with standard SQL commands in Redshift ML on your data in Redshift.
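The two steps above might look like the following in SQL. This is a hypothetical sketch: the endpoint, model, function, and table names are placeholders, and the exact CREATE MODEL options for LLM endpoints are documented in the Amazon Redshift database developer guide.

```sql
-- Hypothetical names throughout; check the developer guide for the
-- exact CREATE MODEL syntax for SageMaker JumpStart LLM endpoints.
CREATE MODEL feedback_llm
FUNCTION summarize_feedback
SAGEMAKER 'jumpstart-llm-text-endpoint'   -- endpoint created in JumpStart
IAM_ROLE default
SETTINGS (MODEL_TYPE LLM);

-- Invoke the endpoint with standard SQL over data already in Redshift.
SELECT review_id, summarize_feedback(review_text) AS summary
FROM customer_reviews;
```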

Amazon Redshift support for large language models in Amazon SageMaker JumpStart is now available in all AWS Regions where both Amazon Redshift and Amazon SageMaker JumpStart are available. To learn more, visit the Amazon Redshift database developer guide.
 

Simplified Migration Acceleration Program for VMware funding experience in AWS Partner Central

Today, Amazon Web Services, Inc. (AWS) announces a simplified Migration Acceleration Program (MAP) template in AWS Partner Central that includes the VMware Strategic Partner Incentive (SPI). Eligible AWS Partners can leverage the enhanced MAP template to accelerate more VMware customer migration opportunities with a simple approval workflow and access to SPIs.

The simplified MAP template accelerates VMware opportunities by providing better speed to market with fewer AWS approval stages. The AWS Partner Funding Portal (APFP) automatically calculates the eligible VMware SPI incentive and creates associated claim milestones. This improves overall partner productivity by eliminating manual steps. Now, partners can skip navigating to a separate Partner Initiative Funding (PIF) benefit and directly request the VMware SPI as part of the standard MAP template.

The simplified MAP template is available to all partners at the Validated stage in AWS Partner Central and to AWS Migration Competency Partners. To learn more, review the 2024 APFP user guide.
 

New version of Amazon ECR basic scanning is now generally available

Today, Amazon Elastic Container Registry (ECR) announced the general availability of a new version of basic scanning. The new version of ECR basic scanning uses Amazon’s native scanning technology, which is designed to provide customers with improved scanning results and vulnerability detection across a broad set of popular operating systems. This allows customers to further strengthen the security of their container images.

ECR basic scanning enables customers to identify software vulnerabilities in their ECR container images. Customers can either scan their container images manually or via configurations that specify which repositories should be scanned when an image is pushed. Today’s launch enables customers to detect container image vulnerabilities across popular operating systems and receive improved scan findings.

The new version of ECR basic scanning is now generally available in all AWS commercial regions and the AWS GovCloud (US) Regions at no additional cost. Existing customers can switch to the new version by using the AWS console or using the new put-account-setting API. New ECR accounts are automatically opted into using the new scanning version. To learn more about ECR basic scanning, this change, and supported regions, please visit our documentation. ECR also offers enhanced scanning which is powered by Amazon Inspector and comes with additional security benefits, including scanning for programming language package vulnerabilities. A complete list of differences between ECR basic scanning and enhanced scanning can be found here.
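The console or API opt-in described above can be sketched with the AWS SDK for Python. The setting name and value below reflect this launch as best understood here; verify both against the ECR documentation before use.

```python
# Sketch only: verify the setting name/value against the ECR documentation.
NEW_SCAN_SETTING = {
    "name": "BASIC_SCAN_TYPE_VERSION",  # account-level basic scanning version
    "value": "AWS_NATIVE",              # opt in to Amazon's native scanning
}


def switch_to_native_basic_scanning():
    """Opt an existing account into the new basic scanning version."""
    import boto3

    return boto3.client("ecr").put_account_setting(**NEW_SCAN_SETTING)
```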
 

AWS Control Tower releases two new descriptive control APIs

AWS Control Tower customers can now programmatically retrieve descriptions of managed controls. These APIs enable automation around AWS Control Tower’s library of managed controls, making control deployment easier. With this release, customers can extend AWS Control Tower governance into Regions where some of their enabled controls are not available, and can enable a control in additional Regions even when the control is not supported in all of their governed Regions. AWS Control Tower now supports the following APIs:

  • ListControls – This API call returns a paginated list of all available controls in AWS Control Tower controls library.
  • GetControl – This API call returns details of an enabled control which includes target identifier, control summary, target regions, and drift status of the control.
Announcing Terraform support for Amazon Timestream for InfluxDB deployments

We are excited to announce the launch of Terraform compatibility for Amazon Timestream for InfluxDB. Terraform support enables you to automate and streamline your time-series data management workflows with infrastructure-as-code (IaC).

With Terraform support, you can now define and manage your Timestream for InfluxDB instances, databases, and tables using Terraform configuration files, reducing manual errors and increasing efficiency. This feature makes it easier to version and track changes to your infrastructure, providing a more seamless experience for industrial and IoT applications.
To get started, use the AWS-provided Terraform Reference Engine on GitHub that configures the code and infrastructure required for the Terraform open source engine to work with Amazon Timestream for InfluxDB.
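A Timestream for InfluxDB instance defined in Terraform might look like the sketch below. All names, sizes, and referenced network resources are placeholders, and the resource schema should be checked against the Terraform AWS provider documentation.

```hcl
# Sketch only: names, sizes, and referenced VPC resources are placeholders.
resource "aws_timestreaminfluxdb_db_instance" "example" {
  name                   = "example-influxdb"
  db_instance_type       = "db.influx.medium"
  allocated_storage      = 20                  # GiB of InfluxDB storage
  username               = "admin"
  password               = var.influx_admin_password
  organization           = "example-org"       # initial InfluxDB organization
  bucket                 = "example-bucket"    # initial InfluxDB bucket
  vpc_subnet_ids         = [aws_subnet.example.id]
  vpc_security_group_ids = [aws_security_group.example.id]
}
```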

Amazon Timestream for InfluxDB is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Europe (Ireland), Europe (Frankfurt), and Europe (Stockholm).

To learn more about this new feature and how to get started, visit the Amazon Timestream for InfluxDB product page, documentation, and pricing page. You can also explore the Terraform documentation for more information on how to integrate Terraform with Timestream for InfluxDB.
 

Amazon OpenSearch Service OR1 instances now available in Sao Paulo

OR1, the OpenSearch Optimized Instance family, is now available in South America (Sao Paulo). OR1 delivers up to 30% price-performance improvement over existing instances (based on internal benchmarks) and uses Amazon S3 to provide 11 9s of durability. The new OR1 instances are best suited for indexing-heavy workloads and offer better indexing performance compared to the existing memory-optimized instances available on OpenSearch Service.

OR1 enables customers to economically and reliably scale their OpenSearch deployments without compromising the interactive analytics experience they expect. Each OR1 instance is provisioned with compute, local instance storage for caching, and remote Amazon S3-based managed storage. OR1 offers pay-as-you-go pricing and reserved instances, with a simple hourly rate for the instance, local instance storage, as well as provisioned managed storage. OR1 instances come in sizes ‘medium’ through ‘16xlarge’, and offer compute, memory, and storage flexibility. OR1 instances support OpenSearch versions 2.11 and above. Please refer to the Amazon OpenSearch Service pricing page for pricing details.

OR1 instance family is now available on Amazon OpenSearch Service in South America (Sao Paulo). Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability. To get started with OR1 instances, visit our documentation.
 

Amazon Cognito enhances Advanced Security Features (ASF) to disallow password reuse and stream security events

Amazon Cognito enhances Advanced Security Features (ASF) to address additional enterprise needs. You now have the option to disallow users from reusing previous passwords, helping you address compliance needs. Additionally, you now have the option to stream security events from ASF to an Amazon S3 bucket, Amazon Kinesis Firehose, or CloudWatch Insights. This allows you to combine ASF events with security signals from other AWS and third-party tools, helping you gain better insights and elevating security.

Amazon Cognito is a service that makes it simpler to add authentication, authorization, and user management to your web and mobile apps. The service provides authentication for applications with millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

This new feature is now available as part of Cognito advanced security features in all AWS Regions, except AWS GovCloud (US) Regions.

To get started, see the following resources:

Amazon Connect now supports additional agent scheduling staffing rules

Amazon Connect now supports additional agent scheduling staffing rules, making it easier to schedule agents while complying with labor, union, and other contractual rules. You can now configure five new rules for scheduling agents in Amazon Connect: minimum rest period between shifts, minimum rest period per week, maximum consecutive working days, maximum consecutive day of the week worked, and shift cannot start earlier than the previous day's shift. Once configured, these rules will be applied when new schedules are generated as well as when existing schedules are edited. These additional rules in agent scheduling make day-to-day management of agent schedules easier for managers.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To get started with Amazon Connect agent scheduling, click here.

Elastic Load Balancing Trust Store now supports cross-account sharing using AWS Resource Access Manager

Elastic Load Balancing (ELB) trust stores now support a new capability that enables cross-account sharing via AWS Resource Access Manager (AWS RAM). This feature allows customers to centrally manage their ELB trust stores across multiple accounts to streamline trust store management and enable consistent mutual TLS configurations across Application Load Balancers (ALBs).

With this launch, ELB Trust Store owners can now share their trust stores and revocation lists with other AWS accounts, organizational units (OUs), and specific IAM roles and users through AWS RAM. Security Admins can now maintain a single or smaller number of trust stores within AWS. Application developers can ensure that their ALBs are reliably authenticating certificate based identities by simply attaching the trust store(s) managed by their respective security admins while configuring their load balancers. This improves operational efficiency while using Mutual TLS and reduces the potential for human error associated with managing disparate trust stores and revocation lists.

This feature is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more, refer to the ELB documentation.

AWS CodeBuild now supports three new Arm-based compute types

AWS CodeBuild now supports building and testing your software applications on three new Arm-based compute types: Medium, X-Large and 2X-Large. You can select up to 48 vCPUs and 96 GB memory to run more resource-intensive workloads.

With the addition of these new compute types, you now have similar compute options for running x86 and Arm workloads on CodeBuild. You can build and test your software applications on Arm without the need to emulate or cross-compile. CodeBuild supports AWS Graviton3 processors, which deliver a leap in performance and capabilities over previous generations of AWS Graviton processors.

These compute types are now available in: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), South America (São Paulo).

To learn more about CodeBuild’s compute options, please visit our documentation. To get started, visit the AWS CodeBuild product page.
 

Amazon RDS for Oracle now supports July 2024 Release Update

Amazon Relational Database Service (Amazon RDS) for Oracle now supports the July 2024 Release Update (RU) for Oracle Database versions 19c and 21c.

To learn more about Oracle RUs supported on Amazon RDS for each engine version, see the Amazon RDS for Oracle Release notes. If the auto minor version upgrade (AmVU) option is enabled, your DB instance is upgraded to the latest quarterly RU six to eight weeks after it is made available by Amazon RDS for Oracle in your AWS Region. These upgrades will happen during the maintenance window. To learn more, see the Amazon RDS maintenance window documentation.

For more information about the AWS Regions where Amazon RDS for Oracle is available, see the AWS Region table.
 

AWS Resilience Hub introduces improved resource grouping capabilities

Today, AWS Resilience Hub is launching improved resource grouping, which more intelligently groups your resources into Application Components (resources that work and fail as a single unit) when you onboard your application.

Customers who want to manage and improve the resilience posture of their applications on AWS can now more quickly and easily onboard their application to Resilience Hub with the new improved resource grouping feature. This feature simplifies onboarding by automatically and accurately organizing resources into appropriate AppComponents. Customers are presented with the grouping recommendations and can determine if the recommendations are accurate and apply to their case. This is particularly valuable for complex, cross-Region applications, as it helps minimize total onboarding time and enables more efficient and organized resilience assessments.

The improved resource grouping feature is available in all AWS Regions where Resilience Hub is supported.

To learn more about Resilience Hub, visit the documentation and product pages.

Amazon DataZone offers business use case-based grouping with data products

Today, Amazon DataZone introduces data products, which enable the grouping of data assets into well-defined, self-contained packages tailored for specific business use cases. For example, a marketing analysis data product can bundle various data assets, such as marketing campaign data, pipeline data, and customer data. With data products, customers can simplify discovery and subscription processes, aligning them with business objectives and reducing redundancy in handling individual assets.

To get started, data producers can create a collection of relevant cataloged assets in the Amazon DataZone portal, add business context, and publish it as a data product unit. This makes it easier for data consumers to find all necessary data assets for specific use cases. Consumers can subscribe to all assets within a data product through a single approval workflow. Data producers can also manage the product's lifecycle, including managing subscriptions and removing it from the catalog. Amazon DataZone offers API support for data product workflows, facilitating integration and automation.

The feature is supported in all the AWS commercial regions where Amazon DataZone is currently available.

Check out this blog and video to learn more about data products in Amazon DataZone. Get started with the technical documentation.

Amazon Connect now supports audio optimization for Amazon WorkSpaces cloud desktops

Amazon Connect now makes it easier to deliver high-quality voice experiences in Amazon WorkSpaces Virtual Desktop Infrastructure (VDI) environments. Amazon Connect automatically optimizes audio by redirecting media from your agent’s local desktop to Connect, simplifying the agent experience and improving audio quality by reducing network hops. Agents can simply log in to the Amazon WorkSpaces client application on a supported device, or a web browser, and start accepting calls through your custom agent user interface (i.e., a custom Contact Control Panel) built with the APIs in the Amazon Connect open source JavaScript libraries.

These new features are available in all AWS regions where Amazon Connect is offered. To learn more, please see the documentation.
 

Amazon Verified Permissions improves support for OIDC identity providers

Amazon Verified Permissions has simplified implementing fine-grained authorization for developers using third-party identity providers, such as Okta, CyberArk, and Transmit Security. Developers can now authorize user actions based on attributes and group memberships managed within their own OpenID Connect (OIDC)-compliant identity provider. For example, in an insurance claims processing application, you can specify that only users in the “manager” group who have completed the “high value claim training” are allowed to approve claims for more than $10,000.

Verified Permissions provides fine-grained authorization for the applications that you build, allowing you to implement permissions as Cedar policies rather than application code. This feature simplifies implementing fine-grained authorization by enabling you to pass OIDC tokens to authorize requests. When authorizing the request, Amazon Verified Permissions validates the OIDC token and evaluates Cedar policies using user attributes and groups extracted from the token.
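The claims example above could be expressed as a Cedar policy along these lines. This is a sketch: the entity types, attribute names, and action name are hypothetical and depend on your policy store’s schema and on how attributes and groups are mapped from the OIDC token.

```cedar
// Sketch only: entity and attribute names depend on your schema.
permit (
  principal in InsuranceApp::Group::"manager",
  action == InsuranceApp::Action::"approveClaim",
  resource
) when {
  resource.amount > 10000 &&
  principal.trainings.contains("high value claim training")
};
```

Under Cedar’s default-deny model, no other user can approve a claim over $10,000 unless some other policy permits it.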

You can get started with this feature by visiting Amazon Verified Permissions in the AWS console and creating a new policy store. We have partnered with leading identity providers, CyberArk, Okta, and Transmit Security, to test this feature and ensure a smooth experience. This feature is available in all Regions where Amazon Verified Permissions is available. For more information, visit the Verified Permissions product page.
 

Amazon DataZone achieves PCI DSS Certification

Amazon DataZone has obtained the Payment Card Industry Data Security Standard (PCI DSS) compliance certification, demonstrating that it meets requirements established by the PCI Security Standards Council for handling payment account data securely, as required by financial and insurance industry customers handling credit card payments.

The certification includes the 2024 PCI 3-D Secure (3DS) assessment and the shared responsibility guide. These are now available to AWS customers in the AWS Management Console through AWS Artifact, which can help enable PCI 3DS attestation for customers. This attestation may be required to support application-based authentication, digital wallet integration, and browser-based e-commerce transactions using the AWS Services.

Amazon DataZone is a data management service that makes it faster and easier for customers to catalog, discover, share, and govern data between data producers and consumers in their organization. For more information about Amazon DataZone and how to get started, refer to our product page and review the Amazon DataZone technical documentation.

AWS Payment Cryptography is now available in four new regions across Europe and Asia

AWS Payment Cryptography is now available in four new Regions: Europe (Frankfurt), Europe (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Adding new regional support allows customers with low-latency payment applications to build, deploy, or migrate into additional AWS Regions without relying on cross-Region support. Customers using AWS Payment Cryptography can simplify cryptographic operations in their payment applications with a service that scales elastically, provides modern APIs, and integrates with AWS services such as IAM and CloudTrail.

AWS Payment Cryptography is a managed AWS service that provides access to cryptographic functions and key management used in payment processing in accordance with Payment Card Industry (PCI) security standards without the need to procure dedicated payment HSM instances. The service provides customers performing payment functions such as acquirers, payment facilitators, networks, switches, processors, and banks with the ability to move their payment cryptographic operations closer to applications in the cloud and minimize dependencies on auxiliary data centers or colocation facilities containing dedicated payment HSMs.

AWS Payment Cryptography is available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Europe (Ireland, Frankfurt) and Asia Pacific (Singapore, Tokyo).

To learn more about the service, see the AWS Payment Cryptography user guide, and visit the AWS Payment Cryptography page for pricing details and availability in additional regions.

Amazon Connect launches the ability to configure when whisper flows are used

Amazon Connect now supports the ability to configure when whisper flows are used during a contact to optimize flow performance. A whisper flow is what a customer or agent experiences at the moment they are connected to each other in a voice or chat conversation. With this launch, you can turn off whisper flows, helping you further optimize your flow’s performance and reduce contact duration. For example, you can choose to turn off whisper flows during an outbound or callback scenario to save time when the agent and customer are expecting the contact.

This feature is available in all AWS regions where Amazon Connect is offered. To learn more about flows, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.

AWS Database Migration Service (DMS) expands DMS homogeneous data migrations feature to 29 AWS regions

In June 2023, AWS Database Migration Service (DMS) launched a homogeneous data migration feature to simplify and accelerate like-to-like migrations to Amazon Relational Database Service (Amazon RDS) or Amazon Aurora.

Today, we are pleased to announce that the DMS homogeneous data migrations feature is now generally available in 29 AWS Regions (including the 9 Regions from the June 2023 launch). These include: US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore), US West (N. California), South America (São Paulo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Canada (Central), Europe (London), Europe (Paris), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Middle East (Bahrain), Europe (Milan), Africa (Cape Town), Asia Pacific (Jakarta), Middle East (UAE), Europe (Zurich), Europe (Spain), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), and Canada West (Calgary).

The DMS homogeneous data migration feature uses built-in native database tooling to provide simple and performant like-to-like migrations with minimal downtime. It supports migrating from PostgreSQL (versions 10.4 to 14.x), MySQL (versions 5.7 and higher), MariaDB (versions 10.2 and higher), MongoDB (versions 4.x, 5.x, and 6.0), and Amazon DocumentDB (versions 3.6, 4.0, and 5.0) databases, with both full-load and ongoing replication options. Supported source locations include on premises, Amazon EC2, and Amazon DocumentDB; supported targets include Amazon Relational Database Service (Amazon RDS), Amazon Aurora, Amazon DocumentDB, and Amazon DocumentDB elastic clusters.

To learn more, see AWS DMS homogeneous migrations in the AWS Database Migration Service Documentation.

Amazon EBS Fast Snapshot Restore (FSR) is now available in six additional regions

Amazon EBS Fast Snapshot Restore (FSR) is now available in the AWS Europe (Zurich), Europe (Spain), Middle East (UAE), Asia Pacific (Melbourne), Asia Pacific (Hyderabad), and Israel (Tel Aviv) regions.

For volumes that are created from snapshots, the storage blocks must be pulled down from Amazon S3 and written to the volume before you can access them. This initialization process takes time and can cause a significant increase in the latency of I/O operations the first time each block is accessed. Volume performance is achieved after initialization is complete and all blocks have been downloaded and written to the volume. Launched in 2019, FSR eliminates the need for volume initialization, and ensures that EBS volumes restored from FSR-enabled snapshots instantly receive full provisioned performance. With FSR, you get improved and predictable performance which helps with use cases such as virtual desktop infrastructure (VDI), backup & restore, test/dev volume copies, and booting from custom AMIs. You can now also use Amazon Data Lifecycle Manager to automate and manage Fast Snapshot Restore on snapshots created by Data Lifecycle Manager in these regions.

To learn more, see the technical documentation, blog, and pricing pages on FSR. This feature is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS console in all commercial AWS regions.
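As a sketch of the CLI path, enabling FSR on a snapshot in one of the newly supported Regions might look like the following; the snapshot ID, Availability Zone, and Region are placeholders.

```shell
# Enable Fast Snapshot Restore for a snapshot in a specific
# Availability Zone (all IDs below are placeholders).
aws ec2 enable-fast-snapshot-restores \
  --availability-zones eu-central-2a \
  --source-snapshot-ids snap-0123456789abcdef0 \
  --region eu-central-2

# Check the state of the enablement; it typically moves from
# "enabling" through "optimizing" to "enabled".
aws ec2 describe-fast-snapshot-restores \
  --filters Name=snapshot-id,Values=snap-0123456789abcdef0 \
  --region eu-central-2
```

Note that FSR is billed per enabled snapshot per Availability Zone, so enable it only where restores are expected.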

Announcing purchase order support for AWS Data Exchange private offers

Today, AWS Marketplace is extending transaction purchase order support to AWS Data Exchange private offers, giving customers the ability to ensure their invoices reflect the proper purchase order number. This launch makes it easier for customers to process and pay invoices.

With AWS Marketplace transaction purchase orders, the purchase order number that a customer provides at the time of the transaction appears on all out-of-cycle invoices related to that purchase. Today, a customer’s management (payer) account and linked accounts can provide a purchase order number at the time of purchase in AWS Marketplace for professional services, SaaS contracts, AMI contracts, container contracts, CloudFormation template contracts, Helm chart contracts, and private offers with a flexible payment schedule for AMI, container, CloudFormation template, and Helm chart products with annual pricing models. Now, customers can also add transaction purchase order numbers for AWS Data Exchange private offers at the time of offer acceptance in the AWS Data Exchange console.

To enable transaction purchase orders for AWS Marketplace, sign into the management account (for AWS Organizations) and enable the AWS Billing integration in the AWS Marketplace Console settings. To learn more, read the AWS Marketplace Buyer Guide.

AWS Systems Manager launches API support for Quick Setup

Quick Setup, a capability of AWS Systems Manager, provides an intuitive console experience for configuring frequently used Amazon Web Services features and services with recommended best practices. With just a few clicks, customers can enable common configurations and best practices across accounts and regions. This includes enabling auto-updates for popular AWS agents, such as the CloudWatch agent, defining patch schedules and baselines for Amazon Elastic Compute Cloud (Amazon EC2) instances, or configuring AWS Resource Explorer to search and discover resources in AWS accounts or across an entire AWS Organization.

Today, AWS Systems Manager announces the launch of a new Quick Setup API which enables customers to programmatically address these use cases. With this launch, customers can integrate the new Quick Setup API into their infrastructure as code and programmatic workflows. By calling the Quick Setup API, customers can automate the deployment of AWS services and their configurations, ensuring they are consistently set up across accounts and Regions using tools such as the AWS CLI or the AWS SDKs.

To get started, review the Quick Setup API reference documentation.
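As a hedged sketch, the new API surface is exposed through an `ssm-quicksetup` CLI namespace; the command and parameter names below are assumptions and should be verified against the Quick Setup API reference in a current AWS CLI version.

```shell
# List the Quick Setup configuration managers in this account
# (assumes the ssm-quicksetup CLI namespace is available).
aws ssm-quicksetup list-configuration-managers

# Inspect one configuration manager by its ARN (placeholder ARN).
aws ssm-quicksetup get-configuration-manager \
  --manager-arn arn:aws:ssm-quicksetup:us-east-1:111122223333:configuration-manager/example
```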

New Amazon CloudWatch dimensions for Amazon EC2 On Demand Capacity Reservations

Today, we are introducing new Amazon CloudWatch (CW) dimensions for Amazon EC2 On-Demand Capacity Reservations (ODCR). The existing CW metrics for On-Demand Capacity Reservations can now be grouped using the following new dimensions: Availability Zone, Instance Match Criteria, Instance Type, Platform, Tenancy, or across all Capacity Reservations. You can group the metrics by any of these dimensions within a selected Region.

You can now efficiently monitor your On-Demand Capacity Reservations and identify unused capacity by setting CloudWatch alarms on any of these six new dimensions, in addition to the existing Capacity Reservation ID dimension. These new five-minute metrics are enabled by default and available to all ODCR customers at no additional cost in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more about CloudWatch metrics for On-Demand Capacity Reservations, please refer to the ODCR technical documentation.
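An alarm on unused capacity grouped by instance type might be created as below. The namespace and metric name are assumptions based on the ODCR metrics family; confirm them in the ODCR CloudWatch documentation before use, and treat the SNS topic ARN as a placeholder.

```shell
# Alarm when unused ODCR capacity for one instance type stays above
# a threshold (namespace/metric names assumed; IDs are placeholders).
aws cloudwatch put-metric-alarm \
  --alarm-name odcr-unused-c5-large \
  --namespace "AWS/EC2CapacityReservations" \
  --metric-name UnusedInstanceCount \
  --dimensions Name=InstanceType,Value=c5.large \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 10 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:capacity-alerts
```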

AWS CodePipeline introduces stage-level conditions to implement pipeline gates

AWS CodePipeline V2 type pipelines now support stage-level conditions that enable development teams to safely release changes that meet quality and compliance requirements. Customers can configure stage-level conditions to gate a pipeline execution before entering a stage and before exiting a stage, either when all actions in the stage have completed successfully or when any action in the stage has failed. A condition consists of one or more rules and a result to apply when the condition fails. Customers can configure a stage-level condition from the console, API, CLI, CloudFormation, or SDK.

Customers can choose from rules that check the status of an Amazon CloudWatch alarm, check whether the current time is within a deployment window, or run a custom check by invoking an AWS Lambda function. A condition fails if one or more of its rules fail, and CodePipeline then performs the configured result, such as Rollback or Fail. For example, you can configure a condition to be evaluated when all the actions in a stage have successfully completed, and roll back the changes if a CloudWatch alarm goes into ALARM state within 60 minutes. Customers can also override a failed condition evaluation to allow the pipeline execution to enter or exit a stage.

To learn more about using stage-level conditions in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. The stage-level conditions feature is available in all Regions where AWS CodePipeline is supported.
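As an illustrative sketch of the V2 pipeline structure, a stage-level condition evaluated on success that rolls the stage back when a CloudWatch alarm fires might look like the fragment below. The exact field names (`onSuccess`, `ruleTypeId`, `AlarmName`, `WaitTime`) should be checked against the CodePipeline pipeline structure reference, and the alarm name is a placeholder.

```shell
# Write a stage fragment of a V2 pipeline definition showing an
# onSuccess condition with a CloudWatchAlarm rule (values assumed).
cat > stage-condition.json <<'EOF'
{
  "name": "Deploy",
  "onSuccess": {
    "conditions": [
      {
        "result": "ROLLBACK",
        "rules": [
          {
            "name": "CheckDeploymentAlarm",
            "ruleTypeId": {
              "category": "Rule",
              "owner": "AWS",
              "provider": "CloudWatchAlarm",
              "version": "1"
            },
            "configuration": {
              "AlarmName": "my-deploy-alarm",
              "WaitTime": "60"
            }
          }
        ]
      }
    ]
  }
}
EOF
```

This fragment would be merged into the full pipeline definition passed to `aws codepipeline update-pipeline`.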

Amazon RDS for SQL Server supports integration of transaction log backup with DMS

Amazon RDS for SQL Server now integrates transaction log backups with Database Migration Service (DMS). This integration provides greater reliability in data replication for customers using DMS to replicate data from an RDS for SQL Server source database. If database connection interruptions or sudden transaction volume spikes cause active transaction logs to be archived before DMS can finish processing, DMS can now access the RDS for SQL Server backup logs to recover and resume replication. This prevents replication failures that would previously require a full data reload.

The integration of transaction log backup with DMS is now generally available in all AWS regions where RDS for SQL Server is currently supported. To learn more about setting up ongoing DMS replication on an RDS for SQL Server DB instance, visit the documentation page.

Amazon Bedrock achieves FedRAMP High authorization

Amazon Bedrock is a FedRAMP High authorized service in the AWS GovCloud (US-West) Region. Federal agencies, public sector organizations, and other enterprises with FedRAMP High compliance requirements can now leverage Amazon Bedrock to access fully managed large language models (LLMs) and other foundation models (FMs). To learn more, visit the Amazon Bedrock security and compliance webpage.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using other AWS services.

To learn more and get started, visit the Amazon Bedrock product page and documentation.
 

Amazon DynamoDB Accelerator (DAX) is now available in additional AWS Regions

Amazon DynamoDB Accelerator (DAX) is now available in the Europe (Spain) and Europe (Stockholm) Regions. You can create DAX clusters using Amazon EC2 R5 and T3 instance types in these AWS Regions for applications that require microsecond latency.

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for Amazon DynamoDB. DAX delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management. You provision a DAX cluster, use the DAX client SDK to point your existing DynamoDB API calls to the DAX cluster, and let DAX handle the rest.

For DAX Regional availability information, see the “Service endpoints” section on Amazon DynamoDB endpoints and quotas. Pricing details are available on the Amazon DynamoDB Pricing page. To get started with DAX, see Developing with the DynamoDB Accelerator (DAX) Client.
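A minimal CLI sketch for standing up a cluster in one of the newly supported Regions follows; the IAM role ARN and subnet group are placeholders you would create beforehand.

```shell
# Create a 3-node DAX cluster in Europe (Spain); role ARN and subnet
# group name are placeholders.
aws dax create-cluster \
  --cluster-name demo-dax \
  --node-type dax.t3.small \
  --replication-factor 3 \
  --iam-role-arn arn:aws:iam::111122223333:role/DAXServiceRole \
  --subnet-group-name my-dax-subnet-group \
  --region eu-south-2

# Once the cluster is available, read its discovery endpoint for
# use with the DAX client SDK.
aws dax describe-clusters --cluster-names demo-dax --region eu-south-2
```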
 

Amazon WorkSpaces now offers Microsoft Visual Studio

Today, Amazon WorkSpaces and WorkSpaces Core are announcing the general availability of Microsoft Visual Studio 2022 on WorkSpaces Personal. Following this launch, WorkSpaces administrators can provide a comprehensive integrated development environment (IDE) for .NET and C++ developers on Windows-powered WorkSpaces.

With this launch, Amazon WorkSpaces adds Microsoft Visual Studio Enterprise 2022 and Microsoft Visual Studio Professional 2022 to the list of available license-included applications on WorkSpaces Personal. The Manage applications workflow enables administrators to install the necessary set of applications on WorkSpaces Personal depending on the requirements of their end users, and to uninstall them when an end user no longer needs them. Amazon WorkSpaces administrators can now easily add or remove Microsoft Visual Studio 2022 on existing WorkSpaces Personal using the same workflow.

This functionality is now available in all the AWS Regions where Amazon WorkSpaces Personal is available. You will be charged for the hardware and application bundles you choose for your WorkSpaces instances. For more details on pricing, refer to Amazon WorkSpaces pricing.

To get started, open the WorkSpaces console. In the navigation pane, choose WorkSpaces, Personal, select a WorkSpace, and choose Actions, Manage applications. You can now install Microsoft Visual Studio 2022 on the selected WorkSpace. To learn more and see the list of supported operating systems, please refer to Manage applications in WorkSpaces Personal.

AWS CodeBuild now supports VPC-connectivity on Windows

AWS CodeBuild now supports connecting your Windows builds to your Amazon VPC resources. This new capability allows CodeBuild to access your VPC resources without requiring internet access. CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.

With this feature, you can use CodeBuild to build and test your software within your VPC and access resources such as Amazon Relational Database Service, Amazon ElastiCache, or any service endpoints that are only reachable from within a specific VPC. Configuring your builds to connect to your VPC also secures them by applying the same network access controls as defined in your security groups.

This feature is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), South America (São Paulo), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions where Windows builds are supported.

To get started, configure your CodeBuild project to use Windows compute and select the VPC that your project needs to access. To learn more about CodeBuild’s support for connecting to VPC, see configuring builds with VPC. Visit the product page to learn more about getting started with CodeBuild.
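A short sketch of attaching an existing Windows build project to a VPC via the CLI; all resource IDs are placeholders.

```shell
# Attach an existing Windows CodeBuild project to a VPC so builds
# can reach VPC-only resources (IDs below are placeholders).
aws codebuild update-project \
  --name my-windows-project \
  --vpc-config vpcId=vpc-0123456789abcdef0,subnets=subnet-0123456789abcdef0,securityGroupIds=sg-0123456789abcdef0
```

The security group you reference controls what the build environment can reach inside the VPC, so scope it to just the endpoints your builds need.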
 

Amazon WorkSpaces Thin Client has received Carbon Trust verification for the product's carbon footprint

We are happy to announce that the product carbon footprint of Amazon WorkSpaces Thin Client has been verified by the Carbon Trust. This verification is based on measurement and estimation of the product's carbon footprint throughout the stages of its lifecycle.

The total lifecycle carbon emission for Amazon WorkSpaces Thin Client is 77kg CO2e as verified by the Carbon Trust. Additionally, this product is made from 50% recycled materials (power adapter and cable not included) and uses Sleep Mode to reduce energy consumption when idle. Amazon WorkSpaces Thin Clients are built to last. But when you’re ready, you can recycle your devices through Amazon Second Chance. See the product sustainability fact sheet for more information about Amazon WorkSpaces Thin Client’s sustainability features and Amazon’s commitment to sustainability.

Visit the WorkSpaces Thin Client page and Amazon Business to learn more.
 

Amazon RDS for MySQL supports version 8.4 in RDS Database preview environment

Amazon RDS for MySQL now supports version 8.4 in the Amazon RDS Database Preview Environment, allowing you to evaluate the latest Long-Term Support Release on Amazon RDS for MySQL. You can deploy MySQL 8.4 in the Amazon RDS Database Preview Environment, which provides the benefits of a fully managed database, making it simpler to set up, operate, and monitor databases.

MySQL 8.4 is the latest Long-Term Support Release from the MySQL community. MySQL Long-Term Support Releases include bug fixes, security patches, as well as new features. Please refer to the MySQL 8.4 release notes for more details about this release.

The Amazon RDS Database Preview Environment supports both Single-AZ and Multi-AZ deployments on the latest generation of instance classes. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment.

Amazon RDS Database Preview Environment database instances are priced the same as production RDS instances created in the US East (Ohio) Region.
 

Llama 3 is now available in the AWS GovCloud (US-West) Region

As of today, Amazon Bedrock customers can use Meta’s Llama 3 models, Llama 3 8B and Llama 3 70B, in the AWS GovCloud (US-West) Region.

Meta Llama 3 is designed for you to build, experiment, and responsibly scale your generative artificial intelligence applications. You can now use these two Llama 3 models in Amazon Bedrock to easily experiment with and evaluate even more top foundation models for your use case.

Llama 3 models support a broad range of use cases with improvements in reasoning, code generation, and instruction following. The Llama 3 model family is a collection of instruction-tuned LLMs in 8B and 70B parameter sizes. Llama 3 8B is ideal for limited computational power and resources, faster training times, and edge devices. The model excels at text summarization, text classification, sentiment analysis, and language translation. Llama 3 70B is ideal for content creation, conversational AI, language understanding, research development, and enterprise applications. The model excels at text summarization and accuracy, text classification and nuance, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and following instructions.

Meta Llama 3 models are available in Amazon Bedrock in the US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), Europe (London), and AWS GovCloud (US-West) Regions. To learn more, visit the Llama in Amazon Bedrock product page and documentation.
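As a sketch, invoking Llama 3 8B through the Bedrock Converse API from the CLI in the GovCloud (US-West) Region might look like this; the model ID shown is the standard Llama 3 8B Instruct identifier, and the call assumes GovCloud credentials with Bedrock model access enabled.

```shell
# Send a single-turn prompt to Llama 3 8B Instruct via the Bedrock
# Converse API (requires GovCloud credentials and model access).
aws bedrock-runtime converse \
  --region us-gov-west-1 \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --messages '[{"role":"user","content":[{"text":"Summarize AWS GovCloud in one sentence."}]}]'
```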
 

AWS Application Migration Service supports New Relic post-launch action

Starting today, AWS Application Migration Service (AWS MGN) provides an action for installing New Relic’s infrastructure agent on your migrated instances. For each migrated server, you can choose to automatically install the New Relic infrastructure agent to support your observability needs.

Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS. It also helps simplify modernization of your migrated applications by allowing you to select preconfigured and custom optimization options during migration.

This feature is now available in all of the Commercial regions where Application Migration Service is available. Access the AWS Regional Services List for the most up-to-date availability information.

To start using Application Migration Service for free, sign in through the AWS Management Console. For more information, visit the Application Migration Service product page.

For more information on New Relic and to create an account, visit the New Relic sign-up page.
 

Amazon Redshift releases drivers for supporting single sign-on with AWS IAM Identity Center

Amazon Redshift customers can now connect to their data warehouses via JDBC/ODBC/Python drivers with their corporate identity by integrating their identity providers with AWS IAM Identity Center, which enables a seamless single sign-on experience with other AWS services or Redshift tools that already support trusted identity propagation. With single sign-on capabilities, users can seamlessly access Amazon Redshift and other AWS services without needing to manage multiple sets of credentials.

Customers can now connect to Amazon Redshift data warehouses from their SQL client tools over JDBC, Python, and ODBC drivers using their identity with their preferred identity provider, such as Microsoft Entra ID, Okta, Ping, or OneLogin, by integrating with AWS IAM Identity Center. To authenticate with AWS IAM Identity Center, customers need to configure the issuer_url, plugin_name, and idc_region fields in the Extended Properties of their driver settings. Amazon Redshift supports a browser plugin for AWS IAM Identity Center, which opens a browser window for users to sign in with the credentials defined in their corporate identity provider. Once authenticated, users have access to data based on the permissions defined in either Redshift roles or AWS Lake Formation.
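For illustration, a JDBC URL carrying the three fields named above might look like the following. The plugin class name and issuer URL shown are assumptions for the browser-based Identity Center flow; check the Redshift JDBC driver documentation for the exact values for your driver version.

```shell
# Example JDBC URL for IAM Identity Center browser sign-in.
# Plugin class and issuer URL are assumptions; host/port are placeholders.
JDBC_URL='jdbc:redshift://my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com:5439/dev;plugin_name=com.amazon.redshift.plugin.BrowserIdcAuthPlugin;issuer_url=https://identitycenter.amazonaws.com/ssoins-EXAMPLE;idc_region=us-east-1'
echo "$JDBC_URL"
```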

This feature is available in the AWS regions where both AWS IAM Identity Center and Amazon Redshift are available. For more information, see our documentation and blog.
 

Amazon Connect now enables agents to view pre-approved windows when scheduling time off

Amazon Connect now enables agents to view pre-approved time off windows for their scheduling group (group allowance), making it easier for agents to identify available options when requesting time off. For example, when creating a time off request for 8 a.m. to 12 p.m., an agent can now see that while there is available group allowance for 8 a.m. to 11 a.m., there isn’t any remaining allowance for 11 a.m. to 12 p.m. Agents can then modify their time off request before submitting or request a supervisor override. This launch simplifies the time off process for agents by letting them evaluate which days have the required allowance before submitting the request.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To get started with Amazon Connect agent scheduling, click here.
 

Announcing fine-grained access control via AWS Lake Formation with EMR Serverless

We are excited to announce the general availability of fine-grained data access control (FGAC) via AWS Lake Formation for Apache Spark with Amazon EMR Serverless. This enables full FGAC policies (database, table, column, row, and cell level) defined in Lake Formation to take effect for your data lake tables in your EMR Serverless Spark jobs and interactive sessions.

Lake Formation makes it simple to build, secure, and manage data lakes. It allows you to define fine-grained access controls through grant and revoke statements, similar to those used with relational database management systems (RDBMS), and automatically enforce those policies via compatible engines like Athena, Spark on EMR on EC2, and Redshift Spectrum. With today's launch, the same Lake Formation rules that you set up for use with other services like Athena now apply to your Spark jobs and interactive sessions on EMR Serverless, further simplifying security and governance of your data lakes.

Fine-grained access control for Apache Spark batch jobs and interactive sessions via EMR Studio on EMR Serverless is available with the EMR 7.2 release in all regions where EMR Serverless is available except GovCloud and China regions. To get started, see Using AWS Lake Formation with Amazon EMR Serverless.
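A hedged sketch of running a Spark job with Lake Formation enforcement turned on follows. The `spark.emr-serverless.lakeformation.enabled` property name is an assumption; verify it, along with the required IAM role setup, against the EMR Serverless Lake Formation documentation. All IDs and paths are placeholders.

```shell
# Start a Spark job run on an EMR Serverless (7.2+) application with
# Lake Formation FGAC enabled (property name assumed; IDs placeholders).
aws emr-serverless start-job-run \
  --application-id 00example123 \
  --execution-role-arn arn:aws:iam::111122223333:role/EMRServerlessJobRole \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://my-bucket/jobs/etl.py",
      "sparkSubmitParameters": "--conf spark.emr-serverless.lakeformation.enabled=true"
    }
  }'
```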

AWS Graviton-based EC2 instances now support hibernation

Starting today, customers can hibernate their EC2 instances powered by AWS Graviton processors. Hibernation helps lower costs and achieve faster startup times by enabling customers to pause and resume their running instances at scale.

When customers hibernate an instance, EC2 signals the operating system to perform hibernation (suspend-to-disk). This process saves the contents of the instance’s memory (RAM) to the associated Amazon Elastic Block Store (Amazon EBS) root volume. EC2 then persists the instance's EBS root volume and any attached EBS data volumes. When a hibernated instance resumes, EC2 restores the root volume to its previous state, along with the RAM contents and any previously attached data volumes.

AWS offers hibernate capability across a variety of EC2 instance types, with support for additional instance types added regularly. This feature is available in commercial AWS Regions and in the AWS GovCloud (US) Regions. While an instance is hibernated, customers pay only for the storage of EBS volumes, including the saved content from the instance memory. There are no charges for instance usage or data transfer during hibernation.

Customers can hibernate their EC2 instances through AWS CloudFormation, the AWS Management Console, AWS SDKs, AWS Tools for PowerShell, and the AWS Command Line Interface (CLI). To learn more, visit the news blog on hibernation. For information about enabling hibernation for your EC2 instances, refer to the hibernation FAQs and User Guide.
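As a CLI sketch: hibernation must be enabled when the instance is launched, after which the instance can be stopped with the hibernate flag. The AMI, instance type, and instance IDs below are placeholders, and the instance needs an encrypted EBS root volume large enough to hold its RAM contents.

```shell
# Launch a Graviton instance with hibernation enabled
# (AMI and instance IDs are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m7g.large \
  --hibernation-options Configured=true

# Later, hibernate instead of a plain stop.
aws ec2 stop-instances \
  --instance-ids i-0123456789abcdef0 \
  --hibernate
```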

Amazon Connect now supports Inbound DID calling in Vietnam

Amazon Connect has expanded availability to support Inbound Direct Dial (DID) telephone numbers and guaranteed number presentation for in-country calling for Vietnam from the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Seoul) regions.

The new Telephony Rates are now available as part of the standard pricing for Amazon Connect service usage for Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Seoul) regions. To see all AWS Regions where Amazon Connect is available, see the AWS Region table. Visit the Amazon Connect website for more information.

Amazon Neptune Analytics now introduces new smaller capacity units

Today, we are excited to announce that Amazon Neptune Analytics now offers 32 and 64 m-NCU (Neptune memory capacity unit) capacity units, expanding beyond the previous range of 128 to 1024 m-NCUs. This enhancement provides greater flexibility and cost efficiency for smaller graph sizes, making it easier and more affordable to get started with vector search, graph algorithms, and graph RAG. Data scientists and developers can now start with the smaller capacity units, significantly reducing costs while meeting their specific needs during the initial stages of development.

Many organizations require customized setups for their specific needs, particularly during proof of concept (POC) and testing phases where graph sizes are typically small. Previously, the smallest capacity unit available in Amazon Neptune Analytics was 128 m-NCU, often providing more capacity than needed and resulting in higher costs for smaller-scale operations. With the introduction of 32 and 64 m-NCU capacity units, users can now start with smaller capacity units, significantly reducing expenses while maintaining the performance they expect from Neptune Analytics. This enhancement allows organizations to efficiently run analytics, graph vector search, and graph algorithms at a lower cost. Once they are satisfied with the results of their POC, they can scale up to larger capacity units for production to fully leverage the benefits of Neptune Analytics.

You can create a Neptune Analytics graph with the new capacity units from the Amazon Neptune console, AWS Command Line Interface (AWS CLI), or SDK. To learn more about Neptune Analytics, visit the features page, user guide, and pricing page.
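A minimal sketch of creating one of the new small graphs via the `neptune-graph` CLI namespace; the parameter names are assumptions from the CreateGraph API and the graph name is a placeholder.

```shell
# Create a small private 32 m-NCU Neptune Analytics graph for a POC
# (parameter names assumed; verify against the neptune-graph CLI).
aws neptune-graph create-graph \
  --graph-name poc-graph \
  --provisioned-memory 32 \
  --replica-count 0 \
  --no-public-connectivity
```

Setting `--replica-count 0` keeps a POC graph to a single instance, which further reduces cost at the expense of availability.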
 

Create an AWS Account using your bank account available in Germany

Customers in Germany can now create an AWS account using their bank account. Upon signup, customers with a Germany billing address can now securely connect a bank account that supports the Single Euro Payments Area (SEPA) standard. Until today, signing up for an AWS account was only permitted with a debit or credit card. With this release, customers in Germany can choose to use their card or securely connect their bank account.

To link your bank account, sign up for AWS and enter an address with Germany as the billing country. When prompted to add a payment method, click "Bank Account" followed by "Link your bank account". Select your bank from the list of available banks and sign in to your bank using your online banking credentials. Signing in to your bank allows you to securely add your bank account to your AWS account and verifies that you are the owner of the bank account. By default, this bank account will be used when paying your future AWS invoices. Signup with a bank account is available in Germany, the first country where this feature is offered.

AWS Elastic Disaster Recovery now supports Flexible Instance Types launch setting

AWS Elastic Disaster Recovery (AWS DRS) introduces Flexible Instance Types, a new capability that reduces the likelihood of failed recovery attempts or prolonged downtime due to capacity shortage. This feature allows you to create a list of alternative instance types that can be used during a launch, based on specific attributes. By defining a range of acceptable instance types upfront, you increase the chances of finding available resources, even in situations where capacity is constrained.

AWS DRS minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. Prior to this release, customers could only define a single instance type or rely on automatic right-sizing to select the most relevant instance type. However, this approach risked instance unavailability during a disaster or drill scenario. Now, with the launch of Flexible Instance Types, AWS DRS empowers you to tailor your disaster recovery strategy, by defining a range of acceptable instance attributes, so you can maximize the chances of successful provisioning, even in situations where many resource types are unavailable due to high demand.

This new capability is available in all AWS Regions where AWS DRS is available. See the AWS Regional Services List for the latest availability information.

To learn more about AWS DRS, visit our product page or documentation. To get started, sign in to the AWS Elastic Disaster Recovery Console.

AWS IoT SiteWise Edge on Siemens Industrial Edge is now generally available

Today, AWS announces the general availability of AWS IoT SiteWise Edge on Siemens Industrial Edge, offering customers a seamless way to bridge the gap between OT (Operational Technology) and IT (Information Technology) to solve industrial operational challenges. This new offering allows customers to easily connect and collect data from their industrial equipment using Siemens Industrial Edge connectivity applications and send it to the cloud using AWS IoT SiteWise Edge. Users such as process engineers and maintenance technicians can use applications on Siemens Industrial Edge to gain insights or send the data to AWS IoT SiteWise to organize, standardize, and store data for driving critical use cases such as asset monitoring, predictive maintenance, energy monitoring, and building industrial data lakes.

Within minutes, customers can collect data from their industrial equipment using Siemens connectivity applications for protocols such as OPC UA, MQTT, Siemens S7, Modbus TCP, and Profinet IO, as well as third-party connectors available on the Siemens Industrial Edge Marketplace. They can use AWS IoT SiteWise Edge to seamlessly send this data to AWS. In the cloud, AWS IoT SiteWise enables industrial customers to improve operational efficiency and optimize asset performance by providing purpose-built, hybrid cloud-edge capabilities for analysis, storage, and enterprise-wide monitoring for industrial use cases.

AWS IoT SiteWise Edge is available in all regions where AWS IoT SiteWise is supported.

To start, visit AWS IoT SiteWise console to create a gateway, and AWS IoT SiteWise user guide to learn more.
 

Amazon AppStream 2.0 introduces Red Hat Enterprise Linux Application and Desktop streaming

Amazon AppStream 2.0 now offers support for Red Hat Enterprise Linux, enabling ISVs and central IT organizations to stream Red Hat Enterprise Linux apps and desktops to users while leveraging the flexibility, scalability, and cost-effectiveness of the AWS Cloud. With this launch, customers have the flexibility to choose from a broader set of operating systems including Red Hat Enterprise Linux, Amazon Linux 2, and Microsoft Windows.

This launch enables organizations operating in highly regulated industries to accelerate time to market, scale resources up or down with demand, and manage the entire fleet centrally through the AWS Console by streaming Red Hat Enterprise Linux apps from AppStream 2.0. Red Hat Enterprise Linux on AppStream 2.0 also enables traditional desktop apps to be converted to a SaaS delivery model without the cost of refactoring, while pay-as-you-go billing and license-included images ensure you only pay for the resources you use.

Red Hat Enterprise Linux is currently supported in all AWS Regions where AppStream 2.0 is available. Red Hat Enterprise Linux-based instances use per-second billing (with a 15-minute minimum). For more information, see Amazon AppStream 2.0 Pricing.

To get started, log into the AppStream 2.0 console and select a Region. Next, launch a Red Hat Enterprise Linux-based Image Builder to install your applications, and create a custom image for your end users.
 

Amazon EC2 Fleet and EC2 Auto Scaling groups now support aliases for Amazon Machine Images (AMIs)

EC2 Fleet and EC2 Auto Scaling now support using custom identifiers to reference Amazon Machine Images (AMIs) in EC2 Fleet launch requests and in Auto Scaling groups configured to choose from a diversified list of instance types. You can create these identifiers using AWS Systems Manager Parameter Store and use the parameters to reference your AMI during instance launch. These AMI references simplify your automation: you no longer need to modify your code every time a new version of an AMI is created.
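In practice, the alias is a Parameter Store name referenced with the `resolve:ssm:` syntax in place of a literal AMI ID. A minimal sketch, with a hypothetical parameter name:

```python
# Sketch of referencing an AMI through an SSM Parameter Store alias in launch
# template data used by an EC2 Fleet or Auto Scaling group. The parameter name
# is hypothetical; the "resolve:ssm:" syntax is what this feature enables.
import json

ami_alias = "resolve:ssm:/golden-images/web-server/current"  # hypothetical parameter

launch_template_data = {
    "ImageId": ami_alias,          # resolved to a concrete AMI ID at launch time
    "InstanceType": "m5.large",
}

# Updating the parameter to point at a new AMI takes effect on the next launch,
# with no change to the fleet configuration or automation code.
print(json.dumps(launch_template_data, indent=2))
```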

Amazon Connect Contact Lens now provides downloadable screen recordings

Amazon Connect Contact Lens now provides the ability to download screen recordings from the contact details page in the Amazon Connect UI, for customers that use Contact Lens screen recording. With this launch, managers can evaluate contact quality and agent performance via offline reviews, as well as review downloaded screen recordings with agents for coaching. This launch also provides a new permission to manage who can download screen recordings.

This feature is available in all AWS regions where Contact Lens screen recording is offered. To learn more about Amazon Connect Contact Lens, see our website. To learn more about this feature, please visit our help documentation.

Build your event-driven application using AWS CloudFormation Git sync status changes

AWS CloudFormation Git sync now publishes sync status changes as events to Amazon EventBridge. With this launch, you can subscribe to deployment sync events that show status changes for your Git repositories or resources. You can also get instant notifications and build on them to further automate your GitOps workflow. With EventBridge, you can take advantage of a serverless event bus to easily connect and route events between many AWS services and third-party applications. Events are delivered to EventBridge in near real time, and you can write simple rules to listen for specific events.

AWS CloudFormation allows you to launch and configure your desired resources and their dependencies together as a stack described in a template. Using AWS CloudFormation Git sync, you can store this template in a remote Git repository and have your stacks synchronized to automate your workflow. Now, using the status-change events in EventBridge, you can build event-driven applications based on notifications received when a sync completes or fails, track metrics, or automate rollback of broken deployments.
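Subscribing to these events comes down to an EventBridge rule pattern. A minimal sketch; the detail-type string below is an assumption, so confirm the exact value in the CloudFormation documentation before relying on it:

```python
# Sketch of an EventBridge rule pattern matching CloudFormation Git sync
# status-change events. The detail-type string is an assumed name; verify it
# against the events your account actually emits.
import json

event_pattern = {
    "source": ["aws.cloudformation"],
    "detail-type": ["CloudFormation Git Sync Status Change"],  # assumed name
}

# The serialized pattern would be passed to events.put_rule(Name=...,
# EventPattern=...) and paired with a target such as a Lambda function.
print(json.dumps(event_pattern))
```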

This feature is available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Paris), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), and South America (São Paulo).

To learn more, see our documentation and launch blog.
 

Amazon Q Business launches support for cross-region AWS IAM Identity Center access

Amazon Q Business is a fully managed, generative AI-powered assistant that enhances workforce productivity by answering questions, providing summaries, generating content, and completing tasks based on customers' enterprise data. AWS IAM Identity Center helps you set up and centrally manage workforce user identities and their access to AWS accounts and applications. Q Business is integrated with IAM Identity Center so that workforce users can securely and privately access enterprise content using web applications built with Q Business.

Prior to today, Q Business applications could only connect to, and source user identity information from, IAM Identity Center instances located in the same AWS Region as the Q Business application. Starting today, at the time of Q Business application creation, customers can choose to connect to an IAM Identity Center instance located in a different Region to source user identity information. When users access Q Business applications, Q Business makes cross-Region API calls to fetch their identity and attributes from that Identity Center instance, authenticating users and authorizing access to the content they are permitted to see. Customers can now use Q Business applications to enhance the productivity of a larger set of users than was previously possible.

This feature is available in all AWS Regions where Amazon Q for Business is available, and is supported for organization instances of IAM Identity Center in all regions except opt-in regions. To learn more, visit the documentation. To explore Amazon Q, visit the Amazon Q website.

Introducing AWS End User Messaging

Today, we announce AWS End User Messaging as the new name for Amazon Pinpoint’s SMS, MMS, push, and text-to-voice messaging capabilities. We are making this change to simplify how you manage end-user communications across your applications and other AWS services. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications. Developers can integrate messaging to support use cases such as one-time passcodes (OTP) at sign-up, account updates, appointment reminders, delivery notifications, promotions, and more.

Your existing applications will continue to work as they did previously and you do not need to take any action. We have updated the name for SMS, MMS, push, and text-to-voice messaging in the AWS Management Console, documentation, AWS Support, and service webpages. There are no changes to APIs, the AWS Command Line Interface (AWS CLI), AWS Identity and Access Management (IAM) access policies, service events, or endpoints. The marketing features of campaigns, journeys, segmentation, and analytics continue to be available as Amazon Pinpoint.

To learn more and get started, see the documentation.

Amazon MQ now supports ActiveMQ version 5.18

Amazon MQ now supports ActiveMQ minor version 5.18, which introduces several improvements and fixes compared to the previous version of ActiveMQ supported by Amazon MQ. These enhancements include initial support for the JMS 2.0 simplified APIs, such as JMSContext, JMSProducer, and JMSConsumer, as well as the implementation of methods for XA transactions. Starting from ActiveMQ 5.18, Amazon MQ will manage patch version upgrades for your brokers. All brokers on ActiveMQ version 5.18 will be automatically upgraded to the next compatible and secure patch version in your scheduled maintenance window.

If you are using a prior version of ActiveMQ, such as 5.17, 5.16, or 5.15, we strongly recommend that you upgrade to ActiveMQ 5.18. You can perform this upgrade with just a few clicks in the AWS Management Console. To learn more about upgrading, consult the ActiveMQ Version Management section of the Amazon MQ Developer Guide. To learn more about the changes in ActiveMQ 5.18, see the Amazon MQ release notes. This version is available in all AWS Regions where Amazon MQ is available.
 

AWS Elemental MediaLive now supports SRT caller input

You can now use AWS Elemental MediaLive to receive sources using SRT caller.

Secure Reliable Transport (SRT), an open source video transport protocol supported by the SRT Alliance, helps deliver video reliably across the internet. SRT has two primary connection modes: caller and listener. With SRT caller input support in MediaLive, you can add inputs to a MediaLive channel by calling into an available listener source address and port. For more information on how to use SRT caller input, visit the MediaLive documentation.

AWS Elemental MediaLive is a broadcast-grade live video processing service. It lets you create high-quality live video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes.

The MediaLive service functions independently or as part of AWS Media Services, a family of services that form the foundation of cloud-based workflows and offer you the capabilities you need to transport, create, package, monetize, and deliver video. Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaLive is available.
 

Meta Llama 3.1 405B now generally available in Amazon Bedrock

The most advanced Meta Llama models to date, Llama 3.1, are available in Amazon Bedrock. Starting today, the Llama 3.1 405B model is generally available in Amazon Bedrock. Amazon Bedrock offers a turnkey way to build generative AI applications with Llama. Llama 3.1 is a collection of models in 8B, 70B, and 405B parameter sizes, offering new capabilities for your generative AI applications.

All Llama 3.1 models demonstrate significant improvements over previous versions. The models support a 128K context length and exhibit improved reasoning for multilingual dialogue use cases in eight languages. The models access more information from lengthy text to make more informed decisions and leverage richer contextual data to generate more nuanced and refined responses. According to Meta, Llama 3.1 405B is one of the best and largest publicly available foundation models and is well suited for synthetic data generation and model distillation. Llama 3.1 models also provide state-of-the-art capabilities in general knowledge, math, tool use, and multilingual translation.
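A minimal sketch of calling the model through the Bedrock Converse API. The model ID below is an assumption, so confirm it in the Bedrock console; no request is actually sent here, the code only builds the payload:

```python
# Sketch of a Bedrock Converse API request for Llama 3.1 405B.
# The model ID is an assumed value -- verify it in the Bedrock console.

request = {
    "modelId": "meta.llama3-1-405b-instruct-v1:0",  # assumed model ID
    "messages": [
        {"role": "user",
         "content": [{"text": "Summarize the benefits of model distillation."}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
}

# With credentials configured, this payload would be sent as:
#   boto3.client("bedrock-runtime").converse(**request)
print(request["modelId"])
```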

Meta’s Llama 3.1 models are generally available in Amazon Bedrock in the US West (Oregon) Region. To learn more, read the AWS News launch blog, Llama in Amazon Bedrock product page, pricing page, and documentation. To get started with Llama 3.1 in Amazon Bedrock, visit the Amazon Bedrock console.

AWS launches a self-guided journey for AWS Partners enrolled on the Services Path

AWS is launching a self-guided experience to help AWS Partners on the Services Path reach the Advanced tier. This experience includes personalized Tasks in Partner Central that offer tips and resources for both new and existing partners to advance in their AWS journey, unlocking access to programs like Specialization and enhancing their discoverability for AWS customers and AWS sales teams. In addition, AWS Partners who provide managed services will receive personalized tasks including the new AWS MSP Practice Building Guide.

This experience empowers partners to progress through tiers and access benefits independently. Tasks will now be labeled with their associated solution name for easy identification. We've enhanced the completion experience to allow partners to resolve Tasks quickly, and added new ones for improved program enablement and enrollment.

Additionally, we’ve simplified the journey for Services partners by removing the Customer Satisfaction survey and Public References requirements, streamlining progression and reducing administrative burdens.

To learn more, refer to the AWS Partner Network blog post. The Tasks feature is available to AWS Partners enrolled in the Software and Services Paths today. Existing partners can access Tasks by logging in to AWS Partner Central. Visit AWS Partner Network to learn more about becoming an AWS Partner.

AWS IAM Identity Center is now available in the Canada West (Calgary) AWS Region

You can now deploy AWS IAM Identity Center in the Canada West (Calgary) AWS Region. With the addition of this AWS Region, IAM Identity Center is now available in 33 AWS Regions globally.

IAM Identity Center is the recommended service for managing workforce access to AWS applications and multiple AWS accounts. Use IAM Identity Center with your existing identity source or create a new directory, and manage workforce access to part or all of your AWS environment. With IAM Identity Center, you can manage and audit user access more easily and consistently, your workforce has single sign-on access and a unified experience across AWS services, and your data owners can authorize and log data access by user. IAM Identity Center is available to you at no additional cost.

Amazon SageMaker launches faster auto-scaling for Generative AI models

We are excited to announce a new capability in Amazon SageMaker Inference that reduces the time it takes for generative AI models to scale automatically. You can now use sub-minute metrics to significantly reduce overall scaling latency for AI models, improving the responsiveness of your generative AI applications as demand fluctuates.

With this capability, customers get two new high-resolution CloudWatch metrics, ConcurrentRequestsPerModel and ConcurrentRequestsPerModelCopy, that enable faster autoscaling. These metrics are emitted at a 10-second interval and provide a more accurate representation of endpoint load by tracking the actual concurrency, or number of in-flight inference requests, being processed by the model. Customers can create auto-scaling policies using these high-resolution metrics to scale their models deployed on SageMaker endpoints. Amazon SageMaker will start adding new instances or model copies in under a minute when thresholds defined in these auto-scaling policies are reached. This allows customers to optimize performance and cost-efficiency for their inference workloads on SageMaker.
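A target-tracking policy built on the new metric might look like the following sketch. The endpoint and variant names are hypothetical; the metric name is the one introduced by this launch:

```python
# Sketch of a target-tracking scaling policy configuration using the new
# ConcurrentRequestsPerModel metric. Endpoint/variant names are hypothetical.
import json

policy = {
    "TargetValue": 5.0,  # desired in-flight requests per model
    "CustomizedMetricSpecification": {
        "MetricName": "ConcurrentRequestsPerModel",
        "Namespace": "AWS/SageMaker",
        "Dimensions": [
            {"Name": "EndpointName", "Value": "my-llm-endpoint"},  # hypothetical
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        "Statistic": "Average",
    },
}

# This dict would be passed to Application Auto Scaling's put_scaling_policy
# as the TargetTrackingScalingPolicyConfiguration.
print(json.dumps(policy, indent=2))
```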

This new capability is accessible on accelerator instance families (g4dn, g5, g6, p2, p3, p4d, p4de, p5, inf1, inf2, trn1n, trn1) in all AWS regions where Amazon SageMaker Inference is available, except China and the AWS GovCloud (US) Regions. To learn more, see AWS ML blog and visit our documentation.

Amazon GameLift now supports AWS Nigeria Local Zone

Today, we are excited to announce the general availability of an Amazon GameLift update that adds support for the AWS Local Zone in Nigeria, increasing coverage for game developers while providing seamless, low-latency gameplay experiences for players. With this update, game developers can tap into the Nigeria Local Zone to reach players across Africa.

AWS Local Zones are a type of infrastructure deployment that extends AWS Regions to place compute, storage, database, and other AWS services at the edge of the cloud near large population, industry, and information technology (IT) centers—enabling developers to deploy games that require single-digit millisecond latency closer to end users or on-premises data centers. With the latest update developers can now:

  • Deploy your game to a Nigeria Local Zone Fleet location.
  • Update a queue with a Fleet location in the Nigeria Local Zone.
  • Match players into game sessions in Nigeria Local Zone locations.

To get started, visit the Amazon GameLift documentation to see a complete list of regions.

Amazon GameLift is available in regions: US East (Ohio and N. Virginia), US West (N. California and Oregon), Africa (Cape Town), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Osaka, and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, and Stockholm), Middle East (Bahrain), South America (São Paulo), AWS China (Beijing) Region, operated by Sinnet, and AWS China (Ningxia) Region, operated by NWCD, and now available in 9 Local Zones in Chicago, Houston, Dallas, Kansas City, Denver, Atlanta, Los Angeles, Phoenix, and Nigeria.

AWS HealthImaging announces enhanced copy and update capabilities

AWS HealthImaging adds new copy and update capabilities, making it easier than ever to manage your medical imaging data. With this launch, you can more efficiently organize, combine, and update your medical imaging data to support common clinical and research workflows.

This launch offers enhanced capabilities for modifying DICOM data and simplifies resolving metadata inconsistencies. You can now copy one or more DICOM instances, making it easier to organize your instances by imaging Study and Series. It is now easier to update metadata, including Study, Series, and SOP Instance UIDs, so you can keep data current as new patient information becomes available. This launch also gives you the ability to modify private DICOM metadata elements. Lastly, you can now revert metadata to a prior version with a single action. Together, these enhancements simplify workflows for improving the quality and consistency of your medical imaging data throughout its lifecycle.

AWS HealthImaging is now generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).

To learn more, see Modifying image sets in the AWS HealthImaging Developer Guide.

Amazon ECR repository creation templates are now generally available

Amazon Elastic Container Registry (ECR) is announcing the general availability of repository creation templates, which allow customers to specify initial configuration for repositories that are automatically created by ECR via pull through cache and replication. ECR customers can specify configuration for these repositories, including encryption settings, lifecycle policies, and repository permissions. This enables customers to define custom configurations and assign them as defaults for various use cases within their registries.

Repository creation templates can specify configuration for all repository settings, including resource-based access policies, tag immutability, encryption, and lifecycle policies. Each template contains a customer-specified prefix that is used to match new repositories to a specific template. As new repositories are created by ECR, the configuration is applied automatically. This enables customers to control which configuration is applied to new repositories created via pull through cache and replication, and to customize default creation settings.
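The prefix-matching behavior can be illustrated as follows. This is a conceptual sketch, not an ECR API call; the prefixes and settings are hypothetical examples:

```python
# Illustrative only: how a template prefix selects the configuration applied
# to a newly created repository. Prefixes and settings are hypothetical.

templates = {
    "ecr-public/": {"encryption": "AES256", "tag_immutability": False},
    "prod/":       {"encryption": "KMS", "tag_immutability": True},
}

def template_for(repository_name):
    """Return settings of the longest template prefix matching the name."""
    matches = [p for p in templates if repository_name.startswith(p)]
    if not matches:
        return None  # fall back to default repository settings
    return templates[max(matches, key=len)]

print(template_for("prod/payments-api"))  # {'encryption': 'KMS', 'tag_immutability': True}
print(template_for("sandbox/scratch"))    # None
```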

AWS Step Functions now supports Customer Managed Keys

AWS Step Functions now supports the use of Customer Managed Keys with AWS Key Management Service (AWS KMS) to encrypt Step Functions State Machine and Activity resources. This new capability enables you to encrypt your workflow definitions and execution data using your own encryption keys.

AWS Step Functions is a visual workflow service capable of orchestrating more than 12,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. With support for Customer Managed Keys, you have more fine-grained security control over your workflow data, making it easier to meet your organization's regulatory and compliance requirements. You can also audit and track usage of your encryption keys with AWS CloudTrail.
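A sketch of the encryption settings supplied when creating a state machine. The field names follow the Step Functions API at the time of this launch, so verify them against the current documentation; the name and ARNs are placeholders:

```python
# Sketch of CreateStateMachine arguments with a customer managed key.
# Field names should be verified against the Step Functions API reference;
# the state machine name and ARNs below are placeholders.

create_state_machine_kwargs = {
    "name": "order-processing",                                    # hypothetical
    "roleArn": "arn:aws:iam::111122223333:role/sfn-role",          # placeholder
    "definition": "{...}",                                         # your ASL definition
    "encryptionConfiguration": {
        "type": "CUSTOMER_MANAGED_KMS_KEY",
        "kmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/placeholder",
        "kmsDataKeyReusePeriodSeconds": 300,  # how long data keys are cached
    },
}
print(create_state_machine_kwargs["encryptionConfiguration"]["type"])
```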

To learn more about using Customer Managed Keys with AWS Step Functions, visit AWS Step Functions documentation and AWS KMS documentation.

CloudWatch RUM PutRumEvents API now supports data event logging in AWS CloudTrail

CloudWatch RUM, which helps you perform real user monitoring by collecting client-side data on application performance and user interactions in real time, now supports AWS CloudTrail data event logging for the PutRumEvents API, enabling enhanced data visibility for governance, compliance, and operational auditing.

Each data item collected by the RUM web client for an app monitor is a RUM event and is sent to CloudWatch RUM using the PutRumEvents API. CloudTrail logs now provide a comprehensive audit trail of PutRumEvents API calls, helping you troubleshoot issues with insights into request parameters, source IP addresses, and timestamps. These logs give you visibility into your request activity and can be archived in a secure, highly available, and durable S3 data store. You can use them to identify throttling exceptions when API calls exceed account limits, or authentication failures when the RUM app monitor is denied permission to send data. The logs can also feed Security Information and Event Management (SIEM) solutions to meet audit and compliance requirements.
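Enabling the logging comes down to a CloudTrail advanced event selector. A minimal sketch; the resources.type value is an assumption, so confirm it in the CloudTrail console when you create the selector:

```python
# Sketch of a CloudTrail advanced event selector for RUM data events.
# The resources.type value is assumed -- verify it in the CloudTrail console.
import json

advanced_event_selectors = [
    {
        "Name": "Log PutRumEvents calls",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::RUM::AppMonitor"]},  # assumed
        ],
    }
]

# Passed to cloudtrail put_event_selectors or configured in the console.
print(json.dumps(advanced_event_selectors))
```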

You can enable AWS CloudTrail data events logging for CloudWatch RUM in all AWS Regions where CloudWatch RUM is available. Get started with CloudTrail event logging for CloudWatch RUM by using the CloudTrail console, AWS CLI, or AWS SDKs. For pricing information, visit the CloudTrail pricing page.

See the list of RUM APIs logged in CloudTrail, and the CloudWatch RUM user guide, to learn more.

AWS Clean Rooms launches new capabilities for entity resolution, ML modeling, privacy, and analysis controls

Today, AWS Clean Rooms announces four new enhancements: the general availability of AWS Entity Resolution on Clean Rooms, additional privacy controls for data analyses, a feature to configure which collaborators receive analysis results, and the ability to generate seed data for lookalike modeling using SQL. These capabilities help you improve data matching and give you increased control and flexibility for data collaborations.

AWS Entity Resolution is now natively integrated within AWS Clean Rooms to help you and your partners more easily prepare and match related customer records. Using rule-based or data service provider-based matching can help you improve data matching for enhanced advertising campaign planning, targeting, and measurement. For example, an advertiser can match records with a media publisher using rule-based matching, or with a data service provider such as LiveRamp to understand overlapping audiences.

Enhanced privacy and analysis controls give you greater flexibility to support multiple use cases in a collaboration. You can now disallow specific output columns from custom SQL data analyses for increased data protection, and you can easily choose which collaborator receives analysis results. Additionally, you can now use a SQL query as the seed data source for lookalike modeling in AWS Clean Rooms ML.

AWS Clean Rooms helps companies and their partners more easily analyze and collaborate on their collective datasets—without sharing or copying one another’s underlying data. AWS Clean Rooms is generally available in these AWS Regions. To learn more, visit the AWS Entity Resolution on AWS Clean Rooms blog.

Announcing 24-month support for Amazon EMR

Today, Amazon EMR announces 24-month support for Amazon EMR release versions. Amazon EMR aims to get the latest open-source versions of its core engines and open table formats into your hands within 90 days of their upstream release. This extended support period gives customers peace of mind and a predictable timeline for budgeting, testing, and transitioning workloads.

During this 24-month period, Amazon EMR will provide support and fixes for critical issues related to security, bugs, and data corruption, subject to the availability of fixes. Standard Support covers eligible components under recommended configurations. Amazon EMR intends to deploy fixes to the latest patch, minor, or major versions as soon as they are available, and within a 90-day timeframe of being verified by Amazon EMR. Amazon EMR will apply the fixes automatically whenever you launch a new EMR on EC2 cluster, a new EMR on EKS container, or a new Serverless job, so that you can benefit from the latest patches. Clusters past their 24-month support period will remain accessible even after the support period ends.

To give you additional time to migrate from older releases, Amazon EMR will maintain existing levels of support for all releases for at least 12 months from today. After this period, Standard Support will be available for all eligible releases, on all deployment models (EMR on EC2, EMR on EKS, and Serverless), in all Regions where Amazon EMR operates, at no additional cost. To learn more about what’s included with support and how support works, please read the documentation.

Amazon EC2 D3en instances are now available in Asia Pacific (Jakarta) region

Starting today, Amazon EC2 D3en instances, the latest generation of the dense HDD-storage instances, are available in the Asia Pacific (Jakarta) region. D3en instances are ideal for workloads including distributed/clustered file systems, big data and analytics, and high-capacity data lakes. With D3en instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD-storage workloads.

Amazon EC2 D3en instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of private networking, and efficient, flexible, and secure cloud services with isolated multi-tenancy. D3en instances offer up to 336 TB of local HDD storage. These instances also offer up to 75 Gbps of network bandwidth, and up to 7 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To get started with D3en instances, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 D3en instances page.
 

AWS DataSync expands support for agentless cross-region data transfers to include opt-in regions

AWS DataSync now supports agentless cross-region data transfers between all regions in the commercial AWS partition, including opt-in regions. With this update, you can now transfer millions of files or objects between AWS Storage services such as Amazon S3, Amazon EFS, and Amazon FSx in different regions without deploying or managing a DataSync agent.

AWS DataSync is an online data movement service that simplifies, automates, and accelerates the transfer of files and objects. It uses a purpose-built network protocol and scale-out architecture to move data quickly and securely between AWS Storage services, on-premises storage, edge locations, or other clouds. DataSync uses an agent to access storage systems that are located on premises or in other clouds while providing fully-managed access to storage in AWS. With this release, you can now transfer data between AWS Storage services in any region in the commercial partition, including opt-in regions, without deploying a DataSync agent. This enables you to easily replicate data between managed file systems for business continuity, as well as copy portions of your data to different buckets or file systems to meet evolving application requirements.

Agentless cross-region transfers are now available in all AWS regions where AWS DataSync is available. To get started, visit the AWS DataSync console. To learn more, view the AWS DataSync documentation.
 

AWS Signer open sources Notation plugin for container image signing

Today, AWS open-sourced the AWS Signer plugin for Notation, giving customers flexibility and transparency in how they sign and verify container images with AWS Signer, a managed signing service. Notation is an open source tool developed by the Notary Project, an industry standard for securing software supply chains by authenticating container images and other OCI artifacts. The plugin extends Notation with Signer managed secrets and revocation capabilities. Customers can now incorporate the Signer plugin as a library inside their native tools to generate and verify container artifact signatures.

Notation can be used as a CLI executable or as a Golang library. With the open-source Signer plugin, you can now seamlessly incorporate signing and verification activities into your existing applications and tooling by adding a Go library. This removes the need to install and invoke the plugin as an executable. Additionally, you get transparency into how AWS Signer APIs are used for signature generation and verification. If you prefer a CLI integration with Signer, you can also build your own version of the Signer plugin executable or continue downloading pre-built executables from the AWS Signer documentation.

The AWS Signer plugin is released as an open-source project under the Apache 2.0 license. You can access the source code and build instructions in the GitHub repository here. To learn more about container image signing, refer to this blog.

Mistral Large 2 foundation model now available in Amazon Bedrock

Mistral AI’s Mistral Large 2 (24.07) foundation model is now generally available in Amazon Bedrock. This model is the latest version of Mistral AI's flagship large language model, Mistral Large (24.02), with significant improvements on multilingual accuracy, conversational behavior, coding capabilities, reasoning and instruction-following behavior.

With this new release of Mistral Large, you can leverage its multilingual proficiency to seamlessly communicate and process information in dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi. It has an increased context window of 128K tokens (compared to 32K in the original version), allowing it to process more information in order to generate accurate responses. Its advanced coding capabilities enable you to develop software across various programming languages, such as Python, Java, C, C++, JavaScript, and Bash, as well as specialized languages like Swift and Fortran. The model also has best-in-class agentic capabilities with native function calling and JSON output, allowing it to interact with external systems, APIs, and tools.
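As a minimal sketch of invoking the model, assuming boto3; the model ID below follows Bedrock's naming for Mistral Large 2 (24.07) but is an assumption to verify in the console, and the network call is commented out so the sketch stays self-contained:

```python
# Build a Bedrock Converse API request locally; modelId is an assumption.
request = {
    "modelId": "mistral.mistral-large-2407-v1:0",   # assumed model ID
    "messages": [
        {"role": "user",
         "content": [{"text": "Translate to French: 'Hello, world.'"}]},
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.3},
}
# import boto3
# bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")
# reply = bedrock.converse(**request)
# print(reply["output"]["message"]["content"][0]["text"])
```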

Mistral AI’s Mistral Large 2 foundation model is now available on Amazon Bedrock in the US West (Oregon) region. To learn more read the Machine Learning blog, visit the Mistral AI on Amazon Bedrock product page, and read the documentation. To get started with Mistral Large on Amazon Bedrock, visit the Amazon Bedrock console.
 

AWS Cost Categories now supports “Billing Entity” dimension

AWS Cost Categories has added a new dimension, "Billing Entity", to its rules. You can now use eight dimension types when creating cost category rules: "Linked Account", "Charge Type", "Service", "Usage Type", "Cost Allocation Tags", "Region", "Billing Entity", and other "Cost Category".

AWS Cost Categories is a feature within the AWS Cost Management product suite that enables you to group cost and usage information into meaningful categories based on your needs. You can create custom categories and map your cost and usage information into them using rules you define across various dimensions. Once cost categories are set up and enabled, you can view and manage your AWS cost and usage information by these categories in AWS cost management services, for example to understand the ownership of your spend at the cost category level in AWS Cost Explorer and the AWS Cost and Usage Report (CUR). Cost categories can be applied to your AWS cost and usage at the beginning of the month or retroactively for up to 12 months. Adding Billing Entity gives you more granular control over your cost categories: Billing Entity identifies whether an invoice or transaction is for AWS Marketplace or for purchases of other AWS services.

Cost categories are provided free of charge, and this feature is available in all commercial Regions. To get started with cost categories, please visit the AWS Cost Categories product details page and the AWS Cost Categories FAQs.

Meta Llama 3.1 generative AI models now available in Amazon SageMaker JumpStart

The most advanced and capable Meta Llama models to date, Llama 3.1, are now available in Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models and built-in algorithms to help you quickly get started with ML. You can deploy and use Llama 3.1 models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK.

Llama 3.1 models demonstrate significant improvements over previous versions due to increased training data and scale. The models support a 128K context length, an increase of 120K tokens from Llama 3 and 16 times the context capacity of Llama 3 models, and offer improved reasoning for multilingual dialogue use cases in eight languages. The models can access more information from lengthy text passages to make more informed decisions and leverage richer contextual data to generate more refined responses. According to Meta, Llama 3.1 405B is one of the largest publicly available foundation models and is well suited for synthetic data generation and model distillation, both of which can improve smaller Llama models. To use synthetic data to fine-tune models, you must comply with Meta's license; read the EULA for additional information. All Llama 3.1 models provide state-of-the-art capabilities in general knowledge, math, tool use, and multilingual translation.

Llama 3.1 models are available today in SageMaker JumpStart in US East (Ohio), US West (Oregon), and US East (N. Virginia) AWS regions. To get started with Llama 3.1 models in SageMaker JumpStart, see documentation and blog.

Meta Llama 3.1 generative AI models now available in Amazon Bedrock

The most advanced Meta Llama models to date, Llama 3.1, are now available in Amazon Bedrock. Amazon Bedrock offers a turnkey way to build generative AI applications with Llama. Llama 3.1 models are a collection of 8B, 70B, and 405B parameter size models offering new capabilities for your generative AI applications.

All Llama 3.1 models demonstrate significant improvements over previous versions. The models support a 128K context length, 16 times the context capacity of Llama 3, and exhibit improved reasoning for multilingual dialogue use cases in eight languages. The models access more information from lengthy text to make more informed decisions and leverage richer contextual data to generate more subtle and refined responses. According to Meta, Llama 3.1 405B is one of the best and largest publicly available foundation models and is well suited for synthetic data generation and model distillation. Llama 3.1 models also provide state-of-the-art capabilities in general knowledge, math, tool use, and multilingual translation.

Meta’s Llama 3.1 models are available in Amazon Bedrock in the US West (Oregon) Region. To learn more, read the AWS News launch blog, Llama in Amazon Bedrock product page, and documentation. To get started with Llama 3.1 in Amazon Bedrock, visit the Amazon Bedrock console. To request to be considered for access to the preview of Llama 3.1 405B in Amazon Bedrock, contact your AWS account team or submit a support ticket via the AWS Management Console. When creating the support ticket, select Bedrock as the Service and Models as the Category.

AWS Mainframe Modernization Code Conversion with mLogica is now generally available

We are excited to announce the general availability of AWS Mainframe Modernization Code Conversion with mLogica. This new capability enables automated conversion of legacy code written in Assembler language to COBOL. Most mainframe environments include Assembler code that is expensive to maintain. Converting this code unblocks refactor, replatform, and on-mainframe modernization projects, whether run within AWS Mainframe Modernization toolchains or alongside third-party modernization toolchains.

The AWS Mainframe Modernization service allows you to modernize and migrate on-premises mainframe applications to AWS. It offers automated refactor and replatform patterns, as well as augmentation patterns via data replication and file transfer.

AWS Mainframe Modernization Code Conversion is available through the AWS Mainframe Modernization service console. To learn more, visit AWS Mainframe Modernization service product and documentation pages.
 

Amazon EKS introduces new controls for Kubernetes version support policy

Today, Amazon EKS announces new controls for Kubernetes version policy, allowing cluster administrators to configure end-of-standard-support behavior for EKS clusters. This behavior can easily be set through the EKS console and CLI. Kubernetes version policy control is available for Kubernetes versions in standard support.

Controls for Kubernetes version policy make it easier for you to choose which clusters should enter extended support and which clusters can be automatically upgraded at the end of standard support. This control gives you the flexibility to balance version upgrades against business requirements depending on the environment or applications running in each cluster.
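As a minimal sketch of setting this control, assuming boto3 and a placeholder cluster name; the API call itself is commented out so the sketch stays self-contained:

```python
# End-of-standard-support behavior is set via the cluster's upgrade policy;
# supportType accepts STANDARD (auto-upgrade at end of standard support)
# or EXTENDED (enter extended support).
upgrade_policy = {
    "name": "prod-cluster",                       # placeholder cluster name
    "upgradePolicy": {"supportType": "STANDARD"},
}
# import boto3
# boto3.client("eks").update_cluster_config(**upgrade_policy)
```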

Controls for Kubernetes version policy is available in all AWS regions.

To learn more about the controls for Kubernetes version policy, refer to the EKS documentation.

AWS AppConfig announces feature flag targets, variants, and splits

Today, AWS announces advanced targeting capabilities for AWS AppConfig feature flags. Customers can set up multiple values within flag data and target those values to fine-grained, high-cardinality user segments. A common use case for feature flag targets is an allow list, where a customer specifies user IDs or customer tiers and enables a new or premium feature only for those segments. Another use case is splitting traffic, for example routing 15% of your user base to an experimental user-experience optimization before rolling the feature out to all users.

Customers can start using this powerful feature by creating an AWS AppConfig feature flag, setting its value, and then creating one or more variants of that flag with different variations of data. Customers then create rules to determine which variant should be targeted to specific segments. Once the flag, variants, and rules are created, customers use the latest version of the AWS AppConfig Agent running in EC2, Lambda, ECS, EKS, or on-premises to retrieve the flag data. When requesting flag data, customers pass in context, such as user IDs or other user metadata, which is evaluated client-side against the flag rules to return the appropriate data.
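The behavior described above can be illustrated with a minimal client-side sketch; the real evaluation is performed by the AWS AppConfig Agent, and the flag values, rule shapes, and 15% split below are illustrative assumptions, not AppConfig's rule syntax:

```python
import zlib

def select_variant(context, variants, default):
    """Return the value of the first variant whose rule matches the context."""
    for variant in variants:
        if variant["rule"](context):
            return variant["value"]
    return default

def percent_bucket(user_id):
    """Stable 0-99 bucket so a given user always lands in the same split."""
    return zlib.crc32(user_id.encode()) % 100

variants = [
    # Allow list: enable a premium feature only for specific user IDs.
    {"rule": lambda ctx: ctx.get("user_id") in {"u-123", "u-456"},
     "value": {"premium_search": True}},
    # Traffic split: roughly 15% of users try the experimental experience.
    {"rule": lambda ctx: percent_bucket(ctx.get("user_id", "")) < 15,
     "value": {"premium_search": False, "experiment": "new-ranking"}},
]
default = {"premium_search": False}

print(select_variant({"user_id": "u-123"}, variants, default))
# → {'premium_search': True}  (first rule matches the allow list)
```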

AWS AppConfig’s feature flag targets, variants, and splits are available in all AWS Regions, including the AWS GovCloud (US) Regions. To get started, use the AWS AppConfig Getting Started Guide.

Amazon ECS now supports Amazon Linux 2023 and more for on-premises container workloads

Amazon Elastic Container Service (Amazon ECS) now supports managing on-premises workloads running on Amazon Linux 2023, Fedora 40, Debian 11, Debian 12, Ubuntu 24, and CentOS Stream 9. Amazon ECS Anywhere is a feature of Amazon ECS that enables you to run and manage container-based applications on-premises, including on your own virtual machines (VMs) and bare metal servers.

Amazon ECS Anywhere is available in all AWS Regions globally. To learn more visit the ECS Anywhere user guide.
 

Amazon Connect Contact Lens launches a new dashboard for outbound campaign analytics

Amazon Connect Contact Lens now offers a new dashboard for outbound campaign analytics. You can now easily visualize and monitor campaign performance, track efficiency, measure compliance, and understand campaign outcomes for your voice workloads. You can view real-time and historical reports using custom time periods and benchmarks, track campaign progress and delivery status, and drill down into call classification outcomes (e.g., human answered, voicemail). You can also quickly identify trends and patterns across key metrics, such as dials attempted or abandonment rate, to monitor and enhance campaign performance. Additionally, these metrics are now available via API for custom reporting or integrations with other data sources.

Outbound campaign analytics is available in Amazon Connect Contact Lens reports and the GetMetricDataV2 API in all AWS regions where Amazon Connect outbound campaigns is available. This includes US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London).

For more information about outbound campaign analytics, consult the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect Outbound Campaigns, please visit the outbound campaigns webpage.

Amazon MQ now supports quorum queues for RabbitMQ 3.13

Amazon MQ now provides support for quorum queues, a replicated FIFO queue type offered by open-source RabbitMQ that uses the Raft consensus algorithm to maintain data consistency. Quorum queues are the replicated queue type recommended by open-source RabbitMQ maintainers. With quorum queues, developers can design highly available messaging systems with higher data consistency and fault tolerance.

Quorum queues can detect network failures faster and recover more quickly, improving the resiliency of the message broker as a whole. Quorum queues also provide poison message handling which helps developers manage unprocessed messages more efficiently. Amazon MQ benchmarks show that quorum queues offer an increased throughput (up to 2 times higher) compared to classic mirrored queues on RabbitMQ brokers.

Amazon MQ supports quorum queues only on RabbitMQ 3.13 and above. You can easily get started with quorum queues by declaring a new queue with the queue type set to 'quorum'. To learn more about using quorum queues, see the Amazon MQ developer guide and the Amazon MQ release notes. Quorum queues are available in all AWS Regions where Amazon MQ is available. For a full list of available Regions, see the AWS Region Table.
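As a minimal sketch of such a declaration, assuming the pika client and a placeholder Amazon MQ endpoint; the broker connection lines are commented out so the sketch stays self-contained:

```python
# A quorum queue is declared by passing the x-queue-type argument.
quorum_args = {"x-queue-type": "quorum"}

# import pika
# params = pika.URLParameters(
#     "amqps://user:password@b-1234-abcd.mq.us-east-1.amazonaws.com:5671")
# channel = pika.BlockingConnection(params).channel()
# channel.queue_declare(queue="orders", durable=True, arguments=quorum_args)
```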
 

Amazon VPC IPAM now supports BYOIP for IPs registered with any Internet Registry

Starting today, Amazon Virtual Private Cloud (VPC) IP Address Manager (IPAM) supports Bring-Your-Own-IP (BYOIP) for IP addresses registered with any Internet Registry. Internet registries manage the allocation and registration of IP addresses within specific geographical regions. BYOIP allows you to bring IP addresses allocated to you by these registries to AWS and use them for your workloads. This new feature extends BYOIP support to previously unsupported Internet Registries, including JPNIC, LACNIC, and AFRINIC.

When setting up BYOIP, AWS validates that you control the IP address space that you are bringing to AWS. This validation ensures that users cannot use IP ranges belonging to others, preventing routing and security issues. Previously, IPAM only supported validation via the Registration Data Access Protocol (RDAP), which not all internet registries supported. Now, you can use DNS records to validate that the IP addresses belong to you, and the process does not rely on any internet registry. Once you have set up BYOIP, you can create Elastic IP addresses (IPv4) from your BYOIP address range and use them with AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances and NAT gateways. If you have BYOIPv6 addresses, you can associate them with subnets, Elastic Network Interfaces (ENI), and Amazon EC2 instances within your VPCs.

This feature is currently available in all AWS Regions, except China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). For more information, review IPAM’s technical documentation.
 

Amazon DocumentDB announces improvements to document compression

Amazon DocumentDB (with MongoDB compatibility) now supports the ability to enable compression on existing collections, set a compression threshold for each collection, and enable compression on all new collections using a cluster-wide setting. Compressed documents in Amazon DocumentDB can be up to 7 times smaller than uncompressed documents, leading to lower storage costs, I/O costs, and improved query performance.

Customers can enable document compression across their entire cluster using a single cluster-wide parameter group setting. With the compression setting enabled, all new collections created during a database migration or after upgrading the cluster will be compressed by default. The compression parameter also applies to collections created through insert operations or $out aggregation stage.

Moreover, customers can now modify compression settings on existing collections using the new collMod settings, without migrating documents to a new compressed collection. collMod now supports enabling compression on existing collections and setting a minimum compression threshold on document size. These compression settings apply to both new and updated documents in the modified collections.
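A hedged sketch of that collMod call, assuming pymongo and a placeholder collection; the compression option names below are assumptions inferred from the description above, not verified Amazon DocumentDB field names:

```python
# Build the collMod command document locally; option names are assumptions.
collmod_cmd = {
    "collMod": "orders",
    "compression": {        # assumed option name
        "enable": True,     # assumed: turn compression on
        "threshold": 2048,  # assumed: minimum document size in bytes
    },
}
# from pymongo import MongoClient
# MongoClient("<cluster-endpoint>")["mydb"].command(collmod_cmd)
```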

These compression benefits are now supported in Amazon DocumentDB 5.0 instance-based clusters in all regions where Amazon DocumentDB is available. Please refer to document compression page in the developer guide for more details.
 

Amazon Connect Contact Lens now provides generative AI-powered summaries within seconds after a contact ends

Amazon Connect Contact Lens now provides generative AI-powered post-contact summaries within seconds after a contact ends, versus minutes previously, helping you get faster insights when reviewing contacts, save time on after-contact work, and more quickly identify opportunities to improve contact quality and agent performance. These faster summaries are available via API and Kinesis data streams, enabling integrations with third-party agent workspace or CRM systems. You can also access these summaries natively within Amazon Connect through contact details and the contact control panel (CCP).

Generative AI-powered post-contact summaries are available in the US West (Oregon), and US East (Northern Virginia) regions. To learn more, please visit our documentation and our webpage. This feature is included with Contact Lens conversational analytics at no additional charge. For information about Contact Lens pricing, please visit our pricing page.
 

Amazon RDS now supports M6i, R6i, M6g, R6g, and T4g database instances in Israel (Tel Aviv) Region

Amazon Relational Database Service (Amazon RDS) for PostgreSQL, MySQL, and MariaDB now supports M6i, R6i, M6g, R6g, and T4g database instances in Israel (Tel Aviv) Region. With this expansion, customers of RDS for open source engines in Israel (Tel Aviv) Region have more than double the number of available instance types to choose from.

M6i and R6i database instance types offer a new maximum instance size for the region of 32xlarge. 32xlarge supports 128 vCPU, which is 33% more than the maximum size of M5 and R5 database instance types.

For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. Get started by creating any of these fully managed database instances using the Amazon RDS Management Console. For more details, refer to the Amazon RDS User Guide.
 

AWS KMS increases default service quotas for cryptographic operations

AWS KMS has doubled default service quotas for cryptographic operations in all AWS Regions, including raising the symmetric cryptographic operation request rate from 50,000 to 100,000 in US East (N. Virginia), US West (Oregon), and Europe (Ireland).

The request rate for cryptographic operations involving RSA and ECC KMS keys has also been increased from 500 to 1,000 in all AWS Regions.

These default service quotas have been increased in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more, see Request quotas section in the AWS KMS Developer Guide.

Amazon DocumentDB now supports change streams on reader instances

Amazon DocumentDB (with MongoDB compatibility) now supports change streams on reader instances.

With change streams on reader instances, customers can now isolate change stream workloads to specific reader instances, which reduces the load on the cluster's writer instance. Change stream tokens can be shared across writer and reader instances, enabling customers to resume change streams from a specific document or time on any Amazon DocumentDB instance during a cluster failover or maintenance event. This functionality is also available in Amazon DocumentDB global clusters: customers can now read change streams from reader instances in the secondary global cluster.
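As a minimal sketch, assuming pymongo and a placeholder reader endpoint: a secondary read preference directs the change stream to a reader instance. The connection and watch() calls are commented out so the sketch stays self-contained:

```python
# Connection string targeting the cluster reader endpoint (placeholder host).
uri = ("mongodb://user:password@my-cluster.cluster-ro-sample.us-east-1"
       ".docdb.amazonaws.com:27017/?readPreference=secondaryPreferred")
# from pymongo import MongoClient
# coll = MongoClient(uri)["mydb"]["orders"]
# with coll.watch(full_document="updateLookup") as stream:
#     for change in stream:
#         print(change["operationType"], change.get("documentKey"))
```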

These change stream enhancements are available in Amazon DocumentDB 5.0 instance-based clusters and global clusters in all regions where Amazon DocumentDB is supported.

Amazon DocumentDB is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. To learn more, please visit the documentation, and get started by creating Amazon DocumentDB cluster from the AWS Management Console. For pricing and region availability, visit the pricing page.
 

Amazon Connect launches search API for agent status

Amazon Connect now provides an API to search agent statuses by name, ID, tag, or other criteria. Agent statuses are used in the Contact Control Panel (CCP) to indicate whether an agent is available to handle contacts or is unavailable, for example at lunch or in training. With this new API, you can answer questions such as "How many of our statuses are disabled?" or "Which statuses have 'break' in their description?" and see a response with details like name, description, display order, and ARN.
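A hedged sketch of such a query, assuming boto3 and a placeholder instance ID; the SearchCriteria shape follows the common Connect search-API pattern, but the exact field names are assumptions to check against the API reference:

```python
# Search for agent statuses whose description mentions "break".
search_request = {
    "InstanceId": "12345678-1234-1234-1234-123456789012",  # placeholder
    "SearchCriteria": {
        "StringCondition": {
            "FieldName": "description",      # assumed searchable field
            "Value": "break",
            "ComparisonType": "CONTAINS",
        }
    },
}
# import boto3
# statuses = boto3.client("connect").search_agent_statuses(**search_request)
```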

The SearchAgentStatuses API is supported in all AWS regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website. To learn more about the new search APIs, see the API documentation. To learn more about agent statuses, see the Amazon Connect Administrator Guide.

Productionize Fine-tuned Foundation Models from SageMaker Canvas

Amazon SageMaker Canvas now supports deploying fine-tuned Foundation Models (FMs) to SageMaker real-time inference endpoints, allowing you to bring generative AI capabilities into production and consume them outside the Canvas workspace. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions and use generative AI capabilities.

SageMaker Canvas provides access to fine-tunable FMs powered by Amazon Bedrock and SageMaker JumpStart, such as Amazon Titan Express, Falcon-7B-Instruct, Falcon-40B-Instruct, and Flan-T5 variants. You can upload a dataset and select an FM to fine-tune, and SageMaker Canvas automatically creates and tunes the model, adapting the FM to the patterns and nuances of your specific use case and enhancing the performance of the model's responses.

Starting today, you can deploy fine-tuned FMs to SageMaker endpoints, making it easier to integrate generative AI capabilities into your applications outside the SageMaker Canvas workspace.

To get started, log in to SageMaker Canvas to access the fine-tuned FMs. Select the desired model and deploy it with the appropriate endpoint configuration, such as running indefinitely or for a specific duration of time. SageMaker inferencing charges apply to deployed models. A new user can access the latest version by launching SageMaker Canvas directly from the AWS console. An existing user can access the latest version of SageMaker Canvas by clicking "Log Out" and logging back in.

The expanded feature is now available in all AWS regions where SageMaker Canvas is supported. To learn more, refer to the SageMaker Canvas product documentation.

Amazon Connect launches search API for hierarchy groups

Amazon Connect now provides an API to search for hierarchy groups by name, group ID, tag, or other criteria. Hierarchy groups describe your organization’s structure, and are used for reporting and access control. With this new API, you can now answer questions such as, “How many teams operate in the northwest region?” and, “What groups have a tag indicating they can access performance reviews?” and see a response with details like name, description, hierarchy level, ARN, and when a record was last updated.

The SearchUserHierarchyGroups API is supported in all AWS regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website. To learn more about the new search APIs, see the API documentation. To learn more about hierarchy groups, see the Amazon Connect Administrator Guide.
 

Simplify Your AWS Marketplace Catalog API (CAPI) Integration with Strongly-Typed API Schemas

We're excited to announce the introduction of a GitHub library that will host the schemas for the DetailsDocument used in StartChangeSet, DescribeChangeSet, and DescribeEntity APIs in Catalog API (CAPI). This new feature aims to simplify the integration process for developers working with the Catalog API.

Today, as a developer in a seller or partner organization, you need to construct the API request structure manually when integrating with the Catalog API for operations such as adding pricing dimensions. This involves reviewing the API documentation and experimenting to understand the schema of the "DetailsDocument" for the request. With the new schema library, you can directly import the Java and Python libraries to create a strongly-typed request, without having to refer to the documentation or experiment with the JSON structure. This will save time and reduce the risk of errors during both integration testing and implementation. Additionally, if the DetailsDocument schema changes, you can simply download the new version of the library, review the changes, and make the necessary updates to your code.

This new functionality exists alongside sending and receiving a string object in the "Details" attribute of the StartChangeSet, DescribeChangeSet, and DescribeEntity APIs. If you've already integrated with these APIs, you can continue using the "Details" attribute. However, newly onboarding sellers and sellers onboarding new API actions are advised to use the schema library to make integration with the Catalog API faster.
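A minimal sketch of the manual approach the schema library replaces: hand-building the Details JSON for StartChangeSet. The change type, entity type, and field names here are illustrative assumptions, not the exact schema, and the API call is commented out:

```python
import json

change_set = {
    "Catalog": "AWSMarketplace",
    "ChangeSet": [{
        "ChangeType": "UpdatePricingTerms",                   # assumed change type
        "Entity": {"Type": "SaaSProduct@1.0",                 # assumed entity type
                   "Identifier": "example-entity-id"},        # placeholder
        "Details": json.dumps({"PricingModel": "Contract"}),  # assumed fields
    }],
}
# import boto3
# boto3.client("marketplace-catalog").start_change_set(**change_set)
```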

For information on how to download the schema library and use it, refer to the StartChangeSet, DescribeChangeSet, and DescribeEntity API documentation.
 

AWS Lambda now supports Amazon MQ for ActiveMQ and RabbitMQ in five new regions

AWS Lambda now supports Amazon MQ for ActiveMQ and RabbitMQ in the Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), and Israel (Tel Aviv) regions, enabling you to build serverless applications with Lambda functions that are invoked based on messages posted to Amazon MQ message brokers.

Amazon MQ is a managed message broker service for Apache ActiveMQ Classic and RabbitMQ that makes it easy to migrate to a message broker in the cloud. Lambda makes it easy to read from Amazon MQ message brokers and process messages without needing to create and manage a consumer application that monitors Amazon MQ queues for updates. Your Lambda function is invoked when messages exceed the batch size, when the batch window expires, or when the payload exceeds 6 MB. Lambda manages connectivity with your Amazon MQ message brokers on your behalf.
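As a minimal sketch of wiring this up, assuming boto3; the broker ARN, queue name, and Secrets Manager ARN are placeholders, and the API call is commented out so the sketch stays self-contained:

```python
# Event source mapping: Lambda polls the RabbitMQ queue on your behalf.
esm_params = {
    "EventSourceArn": "arn:aws:mq:eu-west-3:123456789012:broker:my-broker:b-1234",
    "FunctionName": "process-orders",
    "Queues": ["orders"],            # RabbitMQ queue to consume from
    "BatchSize": 100,
    "SourceAccessConfigurations": [
        {"Type": "BASIC_AUTH",       # broker credentials stored in Secrets Manager
         "URI": "arn:aws:secretsmanager:eu-west-3:123456789012:secret:mq-creds"},
    ],
}
# import boto3
# boto3.client("lambda").create_event_source_mapping(**esm_params)
```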

This feature incurs no additional charge. You pay for the Lambda function invocations triggered by the event source mapping connected to Amazon MQ message brokers. To learn more, see the Lambda Developer Guide for Amazon MQ.

Amazon CloudWatch Logs Infrequent Access log class available in AWS GovCloud (US) Regions

Amazon CloudWatch Logs Infrequent Access (Logs IA), a log class for cost-effectively consolidating all your logs natively on AWS, is now available in all AWS GovCloud (US) Regions. Logs IA helps improve visibility into your overall application health with a subset of CloudWatch Logs' capabilities, including managed ingestion, cross-account log analytics, and encryption, at a lower per-GB ingestion price. This makes Logs IA ideal for ad-hoc querying and after-the-fact forensic analysis on infrequently accessed logs.

With Logs IA, you can choose the log class that best aligns with your use case. While you can use CloudWatch Logs Standard for logs requiring real-time operational visibility, anomaly detection, or real-time logs analysis, Logs IA is best suited for logs that require infrequent querying. By consolidating your logs natively in CloudWatch, you can eliminate the operational overhead of managing multiple solutions and reduce your Mean Time to Resolution (MTTR) by analyzing all your logs in one place.

CloudWatch Logs IA is now available in all AWS GovCloud (US) Regions. With a few clicks in the AWS Management Console, you can start using CloudWatch Logs IA to send logs to CloudWatch. Alternatively, you can use the AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), and AWS SDKs. Learn more about CloudWatch Logs IA pricing and read the user guide.
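A minimal sketch of choosing the log class, assuming boto3 and a placeholder log group name; the log class is selected at log group creation, and the API call is commented out:

```python
# The log class is set once, when the log group is created.
log_group = {
    "logGroupName": "/app/forensics",      # placeholder
    "logGroupClass": "INFREQUENT_ACCESS",  # vs. the default STANDARD
}
# import boto3
# boto3.client("logs").create_log_group(**log_group)
```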

Amazon EC2 C7i-flex instances are now available in Asia Pacific (Sydney) and Asia Pacific (Tokyo) Regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances, which deliver up to 19% better price performance compared to C6i instances, are available in the Asia Pacific (Sydney) and Asia Pacific (Tokyo) Regions. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price-performance benefits for a majority of compute-intensive workloads. The new instances are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids), available only on AWS, and offer 5% lower prices compared to C7i.

C7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances.

C7i-flex instances are available in the following AWS Regions: US East (Ohio), US West (N. California), Europe (Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), and South America (São Paulo).

To learn more, visit Amazon EC2 C7i-flex instances.
 

Amazon RDS for SQL Server supports password policies for SQL Server logins

Amazon Relational Database Service (Amazon RDS) for SQL Server now supports password policies for SQL Server logins. If you use SQL Server logins to authenticate users to an RDS for SQL Server database instance, you can now apply password policies to meet your compliance requirements. You can configure password policy parameters such as minimum length, minimum age, maximum age, lockout threshold, lockout duration, and lockout reset counter.

Customers can configure password policies in all AWS regions where Amazon RDS for SQL Server databases are available, including the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

AWS Lambda now supports SnapStart for Java functions that use the ARM64 architecture

Starting today, you can use Lambda SnapStart with Java functions that use the ARM64 instruction set architecture.

SnapStart for Java delivers up to 10x faster function startup performance at no extra cost, enabling you to build highly responsive and scalable Java applications using AWS Lambda without having to provision resources or implement complex performance optimizations. This launch expands SnapStart's performance benefits to functions running on the ARM64 architecture, which offers up to 34% better price performance compared to x86.

Lambda SnapStart for Java functions on the ARM64 architecture is available in all AWS Regions where SnapStart is generally available. You can activate SnapStart for new or existing Lambda functions that use the ARM64 architecture and Java 11 or higher using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDKs, and AWS Cloud Development Kit (AWS CDK).
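A minimal sketch of activating SnapStart, assuming boto3 and a placeholder function name; SnapStart takes effect on published versions, and the API calls are commented out so the sketch stays self-contained:

```python
# SnapStart is enabled per function via its configuration.
snapstart_config = {
    "FunctionName": "my-arm64-java-function",       # placeholder
    "SnapStart": {"ApplyOn": "PublishedVersions"},
}
# import boto3
# lam = boto3.client("lambda")
# lam.update_function_configuration(**snapstart_config)
# lam.publish_version(FunctionName="my-arm64-java-function")  # snapshot taken here
```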

For more information on Lambda SnapStart, see the documentation and the SnapStart launch blog post. To learn more about Lambda, see the Lambda developer guide.

Amazon EC2 High Memory instances now available in Asia Pacific (Jakarta) Region

Starting today, Amazon EC2 High Memory instances with 6 TiB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge) are available in the Asia Pacific (Jakarta) Region. Customers can use these new High Memory instances with On-Demand and Savings Plan purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

Amazon Redshift Serverless with lower base capacity available in the Europe (London) Region

Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 8 Redshift Processing Units (RPUs) in the AWS Europe (London) Region. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of the workloads you run, in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 32 RPUs. With the new lower minimum of 8 RPUs, you have even more flexibility to support a diverse set of workloads, from small to large complexity, based on your price-performance requirements. You can increase or decrease the base capacity in increments of 8 RPUs.
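A minimal sketch of setting the new minimum, assuming boto3 and a placeholder workgroup name; the API call is commented out so the sketch stays self-contained:

```python
# Base capacity is a property of the Serverless workgroup.
workgroup_update = {
    "workgroupName": "analytics-wg",   # placeholder
    "baseCapacity": 8,                 # RPUs; adjustable in increments of 8
}
# import boto3
# boto3.client("redshift-serverless").update_workgroup(**workgroup_update)
```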

Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. With the new lower capacity configuration, you can use Amazon Redshift Serverless for production, test, and development environments at an optimal price point when a workload needs a small amount of compute.

To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
 
