Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
Amazon Redshift introduces query identifiers for improved query performance monitoring

Amazon Redshift introduces a unique identifier assigned to SQL queries, which lets you effectively track query performance over time and identify recurring patterns in resource-intensive queries. This new feature, called a 'query hash', uniquely identifies SQL queries based on their textual representation and predicate values.

Query hash is a unique query signature generated for queries executed on a data warehouse. With query hash, you can investigate query performance by performing trend analysis of queries over time or by comparing performance for a query across different time periods. This feature adds two new columns to the SYS_QUERY_HISTORY view: user_query_hash, a hash that includes query literals, and generic_query_hash, a hash that excludes them.
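As a sketch of how the new columns might be used, the query below aggregates SYS_QUERY_HISTORY by generic_query_hash to surface the most expensive recurring query shapes. The cluster identifier, database name, and Region are hypothetical placeholders:

```python
# Sketch: rank recurring query shapes by total execution time using the new
# generic_query_hash column. Cluster, database, and Region are placeholders.
TOP_QUERIES_SQL = """
SELECT generic_query_hash,
       COUNT(*)            AS executions,
       SUM(execution_time) AS total_execution_time_us
FROM sys_query_history
WHERE start_time > DATEADD(day, -7, GETDATE())
GROUP BY generic_query_hash
ORDER BY total_execution_time_us DESC
LIMIT 10;
"""

def run_top_queries() -> str:
    """Submit the query via the Redshift Data API; returns the statement ID."""
    import boto3  # imported here so the sketch loads without AWS credentials
    client = boto3.client("redshift-data", region_name="us-east-1")
    resp = client.execute_statement(
        ClusterIdentifier="my-cluster",  # hypothetical
        Database="dev",
        Sql=TOP_QUERIES_SQL,
    )
    return resp["Id"]  # poll with describe_statement / get_statement_result
```

Calling run_top_queries() requires AWS credentials with Redshift Data API access; once the statement finishes, fetch rows with get_statement_result.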

Amazon Redshift query hash is now generally available for both Amazon Redshift provisioned clusters and Amazon Redshift Serverless data warehouses in all AWS commercial and the AWS GovCloud (US) Regions where Amazon Redshift is available. To get started and learn more, visit the Amazon Redshift database developer guide.

Four new synthetic generative voices for Amazon Polly

Today, we are excited to announce the general availability of four highly expressive Amazon Polly voices speaking in American and Australian English.

Amazon Polly is a managed service that turns text into lifelike speech, allowing you to create applications that talk and to build speech-enabled products depending on your business needs.

The generative engine is Amazon Polly's most advanced text-to-speech (TTS) model. With this launch, we add four new synthetic generative English voices to the Polly portfolio: an Australian English voice, Olivia, and three US English voices, Joanna, Danielle, and Stephen. These voices sound similar to our neural voices with the same names, but they have much more natural pronunciation and prosody. Customers can use this high-tier product across industries and use cases, such as education, publishing, and marketing.
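As a minimal sketch of requesting one of these voices (the voice names come from the announcement; the Region and output file name are assumptions), the helper below calls the Polly SynthesizeSpeech API with the generative engine:

```python
# New generative voices from this launch, keyed by language code.
GENERATIVE_VOICES = {
    "en-US": ["Danielle", "Joanna", "Stephen"],
    "en-AU": ["Olivia"],
}

def synthesize(text: str, voice_id: str = "Danielle", out_path: str = "speech.mp3") -> None:
    """Render `text` with the generative engine and write an MP3 file."""
    import boto3  # imported here so the sketch loads without AWS credentials
    polly = boto3.client("polly", region_name="us-east-1")
    resp = polly.synthesize_speech(
        Engine="generative",  # select the generative TTS engine
        VoiceId=voice_id,
        OutputFormat="mp3",
        Text=text,
    )
    with open(out_path, "wb") as f:
        f.write(resp["AudioStream"].read())
```

Running synthesize() requires AWS credentials in a Region where the generative engine is available.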

Danielle, Joanna, Olivia, and Stephen generative voices are available in the US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) Regions and complement the other types of voices already available in the same Regions.


For more details, please read the Amazon Polly documentation and visit our pricing page.

Amazon WorkSpaces Thin Client inventory now available to purchase in the UK

Amazon WorkSpaces Thin Client inventory is now available to purchase in the UK on Amazon Business.

Amazon WorkSpaces Thin Client is a low-cost end-user device that helps organizations reduce overall virtual desktop costs, strengthen security posture, and simplify end-user deployment. End users can set up WorkSpaces Thin Client in minutes using an on-device guided deployment experience to connect their network and peripherals and log in to their virtual desktops including Amazon WorkSpaces, Amazon WorkSpaces Web, and Amazon AppStream 2.0. WorkSpaces Thin Client also helps IT organizations improve security by removing the ability to upload or download files or applications and by establishing device trust through a secure chip. With the WorkSpaces Thin Client service, IT administrators have a complete view of their inventory and can remotely reset, patch, and control access to the thin client.

The WorkSpaces Thin Client service is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London). Devices are now available to purchase in the US, UK, France, Germany, Italy, and Spain on Amazon Business.

Visit the WorkSpaces Thin Client page and Amazon Business to learn more.
 

Streamline automation of policy management workflows with service reference information

We now offer service reference information to streamline automation of policy management workflows, helping you retrieve the available actions across AWS services from machine-readable files. Whether you are a security administrator establishing guardrails for workloads or a developer ensuring appropriate access to applications, you can now more easily identify the available actions for AWS services and seamlessly incorporate that metadata into your own policy management workflows.

With this new offering, you can automate the retrieval of service reference information, eliminating manual effort and helping ensure your policies align with the latest service updates. You can incorporate this service reference directly into your existing policy management tools and processes for seamless integration. This feature is offered at no additional cost. To get started, refer to the documentation on service reference information.
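As a sketch of consuming the machine-readable files: the endpoint URL and JSON field names below are assumptions based on the launch documentation, so verify them against the current service reference docs before relying on them.

```python
# Sketch: read the machine-readable service reference files and list the
# actions for one service. Endpoint and field names are assumptions.
import json
import urllib.request

SERVICE_REFERENCE_URL = "https://servicereference.us-east-1.amazonaws.com/"

def actions_for(service_prefix: str) -> list[str]:
    """Return the action names published for one service prefix (e.g. 's3')."""
    with urllib.request.urlopen(SERVICE_REFERENCE_URL) as resp:
        index = json.load(resp)  # assumed: list of {"service": ..., "url": ...}
    url = next(item["url"] for item in index if item["service"] == service_prefix)
    with urllib.request.urlopen(url) as resp:
        reference = json.load(resp)
    return [action["Name"] for action in reference["Actions"]]
```

A policy linter could, for example, diff actions_for("s3") against the actions named in your SCPs to catch references to actions that no longer exist.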

Announcing general availability of Console to Code to generate code

AWS is announcing the general availability of Console to Code, powered by Amazon Q Developer. Console to Code makes it simple, fast, and cost-effective to move from prototyping in the AWS Management Console to building code for production deployments. Customers can generate code for their console actions in their preferred format with a single click. The generated code helps customers get started and bootstrap their automation pipelines for tasks.

Console to Code makes it easy to convert actions performed in the console into reusable code in the language of your choice. Customers use the AWS Management Console to learn and prototype cloud solutions, and with Console to Code they can automatically capture those actions and generate code for them. Console to Code provides code in AWS CLI, AWS CloudFormation, and AWS CDK formats. CLI code is recorded as customers take actions in the console and replicates the underlying AWS best practices. Customers can also generate CDK and CloudFormation code using Amazon Q Developer's generative AI capability. This code follows AWS-guided best practices for reliable deployments. Customers can copy or download the code and iterate on it to make it production ready. Customers no longer have to choose between the console and infrastructure as code (IaC).

Console to Code, powered by Amazon Q Developer, is generally available in commercial Regions for Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), and Amazon Relational Database Service (RDS). Learn more about Console to Code.

Amazon EventBridge Schema Registry now in the AWS GovCloud (US) Regions

The Amazon EventBridge Schema Registry and Schema Discovery service is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, allowing you to discover and store event structure - or schema - in a shared, central location. You can download code bindings for those schemas for Java, Python, TypeScript, and Golang so it’s easier to use events as objects in your code.

The Schema Discovery and Schema Registry features are integrated with the Amazon EventBridge Event Bus, which enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. While event buses decouple your event sources and event targets, letting your teams act more independently, the Schema Registry enables your teams to share the structures of the events they are publishing so that other teams can discover and create integrations incorporating their events.

You can have the Schema Registry centrally store schemas by adding them to the registry yourself or by turning on the Schema Discovery feature to automatically discover and store schemas from events sent to an event bus. By generating code bindings, you can interact with the event as an object in your code, using your preferred IDE to take advantage of features like code validation and auto-completion.
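As a sketch of the code-binding workflow (the registry name is the one Schema Discovery populates by default; the Region and language identifiers are assumptions to check against the Schemas API reference):

```python
DISCOVERED_REGISTRY = "discovered-schemas"  # registry that Schema Discovery populates

def download_binding(schema_name: str, language: str = "Python36") -> bytes:
    """Fetch a code-binding bundle (zip bytes) for a discovered schema."""
    import boto3  # imported here so the sketch loads without AWS credentials
    schemas = boto3.client("schemas", region_name="us-gov-west-1")
    resp = schemas.get_code_binding_source(
        RegistryName=DISCOVERED_REGISTRY,
        SchemaName=schema_name,
        Language=language,  # e.g. "Java8", "Python36", "TypeScript3", "Go1"
    )
    return resp["Body"].read()
```

Write the returned bytes to a .zip file and add the extracted classes to your project to get typed event objects with IDE auto-completion.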

To learn more about Amazon EventBridge, please visit our documentation or get started in the AWS console.

Amazon Bedrock Model Evaluation now supports evaluating custom models

Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. For those metrics, or for subjective and custom metrics such as friendliness, style, and alignment to brand voice, you can set up a human evaluation workflow with a few clicks. Human evaluation workflows can leverage your own employees or an AWS-managed team as reviewers. Model evaluation provides built-in curated datasets, or you can bring your own datasets.

Now, customers can evaluate their own custom fine-tuned models from fine-tuning and continued pretraining jobs on Amazon Bedrock. This allows customers to complete the cycle of selecting a base model, customizing it, evaluating it, and customizing it again if needed or continuing to production if they are satisfied with its evaluation outcome. To evaluate a custom model, simply select the custom model from the list of models to evaluate in the model selector tool when creating an evaluation job.

Model Evaluation on Amazon Bedrock is now generally available in supported commercial Regions and the AWS GovCloud (US-West) Region.

To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.
 

Amazon Connect now supports using your customer’s initial chat message to personalize the customer experience

Amazon Connect Chat now supports using your customer's initial message in flows, enabling you to improve self-service containment rates and personalize the customer experience. You can use the initial chat message to display the right step-by-step guide, trigger interactive messages from Amazon Lex (e.g., list pickers, carousels), or route the chat to the best agent. For example, if the initial message is about an order issue, you can immediately show the customer a list picker of recent orders. Alternatively, if the message is about rescheduling a delivery, you can present date and time pickers to help them make the change.

To use the customer's initial message with Amazon Lex, simply check the 'Initialize bot with message' option in the 'Get customer input' block within Amazon Connect's flow designer. Additionally, you can access the customer's initial message using the InitialMessage flow attribute for branching flows or integrations using AWS Lambda.

This new feature is available in all AWS regions where Amazon Connect is available. To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.

AWS CodePipeline introduces new getting started experience

AWS CodePipeline introduces a simplified and new getting started experience to enable you to quickly create new pipelines. When you create a new pipeline using the CodePipeline console, you can now select from a list of pipeline templates across Build, Automation, and Deployment use cases. After selecting a pipeline template, you will be prompted to enter values for the action configuration fields in the pipeline definition, and completing the process will render a fully configured pipeline that is ready to run.

To learn more about using the new pipeline templates, visit our documentation. For more information about AWS CodePipeline, visit our product page. The new getting started experience is available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions.
 

Amazon EventBridge Archive and Replay now in the AWS GovCloud (US) Regions

Amazon EventBridge Archive and Replay is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, making event-driven applications more durable and extensible by providing an easier way to replay past events. Archive and Replay enables you to build applications that can more easily recover from errors and also allows you to more easily validate new functionality in your applications.

The Archive and Replay feature is integrated with the Amazon EventBridge Event Bus, which enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up routing rules on the event bus to determine where to send your events, allowing for application architectures to react to changes in your systems as they occur. Event buses make it easier to build event-driven applications by facilitating event ingestion, delivery, security, authorization, and error handling.

To learn more about Amazon EventBridge, please visit our documentation or get started in the AWS console.
 

AWS Lambda now detects and stops recursive loops between Lambda and Amazon S3

Lambda recursive loop detection can now automatically detect and stop recursive loops between AWS Lambda and Amazon Simple Storage Service (Amazon S3). Lambda recursive loop detection, which is enabled by default, is a preventative guardrail that automatically detects and stops recursive invocations between Lambda and other supported services, preventing unintended usage and billing from runaway workloads.

Customers commonly use Amazon S3 as an event source to trigger Lambda functions. A customer misconfiguration or a code defect can cause processed events to be sent back to the same Amazon S3 bucket that invoked the Lambda function, causing unintended recursive loops. Now, Lambda will automatically detect and stop such recursive loops and send customers an AWS Health Dashboard notification with troubleshooting steps.

S3 support for recursive loop detection is available in all regions where Lambda recursive loop detection is available. If your function uses intentional recursive loops, you can use the PutFunctionRecursionConfig API to turn off recursive loop detection.
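For a workload that loops on purpose, the opt-out is a single API call. A minimal sketch, assuming a function name of your own:

```python
RECURSION_MODES = ("Terminate", "Allow")  # "Terminate" is the default guardrail

def allow_intentional_recursion(function_name: str) -> None:
    """Opt one function out of recursive loop detection (intentional loops only)."""
    import boto3  # imported here so the sketch loads without AWS credentials
    client = boto3.client("lambda")
    client.put_function_recursion_config(
        FunctionName=function_name,
        RecursiveLoop="Allow",
    )
```

Setting RecursiveLoop back to "Terminate" restores the default guardrail for that function.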

To learn more about Lambda recursive loop detection, please refer to Lambda documentation.

Announcing Amazon ElastiCache for Valkey

Today, Amazon ElastiCache announces support for Valkey, with Serverless priced 33% lower and node-based clusters priced 20% lower than other supported engines. With ElastiCache Serverless for Valkey, customers can create a cache in under a minute and get started for as low as $6/month. Valkey is an open-source, high-performance key-value datastore stewarded by the Linux Foundation. It is a drop-in replacement for Redis OSS, backed by 40+ companies, and has seen rapid adoption since the project's inception in March 2024.

Hundreds of thousands of customers use ElastiCache to improve their database and application performance and optimize costs. ElastiCache for Valkey provides a fully managed experience built on open-source technology while leveraging the security, operational excellence, 99.99% availability, and reliability from AWS. It is ideal for use cases such as caching, leaderboards, and session stores. To lower costs, ElastiCache Serverless for Valkey minimum data storage is 100MB, 90% lower than ElastiCache Serverless for Redis OSS. If you are using ElastiCache reserved nodes, when you switch from ElastiCache for Redis OSS to ElastiCache for Valkey, you retain your existing discounted reserved node rates across all node sizes within the same family.

ElastiCache for Valkey is now available in all AWS regions. You can upgrade from ElastiCache for Redis OSS to ElastiCache for Valkey in a few clicks with zero downtime. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page, getting started blog, and documentation.
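As a sketch of the SDK path (the cache name, Region, and description are placeholders), creating a serverless Valkey cache is one call:

```python
VALKEY_ENGINE = "valkey"

def create_valkey_cache(name: str = "my-valkey-cache") -> dict:
    """Create a serverless Valkey cache (name and Region are placeholders)."""
    import boto3  # imported here so the sketch loads without AWS credentials
    ec = boto3.client("elasticache", region_name="us-east-1")
    return ec.create_serverless_cache(
        ServerlessCacheName=name,
        Engine=VALKEY_ENGINE,
        Description="Serverless cache on the Valkey engine",
    )
```

Because Valkey is wire-compatible with Redis OSS, existing Redis client libraries can connect to the resulting endpoint unchanged.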

Announcing Amazon MemoryDB for Valkey

Today, Amazon MemoryDB announces support for Valkey, which is priced 30% lower than MemoryDB for Redis OSS. With MemoryDB for Valkey, you are not charged for data written up to 10 TB/month. Any data written over 10 TB/month is billed at $0.04/GB, which is 80% lower than MemoryDB for Redis OSS. Valkey is an open-source, high-performance key-value datastore stewarded by the Linux Foundation. It is a drop-in replacement for Redis OSS, backed by 40+ companies, and has seen rapid adoption since the project was created in March 2024.
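The data-written tier can be sanity-checked with a little arithmetic. The sketch below treats 1 TB as 1024 GB, which is an assumption about how the tiers are metered; check the MemoryDB pricing page for the exact unit definitions:

```python
FREE_TB_PER_MONTH = 10      # data written free of charge, TB/month (as announced)
OVERAGE_USD_PER_GB = 0.04   # price per GB written above the free tier

def monthly_write_cost(tb_written: float) -> float:
    """Estimated data-written charge, assuming 1 TB = 1024 GB."""
    overage_tb = max(0.0, tb_written - FREE_TB_PER_MONTH)
    return round(overage_tb * 1024 * OVERAGE_USD_PER_GB, 2)
```

For example, writing 12 TB in a month would incur a charge only on the 2 TB above the free tier.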

Amazon MemoryDB is a fully managed, Valkey- and Redis OSS-compatible database service, which provides multi-AZ durability, microsecond read and single-digit millisecond write latency, and high throughput. It is ideal for use cases such as caching, leaderboards, and session stores. With MemoryDB for Valkey, you can benefit from a fully managed experience built on open-source technology while leveraging the security, operational excellence, and reliability that AWS provides. MemoryDB for Valkey also delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS.

MemoryDB for Valkey is available in all AWS Regions that MemoryDB is available. To get started, you can create a new Valkey cluster by specifying a database name or upgrade an existing MemoryDB for Redis OSS cluster to MemoryDB for Valkey using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the MemoryDB features page, getting started blog, and documentation.

Access organization-wide views of agreements and spend in AWS Marketplace

AWS Marketplace announces the general availability of a new procurement insights dashboard, helping you manage your organization’s renewals and optimize your AWS Marketplace spend. The new dashboard gives you detailed visibility into your organization’s AWS Marketplace agreements and associated spend across the AWS accounts in your organization.

The procurement insights dashboard identifies contracts nearing their end of term, enabling you to proactively address upcoming renewals. The comprehensive view of your organization's AWS Marketplace agreements allows you to identify consolidation opportunities across teams. You can gain a deeper understanding of your organization's AWS Marketplace spend, as it offers insights into historical spend by offer type, subscriber ID, solution provider, and more. You can also embed the dashboard within other tools using an API.

The procurement insights dashboard and API are available in all AWS Regions where the AWS Marketplace console is available.

You can access the dashboard in the procurement insights tab in the AWS Marketplace console. If you have a management or delegated administrator account, you can enable organizational visibility in the console settings by following the directions in the AWS Marketplace Buyer Guide. You can access the directions to use the API here.

Extension of EOL Dates for Amazon Corretto 8 and 11

We are pleased to announce that Amazon is extending the End of Life (EOL) dates for Amazon Corretto 8 and Amazon Corretto 11.

The new EOL dates are as follows:

  • Amazon Corretto 8: December 2030
    • Previous EOL date was April 2026
  • Amazon Corretto 11: January 2032
    • Previous EOL date was September 2027


Please note that Amazon will halt support for JavaFX, which is currently included in Corretto 8, on its original EOL date of March 2026. After this date, JavaFX will no longer be included with Corretto 8.

If you have any questions or need further assistance, please reach out to the Amazon Corretto team by creating a GitHub ticket or, if you have an AWS service contract, you may alternatively raise a ticket through the AWS Support Center.

For customers looking to upgrade the JDK version for their Java applications, check out Amazon Q Developer for code transformation assistance.

Thank you for choosing Amazon Corretto!

Amazon OpenSearch Serverless introduces a suite of new features and enhancements

Amazon OpenSearch Serverless has recently introduced a suite of new features and enhancements that enable faster indexing, improved search performance, and expanded analytical capabilities.

The updates include a new flat object data type, which allows for more efficient storage and searching of nested data. OpenSearch Serverless now supports enhanced geospatial features, helping users uncover valuable insights from location-based data. OpenSearch Serverless has also expanded its field types, adding support for unsigned long and the doc count mapper. The new multi-term aggregation feature enables you to perform complex aggregations and gain deeper insights into your data. Furthermore, OpenSearch Serverless has seen significant reductions in indexing latency and faster ascending/descending search sorts, improving overall efficiency and performance.

These new features for OpenSearch Serverless are automatically enabled for all collections and are now available in 14 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (São Paulo), Canada (Central), and the AWS GovCloud (US-West) Region.

Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability.

To learn more about OpenSearch Serverless, see the documentation.
 

Amazon VPC Lattice is now available in 3 additional Regions

Amazon VPC Lattice is now available in 3 additional AWS Regions: Asia Pacific (Osaka), Asia Pacific (Hong Kong), and Middle East (Bahrain).

Amazon VPC Lattice is an application networking service that simplifies connecting, securing, and monitoring service-to-service communication. You can use Amazon VPC Lattice to facilitate cross-account and cross-VPC connectivity, as well as application layer load balancing for your workloads in a consistent way regardless of the underlying compute type – instances, containers, and serverless.

With this launch, Amazon VPC Lattice is now generally available in 21 AWS Regions. Please visit the AWS Region table for more information on AWS Regions and services. To learn more, visit the Amazon VPC Lattice homepage, the Amazon VPC Lattice User Guide, and the Frequently Asked Questions, or browse the most recent VPC Lattice blog posts.

Amazon Q in Connect adds personalized guidance for agents

Amazon Q in Connect, a generative-AI powered assistant for contact center agents, now recommends personalized guidance to agents using customer data from Amazon Connect and other third-party CRM systems. Amazon Q in Connect detects the customer's intent from the real-time voice or chat conversation and understands customer data to recommend what an agent should say or what action they should take.

For example, when a customer contacts a hotel to upgrade their room, Amazon Q in Connect analyzes the real-time conversation, identifies the customer's loyalty tier, and provides the agent with a step-by-step guide of upgrade options and discounts to offer the customer. With Amazon Q in Connect, contact centers can empower agents to provide a more personalized and efficient customer interaction while driving increased customer satisfaction.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.
 

Mountpoint for Amazon S3 CSI driver introduces new access controls for individual Kubernetes pods

The Mountpoint for Amazon S3 Container Storage Interface (CSI) driver now supports configuring distinct AWS Identity and Access Management (IAM) roles for individual Kubernetes pods. Built on Mountpoint for Amazon S3, the CSI driver presents an S3 bucket as a volume accessible by containers in Amazon Elastic Kubernetes Service (Amazon EKS) and self-managed Kubernetes clusters. Now, you can use IAM roles for each pod to restrict access to specific buckets or objects, without making changes to your applications.

Previously, you could configure an IAM role that the CSI driver used for all pods in your Kubernetes cluster. With this launch, you can further strengthen your application security when building multi-tenant environments by configuring the CSI driver to use individual IAM roles for each pod that attaches a volume. This means that you can run data-intensive jobs, like machine learning or media transcoding, across multiple pods while allowing each pod to access only the data it needs, providing data isolation between pods as a result.
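As a hedged sketch of selecting per-pod credentials, a PersistentVolume can opt into pod-level identity through the driver's volumeAttributes. The attribute names below (bucketName, authenticationSource) follow the driver's static-provisioning examples, and the bucket name is hypothetical; verify the names against your installed driver version:

```yaml
# Sketch: static-provisioning PersistentVolume that uses the pod's IAM role
# (not the driver's) when mounting the bucket.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi            # ignored by the driver; required by Kubernetes
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-example-volume   # any cluster-unique ID
    volumeAttributes:
      bucketName: my-bucket               # hypothetical bucket
      authenticationSource: pod           # use each pod's IAM role for S3 access
```

With this in place, the IAM role associated with the mounting pod's service account scopes what that pod can read or write in the bucket.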

Amazon EKS supports the Mountpoint for Amazon S3 CSI driver as an EKS add-on. You can install, configure, and update the CSI driver with just a few clicks in the Amazon EKS console, AWS Command Line Interface (AWS CLI), EKS Application Programming Interface (API), and AWS CloudFormation. To get started, follow the user guide.

Amazon Connect launches prompt customizations for Amazon Q in Connect

Amazon Q in Connect, a generative-AI powered assistant for contact center agents, now enables contact center supervisors to pre-configure LLM prompts to match your company's brand and business guidelines. Supervisors can tailor prompts to change Amazon Q in Connect's tone and behavior, incorporate specific company phrases, follow language guidelines, and designate certain "fixed" responses for situations requiring absolute consistency. For example, when a customer contacts a healthcare insurance provider, Amazon Q in Connect can be customized to be sensitive for use cases such as denied claims. Agents using step-by-step guides for the claim appeals process will be provided with empathetic phrasing and automated disclaimers for different types of medical advice. With Amazon Q in Connect, contact centers can empower agents to consistently represent the company's brand, reduce compliance risks, and increase customer satisfaction.


For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.
 

AWS Deadline Cloud now supports resubmitting jobs

Today, AWS announces support for resubmitting your Deadline Cloud jobs, via API, CLI, and within the Deadline Cloud monitor, so you can easily run jobs again with updated parameters. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design.

Resubmitting jobs makes it easy to run the same job with updated parameters. For example, you can submit a job to render a subset of testing frames, verify their output, then run the job again with the full frame range.

Resubmitting jobs is available in all AWS Regions where Deadline Cloud is available.

For more information, please visit the Deadline Cloud product page, and the Deadline Cloud User Guide.
 

Preview: Amazon Q Business now supports an integration with Smartsheet

Amazon Q Business now supports an integration with Smartsheet, the modern enterprise work management platform trusted by millions of people at companies across the globe. This connector makes it easy to synchronize data from your Smartsheet instance with your Amazon Q index. Once implemented, your employees can use Amazon Q Business to query their intelligent assistant for information about their Smartsheet projects and tasks.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Smartsheet, visit the Amazon Q Business connectors webpage. The new connector with Smartsheet is available in all AWS Regions where Q Business is available.
 

AWS Outposts supported in the AWS Europe (Spain) Region

AWS Outposts is now supported in the AWS Europe (Spain) Region. Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.

Organizations from startups to enterprises and the public sector in and outside of Spain can now connect their Outposts to this Region. Outposts allows customers to run workloads that need low latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.

To get started, visit the AWS Management Console. To learn more, refer to the User Guide and Product Overview.

AWS Security Hub launches 7 new security controls

AWS Security Hub has released 7 new security controls, increasing the total number of controls offered to 430. Security Hub now supports controls for new resource types, such as Amazon Simple Storage Service (S3) Multi-Region Access Points and Amazon Managed Streaming for Apache Kafka (MSK) Connect. Security Hub has also released a new control for Amazon GuardDuty EKS Runtime Monitoring. For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide.

To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without taking any additional action.


AWS CodePipeline introduces new general purpose compute action

AWS CodePipeline introduces the Commands action, which enables you to easily run shell commands as part of your pipeline execution. With the Commands action, you have access to a secure compute environment, backed by CodeBuild, to run the AWS CLI, third-party tools, or any shell commands. The Commands action runs on CodeBuild-managed, on-demand EC2 compute and uses an Amazon Linux 2023 standard 5.0 image.

Previously, if you wanted to run AWS CLI commands, third-party CLI commands, or simply invoke an API, you had to create a CodeBuild project, configure the project with the appropriate commands, and add a CodeBuild action to your pipeline to run the project. Now, you can simply add the Commands action to your pipeline and define one or more commands as part of the action configuration. Since Commands is like any other CodePipeline action, you can use the standard CodePipeline features of input/output artifacts and output variables.
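As a sketch, a Commands action in a pipeline definition might look like the fragment below (rendered as a Python dict; the field names and casing are assumptions to verify against the CodePipeline action reference):

```python
# Hypothetical Commands action declaration for a pipeline definition.
commands_action = {
    "name": "RunChecks",
    "actionTypeId": {
        "category": "Compute",
        "owner": "AWS",
        "provider": "Commands",
        "version": "1",
    },
    # One or more shell commands, run on CodeBuild-managed compute.
    "commands": [
        "aws sts get-caller-identity",
        "./scripts/run-smoke-tests.sh",   # hypothetical script in the input artifact
    ],
    "inputArtifacts": [{"name": "SourceOutput"}],
}
```

The commands run in order, and a non-zero exit code fails the action just as a failed CodeBuild build would.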

To learn more about using the Commands action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. The Commands action is available in all regions where AWS CodePipeline is supported.
 

Amazon Route 53 Resolver endpoints now support DNS-over-HTTPS (DoH) with Server Name Indication (SNI) validation

Starting today, you can provide Server Name Indication (SNI) with Route 53 Resolver endpoints for DNS-over-HTTPS (DoH), allowing you to specify the target server hostname for DNS query requests from your outbound endpoints to DoH servers that require SNI for TLS validation.

DoH on Amazon Route 53 Resolver endpoints allows you to encrypt DNS queries that pass through the endpoints, improving privacy by minimizing the visibility of the information exchanged through the queries. With this launch, you can now specify the hostname in your outbound endpoint configuration to perform TLS handshakes for your DNS requests from the outbound endpoints to the DoH server. Enabling SNI validation for your DoH Resolver endpoints also helps you meet regulatory and business compliance requirements, such as those described in the memorandum of the US Office of Management and Budget, where outbound DNS traffic must be addressed to Cybersecurity and Infrastructure Security Agency (CISA) Protective DNS servers that require SNI hostname validation for a successful TLS handshake.

Resolver endpoints support for DoH with SNI is available in all Regions where Route 53 is available, including the AWS GovCloud (US) Regions. Visit the AWS Region Table to see all AWS Regions where Amazon Route 53 is available.

You can get started by using the AWS Console or Route 53 API. For more information, visit the Route 53 Resolver product detail page and service documentation. For details on pricing, visit the pricing page.
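A minimal sketch of the new setting: the SNI hostname is supplied alongside the DoH target of a Resolver rule. The `ServerNameIndication` field name is drawn from the Route 53 Resolver API, but treat it, the IDs, and the hostname below as assumptions to confirm in the API reference.

```python
# Hypothetical target configuration for forwarding DoH queries through
# an outbound Resolver endpoint with SNI validation. Field names are
# assumptions based on the Route 53 Resolver API; verify before use.
target_ip = {
    "Ip": "203.0.113.10",      # DoH server address (documentation range)
    "Port": 443,
    "Protocol": "DoH",
    "ServerNameIndication": "doh.example.com",  # hostname sent in the TLS handshake
}

# The target would be applied to a rule, e.g.:
# boto3.client("route53resolver").update_resolver_rule(
#     ResolverRuleId="rslvr-rr-exampleid",
#     Config={"TargetIps": [target_ip]},
# )
```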
 

Amazon SageMaker JumpStart is now available in the AWS GovCloud (US-West and US-East) Regions

Amazon SageMaker JumpStart is now available in the AWS GovCloud (US) Regions. Public sector customers can easily deploy and fine-tune open-weight models through the SageMaker Python SDK.

Amazon SageMaker JumpStart is a machine learning (ML) hub that offers hundreds of pre-trained models and built-in algorithms to help you quickly get started with ML. Customers can discover hundreds of open-weight pre-trained models, such as Llama and Mistral, stored in AWS infrastructure, fine-tune them with their own data, and deploy them for cost-effective inference using the SageMaker Python SDK.

Amazon SageMaker JumpStart is now generally available in the AWS GovCloud (US-West and US-East) Regions. Please note that some models require instance types not yet available in the AWS GovCloud (US) Regions; these models will become usable once those instance types are available.

To learn more about using SageMaker JumpStart through SageMaker Python SDK, see the SageMaker Python SDK documentation. Available models can also be found in the documentation.
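As a sketch of the SDK workflow, a JumpStart model can be deployed with a few lines. The model ID and instance type below are illustrative assumptions; look up actual model IDs and GovCloud instance availability in the SageMaker JumpStart documentation.

```python
# Hypothetical deployment plan for a JumpStart model in GovCloud.
# The model_id and instance_type are examples, not guaranteed to be
# available; verify in the JumpStart model catalog.
deploy_plan = {
    "region": "us-gov-west-1",
    "model_id": "meta-textgeneration-llama-3-8b",  # example ID
    "instance_type": "ml.g5.2xlarge",              # availability varies by Region
}

# With the SageMaker Python SDK, the deployment would look like:
# from sagemaker.jumpstart.model import JumpStartModel
# model = JumpStartModel(model_id=deploy_plan["model_id"])
# predictor = model.deploy(instance_type=deploy_plan["instance_type"])
```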

AWS Application Composer is now AWS Infrastructure Composer

AWS Application Composer is now called AWS Infrastructure Composer. The new name emphasizes our capabilities in building infrastructure architectures.

Since its launch at re:Invent ’22, customers have told us how Application Composer’s simple drag-and-drop interface has helped accelerate the design of their serverless application architectures. Since the initial release, we have expanded support to any CloudFormation resource, empowering customers to build any required resource architecture. The new AWS Infrastructure Composer name reflects our focus on helping customers build any infrastructure with CloudFormation.

AWS Infrastructure Composer is available in all commercial regions and the AWS GovCloud (US) Regions.

Amazon Connect can now generate forecasts for workloads with as little as one contact

Amazon Connect can now generate forecasts for smaller workloads, with as little as one contact, making it easier for contact center managers to predict demand. This eliminates the need to manually adjust historical data to meet minimum data requirements. With reduced minimum data requirements, managers can generate forecasts for smaller-volume workloads than previously possible, making capacity planning and staffing easier.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
 

Amazon Connect Contact Lens supports new read-only permissions for reports and dashboards

Amazon Connect Contact Lens now allows users to save and publish reports and dashboards as read-only. When a report or dashboard is published as read-only, only the user who created it can edit it, while others can still view it or create a copy. For example, a contact center manager can configure a custom read-only dashboard and share it with the supervisors on their team to ensure they monitor the same metrics, while still allowing the supervisors to customize and save their own versions for further analysis.

This feature is available in all AWS Regions where Amazon Connect is offered. To learn more about read-only reports, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.

Amazon EC2 now supports Optimize CPUs post instance launch

Amazon EC2 now allows customers to modify an instance’s CPU options after launch. You can now modify the number of vCPUs and/or disable the hyperthreading of a stopped EC2 instance to save on vCPU-based licensing costs. In addition, an instance’s CPU options are now maintained when changing its instance type.

The Optimize CPUs feature allows customers to disable hyperthreading and reduce the number of vCPUs on an instance, resulting in a higher memory-to-vCPU ratio that helps customers save on vCPU-based licensing costs. This is particularly beneficial to customers who bring their own license (BYOL) for commercial database workloads, such as Microsoft SQL Server.

This feature is available in all commercial AWS Regions.

To get started, see CPU options in the Amazon EC2 User Guide. To learn more about the new API, visit the Amazon EC2 API Reference.
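A sketch of the post-launch modification: the operation name and its `CoreCount`/`ThreadsPerCore` parameters follow the Optimize CPUs conventions in the EC2 API, but confirm them in the EC2 API Reference; the instance ID is hypothetical.

```python
# Hypothetical request to shrink a stopped instance's CPU configuration.
# Parameter names are assumptions based on the EC2 Optimize CPUs API;
# verify against the EC2 API Reference.
cpu_options = {
    "InstanceId": "i-0123456789abcdef0",  # instance must be stopped
    "CoreCount": 4,                       # reduce active cores
    "ThreadsPerCore": 1,                  # 1 disables hyperthreading
}

# boto3.client("ec2").modify_instance_cpu_options(**cpu_options)
```

Disabling hyperthreading (one thread per core) is what raises the memory-to-vCPU ratio for vCPU-licensed database workloads.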
 

Amazon Connect now supports multi-day copy and paste of agent schedules

Amazon Connect now supports copying agent schedules across multiple days, making management of agent schedules more efficient. You can now copy multiple days’ shifts from one agent to another agent, or to the same agent, up to 14 days at a time. For example, if a new agent joins the team mid-month, you can quickly provide them with a schedule by copying up to 14 days of shifts from an existing agent’s schedule. Similarly, if an agent has a flexible working arrangement for a few weeks, you can edit their schedule for the first week and then copy it over to the remaining weeks. Multi-day copy of agent schedules improves manager productivity by reducing time spent on managing agent schedules.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
 

Amazon WorkSpaces now supports file transfer between WorkSpaces sessions and local devices

Amazon WorkSpaces is launching support for transferring files between a WorkSpaces Personal session and a local computer. This helps customers manage and share files seamlessly, increasing their productivity. This feature is supported on personal WorkSpaces that use the DCV streaming protocol with the Windows or Linux client applications, or web access.

With this launch, users can streamline their workflows with easier ways to organize, manage, edit, and share files across their devices and platforms. Files transferred to a WorkSpace are saved in a persistent storage folder. Amazon WorkSpaces also offers robust security measures: administrators can control whether users can upload files to or download files from WorkSpaces to protect your organization’s data.

This functionality is now available in all AWS Regions where Amazon WorkSpaces Personal is available. There are no additional WorkSpaces costs for using the file transfer functionality; however, uploaded files consume space on the user volume attached to the WorkSpace. Customers can increase the size of the user volumes attached to WorkSpaces at any time. Changing the volume size of a WorkSpace will affect the billing rate. See Amazon WorkSpaces pricing for more information.

To get started on the WorkSpaces file transfer function, see Configure file transfer for DCV WorkSpaces.
 

AWS Partner Central now supports association of an AWS Marketplace private offer to a launched opportunity

Today, AWS Partner Central has enhanced the APN Customer Engagements (ACE) Pipeline Manager by allowing AWS partners to link an AWS Marketplace private offer to a launched opportunity.

This feature gives AWS partners improved visibility into their AWS Marketplace transactions. By linking AWS Marketplace private offers to opportunities, partners can track deals from their co-selling pipeline all the way to customer offers. Additionally, partners can view their agreement information, such as agreement ID and creation date, in ACE Pipeline Manager, connected to the original customer opportunity.

Starting today, this feature is available globally for all AWS Partners who have linked their AWS Partner Central and AWS Marketplace accounts. To get started, log in to AWS Partner Central and review the ACE user guide.

AWS IoT Core removes TLS ALPN requirement and adds custom authorizer capabilities

Today, AWS IoT Core announces three new capabilities for domain configurations. Devices no longer need to rely on the Transport Layer Security (TLS) Application Layer Protocol Negotiation (ALPN) extension to determine authentication type and protocol, and developers can add additional X.509 client certificate validation to custom authentication workflows. Previously, devices selected an authentication type by connecting to a defined port and providing the chosen protocol via TLS ALPN. The new capability to configure authentication type and protocol purely based on the TLS Server Name Indication (SNI) extension makes it simpler to connect devices to the cloud without requiring TLS ALPN. This enables developers to migrate existing device fleets to AWS IoT Core without firmware updates or Amazon-specific TLS ALPN strings. The authentication type and protocol combination is assigned to an endpoint for all supported TCP ports of the custom domain.

Building on the above, AWS IoT Core added two additional authentication capabilities. First, Custom Authentication with X.509 Client Certificates allows customers to authenticate IoT devices using X.509 certificates and then add custom authentication logic as an additional layer of security. Second, Custom Client Certificate Validation allows customers to validate X.509 client certificates with a custom Lambda function. For example, developers can build custom certificate revocation checks, such as Online Certificate Status Protocol (OCSP) or Certificate Revocation List (CRL) lookups, before allowing a client to connect.

All three capabilities are available in all AWS Regions where AWS IoT Core is available, except the AWS GovCloud (US) Regions. Visit the developer guide to learn more about these features.
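A sketch of what a domain configuration combining these capabilities might look like. The parameter names (`authenticationType`, `applicationProtocol`, `clientCertificateConfig`) and enum values are assumptions modeled on the AWS IoT Core domain configuration API; the domain name and Lambda ARN are hypothetical.

```python
# Hypothetical domain configuration: authentication selected by SNI
# (no ALPN), X.509 certificates plus a custom authorizer, and a Lambda
# for custom certificate validation. All names below are assumptions
# to verify against the AWS IoT Core API reference.
domain_config = {
    "domainConfigurationName": "my-custom-domain",
    "authenticationType": "CUSTOM_AUTH_X509",  # X.509 cert + custom auth logic
    "applicationProtocol": "MQTT_WSS",         # fixed per endpoint, no ALPN required
    "clientCertificateConfig": {
        # Lambda invoked for custom checks such as OCSP/CRL lookups.
        "clientCertificateCallbackArn": (
            "arn:aws:lambda:us-east-1:123456789012:function:cert-check"
        ),
    },
}

# boto3.client("iot").update_domain_configuration(**domain_config)
```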

AWS B2B Data Interchange announces support for generating outbound X12 EDI

AWS B2B Data Interchange now supports outbound EDI transformation, enabling you to generate X12 EDI documents from JSON or XML data inputs. This new capability adds to B2B Data Interchange’s existing support for transforming inbound EDI documents and automatically generating EDI acknowledgements. With the ability to transform and generate X12 EDI documents up to 150 MB, you can now automate your bidirectional EDI workflows at scale on AWS.

The introduction of outbound EDI transformation establishes B2B Data Interchange as a comprehensive EDI service for conducting end-to-end transactions with your business partners. For example, healthcare payers can now process claims with claim payments, suppliers can confirm purchase orders with invoices, and logistics providers can respond to shipment requests with status notifications. B2B Data Interchange monitors specified prefixes in Amazon S3 to automatically process inbound and outbound EDI. Each generated outbound EDI document emits an Amazon EventBridge event, which can be used to automatically send the documents to your business partners using AWS Transfer Family’s SFTP and AS2 capabilities, or any other EDI connectivity solution.

Support for generating outbound X12 EDI is available in all AWS Regions where AWS B2B Data Interchange is available. To get started with building and running bidirectional, event-driven EDI workflows on B2B Data Interchange, take the self-paced workshop or deploy the CloudFormation template.
 

AWS Compute Optimizer now supports 80 new Amazon EC2 instance types

AWS Compute Optimizer now supports 80 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types. The newly supported instance types include the latest generation compute optimized instances (c7i-flex, c6id, c8g), memory optimized instances (r8g, x8g), storage optimized instances (i4i), and GPU-based instances (g5, g5g, g6, gr6, p4d, p4de, p5). This expands the total EC2 instance types supported by Compute Optimizer to 779.

By including support for the latest instance types with improved price-performance, Compute Optimizer helps customers identify additional savings and performance improvement opportunities. The newly supported c8g, r8g, and x8g EC2 instance types feature the new AWS Graviton4 processors, which offer 50% more cores, 160% more memory bandwidth, and up to 60% better performance than AWS Graviton2 processors. The c7i-flex instances, powered by 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids), offer 5% better price/performance compared to c7i instances.

For more information about the AWS Regions where Compute Optimizer is available, see AWS Region table.

For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, and AWS SDK.
 

AWS Cloud WAN and AWS Network Manager are now available in additional AWS Regions

With this launch, AWS Cloud WAN and AWS Network Manager are now available in the AWS Asia Pacific (Melbourne), Asia Pacific (Hyderabad), Europe (Spain), Europe (Zurich), Middle East (UAE), and Canada West (Calgary) Regions. Additionally, AWS Cloud WAN is available in the AWS Israel (Tel Aviv) Region.


With AWS Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks, allowing you to configure and manage different networks using the same technology. You can use your network policies to specify which of your Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to by using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The Cloud WAN central dashboard, powered by AWS Network Manager, generates a complete view of the network to help you monitor network health, security, and performance.

AWS Network Manager reduces the operational complexity of managing global networks across AWS and on-premises locations. It provides a single global view of your private network. You can visualize your global network in a topology diagram and monitor it using CloudWatch metrics and events for network topology changes, routing updates, and connection status updates.

To learn more about AWS Cloud WAN, see the product detail page and documentation. To learn more about AWS Network Manager, see the documentation.
 

Auto Scaling in AWS Glue interactive sessions is now generally available

Auto Scaling in AWS Glue interactive sessions is now generally available. AWS Glue interactive sessions with Glue versions 3.0 or higher can now dynamically scale resources up and down based on the workload. With Auto Scaling, you no longer need to worry about over-provisioning resources for sessions, spend time optimizing the number of workers, or pay for idle workers.

AWS Glue is a serverless data integration service that allows you to schedule and run data integration and extract, transform, and load (ETL) jobs or sessions without managing any computing infrastructure. AWS Glue allows users to configure the number and type of workers to utilize. AWS Glue Auto Scaling monitors each stage of the session run, turning workers off when they are idle and adding workers when additional parallel processing is possible. This simplifies the process of tuning resources and optimizing costs.

This feature is now available in all commercial AWS Regions, GovCloud (US-West), and China Regions where AWS Glue interactive sessions is available.

For more details, please refer to the Glue Auto Scaling blog post and visit our documentation.

Amazon Location Service is now available in AWS Europe (Spain) Region

Today, we are announcing the availability of Amazon Location Service in the AWS Europe (Spain) Region. Amazon Location Service is a location-based service that helps developers easily and securely add maps, place search and geocoding, route planning, and device tracking and geofencing capabilities to their applications. With Amazon Location Service, developers can start a new location project or migrate existing mapping service workloads to benefit from cost reduction, privacy protection, and ease of integration with other AWS services.

With this launch, Amazon Location Service is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Canada (Central), Europe (London), South America (São Paulo), AWS GovCloud (US-West), and AWS Europe (Spain). To learn more, please see the Amazon Location Service Getting Started page.

Amazon Aurora Serverless v2 now supports up to 256 ACUs

Amazon Aurora Serverless v2 now supports database capacity of up to 256 Aurora Capacity Units (ACUs). Aurora Serverless v2 measures capacity in ACUs where each ACU is a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking. You specify the capacity range and the database scales within this range to support your application’s needs.

With higher maximum capacity, customers can now use Aurora Serverless for even more demanding workloads. Instead of scaling up to 128 ACUs (256 GiB), the database can now scale up to 256 ACUs (512 GiB). You can get started with higher capacity on a new or existing cluster with just a few clicks in the AWS Management Console. For a new cluster, select the desired maximum capacity setting. For existing clusters, choose modify and update the maximum capacity setting. For existing instances that don’t allow capacity higher than 128 ACUs, add a new reader with the higher capacity to the existing cluster and fail over to it. 256 ACUs are supported on Aurora PostgreSQL 13.13+, 14.10+, 15.5+, 16.1+, and Aurora MySQL 3.06+.

Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the documentation, and get started by creating an Aurora Serverless v2 database using only a few steps in the AWS Management Console.
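The existing-cluster path can be sketched as a single API call. `ServerlessV2ScalingConfiguration` with `MinCapacity`/`MaxCapacity` follows the RDS ModifyDBCluster API; the cluster identifier is hypothetical, and the version prerequisites above still apply.

```python
# Hypothetical request raising an Aurora Serverless v2 cluster's
# maximum capacity to the new 256 ACU ceiling. Parameter names follow
# the RDS ModifyDBCluster API; verify before use.
scaling = {
    "DBClusterIdentifier": "my-aurora-cluster",
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,
        "MaxCapacity": 256,  # previously capped at 128 ACUs (256 GiB)
    },
}

# boto3.client("rds").modify_db_cluster(**scaling)
```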
 

Amazon Q Business is now HIPAA eligible

Amazon Q Business is now HIPAA (Health Insurance Portability and Accountability Act) eligible. Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems.

With Amazon Q Business now HIPAA eligible, healthcare and life sciences organizations, such as health insurance companies and healthcare providers, can use Amazon Q Business to run sensitive workloads regulated under the U.S. Health Insurance Portability and Accountability Act (HIPAA). AWS maintains a standards-based risk management program to ensure that HIPAA-eligible services specifically support HIPAA administrative, technical, and physical safeguards.

Amazon Q Business is HIPAA eligible in all AWS Regions where Amazon Q Business is supported. See the AWS Regional Services List for the most up-to-date availability information. To learn more about HIPAA eligible services, visit the webpage. To get started with Amazon Q Business, visit the product page.
 

Printer redirection and user selected regional settings now available on Amazon AppStream 2.0 multi-session fleets

Amazon AppStream 2.0 is helping enhance the end-user experience by introducing support for local printer redirection and user-selected regional settings on multi-session fleets. While these features were already available on single-session fleets, this launch extends these functionalities to multi-session fleets, helping administrators to leverage the cost benefits of the multi-session model while providing an enhanced end-user experience. By combining these enhancements with the existing advantages of multi-session fleets, AppStream 2.0 offers a comprehensive solution that helps balance cost-efficiency and user satisfaction.

With local printer redirection, AppStream 2.0 users can redirect print jobs from their streaming application to a printer that is connected to their local computer. No printer driver needs to be installed on the AppStream 2.0 streaming instance to enable users to print documents during their streaming sessions. Additionally, your users can now configure their streaming sessions to use regional settings. They can set the locale, and input method used by their applications in their streaming sessions. Each user's settings persist across all future sessions in the same AWS Region.

These features are available at no additional cost in all AWS Regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0.

To enable these features for your users, you must use an AppStream 2.0 image with an AppStream 2.0 agent released on or after September 18, 2024, or an image using Managed AppStream 2.0 image updates released on or after September 20, 2024.

Amazon AppStream 2.0 enables automatic time zone redirection for enhanced user experience

Amazon AppStream 2.0 now allows end users to enable automatic time zone redirection for application and desktop streaming sessions. With this new capability, AppStream 2.0 streaming sessions will automatically adjust to match the time zone setting of the end user's client device.

End users can still manually configure regional preferences such as time zone, language, and input method based on their location; automatic time zone redirection simply eliminates the need to set the time zone manually. By automatically redirecting the time zone, AppStream 2.0 provides an improved localized experience: streaming applications and desktops display the user's local time zone out of the box, without any manual configuration. This helps create a more intuitive experience for users across different global locations. Time zone redirection works independently of the AWS Region where the AppStream 2.0 fleet is deployed.

This feature is available to all customers using a web browser to connect to AppStream 2.0, at no additional cost, in all AWS Regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0.

To enable this feature for your users, you must use an AppStream 2.0 image with an AppStream 2.0 agent released on or after September 18, 2024, or an image using Managed AppStream 2.0 image updates released on or after September 20, 2024.

Amazon Timestream for InfluxDB now includes advanced configuration options

Amazon Timestream for InfluxDB now supports additional configuration options, providing you with more control over how the engine behaves and communicates with its clients. With today’s launch, Timestream for InfluxDB also introduces a feature that allows you to monitor instance CPU, memory, and disk utilization metrics directly from the AWS Management Console.

Timestream for InfluxDB offers the full feature set of the 2.7 open-source version of InfluxDB, the most popular open source time-series database engine, in a fully managed service with features like Multi-AZ high availability and enhanced durability. You can now configure the port used to access your InfluxDB instances, allowing for greater flexibility in your infrastructure setup. Additionally, over 20 new engine configuration parameters give you precise control over your instance's behavior. To get started, navigate to the Amazon Timestream Console and configure your instances according to your needs. Existing customers can also update their instances to take advantage of these new configuration options.

Amazon Timestream for InfluxDB is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Paris), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), Europe (Spain), and Middle East (UAE).

You can create an Amazon Timestream for InfluxDB instance from the Amazon Timestream console, the AWS Command Line Interface (CLI), AWS SDKs, or AWS CloudFormation. To learn more about Amazon Timestream for InfluxDB, visit the product page, documentation, and pricing page.
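A sketch of creating an instance on a custom port: the `port` parameter reflects today's launch, but all the parameter names and values here are assumptions to check against the `timestream-influxdb` API reference.

```python
# Hypothetical parameters for a Timestream for InfluxDB instance on a
# non-default port. Names and values are assumptions; verify against
# the timestream-influxdb API reference.
instance_params = {
    "name": "metrics-db",
    "dbInstanceType": "db.influx.medium",  # example instance class
    "port": 8186,                          # previously fixed to the default 8086
    "allocatedStorage": 100,               # GiB
}

# boto3.client("timestream-influxdb").create_db_instance(
#     **instance_params, ...)  # plus credentials and VPC settings
```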

Amazon Managed Service for Prometheus now supports Internet Protocol Version 6 (IPv6)

Amazon Managed Service for Prometheus now offers customers the option to use Internet Protocol version 6 (IPv6) addresses for their new and existing workspaces. Customers moving to IPv6 can simplify their network stack by running and operating their Amazon Managed Service for Prometheus workspaces on a network that supports both IPv4 and IPv6. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a popular Cloud Native Computing Foundation open-source project for monitoring and alerting on metrics from compute environments such as Amazon Elastic Kubernetes Service.

The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude, so customers will no longer need to manage overlapping address spaces in their VPCs. Customers can now connect to Amazon Managed Service for Prometheus APIs over IPv6, and can continue to connect over IPv4 if they do not use IPv6.

To learn more about best practices for configuring IPv6 in your environment, visit the whitepaper on IPv6 in AWS. Support for IPv6 on Amazon Managed Service for Prometheus is available in all Regions where the service is generally available. To learn more about Amazon Managed Service for Prometheus, visit the user guide or product page.

Amazon Virtual Private Cloud (VPC) now supports BYOIP and BYOASN in all AWS Local Zones

Starting today, Amazon VPC supports two key public IP address management features, Bring-Your-Own-IP (BYOIP) and Bring-Your-Own-ASN (BYOASN), in all AWS Local Zones. If your applications use trusted IP addresses and Autonomous System Numbers (ASNs) that your customers or partners have allowed in their networks, you can run these applications in AWS Local Zones without requiring your partners or customers to change their allow-lists.

The reachability of many workloads, including host-managed VPNs, proxies, and telecommunication network functions, depends on an organization’s IP address and ASN. With BYOIP, you can now assign your public IPs to workloads in AWS Local Zones, and with BYOASN, you can advertise them using your own ASN. This ensures your workloads remain reachable by customers or partners that have allowlisted your IP addresses and ASN.

The BYOIP and BYOASN features are available in all AWS Local Zones, and all AWS Regions except China (Beijing, operated by Sinnet) and China (Ningxia, operated by NWCD).

For more information about this feature, review the EC2 BYOIP documentation and IPAM tutorials.
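As a sketch, advertising a BYOIP range from a specific Local Zone might look like the following. `NetworkBorderGroup` is the EC2 mechanism for targeting a Local Zone, but treat the parameter names, the CIDR, and the border group value as assumptions to confirm in the BYOIP documentation.

```python
# Hypothetical request to advertise a provisioned BYOIP range from a
# Local Zone. Parameter names are assumptions based on the EC2 BYOIP
# API; verify before use.
advertise_params = {
    "Cidr": "198.51.100.0/24",                # example/documentation range
    "NetworkBorderGroup": "us-west-2-lax-1",  # example Los Angeles Local Zone group
}

# boto3.client("ec2").advertise_byoip_cidr(**advertise_params)
```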
 

AWS Snowball Edge Storage Optimized 210TB device is available in three new regions

AWS Snowball Edge Storage Optimized 210TB device is now available in three additional regions: Asia Pacific (Mumbai), South America (São Paulo), and Asia Pacific (Seoul). The AWS Snowball Edge Storage Optimized 210TB features storage capacity of 210TB per device and high-performance NVMe storage, enabling customers to quickly complete large data migrations.

For the majority of data migration workloads, customers should use AWS DataSync, a secure online service that automates and accelerates moving data between on-premises storage and AWS Storage services. When bandwidth is limited or a connection is intermittent, customers can use AWS Snowball Edge Storage Optimized 210TB for offline data migration.

The AWS Snowball Edge Storage Optimized 210TB device supports two pricing options for data migration: less than 100TB, and from 100TB to 210TB pricing. To learn more, visit the AWS Snowball Pricing, Snow product page and Snow Family documentation.

Amazon Bedrock now available in the Asia Pacific (Seoul) and US East (Ohio) Regions

Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Seoul) and US East (Ohio) Regions to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as Amazon, via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
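A sketch of calling a model in the newly supported Seoul Region via the Bedrock Converse API. The model ID below is an example assumption; check per-Region model availability in the Bedrock documentation.

```python
# Hypothetical Converse API request in the Asia Pacific (Seoul) Region.
# The model ID is an example; verify Region/model availability.
request = {
    "region": "ap-northeast-2",  # Asia Pacific (Seoul)
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "messages": [{"role": "user", "content": [{"text": "Hello!"}]}],
}

# client = boto3.client("bedrock-runtime", region_name=request["region"])
# response = client.converse(modelId=request["modelId"],
#                            messages=request["messages"])
```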

New VMware Strategic Partner Incentive (SPI) for Managed Services in AWS Partner Central

Today, Amazon Web Services, Inc. (AWS) announces a new VMware SPI for Managed Services as part of the Migration Acceleration Program (MAP) in AWS Partner Central. Eligible AWS Partners who also provide managed services post-migration can now leverage the VMware SPI for Managed Services to accelerate VMware customer migration opportunities.

This new VMware SPI for Managed Services is available through the enhanced MAP template in AWS Partner Central, which provides better speed to market with fewer AWS approval stages. With this enhancement, the AWS Partner Funding Portal (APFP) automatically calculates the eligible VMware SPI for Managed Services, improving overall partner productivity by eliminating manual steps.

The VMware SPI for Managed Services is now available to all AWS Partners on the Services path at the Validated stage or higher, including all AWS Migration and Modernization Competency Partners. To learn more, review the 2024 APFP user guide.

Amazon Redshift launches RA3.large instances

Amazon Redshift launches RA3.large, a new smaller size in the RA3 node type with 2 vCPU and 16 GiB of memory. RA3.large gives you more flexibility in the compute options you can choose from based on your workload requirements.

Amazon Redshift RA3.large offers all the innovation of Redshift Managed Storage (RMS), including scaling and paying for compute and storage independently, data sharing, write operations support for concurrency scaling, Zero-ETL, and Multi-AZ. Together with the already available RA3 sizes, RA3.16xlarge, RA3.4xlarge, and RA3.xlplus, the introduction of RA3.large gives you even more compute sizing options to address diverse workload and price-performance requirements.

To get started with RA3.large, you can create a cluster with the AWS Management Console or the create cluster API. To upgrade a cluster from your Redshift DC2 environment to an RA3 cluster, you can take a snapshot of your existing cluster and restore it to an RA3 cluster, or resize your existing cluster into a new RA3 cluster. To learn more about the RA3 node type, see the cluster management guide and the 'Upgrading to RA3 node type' documentation. You can find more information on pricing by visiting the Amazon Redshift pricing page.
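A minimal sketch of the create cluster path with boto3 follows. The cluster identifier and credentials are placeholders, and the "ra3.large" node type string is an assumption based on the naming of the other RA3 sizes; verify it against the cluster management guide.

```python
# Sketch: CreateCluster parameters for the new RA3.large node type.
def ra3_large_cluster_params(cluster_id, nodes):
    """Build keyword arguments for the Redshift CreateCluster call."""
    return {
        "ClusterIdentifier": cluster_id,
        "NodeType": "ra3.large",          # new smaller size: 2 vCPU, 16 GiB
        "NumberOfNodes": nodes,
        "MasterUsername": "awsuser",      # placeholder
        "MasterUserPassword": "<replace-me>",  # placeholder
    }

params = ra3_large_cluster_params("analytics-ra3", 2)

# With credentials configured:
# import boto3
# boto3.client("redshift").create_cluster(**params)
```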

RA3.large is generally available in all commercial regions where the RA3 node type is available. For more details on regional availability, see the 'RA3 node type availability' documentation.

AWS announces Reserved Nodes flexibility for Amazon ElastiCache

Today we’re announcing enhancements to Amazon ElastiCache Reserved Nodes that make them flexible and easier to use, helping you get the most out of your reserved nodes discount. Reserved nodes provide you with a significant discount compared to on-demand node prices, enabling you to optimize costs based on your expected usage.

Previously, you needed to purchase a reservation for a specific node type (e.g., cache.r7g.xlarge) and were only eligible for a discount on that type, with no flexibility. With this feature, ElastiCache reserved nodes offer size flexibility within an instance family (or node family) and AWS Region. This means your existing discounted reserved node rate is applied automatically to usage of all sizes in the same node family. For example, if you purchase an r7g.xlarge reserved node and need to scale to a larger node such as r7g.2xlarge, your discounted rate automatically covers 50% of your r7g.2xlarge usage in the same AWS Region. Size flexibility reduces the time you need to spend managing your reserved nodes, and you can get the most out of your discount even if your capacity needs change.
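The 50% figure follows from proportional size units within a family. The unit table below mirrors the normalized-units convention used elsewhere in AWS (e.g., EC2 and RDS reserved instances) and is an assumption for illustration, not official ElastiCache documentation:

```python
# Sketch: how a size-flexible reservation covers usage of a different size.
# Unit values double with each size step within a node family (assumption).
SIZE_UNITS = {"large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32}

def reserved_coverage(reserved_size, used_size):
    """Fraction of the used node's rate covered by the reservation."""
    return min(1.0, SIZE_UNITS[reserved_size] / SIZE_UNITS[used_size])

# An r7g.xlarge reservation (8 units) applied to r7g.2xlarge usage (16 units):
print(reserved_coverage("xlarge", "2xlarge"))  # 0.5, i.e. 50% discounted
```

A reservation larger than the node in use simply covers that node in full, which is why the function caps coverage at 1.0.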

Amazon ElastiCache reserved node size flexibility is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To learn more, visit Amazon ElastiCache, the ElastiCache user guides and our blog post.
 

Amazon S3 adds Service Quotas support for S3 general purpose buckets

You can now manage your Amazon S3 general purpose bucket quotas in Service Quotas. Using Service Quotas, you can view the total number of buckets in an AWS account, compare that number to your bucket quota, and request a service quota increase.

You can get started using the Amazon S3 page on the Service Quotas console, AWS SDK, or AWS CLI. Service Quotas support for S3 is available in the US East (N. Virginia) and China (Beijing) AWS Regions. To learn more about using Service Quotas with S3 buckets, visit the S3 User Guide.
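A sketch of the SDK path follows. Since this announcement does not give the quota code, the helper filters the S3 quota list by name; the exact quota name string is an assumption, so the sample data here is illustrative only.

```python
# Sketch: locating the S3 bucket quota via the Service Quotas API.
def find_quota(quotas, name_fragment):
    """Return the first quota whose QuotaName contains name_fragment."""
    for q in quotas:
        if name_fragment.lower() in q.get("QuotaName", "").lower():
            return q
    return None

# With credentials configured, the list would come from the API:
# import boto3
# quotas = boto3.client("service-quotas").list_service_quotas(
#     ServiceCode="s3")["Quotas"]

# Illustrative stand-in data (quota name and value are assumptions):
sample = [{"QuotaName": "General purpose buckets", "Value": 10000.0}]
bucket_quota = find_quota(sample, "bucket")
```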
 

AWS Chatbot adds support to centrally manage access to AWS accounts from Slack and Microsoft Teams with AWS Organizations

AWS announces general availability of AWS Organizations support in AWS Chatbot. AWS customers can now centrally govern access to their accounts from Slack and Microsoft Teams with AWS Organizations.

This launch introduces a chatbot management policy type in AWS Organizations to control access to your organization's accounts from chat channels. Using Service Control Policies (SCPs), customers can also globally enforce permission limits on CLI commands originating from chat channels.

With this launch, customers can use chatbot policies and multi-account management services in AWS Organizations to determine which permission models, chat applications, and chat workspaces can be used to access their accounts. For example, you can restrict access to production accounts from chat channels in designated workspaces or teams. Customers can also use SCPs to place guardrails on the CLI commands executed from chat channels; for example, you can deny all rds:delete-db-cluster CLI actions originating from chat channels.
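One way to express the rds example as an SCP is sketched below. It assumes your Chatbot channel IAM roles share a recognizable path (here /chatbot/), which is an illustrative convention rather than a requirement; see the linked documentation for the condition model AWS Chatbot actually supports.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDbClusterDeletionFromChatRoles",
      "Effect": "Deny",
      "Action": "rds:DeleteDBCluster",
      "Resource": "*",
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/chatbot/*"
        }
      }
    }
  ]
}
```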

AWS Organizations support in AWS Chatbot is available at no additional cost in all AWS Regions where AWS Chatbot is offered. Visit the Securing your AWS organization in AWS Chatbot documentation and blog to learn more.

Amazon EMR Serverless introduces Job Run Concurrency and Queuing controls

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce job run admission control on Amazon EMR Serverless with support for job run concurrency and queuing controls.

Job run concurrency and queuing lets you configure the maximum number of concurrent job runs for an application; all other submitted job runs are automatically queued. This prevents job run failures caused when a spike in job run submissions exceeds API limits, or when resources are exhausted because an account's or application's maximum concurrent vCPU limit or an underlying subnet's IP address limit is reached. Job run queuing also simplifies job run management by eliminating the need to build complex queuing systems to retry jobs that fail due to limit errors (e.g., maximum concurrent vCPUs, subnet IP address limits). With this feature, jobs are automatically queued and processed as concurrency slots become available, ensuring efficient resource utilization and preventing job failures.
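A sketch of configuring these limits on an application follows. The scheduler configuration field names are assumptions to verify against the EMR Serverless API reference, and the application ID is a placeholder.

```python
# Sketch: application-level admission control for EMR Serverless job runs.
def scheduler_config(max_concurrent_runs, queue_timeout_minutes):
    """Build the scheduler settings for an UpdateApplication call."""
    return {
        "schedulerConfiguration": {
            "maxConcurrentRuns": max_concurrent_runs,      # admission limit
            "queueTimeoutMinutes": queue_timeout_minutes,  # max time in queue
        }
    }

update = {"applicationId": "00example123", **scheduler_config(5, 60)}

# With credentials configured:
# import boto3
# boto3.client("emr-serverless").update_application(**update)
```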

Amazon EMR Serverless job run concurrency and queuing is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) Regions and excluding China Regions. To learn more, visit Job concurrency and queuing in the EMR Serverless documentation.
 

NICE DCV renames to Amazon DCV and releases version 2024.0 with support for Ubuntu 24.04

Amazon announces DCV version 2024.0. In this latest release, NICE DCV has been renamed to Amazon DCV. The new DCV version introduces several enhancements, including support for Ubuntu 24.04 and enabling the QUIC UDP protocol by default. Amazon DCV is a high-performance remote display protocol designed to help customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.

Amazon DCV version 2024.0 introduces the following updates, features, and improvements:

  • Renames to Amazon DCV. NICE DCV is now Amazon DCV. Additionally, Amazon has consolidated the WorkSpaces Streaming Protocol (WSP), used in Amazon WorkSpaces, with Amazon DCV. The renaming does not affect customer workloads, and there is no change to folder paths or internal tooling names.
  • Supports Ubuntu 24.04, the latest LTS version of Ubuntu with the latest security patches and updates, providing improved stability and reliability. Additionally, the DCV client on Ubuntu 24.04 now natively supports Wayland, providing better performance through more efficient graphical rendering.
  • Enables the QUIC UDP protocol by default, allowing end users to receive an optimized streaming experience.
  • Adds the ability to blank the Linux host screen when a remote user is connected to the Linux server in a console session, preventing users physically present near the server from seeing the screen and interacting with the remote session using the input devices connected to the host.

For more information, please see the Amazon DCV 2024.0 release notes or visit the Amazon DCV webpage to get started with DCV.

Amazon Bedrock Knowledge Bases now provides option to stop ingestion jobs

Today, Amazon Bedrock Knowledge Bases is announcing the general availability of the stop ingestion API. This new API offers you greater control over data ingestion workflows by allowing you to stop an ongoing ingestion job that you no longer want to continue.

Previously, you had to wait for the full completion of an ingestion job, even when you no longer wanted to ingest from the data source or needed to make other adjustments. With the introduction of the new "StopIngestionJob" API, you can now stop an in-progress ingestion job with a single API call. For example, you can use this feature to quickly stop an ingestion job you accidentally initiated, or to change the documents in your data source. This enhanced flexibility enables you to rapidly respond to changing requirements and optimize your costs.
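A minimal sketch of calling the new API with boto3 follows, assuming the bedrock-agent client's stop_ingestion_job operation takes the three identifiers below (all values are placeholders):

```python
# Sketch: stopping an in-progress Knowledge Bases ingestion job.
def stop_ingestion_params(kb_id, data_source_id, job_id):
    """Build keyword arguments identifying the job to stop."""
    return {
        "knowledgeBaseId": kb_id,
        "dataSourceId": data_source_id,
        "ingestionJobId": job_id,
    }

params = stop_ingestion_params("KB123EXAMPLE", "DS123EXAMPLE", "JOB123EXAMPLE")

# With credentials configured:
# import boto3
# boto3.client("bedrock-agent").stop_ingestion_job(**params)
```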

This new capability is available across all AWS Regions where Amazon Bedrock Knowledge Bases is available.

To learn more about stopping ingestion jobs and the other capabilities of Amazon Bedrock Knowledge Bases, please refer to the documentation.

Amazon Data Firehose delivers data streams into Apache Iceberg format tables in Amazon S3

Amazon Data Firehose (Firehose) can now deliver data streams into Apache Iceberg tables in Amazon S3.

Firehose enables customers to acquire, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this new feature, Firehose integrates with Apache Iceberg, so customers can deliver data streams directly into Apache Iceberg tables in their Amazon S3 data lake. Firehose can acquire data streams from Kinesis Data Streams, Amazon MSK, or the Direct PUT API, and is also integrated to acquire streams from AWS services such as AWS WAF web ACL logs, Amazon CloudWatch Logs, Amazon VPC Flow Logs, AWS IoT, Amazon SNS, Amazon API Gateway access logs, and many others listed here. Customers can stream data from any of these sources directly into Apache Iceberg tables in Amazon S3 and avoid multi-step processes. Firehose is serverless, so customers can simply set up a stream by configuring the source and destination properties, and pay based on bytes processed.

The new feature also allows customers to route records in a data stream to different Apache Iceberg tables based on the content of the incoming record. To route records to different tables, customers can configure routing rules using JSON expressions. Additionally, customers can specify if the incoming record should apply a row-level update or delete operation in the destination Apache Iceberg table, and automate processing for data correction and right to forget scenarios.

To get started, visit Amazon Data Firehose documentation, pricing, and console.

Amazon MSK APIs now support AWS PrivateLink

Amazon Managed Streaming for Apache Kafka (Amazon MSK) APIs now come with AWS PrivateLink support, allowing you to invoke Amazon MSK APIs from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet.

By default, all communication between your Apache Kafka clients and your Amazon MSK provisioned clusters is private, and your data never traverses the internet. With this launch, clients can also invoke MSK APIs via a private endpoint. This allows client applications with strict security requirements to perform MSK-specific actions, such as fetching bootstrap connection strings or describing cluster details, without needing to communicate over a public connection.
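Consuming the feature means creating an interface VPC endpoint for the MSK API. In the sketch below, the service name format follows the usual PrivateLink convention and is an assumption to confirm in the AWS PrivateLink documentation; the VPC and subnet IDs are placeholders.

```python
# Sketch: an interface VPC endpoint for the Amazon MSK control-plane API.
def msk_api_endpoint_params(region, vpc_id, subnet_ids):
    """Build keyword arguments for an EC2 CreateVpcEndpoint call."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.kafka",  # assumed format
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "PrivateDnsEnabled": True,  # resolve MSK API names to the endpoint
    }

params = msk_api_endpoint_params("us-east-1", "vpc-0abc", ["subnet-0abc"])

# With credentials configured:
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**params)
```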

AWS PrivateLink support for Amazon MSK is available in all AWS Regions where Amazon MSK is available. To get started, follow the directions provided in the AWS PrivateLink documentation. To learn more about Amazon MSK, visit the Amazon MSK documentation.
 

Amazon Connect launches the ability to initiate outbound SMS contacts

Amazon Connect now supports the ability to initiate outbound SMS contacts, enabling you to help increase customer satisfaction by engaging your customers on their preferred communication channel. You can now deliver proactive SMS experiences for scenarios such as post-contact surveys, appointment reminders, and service updates, allowing customers to respond at their convenience. Additionally, you can offer customers the option to switch to SMS while waiting in a call queue, eliminating their hold time.

To get started, add the new Send message block to a contact flow or use the new StartOutboundChatContact API to initiate outbound SMS contacts. This feature is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London). To learn more and get started, please refer to the documentation for the Send message flow block and StartOutboundChatContact API.
 

AWS Incident Detection and Response now available in Japanese

Starting today, AWS Incident Detection and Response supports incident engagement in Japanese language. AWS Incident Detection and Response offers AWS Enterprise Support customers proactive engagement and incident management for critical workloads. With AWS Incident Detection and Response, AWS Incident Management Engineers (IMEs) are available 24/7 to detect incidents and engage with you within five minutes of an alarm from your workloads, providing guidance for mitigation and recovery.

This feature allows AWS Enterprise Support customers to interact with Japanese-speaking Incident Management Engineers (IMEs) who will provide proactive engagement and incident management for critical incidents. To use this service in Japanese, customers must select Japanese as their preferred language during workload onboarding.

For more details, including information on supported regions and additional specifics about the AWS Incident Detection and Response service, please visit the product page.
 

AWS Announces AWS re:Post Agent, a generative AI-powered virtual assistant

AWS re:Post launches re:Post Agent, a generative AI-powered assistant that's designed to enhance customer interactions by offering intelligent and near real-time responses on re:Post. re:Post Agent provides the first response to questions in the re:Post community. Cloud developers can now get general technical guidance faster to successfully build and operate their cloud workloads.

With re:Post Agent, you have a generative AI companion augmented by the community that expands the available AWS knowledge. Community experts can earn points to build their reputation status by reviewing answers from re:Post Agent.

Visit AWS re:Post to collaborate with re:Post Agent and experience the power of generative AI-driven technical guidance.

Amazon AppStream 2.0 increases application settings storage limit

Amazon AppStream 2.0 has expanded the default size limit for application settings persistence from 1GB to 5GB. This increase allows end users to store more application data and settings with no manual intervention and without impacting the performance or session setup time.

Application settings persistence allows users' customizations and configurations to persist across sessions. When enabled, AppStream 2.0 automatically saves changes to a Virtual Hard Disk (VHD) stored in an S3 bucket unique to your account and AWS Region. This helps in enhancing the user experience by enabling users to resume work where they left off. With expanded default storage size and performance improvements, AppStream 2.0 makes it easier than ever for end users to retain their application data, settings, and customizations across sessions. The VHD syncs efficiently even for multi-gigabyte files due to optimizations in data syncing and access times.

This feature is available at no additional cost in all regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0.

To enable this feature for your users, you must use an AppStream 2.0 image that uses the AppStream 2.0 agent released on or after September 18, 2024, or an image that uses managed AppStream 2.0 image updates released on or after September 20, 2024.
 

Amazon EventBridge announces new event delivery latency metric for Event Buses

Amazon EventBridge Event Bus now provides an end-to-end event delivery latency metric in Amazon CloudWatch that tracks the duration between event ingestion and successful delivery to the targets on your Event Bus. This new IngestionToInvocationSuccessLatency metric allows you to detect and respond to event processing delays caused by under-performing, under-scaled, or unresponsive targets.

Amazon EventBridge Event Bus is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up rules to determine where to send your events, allowing for applications to react to changes in your systems as they occur. With the new IngestionToInvocationSuccessLatency metric you can now better monitor and understand event delivery latency to your targets, increasing the observability of your event-driven architecture.
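Reading the new metric is an ordinary CloudWatch query, sketched below. The "AWS/Events" namespace is EventBridge's standard metric namespace; the EventBusName dimension is an assumption to verify against the metric's documentation.

```python
# Sketch: querying the new end-to-end delivery latency metric.
from datetime import datetime, timedelta, timezone

def latency_metric_query(bus_name, minutes=60):
    """Build keyword arguments for a CloudWatch GetMetricStatistics call."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Events",
        "MetricName": "IngestionToInvocationSuccessLatency",
        "Dimensions": [{"Name": "EventBusName", "Value": bus_name}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 300,                        # 5-minute buckets
        "Statistics": ["Average", "Maximum"],
    }

query = latency_metric_query("default")

# With credentials configured:
# import boto3
# data = boto3.client("cloudwatch").get_metric_statistics(**query)["Datapoints"]
```

An alarm on the Maximum statistic of this metric is a natural way to catch an unresponsive target before downstream consumers notice.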

Support for the new end-to-end latency metric for Event Buses is now available in all commercial AWS Regions. To learn more about the new IngestionToInvocationSuccessLatency metric for Amazon EventBridge Event Buses, please read our blog post and documentation.

Launch Amazon CloudWatch Internet Monitor from Amazon Network Load Balancer console

By adding your Network Load Balancer (NLB) to a monitor, you can gain improved visibility into your application's internet performance and availability using Amazon CloudWatch Internet Monitor. You can now create or associate a monitor for an NLB directly when you create an NLB in the AWS Management console. You can create a monitor for the load balancer, or add the load balancer to an existing monitor, directly from the Integrations tab on the console.

With a monitor, you can get detailed metrics about your application's internet traffic that goes through a load balancer, with the ability to drill down into specific locations and internet service providers (ISPs). You also get health event alerts for internet issues that affect your application customers, and can review specific recommendations for improving the internet performance and availability for your application. After you create a monitor, you can customize it at any time by visiting the Internet Monitor console in Amazon CloudWatch.

To learn more about how you can use and customize a monitor, see the Internet Monitor user guide documentation.

Amazon Inspector enhances engine for Lambda standard scanning

Today, Amazon Inspector announced an upgrade to the engine powering its Lambda standard scanning. This upgrade will provide you with a more comprehensive view of the vulnerabilities in the third-party dependencies used in your Lambda functions and associated Lambda layers in your environment. With the launch of this enhanced scanning engine, you will benefit from these capabilities without any disruption to your existing workflows. Existing customers can expect to see some findings closed as the new engine re-evaluates your existing resources to better assess risks, while also surfacing new vulnerabilities.

Amazon Inspector is a vulnerability management service that continually scans AWS workloads including Amazon EC2 instances, container images, and AWS Lambda functions for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS organization.


This improved version of Lambda standard scanning is available in all commercial and AWS GovCloud (US) Regions where Amazon Inspector is available.

To learn more and get started with continual vulnerability scanning of your workloads, visit the Amazon Inspector documentation.

AWS CloudShell extends most recent capabilities to all commercial Regions

AWS CloudShell now offers Amazon Virtual Private Cloud (VPC) support, improved environment start times, and support for Docker environments in all commercial Regions where CloudShell is live. Previously, these features were only available in a limited set of CloudShell's commercial Regions. These features increase the productivity of CloudShell customers and enable a consistent experience across all CloudShell commercial Regions.

CloudShell VPC support allows you to create CloudShell environments in a VPC, which enables you to use CloudShell securely within the same subnet as other resources in your VPC without the need for additional network configuration. Start times have been improved, enabling customers to begin using CloudShell more quickly. With the Docker integration, CloudShell users can initialize Docker containers on demand and connect to them to prototype or deploy Docker based resources with AWS CDK Toolkit.

These features are now supported in all AWS Commercial Regions where AWS CloudShell is available today. For more information about the AWS Regions where AWS CloudShell is available, see the AWS Region table.

Learn more about these expanded capabilities in the CloudShell Documentation, including specific entries on VPC Support and the Docker integration.

Amazon Bedrock Model Evaluation now available in the AWS GovCloud (US-West) Region

Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. Model evaluation provides built-in curated datasets or you can bring your own datasets.

Amazon Bedrock's interactive interface guides you through model evaluation. You simply choose automatic evaluation, select the task type and metrics, and upload your prompt dataset. Amazon Bedrock then runs evaluations and generates a report, so you can easily understand how the model performed against the metrics you selected and choose the right one for your use case. Using this report in conjunction with the cost and latency metrics from Amazon Bedrock, you can select the model with the required quality, cost, and latency tradeoff.

Model Evaluation on Amazon Bedrock is now generally available in AWS GovCloud (US-West) in addition to many commercial Regions.

To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.

Amazon Aurora supports PostgreSQL 16.4, 15.8, 14.13, 13.16, and 12.20

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL versions 16.4, 15.8, 14.13, 13.16, and 12.20. These releases contain product improvements and bug fixes made by the PostgreSQL community, along with Aurora-specific security and feature improvements. These releases also contain new Babelfish features and improvements. As a reminder, Amazon Aurora PostgreSQL 12 reaches the end of standard support on February 28, 2025. You can either upgrade to a newer major version or continue to run Amazon Aurora PostgreSQL 12 past the end of standard support date with RDS Extended Support.

These releases are available in all commercial AWS regions and AWS GovCloud (US) Regions, except China regions. You can initiate a minor version upgrade by modifying your DB cluster. Please review the Aurora documentation to learn more. To learn which versions support each feature, head to our feature parity page.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
 

Amazon SES adds HTTPS open tracking for custom domains

Amazon Simple Email Service (SES) now supports HTTPS for tracking open and click events when using custom domains. Using HTTPS helps meet security compliance requirements and reduces the chances of email delivery issues with mailbox providers that reject non-secure links. The new feature provides the flexibility to configure HTTPS as mandatory for both open and click tracking, or make it optional based on the protocol of the links in your email.

Previously, HTTPS was only available for click event tracking with custom domains. If you required HTTPS for tracking both open and clicks events, you were limited to the default tracking approach where the links in your emails were wrapped with an Amazon-provided domain that immediately redirected recipients to the intended destination. Now, you can secure the tracking of both open and click events while providing a trustworthy and branded experience for your recipients by using your own custom domain. This can help increase deliverability metrics and protect your sender reputation by isolating it from the reputation of other senders.
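A sketch of configuring this through the SES v2 tracking options follows. The HttpsPolicy field and its enum values are assumptions about how the mandatory-versus-optional choice is expressed; check them against the SES documentation linked below. The configuration set name and domain are placeholders.

```python
# Sketch: requiring HTTPS for open and click tracking on a custom domain.
ALLOWED_POLICIES = {"REQUIRE", "REQUIRE_OPEN_ONLY", "OPTIONAL"}  # assumed enum

def tracking_options(redirect_domain, https_policy="REQUIRE"):
    """Build keyword arguments for configuring custom-domain tracking."""
    if https_policy not in ALLOWED_POLICIES:
        raise ValueError(f"unknown policy: {https_policy}")
    return {
        "ConfigurationSetName": "my-config-set",   # placeholder
        "CustomRedirectDomain": redirect_domain,
        "HttpsPolicy": https_policy,               # mandatory vs. optional HTTPS
    }

opts = tracking_options("track.example.com")

# With credentials configured:
# import boto3
# boto3.client("sesv2").put_configuration_set_tracking_options(**opts)
```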

You can enable HTTPS for open and click tracking with custom domains in all AWS Regions where Amazon SES is offered.

To learn more, see the Amazon SES documentation for configuring custom domains for open and click tracking.
 

Announcing sample-based partitioning for AWS HealthOmics variant stores

We are excited to announce that AWS HealthOmics variant stores are now optimized to improve sample-based queries, saving time and query costs for customers. AWS HealthOmics helps customers accelerate scientific breakthroughs by providing a fully managed service designed to handle bioinformatics and drug discovery workflows and storage at any scale. With this release, any new variant store customers create will be automatically partitioned by sample.

This feature automatically partitions data loaded into a variant store by sample information. Because of this partitioning, any analysis that includes sample-level filtering no longer needs to scan the full dataset, leading to lower query costs and faster results. Sample-based queries are common when using clinical outcome or phenotypic information to perform filtering.

Sample partitioning is now supported for all new variant stores created in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). To get started using the variant store, see the AWS HealthOmics documentation.
 

Amazon Redshift announces mTLS support for Amazon MSK

Amazon Redshift streaming ingestion already supports AWS IAM authentication. With this announcement, we are extending the available authentication methods with mutual transport layer security (mTLS) authentication between an Amazon Redshift provisioned cluster or serverless workgroup and an Amazon Managed Streaming for Apache Kafka (MSK) provisioned or serverless cluster.

mTLS is an industry standard for authentication that provides the means for a server to authenticate the client it's sending information to, and for the client to authenticate the server. The benefit of using mTLS is a trusted authentication method in which each party (client and server) exchanges a certificate issued by mutually trusted certificate authorities. This is a common compliance requirement for applications in industries such as financial services, retail, government, and healthcare.

mTLS authentication is available starting with Amazon Redshift patch 184 release in all AWS regions where Amazon Redshift and Amazon MSK are currently available. See AWS service availability by region for more information.

To learn more about using mTLS authentication with Amazon Redshift streaming, please refer to the Amazon MSK and mTLS sub-sections of the Amazon Redshift streaming documentation.
 

Amazon Q in QuickSight now generates data stories that are personalized to users

Amazon Q in QuickSight announces personalization in data stories. A capability of Amazon Q in QuickSight, data stories helps users generate visually compelling documents and presentations that provide insights, highlight key findings, and recommend actionable next steps. With the addition of personalization to data stories, the generated narratives are tailored to the user and leverage employee location and job role to provide commentary that is more specific to the user’s organization.

Amazon Q in QuickSight brings the power of Generative Business Intelligence to customers, enabling them to leverage natural language capabilities of Amazon Q to quickly extract insights from data, make better business decisions, and accelerate the work of business users. Personalization is automatically enabled for data stories and uses your organization’s employee profile data, without any additional setup. Amazon Q in QuickSight sources employee profile information from AWS IAM Identity Center that is connected to your organization’s identity provider.

Personalization in data stories is initially available in the US East (N. Virginia) and US West (Oregon) AWS Regions. For more information, see Amazon QuickSight User Guide.
 

Amazon Aurora MySQL now supports RDS Data API

Amazon Aurora MySQL-Compatible Edition now supports a redesigned RDS Data API for Aurora Serverless v2 and Aurora provisioned database instances. You can now access these Aurora clusters via a secure HTTP endpoint and run SQL statements without the use of database drivers and without managing connections. This follows the launch of Data API for Amazon Aurora PostgreSQL-Compatible Edition for Aurora Serverless v2 and Aurora provisioned database instances last year.

Data API was originally only available for single instance Aurora Serverless v1 clusters with a 1,000 request per second (RPS) rate limit. Based on customer feedback, Data API has now been redesigned for increased scalability. Data API will not impose a rate limit on requests made to Aurora Serverless v2 and Aurora provisioned clusters.

Data API eliminates the use of drivers and improves application scalability by automatically pooling and sharing database connections (connection pooling) rather than requiring customers to manage connections. Customers can call Data API via AWS SDK and CLI. Data API also enables access to Aurora databases via AWS AppSync GraphQL APIs. API commands supported in the redesigned Data API are backwards compatible with Data API for Aurora Serverless v1 for easy customer application migrations.
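A minimal sketch of running SQL over HTTP with the Data API and boto3 follows; no MySQL driver or connection management is involved. The cluster ARN, secret ARN, and database name are placeholders.

```python
# Sketch: driverless SQL against Aurora MySQL via the RDS Data API.
def data_api_request(cluster_arn, secret_arn, sql):
    """Build keyword arguments for an ExecuteStatement call."""
    return {
        "resourceArn": cluster_arn,  # Aurora Serverless v2 or provisioned cluster
        "secretArn": secret_arn,     # Secrets Manager credentials for the DB
        "database": "mydb",          # placeholder
        "sql": sql,
    }

req = data_api_request(
    "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-mysql",
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-creds",
    "SELECT id, name FROM users LIMIT 10",
)

# With credentials configured:
# import boto3
# records = boto3.client("rds-data").execute_statement(**req)["records"]
```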

Data API supports Aurora MySQL 3.07 and higher versions in 14 regions. Customers currently using Data API for Aurora Serverless v1 are encouraged to migrate to Aurora Serverless v2 to take advantage of the redesigned Data API. To learn more, read the documentation.
 

AWS CodePipeline introduces pipeline variable check rule for stage level condition

AWS CodePipeline introduces pipeline variable check, a new rule that customers can use in a stage level condition to gate a pipeline execution in V2 type pipelines. You can use this rule with any condition that is evaluated before entering a stage, before exiting a stage when all actions in the stage have completed successfully, or when any action in the stage has failed. With the variable check rule, you can evaluate a pipeline variable or an output variable from a prior action against a threshold to determine whether the condition succeeds or fails. For example, you can check whether an output variable from a CodeBuild action has a certain value to determine if the pipeline execution should enter a stage.

To learn more about using the variable check rule in stage level conditions in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. The stage level conditions feature is available in all regions where AWS CodePipeline is supported.
 

Amazon Timestream for InfluxDB is now available in the Jakarta, Milan, UAE, and Spain AWS Regions

You can now use Amazon Timestream for InfluxDB in the Asia Pacific (Jakarta), Europe (Milan), Middle East (UAE) and Europe (Spain) AWS regions. Timestream for InfluxDB makes it easy for application developers and DevOps teams to run fully managed InfluxDB databases on AWS for real-time time-series applications using open-source APIs.

Timestream for InfluxDB offers the full feature set available in the InfluxDB 2.7 release of the open-source version, and adds deployment options with Multi-AZ high availability and enhanced durability. For high availability, Timestream for InfluxDB allows you to automatically create a primary database instance and synchronously replicate the data to an instance in a different Availability Zone. When it detects a failure, Amazon Timestream automatically fails over to a standby instance without manual intervention. To migrate to Timestream for InfluxDB from a self-managed InfluxDB instance, you can use our sample migration script by following this guide.

With the latest release, customers can use Amazon Timestream for InfluxDB in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Paris), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), Europe (Spain), and Middle East (UAE). To get started with Amazon Timestream, visit our product page.
 

Announcing availability of AWS Outposts in Kuwait

AWS Outposts can now be shipped and installed at your data center and on-premises locations in Kuwait.

AWS Outposts is a family of fully managed solutions that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience. Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, and migration of applications with local system interdependencies. Outposts can also help meet data residency requirements. Outposts is available in a variety of form factors, from 1U and 2U Outposts servers to 42U Outposts racks, and multiple rack deployments.

With the availability of Outposts in Kuwait, you can use AWS services to run your workloads and data in country in your on-premises facilities and connect to your nearest AWS Region.

To learn more about Outposts, read the product overview and user guide. For the most updated list of countries and territories where Outposts is supported, check out the Outposts rack FAQs page and the Outposts servers FAQs page.
 

Amazon MemoryDB is now available in the AWS Europe (Spain) region

Amazon MemoryDB is a fully managed, Redis OSS-compatible database for in-memory performance and multi-AZ durability. Customers in Europe (Spain) can now use MemoryDB as a primary database for use cases that require ultra-fast performance and durable storage, such as payment card analytics, message streaming between microservices, and IoT event processing. With Amazon MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. Amazon MemoryDB also stores data durably across multiple Availability Zones (AZs) using a Multi-AZ transactional log to enable fast failover, database recovery, and node restarts. Delivering both in-memory performance and Multi-AZ durability, Amazon MemoryDB can be used as a high-performance primary database for your microservices applications, eliminating the need to separately manage both a cache and a durable database.

To get started, you can create an Amazon MemoryDB cluster in minutes through the AWS Management Console, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK). Read more about MemoryDB in this blog post, and visit the MemoryDB webpage for access to the latest webinars, tutorials, and demos. For pricing and regional availability, please refer to the Amazon MemoryDB pricing page.
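
As a sketch, cluster creation with the AWS SDK for Python might look like the following (the cluster name, node type, ACL, and subnet group below are placeholder assumptions; `eu-south-2` is the Europe (Spain) region code):

```python
# Sketch: creating a MemoryDB cluster in Europe (Spain) with boto3.
# All resource names here are placeholders for your own environment.

params = {
    "ClusterName": "payments-db",
    "NodeType": "db.r7g.large",
    "ACLName": "open-access",        # access control list governing Redis OSS auth
    "NumShards": 1,
    "NumReplicasPerShard": 2,        # replicas in other AZs for Multi-AZ durability
    "SubnetGroupName": "my-subnet-group",
}

def create_cluster(params, region="eu-south-2"):
    # Requires boto3 and AWS credentials for the Europe (Spain) region.
    import boto3
    return boto3.client("memorydb", region_name=region).create_cluster(**params)
```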

Application Discovery Service Agentless Collector now supports Amazon Linux 2023

Today, we are excited to announce that the Application Discovery Service Agentless Collector now runs on Amazon Linux 2023 (AL2023). AL2023 offers long-term support with access to the latest Linux security updates.

The Agentless Collector is deployed as a virtual appliance within an on-premises data center, allowing one install to monitor hundreds of servers. With the Agentless Collector, you can configure the discovery tool in a matter of minutes. The data can then be used in AWS Migration Hub to explore recommended Amazon EC2 instances, or in AWS Database Migration Service to explore recommended Amazon RDS instances.

The Agentless Collector on AL2023 (version 2) is now generally available, and can be used in all AWS Regions where AWS Application Discovery Service is available. To learn more, visit our user guide.

Amazon RDS Performance Insights now supports queries run through Data API

Amazon RDS (Relational Database Service) Performance Insights now allows customers to monitor queries run through the RDS Data API for Aurora PostgreSQL clusters. The RDS Data API provides an HTTP endpoint to run SQL statements on an Amazon Aurora DB cluster.

With this launch, customers are now able to use Performance Insights to monitor the impact of the queries run through the RDS Data API on their database performance. Additionally, customers can identify these queries and their related statistics by slicing the database load metric using the host name dimension, and filtering for 'RDS Data API'.

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.

To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.

Amazon RDS for Oracle now supports Oracle Management Agent version 13.5.0.0.v2 for Oracle Enterprise Manager Cloud Control 13cR5

Amazon RDS for Oracle now supports Oracle Management Agent (OMA) version 13.5.0.0.v2 for Oracle Enterprise Manager (OEM) Cloud Control 13c Release 5. OEM 13c offers web-based tools to monitor and manage your Oracle databases. Amazon RDS for Oracle installs OMA, which communicates with your Oracle Management Service (OMS) to provide monitoring information. Customers running OMS version 13.5 update 23 can now manage databases by installing OMA 13.5.0.0.v2.

To enable version 13.5.0.0.v2 of OMA for OEM 13cR5, navigate to "Option Groups" in the AWS Management Console, add the "OEM_AGENT" option to a new or existing option group, and set AGENT_VERSION to "13.5.0.0.v2". You will also need to configure option settings, including the OMS hostname (or IP), port, and agent registration password, to allow OMA on your Amazon RDS for Oracle database instances to communicate with your existing Oracle Management Service (OMS) stack. To learn more, please refer to the Amazon RDS for Oracle documentation.
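
The same change can be scripted. A minimal boto3 sketch using the RDS ModifyOptionGroup API (the option group name, OMS host, and password are placeholders, and the option setting names are assumptions drawn from the OEM_AGENT option documentation):

```python
# Sketch: adding the OEM_AGENT option at version 13.5.0.0.v2 to an option group.

oem_agent_option = {
    "OptionName": "OEM_AGENT",
    "OptionVersion": "13.5.0.0.v2",
    "Port": 3872,  # port the management agent listens on
    "OptionSettings": [
        {"Name": "OMS_HOST", "Value": "oms.example.com"},
        {"Name": "OMS_PORT", "Value": "4903"},
        {"Name": "AGENT_REGISTRATION_PASSWORD", "Value": "REPLACE_ME"},
    ],
}

def add_oem_agent(option_group_name):
    # Requires boto3 and AWS credentials with RDS permissions.
    import boto3
    return boto3.client("rds").modify_option_group(
        OptionGroupName=option_group_name,
        OptionsToInclude=[oem_agent_option],
        ApplyImmediately=True,
    )
```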

Amazon RDS for Oracle makes it easy to set up, operate, and scale Oracle Database deployments in the cloud. See Amazon RDS for Oracle Database Pricing for regional availability.
 

AWS ParallelCluster 3.11 now available with login node enhancements

AWS ParallelCluster 3.11 is now generally available. Key features of this release include support for NICE DCV and custom action scripts on login nodes. Use custom action scripts to automate the setup and configuration of login nodes to meet your organization's specific needs, such as installing additional software, configuring settings, or running custom commands. Add custom action scripts by uploading them to an S3 bucket and specifying their paths in the ParallelCluster YAML configuration file. Other important features in this release include:

  1. Support for pyxis and enroot for simplified container image management and the efficient execution of container-based HPC and ML/AI workloads using Slurm.
  2. Support for multiple login node pools where the login nodes of each pool can be configured with a specific Amazon EC2 instance type to best fit their specific use case.
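
Putting these together, a login node pool with DCV and a custom action script might be described by a configuration fragment like the following (an illustrative sketch; the pool name, bucket path, and key name are placeholders, and the field names should be checked against the ParallelCluster 3.11 configuration reference):

```yaml
# Illustrative fragment of a ParallelCluster 3.11 cluster configuration.
LoginNodes:
  Pools:
    - Name: interactive          # each pool can use a different instance type
      InstanceType: t3.large
      Count: 2
      Ssh:
        KeyName: my-key
      Dcv:
        Enabled: true            # NICE DCV on login nodes, new in 3.11
      CustomActions:
        OnNodeConfigured:
          Script: s3://my-bucket/setup-login-node.sh
```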

For more details on the release, review the AWS ParallelCluster 3.11.0 release notes.

AWS ParallelCluster is a fully-supported and maintained open-source cluster management tool that enables R&D customers and their IT administrators to operate high-performance computing (HPC) clusters on AWS. AWS ParallelCluster is designed to automatically and securely provision cloud resources into elastically-scaling HPC clusters capable of running scientific, engineering, and machine-learning (ML/AI) workloads at scale on AWS.

AWS ParallelCluster is available at no additional charge in the AWS Regions listed here, and you pay only for the AWS resources needed to run your applications. To learn more about launching HPC clusters on AWS, visit the AWS ParallelCluster User Guide. To start using ParallelCluster, see the installation instructions for ParallelCluster UI and CLI.

Amazon MWAA now supports Apache Airflow version 2.10

You can now create Apache Airflow version 2.10 environments on Amazon Managed Workflows for Apache Airflow (MWAA). Apache Airflow 2.10 is the latest minor release of the popular open-source tool that helps customers author, schedule, and monitor workflows.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. Apache Airflow 2.10 introduces several notable enhancements, such as a new Dark Mode for improved user experience, especially in low-light environments; dynamic dataset scheduling for flexible workflow management; and new task-level metrics for enhanced visibility into resource utilization. This update also includes important security updates and bug fixes that enhance the security and reliability of your workflows.

You can launch a new Apache Airflow 2.10 environment on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about Apache Airflow 2.10 visit the Amazon MWAA documentation, and the Apache Airflow 2.10 change log in the Apache Airflow documentation.
 

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.31

Kubernetes version 1.31 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon Elastic Kubernetes Service (EKS) and Amazon EKS Distro to run Kubernetes version 1.31. Starting today, you can create new EKS clusters using version 1.31 and upgrade existing clusters to version 1.31 using the EKS console, the eksctl command line interface, or through an infrastructure-as-code tool.
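
As a sketch, a new 1.31 cluster can be described in an eksctl configuration file like the following (the cluster name, region, and node group settings are placeholders):

```yaml
# cluster.yaml -- create with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-west-2
  version: "1.31"
managedNodeGroups:
  - name: ng-1
    instanceType: m6i.large
    desiredCapacity: 2
```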

Kubernetes version 1.31 introduces several key improvements, including stable support for AppArmor security modules, storing timestamps for persistent volume phase transitions, and the beta VolumeAttributeClass API for modifying mutable properties of persistent volumes managed by compatible drivers, such as the Amazon EBS CSI driver starting with v1.35.0. To learn more about the changes in Kubernetes version 1.31, see our documentation and the Kubernetes project release notes.

EKS now supports Kubernetes version 1.31 in all the AWS Regions where EKS is available, including the AWS GovCloud (US) Regions.

You can learn more about the Kubernetes versions available on EKS and instructions to update your cluster to version 1.31 by visiting EKS documentation. EKS Distro builds of Kubernetes version 1.31 are available through ECR Public Gallery and GitHub. Learn more about the EKS version lifecycle policies in the documentation.

Amazon CloudWatch Natural Language Query Generation is now available in 7 additional regions

Amazon CloudWatch announces the general availability of natural language query generation powered by generative AI for Logs Insights and Metrics Insights in 7 additional regions: Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), and US East (Ohio). This feature enables you to quickly generate queries in the context of your logs and metrics data using plain language so that you can accelerate gathering insights from your observability data without needing extensive knowledge of the query language.

Query Generator simplifies your CloudWatch Logs and Metrics Insights experience through natural language querying. You can ask questions in plain English, such as “Show me the 10 slowest Lambda requests in the last 24 hours” and it will generate the appropriate query or refine any existing query in the query window. The generated queries automatically adjust the time ranges for queries that require data within a specified period.

Query Generator is hosted in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt) regions. For other supported regions, the feature makes cross-region calls to US regions to generate queries. To learn more, view documentation.

To access the feature, click on “Query generator” in the CloudWatch Logs Insights or Metrics Insights console. For more information and examples, click “Info” in the help panel. There is no charge for using Query Generator. Any queries executed in Logs Insights or Metrics Insights are subject to standard CloudWatch pricing. To learn more, visit our getting started guide.

AWS Lambda now supports SnapStart for Java functions in the AWS GovCloud (US) Regions

Starting today, AWS Lambda SnapStart for Java functions is generally available in the AWS GovCloud (US) Regions. AWS Lambda SnapStart for Java delivers up to 10x faster function startup performance at no extra cost, making it easier for you to build highly responsive and scalable Java applications using AWS Lambda without having to provision resources or spend time and effort implementing complex performance optimizations.

For latency-sensitive applications where you want to support unpredictable bursts of traffic, high and outlier startup latencies—known as cold starts—can cause delays in your users’ experience. Lambda SnapStart offers improved startup times by initializing the function’s code ahead of time, taking a snapshot of the initialized execution environment, and caching it. When the function is invoked and subsequently scales up, Lambda SnapStart resumes new execution environments from the cached snapshot instead of initializing them from scratch, significantly improving startup latency. Lambda SnapStart is ideal for applications such as synchronous APIs, interactive microservices, or data processing.

You can activate SnapStart for new or existing Java-based Lambda functions running on Amazon Corretto 11, 17, and 21 using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDK, and AWS Cloud Development Kit (AWS CDK).
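
As a minimal sketch with boto3 (the function name is a placeholder), enabling SnapStart is a configuration update followed by publishing a version, since snapshots apply to published versions:

```python
# Sketch: enabling SnapStart on an existing Java function and publishing a version.

snapstart_config = {
    "FunctionName": "my-java-function",
    "SnapStart": {"ApplyOn": "PublishedVersions"},
}

def enable_snapstart(config):
    # Requires boto3 and AWS credentials with Lambda permissions.
    import boto3
    client = boto3.client("lambda")
    client.update_function_configuration(**config)
    # Newly published versions are initialized, snapshotted, and cached;
    # invocations of the version resume from the cached snapshot.
    return client.publish_version(FunctionName=config["FunctionName"])
```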

For more information on Lambda SnapStart, see the documentation and the launch blog post. To learn more about Lambda, see the Lambda developer guide.
 

PostgreSQL 17.0 is now available in Amazon RDS Database preview environment

Amazon RDS for PostgreSQL 17.0 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 17.0 in the Amazon RDS Database Preview Environment with the benefits of a fully managed database.

PostgreSQL 17 includes updates to vacuuming that reduce memory usage, improve time to finish vacuuming, and show progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for `JSON_TABLE` features that can convert JSON to a standard PostgreSQL table. The `MERGE` command now supports the `RETURNING` clause, letting you further work with modified rows. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. Please refer to the PostgreSQL community announcement for more details.
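
As a sketch of the two SQL/JSON and `MERGE` features above (table names are hypothetical, and the statements assume a PostgreSQL 17 instance):

```sql
-- JSON_TABLE: convert a JSON array into a regular PostgreSQL result set
SELECT *
FROM json_table(
  '[{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]',
  '$[*]'
  COLUMNS (id int PATH '$.id', name text PATH '$.name')
) AS t;

-- MERGE ... RETURNING: inspect the rows a merge inserted or updated
MERGE INTO accounts a
USING staging s ON a.id = s.id
WHEN MATCHED THEN UPDATE SET balance = s.balance
WHEN NOT MATCHED THEN INSERT (id, balance) VALUES (s.id, s.balance)
RETURNING merge_action(), a.*;
```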

Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the Preview Environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the Preview Environment.

Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.

Amazon FSx for Lustre provides additional performance metrics and an enhanced monitoring dashboard

Amazon FSx for Lustre, a service that provides high-performance, cost-effective, and scalable file storage for compute workloads, now provides additional performance metrics for improved visibility into file system activity and an enhanced monitoring dashboard with performance insights and recommendations. You can use the new Amazon CloudWatch metrics and dashboard to right-size your file systems and optimize performance and costs.

Previously, you could use performance metrics to monitor the file system storage capacity, throughput, and IOPS delivered by the storage system, the primary performance characteristics for most workloads. Now, using additional performance metrics that include server- and disk-level resource utilization, you can optimize performance and costs across an even wider range of compute- and GPU-intensive workloads, such as machine learning, high performance computing (HPC), video processing, and financial modeling. The enhanced monitoring dashboard provides actionable performance recommendations. For example, if your workload is network throughput-limited, you can increase the file system throughput performance with a few clicks.

These additional performance metrics and updated monitoring dashboard are available now, at no additional cost, for all new and existing file systems in all commercial AWS Regions and the AWS GovCloud (US) Regions.

To learn more, see monitoring with Amazon CloudWatch.

Amazon Kinesis Data Streams announces support for Attribute-Based Access Control (ABAC)

Amazon Kinesis Data Streams announces support for attribute-based access control (ABAC) using stream tags, enabling customers to enhance their overall security postures with a scalable access control solution. Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store data streams at any scale. ABAC is an authorization strategy that defines access permissions based on tags which can be attached to IAM resources, such as IAM users and roles, and to AWS resources for fine-grained access control.

ABAC support for Kinesis Data Streams makes it simple for you to give granular access to developers without requiring a policy update when a user or project is added, removed or updated. With ABAC support for Kinesis Data Streams, IAM policies can be used to allow or deny specific Kinesis Data Streams API actions when the IAM principal's tags match the tags on a data stream.

Getting started with ABAC for Kinesis Data Streams is easy. Kinesis Data Streams supports using stream tags and attaching them to IAM policies that allow or deny access to a data stream based on your tags. You can use the AWS APIs, the AWS CLI, or the AWS Management Console to tag your resources.
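
As a sketch, the common ABAC pattern of matching a principal tag against a resource tag might look like the following IAM policy (the tag key `team`, the account ID, and the action list are illustrative placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecord",
        "kinesis:PutRecords",
        "kinesis:DescribeStreamSummary"
      ],
      "Resource": "arn:aws:kinesis:*:111122223333:stream/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
        }
      }
    }
  ]
}
```

With this single statement, a principal tagged `team=payments` can access only the streams tagged `team=payments`; adding a new stream or developer requires tagging, not a policy update.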

Llama 3.2 generative AI models now available in Amazon Bedrock

The Llama 3.2 collection of models is now available in Amazon Bedrock. Llama 3.2 represents Meta’s latest advancement in large language models (LLMs). Llama 3.2 models are offered in various sizes, from small and medium-sized multimodal models (11B and 90B parameters) capable of sophisticated reasoning tasks, including multimodal support for high-resolution images, to lightweight text-only 1B and 3B parameter models suitable for edge devices. Llama 3.2 is the first Llama model to support vision tasks, with a new model architecture that integrates image encoder representations into the language model.

In addition to the existing text-capable Llama 3.1 8B, 70B, and 405B models, Llama 3.2 supports multimodal use cases. You can now use four new Llama 3.2 models (90B, 11B, 3B, and 1B) from Meta in Amazon Bedrock to unlock the next generation of AI possibilities. With a focus on responsible innovation and system-level safety, Llama 3.2 models help you build and deploy cutting-edge generative AI applications, using Llama in Amazon Bedrock to enable new innovations such as image reasoning, and they are also more accessible for edge applications. The new models are also designed to be more efficient for AI workloads, with reduced latency and improved performance, making them suitable for a wide range of applications.

Meta’s Llama 3.2 90B and 11B models are available in Amazon Bedrock in the US West (Oregon) Region, and in the US East (Ohio, N. Virginia) Regions via cross-region inference. Llama 3.2 1B and 3B models are available in the US West (Oregon) and Europe (Frankfurt) Regions, and in the US East (Ohio, N. Virginia) and Europe (Ireland, Paris) Regions via cross-region inference. To learn more, read the launch blog, Llama product page, and documentation. To get started with Llama 3.2 in Amazon Bedrock, visit the Amazon Bedrock console.

AWS announces general availability for Security Group Referencing on AWS Transit Gateway

AWS announces the general availability of Security Group Referencing across VPCs connected by AWS Transit Gateway (TGW). With this capability, customers can simplify management of security groups and gain a better security posture for their TGW-based networks.

Customers can configure security groups (SGs) by specifying a list of rules that allow network traffic based on criteria such as IP CIDRs, prefix lists, ports, and SG references. Until now, customers could not use SG references for controlling traffic between VPCs connected via TGW. Security Group Referencing allows customers to specify other SGs as references, or matching criteria, in inbound security rules to allow instance-to-instance traffic. With this capability, customers do not need to reconfigure security rules as applications scale up or down or when their IP addresses change. Rules with SG references also provide higher scale, since a single rule can cover thousands of instances, and help customers avoid exceeding SG rule or ENI limits.

Security Group Referencing on TGW is available in all AWS Regions where Transit Gateway is available. You can enable this feature using the AWS Management Console, AWS Command Line Interface, or AWS SDKs. There is no additional charge for using Security Group Referencing on TGW. For more information, see the AWS Transit Gateway product, pricing, and documentation pages.

AWS CloudTrail launches network activity events for VPC endpoints (preview)

With the launch of AWS CloudTrail network activity events for VPC endpoints, you now have additional visibility into AWS API activity that traverses your VPC endpoints, enabling you to strengthen your data perimeter and implement better detective controls. At preview launch, you can enable network activity events for VPC endpoints for four AWS services: Amazon EC2, AWS Key Management Service (AWS KMS), AWS Secrets Manager, and AWS CloudTrail.

With network activity events for VPC endpoints, you can view details of who is accessing resources within your network giving you greater ability to identify and respond to malicious or unauthorized actions in your data perimeter. For example, as the VPC endpoint owner, you can view logs of actions that were denied due to VPC endpoint policies or use these events to validate the impact of updating existing policies.

You can turn on logging of network activity events for your VPC endpoints using the AWS CloudTrail console, AWS CLI, or SDKs. When creating a new trail or event data store, or editing an existing one, you can select network activity events for the supported services that you wish to monitor; you can log all API calls or only the access-denied calls, and you can use advanced event selectors for additional filtering controls.
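
As an illustrative sketch, an advanced event selector that captures only denied KMS calls through VPC endpoints might look like the following JSON (the selector name is a placeholder, and the `errorCode` value shown is an assumption to be confirmed against the CloudTrail documentation):

```json
{
  "Name": "Denied KMS calls via VPC endpoints",
  "FieldSelectors": [
    { "Field": "eventCategory", "Equals": ["NetworkActivity"] },
    { "Field": "eventSource",   "Equals": ["kms.amazonaws.com"] },
    { "Field": "errorCode",     "Equals": ["VpceAccessDenied"] }
  ]
}
```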

Network activity events for VPC endpoints are available in preview in all commercial AWS Regions. Please refer to CloudTrail pricing to learn more about network activity events pricing. To learn more about this feature and get started, please refer to the documentation.

Announcing the new Resources widget on myApplications

Today, we are excited to announce the launch of the new Resources widget on the myApplications dashboard, providing a view of the resources in your applications on AWS.

Using the new Resources widget, you can quickly search and discover application resources in the myApplications dashboard. Start by searching for your application in the Applications widget on Console Home and click to open the application dashboard. From the dashboard you can view the Resources widget to see the list of resources that power your application. You can also search keywords to further focus on your application’s resources.

To access the Resources widget, make sure you have AWS Resource Explorer turned on, and visit myApplications by signing into AWS Management Console. This widget is available in all public AWS Regions.

Share AWS End User Messaging SMS resources across multiple AWS accounts

You can now use AWS Resource Access Manager (RAM) to share the following SMS resources: phone numbers, sender IDs, phone pools, and opt-out lists in AWS End User Messaging SMS, also referred to as sms-voice. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications. AWS RAM is a service that enables AWS customers to securely share resources across AWS accounts. With AWS RAM, you can also share resources within organizational units (OUs) in AWS Organizations. Sharing SMS resources in End User Messaging SMS can help you reduce the number of phone numbers your organization requires, saving you the time and cost it takes to register numbers. AWS RAM is a centralized and controlled way to share those resources and provides a step-by-step guide to scope resource policies without requiring you to write them yourself.

To learn more and get started, see here.

AWS Serverless Application Repository now supports AWS PrivateLink

AWS Serverless Application Repository now supports AWS PrivateLink to connect to AWS Serverless Application Repository through an interface VPC endpoint. You can now connect directly to the AWS Serverless Application Repository using AWS PrivateLink in your virtual private cloud (VPC) instead of connecting over the internet.

When you use AWS PrivateLink, communication between your VPC and AWS Serverless Application Repository is conducted entirely within the AWS network, which can provide greater security and protect your sensitive information. An AWS PrivateLink endpoint connects your VPC directly to AWS Serverless Application Repository. The instances in your VPC don't need public IP addresses to communicate with the AWS Serverless Application Repository API. To use AWS Serverless Application Repository through your VPC, you have two options. One is to connect from an instance that is inside your VPC. The other is to connect your private network to your VPC by using an AWS VPN option or AWS Direct Connect. You can create an AWS PrivateLink endpoint to connect to AWS Serverless Application Repository using the AWS Management Console or AWS Command Line Interface (AWS CLI) commands. For more information, see Creating an Interface Endpoint.

AWS Serverless Application Repository support for AWS PrivateLink is available in all AWS regions where AWS Serverless Application Repository is available.

Amazon SNS now delivers SMS text messages via AWS End User Messaging

Amazon Simple Notification Service (SNS) announces integration with AWS End User Messaging for the delivery of SMS messages. Starting today, SNS customers can start using new features like SMS resource management, two-way messaging, granular resource permissions, country block rules, and centralized billing for all AWS SMS messaging without making any changes to configurations or the global AWS SMS network used by SNS.

Amazon SNS is a fully managed pub/sub service that provides one-to-many message delivery to various endpoints, including AWS Lambda, Amazon SQS, Amazon Data Firehose, mobile devices via AWS End User Messaging and mobile push, as well as email. AWS End User Messaging’s SMS capabilities provide resilient and flexible APIs to deliver SMS at a global scale with capabilities like number fail-over with phone pools, multi-media messaging (MMS), and the SMS simulator.

If you already send SMS via Amazon SNS APIs, you do not need to take any action. Existing resources such as phone numbers or sender IDs that you own have been configured with the required permissions. You will need to grant Amazon SNS permission to send to any new phone number you request after September 24, 2024.

To learn more, visit the Amazon SNS Developer Guide or the AWS End User Messaging SMS User Guide.

Amazon EC2 G6 instances now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are available in the Europe (Frankfurt, London), Asia Pacific (Tokyo, Malaysia), and Canada (Central) Regions. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.


Amazon EC2 G6 instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, London, Spain), Asia Pacific (Tokyo, Malaysia), and Canada (Central) Regions. Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans.

To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G6 instance page.
 

Introducing Amazon EC2 C8g and M8g Instances

AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) C8g instances and Amazon EC2 M8g instances. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance than Graviton3-based instances. C8g instances are ideal for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning inference, and ad serving. M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge) and memory than Graviton3-based instances. They are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge.

C8g and M8g instances are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Frankfurt).

To learn more, see Amazon EC2 C8g instances and Amazon EC2 M8g instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.
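The size-dependent networking support described above can be captured in a small helper when planning capacity. This is an illustrative sketch encoding only the size rules stated in this announcement; it is not an AWS API, and the bare metal size names are assumptions.

```python
# Sketch: which C8g/M8g sizes get EFA and ENA Express support, per the
# size rules in this announcement. Purely illustrative; not an AWS API.
# The bare metal size names below are assumed for illustration.

EFA_SIZES = {"24xlarge", "48xlarge", "metal-24xl", "metal-48xl"}

def size_to_xlarge_factor(size: str) -> int:
    """Return the numeric multiplier of an 'Nxlarge' size, e.g. '16xlarge' -> 16."""
    if size == "xlarge":
        return 1
    if size.endswith("xlarge"):
        return int(size[:-len("xlarge")])
    return 0  # smaller-than-xlarge or bare metal size names

def supports_efa(size: str) -> bool:
    # EFA is offered on 24xlarge, 48xlarge, and bare metal sizes.
    return size in EFA_SIZES

def supports_ena_express(size: str) -> bool:
    # ENA Express is available on sizes larger than 12xlarge.
    return size_to_xlarge_factor(size) > 12

print(supports_efa("48xlarge"))          # True
print(supports_ena_express("16xlarge"))  # True
print(supports_ena_express("8xlarge"))   # False
```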

Customize your Amazon SageMaker model deployment software and driver versions

When deploying models on SageMaker, you can now pick the software and driver versions for the underlying instances that best fit your needs. Amazon SageMaker makes it easier to deploy ML models, including foundation models (FMs), to make inference requests at the best price performance for any use case.

Previously, customers had to use preset software and driver versions defined by SageMaker on the managed instances behind an endpoint. Now customers can specify the “InferenceAmiVersion” parameter when configuring endpoints to select the combination of software and driver versions (such as the NVIDIA driver and CUDA version) that best meets their requirements. This allows you to tailor your hosting environment to the performance, compatibility, scalability, and operational requirements of your ML applications. By using this parameter, you can also upgrade or downgrade driver versions for your endpoints on your own schedule.

This feature is available in all AWS Regions where SageMaker is available. You can learn more about deploying models on SageMaker here and more about this feature in our documentation.
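As a hedged sketch, the parameter sits on a production variant in the endpoint configuration. The endpoint config name, model name, instance type, and the AMI version string below are illustrative assumptions rather than values from this announcement; consult the SageMaker documentation for the valid version strings.

```python
# Sketch: setting "InferenceAmiVersion" on a production variant when creating
# a SageMaker endpoint configuration. All names and the AMI version string
# are illustrative assumptions; check the documentation for valid values.
endpoint_config_params = {
    "EndpointConfigName": "my-endpoint-config",  # placeholder
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",             # placeholder model
            "InstanceType": "ml.g5.xlarge",      # placeholder GPU instance
            "InitialInstanceCount": 1,
            # Selects the software/driver combination (e.g. NVIDIA driver and
            # CUDA version) on the managed instances behind the endpoint:
            "InferenceAmiVersion": "al2-ami-sagemaker-inference-gpu-2",  # assumed example value
        }
    ],
}

# To apply (requires AWS credentials and boto3):
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(**endpoint_config_params)
```

Because the version is pinned per endpoint configuration, rolling an endpoint to a new driver stack is a matter of creating a new configuration and updating the endpoint on your own schedule.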
 

WorkSpaces Secure Browser announces new session management dashboard

Today, AWS End User Computing announced new session management capabilities for Amazon WorkSpaces Secure Browser. This update provides administrators with deeper visibility and monitoring insights for day-to-day service management.

Administrators can now access a dashboard view from the AWS Management Console that displays active and historical sessions over the last 35 days. Using the dashboard, administrators can filter sessions by user, session ID, portal, or current status. Administrators can view a session’s start time and current status, and force stop in-progress sessions. Administrators can also export session management records from the last 35 days for further monitoring or auditing. Session management is available via the dashboard view in the console or by using the AWS CLI/SDK.
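The dashboard-style filtering described above can be reproduced locally on exported session records. The sketch below is illustrative only: the field names are assumptions about the export format, not a documented schema.

```python
# Sketch: filtering exported session records the way the dashboard does
# (by user, session ID, portal, or status). Field names are illustrative
# assumptions about the export format, not a documented schema.
sessions = [
    {"sessionId": "s-001", "username": "alice", "portal": "portal-a", "status": "Active"},
    {"sessionId": "s-002", "username": "bob",   "portal": "portal-a", "status": "Terminated"},
    {"sessionId": "s-003", "username": "alice", "portal": "portal-b", "status": "Terminated"},
]

def filter_sessions(records, **criteria):
    """Return records matching every given field, e.g. username='alice'."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

active = filter_sessions(sessions, status="Active")
alices_done = filter_sessions(sessions, username="alice", status="Terminated")
print([r["sessionId"] for r in active])       # ['s-001']
print([r["sessionId"] for r in alices_done])  # ['s-003']
```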

Session management works with all existing and newly created portals, at no additional charge, in all AWS Regions where WorkSpaces Secure Browser is available.

If you are new to WorkSpaces Secure Browser, you can get started by visiting the pricing page and adding the Free Trial offer to your AWS account. Then go to the Amazon WorkSpaces Secure Browser console and create your first portal today.
 
