Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
AWS CloudFormation accelerates dev-test cycle with adjustable timeouts for custom resources

AWS CloudFormation launches a new property for custom resources called ServiceTimeout. This new property allows customers to set a maximum timeout for the execution of the provisioning logic in a custom resource, enabling faster feedback loops in dev-test cycles.

CloudFormation custom resources allow customers to write their own provisioning logic in CloudFormation templates and have CloudFormation run the logic during a stack operation. Custom resources use a callback pattern where the custom resource must respond to CloudFormation within a timeout of 1 hour. Previously, this timeout value was not configurable, so code bugs in the customer's custom resource logic resulted in long wait times. With the new ServiceTimeout property, customers can set a custom timeout value, after which CloudFormation fails the execution of the custom resource. This accelerates feedback on failures, allowing for quicker dev-test iterations.

The new ServiceTimeout property is available in all AWS Regions where AWS CloudFormation is available. Refer to the AWS Region table for details.

Refer to the custom resources documentation to learn more about the ServiceTimeout property.
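For illustration, here is a minimal boto3 sketch that sets ServiceTimeout on a custom resource when creating a stack. The Lambda ARN and the 300-second value are placeholders, and the assumption that ServiceTimeout is expressed in seconds should be confirmed in the custom resources documentation.

```python
import json
import boto3

# Hypothetical template: a custom resource backed by a Lambda function,
# with ServiceTimeout set so CloudFormation fails the resource quickly
# instead of waiting the full default hour for a response.
template = {
    "Resources": {
        "DataLoader": {
            "Type": "Custom::DataLoader",
            "Properties": {
                "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:data-loader",
                "ServiceTimeout": 300  # assumed to be specified in seconds
            }
        }
    }
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="custom-resource-timeout-demo",
    TemplateBody=json.dumps(template),
)
```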
 

Amazon Redshift Serverless is now available in the AWS Middle East (UAE) region

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Middle East (UAE) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.

Amazon CodeCatalyst now supports Bitbucket Cloud source code repositories

Amazon CodeCatalyst now supports the use of source code repositories hosted in Bitbucket Cloud in CodeCatalyst projects. This allows customers to use Bitbucket Cloud repositories with CodeCatalyst’s features such as its cloud IDE (Development Environments), view the status of CodeCatalyst workflows back in Bitbucket Cloud, and even block Bitbucket Cloud pull request merges based on the status of CodeCatalyst workflows.

Customers want the flexibility to use source code repositories hosted in Bitbucket Cloud, without the need to migrate to CodeCatalyst to use its functionality. Migration is a long process and customers want to evaluate CodeCatalyst and its capabilities using their own code repositories before they decide to migrate. Support for popular source code providers such as Bitbucket Cloud is the top customer ask for CodeCatalyst. Now customers can use the capabilities of CodeCatalyst without the need for migration of source code from Bitbucket Cloud.

This capability is available in regions where CodeCatalyst is available. There is no change to pricing.

For more information, see the documentation or visit the Amazon CodeCatalyst website.

Amazon Data Firehose now supports integration with AWS Secrets Manager

Amazon Data Firehose (Firehose) now supports integration with AWS Secrets Manager (Secrets Manager) to configure secrets such as database credentials or keys to connect to streaming destinations such as Amazon Redshift, Snowflake, Splunk, and HTTP endpoints.

Amazon Data Firehose needs to access a secret, such as database credentials or keys, to connect to a streaming destination. With this launch, Amazon Data Firehose can retrieve the secret from Secrets Manager instead of using a plain-text secret in its configuration to connect to the destination. By using the Secrets Manager integration, you can ensure that secrets are not visible in plain text during the Firehose stream creation workflow, either in the AWS Management Console or in API parameters. This feature provides a more secure way to store and maintain secrets in Firehose and allows you to leverage the automatic secret rotation capability provided by Secrets Manager.
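As a rough sketch (assuming the SecretsManagerConfiguration field names shown below; verify them against the Firehose API reference), a delivery stream to an HTTP endpoint could reference a secret instead of embedding an access key:

```python
import boto3

firehose = boto3.client("firehose")

# All ARNs below are placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="orders-to-partner",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {"Url": "https://example.com/ingest", "Name": "partner-endpoint"},
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
        # Reference the endpoint credentials stored in Secrets Manager
        # instead of passing a plain-text access key in the configuration.
        "SecretsManagerConfiguration": {
            "Enabled": True,
            "SecretARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:partner-api-key",
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-secrets-role",
        },
    },
)
```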

Amazon FSx for Lustre increases maximum metadata IOPS by up to 15x

Amazon FSx for Lustre, a service that provides high-performance, cost-effective, and scalable file storage for compute workloads, is increasing the maximum level of metadata IO operations per second (IOPS) you can drive on a file system by up to 15x, and now allows you to provision metadata IOPS independently of your file system’s storage capacity.

A file system’s level of metadata IOPS determines the number of files and directories that you can create, list, read, and delete per second. By default, the metadata IOPS of an FSx for Lustre file system scales with its storage capacity. Starting today, you can provision up to 15x higher metadata performance per file system—independently of your file system’s storage capacity—allowing you to scale to even higher levels of performance, accelerate time-to-results, and optimize your storage costs for metadata-intensive machine learning research and high-performance computing (HPC) workloads. You can also update your file system’s metadata IOPS level with the click of a button, allowing you to quickly increase performance as your workloads scale.
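A minimal boto3 sketch of provisioning metadata IOPS independently of storage capacity is shown below; the MetadataConfiguration shape and the USER_PROVISIONED mode are assumptions to verify against the FSx API reference, and the IOPS value is a placeholder.

```python
import boto3

fsx = boto3.client("fsx")

# Raise the metadata IOPS of an existing file system without changing its
# storage capacity (parameter names assumed).
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    LustreConfiguration={
        "MetadataConfiguration": {
            "Mode": "USER_PROVISIONED",
            "Iops": 12000,
        }
    },
)
```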

Centrally manage member account root email addresses across your AWS Organization

Today, we are making it easier for AWS Organizations customers to centrally manage the root email address of member accounts across their Organization using the AWS Command Line Interface (CLI), AWS Software Development Kit (SDK), and AWS Organizations console. We previously released the Accounts SDK, which enables Organizations customers to centrally and programmatically manage both primary and alternate contact information as well as the enabled AWS Regions for their accounts. Previously, customers had to log in to each member account as root to manage its root email address. Starting today, customers can use the same SDK to update the root email address of a member account from either the Organization’s management account or a delegated administrator account, saving them the time and effort of logging into each account directly and allowing them to manage their Organization’s root email addresses at scale. Additionally, this API requires customers to verify the new root email address using a one-time password (OTP), ensuring customers are using accurate email addresses for their member accounts. The root email address won’t change to the new email address until it has been verified.
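A hedged boto3 sketch of that flow is shown below; the operation and parameter names (start_primary_email_update, accept_primary_email_update) are assumptions based on the announcement and should be checked against the AWS Account Management API reference.

```python
import boto3

# Run from the Organization's management account (or a delegated administrator).
account = boto3.client("account")

# Step 1: request the change; AWS sends a one-time password (OTP) to the new address.
account.start_primary_email_update(
    AccountId="111122223333",               # member account ID (placeholder)
    PrimaryEmail="new-root@example.com",
)

# Step 2: once the OTP arrives, complete verification. The root email address
# does not change until this verification succeeds.
account.accept_primary_email_update(
    AccountId="111122223333",
    PrimaryEmail="new-root@example.com",
    Otp="123456",                            # OTP from the verification email
)
```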

Amazon API Gateway customers can easily secure APIs using Amazon Verified Permissions

Amazon Verified Permissions has expanded support for securing Amazon API Gateway APIs with fine-grained access controls when using an OpenID Connect (OIDC) compliant identity provider. Developers can now control access based on user attributes and group memberships, without writing code. For example, say you are building a loan processing application. Using this feature, you can restrict access to the “approve_loan” API to only users in the “loan_officer” group.

Amazon Verified Permissions is a scalable, fine-grained authorization service for the applications that you build. Verified Permissions launched a new feature to secure API Gateway REST APIs for customers using an OIDC compliant identity provider. The feature provides a wizard for connecting Verified Permissions with API Gateway and an identity provider, and defining permissions based on user groups. Verified Permissions automatically generates an authorization model and Cedar policies that allow only authorized user groups access to the application’s APIs. The wizard deploys a Lambda authorizer that calls Verified Permissions to validate that the API request has a valid OIDC token and is authorized. Additionally, the Lambda authorizer caches authorization decisions to reduce latency and cost.

To get started, visit the Verified Permissions console, and create a policy store by selecting “Import using API Gateway and Identity Provider”. We have partnered with leading identity providers, CyberArk, Okta, and Transmit Security, to test this feature and ensure a smooth experience. This feature is available in all regions where Verified Permissions is available. For more information, visit the product page.
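For readers who want to see the shape of such an authorizer, here is a simplified sketch of a REST API Lambda authorizer that calls Verified Permissions; the policy store ID, action, and resource names are placeholders, and the authorizer generated by the wizard will differ in detail.

```python
import boto3

avp = boto3.client("verifiedpermissions")

def lambda_handler(event, context):
    # Token-based REST API authorizer: the bearer token arrives in authorizationToken.
    token = event["authorizationToken"].replace("Bearer ", "")

    decision = avp.is_authorized_with_token(
        policyStoreId="ps-0123456789abcdef",                          # placeholder
        accessToken=token,
        action={"actionType": "LoanApp::Action", "actionId": "approve_loan"},
        resource={"entityType": "LoanApp::Application", "entityId": "all"},
    )

    effect = "Allow" if decision["decision"] == "ALLOW" else "Deny"
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {"Action": "execute-api:Invoke", "Effect": effect, "Resource": event["methodArn"]}
            ],
        },
    }
```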
 

AWS AppFabric now supports JumpCloud

AWS AppFabric, a no-code service that quickly integrates with software-as-a-service (SaaS) applications to enhance an organization’s security posture, now supports JumpCloud. AppFabric provides aggregated and normalized audit logs from popular SaaS applications like Slack, Zoom, Salesforce, Atlassian Jira suite, Google Workspace, and Microsoft 365. By centralizing SaaS application data, AppFabric helps teams gain greater visibility into vulnerabilities in a customer's SaaS environment, enabling them to monitor threats more effectively and respond to incidents faster. IT and security teams no longer need to manage point-to-point SaaS integrations that take time away from higher value tasks, like standardizing alerts or setting common security policies.

AppFabric's support for JumpCloud means that customers can now seamlessly ingest JumpCloud log data, along with over 35 other supported applications.

Amazon EC2 C6id instances are now available in South America (São Paulo) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6id instances are available in the South America (São Paulo) Region. These instances are powered by 3rd generation Intel Xeon Scalable (Ice Lake) processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. C6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage for compute-intensive workloads, such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

Amazon Inspector container image scanning is now available for Amazon CodeCatalyst and GitHub Actions

Amazon Inspector now offers native integration with Amazon CodeCatalyst and GitHub Actions for container image scanning, allowing customers to assess their container images for software vulnerabilities within their Continuous Integration and Continuous Delivery (CI/CD) tools, pushing security earlier in the software development lifecycle. With this expansion, Inspector now natively integrates with four developer tools for container image scanning: Jenkins, TeamCity, GitHub Actions, and Amazon CodeCatalyst. This feature works with CI/CD tools hosted anywhere in AWS, as well as in on-premises environments and hybrid clouds, providing consistency for developers to use a single solution across all their development pipelines.

Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS Organization. Customers can also use Amazon Inspector to scan container images and other archives, such as zip and TAR, for software vulnerabilities directly from local developer laptops and machines. To learn more about scanning container images hosted anywhere, click here.
 

Announcing the common control library in AWS Audit Manager

AWS Audit Manager has introduced a common control library that simplifies the process of automating risk and compliance assessments against enterprise controls. This new library enables Governance, Risk, and Compliance (GRC) teams to efficiently map their controls into Audit Manager for evidence collection.

The new common control library provides pre-defined and pre-mapped AWS data sources, eliminating the need to identify which AWS resources to assess for various controls. It defines AWS-managed common controls based on extensive mapping and reviews by AWS certified auditors, determining the appropriate data sources for evidence collection. With this launch, Audit Manager will also deliver more evidence mappings for controls, including 140 newly supported API calls for additional evidence. You can customize and update all evidence mappings as appropriate for your objectives.

The library also reduces the need to implement different compliance standard requirements individually and review data multiple times across different compliance regimes. It identifies common requirements across controls, helping customers understand their audit readiness across multiple frameworks simultaneously.

As AWS Audit Manager updates or adds data sources (e.g., additional CloudTrail events or API calls, or newly launched AWS Config rules) or maps additional compliance frameworks to the common controls, customers automatically inherit these improvements. This removes the need for constant updating and provides the benefit of additional compliance frameworks added to the Audit Manager library.

AWS launches Tax Settings API to programmatically manage tax registration information

Today AWS launches AWS Tax Settings API, a new public API service that enables customers to programmatically view, set, and modify tax registration information and associated business legal name and address. This launch allows you to automate tax registration updates as an enhanced offering to the AWS Tax Settings page.

Previously, customers managing tax registration information could only update tax information from the Tax Settings page on the AWS Billing Console. Now, the API enables customers to automate setting their tax information while creating accounts in bulk, instead of setting tax registration information for each account manually. This programmatic support allows customers to build automation around setting and modifying tax registration information. Customers creating accounts using the AWS Account Creation API and other AWS services can now fully automate their account creation process by integrating the tax registration workflow into their overall programmatic account creation process. For further details, visit here.

Amazon OpenSearch Ingestion now supports ingesting streaming data from Amazon MSK Serverless

Amazon OpenSearch Ingestion now allows you to ingest streaming data from Amazon Managed Streaming for Apache Kafka (MSK) Serverless, enabling you to seamlessly index the data from Amazon MSK Serverless clusters in Amazon OpenSearch Service managed clusters or Serverless collections without the need for any third-party data connectors. With this integration, you can now use Amazon OpenSearch Ingestion to perform near real-time aggregations, sampling, and anomaly detection on data ingested from Amazon MSK Serverless, helping you to build efficient data pipelines to power your complex observability and analytics use cases.

Amazon OpenSearch Ingestion pipelines can consume data from one or more topics in an Amazon MSK Serverless cluster and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. While reading data from Amazon MSK Serverless via Amazon OpenSearch Ingestion, you can configure the number of consumers per topic and tune different fetch parameters for high- and low-priority data. Furthermore, you can optionally use the AWS Glue Schema Registry to specify your data schema, so that custom schemas are read dynamically at ingest time.

Amazon Location Service launches Enhanced Location Integrity features

Amazon Location Service launches enhanced location integrity features, which offer tools to help developers evaluate the accuracy and authenticity of user-reported locations. With enhanced location integrity features, customers can now use predictive tools that anticipate user movements into or out of customer-specified areas, using criteria like time-to-breach and proximity to enhance monitoring and security measures. For instance, a retailer can utilize improved location integrity features to gauge the proximity of a curbside pickup user and optimize operations for a superior customer experience.

Customers can also use new validation capabilities to help confirm user locations by triangulating WiFi, cellular signals, and IP address information. This is critical for detecting and preventing location spoofing. Lastly, Amazon Location Service now also supports detailed geofences, allowing for the management of complex areas like state boundaries. These improvements provide stronger and more accurate location tracking capabilities, enabling more stringent protocols for location integrity.

Amazon Location Service is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), South America (São Paulo), and the AWS GovCloud (US-West) Region.

To learn more, visit the Amazon Location Service Developer Guide.
 

Amazon CloudWatch GetMetricData API now supports AWS CloudTrail data event logging

Amazon CloudWatch now supports AWS CloudTrail data event logging for the GetMetricData and GetMetricWidgetImage APIs. With this launch, customers have greater visibility into metric retrieval activity from their AWS account for best practices in security and operational troubleshooting.

CloudTrail captures API activities related to Amazon CloudWatch GetMetricData and GetMetricWidgetImage APIs as events. Using the information that CloudTrail collects, you can identify a specific request to CloudWatch GetMetricData or GetMetricWidgetImage APIs, the IP address of the requester, the requester's identity, and the date and time of the request. Logging CloudWatch GetMetricData and GetMetricWidgetImage APIs using CloudTrail helps you enable operational and risk auditing, governance, and compliance of your AWS account.

AWS CloudTrail logging for the GetMetricData and GetMetricWidgetImage API actions is available now in all AWS commercial Regions.

Data logging incurs charges according to AWS CloudTrail Pricing. To learn more about this feature, visit the Amazon CloudWatch documentation page. To enable logging for Amazon CloudWatch metrics data events using the AWS CloudTrail Management Console or the AWS CloudTrail Command Line Interface (CLI), specify CloudWatch metric as the data event type, then choose the APIs that you want to monitor.
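Programmatically, the same configuration can be sketched with an advanced event selector; the resources.type value below ("AWS::CloudWatch::Metric") is an assumption based on the announcement and should be confirmed in the CloudTrail documentation.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="my-trail",  # placeholder trail name
    AdvancedEventSelectors=[
        {
            "Name": "Log CloudWatch metric data events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                # Assumed resource type string for CloudWatch metric data events.
                {"Field": "resources.type", "Equals": ["AWS::CloudWatch::Metric"]},
            ],
        }
    ],
)
```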

Amazon EC2 instance type finder capability is generally available in AWS Console

Today, Amazon Web Services announced the availability of Amazon EC2 instance type finder, enabling you to select the ideal Amazon EC2 instance types for your workload. It uses machine learning to help customers make quick and cost-effective instance type selections before provisioning workloads. Using the AWS Management Console, customers can specify their workload requirements and get trusted recommendations. Amazon EC2 instance type finder is integrated with Amazon Q, allowing customers to use natural language to specify requirements and get instance family suggestions.

EC2 has more than 750 instance types and EC2 instance type finder enables customers to easily choose the best option for their workload requirements. It helps customers stay up to date with the latest instance types and allows them to optimize price-performance for their workloads. By using the EC2 instance type finder in Amazon Q and other console experiences, customers can make informed decisions on the best instance types for their workloads, thereby speeding up their AWS development.

Customers can get instance family suggestions while in an activity, such as launching an instance. EC2 instance type finder is available in all commercial AWS regions (learn more here). Amazon Q experience is available everywhere builders need it. You can find Amazon Q in the AWS Management Console, documentation, AWS website, your IDE through Amazon CodeWhisperer, or through AWS Chatbot in team chat rooms on Slack or Microsoft Teams. For Regional availability for specific Amazon Q in AWS capabilities, visit the Amazon Q FAQs page.
 

AWS IoT Device Management adds a unified connectivity metrics monitoring dashboard

Today, AWS IoT Device Management announced the launch of a new connectivity metrics dashboard, enabling customers to easily identify connectivity patterns and configure operational alarms for their device fleet through a unified view. AWS IoT Device Management is a fully managed cloud service that helps you register, organize, monitor, and remotely manage Internet of Things (IoT) devices at scale. With this launch, you can now select and view a range of connectivity metrics sourced from AWS IoT Core and AWS IoT Device Management on a single page.

The connectivity metrics dashboard consolidates frequently used metrics from AWS IoT Core, such as successful connections, inbound/outbound messages published, and connection request authorization failures. Additionally, you can use the guided workflow to enable AWS IoT Device Management’s Fleet Indexing feature and add widgets for connected device counts, percentage of devices disconnected, and disconnect reasons to the same page. Using the unified dashboard, you can quickly identify potential connectivity and operational problems to reduce the time associated with fleet troubleshooting procedures.

To get started with the connectivity metrics dashboard, visit the ‘Monitor’ tab in the AWS IoT console and then select the new ‘Connectivity metrics’ page.

To learn more, visit the AWS IoT Device Management developer guide.

Amazon SageMaker Model Registry now supports machine learning (ML) governance information

Amazon SageMaker now integrates Model Cards into Model Registry, making it easier for customers to manage governance information for specific model versions directly in Model Registry in just a few clicks.

Today, customers register ML models in Model Registry to manage their models. Now, with this launch, they can register ML model versions early in the development lifecycle, including essential business details and technical metadata. This integration allows customers to seamlessly review and govern models across their lifecycle from a single place. By enhancing the discoverability of model governance information, this update offers customers greater visibility into the model lifecycle from experimentation and training to evaluation and deployment. This streamlined experience ensures that model governance is consistent and easily accessible throughout the development process.

This new capability is now available in all AWS Regions where SageMaker is available, except the AWS GovCloud (US) Regions.

To get started, see the SageMaker Model Registry developer guide for additional information.

Amazon CodeCatalyst now supports GitHub Cloud source code with blueprints

Amazon CodeCatalyst now supports the use of source code repositories hosted in GitHub Cloud with its blueprints capability. This allows customers to create a project from a CodeCatalyst blueprint into a GitHub Cloud source repository and add a blueprint into an existing project's GitHub Cloud source repository. It also enables customers to create custom blueprints in a GitHub Cloud repository.

Customers can use CodeCatalyst blueprints to create a project with a source repository and sample source code, CI/CD workflows, build and test reports, and integrated issue tracking tools. As a blueprint gets updated with the latest best practices or new options, it can regenerate the relevant parts of your codebase in projects containing that blueprint. CodeCatalyst also allows IT leaders to build custom well-architected blueprints for their developer teams, specifying the technology to be used, controlling access to project resources, setting deployment locations, and defining testing and building methods. These capabilities were previously available only for source code repositories hosted in CodeCatalyst. Customers wanted the flexibility to use blueprints with source code repositories hosted in GitHub Cloud. With this launch, customers can now get the same value from CodeCatalyst blueprints with GitHub Cloud hosted repositories.

Amazon OpenSearch Serverless slashes entry cost in half for all collection types

We are excited to offer a new lower entry point for Amazon OpenSearch Serverless, which makes it affordable to run small-scale search and analytics workloads. OpenSearch Serverless compute capacity for indexing and searching data is measured in OpenSearch Compute Units (OCUs). Prior to this update, highly available production deployments required a minimum of 4 OCUs, with redundancy for protection against Availability Zone outages and infrastructure failures.

With the introduction of fractional 0.5 OCU, OpenSearch Serverless can be deployed starting at just 2 OCUs for production workloads. This includes 1 OCU for primary and standby indexing nodes at 0.5 OCU each, and 1 OCU total for search across two 0.5 OCU active replica nodes in separate Availability Zones. OpenSearch Serverless will automatically scale up the OCUs based on workload demand. Additionally, for dev/test workloads that don't require high availability, OpenSearch Serverless offers a 1 OCU deployment option, further cutting costs in half, with 0.5 OCU for indexing and 0.5 OCU for search.

Amazon Connect now provides time zone support for forecasts

Amazon Connect now provides time zone support for forecasts, making it easier for contact center managers to analyze future demand. With this launch, you can now generate, view, and download forecasts for the time zone in which your business operates. This feature will also automatically adjust forecasts to account for daylight saving changes (e.g., if a contact center receives contacts from 8am-8pm US Eastern time, then forecasts will automatically switch from 8am-8pm Eastern Daylight Time (EDT) to 8am-8pm Eastern Standard Time (EST) on November 3, 2024). Time zone support in forecasts simplifies the day-to-day experience for managers.

Amazon Aurora MySQL 3.07 (compatible with MySQL 8.0.36) is generally available

Starting today, Amazon Aurora MySQL 3.07 (with MySQL 8.0 compatibility) will support MySQL 8.0.36. In addition to security enhancements and bug fixes in MySQL 8.0.36, Amazon Aurora MySQL 3.07 includes several fixes and general improvements. For more details, refer to the Aurora MySQL 3 and MySQL 8.0.36 release notes.

To upgrade, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” setting to allow automatic upgrades in an upcoming maintenance window. This release is available in all AWS Regions where Aurora MySQL is available.
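For example, a manual minor version upgrade can be initiated with boto3 roughly as follows; the engine version string is an assumption for the 3.07 release and should be checked with describe-db-engine-versions.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-mysql-cluster",   # placeholder cluster identifier
    EngineVersion="8.0.mysql_aurora.3.07.0",          # assumed version string for 3.07
    ApplyImmediately=True,                            # or False to wait for the maintenance window
)
```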

Amazon EC2 C6id instances are now available in Canada (Central) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6id instances are available in the Canada (Central) Region. These instances are powered by 3rd generation Intel Xeon Scalable (Ice Lake) processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. C6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage for compute-intensive workloads, such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

These instances are generally available today in the US (Ohio, N. Virginia, Oregon), Canada (Calgary, Central), Asia Pacific (Tokyo, Sydney, Seoul, Singapore), Europe (Ireland, Frankfurt, London), Israel (Tel Aviv), and AWS GovCloud (US-West) Regions.

Customers can purchase the new instances via Savings Plans, Reserved Instances, On-Demand Instances, and Spot Instances. To learn more, see Amazon EC2 C6id instances. To get started, use the AWS Command Line Interface (CLI) or AWS SDKs.

AWS HealthImaging now publishes events to Amazon EventBridge

AWS HealthImaging now supports event-driven architectures by sending event notifications to Amazon EventBridge. By subscribing to HealthImaging events in EventBridge, you can automatically kick-off application workflows such as image quality assessment or de-identification based upon changes to resources in the data store. With EventBridge, developers can take advantage of a serverless event bus to easily connect and route events between many AWS services and third-party applications. Developers working with HealthImaging can now receive state changes for asynchronous tasks, such as DICOM import jobs and image set copy and update operations. Events are delivered to EventBridge in near real-time, and developers can write simple rules to listen for specific events.

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership.

AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).

To learn more, visit AWS HealthImaging.
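A minimal sketch of subscribing to these events is shown below; the event source and detail-type strings are assumptions to confirm in the HealthImaging EventBridge documentation, and the Lambda target ARN is a placeholder.

```python
import json
import boto3

events = boto3.client("events")

# Route HealthImaging state-change events (e.g., completed DICOM import jobs)
# to a Lambda function that kicks off a downstream workflow.
events.put_rule(
    Name="healthimaging-import-events",
    EventPattern=json.dumps({
        "source": ["aws.medical-imaging"],          # assumed event source name
        "detail-type": ["Import Job Completed"],    # assumed detail type
    }),
)
events.put_targets(
    Rule="healthimaging-import-events",
    Targets=[{
        "Id": "deidentify-workflow",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:deidentify",
    }],
)
```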
 

Introducing Amazon EMR Serverless Streaming jobs for continuous processing on streaming data

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. We are excited to announce a new streaming job mode on Amazon EMR Serverless, enabling you to continuously analyze and process streaming data.

Streaming has become vital for businesses to gain continuous insights from data sources like sensors, IoT devices, and web logs. However, processing streaming data can be challenging due to requirements such as high availability, resilience to failures, and integration with streaming services. Amazon EMR Serverless Streaming jobs have built-in features to address these challenges. They offer high availability through multi-AZ (Availability Zone) resiliency by automatically failing over to healthy AZs. They also offer increased resiliency through automatic job retries on failures and log management features like log rotation and compaction, preventing the accumulation of log files that might lead to job failures. In addition, Amazon EMR Serverless Streaming jobs support processing data from streaming services like self-managed Apache Kafka clusters and Amazon Managed Streaming for Apache Kafka, and are now integrated with Amazon Kinesis Data Streams using a new built-in Amazon Kinesis Data Streams connector, making it easier to build end-to-end streaming pipelines.

Amazon API Gateway integration timeout limit increase beyond 29 seconds

Amazon API Gateway now enables customers to increase their integration timeout beyond the prior limit of 29 seconds. This setting represents the maximum amount of time API Gateway will wait for a response from the integration to complete. You can raise the integration timeout to greater than 29 seconds for Regional REST APIs and private REST APIs, but this might require a reduction in your account-level throttle quota limit. With this launch, customers with workloads requiring longer timeouts, such as Generative AI use cases with Large Language Models (LLMs), can leverage API Gateway.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
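As a sketch, raising the timeout on an existing REST API integration might look like the following; the API, resource, and method identifiers are placeholders, and values above 29 seconds may require a reduced account-level throttle quota as noted above.

```python
import boto3

apigateway = boto3.client("apigateway")

apigateway.update_integration(
    restApiId="a1b2c3d4e5",       # placeholder REST API ID
    resourceId="abc123",          # placeholder resource ID
    httpMethod="POST",
    patchOperations=[
        # Raise the integration timeout to 120 seconds (value is in milliseconds).
        {"op": "replace", "path": "/timeoutInMillis", "value": "120000"}
    ],
)
```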

Amazon Route 53 Profiles now available in the AWS GovCloud (US) Regions

Starting today, you can enable Route 53 Profiles in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions to define a standard DNS configuration, in the form of a Profile, that may include Route 53 private hosted zone (PHZ) associations, Route 53 Resolver rules, and Route 53 Resolver DNS Firewall rule groups, and to apply this configuration to multiple VPCs in your account. Profiles can also be used to enforce DNS settings for your VPCs, with configurations for DNSSEC validation, Resolver reverse DNS lookups, and the DNS Firewall failure mode. You can share Profiles with AWS accounts in your organization using AWS Resource Access Manager (RAM). Route 53 Profiles simplify the association of Route 53 resources and VPC-level DNS settings across VPCs and AWS accounts in a Region with a single configuration, minimizing the complexity of managing each resource association and setting per VPC.

Amazon Timestream for LiveAnalytics now an Amazon EventBridge Pipes target

Amazon Timestream for LiveAnalytics is now an Amazon EventBridge Pipes target, simplifying the ingestion of time-series data from sources such as Amazon Kinesis, Amazon DynamoDB, Amazon SQS, and more. Pipes provides a fully managed experience, enabling you to easily ingest time-series data into Timestream for LiveAnalytics without the need to write undifferentiated integration code.

Amazon Timestream for LiveAnalytics is a fast, scalable, purpose-built time-series database that makes it easy to store and analyze trillions of time-series data points per day. Amazon EventBridge Pipes provides a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers. Now, with a few clicks, you can connect your applications generating time-series data to Timestream using Pipes, enabling you to monitor your applications in real time and quickly identify trends and patterns. You can now ingest time-series data from diverse sources using EventBridge Pipes, making it easier to derive advanced insights.
 

AWS DMS now supports Babelfish for Aurora PostgreSQL as a source

AWS Database Migration Service (AWS DMS) now supports Babelfish for Aurora PostgreSQL as a source by enhancing its existing PostgreSQL endpoint to handle Babelfish data types. Babelfish is a feature of Amazon Aurora PostgreSQL-Compatible Edition that enables Aurora to understand commands from applications written for Microsoft SQL Server.

AWS DMS supports both Full Load and Change Data Capture (CDC) migration modes for Babelfish. Full Load migration copies all of the data from the source database and CDC copies only the data that has changed since the last migration.

To migrate your data from Babelfish, you can use the AWS DMS console, AWS CLI, or AWS SDKs. To learn more, refer to using Babelfish for Aurora PostgreSQL as a source for AWS DMS.
 

Amazon Q offers inline completions in the command line

Today, Amazon Q Developer launches AI-powered inline completions in the command line. As developers type in their command line, Q Developer will provide real-time AI-generated code suggestions. For instance, if a developer types `git`, Q Developer might suggest `push origin main`. Developers can accept the suggestion by simply pressing the right arrow.

To generate accurate suggestions, Q Developer looks at your current shell context and your recent shell history. You can learn more about how Q Developer manages your data here.

Amazon Connect agent workspace launches refreshed look and feel

The Amazon Connect agent workspace now features an updated user interface to improve productivity and focus for your agents. The new user interface is designed to be more intuitive and highly responsive, and to increase visual consistency across capabilities, providing your agents with a streamlined user experience. With this launch, you can also easily build and embed third-party applications that have a consistent look and feel with the agent workspace by using Cloudscape Design System components.

Amazon Titan Text Embeddings V2 now available for use with Bedrock Knowledge Bases

Amazon Titan Text Embeddings V2, a new embeddings model in the Amazon Titan family of models, is now available for use with Knowledge Bases for Amazon Bedrock. Using Titan Text Embeddings V2, customers can embed their data into a vector database and use it to retrieve relevant information for tasks such as question answering, classification, or personalized recommendations.

Amazon Titan Text Embeddings V2 is optimized for retrieval augmented generation (RAG) and is an efficient model ideal for high-accuracy retrieval tasks at different dimensions. The model supports flexible embedding sizes (1,024, 512, and 256 dimensions) and maintains accuracy at smaller dimension sizes, helping to reduce storage costs without compromising on accuracy. When reducing from 1,024 to 512 dimensions, Titan Text Embeddings V2 retains approximately 99% retrieval accuracy, and when reducing from 1,024 to 256 dimensions, the model maintains 97% accuracy. Additionally, Titan Text Embeddings V2 includes multilingual support for 100+ languages in pre-training as well as unit vector normalization for improving the accuracy of measuring vector similarity.
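For illustration, generating a 512-dimension normalized embedding by invoking the model directly (outside of Knowledge Bases) can be sketched as follows; the input text is a placeholder.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

body = json.dumps({
    "inputText": "How do I reset my password?",
    "dimensions": 512,     # 1024, 512, or 256
    "normalize": True,     # unit vector normalization for similarity search
})

response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=body,
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 512
```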

AWS Backup now supports Amazon Elastic Block Store (EBS) Snapshots Archive in the AWS GovCloud (US) Regions

Today, AWS Backup announces support for EBS Snapshots Archive in the AWS GovCloud (US) Regions, allowing customers to automatically move EBS snapshots created by AWS Backup to EBS Snapshots Archive. EBS Snapshots Archive is a low-cost, long-term storage tier meant for your rarely accessed snapshots that do not need frequent or fast retrieval, allowing you to save up to 75% on storage costs.

You can now use AWS Backup to transition your EBS Snapshots to EBS Snapshots Archive and manage their lifecycle, alongside AWS Backup’s other supported resources in the AWS GovCloud (US) Regions. EBS Snapshots are incremental, storing only the changes since the last snapshot and making them cost effective for daily and weekly backups that need to be accessed frequently. You may also have EBS snapshots that you only need to access every few months or years, retaining them for long-term regulatory requirements. For these long-term snapshots, you can now transition your EBS snapshots managed by AWS Backup to EBS Snapshots Archive Tier to store full, point-in-time snapshots at lower storage costs.

Amazon CloudWatch Logs announces Live Tail streaming CLI support

We are excited to announce streaming CLI support for Amazon CloudWatch Logs Live Tail, making it possible to view, search and filter relevant log events in real-time. You can now view your logs interactively in real-time as they’re ingested via AWS CLI or programmatically within your own custom dashboards inside or outside of AWS.

In CloudWatch Logs, the Live Tail console has provided customers with a rich out-of-the-box experience for viewing and detecting issues in their incoming logs. Additionally, it provides fine-grained controls to filter and highlight analytics of interest while investigating issues relating to deployments or incidents. By using the streaming CLI for Live Tail, you can now have a similar experience from the AWS CLI or integrate the same capabilities into your own custom dashboards.
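A rough boto3 sketch of consuming the Live Tail stream programmatically is shown below; the log group ARN is a placeholder, and the response-stream event shape (sessionUpdate / sessionResults) should be confirmed in the CloudWatch Logs API reference.

```python
import boto3

logs = boto3.client("logs")

response = logs.start_live_tail(
    logGroupIdentifiers=[
        "arn:aws:logs:us-east-1:123456789012:log-group:/app/orders"  # placeholder
    ],
    logEventFilterPattern="ERROR",
)

# The call returns a streaming iterator of session events; print log lines as they arrive.
for event in response["responseStream"]:
    if "sessionUpdate" in event:
        for log_event in event["sessionUpdate"]["sessionResults"]:
            print(log_event["timestamp"], log_event["message"])
```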

AWS Elastic Beanstalk now supports .NET 8 on AL2023

AWS Elastic Beanstalk now supports .NET 8 on AL2023 Elastic Beanstalk environments. Elastic Beanstalk .NET 8 on AL2023 environments come with .NET 8.0 installed by default. See Release Notes for additional details.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. The .NET 8 on AL2023 runtime adds security improvements, such as support for the SHA-3 hashing algorithm, along with other updates including enhanced dynamic profile-guided optimization (PGO) that can lead to runtime performance improvements, and better garbage collection with the ability to adjust the memory limit on the fly. You can create Elastic Beanstalk environments running .NET 8 on AL2023 using any of the Elastic Beanstalk interfaces, such as the Elastic Beanstalk console, Elastic Beanstalk CLI, Elastic Beanstalk API, and AWS Toolkit for Visual Studio.

AWS Batch introduces the Job Queue Snapshot to view jobs at the front of the job queues

AWS Batch now offers the Job Queue Snapshot feature, enabling you to observe the jobs at the front of your queues. This feature provides visibility into the existing AWS Batch Fair Share Scheduling capabilities. The Job Queue Snapshot displays the jobs at the front of your job queues to assist administrators.

Job Queue Snapshot addresses the needs of customers using AWS Batch and leveraging Fair Share Scheduling to balance workloads within the same organization. By gaining visibility into the jobs at the front of their queues, you can quickly identify and resolve issues that may be impacting workload progress, helping you to meet Service Level Agreements (SLAs) and minimize disruptions to your end-users.

The Job Queue Snapshot feature is available to all AWS Batch customers today and across all AWS Regions where AWS Batch is offered. Customers can access the snapshot through the AWS Batch console or by using the GetJobQueueSnapshot API via the AWS Command Line Interface (AWS CLI).

To learn more about Job Queue Snapshot and how to leverage it for your batch computing workloads, visit Viewing job queue status in the AWS Batch User Guide.
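Programmatic access might look like the sketch below; the operation exists per the announcement, but the response field names (frontOfQueue, jobs) are assumptions to verify in the AWS Batch API reference.

```python
import boto3

batch = boto3.client("batch")

# Placeholder queue name; returns the jobs currently at the front of the queue.
snapshot = batch.get_job_queue_snapshot(jobQueue="high-priority-queue")

# Assumed response shape.
for job in snapshot.get("frontOfQueue", {}).get("jobs", []):
    print(job.get("jobArn"), job.get("earliestTimeAtPosition"))
```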

AWS CloudFormation Hooks is now available in the AWS GovCloud (US) Regions

AWS CloudFormation Hooks is now generally available in the AWS GovCloud (US) Regions. With this launch, customers can deploy Hooks in these newly supported AWS Regions to help keep resources secure and compliant.

With CloudFormation Hooks, you can invoke custom logic to automate actions or inspect resource configurations prior to a create, update, or delete CloudFormation stack operation. Today’s launch extends this capability to GovCloud customers and partners to help keep resources secure and compliant.

With this launch, CloudFormation Hooks is available in 31 AWS regions globally: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central, Calgary), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo, Hyderabad, Melbourne), Europe (Ireland, Frankfurt, Zurich, London, Paris, Stockholm, Milan), Middle East (UAE, Bahrain), South America (São Paulo), Africa (Cape Town), and the AWS GovCloud (US-East, US-West) Regions.

To get started, you can explore sample hooks published to the CloudFormation Public Registry or author Hooks using the CloudFormation CLI and publish them to your CloudFormation Private Registry. To learn more, check out the AWS News Blog post, refer to the User Guide and API reference. You can also learn more by following the AWS CloudFormation Hooks workshop.
 

AWS Transfer Family increases message size and throughput limits for AS2

AWS Transfer Family support for the Applicability Statement 2 (AS2) protocol has increased its default message size limit from 50 MB to 1 GB and its throughput limit from 30 to 100 message transfers per second. You will find these increased limits reflected on the AWS Transfer Family page within the Service Quotas console. These increased limits enable you to reliably connect with trading partners that frequently transmit sizable batches of AS2 messages.

The increased message size and throughput limits for AS2 are available in all AWS Regions where the service is available. To learn more about the AS2 quotas and limitations, visit the documentation. To get started with Transfer Family’s AS2 capabilities, take the self-paced workshop or deploy the AS2 CloudFormation template.
 

Announcing preview of the AWS SDK for SAP ABAP - BTP edition

The AWS SDK for SAP ABAP – BTP edition is now available in preview, making it easier for SAP Business Technology Platform (BTP) users to connect to AWS services, including the latest generative AI capabilities. With this new edition, SAP customers can develop and run powerful SAP extensions and standalone applications in SAP BTP that use AWS services.

These capabilities help SAP customers innovate faster while keeping their ERP core clean, including customers using SAP’s RISE and GROW offerings, or self-managed deployments on AWS or other cloud providers. Whether seeking to streamline invoice generation with Amazon Bedrock, improve sales forecasts with Amazon Forecast, or enable predictive maintenance with AWS IoT Core, the AWS SDK for SAP ABAP – BTP edition simplifies access to AWS services from within the SAP BTP ABAP Environment.

Amazon Connect now supports Apple Messages for Business

Amazon Connect Chat now supports Apple Messages for Business, enabling you to deliver personalized customer experiences on Apple Messages, the default messaging application on all iOS devices, increasing customer satisfaction and reducing costs. Rich messaging features such as link previews, quick replies, forms, attachments, customer authentication, iMessage apps, and Apple Pay allow customers to browse product recommendations, check shipments, schedule appointments, or make a payment.

Amazon Connect’s integration with Apple Messages for Business makes it easy for your customers to chat with you anytime they tap your registered phone number on an Apple device, reducing call volumes and operational costs by deflecting calls to chats. Apple Messages for Business chats use the same generative AI-powered chatbots, routing, configuration, analytics, and agent experience as calls, chats, tasks, and web calling in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

AWS Supply Chain Lead Time Insights enhances the support for data variability

Vendor Lead Time (VLT) Insights increases lead time deviation awareness, focusing on critical factors such as the vendor’s transportation mode and source locations. Users can identify lead time deviations at a more granular level and view them through the ASC Insights UI. Additionally, users can easily export all lead time deviations to combine with external sources for further analysis.

Customers lack timely visibility of vendor lead time deviations (actual lead times vs. contractual lead times). Identifying and incorporating these deviations is crucial for improving planning accuracy and avoiding stockout situations. Traditional data analysis methods are time-consuming, often taking weeks to identify variability each quarter. By the time deviations are identified and predictions are adjusted, the underlying data is already outdated. As a result, lead time predictions become less accurate, which heightens the risk of stockouts due to inadequate inventory and leads to increased costs from expedited shipping or higher safety stock adjustments.

This release allows customers to identify and export vendor lead time deviations at a more granular level, including transportation modes and source locations. This will help customers identify deviations from contractual lead times quickly. Customers can then update their planning cycle by using the recommended lead times.

AWS Marketplace announces amendments for AMI annual agreements

AWS Marketplace announces the general availability of amendments for annual agreements on Amazon Machine Image (AMI) products purchased on AWS Marketplace. This allows customers with annual agreements to switch the Amazon Elastic Compute Cloud (EC2) instance types for the AMI solution they purchased from AWS Marketplace.

AWS customers who run AMI software from AWS Marketplace for extended periods choose to use annual plans which offer discounts over on-demand pricing. Previously, annual agreements only provided discounts on the initially selected EC2 instance types, and if customers later needed to support additional users by adding more instances or upgrading to larger instance types, they had to pay on-demand rates or purchase additional annual plans.

Customers can now easily modify their AMI annual agreements in the AWS Marketplace Console. They can add new instance types or switch to a different instance type at any time. If the new instance type results in a higher cost, customers will retain their original discount and AWS Marketplace will automatically calculate the pro-rated cost for the new instance types. Customers also retain the original end date of the agreement for the new instance types, simplifying renewals. Amendments are available for all AMI products in AWS Marketplace with annual pricing plans, and they support both existing and new agreements.

Announcing AWS DMS Serverless improved Oracle to Redshift Full Load throughput

AWS Database Migration Service Serverless (AWS DMSS) now supports improved Oracle to Amazon Redshift Full Load throughput. Using AWS DMSS, you can now migrate data from Oracle databases to Amazon Redshift at much higher throughput rates, ranging from two to ten times faster than previously possible with AWS DMSS.

AWS DMSS Oracle to Amazon Redshift Full Load performance enhancements will automatically be applied whenever AWS DMSS detects that a Full Load operation is being conducted between an Oracle database and Amazon Redshift. Additional information about AWS DMSS Full Load can be found in Full Load documentation.

To learn more, see the AWS DMS Full Load for Oracle databases documentation.

For AWS DMS regional availability, please refer to the AWS Region Table.
 

Amazon Connect provides Zero-ETL analytics data lake to access contact center data

Amazon Connect announces the general availability of analytics data lake, a single source for contact center data including contact records, agent performance, Contact Lens insights, and more — eliminating the need to build and maintain complex data pipelines. Organizations can create their own custom reports using Amazon Connect data or combine data queried from third-party sources using zero-ETL.

Analytics data lake enables contact center managers to leverage BI tools of their choice, such as QuickSight, to analyze the information that matters most to improving customer experience and operational efficiency. That could include a customized view of metrics like service level, combining performance insights with third-party data like CRMs, or using contact center data to inform AI/ML models and contact center optimization opportunities. For example, managers can visualize which agents have the highest customer satisfaction for calls about lost orders and then adjust routing profiles to staff their queues with the ideal agents to achieve their desired business outcomes.

Amazon Connect data lake supports querying engines like Amazon Athena and data visualization applications like Amazon QuickSight or other third-party business intelligence (BI) applications. The Amazon Connect analytics data lake is available in all the AWS Regions where Amazon Connect is available. To learn more and get started, visit the Amazon Connect website and the API documentation.
 

Amazon QuickSight launches multi-column sorting for Tables

Amazon QuickSight now supports the ability to sort by multiple columns in Tables. This allows both authors and readers to sort by two or more columns simultaneously in a nested fashion (e.g., first by column A, then B, then C) using the new sorting popover. They can add, remove, reorder, and reset sorts on a table. Readers can also perform multi-column sorting using hidden and off-visual fields as defined by the author, or opt for single-column sorting from the column header context menu. For more details, refer to the documentation.

Real-time audio and Microsoft Windows Server 2022 support are now available on Amazon AppStream 2.0 multi-session fleets

Amazon AppStream 2.0 announces support for real-time audio conferencing on multi-session fleets. Additionally, you can now launch multi-session fleets powered by the Microsoft Windows Server 2022 operating system and take advantage of the latest operating system features.

Multi-session fleets enable IT admins to host multiple end-user sessions on a single AppStream 2.0 instance, helping customers to make better use of instance resources. By providing your users with access to streaming applications and audio conferencing, you can help improve team collaboration for remote workers. Your users don't need to exit their AppStream 2.0 sessions to interact using well-known audio conferencing software. Before you set up your multi-session fleet for audio conferencing, read the multi-session recommendations. These recommendations will help you choose the appropriate instance type and value for the maximum number of user sessions on a single instance.
 

Amazon Cognito user pools now support the ability to customize access tokens

In December 2023, Amazon Cognito user pools announced the ability to enrich identity and access tokens with custom attributes in the form of OAuth 2.0 scopes and claims. Today, we are expanding this functionality to support complex custom attributes such as arrays, maps and JSON objects in both identity and access tokens. You can now make fine-grained authorization decisions using complex custom attributes in the token. This feature enables you to offer enhanced personalization and increased access control. You can also simplify migration and modernization of your applications to use Amazon Cognito with minimal or no changes to your applications.
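To make the idea concrete, a pre token generation Lambda trigger could add complex claims to the access token roughly as sketched below; the claimsAndScopeOverrideDetails structure and the trigger event version that accepts complex values should be confirmed in the Cognito documentation.

```python
def lambda_handler(event, context):
    # Pre token generation trigger sketch: add a nested JSON object and an array
    # as custom claims on the access token, plus an extra OAuth 2.0 scope.
    event["response"]["claimsAndScopeOverrideDetails"] = {
        "accessTokenGeneration": {
            "claimsToAddOrOverride": {
                "tenant": {"id": "t-1234", "tier": "gold"},   # map/JSON object claim
                "roles": ["approver", "auditor"],             # array claim
            },
            "scopesToAdd": ["inventory.read"],
        }
    }
    return event
```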

Amazon Cognito is a service that makes it simpler to add authentication, authorization, and user management to your web and mobile apps. Amazon Cognito provides authentication for applications with millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

Access token customization is available as part of Cognito advanced security features in all AWS Regions, except AWS GovCloud (US) Regions.

To get started, see the Amazon Cognito documentation.

Powertools for AWS Lambda (Python) adds support for Agents for Amazon Bedrock

Powertools for AWS Lambda (Python), an open-source developer library, launched a new feature to ease the creation of Agents for Amazon Bedrock.

With this release, Powertools for AWS Lambda (Python) handles the automatic generation of OpenAPI schemas directly from the business logic code, validates inputs and outputs according to that schema, and drastically reduces the boilerplate necessary to manage requests and responses from Agents for Amazon Bedrock. By abstracting away the complexities, Powertools for AWS Lambda (Python) allows developers to focus their time and efforts directly on writing business logic, thereby boosting productivity and accelerating development velocity.
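A short example of the new event handler (based on the Powertools documentation; the route path and function are illustrative):

```python
from datetime import datetime, timezone

from aws_lambda_powertools.event_handler import BedrockAgentResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

app = BedrockAgentResolver()

@app.get("/current_time", description="Returns the current UTC time for the agent")
def current_time() -> str:
    # Powertools generates the OpenAPI schema for this route from the type hints
    # and description, so the agent knows how to call it.
    return datetime.now(timezone.utc).isoformat()

def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
```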

AWS AppSync now supports long running events with asynchronous Lambda function invocations

AWS AppSync now allows customers to invoke their Lambda functions, configured as AppSync data sources, in an event-driven manner. This new capability enables asynchronous execution of Lambda functions, providing more flexibility and scalability for serverless and event-driven applications.

Previously, customers could only invoke Lambda functions synchronously from AppSync, which meant that the GraphQL API would wait for the Lambda function to complete before returning a response. With support for Event mode, AppSync can now trigger Lambda functions asynchronously, decoupling the API response from the Lambda execution. This is particularly beneficial for long-running operations (e.g. initiating a generative AI model inference, and leveraging the Lambda function to send model responses to clients over AppSync WebSockets), batch processing (e.g. kicking off a database processing job), or scenarios where immediate responses are not required (e.g. creating and putting messages in a queue).

This feature is available in all AWS regions supported by AppSync. For more details, refer to the AppSync documentation.
 

Amazon Bedrock announces new Converse API

Today, Amazon Bedrock announces the new Converse API, which provides developers with a consistent way to invoke Amazon Bedrock models, removing the complexity of adjusting for model-specific differences such as inference parameters. The API also simplifies managing multi-turn conversations by enabling developers to provide conversation history in a structured way as part of the API request. Furthermore, the Converse API supports tool use (function calling): for supported models (Anthropic's Claude 3 model family, including Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku; Mistral Large; and Cohere's Command R and R+), developers can perform a wide variety of tasks that require access to external tools and APIs.

The Converse API provides a consistent experience that works with Amazon Bedrock models, removing the need for developers to manage any model-specific implementation. With this API, you can write code once and use it seamlessly with different models on Amazon Bedrock.
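For illustration, a minimal boto3 call might look like the following (the model ID, region, and prompt are examples; any Converse-supported model works the same way):

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Give me one sentence on why a unified model API helps."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

# The response follows the same structure regardless of the underlying model.
print(response["output"]["message"]["content"][0]["text"])
```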

Introducing versioning for AWS WAF Bot & Fraud Control managed rule groups

AWS WAF now allows you to select specific versions of Bot Control and Fraud Control managed rule groups within your web ACLs. This provides greater control over managing traffic when AWS makes new managed rule group updates available to you.

With versioning, you gain the flexibility to test new and updated bot and fraud rules before deploying them to production. For example, you can apply a new version of a managed rule group to a staging environment to validate efficacy. You can then incrementally roll out the version across production to closely monitor impact before fully enabling it. If a new version inadvertently causes issues, you can swiftly roll back to the previous version to instantly restore original behavior.

With this launch, you will be configured to use the default version (v1.0) of the Bot Control and Fraud Control managed rule groups, and you will continue to receive periodic AWS updates. If you do not want to receive updates automatically, you can select a specific version; you will remain on that version until you manually update or the version reaches end of life. For more information and best practices on version management, see the documentation.
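As a sketch, discovering and pinning a specific Bot Control version with boto3 might look like this (the rule group name, scope, and version string below are illustrative):

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Discover which versions of the managed rule group are available.
versions = wafv2.list_available_managed_rule_group_versions(
    VendorName="AWS",
    Name="AWSManagedRulesBotControlRuleSet",
    Scope="REGIONAL",
)
print([v["Name"] for v in versions["Versions"]])

# Inside a web ACL definition (e.g., an UpdateWebACL call), a rule statement
# pinned to a specific version would look like this; omit "Version" to keep
# tracking the default version managed by AWS.
pinned_statement = {
    "ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesBotControlRuleSet",
        "Version": "Version_1.0",  # example version label
    }
}
```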

Amazon Redshift Serverless is now available in Region Europe (Zurich) and Europe (Spain)

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in AWS Europe (Zurich) and Europe (Spain) regions. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.

Amazon EventBridge Scheduler adds new API request metrics for improved observability

Amazon EventBridge Scheduler now emits 12 new Amazon CloudWatch metrics allowing you to monitor API request rates for create, delete, get, list, and update API calls for Schedules and ScheduleGroups. You can now more effectively monitor your application’s performance when making calls to Scheduler’s APIs and proactively identify when you may need to increase your Scheduler service quotas.

EventBridge Scheduler allows you to create millions of scheduled events and tasks to run across more than 270 AWS services without provisioning or managing the underlying infrastructure. EventBridge Scheduler supports one-time and recurring schedules that can be created using cron expressions, rate expressions, or specific times, with support for time zones and daylight saving time. Today's expansion of Scheduler usage metrics helps you pinpoint potential bottlenecks before they appear, allowing for easy scaling of your applications.

Amazon QuickSight is now available in Milan, Zurich, Cape Town and Jakarta Regions

Amazon QuickSight, which lets you easily create and publish interactive dashboards across your organization and embed data visualizations into your apps, is now available in Milan, Zurich, Cape Town and Jakarta Regions. New accounts are able to sign up for QuickSight with Milan, Zurich, Cape Town or Jakarta as their primary region, making SPICE capacity available in the region and ensuring proximity to AWS and on-premises data sources. Users on existing QuickSight accounts can now switch regions with the region switcher and create SPICE datasets in the new regions.

With this launch, QuickSight expands to Africa for the first time and is now available in all continents with 21 regions, including: US East (Ohio and N. Virginia), US West (Oregon), Europe (Stockholm, Paris, Frankfurt, Ireland, London, Milan and Zurich), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo and Jakarta), Canada (Central), South America (São Paulo), Africa (Cape Town) and GovCloud (US-West). Learn more about available regions here.
 

One-click instance profile creation to launch an RDS Custom for SQL Server instance

Starting today, RDS Custom for SQL Server database instance creation is simplified with single-click creation and attachment of an instance profile. You can choose “Create a new instance profile” and provide an instance profile name for Create database, Restore snapshot, and Restore to Point-in-time options within RDS Management Console. RDS Management Console will automatically generate a new instance profile with all the necessary permissions for RDS Custom automation tasks.

To leverage this feature, you need to be signed in to the AWS Management Console with the following IAM permissions: iam:CreateInstanceProfile, iam:AddRoleToInstanceProfile, iam:CreateRole, and iam:AttachRolePolicy.

Claude 3 Sonnet and Haiku now available in Amazon Bedrock in the Europe (Frankfurt) region

Beginning today, customers in the Europe (Frankfurt) region can access Claude 3 Sonnet and Haiku in Amazon Bedrock to easily build and scale generative AI applications.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.
 

Amazon MSK adds support for Apache Kafka version 3.7

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 3.7 for new and existing clusters. Apache Kafka version 3.7 includes several bug fixes and new features that improve performance. Key improvements include lower latency resulting from leader discovery optimizations during leadership changes, as well as log segment flush optimization options. For more details and a complete list of improvements and bug fixes, see the Apache Kafka release notes for version 3.7.

Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on streaming applications and less time managing Apache Kafka clusters. To learn how to get started, see the Amazon MSK Developer Guide.

Amazon RDS Multi-AZ deployment with two readable standbys supports 6 additional AWS Regions

Amazon Relational Database Service (Amazon RDS) Multi-AZ deployments with two readable standbys are now available in six additional AWS Regions.

Amazon RDS Multi-AZ deployments with two readable standbys is ideal when your workloads require lower write latency, automated failovers, and more read capacity. In addition, this deployment option supports minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy or any one of the open-source AWS Advanced JDBC Driver, PgBouncer, or ProxySQL.

The six newly supported Regions are Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), Middle East (UAE), and Israel (Tel Aviv). Amazon RDS Multi-AZ deployments with two readable standby instances are supported on Amazon RDS for PostgreSQL versions 16.1 and higher, 15.2 and higher, 14.5 and higher, and 13.8 and higher, and on Amazon RDS for MySQL version 8.0.28 and higher. Refer to the Amazon RDS User Guide for a full list of regional availability and supported engine versions.

Learn more about Amazon RDS Multi-AZ deployments in the AWS News Blog. Create or update fully managed Amazon RDS Multi-AZ database with two readable standby instances in the Amazon RDS Management Console.

Amazon SageMaker Canvas announces up to 10x faster startup time

Amazon SageMaker Canvas announces up to 10x faster startup time, enabling users to achieve faster business outcomes using a visual, no-code interface for machine learning (ML). With a faster startup time, you can now quickly prepare data, build, customize, and deploy machine learning (ML) and generative AI (Gen AI) models in SageMaker Canvas, without writing a single line of code.

SageMaker Canvas can be launched using multiple methods including using your corporate credentials with a single sign-on portal such as AWS IAM Identity Center (IdC), Amazon SageMaker Studio, the AWS Management Console, or a pre-signed URL set up by IT administrators. Now, launching Canvas is quicker than ever using any of these methods. You can launch Canvas in less than a minute and get started with your ML journey 10x faster than before.

Starting today, all new user profiles created in existing or new SageMaker domains can experience this accelerated startup time. Faster startup time is available in all AWS regions where SageMaker Canvas is supported today. Please see the SageMaker Canvas product page to learn more.

Introducing the Document widget for PartyRock

Everyone can build, use, and share generative AI powered apps for fun and for boosting personal productivity using PartyRock. PartyRock uses foundation models from Amazon Bedrock to turn your ideas into working PartyRock apps.

PartyRock apps are composed of UI elements called widgets. Widgets display content, accept input, connect with other widgets, and generate outputs like text, images, and chats using foundation models. Now available is the Document widget, allowing you to integrate text content from files and documents directly into a PartyRock app. The Document widget supports common file types including PDF, MD, TXT, DOCX, HTML, and CSV, with a limit of 120,000 characters. You can add the Document widget to new or existing apps. With the Document widget, you can build apps that generate summaries, extract action items, facilitate chats about document content, or create images based on text from documents like blogs.

For a limited time, AWS offers new PartyRock users a free trial without the need to provide a credit card or sign up for an AWS account. To get hands-on with generative AI, visit PartyRock.
 

Amazon FSx for Lustre is now available in the AWS US East (Atlanta) Local Zone

Customers can now create Amazon FSx for Lustre file systems in the AWS US East (Atlanta) Local Zone.

Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for Lustre provides fully managed shared storage built on the world’s most popular high-performance file system, designed for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA).

To learn more about Amazon FSx for Lustre, visit our product page, and see the AWS Region Table for complete regional availability information.
 

Introducing Amazon EC2 High Memory U7i Instances

Amazon Web Services is announcing general availability of Amazon EC2 High Memory U7i instances, the first DDR5-memory-based 8-socket offering from a leading cloud provider, with up to 32 TiB of memory and 896 vCPUs. Powered by 4th Generation Intel Xeon Scalable processors (Sapphire Rapids), U7i instances have twice as many vCPUs as existing U-1 instances, delivering more than 135% of their compute performance and up to 45% better price performance. Combining the largest memory sizes with the highest vCPU count in the AWS Cloud, these instances are ideal for running large in-memory databases such as SAP HANA, Oracle, and SQL Server, as well as compute-intensive workloads such as large language models.

U7i instances are available in four sizes supporting 12TiB, 16TiB, 24TiB, and 32TiB memory. They offer up to 100Gbps of Elastic Block Store (EBS) bandwidth for storage volumes, facilitating up to 2.5x faster restart times compared to existing U-1 instances. U7i instances deliver up to 200Gbps of network bandwidth and support ENA Express.

U7i instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul) and Asia Pacific (Sydney). Customers can use these instances with On Demand, Reserved Instances, and Savings Plan purchase options. To learn more, visit the U7i instances page.

These instances are certified by SAP for running SAP S/4HANA, SAP BW/4HANA, Business Suite on HANA, Data Mart Solutions on HANA, and Business Warehouse on HANA in production environments. For details, see the SAP HANA Hardware Directory.
 

Amazon MSK launches support for KRaft mode for new Apache Kafka clusters

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports KRaft mode (Apache Kafka Raft) in Apache Kafka version 3.7. The Apache Kafka community developed KRaft to replace Apache ZooKeeper for metadata management in Apache Kafka clusters. In KRaft mode, cluster metadata is propagated within a group of Kafka controllers, which are part of the Kafka cluster, versus across ZooKeeper nodes. On Amazon MSK, like with ZooKeeper nodes, KRaft controllers are included at no additional cost to you, and require no additional setup or management.

You can now create clusters in either KRaft mode or ZooKeeper mode on Apache Kafka version 3.7. In KRaft mode, you can add up to 60 brokers to host more partitions per cluster, without requesting a limit increase, compared to the 30-broker quota on ZooKeeper-based clusters. Support for Apache Kafka version 3.7 is offered in all AWS Regions where Amazon MSK is available. To learn more about KRaft on MSK, read our launch blog and FAQs. To get started with Amazon MSK, see the Amazon MSK Developer Guide.
 

Amazon DynamoDB now supports resource-based policies in the AWS GovCloud (US) Regions

Amazon DynamoDB now supports resource-based policies in the AWS GovCloud (US) Regions. Resource-based policies help you simplify access control for your DynamoDB resources. With resource-based policies, you can specify the Identity and Access Management (IAM) principals that have access to a resource and the actions they can perform on it. You can attach a resource-based policy to a DynamoDB table or a stream; a policy attached to a table can also include access permissions for its indexes. With resource-based policies, you can also simplify cross-account access control for sharing resources with IAM principals in different AWS accounts.

Resource-based policies support integrations with IAM Access Analyzer and Block Public Access (BPA) capabilities. IAM Access Analyzer reports cross-account access to external entities specified in resource-based policies, and the findings provide visibility to help you refine permissions and conform to least privilege. BPA helps you prevent public access to your DynamoDB tables, indexes, and streams, and is automatically enabled in the resource-based policies creation and modification workflows.
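A minimal boto3 sketch of attaching a cross-account, read-only policy to a table (the ARNs and account IDs below are placeholders):

```python
import json

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-gov-west-1")

table_arn = "arn:aws-us-gov:dynamodb:us-gov-west-1:444455556666:table/Orders"

# Grant a role in another account read access to the table (placeholder principals).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws-us-gov:iam::111122223333:role/AnalyticsRole"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
        }
    ],
}

dynamodb.put_resource_policy(ResourceArn=table_arn, Policy=json.dumps(policy))
```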
 

Amazon Redshift Serverless is now generally available in the AWS China (Ningxia) Region

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS China (Ningxia) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists can now use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon Simple Storage Service (Amazon S3), access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes, as well as data in your operational databases, such as Amazon Aurora.

Amazon DynamoDB local supports configurable maximum throughput for on-demand tables

Amazon DynamoDB local now supports configurable maximum throughput for individual on-demand tables and associated secondary indexes. Customers can use this feature for predictable cost management, protection against accidental surges in consumed resources and excessive use, and safeguarding downstream services with fixed capacities from potential overloading and performance bottlenecks. With DynamoDB local, you can develop and test your application while managing maximum on-demand table throughput, making it easier to validate the use of the supported API actions before releasing code to production.

DynamoDB local is free to download and available for macOS, Linux, and Windows. DynamoDB local does not require an internet connection and it works with your existing DynamoDB API calls. To get started with the latest version see “Deploying DynamoDB locally on your computer”. To learn more, see Setting Up DynamoDB Local (Downloadable Version).
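A minimal sketch against DynamoDB local, assuming it is running on its default local endpoint; the table name, limits, and dummy credentials are illustrative:

```python
import boto3

# DynamoDB local listens on localhost:8000 by default; credentials can be dummy values.
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-west-2",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Cap consumption for this on-demand table (and its indexes, if any).
    OnDemandThroughput={"MaxReadRequestUnits": 100, "MaxWriteRequestUnits": 50},
)
```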
 

Amazon CloudWatch now offers 30 days of alarm history

Amazon CloudWatch has extended the duration for which customers can access their alarm history. Customers can now view the history of their alarm state changes for the previous 30 days.

Previously, CloudWatch provided 2 weeks of alarm history. Customers rely on alarm history to review previous triggering events, alarming trends, and noisiness. This extended history makes it easier to observe past behavior and review incidents over a longer period of time.
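For example, retrieving the full 30-day window with boto3 might look like this (the alarm name is illustrative):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)  # the extended 30-day history window

history = cloudwatch.describe_alarm_history(
    AlarmName="HighCPUAlarm",        # illustrative alarm name
    HistoryItemType="StateUpdate",   # only state-change entries
    StartDate=start,
    EndDate=end,
)

for item in history["AlarmHistoryItems"]:
    print(item["Timestamp"], item["HistorySummary"])
```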

New Oracle to PostgreSQL built-in system functions in DMS Schema Conversion

DMS Schema Conversion has released five generative artificial intelligence (AI)-assisted built-in functions to improve Oracle to PostgreSQL conversions. This launch marks the first generative AI-assisted conversion improvement in DMS Schema Conversion.

Customers can use these functions by applying the DMS Schema Conversion extension pack. The extension pack is an add-on module that emulates source database functions that aren't supported in the target database and can streamline the conversion step.

DMS Schema Conversion is generally available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore).

To learn more, visit Converting database schemas using DMS Schema Conversion. For more details on how to apply extension pack, go to Using extension packs in DMS Schema Conversion.

AWS Network Firewall increases quota for stateful rules

The AWS Network Firewall service quota limit for stateful rules is now adjustable. The default limit is still 30,000 stateful rules per firewall policy in a Region, but you can request an increase up to 50,000. This firewall rule limit increase helps customers strengthen their security posture on AWS and mitigate emerging threats more effectively.

A higher rule limit provides flexibility to customers with large-scale deployments to define their firewall policy with different combinations of AWS managed and customer defined rules. Starting today, you can implement a broader range of rules to defend against various threats and scale as you grow on AWS.

Mistral Small foundation model now available in Amazon Bedrock

The Mistral Small foundation model from Mistral AI is now generally available in Amazon Bedrock. You can now access four high-performing models from Mistral AI in Amazon Bedrock including Mistral Small, Mistral Large, Mistral 7B, and Mixtral 8x7B, further expanding model choice. Mistral Small is a highly efficient large language model optimized for high-volume, low-latency language-based tasks. It provides outstanding performance at a cost-effective price point. Key features of Mistral Small include retrieval-augmented generation (RAG) specialization, coding proficiency, and multilingual capabilities.

Mistral Small is perfectly suited for straightforward tasks that can be performed in bulk, such as classification, customer support, or text generation. The model specializes in RAG, ensuring important information is retained even in long context windows, which can extend up to 32K tokens. Mistral Small excels in code generation, review, and commenting, supporting all major coding languages. Mistral Small also has multilingual capabilities, delivering top-tier performance in English, French, German, Spanish, and Italian; it also supports dozens of other languages. The model also comes with built-in efficient guardrails for safety.

Mistral AI’s Mistral Small foundation model is now available in Amazon Bedrock in the US East (N. Virginia) AWS region. To learn more, read the AWS News launch blog, Mistral AI in Amazon Bedrock product page, and documentation. To get started with Mistral Small in Amazon Bedrock, visit the Amazon Bedrock console.
 

PostgreSQL 17 Beta 1 is now available in Amazon RDS Database Preview Environment

Amazon RDS for PostgreSQL 17 Beta 1 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 17 Beta 1 in the Amazon RDS Database Preview Environment that has the benefits of a fully managed database.

PostgreSQL 17 includes updates to vacuuming that reduce memory usage, improve vacuum completion time, and show the progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for `JSON_TABLE` features that can convert JSON to a standard PostgreSQL table. The `MERGE` command now supports the `RETURNING` clause, letting you further work with modified rows. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. Please refer to the PostgreSQL community announcement for more details.
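As a small illustration of the `MERGE ... RETURNING` addition, run from Python with psycopg against a preview-environment instance; the connection string, table names, and columns below are placeholders:

```python
import psycopg  # psycopg 3

# Connection details are placeholders for a Database Preview Environment instance.
conninfo = "host=preview-instance.example.rds.amazonaws.com dbname=postgres user=postgres password=example"

with psycopg.connect(conninfo) as conn:
    with conn.cursor() as cur:
        # MERGE now supports RETURNING in PostgreSQL 17, so the rows touched by the
        # merge can be inspected directly (placeholder tables: inventory, staged_changes).
        cur.execute(
            """
            MERGE INTO inventory AS t
            USING staged_changes AS s ON t.sku = s.sku
            WHEN MATCHED THEN UPDATE SET qty = t.qty + s.qty
            WHEN NOT MATCHED THEN INSERT (sku, qty) VALUES (s.sku, s.qty)
            RETURNING t.*
            """
        )
        for row in cur.fetchall():
            print(row)
```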

Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment.

Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
 

Connect your Jupyter notebooks to Amazon EMR Serverless using Apache Livy endpoints

Today, we are excited to announce that Amazon EMR Serverless now supports endpoints for Apache Livy. Customers can now securely connect their Jupyter notebooks and manage Apache Spark workloads using Livy’s REST interface.

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple and cost effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. With the Livy endpoints, setting up a connection is easy - just point your Livy client in your on-premises notebook running Sparkmagic kernels to the EMR Serverless endpoint URL. You can now interactively query, explore and visualize data, and run Spark workloads using Jupyter notebooks without having to manage clusters or servers. In addition, you can use the Livy REST APIs for use cases that need interactive code execution outside notebooks.
 

AWS Launches Console-based Bulk Policy Migration for Billing and Cost Management Console Access

The AWS Billing and Cost Management console now supports a simplified, console-based migration experience for affected policies containing retired IAM actions (aws-portal). Customers who have not yet migrated to fine-grained IAM actions can trigger this experience by clicking the Update IAM Policies recommended action available on the Billing and Cost Management home page. The experience identifies affected policies, suggests equivalent new actions to match customers' current access, provides testing options, and completes the migration of all affected policies across the organization.

The experience automatically identifies the required new fine-grained actions, making it easy for customers to maintain their current access post-migration. It provides the flexibility to test with a few accounts and to roll back changes with a button click, making the migration a risk-free operation. Moreover, the experience provides an optional customization opportunity for customers to broaden or fine-tune their access by modifying the AWS-recommended IAM action mapping, as well as migrating select accounts one at a time.

AWS Chatbot now supports tagging of AWS Chatbot resources

AWS Chatbot now enables customers to tag AWS Chatbot resources. Tags are simple key-value pairs that customers can assign to AWS resources such as AWS Chatbot channel configurations to easily organize, search, identify resources, and control access.

Prior to today, customers could not tag AWS Chatbot resources. As a result, they could not use tag-based controls to manage access to AWS Chatbot resources. By tagging AWS Chatbot resources, customers can now enforce tag-based controls in their environments. Customers can manage tags for AWS Chatbot resources using the AWS CLI, SDKs, or AWS Management Console.

AWS Chatbot support for tagging Chatbot resources is available at no additional cost in all AWS Regions where the AWS Chatbot service is offered. To learn more, visit the Tagging your AWS Chatbot resources documentation page.

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.30

Kubernetes version 1.30 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.30. Starting today, you can create new EKS clusters using v1.30 and upgrade your existing clusters to v1.30 using the Amazon EKS console, the eksctl command line interface, or through an infrastructure-as-code tool.

Kubernetes version 1.30 includes stable support for pod scheduling readiness and minimum domains parameter for PodTopologySpread constraints. As a reminder, starting with Kubernetes version 1.30 or newer, any newly created managed node groups will automatically default to using AL2023 as the node operating system. For detailed information on major changes in Kubernetes version 1.30, see the Kubernetes project release notes.

Kubernetes v1.30 support for Amazon EKS is available in all AWS Regions where Amazon EKS is available, including the AWS GovCloud (US) Regions.

You can learn more about the Kubernetes versions available on Amazon EKS and instructions to update your cluster to version 1.30 by visiting Amazon EKS documentation. Amazon EKS Distro builds of Kubernetes v1.30 are available through ECR Public Gallery and GitHub. Learn more about the Amazon EKS version lifecycle policies in the documentation.
 

Introducing the Amazon Kinesis Data Streams Apache Spark Structured Streaming Connector for Amazon EMR

We are excited to announce the launch of the Amazon Kinesis Data Streams Connector for Spark Structured Streaming on Amazon EMR. The new connector makes it easy for you to build real-time streaming applications and pipelines that consume Amazon Kinesis Data Streams using Apache Spark Structured Streaming. Starting with Amazon EMR 7.1, the connector comes pre-packaged on Amazon EMR on EKS, EMR on EC2, and EMR Serverless. You no longer need to build or download any packages and can focus on building your business logic using the familiar and optimized Spark Data Source APIs when consuming data from your Kinesis data streams.

Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at massive scale. Amazon EMR is the cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using Apache Spark and other open-source frameworks. The new Amazon Kinesis Data Streams Connector for Apache Spark is faster, more scalable, and fault-tolerant than alternative open-source options. The connector also supports Enhanced Fan-out consumption with dedicated read throughput. To learn more and see a code example, go to Build Spark Structured Streaming applications with the open source connector for Amazon Kinesis Data Streams.
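A rough PySpark sketch of what a streaming read might look like; the data source name and kinesis.* option keys below are assumptions drawn from the launch materials, so verify them against the linked documentation before use:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kinesis-structured-streaming").getOrCreate()

# The "aws-kinesis" format and option keys are assumptions; check the EMR docs for exact names.
events = (
    spark.readStream.format("aws-kinesis")
    .option("kinesis.streamName", "my-stream")          # placeholder stream name
    .option("kinesis.region", "us-east-1")
    .option("kinesis.consumerType", "GetRecords")        # or SubscribeToShard for enhanced fan-out
    .option("kinesis.startingposition", "LATEST")
    .load()
)

query = (
    events.selectExpr("CAST(data AS STRING) AS payload")
    .writeStream.format("console")
    .option("checkpointLocation", "s3://my-bucket/checkpoints/")  # placeholder bucket
    .start()
)
query.awaitTermination()
```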
 

New open-source AWS Advanced Python Wrapper driver now available for Amazon Aurora and Amazon RDS

The Amazon Web Services (AWS) Advanced Python Wrapper driver is now generally available for use with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible edition database clusters. This database driver provides support for faster switchover and failover times, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM).

The AWS Advanced Python Wrapper driver wraps the open-source Psycopg and MySQL Connector/Python drivers and supports Python versions 3.8 or newer. You can install the aws-advanced-python-wrapper package using the pip command along with either the psycopg or mysql-connector-python open-source packages. The wrapper driver relies on monitoring database cluster status and being aware of the cluster topology to determine the new writer. This approach reduces switchover and failover times from tens of seconds to single-digit seconds compared to the open-source drivers.
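A minimal sketch for an Aurora PostgreSQL cluster, assuming the wrapper's AwsWrapperConnection entry point and the failover plugin (the endpoint and credentials below are placeholders):

```python
import psycopg

from aws_advanced_python_wrapper import AwsWrapperConnection

# Wrap the underlying psycopg connect call; the "failover" plugin tracks cluster
# topology so reconnections reach the new writer quickly after a failover.
with AwsWrapperConnection.connect(
    psycopg.Connection.connect,
    host="my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="postgres",
    user="admin",
    password="example-password",
    plugins="failover",
    autocommit=True,
) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT aurora_db_instance_identifier()")
        print(cur.fetchone())
```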

The AWS Advanced Python Wrapper driver is released as an open-source project under the Apache 2.0 License. Check out the project on GitHub to view installation instructions and documentation.
 

AWS re:Post Private is now available in five new regions

AWS re:Post Private is now available in five new regions: US East (N. Virginia), Europe (Ireland), Canada (Central), Asia Pacific (Sydney), and Asia Pacific (Singapore).

re:Post Private is a secure, private version of AWS re:Post, designed to help organizations get started with the cloud faster, remove technical roadblocks, accelerate innovation, and improve developer productivity. With re:Post Private, it is easier for organizations to build an organizational cloud community that drives efficiencies at scale and provides access to valuable knowledge resources. Additionally, re:Post Private centralizes trusted AWS technical content and offers private discussion forums to improve how organizational teams collaborate internally, and with AWS, to remove technical obstacles, accelerate innovation, and scale more efficiently in the cloud. On re:Post Private, you can convert a discussion thread into a support case and centralize AWS Support responses for your organization's cloud community. Learn more about using AWS re:Post Private on the product page.

AWS announces new AWS Direct Connect location in Chicago, Illinois

Today, AWS announced the opening of a new AWS Direct Connect location within the Coresite CH1 data center in Chicago, Illinois. By connecting your network to AWS at the new Illinois location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones. This is the fourth AWS Direct Connect site within the Chicago metropolitan area and the 44th site in the United States.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. The new Direct Connect location at Coresite CH1 offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.

For more information on the over 140 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
 

Amazon EC2 M7i-flex, M7i, C7i, and R7i instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex, M7i, and C7i instances are available in the AWS GovCloud (US-East) Region. In addition, Amazon EC2 M7i-flex, M7i, and R7i instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads, and deliver up to 19% better price-performance compared to M6i. M7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources such as web and application servers, virtual-desktops, batch-processing, and microservices.

M7i, C7i, and R7i instances deliver up to 15% better price-performance compared to prior-generation M6i, C6i, and R6i instances. They offer larger instance sizes, up to 48xlarge, can attach up to 128 EBS volumes, and include two bare-metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which are used to facilitate efficient offload and acceleration of data operations and optimize performance for workloads.

Amazon QuickSight launches public API for SPICE CMK Data Encryption

Amazon QuickSight is excited to announce the launch of public API support for Customer Managed Keys (CMK) to encrypt and manage SPICE datasets. Previously, customers were required to manually configure CMK data encryption keys via the QuickSight console UI. With this API enhancement, QuickSight users can programmatically opt in and configure customer managed keys, seamlessly integrating the setup into their adoption and migration pipelines. Once turned on, the feature enables QuickSight users to 1/ revoke access to SPICE datasets with one click, and 2/ maintain an auditable log that tracks how SPICE datasets are accessed. For further details, refer to the documentation.

AWS Lambda console now supports sharing test events between developers in additional regions

Developers can now share test events with other developers in their AWS account in the Africa (Cape Town), Asia Pacific (Jakarta), Asia Pacific (Osaka), Europe (Milan), Europe (Spain), Europe (Zurich), Middle East (Bahrain), and Middle East (UAE) Regions. Test events provide developers the ability to define a sample event in the Lambda console, and then invoke a Lambda function using that event to test their code. Previously, in the Regions listed above, test events were only available to the developers who created them. With this launch, developers can make test events available to other team members in their AWS account using granular IAM permissions. This capability makes it easier for developers to collaborate and streamline testing workflows. It also allows developers to use a consistent set of test events across their entire team.

Amazon RDS Extended Support APIs are now available

Amazon Aurora and Amazon Relational Database Service (RDS) announce the availability of Extended Support APIs for automated database management. You can use these APIs to create new databases or restore existing snapshots, and specify whether or not they will be in Extended Support. You can also use these APIs to view the Extended Support status of your existing databases. When your databases are in Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL and PostgreSQL databases after the community ends support for a major version, to give you time to upgrade to a newer community-supported version.

Starting today, when you create or restore a database running MySQL 5.7, PostgreSQL 11, or higher major version on Aurora or RDS, it will be subject to Extended Support automatically. This ensures that your existing scripts and automation will work as expected. However, you can choose to override this behavior using the Amazon RDS Console, AWS CLI, and the Extended Support APIs. To learn more about opting out of Extended Support when creating or restoring databases, and viewing the Extended Support status via the Amazon API, AWS CLI or Amazon RDS Console, refer to the Amazon RDS User Guide.

Amazon RDS Extended Support APIs are available for Aurora MySQL-Compatible version 2 and higher, Aurora PostgreSQL-Compatible version 11 and higher, RDS for MySQL major versions 5.7 and higher, and RDS for PostgreSQL major versions 11 and higher.

Amazon MWAA supports FIPS 140-2 compliant endpoints in US and Canada Regions

Amazon Managed Workflows for Apache Airflow (MWAA) now offers Federal Information Processing Standard (FIPS) 140-2 validated endpoints to help you protect sensitive information. These endpoints terminate Transport Layer Security (TLS) sessions using a FIPS 140-2 validated cryptographic software module, making it easier for you to use Amazon MWAA for regulated workloads.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. FIPS-compliant endpoints on Amazon MWAA help companies contracting with the US and Canadian federal governments meet the FIPS security requirement to encrypt sensitive data in supported Regions.

FIPS 140-2 compliant endpoints for Amazon MWAA are available in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), and Canada (Central) Regions. To learn more about Amazon MWAA visit the Amazon MWAA documentation.

Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

Amazon SES launches Mail Manager to help manage complex inbound and outbound email workloads

Today, Amazon Simple Email Service (SES) announces the general availability of Mail Manager, a suite of email management features designed to streamline complex email operations for businesses of all sizes. With Mail Manager, companies can centralize their email infrastructure, applying unified policies and rules to manage both inbound and outbound email flows through a single interface. 

Mail Manager allows organizations to set up dedicated email ingress endpoints, enforce sophisticated email traffic filtering policies such as IP filters, and utilize a powerful rules engine to process and route emails to their intended destinations. Mail Manager also provides customers with archiving capabilities to meet compliance needs for records retention and data protection.

At launch, Mail Manager will offer three initial Email Add Ons, developed with Spamhaus, Abusix, and Trend Micro, to provide email security features. These add-ons offer additional layers of protection and control, enhancing the overall security posture of your email operations.

Mail Manager is generally available, and you can use it in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland, Frankfurt), Asia Pacific (Tokyo, Sydney).

To learn more, see the documentation on Mail Manager in the Amazon SES Developer Guide and the blog post. To start using Mail Manager, visit the Amazon SES console.

Amazon Braket adds new quantum processor from IQM in Europe (Stockholm) Region

Amazon Braket, the quantum computing service from AWS, now supports Garnet, a new high-fidelity 20-qubit superconducting quantum processing unit (QPU) from IQM, expanding Braket to the European Union for the first time. Researchers and developers in Europe, the Middle East, and Asia Pacific can now conveniently explore this high-fidelity device during their working hours and accelerate their research in quantum computing.

Amazon Braket enables customers to explore and experiment with different types of quantum hardware on AWS, including superconducting, trapped-ion, and neutral atom devices. With this launch, customers now have on-demand access to Garnet for building and testing quantum programs, and priority access via Hybrid Jobs for running algorithms - all with pay-as-you-go pricing. Customers can also reserve dedicated capacity on this QPU for their most advanced or time-sensitive workloads via Braket Direct - with hourly pricing and no upfront commitments. Finally, the expansion of the Braket service to the Europe (Stockholm) region can help customers in the European Union meet their data residency needs.
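For example, a minimal Braket SDK snippet that runs a Bell-pair circuit on the new device might look like this; the exact device ARN is an assumption based on the Stockholm Region and device name, so confirm it in the Braket console:

```python
from braket.aws import AwsDevice
from braket.circuits import Circuit

# Device ARN for IQM Garnet in Europe (Stockholm); treat the exact ARN as an assumption.
device = AwsDevice("arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet")

bell = Circuit().h(0).cnot(0, 1)      # prepare a Bell pair
task = device.run(bell, shots=1000)   # submits a pay-as-you-go quantum task
print(task.result().measurement_counts)
```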

You can access the Garnet QPU from the Europe (Stockholm) Region. Researchers at accredited institutions can apply for credits to support experiments on Amazon Braket through the AWS Cloud Credits for Research program. To get started with the new Garnet QPU, please see the following resources:

AWS Entity Resolution expands support for customer compliance with ISO and SOC

AWS Entity Resolution has added certification for System and Organization Controls (SOC) reports. Amazon Web Services (AWS) maintains certifications through audits of its controls to help appropriately manage information security risks that affect the confidentiality, integrity, and availability of company and customer information.

AWS Entity Resolution helps companies easily match, link, and enhance related records across multiple applications and data stores to improve data quality so that they can better understand and engage their customers. AWS Entity Resolution offers advanced matching techniques, such as rule-based matching and machine learning (ML) models to help customers more accurately link related sets of customer information. AWS Entity Resolution is now in-scope for SOC 1, 2, and 3 reporting. You can download copies of the AWS ISO certificates and SOC reports in AWS Artifact to use them to jump-start your own certification efforts.
 

Amazon OpenSearch Service zero-ETL integration with Amazon S3 now available

Today, AWS announces the general availability of Amazon OpenSearch Service zero-ETL integration with Amazon S3, a new, efficient way for customers to query operational logs in Amazon S3 data lakes, eliminating the need to switch between tools to analyze data. Customers can quickly get started by installing out-of-the-box dashboards for AWS log types such as VPC Flow Logs, AWS WAF, and Elastic Load Balancing.

Customers who use OpenSearch Service also use Amazon S3 as a cost-effective way to store infrequently-accessed operational log data. To perform analysis on Amazon S3 data and correlate data across multiple sources, customers previously had to copy that data into OpenSearch Service to take advantage of its rich analytics and visualization features that help them understand data, identify anomalies, and detect potential threats. However, continuously replicating data between services can be time consuming, expensive, and hard to maintain. With OpenSearch Service zero-ETL integration with Amazon S3, customers can access operational log data stored in Amazon S3 using OpenSearch Service, making it easier to perform complex queries and visualizations on their data without any data movement.

Amazon OpenSearch Service zero-ETL integration with Amazon S3 is generally available using OpenSearch Service 2.13 in Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), US East (Ohio), US East (N. Virginia), and US West (Oregon).

To learn more, see the Amazon OpenSearch Service Integration page and the Amazon OpenSearch Service Developer Guide.
 

Amazon RDS Proxy is now available in 6 additional AWS regions

Amazon Relational Database Service (RDS) Proxy is now available in the Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), Europe (Zurich), and Canada (Central) AWS Regions. RDS Proxy is a fully managed, highly available database proxy for RDS and Amazon Aurora databases. RDS Proxy helps improve application scalability, resiliency, and security.

Many applications, including those built on modern architectures capable of horizontal scaling based on the ebb and flow of active users, can open a large number of database connections or open and close connections frequently. This can stress the database's memory and compute resources, leading to slower performance and limited application scalability. Amazon RDS Proxy sits between your application and database to pool and share established database connections, improving database efficiency and application scalability. In case of a failure, Amazon RDS Proxy automatically connects to a standby database instance within the Region. With Amazon RDS Proxy, database credentials and access can be managed through AWS Secrets Manager and AWS Identity and Access Management (IAM), eliminating the need to embed database credentials in application code.

For information on supported database engine versions and regional availability of RDS Proxy, refer to our RDS and Aurora documentation.

AWS announces new edge location in Egypt

Amazon Web Services (AWS) announces expansion in Egypt by launching a new Amazon CloudFront edge location in Cairo, Egypt. Customers in Egypt can expect up to 30% improvement in latency, on average, for data delivered through the new edge location. The new AWS edge location brings the full suite of benefits provided by Amazon CloudFront, a secure, highly distributed, and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video with low latency and high performance.

All Amazon CloudFront edge locations are protected against infrastructure-level DDoS threats with AWS Shield that uses always-on network flow monitoring and in-line mitigation to minimize application latency and downtime. You also have the ability to add additional layers of security for applications to protect them against common web exploits and bot attacks by enabling AWS Web Application Firewall (WAF).

Traffic delivered from this edge location is included within the Middle East region pricing. To learn more about AWS edge locations, see CloudFront edge locations.
 

Announcing support for Sigv4A with session tokens issued in AWS GovCloud (US-West) Region

Today, AWS Identity and Access Management (IAM) is announcing support for signing AWS API requests with the Sigv4A signing algorithm using session tokens issued in the AWS GovCloud (US-West) Region. Cryptographically signing an AWS request with the Sigv4A algorithm allows you to send the request to service endpoints in any of the AWS GovCloud (US) Regions.

If workloads or callers in your account intend to sign AWS requests using Sigv4A, or you plan to adopt a specific AWS feature that requires it, configure the AWS Security Token Service (STS) endpoint in the AWS GovCloud (US-West) Region to vend session tokens that support the Sigv4A algorithm. You can configure this behavior either using the AWS IAM Console or calling the AWS IAM SetSecurityTokenServicePreferences API. Session tokens that support the Sigv4A algorithm are larger in size and match the size of session tokens issued by the STS endpoint in the AWS GovCloud (US-East) Region, which already supports the use of Sigv4A.
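A minimal boto3 sketch of the SetSecurityTokenServicePreferences call named above; whether the v2Token value shown is what enables Sigv4A-capable session tokens in AWS GovCloud (US-West) is an assumption, so confirm in the STS documentation:

```python
import boto3

iam = boto3.client("iam")

# Request larger session tokens that can be used with Sigv4A-signed requests.
# The GlobalEndpointTokenVersion value shown is an assumption; verify in the docs.
iam.set_security_token_service_preferences(GlobalEndpointTokenVersion="v2Token")
```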

Announcing LlamaIndex support for Amazon Neptune to build GraphRAG applications

Starting today, you can build Graph Retrieval-Augmented Generation (GraphRAG) applications by combining knowledge graphs stored in Amazon Neptune and LlamaIndex, a popular open-source framework for building applications with Large Language Models (LLM) such as those available in Amazon Bedrock.

Customers looking to build Generative AI applications often use Retrieval-Augmented Generation (RAG) to improve LLM output so it remains relevant, accurate, and useful in various contexts. RAG extends the already powerful capabilities of LLMs to specific domains or an organization's internal knowledge base, without the need to retrain the model. Knowledge graphs explicitly consolidate and integrate an organization's information assets. GraphRAG uses knowledge graphs, existing graphs or ones generated from source data, to relate concepts and entities across the underlying content, further improving RAG applications. For example, if asked, "Tell me about news events that impact companies in my trading portfolio," a GraphRAG app could respond by also identifying news articles for upstream and downstream dependencies in the supply chain, which in turn might have an impact on those companies. With today's announcement, you can use LlamaIndex to create GraphRAG applications with knowledge graphs stored in Amazon Neptune.

To get started visit the Amazon Neptune GraphStore documentation.
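As a rough sketch of what that wiring might look like, the graph-store class name, constructor arguments, and Neptune endpoint below are assumptions based on the llama-index-graph-stores-neptune package; an LLM must also be configured in LlamaIndex (for example via Amazon Bedrock) for index construction and querying:

```python
from llama_index.core import KnowledgeGraphIndex, SimpleDirectoryReader, StorageContext
from llama_index.graph_stores.neptune import NeptuneDatabaseGraphStore  # assumed class name

# Point LlamaIndex at an existing Neptune cluster endpoint (placeholder shown).
graph_store = NeptuneDatabaseGraphStore(
    host="my-neptune.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com",
    port=8182,
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# Build a knowledge graph index from source documents, then query it (GraphRAG).
documents = SimpleDirectoryReader("./news_articles").load_data()
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=5,
)

print(index.as_query_engine().query("Which news events impact companies in my trading portfolio?"))
```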
 

AWS Glue now supports native SaaS connectivity: Salesforce connector available now

AWS Glue now supports SaaS connectivity with out-of-the-box support for Salesforce, enabling users to quickly preview and transfer their CRM data, run queries, detect schemas, and schedule jobs.

As enterprises increasingly rely on data to make business decisions, they face the challenge of collecting data from a growing ecosystem of data stores into a centralized location for analytics, AutoML, ML training, and business intelligence. With the new Salesforce connector, customers can easily ingest and aggregate their CRM data into any of Glue's supported destinations, including Apache Iceberg, Delta Lake, and Apache Hudi formats on Amazon S3; data warehouses such as Amazon Redshift and Snowflake; and many more. Reverse-ETL use cases are also supported, allowing users to write data back to Salesforce.

Built on Spark and with support for multiple worker threads to extract data in parallel, the Salesforce connector is scalable and performant. With support for OAuth 2.0 and Managed Client Application, customers simply use their Salesforce login credentials to securely authenticate and authorize data access. VPC support provides customers with enterprise-level security. And with access to Glue’s built-in 250+ transformations, customers have the flexibility to easily customize their data pipelines. To simplify data management, customers have access to AWS Glue Data Quality, Glue Data Catalog, Monitoring, Workflows, and Sensitive Data Detection.
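In a Glue ETL script, reading a Salesforce object might look roughly like the following; the connection option keys ("connectionName", "entityName", "apiVersion") are assumptions, so check the AWS Glue documentation for the exact names:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the Account object through a pre-created Salesforce Glue connection.
# The option keys below are assumptions; the connection name is a placeholder.
accounts = glue_context.create_dynamic_frame.from_options(
    connection_type="salesforce",
    connection_options={
        "connectionName": "my-salesforce-connection",
        "entityName": "Account",
        "apiVersion": "v60.0",
    },
)

print(f"Read {accounts.count()} Salesforce accounts")
job.commit()
```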

To get started, create a Salesforce Glue Connection and an EL/ETL job with Salesforce as a source and/or destination in AWS Glue Studio. The AWS Glue Salesforce connector is available in all commercial AWS regions. To learn more, visit AWS Glue documentation.

Amazon EventBridge Event Bus now supports improved filtering capabilities for event matching

Amazon EventBridge event matching on Event Bus now supports an array of values when combining anything-but filtering (matching anything except for the value) with prefix filtering (matching against characters at the beginning of a value), suffix filtering (matching against characters at the end of a value), and wildcard filtering (matching against patterns in string values). For example, you can now match against values that do not end with specific file types such as .png and .jpg. Or you can match against values that do not have a specific filename path such as */lib/* and */bin/*.

Amazon EventBridge Event Bus is a serverless event router that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and AWS services. You can set up rules to determine where to send your data, allowing applications to react to changes in your data as they occur.
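For example, a rule that matches events whose file name does not end in .png or .jpg could use an event pattern like the following; the event source and field names are illustrative:

```python
import json

import boto3

events = boto3.client("events")

# Match events whose fileName does NOT end with .png or .jpg
# (anything-but combined with an array of suffixes).
pattern = {
    "source": ["my.custom.app"],
    "detail": {
        "fileName": [{"anything-but": {"suffix": [".png", ".jpg"]}}]
    },
}

events.put_rule(
    Name="non-image-objects",
    EventBusName="default",
    EventPattern=json.dumps(pattern),
)
```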

Amazon Redshift announces Snapshot Isolation as the default for new cluster creates and restores

Starting today, Amazon Redshift is making snapshot isolation the default for provisioned clusters when you create a new cluster or restore a cluster from a snapshot. The database isolation level will remain unchanged on your existing provisioned clusters unless explicitly changed. You can switch to serializable isolation at any time if it is your preferred isolation level. This change makes the product experience consistent across Redshift Provisioned and Serverless, as Serverless already uses snapshot isolation as the default.

Amazon Redshift offers two database isolation levels — serializable and snapshot — to handle concurrent transactions within your data warehouse. Serializable isolation provides strict correctness guarantees equivalent to running your operations serially. Most data warehousing applications do not need these strict guarantees that limit concurrency on operations. Unlike serializable, snapshot isolation gives you better performance by allowing for more concurrency of operations on the same table when processing large volumes of data. You can change the isolation level for your database using CREATE DATABASE or ALTER DATABASE commands.

Amazon EC2 M7i-flex, M7i, and C7i instances are now available in AWS GovCloud (US-East) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex, M7i, and C7i instances powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the AWS GovCloud (US-East) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads, and deliver up to 19% better price-performance compared to M6i. M7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices.

M7i and C7i deliver up to 15% better price-performance compared to prior-generation M6i and C6i instances. They offer larger instance sizes, up to 48xlarge, support attaching up to 128 EBS volumes, and include two bare-metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which offload and accelerate data operations to optimize performance for workloads.

In addition, all three instance types support the new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML.

AWS CloudFormation accelerates dev-test cycle with a new parameter for DeleteStack API

AWS CloudFormation launches a new parameter called DeletionMode for the DeleteStack API. This new parameter allows customers to safely delete their CloudFormation stacks that are in DELETE_FAILED state.

Today, customers create, update, delete, and re-create CloudFormation stacks as they iterate on their cloud infrastructure in dev-test environments. Customers can use the DeleteStack CloudFormation API to delete their stacks and stack resources. However, certain stack resources can prevent the DeleteStack API from completing successfully, for example when customers attempt to delete non-empty Amazon S3 buckets. In such scenarios the stack can enter the DELETE_FAILED state. With this launch, customers can pass the FORCE_DELETE_STACK value in the new DeletionMode parameter to delete such stacks.
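As a minimal sketch, the call below force-deletes a stack that is stuck in DELETE_FAILED; the stack name is illustrative, and this assumes an SDK version that already includes the new DeletionMode parameter.

```python
# Sketch: force-deleting a CloudFormation stack in DELETE_FAILED state.
import boto3

cloudformation = boto3.client("cloudformation")

cloudformation.delete_stack(
    StackName="my-dev-test-stack",        # illustrative stack name
    DeletionMode="FORCE_DELETE_STACK",    # new parameter introduced by this launch
)
```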
 

Amazon Security Lake now supports logs from AWS WAF

Today, AWS announces expanded log coverage for Amazon Security Lake, which now includes AWS Web Application Firewall (AWS WAF) logs. This enhancement allows you to automatically centralize and normalize your AWS WAF web ACL logs in Security Lake. You can analyze your log data to determine whether a suspicious IP address is interacting with your environment, monitor trends in denied requests to identify new exploitation campaigns, or run analytics to surface anomalous successful access by previously blocked hosts. This helps you monitor and investigate potentially suspicious activity in your web applications.

Security Lake automatically centralizes security data from AWS environments, SaaS providers, on premises, and cloud sources into a purpose-built data lake stored in your account. AWS WAF is a web application firewall that enables you to monitor the HTTP(S) requests that are made to your protected web application resource. Today’s announcement of AWS WAF logs coverage further streamlines the collection and management of your security data across accounts and AWS Regions, freeing up time for analyzing security data and improving the protection of your workloads, applications, and data.
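As a sketch of what enablement might look like, the call below adds AWS WAF as a natively supported log source in Security Lake; the Region, source name, and source version values are assumptions, so check the Security Lake documentation for the exact identifiers.

```python
# Sketch: adding AWS WAF as a natively supported log source in Security Lake.
# The Region, sourceName, and sourceVersion values are illustrative assumptions.
import boto3

securitylake = boto3.client("securitylake")

securitylake.create_aws_log_source(
    sources=[
        {
            "regions": ["us-east-1"],    # illustrative Region
            "sourceName": "WAF",         # assumed identifier for AWS WAF logs
            "sourceVersion": "2.0",      # assumed source version
        }
    ]
)
```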

RDS Performance Insights provides fine-grained access control

Amazon RDS (Relational Database Service) Performance Insights now provides fine-grained access control for the performance data that it collects. Customers can create new IAM policies or update existing IAM policies to enforce fine-grained access to Performance Insights data through the console or APIs.

This launch allows customers to define an access control policy for specific dimensions of the database load metric of Performance Insights. For example, customers can define a policy that allows a certain user to view SQL statistics, but denies access to view the full SQL text. Before this launch, customers could define the access control policy at the level of individual actions and resources only. With this feature, customers can restrict access to potentially sensitive dimensions such as SQL text, and provide access to non-sensitive dimensions on the same API action within a single IAM policy.

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.
 

Amazon OpenSearch Service now supports OpenSearch version 2.13

You can now run OpenSearch version 2.13 in Amazon OpenSearch Service. With OpenSearch 2.13, we have made several improvements to search performance, resiliency, and OpenSearch Dashboards, and added new features to help you build AI-powered applications. We have introduced concurrent segment search, which allows users to query index segments in parallel at the shard level. This improves latency for long-running requests that contain aggregations or large ranges. You can now index quantized vectors with FAISS-engine-based k-NN indexes, with the potential to reduce the memory footprint by as much as 50 percent with minimal impact on accuracy and latency. I/O-based admission control proactively monitors and prevents I/O usage breaches to further improve the resilience of the cluster.
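For instance, concurrent segment search can be turned on as a dynamic cluster setting; the sketch below uses opensearch-py, with an illustrative domain endpoint and credentials.

```python
# Sketch: enabling concurrent segment search on an OpenSearch 2.13 domain.
# Endpoint and credentials are illustrative assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin-user", "admin-password"),
    use_ssl=True,
)

# Turn on concurrent segment search cluster-wide as a persistent setting.
client.cluster.put_settings(
    body={"persistent": {"search.concurrent_segment_search.enabled": True}}
)
```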

This launch also introduces several features that enable you to build and deploy AI-powered search applications. The new flow framework helps you automate the configuration of search and ingest pipeline resources required by advanced search features like semantic, multimodal, and conversational search. This adds to existing capabilities for automating ml-commons resource setup, allowing you to package OpenSearch AI solutions into portable templates. Additionally, we've added predefined templates that automate setup for models integrated through our connectors to APIs such as OpenAI, Amazon Bedrock, and Cohere, enabling you to build solutions like semantic search.

For information on upgrading to OpenSearch 2.13, please see this documentation. OpenSearch 2.13 is now available in all AWS Regions where Amazon OpenSearch Service is available.
