
Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
Amazon RDS for SQL Server Supports Minor Version 2022 CU13

A new minor version of Microsoft SQL Server is now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports the latest minor version of SQL Server 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances in the Amazon RDS User Guide. The new minor version includes SQL Server 2022 CU13 (16.0.4125.3).
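As a rough sketch of what a scripted upgrade might look like, the parameters below target the minor version named in the announcement; the instance identifier is hypothetical, and the boto3 call is shown as a comment since it requires live AWS credentials.

```python
# Sketch: upgrading an RDS for SQL Server instance to the new minor version.
# "my-sqlserver-db" is a hypothetical instance identifier; the engine version
# string is the SQL Server 2022 CU13 build named in the announcement.
upgrade_params = {
    "DBInstanceIdentifier": "my-sqlserver-db",
    "EngineVersion": "16.0.4125.3",  # SQL Server 2022 CU13
    "ApplyImmediately": False,       # defer to the next maintenance window
}

# With AWS credentials configured, the equivalent boto3 call would be:
#   import boto3
#   boto3.client("rds").modify_db_instance(**upgrade_params)
```

Setting `ApplyImmediately` to `False` defers the upgrade to the instance's next scheduled maintenance window rather than triggering it right away.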

The minor version is available in all AWS commercial Regions where Amazon RDS for SQL Server is available, as well as in the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
 

Productionize Foundation Models from SageMaker Canvas

Amazon SageMaker Canvas now supports deploying Foundation Models (FMs) to SageMaker real-time inference endpoints, allowing you to bring generative AI capabilities into production and consume them outside the Canvas workspace. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions and use generative AI capabilities.

SageMaker Canvas provides access to FMs powered by Amazon Bedrock and SageMaker JumpStart, and supports retrieval-augmented generation (RAG) based customization and fine-tuning of FMs. Starting today, you can deploy FMs powered by SageMaker JumpStart, such as Falcon-7B and Llama-2, to SageMaker endpoints, making it easier to integrate generative AI capabilities into your applications outside the SageMaker Canvas workspace. FMs powered by Amazon Bedrock can already be accessed through a single API outside the SageMaker workspace. By simplifying the deployment process, SageMaker Canvas accelerates time-to-value and ensures a smooth transition from experimentation to production.

To get started, log in to SageMaker Canvas to access the FMs powered by SageMaker JumpStart. Select the desired model and deploy it with the appropriate endpoint configuration, such as whether the endpoint remains active indefinitely or only for a specific duration. SageMaker Inference charges apply to deployed models. New users can access the latest version by launching SageMaker Canvas directly from the AWS console. Existing users can access the latest version of SageMaker Canvas by clicking “Log Out” and logging back in.

AWS CloudTrail Lake announces AI-powered natural language query generation (preview)

AWS announces generative AI-powered natural language query generation in AWS CloudTrail Lake (preview), enabling you to analyze your AWS activity events without having to write complex SQL queries. You can now ask questions in plain English about your AWS API and user activity, such as “How many errors were logged during the past week for each service, and what was the cause of each error?” or “Show me all users who logged in using the console yesterday”, and AWS CloudTrail will generate a SQL query that you can run as is or fine-tune to meet your use case.

This new feature empowers users who are not experts in writing SQL queries or who don’t have a deep understanding of CloudTrail events. As a result, exploration and analysis of AWS activity in event data stores on CloudTrail Lake becomes simpler and quicker, accelerating compliance, security, and operational investigation.

This feature is now available in preview in AWS US East (N. Virginia) at no additional cost. Please note that running the queries generated using this feature will result in CloudTrail Lake query charges. Refer to CloudTrail pricing for details. To learn more about this feature and get started, please refer to the documentation or the AWS News Blog.
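To illustrate the kind of output involved, the query below is a plausible example of what the feature might generate for the console-login prompt above; the event data store ID and date are hypothetical placeholders, and the boto3 call is shown as a comment since running it incurs CloudTrail Lake query charges.

```python
# Sketch: the kind of SQL CloudTrail Lake might generate for the prompt
# "Show me all users who logged in using the console yesterday".
# The event data store ID and date below are hypothetical placeholders.
event_data_store = "EXAMPLE_EVENT_DATA_STORE_ID"

query = f"""
SELECT userIdentity.arn, eventTime, sourceIPAddress
FROM {event_data_store}
WHERE eventName = 'ConsoleLogin'
  AND eventTime > '2024-06-01 00:00:00'
ORDER BY eventTime DESC
""".strip()

# With credentials configured, the generated query could be executed with:
#   import boto3
#   boto3.client("cloudtrail").start_query(QueryStatement=query)
```

You can run the generated SQL as-is or edit it first, for example to narrow the time window or add columns.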

Amazon Connect now provides color coding for shift activities in agent scheduling

Amazon Connect now provides color coding for shift activities in agent scheduling, enabling a simplified experience for contact center managers and agents. With this launch, you can configure colors for agent shift activities, such as red for breaks and lunches, green for team meetings, and purple for training. With customizable colors, managers can quickly see how different activities are placed in agent schedules (for example, is more than half the team in a training session at the same time, or does the team meeting include everyone?). This launch also simplifies the experience for agents, who can understand their schedule for the week at a glance without having to read through each scheduled activity. Customizable colors make day-to-day schedule management more efficient for managers and agents.

Amazon SES now provides custom values in the Feedback-ID header

Today, Amazon Simple Email Service (SES) released a new feature that gives customers control over parts of the auto-generated Feedback-ID header in messages sent through SES. This feature provides additional details to help customers identify deliverability trends. Customers can use products like Postmaster Tools by Gmail to see complaint rates by identifiers of their choice, such as sender identity or campaign ID. This makes it easier to track deliverability performance for independent workloads and campaigns, and accelerates troubleshooting when diagnosing complaint rates.

Previously, SES automatically generated a Feedback-ID header when sending emails on behalf of SES customers. This Feedback-ID helps customers track their deliverability performance, such as complaint rates, at the AWS account level. Now SES includes up to two custom values in the Feedback-ID header, which customers can pass to SES during sending. Customers specify message tag values for either “ses:feedback-id-a” or “ses:feedback-id-b” (or both), and SES automatically includes these values as the first and second fields in the Feedback-ID header, respectively. This gives even more granularity when viewing deliverability metrics in tools such as Postmaster Tools by Gmail.
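A minimal sketch of passing the two custom fields as SESv2 message tags follows; the campaign and sender identifiers are hypothetical, and the send call is shown as a comment since it requires live credentials and verified identities.

```python
# Sketch: passing the custom Feedback-ID fields as SESv2 message tags.
# The tag values (campaign name, sender identity) are hypothetical examples.
email_tags = [
    {"Name": "ses:feedback-id-a", "Value": "summer-campaign"},    # first field
    {"Name": "ses:feedback-id-b", "Value": "newsletter-sender"},  # second field
]

# With credentials configured, the tags would be passed on each send:
#   import boto3
#   boto3.client("sesv2").send_email(
#       FromEmailAddress="sender@example.com",
#       Destination={"ToAddresses": ["recipient@example.com"]},
#       Content={"Simple": {"Subject": {"Data": "Hello"},
#                           "Body": {"Text": {"Data": "Hi there"}}}},
#       EmailTags=email_tags,
#   )
```

SES then places the two values as the first and second fields of the generated Feedback-ID header, so complaint metrics can be broken out per campaign or per sender.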

SES supports fine-grained Feedback-ID values in all AWS Regions where SES is available.

For more information, see the documentation for SES event publishing.

AWS Audit Manager generative AI best practices framework now includes Amazon SageMaker

Available today, the AWS Audit Manager generative AI best practices framework now includes Amazon SageMaker in addition to Amazon Bedrock. Customers can use this prebuilt standard framework to gain visibility into how well their generative AI implementation on SageMaker or Amazon Bedrock follows AWS-recommended best practices, and to start auditing their generative AI usage and automating evidence collection. The framework provides a consistent approach for tracking AI model usage and permissions, flagging sensitive data, and alerting on issues.

This framework includes 110 controls across areas such as governance, data security, privacy, incident management, and business continuity planning. Customers can select and customize controls to structure automated assessments. For example, customers seeking to mitigate known biases before feeding data into their model can use the ‘Pre-processing Techniques’ control to require evidence of validation criteria including documentation of data augmentation, re-weighting, or re-sampling. Similarly, customers can use the 'Bias and Ethics Training' control to upload documentation demonstrating that their workforce is trained to address ethical considerations and AI bias in the model.

AWS IAM Access Analyzer now offers policy checks for public and critical resource access

AWS Identity and Access Management (IAM) Access Analyzer guides customers toward least privilege by providing tools to set, verify, and refine permissions. IAM Access Analyzer now extends custom policy checks to proactively detect nonconformant updates to policies that grant public access or grant access to critical AWS resources ahead of deployments. Security teams can use these checks to streamline their IAM policy reviews, automatically approving policies that conform with their security standards and inspecting more deeply when policies don’t conform. Custom policy checks use the power of automated reasoning to provide the highest levels of security assurance backed by mathematical proof.

Security and development teams can innovate faster by automating and scaling their policy reviews for public and critical resource access. You can integrate these custom policy checks into the tools and environments where developers author their policies, such as their CI/CD pipelines, GitHub, and VSCode. Developers can create or modify an IAM policy, and then commit it to a code repository. If custom policy checks determine that the policy adheres to your security standards, your policy review automation lets the deployment process continue. If custom policy checks determine that the policy does not adhere to your security standards, developers can review and update the policy before deploying it to production.
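A pipeline-style pre-deployment check might look like the sketch below. The bucket ARN and policy are illustrative, and the API call is shown as a comment since it requires live credentials; the check shown is the public-access variant of the custom policy checks.

```python
# Sketch: a pipeline-style pre-deployment check of a candidate policy.
# The bucket ARN and policy are illustrative examples.
import json

candidate_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",  # wildcard principal: would be flagged as public
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

# With credentials configured, the proactive check would look like:
#   import boto3
#   result = boto3.client("accessanalyzer").check_no_public_access(
#       policyDocument=candidate_policy,
#       resourceType="AWS::S3::Bucket",
#   )
#   # result["result"] is "PASS" or "FAIL"; block the deployment on "FAIL"
```

Wiring this into CI means a nonconformant policy like the one above fails the build before it ever reaches production.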

AWS Identity and Access Management now supports passkey as a second authentication factor

AWS Identity and Access Management (IAM) now supports passkeys for multi-factor authentication to provide easy and secure sign-ins across your devices. Based on FIDO standards, passkeys use public key cryptography, which enables strong, phishing-resistant authentication that is more secure than passwords. IAM now allows you to secure access to AWS accounts using passkeys for multi-factor authentication (MFA), with support for built-in authenticators such as Touch ID on Apple MacBooks and Windows Hello facial recognition on PCs. Passkeys can be created with a hardware security key or with your chosen passkey provider using your fingerprint, face, or device PIN, and they are synced across your devices so you can sign in to AWS.

AWS Identity and Access Management helps you securely manage identities and control access to AWS services and resources. MFA is a security best practice in IAM that requires a second authentication factor in addition to the user name and password sign-in credentials. Passkey support in IAM is a new feature to further enhance MFA usability and recoverability. You can use a range of supported IAM MFA methods, including FIDO-certified security keys to harden access to your AWS accounts.

This feature is available now in all AWS Regions, except in the China Regions. To learn more about using passkeys in IAM, get started by visiting the launch blog post and Using MFA in AWS documentation.


AWS Cloud WAN introduces Service Insertion to simplify security inspection at global scale

Today AWS announces Service Insertion, a new feature of AWS Cloud WAN that simplifies the integration of security and inspection services into Cloud WAN-based global networks. Using this feature, you can easily steer your global network traffic between Amazon VPCs (Virtual Private Clouds), AWS Regions, on-premises locations, and the internet through security appliances or inspection services, using a central Cloud WAN policy or the AWS Management Console.

Customers deploy inspection services or security appliances such as firewalls, intrusion detection/protection systems (IDS/IPS) and secure web gateways to inspect and protect their global Cloud WAN traffic. With Service Insertion, customers can easily steer multi-region or multi-segment network traffic to security appliances or services without having to create and manage complex routing configurations or third-party automation tools. Using service insertion, you define your inspection and routing intent in a central policy document and your configuration is consistently deployed across your Cloud WAN network. Service insertion works with both AWS Network Firewall and third-party security solutions, and makes it easy to perform east-west (VPC-to-VPC) and north-south (Internet Ingress/Egress) security inspection across multiple AWS Regions and on-premises locations across the globe.
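The central policy intent described above can be pictured with a fragment like the one below. The field names are illustrative of the service-insertion policy model (a network-function group plus a send-via segment action) and should be checked against the Cloud WAN policy documentation for the authoritative schema; segment names are hypothetical.

```python
# Sketch: the shape of a Cloud WAN core network policy fragment that steers
# production-to-development traffic through an "inspection" network-function
# group. Field names are illustrative, not an authoritative schema.
policy_fragment = {
    "network-function-groups": [
        {"name": "inspection", "require-attachment-acceptance": True},
    ],
    "segment-actions": [
        {
            "action": "send-via",            # steer traffic via inspection
            "segment": "production",
            "mode": "single-hop",
            "when-sent-to": {"segments": ["development"]},
            "via": {"network-function-groups": ["inspection"]},
        },
    ],
}
```

The point of the model is that the inspection intent lives in one policy document, and Cloud WAN deploys the matching routing configuration consistently across Regions.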

Detect malware in new object uploads to Amazon S3 with Amazon GuardDuty

Today, Amazon Web Services (AWS) announces the general availability of Amazon GuardDuty Malware Protection for Amazon S3. This expansion of GuardDuty Malware Protection allows you to scan newly uploaded objects to Amazon S3 buckets for potential malware, viruses, and other suspicious uploads and take action to isolate them before they are ingested into downstream processes.

GuardDuty helps customers protect millions of Amazon S3 buckets and AWS accounts. GuardDuty Malware Protection for Amazon S3 is fully managed by AWS, alleviating the operational complexity and overhead that normally comes with managing a data-scanning pipeline, with compute infrastructure operated on your behalf. This feature also gives application owners more control over the security of their organization’s S3 buckets; they can enable GuardDuty Malware Protection for S3 even if core GuardDuty is not enabled in the account. Application owners are automatically notified of the scan results using Amazon EventBridge to build downstream workflows, such as isolation to a quarantine bucket, or define bucket policies using tags that prevent users or applications from accessing certain objects.
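A downstream quarantine workflow could start from an EventBridge rule like the sketch below. The detail-type string and detail fields are assumptions based on the feature name and should be verified against the GuardDuty documentation before use.

```python
# Sketch: an EventBridge event pattern that matches GuardDuty Malware
# Protection for S3 scan results. The detail-type and detail field names are
# assumptions to verify against the GuardDuty documentation.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Malware Protection Object Scan Result"],
    "detail": {
        "scanResultDetails": {"scanResultStatus": ["THREATS_FOUND"]},
    },
}
# A rule with this pattern could target a Lambda function that copies the
# flagged object into a quarantine bucket and deletes the original.
```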

AWS IAM Access Analyzer now offers recommendations to refine unused access

AWS Identity and Access Management (IAM) Access Analyzer guides customers toward least privilege by providing tools to set, verify, and refine permissions. IAM Access Analyzer now offers actionable recommendations to guide you to remediate unused access. For unused roles, access keys, and passwords, IAM Access Analyzer provides quick links in the console to help you delete them. For unused permissions, IAM Access Analyzer reviews your existing policies and recommends a refined version tailored to your access activity.

As a central security team member, you can use IAM Access Analyzer to gain visibility into unused access across your AWS organization and automate how you rightsize permissions. Security teams set up automated workflows to notify their developers about new IAM Access Analyzer findings. Now, you can include step-by-step recommendations provided by IAM Access Analyzer to notify and simplify how developers refine unused permissions. This feature is offered at no additional cost with unused access findings and is a part of the growing Cloud Infrastructure Entitlement Management capabilities at AWS. The recommendations are available in AWS Commercial Regions, excluding the AWS GovCloud (US) Regions and AWS China Regions.


AWS Private CA introduces Connector for SCEP for mobile devices (Preview)

AWS Private Certificate Authority (AWS Private CA) launches the Connector for SCEP, which lets you use a managed and secure cloud certificate authority (CA) to enroll mobile devices securely and at scale. Simple Certificate Enrollment Protocol (SCEP) is a protocol widely adopted by mobile device management (MDM) solutions for getting digital identity certificates from a CA and enrolling corporate-issued and bring-your-own-device (BYOD) mobile devices. With the Connector for SCEP, you use a managed private CA with a managed SCEP solution to reduce operational costs, simplify processes, and optimize your public key infrastructure (PKI). Additionally, the Connector for SCEP lets you use AWS Private CA with industry-leading SCEP-compatible MDM solutions including Microsoft Intune and Jamf Pro.

The Connector for SCEP is one of three connector types offered for AWS Private CA. Connectors allow you to replace your existing CAs with AWS Private CA in environments that have an established native certificate distribution solution. This means that instead of using multiple CA solutions, you can utilize a single private CA solution for your enterprise. You benefit from comprehensive support, extending to Kubernetes, Active Directory, and, now, mobile devices.

During the preview period, Connector for SCEP is available in the following AWS Regions: US East (N. Virginia).

This feature is offered at no additional charge; you pay only for the AWS Private CAs and the certificates issued from them. To get started, see the Getting started guide or go to the Connector for SCEP console.

Amazon ECS on AWS Fargate now allows you to encrypt ephemeral storage with customer-managed KMS keys

Amazon Elastic Container Service (Amazon ECS) and AWS Fargate now allow you to use customer-managed keys in AWS Key Management Service (KMS) to encrypt data stored in Fargate task ephemeral storage. Ephemeral storage for tasks running on Fargate platform version 1.4.0 or higher is encrypted with AWS-owned keys by default. This feature allows you to add a self-managed security layer, which can help you meet compliance requirements.

Customers who run applications that deal with sensitive data often need to encrypt data using self-managed keys to meet security or regulatory requirements and also provide encryption visibility to auditors. To meet these requirements you can now configure a customer-managed KMS key for your ECS cluster to encrypt the ephemeral storage for all Fargate tasks in the cluster. You can manage this key and audit access like any other KMS key. Customers can use this feature to configure encryption for new and existing ECS applications without changes from developers.
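Configuring the key at the cluster level might look like the sketch below. The cluster name and key ARN are hypothetical, and the `managedStorageConfiguration` field name is an assumption to verify against the ECS API reference; the call itself is commented out since it requires live credentials.

```python
# Sketch: configuring an ECS cluster so Fargate ephemeral storage is encrypted
# with a customer-managed KMS key. Cluster name and key ARN are hypothetical;
# the managedStorageConfiguration field name is an assumption.
cluster_config = {
    "clusterName": "payments-cluster",
    "configuration": {
        "managedStorageConfiguration": {
            "fargateEphemeralStorageKmsKeyId":
                "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        },
    },
}

# With credentials configured:
#   import boto3
#   boto3.client("ecs").create_cluster(**cluster_config)
```

Because the setting lives on the cluster, every Fargate task launched into it picks up the key with no changes to task definitions or application code.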

Amazon CloudWatch Application Signals, for application performance monitoring (APM), is generally available

Today, AWS announces the general availability of Amazon CloudWatch Application Signals, an OpenTelemetry (OTel)-compatible application performance monitoring (APM) feature in CloudWatch that makes it easy to automatically instrument and track application performance against your most important business or service level objectives (SLOs) for applications on AWS. With no manual effort, no custom code, and no custom dashboards, Application Signals provides service operators with a pre-built, standardized dashboard showing the most important metrics for application performance – volume, availability, latency, faults, and errors – for each of their applications on AWS.

By correlating telemetry across metrics, traces, logs, real-user monitoring, and synthetic monitoring, Application Signals enables customers to speed up troubleshooting and reduce application disruption. For example, an application developer operating a payment processing application can see whether payment processing latency is spiking and drill into the precisely correlated trace contributing to the spike to establish the cause in application code or a dependency. Developers who use Container Insights to monitor container infrastructure can further identify root causes, such as a memory shortage or high CPU utilization on the container pod running the application code, that are causing the spike.

Application Signals is generally available in 28 commercial AWS Regions, excluding the CA West (Calgary) Region, the AWS GovCloud (US) Regions, and the China Regions. For pricing, see Amazon CloudWatch pricing.

Try Application Signals with the AWS One Observability Workshop sample application. To learn more, see documentation to enable Amazon CloudWatch Application Signals for Amazon EKS, Amazon EC2, native Kubernetes and custom instrumentation for other platforms.
 

Amazon Security Lake is now available in the AWS GovCloud (US) Regions

Amazon Security Lake is now available in the AWS GovCloud (US) Regions. You can now centralize security data from AWS environments, SaaS providers, on premises, and cloud sources into a purpose-built data lake stored in your Amazon S3 account.

Security Lake makes it easier to analyze security data, gain a more comprehensive understanding of security across your entire organization, and improve the protection of your workloads, applications, and data. Security Lake automates the collection and management of your security data across accounts and AWS Regions so that you can use your preferred analytics tools while retaining control and ownership over your security data.

For more information about the AWS Regions where Security Lake is available, see the AWS Region table. You can start your 15-day free trial of Amazon Security Lake with a single click in the AWS Management Console.


Amazon RDS for PostgreSQL announces Extended Support minor 11.22-RDS.20240509

Amazon Relational Database Service (RDS) for PostgreSQL announces Amazon RDS Extended Support minor version 11.22-RDS.20240509. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of PostgreSQL.

Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL and PostgreSQL databases on Aurora and RDS after the community ends support for a major version. You can run your PostgreSQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide.

You can use automatic minor version upgrades to upgrade your databases to more recent minor versions automatically during scheduled maintenance windows. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Amazon CloudWatch announces AI-Powered natural language query generation

Amazon CloudWatch announces the general availability of natural language query generation powered by generative AI for Logs Insights and Metrics Insights. This feature enables you to quickly generate queries in context of your logs and metrics data using plain language. By simplifying the query generation process, you can accelerate gathering insights from your observability data without needing extensive knowledge of the query language.

Query generator simplifies your CloudWatch Logs and Metrics Insights experience through natural language querying. You can ask questions in plain English, such as "Show me the 10 slowest Lambda requests in the last 24 hours" or "Which DynamoDB table is most throttled", and it will generate the appropriate queries or refine any existing queries in the query window, and can now automatically adjust the time ranges for queries that require data within a specified period. It also provides line-by-line explanations of the generated code, helping you learn query syntax.
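For the Lambda example above, the generated query would plausibly resemble the Logs Insights sketch below, which relies on the standard REPORT lines Lambda writes to its log group.

```python
# Sketch: a Logs Insights query of the kind the generator produces for
# "Show me the 10 slowest Lambda requests in the last 24 hours".
query = "\n".join([
    'filter @type = "REPORT"',  # Lambda per-invocation report records
    "| sort @duration desc",    # slowest invocations first
    "| limit 10",
])
```

Running it against a Lambda function's log group returns the ten invocations with the highest `@duration` in the selected time range.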

This feature is now supported in US East (N. Virginia), US West (Oregon), and Asia Pacific (Tokyo).

To access the feature, click on "Query generator" in the CloudWatch Logs Insights or Metrics Insights console pages. In the help panel, select "Info" for more information. There is no charge for using Query generator. Any queries executed in Logs Insights or Metrics Insights are subject to standard CloudWatch pricing. To learn more about Query generator in CloudWatch Logs Insights or Metrics Insights, visit our getting started guide.
 

AWS CloudFormation accelerates dev-test cycle with adjustable timeouts for custom resources

AWS CloudFormation launches a new property for custom resources called ServiceTimeout. This new property allows customers to set a maximum timeout for the execution of the provisioning logic in a custom resource, enabling faster feedback loops in dev-test cycles.

CloudFormation custom resources allow customers to write their own provisioning logic in CloudFormation templates and have CloudFormation run the logic during a stack operation. Custom resources use a callback pattern where the custom resource must respond to CloudFormation within a timeout of 1 hour. Previously, this timeout value was not configurable, so code bugs in the customer's custom resource logic resulted in long wait times. With the new ServiceTimeout property, customers can set a custom timeout value, after which CloudFormation fails the execution of the custom resource. This accelerates feedback on failures, allowing for quicker dev-test iterations.
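In a template, the new property sits alongside ServiceToken on the custom resource; the sketch below uses a hypothetical resource and Lambda function name.

```yaml
# Sketch: a custom resource that fails fast after 5 minutes instead of the
# 1-hour default. The resource and function names are hypothetical.
Resources:
  DatabaseSeeder:
    Type: Custom::DatabaseSeeder
    Properties:
      ServiceToken: !GetAtt SeederFunction.Arn
      ServiceTimeout: 300   # seconds; CloudFormation fails the resource after this
```

A buggy handler that never responds now fails the stack operation in 5 minutes rather than holding the stack for an hour.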

The new ServiceTimeout property is available in all AWS Regions where AWS CloudFormation is available. Refer to the AWS Region table for details.

Refer to the custom resources documentation to learn more about the ServiceTimeout property.
 

Amazon EC2 M6in and M6idn instances are now available in Asia Pacific (Mumbai)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in the Asia Pacific (Mumbai) and Canada (Central) Regions. These sixth-generation network-optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth (2x more than comparable fifth-generation instances) and up to 2x higher packet-processing performance. Customers can use M6in and M6idn instances to scale the performance and throughput of network-intensive workloads such as high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, and telco applications such as the 5G User Plane Function.


M6in and M6idn instances are available in 10 different instance sizes including metal, with up to 128 vCPUs and 512 GiB of memory. They deliver up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth and up to 400K IOPS, the highest Amazon EBS performance across EC2 instances. M6in and M6idn instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. M6idn instances offer up to 7.6 TB of high-speed, low-latency instance storage.

With this regional expansion, M6in and M6idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Ireland, Frankfurt, Spain, Stockholm), Asia Pacific (Mumbai, Singapore, Tokyo, Sydney), Canada (Central), and AWS GovCloud (US-West). Customers can purchase the new instances through Savings Plans, Reserved Instances, On-Demand Instances, and Spot Instances. To learn more, see the M6in and M6idn instances page.

Amazon Redshift Serverless is now available in the AWS Middle East (UAE) region

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Middle East (UAE) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.

Amazon CodeCatalyst now supports Bitbucket Cloud source code repositories

Amazon CodeCatalyst now supports the use of source code repositories hosted in Bitbucket Cloud in CodeCatalyst projects. This allows customers to use Bitbucket Cloud repositories with CodeCatalyst features such as its cloud IDE (Development Environments), to view the status of CodeCatalyst workflows from within Bitbucket Cloud, and even to block Bitbucket Cloud pull request merges based on the status of CodeCatalyst workflows.

Customers want the flexibility to use source code repositories hosted in Bitbucket Cloud, without the need to migrate to CodeCatalyst to use its functionality. Migration is a long process and customers want to evaluate CodeCatalyst and its capabilities using their own code repositories before they decide to migrate. Support for popular source code providers such as Bitbucket Cloud is the top customer ask for CodeCatalyst. Now customers can use the capabilities of CodeCatalyst without the need for migration of source code from Bitbucket Cloud.

This capability is available in regions where CodeCatalyst is available. There is no change to pricing.

For more information, see the documentation or visit the Amazon CodeCatalyst website.

Amazon Data Firehose now supports integration with AWS Secrets Manager

Amazon Data Firehose (Firehose) now supports integration with AWS Secrets Manager (Secrets Manager) to configure secrets such as database credentials or keys to connect to streaming destinations such as Amazon Redshift, Snowflake, Splunk, and HTTP endpoints.

Amazon Data Firehose needs to access a secret, such as database credentials or keys, to connect to a streaming destination. With this launch, Amazon Data Firehose can retrieve the secret from Secrets Manager instead of using a plain-text secret in its configuration to connect to the destination. By using the Secrets Manager integration, you can ensure that secrets are not visible in plain text during the Firehose stream creation workflow, whether in the AWS Management Console or in API parameters. This feature provides a more secure way to store and maintain secrets in Firehose and allows you to leverage the automatic secret rotation capability provided by Secrets Manager.

Centrally manage member account root email addresses across your AWS Organization

Today, we are making it easier for AWS Organizations customers to centrally manage the root email address of member accounts across their Organization using the AWS Command Line Interface (CLI), AWS Software Development Kit (SDK), and AWS Organizations console. We previously released the Accounts SDK, which enables Organizations customers to centrally and programmatically manage both primary and alternate contact information as well as the enabled AWS Regions for their accounts. Previously, however, customers had to log in as root to manage the root email address of member accounts. Starting today, customers can use the same SDK to update the root email address of a member account from either the Organization’s management account or a delegated administrator, saving them the time and effort of logging into each account directly and allowing them to manage their Organization’s root email addresses at scale. Additionally, this API requires customers to verify the new root email address using a one-time password (OTP), ensuring that customers are using accurate email addresses for their member accounts. The root email address won’t change to the new email address until it has been verified.
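The two-step flow described above can be sketched as follows; the account ID and email address are hypothetical, and the API calls are shown as comments since they require management-account credentials and a live OTP.

```python
# Sketch: the two-step root email update for a member account via the Account
# Management API: start the update, then accept it with the one-time password
# sent to the new address. Account ID and address are hypothetical.
request = {
    "AccountId": "111122223333",
    "PrimaryEmail": "new-root@example.com",
}

# With credentials for the management account (or delegated administrator):
#   import boto3
#   account = boto3.client("account")
#   account.start_primary_email_update(**request)
#   # ...retrieve the OTP from the new mailbox, then:
#   account.accept_primary_email_update(**request, Otp="123456")
```

The address does not change until the accept step succeeds with a valid OTP, which is the verification guarantee the announcement describes.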

Amazon API Gateway customers can easily secure APIs using Amazon Verified Permissions

Amazon Verified Permissions has expanded support for securing Amazon API Gateway APIs with fine-grained access controls when using an OpenID Connect (OIDC) compliant identity provider. Developers can now control access based on user attributes and group memberships, without writing code. For example, say you are building a loan processing application. Using this feature, you can restrict access to the “approve_loan” API to only users in the “loan_officer” group.

Amazon Verified Permissions is a scalable, fine-grained authorization service for the applications that you build. Verified Permissions launched a new feature to secure API Gateway REST APIs for customers using an OIDC compliant identity provider. The feature provides a wizard for connecting Verified Permissions with API Gateway and an identity provider, and for defining permissions based on user groups. Verified Permissions automatically generates an authorization model and Cedar policies that allow only authorized user groups access to the application’s APIs. The wizard deploys a Lambda authorizer that calls Verified Permissions to validate that the API request has a valid OIDC token and is authorized. Additionally, the Lambda authorizer caches authorization decisions to reduce latency and cost.
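For the loan processing example, a generated policy might look like the Cedar snippet below. The entity type names (`MyApp::UserGroup`, `MyApp::Action`) and the action naming convention are illustrative; the wizard derives the actual schema from your API and identity provider.

```python
# An illustrative Cedar policy of the kind the wizard generates: only members
# of the "loan_officer" group may invoke the approve_loan API. Entity type
# names are hypothetical placeholders.
cedar_policy = """
permit(
  principal in MyApp::UserGroup::"loan_officer",
  action == MyApp::Action::"post /approve_loan",
  resource
);
"""
print(cedar_policy.strip())
```

At request time, the Lambda authorizer maps the caller's OIDC token claims (including group membership) to a Cedar principal and asks Verified Permissions to evaluate policies like this one.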

To get started, visit the Verified Permissions console, and create a policy store by selecting “Import using API Gateway and Identity Provider”. We have partnered with leading identity providers, CyberArk, Okta, and Transmit Security, to test this feature and ensure a smooth experience. This feature is available in all AWS Regions where Verified Permissions is available. For more information, visit the product page.
 

Amazon FSx for Lustre increases maximum metadata IOPS by 15x

Amazon FSx for Lustre, a service that provides high-performance, cost-effective, and scalable file storage for compute workloads, is increasing the maximum level of metadata IO operations per second (IOPS) you can drive on a file system by 15x, and now allows you to provision metadata IOPS independently of your file system’s storage capacity.

A file system’s level of metadata IOPS determines the number of files and directories that you can create, list, read, and delete per second. By default, the metadata IOPS of an FSx for Lustre file system scales with its storage capacity. Starting today, you can provision 15x higher metadata performance per file system—independently of your file system’s storage capacity—allowing you to scale to even higher levels of performance, accelerate time-to-results, and optimize your storage costs for metadata-intensive machine learning research and high-performance computing (HPC) workloads. You can also update your file system’s metadata IOPS level with the click of a button, allowing you to quickly increase performance as your workloads scale.
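As a sketch, switching a file system to user-provisioned metadata IOPS might look like the update request below. The field names (`MetadataConfiguration`, `Mode`, `Iops`) reflect our reading of the FSx API and should be checked against the API reference; the file system ID and IOPS value are placeholders.

```python
# Raise provisioned metadata IOPS independently of storage capacity by
# switching from the default automatic mode to user-provisioned mode.
update_request = {
    "FileSystemId": "fs-0123456789abcdef0",   # placeholder ID
    "LustreConfiguration": {
        "MetadataConfiguration": {
            "Mode": "USER_PROVISIONED",
            "Iops": 12000,                    # desired metadata IOPS
        }
    },
}
# fsx.update_file_system(**update_request)
```

In the default automatic mode, metadata IOPS continue to scale with storage capacity as before.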

AWS AppFabric now supports JumpCloud

AWS AppFabric, a no-code service that quickly integrates with software-as-a-service (SaaS) applications to enhance an organization’s security posture, now supports JumpCloud. AppFabric provides aggregated and normalized audit logs from popular SaaS applications like Slack, Zoom, Salesforce, Atlassian Jira suite, Google Workspace, and Microsoft 365. By centralizing SaaS application data, AppFabric helps teams gain greater visibility into vulnerabilities in a customer's SaaS environment, enabling them to monitor threats more effectively and respond to incidents faster. IT and security teams no longer need to manage point-to-point SaaS integrations that take time away from higher value tasks, like standardizing alerts or setting common security policies.

AppFabric's support for JumpCloud means that customers can now seamlessly ingest JumpCloud log data alongside data from over 35 other supported applications.

Amazon EC2 C6id instances are now available in South America (São Paulo) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6id instances are available in the South America (São Paulo) Region. These instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. C6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage for compute-intensive workloads, such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

Amazon Inspector container image scanning is now available for Amazon CodeCatalyst and GitHub Actions

Amazon Inspector now offers native integration with Amazon CodeCatalyst and GitHub Actions for container image scanning, allowing customers to assess their container images for software vulnerabilities within their Continuous Integration and Continuous Delivery (CI/CD) tools, pushing security earlier in the software development lifecycle. With this expansion, Inspector now natively integrates with four developer tools for container image scanning: Jenkins, TeamCity, GitHub Actions, and Amazon CodeCatalyst. This feature works with CI/CD tools hosted anywhere in AWS, as well as in on-premises environments and hybrid clouds, providing consistency for developers to use a single solution across all their development pipelines.

Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS Organization. Customers can also use Amazon Inspector to scan container images and other archives, such as zip and TAR, for software vulnerabilities directly from local developer laptops and machines. To learn more about scanning container images hosted anywhere, click here.
 

Announcing the common control library in AWS Audit Manager

AWS Audit Manager has introduced a common control library that simplifies the process of automating risk and compliance assessments against enterprise controls. This new library enables Governance, Risk, and Compliance (GRC) teams to efficiently map their controls into Audit Manager for evidence collection.

The new common control library provides pre-defined and pre-mapped AWS data sources, eliminating the need to identify which AWS resources to assess for various controls. It defines AWS-managed common controls based on extensive mapping and reviews by AWS certified auditors, determining the appropriate data sources for evidence collection. With this launch, Audit Manager will also deliver more evidence mappings for controls, including 140 newly supported API calls for additional evidence. You can customize and update all evidence mappings as appropriate for your objectives.

The library also reduces the need to implement different compliance standard requirements individually and review data multiple times across different compliance regimes. It identifies common requirements across controls, helping customers understand their audit readiness across multiple frameworks simultaneously.

As AWS Audit Manager updates or adds data sources (e.g., additional CloudTrail events or API calls, or newly launched AWS Config rules) or maps additional compliance frameworks to the common controls, customers automatically inherit these improvements. This removes the need for constant updating and provides the benefit of additional compliance frameworks added to the Audit Manager library.

AWS launches Tax Settings API to programmatically manage tax registration information

Today, AWS launches the AWS Tax Settings API, a new public API that enables customers to programmatically view, set, and modify tax registration information and the associated business legal name and address. This launch allows you to automate tax registration updates as an enhancement to the AWS Tax Settings page.

Previously, customers could only update tax registration information from the Tax Settings page in the AWS Billing Console. Now, the API enables customers to automate setting their tax information when creating accounts in bulk, instead of setting tax registration information for each account manually. This programmatic support allows customers to build automation around setting and modifying tax registration information. Customers creating accounts using the AWS Account Creation API and other AWS services can now fully automate their account creation process by integrating the tax registration workflow into their overall programmatic account creation process. For further details, visit here.

Amazon OpenSearch Ingestion now supports ingesting streaming data from Amazon MSK Serverless

Amazon OpenSearch Ingestion now allows you to ingest streaming data from Amazon Managed Streaming for Apache Kafka (MSK) Serverless, enabling you to seamlessly index the data from Amazon MSK Serverless clusters in Amazon OpenSearch Service managed clusters or Serverless collections without the need for any third-party data connectors. With this integration, you can now use Amazon OpenSearch Ingestion to perform near-real-time aggregations, sampling, and anomaly detection on data ingested from Amazon MSK Serverless, helping you to build efficient data pipelines to power your complex observability and analytics use cases.

Amazon OpenSearch Ingestion pipelines can consume data from one or more topics in an Amazon MSK Serverless cluster and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. While reading data from Amazon MSK Serverless via Amazon OpenSearch Ingestion, you can configure the number of consumers per topic and tune different fetch parameters for high- and low-priority data. Furthermore, you can optionally use AWS Glue Schema Registry to specify your data schema and dynamically read custom data schemas at ingest time.

Amazon Location Service launches Enhanced Location Integrity features

Amazon Location Service launches enhanced location integrity features, which offer tools to help developers evaluate the accuracy and authenticity of user-reported locations. With enhanced location integrity features, customers can now use predictive tools that anticipate user movements into or out of customer-specified areas, using criteria like time-to-breach and proximity to enhance monitoring and security measures. For instance, a retailer can utilize improved location integrity features to gauge the proximity of a curbside pickup user and optimize operations for a superior customer experience.

Customers can also use new validation capabilities to help confirm user locations by triangulating WiFi, cellular signals, and IP address information. This is critical for detecting and preventing location spoofing. Lastly, Amazon Location Service now also supports detailed geofences, allowing for the management of complex areas like state boundaries. These improvements provide stronger and more accurate location tracking capabilities, enabling more stringent protocols for location integrity.

Amazon Location Service is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), South America (São Paulo), and the AWS GovCloud (US-West) Region.

To learn more, visit the Amazon Location Service Developer Guide.
 

Amazon CloudWatch GetMetricData API now supports AWS CloudTrail data event logging

Amazon CloudWatch now supports AWS CloudTrail data event logging for the GetMetricData and GetMetricWidgetImage APIs. With this launch, customers have greater visibility into metric retrieval activity in their AWS account, supporting security best practices and operational troubleshooting.

CloudTrail captures API activities related to Amazon CloudWatch GetMetricData and GetMetricWidgetImage APIs as events. Using the information that CloudTrail collects, you can identify a specific request to CloudWatch GetMetricData or GetMetricWidgetImage APIs, the IP address of the requester, the requester's identity, and the date and time of the request. Logging CloudWatch GetMetricData and GetMetricWidgetImage APIs using CloudTrail helps you enable operational and risk auditing, governance, and compliance of your AWS account.

AWS CloudTrail logging for the GetMetricData and GetMetricWidgetImage API actions is available now in all AWS commercial Regions.

Data event logging incurs charges according to AWS CloudTrail pricing. To learn more about this feature, visit the Amazon CloudWatch documentation page. To enable logging for Amazon CloudWatch metrics data events using the AWS CloudTrail Management Console or the AWS CloudTrail Command Line Interface (CLI), specify CloudWatch metric as the data event type, then choose the APIs that you want to monitor.
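The same configuration can be expressed programmatically with CloudTrail advanced event selectors. The sketch below shows our reading of the selector shape; in particular the `resources.type` value (`AWS::CloudWatch::Metric`) is an assumption that should be confirmed against the data event types listed in the CloudTrail console.

```python
# Advanced event selectors that log only the two CloudWatch metric-retrieval
# APIs as data events. Trail name and resources.type value are assumptions.
advanced_event_selectors = [
    {
        "Name": "Log CloudWatch metric data events",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::CloudWatch::Metric"]},
            {"Field": "eventName", "Equals": ["GetMetricData", "GetMetricWidgetImage"]},
        ],
    }
]
# cloudtrail.put_event_selectors(
#     TrailName="my-trail",
#     AdvancedEventSelectors=advanced_event_selectors)
```

Narrowing the `eventName` selector keeps logging costs down by excluding other data events from the trail.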

Amazon EC2 instance type finder capability is generally available in AWS Console

Today, Amazon Web Services announced the general availability of Amazon EC2 instance type finder, which helps you select the ideal Amazon EC2 instance types for your workload. It uses machine learning to help customers make quick and cost-effective instance type selections before provisioning workloads. Using the AWS Management Console, customers can specify their workload requirements and get trusted recommendations. Amazon EC2 instance type finder is integrated with Amazon Q, allowing customers to use natural language to specify requirements and get instance family suggestions.

EC2 has more than 750 instance types and EC2 instance type finder enables customers to easily choose the best option for their workload requirements. It helps customers stay up to date with the latest instance types and allows them to optimize price-performance for their workloads. By using the EC2 instance type finder in Amazon Q and other console experiences, customers can make informed decisions on the best instance types for their workloads, thereby speeding up their AWS development.

Customers can get instance family suggestions while in an activity, such as launching an instance. EC2 instance type finder is available in all commercial AWS Regions (learn more here). The Amazon Q experience is available everywhere builders need it. You can find Amazon Q in the AWS Management Console, documentation, AWS website, your IDE through Amazon CodeWhisperer, or through AWS Chatbot in team chat rooms on Slack or Microsoft Teams. For Regional availability of specific Amazon Q in AWS capabilities, visit the Amazon Q FAQs page.
 

AWS IoT Device Management adds a unified connectivity metrics monitoring dashboard

Today, AWS IoT Device Management announced the launch of a new connectivity metrics dashboard, enabling customers to easily identify connectivity patterns and configure operational alarms for their device fleet through a unified view. AWS IoT Device Management is a fully managed cloud service that helps you register, organize, monitor, and remotely manage Internet of Things (IoT) devices at scale. With this launch, you can now select and view a range of connectivity metrics sourced from AWS IoT Core and AWS IoT Device Management on a single page.

The connectivity metrics dashboard consolidates frequently used metrics from AWS IoT Core, such as successful connections, inbound/outbound messages published, and connection request authorization failures. Additionally, you can use the guided workflow to enable AWS IoT Device Management’s Fleet Indexing feature and add widgets for connected device counts, percentage of devices disconnected, and disconnect reasons to the same page. Using the unified dashboard, you can quickly identify potential connectivity and operational problems to reduce the time associated with fleet troubleshooting procedures.

To get started with the connectivity metrics dashboard, visit the ‘Monitor’ tab in the AWS IoT console and then select the new ‘Connectivity metrics’ page.

To learn more, visit the AWS IoT Device Management developer guide.

Amazon SageMaker Model Registry now supports machine learning (ML) governance information

Amazon SageMaker now integrates Model Cards into Model Registry, making it easier for customers to manage governance information for specific model versions directly in Model Registry in just a few clicks.

Today, customers register ML models in Model Registry to manage their models. Now, with this launch, they can register ML model versions early in the development lifecycle, including essential business details and technical metadata. This integration allows customers to seamlessly review and govern models across their lifecycle from a single place. By enhancing the discoverability of model governance information, this update offers customers greater visibility into the model lifecycle from experimentation and training to evaluation and deployment. This streamlined experience ensures that model governance is consistent and easily accessible throughout the development process.

This new capability is now available in all AWS Regions where SageMaker is available, except the AWS GovCloud (US) Regions.

To get started, see the SageMaker Model Registry developer guide for additional information.

Amazon CodeCatalyst now supports GitHub Cloud source code with blueprints

Amazon CodeCatalyst now supports the use of source code repositories hosted in GitHub Cloud with its blueprints capability. This allows customers to create a project from a CodeCatalyst blueprint into a GitHub Cloud source repository and add a blueprint into an existing project's GitHub Cloud source repository. It also enables customers to create custom blueprints in a GitHub Cloud repository.

Customers can use CodeCatalyst blueprints to create a project with a source repository and sample source code, CI/CD workflows, build and test reports, and integrated issue tracking tools. As a blueprint is updated with the latest best practices or new options, it can regenerate the relevant parts of your codebase in projects containing that blueprint. CodeCatalyst also allows IT leaders to build custom well-architected blueprints for their developer teams, specifying the technology to be used, controlling access to project resources, setting deployment locations, and defining testing and building methods. These capabilities were previously available only for source code repositories hosted in CodeCatalyst. Customers wanted the flexibility to use blueprints with source code repositories hosted in GitHub Cloud. With this launch, customers now get the same value from CodeCatalyst blueprints with GitHub Cloud hosted repositories.

Amazon OpenSearch Serverless slashes entry cost in half for all collection types

We are excited to offer a new, lower entry point for Amazon OpenSearch Serverless, which makes it affordable to run small-scale search and analytics workloads. OpenSearch Serverless compute capacity for indexing and searching data is measured in OpenSearch Compute Units (OCUs). Prior to this update, highly available production deployments required a minimum of 4 OCUs, with redundancy for protection against Availability Zone outages and infrastructure failures.

With the introduction of fractional 0.5 OCU, OpenSearch Serverless can be deployed starting at just 2 OCUs for production workloads. This includes 1 OCU for primary and standby indexing nodes at 0.5 OCU each, and 1 OCU total for search across two 0.5 OCU active replica nodes in separate Availability Zones. OpenSearch Serverless will automatically scale up the OCUs based on workload demand. Additionally, for dev/test workloads that don't require high availability, OpenSearch Serverless offers a 1 OCU deployment option, further cutting costs in half, with 0.5 OCU for indexing and 0.5 OCU for search.
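The minimum deployments described above, worked through in numbers:

```python
# High-availability production minimum with fractional 0.5 OCUs:
indexing_ocus = 0.5 * 2   # primary + standby indexing nodes
search_ocus = 0.5 * 2     # two active search replicas in separate AZs
total_ha = indexing_ocus + search_ocus
print(total_ha)           # 2.0 OCUs, down from the previous 4-OCU minimum

# Dev/test minimum without high availability:
total_dev = 0.5 + 0.5     # one 0.5-OCU slice each for indexing and search
print(total_dev)          # 1.0 OCU
```

Beyond these floors, OpenSearch Serverless scales OCUs up automatically as workload demand grows.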

Amazon Connect now provides time zone support for forecasts

Amazon Connect now provides time zone support for forecasts, making it easier for contact center managers to analyze future demand. With this launch, you can now generate, view, and download forecasts for the time zone in which your business operates. This feature will also automatically adjust forecasts to account for daylight saving changes (e.g., if a contact center receives contacts from 8am-8pm US Eastern time, then forecasts will automatically switch from 8am-8pm Eastern Daylight Time (EDT) to 8am-8pm Eastern Standard Time (EST) on November 3, 2024). Time zone support in forecasts simplifies the day-to-day experience for managers.

Amazon Aurora MySQL 3.07 (compatible with MySQL 8.0.36) is generally available

Starting today, Amazon Aurora MySQL 3.07 (with MySQL 8.0 compatibility) supports MySQL 8.0.36. In addition to the security enhancements and bug fixes in MySQL 8.0.36, Amazon Aurora MySQL 3.07 includes several fixes and general improvements. For more details, refer to the Aurora MySQL 3 release notes and the MySQL 8.0.36 release notes.

To upgrade, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option to allow automatic upgrades during an upcoming maintenance window. This release is available in all AWS Regions where Aurora MySQL is available.

Amazon EC2 C6id instances are now available in Canada (Central) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6id instances are available in the Canada (Central) Region. These instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. C6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage for compute-intensive workloads, such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

These instances are generally available today in the US (Ohio, N. Virginia, Oregon), Canada (Calgary, Central), Asia Pacific (Tokyo, Sydney, Seoul, Singapore), Europe (Ireland, Frankfurt, London), Israel (Tel Aviv), and AWS GovCloud (US-West) Regions.

Customers can purchase the new instances via Savings Plans, Reserved Instances, On-Demand Instances, and Spot Instances. To learn more, see Amazon EC2 C6id instances. To get started, visit the AWS Command Line Interface (CLI) and AWS SDKs.

AWS HealthImaging now publishes events to Amazon EventBridge

AWS HealthImaging now supports event-driven architectures by sending event notifications to Amazon EventBridge. By subscribing to HealthImaging events in EventBridge, you can automatically kick-off application workflows such as image quality assessment or de-identification based upon changes to resources in the data store. With EventBridge, developers can take advantage of a serverless event bus to easily connect and route events between many AWS services and third-party applications. Developers working with HealthImaging can now receive state changes for asynchronous tasks, such as DICOM import jobs and image set copy and update operations. Events are delivered to EventBridge in near real-time, and developers can write simple rules to listen for specific events.

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership.

AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).

To learn more, visit AWS HealthImaging.
 

Introducing Amazon EMR Serverless Streaming jobs for continuous processing on streaming data

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. We are excited to announce a new streaming job mode on Amazon EMR Serverless, enabling you to continuously analyze and process streaming data.

Streaming has become vital for businesses to gain continuous insights from data sources like sensors, IoT devices, and web logs. However, processing streaming data can be challenging due to requirements such as high availability, resilience to failures, and integration with streaming services. Amazon EMR Serverless Streaming jobs have built-in features to address these challenges. They offer high availability through multi-AZ (Availability Zone) resiliency by automatically failing over to healthy AZs. They also offer increased resiliency through automatic job retries on failures and log management features like log rotation and compaction, preventing the accumulation of log files that might lead to job failures. In addition, Amazon EMR Serverless Streaming jobs support processing data from streaming services like self-managed Apache Kafka clusters and Amazon Managed Streaming for Apache Kafka, and are now integrated with Amazon Kinesis Data Streams using a new built-in Amazon Kinesis Data Streams connector, making it easier to build end-to-end streaming pipelines.
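A rough sketch of launching a Spark job in the new streaming mode via the StartJobRun API. The `mode` and `retryPolicy` parameter names reflect our reading of the EMR Serverless API and should be verified against the API reference; the application ID, role ARN, and script location are placeholders.

```python
# StartJobRun request payload for a continuously running streaming job.
start_job_run = {
    "applicationId": "00f1abcdexample",   # placeholder application ID
    "executionRoleArn": "arn:aws:iam::111122223333:role/EMRServerlessJobRole",
    "mode": "STREAMING",                  # new mode, vs. the default "BATCH"
    "jobDriver": {
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/streaming_job.py",  # placeholder
        }
    },
    # Automatic retries keep the streaming job running through failures.
    "retryPolicy": {"maxFailedAttemptsPerHour": 5},
}
# emr_serverless.start_job_run(**start_job_run)
```

The entry-point script itself would be a standard Structured Streaming application reading from Kafka, Amazon MSK, or Kinesis Data Streams via the built-in connector.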

Amazon API Gateway integration timeout limit increase beyond 29 seconds

Amazon API Gateway now enables customers to increase their integration timeout beyond the prior limit of 29 seconds. This setting represents the maximum amount of time API Gateway will wait for a response from the integration to complete. You can raise the integration timeout to greater than 29 seconds for Regional REST APIs and private REST APIs, but this might require a reduction in your account-level throttle quota limit. With this launch, customers with workloads requiring longer timeouts, such as Generative AI use cases with Large Language Models (LLMs), can leverage API Gateway.
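For REST APIs, the integration timeout is the `timeoutInMillis` property on the integration, raised with a patch operation like the sketch below. The API, resource, and method identifiers are placeholders.

```python
# Raise a REST API integration timeout to 120 seconds. Values above the old
# 29,000 ms limit may require lowering the account-level throttle quota.
patch_request = {
    "restApiId": "a1b2c3d4e5",   # placeholder API ID
    "resourceId": "abc123",      # placeholder resource ID
    "httpMethod": "POST",
    "patchOperations": [
        {"op": "replace", "path": "/timeoutInMillis", "value": "120000"}
    ],
}
# apigateway.update_integration(**patch_request)
```

The equivalent AWS CLI call is `aws apigateway update-integration` with the same patch operation.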

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.

Amazon Route 53 Profiles now available in the AWS GovCloud (US) Regions

Starting today, you can enable Route 53 Profiles in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions to define a standard DNS configuration, in the form of a Profile, that may include Route 53 private hosted zone (PHZ) associations, Route 53 Resolver rules, and Route 53 Resolver DNS Firewall rule groups, and apply this configuration to multiple VPCs in your account. Profiles can also be used to enforce DNS settings for your VPCs, with configurations for DNSSEC validations, Resolver reverse DNS lookups, and the DNS Firewall failure mode. You can share Profiles with AWS accounts in your organization using AWS Resource Access Manager (RAM). Route 53 Profiles simplifies the association of Route 53 resources and VPC-level settings for DNS across VPCs and AWS accounts in a Region with a single configuration, minimizing the complexity of having to manage each resource association and setting per VPC.

Amazon Timestream for LiveAnalytics now an Amazon EventBridge Pipes target

Amazon Timestream for LiveAnalytics is now an Amazon EventBridge Pipes target, simplifying the ingestion of time-series data from sources such as Amazon Kinesis, Amazon DynamoDB, Amazon SQS, and more. Pipes provides a fully managed experience, enabling you to easily ingest time-series data into Timestream for LiveAnalytics without the need to write undifferentiated integration code.

Amazon Timestream for LiveAnalytics is a fast, scalable, purpose-built time series database that makes it easy to store and analyze trillions of time series data points per day. Amazon EventBridge Pipes provides a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers. Now, with a few clicks, you can connect your applications generating time-series data to Timestream using Pipes, enabling you to monitor your applications in real time and quickly identify trends and patterns. You can now ingest time-series data from diverse sources using EventBridge Pipes, making it easier to derive advanced insights.
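As a rough sketch, a pipe from a Kinesis stream to a Timestream table might be created with a payload like the one below. The field names under `TimestreamParameters` are our best reading of the CreatePipe API and should be checked against the API reference; all ARNs, names, and JSONPath expressions are placeholders.

```python
# CreatePipe request payload: Kinesis source -> Timestream target.
create_pipe = {
    "Name": "kinesis-to-timestream",
    "RoleArn": "arn:aws:iam::111122223333:role/PipeRole",
    "Source": "arn:aws:kinesis:us-east-1:111122223333:stream/device-metrics",
    "SourceParameters": {
        "KinesisStreamParameters": {"StartingPosition": "LATEST"}
    },
    "Target": "arn:aws:timestream:us-east-1:111122223333:database/iot/table/metrics",
    "TargetParameters": {
        "TimestreamParameters": {
            "TimeValue": "$.data.timestamp",   # JSONPath into the event
            "VersionValue": "1",
            "DimensionMappings": [
                {
                    "DimensionName": "device_id",
                    "DimensionValue": "$.data.deviceId",
                    "DimensionValueType": "VARCHAR",
                }
            ],
        }
    },
}
# pipes.create_pipe(**create_pipe)
```

Pipes can also apply filtering and enrichment steps between the source and the Timestream target, which is where any per-record transformation would go.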
 

AWS DMS now supports Babelfish for Aurora PostgreSQL as a source

AWS Database Migration Service (AWS DMS) now supports Babelfish for Aurora PostgreSQL as a source by enhancing its existing PostgreSQL endpoint to handle Babelfish data types. Babelfish is a feature of Amazon Aurora PostgreSQL-Compatible Edition that enables Aurora to understand commands from applications written for Microsoft SQL Server.

AWS DMS supports both Full Load and Change Data Capture (CDC) migration modes for Babelfish. Full Load migration copies all of the data from the source database and CDC copies only the data that has changed since the last migration.

To migrate your data from Babelfish, you can use the AWS DMS console, AWS CLI, or AWS SDKs. To learn more, refer to using Babelfish for Aurora PostgreSQL as a source for AWS DMS.
 

Amazon Q offers inline completions in the command line

Today, Amazon Q Developer launches AI-powered inline completions in the command line. As developers type in their command line, Q Developer will provide real-time AI-generated code suggestions. For instance, if a developer types `git`, Q Developer might suggest `push origin main`. Developers can accept the suggestion by simply pressing the right arrow.

To generate accurate suggestions, Q Developer looks at your current shell context and your recent shell history. You can learn more about how Q Developer manages your data here.

Amazon Connect agent workspace launches refreshed look and feel

The Amazon Connect agent workspace now features an updated user interface to improve productivity and focus for your agents. The new user interface is designed to be more intuitive, highly responsive, and increase visual consistency across capabilities, providing your agents with a streamlined user experience. With this launch, you can also easily build and embed third-party applications that have a consistent look and feel with the agent workspace by using Cloudscape Design System components.

Amazon Titan Text Embeddings V2 now available for use with Bedrock Knowledge Bases

Amazon Titan Text Embeddings V2, a new embeddings model in the Amazon Titan family of models, is now available for use with Knowledge Bases for Amazon Bedrock. Using Titan Text Embeddings V2, customers can embed their data into a vector database and use it to retrieve relevant information for tasks such as question answering, classification, or personalized recommendations.

Amazon Titan Text Embeddings V2 is optimized for retrieval-augmented generation (RAG) and is an efficient model ideal for high-accuracy retrieval tasks at different dimensions. The model supports flexible embedding sizes (1,024, 512, and 256) and maintains accuracy at smaller dimension sizes, helping to reduce storage costs without compromising on accuracy. When reducing from 1,024 to 512 dimensions, Titan Text Embeddings V2 retains approximately 99% retrieval accuracy, and when reducing from 1,024 to 256 dimensions, the model maintains 97% accuracy. Additionally, Titan Text Embeddings V2 includes multilingual support for 100+ languages in pre-training, as well as unit vector normalization for improving the accuracy of measuring vector similarity.
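When calling the model directly (outside Knowledge Bases), the embedding size and normalization are chosen per request. The request body below reflects our reading of the Bedrock documentation for this model; the model ID and field names should be verified against the current Bedrock API reference.

```python
import json

# InvokeModel request body for Titan Text Embeddings V2, selecting a
# smaller 512-dimension embedding with unit-vector normalization.
body = json.dumps({
    "inputText": "What is the return policy for online orders?",
    "dimensions": 512,      # supported sizes: 1024, 512, 256
    "normalize": True,
})
# bedrock_runtime.invoke_model(
#     modelId="amazon.titan-embed-text-v2:0", body=body)
```

The 512- and 256-dimension options trade a small amount of retrieval accuracy (as quantified above) for roughly 2x and 4x smaller vector storage.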

AWS Backup now supports Amazon Elastic Block Store (EBS) Snapshots Archive in the AWS GovCloud (US) Regions

Today, AWS Backup announces support for EBS Snapshots Archive in the AWS GovCloud (US) Regions, allowing customers to automatically move EBS snapshots created by AWS Backup to EBS Snapshots Archive. EBS Snapshots Archive is a low-cost, long-term storage tier for rarely accessed snapshots that do not need frequent or fast retrieval, allowing you to save up to 75% on storage costs.

You can now use AWS Backup to transition your EBS Snapshots to EBS Snapshots Archive and manage their lifecycle, alongside AWS Backup’s other supported resources in the AWS GovCloud (US) Regions. EBS Snapshots are incremental, storing only the changes since the last snapshot and making them cost effective for daily and weekly backups that need to be accessed frequently. You may also have EBS snapshots that you only need to access every few months or years, retaining them for long-term regulatory requirements. For these long-term snapshots, you can now transition your EBS snapshots managed by AWS Backup to EBS Snapshots Archive Tier to store full, point-in-time snapshots at lower storage costs.
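A backup-plan rule that tiers snapshots into the archive tier might look like the following sketch. The field names follow the AWS Backup CreateBackupPlan API; the rule name, vault, schedule, and retention values are illustrative assumptions (archived EBS snapshots require at least 90 days of retention after the transition).

```python
# Backup-plan rule sketch: monthly EBS backups transitioned to cold storage.
backup_rule = {
    "RuleName": "monthly-ebs-archive",        # assumption: example rule name
    "TargetBackupVaultName": "Default",
    "ScheduleExpression": "cron(0 5 1 * ? *)",  # 05:00 UTC on the 1st of each month
    "Lifecycle": {
        "MoveToColdStorageAfterDays": 30,     # transition to the archive tier
        "DeleteAfterDays": 365,               # must allow >= 90 days in archive
        "OptInToArchiveForSupportedResources": True,  # enables EBS Snapshots Archive
    },
}
# Passed to boto3 as part of:
#   backup.create_backup_plan(BackupPlan={"BackupPlanName": "...", "Rules": [backup_rule]})
```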

Amazon CloudWatch Logs announces Live Tail streaming CLI support

We are excited to announce streaming CLI support for Amazon CloudWatch Logs Live Tail, making it possible to view, search, and filter relevant log events in real time. You can now view your logs interactively in real time as they are ingested via the AWS CLI, or consume them programmatically within your own custom dashboards inside or outside of AWS.

In CloudWatch Logs, the Live Tail console has provided customers a rich out-of-the-box experience for viewing and detecting issues in their incoming logs, along with fine-grained controls to filter and highlight log events of interest while investigating deployments or incidents. By using the streaming CLI for Live Tail, you can now get a similar experience from the AWS CLI or integrate the same capabilities into your own custom dashboards.
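Programmatically, the same capability is exposed through the StartLiveTail API. The sketch below builds the request parameters as they would be passed to a boto3 CloudWatch Logs client; the log group ARN and filter pattern are placeholders (assumptions), and the streaming call itself is shown only in comments since it requires credentials.

```python
# Parameters for the CloudWatch Logs StartLiveTail API (boto3 start_live_tail).
live_tail_kwargs = {
    "logGroupIdentifiers": [
        # Assumption: example log group ARN.
        "arn:aws:logs:us-east-1:111122223333:log-group:/my/app/logs",
    ],
    "logEventFilterPattern": "ERROR",  # stream only events matching this pattern
}
# stream = logs.start_live_tail(**live_tail_kwargs)["responseStream"]
# for event in stream:
#     # "sessionUpdate" entries carry batches of newly ingested log events.
#     ...
```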

AWS Elastic Beanstalk now supports .NET 8 on AL2023

AWS Elastic Beanstalk now supports .NET 8 on AL2023 Elastic Beanstalk environments. Elastic Beanstalk .NET 8 on AL2023 environments come with .NET 8.0 installed by default. See Release Notes for additional details.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. The .NET 8 on AL2023 runtime adds security improvements, such as support for the SHA-3 hashing algorithm, along with other updates including enhanced dynamic profile-guided optimization (PGO) that can improve runtime performance, and better garbage collection with the ability to adjust the memory limit on the fly. You can create Elastic Beanstalk environments running .NET 8 on AL2023 using any of the Elastic Beanstalk interfaces, such as the Elastic Beanstalk Console, Elastic Beanstalk CLI, Elastic Beanstalk API, and the AWS Toolkit for Visual Studio.

AWS Batch introduces the Job Queue Snapshot to view jobs at the front of the job queues

AWS Batch now offers the Job Queue Snapshot feature, enabling you to observe the jobs at the front of your queues. This feature adds visibility into the existing AWS Batch Fair Share Scheduling capabilities: the Job Queue Snapshot displays the jobs at the front of your job queues to assist administrators.

Job Queue Snapshot addresses the needs of customers using AWS Batch and leveraging Fair Share Scheduling to balance workloads within the same organization. By gaining visibility into the jobs at the front of their queues, you can quickly identify and resolve issues that may be impacting workload progress, helping you to meet Service Level Agreements (SLAs) and minimize disruptions to your end-users.

The Job Queue Snapshot feature is available to all AWS Batch customers today and across all AWS Regions where AWS Batch is offered. Customers can access the snapshot through the AWS Batch console or by using the GetJobQueueSnapshot API via the AWS Command Line Interface (AWS CLI).

To learn more about Job Queue Snapshot and how to leverage it for your batch computing workloads, visit Viewing job queue status in the AWS Batch User Guide.

AWS CloudFormation Hooks is now available in the AWS GovCloud (US) Regions

AWS CloudFormation Hooks is now generally available in the AWS GovCloud (US) Regions. With this launch, customers can deploy Hooks in these newly supported AWS Regions to help keep resources secure and compliant.

With CloudFormation Hooks, you can invoke custom logic to automate actions or inspect resource configurations prior to a create, update, or delete CloudFormation stack operation. Today’s launch extends this capability to GovCloud customers and partners to help keep resources secure and compliant.

With this launch, CloudFormation Hooks is available in 31 AWS regions globally: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central, Calgary), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo, Hyderabad, Melbourne), Europe (Ireland, Frankfurt, Zurich, London, Paris, Stockholm, Milan), Middle East (UAE, Bahrain), South America (São Paulo), Africa (Cape Town), and the AWS GovCloud (US-East, US-West) Regions.

To get started, you can explore sample hooks published to the CloudFormation Public Registry or author Hooks using the CloudFormation CLI and publish them to your CloudFormation Private Registry. To learn more, check out the AWS News Blog post, refer to the User Guide and API reference. You can also learn more by following the AWS CloudFormation Hooks workshop.
 

AWS Transfer Family increases message size and throughput limits for AS2

AWS Transfer Family support for the Applicability Statement 2 (AS2) protocol has increased its default message size limit from 50 MB to 1 GB and its throughput limit from 30 to 100 message transfers per second. You will find these increased limits reflected on the AWS Transfer Family page within the Service Quotas console. These increased limits enable you to reliably connect with trading partners that frequently transmit sizable batches of AS2 messages.

The increased message size and throughput limits for AS2 are available in all AWS Regions where the service is available. To learn more about the AS2 quotas and limitations, visit the documentation. To get started with Transfer Family’s AS2 capabilities, take the self-paced workshop or deploy the AS2 CloudFormation template.
 

Announcing preview of the AWS SDK for SAP ABAP - BTP edition

The AWS SDK for SAP ABAP – BTP edition is now available in preview, making it easier for SAP Business Technology Platform (BTP) users to connect to AWS services, including the latest generative AI capabilities. With this new edition, SAP customers can develop and run powerful SAP extensions and standalone applications in SAP BTP that use AWS services.

These capabilities help SAP customers innovate faster while keeping their ERP core clean, including customers using SAP’s RISE and GROW offerings, or self-managed deployments on AWS or other cloud providers. Whether seeking to streamline invoice generation with Amazon Bedrock, improve sales forecasts with Amazon Forecast, or enable predictive maintenance with AWS IoT Core, the AWS SDK for SAP ABAP – BTP edition simplifies access to AWS services from within the SAP BTP ABAP Environment.

Amazon Connect now supports Apple Messages for Business

Amazon Connect Chat now supports Apple Messages for Business, enabling you to deliver personalized customer experiences on Apple Messages, the default messaging application on all iOS devices, increasing customer satisfaction and reducing costs. Rich messaging features such as link previews, quick replies, forms, attachments, customer authentication, iMessage apps, and Apple Pay allow customers to browse product recommendations, check shipments, schedule appointments, or make a payment.

Amazon Connect’s integration with Apple Messages for Business makes it easy for your customers to chat with you anytime they tap your registered phone number on an Apple device, reducing call volumes and operational costs by deflecting calls to chats. Apple Messages for Business chats use the same generative AI-powered chatbots, routing, configuration, analytics, and agent experience as calls, chats, tasks, and web calling in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

AWS Supply Chain Lead Time Insights enhances the support for data variability

Vendor Lead Time (VLT) Insights increases lead time deviation awareness, focusing on critical factors such as the vendor’s transportation mode and source locations. Users can identify lead time deviations at a more granular level and view them through the ASC Insights UI. Additionally, users can easily export all lead time deviations to combine with external sources for further analysis.

Customers lack timely visibility into vendor lead time deviations (actual lead times vs. contractual lead times). Identifying and incorporating these deviations is crucial for improving planning accuracy and avoiding stock-out situations. Traditional data analysis methods are time-consuming, often taking weeks to identify variability each quarter. By the time deviations are identified and predictions are adjusted, the underlying data is already outdated. As a result, lead time predictions become less accurate, which heightens the risk of stock-outs due to inadequate inventory and leads to increased costs from expedited shipping or higher safety stock adjustments.

This release allows customers to identify and export vendor lead time deviations at a more granular level, including transportation modes and source locations. This will help customers identify deviations from contractual lead times quickly. Customers can then update their planning cycle by using the recommended lead times.

AWS Marketplace announces amendments for AMI annual agreements

AWS Marketplace announces the general availability of amendments for annual agreements on Amazon Machine Image (AMI) products purchased on AWS Marketplace. This allows customers with annual agreements to switch the Amazon Elastic Compute Cloud (EC2) instance types for the AMI solution they purchased from AWS Marketplace.

AWS customers who run AMI software from AWS Marketplace for extended periods choose to use annual plans which offer discounts over on-demand pricing. Previously, annual agreements only provided discounts on the initially selected EC2 instance types, and if customers later needed to support additional users by adding more instances or upgrading to larger instance types, they had to pay on-demand rates or purchase additional annual plans.

Customers can now easily modify their AMI annual agreements in the AWS Marketplace Console. They can add new instance types or switch to a different instance type at any time. If the new instance type results in a higher cost, customers will retain their original discount, and AWS Marketplace will automatically calculate the pro-rated cost for the new instance types. Customers also retain the original end date of the agreement for the new instance types, simplifying renewals. Amendments are available for all AMI products in AWS Marketplace with annual pricing plans, and they support both existing and new agreements.

Announcing AWS DMS Serverless improved Oracle to Redshift Full Load throughput

AWS Database Migration Service Serverless (AWS DMSS) now supports improved Oracle to Amazon Redshift Full Load throughput. Using AWS DMSS, you can now migrate data from Oracle databases to Amazon Redshift at much higher throughput rates, ranging from two to ten times faster than previously possible with AWS DMSS.

AWS DMSS Oracle to Amazon Redshift Full Load performance enhancements will automatically be applied whenever AWS DMSS detects that a Full Load operation is being conducted between an Oracle database and Amazon Redshift. Additional information about AWS DMSS Full Load can be found in Full Load documentation.

To learn more, see the AWS DMS Full Load for Oracle databases documentation.

For AWS DMS regional availability, please refer to the AWS Region Table.
 

Amazon Connect provides Zero-ETL analytics data lake to access contact center data

Amazon Connect announces the general availability of analytics data lake, a single source for contact center data including contact records, agent performance, Contact Lens insights, and more — eliminating the need to build and maintain complex data pipelines. Organizations can create their own custom reports using Amazon Connect data or combine data queried from third-party sources using zero-ETL.

Analytics data lake enables contact center managers to leverage BI tools of their choice, such as QuickSight, to analyze the information that matters most to improving customer experience and operational efficiency. That could include a customized view of metrics like service level, combining performance insights with third-party data like CRMs, or using contact center data to inform AI/ML models and contact center optimization opportunities. For example, managers can visualize which agents have the highest customer satisfaction for calls about lost orders and then adjust routing profiles to staff their queues with the ideal agents to achieve their desired business outcomes.

Amazon Connect data lake supports querying engines like Amazon Athena and data visualization applications like Amazon QuickSight or other third-party business intelligence (BI) applications. The Amazon Connect analytics data lake is available in all the AWS Regions where Amazon Connect is available. To learn more and get started, visit the Amazon Connect website and the API documentation.
 

Amazon QuickSight launches multi column sorting for Tables

Amazon QuickSight now supports sorting by multiple columns in Tables. Both authors and readers can sort by two or more columns simultaneously in a nested fashion (e.g., first by column A, then B, then C) using the new sort popover, where they can add, remove, reorder, and reset sorts on a table. Readers can also perform multi-column sorting using hidden and off-visual fields as defined by the author, or opt for single-column sorting from the column header context menu. For more details, refer to the documentation.

Real-time audio and Microsoft Server 2022 support are now available on Amazon AppStream 2.0 multi-session fleets

Amazon AppStream 2.0 announces support for real-time audio conferencing on multi-session fleets. Additionally, you can now launch multi-session fleets powered by the Microsoft Windows Server 2022 operating system and take advantage of the latest operating system features.

Multi-session fleets enable IT admins to host multiple end-user sessions on a single AppStream 2.0 instance, helping customers to make better use of instance resources. By providing your users with access to streaming applications and audio conferencing, you can help improve team collaboration for remote workers. Your users don't need to exit their AppStream 2.0 sessions to interact using well-known audio conferencing software. Before you set up your multi-session fleet for audio conferencing, read the multi-session recommendations. These recommendations will help you choose the appropriate instance type and value for the maximum number of user sessions on a single instance.
 

Amazon Cognito user pools now support the ability to customize access tokens

In December 2023, Amazon Cognito user pools announced the ability to enrich identity and access tokens with custom attributes in the form of OAuth 2.0 scopes and claims. Today, we are expanding this functionality to support complex custom attributes such as arrays, maps and JSON objects in both identity and access tokens. You can now make fine-grained authorization decisions using complex custom attributes in the token. This feature enables you to offer enhanced personalization and increased access control. You can also simplify migration and modernization of your applications to use Amazon Cognito with minimal or no changes to your applications.
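Access tokens are customized through the pre token generation Lambda trigger (trigger event version V2_0). The sketch below follows the event shape documented for that trigger; the claim names and values ("tenant", the roles list, the scope) are illustrative assumptions showing a complex map/array attribute added to the access token.

```python
# Pre token generation Lambda trigger sketch (V2_0 event version).
def lambda_handler(event, context):
    # Add a complex custom claim (a map containing an array) and an extra
    # scope to the access token. Claim/scope names here are assumptions.
    event["response"]["claimsAndScopeOverrideDetails"] = {
        "accessTokenGeneration": {
            "claimsToAddOrOverride": {
                "tenant": {"id": "t-123", "roles": ["admin", "auditor"]},
            },
            "scopesToAdd": ["inventory.read"],
        },
    }
    return event

# Local smoke test with a minimal fake trigger event:
result = lambda_handler({"response": {}}, None)
```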

Amazon Cognito is a service that makes it simpler to add authentication, authorization, and user management to your web and mobile apps. Amazon Cognito provides authentication for applications with millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

Access token customization is available as part of Cognito advanced security features in all AWS Regions, except AWS GovCloud (US) Regions.

Powertools for AWS Lambda (Python) adds support for Agents for Amazon Bedrock

Powertools for AWS Lambda (Python), an open-source developer library, launched a new feature to ease the creation of Agents for Amazon Bedrock.

With this release, Powertools for AWS Lambda (Python) handles the automatic generation of OpenAPI schemas directly from the business logic code, validates inputs and outputs according to that schema, and drastically reduces the boilerplate necessary to manage requests and responses from Agents for Amazon Bedrock. By abstracting away the complexities, Powertools for AWS Lambda (Python) allows developers to focus their time and efforts directly on writing business logic, thereby boosting productivity and accelerating development velocity.

AWS AppSync now supports long running events with asynchronous Lambda function invocations

AWS AppSync now allows customers to invoke their Lambda functions, configured as AppSync data sources, in an event-driven manner. This new capability enables asynchronous execution of Lambda functions, providing more flexibility and scalability for serverless and event-driven applications.

Previously, customers could only invoke Lambda functions synchronously from AppSync, which meant that the GraphQL API would wait for the Lambda function to complete before returning a response. With support for Event mode, AppSync can now trigger Lambda functions asynchronously, decoupling the API response from the Lambda execution. This is particularly beneficial for long-running operations (e.g. initiating a generative AI model inference, and leveraging the Lambda function to send model responses to clients over AppSync WebSockets), batch processing (e.g. kicking off a database processing job), or scenarios where immediate responses are not required (e.g. creating and putting messages in a queue).
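In an AppSync resolver, Event mode is selected per invocation through the Lambda data-source request. The shape is shown below as a Python dict for illustration (in practice it is returned from a JavaScript or VTL resolver); the payload contents are an assumption.

```python
# Shape of an AppSync Lambda data-source request using asynchronous
# (Event) invocation, modeled as a plain dict for illustration.
def build_async_invoke_request(payload: dict) -> dict:
    return {
        "operation": "Invoke",
        "invocationType": "Event",  # async: AppSync responds without waiting
        "payload": payload,
    }

# Assumption: example payload kicking off a long-running inference job.
request = build_async_invoke_request({"action": "startInference", "prompt": "..."})
```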

This feature is available in all AWS regions supported by AppSync. For more details, refer to the AppSync documentation.
 

Amazon Bedrock announces new Converse API

Today, Amazon Bedrock announces the new Converse API, which provides developers with a consistent way to invoke Amazon Bedrock models, removing the complexity of adjusting for model-specific differences such as inference parameters. The API also simplifies managing multi-turn conversations by enabling developers to provide conversation history in a structured way as part of the request. Furthermore, the Converse API supports tool use (function calling) for supported models (Anthropic’s Claude 3 model family, including Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku; Mistral Large; and Cohere’s Command R and R+), enabling developers to perform a wide variety of tasks that require access to external tools and APIs.

The Converse API provides a consistent experience that works across Amazon Bedrock models, removing the need for developers to manage model-specific implementations. With this API, you can write code once and use it seamlessly with different models on Amazon Bedrock.
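A minimal Converse request can be sketched as follows. The message structure is the model-agnostic part; only the model ID changes. The model ID shown is an assumption (check availability in your Region), and the actual boto3 call is shown only in comments since it requires credentials.

```python
# Build a Converse API request with structured multi-turn history.
def build_converse_request(model_id: str, history: list, user_text: str) -> dict:
    messages = history + [{"role": "user", "content": [{"text": user_text}]}]
    return {
        "modelId": model_id,
        "messages": messages,  # same structure regardless of the model
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

request = build_converse_request(
    "anthropic.claude-3-haiku-20240307-v1:0",  # assumption: example model ID
    history=[
        {"role": "user", "content": [{"text": "Hi"}]},
        {"role": "assistant", "content": [{"text": "Hello! How can I help?"}]},
    ],
    user_text="Summarize our chat.",
)
# reply = bedrock_runtime.converse(**request)["output"]["message"]
```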

Introducing versioning for AWS WAF Bot & Fraud Control managed rule groups

AWS WAF now allows you to select specific versions of the Bot Control and Fraud Control managed rule groups within your web ACLs. This provides greater control over managing traffic when AWS makes new managed rule group updates available to you.

With versioning, you gain the flexibility to test new and updated bot and fraud rules before deploying them to production. For example, you can apply a new version of a managed rule group to a staging environment to validate efficacy. You can then incrementally roll out the version across production to closely monitor impact before fully enabling it. If a new version inadvertently causes issues, you can swiftly roll back to the previous version to instantly restore original behavior.

With this launch, you will be configured to use the default version (v1.0) of the Bot Control and Fraud Control managed rule groups, and you will continue to receive periodic AWS updates. If you do not want to receive updates automatically, you can select a specific version; you will remain on that version until you manually update or until it reaches end of life. For more information and best practices about version management, see the documentation.
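Pinning a version is done in the web ACL rule's managed rule group statement. The sketch below follows the WAFV2 API field names; the rule name, priority, metric name, and version string are illustrative assumptions (available versions can be listed with the ListAvailableManagedRuleGroupVersions API).

```python
# Web ACL rule sketch pinning Bot Control to a specific version.
pinned_rule = {
    "Name": "bot-control-pinned",   # assumption: example rule name
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            "Version": "Version_1.0",  # omit this field to track the default version
        },
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BotControlPinned",
    },
}
# This rule would be included in the Rules list of a wafv2 update_web_acl call.
```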

Amazon Redshift Serverless is now available in the Europe (Zurich) and Europe (Spain) Regions

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Europe (Zurich) and Europe (Spain) Regions. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.

Amazon EventBridge Scheduler adds new API request metrics for improved observability

Amazon EventBridge Scheduler now emits 12 new Amazon CloudWatch metrics allowing you to monitor API request rates for create, delete, get, list, and update API calls for Schedules and ScheduleGroups. You can now more effectively monitor your application’s performance when making calls to Scheduler’s APIs and proactively identify when you may need to increase your Scheduler service quotas.

EventBridge Scheduler allows you to create millions of scheduled events and tasks that run across more than 270 AWS services without provisioning or managing the underlying infrastructure. EventBridge Scheduler supports one-time and recurring schedules that can be created using cron expressions, rate expressions, or specific times, with support for time zones and daylight saving time. Today’s expansion of Scheduler usage metrics helps you pinpoint potential bottlenecks before they appear, allowing for easy scaling of your applications.
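For reference, a recurring schedule is created with the CreateSchedule API; the monitored API request metrics cover exactly these create/update/get/list/delete calls. The sketch below uses the documented parameter names; the schedule name, time zone, and ARNs are placeholders (assumptions), and the boto3 call is commented out since it requires credentials.

```python
# CreateSchedule request sketch for EventBridge Scheduler.
schedule_kwargs = {
    "Name": "daily-report",                          # assumption: example name
    "ScheduleExpression": "cron(0 12 * * ? *)",      # every day at 12:00
    "ScheduleExpressionTimezone": "America/New_York",  # DST handled automatically
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {
        # Assumption: placeholder target and execution-role ARNs.
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:report",
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-invoke",
    },
}
# scheduler.create_schedule(**schedule_kwargs)
```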

Amazon QuickSight is now available in Milan, Zurich, Cape Town and Jakarta Regions

Amazon QuickSight, which lets you easily create and publish interactive dashboards across your organization and embed data visualizations into your apps, is now available in Milan, Zurich, Cape Town and Jakarta Regions. New accounts are able to sign up for QuickSight with Milan, Zurich, Cape Town or Jakarta as their primary region, making SPICE capacity available in the region and ensuring proximity to AWS and on-premises data sources. Users on existing QuickSight accounts can now switch regions with the region switcher and create SPICE datasets in the new regions.

With this launch, QuickSight expands to Africa for the first time and is now available in all continents with 21 regions, including: US East (Ohio and N. Virginia), US West (Oregon), Europe (Stockholm, Paris, Frankfurt, Ireland, London, Milan and Zurich), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo and Jakarta), Canada (Central), South America (São Paulo), Africa (Cape Town) and GovCloud (US-West). Learn more about available regions here.
 

One-click instance profile creation to launch an RDS Custom for SQL Server instance

Starting today, RDS Custom for SQL Server database instance creation is simplified with single-click creation and attachment of an instance profile. You can choose “Create a new instance profile” and provide an instance profile name for Create database, Restore snapshot, and Restore to Point-in-time options within RDS Management Console. RDS Management Console will automatically generate a new instance profile with all the necessary permissions for RDS Custom automation tasks.

To leverage this feature, ensure that you are logged into the AWS Console with the following permissions: iam:CreateInstanceProfile, iam:AddRoleToInstanceProfile, iam:CreateRole, and iam:AttachRolePolicy.
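The four permissions above can be granted to the console user with a policy along these lines. This is a sketch: the wildcard Resource is an assumption for brevity and should be scoped down for production use.

```python
# IAM policy-document sketch granting the permissions needed for one-click
# instance profile creation in the RDS Management Console.
instance_profile_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:CreateInstanceProfile",
            "iam:AddRoleToInstanceProfile",
            "iam:CreateRole",
            "iam:AttachRolePolicy",
        ],
        # Assumption: broad scope for illustration; restrict in production.
        "Resource": "*",
    }],
}
```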

Claude 3 Sonnet and Haiku now available in Amazon Bedrock in the Europe (Frankfurt) region

Beginning today, customers in the Europe (Frankfurt) region can access Claude 3 Sonnet and Haiku in Amazon Bedrock to easily build and scale generative AI applications.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.
 

Amazon MSK adds support for Apache Kafka version 3.7

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 3.7 for new and existing clusters. Apache Kafka version 3.7 includes several bug fixes and new features that improve performance. Key improvements include latency improvements resulting from leader discovery optimizations during leadership changes, as well as log segment flush optimization options. For more details and a complete list of improvements and bug fixes, see the Apache Kafka release notes for version 3.7.

Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on streaming applications and less time managing Apache Kafka clusters. To learn how to get started, see the Amazon MSK Developer Guide.

Amazon RDS Multi-AZ deployment with two readable standbys supports 6 additional AWS Regions

Amazon Relational Database Service (Amazon RDS) Multi-AZ deployments with two readable standbys are now available in six additional AWS Regions.

Amazon RDS Multi-AZ deployments with two readable standbys is ideal when your workloads require lower write latency, automated failovers, and more read capacity. In addition, this deployment option supports minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy or any one of the open-source AWS Advanced JDBC Driver, PgBouncer, or ProxySQL.

The six newly supported Regions are Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), Middle East (UAE), and Israel (Tel Aviv). Amazon RDS Multi-AZ deployments with two readable standby instances are supported on Amazon RDS for PostgreSQL versions 16.1 and higher, 15.2 and higher, 14.5 and higher, and 13.8 and higher, and on Amazon RDS for MySQL version 8.0.28 and higher. Refer to the Amazon RDS User Guide for a full list of regional availability and supported engine versions.

Learn more about Amazon RDS Multi-AZ deployments in the AWS News Blog. Create or update fully managed Amazon RDS Multi-AZ database with two readable standby instances in the Amazon RDS Management Console.

Amazon SageMaker Canvas announces up to 10x faster startup time

Amazon SageMaker Canvas announces up to 10x faster startup time, enabling users to achieve faster business outcomes using a visual, no-code interface for machine learning (ML). With a faster startup time, you can now quickly prepare data and build, customize, and deploy ML and generative AI models in SageMaker Canvas, without writing a single line of code.

SageMaker Canvas can be launched using multiple methods including using your corporate credentials with a single sign-on portal such as AWS IAM Identity Center (IdC), Amazon SageMaker Studio, the AWS Management Console, or a pre-signed URL set up by IT administrators. Now, launching Canvas is quicker than ever using any of these methods. You can launch Canvas in less than a minute and get started with your ML journey 10x faster than before.

Starting today, all new user profiles created in existing or new SageMaker domains can experience this accelerated startup time. Faster startup time is available in all AWS regions where SageMaker Canvas is supported today. Please see the SageMaker Canvas product page to learn more.

Introducing the Document widget for PartyRock

Everyone can build, use, and share generative AI-powered apps for fun and for boosting personal productivity using PartyRock. PartyRock uses foundation models from Amazon Bedrock to turn your ideas into working PartyRock apps.

PartyRock apps are composed of UI elements called widgets. Widgets display content, accept input, connect with other widgets, and generate outputs like text, images, and chats using foundation models. Now available is the Document widget, allowing you to integrate text content from files and documents directly into a PartyRock app. The Document widget supports common file types including PDF, MD, TXT, DOCX, HTML, and CSV, with a limit of 120,000 characters. You can add the Document widget to new or existing apps. With the Document widget, you can build apps that generate summaries, extract action items, facilitate chats about document content, or create images based on text from documents like blogs.

For a limited time, AWS offers new PartyRock users a free trial without the need to provide a credit card or sign up for an AWS account. To get hands-on with generative AI, visit PartyRock.
 

Amazon FSx for Lustre is now available in the AWS US East (Atlanta) Local Zone

Customers can now create Amazon FSx for Lustre file systems in the AWS US East (Atlanta) Local Zone.

Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for Lustre provides fully managed shared storage built on the world’s most popular high-performance file system, designed for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA).

To learn more about Amazon FSx for Lustre, visit our product page, and see the AWS Region Table for complete regional availability information.
 

Introducing Amazon EC2 High Memory U7i Instances

Amazon Web Services is announcing the general availability of Amazon EC2 High Memory U7i instances, the first DDR5 memory-based 8-socket offering from a leading cloud provider, with up to 32TiB of memory and 896 vCPUs. Powered by 4th Generation Intel Xeon Scalable processors (Sapphire Rapids), U7i instances have twice as many vCPUs as existing U-1 instances, deliver more than 135% of their compute performance, and offer up to 45% better price performance. Combining the largest memory sizes with the highest vCPU count in the AWS cloud, these instances are ideal for running large in-memory databases such as SAP HANA, Oracle, and SQL Server, as well as compute-intensive workloads such as large language models.

U7i instances are available in four sizes supporting 12TiB, 16TiB, 24TiB, and 32TiB memory. They offer up to 100Gbps of Elastic Block Store (EBS) bandwidth for storage volumes, facilitating up to 2.5x faster restart times compared to existing U-1 instances. U7i instances deliver up to 200Gbps of network bandwidth and support ENA Express.

U7i instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), and Asia Pacific (Sydney). Customers can use these instances with On-Demand, Reserved Instance, and Savings Plans purchase options. To learn more, visit the U7i instances page.

These instances are certified by SAP for running SAP S/4HANA, SAP BW/4HANA, Business Suite on HANA, Data Mart Solutions on HANA, and Business Warehouse on HANA in production environments. For details, see the SAP HANA Hardware Directory.
 

Amazon MSK launches support for KRaft mode for new Apache Kafka clusters

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports KRaft mode (Apache Kafka Raft) in Apache Kafka version 3.7. The Apache Kafka community developed KRaft to replace Apache ZooKeeper for metadata management in Apache Kafka clusters. In KRaft mode, cluster metadata is propagated within a group of Kafka controllers, which are part of the Kafka cluster, versus across ZooKeeper nodes. On Amazon MSK, like with ZooKeeper nodes, KRaft controllers are included at no additional cost to you, and require no additional setup or management.

You can now create clusters in either KRaft mode or ZooKeeper mode on Apache Kafka version 3.7. In KRaft mode, you can add up to 60 brokers to host more partitions per cluster, without requesting a limit increase, compared to the 30-broker quota on ZooKeeper-based clusters. Support for Apache Kafka version 3.7 is offered in all AWS regions where Amazon MSK is available. To learn more about KRaft on MSK, read our launch blog and FAQs. To get started with Amazon MSK, see the Amazon MSK Developer Guide.
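As a minimal sketch of choosing between the two modes, the request below builds the parameters for creating an MSK cluster. It assumes MSK selects KRaft via a `.kraft`-suffixed Kafka version string (`3.7.x.kraft` versus `3.7.x` for ZooKeeper); the cluster name, instance type, and subnet IDs are placeholders, so verify the exact version strings against the current MSK documentation.

```python
# Sketch: parameters for creating an Amazon MSK cluster in KRaft mode.
# Assumption: KRaft mode is selected with the Kafka version "3.7.x.kraft",
# while "3.7.x" creates a ZooKeeper-based cluster.

def msk_cluster_request(name, kraft=True, broker_count=3):
    """Build a request dict for boto3's kafka.create_cluster(**kwargs)."""
    version = "3.7.x.kraft" if kraft else "3.7.x"
    return {
        "ClusterName": name,
        "KafkaVersion": version,
        "NumberOfBrokerNodes": broker_count,
        "BrokerNodeGroupInfo": {
            "InstanceType": "kafka.m5.large",
            # Placeholder subnets -- one per Availability Zone.
            "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
        },
    }

req = msk_cluster_request("orders-stream", kraft=True)
# To actually create the cluster:
#   boto3.client("kafka").create_cluster(**req)
```

KRaft-mode clusters accept up to 60 brokers without a limit increase, so `broker_count` can scale higher than the 30-broker ZooKeeper quota.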
 

Amazon DynamoDB now supports resource-based policies in the AWS GovCloud (US) Regions

Amazon DynamoDB now supports resource-based policies in the AWS GovCloud (US) Regions. Resource-based policies help you simplify access control for your DynamoDB resources. With resource-based policies, you can specify the Identity and Access Management (IAM) principals that have access to a resource and what actions they can perform on it. You can attach a resource-based policy to a DynamoDB table or a stream. The resource-based policy that you attach to a table can include access permissions to its indexes. The resource-based policy that you attach to a stream can include access permissions to the stream. With resource-based policies, you can also simplify cross-account access control for sharing resources with IAM principals of different AWS accounts.

Resource-based policies support integrations with IAM Access Analyzer and Block Public Access (BPA) capabilities. IAM Access Analyzer reports cross-account access to external entities specified in resource-based policies, and the findings provide visibility to help you refine permissions and conform to least privilege. BPA helps you prevent public access to your DynamoDB tables, indexes, and streams, and is automatically enabled in the resource-based policies creation and modification workflows.
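As an illustration, the snippet below builds a resource-based policy document that grants a role in another account read access to a table. The account IDs, Region, role, and table names are hypothetical placeholders; the policy would be attached with the DynamoDB `PutResourcePolicy` API (for example, `aws dynamodb put-resource-policy --resource-arn <table-arn> --policy file://policy.json`).

```python
import json

# Sketch: a cross-account resource-based policy for a DynamoDB table in
# the AWS GovCloud (US) partition (arn:aws-us-gov). All identifiers below
# are placeholders for illustration only.

TABLE_ARN = "arn:aws-us-gov:dynamodb:us-gov-west-1:111122223333:table/Orders"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            # IAM principal in a different AWS account.
            "Principal": {"AWS": "arn:aws-us-gov:iam::444455556666:role/analytics"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": TABLE_ARN,
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```

IAM Access Analyzer would flag the external principal above as cross-account access, which is exactly the kind of finding the integration is designed to surface.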
 

Amazon Redshift Serverless is now generally available in the AWS China (Ningxia) Region

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS China (Ningxia) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can now use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon Simple Storage Service (Amazon S3), access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes, as well as data in your operational databases, such as Amazon Aurora.

Amazon DynamoDB local supports configurable maximum throughput for on-demand tables

Amazon DynamoDB local now supports configurable maximum throughput for individual on-demand tables and associated secondary indexes. Customers can use this feature for predictable cost management, protection against accidental surges in consumed resources and excessive use, and safeguarding downstream services with fixed capacities from potential overloading and performance bottlenecks. With DynamoDB local, you can develop and test your application while managing maximum on-demand table throughput, making it easier to validate the use of the supported API actions before releasing code to production.

DynamoDB local is free to download and available for macOS, Linux, and Windows. DynamoDB local does not require an internet connection and it works with your existing DynamoDB API calls. To get started with the latest version see “Deploying DynamoDB locally on your computer”. To learn more, see Setting Up DynamoDB Local (Downloadable Version).
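As a sketch of the request shape, the parameters below create an on-demand table with throughput caps, and would work the same against DynamoDB local (`endpoint_url="http://localhost:8000"`) or the live service. The table name and cap values are illustrative; the `OnDemandThroughput` structure should be checked against the current `CreateTable` API reference.

```python
# Sketch: CreateTable parameters that cap on-demand throughput, suitable
# for testing against DynamoDB local before deploying to production.
# Assumption: the OnDemandThroughput request member with
# MaxReadRequestUnits / MaxWriteRequestUnits, per the CreateTable API.

create_table_params = {
    "TableName": "Events",
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",      # on-demand capacity mode
    "OnDemandThroughput": {
        "MaxReadRequestUnits": 1000,       # cap reads to protect downstream services
        "MaxWriteRequestUnits": 500,       # cap writes for predictable cost
    },
}

# Against DynamoDB local:
#   ddb = boto3.client("dynamodb", endpoint_url="http://localhost:8000")
#   ddb.create_table(**create_table_params)
```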
 

Amazon CloudWatch now offers 30 days of alarm history

Amazon CloudWatch has extended the duration for which customers can access their alarm history. Customers can now view the history of their alarm state changes for the past 30 days.

Previously, CloudWatch provided 2 weeks of alarm history. Customers rely on alarm history to review past triggering events, alarm trends, and noisiness. This extended history makes it easier to observe past behavior and review incidents over a longer period of time.
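As a small sketch, the parameters below request the full 30-day window of state changes through the `DescribeAlarmHistory` API; the alarm name is a placeholder.

```python
from datetime import datetime, timedelta, timezone

# Sketch: DescribeAlarmHistory parameters covering the full 30-day
# retention window (previously only 14 days were retained).

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

history_params = {
    "AlarmName": "HighCPUAlarm",          # placeholder alarm name
    "HistoryItemType": "StateUpdate",     # only state transitions, not config edits
    "StartDate": start,
    "EndDate": end,
    "MaxRecords": 100,
}

# boto3.client("cloudwatch").describe_alarm_history(**history_params)
```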

New Oracle to PostgreSQL built-in system functions in DMS Schema Conversion

DMS Schema Conversion has released five generative artificial intelligence (AI)-assisted built-in functions to improve Oracle to PostgreSQL conversions. This launch marks the first generative AI-assisted conversion improvement in DMS Schema Conversion.

Customers can use these functions by applying the DMS Schema Conversion extension pack. The extension pack is an add-on module that emulates source database functions that aren't supported in the target database and can streamline the conversion step.

DMS Schema Conversion is generally available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore).

To learn more, visit Converting database schemas using DMS Schema Conversion. For more details on how to apply the extension pack, go to Using extension packs in DMS Schema Conversion.

AWS Network Firewall increases quota for stateful rules

The AWS Network Firewall service quota limit for stateful rules is now adjustable. The default limit is still 30,000 stateful rules per firewall policy in a Region, but you can request an increase up to 50,000. This firewall rule limit increase helps customers strengthen their security posture on AWS and mitigate emerging threats more effectively.

A higher rule limit provides flexibility to customers with large-scale deployments to define their firewall policy with different combinations of AWS managed and customer defined rules. Starting today, you can implement a broader range of rules to defend against various threats and scale as you grow on AWS.

Mistral Small foundation model now available in Amazon Bedrock

The Mistral Small foundation model from Mistral AI is now generally available in Amazon Bedrock. You can now access four high-performing models from Mistral AI in Amazon Bedrock including Mistral Small, Mistral Large, Mistral 7B, and Mixtral 8x7B, further expanding model choice. Mistral Small is a highly efficient large language model optimized for high-volume, low-latency language-based tasks. It provides outstanding performance at a cost-effective price point. Key features of Mistral Small include retrieval-augmented generation (RAG) specialization, coding proficiency, and multilingual capabilities.

Mistral Small is perfectly suited for straightforward tasks that can be performed in bulk, such as classification, customer support, or text generation. The model specializes in RAG, ensuring that important information is retained even in long context windows, which can extend up to 32K tokens. Mistral Small excels in code generation, review, and commenting, supporting all major coding languages. It also offers multilingual capabilities, delivering top-tier performance in English, French, German, Spanish, and Italian, and supports dozens of other languages. The model also comes with built-in, efficient guardrails for safety.

Mistral AI’s Mistral Small foundation model is now available in Amazon Bedrock in the US East (N. Virginia) AWS region. To learn more, read the AWS News launch blog, Mistral AI in Amazon Bedrock product page, and documentation. To get started with Mistral Small in Amazon Bedrock, visit the Amazon Bedrock console.
 

PostgreSQL 17 Beta 1 is now available in Amazon RDS Database Preview Environment

Amazon RDS for PostgreSQL 17 Beta 1 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 17 Beta 1 in the Amazon RDS Database Preview Environment, which offers the benefits of a fully managed database.

PostgreSQL 17 includes updates to vacuuming that reduce memory usage, improve time to finish vacuuming, and show progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for `JSON_TABLE` features that can convert JSON to a standard PostgreSQL table. The `MERGE` command now supports the `RETURNING` clause, letting you further work with modified rows. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. Please refer to the PostgreSQL community announcement for more details.

Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment.

Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
 

Connect your Jupyter notebooks to Amazon EMR Serverless using Apache Livy endpoints

Today, we are excited to announce that Amazon EMR Serverless now supports endpoints for Apache Livy. Customers can now securely connect their Jupyter notebooks and manage Apache Spark workloads using Livy’s REST interface.

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple and cost effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. With Livy endpoints, setting up a connection is easy: just point the Livy client in your on-premises notebook running Sparkmagic kernels to the EMR Serverless endpoint URL. You can now interactively query, explore, and visualize data, and run Spark workloads using Jupyter notebooks without having to manage clusters or servers. In addition, you can use the Livy REST APIs for use cases that need interactive code execution outside notebooks.
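For use outside notebooks, Livy's REST interface creates an interactive session with a `POST /sessions` request. The sketch below builds that request payload; the endpoint URL shown is a placeholder pattern, so copy the actual Livy endpoint URL from your EMR Serverless application.

```python
import json

# Sketch: creating an interactive Spark session via Livy's REST API.
# The endpoint URL below is a placeholder -- use the Livy endpoint URL
# shown for your EMR Serverless application.

LIVY_ENDPOINT = "https://<application-id>.livy.emr-serverless.example.amazonaws.com"

session_request = {
    "kind": "pyspark",                        # Livy session kinds: spark, pyspark, sparkr, sql
    "conf": {"spark.executor.memory": "4g"},  # optional Spark configuration
}

body = json.dumps(session_request)

# With any HTTP client (signed per your endpoint's auth requirements):
#   POST {LIVY_ENDPOINT}/sessions         with the body above
#   then poll GET /sessions/<id> until state == "idle"
#   and submit code via POST /sessions/<id>/statements
```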
 

AWS Launches Console-based Bulk Policy Migration for Billing and Cost Management Console Access

AWS Billing and Cost Management console now supports a simplified, console-based migration experience for affected policies containing retired IAM actions (aws-portal). Customers who have not yet migrated to fine-grained IAM actions can trigger this experience by clicking the Update IAM Policies recommended action on the Billing and Cost Management home page. The experience identifies affected policies, suggests equivalent new actions to match customers' current access, provides testing options, and completes the migration of all affected policies across the organization.

The experience automatically identifies the required new fine-grained actions, making it easy for customers to maintain their current access post-migration. It provides the flexibility to test with a few accounts and to roll back changes with a single click, making the migration a risk-free operation. Moreover, the experience offers optional customization, allowing customers to broaden or fine-tune their access by modifying the AWS-recommended IAM action mapping, as well as to migrate select accounts one at a time.

AWS Chatbot now supports tagging of AWS Chatbot resources

AWS Chatbot now enables customers to tag AWS Chatbot resources. Tags are simple key-value pairs that customers can assign to AWS resources such as AWS Chatbot channel configurations to easily organize, search, identify resources, and control access.

Prior to today, customers could not tag AWS Chatbot resources. As a result, they could not use tag-based controls to manage access to AWS Chatbot resources. By tagging AWS Chatbot resources, customers can now enforce tag-based controls in their environments. Customers can manage tags for AWS Chatbot resources using the AWS CLI, SDKs, or AWS Management Console.

AWS Chatbot support for tagging Chatbot resources is available at no additional cost in all AWS Regions where AWS Chatbot service is offered. To learn more, visit the AWS Chatbot Tagging your AWS Chatbot resources documentation page.

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.30

Kubernetes version 1.30 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.30. Starting today, you can create new EKS clusters using v1.30 and upgrade your existing clusters to v1.30 using the Amazon EKS console, the eksctl command line interface, or through an infrastructure-as-code tool.

Kubernetes version 1.30 includes stable support for pod scheduling readiness and minimum domains parameter for PodTopologySpread constraints. As a reminder, starting with Kubernetes version 1.30 or newer, any newly created managed node groups will automatically default to using AL2023 as the node operating system. For detailed information on major changes in Kubernetes version 1.30, see the Kubernetes project release notes.

Kubernetes v1.30 support for Amazon EKS is available in all AWS Regions where Amazon EKS is available, including the AWS GovCloud (US) Regions.

You can learn more about the Kubernetes versions available on Amazon EKS and instructions to update your cluster to version 1.30 by visiting Amazon EKS documentation. Amazon EKS Distro builds of Kubernetes v1.30 are available through ECR Public Gallery and GitHub. Learn more about the Amazon EKS version lifecycle policies in the documentation.
 

Introducing the Amazon Kinesis Data Streams Apache Spark Structured Streaming Connector for Amazon EMR

We are excited to announce the launch of the Amazon Kinesis Data Streams Connector for Spark Structured Streaming on Amazon EMR. The new connector makes it easy for you to build real-time streaming applications and pipelines that consume Amazon Kinesis Data Streams using Apache Spark Structured Streaming. Starting with Amazon EMR 7.1, the connector comes pre-packaged on Amazon EMR on EKS, EMR on EC2, and EMR Serverless. You no longer need to build or download any packages and can focus on building your business logic using the familiar and optimized Spark Data Source APIs when consuming data from your Kinesis data streams.

Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at massive scale. Amazon EMR is the cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using Apache Spark and other open-source frameworks. The new Amazon Kinesis Data Streams Connector for Apache Spark is faster, more scalable, and fault-tolerant than alternative open-source options. The connector also supports Enhanced Fan-out consumption with dedicated read throughput. To learn more and see a code example, go to Build Spark Structured Streaming applications with the open source connector for Amazon Kinesis Data Streams.
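As a rough sketch, the options below configure the connector as a Structured Streaming source in a PySpark job. The source name `aws-kinesis`, the option keys, and the stream name are assumptions for illustration; confirm the exact names in the connector documentation linked above.

```python
# Sketch: reader options for the Kinesis Data Streams connector in a
# PySpark Structured Streaming job on EMR 7.1+ (connector pre-packaged).
# The source name "aws-kinesis" and option keys below are assumptions --
# verify against the connector docs. The stream name is a placeholder.

kinesis_options = {
    "kinesis.region": "us-east-1",
    "kinesis.streamName": "my-click-stream",
    "kinesis.consumerType": "GetRecords",     # or enhanced fan-out for dedicated throughput
    "kinesis.startingPosition": "LATEST",
}

# Inside the Spark job:
#   df = (spark.readStream
#             .format("aws-kinesis")
#             .options(**kinesis_options)
#             .load())
```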
 

New open-source AWS Advanced Python Wrapper driver now available for Amazon Aurora and Amazon RDS

The Amazon Web Services (AWS) Advanced Python Wrapper driver is now generally available for use with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible edition database clusters. This database driver provides support for faster switchover and failover times, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM).

The AWS Advanced Python Wrapper driver wraps the open-source Psycopg and MySQL Connector/Python drivers and supports Python versions 3.8 or newer. You can install the aws-advanced-python-wrapper package using the pip command along with either the psycopg or mysql-connector-python open-source packages. The wrapper driver monitors database cluster status and tracks the cluster topology to determine the new writer. This approach reduces switchover and failover times from tens of seconds to single-digit seconds compared to the open-source drivers.

The AWS Advanced Python Wrapper driver is released as an open-source project under the Apache 2.0 License. Check out the project on GitHub to view installation instructions and documentation.
 

AWS re:Post Private is now available in five new regions

AWS re:Post Private is now available in five new regions: US East (N. Virginia), Europe (Ireland), Canada (Central), Asia Pacific (Sydney), and Asia Pacific (Singapore).

re:Post Private is a secure, private version of AWS re:Post, designed to help organizations increase speed to get started with the cloud, remove technical roadblocks, accelerate innovation, and improve developer productivity. With re:Post Private, it is easier for organizations to build an organizational cloud community that drives efficiencies at scale and provides access to valuable knowledge resources. Additionally, re:Post Private centralizes trusted AWS technical content and offers private discussion forums to improve how organizational teams collaborate internally—and with AWS—to remove technical obstacles, accelerate innovation, and scale more efficiently in the cloud. On re:Post Private, you can convert a discussion thread into a support case and centralize AWS Support responses for your organization's cloud community. Learn more about using AWS re:Post Private on the product page.

AWS announces new AWS Direct Connect location in Chicago, Illinois

Today, AWS announced the opening of a new AWS Direct Connect location within the CoreSite CH1 data center in Chicago, Illinois. By connecting your network to AWS at the new Illinois location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones. This is the fourth AWS Direct Connect site within the Chicago metropolitan area and the 44th site in the United States.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. The new Direct Connect location at Coresite CH1 offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.

For more information on the over 140 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
 

Amazon EC2 M7i-flex, M7i, C7i, and R7i instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex, M7i, and C7i instances are available in the AWS GovCloud (US-East) Region. In addition, Amazon EC2 M7i-flex, M7i, and R7i instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads, and deliver up to 19% better price performance compared to M6i. M7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices.

M7i, C7i, and R7i instances deliver up to 15% better price performance compared to prior-generation M6i, C6i, and R6i instances. They offer larger instance sizes, up to 48xlarge, can attach up to 128 EBS volumes, and include two bare-metal sizes (metal-24xl, metal-48xl). The bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads.

Amazon QuickSight launches public API for SPICE CMK Data Encryption

Amazon QuickSight is excited to announce the launch of public API support for Customer Managed Keys (CMK) to encrypt and manage SPICE datasets. Previously, customers were required to manually configure CMK data encryption keys via the QuickSight console UI. Now, with this API enhancement, QuickSight users can programmatically opt in and configure customer managed keys, seamlessly integrating this into their adoption and migration pipelines. Once turned on, the feature enables QuickSight users to 1/ revoke access to SPICE datasets with one click, and 2/ maintain an auditable log that tracks how SPICE datasets are accessed. For further details, visit here.

AWS Lambda console now supports sharing test events between developers in additional regions

Developers can now share test events with other developers in their AWS account in the Africa (Cape Town), Asia Pacific (Jakarta), Asia Pacific (Osaka), Europe (Milan), Europe (Spain), Europe (Zurich), Middle East (Bahrain), and Middle East (UAE) Regions. Test events provide developers the ability to define a sample event in the Lambda console, and then invoke a Lambda function using that event to test their code. Previously, in the above-mentioned Regions, test events were only available to the developers who created them. With this launch, developers can make test events available to other team members in their AWS account using granular IAM permissions. This capability makes it easier for developers to collaborate and streamline testing workflows. It also allows developers to use a consistent set of test events across their entire team.
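As a sketch of those granular permissions: shareable test events are stored in the EventBridge schema registry named lambda-testevent, so teammates need `schemas` actions on that registry. The account ID is a placeholder, and the exact action names and ARN shapes should be checked against the current Lambda testing documentation.

```python
import json

# Sketch: an IAM policy granting team members access to shared Lambda
# test events, which live in the EventBridge schema registry named
# "lambda-testevent". Account ID is a placeholder; verify action names
# and ARN formats in the Lambda docs.

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "schemas:ListSchemas",
                "schemas:DescribeSchema",
                "schemas:CreateSchema",
                "schemas:UpdateSchema",
            ],
            "Resource": [
                "arn:aws:schemas:*:111122223333:registry/lambda-testevent",
                "arn:aws:schemas:*:111122223333:schema/lambda-testevent*",
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```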
