
Recent Announcements
The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
AI21 Labs' Jamba-Instruct model now available in Amazon Bedrock

AI21 Labs’ Jamba-Instruct, a powerful instruction-following large language model, is now available in Amazon Bedrock. Fine-tuned for instruction following and built for reliable commercial use, Jamba-Instruct can engage in open-ended dialogue, understand context and subtext, and complete a wide variety of tasks based on natural language instructions.

With its 256K context window, Jamba-Instruct can ingest the equivalent of an 800-page novel or an entire company's financial filings for a given fiscal year. This large context window allows Jamba-Instruct to answer questions and produce summaries that are grounded in the provided inputs, eliminating the need to manually segment documents to fit smaller context windows.

With its strong reasoning and analysis capabilities, Jamba-Instruct can break down complex problems, gather relevant information, and provide structured outputs. The model is ideal for common enterprise use cases such as enabling Q&A on call transcripts, summarizing key points from documents, building chatbots, and more. Whether you need assistance with coding, writing, research, analysis, creative tasks, or general task assistance, Jamba-Instruct is a powerful model that can streamline your workflow and accelerate time to production for your gen AI enterprise applications.
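
As a sketch of how an application might call the model, the following boto3 snippet invokes Jamba-Instruct through the Bedrock Runtime; the model ID and the chat-style request fields are assumptions to verify against the Bedrock documentation.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model identifier for Jamba-Instruct; confirm in the Bedrock console.
response = bedrock.invoke_model(
    modelId="ai21.jamba-instruct-v1:0",
    body=json.dumps({
        "messages": [
            {"role": "user", "content": "Summarize the key risks in this filing: ..."}
        ],
        "max_tokens": 1024,
    }),
)
print(json.loads(response["body"].read()))
```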
 

Amazon CodeCatalyst now supports GitLab.com source code repositories

Amazon CodeCatalyst now supports the use of source code repositories hosted in GitLab.com in CodeCatalyst projects. This allows customers to use GitLab.com repositories with CodeCatalyst’s features such as its cloud IDE (Development Environments), Amazon Q feature development, and custom and public blueprints. Customers can also trigger CodeCatalyst workflows based on events in GitLab.com, view the status of CodeCatalyst workflows back in GitLab.com, and even block GitLab.com pull request merges based on the status of CodeCatalyst workflows.

Customers want the flexibility to use source code repositories hosted in GitLab.com without migrating to CodeCatalyst to use its functionality. Migration is a long process, and customers want to evaluate CodeCatalyst and its capabilities against their own code repositories before they decide to migrate. Support for popular source code providers such as GitLab.com is the top customer ask for CodeCatalyst. Now customers can use the capabilities of CodeCatalyst without migrating their source code from GitLab.com.

Amazon MSK supports in-place upgrades from M5 and T3 instance types to Graviton3-based M7g

You can now upgrade your Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned clusters running on x86-based M5 or T3 instances to AWS Graviton3-based M7g instances with a single click. In-place upgrades allow you to seamlessly switch your existing provisioned clusters to the M7g instance type for better price performance, while continuing to serve reads and writes for your connected client applications.

Switching to AWS Graviton3-based M7g instances on Amazon MSK provisioned clusters allows you to achieve up to 24% compute cost savings and up to 29% higher write and read throughput over comparable MSK clusters running on M5 instances. Additionally, these instances use up to 60% less energy than comparable instances, making your Kafka clusters more environmentally sustainable.

In-place upgrades to M7g instances are now available in all AWS regions where MSK supports M7g. Please refer to our blog for more information on the price/performance improvements of M7g instances and the Amazon MSK pricing page for information on pricing. To get started, you can update your existing clusters to M7g brokers using the AWS Management Console, and read our developer guide for more information.
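
A minimal boto3 sketch of such an in-place switch using the UpdateBrokerType API; the cluster ARN and target instance size are placeholders.

```python
import boto3

kafka = boto3.client("kafka")

cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234"
current_version = kafka.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]["CurrentVersion"]

# Replace the cluster's M5/T3 brokers with Graviton3-based M7g brokers in place.
kafka.update_broker_type(
    ClusterArn=cluster_arn,
    CurrentVersion=current_version,
    TargetInstanceType="kafka.m7g.xlarge",
)
```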
 

Amazon DocumentDB announces IAM database authentication

Amazon DocumentDB (with MongoDB compatibility) now supports cluster authentication with AWS Identity and Access Management (IAM) user and role ARNs. Users and applications connecting to an Amazon DocumentDB cluster to read, write, update, or delete data can now use an AWS IAM identity to authenticate connection requests. These users and applications can use the same AWS IAM user or role when connecting to different DocumentDB clusters and to other AWS services.

Applications running on Amazon EC2, AWS Lambda, Amazon ECS, or Amazon EKS do not need to manage passwords in application code when authenticating to Amazon DocumentDB with an AWS IAM role. These applications obtain their connection credentials from the environment of an AWS IAM role, making authentication passwordless.

New and existing DocumentDB clusters can use AWS IAM to authenticate cluster connections without modifying the cluster configuration. You can also choose both password-based authentication and authentication with AWS IAM ARN to authenticate different users and applications to a DocumentDB cluster. Amazon DocumentDB cluster authentication with AWS IAM ARNs is supported by drivers which are compatible with MongoDB 5.0+.

Authentication with AWS IAM ARNs is available in Amazon DocumentDB instance-based 5.0 clusters across all supported regions. To learn more, please refer to the Amazon DocumentDB documentation, and see the Region Support for complete regional availability. To learn more about IAM, refer to the product detail page.
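
As an illustrative sketch, a MongoDB 5.0+ compatible driver such as pymongo can authenticate with the MONGODB-AWS mechanism, picking up IAM credentials from the environment; the cluster endpoint below is a placeholder.

```python
# Requires: pip install "pymongo[aws]"
from pymongo import MongoClient

# Credentials come from the environment (for example, an attached IAM role),
# so no password appears in the connection string.
client = MongoClient(
    "mongodb://mycluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?authMechanism=MONGODB-AWS&authSource=%24external"
    "&tls=true&replicaSet=rs0&retryWrites=false"
)
print(client.admin.command("ping"))
```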

Amazon Redshift Serverless with lower base capacity available in the Asia Pacific (Mumbai) Region

Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 8 Redshift Processing Units (RPUs) in the AWS Asia Pacific (Mumbai) region. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of workloads you run, in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 32 RPUs. With the new lower base capacity minimum of 8 RPUs, you now have even more flexibility to support a diverse set of workloads, from small to large, based on your price performance requirements. You can increase or decrease the base capacity in increments of 8 RPUs.
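
For example, an existing workgroup can be moved to the new minimum with a single boto3 call; the workgroup name is a placeholder.

```python
import boto3

redshift_serverless = boto3.client("redshift-serverless", region_name="ap-south-1")

# Lower the workgroup's base capacity to the new 8 RPU minimum.
redshift_serverless.update_workgroup(workgroupName="my-workgroup", baseCapacity=8)
```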

Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. With the new lower capacity configuration, you can use Amazon Redshift Serverless for production, test, and development environments at an optimal price point when a workload needs a small amount of compute.

To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
 

Amazon Aurora now provides additional monitoring information during upgrades

Amazon Aurora now provides additional granular monitoring information during upgrades for enhanced observability. Customers can use the additional granularity shared in Amazon Aurora Events to stay informed and better manage their database upgrades.

Customers upgrade their database version, operating system, and/or other components to pick up security, compliance, and functional enhancements. When applying upgrades, Aurora now emits additional messages in Aurora Events indicating when the database cluster is online and when it is not. For database minor version and patch upgrades, customers can use these messages to get granular insight into the exact downtime incurred for their database, including the number of connections preserved during the upgrade. To learn more about how to monitor your upgrade process, see the technical documentation.
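
A small boto3 sketch of polling these messages during an upgrade; the cluster identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Fetch cluster events from the last two hours to follow upgrade progress.
events = rds.describe_events(
    SourceIdentifier="my-aurora-cluster",
    SourceType="db-cluster",
    Duration=120,
)
for event in events["Events"]:
    print(event["Date"], event["Message"])
```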

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. You can get started by launching a new Amazon Aurora DB instance directly from the AWS Console or the AWS CLI. To get started with Amazon Aurora, take a look at our getting started page.

Amazon EC2 C6a instances now available in additional regions

Starting today, the compute-optimized Amazon EC2 C6a instances are available in the Asia Pacific (Hong Kong) Region. C6a instances are powered by third-generation AMD EPYC processors with a maximum frequency of 3.6 GHz. C6a instances deliver up to 15% better price performance than comparable C5a instances and 10% lower cost than comparable x86-based EC2 instances. These instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

AWS CodeBuild supports Arm-based workloads using AWS Graviton3

Arm-based workloads in AWS CodeBuild now run on AWS Graviton3 without any additional configuration.

In February 2021, CodeBuild launched support for native Arm builds on the second generation of AWS Graviton processors. Support for this platform allows customers to build and test on Arm without the need to emulate or cross-compile. Now, CodeBuild customers targeting Arm benefit from the enhanced capabilities of AWS Graviton3 processors. The upgrade delivers up to 25% higher performance over Graviton2 processors. Graviton3 also uses up to 60% less energy for the same performance as comparable EC2 instances, enabling customers to reduce their carbon footprint in the cloud.

CodeBuild’s support for Arm using Graviton3 is now available in: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Stockholm), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central).

To learn more about CodeBuild’s support for Arm, please visit our documentation. To learn more about how to get started, visit the AWS CodeBuild product page.
 

Amazon ElastiCache supports M7g and R7g Graviton3-based nodes in additional AWS regions

Amazon ElastiCache now supports Graviton3-based M7g and R7g node families. ElastiCache Graviton3 nodes deliver improved price-performance compared to Graviton2. As an example, when running ElastiCache for Redis on an R7g.4xlarge node, you can achieve up to 28% increased throughput (read and write operations per second) and up to 21% improved P99 latency, compared to running on R6g.4xlarge. In addition, these nodes deliver up to 25% higher networking bandwidth.

The M7g and R7g nodes are now available for Amazon ElastiCache in the following AWS Regions: US East (N. Virginia and Ohio), US West (Oregon and N. California), Canada (Central), South America (Sao Paulo), Europe (Ireland, Frankfurt, London, Stockholm, Spain, and Paris (M7g only)), and Asia Pacific (Tokyo, Sydney, Mumbai, Hyderabad, Seoul, and Singapore). For complete information on pricing and regional availability, please refer to the Amazon ElastiCache pricing page. To get started, create a new cluster or upgrade to Graviton3 using the AWS Management Console.
 

Amazon Time Sync Service expands microsecond-accurate time to 27 EC2 instance types

The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on 27 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types in supported regions, including all C7gd, M7gd, and R7gd instances.

Built on Amazon's proven network infrastructure and the AWS Nitro System, this capability gives customers access to local, GPS-disciplined reference clocks on additional EC2 instance types. These clocks can be used to more easily order application events, measure one-way network latency, increase distributed application transaction speed, and incorporate in-region and cross-region scalability features, all while simplifying technical designs. You can also audit your clock accuracy from your instance to monitor the expected microsecond-range accuracy. Customers already using the Amazon Time Sync Service on these newly supported instance types will see improved clock accuracy automatically, without needing to adjust their AMI or NTP client settings. Customers can also use standard PTP clients and configure a PTP Hardware Clock (PHC) to get the best accuracy possible. Both NTP and PTP can be used without any updates to VPC configurations.

Amazon Time Sync with microsecond-accurate time is available in the US East (N. Virginia) and Asia Pacific (Tokyo) Regions on all R7g as well as C7i, M7i, R7i, C7a, M7a, R7a, M7g, C7gd, R7gd, and M7gd instance types. We will be expanding support to additional AWS Regions. There is no additional charge for using this service.

Configuration instructions and more information on the Amazon Time Sync Service are available in the EC2 User Guide.

Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20240529

Amazon Relational Database Service (RDS) for MySQL announces Amazon RDS Extended Support minor version 5.7.44-RDS.20240529. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of MySQL. Learn more about the bug fixes and patches in this version in the Amazon RDS User Guide.

Amazon RDS Extended Support gives you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS provides critical security and bug fixes for MySQL on Amazon Aurora and Amazon RDS after the community ends support for a major version. You can run your MySQL databases on Amazon RDS with Extended Support for up to three years beyond a major version's end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide and the Pricing FAQs.

Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. See Amazon RDS for MySQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Amazon Redshift Concurrency Scaling is now available in three additional regions

Amazon Redshift Concurrency Scaling is now available in the AWS Europe (Zurich), Europe (Spain), and Middle East (UAE) regions.

Amazon Redshift Concurrency Scaling elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries. Concurrency Scaling resources are added to your Redshift cluster transparently in seconds, as concurrency increases, to process queries without wait time. Amazon Redshift customers with an active Redshift cluster earn up to one hour of free Concurrency Scaling credits, which is sufficient for the concurrency needs of most customers. Concurrency Scaling also lets you set usage controls, giving you predictability in your month-to-month cost even during periods of fluctuating analytical demand.

To enable Concurrency Scaling, set the Concurrency Scaling Mode to Auto in the AWS Management Console. You can allocate Concurrency Scaling usage to specific user groups and workloads, control the number of Concurrency Scaling clusters that can be used, and monitor Amazon CloudWatch performance and usage metrics.
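
Outside the console, the scaling limit can also be set on the cluster's parameter group; a boto3 sketch, assuming the max_concurrency_scaling_clusters parameter applies to your cluster (per-queue scaling mode is set in the WLM JSON configuration).

```python
import boto3

redshift = boto3.client("redshift", region_name="eu-central-2")

# Cap the number of Concurrency Scaling clusters the main cluster may add.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-cluster-params",
    Parameters=[
        {"ParameterName": "max_concurrency_scaling_clusters", "ParameterValue": "2"}
    ],
)
```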

Knowledge Bases for Amazon Bedrock now offers observability logs

Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver relevant and accurate responses. Knowledge Bases now supports observability, offering log delivery choice through CloudWatch, S3 buckets, and Firehose streams. This capability provides enhanced visibility and timely insights into the execution of knowledge ingestion steps.

Previously, Knowledge Bases provided only basic statistics about content ingestion. This new feature offers deeper insight into the ingestion process, indicating whether each document was successfully processed or encountered failures. Having comprehensive insights in a timely manner ensures customers can promptly determine when their documents are ready for use with the Retrieve and RetrieveAndGenerate API calls.
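
As a rough sketch, delivery to a CloudWatch Logs log group can be wired up with the CloudWatch Logs delivery APIs; the log type, ARNs, and names below are assumptions to check against the Knowledge Bases documentation.

```python
import boto3

logs = boto3.client("logs")

# Register the knowledge base as a delivery source...
logs.put_delivery_source(
    name="kb-ingestion-logs",
    resourceArn="arn:aws:bedrock:us-east-1:123456789012:knowledge-base/KB12345",
    logType="APPLICATION_LOGS",
)
# ...declare a destination log group...
destination = logs.put_delivery_destination(
    name="kb-logs-destination",
    deliveryDestinationConfiguration={
        "destinationResourceArn": "arn:aws:logs:us-east-1:123456789012:log-group:/bedrock/kb"
    },
)
# ...and connect the two.
logs.create_delivery(
    deliverySourceName="kb-ingestion-logs",
    deliveryDestinationArn=destination["deliveryDestination"]["arn"],
)
```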

This capability is supported in all AWS Regions where Knowledge Bases is available. To learn more about these features and how to get started, refer to the Knowledge Bases for Amazon Bedrock documentation and visit the Amazon Bedrock console.

Amazon OpenSearch Serverless now available in Canada (Central) region

We are excited to announce the availability of Amazon OpenSearch Serverless in the Canada (Central) region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless automatically provisions and scales resources to provide consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand.

With the support in the Canada (Central) region, OpenSearch Serverless is now available in 13 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), and Canada (Central).

Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
 

Amazon RDS for MySQL supports new minor version 8.0.37

Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor version 8.0.37. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.37 in the Amazon RDS user guide.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.
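
For example, a specific minor version can be scheduled with boto3; the instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Apply 8.0.37 during the next maintenance window rather than immediately.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",
    EngineVersion="8.0.37",
    ApplyImmediately=False,
)
```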

AWS B2B Data Interchange announces automated 999 acknowledgements for healthcare transactions

AWS B2B Data Interchange now automatically generates 999 functional acknowledgements to confirm receipt of individual X12 electronic data interchange (EDI) healthcare transactions and to report errors. This launch helps you maintain HIPAA compliance while automating delivery of 999 acknowledgements to trading partners that require them. This launch adds to AWS B2B Data Interchange’s existing support for automated TA1 acknowledgements.

Each acknowledgement generated by AWS B2B Data Interchange is stored in Amazon S3, alongside your transformed EDI, and emits an Amazon EventBridge event. You can use these events to automatically send the acknowledgements created by AWS B2B Data Interchange to your trading partners via SFTP using AWS Transfer Family or any other EDI connectivity solution. 999 X231 acknowledgements are generated for all X12 version 5010 HIPAA transactions, while 999 acknowledgements are generated for all other healthcare-related X12 transactions.
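
A hedged sketch of capturing these events with an EventBridge rule via boto3; the "aws.b2bi" event source is an assumption to verify against the B2B Data Interchange event schema.

```python
import json

import boto3

events = boto3.client("events")

# Match acknowledgement events so a target (for example, a Lambda function that
# pushes files to partners via AWS Transfer Family) can react to them.
events.put_rule(
    Name="b2bi-acknowledgements",
    EventPattern=json.dumps({"source": ["aws.b2bi"]}),
)
```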

Support for automated acknowledgements is available in all AWS Regions where AWS B2B Data Interchange is available and provided at no additional cost. To learn more about automated acknowledgements, visit the documentation. To get started with AWS B2B Data Interchange for building and running your event-driven EDI workflows, take the self-paced workshop or deploy the CloudFormation template.
 

Amazon RDS announces integration with AWS Secrets Manager in the AWS GovCloud (US) Regions

Amazon RDS now supports integration with AWS Secrets Manager in the AWS GovCloud (US) Regions to streamline how you manage the master user password for your RDS database instances. With this feature, RDS fully manages the master user password and stores it in AWS Secrets Manager whenever your RDS database instances are created, modified, or restored. The feature supports the entire lifecycle of your RDS master user password, including regular and automatic password rotations, removing the need for you to manage rotations using custom Lambda functions.

RDS integration with AWS Secrets Manager improves your database security by ensuring your RDS master user password is not visible in plaintext to administrators or engineers during your database creation workflow. Furthermore, you have the flexibility to encrypt the secrets using your own customer managed key or a KMS key provided by AWS Secrets Manager. RDS and AWS Secrets Manager make it easy and secure to manage your master user password for your database instances, relieving you from complex credential management activities such as setting up custom Lambda functions to manage password rotations.
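
For example, a new instance can delegate password management to Secrets Manager at creation time; identifiers and the KMS key alias are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-gov-west-1")

# RDS creates, stores, and rotates the master password in Secrets Manager.
rds.create_db_instance(
    DBInstanceIdentifier="my-db",
    DBInstanceClass="db.m6i.large",
    Engine="postgres",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,
    MasterUserSecretKmsKeyId="alias/my-key",  # optional; defaults to an AWS managed key
    AllocatedStorage=100,
)
```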

For more information on this feature on RDS and Aurora engines, versions, and region availability, please refer to the RDS and Aurora user guides.
 

Amazon S3 Replication Time Control is now available in the AWS GovCloud (US) Regions

Amazon S3 Replication Time Control (S3 RTC), a feature of S3 Replication that provides a predictable replication time backed by a Service Level Agreement (SLA), is now available in the AWS GovCloud (US) Regions.

Customers use S3 Replication to replicate billions of objects across buckets to the same or different AWS Regions and to one or more destination buckets. S3 RTC is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds. S3 RTC is backed by an SLA with a commitment to replicate 99.9% of objects within 15 minutes during any billing month. With S3 RTC enabled, customers can also view S3 Replication metrics (via CloudWatch) by default to monitor the time taken to complete replication, the total number and size of objects that are pending replication, as well as the number of objects that failed to replicate per minute due to misconfiguration or permission errors.
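
A sketch of enabling RTC (and its metrics) on a replication rule with boto3; bucket names and the role ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws-us-gov:iam::123456789012:role/replication-role",
        "Rules": [{
            "ID": "rtc-rule",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws-us-gov:s3:::destination-bucket",
                # Replication Time Control plus the CloudWatch replication metrics.
                "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
            },
        }],
    },
)
```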

Amazon SageMaker JumpStart now provides granular access control for foundation models

Starting today, enterprise admins using Amazon SageMaker JumpStart can easily configure granular access control for the foundation models (FMs) that are discoverable and accessible to users within their organization. Amazon SageMaker JumpStart is a machine learning (ML) hub that offers pretrained models and built-in algorithms to help you quickly get started with ML.

Amazon SageMaker JumpStart provides access to hundreds of FMs; however, many enterprise admins want more control over the FMs that can be discovered and used by users within their organization (for example, only allowing models under the Apache 2.0 license to be discovered). With this new feature, enterprise admins can create private hubs in SageMaker JumpStart through the SageMaker SDK and add specific FMs to private hubs that are accessible to users within their organization. Enterprise admins can also set up multiple private hubs tailored to different roles or accounts, each with a different set of models. Once set up, users can view and use only the hubs and models they are allowed to access through SageMaker Studio and the SageMaker SDK.
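
A sketch of creating a private hub with boto3; the hub name and description are placeholders, and approved models are subsequently imported into the hub as hub content.

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-2")

# Create the private hub; admins then add the approved FMs to it.
sagemaker.create_hub(
    HubName="approved-fms",
    HubDescription="Apache-2.0-licensed foundation models approved for our organization",
)
```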

Granular access control for FMs in SageMaker JumpStart is initially available in US East (Ohio) starting today. To learn more, see the blog and product page.

Amazon EC2 macOS AMIs are now available on AWS Systems Manager Parameter Store

Starting today, customers can reference the latest macOS AMIs via public parameters in the AWS Systems Manager Parameter Store. With this functionality, customers can query the public parameters to retrieve the latest macOS image IDs, ensure that new EC2 Mac instances are launched with the latest macOS versions, and display a complete list of all available public parameter macOS AMIs. Public parameters are available for both x86 and ARM64 macOS AMIs and can be integrated with customers' existing AWS CloudFormation templates.
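
As an illustrative sketch, the public parameters can be listed with boto3; the parameter path shown is an assumption, so check the EC2 Mac documentation for the exact hierarchy.

```python
import boto3

ssm = boto3.client("ssm")

# Walk the macOS public-parameter namespace and print AMI IDs.
paginator = ssm.get_paginator("get_parameters_by_path")
for page in paginator.paginate(Path="/aws/service/ec2-macos", Recursive=True):
    for parameter in page["Parameters"]:
        print(parameter["Name"], parameter["Value"])
```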

This capability is supported in all AWS regions where EC2 Mac instances are available. To learn more about this feature, please visit the documentation. To learn more about EC2 Mac instances, see the EC2 Mac product page.
 

AWS Billing and Cost Management now provides Data Exports for Cost Optimization Hub

Data Exports for Cost Optimization Hub now enables customers to export their cost optimization recommendations to Amazon S3. Cost Optimization Hub recommendations are consolidated from over 15 types of AWS cost optimization recommendations, such as EC2 instance rightsizing, Graviton migration, and Savings Plan purchases across their AWS accounts and AWS Regions. Exports are delivered on a daily basis to Amazon S3 in Parquet or CSV format.

With Data Exports for Cost Optimization Hub, customers receive their recommendations in easy-to-ingest data files, which simplifies creating reports or dashboards. Customers can apply the same filters and preferences to their exports that they use in Cost Optimization Hub to deduplicate savings. Customers can also control the data included in their export using basic SQL column selections and row filters. Data Exports for Cost Optimization Hub makes it easy for customers to bring their recommendation data into analytics and BI tools for tracking, prioritizing, and sharing savings opportunities with key stakeholders.
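
As a sketch, an export can be created through the bcm-data-exports API (the same API serves the FOCUS 1.0 table described later); the SQL statement and output settings are illustrative and should be checked against the Data Exports User Guide.

```python
import boto3

exports = boto3.client("bcm-data-exports", region_name="us-east-1")

# Daily Parquet export of Cost Optimization Hub recommendations to S3.
exports.create_export(
    Export={
        "Name": "coh-recommendations",
        "DataQuery": {"QueryStatement": "SELECT * FROM COST_OPTIMIZATION_RECOMMENDATIONS"},
        "DestinationConfigurations": {
            "S3Destination": {
                "S3Bucket": "my-export-bucket",
                "S3Prefix": "coh",
                "S3Region": "us-east-1",
                "S3OutputConfigurations": {
                    "OutputType": "CUSTOM",
                    "Format": "PARQUET",
                    "Compression": "PARQUET",
                    "Overwrite": "OVERWRITE_REPORT",
                },
            }
        },
        "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
    }
)
```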

Data Exports for Cost Optimization Hub is available in the US East (N. Virginia) Region, but includes recommendations for all AWS Regions, except the AWS GovCloud (US) Regions and the AWS China (Beijing and Ningxia) Regions.

Learn more about Data Exports for Cost Optimization Hub exports in the Data Exports User Guide and in the Data Exports product details page. You can also learn more about Cost Optimization Hub in the Cost Optimization Hub User Guide. Get started by visiting the “Data Exports” or “Cost Optimization Hub” features in the AWS Billing and Cost Management console and creating an export of the “Cost Optimization Recommendations” table.

AWS Lambda now supports IPv6 for outbound connections in VPC in the AWS GovCloud (US) Regions

AWS Lambda now allows Lambda functions to access resources in dual-stack VPCs (outbound connections) over IPv6 in the AWS GovCloud (US) Regions. With this launch, Lambda enables you to scale your application without being constrained by the limited number of IPv4 addresses in your VPC, and to reduce costs by minimizing the need for translation mechanisms.

Previously, Lambda functions configured with an IPv4-only or dual-stack VPC could access VPC resources only over IPv4. To work around the constrained number of IPv4 addresses in VPC, customers modernizing their applications were required to build complex architectures or use network translation mechanisms. With today's launch, Lambda functions can access resources in dual-stack VPCs over IPv6 and get virtually unlimited scale, using a simple function-level switch. You can also enable VPC-configured Lambda functions to access the internet using an egress-only internet gateway.
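
For example, IPv6 outbound access can be enabled on an existing function with a single configuration update; subnet and security group IDs are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-gov-west-1")

# Allow the function to reach dual-stack VPC resources over IPv6.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # dual-stack subnets
        "SecurityGroupIds": ["sg-0ccc3333"],
        "Ipv6AllowedForDualStack": True,
    },
)
```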

Lambda’s IPv6 support for outbound connections in VPC is generally available in the AWS GovCloud (US-West, US-East) Regions.

You can enable outbound access for new or existing Lambda functions to dual-stack VPC resources over IPv6 using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and AWS SDK. For more information on how to enable IPv6 access for Lambda functions in dual-stack VPC, see the Lambda documentation. To learn more about Lambda, see the Lambda developer guide.
 

AWS Billing and Cost Management now provides Data Exports for FOCUS 1.0 (Preview)

Data Exports for FOCUS 1.0 now enables customers to export their cost and usage data with the FOCUS 1.0 schema to Amazon S3. This feature is in preview. FOCUS is a new open-source cloud billing data specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 includes several AWS-specific columns, such as usage types and cost categories, and delivers exports on a daily basis to Amazon S3 as Parquet or CSV files.

With Data Exports for FOCUS 1.0, customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. It provides a consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized schema of FOCUS ensures each type of billing data appears in a consistent column with a common set of values, so data can be reliably referenced across sources.

Data Exports for FOCUS 1.0 is available in preview in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except the AWS GovCloud (US) Regions and the AWS China (Beijing and Ningxia) Regions.

Learn more about AWS Data Exports for FOCUS 1.0 in the User Guide, product details page, and at the FOCUS open-source project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the “FOCUS 1.0 with AWS columns - preview” table.

Amazon Redshift Query Editor V2 is now available in AWS Canada (Calgary) region

You can now use the Amazon Redshift Query Editor V2 with Amazon Redshift in the AWS Canada (Calgary) region. Amazon Redshift Query Editor V2 makes data in your Amazon Redshift data warehouse and data lake more accessible with a web-based tool for SQL users such as data analysts, data scientists, and database developers. With Query Editor V2, users can explore, analyze, and collaborate on data. It reduces the operational costs of managing query tools by providing a web-based application that allows you to focus on exploring your data without managing your infrastructure.

Default Role in CodeCatalyst Environments

Today, Amazon CodeCatalyst announces support for adding a default IAM role to an environment.

Previously, when configuring a workflow, a user was required to specify the environment, AWS account connection, and IAM role for each individual action in order for that action to interact with AWS resources. With the default IAM role, a user only needs to set the environment for an action, and the AWS account connection and role are automatically applied to the action.

This feature is available in all AWS Regions where CodeCatalyst is generally available. To get started, add a default role to your environments today. To learn more, see the environments section in the CodeCatalyst documentation.
 

Amazon Chime SDK meetings is now available in the Africa (Cape Town) Region

Amazon Chime SDK now offers WebRTC meetings with API endpoints in the Africa (Cape Town) Region. With this release, Amazon Chime SDK developers can add one-to-one and group meetings with real-time audio and video to web and mobile applications from the Africa (Cape Town) Region. This release also includes the ability to connect clients to audio and video media hosted in the Africa (Cape Town) Region.

When creating meeting applications with the Amazon Chime SDK, developers call API endpoints to create, update, and delete one-to-one and group meetings. The region selected for the API endpoint can impact the latency of API calls and helps control the location of meeting data, since the region is also where meeting events are received and processed. Because Africa (Cape Town) is an opt-in region, developers using its API endpoints to create and manage Amazon Chime SDK meetings must also use the same AWS region for media. Developers using any of the other available control regions can still host meeting media in the Africa (Cape Town) Region once they opt in to the region through their AWS account.
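
For instance, a meeting hosted entirely in the new region might be created as follows with boto3; the external meeting ID is a placeholder.

```python
import uuid

import boto3

meetings = boto3.client("chime-sdk-meetings", region_name="af-south-1")

# Create a meeting whose control plane and media are in Africa (Cape Town).
meeting = meetings.create_meeting(
    ClientRequestToken=str(uuid.uuid4()),
    MediaRegion="af-south-1",
    ExternalMeetingId="team-standup",
)
print(meeting["Meeting"]["MeetingId"])
```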

Amazon SageMaker HyperPod now supports configurable cluster storage

Today, AWS announces the general availability of configurable cluster storage for SageMaker HyperPod cluster instances, which enables customers to provision additional storage for model development. This launch allows you to centrally automate the provisioning and management of additional Elastic Block Store (EBS) volumes for your cluster instances. With configurable cluster storage, you can easily integrate additional storage capacity across all your cluster instances, empowering you to customize your persistent cluster environment to meet the unique demands of your distributed training workloads.

Cluster storage on SageMaker HyperPod enables customers to dynamically allocate and manage storage resources within the cluster. Organizations can now scale their storage capacity on-demand, ensuring they have sufficient space for Docker images, logs, and custom software installations. This feature is particularly beneficial for foundation model developers working with extensive logging requirements and resource-intensive machine learning models, allowing them to effectively manage and store critical assets within a secure and scalable environment.
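
A hedged sketch of CreateCluster with an additional EBS volume attached to every instance in a group; the role, lifecycle script location, and sizes are placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_cluster(
    ClusterName="training-cluster",
    InstanceGroups=[{
        "InstanceGroupName": "workers",
        "InstanceType": "ml.g5.8xlarge",
        "InstanceCount": 4,
        "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",
        "LifeCycleConfig": {
            "SourceS3Uri": "s3://my-bucket/lifecycle/",
            "OnCreate": "on_create.sh",
        },
        # Attach an extra 500 GiB EBS volume to each instance in the group.
        "InstanceStorageConfigs": [{"EbsVolumeConfig": {"VolumeSizeInGB": 500}}],
    }],
)
```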

Record individual participants with Amazon IVS Real-Time Streaming

Amazon Interactive Video Service (Amazon IVS) Real-Time Streaming enables you to build real-time interactive video experiences. With individual participant recording, you can now record each live stream participant’s video or audio to Amazon Simple Storage Service (Amazon S3).

When recording is enabled, each participant is automatically recorded and saved as a separate file in the Amazon S3 bucket you select. This new individual recording option is in addition to the existing composite recording feature, which combines all participants into one media file. There is no additional cost for enabling individual participant recording, but standard Amazon S3 storage and request costs apply.

Amazon IVS is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
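
As a sketch, a stage with automatic individual recording might be created like this with boto3; the parameter names and the storage configuration ARN are assumptions to verify in the IVS Real-Time API reference.

```python
import boto3

ivs_realtime = boto3.client("ivs-realtime")

# Each participant who joins the stage is recorded to the S3 bucket referenced
# by the storage configuration.
ivs_realtime.create_stage(
    name="my-stage",
    autoParticipantRecordingConfiguration={
        "storageConfigurationArn": (
            "arn:aws:ivs:us-east-1:123456789012:storage-configuration/abc123"
        ),
    },
)
```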


Amazon RDS for SQL Server supports up to 64 TiB and 256,000 IOPS with io2 Block Express volumes

Amazon RDS for SQL Server now offers enhanced storage and performance capabilities, supporting up to 64 TiB of storage and 256,000 I/O operations per second (IOPS) with io2 Block Express volumes. This is an improvement over the previous limit of 16 TiB and 64,000 IOPS with io2 Block Express. These enhancements enable transactional databases and data warehouses to handle larger workloads on a single Amazon RDS for SQL Server database instance, eliminating the need to shard data across multiple instances.
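
A hedged boto3 sketch of scaling an instance to the new limits; identifiers are placeholders, and 64 TiB corresponds to 65,536 GiB.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-db",
    StorageType="io2",
    AllocatedStorage=65536,  # 64 TiB, expressed in GiB
    Iops=256000,
    ApplyImmediately=True,
)
```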

The support for 64 TiB and 256,000 IOPS with io2 Block Express for Amazon RDS for SQL Server is now generally available in all AWS regions where Amazon RDS io2 Block Express volumes are currently supported. To learn more, please visit the Amazon RDS User Guide.

Anthropic's Claude 3.5 Sonnet model now available in Amazon Bedrock

Anthropic's Claude 3.5 Sonnet foundation model is now generally available in Amazon Bedrock. Claude 3.5 Sonnet, Anthropic's most intelligent model to date, sets a new industry standard for intelligence. The model outperforms other generative AI models in the industry, as well as Anthropic's previously most intelligent model, Claude 3 Opus, on a wide range of evaluations, all while costing one-fifth as much as Opus. You can now get intelligence better than Claude 3 Opus at the same cost as Anthropic's original Claude 3 Sonnet model.

The frontier intelligence displayed by Claude 3.5 Sonnet, combined with cost-effective pricing, makes this model ideal for complex tasks such as context-sensitive customer support, orchestrating multi-step workflows, streamlining code translations, and creating user-facing applications. Claude 3.5 Sonnet exhibits marked improvements in near-human levels of comprehension and fluency. The model represents a significant leap in understanding nuance, humor, and complex instructions. It is exceptional at writing high-quality content that feels more authentic, with a natural and relatable tone. Claude 3.5 Sonnet is also Anthropic's strongest vision model, providing best-in-class vision capabilities. It can accurately interpret charts and graphs and transcribe text from imperfect images, a core capability for retail, logistics, and financial services, where AI may glean more insights from an image, graphic, or illustration than from text alone. Additionally, when instructed and provided with the relevant tools, Claude 3.5 Sonnet can independently write and edit code with sophisticated reasoning and troubleshooting capabilities.
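
For example, the model can be called through the Bedrock Converse API with boto3; the model ID below is the launch identifier and should be confirmed in the Bedrock console.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Explain RAG in two sentences."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```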

Amazon RDS for Oracle now supports Oracle Multitenant in the AWS GovCloud (US) Regions

Amazon Relational Database Service (Amazon RDS) for Oracle now supports the Oracle Multitenant configuration on Oracle Database versions 19c and 21c running Oracle Enterprise Edition or Standard Edition 2 in the AWS GovCloud (US) Regions. With this release, the Amazon RDS for Oracle DB instance can operate as a multitenant container database (CDB) hosting one or more pluggable databases (PDBs). A PDB is a set of schemas, schema objects, and non-schema objects that logically appears to a client as a non-CDB.

With Oracle Multitenant, you have the option to consolidate standalone databases by either creating them as PDBs or migrating them to PDBs. Database consolidation can deliver improved resource utilization for DB instances, reduced administrative load, and potential reduction in total license requirements.

To create a multitenant Amazon RDS for Oracle DB instance, simply create an Oracle DB instance in the AWS Management Console or using the AWS CLI, and specify the Oracle multitenant architecture and multitenant configuration. You may also convert an existing non-CDB instance to the CDB architecture, and then modify the instance to the multitenant configuration to enable it to hold multiple PDBs. Amazon RDS for Oracle DB instances are charged at the same rate whether the instance is a non-CDB or a CDB in either the single-tenant or multi-tenant configuration.
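
A hedged boto3 sketch of creating a CDB instance in the multi-tenant configuration; the engine value and MultiTenant flag reflect the RDS API as we understand it and should be verified in the RDS User Guide.

```python
import boto3

rds = boto3.client("rds", region_name="us-gov-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="my-oracle-cdb",
    DBInstanceClass="db.m6i.large",
    Engine="oracle-ee-cdb",  # CDB architecture (assumed engine value)
    MultiTenant=True,        # multi-tenant configuration (assumed flag)
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,
    AllocatedStorage=200,
    LicenseModel="bring-your-own-license",
)
```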

Amazon RDS for Oracle allows you to set up, operate, and scale Oracle database deployments in the cloud. See Amazon RDS for Oracle Pricing for up-to-date pricing and regional availability.
 

Amazon Bedrock now supports compressed embeddings from Cohere Embed

Amazon Bedrock now supports compressed embeddings (int8 and binary) from the Cohere Embed model, enabling developers and businesses to build more efficient generative AI applications without compromising on performance. Cohere Embed is a leading text embedding model. It is most frequently used to power Retrieval-Augmented Generation (RAG) and semantic search systems.

The text embeddings output by the Cohere Embed model must be stored in a database with vector search capabilities, with storage costs being directly related to the dimensions of the embedding output as well as the number format precision. Cohere's compression-aware model training techniques allow the model to output embeddings in binary and int8 precision formats, which are significantly smaller than the commonly used FP32 precision format, with minimal accuracy degradation. This unlocks the ability to run your enterprise search applications faster, cheaper, and more efficiently. int8 and binary embeddings are especially interesting for large, multi-tenant setups, where the ability to search millions of embeddings within milliseconds is a critical business advantage. Cohere's compressed embeddings allow you to build applications efficient enough to put into production at scale, accelerating your AI strategy to support your employees and customers.
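
A sketch of requesting compressed embeddings with boto3; the model ID and request fields follow Cohere's Embed v3 schema as we understand it.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.invoke_model(
    modelId="cohere.embed-english-v3",
    body=json.dumps({
        "texts": ["What is the capital of France?"],
        "input_type": "search_query",
        "embedding_types": ["int8", "binary"],  # compressed formats
    }),
)
result = json.loads(response["body"].read())
print(result["embeddings"]["int8"][0][:8])  # first few int8 dimensions
```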

Cohere Embed int8 and binary embeddings are now available in Amazon Bedrock in all the AWS Regions where the Cohere Embed model is available. To learn more, read the Cohere in Amazon Bedrock product page, documentation, and Cohere launch blog. To get started with Cohere models in Amazon Bedrock, visit the Amazon Bedrock console.

AWS CodeArtifact now supports Cargo, the Rust package manager

Today, AWS announces the general availability of Cargo support in CodeArtifact. Crates, which are used to distribute Rust libraries, can now be stored in CodeArtifact.

Cargo, the package manager for the Rust programming language, can be used to publish and download crates from CodeArtifact repositories. Developers can configure CodeArtifact to fetch crates from crates.io, the Rust community’s crate hosting service. When Cargo is connected to a CodeArtifact repository, CodeArtifact will automatically fetch requested crates from crates.io and store them in the CodeArtifact repository. By storing both private first-party crates and public, third-party crates in CodeArtifact, developers can access their critical application dependencies from a single source.
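
As an illustrative boto3 sketch, you can resolve a repository's Cargo endpoint and an authorization token to place in ~/.cargo/config.toml; the domain and repository names are placeholders.

```python
import boto3

codeartifact = boto3.client("codeartifact")

endpoint = codeartifact.get_repository_endpoint(
    domain="my-domain", repository="my-repo", format="cargo"
)["repositoryEndpoint"]
token = codeartifact.get_authorization_token(domain="my-domain")["authorizationToken"]

# Use these values to configure a Cargo registry pointing at CodeArtifact.
print(endpoint)
```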

CodeArtifact support for Cargo is available in all 13 CodeArtifact regions.

To learn more, see AWS CodeArtifact.
 

AWS Compute Optimizer supports rightsizing recommendations for Amazon RDS MySQL and RDS PostgreSQL

AWS Compute Optimizer now provides recommendations for Amazon RDS MySQL and RDS PostgreSQL DB instances and storage. These recommendations help you identify idle databases and choose the optimal DB instance class and provisioned IOPS settings, so you can reduce costs for over-provisioned workloads and increase the performance of under-provisioned workloads.

AWS Compute Optimizer automatically discovers your Amazon RDS MySQL and RDS PostgreSQL DB instances and analyzes Amazon CloudWatch metrics such as CPU utilization, read and write IOPS, and database connections to generate recommendations. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad to give you more insights to choose the optimal DB instance configurations. With these metrics, Compute Optimizer delivers idle and rightsizing recommendations to help you optimize your RDS DB instances.
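
A hedged boto3 sketch of retrieving the new recommendations; the response field names are assumptions to verify in the Compute Optimizer API reference.

```python
import boto3

compute_optimizer = boto3.client("compute-optimizer")

recommendations = compute_optimizer.get_rds_database_recommendations()
for rec in recommendations.get("rdsDBRecommendations", []):
    print(rec["resourceArn"], rec.get("instanceFinding"), rec.get("storageFinding"))
```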

This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.
 

Amazon OpenSearch Service now supports JSON Web Token (JWT) authentication and authorization

Amazon OpenSearch Service now supports JSON Web Token (JWT) authentication, which enables you to authenticate and authorize users without providing any credentials or using the internal user database. JWT support also makes it easy for customers to integrate with the identity provider of their choice and isolate tenants in a multi-tenant application.

Until now, Amazon OpenSearch Service allowed customers to implement client and user authentication using Amazon Cognito or basic authentication with the internal user database. With JWT support, customers can now use a single token, issued by an operator or external identity provider, to authenticate requests to their Amazon OpenSearch Service cluster. Customers can set up JWT authentication using the console or CLI, as well as the create and update domain APIs.
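
A sketch of enabling JWT authentication on an existing domain with boto3; the signing public key and claim keys are illustrative and must match your identity provider.

```python
import boto3

opensearch = boto3.client("opensearch")

opensearch.update_domain_config(
    DomainName="my-domain",
    AdvancedSecurityOptions={
        "JWTOptions": {
            "Enabled": True,
            "PublicKey": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
            "SubjectKey": "sub",  # claim carrying the username
            "RolesKey": "roles",  # claim carrying backend roles
        }
    },
)
```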

Amazon SageMaker now offers a fully managed MLflow Capability

Amazon SageMaker now offers a fully managed MLflow capability. Data scientists can use familiar MLflow constructs to organize, track, and analyze ML experiments, and administrators can set up MLflow with better scalability, availability, and security.

MLflow is a popular open-source tool that helps customers manage ML experiments. Data scientists and ML engineers are already using MLflow with SageMaker; however, it required setting up, managing, and securing access to MLflow Tracking Servers. With this launch, SageMaker makes it easier for customers to set up and manage MLflow Tracking Servers with a couple of clicks. Customers can secure access to MLflow via AWS Identity and Access Management roles. Data scientists can use the MLflow SDK to track experiments across local notebooks, IDEs, managed IDEs in SageMaker Studio, SageMaker Training Jobs, SageMaker Processing Jobs, and SageMaker Pipelines. Experimentation capabilities such as rich visualizations for run comparisons and model evaluations are available to help data scientists find the best training iteration. Models registered in MLflow automatically appear in the SageMaker Model Registry for a unified model governance experience, and customers can deploy MLflow Models to SageMaker Inference without building custom MLflow containers. The integration with SageMaker allows data scientists to easily track metrics during model training, ensuring reproducibility across different frameworks and environments.
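
As a sketch, an administrator might provision a tracking server with boto3 (names, the S3 URI, and the role are placeholders); data scientists can then point the MLflow SDK at the resulting server.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Provision a managed MLflow Tracking Server backed by S3 artifact storage.
sagemaker.create_mlflow_tracking_server(
    TrackingServerName="my-tracking-server",
    ArtifactStoreUri="s3://my-bucket/mlflow-artifacts",
    RoleArn="arn:aws:iam::123456789012:role/MlflowServerRole",
)
```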

AWS Glue adds 13 new transforms, including flag duplicates

AWS Glue now offers 13 new built-in transforms: Flag duplicates in column, Format Phone Number, Format case, Fill with mode, Flag duplicate rows, Remove duplicates, Month name, Is even, Cryptographic Hash, Decrypt, Encrypt, Int to IP, and IP to int. AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. With these new transforms, ETL developers can quickly build more sophisticated data pipelines without having to write custom code for these common transform tasks.

Each of these new transforms addresses a unique data processing need. For example, use Remove duplicates, Flag duplicates in column, or Flag duplicate rows to highlight or remove duplicate rows within your dataset; use Cryptographic Hash to hash values in a column; encrypt values in source columns with the Encrypt transform; or decrypt those columns with the Decrypt transform. The new transformations are available for code-based jobs.
 

Announcing support for Autodesk 3ds Max Usage-Based Licensing in AWS Deadline Cloud

The AWS Deadline Cloud Usage-Based Licensing (UBL) server now offers on-demand licenses for Autodesk 3ds Max, a popular software for 3D modeling, animation, and digital imagery. This addition joins other supported digital content creation tools such as Autodesk Arnold, Autodesk Maya®, Foundry Nuke®, and SideFX® Houdini. With Deadline Cloud UBL, you only pay for use of the software during the processing of jobs.

With this release, customers can integrate 3ds Max licensing into their workflows by adding it to their license endpoints. Once configured, 3ds Max license traffic can be routed to the appropriate license endpoint, enabling seamless access and pay-as-you-go usage. This feature is currently available in the Deadline Cloud Customer-Managed fleet deployment option.

For more information, please visit the Deadline Cloud product page, and see the Deadline Cloud pricing page for UBL price details.
 

AWS Elemental MediaConnect adds source stream monitoring

AWS Elemental MediaConnect now provides information about the incoming transport stream and its program media. You can view transport stream information such as program numbers, stream types, codecs, and packet identifiers (PIDs) for video, audio, and data streams in the console or via the MediaConnect API. With this new feature you can more accurately identify and resolve issues, minimizing disruptions to your live broadcasts.
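
A minimal sketch of retrieving this metadata with boto3, assuming the DescribeFlowSourceMetadata API; the flow ARN is a placeholder.

```python
import boto3

mediaconnect = boto3.client("mediaconnect")

# Returns program numbers, stream types, codecs, and PIDs for the flow's source.
metadata = mediaconnect.describe_flow_source_metadata(
    FlowArn="arn:aws:mediaconnect:us-east-1:123456789012:flow:1-abcd:my-flow"
)
print(metadata)
```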

To learn more about monitoring source streams, visit the AWS Elemental MediaConnect documentation page.

AWS Elemental MediaConnect is a reliable, secure, and flexible transport service for live video that enables broadcasters and content owners to build live video workflows and securely share live content with partners and customers. MediaConnect helps customers transport high-value live video streams into, through, and out of the AWS Cloud. MediaConnect can function as a standalone service or as part of a larger video workflow with other AWS Elemental Media Services, a family of services that form the foundation of cloud-based workflows to transport, transcode, package, and deliver video.

Visit the AWS Region Table for a full list of AWS Regions where MediaConnect is available. To learn more, please visit the AWS Elemental MediaConnect product page.
 

Amazon CodeCatalyst now supports GitHub Cloud and Bitbucket Cloud with Amazon Q

Amazon CodeCatalyst now supports the use of source code repositories hosted in GitHub Cloud and Bitbucket Cloud with Amazon Q for feature development. Customers can now assign issues in CodeCatalyst to Amazon Q and direct it to work with source code hosted in GitHub Cloud and Bitbucket Cloud.

Using Amazon Q, you can go from an issue all the way to merge-ready code in a pull request. Amazon Q analyzes the issue and existing source code, creates a plan, and then generates source code in a pull request. Before, customers could only use source code repositories hosted in CodeCatalyst with this capability. Now, customers can use source code repositories hosted in GitHub Cloud or Bitbucket Cloud.

This capability is available in US West (Oregon). There is no change to pricing.

For more information, see the documentation or visit the Amazon CodeCatalyst website.

Amazon Connect Cases is now available in additional Asia Pacific regions

Amazon Connect Cases is now available in the Asia Pacific (Seoul) and Asia Pacific (Tokyo) AWS regions. Amazon Connect Cases provides built-in case management capabilities that make it easy for your contact center agents to create, collaborate on, and quickly resolve customer issues that require multiple customer conversations and follow-up tasks.

Amazon Redshift Query Editor V2 now supports 100MB file uploads

Amazon Redshift Query Editor V2 now supports uploading local files up to 100MB in size when loading data into your Amazon Redshift databases. This increased file size limit provides more flexibility for ingesting larger datasets directly from your local environment.

With the new 100MB file size limit, data analysts, engineers, and developers can now load larger datasets from local files into their Redshift clusters or workgroups using Query Editor V2. This enhancement is particularly beneficial when working with CSV, JSON, or other structured data files that previously exceeded the 5MB limit. By streamlining the upload process for sizeable local files, you can expedite data ingestion and analysis workflows on Amazon Redshift.

To learn more, see the Amazon Redshift documentation.
 

Amazon DataZone launches custom blueprint configurations for AWS services

Amazon DataZone launches custom blueprint configurations for AWS services allowing customers to optimize resource usage and costs by using existing AWS Identity and Access Management (IAM) roles and/or AWS services, such as Amazon S3. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls.

Amazon DataZone's blueprints help administrators define which AWS tools and services will be deployed for data producers like data engineers or data consumers like data scientists, simplifying access to data and increasing collaboration among project members. Custom blueprints for AWS services add to the family of Amazon DataZone blueprints, which includes the data lake, data warehouse, and Amazon SageMaker blueprints. With custom blueprints, administrators can incorporate Amazon DataZone into their data pipelines by using existing IAM roles to publish existing data assets owned by those roles to the catalog, establishing governed sharing of those data assets and enhancing governance across the entire infrastructure.

Amazon EC2 C7g and R7g instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g and R7g instances are available in the Europe (Milan), Asia Pacific (Hong Kong), and South America (São Paulo) Regions. These instances are powered by AWS Graviton3 processors, which provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy for the same performance than comparable EC2 instances, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).

Amazon EC2 C7g and R7g are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Asia Pacific (Hyderabad, Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), China (Beijing, Ningxia), Europe (Frankfurt, Ireland, London, Milan, Spain, Stockholm), and South America (São Paulo).

To learn more, see Amazon EC2 C7g and R7g. To learn how to migrate your workloads to AWS Graviton-based instances, see the AWS Graviton Fast Start Program.

CodeCatalyst allows customers to use Amazon Q Developer to choose a blueprint

Today, AWS announces the general availability of a new capability of Amazon Q Developer in Amazon CodeCatalyst. Customers can now use Amazon Q to help them pick the best blueprint for their needs when getting started with a new project or on an existing project. Before, customers had to read through the descriptions of available blueprints to try and pick the best match. Now customers can describe what they want to create and receive direct guidance about which blueprint to pick for their needs. Amazon Q will also create an issue in the project for each requirement that isn’t included in the resources created by the blueprint. Users can then customize their project by assigning those issues to developers to add that functionality. They can even choose to assign these issues to Amazon Q itself, which will then attempt to create code to solve the problem.

Customers can use blueprints to create projects in CodeCatalyst that include resources, such as a source repository with sample code, CI/CD workflows that build and test your code, and integrated issue tracking tools. Customers can now use Amazon Q to help them create projects or add functionality to existing projects with blueprints. If the space has custom blueprints, Amazon Q Developer will learn and include these in its recommendations. For more information, see the documentation or visit the Amazon CodeCatalyst website.

This capability is available in regions where CodeCatalyst and Amazon Bedrock are available. There is no change to pricing.
 

AWS Glue Usage Profiles is now generally available

Today, AWS announces the general availability of AWS Glue Usage Profiles, a new cost control capability that allows admins to set preventative controls and limits on resources consumed by their Glue jobs and notebook sessions. With AWS Glue Usage Profiles, admins can create different cost profiles for different classes of users. Each profile is a unique set of parameters that can be assigned to different types of users. For example, a cost profile for a data engineer working on production pipelines could allow an unrestricted number of workers, whereas the cost profile for a test user could restrict the number of workers.

You can get started by creating a new usage profile in the AWS Glue Studio console or by using the Glue Usage Profiles APIs. Next, you assign that profile to an IAM user or role. After following these steps, all new Glue jobs or sessions created with that particular IAM user or role will have the limits specified in the assigned usage profile.
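
As a minimal sketch of that flow (assuming boto3; the profile name, description, and configuration keys below are illustrative assumptions rather than values from this announcement):

```python
import boto3

glue = boto3.client("glue")

# Create a usage profile that restricts workers for test users. The
# parameter keys ("numberOfWorkers", "workerType") are illustrative;
# check the Glue Usage Profiles documentation for supported settings.
glue.create_usage_profile(
    Name="test-user-profile",
    Description="Restricted profile for test users",
    Configuration={
        "JobConfiguration": {
            "numberOfWorkers": {"DefaultValue": "2", "MaxValue": "10"},
            "workerType": {"DefaultValue": "G.1X", "AllowedValues": ["G.1X", "G.2X"]},
        }
    },
)
```

After creating the profile, you would attach it to the relevant IAM users or roles as described in the Glue documentation so that their new jobs and sessions pick up these limits.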

Amazon MWAA now supports Custom Web Server URLs

Amazon Managed Workflows for Apache Airflow (MWAA) now supports custom domain names for the Airflow web server, simplifying access to the Airflow user interface.

Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform as you do today to orchestrate your workflows and enjoy improved scalability, availability, and security without the operational burden of having to manage the underlying infrastructure. Amazon MWAA now adds the ability to customize the redirection URL that MWAA’s single sign-on (SSO) uses after authenticating the user against their IAM credentials. This allows customers that use private web servers with load balancers, custom DNS entries, or proxies to point users to a user-friendly web address while maintaining the simplicity of MWAA’s IAM integration.

You can launch or upgrade an Apache Airflow environment with a custom URL on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about custom domains, visit the Amazon MWAA documentation.

Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
 

Amazon EC2 D3 instances are now available in Europe (Paris) region

Starting today, Amazon EC2 D3 instances, the latest generation of the dense HDD-storage instances, are available in the Europe (Paris) region.

Amazon EC2 D3 instances are powered by 2nd generation Intel Xeon Scalable Processors (Cascade Lake) and provide up to 48 TB of local HDD storage. D3 instances are ideal for workloads including distributed/clustered file systems, big data and analytics, and high-capacity data lakes. With D3 instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads. D3 instances are offered in 4 sizes - xlarge, 2xlarge, 4xlarge, and 8xlarge.

D3 instances are available for purchase with Savings Plans, Reserved Instances, Convertible Reserved Instances, On-Demand Instances, and Spot Instances, or as Dedicated Instances.

To get started with D3 instances, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 D3 instances page.

Amazon OpenSearch Serverless now available in South America (Sao Paulo) region

We are excited to announce the availability of Amazon OpenSearch Serverless in the South America (Sao Paulo) region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless automatically provisions and scales resources to provide consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand.

With support in the South America (Sao Paulo) region, OpenSearch Serverless is now available in 12 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), and South America (Sao Paulo). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
 

Introducing Maven, Python, and NuGet support in Amazon CodeCatalyst package repositories

Today, AWS announces support for the Maven, Python, and NuGet package formats in Amazon CodeCatalyst package repositories. CodeCatalyst customers can now securely store, publish, and share Maven, Python, and NuGet packages, using popular package managers such as mvn, pip, and nuget. Through your CodeCatalyst package repositories, you can also access open source packages from 6 additional public package registries. Your packages remain available for your development teams, should public packages and registries become unavailable from other service providers.

Amazon Kinesis Video Streams is now available in AWS GovCloud (US) Regions

Amazon Kinesis Video Streams is now available in the AWS GovCloud (US-East and US-West) Regions. Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. Amazon Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and Amazon SageMaker.

For more information, please visit the Amazon Kinesis Video Streams product page, and see the AWS region table for complete regional availability information. Note that Amazon Kinesis Video Streams with WebRTC is not yet available in AWS GovCloud (US) Regions.

 

Amazon Redshift announces support for VARBYTE 16MB data type

Amazon Redshift has extended the VARBYTE data type from the previous 1,024,000-byte maximum size (see the VARBYTE What’s New announcement from December 2021) to a 16,777,216-byte (16MB) maximum size. VARBYTE is a variable-size data type for storing and representing variable-length binary strings. With this announcement, Amazon Redshift supports all existing VARBYTE functionality with 16MB VARBYTE values. The VARBYTE data type can now ingest data larger than 1,024,000 bytes from Parquet, CSV, and text file formats. The default size for a VARBYTE(n) column (if n is not specified) remains 64,000 bytes.
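
For illustration, here is a minimal sketch using the Redshift Data API (the workgroup, database, table, bucket, and IAM role names are hypothetical):

```python
import boto3

rsd = boto3.client("redshift-data")

# Create a table whose VARBYTE column uses the new 16MB maximum size.
rsd.execute_statement(
    WorkgroupName="my-workgroup",  # or ClusterIdentifier for provisioned clusters
    Database="dev",
    Sql="CREATE TABLE blobs (id INT, payload VARBYTE(16777216));",
)

# Ingest binary values larger than 1,024,000 bytes from Parquet.
rsd.execute_statement(
    WorkgroupName="my-workgroup",
    Database="dev",
    Sql="COPY blobs FROM 's3://my-bucket/blobs/' "
        "IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole' "
        "FORMAT AS PARQUET;",
)
```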

VARBYTE 16MB support is now available in all commercial AWS Regions. Refer to the AWS Region Table for Amazon Redshift availability. For more information or to get started with Amazon Redshift VARBYTE data type, see the documentation.
 

Amazon CodeCatalyst now offers a capability to analyze issues and recommend granular tasks

Amazon CodeCatalyst now offers a new capability powered by Amazon Q to help customers analyze issues and recommend granular tasks. These tasks can then be individually assigned to users or to Amazon Q itself, helping you accelerate work. Previously, customers could create issues to track work that needed to be done on a project, but had to manually create the more granular tasks that could be assigned to others on the team. Now customers can ask Amazon Q to analyze an issue for complexity and suggest ways of breaking the work into individual tasks.

This capability is available in the US West (Oregon) Region. For more information, see the documentation or visit the Amazon CodeCatalyst website.
 

AWS Glue serverless Spark UI now supports rolling log files

Today, AWS announces rolling log file support for the AWS Glue serverless Apache Spark UI. The serverless Spark UI enables you to get detailed information about your AWS Glue Spark jobs. With rolling log support, you can use the AWS Glue serverless Spark UI to see detailed information for long-running batch or streaming jobs, making it easier to monitor and debug large batch and streaming Glue jobs.

Amazon CodeCatalyst now offers the ability to link issues

Amazon CodeCatalyst now offers the ability to link an issue to other issues. This allows customers to link issues in CodeCatalyst as blocked by, duplicate of, related to, or blocks another issue.

Customers use CodeCatalyst issues to organize and coordinate their team's daily work. In addition, customers want to identify and visualize relationships between issues to plan the work effectively. The new capability helps teams visualize dependencies between issues and see which issues are blocked by, duplicates of, or blocking other issues.

Amazon RDS for MariaDB supports minor versions 10.11.8, 10.6.18, 10.5.25, and 10.4.34

Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.11.8, 10.6.18, 10.5.25, and 10.4.34. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the bug fixes, performance improvements, and new functionality added by the MariaDB community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MariaDB instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.
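
As a brief sketch of a minor version upgrade with boto3 (the instance identifier is hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Upgrade to MariaDB 10.11.8 during the next maintenance window and opt
# in to automatic minor version upgrades going forward.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mariadb-instance",
    EngineVersion="10.11.8",
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,  # defer to the scheduled maintenance window
)
```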

Amazon RDS for MariaDB makes it straightforward to set up, operate, and scale MariaDB deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MariaDB. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

AWS Systems Manager now supports additional Rocky, Oracle, and Alma Linux versions

AWS Systems Manager now supports instances running Rocky Linux, Alma Linux, and Oracle Linux versions 8.8 and 8.9. Systems Manager customers running these operating system versions now have access to all AWS Systems Manager Node Management capabilities, including Fleet Manager, Compliance, Inventory, Hybrid Activations, Session Manager, Run Command, State Manager, Patch Manager, and Distributor. For a full list of supported operating systems and machine types for AWS Systems Manager, see the user guide. Patch Manager enables you to automatically patch instances with both security-related and other types of updates across your infrastructure for a variety of common operating systems, including Windows Server, Amazon Linux, and Red Hat Enterprise Linux (RHEL). For a full list of supported operating systems for AWS Systems Manager Patch Manager, see the Patch Manager prerequisites user guide page.

This feature is available in all AWS Regions where AWS Systems Manager is available. For more information, visit the Systems Manager product page and Systems Manager documentation.
 

AWS KMS now supports Elliptic Curve Diffie-Hellman (ECDH) key agreement

Elliptic Curve Diffie-Hellman (ECDH) key agreement enables two parties to establish a shared secret over a public channel. With this new feature, you can combine another party’s public key with your own elliptic-curve KMS key stored in AWS Key Management Service (AWS KMS) to derive a shared secret within the security boundary of FIPS 140-2 validated KMS hardware security modules (HSMs). This shared secret can then be used to derive a symmetric key to encrypt and decrypt data between the two parties using a symmetric encryption algorithm within your application.

You can use this feature directly within your own applications by calling the DeriveSharedSecret KMS API, or by using the latest version of the AWS Encryption SDK, which supports an ECDH keyring. The AWS Encryption SDK provides a simple interface for encrypting and decrypting data using a shared secret, automatically handling the key derivation and encryption process for you. In addition, ECDH key agreement can be an important building block for hybrid encryption schemes, or for seeding a secret inside remote devices and isolated compute environments like AWS Nitro Enclaves.
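
A minimal sketch of the direct API flow with boto3 (the key alias, peer key file, and HKDF context are hypothetical, and the key-derivation step is simplified for illustration):

```python
import boto3
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

kms = boto3.client("kms")

# The other party's DER-encoded public key, exchanged out of band.
with open("peer_public_key.der", "rb") as f:
    peer_public_key = f.read()

# Derive the shared secret inside the KMS HSM boundary.
resp = kms.derive_shared_secret(
    KeyId="alias/my-ecdh-key",
    KeyAgreementAlgorithm="ECDH",
    PublicKey=peer_public_key,
)

# Never use the raw shared secret directly; run it through a KDF to
# produce the symmetric encryption key.
symmetric_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"app-context"
).derive(resp["SharedSecret"])
```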

This new feature is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about this new capability, see DeriveSharedSecret KMS API in the AWS KMS API Reference.
 

AWS CodeBuild now supports organization and global GitHub webhooks

AWS CodeBuild now supports organization and global webhooks for GitHub and GitHub Enterprise Server. CodeBuild webhooks automatically detect changes in your repositories and trigger new builds whenever webhook events are received. These events include GitHub Actions workflow runs, commit pushes, releases, and pull requests.

With this feature, you can now configure a single CodeBuild webhook at the organization or enterprise level to receive events from all repositories in your organizations, instead of creating webhooks for each individual repository. For managed GitHub Actions self-hosted runners, this feature provides a centralized control mechanism, as you can set up the runner environment at the organization or enterprise level and use the same runners across all your repositories.

This feature is available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page.

To get started, set up organization or global webhooks in CodeBuild projects, and use them to run GitHub Actions workflow jobs or trigger builds upon push or pull request events. To learn more about using managed GitHub Actions self-hosted runners, see CodeBuild’s blog post.
 

Amazon EC2 C7i-flex instances are now available in US East (Ohio) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances, which deliver up to 19% better price performance compared to C6i instances, are available in the US East (Ohio) region. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price performance benefits for a majority of compute-intensive workloads. The new instances are powered by 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i.

C7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances.

C7i-flex instances are available in the following AWS Regions: US East (Ohio), US West (N. California), Europe (Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Mumbai, Singapore), and South America (São Paulo).

To learn more, visit Amazon EC2 C7i-flex instances.
 

Amazon RDS for Oracle now supports memory optimized R6i instance types

Starting today, Amazon Relational Database Service (RDS) for Oracle supports memory optimized R6i instance types featuring up to 8x the RAM per vCPU of the existing R6i instance types to better fit your workloads. Many Oracle database workloads require high memory, storage, and I/O bandwidth but can safely reduce the number of vCPUs without impacting application performance. Memory optimized R6i instances come in configurations from 2 vCPUs to 48 vCPUs, with memory from 32 GiB to 1,024 GiB and up to a 64:1 memory-to-vCPU ratio. These configurations allow you to right-size instances for your Oracle workloads.

Memory optimized R6i instances are available in the Bring Your Own License (BYOL) model for both Oracle Database Enterprise Edition (EE) and Oracle Database Standard Edition 2 (SE2). You can launch additional memory configurations of the R6i instance class in the Amazon RDS Management Console or using the AWS CLI.

Amazon DataZone introduces advanced search filtering capabilities

Amazon DataZone is a fully managed data management service to catalog, discover, analyze, share, and govern data between data producers and consumers in a customer’s organization. Amazon DataZone introduces advanced search filtering capabilities in its business data catalog. These include improved rendering of glossary term facets, the ability to switch between 'AND' and 'OR' logic for filtering, and clear summaries of selected filters, making data discovery more efficient and intuitive.

Data consumers can navigate and select from hundreds of glossary terms in an expandable, collapsible hierarchy and perform more precise searches using logic filters to help find data quickly. For example, a financial analyst preparing a report on investment performance can navigate and select relevant glossary terms from a hierarchical list, apply 'OR' logic to broaden the search or 'AND' logic to combine criteria such as investment type and industry for precise data, and review clear summaries of selected filters to efficiently adjust search results.

Amazon SES now publishes email sending events to EventBridge

Today, Amazon Simple Email Service (SES) released a new way to track email sending activity by delivering sending events to Amazon EventBridge. Customers can now select EventBridge as a delivery destination for configuration sets, making it easier to route event notifications such as bounces and complaints to any service supported by EventBridge. Customers can use EventBridge rules to filter events of interest and build workflows or data stores to process and store them. This information can be used for use cases such as contact list updates and deliverability analytics. This update makes it easier to capture and process SES sending events for custom workflow processing.

Previously, customers could set up event destinations through configuration sets to deliver sending events to Amazon CloudWatch, Amazon Kinesis Data Firehose, Amazon Pinpoint, or Amazon SNS. Delivery was not supported to other services, such as Amazon SQS queues. Customers had to use the services that were integrated with SES, and find a way to route events to other services if needed. Now, customers can route sending events to any service supported by EventBridge. Customers can build rule sets to send events to different resources, such as sending bounce and open events to different SQS queues. This makes it easier to get sending events into customer workflows with minimal setup and operational overhead.
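
A minimal sketch of such a rule with boto3 (the rule name and queue ARN are hypothetical, and the detail-type strings should be verified against the SES event publishing documentation):

```python
import json

import boto3

events = boto3.client("events")

# Route SES bounce and complaint events to an SQS queue.
events.put_rule(
    Name="ses-bounces-to-sqs",
    EventPattern=json.dumps({
        "source": ["aws.ses"],
        "detail-type": ["Email Bounced", "Email Complaint Received"],
    }),
)
events.put_targets(
    Rule="ses-bounces-to-sqs",
    Targets=[{
        "Id": "bounce-queue",
        "Arn": "arn:aws:sqs:us-east-1:111122223333:ses-bounces",
    }],
)
```

Note that the target queue's resource policy must also allow EventBridge to send messages to it.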

AWS IoT TwinMaker announces Dynamic Scene feature

AWS IoT TwinMaker makes it easier to create digital twins of real-world systems such as buildings, factories, and industrial equipment. Today, we are announcing the Dynamic Scene feature, which allows for the updating and rendering of 3D objects dynamically based on TwinMaker Entities and Knowledge Graph queries. This feature will make it easier for customers to create and update 3D scenes using Knowledge Graph queries.

Previously, customers had to manually edit scenes in the AWS console to make changes. With the Dynamic Scene feature, all 3D scene objects, settings, and data bindings are stored as entities in the Knowledge Graph. Now, customers can use Knowledge Graph queries to programmatically update the 3D scene and its structure when modifying existing assets, adding new assets, or adding new data overlays on recently installed sensors. End-users viewing these 3D scenes in their digital twin visualization applications will see the updates within 30 seconds.

The TwinMaker Dynamic Scene feature is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), China (Beijing), Europe (Frankfurt), Europe (Ireland) and the AWS GovCloud (US-West) Region.


To start creating 3D scenes using the Dynamic Scene feature, refer to the AWS User Guide. To learn more about AWS IoT TwinMaker, visit the AWS IoT TwinMaker product page.

Amazon CodeCatalyst now supports managing billing and access with a single AWS account

Today, Amazon CodeCatalyst announces support for customers who want to use the same AWS resources for billing and managing single sign-on access for multiple spaces. You can now connect multiple CodeCatalyst spaces to a single AWS IAM Identity Center application for managing access to CodeCatalyst spaces for your users. You can also connect multiple CodeCatalyst spaces to a single AWS account for billing purposes.

With SSO support through IAM Identity Center, it’s easier to manage which team members have access across multiple Amazon CodeCatalyst spaces. Additionally, customers can now centrally segment user access and billing across business units. Each business unit can be defined by group in IAM Identity Center and assigned its own space, limiting access and separating billing.

Amazon Bedrock is now available in the Europe (London), South America (São Paulo), and Canada (Central) regions

Beginning today, customers can use Amazon Bedrock in the Europe (London), South America (São Paulo), and Canada (Central) regions to easily build and scale generative AI applications using a variety of foundation models (FMs) as well as powerful tools to build generative AI applications.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as Amazon, via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

With this launch, Amazon Bedrock is now available in 13 AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore - limited access), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Frankfurt), Europe (Paris), Europe (Ireland - limited access), AWS GovCloud (US-West), Europe (London), South America (São Paulo), and Canada (Central).

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
 

Cross-region failover now available in AWS Elemental MediaPackage

Starting today, you can enable your content delivery network (CDN) to transparently fail over between two or more AWS Elemental MediaPackage Live origins in different AWS regions. This resilience capability uses the new support for CMAF Ingest (Interface-1) and the force-endpoint error configuration option to enable CDN failover to a backup origin in case of a stale or incomplete primary stream. This capability of MediaPackage is easy to use in combination with AWS Elemental MediaLive or AWS Elemental Live encoders, and Amazon CloudFront.

Configuring MediaPackage channels for cross-region failover helps ensure smooth playback for viewers during origin failovers, eliminates buffering and interruptions, and removes the need for any viewer action.

Transparent cross-region failover can be enabled at no additional cost per channel. For information on MediaPackage charges, visit the pricing page. To get started with transparent cross-region failover, visit the AWS Elemental MediaPackage documentation page.

MediaPackage functions independently or as part of AWS Elemental Media Services, a family of services that form the foundation of cloud-based video workflows and offer the capabilities you need to transport, create, package, monetize, and deliver video. Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaPackage is available.
 

Amazon DataZone achieves SOC, ISO, and CSA STAR certifications

Amazon DataZone, a fully managed data management service for cataloging, discovering, analyzing, sharing, and governing data within an organization, has achieved key milestones in security and compliance certifications. These accomplishments reflect an ongoing effort to expand compliance programs to support customers' architectural and regulatory needs.

The certifications include System and Organization Controls (SOC) 1, 2, and 3, validating the effectiveness of compliance controls through independent third-party assurance. Additionally, Amazon DataZone secured International Organization for Standardization (ISO) certifications (ISO 9001, ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, ISO 22301, ISO/IEC 27701, and ISO/IEC 20000), ensuring the highest standards of quality, safety, efficiency, and interoperability. Furthermore, the service achieved the Cloud Security Alliance Security, Trust & Assurance Registry (CSA STAR) certification under CCM 4.0, assessing the security and privacy controls of its cloud services. These certifications, along with the previous HIPAA certification, strengthen its commitment to maintaining security, quality, and compliance standards.

All reports are now accessible to AWS customers in the AWS Management Console through AWS Artifact.

For more information about Amazon DataZone and how to get started, refer to our product page and review the documentation.
 

Amazon EKS open sources Pod Identity agent

Today, Amazon EKS open sourced the Pod Identity agent, providing customers with more options to package and deploy the agent into EKS clusters. Pod Identity is a feature of EKS that simplifies the process for cluster administrators to configure Kubernetes applications with AWS IAM permissions. A prerequisite for using the Pod Identity feature is running the Pod Identity agent on the cluster’s worker nodes. With the Pod Identity agent being open sourced, you can now build the agent on your own. This gives you various options to package and deploy the agent, enabling you to align with your organization’s deployment practices.

With access to the Pod Identity agent’s source code, you can inspect the source code and perform necessary scans as part of your build process. Additionally, you can choose to package and deploy the Pod Identity agent as a binary in your custom EKS AMI. Alternatively, you can build a container image from the source code and store it in your preferred container registry. You can then deploy the containerized agent using a Helm chart or as a Kubernetes manifest file.

Announcing the Access Console for NICE DCV

AWS has launched the NICE DCV Access Console, a new web-based solution for administrators and end users to more easily manage their remote desktop sessions. NICE DCV is a high-performance remote display protocol that allows users to securely connect to remote desktops from any device. Customers can now efficiently deploy an out-of-the-box solution that centralizes their NICE DCV session management.

The NICE DCV Access Console complements the existing NICE DCV Session Manager, which provides installable software packages and an API to manage the lifecycle of sessions across a fleet of NICE DCV servers. With the Access Console, administrators can centrally create and manage sessions, more easily control user and group authorization, and visualize the underlying resources. For end users, the Access Console makes it easier to create and launch new sessions on any NICE DCV client.

AWS Launch Wizard now supports resource and tag-based access controls

AWS Launch Wizard now offers resource and tag-based access controls for improved governance and security. With today’s launch, you can add tags to your AWS Launch Wizard resources and define AWS Identity and Access Management (IAM) policies to specify fine-grained permissions based on resource IDs and tags. Similarly, for resource-level access controls, you can configure IAM policies through Amazon Resource Names (ARNs) or wildcards, and specify the users, roles and actions that are permitted on the resources.

AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third party applications, such as Microsoft SQL Server Always On and HANA based SAP systems, without the need to manually identify and provision individual AWS resources.

AWS Launch Wizard is available in 26 Regions including US East (N. Virginia, Ohio), Europe (Frankfurt, Ireland, London, Paris, Stockholm, and Milan), South America (Sao Paulo), US West (N. California, Oregon), Canada (Central), Asia Pacific (Mumbai, Seoul, Tokyo, Hong Kong, Hyderabad, Singapore, and Sydney), Middle East (Bahrain, UAE), Africa (Cape Town), China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). AWS Launch Wizard is also available in the AWS GovCloud (US-West, US-East) Regions.

To learn more about AWS Launch Wizard, visit the Launch Wizard Page. To get started, check out the Launch Wizard User Guide and API documentation page.
 

Amazon MQ is now available in AWS Canada West (Calgary) region

Amazon MQ is now available in the AWS Canada West (Calgary) region. With this launch, Amazon MQ is available in a total of 33 regions.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code.

Amazon ElastiCache Serverless now supports snapshot and restore for Memcached

Amazon ElastiCache Serverless now supports the ability to automatically back up and restore your Memcached data. You can now create a snapshot of your serverless Memcached cache and use it to restore the cache or seed data into a new serverless cache, enhancing data resilience and recovery.

Hundreds of thousands of customers use ElastiCache to build highly responsive applications. With ElastiCache Serverless, you can accelerate application performance without needing to manage infrastructure and capacity. Previously, customers running Memcached workloads on ElastiCache did not have the ability to back up or preserve their data in the event of a failure, data loss, or data migration. With ElastiCache Serverless, you can now back up your Memcached data by taking a snapshot and restore it with no impact on availability or performance.
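
A minimal sketch of the snapshot-and-restore flow with boto3 (cache, snapshot, and account identifiers are hypothetical; verify the parameter names against the ElastiCache API reference):

```python
import boto3

ec = boto3.client("elasticache")

# Snapshot an existing serverless Memcached cache.
ec.create_serverless_cache_snapshot(
    ServerlessCacheSnapshotName="memcached-snap-1",
    ServerlessCacheName="my-memcached-cache",
)

# Seed a new serverless cache from that snapshot.
ec.create_serverless_cache(
    ServerlessCacheName="my-memcached-restore",
    Engine="memcached",
    SnapshotArnsToRestore=[
        "arn:aws:elasticache:us-east-1:111122223333:serverlesscachesnapshot:memcached-snap-1"
    ],
)
```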

AWS User Notifications is now available in Canada West (Calgary) Region

AWS User Notifications is now available in Canada West (Calgary) Region. User Notifications enables you to view notifications across accounts, regions, and services in a Console Notifications Center, and configure delivery channels where you want to receive these notifications, like email, AWS Chatbot, and AWS Console Mobile App. You can centrally set up and view notifications from AWS services, such as AWS Health events, Amazon CloudWatch alarms, or Amazon EC2 instance state changes, in a consistent, human-readable format. Notifications include URLs to direct you to resources on the AWS Console, where you can take additional actions.

With User Notifications, you specify which events you want to be notified about, and in which channels. Any user with User Notifications permissions can enable notifications for use cases like CloudWatch alarm state changes and Health events. For example, email jane@example.com whenever an EC2 instance in region us-east-1 or ca-west-1 with tag ‘production’ changes state to “stopped”. In addition, you can aggregate multiple events into a single notification for an easy top-level view.

Configuring and viewing notifications in the Console Notifications Center is offered at no additional cost.

Amazon OpenSearch Ingestion adds support for customer managed VPC interface endpoints

Amazon OpenSearch Ingestion now allows you to create VPC interface endpoints to securely connect your VPC to an Amazon OpenSearch Ingestion pipeline via AWS PrivateLink. This gives you greater control in meeting your network and security posture by explicitly restricting VPC resource access to only the entities that need it. Furthermore, you can now connect multiple VPCs to a single Amazon OpenSearch Ingestion pipeline in an AWS account, enabling network architectures for centralized logging.

With customer-managed VPC endpoints, your VPC resources can communicate with Amazon OpenSearch Ingestion within the AWS network, which helps you meet your compliance and regulatory requirements to limit public internet connectivity. You can now use Amazon VPC APIs to connect your VPCs to Amazon OpenSearch Ingestion pipelines, giving you access to advanced VPC features like identity-based policies and alerts for endpoint events.
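
A rough sketch of the VPC side of that setup with boto3 (the VPC, subnet, and security group IDs are placeholders, and the PrivateLink service name shown is hypothetical; retrieve the actual endpoint service name for your pipeline from OpenSearch Ingestion):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface endpoint from your VPC to an OpenSearch Ingestion
# pipeline over AWS PrivateLink.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.osis.us-east-1.my-pipeline",  # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```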

AWS Mainframe Modernization Application Testing is now generally available

We are excited to announce the general availability of AWS Mainframe Modernization Application Testing, an AWS Mainframe Modernization service feature that automates functional equivalence and regression testing for mainframe applications being modernized and migrated to AWS.

When modernizing mainframe applications, customers typically spend a majority of their time on software testing, which requires specialized skill sets. The diversity of languages, data formats, protocols, and system dependencies adds to the testing complexity. To accelerate modernization and continuous testing even long after mainframe application modernization is complete, Application Testing provides cloud-native testing capabilities, including mainframe test recording, on-demand automated test replays, comparisons, and non-regression testing at scale. The feature validates functional equivalence between source and target applications by comparing underlying data changes at scale. This approach facilitates black-box testing that is agnostic to the application architecture and programming languages used. Data sources supported for such comparison include data sets, databases, and online screens.

Customers benefit from test case management, testing repeatability, and testing scalability, which translate to higher quality mass modernization, faster completion of projects, and lower costs. Built-in automation, easy access in the AWS Mainframe Modernization Console, and support for a wide variety of testing use cases can free up more time and resources for innovation.

Amazon DynamoDB supports pausing global table replication in the AWS GovCloud (US) Regions

Amazon DynamoDB now supports an AWS Fault Injection Service (FIS) action to pause replication for global tables in the GovCloud (US) Regions. FIS is a fully managed service for running controlled fault injection experiments to improve an application’s performance, observability, and resilience. Global tables replicate your Amazon DynamoDB tables automatically across your choice of AWS Regions to achieve fast, local read and write performance. Customers can use the new FIS action to observe how their application responds to a pause in regional replication, and tune their monitoring and recovery process to improve resiliency and application availability.

Global tables are designed to meet the needs of high availability applications, providing you 99.999% availability, increased application resiliency, and improved business continuity. This new FIS action reproduces the real-world behavior when replication to a global table replica is interrupted and resumed. This lets customers test and build confidence that their application responds as intended when resources in a Region are not accessible. Customers can create an experiment template in FIS to integrate the experiment with continuous integration and release testing and to combine with other FIS actions. For example, DynamoDB Pause Replication is combined with other actions in the Cross-Region: Connectivity scenario to isolate a Region.
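
A sketch of such an experiment template with boto3 (the role ARN, tag filter, duration, and target key are illustrative assumptions; confirm the action identifier and its target key in the FIS action reference):

```python
import boto3

fis = boto3.client("fis")

# Pause replication for global tables tagged FIS=allowed for 5 minutes.
fis.create_experiment_template(
    description="Pause DynamoDB global table replication",
    roleArn="arn:aws:iam::111122223333:role/fis-experiment-role",
    stopConditions=[{"source": "none"}],
    targets={
        "tables": {
            "resourceType": "aws:dynamodb:global-table",
            "resourceTags": {"FIS": "allowed"},
            "selectionMode": "ALL",
        }
    },
    actions={
        "pauseReplication": {
            "actionId": "aws:dynamodb:global-table-pause-replication",
            "parameters": {"duration": "PT5M"},
            "targets": {"Tables": "tables"},
        }
    },
)
```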

Amazon RDS for SQL Server Supports Minor Version 2022 CU13

A new minor version of Microsoft SQL Server is now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports the latest minor version of SQL Server 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor version is SQL Server 2022 CU13 (16.0.4125.3).

The minor version is available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, as well as the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
 

Productionize Foundation Models from SageMaker Canvas

Amazon SageMaker Canvas now supports deploying Foundation Models (FMs) to SageMaker real-time inference endpoints, allowing you to bring generative AI capabilities into production and consume them outside the Canvas workspace. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions and use generative AI capabilities.

SageMaker Canvas provides access to FMs powered by Amazon Bedrock and SageMaker JumpStart, and supports RAG-based customization and fine-tuning of FMs. Starting today, you can deploy FMs powered by SageMaker JumpStart, such as Falcon-7B and Llama-2, to SageMaker endpoints, making it easier to integrate generative AI capabilities into your applications outside the SageMaker Canvas workspace. FMs powered by Amazon Bedrock can already be accessed using a single API outside the SageMaker workspace. By simplifying the deployment process, SageMaker Canvas accelerates time-to-value and ensures a smooth transition from experimentation to production.

To get started, log in to SageMaker Canvas to access the FMs powered by SageMaker JumpStart. Select the desired model and deploy it with the appropriate endpoint configuration, keeping the endpoint active indefinitely or for a specific duration. SageMaker inferencing charges apply to deployed models. New users can access the latest version by launching SageMaker Canvas directly from the AWS console. Existing users can access the latest version of SageMaker Canvas by clicking “Log Out” and logging back in.

Research and Engineering Studio on AWS, Version 2024.06 now available

Today we’re excited to announce the release of Research and Engineering Studio (RES) on AWS Version 2024.06. This latest release brings support for Ubuntu 22.04, the ability to designate users of your RES environment as project owners, and a new demo experience.

RES on AWS 2024.06 now offers users the ability to launch virtual desktops with Ubuntu (22.04.3 - LTS). Users can use either the base image available through RES or create their own Ubuntu RES Ready AMI to preload their custom dependencies and applications. This latest release also allows RES administrators to designate certain individuals in their environment as project owners. Project owners can assist in the administration and management of RES projects by adding or removing users and groups.

The new demo experience automates deployment of a RES environment. The demo automates the process of setting up Single Sign-On (SSO) by using Keycloak as an alternative to AWS IAM Identity Center. Visit our demo page to learn more.

RES 2024.06 is available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Milan), Israel (Tel Aviv) and the AWS GovCloud (US-West) Regions.

Check out additional release notes on GitHub to get started and deploy RES 2024.06 today.

Amazon OpenSearch Serverless now supports Internet Protocol Version 6 (IPv6)

We are excited to announce that Amazon OpenSearch Serverless now offers customers the option to use Internet Protocol version 6 (IPv6) addresses for the endpoints of their OpenSearch Serverless collections. Customers moving to IPv6 can simplify their network stack by enabling their OpenSearch Serverless endpoints with both IPv4 and IPv6 addresses. The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude, so customers will no longer need to manage overlapping address spaces in their VPCs. Customers can also standardize their applications on the new version of the Internet Protocol by moving their OpenSearch Serverless endpoints to IPv6 only.

OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). 

Amazon SES now provides custom values in the Feedback-ID header

Today, Amazon Simple Email Service (SES) released a new feature to give customers control over parts of the auto-generated Feedback-ID header in messages sent through SES. This feature provides additional details to help customers identify deliverability trends. Customers can use products like Postmaster Tools by Gmail to see complaint rates by identifiers of their choice, such as sender identity or campaign ID. This makes it easier to track deliverability performance associated with independent workloads and campaigns, and accelerates troubleshooting when diagnosing complaint rates.

Previously, SES automatically generated a Feedback-ID header when sending emails on behalf of SES customers. This Feedback-ID helps customers track their deliverability performance, such as complaint rates, at the AWS account level. Now SES includes up to two custom values in the Feedback-ID header, which customers can pass to SES during sending. Customers specify message tag values for either “ses:feedback-id-a” or “ses:feedback-id-b” (or both), and SES automatically includes these values as the first and second fields in the Feedback-ID header, respectively. This gives even more granularity when viewing deliverability metrics in tools such as Postmaster Tools by Gmail.
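
A minimal sketch with the SESv2 API in boto3 (the addresses and tag values are hypothetical; the “ses:feedback-id-a”/“ses:feedback-id-b” tag names come from this feature):

```python
import boto3

sesv2 = boto3.client("sesv2")

# Pass custom Feedback-ID fields as message tags; SES inserts them as
# the first and second fields of the generated Feedback-ID header.
sesv2.send_email(
    FromEmailAddress="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={
        "Simple": {
            "Subject": {"Data": "June newsletter"},
            "Body": {"Text": {"Data": "Hello!"}},
        }
    },
    EmailTags=[
        {"Name": "ses:feedback-id-a", "Value": "newsletter"},
        {"Name": "ses:feedback-id-b", "Value": "campaign-2024-06"},
    ],
)
```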

SES supports fine-grained Feedback-ID headers in all AWS regions where SES is available.

For more information, see the documentation for SES event publishing.

Amazon Connect now provides color coding for shift activities in agent scheduling

Amazon Connect now provides color coding for shift activities in agent scheduling, enabling a simplified experience for contact center managers and agents. With this launch, you can now configure colors for agent shift activities, such as red for breaks and lunches, green for team meetings, and purple for training sessions. With customizable colors, managers can quickly see how different activities are placed in agent schedules (e.g., is more than half the team in a training session at the same time, does the team meeting include everyone). This launch also simplifies the experience for agents, as they can understand their schedule for the week at a glance without having to read through each scheduled activity. Customizable colors make day-to-day schedule management more efficient for managers and agents.

AWS CloudTrail Lake announces AI-powered natural language query generation (preview)

AWS announces generative AI-powered natural language query generation in AWS CloudTrail Lake (preview), enabling you to analyze your AWS activity events without having to write complex SQL queries. Now you can ask questions in plain English about your AWS API and user activity, such as “How many errors were logged during the past week for each service and what was the cause of each error?” or “Show me all users who logged in using console yesterday”, and AWS CloudTrail will generate a SQL query, which you can run as is or fine-tune to meet your use case.

This new feature empowers users who are not experts in writing SQL queries or who don’t have a deep understanding of CloudTrail events. As a result, exploration and analysis of AWS activity in event data stores on CloudTrail Lake becomes simpler and quicker, accelerating compliance, security, and operational investigation.

This feature is now available in preview in AWS US East (N. Virginia) at no additional cost. Please note that running the queries generated using this feature will result in CloudTrail Lake query charges. Refer to CloudTrail pricing for details. To learn more about this feature and get started, please refer to the documentation or the AWS News Blog.
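
A brief sketch of the query generation flow with boto3 (the event data store ID is a placeholder; verify the operation shape against the CloudTrail API reference):

```python
import boto3

ct = boto3.client("cloudtrail")

# Turn a plain-English question into SQL, then run the generated query.
resp = ct.generate_query(
    EventDataStores=["my-event-data-store-id"],  # placeholder
    Prompt="Show me all users who logged in using console yesterday",
)
query = ct.start_query(QueryStatement=resp["QueryStatement"])
print("Started query:", query["QueryId"])
```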

AWS Audit Manager generative AI best practices framework now includes Amazon SageMaker

Available today, the AWS Audit Manager generative AI best practices framework now includes Amazon SageMaker in addition to Amazon Bedrock. Customers can use this prebuilt standard framework to gain visibility into how their generative AI implementation on SageMaker or Amazon Bedrock follows AWS recommended best practices and start auditing their generative AI usage and automating evidence collection. The framework provides a consistent approach for tracking AI model usage and permissions, flagging sensitive data, and alerting on issues.

This framework includes 110 controls across areas such as governance, data security, privacy, incident management, and business continuity planning. Customers can select and customize controls to structure automated assessments. For example, customers seeking to mitigate known biases before feeding data into their model can use the ‘Pre-processing Techniques’ control to require evidence of validation criteria including documentation of data augmentation, re-weighting, or re-sampling. Similarly, customers can use the 'Bias and Ethics Training' control to upload documentation demonstrating that their workforce is trained to address ethical considerations and AI bias in the model.

AWS Cloud WAN introduces Service Insertion to simplify security inspection at global scale

Today, AWS announces Service Insertion, a new feature of AWS Cloud WAN that simplifies the integration of security and inspection services into Cloud WAN based global networks. Using this feature, you can easily steer your global network traffic between Amazon VPCs (Virtual Private Clouds), AWS Regions, on-premises locations, and the internet via security appliances or inspection services, using a central Cloud WAN policy or the AWS Management Console.

Customers deploy inspection services or security appliances such as firewalls, intrusion detection/prevention systems (IDS/IPS), and secure web gateways to inspect and protect their global Cloud WAN traffic. With Service Insertion, customers can easily steer multi-region or multi-segment network traffic to security appliances or services without having to create and manage complex routing configurations or third-party automation tools. Using Service Insertion, you define your inspection and routing intent in a central policy document, and your configuration is consistently deployed across your Cloud WAN network. Service Insertion works with both AWS Network Firewall and third-party security solutions, and makes it easy to perform east-west (VPC-to-VPC) and north-south (internet ingress/egress) security inspection across multiple AWS Regions and on-premises locations across the globe.

AWS IAM Access Analyzer now offers policy checks for public and critical resource access

AWS Identity and Access Management (IAM) Access Analyzer guides customers toward least privilege by providing tools to set, verify, and refine permissions. IAM Access Analyzer now extends custom policy checks to proactively detect nonconformant updates to policies that grant public access or grant access to critical AWS resources ahead of deployments. Security teams can use these checks to streamline their IAM policy reviews, automatically approving policies that conform with their security standards and inspecting more deeply when policies don’t conform. Custom policy checks use the power of automated reasoning to provide the highest levels of security assurance backed by mathematical proof.

Security and development teams can innovate faster by automating and scaling their policy reviews for public and critical resource access. You can integrate these custom policy checks into the tools and environments where developers author their policies, such as their CI/CD pipelines, GitHub, and VSCode. Developers can create or modify an IAM policy, and then commit it to a code repository. If custom policy checks determine that the policy adheres to your security standards, your policy review automation lets the deployment process continue. If custom policy checks determine that the policy does not adhere to your security standards, developers can review and update the policy before deploying it to production.
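
A minimal sketch of gating a pipeline on one of these checks with boto3 (the example bucket policy is hypothetical, and the response fields should be verified against the IAM Access Analyzer API reference):

```python
import json

import boto3

aa = boto3.client("accessanalyzer")

# A candidate resource policy under review; this one grants public read.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
    }],
}

result = aa.check_no_public_access(
    policyDocument=json.dumps(policy),
    resourceType="AWS::S3::Bucket",
)
if result["result"] == "FAIL":
    raise SystemExit("Policy grants public access; blocking deployment.")
```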

AWS Identity and Access Management now supports passkey as a second authentication factor

AWS Identity and Access Management (IAM) now supports passkeys for multi-factor authentication to provide easy and secure sign-ins across your devices. Based on FIDO standards, passkeys use public key cryptography, which enables strong, phishing-resistant authentication that is more secure than passwords. IAM now allows you to secure access to AWS accounts using passkeys for multi-factor authentication (MFA), with support for built-in authenticators such as Touch ID on Apple MacBooks and Windows Hello facial recognition on PCs. Passkeys can be created with a hardware security key or with your chosen passkey provider using your fingerprint, face, or device PIN, and they are synced across your devices so you can sign in to AWS.

AWS Identity and Access Management helps you securely manage identities and control access to AWS services and resources. MFA is a security best practice in IAM that requires a second authentication factor in addition to the user name and password sign-in credentials. Passkey support in IAM is a new feature to further enhance MFA usability and recoverability. You can use a range of supported IAM MFA methods, including FIDO-certified security keys to harden access to your AWS accounts.

This feature is available now in all AWS Regions, except in the China Regions. To learn more about using passkeys in IAM, get started by visiting the launch blog post and Using MFA in AWS documentation.


AWS Private CA introduces Connector for SCEP for mobile devices (Preview)

AWS Private Certificate Authority (AWS Private CA) launches the Connector for SCEP, which lets you use a managed and secure cloud certificate authority (CA) to enroll mobile devices securely and at scale. Simple Certificate Enrollment Protocol (SCEP) is a protocol widely adopted by mobile device management (MDM) solutions for getting digital identity certificates from a CA and enrolling corporate-issued and bring-your-own-device (BYOD) mobile devices. With the Connector for SCEP, you use a managed private CA with a managed SCEP solution to reduce operational costs, simplify processes, and optimize your public key infrastructure (PKI). Additionally, the Connector for SCEP lets you use AWS Private CA with industry-leading SCEP-compatible MDM solutions including Microsoft Intune and Jamf Pro.

The Connector for SCEP is one of three connector types offered for AWS Private CA. Connectors allow you to replace your existing CAs with AWS Private CA in environments that have an established native certificate distribution solution. This means that instead of using multiple CA solutions, you can utilize a single private CA solution for your enterprise. You benefit from comprehensive support, extending to Kubernetes, Active Directory, and, now, mobile devices.

During the preview period, the Connector for SCEP is available in the US East (N. Virginia) AWS Region.

This feature is offered at no additional charge; you pay only for the AWS Private CAs and the certificates issued from them. To get started, see the Getting started guide or go to the Connector for SCEP console.

Detect malware in new object uploads to Amazon S3 with Amazon GuardDuty

Today, Amazon Web Services (AWS) announces the general availability of Amazon GuardDuty Malware Protection for Amazon S3. This expansion of GuardDuty Malware Protection allows you to scan newly uploaded objects to Amazon S3 buckets for potential malware, viruses, and other suspicious uploads and take action to isolate them before they are ingested into downstream processes.

GuardDuty helps customers protect millions of Amazon S3 buckets and AWS accounts. GuardDuty Malware Protection for Amazon S3 is fully managed by AWS, alleviating the operational complexity and overhead that normally comes with managing a data-scanning pipeline, with compute infrastructure operated on your behalf. This feature also gives application owners more control over the security of their organization’s S3 buckets; they can enable GuardDuty Malware Protection for S3 even if core GuardDuty is not enabled in the account. Application owners are automatically notified of the scan results using Amazon EventBridge to build downstream workflows, such as isolation to a quarantine bucket, or define bucket policies using tags that prevent users or applications from accessing certain objects.
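
A rough sketch of enabling protection for one bucket with boto3 (the role and bucket names are hypothetical, and the parameter shape should be checked against the GuardDuty API reference; the role needs the permissions listed in the Malware Protection for S3 documentation):

```python
import boto3

gd = boto3.client("guardduty")

# Scan new uploads to one bucket and tag objects with the scan result so
# bucket policies can quarantine suspicious objects.
gd.create_malware_protection_plan(
    role="arn:aws:iam::111122223333:role/GuardDutyS3MalwareScanRole",
    protectedResource={"s3Bucket": {"bucketName": "my-upload-bucket"}},
    actions={"tagging": {"status": "ENABLED"}},
)
```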

AWS IAM Access Analyzer now offers recommendations to refine unused access

AWS Identity and Access Management (IAM) Access Analyzer guides customers toward least privilege by providing tools to set, verify, and refine permissions. IAM Access Analyzer now offers actionable recommendations to guide you to remediate unused access. For unused roles, access keys, and passwords, IAM Access Analyzer provides quick links in the console to help you delete them. For unused permissions, IAM Access Analyzer reviews your existing policies and recommends a refined version tailored to your access activity.

As a central security team member, you can use IAM Access Analyzer to gain visibility into unused access across your AWS organization and automate how you rightsize permissions. Security teams set up automated workflows to notify their developers about new IAM Access Analyzer findings. Now, you can include step-by-step recommendations provided by IAM Access Analyzer to notify and simplify how developers refine unused permissions. This feature is offered at no additional cost with unused access findings and is a part of the growing Cloud Infrastructure Entitlement Management capabilities at AWS. The recommendations are available in AWS Commercial Regions, excluding the AWS GovCloud (US) Regions and AWS China Regions.


Amazon ECS on AWS Fargate now allows you to encrypt ephemeral storage with customer-managed KMS keys

Amazon Elastic Container Service (Amazon ECS) and AWS Fargate now allow you to use customer managed keys in AWS Key Management Service (KMS) to encrypt data stored in Fargate task ephemeral storage. Ephemeral storage for tasks running on Fargate platform version 1.4.0 or higher is encrypted with AWS owned keys by default. This feature allows you to add a self-managed security layer which can help you meet compliance requirements.

Customers who run applications that deal with sensitive data often need to encrypt data using self-managed keys to meet security or regulatory requirements and also provide encryption visibility to auditors. To meet these requirements, you can now configure a customer-managed KMS key for your ECS cluster to encrypt the ephemeral storage for all Fargate tasks in the cluster. You can manage this key and audit access like any other KMS key. Customers can use this feature to configure encryption for new and existing ECS applications without changes from developers.
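
A minimal sketch of the cluster-level configuration with boto3 (the cluster name and key ARN are hypothetical; confirm the configuration keys against the ECS API reference):

```python
import boto3

ecs = boto3.client("ecs")

# Encrypt Fargate task ephemeral storage in this cluster with a
# customer managed KMS key; new Fargate tasks pick it up automatically.
ecs.create_cluster(
    clusterName="payments-cluster",
    configuration={
        "managedStorageConfiguration": {
            "fargateEphemeralStorageKmsKeyId": (
                "arn:aws:kms:us-east-1:111122223333:key/"
                "1234abcd-12ab-34cd-56ef-1234567890ab"
            )
        }
    },
)
```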

Amazon CloudWatch Application Signals, for application performance monitoring (APM), is generally available

Today, AWS announces the general availability of Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch that makes it easy to automatically instrument and track application performance against your most important business or service level objectives (SLOs) for applications on AWS. With no manual effort, no custom code, and no custom dashboards, Application Signals provides service operators with a pre-built, standardized dashboard showing the most important application performance metrics: volume, availability, latency, faults, and errors, for each of their applications on AWS.

By correlating telemetry across metrics, traces, logs, real-user monitoring, and synthetic monitoring, Application Signals enables customers to speed up troubleshooting and reduce application disruption. For example, an application developer operating a payment processing application can see whether payment processing latency is spiking and drill into the precisely correlated trace contributing to the spike to determine whether the cause lies in the application code or in a dependency. Developers who use Container Insights to monitor their container infrastructure can further identify a root cause, such as a memory shortage or high CPU utilization on the container pod running the application code.

Application Signals is generally available in 28 commercial AWS Regions, excluding the CA West (Calgary) Region, the AWS GovCloud (US) Regions, and the China Regions. For pricing, see Amazon CloudWatch pricing.

Try Application Signals with the AWS One Observability Workshop sample application. To learn more, see the documentation to enable Amazon CloudWatch Application Signals for Amazon EKS, Amazon EC2, native Kubernetes, and custom instrumentation for other platforms.
 

Amazon RDS for PostgreSQL announces Extended Support minor 11.22-RDS.20240509

Amazon Relational Database Service (RDS) for PostgreSQL announces Amazon RDS Extended Support minor version 11.22-RDS.20240509. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of PostgreSQL.

Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL and PostgreSQL databases on Aurora and RDS after the community ends support for a major version. You can run your PostgreSQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide.

You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide.
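
For example, a minimal boto3 sketch for opting in, or for moving directly to this Extended Support minor, might look like the following; the instance identifier is a placeholder, and the exact engine version string should be confirmed with describe-db-engine-versions.

    import boto3

    rds = boto3.client("rds")

    # Opt an existing instance into automatic minor version upgrades, applied
    # during its scheduled maintenance window (identifier is a placeholder)
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-11-instance",
        AutoMinorVersionUpgrade=True,
    )

    # Or upgrade explicitly to the Extended Support minor announced above
    # (confirm the exact version string with describe-db-engine-versions)
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-11-instance",
        EngineVersion="11.22-rds.20240509",
        ApplyImmediately=True,
    )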

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Amazon Security Lake is now available in the AWS GovCloud (US) Regions

Amazon Security Lake is now available in the AWS GovCloud (US) Regions. You can now centralize security data from AWS environments, SaaS providers, on-premises sources, and other cloud sources into a purpose-built data lake stored in Amazon S3 buckets in your AWS account.

Security Lake makes it easier to analyze security data, gain a more comprehensive understanding of security across your entire organization, and improve the protection of your workloads, applications, and data. Security Lake automates the collection and management of your security data across accounts and AWS Regions so that you can use your preferred analytics tools while retaining control and ownership over your security data.

For more information about the AWS Regions where Security Lake is available, see the AWS Region table. You can start your 15-day free trial of Amazon Security Lake with a single click in the AWS Management Console.

To get started, see the Amazon Security Lake documentation.

AWS CloudFormation accelerates dev-test cycle with adjustable timeouts for custom resources

AWS CloudFormation launches a new property for custom resources called ServiceTimeout. This new property allows customers to set a maximum timeout for the execution of the provisioning logic in a custom resource, enabling faster feedback loops in dev-test cycles.

CloudFormation custom resources allow customers to write their own provisioning logic in CloudFormation templates and have CloudFormation run that logic during a stack operation. Custom resources use a callback pattern in which the custom resource must respond to CloudFormation within a timeout of one hour. Previously, this timeout was not configurable, so a bug in the custom resource's logic could leave a stack operation waiting for the full hour. With the new ServiceTimeout property, customers can set a shorter timeout, after which CloudFormation fails the execution of the custom resource. This accelerates feedback on failures, allowing for quicker dev-test iterations.
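
As an illustrative sketch, the template fragment below (submitted here with boto3) sets a five-minute timeout on a custom resource. The ServiceToken ARN is a placeholder, and we assume ServiceTimeout takes a value in seconds; confirm the accepted format in the custom resources documentation.

    import json
    import boto3

    # Minimal template: a custom resource that CloudFormation fails after 300
    # seconds instead of the one-hour default (ServiceToken ARN is a placeholder)
    template = {
        "Resources": {
            "SlowProvisioner": {
                "Type": "Custom::SlowProvisioner",
                "Properties": {
                    "ServiceToken": "arn:aws:lambda:us-east-1:111122223333:function:provisioner",
                    "ServiceTimeout": "300",
                },
            },
        },
    }

    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName="service-timeout-demo", TemplateBody=json.dumps(template))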

The new ServiceTimeout property is available in all AWS Regions where AWS CloudFormation is available. Refer to the AWS Region table for details.

Refer to the custom resources documentation to learn more about the ServiceTimeout property.
 

Amazon EC2 M6in and M6idn instances are now available in Asia Pacific (Mumbai) and Canada (Central)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in the Asia Pacific (Mumbai) and Canada (Central) AWS Regions. These sixth-generation network-optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth (2x more than comparable fifth-generation instances) and up to 2x higher packet-processing performance. Customers can use M6in and M6idn instances to scale the performance and throughput of network-intensive workloads such as high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, and telco applications such as the 5G User Plane Function.

M6in and M6idn instances are available in 10 different sizes, including metal, with up to 128 vCPUs and 512 GiB of memory. They deliver up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth and up to 400K IOPS, the highest Amazon EBS performance across EC2 instances. M6in and M6idn instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. M6idn instances offer up to 7.6 TB of high-speed, low-latency instance storage.

With this regional expansion, M6in and M6idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Ireland, Frankfurt, Spain, Stockholm), Asia Pacific (Mumbai, Singapore, Tokyo, Sydney), Canada (Central), and AWS GovCloud (US-West). Customers can purchase the new instances as On-Demand or Spot Instances, or through Savings Plans and Reserved Instances. To learn more, see the M6in and M6idn instances page.
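
For example, launching one of these instances in the Asia Pacific (Mumbai) Region with boto3 might look like the following sketch; the AMI ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")  # Asia Pacific (Mumbai)

    # Launch a single network-optimized M6in instance (AMI ID is a placeholder)
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m6in.xlarge",
        MinCount=1,
        MaxCount=1,
    )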

Amazon CloudWatch announces AI-powered natural language query generation

Amazon CloudWatch announces the general availability of natural language query generation, powered by generative AI, for Logs Insights and Metrics Insights. This feature enables you to quickly generate queries in the context of your logs and metrics data using plain language. By simplifying query generation, it helps you gather insights from your observability data faster, without needing extensive knowledge of the query language.

Query generator simplifies your CloudWatch Logs Insights and Metrics Insights experience through natural language querying. You can ask questions in plain English, such as "Show me the 10 slowest Lambda requests in the last 24 hours" or "Which DynamoDB table is most throttled?", and it will generate the appropriate queries, refine any existing query in the query window, and now automatically adjust the time range for queries that require data within a specified period. It also provides line-by-line explanations of the generated code, helping you learn the query syntax.
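
To make the result concrete, the sketch below runs the kind of Logs Insights query the generator might produce for the Lambda example; the log group name is a placeholder, and the exact generated query text may differ.

    import time
    import boto3

    logs = boto3.client("logs")

    # The kind of query the generator might produce for
    # "Show me the 10 slowest Lambda requests in the last 24 hours"
    query = 'fields @timestamp, @requestId, @duration | filter @type = "REPORT" | sort @duration desc | limit 10'

    now = int(time.time())
    started = logs.start_query(
        logGroupName="/aws/lambda/my-function",  # placeholder
        startTime=now - 24 * 3600,
        endTime=now,
        queryString=query,
    )

    # Poll until the query finishes, then print the result rows
    while True:
        result = logs.get_query_results(queryId=started["queryId"])
        if result["status"] not in ("Scheduled", "Running"):
            break
        time.sleep(1)
    print(result["results"])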

This feature is now supported in the US East (N. Virginia), US West (Oregon), and Asia Pacific (Tokyo) Regions.

To access the feature, choose "Query generator" in the CloudWatch Logs Insights or Metrics Insights console pages. In the help panel, select "Info" for more information. There is no charge for using Query generator, but any queries executed in Logs Insights or Metrics Insights are subject to standard CloudWatch pricing. To learn more about Query generator in CloudWatch Logs Insights or Metrics Insights, visit our getting started guide.
 

Amazon Redshift Serverless is now available in the AWS Middle East (UAE) Region

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Middle East (UAE) Region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads, on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.
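
As a brief sketch, getting started programmatically could look like the following; the namespace and workgroup names are placeholders, and the console flow described above works just as well.

    import boto3

    rs = boto3.client("redshift-serverless", region_name="me-central-1")  # Middle East (UAE)

    # Create a namespace (databases and users) and a workgroup (compute);
    # names are placeholders, and capacity defaults apply unless overridden
    rs.create_namespace(namespaceName="analytics-ns")
    rs.create_workgroup(workgroupName="analytics-wg", namespaceName="analytics-ns")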

Amazon CodeCatalyst now supports Bitbucket Cloud source code repositories

Amazon CodeCatalyst now supports the use of source code repositories hosted in Bitbucket Cloud in CodeCatalyst projects. This allows customers to use Bitbucket Cloud repositories with CodeCatalyst features such as its cloud IDE (Development Environments). Customers can also view the status of CodeCatalyst workflows in Bitbucket Cloud, and even block Bitbucket Cloud pull request merges based on the status of CodeCatalyst workflows.

Customers want the flexibility to use source code repositories hosted in Bitbucket Cloud without the need to migrate to CodeCatalyst to use its functionality. Migration is a long process, and customers want to evaluate CodeCatalyst and its capabilities using their own code repositories before they decide to migrate. Support for popular source code providers such as Bitbucket Cloud is the top customer ask for CodeCatalyst. Now customers can use the capabilities of CodeCatalyst without migrating their source code from Bitbucket Cloud.

This capability is available in regions where CodeCatalyst is available. There is no change to pricing.

For more information, see the documentation or visit the Amazon CodeCatalyst website.
