As of this writing, the functionality described here is being planned and is under development. There is no firm date yet for release to Cornell AWS customers. Please send any feedback or questions to Paul Allen.

Introduction

Cornell AWS customers have the option to opt in to use an AWS VPC that is shared with other Cornell AWS customers. The subnets in this shared VPC have CIDR blocks in the private Cornell network.

The resources deployed to the shared VPC have network access to other Cornell network resources, specifically:

  • all Cornell Standard VPCs in AWS, via Transit Gateway
  • on-campus Cornell networks, via Direct Connect
  • private Cornell VNets in Azure, via Internet2 Cloud Connect

In the past, each Cornell AWS customer that required access to the private Cornell network in AWS received their own Cornell Standard VPC for their exclusive use. In contrast, the shared Cornell AWS VPC described in this document provides similar network connectivity through a set of AWS subnets shared among multiple Cornell AWS customers.
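Because the shared subnets sit inside the private Cornell address space, it can be handy to check whether a given address falls in that space. The sketch below uses Python's standard-library `ipaddress` module; the `10.0.0.0/8` block is purely illustrative — the actual Cornell private CIDR blocks are assigned by the CIT Cloud Team.

```python
import ipaddress

# Illustrative only: 10.0.0.0/8 is a stand-in for the private Cornell
# address space; the real CIDR blocks are assigned by the CIT Cloud Team.
CORNELL_PRIVATE_BLOCKS = [ipaddress.ip_network("10.0.0.0/8")]

def in_cornell_private_space(address: str) -> bool:
    """Return True if the address falls inside one of the private blocks."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in CORNELL_PRIVATE_BLOCKS)

print(in_cornell_private_space("10.92.1.15"))   # True  (private address)
print(in_cornell_private_space("52.23.100.4"))  # False (public AWS address)
```

In practice the private address of an instance is what matters for Transit Gateway and Direct Connect reachability, not any public address it may also have.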

Benefits of Using the Shared VPC

Cornell AWS customers that opt in to use the Shared VPC gain the following benefits:

  • Less VPC management – The CIT Cloud Team manages the subnets, network ACLs, and Route Tables in the shared VPC. Customers manage the Security Groups applied to their EC2 instances and other resources deployed in the shared VPC.
  • Lower cost
    • Each Cornell Standard VPC contains at least one NAT Gateway, which typically costs about $1/day to run. In contrast, NAT Gateways deployed in the shared VPC are managed and paid for by CIT.
    • VPC Flow Logs in the shared VPC are paid for by CIT.
  • Increased resiliency
    • Customers using the shared VPC have access to subnets in all of the Availability Zones in the us-east-1 AWS Region. In contrast, the Cornell Standard VPC is typically deployed to only two Availability Zones.
    • Each private subnet in the shared VPC utilizes a NAT Gateway local to the Availability Zone where the subnet is deployed. In contrast, private subnets in the Cornell Standard VPC typically utilize a single NAT Gateway in a single Availability Zone.
  • Availability Zone matching
    • Since the Shared VPC offers access to all Availability Zones in us-east-1, customers can deploy resources in specific AZs, for example to match the AZs used by partners or vendors.
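Availability Zone matching amounts to filtering the shared subnets by AZ. The sketch below shows the selection logic only; the dict shape mirrors the EC2 `DescribeSubnets` response, and in practice the subnet list would come from `boto3` (`ec2.describe_subnets`), which is omitted here. The subnet IDs are hypothetical.

```python
# Sketch: pick a shared-VPC subnet in a specific Availability Zone.
# The dict shape mirrors the EC2 DescribeSubnets response; a real list
# would come from boto3's ec2.describe_subnets (omitted here).

def subnet_in_az(subnets, target_az):
    """Return the SubnetId of the first subnet in target_az, or None."""
    for subnet in subnets:
        if subnet["AvailabilityZone"] == target_az:
            return subnet["SubnetId"]
    return None

# Hypothetical shared subnets:
shared_subnets = [
    {"SubnetId": "subnet-aaa", "AvailabilityZone": "us-east-1a"},
    {"SubnetId": "subnet-ddd", "AvailabilityZone": "us-east-1d"},
]

# Match the AZ a partner or vendor is using:
print(subnet_in_az(shared_subnets, "us-east-1d"))  # subnet-ddd
```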

Caveats of Using the Shared VPC

There are a few caveats to be aware of when deciding whether to opt in to use the Shared VPC:

  • Customers using the Shared VPC still require their own Cornell AWS account. Once a customer opts in, the Shared VPC becomes accessible from that AWS account.
  • The Network ACLs used by the Shared VPC may be somewhat more permissive than those a Cornell AWS customer might design for a VPC of their own. Because the Shared VPC supports a wide range of use cases and deployment architectures, it cannot be customized (i.e., "locked down") for a single customer. Note that customers still manage the Security Groups applied to their EC2 instances and other resources deployed in the shared VPC, so they retain ultimate control over connectivity and access to their resources.
  • To be continued...

Use Cases

The Shared VPC supports many customer use cases. A few of those are:

  • Manual deployment of a few resources that require access to the Cornell private network
  • Deployment of three-tier (or more!) applications in the Shared VPC
  • Using Infrastructure as Code to create and manage AWS resources in the Shared VPC
  • Standing up an RDS instance in the private Cornell network
  • Deploying an API Gateway API or Lambda function with access to the Cornell network
  • Deployment of resources which will be accessed only by users on the Cornell VPN

Misuse Cases

Misuse cases are situations where the Shared VPC should not be used. Some of those are:

  • Deployment of resources that don't need access to the Cornell network
  • Cornell private network access in regions other than us-east-1 (N. Virginia)
  • Need to directly customize Network ACLs, Route Tables, or other VPC configuration
  • Peering to non-Cornell VPCs
  • Needing a vast number of private Cornell IP addresses in AWS
  • Deploying Kubernetes or using EKS (Kubernetes consumes vast numbers of IP addresses, which is incompatible with the Shared VPC model)
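The Kubernetes point above is worth quantifying. With EKS and the default AWS VPC CNI, every pod consumes a VPC IP address, so even a modest cluster can exhaust a shared subnet. The node and pod counts below are hypothetical, chosen only to show the arithmetic:

```python
# Back-of-the-envelope: why EKS with the default AWS VPC CNI is a poor fit
# for shared subnets. Every pod consumes a VPC IP address. The node and
# pod counts are hypothetical.

nodes = 20
pods_per_node = 30           # modest pod density per node
pod_ips = nodes * pods_per_node
node_ips = nodes             # each node also has a primary IP
total_ips = pod_ips + node_ips

usable_in_slash24 = 256 - 5  # AWS reserves 5 addresses per subnet
print(total_ips)                      # 620
print(total_ips > usable_in_slash24)  # True: exceeds a /24 subnet
```

A cluster of this size would need more than two /24 subnets' worth of addresses to itself, which is why Kubernetes workloads belong in a dedicated VPC rather than the Shared VPC.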

