Contact info

Tan | ts864@cornell.edu

Overview

Currently, services are deployed on AWS by running their Docker containers on EC2 instances. This means that each service occupies a unique port on the machine. Below is a list of running services and the ports they are using.

...

Within each container, the Flask service is served with Gunicorn rather than Flask's built-in development server, which makes the service more fault-tolerant.
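For reference, a Gunicorn launch command inside a container typically looks like the sketch below; the module and application names (app:app) and the worker count are placeholders, not necessarily what our services use.

Code Block
# Serve the Flask app on the container's service port (e.g. 5001 for the mobile service)
gunicorn --bind 0.0.0.0:5001 --workers 2 app:app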

Test and production environments for our mobile and dashboard services are deployed separately on the EC2 instances below.

Environment: Mobile (Online) Test
IP address: 3.232.82.82 / 54.163.4.203
Domain: mobile-test.diaper-project.com (deprecated: on-test.diaper.cf)
Port: 5001
Sample URL: https://mobile-test.diaper-project.com:5001/api/monitoring
Key pair file: DIAPER confidential > AWS EC2 key files > DIAPER-test-key.pem (Box folder link)

Environment: Dashboard (Offline) Test
IP address: 54.243.201.107
Domain: dashboard-test.diaper-project.com
Port: 5000
Sample URL: https://dashboard-test.diaper-project.com:5000/env/

Environment: Mobile (Online) Production
IP address: 35.168.248.57
Domain: mobile-prod.diaper-project.com (deprecated: on-prod.diaper.cf)
Port: 5001
Sample URL: https://mobile-prod.diaper-project.com:5001/api/monitoring
Key pair file: DIAPER confidential > AWS EC2 key files > DIAPER-production-key.pem (Box folder link)

Environment: Dashboard (Offline) Production
IP address: 3.228.124.129
Domain: dashboard-prod.diaper-project.com
Port: 5000
Sample URL: https://dashboard-prod.diaper-project.com:5000/env/

Procedure

The procedure for deploying is the same for all environments. First, download the key pair file corresponding to the instance and run

Code Block
chmod 400 /path/to/DIAPER-*-key.pem

Then, ssh into the corresponding EC2 instance using

Code Block
ssh -i /path/to/DIAPER-*-key.pem ec2-user@<domain or IP address>


See How to ssh into AWS / BioHPC if you're having trouble.
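For example, to reach the mobile test instance (the key file path below is only an illustration; use wherever you saved the key):

Code Block
ssh -i ~/Downloads/DIAPER-test-key.pem ec2-user@mobile-test.diaper-project.com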


Pull your Docker image and other relevant files from GitHub. Once pulled, navigate to your project folder and you'll see multiple docker-compose .yml files. To keep your service from blocking the console, run your Docker container in detached mode using the -d option.
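A minimal sketch of this step, assuming the repository is already cloned on the instance (the folder name here is hypothetical):

Code Block
cd ~/diaper-backend                                # hypothetical project folder
git pull                                           # fetch the latest code and docker-compose files
docker-compose -f docker-compose-test.yml up -d    # -d keeps the service from blocking the console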

 

Now you can log out and the service will continue running on the EC2 instance.

 

Note: You must specify which .yml file to use when launching docker. See details at Domains of backend APIs and SSL > "Specify which yml file to use when launching docker on production and test environment"

Common Issues

BioHPC database timeout

If you are experiencing timeouts when connecting to the BioHPC database, it's probably because the EC2 instance isn't connected to Cornell's VPN. To connect to the VPN, run the following command

Code Block
openconnect -b cuvpn.cuvpn.cornell.edu

and enter necessary information as prompted.

 

Specify which yml file to use when launching docker on production and test environment

Separate docker-compose config files named docker-compose-prod.yml and docker-compose-test.yml are created.

Since only the production and test environments use this SSL certificate, you must specify the corresponding yml file when launching docker.

Instead of your own git account and password, use diapertestemail@gmail.com as the username and the token in the login secrets as the password.
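For instance, when git prompts for credentials during the pull, it looks roughly like this (the actual token lives in the login secrets, not in this page):

Code Block
git pull
# Username for 'https://github.com': diapertestemail@gmail.com
# Password: <personal access token from the login secrets>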

Once pulled, navigate to the folder containing the docker-compose files and run the corresponding command as explained below:

Code Block
// For test environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" test
 
// For production environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" prod

// For local development on your laptop.
// These two commands are equivalents (i.e. default is docker-compose.yml)
sudo docker-compose -f docker-compose.yml up -d --force-recreate
sudo docker-compose up -d --force-recreate

For test and production environments, make sure to fill in your name and the reason for deployment. All test and production deployments are logged in deploymentHistory.log in the same directory.
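To review recent deployments, you can inspect that log directly, for example:

Code Block
tail deploymentHistory.log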

The commands above run your container in detached mode so that your service doesn't block the console. However, once a container is running in detached mode, its runtime errors are no longer reported to the console. Thus, as the last step of deployment, manually check that no runtime errors occurred while launching your service by looking at its log with

Code Block
sudo docker-compose logs -f

If there are no runtime errors, you will get an output similar to the one below.

(Screenshot: example docker-compose log output from a successful launch)

Now you can log out and the service will continue running on the EC2 instance.
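Before logging out, you can also double-check that the containers are up, for example with:

Code Block
sudo docker-compose ps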

Troubleshooting

Server is down

  1. Reboot the instance from the AWS EC2 console (shown below); a command-line alternative is sketched at the end of this section.

(Screenshot: rebooting the instance from the EC2 console)


  2. If the problem persists, connect to the relevant server and run the deployment script:

Code Block
// For test environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" test
 
// For production environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" prod

The particular issue may be highlighted during the execution of the above command.
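If you have the AWS CLI configured on your machine, the reboot in step 1 can also be done from a terminal instead of the console; the instance ID below is a placeholder you would look up in the EC2 console.

Code Block
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0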



BioHPC database timeout (Obsolete)

If you are experiencing timeouts when connecting to the BioHPC database, it's probably because the EC2 instance isn't connected to Cornell's VPN. Check whether the VPN is connected with

Code Block
ps -A | grep openconnect

If the output is non-empty, then the VPN is connected. If not, you can connect to the VPN using

Code Block
sudo openconnect -b cuvpn.cuvpn.cornell.edu --reconnect-timeout 600

and enter necessary information as prompted.