Contact info

Tan | ts864@cornell.edu

Overview

Currently, services are deployed by running their Docker containers directly on AWS EC2 instances. This means each service occupies a unique port on the machine. Below is a list of running services and the ports they use.

...

Within each container, a Flask service is hosted using Gunicorn instead of native Flask to make the service more fault-tolerant.
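For reference, swapping the built-in Flask development server for Gunicorn usually amounts to changing the container's entrypoint. Below is a hypothetical Dockerfile fragment; the module path app:app and the worker count are assumptions for illustration, not taken from our services:

```dockerfile
# Hypothetical fragment: serve the Flask app through Gunicorn rather than
# the development server ("app:app" and the worker count are assumed values).
RUN pip install gunicorn
CMD ["gunicorn", "--workers", "4", "--bind", "0.0.0.0:5001", "app:app"]
```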

Test and production environments for our mobile and dashboard services are deployed separately on the EC2 instances below.

Mobile (Online) Test
  IP address: 3.232.82.82 (will move to 54.163.4.203)
  Domain: mobile-test.diaper-project.com (deprecated: on-test.diaper.cf)
  Port: 5001
  Sample URL: https://mobile-test.diaper-project.com:5001/api/monitoring
  Key pair file: DIAPER confidential > AWS EC2 key files > DIAPER-test-key.pem (Box folder link)

Dashboard (Offline) Test
  IP address: 54.243.201.107
  Domain: dashboard-test.diaper-project.com
  Port: 5000
  Sample URL: https://dashboard-test.diaper-project.com:5000/env/

Mobile (Online) Production
  IP address: 35.168.248.57
  Domain: mobile-prod.diaper-project.com (deprecated: on-prod.diaper.cf)
  Port: 5001
  Sample URL: https://mobile-prod.diaper-project.com:5001/api/monitoring
  Key pair file: DIAPER confidential > AWS EC2 key files > DIAPER-production-key.pem (Box folder link)

Dashboard (Offline) Production
  IP address: 3.228.124.129
  Domain: dashboard-prod.diaper-project.com
  Port: 5000
  Sample URL: https://dashboard-prod.diaper-project.com:5000/env/

Procedure

The deployment procedure is the same for all environments. First, download the key pair file corresponding to the instance and run

Code Block
chmod 400 /path/to/DIAPER-*-key.pem

Then, ssh into the corresponding EC2 instance using

Code Block
ssh -i /path/to/DIAPER-*-key.pem ec2-user@<domain>


See How to ssh into AWS / BioHPC if you're having trouble.
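For example, the command for the mobile test instance might look like the following; the key path is a placeholder, and the domain comes from the table above:

```shell
# Assemble the ssh command for the mobile test instance.
# The key path below is a placeholder, not a real location.
KEY="/path/to/DIAPER-test-key.pem"
HOST="mobile-test.diaper-project.com"
SSH_CMD="ssh -i $KEY ec2-user@$HOST"
echo "$SSH_CMD"
```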


Pull your Docker image and other relevant files from GitHub. Instead of your own git account and password, the username is diapertestemail@gmail.com and the password is the token in the login secrets.

Once pulled, navigate to the folder containing the docker-compose files and run the corresponding command as explained below:

Code Block
// For test environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" test

// For production environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" prod

// For local development on your laptop.
// These two commands are equivalent (i.e. the default is docker-compose.yml)
sudo docker-compose -f docker-compose.yml up -d --force-recreate
sudo docker-compose up -d --force-recreate

For test and production environments, make sure to fill in your name and the reason for deployment. All test and production deployments are logged in deploymentHistory.log in the same directory.
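The log entry might be written along these lines — a hypothetical sketch of the logging step, not the actual contents of deploy.sh (NAME, REASON, and TARGET stand in for the parsed -n, -m, and positional arguments):

```shell
# Hypothetical sketch of the logging step inside a script like deploy.sh.
# NAME, REASON, and TARGET stand in for the parsed command-line arguments.
NAME="Your Name"
REASON="Reason for deployment"
TARGET="test"
LOG_FILE="deploymentHistory.log"
echo "$(date -u '+%Y-%m-%d %H:%M:%S') | $TARGET | $NAME | $REASON" >> "$LOG_FILE"
```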

The commands above run your container in detached mode so that your service doesn't block the console. However, once a container is run in detached mode, all of its runtime errors will not be reported to the console. Thus, as the last step of deployment, you need to manually check that no runtime errors occurred during the launching of your service by looking at its log using

Code Block
sudo docker-compose logs -f

If there are no runtime errors, you will get an output similar to the one below.

[Image: sample docker-compose logs output with no runtime errors]
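If you prefer to scan captured log text non-interactively, a grep filter works. This is a sketch: the sample log lines below are made up, and error keywords vary by logger:

```shell
# Scan captured log text for common error markers (sample input is made up).
LOG="Booting worker with pid: 12
Listening at: http://0.0.0.0:5001"
if printf '%s\n' "$LOG" | grep -qiE 'error|traceback|exception'; then
  STATUS="errors found"
else
  STATUS="clean"
fi
echo "$STATUS"   # prints "clean" for this sample
```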

Now you can log out and the service will continue running on the EC2 instance.

...

Troubleshooting

Server is down

  1. Reboot the instance (shown below).

[Image: rebooting the instance]


  2. If the problem persists, connect to the relevant server and run the deployment script:

Code Block
// For test environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" test
 
// For production environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" prod

The particular issue may be highlighted during the execution of the above command.
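Once the script finishes, a quick liveness probe against the sample URL from the environment table can confirm the service is back up. The curl line is left commented since it needs network access to the instance:

```shell
# Build the health-check URL for an environment ("test" or "prod").
TARGET="test"
URL="https://mobile-${TARGET}.diaper-project.com:5001/api/monitoring"
echo "$URL"
# To probe it:
# curl -fsS "$URL" && echo "service is up"
```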



BioHPC database timeout (Obsolete)

If you are experiencing timeouts when connecting to the BioHPC database, it's probably because the EC2 instance isn't connected to Cornell's VPN. Check whether the VPN is connected with

Code Block
ps -A | grep openconnect

If the output is non-empty, then the VPN is connected. If not, you can connect to the VPN using

Code Block
sudo openconnect -b cuvpn.cuvpn.cornell.edu --reconnect-timeout 600

and enter necessary information as prompted.