Contact info

Tan | ts864@cornell.edu

Overview

Services are deployed as Docker containers on AWS EC2 instances, with each service occupying its own port on its instance. Within each container, a Flask service is hosted using Gunicorn instead of native Flask to make the service more fault-tolerant.
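
For reference, the entrypoint of such a container typically invokes Gunicorn along these lines (a minimal sketch; the module path app:app, worker count, and port are illustrative, not taken from our repositories):

Code Block
gunicorn --workers 2 --bind 0.0.0.0:5001 app:app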

Test and production environments for our mobile and dashboard services are deployed separately on the EC2 instances below.

| Environment | IP address | Domain | Port | Sample URL | Key pair file |
| Mobile (Online) Test | 54.163.4.203 | mobile-test.diaper-project.com (deprecated: on-test.diaper.cf) | 5001 | https://mobile-test.diaper-project.com:5001/api/monitoring | DIAPER confidential > AWS EC2 key files > DIAPER-test-key (Box folder link) |
| Dashboard (Offline) Test | 54.243.201.107 | dashboard-test.diaper-project.com | 5000 | https://dashboard-test.diaper-project.com:5000/env/ | DIAPER confidential > AWS EC2 key files > DIAPER-test-key (Box folder link) |
| Mobile (Online) Production | 35.168.248.57 | mobile-prod.diaper-project.com (deprecated: on-prod.diaper.cf) | 5001 | https://mobile-prod.diaper-project.com:5001/api/monitoring | DIAPER confidential > AWS EC2 key files > DIAPER-production-key (Box folder link) |
| Dashboard (Offline) Production | 3.228.124.129 | dashboard-prod.diaper-project.com | 5000 | https://dashboard-prod.diaper-project.com:5000/env/ | DIAPER confidential > AWS EC2 key files > DIAPER-production-key (Box folder link) |
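
The sample URLs also serve as a quick liveness check from any machine, for example:

Code Block
curl https://mobile-test.diaper-project.com:5001/api/monitoring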

Procedure

The deployment procedure is the same for all environments. First, download the key pair file corresponding to the instance (see the table above) and restrict its permissions:

Code Block
chmod 400 /path/to/DIAPER-*-key.pem

Then, ssh into the corresponding EC2 instance using

Code Block
ssh -i /path/to/DIAPER-*-key.pem ec2-user@<domain>
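
For example, to reach the mobile test instance with the test key (the path below assumes the key was downloaded to the current directory):

Code Block
ssh -i DIAPER-test-key.pem ec2-user@mobile-test.diaper-project.com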


See How to ssh into AWS / BioHPC if you're having trouble.


Pull your Docker image and other relevant files from GitHub. Instead of your own Git account and password, use the username diapertestemail@gmail.com and, as the password, the token stored in the login secrets.
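
For instance, if the repository was cloned over HTTPS, a pull from inside the project folder will prompt for exactly these credentials (the folder name below is hypothetical):

Code Block
cd ~/diaper-service   # hypothetical path to the cloned repository
git pull              # enter diapertestemail@gmail.com and the token when prompted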

Once pulled, navigate to the folder containing the docker-compose files and run the corresponding command as explained below:

Code Block
# For test environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" test

# For production environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" prod

# For local development on your laptop.
# These two commands are equivalent (the default compose file is docker-compose.yml).
sudo docker-compose -f docker-compose.yml up -d --force-recreate
sudo docker-compose up -d --force-recreate

For test and production environments, make sure to fill in your name and the reason for deployment. All test and production deployments are logged in deploymentHistory.log in the same directory.
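
For example, a test deployment might be launched like this (the name and message are purely illustrative):

Code Block
./deploy.sh -n "Jane Doe" -m "Redeploy after monitoring API update" test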

The commands above run your containers in detached mode so that your service doesn't block the console. However, a container running in detached mode does not report runtime errors to the console. Thus, as the last step of deployment, you need to manually check that no runtime errors occurred while your service was launching by looking at its log using

Code Block
sudo docker-compose logs -f

If there are no runtime errors, you will get an output similar to the one below.

[Screenshot: example docker-compose log output with no runtime errors]
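
You can also confirm that the containers are up with a standard Compose status check:

Code Block
sudo docker-compose ps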

Now you can log out and the service will continue running on the EC2 instance.

Troubleshooting

Server is down

  1. Reboot the instance (shown below).

[Screenshot: rebooting the instance]


  2. If the problem persists, connect to the relevant server and run the deployment script:

Code Block
# For test environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" test

# For production environments.
./deploy.sh -n "Your Name" -m "Reason for deployment" prod

The underlying issue may be surfaced in the output of the command above.
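
If the script completes but the service is still unreachable, the container logs are usually the quickest way to find the error (the same standard Compose log command as in the deployment steps, limited to the most recent lines):

Code Block
sudo docker-compose logs --tail=100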



BioHPC database timeout (Obsolete)

If you are experiencing timeouts when connecting to the BioHPC database, it's probably because the EC2 instance isn't connected to Cornell's VPN. Check whether the VPN is connected with

Code Block
ps -A | grep openconnect

If the output is non-empty, then the VPN is connected. If not, you can connect to the VPN using

Code Block
sudo openconnect -b cuvpn.cuvpn.cornell.edu --reconnect-timeout 600

and enter the necessary information as prompted.
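
After authenticating, you can re-run the process check above to confirm the tunnel is up (non-empty output means openconnect is running):

Code Block
ps -A | grep openconnect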