Posts Tagged ‘Elastic BeanStalk’

Creating a staging + production environment for a Node.js service on AWS Elastic Beanstalk with Cloudbees as CD service in 51 steps

February 24, 2015
  1. In AWS:
  2. Log in to AWS, add a new IAM user called Jenkins_EBS_Access (or whatever name makes you happy), save the credentials, and attach the following policy to the user as an inline, custom policy:

  3. {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Sid": "Stmt1414820271000",
           "Effect": "Allow",
           "Action": [ ... ],
           "Resource": [ ... ]
         }
       ]
     }
     The Action and Resource lists are elided here; fill in the permissions your deploy needs (for example, the relevant elasticbeanstalk and s3 actions on your deployment resources).
  4. Go to S3, add a new bucket called deployments
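    The AWS console steps above (steps 2 and 4) can also be sketched with the AWS CLI, if you prefer; this is a hedged equivalent, not what the original post used. It assumes you have configured CLI credentials, and that the policy JSON from step 2 is saved as policy.json:

    ```shell
    # Create the deploy user and attach the inline policy from step 2
    aws iam create-user --user-name Jenkins_EBS_Access
    aws iam put-user-policy --user-name Jenkins_EBS_Access \
      --policy-name EBSDeployPolicy --policy-document file://policy.json
    # Create an access key for Jenkins/Cloudbees to use (save the output)
    aws iam create-access-key --user-name Jenkins_EBS_Access
    # Create the deployments bucket from step 4 (bucket names are global,
    # so a literal "deployments" will likely be taken; prefix your own name)
    aws s3 mb s3://deployments
    ```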
  5. Add two plugins in the Jenkins manage plugins area:
    1. Cloudbees Amazon Web Services Credentials plugin
    2. Cloudbees Amazon Web Services deploy engine plugin
  6. In Cloudbees:
  7. Create your Cloudbees repository, commit a working node.js service
  8. Go to builds and add a new build
  9. In build configuration, below Deploy Now, click on Advanced, remove the default cloudbees host service, and add an Elastic Beanstalk deployment. Create a new credentials entry and add the credentials you saved for the AWS user you just created.
  10. Click add application
  11. Pick your region (us-east-1 is the default and cheapest option)
  12. In S3 bucket, enter deployments/${BUILD_NUMBER}
  13. In application name, enter service
  14. In version label, enter ${JOB_NAME}.#${BUILD_NUMBER}
  15. In environment name, enter myservice-stg
  16. Click “promote builds when”
  17. in name enter production
  18. pick the star color you want (I like green)
  19. Click “only when manually approved”
  20. Add action “deploy application”
  21. Repeat process 8 – 13 (minus the add new credentials step, just use the ones you already added)
  22. In environment name, enter myservice-prd
  23. In build->execute shell->command, enter the following:

  24. node_version=0.10.21
      # Fetch the NodeJS plugin archive if it is not already present.
      # The download URL was truncated in the original listing.
      if [ ! -e nodejs-plugin-$node_version.zip ]; then
        wget <plugin-url>/nodejs-plugin-$node_version.zip   # source URL elided in original
      fi
      unzip -o nodejs-plugin-$node_version.zip
      tar xf node.tar.gz
      mv node-v* node_lib

      rm -rf target
      mkdir target

      export PATH=$PATH:$WORKSPACE/node_lib/bin

      npm install
      npm install time
      npm install grunt grunt-cli

      export PATH=node_modules/grunt-cli/bin/:$PATH

      # Build with grunt (produces ./dist); this invocation appears to have
      # been lost from the original listing
      grunt

      rm -rf target/app.war
      cd dist
      zip -r ../target/app.war .

  25. Note that you might need to change the node.js version number, and also the last couple of lines, if your directory structure is a bit different than mine.  My code uses grunt and deploys to a directory called dist
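    The packaging tail of the script above can be tried locally; this sketch fabricates a tiny dist directory just so the commands have something to zip (paths match the walkthrough: built assets in ./dist, deployable written to target/app.war):

    ```shell
    # Stand-in for a grunt build output: a dist/ folder with a package.json
    rm -rf demo && mkdir -p demo/dist demo/target
    echo '{"name":"myservice","scripts":{"start":"node app.js"}}' > demo/dist/package.json
    # Zip dist/ contents into the deployable, exactly as step 24 does
    rm -f demo/target/app.war
    (cd demo/dist && zip -qr ../target/app.war .)
    unzip -l demo/target/app.war
    ```

    Despite the .war name, the artifact is an ordinary zip archive; the extension just needs to match the Archive the artifacts pattern in step 28.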
  26. Again, repeat steps 8 – 13
  27. In environment name, enter myservice-stg
  28. In post build actions, in Archive the artifacts, change *.zip to *.war
  29. Click save
  30. Click Build NOW
  31. Go to the build console output view and make sure there are no errors in the build
  32. Your node.js main file should be called app.js or server.js; these are the file names AWS looks for by default (configurable). Failing that, npm start should launch your app.
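  A hypothetical minimal app.js along those lines, written out from the shell here; Elastic Beanstalk supplies the listening port through the PORT environment variable, so the file should honor it:

  ```shell
  # Write a minimal Node.js main file that EB can start directly.
  # The 'ok' handler is arbitrary; it just gives health checks something to hit.
  cat > app.js <<'EOF'
  const http = require('http');
  const port = process.env.PORT || 3000;
  http.createServer((req, res) => {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('ok');
  }).listen(port);
  EOF
  ```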
  33. Once the build has finished, you should see that the app has been uploaded to s3, but you should expect to see a failure which says: “No Application named ‘service’ found”.  This is ok, we have not created the app yet, but we now have the app in S3 and this is where we’ll go next.
  34. Go to AWS S3, to the deployments bucket; you should see a new file called service under a folder named with your current build number.  Click it and hit properties.
  35. Copy the link url
  36. Go to Elastic Beanstalk, and create a new application.  The name should be the same name you entered in cloudbees build, in the Application Name field.
  37. Now create a new web server
  38. select node.js platform, and keep the load balanced configuration
  39. under application version, pick s3, and paste the s3 bucket URL you just copied
  40. In environment name, enter the environment name you entered in cloudbees (myservice-stg)
  41. next, next
  42. In instance type pick t2.micro, this is actually cheaper than t1.micro and has better performance
  43. Go to EC2 -> key pairs (pick from the left side list), and create a new key pair.  You will automatically download a PEM file which you will later use to access your instances if you wish
  44. Back in Elastic Beanstalk, refresh the EC2 key pairs listing and pick your newly created key pair
  45. In environment tags, add the following tags:
    1. key: type, value: service
    2. key: class, value: staging
  46. These tags will let you report on usage later on
  47. Launch your environment
  48. Repeat the process, but this time with myservice-prd as the environment name, and in the class, enter production
  49. In Cloudbees, Click Build now.  The build should now succeed, and auto-deploy to the staging environment.
  50. Once it’s deployed and you see in AWS that the staging service is updated with the new build, go back to Cloudbees and
  51. Click the new build, go to Promotion Status, and on the right-hand side, click Force Promotion
  52. The build should now be promoted to the production environment
  53. Voila, you now have a CD process with staging and production environments on AWS

Setting up SSL with Amazon Elastic Beanstalk

November 12, 2014

Setting up a service using Amazon’s Elastic Beanstalk is very easy.  The documentation is clear and to the point.

However, when you try to turn on SSL, you might run into problems, as the many forum questions suggest.

Most issues revolve around two main points:

1. SSL certificate.  Getting a certificate uploaded to Amazon is not as easy as it sounds; you need to install Amazon’s CLI and make sure your certificates are in the right format.  Sometimes you even need to make changes (change the order of entries within the certificate, remove parts, etc.).  If you use Godaddy as a certificate source, just download an Apache compatible certificate and you can upload it as is.
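A hedged sketch of checking the certificate format before uploading. The self-signed cert generated here is a throwaway purely so the inspection command has something to read; with a real (e.g. Godaddy) certificate you would inspect the files you downloaded, and the upload-server-certificate names are placeholders:

```shell
# Generate a throwaway self-signed cert to inspect (stand-in for your real one)
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem \
  -out demo-cert.pem -days 1 -subj "/CN=example.com" 2>/dev/null
# A PEM-encoded cert should parse cleanly and show its subject
openssl x509 -in demo-cert.pem -noout -subject
# Upload for use by the load balancer (AWS CLI; names are placeholders):
# aws iam upload-server-certificate --server-certificate-name my-cert \
#   --certificate-body file://demo-cert.pem --private-key file://demo-key.pem
```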

2. Setting up the environment.  You can find the instructions here, and they’re all good until you get to step 3.  That’s where Amazon tells you that IF you are using VPC with your instances, you need to set up rules to allow https.  What they fail to say is that even if you don’t use VPC you still need to set up rules!

The following are instructions I got from Amazon support, after struggling with this for a couple of weeks (did not have business level support when I started working on this issue):

You need to update two security groups, one for your ELB and one for your instance, both must allow https (443)

  1. Go to your ec2 web console and click on “security groups” on the left
  2. Find the group with the following description: “ELB created security group used when no security group is specified during ELB creation – modifications could impact traffic to future ELBs”
  3. Add a rule to that group allowing https (port 443); for an internet-facing load balancer the source is typically any address (0.0.0.0/0)
  4. Find the security group for your environment in that same list, and add https port 443 with the source being the default security group from step (2)

This should allow https connectivity between your load balancer and your instance.
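The two rules above can also be expressed with the AWS CLI; this is a hedged sketch, and the sg-… group IDs are placeholders for your ELB security group and your instance security group respectively:

```shell
# Allow https into the load balancer from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-elb0000 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
# Allow https from the ELB group into the instance group
aws ec2 authorize-security-group-ingress --group-id sg-inst0000 \
  --protocol tcp --port 443 --source-group sg-elb0000
```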