
Rename all files in all subdirectories

August 19, 2015

for x in $(find .); do mv "$x" "$(echo "$x" | sed 's/text-to-find/text-to-replace-with/')"; done
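The one-liner above works for simple names, but it breaks on file names containing spaces and can rename a parent directory while its children are still queued up. A more defensive sketch (assuming you're replacing old with new in the names):

```shell
# Rename files and directories recursively, replacing "old" with "new".
# -depth visits children before their parents, so renaming a directory
# never invalidates a path still waiting in the pipe; the while-read
# loop keeps names containing spaces intact.
find . -depth -name '*old*' | while IFS= read -r path; do
  newname="$(basename "$path" | sed 's/old/new/')"
  mv "$path" "$(dirname "$path")/$newname"
done
```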

Categories: Mac stuff

Creating a staging + production environment for a Node.js service on AWS Elastic Beanstalk with Cloudbees as CD service in 51 steps

February 24, 2015
  1. In AWS:
  2. Log in to AWS, add a new IAM user called Jenkins_EBS_Access (or whatever name makes you happy), save the credentials, and attach the following policy to the user as an inline, custom policy:

  3. {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Sid": "Stmt1414820271000",
           "Effect": "Allow",
           "Action": [
             "elasticbeanstalk:*",
             "elasticloadbalancing:*",
             "autoscaling:*",
             "ec2:*",
             "s3:*",
             "cloudformation:*",
             "sns:*",
             "cloudwatch:*",
             "iam:*"
           ],
           "Resource": [
             "*"
           ]
         }
       ]
     }
  4. Go to S3, add a new bucket called deployments
  5. Add two plugins in the Jenkins manage plugins area:
    1. Cloudbees Amazon Web Services Credentials plugin
    2. Cloudbees Amazon Web Services deploy engine plugin
  6. In Cloudbees:
  7. Create your Cloudbees repository, commit a working node.js service
  8. Go to Builds and add a new build
  9. In the build configuration, under Deploy Now click Advanced, remove the default Cloudbees host service, and add an Elastic Beanstalk deployment. Create a new credentials entry and add the credentials you saved for the AWS user you just created.
  10. Click add application
  11. Pick your region (us-east-1 is the default and cheapest option)
  12. In S3 bucket, enter deployments/${BUILD_NUMBER}
  13. In application name, enter service
  14. In version label, enter ${JOB_NAME}.#${BUILD_NUMBER}
  15. In environment name, enter myservice-stg
  16. Click “Promote builds when”
  17. In name, enter production
  18. Pick the star color you want (I like green)
  19. Click “Only when manually approved”
  20. Add action “Deploy application”
  21. Repeat steps 8 – 13 (minus the add-new-credentials step; just use the ones you already added)
  22. In environment name, enter myservice-prd
  23. In build->execute shell->command, enter the following:

  24. node_version=0.10.21
      if [ ! -e nodejs-plugin-$node_version.zip ]
      then
        wget https://s3.amazonaws.com/clickstacks/admin/nodejs-plugin-$node_version.zip
        unzip nodejs-plugin-$node_version.zip
        tar xf node.tar.gz
        mv node-v* node_lib
      fi

      rm -rf target
      mkdir target

      export PATH=$PATH:$WORKSPACE/node_lib/bin

      npm install
      npm install time
      npm install grunt grunt-cli

      export PATH=node_modules/grunt-cli/bin/:$PATH

      grunt

      rm -rf target/app.war
      cd dist
      zip -r ../target/app.war .

  25. Note that you might need to change the Node.js version number, and also the last couple of lines if your directory structure is different from mine. My code uses grunt and deploys to a directory called dist
  26. Again, repeat steps 8 – 13
  27. In environment name, enter myservice-stg
  28. In post build actions, in Archive the artifacts, change *.zip to *.war
  29. Click save
  30. Click Build NOW
  31. Go to the build console output view and make sure there are no errors in the build
  32. Your node.js main file should be called app.js or server.js; these are the app names which AWS calls by default (configurable), or npm start should trigger your app if all else fails.
  33. Once the build has finished, you should see that the app has been uploaded to S3, but you should expect to see a failure which says: “No Application named ‘service’ found”. This is OK: we have not created the app yet, but the app is now in S3, and this is where we’ll go next.
  34. Go to AWS S3, to the deployments bucket; you should see a new file called service under a folder named with your current build number. Click it and hit Properties.
  35. Copy the link url
  36. Go to Elastic Beanstalk, and create a new application.  The name should be the same name you entered in cloudbees build, in the Application Name field.
  37. Now create a new web server
  38. select node.js platform, and keep the load balanced configuration
  39. under application version, pick s3, and paste the s3 bucket URL you just copied
  40. In environment name, enter the environment name you entered in cloudbees (myservice-stg)
  41. next, next
  42. In instance type pick t2.micro, this is actually cheaper than t1.micro and has better performance
  43. Go to EC2 -> key pairs (pick from the left side list), and create a new key pair.  You will automatically download a PEM file which you will later use to access your instances if you wish
  44. Back in Elastic Beanstalk, refresh the EC2 key pairs listing and pick your newly created key pair
  45. In environment tags, add the following tags:
    1. key: type, value: service
    2. key: class, value: staging
  46. These tags will let you report on usage later on
  47. Launch your environment
  48. Repeat the process, but this time with myservice-prd as the environment name, and in the class, enter production
  49. In Cloudbees, Click Build now.  The build should now succeed, and auto-deploy to the staging environment.
  50. Once it’s deployed and you see in AWS that the staging service is updated with the new build, go back to Cloudbees
  51. Click the new build, go to Promotion Status, and on the right-hand side, click Force Promotion
  52. The build should now be promoted to the production environment
  53. Voila, you now have a CD process with staging and production environments on AWS

Setting up SSL with Amazon Elastic Beanstalk

November 12, 2014

Setting up a service using Amazon’s Elastic Beanstalk is very easy.  The documentation is clear and to the point.

However, when you try to turn on SSL, you might run into problems, as the many forum questions suggest.

Most issues revolve around two main points:

1. SSL certificate.  Getting a certificate uploaded to Amazon is not as easy as it sounds: you need to install Amazon’s CLI and make sure your certificates are in the right format.  Sometimes you even need to make changes (change the order of entries within the certificate, remove parts, etc.).  If you use Godaddy as a certificate source, just download an Apache-compatible certificate and you can upload it as is.

2. Setting up the environment.  You can find the instructions here, and they’re all good until you get to step 3.  That’s where Amazon tells you that IF you are using VPC with your instances, you need to set up rules to allow HTTPS.  What they fail to say is that even if you don’t use VPC you still need to set up rules!

The following are instructions I got from Amazon support, after struggling with this for a couple of weeks (I did not have business-level support when I started working on this issue):

You need to update two security groups, one for your ELB and one for your instance; both must allow HTTPS (port 443).

  1. Go to your ec2 web console and click on “security groups” on the left
  2. Find the group with the following description: “ELB created security group used when no security group is specified during ELB creation – modifications could impact traffic to future ELBs”
  3. Add a rule for that group to allow https protocol port 443 from source 0.0.0.0/0
  4. Find the security group for your environment in that same list, and add https port 443 with the source being the default security group from step (2)

This should allow https connectivity between your load balancer and your instance.

Setting up SSL for AWS Cloudfront : problems and solutions

July 27, 2014

You can follow this blog to set it all up; the problems I’ve encountered and their solutions are detailed below:

Q: What’s the command I have to run (under Windows)?

A: aws iam upload-server-certificate --server-certificate-name mycompanyname --certificate-body file://mycert.crt --private-key file://mykeyfile.key --certificate-chain file://customizedcertfromgodaddy.crt --path /cloudfront/justanameIchose/

Q: How do I customize my Godaddy certificate to be compatible with AWS?

A: AWS requires a subset of what’s included in your certificate authority’s certificate. The certificate I got from Godaddy (that’s THEIR certificate, not the one they issued for my company, i.e. the one named gd_bundle-g2-g1.crt) had 3 sections in it; I had to remove the first two.
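To see how many certificate sections a bundle has, and to strip all but the last one, something like the following can help. This is a sketch: the file name matches GoDaddy’s bundle, but which sections AWS actually needs may differ for your CA, so double check the result.

```shell
# Count the BEGIN CERTIFICATE blocks in the bundle, then keep only the
# last one (for a 3-section GoDaddy bundle, this drops the first two).
bundle=gd_bundle-g2-g1.crt
total=$(grep -c 'BEGIN CERTIFICATE' "$bundle")
awk -v keep="$total" '/-----BEGIN CERTIFICATE-----/{n++} n==keep' "$bundle" > chain.crt
```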

Q: Got an error: A client error (AccessDenied) occurred when calling the UploadServerCertificate operation: User: arn:aws:iam::xxxxxxxxx:user/yyyyyyyy is not authorized to perform: iam:UploadServerCertificate on resource: arn:aws:iam::xxxxxxxxx:server-certificate/cloudfront/zzzzzzz/qqqqqqqq

A: This happens because the user whose credentials you supplied does not have enough permissions to perform this action. You should give it all permissions, as explained in the blog post I referred to.

Q: Got an error: A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to parse certificate. Please ensure the certificate is
in PEM format.

A: In my case, this had nothing to do with the certificate format; it happened because I removed the file:// prefix in the aws command, and that prefix is required. It would have been much clearer if Amazon had bothered to report this specific error instead of a general “your format is wrong”, which had nothing to do with the real problem, but c’est la vie.

Q: Got an error: A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to validate certificate chain. The certificate chain must start with the immediate signing certificate, followed by any intermediaries in order. The index within the chain of the invalid certificate is: 3

A: This happened because I did not remove the unneeded parts of the godaddy certificate. See question above.

Q: Got an error: argument --server-certificate-name is required, but I look at my command and it’s there

A: The problem might be with the source from which you copied the command string: sometimes -- gets replaced by a similar character which visually looks the same but is not the same ASCII code.  Just delete all the - signs and retype them yourself.
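A quick way to spot those impostor dashes, assuming you saved the copied command into a file (cmd.txt is just a name I picked):

```shell
# Print any line containing a character outside printable ASCII;
# "smart" en/em dashes pasted from a web page will show up here.
LC_ALL=C grep -n '[^ -~]' cmd.txt && echo "suspicious characters found"
```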

Once this is all sorted out, you can continue to follow the blog post and all should work.

Categories: R&D, Uncategorized

Git issue: Your branch and ‘origin/master’ have diverged

February 4, 2014

So I got to this situation:

Git status says:

mb:tagzbox zucker$ git status
# On branch master
# Your branch and 'origin/master' have diverged,
# and have 2 and 1 different commit(s) each, respectively.
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#   .classpath
#   .project
#   .settings/
#   target/
nothing added to commit but untracked files present (use "git add" to track)

And Eclipse shows these annoying up and down arrows with numbers (whatever that means):

[screenshot: Eclipse showing the diverged-branch arrow decorations]

Now what?  All I want is to get rid of everything local and just start from whatever is on the remote repository.

What finally did it for me was:

git reset --hard origin/master

This removed all local changes (I think) and switched me to the remote repository version.

then when doing

git pull

and

git push

git informed me that all was updated and in sync.  Oh, and the strange arrows are gone as well 🙂
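One caveat worth adding: git reset --hard only discards tracked changes, so the untracked files from the status output (.classpath, target/, etc.) survive it. If you want those gone too, a sketch (git clean is destructive, so preview with -n first):

```shell
# Throw away local commits and tracked changes, then delete untracked
# files and directories as well.
git fetch origin
git reset --hard origin/master
git clean -nd     # dry run: list untracked files/dirs that would be removed
git clean -fd     # actually remove them (-x would also delete ignored files)
```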

Until the next time…

Categories: Git wonders

Updating an SSL certificate from Godaddy (or other) in Cloudbees

January 10, 2014

So you managed to survive a whole year after getting your certificate, congratz, and now you need to replace it with a new one because the old one is about to expire.   Here are the simple steps to do that:

  1. Go to Godaddy and ask them to issue the new certificate.  You can use the CSR file you used last time you asked for a certificate.
  2. Download the certificate files; you’ll get two files with a CRT extension: your site’s certificate, and GoDaddy’s CA certificate
  3. Append GoDaddy’s CA certificate to your site’s certificate.  On unix, this would be: cat your.crt godaddy.crt > finalcert.crt
  4. Validate that the certificate you created works for your deployment:
    bees app:cert:validate -a yourcloudbeesaccount -cert ./finalcert.crt -pk ./thekeyfileyougotwhenyoucreatedthecsrfile.key
  5. Update your deployment with the new certificate (you need to know the name of the ssl service you created on cloudbees, check your production app on cloudbees for this setting):
    bees app:router:update yoursslservicename-ssl -cert ./finalcert.crt -pk ./thekeyfileyougotwhenyoucreatedthecsrfile.key

That’s it, you should now have a valid new certificate live.

Categories: Uncategorized

Using a “real” CA (such as Godaddy) generated SSL certificate locally

November 13, 2013

I recently got tired of going through all my local subdomains and approving the “invalid” certificate I had so that I can work locally every time I reopened chrome. Having bought a wildcard certificate for my production deployment (from Godaddy, but any would do), I knew it was only a couple of steps to get it into my project so that my local sub domains (e.g. local.tagzbox.com) would be considered “valid”.

Here are the steps to take, assuming you have openssl and keytool in your path, and are on a unix based system (I’m on Mac):

openssl pkcs12 -inkey ./yourdomain.key -in ./wildcard.yourdomain.com.crt -export -out ./yourdomain.pkcs12

This will generate a pkcs12 keystore with the certificate and key in it. Note that you need to concat your own certificate and the CA certificate, as explained here in step 3b.
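If you want to rehearse the pkcs12 step without risking your real key, you can generate a throwaway key and self-signed certificate first. All file names and the password below are made-up examples:

```shell
# Create a throwaway key + self-signed certificate, then pack them into
# a PKCS12 keystore the same way as above, non-interactively.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=local.example.com" \
  -keyout example.key -out example.crt 2>/dev/null
openssl pkcs12 -inkey example.key -in example.crt \
  -export -passout pass:mypass -out example.pkcs12
# Verify the keystore can be read back with the same password.
openssl pkcs12 -in example.pkcs12 -passin pass:mypass -noout 2>/dev/null && echo "keystore OK"
```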

Once this is done you need to create the keystore you will use, this is done using the following command:

keytool -importkeystore -srckeystore ./yourdomain.pkcs12 -srcstoretype PKCS12 -destkeystore ./yourdomain-ssl.keystore

Put the generated keystore (yourdomain-ssl.keystore) where your build will pick it up; I put it in /src/main/resources so it is copied to my /classes path and thus can be used by my service.

Now you need to use it in your project. This is done through your POM file (assuming you’re using Maven; if not, you should be, and assuming you’re using jetty, which at least for a dev environment is perfect):

	<profiles>
		<profile>
			<id>development</id>
			<build>
				<finalName>yourprojectname</finalName>
				<plugins>
					<plugin>
						<groupId>org.mortbay.jetty</groupId>
						<artifactId>jetty-maven-plugin</artifactId>
						<configuration>
							<contextPath>/</contextPath>
							<scanIntervalSeconds>0</scanIntervalSeconds>
							<connectors>
								<connector implementation="org.eclipse.jetty.server.nio.SelectChannelConnector">
									<port>8080</port>
									<maxIdleTime>60000</maxIdleTime>
								</connector>
								<connector implementation="org.eclipse.jetty.server.ssl.SslSocketConnector">
									<port>8443</port>
									<maxIdleTime>60000</maxIdleTime>
									<keystore>${project.build.directory}/classes/yourdomain-ssl.keystore</keystore>
									<password>mypass</password>
									<keyPassword>mypass</keyPassword>
								</connector>
							</connectors>
						</configuration>
					</plugin>
				</plugins>
			</build>
		</profile>
		<profile>
			<id>production</id>
			<activation>
				<activeByDefault>true</activeByDefault>
			</activation>
		</profile>
	</profiles>

A couple of things to note here:

  1. I’m using profiles, so this is activated only locally and not on production.  Maven profiles are out of scope here.
  2. I set the password to mypass, this password will be requested from you during the process of creating the keystore, just use whatever you like.
  3. This will work for your certificate, either regular or wildcard, but note that deep nested wildcard certificates (e.g. *.*.yourdomain.com) need to be generated specifically as such, otherwise local.admin.yourdomain.com won’t work.
Categories: R&D

Backing up your MongoHQ repository to Amazon S3

August 26, 2013

Although MongoHQ claims to do their own backups as part of their disaster recovery strategy, I wouldn’t rely on that for two main reasons:
1. If you care about your data, don’t rely on others to back it up.
2. If you mess up your data on production, you won’t be able to recover it

So what do you need in order to backup your stuff?
First you need an amazon AWS account. Go to Amazon and create an account.
Now go into the Management Console, and pick S3 (sign up for S3 if you have to).
Create a bucket, name it (let’s call it MyBucket) and pick a region (I use US Standard, but pick whatever).
You’ll be taken to the bucket properties page, but you don’t really need to do anything there. On the upper left side you will find the services dropdown, click it and pick IAM.
Now add an IAM user (Create New User), save/download the credentials (you won’t get a second chance; once you close this window it’s gone), and open the Permissions tab. Add a new policy, choose Custom, and copy-paste the following:

{
  "Statement": [
    {
      "Sid": "AddMongoHQAccessToMyBucket1",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AddMongoHQAccessToMyBucket2",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::MyBucket/*"
    }
  ]
}
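If you keep the policy in a file first (mongohq-policy.json is just a name I made up), a quick local syntax check before pasting it into the console can save you a failed-save round trip:

```shell
# Fail loudly if the policy file is not well-formed JSON.
python3 -m json.tool mongohq-policy.json > /dev/null && echo "policy JSON is valid"
```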

Your AWS account should now be ready to accept requests for backup into your S3 bucket.
Now go to the MongoHQ admin, go to your repository’s Admin->Backup tab, select the AWS region that you used in S3 (US Standard, for example), and put in the name of the bucket that you created in S3 (MyBucket is what we used in this example); the AWS key and secret are what you got in the user credentials CSV file that you downloaded when you created the IAM user.
If you did everything right, MongoHQ will be able to verify your credentials. Just to be sure, trigger a backup and wait for it to finish, delete some collection (make sure you are working on a staging repository before you play around with this), and click Restore to get your data back.

You can also go into the S3 bucket and check out your saved data (it’s a zip file).

Categories: R&D

Here’s one to drive you mad: is your message not replaced correctly by ResourceBundle or Spring’s ResourceBundleMessageSource?

August 15, 2013

Apparently, you can put almost anything in those i18n messages, EXCEPT for a single quote!  
If you do that, your single quote will disappear (easily missed), and all placeholders after the quote will not be replaced (easily noticed).

So, for example, if you have a message like this:

That's message 1, and {1} should be replaced by the word one

you will get an output of

Thats message 1, and {1} should be replaced by the word one

instead of the expected:

That's message 1, and one should be replaced by the word one

However, if you follow the rules, and put two single quotes instead of one (notice the added quote in That’s):

That''s message 1, and {1} should be replaced by the word one

You will get the correct behavior.

Follow the rules, and all shall work out fine!

Instructions for approving a self-signed certificate in Chrome on Mac

July 9, 2013

First of all, I got this from here, so all credits go there.

On Mac, Chrome uses the system keychain for certificate lookup, so we need to add it there.  Here’s how:

  1. In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says “Certificate Information.”
  2. Click and drag the image to your desktop. It looks like a little certificate.
  3. Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
  4. Be sure you add the certificate to the System keychain, not the login keychain. Click “Always Trust,” even though this doesn’t seem to do anything.
  5. After it has been added, double-click it. You may have to authenticate again.
  6. Expand the “Trust” section.
  7. Set “When using this certificate” to “Always Trust”

That’s it! Close Keychain Access and restart Chrome, and your self-signed certificate should be recognized now by the browser.

Categories: R&D