Archive for the ‘R&D’ Category

Setting up SSL with Amazon Elastic Beanstalk

November 12, 2014

Setting up a service using Amazon’s Elastic Beanstalk is very easy.  The documentation is clear and to the point.

However, when you try to turn on SSL, you might run into problems, as the many forum questions suggest.

Most issues revolve around two main points:

1. SSL certificate.  Getting a certificate uploaded to Amazon is not as easy as it sounds: you need to install Amazon’s CLI and make sure your certificates are in the right format.  Sometimes you even need to make changes (reorder entries within the certificate, remove parts, etc.).  If you use Godaddy as a certificate source, just download an Apache-compatible certificate and you can upload it as is.

2. Setting up the environment.  You can find the instructions here, and they’re all good until you get to step 3.  That’s where Amazon tells you that IF you are using a VPC with your instances, you need to set up rules to allow HTTPS.  What they fail to say is that even if you don’t use a VPC, you still need to set up rules!

The following are instructions I got from Amazon support after struggling with this for a couple of weeks (I did not have business-level support when I started working on this issue).

You need to update two security groups, one for your ELB and one for your instance; both must allow HTTPS (port 443):

  1. Go to your EC2 web console and click on “Security Groups” on the left
  2. Find the group with the following description: “ELB created security group used when no security group is specified during ELB creation – modifications could impact traffic to future ELBs”
  3. Add a rule to that group allowing the HTTPS protocol on port 443 from any source
  4. Find the security group for your environment in that same list, and add HTTPS port 443 with the source being the default security group from step (2)

This should allow HTTPS connectivity between your load balancer and your instance.
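The same two rules can also be added from the AWS CLI instead of the web console. A minimal sketch, assuming your AWS CLI is configured; the two security group IDs below are hypothetical placeholders, so substitute the ones from your own account:

```shell
# Hypothetical IDs -- look up your real ELB and instance security group IDs
# in the EC2 console ("Security Groups" on the left).
ELB_SG=sg-11111111
APP_SG=sg-22222222

# Rule for the ELB's group: accept HTTPS (443) from any source.
aws ec2 authorize-security-group-ingress \
  --group-id "$ELB_SG" --protocol tcp --port 443 --cidr

# Rule for the instance's group: accept HTTPS (443) only from the ELB's group.
aws ec2 authorize-security-group-ingress \
  --group-id "$APP_SG" --protocol tcp --port 443 --source-group "$ELB_SG"
```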


Setting up SSL for AWS CloudFront: problems and solutions

July 27, 2014

You can follow this blog post to set it all up; the problems I encountered and their solutions are detailed below:

Q: What’s the command I have to run (under Windows)?

A: aws iam upload-server-certificate --server-certificate-name mycompanyname --certificate-body file://mycert.crt --private-key file://mykeyfile.key --certificate-chain file://customizedcertfromgodaddy.crt --path /cloudfront/justanameIchose/

Q: How do I customize my Godaddy certificate to be compatible with AWS?

A: AWS requires a subset of what’s included in your certificate authority’s certificate. The certificate I got from Godaddy (that’s THEIR certificate, not the one they issued for my company, i.e. the file named gd_bundle-g2-g1.crt) had 3 sections in it; I had to remove the first two.
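If you want to see what’s actually in a bundle before cutting sections out of it, openssl can list every certificate in a PEM file. The demo below generates a throwaway self-signed certificate (demo-ca and the file names are made up for this sketch) so it runs as-is; with a real gd_bundle-g2-g1.crt you’d point the commands at that file instead:

```shell
# Create a throwaway self-signed cert so the example is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout demo.key -out bundle.crt 2>/dev/null

# How many certificates does the file contain?
grep -c 'BEGIN CERTIFICATE' bundle.crt

# Print the subject/issuer of each certificate in the file, in order:
openssl crl2pkcs7 -nocrl -certfile bundle.crt | openssl pkcs7 -print_certs -noout
```

With the real bundle, this tells you how many sections there are and which CA each one belongs to, so you know which ones to delete.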

Q: Got an error: A client error (AccessDenied) occurred when calling the UploadServerCertificate operation: User: arn:aws:iam::xxxxxxxxx:user/yyyyyyyy is not authorized to perform: iam:UploadServerCertificate on resource: arn:aws:iam::xxxxxxxxx:server-certificate/cloudfront/zzzzzzz/qqqqqqqq

A: This happens because the user whose credentials you supplied does not have enough permissions to perform this action. You should give it all the permissions explained in the blog post I referred to.

Q: Got an error: A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to parse certificate. Please ensure the certificate is in PEM format.

A: In my case, this had nothing to do with the certificate format; it happened because I removed the file:// prefix in the aws command, which is required. It would have been much clearer if Amazon had bothered to emit a specific error instead of a general “your format is wrong”, which has nothing to do with the real problem, but c’est la vie.

Q: Got an error: A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to validate certificate chain. The certificate chain must start with the immediate signing certificate, followed by any intermediaries in order. The index within the chain of the invalid certificate is: 3

A: This happened because I did not remove the unneeded parts of the godaddy certificate. See question above.

Q: Got an error: argument --server-certificate-name is required, but I look at my command and it’s there

A: The problem might be with the source from which you copied the command string: sometimes -- gets replaced by a similar-looking character (an en dash) that is visually close but has a different character code.  Just delete all the - signs and retype them yourself.
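A quick way to catch this: everything in an aws command line should be plain ASCII, so any “fancy” dash will show up if you scan for non-ASCII bytes. The file name here is just an example:

```shell
# Write a command line containing an en dash (\xe2\x80\x93) to a file,
# simulating a bad copy-paste from a blog post or document.
printf 'aws iam upload-server-certificate \xe2\x80\x93server-certificate-name foo\n' > cmd.txt

# Scan for any byte outside printable ASCII; a clean command prints nothing.
LC_ALL=C grep -n '[^ -~]' cmd.txt && echo "found a non-ASCII character: retype your dashes"
```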

Once this is all sorted out, you can continue to follow the blog post and all should work.

Categories: R&D, Uncategorized

Using a “real” CA (such as Godaddy) generated SSL certificate locally

November 13, 2013

I recently got tired of going through all my local subdomains and approving the “invalid” certificate I had, every time I reopened Chrome, just so I could work locally. Having bought a wildcard certificate for my production deployment (from Godaddy, but any would do), I knew it was only a couple of steps to get it into my project so that my local subdomains would be considered “valid”.

Here are the steps to take, assuming you have openssl and keytool in your path, and are on a unix based system (I’m on Mac):

openssl pkcs12 -inkey ./yourdomain.key -in ./yourdomain.crt -export -out ./yourdomain.pkcs12

This will generate a pkcs12 keystore with the certificate and key in it. Note that you need to concat your own certificate and the CA certificate, as explained here in step 3b.

Once this is done, you need to create the keystore you will use. This is done using the following command:

keytool -importkeystore -srckeystore ./yourdomain.pkcs12 -srcstoretype PKCS12 -destkeystore ./yourdomain-ssl.keystore

Put the generated keystore (yourdomain-ssl.keystore) in your path; I put it in /src/main/resources so it is copied to my /classes path and thus can be used by my service.

Now you need to use it in your project. This is done through your POM file (assuming you’re using Maven; if not, you should; and assuming you’re using Jetty, which at least for a dev environment is perfect):

								<!-- ports and keystore path are examples; adjust to your setup -->
								<connector implementation="org.eclipse.jetty.server.nio.SelectChannelConnector">
									<port>8080</port>
								</connector>
								<connector implementation="org.eclipse.jetty.server.ssl.SslSocketConnector">
									<port>8443</port>
									<keystore>${project.build.directory}/classes/yourdomain-ssl.keystore</keystore>
									<password>mypass</password>
								</connector>

A couple of things to note here:

  1. I’m using profiles, so this is activated only locally and not in production.  Maven profiles are out of scope here.
  2. I set the password to mypass; this password will be requested from you during the process of creating the keystore, so just use whatever you like.
  3. This will work for your certificate, either regular or wildcard, but note that deeply nested wildcard certificates (e.g. *.*.yourdomain.com) need to be generated specifically as such, otherwise they won’t work.
Categories: R&D

Backing up your MongoHQ repository to Amazon S3

August 26, 2013

Although MongoHQ claims to do their own backups as part of their disaster recovery strategy, I wouldn’t rely on that, for two main reasons:
1. If you care about your data, don’t rely on others to back it up.
2. If you mess up your data on production, you won’t be able to recover it.

So what do you need in order to backup your stuff?
First you need an Amazon AWS account; go to Amazon and create one.
Now go into the Management Console and pick S3 (sign up for S3 if you have to).
Create a bucket, name it (let’s call it MyBucket) and pick a region (I use US Standard, but pick whatever).
You’ll be taken to the bucket properties page, but you don’t really need to do anything there. On the upper left side you will find the services dropdown; click it and pick IAM.
Now add an IAM user (Create New User), save/download the credentials (you won’t get a second chance; once you close that window it’s gone), and open the Permissions tab. Add a new policy, choose Custom, and copy-paste the following (the Action lists grant bucket listing plus full access to MyBucket, which should be enough for the backup; tighten them if you like):

  {
    "Statement": [
      {
        "Sid": "AddMongoHQAccessToMyBucket1",
        "Action": [
          "s3:ListAllMyBuckets"
        ],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::*"
      },
      {
        "Sid": "AddMongoHQAccessToMyBucket2",
        "Action": [
          "s3:*"
        ],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::MyBucket/*"
      }
    ]
  }

Your AWS account should now be ready to accept requests for backup into your S3 bucket.
Now go to the MongoHQ admin, open your repository’s Admin->Backup tab, and select the AWS region you used in S3 (US Standard, for example). Put in the name of the bucket you created in S3 (MyBucket is what we used in this example); the AWS key and secret are what you got in the user credentials csv file that you downloaded when you created the IAM user.
If you did everything right, MongoHQ will be able to verify your credentials. Just to be sure, trigger a backup and wait for it to finish, delete some collection (make sure you are working on a staging repository before you play around with it, though), and click restore to get your data back.

You can also go into the S3 bucket and check out your saved data (it’s a zip file).

Categories: R&D

Here’s one to drive you mad: is your message not replaced correctly by ResourceBundle or Spring’s ResourceBundleMessageSource?

August 15, 2013

Apparently, you can put almost anything in those i18n messages, EXCEPT for a single quote!
If you do, your single quote will disappear (easily missed), and all placeholders after the quote will not be replaced (easily noticed).

So, for example, if you have a message like this:

That's message 1, and {1} should be replaced by the word one

you will get an output of

Thats message 1, and {1} should be replaced by the word one

instead of the expected:

That's message 1, and one should be replaced by the word one

However, if you follow the rules and put two single quotes instead of one (notice the added quote in That''s):

That''s message 1, and {1} should be replaced by the word one

You will get the correct behavior.

Follow the rules, and all shall work out fine!

Instructions for approving a self-signed certificate in Chrome on Mac

July 9, 2013

First of all, I got this from here, so all credits go there.

On Mac, Chrome uses the system keychain for certificate lookup, so we need to add it there.  Here’s how:

  1. In the address bar, click the little lock with the X. This will bring up a small information screen. Click the button that says “Certificate Information.”
  2. Click and drag the image to your desktop. It looks like a little certificate.
  3. Double-click it. This will bring up the Keychain Access utility. Enter your password to unlock it.
  4. Be sure you add the certificate to the System keychain, not the login keychain. Click “Always Trust,” even though this doesn’t seem to do anything.
  5. After it has been added, double-click it. You may have to authenticate again.
  6. Expand the “Trust” section.
  7. “When using this certificate,” set to “Always Trust”

That’s it! Close Keychain Access and restart Chrome, and your self-signed certificate should be recognized now by the browser.
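If you prefer the command line, macOS ships a `security` tool that does the same thing in one step; cert.cer here stands for whatever certificate file you dragged out of Chrome in step 2:

```shell
# Add the certificate to the System keychain and mark it trusted as a root.
# Requires sudo; cert.cer is the certificate file exported from Chrome.
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain cert.cer
```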

Categories: R&D

Little tip for mixing JSTL and Tiles

June 19, 2013

If you want to use a value set in Tiles with JSTL in a JSP page, here’s how you do it:

<tiles:useAttribute name="anAttributeYouSetInGeneralXML"  id="theVarToUseInJSTL" classname="java.lang.Boolean"/>


<c:choose>
    <c:when test="${theVarToUseInJSTL==true}">
        This is shown when anAttributeYouSetInGeneralXML is set to true
    </c:when>
</c:choose>


and in the definition file:

<put-attribute name="anAttributeYouSetInGeneralXML" value="true" cascade="true"/>
Categories: R&D

Setting up SSL for your Cloudbees-based service

May 23, 2013

The previous post discussed setting up your domain; this post will discuss setting up SSL.

  1. Pay Godaddy or some other provider for a certificate.  This certificate will be valid for 1 year from the time it is issued.
  2. These instructions show you how to generate nginx-installable certificates, which is what you need for Cloudbees.
  3. Install the Cloudbees SDK: follow the instructions for installing the SDK.
  4. Enable SSL on Cloudbees for your application: follow these instructions for enabling SSL.


A couple of things to note:

  1. You will need to add the Cloudbees SDK to the path.  You can use the following command:
    sudo nano ~/.bash_profile
    This lets you edit using the nano editor, much more user friendly than vim/vi if you’re not a unix freak.
    To add the SDK to the path, add the following lines to the .bash_profile file, save, and restart the terminal:
    export BEES_HOME=/Volumes/srcvault/cloudbees/cloudbees-sdk-1.5.0
    export PATH=$PATH:$BEES_HOME
  2. When you run the app:router:create command you will get an IP.  You need to go to Godaddy, or whatever DNS service you use, and update the A record to point to that IP.
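Once the A record is updated, you can check that it has propagated without waiting for the browser; yourdomain.com below stands in for your real domain:

```shell
# Query the A record directly; the output should be the IP that
# app:router:create gave you (allow some time for DNS propagation).
dig +short A yourdomain.com
```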
Categories: R&D

git: undo all working dir changes including new files

January 17, 2013

From: Stack Overflow

git reset --hard # removes staged and working directory changes
git clean -f -d # remove untracked files

Since this is so useful, I thought I’d post it here for reference, for myself and anyone else who needs it (you probably do if you use git).
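Since `git clean -f -d` is irreversible, it’s worth knowing the dry-run flag too. This little demo builds a throwaway repo just to show the output:

```shell
# Build a throwaway repo with one untracked file.
cd "$(mktemp -d)"
git init -q demo && cd demo
touch untracked.txt

# -n (dry run) lists what would be deleted without touching anything.
git clean -n -d        # prints: Would remove untracked.txt

# Once you're happy with the list, remove for real:
git clean -f -d
```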

Categories: R&D

Connecting to an existing Cloudbees repository from Windows

December 29, 2012

This post is the result of trying to figure out how to set up a repository clone of one of my projects on my Windows machine.  I work on my Mac most of the time, but at times it’s more convenient to work on my Windows machine (usually when I have to debug IE issues and need to try JS fixes to see what impact they make).

So, let’s start by setting up our tools and IDE on Windows.  We obviously need Java, Eclipse, and Maven, and that’s actually it.  In Eclipse, install the m2e and EGit plugins.  Make sure you follow all the installation instructions for Maven, with all the variables in the right place.

Now we need to set up SSH.  This needs to happen in two places: in Eclipse settings and in Cloudbees settings.  Here are the steps:

  1. In Eclipse, go to Preferences->General->Network Connections->SSH2, select Key Management, and generate a DSA key.  Enter the passphrase and remember it; you’ll need it in a minute.  Copy the public key to the clipboard.
  2. Click “Save private key” and save it, then go to the General tab and add the saved private key.  Apply and save.
  3. Go into Cloudbees settings->SSH keys, choose a name and paste the public key from the clipboard.  Click “Add”.
  4. Go to Cloudbees->Repositories, go into your project and copy to the clipboard the SSH URL unique to your repository.
  5. Go back to Eclipse, File->Import, choose Git->Projects from Git.  Choose URI and paste the SSH URL you copied from your repository.  Choose ssh as the protocol.  The user should be git and the password empty.  The port can remain empty (it will be filled with the default, which is 22).
  6. You will now need to enter your passphrase.
  7. Next, next, next, and your project will be cloned locally.
  8. To help Maven work, you need to go into Eclipse->Preferences->Maven, go to Installations->Add and pick the Maven installation you installed locally (instead of the embedded one).
  9. If you run into problems with your project when you open the IDE, see this post.  If you still have problems, and you had Maven installed before, try deleting the .m2 repository directory.  You might also need to go into Preferences->Maven->User Settings and reindex or update settings.
  10. Your project should automatically compile.

Problems you may encounter:

1. m2e installation fails and Maven does not seem integrated into Eclipse.  I couldn’t fix that and had to reinstall Eclipse.

2. You get complaints about javac and the JRE: you’re using a JRE instead of a JDK; switch to a JDK in Eclipse->Preferences->Java->Installed JREs.

Categories: R&D