IMPORT YOUR OWN RSA SSH KEY INTO AMAZON EC2

Recently, I needed to import my own RSA SSH key into EC2. Here is a quick way to import it into all regions.
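The snippet itself is missing from the post, so here is a minimal sketch, assuming your public key lives at ~/.ssh/id_rsa.pub and you want the keypair named my-key (both assumptions):

    # Import the same public key into every region.
    # ec2-describe-regions prints tab-separated "REGION <name> <endpoint>",
    # so cut -f2 extracts the region name.
    for region in $(ec2-describe-regions | cut -f2); do
        ec2-import-keypair my-key --region "$region" --public-key-file ~/.ssh/id_rsa.pub
    done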

NB: You need the EC2 command-line tools set up before you can run this. You will also need to have set up an X.509 certificate pair.

You can read more about the ec2-import-keypair command in the EC2 documentation.

Logging the client IP behind Amazon ELB with Apache

When you place your Apache Web Server behind an Amazon Elastic Load Balancer, Apache receives all requests from the ELB’s IP address.

Therefore, if you wish to do anything with the real client IP address, such as logging or whitelisting, you need to use the X-Forwarded-For HTTP header that Amazon ELB adds to each request, which contains the IP address of the originating client.

Solution for logging the true client IP

Before:
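(The original snippet is missing; what follows is a representative default combined LogFormat, not necessarily the post's exact line.)

    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

The %h field records the remote host, which behind ELB is always the load balancer's address.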

After:
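(Again a representative reconstruction: %h is swapped for the X-Forwarded-For request header so the real client address is logged instead.)

    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined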

The one downside is that, depending on how ELB treats X-Forwarded-For, clients may be able to spoof their source IP by supplying their own X-Forwarded-For header.

Hopefully this helps out anyone experiencing this issue.

Monitoring and Scaling Production Setup using Scalr

Scalr is a fully redundant, self-curing and self-scaling hosting environment built on Amazon EC2. It is an initiative delivered by Intridea. It allows you to create server farms through a web-based interface using prebuilt AMIs: load balancers (Pound or Nginx), app servers (Apache and others), databases (MySQL master-slave and others), and a generic base image to build on top of.

The health of the farm is continuously monitored and maintained. When the load average on a type of node rises above a configurable threshold, a new node is inserted into the farm to spread the load and the cluster is reconfigured. When a node crashes, a new machine of that type is inserted into the farm to replace it.

Scalr also lets you customize each image further, bundle it, and use the result for future nodes inserted into the farm. You can make changes to one machine and use that image for a specific type of node; new machines of that type are brought online to meet current levels while the old machines are terminated one by one.

The project is still very young, but we're hoping that by open sourcing it the AWS development community can turn it into a robust hosting platform and give users an alternative to the current fee-based services available.

Deploy Hadoop cluster with Ubuntu Juju

Here I’m using new features of Ubuntu Server (namely Ubuntu Juju) to easily deploy a small Hadoop cluster.

Add juju apt-repository:
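The commands didn't survive extraction; a likely sequence, assuming the juju PPA of the time (ppa:juju/pkgs):

    # Add the Juju PPA, refresh the package index, and install the client.
    sudo add-apt-repository ppa:juju/pkgs
    sudo apt-get update
    sudo apt-get install juju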

Add charms NON-local:
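The snippet is gone here. With the "non-local" (charm store) approach there is typically nothing to fetch up front, since juju deploy pulls charms from the store by name in the bootstrap step below; installing charm-tools for browsing charms is an optional extra:

    # Optional: tools for searching and fetching charms.
    sudo apt-get install charm-tools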

Add charms local:
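For a local repository, the post presumably branched the charms from Launchpad into a local directory; a sketch, assuming the lp:charms aliases and an oneiric-series layout (both assumptions):

    # Fetch the Hadoop charms into a local charm repository.
    mkdir -p ~/charms/oneiric
    cd ~/charms/oneiric
    bzr branch lp:charms/hadoop-master
    bzr branch lp:charms/hadoop-slave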

Generate your environment settings:
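The command is missing; in the pyjuju of this era, running any juju command before a config exists writes out a sample and asks you to edit it, e.g.:

    juju bootstrap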

Then edit ~/.juju/environments.yaml to use your EC2 keys. It’ll look something like:
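(A representative pyjuju-era EC2 configuration; the bucket and secret values are placeholders.)

    environments:
      sample:
        type: ec2
        access-key: YOUR-AWS-ACCESS-KEY-ID
        secret-key: YOUR-AWS-SECRET-ACCESS-KEY
        control-bucket: juju-some-unique-bucket-name
        admin-secret: some-random-secret
        default-series: oneiric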

Bootstrap and then deploy charms NON-local:
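A sketch of the charm-store path; the service names namenode and datanode are assumptions, and if juju reports the relation as ambiguous you may need to name the endpoints explicitly:

    juju bootstrap
    # Deploy straight from the charm store.
    juju deploy hadoop-master namenode
    juju deploy hadoop-slave datanode
    # Wire the master and slave together.
    juju add-relation namenode datanode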

Bootstrap and then deploy charms local:
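The same flow against the local repository created above (the repository path is an assumption):

    juju bootstrap
    # Deploy from the local charm repository instead of the store.
    juju deploy --repository ~/charms local:hadoop-master namenode
    juju deploy --repository ~/charms local:hadoop-slave datanode
    juju add-relation namenode datanode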

Check Juju status:
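The missing snippet is almost certainly just:

    juju status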

Everything is finished and happy when juju status reports every unit as started.

Optionally scale horizontally with datanode:
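Presumably via add-unit:

    # Add another datanode unit to the existing service.
    juju add-unit datanode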

When all is said and done, SSH into the namenode/datanode:
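A sketch using juju's built-in SSH helper (the unit numbers are assumptions):

    juju ssh namenode/0
    juju ssh datanode/0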