EC2: Java EE Cloud Deployment, Clustering, Session Replication, and Setting up Amazon Load Balancer


This tutorial is a continuation of:

Java EE EC2 Deployment with Resin

One complication is that IP addresses in EC2 are ephemeral: if you restart a server, it loses its IP addresses. Think of DHCP, but where the lease expires instantly once the box stops using it. In a hub-and-spoke architecture you need a way to find the hub. The hub is like a cluster DHCP server: it knows the topology of the cluster.

Changes in the last few releases of Resin work around this by letting Resin use public IP addresses to find the Triad members; the members then exchange their private IP addresses with each other.

Resin typically discovers the server id by matching the instance's addresses against the cluster configuration. In this case the local boxes do not know their public addresses, so you have to tell Resin what the server id is so it can look up the address itself. The public IP address of an Amazon AMI instance is hidden from the instance, i.e., you will not see it with the ifconfig command.

You want the servers to communicate over their private IP addresses so that you do not incur the additional expense of Amazon's bandwidth metering on public traffic. You need Resin clustering to get session replication and session failover.

There are some improvements going into 4.0.28 which will make this configuration even easier. This is a how-to for 4.0.27.


Create two Elastic IP addresses (assuming you are using two machines in a single cluster). Use the Amazon Console to launch another instance of the server you set up in the first tutorial.
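
If you prefer the command line to the console, Elastic IPs can also be allocated and attached with the AWS CLI. This is only a sketch: the CLI shown here post-dates the 4.0.27-era tooling, and the instance id below is a placeholder, not a value from this tutorial.

$ aws ec2 allocate-address
$ aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 23.21.195.83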

The first three servers in a cluster make up the Triad.


Pass the following user-data to each Amazon instance that is running Resin:

https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
web_admin_enable : true
remote_cli_enable : true
web_admin_external : true
app_servers : ext:23.21.106.227 ext:23.21.195.83
system_key : changeme890

(Changes to user-data only take effect after a restart of the instance.) Having Resin read the user-data assumes you followed the step in the first tutorial where you set up the ec2.xml file.
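
To confirm that an instance actually received the new user-data after its restart, you can query the EC2 instance metadata service from the box itself. This is just a sanity check, not a required step:

$ curl -s http://169.254.169.254/latest/user-data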

Note that ext:{IPADDRESS} denotes that this is a public IP. Resin will use the public address to ask that server what its private address is. This is where the system_key comes in: it is the shared key the servers use to trust that exchange.

Modify /etc/init.d/resin on each server to pass the server id to resinctl. (There is a commented-out block that sets the id; just uncomment that block and fill in the server id.)
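
As a rough sketch of that edit, the uncommented block on Server 0 would set the id that gets passed to resinctl; the exact variable name in your copy of /etc/init.d/resin may differ from the hypothetical one shown here:

# in /etc/init.d/resin on Server 0
SERVER="-server app-0"

On Server 1 the same block would use app-1.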


With the default configuration, Server 0 has the server id app-0 (ext:23.21.106.227), whilst Server 1 has the server id app-1 (ext:23.21.195.83).

Essentially you are starting up Resin like this on box 0:

$ sudo resinctl start -server app-0

You are starting it up like this on box 1:

$ sudo resinctl start -server app-1
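
As a quick sanity check that each box started cleanly, you can ask the local watchdog for its status (assuming the status command is available in your resinctl build):

$ sudo resinctl status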

Create an Amazon Load Balancer and add the two instances to it. (Use the smallest possible health-check interval while testing.) Enable sticky session support with an application cookie, and set the cookie name to JSESSIONID.
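
If you would rather script the load balancer setup than click through the console, the equivalent steps look roughly like this with the (newer) AWS CLI; the load balancer name, instance ids, availability zone, and instance port are placeholders rather than values from this tutorial:

$ aws elb create-load-balancer --load-balancer-name resin-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8080" \
    --availability-zones us-east-1a
$ aws elb register-instances-with-load-balancer --load-balancer-name resin-lb \
    --instances i-0aaaaaaa i-0bbbbbbb
$ aws elb create-app-cookie-stickiness-policy --load-balancer-name resin-lb \
    --policy-name resin-sticky --cookie-name JSESSIONID
$ aws elb set-load-balancer-policies-of-listener --load-balancer-name resin-lb \
    --load-balancer-port 80 --policy-names resin-sticky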

Now you have an LB, and session replication just works.
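
A quick way to exercise this (the load balancer hostname and webapp path below are hypothetical): request a session-backed page through the LB twice while reusing a cookie jar, then stop Resin on one box and repeat the request; with sticky cookies plus replication the session should survive on the other server.

$ curl -c cookies.txt -b cookies.txt http://resin-lb-1234567890.us-east-1.elb.amazonaws.com/hello/
$ sudo resinctl stop -server app-0
$ curl -c cookies.txt -b cookies.txt http://resin-lb-1234567890.us-east-1.elb.amazonaws.com/hello/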

Deploying to one server in the cluster will automatically deploy to every server in the cluster.


$ resinctl deploy --address 23.21.195.83 --port 8080 --user admin --password mypassword  hello.war


You can list the cluster's deployment repository from the other server:

$ resinctl deploy-list --address 23.21.106.227 --port 8080 --user admin --password mypassword

production/webapp/default/blog


If you want to add another machine, just duplicate the first server's virtual instance again and run it as an additional instance.
