Well, it's been pretty busy at work as of late. One of the issues we had was with a newly developed app. Instead of deploying the app as a standalone package (running on JBoss 4), we decided to separate the frontend from the backend: the frontend is just static content, and after the initial connection requests are handed off to the actual app. The benefit is that you do not need to compile EVERYTHING into the app, which ultimately allows for faster turnaround on development. You just change your portal content and have the backend app run everything else. An unfortunate side effect we ran into involves load balancing across clustered JBoss 4 servers using mod_jk.
The frontend is essentially unaware of the backend. So, on login, the frontend creates a session token and sends it to whichever server responded first. But each subsequent request will ALSO just go to the fastest responder. If that happens to be the server storing the session token… great. If not, then the app server basically says “I have no idea what is going on here… I do not have this session token” and the request fails. So, in a clutch situation, we decided to force everything to only one node on the cluster. Seems simple enough… just edit your JkMount settings and point them at only one worker instead of the load balancer. Where do we get these workers? From the workers.properties config, which usually lives in the conf/ directory of your Apache install. So, you open up your workers.properties like so:
# Define list of workers that will be used
# for mapping requests
worker.list=loadbalancer,status

# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=node1.mydomain.com
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.cachesize=10

# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8009
worker.node2.host=node2.mydomain.com
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.cachesize=10

# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
#worker.list=loadbalancer

# Status worker for managing load balancer
worker.status.type=status
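(Side note: that status worker only does anything if you mount it somewhere, e.g. with a line like the one below in your Apache config; the /jkstatus path is just the usual convention, not something our setup requires.)

JkMount /jkstatus status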
So, you pick the “node2” worker and edit your JkMount directives (either in your httpd.conf or your vhost.conf) so that only that worker is used.
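For example, if your mounts currently point at the load balancer (the /myapp context path here is made up for illustration), you would change:

JkMount /myapp loadbalancer
JkMount /myapp/* loadbalancer

to:

JkMount /myapp node2
JkMount /myapp/* node2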
You restart Apache and…
It doesn’t work. Apache starts throwing 500 errors. Doesn’t know where the application is.
Well, it does. But, there is one simple step that must not be overlooked.
worker.list must include any worker you plan to reference in a JkMount.
So, just edit your workers.properties file, and where it says
worker.list=loadbalancer,status
change it so that the “real” worker is included. Like so:
worker.list=loadbalancer,status,node2
Restart Apache and Ta Da!
Now, this still doesn't solve the real issue: getting load balancing to work while still passing the token back and forth. But, at least it's something to remember in case of emergency.
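(For what it's worth, the standard first step toward the real fix with mod_jk is sticky sessions keyed off a jvmRoute: give each JBoss node a jvmRoute matching its worker name in the Tomcat server.xml that ships inside JBoss 4 (under deploy/jbossweb-tomcat55.sar/ or deploy/jboss-web.deployer/, depending on your 4.x release). A sketch for node1, assuming the stock Engine element:

<!-- jvmRoute must match the worker name in workers.properties -->
<Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">
    ...
</Engine>

With jvmRoute set, JSESSIONID values get a “.node1”/“.node2” suffix, and sticky_session=1 tells mod_jk to route follow-up requests back to the node that created the session. That only helps if the token rides on the standard JSESSIONID, so it is not the whole answer for our frontend-minted token, but it is the usual starting point.)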