
JavaRanch Journal
Volume 5, Number 1 (January, 2006)

JavaRanch Newsletter articles in this issue:

  • JDBC Connection Pooling, by David Murphy
  • Ajax: A New Approach to Web Applications, by Eric Pascarello
  • Scriptless JSP Pages: The Constant Constants Consternation, by Bear Bibeault
  • Ruby on Rails in the Enterprise Toolbox, by Lasse Koskela
  • Generifying your Design Patterns -- Part I: The Visitor, by Mark Spritzler
  • The New SCJP Exam, by Marcus Green
  • Movin' them Doggies on the Cattle Drive, by Carol Murphy
  • Head First Objects Cover Contest!, by Pauline McNamara
  • Book Review of the Month - Ajax in Action, by Jeanne Boyarsky

JDBC Connection Pooling Best Practices
by David Murphy, JNetDirect


The addition of JDBC connection pooling to your application usually involves little or no code modification but can often provide significant benefits in terms of application performance, concurrency and scalability. Improvements such as these can become especially important when your application must service many concurrent users while meeting sub-second response time requirements. By adhering to a small number of relatively simple connection pooling best practices your application can quickly and easily take effective advantage of connection pooling.

Software Object Pooling

There are many scenarios in software architecture where some type of object pooling is employed as a technique to improve application performance. Object pooling is effective for two simple reasons. First, the run time creation of new software objects is often more expensive in terms of performance and memory than the reuse of previously created objects. Second, garbage collection is an expensive process so when we reduce the number of objects to clean up we generally reduce the garbage collection load.

As the saying goes, there is no such thing as a free lunch, and this maxim is also true of object pooling. Object pooling does require additional overhead for such tasks as managing the state of the object pool, issuing objects to the application and recycling used objects. Therefore objects that don't have short lifetimes in your application may not be good choices for object pooling, since their low rate of reuse may not warrant the overhead of pooling.

However, objects that do have short lifetimes are often excellent candidates for pooling. In a pooling scenario your application first creates an object pool that can both cache pooled objects and issue objects that are not in use to the application. For example, pooled objects could be database connections, process threads, server sockets or any other kind of object that may be expensive to create from scratch. When your application first starts asking the pool for objects they will be newly created, but when the application has finished with an object it is returned to the pool rather than destroyed. At this point the benefits of object pooling are realized: as the application needs more objects, the pool can issue recycled objects that have previously been returned by the application.
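The borrow-and-return cycle described above can be sketched in a few lines of Java. This is a minimal, illustrative pool only (the class and method names are our own, and real pools such as connection pools add synchronization, size limits and idle-object eviction):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

/** A minimal, single-threaded object pool sketch; names are illustrative. */
class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    SimplePool(Supplier<T> factory) { this.factory = factory; }

    /** Issue a recycled object if one is cached, otherwise create a new one. */
    T borrow() {
        if (!idle.isEmpty()) {
            return idle.pop();
        }
        created++;
        return factory.get();
    }

    /** Return a finished object to the pool instead of destroying it. */
    void release(T obj) { idle.push(obj); }

    /** How many objects were actually constructed (to observe reuse). */
    int objectsCreated() { return created; }
}
```

Once the first borrowed object has been released, subsequent borrows are served from the cache and `objectsCreated()` stops growing, which is exactly the saving the article describes.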

JDBC Connection Pooling

JDBC connection pooling is conceptually similar to any other form of object pooling. Database connections are often expensive to create because of the overhead of establishing a network connection and initializing a database connection session in the back end database. In turn, connection session initialization often requires time consuming processing to perform user authentication, establish transactional contexts and establish other aspects of the session that are required for subsequent database usage.

Additionally, the database's ongoing management of all of its connection sessions can impose a major limiting factor on the scalability of your application. Valuable database resources such as locks, memory, cursors, transaction logs, statement handles and temporary tables all tend to increase based on the number of concurrent connection sessions.

All in all, JDBC database connections are both expensive to initially create and then maintain over time. Therefore, as we shall see, they are an ideal resource to pool.

If your application runs within a J2EE environment and acquires JDBC connections from an appserver defined datasource then your application is probably already using connection pooling. This fact also illustrates an important characteristic of a best practices pooling implementation -- your application is not even aware it's using it! Your J2EE application simply acquires JDBC connections from the datasource, does some work on the connection then closes the connection. Your application's use of connection pooling is transparent. The characteristics of the connection pool can be tweaked and tuned by your appserver's administrator without the application ever needing to know.

If your application is not J2EE based then you may need to investigate using a standalone connection pool manager. Connection pool implementations are available from JDBC driver vendors and a number of other sources.

JDBC Connection Scope

How should your application manage the life cycle of JDBC connections? Asked another way, this question really asks - what is the scope of the JDBC connection object within your application? Let's consider a servlet that performs JDBC access. One possibility is to define the connection with servlet scope as follows.

import java.sql.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class JDBCServlet extends HttpServlet {

    private Connection connection;

    public void init(ServletConfig config) throws ServletException {
      //Open the connection here
    }

    public void destroy() {
      //Close the connection here
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
      //Use the connection here
      try {
        Statement stmt = connection.createStatement();
        // ... do JDBC work ...
      } catch (SQLException e) {
        throw new ServletException(e);
      }
    }
}
Using this approach the servlet creates a JDBC connection when it is loaded and destroys it when it is unloaded. The doGet() method has immediate access to the connection since it has servlet scope. However, the database connection is kept open for the entire lifetime of the servlet, and the database will have to retain an open connection for every user that is connected to your application. If your application supports a large number of concurrent users its scalability will be severely limited!

Method Scope Connections

To avoid the long life time of the JDBC connection in the above example we can change the connection to have method scope as follows.

public class JDBCServlet extends HttpServlet {

  private Connection getConnection() throws SQLException {
    // ... create and return a JDBC connection ...
  }

  public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
    try {
      Connection connection = getConnection();
      // ... do JDBC work ...
      connection.close();  // connection is destroyed when the method completes
    } catch (SQLException sqlException) {
      throw new ServletException(sqlException);
    }
  }
}
This approach represents a significant improvement over our first example because now the connection's life time is reduced to the time it takes to execute doGet(). The number of connections to the back end database at any instant is reduced to the number of users who are concurrently executing doGet(). However this example will create and destroy a lot more connections than the first example and this could easily become a performance problem.

In order to retain the advantages of a method scoped connection but reduce the performance hit of creating and destroying a large number of connections, we now utilize connection pooling to arrive at our finished example, which illustrates the best practices of connection pool usage.

import java.sql.*;
import javax.sql.*;
import javax.naming.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class JDBCServlet extends HttpServlet {

  private DataSource datasource;

  public void init(ServletConfig config) throws ServletException {
    try {
      // Look up the JNDI data source only once at init time
      Context envCtx = (Context) new InitialContext().lookup("java:comp/env");
      datasource = (DataSource) envCtx.lookup("jdbc/MyDataSource");
    } catch (NamingException e) {
      throw new ServletException(e);
    }
  }

  private Connection getConnection() throws SQLException {
    return datasource.getConnection();
  }

  public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
    Connection connection = null;
    try {
      connection = getConnection();
      // ... do JDBC work ...
    } catch (SQLException sqlException) {
      throw new ServletException(sqlException);
    } finally {
      if (connection != null) {
        try { connection.close(); } catch (SQLException e) { /* ignore */ }
      }
    }
  }
}
This approach uses the connection only for the minimum time the servlet requires it and also avoids creating and destroying a large number of physical database connections. The connection best practices that we have used are:

  • A JNDI datasource is used as a factory for connections. The JNDI datasource is instantiated only once in init() since JNDI lookup can also be slow. JNDI should be configured so that the bound datasource implements connection pooling. Connections issued from the pooling datasource will be returned to the pool when closed.
  • We have moved the connection.close() into a finally block to ensure that the connection is closed even if an exception occurs during the doGet() JDBC processing. This practice is essential when using a connection pool. If a connection is not closed it will never be returned to the connection pool and become available for reuse. A finally block can also guarantee the closure of resources attached to JDBC statements and result sets when unexpected exceptions occur. Just call close() on these objects also.
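If your application touches statements and result sets in many places, the close-everything-in-finally pattern can be centralized in a small helper. This is an illustrative sketch (the `JdbcUtil` name is our own, not part of any framework mentioned in this article):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/** Utility for releasing JDBC resources in finally blocks; helper name is illustrative. */
final class JdbcUtil {

    private JdbcUtil() {}

    /**
     * Closes the result set, statement, and connection in that order,
     * tolerating nulls and swallowing close-time errors so the original
     * exception (if any) is not masked.
     */
    static void closeQuietly(ResultSet rs, Statement stmt, Connection conn) {
        if (rs != null)   { try { rs.close(); }   catch (SQLException ignored) {} }
        if (stmt != null) { try { stmt.close(); } catch (SQLException ignored) {} }
        if (conn != null) { try { conn.close(); } catch (SQLException ignored) {} }
    }
}
```

A doGet() finally block then shrinks to a single call: `JdbcUtil.closeQuietly(rs, stmt, connection);`.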

Connection Pool Tuning

One of the major advantages of using a connection pool is that characteristics of the pool can be changed without affecting the application. If your application confines itself to using generic JDBC you could even point it at a different vendor's database without changing any code! Different pool implementations will provide different settable properties to tune the connection pool. Typical properties include the number of initial connections, the minimum and maximum number of connections that can be present at any time and a mechanism to purge connections that have been idle for a specific period of time.

In general, optimal performance is attained when the pool in its steady state contains just enough connections to service all concurrent connection requests without having to create new physical database connections. If the pooling implementation supports purging idle connections, it can optimize its size over time to accommodate varying application loads over the course of a day; for example, scaling up the number of connections cached in the pool during business hours and then dynamically reducing the pool size after business hours.

Connection Pooling Metrics

In order to compare the difference between using non pooled connections and connection pooling I built a simple servlet that displays orders in Oracle's sample OE (Order Entry) database schema. The testing configuration consists of Jakarta's JMeter load testing tool, the Tomcat 5.5 servlet container and an Oracle 10g database instance. Tomcat and Oracle were running on separate 512MB machines connected by 100Mbps Ethernet.

The servlet is written to use either pooled or non pooled database connections depending on the query string passed in its URL. So the servlet can be dynamically instructed by the load tester to use (or not use) connection pooling in order to compare throughput in both modes. The servlet creates pooled connections using a Tomcat DBCP connection pool and non pooled connections directly from Oracle's thin JDBC driver. Having acquired a connection, the servlet executes a simple join between the order header and line tables then formats and outputs the results as HTML.

A JMeter test plan was created to continuously run the servlet in pooled and non pooled connection modes within thread groups of 4 threads each. A JMeter results graph is attached to each thread group to measure throughput.

As this example shows, connection pooling can make a significant difference to your application's performance. In this example the throughput of 1198 requests per minute using pooled connections is almost six times faster than using non pooled connections. (In practice, the gains in your own application may not be this dramatic since our example does very little JDBC work per connection.)

Figure 1. Pooled connections throughput and times.

Figure 2. Non pooled connections throughput and times.
Here is the servlet code.
Here is the DBCP connection pool configuration.
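The downloadable configuration itself is not reproduced in this text. As an illustrative sketch only, a Tomcat 5.5 DBCP pool matching the jdbc/MyDataSource lookup used earlier could be declared in the web application's context.xml along these lines (the driver class, URL, credentials and pool sizes are all made-up values, not the article's actual settings):

```xml
<!-- context.xml fragment; all attribute values are illustrative -->
<Resource name="jdbc/MyDataSource"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.OracleDriver"
          url="jdbc:oracle:thin:@dbhost:1521:ORCL"
          username="oe"
          password="secret"
          initialSize="4"
          maxActive="8"
          maxIdle="8"
          minIdle="4"
          maxWait="10000"/>
```

The initialSize, maxActive, maxIdle, minIdle and maxWait attributes correspond to the tunable pool characteristics discussed in the Connection Pool Tuning section.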


If your project uses many short lived JDBC connections and does not use connection pooling now then I recommend that you seriously consider using pooling to improve your application's performance and scalability. Additionally, if your application adheres to the simple best practices presented in this article you will maximize the benefits of using connection pooling and make it truly transparent to your application.

David Murphy is Program Manager of JDBC Products at JNetDirect.

Discuss this article in The Big Moose Saloon!

Ajax: A New Approach to Web Applications
by Eric Pascarello

The buzz that has been swarming the Internet is the term Ajax (Asynchronous JavaScript and XML). To my surprise, many developers still are not sure what this Ajax thing is. Hopefully I can educate those Ranchers who are still in the dark about this hot topic in the world of programming. Why is this Ajax thing so hot? Well, the magic of it all is that it spans every server side language out there. A PHP developer, a Java developer, and a .NET developer can all use it! Ajax is not server side language specific, and that is why there is such great "hype." I can personally say that Ajax is not just hype. Judging by the responses I have gotten here on JavaRanch and at my talks promoting Ajax in Action, it looks like Ajax is not going away, as some old-timer programmers are wishing! I use it in production, and it's part of many applications that you may use on a daily basis.

It may be a surprise to some, but Ajax has been around for a long time; it's just that no one had put a name to it until recently. And yes, Microsoft did implement it and others copied it (others copy Microsoft?). I am glad they did, so this could take off. Ajax is supported by all of the major Web browsers now available.

This article is going to take a low level look at Ajax and see how it can help you and when you would want to use it in your application. This article is not going to give you the money to buy that brand new Testarossa, but it will give you the itch to grab the keys for a test drive instead of staring at that wallpaper on your desktop. So let's stop this intro and get into the real article.

Some Ranchers looking to find information on Ajax will ask, "Where can I download the API to the Ajax language?" My response is that Ajax is not a language, but an "approach" or "methodology" to programming utilizing JavaScript's XMLHttpRequest Object. There are only a few lines of code one must use. A lot of people ask, "That's it?" These few lines of code, as we will see later on, allow you to do a lot.

An Ajax application requires knowledge of the client and server, but do not let that scare you. Ajax gives us the ability to communicate with the server without doing a traditional post back to the server. We no longer have to see the page refresh and re-render every single control of the page to check to see if there is new information. We can make double combo lists seem to fill with magic, have text pop up on the screen based on user input, and have live data show up in front of our eyes. People have been implementing this style of programming with frames, iframes, and pop up windows, but the XMLHttpRequest Object gives us a more stable and reliable way to send and retrieve the information.

Starting your engine

Now I know what you are thinking: "What is this XMLHttpRequest object you keep talking about?" It is the mechanism that drives the communication between the client and the server. For Mozilla we need one set of code, while for IE we need another. Do not worry: this cross-browser stuff is not too bad. I will give you the training wheels to get you through it.

Ajax in the Gecko browsers (Mozilla, Firefox, & Netscape), Safari, Internet Explorer 7, and Opera is based around this line of code:

var req = new XMLHttpRequest();

For Internet Explorer 5 and 6 we need to use an ActiveX constructor to get the object:

var req = new ActiveXObject("Microsoft.XMLHTTP");

All this means in the long run is we need to do a little branching to get the correct version for the browser. For right now just forget that cross browser stuff and concentrate on the object itself. Now this object has a few methods. The three that you probably will use the most are:

  • abort()
  • open("method", "URL"[, asyncFlag[, "userName"[, "password"]]])
  • send(content)

The two methods that are important to us are the open and send methods. You can see the open method as grabbing the keys to that Testarossa: it sets up the information on where to send the request. The send method is us putting the key into the ignition, putting it in first, and stepping on the gas. The content is the data that we are sending to the server. It can be form data, SOAP messages, strings, etc. It is the information that is along for the ride, just like the dealer sitting in the passenger seat with the seat belt on as tight as it can go. Now one thing to remember is content size. If your content is small, you have a better chance of going fast. If your passenger weighs 600 pounds, the response to the server will be a little slow, just like the reaction of that gas pedal. Now if you have a 100 pound guy in the seat, you will not notice any real lag and will have that head jerked back with that grin!

There are a few more methods that are not used as much, but can be useful. They are:

  • getAllResponseHeaders()
  • getResponseHeader("theLabel")
  • setRequestHeader("label", "value")

As you can see, they all deal with either getting or setting header data. For example, you would use setRequestHeader if you were sending a SOAP message, so the server knows what it is receiving.

Checking your instruments

We are halfway done with looking at the object. We need to examine the properties and the single event handler that the object has. Let's just list them first. Our lonely, but very useful, event handler is:

  • onreadystatechange

and our properties are:

  • readyState
  • responseText
  • responseXML
  • status
  • statusText

The onreadystatechange event handler is rather unique. We assign it a reference to a function or assign it actual code. The function is then called multiple times during the process of making and getting the request. The calls are not random, but happen at certain states in the process. There are five states to be exact. We can find out what state the request is in by using the readyState property. The property is an integer value ranging from 0 to 4 that informs us of the different processes being performed. The exact meanings of these states are:

  • 0 = uninitialized
  • 1 = loading
  • 2 = loaded
  • 3 = interactive
  • 4 = complete

When the onreadystatechange event handler achieves the state of 4 or "complete", that means we have a response back from the server. This does not mean that it was successful though. To find out if it was a success, we need to look at the status property. The status property gives us those wonderful numbers of 200, 404, 500, etc. Of course we want to see 200 for OK! If you are real bored, you can also check the statusText, which is the string message that accompanies the code value. It is useful when you need to see an error message. If you are working on the file protocol (running Ajax against the file system of your computer), you are looking for a status code of 0 (zero).

If we get the magic number of 200, we have a successful retrieval of the server side page. The processing of the data can now be started. There are two properties that allow us to get our hands on the data: responseText and responseXML. The responseText property is a string value of the data returned. The responseXML property is a DOM-compatible document object. What is that, you ask? In non-geek speak it means we can use JavaScript methods to read the data as nodes, giving us more control and access. Which one you use is up to you, and it is a subject for an article of its own.

Putting Ajax in gear

So now that we have just got the keys to a dream machine, let's put it together and see it working! Since we are looking at this low level, we are going to use global variables! Below is the code with some basic comments explaining it.

//Our global variable
var req;

//function that accepts the url to retrieve
function loadXMLDoc(url) {

  //set req to false in case the object is not supported
  req = false;

  //"Gecko" XMLHttpRequest object
  if (window.XMLHttpRequest) {
    try {
      req = new XMLHttpRequest();
    } catch (e) {
      req = false;
    }
  }
  //IE ActiveX object
  else if (window.ActiveXObject) {
    //check for the newer version first (there is a v3 now too)
    try {
      req = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        //older version
        req = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (e) {
        req = false;
      }
    }
  }

  //Make sure the object is supported
  if (req) {
    //Tell it which function to call for state changes
    req.onreadystatechange = stateChange;
    //Give it the keys, specifying GET or POST and the URL
    req.open("GET", url, true);
    //Since we are using GET there is no content to send
    req.send(null);
  } else {
    //inform the user that it is not supported!
    alert("This browser lacks Ajax support!");
  }
}

//Handles the state changes
function stateChange() {
  //Look for the complete state of 4
  if (req.readyState == 4) {
    //Make sure the status is a success
    if (req.status == 200 || req.status == 0) {
      //display the content, e.g. via req.responseText
    } else {
      alert("Retrieval Error: " + req.statusText);
    }
  }
}

And that is all you need to make the request to the server. I guess you want to see how you can call this code. Well one way you can do it is in a form like this.

<form name="Form1" onsubmit="loadXMLDoc(this.t1.value); return false;">
  <input type="text" name="t1" value="test.txt"/>
  <input type="submit" name="sub1" value="Check Request"/>
</form>

I was kind enough to zip up the example for you too, so you can play with the code; you can download it from my JavaRanch blog.

Putting the pedal to the metal

Now you may want to know how to get away from this global scope and move into the OO world. Well, you can look at examples on my blog that use one of the content loaders from Ajax in Action, or you can look at many of the frameworks out there that have OO based code. The global approach is not bad, except that it can only handle a single response at a time.

Now that all of the excitement of the code is over with, we can step back and let reality set back in. OH MY! AJAX IS JAVASCRIPT! AH! I NEED TO REWRITE MY WHOLE APPLICATION! All I can say is calm down and take a look at these causes for heart attacks.

Some people who are afraid of Ajax think they need to redesign their entire approach to programming a web application. In reality that is not the truth at all. All of the same basic principles apply to an Ajax application, such as security, business logic, and database access. The only real difference is to rethink how certain controls will work. Just because you adopt Ajax does not mean the entire page cannot post back. A single Ajax powered control on the page is all you need to improve a page's performance. A double combination selection list on the page can easily be transformed and powered by Ajax. A type-ahead textbox script can help users eliminate keystrokes when using a phone book look up, and so forth. One little Ajax application can save a user time! You will look smart to the executive boss who saved 10 seconds in his busy golf filled schedule to find the name of the client he has a 10AM tee time with.

Now you may be concerned that Ajax is based on the XMLHttpRequest Object, which is JavaScript. "I do not know JavaScript and I do not want to know all of the little quirks of the browser." That is a common thought I hear at my talks. Well, if you do not want your daily routine to be visiting sites that document those browser quirks, then you are in luck. There are tons of very good frameworks out there, all based around your language. If you are working in Java, Ruby, or .NET, you can find a framework that fits your need. Do a search on Google and you will not have any trouble finding one!

Now that you are all fired up about this Ajax stuff, you may want to know where to get more information. All I can say is the web is full of it. You can read two sample chapters of Ajax in Action, follow the breaking news on Ajax around the web, post questions directly to me in the HTML and JavaScript forum here on JavaRanch, and look through my blog for more articles on Ajax.

Move into the world of the client side merging with the server and see how you can improve your users' experience. Remember to do it one control at a time and keep your code clean!

Discuss this article in The Big Moose Saloon!

Scriptless JSP Pages: The Constant Constants Consternation
by Bear Bibeault

I've heard it again and again and again: "I love writing scriptless JSP pages using the JSTL and EL, but dang, why isn't there a way to reference constants in the EL?!?!"

This constant consternation regarding constants is a refrain that resonates strongly with me. When I first started writing JSP pages without scriptlets and scriptlet expressions, the inability to easily reference class constants was a sorely missed capability.

In this article, the term class constant is used to mean a class field with the modifiers public, static and final.

I even flirted with cheating a little by using scriptlet expressions to reference class constants in otherwise scriptless pages. This dalliance didn't last very long as not only did it violate my sense of architectural integrity, it broke down in places where scriptless body content must be maintained (for example: within tag files and within the bodies of custom actions based on SimpleTag).

One particular area in which I felt the pain most intensely was in the naming of the form elements within an HTML <form> to be submitted to my action servlets...

The Form Element Names Example

In my web applications, I dislike repeating literal string values in more than one place. Heck, that applies to any literal values — not just strings. Few would argue that this is not a good practice to follow.

I also usually employ a pattern in which the aggregate collection of data submitted as part of a request is abstracted into a class that understands how to deal with that data. These classes are responsible for collecting the data from the request parameters, performing any conversions from string data to other formats (integers, dates and so on), performing simple context-free validations, and in general, making it easy for the action servlets to interact with the submitted data on an abstracted level. This pattern, which I unimaginatively call a Form, should be familiar to users of Struts or similar web application frameworks that employ comparable abstractions.

Within these "Form" classes I define a series of class constants that represent the names of the form elements. Additionally, these constants serve as the keys in a map of the submitted form data that the class represents. As an example, for a typical "login form", you might see:

package org.bibeault.jrjournal.ccc.test;

public class LoginForm implements Form { 1

   public static final String KEY_USERNAME = "username"; 2
   public static final String KEY_PASSWORD = "password";
}


1 The LoginForm class is declared. The Form interface being implemented is defined by the light-weight framework that I employ and will not be further discussed as it is not particularly relevant to this discussion.

2 For each named form element, a class constant is declared that represents a string literal containing the name of the element. In this example, our form is a simple one, defining only two elements.

In this way, the string literals that represent the form element names are defined once as class constants and then referenced throughout the application using compile-time class constant references. This prevents simple typos in the names of the form elements from turning into hard-to-find run-time bugs. Rather, a typo in the referenced constant name induces a compile-time error that is easy to find and fix quickly.
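To make the round trip concrete, here is a hedged sketch (not from the article) of the same constants being used both to read submitted parameters and to key the form's internal data map. The `LoginFormSketch` name, the `populate` method, and the plain `Map` parameter stand in for the article's light-weight framework, which would read an HttpServletRequest instead:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative stand-in for the article's Form pattern; names are our own. */
public class LoginFormSketch {

    public static final String KEY_USERNAME = "username";
    public static final String KEY_PASSWORD = "password";

    private final Map<String, String> data = new HashMap<>();

    /** Collects the named parameters; a real Form would pull them from the request. */
    public void populate(Map<String, String> requestParams) {
        // The same constants key both the lookup and the internal map,
        // so a typo in either place is a compile-time error.
        data.put(KEY_USERNAME, requestParams.get(KEY_USERNAME));
        data.put(KEY_PASSWORD, requestParams.get(KEY_PASSWORD));
    }

    public String get(String key) { return data.get(key); }
}
```

An action servlet would then call `form.get(LoginFormSketch.KEY_USERNAME)` rather than repeating the "username" literal.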

Prior to the advent of scriptless pages, you might have found the following form elements on a JSP page employing the defined LoginForm to create an HTML login form (within an appropriate <form> tag, of course):

<div>Username: <input type="text" name="<%= LoginForm.KEY_USERNAME %>"/></div>
<div>Password: <input type="password" name="<%= LoginForm.KEY_PASSWORD %>"/></div>

Because compile-time class constants are being referenced (rather than hard-coding the names of the elements) bothersome run-time bugs that might be introduced by mistyping the element names are avoided. For example, mistyping the hard-coded element name as "usrename" would result in a run-time bug (in which a call to getParameter() on the servlet request using the correct name would always result in a null value). By contrast, mistyping the class constant reference as LoginForm.KEY_USRENAME would result in a loud and obnoxious — but quick and easy-to-fix — compile-time error when the page is translated.

But within a scriptless page we're rather sunk. Because the JSP EL (Expression Language) makes no provisions for accessing class constant values, we have no means to easily replace the scriptlet expressions that reference the class constants representing the element names with scriptless EL expressions.

Wouldn't it be nice to be able to use something like:

<div>Username: <input type="text" name="${LoginForm.KEY_USERNAME}"/></div>
<div>Password: <input type="password" name="${LoginForm.KEY_PASSWORD}"/></div>

But we know that the EL makes no such provisions for referencing class constants in this or any other direct manner.

What to do? What to do? We want to avoid resorting to scriptlets, but we also want to do something better than falling back to hard-coded string literals in the JSP pages. What to do?

Reflection to the Rescue!

As the EL does not provide any mechanism to allow class constant references on the pages, we'll use some "Java voodoo" to do it ourselves! And the voodoo of which I speak is none other than Java reflection: one of the handiest tools in our Java toolbox.

Regardless of what we decide to do on-page, we're going to need a means to look up the value of a class constant given its fully qualified name. To use an example from our previous discussion, we'd like to be able to use a string such as org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME to look up its value "username".

Since we'd probably like to reference such a mechanism from multiple places (and because abstraction is just such a wonderful thing in its own right) we'll create a "class constant inspector" class to look up and obtain the value of a class constant given its fully qualified name (which I refer to within this article as its "path").

We will then be able to use this "inspector" in various ways that may suit our needs within the web application.

The ClassConstantInspector Class

The full implementation of the ClassConstantInspector class is available in the jar file accompanying this article (see the Resources section), but we will examine the "meaty" parts here.

Using this "inspector" class is quite simple: an instance is constructed supplying the path to the class constant, then the instance can be queried for the value of the constant.

The constructor for the class does the lion's share of the work:

public ClassConstantInspector( String constantPath ) 1
    throws ClassNotFoundException, NoSuchFieldException {
    FieldPathParser parser = new FieldPathParser( constantPath ); 2
    this.field = Class.forName( parser.getDeclaringClassName() )
                     .getField( parser.getFieldName() ); 3
    if (!Modifier.isPublic( this.field.getModifiers() ) || 4
        !Modifier.isStatic( this.field.getModifiers() ) ||
        !Modifier.isFinal( this.field.getModifiers() ) ) {
        throw new IllegalArgumentException( "Field " + constantPath +
                                            " is not a public static final field" );
    }
}

1 The constructor is called with the path to the class constant to be inspected; for example, the string "org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME".

2 The passed path name is broken down into its class name and field name ("org.bibeault.jrjournal.ccc.test.LoginForm" and "KEY_USERNAME" respectively) using the FieldPathParser class. This latter class simply breaks the passed string into its component parts and will not be further discussed in this article. An implementation using a regular expression to perform the parsing is provided in the ccc.jar file referenced in the Resources section.

3 The java.lang.reflect.Field instance describing the target field is obtained and stored in an instance variable.

4 The field is checked to make sure that it is a "class constant"; that is, a public, static and final field. If not, an IllegalArgumentException is thrown.

After construction, the value of the field can be queried by calling the getValue method:

public Object getValue() throws IllegalAccessException, InstantiationException {
    return this.field.get( null ); 1
}

1 The value of the field is obtained using the get() method of Field. Note that since the field is static (as enforced by the constructor check), null is passed as the parameter. For non-static fields, this parameter would be an instance of the containing object to query.

As a convenience, a static function that can be used to query for a class constant value is also included:

public static Object getValue( String constantName )
    throws NoSuchFieldException, ClassNotFoundException,
           IllegalAccessException, InstantiationException {
    return new ClassConstantInspector( constantName ).getValue();
}

With that under our belt, let's explore how we can exploit this mechanism to ease our constant constants consternation in scriptless JSP pages.

The <ccc:setConstant> Custom Action

When enhancing the capabilities of a JSP page, the custom action mechanism is frequently one of the first means considered.

Custom actions may be better known to you as "custom tags". As of the JSP 2.0 Specification, the term action is used rather than "tag" to describe the standard and custom tag mechanisms.

One of the first directions we might consider when defining a custom action that uses our new class constant inspector is one that looks up and emits the value of the constant given its path. This would be very much like the JSTL <c:out> action except that the tag would emit the value of a class constant rather than the evaluation of the value attribute.

We can certainly define such an action using the tools at our disposal. But upon a few moments of reflection (no pun intended) we can determine that such a tag would have some severe limitations; the primary one being that, since it is a custom action, it could not be used in contexts where custom actions are disallowed: attributes of other actions, for example.

Rather, we will pattern our class constant custom action on the JSTL <c:set> tag whose purpose is to evaluate a value and create a scoped variable that represents that value. Such scoped variables can then be used in contexts where custom actions cannot; most interestingly, within EL expressions which can be used liberally within a JSP page, including within the attributes of standard and custom actions.

The TLD entry for our custom action (with the <description> elements removed for brevity) is:
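The TLD markup itself did not survive in this copy of the article. Judging from the attribute setters in the SetConstantTag handler shown next, a plausible reconstruction (the tag-class package name is an assumption) is:

```xml
<tag>
    <name>setConstant</name>
    <tag-class>org.bibeault.jrjournal.ccc.SetConstantTag</tag-class>
    <body-content>empty</body-content>
    <attribute>
        <name>constant</name>
        <required>true</required>
        <rtexprvalue>false</rtexprvalue>
    </attribute>
    <attribute>
        <name>var</name>
        <required>true</required>
        <rtexprvalue>false</rtexprvalue>
    </attribute>
    <attribute>
        <name>scope</name>
        <required>false</required>
        <rtexprvalue>false</rtexprvalue>
    </attribute>
</tag>
```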


(The ccc.tld file for the code examples in this article is available in the accompanying jar file referenced in the Resources section.)

Its resemblance to the JSTL <c:set> action is readily apparent, with the constant attribute specifying the class constant path replacing <c:set>'s value attribute (which is usually used to specify an EL expression to be evaluated).

The implementation of this tag is surprisingly straightforward:

public class SetConstantTag extends SimpleTagSupport {

    private String fieldName;
    private String varName;
    private String scopeName;

    public void setConstant( String value ) { this.fieldName = value; }
    public void setVar( String value ) { this.varName = value; }
    public void setScope( String value ) { this.scopeName = value; }

    public void doTag() throws JspException {
        try {
            ScopedContext scopedContext = (this.scopeName == null) ?
                ScopedContext.PAGE : ScopedContext.getInstance( this.scopeName ); 1
            Object constantValue =
                ClassConstantInspector.getValue( this.fieldName ); 2
            getJspContext().setAttribute( this.varName, constantValue,
                                          scopedContext.getValue() ); 3
        }
        catch (Exception e) {
            throw new JspException( "Exception setting constant " +
                                    this.fieldName, e ); 4
        }
    }
}

1 The scoped context name (one of: page, request, session or application) is used to look up the instance of the ScopedContext type-safe enumeration that describes the named context. This abstraction associates the scoped context names with the integer values expected by the PageContext class. Its implementation is available in the ccc.jar file.

2 The ClassConstantInspector class is used to lookup the value of the specified class constant.

3 The scoped variable is created using the specified var name and scope.

4 Any exception thrown during the tag processing is re-thrown as the root cause of a JspException.

Applying this new custom action to the "login form" example discussed earlier might look something like:

<%@ taglib uri="" prefix="ccc" %>


<ccc:setConstant constant="org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME"
                 var="usernameKey"/>
<ccc:setConstant constant="org.bibeault.jrjournal.ccc.test.LoginForm.KEY_PASSWORD"
                 var="passwordKey"/>


<div>Username: <input type="text" name="${usernameKey}"/></div>
<div>Password: <input type="password" name="${passwordKey}"/></div>

With this custom action, we have established a means to reference class constants on our JSP pages without resorting to scriptlets or scriptlet expressions. This represents a huge step forward over hard-coding the literal values in scriptless contexts.

But... (isn't there always a "but"?)

This mechanism is not without issues. Foremost, it violates my architectural sensibilities in that we now have two variations of actions to create scoped variables: <c:set> for values derived from EL expressions, and <ccc:setConstant> for values derived from class constants. Would it not be completely awesome if we could figure out some way to achieve the same ends using just <c:set>?

But how?

The value of the value attribute of the <c:set> action can contain any EL expression, but we've already established that EL expressions cannot reference class constants. So how could we possibly piggy-back on <c:set> using our class constant inspection mechanism?

Well, the EL may not be able to access class constants but, as of JSP 2.0, it can access static functions via the EL function mechanism. Where can we go with that?

The ccc:constantValue EL Function

As of JSP 2.0, functions that can be called from within EL expressions can be mapped to static methods of Java classes. See section JSP.2.6 of the JSP 2.0 Specification for details.

Magically — well, by design — we already have a static method that we created in the ClassConstantInspector class that we can leverage for this very purpose. By adding the following entry in the TLD file:

<function>
    <name>constantValue</name>
    <function-class>org.bibeault.jrjournal.ccc.ClassConstantInspector</function-class>
    <function-signature>java.lang.Object getValue( java.lang.String )</function-signature>
</function>

we can enable access to the ClassConstantInspector.getValue() static method from within EL expressions.

So now, in our JSP page, we can use <c:set> to create scoped variables with the values of class constants.

Applying this to our login form example, we get:

<%@ taglib uri="" prefix="ccc" %>


<c:set var="usernameKey"
       value="${ccc:constantValue('org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME')}"/>
<c:set var="passwordKey"
       value="${ccc:constantValue('org.bibeault.jrjournal.ccc.test.LoginForm.KEY_PASSWORD')}"/>


<div>Username: <input type="text" name="${usernameKey}"/></div>
<div>Password: <input type="password" name="${passwordKey}"/></div>

This is getting better and better! We can now assign the value of class constants to scoped variables using the standard JSTL <c:set> action rather than resorting to creating our own non-standard action to do so.

With this EL function we could also forgo the scoped variables entirely, using the function directly in the name attributes of the form elements, if we don't mind the wordiness of the function reference.
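As a hypothetical sketch of that inline usage (assuming, as above, that the TLD maps the function name constantValue to ClassConstantInspector.getValue()):

```jsp
<div>Username: <input type="text"
    name="${ccc:constantValue('org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME')}"/></div>
```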

But (there's that "but" again), since we are hyper-critical perfectionists (well, at least I am), we examine our creation with a discerning eye. And when we do, we identify some limitations:

  1. The EL function mechanism is only available for JSP 2.0 containers. And while the time has come for everyone to migrate to JSP 2.0 if possible, engineering realities dictate that some people may still need to function in a JSP 1.2 world for the time being.

    The custom action we defined in the previous section can be retro-fitted to function in a pre-JSP 2.0 environment if re-implemented using classic tag handling support.

  2. This mechanism, cool as it may be, may not scale all that well. This EL function mechanism (as well as the custom action we defined in the previous section) requires us to either create a scoped variable for each and every class constant that we wish to reference within the JSP page, or use some pretty long and gnarly-looking notation directly in the markup.
  3. The EL function mechanism itself could be viewed as "notationally messy". Call me a neat-nik.

So, as powerful as this mechanism is, it may not be a perfect fit for all situations. Can we come up with even more "magic" using our class constant inspection voodoo?

The Power (and Possible Perversion) of the Map

In a previous JavaRanch Journal article we discussed the "Power of the Map" as it relates to the EL. In that article, we discovered that the java.util.Map interface was a mighty tool for referencing properties in a dynamic fashion. Could this flexibility also help us ease our constant constants consternation?

Recall that in an EL expression such as ${abc.xyz} — which is a short-hand notation for ${abc['xyz']} — if the scoped variable abc implements java.util.Map then the value of the EL expression is the value for the mapped key "xyz".

Can that help us? In the "Power of the Map" article we discussed having the servlet controller construct a Map of all the constants we'd need that we can send to the page as a request-scoped variable, but can we simplify that?

What if, rather than using a HashMap (or other pre-defined implementations of java.util.Map), we created our own implementation of java.util.Map that could look up the value of a key on the fly? Would that be incredibly clever, or would it be a heinous perversion of all that is good? Or both?

The ClassConstantMapper Class

Considering that the get() method of java.util.Map is the only method of interest when accessing Map entries via the EL, let's define a class that implements that interface:

public class ClassConstantMapper implements java.util.Map {

and declare the get() method required by the java.util.Map interface as follows:

public Object get( Object key ) { 1
    try {
        return ClassConstantInspector.getValue( key.toString() ); 2
    }
    catch (Exception e) {
        throw new IllegalArgumentException( "Error looking up value of constant " +
                                            key.toString() + ": " + e.getMessage() );
    }
}
1 When an instance of this class is used as a scoped variable, any references to a property of this variable will cause the get() method to be invoked with the property name as the key parameter.

2 The services of the ClassConstantInspector are used to determine the value of the class constant specified by the passed key, and that value is returned.

Since the other methods defined by the java.util.Map interface are either of no interest, or have no real meaning in this context (what would we return for the entrySet() method? A Set of all class constants for all the classes in the class path? Absurd!), we throw an UnsupportedOperationException for every method except get(). For example:

    public Set entrySet() { throw new UnsupportedOperationException(); }

We also need a means to establish a scoped variable of this class so that we can reference it on the pages. We could simply expect that the servlet controller would set it up for the page, or we could define a new custom action that would do it for us on the page itself. But, since our class has a no-args constructor, we could simply employ the services of the <jsp:useBean> standard action to establish a scoped variable using an id of our choice.

So our login form example becomes:

<jsp:useBean id="constants" class="org.bibeault.jrjournal.ccc.ClassConstantMapper"/>


<div>Username: <input type="text"
    name="${constants['org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME']}"/></div>
<div>Password: <input type="password"
    name="${constants['org.bibeault.jrjournal.ccc.test.LoginForm.KEY_PASSWORD']}"/></div>

Note that with this mechanism, the constants scoped variable can be used to reference any and all constants that we need to use on the page. Also note that since the class constant path used as the "property name" will always contain at least one period character, we will always need to use the [ ] operator notation.

This gives us a similar mechanism to the EL function described in the previous section, but with a slightly less punctuation-heavy notation.

Is this a wonderfully clever leverage of existing mechanisms, or a delicious violation of the intentions of the Map interface? You decide!

Since there is nothing page-specific about this "constant mapper" class, why bother to declare it on each page using the <jsp:useBean> standard action? If we were to establish the "constant mapper" scoped variable in application scope, it would automatically be available to each and every page in the web application.

The ClassConstantMapperEstablisher Class

But how would such an application-scoped variable best be established? If we put a <jsp:useBean> action on every page specifying a scope of application, that eliminates any advantage of putting the variable into application scope in the first place. We could choose a single page to contain the action, but how could we guarantee that that page will be the first that gets hit after the web application is loaded?

The answer lies not within a JSP at all, but with a servlet context listener. This mechanism allows us to specify a class (actually, any number of classes that we want) that will be notified at web application startup time, giving us the opportunity to execute any initialization code we deem necessary. This is the perfect place to perform any application-level setup such as establishing a scoped variable for our constant mapping class.

The class implements the javax.servlet.ServletContextListener interface:

public class ClassConstantMapperEstablisher implements ServletContextListener {

    public static final String PARAM_CONSTANT_MAPPER_NAME = 1
        ClassConstantMapperEstablisher.class.getName() + ".classConstantMapperName";

    public void contextInitialized( ServletContextEvent servletContextEvent ) { 2
        ServletContext servletContext = servletContextEvent.getServletContext();
        String varName =
            servletContext.getInitParameter( PARAM_CONSTANT_MAPPER_NAME ); 3
        if (varName == null) { 4
            throw new IllegalStateException( "Context parameter " +
                                             PARAM_CONSTANT_MAPPER_NAME +
                                             " must be specified" );
        }
        servletContext.setAttribute( varName, new ClassConstantMapper() ); 5
    }

    public void contextDestroyed( ServletContextEvent servletContextEvent ) {} 6
}


1 Rather than hard-coding the name of the scoped variable to be established into application scope, the name will be specified by a context parameter in the deployment descriptor. In order to avoid possible namespace collisions with other context parameters, the context parameter name is prefixed with the listener class name.

2 The contextInitialized event is triggered by the container when the context is placed into service.

3 The name of the scoped variable to place into application scope is obtained from the context parameter.

4 If the context parameter that specifies the scoped variable name is missing, complain. Loudly.

5 An instance of ClassConstantMapper is established in application scope using the name obtained from the context parameter.

6 The contextDestroyed event is triggered when the context is taken out of service. Nothing need be done in response to this event.

Establishing this class as a context listener for the web application requires two simple entries in the deployment descriptor (web.xml) for the application; one for the required context parameter, and one for the listener:
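The descriptor entries themselves are missing from this copy. Given the PARAM_CONSTANT_MAPPER_NAME constant defined above and a chosen variable name of constants, they would look something like this (the listener's package name is an assumption):

```xml
<context-param>
    <param-name>org.bibeault.jrjournal.ccc.ClassConstantMapperEstablisher.classConstantMapperName</param-name>
    <param-value>constants</param-value>
</context-param>

<listener>
    <listener-class>org.bibeault.jrjournal.ccc.ClassConstantMapperEstablisher</listener-class>
</listener>
```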



This effectively creates a new implicit variable named constants that can be accessed in all of our JSP pages (in the same manner as true implicit variables provided by the container such as requestScope).

We have now described three different on-page mechanisms to access constants using our class constant inspector. They're a fairly powerful set of tools, each with their strengths for different circumstances, but they all exhibit a tendency towards on-page verbosity; usually in the guise of long constant path string literals (e.g. "org.bibeault.jrjournal.ccc.test.LoginForm.KEY_USERNAME"). Though this isn't anywhere near a crippling deficiency, there may be situations where something terser may be appropriate.

As previously mentioned, in the Power of the Map article we discussed having the servlet controller construct a Map of all the constants we'd need that we can send to the page as a request-scoped variable. This had the advantage of creating a Map that had simply named keys (as opposed to long class constant field paths). Could a variation on this theme using our "inspector technology" enable us to further simplify on-page constant referencing?

The <ccc:constantsMap> Action

A lot of the time I find that most, or at least a large portion, of the constants referenced within a page are defined within the same class. Consider our LoginForm class example. For all the form elements defined by this abstraction, a class constant representing a string literal for the name of each element is defined. In this example there are only two, but there could be many more for more complicated forms.

We could write code that creates a map of these names and literals (perhaps within the LoginForm class itself) and have the servlet controller establish it as a request-scoped variable before forwarding to the page. But wouldn't it be nice if the page itself could cause such a Map to be established without reliance on the controller or specialized code in the classes that define the constants?

For precisely that purpose we define the <ccc:constantsMap> custom action. This action will, given a class name, search for all class constants defined by that class and establish a Map of those constants as a scoped variable.

The TLD entry for this tag:
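The TLD markup is missing from this copy; a plausible reconstruction based on the ConstantsMapTag handler below (the tag-class package name is an assumption) is:

```xml
<tag>
    <name>constantsMap</name>
    <tag-class>org.bibeault.jrjournal.ccc.ConstantsMapTag</tag-class>
    <body-content>empty</body-content>
    <attribute>
        <name>className</name>
        <required>true</required>
        <rtexprvalue>false</rtexprvalue>
    </attribute>
    <attribute>
        <name>var</name>
        <required>true</required>
        <rtexprvalue>false</rtexprvalue>
    </attribute>
    <attribute>
        <name>scope</name>
        <required>false</required>
        <rtexprvalue>false</rtexprvalue>
    </attribute>
</tag>
```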


The implementation of the action, for once not relying upon our constant inspector class (though using the same voodoo), is:

public class ConstantsMapTag extends SimpleTagSupport {

    private String className;
    private String varName;
    private String scopeName;

    public void setClassName( String value ) { this.className = value; }
    public void setVar( String value ) { this.varName = value; }
    public void setScope( String value ) { this.scopeName = value; }

    public void doTag() throws JspException {
        try {
            Map constantsMap = new HashMap(); 1
            Class declaringClass = Class.forName( this.className );
            Field[] fields = declaringClass.getFields(); 2
            for (int n = 0; n < fields.length; n++) { 3
                if (Modifier.isPublic( fields[n].getModifiers() ) && 4
                    Modifier.isStatic( fields[n].getModifiers() ) &&
                    Modifier.isFinal( fields[n].getModifiers() ) ) {
                    constantsMap.put( fields[n].getName(),
                                      fields[n].get( null ) ); 5
                }
            }
            ScopedContext scopedContext = (this.scopeName == null) ? 6
                ScopedContext.PAGE : ScopedContext.getInstance( this.scopeName );
            getJspContext().setAttribute( this.varName, constantsMap, 7
                                          scopedContext.getValue() );
        }
        catch (Exception e) {
            throw new JspException( "Exception setting constants map for " +
                                    this.className, e );
        }
    }
}


1 A Map instance to serve as the scoped variable is created.

2 By using the getFields() method, we obtain not only the fields defined by the named class, but any public fields inherited from its superclasses as well. To limit the search to fields declared within the named class we would have used the getDeclaredFields() method.

3 All the located fields are iterated over.

4 Fields not considered "class constants" — that is, public, static and final fields — will be ignored.

5 Any class constants are added to the Map using the field name as the key and the field value as the mapped value.

6 The scope name string is converted to its ScopedContext type-safe enum instance.

7 Finally, the Map is established as a scoped variable using the specified name in the specified scope.

This action's usage for our login form example would be:

<%@ taglib uri="" prefix="ccc" %>

<ccc:constantsMap className="org.bibeault.jrjournal.ccc.test.LoginForm" var="LoginForm"/>

<div>Username: <input type="text" name="${LoginForm.KEY_USERNAME}"/></div>
<div>Password: <input type="password" name="${LoginForm.KEY_PASSWORD}"/></div>

Compare this with our original, script-infested example and the "Wouldn't it be nice" scenario that follows it. Is this not just too cool?

But once again, we have a ubiquitous "but"...

We've found a way to make the notation for making class constant references on the pages nice and easy, but have we really gained anything over just hard-coding the values directly onto the pages? Recall that when we did so, any errors in the names resulted in a non-specific run-time bug ("Why is the blasted username parameter always blank?"). But when we used scriptlet expressions that referenced class constants and made a similar typo, an error was reported at translation time. What happens when we use our <ccc:constantsMap> mechanism?

When an EL reference is made to a Map using a key that does not exist, no error is thrown. Rather, a default is provided for the evaluation of the EL expression in which the reference was made. In a textual context, such as we are using here, the default is the empty string. Therefore, when we make a mistake such as ${LoginForm.KEY_USRENAME} in typing the class constant name, the result will be a blank in the place of the EL expression.

Well, poo! While that's marginally better than simply hard-coding the name -- errors will result in predictable and invalid HTML patterns such as name="" which is easier to search for than mis-typed names -- it's still a far cry from the satisfying in-your-face error thrown when we use scriptlet expressions and constant references.

Is there something we can do about that?

The ClassConstantsMap Class

Because our class constant inspector is a run-time mechanism, we can't detect and report errors at translation time. But to achieve the goal of having the page blow up in our faces when we make an error in class constant references, a run-time error, as long as it is loud, obnoxious and obvious, will serve just as well.

Our problem arises from the fact that when the get() method of any java.util.Map is called with a key that does not exist, it returns a null which the EL goes about converting to a default value suitable for the referencing evaluation context.

That's not what we want in this case: what we want is for the Map to raise hell if we try to fetch a nonexistent key. Since all the other behaviors of the java.util.HashMap class that <ccc:constantsMap> places into a scoped context work exactly as we would like, all we need to do in order to achieve the behavior we'd like to see is to extend the HashMap class with a new implementation of its get() method.

The implementation is simple:

public class ClassConstantsMap extends HashMap {

    private String className;

    public ClassConstantsMap( String className ) { 1
        this.className = className;
    }

    public Object get( Object key ) { 2
        if (super.get( key ) == null) { 3
            throw new IllegalArgumentException( "Key " + key +
                " could not be found in class constant map for " +
                this.className );
        }
        return super.get( key ); 4
    }
}


1 The constructor accepts the name of the class from which the Map is being constructed. This is used primarily during error reporting, which is the entire reason for the existence of this class in the first place. This class name is stored in an instance variable for later use.

2 Like any other implementation of java.util.Map, the get() method is invoked when an EL reference is made.

3 A check for the existence of an entry for the passed key is made and an exception is thrown if one does not exist. Note that no special considerations are made for entries that do exist but have a value of null. In our scenario, we are making no distinctions between null entries and entries that do not exist.

Since neither null values nor null keys make much sense in this scenario, it would also be an easy matter to extend the HashMap's put() method to not accept null for either the key or the value of an entry.

4 If the entry exists, it is returned via the extended HashMap's get() method.
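A minimal sketch of that suggested put() variation (the NonNullMap class name is invented for illustration; the article's actual ClassConstantsMap does not include this):

```java
import java.util.HashMap;

// Hypothetical variation, not part of the article's jar: a HashMap
// subclass whose put() rejects null keys and values outright.
public class NonNullMap extends HashMap {

    public Object put( Object key, Object value ) {
        if (key == null || value == null) {
            throw new IllegalArgumentException(
                "Null keys and values are not permitted" );
        }
        return super.put( key, value );
    }
}
```

In ClassConstantsMap, such an override would make accidental null entries fail at insertion time rather than at lookup time.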

By changing the Map class used in the handler for <ccc:constantsMap> from HashMap to our new ClassConstantsMap class, we finally achieve our goal. We can easily make class constant references on the JSP page without resorting to scriptlet expressions, and if we make referencing errors, we will learn about them very quickly and spectacularly.

This time, there are no substantive "but's", and that's cool!

Happy Days are Here Again

In this article we've established a general means for determining the value of a class constant and explored a number of ways to exploit this capability to ease the pain of referencing class constants in JSP pages.

Hopefully, we have either created ready-made tools that you can begin to use in your scriptless JSP pages immediately, or have given you ideas and technology fragments that you can exploit in whatever manner makes the most sense for your own web applications.



Resources

The jar file containing the classes and TLD described in this article is suitable for dropping into a web application's WEB-INF/lib folder in order to use these tags and classes as is.

This jar file also contains the source code for all the classes described in this article, as well as the generated javadocs for the classes and the TLD.

You will note that the classes in the jar file differ in a number of ways from those presented in this article. Some of the code presented in the body of the article has been simplified to focus upon the concepts being discussed. The actual classes as included in the jar file have more robust error checking, and common operations are factored into superclasses. Of particular note:

  • The custom action implementations define TagExtraInfo inner classes to perform translation-time checking of their static attributes.
  • Since both of the defined custom actions share the declaration and semantics of the var and scope attributes, a "tag support" superclass for these tags handles those semantics. This superclass is suitable for extending by any custom action supporting these common attributes.

The JSP 2.0 Specification
The definitive reference for JSP 2.0 information.

Javadoc API

The javadoc API descriptions for the reflection classes.
The javadoc API for the java.util.Map interface.
The javadoc API descriptions for the javax.servlet.ServletContextListener interface.

Discuss this article in The Big Moose Saloon!

Return to Top
Ruby on Rails in the Enterprise Toolbox
by Lasse Koskela


If you've followed the IT blogosphere even slightly over the past months, you've unavoidably heard of a "Web 2.0" web development framework named Ruby on Rails. Ruby on Rails (or, just Rails) is indeed an exciting development and definitely has earned a lot of the attention it's getting. Following the recent 1.0 release of Rails, this article is my feeble attempt to help you see through the hype and see Rails for what it's worth. Specifically, I'd like to pose the question, where does Ruby on Rails fit in the enterprise development landscape?

Before we get started, I feel obliged to state a little disclaimer about myself. I love Rails. I love it. But, while I enjoy writing applications with Rails, I've tried hard to keep an objective perspective on things. It is this perspective I'm hoping to pass on to you, dear reader, for critical evaluation. I'm not expecting anyone to take my perspective and adopt it as-is. I'm expecting you to think for yourself, perhaps focusing your own thoughts through this article.

Let's start by talking a bit about what exactly Ruby on Rails is.

Rails in a few words

Ruby on Rails is first and foremost a web framework and the brainchild of David Heinemeier Hansson. It's written with Ruby, a dynamically typed, interpreted programming language originating from the hands of Yukihiro Matsumoto. Rails is composed of a number of little, well-integrated frameworks, each focused on delivering some small slice of the full framework's capability, ranging from object-relational mapping to MVC to web services support.

Rails is designed to keep simple things simple. While the numerous mentions by Rails aficionados about 10-fold productivity improvements might seem ridiculous to the more critical of us, I do believe there's significant potential built right into the framework itself. I've personally found Rails a pleasure to develop with and my jaw has dropped at least a couple of times as I've ventured into trying out new aspects of the framework. Perhaps the biggest question regarding Rails in my mind right now is, how far can Rails stretch without losing its advantage?

In search of enterprise technology

What makes a technology suitable for enterprise development is difficult to define exactly. What we can do, however, is discuss aspects that we recognize as being at least somewhat relevant for the discussion. Such aspects include the kind of support available through frameworks, the power handed to us in the form of a programming language, the development environment in general, and, of course, the community at large along with the availability of third party libraries and developer skills on the market. Furthermore, issues like security and support for operations and maintenance cannot be ignored either.

The rest of this article largely follows the general direction of these aspects, hopefully creating a good understanding of how Rails fits into the modern enterprise.

Framework support

Rails itself is a web development framework. But what is a framework, to start with? A framework, according to a definition I've morphed from a variety of sources over the years, is a set of architectural guidance and reusable infrastructure. In short, by adopting a framework we're getting a bunch of conventions or "how things should be done" along with a pile of actual code we can start building on top of. Rails is just that--a well-integrated set of architectural guidance and structure, and working code to build on.

Architectural guidance

Speaking of architectural guidance, the Java Blueprints were developed by Sun Microsystems to aid J2EE developers with documented "best practices". These patterns have clearly had a huge positive (and also some negative) effect on how the development community has built its enterprise applications with J2EE. Similar guidelines can be observed growing within the community that's forming fast around the Spring Framework, a lightweight open source J2EE application framework. How about Rails? Is there a Rails pattern catalog out there?

Yes and no. On one hand, Rails doesn't have a documented set of patterns the way mainstream platforms like J2EE and .NET do. On the other hand, we could argue that a Rails developer doesn't really need one because of the delicate simplicity of the framework itself. On yet another hand, Rails itself could be considered to incorporate a set of patterns in the form of idioms--the "Rails way" of doing things. Furthermore, many of the design patterns familiar to enterprise developers from the J2EE pattern catalogs, as well as from the Microsoft Patterns and Practices group, are equally applicable to systems developed with Ruby on Rails as they are to systems developed with J2EE or .NET.

Community support

When discussing architectural guidance, one cannot bypass the enormous support a community can provide. For any technical problem a developer faces, and for any this-or-that decision to make, one of the first sources of help a developer turns to is the community. That community might be a local one, the company guru in technology X, or a global online community, such as a forum like JavaRanch or an IRC channel like #rubyonrails.

One could claim that the body of knowledge represented by an active community is far more important than a documented body of knowledge maintained by a single entity. Why? Because the active community is learning all the time and represents the absolute best information available at any time while the documented body of knowledge is a static snapshot with which one cannot interact and discuss. Furthermore, and perhaps more importantly, the community consists of human beings with huge amounts of tacit knowledge that can be tapped into case by case, as opposed to a static set of documents which at best is a shallow dip into that deep well of knowledge.

While Java and .NET developers enjoy a vast array of printed literature on their respective technologies, a Ruby on Rails developer has less printed literature and fewer online resources to reference. The online community, however, largely represented by the #rubyonrails IRC channel, is thriving and alive literally 24/7. With a number of books written to help out with learning the basics of both the Ruby language and the Rails framework, there's certainly plenty of community support available.


Convention over configuration

One aspect of frameworks and architectural guidance is the kind of idioms they bring forth. A particularly interesting one is Rails' philosophy of convention over configuration. In essence, where many of the incumbent frameworks advocate externalizing configuration into XML documents or inline comments, Rails advocates smart defaults and minimal configuration.

Obviously there is some amount of configuration needed for Rails as well. The big differentiator, along with smart defaults, is that the Rails developer can do the necessary configuration in simple formats such as Ruby itself and YAML--a kind of structured, text-based properties file format for describing object graphs. This often leads to expressive and easily maintainable configuration files, although it's certainly possible to create a tangle with just about any format if we're not cautious.
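As an illustration, Rails' database configuration lives in a YAML file. The following sketch shows the typical shape; the adapter, database names, and credentials are invented examples and will vary per project:

```yaml
# config/database.yml -- one entry per Rails environment
development:
  adapter: mysql
  database: myapp_development
  host: localhost
  username: myapp
  password: secret

test:
  adapter: mysql
  database: myapp_test
  host: localhost
  username: myapp
  password: secret
```

Compared to a typical XML deployment descriptor, there's no surrounding markup to maintain: the indentation is the structure.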


Automated testing

One of the things about Rails that has earned a lot of positive karma is the way automated testing is built into the framework as a core feature. In fact, I think it ought to be core functionality for any modern framework.

Rails supports automated testing in a couple of ways. First of all, Rails assumes you're writing automated tests. That is, by default, Rails generates a unit test skeleton for every model class you ask it to generate and a functional test skeleton for every controller class you ask it to generate. That's just a minor plus, though, since there aren't too many lines of code to write anyway in those skeleton test classes.

What gives me the biggest kicks about Rails' testing support is the depth of built-in assertions and utilities for writing functional tests: tests that exercise a controller, verify that the correct database insert occurred, assert that a validation error is displayed when it should be, or that the controller redirected to the right template for rendering a response. These are the kinds of tests that would require much more laborious setup if it weren't for the nifty built-ins and the overall simplicity of Rails' components themselves.
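To give a feel for these built-ins, here is a sketch of a Rails 1.x style functional test. It assumes a full Rails test environment (a test_helper, an OrdersController, and matching fixtures, all hypothetical), so it is illustrative rather than runnable on its own:

```ruby
require File.dirname(__FILE__) + '/../test_helper'
require 'orders_controller'

class OrdersControllerTest < Test::Unit::TestCase
  fixtures :orders

  def setup
    @controller = OrdersController.new
    @request    = ActionController::TestRequest.new
    @response   = ActionController::TestResponse.new
  end

  def test_create_redirects_on_success
    post :create, :order => { :product_id => 1 }
    assert_response :redirect
    assert_redirected_to :action => 'show'
  end

  def test_create_renders_form_again_on_blank_input
    post :create, :order => {}
    assert_response :success   # the form is re-rendered with errors
    assert_template 'orders/new'
  end
end
```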

In addition to the comprehensive library of assertions and shortcuts, Rails lets us manage the data for our functional tests using the concept of fixtures: data sets, described in YAML, that Rails automatically populates the database with for every test or, if we've decided to use transactional fixtures, just once at the beginning of the test run. Furthermore, Rails gives our test code easy access to those fixtures, just to make sure we're not tempted to duplicate data between the fixture and the test code.

Simply put, the testing support in Rails is quite brilliant and is definitely setting expectations higher for other frameworks as well.

User interface

Admittedly the focus of much of the recent Web 2.0 talk, AJAX (Asynchronous JavaScript And XML) is nevertheless a serious contender in pushing our enterprise applications closer to a rich user experience in the years to come. While AJAX might be "trendy" right now, and some of the best-selling books on Amazon are about it, it's not just hot air. Asynchronous JavaScript and smart server-side code can improve the usability of our applications as well as decrease our bandwidth expenditure, if applied wisely.

Rails, as you may have guessed by now, is among the first to embrace and integrate AJAX functionality right into the framework itself. The Ruby on Rails distribution itself ships with the necessary JavaScript libraries for creating snappy effects that help our users keep track of the consequences of their actions on the user interface widgets. Similarly, Rails provides all the necessary plumbing for connecting the client-side JavaScript snippets with the smarts running on the server, allowing for straightforward implementation of dynamic data-dependent user interface widgets.


The programming language

The programming language itself is a valid concern when thinking about adding a new tool to your corporate toolbox. From my perspective, the main difference between the Ruby programming language and the mainstream enterprise technologies of the modern IT department is dynamic typing.

Dynamic typing

Dynamic typing means that, for a given variable, we don't need to declare which type of object it will reference. Static typing means that we're explicitly telling the compiler that the variable will only be used to refer to objects of a given type (or subtypes thereof). Dynamic typing also means that there's no compiler to tell us when we've screwed up by passing the wrong type of object to a method. Instead, we'll get a runtime exception upon execution of that specific piece of code.
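A minimal, self-contained sketch of what this means in Ruby (the method and variable names are invented for illustration). No parameter types are declared, so any object responding to #length will do:

```ruby
# No type declaration: any object with a #length method is acceptable.
def describe(collection)
  "contains #{collection.length} items"
end

describe([1, 2, 3])   # => "contains 3 items"
describe("abc")       # => "contains 3 items" (String has #length too)

# Passing an incompatible object fails only when the code actually runs;
# there is no compiler to reject the call beforehand.
begin
  describe(42)        # Integer has no #length method
rescue NoMethodError => error
  error               # a runtime exception, not a compile-time error
end
```

This is the "duck typing" flavor of dynamic typing: the call site cares only about the methods an object responds to, not its declared class.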

A rather common way of thinking about the difference is in terms of "quick and dirty" vs "slow but maintainable", referring to the experience many developers share of having been exposed to tedious thousand-line Perl scripts, or to similarly lengthy Java code from which one can still somehow derive the original author's intent. There's no denying that a mess written in a dynamically typed language is likely harder to figure out than a mess in a statically typed language. The question then becomes whether we should focus on making it easier to deal with the symptoms or on avoiding the problem in the first place. Furthermore, dynamically typed languages like Ruby often place more flexibility and expressiveness at our disposal, which can make for significantly more readable and thus maintainable code.

A good programmer aware of these issues is far less likely to come up with a mess than a less aware colleague who works in "write-only" mode, not thinking too much about design and maintainability. The good programmer is likely to write smaller methods with meaningful names, likely to be more careful in dividing responsibilities, and generally more likely to write good code, being aware of the pros and cons of a dynamically typed language. This is a good thing if you're a good programmer. Less so if you're not. It's certainly not all black and white.

In short, dynamically typed languages tend to give us more power and flexibility in exchange for more freedom to shoot ourselves in the foot. Luckily, there are ways to safeguard our precious little toes, one of the most important of them being automated testing.

Automated testing (again)

When writing serious applications with dynamically typed languages such as Ruby, it's even more important to have your codebase covered with automated tests. Why is that? Because the compiler doesn't catch our mistakes anymore? In part, yes, but the compiler never caught all of our mistakes anyway. I'd argue that the importance of automated tests--and techniques that promote automated tests such as test-driven development--is simply highlighted to us when we're using dynamically typed languages and are forced to drop our subconscious beliefs that the compiler catching typos is somehow protecting us from introducing defects as we go about changing our code. And, let's face it, in the enterprise, we're going to be changing a lot of code.

The way I see it, it's a good thing that dynamically typed languages make us face reality and push us towards following good development practices. At least, that's what tends to happen with developers who've got the experience and the accompanying scars and burns. The good practices do need some support as well, however. Let's talk a bit about how that support manifests itself for the Ruby on Rails developer.


Development environment

When talking about frameworks such as Ruby on Rails, we cannot focus blindly on the runtime. We have to consider both the runtime features and the development-time support in terms of tools such as editors, build tools, and deployment tools.

Development tools

The IDE support for statically typed languages is generally much better than for dynamically typed languages such as Ruby. This stems from the fact that the static view of Ruby code doesn't convey the types of objects and thus the methods available on those objects, which makes features like auto-completion and refactoring somewhat tricky to implement. In practice, while as a J2EE or .NET developer you have all kinds of wizards, automated refactorings, incremental compilation, and so forth, as a Rails developer you have pretty much none of this. On the other hand, with Rails you don't need most of that stuff, automated refactorings being the exception.

While Rails lacks the great IDE support we're all so used to in our existing toolset for developing enterprise applications, Rails has the advantage of an interactive console to prototype with. Ruby's interactive console, irb, is a nifty little tool for quickly trying out snippets of code. The irb console itself isn't the punch line here, however. What constitutes a major help for the Rails developer is Rails' integration of irb into the rest of the framework.

Rails comes with an executable called "script/console", which basically loads the full environment into an interactive irb session for the developer to play with. From the console, one can quickly try out things that would otherwise be somewhat awkward to do or would require sprinkling "debugging" code around the production code base. With the console, we can inspect our domain model, change values, save objects, delete objects, and so forth. We get all this without any setup at all, since the console script configures the full Rails environment for us. Oh, and we get tab completion as well.

Build tools

J2EE is probably the leading technology in terms of build tools at the moment. Apache Ant, originally written by James Duncan Davidson (coincidentally a Ruby and Rails convert nowadays) at Sun Microsystems and later donated to Apache along with Tomcat, is currently the de facto build tool for Java and J2EE projects around the world (with Maven catching up slowly but surely).

Ant has a vast number of extensions readily available for pretty much any task conceivable, and few vendors whose products have anything to do with developing software in Java fail to provide Ant tasks for integration. The big question is, do these tools work with Rails applications as well, or should we look into what the Ruby and Rails community has to offer?

Ruby gives us Rake, a build tool similar to Ant in the sense that it's mainly a descriptive syntax for specifying a build recipe with interdependent tasks. The main difference is that Rake files, unlike Ant's XML scripts, are written in plain Ruby. This lets a Rake task perform pretty much any operation with the full power of the Ruby language while still preserving the descriptive approach to defining builds.
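As a quick illustration of Rake's plain-Ruby build syntax, here's a self-contained sketch. The task names are invented, and normally these definitions would live in a file named Rakefile rather than being invoked programmatically as done here to make the script stand on its own:

```ruby
require 'rake'
include Rake::DSL   # makes the task method available at the top level

steps = []

task :prepare do
  steps << :prepare
end

# Still declarative, yet plain Ruby: :build depends on :prepare.
task :build => :prepare do
  steps << :build
end

Rake::Task[:build].invoke   # runs :prepare first, then :build
```

Because the recipe is ordinary Ruby, a task body can loop, branch, call libraries, or shell out, with no need for a custom task plugin the way Ant often requires.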

Ant certainly can be used for managing builds for Rails projects as well, although Rake is by far the more natural choice through its close integration with the Ruby platform in general. To help you make up your mind, we probably should look into the topic of continuous integration. After all, a core requirement for a build tool today is that it can be easily integrated with build servers.

Continuous integration tools

For an advocate of continuous integration, the availability of a good build server is crucial for a comfortable development experience. While as J2EE or .NET developers we have the luxury of choosing from a number of tools such as CruiseControl and CruiseControl.NET, "native" build servers for Rails applications aren't exactly plentiful.

The only build server currently available that has explicit support for Ruby (and Ruby on Rails) projects is DamageControl. It's functionally inferior to what CruiseControl, for example, has to offer but does fulfill the basic need for continuous integration as long as you're using Subversion for configuration management. The good news, of course, is that nobody's saying we couldn't use CruiseControl for building our Ruby projects as well. I wouldn't be surprised to see a CruiseControl plugin for Rake builds pop up soon from the Rails community.

Deployment tools

Somewhat related to the topic of build tools and continuous integration is deployment after a successful build. We've got a long tradition of starting and stopping servers, deploying all sorts of archives and assemblies, copying files from one place to another, and so forth--all using a mixture of Ant builds, custom tasks, and the odd shell script. Generic build tools like Ant and Rake give us enough power to do all that, although it can get somewhat laborious.

In the Rails world, things are a bit different. First of all, we rarely restart the server during a development cycle because of Rails' edit-and-refresh capability. This can be a great help on those occasions when we'd like to do a quick visual verification of a change we've made and don't want to wait seconds, or even minutes, for the server to catch up with us.

The other big difference is Rails' very own Swiss Army knife, SwitchTower. SwitchTower is effectively a deployment tool that supports remote deployment (and rollback thereof) of Rails applications on UNIX/Linux servers over SSH connections. Its fundamental building blocks are shell scripts (thus its reliance on UNIX systems) and it knows how to do most necessary activities out of the box. Another strict requirement is the use of Subversion as the source repository.

There's something vaguely similar on the horizon for J2EE development as well, in the form of Cargo which is a generic API for managing a variety of application servers. Cargo could become a catalyst for a host of deployment tools for the J2EE developer. Time will tell. In the meantime, SwitchTower helps us deploy our Rails applications on clusters of any size with ease.

Suitability for enterprise use

While good deployment tools are an essential ingredient of any software project in a complex enterprise environment, there are further aspects to consider as well. Perhaps the two most important of these are performance and the ability to integrate with legacy systems. Another important enterprise ingredient is how well a framework or platform supports managing the data in the system. Let's talk about the performance of the Rails platform first and come back to the data and integration needs after that.


Performance

Performance alone doesn't mean anything. In practice, when we talk about enterprise technologies needing good performance, we're mainly talking about raw speed (the ability of the virtual machine or interpreter to churn through the statements in our code) and scalability (the ability to scale the system by throwing in more hardware). A further factor to take into account with web applications is the platform's support for effective caching between requests. Let's take a look at each of these separately.

Raw speed

In terms of raw speed, Ruby still has a way to go before it catches up with its big brothers, the JVM and the CLR. Years of development have evolved the JVM and its HotSpot technology into super-fast machinery that has been recorded beating even natively compiled C++. Ruby, on the other hand, is still waiting for the upgrade that should come with its new virtual machine, YARV.

Early benchmarks have shown 5-10 times faster execution times with YARV compared to the current version of Ruby, which is a clear improvement and certainly is a good sign for Ruby in the enterprise. Plus, for the odd high performance job we'd like to tack into our Ruby application, we can always fall back on writing an extension in native C code.

While Ruby might not quite compete yet with the major platforms like Java and .NET in terms of raw speed, that might not actually matter all that much for a significant portion of enterprise applications. In fact, I'd wager that for most enterprise developers a far more important non-functional requirement is scalability rather than raw speed.


Scalability

Scalability is also a somewhat ambiguous word. It comes in two flavours, horizontal and vertical. Horizontal scalability means scaling "out" by adding more boxes next to the existing ones. Vertical scalability means scaling "up" by adding more memory, more CPUs, faster disks, and so forth, to the existing boxes.

Scaling up is actually somewhat trivial with Rails. Being based on multi-process execution through FastCGI on top of web servers like Apache and lighttpd, Rails is well placed to make use of added memory, added CPU speed, and faster I/O.

So, if scaling up is not an issue, then what can we say about scaling out? In short, Rails scales out quite nicely up to a point. Rails delegates clustering, fail-over, load balancing, and so forth, to the web server infrastructure and thus supports whatever Apache, for example, supports in terms of high availability and high throughput properties.

In practice, the interesting question with regards to scaling out becomes a question of how to scale the database. With the platform able to scale out the application itself by adding more nodes behind a load balancer, the way Rails manages its database connections gets to be in the limelight. The multi-process architecture of FastCGI effectively means that each running process reserves a physical database connection which, at least theoretically, limits how far we can scale with such an architecture.

Generally speaking, a multi-threaded architecture like the one we have in J2EE is the more scalable approach. In practice, however, these theoretical limits are far from the reality of a significant number of what we call "enterprise applications." Google, Amazon, and other major online websites are scaling out with commodity hardware and architectures similar to Rails', so it's certainly possible to scale out far beyond the departmental level with Rails. The effort invested in tweaking the system to play nicely in such extreme setups could be non-trivial, however, eating away the programmer time saved through Rails' productivity boost in the common case.

Caching and lazy loading

Speaking of programmer time, a cache for the web layer is something I wouldn't want to write from scratch too many times, not to mention coming up with a custom cache for the entire persistence layer.

Rails offers only very basic caching on the persistence layer but it gives us quite good page and fragment level caching for the web layer. Rails also lacks built-in support for the kind of distributed caches that are available for Hibernate, for example, although there have been reports of people building quite exciting implementations on top of memcached, the same distributed cache implementation Rails can use for storing its sessions.

Rails also supports lazy loading through its object-relational mapping component, ActiveRecord. The most notable difference is that ActiveRecord, which, just like Rails itself, is geared more towards ease of use than raw performance, implements lazy loading in a simple way: every association is lazy. This is quite sufficient for the majority of situations and, where necessary, we always have the option of going around ActiveRecord and performing the necessary joined queries ourselves.
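As a sketch of what this looks like in practice (the Customer and Order models are hypothetical, and the code assumes a configured Rails application, so it is not runnable on its own):

```ruby
class Customer < ActiveRecord::Base
  has_many :orders
end

customer = Customer.find(1)  # one query: only the customers row is loaded
customer.orders              # the orders query runs here, on first access

# Where the extra round trip matters, Rails 1.x can eager-load the
# association with a joined query instead:
Customer.find(1, :include => :orders)
```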


Security

Enterprise systems are almost by definition important and can affect, directly or indirectly, the corporation's bottom line in radical ways when malfunctioning. In today's world, with crackers and script kiddies increasingly after money rather than plain harm, our IT systems must be resilient to attacks of various kinds. Furthermore, few applications can do without some kind of authentication, so the ability to integrate with LDAP, to store encrypted passwords in a database, and so forth, is a requirement that's practically a given these days.

Protecting our data and service

As a technology mainly aimed at the web, Rails applications will be subjected to a variety of known attacks based on techniques like cross-site scripting and SQL injection. Enterprise technologies will have to be robust enough for system administrators to be able to harden the systems and shield our crucial business information as well as service delivery to our customers.

Unfortunately, there's no fool-proof way for a framework to prevent the developer from exposing his application to these attacks, but Rails does make it simple for the developer to not fall prey to these kinds of attacks.

For example, Rails gives us a view helper function called "h()" which performs HTML encoding for us at the cost of only three extra characters (or just one if we omit the parentheses). By making it so easy to use, Rails once again encourages good practice. Similarly, ActiveRecord provides a placeholder query syntax similar to JDBC's PreparedStatement, which effectively helps us avoid SQL injection and often makes our code more readable as well.
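To illustrate what h() does, Ruby's standard library ships the same HTML-escaping operation in the CGI module, so we can demonstrate the effect outside Rails (the malicious input string is invented):

```ruby
require 'cgi'

malicious = '<script>alert("xss")</script>'
escaped   = CGI.escapeHTML(malicious)
escaped   # => "&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;"

# The escaped markup renders as harmless text instead of executing.
# ActiveRecord's placeholder syntax for avoiding SQL injection looks
# like this (a non-runnable sketch, assuming a Person model):
#   Person.find(:all, :conditions => ["name = ?", params[:name]])
```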

Rails does introduce a potential vulnerability we have to remember to protect ourselves from, however, through one of its productivity-boosting features. Rails makes it ridiculously easy to bind request parameters to domain objects upon a form submit. This means that, if we're not careful, a malicious user could intercept and alter--or add--form parameters in ways that could do a lot of damage. Fortunately, there is again an easy way to protect our domain objects from such malicious data injection. In practice, with Rails, such an "easy way" often means a trivial one-liner that declaratively tells Rails how to treat our model objects.
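The kind of one-liner meant here is ActiveRecord's attr_protected declaration (shown as a sketch; the Account model and its attributes are hypothetical, and the code assumes a Rails environment):

```ruby
class Account < ActiveRecord::Base
  # Mass assignment from request parameters will silently skip these
  # attributes; they can only be set explicitly in code.
  attr_protected :role, :credit_limit
end
```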

Implementing user authentication

There's more to the security aspect, however, than these basic tools for preventing common attacks. The technology must give us some kind of support for implementing authentication schemes into our applications, for instance. While Rails itself doesn't provide explicit security APIs like some other enterprise technologies do, Rails comes with easy-to-use facilities for quickly putting together simple things that work.

The fundamental building block for such facilities is ActionPack's (the Rails MVC framework) filter architecture, which lets us implement our application's authentication and authorization aspects in a non-intrusive manner without mixing too much security-related code with the actual application logic.

Based on this filter architecture, Rails gives us a choice between a variety of tools called login generators, which are effectively code generators that generate the necessary plumbing code (and database schemas) for us to take care of authentication against a database user table, for example. Some examples of the functionality available through these login generators include password hashing with random salt, email verification with disposable tokens, and password reminders.

None of the readily available login generators supports LDAP, however, which means that some systems might require a bit more manual work than others: adjusting the output of an existing login generator with code that connects to a directory server rather than a database. Luckily, the Ruby developer doesn't have to go about implementing the LDAP protocol himself, as there are mature libraries available for integrating with most standard protocols, including LDAP.

Operations support

For any system that's going to live in an enterprise environment for longer than a few weeks, the need for operations support becomes painful to ignore. We need tools for monitoring the system's performance, tools for fixing simple problems easily without affecting the service, and tools for making larger maintenance operations simple and free of risk. This is an area where Rails' architecture, relying on a web server, proves both an advantage and a disadvantage of sorts.

We've been running web sites on top of the Apache web server for a decade now, and it's probably one of the most stable and most thoroughly stress-tested software products alive. As a de facto standard, the Apache web server also has plenty of support in the form of a large user base, readily available management scripts, graphical tools, and whatnot. In other words, a Rails application is trivial to manage as long as the tools available for the chosen web server provide the desired functionality.

In general, the domain of such management and monitoring tools extends rather well to the infrastructure but doesn't quite reach the virtual machine-level performance metrics we might also be interested in observing and analysing. This is an area where Rails currently lacks the kind of tools some of us are accustomed to with the dominant enterprise technologies and, as such, it is a matter we'll have to consider when deciding whether Rails is suitable for a given system. Fortunately, most enterprise systems can be analysed and monitored well enough using conventional, operating system-level and external performance measurement tools.


Managing data

Enterprise systems often deal with complex domain models with hundreds of entities and database schemas with hundreds of tables, and the technology chosen for implementing such systems must provide some support for dealing with all that data. Let's see how Rails can help us in this regard.

Object-relational mapping

Hardly any project I visit these days has not at least considered adopting an object-relational mapping tool for alleviating the pain of mapping objects to a relational database. The J2EE community has long been a pioneer in ORM tools with Hibernate quite probably being the number one Java ORM framework in terms of developer mindshare.

For the Rails developer, the Hibernate-equivalent is called ActiveRecord. Part of Rails, ActiveRecord provides a simple ORM framework which focuses on making the 80% of cases a breeze and leaves enough hooks for the developer to handle the remaining 20% in less of a breeze. In practice, ActiveRecord makes the live database schema part of the object model, generating the domain objects' persistence code at runtime based on what it finds from the database. The developer is free to alter the domain objects' persistence behavior with simple declarations, most of which are built into the framework itself, such as a host of validations and trivial mappings between database columns and field names on the domain object.
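The runtime generation of persistence code can be hard to picture, so here is a tiny self-contained toy (all names invented) that mimics just the accessor-generation part of the idea with Ruby's define_method, minus any actual database access:

```ruby
# Toy illustration of the ActiveRecord idea: attribute methods are
# generated at runtime from a column list rather than written by hand.
class ToyRecord
  def self.columns(*names)
    names.each do |name|
      define_method(name)       { @attributes[name] }         # reader
      define_method("#{name}=") { |value| @attributes[name] = value }  # writer
    end
  end

  def initialize
    @attributes = {}
  end
end

class Product < ToyRecord
  # In real ActiveRecord there is no such declaration: the column list
  # is read from the live schema when the class is first used.
  columns :name, :price
end

product = Product.new
product.name  = "Widget"
product.price = 9.99
product.name   # => "Widget"
```

Real ActiveRecord does the equivalent by querying the table's column list, which is why adding a column to the database immediately shows up on the model without any code changes.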

Over time, ActiveRecord has become more and more performant as the Rails development team has added optimizations found in other frameworks such as Hibernate. It is still not as blazingly fast as Hibernate can be, however. As already mentioned, ActiveRecord's primary goal is ease of use rather than raw performance, and it really does make the simple things simple, which is a point we'll have to keep in mind when pondering which tool to use when.

Adapting legacy database schemas

In part because Rails and ActiveRecord aim to make life easy in the majority of cases while giving the developer enough rope to deal with the tough cookies with proper force, ActiveRecord isn't the best framework for adapting to strange legacy database schemas.

While ActiveRecord can be made to work with a range of schemas from the simple to the complex, its sweet spot is clearly in providing unbelievable ease of use at the simple end and faring quite well in the majority of cases. If you want to kick ass at the complex end of legacy schemas, ActiveRecord should probably not be your first pick.

Data migration

One topic that doesn't seem to have gotten much attention in terms of enterprise tooling is data migration. For one reason or another, we've just managed to bite our lips time and time again when dealing with that dreaded word. This is another domain where Rails really steps up to the plate with its recent addition of the migrations concept.

Rails' migrations are essentially a way to describe changes to the database schema with simple Ruby scripts--version controlled along with the rest of the sources--providing both upward and downward migration upon deployment or rollback of a new version of the application. The Rails developers haven't invented anything magical, in the sense that not every conceivable kind of change is supported but, as usual, the basic operations (adding, dropping, or renaming columns and tables) work amazingly smoothly.
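As a sketch of what a migration looks like (the table and column names are invented, and running it requires a Rails environment):

```ruby
class AddRatingToProducts < ActiveRecord::Migration
  def self.up
    add_column :products, :rating, :integer, :default => 0
  end

  def self.down
    remove_column :products, :rating
  end
end
```

Migrating up applies self.up; rolling back a release runs self.down, returning the schema to the previous version.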


Integrating with legacy systems

For an enterprise developer, the question of integration with legacy systems is an important one. There's the ERP system, the CRM system, the two competing content management systems, the existing J2EE applications, the .NET applications, a couple of applications built on top of Microsoft Excel, and then some. Oh, and let's not forget the mainframe that does our monthly paychecks. Unfortunately, every now and then, we have to integrate with these legacy systems, implemented with a variety of different technologies. Does Ruby on Rails fit this world of heterogeneous systems?

While Ruby has only a fraction of the integration capabilities presented by platforms such as J2EE or .NET, that doesn't mean Ruby and Rails cannot integrate with legacy systems. It just means that it might not be as easy as it could be.

The number of networking protocols available to Ruby developers in the form of open source implementations grows almost daily, and the ability to write extensions to Ruby in C makes a whole world of existing native libraries available to Ruby as well. Yet I'd guess that most Ruby developers don't feel comfortable writing C code, so if you know of an exotic integration you need to build into your system, you had better check the availability of a robust Ruby library before setting sail towards Ruby and Rails.

There is, of course, always the possibility of wrapping the difficult systems with something more usable, such as a standard web service interface, which Rails can cope with, although that's obviously more work for integrating your Rails application with the legacy systems.

Speaking of standard interfaces, is there something more to standards that we should discuss? I think there is.

Standard vs Best-of-breed

Ruby on Rails is obviously not a standards-based framework such as J2EE is, for example. Instead, it could be classified in the "best-of-breed" category of frameworks, aiming for the best possible set of functionality without too much attention to backwards compatibility.

In short, there are both advantages and disadvantages to being or not being standard. Two of the common concerns are the learning curve associated with a non-standard technology and the overall availability of developers proficient in it. Let's talk a bit about those.

Learning curve

As an object-oriented language, Ruby doesn't pose a significant learning curve in terms of a new paradigm. Having said that, the Ruby syntax is quite different from what we're used to in languages like Java and C#. Ruby is also not taught at universities and other academic institutions around the world in the vast numbers that Java, for example, is. This represents a slight disadvantage for Rails as an enterprise web development framework, both in the non-zero effort of learning the language itself before becoming truly productive with the framework and in the availability of skilled developers.

In practice, however, the learning effort is not a big one and one can get started very quickly with developing Rails applications without knowing all the magic that's possible with Ruby. Furthermore, Ruby being an easy language to learn makes the developer availability less of a showstopper.

Another aspect of adopting a non-standard technology in the enterprise, perhaps one not so commonly thought of, is the non-standard solution's ability to innovate and grow more effectively. Let's call it traveling without luggage.

Traveling without luggage

Following a standard does have a lot of advantages but it's never all black and white. Standards are created by standards bodies. Standards bodies are composed of individuals and corporations with a variety of vested interests. Standard bodies also have--for very good reasons--processes that take a while to churn through. All of this means that change is slower in standards than it is with independent technologies driven by a single authority.

Rails is such an independent technology, driven by a small number of core developers based on informal feedback from the larger community and based on the core developers' personal views. In practice, this shows in how the independent technology can make rapid improvements, occasionally breaking backwards compatibility in exchange for something new and shiny that makes us more productive now rather than a year from now. The way I see it, that's a good thing. I like having options. I like being able to occasionally choose the best-of-breed solution instead of the tried-and-true(-but-not-so-good) solution that's standard.


I hope I've given you some ammunition for deciding whether to look into Rails or stick with your existing toolset for that upcoming web project in your specific enterprise, and for understanding what such a decision might mean for your organization.

I'm actively using Rails as well as J2EE and I'm mostly happy with both, each having its sweet spots as well as its sour spots. The kinds of systems I'm building with these technologies tend to be different, though. The Rails applications are strictly about the web, whereas the J2EE applications tend to be more in the realm of backend systems, along with the occasional web application. Recently, I've found Rails to be a very productive and satisfying platform for typical web development and I've heard colleagues report similar experiences. It's certainly nice to have Rails as an option to choose from.

While it's certainly nothing to bet your house on, the chances are that Ruby and Rails are going to be around for a while and getting a lot of attention as well. Maybe there's going to be a fourth major platform for an enterprise developer to choose from in the future along with J2EE, .NET, and LAMP, based on a powerful dynamic programming language and a framework that's truly a breath of fresh air. I for one wouldn't mind seeing that happen.


Some useful resources and pointers to forthcoming books on the topic of Ruby and Ruby on Rails:

Online resources

Published literature on Ruby and Rails

Upcoming titles on Ruby and Rails

Discuss this article in The Big Moose Saloon!

Return to Top
Generifying your Design Patterns -- Part I: The Visitor
Generifying your Design Patterns
Part I: The Visitor

by Mark Spritzler

In the upcoming months, I will be writing a series of articles on generifying your design patterns. We will look at a number of patterns, including Singleton, Factory, Command, Visitor, and Template Method. We will see what we can gain from using generics in our design patterns and how to create the generified versions. Some patterns work well with generics, and some just don't make sense. Well, you could argue that none of them make sense: after all, all we gain is the ability to statically type them and have the compiler check that we are using the correct type.

To start off, I have chosen the Visitor pattern, since it is the first one that I attempted to generify. It has worked really well in our application, and although it might restrict the overall usability of the pattern in certain situations, it is still pretty cool, in my mind. Please note, though, that this is a hybrid of the Visitor pattern: each of our visitors has only one visit method and is specific to one type. I took this approach, again, because I thought it was really cool. In the next article in the series, I will go a bit further into how you can implement the true Gang of Four Visitor pattern. It just isn't as cool in my mind as this one.

A generic look at Generics

So let's start by looking at generics and how to make a class generic. I am going to be brief here, since I am hoping that you have a little Generics experience already. We could have lots of articles just explaining generics, but it would be best to assume you have a good understanding of them already and not waste that extra space.

Generics are a way to create a template of your class and allow actual implementations of your template to be type safe. If you were to look at the Javadocs for any of the Collection classes, you would see some interesting characters like E all over the place. The E basically stands for the element type, any type that you will define when you use or extend that class. You can do the same in your classes by just defining the letters that you are going to use.

public class MyClass<A, B, C>{}

You can use any letters you like. We tend to find that E usually stands for element and T for type. So you will see a lot of people start with T and go alphabetical from there. You usually won't have more than a couple of letters.

Now, in your class code you can use the letters in place of actual classes. So, inside a class declared with type parameters T and U, let's define a method that returns a T and takes a U as a parameter:

public T myMethod(U parameter)

One thing I have found is that a method returning a T is usually either declared in an interface or declared abstract, because you cannot create a new T with "new T();". You can call another method that returns a T and assign the result to a T reference, but that method will itself be abstract in your template. (Remember that an interface method is public and abstract by default.)
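To make this concrete, here is a minimal sketch of that idea. The class and method names (Holder, StringHolder, create) are illustrative, not from the article:

```java
// A generic template cannot say "new T()", so creation is left to an
// abstract method that subclasses implement for their concrete type.
abstract class Holder<T> {
    private T value;

    // Subclasses supply the instance; the template itself cannot create a T.
    protected abstract T create();

    public T get() {
        if (value == null) {
            value = create();   // legal: assigning a returned T to a T reference
        }
        return value;
    }
}

class StringHolder extends Holder<String> {
    protected String create() {
        return "hello";
    }
}
```

Calling `new StringHolder().get()` returns "hello"; the template never needs to know the concrete type it is holding.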


Another thing to understand is "erasure". Erasure means that when javac compiles the code, it produces bytecode much like the pre-generics Java code you would otherwise have had to write by hand. So what does that mean? It means that the typical Visitor pattern, with one visit method per type, cannot be generified, because the "U parameter" gets erased, and the compiler cannot tell which visit method your class is actually implementing. Here, let me quote a brilliant mind:

We can't really use this interface for visitors with more than one visit method. For example if I try:
 public class PrettyPrintVisitor implements GenerifiedVisitor<OrderHeader>,
         GenerifiedVisitor<OrderDetail>, GenerifiedVisitor<Part> {
     public void visit(OrderHeader header) {}
     public void visit(OrderDetail detail) {}
     public void visit(Part part) {}
This is illegal. You can't implement the same generic interface twice with different type arguments. (Since with erasure they'd be exactly the same interface.) To my mind, this severely limits the effectiveness of this generified interface. - Jim Yingst - JavaRanch Sheriff.

The Visitor pattern

Now, let's look at the Visitor pattern. In the Gang of Four book, the Visitor Pattern is defined as "Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates."

Here is the basic Visitor pattern (remember what I said above, that we are implementing a modified version, but this code here is the real deal):

public interface Visitor {
    public void visitA(A o);
    public void visitB(B o);
    public void visitC(C o);
}

Any object that accepts a visitor has an accept method:

public void accept(Visitor v) {}

That was pretty simple, huh?

Now we will generify it:

public interface GenericVisitor<T> {
    public void visit(T t);
}

Here is an implementation of that generic visitor:

public class MyVisitor implements GenericVisitor<MyClass> {
    public void visit(MyClass myClass) {
        // visitor logic for MyClass goes here
    }
}

And the object that accepts the visitor:

public class MyClass {
    public void accept(GenericVisitor<MyClass> visitor) {
        visitor.visit(this);
    }
}

How about a real example?

As above, we create our GenerifiedVisitor:

package com.javaranch.visitor;

public interface GenerifiedVisitor<T> {
    public void visit(T objectToVisit);
}

And our GenerifiedVisitable class:

package com.javaranch.visitor;

public interface GenerifiedVisitable<T> {
    public void accept(GenerifiedVisitor<T> visitor);
}

An example

This looks exactly like what we already wrote, so we should really see it in action. We will create a simple Order Entry program that needs to validate certain pieces of data within an Order. The Order consists of an OrderHeader, OrderDetail, and Part object model. An OrderHeader can have multiple OrderDetails, and each OrderDetail is related to a Part. In our validation program we need to make sure that each Order has a correct customer number and at least one OrderDetail, that each OrderDetail has a quantity and a Part, and that each Part has a correct part number and price. To save space, the exact validation code is not shown in the first sample Visitors, but I am sure you can see how you could create a Visitor for each part of the validation. So I have created one example Visitor for each part of the Order model, and the code in each visit method will simply print the object that has been passed in.

So let's look at the Order object model first. The objects are simple JavaBean/POJO classes with getters and setters, and of course the accept method of the GenerifiedVisitable of that type. So first is the Order header. Here is its code:

package com.javaranch.visitor.impl.domain;

import com.javaranch.visitor.GenerifiedVisitable;
import com.javaranch.visitor.GenerifiedVisitor;

import java.util.List;
import java.util.ArrayList;

public class OrderHeader implements
    GenerifiedVisitable<OrderHeader> {
    private Long orderId;
    private Long customerId;
    private List<OrderDetail> orderDetails =
        new ArrayList<OrderDetail>();
    private List<ValidationError> errors =
        new ArrayList<ValidationError>();

    // All the Getters and Setters removed for space purposes

    public void accept(GenerifiedVisitor<OrderHeader> visitor) {
        visitor.visit(this);
    }
}

As you can see, we have four attributes. They have their getters and setters, and the accept method takes only a GenerifiedVisitor that visits OrderHeader objects. Any other type of GenerifiedVisitor will cause a compile-time error, which makes sure you send the right kind of visitor to a visitable class. So what do we gain, what does this save, and what extra code did we have to write? Well, we have gained type safety, but we have added a little extra code by supplying the type between those alligator mouths, < and >. In order to see what else we have saved, we need to look at the code we would otherwise have had to write in our visitor classes.

Let's say we have a bunch of visitors for all three objects in our model. Without generics, each visitor would need a bunch of if statements using instanceof to determine whether the object passed in is of the type we want to visit, followed by a cast to the correct type. That leaves the checking to run time, where bugs take longer to find, whereas a compile-time check saves us that debugging time. And we know it is much more expensive to catch bugs later than earlier; maintenance and debugging usually account for a significant share of the costs. (I read that somewhere, and I don't want to take the extra time to re-find where.) Anyway, let's quickly look at the remaining code.
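For contrast, here is a hypothetical sketch of the instanceof-style visitor the paragraph above describes. The class name and placeholder types (String, Integer) are illustrative, standing in for the Order model:

```java
// The pre-generics alternative: one visitor that accepts Object and must
// sort out the runtime type itself with instanceof checks and casts.
class UntypedVisitor {
    public String visit(Object o) {
        if (o instanceof String) {
            String s = (String) o;     // runtime cast; a wrong type surfaces late
            return "visited String: " + s;
        } else if (o instanceof Integer) {
            Integer i = (Integer) o;
            return "visited Integer: " + i;
        }
        // Silent fall-through: an unexpected type is only discovered at run
        // time, whereas GenerifiedVisitor<T> rejects it at compile time.
        return "unknown type";
    }
}
```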

Here are the OrderDetail and Part objects:

package com.javaranch.visitor.impl.domain;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.GenerifiedVisitable;

public class OrderDetail
    implements GenerifiedVisitable<OrderDetail> {
    private Long detailId;
    private OrderHeader header;
    private Part part;
    private int quantity;

    // Getter and Setters removed for space purposes

    public void accept(GenerifiedVisitor<OrderDetail> visitor) {
        visitor.visit(this);
    }
}

package com.javaranch.visitor.impl.domain;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.GenerifiedVisitable;

public class Part implements GenerifiedVisitable<Part> {
    private Long partId;
    private String description;
    private Double price;

    // Getters and Setters removed for space purposes

    public void accept(GenerifiedVisitor<Part> visitor) {
        visitor.visit(this);
    }
}

And finally three simple quick visitors that we created:

package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.impl.domain.OrderHeader;

public class HeaderVisitor
    implements GenerifiedVisitor<OrderHeader> {
    public void visit(OrderHeader header) {
        System.out.println(header);
    }
}

package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.impl.domain.OrderDetail;

public class DetailVisitor implements GenerifiedVisitor<OrderDetail> {
    public void visit(OrderDetail detail) {
        System.out.println(detail);
    }
}
package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.impl.domain.Part;

public class PartVisitor implements GenerifiedVisitor<Part> {
    public void visit(Part part) {
        System.out.println(part);
    }
}

Those all look the same, don't they? Well, each Visitor would of course have its own unique code based on what validation it needed to do. Let's create an actual service that builds an Order and the Visitors that implement the requirements. We also need a new class called ValidationError, which holds a description of a validation failure; each failure is added to a List on the OrderHeader. When we have run through all our validations, the OrderHeader will have a complete list of the failed validations and we can print them out. I won't post the code for ValidationError, but it has one attribute called description and the usual getter and setter for that attribute.

So here are the Visitors:

package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.ValidationError;
import com.javaranch.visitor.GenerifiedVisitor;

import java.util.List;

public class CustomerNumberValidationVisitor
    implements GenerifiedVisitor<OrderHeader> {
    public void visit(OrderHeader header) {
        Long customerNumber = header.getCustomerId();
        if (customerNumber == null || customerNumber == 0L) {
            ValidationError error = new ValidationError();
            error.setErrorDescription("Invalid Customer Number");
            List<ValidationError> errors = header.getValidationErrors();
            errors.add(error);
        }
    }
}

package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.ValidationError;
import com.javaranch.visitor.impl.domain.OrderDetail;
import com.javaranch.visitor.GenerifiedVisitor;

import java.util.List;

public class OrderHasDetailValidationVisitor
    implements GenerifiedVisitor<OrderHeader> {
    public void visit(OrderHeader header) {
        List<OrderDetail> details = header.getOrderDetails();
        if (details == null || details.size() == 0) {
            List<ValidationError> errors = header.getValidationErrors();
            errors.add(new ValidationError("There are no Order Lines"));
        }
    }
}
package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.impl.domain.OrderDetail;
import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.ValidationError;

import java.util.List;

public class QuantityValidationVisitor
    implements GenerifiedVisitor<OrderDetail> {
    public void visit(OrderDetail detail) {
        int quantity = detail.getQuantity();
        if (quantity == 0) {
            OrderHeader header = detail.getOrderHeader();
            List<ValidationError> errors = header.getValidationErrors();
            errors.add(new ValidationError("Detail line "
                + detail.getDetailId() + " has no quantity"));
        }
    }
}
package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.impl.domain.OrderDetail;
import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.ValidationError;
import com.javaranch.visitor.impl.domain.Part;
import com.javaranch.visitor.GenerifiedVisitor;

import java.util.List;

public class DetailHasPartValidationVisitor
    implements GenerifiedVisitor<OrderDetail> {
    public void visit(OrderDetail detail) {
        Part part = detail.getPart();
        if (part == null) {
            OrderHeader header = detail.getOrderHeader();
            List<ValidationError> errors = header.getValidationErrors();
            errors.add(new ValidationError("Detail line "
                + detail.getDetailId() + " has no part"));
        }
    }
}
package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.GenerifiedVisitor;
import com.javaranch.visitor.impl.domain.Part;
import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.ValidationError;

import java.util.List;

public class PartNumberValidationVisitor
    implements GenerifiedVisitor<Part> {
    public void visit(Part part) {
        Long partNumber = part.getPartId();
        if (partNumber == null || partNumber == 0) {
            OrderHeader header = part.getDetail().getOrderHeader();
            List<ValidationError> errors = header.getValidationErrors();
            errors.add(new ValidationError("Invalid Part Number"));
        }
    }
}
package com.javaranch.visitor.impl.visitors;

import com.javaranch.visitor.impl.domain.Part;
import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.ValidationError;
import com.javaranch.visitor.GenerifiedVisitor;

import java.util.List;

public class PriceValidationVisitor
    implements GenerifiedVisitor<Part> {
    public void visit(Part part) {
        Double price = part.getPrice();
        if (price == null || price == 0) {
            OrderHeader header = part.getDetail().getOrderHeader();
            List<ValidationError> errors = header.getValidationErrors();
            errors.add(new ValidationError("Price for part: "
                + part.getPartId() + " is invalid"));
        }
    }
}

In all cases, I need to get the OrderHeader, if I don't have it already, when a validation fails, so that I can add a new ValidationError to its error list.

So, now, really finally, here is the actual service class that will use these Validations and validate an Order:

package com.javaranch.visitor;

import com.javaranch.visitor.impl.domain.OrderHeader;
import com.javaranch.visitor.impl.domain.OrderDetail;
import com.javaranch.visitor.impl.domain.Part;
import com.javaranch.visitor.impl.domain.ValidationError;
import com.javaranch.visitor.impl.visitors.CustomerNumberValidationVisitor;
import com.javaranch.visitor.impl.visitors.OrderHasDetailValidationVisitor;
import com.javaranch.visitor.impl.visitors.QuantityValidationVisitor;
import com.javaranch.visitor.impl.visitors.DetailHasPartValidationVisitor;
import com.javaranch.visitor.impl.visitors.PartNumberValidationVisitor;
import com.javaranch.visitor.impl.visitors.PriceValidationVisitor;

import java.util.List;

public class VisitorService {
    public void validate(OrderHeader header) {
        CustomerNumberValidationVisitor visitor1 =
            new CustomerNumberValidationVisitor();
        OrderHasDetailValidationVisitor visitor2 =
            new OrderHasDetailValidationVisitor();
        QuantityValidationVisitor visitor3       =
            new QuantityValidationVisitor();
        DetailHasPartValidationVisitor visitor4  =
            new DetailHasPartValidationVisitor();
        PartNumberValidationVisitor visitor5     =
            new PartNumberValidationVisitor();
        PriceValidationVisitor visitor6          =
            new PriceValidationVisitor();

        header.accept(visitor1);
        header.accept(visitor2);

        List<OrderDetail> details = header.getOrderDetails();
        for (OrderDetail detail : details) {
            detail.accept(visitor3);
            detail.accept(visitor4);
            Part part = detail.getPart();
            part.accept(visitor5);
            part.accept(visitor6);
        }

        for (ValidationError error : header.getValidationErrors()) {
            System.out.println("Error! " + error.getErrorDescription());
        }
    }
}


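To see the whole round trip in one self-contained file, here is a compressed sketch of the pattern: a visitable order-like object, one validating visitor, and the errors list afterwards. The names (SimpleVisitor, Order, CustomerNumberCheck) are illustrative stand-ins, not the article's exact classes:

```java
import java.util.ArrayList;
import java.util.List;

// The visitor contract, parameterized by the type it knows how to visit.
interface SimpleVisitor<T> {
    void visit(T t);
}

// A stripped-down OrderHeader analogue with an errors list.
class Order {
    Long customerId;                                   // null means "not set"
    final List<String> errors = new ArrayList<String>();

    // Only visitors of Orders are accepted; anything else fails to compile.
    void accept(SimpleVisitor<Order> v) {
        v.visit(this);
    }
}

// One validation rule, expressed as a visitor.
class CustomerNumberCheck implements SimpleVisitor<Order> {
    public void visit(Order order) {
        if (order.customerId == null || order.customerId == 0L) {
            order.errors.add("Invalid Customer Number");
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        Order order = new Order();                     // no customer number set
        order.accept(new CustomerNumberCheck());
        System.out.println(order.errors);              // [Invalid Customer Number]
    }
}
```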

Wasn't that fun? I think the capability of Generics to increase type safety is a good thing, although some people don't want such strict static typing. In some cases you will have to code a little more, and in many others you will find that you save a lot of coding. Some design patterns work really well with generics and others don't. It really is a give-and-take, trial-and-error kind of process, but one that is really fun, at least to me. My favorite design pattern is the Template Method pattern, and I have found it to work extremely well with generics. I have implemented a cache factory of lookups to store values needed for ComboBoxes on a client; using generics and the Template Method made it extremely easy to create the lookups, with an unexpected pleasant surprise when creating the ComboBoxes' data model. So look to the next issue of the JavaRanch newsletter for the second article in the "Generifying your Design Patterns" series. I hope you enjoyed this article as much as I did.

Discuss this article in The Big Moose Saloon!

Return to Top
The New Sun Certified Java Programmer Exam
by Marcus Green

The Sun Certified Java Programmer Exam for JDK 1.5 is one of the more significant updates to the exam syllabus. You can see the objectives of the new exam at

It makes sense that this is a big update, as JDK 1.5 introduced some significant changes to the Java language. There has been considerable discussion in the SCJP forums along the lines of "should I take the 1.4 or the 1.5 exam", and some people have taken the line that there is a shortage of exam-specific material for the 1.5 exam and that it is more difficult. The exam may be slightly harder than the 1.4 exam, but that should be an incentive: a difficult exam can gather more respect than an easier one. Either way, although it is not trivial, it isn't rocket science either, and good study material is starting to become available. One of the co-authors of the exam, Bert Bates, had this to say about the new exam:

One of the main goals of this new exam is to create a test that is "performance based" rather than "knowledge based" (Sun's terms). Generally what this means is that a "knowledge based" question tends towards memorization of details, and a "performance based" question tends towards more real world activities like actually writing code.

One of the ways in which the new exam is performance based is the introduction of "drag and drop" questions. You are given some Java code with gaps, and you drag items from a list of options to complete the code. Be warned, however, that if you move off a drag-and-drop question and later go back to it, your selections are not shown again, which can be disconcerting. The new exam drops the "fill in the blank space" type of question that worried many people on previous exams.

Although there is significant overlap with the JDK 1.4 exam, I would guess there is at least 30% new material to learn. Some of the new topics are:

  • regular expressions
  • the Locale class
  • the Text class
  • serializing streams
  • generic collections
  • generic method parameters
  • input/output classes
  • for-each looping
  • locales (dates and currencies)
  • enums
  • autoboxing
  • OO concepts (coupling and cohesion)
  • jar files
  • sorting and searching collections
  • covariant returns
  • JavaBean naming
  • varargs

In my view these are all valid new additions to the exam as they cover subjects that you are likely to need in real world Java programming. As there is so much material I can only touch on some topics in this article. I was delighted to see that the new exam does not cover the bit shifting operators that were covered in the JDK 1.4 exam. It is possible to spend years as a Java programmer without ever having to deliberately shift a bit.

Regular expressions

The exam now covers the Java regular expression operators that were introduced with JDK 1.4. Fortunately there are limitations on what you have to know as the exam topic specifies what operators will be tested and also says "The use of *, +, and ? will be limited to greedy quantifiers, and the parenthesis operator will only be used as a grouping mechanism, not for capturing content during matching". If you have ever done regular expressions in another language such as Perl you will have a head start with Java regular expressions. You can read about Java Regular expressions at
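As a small illustration of the greedy quantifiers the objectives mention, here is a sketch using the java.util.regex classes (the pattern and helper method are my own examples, not from the exam):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    // Returns the first match of the pattern "ab+c" in the input, or null.
    static String firstMatch(String input) {
        Matcher m = Pattern.compile("ab+c").matcher(input);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        // "+" is greedy: it grabs every "b" it can before matching "c".
        System.out.println(firstMatch("xabbbcx"));   // abbbc
    }
}
```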

Generics

Generic collections and generic method parameters are possibly the biggest new topic, as generics are among the biggest changes to the Java language. Although it is not an exam-specific resource, there is a very useful tutorial on this topic at and you can see how Sun covers the topic at Generic collections are the type of facility that, once you understand them, you will almost certainly start to use in your "real world" code. The syntax of generic collections can seem strange at first, but I found it easy to get used to.

Generics address two issues with the collection classes. One is that, as a programmer, you have to remember to write the rather pointless code that casts back from Object to the actual reference type. You have probably seen code where an instance of a Collection, such as a Vector called v containing Strings, is traversed by a loop retrieving each element thus:

String s = (String) v.get(0);

With generic collections you don't have to cast back to the original type, because the instance of the collection knows the type of its elements.
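Side by side, the two styles look like this (the class and method names are my own illustration):

```java
import java.util.List;

public class GenericListDemo {
    // Generic style: the list knows it holds Strings, so no cast is needed,
    // and passing a list of anything else would be a compile-time error.
    static String firstTyped(List<String> list) {
        return list.get(0);
    }

    // Raw style for contrast: the caller gets an Object back and must
    // write the "rather pointless" cast by hand.
    static String firstRaw(List list) {
        return (String) list.get(0);
    }

    public static void main(String[] args) {
        List<String> words = java.util.Arrays.asList("hello", "world");
        System.out.println(firstTyped(words));   // hello
    }
}
```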

Perhaps more importantly, using generic collections moves some errors from run time back to compile time. Personally, I would much prefer errors to show up at compile time than months after a product has gone live and is in the hands of real end users.

Input/output

This version of the exam brings the reintroduction of the input/output classes. I was surprised when they were dropped from the syllabus for the JDK 1.4 version of the exam, and you may find web-based resources aimed at the JDK 1.3 version of the exam useful in this area. An interesting new addition is serialization, a relatively small topic but one with very wide applicability.
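As a taste of the serialization topic, here is a minimal round trip through the standard java.io classes: write an object to a byte stream, then read it back. The class and method names are my own sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializeDemo {
    // Serializes the object to an in-memory byte array and deserializes it.
    static Object roundTrip(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(obj);
            out.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // String implements Serializable, so it survives the trip intact.
        System.out.println(roundTrip("serialize me"));   // serialize me
    }
}
```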

The for-each loop

The introduction of the for-each loop is a delightful piece of "syntactic sugar" for the language; anyone familiar with PHP or Perl will have missed this feature. Instead of obtaining an Iterator object and calling its next method, you can use a single-line construct to move through all the elements of a collection. This is explained by Sun at
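A small sketch of the construct (the class and method names are illustrative), which also happens to show autoboxing, another new exam topic:

```java
import java.util.List;

public class ForEachDemo {
    // Sums a list with the JDK 1.5 for-each loop: no Iterator, no index.
    static int sum(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {    // each element in turn, auto-unboxed to int
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(java.util.Arrays.asList(1, 2, 3)));   // 6
    }
}
```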

Enums

Enums are a compact and neat new language feature that allows the creation of named constants with their own name space. For example, you could create an enum for the days of the week: Mon, Tue ... Sun. Wherever a DayOfWeek enum is expected, the value would have to represent one of those days; unlike with integer constants, it would not be possible to assign some other arbitrary numerical value. Enums are appropriate where you have a known and fairly limited set of values. Read more about enums at
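The days-of-the-week example from the paragraph above can be sketched like this (the wrapper class and method are my own illustration):

```java
public class EnumDemo {
    // A known, limited set of values with its own name space.
    enum DayOfWeek { MON, TUE, WED, THU, FRI, SAT, SUN }

    // Only a DayOfWeek is accepted: passing an int here would not compile.
    static boolean isWeekend(DayOfWeek day) {
        return day == DayOfWeek.SAT || day == DayOfWeek.SUN;
    }

    public static void main(String[] args) {
        System.out.println(isWeekend(DayOfWeek.SAT));   // true
        System.out.println(isWeekend(DayOfWeek.WED));   // false
    }
}
```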

How to study

Get a good book that explicitly covers the exam. There are now several available, and you will find them mentioned in the SCJP forum at JavaRanch. Take a look at my Frequently Asked Questions list, which has been updated to cover the JDK 1.5 exam. Read the exam objectives carefully and make sure you cover them all. Write large numbers of small programs that cover each topic; by small programs I mean less than 50 lines of code. Test yourself against mock exam questions, but make sure they are aimed at the JDK 1.5 topics. Contribute to the JavaRanch forums. We like to say there are no stupid questions at JavaRanch, but there are some badly asked questions: a well asked question tells us what you currently understand, or more importantly what you think you understand. There are some very experienced and knowledgeable people who are keen to help (and show off their knowledge), and they are just a web page away.

Discuss this article in The Big Moose Saloon!

Return to Top
Movin' them dogies on the Cattle Drive
by Carol Murphy

Well cowpokes, 2005 has screeched to a halt and left us poised at the brim of 2006, ready to step off the edge and start this ride all over again. Time for the annual state of the drive address. Don't know about you, but it seems to me like we were just doing this very thing not too long ago!

Turning back to ponder the events of last year, I noticed something about the good ole Cattle Drive. We had a total of 22 drivers who submitted assignments during 2005, and 7 of those are on the active list, meaning they've submitted something within the last 30 days. This leaves 15 drivers on the inactive list. Of those 15, 4 have completed the drive in its current form, and 1 of those 4 has gone back for some remedial work in servlets.

Now, statistics ain't my strong suit, but indulge me a bit as I try to make some sense out of this stuff.

That leaves 11 inactive students still working the trail (the 15 minus the 4 who've finished), and 5 of them submitted Java-4b (Say) as their last assignment. That's 45% bogging down on this particular assignment. Now I knew Java-4b was a toughie, but this here's gotta be statistical proof! And we all know statistics don't lie!

Now, let's get to the meat of this address.

Four Mooseheads on their walls!

Congratulations to these 3 Cattle Drivers, who stayed in the saddle through all sorts of bad weather and rough trails, and succeeded in bagging their 4th Moose Head this past year. They are:
  • Terry Broman
  • Pauline McNamara
  • Kate Head
Let's give 'em all another round of applause!

New Drivers

These intrepid souls decided to join the drive this year. Although some of them are currently on the inactive list, we know they're out there somewhere, struggling through some box canyon or other, but hopefully they'll be popping in soon. The others are doing some very impressive riding, and making fine progress.

Here's the list:
  • Jean-Marc Bottin
  • Daniel Hyslop
  • Patrick van Zandbeek
  • Barb Rudnick
  • Tommy Leak
  • Tom Henner
  • Adam Price
  • Deb Wright
  • Kelly Loyd

Citation in Order?

I'm not sure if we should encourage driving fast, but I would be remiss if I did not mention the pace at which Kelly Loyd, one of our newest Cattle Drivers, is tearing up the trail. Them dogies ain't gonna have a speck of fat on 'em if he keeps up this pace! Y'all should check him out soon, because if you wait, he might be finished before you stop in.

Those Old Familiar Faces

Some of our older cowhands got back in the saddle and are kicking up some fresh dust. From as far back as 2001! Where they've been holed up and what they've been doing may remain a mystery, but the point is they're back, and ready to ride. Here's the names:

  • Stuart Goss
  • John Cooper
  • Adam Vinueza
  • Elouise Kivineva
  • Julianne Gross
  • Carol Murphy

Elouise wished to make it known that reports of her demise were greatly exaggerated, and that the order for a tombstone should be cancelled. She ain't ready fer that yet, not by a long shot!

Hard to Pigeon Hole

These four folks left me scratching my head. Three of 'em are stuck out in 4b country. Maybe they're holed up together, or just wandering around in search of water, I dunno. The other is in OOP, so he's bagged himself at least one moose, but nobody's heard from him lately. Keep an eye out for these folks, and give 'em some help if ya' see 'em!

Be on the lookout for:

  • Soumya Savindranath
  • Marcus Laubli
  • Andy Boyd
  • David McCartney

Let's Not Forget Our Nitpickers!

Let us take this time to thank Marilyn and Pauline for their efforts, without which none of this would work. Pauline has signed on as a moderator for the Cattle Drive thread, for which I am very grateful, and she now has the double duty of nit-picking code with Marilyn, and moderating the Drive. Where she finds time to sleep, I'm sure I don't know. Anyway, they both do a stellar job, and I hope they feel appreciated!

Well, that about does it for this year's State of the Drive Address. Can't help but notice that nobody took my dare to come back and do the current version of Servlets. Perhaps a fear of Ants? Here's hoping 2006 is a great year for everyone, and hope to see you out on the trail! Let's go round up them strays! Yeeeee-Hawwwwww!

Mosey on over to The Cattle Drive Forum in The Big Moose Saloon!

Return to Top
Head First Objects Cover Contest
by Pauline McNamara

Mosey on over to the Bunkhouse Porch folks, we got a contest going on! You could win an entire Head First library (that's right, all of 'em), a free copy of Head First Objects, and other fun stuff. Dave Wood, the author of Head First Objects, wants you to convince him which cover model to use for the book. Join the fun, get creative, and you just might get lucky...

Return to Top
Book Review of the Month -- Ajax in Action

Ajax in Action
David Crane, Eric Pascarello, and Darren James

"Ajax in Action" is not only an excellent book on Ajax, but the best JavaScript book I have ever read. The authors note early on that Ajax is a process, not a technology. This theme permeates the book. There is an emphasis on requirements, design, implementation, testing and maintenance. So the book shows how to do a real project, not just how to code.

Keeping with the real project theme, there is information throughout on refactoring and design patterns. The authors present low level coding idioms as well. All this creates a language for coding Ajax applications. The second half of the book walks you through the entire development process for five sample applications.

The book targets a wide audience, from enterprise developers to self-taught scripters. Basic concepts are explained concisely for newcomers, and experienced developers may skim certain sections. However, these sections are a very small part of the 600+ page book.

An appendix covers an introduction to JavaScript. While you would want to supplement it with materials from the web, it clearly covers the advanced topics that are hard to find elsewhere. There are also introductions and tips on CSS and DOM. In short, I learned a ton about non-Ajax development and page manipulations too.

And the book even has a screenshot of JavaRanch! I was expecting a good book when I saw Bear and Ernest's comments on the back. But it still managed to exceed my expectations!

(Jeanne Boyarsky - Bartender, December 2005)

Discuss this book review in The Big Moose Saloon!

Return to Top
Managing Editor: Ernest Friedman-Hill