How to boost the performance of your J2EE apps

By Arnout J. Kuiper


Performance is a major issue for most J2EE applications. In this article I will outline some techniques that can have a significant impact on the performance of J2EE applications. The basis for this article is a J2EE application that was built for a large travel agency in the Netherlands. In essence, this application implements an online browsable catalog of holidays that are in stock and can be booked by customers.

Separate static and dynamic content

One of the first performance bottlenecks that was addressed was the serving of content. In a typical web application, 90-95% of the resources (like images and HTML pages) that are requested by a client’s web browser are static, while the remaining 5-10% of the requests are for dynamic content. Most application servers are not that efficient at serving static content, in comparison with specialized web servers like Apache. Therefore it is a good idea to separate static content from dynamic content, by placing the static content on a web server, while keeping the dynamic content on the application server. It is also possible to install the web server and application server on different machines, in order to improve scalability. If the load on the website increases, an extra web server can be added easily.

By introducing the web server for static content, the total number of HTTP requests that the website is capable of handling per hour increases considerably. This is because the web server handles static requests much faster, and in much larger volumes, than the application server. And because the application server no longer serves this static content, it can use the freed-up time to handle more dynamic requests.

Unify the static site with the dynamic site

With the separation of content, the application appears as two websites. In most situations this is undesirable. In order to make the separation transparent to the end-user, a proxy module was configured in the web server. By means of this module, requests matching certain patterns (in this case, patterns that identify requests for dynamic content) are proxied to the application server, while all other requests (for static content) are handled by the web server itself.
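With the Apache web server, such a pattern-based split can be configured with the mod_proxy module. The hostnames, ports, and paths below are illustrative, not those of the travel agency site:

```apache
# Requires mod_proxy and mod_proxy_http to be loaded.
# Requests under /app/ are forwarded to the application server;
# everything else is served from the local document root.
ProxyPass        /app/ http://appserver.internal:8080/app/
ProxyPassReverse /app/ http://appserver.internal:8080/app/
```

ProxyPassReverse rewrites redirect headers coming back from the application server, so clients never see the internal hostname.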

As a result, the client sees only one site and one server. This also provides a security advantage, as clients never interact with the application server directly. Another advantage is that the web server acts as a facade, making it easier to implement changes in the infrastructure of the site.

The unification also simplifies the use of SSL. SSL encryption and decryption can take place entirely on the web server, requiring only one SSL certificate. In the case of the travel agency website, the web server is also much faster at SSL encryption and decryption than the application server.

Cache the dynamic content

In general, serving static content poses no problems. Most web servers (e.g. Apache) do a pretty good job there. Most performance problems are associated with dynamic pages. Why not cache these pages instead of generating them over and over again for each request?

In most situations, dynamic pages are dynamic for a reason. For instance, the content might change with time, or pages may be personalized or localized.

Let’s take a closer look at the case where the content changes with time. “The content changes with time” is a very generic statement. You should ask yourself which content changes with what frequency. For instance, the homepage contains some random content that changes very frequently (each request yields a different homepage), while the catalog content is updated every half hour.

It is all about user perception. Generating a homepage with different content for each request gives users the impression that the site is very dynamic. But would a single user notice if the homepage refreshed only every 10 seconds instead of with each request? No, because the average time between page views is about 10 seconds (depending on which study you use as a reference). So if we cache this page for 10 seconds, the user’s perception of the site does not change, while the cached page can be served to other users within that 10-second timeframe.

If you also take into account how the average user navigates through the site, then in most cases the cache time can be set to a higher value. With regard to the travel agency site, the typical user starts with the homepage, then browses through the site, and then comes back to the homepage. On this site the average time between these two homepage visits is about 10 minutes. So the cache time can be extended without any problem. We used 2 minutes to be on the safe side.
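The caching idea above can be sketched as a small time-based cache: a generated page is reused until its time-to-live expires. This is an illustrative sketch only; on the travel agency site this role was played by a caching proxy in front of the application server, not by application code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal time-based page cache: within the TTL window, all users
// receive the cached copy instead of triggering a new generation.
public class PageCache {
    private static class Entry {
        final String page;
        final long expiresAt;
        Entry(String page, long expiresAt) {
            this.page = page;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public PageCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return the cached page, or regenerate it when missing/expired.
    public String get(String key, Supplier<String> generator) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(key);
        if (e == null || now >= e.expiresAt) {
            e = new Entry(generator.get(), now + ttlMillis);
            cache.put(key, e);
        }
        return e.page;
    }
}
```

With a 2-minute TTL, repeated requests for the homepage within that window all hit the cached entry and never reach the generation logic.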

Let’s do some arithmetic for this homepage example. Suppose a site has 3600 visitors per hour, and each user accesses the homepage twice in his/her session. So in one hour the site has to handle 7200 homepage requests. Without caching, the homepage has to be generated 7200 times. When the cache expiration time is set to 2 minutes, at most 30 homepages have to be generated. Only 0.42% of the page requests end up at the application server, while the cache serves the rest. The longer the expiration time, the lower this percentage will be.
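The same arithmetic, spelled out as code:

```java
import java.util.Locale;

// Worked example from the text: 3600 visitors/hour, two homepage
// views per visit, cache expiration of 2 minutes.
public class CacheMath {
    public static void main(String[] args) {
        int requestsPerHour = 3600 * 2;             // 7200 homepage requests
        int ttlSeconds = 120;                       // 2-minute expiration
        int generationsPerHour = 3600 / ttlSeconds; // at most 30 generations
        double percent = 100.0 * generationsPerHour / requestsPerHour;
        System.out.printf(Locale.ROOT,
                "%d generations, %.2f%% reach the app server%n",
                generationsPerHour, percent);
        // prints: 30 generations, 0.42% reach the app server
    }
}
```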

So in short: don’t just assume that because the content is constantly updated, it cannot be cached. Whether something is cacheable depends not only on its update frequency, but also on the navigational structure of the site and on user perception.

For the travel agency site, we used Squid-Cache as the cache, which we installed between the web server and the application server. The web server is placed before the cache on purpose: because the SSL encryption/decryption is done on the web server, the cache can still cache dynamic pages that are transported to the client over SSL. In the case of the travel agency application, the cache handled about 99% of the dynamic requests at prime time.

Solve session troubles

Unfortunately, caching cannot be applied in all situations. One of these situations is where sessions are involved. Sessions are used for a lot of things, like personalization, user tracking and shopping carts. The problem with sessions is that they are unique to each user, and therefore make each request unique.

There is no use in caching personalized pages, because these pages are unique to each user. A shopping cart also needs some sort of session, which makes it uncacheable.

User tracking is a different matter. On a lot of sites user tracking (logging how users navigate through the site) is done by means of sessions in combination with dynamic pages. These pages cannot be cached, because of the sessions.

Fortunately, there are other ways to implement user tracking. One way is an application component that serves a small transparent GIF image and subsequently logs all information from the requests for that image, including cookie information that can be used to identify users uniquely. There are companies that offer this service on a commercial basis, and there are some free implementations on the Internet that can be incorporated in the application itself.

If this solution is used, sessions will no longer be required for user tracking, and so pages that need user tracking can be cached.
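The tracking-pixel idea can be sketched as follows. The class and log format below are illustrative, not a specific free or commercial implementation; the GIF constant is the well-known 42-byte transparent 1x1 image:

```java
import java.util.Base64;

// Sketch of the tracking-pixel approach: serve a 1x1 transparent
// GIF and log the request details that arrive with it.
public class TrackingPixel {
    // A minimal transparent 1x1 GIF89a image, base64-encoded.
    public static final byte[] TRANSPARENT_GIF = Base64.getDecoder().decode(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");

    // Build one log record per pixel request; the cookie value
    // identifies the user across requests and pages.
    public static String logLine(String page, String cookie, String userAgent) {
        return String.format("%s\t%s\t%s", cookie, page, userAgent);
    }
}
```

The page embeds the pixel with a plain `<img>` tag whose URL carries the page identifier; the component returns `TRANSPARENT_GIF` and appends one `logLine` per request. Since the session is no longer involved, the surrounding page stays cacheable.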


Keep the catalog in memory

In addition to optimizing page serving, we also paid attention to the logic of the J2EE application itself. The core of the travel agency site is a catalog that contains all the holidays on offer. The bottleneck for most of these applications is the database, so we decided to take a less traditional approach. After some analysis, we found that it was possible to keep the entire catalog in memory. Because lots of complex queries are performed on the catalog, it is important to make the information access as easy as possible.

For this purpose, a data structure called “The Soup” was created. The Soup contains lightweight, directly usable domain objects. All these objects have bi-directional relations (implemented as references) with related objects within the Soup. These relations are the major strength of the data structure. The Soup is used to get a starting point in the data structure; this is the only place where a search is required to obtain an object (although the search is nothing more than a lookup in a HashMap). From that point on, collecting the needed information is just a matter of following references.

As opposed to a classic database approach, we didn’t have to wait for the database to execute our queries, and we no longer had to process the query results. The information was readily available, prepackaged as domain objects, ready to be used. Because of the relations within the data structure, the Soup also outperforms classic caching, especially where complex queries are concerned. All this gave a significant performance boost to catalog-related queries.
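The Soup idea can be sketched in a few lines of Java. The class and field names below are illustrative, not the actual site code; the point is the HashMap entry point plus bi-directional references:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Lightweight domain objects linked by bi-directional references.
class Destination {
    final String name;
    final List<Holiday> holidays = new ArrayList<>();
    Destination(String name) { this.name = name; }
}

class Holiday {
    final String code;
    final Destination destination;
    Holiday(String code, Destination destination) {
        this.code = code;
        this.destination = destination;
        destination.holidays.add(this); // keep the back-reference in sync
    }
}

public class Soup {
    private final Map<String, Destination> destinationsByName = new HashMap<>();

    public void add(Destination d) { destinationsByName.put(d.name, d); }

    // The only "search": one HashMap lookup to get a starting point.
    // After that, queries are just reference traversals.
    public Destination destination(String name) {
        return destinationsByName.get(name);
    }
}
```

A query like “all holidays for Spain” is then `soup.destination("Spain").holidays`: one lookup, then pure pointer chasing, with no SQL round-trip and no result-set mapping.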

Pre-process offline

When we compared the data requirements of the site with the data feed from the backend systems, we discovered that a lot of data processing could be done beforehand: for example, structuring the data so that it is easier to use on the site, filtering the data, performing some complex calculations, and so on. By doing this processing offline (outside the application server), the application server is relieved of it, and therefore has more CPU cycles left for its main task: serving the end-user.

In the case of the travel agency site, the data feed was very complex. Three backend databases that were never designed to be combined had to be combined. The data in these databases was not that clean, and the data in the different databases was contradictory (in one database a destination was in Spain, while according to another it was in Turkey). The database structures were also completely different, which made joining the databases very hard. All in all, it was a rather complex job to produce consistent, clean data for the site from these backend systems. If the application servers had done all this work online, they would not have had much time left to do real work for the end-user. By doing this pre-processing work offline, the performance of the application servers increased by more than a factor of 3.
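One offline pre-processing step of this kind, resolving contradictory records by giving each source a fixed precedence, can be sketched as follows. The record shape and method names are illustrative, not the actual feed code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Merge contradictory backend records: sources are listed in order
// of trust, and the first source to supply a value for a key wins.
public class FeedMerger {
    public static Map<String, String> mergeCountries(
            List<Map<String, String>> sourcesByPrecedence) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> source : sourcesByPrecedence) {
            for (Map.Entry<String, String> e : source.entrySet()) {
                merged.putIfAbsent(e.getKey(), e.getValue()); // first wins
            }
        }
        return merged;
    }
}
```

Because this runs in an offline batch job, conflicts like the Spain/Turkey example are settled once per feed, and the application servers only ever see the cleaned result.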


Compared with the traditional way J2EE applications are architected, the relatively simple measures described in this article gave the travel agency application a real boost. These measures are generic enough to be easily applied to other J2EE applications. Most of them can be implemented with no additional license costs by using open source software: we used the Apache web server with mod_ssl, and Squid-Cache for the caching.

About the author

Arnout J. Kuiper is a Java architect within the Sun Java Center in the Netherlands, with a broad interest in Java and web-related technologies.