Upgrading a Seam 2 app to JBoss 7

I recently went through the process of upgrading a Seam 2.X application to JBoss 7.1.1. While Marek Novotny’s tutorial will lead you down the right path, there was one issue that led me down a rabbit hole.

Initially, the intention was to use Hibernate 3 as a JBoss module allowing it to be shared among applications. This deviates from the tutorial, but simplifies the library requirements if you are deploying several Seam 2 applications to the same server. Unfortunately, this issue cropped up:

14:54:13,042 WARN  [org.jboss.modules] 
  (MSC service thread 1-4) Failed to define class 
  org.jboss.as.jpa.hibernate3.infinispan.InfinispanRegionFactory 
  in Module "deployment.jboss-seam-booking.ear:main" from Service Module 
  Loader: java.lang.LinkageError: Failed to link org/jboss/as/jpa/hibernate3/
  infinispan/InfinispanRegionFactory (Module "deployment.jboss-seam-
  booking.ear:main" from Service Module Loader)
...
Caused by: java.lang.NoClassDefFoundError: 
  org/hibernate/cache/infinispan/InfinispanRegionFactory
...
Caused by: java.lang.ClassNotFoundException: 
  org.hibernate.cache.infinispan.InfinispanRegionFactory from 
  [Module "deployment.jboss-seam-booking.ear:main" from Service Module Loader]

Apparently this problem will be resolved in JBoss 7.1.2, but in order to get something working now an alternative approach was necessary.

The next step was to attempt to bundle the Hibernate 3 jars within the application as discussed in the tutorial. This worked fine except that JBoss still attempted to manage Hibernate 3 as a JPA provider. This produced the same exception shown above.

After researching the issue, I discovered the following setting, which is specified per persistence unit in persistence.xml:

<properties>
   <property name="jboss.as.jpa.managed" value="false"/>
</properties>

Success! This setting stops the JBoss container from managing Hibernate 3 as a JPA provider. Once this setting was changed, the application deployed successfully.


Java and Rails integration with JAX-RS and ActiveResource

ActiveResource makes it easy to integrate Rails applications through RESTful services, but what if the resource is being produced by a Java application? JAX-RS is the best bet for writing these resources, and with a few tricks we can get a service to conform to what ActiveResource expects. Read on to see that integrating your Rails and Java applications is easier than you think.

What ActiveResource expects

ActiveResource follows several conventions that make it easy to produce and consume RESTful services between Rails applications, and the easiest way to communicate with it is to follow the same conventions in your own service:

  • The type of the attribute must be specified as an XML attribute (otherwise the type is String)
  • Class names and attributes follow a dash convention for multiple words (e.g. my_attribute would be my-attribute in XML)
  • Resource paths follow the underscore convention for the class name and the class name should be pluralized (e.g. if we’re creating a resource for MyResource the base path should be /path_to_services/my_resources)

JAXB, which JAX-RS uses for XML binding, provides the ability to customize XML marshalling by extending the XmlAdapter class. This class simply requires that you provide the logic to marshal and unmarshal your object in the required fashion. ActiveResource uses element attributes to specify type information.

For example, suppose we have a MyApp class with an integer attribute representing the number of Rails clients. The XML expected by ActiveResource would be:

<my-app>
  <id type="integer">5</id>
  <name>My Cool App</name>
  <rails-clients type="integer">1</rails-clients>
</my-app>

There are a couple of things to note here. ActiveResource follows a dash convention for class names and attributes that contain multiple words. The ActiveResource class would be as follows:

class MyApp < ActiveResource::Base
  self.site = 'http://solutionsfit.com/services/'

  schema do
    attribute 'id', :integer
    attribute 'name', :string
    attribute 'rails_clients', :integer
  end
end

Notice also the use of the type="integer" attribute for the XML elements. This ensures that the id and rails_clients attributes are treated as integers rather than strings. In addition, by convention, the path to this resource based on the self.site definition above would be: http://solutionsfit.com/services/my_apps.

Defining a JAX-RS XmlAdapter

To achieve this, we can create an XmlAdapter class in our Java application:

public class ActiveResourceIntegerAdapter 
    extends XmlAdapter<ActiveResourceInteger, Integer> {
  @Override
  public ActiveResourceInteger marshal(Integer i) 
      throws Exception {
    return new ActiveResourceInteger(i);
  }

  @Override
  public Integer unmarshal(ActiveResourceInteger i) 
      throws Exception {
    return i.getValue();
  }
}

The next step is to define the ActiveResourceInteger class which specifies how the marshalling should occur:

public class ActiveResourceInteger {
  private String type = "integer";
  private Integer value;
	
  public ActiveResourceInteger() {}
	
  public ActiveResourceInteger(Integer value) {
    this.value = value;
  }
	
  @XmlAttribute
  public String getType() {
    return type;
  }
  public void setType(String type) {
    this.type = type;
  }
  @XmlValue
  public Integer getValue() {
    return value;
  }
  public void setValue(Integer value) {
    this.value = value;
  }
}

Creating the JAX-RS Resource

Creating the JAX-RS resource is simple once we have defined our adapter. First create the resource in the normal fashion:

@Path("/services/my_apps")
@Produces("application/xml")
public class MyAppResource {
  @GET
  @Path("/{myAppId}")
  public MyApp getMyApp(@PathParam("myAppId") Integer myAppId) {
    // retrieval logic for getting and returning MyApp instance
  }
}

Notice that we follow another ActiveResource convention here by pluralizing my_apps in the @Path definition. Now let's look at the MyApp definition:

@XmlRootElement(name="my-app")
public class MyApp {
  private Integer id;
  private String name;
  private Integer railsClients;

  @XmlElement(name="id")
  @XmlJavaTypeAdapter(ActiveResourceIntegerAdapter.class)
  public Integer getId() {
    return this.id;
  }
  public void setId(Integer id) {
    this.id = id;
  }

  @XmlElement(name="rails-clients")
  @XmlJavaTypeAdapter(ActiveResourceIntegerAdapter.class)
  public Integer getRailsClients() {
    return this.railsClients;
  }
  public void setRailsClients(Integer railsClients) {
    this.railsClients = railsClients;
  }

  // ... ...
}

Here we simply add the @XmlJavaTypeAdapter annotation, specifying our adapter class, to the integer attributes. We also use the dash convention when specifying the root element name and the attribute names.

Now when your class is marshalled to XML it will follow the conventions required by ActiveResource ensuring that your attributes are named and typed as expected. These adapters can be created for each type to avoid the default String type for your ActiveResource class attributes.
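
For example, a boolean attribute could be handled with the same pattern. The following sketch is my own addition (each class in its own source file), mirroring the integer adapter above rather than reproducing code from the original example:

public class ActiveResourceBoolean {
  private String type = "boolean";
  private Boolean value;

  public ActiveResourceBoolean() {}

  public ActiveResourceBoolean(Boolean value) {
    this.value = value;
  }

  @XmlAttribute
  public String getType() { return type; }
  public void setType(String type) { this.type = type; }

  @XmlValue
  public Boolean getValue() { return value; }
  public void setValue(Boolean value) { this.value = value; }
}

public class ActiveResourceBooleanAdapter
    extends XmlAdapter<ActiveResourceBoolean, Boolean> {

  @Override
  public ActiveResourceBoolean marshal(Boolean b) throws Exception {
    // wraps the value so the type="boolean" attribute is emitted
    return new ActiveResourceBoolean(b);
  }

  @Override
  public Boolean unmarshal(ActiveResourceBoolean b) throws Exception {
    return b.getValue();
  }
}

Annotating a Boolean getter with @XmlJavaTypeAdapter(ActiveResourceBooleanAdapter.class) would then produce elements such as <active type="boolean">true</active>.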


JBoss Seam: Agile RIA Development Framework

JBoss just published a white paper that describes how Seam enables rapid development of RIAs by eliminating technology hurdles and placing developer focus back on solving business problems. Specific enterprise use cases demonstrate how increasingly complex features can rapidly be introduced into a software product with Seam in an iterative fashion.

The white paper is written in a way that avoids a technical deep-dive so that, regardless of technical expertise, the value of Seam can be well understood. This allows not only developers but also management to understand why JBoss Seam is the right framework for an organization’s next agile RIA development project. I authored the paper in collaboration with my colleague Nirav Assar.

Enterprise Mashups with RESTful Web Services and jQuery (Part 2)

RESTful web services and jQuery make it easy to create an enterprise mashup. This 2 part article discusses how to create a simple enterprise mashup using jQuery. Part 1 introduced the basic requirements of your enterprise services, the simplicity of JSON, and how to consume your RESTful web services using jQuery. Part 2 covers consuming services from across the enterprise with JSONP, accessing secure resources, and handling error conditions.

Gathering data from across the enterprise

A mashup generally retrieves data from services throughout the enterprise. You may recall from Part 1 that our RESTful web service was located at: http://solutionsfit.com/services/rest/consultants. So where is our mashup located? If it is located in a different domain than solutionsfit.com, say mycorpdomain.net, the browser will enforce the same-origin policy and disallow access.

This restriction can be bypassed through use of JSONP (JSON with padding). For more information on what JSONP is and how to support JSONP within your web services, see my previous posting: Serving up JSONP from your JAX-RS Web Services.

By adding the callback=? parameter to our getJSON invocation in Part 1 we inform jQuery to use JSONP. The parameter is simply added to the URL we are requesting:

jQuery.getJSON(
  'http://solutionsfit.com/services/rest/consultants?callback=?',
  function(data) {
    // ... ...
  }
);

When jQuery recognizes that a JSONP request is being performed, it takes the function we defined, assigns a unique name, and adds it as a global function. It then replaces the question mark in the callback=? parameter with the name it assigned. This allows the service to wrap the JSON result with the callback function to invoke.

If the service to return our consultants supported JSONP, and jQuery requests: http://solutionsfit.com/services/rest/consultants?callback=jquery12345, the following result would be returned.

jquery12345([{
 "consultant" : {
   "firstName": "Jacob",
   "lastName": "Orshalick",
   "blogFeed": "http://solutionsfit.com/blog/feed"
 }
},
{
 "consultant": {
   "firstName": "Nirav",
   "lastName": "Assar",
   "blogFeed": "http://assarconsulting.blogspot.com/feeds/posts/default"
 }
}]);

As you can see, the JSON result is wrapped with the callback function jquery12345. Because jQuery adds this function as a global function, it will be called when the service result is evaluated. As a final step, jQuery removes that function once the callback completes.

Accessing secure enterprise web services

So far we have assumed that our services provide wide open access, but most internal enterprise services require authentication. In addition, it is often necessary to restrict what data is returned to a user based on roles and permissions. While this may seem complex, intranet access can provide a unique advantage when using jQuery to create an enterprise mashup.

Intranet environments often provide single-sign on mechanisms that are not available through external service invocations. Given the HTTP-centric approach of REST, the most natural fit for RESTful web service authentication is HTTP authentication. While the specification only provides for BASIC and DIGEST authentication, almost all current browsers support the much more secure HTTP Negotiate mechanism.

The HTTP Negotiate mechanism is the most common use of SPNEGO which allows a client and server to negotiate an authentication mechanism. Many enterprise intranet domains, especially those using Active Directory for authentication, utilize the HTTP Negotiate mechanism (e.g. NTLM or Kerberos) to achieve single-sign on behavior. By securing RESTful web services through the HTTP Negotiate mechanism, and using jQuery to invoke the service, we can rely on the browser’s built in HTTP Negotiate capabilities to authenticate the user.

Let’s say that our previous RESTful web service now requires authentication through the HTTP Negotiate mechanism. When the URL http://solutionsfit.com/services/rest/consultants is requested, an HTTP 401 Unauthorized response is sent with the following header entry.

WWW-Authenticate: Negotiate

If the browser is accessing a trusted domain it will attempt to authenticate silently through the Negotiate mechanism (either NTLM or Kerberos). As you would expect, this is the same behavior we would see from accessing a general web application protected by the HTTP Negotiate mechanism.

When the jQuery getJSON request is performed, the AJAX invocation receives the same response. As with the previous case, the browser will silently negotiate user authentication and receive the expected JSON result. No additional code is necessary as long as we are accessing a trusted domain and a service supporting the HTTP Negotiate mechanism. The service will silently authenticate the user and provide the appropriate JSON result according to the user’s privileges.

As a JBoss user, I recommend JBoss Negotiation for securing RESTful services through the HTTP Negotiate mechanism. JBoss Negotiation provides a Tomcat authenticator and JAAS login module to add SPNEGO support to JBoss.

Handling failure conditions

So we’ve discussed what happens if things go right, but what if things go wrong? What if the service is down or we can’t authenticate? We need to be able to inform the user that a failure occurred. The current implementation of the jQuery getJSON function does not handle error conditions when using a JSONP request. The call fails silently and any defined error function is ignored.

A simple approach to handle error conditions in a generic way is a timeout. The following implementation demonstrates how a timeout could be applied to our getJSON call in Part 1.

var requestCompleted = false;

window.setTimeout(function() {
  if(!requestCompleted) {
    jQuery("#consultants")
      .append('<tr><td style="color: red">' +
        'An error occurred while processing this request' +
        '</td></tr>');
  }
}, 5000);

jQuery.getJSON(
  'http://solutionsfit.com/services/rest/consultants?callback=?',
  function(data) {
    requestCompleted = true;

    jQuery.each(data, function(i,item) {
      var consultant = item.consultant;
      var consultantHtml = '<tr>' +
        '<td>' + consultant.firstName + ' '
          + consultant.lastName + '</td>' +
        '<td><a href="' + consultant.blogFeed + '">' +
          'Blog Feed</a></td>' +
        '</tr>';

      jQuery("#consultants").append(consultantHtml);
    });
  }
);

The timeout above is set to 5 seconds, but should be set according to an expected response time for your service. The window.setTimeout function will invoke our defined error handling function after a 5 second period. Unless the request completes within that period and invokes our callback function, the following message will be displayed to the user.

An error occurred while processing this request

While this may be a reasonable approach, there are certainly drawbacks. First, we don’t know what error occurred, simply that the request timed out. If the request fails due to an authentication issue for example, we would likely want to inform the user so they could get the issue resolved. Second, if we set our timeout period too low, we could error out on requests that actually complete. Fortunately, if these issues are of concern, there is an alternative.

The jQuery-JSONP project on Google Code provides support for error handling. While the usage is not as elegant as the standard getJSON function, it does provide the necessary features to handle these concerns. This StackOverflow entry provides an example of usage.

Conclusion

As you have seen in this 2 part series, jQuery simplifies enterprise mashup development with RESTful web services. jQuery provides a clean approach to retrieving and rendering service data, bypasses the same-origin policy browser restriction through JSONP support, and allows you to take advantage of HTTP authentication. Hopefully better error handling will be incorporated into JSONP support in future jQuery revisions, but as you have seen, there are ways to get around this issue.

Enterprise Mashups with RESTful Web Services and jQuery (Part 1)

RESTful web services and jQuery make it easy to create an enterprise mashup. This 2 part article discusses how to create a simple enterprise mashup using jQuery. Part 1 introduces what is required of your enterprise services, the simplicity of JSON, and how to consume your RESTful web services using jQuery. Part 2 will cover consuming services from across the enterprise with JSONP, accessing secure resources, and handling error conditions.

Enterprise Web Services: Got REST?

First things first, we need RESTful web services to provide the data we intend to consume. Are there already web services exposed in your enterprise that provide a RESTful API? Do these services support JSON? If not, there are a wide array of technologies that make it simple to expose RESTful web services from your existing applications.

If you have RESTful web services but they only support an XML result, it is quite simple to add JSON support with JAX-RS. JAX-RS uses the HTTP Accept header to determine what media type should be sent back as a result. The following HTTP header entry would indicate that the client is requesting a JSON result:

Accept: application/json
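
As an illustration (the Consultant class and its constructor here are hypothetical, and imports are omitted as in the other snippets), a JAX-RS resource can simply declare both media types and let the runtime choose based on the client's Accept header:

@Path("/consultants")
@Produces({"application/xml", "application/json"})
public class ConsultantResource {

  @GET
  public List<Consultant> getConsultants() {
    // in a real service this would come from a database or another component;
    // the JAX-RS runtime marshals the result as XML or JSON depending on
    // the client's Accept header
    return Arrays.asList(new Consultant("Jacob", "Orshalick"),
                         new Consultant("Nirav", "Assar"));
  }
}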

RESTEasy provides a portable JAX-RS implementation that makes it simple to expose services supporting a variety of media types. If you happen to be using Seam, exposing RESTful services through RESTEasy is a no-brainer. See the Seam documentation for more details.

Why use JSON when you have XML?

It is now common to expose REST services that return a result in JSON format, a lightweight data-interchange format. The major advantage of JSON is that it is JavaScript’s native format. This means that if a RESTful service is invoked through a JavaScript AJAX call and returns a JSON result, the data returned by the service can be used without additional parsing.

For example, we could have the following URL tied to a consultants list resource.

http://solutionsfit.com/services/rest/consultants

When a GET request is received for this URL the following JSON response is generated.

[{
 "consultant" : {
   "firstName": "Jacob",
   "lastName": "Orshalick",
   "blogFeed": "http://solutionsfit.com/blog/feed"
 }
},
{
 "consultant" : {
   "firstName": "Nirav",
   "lastName": "Assar",
   "blogFeed": "http://assarconsulting.blogspot.com/feeds/posts/default"
 }
}]

If this JSON result was stored in a JavaScript variable named solutionsfitConsultants, I could alert the user with Jacob Orshalick’s full name using the following JavaScript snippet.

var consultant = solutionsfitConsultants[0].consultant;

alert(consultant.firstName + ' ' + consultant.lastName);

Obviously this makes it very easy to render results to the user by removing the additional step of parsing that is required with XML.

Request and display JSON data with jQuery

jQuery makes it simple to consume a RESTful service providing a JSON result through a simple AJAX call. Let’s look at an example. The following HTML provides the shell for the service results I want to display in my mashup.

<table width="100%">
  <thead>
    <tr>
      <th>
        Consultant
      </th>
      <th>
        Blog Feed
      </th>
    </tr>
  </thead>
  <tbody id="consultants">
  </tbody>
</table>

The jQuery getJSON function makes it simple to consume the service that provides us the list of consultants. The getJSON function sends an AJAX GET request to the resource URL with an HTTP Accept header of application/json.

jQuery.getJSON(
  'http://solutionsfit.com/services/rest/consultants?callback=?',
  function(data) {
    jQuery.each(data, function(i,item) {
      var consultant = item.consultant;
      var consultantHtml = '<tr>' +
        '<td>' + consultant.firstName + ' ' 
          + consultant.lastName + '</td>' +
        '<td><a href="' + consultant.blogFeed + '">' +
          'Blog Feed</a></td>' +
        '</tr>';

      jQuery("#consultants").append(consultantHtml);
    });
  }
);

Notice that we also use the jQuery each function to loop through the JSON results. Each result in the returned JSON array is set into the variable item.

We then generate the HTML to add to the table for each consultant and set it into the consultantHtml variable. Next, we select the <tbody> element that will contain the consultant results using its element ID and use the jQuery append function to append the consultantHtml.

When the page is fully rendered, the resulting HTML will be:

<table width="100%">
  <thead>
    <tr>
      <th>
        Consultant
      </th>
      <th>
        Blog Feed
      </th>
    </tr>
  </thead>
  <tbody id="consultants">
    <tr>
      <td>
        Jacob Orshalick
      </td>
      <td>
        <a href="http://solutionsfit.com/blog/feed">
          Blog Feed
        </a>
      </td>
    </tr>
    <tr>
      <td>
        Nirav Assar
      </td>
      <td>
        <a href="http://assarconsulting.blogspot.com/feeds/posts/default">
          Blog Feed
        </a>
      </td>
    </tr>
  </tbody>
</table>

Note that only JavaScript libraries and HTML have been used to consume the service. This provides the flexibility to render this content in a web application, a portal, or even a static HTML page!

That’s it for round one of enterprise mashups. Stay tuned for part 2 which will discuss how to gather data from across the enterprise with JSONP and how accessing secured services is made easy.

Serving up JSONP from your JAX-RS Web Services

If you are developing RESTful services that will be consumed by AJAX clients on different servers, you will likely need to support JSONP. JSONP allows your RESTful web services to support cross-domain communication by enabling your clients to bypass the same-origin policy browser restriction. While some JAX-RS implementations support JSONP, this article demonstrates how any JAX-RS web service can support JSONP through a servlet filter.

Why you would use JSONP

Cross-domain communication is a common problem when developing rich web clients that utilize RESTful web services. Browsers impose the same-origin policy which is described in depth in: Cross-domain communications with JSONP.

To paraphrase the problem, a script loaded from one location, say http://solutionsfit.com/blog/, cannot execute an AJAX request that retrieves data from a service outside of the domain solutionsfit.com. The diagram below describes this scenario.

If the AJAX request to geonames.org returned a basic JSON response, the browser would not allow access to this data. This is a problem for many AJAX applications, especially mashups, which may access a number of resources to generate content. JSONP (JSON with padding) solves this problem by wrapping the returned data with a function.

The function is invoked as a callback once the AJAX call completes with the JSON results passed as an argument. This requires that the callback function be defined in the web page. So, in the diagram above, if the response from geonames.org returns a function that takes the JSON result as an argument, we can bypass the same-origin policy. This of course requires the web service being invoked to support JSONP.

Creating a Servlet Filter to process JSONP requests

JAX-RS does not support JSONP by default. Some implementations provide an extension to produce JSONP content but some do not (see the RESTEasy JIRA issue). As a Seam user, I find RESTEasy the perfect option for exposing RESTful services due to its tight integration. As RESTEasy does not currently support JSONP, I needed a solution. Fortunately, you can add support for JSONP using a servlet filter. The following implementation is a naive approach, but shows the general idea.

public class JSONPRequestFilter 
     extends org.jboss.seam.web.AbstractFilter {
  public void doFilter(ServletRequest request, ServletResponse response, 
      FilterChain chain) throws IOException, ServletException {
    if (!(request instanceof HttpServletRequest)) {
       throw new ServletException("This filter can " +
         "only process HttpServletRequest requests");
    }

    HttpServletRequest httpRequest = (HttpServletRequest) request;

    if(isJSONPRequest(httpRequest))
    {
      response.setContentType("text/javascript");

      ServletOutputStream out = response.getOutputStream();

      out.println(getCallbackMethod(httpRequest) + "(");
      chain.doFilter(request, response);
      out.println(");");
    }
    else
    {
      chain.doFilter(request, response);
    }
  }

  private String getCallbackMethod(HttpServletRequest httpRequest)
  {
    return httpRequest.getParameter("callback");
  }

  private boolean isJSONPRequest(HttpServletRequest httpRequest)
  {
    String callbackMethod = getCallbackMethod(httpRequest);
    return (callbackMethod != null && callbackMethod.length() > 0);
  }
}

This filter processes any request that provides a callback function parameter (the signature of a JSONP request) and wraps the JSON return data with a function call using the value of the callback parameter. In my application I needed to support multiple media return types, so I implemented a more robust approach that checks the Accept header to verify that text/javascript or application/javascript is an accepted media type. It also wraps the HttpServletRequest to inform RESTEasy that application/json is the preferred media type, which ensures that RESTEasy provides a JSON response.
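
That more robust filter is not reproduced here, but a minimal sketch of the request-wrapping idea (my own illustration, not library code) could look like the following; a complete implementation would also override getHeaders():

public class JSONPRequestWrapper extends HttpServletRequestWrapper {

  public JSONPRequestWrapper(HttpServletRequest request) {
    super(request);
  }

  @Override
  public String getHeader(String name) {
    // report application/json as the accepted media type so the JAX-RS
    // runtime produces a JSON response that the filter can then wrap
    if ("Accept".equalsIgnoreCase(name)) {
      return "application/json";
    }
    return super.getHeader(name);
  }
}

For JSONP requests, the filter would then pass new JSONPRequestWrapper(httpRequest) to chain.doFilter(...) in place of the original request.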

Configuring the JSONP Servlet Filter

To configure the filter to apply to RESTful web service requests, you would simply add the following to your project’s web.xml.

<filter>
  <filter-name>JSONPRequestFilter</filter-name>
  <filter-class>com.solutionsfit.rest.JSONPRequestFilter</filter-class>
</filter>
  
<filter-mapping>
  <filter-name>JSONPRequestFilter</filter-name>
  <url-pattern>/seam/resource/rest/*</url-pattern>
</filter-mapping>

The url-pattern should match the base URL pattern for your RESTful web service requests. The default base URL for a Seam configuration is shown above. Once this is complete, you can bypass the same-origin policy by making JSONP requests to your JAX-RS web services.

Security Considerations

As a final note, be aware that there are security considerations associated with the use of JSONP and cross-site request forgery attacks. The same-origin policy exists to eliminate this issue, so appropriate precautions should be taken to ensure that security is enforced. Always ensure that you understand the implications of using JSONP prior to enabling it for your web services and prior to invoking a service that provides JSONP support.

Tuning Queries when using Pagination with JPA and Hibernate

In recent performance tuning of some EJBQL search queries, I’ve had a lot of discussions with other developers on database pagination. There are some definite nuances that you have to be aware of when using Hibernate’s pagination feature, so I thought I would explain them here.

Quick Introduction to Database Pagination with JPA

Database pagination allows you to step through a result set in manageable chunks (say 10 at a time). This is an important feature when a result set is large. Imagine the user only looks at the first result out of 1000 returned: essentially 999 of the 1000 results were wasted. This is wasteful in terms of CPU cycles on the database server, network usage, CPU cycles on the application server, and memory allocation. On the other hand, if we only loaded 10 results into memory, we’ve only wasted 9 results. As the result set grows, this problem becomes more important to address.

Database pagination with JPA is quite simple through the javax.persistence.Query. The following method invocations retrieve the first 10 results for the query:

javax.persistence.Query query =
  em.createQuery("select order from Order as order " +
    "left join order.customer as customer " +
    "where customer.name like '%' || :name || '%'");

query.setParameter("name", name);
query.setFirstResult(0);
query.setMaxResults(10);

// returns 10 or less orders
List<Order> orders = query.getResultList();

The max number of results to retrieve at one time can be any number you choose. As the user pages through the data, we alter the value passed to setFirstResult(int) to retrieve the next set of results.
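
To make the paging explicit, here is a small sketch of my own (reusing the query from the snippet above) that derives the first result from a zero-based page number:

int pageSize = 10;      // results per page
int pageNumber = 2;     // zero-based, so this is the third page

query.setFirstResult(pageNumber * pageSize);  // skip the first 20 results
query.setMaxResults(pageSize);                // return at most 10 results

List<Order> ordersForPage = query.getResultList();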

Query Tuning with Fetch Joins

When paging through a result set, you may be interested in performing fetch joins to enhance query performance. This avoids the N+1 select problem when walking lazy relationships for displaying data. For example, let’s say we are working with an order management system that allows users to search for orders placed by customers. In our domain, an Order has one Customer and a Customer can be associated with many Orders.

This relationship could be described in the Order entity as:

@Entity
public class Order
{
  // ... ...

  @ManyToOne
  private Customer customer;

  // ... ...
}

In the search results, the users want to see both Order and Customer information on the page. Lazily loading the Customer results in a query being executed to retrieve the Customer for each Order displayed. To avoid this, we can perform a fetch join on the Customer when retrieving the Order results. Here is the resulting EJBQL:

select order from Order as order
  left join fetch order.customer as customer
  where customer.name like '%' || :name || '%'

This ensures that only a single query is executed to load both the Order and the Customer results. An example of what the SQL result set might look like in this case would be:

| order_id | cust_id | cust_name       |
----------------------------------------
| 1        | 1       | Jacob Orshalick |
| 2        | 2       | Nirav Assar     |
| 3        | 3       | John Doe        |

As you can see, each Order is associated to a single Customer which ensures a unique result set. In this case we are guaranteed that limiting the result set to 5 will always result in 5 or less unique Order results. This is generally the right solution for a @OneToOne or a @ManyToOne relationship.

Fetching One-to-many or Many-to-many Relationships

Fetching one-to-many or many-to-many relationships gets a bit tricky. The moment you introduce a fetch join for a one-to-many or many-to-many relationship, Hibernate will load all results into memory and then only return you the max number of results you requested. This is due to the semantics of SQL queries.

Going back to our example, we will likely have a list of LineItem entries for each order that tell us what Products the Customer purchased on the Order.

And the Order entity would now look like:

@Entity
public class Order
{
  // ... ...

  @ManyToOne
  private Customer customer;

  @OneToMany
  private List<LineItem> lineItems;

  // Getters and Setters
}

The users request that we display the LineItem entries below each Order in the search results. So we can just do another fetch join and load this data as well, right? Here is the resulting EJBQL:

select distinct order from Order as order
  left join fetch order.customer as customer
  left join fetch order.lineItems
  where customer.name like '%' || :name || '%'

Once you introduce this additional fetch into the query, Hibernate will present the following message in the log:

  [org.hibernate.hql.ast.QueryTranslatorImpl] firstResult/maxResults
  specified with collection fetch; applying in memory!

This message is telling you that Hibernate is retrieving all results from the database, and then only returning the first 10 results (or the number of max results you specified). So why does Hibernate do this? Let’s have a look at an example of what the SQL result set generated from this query might look like.

| order_id | cust_id | cust_name       | line_id | product_sku |
----------------------------------------------------------------
| 1        | 1       | Jacob Orshalick | 1       | 1403-1209   |
| 1        | 1       | Jacob Orshalick | 2       | 1405-1333   |
| 2        | 2       | Nirav Assar     | 3       | 1300-1222   |
| 3        | 3       | John Doe        | 4       | 1400-3029   |
| 3        | 3       | John Doe        | 5       | 1401-1000   |
| 3        | 3       | John Doe        | 6       | 1200-1000   |

Each database has its own SQL syntax for limiting the result set, but assuming we limit the result set to 5 results on the database side we would only get the first 5 rows. As you can see, the result set returned duplicates the Order and Customer information for each LineItem on the Order. Thanks to the way Hibernate processes these results, we would still see the 3 expected orders (order_id = 1, 2, 3), but the database would only return us 2 of the LineItem entries for John Doe’s order. This is an incorrect result from the user’s point-of-view.

Knowing this, Hibernate rightfully retrieves all results in this case and then returns you the 3 Order results with all associated LineItem entries. But, to ensure correctness, you lose the value of pagination. So will we always face the N+1 select problem when using pagination with @OneToMany or @ManyToMany relationships? Not if you consider other options from a user experience perspective.

Other Options for one-to-many Relationships

There are a number of ways to enhance performance without losing the advantages of database pagination.

Display LineItem Entries only when Requested

Technology combinations like RichFaces and Seam make this simple. Basically you can walk the lazy relationship only when the user requests this information through an AJAX request. Through use of a <rich:togglePanel> a link can be provided to expand the Order data for the user. Because Seam allows an EntityManager to span requests, lazily loading this data is simple.

Another simple option is using REST and JSON to retrieve the LineItem entries through an AJAX request when accessed by the user. A simple RESTful invocation (http://my-server/order/1/lineItems) allows the LineItem entries to be retrieved for an Order and we can then parse the results and display them back to the user. RESTEasy makes this simple for any Java application.
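
A hedged sketch of what such a resource might look like (the class, path, and retrieval query below are my own illustration, not taken from an existing application):

@Path("/order")
@Produces("application/json")
public class OrderResource {

  @PersistenceContext
  private EntityManager entityManager;

  @GET
  @Path("/{orderId}/lineItems")
  public List<LineItem> getLineItems(@PathParam("orderId") Long orderId) {
    // only the requested Order's LineItem entries are loaded, so the
    // paginated search query never has to fetch the collection
    return entityManager.createQuery(
        "select lineItem from Order as order " +
        "  join order.lineItems as lineItem " +
        "where order.id = :orderId")
      .setParameter("orderId", orderId)
      .getResultList();
  }
}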

Display the LineItem Entries on a Details Page

This is the easiest and most obvious solution. Just display high-level Order information in the search results, and the user can then access a details page that provides additional details. This is generally the solution I push users toward for its simplicity.

Display High-level LineItem Summary Information

Another option is to give high-level information (e.g. number of LineItem entries) on the search page, and then display all information on a detail page. With the flexibility of EJBQL, you can use aggregate functions (e.g. count(lineItem.id) ) with a group-by clause to avoid the issues with a one-to-many. But, this also generally requires introduction of DTOs to hold the query result data or additional parsing of the result set.
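
As a sketch of that idea (the query below is illustrative only, not taken from a tuned production example), the search could return an aggregate count per Order instead of fetching the collection:

// each row holds the order id, the customer name, and the line item count
List<Object[]> results = entityManager.createQuery(
    "select order.id, customer.name, count(lineItem.id) " +
    "from Order as order " +
    "  left join order.customer as customer " +
    "  left join order.lineItems as lineItem " +
    "where customer.name like '%' || :name || '%' " +
    "group by order.id, customer.name")
  .setParameter("name", name)
  .setFirstResult(0)
  .setMaxResults(10)
  .getResultList();

Because each Order now maps to exactly one row, database-side pagination behaves correctly again; the trade-off is mapping each Object[] row (or a DTO) back into something the page can display.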

Performance Tuning Always has Trade-offs

As I always say when discussing performance tuning, there are always trade-offs. Whether it’s additional complexity or changes to user experience, we always have to consider the implications of tuning our applications.

Second-level caching: Still an effective performance tuning technique

I keep reading discussions regarding the performance of Seam applications. These discussions are generally centered around the performance overhead of the interception techniques used by Seam. While this is definitely a valid issue in certain scenarios (see the excellent forum discussion started by Tobias Hill), many tend to blame Seam too quickly for their performance issues. If it is taking many seconds or even minutes to load a page, in most cases your application is more likely to blame than Seam.

In my experience, most performance issues stem from data access. Improperly tuned queries (a common culprit) and not using the second-level cache of your ORM provider when appropriate can lead to some serious performance implications in your application. While second-level caching is nothing new, here I will describe why it is important to a Seam application and how you can improve performance using Hibernate’s second-level cache provider.

Before I go any further, note that second-level caching is not the only caching solution you have available if you are using Seam. Seam provides a multi-layer caching solution that allows you to cache page fragments and objects easily while abstracting away the details. You can read all about Seam’s multi-layer caching solution in Chapter 34 of Seam Framework: Experience the Evolution of Java EE.

Loading Reference Data

Seam provides an elegant solution to the common problem of associating entities based on a dropdown selection. Take the common booking example with Seam. We are attempting to book a Hotel and we need to input credit card information. The type of credit card is likely to be a dropdown, but that dropdown is going to need to associate to a CreditCardType entity.

@Entity
public class CreditCardType implements Serializable
{
  @Id
  private Long providerId;
  private String description;
  // ... ...
}

Our Booking class then needs a reference to the CreditCardType class.

@Entity
public class Booking implements Serializable
{
  @Id
  private Long id;
  // ... ...
  @ManyToOne
  private CreditCardType creditCard;
  // ... ...
}

To make this task simple, Seam provides the <s:convertEntity /> tag (backed by Seam’s entity converter component), which ensures that the user selection is converted to an entity for association with your object.

<h:selectOneMenu id="creditCard" value="#{booking.creditCard}"
    required="true">
  <s:selectItems noSelectionLabel="" var="type"
    value="#{creditCardTypes}"
    itemLabel="#{type.description}" />
  <s:convertEntity />
</h:selectOneMenu>

As you can see this is quite simple, but we need to load the creditCardTypes into the conversation context in order to associate an instance to our entity. This is because the creditCardTypes need to be managed instances in the conversation-scoped persistence context. It is quite simple to accomplish this through a @Factory method scoped to the conversation.

@Name("bookingAction")
@Scope(CONVERSATION)
public class BookingAction implements Serializable {
  // ... ...
  @In private EntityManager entityManager;

  @Factory("creditCardTypes")
  public List<CreditCardType> loadCreditCardTypes()
  {
    return entityManager.createQuery("select c from " +
      "CreditCardType as c order by c.description").getResultList();
  }
  // ... ...
}

Great, so now we can load our entities into the context and associate them using a dropdown, so what’s the catch? The factory method only executes once, right?  The problem is that the query that loads the CreditCardType instances into the conversation context executes every time a new conversation requests the dropdown list.  This can cause the initial page load to lag.

This may not be a problem in this simple case as we only have this one dropdown, but what if we have many dropdowns on the screen? Even further, what if this dropdown list is used by several conversations? Doesn’t it seem wasteful to hit the database every time we need it? We can avoid the database hit and still achieve the same benefits by using second-level caching.

Second-level caching with Hibernate

Second-level caching is intended for data that is read-mostly. It allows you to store the entity and query data in-memory so that this data can be retrieved without the overhead of returning to the database. You can configure the cache expiration policy, which determines when the data will be refreshed in the cache (e.g. 1 hour, 2 hours, 1 day, etc.) according to the requirements for that entity. An entity like CreditCardType is certainly read-mostly so it is definitely a good candidate for the second-level cache.

Using Hibernate, it is quite simple to cache an entity by using the @Cache annotation.

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class CreditCardType implements Serializable {
  // ... ...
}

We then need to include the jars necessary for a second-level cache provider. I tend to use Ehcache as I find it simple to use and it is fully supported by Seam’s multi-layered caching solution.

Once you include the appropriate jars, you must configure Hibernate to use second-level caching. In your persistence.xml file, add the following properties for your persistence-unit definition.

<persistence-unit name="myBookingDS">
  ... ...
  <properties>
    <property name="hibernate.cache.provider_class"
      value="org.hibernate.cache.EhCacheProvider" />
    <property name="hibernate.cache.use_second_level_cache"
      value="true" />
    <property name="hibernate.cache.use_query_cache"
      value="true" />
    ... ...
  </properties>
</persistence-unit>

The hibernate.cache.provider_class should be specific to the cache provider you are using. Hibernate supports a number of implementations as described in the reference documentation.

Notice that we also set hibernate.cache.use_query_cache to true. This allows us to take the caching a step further by caching the query itself and not just the entities. In order to cache the query, we can take two approaches: use the Hibernate Session API or the Hibernate @NamedQuery annotation. Let’s look at the Hibernate Session API approach first. Our factory method above changes to the following:

@Name("bookingAction")
@Scope(CONVERSATION)
public class BookingAction implements Serializable {
  // ... ...
  @In private EntityManager entityManager;

  @Factory("creditCardTypes")
  public List<CreditCardType> loadCreditCardTypes()
  {
    Session session = (Session) entityManager.getDelegate();

    Query query = session.createQuery("select c from " +
      "CreditCardType as c order by c.description");
    query.setCacheable(true);

    return query.list();
  }
  // ... ...
}

Now you will notice in the logs that once the creditCardTypes have been loaded, even a new conversation does not cause a database call the next time these entities are requested. The query and the entities are loaded directly from the second-level cache in-memory.

The other approach is to use the Hibernate @NamedQuery annotation which gives the option to cache your query.

@Entity
@NamedQuery(name="getCreditCardTypes",
  query="select c from CreditCardType as c " +
      "order by c.description",
  cacheable=true)
public class CreditCardType implements Serializable
{
  @Id
  private Long providerId;
  private String description;
  // ... ...
}

The @NamedQuery can then be retrieved through the createNamedQuery() method in the EntityManager API.
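
For instance, a minimal sketch (assuming the named query above and the same conversation-scoped factory pattern) would be:

@Factory("creditCardTypes")
public List<CreditCardType> loadCreditCardTypes()
{
  // both the query and the CreditCardType entities are served from the
  // second-level cache once they have been loaded
  return entityManager.createNamedQuery("getCreditCardTypes").getResultList();
}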

While we are only showing one scenario here, there are many cases where second-level caching can be applied in your application.

No silver bullet

By no means am I claiming here that second-level caching is the solution for every scenario. Performance tuning is somewhat of an art. It is definitely handy to know the various potential hot spots when tuning an application, but a solution that works in one case may not work in others. Simply read up on the various approaches and techniques to tune your application so that you can apply each technique when the time is right.

Seam Framework promotion at JavaRanch

Michael Yuan and I will be answering questions about the book and Seam in general at JavaRanch this week in the JBoss forum.  If you would like to ask us a question feel free to stop by!  They will be selecting four random posters in the forum to win a free copy of the book provided by Prentice Hall. We look forward to a good week of questions and hope to see you there!

Seam UI Refcard Released

As a follow-up to the Core Seam Refcard, DZone has now released my companion reference for using Seam with JSF. The Seam UI Refcard has now been released through the DZone Refcardz site and includes:

  • Simplifying JSF
  • Page Navigation
  • JSF Component Annotations
  • JSF Component Tags
  • Hot Tips and more…

So download the Seam UI Refcard here and please send your comments and feedback to refcardz@dzone.com. For in-depth coverage of Seam 2.1, you can also purchase the just released Seam Framework: Experience the Evolution of Java EE.

In a related story, JavaLobby posted an interview with me to coincide with the release of the reference card. Check it out!