Bring up the topic of stateful versus stateless applications and you can guarantee a house divided, almost equally, on both sides. There are die-hard proponents of statelessness (and frameworks to support it; Spring, for one) and those who favor stateful behavior (and frameworks to support it; JBoss Seam, for one).

This leaves the average developer confused: is holding state good or bad? The answer is that it depends: state is inevitable in certain circumstances and avoidable in others.

A regular web application normally carries state, while service calls in the integration or infrastructure layer may not. Often the latter are deliberately designed to be stateless for the sake of scalability and fail-over.

A common approach to maintaining state is the user session, and the most common choice is the HttpSession. Cluster-aware application servers handle replication of these sessions. Inefficiencies in session replication are often cited as reasons to move to a stateless design or to look for alternative means of replication. Let's take a look at the common approaches to managing user sessions before we decide on the merit of such a move. Session replication choices:

  • No replication, no fail-over. Sticky sessions are the only choice in a redundant server deployment.
  • In-memory replication. The default behavior in J2EE application servers.
  • DB-based replication. Optional behavior in the .NET and Ruby on Rails platforms.

Take any of these approaches and you will find people giving you many reasons not to use them: lopsided load in the case of stickiness; inefficient replication and cluster license costs (about three times more) in the case of in-memory replication; and increased DB I/O in the case of DB-based sessions.

We might do better by addressing the problem itself before looking for more efficient solutions. Control over what counts as valid state, and the size of the state object graph, matter more. I follow these principles and practices when handling state in my applications. Some of them are not new; they are in fact best-practice recommendations for the performance, robustness, and overall hygiene of the system:

  • Store only key information in the session, i.e., the minimal data from which you can reconstruct the entire session graph.
  • Store only domain-model or equivalent data objects; avoid objects that hold behavior. An easy way to enforce this is to wrap session access with a layer that accepts, say, only XSD-derived objects, which effectively shuts out behavioral class instances.
  • Set a limit on the size of the session, i.e., avoid large session graphs. The session wrapper can enforce this.
  • Persist the session only if it is dirty. This applies both where there is container support and in custom session-persistence implementations.
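A minimal sketch of such a session wrapper might look like the following. The class and limit are illustrative (the text does not prescribe an implementation); it enforces a size cap, accepts only Serializable data objects, and tracks a dirty flag for the persist-only-if-dirty rule. A real wrapper would also vet types, e.g., admit only XSD-derived classes.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Illustrative session wrapper enforcing the principles above:
// a hard size limit, data-only (Serializable) attributes, and a
// dirty flag so the session is persisted only when it has changed.
public class GuardedSession {
    private static final int MAX_ENTRIES = 32; // assumed limit, tune per app

    private final Map<String, Serializable> attributes = new HashMap<>();
    private boolean dirty = false;

    public void put(String key, Serializable value) {
        // Reject growth beyond the cap to keep the session graph small.
        if (attributes.size() >= MAX_ENTRIES && !attributes.containsKey(key)) {
            throw new IllegalStateException("session size limit exceeded");
        }
        attributes.put(key, value);
        dirty = true; // persist later only if this flag is set
    }

    public Serializable get(String key) { return attributes.get(key); }

    public boolean isDirty() { return dirty; }

    public void markClean() { dirty = false; } // call after a successful persist
}
```

The wrapper also gives you a single seam at which to swap the backing store (in-memory, replicated, or DB) without touching application code.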

An application that follows all of the above rarely needs to debate the cost of maintaining state via sessions – in memory or in the DB.

DB-based persistence is considered expensive and a misfit for transient data such as user session information. Interestingly, however, frameworks like .NET and Ruby on Rails (RoR), which matured later than J2EE, provide this as an option. In fact, if I am not mistaken, it is the default in RoR.

Recently I had the chance to architect an SOA-based platform on which applications would be built. We wanted the core services to be stateless so that they could easily scale out when required, so naturally we preferred that the application servers NOT be clustered but be load balanced instead. The applications built on top, however, had to hold minimal state. We also decided to mask session management from the consuming applications, and implemented session persistence – and therefore recovery – using the DB.

While there were initial apprehensions about DB I/O bottlenecks, adopting the principles described above helped us tide over the issue. The end applications have been in production for a year now. The logic we used in favor of DB-based sessions was this: the nature of DB access for, say, 100 concurrent users would mostly be READ, with the odd WRITE (i.e., when a session becomes dirty). A hundred reads of small records through an index on a table are extremely fast, since each read targets a specific record independent of the others and there are no concurrency or transaction-isolation issues. In any case, we kept the option (courtesy of the wrapper over session management) to switch back to HTTP sessions and clustering if performance suffered – which hasn't happened to date.
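The read-mostly, write-on-dirty access pattern described above can be sketched as follows. This is illustrative only: a ConcurrentHashMap stands in for the keyed session table, where the real system would issue a keyed SELECT/UPDATE against the DB.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the read-mostly session store described above.
// The map stands in for an indexed DB table keyed by session id.
public class DbSessionStore {
    private final Map<String, String> table = new ConcurrentHashMap<>();
    private int writes = 0;

    // Each read fetches one record by key -- independent of other rows,
    // so there is no cross-request contention.
    public String load(String sessionId) {
        return table.get(sessionId);
    }

    // Write only when the session is dirty; clean sessions cost no I/O.
    public void saveIfDirty(String sessionId, String state, boolean dirty) {
        if (!dirty) return;
        table.put(sessionId, state);
        writes++;
    }

    public int writeCount() { return writes; }
}
```

For 100 concurrent users the store sees roughly 100 cheap keyed reads per cycle and only the occasional write, which is why the anticipated I/O bottleneck never materialized.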

To sum up: the debate between stateful and stateless applications – and consequently over the most efficient session persistence/replication mechanism – is really a matter of choice, provided the session is handled with some discipline in the application.


Thanks to the likes of Google, the world today sees application scalability in a new light – as no longer dependent solely on Symmetric Multi-Processor (SMP) boxes. What probably does not come out clearly from such implementations is the optimization of the available CPU power.

I read somewhere that only a portion of the world's available processing power is actually used. On the other hand, don't many of us worry about applications being slow? One area being addressed to improve performance in regular J2EE applications is data access, through application strategies (partitioning of data, lazy loading, etc.) and technologies (caches and data grids), often provided by the vendors themselves.

The ubiquitous nature of HTTP and its applications has created its own patterns of application design. The positive ones include:

  • Stateless behaviour of applications
  • Tiers in the application
  • Application security and identity management

On the other hand, it has also led to the stereotyping of applications. I am yet to see a significant number of applications that deviate from the standard J2EE pattern: MVC –> Service Locator –> Facade –> DAO. This has constrained us in some ways:

  • The flow from the web to the database and back is a single thread of activity
  • Patterns do not encourage us to think otherwise
  • Platform specifications are considered sacred – not spawning new threads inside an EJB container, for example

In physical deployments, we consider the job done by having, say, a hardware load balancer in place to seemingly "load balance" requests between servers. It is not often that load balancing happens according to the work that needs to be done to service a request. It is usually simple IP-level round-robin, or at best a weighted round-robin based on the number of CPUs on each server.
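For concreteness, the weighted round-robin just mentioned can be sketched as below (class and fields are illustrative). Note that it dispatches purely by request count and weight; it knows nothing about the actual work each request represents, which is exactly the limitation being discussed.

```java
// Illustrative weighted round-robin: servers with more CPUs (higher
// weight) receive proportionally more requests. It balances request
// counts only, not the work a request actually entails.
public class WeightedRoundRobin {
    private final String[] servers;
    private final int[] weights;   // e.g. CPU count per server
    private int index = 0;
    private int credit = 0;        // requests sent to the current server

    public WeightedRoundRobin(String[] servers, int[] weights) {
        this.servers = servers;
        this.weights = weights;
    }

    public String next() {
        // Move on once the current server has used up its weight.
        if (credit >= weights[index]) {
            credit = 0;
            index = (index + 1) % servers.length;
        }
        credit++;
        return servers[index];
    }
}
```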

This leads to the question: is scalability purely a factor of the number of machines/CPUs?

It appears so, unless we think differently. To illustrate: in an IP-level load-balanced setup, once a request is assigned to a server, the burden rests solely on the machine servicing it, which processes the entire thread of execution for a period of time while other servers may have unused processing capacity.

There are ways to address this issue and ensure high CPU utilization before concluding that scalability is a factor of the number of machines/CPUs:

  1. Co-locating applications: different applications have varied peak loads. Co-locating applications on a shared setup (software, i.e., framework, and hardware) ensures better overall scalability and availability. [I have worked on an engagement where six applications were co-deployed in production on just two blade servers.]
  2. Leveraging the multi-threading capabilities of the JVM. But isn't that against the specifications? Actually, no – not if you use the container's own features to multi-thread, for example via Message-Driven Beans (MDBs).

Some fundamental changes to the way we design applications are needed to make the second point (multi-threading) a reality.

Let's take an example. The sequence of activity in a regular J2EE application to generate invoices would involve:

  1. Validating the incoming data
  2. Grouping the request data – by article, customer, country, etc.
  3. Retrieving master and transactional data from the RDBMS
  4. Calculating the invoice amount – tax and other computations
  5. Generating the final artifact – XML, PDF, etc.

In most designs, steps 1 to 5 are implemented by components in one or other of the stereotyped J2EE tiers, and execution is therefore serial in nature.

What if we implemented a few of the above steps using the Command pattern, i.e., as components that take a well-defined request and produce a well-defined response using only the request data provided?

Such a component may then be remoted: as an SOA service, or as a plain remotely invocable object such as a stateless EJB.
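A minimal sketch of such a Command-style unit of work, taking step 4 (calculating the invoice amount) as the example. The names (`Command`, `InvoiceAmountCommand`) and the flat tax rate are illustrative assumptions, not from the text; the point is the well-defined request/response shape that later makes the component remotable or queueable.

```java
// A generic command: well-defined request in, well-defined response out.
interface Command<Req, Res> {
    Res execute(Req request);
}

// One discrete step -- here, calculating the invoice amount -- packaged
// so it depends only on its request data and can be remoted or queued.
class InvoiceAmountCommand implements Command<double[], Double> {
    private static final double TAX_RATE = 0.10; // assumed flat rate

    @Override
    public Double execute(double[] lineItemAmounts) {
        double net = 0;
        for (double amount : lineItemAmounts) {
            net += amount;
        }
        return net * (1 + TAX_RATE); // net total plus tax
    }
}
```

Because the command carries no conversational state, any number of instances can run in parallel across machines.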

Now implement a request processor that breaks a request up into multiple discrete units of work. Each unit of work then becomes a message – a request and a response. The messages can be distributed across machines using a suitable channel, such as a JMS queue with MDBs listening on it. The interesting thing happens here: the container spawns MDB instances depending on the number of messages in the queue and other resource-availability factors, thereby providing multi-threaded execution. The MDBs themselves may be deployed across machines to truly spread the load across the available hardware.

I therefore believe that scalability in a well-designed system is a factor of the number of threads that can be efficiently executed in parallel, and not just of the number of machines/CPUs.