Using Asynchronous Servlets to Deal with Hung Threads

Future Servlet to the Rescue

The future servlet has actually been available since WebLogic Server 6.1, but the functionality remained "hidden" inside WebLogic Server until the release of WebLogic Server 9.1, when BEA finally made the model public. So let's examine the future servlet model to see how it can help us. To use it, extend the weblogic.servlet.FutureResponseServlet class.

Instead of implementing the doRequest(), doResponse(), and doTimeout() methods, all we have to do is define a single service() method that handles servlet requests:

public void service(HttpServletRequest req, FutureServletResponse rsp)

If we decide to provide an immediate answer, all we have to do is call the send() method on the FutureServletResponse instance. However, if we want to delay the response until some point in the future, we must provide an extension of the TimerTask class. Here's an example (a sketch of the immediate case follows it for comparison):

     

Timer timer = new Timer();
ScheduledTask mt = new ScheduledTask(rsp, timer);
timer.schedule(mt, 100);   // fire the task 100 ms from now

// ...

public class ScheduledTask extends TimerTask {

  private FutureServletResponse rsp;
  private Timer timer;

  ScheduledTask(FutureServletResponse rsp, Timer timer) {
    this.rsp = rsp;
    this.timer = timer;
  }

  public void run() {
    try {
      PrintWriter out = rsp.getWriter();
      out.println("This is a delayed answer");
      rsp.send();
      timer.cancel();
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}
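For the immediate case, by contrast, the whole servlet fits in a few lines. The following is only a minimal sketch, not code from the article: the class name is invented, and the import for FutureServletResponse assumes it lives in the same weblogic.servlet package as FutureResponseServlet.

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import weblogic.servlet.FutureResponseServlet;
import weblogic.servlet.FutureServletResponse; // package assumed

public class ImmediateFutureServlet extends FutureResponseServlet {

  public void service(HttpServletRequest req, FutureServletResponse rsp)
      throws IOException, ServletException {
    // Write the body and release the response right away.
    PrintWriter out = rsp.getWriter();
    out.println("This is an immediate answer");
    rsp.send(); // tells the container the response is complete
  }
}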

But how can one leverage this class in a real use case? Let's go back to our Ajax paradigm. When a new request arrives, the traditional model would have us produce a response right away and then sit idle waiting for the next request. The alternative is this: if we push the client's request into a buffer and return immediately, we can process the request asynchronously in a different thread and send the response only when the back-end processing has finished.

In other words, instead of having the client poll the server, we process the request on the server and simply notify the client when processing has finished. Decoupling the request from the response keeps the execute thread from becoming too busy, leading to a more scalable application, and it can also serve as a defense against floods of Ajax requests.
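To make the idea concrete, here is a rough, self-contained sketch of the buffering approach, independent of the Observer-based example later in this article. The class name, the queue, and the worker thread are all inventions for illustration, and the weblogic.servlet package for FutureServletResponse is an assumption.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import weblogic.servlet.FutureResponseServlet;
import weblogic.servlet.FutureServletResponse; // package assumed

public class QueueingFutureServlet extends FutureResponseServlet {

  // Responses parked here until the back-end work is done.
  private static final BlockingQueue<FutureServletResponse> pending =
      new LinkedBlockingQueue<FutureServletResponse>();

  // A single worker thread drains the queue and completes the responses.
  static {
    Thread worker = new Thread(new Runnable() {
      public void run() {
        while (true) {
          try {
            FutureServletResponse rsp = pending.take();
            // Placeholder for the real back-end processing.
            PrintWriter out = rsp.getWriter();
            out.println("Back-end processing finished");
            rsp.send();
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          } catch (IOException e) {
            e.printStackTrace();
          }
        }
      }
    });
    worker.setDaemon(true);
    worker.start();
  }

  public void service(HttpServletRequest req, FutureServletResponse rsp)
      throws IOException, ServletException {
    // Return the execute thread immediately; the worker answers later.
    pending.offer(rsp);
  }
}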

But how will the client know when server-side processing has finished? This is where event-driven programming and the Observer pattern come in (Figure 4).

For those who aren't yet familiar with this pattern, here is a short introduction. It relies on three actors: the Subject, the Observer, and the Concrete Observer.


Figure 4. The Observer pattern

  • The Subject provides an interface for attaching and detaching observers.
  • The Observer defines an updating interface for all observers, so they can receive update notifications from the subject.
  • The Concrete Observer maintains a reference to the Subject so that it can retrieve the subject's state when a notification arrives. It contains the callback logic that will eventually be executed (see the sketch below).
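To make these roles concrete, here is a minimal, self-contained sketch built on java.util.Observable and java.util.Observer, the same JDK classes the practical example below relies on; the class names are invented for illustration.

import java.util.Observable;
import java.util.Observer;

// Subject: extends Observable, so it can attach/detach observers and notify them.
class JobSubject extends Observable {
  public void jobFinished(String result) {
    setChanged();            // mark the subject's state as changed
    notifyObservers(result); // push the new state to every attached observer
  }
}

// Concrete Observer: implements the updating interface and reacts to notifications.
class JobObserver implements Observer {
  public void update(Observable subject, Object arg) {
    System.out.println("Notified with: " + arg);
  }
}

public class ObserverDemo {
  public static void main(String[] args) {
    JobSubject subject = new JobSubject();
    subject.addObserver(new JobObserver()); // attach
    subject.jobFinished("done");            // triggers update() on the observer
  }
}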

 

Here's a practical example of this approach:  

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Date;
import java.util.Observable;
import java.util.Observer;
import java.util.Stack;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import weblogic.servlet.FutureResponseServlet;
import weblogic.servlet.FutureServletResponse;

public class FutureServlet extends FutureResponseServlet {

  public void service(HttpServletRequest req, FutureServletResponse rsp)
      throws IOException, ServletException {
    PrintWriter pw = null;
    try {
      pw = rsp.getWriter();
    } catch (IOException e) {
      e.printStackTrace();
    }
    pw.println("Request arrived");
    pw.flush();

    // Hand the response over to the Subject and return immediately,
    // freeing the execute thread.
    new Subject(req, rsp);
  }
}

class Subject extends Observable {

  // Responses waiting to be completed asynchronously
  private static Stack<FutureServletResponse> stackResponses =
      new Stack<FutureServletResponse>();

  public Subject(HttpServletRequest req, FutureServletResponse rsp) {

    PrintWriter pw = null;
    try {
      pw = rsp.getWriter();
    } catch (IOException e) {
      e.printStackTrace();
    }
    pw.println("Request sent to server");
    pw.flush();

    addObserver(new ConcreteObserver());
    stackResponses.push(rsp);
    startListening();
  }

  public void startListening() {
    Thread t = new Thread() {
      public void run() {
        // Poll the stack once a second and notify the observer
        // for each pending response.
        while (true) {
          try {
            Thread.sleep(1000);
          } catch (InterruptedException e) {
            e.printStackTrace();
          }
          if (stackResponses.isEmpty()) {
            continue;
          }
          FutureServletResponse rsp = stackResponses.pop();

          setChanged();
          // Notify the attached observer of the event
          notifyObservers(rsp);
        }
      }
    };
    t.start();
  }

  class ConcreteObserver implements Observer {

    public ConcreteObserver() {}

    public void update(Observable o, Object arg) {
      FutureServletResponse rsp = (FutureServletResponse) arg;
      PrintWriter pw = null;
      try {
        pw = rsp.getWriter();
        pw.println("Response sent at: " + new Date());
        pw.flush();
        rsp.send();
        pw.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }
}

As you can see, when a new request comes in, its response is stored in a stack. The Subject class then registers a ConcreteObserver instance as an observer; this observer holds the business logic to be executed when the notification occurs.

Using this simple but effective approach, we can make Ajax applications scale better by reducing the load on the execute thread, which can now serve even more requests at the same time.

Note that BEA says that this model "gives you full control over how the response is handled and allows more control over thread handling. However, using this class to avoid hung threads requires you to provide most of the code." They recommend the Abstract Asynchronous Servlet approach in most cases.

Conclusion

Under ideal testing conditions, network timeouts don't occur, and the effort required to handle them makes it tempting to ignore them. However, as the number of requests arriving at the server grows, poorly written Web applications may overload the server and eventually stall, causing network clients to block indefinitely.

For these reasons, it is necessary to prevent hung-thread proliferation. The Abstract Asynchronous Servlet provides hooks to decouple responses from incoming requests and to time out unresponsive requests.

However, if you need to schedule your response for a certain time in the future and you need full control over thread handling, you can adopt the Future Response Servlet model.


Francesco Marchioni joined the Java community in 1997 and is a certified Sun Enterprise Architect. He is an employee of Pride SpA and has designed and developed many J2EE applications on the WebLogic Platform for BEA customers.