Thursday, August 31, 2006

Closures in Java and The Other Side of Backwards Compatibility

It's the power of the community - the awesome force that makes Java evolve - that has once again started roaring at the news of a possible inclusion of closures in Dolphin. Not that all of the ruminations are favorable; in fact, the functional gurus at LtU have once again started chanting about how Java contains a number of fundamental (technical) design flaws and how Sun should immediately start looking for an alternative to Java as the next programming language.

My current post has been triggered by the excellent article that Bill Venners has written in Artima, where he laments how the extreme efforts to maintain backwards compatibility in Java are hindering the elegance of the language. Bruce Eckel has also raised this point many times in the past, when Sun pushed in its broken generics implementation in Java 5 for the sake of maintaining backwards compatibility with the millions of lines of existing codebase. Bill has hit the nail right on the head -
There's a natural law in programming language and API design: as backwards compatibility increases, elegance decreases. Backwards compatibility is very important. There's a cost to breaking code, but there's also a cost to not breaking it—complexity in the developer's face.

In trying to compromise with some of the inadequacies of the language, Java is turning into a feature bloat. Large enterprise applications have started to accumulate blobs of codebase built upon contradictory features of the language, just because Java did not clean 'em up in subsequent releases and still continues to support the legacy, maybe with a couple of hundred deprecation warnings. I think this is a far worse situation than breaking backwards compatibility.

Microsoft has displayed much more sanity in this regard and has made a conscious effort to clean things up in the course of the evolution of C#. I know the codebase size that C# has in the industry is in no way comparable to that of Java - but still I cannot support the path that Java has adopted, which, in a way, has encouraged the piling up of inadequate code bloat. Java has released version 5, Mustang is on its way later this year, we are looking and planning for Dolphin - yet in the most popular object oriented language of the industry, primitives are not objects. We cannot have the elegance of writing

200.times { |i|
  # do something
}


Look at the tons of code in any legacy application today and you will be stunned by the amount of effort people have put into special processing of primitives.

Closures with Backwards Compatibility?

Closures are typically a functional programming artifact, though all modern scripting languages support them. C# has rolled out its implementation of closures through delegates and is bringing lambdas in 3.0. I suspect these have been the major triggers behind the sudden clairvoyance of Gilad Bracha and his team in announcing the support of closures in Dolphin.

Closure Implementation

Sun has been thoroughly conservative about any change to the JVM - Gilad has talked about his struggle over the years to get "invokedynamic" accepted as the only change to the JVM. Closures, as this post suggests, can be implemented as syntactic sugar at the javac level by creating closure objects on the heap and boxing mutated local variables. I have strong doubts whether Sun will go to the extent of changing the JVM to implement efficient, cheap-to-use closures. Reason - Backwards Compatibility!
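To make that javac-level desugaring concrete, here is a hedged sketch - all names are invented for illustration, not taken from the actual proposal - of how a closure that mutates a local variable might be lowered to an anonymous class plus a heap-allocated box:

```java
// Hypothetical lowering of:
//   int count = 0;
//   { => count++; }   // invented closure syntax, invoked twice
// The Closure interface and Ref box are illustrative names only.
interface Closure {
    void invoke();
}

final class Ref<T> {
    T value;
    Ref(T value) { this.value = value; }
}

public class Desugared {
    public static int run() {
        // the mutated local is boxed so that the heap-allocated closure
        // object and the enclosing method share the same storage
        final Ref<Integer> count = new Ref<Integer>(0);
        Closure c = new Closure() {
            public void invoke() { count.value = count.value + 1; }
        };
        c.invoke();
        c.invoke();
        return count.value;
    }

    public static void main(String[] args) {
        System.out.println(Desugared.run()); // 2
    }
}
```

Whether closures get lowered like this or receive direct JVM support is exactly the trade-off at stake here - the boxing makes each capture of a mutable local cost a heap allocation.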

Closure Usage

In my earlier post on this subject, I mentioned internal iterators, which I would like to see as part of the closures package. As Joe Walker has mentioned in his blog, and Bill has discussed based on his suggestion, we would like to see a .each() method in the Collection interface. Again, this cannot be done without breaking the existing codebase, since it adds to the Collection interface. The question is "Will Sun go for this?", or will it make us eat humble pie by offering the much less elegant workaround of statics, as in Collections.each(Collection, ..)? Once again Backwards Compatibility hinders the added elegance!
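For illustration, a hedged sketch of what that static-utility workaround looks like in practice (Collections2, Block and each are invented names, not a proposed API):

```java
import java.util.Arrays;
import java.util.List;

// a hypothetical callback interface standing in for a closure
interface Block<T> {
    void call(T t);
}

// the "statics in Collections.each(Collection, ..)" shape: it works today
// without touching the Collection interface, but it is less elegant than
// calling names.each(..) directly on the collection
class Collections2 {
    static <T> void each(Iterable<T> coll, Block<T> block) {
        for (T t : coll) {
            block.call(t);
        }
    }
}

public class EachDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        final StringBuilder sb = new StringBuilder();
        Collections2.each(names, new Block<String>() {
            public void call(String s) { sb.append(s); }
        });
        System.out.println(sb); // abc
    }
}
```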

As a workaround to the above problem of maintaining backwards compatibility by adding more methods to existing interfaces, C# has come up with "extension methods", while Scala has introduced "implicits" and "views". Martin Odersky has had a very good discussion of these capabilities in the Weblog forum of Bill's article.

We need to wait till Java comes up with a strategy to address these issues.

Closures - Will They Make Java a Functional Programming Language?

Definitely not! I think adding closures will just be an attempt to reduce the awkwardness of the interfaces-anonymous-classes idiom, now used to abstract an algorithm over a piece of code. Just by adding closures to Java, developers will not suddenly start thinking in terms of monads and combinators while composing their programs. But given an efficient implementation and proper library support, closures will help add elegance to programming in Java.
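The idiom in question, for reference - abstracting an ordering algorithm over a piece of code today means an anonymous Comparator (this uses only the standard java.util APIs):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // sort by string length: the "piece of code" is the compare body,
    // wrapped in the interfaces-anonymous-classes ceremony
    static List<String> sortByLength(List<String> words) {
        List<String> sorted = new ArrayList<String>(words);
        Collections.sort(sorted, new Comparator<String>() {
            public int compare(String x, String y) {
                return x.length() - y.length();
            }
        });
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(sortByLength(Arrays.asList("bb", "a", "ccc")));
        // [a, bb, ccc]
    }
}
```

With closures, the five lines of ceremony around the compare body would collapse to a single expression.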

Here's the tailpiece from Karsten Wagner in a typical round of Java bashing at LtU ..
To get all those nice stuff you want Java needs much more than just a simply syntax for closures. So I think it's better to let Java stay the way it is and phase it out in the long term and use a real better language instead, because Java simply is beyond repair in too many points.

I am not that pessimistic. I still make my living on Java (though I seek the joy of programming in Scala) and hope that Gilad and his team bring out a killer offering with closures.

PostScript

Just when I was rummaging through the formatting of this entry, I noticed this on InfoQ: Sun has created JSR 270, which can remove features from the Java SE platform. This is definitely a welcome step .. more details can be found in Mark Reinhold's blog.

Monday, August 28, 2006

Validation Logic : Wire up Using Spring Validators

Colin Yates has blogged about validation logic in his first post on the Interface21 blog. He talks about the importance of a Validator abstraction, where you apply your business-specific validation rules to your populated domain object. This is real pragmatic advice when you are designing a large enterprise application. But when you are talking about an abstraction, you need to discuss how it will collaborate with the other participants in the mix. Take the example of a Web application with multiple tiers - you need to be very clear about the role that your validator is going to play in each of them. As a reader of the i21 blog, I started thinking of the many use cases, scenarios and possible collaborations that our dear friend the validator can take part in while bouncing across the layers of the application. I thought of posting a reply to clarify some of my understanding, but ultimately ended up writing this blog entry, hoping that it might trigger some discussion to clear up the reigning confusion. In what follows, I will try to think aloud through my understanding, and hope that the community will enrich this discussion with their collective thoughts.

Wiring the Validation Logic

Having the validation logic separated out in a validator abstraction is important, but I think the more important - and the most confusing - part of implementing validation is the wiring of the validation logic with the rest of the application. Follow the comments section of Colin's posting and my last statement will justify itself. Various readers have posted their observations on how to wire the logic within the various tiers of a typical Web based application.

One of the pioneers of this wiring has been Rails' in-model validation, which Cedric sees as an improvement over validation tied to the presentation framework. Rails offers validation logic as part of the model, which wires itself nicely into the framework and gets invoked automagically before persistence of the model object. Rails' validation engine offers some amount of context sensitivity as well, through protected hooks like validate, validate_on_create and validate_on_update, which application developers can override to plug in custom rules. There can be complicated use cases where this model is not a 100% fit, but as DHH has mentioned - this is one of those "most people, most of the time" things. We're not after the "all people, all of the time" kind of framework coverage. Quite in alignment with the Ruby philosophy :-).

In the Java community, we do not have the clairvoyance of the Ruby world - we do not have a DHH to give us the dictum. The result is inevitable: we have Commons Validator, Struts, WebWork (powered by the XWork validator), RIFE, Spring, Tapestry, Stripes, Wicket and many other variants, each implementing the same practice in its own different way. It's time for a JSR to bring harmony to the chaos of implementing validation logic across all tiers of the application - enter JSR 303: Bean Validation!

Validators - Separate Abstraction ?

The main point of Colin's blog was to advise people to identify *all* the rules that define *valid* data - the uniqueness of the data is just as much a validation rule as saying that the username must not be null. He speaks about the necessity of a validator abstraction, which is provided so well by Spring. In one of his responses, he mentions
It simply boils down to this; identify *all* the validation logic and then apply it in a single coherent, atomic, explicit operation. Validators are powerful things, they can do more than just the *syntax* of the data, they can, and should also check the *semantics* of the data.

But should we have the Validator as an abstraction separate from the domain object, or follow the Rails philosophy and club 'em together with the domain object? Working with Spring, I would like to have validators as a separate abstraction, for the following reasons:

  • Validators are tier agnostic and can be used to collaborate with multiple layers -


    • Business Layer for enforcing business logic on "semantically valid" objects

    • Persistence layer to ensure valid objects get persisted

    • MVC layer to enforce data binding from request parameters


  • Validators in Spring have their lifecycles controlled through the IoC - hence these singletons can be nicely wired together through DI

  • Spring MVC requires validators as separate objects
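A minimal, self-contained sketch of the separate-validator idea (Account and the rule keys here are hypothetical; Spring's real org.springframework.validation.Validator collects failures into an Errors object, for which a plain List stands in so the example runs without the framework):

```java
import java.util.ArrayList;
import java.util.List;

// hypothetical domain object
class Account {
    String username;
    int age;
    Account(String username, int age) {
        this.username = username;
        this.age = age;
    }
}

// the validator lives outside the domain class, so the same instance
// can serve the MVC, business and persistence layers alike
class AccountValidator {
    List<String> validate(Account account) {
        List<String> errors = new ArrayList<String>();
        if (account.username == null || account.username.length() == 0) {
            errors.add("username.required");   // *syntax* of the data
        }
        if (account.age < 18) {
            errors.add("account.age.minor");   // *semantics* of the data
        }
        return errors;
    }
}

public class ValidatorDemo {
    public static void main(String[] args) {
        System.out.println(new AccountValidator().validate(new Account(null, 12)));
        // [username.required, account.age.minor]
    }
}
```

Being a plain object with no tie to any tier, the validator can be wired through DI into whichever layer needs it.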


Validators - Collaborating across Layers

In his blog, Colin goes on to say that he does not want to get into whether this validation should be applied at the web layer, the middle layer, or both. But, as I mentioned above, the collaboration of the validators with the participants of the various tiers of the application is the area where people get confused the most.

Let me try to summarize my understanding of how to engineer the validator abstraction across the application layers, so that we can reuse the abstraction, avoid code duplication and keep the validation logic tied to the domain model (since it is the domain objects that we are trying to validate). My understanding is based on the implementation of Spring MVC, which does a nice job of engineering this glue.

Step 1: Form a Validator Abstraction

The domain class has an associated validator class. This is a deviation from the Rails implementation, but it allows a nice refactoring of the validation logic into an abstraction separate from the domain class - in fact, the wiring of the validator with the domain class can be done through mixins (or, as we say in the Java world, interceptors). I think XWork engineers its validators based on this interceptor technology.

Step 2: Wire Controllers to Populate Domain Object

In a typical Web application, we have Controllers - components which receive HttpServletRequest and HttpServletResponse instances just like an HttpServlet and are also able to participate in an MVC workflow. Spring offers a base Controller interface, while Struts has the notion of an Action. The controller intercepts the request and creates a Command object out of the request parameters. The framework takes care of this object's creation and population through the usual JavaBeans engine of property setters and getters, plus additional property editors (as in Spring). This Command object can be the domain object itself, or it may have the domain object wrapped inside it - the application developer knows how to get the domain object out of the command object.
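The binding step can be sketched with the standard java.beans introspection machinery - here a Map stands in for the HttpServletRequest parameters, CommandBean is a hypothetical command object, and real frameworks add type conversion through property editors on top of this:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

// hypothetical command object, populated through its JavaBeans setter
class CommandBean {
    private String username;
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

public class Binder {
    // copy each matching request parameter onto the command object
    static void bind(Object command, Map<String, Object> params) {
        try {
            for (PropertyDescriptor pd
                    : Introspector.getBeanInfo(command.getClass())
                                  .getPropertyDescriptors()) {
                Object value = params.get(pd.getName());
                if (value != null && pd.getWriteMethod() != null) {
                    pd.getWriteMethod().invoke(command, value);
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("username", "j2ee");   // as if from the query string
        CommandBean form = new CommandBean();
        bind(form, params);
        System.out.println(form.getUsername()); // j2ee
    }
}
```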

Step 3: Validate the Domain Object Before Submission

Once the controller has successfully populated the command object, it executes all the validators registered for the object. Rather than automatically triggering validation as part of data binding, Spring also offers callbacks to do the same as a post-processing step of the binding phase. The following snippet is from the Spring samples (jpetstore):

import javax.servlet.http.HttpServletRequest;
import org.springframework.samples.jpetstore.domain.Account;
import org.springframework.validation.BindException;

protected void onBindAndValidate(HttpServletRequest request,
  Object command, BindException errors)
    throws Exception {

  // command object
  AccountForm accountForm = (AccountForm) command;
  // domain object (POJO)
  Account account = accountForm.getAccount();

  // validate the object
  getValidator().validate(account, errors);
  ...
}


The above strategy describes how the validation logic bubbles up from the domain layer and is used by the controller to send error messages to the user from the web tier. All the validation logic is centralized in the validator abstraction and executed through validator.validate(..), with the errors collected in the BindException errors object.

Conclusion

  • Validators are domain level objects.

  • Validators are separate abstractions from the domain classes.

  • Validators can be "mixed-in" through interceptors if required.

  • Validators encapsulate "syntax" as well as "semantics" of the domain object.

  • In a typical Web application, validators can be wired together with Controllers, DAOs etc. to provide services at the other tiers of the application. Yet they are not coupled with any abstraction of the other layers of the application.

Sunday, August 20, 2006

Closures At Last!

There have been some significant movements among the Java leaders to introduce this much awaited feature into the Java programming language. The big team of Gilad Bracha, Neal Gafter, James Gosling and Peter von der Ahé has released a draft proposal for adding closures to Dolphin (JDK 7). I know Gilad has been a big proponent of having closures in Java, and he has expressed frustration in his blog at Java being so late an entrant in closing this out.

Brevity

Thoughts about introducing closures in Java have definitely been triggered by the excellent support for closures provided by C# and by the host of scripting languages like Ruby, Groovy, Python and JavaScript. The syntax, as proposed in the draft, looks a bit cumbersome to me, particularly after getting used to the elegance of Groovy, Ruby and even C#. I know that Java, being a statically typed language, does not help in making the closure syntax as elegant as in the dynamic languages.

My Wishlist of Associated Features

If closures see the light of day in Java, then I would like to have the following associated features, which will make the set more complete:


  • Type aliasing / typedefs : Without type aliasing it will be extremely cumbersome to write the entire type signature every time. I am sure this will also make programming with generics much easier. The keyword is *syntax-brevity*, and type aliasing is a great way to achieve it.

  • Currying : Higher order functions and closures are definitive steps towards implementing full currying support.

  • Generic Closures : It will be interesting to find out how closures will mix with generics.

  • Internal Iterators : I would love to write code like the following:


    int[] list = ...
    int[] evens = Arrays.findAll(list, (int n) { return n % 2 == 0; });
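As an aside on the currying wish above, here is what currying already looks like with today's anonymous-class encoding - the one-method F interface is an invented name, and the verbosity of the nested types is precisely what type aliasing would relieve:

```java
// a hypothetical one-method function interface
interface F<A, B> {
    B apply(A a);
}

public class Curry {
    // add: Integer -> (Integer -> Integer), i.e. a curried two-arg sum
    static final F<Integer, F<Integer, Integer>> add =
        new F<Integer, F<Integer, Integer>>() {
            public F<Integer, Integer> apply(final Integer x) {
                return new F<Integer, Integer>() {
                    public Integer apply(Integer y) { return x + y; }
                };
            }
        };

    public static void main(String[] args) {
        F<Integer, Integer> inc = add.apply(1);   // partial application
        System.out.println(inc.apply(41));        // 42
    }
}
```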

Sunday, August 13, 2006

Extend Your Type Transparently using Spring Introductions

One of the main intents of the Bridge design pattern is to allow decoupled dual hierarchies of interfaces and implementations to grow independently, giving users the flexibility to compose them. The binding part is that all implementations have to abide by the base contract dictated by the abstract interface.
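For reference, a minimal sketch of that Bridge shape (all names invented for illustration): the abstraction holds an implementor, and the two hierarchies vary independently as long as every implementation honors the base contract.

```java
// implementor hierarchy: the base contract every implementation abides by
interface PersistenceImplementor {
    String save(Object entity);   // returns a trace message, for illustration
}

class JdbcImplementor implements PersistenceImplementor {
    public String save(Object entity) {
        return "saved via JDBC: " + entity;
    }
}

// abstraction hierarchy: what clients program against
abstract class Dao {
    protected final PersistenceImplementor impl;
    protected Dao(PersistenceImplementor impl) { this.impl = impl; }
    public String save(Object entity) { return impl.save(entity); }
}

class CustomerDao extends Dao {
    CustomerDao(PersistenceImplementor impl) { super(impl); }
}

public class BridgeDemo {
    public static void main(String[] args) {
        // compose any abstraction with any implementation
        Dao dao = new CustomerDao(new JdbcImplementor());
        System.out.println(dao.save("customer-42"));
        // saved via JDBC: customer-42
    }
}
```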

Readers of my blog must be bored by now with my regular chanting about the necessity of a generic data access layer in Java based applications. I have designed one which we have been using in some of our Java projects - I have blogged extensively about the design of such an artifact here, here and here. The DAO layer has been designed as a Bridge, with a dual hierarchy of interfaces acting as client contracts backed up by the implementation hierarchies. So far the clients have been using the JDBC implementation and have never complained about the contracts. Only recently did I decide to sneak in a JPA implementation as well, since Spring has also started supporting JPA.

Things fell into place like a charm, till I hit upon a roadblock in the design. If you need to provide some contracts which make sense only for a specific implementation (not all of them), then what do you do? The basic premise of using Bridge is to have a single set of interfaces (contracts) which all implementations need to support. We have the following options:

  • Throw exceptions for unsupported implementations and hope the user does not use 'em. Document extensively, warning users not to venture into these territories. But if my client is like me and does not have the habit of reading documentation carefully before coding, then he may be in for some surprises.


  • Use the Extension Object design pattern, which allows you to extend an object's interface and lets clients choose and access the interfaces they need. Cool - this is what I need to extend the contract of my generic DAO! But hold on!! Look at the very first line of the pattern's intent, as described by Erich Gamma .. "Anticipate that an object's interface needs to be extended in the future.". What this means is that you have to design your abstraction anticipating a priori that it may be extended. So if the necessity of providing extensions is an afterthought (which it is, in my case), then it doesn't fit the bill.


Extension of the Generic DAO Contract

One of the nifty features of EJB QL is that the user can specify a constructor within the SELECT clause that can instantiate non-entity POJOs with the set of specified columns as constructor arguments. Let me illustrate through an example shamelessly copied from Richard Monson-Haefel and Bill Burke's Enterprise JavaBeans book.

public class Name {
  private String first;
  private String last;

  public Name(String first, String last) {
    this.first = first;
    this.last = last;
  }

  public String getFirst() { return first; }
  public String getLast() { return last; }
}


Note that Name is NOT an entity. Using EJB QL, we can actually write a query which will return a list of Name instances instead of a list of Strings.

SELECT new com.x.y.Name(c.firstName, c.lastName) FROM Customer c

I wanted to provide a contract which can return a collection of objects belonging to a different class than the entity itself:

public <Context, Ret> List<Ret> read(Context ctx,
      String queryString,
      Object[] params);


And I wanted to have this contract specifically for the JPA implementation.

Dynamic Extension Objects using Inter-type Declarations in Aspects

Inter-type declarations in aspects provide a convenient way to declare additional methods or fields on behalf of a type. Since I have been using Spring 2.0 for the JPA implementation of the DAO, I went for Spring Introductions, which allow me to introduce new interfaces (and corresponding implementations) to any proxied object.

Quick on the heels, I came up with the following contract which will act as a mixin to the DAO layer:

public interface IJPAExtension {
  public <Context, Ret> List<Ret> read(Context ctx,
      String queryString,
      Object[] params);
}


and a default implementation ..

public class JPAExtension<T extends DTOBase> implements IJPAExtension {
  public <Context, Ret> List<Ret> read(Context ctx,
      String queryString,
      Object[] params) {
    // ...
  }
}


And .. the Weaving in Spring 2.0

The client who wishes to use the new interface needs to define the extension aspect just to introduce the mixin - the rest is AOP magic that weaves together all the necessary pieces and makes everybody happy.

@Aspect
public class DAOExtension {

  @DeclareParents(value="com.x.y.dao.provider.spring.jpa.dao.*+",
    defaultImpl=JPAExtension.class)
  private IJPAExtension mixin;
}


The original contracts remain unpolluted and the other implementations do not bloat, yet we have successfully introduced new functionality into the JPA implementation, without the client committing to any implementation class (we all know why we program to interfaces - right?). The client code can write the following:

IJPAExtension mixin = (IJPAExtension)restaurantDao;
List<RName> res =
    mixin.read(factory,
      "select new com.x.y.dao.provider.spring.jpa.RName(r.id, r.name) from Restaurant r where r.name like ?1",
      params);


Inter-type declarations are not a very frequently used feature of aspect oriented programming, but they are a useful vehicle for implementing many patterns in a completely non-invasive way. I found them very useful while extending my JPA based DAO implementations without adding to the base contracts of the bridge.

Tuesday, August 08, 2006

XML Integration in Java and Scala

During my trip to JavaOne 2006, I missed the session where Mark Reinhold discussed Java's plans for integrating XML into the Java programming language. There have been lots of discussions in various forums about the possibility of this happening in Dolphin - Kirill Grouchnikov has blogged his thoughts on what he would like to see as part of native XML support in Java. The community, as usual, is divided on the subject - many people feel that integrating XML into the Java language would be a serious compromise on the simplicity of the language. Look at the comments section of this posting on JavaLobby. This feeling of compromise has gained more momentum in view of the upcoming integration of scripting languages like JavaScript (through Rhino) with Java (JSR 223).

Anyway, I think Java will have a first cut of XML integration in Dolphin. In the JavaOne session, Mark discussed some of the options they plan to offer in java.lang.XML, so as to make XML processing simpler in Java and liberate programmers from the hell of dealing with the DOM APIs. Microsoft has already published its implementation of XML integration into C# and VB in the form of XLinq. I tried my hands at it using the June CTP and found it to be quite elegant. In fact the whole thing looks seamless with the entire LINQ family and Microsoft's plan of fixing the infamous ROX triangle. Java has been lagging behind in this respect and is trying to make a late attempt to catch up - though expect nothing till Dolphin! I appreciate that, considering the millions-strong user base that Java has today and its commitments to the community as the default choice for the enterprise platform (unless you are Bruce Tate, of course!), it is not easy to veto a change in the language. Still, better late than never.

<scala/xml>

A few days ago, I was browsing through some of Mark's slides from JavaOne, when I thought it would be a worthwhile exercise to find out how these could be implemented in Scala, which in fact offers the most complete XML integration as part of the language. I have repeatedly expressed my views about Scala in my blog (see here) and how positive I feel about saying Hello Scala. XML integration in Scala is no exception - in fact the nicest part of this integration is that the designers did not have to do much extra to push XML as a first class citizen into the Scala world. The elements of Scala that make it a nice host for XML integration are some of the core features of the language itself:

  • Scala, being a functional language, supports higher order functions, which provide a natural medium to handle recursive XML trees

  • Scala supports pattern matching, which can model algebraic data types and be easily specialized for XML data

  • For-comprehensions in Scala act as a convenient front end syntax for queries


Go through this Burak Emir paper for more on how XML integration in Scala offers scalable abstractions for service based architectures.

For brevity, I am not repeating the snippets as Mark presented them. They can be found on the JavaOne site for session TS-3441. I will instead try my hand at some of the equivalent Scala manifestations.

Disclaimer: I am no expert in Scala, hence any improvements / suggestions to make the following more Scala-ish are very much welcome. Also, I tested this code with the recent drop of 2.1.7-patch8283.

Construction : XML Literals

This example adds more literals to an existing XML block. Here's the corresponding snippet in Scala:


val mustang =
  <feature>
    <id>29</id>
    <name>Method to find free disk space</name>
    <engineer>iris.garcia</engineer>
    <state>approved</state>
  </feature>;

def addReviewer(feature: Node, user: String, time: String): Node =
  feature match {
    case <feature>{ cs @ _* }</feature> =>
      <feature>{ cs }<reviewed>
      <who>{ user }</who>
      <when>{ time }</when>
      </reviewed></feature>
  }

Console.println(addReviewer(mustang,
        "graham.hamilton",
        "2004-11-07T13:44:25.000-08:00"));


The highlights of the above implementation are the brevity of the language, the mixing of code and XML data in the method addReviewer(), and the use of regular-expression-style sequence patterns in the match, which can be useful for non-XML data as well. If you wish, you can throw in arbitrary Scala expressions within the XML data as well.

Queries, Collections, Generics, Paths

This snippet demonstrates the capabilities of XML queries in various manifestations, including XPath style queries. One major difference I noticed is that the Scala representation of runtime XML is immutable, while the assumption in Mark's example was that java.lang.XML is mutable. I am not sure what the final Java offering will be, but immutable data structures have their own pros, and I guess the decision to make the runtime XML representation immutable was a very well thought out one by the Scala designers. This adds a little verbosity to the Scala code below compared to its Java counterpart.

val mustangFeatures =
  <feature-list>
    <release>Mustang</release>
    <feature>
      <id>29</id>
      <name>Method to find free disk space</name>
      <engineer>iris.garcia</engineer>
      <state>approved</state>
    </feature>
    <feature>
      <id>201</id>
      <name>Improve painting (fix gray boxes)</name>
      <engineer>scott.violet</engineer>
      <state>approved</state>
    </feature>
    <feature>
      <id>42</id>
      <name>Zombie references</name>
      <engineer>mark.reinhold</engineer>
      <state>rejected</state>
    </feature>
  </feature-list>;

def isOpen(ft: Node): Boolean =
  !(ft \ "state").text.equals("approved")

def rejectOpen(doc: Node): Node = {

  def rejectOpenFeatures(features: Iterator[Node]): List[Node] = {
    for(val ft <- features) yield ft match {

      case x @ <feature>{ f @ _ * }</feature> if isOpen(x.elements.next) =>
        <feature>
        <id>{(x.elements.next \ "id").text}</id>
        <name>{(x.elements.next \ "name").text}</name>
        <engineer>{(x.elements.next \ "engineer").text}</engineer>
        <state>rejected</state>
      </feature>

      case _ => ft
    }
  }.toList;

  doc match {
    case <feature-list>{ fts @ _ * }</feature-list> =>
      <feature-list>{ rejectOpenFeatures(fts.elements) }</feature-list>
  }
}

val pp = new PrettyPrinter( 80, 5 );
Console.println(pp.format(rejectOpen(mustangFeatures)));


The observations on the XML querying support in Scala are :

  • Use of for-comprehensions (in rejectOpenFeatures()) adds to the brevity and clarity of the code

  • XPath methods (in isOpen() .. remember, in Scala ft \ "state" is just the method call ft.\("state")) allow an XQuery style of programming.


Another example, which combines both of the above features into a concise gem, is the following from another Burak Emir presentation:

for (val z <- doc("books.xml")\"bookstore"\"book";
    z \ "price" > 30)
yield z \ "title"


Streaming In and Out

Mark showed an example of formatting XML output after summarizing all approved features from the input XML. We can have a similar implementation in Scala as follows :

def findApproved(doc: Node): Node = {

  def findApprovedFeatures(features: Iterator[Node]): List[Node] = {
    for(val ft <- features; (ft \ "state").text.equals("approved"))
      yield ft
    }.toList;

  doc match {
    case <feature-list>{ fts @ _ * }</feature-list> =>
      <feature-list>{ findApprovedFeatures(fts.elements) }</feature-list>
  }
}

Console.println(new PrettyPrinter(80, 5)
      .format(findApproved(XML.loadFile("mustang.xml"))));


Along with formatted output, the snippet above also demonstrates loading of XML from a stream.


On the whole, Scala's support for XML processing is very rich, more so because of the support it gets from the underlying features of the language. Scala offers powerful abstractions for transformations (scala.xml.transform), parsing, validation, handling XML expressions, XPath projections, XSLT style transformations and XQuery style querying. The Scala XML library is fairly comprehensive - most importantly, it is alive and kicking. Till you have the same support in Java (Dolphin is still at least a year away), enjoy <scala/xml>.