I have heard this common question from attendees at the recently concluded MindTree (http://www.mindtree.com) Osmosis tech fest and at last year's Open Group EA conference, where Kamran (MindTree CTO) presented case studies on SOA implementation:

I know the “why” of SOA but not sure of the “how”. Is there a methodology?

The often misleading answer to this question is to tie SOA adoption to a complete Enterprise Architecture (EA) definition exercise. This leads to the following impressions/myths:

  • SOA is for large organizations or programmes
  • SOA adoption must be a big-bang approach

The truth is: SOA can be adopted just as easily for mid-sized opportunities as for large engagements. The difference lies in the methodology used.

At MindTree, we have created an approach to SOA adoption. Inputs came from a couple of true blue-blooded SOA implementations in the travel industry – one for a large content aggregator that owns a couple of Global Distribution Systems (GDS) for the airline industry, much like the Sabre and Amadeus of the world; the other for the premier trade organization of the airline travel industry.

Both these implementations were multi-million dollar initiatives – don't be carried away by the size and led into one of the myths described above!

In both initiatives, SOA adoption was incremental – this is the beauty of the model adopted.

Let's look at an outline of SOA adoption. It would comprise:

  • Establish drivers for SOA – defines the case for using SOA
  • Perform portfolio analysis – establishes the choice of technology for SOA implementation and influences build, wrap or retire decisions on business processes.
  • Define the architectural goals and the scope of SOA deployment.
  • Define technology architecture and choice of tools & frameworks.
  • Define roadmap for the SOA components
  • Plan for implementation
  • Define implementation and reuse governance

Those who have looked at EA frameworks will sense a resemblance to some of the steps defined above. It is a valid observation and in fact leads us to one of the two SOA adoption models, i.e. a combined EA and SOA model.

On the other hand, some of the steps might appear too expensive for a mid-sized organization. This leads to a variant of the model, termed the Basic SOA model.

An outline of the basic model would look like:

  • Check for the existence of suitably articulated drivers for SOA adoption. Does not include the task of identifying the drivers.
  • Define the architectural goals and the scope of SOA deployment – set the expectation that scope is limited to discovery, build and deployment of services only.
  • Define technology and choice of tools & frameworks.
  • Plan for implementation – of the services and applications on top

How does an organization decide to go with one of the two models? This can be partly answered by the scope of SOA adoption. The scope can be quantified by the maturity of the SOA deployment and the services therein. Attention to progressive SOA deployment ensures that an enterprise can start with one model and move on to a higher and better one over a period of time.

The level of SOA deployment is listed below, in increasing order of sophistication:

  • Level 0 – Identify data and behavior to be deployed as services.
  • Level 1 – Design, build and expose services.
  • Level 2 – Support multiple channels and clients in service invocation.
  • Level 3 – Publish, discover and compose services. Also called Orchestration.
  • Level 4 – Secure services & perform metering to determine usage.
  • Level 5 – Operate and manage the services and associated infrastructure.
  • Level 6 – Ensure reuse and capture metrics on benefits accrued.

The levels are achieved iteratively in most real-world deployments, as one rarely identifies all the services before attempting to design, build and deploy the first set of services for use by client applications.

More on the methodology and approach along with consulting can be provided on request.


I have seen enough content on the net to create a web application that supports Asian-language input and display.

You can find a good article here : http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/

On the other hand, I could not find a comprehensive example of achieving the same using rich clients such as a Swing application.

Some help from the internet and some effort later, I succeeded in writing a Swing application that displays Japanese characters read from a .properties file. Some learnings from this exercise:

  • First and foremost, you need to install fonts that support the language you are trying to use. E.g. you need a Japanese font if you intend to run the application on an English version of Windows. You can download one (MS Mincho) from: http://www.themeworld.com/cgi-bin/redir.pl/fonts/msmincho.zip
  • On Windows, this needs to be installed under the /windows/fonts folder.
  • The font is the only thing that needs to be installed on your client machine to run the application. Using the font is a lot easier in Swing than in AWT. For AWT components, i.e. those with a native peer, you need to customize the settings of the JRE, i.e. modify font.properties under /jre/lib to include the font you have installed under each font type. I have intentionally not provided the details, as that applies to AWT and not Swing, the subject of this post.
  • Now in your Swing application, you just need to set the font of the Swing component before setting its text.
  • Now for the source of the text. It can come from a text file such as a .properties file. Note, though, that the file must contain the content in ASCII form (with Unicode escape sequences) and not in the native encoding. You may use the native2ascii tool that comes with the Java 2 SDK to perform the conversion.
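A minimal sketch of the steps above is shown below. The bundle base name matches the attached properties file, but the key name "greeting" and the overall layout are my assumptions; the attached Editor.java is the complete version.

```java
import java.awt.Font;
import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;
import javax.swing.JFrame;
import javax.swing.JLabel;

class JapaneseLabelDemo {

    // Loads the localized text. The bundle base name matches the attached
    // messagesbundle_ja_jp.properties; the key "greeting" is an assumption.
    // Falls back to an inline native2ascii-style escape if no bundle is found.
    static String loadMessage(Locale locale) {
        try {
            return ResourceBundle.getBundle("messagesbundle", locale)
                                 .getString("greeting");
        } catch (MissingResourceException e) {
            return "\u3053\u3093\u306b\u3061\u306f"; // escaped Japanese text
        }
    }

    public static void main(String[] args) {
        JLabel label = new JLabel();
        // Set a font that has Japanese glyphs BEFORE setting the text.
        label.setFont(new Font("MS Mincho", Font.PLAIN, 16));
        label.setText(loadMessage(Locale.JAPAN));

        JFrame frame = new JFrame("Japanese Swing demo");
        frame.getContentPane().add(label);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```

The key point is the ordering: the component's font is set before its text, so the Japanese glyphs render from the installed font.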

I have attached a sample Swing application that displays Japanese text from a .properties file. It is available as Editor.doc (rename to Editor.java)

You also need the .properties file. It is available as messagesbundle_ja_jp.doc (rename to messagesbundle_ja_jp.properties)

Copy both files to a single folder. You can then run the application as:

javac -classpath . Editor.java

java -classpath . Editor ja JP

You will then be able to see the application as shown below:

Japanese Swing app

Of late I have been quite intrigued by some analysts' reports on IT becoming a commodity service. By being a commodity, it no longer appears intellectual or elite.
The driving forces – cost, expected higher productivity, competition, etc.

We view and judge programming languages and platforms by the amount of flexibility they provide. This explains the umpteen configuration files our applications have these days. After all, we have been taught to “externalize” as much as possible out of the application code – the reason: maintainability and flexibility.

But haven't we taken it a bit too far? How often do table and column names change, e.g.? Can we instead agree on conventions for a few of these? Why conventions? Because they open many exciting possibilities around creating or using frameworks to do a lot of work for you. A great example is the Ruby on Rails (RoR) platform. I am not endorsing RoR here; I do, however, like its idea of being able to do so much behind the scenes because the application artifacts – tables, classes – follow convention. Come to think of it – we do enforce conventions, don't we?

So why not create the conventions in such a way that they benefit us, the developers, and not just some standards watchdog? Eventually everybody benefits – the developer writes less code, the project is done cheaper, developers are seen as more productive, costs come down, etc.

Get what I am driving at? I seriously feel that the comments in the early part of this post will become a reality. We just need to be able to re-define the way we do things – one such change is adopting “Convention over Configuration” and building intelligent frameworks on top. I see RoR doing that.
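As a toy illustration of what conventions can buy – this is not how RoR does it internally, and the class and helper names below are invented – a few lines of Java can derive the table name and a SELECT statement purely from a class that follows CamelCase naming:

```java
import java.lang.reflect.Field;

// Hypothetical sketch: derive table and column names from a class by
// convention (CamelCase -> snake_case) instead of configuring them in
// an external file.
class ConventionMapper {

    // "OrderItem" -> "order_item"
    static String toSnakeCase(String camel) {
        return camel.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    static String tableFor(Class<?> entity) {
        return toSnakeCase(entity.getSimpleName());
    }

    // Builds a SELECT over all declared fields, again purely by convention.
    static String selectAll(Class<?> entity) {
        StringBuilder cols = new StringBuilder();
        for (Field f : entity.getDeclaredFields()) {
            if (cols.length() > 0) cols.append(", ");
            cols.append(toSnakeCase(f.getName()));
        }
        return "SELECT " + cols + " FROM " + tableFor(entity);
    }
}

// An entity that follows the convention; no mapping file needed.
class OrderItem {
    long id;
    String productCode;
    int quantity;
}
```

No configuration file anywhere – the framework does the work because the artifacts follow the convention.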

Got this cool idea of ClustrMaps from another blog. Going to update mine to include one as well. It truly amazes me to see what ideas people come up with the world over.
Makes me wonder about the revenue model for the sustenance of these companies……

Like any developer, I used to trust only the code I wrote myself. All that has changed with OpenSource and its widespread use.
However I am always on the lookout to write something better than what is available in OpenSource – personally, a means to justify the act of writing an application 🙂
The growing volume of data in an enterprise can be a liability or an asset, depending on how you see it. Access to this data converts it to useful information.
How does one access information easily? Do we really care about the millions of hits that Google returns? I don't think we go beyond the first couple of pages.
I define “Effective search” to address the above issue – I need to get to the information of interest fast, period.
OpenSource indexing and search frameworks are far behind commercial ones like the Google Search Appliance, Verity, or other search engines.
Looking around, Lucene turned out to be a good fit for my index. The catch is I still required parsers, readers and data sources to make it complete.
This led me to write Ferret. It does not re-invent the wheel, wherever possible.
The good news is that it can index file systems & web sites (secure intranets and public sites). The best part is that it is highly customizable – I can add a data source to index databases, e.g., or add parsers for new file types.
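The pluggable shape described above might be sketched as below. These interfaces and names are invented for illustration – they are not Ferret's actual API – and the index itself is a toy stand-in for what Lucene provides:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Invented plug-in points: where the content comes from, and how raw
// content becomes plain text. New file types = new Parser; databases,
// file systems, web sites = new DataSource implementations.
interface DataSource {
    Iterable<String> documents(); // yields raw document content
}

interface Parser {
    String toPlainText(String raw); // strips markup, extracts text
}

class SimpleIndexer {
    // term -> ids of the documents containing it (a toy inverted index)
    private final Map<String, Set<Integer>> postings = new HashMap<>();

    void index(DataSource source, Parser parser) {
        int docId = 0;
        for (String raw : source.documents()) {
            for (String term : parser.toPlainText(raw).toLowerCase().split("\\W+")) {
                if (!term.isEmpty()) {
                    postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
                }
            }
            docId++;
        }
    }

    Set<Integer> search(String term) {
        return postings.getOrDefault(term.toLowerCase(), Collections.emptySet());
    }
}
```

The point is the separation: the indexer never knows whether the content came from a web site, a file system or a database, which is what makes adding a new data source or parser cheap.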
The recent announcement of the availability of Omnifind led me to evaluate it and, of course, compare it with Ferret. After some extensive study of its features, I am yet to find out whether I can recommend it to a client when I cannot customize many aspects except maybe the look & feel. It also beats me why I cannot schedule an indexing operation or at least get an API to invoke the indexer! Omnifind suits the “indexing for dummies” needs but not any active deployment within a corporate portal, e.g.
For now, Ferret does all this and has found a client 🙂

I ask – how many times have I written a login screen for an application? I have lost count. I did it when I started programming with tools that support GUIs. The list includes Foxbase, VB, Web Forms, later Java Swing, and now portal pages.
The options to store the credentials remain pretty much the same – RDBMS and a Directory server for more sophisticated implementations.

Frameworks like JAAS provide the ability to plug in implementations of login modules. Is it enough just to authenticate the user? Applications of course want to retrieve details of the logged-in user, including the standard identifying information and minimal data that can relate the user to a business entity – one of importance in your domain model.

This trail is a thought process and implementation approach for an Identity management framework that may be extended to support authorization.

Continuing with the idea of creating an Identity Management system, the first thing that came to my mind was identifying the key data elements in the system. I liked the Unix idea for this and came up with the following split:

Authentication elements
Authorization elements

Obviously the two need to tie together. The Authorization elements have behavior modelled in as well.

Authentication is fairly easy – you verify a given set of credentials against a user database. Don't be fooled into thinking database refers to an RDBMS only; it can be a Directory server or another user data store.

In the process, you take care of protocol, encryption, etc. to validate against the store. Plenty of frameworks and packages exist that allow you to easily integrate authentication.
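A minimal sketch of verifying credentials against a store follows. The in-memory map stands in for the RDBMS or Directory server, and a real deployment would also salt the hashes and handle the protocol and encryption concerns mentioned above:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Toy authenticator: the map stands in for an RDBMS or Directory server.
class SimpleAuthenticator {

    private final Map<String, String> userStore = new HashMap<>(); // user -> hash

    // Hash the password so the store never holds it in the clear.
    static String hash(String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(password.getBytes(StandardCharsets.UTF_8))) {
                sb.append(String.format("%02x", b & 0xff));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    void register(String user, String password) {
        userStore.put(user, hash(password));
    }

    boolean authenticate(String user, String password) {
        String stored = userStore.get(user);
        return stored != null && stored.equals(hash(password));
    }
}
```

Swapping the map for an LDAP lookup or a SQL query changes the store, not the shape of the check – which is exactly why pluggable login modules work.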

Authorization is a challenge, as it may be applied to a varied set of Objects that need to be secured, under varied rules (expressions) and for various users.

Now coming to the Authorization part.
Authorization is all about the following:

Object, Subject, Permission

Here Object can be any entity in your security framework. Take care to prevent proliferation of such object definitions – e.g. links to perform CRUD on an entity are NOT separate candidates for Object instances. They are in fact manifestations of the permissions on an Entity.

Subject refers to the Role that the currently logged-in Principal is playing. The Principal is nothing but the logged-in user whose credentials you verified in the Authentication step. The Principal also contains the Subject reference/ID, i.e. the role code, which is retrieved post-login.

Permission specifies the permitted actions on an Object for a Subject. This completes the picture.

The last step would be to associate Principal(s) to Subject(s) aka User Provisioning.

Your security framework is complete in terms of data setup, i.e. you are able to authenticate against a store, maintain the association between a user and his Subject/role, and also the mappings between Object, Subject and granted Permission(s).

At runtime, you would fetch the Subject instances provisioned for the logged-in Principal and retrieve all granted Permission instances for all Object types. Your framework then identifies the most relevant Subject (and associated Permission instances) for the present authorization check and simply determines whether the “intent”, i.e. the attempted action, is “implied” by any of the granted permissions – e.g. if the “intent” is “read”, then a granted permission for “update” implies a “read”.
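The runtime check described above can be sketched as follows. The class and method names are illustrative, not from any particular framework; the "implies" rule is the same one from the read/update example:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Object/Subject/Permission check.
enum Action {
    READ, UPDATE, DELETE;

    // "update" implies "read", mirroring the rule described in the post
    boolean implies(Action intent) {
        return this == intent || (this == UPDATE && intent == READ);
    }
}

// One granted Permission: a role (Subject) may act on an object type.
class Grant {
    final String objectType;
    final String role;
    final Action action;

    Grant(String objectType, String role, Action action) {
        this.objectType = objectType;
        this.role = role;
        this.action = action;
    }
}

class Authorizer {
    private final List<Grant> grants = new ArrayList<>();

    void grant(Grant g) {
        grants.add(g);
    }

    // true if any grant for this role and object type implies the intent
    boolean isAllowed(String role, String objectType, Action intent) {
        for (Grant g : grants) {
            if (g.role.equals(role)
                    && g.objectType.equals(objectType)
                    && g.action.implies(intent)) {
                return true;
            }
        }
        return false;
    }
}
```

Note that the check never enumerates CRUD links as separate Objects – only the object type, the role and the implied action matter, which keeps the Object definitions from proliferating.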

Simple huh?

The trail is born

January 30, 2007

Well I had to start with something to get my blog up….