Archive for September, 2004

Intelligent Intermediaries

An intelligent intermediary is an additional infrastructure layer designed to address location transparency and failover, as well as concerns such as enforcing security policies, monitoring service-level metrics, and implementing logging and auditing requirements.

By routing web services invocation requests through the intermediary instead of directly to the service endpoint, much of the burden of implementing consistent location transparency and failover can be shifted to the intermediary and away from the providers or consumers of the web services themselves.

The intelligent intermediary must either maintain its own registry of web services or connect to an existing UDDI registry. Rather than require web services clients to make a series of inquiry transactions prior to invoking the web service, the intermediary uses either special parameters added to the endpoint URL or rules that examine the content of the request itself to locate an instance of the desired service.

The intermediary implements some type of “liveness checking” approach, perhaps using “synthetic transactions”, so that only working service instances will be invoked. If a failure is detected on a particular service invocation, the intermediary attempts to locate a backup service and handles rerouting the invocation to the working node without reporting an error to the web service client.
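The failover behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the API of any actual product: the `is_alive` probe (the "liveness check") and `invoke` function are stand-ins supplied by the caller.

```python
def route_request(request, instances, is_alive, invoke):
    """Route a request to the first working instance of a service.

    instances -- known addresses for the service, in preference order
    is_alive  -- liveness probe, e.g. one that runs a synthetic transaction
    invoke    -- function that actually calls the service at an address
    """
    last_error = None
    for instance in instances:
        if not is_alive(instance):       # skip instances that fail the liveness check
            continue
        try:
            return invoke(instance, request)
        except ConnectionError as exc:   # failure detected on this invocation:
            last_error = exc             # fall through and try a backup instance
    # only now does the client ever see an error
    raise RuntimeError("no working instance found") from last_error
```

The key point is that the retry loop lives in the intermediary, so the rerouting to a backup node is invisible to the web service client.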

The ability to apply routing rules that inspect incoming messages and reroute them as needed can also be used to implement service versioning. Service requests containing certain XML elements or explicit version metadata can be routed to older versions automatically, even while new versions of the service are deployed to handle requests from clients that need them.
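A content-based versioning rule of this kind might look like the following sketch. The element name, `version` attribute, and endpoint URLs are invented for illustration; a real intermediary would express such rules in its own configuration.

```python
import xml.etree.ElementTree as ET

def select_endpoint(request_xml, version_routes, default_endpoint):
    """Route a request carrying legacy version metadata to an older deployment."""
    root = ET.fromstring(request_xml)
    version = root.get("version")          # explicit version metadata on the root element
    return version_routes.get(version, default_endpoint)

# Hypothetical routing table: requests marked version 1.0 go to the old service.
version_routes = {"1.0": "http://services.example.com/order/v1"}
```

Clients that stamp their requests with the old version continue to reach the old deployment; everything else flows to the default (current) endpoint.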

The intelligent intermediary approach still depends on being able to locate a working intermediary in the same way that using a UDDI registry depends on being able to locate a working registry server. This potential point of failure can be addressed by placing a hardware-based load balancer or content switch in front of the synchronized intermediaries, leveraging the network hardware's ability to detect the failure of a server and automatically reroute traffic to a working host.

Another approach would be to load the addresses of known intermediary servers from properties files at startup and build helper services to perform liveness checking of these intermediaries on a scheduled basis and return addresses of working intermediaries to web service clients as needed. The latter approach still places too much burden on individual clients, in my opinion.
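The properties-file approach could be sketched as follows. The file format and the shape of the liveness probe are my own assumptions for illustration, not part of any product.

```python
def load_intermediaries(properties_text):
    """Parse lines like 'intermediary.1=http://host:port' from a properties file."""
    addresses = []
    for line in properties_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            _, _, value = line.partition("=")
            addresses.append(value.strip())
    return addresses

def working_intermediaries(addresses, is_alive):
    """Helper run on a schedule: return only intermediaries that pass a liveness probe."""
    return [address for address in addresses if is_alive(address)]
```

Even with helpers like these, each client still has to obtain and refresh the list, which is why this approach places too much burden on individual clients.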

Several vendors have begun offering intelligent intermediaries, sometimes called proxies, web service routers, nodes, or brokers. Some of these products have been built from the ground up as web services-focused infrastructure, while others have been retooled from existing message-bus or systems management products.

webMethods acquired its intelligent intermediary product when it bought The Mind Electric last October. TME built Fabric from the ground up to address the service-oriented architecture needs of organizations moving beyond their initial early-steps web services efforts.

As I showed in an earlier article, the combination of webMethods Fabric 1.0.2 and webMethods Developer with Feature Pack 1 can be used to provide location transparency and automatic failover to consumers of web services hosted on webMethods Integration Server. Because Fabric automatically populates a UDDI v2 registry when services are registered in the Fabric, clients that prefer the UDDI registry approach to location transparency and failover can still use it, while other clients, such as portlets created using webMethods Portal, can route requests through Fabric instead.

In summary, intelligent intermediaries such as webMethods Fabric should provide significant advantages to organizations that need to implement location transparency and automatic failover for large numbers of web services over alternative point-to-point or UDDI registry-based approaches.

Mark Carlson
Conneva, Inc.
mcarlson {AT}


For one of my clients, I’m currently considering how best to introduce location transparency to web services implemented using Integration Server and called from a variety of clients including webMethods Portal, Modeler and other IS servers.


In many organizations’ early web services efforts, a point-to-point approach is used to invoke application functionality exposed as web services. In this approach, the endpoint URL address is hardcoded into the client code used to invoke the service. If the server hosting the web service is unavailable, the call to the service fails. Code to retry the call after some timeout period can be added, but helps only in best-case scenarios in which the hosting server becomes available again after a very short time.

If an application needed to invoke only a very small number of web services, and the addresses of backup copies of those services were known in advance, properties files containing the addresses of the backup servers could be read at startup and used to locate a working copy in the event of the failure of the primary web services host. This approach becomes very difficult to maintain when the invoking application needs more than just a handful of services, or when some services are “mirrored” and others are not.
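As a sketch, here is what each client ends up carrying under this approach. The service names, URLs, and `invoke` function are invented for illustration, as stand-ins for per-service backup lists read from a properties file.

```python
def call_with_failover(service, request, backups, invoke):
    """Try the primary address for a service, then each known backup in turn."""
    for url in backups[service]:          # first entry is the primary host
        try:
            return invoke(url, request)
        except ConnectionError:
            continue                      # this host is down; try the next one
    raise RuntimeError("all known hosts for %s failed" % service)

# Addresses that would be read from a properties file at startup (hypothetical).
backups = {
    "getOrderStatus": [
        "http://primary.example.com/ws/getOrderStatus",
        "http://backup.example.com/ws/getOrderStatus",
    ],
}
```

Every client application needs its own copy of this logic and an up-to-date address list for every service it calls, which is exactly why the approach breaks down beyond a handful of services.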

Registry-based Discovery and Failover

A more robust approach seems to be to use a web services registry based on UDDI. Version 3.0.1 of this specification was released in October 2003 and now supports multiple (redundant) registries, user-friendly publisher-assigned service keys, and other needed features. New features of UDDI v3 are listed here.

With UDDI, web services are published to the registry either manually, using a vendor-provided user interface, or programmatically, using API commands. Before making the call to invoke a service, the client first sends one or more inquiry requests, in the form of web service calls, to the UDDI server. The UDDI server returns metadata about the services that match the inquiry criteria, along with a “bindingTemplate” that provides the technical information applications need to bind to and interact with the web service.

The UDDI spec offers the “invocation” usage pattern, in which the application locates the service to be invoked by querying the UDDI server and then caches the bindingTemplate information. If a subsequent invocation fails, the client re-queries the UDDI server to obtain fresh bindingTemplate information and re-attempts the invocation. If the invocation is successful, the cached bindingTemplate information is updated. Of course, if the UDDI server itself is down, additional effort will be required to locate backup registry servers.
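The invocation pattern described above can be sketched as follows. The `lookup_binding` and `invoke` functions are stand-ins for the real UDDI inquiry API and the SOAP call to the service; only the caching and re-query logic is the point here.

```python
class InvocationPattern:
    """Sketch of the UDDI "invocation" usage pattern: cache the binding on
    first lookup, and re-query the registry only when an invocation fails."""

    def __init__(self, lookup_binding, invoke):
        self._lookup = lookup_binding      # queries the UDDI registry for a bindingTemplate
        self._invoke = invoke              # calls the service at the binding's address
        self._cache = {}                   # service key -> cached bindingTemplate

    def call(self, service_key, request):
        binding = self._cache.get(service_key)
        if binding is None:
            binding = self._cache[service_key] = self._lookup(service_key)
        try:
            return self._invoke(binding, request)
        except ConnectionError:
            # The cached binding may be stale: fetch fresh bindingTemplate
            # information from the registry and re-attempt the invocation once.
            binding = self._cache[service_key] = self._lookup(service_key)
            return self._invoke(binding, request)
```

Note that all of this logic lives in the client, which is precisely the burden discussed below.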

The problem with this approach, in my opinion, is that the burden of discovering web services binding information and implementing a registry-based failover mechanism is placed on each web services client.

Organizations with more than a handful of web services clients will struggle to implement these discovery and failover techniques consistently across multiple development teams, physical locations and programming environments.

Another complicating factor for large organizations will be enforcing consistent publication of web service information into the registry using a taxonomy that has been appropriately designed for that specific organization or industry.

So, if point-to-point approaches are shortsighted and fragile, and UDDI-based approaches are complicated and place undue burden on users of web services, what other alternatives are there and how do they compare?

More on that in an upcoming post.

Mark Carlson
Conneva, Inc.
mcarlson {AT}
