Archive for the “Architecture” Category

An old friend from college forwarded a job opportunity to me for a senior level position at a major US company.

One of the job requirements for this position was for specific hands-on experience with open source projects. This company is focused on using open source to create their next generation SOA environment.

This company is part of an industry that is currently struggling, and its stated goal for open source adoption is cost savings.

Maybe the driver for adoption of open source integration frameworks like Synapse will not be application vendor support, but companies struggling to stay technically current in the midst of a trying economic downturn.


Nathan Lee is on a roll. In today’s post he lists a few tools that are in his “toolkit” on every webMethods Integration Server project.

When building mechanisms with webMethods, I tend to find that the following “toolkit” of things is quite useful:

  • an “invokeService” Java service (that is: a service that takes in the pipeline and the name of the service you want to invoke and makes a call to invoke it; a minimal sketch appears after this list)
  • specifications (not used that much in normal day-to-day work, but the equivalent of a Java interface in webMethods: they define the inputs/outputs of a service so that you can provide an implementation of a certain operation)
  • properties management services (to allow you to have configuration files in which changing a few properties alters the way the mechanism works)
  • wrapper documents (they allow you to wrap up your payload and add metadata to the document)
  • knowledge of the IData and IDataCursor classes in Java services (in order to manipulate the pipeline directly, moving beyond what is possible using flow alone)
  • knowledge of the ServerAPI and Service utility objects in the webMethods API (these give you access to some of the extra information you need about what’s going on during execution)
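
For the first item, here is a minimal sketch of what such an “invokeService” Java service might look like, assuming the standard Integration Server Java API (com.wm.data and com.wm.app.b2b.server). The pipeline key serviceName and the helper class name are illustrative rather than any published contract, and merging the outputs back into the caller’s pipeline is one reasonable choice, not the only one:

    import com.wm.data.IData;
    import com.wm.data.IDataCursor;
    import com.wm.data.IDataUtil;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public final class InvokeServiceUtil {

        // Reads "serviceName" (e.g. "my.folder:myService") from the pipeline
        // and invokes that service, passing the current pipeline through.
        public static void invokeService(IData pipeline) throws ServiceException {
            IDataCursor cursor = pipeline.getCursor();
            String serviceName = IDataUtil.getString(cursor, "serviceName");
            cursor.destroy();

            if (serviceName == null || serviceName.indexOf(':') < 0) {
                throw new ServiceException("serviceName must look like folder.subfolder:service");
            }
            String ifc = serviceName.substring(0, serviceName.indexOf(':'));
            String svc = serviceName.substring(serviceName.indexOf(':') + 1);
            try {
                // doInvoke dispatches through the normal IS service layer, so the
                // target service's ACLs, auditing and statistics still apply.
                IData result = Service.doInvoke(ifc, svc, pipeline);
                // Copy the invoked service's outputs back into the caller's pipeline.
                IDataUtil.merge(result, pipeline);
            } catch (Exception e) {
                throw new ServiceException(e);
            }
        }
    }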

So what tools are in your toolkit? Add your comments on Nathan’s blog.


Nathan Lee is venturing abroad (from Sydney) and digging into the early stages of a new project. He shares some thoughts on what it means to “engine-ify” an integration project here.


The primary purpose of an Enterprise Services Bus or ESB is to implement a layer of enterprise-class infrastructure that provides a standard way for an organization to expose services so that they can be discovered and executed by authorized users.

The services exposed by an ESB can be created by web-service enabling portions of existing applications, leveraging new web services APIs provided by existing packaged apps, or creating new web services from scratch.

The combination of development language, hardware platform or package vendor on which a service depends is not important to the ESB, other than that the combination must provide a satisfactory level of service to consumers.

One approach for implementing ESB infrastructure is to create an “intelligent intermediary” (or fabric of intermediaries) through which all requests for services are routed.

This approach enables consistent implementation of the organization’s security framework, provides a mechanism to seamlessly fail over to a new instance of a requested service and supports horizontal scalability by routing requests to “farms” or “grids” of enterprise services.
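
To make the pattern concrete, here is a minimal modern-Java sketch of such an intermediary, not a description of any particular ESB product; the endpoint addresses, port and header name are invented for the example:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.atomic.AtomicInteger;

    public class MiniIntermediary {
        // Hypothetical "farm" of identical service instances.
        private static final String[] FARM = {
            "http://svc-a.example.com/quote", "http://svc-b.example.com/quote"
        };
        private static final AtomicInteger NEXT = new AtomicInteger();
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/services/quote", MiniIntermediary::handle);
            server.start();
        }

        static void handle(HttpExchange ex) throws IOException {
            // Security is enforced once, here, instead of inside every service.
            String token = ex.getRequestHeaders().getFirst("X-Auth-Token");
            if (token == null) { ex.sendResponseHeaders(401, -1); ex.close(); return; }

            // Round-robin across the farm for horizontal scalability.
            String target = FARM[Math.floorMod(NEXT.getAndIncrement(), FARM.length)];
            try {
                HttpRequest req = HttpRequest.newBuilder(URI.create(target))
                    .POST(HttpRequest.BodyPublishers.ofByteArray(ex.getRequestBody().readAllBytes()))
                    .build();
                HttpResponse<byte[]> resp = CLIENT.send(req, HttpResponse.BodyHandlers.ofByteArray());
                ex.sendResponseHeaders(resp.statusCode(), resp.body().length);
                try (OutputStream out = ex.getResponseBody()) { out.write(resp.body()); }
            } catch (InterruptedException e) {
                ex.sendResponseHeaders(502, -1);
                ex.close();
            }
        }
    }

The individual services behind the farm never see the authentication logic, which is exactly the decoupling the intermediary is meant to buy.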

The intelligent intermediary provides these things without impacting the individual services or the legacy applications and COTS packages that provide them. In essence, the ESB becomes a loosely coupled “container” for enterprise services analogous in many ways to a J2EE application server but without the restrictions of developing the services themselves as Enterprise Java Beans.

The intelligent intermediary approach is often criticized because it adds a layer between the service consumer and the service provider. This layer seems (and probably is) excessive when only a few departmental-class services need to be exposed and consumed. However, when one measures the work (and heavy-handed enforcement) required to push enterprise security, transparency and quality-of-service standards onto each developer or provider of a service, the so-called overhead looks like a bargain by comparison.

For years organizations have been creating rat’s nests of point-to-point integrations that were a nightmare to maintain. Creating a point-to-point mesh of web services invocations is just this generation’s way of re-inventing the wheel. Try adding a WS-Security-based framework to one of those messes. It can be done, but at much greater cost (time and money) than spending up front to design and build, or acquire, an ESB to act as the intelligent intermediary.

We’re still in the intense hype phase of the Enterprise Services Bus and Service Oriented Architecture adoption cycle. Organizations are struggling to plot a course that avoids pitfalls of being an early adopter, emergent vendors are slapping out immature products to gain market and mind share and the mature integration vendors are adapting their core products to address the space through new releases and acquisitions.

While the product cycle takes its normal course, organizations can continue to web-service enable existing applications that provide value as enterprise services. During this preparation phase, however, organizations should implement an intermediary of some sort even if it does nothing but get developers (and vendors) in the habit of routing requests there first.

The initial version of the intermediary may not have all of the organization’s desired capabilities, but it will help avoid pushing the complexities of enterprise-class infrastructure down to individual services and pave the way for the (perhaps not-so-distant) future when the tool vendors have once again caught up with the needs of their customers.


When not writing about his travels in the UK or sushi in Paris, Graham has been writing about what it takes to produce “Good Software”. So far he has two entries in the series, part 1 and part 2.

Update: Two more parts are now up. Part 3 and Part 4.


Fellow Baylor Computer Science alumnus Keith Ray frequently writes on Agile Programming and Test-Driven Development.


CustomWare’s Nathan Lee has a good post that explores the architecture of webMethods Reverse Invoke here. Scroll down to see details on CustomWare’s latest release of its WmUnit unit testing tool for Integration Server.


Intelligent Intermediaries

An intelligent intermediary is an additional infrastructure layer designed to address location transparency and failover, as well as concerns such as enforcing security policies, monitoring service-level metrics and implementing logging and auditing requirements.

By routing web services invocation requests through the intermediary instead of directly to the service endpoint, much of the burden of implementing consistent location transparency and failover can be shifted to the intermediary and away from the providers or consumers of the web services themselves.

The intelligent intermediary must either maintain its own registry of web services or connect to an existing UDDI registry. Rather than require web services clients to make a series of inquiry transactions prior to invoking the web service, the intermediary uses either special parameters added to the endpoint URL or rules that examine the content of the request itself to locate an instance of the desired service.

The intermediary implements some type of “liveness checking” approach, perhaps using “synthetic transactions”, so that only working service instances will be invoked. If a failure is detected on a particular service invocation, the intermediary attempts to locate a backup service and handles rerouting the invocation to the working node without reporting an error to the web service client.
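
Here is a minimal sketch of that liveness-checking and failover behavior, written against a hypothetical in-memory registry; the class names, the /ping probe and the timeout are assumptions made for the example:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class FailoverRouter {
        private final HttpClient client = HttpClient.newHttpClient();
        // Hypothetical registry: service name -> candidate endpoints.
        private final Map<String, List<String>> registry = new ConcurrentHashMap<>();
        private final Map<String, Boolean> alive = new ConcurrentHashMap<>();

        // "Synthetic transaction": a cheap probe run on a schedule (scheduling
        // omitted here) so only instances known to work are routing candidates.
        void probe(String endpoint) {
            try {
                HttpRequest ping = HttpRequest.newBuilder(URI.create(endpoint + "/ping"))
                    .timeout(Duration.ofSeconds(2)).GET().build();
                alive.put(endpoint, client.send(ping, HttpResponse.BodyHandlers.discarding())
                    .statusCode() == 200);
            } catch (Exception e) {
                alive.put(endpoint, false);
            }
        }

        // Invoke the named service, failing over to the next live instance
        // without surfacing the first failure to the calling client.
        String invoke(String service, String payload) throws Exception {
            Exception last = null;
            for (String endpoint : registry.getOrDefault(service, List.of())) {
                if (!alive.getOrDefault(endpoint, true)) continue;  // skip known-dead nodes
                try {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(endpoint))
                        .POST(HttpRequest.BodyPublishers.ofString(payload)).build();
                    return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
                } catch (Exception e) {
                    alive.put(endpoint, false);  // mark the failed node, try a backup
                    last = e;
                }
            }
            throw new Exception("no live instance of " + service, last);
        }
    }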

The ability to apply routing rules that inspect incoming messages and reroute them as needed can also be used to implement service versioning. Service requests containing certain XML elements or explicit version metadata can be routed to older versions automatically, even while new versions of the service are deployed to handle requests from clients that need them.
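
A version-routing rule in the same spirit can be as small as mapping explicit version metadata to an endpoint; the version keys and endpoint addresses below are invented for illustration:

    import java.util.Map;

    public class VersionRouter {
        // Hypothetical mapping from declared service version to endpoint.
        private static final Map<String, String> BY_VERSION = Map.of(
            "1.0", "http://svc-v1.example.com/order",
            "2.0", "http://svc-v2.example.com/order");

        // Route on explicit version metadata; clients that declare nothing
        // keep getting the old behavior until they are ready to move.
        static String endpointFor(String declaredVersion) {
            return BY_VERSION.getOrDefault(declaredVersion == null ? "1.0" : declaredVersion,
                                           BY_VERSION.get("1.0"));
        }
    }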

The intelligent intermediary approach still depends on being able to locate a working intermediary, in the same way that using a UDDI registry depends on being able to locate a working registry server. This potential point of failure can be addressed by placing a hardware-based load balancer or content switch in front of the synchronized intermediaries, leveraging the network hardware’s ability to detect the failure of a server and automatically reroute traffic to a working host.

Another approach would be to load the addresses of known intermediary servers from properties files at startup, build helper services to perform liveness checking of those intermediaries on a scheduled basis, and return the addresses of working intermediaries to web service clients as needed. The latter approach still places too much burden on individual clients, in my opinion.

Several vendors have begun offering intelligent intermediaries, sometimes called proxies, web service routers, nodes or brokers. Some of these products have been built from the ground up as web-services-focused infrastructure, while others have been retooled from existing message-bus or systems management products.

webMethods acquired its intelligent intermediary product when it bought The Mind Electric last October. TME built Fabric from the ground up to address the service-oriented architecture needs of organizations moving beyond their initial early-steps web services efforts.

As I showed in an earlier article, the combination of webMethods Fabric 1.0.2 and webMethods Developer with Feature Pack 1 can be used to provide location transparency and automatic failover to consumers of web services provided by webMethods Integration Server. Because Fabric automatically populates a UDDI v2 registry when services are registered in the Fabric, clients that prefer the UDDI registry approach to location transparency and failover can still use it, while other clients, such as portlets created using webMethods Portal, can route requests through Fabric instead.

In summary, for organizations that need to implement location transparency and automatic failover across large numbers of web services, intelligent intermediaries such as webMethods Fabric should provide significant advantages over point-to-point or UDDI registry-based approaches.

Mark Carlson
Conneva, Inc.
mcarlson {AT} conneva.com


For one of my clients, I’m currently considering how best to introduce location transparency to web services implemented using Integration Server and called from a variety of clients including webMethods Portal, Modeler and other IS servers.

Point-To-Point

In many organizations’ early web services efforts, a point-to-point approach is used to invoke application functionality exposed as web services. In this approach, the endpoint URL address is hardcoded into the client code used to invoke the service. If the server hosting the web service is unavailable, the call to the service fails. Code to retry the call after some timeout period can be added, but that helps only in best-case scenarios in which the hosting server becomes available again after a very short time.

If an application needed to invoke only a very small number of web services, and the addresses of backup copies of those services were known in advance, properties files containing the addresses of the backup servers could be read at startup and used to locate a working copy in the event of a failure of the primary web services host. This approach becomes very difficult to maintain when the invoking application needs to invoke more than just a handful of services, or when some services are “mirrored” and others are not.
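
A minimal sketch of that properties-file approach (the file layout, mapping a service name to a comma-separated list of endpoints, is invented for the example) mostly shows how much failover logic every single client ends up carrying:

    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Properties;

    public class ClientSideFailover {
        private final Properties endpoints = new Properties();
        private final HttpClient client = HttpClient.newHttpClient();

        // endpoints.properties (hypothetical layout):
        //   getQuote=http://host-a.example.com/quote,http://host-b.example.com/quote
        ClientSideFailover(String path) throws IOException {
            try (FileReader in = new FileReader(path)) { endpoints.load(in); }
        }

        // Every client has to repeat this try-the-next-address loop itself.
        String call(String service, String payload) throws Exception {
            for (String addr : endpoints.getProperty(service, "").split(",")) {
                try {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(addr.trim()))
                        .POST(HttpRequest.BodyPublishers.ofString(payload)).build();
                    return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
                } catch (Exception e) { /* fall through to the next backup address */ }
            }
            throw new Exception("no configured endpoint for " + service + " responded");
        }
    }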

Registry-based Discovery and Failover

A more robust approach seems to be to use a web services registry based on UDDI. Version 3.0.1 of this specification was released in October 2003 and now supports multiple (redundant) registries, user-friendly, publisher-assigned service keys and other needed features. New features of UDDIv3 are listed here.

With UDDI, web services are published to the registry either manually, using a vendor-provided user interface, or programmatically, using API commands. Before making the call to invoke a service, the client first sends one or more inquiry requests, in the form of web service calls, to the UDDI server. The UDDI server returns metadata about the services that match the inquiry criteria and a “bindingTemplate” that provides the technical information applications need to bind to and interact with the web service.

The UDDI spec offers the “invocation” usage pattern, in which the application locates the service to be invoked by querying the UDDI server and then caches the bindingTemplate information. If a subsequent invocation fails, the client re-queries the UDDI server to obtain fresh bindingTemplate information and re-attempts the invocation. If the invocation is successful, the cached bindingTemplate information is updated. Of course, if the UDDI server itself is down, additional effort will be required to locate backup registry servers.
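
The shape of that pattern can be sketched against a hypothetical UddiClient interface rather than any particular UDDI library, since the point here is the cache, re-query and retry flow, not a registry API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class UddiInvocationPattern {
        // Hypothetical stand-ins for a real UDDI client and transport.
        interface UddiClient { String lookupAccessPoint(String serviceKey); }
        interface Transport { String invoke(String accessPoint, String payload) throws Exception; }

        private final UddiClient registry;
        private final Transport transport;
        private final Map<String, String> cachedBindings = new ConcurrentHashMap<>();

        UddiInvocationPattern(UddiClient registry, Transport transport) {
            this.registry = registry;
            this.transport = transport;
        }

        String invoke(String serviceKey, String payload) throws Exception {
            // Use the cached bindingTemplate access point if we have one.
            String accessPoint = cachedBindings.computeIfAbsent(serviceKey,
                                                                registry::lookupAccessPoint);
            try {
                return transport.invoke(accessPoint, payload);
            } catch (Exception first) {
                // On failure, re-query the registry for fresh binding information
                // and retry; on success, the cache is updated.
                String fresh = registry.lookupAccessPoint(serviceKey);
                String result = transport.invoke(fresh, payload);  // may still fail
                cachedBindings.put(serviceKey, fresh);
                return result;
            }
        }
    }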

The problem with this approach, in my opinion, is that the burden of discovering web services binding information and implementing a registry-based failover mechanism is placed on each web services client.

Organizations with more than a handful of web services clients will struggle to implement these discovery and failover techniques consistently across multiple development teams, physical locations and programming environments.

Another complicating factor for large organizations will be enforcing consistent publication of web service information into the registry using a taxonomy that has been appropriately designed for that specific organization or industry.

So, if point-to-point approaches are shortsighted and fragile, and UDDI-based approaches are complicated and place undue burden on users of web services, what other alternatives are there and how do they compare?

More on that in an upcoming post.

Mark Carlson
Conneva, Inc.
mcarlson {AT} conneva.com


Hopefully, the fruits of Graham’s ease-of-use evangelism efforts won’t take too long to make their way into the products.
