Process-Centric Business Integration

Accumulating a critical mass of data will create intelligence. Or so the IT community has believed for the past two decades. However, intelligence derives from the Latin inter legere, to read between the lines, so intelligence is better defined as the capability of understanding that which is not self-evident, in other words, that which does not come from data.

In fact, it's the processes that move and change the data that constitute a company's true intellectual capital. Hence, business integration needs to focus on business processes more than on data integration.

This faith in the supremacy of data started with the advocates of artificial intelligence who believed that upon obtaining a critical mass of information, systems would eventually become self-aware and smart. This was followed by "table-driven" applications, which represented an attempt to make systems easily configurable by nonprogrammer users through a data-driven approach. The latest application of this belief has been enterprise application integration (EAI) and business-to-business integration (B2Bi) technologies that have tried to integrate disparate applications and processes by mapping, or standardizing, data and adequately routing it from one application to another.

In all three cases the cost, time, and effort required for implementation, and the difficulty in effecting subsequent changes, have been much higher than anticipated. As time goes on, there are growing doubts about the cost-effective maintainability of these constructs and the rationality of the approach. As Roy Schulte of the GartnerGroup describes it, we have gone from "spaghetti interfaces" to "spaghetti EAI."

Why are we having these problems? Because we've been educated in static, data-centric methodologies that separate the collection of functional requirements and data design (both treated as data) from the programming, implementation, execution, and reengineering of the product. This separation means the requirements and design are really an abstraction written on paper that, for all intents and purposes, is as good as carved in stone, with no reasonable path to implementation except through a programmer. An intelligent programmer will then "read between the lines" and come up with a free translation of the specs into program code. That code also carries understandable errors of interpretation of the data "in the lines," plus the restrictions imposed by the existing data design. The approach therefore doesn't always render the desired results, and the people who originally designed the solution can't change the design and expect the same quality the second time around unless the same programmer does the work. On top of it all, it's a lengthy process.

In EAI projects the effect is compounded because there are restrictions to the solution posed by the preexisting applications and their application program interfaces (APIs), data, and hard-wired process logic, not to mention the impossibility of creating a development environment that covers all the possible combinations of applications and network configurations. The first wave of EAI assumed that the existing applications already "knew" what the processes were and what to do with the data they were fed. By simply sharing data and transforming it so it could be understood by the target application, we would have a sound, easy-to-use integration infrastructure. Unfortunately, existing applications were not designed to talk to each other, nor are they intelligent. They can't understand data that's not in the form of the predefined structures they've been built to understand. So it's not only a problem of data formats and routing; it's also a problem of having to add functionality that knows what to do with the new data, with the existing data, and with and for the users of the system.

This functionality is the materialization of the domain expert's and programmer's knowledge of the problem at hand in the form of an application program, not in the form of data.

Some EAI and B2Bi vendors integrated their message-brokering capabilities with data transformation and adapter capabilities, and added scripting capabilities (similar to "stored procedures" in the RDBMS market). The more modern ones decided to go the XML route, where the description of the content travels with the message, no doubt a great improvement. Some of them even went through the pain of adding state engines to manage processes associated with the data-brokering capabilities. In this way they plan to create stacks that can take care of all the functions required for seamless business integration and interaction. The persistent drawback is that all this functionality is added within a data-centric conception of the problem; therefore process management, to them, is a feature set of the data-management stack. As we'll see later, this approach greatly complicates the rendering of viable and maintainable solutions.

An alternative to the data-centric approach is the process-driven component approach. In this approach the design and documentation phase blends intimately with the programming, implementation, and reengineering phases and clarifies what the needs for services from people and preexisting data and applications really are. In essence, it consists of creating a new "supervisory application" (the driving process) that renders the desired result utilizing services from people and/or existing systems.

At first glance, with all the buzz from current EAI and e-commerce vendors about adding "process automation" capabilities to their offerings, you might infer that process management is just a feature set that lives within data-management stacks or applications. Actually, it's not. Nor can it be. Process management is a completely different methodological approach to solving the integration problem, and it must be stack and application independent. It constitutes an independent tier of logic: the process-logic tier.

Just as there's a business-logic tier that accounts mainly for the "functional" or departmental business rules, the process-logic tier accounts for cross-departmental and cross-company business rules. If it's not seen in this light, these capabilities will be underutilized or, even worse, used the wrong way, creating even greater complexity.
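
To make the tier separation concrete, here is a minimal sketch in Java. The names (SalesService, FinanceService, OrderApprovalProcess) and the 10,000 threshold are invented for illustration, not taken from the article or any product: the departmental business-logic tiers expose services, while the process-logic tier, acting as the supervisory application, owns the cross-departmental rule.

```java
// Minimal sketch of the process-logic tier described above.
// SalesService and FinanceService stand in for departmental (business-logic) tiers;
// OrderApprovalProcess is the supervisory "driving process" that owns the
// cross-departmental rule. All names are illustrative.

interface SalesService {                  // business-logic tier: sales department
    double quoteTotal(String orderId);
}

interface FinanceService {                // business-logic tier: finance department
    boolean approveCredit(String customerId, double amount);
}

class OrderApprovalProcess {              // process-logic tier: cross-departmental rule
    private final SalesService sales;
    private final FinanceService finance;

    OrderApprovalProcess(SalesService sales, FinanceService finance) {
        this.sales = sales;
        this.finance = finance;
    }

    // Cross-departmental rule: orders above 10,000 need a finance credit check.
    String run(String orderId, String customerId) {
        double total = sales.quoteTotal(orderId);
        if (total <= 10_000) return "AUTO-APPROVED";
        return finance.approveCredit(customerId, total) ? "APPROVED" : "REJECTED";
    }
}

public class ProcessTierSketch {
    public static void main(String[] args) {
        // Stubs stand in for the real departmental applications.
        SalesService sales = orderId -> 25_000.0;
        FinanceService finance = (customerId, amount) -> amount < 50_000;
        System.out.println(new OrderApprovalProcess(sales, finance).run("O-1", "C-7"));
    }
}
```

The point of the sketch is that the approval rule lives in the process tier; neither departmental service needs to know the other exists.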

When creating a process-driven integration solution, the design phase of the process logic is generally done using graphical process modelers, and by first designing these processes as if every activity in the process were going to be fulfilled manually by some human participant. This phase documents the requirements in a form that's completely understandable by both businesspersons and programmers.

The next step is to assess which of the preexisting applications provides data or functionality that can replace human intervention, then design both a use-case and an interface definition to automate these activities.
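
As a rough illustration of this design step, the sketch below (Java; the Activity and Performer names are invented for the example) first models every activity as human-performed and then reassigns one activity to an existing system through an interface definition.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the top-down design step: model the process as if every
// activity were manual, then mark the ones a preexisting application can fulfill.
public class ProcessDesignSketch {

    enum Performer { HUMAN, SYSTEM }

    static final class Activity {
        final String name;
        Performer performer = Performer.HUMAN;   // everything starts as a manual step
        String interfaceDefinition;              // filled in only when automated

        Activity(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // 1. Design the process as a sequence of manual activities.
        List<Activity> process = new ArrayList<>();
        process.add(new Activity("Enter order"));
        process.add(new Activity("Check credit"));
        process.add(new Activity("Schedule shipment"));

        // 2. Assess existing applications: the legacy credit system can replace
        //    human intervention, so define an interface for that activity.
        Activity creditCheck = process.get(1);
        creditCheck.performer = Performer.SYSTEM;
        creditCheck.interfaceDefinition = "CreditCheckService.check(customerId, amount)";

        // 3. The resulting model is readable by both businesspeople and programmers.
        for (Activity a : process) {
            System.out.printf("%-20s -> %s%s%n", a.name, a.performer,
                    a.performer == Performer.SYSTEM ? " via " + a.interfaceDefinition : "");
        }
    }
}
```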

After this top-down design phase, an adapter construction phase fulfills the defined needs of services from the underlying applications to the supervisory application. Programmers do this, but because of the top-down design approach, they now have a complete understanding of what is needed and how it will be used.

Once these services are externalized and cataloged as components, each of the related process models is transformed into an executable "supervisory application."
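
A minimal sketch of that step, with invented names throughout (CreditCheckService, LegacyCreditSystem, LegacyCreditAdapter, and a toy component catalog): the adapter externalizes a legacy API behind the service contract the process asked for, the catalog registers it as a component, and the supervisory application consumes it only through the interface.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the adapter-construction phase: a legacy API is wrapped as a service
// component, cataloged, and then used by the executable supervisory application.
public class AdapterCatalogSketch {

    // The service the process model said it needs (defined top-down).
    interface CreditCheckService {
        boolean check(String customerId, double amount);
    }

    // Stand-in for a preexisting application with its own awkward API.
    static class LegacyCreditSystem {
        int score(String custCode) { return custCode.hashCode() % 100; }  // pretend scoring
    }

    // The adapter: maps the service contract onto the legacy API.
    static class LegacyCreditAdapter implements CreditCheckService {
        private final LegacyCreditSystem legacy = new LegacyCreditSystem();
        public boolean check(String customerId, double amount) {
            return Math.abs(legacy.score(customerId)) > 40 || amount < 1_000;
        }
    }

    // A very small "component catalog" keyed by service name.
    static final Map<String, Object> catalog = new HashMap<>();

    public static void main(String[] args) {
        catalog.put("CreditCheckService", new LegacyCreditAdapter());

        // The executable supervisory application looks up services by contract,
        // never by the underlying application's API.
        CreditCheckService credit = (CreditCheckService) catalog.get("CreditCheckService");
        System.out.println("Credit OK? " + credit.check("C-7", 12_500));
    }
}
```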

This top-down approach is the opposite of the current EAI approach that first assesses what applications are being used and what they need to communicate with each other, then proceeds directly to define interface definition languages (IDLs), document type definitions (DTDs), or similar mechanisms to model the data to be shared.

The advantages of the process-driven or "supervisory-application" approach versus the data-modeling approach are:

  1. It's process-driven, not data-centric: Data format and transformation are relevant only on an as-needed basis, to fulfill the process's need for services. They're handled after designing the processes or supervisory applications. As a result, the adapter development effort is adequately scoped and reduced.
  2. It's naturally distributable: Processes can be cut into sub- and coprocesses as needed. These processes can be hosted on a federation of servers constituting an integrated supervisory "hyperapplication" that is highly scalable and executes across company firewalls.
  3. It facilitates B2B integration: In well-designed systems, these supervisory applications talk to one another through their interfaces that are independent of the underlying applications' interfaces. In this way, the process, subprocess, and coprocess relationships are maintained, even if any or all of them change substantially.
  4. It supports continuous improvement: It adds the ability to monitor and measure operations throughout the federation of processes or hyperapplications that supervise them.
  5. It's flexible and application-agnostic: The supervisory applications don't need to change if the preexisting applications change (i.e., move to a new version). Just snap in the wrapper (adapter component) for the new implementation of that interface, as in the sketch following this list.
This process-driven approach translates to faster time-to-market for solutions, greater control over the execution of the business, and unsurpassed flexibility.
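
Point 5 above can be pictured with a short sketch (a hypothetical InventoryService and two adapter versions, none of them from the article): because the supervisory process depends only on the interface, moving to a new version of the underlying application means registering a different wrapper, not touching the process.

```java
// Sketch of point 5: the supervisory process is written against InventoryService
// (an invented interface), so upgrading the underlying application only means
// snapping in a different adapter implementation.
public class SnapInSketch {

    interface InventoryService { int onHand(String sku); }

    // Adapter for version 1 of the preexisting application.
    static class InventoryV1Adapter implements InventoryService {
        public int onHand(String sku) { return 10; }            // would call the old API here
    }

    // Adapter for version 2; only this wrapper knows about the new API.
    static class InventoryV2Adapter implements InventoryService {
        public int onHand(String sku) { return 12; }            // would call the new API here
    }

    // The supervisory process logic never changes between versions.
    static String reorderDecision(InventoryService inventory, String sku) {
        return inventory.onHand(sku) < 11 ? "REORDER" : "OK";
    }

    public static void main(String[] args) {
        System.out.println(reorderDecision(new InventoryV1Adapter(), "SKU-9"));
        System.out.println(reorderDecision(new InventoryV2Adapter(), "SKU-9"));
    }
}
```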

Supervisory applications are in charge of getting messages from and putting messages into message-oriented middleware (MOM), calling components from object request brokers (ORBs) or application servers, and getting data from and putting data into databases or repositories. They are "active": they read between the lines, deal with problems, and create seamless audit trails. They are the intelligence.
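
Here is a sketch of that active role, using a java.util.concurrent.BlockingQueue as a stand-in for a MOM queue and an in-memory map as a stand-in for a repository (both are illustrative substitutes, not real middleware): the supervisory application pulls the message, calls a component, persists the result, and leaves an audit trail.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of a supervisory application driving passive infrastructure.
// A BlockingQueue stands in for a MOM queue and a Map for a database/repository;
// the point is that the process, not the middleware, decides what happens next.
public class SupervisorySketch {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> inbound = new ArrayBlockingQueue<>(10);   // "MOM" stand-in
        Map<String, String> repository = new LinkedHashMap<>();        // "database" stand-in

        inbound.put("ORDER:O-42");                                      // a message arrives

        // The supervisory application is active: it gets the message, calls a
        // component, puts the result into the repository, and leaves an audit trail.
        String message = inbound.take();
        String orderId = message.substring("ORDER:".length());
        String result = priceOrder(orderId);                            // component call
        repository.put(orderId, result);
        System.out.println("AUDIT: " + orderId + " processed -> " + result);
    }

    // Stand-in for a component obtained from an ORB or application server.
    static String priceOrder(String orderId) {
        return "PRICED(" + orderId + ")";
    }
}
```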

In contrast, in the data-centric approach, the intent is to base the integration strategy on one selected MOM, ORB, or application server. The following are the problems that arise.

  • Lack of control over business process execution
    Since these technologies are passive, they can't control or ensure execution; they need applications to put, get, or call their services. Therefore, the need for some form of supervisory application persists. Building these applications typically requires high-caliber programmers plus very knowledgeable domain experts, along with rigorous methodology and modeling capabilities to obtain the desired results. If these capabilities aren't encapsulated in one product, or are incomplete, the results will depend on the experts at hand and will differ from one supervisory application to another, making their integration as difficult as integrating the original applications. Integrating two middleware solutions is as difficult as integrating two applications using middleware.
  • Ripple effect of changes
    Typically the data structures used in these technologies need to be predefined and used within the adapters or connectors. The problem is, if the data structure changes, so must all the adapters or connectors that use it.

    This is a maintenance and change management nightmare. It also relates back to the previous point: because these technologies are passive, they don't use their own data representations; their data is used and managed by the adapters and connectors (the intelligent pieces), which are usually created by different programmers, each mapping the message data to an API. In the process-driven approach, the supervisory application takes data from the underlying services, transforms it into its own set of variables, and can transform those variables into any other format it may need in the future. These variables are managed and maintained centrally by the supervisory application, which is designed by a single person. If they change, the adapters don't need to change; if the adapters change, the process variables don't need to change.

    In conclusion, EAI and B2Bi adapters are not reusable if the business data requirements change; process-driven components are (the sketch after this list illustrates the decoupling).

  • Incompatibility across companies
    A company may try to homogenize its own MOM, ORB, or application server strategy on one technology - a difficult feat in itself - but suppliers and customers aren't usually amenable to changing their technology choices to match their partners.
  • Inability to unplug
    Third-party underlying applications in the present or future may not be compatible with the strategy selected, limiting vendor choices.
  • Inflexibility
    The bottom-up approach imposed by these technologies renders a solution that's virtually engraved in stone. Change is very difficult and costly because the original construction is only completely understood by the authors during a short period of time after having completed it. The business process, logic, and rules aren't inherently visible or changeable.
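
The decoupling argued for under "Ripple effect of changes" can be sketched as follows (all names invented): the supervisory process copies adapter payloads into its own process variables at a single mapping point, so a change in an adapter's data format stops there instead of rippling through every connector.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the process-variable decoupling: the adapter returns data in whatever
// shape the underlying application uses; the supervisory process immediately maps
// it into its own centrally managed variables and works only with those.
public class ProcessVariablesSketch {

    // Payload shape owned by the adapter (may change with the application version).
    record CustomerPayload(String cust_no, String cust_nm) { }

    // Stand-in adapter over a preexisting application.
    static CustomerPayload fetchCustomer(String id) {
        return new CustomerPayload(id, "ACME Corp");
    }

    public static void main(String[] args) {
        // Process variables: owned and named by the supervisory application itself.
        Map<String, Object> processVariables = new HashMap<>();

        // Single mapping point: adapter format -> process variables.
        CustomerPayload payload = fetchCustomer("C-7");
        processVariables.put("customerId", payload.cust_no());
        processVariables.put("customerName", payload.cust_nm());

        // Downstream process logic uses only the process variables; if the adapter's
        // payload changes, only the mapping above is touched.
        System.out.println("Processing order for " + processVariables.get("customerName"));
    }
}
```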

The data-centric approach necessarily translates to slower time-to-market, a more cumbersome execution, heterogeneous supervisory applications that continue to diverge, disparate metrics (if any) to control overall execution, and a great inflexibility to change or adapt to new requirements once the initial implementation is done.

Data-centric technologies are passive; they need to be told what to do. They're cumbersome, constitute a scalability obstacle, and are extremely inflexible. In sum, they're tools that intelligent people need to use wisely, if at all.

The process-driven approach is a new way of architecting integration solutions. Process-driven tools collapse the traditional stages of IT projects into three parts:

  1. Process design, testing, and redesign
  2. Component programming, testing, and snap-in integration
  3. Solution implementation by publishing processes to the intervening servers
The top-down approach ensures that the resulting supervisory application encompasses the strategy, tactics, business rules, and logic that management wanted. The snap-in approach to integrating underlying applications as components makes both initial implementation and ongoing changes easier and quicker.
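
Stage 3 can be pictured with a small sketch (the ProcessDefinition record and server names are invented): implementation amounts to publishing a versioned process definition to each server in the federation that will execute it.

```java
import java.util.List;

// Sketch of stage 3: implementation is publishing a versioned process definition
// to the servers that will execute it. Server names and the record are illustrative.
public class PublishSketch {

    record ProcessDefinition(String name, int version) { }

    public static void main(String[] args) {
        ProcessDefinition orderToCash = new ProcessDefinition("OrderToCash", 3);
        List<String> servers = List.of("sales-server", "finance-server", "partner-gateway");

        for (String server : servers) {
            // In a real engine this would deploy the model; here we just log the intent.
            System.out.println("Publishing " + orderToCash.name()
                    + " v" + orderToCash.version() + " to " + server);
        }
    }
}
```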

Many customers find that current state-of-the-art EAI and B2Bi solutions are extremely expensive, slow to deliver, and nearly impossible to maintain. System integrators can take advantage of this situation, but it's painful for their customers.

Relief is in sight. Innovative customers are starting to adopt the process-driven approach because it's easier, less expensive, and naturally suited for change.

More Stories By Felix Racca

Felix Racca is founder and executive vice president of Fuegotech. He holds the equivalent of a bachelor's degree in systems engineering from Universidad de Buenos Aires, as well as an MBA from Escuela de Direccion y Negocios, Universidad Austral (IAE Buenos Aires).
