
Artificial Intelligence Journal


Blackboards for Complex Event Processing

An interesting post on Tim Bass’ CEP blog [1] describes Blackboard Systems, an established term from AI research for “distributed knowledge systems” that cooperatively solve problems. Tim and I have mentioned blackboards and blackboard systems in the context of Complex Event Processing (CEP) before, but with the passage of time, “blackboard” in a CEP context has come to imply “distributed shared memory” [2], rather than just cooperating threads or agents looking at a shared database or memory structure [3]. Distributed memory is a requirement we see for scalable, high-throughput event processing beyond what fits into a single machine’s (or JVM’s) memory space.

A general progression of “CEP system complexity”, in terms of how the system handles memory, is:

  • in-memory only, with persistence for reliability / restore operations
    = small, fast, independent CEP or Event Stream Processing (ESP) applications
  • single-machine, multi-process (for example using multiple cores), sharing the same memory
    = small-medium, pretty fast, with a restricted number of co-operating processes
  • multi-machine network of processes (exploiting control as well as data events across the network):
    • independent memory models
      = where the problem area can be partitioned without side effects: multiple parallel identical processes (for performance)
    • shared-memory models (usually using some cache technology)
      = where the problem area is large and inter-dependent, requiring inter-dependent or co-operating processes (for solution complexity), while still allowing parallelism for performance.
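The difference between the last two multi-machine models above can be sketched in a few lines. This is an illustrative, single-process reduction, not any vendor's API; `partition_for` and `SharedEventStore` are hypothetical names, and the deterministic hash is chosen only so the example is reproducible.

```python
def partition_for(event_key: str, n_workers: int) -> int:
    """Independent-memory model: route each event to exactly one worker by
    hashing its key, so no state is ever shared between workers.
    (A simple deterministic hash is used here purely for illustration.)"""
    return sum(map(ord, event_key)) % n_workers


class SharedEventStore:
    """Shared-memory model: every worker reads and writes one event store.
    In a real deployment this would be a distributed cache, not a dict."""

    def __init__(self):
        self.store = {}

    def append(self, worker_id: int, event_key: str, value) -> list:
        # Any worker can see and extend state created by another worker,
        # which is what makes inter-dependent, cooperative processing possible.
        self.store.setdefault(event_key, []).append((worker_id, value))
        return self.store[event_key]
```

The partitioned model scales trivially (each worker is a clone with no coordination cost), but only works when events for different keys never interact; the shared-store model pays a coordination cost in exchange for handling inter-dependent problems.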

CEP frameworks can generally support all these models (out of the box, as with TIBCO BusinessEvents, or with varying amounts of custom development work). Of course, the last model (a multi-machine network with shared memory) is the interesting one for “Blackboard System” types of architectures (i.e. cooperative CEP agents working against a shared information model and event store, possibly under the control of a Master Control Program / Agent).
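The blackboard pattern just described (cooperative agents contributing partial results to a shared store, under a simple controller) can be sketched as follows. All the names here (`Blackboard`, `Agent`, `control_loop`) and the word-counting example are hypothetical illustrations, not any real framework's API.

```python
class Blackboard:
    """The shared information model all agents read from and write to."""

    def __init__(self, problem: dict):
        self.data = dict(problem)


class Agent:
    """A knowledge source: a predicate saying when it can contribute,
    and an action that adds its partial result to the blackboard."""

    def __init__(self, name, can_contribute, contribute):
        self.name = name
        self.can_contribute = can_contribute  # predicate over the blackboard
        self.contribute = contribute          # mutates the blackboard


def control_loop(blackboard: Blackboard, agents, max_steps: int = 100) -> Blackboard:
    """Master Control: repeatedly pick an agent that can contribute,
    stopping when no agent can (or the step budget runs out)."""
    for _ in range(max_steps):
        runnable = [a for a in agents if a.can_contribute(blackboard)]
        if not runnable:
            break
        runnable[0].contribute(blackboard)
    return blackboard


# Two cooperating agents: neither knows about the other; they coordinate
# only through the state of the blackboard.
result = control_loop(
    Blackboard({"text": "hello world"}),
    [Agent("count",
           lambda b: "words" in b.data and "count" not in b.data,
           lambda b: b.data.update(count=len(b.data["words"]))),
     Agent("tokenize",
           lambda b: "words" not in b.data,
           lambda b: b.data.update(words=b.data["text"].split()))])
```

Note that agent order in the list doesn't matter: the "count" agent simply isn't runnable until "tokenize" has posted its result, which is the essence of opportunistic blackboard control.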

One suspects the “blackboard systems” domain and its terminology are overdue for an update, thanks to developments in the Complex Event Processing space.


[1] Disclaimer: Tim is an ex-colleague and runs a vendor-independent blog on aspects of CEP.

[2] Blackboard systems historically used a single-machine memory model (i.e. multiple threads or processes sharing one machine’s memory). But the interesting aspect for CEP is not that event processing agents can create new events for other CEP agents to consume (which is pretty much de facto CEP runtime behavior), but that the memory model can span multiple machines (i.e. can be distributed).

[3] This old paper even suggested that blackboard systems’ reign in AI research was curtailed by rule systems’ use of independent rulesets operating on a shared working memory - i.e. standard rule engine behavior. Rule-driven CEP engines like TIBCO BusinessEvents can certainly operate this way, with “independent” declarative rulesets cooperating on a problem. This approach is more difficult if you can represent your CEP or ESP solution only as a “flow diagram”, as you are explicitly fixing (non-declaratively) the interoperation of the CEP processing elements.
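The "independent rulesets operating on a shared working memory" behavior described in [3] can be illustrated with a minimal forward-chaining loop. This is a generic sketch of standard rule-engine behavior, not TIBCO BusinessEvents; the rules and fact names are invented for the example.

```python
def run_rules(working_memory: set, rulesets) -> set:
    """Fire any rule (from any ruleset) whose condition matches the shared
    working memory, until no rule can add a new fact (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for ruleset in rulesets:
            for condition, consequence in ruleset:
                if condition(working_memory) and consequence not in working_memory:
                    working_memory.add(consequence)
                    changed = True
    return working_memory


# Two independent, declarative rulesets: neither references the other, yet
# they cooperate on the problem purely via the shared working memory.
fraud_rules = [
    (lambda wm: "large_withdrawal" in wm and "foreign_ip" in wm,
     "suspicious_activity"),
]
alert_rules = [
    (lambda wm: "suspicious_activity" in wm, "raise_alert"),
]
```

Contrast this with a fixed flow diagram: here nothing hard-wires `alert_rules` to run after `fraud_rules`; the ordering of their interoperation emerges from the facts each one asserts.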

Read the original blog entry...