This release of the Coherence Incubator is now deprecated. For the latest supported release, help and documentation, please refer to the Coherence Incubator Project on java.net at http://cohinc.java.net.
The Event Distribution Pattern provides a simple mechanism to distribute application-defined Events via one or more Event Channels. Specifically, the Event Distribution Pattern allows the definition, construction and use of Event Distributors, which are responsible for guaranteeing in-order asynchronous delivery of Events via the defined Event Channels.
There are several Event Distributor implementations, each based on different internal technologies to provide Event delivery guarantees. Levels of performance, scalability and quality of service may differ between implementations. As of this writing, there are two out-of-the-box Event Distributor implementations: one based on Coherence itself and another that leverages JMS providers.
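The choice of Event Distributor implementation is typically made in configuration by nesting the appropriate scheme inside the distributor definition. The following sketch illustrates the idea; the exact scheme element names are assumptions based on common usage of the pattern and should be verified against the schema shipped with your release:

```xml
<!-- Illustrative only: selects the Coherence-based Event Distributor. -->
<event:distributor-scheme>
    <event:coherence-based-distributor-scheme/>
</event:distributor-scheme>

<!-- Alternatively (illustrative), a JMS-based Event Distributor: -->
<event:distributor-scheme>
    <event:jms-based-distributor-scheme/>
</event:distributor-scheme>
```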
Like Event Distributors, there are many Event Channel implementations, each of which provides different capabilities and ultimate destinations for the Events being distributed. Ultimately it is the responsibility of an Event Channel to perform the actual distribution of Events. Commonly used Event Channel implementations include those that deliver Events to local caches, remote caches, remote clusters, files and standard error.
Ultimately, the purpose of this pattern is to provide an extensible, high-performance, highly-available and scalable general-purpose framework for distributing application Events occurring in one Coherence Cluster to one or more possibly distributed Coherence Clusters, Caches or other devices. As such, the Event Distribution Pattern is used as core infrastructure for the Push Replication Pattern.
Version 1.2.2 is a patch release that resolves an issue whereby an application could consume a large amount of CPU while idle.
Version 1.2.0 is a patch release of the Event Distribution Pattern. Below are the highlights for this release.
Version 1.1.2 is a patch release of the Event Distribution Pattern. Below are the highlights for this release.
Version 1.1.1 is a patch release of the Event Distribution Pattern. Below are the highlights for this release.
Version 1.1.0 is the first upgrade release of the Event Distribution Pattern. Numerous fixes and performance-related changes have been made. Below are the highlights for this release.
While this is a new project, it is heavily inspired by the previous internal implementations of Push Replication. The majority of the functionality provided by Push Replication 3.x remains, but it is now available for use outside of Push Replication itself. The most significant change has been to terminology. The following table outlines the changes in terminology and concepts.
* Previous releases are available from the attachments page
The following section outlines the commonly used event distribution topologies supported by the Event Distribution Pattern. These are simply some examples of what is possible. Each of these topologies may be enhanced and combined to create a range of alternative approaches.
In the "Active Passive" topology, events from an "active cluster" are asynchronously distributed to a "passive cluster".
In the "Hub and Spoke" topology events from an active "hub" cluster are asynchronously distributed to a number of passive "spoke" clusters.
In the "Active Active" topology events from each "active" cluster are asynchronously distributed to other "active" clusters.
In the "Centralized" topology, a centralized cluster serves as a "hub" that distributes events to a set of "leaf" clusters.
This strategy is most effective when a cluster "owns" a set of entries in a cache (i.e. is exclusively responsible for producing events on the information it owns). "Leaf" clusters only ever distribute events to the "hub" cluster, which is then responsible for distributing those events to all other "leaf" clusters.
In this model, one of the clusters is configured as the "hub" by specifying the <replication-role>. Each of the other clusters is designated as a "leaf" cluster.
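A minimal sketch of how the roles might be declared follows. Only the <replication-role> element is taken from the text above; its placement and the role values shown are illustrative assumptions that should be checked against the configuration schema for your release:

```xml
<!-- Hub cluster configuration (illustrative placement and values) -->
<replication-role>hub</replication-role>

<!-- Each leaf cluster configuration (illustrative placement and values) -->
<replication-role>leaf</replication-role>
```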
For Cache-based Event Channels, each topology supports Conflict Resolution at the destination cluster through the specification of ConflictResolvers. The default conflict resolver (called a BruteForceConflictResolver) will simply overwrite the existing value - that is, last write wins.
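As a sketch, a custom ConflictResolver could be declared on a cache-based channel roughly as follows. The <instance>/<class-name> idiom is standard Coherence configuration, but the channel and conflict-resolver element names and the class shown are illustrative assumptions, not taken from the schema:

```xml
<!-- Illustrative: attach a custom ConflictResolver to a cache-based channel -->
<event:remote-cache-channel-scheme>
    <event:conflict-resolver-scheme>
        <instance>
            <!-- hypothetical application class implementing ConflictResolver -->
            <class-name>com.example.LatestTimestampWinsResolver</class-name>
        </instance>
    </event:conflict-resolver-scheme>
</event:remote-cache-channel-scheme>
```

When no resolver is specified, the BruteForceConflictResolver described above applies, so a custom resolver is only needed when last-write-wins is not acceptable.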
We highly recommend that all clusters using the Event Distribution Pattern be uniquely identifiable. To achieve this, each cluster should be configured such that the combination of its site and cluster names is unique. To declare the name of the geographical site in which a cluster is located, either use the Coherence system override property tangosol.coherence.site or configure the <cluster-config>. Likewise, to declare the name of a cluster, either use the Coherence system override property tangosol.coherence.cluster or, again, configure the <cluster-config>.
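For example, the site and cluster names can be declared in a Coherence operational override file; the names shown here are illustrative values:

```xml
<cluster-config>
    <member-identity>
        <!-- may also be set with -Dtangosol.coherence.site=... -->
        <site-name system-property="tangosol.coherence.site">NewYork</site-name>
        <!-- may also be set with -Dtangosol.coherence.cluster=... -->
        <cluster-name system-property="tangosol.coherence.cluster">TradingCluster</cluster-name>
    </member-identity>
</cluster-config>
```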
The following section outlines the XML configuration structure for the Event Distribution Pattern as it appears in a Cache Configuration.
The definition of an <event:distributor> may occur in two places: one within a <cache-config>, another within a <cache-mapping>.
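As an illustrative sketch of a <cache-mapping>-based definition, the fragment below distributes events for one cache to a channel that writes to standard error. The nested element names follow the pattern's typical examples but are assumptions here and should be verified against the XSD for your release; the cache and scheme names are placeholders:

```xml
<cache-mapping>
    <cache-name>publishing-cache</cache-name>
    <scheme-name>distributed-scheme</scheme-name>

    <!-- illustrative distributor definition -->
    <event:distributor>
        <event:distributor-name>Example Distributor</event:distributor-name>
        <event:distributor-scheme>
            <event:coherence-based-distributor-scheme/>
        </event:distributor-scheme>
        <event:distribution-channels>
            <event:distribution-channel>
                <event:channel-name>StdErr Channel</event:channel-name>
                <event:channel-scheme>
                    <event:stderr-channel-scheme/>
                </event:channel-scheme>
            </event:distribution-channel>
        </event:distribution-channels>
    </event:distributor>
</cache-mapping>
```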
| Symbol | Meaning |
|--------|---------|
| `...` | More detail is available |
| `*` | Zero or more |
| `+` | One or more |