This release of the Coherence Incubator is now deprecated. For the latest supported release, help and documentation, please refer to the Coherence Incubator Project on java.net at http://cohinc.java.net.
Prior to running the Push Replication examples you'll need to build them. Before starting, make sure that the JAVA_HOME and COHERENCE_HOME environment variables are set in your shell. Once that's done, build the examples with the Ant script:
The following set of examples illustrates how to configure and run each of the currently supported deployment models for push replication. Note that these examples are designed to run on a single system. They can be adapted to run on separate systems by changing the host names in the cache configuration for each individual example:
In the Active-Passive example, clients actively publish to a single cluster. The active site then replicates the data to a passive site. The passive site has no clients publishing to it, but all of the updates from the active site are available there.
There are two classes in the Active-Passive example:
Both the active and passive sites have an application tier and a caching tier. For the sake of simplicity, each caching tier consists of a single storage-enabled node. The ActivePassivePublisher is the client tier on the active site, and the ActivePassiveListener is the client on the passive site.
The active site is configured to use push replication to replicate data to the passive site. This is accomplished with three main steps: configuring the active cache to use the PublishingCacheStore, configuring a remote invocation scheme (a Coherence*Extend connection) that points to the passive site's proxy service, and programmatically registering a publisher for the passive site.
The passive site simply needs to have a proxy service configured. The proxy service responds to Extend requests from the active site; the RemoteInvocationPublisher sends commands through it to publish updates to entries on the passive site. Both sites must have the incubator jars on their classpath.
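From the client's perspective, the active site's publisher boils down to registering a publisher and then writing to the cache as usual. The sketch below is illustrative only: the registerPublisher() call and the cache name "publishing-cache" are assumptions for this sketch rather than the incubator's exact API; see the ActivePassivePublisher source for the real registration code.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// A minimal sketch of the active site's client tier. The registration
// call is hypothetical; the real example registers a
// RemoteInvocationPublisher bound to the remote-invocation-scheme
// defined in the active site's cache configuration.
public class ActivePassivePublisherSketch {
    public static void main(String[] args) {
        // registerPublisher("publishing-cache", "passive-site", ...);  // hypothetical

        // Publishing is then just ordinary cache access; the
        // PublishingCacheStore intercepts each update and replicates it
        // to the passive site.
        NamedCache cache = CacheFactory.getCache("publishing-cache");
        for (int i = 0; i < 100; i++) {
            cache.put(String.valueOf(i), "value-" + i);
        }
    }
}
```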
To run the Active-Passive example you'll need four command line windows. Before starting, make sure that the JAVA_HOME and COHERENCE_HOME variables are set in each shell. Once that's done, to run the example you'll want to:
With all four processes running you should see the publisher adding entries to the active cache, and the listener reporting updates from the passive cache.
The Hub and Spoke example extends the Active-Passive example from a single passive site to multiple passive sites, all updated from a single publishing site known as the hub.
There are three classes in the hub-and-spoke example:
The design for the Hub and Spoke example is the same as the Active-Passive design except that an additional remote site is published to. This means that on the active site, two Coherence*Extend connections are configured (one for each remote site), and two publishers are programmatically registered (again, one for each site). The active-cache-config.xml defines the two remote-invocation-schemes, and the HubSpokePublisher registers the two publishers and then starts publishing data to the active cache. The remote sites are both configured with proxy services to receive the updates.
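In sketch form, the only change from the Active-Passive publisher is a second registration, one per spoke. As before, registerPublisher() here is a hypothetical stand-in (with made-up site names), not the incubator's actual API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the hub's registration step: one publisher per
// spoke site. registerPublisher() stands in for the incubator's real call.
public class HubSpokePublisherSketch {
    static void registerPublisher(String cacheName, String site) {
        // In the real example this binds a RemoteInvocationPublisher to the
        // remote-invocation-scheme configured for the given spoke site.
        System.out.println("registered publisher for " + site + " on " + cacheName);
    }

    public static void main(String[] args) {
        List<String> spokes = Arrays.asList("passive-site-1", "passive-site-2");
        for (String spoke : spokes) {
            registerPublisher("publishing-cache", spoke);
        }
        // ...then publish to the active cache exactly as in Active-Passive.
    }
}
```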
To run the Hub and Spoke example you'll need six command line windows. Before starting, make sure that the JAVA_HOME and COHERENCE_HOME variables are set in each shell. Once that's done, to run the example you'll want to:
With all six processes running you should see the publisher adding entries to the active cache, and the two listeners reporting updates from the passive caches.
The Active-Active example extends the Active-Passive example to have two remote sites, both actively updating the cache at the same time. This example shows a trivial approach to conflict resolution: for objects that are written on both sites, updates from one designated site always win.
There are two classes in the Active-Active example:
The design for the Active-Active example is the same as the Active-Passive design except that both sites are configured to replicate. There are two tiers, the caching tier and the application tier. On the caching tier, both sites are configured to use the PublishingCacheStore, which is accomplished by overriding the distributed-scheme-with-publishing-cachestore from the defaults in the standard Push Replication configuration. You can see this by looking at either site's cache configuration.
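Conceptually, the PublishingCacheStore sits at the standard Coherence CacheStore integration point: the cache service invokes the store on every update, which lets it forward each change toward the registered publishers. The toy class below illustrates only that interception point; it is not the incubator's actual implementation.

```java
import com.tangosol.net.cache.CacheStore;
import java.util.Collection;
import java.util.Collections;
import java.util.Map;

// Toy illustration of the interception idea behind PublishingCacheStore.
// A real publisher batches changes and sends them over Coherence*Extend.
public class ToyPublishingCacheStore implements CacheStore {
    public void store(Object key, Object value) {
        // Forward the update toward the remote site (stubbed out here).
        System.out.println("publish put: " + key + " -> " + value);
    }

    public void storeAll(Map entries) {
        for (Object o : entries.entrySet()) {
            Map.Entry entry = (Map.Entry) o;
            store(entry.getKey(), entry.getValue());
        }
    }

    public void erase(Object key) {
        System.out.println("publish remove: " + key);
    }

    public void eraseAll(Collection keys) {
        for (Object key : keys) {
            erase(key);
        }
    }

    // CacheStore extends CacheLoader; this toy store is write-only.
    public Object load(Object key) { return null; }

    public Map loadAll(Collection keys) { return Collections.emptyMap(); }
}
```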
In addition to using the PublishingCacheStore, both sites need to have remote invocation schemes configured as well as a proxy service. The remote-invocation-scheme for the active1 cache points to the proxy service configured on the active2 cache; in turn, the remote-invocation-scheme for the active2 cache points to the proxy service configured on the active1 cache. This is how the two sites are wired up to communicate with one another.
Each publisher application registers publishers that are responsible for sending cache updates to the appropriate remote site. In addition, the publishers are registered with the ActiveActiveConflictResolver, which always takes updates from site1 as the winner. Once the publishers are registered, each application loads the cache with entries.
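The "site1 always wins" policy can be expressed in a few lines. The resolver shape below is an illustrative stand-in, not the actual interface that ActiveActiveConflictResolver implements; see the example sources for the real signature.

```java
// Hypothetical sketch of the "site1 always wins" policy described above.
public class Site1WinsResolverSketch {
    /** Picks the winning value for an entry that was updated on both sites. */
    public Object resolve(String sourceSite, Object incomingValue, Object localValue) {
        // Updates originating from site1 always win. Updates from the other
        // site only apply when there is no local value to conflict with,
        // so both clusters converge on site1's values.
        if ("site1".equals(sourceSite) || localValue == null) {
            return incomingValue;
        }
        return localValue;
    }
}
```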
To run the Active-Active example you'll need four command line windows. Before starting, make sure that the JAVA_HOME and COHERENCE_HOME variables are set in each shell. Once that's done, to run the example you'll want to:
With all four processes running you should see the publishers adding entries to their respective caches. Each publisher will also log updates arriving from the other site as they come in.
The Centralized example illustrates how a cluster can serve as a hub that dispatches replication activity to a set of leaf clusters. Three leaf clusters replicate updates to mutually exclusive sets of keys in a single cache, named "publishing-cache". These updates are published to the hub cluster, which processes the entries and routes them to the other leaf clusters. Note that this example illustrates how a single cache can be updated by multiple clusters without conflict resolution, provided each publishing cluster takes exclusive ownership of updates to its set of entries.
There are four classes in the Central example:
Four Coherence clusters are used for this example and are assigned site names. Site0 is the hub cluster, while Site1, Site2, and Site3 are the three leaf clusters. Except for a special-case entry published by the hub to initiate processing, all updates are performed by the leaf clusters.
After cache servers for all four clusters are launched and initialized, a single LeafPublisher is launched at each leaf site. The LeafPublishers wait for a signal from the HubController to start publishing. The HubController is started last; it registers batch publishers to push updates to the three leaf clusters and then puts a single specially marked entry into the publishing cache (with a key whose value is the Java String "StartPublishing"). This put propagates to the three leaf clusters, whose LeafListeners see the control key and initiate publishing. The leafs publish mutually exclusive sets of 1000 entries whose keys are String representations of Long values: Site1 publishes entries with keys from 0 to 999, Site2 from 1000 to 1999, and Site3 from 2000 to 2999. The values for the entries are randomly generated Long values.
Logically the LeafPublishers produce a combined total of 3000 entries, which are all published to the four instances of "publishing-cache" living in the four clusters; physically, 12000 updates are processed. After a LeafPublisher finishes publishing its set of entries, it waits to receive the last entries from the other leafs (i.e. keys 999, 1999, and 2999). Once a leaf has received all entries from the other leafs, it publishes a final special control entry that indicates to the hub that all entries have been received and processed in the local cluster. Each leaf creates an entry with its site name embedded (e.g. "Site1Done"); after putting this entry into the cache, the LeafPublisher exits. Once the HubController sees the SiteNDone entries for all three leafs, it exits. When the HubController exits, all clusters should have entries with keys from 0 to 2999 with identical values.
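A leaf's publishing loop then looks roughly like the sketch below, shown here for Site1's key range. The coordination steps (waiting for "StartPublishing" and for the other leafs' last entries) are elided, and the value stored under "Site1Done" is illustrative.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import java.util.Random;

// Sketch of a leaf's publishing loop, assuming Site1's key range (0-999).
// The waiting logic described above is elided for brevity.
public class LeafPublisherSketch {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("publishing-cache");
        Random random = new Random();

        // ... wait for the hub's "StartPublishing" control entry ...

        // Publish this leaf's exclusive key range; keys are String
        // representations of Longs, values are random Longs.
        for (long key = 0; key < 1000; key++) {
            cache.put(String.valueOf(key), Long.valueOf(random.nextLong()));
        }

        // ... wait for keys "999", "1999", and "2999" from the other leafs ...

        // Tell the hub this site is done (the value here is illustrative).
        cache.put("Site1Done", Boolean.TRUE);

        CacheFactory.shutdown();
    }
}
```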
To run the Centralized example you'll need eight command line windows. Before starting, make sure that the JAVA_HOME and COHERENCE_HOME variables are set in each shell. Once that's done, to run the example you'll want to:
With all eight processes running you should see the publishers adding entries to the publishing cache, and the hub listener reporting on samples of updates from the leaf clusters.
The most striking aspect of this example is how a very complex replication topology can be supported with very little application-specific code. The HubController simply registers three batch publishers and then listens to publishing activity (mostly so that the demo has something to show). The leaf publishers are relieved of having to know about the scope of the cluster topology: each only needs to know how to publish to the hub and does not have to know about the existence of the other leaf clusters. Before using this topology, a developer should understand that a centralized model puts a resource burden on the hub, since it takes on the task of publishing to all leafs. Because of this, the hub will use substantially more CPU and memory to support replication than the leaf clusters it serves.
* Previous releases are available from the attachments page