Configuring and Using Coherence Extend

Coherence*Extend™ extends the reach of the core Coherence TCMP cluster to a wider range of consumers, including desktops, remote servers, and machines located across WAN connections. Typical uses of Coherence*Extend include providing desktop applications with access to Coherence caches (including support for Near Caching and Continuous Query) and Coherence cluster "bridges" that link together multiple Coherence clusters connected via a high-latency, unreliable WAN.

Coherence*Extend consists of two basic components: a client and a Coherence*Extend clustered service hosted by one or more DefaultCacheServer processes. The client adapter library includes implementations of both the CacheService and InvocationService interfaces that route all requests to a Coherence*Extend clustered service instance running within the Coherence cluster. The Coherence*Extend clustered service in turn responds to client requests by delegating to an actual Coherence clustered service (for example, a Partitioned or Replicated cache service). The client adapter library and the Coherence*Extend clustered service use a low-level messaging protocol to communicate with each other. Coherence*Extend includes the following transport bindings for this protocol:

  • Extend-JMS™: uses your existing JMS infrastructure as the means to connect to the cluster
  • Extend-TCP™: uses a high-performance, scalable TCP/IP-based communication layer to connect to the cluster

The choice of a transport binding is configuration-driven and is completely transparent to the client application that uses Coherence*Extend. A Coherence*Extend service is retrieved like a Coherence clustered service: via the CacheFactory class. Once obtained, a client uses the Coherence*Extend service in the same way as it would if it were part of the Coherence cluster. The fact that operations are being sent to a remote cluster node (over either JMS or TCP) is transparent to the client application.

General Instructions

Configuring and using Coherence*Extend requires five basic steps:

  1. Create a client-side Coherence cache configuration descriptor that includes one or more <remote-cache-scheme> and <remote-invocation-scheme> configuration elements
  2. Create a cluster-side Coherence cache configuration descriptor that includes one or more <proxy-scheme> configuration elements
  3. Launch one or more DefaultCacheServer processes
  4. Create a client application that uses one or more Coherence*Extend services
  5. Launch the client application

The following sections describe each of these steps in detail for the Extend-JMS and Extend-TCP transport bindings.

Configuring and Using Coherence*Extend-JMS

Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-JMS transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <jms-initiator> element containing various JMS-specific configuration information. For example:
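A minimal sketch of such a descriptor follows; the scheme names ("extend-dist", "extend-near", "extend-invocation") and the QueueConnectionFactory JNDI name are illustrative, while the cache names, service names, and Queue JNDI name match those used in the discussion below:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- Requests for "dist-extend" are routed directly to the remote cluster -->
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <!-- Requests for "dist-extend-near" go through an in-process near cache first -->
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Size-limited near cache of remote Coherence caches -->
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <!-- Remote cache scheme that connects to the cluster over JMS -->
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendJmsCacheService</service-name>
      <initiator-config>
        <jms-initiator>
          <queue-connection-factory-name>jms/coherence/QueueConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/coherence/Queue</queue-name>
          <connect-timeout>10s</connect-timeout>
        </jms-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <!-- Remote invocation scheme for executing tasks within the remote cluster -->
    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendJmsInvocationService</service-name>
      <initiator-config>
        <jms-initiator>
          <queue-connection-factory-name>jms/coherence/QueueConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/coherence/Queue</queue-name>
          <connect-timeout>10s</connect-timeout>
        </jms-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>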

This cache configuration descriptor defines two caching schemes, one that uses Extend-JMS to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed via Extend-JMS). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <jms-initiator> child element which includes all JMS-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache via the CacheFactory using, for example, the name "dist-extend", the Coherence*Extend adapter library will connect to the Coherence cluster via a JMS Queue (retrieved via JNDI using the name "jms/coherence/Queue") and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendJmsInvocationService"), the Coherence*Extend adapter library will connect to the Coherence cluster via the same JMS Queue and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.

Cluster-side Cache Configuration Descriptor

In order for a Coherence*Extend-JMS™ client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor which includes a <proxy-scheme> element with a child <jms-acceptor> element containing various JMS-specific configuration information. For example:
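A sketch of such a cluster-side descriptor; the "dist-*" mapping (which covers the "dist-extend" cache used above) and the scheme and service names are illustrative:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- All "dist-*" caches, including "dist-extend", use the Partitioned service -->
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Standard Partitioned cache service -->
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <!-- Coherence*Extend clustered service that accepts client connections over JMS -->
    <proxy-scheme>
      <scheme-name>extend-jms-proxy</scheme-name>
      <service-name>ExtendJmsProxyService</service-name>
      <acceptor-config>
        <jms-acceptor>
          <queue-connection-factory-name>jms/coherence/QueueConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/coherence/Queue</queue-name>
        </jms-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>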

This cache configuration descriptor defines two clustered services: one that uses Extend-JMS to allow remote Extend-JMS clients to connect to the Coherence cluster, and a standard Partitioned cache service. Since this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <jms-acceptor> child element which includes all JMS-specific information needed to accept client connection requests over JMS.

The Coherence*Extend clustered service will listen to a JMS Queue (retrieved via JNDI using the name "jms/coherence/Queue") for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called "dist-extend", the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this case, will be a Partitioned cache. Note that Extend-JMS client connection requests will be load balanced across all DefaultCacheServer processes that are running a Coherence*Extend clustered service with the same configuration.

Configuring your JMS Provider

Coherence*Extend-JMS uses JNDI to obtain references to all JMS resources. To specify the JNDI properties that Coherence*Extend-JMS uses to create a JNDI InitialContext, create a file called jndi.properties that contains your JMS provider's configuration properties and add the directory that contains the file to both the client application and DefaultCacheServer classpaths.

For example, if you are using WebLogic Server as your JMS provider, your jndi.properties file would look like the following:

java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.provider.url=t3://localhost:7001
java.naming.security.principal=system
java.naming.security.credentials=weblogic

Additionally, Coherence*Extend-JMS uses a JMS Queue to connect Extend-JMS clients to a Coherence*Extend clustered service instance. Therefore, you must deploy an appropriately configured JMS QueueConnectionFactory and Queue and register them under the JNDI names specified in the <jms-initiator> and <jms-acceptor> configuration elements.

For example, if you are using WebLogic Server, you can use the following Ant script to create and deploy these JMS resources:
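A sketch of such a script, assuming the wlconfig Ant task that ships with WebLogic Server and a single server "myserver" in domain "mydomain"; task and attribute names vary across WebLogic releases, so treat this as a starting point rather than a definitive script:

<project name="deploy-coherence-jms" default="deploy">
  <!-- The wlconfig task is provided by WebLogic Server (weblogic.jar) -->
  <taskdef name="wlconfig" classname="weblogic.ant.taskdefs.management.WLConfig"/>

  <target name="deploy">
    <wlconfig url="t3://localhost:7001" username="system" password="weblogic">
      <!-- Look up the target server -->
      <query domain="mydomain" type="Server" name="myserver" property="server"/>

      <!-- Connection factory registered under the JNDI name used in the
           <jms-initiator> and <jms-acceptor> configuration elements -->
      <create type="JMSConnectionFactory" name="CoherenceQueueConnectionFactory">
        <set attribute="JNDIName" value="jms/coherence/QueueConnectionFactory"/>
        <set attribute="Targets" value="${server}"/>
      </create>

      <!-- JMS server hosting the Queue used by Coherence*Extend-JMS -->
      <create type="JMSServer" name="CoherenceJMSServer">
        <set attribute="Targets" value="${server}"/>
        <create type="JMSQueue" name="CoherenceQueue">
          <set attribute="JNDIName" value="jms/coherence/Queue"/>
        </create>
      </create>
    </wlconfig>
  </target>
</project>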

Launching an Extend-JMS DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-JMS clients to connect to the Coherence cluster via JMS, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on Unix).
  • Make sure that the paths are configured so that the Java command will run.
  • Start the DefaultCacheServer command line application with the directory that contains your jndi.properties file and your JMS provider's libraries on the classpath and the -Dtangosol.coherence.cacheconfig system property set to the location of the cluster-side Coherence cache configuration descriptor described earlier.

For example, if you are using WebLogic Server as your JMS provider, run the following command on Windows (note that it is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar;<directory containing jndi.properties>;<WebLogic home>\server\lib\wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

On Unix:

java -cp coherence.jar:<directory containing jndi.properties>:<WebLogic home>/server/lib/wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

Launching an Extend-JMS Client Application

To start a client application that uses Extend-JMS to connect to a remote Coherence cluster via JMS, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on Unix).
  • Make sure that the paths are configured so that the Java command will run.
  • Start your client application with the directory that contains your jndi.properties file and your JMS provider's libraries on the classpath and the -Dtangosol.coherence.cacheconfig system property set to the location of the client-side Coherence cache configuration descriptor described earlier.

For example, if you are using WebLogic Server as your JMS provider, you would run the following command on Windows (note that it is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar;<directory containing jndi.properties>;<WebLogic home>\server\lib\wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

On Unix:

java -cp coherence.jar:<directory containing jndi.properties>:<WebLogic home>/server/lib/wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

Configuring and Using Coherence*Extend-TCP

Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-TCP transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <tcp-initiator> element containing various TCP/IP-specific configuration information. For example:
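A minimal sketch of such a descriptor, analogous to the Extend-JMS example above but with a <tcp-initiator> element; the scheme names are illustrative, while the cache names, service names, address, and port match those used in the discussion below:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- Requests for "dist-extend" are routed directly to the remote cluster -->
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <!-- Requests for "dist-extend-near" go through an in-process near cache first -->
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Size-limited near cache of remote Coherence caches -->
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <!-- Remote cache scheme that connects to the cluster over TCP/IP -->
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <!-- Remote invocation scheme for executing tasks within the remote cluster -->
    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>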

This cache configuration descriptor defines two caching schemes, one that uses Extend-TCP to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed via Extend-TCP). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <tcp-initiator> child element which includes all TCP/IP-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache via the CacheFactory using, for example, the name "dist-extend", the Coherence*Extend adapter library will connect to the Coherence cluster via TCP/IP (using the address "localhost" and port 9099) and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendTcpInvocationService"), the Coherence*Extend adapter library will connect to the Coherence cluster via TCP/IP (again, using the address "localhost" and port 9099) and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.

Note that the <remote-addresses> configuration element can contain multiple <socket-address> child elements. The Coherence*Extend adapter library will attempt to connect to the addresses in a random order, until either the list is exhausted or a TCP/IP connection is established.
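For example (both addresses here are purely illustrative):

<remote-addresses>
  <!-- Candidate proxy addresses, tried in random order until one accepts -->
  <socket-address>
    <address>192.168.0.2</address>
    <port>9099</port>
  </socket-address>
  <socket-address>
    <address>192.168.0.3</address>
    <port>9099</port>
  </socket-address>
</remote-addresses>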

Cluster-side Cache (a.k.a. Coherence*Extend Proxy) Configuration Descriptor

In order for a Coherence*Extend-TCP™ client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor which includes a <proxy-scheme> element with a child <tcp-acceptor> element containing various TCP/IP-specific configuration information. For example:
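A sketch of such a cluster-side descriptor; the "dist-*" mapping (which covers the "dist-extend-direct" cache used below) and the scheme and service names are illustrative, while the address and port match the client-side example above:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- All "dist-*" caches, including "dist-extend-direct", use the Partitioned service -->
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Standard Partitioned cache service -->
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <!-- Coherence*Extend clustered service that accepts client connections over TCP/IP -->
    <proxy-scheme>
      <scheme-name>extend-tcp-proxy</scheme-name>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>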

This cache configuration descriptor defines two clustered services: one that uses Extend-TCP to allow remote Extend-TCP clients to connect to the Coherence cluster, and a standard Partitioned cache service. Since this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <tcp-acceptor> child element which includes all TCP/IP-specific information needed to accept client connection requests over TCP/IP.

The Coherence*Extend clustered service will listen to a TCP/IP ServerSocket (bound to address "localhost" and port 9099) for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called "dist-extend-direct", the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this case, will be a Partitioned cache.

Launching an Extend-TCP DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-TCP clients to connect to the Coherence cluster via TCP/IP, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on Unix)
  • Make sure that the paths are configured so that the Java command will run
  • Start the DefaultCacheServer command line application with the -Dtangosol.coherence.cacheconfig system property set to the location of the cluster-side Coherence cache configuration descriptor described earlier

For example (note that the following command is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

Launching an Extend-TCP Client Application

To start a client application that uses Extend-TCP to connect to a remote Coherence cluster via TCP/IP, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on Unix)
  • Make sure that the paths are configured so that the Java command will run
  • Start your client application with the -Dtangosol.coherence.cacheconfig system property set to the location of the client-side Coherence cache configuration descriptor described earlier

For example (note that the following command is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

Example Coherence*Extend Client Application

The following example demonstrates how to retrieve and use a Coherence*Extend CacheService and InvocationService. This example increments an Integer value in a remote Partitioned cache and then retrieves the value by executing an Invocable on the clustered JVM to which the client is attached:
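A minimal sketch of such a client, assuming the Extend-TCP client-side descriptor shown earlier (for Extend-JMS, substitute the "ExtendJmsInvocationService" service name); the class name and cache key are illustrative:

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import com.tangosol.net.NamedCache;

import java.util.Map;

public class TestClient
    {
    public static void main(String[] asArg)
        {
        // Increment the cached Integer value (or create it, if missing)
        NamedCache cache  = CacheFactory.getCache("dist-extend");
        Integer    IValue = (Integer) cache.get("key");

        IValue = (IValue == null) ? new Integer(1)
                                  : new Integer(IValue.intValue() + 1);
        cache.put("key", IValue);

        // Retrieve the value by executing an Invocable on the clustered
        // JVM to which this client is connected (note the null member set)
        InvocationService service = (InvocationService)
                CacheFactory.getConfigurableCacheFactory()
                            .ensureService("ExtendTcpInvocationService");

        Map mapResult = service.query(new GetValueTask(), null);

        // The single result of the execution is keyed by the local Member
        Object oValue = mapResult.get(service.getCluster().getLocalMember());
        System.out.println("Cached value: " + oValue);
        }

    // Invocable task executed within the cluster; this class must also be
    // on the classpath of the DefaultCacheServer processes
    public static class GetValueTask
            extends AbstractInvocable
        {
        public void run()
            {
            setResult(CacheFactory.getCache("dist-extend").get("key"));
            }
        }
    }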

Note that this example could also be run on a Coherence node (i.e. within the cluster) verbatim. The fact that operations are being sent to a remote cluster node (over either JMS or TCP) is completely transparent to the client application.

Coherence*Extend InvocationService
Since, by definition, a Coherence*Extend client has no direct knowledge of the cluster and the members running within the cluster, the Coherence*Extend InvocationService only allows Invocable tasks to be executed on the JVM to which the client is connected. Therefore, you should always pass a null member set to the query() method. As a consequence of this, the single result of the execution will be keyed by the local Member, which will be null if the client is not part of the cluster. This Member can be retrieved by calling service.getCluster().getLocalMember(). Additionally, the Coherence*Extend InvocationService only supports synchronous task execution (i.e. the execute() method is not supported).

Advanced Configuration

Network Filters

Like Coherence clustered services, Coherence*Extend services support pluggable network filters. Filters can be used to modify the contents of network traffic before it is placed "on the wire". Most standard Coherence network filters are supported, including the compression and symmetric encryption filters. For more information on configuring filters, see the Network Filters section.

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

For example, to encrypt network traffic exchanged between a Coherence*Extend client and the clustered service to which it is connected, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements like so (assuming the symmetric encryption filter has been named symmetric-encryption):
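A sketch, abbreviated and continuing the Extend-TCP examples above; the <use-filters> element is added inside <initiator-config>, alongside the <tcp-initiator> element:

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
    <!-- Encrypt all traffic exchanged over this connection -->
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </initiator-config>
</remote-invocation-scheme>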

and the cluster-side <proxy-scheme> element like so:
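A corresponding sketch, with the <use-filters> element added inside <acceptor-config>:

<proxy-scheme>
  <scheme-name>extend-tcp-proxy</scheme-name>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <!-- Must match the <use-filters> element on the client side -->
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>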

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

Connection Error Detection and Failover

When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) will dispatch a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service will be stopped. If the client application attempts to subsequently use the service, the service will automatically restart itself and attempt to reconnect to the cluster. If the connection is successful, the service will dispatch a MemberEvent.MEMBER_JOINED event; otherwise, a fatal exception will be thrown to the client application.

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some are inherent to the underlying transport (a javax.jms.ExceptionListener in Extend-JMS, the TCP/IP protocol itself in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured via the <outgoing-message-handler> configuration element.

The primary configurable mechanism used by a Coherence*Extend client service to detect dropped connections is a request timeout. When the service sends a request to the remote cluster and does not receive a response within the request timeout interval (see <request-timeout>), the service assumes that the connection has been dropped. The Coherence*Extend client and clustered services can also be configured to send a periodic heartbeat over the connection (see <heartbeat-interval> and <heartbeat-timeout>). If the service does not receive a response within the configured heartbeat timeout interval, the service assumes that the connection has been dropped.

You should always enable heartbeats when using a connectionless transport, as is the case with Extend-JMS.
If you do not specify a <request-timeout>, a Coherence*Extend service will use an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, you should specify a reasonable finite request timeout.
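For example, a client-side <initiator-config> might configure both mechanisms as follows (the interval and timeout values are illustrative):

<initiator-config>
  <jms-initiator>
    <queue-connection-factory-name>jms/coherence/QueueConnectionFactory</queue-connection-factory-name>
    <queue-name>jms/coherence/Queue</queue-name>
  </jms-initiator>
  <outgoing-message-handler>
    <!-- Send a heartbeat every 5 seconds... -->
    <heartbeat-interval>5s</heartbeat-interval>
    <!-- ...and assume a dropped connection if no response arrives within 3 seconds -->
    <heartbeat-timeout>3s</heartbeat-timeout>
    <!-- Assume a dropped connection if a request receives no response within 30 seconds -->
    <request-timeout>30s</request-timeout>
  </outgoing-message-handler>
</initiator-config>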

Read-only NamedCache Access

By default, the Coherence*Extend clustered service allows both read and write access to proxied NamedCache instances. To prohibit Coherence*Extend clients from modifying cached content, use the <cache-service-proxy> child configuration element. For example:
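A sketch, continuing the Extend-TCP proxy example above; the <read-only> element name is as found in the cache configuration DTD:

<proxy-scheme>
  <scheme-name>extend-tcp-proxy</scheme-name>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <proxy-config>
    <cache-service-proxy>
      <!-- Reject all NamedCache operations that would modify cached content -->
      <read-only>true</read-only>
    </cache-service-proxy>
  </proxy-config>
  <autostart>true</autostart>
</proxy-scheme>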

Client-side NamedCache Locking

By default, the Coherence*Extend clustered service disallows Coherence*Extend clients from acquiring NamedCache locks. To enable client-side locking, use the <cache-service-proxy> child configuration element. For example:
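A sketch of the <proxy-config> portion of the <proxy-scheme> element shown earlier; the rest of the scheme is unchanged:

<proxy-config>
  <cache-service-proxy>
    <!-- Allow clients to call NamedCache.lock() and unlock() -->
    <lock-enabled>true</lock-enabled>
  </cache-service-proxy>
</proxy-config>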

If you enable client-side locking and your client application uses the NamedCache.lock() and unlock() methods, it is important that you specify the member-based (rather than thread-based) locking strategy for any Partitioned or Replicated cache services defined in your cluster-side Coherence cache configuration descriptor. Because the Coherence*Extend clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread will execute subsequent requests from the same Coherence*Extend client.

To specify the member-based locking strategy for a Partitioned or Replicated cache service, use the <lease-granularity> configuration element. For example:
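A sketch, continuing the cluster-side examples above; "member" (rather than the default "thread") is the lease granularity value that enables member-based locking:

<distributed-scheme>
  <scheme-name>dist-default</scheme-name>
  <!-- Member-based locking: a lock is held by the proxy node, not a specific thread -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>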

Disabling Proxied Services

By default, the Coherence*Extend clustered service exposes two proxied services to clients: a CacheService proxy and an InvocationService proxy. In some cases, it may be desirable to disable one of the two proxies. This is possible via the <enabled> configuration element in each of the corresponding proxy configuration sections. For example, to disable the InvocationService proxy so that remote clients cannot execute Invocable objects within the cluster, you'd configure the Coherence*Extend clustered service like so:
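A sketch, continuing the Extend-TCP proxy example above; the <invocation-service-proxy> element name is as found in the cache configuration DTD:

<proxy-scheme>
  <scheme-name>extend-tcp-proxy</scheme-name>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <proxy-config>
    <invocation-service-proxy>
      <!-- Remote clients may not execute Invocable objects within the cluster -->
      <enabled>false</enabled>
    </invocation-service-proxy>
  </proxy-config>
  <autostart>true</autostart>
</proxy-scheme>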

Likewise, to prevent remote clients from accessing caches in the cluster, you'd use the following configuration:
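A sketch of the <proxy-config> portion of the same <proxy-scheme> element; the rest of the scheme is unchanged:

<proxy-config>
  <cache-service-proxy>
    <!-- Remote clients may not access caches in the cluster -->
    <enabled>false</enabled>
  </cache-service-proxy>
</proxy-config>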
