distributed-scheme


Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme

Description

The distributed-scheme defines caches where the storage for entries is partitioned across cluster nodes. See the service overview for a more detailed description of partitioned caches. See the partitioned cache samples for examples of various distributed-scheme configurations.

Clustered Concurrency Control

Partitioned caches support cluster-wide, key-based locking so that data can be modified in a cluster without encountering the classic missing-update problem. Note that any operation made without holding an explicit lock is still atomic, but there is no guarantee that the value stored in the cache does not change between atomic operations.

Cache Clients

The partitioned cache service supports cluster nodes that do not contribute to the overall storage of the cluster. Nodes that are not storage-enabled are considered "cache clients".

Cache Partitions

The cache entries are evenly segmented into a number of logical partitions, and each storage-enabled cluster node running the specified partitioned service is responsible for maintaining a fair share of these partitions.

Key Association

By default, the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries; the partitioned cache service will ensure that associated entries reside in the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
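As a sketch, a key-associator is configured by naming the implementation class inside the scheme; the class name below is a hypothetical application class:

```xml
<distributed-scheme>
  <scheme-name>example-associated</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- com.example.OrderKeyAssociator is a hypothetical application
       class supplying the key-association logic -->
  <key-associator>
    <class-name>com.example.OrderKeyAssociator</class-name>
  </key-associator>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>
```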

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map-scheme. For instance, a partitioned cache which uses a local cache for its backing map will store cache entries in-memory on the storage-enabled cluster nodes.
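For example, a minimal scheme of this kind might look like the following sketch (the scheme and service names are illustrative):

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- entries are held in-memory on the storage-enabled nodes -->
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```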

Failover

For the purposes of failover, a configurable number of backups of the cache may be maintained in backup-storage across the cluster nodes. Each backup is also divided into partitions, and when possible a backup partition will not reside on the same physical machine as the primary partition. If a cluster node abruptly leaves the cluster, responsibility for its partitions is automatically reassigned to the existing backups, and new backups of those partitions are created (on remote nodes) to maintain the configured backup count.
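The number of backups is controlled by the backup-count element described below; for instance, a sketch that keeps one backup copy of each partition:

```xml
<distributed-scheme>
  <scheme-name>example-backup</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- one backup copy of each partition, held on a different node
       (and where possible a different physical machine) -->
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>
```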

Partition Redistribution

When a node joins or leaves the cluster, a background redistribution of partitions occurs to ensure that all cluster nodes manage a fair-share of the total number of partitions. The amount of bandwidth consumed by the background transfer of partitions is governed by the transfer-threshold.

Elements

The following table describes the elements you can define within the distributed-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme.

Services are configured within the operational descriptor.
<serializer> Optional Specifies the class configuration info for a com.tangosol.io.Serializer implementation used by the partitioned cache service to serialize and deserialize user types.

For example, the following configures a com.tangosol.io.pof.ConfigurablePofContext that uses the default coherence-pof-config.xml configuration file to write objects to and read from a stream:
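A minimal form of such a serializer configuration:

```xml
<serializer>
  <!-- POF serializer; with no init-params it picks up the default
       POF configuration file -->
  <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
```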
<listener> Optional Specifies an implementation of the com.tangosol.util.MapListener interface which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are references to other cache schemes, for example:

  • local-scheme
  • external-scheme
  • paged-external-scheme
  • overflow-scheme
  • class-scheme
  • read-write-backing-map-scheme
  • versioned-backing-map-scheme

When using an overflow-based backing map it is important that the corresponding backup-storage be configured for overflow (potentially using the same scheme as the backing-map). See the partitioned cache with overflow sample for an example configuration.
<partition-count> Optional Specifies the number of partitions that a partitioned cache will be "chopped up" into. Each node running the partitioned cache service that has the local-storage option set to true will manage a "fair" (balanced) number of partitions. The number of partitions should be larger than the square of the number of cluster members to achieve a good balance, and it is suggested that the number be prime. Good defaults include 257 and 1021 and prime numbers in-between, depending on the expected cluster size. For large clusters it is recommended that the partition count not exceed 16,381, regardless of the number of storage-enabled members. A list of the first 1,000 primes can be found at http://www.utm.edu/research/primes/lists/small/1000.txt



Legal values are prime numbers.

Default value is the value specified in the tangosol-coherence.xml descriptor.
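As an illustrative fragment, a scheme expecting a few dozen storage-enabled members might override the partition count with a prime such as 1021:

```xml
<distributed-scheme>
  <scheme-name>example-partitioned</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- prime, and larger than the square of the expected member count -->
  <partition-count>1021</partition-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>
```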

<key-associator> Optional Specifies a class that is responsible for providing associations between keys, allowing associated keys to reside in the same partition. This implementation must have a zero-parameter public constructor.
<key-partitioning> Optional Specifies a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface, which will be responsible for assigning keys to partitions. This implementation must have a zero-parameter public constructor.

If unspecified, the default key partitioning algorithm will be used, which ensures that keys are evenly segmented across partitions.
<partition-listener> Optional Specifies a class that implements the com.tangosol.net.partition.PartitionListener interface.
<backup-count> Optional Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache.

A value of 0 means that in the case of abnormal termination, some portion of the data in the cache will be lost. A value of N means that if up to N cluster nodes terminate at once, the cache data will be preserved.

To maintain a partitioned cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes and is on the order of M*(N+1).

Recommended values are 0 or 1.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<backup-count-after-writebehind> Optional Specifies the number of members of the partitioned cache service that will hold the backup data for each unit of storage in the cache that does not require write-behind, i.e. data that is not vulnerable to being lost even if the entire cluster were shut down. Specifically, if a unit of storage is marked as requiring write-behind, then it will be backed up on the number of members specified by the backup-count element, and if the unit of storage is not marked as requiring write-behind, then it will be backed up by the number of members specified by the backup-count-after-writebehind element.

This value should be set to 0 or this setting should not be specified at all. The rationale is that since this data is being backed up to another data store, no in-memory backup is required, other than the data temporarily queued on the write-behind queue to be written. (Note that the setting also applies to write-through data, or any data that can be re-loaded from another data store by a CacheLoader or CacheStore.)

The value of 0 means that once write-behind has occurred, the backup copies of that data will be discarded. However, until write-behind occurs, the data will be backed up in accordance with the backup-count setting.

Recommended value is 0 or this element should be omitted altogether.
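Putting the two settings together, a write-behind scheme that discards in-memory backups once data has been written to the underlying store might be sketched as follows (the cache-store class is a hypothetical application class):

```xml
<distributed-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <service-name>DistributedCache</service-name>
  <backup-count>1</backup-count>
  <!-- discard backup copies once an entry has been written behind -->
  <backup-count-after-writebehind>0</backup-count-after-writebehind>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <!-- com.example.DatabaseCacheStore is a hypothetical CacheStore -->
      <cachestore-scheme>
        <class-scheme>
          <class-name>com.example.DatabaseCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <write-delay>10s</write-delay>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
</distributed-scheme>
```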
<backup-storage> Optional Specifies the type and configuration for the partitioned cache backup storage.
<thread-count> Optional Specifies the number of daemon threads used by the partitioned cache service.

If zero, all relevant tasks are performed on the service thread.

Legal values are positive integers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<lease-granularity> Optional Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node, and any thread running on the cluster node that obtained the lock can release it.

Default value is the value specified in the tangosol-coherence.xml descriptor.
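For example, to have locks owned by the acquiring node rather than the acquiring thread:

```xml
<lease-granularity>member</lease-granularity>
```

The member setting is convenient when the thread that releases a lock may differ from the one that acquired it, at the cost of weaker isolation between threads on the same node.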

<transfer-threshold> Optional Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service or when a member of the service leaves, the remaining nodes perform a task of bucket ownership re-distribution. During this process, the existing data gets re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer but will reduce network bandwidth utilization during this activity.

Legal values are integers greater than zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<local-storage> Optional Specifies whether or not a cluster node will contribute storage to the cluster, i.e., maintain partitions. When disabled, the node is considered a cache client.

Normally this value should be left unspecified within the configuration file and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor.

Legal values are true or false.

Default value is the value specified in the tangosol-coherence.xml descriptor.
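For instance, the same descriptor can then serve both roles by varying only the system property; the client main class below is a hypothetical application class:

```
# storage-enabled cache server
java -Dtangosol.coherence.distributed.localstorage=true com.tangosol.net.DefaultCacheServer

# storage-disabled cache client (com.example.MyCacheClient is hypothetical)
java -Dtangosol.coherence.distributed.localstorage=false com.example.MyCacheClient
```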

<autostart> Optional The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
<task-hung-threshold> Optional Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note that a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the "thread-count" value is positive).

Legal values are positive integers or zero (indicating no default timeout).

Default value is the value specified in the tangosol-coherence.xml descriptor.
<task-timeout> Optional Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute is applied only if the thread pool is used (the "thread-count" value is positive).

Legal values are positive integers or zero (indicating no default timeout).

Default value is the value specified in the tangosol-coherence.xml descriptor.
<request-timeout> Optional Specifies the maximum amount of time a client will wait for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:

(1) the time it takes to deliver the request to an executing node (server);
(2) the interval between the time the task is received and placed into a service queue until the execution starts;
(3) the task execution time;
(4) the time it takes to deliver a result back to the client.

Legal values are positive integers or zero (indicating no default timeout).

Default value is the value specified in the tangosol-coherence.xml descriptor.

<operation-bundling> Optional Specifies the configuration info for a bundling strategy.