diff --git a/docs/modules/ROOT/pages/capacity-planning.adoc b/docs/modules/ROOT/pages/capacity-planning.adoc
index 27e731423..fc08e6393 100644
--- a/docs/modules/ROOT/pages/capacity-planning.adoc
+++ b/docs/modules/ROOT/pages/capacity-planning.adoc
@@ -472,24 +472,14 @@ trade symbols (distinct keys).
 
 ==== Cluster Size and Performance
 
-The https://hazelcast.com/resources/jet-3-0-streaming-benchmark/[benchmark]
-generates the expected data stream (50k events / second, 10k distinct
-keys) and measures how the cluster size affects the processing latency.
+The following table shows the maximum and average latencies measured for an example data stream (50k events / second, 10k distinct keys)
+and how the cluster size affects the processing latency.
 
-We benchmarked this job on a cluster of 3, 5 and 9 members. We started
+We benchmarked a job on clusters of 3, 5 and 9 members. We started
 with a 3-member cluster as that is a minimal setup for fault-tolerant
 operations. For each topology, we benchmarked a setup with 1, 10, 20 and
 40 jobs running in the cluster.
 
-The metric we measured was latency evaluated as ```RESULT_PUBLISHED_TS -
-ALL_TRADES_RECEIVED_TS``` (https://hazelcast.com/resources/jet-3-0-streaming-benchmark/[learn
-more]).
-You can use this approach or design a metric that fits your application
-SLAs. Moreover, our example records the maximum and average latency.
-Consider measuring the result distribution, as the application SLAs are
-frequently expressed using it, e.g., app processes 99.999% of data under
-200 milliseconds).
-
 Cluster machines were of the recommended minimal configuration: AWS
 https://aws.amazon.com/ec2/instance-types/c5/[c5.2xlarge] machines, each
 of 8 CPU, 16 GB RAM, 10 Gbps network.
diff --git a/docs/modules/integrate/pages/elasticsearch-connector.adoc b/docs/modules/integrate/pages/elasticsearch-connector.adoc
index f88790627..79b027373 100644
--- a/docs/modules/integrate/pages/elasticsearch-connector.adoc
+++ b/docs/modules/integrate/pages/elasticsearch-connector.adoc
@@ -27,7 +27,7 @@ The Elasticsearch connector source provides a builder and several
 convenience factory methods. Most commonly you need to provide the following:
 
 * A client supplier function, which returns a configured instance of
- `RestClientBuilder` (see link:https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-low-usage-initialization.html#java-rest-low-usage-initialization[Elasticsearch documentation]),
+ `RestClientBuilder` (see link:https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/java-rest-low-usage-initialization.html[Elasticsearch documentation]),
 * A search request supplier, specifying a query to Elasticsearch,
 * A mapping function from `SearchHit` to a desired type.
 
@@ -96,7 +96,7 @@ on Elasticsearch side to fix these issues.
 The Elasticsearch connector sink provides a builder and several
 convenience factory methods.
 Most commonly you need to provide:
 
-* A client supplier, which returns a configured instance of `RestHighLevelClient` (see link:https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-low-usage-initialization.html#java-rest-low-usage-initialization[Elasticsearch documentation]),
+* A client supplier, which returns a configured instance of `RestHighLevelClient` (see link:https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/java-rest-low-usage-initialization.html[Elasticsearch documentation]),
 * A mapping function to map items from the pipeline to an instance of one of `IndexRequest`, `UpdateRequest` or `DeleteRequest`.
 
diff --git a/docs/modules/jcache/pages/tck.adoc b/docs/modules/jcache/pages/tck.adoc
index 07aff58a3..51809e2bb 100644
--- a/docs/modules/jcache/pages/tck.adoc
+++ b/docs/modules/jcache/pages/tck.adoc
@@ -69,6 +69,6 @@ mvn -Dimplementation-groupId=com.hazelcast -Dimplementation-artifactId=hazelcast
 clean install
 ----
 
-See also the link:https://docs.google.com/document/d/1m8d1Z44IFGAd20bXEvT2G--vWXbxaJctk16M2rmbM24/edit?ts=59fdff73[TCK 1.1.0 User Guide^] or link:https://docs.google.com/document/d/1w3Ugj_oEqjMlhpCkGQOZkd9iPf955ZWHAVdZzEwYYdU/edit[TCK 1.0.0 User Guide^]
+See also the link:https://docs.google.com/document/d/1m8d1Z44IFGAd20bXEvT2G--vWXbxaJctk16M2rmbM24/edit#[TCK 1.1.0 User Guide^] or link:https://docs.google.com/document/d/1w3Ugj_oEqjMlhpCkGQOZkd9iPf955ZWHAVdZzEwYYdU/edit[TCK 1.0.0 User Guide^]
 for more information about the testing instructions.
 
diff --git a/docs/modules/kubernetes/pages/deploying-in-kubernetes.adoc b/docs/modules/kubernetes/pages/deploying-in-kubernetes.adoc
index d11cbd290..2a22eb187 100644
--- a/docs/modules/kubernetes/pages/deploying-in-kubernetes.adoc
+++ b/docs/modules/kubernetes/pages/deploying-in-kubernetes.adoc
@@ -89,7 +89,6 @@ Explore some step-by-step guides about how to use Hazelcast in Kubernetes.
 * link:https://docs.hazelcast.com/tutorials/hazelcast-platform-operator-expose-externally[Connect to Hazelcast from Outside Kubernetes]
 * link:https://docs.hazelcast.com/tutorials/hazelcast-platform-operator-external-backup-restore[Restore a Cluster from Cloud Storage]
 * link:https://docs.hazelcast.com/tutorials/hazelcast-platform-operator-wan-replication[Replicate Data between Two Hazelcast Clusters]
-* link:https://docs.hazelcast.com/tutorials/hazelcast-platform-operator-map-store-mongodb-atlas[Configure MongoDB Atlas as an External Data Store for the Cluster]
 
 === Hazelcast Features
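
For context, the source and sink bullets touched in `elasticsearch-connector.adoc` above describe the pieces a pipeline has to supply. The following is only a rough sketch of how they fit together, not part of the patch: it assumes the `ElasticSources`, `ElasticSinks` and `ElasticClients` factories from Hazelcast's Elasticsearch connector module, and the host, credentials and index names are placeholders.

[source,java]
----
import java.util.Map;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;

import com.hazelcast.jet.elastic.ElasticClients;
import com.hazelcast.jet.elastic.ElasticSinks;
import com.hazelcast.jet.elastic.ElasticSources;
import com.hazelcast.jet.pipeline.BatchSource;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sink;

public class ElasticConnectorSketch {

    public static Pipeline build() {
        // Source: a client supplier returning a RestClientBuilder, a search request
        // supplier, and a mapping function from SearchHit to the desired type.
        BatchSource<Map<String, Object>> source = ElasticSources.elastic(
                () -> ElasticClients.client("elastic", "changeme", "localhost", 9200),
                () -> new SearchRequest("my-source-index"),
                hit -> hit.getSourceAsMap()
        );

        // Sink: a client supplier plus a mapping function from a pipeline item to an
        // IndexRequest (UpdateRequest and DeleteRequest can be produced the same way).
        Sink<Map<String, Object>> sink = ElasticSinks.elastic(
                () -> ElasticClients.client("elastic", "changeme", "localhost", 9200),
                item -> new IndexRequest("my-sink-index").source(item)
        );

        Pipeline p = Pipeline.create();
        p.readFrom(source)
         .writeTo(sink);
        return p;
    }
}
----

A pipeline built this way would be submitted as usual, for example with `hazelcastInstance.getJet().newJob(p)`.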