forked from ppine7/kafka-elasticsearch-consumer
upgrade to Kafka 2.5.1, ES 7.8, Java 11 #110
Open

ppine7 wants to merge 2 commits into BigDataDevs:master from ppine7:java11
Changes from all commits (2 commits)
Binary file not shown.
```diff
@@ -1,6 +1,6 @@
-#Thu Jul 13 16:56:59 EEST 2017
+#Thu Jan 23 14:28:37 EST 2020
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-4.0.1-all.zip
+distributionUrl=https\://services.gradle.org/distributions/gradle-6.0-all.zip
```
```diff
@@ -1,79 +1,72 @@
 ### Kafka properties ####################################
-# all properties starting with this prefix - will be added to KafkaPRoperties object
+# all properties starting with this prefix - will be added to KafkaProperties object
 # with the property name = original property name minus the prefix
 kafka.consumer.property.prefix=consumer.kafka.property.
 # Kafka Brokers host:port list: <host1>:<port1>,…,<hostN>:<portN>
 # default: localhost:9092
 # old: kafka.consumer.brokers.list=localhost:9092
 consumer.kafka.property.bootstrap.servers=localhost:9092

 # Kafka Consumer group name prefix -
-# each indexer job will have a clientId = kafka.consumer.group.name + "_" + partitionNumber
 # default: kafka_es_indexer
 # old: kafka.consumer.group.name=kafka_es_indexer
+# each job will have a clientId = kafka.consumer.group.name + "_" + partitionNumber
 consumer.kafka.property.group.id=kafka-batch-consumer

 # kafka session timeout in ms - is kafka broker does not get a heartbeat from a consumer during this interval -
 # consumer is marked as 'dead' and re-balancing is kicking off
 # default: 30s x 1000 = 30000 ms
 # old: kafka.consumer.session.timeout.ms=30000
 consumer.kafka.property.session.timeout.ms=30000

 # Max number of bytes to fetch in one poll request PER partition
 # default: 1M = 1048576
 # old: kafka.consumer.max.partition.fetch.bytes=1048576
 consumer.kafka.property.max.partition.fetch.bytes=1048576

 # application instance name:
 # used as a common name prefix of all consumer threads
 application.id=app1

 # Kafka Topic from which the message has to be processed
 # mandatory property, no default value specified.
 kafka.consumer.source.topic=my_log_topic

 #Number of consumer threads
 kafka.consumer.pool.count=5

 # time in ms to wait for new messages to arrive when calling poll() on Kafka brokers , if there are no messages right away
-# WARNING: make sure this value is not higher than kafka.consumer.session.timeout.ms !!!
+# WARNING: make sure this value is not higher than session.timeout.ms !!!
 # default: 10 sec = 10 x 1000 = 10000 ms
 kafka.consumer.poll.interval.ms=10000

 # number of time poll records will be attempted to be re-processed in the event of a recoverable exception
 # from the IBatchMessageProcessor.beforeCommitCallBack() method
 kafka.consumer.poll.retry.limit=5

-# time delay in ms before retires of the poll records in the event of a recoverable exception
+# time delay in ms before retries of the poll records in the event of a recoverable exception
 # from the IBatchMessageProcessor.beforeCommitCallBack() method
 kafka.consumer.poll.retry.delay.interval.ms=1000
 # in the case when the max limit of recoverable exceptions was reached:
 # if set to TRUE - ignore the exception and continue processing the next poll()
-# if set to FALSE - throw ConcumerUnrecoverableException and shutdown the Consumer
+# if set to FALSE - throw ConsumerUnrecoverableException and shutdown the Consumer
 kafka.consumer.ignore.overlimit.recoverable.errors=false
```
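The `kafka.consumer.property.prefix` convention described in the comments above (every property starting with the prefix is forwarded with the prefix stripped) can be sketched in a few lines of Java. This is an illustrative helper, not the project's actual code; the class and method names are invented for the example:

```java
import java.util.Properties;

public class PrefixedProperties {
    // Illustrative helper: copy every property whose key starts with the
    // given prefix into a new Properties object, with the prefix removed.
    // E.g. "consumer.kafka.property.bootstrap.servers" becomes
    // "bootstrap.servers", which a Kafka consumer can use directly.
    public static Properties stripPrefix(Properties source, String prefix) {
        Properties result = new Properties();
        for (String key : source.stringPropertyNames()) {
            if (key.startsWith(prefix)) {
                result.setProperty(key.substring(prefix.length()),
                                   source.getProperty(key));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Properties app = new Properties();
        app.setProperty("consumer.kafka.property.bootstrap.servers", "localhost:9092");
        app.setProperty("consumer.kafka.property.group.id", "kafka-batch-consumer");
        app.setProperty("kafka.consumer.pool.count", "5"); // unprefixed: not forwarded
        Properties kafka = stripPrefix(app, "consumer.kafka.property.");
        System.out.println(kafka.getProperty("bootstrap.servers")); // localhost:9092
    }
}
```

Keeping one prefix for pass-through Kafka properties and plain keys for indexer-level settings lets the same file configure both layers without name clashes.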
```diff
 ### ElasticSearch properties ####################################
-# ElasticSearch Host and Port List for all the nodes
-# Example: elasticsearch.hosts.list=machine_1_ip:9300,machine_2_ip:9300
+# ElasticSearch host and port List for all the nodes
+# example: elasticsearch.hosts.list=machine_1_ip:9300,machine_2_ip:9300
 elasticsearch.hosts.list=localhost:9300

-# Name of the ElasticSearch Cluster that messages will be posted to;
-# Tip: Its not a good idea to use the default name "ElasticSearch" as your cluster name.
+# name of the ElasticSearch Cluster that messages will be posted to;
 elasticsearch.cluster.name=KafkaESCluster

-# ES Index Name that messages will be posted/indexed to; this can be customized via using a custom IndexHandler implementation class
-# Default: "kafkaESIndex"
-elasticsearch.index.name=kafkaESIndex
+# ES Index Name that messages will be posted/indexed to;
+# this can be customized in your own implementation of a batch processor, for example in the processMessage() method
+elasticsearch.index.name=kafka-es-index

-# ES Index Type that messages will be posted/indexed to; this can be customized via using a custom IndexHandler implementation class
-# Default: “kafkaESType”
+# TODO deprecate this
+# ES Index Type that messages will be posted/indexed to; this can be customized in your own implementation of a batch processor
 elasticsearch.index.type=kafkaESType

-#Sleep time in ms between re-attempts of sending batch to ES , in case of SERVICE_UNAVAILABLE response
-# Default: 10000
+# Sleep time in ms between re-attempts of sending batch to ES , in case of SERVICE_UNAVAILABLE response
+# default: 10s = 10*1000 = 10000ms
 elasticsearch.reconnect.attempt.wait.ms=10000

 # number of times to try to re-connect to ES when performing batch indexing , if connection to ES fails
 elasticsearch.indexing.retry.attempts=2
 # sleep time in ms between attempts to connect to ES
 # default: 10s = 10*1000 = 10000ms
 elasticsearch.indexing.retry.sleep.ms=10000
```
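The `kafka.consumer.poll.retry.*` and `kafka.consumer.ignore.overlimit.recoverable.errors` settings imply a retry loop roughly like the following. This is a hedged sketch, not the project's implementation: the callback and exception names mirror the `IBatchMessageProcessor.beforeCommitCallBack()` and `ConsumerUnrecoverableException` mentioned in the config comments, but the surrounding types and method are invented:

```java
public class PollRetrySketch {

    // Stand-in for the beforeCommitCallBack() hook mentioned in the config.
    interface BatchCallback {
        void beforeCommitCallBack() throws Exception;
    }

    // Stand-in for the exception named in the config comments.
    static class ConsumerUnrecoverableException extends RuntimeException {
        ConsumerUnrecoverableException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Retry the callback up to retryLimit times (kafka.consumer.poll.retry.limit),
    // sleeping delayMs between attempts (kafka.consumer.poll.retry.delay.interval.ms).
    // On exhaustion, either skip this poll() batch (ignoreOverlimitErrors=true) or
    // shut the consumer down by throwing (ignoreOverlimitErrors=false).
    public static boolean runWithRetries(BatchCallback cb, int retryLimit,
                                         long delayMs, boolean ignoreOverlimitErrors)
            throws InterruptedException {
        Exception last = null;
        for (int attempt = 1; attempt <= retryLimit; attempt++) {
            try {
                cb.beforeCommitCallBack();
                return true;                // success: safe to commit offsets
            } catch (Exception e) {
                last = e;                   // recoverable: wait and retry
                Thread.sleep(delayMs);
            }
        }
        if (ignoreOverlimitErrors) {
            return false;                   // drop this batch, continue with next poll()
        }
        throw new ConsumerUnrecoverableException("retry limit exceeded", last);
    }
}
```

Note the config's warning still applies to any such loop: `retryLimit * delayMs` (plus processing time) must stay well under the Kafka session timeout, or the broker will consider the consumer dead and rebalance mid-retry.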
Review comment: 6.6 is the latest one