
Releases: confluentinc/confluent-kafka-go

v2.0.2

23 Jan 09:33
cde2827

This is a feature release:

  • Added SetSaslCredentials. This new method (on the Producer, Consumer, and AdminClient) allows modifying the stored SASL PLAIN/SCRAM credentials that will be used for subsequent (new) connections to a broker (#879).
  • Channel-based producer (Producer ProduceChannel()) and channel-based consumer (Consumer Events()) are deprecated (#894).
  • Added IsTimeout() on Error type. This is a convenience method that checks if the error is due to a timeout (#903).
  • The timeout parameter of Seek() is now ignored and an infinite timeout is used instead; the method blocks until the fetcher state is updated, typically within microseconds (#906).
  • The minimum version of Go supported has been changed from 1.11 to 1.14.
  • KIP-222 Add Consumer Group operations to Admin API.
  • KIP-518 Allow listing consumer groups per state.
  • KIP-396 Partially implemented: support for AlterConsumerGroupOffsets.
  • As a result of the above KIPs, the following were added (#923):
    • ListConsumerGroups Admin operation. Supports listing by state.
    • DescribeConsumerGroups Admin operation. Supports multiple groups.
    • DeleteConsumerGroups Admin operation. Supports multiple groups (@vsantwana).
    • ListConsumerGroupOffsets Admin operation. Currently, only supports 1 group with multiple partitions. Supports the requireStable option.
    • AlterConsumerGroupOffsets Admin operation. Currently, only supports 1 group with multiple offsets.
  • Added SetRoundtripDuration to the mock broker for setting RTT delay for a given mock broker (@kkoehler, #892).
  • Built-in support for Linux arm64 (#933).

Fixes

  • The SpecificDeserializer.Deserialize method did not return its result correctly and was hence unusable; this has been fixed (#849).
  • The schema ID to use during serialization, specified in SerializerConfig, was ignored. It is now used as expected (@perdue, #870).
  • Creating a new schema registry client with an SSL CA Certificate led to a panic. This was due to a nil pointer, fixed with proper initialization (@HansK-p, @ju-popov, #878).

Upgrade Considerations

  • The OpenSSL 3.0.x upgrade in librdkafka requires a major version bump, as some legacy ciphers must now be explicitly configured to continue working (though using them is highly discouraged). The rest of the API remains backward compatible; see the librdkafka release notes below for details.
  • As required by the Go module system, a suffix with the new major version has been added to the module name, and package imports must reflect this change.
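As a sketch, the import-path change looks like this (the package path within the module is otherwise unchanged):

```go
// v1.x import:
// import "github.com/confluentinc/confluent-kafka-go/kafka"

// v2.x import, with the /v2 major-version suffix required by Go modules:
import "github.com/confluentinc/confluent-kafka-go/v2/kafka"
```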

confluent-kafka-go is based on librdkafka v2.0.2, see the librdkafka v2.0.0 release notes and later ones for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no confluent-kafka-go v2.0.0 or v2.0.1 releases.

v1.9.2

02 Aug 20:47
1092e01

v1.9.2 is a maintenance release:

confluent-kafka-go is based on librdkafka v1.9.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.9.1

07 Jul 19:51
5811a4f

v1.9.1 is a feature release:

confluent-kafka-go is based on librdkafka v1.9.1, see the librdkafka release notes for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.9.0

21 Jun 19:44
c6c4e03

v1.9.0 is a feature release:

Fixes

  • Fix Rebalance events behavior for static membership (@jliunyu, #757, #798).
  • Fix consumer close taking 10 seconds when there's no rebalance needed (@jliunyu, #757).

confluent-kafka-go is based on librdkafka v1.9.0, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.8.2

14 Dec 11:15

confluent-kafka-go v1.8.2

This is a maintenance release:

  • Bundles librdkafka v1.8.2
  • Check termination channel while reading delivery reports (by @zjj)
  • Added convenience method Consumer.StoreMessage() (@finncolman, #676)

confluent-kafka-go is based on librdkafka v1.8.2, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Note: There were no confluent-kafka-go v1.8.0 and v1.8.1 releases.

v1.7.0

11 May 08:20
13ae115

confluent-kafka-go is based on librdkafka v1.7.0, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Enhancements

  • Experimental Windows support (by @neptoess).
  • The produced message headers are now available in the delivery report
    Message.Headers if the Producer's go.delivery.report.fields
    configuration property is set to include headers, e.g.:
    "go.delivery.report.fields": "key,value,headers"
    This comes at a performance cost and is thus disabled by default.

Fixes

  • AdminClient.CreateTopics() previously did not accept the default value (-1)
    for ReplicationFactor without an explicit ReplicaAssignment; this is now
    fixed.

v1.6.1

11 Mar 11:15
ef84d2e

v1.6.1

v1.6.1 is a feature release:

  • KIP-429: Incremental consumer rebalancing - see cooperative_consumer_example.go
    for an example of how to use the new incremental rebalancing consumer.
  • KIP-480: Sticky producer partitioner - increase throughput and decrease
    latency by sticking to a single random partition for some time.
  • KIP-447: Scalable transactional producer - a single transaction producer can
    now be used for multiple input partitions.
  • Add support for go.delivery.report.fields by @kevinconaway

Fixes

  • For dynamically linked builds (-tags dynamic) there was previously a possible conflict
    between the bundled librdkafka headers and the system installed ones. This is now fixed. (@KJTsanaktsidis)

confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.5.2

05 Nov 12:09
8bd1e41

confluent-kafka-go v1.5.2

v1.5.2 is a maintenance release with the following fixes and enhancements:

  • Bundles librdkafka v1.5.2 - see release notes for all enhancements and fixes.
  • Documentation fixes

confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.4.2

07 May 07:06
bb5bb31

confluent-kafka-go v1.4.2

v1.4.2 is a maintenance release:

  • The bundled librdkafka directory (kafka/librdkafka) is no longer pruned when vendoring with go mod vendor.
  • Bundled librdkafka upgraded to v1.4.2, highlights:
    • System root CA certificates should now be picked up automatically on most platforms
    • Fix produce/consume hang after partition goes away and comes back,
      such as when a topic is deleted and re-created (regression in v1.3.0).

librdkafka v1.4.2 changes

See the librdkafka v1.4.2 release notes for changes to the bundled librdkafka included with the Go client.

v1.4.0

08 Apr 15:32
00f7f54

confluent-kafka-go v1.4.0

  • Added Transactional Producer API and full Exactly-Once-Semantics (EOS) support.
  • A prebuilt version of the latest version of librdkafka is now bundled with the confluent-kafka-go client. A separate installation of librdkafka is NO LONGER REQUIRED or used.
  • Added support for sending client (librdkafka) logs to Logs() channel.
  • Added Consumer.Position() to retrieve the current consumer offsets.
  • The Error type now has additional attributes, such as IsRetriable() to determine whether the errored operation can be retried. This is currently only exposed for the Transactional API.
  • Removed support for Go < 1.9

Transactional API

librdkafka and confluent-kafka-go now have complete Exactly-Once-Semantics (EOS) functionality, supporting the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0) and full producer transaction support (in this release).
This enables developers to create Exactly-Once applications with Apache Kafka.

See the Transactions in Apache Kafka page for an introduction and check the transactions example for a complete transactional application example.

Bundled librdkafka

The confluent-kafka-go client now comes with batteries included: prebuilt versions of librdkafka for the most popular platforms are bundled, so you no longer need to install or manage librdkafka separately.

Supported platforms are:

  • Mac OSX
  • glibc-based Linux x64 (e.g., RedHat, Debian, etc) - lacks Kerberos/GSSAPI support
  • musl-based Linux x64 (Alpine) - lacks Kerberos/GSSAPI support

These prebuilt librdkafka builds include all features (e.g., SSL, compression, etc), except that the Linux builds lack Kerberos/GSSAPI support due to libsasl2 dependencies.
If you need Kerberos support, or you are running on a platform for which prebuilt librdkafka is not available (see above), install librdkafka separately (preferably through the Confluent APT and RPM repositories) and build your application with -tags dynamic to disable the builtin librdkafka and instead link your application dynamically against librdkafka.
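For example, assuming librdkafka development packages are already installed system-wide, a dynamically linked build looks like:

```shell
# Link against the system librdkafka instead of the bundled one:
go build -tags dynamic ./...
```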

librdkafka v1.4.0 changes

Full librdkafka v1.4.0 release notes.

Highlights:

  • KIP-98: Transactional Producer API
  • KIP-345: Static consumer group membership (by @rnpridgeon)
  • KIP-511: Report client software name and version to broker
  • SASL SCRAM security fixes.