Releases: netscaler/netscaler-observability-exporter
Release 1.10.001
Version 1.10.001
Fixed issues
- Fixed the issue of the ServerUrl option for Kafka not accepting multiple bootstrap brokers. With this fix, the ServerUrl option now accepts a comma-separated list of Kafka bootstrap brokers, for example, X.X.X.X:9092,Y.Y.Y.Y:9092.
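For illustration, the Kafka section of the deployment ConfigMap might then look like the following sketch. Only the ServerUrl behavior is confirmed by these notes; the surrounding key names and values are assumptions, so check the Kafka deployment documentation for the exact schema.

```yaml
# Hypothetical sketch of a Kafka endpoint ConfigMap fragment.
# ServerUrl accepting multiple brokers is the fixed behavior; the
# other keys and values are illustrative assumptions.
KAFKA:
  ServerUrl: "X.X.X.X:9092,Y.Y.Y.Y:9092"   # comma-separated bootstrap brokers
  KafkaTopic: "HTTPFIX"                     # illustrative topic name
```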
Version 1.9.001
What's new
Support for DEBUG severity level
A new severity level, DEBUG, is now supported. The DEBUG severity level provides comprehensive container logging that contains fatal, error, informational, and debug messages.
Numerous logs are now supported. For the complete list of logs, see Log descriptions.
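If the severity level is set through the exporter's configuration, enabling the new level might look like this sketch. The LogLevel key name is an assumption, not confirmed by these notes; see the Log descriptions documentation for the actual parameter.

```yaml
# Assumed key name; illustrative only.
LogLevel: "DEBUG"   # emits fatal, error, informational, and debug messages
```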
Support to export Auditlogs and Events to Kafka
NetScaler Observability Exporter can now export NetScaler Events and Auditlogs to Kafka in JSON format.
Fixed issues
- Fixed a data loss issue with Splunk export over SSL. As part of the fix, a new field, ConnectionPoolSize, is introduced. ConnectionPoolSize and MaxConnections can be used to control the rate at which data is exported. For details of these fields, see Description of configuration parameters.
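As a sketch, the two fields might be tuned together in the Splunk section of the ConfigMap. Only ConnectionPoolSize and MaxConnections are named in these notes; the surrounding keys and the example values are assumptions.

```yaml
# Hypothetical Splunk endpoint fragment; values are illustrative.
SPLUNK:
  ServerUrl: "https://splunk.example.com:8088"   # illustrative endpoint address
  ConnectionPoolSize: 10   # new field: pooled SSL connections to Splunk
  MaxConnections: 20       # together with the pool size, caps the export rate
```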
Version 1.8.001
What's new
Support to enable logging
Now, you can enable logging on NetScaler Observability Exporter according to different severity levels. These logs help you monitor and troubleshoot NetScaler Observability Exporter deployments. For more information, see the NetScaler Observability Exporter documentation.
Fixed issues
- NetScaler Observability Exporter occasionally stops sending data to Splunk under low traffic conditions. This issue is fixed now.
- For Kafka deployments, the AVRO schema is showing null values. This issue is fixed now.
Release 1.7.001
Version 1.7.001
What's new
Enhancements
The Citrix ADC Observability Exporter image size is reduced, making the image lighter.
Release 1.6.001
Version 1.6.001
What's new
Support for exporting transactions in the JSON format from Citrix ADC Observability Exporter to Kafka
You can now export transactions from Citrix ADC Observability Exporter to Kafka in the JSON format in addition to the AVRO format.
A new parameter, DataFormat, is introduced in the Kafka deployment ConfigMap to support transactions in the JSON format.
For more information, see Deploy Citrix ADC Observability Exporter with Kafka.
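A minimal sketch of the new parameter in the Kafka ConfigMap, assuming the same section layout as other endpoint settings; only DataFormat itself is confirmed by these notes.

```yaml
# DataFormat selects the transaction encoding; AVRO remains available.
KAFKA:
  ServerUrl: "X.X.X.X:9092"   # illustrative broker address
  DataFormat: "JSON"          # export transactions as JSON instead of AVRO
```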
Enhancements
Now, you can run Citrix ADC Observability Exporter with the user ID (UID) nobody under the group ID (GID) nogroup on both Kubernetes and OpenShift platforms. The UID nobody and GID nogroup are the least privileged user and group IDs within the system.
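On Kubernetes, this maps naturally onto a pod-level securityContext. The numeric IDs below are assumptions: nobody and nogroup conventionally map to 65534 on most Linux distributions, but verify against the image.

```yaml
# Sketch of a pod securityContext for the nobody/nogroup user.
securityContext:
  runAsNonRoot: true
  runAsUser: 65534    # nobody (conventional value; verify for your image)
  runAsGroup: 65534   # nogroup (conventional value; verify for your image)
```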
Fixed issues
- Earlier, Citrix ADC Observability Exporter was sending out invalid JSON for fields such as srcMetadataJSON, srcLabelsJSON, dstMetadataJSON, and dstLabelsJSON for JSON-specific endpoints. This issue is fixed now.
- Earlier, Citrix ADC Observability Exporter was not logging HTTP response status codes. This issue is fixed now.
Release 1.5.001
Version 1.5.001
Enhancements
- Added rate-limiting support for transactions in JSON-based endpoints: Elasticsearch, Splunk, and Zipkin. The following parameters are added for rate-limiting support:
transRateLimitEnabled
transRateLimit
transQueueLimit
transRateLimitWindow
For more information, see rate limiting support for transactions in Elasticsearch, Splunk, and Zipkin.
- You can now run Citrix ADC Observability Exporter as a non-root user named coe_guest with the user ID 1000 and the group ID 1000.
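The four rate-limiting parameters introduced above might be set as in the following sketch; the parameter names come from these notes, while the placement, values, and units are assumptions.

```yaml
# Illustrative rate-limiting settings for a JSON-based endpoint.
transRateLimitEnabled: "yes"
transRateLimit: 1000        # assumed: transactions allowed per window
transQueueLimit: 5000       # assumed: maximum transactions queued
transRateLimitWindow: 5     # assumed: window length in seconds
```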
Release 1.4.001
Version 1.4.001
What's new
Sample dashboards for the Splunk endpoint
- Sample dashboards (HTTP, TCP, and SSL) are provided by Citrix to visualize the data exported by Citrix ADC Observability Exporter to the Splunk endpoint. Using the dashboards, you can monitor the Citrix ADC traffic.
Enhancement
Previously, some SSL flags and SSL cipher values were encoded and exported in the same format in both the JSON and Avro records. Now, these values are decoded while being exported as JSON records.
Bug fixes
- Avro files exported to the Kafka endpoint were not getting deleted from Citrix ADC Observability Exporter. Now, the exported files are deleted.
Release 1.3.001
Version 1.3.001
What's New
Support for Splunk Enterprise
- Citrix ADC Observability Exporter now supports Splunk Enterprise as an endpoint. You can add Splunk Enterprise as an endpoint to receive audit logs, events, and transactions from Citrix ADC for analysis. Splunk Enterprise provides a graphical representation of this data. For more information, see Splunk Enterprise.
Bug Fixes
- Fixed Host header issues in Citrix ADC Observability Exporter's request and response handling for all endpoints.
Release 1.2.001
Version 1.2.001
What's New
Elasticsearch support enhancements
- Earlier, when Citrix Observability Exporter was sending transaction data as JSON to the Elasticsearch server, the data fields were in the raw data format. Now, some of the fields, such as the IP address, timestamps, and a few SSL flag values, are in string format.
- Now, you can configure the following options for the Elasticsearch index for flexibility and scalability.
IndexPrefix: Specifies the value that is used as a prefix to the index string.
IndexInterval: Specifies the interval for creating the Elasticsearch index.
For more information, see Elasticsearch support enhancements.
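A sketch of the two index options in the Elasticsearch section of the ConfigMap; the key names come from these notes, while the nesting and example values are assumptions.

```yaml
# Hypothetical Elasticsearch endpoint fragment.
ES:
  ServerUrl: "X.X.X.X:9200"   # illustrative Elasticsearch address
  IndexPrefix: "adc_coe"      # assumed example: prefix of the index name
  IndexInterval: "daily"      # assumed example: a new index is created daily
```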
Change in the Citrix Observability Exporter configuration file structure
- The structure of the Citrix Observability Exporter configuration file has changed in this release. You should use the updated YAML files to deploy Citrix Observability Exporter for this release.
Release 1.1.001
Version 1.1.001
What's New
Prometheus support
Citrix Observability Exporter now supports collecting time series data (metrics) from Citrix ADC instances and exporting them to Prometheus. You can then add Prometheus as a data source in Grafana to graphically view and analyze the Citrix ADC metrics.
For more information, see Time series data support.
Support for deployment using Helm charts
You can now deploy Citrix Observability Exporter using Helm charts. To deploy Citrix Observability Exporter using Helm charts, see Deploy using Helm charts.
Custom header logging
Custom header logging enables logging of all HTTP headers of a transaction and is currently supported on the Kafka endpoint.
For more information, see Custom header logging.