Releases: nats-io/nats-operator
Release v0.8.3
Installing
Docker Image:
docker run natsio/nats-operator:0.8.3
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.8.3/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.8.3/10-deployment.yaml
Added
- Multiple arch builds
- Support for Kubernetes 1.22.2
- Ability to annotate generated services (see the sketch after this list)
- Cluster name support
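A minimal sketch of annotating the generated services follows; the serviceAnnotations field name is an assumption not confirmed by these notes, so verify the exact key against the NatsCluster CRD shipped with v0.8.3:
---
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats
spec:
  size: 3
  # Assumed key for annotating the generated services; check the CRD.
  serviceAnnotations:
    prometheus.io/scrape: "true"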
Release v0.7.4
Installing
Docker Image:
docker run connecteverything/nats-operator:0.7.4
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.7.4/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.7.4/10-deployment.yaml
Added
- Added support for leafnode remotes (#284):
---
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "2.1.7"
  pod:
    volumeMounts:
    - name: user-credentials
      mountPath: /etc/nats-creds
      readOnly: true
  leafnodeConfig:
    remotes:
    - url: nats://1.2.3.4:7422
      credentials: /etc/nats-creds/user.ncreds
  template:
    spec:
      volumes:
      - name: user-credentials
        secret:
          secretName: user-credentials
- Added support for pod annotations in the Helm Charts (#272)
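A minimal values.yaml sketch for the Helm chart change above; the podAnnotations key name is an assumption, so confirm it against the chart's default values:
# values.yaml (assumed key name)
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "7777"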
Fixed
- Fixed config reloader sidecar to also watch TLS certs (#281)
- Fixed support for using gateways when no external IP access is enabled (#282)
Changed
- Changed default image to nats:2.1.8
- Changed Go version to 1.15.1 in nats-operator container
Release v0.7.2
Installing
Docker Image:
connecteverything/nats-operator:0.7.2
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.7.2/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.7.2/10-deployment.yaml
Added
- Added support for the reject_unknown gateway option
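A minimal NatsCluster sketch for enabling this; the rejectUnknown key under gatewayConfig is an assumption based on the option name, so verify it against the CRD for this release:
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats-gateways"
spec:
  size: 3
  gatewayConfig:
    name: example
    hostPort: 32328
    # Assumed key mapping to the NATS reject_unknown gateway option.
    rejectUnknown: true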
Release v0.7.0
Installing
Docker Image:
connecteverything/nats-operator:0.7.0
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.7.0/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.7.0/10-deployment.yaml
Added
- Added support to template the nats container
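A sketch of what templating the nats container might look like, assuming the operator merges a container named nats from the pod template into the pod it generates; treat this mechanism as an assumption and check the release documentation:
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
  template:
    spec:
      containers:
      # Assumed: a container entry named "nats" is merged into the
      # generated nats container (here, to set resource limits).
      - name: nats
        resources:
          limits:
            cpu: "1"
            memory: 512Mi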
Fixed
- Many helm chart related fixes
Changed
- Cluster-scoped mode was changed to support NatsServiceRoles across different namespaces.
The namespace of the service account dictates where the bound token secret
will be created. To try the full example with cluster-scoped mode in minikube (a sample NatsServiceRole is shown after the commands):
$ minikube start \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
--extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-api-audiences=api,spire-server \
--extra-config=apiserver.authorization-mode=Node,RBAC \
--extra-config=kubelet.authentication-token-webhook=true
$ kubectl apply -f example/nats-operator-cluster-scoped-rbac.yaml
$ kubectl apply -f example/nats-operator-cluster-scoped.yaml
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-7fqmb 1/1 Running 0 5m57s
kube-system coredns-66bff467f8-s9q4q 1/1 Running 0 5m57s
kube-system etcd-minikube 1/1 Running 0 5m58s
kube-system kube-apiserver-minikube 1/1 Running 0 5m58s
kube-system kube-controller-manager-minikube 1/1 Running 0 5m58s
kube-system kube-proxy-7s8w5 1/1 Running 0 5m57s
kube-system kube-scheduler-minikube 1/1 Running 0 5m58s
kube-system storage-provisioner 1/1 Running 0 5m56s
my-admin-app-ns nats-admin-user-pod 1/1 Running 0 92s
my-app-ns nats-user-pod 1/1 Running 0 92s
nats-io nats-operator-6f545874b4-hvl75 1/1 Running 0 2m52s
nats-system nats-cluster-1 2/2 Running 0 84s
nats-system nats-cluster-2 2/2 Running 0 74s
nats-system nats-cluster-3 2/2 Running 0 68s
$ kubectl exec -n my-app-ns -it nats-user-pod -- /bin/sh
$ nats-sub -s nats://nats-user:`cat /var/run/secrets/nats.io/token`@nats-cluster.nats-system foo.bar
$ nats-sub -s nats://nats-user:`cat /var/run/secrets/nats.io/token`@nats-cluster.nats-system foo.asdf
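For reference, the NatsServiceRole backing a pod like nats-user-pod looks roughly as follows; the namespace, cluster label value, and subjects are illustrative:
---
apiVersion: nats.io/v1alpha2
kind: NatsServiceRole
metadata:
  name: nats-user
  # In cluster-scoped mode, this lives alongside the service account it binds to.
  namespace: my-app-ns
  labels:
    nats_cluster: nats-cluster
spec:
  permissions:
    publish: ["foo.bar"]
    subscribe: ["foo.*"]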
Release v0.6.2
Added
- Added support to define TLS cipher suites and curve preferences
Example:
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "tls-nats"
spec:
  # Number of nodes in the cluster
  size: 3
  tls:
    # Certificates to secure the NATS client connections
    serverSecret: "nats-server-tls"
    # Name of the CA in serverSecret
    serverSecretCAFileName: "ca.crt"
    # Name of the key in serverSecret
    serverSecretKeyFileName: "tls.key"
    # Name of the certificate in serverSecret
    serverSecretCertFileName: "tls.crt"
    cipherSuites:
    - TLS_...
    curvePreferences:
    - example...
Release v0.6.0
Installing
Docker Image:
connecteverything/nats-operator:0.6.0
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.6.0/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.6.0/10-deployment.yaml
Changed
Release v0.5.0
Installing
Docker Image:
connecteverything/nats-operator:0.5.0-v1alpha2
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.5.0/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.5.0/10-deployment.yaml
Changelog
This version adds support for the new NATS v2 decentralized auth features, and for super clusters using gateways and leafnodes. Full example below:
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats-super-cluster"
spec:
  size: 3
  version: "edge-v2.0.0-RC12"
  serverImage: "synadia/nats-server"
  natsConfig:
    debug: true
    trace: true
  tls:
    # TLS timeout for client connections
    clientsTLSTimeout: 5
    # TLS timeout for gateway connections
    gatewaysTLSTimeout: 5
    # Certificates to secure the NATS client connections:
    serverSecret: "nats-server-tls"
    serverSecretCAFileName: "ca.crt"
    serverSecretKeyFileName: "tls.key"
    serverSecretCertFileName: "tls.crt"
    # Certificates to secure the gateway connections.
    gatewaySecret: "nats-gateways-tls"
    gatewaySecretCAFileName: "ca.crt"
    gatewaySecretKeyFileName: "tls.key"
    gatewaySecretCertFileName: "tls.crt"
    # Certificates to secure the leafnode connections.
    leafnodeSecret: "nats-leafnodes-tls"
    leafnodeSecretCAFileName: "ca.crt"
    leafnodeSecretKeyFileName: "tls.key"
    leafnodeSecretCertFileName: "tls.crt"
  pod:
    enableClientsHostPort: true
    advertiseExternalIP: true
  gatewayConfig:
    name: example
    hostPort: 32328
    gateways:
    - name: example
      url: nats://example.com:32328
  leafnodeConfig:
    hostPort: 4224
  operatorConfig:
    secret: operator-jwt
    systemAccount: "AASYS..."
    resolver: URL(https://example.com/jwt/v1/accounts/)
  template:
    spec:
      # Required to be able to look up the public IP address of a server.
      serviceAccountName: "nats-server"
Release v0.4.5
Installing
Docker Image:
connecteverything/nats-operator:0.4.5-v1alpha2
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.4.5/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.4.5/10-deployment.yaml
Added
- Added support for specifying a clients auth file via clientsAuthFile; example below:
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: nats-auth-file-example
  namespace: default
spec:
  size: 1
  version: "1.4.1"
  natsConfig:
    maxPayload: 20971520
  pod:
    enableConfigReload: true
    volumeMounts:
    - name: authconfig
      mountPath: /etc/nats-config/authconfig
  auth:
    # Needs to be under /etc/nats-config where nats looks
    # for its config file, or it won't be able to be included
    # by /etc/nats-config/gnatsd.conf
    clientsAuthFile: "authconfig/auth.json"
  template:
    spec:
      initContainers:
      - name: secret-getter
        image: "busybox"
        command: ["sh", "-c", "echo 'users = [ { user: 'foo', pass: 'bar' } ]' > /etc/nats-config/authconfig/auth.json"]
        volumeMounts:
        - name: authconfig
          mountPath: /etc/nats-config/authconfig
      volumes:
      - name: authconfig
        emptyDir: {}
- Add support to toggle TLS certificate verification for clients (#180):
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
  serverImage: "nats"
  version: "1.4.1"
  tls:
    verify: true
    serverSecret: "nats-certs"
- Add support for multiple config files to reloader sidecar (#171)
Changed
- Updated default reloader sidecar version to connecteverything/nats-server-config-reloader:0.4.5-v1alpha2
- Updated RBAC to use fewer permissions (#143)
Release v0.4.4
Installing
Docker Image:
connecteverything/nats-operator:0.4.4-v1alpha2
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.4.4/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.4.4/10-deployment.yaml
Added
- Add fields to extend TLS timeout (#154)
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
  tls:
    serverSecret: "nats-clients-tls"
    routesSecret: "nats-routes-tls"
    clientsTLSTimeout: 5
    routesTLSTimeout: 5
Changed
- Reversed order of CRD init operations (#155)
- Updated metrics prometheus-nats-exporter to version 0.2.0 and added channelz and serverz to the arguments of the metrics container (#151)
Fixed
- Fixed an issue where service role tokens were deleted when the same token was used multiple times
Removed
- Removed garbage collection code; rely on Kubernetes cascading deletes via owner references (#150)
Release v0.4.3
Installing
Docker Image: connecteverything/nats-operator:0.4.3-v1alpha2
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.4.3/00-prereqs.yaml
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.4.3/10-deployment.yaml
Added
- Added support to customize some of the NATS configuration (#134):
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats-custom"
spec:
  size: 3
  version: "1.4.1"
  natsConfig:
    debug: true
    trace: true
    # Duration within quotes
    writeDeadline: "5s"
    # In bytes, in this case 5MB
    maxPayload: 5242880
    maxConnections: 10
    maxSubscriptions: 10
    maxPending: 1024 # In bytes
    disableLogtime: true
    maxControlLine: 2048
  pod:
    # NOTE: Only supported in Kubernetes v1.12+.
    enableConfigReload: true
    # Defaults, but can be customized to use a different image
    reloaderImage: "connecteverything/nats-server-config-reloader"
    reloaderImageTag: "0.2.2-v1alpha2"
    reloaderImagePullPolicy: "IfNotPresent"
Fixed
- Fixed support for allow/deny in permissions (#136)
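As a rough sketch, allow/deny permissions follow the NATS publish/subscribe permission shape; the NatsServiceRole below is illustrative and assumes the operator passes these maps through to the server configuration:
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsServiceRole"
metadata:
  name: "nats-user"
  labels:
    nats_cluster: "nats"
spec:
  permissions:
    publish:
      allow: ["foo.*"]
      deny: ["foo.secret"]
    subscribe:
      allow: [">"]
      deny: ["secret.>"]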