From 6df3f4892e9c96606f5e73f72c530b2a7358e6af Mon Sep 17 00:00:00 2001 From: plameniv <47831130+plameniv@users.noreply.github.com> Date: Tue, 22 Jun 2021 15:59:48 -0400 Subject: [PATCH] Merge master into develop (#267) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update .travis.yml * Decorator returning class with empty string name (hotfix 1.1) (#74) * Decorator returning class with empty string name [full cli] * bump version [full ci] * build change to address issue with yarn 1.22.0 * [full ci] * core: support multiple concurrent writers for BoundedPriorityQueue (#78) It's possible to get into a state where we've hit our limit on items that we can add to the bounded priority queue and a large backlog of items are accumulated. Once we can begin to process that backlog after a `whenNotFull` promise resolves, the first item in the backlog will create a new `whenNotFull` and proceed to await it. Subsequent items in backlog queue will do the same but overwrite the promise of the first item leading to a chain of promises for backlogged items that are unresolvable. This PR ensures that we never overwrite the `whenNotFull` promise for backlogged queue items and we can resolve them all eventually. * core: unable to close bounded priority queue (#114) core: fix issue with bounded priority queue not correctly supporting closing of queues immediately after it was drained leading to potential `Cannot read property 'resolve' of undefined` errors due to whenNotFull being undefined. * Cookie Cutter 1.2 (#119) * parent 598250b089f6ad64cf3655c110b6756dc0ef3302 author sklose 1574703790 -0500 committer sklose 1579188453 -0500 parent 598250b089f6ad64cf3655c110b6756dc0ef3302 author sklose 1574703790 -0500 committer sklose 1579188394 -0500 initial commit develop * Bump up the versions for google cloud NPM packages (#11) * Integration command exits with jest exit code (#40) * Integration command exits with jest exit code [full ci] * revert version bumps [full ci] * Fix failing Kafka integration test (#43) * Remove Deprecated new Buffer usage (#46) * #12 remove deprecated new Buffer usage * Support Changes in Metrics Tags for Prometheus (#38) * Support Changes in Metrics Tags for Prometheus * add integration test to .travis.yml [full ci] * set jestTimeout to a larger value [full ci] * testing change to integrate always returning 0 * [full ci] * increasing time to wait for Prometheus to scrape * [full ci] * increase wait to 120 sec [full ci] * adding localhost so Docker/Prom works for linux * [full ci] * look for ports starting with 300 [full ci] * show all results from netstat [full ci] * File service discovery for Prometheus targets [full ci] * moving targets.json generation to integration file * [full ci] * [full ci] * slimming down docker-compose.yml file * [full ci] * addressing comments * [full ci] * Clean up for review * Fix IMetricsTags to ILabelValues conversion * Addressing comments * kubernetes: adjust logic for creating a watch (#48) kubernetes: adjust logic for creating a watch * update kubernetes-client to latest version on master that contains various internal fixes for creation and handling of a watch * adjust logic for startWatch to wrap watch function inside of a promise and handle timeouts differently * change callback for watch to not be async azure: fix build error to take in correct BufferEncoding and update kb calculations * Revert commit of incorrect yarn.lock from #48 (#50) yarn.lock was committed with private registry information. 
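The `whenNotFull` fix described in #76/#78 above boils down to never replacing a pending promise while a backlog drains. A minimal TypeScript sketch of that idea (class and member names are illustrative, not the actual cookie-cutter `BoundedPriorityQueue`):

```ts
// Illustrative sketch only -- not the actual cookie-cutter implementation.
class BoundedQueue<T> {
    private items: T[] = [];
    private whenNotFull: Promise<void> | undefined;
    private notify: (() => void) | undefined;

    constructor(private readonly capacity: number) {}

    public async enqueue(item: T): Promise<void> {
        while (this.items.length >= this.capacity) {
            // The fix: only create the promise if none is pending. If every
            // backlogged enqueue created its own promise here, earlier waiters
            // would be stuck awaiting a promise whose resolver was overwritten.
            if (this.whenNotFull === undefined) {
                this.whenNotFull = new Promise<void>((resolve) => (this.notify = resolve));
            }
            await this.whenNotFull;
        }
        this.items.push(item);
    }

    public dequeue(): T | undefined {
        const item = this.items.shift();
        // Wake all waiters sharing the same promise; they re-check capacity.
        if (this.notify !== undefined) {
            this.notify();
            this.whenNotFull = undefined;
            this.notify = undefined;
        }
        return item;
    }
}
```

The `undefined` check around the resolver also mirrors the guard described in #114/#116 against `Cannot read property 'resolve' of undefined` when the queue is closed right after being drained.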
rebuild yarn.lock in order to fix deployment issues from #48. * kubernetes: fix kubernetes-client package build issue with Node 8 (#51) kubernetes-client package requires use of node version greater than 8 but we require 8 in the travis ci build pipeline. as a temporary solution till we deprecate Node 8 completely as part of #52 update the build pipeline to use a separate package.v8.json which doesn't build it. * build: fix grep check to only match if we're using node 8 (#53) * kubernetes: update @kubernetes/client-node package to 0.11.1 (#54) * kubernetes: add in additional logging details for watch errors (#56) * improve timeout of k8s watch (#59) * Allow opt-in to creating Azure queues dynamically on write (#55) * core: update ConsoleLogger to correctly log out nested objects (#60) core: update ConsoleLogger to correctly log out nested objects The console logger renders out `[object Object]` if a field is an object so destructure it into `.` delimited nested fields (e.g. foo.bar.baz) * Update ConsoleLogger to take in options letting users select a maxDepth for log outputs * kubernetes: create k8sPollSource that periodically polls for resources instead of using a watch (#61) * Create `KubernetesBase` class that both poll and watch sources extend from * Add in tests for `KubernetesPollSource` * Add preprocessor concept for Azure Queues to support messages not following the envelope format (#65) * fix missing bind when creating new azure queues (#63) * fix lint errors (#67) * refactor queue client to avoid exporting type from azure library as p… (#68) * refactor queue client to avoid exporting type from azure library as part of public api * kubernetes: add in timeout to poll request in case request to k8s hangs (#70) * Fix bug when options are ignored in QueueClient (#69) * Decorator returning class with empty string name (#73) * Decorator returning class with empty string name [full cli] * remove export, add link to TS issue [full cli] * bump version [full ci] * core: do not overwrite pending promises for the backlog of queued items (#76) It's possible to get into a state where we've hit our limit on items that we can add to the bounded priority queue and a large backlog of items are accumulated. Once we can begin to process that backlog after a `whenNotFull` promise resolves, the first item in the backlog will create a new `whenNotFull` and proceed to await it. Subsequent items in backlog queue will do the same but overwrite the promise of the first item leading to a chain of promises for backlogged items that are unresolvable. This PR ensures that we never overwrite the `whenNotFull` promise for backlogged queue items and we can resolve them all eventually. * core: version bump to 1.2.0-beta.11 (#77) * Update Cookie Cutter Dependencies (#75) * better throughput in RPC mode while guaranteeing correctness of state (#83) * Update dependencies for Node 8 (#86) * Update dependencies for Node 8 * add white space [full ci] * remove white space [full ci] * A few more deps updates [full ci] Co-authored-by: Plamen Ivanov * Reveal Azure Blob & Queue Service URL params, to allow for pointing at a local emulator. 
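The ConsoleLogger change in #60 above replaces `[object Object]` output by destructuring nested fields into `.` delimited keys, bounded by a configurable `maxDepth`. A rough sketch of that flattening (function name and signature are assumptions, not the actual implementation):

```ts
// Illustrative sketch: flatten nested log fields into "." delimited keys.
function flattenFields(
    fields: Record<string, any>,
    maxDepth: number,
    prefix = "",
    depth = 0
): Record<string, any> {
    const result: Record<string, any> = {};
    for (const [key, value] of Object.entries(fields)) {
        const path = prefix ? `${prefix}.${key}` : key;
        if (value !== null && typeof value === "object" && depth < maxDepth) {
            Object.assign(result, flattenFields(value, maxDepth, path, depth + 1));
        } else {
            result[path] = value;
        }
    }
    return result;
}

// yields { "foo.bar.baz": 1 } instead of logging "foo: [object Object]"
flattenFields({ foo: { bar: { baz: 1 } } }, 5);
```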
(#89) * Way to Inspect 'Invalid' Messages (#82) * Way to Inspect 'Invalid' Messages * unit test * remove annotator from unit test * allow publishing from inside the invalid handler * Updating docs * impoving clarity of doc entry * actual change of doc * add failSpan if input validation fails for Serial * Use custom error to signal no invalid msg handler * Missed files * Do not propagate NoInvalidHandlerError * Remove custom error and add hasInvalid function * add case in unit test * more tests for ConventionBasedMessageDispatcher * refactoring to simplify code * addressing comments Co-authored-by: Plamen Ivanov * Better Log Message when a Message Fails to Process and fix to upsertSproc SeqConErr details (#91) * Better Log Message when a Message Fails to Process * upsertSproc SeqConErr details fix * use context's logger Co-authored-by: Plamen Ivanov * Update CHANGELOG.md (#98) * prepare 1.2-rc.1 (#105) * upgrade dependencies due to vulnerabilities (#111) * bump version * update changelog / fix link Co-authored-by: Kshitiz Gupta Co-authored-by: plameniv <47831130+plameniv@users.noreply.github.com> Co-authored-by: Connor Ross Co-authored-by: Tanvir Alam Co-authored-by: Chris Pinola Co-authored-by: Ilya Butorine Co-authored-by: Plamen Ivanov Co-authored-by: Sean Halpin * Fix breaking API change in Azure Queues (#121) * Adding source unit to Config's timespanOf function * Adding Azure Queue change and other fixes Co-authored-by: Plamen Ivanov * prometheus module should not throw an error when incrementing by 0 (#128) * Have separate queue capacity per priority level (#162) Co-authored-by: Plamen Ivanov * Stabilize 1.3 (#202) * parent 598250b089f6ad64cf3655c110b6756dc0ef3302 author sklose 1574703790 -0500 committer sklose 1579188453 -0500 parent 598250b089f6ad64cf3655c110b6756dc0ef3302 author sklose 1574703790 -0500 committer sklose 1579188394 -0500 initial commit develop * Bump up the versions for google cloud NPM packages (#11) * Integration command exits with jest exit code (#40) * Integration command exits with jest exit code [full ci] * revert version bumps [full ci] * Fix failing Kafka integration test (#43) * Remove Deprecated new Buffer usage (#46) * #12 remove deprecated new Buffer usage * Support Changes in Metrics Tags for Prometheus (#38) * Support Changes in Metrics Tags for Prometheus * add integration test to .travis.yml [full ci] * set jestTimeout to a larger value [full ci] * testing change to integrate always returning 0 * [full ci] * increasing time to wait for Prometheus to scrape * [full ci] * increase wait to 120 sec [full ci] * adding localhost so Docker/Prom works for linux * [full ci] * look for ports starting with 300 [full ci] * show all results from netstat [full ci] * File service discovery for Prometheus targets [full ci] * moving targets.json generation to integration file * [full ci] * [full ci] * slimming down docker-compose.yml file * [full ci] * addressing comments * [full ci] * Clean up for review * Fix IMetricsTags to ILabelValues conversion * Addressing comments * kubernetes: adjust logic for creating a watch (#48) kubernetes: adjust logic for creating a watch * update kubernetes-client to latest version on master that contains various internal fixes for creation and handling of a watch * adjust logic for startWatch to wrap watch function inside of a promise and handle timeouts differently * change callback for watch to not be async azure: fix build error to take in correct BufferEncoding and update kb calculations * Revert commit of incorrect yarn.lock from 
#48 (#50) yarn.lock was committed with private registry information. rebuild yarn.lock in order to fix deployment issues from #48. * kubernetes: fix kubernetes-client package build issue with Node 8 (#51) kubernetes-client package requires use of node version greater than 8 but we require 8 in the travis ci build pipeline. as a temporary solution till we deprecate Node 8 completely as part of #52 update the build pipeline to use a separate package.v8.json which doesn't build it. * build: fix grep check to only match if we're using node 8 (#53) * kubernetes: update @kubernetes/client-node package to 0.11.1 (#54) * kubernetes: add in additional logging details for watch errors (#56) * improve timeout of k8s watch (#59) * Allow opt-in to creating Azure queues dynamically on write (#55) * core: update ConsoleLogger to correctly log out nested objects (#60) core: update ConsoleLogger to correctly log out nested objects The console logger renders out `[object Object]` if a field is an object so destructure it into `.` delimited nested fields (e.g. foo.bar.baz) * Update ConsoleLogger to take in options letting users select a maxDepth for log outputs * kubernetes: create k8sPollSource that periodically polls for resources instead of using a watch (#61) * Create `KubernetesBase` class that both poll and watch sources extend from * Add in tests for `KubernetesPollSource` * Add preprocessor concept for Azure Queues to support messages not following the envelope format (#65) * fix missing bind when creating new azure queues (#63) * fix lint errors (#67) * refactor queue client to avoid exporting type from azure library as p… (#68) * refactor queue client to avoid exporting type from azure library as part of public api * kubernetes: add in timeout to poll request in case request to k8s hangs (#70) * Fix bug when options are ignored in QueueClient (#69) * Decorator returning class with empty string name (#73) * Decorator returning class with empty string name [full cli] * remove export, add link to TS issue [full cli] * bump version [full ci] * core: do not overwrite pending promises for the backlog of queued items (#76) It's possible to get into a state where we've hit our limit on items that we can add to the bounded priority queue and a large backlog of items are accumulated. Once we can begin to process that backlog after a `whenNotFull` promise resolves, the first item in the backlog will create a new `whenNotFull` and proceed to await it. Subsequent items in backlog queue will do the same but overwrite the promise of the first item leading to a chain of promises for backlogged items that are unresolvable. This PR ensures that we never overwrite the `whenNotFull` promise for backlogged queue items and we can resolve them all eventually. * core: version bump to 1.2.0-beta.11 (#77) * Update Cookie Cutter Dependencies (#75) * better throughput in RPC mode while guaranteeing correctness of state (#83) * Update dependencies for Node 8 (#86) * Update dependencies for Node 8 * add white space [full ci] * remove white space [full ci] * A few more deps updates [full ci] Co-authored-by: Plamen Ivanov * Reveal Azure Blob & Queue Service URL params, to allow for pointing at a local emulator. 
(#89) * Way to Inspect 'Invalid' Messages (#82) * Way to Inspect 'Invalid' Messages * unit test * remove annotator from unit test * allow publishing from inside the invalid handler * Updating docs * impoving clarity of doc entry * actual change of doc * add failSpan if input validation fails for Serial * Use custom error to signal no invalid msg handler * Missed files * Do not propagate NoInvalidHandlerError * Remove custom error and add hasInvalid function * add case in unit test * more tests for ConventionBasedMessageDispatcher * refactoring to simplify code * addressing comments Co-authored-by: Plamen Ivanov * Better Log Message when a Message Fails to Process and fix to upsertSproc SeqConErr details (#91) * Better Log Message when a Message Fails to Process * upsertSproc SeqConErr details fix * use context's logger Co-authored-by: Plamen Ivanov * Update CHANGELOG.md (#98) * bump develop to 1.3 (#106) * Add Dead Letter Queue to QueueInputSource (#93) * Add Dead Letter Queue to QueueInputSource * Addressing comments * pass in a modified config to dead letter queue * addressing comment * changing API to expect values in milliseconds * Updating docs * Docs update with example dead letter queue config * update package.jsoon * doc nit Co-authored-by: Plamen Ivanov * merge release/1.2 into develop (#112) * upgrade dependencies due to vulnerabilities (#111) * Remove support for node 8 (#113) * core: unable to close bounded priority queue (#116) core: fix issue with bounded priority queue not correctly supporting closing of queues immediately after it was drained leading to potential Cannot read property 'resolve' of undefined errors due to whenNotFull being undefined. * ConcurrentMessageProcessor suppresses error details (#124) * ConcurrentMessageProcessor suppresses error details When message handling fails outside of the message handler the ConcurrentMessageHandler currently throws a generic Error that hides the underlying root cause error. This PR changes it to re-throw the original error, similar to what the SerialMessageProcessor is doing. * Fix breaking API change in Azure Queues (#122) * Fix breaking API change in Azure Queues (#121) * Adding source unit to Config's timespanOf function * Adding Azure Queue change and other fixes Co-authored-by: Plamen Ivanov * DeadLetterQueue fixes * bump versions * rebase and bump version Co-authored-by: Plamen Ivanov * Add RedisStreamSink & RedisStreamSource (#126) * Add GCP PubSub Sink (#125) * Bump Redis version to publish new package (#127) do version bump missing in PR #126 * prometheus module should not throw an error when incrementing by 0 (#130) merge back from master * Prevent config.parse() output from being used as input (#134) * multi cosmos collections (#81) (#117) * fix MsSqlSink throws wrong error (#140) * fix MsSqlSink throws wrong error * Update package.json * Update MssqlSink.ts * Add AMQP Sink + Source (#136) * Add AMQP Sink + Source * add the actual Sink/Source files * properly close the sink's connection * Adding initialization and disposal to source * Basic producer and consumer scripts. 
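The ConcurrentMessageProcessor fix in #124 above amounts to re-throwing the original error instead of wrapping it in a generic one; a hedged sketch (not the actual processor code):

```ts
// Illustrative only -- shows the error-handling change, not the real processor.
async function processOnce(
    handle: (msg: unknown) => Promise<void>,
    msg: unknown
): Promise<void> {
    try {
        await handle(msg);
    } catch (e) {
        // Previously a generic `throw new Error("failed to process message")`
        // here hid the root cause; re-throwing preserves the original message
        // and stack, matching what SerialMessageProcessor already did.
        throw e;
    }
}
```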
* replace AsyncPipe with BoundedPriorityQueue * Refactor to get correct produce/consume behavior * improve connection call and add port as optional * fix yaml.lock file * Adding integration test * add new line at end of yml file * Add Copyright text and set "--passWithNoTest" * add msg release listener [full ci] * add AMQP integration to travis build * fix .travis.yml [full ci] * addressing comments * Fix integration test/setup [full ci] * Adding tracing [full ci] * minor corrections Co-authored-by: Plamen Ivanov * Add metrics to AMQP (#144) * Add metrics to AMQP * bump version * switch to this.channel * trigger [full ci] * Adding periodic metrics * Add metadata [full ci] * Lint and style fix [full ci] * Addressing comments Co-authored-by: Plamen Ivanov * Add docs to AMQP package (#147) * Add docs to AMQP package * correcting module name * Addressing comments * Adding example files * minor nits * rename * Simplify config Co-authored-by: Plamen Ivanov * Add cookie-cutter-jaeger (#151) * Add cookie-cutter-jaeger * remove interface * add lock * update per comments Co-authored-by: Marco Garcia * cookie-cutter-redis: add support for multiple streams (#155) * cc-redis: add support for multiple streams * fix test * update spanLogAndSetTags * update getPendingMessagesForConsumerGroup * update RedisStreamSink * rename test * add tests * add more tests * update per feedback * update xReadGroup * full ci Co-authored-by: Marco Garcia * cookie-cutter-redis: add metrics (#156) * cookie-cutter-redis: add metrics * full ci * update docs * update docs * full ci * remove array tag Co-authored-by: Marco Garcia * Have separate queue capacity per priority level (#160) * Have separate queue capacity per priority level * Fix memory leak likely caused by promise chaining * fixing floating promises * white space change [full ci] Co-authored-by: Plamen Ivanov Co-authored-by: Sebastian Klose * Fix lz4 error by adding resolution (#161) * Fix lz4 error by adding resolution * yarn.lock file change Co-authored-by: Plamen Ivanov * Add new Jaeger package to README (#158) * fix broken metrics, reclaim PEL messages less often (#163) * fix pending list not fully drained on startup, added password config (#164) * fix messages are acked on error (#168) * check for failed acks to redis (#169) * fix xReadGroup ignores all but first message from batch (#170) * Implement IEncodedMessageEmbedder for ProtoMessageEncoder (#174) * Implement IEncodedMessageEmbedder for ProtoMessageEncoder * version change * unit tests Co-authored-by: Plamen Ivanov * cleanup redis stream implementation (#171) * make some redis options nullable so the default value can be overwrit… (#177) * troubleshoot redis stream issue (#179) * Allow negative values in Prometheus histogram (#181) Co-authored-by: Plamen Ivanov * Prevent BoundedPriorityQueue from deprioritizing waiting enqueue calls (#180) * Prevent BoundedPriorityQueue from deprioritizing waiting enqueue calls * versioon bump * handle floating promises Co-authored-by: Plamen Ivanov * Fix backwards compatibility for ProtoMessageEncoder (#182) * Version bump for Proto change (#183) Co-authored-by: Plamen Ivanov * Change LogLevel from Error -> Warn when retrieving Kafka watermarks (#184) * fix 'yarn audit' issues [full ci] (#187) * fix sec vuln in node-fetch (#189) * Add auth for amqp source and sink (#192) * Add auth for amqp source and sink * trigger [full ci] * do not overwrite default creds [full ci] * do not overwrite default creds [full ci] * do not overwrite default creds [full ci] * do not 
overwrite default creds [full ci] * add missing new line [full ci] * lint fix * trigger ci [full ci] * trigger ci Co-authored-by: prachi.tandon@jet.computer * Add AMQP package to README (#193) Co-authored-by: Plamen Ivanov * Add vhost support for amqp (#194) * Add vhost support for amqp * Add vhost support for amqp * Add vhost support for amqp [full ci] * Add vhost support for amqp * Add vhost support for amqp * Add vhost support for amqp * version bump Co-authored-by: prachi.tandon@jet.computer * kubernetes: adjust logging to be less verbose (#195) * detect when kafkajs is stuck with stale broker metadata (#186) * detect when kafkajs is stuck with stale broker metadata and terminate application * lint * Ensure RedisClient's "type" metric label is always a string (#196) * Ensure metrics type label is always a string * Bump cookie-cutter-redis version to 1.3.0-beta.13 * Fix wrong string function in Kafka (#197) * Fix wrong string function in Kafka * proper conversion of object to string Co-authored-by: Plamen Ivanov * address vulnerability in node-forge package [full ci] (#200) * create 1.3-rc [full ci] * bump version, add missing license headers * fix code dupe * update changelog Co-authored-by: Kshitiz Gupta Co-authored-by: plameniv <47831130+plameniv@users.noreply.github.com> Co-authored-by: Connor Ross Co-authored-by: Tanvir Alam Co-authored-by: Chris Pinola Co-authored-by: Ilya Butorine Co-authored-by: Plamen Ivanov Co-authored-by: Sean Halpin Co-authored-by: Emma Lynch Co-authored-by: Dillon Mulroy Co-authored-by: Kshitiz Gupta Co-authored-by: Chris Pinola Co-authored-by: Marco Garcia Co-authored-by: Marco Garcia Co-authored-by: prachi30 <51481988+prachi30@users.noreply.github.com> Co-authored-by: prachi.tandon@jet.computer * Don't extract data field in QueueInputSource when encoder isEmbeddable (#205) * Don't extract data field in QueueInputSource when encoder isEmbeddable * remove yarn.lock changes * trigger [full ci] * clean up Co-authored-by: Plamen Ivanov * Initialize github action workflow for master (#244) * Initialize github action workflow for master * Rename workflow * Cron at minute 40 of each hour * Clean up * Add develop branch as well Co-authored-by: Plamen Ivanov * Run with different branch on schedule (#248) * Run with different branch on schedule * More cron times * Add inputs for workflow_dispatch Co-authored-by: Plamen Ivanov * Make checkout jobs mutually exclusive, add labels (#249) Co-authored-by: Plamen Ivanov * Add full workflow to master (#250) * Add full workflow to master * Run all jobs on master without full ci keyword * Add longer timeout in core test * Use only 1 checkout step [ful ci] * Shorted build for integrate jobs [full ci] * Cleanup and rename workflow file [full ci] * Run [full ci] without needing macOs test job * Return macOS to integration needs Co-authored-by: Plamen Ivanov * Fix issue with commit message (#255) * Fix issue with commit message * Try without commit message * Add : after if * Add missing $ * Remove setting of commit message * Add if conditions on commit message set * Revert temp changes * Use oneline git log format instead of adding an if check on the commit message steps * Add missing space in git log command. Make this multiline again. 
Co-authored-by: Plamen Ivanov * Core application test increase timeout (#253) * Core application test increase timeout * Don't bump version * Add yarn install steps to npm publish and pages Co-authored-by: Plamen Ivanov * Check github.ref on push for publishing pages (#256) * Check github.ref on push for publishing pages * Switch to contains check and echo contents * remove echo of ref * clean up Co-authored-by: Plamen Ivanov * Properly connect/disconnect Kafkajs producer mstr (#259) Co-authored-by: Plamen Ivanov * Remove extra if check from build step in master (#262) Co-authored-by: Plamen Ivanov * Convert Redis err to str then pass to Prometheus master (#258) * Convert Redis err to str then pass to Prometheus master * Switch to only using the error name Co-authored-by: Plamen Ivanov * Remove .travis.yml and update badges (#265) Co-authored-by: Plamen Ivanov * Clean up extra files * Revert to develop yarn.lock * Missing return * Run [full ci] Co-authored-by: Sebastian Klose Co-authored-by: Tanvir Alam Co-authored-by: Kshitiz Gupta Co-authored-by: Connor Ross Co-authored-by: Chris Pinola Co-authored-by: Ilya Butorine Co-authored-by: Plamen Ivanov Co-authored-by: Sean Halpin Co-authored-by: Emma Lynch Co-authored-by: Dillon Mulroy Co-authored-by: Kshitiz Gupta Co-authored-by: Chris Pinola Co-authored-by: Marco Garcia Co-authored-by: Marco Garcia Co-authored-by: prachi30 <51481988+prachi30@users.noreply.github.com> Co-authored-by: prachi.tandon@jet.computer Co-authored-by: Plamen Ivanov --- .github/workflows/node.js.yml | 1 - .travis.yml | 162 ------------------ CHANGELOG.md | 45 ++++- README.md | 2 +- docs/docs/Module_Redis.md | 2 +- .../azure/src/__test__/utils/helpers.test.ts | 7 + packages/azure/src/utils/helpers.ts | 7 + packages/redis/package.json | 2 +- packages/redis/src/RedisClient.ts | 7 +- packages/redis/src/RedisStreamSink.ts | 7 + packages/redis/src/RedisStreamSource.ts | 7 + .../src/__test__/client.integration.test.ts | 7 + packages/redis/src/__test__/protocol.test.ts | 7 + .../src/__test__/stream.integration.test.ts | 7 + packages/redis/src/__test__/utils.ts | 7 + 15 files changed, 107 insertions(+), 170 deletions(-) delete mode 100644 .travis.yml diff --git a/.github/workflows/node.js.yml b/.github/workflows/node.js.yml index 3622e05b..ffca5261 100644 --- a/.github/workflows/node.js.yml +++ b/.github/workflows/node.js.yml @@ -53,7 +53,6 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout Code - if: ${{ github.event_name == 'pull_request' }} uses: actions/checkout@v2 - name: Setup Ubuntu Node 12 uses: actions/setup-node@v2-beta diff --git a/.travis.yml b/.travis.yml deleted file mode 100644 index e36b9d5d..00000000 --- a/.travis.yml +++ /dev/null @@ -1,162 +0,0 @@ -language: node_js -branches: - only: - - master - - develop - - /^release/.*$/ -stages: - - smoke - - name: test - if: type = cron OR (type = pull_request AND (branch = master OR commit_message =~ /\[full(-| )ci\]/)) - - name: integrate - if: type = cron OR (type = pull_request AND (branch = master OR commit_message =~ /\[full(-| )ci\]/)) - - name: deploy - if: (branch in (master, develop) OR branch =~ ^release/.*$) AND type = push - -matrix: - include: - - name: "Linux Node 10 (Lint + Test)" - os: linux - node_js: 10 - stage: smoke - script: yarn build && yarn lint && yarn test - - - name: "Audit Dependencies" - os: linux - node_js: 10 - stage: smoke - if: type = cron - script: yarn audit - - - name: "Linux Node 12" - os: linux - node_js: 12 - stage: test - - name: "OSX Node 10" - os: osx - node_js: 10 - 
stage: test - - name: "Windows Node 10" - os: windows - node_js: 10 - stage: test - env: - - YARN_GPG=no # Windows build agent will hang without this - - - name: "AZURE" - os: windows - node_js: 10 - stage: integrate - env: - - YARN_GPG=no # Windows build agent will hang without this - - NODE_TLS_REJECT_UNAUTHORIZED="0" - - COSMOS_SECRET_KEY="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" - - AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;" - - AZURE_STORAGE_ACCESS_KEY="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" - - AZURE_STORAGE_ACCOUNT="devstoreaccount1" - - AZURE_STORAGE_URL="http://127.0.0.1:10000/devstoreaccount1" - - RUNNING_IN_CI="1" - script: | - PowerShell -c "Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope LocalMachine" - PowerShell -File packages/azure/start_emulators.ps1 - PowerShell -File packages/azure/run_integration_tests.ps1 - - - name: "MSSQL" - os: linux - node_js: 10 - stage: integrate - services: - - docker - script: | - echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USERNAME" --password-stdin - yarn build && cd packages/mssql && yarn integrate - - - name: "AMQP" - os: linux - node_js: 10 - stage: integrate - services: - - docker - script: | - echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USERNAME" --password-stdin - yarn build && cd packages/amqp && yarn integrate - - - name: "Kafka" - os: linux - node_js: 10 - stage: integrate - services: - - docker - script: | - echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USERNAME" --password-stdin - yarn build && cd packages/kafka && yarn integrate - - - name: "Prometheus" - os: linux - node_js: 10 - stage: integrate - services: - - docker - script: | - echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USERNAME" --password-stdin - yarn build && cd packages/prometheus && yarn integrate - - - name: "S3" - os: linux - node_js: 10 - stage: integrate - services: - - docker - script: | - echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USERNAME" --password-stdin - yarn build && cd packages/s3 && yarn integrate - - - name: "Redis" - os: linux - node_js: 10 - stage: integrate - services: - - docker - script: | - echo "$DOCKER_TOKEN" | docker login -u "$DOCKER_USERNAME" --password-stdin - yarn build && cd packages/redis && yarn integrate - - - name: "Publish to NPM" - os: linux - node_js: 10 - stage: deploy - env: - - secure: "X4e3tI6zIOShCVHgao7vd7qJQXh2nOlE0SCYGsQhSAxe3c9+rBX7ih9tS5PTACOyvxXqrhqVHhn7P9yTZQT1ponFY0yzWDHcTWtzIDLaXnL189MN6hNq7zz6aPC/zy6/RLUay63/yeyHWCl2MlJYxjcFTsgJRVU2W9hzyl99SDPQrDTNwZKN3rEdqDQxjXTZAxIRRE4RNYPfOkTQpKXIP0JiFUszKhGOE4YA56f0YNVQG2OCdlG/y38qDRK7RTVeechao3XdpkIfuDcJNlx4xFFTRmuciOIbgM6UsDZaCIKjVa06LhR9WApTGG5J91lTpC/3h6v1A0dY13drjQaHY1XLca0Yt2zYTu+o8gLngaPuqLuQDFNXBkuov/r8o8dqAIHHPr/TfeT5mSs7bgTLOFc0ABttpubk6T1LuITZvdEwtfH2z0mivad7d0P4C+jQRlN8mO8kuBzDo9BfX9Bn2yStRdvXcbjhw2tPth16RWIq2HmpTyC/wWRYFdREJnRRGC5GNnEKZbJN2zXxOrLeJkbQeFdavpwDZqKV0xi3ni5CBKIw/hF4efF5VKuIrMkTgdi3pI5QmZHqDOYNgeLG1KFTm6mwZGD9VHpFSt3WokC/92SbcxM/UvLCQ/vxp+CkvcIuzTQ9b5bCakYbAJ5MTBjzfsEhep4lB1Hytb0C668=" - script: | - echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > ~/.npmrc - yarn build && node ./.ci/deploy.js - - - name: "Publish to 
GitHub Pages" - if: branch = master - os: linux - node_js: 10 - stage: deploy - env: - - GH_NAME=sklose - - GH_EMAIL=sebastian.klose@walmart.com - - secure: "GlsrfPBNJWOzBL0xGo/B/V9x6Q4FvTt4icFKIneqfpopPEqJZKG6Pc1IcE0bssuiv+FIq+K9I0CAQF7Lz9e5X1BT/Sscnz69qTlbf0f46DBvXE8bNeLrRH7FEQHuiyEosJiB0lF9lgok0PwCsJXyAeevktNmI9udkULk9wYGrA8EiQnrJwApYBhJenHpy51fSFO2rHwE7acgaf6kKZXvIkiaWg35+jM+JJwfw3eSWi+ktfJ3KXsMPk0eb+ji0Py85vL2IMvJt7aDYSf5RIIApZjHuWCdSw42gPX3szlmFJ7Kb6BwKQd5Bo5otIlywg3uZ4G08xnJg6vHIAxDU93bgT0i82qHAkCo8IWsl6XcZFisdjdplI1Aw2w2HiIFtT0APPmUwarQWW7m8Y6s3uTDMXQCl2CFlaL0pK4y1tfz68M3YI4aotueQDJbDxUn/C81K6RnVZPV/oXGfx3WMN6nCdMQwX2xwS6AxCO6X1N7suXmSq6Bew/yP9kO+tD8Qd2BOOnXvFCqlmHgL0MVYEAZVl3bo4aayfoquT/iLG85+XoSRgnbVXEhHiYQQP3g0AorBOamBGkaWd71kJBgwX6JTn49RFK/8A85pNEovz6L72hV9uFFlKDJpOeo5UgZOZ76xQS5TPF7HCeT8IPui5F5boN+NE9OzFG1despJYt8WEo=" - script: | - git config --global user.name "${GH_NAME}" - git config --global user.email "${GH_EMAIL}" - echo "machine github.com login ${GH_NAME} password ${GH_TOKEN}" > ~/.netrc - cd docs/website && yarn install && GIT_USER="${GH_NAME}" yarn run publish-gh-pages - - -before_install: - - curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 1.21.1 - - export PATH="$HOME/.yarn/bin:$PATH" - -install: - - yarn install --frozen-lockfile - -script: yarn build && yarn test - -env: - global: - - DOCKER_USERNAME: wmtcookiecutter - - secure: pR9Vrg1oJ4HM6vWH7NSyzzhlEaXm3EAg8gNl9JPtMJ8eCMiahS7fo4HfH35JCNBMeSjNoXF8+CX+N63aQdTTYb3NXtvqciO/ErFqaDmb/ah4ANNpbLCpaCIK0D7Xwi6xdnXw0HiyKOlnKAN46i9ISfaFdv8qHo4tk1q8URRu3TIz0hjECCM6jbou6ehlejr5sYMil/gU58NznU1C58NEOBDJ4b+kDC13Atsjf8DwKINe+ZFS2Y0tcgAs7OnfcCALFqEcf9H3d6bLPiThfHGVfVWnutcjc+P9FurOEpXhN1yTwsPbqWB5G9iNs7Ju85nLX8JnfvJab24d5kPaEUfH8MqcKJ5F/GX3gvRc1Sa6EyHL8wFodeln4raO2fQ6GTnAgRTiqr6mDzYTugfeF8PFjO2nDVUT7HKGv+j7/vhxsHMaZGzx/hxdVGai+9bFdsOkMgwBnjlXu8x6XpjuTjiAwIW5D8XepqS+LwUd9nQAY2o1URpmQotqfqPS5yL907tsNPDVA4yRqXPlM0ETXAzq7TPsNzK26V1x/7xl2MTWD3msy2DWz3/th/GmrLLpTQcBmzXEOshxChXBUyY5ML5w/4eMAiJCdI6Az0liUzjJbatM2DstvZfeZyYZo0P4HRFDWNzsSAgkgN12nz6ZSb94ggt+SeEYbWSMzVpZc5EHZWI= diff --git a/CHANGELOG.md b/CHANGELOG.md index f9741d0b..9f884d03 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,47 @@ -# 1.2-beta +# 1.3 + +## core + +- fixed bug where BoundedPriorityQueue could de-prioritize elements under certain circumstances which could lead to out-of-order processing +- deprecated support for Node v8 + +## kafka + +- detect stale broker metadata and crash kafkajs client; this seems to be a bug in kafkajs where the metadata can get stale and all reconnect attempts of kafkajs will fail. 
+ +## prometheus + +- allow negative values for histograms + +## kubernetes + +- less verbose logging + +## amqp + +- new module with sink and source for AMQP compatible message buses + +## protobuf + +- support embedding encoded protobuf data as base64 in JSON documents + +## redis + +- support multiple streams for source and sink +- added missing metrics +- fix messages getting lost when receiving in batches + +## azure + +- support accessing multiple cosmos collection from within the same service +- added support for dead letter queues for Azure Queues + +## mssql + +- fixed sink swallows actual error message when a transaction fails to commit + + +# 1.2 ## core diff --git a/README.md b/README.md index a9994b16..0409eee2 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ An opinionated framework for building event-driven and request/response based mi | Stable | Beta | |--------|------| -| [![Build Status](https://travis-ci.org/walmartlabs/cookie-cutter.svg?branch=master)](https://travis-ci.org/walmartlabs/cookie-cutter) | [![Build Status](https://travis-ci.org/walmartlabs/cookie-cutter.svg?branch=develop)](https://travis-ci.org/walmartlabs/cookie-cutter) | +| [![Build Status](https://github.com/walmartlabs/cookie-cutter/actions/workflows/node.js.yml/badge.svg?branch=master)](https://github.com/walmartlabs/cookie-cutter/actions) | [![Build Status](https://github.com/walmartlabs/cookie-cutter/actions/workflows/node.js.yml/badge.svg?branch=develop)](https://github.com/walmartlabs/cookie-cutter/actions) | ## Features diff --git a/docs/docs/Module_Redis.md b/docs/docs/Module_Redis.md index 388aacc1..c64d8344 100644 --- a/docs/docs/Module_Redis.md +++ b/docs/docs/Module_Redis.md @@ -151,4 +151,4 @@ The following metadata is available in the message handler via `ctx.publish` | Name | Description | Type | Tags | | ------------------------------------------- | ----------- | ---- | ---- | -| cookie_cutter.redis_producer.msg_published | the number of messages sent to redis server | `increment` | `stream_name`, `result` | \ No newline at end of file +| cookie_cutter.redis_producer.msg_published | the number of messages sent to redis server | `increment` | `stream_name`, `result` | diff --git a/packages/azure/src/__test__/utils/helpers.test.ts b/packages/azure/src/__test__/utils/helpers.test.ts index 89376134..0d95b98d 100644 --- a/packages/azure/src/__test__/utils/helpers.test.ts +++ b/packages/azure/src/__test__/utils/helpers.test.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { getCollectionInfo } from "../../utils/helpers"; describe("State Key Parsing", () => { diff --git a/packages/azure/src/utils/helpers.ts b/packages/azure/src/utils/helpers.ts index e6630f60..e0e5474e 100644 --- a/packages/azure/src/utils/helpers.ts +++ b/packages/azure/src/utils/helpers.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. 
+*/ + import { Stream } from "stream"; /** diff --git a/packages/redis/package.json b/packages/redis/package.json index a61cfd1f..b1e7e140 100644 --- a/packages/redis/package.json +++ b/packages/redis/package.json @@ -1,6 +1,6 @@ { "name": "@walmartlabs/cookie-cutter-redis", - "version": "1.4.0-beta.4", + "version": "1.4.0-beta.5", "license": "Apache-2.0", "main": "dist/index.js", "types": "dist/index.d.ts", diff --git a/packages/redis/src/RedisClient.ts b/packages/redis/src/RedisClient.ts index eb7fd6cb..73e7d772 100644 --- a/packages/redis/src/RedisClient.ts +++ b/packages/redis/src/RedisClient.ts @@ -20,6 +20,7 @@ import { OpenTracingTagKeys, } from "@walmartlabs/cookie-cutter-core"; import { Span, SpanContext, Tags, Tracer } from "opentracing"; +import { RedisError } from "redis"; import { isString, isNullOrUndefined } from "util"; import { IRedisOptions, IRedisClient, IRedisMessage } from "."; import { RedisProxy, RawReadGroupResult, RawPELResult, RawXClaimResult } from "./RedisProxy"; @@ -334,7 +335,7 @@ export class RedisClient implements IRedisClient, IRequireInitialization, IDispo streamName, consumerGroup, result: RedisMetricResults.Error, - error: err, + errorType: err instanceof RedisError ? err.name : "NonRedisError", }); throw err; @@ -368,7 +369,7 @@ export class RedisClient implements IRedisClient, IRequireInitialization, IDispo streamName, consumerGroup, result: RedisMetricResults.Error, - error: err, + errorType: err instanceof RedisError ? err.name : "NonRedisError", }); throw err; @@ -497,7 +498,7 @@ export class RedisClient implements IRedisClient, IRequireInitialization, IDispo streamName, consumerGroup, result: RedisMetricResults.Error, - error: err, + errorType: err instanceof RedisError ? err.name : "NonRedisError", }); throw err; diff --git a/packages/redis/src/RedisStreamSink.ts b/packages/redis/src/RedisStreamSink.ts index 514839de..638d7af7 100644 --- a/packages/redis/src/RedisStreamSink.ts +++ b/packages/redis/src/RedisStreamSink.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { IOutputSink, IPublishedMessage, diff --git a/packages/redis/src/RedisStreamSource.ts b/packages/redis/src/RedisStreamSource.ts index bfecf576..a6d02583 100644 --- a/packages/redis/src/RedisStreamSource.ts +++ b/packages/redis/src/RedisStreamSource.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { IInputSource, IRequireInitialization, diff --git a/packages/redis/src/__test__/client.integration.test.ts b/packages/redis/src/__test__/client.integration.test.ts index 9325941c..1642bb5e 100644 --- a/packages/redis/src/__test__/client.integration.test.ts +++ b/packages/redis/src/__test__/client.integration.test.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { createRedisClient } from "./utils"; import { SpanContext } from "opentracing"; diff --git a/packages/redis/src/__test__/protocol.test.ts b/packages/redis/src/__test__/protocol.test.ts index 4143a63b..8d377c77 100644 --- a/packages/redis/src/__test__/protocol.test.ts +++ b/packages/redis/src/__test__/protocol.test.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. 
+ +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { parseRawReadGroupResult } from "../RedisClient"; import { RawReadGroupResult } from "../RedisProxy"; diff --git a/packages/redis/src/__test__/stream.integration.test.ts b/packages/redis/src/__test__/stream.integration.test.ts index 40e59f09..3d39fecc 100644 --- a/packages/redis/src/__test__/stream.integration.test.ts +++ b/packages/redis/src/__test__/stream.integration.test.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { Application, StaticInputSource, diff --git a/packages/redis/src/__test__/utils.ts b/packages/redis/src/__test__/utils.ts index 7ee739f6..7011d90a 100644 --- a/packages/redis/src/__test__/utils.ts +++ b/packages/redis/src/__test__/utils.ts @@ -1,3 +1,10 @@ +/* +Copyright (c) Walmart Inc. + +This source code is licensed under the Apache 2.0 license found in the +LICENSE file in the root directory of this source tree. +*/ + import { redisClient, IRedisClient, RedisStreamMetadata } from ".."; import { JsonMessageEncoder,
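The RedisClient hunks above replace the raw error object in metric tags with a bounded `errorType` string. A standalone sketch of that mapping (the helper name is an invention for illustration; only the ternary and the import mirror the patch):

```ts
import { RedisError } from "redis";

// Bucket arbitrary thrown values into a low-cardinality metric label, as in
// the RedisClient diff above: Redis errors report their name, anything else
// is labeled "NonRedisError".
function toErrorTypeLabel(err: unknown): string {
    return err instanceof RedisError ? err.name : "NonRedisError";
}
```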