This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Merge branch 'main' into simple-githash-embed
dbarbuzzi committed Jun 27, 2024
2 parents 0e7c4f6 + 80701e4 commit 97b1890
Showing 3 changed files with 2 additions and 4 deletions.
2 changes: 1 addition & 1 deletion .github/actions/nm-get-docker-tags/action.yml
@@ -15,7 +15,7 @@ outputs:
description: "extra tag for the docker image based on build type, either latest (for RELEASE) or nightly (for NIGHTLY)"
value: ${{ steps.tags.outputs.extra_tag }}
build_version:
-    "version of nm-vllm, e.g. 0.4.0, 0.4.0.20240531"
+    description: "version of nm-vllm, e.g. 0.4.0, 0.4.0.20240531"
value: ${{ steps.tags.outputs.build_version }}
runs:
using: composite
2 changes: 0 additions & 2 deletions .github/actions/nm-summary-build/action.yml
@@ -20,8 +20,6 @@ runs:
using: composite
steps:
- run: |
BUILD_STATUS=${{ inputs.build_status }}
BUILD_EMOJI=$(./.github/scripts/step-status ${BUILD_STATUS})
WHL_STATUS=${{ inputs.whl_status }}
WHL_EMOJI=$(./.github/scripts/step-status ${WHL_STATUS})
echo "testmo URL: ${{ inputs.testmo_run_url }}" >> $GITHUB_STEP_SUMMARY
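The summary step above shells out to `./.github/scripts/step-status` to turn each step's exit status into an emoji for `$GITHUB_STEP_SUMMARY`. A minimal standalone sketch of that contract (the failure emoji is an assumption; this commit's diff only shows the success branch):

```shell
#!/bin/bash
# Sketch of the step-status contract used by the summary action:
# exit status 0 maps to a green check, anything else to a red cross.
# The red cross is assumed; only the green-check branch is visible
# in the diff of this commit.
step_status() {
  if [ "$1" -eq 0 ]; then
    echo -e "\xE2\x9C\x85"   # green check (U+2705)
  else
    echo -e "\xE2\x9D\x8C"   # red cross (U+274C), assumed fallback
  fi
}

step_status 0   # success emoji
step_status 1   # failure emoji
```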
2 changes: 1 addition & 1 deletion .github/scripts/step-status
@@ -5,7 +5,7 @@

STEP_STATUS=${1}

-if [ $STEP_STATUS -eq 0 ]; then
+if [ "$STEP_STATUS" -eq 0 ]; then
# green check
echo -e "\xE2\x9C\x85"
else
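The one-line quoting fix above guards against `STEP_STATUS` being empty or unset. An illustrative sketch (not from the repository) of what each form does in that case:

```shell
#!/bin/bash
# Illustrative demo: when STEP_STATUS is empty, the unquoted test
# expands to `[ -eq 0 ]`, a malformed expression, while the quoted
# form keeps the variable as a single (empty) word, so the failure
# mode is at least well-defined. Both error messages go to stderr
# and are suppressed here.
STEP_STATUS=""

if [ $STEP_STATUS -eq 0 ] 2>/dev/null; then
  echo "unquoted: true"
else
  echo "unquoted: error or false"   # this branch is taken: the test is malformed
fi

if [ "$STEP_STATUS" -eq 0 ] 2>/dev/null; then
  echo "quoted: true"
else
  echo "quoted: error or false"     # this branch is taken: "" is not an integer
fi
```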

1 comment on commit 97b1890

@github-actions

bigger_is_better

| Benchmark suite | Current: 97b1890 | Previous: 4ab3b8a | Ratio |
| --- | --- | --- | --- |
| `request_throughput` (VLLM Engine throughput, synthetic; model: NousResearch/Llama-2-7b-chat-hf; max_model_len: 4096; input-len: 256, output-len: 128, num-prompts: 1000; GPU: NVIDIA L4 x 1; vllm 0.5.1, Python 3.10.12, torch 2.3.0+cu121) | 2.4621964989259952 prompts/s | 2.50032143674207 prompts/s | 1.02 |
| `token_throughput` (same configuration) | 945.4834555875822 tokens/s | 960.1234317089549 tokens/s | 1.02 |

This comment was automatically generated by a workflow using github-action-benchmark.
