[do-not-merge] Ibm 20241218 #266

Closed · wants to merge 413 commits

Changes from 1 commit (of 413 commits)

Commits
e225110
[Kernel] Remove if-else with identical branches in marlin 2:4 (#10687)
tlrmchlsmth Nov 27, 2024
1209261
[Model] Support telechat2 (#10311)
shunxing12345 Nov 27, 2024
418cb3b
[Bugfix][Hardware][CPU] Fix intel-omp version to avoid segfault (#10700)
bigPYJ1151 Nov 27, 2024
9e0a147
[V1] Update interface for mistral-format Pixtral (#10703)
ywang96 Nov 27, 2024
308cc5e
[ci] fix slow tests (#10698)
youkaichao Nov 27, 2024
c411def
[torch.compile] fix shape specialization (#10722)
youkaichao Nov 27, 2024
b98c62b
[Bugfix] Fix GGUF inference with FP16 unquantized checkpoint (#10675)
Isotr0py Nov 27, 2024
197b448
[Bugfix][Mamba] Fix Multistep on Mamba-like models (#10705)
mzusman Nov 27, 2024
9b4b150
[Bugfix] Ignore `lm_head` when loading embedding models (#10719)
DarkLight1337 Nov 27, 2024
395b1c7
[Frontend] don't block event loop in tokenization (preprocess) in Ope…
tomeras91 Nov 27, 2024
cb4e1c3
[misc] upgrade filelock version (#10731)
youkaichao Nov 28, 2024
70dc14f
[Model] support bitsandbytes quantization with minicpm3 model (#10682)
zixuanzhang226 Nov 28, 2024
278be67
[Doc] Update model in arch_overview.rst to match comment (#10701)
spacewander Nov 28, 2024
d9b4b3f
[Bug][CLI] Allow users to disable prefix caching explicitly (#10724)
rickyyx Nov 28, 2024
a79b122
[V1] Do not allocate beyond the max_model_len (#10730)
WoosukKwon Nov 28, 2024
9a8bff0
[Kernel] Update vllm-flash-attn version (#10736)
WoosukKwon Nov 28, 2024
3ed5e73
[TPU] Update requirements-tpu (#10726)
richardsliu Nov 28, 2024
5fc5ce0
[Model] Add GLM-4 series HF-format model support (vllm==0.6.4) (#10561)
sixsixcoder Nov 28, 2024
8c1e77f
[Kernel] Update vllm-flash-attn version to reduce CPU overheads (#10742)
WoosukKwon Nov 28, 2024
98f47f2
[V1] Optimize the CPU overheads in FlashAttention custom op (#10733)
WoosukKwon Nov 28, 2024
c83919c
[Model] Add Internlm2 LoRA support (#5064)
Isotr0py Nov 28, 2024
fa6ecb9
[Model] Clean up MiniCPMV (#10751)
DarkLight1337 Nov 29, 2024
c82b432
[Misc] typo fix in sampling_metadata.py (#10740)
noooop Nov 29, 2024
3132aac
[Bugfix] Fix Idefics3 bug (#10778)
jeejeelee Nov 29, 2024
661175b
[platform] Add verify_quantization in platform. (#10757)
wangxiyuan Nov 29, 2024
40bc242
[Bugfix] Fix OpenVino/Neuron `driver_worker` init (#10779)
NickLucche Nov 30, 2024
16ee07f
[Model] Refactor Molmo weights loading to use AutoWeightsLoader (#10771)
Isotr0py Nov 30, 2024
e7cfc4e
[Interleaved ATTN] Support for Mistral-8B (#10591)
patrickvonplaten Nov 30, 2024
7e4bbda
[doc] format fix (#10789)
wangxiyuan Nov 30, 2024
1337071
[Model] Replace embedding models with pooling adapter (#10769)
DarkLight1337 Dec 1, 2024
f877a7d
[Misc] Improve type annotations for `support_torch_compile` (#10763)
DarkLight1337 Dec 1, 2024
d2f058e
[Misc] Rename embedding classes to pooling (#10801)
DarkLight1337 Dec 1, 2024
169a0ff
[doc] add warning about comparing hf and vllm outputs (#10805)
youkaichao Dec 1, 2024
c11f172
[Misc] Adding `MMMU-Pro` vision dataset to serving benchmark (#10804)
ywang96 Dec 1, 2024
0590ec3
[Core] Implement disagg prefill by StatelessProcessGroup (#10502)
KuntaiDu Dec 2, 2024
b18c9bb
[Model] Add BNB support to Llava and Pixtral-HF (#10795)
Isotr0py Dec 2, 2024
b795477
[core] Avoid metrics log noise when idle - include speculative decodi…
cduk Dec 2, 2024
073a4bd
[Kernel] Use `out` arg in flash_attn_varlen_func (#10811)
WoosukKwon Dec 2, 2024
e25810a
Fill TorchSDPAAttentionMetadata seq_lens_field for prefill (#10799)
maxdebayser Dec 2, 2024
63a1641
[misc] remove xverse modeling file (#10814)
youkaichao Dec 2, 2024
995a148
[doc] Update config docstring (#10732)
wangxiyuan Dec 2, 2024
ef31eab
[Model]: add some tests for aria model (#10770)
xffxff Dec 2, 2024
e95f275
[CI/Build] Update `mistral_common` version for tests and docs (#10825)
DarkLight1337 Dec 2, 2024
a4c4daf
[misc] use out argument for flash attention (#10822)
youkaichao Dec 2, 2024
b45f0d7
[Misc][LoRA] Move the implementation of lora bias to punica.py (#10829)
jeejeelee Dec 2, 2024
519cc6c
[Misc][XPU] Avoid torch compile for XPU platform (#10747)
yma11 Dec 2, 2024
9b14d97
Fix openvino on GPU (#10793)
janimo Dec 2, 2024
4c05edb
[Model] Add TP and BNB quantization support to LlavaMultiModalProject…
Isotr0py Dec 2, 2024
4433195
[Bugfix] Prevent benchmark_throughput.py from using duplicated random…
mgoin Dec 3, 2024
d746268
[Model] support bitsandbytes quantization with minicpm model (#10842)
zixuanzhang226 Dec 3, 2024
a4cf256
[Bugfix] Fix QKVParallelLinearWithShardedLora bias bug (#10844)
jeejeelee Dec 3, 2024
21fe7b4
[core][distributed] add pynccl broadcast (#10843)
youkaichao Dec 3, 2024
dc5ce86
[torch.compile] remove compilation_context and simplify code (#10838)
youkaichao Dec 3, 2024
ef51831
[Doc] Add github links for source code references (#10672)
russellb Dec 3, 2024
3257d44
[Misc] Remove deprecated names (#10817)
DarkLight1337 Dec 3, 2024
9323a31
[Core][Performance] Add XGrammar support for guided decoding and set …
aarnphm Dec 3, 2024
f6084f6
[Speculative Decoding] Move indices to device before filtering output…
zhengy001 Dec 3, 2024
3bc94ca
[V1] VLM - Run the mm_mapper preprocessor in the frontend process (#1…
alexm-neuralmagic Dec 3, 2024
2f2cdc7
[MISC][XPU] quick fix for XPU CI (#10859)
yma11 Dec 3, 2024
7090c27
[Bugfix] Only require XGrammar on x86 (#10865)
mgoin Dec 3, 2024
7c32b68
[Frontend] correctly record prefill and decode time metrics (#10853)
tomeras91 Dec 3, 2024
a061fe6
[Build][Bugfix] Using the correct type hint (#10866)
gshtras Dec 3, 2024
381ac93
[Benchmark] Benchmark structured output with datasets (#10557)
xuechendi Dec 4, 2024
d2bd88b
[CI/Build] Replace mean with torch.all in test_pynccl.py (#10876)
tlrmchlsmth Dec 4, 2024
b5b647b
Drop ROCm load format check (#10767)
wangxiyuan Dec 4, 2024
fa2dea6
[ci/build] Change queue name for Release jobs (#10875)
khluu Dec 4, 2024
c9ca4fc
[ci/build] Job to build and push release image (#10877)
khluu Dec 4, 2024
8db957e
[bugfix] fix parameter “n” when parameter “best_of” > 1 is set (#10854)
o2363286 Dec 4, 2024
c92acb9
[ci/build] Update vLLM postmerge ECR repo (#10887)
khluu Dec 4, 2024
01d079f
[LoRA] Change lora_tokenizers capacity (#10796)
xyang16 Dec 4, 2024
10398b4
[Model] Consolidate ViTs attention implementation without mask (#10893)
Isotr0py Dec 4, 2024
82eb5ea
Benchmark serving structured output (#10880)
xuechendi Dec 4, 2024
e4c34c2
[CI/Build] improve python-only dev setup (#9621)
dtrifiro Dec 4, 2024
2a56e12
[V1] Fix when max_model_len is not divisible by block_size (#10903)
WoosukKwon Dec 5, 2024
7883c2b
[benchmark] Make H100 benchmark optional (#10908)
khluu Dec 5, 2024
8d370e9
[Bugfix] Fallback to outlines for complex json schemas (#10899)
mgoin Dec 5, 2024
aa39a8e
[Doc] Create a new "Usage" section (#10827)
DarkLight1337 Dec 5, 2024
1f958a7
[Bugfix] Fix BNB loader target_modules (#10720)
jeejeelee Dec 5, 2024
39c89e7
[Misc] Update llama 3.2 template to support system prompt with images…
tjohnson31415 Dec 5, 2024
571da8f
[Misc][LoRA] Clean up the function interface of Punica (#10917)
jeejeelee Dec 5, 2024
998eeaf
[CI/Build] Bump test transformers version (#10106)
Isotr0py Dec 5, 2024
a430652
[Misc][Gaudi] Avoid torch.compile and enable lazy collectives (#10897)
kzawora-intel Dec 5, 2024
9743d64
[ci][build] add tests for python only compilation (#10915)
youkaichao Dec 5, 2024
db87eb6
[torch.compile] use size tuning for specific sizes (#10933)
youkaichao Dec 6, 2024
b031a45
[torch.compile] add logging for compilation time (#10941)
youkaichao Dec 6, 2024
222f5b0
[CI/Build] Fix broken multimodal test (#10950)
DarkLight1337 Dec 6, 2024
a1887f2
[torch.compile] fix deprecated code (#10948)
youkaichao Dec 6, 2024
8b59631
[Core] Support Lark grammars for XGrammar (#10870)
mgoin Dec 6, 2024
7406274
[Doc] add KubeAI to serving integrations (#10837)
samos123 Dec 6, 2024
c05cfb6
[misc] fix typo (#10960)
youkaichao Dec 6, 2024
dcdc3fa
[ci] fix broken tests (#10956)
youkaichao Dec 6, 2024
69d357b
[Core] Cleanup startup logging a bit (#10961)
russellb Dec 7, 2024
acf092d
[Bugfix] Fix test-pipeline.yaml (#10973)
jeejeelee Dec 7, 2024
955fa95
[3/N] Support and implement merged input processor for LLaVA model (#…
DarkLight1337 Dec 7, 2024
f13cf9a
[Build] Fix for the Wswitch-bool clang warning (#10060)
gshtras Dec 7, 2024
b26b4cd
[Misc][LoRA] Refactor and clean MergedQKVParallelLinearWithLora imple…
Isotr0py Dec 7, 2024
bf0e382
[Model] Composite weight loading for multimodal Qwen2 (#10944)
DarkLight1337 Dec 7, 2024
1c768fe
[Doc] Explicitly state that InternVL 2.5 is supported (#10978)
DarkLight1337 Dec 7, 2024
39e227c
[Model] Update multi-modal processor to support Mantis(LLaVA) model (…
DarkLight1337 Dec 7, 2024
c889d58
[Doc] Explicitly state that PP isn't compatible with speculative deco…
DarkLight1337 Dec 7, 2024
78029b3
[BugFix][Kernel]: fix illegal memory access in causal_conv1d when con…
xffxff Dec 7, 2024
1b62745
[core][executor] simplify instance id (#10976)
youkaichao Dec 7, 2024
7be15d9
[core][misc] remove use_dummy driver for _run_workers (#10920)
youkaichao Dec 7, 2024
fd57d2b
[torch.compile] allow candidate compile sizes (#10984)
youkaichao Dec 8, 2024
a11f326
[V1] Initial support of multimodal models for V1 re-arch (#10699)
ywang96 Dec 8, 2024
43b05fa
[torch.compile][misc] fix comments (#10993)
youkaichao Dec 8, 2024
46004e8
[misc] clean up and unify logging (#10999)
youkaichao Dec 9, 2024
af7c4a9
[Doc][V1] Add V1 support column for multimodal models (#10998)
ywang96 Dec 9, 2024
d1c2e15
[torch.compile] add dynamo time tracking (#11005)
youkaichao Dec 9, 2024
c690357
[V1] Fix Detokenizer loading in `AsyncLLM` (#10997)
ywang96 Dec 9, 2024
e691b26
[Core] Require xgrammar >= 0.1.6 (#11021)
russellb Dec 9, 2024
aea2fc3
[Platform] Move `async output` check to platform (#10768)
wangxiyuan Dec 9, 2024
25b79d9
[V1] Input Batch Relocation (#10962)
varun-sundar-rabindranath Dec 9, 2024
edc4fa3
[ci/build] Recompile CI dependencies list with Python 3.12 (#11013)
khluu Dec 9, 2024
3b61cb4
[V1] Further reduce CPU overheads in flash-attn (#10989)
WoosukKwon Dec 9, 2024
ca87149
[Misc][LoRA] Abstract PunicaWrapper (#10955)
jeejeelee Dec 9, 2024
a811dd6
[Model] merged input processor for Phi-3-Vision models (#10977)
Isotr0py Dec 9, 2024
cbcbdb1
[Bugfix][Hardware][Gaudi] Bump vllm_hpu_extension version (#11028)
kzawora-intel Dec 9, 2024
1a2f8fb
[v1] fix use compile sizes (#11000)
youkaichao Dec 9, 2024
9c6459e
[Neuron] Upgrade neuron to 2.20.2 (#11016)
xendo Dec 9, 2024
b63ba84
[ROCm][bugfix] speculative decoding worker class (#11035)
gshtras Dec 9, 2024
5ed5d5f
Build tpu image in release pipeline (#10936)
richardsliu Dec 9, 2024
6faec54
[V1] Do not store `None` in self.generators (#11038)
WoosukKwon Dec 9, 2024
6d52528
[Docs] Add dedicated tool calling page to docs (#10554)
mgoin Dec 10, 2024
d1f6d1c
[Model] Add has_weight to RMSNorm and re-enable weights loading track…
Isotr0py Dec 10, 2024
391d7b2
[Bugfix] Fix usage of `deprecated` decorator (#11025)
DarkLight1337 Dec 10, 2024
980ad39
[Frontend] Use request id from header (#10968)
joerunde Dec 10, 2024
bc192a2
[Pixtral] Improve loading (#11040)
patrickvonplaten Dec 10, 2024
28b3a1c
[V1] Multiprocessing Tensor Parallel Support for v1 (#9856)
tlrmchlsmth Dec 10, 2024
ebf7780
monitor metrics of tokens per step using cudagraph batchsizes (#11031)
youkaichao Dec 10, 2024
e35879c
[Bugfix] Fix xgrammar failing to read a vocab_size from LlavaConfig o…
sjuxax Dec 10, 2024
bfd6104
Update README.md (#11034)
dmoliveira Dec 10, 2024
82c73fd
[Bugfix] cuda error running llama 3.2 (#11047)
GeneDer Dec 10, 2024
fe2e10c
Add example of helm chart for vllm deployment on k8s (#9199)
mfournioux Dec 10, 2024
beb16b2
[Bugfix] Handle <|tool_call|> token in granite tool parser (#11039)
tjohnson31415 Dec 10, 2024
d05f886
[Misc][LoRA] Add PEFTHelper for LoRA (#11003)
jeejeelee Dec 10, 2024
9b9cef3
[Bugfix] Backport request id validation to v0 (#11036)
joerunde Dec 10, 2024
250ee65
[BUG] Remove token param #10921 (#11022)
flaviabeo Dec 10, 2024
e739194
[Core] Update to outlines >= 0.1.8 (#10576)
russellb Dec 10, 2024
75f89dc
[torch.compile] add a flag to track batchsize statistics (#11059)
youkaichao Dec 10, 2024
134810b
[V1][Bugfix] Always set enable_chunked_prefill = True for V1 (#11061)
WoosukKwon Dec 10, 2024
9a93973
[Bugfix] Fix Mamba multistep (#11071)
tlrmchlsmth Dec 11, 2024
d5c5154
[Misc] LoRA + Chunked Prefill (#9057)
aurickq Dec 11, 2024
ffa48c9
[Model] PP support for Mamba-like models (#10992)
mzusman Dec 11, 2024
e39400a
Fix streaming for granite tool call when <|tool_call|> is present (#1…
maxdebayser Dec 11, 2024
2e33fe4
[CI/Build] Check transformers v4.47 (#10991)
DarkLight1337 Dec 11, 2024
3fb4b4f
[ci/build] Fix AMD CI dependencies (#11087)
khluu Dec 11, 2024
9974fca
[ci/build] Fix entrypoints test and pin outlines version (#11088)
khluu Dec 11, 2024
61b1d2f
[Core] v1: Use atexit to handle engine core client shutdown (#11076)
russellb Dec 11, 2024
2e32f5d
[Bugfix] Fix Idefics3 fails during multi-image inference (#11080)
B-201 Dec 11, 2024
40766ca
[Bugfix]: Clamp `-inf` logprob values in prompt_logprobs (#11073)
rafvasq Dec 11, 2024
8f10d5e
[Misc] Split up pooling tasks (#10820)
DarkLight1337 Dec 11, 2024
cad5c0a
[Doc] Update docs to refer to pooling models (#11093)
DarkLight1337 Dec 11, 2024
b2f7754
[CI/Build] Enable prefix caching test for AMD (#11098)
hissu-hyvarinen Dec 11, 2024
fd22220
[Doc] Installed version of llmcompressor for int8/fp8 quantization (#…
bingps Dec 11, 2024
91642db
[torch.compile] use depyf to dump torch.compile internals (#10972)
youkaichao Dec 11, 2024
d643c2a
[V1] Use input_ids as input for text-only models (#11032)
WoosukKwon Dec 11, 2024
66aaa77
[torch.compile] remove graph logging in ci (#11110)
youkaichao Dec 11, 2024
72ff3a9
[core] Bump ray to use _overlap_gpu_communication in compiled graph t…
ruisearch42 Dec 11, 2024
d1e21a9
[CI/Build] Split up VLM tests (#11083)
DarkLight1337 Dec 11, 2024
452a723
[V1][Core] Remove should_shutdown to simplify core process terminatio…
tlrmchlsmth Dec 11, 2024
4e11683
[V1] VLM preprocessor hashing (#11020)
alexm-neuralmagic Dec 12, 2024
7439a8b
[Bugfix] Multiple fixes to tool streaming with hermes and mistral (#1…
cedonley Dec 12, 2024
8fb26da
[Docs] Add media kit (#11121)
simon-mo Dec 12, 2024
24a36d6
Update link to LlamaStack remote vLLM guide in serving_with_llamastac…
terrytangyuan Dec 12, 2024
ccede2b
[Core] cleanup zmq ipc sockets on exit (#11115)
russellb Dec 12, 2024
1da8f0e
[Model] Add support for embedding model GritLM (#10816)
pooyadavoodi Dec 12, 2024
f092153
[V1] Use more persistent buffers to optimize input preparation overhe…
WoosukKwon Dec 12, 2024
8195824
[Hardware][Intel-Gaudi] Enable LoRA support for Intel Gaudi (HPU) (#1…
SanjuCSudhakaran Dec 12, 2024
62de37a
[core][distributed] initialization from StatelessProcessGroup (#10986)
youkaichao Dec 12, 2024
85362f0
[Misc][LoRA] Ensure Lora Adapter requests return adapter name (#11094)
Jeffwan Dec 12, 2024
4816d20
[V1] Fix torch profiling for offline inference (#11125)
ywang96 Dec 12, 2024
d4d5291
fix(docs): typo in helm install instructions (#11141)
ramonziai Dec 12, 2024
5d71257
[Bugfix] Quick fix to make Pixtral-HF load correctly again after 39e2…
sjuxax Dec 12, 2024
2c97eca
[Misc] Validate grammar and fail early (#11119)
comaniac Dec 12, 2024
9f3974a
Fix logging of the vLLM Config (#11143)
JArnoldAMD Dec 12, 2024
db6c264
[Bugfix] Fix value unpack error of simple connector for KVCache trans…
ShangmingCai Dec 12, 2024
78ed8f5
[Misc][V1] Fix type in v1 prefix caching (#11151)
comaniac Dec 13, 2024
30870b4
[torch.compile] Dynamic fp8 + rms_norm fusion (#10906)
ProExpertProg Dec 13, 2024
1efce68
[Bugfix] Use runner_type instead of task in GritLM (#11144)
pooyadavoodi Dec 13, 2024
3989a79
[Bugfix] Update starcoder2 to remap k/v scale names for kv_cache quan…
dsikka Dec 13, 2024
00c1bde
[ROCm][AMD] Disable auto enabling chunked prefill on ROCm (#11146)
gshtras Dec 13, 2024
34f1a80
[Bugfix][V1] Fix 'NoneType' object has no attribute 'hash_value' (#11…
comaniac Dec 13, 2024
be39e3c
[core] clean up cudagraph batchsize padding logic (#10996)
youkaichao Dec 13, 2024
7cd7409
PaliGemma 2 support (#11142)
janimo Dec 13, 2024
f93bf2b
[Bugfix][CI][CPU] add missing datasets package to requirements-cpu.tx…
bigPYJ1151 Dec 13, 2024
eeec9e3
[Frontend] Separate pooling APIs in offline inference (#11129)
DarkLight1337 Dec 13, 2024
969da7d
[V1][VLM] Fix edge case bug for InternVL2 (#11165)
ywang96 Dec 13, 2024
d1fa714
[Refactor] A simple device-related refactor (#11163)
noemotiovon Dec 13, 2024
c31d4a5
[Core] support LoRA and prompt adapter in content-based hashing for B…
llsj14 Dec 13, 2024
5b0ed83
[Bugfix] using len(tokenizer) instead of tokenizer.vocab_size in Allo…
zhangjf-nlp Dec 13, 2024
238c0d9
[Misc] Add tokenizer_mode param to benchmark_serving.py (#11174)
alexm-neuralmagic Dec 13, 2024
0920ab9
[Doc] Reorganize online pooling APIs (#11172)
DarkLight1337 Dec 13, 2024
0a56bcc
[Bugfix][Hardware][CPU] Enable Gemma2 with SDPA on CPU backend (#11169)
janimo Dec 13, 2024
0d8451c
[Distributed] Allow the placement group more time to wait for resourc…
Jeffwan Dec 13, 2024
4863e5f
[Core] V1: Use multiprocessing by default (#11074)
russellb Dec 14, 2024
4b5b8a6
[V1][Bugfix] Fix EngineCoreProc profile (#11185)
tlrmchlsmth Dec 14, 2024
9855aea
[Bugfix][V1] Re-compute an entire block when fully cache hit (#11186)
comaniac Dec 14, 2024
24a3d12
update compressed-tensors to latest version (#11183)
dhuangnm Dec 14, 2024
4825926
[Core] Update outlines and increase its threadpool size (#11140)
russellb Dec 14, 2024
ea7bd68
[V1][Bugfix] Fix V1 TP trust-remote-code (#11182)
tlrmchlsmth Dec 14, 2024
3cb5769
[Misc] Minor improvements to the readability of PunicaWrapperBase (#1…
jeejeelee Dec 14, 2024
9c3dadd
[Frontend] Add `logits_processors` as an extra completion argument (#…
bradhilton Dec 14, 2024
93abf23
[VLM] Fully dynamic prompt replacement in merged input processor (#11…
DarkLight1337 Dec 14, 2024
6d917d0
Enable mypy checking on V1 code (#11105)
markmc Dec 14, 2024
8869368
[Performance][Core] Optimize the performance of evictor v1 and v2 by …
llsj14 Dec 14, 2024
15859f2
[Misc] Upgrade bitsandbytes to the latest version 0.45.0 (#11201)
jeejeelee Dec 15, 2024
a1c0205
[torch.compile] allow tracking forward time (#11081)
youkaichao Dec 15, 2024
b10609e
[Misc] Clean up multi-modal processor (#11207)
DarkLight1337 Dec 15, 2024
96d673e
[Bugfix] Fix error handling of unsupported sliding window (#11213)
DarkLight1337 Dec 15, 2024
38e599d
[Doc] add documentation for disaggregated prefilling (#11197)
KuntaiDu Dec 15, 2024
d263bd9
[Core] Support disaggregated prefill with Mooncake Transfer Engine (#…
ShangmingCai Dec 15, 2024
25ebed2
[V1][Minor] Cache np arange to reduce input preparation overhead (#11…
WoosukKwon Dec 15, 2024
da6f409
Update deploying_with_k8s.rst (#10922)
AlexHe99 Dec 16, 2024
69ba344
[Bugfix] Fix block size validation (#10938)
chenqianfzh Dec 16, 2024
17138af
[Bugfix] Fix the default value for temperature in ChatCompletionReque…
yansh97 Dec 16, 2024
b3b1526
WIP: [CI/Build] simplify Dockerfile build for ARM64 / GH200 (#11212)
cennn Dec 16, 2024
bddbbcb
[Model] Support Cohere2ForCausalLM (Cohere R7B) (#11203)
janimo Dec 16, 2024
d927dbc
[Model] Refactor Ultravox to use merged input processor (#11198)
Isotr0py Dec 16, 2024
2ca830d
[Doc] Reorder vision language examples in alphabet order (#11228)
Isotr0py Dec 16, 2024
efbce85
[misc] Layerwise profile updates (#10242)
varun-sundar-rabindranath Dec 16, 2024
551603f
[core] overhaul memory profiling and fix backward compatibility (#10511)
youkaichao Dec 16, 2024
35ffa68
[Docs] hint to enable use of GPU performance counters in profiling to…
bk-TurbaAI Dec 16, 2024
c301616
[ci][tests] add gh200 tests (#11244)
youkaichao Dec 16, 2024
88a412e
[torch.compile] fast inductor (#11108)
youkaichao Dec 17, 2024
35bae11
fix gh200 tests on main (#11246)
youkaichao Dec 17, 2024
0064f69
[CI] Add test case with JSON schema using references + use xgrammar b…
mgoin Dec 17, 2024
66d4b16
[Frontend] Add OpenAI API support for input_audio (#11027)
kylehh Dec 17, 2024
59c9b6e
[V1][VLM] Proper memory profiling for image language models (#11210)
ywang96 Dec 17, 2024
e88db68
[Platform] platform agnostic for EngineArgs initialization (#11225)
wangxiyuan Dec 17, 2024
2bfdbf2
[V1][Core] Use weakref.finalize instead of atexit (#11242)
tlrmchlsmth Dec 17, 2024
02222a0
[Misc] Kernel Benchmark for `RMSNorm` (#11241)
ywang96 Dec 17, 2024
f9ecbb1
[Misc] Allow passing logits_soft_cap for xformers backend (#11252)
Isotr0py Dec 17, 2024
2d1b9ba
[Bugfix] Fix request cancellation without polling (#11190)
joerunde Dec 17, 2024
c77eb8a
[Bugfix] Set temperature=0.7 in test_guided_choice_chat (#11264)
mgoin Dec 18, 2024
bf8717e
[V1] Prefix caching for vision language models (#11187)
comaniac Dec 18, 2024
866fa45
[Bugfix] Restore support for larger block sizes (#11259)
kzawora-intel Dec 18, 2024
8b79f9e
[Bugfix] Fix guided decoding with tokenizer mode mistral (#11046)
wallashss Dec 18, 2024
f04e407
[MISC][XPU]update ipex link for CI fix (#11278)
yma11 Dec 18, 2024
60508ff
[Kernel]: Cutlass 2:4 Sparsity + FP8/Int8 Quant Support (#10995)
dsikka Dec 18, 2024
996aa70
[Bugfix] Fix broken phi3-v mm_processor_kwargs tests (#11263)
Isotr0py Dec 18, 2024
362cff1
[CI][Misc] Remove Github Action Release Workflow (#11274)
simon-mo Dec 18, 2024
f954fe0
[FIX] update openai version (#11287)
jikunshang Dec 18, 2024
ca5f54a
[Bugfix] fix minicpmv test (#11304)
joerunde Dec 18, 2024
c75f396
Merge branch 'main' of https://github.com/vllm-project/vllm into ibm-…
fialhocoelho Dec 18, 2024
f428215
Squash 6357
fialhocoelho Dec 18, 2024
d279a64
Squash 11307
fialhocoelho Dec 18, 2024
11ed70e
Squash 10647
fialhocoelho Dec 18, 2024
24c996d
Squash 10235
fialhocoelho Dec 18, 2024
5f1698f
install numactl to enable fastsafetensors and adapter for 0.6.5
fialhocoelho Dec 18, 2024
[TPU] Update requirements-tpu (vllm-project#10726)
Signed-off-by: Richard Liu <ricliu@google.com>
richardsliu authored Nov 28, 2024
commit 3ed5e7314667f0a9c0c47e6d635ac82fd93296a2
10 changes: 5 additions & 5 deletions requirements-tpu.txt
@@ -16,8 +16,8 @@ ray[default]
 --find-links https://storage.googleapis.com/libtpu-releases/index.html
 --find-links https://storage.googleapis.com/jax-releases/jax_nightly_releases.html
 --find-links https://storage.googleapis.com/jax-releases/jaxlib_nightly_releases.html
-torch==2.6.0.dev20241114+cpu
-torchvision==0.20.0.dev20241114+cpu
-torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.6.0.dev20241114-cp310-cp310-linux_x86_64.whl
-jaxlib==0.4.32.dev20240829
-jax==0.4.32.dev20240829
+torch==2.6.0.dev20241126+cpu
+torchvision==0.20.0.dev20241126+cpu
+torch_xla[tpu] @ https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-2.6.0.dev20241126-cp310-cp310-linux_x86_64.whl
+jaxlib==0.4.36.dev20241122
+jax==0.4.36.dev20241122
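
For reference, a minimal sanity-check sketch (not part of this PR) for verifying the bumped pins after installation. It assumes a Python 3.10 TPU VM where `pip install -r requirements-tpu.txt` has completed; the expected version strings are taken directly from the pins above.

    # Sanity-check sketch (assumption: run on a Python 3.10 TPU VM after
    # installing requirements-tpu.txt). Expected versions mirror the pins
    # in the diff above.
    import torch
    import torchvision
    import jax

    assert torch.__version__ == "2.6.0.dev20241126+cpu", torch.__version__
    assert torchvision.__version__ == "0.20.0.dev20241126+cpu", torchvision.__version__
    assert jax.__version__ == "0.4.36.dev20241122", jax.__version__
    print("TPU requirement pins resolved as expected")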