Your current environment
$ python collect_env.py
Collecting environment information...
System Info
==============================
OS : Red Hat Enterprise Linux 9.5 (Plow) (x86_64)
GCC version : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.34
==============================
PyTorch Info
PyTorch version : 2.7.1+cu126
Is debug build : False
CUDA used to build PyTorch : 12.6
ROCM used to build PyTorch : N/A
==============================
Python Environment
Python version : 3.12.5 (main, Apr 2 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime)
Python platform : Linux-5.14.0-284.88.1.el9_2.x86_64-x86_64-with-glibc2.34
==============================
CUDA / GPU Info
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
Nvidia driver version : 550.127.08
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Cascadelake)
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 6
BogoMIPS: 4799.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 160 MiB (40 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvshmem-cu12==3.3.20
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.4.0+gitf7888497
[pip3] pyzmq==27.0.2
[pip3] torch==2.7.1
[pip3] torchaudio==2.7.1
[pip3] torchvision==0.22.1
[pip3] transformers==4.55.4
[pip3] triton==3.3.1
[conda] Could not collect
==============================
vLLM Info
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.10.1rc2.dev353+gd3d2aad5a (git sha: d3d2aad)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 PIX 0-39 0 N/A
GPU1 NV12 X NODE 0-39 0 N/A
NIC0 PIX NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
==============================
Environment Variables
NVIDIA_VISIBLE_DEVICES=GPU-d26b78fb-ce95-35e7-99b4-873357947e6e,GPU-3f3e7db7-60cb-57fb-00e2-7362485c95c4
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
VLLM_WORKER_MULTIPROC_METHOD=fork
VLLM_USAGE_SOURCE=production-docker-image
TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC=15
CUDA_VISIBLE_DEVICES=0,1
VLLM_DISABLE_COMPILE_CACHE=1
TORCH_NCCL_DUMP_ON_TIMEOUT=0
LD_LIBRARY_PATH=/opt/vllm/lib/python3.12/site-packages/nvidia/nvtx/lib:/opt/vllm/lib/python3.12/site-packages/nvidia/cuda_runtime/lib:/opt/vllm/lib/python3.12/site-packages/nvidia/cuda_nvrtc/lib:
VLLM_NO_USAGE_STATS=1
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
🐛 Describe the bug
When running `vllm serve -pp 2 BAAI/bge-multilingual-gemma2 --enforce-eager` (`-pp 2` is shorthand for `--pipeline-parallel-size 2`), the server crashes with:
File "/home/vllm/repo/vllm/v1/engine/core.py", line 90, in __init__
self._initialize_kv_caches(vllm_config)
File "/home/vllm/repo/vllm/v1/engine/core.py", line 201, in _initialize_kv_caches
unify_kv_cache_configs(kv_cache_configs)
File "/home/vllm/repo/vllm/v1/core/kv_cache_utils.py", line 1161, in unify_kv_cache_configs
assert group_rank_0.kv_cache_spec == group_rank_i.kv_cache_spec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
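For reference, the same crash should be reproducible from the offline API as well. This is an untested sketch; the `task`/`pipeline_parallel_size`/`enforce_eager` keyword arguments are assumed to mirror the CLI flags and may be spelled differently in other vLLM versions:

```python
# Untested repro sketch mirroring
#   vllm serve -pp 2 BAAI/bge-multilingual-gemma2 --enforce-eager
# via the offline API. Keyword names are assumptions based on the CLI flags.
from vllm import LLM

llm = LLM(
    model="BAAI/bge-multilingual-gemma2",
    task="embed",              # bge-multilingual-gemma2 is an embedding model
    pipeline_parallel_size=2,  # same as -pp 2
    enforce_eager=True,        # same as --enforce-eager
)
# The AssertionError in unify_kv_cache_configs is hit during engine/KV-cache
# initialization, i.e. inside the LLM(...) constructor, before any request.
print(llm.embed(["hello world"]))
```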
The cause seems to be that on each rank there is a single-layer KVCacheGroupSpec that is somehow not merged into its same-spec group. Rank 0 holds 11 sliding-window layers and 10 full-attention layers (Gemma2 alternates the two), so the extra sliding-window layer (model.layers.20) ends up in a group of its own; rank 1 holds 11 full-attention and 10 sliding-window layers, so its leftover group (model.layers.41) is a full-attention one. The group lists of the two ranks therefore disagree position by position:
rank=0 kv_cache_config:
KVCacheConfig(
num_blocks=49871,
kv_cache_groups=[
KVCacheGroupSpec(
layer_names=[
'model.layers.1.self_attn.attn',
'model.layers.3.self_attn.attn',
'model.layers.5.self_attn.attn',
'model.layers.7.self_attn.attn',
'model.layers.9.self_attn.attn',
'model.layers.11.self_attn.attn',
'model.layers.13.self_attn.attn',
'model.layers.15.self_attn.attn',
'model.layers.17.self_attn.attn',
'model.layers.19.self_attn.attn',
],
kv_cache_spec=FullAttentionSpec(
block_size=16,
num_kv_heads=8,
head_size=256,
dtype=torch.bfloat16,
use_mla=False,
sliding_window=None,
attention_chunk_size=None,
),
),
KVCacheGroupSpec(
layer_names=[
'model.layers.0.self_attn.attn',
'model.layers.2.self_attn.attn',
'model.layers.4.self_attn.attn',
'model.layers.6.self_attn.attn',
'model.layers.8.self_attn.attn',
'model.layers.10.self_attn.attn',
'model.layers.12.self_attn.attn',
'model.layers.14.self_attn.attn',
'model.layers.16.self_attn.attn',
'model.layers.18.self_attn.attn',
],
kv_cache_spec=SlidingWindowSpec(
block_size=16,
num_kv_heads=8,
head_size=256,
dtype=torch.bfloat16,
use_mla=False,
sliding_window=4096,
),
),
KVCacheGroupSpec(
layer_names=[
'model.layers.20.self_attn.attn',
],
kv_cache_spec=SlidingWindowSpec(
block_size=16,
num_kv_heads=8,
head_size=256,
dtype=torch.bfloat16,
use_mla=False,
sliding_window=4096,
),
),
],
)
rank=1 kv_cache_config:
KVCacheConfig(
num_blocks=49848,
kv_cache_groups=[
KVCacheGroupSpec(
layer_names=[
'model.layers.21.self_attn.attn',
'model.layers.23.self_attn.attn',
'model.layers.25.self_attn.attn',
'model.layers.27.self_attn.attn',
'model.layers.29.self_attn.attn',
'model.layers.31.self_attn.attn',
'model.layers.33.self_attn.attn',
'model.layers.35.self_attn.attn',
'model.layers.37.self_attn.attn',
'model.layers.39.self_attn.attn',
],
kv_cache_spec=FullAttentionSpec(
block_size=16,
num_kv_heads=8,
head_size=256,
dtype=torch.bfloat16,
use_mla=False,
sliding_window=None,
attention_chunk_size=None,
),
),
KVCacheGroupSpec(
layer_names=[
'model.layers.41.self_attn.attn',
],
kv_cache_spec=FullAttentionSpec(
block_size=16,
num_kv_heads=8,
head_size=256,
dtype=torch.bfloat16,
use_mla=False,
sliding_window=None,
attention_chunk_size=None,
),
),
KVCacheGroupSpec(
layer_names=[
'model.layers.22.self_attn.attn',
'model.layers.24.self_attn.attn',
'model.layers.26.self_attn.attn',
'model.layers.28.self_attn.attn',
'model.layers.30.self_attn.attn',
'model.layers.32.self_attn.attn',
'model.layers.34.self_attn.attn',
'model.layers.36.self_attn.attn',
'model.layers.38.self_attn.attn',
'model.layers.40.self_attn.attn',
],
kv_cache_spec=SlidingWindowSpec(
block_size=16,
num_kv_heads=8,
head_size=256,
dtype=torch.bfloat16,
use_mla=False,
sliding_window=4096,
),
),
],
)
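Laid side by side, the mismatch is easy to see: in the order shown above, rank 0's group specs read (full, sliding, sliding) while rank 1's read (full, full, sliding), so a position-wise equality check trips at index 1. Here is a minimal sketch of that failing comparison, assuming groups are compared in the order shown in the dumps; the `Spec` class is a simplified stand-in for FullAttentionSpec/SlidingWindowSpec, not vLLM's actual type:

```python
# Minimal sketch of the failing check in unify_kv_cache_configs, with
# simplified stand-ins for FullAttentionSpec / SlidingWindowSpec.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Spec:
    kind: str
    sliding_window: Optional[int] = None

FULL = Spec("full")
SLIDING = Spec("sliding", sliding_window=4096)

# Group specs per rank, in the order shown in the dumps above:
rank0 = [FULL, SLIDING, SLIDING]  # 10 full / 10 sliding / 1 sliding layer(s)
rank1 = [FULL, FULL, SLIDING]     # 10 full / 1 full / 10 sliding layer(s)

for i, (g0, g1) in enumerate(zip(rank0, rank1)):
    assert g0 == g1, f"mismatch at group {i}: {g0} != {g1}"  # fails at i=1
```

Note that if each rank's single-layer group were merged into its same-spec 10-layer group, both ranks would end up with (full, sliding) and the specs would match, which is presumably the merge the quoted sentence refers to.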
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.