[Core/DBO][1/N] Add Dual-Batch Overlap mechanism to VLLM #23693
Conversation
parallel_group.add_argument(
    "--dbo-decode-token-threshold",
    **parallel_kwargs["dbo_decode_token_threshold"])
What is the future plan for this argument? Will we add a separate --dbo-prefill-token-threshold? Could there be one argument instead?
Yep, we are planning to add a prefill version of this argument.
vllm/model_executor/layers/fused_moe/deepep_ll_prepare_finalize.py
fused_out_buffer = SharedResizableBuffer()
workspace13_buffer = SharedResizableBuffer()
workspace2_buffer = SharedResizableBuffer()
Why do we need this?
Just a general memory footprint reduction. Primarily targeting cudagraphs, though.
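For context, a minimal sketch of the idea behind such a shared buffer (illustrative, not the exact vLLM class): one backing tensor that grows on demand and is re-sliced per call, so repeated MoE invocations (and captured cudagraphs) share a single allocation instead of each holding its own workspace.

import torch

class SharedResizableBuffer:
    def __init__(self):
        self._backing = None

    def get(self, shape, device, dtype):
        numel = 1
        for dim in shape:
            numel *= dim
        # Grow (or re-create) the backing storage only when a larger or
        # incompatible request arrives; otherwise reuse the allocation.
        if (self._backing is None or self._backing.numel() < numel
                or self._backing.device != device
                or self._backing.dtype != dtype):
            self._backing = torch.empty(numel, device=device, dtype=dtype)
        return self._backing[:numel].view(shape)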
vllm/v1/worker/gpu_ubatch_wrapper.py
# if we are using mrope
if positions.ndim == 2:
    sliced_positions = positions[:, tokens_slice]
else:
    sliced_positions = positions[tokens_slice]
What is the mrope interaction? Could we add a comment explaining it here?
It's largely just that mrope adds an additional dimension to the positions tensor, so we need to slice the lower dimension. I'll add a comment.
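A rough demonstration of the shape difference being described; the (3, num_tokens) M-RoPE layout here is an assumption for illustration:

import torch

num_tokens = 8
tokens_slice = slice(0, 4)  # first microbatch's tokens

# Standard RoPE: one position id per token -> shape (num_tokens,)
positions_1d = torch.arange(num_tokens)
assert positions_1d[tokens_slice].shape == (4,)

# M-RoPE: one position id per rope section per token -> shape
# (3, num_tokens), so the token axis is the second dimension and must
# be sliced as positions[:, tokens_slice].
positions_mrope = torch.arange(num_tokens).repeat(3, 1)
assert positions_mrope[:, tokens_slice].shape == (3, 4)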
vllm/v1/worker/ubatch_splitting.py
# Sanity check that the existing padding isn't giving us an empty second
# ubatch. Abort if so.
if is_second_ubatch_empty(num_tokens_unpadded, num_tokens_padded):
    logger.debug("Aborting ubatching %s %s", num_tokens_unpadded,
                 num_tokens_padded)
    should_ubatch = False
Is this something that's expected to happen sometimes, and that's OK? If not, I think this should be a warning instead.
And then could you add a bit more detail to the log, e.g.
Suggested change:

# Sanity check that the existing padding isn't giving us an empty second
# ubatch. Abort if so.
if is_second_ubatch_empty(num_tokens_unpadded, num_tokens_padded):
    logger.warning("Empty second µbatch detected: unpadded tokens: %s, padded tokens: %s",
                   num_tokens_unpadded,
                   num_tokens_padded)
    should_ubatch = False
It is expected to happen and isn't necessarily a bug when it does. I find the debug log to be really helpful when debugging misc padding issues. We can certainly take it out, though.
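For reference, here's a hypothetical sketch of the condition under discussion; the real is_second_ubatch_empty in vLLM may differ. The idea: the batch is split at half of the padded token count, so the second µbatch is empty whenever all real (unpadded) tokens fit in the first half.

def is_second_ubatch_empty(num_tokens_unpadded: int,
                           num_tokens_padded: int) -> bool:
    first_ubatch_size = num_tokens_padded // 2
    # If every real token lands in ubatch 0, ubatch 1 is pure padding.
    return num_tokens_unpadded <= first_ubatch_size

assert is_second_ubatch_empty(3, 8)      # all 3 real tokens fit in ubatch 0
assert not is_second_ubatch_empty(5, 8)  # ubatch 1 still holds 1 real token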
Left a few minor comments. Overall I think this is ready to land otherwise. Maybe a little rough around the edges with the model runner changes, but it will be great to have this landed on main, especially as we have a prefill DBO PR ready to be reviewed as soon as this one lands.
This pull request has merge conflicts that must be resolved before it can be merged.
I thought the […] Could it be a real problem? @elvircrn, @dougbtv
Edit: Confirmed it's picking up old binaries.
Hey! Quick question - do you have any performance numbers for this change? Mainly wondering about the efficiency of the communication-computation overlap strategy in the PR.
Purpose
This PR adds support for Dual-Batch Overlap (DBO) in vLLM. In its current state it will only be enabled when a user provides the --enable-microbatching flag. Furthermore, it will only be used when all DP groups are running full-decode batches. This PR supports running DBO with full cudagraphs, which is essential for minimizing CPU overhead and getting performance from this feature.
To implement DBO, at a high level, we split the batch into two microbatches. Then, using two threads and two CUDA streams (one for communication and one for computation), we overlap the dispatch and combine all-to-all kernels of one microbatch with the compute kernels of the other microbatch.
When microbatching is enabled and supported, the GPUModelRunner will split the batch into two token slices. These token slices are then passed into the attention metadata builders during _prepare_inputs to generate one attention metadata object per microbatch. When actually running the model, the model runner will spawn two microbatching threads that communicate with each other using a UBatchContext. Each of these threads will then run self.model with the appropriate attention metadata.
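As a rough illustration (names are hypothetical; the actual slicing logic lives in ubatch_splitting.py), the split amounts to something like:

# Cut the padded token range at its midpoint; each resulting slice then
# gets its own attention metadata object from the metadata builders.
num_tokens_padded = 10
mid = num_tokens_padded // 2
ubatch_token_slices = [slice(0, mid), slice(mid, num_tokens_padded)]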
Without any additional modifications to the code, this would just result in one microbatch running to completion before the other microbatch starts. In order to get overlap, we've added a "yield" call that can be inserted into the all-to-all kernels to interleave the two microbatches. The yield_and_switch_from_compute_to_comm function yields the CPU from this thread (thread A) to the other microbatching thread (thread B). Once thread A has resumed execution, either because thread B yielded the CPU or finished its execution, it will swap over to the communication stream and start dispatching kernels there. yield_and_switch_from_comm_to_compute behaves similarly but in the opposite direction: it swaps from the communication stream to the compute stream.
There are both GPU and CPU events to synchronize all of this. That being said, it is absolutely critical that only one microbatching thread is running at a time (the other must be waiting on an event), and it is equally critical that both microbatches run the exact same number of yields. A CPU-only toy of the resulting ping-pong pattern is sketched below.
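In this sketch, threading.Event stands in for the real GPU/CPU event machinery, and ToyUbatchContext is an invented name, not the actual UBatchContext API; it only shows the alternation invariant, not the stream switching.

import threading

class ToyUbatchContext:
    def __init__(self, my_turn: threading.Event,
                 peer_turn: threading.Event):
        self.my_turn = my_turn
        self.peer_turn = peer_turn

    def yield_to_peer(self):
        # Hand the CPU to the other microbatch thread, then block until
        # it yields back (or finishes) before dispatching more kernels.
        self.peer_turn.set()
        self.my_turn.wait()
        self.my_turn.clear()

def run_microbatch(name: str, ctx: ToyUbatchContext, num_yields: int):
    ctx.my_turn.wait()
    ctx.my_turn.clear()
    # Both threads must execute the same number of yields, or one side
    # deadlocks waiting on an event that is never signaled.
    for i in range(num_yields):
        print(f"{name}: dispatching kernels before yield {i}")
        ctx.yield_to_peer()
    print(f"{name}: done")
    ctx.peer_turn.set()  # let the peer finish if it is still waiting

ev_a, ev_b = threading.Event(), threading.Event()
t_a = threading.Thread(target=run_microbatch,
                       args=("ubatch0", ToyUbatchContext(ev_a, ev_b), 2))
t_b = threading.Thread(target=run_microbatch,
                       args=("ubatch1", ToyUbatchContext(ev_b, ev_a), 2))
t_a.start(); t_b.start()
ev_a.set()  # ubatch0 goes first
t_a.join(); t_b.join()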
Test Plan
In general, my test plan was to run lm_eval with deepseek-ai/DeepSeek-V2-Lite. We've also run numerous times with R1 in a multi-node setup and verified that lm_eval produces reasonable output.

Non-DBO Runs
Eager
Command
VLLM_ALL2ALL_BACKEND=deepep_low_latency vllm serve --model="deepseek-ai/DeepSeek-V2-Lite" --data-parallel-size 2 --enable-expert-parallel --enforce-eager
Result
Default
Command
VLLM_ALL2ALL_BACKEND=deepep_low_latency g2 vllm serve --model="deepseek-ai/DeepSeek-V2-Lite" --data-parallel-size 2 --enable-expert-parallel
Result
DBO Runs
Eager
Command
VLLM_ALL2ALL_BACKEND=deepep_low_latency g2 vllm serve --model="deepseek-ai/DeepSeek-V2-Lite" --data-parallel-size 2 --enable-expert-parallel --enforce-eager --enable-microbatching --microbatching-token-threshold 4
Result
Full cudagraphs
Command
VLLM_ALL2ALL_BACKEND=deepep_low_latency g2 vllm serve --model="deepseek-ai/DeepSeek-V2-Lite" --data-parallel-size 2 --enable-expert-parallel --compilation_config '{"cudagraph_mode": "full_decode_only"}' --enable-microbatching --microbatching-token-threshold 4
Result