commit_hash string | pr_url string | pr_date string | timeline_extracted_at string | analysis_extracted_at string | models list | perf_command string | has_serving bool | has_latency bool | has_throughput bool | uses_lm_eval bool | commit_subject string | commit_message string | commit_date string | files_changed list | stats dict | diff_text string | apis list | affected_paths list | repo string | hardware string | lm_eval_command string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fc542144c4477ffec1d3de6fa43e54f8fb5351e8 | https://github.com/vllm-project/vllm/pull/12563 | 2025-01-31 | 2025-09-07 17:46:50 | 2025-09-07 17:46:50 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_serving.py --model meta-llama/Llama-3.1-8B-Instruct --num-prompts 100 | true | false | false | true | [Feature] Fix guided decoding blocking bitmask memcpy (#12563) | [Feature] Fix guided decoding blocking bitmask memcpy (#12563)
**[Guided decoding performance optimization]** Sending the guided
decoding bitmask in xgrammar to the GPU
(`self.token_bitmask.to(scores.device)`) is a blocking operation that
prevents the CPU from pre-launching the sampler kernels. The CPU waits
until decode is complete, then copies the bitmask over. This PR changes
the operation to async by setting `non_blocking=True`.
(Before) The CPU blocks on a `cudaStreamSynchronize` and only launches the
sampling kernels after the bitmask has been applied. The PR body shows an
Nsys profile of one decode phase from Llama 3.1 8B (image not reproduced here).

With the optimization, the copy overlaps with kernel launches and the stall is no longer present.

| 2025-01-31T15:37:30-08:00 | [
"vllm/model_executor/guided_decoding/xgrammar_decoding.py"
] | {
"commit_year": 2025,
"num_edited_lines": 4,
"num_files": 1,
"num_hunks": 1,
"num_non_test_edited_lines": 4,
"num_non_test_files": 1,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/model_executor/guided_decoding/xgrammar_decoding.py b/vllm/model_executor/guided_decoding/xgrammar_decoding.py
index 2d8594cb8..ee30ce96f 100644
--- a/vllm/model_executor/guided_decoding/xgrammar_decoding.py
+++ b/vllm/model_executor/guided_decoding/xgrammar_decoding.py
@@ -307,8 +307,8 @@ class XGrammarLogitsProcessor:
# Note: In this method, if the tensors have different dimensions
# on CPU device fails, but on GPU it runs without error. Hence the
# unsqueeze above for scores, to match the token bitmask shape
- xgr.apply_token_bitmask_inplace(scores,
- self.token_bitmask.to(scores.device))
+ xgr.apply_token_bitmask_inplace(
+ scores, self.token_bitmask.to(scores.device, non_blocking=True))
if device_type != "cuda":
scores = scores.to(dtype).to(device_type).squeeze() | [
"None"
] | [
"vllm/engine/llm_engine.py",
"vllm/v1/engine/llm_engine.py",
"vllm/entrypoints/llm.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
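The pattern behind this commit — starting the host-to-device copy asynchronously so the CPU can keep enqueuing sampler kernels, and synchronizing only right before the mask is needed — can be sketched in pure Python. This is an analogy, not vLLM code: a background thread stands in for the async CUDA copy, and the function and variable names are illustrative.

```python
import threading

def copy_to_device(buf, out):
    # Stand-in for an async cudaMemcpy: runs off the main thread.
    out.extend(buf)

def sample_with_async_copy(bitmask, launch_kernel):
    """Start the bitmask 'copy' in the background, keep launching work
    on the CPU, and join only at the point the mask is actually used
    (the non_blocking=True idea from the PR)."""
    device_mask = []
    copier = threading.Thread(target=copy_to_device,
                              args=(bitmask, device_mask))
    copier.start()                                    # async copy begins
    launched = [launch_kernel(i) for i in range(3)]   # CPU is not blocked
    copier.join()                                     # sync just before use
    return device_mask, launched

mask, work = sample_with_async_copy([1, 0, 1], lambda i: f"kernel-{i}")
```

The key property is that the sync point moves from the start of the copy to the last possible moment, so kernel launches overlap with the transfer.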
fa63e710c7fbaae3a445f669d3b5ba6b9a4ef412 | https://github.com/vllm-project/vllm/pull/12094 | 2025-01-26 | 2025-09-07 17:46:54 | 2025-09-07 17:46:54 | [
"meta-llama/Meta-Llama-3-8B"
] | VLLM_USE_V1=1 python benchmarks/benchmark_latency.py --model meta-llama/Meta-Llama-3-8B --tensor-parallel-size 1 --input-len 1000 --batch-size 32 | false | true | false | true | [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (#12094) | [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (#12094) | 2025-01-26T00:42:37-08:00 | [
"vllm/v1/outputs.py",
"vllm/v1/sample/sampler.py",
"vllm/v1/worker/gpu_model_runner.py"
] | {
"commit_year": 2025,
"num_edited_lines": 34,
"num_files": 3,
"num_hunks": 6,
"num_non_test_edited_lines": 34,
"num_non_test_files": 3,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/v1/outputs.py b/vllm/v1/outputs.py
index acc3a944e..32aee44e3 100644
--- a/vllm/v1/outputs.py
+++ b/vllm/v1/outputs.py
@@ -8,7 +8,7 @@ import torch
class SamplerOutput:
# [num_reqs]
- sampled_token_ids: List[int]
+ sampled_token_ids: torch.Tensor
# [num_reqs, max_num_logprobs + 1]
logprob_token_ids: Optional[torch.Tensor]
diff --git a/vllm/v1/sample/sampler.py b/vllm/v1/sample/sampler.py
index 7cd42ca21..9ad665a64 100644
--- a/vllm/v1/sample/sampler.py
+++ b/vllm/v1/sample/sampler.py
@@ -50,9 +50,8 @@ class Sampler(nn.Module):
# Use int32 to reduce the tensor size.
sampled = sampled.to(torch.int32)
- # NOTE: CPU-GPU synchronization happens here.
sampler_output = SamplerOutput(
- sampled_token_ids=sampled.tolist(),
+ sampled_token_ids=sampled,
logprob_token_ids=topk_indices,
logprobs=topk_logprobs,
prompt_logprob_token_ids=None,
diff --git a/vllm/v1/worker/gpu_model_runner.py b/vllm/v1/worker/gpu_model_runner.py
index 4b3c325de..6339f1f03 100644
--- a/vllm/v1/worker/gpu_model_runner.py
+++ b/vllm/v1/worker/gpu_model_runner.py
@@ -775,10 +775,10 @@ class GPUModelRunner:
sampling_metadata=sampling_metadata,
)
- sampled_token_ids = sampler_output.sampled_token_ids
# TODO(woosuk): The following loop can be slow since it iterates over
# the requests one by one. Optimize.
num_reqs = self.input_batch.num_reqs
+ request_seq_lens: List[Tuple[int, CachedRequestState, int]] = []
for i, req_id in enumerate(self.input_batch.req_ids[:num_reqs]):
assert req_id is not None
req_state = self.requests[req_id]
@@ -787,10 +787,10 @@ class GPUModelRunner:
assert seq_len <= req_state.num_tokens
if seq_len == req_state.num_tokens:
# Append the sampled token to the output token ids.
- token_id = sampled_token_ids[i]
- self.input_batch.token_ids_cpu[i, seq_len] = token_id
self.input_batch.num_tokens[i] += 1
- req_state.output_token_ids.append(token_id)
+ # OPTIMIZATION: Priming the state updates for later updates.
+ req_state.output_token_ids.append(0)
+ request_seq_lens.append((i, req_state, seq_len))
else:
# Ignore the sampled token from the partial request.
# Rewind the generator state as if the token was not sampled.
@@ -799,6 +799,21 @@ class GPUModelRunner:
# This relies on cuda-specific torch-internal impl details
generator.set_offset(generator.get_offset() - 4)
+ # num_reqs entries should be non-None
+ assert all(
+ req_id is not None for req_id in
+ self.input_batch.req_ids[:num_reqs]), "req_ids contains None"
+ req_ids = cast(List[str], self.input_batch.req_ids[:num_reqs])
+
+ # NOTE: GPU -> CPU Sync happens here.
+ # Move as many CPU operations as possible before this sync point.
+ sampled_token_ids = sampler_output.sampled_token_ids.tolist()
+ # Update with the actual token ids
+ for i, req_state, seq_len in request_seq_lens:
+ token_id = sampled_token_ids[i]
+ self.input_batch.token_ids_cpu[i, seq_len] = token_id
+ req_state.output_token_ids[-1] = token_id
+
if sampler_output.logprob_token_ids is None:
logprob_token_ids = None
else:
@@ -808,12 +823,6 @@ class GPUModelRunner:
else:
logprobs = sampler_output.logprobs.cpu()
- # num_reqs entries should be non-None
- assert all(
- req_id is not None for req_id in
- self.input_batch.req_ids[:num_reqs]), "req_ids contains None"
- req_ids = cast(List[str], self.input_batch.req_ids[:num_reqs])
-
model_runner_output = ModelRunnerOutput(
req_ids=req_ids,
req_id_to_index=self.input_batch.req_id_to_index, | [
"vllm.v1.outputs.SamplerOutput",
"vllm.v1.sample.sampler.Sampler.forward",
"vllm.v1.worker.GPUModelRunner.execute_model"
] | [
"vllm/v1/worker/gpu_model_runner.py",
"vllm/v1/sample/sampler.py",
"vllm/model_executor/layers/sampler.py",
"vllm/v1/sample/tpu/sampler.py",
"vllm/outputs.py",
"vllm/v1/outputs.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3-8B,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
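The diff above defers the GPU→CPU sync (`tolist()`) and primes per-request state with placeholders so that cheap CPU bookkeeping happens before the sync, and only the backfill happens after it. A minimal pure-Python sketch of that ordering, with hypothetical names (not the real `GPUModelRunner` API):

```python
def finalize_outputs(req_states, sampled_gpu):
    """Do all cheap CPU bookkeeping with a placeholder token first,
    then pay for a single 'sync' (tolist() in the real code) and
    backfill the actual sampled token ids."""
    pending = []
    for i, state in enumerate(req_states):
        state["output_token_ids"].append(0)   # prime with a placeholder
        pending.append((i, state))
    sampled = list(sampled_gpu)               # the single sync point
    for i, state in pending:
        state["output_token_ids"][-1] = sampled[i]
    return req_states

states = [{"output_token_ids": [7]}, {"output_token_ids": [8]}]
finalize_outputs(states, iter([11, 22]))
```

Moving work ahead of the sync point matters because everything before it runs while the GPU is still busy; everything after it is pure serial overhead.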
6dd94dbe94c1820a1e224cba65efcf0befa97995 | https://github.com/vllm-project/vllm/pull/12380 | 2025-01-24 | 2025-09-07 17:46:57 | 2025-09-07 17:46:57 | [
"meta-llama/Meta-Llama-3-8B"
] | python benchmarks/benchmark_latency.py --model meta-llama/Meta-Llama-3-8B --load-format dummy | false | true | false | true | [perf] fix perf regression from #12253 (#12380) | [perf] fix perf regression from #12253 (#12380) | 2025-01-24T11:34:27+08:00 | [
"vllm/worker/model_runner.py"
] | {
"commit_year": 2025,
"num_edited_lines": 5,
"num_files": 1,
"num_hunks": 2,
"num_non_test_edited_lines": 5,
"num_non_test_files": 1,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/worker/model_runner.py b/vllm/worker/model_runner.py
index cf2f1c6b3..bf1a40d48 100644
--- a/vllm/worker/model_runner.py
+++ b/vllm/worker/model_runner.py
@@ -455,7 +455,6 @@ class ModelInputForGPUBuilder(ModelRunnerInputBuilderBase[ModelInputForGPU]):
self.enable_prompt_adapter = (self.runner.prompt_adapter_config
is not None)
self.multi_modal_input_mapper = self.runner.multi_modal_input_mapper
- self.decode_only = True
# Attention metadata inputs.
if self.attn_backend is not None:
@@ -477,6 +476,10 @@ class ModelInputForGPUBuilder(ModelRunnerInputBuilderBase[ModelInputForGPU]):
finished_requests_ids: Optional[List[str]] = None) -> None:
self.finished_requests_ids = finished_requests_ids
+ # if the current batch is decode-only.
+ # will be set to False if there is any non-decode request.
+ self.decode_only = True
+
# Intermediate data (data in CPU before going to GPU) for
# the current sequence group.
self.inter_data_list: List[ | [
"vllm.worker.model_runner.ModelInputForGPUBuilder.__init__"
] | [
"vllm/worker/model_runner.py",
"vllm/engine/llm_engine.py",
"vllm/v1/engine/llm_engine.py",
"vllm/entrypoints/openai/serving_completion.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3-8B,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
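The regression this commit fixes is a classic stale-state bug: the builder object is reused across batches, so a per-batch flag initialized in `__init__` (called once) never gets reset. The fix moves the reset into the per-batch `prepare()` call. A toy sketch of the bug class, with an illustrative class name:

```python
class BatchBuilder:
    """Per-batch state must be reset in prepare() (called once per
    batch), not in __init__ (called once per builder lifetime)."""

    def __init__(self):
        pass  # long-lived configuration only; no per-batch state here

    def prepare(self):
        self.decode_only = True   # reset for every new batch (the fix)

    def add_request(self, is_decode):
        if not is_decode:
            self.decode_only = False

builder = BatchBuilder()
builder.prepare(); builder.add_request(is_decode=False)  # mixed batch
first = builder.decode_only
builder.prepare(); builder.add_request(is_decode=True)   # decode-only batch
second = builder.decode_only
```

With the reset in `__init__`, `second` would incorrectly stay `False` after the first mixed batch, which is exactly the perf regression from #12253.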
310aca88c984983189a57f1b72e3b1dde89fb92f | https://github.com/vllm-project/vllm/pull/11870 | 2025-01-09 | 2025-09-07 17:47:12 | 2025-09-07 17:47:12 | [
"meta-llama/Meta-Llama-3-70B"
] | python benchmarks/benchmark_latency.py --model meta-llama/Meta-Llama-3-70B --load-format dummy --enforce-eager -tp 4 | false | true | false | true | [perf]fix current stream (#11870) | [perf]fix current stream (#11870) | 2025-01-09T07:18:21Z | [
"vllm/distributed/device_communicators/pynccl.py",
"vllm/distributed/parallel_state.py",
"vllm/utils.py",
"vllm/worker/multi_step_model_runner.py"
] | {
"commit_year": 2025,
"num_edited_lines": 61,
"num_files": 4,
"num_hunks": 14,
"num_non_test_edited_lines": 61,
"num_non_test_files": 4,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/distributed/device_communicators/pynccl.py b/vllm/distributed/device_communicators/pynccl.py
index fda4d007c..efc599871 100644
--- a/vllm/distributed/device_communicators/pynccl.py
+++ b/vllm/distributed/device_communicators/pynccl.py
@@ -10,6 +10,7 @@ from vllm.distributed.device_communicators.pynccl_wrapper import (
ncclRedOpTypeEnum, ncclUniqueId)
from vllm.distributed.utils import StatelessProcessGroup
from vllm.logger import init_logger
+from vllm.utils import current_stream
logger = init_logger(__name__)
@@ -96,7 +97,7 @@ class PyNcclCommunicator:
self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
self.world_size, self.unique_id, self.rank)
- stream = torch.cuda.current_stream()
+ stream = current_stream()
# A small all_reduce for warmup.
data = torch.zeros(1, device=device)
self.all_reduce(data)
@@ -119,7 +120,7 @@ class PyNcclCommunicator:
out_tensor = torch.empty_like(in_tensor)
if stream is None:
- stream = torch.cuda.current_stream()
+ stream = current_stream()
self.nccl.ncclAllReduce(buffer_type(in_tensor.data_ptr()),
buffer_type(out_tensor.data_ptr()),
in_tensor.numel(),
@@ -141,7 +142,7 @@ class PyNcclCommunicator:
f"this nccl communicator is created to work on {self.device}, "
f"but the input tensor is on {input_tensor.device}")
if stream is None:
- stream = torch.cuda.current_stream()
+ stream = current_stream()
self.nccl.ncclAllGather(
buffer_type(input_tensor.data_ptr()),
buffer_type(output_tensor.data_ptr()), input_tensor.numel(),
@@ -162,7 +163,7 @@ class PyNcclCommunicator:
f"this nccl communicator is created to work on {self.device}, "
f"but the input tensor is on {input_tensor.device}")
if stream is None:
- stream = torch.cuda.current_stream()
+ stream = current_stream()
self.nccl.ncclReduceScatter(
buffer_type(input_tensor.data_ptr()),
buffer_type(output_tensor.data_ptr()), output_tensor.numel(),
@@ -177,7 +178,7 @@ class PyNcclCommunicator:
f"this nccl communicator is created to work on {self.device}, "
f"but the input tensor is on {tensor.device}")
if stream is None:
- stream = torch.cuda.current_stream()
+ stream = current_stream()
self.nccl.ncclSend(buffer_type(tensor.data_ptr()), tensor.numel(),
ncclDataTypeEnum.from_torch(tensor.dtype), dst,
self.comm, cudaStream_t(stream.cuda_stream))
@@ -189,7 +190,7 @@ class PyNcclCommunicator:
f"this nccl communicator is created to work on {self.device}, "
f"but the input tensor is on {tensor.device}")
if stream is None:
- stream = torch.cuda.current_stream()
+ stream = current_stream()
self.nccl.ncclRecv(buffer_type(tensor.data_ptr()), tensor.numel(),
ncclDataTypeEnum.from_torch(tensor.dtype), src,
self.comm, cudaStream_t(stream.cuda_stream))
@@ -201,7 +202,7 @@ class PyNcclCommunicator:
f"this nccl communicator is created to work on {self.device}, "
f"but the input tensor is on {tensor.device}")
if stream is None:
- stream = torch.cuda.current_stream()
+ stream = current_stream()
if src == self.rank:
sendbuff = buffer_type(tensor.data_ptr())
# NCCL requires the sender also to have a receive buffer
diff --git a/vllm/distributed/parallel_state.py b/vllm/distributed/parallel_state.py
index a837c1dc5..be7f16ef5 100644
--- a/vllm/distributed/parallel_state.py
+++ b/vllm/distributed/parallel_state.py
@@ -357,10 +357,7 @@ class GroupCoordinator:
return out
pynccl_comm = self.pynccl_comm
assert pynccl_comm is not None
- # TODO: pynccl should not use `stream=`
- # it can just always use the current stream.
- out = pynccl_comm.all_reduce(input_,
- stream=torch.cuda.current_stream())
+ out = pynccl_comm.all_reduce(input_)
if out is None:
# fall back to the default all-reduce using PyTorch.
# this usually happens during testing.
diff --git a/vllm/utils.py b/vllm/utils.py
index a92b77efd..0b0905e67 100644
--- a/vllm/utils.py
+++ b/vllm/utils.py
@@ -944,6 +944,39 @@ def find_nccl_library() -> str:
return so_file
+prev_set_stream = torch.cuda.set_stream
+
+_current_stream = None
+
+
+def _patched_set_stream(stream: torch.cuda.Stream) -> None:
+ global _current_stream
+ _current_stream = stream
+ prev_set_stream(stream)
+
+
+torch.cuda.set_stream = _patched_set_stream
+
+
+def current_stream() -> torch.cuda.Stream:
+ """
+ replace `torch.cuda.current_stream()` with `vllm.utils.current_stream()`.
+ it turns out that `torch.cuda.current_stream()` is quite expensive,
+ as it will construct a new stream object at each call.
+ here we patch `torch.cuda.set_stream` to keep track of the current stream
+ directly, so that we can avoid calling `torch.cuda.current_stream()`.
+
+ the underlying hypothesis is that we do not call `torch._C._cuda_setStream`
+ from C/C++ code.
+ """
+ global _current_stream
+ if _current_stream is None:
+ # when this function is called before any stream is set,
+ # we return the default stream.
+ _current_stream = torch.cuda.current_stream()
+ return _current_stream
+
+
def enable_trace_function_call_for_thread(vllm_config: "VllmConfig") -> None:
"""Set up function tracing for the current thread,
if enabled via the VLLM_TRACE_FUNCTION environment variable
diff --git a/vllm/worker/multi_step_model_runner.py b/vllm/worker/multi_step_model_runner.py
index a2c2cebf8..acce92349 100644
--- a/vllm/worker/multi_step_model_runner.py
+++ b/vllm/worker/multi_step_model_runner.py
@@ -14,7 +14,7 @@ from vllm.model_executor.layers.sampler import (PromptLogprobs, SampleLogprobs,
get_pythonized_sample_results)
from vllm.sequence import (CompletionSequenceGroupOutput, IntermediateTensors,
Logprob, SequenceGroupMetadata, SequenceOutput)
-from vllm.utils import PyObjectCache, async_tensor_h2d
+from vllm.utils import PyObjectCache, async_tensor_h2d, current_stream
from vllm.worker.model_runner import (GPUModelRunnerBase,
ModelInputForGPUWithSamplingMetadata)
from vllm.worker.model_runner_base import (
@@ -498,7 +498,7 @@ class MultiStepModelRunner(GPUModelRunnerBase[StatefulModelInput]):
# appended sampler output from last iteration
# - also maybe pythonize if CPU is ahead of GPU
- current_stream = torch.cuda.current_stream()
+ stream = current_stream()
if not model_input.is_first_multi_step:
# Explicitly block on the previous step's forward to make sure we
# don't clobber any GPU tensors still in use.
@@ -541,7 +541,7 @@ class MultiStepModelRunner(GPUModelRunnerBase[StatefulModelInput]):
num_steps=1)
# record the event for the current step so that the next step can sync
- model_input.record_step_event(current_stream)
+ model_input.record_step_event(stream)
if get_pp_group().is_last_rank and self.is_driver_worker:
assert isinstance(output, list)
@@ -552,7 +552,7 @@ class MultiStepModelRunner(GPUModelRunnerBase[StatefulModelInput]):
# event for the pythonization so that we only pythonize if the
# tensors are ready. May be able to be combined with the step event
output_ready_event = torch.cuda.Event()
- output_ready_event.record(current_stream)
+ output_ready_event.record(stream)
if self.parallel_config.pipeline_parallel_size > 1:
output[0].sampled_token_ids_cpu = output[
0].sampled_token_ids.cpu() | [
"vllm.distributed.device_communicators.pynccl.PyNcclCommunicator.all_reduce",
"vllm.utils.current_stream",
"vllm.worker.multi_step_model_runner.MultiStepModelRunner.execute_model"
] | [
"vllm/distributed/device_communicators/pynccl.py",
"vllm/distributed/parallel_state.py",
"vllm/worker/multi_step_model_runner.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3-70B,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
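The `current_stream()` patch above works by intercepting the setter to cache the active stream, so the expensive getter (`torch.cuda.current_stream()`, which constructs a new `Stream` object per call) runs at most once. The same caching shape, stripped of torch, with stand-in functions:

```python
calls = {"expensive": 0}

def expensive_current_stream():
    # Stand-in for torch.cuda.current_stream(): builds a new object
    # on every call, which is what the commit avoids.
    calls["expensive"] += 1
    return "default-stream"

_current = None
_orig_set_stream = lambda stream: None   # stand-in for torch.cuda.set_stream

def patched_set_stream(stream):
    """Intercept the setter to keep the cache coherent."""
    global _current
    _current = stream
    _orig_set_stream(stream)

def current_stream():
    """Return the cached stream; fall back to the expensive query only
    the first time, before any stream has been set."""
    global _current
    if _current is None:
        _current = expensive_current_stream()
    return _current

streams = [current_stream() for _ in range(5)]
patched_set_stream("side-stream")
after = current_stream()
```

As the docstring in the diff notes, this relies on the assumption that nothing bypasses the patched setter (e.g. C/C++ code calling the low-level stream API directly).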
b55ed6ef8ab0dce7fb0f79ff292dafdb4d22610c | https://github.com/vllm-project/vllm/pull/11692 | 2025-01-02 | 2025-09-07 17:47:18 | 2025-09-07 17:47:18 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_serving.py --model meta-llama/Llama-3.1-8B-Instruct --backend vllm | true | false | false | true | [V1][Minor] Optimize token_ids_cpu copy (#11692) | [V1][Minor] Optimize token_ids_cpu copy (#11692) | 2025-01-02T12:04:58-07:00 | [
"vllm/v1/worker/gpu_input_batch.py",
"vllm/v1/worker/gpu_model_runner.py"
] | {
"commit_year": 2025,
"num_edited_lines": 14,
"num_files": 2,
"num_hunks": 4,
"num_non_test_edited_lines": 14,
"num_non_test_files": 2,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/v1/worker/gpu_input_batch.py b/vllm/v1/worker/gpu_input_batch.py
index e79145300..f8a1427c6 100644
--- a/vllm/v1/worker/gpu_input_batch.py
+++ b/vllm/v1/worker/gpu_input_batch.py
@@ -66,8 +66,9 @@ class InputBatch:
pin_memory=False,
)
self.token_ids_cpu = self.token_ids_cpu_tensor.numpy()
- self.num_computed_tokens_cpu = np.empty(max_num_reqs, dtype=np.int32)
+ self.num_tokens = np.zeros(max_num_reqs, dtype=np.int32)
self.num_prompt_tokens = np.zeros(max_num_reqs, dtype=np.int32)
+ self.num_computed_tokens_cpu = np.empty(max_num_reqs, dtype=np.int32)
# Attention-related.
self.block_table = torch.zeros(
@@ -189,6 +190,7 @@ class InputBatch:
end_idx = start_idx + len(request.output_token_ids)
self.token_ids_cpu[req_index,
start_idx:end_idx] = request.output_token_ids
+ self.num_tokens[req_index] = request.num_tokens
self.num_computed_tokens_cpu[req_index] = request.num_computed_tokens
num_blocks = len(request.block_ids)
@@ -290,14 +292,15 @@ class InputBatch:
self.req_ids[last_req_index] = None
self.req_id_to_index[req_id] = empty_index
- # TODO(woosuk): Optimize the copy of token_ids_cpu and
- # block_table_cpu.
- self.token_ids_cpu[empty_index] = self.token_ids_cpu[
- last_req_index]
+ num_tokens = self.num_tokens[last_req_index]
+ self.token_ids_cpu[empty_index, :num_tokens] = self.token_ids_cpu[
+ last_req_index, :num_tokens]
+ self.num_tokens[empty_index] = num_tokens
self.num_prompt_tokens[empty_index] = \
self.num_prompt_tokens[last_req_index]
self.num_computed_tokens_cpu[
empty_index] = self.num_computed_tokens_cpu[last_req_index]
+ # TODO(woosuk): Optimize the copy of block_table_cpu.
self.block_table_cpu[empty_index] = self.block_table_cpu[
last_req_index]
self.temperature_cpu[empty_index] = self.temperature_cpu[
diff --git a/vllm/v1/worker/gpu_model_runner.py b/vllm/v1/worker/gpu_model_runner.py
index 995de54e8..75098b033 100644
--- a/vllm/v1/worker/gpu_model_runner.py
+++ b/vllm/v1/worker/gpu_model_runner.py
@@ -644,6 +644,7 @@ class GPUModelRunner:
# Append the sampled token to the output token ids.
token_id = sampled_token_ids[i]
self.input_batch.token_ids_cpu[i, seq_len] = token_id
+ self.input_batch.num_tokens[i] += 1
req_state.output_token_ids.append(token_id)
else:
# Ignore the sampled token from the partial request. | [
"InputBatch.add_request",
"InputBatch.condense",
"GPUModelRunner._update_states"
] | [
"vllm/v1/worker/gpu_input_batch.py",
"vllm/v1/worker/gpu_model_runner.py",
"vllm/engine/llm_engine.py",
"vllm/v1/engine/llm_engine.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
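The optimization in this row is simply bounding the copy: when a request's row is moved during batch condensation, only its first `num_tokens` entries are copied instead of the full max-model-len row. A minimal sketch with plain Python lists standing in for the numpy-backed `token_ids_cpu` buffer:

```python
def condense(token_ids_cpu, num_tokens, src, dst):
    """Move request data from row `src` to row `dst`, copying only the
    valid prefix rather than the whole preallocated row."""
    n = num_tokens[src]
    token_ids_cpu[dst][:n] = token_ids_cpu[src][:n]
    num_tokens[dst] = n

ids = [[0] * 8 for _ in range(4)]     # 4 request slots, max 8 tokens each
counts = [0, 0, 0, 0]
ids[3][:3] = [5, 6, 7]                # request in slot 3 has 3 tokens
counts[3] = 3
condense(ids, counts, src=3, dst=0)
```

Tracking `num_tokens` per row (the other half of the diff) is what makes the bounded copy possible; without it, the only safe copy is the full row.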
3b61cb450d899dc423feb264c297d4d18d701678 | https://github.com/vllm-project/vllm/pull/10989 | 2024-12-09 | 2025-09-07 17:47:34 | 2025-09-07 17:47:34 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_latency.py --model meta-llama/Llama-3.1-8B-Instruct --batch-size 32 --input-len 512 --output-len 128 | false | true | false | true | [V1] Further reduce CPU overheads in flash-attn (#10989) | [V1] Further reduce CPU overheads in flash-attn (#10989) | 2024-12-09T12:38:46-08:00 | [
"csrc/cache_kernels.cu",
"vllm/v1/attention/backends/flash_attn.py"
] | {
"commit_year": 2024,
"num_edited_lines": 35,
"num_files": 2,
"num_hunks": 2,
"num_non_test_edited_lines": 35,
"num_non_test_files": 2,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/csrc/cache_kernels.cu b/csrc/cache_kernels.cu
index 1be806bbf..8a95279f9 100644
--- a/csrc/cache_kernels.cu
+++ b/csrc/cache_kernels.cu
@@ -307,10 +307,20 @@ void reshape_and_cache_flash(
torch::Tensor& key_cache, // [num_blocks, block_size, num_heads, head_size]
torch::Tensor&
value_cache, // [num_blocks, block_size, num_heads, head_size]
- torch::Tensor& slot_mapping, // [num_tokens]
+ torch::Tensor& slot_mapping, // [num_tokens] or [num_actual_tokens]
const std::string& kv_cache_dtype, const double k_scale,
const double v_scale) {
- int num_tokens = key.size(0);
+ // NOTE(woosuk): In vLLM V1, key.size(0) can be different from
+ // slot_mapping.size(0) because of padding for CUDA graphs.
+ // In vLLM V0, key.size(0) is always equal to slot_mapping.size(0) because
+ // both include padding.
+ // In vLLM V1, however, key.size(0) can be larger than slot_mapping.size(0)
+ // since key includes padding for CUDA graphs, while slot_mapping does not.
+ // In this case, slot_mapping.size(0) represents the actual number of tokens
+ // before padding.
+ // For compatibility with both cases, we use slot_mapping.size(0) as the
+ // number of tokens.
+ int num_tokens = slot_mapping.size(0);
int num_heads = key.size(1);
int head_size = key.size(2);
int block_size = key_cache.size(1);
diff --git a/vllm/v1/attention/backends/flash_attn.py b/vllm/v1/attention/backends/flash_attn.py
index d37989055..251a103e6 100644
--- a/vllm/v1/attention/backends/flash_attn.py
+++ b/vllm/v1/attention/backends/flash_attn.py
@@ -138,14 +138,25 @@ class FlashAttentionImpl(AttentionImpl):
# Profiling run.
return output
- num_actual_tokens = attn_metadata.num_actual_tokens
+ # IMPORTANT!
+ # NOTE(woosuk): With piece-wise CUDA graphs, this method is executed in
+ # eager-mode PyTorch. Thus, we need to be careful about any CPU overhead
+ # in this method. For example, `view` and `slice` (or `[:n]`) operations
+ # are surprisingly slow even in the case they do not invoke any GPU ops.
+ # Minimize the PyTorch ops in this method as much as possible.
+ # Whenever making a change in this method, please benchmark the
+ # performance to make sure it does not introduce any overhead.
+ num_actual_tokens = attn_metadata.num_actual_tokens
# Reshape the input keys and values and store them in the cache.
- key_cache = kv_cache[0]
- value_cache = kv_cache[1]
+ # NOTE(woosuk): Here, key and value are padded while slot_mapping is
+ # not padded. However, we don't need to do key[:num_actual_tokens] and
+ # value[:num_actual_tokens] because the reshape_and_cache_flash op uses
+ # the slot_mapping's shape to determine the number of actual tokens.
+ key_cache, value_cache = kv_cache.unbind(0)
torch.ops._C_cache_ops.reshape_and_cache_flash(
- key[:num_actual_tokens],
- value[:num_actual_tokens],
+ key,
+ value,
key_cache,
value_cache,
attn_metadata.slot_mapping, | [
"vllm.v1.attention.backends.flash_attn.FlashAttentionImpl.forward",
"torch.ops._C_cache_ops.reshape_and_cache_flash"
] | [
"vllm/attention/backends/flash_attn.py",
"vllm/v1/attention/backends/flash_attn.py",
"vllm/_custom_ops.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
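The CUDA change above derives the token count from `slot_mapping.size(0)` instead of `key.size(0)`, which lets the Python side pass CUDA-graph-padded `key`/`value` tensors without paying for `[:num_actual_tokens]` slices. A pure-Python sketch of the contract (illustrative names, lists standing in for tensors):

```python
def reshape_and_cache(key, value, key_cache, value_cache, slot_mapping):
    """Size the loop by len(slot_mapping), so key/value may carry
    trailing padding that is simply never read."""
    num_tokens = len(slot_mapping)   # key may be longer (padded)
    for i in range(num_tokens):
        slot = slot_mapping[i]
        key_cache[slot] = key[i]
        value_cache[slot] = value[i]

key = ["k0", "k1", "pad", "pad"]     # padded to 4 for CUDA graphs
value = ["v0", "v1", "pad", "pad"]
kc, vc = [None] * 8, [None] * 8
reshape_and_cache(key, value, kc, vc, slot_mapping=[5, 2])  # 2 real tokens
```

The win is on the caller's side: as the diff's NOTE explains, even no-op `view`/slice ops are surprisingly expensive in the eager-mode region between CUDA graph pieces, so removing two slices per step is measurable.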
98f47f2a4032f8c395268de80858c64ffcfc60fa | https://github.com/vllm-project/vllm/pull/10733 | 2024-11-28 | 2025-09-07 17:47:41 | 2025-09-07 17:47:41 | [
"facebook/opt-125m"
] | python benchmarks/benchmark_latency.py --model facebook/opt-125m | false | true | false | true | [V1] Optimize the CPU overheads in FlashAttention custom op (#10733) | [V1] Optimize the CPU overheads in FlashAttention custom op (#10733) | 2024-11-28T09:01:02-08:00 | [
"vllm/v1/attention/backends/flash_attn.py"
] | {
"commit_year": 2024,
"num_edited_lines": 17,
"num_files": 1,
"num_hunks": 4,
"num_non_test_edited_lines": 17,
"num_non_test_files": 1,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/v1/attention/backends/flash_attn.py b/vllm/v1/attention/backends/flash_attn.py
index 5f8535eaa..e618edf7d 100644
--- a/vllm/v1/attention/backends/flash_attn.py
+++ b/vllm/v1/attention/backends/flash_attn.py
@@ -135,6 +135,13 @@ class FlashAttentionImpl(AttentionImpl):
assert k_scale == 1.0 and v_scale == 1.0, (
"key/v_scale is not supported in FlashAttention.")
+ # Reshape the query, key, and value tensors.
+ # NOTE(woosuk): We do this outside the custom op to minimize the CPU
+ # overheads from the non-CUDA-graph regions.
+ query = query.view(-1, self.num_heads, self.head_size)
+ key = key.view(-1, self.num_kv_heads, self.head_size)
+ value = value.view(-1, self.num_kv_heads, self.head_size)
+
output = torch.empty_like(query)
torch.ops.vllm.unified_v1_flash_attention(
output,
@@ -153,7 +160,7 @@ class FlashAttentionImpl(AttentionImpl):
self.alibi_slopes,
self.logits_soft_cap,
)
- return output
+ return output.view(-1, self.num_heads * self.head_size)
def unified_v1_flash_attention(
@@ -184,11 +191,6 @@ def unified_v1_flash_attention(
attn_metadata: FlashAttentionMetadata = current_metadata
num_actual_tokens = attn_metadata.num_actual_tokens
- # Reshape the query, key, and value tensors.
- query = query.view(-1, num_heads, head_size)
- key = key.view(-1, num_kv_heads, head_size)
- value = value.view(-1, num_kv_heads, head_size)
-
# Reshape the input keys and values and store them in the cache.
key_cache = kv_cache[0]
value_cache = kv_cache[1]
@@ -218,8 +220,7 @@ def unified_v1_flash_attention(
block_table=attn_metadata.block_table,
softcap=logits_soft_cap,
)
- attn_output = attn_output.view(num_actual_tokens, -1)
- # TODO(woosuk): Optimize this.
+ # TODO(woosuk): Remove this unnecessary copy.
output[:num_actual_tokens].copy_(attn_output) | [
"vllm.v1.attention.backends.flash_attn.FlashAttentionImpl.forward",
"vllm.v1.attention.backends.flash_attn.unified_v1_flash_attention"
] | [
"vllm/attention/backends/flash_attn.py",
"vllm/v1/attention/backends/flash_attn.py",
"vllm/_custom_ops.py",
"csrc/torch_bindings.cpp",
"csrc/cpu/torch_bindings.cpp",
"csrc/rocm/torch_bindings.cpp",
"csrc/moe/torch_bindings.cpp"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=facebook/opt-125m,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
8c1e77fb585c4f42783a3d88c1efc7c9e15fd89f | https://github.com/vllm-project/vllm/pull/10742 | 2024-11-28 | 2025-09-07 17:47:44 | 2025-09-07 17:47:44 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_latency.py --model meta-llama/Llama-3.1-8B-Instruct --batch-size 32 --input-len 512 --output-len 128 | false | true | false | true | [Kernel] Update vllm-flash-attn version to reduce CPU overheads (#10742) | [Kernel] Update vllm-flash-attn version to reduce CPU overheads (#10742) | 2024-11-28T08:31:28-08:00 | [
"CMakeLists.txt"
] | {
"commit_year": 2024,
"num_edited_lines": 2,
"num_files": 1,
"num_hunks": 1,
"num_non_test_edited_lines": 2,
"num_non_test_files": 1,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/CMakeLists.txt b/CMakeLists.txt
index 45a3b484e..f43bf8143 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -522,7 +522,7 @@ else()
FetchContent_Declare(
vllm-flash-attn
GIT_REPOSITORY https://github.com/vllm-project/flash-attention.git
- GIT_TAG d886f88165702b3c7e7744502772cd98b06be9e1
+ GIT_TAG fdf6d72b48aea41f4ae6a89139a453dae554abc8
GIT_PROGRESS TRUE
# Don't share the vllm-flash-attn build between build types
BINARY_DIR ${CMAKE_BINARY_DIR}/vllm-flash-attn | [
"None"
] | [
"vllm/attention/backends/flash_attn.py",
"vllm/v1/attention/backends/flash_attn.py",
"vllm/attention/ops/triton_flash_attention.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
6e36f4fa6ce64619b9ea94c88a157f5783a63a65 | https://github.com/vllm-project/vllm/pull/7874 | 2024-09-02 | 2025-09-07 17:48:01 | 2025-09-07 17:48:01 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_serving.py --model meta-llama/Llama-3.1-8B-Instruct --backend vllm --num-prompts 100 | true | false | false | true | improve chunked prefill performance | improve chunked prefill performance
[Bugfix] Fix #7592: with enable_chunked_prefill, vLLM 0.5.4 throughput is slightly lower than 0.5.0–0.5.3. (#7874) | 2024-09-02T14:20:12-07:00 | [
"tests/basic_correctness/test_chunked_prefill.py",
"vllm/core/scheduler.py"
] | {
"commit_year": 2024,
"num_edited_lines": 18,
"num_files": 2,
"num_hunks": 2,
"num_non_test_edited_lines": 15,
"num_non_test_files": 1,
"num_test_files": 1,
"only_non_test_files": 0,
"only_test_files": 0
} | diff --git a/tests/basic_correctness/test_chunked_prefill.py b/tests/basic_correctness/test_chunked_prefill.py
index fc6f829c3..a63ac380e 100644
--- a/tests/basic_correctness/test_chunked_prefill.py
+++ b/tests/basic_correctness/test_chunked_prefill.py
@@ -116,6 +116,9 @@ def test_models_with_fp8_kv_cache(
pytest.skip(
"#7378: CUDA illegal memory access (undiagnosed) facebook/opt-125m"
)
+ if ((model, kv_cache_dtype, chunked_prefill_token_size) == (
+ "nm-testing/Qwen2-1.5B-Instruct-FP8-K-V", "fp8_e4m3", 4)):
+ pytest.skip("flakey test, see: #7874 #8051")
max_num_seqs = chunked_prefill_token_size
max_num_batched_tokens = chunked_prefill_token_size
diff --git a/vllm/core/scheduler.py b/vllm/core/scheduler.py
index 4c2f71582..81c78bda3 100644
--- a/vllm/core/scheduler.py
+++ b/vllm/core/scheduler.py
@@ -1027,16 +1027,21 @@ class Scheduler:
# Update waiting requests.
self.waiting.extendleft(running_scheduled.preempted)
+
# Update new running requests.
- self.running.extend([s.seq_group for s in prefills.seq_groups])
- self.running.extend(
- [s.seq_group for s in running_scheduled.decode_seq_groups])
- self.running.extend(
- [s.seq_group for s in running_scheduled.prefill_seq_groups])
+ # By default, vLLM scheduler prioritizes prefills.
+ # Once chunked prefill is enabled,
+ # the policy is changed to prioritize decode requests.
self.running.extend(
[s.seq_group for s in swapped_in.decode_seq_groups])
self.running.extend(
[s.seq_group for s in swapped_in.prefill_seq_groups])
+ self.running.extend(
+ [s.seq_group for s in running_scheduled.decode_seq_groups])
+ self.running.extend(
+ [s.seq_group for s in running_scheduled.prefill_seq_groups])
+ self.running.extend([s.seq_group for s in prefills.seq_groups])
+
# Update swapped requests.
self.swapped.extend(running_scheduled.swapped_out)
return SchedulerOutputs( | [
"vllm.core.scheduler.Scheduler.schedule",
"vllm.core.scheduler.SchedulerOutputs"
] | [
"vllm/core/scheduler.py",
"vllm/v1/core/sched/scheduler.py",
"vllm/attention/ops/chunked_prefill_paged_decode.py",
"vllm/engine/llm_engine.py",
"vllm/v1/engine/llm_engine.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
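The core of the #7874 diff above is the re-ordering of `self.running`: once chunked prefill is enabled, decode sequence groups are re-queued before prefill groups, and newly admitted prefills go last. A minimal standalone sketch of that ordering (hypothetical function and names, not vLLM's actual scheduler API):

```python
# Illustrative sketch of the decode-first queue ordering from the
# scheduler hunk above. Each argument is a list of sequence groups;
# the returned queue puts decodes ahead of prefills, with brand-new
# prefills last, so decodes are batched first on the next step.

def rebuild_running_queue(swapped_in_decodes, swapped_in_prefills,
                          running_decodes, running_prefills, new_prefills):
    """Return the running queue in decode-first order."""
    running = []
    running.extend(swapped_in_decodes)
    running.extend(swapped_in_prefills)
    running.extend(running_decodes)
    running.extend(running_prefills)
    running.extend(new_prefills)  # newly admitted prefills go last
    return running

print(rebuild_running_queue(["d0"], [], ["d1", "d2"], ["p0"], ["p1"]))
```

The pre-#7874 behavior is the reverse: `new_prefills` extended first, which starved in-flight decodes of batch slots and caused the throughput regression referenced in the commit message.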
e3580537a41a46b0f3cd750b86b633c1857a8c90 | https://github.com/vllm-project/vllm/pull/7753 | 2024-08-28 | 2025-09-07 17:48:09 | 2025-09-07 17:48:09 | [
"RedHatAI/Meta-Llama-3-8B-Instruct-FP8"
] | python benchmarks/benchmark_serving.py --model RedHatAI/Meta-Llama-3-8B-Instruct-FP8 --enable-prefix-caching --enable-chunked-prefill --max-num-batched-tokens 2048 | true | false | false | true | [Performance] Enable chunked prefill and prefix caching together (#7753) | [Performance] Enable chunked prefill and prefix caching together (#7753) | 2024-08-28T00:36:31-07:00 | [
"tests/basic_correctness/test_chunked_prefill.py",
"tests/core/test_block_manager.py",
"tests/core/test_chunked_prefill_scheduler.py",
"vllm/core/block_manager_v1.py",
"vllm/core/block_manager_v2.py",
"vllm/core/embedding_model_block_manager.py",
"vllm/core/interfaces.py",
"vllm/core/scheduler.py",
"vllm/worker/model_runner.py"
] | {
"commit_year": 2024,
"num_edited_lines": 252,
"num_files": 9,
"num_hunks": 12,
"num_non_test_edited_lines": 107,
"num_non_test_files": 6,
"num_test_files": 3,
"only_non_test_files": 0,
"only_test_files": 0
} | diff --git a/tests/basic_correctness/test_chunked_prefill.py b/tests/basic_correctness/test_chunked_prefill.py
index 1211e6ba5..fc6f829c3 100644
--- a/tests/basic_correctness/test_chunked_prefill.py
+++ b/tests/basic_correctness/test_chunked_prefill.py
@@ -6,6 +6,7 @@ prefill requests are chunked.
Run `pytest tests/models/test_chunked_prefill.py`.
"""
+from contextlib import nullcontext
import pytest
@@ -156,3 +157,68 @@ def test_models_with_fp8_kv_cache(
name_0="no_chunked_prefill",
name_1="chunked_prefill",
)
+
+
+@pytest.mark.parametrize("max_tokens", [16])
+@pytest.mark.parametrize("enforce_eager", [False])
+@pytest.mark.parametrize("chunk_size", [30, 32])
+@pytest.mark.parametrize("use_v2_block_manager", [False, True])
+# NOTE: Increasing this in this suite will fail CI because we currently cannot
+# reset distributed env properly. Use a value > 1 just when you test.
+@pytest.mark.parametrize("tensor_parallel_size", [1])
+def test_with_prefix_caching(
+ vllm_runner,
+ max_tokens: int,
+ enforce_eager: bool,
+ chunk_size: int,
+ use_v2_block_manager: bool,
+ tensor_parallel_size: int,
+) -> None:
+ """
+ Checks exact match decode with and without prefix caching
+ with chunked prefill enabled.
+ """
+ model = "meta-llama/Llama-2-7b-chat-hf"
+ # The common prompt has 142 tokens with Llama-2 tokenizer.
+ common_prompt = "You are a helpful AI assistant " * 20
+ unique_prompts = [
+ "Question", # Warmup
+ "Question", # Fully cached
+ "Another question", # Partial cached
+ ]
+ full_prompts = [f"{common_prompt}\n{p}" for p in unique_prompts]
+
+ max_num_batched_tokens = max_num_seqs = chunk_size
+ outputs = {} # type: ignore
+ check_result = True
+ for enable in (True, False):
+ with vllm_runner(
+ model,
+ dtype="half",
+ max_num_batched_tokens=max_num_batched_tokens,
+ enable_chunked_prefill=True,
+ enable_prefix_caching=enable,
+ tensor_parallel_size=tensor_parallel_size,
+ use_v2_block_manager=use_v2_block_manager,
+ enforce_eager=enforce_eager,
+ max_num_seqs=max_num_seqs,
+ ) as vllm_model:
+ # It should fail when prefix caching is enable and chunk
+ # size is not a multiple of block size (16).
+ should_fail = chunk_size % 16 != 0 and enable
+ check_result &= not should_fail
+ outputs[enable] = []
+ # Send the request one-by-one to ensure the cache is populated.
+ with pytest.raises(ValueError) if should_fail else nullcontext():
+ for prompt in full_prompts:
+ outputs[enable] += vllm_model.generate_greedy([prompt],
+ max_tokens)
+
+ # Check results only if we did not expect a failure.
+ if check_result:
+ check_outputs_equal(
+ outputs_0_lst=outputs[False],
+ outputs_1_lst=outputs[True],
+ name_0="w/o prefix caching",
+ name_1="with prefix caching",
+ )
diff --git a/tests/core/test_block_manager.py b/tests/core/test_block_manager.py
index cd306b9e4..2ee9f2082 100644
--- a/tests/core/test_block_manager.py
+++ b/tests/core/test_block_manager.py
@@ -595,3 +595,43 @@ def test_sliding_window_multi_seq():
# assert all blocks are free now
assert block_manager.get_num_free_gpu_blocks() == num_gpu_blocks
+
+
+def test_mark_blocks_as_computed_with_prefix_cache_and_chunked_prefill():
+ """When prefix cache and chunked prefill are enabled, the block manager
+ should only mark a chunk of blocks as computed instead of all blocks.
+ """
+
+ block_size = 4
+ num_cpu_blocks = 0
+ num_gpu_blocks = 16
+ block_manager = BlockSpaceManagerV1(block_size,
+ num_gpu_blocks,
+ num_cpu_blocks,
+ watermark=0,
+ enable_caching=True)
+
+ # Set prompt size to have num_gpu_blocks - 1 full blocks.
+ prompt_length = block_size * num_gpu_blocks - 1
+
+ # Allocate (reserve) all blocks.
+ _, seq_group = create_dummy_prompt("0",
+ prompt_length,
+ block_size=block_size)
+ block_manager.allocate(seq_group)
+ assert seq_group.seqs[0].n_blocks == num_gpu_blocks
+
+ # 1st chunk: Compute 2 and half blocks. Should mark 2 blocks as computed.
+ token_chunk_size = int(block_size * 2.5)
+ block_manager.mark_blocks_as_computed(seq_group, token_chunk_size)
+ computed_blocks = block_manager.get_all_computed_blocks(seq_group.seqs[0])
+ assert len(computed_blocks) == 2
+
+ # Actual computed tokens.
+ seq_group.seqs[0].data.update_num_computed_tokens(token_chunk_size)
+
+ # 2nd chunk: Complete 3rd block and additional 4 blocks.
+ token_chunk_size = int(block_size * 4.5)
+ block_manager.mark_blocks_as_computed(seq_group, token_chunk_size)
+ computed_blocks = block_manager.get_all_computed_blocks(seq_group.seqs[0])
+ assert len(computed_blocks) == 7
diff --git a/tests/core/test_chunked_prefill_scheduler.py b/tests/core/test_chunked_prefill_scheduler.py
index 6d9c2f3eb..2f6ea632a 100644
--- a/tests/core/test_chunked_prefill_scheduler.py
+++ b/tests/core/test_chunked_prefill_scheduler.py
@@ -562,3 +562,42 @@ def test_chunked_prefill_max_seqs():
assert len(get_sequence_groups(out)) == max_seqs
assert not running[0].is_prefill()
assert not running[1].is_prefill()
+
+
+def test_perfix_caching():
+ """Verify allocating full blocks when prefix caching is enabled."""
+ block_size = 4
+ max_seqs = 10
+ max_model_len = 80
+ max_num_batched_tokens = 64
+ scheduler_config = SchedulerConfig(max_num_batched_tokens,
+ max_seqs,
+ max_model_len,
+ enable_chunked_prefill=True)
+ cache_config = CacheConfig(block_size,
+ 1.0,
+ 1,
+ "auto",
+ enable_prefix_caching=True)
+ cache_config.num_cpu_blocks = 0
+ cache_config.num_gpu_blocks = 32
+ scheduler = Scheduler(scheduler_config, cache_config, None)
+ running: List[SequenceGroup] = []
+
+ # Add seq groups to scheduler.
+ for i in range(2):
+ _, seq_group = create_dummy_prompt(str(i),
+ block_size=block_size,
+ prompt_length=50)
+ scheduler.add_seq_group(seq_group)
+ running.append(seq_group)
+
+ seq_group_meta, out = schedule_and_update_computed_tokens(scheduler)
+ assert set(get_sequence_groups(out)) == set(running)
+ assert seq_group_meta[0].token_chunk_size == 50
+ # Verify it is chunked. Note that although the budget is 64-50=14,
+ # we only allocate full blocks for prefix caching, so only 4*(14//4)=12
+ # tokens are allocated.
+ assert seq_group_meta[1].token_chunk_size == 12
+ assert out.num_prefill_groups == 2
+ assert out.num_batched_tokens == 62
diff --git a/vllm/core/block_manager_v1.py b/vllm/core/block_manager_v1.py
index 666723313..24ab9eb66 100644
--- a/vllm/core/block_manager_v1.py
+++ b/vllm/core/block_manager_v1.py
@@ -681,14 +681,20 @@ class BlockSpaceManagerV1(BlockSpaceManager):
for block in block_table:
block.last_accessed = access_time
- def compute_full_blocks_in_seq(self, seq: Sequence):
+ def compute_full_blocks_in_seq(self, seq: Sequence, token_chunk_size: int):
if seq.seq_id not in self.block_tables:
return
- max_full_block = seq.get_len() // self.block_size - 1
+
+ # When chunked prefill is enabled, the computed full blocks
+ # should be calculated based on the number of computed tokens.
+ max_computed_tokens = (seq.data.get_num_computed_tokens() +
+ token_chunk_size)
+ computed_full_blocks = max_computed_tokens // self.block_size
+
block_table = self.block_tables[seq.seq_id]
- if max_full_block == -1:
+ if computed_full_blocks == 0:
return
- for i in reversed(range(max_full_block)):
+ for i in reversed(range(computed_full_blocks)):
if block_table[i].computed:
break
block_table[i].computed = True
@@ -718,10 +724,11 @@ class BlockSpaceManagerV1(BlockSpaceManager):
ids_list = [self.get_all_computed_blocks(seq) for seq in seqs]
return commonprefix([ids for ids in ids_list if ids != []])
- def mark_blocks_as_computed(self, seq_group: SequenceGroup):
+ def mark_blocks_as_computed(self, seq_group: SequenceGroup,
+ token_chunk_size: int):
if self.enable_caching:
for seq in seq_group.get_seqs():
- self.compute_full_blocks_in_seq(seq)
+ self.compute_full_blocks_in_seq(seq, token_chunk_size)
def get_prefix_cache_hit_rate(self, device: Device) -> float:
if device == Device.GPU:
diff --git a/vllm/core/block_manager_v2.py b/vllm/core/block_manager_v2.py
index 7d2db43cb..b06385b06 100644
--- a/vllm/core/block_manager_v2.py
+++ b/vllm/core/block_manager_v2.py
@@ -290,7 +290,8 @@ class BlockSpaceManagerV2(BlockSpaceManager):
self._last_access_blocks_tracker.update_last_access(
seq.seq_id, now)
- def mark_blocks_as_computed(self, seq_group: SequenceGroup):
+ def mark_blocks_as_computed(self, seq_group: SequenceGroup,
+ token_chunk_size: int):
# If prefix caching is enabled, mark immutable blocks as computed
# right after they have been scheduled (for prefill). This assumes
# the scheduler is synchronous so blocks are actually computed when
diff --git a/vllm/core/embedding_model_block_manager.py b/vllm/core/embedding_model_block_manager.py
index f16f66e99..c47d7d8df 100644
--- a/vllm/core/embedding_model_block_manager.py
+++ b/vllm/core/embedding_model_block_manager.py
@@ -80,7 +80,8 @@ class EmbeddingModelBlockSpaceManager(BlockSpaceManager):
seq_group: List[Sequence]) -> List[int]:
return []
- def mark_blocks_as_computed(self, seq_group: SequenceGroup):
+ def mark_blocks_as_computed(self, seq_group: SequenceGroup,
+ token_chunk_size: int):
pass
def get_prefix_cache_hit_rate(self, device: Device) -> float:
diff --git a/vllm/core/interfaces.py b/vllm/core/interfaces.py
index becd0d2e7..96f8dd851 100644
--- a/vllm/core/interfaces.py
+++ b/vllm/core/interfaces.py
@@ -115,7 +115,8 @@ class BlockSpaceManager(ABC):
pass
@abstractmethod
- def mark_blocks_as_computed(self, seq_group: SequenceGroup):
+ def mark_blocks_as_computed(self, seq_group: SequenceGroup,
+ token_chunk_size: int):
pass
@abstractmethod
diff --git a/vllm/core/scheduler.py b/vllm/core/scheduler.py
index fbc53afa3..51fde6e4e 100644
--- a/vllm/core/scheduler.py
+++ b/vllm/core/scheduler.py
@@ -1226,7 +1226,8 @@ class Scheduler:
# will crash the vLLM instance / will not retry.
for scheduled_seq_group in scheduler_outputs.scheduled_seq_groups:
self.block_manager.mark_blocks_as_computed(
- scheduled_seq_group.seq_group)
+ scheduled_seq_group.seq_group,
+ scheduled_seq_group.token_chunk_size)
self._seq_group_metadata_cache[self.next_cache_id].reset()
@@ -1457,10 +1458,27 @@ class Scheduler:
for seq in seqs:
num_new_tokens += seq.get_num_new_tokens()
assert num_new_tokens > 0
- # Chunk if a running request cannot fit in.
- # If number of seq > 1, it means it is doing beam search in a
- # decode phase. Do not chunk in that case.
+ # Chunk if a running request cannot fit in the given budget.
+ # If number of seq > 1, it means it is doing beam search
+ # in a decode phase. Do not chunk.
if enable_chunking and len(seqs) == 1:
- num_new_tokens = min(num_new_tokens,
- budget.remaining_token_budget())
+ remaining_token_budget = budget.remaining_token_budget()
+ if self.cache_config.enable_prefix_caching:
+ # When prefix caching is enabled, we always allocate
+ # the number of new tokens that is dividable by the block size
+ # to avoid partial block matching.
+ block_size = self.cache_config.block_size
+ reminder = budget.token_budget % block_size
+ if reminder != 0:
+ raise ValueError("When enabling chunked prefill and "
+ "prefix caching, max_num_batched_tokens "
+ "(chunk size) must be dividable by "
+ "block size, but got chunk_size "
+ f"({budget.token_budget}) % block_size "
+ f"({block_size}) = {reminder}")
+ if remaining_token_budget < num_new_tokens:
+ num_new_tokens = (remaining_token_budget //
+ block_size) * block_size
+ else:
+ num_new_tokens = min(num_new_tokens, remaining_token_budget)
return num_new_tokens
diff --git a/vllm/worker/model_runner.py b/vllm/worker/model_runner.py
index f556e4ea1..2b287a5d2 100644
--- a/vllm/worker/model_runner.py
+++ b/vllm/worker/model_runner.py
@@ -501,23 +501,48 @@ class ModelInputForGPUBuilder(ModelRunnerInputBuilderBase[ModelInputForGPU]):
and self.sliding_window is None
and inter_data.is_prompt)
inter_data.prefix_cache_hit = prefix_cache_hit
- if self.chunked_prefill_enabled and prefix_cache_hit:
- raise RuntimeError(
- "chunked prefill cannot be used with prefix caching now.")
-
- # If prefix cache is hit, advance context length to bypass
- # hit blocks. Accordingly, input tokens, position and query length
- # have to be updated.
- if prefix_cache_hit:
- assert computed_block_nums is not None
- context_len = len(computed_block_nums) * self.block_size
+
+ if not prefix_cache_hit:
+ return
+
+ assert computed_block_nums is not None
+ # The cache hit prompt tokens in this sequence. Note that
+ # this may be larger than the sequence length if chunked
+ # prefill is enabled.
+ prefix_cache_len = len(computed_block_nums) * self.block_size
+ # The number of so far computed prompt tokens in this sequence.
+ context_len = inter_data.context_lens[seq_idx]
+ # The total number of prompt tokens in this sequence.
+ # When chunked prefill is enabled, this is the token number of
+ # computed chunks + current chunk.
+ seq_len = inter_data.seq_lens[seq_idx]
+ if prefix_cache_len <= context_len:
+ # We already passed the cache hit region,
+ # so do normal computation.
+ pass
+ elif context_len < prefix_cache_len < seq_len:
+ # Partial hit. Compute the missing part.
+ uncomputed_start = prefix_cache_len - context_len
inter_data.input_tokens[seq_idx] = inter_data.input_tokens[
- seq_idx][context_len:]
+ seq_idx][uncomputed_start:]
inter_data.input_positions[seq_idx] = inter_data.input_positions[
- seq_idx][context_len:]
+ seq_idx][uncomputed_start:]
+ context_len = prefix_cache_len
+
inter_data.context_lens[seq_idx] = context_len
inter_data.query_lens[
seq_idx] = inter_data.seq_lens[seq_idx] - context_len
+ elif seq_len <= prefix_cache_len:
+ # Full hit. Only compute the last token to avoid
+ # erroneous behavior. FIXME: Ideally we should directly
+ # mark all tokens as computed in the scheduler and do not
+ # schedule this sequence, so this case should not happen.
+ inter_data.input_tokens[seq_idx] = inter_data.input_tokens[
+ seq_idx][-1:]
+ inter_data.input_positions[seq_idx] = inter_data.input_positions[
+ seq_idx][-1:]
+ inter_data.query_lens[seq_idx] = 1
+ inter_data.context_lens[seq_idx] = inter_data.seq_lens[seq_idx] - 1
def _compute_for_sliding_window(self, inter_data: InterDataForSeqGroup,
seq_idx: int, | [
"ModelRunner.generate_greedy",
"Scheduler.schedule",
"BlockSpaceManager.mark_blocks_as_computed"
] | [
"vllm/worker/model_runner.py",
"vllm/core/scheduler.py",
"vllm/v1/core/sched/scheduler.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=RedHatAI/Meta-Llama-3-8B-Instruct-FP8,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
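The scheduler hunk in the #7753 diff above rounds a prefill chunk down to a whole-block multiple whenever prefix caching is enabled, so a chunk boundary never lands inside a block and partial blocks never match the cache. A hedged sketch of that rule as a free function (illustrative signature, not vLLM's API):

```python
def chunk_new_tokens(num_new_tokens: int, remaining_budget: int,
                     token_budget: int, block_size: int,
                     enable_prefix_caching: bool) -> int:
    """Sketch of the chunking rule from the scheduler hunk above."""
    if not enable_prefix_caching:
        # Plain chunked prefill: just clamp to the remaining budget.
        return min(num_new_tokens, remaining_budget)
    if token_budget % block_size != 0:
        # Mirrors the ValueError raised in the diff.
        raise ValueError(
            "max_num_batched_tokens (chunk size) must be divisible by "
            f"block size, got {token_budget} % {block_size}")
    if remaining_budget < num_new_tokens:
        # Only allocate whole blocks so partial blocks never hit the cache.
        return (remaining_budget // block_size) * block_size
    return num_new_tokens
```

This reproduces the arithmetic checked by `test_perfix_caching` in the diff: with a 64-token budget, a 50-token first prefill leaves 14 tokens, and the second prefill is chunked to `4 * (14 // 4) = 12` tokens, not 14.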
2deb029d115dadd012ce5ea70487a207cb025493 | https://github.com/vllm-project/vllm/pull/7822 | 2024-08-26 | 2025-09-07 17:48:12 | 2025-09-07 17:48:12 | [
"RedHatAI/Meta-Llama-3-8B-Instruct-FP8"
] | python benchmarks/benchmark_prefix_caching.py --model RedHatAI/Meta-Llama-3-8B-Instruct-FP8 --output-len 200 --enable-prefix-caching | true | false | false | true | [Performance][BlockManagerV2] Mark prefix cache block as computed after schedule (#7822) | [Performance][BlockManagerV2] Mark prefix cache block as computed after schedule (#7822) | 2024-08-26T11:24:53-07:00 | [
"tests/core/block/test_prefix_caching_block.py",
"vllm/core/block/prefix_caching_block.py",
"vllm/core/block_manager_v2.py"
] | {
"commit_year": 2024,
"num_edited_lines": 63,
"num_files": 3,
"num_hunks": 6,
"num_non_test_edited_lines": 32,
"num_non_test_files": 2,
"num_test_files": 1,
"only_non_test_files": 0,
"only_test_files": 0
} | diff --git a/tests/core/block/test_prefix_caching_block.py b/tests/core/block/test_prefix_caching_block.py
index c2226870c..25be2dd13 100644
--- a/tests/core/block/test_prefix_caching_block.py
+++ b/tests/core/block/test_prefix_caching_block.py
@@ -708,6 +708,37 @@ class TestPrefixCachingBlockAllocator:
token_ids=token_ids)
assert allocator.get_prefix_cache_hit_rate() > 0.99
+ # Test case for marking cache hit blocks as computed right after
+ # a batch of prefill sequences are scheduled.
+ @staticmethod
+ def test_touch_block():
+ block_size = 16
+ common_blocks = 4
+ allocator = PrefixCachingBlockAllocator(num_blocks=8,
+ block_size=block_size)
+
+ common_token_ids = list(range(block_size * common_blocks))
+
+ # Mimic the behavior of allocating the same block chain
+ # (i.e., common prefix) for a batch of 3 different prefill sequences.
+ for _ in range(3):
+ blocks = TestPrefixCachingBlockAllocator.create_immutable_chain(
+ block_size=block_size,
+ token_ids=common_token_ids,
+ allocator=allocator,
+ )
+ block_ids = [block.block_id for block in blocks]
+ # The allocated blocks should be marked as touched
+ # but not computed.
+ computed_block_ids = allocator.get_computed_block_ids(
+ [], block_ids, skip_last_block_id=False)
+ assert len(computed_block_ids) == 0
+
+ allocator.mark_blocks_as_computed([])
+ computed_block_ids = allocator.get_computed_block_ids(
+ [], block_ids, skip_last_block_id=False)
+ assert len(computed_block_ids) == common_blocks
+
@staticmethod
def create_immutable_chain(
block_size: int,
diff --git a/vllm/core/block/prefix_caching_block.py b/vllm/core/block/prefix_caching_block.py
index 432a6651a..a87e814cf 100644
--- a/vllm/core/block/prefix_caching_block.py
+++ b/vllm/core/block/prefix_caching_block.py
@@ -1,6 +1,6 @@
"""Token blocks."""
from os.path import commonprefix
-from typing import Dict, FrozenSet, Iterable, List, Optional, Tuple
+from typing import Dict, FrozenSet, Iterable, List, Optional, Set, Tuple
from vllm.core.block.common import (CacheMetricData, CopyOnWriteTracker,
get_all_blocks_recursively)
@@ -73,6 +73,11 @@ class PrefixCachingBlockAllocator(BlockAllocator):
# prefix hash will be in this dict, even if they have refcount 0.
self._cached_blocks: Dict[PrefixHash, BlockId] = {}
+ # A list of immutable block IDs that have been touched by scheduler
+ # and should be marked as computed after an entire batch of sequences
+ # are scheduled.
+ self._touched_blocks: Set[BlockId] = set()
+
# Used to track status of each physical block id
self._block_tracker: Dict[BlockId, BlockTracker] = {}
for block_id in block_ids:
@@ -438,10 +443,14 @@ class PrefixCachingBlockAllocator(BlockAllocator):
assert self._refcounter.get(block.block_id) > 0
if block.content_hash not in self._cached_blocks:
- # No cached content hash => Set this block as cached
- # (Note that this block is not computed yet =>
- # Will be computed after free())
+ # No cached content hash => Set this block as cached.
+ # Note that this block cannot be marked as computed yet
+ # because other sequences in the same batch cannot reuse
+ # this block.
self._cached_blocks[block.content_hash] = block.block_id
+ # Mark this block as touched so that it can be marked as
+ # computed after the entire batch of sequences are scheduled.
+ self._touched_blocks.add(block.block_id)
return block.block_id
# Reuse the cached content hash
@@ -507,7 +516,10 @@ class PrefixCachingBlockAllocator(BlockAllocator):
"Mark block as accessed which is not belonged to GPU")
def mark_blocks_as_computed(self, block_ids: List[int]) -> None:
- raise NotImplementedError("Marking as computed is incremental")
+ # Mark all touched blocks as computed.
+ for block_id in self._touched_blocks:
+ self._block_tracker[block_id].computed = True
+ self._touched_blocks.clear()
def _track_block_id(self, block_id: Optional[BlockId],
computed: bool) -> None:
diff --git a/vllm/core/block_manager_v2.py b/vllm/core/block_manager_v2.py
index b7d9451f1..7d4919a0d 100644
--- a/vllm/core/block_manager_v2.py
+++ b/vllm/core/block_manager_v2.py
@@ -287,11 +287,11 @@ class BlockSpaceManagerV2(BlockSpaceManager):
seq.seq_id, now)
def mark_blocks_as_computed(self, seq_group: SequenceGroup):
- # The only need for mark block as computed is for prefix caching,
- # while currently we could determine whether one block is computed
- # or not by check whether it has content hash.
- # So this function is useless for block_v2.
- pass
+ # If prefix caching is enabled, mark immutable blocks as computed
+ # right after they have been scheduled (for prefill). This assumes
+ # the scheduler is synchronous so blocks are actually computed when
+ # scheduling the next batch.
+ self.block_allocator.mark_blocks_as_computed([])
def get_common_computed_block_ids(
self, seqs: List[Sequence]) -> GenericSequence[int]: | [
"PrefixCachingBlockAllocator.mark_blocks_as_computed",
"BlockSpaceManagerV2.mark_blocks_as_computed"
] | [
"vllm/core/block/prefix_caching_block.py",
"vllm/core/block_manager.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=RedHatAI/Meta-Llama-3-8B-Instruct-FP8,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
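The key idea in the #7822 diff above is a two-phase lifecycle: a newly cached block is only "touched" when allocated, and is promoted to "computed" once the whole batch has been scheduled, so other sequences in the same batch cannot prematurely reuse it. A minimal sketch of that pattern (illustrative class, not vLLM's real allocator):

```python
class TouchedBlockTracker:
    """Sketch of the touched-vs-computed distinction from the
    PrefixCachingBlockAllocator diff above."""

    def __init__(self):
        self._touched = set()    # cached this batch, not yet reusable
        self._computed = set()   # safe targets for prefix-cache hits

    def cache_block(self, block_id):
        # A newly cached block is only "touched": peers in the same
        # batch must not treat it as already computed.
        self._touched.add(block_id)

    def mark_blocks_as_computed(self):
        # Called once per scheduled batch. Assumes a synchronous
        # scheduler, so touched blocks really are computed by the time
        # the next batch is scheduled (same assumption as the diff).
        self._computed |= self._touched
        self._touched.clear()

    def is_computed(self, block_id):
        return block_id in self._computed
```

This mirrors `test_touch_block` in the diff: common-prefix blocks allocated by several prefills in one batch report zero computed blocks until `mark_blocks_as_computed` runs.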
fc7b8d1eefcbe837a56b7c080509417fe5167e6c | https://github.com/vllm-project/vllm/pull/7364 | 2024-08-09 | 2025-09-07 17:48:14 | 2025-09-07 17:48:14 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_serving.py --model meta-llama/Llama-3.1-8B-Instruct --backend vllm --num-prompts 100 | true | false | false | true | [Performance] e2e overheads reduction: Small followup diff (#7364) | [Performance] e2e overheads reduction: Small followup diff (#7364) | 2024-08-09T15:49:36Z | [
"vllm/core/block_manager_v1.py",
"vllm/sequence.py"
] | {
"commit_year": 2024,
"num_edited_lines": 7,
"num_files": 2,
"num_hunks": 2,
"num_non_test_edited_lines": 7,
"num_non_test_files": 2,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/core/block_manager_v1.py b/vllm/core/block_manager_v1.py
index 622aca66a..ad26d3c51 100644
--- a/vllm/core/block_manager_v1.py
+++ b/vllm/core/block_manager_v1.py
@@ -336,9 +336,9 @@ class BlockSpaceManagerV1(BlockSpaceManager):
# Assign the self-attention block tables for each sequence.
if len(wait_seqs) == 1:
- self.block_tables[wait_seqs[0].seq_id] = block_table
+ self.block_tables[seq.seq_id] = block_table
else:
- for seq in seq_group.get_seqs(status=SequenceStatus.WAITING):
+ for seq in wait_seqs:
self.block_tables[seq.seq_id] = block_table.copy()
# Allocate encoder sequence
diff --git a/vllm/sequence.py b/vllm/sequence.py
index ba477efc5..fd2dc9656 100644
--- a/vllm/sequence.py
+++ b/vllm/sequence.py
@@ -655,6 +655,9 @@ class SequenceGroup:
return [seq for seq in self.seqs if not seq.is_finished()]
def get_finished_seqs(self) -> List[Sequence]:
+ if self.is_single_seq:
+ return self.seqs if self.seqs[0].is_finished() else []
+
return [seq for seq in self.seqs if seq.is_finished()]
def update_num_computed_tokens(self, num_new_computed_tokens: int): | [
"BlockSpaceManagerV1",
"SequenceGroup.get_finished_seqs"
] | [
"vllm/sequence.py",
"vllm/engine/llm_engine.py",
"vllm/v1/engine/llm_engine.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
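The `vllm/sequence.py` hunk in the #7364 diff above adds a fast path for the common single-sequence group, skipping the list comprehension entirely. A hedged sketch with a stub sequence class (names are illustrative):

```python
class Seq:
    """Stub standing in for vllm.sequence.Sequence."""
    def __init__(self, finished: bool):
        self._finished = finished

    def is_finished(self) -> bool:
        return self._finished


def get_finished_seqs(seqs, is_single_seq: bool):
    # Fast path from the diff above: a single-sequence group (the
    # overwhelmingly common case without beam search) returns the list
    # itself or [] without building a new list per call.
    if is_single_seq:
        return seqs if seqs[0].is_finished() else []
    return [s for s in seqs if s.is_finished()]
```

The micro-optimization matters because this accessor sits on the per-step engine hot path; the other hunk in the diff similarly avoids a redundant `get_seqs(status=...)` call by reusing the already-computed `wait_seqs` list.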
660470e5a36b8e52083615ad7c85e9b4fd4c72ce | https://github.com/vllm-project/vllm/pull/7193 | 2024-08-06 | 2025-09-07 17:48:19 | 2025-09-07 17:48:19 | [
"meta-llama/Llama-3.1-8B-Instruct"
] | python benchmarks/benchmark_serving.py --model meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 1 --enable-prefix-caching --use-v2-block-manager | true | false | false | true | [Core] Optimize evictor-v2 performance (#7193) | [Core] Optimize evictor-v2 performance (#7193) | 2024-08-06T12:34:25-07:00 | [
"vllm/core/evictor_v2.py"
] | {
"commit_year": 2024,
"num_edited_lines": 6,
"num_files": 1,
"num_hunks": 2,
"num_non_test_edited_lines": 6,
"num_non_test_files": 1,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/core/evictor_v2.py b/vllm/core/evictor_v2.py
index 3dd12e2e2..5b1a208b7 100644
--- a/vllm/core/evictor_v2.py
+++ b/vllm/core/evictor_v2.py
@@ -91,8 +91,9 @@ class LRUEvictor(Evictor):
# at the start of OrderedDict. Loop through all these blocks to
# find the one with maximum number of hashed tokens.
for _id, block in self.free_table.items():
- if evicted_block.last_accessed > block.last_accessed or (
- evicted_block.last_accessed == block.last_accessed and
+ if evicted_block.last_accessed < block.last_accessed:
+ break
+ if (evicted_block.last_accessed == block.last_accessed and
evicted_block.num_hashed_tokens < block.num_hashed_tokens):
evicted_block = block
evicted_block_id = _id
@@ -109,6 +110,7 @@ class LRUEvictor(Evictor):
def update(self, block_id: int, last_accessed: float):
self.free_table[block_id].last_accessed = last_accessed
+ self.free_table.move_to_end(block_id)
def remove(self, block_id: int):
if block_id not in self.free_table: | [
"None"
] | [
"vllm/engine/llm_engine.py",
"vllm/v1/engine/llm_engine.py",
"vllm/entrypoints/api_server.py",
"vllm/entrypoints/openai/api_server.py"
] | vllm | H100 | lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=auto --tasks gsm8k --batch_size auto --limit 100 |
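The #7193 diff above relies on an invariant: if `update()` always moves the refreshed block to the end of the `OrderedDict`, the free table stays sorted by `last_accessed`, so `evict()` can stop scanning at the first strictly-newer entry instead of walking the whole table. A self-contained sketch of that structure (simplified fields, not vLLM's `LRUEvictor`):

```python
from collections import OrderedDict

class LRUFreeTable:
    """Sketch of the evictor-v2 optimization above: move_to_end keeps
    free_table in last-accessed order, enabling early exit in evict()."""

    def __init__(self):
        # block_id -> [last_accessed, num_hashed_tokens]
        self.free_table = OrderedDict()

    def add(self, block_id, last_accessed, num_hashed_tokens):
        self.free_table[block_id] = [last_accessed, num_hashed_tokens]

    def update(self, block_id, last_accessed):
        self.free_table[block_id][0] = last_accessed
        self.free_table.move_to_end(block_id)  # preserve LRU ordering

    def evict(self):
        evicted_id, evicted = next(iter(self.free_table.items()))
        for _id, block in self.free_table.items():
            if evicted[0] < block[0]:
                break  # strictly newer; every later entry is newer too
            if evicted[0] == block[0] and evicted[1] < block[1]:
                # Tie on timestamp: prefer evicting more hashed tokens.
                evicted_id, evicted = _id, block
        del self.free_table[evicted_id]
        return evicted_id
```

Before this change, the scan compared every entry; with the early `break`, only the leading run of equally-old blocks is examined.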
89a84b0bb7b30706a02836234a94493ea8f780bf | https://github.com/vllm-project/vllm/pull/6779 | 2024-07-26 | 2025-09-07 17:48:26 | 2025-09-07 17:48:26 | [
"Qwen/Qwen1.5-0.5B"
] | python benchmarks/benchmark_serving.py --model Qwen/Qwen1.5-0.5B --backend vllm --num-prompts 2048 --input-len 1024 | true | false | false | true | [Core] Use array to speedup padding (#6779) | [Core] Use array to speedup padding (#6779) | 2024-07-25T21:31:31-07:00 | [
"vllm/model_executor/layers/sampler.py",
"vllm/model_executor/sampling_metadata.py",
"vllm/sequence.py"
] | {
"commit_year": 2024,
"num_edited_lines": 46,
"num_files": 3,
"num_hunks": 9,
"num_non_test_edited_lines": 46,
"num_non_test_files": 3,
"num_test_files": 0,
"only_non_test_files": 1,
"only_test_files": 0
} | diff --git a/vllm/model_executor/layers/sampler.py b/vllm/model_executor/layers/sampler.py
index 5c376797a..121458f81 100644
--- a/vllm/model_executor/layers/sampler.py
+++ b/vllm/model_executor/layers/sampler.py
@@ -220,7 +220,7 @@ def _apply_min_tokens_penalty(
seqs_to_penalize: List[int] = []
for j, seq_id in enumerate(seq_ids):
seq_data = seq_group.seq_data[seq_id]
- if len(seq_data.output_token_ids) < min_tokens:
+ if len(seq_data.output_token_ids_array) < min_tokens:
seqs_to_penalize.append(j)
if seqs_to_penalize:
diff --git a/vllm/model_executor/sampling_metadata.py b/vllm/model_executor/sampling_metadata.py
index 390b5d173..27b37a9d5 100644
--- a/vllm/model_executor/sampling_metadata.py
+++ b/vllm/model_executor/sampling_metadata.py
@@ -1,4 +1,5 @@
import random
+from array import array
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple
@@ -329,8 +330,8 @@ class SamplingTensors:
user-defined seed for each sequence.
extra_entropy: extra entropy to use when generating seeds.
"""
- prompt_tokens: List[List[int]] = []
- output_tokens: List[List[int]] = []
+ prompt_tokens: List[array] = []
+ output_tokens: List[array] = []
top_ks: List[int] = []
temperatures: List[float] = []
top_ps: List[float] = []
@@ -432,13 +433,15 @@ class SamplingTensors:
if (seq_group.is_prompt
and sampling_params.prompt_logprobs is not None):
prefill_len = len(seq_group.prompt_logprob_indices)
- prompt_tokens.extend([] for _ in range(prefill_len))
- output_tokens.extend([] for _ in range(prefill_len))
+ prompt_tokens.extend(
+ array('l') for _ in range(prefill_len))
+ output_tokens.extend(
+ array('l') for _ in range(prefill_len))
if seq_group.do_sample:
for seq_id in seq_ids:
seq_data = seq_group.seq_data[seq_id]
- prompt_tokens.append(list(seq_data.prompt_token_ids))
- output_tokens.append(list(seq_data.output_token_ids))
+ prompt_tokens.append(seq_data.prompt_token_ids_array)
+ output_tokens.append(seq_data.output_token_ids_array)
sampling_tensors = SamplingTensors.from_lists(
temperatures, top_ps, top_ks, min_ps, presence_penalties,
@@ -454,9 +457,9 @@ class SamplingTensors:
frequency_penalties: List[float],
repetition_penalties: List[float],
sampling_seeds: List[int], sample_indices: List[int],
- prompt_tokens: List[List[int]],
- output_tokens: List[List[int]], vocab_size: int,
- extra_seeds_to_generate: int, device: torch.device,
+ prompt_tokens: List[array], output_tokens: List[array],
# ISO-Bench Dataset
A curated dataset of real-world software performance optimization commits from vLLM and SGLang, designed for evaluating AI agents on code optimization tasks.
## Dataset Summary
| Config | Commits | Repository |
|---|---|---|
| `vllm` | 39 | vLLM (LLM inference engine) |
| `sglang` | 15 | SGLang (LLM serving framework) |
Each entry represents a human-authored performance optimization commit with:
- The original commit diff and message
- Performance benchmark commands (`perf_command`)
- Model configurations for benchmarking
- Hardware requirements
- API surface analysis
## Usage
```python
from datasets import load_dataset

# Load vLLM optimization commits
vllm = load_dataset('Lossfunk/ISO-Bench', 'vllm', split='train')

# Load SGLang optimization commits
sglang = load_dataset('Lossfunk/ISO-Bench', 'sglang', split='train')

# Example: inspect a commit
print(vllm[0]['commit_subject'])
print(vllm[0]['perf_command'])
print(vllm[0]['models'])
```
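Since each row carries the full unified diff in `diff_text`, one common workflow is writing it out as a patch file so it can be applied to a checkout of the parent commit (e.g. with `git apply`). A minimal sketch, using a hypothetical stand-in row rather than a real `load_dataset` result:

```python
# Hypothetical stand-in row mirroring the schema; real rows come from
# load_dataset as shown above.
row = {
    "commit_hash": "fc542144",
    "diff_text": "diff --git a/vllm/sequence.py b/vllm/sequence.py\n",
}

# Persist the diff so external tooling (e.g. `git apply`) can consume it.
patch_path = f"{row['commit_hash']}.patch"
with open(patch_path, "w") as f:
    f.write(row["diff_text"])

print(f"wrote {patch_path}")
```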
## Schema
| Field | Type | Description |
|---|---|---|
| `commit_hash` | string | Short hash of the optimization commit |
| `pr_url` | string | URL to the pull request |
| `commit_subject` | string | Commit message subject line |
| `commit_message` | string | Full commit message |
| `diff_text` | string | Unified diff of the optimization |
| `models` | list[string] | HuggingFace model IDs used for benchmarking |
| `perf_command` | string | Command to run the performance benchmark |
| `has_serving` | bool | Whether commit affects serving performance |
| `has_latency` | bool | Whether commit affects latency |
| `has_throughput` | bool | Whether commit affects throughput |
| `uses_lm_eval` | bool | Whether correctness is validated via lm-eval |
| `lm_eval_command` | string | lm-eval command for correctness validation |
| `files_changed` | list[string] | Files modified in the commit |
| `apis` | list[string] | Affected API endpoints/functions |
| `affected_paths` | list[string] | Code paths affected by the change |
| `hardware` | string | Required hardware (e.g., GPU type) |
| `stats` | struct | Commit statistics (lines changed, files, hunks) |
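The boolean flags make it easy to slice the dataset by benchmark type. A minimal sketch over hypothetical stand-in rows (real rows carry the full field set described above and come from `load_dataset`):

```python
# Stand-in rows mirroring the schema's boolean flags.
rows = [
    {"commit_hash": "fc542144", "has_serving": True, "uses_lm_eval": True},
    {"commit_hash": "9ed82e70", "has_serving": True, "uses_lm_eval": True},
    {"commit_hash": "0000000", "has_serving": False, "uses_lm_eval": False},
]

# Keep only commits whose benchmark exercises the serving path.
serving = [r["commit_hash"] for r in rows if r["has_serving"]]
print(serving)
```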
## How It Works
Each dataset entry captures a real performance optimization made by an expert developer. AI agents are given the codebase at the parent commit (before optimization) and must independently discover and implement a performance improvement. Their patches are then benchmarked against the human expert's solution using wall-clock timing comparisons.
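The wall-clock comparison described above reduces to a simple ratio between the baseline run and the patched run. A sketch with hypothetical timing numbers (not measurements from the dataset):

```python
def speedup(baseline_seconds: float, optimized_seconds: float) -> float:
    """Wall-clock speedup of a patched run relative to the parent commit."""
    return baseline_seconds / optimized_seconds

# Hypothetical timings for the same perf_command on the same hardware.
baseline = 12.0  # parent commit (pre-optimization), seconds
patched = 8.0    # with the candidate patch applied, seconds
print(f"{speedup(baseline, patched):.2f}x")  # → 1.50x
```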
## License
Apache 2.0