Author: vLLM Team
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Latest version: 0.9.2
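The summary above describes vLLM's core use case: high-throughput batched generation through its Python API. A minimal sketch of that offline-inference API, assuming vLLM is installed; the model name is only an illustrative example:

```python
from vllm import LLM, SamplingParams

# Load a model (the model name here is an example, not a requirement).
llm = LLM(model="facebook/opt-125m")

# Sampling settings applied to every prompt in the batch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "The capital of France is",
    "vLLM is a library for",
]

# Generate completions for all prompts in one batched call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```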
Required dependencies:
aiohttp | blake3 | cachetools | cloudpickle | compressed-tensors | depyf | einops | fastapi | filelock | gguf | huggingface-hub | importlib_metadata | lark | llguidance | lm-format-enforcer | mistral_common | msgspec | ninja | numba | numpy | openai | opencv-python-headless | outlines | partial-json-parser | pillow | prometheus-fastapi-instrumentator | prometheus_client | protobuf | psutil | py-cpuinfo | pybase64 | pydantic | python-json-logger | pyyaml | pyzmq | ray | regex | requests | scipy | sentencepiece | setuptools | six | tiktoken | tokenizers | torch | torchaudio | torchvision | tqdm | transformers | typing_extensions | watchfiles | xformers | xgrammar
Optional dependencies:
boto3 | datasets | fastsafetensors | librosa | pandas | runai-model-streamer | runai-model-streamer-s3 | soundfile | tensorizer
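Several of the required dependencies above (fastapi, openai, prometheus_client, prometheus-fastapi-instrumentator) relate to the "serving engine" side of the summary: vLLM's OpenAI-compatible HTTP server. A hedged sketch of querying such a server with the openai client, assuming one has already been started locally (for example with `vllm serve <model>`) on the default port 8000; the model name and API key are placeholder assumptions:

```python
from openai import OpenAI

# Point the OpenAI client at a locally running vLLM server
# (URL and API key below assume a default local setup).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="facebook/opt-125m",  # should match the model the server was started with
    prompt="The capital of France is",
    max_tokens=32,
)
print(response.choices[0].text)
```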
Downloads last day: 32,706
Downloads last week: 462,281
Downloads last month: 2,124,689