Author: vLLM Team
License: Apache 2.0
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Latest version: 0.6.6.post1
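
As a quick orientation for the summary above, the sketch below shows a minimal offline-inference call with the package's Python API. It is a sketch, not the project's canonical example: it assumes the package has been installed with pip on a machine that meets vLLM's hardware requirements, and the model name and sampling values are illustrative placeholders.

    # Minimal sketch: offline batch generation with vLLM's Python API.
    # Assumes `pip install vllm`; the model and sampling values are placeholders.
    from vllm import LLM, SamplingParams

    prompts = ["The capital of France is", "vLLM is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    llm = LLM(model="facebook/opt-125m")   # model weights are fetched from Hugging Face
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)

For serving over HTTP, the package also ships an OpenAI-compatible API server (the fastapi, uvicorn, and openai dependencies below support that path).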
Required dependencies: aiohttp | blake3 | cloudpickle | compressed-tensors | depyf | einops | fastapi | filelock | gguf | importlib_metadata | lark | lm-format-enforcer | mistral_common | msgspec | numpy | nvidia-ml-py | openai | outlines | partial-json-parser | pillow | prometheus-fastapi-instrumentator | prometheus_client | protobuf | psutil | py-cpuinfo | pydantic | pyyaml | pyzmq | ray | requests | sentencepiece | setuptools | six | tiktoken | tokenizers | torch | torchvision | tqdm | transformers | typing_extensions | uvicorn | xformers | xgrammar
Optional dependencies: boto3 | decord | librosa | runai-model-streamer | runai-model-streamer-s3 | soundfile | tensorizer
Downloads last day: 83,750
Downloads last week: 427,449
Downloads last month: 1,579,081