Author:
vLLM Team
License:
Apache 2.0
Summary:
A high-throughput and memory-efficient inference and serving engine for LLMs
Latest version:
0.6.1.post2
Required dependencies:
aiohttp | einops | fastapi | filelock | gguf | importlib-metadata | lm-format-enforcer | mistral-common | msgspec | numpy | nvidia-ml-py | openai | outlines | partial-json-parser | pillow | prometheus-client | prometheus-fastapi-instrumentator | protobuf | psutil | py-cpuinfo | pydantic | pyyaml | pyzmq | ray | requests | sentencepiece | six | tiktoken | tokenizers | torch | torchvision | tqdm | transformers | typing-extensions | uvicorn | vllm-flash-attn | xformers
Optional dependencies:
librosa | opencv-python | soundfile | tensorizer
Downloads last day:
27,702
Downloads last week:
119,512
Downloads last month:
502,838
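
Example: a minimal sketch of offline inference with this package once installed (e.g. pip install vllm). The model name, prompt, and sampling settings are illustrative assumptions, not part of the listing above.

    from vllm import LLM, SamplingParams

    # Illustrative prompt and sampling settings (assumptions for this sketch)
    prompts = ["The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Load a small model and run batched generation on it
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, output.outputs[0].text)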