PyPI Stats

vllm-fixed
PyPI page
Home page
Author: vLLM Team
License: Apache-2.0
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Latest version: 1.0.0
Required dependencies: aiohttp | blake3 | cachetools | cloudpickle | compressed-tensors | datasets | depyf | einops | fastapi | filelock | gguf | huggingface-hub | importlib_metadata | lark | llguidance | lm-format-enforcer | mistral_common | msgspec | ninja | numpy | openai | opencv-python-headless | opentelemetry-api | opentelemetry-exporter-otlp | opentelemetry-sdk | opentelemetry-semantic-conventions-ai | outlines | partial-json-parser | pillow | prometheus-fastapi-instrumentator | prometheus_client | protobuf | psutil | py-cpuinfo | pydantic | python-json-logger | pyyaml | pyzmq | requests | scipy | sentencepiece | setuptools | six | tiktoken | tokenizers | torch | torchaudio | torchvision | tqdm | transformers | triton | typing_extensions | watchfiles | xgrammar
Optional dependencies: boto3 | fastsafetensors | librosa | runai-model-streamer | runai-model-streamer-s3 | soundfile | tensorizer

Downloads last day: 0
Downloads last week: 22
Downloads last month: 27