PyPI Stats

vllm-test-tpu

Author: vLLM Team
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Latest version: 0.9.0.1
Required dependencies: aiohttp | blake3 | cachetools | cloudpickle | cmake | compressed-tensors | depyf | einops | fastapi | filelock | gguf | huggingface-hub | importlib_metadata | jinja2 | lark | llguidance | lm-format-enforcer | mistral_common | msgspec | ninja | numpy | openai | opencv-python-headless | opentelemetry-api | opentelemetry-exporter-otlp | opentelemetry-sdk | opentelemetry-semantic-conventions-ai | outlines | packaging | partial-json-parser | pillow | prometheus-fastapi-instrumentator | prometheus_client | protobuf | psutil | py-cpuinfo | pydantic | python-json-logger | pyyaml | pyzmq | ray | requests | scipy | sentencepiece | setuptools | setuptools-scm | six | tiktoken | tokenizers | torch | torch_xla | torchvision | tqdm | transformers | typing_extensions | watchfiles | wheel | xgrammar
Optional dependencies: boto3 | fastsafetensors | librosa | runai-model-streamer | runai-model-streamer-s3 | soundfile | tensorizer

Downloads last day: 1
Downloads last week: 50
Downloads last month: 58
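The recent-download figures above can also be fetched programmatically from the pypistats.org JSON API (`GET https://pypistats.org/api/packages/<package>/recent`). The sketch below parses a sample payload rather than making a network call; the exact field layout is an assumption modeled on that endpoint, with the counts taken from this page.

```python
import json

# Sample payload standing in for the pypistats.org "recent" endpoint response.
# The shape (a "data" object with last_day/last_week/last_month) is an
# assumption; the counts mirror the figures shown on this page.
sample = json.loads("""
{
  "data": {"last_day": 1, "last_week": 50, "last_month": 58},
  "package": "vllm-test-tpu",
  "type": "recent_downloads"
}
""")

def recent_downloads(payload: dict) -> dict:
    """Return the per-period download counts from a 'recent' payload."""
    return payload["data"]

counts = recent_downloads(sample)
print(counts["last_day"], counts["last_week"], counts["last_month"])
```

In a live script, `sample` would be replaced by the parsed body of an HTTP GET against the endpoint above for the package of interest.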