Author:
vllm-mlx contributors
License:
Apache-2.0
Summary:
Rapid-MLX — AI inference for Apple Silicon. Drop-in OpenAI API, 2-4x faster than Ollama.
Latest version:
0.6.23
Required dependencies:
fastapi | huggingface-hub | jsonschema | mcp | mlx | mlx-lm | numpy | pillow | psutil | pyyaml | requests | tabulate | tokenizers | tqdm | transformers | uvicorn
Optional dependencies:
black | cn2an | fugashi | gradio | jieba | loguru | misaki | mlx-audio | mlx-embeddings | mlx-vlm | mypy | num2words | numba | opencv-python | ordered_set | outlines | phonemizer | pytest | pytest-asyncio | pytz | ruff | scipy | sounddevice | soundfile | spacy | tiktoken | torch | torchvision | unidic-lite | vllm
Downloads last day:
1,669
Downloads last week:
7,018
Downloads last month:
10,927
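
The summary above advertises a drop-in OpenAI-compatible API. As a minimal sketch of what that implies, the snippet below builds a request body in the OpenAI chat-completions format; the base URL, port, and model identifier are assumptions for illustration, not values documented on this page.

```python
import json

# Assumed default address of a locally running server (not documented here).
BASE_URL = "http://localhost:8000/v1"

# Request body following the OpenAI /chat/completions schema.
# The model id below is a hypothetical example.
payload = {
    "model": "mlx-community/Llama-3.2-3B-Instruct-4bit",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

# With the server running, any OpenAI-compatible client could POST this
# to f"{BASE_URL}/chat/completions"; here we only print the request shape.
print(json.dumps(payload, indent=2))
```

Because the API is OpenAI-compatible, existing OpenAI client libraries should work unchanged once pointed at the local base URL.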