
[Bug]: Installation fails on MacBook Pro M1 (arm64) with 64GB RAM: TypeError: unsupported operand type(s) for |: 'type' and 'NoneType' #30606

@pilere

Description


Your current environment

The output of python collect_env.py:
Collecting environment information...
==============================
        System Info
==============================
OS                           : macOS 15.6.1 (arm64)
GCC version                  : Could not collect
Clang version                : 17.0.0 (clang-1700.4.4.1)
CMake version                : Could not collect
Libc version                 : N/A

==============================
       PyTorch Info
==============================
PyTorch version              : 2.8.0
Is debug build               : False
CUDA used to build PyTorch   : None
ROCM used to build PyTorch   : N/A

==============================
      Python Environment
==============================
Python version               : 3.9.18 (main, Oct  3 2025, 10:09:16)  [Clang 17.0.0 (clang-1700.3.19.1)] (64-bit runtime)
Python platform              : macOS-15.6.1-arm64-arm-64bit

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : False
CUDA runtime version         : No CUDA
CUDA_MODULE_LOADING set to   : N/A
GPU models and configuration : No CUDA
Nvidia driver version        : No CUDA
cuDNN version                : No CUDA
HIP runtime version          : N/A
MIOpen runtime version       : N/A
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Apple M1 Max

==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.0.2
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.3
[conda] Could not collect

==============================
         vLLM Info
==============================
ROCM Version                 : Could not collect
vLLM Version                 : 0.11.0
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
  Could not collect

==============================
     Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1

🐛 Describe the bug

Following the guide to install Mistral locally (https://docs.mistral.ai/mistral-vibe/local), it fails at the vLLM step, when starting the server:

# vllm serve mistralai/Devstral-Small-2-24B-Instruct-2512 --tool-call-parser mistral --enable-auto-tool-choice --port 8080
INFO 12-13 11:30:12 [__init__.py:216] Automatically detected platform cpu.
Traceback (most recent call last):
  File "/Users/smerle/.asdf/installs/python/3.9.18/bin/vllm", line 5, in <module>
    from vllm.entrypoints.cli.main import main
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/entrypoints/cli/__init__.py", line 3, in <module>
    from vllm.entrypoints.cli.benchmark.latency import BenchmarkLatencySubcommand
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/entrypoints/cli/benchmark/latency.py", line 5, in <module>
    from vllm.benchmarks.latency import add_cli_args, main
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/benchmarks/latency.py", line 18, in <module>
    from vllm.engine.arg_utils import EngineArgs
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/engine/arg_utils.py", line 42, in <module>
    from vllm.reasoning import ReasoningParserManager
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/reasoning/__init__.py", line 5, in <module>
    from .basic_parsers import BaseThinkingReasoningParser
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/reasoning/basic_parsers.py", line 8, in <module>
    from vllm.entrypoints.openai.protocol import (ChatCompletionRequest,
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/entrypoints/openai/protocol.py", line 52, in <module>
    from vllm.entrypoints.chat_utils import (ChatCompletionMessageParam,
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/entrypoints/chat_utils.py", line 48, in <module>
    from vllm.model_executor.models import SupportsMultiModal
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/model_executor/models/__init__.py", line 11, in <module>
    from .registry import ModelRegistry
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/model_executor/models/registry.py", line 426, in <module>
    class _LazyRegisteredModel(_BaseRegisteredModel):
  File "/Users/smerle/.asdf/installs/python/3.9.18/lib/python3.9/site-packages/vllm/model_executor/models/registry.py", line 442, in _LazyRegisteredModel
    module_hash: str) -> _ModelInfo | None:
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
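
This looks like a Python version issue rather than a broken install: the traceback ends in vllm/model_executor/models/registry.py on the annotation _ModelInfo | None. That PEP 604 union spelling (type | None) only works at runtime on Python 3.10+, and since the annotation sits in a class body it is evaluated eagerly at import time; this environment runs Python 3.9.18, so the import blows up. A minimal sketch reproducing the same TypeError on 3.9 (the names below are illustrative, not vLLM's):

from typing import Optional

class ModelInfo:
    pass

class Registry:
    # On Python 3.9 the 3.10+ spelling below raises at class-definition
    # time, because the annotation is evaluated eagerly and type.__or__
    # does not exist yet:
    #     def lookup(self, module_hash: str) -> ModelInfo | None: ...
    # TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
    #
    # Pre-3.10 spelling of the same annotation, which works on 3.9:
    def lookup(self, module_hash: str) -> Optional[ModelInfo]:
        return None

So the likely workaround on the user side is to run vLLM under Python 3.10 or newer; on the vLLM side the annotation could be spelled with typing.Optional, or the module could defer annotation evaluation with from __future__ import annotations.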
