
NexaAI Windows ARM64 Setup Guide

This guide demonstrates how to use the NexaAI SDK for various AI inference tasks on NPU devices, including:
  • LLM (Large Language Model): Text generation and conversation
  • VLM (Vision Language Model): Multimodal understanding and generation
  • Embedder: Text vectorization and similarity computation
  • Reranker: Document reranking
  • ASR (Automatic Speech Recognition): Speech-to-text transcription
  • CV (Computer Vision): OCR/text recognition

Prerequisites

1. Install the correct Python version

If you prefer, we also offer a video tutorial for the installation; check it out here. NexaAI requires Python 3.11 – 3.13 (ARM64 build) on Windows ARM. Please download and install the official ARM64 Python build (python-3.11.1-arm64.exe). Make sure you read the instructions below carefully before proceeding.
IMPORTANT: Make sure you select “Add python.exe to PATH” on the first screen of the installation wizard.
🛑 Make sure you restart the terminal or your IDE after installation.
⚠️ Do not use Conda or x86 builds — they are incompatible with native ARM64 binaries. If you are in a conda environment, run conda deactivate first.
Verify the installation: If your PATH has been overridden by an environment manager, we recommend running the following commands to restore the PATH variable from the system settings.
$systemPath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
$userPath   = [Environment]::GetEnvironmentVariable('Path', 'User')
$env:Path   = "$userPath;$systemPath"
Then verify that your Python executable has the correct version (3.11 – 3.13) and architecture:
python -c "import sys; print(f'Python version: {sys.version}')"
Your output should look like:
Python version: 3.11.0 (main, Oct 24 2022, 18:15:22) [MSC v.1933 64 bit (ARM64)]
The output must show a version between 3.11 and 3.13 and the ARM64 architecture. If it shows AMD64 or an incorrect version, try the following:
  • (If you have conda installed) Run conda deactivate to deactivate the current conda environment.
  • (If your python executable points to the x86 version) You may need to make the ARM64 Python come before the x86 Python in your PATH.
    • Press the Win key, type env, and press Enter to open the Edit the system environment variables setting.
    • Click on Environment Variables... button.
    • Select Path and click Edit....
    • Find your ARM64 Python installation path, and move it to the top of the list.
    • Click OK several times to close all the dialogs and save the changes.
  • (If you forgot to select “Add python.exe to PATH” on the first screen of the installation wizard)
    • Run the installation wizard again, follow the instructions to remove the current installation, and then reinstall from the wizard. Make sure to select “Add python.exe to PATH” this time.

2. Create and activate a virtual environment

python -m venv nexaai-env
nexaai-env\Scripts\activate
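
If activation is blocked by PowerShell's script execution policy, a common workaround is to relax the policy for the current session only before activating (adjust to your own security requirements):
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
nexaai-env\Scripts\activate
Once the environment is active, it is worth re-checking that it picked up the ARM64 interpreter:
python -c "import sys, platform; print(sys.version, platform.machine())"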

3. Install the NexaAI SDK

pip install nexaai
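
To confirm that the package was installed into the active environment, you can inspect its metadata (the version shown will depend on the current release):
pip show nexaai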

4. Verify Your Environment

Run the following code to ensure you have the right environment:
import sys
import platform

# ANSI color codes
RED = "\033[91m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
BOLD = "\033[1m"
RESET = "\033[0m"

min_ver = (3, 11)
max_ver = (3, 13)
current_ver = sys.version_info
arch = platform.machine()

if not (min_ver <= (current_ver.major, current_ver.minor) <= max_ver) or arch.lower() != "arm64":
    print("\n" + "=" * 80)
    print(f"{BOLD}{RED}WARNING: Your Python version or architecture is not compatible.{RESET}")
    print(f"Detected version: {current_ver.major}.{current_ver.minor}, architecture: {arch}")
    print(f"{YELLOW}Required: Python 3.11 - 3.13 & architecture 'arm64'.{RESET}")
    print("=" * 80)
    print(f"{RED}DO NOT continue to the following code!{RESET}\n")
    print("To install arm64 Python:")
    print("  - Download Python 3.11-3.13 for arm64 from https://www.python.org/downloads/")
    print("  - Install and verify by running: python3 --version and python3 -c 'import platform; print(platform.machine())'")
    print("  - Launch Jupyter and make sure to select the arm64 Python kernel in 'Kernel > Change kernel'.")
    sys.exit(1)
else:
    print(f"{GREEN}[VERIFICATION PASSED] Python version and architecture are correct. You may continue to the following sections.{RESET}")

Authentication Setup

Before running any examples, you need to set up your NexaAI authentication token.

Set Token in Code

Replace "YOUR_NEXA_TOKEN_HERE" with your actual NexaAI token from https://sdk.nexa.ai/:
import os

# Replace "YOUR_NEXA_TOKEN_HERE" with your actual token from https://sdk.nexa.ai/
os.environ["NEXA_TOKEN"] = "YOUR_NEXA_TOKEN_HERE"

assert os.environ.get("NEXA_TOKEN", "").startswith(
    "key/"), "ERROR: NEXA_TOKEN must start with 'key/'. Please check your token."

1. LLM (Large Language Model) NPU Inference

Using NPU-accelerated large language models for text generation and conversation. Llama3.2-3B-NPU-Turbo is specifically optimized for NPU.
import io
import os

from nexaai.common import GenerationConfig, ModelConfig, ChatMessage
from nexaai.llm import LLM


def llm_npu_example():
    """LLM NPU inference example"""
    print("=== LLM NPU Inference Example ===")

    # Model configuration

    # Use huggingface repo ID
    model_name = "NexaAI/Llama3.2-3B-NPU-Turbo"
    # Alternatively, use local path
    # model_name = os.path.expanduser(r"~\.cache\nexa.ai\nexa_sdk\models\NexaAI\Llama3.2-3B-NPU-Turbo\weights-1-3.nexa")

    plugin_id = "npu"
    device = "npu"
    max_tokens = 100
    system_message = "You are a helpful assistant."

    print(f"Loading model: {model_name}")
    print(f"Using plugin: {plugin_id}")
    print(f"Device: {device}")

    # Create model instance
    m_cfg = ModelConfig()
    llm = LLM.from_(model_name, plugin_id=plugin_id, device_id=device, m_cfg=m_cfg)

    # Create conversation history
    conversation = [ChatMessage(role="system", content=system_message)]

    # Example conversations
    test_prompts = [
        "What is artificial intelligence?",
        "Explain the benefits of on-device AI processing.",
        "How does NPU acceleration work?"
    ]

    for i, prompt in enumerate(test_prompts, 1):
        print(f"\n--- Conversation {i} ---")
        print(f"User: {prompt}")

        # Add user message
        conversation.append(ChatMessage(role="user", content=prompt))

        # Apply chat template
        formatted_prompt = llm.apply_chat_template(conversation)

        # Generate response
        print("Assistant: ", end="", flush=True)
        response_buffer = io.StringIO()

        for token in llm.generate_stream(formatted_prompt, g_cfg=GenerationConfig(max_tokens=max_tokens)):
            print(token, end="", flush=True)
            response_buffer.write(token)

        # Get profiling data
        profiling_data = llm.get_profiling_data()
        if profiling_data:
            print(f"\nProfiling data: {profiling_data}")

        # Add assistant response to conversation history
        conversation.append(ChatMessage(role="assistant", content=response_buffer.getvalue()))
        print("\n" + "=" * 50)


llm_npu_example()
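
For one-off turns outside the loop above, the same calls can be wrapped in a small helper. This is a minimal sketch that reuses only the APIs shown in the example (apply_chat_template and generate_stream); the chat_once helper itself is ours, not part of the SDK:
def chat_once(llm, conversation, prompt, max_tokens=100):
    """Append a user turn, stream the reply, and record it in the conversation history."""
    conversation.append(ChatMessage(role="user", content=prompt))
    formatted_prompt = llm.apply_chat_template(conversation)
    reply = io.StringIO()
    for token in llm.generate_stream(formatted_prompt, g_cfg=GenerationConfig(max_tokens=max_tokens)):
        reply.write(token)
    answer = reply.getvalue()
    conversation.append(ChatMessage(role="assistant", content=answer))
    return answer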

2. VLM (Vision Language Model) NPU Inference

Using NPU-accelerated vision language models for multimodal understanding and generation. OmniNeural-4B supports joint processing of images and text.
import os
import io

from nexaai.vlm import VLM
from nexaai.common import GenerationConfig, ModelConfig, MultiModalMessage, MultiModalMessageContent


def vlm_npu_example():
    """VLM NPU inference example"""
    print("=== VLM NPU Inference Example ===")

    # Model configuration

    # Use huggingface repo ID
    model_name = "NexaAI/OmniNeural-4B"
    # Alternatively, use local path
    # model_name = os.path.expanduser(r"~\.cache\nexa.ai\nexa_sdk\models\NexaAI\OmniNeural-4B\weights-1-8.nexa")
    
    plugin_id = "npu"
    device = "npu"
    max_tokens = 100
    system_message = "You are a helpful assistant that can understand images and text."
    image_path = r"path/to/image"  # Replace with actual image path if available

    print(f"Loading model: {model_name}")
    print(f"Using plugin: {plugin_id}")
    print(f"Device: {device}")

    # Check for image existence
    if not (image_path and os.path.exists(image_path)):
        print(f"\033[93mWARNING: The specified image_path ('{image_path}') does not exist or was not provided. Multimodal prompts will not include image input.\033[0m")

    # Create model instance
    m_cfg = ModelConfig()
    vlm = VLM.from_(name_or_path=model_name, m_cfg=m_cfg, plugin_id=plugin_id, device_id=device)

    # Create conversation history
    conversation = [MultiModalMessage(role="system",
                                      content=[MultiModalMessageContent(type="text", text=system_message)])]

    # Example multimodal conversations
    test_cases = [
        {
            "text": "What do you see in this image?",
            "image_path": image_path
        }
    ]

    for i, case in enumerate(test_cases, 1):
        print(f"\n--- Multimodal Conversation {i} ---")
        print(f"User: {case['text']}")

        # Build message content
        contents = [MultiModalMessageContent(type="text", text=case['text'])]

        # Add image content if available
        if case['image_path'] and os.path.exists(case['image_path']):
            contents.append(MultiModalMessageContent(type="image", path=case['image_path']))
            print(f"Including image: {case['image_path']}")

        # Add user message
        conversation.append(MultiModalMessage(role="user", content=contents))

        # Apply chat template
        formatted_prompt = vlm.apply_chat_template(conversation)

        # Generate response
        print("Assistant: ", end="", flush=True)
        response_buffer = io.StringIO()

        # Prepare image and audio paths
        image_paths = [case['image_path']] if case['image_path'] and os.path.exists(case['image_path']) else None
        audio_paths = None

        for token in vlm.generate_stream(formatted_prompt,
                                         g_cfg=GenerationConfig(max_tokens=max_tokens,
                                                                image_paths=image_paths,
                                                                audio_paths=audio_paths)):
            print(token, end="", flush=True)
            response_buffer.write(token)

        # Get profiling data
        profiling_data = vlm.get_profiling_data()
        if profiling_data:
            print(f"\nProfiling data: {profiling_data}")

        # Add assistant response to conversation history
        conversation.append(MultiModalMessage(role="assistant",
                                              content=[MultiModalMessageContent(type="text", text=response_buffer.getvalue())]))
        print("\n" + "=" * 50)


vlm_npu_example()
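
Assembling the content list by hand for every turn gets repetitive. The small helper below builds a user message with an optional image attachment, using only the MultiModalMessage and MultiModalMessageContent classes from the example above (the helper name is ours, not part of the SDK):
def build_user_message(text, image_path=None):
    """Create a user MultiModalMessage, attaching an image part only if the file exists."""
    contents = [MultiModalMessageContent(type="text", text=text)]
    if image_path and os.path.exists(image_path):
        contents.append(MultiModalMessageContent(type="image", path=image_path))
    return MultiModalMessage(role="user", content=contents)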

3. Embedder NPU Inference

Using NPU-accelerated embedding models for text vectorization and similarity computation. embeddinggemma-300m-npu is a lightweight embedding model specifically optimized for NPU.
import numpy as np
from nexaai.embedder import Embedder, EmbeddingConfig


def embedder_npu_example():
    """Embedder NPU inference example"""
    print("=== Embedder NPU Inference Example ===")

    # Model configuration

    # Use huggingface repo ID
    model_name = "NexaAI/embeddinggemma-300m-npu"
    # Alternatively, use local path
    # model_name = os.path.expanduser(r"~\.cache\nexa.ai\nexa_sdk\models\NexaAI\embeddinggemma-300m-npu\weights-1-2.nexa")

    plugin_id = "npu"
    batch_size = 2

    print(f"Loading model: {model_name}")
    print(f"Using plugin: {plugin_id}")
    print(f"Batch size: {batch_size}")

    # Create embedder instance
    embedder = Embedder.from_(name_or_path=model_name, plugin_id=plugin_id)
    print('Embedder loaded successfully!')

    # Get embedding dimension
    dim = embedder.get_embedding_dim()
    print(f"Embedding dimension: {dim}")

    # Example texts
    texts = [
        "On-device AI is a type of AI that is processed on the device itself, rather than in the cloud.",
        "Nexa AI allows you to run state-of-the-art AI models locally on CPU, GPU, or NPU.",
        "A ragdoll is a breed of cat that is known for its long, flowing hair and gentle personality.",
        "The capital of France is Paris.",
        "NPU acceleration provides significant performance improvements for AI workloads."
    ]

    query = "what is on device AI"

    print(f"\n=== Generating Embeddings ===")
    print(f"Processing {len(texts)} texts...")

    # Generate embeddings
    embeddings = embedder.generate(
        texts=texts,
        config=EmbeddingConfig(batch_size=batch_size)
    )

    print(f"Successfully generated {len(embeddings)} embeddings")

    # Display embedding information
    print(f"\n=== Embedding Details ===")
    for i, (text, embedding) in enumerate(zip(texts, embeddings)):
        print(f"\nText {i + 1}:")
        print(f"  Content: {text}")
        print(f"  Embedding dimension: {len(embedding)}")
        print(f"  First 10 elements: {embedding[:10]}")
        print("-" * 70)

    # Query processing
    print(f"\n=== Query Processing ===")
    print(f"Query: '{query}'")

    query_embedding = embedder.generate(
        texts=[query],
        config=EmbeddingConfig(batch_size=1)
    )[0]

    print(f"Query embedding dimension: {len(query_embedding)}")

    # Similarity analysis
    print(f"\n=== Similarity Analysis (Inner Product) ===")
    similarities = []

    for i, (text, embedding) in enumerate(zip(texts, embeddings)):
        query_vec = np.array(query_embedding)
        text_vec = np.array(embedding)
        inner_product = np.dot(query_vec, text_vec)
        similarities.append((i, text, inner_product))

        print(f"\nText {i + 1}:")
        print(f"  Content: {text}")
        print(f"  Inner product with query: {inner_product:.6f}")
        print("-" * 70)

    # Sort and display most similar texts
    similarities.sort(key=lambda x: x[2], reverse=True)

    print(f"\n=== Similarity Ranking Results ===")
    for rank, (idx, text, score) in enumerate(similarities, 1):
        print(f"Rank {rank}: [{score:.6f}] {text}")

    return embeddings, query_embedding, similarities


embeddings, query_emb, similarities = embedder_npu_example()
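
The inner product used above is scale-dependent. If you want scores in a fixed [-1, 1] range, you can compute cosine similarity instead; this is plain NumPy over the embeddings already returned by the example, not an SDK call:
def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Re-rank the example texts by cosine similarity to the query
cosine_scores = [(text, cosine_similarity(query_emb, embeddings[idx])) for idx, text, _ in similarities]
for text, score in sorted(cosine_scores, key=lambda x: x[1], reverse=True):
    print(f"[{score:.4f}] {text}")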

4. ASR (Automatic Speech Recognition) NPU Inference

Using NPU-accelerated speech recognition models for speech-to-text transcription. parakeet-npu provides high-quality speech recognition with NPU acceleration.
import os
import time

from nexaai.asr import ASR, ASRConfig


def asr_npu_example():
    """ASR NPU inference example"""
    print("=== ASR NPU Inference Example ===")

    # Model configuration

    # Use huggingface repo ID
    model_name = "NexaAI/parakeet-npu"
    # Alternatively, use local path
    # model_name = os.path.expanduser(r"~\.cache\nexa.ai\nexa_sdk\models\NexaAI\parakeet-npu\weights-1-5.nexa")

    plugin_id = "npu"
    device = "npu"
    # Example audio file (replace with your actual audio file)
    audio_file = r"path/to/audio"  # Replace with actual audio file path

    print(f"Loading model: {model_name}")
    print(f"Using plugin: {plugin_id}")
    print(f"Device: {device}")

    assert os.path.exists(
        audio_file), f"ERROR: The specified audio_file ('{audio_file}') does not exist. Please provide a valid audio file path to test ASR functionality."

    # Create ASR instance
    asr = ASR.from_(name_or_path=model_name, plugin_id=plugin_id, device_id=device)
    print('ASR model loaded successfully!')

    # Basic ASR configuration
    config = ASRConfig(
        timestamps="segment",  # Get segment-level timestamps
        beam_size=5,
        stream=False
    )

    print(f"\n=== Starting Transcription ===")
    start_time = time.time()

    # Perform transcription
    result = asr.transcribe(audio_path=audio_file, language="en", config=config)

    end_time = time.time()
    transcription_time = end_time - start_time

    # Display results
    print(f"\n=== Transcription Results ===")
    print(f"Transcription: {result.transcript}")
    print(f"Processing time: {transcription_time:.2f} seconds")

    # Display segment information if available
    if hasattr(result, 'segments') and result.segments:
        print(f"\nSegments ({len(result.segments)}):")
        for i, segment in enumerate(result.segments[:3]):  # Show first 3 segments
            seg_start = segment.get('start', 0.0)
            seg_end = segment.get('end', 0.0)
            text = segment.get('text', '').strip()
            print(f"  {i + 1}. [{seg_start:.2f}s - {seg_end:.2f}s] {text}")
        if len(result.segments) > 3:
            print(f"  ... and {len(result.segments) - 3} more segments")

    # Get profiling data
    profiling_data = asr.get_profiling_data()
    if profiling_data:
        print(f"\nProfiling data: {profiling_data}")

    return result


result = asr_npu_example()
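
The ASR instance can be reused across files, so batch transcription is just a loop over transcribe(). A minimal sketch, assuming a folder of .wav files (the helper name and folder argument are ours, not part of the SDK):
def transcribe_folder(asr, folder, language="en"):
    """Transcribe every .wav file in a folder with an already-loaded ASR model."""
    config = ASRConfig(timestamps="segment", beam_size=5, stream=False)
    transcripts = {}
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".wav"):
            path = os.path.join(folder, name)
            result = asr.transcribe(audio_path=path, language=language, config=config)
            transcripts[name] = result.transcript
            print(f"{name}: {result.transcript}")
    return transcripts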

5. Reranker NPU Inference

Using NPU-accelerated reranking models for document reranking. jina-v2-rerank-npu can perform precise similarity-based document ranking based on queries.
from nexaai.rerank import Reranker, RerankConfig


def reranker_npu_example():
    """Reranker NPU inference example"""
    print("=== Reranker NPU Inference Example ===")

    # Model configuration

    # Use huggingface repo ID
    model_name = "NexaAI/jina-v2-rerank-npu"
    # Alternatively, use local path
    # model_name = os.path.expanduser(r"~\.cache\nexa.ai\nexa_sdk\models\NexaAI\jina-v2-rerank-npu\weights-1-4.nexa")

    plugin_id = "npu"
    batch_size = 4

    print(f"Loading model: {model_name}")
    print(f"Using plugin: {plugin_id}")
    print(f"Batch size: {batch_size}")

    # Create reranker instance
    reranker = Reranker.from_(name_or_path=model_name, plugin_id=plugin_id)
    print('Reranker loaded successfully!')

    # Example queries and documents
    queries = [
        "Where is on-device AI?",
        "What is NPU acceleration?",
        "How does machine learning work?",
        "Tell me about computer vision"
    ]

    documents = [
        "On-device AI is a type of AI that is processed on the device itself, rather than in the cloud.",
        "NPU acceleration provides significant performance improvements for AI workloads on specialized hardware.",
        "Edge computing brings computation and data storage closer to the sources of data.",
        "A ragdoll is a breed of cat that is known for its long, flowing hair and gentle personality.",
        "The capital of France is Paris, a beautiful city known for its art and culture.",
        "Machine learning is a subset of artificial intelligence that enables computers to learn without being explicitly programmed.",
        "Computer vision is a field of artificial intelligence that trains computers to interpret and understand visual information.",
        "Deep learning uses neural networks with multiple layers to model and understand complex patterns in data."
    ]

    print(f"\n=== Document Reranking Test ===")
    print(f"Number of documents: {len(documents)}")

    # Rerank for each query
    for i, query in enumerate(queries, 1):
        print(f"\n--- Query {i} ---")
        print(f"Query: '{query}'")
        print("-" * 50)

        # Perform reranking
        scores = reranker.rerank(
            query=query,
            documents=documents,
            config=RerankConfig(batch_size=batch_size)
        )

        # Create (document, score) pairs and sort
        doc_scores = list(zip(documents, scores))
        doc_scores.sort(key=lambda x: x[1], reverse=True)

        # Display ranking results
        print("Reranking results:")
        for rank, (doc, score) in enumerate(doc_scores, 1):
            print(f"  {rank:2d}. [{score:.4f}] {doc}")

        # Display most relevant documents
        print(f"\nMost relevant documents (top 3):")
        for rank, (doc, score) in enumerate(doc_scores[:3], 1):
            print(f"  {rank}. {doc}")

        print("=" * 80)

    return reranker


reranker = reranker_npu_example()
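
If you only need the best few documents rather than a full ranking, a small top-k helper keeps call sites tidy. It uses only reranker.rerank() as shown above; the helper name is ours, not part of the SDK:
def rerank_top_k(reranker, query, documents, k=3, batch_size=4):
    """Return the k highest-scoring (document, score) pairs for a query."""
    scores = reranker.rerank(query=query, documents=documents,
                             config=RerankConfig(batch_size=batch_size))
    ranked = sorted(zip(documents, scores), key=lambda x: x[1], reverse=True)
    return ranked[:k]

# Example: reuse the reranker instance returned above
for doc, score in rerank_top_k(reranker, "What is NPU acceleration?", documents=[
        "NPU acceleration provides significant performance improvements for AI workloads.",
        "A ragdoll is a breed of cat known for its gentle personality."], k=2):
    print(f"[{score:.4f}] {doc}")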

6. Computer Vision (CV) NPU Inference

Run NPU-accelerated computer vision tasks (e.g., OCR/text recognition) on images.
import os
from nexaai.cv import CVCapabilities, CVModel, CVModelConfig, CVResults


def cv_ocr_example():

    # Use huggingface repo ID
    model_name = "NexaAI/paddleocr-npu"
    # Alternatively, use local path
    # model_name = os.path.expanduser(r"~\.cache\nexa.ai\nexa_sdk\models\NexaAI\paddleocr-npu\weights-1-1.nexa")

    image_path = r"path/to/image"

    config = CVModelConfig(capabilities=CVCapabilities.OCR)
    cv = CVModel.from_(name_or_path=model_name, config=config, plugin_id='npu')

    assert os.path.exists(image_path), f"ERROR: Image file not found: {image_path}"

    results = cv.infer(image_path)

    print(f"Number of results: {results.result_count}")
    for result in results.results:
        print(f"[{result.confidence:.2f}] {result.text}")


cv_ocr_example()
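
The same pattern extends to multiple images: load the model once and call infer() per file. A minimal sketch that reuses only the calls shown above (the helper name and the image list are ours):
def ocr_images(cv, image_paths):
    """Run OCR on a list of image files with an already-loaded CV model."""
    for path in image_paths:
        if not os.path.exists(path):
            print(f"Skipping missing file: {path}")
            continue
        results = cv.infer(path)
        print(f"{path}: {results.result_count} results")
        for result in results.results:
            print(f"  [{result.confidence:.2f}] {result.text}")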

Next Steps