▶️Use Model
To help developers easily find and run the right model, Nexa AI Hub provides a comprehensive filter system and SDK.
Explore in Model Hub
The goal of the Nexa Model Hub is to help developers find the most suitable models. To that end, we provide the following filter options:
Model Type
Computer Vision
Image-to-Text
Image-to-Image
Audio
Text-to-Speech
Automatic Speech Recognition
Multimodal
Image-Text-to-Text
NLP
Text Generation
Chat Completion
Question Answering
File Format Tag
GGUF
GGUF is an optimized binary format designed for efficient model loading and saving, particularly suited for inference tasks. It is compatible with GGML and other executors. Developed by @ggerganov, the creator of llama.cpp (a widely-used C/C++ LLM inference framework), GGUF forms the foundation of the Nexa SDK's GGML component.
ONNX
ONNX is an open standard format for representing machine learning models. It establishes a common set of operators and a unified file format, enabling AI developers to use models across various frameworks, tools, runtimes, and compilers. ONNX offers particular performance advantages on devices with limited RAM (e.g., mobile and IoT devices). The Nexa SDK's ONNX component is built upon the onnxruntime framework.
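As a rough sketch of how the file format tag maps to usage, the commands below assume the run and onnx subcommands described in the CLI Reference; the model paths shown are placeholders, not confirmed hub entries.

```
# Run a model published with the GGUF tag (placeholder model path)
nexa run llama3.2

# Run a model published with the ONNX tag (assumes a dedicated onnx subcommand)
nexa onnx gpt2
```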
Parameters
The Nexa Model Hub specializes in on-device models with fewer than 10 billion parameters.
RAM
This metric indicates the minimum random access memory (RAM) necessary for local model execution.
File Size
Displays the total storage space required for the model.
Use Model
Download Nexa SDK
Follow the Installation guide to download the appropriate SDK for your operating system.
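As one possible route, a minimal sketch assuming the SDK is published as the Python package nexaai (the Installation guide also lists per-OS installers):

```
# Install the Nexa SDK from PyPI (package name assumed; see the Installation guide)
pip install nexaai
```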
Run Model using SDK
Nexa SDK lets you run the model that fits your specific requirements locally with a single command.
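A minimal sketch of that command, assuming the run subcommand described in the CLI Reference, where MODEL_PATH is the model identifier copied from the hub:

```
# Run a model locally; MODEL_PATH is the identifier shown on the model's hub page
nexa run MODEL_PATH
```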
For examples and popular MODEL_PATH values, see Supported Popular Models below:
Download more official models from Nexa Model Hub
There are two ways to find the right code to place after "nexa":
Find the model in the Model Hub using search and filters, then click "run this model" to copy the command for running the model locally.
Follow the run a model section in the CLI Reference (a sketch of the full flow follows this list).
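A minimal sketch of the end-to-end flow, assuming the pull, list, and run subcommands from the CLI Reference and a placeholder model path:

```
# Download a model from the Nexa Model Hub (placeholder model path)
nexa pull llama3.2

# Confirm the model is available locally
nexa list

# Run the downloaded model locally
nexa run llama3.2
```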