ONNX Interface
NexaTextInference
A class used for loading text models and running text generation.
Methods
run(): Run the text generation loop.
run_streamlit(): Run the Streamlit UI.
Arguments
model_path (str): Path or identifier for the model in Nexa Model Hub.
local_path (str): Local path of the model. Either model_path or local_path should be provided.
temperature (float): Temperature for sampling.
min_new_tokens (int): Minimum number of new tokens to generate.
max_new_tokens (int): Maximum number of new tokens to generate.
top_k (int): Top-k sampling parameter.
top_p (float): Top-p sampling parameter.
profiling (bool): Enable timing measurements for the generation process.
streamlit (bool): Run the inference in Streamlit UI.
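The sampling parameters work together: temperature rescales the token logits before softmax (lower values sharpen the distribution), top_k keeps only the k most likely tokens, and top_p then keeps the smallest subset of those whose cumulative probability reaches p. A minimal, self-contained sketch of that filtering over toy logits — an illustration of the standard technique, not Nexa's internal implementation:

```python
import math

def filter_logits(logits, temperature=0.7, top_k=50, top_p=0.9):
    """Toy temperature / top-k / top-p filtering over raw logits.

    Returns the token indices that remain sampling candidates.
    """
    # Temperature: divide logits before softmax (lower -> sharper).
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-k: keep only the k highest-probability token indices.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k]
    # Top-p (nucleus): among those, keep the smallest prefix whose
    # cumulative probability reaches top_p.
    cumulative, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return nucleus

# With a peaked distribution, top_p=0.9 keeps only the top two tokens.
candidates = filter_logits([5.0, 4.0, 1.0, 0.5],
                           temperature=0.7, top_k=3, top_p=0.9)  # → [0, 1]
```

The actual token is then sampled from this reduced candidate set, which is why low top_p or top_k makes output more deterministic.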
Example Code
from nexa.onnx import NexaTextInference
model_path = "gemma"
inference = NexaTextInference(
model_path=model_path,
local_path=None,
temperature=0.7,
max_new_tokens=512,
top_k=50,
top_p=0.9,
profiling=True
)
# run() method
inference.run()
# run_streamlit() method
inference.run_streamlit(model_path)
NexaImageInference
A class used for loading image models and running image generation.
Methods
run(): Run the text-to-image generation loop.
run_streamlit(): Run the Streamlit UI.
generate_image(prompt, negative_prompt): Generate images based on the given prompt and negative prompt.
Arguments
model_path (str): Path or identifier for the model in Nexa Model Hub.
local_path (str, optional): Local path of the model.
output_path (str): Output path for the generated image. Example: "generated_images/image.png"
num_inference_steps (int): Number of inference steps.
num_images_per_prompt (int): Number of images to generate per prompt.
width (int): Width of the output image.
height (int): Height of the output image.
guidance_scale (float): Guidance scale for diffusion.
random_seed (int): Random seed for image generation.
streamlit (bool): Run the inference in Streamlit UI.
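random_seed fixes the initial diffusion noise, so repeated runs with the same seed and settings reproduce the same image, while different seeds give different images. This is the standard seeded-PRNG guarantee; a toy sketch of the property (a stand-in, not Nexa's internals):

```python
import random

def toy_noise(seed, n=4):
    """Stand-in for initial diffusion noise: a seeded PRNG yields
    the same sequence every time for the same seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

same = toy_noise(0) == toy_noise(0)       # identical seeds -> identical noise
different = toy_noise(0) != toy_noise(1)  # different seeds -> different noise
```

Keeping random_seed fixed while varying a prompt or guidance_scale is a common way to isolate the effect of one parameter.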
Example Code
from nexa.onnx import NexaImageInference
model_path = "lcm-dreamshaper"
inference = NexaImageInference(
model_path=model_path,
local_path=None,
num_inference_steps=4,
width=512,
height=512,
guidance_scale=1.0,
random_seed=0,
)
# run() method
inference.run()
# run_streamlit() method
inference.run_streamlit(model_path)
# generate_image(prompt, negative_prompt) method
inference.generate_image(prompt="a lovely cat", negative_prompt="no hair")
NexaTTSInference
A class used for loading text-to-speech models and running text-to-speech generation.
Methods
run(): Run the text-to-speech loop.
run_streamlit(): Run the Streamlit UI.
Arguments
model_path (str): Path or identifier for the model in Nexa Model Hub.
local_path (str): Local path of the model. Either model_path or local_path should be provided.
output_dir (str): Output directory for TTS generated audio.
sampling_rate (int): Sampling rate for audio processing.
streamlit (bool): Run the inference in Streamlit UI.
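sampling_rate determines how many audio samples represent one second of output, so a generated clip's duration is sample_count / sampling_rate. A small stdlib sketch that writes one second of silence at a given rate and reads the duration back — illustrative only; the real audio files come from NexaTTSInference:

```python
import os
import wave

def write_silence(path, sampling_rate=16000, seconds=1):
    """Write `seconds` of 16-bit mono silence at `sampling_rate`."""
    n_samples = sampling_rate * seconds
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(sampling_rate)
        wav.writeframes(b"\x00\x00" * n_samples)

def wav_duration(path):
    """Duration in seconds: frame count divided by frame rate."""
    with wave.open(path, "rb") as wav:
        return wav.getnframes() / wav.getframerate()

write_silence("silence.wav", sampling_rate=16000)
duration = wav_duration("silence.wav")  # 1.0
os.remove("silence.wav")
```

The same wave_duration helper can be pointed at files in output_dir to verify the generated audio.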
Example Code
from nexa.onnx import NexaTTSInference
model_path = "ljspeech"
inference = NexaTTSInference(
model_path=model_path,
local_path=None
)
# run() method
inference.run()
# run_streamlit() method
inference.run_streamlit()
NexaVoiceInference
A class used for loading voice models and running voice transcription.
Methods
run(): Run the voice transcription loop.
run_streamlit(): Run the Streamlit UI.
Arguments
model_path (str): Path or identifier for the model in Nexa Model Hub.
local_path (str): Local path of the model. Either model_path or local_path should be provided.
output_dir (str): Output directory for transcriptions.
sampling_rate (int): Sampling rate for audio processing.
streamlit (bool): Run the inference in Streamlit UI.
Example Code
from nexa.onnx import NexaVoiceInference
model_path = "whisper-tiny"
inference = NexaVoiceInference(
model_path=model_path,
local_path=None
)
# run() method
inference.run()
# run_streamlit() method
inference.run_streamlit()