ONNX
Inference with ONNX models
This page documents the command-line options for running inference with ONNX models across four tasks: text generation, image generation, audio transcription, and text-to-speech.
Text Generation

Command Usage
Options
-t, --temperature
: Temperature for sampling
-m, --max_new_tokens
: Maximum number of new tokens to generate
-k, --top_k
: Top-k sampling parameter
-p, --top_p
: Top-p sampling parameter
-sw, --stop_words
: List of stop words for early stopping
-pf, --profiling
: Enable profiling logs for the inference process
-st, --streamlit
: Run the inference in Streamlit UI
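Putting these options together, a run might look like the sketch below. The `<cli>` executable name and `<model_path>` are placeholders, since this page does not show the actual command:

```bash
# Hypothetical invocation: <cli> and <model_path> are placeholders, not real names.
# Sample with temperature 0.7, top-k/top-p filtering, and a 256-token cap,
# stop early on the literal stop word "###", and print profiling logs.
<cli> <model_path> -t 0.7 -m 256 -k 50 -p 0.9 -sw "###" -pf
```

Appending `-st` instead launches the run in the Streamlit UI described below.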
Streamlit Interface

Passing -st (or --streamlit) launches the same text-generation flow in a Streamlit web UI instead of the terminal.
Image Generation

Command Usage
Options
-ns, --num_inference_steps
: Number of inference steps
-np, --num_images_per_prompt
: Number of images to generate per prompt
-H, --height
: Height of the output image
-W, --width
: Width of the output image
-g, --guidance_scale
: Guidance scale for diffusion
-o, --output
: Output path for the generated image
-s, --random_seed
: Random seed for image generation
-st, --streamlit
: Run the inference in Streamlit UI
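As another sketch with placeholder names (the actual executable and prompt handling are not shown on this page), an image-generation call might combine the options like so:

```bash
# Hypothetical invocation: <cli>, <model_path>, and <prompt> are placeholders.
# Run 30 denoising steps at guidance scale 7.5, generating two 512x512 images
# per prompt with a fixed seed for reproducibility, saved to the output path.
<cli> <model_path> <prompt> -ns 30 -np 2 -H 512 -W 512 -g 7.5 -s 42 -o generated.png
```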
Streamlit Interface

The -st flag serves the image-generation workflow as a Streamlit app in the browser.
Audio Transcription

Command Usage
Options
-o, --output_dir
: Output directory for transcriptions
-r, --sampling_rate
: Sampling rate for audio processing
-st, --streamlit
: Run the inference in Streamlit UI
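As a sketch with placeholder names (the real command and how the input audio is supplied are not shown on this page), a transcription run might look like:

```bash
# Hypothetical invocation: <cli>, <model_path>, and <audio_file> are placeholders.
# Process the input at a 16 kHz sampling rate and write the transcription
# files into ./transcripts.
<cli> <model_path> <audio_file> -o ./transcripts -r 16000
```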
Streamlit Interface

With -st, transcription runs in a Streamlit UI rather than on the command line.
Text-to-Speech

Command Usage
Options
-o, --output_dir
: Output directory for the generated speech audio
-r, --sampling_rate
: Sampling rate for audio processing
-st, --streamlit
: Run the inference in Streamlit UI
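Finally, a text-to-speech sketch, again with placeholder names since the page does not show the actual command; the inline input text is likewise an assumption:

```bash
# Hypothetical invocation: <cli>, <model_path>, and the input text are placeholders.
# Synthesize the text to speech at a 16 kHz sampling rate and save the audio
# under ./tts_out.
<cli> <model_path> "Hello from an ONNX text-to-speech model." -o ./tts_out -r 16000
```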
Streamlit Interface

Adding -st opens the text-to-speech workflow in a Streamlit UI.