TensorRT Optimization AI Prompt
This prompt optimizes NVIDIA GPU inference with TensorRT through an ONNX-based pipeline, optional FP16 or INT8 precision, calibration, and engine serialization. It is meant for teams chasing the lowest possible latency on supported NVIDIA hardware, when inference needs to be faster than PyTorch or ONNX Runtime alone.
Optimize this model for NVIDIA GPU inference using TensorRT.

1. Conversion path: PyTorch → ONNX → TensorRT engine
   - Export to ONNX (opset 17, dynamic axes for batch)
   - Build the TensorRT engine using trtexec or the TensorRT Python API
2. Precision selection:
   - FP32: baseline, no accuracy loss
   - FP16: enable with builder_config.set_flag(trt.BuilderFlag.FP16) → typically 2× speedup, minimal accuracy loss
   - INT8: requires a calibration dataset for activation range statistics. Use IInt8EntropyCalibrator2. Up to 4× speedup; requires validation.
3. Engine build configuration:
   - Set optimization profiles for dynamic-shape engines: min, optimal, and max input shapes
   - Workspace size: 4 GB (larger lets TensorRT try more kernel alternatives)
   - Enable the timing cache for faster rebuilds
4. INT8 calibration:
   - Provide 100–500 representative calibration samples (not the validation set)
   - Run calibration and save the calibration table for reuse
   - Validate accuracy: if accuracy drops by more than 1%, use layer-wise precision overrides for sensitive layers
5. Layer-wise precision override:
   - Keep the first and last layers in FP32
   - Mark softmax and normalization layers as FP32
   - Use FP16 or INT8 for the bulk of the network
6. Performance measurement:
   - Use trtexec --percentile=99 for accurate p99 latency
   - Compare: PyTorch eager, TorchScript, ONNX Runtime, TensorRT FP16, TensorRT INT8
7. Engine serialization and loading:
   - Serialize the engine to disk (engines are GPU-specific, not portable)
   - Load at inference time and bind input/output buffers

Return: the full TensorRT conversion pipeline, INT8 calibration code, a precision comparison table, and an engine serving wrapper.
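The engine-build portion of the prompt (steps 1–3) can be sketched as a small helper that assembles a trtexec command line. This is a minimal illustration, not the full pipeline the prompt asks for: the function name, input tensor name `input`, and file paths are assumptions, while the flags (`--onnx`, `--saveEngine`, `--fp16`, `--int8`, `--minShapes`/`--optShapes`/`--maxShapes`, `--timingCacheFile`, `--memPoolSize`) are real trtexec options in recent TensorRT releases.

```python
def build_trtexec_cmd(onnx_path, engine_path, precision="fp16",
                      min_shape="1x3x224x224", opt_shape="8x3x224x224",
                      max_shape="32x3x224x224", workspace_mb=4096):
    """Assemble a trtexec command that builds a TensorRT engine
    with a dynamic-shape optimization profile and a timing cache.

    Assumes the ONNX model's input tensor is named 'input'; adjust
    the shape flags to match your model's actual input name.
    """
    cmd = [
        "trtexec",
        f"--onnx={onnx_path}",
        f"--saveEngine={engine_path}",
        # Larger workspace lets TensorRT evaluate more kernel tactics.
        f"--memPoolSize=workspace:{workspace_mb}M",
        # min / optimal / max shapes define the dynamic-batch profile.
        f"--minShapes=input:{min_shape}",
        f"--optShapes=input:{opt_shape}",
        f"--maxShapes=input:{max_shape}",
        # Timing cache speeds up subsequent rebuilds on the same GPU.
        "--timingCacheFile=timing.cache",
    ]
    if precision == "fp16":
        cmd.append("--fp16")
    elif precision == "int8":
        # INT8 builds read activation ranges from a calibration cache.
        cmd += ["--int8", "--calib=calibration.cache"]
    return " ".join(cmd)
```

The returned string can be run in a shell on a machine with TensorRT installed; the resulting engine file is specific to the GPU and TensorRT version it was built on.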
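For step 4, the data-feeding side of INT8 calibration can be sketched independently of the GPU. The generator below is a hypothetical helper: a real calibrator subclasses `tensorrt.IInt8EntropyCalibrator2` and copies each batch to device memory inside `get_batch()`, which requires the `tensorrt` package and an NVIDIA GPU; only the batching logic is shown here.

```python
import numpy as np

def calibration_batches(samples, batch_size=8):
    """Yield contiguous float32 batches from representative samples.

    TensorRT's entropy calibrator consumes fixed-size batches to
    collect activation-range statistics; samples should come from
    the training distribution, not the validation set.
    """
    samples = np.ascontiguousarray(samples, dtype=np.float32)
    for i in range(0, len(samples) - batch_size + 1, batch_size):
        yield samples[i:i + batch_size]
```

Between 100 and 500 samples, as the prompt suggests, is typically enough for stable activation statistics; the resulting calibration table should be saved and reused across rebuilds.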
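For step 6, trtexec reports p99 latency natively via `--percentile=99`, but the PyTorch eager, TorchScript, and ONNX Runtime baselines need their own measurement. A minimal, framework-agnostic sketch (the function name and defaults are assumptions):

```python
import time

def p99_latency_ms(fn, warmup=10, iters=200):
    """Measure p99 latency of a zero-argument callable in milliseconds.

    Warmup iterations are discarded so one-time costs (JIT, cudnn
    autotuning, memory allocation) do not skew the percentile.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    samples.sort()
    # Nearest-rank p99 over the sorted latencies.
    idx = min(len(samples) - 1, int(round(0.99 * (len(samples) - 1))))
    return samples[idx]
```

Note that for GPU backends the callable must synchronize (e.g. `torch.cuda.synchronize()` inside `fn`) or the timings will only measure kernel launch, not execution.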
When to use this prompt
when TensorRT FP16 or INT8 optimization is under consideration
when calibration and layer-wise precision control are needed
when you need a reusable serialized engine and serving wrapper
What the AI should return
A TensorRT conversion pipeline, calibration code for INT8 if needed, precision comparison results, and engine loading or serving code.
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in Model Compression.
Frequently asked questions
What does the TensorRT Optimization prompt do?
It gives you a structured model-compression starting point for ML engineer work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for ML engineer workflows and marked as advanced, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
TensorRT Optimization is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Compression Pipeline Chain, Knowledge Distillation, and ONNX Export and Validation.