when GPU utilization is low and input loading may be the bottleneck
DataLoader Optimization AI Prompt
This prompt diagnoses whether the DataLoader is the training bottleneck and then tunes worker count, prefetching, pinning, and data format choices to improve utilization. It is aimed at eliminating input pipeline stalls that starve the GPU.
Diagnose and optimize the DataLoader to eliminate I/O bottlenecks in this training pipeline.

1. Diagnose whether I/O is the bottleneck:
   - Run training with an all-random, in-memory dataset (no disk I/O): if GPU utilization rises significantly, the DataLoader is the bottleneck
   - Profile the DataLoader: measure time spent in __getitem__ versus the training step
2. Tune num_workers:
   - Rule of thumb: start with num_workers = number of CPU cores / 2
   - Benchmark num_workers = 0, 2, 4, 8, 16 and pick the value that maximizes GPU utilization
   - Note: too many workers increases memory usage and can cause shared-memory errors
3. Prefetching:
   - prefetch_factor=2 (the default): each worker prefetches 2 batches ahead
   - Increase to 4 if the GPU is fast relative to I/O
   - persistent_workers=True avoids worker restart overhead at each epoch
4. Optimize the data format:
   - Convert images to WebDataset (tar-based streaming) when reading many small files
   - Use Parquet + PyArrow for tabular data with columnar reads
   - Use memory-mapped files (np.memmap) for large arrays that do not fit in RAM
   - Store preprocessed tensors as .pt files to skip preprocessing in __getitem__
5. Memory pinning:
   - pin_memory=True: pinned (page-locked) memory enables faster CPU→GPU transfers
   - Use non_blocking=True in .to(device) calls
6. On-GPU preprocessing:
   - Move augmentation to the GPU using Kornia or torchvision transforms v2 on CUDA tensors
   - This reduces per-worker CPU load

Return: the bottleneck diagnosis procedure, the optimization implementations, and a benchmark comparing before vs. after.
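Step 1's diagnosis can be sketched as follows, assuming a standard PyTorch setup. The dataset size, image shape, and batch size below are illustrative placeholders — swap in your real dataset for the comparison run:

```python
import time
import torch
from torch.utils.data import Dataset, DataLoader

class RandomDataset(Dataset):
    """Synthetic dataset: tensors generated in memory, no disk I/O at all."""
    def __init__(self, n_samples=256, shape=(3, 224, 224)):
        self.n_samples = n_samples
        self.shape = shape

    def __len__(self):
        return self.n_samples

    def __getitem__(self, idx):
        return torch.randn(*self.shape), torch.randint(0, 10, (1,)).item()

def time_one_epoch(loader):
    """Walk the loader once and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    for images, labels in loader:
        pass  # replace with a real training step when comparing end to end
    return time.perf_counter() - start

loader = DataLoader(RandomDataset(), batch_size=32, num_workers=0)
synthetic_time = time_one_epoch(loader)
# Run the same loop over your real dataset: if the real loader is much
# slower (or GPU utilization jumps on the synthetic run), the input
# pipeline is the bottleneck.
```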
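The num_workers sweep in step 2 might look like the sketch below. The small in-memory TensorDataset stands in for your real dataset, and the worker counts are the ones suggested above:

```python
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; substitute your real Dataset to get meaningful numbers.
dataset = TensorDataset(torch.randn(512, 3, 32, 32), torch.randint(0, 10, (512,)))

def benchmark(num_workers, batch_size=64):
    """Return wall-clock seconds to iterate one full epoch."""
    loader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
    start = time.perf_counter()
    for _ in loader:
        pass
    return time.perf_counter() - start

if __name__ == "__main__":
    # Worker processes need the __main__ guard on spawn-based platforms
    # (Windows, macOS); pick the value with the best time / GPU utilization.
    for nw in (0, 2, 4, 8):
        print(f"num_workers={nw}: {benchmark(nw):.3f}s")
```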
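Steps 3 and 5 combine into a single tuned DataLoader configuration. A minimal sketch, with illustrative batch size and worker count (note that persistent_workers and prefetch_factor require num_workers > 0, and pin_memory only helps when a GPU is present):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
use_cuda = torch.cuda.is_available()

loader = DataLoader(
    dataset,
    batch_size=64,
    num_workers=2,            # tune via benchmarking (see step 2)
    pin_memory=use_cuda,      # page-locked host memory speeds CPU->GPU copies
    persistent_workers=True,  # keep workers alive across epochs
    prefetch_factor=2,        # batches prefetched per worker
)

device = torch.device("cuda" if use_cuda else "cpu")
for images, labels in loader:
    # non_blocking=True lets the copy overlap with compute when pinned.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    break  # one batch is enough to demonstrate the transfer
```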
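For the np.memmap option in step 4, the idea is to write the array to disk once and then map it read-only, so that __getitem__ only touches the rows it indexes. A small sketch with an arbitrary file name and shape:

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.gettempdir(), "features.dat")

# One-time preprocessing: materialize the array on disk.
arr = np.memmap(path, dtype="float32", mode="w+", shape=(10000, 128))
arr[:] = np.random.rand(10000, 128).astype("float32")
arr.flush()
del arr  # close the writable map

# Inside the Dataset: open read-only; indexing reads only the touched rows,
# so the full array never has to fit in RAM.
features = np.memmap(path, dtype="float32", mode="r", shape=(10000, 128))
row = features[42]  # a single 128-float row read from disk
```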
When to use this prompt
when tuning num_workers, prefetch_factor, or persistent_workers
when storage format changes could improve throughput
when you need before-versus-after benchmarking of the data pipeline
What the AI should return
A DataLoader diagnosis procedure, concrete optimization implementations, and benchmark results showing the impact of the improvements.
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in Optimization.
Frequently asked questions
What does the DataLoader Optimization prompt do?
It gives you a structured optimization starting point for ML engineer work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for ML engineer workflows and marked as intermediate, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
DataLoader Optimization is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Flash Attention Integration, Full Optimization Chain, and GPU Profiling.