Welcome to my Jetson Lab! ⚡️
Here you’ll find a curated series of tutorials and experiments on the Jetson Orin Nano Super, covering everything from first-time setup and system profiling to benchmarking, running models, and refining high-performance C++ inference pipelines.
Each article is written to stand alone as a practical guide, yet together they build a cohesive roadmap for anyone exploring edge AI on Jetson hardware. The focus is always on real, measurable results: storage performance, system analysis, and end-to-end optimization for efficient deployment.

Roadmap
➤ Part 0: Overview & goals (what I’m building, success criteria, bill of materials)
➤ Part 1: Flash + JetPack & dev env (CMake, compilers, basic CUDA/TensorRT install)
➤ Part 2: Storage choice & setup (SSD vs SD, quick I/O numbers, with a link to my SSD setup article)
➤ Part 3: Baseline perf probes (tegrastats, Nsight Systems, simple C++ microbench)
➤ Part 4: First model running (tiny classifier/detector) with timing + memory
➤ Part 5: Optimization passes (FP16/INT8, batch size, pinned memory, threads)
➤ Part 6: Packaging a small C++ inference app + CLI
➤ Part 7: Results & lessons (tables, graphs, “what I’d do next”)