Recently, super-resolution (SR) techniques have been proposed to enhance the quality of images generated by neural radiance fields (NeRF) and to improve their inference speed. However, existing NeRF+SR methods introduce additional input features, loss functions, and/or complex training procedures such as knowledge distillation, which increase the training overhead. In this study, we aim to leverage SR for efficiency gains without expensive training or architectural changes. We build a simple NeRF+SR pipeline that directly combines existing modules, and we introduce a lightweight augmentation technique, random patch sampling, for training. Compared to existing NeRF+SR methods, our pipeline reduces the computational overhead of SR and trains up to 23× faster, allowing it to run on consumer devices such as the Apple MacBook. Experimental results demonstrate that our pipeline can upscale NeRF outputs by 2-4× while maintaining high quality, improving inference speed by up to 18× on an NVIDIA V100 GPU and 12.8× on an M1 Pro chip. In conclusion, SR can be a straightforward yet effective technique for enhancing the efficiency of NeRF models on consumer devices.
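To make the random patch sampling idea concrete, the sketch below shows one plausible way such a training step could be written in PyTorch: render only the rays of a randomly chosen low-resolution patch with the NeRF, upscale that patch with the SR module, and supervise against the aligned high-resolution crop. This is a minimal illustration under assumed interfaces; the `nerf(origins, directions)` renderer, the `sr_net` module, and the precomputed `rays_lr_grid` tensor are hypothetical names, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def train_step(nerf, sr_net, optimizer, image_hr, rays_lr_grid,
               patch_hr=64, scale=4):
    """One NeRF+SR training step with random patch sampling (sketch).

    Assumed (hypothetical) interfaces:
      nerf(origins, directions) -> (N, 3) RGB per ray
      sr_net: (1, 3, h, w) -> (1, 3, scale*h, scale*w)
      image_hr:     (H, W, 3) ground-truth high-resolution image
      rays_lr_grid: (H // scale, W // scale, 2, 3) precomputed ray
                    origins/directions on the low-resolution pixel grid
    """
    H, W, _ = image_hr.shape
    p_lr = patch_hr // scale  # low-resolution patch side length

    # Sample a random top-left corner on the LR grid; the matching HR
    # crop starts at scale * (y, x), so the two patches stay aligned.
    y = torch.randint(0, H // scale - p_lr + 1, (1,)).item()
    x = torch.randint(0, W // scale - p_lr + 1, (1,)).item()

    rays = rays_lr_grid[y:y + p_lr, x:x + p_lr].reshape(-1, 2, 3)
    gt = image_hr[scale * y:scale * y + patch_hr,
                  scale * x:scale * x + patch_hr]

    # Render only the patch's rays instead of the full image, then
    # upscale the rendered LR patch with the SR module.
    rgb_lr = nerf(rays[:, 0], rays[:, 1]).reshape(p_lr, p_lr, 3)
    rgb_hr = sr_net(rgb_lr.permute(2, 0, 1).unsqueeze(0))

    loss = F.mse_loss(rgb_hr.squeeze(0).permute(1, 2, 0), gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because each step touches only a small window of rays, this style of patch sampling keeps per-step cost low while still giving the SR module spatially contiguous inputs, which is consistent with the training-speed gains the abstract reports.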