Recently, super-resolution (SR) techniques have been proposed to upscale neural radiance field (NeRF) outputs and generate high-quality images with improved inference speeds. However, existing NeRF+SR methods increase the training overhead by using additional input features, loss functions, and/or expensive training procedures such as knowledge distillation. In this paper, we aim to leverage SR to achieve efficiency gains without costly training or architectural changes. Specifically, we build a simple NeRF+SR pipeline that directly combines existing modules and propose a lightweight augmentation technique, random patch sampling, for training. Compared to existing NeRF+SR methods, our pipeline mitigates the computational overhead of SR and can be trained up to 23x faster, making it possible to run on consumer devices such as the Apple MacBook. Experiments show that our pipeline can upscale NeRF outputs by 2-4x while maintaining high quality, increasing inference speeds by up to 18x on an NVIDIA V100 GPU and 12.8x on an M1 Pro chip. We conclude that SR can be a simple but effective technique to improve the efficiency of NeRF models on consumer devices.
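The random patch sampling mentioned above can be sketched as cropping aligned patch pairs from a low-resolution NeRF render and its high-resolution ground truth for SR training. This is a minimal illustrative sketch in NumPy; the function name, patch size, and scale factor are assumptions, not the paper's exact implementation.

```python
import numpy as np

def sample_random_patch(lr_img, hr_img, lr_patch=32, scale=4, rng=None):
    """Crop an aligned random patch pair from a low-res render (lr_img)
    and its high-res ground truth (hr_img). Hypothetical helper: the
    paper's exact sampling scheme is not reproduced here."""
    rng = rng or np.random.default_rng()
    h, w = lr_img.shape[:2]
    # Pick a random top-left corner in low-res coordinates.
    y = int(rng.integers(0, h - lr_patch + 1))
    x = int(rng.integers(0, w - lr_patch + 1))
    lr_crop = lr_img[y:y + lr_patch, x:x + lr_patch]
    # Map the same region to high-res coordinates via the SR scale.
    hr_crop = hr_img[y * scale:(y + lr_patch) * scale,
                     x * scale:(x + lr_patch) * scale]
    return lr_crop, hr_crop
```

Training the SR module on such patch pairs rather than full frames is what keeps the augmentation lightweight: each step touches only a small crop of the rendered image.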