Generative adversarial networks (GANs), trained on a large-scale image dataset, can be a good approximator of the natural image manifold. GAN inversion, which uses a pretrained generator as a deep generative prior, is a promising tool for image restoration under corruptions. However, GAN inversion's performance may be limited by its lack of robustness to unknown large corruptions, meaning the restored image could easily deviate from the ground truth. In this article, we propose a Robust GAN Inversion (RGI) method with a provable robustness guarantee to achieve image restoration under unknown \textit{gross} corruptions, where a small fraction of pixels are completely corrupted. Under mild assumptions, we show that the restored image and the identified corrupted-region mask converge asymptotically to the ground truth. Furthermore, we extend RGI to Relaxed-RGI (R-RGI) for generator fine-tuning, which mitigates the gap between the learned GAN manifold and the true image manifold while avoiding trivial overfitting to the corrupted input image, further improving both image restoration and corrupted-region mask identification. The proposed RGI/R-RGI method unifies two important applications with state-of-the-art (SOTA) performance: (i) mask-free semantic inpainting, where the corruptions are unknown missing regions and the restored background can be used to restore the missing content; (ii) unsupervised pixel-wise anomaly detection, where the corruptions are unknown anomalous regions and the retrieved mask can be used as the anomalous-region segmentation mask.
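For illustration, the joint restoration-and-mask-identification idea described above can be sketched as an optimization over the generator's latent code and a corruption mask. This is a plausible formulation consistent with the abstract, not necessarily the paper's exact objective; the symbols $x_c$ (corrupted image), $G$ (pretrained generator), $z$ (latent code), $M$ (mask), and the trade-off weight $\lambda$ are assumed notation:
\begin{equation*}
\min_{z,\, M}\; \big\| (1 - M) \odot \big( x_c - G(z) \big) \big\|_1 \;+\; \lambda \, \| M \|_1,
\qquad M \in [0,1]^{H \times W},
\end{equation*}
where $\odot$ denotes element-wise multiplication. Intuitively, the first term fits $G(z)$ to the uncorrupted pixels, while the $\ell_1$ penalty on $M$ encourages a sparse mask, matching the assumption that only a small fraction of pixels are grossly corrupted.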