*Work done during an internship at Apple
Audiovisual speech contains synchronized visual and audio information that provides cross-modal supervision for learning representations for both automatic speech recognition (ASR) and visual speech recognition (VSR). We present Continuous Pseudolabeling for Audiovisual Speech Recognition (AV-CPL), a semi-supervised method for training an audiovisual speech recognition (AVSR) model on a mix of labeled and unlabeled videos with continuously regenerated pseudolabels. Our models are trained for speech recognition from audiovisual inputs and can perform recognition using both modalities or either one alone. Our method uses the same audiovisual model for both supervised training and pseudolabel generation, obviating the need for external speech recognition models to produce pseudolabels. AV-CPL achieves significant improvements in VSR performance on the LRS3 dataset while maintaining practical ASR and AVSR performance. Finally, our method can leverage unlabeled visual-only speech data to improve VSR.
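To make the training recipe concrete, the sketch below shows one possible continuous-pseudolabeling loop for a CTC-based audiovisual model in PyTorch. It is a minimal illustration under assumptions not taken from the paper: the model (`ToyAVModel`), greedy CTC decoding, the hypothetical iterators `labeled_loader` / `unlabeled_loader`, and the simple empty-hypothesis filter are all placeholders, and the actual method's architecture, modality dropout, decoding, and filtering differ. The key property it does reproduce is that pseudolabels are regenerated at every step by the current model itself, with no frozen teacher or external ASR system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

BLANK = 0  # CTC blank token index (assumption)

class ToyAVModel(nn.Module):
    """Stand-in audiovisual encoder: fuses audio and video features
    and emits per-frame CTC log-probabilities. Illustrative only."""
    def __init__(self, audio_dim=80, video_dim=512, vocab=32, hidden=256):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.video_proj = nn.Linear(video_dim, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, audio, video):
        # Sum modality projections; passing None for a modality
        # yields audio-only or video-only recognition.
        h = 0
        if audio is not None:
            h = h + self.audio_proj(audio)
        if video is not None:
            h = h + self.video_proj(video)
        h, _ = self.encoder(h)
        return self.head(h).log_softmax(-1)  # (B, T, V)

def greedy_ctc_labels(log_probs):
    """Greedy CTC decoding: collapse repeated tokens, drop blanks."""
    out = []
    for seq in log_probs.argmax(-1):  # one (T,) sequence per utterance
        prev, ids = BLANK, []
        for t in seq.tolist():
            if t != prev and t != BLANK:
                ids.append(t)
            prev = t
        out.append(torch.tensor(ids, dtype=torch.long))
    return out

def ctc_step(model, optimizer, audio, video, targets):
    """One CTC training step on a batch with (pseudo)labels."""
    log_probs = model(audio, video).transpose(0, 1)  # (T, B, V)
    in_lens = torch.full((log_probs.size(1),), log_probs.size(0), dtype=torch.long)
    tgt_lens = torch.tensor([len(t) for t in targets], dtype=torch.long)
    loss = F.ctc_loss(log_probs, torch.cat(targets), in_lens, tgt_lens,
                      blank=BLANK, zero_infinity=True)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def train_av_cpl(model, optimizer, labeled_loader, unlabeled_loader, steps):
    """Alternate supervised steps with steps on freshly regenerated
    pseudolabels; the same model plays both student and labeler."""
    for _ in range(steps):
        # 1) Supervised step on a labeled audiovisual batch.
        audio, video, targets = next(labeled_loader)
        ctc_step(model, optimizer, audio, video, targets)

        # 2) Regenerate pseudolabels with the *current* model, then
        #    train on them (continuous pseudolabeling).
        audio_u, video_u = next(unlabeled_loader)
        model.eval()
        with torch.no_grad():
            pseudo = greedy_ctc_labels(model(audio_u, video_u))
        model.train()
        keep = [i for i, p in enumerate(pseudo) if len(p) > 0]  # drop empty hypotheses
        if keep:
            ctc_step(model, optimizer, audio_u[keep], video_u[keep],
                     [pseudo[i] for i in keep])
```

Because the labeler is the model being trained, pseudolabel quality improves as training progresses, which is the motivation for regenerating labels continuously rather than fixing them once at the start.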