*Equal contribution
Parameter-efficient fine-tuning (PEFT) for customizing automatic speech recognition (ASR) has recently shown promise for adapting general-population models to atypical speech. However, these approaches assume a priori knowledge of the atypical speech disorder being adapted to, the diagnosis of which requires expert knowledge that is not always available. Even given this knowledge, data scarcity and high inter- and intra-speaker variability further limit the effectiveness of traditional fine-tuning. To circumvent these challenges, we first identify the minimal set of model parameters required for ASR adaptation. Our analysis of the effect of each individual parameter on adaptation performance allows us to reduce the word error rate (WER) by half while adapting only 0.03% of all weights. To alleviate the need for cohort-specific models, we next propose the novel use of a meta-learned hypernetwork to generate highly individualized, utterance-level adaptations on the fly for a diverse set of atypical speech characteristics. Evaluating adaptation at the global, cohort, and individual levels, we show that hypernetworks generalize better to out-of-distribution speakers, while maintaining an overall relative WER reduction of 75.2% using only 0.1% of the full parameter budget.
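To make the hypernetwork idea concrete, below is a minimal, illustrative PyTorch sketch, not the paper's implementation: a small hypernetwork maps a per-utterance embedding to low-rank adapter factors that correct a frozen linear layer of an ASR model, so the adaptation is generated on the fly rather than stored per cohort. The module name HyperAdapterLinear, the embedding size, the hidden width, and the rank are hypothetical placeholders chosen only for the example.

import torch
import torch.nn as nn

class HyperAdapterLinear(nn.Module):
    # Frozen base linear layer whose output is corrected by a low-rank update;
    # the low-rank factors are generated per utterance by a small hypernetwork.
    def __init__(self, d_in, d_out, d_embed, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():  # base ASR weights stay frozen
            p.requires_grad = False
        self.rank, self.d_in, self.d_out = rank, d_in, d_out
        # Hypernetwork: utterance embedding -> flattened low-rank factors (A, B)
        self.hyper = nn.Sequential(
            nn.Linear(d_embed, 128),
            nn.ReLU(),
            nn.Linear(128, rank * (d_in + d_out)),
        )

    def forward(self, x, utt_embed):
        # x: (batch, time, d_in); utt_embed: (batch, d_embed)
        theta = self.hyper(utt_embed)  # (batch, rank * (d_in + d_out))
        A = theta[:, : self.rank * self.d_in].view(-1, self.rank, self.d_in)
        B = theta[:, self.rank * self.d_in :].view(-1, self.d_out, self.rank)
        # Low-rank correction B @ A @ x, applied per utterance in the batch
        delta = torch.einsum("btd,brd,bor->bto", x, A, B)
        return self.base(x) + delta

# Toy usage: two utterances of 50 frames each, with per-utterance embeddings
# standing in for speaker/atypicality features extracted upstream.
layer = HyperAdapterLinear(d_in=256, d_out=256, d_embed=32, rank=4)
x = torch.randn(2, 50, 256)
utt = torch.randn(2, 32)
y = layer(x, utt)  # (2, 50, 256)

Only the hypernetwork's parameters are trainable here, which is how such a design can keep the adapted fraction of weights small while still producing a distinct adaptation for every utterance.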