Self-supervised features are typically used in place of filterbank features in speaker verification models. However, these models were originally designed to ingest filterbanks as inputs, so training them on self-supervised features implicitly assumes that both feature types require the same amount of learning for the task. In this work, we observe that pre-trained self-supervised speech features already encode the information needed for the downstream speaker verification task, which allows us to simplify the downstream model without sacrificing performance. To this end, we revisit the downstream model design for speaker verification with self-supervised features. We show that the model can be simplified to use 97.51% fewer parameters while still achieving an average 29.93% performance improvement on SUPERB. Moreover, the simplified downstream model is more data-efficient than the baseline: it achieves better performance with only 60% of the training data.
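For illustration, the following is a minimal sketch of what such a simplified downstream model might look like: a learnable weighted sum over the layers of a frozen self-supervised encoder, temporal mean pooling, and a single linear projection to a speaker embedding. All layer counts, dimensions, and names here are illustrative assumptions, not the exact architecture evaluated in the paper.

```python
# Hypothetical sketch of a lightweight downstream head for speaker verification
# on top of frozen self-supervised (SSL) features. Sizes below assume a
# wav2vec 2.0 Base-style encoder (13 hidden states of dimension 768); these
# are illustrative assumptions only.
import torch
import torch.nn as nn


class LightweightVerificationHead(nn.Module):
    """Weighted layer fusion + mean pooling + linear speaker embedding."""

    def __init__(self, num_layers: int = 13, feat_dim: int = 768, emb_dim: int = 192):
        super().__init__()
        # One scalar weight per SSL layer, normalized with a softmax.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        # The only sizeable parameter block: a single linear projection.
        self.proj = nn.Linear(feat_dim, emb_dim)

    def forward(self, ssl_hidden_states: torch.Tensor) -> torch.Tensor:
        # ssl_hidden_states: (num_layers, batch, time, feat_dim),
        # produced by a frozen SSL encoder (not shown here).
        w = torch.softmax(self.layer_weights, dim=0)
        fused = torch.einsum("l,lbtf->btf", w, ssl_hidden_states)
        pooled = fused.mean(dim=1)               # temporal mean pooling -> (batch, feat_dim)
        emb = self.proj(pooled)                  # speaker embedding -> (batch, emb_dim)
        return nn.functional.normalize(emb, dim=-1)


if __name__ == "__main__":
    # Score a verification trial pair by cosine similarity of embeddings.
    head = LightweightVerificationHead()
    feats_a = torch.randn(13, 1, 200, 768)       # stand-in for frozen SSL outputs
    feats_b = torch.randn(13, 1, 180, 768)       # different lengths are fine; pooling removes time
    score = (head(feats_a) * head(feats_b)).sum(-1)
    print(score.item())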
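```

Under these assumed dimensions, the head has roughly 0.15M trainable parameters (a 768-by-192 projection plus biases and 13 layer weights), orders of magnitude fewer than the x-vector-style downstream models commonly used for speaker verification in SUPERB, which is the kind of reduction the parameter figure above refers to.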