Active learning has been extensively studied as a means of efficient data collection. Among the many approaches in the literature, Expected Error Reduction (EER; Roy & McCallum, 2001) has been shown to be an effective method for active learning: select the candidate sample whose acquisition, in expectation, minimizes the error on the unlabeled set. However, EER requires the model to be retrained for every candidate sample, and this large computational cost has prevented its widespread use with modern deep neural networks. In this article, we reformulate EER under the lens of Bayesian active learning and derive a computationally efficient version that can be combined with any Bayesian parameter sampling method (such as Monte Carlo dropout; Gal & Ghahramani, 2016). We then compare the empirical performance of our method, using Monte Carlo dropout for parameter sampling, against state-of-the-art methods from the deep active learning literature. Experiments are performed on four standard benchmark datasets and three WILDS datasets (Koh et al., 2021). The results indicate that our method outperforms all other methods except one under distribution shift: a method that is not information-theoretic but model-dependent and requires an order of magnitude higher computational cost (Ash et al., 2019).
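To make the acquisition criterion concrete, below is a minimal, hypothetical sketch of how an EER-style score could be approximated with Monte Carlo dropout samples, under two assumptions not stated above: the parameter posterior is represented by T stochastic forward passes, and the "retrained" posterior after hypothetically labeling a candidate is approximated by reweighting those passes with the likelihood of the hypothesized label. Function names (`eer_scores`, `predictive_entropy`) and the reweighting shortcut are illustrative choices, not the authors' exact derivation.

```python
# Illustrative sketch only: an EER-style acquisition score approximated
# with Monte Carlo dropout samples, avoiding per-candidate retraining by
# reweighting posterior (dropout) samples with the hypothesized label's
# likelihood. This is an assumption-laden stand-in, not the paper's method.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Mean entropy of the marginal predictive distribution.

    probs: array of shape (N, C) with class probabilities per pool point.
    """
    eps = 1e-12
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

def eer_scores(p_pool: np.ndarray, p_cand: np.ndarray) -> np.ndarray:
    """Expected reduction in pool uncertainty for each candidate.

    p_pool: (T, N, C) class probabilities on the unlabeled pool,
            one slice per dropout sample.
    p_cand: (T, M, C) class probabilities for the M candidate points.
    Returns an (M,) array of scores; higher is better.
    """
    T, N, C = p_pool.shape
    M = p_cand.shape[1]
    base = predictive_entropy(p_pool.mean(axis=0))  # current pool uncertainty

    scores = np.zeros(M)
    for m in range(M):
        expected_after = 0.0
        # Marginal predictive distribution of the candidate's label.
        p_y = p_cand[:, m, :].mean(axis=0)  # shape (C,)
        for y in range(C):
            # Reweight dropout samples by the likelihood of label y,
            # a cheap surrogate for retraining on (x_m, y).
            w = p_cand[:, m, y]
            w = w / (w.sum() + 1e-12)
            p_pool_y = np.einsum("t,tnc->nc", w, p_pool)
            expected_after += p_y[y] * predictive_entropy(p_pool_y)
        scores[m] = base - expected_after  # expected uncertainty reduction
    return scores

# Usage: acquire the candidate with the largest expected reduction.
# best = int(np.argmax(eer_scores(p_pool, p_cand)))
```

Because the T forward passes over the pool are computed once and merely reweighted per candidate, the cost scales with the number of candidates times the number of classes rather than with repeated retraining, which is the kind of saving the Bayesian reformulation is meant to deliver.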