State-of-the-art automatic augmentation methods (for example, AutoAugment and RandAugment) for visual recognition tasks diversify training data using a large set of augmentation operations. The magnitudes of many augmentation operations (for example, brightness and contrast) vary over a continuous range. Therefore, to make the search computationally tractable, these methods use fixed, manually defined magnitude ranges for each operation, which can lead to suboptimal policies. To address the open question of how important the magnitude range of each augmentation operation is, we introduce RangeAugment, which efficiently learns the range of magnitudes for individual as well as composite augmentation operations. RangeAugment uses an auxiliary loss based on image similarity to control the magnitude ranges of the augmentation operations. As a result, RangeAugment has a single scalar search parameter, the target image similarity, which we optimize via linear search. RangeAugment integrates seamlessly with any model and learns model- and task-specific augmentation policies. With extensive experiments on the ImageNet dataset across different networks, we show that RangeAugment matches the performance of state-of-the-art automatic augmentation methods while using 4 to 5 times fewer augmentation operations. Experimental results on semantic segmentation, object detection, foundation models, and knowledge distillation further demonstrate RangeAugment's effectiveness.
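To make the idea concrete, the sketch below illustrates the auxiliary objective: an augmented image's similarity to the original (measured here with PSNR, a common choice of image-similarity metric) is pushed toward a single scalar target, which is the one parameter found by linear search. The magnitude range, brightness operation, and target value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    # Peak signal-to-noise ratio between the original and augmented image;
    # higher PSNR means the augmented image is closer to the original.
    mse = np.mean((x - y) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def range_augment_loss(orig, aug, target_similarity):
    # Auxiliary loss: penalize deviation of the measured similarity from a
    # single scalar target. Optimizing this loss expands or contracts the
    # magnitude range; `target_similarity` is the lone search parameter.
    return abs(psnr(orig, aug) - target_similarity)

# Hypothetical usage with a brightness operation whose magnitude range
# (lo, hi) would be the quantity being learned.
rng = np.random.default_rng(0)
orig = rng.random((8, 8, 3))
lo, hi = 0.7, 1.3                   # current magnitude range (illustrative)
m = rng.uniform(lo, hi)             # sample a magnitude uniformly from the range
aug = np.clip(orig * m, 0.0, 1.0)   # apply brightness adjustment
loss = range_augment_loss(orig, aug, target_similarity=30.0)
```

In this sketch, a higher similarity target keeps augmented images close to the originals (narrow magnitude ranges), while a lower target permits stronger distortions (wide ranges); sweeping that one scalar replaces searching over per-operation range endpoints.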