This study explores the use of embedding rank as an unsupervised evaluation metric for general-purpose speech encoders trained through self-supervised learning (SSL). Traditionally, evaluating the performance of these encoders is resource-intensive and requires labeled data from downstream tasks. Inspired by the vision domain, where embedding rank has shown promise for evaluating image encoders without fitting on labeled downstream data, this work examines its applicability in the speech domain, taking into account the temporal nature of the signals. The results indicate that rank correlates with downstream performance across the encoder layers in various downstream tasks, in both in-domain and out-of-domain scenarios. However, rank does not reliably predict the best-performing layer for a specific downstream task, since lower-rank layers can surpass the highest-rank ones in classification performance. Despite this limitation, the results suggest that embedding rank can be a valuable tool for monitoring training progress in SSL speech models, offering a less resource-demanding alternative to traditional evaluation methods.
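The abstract does not specify how the rank of the embeddings is computed. A common label-free choice in the vision literature is a RankMe-style effective rank: the exponential of the entropy of the normalized singular values of an embedding matrix. The sketch below illustrates that idea under that assumption; the function name, epsilon value, and pooling of the temporal dimension into one vector per utterance are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def effective_rank(embeddings: np.ndarray, eps: float = 1e-7) -> float:
    """Effective rank of an (n_samples, dim) embedding matrix.

    Computed as exp(entropy) of the singular-value distribution:
    a perfectly isotropic matrix yields ~min(n_samples, dim),
    a collapsed (rank-1) matrix yields ~1.
    """
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / (s.sum() + eps) + eps  # normalized singular values
    return float(np.exp(-(p * np.log(p)).sum()))

# For speech, one utterance gives a (time, dim) feature matrix; mean-pooling
# over time (an assumption here) yields one row per utterance.
def utterance_matrix(features: list[np.ndarray]) -> np.ndarray:
    return np.stack([f.mean(axis=0) for f in features])
```

Because no labels enter the computation, such a score can be tracked across layers or training checkpoints at negligible cost, which is the monitoring use case the abstract advocates.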