We present EELBERT, an approach for compressing transformer-based models (e.g., BERT) with minimal impact on the accuracy of downstream tasks. This is achieved by replacing the input embedding layer of the model with dynamic, i.e., on-the-fly, embedding computations. Since the input embedding layer accounts for a significant fraction of the model size, especially for the smaller BERT variants, replacing this layer with an embedding computation function significantly reduces the model size. Empirical evaluation on the GLUE benchmark shows that our BERT variants (EELBERT) suffer minimal regression compared to the traditional BERT models. Through this approach, we are able to develop our smallest model, UNO-EELBERT, which achieves a GLUE score within 4% of fully trained BERT-tiny while being 15 times smaller in size (1.2 MB).
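To make the core idea concrete, the sketch below illustrates one way an input embedding lookup table could be replaced by an on-the-fly computation: each token's embedding is derived from deterministic, hash-seeded vectors for its character n-grams, so no vocabulary-sized embedding matrix needs to be stored. This is only a minimal illustrative sketch of the general technique; the function names (`dynamic_token_embedding`, `embed_sequence`) and the specific hashing and pooling choices are assumptions for illustration, not the paper's exact method.

```python
import torch

def dynamic_token_embedding(token: str, hidden_size: int = 128,
                            ngram_sizes=(1, 2, 3)) -> torch.Tensor:
    """Derive an embedding from a token's character n-grams instead of
    looking it up in a stored |vocab| x hidden embedding matrix."""
    vectors = []
    for n in ngram_sizes:
        for i in range(len(token) - n + 1):
            # Each n-gram deterministically seeds a pseudo-random vector,
            # so no embedding parameters are stored in the model.
            seed = hash(token[i:i + n]) % (2 ** 31)
            gen = torch.Generator().manual_seed(seed)
            vectors.append(torch.randn(hidden_size, generator=gen))
    # Pool the n-gram vectors into a single token embedding.
    return torch.stack(vectors).mean(dim=0)

def embed_sequence(tokens, hidden_size: int = 128) -> torch.Tensor:
    # (seq_len,) token strings -> (seq_len, hidden_size) embeddings.
    return torch.stack([dynamic_token_embedding(t, hidden_size) for t in tokens])

# Example: a 4-token sequence embedded with no stored vocabulary table.
print(embed_sequence(["the", "quick", "brown", "fox"]).shape)  # torch.Size([4, 128])
```

The resulting embeddings would feed into the transformer encoder exactly where the lookup-table output normally would, which is why the rest of the model architecture can remain unchanged.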