We present a positional description scheme (PDS) for digit sequences that integrates placeholder value information for each digit. Owing to the structural limitations of subword tokenization algorithms, language models face critical text normalization (TN) challenges when handling numerical tasks. PDS addresses this challenge through straightforward preprocessing that leaves the model architecture unchanged while making number normalization tractable, enabling more compact, production-ready models that can learn from smaller datasets. Our investigations further show that PDS improves the arithmetic processing capabilities of language models, raising accuracy from 23% to 51% on complex arithmetic tasks. We demonstrate that PDS effectively mitigates fatal numerical normalization errors in neural models while requiring only a modest amount of training data and no rule-based finite state transducers (FSTs). Finally, we show that PDS is essential for both text-to-speech processing and speech recognition, enabling effective TN under throughput constraints.
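To make the preprocessing idea concrete, the following is a minimal sketch, assuming PDS annotates each digit with an explicit tag for its decimal place before the text reaches the subword tokenizer; the function name and tag format here are hypothetical, not the paper's exact scheme.

```python
def pds_annotate(number: str) -> str:
    """Rewrite a digit sequence so each digit carries an explicit place tag."""
    digits = number.strip()
    n = len(digits)
    annotated = []
    for i, d in enumerate(digits):
        place = n - i - 1  # 0 = units, 1 = tens, 2 = hundreds, ...
        annotated.append(f"{d}<P{place}>")
    return " ".join(annotated)

print(pds_annotate("1234"))
# -> "1<P3> 2<P2> 3<P1> 4<P0>"
# i.e. 1 in the thousands place, 2 in the hundreds place, 3 in the tens place, 4 in the units place
```

Because each digit's place value is now explicit in the input string, the model no longer has to infer magnitude from arbitrary subword splits of the raw digit sequence.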