Despite the successes of large language models (LLMs), they exhibit significant drawbacks, particularly when processing long contexts. Their inference cost scales quadratically with respect to sequence length, making them expensive to deploy in some real-world text processing applications, such as Retrieval-Augmented Generation (RAG). Furthermore, LLMs exhibit the “distraction phenomenon,” where irrelevant context in the prompt degrades output quality. To address these drawbacks, we propose a novel RAG prompting methodology, Superposition Prompting, which can be directly applied to pre-trained transformer-based LLMs without the need for fine-tuning. At a high level, superposition prompting allows the LLM to process input documents in parallel prompt paths, discarding paths once they are deemed irrelevant. We demonstrate our method’s ability to simultaneously improve time efficiency on a variety of question-answering benchmarks using multiple pre-trained LLMs. Furthermore, our technique significantly improves accuracy when the retrieved context is large relative to the context the model was trained on. For example, our approach facilitates a 93x reduction in compute time while improving accuracy by 43% on the NaturalQuestions-Open dataset with the MPT-7B instruction-tuned model, compared to naive RAG.
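
To make the high-level idea concrete, the following is a minimal Python sketch, not the paper’s implementation: each retrieved document is placed on its own prompt path, paths are scored independently (conceptually in parallel), and low-relevance paths are discarded before the final prompt is assembled. The `PromptPath` class, the token-overlap `relevance` function, and the pruning threshold are illustrative assumptions, not details from the method.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PromptPath:
    """One independent path holding a single retrieved document."""
    document: str
    score: float = 0.0


def relevance(query: str, document: str) -> float:
    # Hypothetical stand-in scorer: fraction of query tokens present in the document.
    # A real system would derive relevance from the model itself.
    q_tokens = set(query.lower().split())
    d_tokens = set(document.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)


def build_prompt(query: str, documents: List[str], threshold: float = 0.3) -> str:
    # Score each path independently, so pruning one path never affects the others,
    # then keep only paths deemed relevant enough to contribute context.
    paths = [PromptPath(doc, relevance(query, doc)) for doc in documents]
    kept = [p for p in paths if p.score >= threshold]
    context = "\n\n".join(p.document for p in kept)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


if __name__ == "__main__":
    docs = [
        "The Eiffel Tower is located in Paris, France.",
        "Bananas are rich in potassium and vitamin B6.",
    ]
    print(build_prompt("Where is the Eiffel Tower located?", docs))
```

In this toy setting the second document is pruned before the prompt is built; the efficiency gains reported above come from the fact that irrelevant paths need not be carried through the expensive attention computation at all.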