In recent years, end-to-end automatic speech recognition (ASR) systems have proven to be remarkably accurate and effective, but they still exhibit a significant error rate on entity names that appear infrequently in their training data. In parallel with the rise of end-to-end ASR systems, large language models (LLMs) have proven to be a versatile tool for a wide range of natural language processing (NLP) tasks. In NLP tasks where a database of relevant knowledge is available, retrieval-augmented generation (RAG) has achieved impressive results when paired with LLMs. In this work, we propose a RAG-like technique for correcting entity-name errors in speech recognition. Our approach uses a vector database to index a set of relevant entities. At run time, database queries are generated from possibly erroneous textual ASR hypotheses, and the entities retrieved by these queries are fed, along with the ASR hypotheses, to an LLM that has been adapted to correct ASR errors. Overall, our best system achieves relative word error rate reductions of 33% to 39% on synthetic test sets focused on voice assistant queries for rare musical entities, without regressing on the STOP test set, a publicly available voice assistant test set covering many domains.
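
The described pipeline reduces to three steps: index entity names in a vector database, query it with the (possibly erroneous) ASR hypothesis, and prompt an error-correction LLM with the hypothesis together with the retrieved candidates. The sketch below is a minimal illustration of that flow, not the paper's implementation: the toy character-trigram embedding, the in-memory `EntityIndex`, and the `llm` callable are hypothetical stand-ins for the real text encoder, vector database, and adapted LLM.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy character-trigram hashing embedding (placeholder for a real encoder)."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[sum(ord(c) for c in text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

class EntityIndex:
    """Minimal in-memory stand-in for a vector database over entity names."""
    def __init__(self, entities):
        self.entities = list(entities)
        self.matrix = np.stack([embed(e) for e in self.entities])

    def search(self, query: str, k: int = 3):
        """Return the k entities whose embeddings are closest to the query."""
        scores = self.matrix @ embed(query)
        return [self.entities[i] for i in np.argsort(-scores)[:k]]

def correct_hypothesis(asr_hypothesis: str, index: EntityIndex, llm) -> str:
    """Retrieve candidate entities for a (possibly erroneous) ASR hypothesis
    and ask the error-correction LLM for a corrected transcript."""
    candidates = index.search(asr_hypothesis)
    prompt = (
        f"ASR hypothesis: {asr_hypothesis}\n"
        f"Candidate entities: {', '.join(candidates)}\n"
        "Corrected transcript:"
    )
    return llm(prompt)  # `llm` wraps the LLM adapted for ASR error correction

if __name__ == "__main__":
    index = EntityIndex(["Måneskin", "Mitski", "The Weeknd"])
    # Dummy LLM used for demonstration only; a real system calls the adapted model.
    dummy_llm = lambda prompt: "play Måneskin on spotify"
    print(correct_hypothesis("play man eskin on spotify", index, dummy_llm))
```

In this sketch the hypothesis itself serves as the retrieval query; the actual system may generate more targeted queries from the hypothesis, but the overall retrieve-then-correct structure is the same.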