This paper presents an efficient decoding approach for end-to-end automatic speech recognition (E2E-ASR) with large language models (LLMs). Although shallow fusion is the most common approach to incorporating language models into E2E-ASR decoding, we face two practical problems with LLMs: (1) LLM inference is computationally expensive, and (2) there may be a vocabulary mismatch between the ASR model and the LLM. Resolving this mismatch requires retraining the ASR model and/or the LLM, which is time-consuming at best and in many cases not feasible. We propose “delayed fusion,” which applies LLM scores to ASR hypotheses with a delay during decoding, allowing easier use of pretrained LLMs in ASR tasks. This method reduces not only the number of hypotheses scored by the LLM but also the number of LLM inference calls. It also allows ASR hypotheses to be re-tokenized during decoding when the ASR model and the LLM use different tokenizations. Using the LibriHeavy ASR corpus and three public LLMs (OpenLLaMA 3B, OpenLLaMA 7B, and Mistral 7B), we show that delayed fusion provides higher decoding speed and accuracy than both shallow fusion and N-best rescoring.
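
To make the delay mechanism concrete, the following is a minimal Python sketch of how delayed fusion could sit inside label-synchronous beam search. The function names (`asr_step`, `llm_logprob`, `retokenize`), the `Hypothesis` structure, and the fixed delay interval are illustrative assumptions, not the paper’s actual implementation; the point is only that LLM scores are applied late, to surviving hypotheses, after re-tokenization.

```python
"""Minimal sketch of delayed fusion in beam-search decoding.

All names here (asr_step, llm_logprob, retokenize, the `delay` interval)
are hypothetical placeholders, not the paper's actual implementation.
"""
from dataclasses import dataclass


@dataclass
class Hypothesis:
    tokens: list              # ASR tokens emitted so far
    asr_score: float = 0.0    # cumulative ASR log-probability
    llm_score: float = 0.0    # LLM log-probability, updated only with a delay
    scored_upto: int = 0      # number of ASR tokens already covered by the LLM score


def delayed_fusion_search(asr_step, llm_logprob, retokenize,
                          beam_size=8, delay=4, lm_weight=0.5,
                          max_len=100, eos=0):
    """asr_step(tokens) -> list of (next_token, log_prob) candidates.
    llm_logprob(llm_tokens) -> LLM log-probability of the token sequence.
    retokenize(asr_tokens) -> LLM tokens for the same text (handles the
    vocabulary mismatch between the ASR model and the LLM).
    """
    beam = [Hypothesis(tokens=[])]
    finished = []
    for step in range(1, max_len + 1):
        # 1) Expand every hypothesis with ASR scores only (cheap).
        candidates = []
        for hyp in beam:
            for tok, lp in asr_step(hyp.tokens):
                candidates.append(Hypothesis(tokens=hyp.tokens + [tok],
                                             asr_score=hyp.asr_score + lp,
                                             llm_score=hyp.llm_score,
                                             scored_upto=hyp.scored_upto))
        # 2) Prune on the combined score; the LLM part may lag behind.
        candidates.sort(key=lambda h: h.asr_score + lm_weight * h.llm_score,
                        reverse=True)
        beam = candidates[:beam_size]
        # 3) Every `delay` steps, catch the LLM score up on survivors only.
        #    Hypotheses pruned in step 2 are never sent to the LLM, which is
        #    where the savings in LLM inference calls come from.  (A real
        #    implementation would score incrementally with cached LLM state.)
        if step % delay == 0:
            for hyp in beam:
                if hyp.scored_upto < len(hyp.tokens):
                    hyp.llm_score = llm_logprob(retokenize(hyp.tokens))
                    hyp.scored_upto = len(hyp.tokens)
        # Move hypotheses ending in <eos> to the finished list.
        still_open = []
        for hyp in beam:
            (finished if hyp.tokens[-1] == eos else still_open).append(hyp)
        beam = still_open
        if not beam:
            break
    # Final catch-up so every finished hypothesis is fully LLM-scored.
    for hyp in finished:
        if hyp.scored_upto < len(hyp.tokens):
            hyp.llm_score = llm_logprob(retokenize(hyp.tokens))
            hyp.scored_upto = len(hyp.tokens)
    pool = finished if finished else beam
    return max(pool, key=lambda h: h.asr_score + lm_weight * h.llm_score)
```

Under these assumptions, setting `delay=1` recovers shallow-fusion-like per-step LLM scoring, while larger values trade scoring freshness for fewer and larger (batched) LLM calls over only the hypotheses that survive pruning.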