Large language models (LLMs) are known to generalize well to a variety of text-based natural language processing (NLP) tasks with the help of creative prompt engineering and in-context learning. However, to perform well on spoken language understanding (SLU) tasks, LLMs either need a built-in speech modality or must rely on speech-to-text conversion from an off-the-shelf automatic speech recognition (ASR) system. In this work, we focus on the latter setup, where the accuracy of the LLM on SLU tasks is constrained by the accuracy of a frozen ASR system on the given speech input. Specifically, we tackle speech intent classification, where a high word error rate (WER) implies that the LLM may not have the correct textual information to understand the spoken intent. To alleviate this, we propose prompting the LLM with an n-best list of ASR hypotheses instead of only the error-prone 1-best hypothesis. We first explore descriptive prompts that explain the concept of n-best lists to invoke the LLM's emergent abilities to understand the task; this is followed by finetuning LoRA adapters on the intent classification task. We demonstrate the effectiveness of our approach on a binary device-directed speech detection task as well as a keyword spotting task on the Google Speech Commands dataset, where systems using n-best hypothesis lists outperform those using the 1-best ASR output; thus paving the way for an efficient way to exploit ASR uncertainty via LLMs for speech-based applications.
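The n-best prompting idea described above can be sketched as follows. The prompt wording, the scoring format, and the function name `build_nbest_prompt` are illustrative assumptions for this sketch, not the exact template used in the work:

```python
def build_nbest_prompt(hypotheses, n=5):
    """Format an n-best ASR hypothesis list into an instruction-style prompt.

    `hypotheses` is a list of (transcript, log_score) pairs, best first.
    Only the top n hypotheses are included in the prompt.
    """
    lines = [
        "Below are the top transcriptions of a spoken utterance, produced by",
        "a speech recognizer and ordered from most to least likely.",
        "Some transcriptions may contain recognition errors.",
        "Classify the speaker's intent based on all of them.",
        "",
    ]
    for i, (text, score) in enumerate(hypotheses[:n], start=1):
        lines.append(f"{i}. {text} (score: {score:.2f})")
    lines.append("")
    lines.append("Intent:")
    return "\n".join(lines)


# Example n-best list: the 1-best contains a typo ("turn of"), but the
# lower-ranked hypotheses let the LLM recover the intended command.
nbest = [
    ("turn of the living room lights", -1.2),
    ("turn off the living room lights", -1.4),
    ("turn off the living room light", -3.1),
]
prompt = build_nbest_prompt(nbest)
```

The resulting prompt string would then be passed to the LLM (zero-shot, or with LoRA adapters finetuned on such prompts) to produce the intent label.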