With creative prompt engineering and in-context learning, large language models (LLMs) have shown the ability to generalize effectively across a variety of text-based natural language processing (NLP) tasks. However, to perform well on spoken language understanding (SLU) tasks, LLMs must either be equipped with built-in speech capabilities or rely on converting speech to text with an off-the-shelf automatic speech recognition (ASR) system. In this study, we focus on the latter scenario, where the LLM's accuracy on SLU tasks is constrained by the accuracy of a fixed ASR system on the given speech input. Our primary task is speech intent classification, where a high word error rate (WER) implies that the LLM may lack the correct textual information needed to comprehend the spoken intent. To address this issue, we propose prompting the LLM with an n-best list of ASR hypotheses instead of relying solely on the error-prone 1-best hypothesis. We first explore descriptive prompts that introduce the concept of an n-best list and leverage the LLM's emergent abilities to understand the task. We then fine-tune LoRA adapters specifically for the intent classification task. We demonstrate the effectiveness of our approach on two tasks: binary device-directed speech detection and keyword spotting on the Google Speech Commands dataset. Systems prompted with n-best lists outperform those using 1-best ASR outputs, providing an efficient method to leverage ASR uncertainty through LLMs for speech-based applications.
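
To make the core idea concrete, below is a minimal sketch of how an n-best list might be formatted into a single classification prompt. The prompt wording, hypothesis strings, and intent labels are illustrative assumptions, not the paper's exact prompt template.

```python
# Minimal sketch (assumed prompt format, not the authors' exact template):
# compose an ASR n-best list into one intent-classification prompt, so the
# LLM sees the recognizer's uncertainty rather than only the 1-best output.

def build_nbest_prompt(hypotheses, intents):
    """Format n-best ASR hypotheses and candidate intents into a prompt."""
    listed = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(hypotheses))
    return (
        "Below are the n-best transcription hypotheses from a speech "
        "recognizer for one utterance, ordered from most to least likely. "
        "They may contain recognition errors.\n"
        f"{listed}\n"
        f"Classify the speaker's intent as one of: {', '.join(intents)}.\n"
        "Intent:"
    )

if __name__ == "__main__":
    # Hypothetical 5-best list for a noisy "turn off the lights" utterance.
    nbest = [
        "turn off the lights",
        "turn of the lights",
        "turn off the light",
        "turnoff the lights",
        "turn up the lights",
    ]
    print(build_nbest_prompt(nbest, ["device_directed", "not_device_directed"]))
```

A prompt of this form can be used either zero-shot with a descriptive instruction, as in the first set of experiments, or as the input format when fine-tuning LoRA adapters for intent classification.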