The rapid evolution of large language models (LLMs) and conversational assistants requires dynamic, scalable, and configurable conversational datasets for training and evaluation. These datasets must accommodate diverse user interaction modes, including text and speech, each of which presents unique modeling challenges. Knowledge graphs (KGs), with their structured and evolving nature, offer an ideal foundation for current and accurate knowledge. Although human-curated KG-based conversational datasets exist, they struggle to keep pace with the rapidly changing information needs of users. We present ConvKGYarn, a scalable method for generating up-to-date and configurable conversational KGQA datasets. Qualitative psychometric analyses demonstrate ConvKGYarn’s effectiveness in producing high-quality data comparable to popular conversational KGQA datasets across several metrics. ConvKGYarn excels by adhering to human interaction settings and operating at a significantly larger scale. We demonstrate the utility of ConvKGYarn by testing LLMs on a variety of conversations, exploring model behavior on conversational KGQA ensembles with different configurations constructed from the same set of KG facts. Our results highlight ConvKGYarn’s ability to improve KGQA fundamentals and assess the parametric knowledge of LLMs, thus offering a robust solution for the ever-evolving landscape of conversational assistants.