The rapid evolution of large language models (LLMs) and conversational assistants requires dynamic, scalable, and configurable conversational datasets for training and evaluation. These datasets must accommodate diverse modes of user interaction, including text and voice, each of which presents unique modeling challenges. Knowledge graphs (KGs), with their structured and evolving nature, offer an ideal foundation for current and factual knowledge. Although human-curated KG-based conversational datasets exist, they struggle to keep pace with the rapidly changing information needs of users. We present ConvKGYarn, a scalable method for generating up-to-date and configurable conversational KGQA datasets. Qualitative psychometric analyses demonstrate ConvKGYarn's effectiveness in producing high-quality data comparable to popular conversational KGQA datasets across several metrics, while adhering to human interaction configurations and operating at a significantly larger scale. We showcase ConvKGYarn's utility by testing LLMs on diverse conversations, exploring model behavior on conversational KGQA sets with different configurations grounded in the same KG fact set. Our results highlight ConvKGYarn's ability to improve KGQA foundations and evaluate the parametric knowledge of LLMs, thus offering a robust solution for the ever-evolving landscape of conversational assistants.