This article was accepted at the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) 2024
Programmers frequently interact with machine learning tutorials in computational notebooks and have been adopting code generation technologies based on large language models (LLMs). However, they often have difficulty understanding and working with the code produced by LLMs. To mitigate these challenges, we present a novel workflow in computational notebooks that augments LLM-based code generation with an additional ephemeral UI step, offering users UI constructs as an intermediate stage between prompting and code generation. We introduce this workflow in BISCUIT, an extension for JupyterLab that provides users with LLM-generated ephemeral UIs based on the context of their code and intentions, allowing them to understand, guide, and explore LLM-generated code. Through a user study in which 10 novices used BISCUIT for machine learning tutorials, we found that BISCUIT provides users with representations of code that aid understanding, reduces the complexity of prompt engineering, and creates a playground for users to explore different variables and iterate on their ideas.
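To make the ephemeral UI step concrete, the following is a minimal, illustrative sketch of how such an intermediate UI might be rendered inside a notebook cell; it is not the BISCUIT implementation. It assumes only the ipywidgets library, and the `request_llm_code` helper is a hypothetical stand-in for the system's actual LLM call: the widgets collect the user's intent, and code is generated only after the user confirms.

```python
# Illustrative sketch only: an "ephemeral UI" rendered in a notebook cell that
# gathers the user's intent before any LLM code generation happens.
# Assumes ipywidgets is installed; `request_llm_code` is a hypothetical
# placeholder for the LLM call a system like BISCUIT would make.
import ipywidgets as widgets
from IPython.display import display

task = widgets.Dropdown(
    options=["load dataset", "train classifier", "plot results"],
    description="Task:",
)
target = widgets.Text(description="Target col:", placeholder="e.g. species")
test_size = widgets.FloatSlider(
    description="Test size:", value=0.2, min=0.05, max=0.5, step=0.05
)
generate = widgets.Button(description="Generate code", button_style="primary")
output = widgets.Output()

def request_llm_code(prompt: str) -> str:
    # Hypothetical placeholder: a real system would send `prompt` to an LLM
    # and insert the returned code into the next notebook cell.
    return f"# code generated for prompt:\n# {prompt}"

def on_generate(_):
    # Assemble a structured prompt from the UI state instead of free-form text.
    prompt = (
        f"{task.value} with target column '{target.value}' "
        f"and a {test_size.value:.2f} test split"
    )
    with output:
        output.clear_output()
        print(request_llm_code(prompt))

generate.on_click(on_generate)
display(widgets.VBox([task, target, test_size, generate, output]))
```

In this sketch the UI is "ephemeral" in the sense that it exists only to mediate one round of code generation: once the user confirms their choices and the code is produced, the widgets can be discarded, which mirrors the intermediate stage between prompting and code generation described above.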