Prompt engineering is an iterative procedure that often requires extensive manual effort to formulate prompts that effectively steer large language models (LLMs) on specific tasks. Incorporating few-shot examples is a vital and effective approach to providing LLMs with precise and concrete instructions, leading to improved LLM performance. However, identifying the most informative examples for LLMs is labor-intensive and frequently entails sifting through a vast search space. In this demonstration, we showcase an interactive tool called APE (Active Prompt Engineering) designed to refine prompts through human feedback. Inspired by active learning, APE iteratively selects the most ambiguous examples for human feedback, which are then transformed into few-shot examples within the prompt.
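To make the selection loop concrete, the sketch below shows one plausible reading of the APE procedure described above. The abstract does not specify how ambiguity is measured, so this sketch assumes margin-based uncertainty sampling, a common active-learning heuristic; the names `llm_classify` and `ask_human` are hypothetical placeholders for the LLM call and the human-feedback interface, not APE's actual API.

```python
def ambiguity(probs: list[float]) -> float:
    """Margin-based ambiguity (an assumption, not APE's documented
    measure): a smaller gap between the top two class probabilities
    means the model is less certain about the example."""
    top_two = sorted(probs, reverse=True)[:2]
    return 1.0 - (top_two[0] - top_two[1])

def active_prompt_loop(pool, llm_classify, ask_human, rounds=5):
    """One plausible APE-style loop: score the unlabeled pool with the
    current prompt, send the most ambiguous example to a human, and fold
    the labeled result back into the prompt as a few-shot example.

    llm_classify(few_shot, example) -> list of class probabilities
    ask_human(example) -> label supplied via human feedback
    (Both callables are hypothetical stand-ins.)
    """
    few_shot = []  # (example, human_label) pairs embedded in the prompt
    for _ in range(rounds):
        if not pool:
            break
        # Score every pooled example under the current few-shot prompt.
        scored = [(ex, ambiguity(llm_classify(few_shot, ex))) for ex in pool]
        # Pick the example the model is least certain about.
        example, _ = max(scored, key=lambda pair: pair[1])
        pool.remove(example)
        # Human feedback turns the ambiguous case into a demonstration.
        few_shot.append((example, ask_human(example)))
    return few_shot
```

Under this reading, each round spends one unit of human effort on the example where a label is expected to sharpen the prompt the most, rather than asking the user to inspect the entire search space.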