This paper presents EMOTION, a framework for generating expressive movement sequences in humanoid robots, improving their ability to engage in human-like nonverbal communication. Nonverbal cues such as facial expressions, gestures, and body movements play a crucial role in effective interpersonal interactions. Despite advances in robotic behaviors, existing methods often fail to mimic the diversity and subtlety of human nonverbal communication. To address this gap, our approach leverages the in-context learning capability of large language models (LLMs) to dynamically generate sequences of socially appropriate gesture movements for human-robot interaction. We use this framework to generate 10 different expressive gestures and conduct online user studies comparing the naturalness and understandability of the movements generated by EMOTION and its human-feedback version, EMOTION++, with those produced by human operators. The results demonstrate that our approach matches or exceeds human performance in generating natural and understandable robotic movements in certain scenarios. We also provide design implications for future research, identifying a set of variables to consider when generating expressive robotic gestures.