Hand gesture recognition is becoming an increasingly common mode of human-computer interaction, especially as cameras proliferate in everyday devices. Despite continued advances in the field, gesture personalization remains underexplored. Personalization is crucial because it lets users define and demonstrate gestures that are more natural, memorable, and accessible; however, it demands efficient use of user-provided data. We present a method that allows users to easily design custom gestures from a single demonstration captured with a monocular camera. We employ transformer and meta-learning techniques to address the resulting few-shot learning challenges. Unlike previous work, our method supports any combination of static and dynamic gestures performed with one or two hands, as well as different viewpoints. We evaluated our personalization method in a user study with 20 gestures collected from 21 participants, achieving up to 97% average recognition accuracy from a single demonstration. Our work provides a viable path for vision-based gesture personalization, laying the foundation for future advances in this domain.
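
To illustrate the few-shot setting the abstract refers to, the sketch below shows one-shot gesture classification by nearest-prototype matching, in the spirit of prototypical networks. It is a minimal illustration, not the paper's method: the toy embedding, data shapes, and variable names are all assumptions, and a real system would use a learned encoder (e.g., a transformer) in place of the flatten-and-normalize step.

```python
# Minimal sketch: one-shot gesture classification via nearest-prototype
# matching. All shapes and the toy embedding are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def embed(landmarks: np.ndarray) -> np.ndarray:
    """Toy embedding: flatten and L2-normalize landmark sequences.
    A learned encoder would replace this in practice."""
    v = landmarks.reshape(landmarks.shape[0], -1)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Support set: one demonstration per custom gesture (hypothetical data:
# 3 gestures, each a 16-frame sequence of 21 hand landmarks in 2D).
support = rng.normal(size=(3, 16, 21, 2))
prototypes = embed(support)             # one prototype per gesture class

# Query: a new sequence to recognize (here, a perturbed copy of gesture 1).
query = support[1] + 0.05 * rng.normal(size=(16, 21, 2))
q = embed(query[None])                  # shape (1, D)

# Classify by cosine similarity to each class prototype.
scores = q @ prototypes.T
print("predicted gesture:", int(scores.argmax()))  # -> 1
```

Meta-learning enters by training the encoder over many such episodes so that a single demonstration yields a reliable prototype for an unseen, user-defined gesture.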