Hand gesture recognition is becoming an increasingly common mode of human-computer interaction, especially as cameras proliferate in everyday devices. Despite continued advances in this field, gesture personalization remains underexplored. Personalization is crucial because it allows users to define and demonstrate gestures that are more natural, memorable, and accessible. However, personalization requires efficient use of user-provided data. We present a method that allows users to easily design custom gestures with a monocular camera from a single demonstration. We employ transformer-based and meta-learning techniques to address the challenges of few-shot learning. Unlike prior work, our method supports any combination of static and dynamic gestures performed with one or two hands, accommodates different viewpoints, and handles irrelevant hand movements. We implemented three real-world applications using our personalization method, conducted a user study, and achieved up to 94% average recognition accuracy from a single demonstration. Our work provides a viable path for vision-based gesture personalization, laying the foundation for future advances in this domain.