Hair is one of the most notable features of the human body, and its dynamic qualities bring scenes to life. Studies have consistently shown that dynamic elements hold greater appeal and fascination than static images. On social media platforms such as TikTok and Instagram, people share striking portrait photographs every day, aiming to make their images attractive and artistically captivating. This impulse drives researchers to explore animating human hair in still images, with the goal of delivering a vivid, aesthetically pleasing visual experience.
Recent advances in the field have introduced methods for infusing still images with dynamic elements, animating fluid substances such as water, smoke, and fire within the frame. However, these approaches have largely overlooked the intricate nature of human hair in real-life photographs. This article focuses on the artistic transformation of human hair within portrait photography: translating the image into a cinemagraph.
A cinemagraph is an innovative short video format favored by professional photographers, advertisers, and artists. Cinemagraphs find utility in various digital media, including digital ads, social media posts, and landing pages. Their fascination lies in merging the strengths of still images and videos: certain areas feature subtle, repetitive movements in a short loop, while the rest remains static. This contrast between stationary and moving elements effectively captivates the viewer's attention.
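At its core, compositing a cinemagraph amounts to blending a looping clip into a still image under a motion mask: pixels inside the mask play the loop, while everything else stays frozen. The sketch below illustrates this idea with NumPy; the function name and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def composite_cinemagraph(still, frames, motion_mask):
    """Blend looping frames into a still image.

    still:       (H, W, 3) float array -- the static photograph
    frames:      list of (H, W, 3) float arrays -- the looping clip
    motion_mask: (H, W) float array in [0, 1] -- 1 where motion is allowed
    """
    mask = motion_mask[..., None]  # broadcast over the color channels
    return [mask * f + (1.0 - mask) * still for f in frames]

# Toy example: a 2x2 image where only the left column is allowed to move.
still = np.zeros((2, 2, 3))
frames = [np.full((2, 2, 3), v) for v in (0.5, 1.0)]
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
out = composite_cinemagraph(still, frames, mask)
```

Because the loop's last frame blends back toward its first, a soft-edged (rather than binary) mask is typically used in practice to hide the seam between moving and frozen regions.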
Through the transformation of a portrait photograph into a cinemagraph, complete with subtle hair movements, the idea is to enhance the appeal of the photograph without detracting from the static content, creating a more compelling and engaging viewing experience.
Existing commercial software and techniques can generate high-fidelity cinemagraphs from input videos by selectively freezing certain regions of the video. Unfortunately, these tools are not suitable for processing still images. In contrast, there has been growing interest in still-image animation, though most approaches focus on animating fluid elements such as clouds, water, and smoke. The dynamic behavior of hair, composed of fibrous materials, presents a distinct challenge compared to fluids, and while fluid animation has received much attention, the animation of human hair in real portrait photographs remains relatively unexplored.
Animating hair in a static portrait photograph is challenging due to the intricate complexity of hair structures and dynamics. Unlike the smooth surfaces of the human body or face, hair comprises hundreds of thousands of individual components, resulting in complex, non-uniform structures. This complexity leads to intricate movement patterns within the hair, including interactions with the head. While specialized techniques for hair modeling exist, such as using dense camera arrays and high-speed cameras, they are often expensive and time-consuming, limiting their practicality for real-world hair animation.
The paper discussed in this article presents a new AI method to automatically animate hair within a static portrait photograph, eliminating the need for user intervention or complex hardware configurations. The idea behind this approach lies in the reduced sensitivity of the human visual system to individual hair strands and their movements in real portrait videos, compared with the synthetic strands of a digitized human in a virtual environment. The proposed solution is therefore to animate groups of hair strands rather than individual strands, creating a pleasing visual experience. To achieve this, the paper presents a hair strand animation module, allowing for an efficient and automated solution. An overview of this framework is illustrated below.
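To make the idea of animating strand groups concrete, the toy sketch below displaces an extracted strand mask with a looping sinusoidal offset: rows near the root barely move, while rows toward the tip swing more. This is a minimal illustration under assumed inputs, not the paper's actual animation module; the function name and parameters are hypothetical.

```python
import numpy as np

def animate_strand(mask, t, amplitude=2.0, period=30):
    """Shift a strand mask horizontally with a looping sinusoidal offset.

    mask: (H, W) boolean array for one extracted strand (hypothetical
          output of a strand-extraction step).
    t:    frame index; offsets repeat every `period` frames, so the
          resulting clip loops seamlessly.
    """
    h, w = mask.shape
    phase = 2 * np.pi * t / period
    out = np.zeros_like(mask)
    for y in range(h):
        # Lower rows (strand tips) swing more than the rooted top rows.
        offset = int(round(amplitude * (y / max(h - 1, 1)) * np.sin(phase)))
        out[y] = np.roll(mask[y], offset)
    return out
```

Rendering one frame per value of `t` from 0 to `period - 1` yields a short clip that loops back to the original pose, which is exactly the repetitive, subtle motion a cinemagraph needs.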
The key challenge in this context is how to extract these hair strands. While related works, such as hair modeling, have addressed hair segmentation, those approaches mainly aim at extracting the entire hair region, which differs from the objective here. To extract meaningful hair strands, the researchers innovatively frame hair strand extraction as an instance segmentation problem, where each individual segment within a still image corresponds to a hair strand. By adopting this problem definition, they can leverage instance segmentation networks to extract hair strands. This not only simplifies the extraction problem but also allows the use of advanced networks for effective extraction. Additionally, the paper presents a hair strand dataset of real portrait photographs to train the networks, along with a semi-automatic annotation scheme to produce reliable annotations for the identified hair strands. The following figure shows some sample results from the paper compared with state-of-the-art techniques.
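The instance segmentation framing means each detected instance is one strand, not the whole hair region. The paper uses learned instance segmentation networks for this; as a much simpler stand-in, the sketch below labels connected components of a binary hair mask, so that each component plays the role of one "strand" instance. Everything here is illustrative and assumed, not the paper's pipeline.

```python
import numpy as np
from collections import deque

def label_segments(binary):
    """Label 4-connected components of a binary mask.

    A toy stand-in for an instance segmentation network: each
    connected component is treated as one strand instance.
    Returns (label map, number of instances).
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # start a new instance
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                while q:                          # flood-fill the component
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current
```

Unlike this toy flood fill, a learned instance segmentation network can separate strands that overlap or touch, which is why the paper's formulation relies on such networks rather than simple connectivity.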
This was a brief overview of a novel AI framework designed to transform still portraits into cinemagraphs by animating strands of hair with pleasing movements and without noticeable artifacts. If you are interested and want to know more, feel free to check out the links given below.
Review the Paper and Project page. All credit for this research goes to the researchers of this project. Also, don't forget to join our 31k+ ML SubReddit, Facebook community of more than 40,000 people, Discord channel, and email newsletter, where we share the latest news on AI research, interesting AI projects, and more.
If you like our work, you’ll love our newsletter.
We are also on WhatsApp. Join our AI channel on WhatsApp.
Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) in Klagenfurt. He currently works at the ATHENA Christian Doppler Laboratory, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.