Irish philosopher George Berkeley, best known for his theory of immaterialism, once said: “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
What about AI-generated trees? They probably wouldn't make noise, but they will still be essential for applications such as adapting urban flora to climate change. To that end, the novel "Tree-D Fusion" system, developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University, combines AI and tree growth models with data from Google's Auto Arborist project to create accurate 3D models of existing urban trees. The project has produced the first large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.
"We're marrying decades of forestry science with modern AI capabilities," says Sara Beery, MIT assistant professor of electrical engineering and computer science (EECS), MIT CSAIL principal investigator, and co-author of a new paper about Tree-D Fusion. "This allows us to not only identify trees in cities, but also to predict how they will grow and impact their surroundings over time. We are not ignoring the past 30 years of work on understanding how to build these 3D synthetic models; instead, we are using AI to make this existing knowledge more useful across a broader set of individual trees in cities across North America, and eventually around the world."
Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches off from them by generating full 3D models from single images. While previous attempts at tree modeling were limited to specific neighborhoods or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the backs of trees that are not visible in street-view photographs.
The technology's practical applications extend far beyond mere observation. Urban planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could shift urban forest management from reactive maintenance to proactive planning.
A tree grows in Brooklyn (and many other places)
The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree's shape and then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree's genus. This combination helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and variable access to groundwater.
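To make the two-stage idea concrete, here is a minimal, purely illustrative sketch of a hybrid pipeline of this kind. It is not the Tree-D Fusion implementation: the envelope function stands in for a learned 3D shape prediction, and all genus parameters and function names are hypothetical.

```python
# Illustrative sketch only; all names and parameters are hypothetical.
# Stage 1 (stand-in for deep learning): an "envelope" constraining overall shape.
# Stage 2: a procedural model placing branches inside that envelope, per genus.
import math
import random

def crown_envelope(height_frac, crown_shape="ellipsoid"):
    """Max crown radius (normalized) allowed at a normalized height along the crown."""
    if crown_shape == "ellipsoid":   # broad, rounded crowns
        return math.sqrt(max(0.0, 1 - (2 * height_frac - 1) ** 2))
    if crown_shape == "conical":     # e.g., many conifers
        return 1.0 - height_frac
    raise ValueError(crown_shape)

# Hypothetical genus-specific parameters for the procedural stage.
GENUS_PARAMS = {
    "Acer":  {"crown_shape": "ellipsoid", "branch_angle": 40, "branches": 5},
    "Picea": {"crown_shape": "conical",   "branch_angle": 80, "branches": 8},
}

def grow_tree(genus, height_m, rng):
    """Procedurally place first-order branches, clipped to the crown envelope."""
    p = GENUS_PARAMS[genus]
    branches = []
    for i in range(p["branches"]):
        h_frac = (i + 1) / (p["branches"] + 1)          # position along the trunk
        max_r = crown_envelope(h_frac, p["crown_shape"]) * height_m * 0.3
        length = min(max_r, rng.uniform(0.5, 1.0) * height_m * 0.3)
        branches.append({"height_m": h_frac * height_m,
                         "length_m": length,
                         "angle_deg": p["branch_angle"],
                         "azimuth_deg": rng.uniform(0, 360)})
    return branches

rng = random.Random(0)
tree = grow_tree("Picea", height_m=12.0, rng=rng)
print(len(tree))
```

The key design point this mirrors is the division of labor: the learned component only has to get the coarse silhouette right, while the procedural rules supply botanically plausible branching detail that a network would struggle to recover from a single street-level photo.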
Now, as cities around the world face <a target="_blank" href="https://www.nature.com/articles/s43247-022-00539-x">rising temperatures</a>, this research offers a new window into the future of urban forests. In collaboration with the MIT Senseable City Lab, the Purdue University and Google teams are embarking on a global study that reimagines trees as living climate shields. Their digital modeling system captures the intricate dance of shadow patterns across the seasons, revealing how strategic urban forestry could hopefully transform sweltering city blocks into more naturally cooled neighborhoods.
“Every time a street mapping vehicle passes through a city, we don't just take snapshots: we watch these urban forests evolve in real time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”
AI-based tree modeling has become an ally in the pursuit of environmental justice: by mapping the urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in access to green space across different socioeconomic areas. "We're not just studying urban forests: we're trying to cultivate more equity," Beery says. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits extend to all residents equitably.
It's a breeze
While Tree-D Fusion marks significant "growth" in the field, trees can present a unique challenge for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature's shapeshifters: they sway in the wind, interweave their branches with neighbors, and constantly change shape as they grow. Tree-D Fusion's models are "simulation-ready" because they can estimate a tree's future shape under different environmental conditions.
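What "simulation-ready" might look like can be sketched with a standard growth curve from forestry science, the Chapman-Richards model, modulated by climate scenarios. This is not the paper's actual growth simulation; the scenario multipliers and parameter values below are invented for illustration.

```python
# Hedged sketch: the actual Tree-D Fusion growth simulation is not reproduced here.
# We use the classic Chapman-Richards height-growth curve from forestry,
# H(t) = Hmax * (1 - exp(-k*t))^p, with hypothetical scenario multipliers.
import math

def chapman_richards(age_years, h_max=25.0, k=0.08, p=1.8):
    """Asymptotic height-growth curve: starts at 0, saturates near h_max."""
    return h_max * (1 - math.exp(-k * age_years)) ** p

def projected_height(age_years, scenario):
    """Scale growth by an invented scenario factor (illustrative only)."""
    factors = {"baseline": 1.0, "hotter_drier": 0.85, "ample_water": 1.05}
    return chapman_richards(age_years) * factors[scenario]

for s in ("baseline", "hotter_drier", "ample_water"):
    print(s, round(projected_height(30, s), 1))
```

The point of making a model simulation-ready is exactly this kind of re-posing: the same reconstructed tree can be queried at a future age under different environmental assumptions, rather than remaining a static snapshot.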
"What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision," Beery says. "While 3D scene understanding techniques like photogrammetry or NeRF (neural radiance fields) excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from one moment to the next."
The team's approach of creating rough structural envelopes that approximate each tree's shape has proven remarkably effective, but certain problems remain unsolved. Perhaps the most perplexing is the "tangled tree problem": when neighboring trees grow into each other, their intertwined branches create a puzzle that no current artificial intelligence system can fully solve.
The scientists see their data set as a springboard for future innovations in computer vision and are already exploring applications beyond street view images, looking to extend their approach to platforms such as iNaturalist and wildlife camera traps.
"This marks just the beginning of Tree-D Fusion," says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. "Together with my collaborators, I envision expanding the platform's capabilities to a planetary scale. Our goal is to use AI-powered insights in the service of natural ecosystems: supporting biodiversity, promoting global sustainability, and ultimately benefiting the health of our entire planet."
Beery and Lee's co-authors are Jonathan Huang, AI director at Scaled Foundations (formerly of Google), and four others from Purdue University: doctoral student Bosheng Li, professor and dean of remote sensing Songlin Fei, assistant professor Raymond Yeh, and professor and associate director of computer science Bedrich Benes. Their work builds on efforts supported by the United States Department of Agriculture (USDA) Natural Resources Conservation Service and is directly supported by the USDA National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.