Blender
In this tutorial, I want to fill a big gap on the internet: how to leverage one of the best 3D tools for manipulating and visualizing massive point cloud data sets.
This tool is called Blender. It lets us tackle complex analytical scenarios by experimenting with different data visualization techniques, and that is precisely what brings us together here.
What is the best fundamental workflow for linking Reality Capture data sets (in the form of point clouds) with Blender's extended data visualization capabilities?
florent: Reality Capture is a somewhat “new” term that can be confusing, especially since some software products and companies take their name from it. You can think of this “discipline” as a specialized branch of “3D Mapping”, where the goal is to capture real-world 3D geometry with various sensors such as LiDAR or passive cameras (via photogrammetry and 3D computer vision). You can see how it is done in this article: 3D reconstruction guide with photogrammetry
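Before diving into the workflow, it helps to see what a point cloud looks like as data. The sketch below (an illustration of mine, not part of the article's pipeline) writes a toy point cloud to ASCII PLY, one of the common interchange formats that Blender can import; real Reality Capture exports work the same way, just with millions of points and extra attributes such as color.

```python
# Minimal sketch (assumed format details): writing a point cloud to ASCII PLY,
# a plain-text format that Blender's importer understands.
import random

def write_ply(path, points):
    """Write an iterable of (x, y, z) tuples as an ASCII PLY point cloud."""
    with open(path, "w") as f:
        # PLY header: declares one "vertex" element with x, y, z floats.
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        # One line per point: "x y z".
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Generate a toy cloud of 1000 random points inside a unit cube.
random.seed(0)
cloud = [(random.random(), random.random(), random.random()) for _ in range(1000)]
write_ply("cloud.ply", cloud)
```

A file like this can then be brought into Blender through its PLY import menu entry, which is the entry point for the visualization techniques covered later in the tutorial.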
In this guide, I break the process down into nine clear steps, as illustrated below.
This will allow us to generate several Route Extraction Visualization Products, like this one: