Humanoid robots have significant gaps in their sensing and perception, which makes it difficult to plan motions in dense environments. To address this, we introduce ARMOR, a novel egocentric perception system that integrates hardware and software, specifically incorporating distributed depth sensors for humanoid robots. Our distributed perception approach improves the robot's spatial awareness and enables more agile motion planning. We also train a transformer-based imitation learning (IL) policy in simulation to perform dynamic collision avoidance, leveraging around 86 hours of realistic human motions from the AMASS dataset. We show that our ARMOR perception is superior to a setup with multiple dense, externally mounted depth cameras, achieving a 63.7% reduction in collisions and a 78.7% improvement in success rate. We also compare our IL policy with a sampling-based motion planning expert, cuRobo, showing 31.6% fewer collisions, a 16.9% higher success rate, and a 26× reduction in computational latency. Finally, we deploy our ARMOR perception on a real-world GR1 humanoid from Fourier Intelligence. We will update the arXiv version of this paper with links to the source code, hardware description, and 3D CAD files.
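To make the transformer-based IL policy concrete, the following is a minimal sketch, not the authors' implementation: a policy that tokenizes per-sensor egocentric depth features together with proprioception and predicts next-step joint targets via behavior cloning. All module names, dimensions, and the tokenization scheme are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the ARMOR implementation):
# a transformer encoder over distributed depth-sensor tokens plus a
# proprioception token, trained with a behavior-cloning loss.
import torch
import torch.nn as nn

class CollisionAvoidancePolicy(nn.Module):
    def __init__(self, num_sensors=8, depth_feat_dim=64, num_joints=32,
                 d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        # One token per depth sensor plus one token for proprioception.
        self.depth_proj = nn.Linear(depth_feat_dim, d_model)
        self.proprio_proj = nn.Linear(num_joints, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_sensors + 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, num_joints)  # next-step joint targets

    def forward(self, depth_feats, joint_pos):
        # depth_feats: (B, num_sensors, depth_feat_dim) per-sensor embeddings
        # joint_pos:   (B, num_joints) current joint positions
        tokens = torch.cat(
            [self.depth_proj(depth_feats),
             self.proprio_proj(joint_pos).unsqueeze(1)], dim=1) + self.pos_embed
        feats = self.encoder(tokens)
        # Read the action from the proprioception token's output.
        return self.head(feats[:, -1])

# One behavior-cloning step on expert trajectories (placeholder tensors).
policy = CollisionAvoidancePolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)
depth_feats = torch.randn(16, 8, 64)
joint_pos = torch.randn(16, 32)
expert_action = torch.randn(16, 32)
loss = nn.functional.mse_loss(policy(depth_feats, joint_pos), expert_action)
optim.zero_grad(); loss.backward(); optim.step()
```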