Reinforcement learning practitioners typically avoid hierarchical policies, especially in image-based observation spaces, because the performance improvement on a single task over a flat policy counterpart rarely justifies the additional complexity of implementing a hierarchy. However, by introducing multiple levels of decision-making, hierarchical policies can compose lower-level policies to generalize more effectively across tasks, which highlights the need for multitask evaluations. We analyze the benefits of hierarchy through simulated multitask robotic control experiments with pixel observations. Our results show that hierarchical policies trained with task conditioning can (1) increase performance on training tasks, (2) improve generalization to reward and state-space variations on similar tasks, and (3) reduce the complexity of the fine-tuning required to solve novel tasks. We therefore believe that hierarchical policies should be considered when building reinforcement learning architectures that must generalize across tasks.
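To make the setup concrete, below is a minimal sketch of one way such a task-conditioned hierarchy could be structured: a high-level policy maps pixels and a task index to a latent goal, and a low-level policy maps pixels and that goal to an action. The PyTorch modules, layer sizes, and the assumed 84x84 RGB observation shape are illustrative, not the architecture evaluated in this work.

```python
# A minimal sketch (illustrative, not this paper's architecture) of a
# two-level, task-conditioned hierarchical policy over pixel observations.
import torch
import torch.nn as nn


def pixel_encoder() -> nn.Module:
    """Small CNN used by both levels (assumes 84x84 RGB input)."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        nn.Flatten(),
    )


class HighLevelPolicy(nn.Module):
    """Maps a pixel observation and a discrete task index to a latent goal."""

    def __init__(self, num_tasks: int, goal_dim: int = 16):
        super().__init__()
        self.encoder = pixel_encoder()
        self.task_embed = nn.Embedding(num_tasks, 32)  # task conditioning
        self.head = nn.LazyLinear(goal_dim)            # infers input size lazily

    def forward(self, obs: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.encoder(obs), self.task_embed(task_id)], dim=-1)
        return self.head(feats)


class LowLevelPolicy(nn.Module):
    """Maps a pixel observation and the current latent goal to an action."""

    def __init__(self, action_dim: int = 4):
        super().__init__()
        self.encoder = pixel_encoder()
        self.head = nn.LazyLinear(action_dim)

    def forward(self, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.encoder(obs), goal], dim=-1)
        return torch.tanh(self.head(feats))  # bounded continuous action


high = HighLevelPolicy(num_tasks=10)
low = LowLevelPolicy()
obs = torch.randn(1, 3, 84, 84)     # one pixel observation
task_id = torch.tensor([2])         # which task to condition on
goal = high(obs, task_id)           # re-sampled every k environment steps
action = low(obs, goal)             # queried at every step
```

In a typical arrangement of this kind, the high level is queried only every k environment steps while the low level acts at every step, so changing the task index (or fine-tuning only the high level) reuses the same low-level skills across tasks.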