Wearable sensors have permeated people's lives, giving rise to impactful applications in interactive systems and activity recognition. However, practitioners face a significant obstacle in sensor heterogeneity: devices worn at different body locations produce different motion signals, requiring customized models for each platform. In this paper, we conduct a comprehensive evaluation of how well motion models generalize across sensor locations. Our analysis highlights this challenge and identifies key locations on the body for building location-invariant models that can be integrated into any device. To support this, we present the largest multi-location activity dataset to date (N=50, 200 cumulative hours), which we make publicly available. We also present deployable, on-device motion models: a single model achieves a frame-level F1 score of 91.41%, regardless of sensor placement. Finally, we investigate data synthesis across locations, aiming to alleviate laborious data collection by synthesizing data for one location from data captured at another. These contributions advance our vision of low-barrier, location-invariant activity recognition systems, catalyzing research in HCI and ubiquitous computing.