The growing capabilities of large generative models and their increasingly widespread deployment have raised concerns about their reliability, security, and potential misuse. To address these issues, recent work has proposed controlling model generation by steering model activations to induce or suppress the emergence of concepts or behaviors in the generated output. In this article, we present Activation Transport (AcT), a general framework for steering activations that is grounded in optimal transport theory and generalizes many previous activation-steering approaches. AcT is modality-agnostic and provides fine-grained control over model behavior with negligible computational overhead, while minimally affecting model capabilities. We experimentally demonstrate the effectiveness and versatility of our approach on key challenges in large language models (LLMs) and text-to-image (T2I) diffusion models. For LLMs, we show that AcT can effectively mitigate toxicity, induce arbitrary concepts, and increase their veracity. For T2I models, we show how AcT enables fine-grained style control and concept negation.
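To make the idea of transporting activations concrete, below is a minimal sketch of one way such a scheme could be realized: per-unit affine maps fitted between "source" and "target" activation samples via quantile matching (the optimal transport map between two one-dimensional distributions is the monotone rearrangement), with an interpolation strength λ for fine-grained control. The function names and the least-squares fitting choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fit_linear_transport(src, tgt):
    """Fit a per-unit affine map (omega, beta) sending the source
    activation distribution toward the target one.

    In 1D, the optimal transport map is the monotone rearrangement,
    so we sort both sample sets per unit (quantile matching) and fit
    the affine map by least squares on the sorted pairs.
    src, tgt: arrays of shape (n_samples, n_units).
    NOTE: an illustrative sketch, not the method's exact estimator.
    """
    s = np.sort(src, axis=0)
    t = np.sort(tgt, axis=0)
    s_mean, t_mean = s.mean(axis=0), t.mean(axis=0)
    # Closed-form per-unit least squares: omega = cov(s, t) / var(s).
    var = ((s - s_mean) ** 2).mean(axis=0) + 1e-8
    omega = ((s - s_mean) * (t - t_mean)).mean(axis=0) / var
    beta = t_mean - omega * s_mean
    return omega, beta

def transport(a, omega, beta, lam=1.0):
    """Steer activations a with strength lam in [0, 1]:
    lam = 0 leaves the model untouched, lam = 1 applies the full map."""
    return (1.0 - lam) * a + lam * (omega * a + beta)
```

At inference time, one would apply `transport` to the activations of a chosen layer (e.g., via a forward hook), with λ providing the fine-grained strength control referred to in the abstract; the per-sample cost is a single elementwise multiply-add, consistent with the negligible-overhead claim.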