While server-side large language models (LLMs) demonstrate proficiency in tool integration and complex reasoning, deploying small language models (SLMs) directly on devices brings opportunities to improve latency and privacy but also introduces unique challenges for accuracy and memory. We introduce CAMPHOR, an on-device multi-agent SLM framework designed to handle multiple user inputs and reason about personal context locally, ensuring privacy is maintained. CAMPHOR employs a hierarchical architecture in which a high-order reasoning agent decomposes complex tasks and coordinates expert agents responsible for personal context retrieval, tool interaction, and dynamic plan generation. By implementing parameter sharing across agents and leveraging prompt compression, we significantly reduce model size, latency, and memory usage. To validate our approach, we present a novel dataset that captures multi-agent task trajectories for personalized mobile assistant use cases. Our experiments show that fine-tuned SLM agents not only surpass closed-source LLMs in task-completion F1 scores by around 35%, but also eliminate the need for device-server communication, further improving privacy.
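To make the hierarchical coordination concrete, the sketch below illustrates one plausible shape for it: a high-order agent decomposes a request and routes sub-tasks to expert agents for context retrieval, tool interaction, and plan generation. This is a minimal illustration, not the paper's implementation; the agent names, the `SubTask` fields, and the fixed decomposition heuristic are all hypothetical stand-ins for what would be SLM-driven components on device.

```python
# Minimal sketch of a high-order agent coordinating expert agents.
# All names and the decomposition logic are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubTask:
    kind: str      # e.g. "context", "tool", "plan"
    payload: str   # text the expert agent should act on

def retrieve_personal_context(payload: str) -> str:
    # Stand-in for an on-device personal-context retrieval agent.
    return f"[context for: {payload}]"

def call_tool(payload: str) -> str:
    # Stand-in for a tool-interaction agent (calendar, messages, ...).
    return f"[tool result for: {payload}]"

def generate_plan(payload: str) -> str:
    # Stand-in for a dynamic plan-generation agent.
    return f"[plan for: {payload}]"

EXPERTS: dict[str, Callable[[str], str]] = {
    "context": retrieve_personal_context,
    "tool": call_tool,
    "plan": generate_plan,
}

def high_order_agent(user_request: str) -> list[str]:
    """Decompose a request and dispatch each sub-task to an expert.
    A real high-order reasoning agent would be an SLM; here the
    decomposition is a fixed toy pipeline for illustration only."""
    subtasks = [
        SubTask("context", user_request),
        SubTask("tool", user_request),
        SubTask("plan", user_request),
    ]
    return [EXPERTS[task.kind](task.payload) for task in subtasks]

if __name__ == "__main__":
    for step in high_order_agent("book dinner with Alex on Friday"):
        print(step)
```

In a parameter-sharing setup of the kind the abstract describes, the three expert functions would be backed by a single shared SLM backbone with lightweight per-agent heads or prompts, which is what allows the reported reductions in model size and memory.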