Multimodal Large Language Models (MLLMs) have enabled numerous advances in understanding and reasoning in domains such as vision, but this success has not yet carried over to time series. Although prior work on time series MLLMs has shown promising forecasting performance, very few works show how an LLM could be used for natural language reasoning about time series. We propose a novel multimodal time series LLM approach that learns generalizable information across multiple domains and achieves strong zero-shot performance. First, we train a lightweight time series encoder on top of an LLM to directly extract time series information. We then fine-tune our model on time series tasks augmented with chain-of-thought reasoning to encourage the model to generate reasoning paths. We show that our model learns a latent representation reflecting specific time series features (e.g., slope, frequency), and that it outperforms GPT-4o on a set of zero-shot reasoning tasks spanning a variety of domains.
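To make the encoder-plus-LLM setup concrete, below is a minimal PyTorch sketch of one plausible instantiation: a lightweight encoder that patches a raw series, encodes the patches, and projects them into the LLM's embedding space so they can be prepended to the prompt's token embeddings. The abstract does not specify the architecture, so everything here (patching, the small transformer encoder, the linear projection, and all dimensions such as `llm_dim=4096`) is an assumption for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class TimeSeriesEncoder(nn.Module):
    """Hypothetical lightweight encoder mapping a raw series to LLM embedding space."""
    def __init__(self, patch_len: int = 16, d_model: int = 256, llm_dim: int = 4096):
        super().__init__()
        self.patch_len = patch_len
        self.patch_embed = nn.Linear(patch_len, d_model)  # embed fixed-length patches
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.project = nn.Linear(d_model, llm_dim)  # align with LLM token embeddings

    def forward(self, series: torch.Tensor) -> torch.Tensor:
        # series: (batch, length); split into non-overlapping patches
        b, t = series.shape
        patches = series[:, : t - t % self.patch_len].reshape(b, -1, self.patch_len)
        h = self.encoder(self.patch_embed(patches))
        return self.project(h)  # (batch, num_patches, llm_dim): "time series tokens"

# Sketch of usage: prepend the time-series tokens to the prompt's token
# embeddings, then run the combined sequence through the LLM. The random
# prompt_embeds tensor is a stand-in for real LLM token embeddings.
encoder = TimeSeriesEncoder()
ts_tokens = encoder(torch.randn(2, 128))   # 2 series of length 128
prompt_embeds = torch.randn(2, 10, 4096)   # stand-in for embedded prompt tokens
inputs_embeds = torch.cat([ts_tokens, prompt_embeds], dim=1)
```

Under this reading, only the small encoder and projection need training from scratch; the chain-of-thought fine-tuning stage would then update the model on text targets while the encoder supplies the time series context.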