The inability to linearly classify XOR has motivated much of deep learning. We revisit this old problem and show that linear classification of XOR is, in fact, possible. Instead of separating data across half-spaces, we propose a slightly different paradigm, equality separation, which adapts the SVM objective to distinguish data within or outside the margin. Our classifier can then be embedded into neural networks via a smooth approximation. From its properties, we intuit that equality separation is well-suited for anomaly detection. To formalize this notion, we introduce closure numbers, a quantitative measure of a classifier's ability to form closed decision regions for anomaly detection. Building on this theoretical connection between binary classification and anomaly detection, we test our hypothesis in supervised anomaly detection experiments, showing that equality separation can detect both visible and invisible anomalies.
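As an illustrative sketch of the idea (the specific hyperplane, threshold, and smooth surrogate below are our own choices for exposition, not fixed by the abstract), an equality separator labels a point by its proximity to a single hyperplane rather than by the side on which it falls:
\[
  f(x) = \mathbb{1}\!\left[\,|w^\top x + b| \le \varepsilon\,\right],
  \qquad w = (1,1)^\top,\; b = -1,\; \varepsilon = \tfrac{1}{2},
\]
\[
  |w^\top x + b| =
  \begin{cases}
    0, & x \in \{(0,1),\,(1,0)\},\\[2pt]
    1, & x \in \{(0,0),\,(1,1)\},
  \end{cases}
\]
so one linear decision function classifies XOR when "within the margin" versus "outside the margin" is the decision rule. A possible smooth surrogate for embedding this rule in a neural network is a bump-shaped activation such as $\sigma(z) = e^{-z^2}$ applied to $w^\top x + b$.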