Neural knowledge-to-text generation models often have difficulty generating faithful descriptions of input facts: they may produce hallucinations that contradict the given facts or describe facts not present in the input. To reduce hallucinations, we propose a novel decoding method, TWEAK (Think While Effectively Articulating Knowledge). TWEAK treats the sequences generated at each decoding step, together with their future continuations, as hypotheses, and ranks each generation candidate based on how well its corresponding hypotheses support the input facts, as judged by a Hypothesis Verification Model (HVM). We first demonstrate the effectiveness of TWEAK by using a natural language inference (NLI) model as the HVM, and report improved fidelity with minimal impact on quality. We then replace the NLI model with a task-specific HVM trained on a first-of-its-kind dataset, FATE (Fact-Aligned Textual Entailment), which pairs input facts with faithful and hallucinated descriptions, with the hallucinated spans marked. The new HVM further improves fidelity and quality, and runs faster. Overall, the best TWEAK variants improve fidelity, as measured by FactKB, by 2.22/7.17 points on average on WebNLG and TekGen/GenWiki, respectively, with only a 0.14/0.32-point degradation in quality as measured by BERTScore on the same datasets. Since TWEAK is a decoding-only approach, it can be integrated with any neural generative model without retraining.
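The candidate-ranking idea above can be sketched in miniature. The following is an illustrative, self-contained toy, not the paper's implementation: `lm_log_prob` and `hvm_support` are hypothetical stand-ins for the generator's scoring and for an HVM (e.g. an NLI model), and the lexical-overlap "support" heuristic is only a placeholder for real entailment scoring.

```python
# Toy sketch of TWEAK-style reranking at one decoding step (illustrative only).

def lm_log_prob(candidate: str) -> float:
    # Stand-in for the generator's log-probability of a candidate sequence.
    return -0.1 * len(candidate.split())

def hvm_support(facts, hypothesis: str) -> float:
    # Stand-in for the HVM: fraction of input (subj, rel, obj) facts whose
    # object string appears in the hypothesis -- a crude entailment proxy.
    hits = sum(1 for (_, _, obj) in facts if obj.lower() in hypothesis.lower())
    return hits / len(facts)

def tweak_rank(facts, candidates, alpha=0.5):
    """Rank candidates by LM score plus alpha-weighted HVM support."""
    scored = [(lm_log_prob(c) + alpha * hvm_support(facts, c), c)
              for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]

facts = [("Alan_Turing", "birthPlace", "London")]
candidates = [
    "Alan Turing was born in Paris.",
    "Alan Turing was born in London.",
]
best = tweak_rank(facts, candidates)[0]  # the fact-supported candidate wins
```

Because the reranking touches only candidate scores at decode time, it composes with any generator that exposes its candidate sequences, which is the sense in which the method needs no retraining.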