This paper presents an extension for training end-to-end context-aware transformer transducer (CATT) models using a simple but efficient method to mine hard negative phrases from the latent space of the context encoder. During training, given a reference query, we mine a number of similar phrases using approximate nearest neighbor search. These sampled phrases are then used as negative examples in the context list, alongside random and ground-truth contextual information. By including approximate nearest neighbor phrases (ANN-P) in the context list, we encourage the learned representation to disambiguate between similar, but not identical, bias phrases. This improves biasing accuracy when there are several similar phrases in the biasing inventory. We conduct experiments in a large-scale data regime, obtaining up to 7% relative word error rate reduction on the contextual portion of the test data. We also extend and evaluate the CATT approach in streaming applications.
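The mining step can be illustrated with a minimal sketch; this is not the paper's implementation. It assumes phrase embeddings from the context encoder are available as a matrix, substitutes exact cosine-similarity search for a true approximate nearest neighbor index, and all names (`mine_hard_negatives`, `inventory`) are hypothetical.

```python
import numpy as np

def mine_hard_negatives(query_emb, phrase_embs, phrases, k=5):
    """Return the k phrases whose context-encoder embeddings lie
    closest to the query embedding under cosine similarity.

    query_emb:   (d,)   embedding of the reference bias phrase
    phrase_embs: (n, d) embeddings of all candidate bias phrases
    phrases:     list of n phrase strings, aligned with phrase_embs
    """
    # Normalize so the inner product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    p = phrase_embs / np.linalg.nor_
```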