* Equal contribution
In the context of a voice assistant system, steering refers to the phenomenon where a user issues a follow-up command attempting to direct or clarify a previous turn. We propose STEER, a steering detection model that predicts whether a follow-up turn is the user's attempt to steer the previous command. Building a training dataset for steering use cases poses challenges due to the cold-start problem. To overcome this, we develop heuristic rules to sample opt-in usage data, approximating positive and negative samples without any annotation. Our experimental results show promising performance in identifying steering intent, with over 95% accuracy on our sampled data. Furthermore, STEER, together with our sampling strategy, aligns well with real-world steering scenarios, as evidenced by its strong zero-shot performance on a human-graded evaluation set. While STEER relies solely on user transcripts as input, we also introduce STEER+, an improved version of the model. STEER+ uses a semantic parse tree to provide additional context on out-of-vocabulary words, such as named entities that frequently occur at the sentence boundary. This further improves model performance, reducing the error rate in domains where entities appear frequently, such as messaging. Finally, we present a data analysis that highlights the improvement in user experience when voice assistants support steering use cases.
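The abstract does not spell out the heuristic sampling rules, so as a purely illustrative sketch, one simple way to approximate steering labels from unannotated usage logs is to combine a short time gap between turns with lexical overlap between the follow-up and the previous command. The thresholds, stopword list, and function names below are all hypothetical, not the paper's actual rules.

```python
import re

# Hypothetical minimal stopword list; a real system would use a larger one.
STOPWORDS = {"the", "a", "an", "to", "please", "can", "you", "my", "is"}

def content_words(utterance):
    """Lowercase, tokenize on letters/apostrophes, and drop stopwords."""
    return {w for w in re.findall(r"[a-z']+", utterance.lower())
            if w not in STOPWORDS}

def heuristic_steering_label(prev_turn, follow_up, gap_seconds,
                             max_gap=10.0, min_overlap=1):
    """Label a (previous turn, follow-up) pair as a likely steering attempt.

    Hypothetical rule: the follow-up must arrive within `max_gap` seconds
    and share at least `min_overlap` content words with the previous
    command; otherwise it is treated as a negative (non-steering) sample.
    Returns 1 for a likely steering attempt, 0 otherwise.
    """
    if gap_seconds > max_gap:
        return 0
    overlap = content_words(prev_turn) & content_words(follow_up)
    return 1 if len(overlap) >= min_overlap else 0
```

For example, a quick follow-up like "no, play classical jazz" after "play jazz" would be sampled as positive, while an unrelated turn such as "what is the weather" would be sampled as negative.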