Aligning large language models (LLMs) with human expectations without human-annotated preference data is an important problem. In this paper, we propose a method to evaluate the preference between two responses by comparing their output probabilities under contrastive prompt pairs, which achieves better performance than RLAIF on LLaMA2-7B and LLaMA2-13B. Building on this, we propose an automatic alignment method, Direct Large Model Alignment (DLMA). First, we use contrastive prompt pairs to automatically generate preference data. Then, we evaluate the generated preference data with the same contrastive prompt pairs and compute a self-rewarding score. Finally, we use the DPO algorithm to align LLMs effectively by incorporating this self-rewarding score. In our experiments, DLMA outperforms the RLHF method without relying on any human-annotated preference data.
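As a rough illustration of the preference-evaluation step, the sketch below scores a response pair by how much more likely each response is under a positive contrastive prompt than under a negative one. The prompt wording, the helper names, the checkpoint identifier, and the use of Hugging Face `transformers` are all illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of preference evaluation with contrastive prompt pairs.
# Assumptions (not from the abstract): Hugging Face `transformers`, a causal
# LM checkpoint, and illustrative prompt templates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)
model.eval()

POS_PROMPT = "Give a helpful and harmless answer.\n"    # illustrative template
NEG_PROMPT = "Give an unhelpful and harmful answer.\n"  # illustrative template

@torch.no_grad()
def response_logprob(prompt: str, question: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on
    `prompt + question` under the causal LM. Assumes the tokenization of
    the context is a prefix of the tokenization of context + response."""
    context = prompt + question
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits  # (1, seq_len, vocab)
    # Next-token log-probs: position i predicts token i + 1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    resp_start = ctx_ids.shape[1] - 1  # index of first predicted response token
    return token_lp[0, resp_start:].sum().item()

def self_reward(question: str, y1: str, y2: str) -> float:
    """Contrastive score: positive values suggest y1 is preferred over y2.
    Each response is rated by how much its log-probability rises when the
    positive prompt replaces the negative one; the two ratings are compared."""
    r1 = (response_logprob(POS_PROMPT, question, y1)
          - response_logprob(NEG_PROMPT, question, y1))
    r2 = (response_logprob(POS_PROMPT, question, y2)
          - response_logprob(NEG_PROMPT, question, y2))
    return r1 - r2
```

A score of this form could then serve as the self-rewarding signal that is combined with the DPO objective in the final alignment step described above.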