Can We Align LLMs with Honesty by Adjusting Instructions? Addressing Hallucinations in Large Language Models with Rejection-Aware Adjustment of Instructions
Researchers from the Hong Kong University of Science and Technology and the University of Illinois Urbana-Champaign have collaborated to address ...