Sign languages are essential for the deaf and hard-of-hearing (DHH) community. Sign language generation systems have the potential to support communication by translating written languages, such as English, into signed videos. However, current systems often fail to meet users' needs due to poor translation of grammatical structures, the absence of facial expressions and body language, and insufficient visual and motion fidelity. We address these challenges by building on recent advances in LLMs and video generation models to translate English sentences into natural-looking ASL signing videos. The text component of our model extracts information for the manual and non-manual components of ASL, which is used to synthesize skeletal pose sequences and corresponding video frames. Our findings from a user study with 30 DHH participants and extensive technical evaluations demonstrate significant progress and identify critical areas that must be addressed to meet users' needs.