Preference-based reinforcement learning (PbRL) has shown great promise in learning from binary human preference feedback over an agent's trajectories, where a central goal is to reduce the amount of human feedback queried. While binary labels directly judge the quality of a trajectory as a whole, credit assignment within the trajectory must still be resolved, especially under limited feedback. We propose PRIor On Rewards (PRIOR), which learns a global forward dynamics model to approximate a priori selective attention over states, serving as a mechanism for credit allocation along a given trajectory. Furthermore, we propose an auxiliary objective that redistributes the total expected performance according to these PRIORs, a simple yet effective means of improving reward learning. Our experiments on six robot manipulation and three locomotion PbRL benchmarks demonstrate PRIOR's significant improvements in feedback-sampling efficiency and reward retrieval. Finally, we present extensive ablations that study our design decisions and demonstrate the ease of combining PRIOR with existing PbRL methods.