*Equal contribution
Large language models (LLMs) are increasingly adapted to achieve task specificity for deployment in real-world decision systems. Several previous works have investigated the bias transfer hypothesis (BTH) by studying the effect of the fine-tuning adaptation strategy on model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of those models once adapted via fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy models in real-world systems. In contrast to previous work, we establish that intrinsic biases in pre-trained Mistral, Falcon and Llama models are strongly correlated (rho >= 0.94) with biases when the same models are zero- and few-shot prompted, using a pronoun co-reference resolution task. Furthermore, we find that bias transfer remains strongly correlated even when LLMs are specifically prompted to exhibit fair or biased behavior (rho >= 0.92), and when few-shot length and stereotypical composition are varied (rho >= 0.97). Our findings highlight the importance of ensuring fairness in pre-trained LLMs, especially when they are subsequently used to perform downstream tasks via prompt adaptation.
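The correlation analysis described above can be illustrated with a minimal sketch: given per-category bias scores measured intrinsically in a pre-trained model and again after zero- or few-shot prompting, Spearman's rho quantifies how strongly the bias ranking transfers. The scores below are placeholders, not data from the paper, and the snippet is not the authors' evaluation code.

```python
# Minimal sketch (assumption: illustrative values only, not results from the paper).
# Correlates a model's intrinsic bias scores with its bias scores under
# zero-/few-shot prompting, using Spearman's rho as in the abstract.

from scipy.stats import spearmanr

# Hypothetical per-group bias scores (e.g., gender-occupation stereotype rates)
# for the same model, measured intrinsically and after prompt adaptation.
intrinsic_bias = [0.62, 0.41, 0.75, 0.33, 0.58]   # pre-trained model
prompted_bias  = [0.60, 0.44, 0.71, 0.30, 0.55]   # same model, few-shot prompted

rho, p_value = spearmanr(intrinsic_bias, prompted_bias)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A high rho here would indicate, as the abstract reports for Mistral, Falcon and Llama, that the bias ordering observed in the pre-trained model largely carries over to the prompt-adapted setting.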