In this work, we propose mutual reinforcing data synthesis (MRDS) within LLMs to improve few-shot dialogue summarization. Unlike previous methods that require external knowledge, we mutually reinforce the LLM's dialogue synthesis and summarization capabilities, allowing them to complement each other during training and improve overall performance. The dialogue synthesis capability is enhanced by preference optimization guided by preference scores from the summarization capability. The summarization capability, in turn, is reinforced by the additional high-quality dialogue data produced by the dialogue synthesis capability. By leveraging the proposed MRDS mechanism, we elicit the LLM's internal knowledge in the form of synthetic data and use it to augment the real few-shot training dataset. Empirical results show that our method improves dialogue summarization, achieving a 1.5% increase in ROUGE scores and a 0.3% improvement in BERTScore in few-shot settings. Furthermore, our method attains the highest average scores in human evaluations, surpassing both pre-trained models and baselines fine-tuned solely on summarization tasks.