Many apps have basic accessibility issues, such as missing labels or low contrast. Automated tools can help app developers detect such issues, but they can be laborious to run or require writing dedicated tests. In this work, we develop a system for generating mobile app accessibility reports through a collaborative process with accessibility stakeholders at Apple. Our approach combines varied data collection methods (e.g., app crawling, manual recording) with an existing accessibility scanner. Such scanners typically analyze a single screen at a time, so a key challenge in producing app-wide accessibility reports is effectively deduplicating and summarizing the issues collected across an app. To this end, we develop a screen clustering model with 96.9% accuracy (F1 score 88.8%) and UI element matching heuristics with 97% accuracy (F1 score 98.2%). We combine these technologies into a system that reports and summarizes the unique issues in an application, and we evaluate it with 18 accessibility-focused engineers and testers. The system can enhance existing accessibility testing toolsets and addresses key limitations of current accessibility scanning tools.
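To make the deduplication step concrete, the sketch below shows one way an app-wide report could collapse repeated findings by keying each detected issue on its screen cluster, a canonical element match, and the issue type. It is a minimal illustration only: the type and function names (`AccessibilityIssue`, `summarizeIssues`, and the stand-in clustering and matching closures) are hypothetical and do not reflect the system's actual implementation.

```swift
import Foundation

// Hypothetical issue record produced by a single-screen accessibility scan.
struct AccessibilityIssue: Hashable {
    let screenID: Int        // screen on which the issue was detected
    let elementID: String    // identifier of the flagged UI element
    let kind: String         // e.g. "missingLabel", "lowContrast"
}

/// Collapse duplicate issues across a crawl: findings on screens in the same
/// cluster that refer to matching elements and share a kind are grouped once.
func summarizeIssues(
    issues: [AccessibilityIssue],
    screenCluster: (Int) -> Int,          // stand-in for the screen clustering model
    canonicalElement: (String) -> String  // stand-in for the element matching heuristics
) -> [String: [AccessibilityIssue]] {
    var unique: [String: [AccessibilityIssue]] = [:]
    for issue in issues {
        // Key on (cluster, canonical element, kind) so duplicates collapse into one group.
        let key = "\(screenCluster(issue.screenID))|\(canonicalElement(issue.elementID))|\(issue.kind)"
        unique[key, default: []].append(issue)
    }
    return unique
}

// Example: two crawled screens in the same cluster flag the same unlabeled button.
let issues = [
    AccessibilityIssue(screenID: 1, elementID: "buyButton", kind: "missingLabel"),
    AccessibilityIssue(screenID: 2, elementID: "buyButton", kind: "missingLabel"),
]
let summary = summarizeIssues(
    issues: issues,
    screenCluster: { _ in 0 },   // stand-in: both screens fall in the same cluster
    canonicalElement: { $0 }     // stand-in: identifiers already canonical
)
print("Unique issues: \(summary.count)")  // 1
```

In this toy form, the quality of the report hinges on the two stand-in closures, which is why the clustering model and element matching heuristics are evaluated separately in the paper.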