As language models become increasingly advanced, concerns have arisen around the ethical and legal implications of training them on vast and diverse data sets. If the provenance of training data is not properly understood, sensitive information can leak between training and test sets, exposing personally identifiable information (PII), introducing bias or unwanted behavior, and ultimately producing lower-quality models than expected. The lack of complete information and documentation about these data sets creates significant ethical and legal risks that must be addressed.
A team of researchers from various institutions, including MIT, Harvard Law School, UC Irvine, the MIT Center for Constructive Communication, Inria, Univ. Lille, Contextual AI, MLCommons, Olin College, Carnegie Mellon University, Tidelift, and Cohere For AI, has demonstrated its commitment to transparency and responsible data use by publishing a comprehensive audit. The audit includes the Data Provenance Explorer, an interactive user interface that allows practitioners to trace and filter data provenance for widely used open-source fine-tuning data collections.
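A filtering workflow of this kind can be imagined as follows. This is a minimal, hypothetical sketch; the record fields and license names are illustrative and are not the Data Provenance Explorer's actual schema or API:

```python
from dataclasses import dataclass, field

# Hypothetical dataset metadata record; field names are illustrative,
# not the Data Provenance Explorer's actual schema.
@dataclass
class DatasetRecord:
    name: str
    license: str
    languages: list = field(default_factory=list)
    creator: str = ""

def filter_by_license(records, allowed):
    """Keep only records whose license is in the allowed set."""
    return [r for r in records if r.license in allowed]

records = [
    DatasetRecord("alpha", "MIT", ["en"], "lab-a"),
    DatasetRecord("beta", "Non-commercial", ["fr"], "lab-b"),
    DatasetRecord("gamma", "Apache-2.0", ["en", "de"], "lab-c"),
]

# Retain only data sets whose stated license permits commercial use.
commercial_ok = filter_by_license(records, {"MIT", "Apache-2.0"})
print([r.name for r in commercial_ok])  # → ['alpha', 'gamma']
```

In the real tool, such filters operate over audited provenance metadata rather than self-reported hosting-site labels, which (as the audit shows) are frequently missing or wrong.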
Copyright laws give authors exclusive ownership of their work, while open-source licenses encourage collaboration in software development. Supervised AI training data, however, presents unique challenges for managing such licenses effectively. The interplay between copyright and license permissions within collected data sets remains unsettled, with ongoing legal challenges and uncertainty about how existing laws apply to generative AI and supervised training data. Previous work, including datasheets for data sets and related studies, has highlighted the importance of data documentation and attribution and the need for comprehensive documentation to justify how data sets are used and preserved.
The study involved manually retrieving pages and automatically extracting licenses from HuggingFace configurations and GitHub pages. The researchers also used the Semantic Scholar public API to retrieve academic publication release dates and citation counts. To ensure fair treatment across languages, they measured data properties at the character level, including text metrics such as dialogue turns and sequence length. In addition, they conducted a landscape analysis tracing the lineage of more than 1,800 text data sets, examining their sources, creators, licensing conditions, properties, and subsequent use. To facilitate auditing and monitoring, they developed tools and standards to improve the transparency and responsible use of data sets.
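The character-based measurement described above can be approximated with a short sketch. The metric names and the choice of measures here are assumptions for illustration, not the authors' exact implementation:

```python
def text_metrics(turns):
    """Compute simple character-level properties of a dialogue.

    `turns` is a list of utterance strings. Measuring length in
    characters rather than whitespace-separated tokens avoids
    penalizing languages that do not put spaces between words.
    (Metric names are illustrative, not the paper's exact ones.)
    """
    text = "".join(turns)
    return {
        "dialogue_turns": len(turns),
        "sequence_length_chars": len(text),
        "mean_turn_length_chars": len(text) / len(turns) if turns else 0.0,
    }

# Works identically for space-delimited and unsegmented scripts.
print(text_metrics(["Hello!", "你好世界"]))
```

A token-based count would report very different lengths for the English and Chinese turns above even when they carry comparable content, which is why a character-level measure is the fairer cross-lingual yardstick.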
The landscape analysis revealed stark differences in the composition and focus of commercially open versus closed data sets. Closed, hard-to-access data sets dominate essential categories: lower-resource languages, more creative tasks, a broader range of topics, and newer, more synthetic training data. The study also documented widespread misattribution and uninformed use of popular data sets: on popular dataset hosting sites, license omission rates exceed 70% and license error rates exceed 50%. These findings underscore the need for comprehensive data documentation and attribution, and highlight the difficulty of synthesizing documentation for models trained on many data sources.
The study concludes that commercially open and closed data sets differ significantly in composition and focus, with inaccessible data sets monopolizing important categories and deepening the divide between what is available under different licensing conditions. The frequent license misclassifications and high omission rates found on dataset hosting sites point to real problems of misattribution and uninformed use, raising concerns about data transparency and responsible use. By publishing the full audit, including the Data Provenance Explorer, the researchers aim to improve the transparency and understanding of data sets and to address the legal and ethical risks of training language models on inconsistently documented data.
Check out the Paper and Project. All credit for this research goes to the researchers of this project. Also, don't forget to join our 35k+ ML SubReddit, 41k+ Facebook community, Discord channel, and email newsletter, where we share the latest news on AI research, interesting AI projects, and more.
If you like our work, you'll love our newsletter.
Sana Hassan, a consulting intern at Marktechpost and a dual degree student at IIT Madras, is passionate about applying technology and artificial intelligence to address real-world challenges. With a strong interest in solving practical problems, she brings a new perspective to the intersection of ai and real-life solutions.