The Center for Research on Foundation Models (CRFM) at Stanford University, in collaboration with researchers at the MIT Media Lab and Princeton, has unveiled the Foundation Model Transparency Index. The index rates the transparency of large-scale AI models from various companies across 13 categories spanning the model development process, from training data and data labeling to model transparency, usage risks, security measures, and more.
According to the results, Meta's Llama 2 ranks first with a score of 54/100, primarily owing to its open-source nature. OpenAI's GPT-4 ranks third with 47/100, scoring highest in the capabilities category.
Fully open-source models generally performed well, thanks to their high model-transparency scores: Llama 2 takes the top spot, followed by Hugging Face's BLOOMZ in second place and Stable Diffusion 2 in fourth.
Researchers emphasize that model transparency is increasingly important as many companies begin withholding details about their models. Notably, OpenAI, despite the word "Open" in its name, is not truly open, and GPT-4 discloses less information publicly than its predecessors.
A centralized transparency index gives customers and users more confidence in adopting these models. However, even the highest-scoring model achieved only 54 out of 100, indicating that there is still considerable room for improvement in transparency within the AI world. The index itself is accompanied by a 100-page document containing detailed information and all the data used.
Source – Stanford
TLDR: The Foundation Model Transparency Index rates the transparency of large-scale AI models. Meta's Llama 2 ranks first, while OpenAI's GPT-4 ranks third. Open-source models generally performed well. Transparency is increasingly important as companies withhold model data. The index helps users gain confidence in these models, though there is still ample room for improvement. An accompanying document provides detailed information and all the data used.