Google has announced the expansion of its Vulnerability Reward Program (VRP) to cover vulnerabilities and bugs related to Generative AI. The program aims to incentivize research into the security risks associated with AI, with the goal of making AI safer for everyone.
Examples of Generative AI vulnerabilities that Google considers in scope include unfair bias, inappropriate modifications to models, and misinterpretation of data. Google notes that while its internal security team already tests for risks that could arise from implementing Generative AI in its products, collaboration with external researchers will further enhance the safety of its Generative AI.
Google explains in more detail which vulnerabilities are eligible or ineligible for the program at this link.
TL;DR: Google is expanding its Vulnerability Reward Program to include vulnerabilities and bugs related to Generative AI. It encourages external research to improve the safety of its Generative AI in areas such as unfair bias, inappropriate model modifications, and misinterpretation of data. More information on eligible and ineligible vulnerabilities can be found at the provided link.