Highlights:
- To address generative AI risks, Google uses global security expertise to identify and fix vulnerabilities.
- Google’s Trust and Safety teams are taking a comprehensive approach to anticipating and testing for risks as generative AI is integrated into more products.
Artificial intelligence (AI) is advancing rapidly, and criminals and attackers are increasingly using it to improve their attack methods. In response to this growing threat, Google LLC has extended its Vulnerability Rewards Program (VRP) to include threats related to generative AI.
Generative AI specializes in producing content that closely resembles human-generated output, ranging from text and images to more intricate patterns. While it has many legitimate uses, it can also be a powerful tool for attackers looking to deceive individuals or systems. Its ability to create convincing fake content raises the potential for a wide range of cyber threats, including deepfake videos and counterfeit text.
Google says it recognizes these new challenges by expanding the VRP to cover generative AI. Traditionally, the VRP pays outside security researchers for identifying and disclosing potential security flaws in Google’s products and infrastructure. Rewards will now also be available to researchers who find security holes or other threats in generative AI models and applications, allowing Google to draw on the collective knowledge of the international security community to identify and address generative AI vulnerabilities.
Additionally, Google is reexamining best practices for classifying and reporting bugs in AI systems. Generative AI raises concerns that differ from traditional digital security, such as unfair bias, model manipulation, and incorrect interpretations of data, or “hallucinations.” As generative AI is incorporated into more products and features, Google’s Trust and Safety teams are taking a comprehensive approach to the safeguards they build, so they can better anticipate and test for potential risks.
Google’s VRP expansion is part of a larger trend in which major tech companies and organizations are working to address the new challenges that AI presents. Google and other top AI firms convened at the White House earlier this year to discuss ways to mitigate vulnerabilities built into AI systems.
Google also recently announced two additional measures to fortify the open-source AI supply chain, building on its existing partnership with the Open Source Security Foundation. The Google Open Source Security Team will apply two key tools to machine learning artifacts: Sigstore, which provides digital signatures with a public transparency log, and Supply-chain Levels for Software Artifacts (SLSA), a framework for hardening the supply chain through verifiable build provenance.
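To give a sense of what Sigstore-based artifact signing looks like in practice, the sketch below signs and verifies a model file with Sigstore’s cosign CLI. It is illustrative only and not drawn from Google’s announcement: the file name, signing identity, and OIDC issuer are hypothetical placeholders, and the flags follow cosign v2, so they may differ in other versions.

```python
import subprocess

# Hypothetical model artifact and output bundle used for illustration only.
MODEL = "model.safetensors"
BUNDLE = "model.sigstore.bundle"


def sign_model() -> None:
    # Signs the artifact and writes the signature plus verification material
    # into a single bundle file; the signing event is recorded in Sigstore's
    # public transparency log.
    subprocess.run(
        ["cosign", "sign-blob", "--yes", "--bundle", BUNDLE, MODEL],
        check=True,
    )


def verify_model() -> None:
    # Checks the signature and confirms it was produced by the expected
    # identity via the expected OIDC issuer (both values are placeholders).
    subprocess.run(
        [
            "cosign", "verify-blob",
            "--bundle", BUNDLE,
            "--certificate-identity", "release-bot@example.com",
            "--certificate-oidc-issuer", "https://accounts.google.com",
            MODEL,
        ],
        check=True,
    )


if __name__ == "__main__":
    sign_model()
    verify_model()
```

SLSA complements signing like this by attaching provenance that describes how and where an artifact was built, so consumers can check not only who signed a model but which pipeline produced it.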
As the industry continues to assess AI risks, Google intends to kickstart collaboration with the open-source community by integrating supply chain security into the machine learning development lifecycle.