Highlights:
- A recent report from Rezilion Inc., a software supply chain security platform, identifies several security risks associated with generative artificial intelligence.
- As per the report’s findings, even the most popular and cutting-edge models analyzed display immature and poor security postures.
While generative artificial intelligence, powered by generative pre-trained transformers and large language models, has gained significant popularity, the broader discussion around the sector often overlooks the security implications of these advancements.
A recent report from Rezilion Inc., a software supply chain security platform, delves into this precise issue by identifying several security risks associated with generative AI. The report investigated the 50 most popular generative AI projects on GitHub, using the Open Source Security Foundation (OpenSSF) Scorecard as an evaluation tool. The investigation highlighted various risks present in generative AI, including data management risks, inherent model risks, trust boundary risks, and general security issues.
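For context on the methodology, the OpenSSF Scorecard is an open-source tool that grades a public GitHub repository against a set of security checks on a 0-to-10 scale. The sketch below shows one hedged way such an evaluation could be scripted; it assumes the `scorecard` CLI is installed and on PATH, that a GitHub token is exported for API access, and that the JSON output of recent releases (with a top-level aggregate `score` field) applies. The repository name is only an illustrative placeholder, not part of the report's methodology.

```python
# Minimal sketch: scoring a GitHub project with the OpenSSF Scorecard CLI,
# roughly mirroring the kind of evaluation described in the Rezilion report.
# Assumptions: the `scorecard` binary is installed and on PATH, a GitHub token
# is exported (e.g. as GITHUB_AUTH_TOKEN) to avoid API rate limits, and the
# JSON output includes a top-level aggregate "score" field, as in recent releases.
import json
import subprocess


def scorecard_score(repo: str) -> float:
    """Run the OpenSSF Scorecard against a repository and return its aggregate score."""
    result = subprocess.run(
        ["scorecard", f"--repo={repo}", "--format=json"],
        capture_output=True,
        text=True,
        check=True,
    )
    report = json.loads(result.stdout)
    return report["score"]  # aggregate score on a 0-to-10 scale


if __name__ == "__main__":
    # Illustrative placeholder repository; the Rezilion study repeated this
    # kind of check across the 50 most popular generative AI projects.
    repo = "github.com/example-org/example-genai-project"
    print(f"{repo}: {scorecard_score(repo):.1f} / 10")
```

Running a loop of this kind over a list of repositories and averaging the results would yield the sort of aggregate figure the report cites, though the exact checks and weighting depend on the Scorecard version used.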
According to the researchers, many AI models are granted excessive access and authorization without adequate security measures to ensure their safe use. This combination of excessive permissions and a lack of maturity and basic security best practices in the open-source projects that use these models creates an environment favorable to breaches.
As per the report’s findings, even the most popular and cutting-edge models analyzed display immature and poor security postures. Across the 50 projects studied, the OpenSSF Scorecard produced an average score of 4.6 out of 10. Auto-GPT, the most popular GPT-based project on GitHub, scored just 3.7.
According to Yotam Perkal, Rezilion’s Director of Vulnerability Research, “Generative AI is increasingly everywhere, but it’s immature and extremely prone to risk. On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails.”
Perkal emphasized that the primary goal of Rezilion’s research is to show how the use of insecure generative AI and LLMs in open-source projects leads to a poor security posture, ultimately creating a high-risk environment for organizations.
The research from Rezilion concludes with a set of recommended practices for securely deploying and operating generative AI systems. These include educating teams about the risks of adopting new technologies, closely monitoring security risks associated with LLMs and the open-source ecosystem, implementing security best practices, conducting comprehensive risk assessments, and fostering a culture of security awareness.