Highlights:
- The integration lets organizations use Nvidia’s AI infrastructure for high-performance machine learning while ensuring visibility and security through JFrog’s DevSecOps workflows.
- By combining JFrog Artifactory with Nvidia NIM, JFrog claims enterprises can streamline AI model management and speed up the deployment of LLMs.
Software supply chain company JFrog Ltd. recently unveiled a new runtime security solution along with a product integration with Nvidia Corp. The collaboration gives users advanced security features and the ability to protect artificial intelligence models.
JFrog’s new JFrog Runtime provides comprehensive protection for applications across their entire lifecycle, from development through deployment and production. The service integrates into DevSecOps workflows, enabling organizations to implement security measures at each stage of the software supply chain. By addressing vulnerabilities in real time, it ensures that cloud-native applications, such as containers in Kubernetes environments, are continuously monitored for potential risks.
JFrog Runtime’s features include real-time vulnerability detection and risk prioritization, enabling security and development teams to identify security issues based on their business impact, speeding up the triage process. The platform also protects applications from post-deployment threats, such as malware or privilege escalation attacks, by applying advanced monitoring to cloud-based workloads.
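The announcement doesn't show the workflow itself, but in the existing JFrog tooling that JFrog Runtime slots into, pre-deployment checks like these are typically driven from the JFrog CLI. A minimal sketch of that surrounding workflow, assuming a configured JFrog CLI and an illustrative image name (not taken from the announcement):

```shell
# Scan a locally built container image for known vulnerabilities before
# it is deployed to Kubernetes (image name is illustrative).
jf docker scan my-app:1.0.0

# Audit the current project's dependencies for known vulnerabilities,
# supporting the impact-based triage the article describes.
jf audit
```

Runtime monitoring then picks up where these pre-deployment scans leave off, watching the same workloads after they ship.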
JFrog Runtime fosters better collaboration between security and development teams by offering a unified platform for risk management. This platform enables developers to track software packages from multiple sources to ensure compliance and version control. At the same time, security teams can enforce policies that uphold the integrity of the software throughout its entire lifecycle.
Alongside the announcement of JFrog Runtime, JFrog also revealed a partnership with Nvidia to integrate Nvidia NIM microservices into the JFrog Platform. This integration allows enterprises to deploy secure, GPU-optimized AI models swiftly. Nvidia NIM, a microservices platform within Nvidia AI Enterprise, provides GPU-optimized infrastructure for the safe and efficient deployment of high-performance AI and large language models.
This new integration enables organizations to utilize Nvidia’s AI infrastructure for high-performance machine learning while ensuring visibility and security through JFrog’s unified DevSecOps workflows.
By combining JFrog Artifactory with Nvidia NIM, JFrog claims that enterprises can streamline AI model management and speed up the deployment of large language models (LLMs). The platform also provides centralized control, ensuring compliance and traceability throughout AI deployments, from development to production.
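NIM models ship as standard container images, so one plausible shape for this workflow is routing those images through an Artifactory-hosted Docker registry, where they are versioned and scanned like any other artifact. A hedged sketch, in which the Artifactory hostname, repository name, and model image are illustrative assumptions rather than details from the announcement:

```shell
# Authenticate to NVIDIA's container registry (NGC API key) and to a
# hypothetical Artifactory-hosted Docker registry.
docker login nvcr.io -u '$oauthtoken' -p "$NGC_API_KEY"
docker login artifactory.example.com -u "$JFROG_USER" -p "$JFROG_TOKEN"

# Pull a GPU-optimized NIM image (illustrative model name).
docker pull nvcr.io/nim/meta/llama3-8b-instruct:latest

# Re-tag and push into the Artifactory repository so the model is
# versioned, scannable, and traceable like any other artifact.
docker tag nvcr.io/nim/meta/llama3-8b-instruct:latest \
  artifactory.example.com/nim-docker-local/llama3-8b-instruct:latest
docker push artifactory.example.com/nim-docker-local/llama3-8b-instruct:latest
```

In practice Artifactory can also act as a pull-through proxy for an upstream registry, which would let teams consume NIM images without the manual re-tag step.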
“As enterprises scale their generative AI deployments, a central repository can help them rapidly select and deploy models that are approved for development. The integration of Nvidia NIM microservices into the JFrog Platform can help developers quickly get fully compliant, performance-optimized models running in production,” said Pat Lee, vice president of enterprise strategic partnerships at Nvidia.
JFrog Artifactory offers a unified solution for storing and managing all artifacts, binaries, packages, files, containers, and components used throughout software supply chains. The integration with Nvidia NIM is expected to bring containerized AI models into existing software development workflows as standard software packages.
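For non-container artifacts such as model weights, Artifactory's standard deploy-artifact REST endpoint (`PUT <repo>/<path>`) is the usual way items enter the repository. A minimal sketch, assuming a hypothetical hostname, repository, and model file:

```shell
# Deploy a model artifact to a hypothetical generic repository via
# Artifactory's deploy-artifact endpoint.
curl -u "$JFROG_USER:$JFROG_TOKEN" \
  -T model.onnx \
  "https://artifactory.example.com/artifactory/ml-models-local/llama3/1.0/model.onnx"

# Any CI job can later retrieve the exact same version by path,
# giving the traceability from development to production described above.
curl -u "$JFROG_USER:$JFROG_TOKEN" -O \
  "https://artifactory.example.com/artifactory/ml-models-local/llama3/1.0/model.onnx"
```

Storing models by versioned path is what makes the compliance and traceability claims concrete: every deployment can be traced back to a specific, immutable artifact.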