Industry Insights

Industry trends and practical insights for building safer AI.

VirtueGuard Now Available on Vertex AI Garden

Building Trustworthy AI in Finance: The AllianceBernstein and Virtue AI Case Study

MCPGuard: First Agent-based MCP Scanner to Protect AI Agents

Llama 4 Scout & Maverick Redteaming Analysis

Ensuring the safety and security of AI models is paramount as their adoption accelerates. At Virtue AI, our advanced red-teaming platform VirtueRed employs over 100…

EU AI Act: What Enterprises Need to Know and How Virtue Helps

The European Union’s AI Act, published in July 2024, marks a significant regulatory milestone in the global governance of artificial intelligence. As the first comprehensive…

GPT-4.5 vs Claude 3.7 – Advanced Redteaming Analysis

As AI models become increasingly adopted, ensuring their safe and responsible deployment is essential. At Virtue AI, we conduct rigorous red-teaming evaluations to stress-test AI…

Can Reasoning Improve Model Safety & Security? Claude 3.7 Red-Teaming Analysis by VirtueAI

This blog summarizes the latest red-teaming evaluations by our VirtueRed platform on three leading Anthropic models—Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, and…

VirtueGuard Dashboard: Turning Insights into Stronger AI Security

We’re excited to introduce the new VirtueGuard Dashboard, giving you real-time insights and continuous improvements for AI security. VirtueGuard has worked behind the scenes to…

How Safe Are OpenAI o3-mini and DeepSeek-R1? A Comparative Red-Teaming Analysis by VirtueAI

This blog provides a summary of our findings. For the free full R1 report, visit here. For the free full o3-mini report, visit here. Introduction:…

How Safe is Your AI Coding Assistant? A Virtue AI Security Audit

2024 has been a breakout year for AI coding tools. The thriving ecosystem grew with products like Cursor, GitHub Copilot, and Codeium transforming software development…

Accelerating Trust in AI: The Rivos and Virtue AI Approach to AI Safety and Security

We are excited to present a comprehensive assessment of the safety and security of the recently announced Llama-3.2-Vision model, exploring its potential and challenges as…