Virtue AI
We conduct pioneering AI research to ensure AI is safe and secure.
Red Teaming & Risk Assessments
Pioneering comprehensive AI risk assessment across multiple sectors and languages. Our advanced red teaming algorithms rigorously test AI models and systems, ensuring robust safety measures aligned with global regulations.
Guardrail & Threat Mitigation
Developing cutting-edge, customizable content moderation solutions for text, image, audio, and video. Our guardrails offer transparent, policy-compliant protection with unparalleled speed and efficiency.
Safe Models & Agents
Crafting AI models and agents with inherent safety features, from secure code generation to safe decision-making. We’re integrating safety and compliance directly into AI development processes, setting new standards for responsible AI.
Publications
COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits
Abstract: Conformal prediction has shown spurring performance in constructing statistically rigorous prediction sets for arbitrary black-box machine learning models, assuming the data is exchangeable. However, …
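To illustrate the split conformal procedure this line of work builds on (a generic sketch of standard conformal prediction, not the COLEP algorithm itself; the function names `conformal_quantile` and `prediction_set` are illustrative):

```python
import math

def conformal_quantile(cal_scores, alpha):
    # Split conformal prediction: take the (1 - alpha) quantile of the
    # calibration nonconformity scores, with the finite-sample correction
    # (n + 1 in place of n) that yields coverage at least 1 - alpha.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the corrected quantile
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(softmax_probs, qhat):
    # Include every label whose nonconformity score (here, 1 - probability)
    # falls at or below the calibrated threshold qhat.
    return [label for label, p in enumerate(softmax_probs) if 1 - p <= qhat]

# Usage: calibrate on held-out scores, then form a set for a new input.
qhat = conformal_quantile([0.1, 0.2, 0.3, 0.4], alpha=0.5)
labels = prediction_set([0.8, 0.15, 0.05], qhat)
```

Under exchangeability, the returned set contains the true label with probability at least 1 - alpha, regardless of the underlying model.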
MMSum: A Dataset for Multimodal Summarization and Thumbnail Generation of Videos
Abstract: Multimodal summarization with multimodal output (MSMO) has emerged as a promising research direction. Nonetheless, numerous limitations exist within existing public MSMO datasets, including insufficient …
ChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles
Abstract: We present ChatScene, a Large Language Model (LLM)-based agent that leverages the capabilities of LLMs to generate safety-critical scenarios for autonomous vehicles. Given unstructured …
InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining
Abstract: Pretraining auto-regressive large language models (LLMs) with retrieval demonstrates better perplexity and factual accuracy by leveraging external databases. However, the size of existing pretrained retrieval-augmented …
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding
Abstract: While large vision-language models (LVLMs) have demonstrated impressive capabilities in interpreting multi-modal contexts, they invariably suffer from object hallucinations (OH). We introduce HALC, a …
Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing
Abstract: Randomized Smoothing (RS) is currently a scalable certified defense method providing robustness certification against adversarial examples. Although significant progress has been achieved in providing …
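For context on the certification the abstract refers to, a minimal sketch of the standard Gaussian randomized smoothing radius (the Cohen et al. style bound, not this paper's exponential Gaussian variant; `certified_radius` is an illustrative name):

```python
from statistics import NormalDist

def certified_radius(p_a_lower, sigma):
    # Gaussian randomized smoothing: if the smoothed classifier assigns the
    # top class probability at least p_a_lower (> 0.5) under additive noise
    # N(0, sigma^2 I), its prediction is certifiably robust to any L2
    # perturbation of norm up to sigma * Phi^{-1}(p_a_lower).
    if p_a_lower <= 0.5:
        return 0.0  # no certification possible below majority probability
    return sigma * NormalDist().inv_cdf(p_a_lower)

# Usage: a lower confidence bound on the top-class probability (e.g. from
# Monte Carlo sampling) yields a certified L2 radius.
radius = certified_radius(0.9, sigma=0.5)
```

Larger noise levels sigma widen the certifiable radius but degrade the base classifier's accuracy, which is the trade-off this line of work studies.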