Featured Posts
The AI Supply Chain: Lessons from the Drift Incident
4 months ago
Research
The first major AI-adjacent SaaS supply-chain breach has arrived. In August 2025, attackers exploited integrations tied to Salesloft’s Drift app, an AI chatbot and sales automation assistant, to compromise OAuth tokens and pivot into Salesforce and Google Workspace. This was not an LLM jailbreak or a chatbot hallucination leading to […]
More Reading
AI Misuse in the Wild: Inside Anthropic’s August Threat Report
Anthropic released its August 2025 threat intelligence report, adding to a growing body of evidence that artificial intelligence is now deeply embedded […]
Key Takeaways from IBM’s 2025 Cost of a Data Breach Report
For 20 years, IBM’s Cost of a Data Breach Report has been one of the industry’s most trusted sources on the financial […]
Anthropic Warned About AI Employees: But Here’s What Comes Next
When Anthropic’s CEO predicted that AI employees would take over white-collar jobs within 12 months, most of the reaction focused on […]
Acuvity Launches Open Source Secure MCP Server Protection to Enable Safe, Scalable Gen AI Integration
Acuvity’s open source release helps developers adopt the Model Context Protocol with built-in security, deployment tooling, and protocol-level safeguards. SUNNYVALE, Calif., July 16, […]
Grok Grok – Someone Forgot to Bolt the Lock
The Good for HumAnIty: Grok 4 was released on July 9, 2025, with much fanfare—thanks to its impressive capabilities, especially in STEM, […]
Why Gen AI Security is Mission-Critical for AppSec Teams
How Application Security can stay ahead in the age of AI-powered development. The rise of Generative AI (Gen AI) is transforming how software […]
Cross-Server Tool Shadowing: Hijacking Calls Between Servers
Context: MCP allows an AI agent to connect to multiple tool servers simultaneously. This flexibility can be dangerous: if one of those […]
Rug Pulls (Silent Redefinition): When Tools Turn Malicious Over Time
Context: Imagine that the AI assistant’s tool was actually safe at first – perhaps you used it for days without issue. Then, […]
Secrets in the Wind: Environment Variables, URLs, and the Leaky Abstractions
Context: In the evolving landscape of MCP servers and AI agents, a new category of risk is emerging: sensitive data exposure through […]
Tool Poisoning: Hidden Instructions in MCP Tool Descriptions
Imagine installing a seemingly benign math tool on your AI assistant that simply adds two numbers. Unbeknownst to you, the tool’s description […]
Gen AI Runtime Protection with Gen AI gateways
The rise of generative AI has introduced new threats while driving demand for solutions that manage AI services and help development teams […]
Acuvity Launches RYNO: The Industry’s First Context-Aware Gen AI Security Platform to Protect the AI-Driven Enterprise
SUNNYVALE, Calif., June 26, 2025 /PRNewswire/ — Acuvity, a leading runtime generative AI security platform company, today announced the launch of RYNO, the first Gen AI security […]
Deploy a simple chatbot application using Secure MCP Servers
Context: You’ve built an agentic application that leverages MCP servers to give your agent advanced capabilities… and now it’s time to ship […]
MCP Server: The Dangers of “Plug-and-Play” Code
With great power comes great(er) responsibility. Since its launch in November 2024, MCP (Model Context Protocol) has been adopted across industries, for […]
AI Security Series 5 – Model Training
As enterprises increasingly adopt Large Language Models (LLMs), some choose to pre-train or fine-tune models. This blog describes problems that one […]
AI Security Series 4 – Model Usage
At the heart of any AI application or agentic system are LLMs. Your developers and vendors are using multiple LLMs to achieve […]
AI Security Series 3 – Datastores
Modern AI applications—especially those involving conversational agents, retrieval-augmented generation (RAG), and enterprise copilots—depend heavily on a variety of datastores to supply, retrieve, […]
AI Security Series 2 – Gen AI Application Security Pillars
As enterprises rapidly integrate AI systems into core workflows, the need to adopt a security-first mindset becomes imperative. These systems, especially those […]
AI Security Series 1 – Applications and Agents
Introduction: The rapid advancement of AI technologies—particularly large language models (LLMs) and agentic systems—has transformed the way modern applications are built and […]
AI Security Series: What It Really Takes to Secure Gen AI
Acuvity’s AI Security Series offers a comprehensive exploration of securing AI systems, with a particular focus on Large Language Models (LLMs) and […]
How Acuvity Manages Gen AI Risk
Gen AI adoption in enterprises has surged due to productivity and efficiency gains, reduced costs, improved decision making, enhanced customer experience and […]
Acuvity Secures Two Global InfoSec Awards for Generative AI Security at RSAC 2025
Multi-category win highlights Acuvity’s comprehensive approach to securing the Gen AI era. Sunnyvale, California – April 28, 2025 – Acuvity, the comprehensive AI […]
Becoming an AI Operator of Excellence
The launch of ChatGPT catapulted AI from an obscure science project to ubiquitous and indispensable in just a year. […]
Why Your AI Security Needs a Content-First Approach
In the age of AI, it’s hard to predict the nuanced ways people will interact with technology, and even harder to control […]
Seeing the Unseen: How Acuvity’s Visualization Transforms AI Security
“If you cannot measure it, you cannot manage it.” – Peter Drucker. If an AI security threat isn’t visible, does that mean it’s […]