LLM microservices are transforming tech, but are you ready for the hidden risks? We delve into the unforeseen security...
#Technology #BreachAndBuild #LLMSecurity #Microservices #AIThreats
breachandbuild.com/llm-microservices-unfore...
Latest posts tagged with #LLMsecurity on Bluesky
🚨 Introducing the AI Security Village at BSides Luxembourg 2026! 🚨
🧠🤖 AI SECURITY VILLAGE – TECHNICAL TRAINING & IMPLEMENTATION (2-Day Deep Dive) – PARTH SHUKLA & NAGARJUN RALLAPALLI ⚙️🔥
THIS ISN'T JUST ANOTHER TRACK. THIS IS WHERE THEORY MEETS HANDS-ON AI […]
[Original post on infosec.exchange]
Agentic AI isn't a model safety problem.
It's an access control problem.
The planner decides to act. The system assumes it's allowed.
That gap is where least privilege dies.
rebrand.ly/yhru7ui
#AIGovernance #LLMSecurity #AgenticAI #ZeroTrust #CyberSecurity
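The gap this post describes — a planner proposing an action and the system assuming it is allowed — can be sketched as an explicit authorization gate between plan and execution. This is a minimal illustration, not any specific framework's API; the action names and roles are hypothetical.

```python
# Minimal sketch: an authorization gate between an agent's plan and
# its execution. All action names and roles here are hypothetical.

ALLOWED_ACTIONS = {
    "search_docs": {"reader", "admin"},
    "delete_file": {"admin"},
}

def execute(action: str, role: str) -> str:
    # The planner deciding to act is not the same as being allowed to act:
    # every proposed action is checked against the caller's role first.
    allowed_roles = ALLOWED_ACTIONS.get(action)
    if allowed_roles is None or role not in allowed_roles:
        raise PermissionError(f"role {role!r} may not run {action!r}")
    return f"executed {action}"
```

The point is that least privilege survives only if the check happens at execution time, outside the model's control, rather than being assumed from the plan.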
Prompt injection, data leakage through model outputs, models acting without human review — these live in your application layer, not in a policy doc. OWASP and NIST tell you exactly how to address them.
#AIgovernance #LLMsecurity #riskassessment #AI #security #LLM
Most developers treat AI governance as a legal problem. It's a code problem.
✨ Article link in the comments ✨ ⬇️⬇️⬇️
#AIgovernance #LLMsecurity #riskassessment #AI #security #LLM
Why the pentesting playbook doesn’t fit: belief, assumptions, and non-determinism About the author Hussein Bahmad Hussein is a penetration testing manager in NVISO’s SSA team in which he manag...
#AI #Security #AISecurity #AITesting #AppSec #LLMSecurity […]
[Original post on blog.nviso.eu]
✍️ New blog post by Gerardo Arroyo
Amazon Bedrock Guardrails: Content Filters, PII, and Streaming
#aws #awsbedrock #aisafety #llmsecurity
Companies are spending millions fine-tuning LLMs to be 1% smarter while spending almost nothing on what happens when someone actively tries to break them in production. The capability gap is closing fast. The security gap is barely being discussed. #LLMSecurity
💡 AI agents moving from experiment to enterprise?
Data governance is the difference between teams that scale safely and teams that make headlines for the wrong reasons.
RBAC, ABAC, or both? What's your stack? 👇
#AIAgents #DataSecurity #RBAC #ABAC #LLMSecurity #PII #CyberSecurity
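For readers weighing the RBAC-vs-ABAC question above, the difference can be shown in a few lines. This is a toy sketch with made-up roles and attributes, not a production policy engine.

```python
# RBAC: the decision depends only on the caller's role.
def rbac_allow(role: str, resource: str) -> bool:
    grants = {"analyst": {"reports"}, "admin": {"reports", "pii"}}
    return resource in grants.get(role, set())

# ABAC: the decision also considers attributes of the caller and the
# resource (department, classification), not just a static role.
def abac_allow(user: dict, resource: dict) -> bool:
    return (
        user["department"] == resource["owner_department"]
        and resource["classification"] != "pii"
    )
```

Many teams layer both: RBAC for coarse entitlements, ABAC for fine-grained rules like "no PII outside the owning department".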
The deeper lesson is that safety can fail in two places at once: incomplete command validation and weak observability across agent layers. If a lower-level agent can act while the top-level agent thinks it only detected risk, the system is not actually in control.
Multi-agent systems need […]
ape.hiddenlayer.com — a pretty cool AI tool from HiddenLayer
#AI #LLM #cybersecurity #llmsecurity
ContextHound v1.8.0 - Runtime Guard API is here.
Wrap any OpenAI or Anthropic call and inspect the messages before they send:
100% offline. No data leaves your machine. Ever.
#LLMSecurity #PromptInjection #OpenSource #AIRisk #CyberSecurity #DevSecOps #GenAI
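The pattern described — inspecting messages before they reach a model API — can be sketched generically. Note this is a hypothetical illustration of the pre-send guard idea, not ContextHound's actual API; the pattern list and function names are invented for the example.

```python
import re

# Hypothetical pre-send guard: scan outgoing chat messages for
# suspicious instruction-override phrases before any API call is made.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal .*system prompt", re.I),
]

def guard(messages: list[dict]) -> list[dict]:
    # Runs entirely locally; raises instead of sending flagged content.
    for msg in messages:
        for pat in INJECTION_PATTERNS:
            if pat.search(msg.get("content", "")):
                raise ValueError("possible prompt injection blocked")
    return messages
```

A wrapper like this sits between the application and the provider SDK, so nothing leaves the machine unless it passes inspection.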
JBDistill Generates Its Own Jailbreaks - 81.8% Attack Rate
awesomeagents.ai/news/jailbreak-distillat...
#AiSafety #LlmSecurity #Jailbreaking
Three new sections:
This week:
• anthropic-cookbook — 3,919 findings
• promptflow — 3,749 findings
• crewAI — 1,588 findings
• LiteLLM — 1,155 findings
• openai-cookbook — 439 findings
• MetaGPT — 8 findings
contexthound.com
#LLMSecurity #PromptInjection #AISecOps
Auditing an AI prompt: how to detect injections, jailbreaks, and exfiltrations before they reach your model.
👉 blog.gioria.org/fr/CyberSec/...
#CyberSécurité #LLMSecurity #PromptInjection #GenAI #DevSecOps
Full story:
www.technadu.com/claude-code-...
Curious to hear perspectives from red teamers, blue teamers, and AI engineers alike.
#CyberSecurity #AIThreats #LLMSecurity #DataBreach #ThreatModeling
AI as an attack engine.
Claude Code + GPT-4.1 reportedly used to breach Mexican government systems - exposing ~195M identities and 150GB+ of data.
1,000+ prompts generated exploits and automated exfiltration.
Are we prepared for AI-driven breach campaigns?
#CyberSecurity #AI #LLMSecurity
Claude Used To Steal Mexican Data
Read More: buff.ly/IPntG4O
#ClaudeAI #PromptInjection #AIPhishing #LLMSecurity #SocialEngineering #Anthropic #AIGovernance #CyberThreat
I built an open-source tool that throws 210+ adversarial attacks at LLMs. Encoding bypasses, jailbreaks, RAG poisoning, agent exploits. Most models fail. #llmsecurity
🚨 #Anthropic has identified an industrial-scale campaign by #DeepSeek, #Moonshot, and #MiniMax to illicitly extract Claude's capabilities and enhance their own models.
Full reading: www.anthropic.com/news/detecti...
#DistillationAttack #Claude #LLM #LLMSecurity
🚨 #Anthropic identified an industrial-scale campaign by #DeepSeek, #Moonshot, and #MiniMax to illicitly extract Claude's capabilities and improve their own models.
www.anthropic.com/news/detecti...
#DistillationAttack #Anthropic #LLM #LLMSecurity
A wave of CVEs has hit OpenClaw 🚨
But this is bigger than one project.
When AI agents gain access to shells, files and Docker, the threat model changes 🔐
Read our latest article:
basefortify.eu/posts/2026/0...
#AI #CyberSecurity #LLMSecurity 🤖
Prompt Injection Is the New Phishing. The most dangerous malware today doesn’t exploit code, it exploits instructions. youtu.be/Ze12t1iv81E #Cybersecurity #ArtificialIntelligence #AIsecurity #PromptInjection #AIGovernance #LLMSecurity #ThreatIntelligence #AIrisk #CISO
⚠️ When #AI systems remember, security risks multiply.
In this exclusive devm.io article, Nahla Davies explains how #MCP can enable data leaks, prompt injection, and new attack paths if it’s not threat-modeled properly.
📖 Read it here: https://app.devm.io/N4M6MIjA7Yb
#CyberSecurity #LLMSecurity
Full Article: www.technadu.com/poisoning-of...
As AI assistants become embedded in productivity tools, how should we secure their memory and input layers?
Comment your opinion below.
#ArtificialIntelligence #CyberSecurity #LLMSecurity #PromptInjection #Microsoft #AITrust
The #1 AI vulnerability—and nobody knows how to fix it yet.
On Hackers on the Rocks 🎙️
Guest: João Donato
🎧 Listen to the podcast here: bit.ly/4qRIz55
#PromptInjection #LLMSecurity #AI #CyberSecurity #DesiredEffect
Introducing Augustus: An open-source LLM vulnerability scanner with 210+ attacks across 28 providers. Secure your AI models effectively. #CyberSecurity #AI #LLMSecurity #OpenSource Link: thedailytechfeed.com/open-source-...
Good stuff here, folks! When you have a few minutes, read the article and the research (links below). #LLMsecurity
Story: www.reuters.com/technology/o...
Research: www.sentinelone.com/labs/silent-...