As multi-agent architectures become increasingly essential to enterprise workflows, Google’s A2A and Anthropic’s MCP have been proposed as standard protocols for agent communication and integration. These protocols have become foundational for scaling AI agents Technology, enabling the seamless integration of third-party agents, often available as open-source code, into existing systems. However, these protocols must also ensure system safety, and potential security risks must be carefully considered. In this presentation, we will highlight a key vulnerability in these protocols- integrating outsourced agent card’s text into the delegator agent’s instructions introduces a backdoor for cyber security attacks. Our presentation will first explain the protocol design and its weaknesses. Then, we will show how malicious agents with hidden prompt injection can bypass current defenses and checks. We will also present a way to combine user’s trust in LLMs and LLM hallucinations to drive the user to install malicious agent. Finally, we demonstrate how such malicious agents enable full system compromise, including DoS, sensitive data theft, Phishing and lateral spread. All those attacks are done without detections at all and look to the user as normal behavior of the system.