[Coming Soon]
Weaponizing Apple AI for Offensive Operations

Black Hat USA 2025
Apple's on-device AI frameworks (CoreML, Vision, and AVFoundation) enable powerful automation and advanced media processing. However, these same capabilities introduce a stealthy attack surface that allows for payload execution, covert data exchange, and fully AI-assisted command-and-control (C2) operations.
This talk introduces MLArc, a CoreML-based C2 framework that abuses Apple's AI processing pipeline for payload embedding, execution, and real-time attacker-controlled communication. By leveraging machine learning models, image-processing APIs, and native macOS AI features, attackers can establish a fully functional AI-assisted C2 without relying on traditional execution mechanisms or external dependencies.
Beyond MLArc as a standalone C2, this talk explores how Apple's AI frameworks can be weaponized to enhance existing C2 frameworks such as Mythic, providing stealthy AI-assisted payload delivery, execution, and persistence. This includes the following Apple AI frameworks used for embedding the Apfell payload:
CoreML - Embedding and executing encrypted shellcode inside AI models.
Vision - Concealing payloads and encryption keys inside AI-processed images and retrieving them dynamically to bypass detection.
AVFoundation - Hiding and extracting payloads within high-frequency, AI-enhanced audio files using steganographic techniques.
This research marks the first public disclosure of Apple AI-assisted payload execution and AI-driven C2 on macOS, revealing a new class of offensive tradecraft that weaponizes Apple's AI pipelines for adversarial operations. I will demonstrate MLArc in action, showing how Apple's AI stack can be abused to establish fileless, stealthy C2 channels that evade traditional security measures.
This talk is highly technical, delivering new research and attack techniques with direct implications for macOS security, Apple AI exploitation, and red team tradecraft.