Security researchers have identified multiple attack scenarios targeting MLOps platforms such as Azure Machine Learning (Azure ML), BigML, and Google Cloud Vertex AI.
Azure ML can be compromised through device code phishing, where attackers steal access tokens and use them to exfiltrate models stored on the platform, according to a new research article from Security Intelligence. This attack vector exploits weaknesses in identity management and allows unauthorized access to machine learning (ML) assets.
BigML users face threats from exposed API keys found in public repositories, which could allow unauthorized access to private datasets. Because API keys often lack an expiration policy, they pose a persistent risk unless rotated regularly.
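Exposed keys of this kind can often be caught before they reach a public repository with a simple scan. The sketch below is a minimal illustration, not a production secret scanner: the regex is a generic assumption about key-like assignments, not the actual BigML key format.

```python
import re
from pathlib import Path

# Hypothetical pattern: key-like variable names assigned long alphanumeric
# literals. Adjust the regex to the actual key formats you need to catch.
KEY_PATTERN = re.compile(
    r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"]([A-Za-z0-9]{20,})['\"]"
)

def scan_for_keys(root: str) -> list[tuple[str, str]]:
    """Walk a checkout and report (file path, matched value) pairs."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for match in KEY_PATTERN.finditer(text):
            hits.append((str(path), match.group(2)))
    return hits
```

Running a check like this in a pre-commit hook or CI pipeline turns accidental key exposure from a silent leak into a blocked commit.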
Google Cloud Vertex AI is vulnerable to phishing and privilege escalation attacks that allow attackers to extract GCloud tokens and access sensitive ML assets. Attackers can leverage compromised credentials to perform lateral movement within an organization’s cloud infrastructure.
Protective measures
Security experts recommend several protective measures for each platform.
For Azure ML, best practices include enabling multi-factor authentication (MFA), isolating networks, encrypting data, and enforcing role-based access control (RBAC). BigML users should enforce MFA, rotate credentials frequently, implement fine-grained access controls, and limit data exposure. For Google Cloud Vertex AI, experts recommend following the principle of least privilege, disabling external IP addresses, enabling detailed audit logging, and minimizing service-account privileges.
As enterprises increasingly rely on AI technology for critical operations, it has become essential to protect MLOps platforms from threats such as data theft, model extraction, and dataset poisoning. Implementing proactive security configurations can strengthen your defenses and protect sensitive AI assets from evolving cyber threats.
Broader findings
The Security Intelligence investigation reveals vulnerabilities affecting a wide range of MLOps platforms, including Amazon SageMaker, JFrog ML (formerly Qwak), Domino Enterprise AI and MLOps Platform, Databricks, DataRobot, Weights & Biases (W&B), Valohai, and TrueFoundry.