Background and context
This week’s ThreatsDay-style roundup is less about a single blockbuster breach and more about a pattern defenders keep seeing: attackers are refining familiar techniques and stitching them together into quieter, more effective intrusion chains. The themes highlighted in the bulletin—OAuth abuse, tools that disable endpoint protection, phishing through Signal, malicious ZIP delivery, and attacks against AI platforms—map closely to trends documented by Microsoft, Google, MITRE, CISA, and OWASP over the past several years. Rather than relying on one noisy malware family, many intrusions now blend identity abuse, social engineering, and defense evasion to stay below the threshold of traditional alerts [Microsoft; Google Cloud; MITRE ATT&CK].
The common thread is trust abuse. OAuth workflows are trusted because they often use legitimate login pages. Messaging apps are trusted because they feel personal and encrypted. ZIP files are trusted because they are ordinary business attachments. AI platforms are trusted because they are treated like productivity infrastructure rather than high-value production systems. Attackers do not need a novel zero-day if they can borrow legitimacy from systems users already recognize.
OAuth traps: access without stealing a password
OAuth consent phishing remains one of the cleaner ways to compromise cloud accounts. Instead of harvesting a password directly, the attacker persuades a user to authorize a malicious or attacker-controlled app. If the user clicks through a legitimate consent screen and grants permissions such as mailbox access, file access, or offline access, the attacker may receive tokens that allow persistent access even after the user changes their password [Microsoft, “Protecting against malicious OAuth apps”; Google Workspace Admin Help].
That matters because multi-factor authentication may not help if the user is authorizing the app themselves. In Microsoft 365 and Google Workspace environments, this can lead to mailbox surveillance, document theft, business email compromise, or internal phishing from a real account. MITRE tracks this under techniques including application access token theft and valid account abuse [MITRE ATT&CK T1528, T1078].
For defenders, OAuth abuse is often visible in places many teams do not monitor closely enough: new app consents, unusual service principal creation, risky scopes, refresh token usage, or access from odd IP ranges. The attack is technically simple but operationally powerful because it turns a user click into durable cloud access.
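As a concrete illustration of that monitoring gap, the sketch below triages consent events for risky scopes. The event shape (`action`, `app`, `user`, `scopes` fields) is a hypothetical, simplified schema for illustration, not any vendor's actual audit log format; the scope names are drawn from Microsoft Graph permission naming.

```python
# Hypothetical triage helper: flag OAuth consent events that grant
# high-risk scopes. The event dict shape is an assumption, not a
# real vendor schema.
RISKY_SCOPES = {
    "offline_access",          # long-lived refresh tokens
    "Mail.Read", "Mail.ReadWrite",
    "Files.ReadWrite.All",
    "MailboxSettings.ReadWrite",
}

def flag_risky_consents(events):
    """Return consent events whose granted scopes include a risky scope."""
    flagged = []
    for ev in events:
        if ev.get("action") != "consent_granted":
            continue
        hits = set(ev.get("scopes", [])) & RISKY_SCOPES
        if hits:
            flagged.append({
                "app": ev.get("app"),
                "user": ev.get("user"),
                "risky_scopes": sorted(hits),
            })
    return flagged
```

In practice this logic would sit on top of real audit log queries, combined with signals such as app age, publisher verification status, and source IP ranges.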
EDR killers and the fight over visibility
The “EDR killer” angle reflects another well-established and serious trend: once an attacker lands on an endpoint, one of the first goals may be to blind defenders. MITRE classifies this as impairing defenses, and the methods range from stopping services with administrative tools to abusing vulnerable signed drivers in so-called bring-your-own-vulnerable-driver attacks [MITRE ATT&CK T1562; CISA Known Exploited Vulnerabilities guidance; Microsoft threat research].
Why does this matter so much? Because endpoint detection and response tools often provide the telemetry security teams depend on to spot ransomware staging, credential dumping, and lateral movement. If an attacker can disable or tamper with the agent, they create a short but dangerous window in which they can move faster than the response team. Public reporting from Microsoft and other vendors has repeatedly shown ransomware operators using service control tools, registry changes, and driver abuse to suppress security products before encryption or exfiltration [Microsoft Security Blog; CISA #StopRansomware resources].
Technically, this is not always a flashy exploit. The attacker may use built-in Windows utilities such as sc.exe, taskkill, PowerShell, or reg.exe. In more advanced cases, they load a legitimately signed but vulnerable driver to gain kernel-level capabilities and terminate protected processes. That is why driver blocklists, tamper protection, and least-privilege controls matter as much as the EDR product itself.
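Because these utilities leave command lines in process telemetry, even simple pattern matching can surface attempts. The heuristics below are illustrative and deliberately narrow; the service and process names are examples, and real detection content would be far broader and tuned against false positives.

```python
import re

# Illustrative patterns for built-in Windows utilities abused to disable
# security tooling. Service/process name fragments are examples only.
TAMPER_PATTERNS = [
    # sc.exe stopping/deleting/reconfiguring a security-sounding service
    re.compile(r"\bsc(\.exe)?\s+(stop|delete|config)\s+\S*(sense|defend|edr)", re.I),
    # taskkill aimed at a security process image
    re.compile(r"\btaskkill\b.*/im\s+\S*(msmpeng|edr|sentinel)", re.I),
    # registry edits toggling protection settings
    re.compile(r"\breg(\.exe)?\s+add\b.*(disableantispyware|tamperprotection)", re.I),
]

def is_tamper_attempt(cmdline):
    """Return True if a process command line matches a tampering heuristic."""
    return any(p.search(cmdline) for p in TAMPER_PATTERNS)
```

Rules like these are most useful as one input to a correlation layer, since benign administration can look superficially similar.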
Signal phishing: trusted chat as an attack channel
Phishing through Signal is part of a broader move away from email-only lures. Messaging platforms are attractive because users tend to respond faster, trust contacts more, and face fewer enterprise filtering controls. A Signal message from a known colleague, journalist source, executive assistant, or crypto contact can be far more convincing than a cold email [CISA phishing guidance; Google Threat Analysis Group reporting on social engineering trends].
The mechanics vary. Attackers may send a link to a lookalike login page, ask the victim to approve a device-linking request, or push them toward a fake account recovery flow. In some cases, the attacker first compromises one account and then uses that legitimate identity to phish others. The end-to-end encryption that protects privacy also limits centralized inspection, which makes user awareness and device security more important. For people who rely on secure communications, pairing Signal with stronger device hygiene, such as enabling registration lock and periodically reviewing linked devices, can reduce exposure, but it does not remove the risk of social engineering.
Zombie ZIP and why archives still work
Archive-based delivery is old, but it remains effective because it adapts well. A ZIP file can hide a shortcut, script, HTML smuggling page, nested archive, or disk image that eventually launches malware or a credential theft chain. Security products have improved at scanning attachments, but attackers keep changing the wrapping: double extensions, Unicode tricks, password-protected archives, or nested containers that frustrate automated inspection [MITRE ATT&CK T1566.001, T1204; Proofpoint threat reports; Microsoft phishing research].
The “Zombie ZIP” label likely points to this reanimation of archive abuse rather than a single new class of exploit. The danger is not the ZIP format itself so much as what it lets attackers package. A file that looks inert in an inbox can turn into an LNK shortcut, JavaScript dropper, or HTML file that reconstructs a payload only after user interaction. This is one reason email-borne malware has not disappeared despite years of awareness training.
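One of the simplest signals in that packaging game is the member names inside the archive itself. The sketch below flags executable-type entries and decoy double extensions such as "invoice.pdf.lnk"; the extension lists are illustrative examples, not a complete policy.

```python
import posixpath

# Extensions that launch code when opened (illustrative, not exhaustive).
SCRIPT_EXTS = {".lnk", ".js", ".vbs", ".hta", ".iso", ".img", ".exe", ".scr"}
# Document/image extensions often used as the visible "decoy" half.
DECOY_EXTS = {".pdf", ".docx", ".xlsx", ".jpg", ".png"}

def suspicious_archive_names(names):
    """Flag archive member names with executable extensions, noting
    decoy double extensions like 'invoice.pdf.lnk'."""
    flagged = []
    for name in names:
        base = posixpath.basename(name).lower()
        root, ext = posixpath.splitext(base)
        if ext in SCRIPT_EXTS:
            inner = posixpath.splitext(root)[1]
            reason = "double extension" if inner in DECOY_EXTS else "executable type"
            flagged.append((name, reason))
    return flagged
```

Checks like this are cheap to run against a ZIP listing before any extraction occurs, though they say nothing about password-protected or nested containers.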
Defenders should pay attention to parent-child process chains after extraction. Explorer spawning a scripting engine, shortcut file, or installer from a temporary directory is often a clearer signal than the attachment alone.
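That parent-child signal can be expressed as a toy rule. The function below flags explorer.exe launching a scripting host whose command line references a temp-like path; the host list and path markers are illustrative assumptions, and a real detection would weigh far more context.

```python
# Script/LOLBin hosts commonly spawned from extracted archive contents
# (illustrative list).
SCRIPT_HOSTS = {"wscript.exe", "cscript.exe", "powershell.exe",
                "mshta.exe", "cmd.exe", "rundll32.exe"}
# Path fragments suggesting a freshly extracted or downloaded file.
TEMP_MARKERS = ("\\temp\\", "\\downloads\\")

def suspicious_spawn(parent, child, cmdline):
    """Toy rule: explorer.exe launching a scripting host whose command
    line points into a temp-like directory."""
    cl = cmdline.lower()
    return (parent.lower() == "explorer.exe"
            and child.lower() in SCRIPT_HOSTS
            and any(m in cl for m in TEMP_MARKERS))
```

The point is not this exact rule but the shape of it: the execution chain after extraction carries more signal than the attachment sitting in the inbox.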
AI platform hacks expand the attack surface
The AI platform angle may be the most forward-looking part of the roundup, but it fits a pattern already documented by OWASP, cloud providers, and software supply-chain researchers. AI environments often combine sensitive data, expensive compute, API keys, notebooks, model artifacts, and broad cloud permissions in one place. That makes them attractive targets for data theft, model theft, cloud abuse, and supply-chain compromise [OWASP Top 10 for LLM Applications; Google Cloud AI security guidance; Microsoft cloud security guidance].
In practice, “AI platform hack” can mean several things: stolen API keys for inference services, exposed notebooks containing secrets, malicious Python dependencies in ML workflows, poisoned training data, or unauthorized access to model registries and object storage. The issue is not only the AI model. It is the surrounding infrastructure. Many teams still treat notebooks and experimentation environments as semi-informal spaces, even when they have access to production datasets and privileged service accounts.
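The "exposed notebooks containing secrets" problem in particular is easy to check for. The sketch below scans notebook cell sources for secret-shaped strings; the two patterns are deliberately minimal examples, and production scanners such as detect-secrets or gitleaks use far richer rule sets.

```python
import re

# Minimal, illustrative secret patterns; real scanners use many more rules
# plus entropy checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|token)\b\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def scan_notebook_cells(cells):
    """Given a list of cell source strings, return (cell_index, rule_name)
    pairs for lines that look like embedded secrets."""
    hits = []
    for i, source in enumerate(cells):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(source):
                hits.append((i, name))
    return hits
```

Running this kind of check in pre-commit hooks and CI for notebook repositories closes one of the quieter doors into AI infrastructure.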
This makes AI compromise less exotic than it sounds. In many cases it is standard cloud intrusion wearing an AI badge. But the downstream impact can be significant: theft of proprietary models, silent model tampering, leakage of sensitive prompts or datasets, and lateral movement into the broader cloud estate.
Impact assessment
The organizations most at risk are those with heavy SaaS dependence, broad third-party app ecosystems, decentralized collaboration habits, and mixed cloud-to-endpoint visibility. That includes enterprises running Microsoft 365 or Google Workspace, managed service providers, media and nonprofit organizations, software firms, crypto-related businesses, and companies building internal AI systems [Microsoft; Google; OWASP].
Severity varies by combination. On their own, a malicious ZIP or a Signal phish may look like routine nuisance activity. Combined with OAuth abuse and EDR tampering, they can support a full intrusion path: initial access, persistence, defense suppression, internal phishing, data theft, and ransomware deployment. For executives, finance teams, journalists, developers, and administrators, the risk is higher because these users often have privileged access, trusted communications, or access to sensitive data.
The larger concern is operational asymmetry. Defenders often treat these as separate problem sets—identity, endpoint, messaging, email, AI security—while attackers happily chain them together.
How to protect yourself
Lock down OAuth consent. Disable user consent for third-party apps where practical, require admin approval for high-risk scopes, review existing app permissions, and monitor for new service principals or unusual token use [Microsoft; Google Workspace Admin Help].
Harden endpoint defenses against tampering. Enable EDR tamper protection, restrict local admin rights, deploy Microsoft’s vulnerable driver blocklist or equivalent controls, and alert on security service stoppage, event log clearing, and suspicious driver loads [Microsoft; CISA].
Treat chat apps as phishing channels. Train staff to verify unusual requests received over Signal and other messengers, especially device-link prompts, recovery requests, or urgent file shares. Encourage out-of-band confirmation for sensitive actions.
Inspect archives beyond the attachment. Block or quarantine high-risk attachment types where possible, detonate archives in sandboxing workflows, and watch for suspicious execution chains after extraction. Password-protected archives from unknown senders should be treated as high risk.
Secure AI environments like production systems. Rotate API keys, remove secrets from notebooks, enforce least privilege for service accounts, monitor model registries and storage buckets, and review package dependencies in ML pipelines [OWASP; Google Cloud].
Protect sessions and traffic on untrusted networks. For staff who travel or work remotely, connecting through trusted networks or a reputable VPN and keeping TLS-protected services as the default reduces interception risk, though it will not stop consent phishing or malicious app authorization.
Centralize visibility across identity, endpoint, and cloud. The most useful detections often come from correlation: a risky OAuth grant followed by unusual mailbox access, then endpoint security suppression, then archive execution or cloud API anomalies.
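That correlation idea can be sketched as a small sequence rule: flag an identity when a risky OAuth grant is followed within a window by endpoint security tampering. The event tuple shape and the one-hour window are assumptions for illustration, not a recommended threshold.

```python
from datetime import datetime, timedelta

# Illustrative correlation window; real content would tune this per signal.
WINDOW = timedelta(hours=1)

def correlate(events):
    """events: iterable of (timestamp, user, kind) tuples, where kind is a
    label like 'oauth_risky_grant' or 'edr_tamper'. Returns the set of
    users whose risky grant was followed by tampering within WINDOW."""
    flagged = set()
    last_grant = {}  # user -> most recent risky grant time
    for ts, user, kind in sorted(events):
        if kind == "oauth_risky_grant":
            last_grant[user] = ts
        elif kind == "edr_tamper":
            t0 = last_grant.get(user)
            if t0 is not None and ts - t0 <= WINDOW:
                flagged.add(user)
    return flagged
```

Neither event alone is conclusive; it is the sequence across identity and endpoint telemetry that turns two routine alerts into one urgent case.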
This week’s roundup is a reminder that many of the most effective attack paths are not new. They are polished, combined, and aimed at the seams between tools and teams. That is why these stories matter: not because each trick is unprecedented, but because they keep working.




