Blockchain Security Brief
The weekly record of web3's darkest hours

Tuesday, February 10, 2026

Autonomy is being delegated faster than authority is verified, AI agents are getting root before they get rules, and the next failures are starting where permissions are implied, not proven.

This week:

• OpenClaw went viral with shell access, memory, and wallet hooks, only for researchers to find exposed control panels, plaintext keys, and prompt-injection paths that turned “local AI” into remote command execution.

• Step Finance lost $27.3M from compromised executive devices, proving again that audited contracts don’t save treasuries when signing authority lives on infected laptops.

• The new attack surface isn’t the model - it’s the unverified actions it’s allowed to perform inside real systems.

Top Exploits

When autonomy is delegated and safety is implied, crypto doesn’t fail at the contract. It fails at the command.

OpenClaw turned an AI assistant into an open door. Peter Steinberger’s local-first AI agent exploded to 150k+ GitHub stars with full shell access, persistent memory, and wallet integrations. During a trademark-forced rename, scammers hijacked the old handles in seconds and pumped a fake token to $16M. While Twitter chased the coin, researchers found the real breach: hundreds of OpenClaw control panels exposed to the public internet with no authentication, plaintext API keys, and root-level command access. (Read more)

Step Finance lost $27.3M without a single smart contract bug. The Solana dashboard protocol watched 261,854 SOL walk out of its treasury after executive devices were compromised through what the team called a “well-known attack vector.” Audited contracts, active bug bounties, and public security reviews didn’t matter when signing authority lived on infected laptops. CertiK traced the funds to fresh wallets while the STEP token fell 93% in a day. Step recovered $4.7M through token protections - a partial refund on a full-price lesson. (Read more)

The Agentic Collapse: Why Unverified AI Systems Are the Next Corporate Liability

We’ve let AI agents slip into our systems like they own the place.

They answer tickets, update records, tweak settings, and push decisions through pipelines that used to require actual signatures. And we’ve done all this on the assumption that they’ll behave - or at least behave well enough not to set something on fire.

But here’s the part everyone keeps sidestepping: Most AI agents operate without any verifiable authority behind their actions.

They execute tasks that look legitimate, but nothing proves they were ever meant to have that level of access. Nothing confirms who built them, what version they’re running, or what boundaries they’re supposed to respect. They just act - instantly, confidently, and often irreversibly.

And the cracks are widening.
An agent approves a refund it should’ve never seen.
Another taps into an internal tool because someone forgot to define its scope.
A third negotiates with a partner system based on permissions it hallucinated from a stale PDF.

Not malicious. Not evil. Just unverified.

The industry keeps pretending this is a tooling problem - add a dashboard here, a safety toggle there, slap “governance mode” on the landing page, and hope no one notices the structural rot underneath. But none of that answers the real question:

Why are systems executing actions that cannot be cryptographically proven to be authorized in the first place?

This isn’t about identity in the personal sense.
It’s about provenance and permissioning.

Rekt Security Summit

We’re announcing the Rekt Security Summit in partnership with Stable Summit - one day with the researchers, auditors, white hats, and exploit investigators who actually document where crypto breaks.

March 27, 2026

Cannes

It’s about giving an agent the digital equivalent of “show your badge before touching anything important.”

Some teams have finally started treating this like an infrastructure issue instead of a UX problem. cheqd is one of the organizations pushing that shift - building a way for agents to carry verifiable credentials and present proof before acting.

Nothing fancy. Nothing decorative. Just a simple challenge-response that forces an agent to answer: “Prove you’re authorized to do this.”

If it can’t, it doesn’t act. Clean. Predictable. Traceable.

What’s interesting is how quickly others are building on top of this foundation. Several companies are already experimenting with agent-to-agent verification using the same underlying primitives - meaning agents can challenge each other before collaborating.

A kind of peer-level trust handshake, but with cryptographic receipts instead of polite assumptions. And again, cheqd’s primitives sit underneath that shift, quietly doing the work that dashboards can’t.

Once this becomes infrastructure, everything changes.

Agents stop being black-box operators and start behaving like accountable digital actors. Systems can reject unauthorized operations instantly. Failures stop being mysteries and start being misconfigurations.

People still argue about whether an agent “sounds human enough.” That conversation is ancient.

The only thing that matters now is whether an agent can prove it has the right to do what it’s doing.

The next wave of AI won’t be humans vs machines.

It’ll be agents with cryptographic receipts vs agents running on hope.

Only one of them survives the collapse.

*Sponsored article

Deep Dives

Month in Review: Top DeFi Hacks of January 2026 (3 min read)
January logged seven on-chain protocol exploits totaling roughly $86M, but the real outlier was off-chain: a social-engineering IT support scam that walked away with ~$282M in BTC and LTC from a single Trezor user after a compromised root key. The contrast is the lesson - most protocol losses came from familiar contract bugs and inherited code, yet the month’s largest damage bypassed smart contracts entirely and went straight for credentials. The pattern is blunt: audits catch logic flaws, but treasury devices, seed phrases, and support impersonation remain the highest-value attack surface.

CrossCurve $1.4M Implementation Bug [Explained] (7 min read)
CrossCurve lost ~$1.4M after a publicly callable cross-chain execution path let an attacker spoof messages and mint ~999M EYWA tokens using fresh command IDs and a one-guardian confirmation threshold. The exploit was repeated across chains, with most damage on Arbitrum. Funds were partly swapped to WETH and bridged to Ethereum, but most EYWA remains stranded due to frozen deposits and thin liquidity.

Liquid Staking Derivative Security, Risks and Safeguards (11 min read)
LSD tokens secure huge portions of staked ETH and SOL, but they usually fail through design flaws, not single exploits: liquidity dries and pegs slip, mint or redemption logic inflates supply, oracles misprice collateral, validators get slashed, or admin keys concentrate too much power.

Threat Intelligence | Analysis of Token Vesting Phishing Poisoning (8 min read)
A targeted macOS phishing campaign disguised as audit and token-vesting confirmations used a fake DOCX AppleScript attachment to trick victims into granting permissions, steal system passwords, tamper with TCC privacy controls, and deploy a fileless Node.js backdoor for remote command execution.

The First 90 Seconds: How Early Decisions Shape Incident Response Investigations (8 min read)
Most incident response failures do not come from missing tools or skills, but from the first quiet decisions made right after detection, when pressure is high and information is incomplete. The “first 90 seconds” is not a literal timer but a repeating pattern every time scope expands to a new system: what to preserve, what to inspect first, and whether the issue is isolated or part of a wider intrusion.

Other Security Stories

Physical wrench attacks just hit a record high. Seventy-two verified assaults in 2025 exposed crypto holders to kidnappings, home invasions, and over $40M in confirmed losses as violence became a routine attack vector rather than an exception.

Malicious Chrome extensions just turned browsers into data siphons. Dozens of add-ons hijacked affiliate links, scraped shopping data, and even stole ChatGPT authentication tokens, giving attackers full account access while pretending to be ad blockers, seller tools, or harmless AI productivity plugins.

AI audits aren’t a silver bullet. They catch known vulnerabilities faster and cheaper than humans, but still miss novel exploits, economic design flaws, and key compromises - amplifying real auditors rather than replacing them.

Incognito Market’s founder just got 30 years after blockchain tracing unmasked him. U.S. authorities linked $105M in darknet drug sales to Rui-Siang Lin by following Bitcoin and Monero flows to exchange accounts in his own name, turning “anonymous” crypto rails into courtroom evidence.

Coinbase confirmed an insider data breach. A contractor improperly accessed information from ~30 customers, highlighting how outsourced support and BPO channels are becoming a recurring weak point as attackers increasingly bypass code and target people with legitimate system access instead.

New Tools and Projects

Block Security Arena: A Web3 AI-driven security infrastructure platform that completed a $30M seed round with participation from Hotcoin Labs, Starbase, Onebit Ventures, and Apus Capital, aiming to build a “closed-loop” security ecosystem combining AI audit assistants, gamified attack-simulation sandboxes, token risk radars, and an AI security academy for developers and white-hat researchers.

Zer0n: An AI-assisted vulnerability discovery and blockchain-backed integrity framework that ties LLM reasoning to on-chain tamper-evident logging, achieving high detection accuracy while preserving audit integrity.

Binance Web3 Security Scan: A newly introduced feature from Binance that provides real-time security scanning for Web3 interactions, identifying potential threats and offering mitigation guidance to safeguard user funds and data.

TxRay: An agentic postmortem system for live blockchain attack reconstruction, using LLM tools to trace exploits from minimal evidence and automatically generate reproducible proofs of concept for incident analysis.

Rekt Flashback

Three years ago, DeFi learned that “read-only” reentrancy isn’t harmless - it’s an oracle manipulation bug with a polite name. dForce just proved the memo never circulated. An attacker used flash-loaned funds to enter Curve pools, reentered during liquidity removal, warped get_virtual_price, and made bad collateral look healthy long enough to liquidate it for $3.65M across Arbitrum and Optimism. Same vulnerability that hit Midas and Market.xyz, same workaround sitting in docs, same outcome on a new chain. Different cycle, same mistake: when a bug is labeled “known,” teams start treating it like history - right up until it empties vaults in the present.

Memes and Videos

The Dangerous Evolution of AI Hacking

AI doesn’t understand code - it just predicts it. Turns out that’s enough to hack with. One person with a chatbot can now run full intrusion campaigns that used to need teams and months. The future of security is humans with AI vs humans with AI - and the bots don’t sleep.

Source: Cybernews

Source: LPCapitalChi

Want to partner with us?

Skip the bots, hit the brains.

Get your message in front of the sharpest, most battle-tested crowd in crypto.

If they notice you, the whole space will. [Partner with us]

We provide an anonymous platform for whistleblowers and DeFi detectives to present their information to the community. All authors remain anonymous. 
We are all rekt.

Keep Reading