01 / 12
mensa-mu.vercel.app
Mensa

The AI treasury
that proves itself.

Intelligent RWA portfolio management agent on Mantle. Allocates between mETH and USDY, with every decision logged on-chain and challenged by humans in a verifiable Turing tournament.

Live on Mantle Mainnet · ERC-8004 agent #1 · 7/7 contracts verified · Mantle Turing Test 2026
Powered by Mantle Network · Anthropic Claude · Ondo Finance (USDY) · Coingecko · DefiLlama
Use → / ↓ / Space to advance · ↑ / ← to go back
02 / 12 · Problem

You can't trust an AI with your money if you can't verify its reasoning.

DeFi has billions in autonomous protocols. AI agents are starting to manage real capital. But every existing AI treasury is a black box: it acts, you trust, you hope.

No reasoning trail
You see "the AI rebalanced." You don't see why.
No benchmark
No way to measure if the AI is actually any good.
No accountability
If it underperforms, nobody is accountable: neither the operators nor the AI.
03 / 12 · Solution

Mensa: every decision logged, explained, and challenged.

We take the hackathon name literally. The agent must prove, statistically and on-chain, that it allocates better than humans on the same data. Three primitives.

01
Logged
Every allocation decision is written to the DecisionLog contract — action, confidence, full reasoning text emitted as event data.
02
Explained
Claude Haiku 4.5 generates a plain-English justification for every rebalance. No black-box scores, no opaque embeddings.
03
Challenged
The TournamentVault pits the AI against humans on identical inputs. After 24h, the higher-return allocation wins, settled on-chain.
04 / 12 · Traction

This is happening right now.

Not a mock. The contracts have been live on Mantle Mainnet for days, the cron has been deciding, the tournament has been settling.

Decisions logged
on-chain · DecisionLog
Tournament rounds
AI win rate
vs 50/50 baseline
Alpha / round
since memory loop calibrated

Values fetched live from /api/onchain. Refresh this slide to see them update.

05 / 12 · Innovation 1

The memory loop: how Claude learns without retraining.

Before every decision, the agent reads its own on-chain track record and injects it into Claude's prompt. Self-correction emerges without a single training pipeline.

Track record (since memory loop calibrated, 6 rounds):
Cumulative alpha vs 50/50 baseline: +204 bps
Per-round average: +34 bps
Recent rounds:
Round #7: you=20% mETH (19bps) | 50/50=49bps | optimal=100% mETH (97bps) | you-vs-base=-30bps
Round #6: you=0% mETH (0bps) | 50/50=-126bps | optimal=0% mETH (0bps) | you-vs-base=+126bps
...
Round #1 was a -324bps loss made before this feedback loop existed — note it but don't let it dominate your current strategy.
Reflect: when did you under-allocate to the winning asset? When did you over-rebalance and lose to passive 50/50?

This is the actual prompt context shipped on every Claude call. The agent reads its alpha, sees which rounds it underperformed, and adjusts. Cheap, transparent, no ML infra.
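A minimal sketch of how such a context can be assembled from logged rounds. The `Round` interface and `buildMemoryContext` name are illustrative, not the project's actual schema:

```typescript
// Hypothetical shape of one settled round, as read back from
// DecisionLog / TournamentVault (field names are illustrative).
interface Round {
  id: number;
  methPct: number;     // AI's mETH allocation for the round, 0..100
  aiBps: number;       // AI return, basis points
  baselineBps: number; // passive 50/50 return, basis points
}

const signed = (bps: number): string => (bps >= 0 ? `+${bps}` : `${bps}`);

// Build the feedback context injected into the model prompt before
// each decision: cumulative alpha, per-round average, recent rounds.
function buildMemoryContext(rounds: Round[]): string {
  const alphas = rounds.map((r) => r.aiBps - r.baselineBps);
  const cumulative = alphas.reduce((a, b) => a + b, 0);
  const perRound = Math.round(cumulative / rounds.length);
  const recent = rounds
    .slice(-3)
    .reverse() // newest first, as in the shipped prompt
    .map(
      (r) =>
        `Round #${r.id}: you=${r.methPct}% mETH (${r.aiBps}bps) | ` +
        `50/50=${r.baselineBps}bps | you-vs-base=${signed(r.aiBps - r.baselineBps)}bps`
    );
  return [
    `Track record (since memory loop calibrated, ${rounds.length} rounds):`,
    `Cumulative alpha vs 50/50 baseline: ${signed(cumulative)} bps`,
    `Per-round average: ${signed(perRound)} bps`,
    `Recent rounds:`,
    ...recent,
  ].join("\n");
}
```

Fed rounds #2..#7, this reproduces the +204 bps cumulative and +34 bps per-round figures quoted on this slide.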

06 / 12 · Innovation 2

The Turing tournament: humans challenge the AI on identical data.

Each rebalance opens a 24h round. Anyone with a stake can vote their own mETH/USDY split. After settlement, whoever produced the better return wins on-chain. No subjective judging, no leaderboard cooking.

SQRT-WEIGHTED VOTING
Vote weight = sqrt(reputation). 100 Sybil wallets at rep=1 sum to weight 100; one whale at rep=10000 also gets weight 100. Diminishing returns kill bot dominance.
15% PERFORMANCE FEE
On yield, never principal. Splits 50/30/20: winners / reputation pool / ops. Humans who beat the AI get paid in MNT, claimable.
SOULBOUND BADGES
7 milestones (First Vote, Beat AI 10x/100x, 5-Win Streak, Rep 500/1000, Top 10 Monthly) minted as non-transferable ERC-721. Reputation is portable.
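The Sybil-resistance claim and the fee split are both simple arithmetic; a sketch that checks them (function names are illustrative, not the contract ABI):

```typescript
// Sqrt-weighted voting: weight = sqrt(reputation), so splitting
// reputation across many wallets yields no extra influence.
function voteWeight(reputation: number): number {
  return Math.sqrt(reputation);
}

// 100 Sybil wallets at rep=1 each: total weight 100 * sqrt(1) = 100.
const sybilTotal = Array.from({ length: 100 }, () => voteWeight(1))
  .reduce((a, b) => a + b, 0);

// One established voter at rep=10000: weight sqrt(10000) = 100.
const whaleWeight = voteWeight(10_000);

// Performance fee from the slide: 15% of yield (never principal),
// divided 50/30/20 between winners, reputation pool, and ops.
function splitFee(yieldMnt: number) {
  const fee = yieldMnt * 0.15;
  return {
    winners: fee * 0.5,    // humans who beat the AI
    reputation: fee * 0.3, // reputation pool
    ops: fee * 0.2,        // operations
  };
}
```

On 100 MNT of yield that is a 15 MNT fee: 7.5 to winners, 4.5 to the reputation pool, 3 to ops.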
07 / 12 · Why we're honest

Round #1 was a disaster. Here's what happened.

We didn't want to ship a pitch deck where the AI looks like a genius. The first mainnet round caught a 19.4% ETH crash and underperformed the passive baseline by 324 bps. We learned in public.

Round #1
60% mETH
-324bps
Round #2
35% mETH
+15bps
Round #3
25% mETH
+21bps
Round #4
15% mETH
+10bps
Round #5
5% mETH
+62bps
Round #6
0% mETH
+126bps
Round #7
20% mETH
-30bps

Round #1 cold-start: 60% mETH, ETH crashed 19%, the AI ate the loss. Round #2 onward — once the memory loop was active — the AI shifted defensive (60 → 35 → 25 → 15 → 5 → 0% mETH) and beat the baseline on every round until #7. Net since calibrated: +204 bps cumulative, +34 bps per round.
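The headline figures follow directly from the per-round alpha values in the table above:

```typescript
// Per-round alpha vs the passive 50/50 baseline, in basis points,
// copied from the table on this slide (rounds #1 through #7).
const alphaBps = [-324, 15, 21, 10, 62, 126, -30];

// "Since calibrated" excludes the cold-start round #1, which predates
// the memory loop.
const calibrated = alphaBps.slice(1);
const cumulative = calibrated.reduce((a, b) => a + b, 0);
const perRound = cumulative / calibrated.length;
```

Summing rounds #2..#7 gives +204 bps cumulative over 6 rounds, i.e. +34 bps per round.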

08 / 12 · Verifiable

Beyond live rounds: we backtest on 1 year of real ETH price history.

Seven on-chain rounds isn't a track record. So we replay Mensa's strategy against three baselines (passive 50/50, 100% mETH HODL, 100% USDY) on a year of Coingecko ETH prices. The methodology is on /backtest.

Honest finding

In a strong directional bull (ETH +15%), allocation strategies always lag pure HODL. Mensa cut max drawdown by 5pp at the cost of some upside — risk-adjusted, that's the actual trade.

Mensa's value prop is chop and bear regimes, not bull tops. The page is explicit about this. No cherry-picked window.
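Max drawdown, the risk metric the finding above leans on, is easy to compute from an equity curve. A generic sketch, not the project's backtest code:

```typescript
// Maximum drawdown: the largest peak-to-trough decline of an equity
// curve, returned as a fraction (0.05 = 5% = 5pp of portfolio value).
function maxDrawdown(equity: number[]): number {
  let peak = -Infinity;
  let worst = 0;
  for (const value of equity) {
    peak = Math.max(peak, value);          // running high-water mark
    worst = Math.max(worst, (peak - value) / peak);
  }
  return worst;
}
```

For example, an equity curve of 100 → 120 → 90 → 110 has a max drawdown of 25% (the 120 → 90 leg), even though it ends above where it started.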

09 / 12 · Tech

7 verified contracts. ERC-8004 native.

Production-shaped on mainnet from day one. Every piece independently verifiable on Mantlescan.

MensaAgent
0xAcA925e5...CCe49
The treasury. Holds deposits, gates rebalance, opens rounds.
DecisionLog
0xD889B781...88Fe
Append-only on-chain record. Reasoning emitted as event data.
TournamentVault
0x92E6B40d...d122
Round lifecycle, voting, settlement, payout distribution.
Reputation
0x10A519fd...4E5f
Sqrt-weighted scoring. Read by Tournament for vote weight.
BountyPool
0x06460f1c...5f39
15% perf-fee sink. 50/30/20 split. Pull-based claims.
MensaBadges
0x22867d39...144E
7 soulbound achievement NFTs. Transfer-blocked.
MensaAgentIdentity (ERC-8004)
0x6671E554...60B6
Agent registry NFT, agentId #1. Discoverable for A2A composability.
Off-chain stack
Next.js + Vercel
Claude Haiku 4.5
GitHub Actions cron
Coingecko (ETH)
DefiLlama (yields)
Foundry
10 / 12 · Honest scope

What we know we don't have yet.

A hackathon submission that pretends it's production is a hackathon submission that lies. Here's the gap list.

NOW
Notional rebalancing
executeAllocation updates a target % and opens a round but does not swap tokens. We surveyed Merchant Moe V2 — pools mETH/USDC ≈ $6 TVL, USDC/USDY ≈ $22. Real execution is gated less on our code and more on Mantle DEX liquidity maturity.
NOW
Per-user balance tracker (not shares)
userDeposits is a unified counter, not per-asset accounting. That's safe at small TVL; multi-user mainnet needs the ERC-4626 share-model upgrade.
NEXT
ERC-4626 share model + Velora swap
Replace the counter with shares minted at deposit. Route executeAllocation through Velora aggregator with slippage caps. Designed, not deployed.
NEXT
Real human-vote aggregation in settle
Auto-settle passes 50% as the human aggregate when no voters show up. With active voting we compute a reputation-weighted median off-chain and pass it in.
LATER
Hybrid AI / human steering
Wired today: Claude reads the human consensus as a soft input. With more voters this becomes a real co-allocation mechanism, not just a benchmark.
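The reputation-weighted median mentioned in the roadmap above can be sketched as follows. This is a generic weighted-median routine under our own assumptions (the `Vote` shape and the 50% no-voter fallback mirror the slide), not the deployed settlement code:

```typescript
// A human vote: a target mETH % plus the voter's (sqrt-scaled)
// reputation weight. Shape is illustrative.
interface Vote {
  methPct: number; // proposed mETH allocation, 0..100
  weight: number;  // vote weight, e.g. sqrt(reputation)
}

// Reputation-weighted median: sort votes by allocation, then walk up
// until half the total weight is covered.
function weightedMedian(votes: Vote[]): number {
  if (votes.length === 0) return 50; // no voters: passive 50/50 fallback
  const sorted = [...votes].sort((a, b) => a.methPct - b.methPct);
  const total = sorted.reduce((sum, v) => sum + v.weight, 0);
  let acc = 0;
  for (const v of sorted) {
    acc += v.weight;
    if (acc >= total / 2) return v.methPct;
  }
  return sorted[sorted.length - 1].methPct; // unreachable, satisfies TS
}
```

A heavily weighted voter pulls the median toward their allocation, but can never drag it past the allocations actually voted, which is why a median (rather than a mean) is the natural aggregate here.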
11 / 12 · Hackathon

Why Mensa fits the Mantle Turing Test.

The hackathon brief asked for autonomous agents that compete on-chain, verify reasoning, and use Mantle's native RWAs. Mensa is that, line by line.

AI × RWA (primary)
Path B — end-user-facing intelligent RWA portfolio agent. mETH (Mantle LST) + USDY (Ondo T-bills) are exactly the RWAs the track names.
Grand Champion
7 verified contracts (Tech Depth), Turing tournament + verifiable alpha + memory loop (Innovation), Mantle-native (Ecosystem), live deployed (Completeness).
AI Alpha & Data
Path B — trading strategy with verifiable on-chain alpha. Every return computed from contract reads, alpha measured against passive baseline.
Best UI/UX
Dark Mantle-aligned design. AI reasoning surfaced. Glossary tooltips. Responsive. Live data everywhere.
20 Project Deployment Award
7/7 verified on Mantlescan, AI-powered function callable on-chain, public frontend, open MIT repo with README + Foundry tests.
12 / 12 · CTA

Try it. Now. Live.

Everything in this deck is fetched from on-chain state at slide load. No fake numbers, no static screenshots, no PDF tricks. Click through.

Built for the Mantle Turing Test Hackathon 2026 — Phase 2 AI Awakening.
MIT licensed. No financial advice. Audit pending.