Case Study
Expose the Game
A campaign exposing influencer fraud in Web3, demonstrating how AI scoring ensures you pay only for content that actually communicates your message.
- 618 Posts Evaluated
- 257 Unique Creators
- 27,134 Total Engagements
- 63.4% Off-Topic Filtered
The Problem with Engagement Metrics
| | On-Topic (Paid) | Off-Topic (Filtered) |
|---|---|---|
| Posts | 226 (36.6%) | 392 (63.4%) |
| Likes | 8,693 | 14,039 |
| Reposts | 940 | 3,462 |
Key Insight
Filtered posts generated 64.5% of total engagement (14,039 likes + 3,462 reposts = 17,501 of 27,134 engagements), but all of it came from off-topic content. Without AI scoring, you'd see impressive numbers with no way to know whether the content communicated your message.
Score Distribution
Posts scored 0-4 were filtered. Posts 5+ were paid. Higher scores indicate better alignment with campaign goals.
| Score | Posts | Outcome |
|---|---|---|
| 0 | 30 | Filtered |
| 1 | 59 | Filtered |
| 2 | 104 | Filtered |
| 3 | 46 | Filtered |
| 4 | 137 | Filtered |
| 5 | 71 | Paid |
| 6 | 82 | Paid |
| 7 | 24 | Paid |
| 8 | 22 | Paid |
| 9 | 23 | Paid |
| 10 | 4 | Paid |
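In code, the pay/filter split reduces to a single threshold check over the AI score. Here is a minimal sketch in TypeScript, assuming each post already carries its 0-10 score; the `ScoredPost` type, `PAY_THRESHOLD` constant, and `splitByScore` helper are illustrative names, not the campaign's actual implementation:

```typescript
// A scored post as the AI evaluator might hand it back.
interface ScoredPost {
  id: string;
  score: number;  // 0-10 relevance score from the AI evaluator
  likes: number;
  reposts: number;
}

// Posts scoring 5+ are paid; 0-4 are filtered (per the table above).
const PAY_THRESHOLD = 5;

function splitByScore(posts: ScoredPost[]) {
  const paid = posts.filter((p) => p.score >= PAY_THRESHOLD);
  const filtered = posts.filter((p) => p.score < PAY_THRESHOLD);
  return { paid, filtered };
}
```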
Engagement by Score
Lower-scored posts often drew higher engagement, but from off-topic content that wouldn't have delivered your message.
| Score | Total Engagements (Likes + Reposts) |
|---|---|
| 0 | 2,377 |
| 1 | 3,201 |
| 2 | 4,992 |
| 3 | 2,443 |
| 4 | 4,488 |
| 5 | 3,677 |
| 6 | 3,189 |
| 7 | 869 |
| 8 | 911 |
| 9 | 870 |
| 10 | 117 |
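The 64.5% figure in the key insight above falls out of a straightforward aggregation over this table. A minimal sketch, using the per-score totals and the same threshold as before (variable names are illustrative):

```typescript
// Total engagements (likes + reposts) per score, from the table above.
const engagementByScore: Record<number, number> = {
  0: 2377, 1: 3201, 2: 4992, 3: 2443, 4: 4488,
  5: 3677, 6: 3189, 7: 869, 8: 911, 9: 870, 10: 117,
};

const PAY_THRESHOLD = 5;

let filteredTotal = 0;
let paidTotal = 0;
for (const [score, total] of Object.entries(engagementByScore)) {
  if (Number(score) >= PAY_THRESHOLD) paidTotal += total;
  else filteredTotal += total;
}

const grandTotal = filteredTotal + paidTotal;                  // 27,134
console.log(((filteredTotal / grandTotal) * 100).toFixed(1));  // "64.5" (filtered share)
console.log(((paidTotal / grandTotal) * 100).toFixed(1));      // "35.5" (paid share)
```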
AI Scoring in Action
What Got Filtered
- Off-Topic Promotion: promoting unrelated projects like yield farming products or trading tools
- Generic Marketing: corporate-sounding copy that missed the authentic voice
- Missing Narrative: touched on fraud themes but lacked specific details
- Pure Shill Content: mentioned Gauge without the fraud-exposure context
What Scored High
- Personal Experience: real DMs, actual pricing details, named collapsed projects
- Concrete Numbers: "$2k vs $10k deals," "940 vs 41 wallets," "1.4M followers/38 likes"
- Authentic Voice: frustrated-but-educational tone, not corporate marketing
- Natural Integration: mentioned tokenized social proof as the solution, not a forced shill
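Taken together, the two lists above amount to a scoring rubric. Here is a minimal sketch of how such a rubric might be wired into an LLM scorer; the `RUBRIC` text paraphrases the criteria above, and `buildScoringPrompt`, `scorePost`, and the injected `askModel` callback are hypothetical, not the campaign's actual pipeline:

```typescript
// Hypothetical rubric, condensed from the filtered/high-score criteria above.
const RUBRIC = `Score this post 0-10 for campaign alignment.
High: personal experience (real DMs, actual pricing, named collapsed projects),
concrete numbers, an authentic frustrated-but-educational voice, and a natural
mention of tokenized social proof as the solution.
Low: off-topic promotion, generic marketing copy, fraud themes without
specifics, or pure shilling without fraud-exposure context.
Reply with a single integer from 0 to 10.`;

function buildScoringPrompt(postText: string): string {
  return `${RUBRIC}\n\nPost:\n${postText}`;
}

// The model call is injected so the sketch stays provider-agnostic.
async function scorePost(
  postText: string,
  askModel: (prompt: string) => Promise<string>,
): Promise<number> {
  const reply = await askModel(buildScoringPrompt(postText));
  const score = parseInt(reply.trim(), 10);
  if (Number.isNaN(score) || score < 0 || score > 10) {
    throw new Error(`Unparseable score: ${reply}`);
  }
  return score;
}
```

Injecting `askModel` keeps the sketch independent of any particular model provider; any chat-completion client fits the signature.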
The Bottom Line
- 63.4% of the budget would have paid for off-topic content without AI scoring (392 of the 618 evaluated posts).
- 100% of paid content actually communicated the intended message.

AI scoring ensures you pay only for content that hits the mark, while still benefiting from the reach of every submitted post.