Getting Started | 15 Minutes | Master Cross-Platform Intelligence Gathering, Double Your Research Efficiency
TIP
last30days-skill is currently the most comprehensive AI research skill on ClawHub. It supports parallel searching across 10+ signal sources, automated deduplication and scoring, and generates factual reports with citations.
GitHub: mvanhorn/last30days-skill
Target Audience Profile
- Developers with 1-5 years of experience.
- Users who perform daily AI tool selection, competitive research, or technical trend tracking.
- Users with command-line basics, familiar with Claude Code or Codex CLI.
- Those wishing to escape the fragmented process of "scrolling Reddit/HN for info."
Core Dependencies & Environment
| Dependency | Description | Required |
|---|---|---|
| Node.js 18+ | Skill runtime environment | Yes |
| Python 3.10+ | Primary scripting language | Yes |
| ScrapeCreators API Key | 3-in-1 for Reddit/TikTok/Instagram | Yes |
| Claude Code or Codex CLI | Skill execution host | Yes |
| X AUTH_TOKEN / CT0 | X search authentication (Optional) | No |
| Bluesky App Password | Bluesky search (Optional) | No |
| Polymarket Gamma API | Prediction market data (Free) | No |
Project Structure Tree
last30days-skill/
├── SKILL.md              # Skill definition (deployed to ~/.claude/skills/)
├── SPEC.md               # Full technical specification
├── CLAUDE.md             # Claude Code development guidelines
├── scripts/
│   ├── last30days.py     # Python entry point (research engine)
│   ├── sync.sh           # Deployment sync script
│   └── lib/
│       ├── __init__.py       # Package entry (eager imports prohibited)
│       ├── env.py            # Environment variable loader
│       ├── dates.py          # Date range and confidence calculations
│       ├── cache.py          # 24h TTL cache
│       ├── http.py           # Standard-library HTTP client
│       ├── models.py         # OpenAI/xAI model auto-selection
│       ├── openai_reddit.py  # Reddit search (OpenAI Responses API)
│       ├── xai_x.py          # X search (xAI Responses API)
│       ├── reddit_enrich.py  # Deep metric pulling for Reddit posts
│       ├── hackernews.py     # Hacker News (Algolia free API)
│       ├── polymarket.py     # Polymarket prediction markets (Gamma API)
│       ├── bluesky.py        # Bluesky/AT Protocol search
│       ├── truthsocial.py    # Truth Social search
│       ├── normalize.py      # Raw response → normalized schema
│       ├── score.py          # Multi-signal scoring model
│       ├── dedupe.py         # Near-duplicate detection
│       ├── render.py         # Markdown / JSON report rendering
│       └── schema.py         # Type definitions and validation
├── skills/last30days/
│   └── last30days.sh     # Shell wrapper (Claude Code Skill entry)
└── fixtures/             # Test fixture data
Step 1 Installation & Auth Configuration
1.1 Install via Claude Code Plugin (Recommended)
If you are already using Claude Code, simply use the plugin command:
/plugin marketplace add mvanhorn/last30days-skill
/plugin install last30days@last30days-skill
Alternatively, use the official ClawHub tool:
clawhub install last30days-official
WARNING
Plugin installation requires a Claude Code version that supports the /plugin command. Check with claude --version (you need 1.0 or later).
1.2 Manual Git Clone Installation
Don't want to use the plugin? Clone directly to your local machine:
# Clone to Claude Code skills directory
git clone https://github.com/mvanhorn/last30days-skill.git \
~/.claude/skills/last30days
# Enter directory and view files
cd ~/.claude/skills/last30days
ls -la scripts/
1.3 Configure ScrapeCreators API Key (Required)
This is the unified entry point for Reddit, TikTok, and Instagram. One key covers everything:
- Visit scrapecreators.com to register and get an API Key.
- Create the configuration file:
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'EOF'
# Required: 3-in-1 Key for Reddit + TikTok + Instagram
SCRAPECREATORS_API_KEY=sc_xxxxxxxxxxxxxxxxxxxx
# Optional: OpenAI API (can be omitted if logged into Codex)
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
EOF
chmod 600 ~/.config/last30days/.env # Protect sensitive files
1.4 Optional Auth for X / Bluesky
X Search (Recommended Method):
# 1. Log in to x.com, open browser DevTools (F12)
# 2. Application → Cookies → copy the values for auth_token and ct0
# 3. Write to .env
cat >> ~/.config/last30days/.env << 'EOF'
# X Search Auth (Cookie method)
AUTH_TOKEN=xxxxxxxxxxxxxxxxxxxx
CT0=xxxxxxxxxxxxxxxxxxxx
# X Alternative: xAI API (No cookies required)
XAI_API_KEY=xai-xxxxxxxxxxxxxxxxxxxx
EOF
Bluesky Search:
# 1. Go to bsky.app/settings/app-passwords and create an App Password
# 2. Write to .env
cat >> ~/.config/last30days/.env << 'EOF'
# Bluesky/AT Protocol
BSKY_HANDLE=yourhandle.bsky.social
BSKY_APP_PASSWORD=xxxx-xxxx-xxxx-xxxx
EOF
TIP
X cookies and Bluesky passwords are optional. If you want to keep it simple, the free Polymarket, Hacker News, and Reddit sources cover most scenarios.
1.5 Verify Successful Installation
# Run a simple test (--mock uses local test data, saves API quota)
python3 ~/.claude/skills/last30days/scripts/last30days.py "Claude Code tips" --mock --emit=compact
If you see output like this, everything is working correctly:
=== last30days Report: Claude Code tips ===
Sources: reddit, hackernews | Time: 2026-03-25 | Mode: mock
[results...]
Step 2 Basic Usage
2.1 /last30days Command Line
In Claude Code, type the command directly:
/last30days best Claude Code prompts
Using Codex CLI:
python3 ~/.claude/skills/last30days/scripts/last30days.py "best Claude Code prompts" --emit=compact
TIP
By default, it searches for hot content from the last 30 days across 10+ signals including Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, and Bluesky. A single search takes 2-8 minutes; niche topics may take longer.
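Under the hood, this kind of fan-out across sources is straightforward to sketch with Python's concurrent.futures. This is an illustrative sketch, not the skill's actual internals; the stub source functions are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search_all_sources(query: str, sources: dict, days: int = 30) -> list[dict]:
    """Fan a query out to every source in parallel; a slow or failing
    source is skipped instead of aborting the whole run."""
    results: list[dict] = []
    with ThreadPoolExecutor(max_workers=max(len(sources), 1)) as pool:
        futures = {pool.submit(fn, query, days): name for name, fn in sources.items()}
        for fut in as_completed(futures):
            name = futures[fut]
            try:
                items = fut.result()
            except Exception:
                continue  # degrade gracefully when one source errors out
            results.extend({**item, "source": name} for item in items)
    return results

# Usage with stub sources standing in for reddit/hackernews modules:
stubs = {
    "reddit": lambda q, d: [{"title": f"reddit: {q}"}],
    "hackernews": lambda q, d: [{"title": f"hn: {q}"}],
}
hits = search_all_sources("claude code", stubs)
```

The key property is that the overall run time is bounded by the slowest surviving source, not the sum of all sources, which is why 10+ signals can still finish in minutes.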
2.2 Understanding the Report Structure
After execution, you will receive a structured report looking like this:
# last30days Report: best Claude Code prompts
## Sources Searched (6 active)
reddit | x | hackernews | polymarket | youtube | reddit_threads
## Top Findings
...
## Best Practices (Community-verified methods)
...
## Prompt Pack (Ready to copy)
...
## Recent Developments
...
## References (With source links)
...
The report automatically performs:
- Convergence Detection: Content mentioned on multiple platforms receives higher weight.
- Deduplication: Only the best version of semantically similar posts is kept.
- Recency Decay: Newer content is ranked higher.
- Citations: Every conclusion is accompanied by original links.
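The deduplication step can be approximated with token-set Jaccard similarity: sort by score, then keep an item only if it is not too similar to anything already kept. The 0.8 threshold and these helper names are assumptions for illustration, not the skill's actual dedupe.py:

```python
def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Token-set overlap in [0, 1]; 1.0 means identical token sets."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def dedupe(items: list[dict], threshold: float = 0.8) -> list[dict]:
    """Keep only the highest-scored member of each near-duplicate cluster."""
    kept: list[dict] = []
    for item in sorted(items, key=lambda x: x.get("score", 0), reverse=True):
        if all(jaccard(item["title"], k["title"]) < threshold for k in kept):
            kept.append(item)
    return kept
```

Because items are visited in descending score order, the first member of each cluster to survive is automatically its best-scored version, matching the behavior described above.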
2.3 --quick Mode
In a hurry? Add --quick to skip some deep search steps:
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"Cursor AI vs Windsurf" \
--quick \
--emit=compact
2.4 --days=N Custom Time Window
You aren't limited to 30 days; you can customize the range:
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"OpenClaw latest news" \
--days=7 \
--emit=compact
2.5 --refresh Force Cache Refresh
Results are cached for 24 hours by default. To search again:
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"AI code editors comparison" \
--refresh \
--emit=compact
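A 24-hour TTL cache of this kind is easy to sketch as a file store keyed by a hash of the query, where --refresh simply bypasses the lookup. The directory and function names below are illustrative, not the skill's actual cache.py:

```python
import hashlib
import json
import time
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "last30days"  # illustrative location
TTL_SECONDS = 24 * 3600

def _key(query: str) -> Path:
    return CACHE_DIR / (hashlib.sha256(query.encode()).hexdigest() + ".json")

def cache_get(query: str, refresh: bool = False):
    """Return cached results, or None if absent, stale, or --refresh was passed."""
    path = _key(query)
    if refresh or not path.exists():
        return None
    payload = json.loads(path.read_text())
    if time.time() - payload["saved_at"] > TTL_SECONDS:
        return None  # stale: caller falls through to a fresh search
    return payload["results"]

def cache_put(query: str, results) -> None:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    _key(query).write_text(json.dumps({"saved_at": time.time(), "results": results}))
```

Hashing the query keeps filenames filesystem-safe regardless of what you search for, and storing the timestamp inside the payload avoids relying on filesystem mtimes.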
Step 3 Multi-Source Parallel Search Mechanism
3.1 How Reddit Search Works
Reddit is the most important signal source. The skill uses a two-step process:
# scripts/lib/openai_reddit.py (simplified)
from openai import OpenAI

from . import reddit_enrich

client = OpenAI()

def search_reddit(query: str, days: int = 30) -> list[dict]:
    # Step 1: preliminary discovery via the OpenAI Responses API + web_search
    response = client.responses.create(
        model="gpt-4o",
        input=f"Find active Reddit discussions about: {query}",
        tools=[{"type": "web_search"}],
        max_output_tokens=2000,
    )
    # Step 2: pull raw post JSON to get actual upvote/comment counts
    return reddit_enrich.fetch_threads(response.urls, days)
TIP
After v2.9, Reddit defaults to ScrapeCreators. One key covers Reddit, TikTok, and Instagram, offering more stability than independent solutions.
3.2 Hacker News Integration
Free access, no API Key required:
# scripts/lib/hackernews.py (simplified)
import time
import requests

def cutoff_timestamp(days: int) -> int:
    return int(time.time()) - days * 86400

def search_hackernews(query: str, days: int = 30) -> list[dict]:
    # The HN Algolia API is open and free
    url = "https://hn.algolia.com/api/v1/search"
    params = {
        "query": query,
        "tags": "story",
        "numericFilters": f"created_at_i>{cutoff_timestamp(days)}",
    }
    resp = requests.get(url, params=params)
    items = resp.json()["hits"]
    # Rank by points plus double-weighted comment count
    return sorted(items, key=lambda x: x["points"] + x["num_comments"] * 2, reverse=True)
3.3 Polymarket Prediction Market Data
This is a unique highlight of last30days: it doesn't just look at "what people say," but also at "how much money people are betting":
# scripts/lib/polymarket.py (simplified)
import requests

def search_polymarket(query: str) -> list[dict]:
    # The Gamma API is free to access
    url = "https://gamma-api.polymarket.com/markets"
    resp = requests.get(url, params={"topic": query, "limit": 20})
    markets = resp.json()
    # Five-factor scoring model
    for m in markets:
        m["composite_score"] = (
            m["text_relevance"] * 0.30 +
            m["volume_24h"] * 0.30 +
            m["liquidity"] * 0.15 +
            m["price_velocity"] * 0.15 +
            m["outcome_competitiveness"] * 0.10
        )
    return sorted(markets, key=lambda x: x["composite_score"], reverse=True)
Actual effect: If you search for "OpenClaw," you'll see not only Reddit threads but also Polymarket data regarding "OpenClaw Monthly Active Users." The odds backed by real money are often more convincing than any forum post.
3.4 Multi-Signal Scoring Model Analysis
Results from all sources flow into a unified scoring pipeline:
# scripts/lib/score.py (simplified)
def compute_composite_score(item: dict, all_results: list[dict], query: str) -> float:
    # 1. Text relevance (bidirectional similarity + synonym expansion)
    text_score = bidirectional_similarity(item["text"], query)
    # 2. Engagement normalization
    engagement_score = normalize_velocity(item["engagement"])
    # 3. Source authority weighting (HN > Reddit > X > TikTok)
    authority_score = SOURCE_WEIGHTS[item["source"]]
    # 4. Cross-platform convergence bonus (multiple platforms = weight bonus)
    convergence_score = detect_convergence(item, all_results)
    # 5. Recency decay (0.98 exponential decay per day)
    recency_score = 0.98 ** days_since_post(item["timestamp"])
    return (
        text_score * 0.35 +
        engagement_score * 0.25 +
        authority_score * 0.20 +
        convergence_score * 0.10 +
        recency_score * 0.10
    )
In v2.5 blind tests, scoring accuracy improved from 3.73/5.0 to 4.38/5.0 (~17% improvement).
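detect_convergence itself is not shown above. A minimal version might count how many other platforms surfaced a near-identical title; the 0.6 similarity threshold and the 3-platform normalization ceiling below are assumptions for illustration:

```python
def detect_convergence(item: dict, all_results: list[dict]) -> float:
    """Score in [0, 1]: how many *other* platforms surfaced a
    near-identical title (illustrative sketch)."""
    def similar(a: str, b: str) -> bool:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return (len(ta & tb) / len(ta | tb) >= 0.6) if ta | tb else False

    platforms = {
        r["source"] for r in all_results
        if r["source"] != item["source"] and similar(r["title"], item["title"])
    }
    # Normalize: corroboration on 3+ other platforms saturates the bonus
    return min(len(platforms) / 3, 1.0)
```

A story echoed on two other platforms would score 2/3 here, feeding into the 0.10-weighted convergence term above, so cross-platform corroboration lifts a result without ever dominating text relevance.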
Step 4 Comparison Mode (X vs Y)
4.1 "Cursor vs Windsurf" Case Study
One of the coolest features is comparing two tools directly:
/last30days cursor vs windsurf
Or via command line:
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"cursor vs windsurf" \
--emit=compact
4.2 Parallel Three-Way Research Mechanism
Comparison mode triggers three independent research threads:
# scripts/lib/score.py (comparison-mode branch, simplified)
def run_comparative_mode(query_a: str, query_b: str, base_query: str):
    # Three-way parallel search
    results_a = run_research(query_a)        # Path 1: search Cursor only
    results_b = run_research(query_b)        # Path 2: search Windsurf only
    results_base = run_research(base_query)  # Path 3: search comparative discussions
    # Generate a side-by-side report
    return render_comparison(results_a, results_b, results_base)
4.3 Understanding the Comparison Report
The output format looks roughly like this:
## Comparative Analysis: Cursor vs Windsurf
### Strengths
| Dimension | Cursor | Windsurf |
|---|---|---|
| Completion Speed | ★★★★★ | ★★★ |
| Context Understanding | ★★★★ | ★★★★★ |
| Multi-file Editing | ★★★★ | ★★★ |
| Ecosystem | ★★★★★ | ★★★ |
### Weaknesses
| Dimension | Cursor | Windsurf |
|---|---|---|
| Memory Usage | High | Medium |
| Offline Support | Poor | Good |
### Community Sentiment (30-day)
- Cursor: 78% positive (1,240 discussions)
- Windsurf: 65% positive (890 discussions)
### Data-Driven Verdict
Cursor leads in community engagement and code completion quality.
Windsurf excels in contextual understanding for complex refactors.
**Recommendation**: Use Cursor for daily coding; Windsurf for architecture planning.
Step 5 Embedding into Other Skills / CI Pipelines
5.1 Injection as Context
Other skills can directly reference research results from last30days:
## Recent Research Context
!python3 ~/.claude/skills/last30days/scripts/last30days.py \
"your research topic" \
--emit=context
5.2 Reading Context from Files
## Research Context
!cat ~/.local/share/last30days/out/last30days.context.md
5.3 JSON Output for CI/CD
Inject research results into automated pipelines:
# Output in JSON format for program consumption
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"best LLM for code review 2026" \
--emit=json > research.json
# Check recommended Top 3 models
python3 -c "
import json
data = json.load(open('research.json'))
for r in data['top_results'][:3]:
    print(f\"- {r['title']} (score: {r['score']:.2f})\")
"
5.4 Project-Level .env Overrides
Don't want global config? Placing a .claude/last30days.env in the project root will override global settings:
mkdir -p .claude
cat > .claude/last30days.env << 'EOF'
# API Key valid only for this project
SCRAPECREATORS_API_KEY=sc_project_specific_key
OPENAI_API_KEY=sk-project-specific-key
EOF
TIP
This is particularly useful for team collaboration: everyone can use their own key while skill behavior remains consistent.
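The override precedence can be sketched as a layered loader: global config first, then the project file, then the process environment. The file names match the docs above, but the parsing logic here is an illustrative assumption, not the skill's actual env.py:

```python
import os
from pathlib import Path

def parse_env_file(path: Path) -> dict[str, str]:
    """Parse KEY=VALUE lines, ignoring comments and blanks."""
    env: dict[str, str] = {}
    if not path.exists():
        return env
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def load_config(project_root: Path) -> dict[str, str]:
    """Later layers win: global config < project file < process environment."""
    config = parse_env_file(Path.home() / ".config" / "last30days" / ".env")
    config.update(parse_env_file(project_root / ".claude" / "last30days.env"))
    config.update({k: v for k, v in os.environ.items() if k in config})
    return config
```

Since dict.update applies later layers last, a project-level SCRAPECREATORS_API_KEY cleanly shadows the global one without deleting it.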
5.5 Session Startup Auto-Validation
v2.9.5 added config checks at session start. Claude Code automatically verifies if your .env config is complete every time it starts:
# Trigger validation manually
python3 ~/.claude/skills/last30days/scripts/last30days.py \
--validate-config
Clear errors appear if keys are missing:
❌ SCRAPECREATORS_API_KEY is missing (required)
✅ OPENAI_API_KEY found
⚠️ BSKY_HANDLE is missing (optional)
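A validator producing output like the above can be sketched in a few lines; the key lists come from the dependency table earlier, while the function name itself is hypothetical:

```python
REQUIRED = ["SCRAPECREATORS_API_KEY"]
OPTIONAL = ["OPENAI_API_KEY", "XAI_API_KEY", "AUTH_TOKEN", "CT0",
            "BSKY_HANDLE", "BSKY_APP_PASSWORD"]

def validate_config(env: dict[str, str]) -> list[str]:
    """Return one human-readable status line per known key."""
    lines = []
    for key in REQUIRED:
        lines.append(f"✅ {key} found" if env.get(key)
                     else f"❌ {key} is missing (required)")
    for key in OPTIONAL:
        lines.append(f"✅ {key} found" if env.get(key)
                     else f"⚠️ {key} is missing (optional)")
    return lines
```

Separating required from optional keys lets the session start succeed with warnings instead of failing hard when only a nice-to-have source is unconfigured.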
Troubleshooting
Q1: Error reporting SCRAPECREATORS_API_KEY missing
Cause: Environment variables were not loaded correctly.
Solution:
# Verify key exists
cat ~/.config/last30days/.env | grep SCRAPECREATORS_API_KEY
# If installed via plugin, try manual sync
bash ~/.claude/skills/last30days/scripts/sync.sh
Q2: X search always returns empty results
Cause: AUTH_TOKEN / CT0 cookies have expired or are invalid.
Solution:
# 1. Log in to x.com again
# 2. Copy the latest auth_token and ct0
# 3. Update .env
# Or use xAI API alternative (no cookies needed)
echo "XAI_API_KEY=xai-your-key" >> ~/.config/last30days/.env
Q3: Search is slow (over 10 minutes)
Cause: The skill queries up to 10 sources in parallel by default, and niche topics trigger more API retries.
Solution:
# 1. Use --quick to skip some deep search logic
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"niche topic" \
--quick
# 2. Check if specific sources are timing out
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"topic" \
--sources=reddit # Search Reddit only for faster debugging
Q4: Polymarket returns irrelevant markets
Cause: Polymarket matching is keyword-based; some topics may not have active markets.
Solution:
# Manually specify keyword expansion
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"AI coding assistant" \
--polymarket-tags="artificial intelligence,llm,gpt" \
--emit=compact
Q5: Claude Code shows Permission denied on scripts
Cause: Shell scripts lack execution permissions.
Solution:
chmod +x ~/.claude/skills/last30days/skills/last30days/last30days.sh
chmod +x ~/.claude/skills/last30days/scripts/last30days.py
Q6: Few Reddit posts in report, but lots of discussion exists
Cause: Since v2.9 the default Reddit backend is ScrapeCreators; your key may be configured but in an invalid format.
Solution:
# Verify if ScrapeCreators Key is valid
curl -H "x-api-key: sc_your_key" \
https://api.scrapecreators.com/v1/reddit/search?q=test
# If Key is valid but no data, check if it reverted to OpenAI
# Force ScrapeCreators in .env
echo "FORCE_SCRAPECREATORS=1" >> ~/.config/last30days/.env
Further Reading / Advanced Directions
1. Fine-tuning the Scoring Model
Weights in scripts/lib/score.py are hardcoded. If you have specific source preferences, you can fork and modify them:
# Increase Hacker News authority weight from 0.20 to 0.35
SOURCE_WEIGHTS = {
    "hackernews": 0.35,  # ← changed from 0.20
    "reddit": 0.25,
    "x": 0.15,
    "polymarket": 0.15,
    "youtube": 0.10,
}
2. Adding Custom Sources
Want to integrate GitHub Issues or LinkedIn? Follow the interface spec in hackernews.py, create a new module under scripts/lib/, and register it in last30days.py. SPEC.md contains full interface definitions.
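As a sketch of what such a module might look like, here is a hypothetical GitHub Issues source built on the public GitHub Search API. The endpoint and response fields (title, html_url, comments, created_at) are real GitHub API conventions, but the normalized output keys are assumptions about the skill's schema, not its actual interface:

```python
import json
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

def normalize_issue(raw: dict) -> dict:
    """Map a GitHub Search API item onto a flat result dict
    (the output keys here are assumed, not the skill's real schema)."""
    return {
        "source": "github_issues",
        "title": raw["title"],
        "url": raw["html_url"],
        "engagement": raw.get("comments", 0),
        "timestamp": raw["created_at"],
    }

def search_github_issues(query: str, days: int = 30) -> list[dict]:
    """Recent GitHub issues matching the query, most-discussed first."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).date().isoformat()
    q = urllib.parse.quote(f"{query} created:>={cutoff}")
    url = f"https://api.github.com/search/issues?q={q}&sort=comments"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return [normalize_issue(it) for it in json.load(resp)["items"]]
```

Keeping normalization in its own function mirrors the raw-response → normalized-schema split the project uses (normalize.py), and makes the module testable without network access.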
3. Automated Scheduled Research
Use cron to run periodic research and update your knowledge base:
# Run "AI tools weekly" research every morning at 8 AM
0 8 * * * python3 ~/.claude/skills/last30days/scripts/last30days.py \
"AI developer tools weekly" \
--days=7 \
--emit=md \
--output ~/Documents/Last30Days/ai-tools-weekly-$(date +\%Y-\%m-\%d).md
4. Knowledge Base Integration
Inject research results directly into your personal knowledge base (RAG pipeline):
# Generate context with citations to feed another skill
python3 ~/.claude/skills/last30days/scripts/last30days.py \
"Claude Code advanced techniques" \
--emit=context > ~/.knowledge/last30_context.md
Resources
- GitHub: mvanhorn/last30days-skill
- ClawHub: last30days-official
- ScrapeCreators API: scrapecreators.com
- OpenClaw Official: openclaw.ai
- Hacker News Algolia API: hn.algolia.com/api
- Polymarket Gamma API: gamma-api.polymarket.com