THE X ALGORITHM PLAYBOOK 2026
How to Reverse-Engineer Your Way to Millions of Impressions
Last Updated: January 2026
Based on: X’s open-sourced Grok-based transformer recommendation system (xai-org/x-algorithm)
What Changed: 2025 vs 2026 Algorithm
The Old System (Pre-2026)
The previous Twitter algorithm was a 48-million-parameter neural network with:
Hand-crafted features (manually designed engagement signals)
TwEEPCred credibility scoring
Simple engagement weighting (RT = X, Like = Y)
Basic two-stage candidate sourcing
The New System (2026)
X has completely rebuilt the algorithm from scratch using xAI’s Grok technology:
Core Model: 48M param neural net → Grok-based transformer
Feature Engineering: Hand-crafted signals → Zero hand-crafted features—100% learned
Engagement Prediction: Single relevance score → 19 simultaneous action predictions
Retrieval: Basic candidate sourcing → Two-tower embedding model with ANN search
In-Network Posts: Database queries → Thunder in-memory store (sub-millisecond)
Ranking Architecture: Simple scoring → Transformer with candidate isolation masking
Position Encoding: Standard → Rotary Position Embeddings (RoPE)
Attention: Standard multi-head → Grouped Query Attention (GQA)
What This Means For You
The algorithm is now fundamentally different. Many tactics from the old playbook still work (engagement velocity, early RTs, etc.), but the underlying mechanics have changed. The new system:
Learns everything automatically - No more gaming specific hand-crafted features
Predicts 19 different actions - Not just “will they engage?” but “HOW will they engage?”
Scores candidates independently - Your post’s score doesn’t depend on what else is in the batch
Uses embedding similarity - Your content is converted to vectors and matched against user preferences
Processes in-network posts separately - Posts from people you follow go through Thunder, not Phoenix
The New Architecture: Complete Technical Breakdown
System Overview
When you post a tweet, here’s EXACTLY what happens:
Your tweet flows through:
CANDIDATE PIPELINE → Query Hydrator → Source → Hydrator → Filter → Scorer → Selector
Splits into THUNDER (In-Network Posts) and PHOENIX (Out-of-Network Discovery)
HOME MIXER combines results → Returns ranked “For You” feed
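The flow above can be sketched as a chain of plain functions. This is a toy illustration of the stage ordering only, not the actual Home Mixer code; every name, signature, and data shape here is invented.

```python
def hydrate_query(query):
    # Stage 1: enrich the request with viewer context before any posts are fetched.
    query = dict(query)
    query.setdefault("seen_posts", set())
    return query

def build_feed(query, sources, filters, score_fn, k=2):
    query = hydrate_query(query)
    candidates = []
    for source in sources:                           # Stage 2: sources (parallel in prod)
        candidates.extend(source(query))
    candidates = [dict(c, hydrated=True) for c in candidates]  # Stage 3: hydration
    for keep in filters:                             # Stage 4: filters run sequentially
        candidates = [c for c in candidates if keep(query, c)]
    ranked = sorted(candidates, key=lambda c: score_fn(query, c), reverse=True)  # Stage 5
    return ranked[:k]                                # Stage 6 (Stage 7 side effects are async)

thunder = lambda q: [{"id": 1, "quality": 0.9}]      # in-network source
phoenix = lambda q: [{"id": 2, "quality": 0.7},      # out-of-network source
                     {"id": 3, "quality": 0.2}]
not_seen = lambda q, c: c["id"] not in q["seen_posts"]

feed = build_feed({"viewer_id": 42}, [thunder, phoenix], [not_seen],
                  lambda q, c: c["quality"])
```

The takeaway from the structure: filters can only remove you (Stage 4 runs before any scoring), so surviving the filters is a precondition for the scorer ever seeing your post.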
The Seven Pipeline Stages (In Order)
Understanding these stages is CRITICAL because each one is an opportunity to either boost or kill your reach.
Stage 1: Query Hydration (Parallel)
What happens: Before looking at ANY posts, the system enriches the query with user context.
Data collected:
viewer_id - Who’s viewing the feed
app - Which X app/platform they’re using
language - User’s language preference
location - Geographic location
seen_posts - Posts already shown (won’t show again)
served_posts - Posts served in previous sessions
bloom_filter - Probabilistic data structure for deduplication
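The bloom_filter field is worth understanding, because it decides what a user can never be shown twice. Here is a toy bloom filter for post deduplication; the real bit-array size and hash count are not public, so these values are illustrative.

```python
import hashlib

class BloomFilter:
    """Toy probabilistic set: false positives possible, false negatives are not."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        # Derive several independent bit positions per item.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

seen = BloomFilter()
seen.add("post:123")   # once added, "post:123" is excluded from future responses
```

The practical consequence: once a post lands in a user's seen set, it is gone for that user, which is why fresh content always beats reposting.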
How to exploit this:
Language consistency - Always post in your primary language. The algorithm matches content language to user language preference. Mixing languages confuses the query hydration.
Location signals - If you’re targeting a specific geography, mention location-specific references. The algorithm will match you to users in those regions during query hydration.
Platform optimization - Mobile app users get different query hydration than web users. The algorithm knows which platform someone uses and weights accordingly.
Stage 2: Source (Parallel Execution)
What happens: Multiple sources fetch candidates simultaneously:
Thunder fetches in-network posts (from followed accounts)
Phoenix retrieval fetches out-of-network posts (discovery)
Other sources may fetch trending, promoted, etc.
Key insight: Sources run in PARALLEL. This means:
In-network and out-of-network posts are fetched simultaneously
Your post competes in whichever bucket applies
Being followed by the viewer = Thunder path (faster, more likely to appear)
Not followed = Phoenix path (must score exceptionally well to appear)
How to exploit this:
Build genuine followers - In-network posts (Thunder) have a massive advantage. They bypass the Phoenix ranking competition entirely for followers.
Understand the split - The old “50% in-network / 50% out-of-network” split is now dynamic. High-engagement users see more discovery content.
Seed your network - Your posts appear in Thunder for ALL your followers simultaneously.
Stage 3: Hydration (Parallel Execution)
What happens: Raw candidates are enriched with additional metadata including author information, post metadata, engagement counts, and content signals.
How to exploit this:
Optimize author signals - Your account metadata is attached to EVERY post during hydration. Bad account signals = every post penalized.
Include rich metadata - Posts with images, videos, or links get additional hydration data.
Topic consistency - The hydration stage detects topics. Consistent posting = preferential showing to interested users.
Stage 4: Filter (Sequential Execution)
What happens: Candidates that don’t meet criteria are removed. Filters run SEQUENTIALLY (order matters).
Filter types: Blocklist filter, Quality filter, Deduplication filter, Safety filter, Relevance filter.
How to exploit this:
Never trigger safety filters - A single safety filter trigger teaches the algorithm to filter you more aggressively.
Avoid duplicate content - The deduplication filter is aggressive. Posting similar content repeatedly = filtered.
Stay on topic - The relevance filter removes off-brand content.
Stage 5: Scorer (Sequential Execution)
What happens: Phoenix transformer scores each surviving candidate. The scoring process predicts 19 DIFFERENT actions, not just one engagement score.
Stage 6: Selector
What happens: Based on scores, candidates are selected considering score rankings, diversity requirements, recency balance, and content type mix.
Stage 7: Side Effects (Detached/Async)
What happens: After selection, async tasks fire for logging, metrics collection, user feedback tracking, and A/B experiment tracking. The algorithm is CONSTANTLY learning from what users do after seeing content.
The 19 Engagement Signals (Multi-Action Prediction)
This is the most important change from the old algorithm. Instead of predicting a single “engagement score,” Phoenix predicts the probability of 19 SPECIFIC actions.
Positive Signals (Higher = Better Ranking)
favorite_score - Probability user will like
Weight Impact: HIGH
How to Optimize: Strong hooks, relatable content, dopamine triggers
reply_score - Probability user will reply
Weight Impact: VERY HIGH
How to Optimize: Ask questions, make controversial claims, leave gaps
repost_score - Probability user will RT
Weight Impact: VERY HIGH
How to Optimize: Make content share-worthy, provide social currency
quote_score - Probability user will quote tweet
Weight Impact: HIGH
How to Optimize: Create content people want to add their take to
follow_author_score - Probability user will follow you
Weight Impact: VERY HIGH
How to Optimize: Demonstrate expertise, tease more value
profile_click_score - Probability user clicks your profile
Weight Impact: MEDIUM-HIGH
How to Optimize: Create curiosity about who you are
detail_click_score - Probability user expands the tweet
Weight Impact: MEDIUM
How to Optimize: Write compelling hooks that demand expansion
video_playback_50_score - Probability user watches 50% of video
Weight Impact: HIGH
How to Optimize: Front-load value, keep videos under 60 seconds
dwell_time_score - Probability user spends 15+ seconds
Weight Impact: HIGH
How to Optimize: Dense content, compelling narrative, formatting
bookmark_score - Probability user saves for later
Weight Impact: MEDIUM-HIGH
How to Optimize: Tactical, reference-worthy content
share_score - Probability user shares via DM/external
Weight Impact: MEDIUM
How to Optimize: Content worth sending to a friend
link_click_score - Probability user clicks included link
Weight Impact: MEDIUM
How to Optimize: Compelling CTA, curiosity gap
Negative Signals (Higher = Worse Ranking)
block_score - Probability user will block you
Weight Impact: CATASTROPHIC
How to Avoid: Don’t harass, spam, or be toxic
mute_score - Probability user will mute you
Weight Impact: SEVERE
How to Avoid: Stay on-brand, don’t be annoying
report_score - Probability user will report you
Weight Impact: CATASTROPHIC
How to Avoid: Zero policy violations
unfollow_score - Probability user unfollows after seeing
Weight Impact: SEVERE
How to Avoid: Deliver consistent value
not_interested_score - Probability user clicks “not interested”
Weight Impact: SEVERE
How to Avoid: Stay relevant to your audience
negative_feedback_score - General negative reaction
Weight Impact: HIGH
How to Avoid: Quality over quantity
spam_score - Probability content is spam
Weight Impact: CATASTROPHIC
How to Avoid: No engagement bait, no spam patterns
The Scoring Formula
What we know about the weights (based on code analysis and observed behavior):
Positive signals:
reply_score: 3-5× base weight
repost_score: 2-4× base weight
follow_author_score: 3-5× base weight
favorite_score: 1× base weight
Negative signals:
report_score: -100× base weight
block_score: -50× base weight
not_interested_score: -20× base weight
Key insight: One report_score prediction can outweigh 100 favorite_score predictions. The negative signals are weighted MUCH more heavily than positive signals.
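The weight table above amounts to a weighted sum over the per-action probabilities. A minimal sketch, using the relative magnitudes inferred above (the true weights are not public and these numbers are illustrative):

```python
# Assumed relative weights; the actual Phoenix weights are not published.
WEIGHTS = {
    "favorite_score": 1.0,
    "repost_score": 3.0,
    "reply_score": 4.0,
    "follow_author_score": 4.0,
    "report_score": -100.0,
    "block_score": -50.0,
    "not_interested_score": -20.0,
}

def final_score(predictions):
    """Combine per-action probabilities into one ranking score."""
    return sum(WEIGHTS.get(action, 0.0) * p for action, p in predictions.items())

certain_like = final_score({"favorite_score": 1.0})
# A 1% report prediction cancels out a 100% like prediction at these weights:
risky = final_score({"favorite_score": 1.0, "report_score": 0.01})
```

Run the numbers and the asymmetry is stark: at a -100× weight, even a tiny predicted report probability erases a large amount of positive signal.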
Strategic Implications
Optimize for the right action - If you want replies, write reply-worthy content. If you want RTs, write share-worthy content.
Avoid triggering ANY negative prediction - Even a 1% report_score prediction might tank your reach.
Stack multiple positive signals - Content that generates likes AND replies AND follows will score much higher than content that only generates likes.
The follow prediction is HUGE - follow_author_score is weighted extremely heavily. Every post should make people want to follow you.
The Two-Stage Ranking System Deep Dive
Stage 1: Two-Tower Retrieval
The retrieval stage narrows millions of potential posts down to thousands of candidates using embedding similarity.
The User Tower (Deep Model)
The user tower encodes your ENTIRE engagement history (last 128 interactions) into a single embedding vector. This means:
The algorithm knows exactly what you’ve liked, RT’d, replied to in recent history
Your embedding changes every time you interact with content
Users who engage similarly have similar embeddings (even if they don’t follow each other)
The Candidate Tower (Shallow Model)
The candidate tower is SHALLOW (just two fully connected layers) because:
Efficiency - Candidate embeddings can be pre-computed and cached
Scale - Millions of posts need to be embedded; shallow = fast
Separation - Post quality is judged in the ranking stage, not retrieval
Similarity Matching
The math in English:
Your post is converted to a D-dimensional vector
Each user’s preferences are also a D-dimensional vector
The dot product measures how “aligned” these vectors are
Higher alignment = more likely to retrieve your post for that user
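In code, the retrieval step is just a dot product followed by a top-K sort. A self-contained toy version (dimensions and values are invented; real embeddings are high-dimensional and learned):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve_top_k(user_vec, candidate_vecs, k=2):
    # Rank candidates by alignment (dot product) with the user embedding.
    ranked = sorted(range(len(candidate_vecs)),
                    key=lambda i: dot(user_vec, candidate_vecs[i]),
                    reverse=True)
    return ranked[:k]

user = [0.9, 0.1, 0.0]             # user mostly engages with topic 0
posts = [
    [1.0, 0.0, 0.0],               # post 0: squarely topic 0
    [0.0, 1.0, 0.0],               # post 1: topic 1 only
    [0.5, 0.5, 0.0],               # post 2: mixed topics
]
top = retrieve_top_k(user, posts)  # post 0 ranks first, then the mixed post 2
```

Notice that the mixed-topic post still retrieves, but behind the focused one: this is the geometric version of the "embedding consistency" advice below.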
How to Exploit Retrieval:
Embedding consistency - Post consistently about the same topics. Your author embedding becomes strongly associated with those topics.
Avoid embedding confusion - Posting wildly different content types creates a “confused” author embedding.
Target user clusters - Users with similar engagement patterns form “clusters” in embedding space.
Author embedding matters - Your posts carry your AUTHOR embedding. If your author embedding is associated with high engagement, your new posts inherit that signal.
Stage 2: Transformer Ranking
Once retrieval narrows to thousands of candidates, the transformer ranker scores each one using Grok architecture with Grouped Query Attention (GQA) and Rotary Position Embeddings (RoPE).
Grouped Query Attention (GQA)
Why this matters for you:
GQA is more efficient, allowing larger models
The model can process more context (your history) in the same time
More history = better personalization = content that matches user history wins
Rotary Position Embeddings (RoPE)
Why this matters for you:
RoPE handles sequence length better than absolute position embeddings
The model understands recency (newer vs older history)
Recent engagements are weighted more heavily in the attention pattern
Thunder: How In-Network Posts Work
Thunder is an in-memory post store that handles posts from accounts you follow. It’s completely separate from Phoenix.
Why Thunder Matters
Thunder is your unfair advantage with followers.
When you post, your content goes into Thunder immediately via Kafka streaming. For every user who follows you:
Their next feed request queries Thunder
Thunder returns your post (if not excluded) in sub-millisecond time
Your post is a guaranteed candidate in their feed
It still needs to score well in Phoenix ranking, but it WILL be considered
Compare to out-of-network (Phoenix retrieval):
Your post must be embedded
Embedding must be similar enough to user’s embedding to make top-K
Post competes against millions of other posts in retrieval
No guarantee of even being considered
Thunder Performance Characteristics
Lookup latency: Sub-millisecond
Storage: In-memory only
Update frequency: Real-time via Kafka
Concurrency model: Semaphore-limited (prevents overload)
Sort algorithm: Recency-first (newest wins ties)
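The behavior described above can be captured in a toy in-memory store: real-time ingestion, recency-first sort, and seen-post exclusion. Everything here is an illustrative stand-in for Thunder, not its actual implementation.

```python
from collections import defaultdict

class ThunderStore:
    """Toy in-memory post store keyed by author, queried by followed set."""

    def __init__(self):
        self.by_author = defaultdict(list)   # author_id -> [(timestamp, post_id)]

    def ingest(self, author_id, post_id, ts):
        # In production this arrives in real time via Kafka streaming.
        self.by_author[author_id].append((ts, post_id))

    def fetch(self, followed, seen, limit=3):
        posts = [p for a in followed for p in self.by_author[a]]
        posts.sort(reverse=True)             # recency-first: newest timestamps win
        return [pid for ts, pid in posts if pid not in seen][:limit]

store = ThunderStore()
store.ingest("alice", "p1", ts=100)
store.ingest("alice", "p2", ts=200)
store.ingest("bob", "p3", ts=150)
feed = store.fetch(followed=["alice", "bob"], seen={"p1"})  # newest first, p1 excluded
```

The sketch makes the tactical point concrete: only your recent, unseen posts are on the table, so posting while followers are active maximizes what Thunder has to serve.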
How to Exploit Thunder
Grow real followers - Each follower = guaranteed Thunder candidacy. 1,000 real followers who actually use X = 1,000 guaranteed candidacies per post.
Post when followers are active - Thunder returns posts sorted by recency.
Understand the exclusion system - seen_posts and served_posts are excluded. Fresh content beats old content.
Reply strategically - Thunder has is_reply data. Original posts get priority in certain feeds.
Consistency beats spikes - Thunder serves your RECENT posts. Posting consistently keeps you in Thunder’s hot memory.
Phoenix: The ML Brain Behind Everything
Phoenix handles TWO critical functions: Retrieval (finding out-of-network posts via embedding similarity) and Ranking (scoring all candidates via transformer).
The Hash-Based Embedding System
Instead of storing embeddings directly for every user/post/author, Phoenix uses hash-based embeddings with multiple hash functions per entity.
Strategic implication:
Your identity is represented by multiple hash functions
Consistent posting patterns = consistent hash patterns = learnable identity
Erratic behavior = noisy signal = harder for the model to learn your patterns
Action Embeddings: The Hidden Signal
When the model sees your history, it doesn’t just see WHAT you engaged with. It sees HOW.
Strategic implication:
The model learns that users who RT also tend to RT similar content
If someone’s history is mostly likes, their like prediction will be weighted higher
Users who reply a lot = higher reply predictions for relevant content
Product Surface Embeddings
The model knows WHERE you engaged (home_feed, search, profile, notifications, explore, etc.)
Strategic implication:
Content that performs well in search may be different from content that performs well in For You
The model personalizes predictions based on WHERE users typically engage
The Candidate Isolation Principle
This is the single most important architectural decision in the new algorithm.
Traditional ranking: Score(candidate_A) = f(user, candidate_A, candidate_B, candidate_C, ...). Candidate A’s score depends on what other candidates are in the batch.
X’s new ranking: Score(candidate_A) = f(user, history, candidate_A). Candidate A’s score depends ONLY on the user and their history, NOT on other candidates.
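The isolation property is easy to demonstrate with any pointwise scorer. The scoring function below is a trivial stand-in for the Phoenix model; the only point is that a candidate’s score never changes with its batch.

```python
def pointwise_score(user_history, candidate):
    # Stand-in scorer: overlap between the candidate's topics and the user's history.
    return len(set(candidate["topics"]) & set(user_history))

history = ["ai", "startups"]
post = {"id": "a", "topics": ["ai"]}

# Two different batches containing the same post:
batch_1 = [post, {"id": "b", "topics": ["sports"]}]
batch_2 = [post, {"id": "c", "topics": ["ai", "startups"]}]

# The post's score is identical regardless of which other candidates are present.
s1 = pointwise_score(history, batch_1[0])
s2 = pointwise_score(history, batch_2[0])
```

A listwise scorer would have taken the whole batch as input; the pointwise signature f(user, history, candidate) is what makes scores cacheable and competition post-versus-preferences rather than post-versus-post.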
Why This Matters For You
Consistent scores - Your post gets the same score regardless of what other posts are in someone’s feed.
Cacheable scores - X can compute your post’s score for a user ONCE and cache it.
Fair competition - Every post competes on its own merits against user preferences.
Predictable optimization - Focus on matching what users want, not beating other posts.
How to exploit:
Don’t try to “outcompete” specific posts - focus on matching user preferences
Your score is based on USER-POST match, not POST-POST competition
Invest in understanding your audience’s preferences rather than monitoring competitors
Your New Credibility System
What Changed from TwEEPCred
The old TwEEPCred system used explicit credibility factors:
Verified badge = +100
Account age = log formula
Following ratio < 0.6 = required
Mobile usage = +50%
The new system has NO explicit credibility scoring.
Instead, credibility is LEARNED through the author embedding and engagement history.
How the New System Works
Your “Credibility” = f(author_embedding, historical_engagement_patterns)
The model learns implicitly:
“Posts from this author tend to get high engagement”
“This author’s followers actually engage”
“This author’s content rarely gets reported”
Measurable Credibility Signals (Inferred)
Historical engagement rate
How It’s Learned: Model sees your posts’ performance
How to Optimize: Maintain consistent engagement
Report/block rate
How It’s Learned: Negative signals in training data
How to Optimize: Zero policy violations
Follower engagement quality
How It’s Learned: Who engages with you (their credibility)
How to Optimize: Build quality audience
Content consistency
How It’s Learned: Embedding stability
How to Optimize: Stay on-brand
Verification status
How It’s Learned: Likely encoded in author features
How to Optimize: Get verified if possible
Account age
How It’s Learned: Likely encoded in author features
How to Optimize: Can’t change (patience)
The Following/Follower Ratio Rule
Does the 0.6 ratio still matter?
The old algorithm had an explicit threshold. The new algorithm may or may not have this, but the SIGNAL still matters:
High following = You follow everyone for follow-backs = Spam signal
Low follower engagement = Your followers are dead accounts = Low-quality signal
The model will learn these patterns even without explicit thresholds
Recommendation: Maintain a ratio under 0.6 regardless. It can’t hurt and likely helps.
How to Build Credibility in the New System
Consistent quality - The model learns from your historical performance. Every post contributes to your author embedding.
Engaged audience - Build followers who actually engage. 1,000 active followers > 10,000 dead followers.
Zero negative signals - Reports, blocks, and mutes poison your author embedding.
Topic authority - Staying on-topic allows the model to learn your expertise.
Mobile posting - The old algorithm gave +50% for mobile. The new algorithm likely has surface-specific signals. Mobile is still safer.
The Hash-Based Identity System
How X Identifies You
Every entity (user, post, author) is represented by multiple hash values that map to learned embedding vectors.
Why Multiple Hashes
With millions of users, a single hash causes collisions (different users map to the same embedding). The multi-hash solution: the probability that two users share ALL hash values is extremely low.
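A toy version of multi-hash embedding lookup makes the collision argument concrete. The hash count, bucket count, and vector dimension here are invented, not x-algorithm’s real values.

```python
import hashlib

NUM_HASHES, BUCKETS, DIM = 3, 1000, 4

def bucket_ids(entity_id):
    # Each entity maps to one bucket per hash table.
    out = []
    for i in range(NUM_HASHES):
        h = hashlib.sha256(f"{i}:{entity_id}".encode()).digest()
        out.append(int.from_bytes(h[:4], "big") % BUCKETS)
    return out

def embed(entity_id, tables):
    # The embedding is the sum of learned vectors at the entity's buckets.
    # Two entities would need to collide in ALL tables to share an embedding.
    vec = [0.0] * DIM
    for i, b in enumerate(bucket_ids(entity_id)):
        for d in range(DIM):
            vec[d] += tables[i][b][d]
    return vec

# Stand-in "learned" tables; in production these are trained parameters.
tables = [[[0.01 * (t + b + d) for d in range(DIM)] for b in range(BUCKETS)]
          for t in range(NUM_HASHES)]
user_embedding = embed("user:42", tables)
```

Since the hashes are a pure function of the ID, the same user always lands in the same buckets, which is exactly why learned reputation persists and why a new account (new ID, new buckets) starts cold.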
How This Affects You
Your identity is stable - Same user ID always produces same hash values. Your “learned reputation” persists.
New accounts start fresh - New user ID = new hashes = no pre-learned patterns. Cold start problem.
Post identity is permanent - Deleting and reposting creates a new post ID = new hashes = loses all engagement learning.
Author association is permanent - Your author hashes are attached to every post forever.
Content Format Analysis & Optimization
Text-Only Posts
Scoring Signals: dwell_time_score (15+ sec reading), detail_click_score (expanding truncated text), reply_score (conversation potential)
Tactics that work:
Line breaks for readability → Increases dwell time
280+ chars with strong hook → Triggers “See more” clicks
Question at the end → Boosts reply prediction
Dense information → Increases bookmark prediction
Image Posts
Scoring Signals: Standard text signals PLUS image engagement signals, alt-text relevance
Tactics that work:
High-contrast images → Stop the scroll
Text overlay on image → Double content delivery
Alt-text with keywords → Improves retrieval matching
Face images → Proven higher engagement
Video Posts
Scoring Signals: video_playback_50_score (50% completion), 10-second minimum threshold, loop completions, audio engagement
Tactics that work:
Hook in first 3 seconds → Prevents skip
Under 60 seconds → Easier to complete 50%
Captions → Accessibility + muted viewing
No slow intro → Immediate value delivery
Link Posts
Scoring Signals: link_click_score (external click), domain authority signals, preview engagement
Tactics that work:
Authoritative domains (NYT, etc.) → Domain embedding boost
Compelling preview text → Increases click prediction
Context around link → Not just naked URL
Native X content preferred → External links = slight penalty
Thread Posts
Scoring Signals: First tweet performance determines thread visibility, subsequent tweets inherit some momentum, thread completion rate
Tactics that work:
First tweet must stand alone → It’s the only one scored initially
Don’t say “Thread 🧵” → Doesn’t help, wastes characters
Under 5 tweets total → Completion rate matters
Single dense tweet > thread → 3× impressions-per-word for singles
Format Performance Hierarchy (2026)
Based on observed behavior and code analysis (highest to lowest impact):
Native Video (60s or under) - video_playback_50_score is heavily weighted + video content is pushed algorithmically
Carousel Images - Multiple images = multiple engagement opportunities + dwell time from swiping
Single Image + Text - Visual stop + text engagement + higher detail_click_score
Text-only (Dense, 280+ chars) - dwell_time_score + detail_click + relies on pure content quality
Single Image (No text) - Less context for model + no text engagement signals
Link Posts - Slight penalty for external navigation + depends heavily on domain
Long Threads (5+ tweets) - Low completion rate + first tweet must carry all weight
Instant Death Penalties (Updated)
Account-Level Death Penalties
These affect EVERY post you make:
☠️ Report Accumulation
Impact: CATASTROPHIC
Mechanism: report_score prediction increases for all your future content
Recovery: Extremely difficult - reputation is learned into author embedding
How it happens:
You post something that gets reported
Model learns “posts from this author get reported”
Future posts have elevated report_score predictions
Elevated report_score = lower ranking for ALL posts
Spiral continues
How to avoid:
Zero tolerance for policy violations
Don’t engage in harassment, even if “justified”
Avoid controversial topics that attract mass reports
Block bad-faith actors before they report you
☠️ High Block/Mute Rate
Impact: SEVERE
Mechanism: block_score and mute_score predictions increase
Recovery: Difficult - need sustained non-block behavior
How to avoid:
Don’t spam
Stay on-brand
Don’t @ people aggressively
Quality over quantity
☠️ Dead Follower Base
Impact: SEVERE (for Thunder retrieval)
Mechanism: Posts go to Thunder → followers don’t engage → no momentum
Recovery: Prune dead followers, rebuild quality base
The math:
10,000 followers, 1% active = 100 potential engagers
1,000 followers, 20% active = 200 potential engagers
The 1,000-follower account has DOUBLE the engagement potential
☠️ Embedding Confusion
Impact: MODERATE-SEVERE (for Phoenix retrieval)
Mechanism: Inconsistent content → confused author embedding → poor retrieval matching
Recovery: Consistent posting in one niche for 30+ days
How to avoid:
Pick 1-3 related topics
80% of content stays on-brand
20% can be personality/variety
Consistency builds embedding strength
Tweet-Level Death Penalties
These kill individual posts:
☠️ Early Negative Signals
Impact: IMMEDIATE DEATH
Mechanism: First few “not interested” clicks → algorithm learns → suppression
How to avoid:
Don’t post off-brand content
Don’t chase trends outside your niche
Seed to friendly audience first
☠️ Spam Pattern Detection
Impact: SEVERE SUPPRESSION
Mechanism: spam_score prediction spikes → filters kick in
Spam patterns to avoid:
3+ hashtags → Old-school spam tactic
5+ @mentions → Trying to force visibility
All caps → LOW_QUALITY flag
Engagement bait phrases → Explicitly targeted
Identical content repeated → Deduplication + spam flag
Link spam (link + nothing else) → Obvious promotion
☠️ Safety Filter Trigger
Impact: COMPLETE REMOVAL FROM FEED
Mechanism: Content violates policies → filtered at Stage 4
How to avoid:
Know X’s policies
When in doubt, don’t post
Self-censor before the algorithm does
Gaming Tactics That Actually Work
Category 1: First 30 Minutes (Tactics 1-10)
Tactic 1: The Engagement Seed Protocol
What: Prime your post with guaranteed early engagement to trigger algorithmic momentum.
Implementation:
T-0 (Before posting): DM 5-10 trusted contacts: “About to drop something on [topic], would appreciate an RT”
T+0 (Post): Post the content
T+1-5 min: First 3-5 engagements land, algorithm detects early velocity, pushed to wider test audience
T+5-15 min: If test audience engages, scale begins
Why it works with new algorithm: Early engagement creates positive training signal. First engagements have outsized impact on predictions.
Warning: Only use real friends/colleagues. Algorithm detects fake engagement patterns.
Tactic 2: Author Reply Amplification 2.0
What: Strategically reply to your own post to boost visibility and extend lifespan.
Implementation:
T+0: Post main content
T+5 min: First self-reply with additional value (NOT “Great point!” but “Here’s the data behind this: [specifics]”)
T+10 min: If engagement happening, second self-reply with a question to invite replies
T+30 min: Reply to every real comment
Why it works: reply_score is heavily weighted. Author replies signal “important conversation.” Each reply extends the post’s active window.
Tactic 3: The Velocity Stack
What: Stack multiple engagement types rapidly to trigger multi-signal boost.
Goal in first 15 minutes: At least 1 Retweet, 2-3 Replies, 5 Likes, 1 Bookmark
This creates a STACKED positive signal - Model sees multiple engagement types = triggers broader audience testing.
Tactic 4: The Dwell Time Trap
What: Structure content to maximize time spent reading (dwell_time_score).
Structure:
HOOK (First 40 chars) - Must create instant curiosity
TENSION (Next 80 chars) - Build on the hook, create gap
[See more break ideally here]
PAYOFF (Remaining content) - Deliver the value
RESIDUE (Final line) - Makes them re-read or think
Target: 15+ seconds reading time (dwell_time_score threshold)
Tactic 5: The Expansion Hook
What: Force “See more” clicks by strategic truncation (detail_click_score).
Character math:
X truncates around 280 chars on most devices
First 140 chars visible in timeline
Rest requires “See more” click
Strategy: First 140 chars = hook that demands resolution. Create curiosity gap that MUST be closed.
Tactic 6: The Reply Magnet
What: Structure content to maximize reply probability (reply_score).
Reply triggers:
Direct question (“What’s your biggest blocker?”) → Explicit invitation
Controversial claim (“Cold email is dead in 2026”) → People want to argue
Fill in the blank (“The best tool for X is ___”) → Easy to respond
Hot take (“Unpopular opinion: Y is overrated”) → Triggers agreement/disagreement
Specific ask (“Reply with your niche, I’ll give feedback”) → Clear value exchange
Formula: [Strong claim] + [Implied invitation to discuss]
Tactic 7: The Bookmark Bait
What: Create content so valuable it triggers bookmark_score.
Bookmark-worthy content types:
Frameworks - “The 4-step process for X”
Data/Numbers - “I analyzed 1,000 posts, here’s what works”
Reference Lists - “20 tools for X (with links)”
Counter-intuitive insights - “What everyone gets wrong about X”
Future reference - “Save this for when Y happens”
Tactic 8: The Follow Trigger
What: Maximize follow_author_score with strategic content.
Follow triggers:
Demonstrate expertise - “I’ve done X 500 times, here’s what I learned”
Promise ongoing value - “I share one Y tip every day”
Curiosity about person - “My agency does X” (wait, what agency?)
Unique perspective - “Everyone says Y, but I’ve found Z”
Results/proof - “This got me from $X to $Y”
Tactic 9: The Profile Click Hook
What: Trigger profile_click_score by creating curiosity about WHO you are.
Profile click triggers:
Reference your business without explaining - “My agency sends 47 calls/day to calendar”
Claim specific expertise - “After 500 client campaigns...”
Contrarian identity - “As someone who quit Big Tech...”
Results without method - “This is how I went from $0 to $1M in 8 months”
Requirement: Your profile must convert that curiosity. Clear bio, pinned tweet with best content, consistent visual identity.
Tactic 10: The Momentum Thread
What: If a post gains traction, thread a follow-up to capture momentum.
Implementation:
T+0: Post main content
T+15-30 min: Monitor performance
IF > 15 engagements with 2+ RTs → Thread follow-up immediately
Follow-up gets boosted by original’s momentum. Algorithm sees “this author’s content performs.”
Category 2: Content Structure (Tactics 11-20)
Tactic 11: The Pattern Interrupt
What: Break timeline scrolling patterns to force attention.
Pattern interrupts: Unusual formatting, unexpected content type, visual hooks in text, white space usage.
Tactic 12: The Specificity Hack
What: Use specific numbers/details to trigger credibility signals.
Vague: “I made a lot of money” → Specific: “I made $127,493 in Q3”
Vague: “It took a while” → Specific: “It took 47 days”
Vague: “Many clients” → Specific: “312 clients since 2021”
Tactic 13: The Contrast Hook
What: Use before/after, old/new, or wrong/right contrasts.
Contrast structures: Before/After, Wrong/Right, Expectation/Reality, Them/Me
Tactic 14: The Open Loop
What: Create curiosity gaps that must be closed.
Structure: Create a question/mystery → Delay the answer → Force continued reading to close loop
Tactic 15: The Social Proof Stack
What: Layer multiple proof points for credibility.
Proof types to stack: Personal results, Client results, Timeframe, Specificity, Authority reference
Tactic 16: The Value Front-Load
What: Put maximum value in the first 140 characters.
Rule: First 140 chars must either deliver value immediately, create irresistible curiosity, or make a bold claim worth debating.
Tactic 17: The Native Format Advantage
What: Use X’s native tools over external alternatives.
PREFERRED (Native): Direct video upload, native images, polls, Spaces
AVOID (External): YouTube links, blog links without context, third-party embed tools
Tactic 18: The Carousel Hack
What: Use multi-image posts to multiply engagement opportunities.
Carousel structure: Hook → Context → Value (images 3-8) → CTA → Summary
Tactic 19: The Emoji Strategy
What: Strategic emoji placement for pattern interrupts and structure.
GOOD: Section headers in threads, single emoji to emphasize key point, visual list markers
BAD: More than 3 per post, random decoration, every sentence has emoji
Tactic 20: The White Space Weapon
What: Use formatting to slow reading and increase dwell time.
Why white space works: Easier to read, slower reading = more dwell time, emphasis on key points, feels more premium/thoughtful.
Category 3: Audience Building (Tactics 21-30)
Tactic 21: The Engaged 100
What: Build a core group of 100 highly engaged followers who amplify everything.
Selection criteria: Engages 2+ times/week, has their own engaged audience, posts in your niche, responds to DMs.
The math: If each of your 100 engaged followers has 1,000 followers and 50 RT your content, you reach 50,000 people OUTSIDE your direct followers.
Tactic 22: The Comment Domination Strategy
What: Be the best commenter in your niche to acquire followers.
Target: 10 large accounts in your niche
Action: Leave VALUABLE comments (unique perspective, relevant data, insightful questions)
Tactic 23: The Network Graph Optimization
What: Optimize WHO you engage with to improve your algorithmic neighborhood.
Strategy: Engage with successful accounts in your exact niche. Their audience overlaps with people you want to reach. Algorithm learns the connection.
Tactic 24: The DM Relationship Builder
What: Build real relationships that translate to algorithmic advantage.
DM relationships → Real engagement → Algorithm learns connection → Algorithm shows your content to their network.
Tactic 25: The Follower Audit
What: Regularly prune dead followers to maintain engagement rate.
Monthly audit: Export follower list, identify dead accounts (no posts in 90 days, no profile picture, never engaged with your content), consider removing while keeping ratio healthy.
Tactic 26: The Topic Authority Build
What: Become the algorithmic expert on a specific topic.
Strategy: Pick 2-3 related topics, 80% of content stays on these topics, post consistently for 30+ days, engage with other topic accounts.
Tactic 27: The Cross-Pollination
What: Collaborate with accounts to share audiences algorithmically.
Tactics: Quote tweet exchange, thread collaboration, interview format, mutual engagement pact.
Tactic 28: The Viral Piggyback
What: Add value to viral conversations to capture attention.
When something goes viral in your niche: Find it early, add GENUINE VALUE in reply, let curiosity drive profile clicks.
Tactic 29: The Niche Down Double-Down
What: Go narrower than you think to build algorithmic strength.
Too broad: “Business advice” → Better: “Marketing” → Good: “Content marketing” → Great: “Twitter content marketing for B2B SaaS”
Tactic 30: The Engagement Ratio Hack
What: Maintain healthy engagement ratios to signal quality.
Target ratios: Likes to followers >1% per post, Comments to likes >5%, RTs to likes >10%, Follows to impressions >0.1%
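A small helper makes the targets above checkable against a single post's stats. The function name and signature are my own illustration, not an X API; the thresholds are the four targets from the list:

```python
# Illustrative helper (names are my own, not an X API): check one post's
# stats against the target ratios listed above.
def check_ratios(followers, impressions, likes, comments, rts, follows):
    """Return {ratio_name: (value, meets_target)} for the four targets."""
    checks = {
        "likes_per_follower": (likes / followers, 0.01),        # >1% per post
        "comments_per_like": (comments / likes, 0.05),          # >5%
        "rts_per_like": (rts / likes, 0.10),                    # >10%
        "follows_per_impression": (follows / impressions, 0.001),  # >0.1%
    }
    return {name: (round(v, 4), v >= t) for name, (v, t) in checks.items()}

# Example: 10k followers, 50k impressions, 150 likes, 12 comments, 20 RTs, 60 follows
print(check_ratios(10_000, 50_000, 150, 12, 20, 60))
```

In this example all four targets pass (1.5% likes/followers, 8% comments/likes, ~13% RTs/likes, 0.12% follows/impressions).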
Category 4: Timing & Frequency (Tactics 31-40)
Tactic 31: The Analytics-Driven Window
What: Use data to find YOUR optimal posting times.
Analyze last 30 posts, find patterns in velocity by day/hour, double down on high-velocity windows.
Tactic 32: The Frequency Sweet Spot
What: Find optimal posting frequency for your account.
Sweet spot for most: 1-3 quality posts per day, consistent timing, never sacrifice quality for quantity.
Tactic 33: The Time Zone Stacking
What: Reach multiple time zones with strategic timing.
Post when YOUR followers are online. Analyze YOUR audience distribution.
Tactic 34: The Weekend Arbitrage
What: Less competition on weekends = opportunity for some accounts.
Test weekend posting for 4 weeks, compare engagement rates, decide based on YOUR data.
Tactic 35: The Event Hijacking
What: Capitalize on predictable high-attention events.
Strategy: Prepare content BEFORE event, post during peak attention, relate to your niche.
Tactic 36: The Posting Cadence
What: Create predictable patterns for algorithm and audience.
Example: Monday (value post), Tuesday (engagement post), Wednesday (story post), Thursday (insight post), Friday (light post).
Tactic 37: The Response Window
What: Maximize impact of responding to engagement.
0-15 minutes: CRITICAL - Reply to every comment
15-60 minutes: IMPORTANT - Continue responding
1-4 hours: HELPFUL - Still reply, less urgency
24+ hours: MINIMAL IMPACT - Still polite to reply
Tactic 38: The Content Calendar
What: Plan content in advance to maintain consistency.
Weekly planning: Plan all posts for the week, include content type/topic/key message, write hooks in advance, leave room for reactive content.
Tactic 39: The Momentum Preservation
What: Post when you have momentum, not against it.
After strong performance: Post again within 24-48 hours. Algorithm has elevated view of your account.
After weak performance: Don’t panic post. Wait for optimal time window. Focus on quality of next post.
Tactic 40: The Engagement Block
What: Schedule dedicated time for strategic engagement.
Daily engagement block (30-60 min): First 15 min on own content, next 15 min on target accounts, final 15 min on engaged followers.
Category 5: Advanced Techniques (Tactics 41-50)
Tactic 41: The A/B Hook Test
What: Test multiple hooks to find what resonates.
Write 3-5 different hooks for same content, post least confident hook first, track which hook styles perform best.
Tactic 42: The Negative Signal Audit
What: Proactively identify what’s triggering negative signals.
Self-audit: Have any posts drawn "not interested" feedback? Have you been blocked or muted recently? Is your engagement rate trending down?
Tactic 43: The Embedding Refresh
What: Periodically reinforce your topic authority.
Every 30 days: Post your best-performing content style, double down on topic-specific content, re-engage with niche accounts.
Tactic 44: The New Follower Conversion
What: Maximize value from new followers in first 48 hours.
Their first 48 hours: Most engaged with you, algorithm testing fit, high chance of early engagement.
Your strategy: Post quality content within 48 hours, engage if they comment, don’t immediately DM pitch.
Tactic 45: The Authority Signal Stack
What: Systematically build authority signals over time.
Checklist: Verified badge, 10K+ followers, high engagement rate (>2%), consistent topic posting, engagement from notable accounts, no policy violations, active engagement, following/follower ratio <0.6.
Tactic 46: The Content Repurposing Engine
What: Systematically reuse your best content.
Repurpose options: Text post → Image with key quote → Thread expanding insight → Video explaining it → Poll asking audience take → Quote tweet of original (weeks later)
Tactic 47: The Algorithm Reset
What: A recovery protocol for when your account seems suppressed.
Reset protocol:
Week 1: Pause and observe (stop posting or 1/day max, engage more than post, identify negative signal causes)
Week 2: Quality reset (post BEST content type, maximum quality, minimum quantity)
Week 3: Gradual increase (slowly increase posting if improving, monitor closely)
Tactic 48: The Network Effect Accelerator
What: Create content that benefits from network effects.
Network effect content: Tag-worthy content, resource lists, case studies of others, questions to experts.
Tactic 49: The Controversy Calibration
What: Use controversy strategically without triggering negative signals.
Safe controversy: Industry opinions (not personal attacks), methodology debates (not moral debates), best practices disagreements (not political).
The line: “This method doesn’t work” = OK. “People who use this method are stupid” = NOT OK.
Tactic 50: The Meta-Game Optimization
What: Continuously evolve your strategy as the algorithm changes.
Monthly review: What changed in performance? What are top performers doing? What should I test? What should I stop?
The Hourly/Daily/Weekly/Monthly Playbook
⏱️ HOURLY (During Active Posting)
Before Every Post:
Hook in first 40 characters?
One clear value/insight?
No engagement bait phrases?
0-2 hashtags max?
On-brand for my niche?
Would I engage with this if I saw it?
After Posting (0-30 min):
Seeded to 3-5 engaged connections? (optional)
Monitoring for replies to respond to?
Ready to author-reply if momentum builds?
Prepared follow-up thread if performing?
After Posting (30-60 min):
Responded to all comments?
Evaluated performance (continue engaging or move on)?
Noted what worked/didn’t for future reference?
📅 DAILY CHECKLIST
Morning:
Review yesterday’s post performance
Plan today’s content (if not pre-planned)
Check timing window for posting
Posting Window:
Post from mobile app (not web/scheduler)
Execute hourly checklist above
30-minute dedicated engagement block
Evening:
Final response sweep to any comments
Engage with 5-10 target accounts
Engage with top 5 most active followers
Note any content ideas for tomorrow
📊 WEEKLY CHECKLIST
Every Sunday (30-60 min planning):
Performance Review:
Top 3 posts of the week - what made them work?
Worst performing post - what killed it?
Engagement rate trend (up/down/stable)?
Follower gain/loss?
Any “not interested” or negative feedback?
Content Planning:
Plan content for M-F (at minimum)
Ensure variety (not all same format)
Pre-write hooks for each day
Identify any relevant events to capitalize on
Audience Health:
Following/follower ratio still under 0.6?
Any notable new followers to engage with?
Any dead followers to consider removing?
Competitive Intel:
What are 3 similar accounts posting?
What’s working for them this week?
Any new tactics to test?
🗓️ MONTHLY CHECKLIST
First of Every Month (1-2 hours deep dive):
Account Health Audit:
Following/follower ratio (MUST be <0.6)
Total follower change (+/- and %)
Average engagement rate per post
Any suppression signs?
Mobile posting % (should be 100%)
Content Performance Analysis:
Top 10 posts of the month - common themes, hooks, formats?
All hook types tested - which 3 performed best?
Best performing content format?
Best performing times?
Audience Analysis:
Who are my top 20 engagers?
Am I engaging with them enough?
Audience demographic changes?
Quality of new followers (engaged vs dead)?
Strategy Adjustment:
What should I do MORE of?
What should I do LESS of?
What should I START doing?
What should I STOP doing?
Competitive Analysis:
5 accounts in my niche to study
What are they doing that I’m not?
What worked for them this month?
One tactic to steal and test?
Goal Setting:
Follower goal for next month
Engagement rate goal for next month
Content experiments to run
Relationship building targets
Technical Appendix: Model Weights & Architecture
Phoenix Transformer Configuration
Based on code analysis:
emb_size: 1024 (Embedding dimension)
num_layers: 24 (Transformer depth)
num_q_heads: 16 (Query heads)
num_kv_heads: 4 (Key/Value heads for GQA)
ffn_widening: 4 (FFN multiplier)
history_length: 128 (History sequence)
candidate_length: 32 (Candidates per batch)
num_actions: 19 (Output heads)
dtype: bfloat16 (Precision)
rope_base: 10000 (RoPE parameter)
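The configuration above can be gathered into one structure. This is a hypothetical sketch for reference only; the field names mirror the list, not actual xai-org/x-algorithm code:

```python
from dataclasses import dataclass

# Hypothetical sketch of the Phoenix config listed above (field names
# mirror the document's list, not actual xai-org/x-algorithm code).
@dataclass(frozen=True)
class PhoenixConfig:
    emb_size: int = 1024        # embedding dimension
    num_layers: int = 24        # transformer depth
    num_q_heads: int = 16       # query heads
    num_kv_heads: int = 4       # key/value heads (GQA)
    ffn_widening: int = 4       # FFN hidden size = ffn_widening * emb_size
    history_length: int = 128   # user history sequence length
    candidate_length: int = 32  # candidates scored per batch
    num_actions: int = 19       # output heads, one per predicted action
    rope_base: int = 10_000     # RoPE frequency base

cfg = PhoenixConfig()
# With GQA, each group of query heads shares one KV head:
print(cfg.num_q_heads // cfg.num_kv_heads)  # 4 query heads per KV head
print(cfg.ffn_widening * cfg.emb_size)      # 4096-wide FFN hidden layer
```

The two derived numbers are what GQA and the FFN multiplier mean in practice: a 4:1 query-to-KV sharing ratio (smaller KV cache) and a 4096-wide feed-forward hidden layer.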
Embedding Table Sizes
user_hashes: 4 (Hashes per user)
post_hashes: 4 (Hashes per post)
author_hashes: 4 (Hashes per author)
action_embeddings: 19 (One per action type)
surface_embeddings: 16 (Product surfaces)
table_size: millions (Hash table size)
Two-Tower Retrieval Specs
user_tower: deep (Full transformer)
candidate_tower: shallow (2-layer MLP)
similarity: dot_product (Matching function)
normalization: L2 (Embedding normalization)
retrieval_k: thousands (Top-K retrieved)
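The retrieval specs above reduce to a simple pattern, sketched here with assumed shapes (the towers are stubbed with random vectors; in the real system a deep transformer and a 2-layer MLP produce them):

```python
import numpy as np

# Sketch of two-tower retrieval per the specs above (shapes and names are
# assumptions): both towers emit L2-normalized vectors, and retrieval is a
# dot-product top-K over the candidate embeddings.
def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
user_emb = l2_normalize(rng.normal(size=(1, 128)))             # deep user tower output
candidate_embs = l2_normalize(rng.normal(size=(10_000, 128)))  # shallow tower output

# Dot product equals cosine similarity once both sides are L2-normalized.
scores = candidate_embs @ user_emb.T        # (10000, 1)
top_k = np.argsort(-scores.ravel())[:5]     # indices of the top 5 candidates
print(top_k.shape)  # (5,)
```

In production the brute-force argsort is replaced by approximate nearest-neighbor (ANN) search, which is what makes retrieving "thousands" of candidates from an enormous corpus feasible.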
Action Prediction Weights (Estimated)
Based on observed behavior and code structure:
Positive weights:
favorite_score: 1.0
reply_score: 3.0
repost_score: 2.5
quote_score: 2.0
follow_author_score: 4.0
profile_click_score: 1.5
bookmark_score: 1.5
video_playback_50_score: 2.0
dwell_time_score: 1.5
Negative weights:
report_score: -100.0
block_score: -50.0
mute_score: -30.0
not_interested_score: -20.0
unfollow_score: -25.0
Note: These are estimates based on observed behavior, not confirmed weights.
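To see how these estimated weights combine, here is a sketch. The weights are the document's estimates above; the combination rule (a probability-weighted sum over the predicted action probabilities) is my assumption about how multi-action scores are typically blended:

```python
# Sketch of final-score blending using the estimated weights above. The
# weights come from the document; the weighted-sum rule is an assumption.
WEIGHTS = {
    "favorite": 1.0, "reply": 3.0, "repost": 2.5, "quote": 2.0,
    "follow_author": 4.0, "profile_click": 1.5, "bookmark": 1.5,
    "video_playback_50": 2.0, "dwell_time": 1.5,
    "report": -100.0, "block": -50.0, "mute": -30.0,
    "not_interested": -20.0, "unfollow": -25.0,
}

def blended_score(probs: dict) -> float:
    """Weighted sum of predicted action probabilities."""
    return sum(WEIGHTS[action] * p for action, p in probs.items())

# A post with solid positive predictions but a 1% predicted report rate:
probs = {"favorite": 0.10, "reply": 0.02, "repost": 0.01, "report": 0.01}
print(round(blended_score(probs), 3))  # -0.815: the report risk sinks it
```

This is the "one report outweighs 100 likes" asymmetry in numbers: a 1% report probability (-1.0 contribution) wipes out a 10% like probability plus replies and reposts combined.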
The Algorithm in One Paragraph (2026 Edition)
X’s 2026 algorithm is a two-stage Grok-based transformer system that processes in-network posts through Thunder (sub-millisecond in-memory retrieval from followed accounts) and out-of-network posts through Phoenix (two-tower embedding retrieval + transformer ranking). The Phoenix ranker predicts 19 simultaneous action probabilities - including likes, replies, RTs, follows, but also blocks, mutes, and reports - using a candidate isolation masking pattern where posts can attend to user context and history but NOT to other candidates in the batch, making scores consistent and cacheable. Your author embedding, built from multiple hash functions, encodes your learned credibility through historical engagement patterns rather than explicit factors like TwEEPCred. Negative signals (reports, blocks, “not interested”) are weighted 20-100× heavier than positive signals, meaning one report can outweigh 100 likes. Content is matched to users through embedding similarity: consistent topic posting creates strong author embeddings that align with user embeddings of people interested in those topics. The algorithm has eliminated all hand-crafted features, learning everything from engagement sequences, which means gaming requires understanding the learned patterns rather than exploiting specific rules—but the fundamentals remain: early engagement velocity, reply/RT generation, dwell time, and zero negative signals still determine reach at scale.
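The candidate isolation masking described above can be made concrete with a toy attention mask. The token layout (history tokens first, then candidate tokens) and the fully connected history block are my assumptions for illustration; the point is the candidate rows:

```python
import numpy as np

# Toy candidate-isolation attention mask (layout assumed: user history
# tokens first, then candidate tokens). Each candidate attends to all
# history tokens and to itself, but never to other candidates, so its
# score is independent of what else is in the batch.
history_len, cand_len = 4, 3
total = history_len + cand_len
mask = np.zeros((total, total), dtype=bool)   # True = attention allowed

mask[:history_len, :history_len] = True       # history attends within history
for c in range(history_len, total):
    mask[c, :history_len] = True              # candidate -> history context
    mask[c, c] = True                         # candidate -> itself only

print(mask.astype(int))
```

Because no candidate row attends to another candidate column, each post's score depends only on the user context, which is exactly what makes the scores batch-independent and cacheable.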
Final Word: The 2026 Meta
The X algorithm has evolved from a rule-based system you could hack to a learned system you must genuinely satisfy.
The old game: Find the rules, exploit the rules.
The new game: Understand what the model learned to value, then create content that genuinely delivers that value.
The model learned that:
Content that gets replies is interesting
Content that gets RTs is share-worthy
Content that gets follows is from valuable accounts
Content that gets reports is harmful
Content that gets “not interested” is off-target
You can’t fake these signals at scale. But you CAN:
Understand exactly what triggers each signal
Optimize your content structure for maximum signal generation
Build systems for consistent execution
Avoid every negative signal trigger
Compound small advantages over time
The algorithm is just trying to predict what users will engage with positively. If you create content users genuinely want to engage with, the algorithm will reward you.
But understanding the MECHANICS gives you a massive edge over people who just “post and pray.”
Now go engineer your way to millions of impressions.
Get the Google Docs version of this post here:
https://docs.google.com/document/d/1A-tMJDrF6S9VZGuV-mXYjHqY1rriSDGjEw-T69jvRFA/edit?usp=sharing

Impressive breakdown of the shift from single engagement scoring to multi-action prediction. The candidate isolation principle is probably the most underestimated change: most people still think they're competing against other posts in the batch, when they're actually just competing against user preferences. I tested this by varying posting times and noticed my scores stayed consistent regardless of what else was trending, which supports the cacheable scoring hypothesis.