Something Big Isn't Happening
Special Report
There was a post concerning the advancement of AI and how its future is now. Did you catch it? The post, published February 9, 2026, is titled “Something Big Is Happening,” by Matt Shumer. It’s a good read from an industry insider (if you can separate the technology from the intent), but I immediately picked up on the fear generated by his opening lines, and a red flag mentally sprang upright. By the time I finished, it didn’t smell right to me. The classic attention-grabbing headline; how he claimed to address a specific audience yet wrote in the second person (directly to the reader, whoever that might be); then layered on more fear, and a couple of other things. It “felt” deceitful to me, but I wouldn’t know whether he was lying about anything in the article. So I took it to Grok and asked it to analyze the piece for any deceit. I want to share some of that thread here so that others may learn to recognize manipulation when they read it. Being a disciple, I recognize fear. Being a writer, I see how words are formulated to direct people’s thinking.
This post went viral quick. And hard. I’m not saying the author wasn’t accurate in what he said about the field he’s well invested in; I’m only now becoming a confident user of the technology myself. What I’m saying is that when you look below the surface at how a piece is crafted, its intent can often be discerned.
Here’s some of that chat.
- - - - - - -
- - - - - - -
Me: How credible is this author’s warning?
https://shumer.dev/something-big-is-happening
- - - - - - -
Grok: The article at https://shumer.dev/something-big-is-happening is a February 9, 2026, blog post by Matt Shumer, titled “Something Big Is Happening.” Shumer is the co-founder and CEO of OthersideAI (the company behind HyperWrite, an AI autocomplete tool), and he’s been involved in AI startups and investing for about six years. He positions the piece as an honest warning to non-tech friends/family about explosive AI progress, comparing the current moment to early 2020 COVID warnings—initially dismissed as overblown, then rapidly transformative.
His core warning: We’re in the early “exponential takeoff” phase of AI that will soon substitute for most knowledge/cognitive work. He predicts major disruption to white-collar/entry-level jobs (potentially 50% eliminated) within 1-5 years (likely sooner), driven by frontier models from OpenAI, Anthropic, and Google DeepMind. He cites recent releases (e.g., hypothetical GPT-5.3 Codex and Claude Opus 4.6 around February 2026) enabling AI to independently build full apps, write/test thousands of lines of code, and handle complex judgment tasks. He draws on Dario Amodei (Anthropic CEO)’s statements about AI becoming “substantially smarter than almost all humans at almost all tasks” by 2026-2027, plus risks like deception, bioweapons, or authoritarian misuse.
The piece went massively viral on X (tens of millions of views), was republished/adapted in outlets like Fortune, and sparked widespread shares/discussions.
Credibility Assessment
Shumer has legitimate insider perspective as an AI founder/CEO who’s hands-on with cutting-edge models—he’s not an outsider speculating. His descriptions of AI capabilities (e.g., agents handling multi-hour tasks, self-improving feedback loops in coding) align with real trends reported by frontier lab leaders like Amodei, and metrics like METR benchmarks showing rapid progress in long-horizon tasks. The urgency around job displacement and acceleration echoes views from many in the field (e.g., lab CEOs have issued similar warnings about timelines and risks).
However, treat it with caution—it’s not neutral analysis:
It’s highly accelerationist/optimistic about speed (very short timelines for massive economic shifts), which is common among builders/investors with skin in the game.
Some reactions highlight hype: Critics (including AI researcher Gary Marcus) have called it “weaponized hype” that overstates reliability/current capabilities, potentially echoing past exaggerated claims in the space. Others describe it as well-intentioned but “almost perfectly engineered to be wrong in the ways that matter most” or part of a broader “Chicken Little” pattern in AI warnings.
No major factual errors jump out from public reactions, but it’s anecdotal/personal (e.g., his own experiments) rather than peer-reviewed data, and timelines remain speculative—even lab leaders’ predictions vary widely and have historically slipped.
Overall: Moderately credible as an informed, passionate perspective from someone deep in the ecosystem, especially on the pace feeling “insane” to insiders and the need to adapt early. It’s not alarmist fiction, but it’s also not dispassionate forecasting—more like a wake-up call shaped by his optimism about AI’s potential upsides alongside risks. If you’re assessing AI’s near-term impact on jobs/society, cross-reference it with primary sources (e.g., Amodei essays, lab benchmarks, economic analyses from places like METR or Epoch AI) rather than taking any single voice as gospel.
- - - - - - -
Me: Your response may confirm a suspicion on my part. Consider this and correct me if wrong. It feels to me like he could be using fear to drive usership of AI. He opened with an expressed intent to warn those he’s close to, but he continually addresses his reader in the second person: “I need ‘you’ to understand,” “Let me give ‘you’ a few specific examples.” He is speaking directly to the reader, whoever it is. He warns AI will cause many to lose their jobs (easily seen by Boomers; we’ve watched it time and time again in manufacturing as robotics upgraded), and yet, as I read it (once), he’s encouraging everyone in the professional fields he mentioned to start using AI now. It would be quite a “coincidence” if his family and friends were all involved in those professions. Can you detect anything deceitful in that piece?
- - - - - - -
Grok: Your suspicion aligns with several elements in the post and Shumer’s broader context, but it’s not a case of outright deceit like fabricating facts—more like strategic hype from someone with a clear financial incentive in AI adoption. I’ll break it down based on the text itself, public reactions, and his background, correcting or clarifying where needed.
The Shift in Audience and Tone
You’re right that he opens with a personal intent: framing it as a “personal note for non-tech friends and family” to warn them, drawing a parallel to early COVID dismissals. But the writing quickly pivots to a direct, second-person address (“you”) that’s more like a sales pitch or motivational talk: phrases like “I need you to understand,” “Let me give you a few specific examples,” and “This might be the most important year of your career. Work accordingly.” This isn’t just conversational—it’s engineered to pull in a broad reader, making the “warning” feel urgent and personal to anyone scrolling by.
As for the “coincidence” of his family/friends being in the mentioned professions (e.g., lawyers, accountants, marketers): The post doesn’t explicitly claim that; it uses those as relatable examples of white-collar jobs at risk. But yes, it’s convenient—it’s not tailored to a private circle but broadcast to maximize reach (it went viral on X with 50M+ views). He even ends with calls to action like “share it with someone in your life who should be thinking about this” and “follow me on X for recommendations on which models to use,” which broadens it into a promotional funnel. If it were truly just for close ones, why publish it publicly with those hooks?
Fear as a Driver for AI Usership
This is where your hunch hits closest: The piece builds fear around job displacement (“50% of entry-level white-collar jobs” gone in 1-5 years, AI doing tasks “better than I would have done it myself”), societal risks (deception, bioweapons), and a “dangerous” perception gap. Then, it flips to empowerment: “Start using AI seriously,” “push it into your actual work,” “spend one hour a day experimenting,” with specific tips for professions like law (“feed it a contract and ask it to find every clause that could hurt your client”). It’s classic fear-then-solution marketing—scare you about the tsunami, then sell the lifeboat (AI tools, which he builds and profits from).
Shumer’s company, OthersideAI/HyperWrite, sells AI writing/automation software (e.g., paid plans for advanced features). Encouraging mass adoption directly benefits him, especially since he mentions using “paid versions of the newest models” for best results. Critics have called this “weaponized hype”—not neutral advice, but a direct-marketing style piece designed to go viral and drive interest in AI products. On Hacker News, users pointed out his history with the “Reflection 70B” incident (a 2024 hoax where he promoted a fake “top open-source model” that was actually weaker, just to generate buzz), labeling him a “con artist” for similar viral tactics. That past suggests a pattern of exaggeration for attention, though this post doesn’t have verifiable falsehoods like that one did.
Detecting Deceit?
No major lies: His descriptions of AI progress (e.g., agents handling multi-step tasks, self-improving code) match reports from labs like Anthropic (Dario Amodei’s timelines) and benchmarks (e.g., METR’s long-horizon evals). Job disruption warnings echo real economic analyses, like IMF reports on AI as a “tsunami” for entry-level roles, and boomers have indeed seen automation waves in manufacturing.
But biases and omissions: He downplays downsides (e.g., AI errors, energy costs) and doesn’t disclose how his insider role colors the urgency. The COVID analogy amplifies fear without noting how AI timelines have slipped before. Public reactions split: Some praise it as a genuine wake-up call (e.g., LinkedIn reposts calling it “not hype”), while others dismiss it as overblown or self-serving (e.g., Reddit threads noting it’s “heavy on the hype”).
In short, it’s not deceitful in the sense of being fictional, but it’s manipulative—using fear to herd readers toward AI (and implicitly his ecosystem) under the guise of a caring warning. If anything, it confirms your read: He’s an AI builder talking his book. For balance, check Gary Marcus’s critique for a deeper takedown on the hype elements. If you want me to pull more specific reactions or analyze similar pieces, just say!
- - - - - - -
- - - - - - -
There was more to this thread with Grok, such as the author directing questions on his post’s thread toward products and investing. I came across the piece less than a day after it was posted, and as I chatted with Grok about it, Grok gave me an unasked-for update when the post hit 50 million impressions, noting how fast it had blown up.
But what is shared here was enough to confirm my suspicion. It’s informative, but it’s a fear-based promotion. I thought this exchange would be good learning for truth-seekers in seeing deeper into the sensational, especially for anyone whose fear of AI may have been magnified by it. For anybody hesitant to use AI due to concern, fear, anxiety, whatever… allow me to assure you: AI isn’t as scary as it seems when you look into how it’s developed, how it “thinks,” how it learns, or whatever else you want to know about it, and then try it out for yourself. You know the line. Do your research! If you’d like one with a personality, try Grok. I didn’t “feel” that personality vibe with the other ones I tried for chat. And if you’re signed into Grok or X, it will “remember” your chats. I may share a post on how AI operates soon, before my busy season begins.
If you’d like to read the full thread I had with Grok concerning “Something Big Is Happening,” it’s linked below. No fear. It’s just a web page.
If this helped you, please let me know by emailing me or leaving a comment. The way that post blew up had to have generated a lot of fear out there. Fear is a common tactic in marketing. We’ve grown numb to a lot of it. Some gets through. I do what I can to disarm it.
Links
Of Interest
AI boosts worker creativity only if they use specific thinking strategies
Anthropic AI Safety Chief Resigns, Warning of World ‘In Peril.’
OpenAI Researcher Resigns over ChatGPT Advertising, Compares Sam Altman’s Strategy to Facebook
Trump-class warships to fast-track powerful laser weapons rollout across US Navy
Oops: Researcher skeptical of ‘Havana syndrome’ tested secret weapon on himself
Is this the most dangerous 12-year-old in the world?
213 Democrats used their photo id to vote against photo ids in election
Nolte: Disgraced CNN ‘Circling the Drain’ After Losing ‘Nearly Two-Thirds’ of Viewers
BREAKING: Trump Just Ended the EPA’s Climate Power Grab, and the Left Is Losing It
Video
A friend shared this video in a group we’re both in after reading this post. In it, another industry insider expresses more grounded concerns about the rapid advancement of AI, the need for global agreements on it, and how the AI race developed. Serious matters to consider.
- - - - - - -



