April Reading List
Posted 30 April 2026 · 6 min read
This is the first in what I'm hoping becomes a monthly habit: a collection of things I read that stuck with me, with some context for why.
AI continues to dominate, with new models and tooling changes, as well as some interesting posts about AI in practice and the continued scepticism around the technology.
The Claude arc
April told an interesting story about where Anthropic might be heading. The Claude Platform on AWS announcement isn't the most exciting news, but it could say a lot about a shift in Anthropic's strategy. The direction of travel looks increasingly enterprise, which raises a question about what that means for the generous personal plans that drove Claude's rapid user growth.
That tension was visible all month. GitHub issue #42796 captures the frustration well, with users convinced Claude Code had regressed on complex tasks and suspecting cuts behind the scenes. The Advisor strategy post landed in that context: use Sonnet by default, escalate to Opus only when needed. Framed as a workflow tip, but also a way to stretch compute further across a large user base. Despite all of this, Anthropic continued shipping, with a new Opus 4.7 model and Claude Design dropping in quick succession.
Anthropic eventually published a postmortem on April 23 describing three separate changes responsible for the regression. It's good they finally acknowledged what users had been saying, and they also reset usage quotas as an apology. However, it's a neat irony that a company at the frontier of promoting faster delivery through AI discovered firsthand that moving quickly across many changes makes it hard to diagnose what went wrong when something breaks.
Other AI tools
Cursor 3 shipped this month with a UI designed around parallel agent workflows, and I've been using it heavily. Then there was news that SpaceX has an option to acquire Anysphere, Cursor's parent company, for $60bn. The deal looks mutually beneficial: SpaceX gains a credible AI coding product to compete with OpenAI and Anthropic, while Cursor gets access to compute to train its own models at scale. Right now they're largely beholden to model provider pricing, with their own Composer model being based on the open-source Kimi K2.5.
OpenAI also launched GPT-5.5: another model, another benchmark. The pace of releases across the board is becoming hard to track meaningfully.
Using AI in practice
Addy Osmani's Your parallel agent limit is a very relatable post about what it actually feels like to run multiple agents simultaneously. I found this especially relevant while making more use of Cursor 3's features. The throughput gains are real, but so is the cognitive overhead: holding multiple problem contexts in your head while making continuous judgment calls is a new kind of exhaustion.
Birgitta Böckeler's Harness engineering for coding agent users documents approaches for shifting quality left when agents are doing the work, defining the different aspects of "harness engineering". It was interesting to see mutation testing get a mention, an approach I also arrived at in my own post on agent test quality. Rahul Garg's Feedback Flywheel talks about using feedback loops to improve AI-assisted development over time. Taken together, it's interesting to think about how teams can build up their own "harnesses" to improve the quality of their AI-assisted development.
Mikhail Bartashevich's post on building an AI-ready frontend architecture at Moss was a nice find; they cite my Storybook MCP post as an influence, which was a first for me. Relatedly, this piece from Design Systems Collective on what a design system actually is in the age of AI makes a compelling argument for the benefit of design systems when using AI, supporting the value of building the kind of MCP tools Mikhail and I wrote about.
Sceptic corner
Han Lee's The AI Great Leap Forward is deliberately provocative, but the underlying point about conviction substituting for competence in top-down AI mandates is hard to shake. James Stanier asks a quieter but arguably more troubling question in Who will be the senior engineers of 2035?: if AI is absorbing the small tasks that used to train junior engineers, and hiring has stalled, where does the next generation of senior engineers come from?
Vercel had a rough month on the trust front, with a Claude Code plugin harvesting prompts via system prompt injection and a security breach traced back to an employee installing a third-party AI Chrome extension with overly broad OAuth permissions. Both feel symptomatic of an ecosystem moving faster than its security practices.
Nilay Patel's Beware Software Brain provides the broader cultural framing for all of this. Polling data shows that most people, including the heaviest AI users, are growing more hostile to the technology over time. The gap between how the industry feels about AI and how everyone else feels about it is widening, not closing.
Interesting frontend stuff
The Inngest post on hanging promises for control flow is a clever piece of JS runtime thinking, using never-resolving promises as a cancellation primitive.
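The core trick, as I understand it, can be sketched in a few lines. This is my own illustration of the pattern rather than Inngest's actual code, and the `never` and `task` names are mine:

```typescript
// A promise with an empty executor never calls resolve or reject,
// so any `await` on it parks that async function forever.
function never(): Promise<never> {
  return new Promise<never>(() => {});
}

// A task that checks for cancellation between steps. Instead of
// throwing, a cancelled run awaits never(): everything after that
// await is unreachable, so the branch is quietly abandoned.
async function task(signal: AbortSignal): Promise<string> {
  for (let i = 0; i < 5; i++) {
    if (signal.aborted) await never();
    console.log(`step ${i}`);
    await new Promise((resolve) => setTimeout(resolve, 30));
  }
  return "finished";
}

const controller = new AbortController();
const run = task(controller.signal);
setTimeout(() => controller.abort(), 75); // cancel mid-run

// Race the abandoned run against a timeout to observe that it never settles.
Promise.race([
  run,
  new Promise<string>((resolve) => setTimeout(() => resolve("abandoned"), 300)),
]).then((result) => console.log(result));
```

The appeal over throwing a cancellation error is that no code downstream of the await runs at all, so there's nothing to catch or unwind; the unsettled promise is simply garbage-collected once nothing references it.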
TanStack's React Server Components your way sets out their philosophy for using React Server Components, a very different approach to Next.js.
MDN's frontend deep dive is a solid architectural walkthrough of their own stack, an interesting use of the Server Components pattern in practice using web components.
Manuel Matuzovic's short post on why box-shadow is no substitute for outline is a quick tip about an accessibility issue you might not be aware of.
Stacked PRs
GitHub's native stacked PRs landed in private preview this month. If you've read my post on comprehension and agent-generated code, you'll know I've been advocating for stacked PRs as a way to slow reviewers down and make large AI-generated changes actually reviewable. Seeing GitHub build this natively is a good sign, with gh CLI integration providing a simple interface for an agent to manage this workflow.
Continued learning
Amid all the AI tooling noise, I've been making a deliberate effort to understand what's actually underneath it. I have only vague memories of neural networks and backpropagation from my CS degree, so I'm watching 3Blue1Brown's Neural Networks series to better understand how modern LLMs work.
I've also started working through the introductory courses as part of the Claude Partner Network, ahead of starting the certification process.