Microsoft files #
Someone said - in Linux, everything is a file. In Microsoft, everything is a copilot. Lol.
Added on April 5, 2026
The Helsinki Bus Station: let me describe what happens there.
Some two-dozen platforms are laid out in a square at the heart of the city. At the head of each platform is a sign posting the numbers of the buses that leave from that particular platform. The bus numbers might read as follows: 21, 71, 58, 33, and 19.
Each bus takes the same route out of the city for at least a kilometer, stopping at bus-stop intervals along the way, where the same numbers are again repeated: 21, 71, 58, 33, and 19.
Now let’s say, again metaphorically speaking, that each bus stop represents one year in the life of a photographer, meaning the third bus stop would represent three years of photographic activity.
Ok, so you have been working for three years making platinum studies of nudes. Call it bus #21.
You take those three years of work on the nude to the Museum of Fine Arts Boston and the curator asks if you are familiar with the nudes of Irving Penn. His bus, 71, was on the same line. Or you take them to a gallery in Paris and are reminded to check out Bill Brandt, bus 58, and so on.
Shocked, you realize that what you have been doing for three years others have already done.
So you hop off the bus, grab a cab (because life is short) and head straight back to the bus station looking for another platform.
This time you are going to make 8x10 view camera color snapshots of people lying on the beach from a cherry picker crane.
You spend three years at it and three grand and produce a series of works that elicit the same comment: haven’t you seen the work of Richard Misrach? Or, if they are steamy black-and-white 8x10 view camera shots of palm trees swaying off a beachfront: haven’t you seen the work of Sally Mann?
So once again, you get off the bus, grab the cab, race back and find a new platform. This goes on all your creative life, always showing new work, always being compared to others.
What to do?
It’s simple. Stay on the bus. Stay on the f*king bus.
Why? Because if you do, in time you will begin to see a difference.
The buses that move out of Helsinki stay on the same line but only for a while, maybe a kilometer or two. Then they begin to separate, each number heading off to its own unique destination. Bus 33 suddenly goes north, bus 19 southwest.
For a time maybe 21 and 71 dovetail one another but soon they split off as well, Irving Penn is headed elsewhere.
It’s the separation that makes all the difference, and once you start to see that difference in your work from the work you so admire (that’s why you chose that platform after all), it’s time to look for your breakthrough.
Added on April 4, 2026
Being able to see ourselves as something beyond our job (our means of survival) is a luxury. If a person can't provide for themselves the rest goes out the window fast.
The only way to ease the anxiety in people isn't with fluff about their 'human worth', but rather to help them envision other tangible and plausible ways in which they can provide for themselves.
The cold reality, in my opinion, is that the things we value about ourselves are generally not that valuable to others. I love my own personality and humanity, my soul if you will, but nobody's paying me for it, and so I have to value it accordingly.
Added on March 23, 2026
The people who love you don't love you because you're good at your job. They love you because of something else entirely. Maybe it's your humor. Maybe it's that you actually listen. Maybe it's that you remember things about their lives and ask about them. Maybe it's simply that you show up. You're present. You don't extract a conversation and then disappear.
I can automate my job (honestly it feels great for now, I'm getting so much done). I can't automate my presence. I can't outsource my attention. I can't delegate my capacity to sit with someone when they're confused or scared or just need to feel known. That's the thing I'm actually built for.
If you've built your entire sense of self around technical skill, the disruption happening in AI feels like an existential threat. And it should be. The skill you exchanged for money and stability is being replaced. But you aren't being replaced, just shuffled around. The machine doesn't replace you. It replaces part of what you do. It doesn't touch the actual thing that makes you valuable in your life.
Added on March 23, 2026
But warmth. Empathy. The ability to sit with someone in their confusion and make them feel understood. The ability to crack a joke at exactly the right moment and remind someone that they're not alone. The capacity to be fully present with another person, to see them not as a role they're playing but as a whole human being… that cannot be automated away and hopefully never will.
Your existence is a measurement of your relationships to the people and world around you. Buber wrote about "I-It" and "I-You" relationships (Ich-Du in German). An "I-It" relationship treats the other person as an object, a function, something to be used. A doctor in an I-It relationship with their patient is fixing a broken thing. A software engineer in an I-It relationship with their coworkers is just executing tasks. An I-You relationship is mutual and real. The other person isn't a role or a function. They're a whole self. Buber said human life finds its meaningfulness in those relationships. Meaning lies in how you relate, not in what you produce.
Added on March 23, 2026
Super interesting research, and it indeed confirms that current LLMs can't think from scratch by default.
Though, if allowed to do tool calling, LLMs would surely be able to write the code in a few shots.
Standard benchmarks make it nearly impossible to tell. A model trained on billions of lines of Python that scores 90% on HumanEval might be doing something genuinely intelligent, or it might be doing something much simpler: pattern-matching against memorized solutions it has effectively seen before. We wanted to find out which one it actually is.
The intuition behind the work is simple. When you learn Fibonacci in Python, you can write it in Java tomorrow without years of Java training, because you transfer the logic rather than the syntax. The loop, the state, the termination condition all carry over. Syntax is just a costume, and a programmer fluent in one language can learn another in days by reasoning from first principles. LLMs claim to do something like this too, and we wanted to see whether they actually can or whether what looks like reasoning is really just a very large lookup table.
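The transfer the text describes is easy to make concrete. In this sketch, the loop, the two-variable state, and the termination condition are the parts that would carry over verbatim to Java or any other language; only the syntax around them changes.

```python
def fib(n: int) -> int:
    """Iterative Fibonacci. The *logic* — loop, state pair,
    termination after n steps — transfers across languages;
    only the syntax is language-specific."""
    a, b = 0, 1           # state: the previous two values
    for _ in range(n):    # termination: exactly n steps
        a, b = b, a + b   # loop body: advance the state
    return a

print(fib(10))  # → 55
```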
To separate genuine reasoning from memorization, you need a setting where the model cannot fall back on anything it has seen before. That setting, it turns out, already exists. It just takes the form of programming languages almost nobody uses, e.g. Brainfuck and Whitespace.
[...]
These languages all share one crucial property: they appear almost nowhere in training data.
[...]
We tested GPT-5.2, O4-mini, Gemini 3 Pro, Qwen3-235B, and Kimi K2 across five prompting strategies, with three independent runs per configuration to ensure statistical reliability. These are models that score between 85 and 95 percent on HumanEval and MBPP. On our benchmark, the best model in the best configuration scored 11.2 percent, and most scored below 5 percent on average across all five languages.
More striking than the low overall numbers was what happened as problems got harder: every single model, in every language, in every prompting strategy, scored exactly 0 percent on every problem beyond the Easy tier.
Added on March 21, 2026
Instead of giving up, I forced myself to reproduce all my manual commits with agentic ones. I literally did the work twice. I'd do the work manually, and then I'd fight an agent to produce identical results in terms of quality and function (without it being able to see my manual solution, of course).
This was excruciating, because it got in the way of simply getting things done. But I've been around the block with non-AI tools enough to know that friction is natural, and I can't come to a firm, defensible conclusion without exhausting my efforts.
But, expertise formed. I quickly discovered for myself from first principles what others were already saying, but discovering it myself resulted in a stronger fundamental understanding.
Break down sessions into separate clear, actionable tasks. Don't try to "draw the owl" in one mega session.
For vague requests, split the work into separate planning vs. execution sessions.
If you give an agent a way to verify its work, it more often than not fixes its own mistakes and prevents regressions.
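The last point can be sketched as a plain verify-and-retry loop. Here `run_agent` and the verifier command are hypothetical stand-ins for whatever agent harness and test command you actually use; the point is only that failures are fed back instead of accepted.

```python
import subprocess

def agent_loop(task, run_agent, test_cmd, max_attempts=3):
    """Let the agent attempt a task, verify with a real command,
    and feed failures back until the verifier passes."""
    feedback = ""
    for _ in range(max_attempts):
        run_agent(task, feedback)  # hypothetical: agent edits the working tree
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True            # verifier passed: accept the change
        feedback = result.stdout + result.stderr  # give the agent the failure
    return False
```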
Added on March 21, 2026
AI clearly pressures the traditional SaaS business model. Procurement teams are negotiating harder and some long-tail software products face structural headwinds. But SaaS is a delivery mechanism, not the endpoint of value creation.
The next generation of software is adaptive, agent-driven, outcome-based, and deeply integrated. The winners will not be static tool providers, they will be those who can best adapt to change.
Every technological shift reorders the stack and the companies pricing static workflows WILL struggle. The companies owning data, trust, compute, energy, and verification may thrive.
Margin compression in one layer does not imply collapse of the entire digital economy. It signals transition.
Added on March 21, 2026
AI decreases costs in every sector and when service costs go down, purchasing power increases with or without wage growth.
The doom loop becomes dominant only if AI replaces labor without materially expanding demand. The optimistic scenario emerges if cheaper compute and productivity yields entirely new categories of consumption and economic activity.
Added on March 21, 2026
Security is tedious: people naturally want to first make things work, then make them reliable, and only then make them secure.
Added on March 17, 2026
Paul Graham explains why all good designs converge on the same answer, while good branding, at times, is just the opposite of good design.
So even in this early example we see an important point about the relationship between brand and design. Branding isn't merely orthogonal to good design, but opposed to it. Branding by definition has to be distinctive. But good design, like math or science, seeks the right answer, and right answers tend to converge.
Branding is centrifugal; design is centripetal.
There is some wiggle room here of course. Design doesn't have as sharply defined right answers as math, especially design meant for a human audience. So it's not necessarily bad design to do something distinctive if you have honest motives. But you can't evade the fundamental conflict between branding and design, any more than you can evade gravity.
Indeed, the conflict between branding and design is so fundamental that it extends far beyond things we call design. We see it even in religion. If you want the adherents of a religion to have customs that set them apart from everyone else, you can't make them do things that are convenient or reasonable, or other people would do them too. If you want to set your adherents apart, you have to make them do things that are inconvenient and unreasonable.
It's the same if you want to set your designs apart. If you choose good options, other people will choose them too.
Added on March 10, 2026
The author argues why AI can't replace many fields, like law or poker, but can replace chess or software engineering. AI can produce the kind of output it has read as text, but it can't react to a hostile environment and modulate its response accordingly. A legal document might at times sound as if it counters an adversary's future questions, but it's more that 'they've learned the language of strategy more than the dynamics of it'.
Domain experts say “AI won’t replace me” because they know that “producing coherent output” is table stakes.
The REAL job is produce output that achieves an objective in an environment where multiple agents are actively modeling and countering you.
Why do outsiders think AI can already do these jobs? They judge artifacts but not dynamics:
“This product spec is detailed.”
“This negotiation email sounds professional.”
“This mockup is clean.”
Experts evaluate any artifact by survival under pressure:
“Will this specific phrasing trigger the regulator?”
“Does this polite email accidentally concede leverage?”
“Will this mockup trigger the engineering veto path?”
“How will this specific stakeholder interpret the ambiguity?”
These are simulation-based questions. The outsider doesn’t know to ask them because they don’t have the mental model that makes them relevant.
[...]
There’s a deeper reason LLMs are at a permanent handicap here: the thing you’re trying to learn is not fully contained in the text. They can catch up by sheer brute force, but are far more inefficient than humans, and the debt is coming due now.
When an investor publishes a thesis, consider what is not in it:
The position sizing that limits the exposure
The timing that avoided telegraphing intent
Strategic concealment
How the thesis itself is written to not move the market against them
What they’d actually do if proved wrong tomorrow
Text is the residue of action. The real competence is the counterfactual recursive loop: what would I do if they do this? what does my move cause them to do next? what does it reveal about me? That loop is the engine of adversarial expertise, and it’s weakly revealed by corpora.
This is why models can recite game theory but still write the “nice email” that leaks leverage. They’ve learned the language of strategy more than the dynamics of strategy.
This is what domain expertise really is. Not a larger knowledge base. Not faster reasoning. It’s a high-resolution simulation of an ecosystem of agents who are all simultaneously modeling each other. And that simulation lives in heads, not in documents. The text is just the move that got documented. The theory that generated it is called skill.
[...]
Not every domain follows poker dynamics. Certain fields are very close to chess, and LLMs are already poised to be successful in them.
Writing code is probably the most clear example:
System is deterministic
Rules are fixed and explicit
No hidden state that matters
Correctness is objective and verifiable
No agent is actively trying to counter the model
The same “closed world” structure shows up in others: Math / Formal proofs, data transformation, translation, factual research, compliance heavy clerical work (invoice matching, reconciliation), where you can iterate towards the right move without needing a “theory of the mind”.
The important caveat is that many domains are chess-like in their technical core but become poker-like in their operational context.
Professional software engineering extends well beyond the chess-like core. Understanding ambiguous requirements means modeling what the stakeholder actually wants versus what they said. Writing good APIs means anticipating how other developers will misuse them. Code review is social: you’re modeling reviewers’ preferences and concerns. Architectural decisions account for unknown future requirements and organizational politics. That is, the parts outsiders don’t see but senior engineers spend much of their time simulating.
The parts that look like the job are chess. The parts that are the job are poker.
Difficulty is orthogonal to “openness” of a domain. Proving theorems is hard. Negotiating salary is easy. But theorem-proving is chess-shaped and negotiation is poker-shaped.
This is why the disconnect between experts and outsiders is domain-specific. Ask a competitive programmer if AI can solve algorithm problems, and they’ll say yes because they’ve watched it happen. Ask a litigator if AI can handle depositions, and they’ll laugh because they live in a world where every word is a move against an adversary who’s modeling them back.
[....]
The fix is a different training loop. We need models trained on the question humans actually optimize: what happens after my move? Grade the model on outcomes (did you get the review, did you concede leverage, did you get exploited), not on whether the message sounded reasonable.
That requires multi-agent environments where other self-interested agents react, probe, and adapt. Stop treating language generation as a single-agent output objective and start treating it as action in a multi-agent game with hidden state, where exploitability is a failure mode.
Closing the Loop
The “AI can replace your job” debate often confuses artifact quality with strategic competence. Both sides are right about what they’re looking at. They’re looking at different things.
LLMs can produce outputs that look expert to outsiders because outsiders grade coherence, tone, and plausibility. Experts grade robustness in adversarial multi-agent environments with hidden state.
Years of operating in adversarial environments have trained them to automatically model counterparties, anticipate responses, and craft outputs robust to exploitation. They do it without thinking, because in their world, you can’t survive without it.
LLMs produce artifacts that look expert. They don’t yet produce moves that survive experts.
[....]
The Priya example nails it. The finance friend evaluated the email in isolation. The experienced coworker simulated how it would land in Priya's inbox, against her triage heuristics, under deadline pressure.
This is the gap between LLMs writing code and LLMs building systems. Code that compiles isn't code that survives contact with users, adversaries, and edge cases.
[....]
Been running production systems solo for 20 years. The best operators aren't the ones who know the most commands; they're the ones who can simulate what will break next: "if I do X, the cache invalidates, which triggers Y, which overloads Z." That's a world model.
Added on March 10, 2026
In the past few years, life has offered me certain clarifying moments - not dramatic, just steady, unmistakable. In difficult hours, when something goes wrong or is misunderstood, people reveal the lens through which they see you. Some look first for fault. Others look first for context. Some tighten. Others lean in.
I have had relatives, tied to me by ancestry, assume the worst in moments when I most needed steadiness. And I have had friends - no shared surname, no inherited obligation - offer me the simple dignity of trust. They asked questions before forming conclusions. They chose curiosity over judgment. That choice felt like shelter.
[...]
Perhaps that is all family finally is - not the people who know your story, but the people who decide, again and again, to read it generously.
Added on February 26, 2026
The better you document your work and the stronger the contracts you define, the easier it is for someone to clone your work. I wouldn't be surprised if we end up seeing open-source commercial work bend towards the SQLite model (open core, private tests). There's no way Cloudflare could have pulled this off without Next.js's very own tests.
Added on February 25, 2026
In the morning the world was felled branches and standing water. I started reading McCarthy.
I'd put him off for years. He's one of those authors everyone insists you have to read, which is usually enough to send me wandering in the opposite direction. I prefer stumbling into authors rather than being assigned them. But my wife had gifted me All the Pretty Horses, and four days without power felt like the right time. I read it in a single sitting. Then The Crossing. Then The Road.
Added on February 16, 2026
Finally, I go back to my manager with a risk assessment, not with a concrete estimate. I don’t ever say “this is a four-week project”. I say something like “I don’t think we’ll get this done in one week, because X Y Z would need to all go right, and at least one of those things is bound to take a lot more work than we expect.” Ideally, I go back to my manager with a series of plans, not just one:
We tackle X Y Z directly, which might all go smoothly but if it blows out we’ll be here for a month
We bypass Y and Z entirely, which would introduce these other risks but possibly allow us to hit the deadline
We bring in help from another team who’s more familiar with X and Y, so we just have to focus on Z
In other words, I don’t “break down the work to determine how long it will take”. My management chain already knows how long they want it to take. My job is to figure out the set of software approaches that match that estimate.
Added on February 14, 2026
Cursor builds its first view of a codebase using a Merkle tree, which lets it detect exactly which files and directories have changed without reprocessing everything. The Merkle tree features a cryptographic hash of every file, along with hashes of each folder that are based on the hashes of its children.
Small client-side edits change only the hashes of the edited file itself and the hashes of the parent directories up to the root of the codebase. Cursor compares those hashes to the server's version to see exactly where the two Merkle trees diverge. Entries whose hashes differ get synced. Entries that match are skipped. Any entry missing on the client is deleted from the server, and any entry missing on the server is added. The sync process never modifies files on the client side.
The Merkle tree approach significantly reduces the amount of data that needs to be transferred on each sync. In a workspace with fifty thousand files, just the filenames and SHA-256 hashes add up to roughly 3.2 MB. Without the tree, you would move that data on every update. With the tree, Cursor walks only the branches where hashes differ.
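The scheme above can be sketched in a few lines. This is a toy illustration, not Cursor's actual code: each file hashes its bytes, each directory hashes the concatenation of its children's hashes, and diffing two such maps yields exactly the entries that need syncing.

```python
import hashlib
from pathlib import Path

def merkle(root: Path) -> dict:
    """Map every file and directory under root to a hash.
    A directory's hash is derived from its children's hashes,
    so one edit changes only the path of hashes up to the root."""
    hashes = {}

    def visit(p: Path) -> str:
        if p.is_file():
            h = hashlib.sha256(p.read_bytes()).hexdigest()
        else:
            children = sorted(p.iterdir())
            combined = "".join(visit(c) for c in children)
            h = hashlib.sha256(combined.encode()).hexdigest()
        hashes[str(p)] = h
        return h

    visit(root)
    return hashes

def diverging(client: dict, server: dict) -> set:
    """Entries to sync: hashes that differ or exist on one side only."""
    return {p for p in client.keys() | server.keys()
            if client.get(p) != server.get(p)}
```

Editing one file makes only that file, its parent directories, and the root show up in `diverging`; untouched siblings are skipped entirely.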
When a file changes, Cursor splits it into syntactic chunks. These chunks are converted into the embeddings that enable semantic search. Creating embeddings is the expensive step, which is why Cursor does it asynchronously in the background.
Most edits leave most chunks unchanged. Cursor caches embeddings by chunk content. Unchanged chunks hit the cache, and agent responses stay fast without paying that cost again at inference time. The resulting index is fast to update and light to maintain.
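Caching by chunk content rather than by file path can be sketched like this; `embed_fn` is a hypothetical stand-in for the expensive embedding call.

```python
import hashlib

class EmbeddingCache:
    """Cache embeddings keyed by chunk *content*, so chunks that
    survive an edit unchanged never pay the embedding cost again."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn  # the expensive call (hypothetical)
        self.cache = {}
        self.misses = 0

    def get(self, chunk: str):
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in self.cache:
            self.misses += 1  # only new content triggers embedding
            self.cache[key] = self.embed_fn(chunk)
        return self.cache[key]
```

Re-indexing an edited file then only calls `embed_fn` for the chunks whose content actually changed.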
The indexing pipeline above uploads every file when a codebase is new to Cursor. New users inside an organization don't need to go through that entire process though.
When a new user joins, the client computes the Merkle tree for a new codebase and derives a value called a similarity hash (simhash) from that tree. This is a single value that acts as a summary of the file content hashes in the codebase.
The client uploads the simhash to the server. The server then uses it as a vector to search in a vector database composed of all the other current simhashes for all other indexes in Cursor in the same team (or from the same user) as the client. For each result returned by the vector database, we check whether it matches the client similarity hash above a threshold value. If it does, we use that index as the initial index for the new codebase.
This copy happens in the background. In the meantime, the client is allowed to make new semantic searches against the original index being copied, resulting in a very quick time-to-first-query for the client.
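One classic way to collapse many content hashes into a single similarity hash is the simhash bit-voting construction. Whether Cursor uses exactly this scheme is an assumption, but it illustrates the property described above: codebases that share most of their files end up with hashes that are close in Hamming distance, so nearest-neighbor search over simhashes finds a near-duplicate index.

```python
import hashlib

def simhash(file_hashes, bits=64):
    """Collapse a set of content hashes into one value: every hash
    votes +1/-1 on each bit position, and the sign of each tally
    becomes a bit of the result. Mostly-shared inputs produce
    mostly-identical tallies, hence nearby simhashes."""
    counts = [0] * bits
    for h in file_hashes:
        v = int(hashlib.sha256(h.encode()).hexdigest(), 16)
        for i in range(bits):
            counts[i] += 1 if (v >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if counts[i] > 0)

def hamming(a, b):
    """Number of differing bits between two simhashes."""
    return bin(a ^ b).count("1")
```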
Added on February 12, 2026
When one of my favorite fiction authors talks about AI, I gotta take notes.
I do think that part of the reason I dislike AI is because it is too focused on the product and not the process. Yes, the message is journey before destination. It is always journey before destination, but there's a specific take on it this time.
Maybe someday the language models will be able to write books better than I can. But here's the thing, using those models in such a way absolutely misses the point because it looks at art only as a product. Why did I write White Sand Prime? It wasn't to produce a book to sell. I knew at the time that I wasn't going to write a book that was going to sell. It was for the satisfaction of having written a novel and feeling the accomplishment in learning how to do it. I tell you right now, if you've never finished a project on this level: one of the most sweet and beautiful and transcendent moments in my life was holding that manuscript, thinking to myself, I did it.
[....]
This is the difference between Data from Star Trek and a large language model. At least the ones operating right now. Data created art because he wanted to grow. He wanted to become something. He wanted to understand. Art is the means by which we become what we want to be. The purpose of writing all those books in my earlier years wasn't to produce something I could sell. It was to turn me into someone who could create great art. It took an amateur and it made him a professional. I think this is why I rebel against the AI art product so much: because they steal the opportunity for growth from us.
[...]
The difference is that the books aren't the product. They aren't the art. Not completely. And this is the point. The book, the painting, the film script is not the only art. It's important, but in a way, it's a receipt. It's a diploma. The book you write, the painting you create, the music you compose is important and artistic, but it's also a mark of proof that you have done the work to learn, because at the end of it all, you are the art. The most important change made by an artistic endeavor is the change it makes in you. The most important emotions are the ones you feel when writing that story and holding the completed work. I don't care if the AI can create something that is better than what we can create, because it cannot be changed by that creation.
Added on February 9, 2026
Why one should create art even if it gains you nothing, even if you're bad at it.
The choreographer Merce Cunningham once said: "You have to love dancing to stick to it. It gives you nothing back, no manuscripts to store away, no paintings to show on walls and maybe hang in museums, no poems to be printed and sold, nothing but that single fleeting moment when you feel alive."
Added on February 9, 2026
Adapt to the customer, not the other way around
The times of asking customers to change how they work are gone. Now, SaaS vendors that differentiate by being ultra customizable win the hearts of customers.
How? It’s the most powerful secret to increase usage. We’ve all heard the classic SaaS problem where the software is sold at the beginning of the year, but no one actually ends up using it because of how inflexible it is and the amount of training needed.
And if a SaaS is underutilized, it gets noticed. And that leads to churn.
This is the case with one of my customers: they have a complex SaaS for maintenance operations. But it turns out this was not being used at the technician level, because they found the UI too complex.
How I’m solving this is essentially a whitelabelled vibe-coding platform with in-built distribution and secure deployments. When they heard of my solution they were immediately onboard. Their customer success teams quickly coded a very specific mobile webapp for the technicians to use and deployed it in a few days.
Now, the IC technician is exposed to just those parts of the SaaS that they care about i.e. creating maintenance work orders. The executives get what they want too, vibe coding custom reports exactly the way they want vs going through complicated BI config. They are able to build exactly what they want and feel like digital gods while doing it.
Usage for that account was under 35%, and is now over 70%. They are now working closely with me to vibe code new “micro-apps” that work according to all of their customer workflows. And the best part? This is all on top of their existing SaaS which works as a system of record and handles security, authentication, and supports lock-in by being a data and a UI moat.
This is exactly what I’m building: a way for SaaS companies to let their end-users vibe code on top of their platform (More on that below). My customers tell me it’s the best thing they’ve done for retention, engagement, and expansion in 2026 – because when your users are building on your platform, they’re not evaluating your competitors.
Added on February 9, 2026
How to survive
1. Be a System of Record
If the entire company’s workflows operate on your platform, i.e. you’re a line-of-business SaaS, you are already integrated into their existing team. They know your UI and rely on you day to day.
For example, to create a data visualization I won’t seek any SaaS. I’ll just code one myself using many of the popular vibe coding tools (my team actually did that and it’s vastly more flexible than what we’d get off-the-shelf).
Being a “System of Record” means you’re embedded so deeply that there’s no choice but to win. My prediction is that we’ll see more SaaS companies go from the application layer to offering their robust SoR as their primary selling point.
Added on February 9, 2026
Loved the T-Rex analogy.
There’s a concept in behavioral science called the “effort heuristic.” It’s the idea that we tend to value information more if we worked for it. The more effort something requires, the more meaning we assign to the result. When all knowledge is made effortless, it’s treated as disposable. There’s no awe, no investment, no delight in the unexpected—only consumption.
(I'm reminded of the scene in Jurassic Park when the tour Jeep pulls up to the Tyrannosaurus rex exhibit. Doctor Grant says “The T-Rex doesn't want to be fed. It wants to hunt.”)
Added on February 8, 2026
Firstly, software companies have an inherent bias for action. They value speed and shipping highly. Concerns, by definition, slow things down and mean people have to look at things which they hadn’t budgeted for. And so unless your concern is big enough to overcome the “push for landing”, there’s little chance for any meaningful change to come from you saying something. In fact, it’s very likely that you’ll be largely ignored.
Related to this, even if the team does take your concern seriously, you have to be careful not to do it too often. Once or twice, you might be seen as someone who is upholding “quality”. But do it too often and you quickly move to being seen as a “negative person”, someone who is constantly a problem maker, not a problem “fixer”. You rarely get credit for the disasters you prevented. Because nothing happened, people forget about it quickly.
There’s also the problem that every time you push back, you are potentially harming someone’s promotion packet or a VP’s “pet project.” You are at risk of burning bridges and creating “enemies”, at least of a sort. Having a few people in a big company who disagree with you is the cost of doing business, but if you have too many, it starts affecting your main work too.
Finally, there is also the psychological impact. There is one of you and hundreds of engineers working in spaces that your expertise might help with. Your attention is finite, but the capacity for a large company to generate bad ideas is infinite. Speaking from experience, getting too involved in stopping these quickly can make you very cynical about the state of the world. And this is really not a good place to be.
Added on February 8, 2026
Enterprise SaaS platforms have spent years (and millions) solving these problems: role-based access control, encryption at rest and in transit, penetration testing, compliance certifications, incident response procedures. Your customers may not consciously value this — until something breaks.
The challenge is that security is invisible when it works. You need to communicate this value proactively: remind customers that the “simple” tool they could vibe-code themselves would require them to also handle auth, permissions, backups, uptime, and compliance.
Added on February 5, 2026
Jasmine Sun went to Shenzhen, China and asked a Chinese AI researcher a few questions. They seem a bit too driven.
“What does a day in your life look like?” we asked. “I wake up and I check Twitter.”
“Do you have to work 996?” “No,” he laughed. “It’s 007 now.” (Midnight to midnight, seven days a week.)
“Do you guys worry about AI safety?” “We don’t think about risks at all.”
“Based,” said Aadil.
Added on January 31, 2026
I’ve been thinking about obsessions and how they materialize. Things we want, achievements we need, people we admire, attention we crave. I only just realized that a fixation is almost always a sign that the call is coming from inside the house. It’s never actually about the thing. Or maybe it is, but not entirely. Here’s what I mean: Pining for a certain accolade is likely less about the accolade and more about a gaping hole inside that an achievement would supposedly fill. A salve for a scar. An ointment for an insecurity. Maybe it helps, maybe it’s worth it, but it will never satiate without acknowledging the real thing that’s screaming. The one that’s urging the running and chasing.
Added on January 30, 2026
In 1978, the Czech dissident Václav Havel, later president, wrote an essay called The Power of the Powerless. And in it, he asked a simple question: How did the communist system sustain itself?
And his answer began with a greengrocer. Every morning, this shopkeeper places a sign in his window: "Workers of the world, unite!" He doesn't believe it. No one does. But he places the sign anyway to avoid trouble, to signal compliance, to get along. And because every shopkeeper on every street does the same, the system persists.
Not through violence alone, but through the participation of ordinary people in rituals they privately know to be false.
Havel called this "living within a lie." The system's power comes not from its truth but from everyone's willingness to perform as if it were true. And its fragility comes from the same source: when even one person stops performing — when the greengrocer removes his sign — the illusion begins to crack.
Added on January 30, 2026
If you have done something cool, or you have studied something for a long time, or you have thought something interesting, and you are writing it up, and you are at a loss how to get started, try to extract out the key phrase:
What do you find yourself ranting about to people repeatedly? What does the Wikipedia entry miss that frustrates you? How would the world be different if this were not true? If you were telling a friend in a rush why you were excited to write this down, what would you say? Just say that! Just… start with the interesting part first.
When writing, your first job is this:
First, make me care.
Added on January 30, 2026
If we want to hook the reader, provoke their curiosity about this anomaly. Boil it down to a single sentence: “Venice is interesting because it was an empire with no farms.” And there we have our title: “Empires Without Farms”. An apparent paradox, which intrigues the reader, and starts them thinking about what empires they know of but had never thought about their lack of agriculture, and whether that is true, and if it is, how could it have been true, what did they eat and why didn’t they lose wars if they didn’t grow all their own food…?
Added on January 30, 2026
If Tesla were valued fairly, it would probably be to the tune of $5B. But I’ll never bet against it, because the markets can remain irrational for longer than I can remain solvent.
Added on January 29, 2026
LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.
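The declarative, criteria-driven loop described above can be sketched in miniature. Here, the "success criteria" are tests against a naive reference implementation, and `candidates` is a hypothetical stand-in for successive LLM-generated attempts (a real agent would regenerate code each turn):

```python
def naive_fib(n):
    # Naive reference implementation: slow, but very likely correct,
    # so it serves as the oracle in the success criteria.
    return n if n < 2 else naive_fib(n - 1) + naive_fib(n - 2)

def passes_criteria(fn):
    # Declarative success criteria: match the oracle on small inputs.
    # We state *what* must hold, not *how* to achieve it.
    return all(fn(i) == naive_fib(i) for i in range(15))

def buggy_fib(n):
    # A failing candidate: the loop should reject this one.
    return n

def fast_fib(n):
    # An iterative candidate that optimizes while preserving correctness.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Stand-in for the agent loop: keep taking candidates until one
# satisfies the criteria.
candidates = [buggy_fib, fast_fib]
accepted = next(fn for fn in candidates if passes_criteria(fn))
print(accepted.__name__)  # prints "fast_fib"
```

The key design point is that the loop's exit condition lives in `passes_criteria`, not in any instruction about implementation details, which is what lets an agent run longer without supervision.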
Added on January 28, 2026
But if people stop using the apps and websites and start sending agents instead, that business really starts to break down. Because DoorDash and all the other service providers make their money by having a direct relationship with customers they can monetize in lots of different ways. It’s basic stuff like promotions, deals and discounts, ads for other stuff, their own subscriptions like DashPass and Uber One, and whatever other ideas they might have to make money in the future.
But AI doesn’t care about any of that stuff — if you ask for a car to the airport, an AI might just open Uber and Lyft and always pick the cheapest ride. These big App Store era services might just become commodity databases of information competing on price alone, which might not actually be sustainable, even if it might be the future. In fact, this past May at the Google I/O developer conference, Google DeepMind CEO Demis Hassabis said that he thinks we might not need to render web pages at all in an agent-first world.
Added on January 26, 2026
Retired United States Navy Admiral William McRaven echoed a similar sentiment in his book, The Wisdom of the Bullfrog, writing,
“I found in my career that if you take pride in the little jobs, people will think you worthy of the bigger jobs.”
He illustrated this point with a story from early in his career when rather than being assigned to lead a mission, he was tasked with building a float that would represent the Navy SEALs (often referred to as “frogmen”) in the Fourth of July parade.
After receiving the assignment, McRaven was admittedly dejected. In his mind, he had joined the Navy SEALs to lead missions, not build parade floats. But a seasoned team member offered him a quiet piece of advice, saying:
“Sooner or later we all have to do things we do not want to. But if you are going to do it, do it right. Build the best damn Frog Float you can.”
McRaven took the message to heart, pouring himself into the task and the float went on to win first prize in its category.
Added on January 21, 2026
Folks at Wikipedia made this awesome guide to detecting LLM writing in articles. Most of it is high level, but once you've read enough AI-generated stuff, you can spot the same patterns yourself.
1. Undue emphasis on significance, legacy, and broader trends
*Words to watch: stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted, ...*
LLM writing often puffs up the importance of the subject matter by adding statements about how arbitrary aspects of the topic represent or contribute to a broader topic. There is a distinct and easily identifiable repertoire of ways that it writes these statements.
E.g.: The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain. [...]
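As a toy illustration (not Wikipedia's actual tooling), you can mechanically flag text that leans on this "undue significance" vocabulary. The phrase list below is a small hand-picked subset of the words to watch:

```python
# Hypothetical watch-list, sampled from the phrases quoted above.
WATCH_PHRASES = [
    "stands as", "serves as", "a testament", "pivotal moment",
    "underscores", "reflects broader", "enduring", "lasting",
    "setting the stage for", "marking a", "evolving landscape",
    "indelible mark", "deeply rooted",
]

def significance_hits(text):
    """Return the watch-phrases found in `text`, case-insensitively."""
    lowered = text.lower()
    return [p for p in WATCH_PHRASES if p in lowered]

sample = ("The Statistical Institute of Catalonia was officially "
          "established in 1989, marking a pivotal moment in the "
          "evolution of regional statistics in Spain.")
print(significance_hits(sample))  # prints "['pivotal moment', 'marking a']"
```

Of course, a hit count is only a weak signal; plenty of human prose uses these phrases too, which is why the guide treats them as things to watch rather than proof.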
Added on January 21, 2026
Johann describes the “Normalization of Deviance” phenomenon, where repeated exposure to risky behaviour without negative consequences leads people and organizations to accept that risky behaviour as normal.
This was originally described by sociologist Diane Vaughan as part of her work to understand the 1986 Space Shuttle Challenger disaster, caused by a faulty O-ring that engineers had known about for years. Plenty of successful launches led NASA culture to stop taking that risk seriously.
Johann argues that the longer we get away with running these systems in fundamentally insecure ways, the closer we are getting to a Challenger disaster of our own.
Added on January 12, 2026
Tech debt projects are always a hard sell to management, because even if everything goes flawlessly, the code just does roughly what it did before. This project was no exception, and the optics weren't great. I did as many engineers do and "ignored the politics", put my head down, and got it done. But, the project went long, and I lost management's trust in the process.
I realized I was essentially trying to solve a people problem with a technical solution.
Added on January 12, 2026