<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="http://www.knowingken.com/feed.xml" rel="self" type="application/atom+xml" /><link href="http://www.knowingken.com/" rel="alternate" type="text/html" /><updated>2026-03-31T19:48:43+00:00</updated><id>http://www.knowingken.com/feed.xml</id><title type="html">Knowing Ken</title><subtitle>Ken&apos;s blog</subtitle><entry><title type="html">Making and meeting Moloch</title><link href="http://www.knowingken.com/making-and-meeting-moloch" rel="alternate" type="text/html" title="Making and meeting Moloch" /><published>2026-03-31T00:00:00+00:00</published><updated>2026-03-31T00:00:00+00:00</updated><id>http://www.knowingken.com/making-and-meeting-moloch</id><content type="html" xml:base="http://www.knowingken.com/making-and-meeting-moloch"><![CDATA[<h1 id="making-and-meeting-moloch">Making and meeting Moloch</h1>

<p><em>Game design, tradeoffs, and the pressure to make the numbers go up.</em></p>

<p>This is a reflection on the makings of <a href="https://projectbasilisk.com">Project Basilisk</a>, a short-ish game-ish experience about creating safe AGI in a world where being first is all that matters.</p>

<p>There’s a moment that any creator or founder can recognize - the moment when their carefully-crafted idea makes contact with reality. Who budges? Certainly not reality, not unless your creation is large enough to change it. Most aren’t.</p>

<p>My moment came when I released the alpha version of a game I’d been chipping away at for months. It wasn’t in a great state - I’d taken to heart the saying “if you’re not embarrassed by the first version of your product, you’ve released too late”<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>. I knew I’d get some flak. I didn’t know how much I’d get.</p>

<p>Some of the comments were easy to receive. They raised items I’d already had on my backlog, things that were consistent with my vision that I’d just needed a nudge to get to. Those comments were the nudge I needed to make my vision coalesce a little further.</p>

<p>Others were more difficult. They questioned whether this was a game at all. The numbers reflected this - the drop-off looked existential. My idea came into contact with reality and was found wanting.</p>

<p>What do you do when you have a vision and the world wants something else?</p>
<h2 id="origins">Origins</h2>

<p>But I’m getting ahead of myself. Project Basilisk<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> was originally conceived as a sermon. Inspired by Universal Paperclips<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>, I’d wanted to create an expectation-subverting narrative about the difficulties of creating safe AGI amidst the incentive systems and market structures that pressure labs to go faster, no matter the cost. The core idea of an incremental game is that the numbers go up. More numbers go up more faster = more dopamine. The core idea of my game was, “hey, maybe we should stop and think about the speed at which the numbers go up, and whether in fact we want the numbers to go up at all, before the numbers become sentient and slip out of our control”. (The numbers being a ham-fisted allegory for AI research, naturally.)</p>

<p>As a person of limited technical proficiency<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup>, I turned towards the latest and greatest version of Claude Code. I’d used it before for limited projects, but this would be my first real foray into creating something from the ground up with a coding agent.</p>

<p>One of the interesting (and fun, debatably?) things about working with LLMs is their limited context window. Because LLMs are an excellent opportunity to speedrun the history of effective management and software development practices, I got to do one of my favorite things: create documentation, which meant <a href="https://knowingken.com/write-bad.html">writing a lot</a> about my ideas. I started with a north star vision document, which gradually morphed into a constellation of 30+ design documents covering the main principles of each major system in the game, from the economy to research to alignment. And because this wasn’t meant to be ‘just a game’, there were pages upon pages of narrative background, arcs, and character backstories.</p>

<p>I really enjoy writing because it forces me to confront what a hypocrite I am. My original design document had twelve “core” values. That was way <a href="https://knowingken.com/values.html">too many values to be useful</a>. I could have gone forwards and created something that did a lot of things in a mediocre fashion. Instead, I killed my darlings, until I was left with one core principle:</p>
<ol>
  <li>The mechanics are the message: players should realize the morals of the game through the actions they take.</li>
</ol>

<p>About half-way through development, I realized I had neither the time nor the aptitude to simulate the world entire, so I wrote down a fallback principle:</p>

<ol start="2">
  <li>Meaning &gt; education &gt; balance &gt; fun.</li>
</ol>

<p>(It turns out that these principles would become increasingly difficult to follow. That’s when I knew I’d chosen useful ones.)</p>

<p>Having written down my vision and values, it was a simple matter to do the thing and make the game.</p>

<p>If you haven’t played the game - spoiler alert:</p>

<p>You play as the CEO of an AI lab competing to build AGI. There are strong economic and competitive pressures. It’s difficult to do everything by the book and still make it to AGI first - and making it there first is the only thing that matters.</p>

<p>What trade-offs are you willing to make, and what risks are you willing to take, in your pursuit of greatness? Will you cut safety work to accelerate timelines, look past data provenance in a crunch, or ignore warning signals to maintain investor confidence? And how will your decisions affect the shape of the AI you create?</p>

<h2 id="contact">Contact</h2>

<p>This is barely a game.</p>

<p>The gameplay is oddly cruel.</p>

<p>There’s no replayability.<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">5</a></sup></p>

<p>The first rounds of feedback were brutal. Bounce rates were sky-high: nearly two-thirds of players who started the game bounced after less than five minutes. The funnel after that was barely any brighter. After the first twenty-four hours, less than 1% of players who had started the game had finished it. I didn’t know if that was good or bad in the grand scheme of gaming analytics (and I still don’t), but it felt bad for a game I had designed to take around 80 minutes to play through.</p>

<p>The initial ratings were no better. A deluge of one-stars pummeled the listings. The message was inescapable: it felt like an unequivocal failure.</p>

<p>I’d intentionally released the game in a half-finished state. Out of two planned story arcs, only the first - which I considered more of a prolonged tutorial or demo - was part of the initial launch. The feedback made me question whether it even made sense to work on the second half.</p>

<p>But despite the headwinds, I discovered some powerful forces at my back.</p>

<p>First, a few positive comments began slipping through. Though few and far between, each one was a lifeline providing the approval and vindication that became motivation.</p>

<p>Second, the ratings began to split. I’d assumed that the ratings would come in a normal distribution - that the slew of initial one-stars meant it’d never climb higher. But over the next several days, as players came back to the game and completion rates slowly rose, a bimodal distribution emerged. Yes, a lot of people hated it. But if they didn’t hate it, it looked like they loved it. Four- and five-star ratings began to bump up the averages. The distribution of ratings wasn’t something I’d given much thought to before, but I came to appreciate the bimodal, love-it-or-hate-it verdict far more than I would have a mediocre slew of twos and threes.<sup id="fnref:6" role="doc-noteref"><a href="#fn:6" class="footnote" rel="footnote">6</a></sup></p>

<p>Third, stubbornness runs deep in my blood, whether by nature or nurture. There was no way I was giving up on something I thought could be great, not if my big bull head had anything to say about it.</p>

<p>If years of product management had taught me one thing, it was to go to the numbers. So I pulled up PostHog and started looking through the conversion funnel in detail. Perhaps a large part of the bounce rate was due to the audience - so I broke down the funnel by referrer. It turned out that certain domains had much higher conversion. Audiences that were willing to give weirder, genre-bending games a shot made it a lot further than audiences who were expecting a Cookie Clicker clone<sup id="fnref:7" role="doc-noteref"><a href="#fn:7" class="footnote" rel="footnote">7</a></sup>.</p>

<p>I knew that 100% conversion was a pipe dream, but maybe I was leaving some low-hanging fruit unpicked. I crunched the numbers - nearly half of the drop-offs were with players who spent time clicking around the UI and were confused about what to do, eventually giving up. There were signals of intent; these players didn’t immediately see the game and bounce, they gave it a shot. Building a real tutorial for them became my top priority.<sup id="fnref:8" role="doc-noteref"><a href="#fn:8" class="footnote" rel="footnote">8</a></sup></p>

<p>The tutorial worked. Most players went through it, and those who did completed the game at 4x the rate of those who didn’t.</p>

<p>But the tutorial was not a panacea. There was a very large segment of players who bounced before even reading the first guiding message. The game was very narrative-heavy. These players would never be part of my target audience. Despite this, I felt a deep urge to change my creation to appeal to them.</p>

<p>What if I just simplified the mechanics a little, so a player didn’t really have to understand the game to play it? Reduced the tension between being safe and being fast, so losing stops being an option? Turned uncomfortable tradeoffs into win-wins, so players didn’t have to make hard choices? Smoothed out the abstractions and gutted the concepts, so as to avoid the confusing contradictions of reality?</p>

<p>What if I hyper-optimized for the metrics that were legible, the metrics I had chosen to measure in large part because they were the “standard” in product / gaming analytics? If I chased conversion, retention, playtime, users, while silently trading off against the metrics that were harder to measure - the educational value in the tension, the intentional anxiety, the realism of the concepts?</p>

<p>There is a moment that every creator and founder recognizes, when their vision meets reality and is found wanting. There will be an unbearably intense, seductive, coercive urge to compromise their vision, the ethos that defines their creation, in service of making the numbers go up.</p>

<p>This was mine.</p>
<h2 id="im-so-meta-even-this-allegory">I’m So Meta Even This Allegory</h2>

<p>Project Basilisk was originally intended as a commentary on how structural incentives and competitive pressures can test even the most principled founders.</p>

<p>I did not expect to find it testing myself.</p>

<p>I’d wanted to create an educational experience, a genre-bending hybrid interactive fiction / simulation / incremental / strategy / game / sermon. The market I entered wanted fun and features, a predictable experience where the numbers go up without too much thought involved. I was competing for player attention against every other game out there.</p>

<p>The metrics for my vision were bleak. The hypothetical metrics for a twisted version - with a shorter game loop, faster feedback, a stripped-down narrative, all optimized for retention and fun - were much better.</p>

<p>I found myself facing the same tradeoff I had designed for my players.</p>

<p>It would be so easy to abandon the original vision. So satisfying to watch my own numbers - players, completions, ratings - go up.</p>

<p>But then I thought back to my original reasons for building this at all. I hadn’t gone into it wanting the approval of every player. I hadn’t wanted to create the next Cookie Clicker or Angry Birds or Candy Crush.</p>

<p>I had wanted to make something that meant something. And compromising here would subtract from the meaning I wanted to convey.</p>

<p>“The mechanics are the message. Prefer education over balance. Prefer balance over fun.”</p>

<p>And so the anguished decision became simple. The seduction became banal. Having accepted that I could not reach everyone, I chose to make the experience the best it could be for those I could reach.</p>
<h2 id="in-which-we-all-face-moloch-together">In which we all face Moloch together</h2>

<p>Scott Alexander called it <a href="https://slatestarcodex.com/2014/07/30/meditations-on-moloch/">Moloch</a>: the way competition forces individually-rational actors to sacrifice the things they value most, not out of malice or incompetence, but because the structures they operate in won’t let them do otherwise.</p>

<p>I met a baby Moloch in the corners of the internet where I’d shared my game. The stakes were low, despite how high they felt in the moment. My Moloch was relatively easy to overcome.</p>

<p>The leading AI labs of our time face a much larger Moloch. Resisting Moloch doesn’t just mean driving forward stubbornly against a few mean words on the internet. It may mean defying your national security apparatus and potentially being blacklisted across the country. It means holding fast to care, when carelessness is so often easier, more profitable, more convenient. And the consequences are far more severe. Giving in to my Moloch would’ve meant the world losing a single weird indie game and gaining a gacha game (or a gambling app, or a short-form video platform).</p>

<p>Giving in to our shared Moloch may mean a world that has lost all that makes it recognizable.</p>

<p>And even if we do resist Moloch - if we somehow build something “safe” and “aligned” - we’ll find another Moloch with a tougher question yet: safe for whom? Aligned to what values?</p>

<hr />

<p>This essay can be differently experienced as a game-thing <a href="https://projectbasilisk.com">here</a>.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>Reid Hoffman, if the internet is to be believed. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>Originally “AGI Incremental”, until I spent 3 hours trying to solve the hardest problem in development - naming things. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:3" role="doc-endnote">
      <p>Among other games too numerous to list. A feeble and very incomprehensive attempt: Papers, Please; Frostpunk; The Roottrees are Dead; Type Help; A Dark Room; Biotomata; Skynet Simulator; The McDonald’s Video Game. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:4" role="doc-endnote">
      <p>Some of my friends say that this is wildly inaccurate, but they disagree about the direction, so I have yet to find a better label. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:5" role="doc-endnote">
      <p>Although I suppose this commenter played it through at least once, which I count as a win. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:6" role="doc-endnote">
      <p>I’m still not sure to what extent game ratings are driven by sampling bias and whether such starkly bimodal distributions are simply the norm. If you have data on this, I’d love to chat! <a href="#fnref:6" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:7" role="doc-endnote">
      <p>No offense to Cookie Clicker, grandfather of the genre that it is. <a href="#fnref:7" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:8" role="doc-endnote">
      <p>Building tutorials when afflicted with the curse of knowledge is much more difficult than I had imagined. <a href="#fnref:8" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name></name></author><summary type="html"><![CDATA[Making and meeting Moloch]]></summary></entry><entry><title type="html">Vibe-coding ants</title><link href="http://www.knowingken.com/vibe-coding-ants" rel="alternate" type="text/html" title="Vibe-coding ants" /><published>2026-03-16T00:00:00+00:00</published><updated>2026-03-16T00:00:00+00:00</updated><id>http://www.knowingken.com/vibe-coding-ants</id><content type="html" xml:base="http://www.knowingken.com/vibe-coding-ants"><![CDATA[<h1 id="vibe-coding-ants">Vibe-coding ants</h1>

<p><em>Ants, assembly, and local optima.</em></p>

<p><a href="https://moment.com/">Moment</a> recently held a wonderfully fun programming challenge:</p>
<blockquote>
  <p>You write a program in a custom assembly-like (we call it ant-ssembly) instruction set that controls 200 ants. Each ant can sense nearby cells (food, pheromones, home, other ants) but has no global view. The only coordination mechanism is pheromone trails, which ants can emit and sense, but that’s it. Your program runs identically on every ant.</p>

  <p>The goal is to collect the highest percentage of food across a set of maps. Different map layouts (clustered food, scattered, obstacles) reward very different strategies. The leaderboard is live.</p>
</blockquote>

<p>As a non-technical product manager, I was curious how far pure vibes could get me. I ended up with a final score of 599 (out of 1000), placing 52nd out of ~27k submissions (note that players could submit multiple times, so the actual number of entrants was probably much lower). For context, first place achieved 856 points, and the top 10 was rounded out at 770.</p>

<p><strong>Conclusion</strong>: the vibes were great at bootstrapping a program even when the user (me) had farcically little idea about what was going on. They were also good (too good) at optimizing the program into a corner that we later couldn’t get out of.</p>

<p><strong>Contents:</strong> <a href="#methods">Methods</a> · <a href="#strategy-some-things-that-worked-some-things-that-failed">Strategy</a> · <a href="#other-fun-stuff-genetic-simulator">Genetic simulator</a></p>

<h2 id="methods">Methods</h2>
<p>I<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> immediately realized that I did not want to use the (very beautiful) web editor that Moment provided, as the transfer of code in and out of the browser would add overhead to the agent dev loop. So I (read: Claude) downloaded the HAR file from my browser, extracted the simulator engine, wrote a local Node.js simulator for the end-to-end process, and put some reference docs around it. Combined with a testing script so Claude didn’t have to continually rediscover how our harness worked, this meant that we could conduct small 12-map tests and larger 120-map evaluations locally as part of the dev/test iteration loop.</p>

<p>Then I let Claude loose. Despite having a 20x Max subscription, I became slightly worried about how many tokens it was burning, and so throughout the project I took major detours to develop some tooling/scripting for Claude to use, including a static ops budget analyzer, ant and map diagnosis tools, and automated parameter sweepers. Creating tooling for these sorts of repetitive analyses saved a considerable amount of time (and tokens) - <strong>if I caught Claude doing something repeatedly that could be automated, I had it automate it.</strong></p>

<p>The iteration loop:</p>
<ol>
  <li>Pull hypotheses from our running roadmap and lessons documents</li>
  <li>Generate list of suggested code enhancements</li>
  <li>Snapshot baseline scores</li>
  <li>Launch subagents to build out enhancements in parallel, isolated worktrees</li>
  <li>Run 12-map tests on a static seed to smoke-test for significant regressions</li>
  <li>If passed, run 120-map random-seed evals for a more statistically-sound signal</li>
  <li>Decide whether to incorporate or reject the changes</li>
  <li>Update roadmap/lessons document and return to step 1</li>
</ol>
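<p>The gate in steps 5-7 can be sketched roughly as follows (illustrative Python, not the actual harness; <code>run_eval</code> and the regression margin are hypothetical stand-ins):</p>

```python
# Hypothetical sketch of the accept/reject gate in the iteration loop.
# run_eval stands in for the local simulator harness; thresholds are invented.

def evaluate_change(run_eval, baseline_12, baseline_120, margin=30):
    """Return a verdict for a candidate code enhancement."""
    smoke = run_eval(maps=12, static_seed=True)    # step 5: cheap smoke test
    if smoke < baseline_12 - margin:               # margin absorbs PRNG noise
        return "reject: smoke-test regression"
    full = run_eval(maps=120, static_seed=False)   # step 6: steadier signal
    return "accept" if full > baseline_120 else "reject: no improvement"
```

<p>Only changes that survive both checks get merged; everything else goes back into the lessons document.</p>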

<p>In addition to this core iteration loop, I had some Claude agents conducting research into prior art around pathfinding algorithms, ant colony behavior in nature, other ant simulation contests, etc. Several breakthroughs came from this research. Why reinvent the wheel?</p>

<p>While the project started around the ref docs, roadmap, and lessons, the documentation quickly grew to the point where we started some meta-documentation such as system maps so that Claude could efficiently find what it needed. By the end the lessons log had 200+ entries, and we’d search it before starting any experiment. <strong>Not repeating past mistakes sounds obvious, but dev agents love to repeatedly make the same mistakes unless specifically instructed not to</strong>, and a lessons doc + hooks are good (though not perfect) ways to enforce this.</p>

<h2 id="strategy-some-things-that-worked-some-things-that-failed">Strategy: some things that worked, some things that failed</h2>

<h3 id="the-final-brain">The final brain</h3>

<p>The final brain I submitted used two general states - one to explore the map and find food (exploring), the other to take food back to the nest (homing).</p>

<p>The main homing signal was a Dijkstra-style gradient: a green pheromone field built outwards from the nest. Each ant would sniff nearby cells, find the strongest green signal, and mark its own cell slightly weaker. Ants trying to get home would follow the gradient uphill back to the nest.</p>

<p>Theoretically, this method could’ve led to trails that were 255 cells long due to pheromone strength caps and decay. In practice, however, trails were less than half that length because our outward-bound ants intentionally weakened the trail by an extra 1/cell. Combined with imperfect outward-bound pathfinding, the green nest gradient frequently peaked at 80-100 cells long.</p>

<p>Conversely, I used red as the food signal. Ants would pick up food, follow green home, and mark red along the way so exploring ants could climb the red gradient towards the food.</p>
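<p>As a rough illustration (in Python, not the contest’s ant-ssembly; all names and constants here are hypothetical), the green-channel rule described above looks something like this:</p>

```python
# Toy sketch of the nest gradient: each ant marks its own cell one step weaker
# than the strongest green it can sniff, and outbound ants weaken the trail by
# an extra point per cell. Constants are illustrative, not the contest's.

GREEN_MAX = 255   # pheromone strength cap
STEP_COST = 1     # falloff per cell
EXTRA_WEAKEN = 1  # outbound ants intentionally shorten the trail

def green_mark(neighbor_greens, at_nest, outbound):
    """Value an ant writes to the green channel of its own cell."""
    if at_nest:
        return GREEN_MAX  # the gradient peaks at the nest
    best = max(neighbor_greens, default=0)
    mark = best - STEP_COST - (EXTRA_WEAKEN if outbound else 0)
    return max(mark, 0)

def homing_step(neighbor_greens):
    """A homing ant climbs the gradient: move toward the strongest green."""
    return max(range(len(neighbor_greens)), key=neighbor_greens.__getitem__)
```

<p>The red food channel inverts the roles: carriers mark red on their way home, and explorers climb it toward the food.</p>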

<p>For around 80% of the challenge, I used dead reckoning. However, this was quite costly, both in ops budget, since each ant had to spend a significant portion of every tick recording its movements, and in memory registers, since each ant had to store its location relative to the nest. By my final submission, I had dropped dead reckoning in favor of a reworked pheromone system.</p>

<h3 id="challenges-and-missed-opportunities">Challenges and missed opportunities</h3>

<p>My final submission had <strong>one unused register and two unused pheromone channels</strong> (blue and yellow). Since a major challenge I faced was trying to extend the green pheromone trails, these channels were obvious avenues of exploration. Despite hours of brainstorming and trying out several designs with Claude on how best to incorporate these, we were never able to find a design that worked.</p>

<p>As I’m sure other competitors experienced, wall-following was a major challenge, especially for the gauntlet and fortress maps. While we eventually landed on a manageable wall-following solution, I’m sure it wasn’t ideal and some additional points could have been achieved with a better solution.</p>

<p>I spent a significant amount of time working with Claude on clean-room rewrite attempts. None of these led anywhere, perhaps because they were too ambitious in scope and didn’t have a clear architectural thesis that we were betting on. As a result, when the rewrites inevitably ran into hiccups like significant score regressions, they were quickly abandoned.</p>

<h3 id="mistakes--lessons">Mistakes &amp; lessons</h3>

<h4 id="its-hard-to-reliably-identify-small-enhancements-when-theres-an-element-of-randomness">It’s hard to reliably identify small enhancements when there’s an element of randomness.</h4>
<p>The contest was technically deterministic, in that the same code would result in the same scores. However, due to the pseudo-random number generator used internally, code that was logically identical could lead to different results if instructions shifted the RNG sequence in critical paths, even with the same seed and maps.</p>

<p>PRNG noise caused divergences of up to ±30 points on the 12-map eval, which made it very difficult to reliably identify incremental enhancements. I spent a non-trivial amount of time trying to figure out how to separate signal from noise on the 12-map tests, eventually just falling back to 120-map tests in an effort to smooth out the noise.</p>
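<p>The effect is easy to reproduce with a toy model (illustrative Python, unrelated to the contest code): averaging over ten times as many maps shrinks the run-to-run spread by roughly the square root of ten.</p>

```python
import random
import statistics

def eval_run(n_maps, true_score=50.0, noise=10.0):
    """Mean score over n_maps maps, each carrying PRNG-like per-map noise."""
    return statistics.mean(random.gauss(true_score, noise) for _ in range(n_maps))

random.seed(0)
spread_12 = statistics.stdev(eval_run(12) for _ in range(200))
spread_120 = statistics.stdev(eval_run(120) for _ in range(200))
# spread_120 comes out roughly sqrt(10) ~ 3.2x smaller than spread_12
```

<p>The tradeoff, of course, is that the larger eval is ten times slower, which is why the small test remained useful as a smoke test.</p>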

<h4 id="early-design-choices-can-lead-to-lock-in-on-local-optima">Early design choices can lead to lock-in on local optima.</h4>
<p>In the first version we developed, Claude included features such as dead reckoning which became load-bearing. The rest of the brain was so optimized around it that any incremental change would lead to significant score regressions, which led to us concluding that dead reckoning was an essential part of the system.</p>

<p>After stalling, we eventually explored a ground-up rework, removing dead reckoning and using the additional ops to rework our pheromone system, which both freed up ops and led to small score gains itself. The AI iteration loop was excellent at local optimization, but it wasn’t able to chain together these experiments into a more cohesive rework without explicit prompting.</p>

<h4 id="smart-architecture-beats-incremental-enhancements">Smart architecture beats incremental enhancements.</h4>

<p>A core tenet of my AI iteration loop was making small, isolated, and testable changes. This approach works well for tuning parameters, but it quickly led to ceilings on architectural exploration. Most of the major score leaps we had were driven by patient diagnosis and brainstorming to see what enhancements were interdependent and needed to be tested together. This is probably the hardest thing to operationalize in my loop, because it was structured around incremental and reversible changes by design.</p>

<h4 id="kill-and-resurrect-your-darling-hypotheses">Kill (and resurrect) your darling hypotheses.</h4>
<p>Many times while iterating, we marked hypotheses as invalidated, but never revisited those hypotheses later on, when the architecture and relevant conditions (such as register availability or ops budget) had changed.</p>

<p>After changing our process to regularly revisit stale hypotheses and to proactively brainstorm conditions under which hypotheses should be retested, we discovered several enhancements which led to incremental score gains.</p>

<h2 id="other-fun-stuff-genetic-simulator">Other fun stuff (genetic simulator)</h2>
<p>Around 24 hours before the contest ended, I accepted that no major breakthrough was foreseeable within the context of the LLM-driven iterative development cycle, so I decided to have a bit of fun and spun up a genetic simulator and a couple of cloud machines to run it overnight. Despite being run for a dozen hours on two of the finest machines a new account limited by quota could get, this approach did not surface any improvements over the hand-tuned brain (hypotheses as to why below), but it was way more fun than it had any right to be.</p>

<p>The first attempt was rather dismal. The genetic simulator operated at the instruction level and was unable to reliably create viable programs in the enormous search space presented therein. Despite adding several funnel steps (including a static analyzer to ensure mutated code was valid), these programs struggled to even make it to the base 45 score achieved by the contest-provided sample brain.</p>

<p>Seeing the instruction-level mutations fail, I pivoted to evolving state machines. Basically, this meant pulling out specific states or routines from my hand-crafted brain and using those as the building blocks for the genetic simulator. Unfortunately, this method effectively gates the potential ceiling of the genetic sim at my (lack of) ingenuity in creating the base states. Even starting from scratch, the evolved brains were never able to break past the local optima that my hand-tuned brain had found itself in.</p>

<p>While the genetic simulator was ultimately not successful at producing a better brain in the ~24 hours I ran it, I found it was an excellent way to grok certain genetic/evolutionary ideas, effectively re-discovering concepts from high school biology.</p>

<p>For example, early attempts quickly converged on the locally-optimized brain we already had, because it had the highest score and score was our only fitness signal. By identifying additional signals and implementing a multi-objective fitness function, we were able to encourage greater diversity in the “gene pool” which led to more interesting mutations and certain brains which populated specific map-type niches.</p>
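<p>A minimal sketch of that multi-objective idea (hypothetical Python; the actual fitness signals were specific to the contest):</p>

```python
# Hypothetical sketch: rank candidate brains by raw score plus a novelty bonus
# (mean distance from the rest of the population in a simple behavior space,
# e.g. a per-map-type score profile) so the pool doesn't collapse onto the
# single best-known brain. All names and weights are illustrative.

def profile_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def fitness(candidate, population, novelty_weight=0.3):
    others = [p for p in population if p is not candidate]
    if not others:
        return candidate["score"]
    novelty = sum(profile_distance(candidate["profile"], p["profile"])
                  for p in others) / len(others)
    return candidate["score"] + novelty_weight * novelty
```

<p>With a bonus like this, a brain that scores the same as the incumbent but behaves differently outranks a near-clone, which is what let map-type specialists survive selection.</p>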

<p>Mutation granularity was also a major dilemma. The initial instruction-level mutations were ideal in the sense that the simulator could (and did) come up with anything, theoretically allowing for breakthroughs that neither I nor Claude may have thought of. In practice, 99.999….% of “anything” was broken programs. The state-machine mutations were better but perhaps went too far in the other direction. When the smallest unit of change was an entire state/routine, it was too coarse for the simulator to make gradual improvements, and the simulator converged repeatedly on the program we had already built. If working on this for more time, digging in on the right level of mutation granularity would be at the top of my list.</p>

<p>As a bonus, because the simulator relied on running large batches of programs through the maps, I found the original JS engine lacking in terms of performance, so Claude rewrote it in C / WASM which improved throughput tremendously. This was also a fun little detour into the performance of genetic simulators and how optimizations that can improve performance (such as creating more gates and steeper dropoffs between funnel steps) must trade off against genetic diversity that might only become beneficial later on.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>Throughout this post, I use “I” and “Claude” interchangeably, because “Claude” requires 5 more letters to type than “I”. It is reasonable to read that any step which required coding was done exclusively by Claude, and 90% of the ideas were generated by Claude as well. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name></name></author><summary type="html"><![CDATA[Vibe-coding ants]]></summary></entry><entry><title type="html">Notes on memory: introduction</title><link href="http://www.knowingken.com/memory-intro" rel="alternate" type="text/html" title="Notes on memory: introduction" /><published>2025-09-28T00:00:00+00:00</published><updated>2025-09-28T00:00:00+00:00</updated><id>http://www.knowingken.com/memory-intro</id><content type="html" xml:base="http://www.knowingken.com/memory-intro"><![CDATA[<h1 id="notes-on-memory-intro">Notes on memory: intro</h1>

<p>Welcome to my notes on memory, where I make an effort to understand memory, its effects on our person and our society, and implications for the development of artificial intelligence.</p>

<p>Why write about memory?</p>

<p>I believe that memory will be the next great frontier for artificial intelligence. This is based on a few core hypotheses.</p>

<p>First, that true artificial intelligence is conditioned on a masterful implementation of memory. Without the ability to build true long-term understanding of the mechanisms through which the world works and incorporate those lessons unto itself, AI will never progress past a facsimile of intelligence - able to solve bounded problems, but never able to meaningfully improve itself beyond the existing corpus of knowledge.</p>

<p>Second, that the development of memory for AI is blocked mostly by conceptual and design questions - not technical obstacles. I don’t think we have a comprehensive and coherent vision for what memory in AI is designed to achieve, which means the problem space is much too large. Part of the problem is that the word “memory” is overloaded - people tend to talk past each other without an aligned understanding of what we’re trying to achieve. Without this vision - a definite optimism around memory - existing efforts are doomed to incremental iteration in hopes of a breakthrough.</p>

<p>Third, that developing such a vision for AI memory significantly improves our ability to build the thing. I’ll readily admit that I’m not a technical expert, so I might be underestimating the challenges on that front. I do know that it’s incredibly difficult to build something if you don’t know what you’re building and why. A clear vision for AI memory will introduce constraints and limit the problem space, providing direction and purpose to those who are building at the frontiers.</p>

<p>As I’ve said, I’m by no means an expert (except insofar as I’m a forgetful person). I’m a product manager with a hobby interest in AI - not an academic or professional. I hope that the process of writing these notes will deepen my knowledge of the space. I also hope that my journey will help interested readers - AI researchers in particular - develop a more rounded understanding of a concept which seems simple but has a surprising amount of detail.</p>

<p>I appreciate you taking the time to humor me and my musings. If you’re interested in discussing memory, please feel free to reach out. Enjoy!</p>]]></content><author><name></name></author><category term="memory" /><summary type="html"><![CDATA[Notes on memory: intro]]></summary></entry><entry><title type="html">When must we lobotomize ourselves?</title><link href="http://www.knowingken.com/rhetoragnosia" rel="alternate" type="text/html" title="When must we lobotomize ourselves?" /><published>2025-07-26T00:00:00+00:00</published><updated>2025-07-26T00:00:00+00:00</updated><id>http://www.knowingken.com/rhetoragnosia</id><content type="html" xml:base="http://www.knowingken.com/rhetoragnosia"><![CDATA[<h1 id="when-must-we-lobotomize-ourselves">When must we lobotomize ourselves?</h1>

<p><em>Rhetoragnosia, superhumanly persuasive AI, and truth.</em></p>

<blockquote>
  <p><strong>Rhetor</strong>, from Greek <strong>“rhētōr,”</strong> meaning “speaker” or “orator”.</p>

  <p><strong>Agnosia</strong>, from Greek <strong>“agnōsia,”</strong> meaning “ignorance” or “not knowing”.</p>
</blockquote>

<p>In his 2002 short story “Liking What You See: A Documentary,” Ted Chiang explores calliagnosia, a technology that eliminates your ability to see physical beauty. I strongly recommend reading the story - major spoilers ahead.</p>

<p>Liking What You See centers around a debate on a college campus around making calliagnosia, or “calli”, mandatory for students. Proponents argue that calli addresses “lookism” - prejudice against unattractive people - and lets users see the true inner beauty of other people rather than their outward appearance. Opponents claim that calli stunts users’ ability to appreciate natural beauty and prevents them from internally developing the maturity to look past physical appearance.</p>

<p>Calli is an incredibly interesting concept which I’d like to explore more deeply at some point. But this post isn’t about calli.</p>

<p>The anti-calli faction is supported by a marketing firm, Wyatt/Hayes, which releases a supernaturally persuasive video against calli. The video is revealed to have been edited to enhance the speaker’s “vocal intonation, facial expressions, and body language”. Calli proponents struggle to address these new developments, with some considering adoption of newer technologies that allow one to block out facial expressions or intonation in a bid to defend themselves.</p>

<h2 id="learning-to-speak-on-the-speech-team">Learning to speak on the speech team</h2>

<p>I wonder a lot about persuasion and rhetoric. In high school, I competed in extemporaneous speaking, an activity which involves delivering a persuasive speech with minimal preparation. While topics were usually bounded to current events, it was still quite difficult to stay on top of everything going on in the world at a deep enough level to write and practice a 5-7 minute speech from scratch with only 30 minutes of prep time.</p>

<p>One strategy we used was modular speeches - basic outlines that could be adapted by swapping in topic-specific theses and evidence. These outlines came with stock openers (hooks), argument frameworks, and other rhetorical flourishes designed to sound reasonable even if their relevance to the subject matter was tenuous. Good speakers would tie in the generic framework to their topic, and doing so was much faster and more reliable than coming up with the entire speech from scratch. I remember getting to the point where I could deliver semi-decent performances with almost no knowledge of the underlying topic - an incredible reward-to-effort ratio from the perspective of a lazy kid.</p>

<p>But something about it left me feeling unsettled - learning that the packaging of an argument can be almost entirely divorced from its content.</p>

<p>It’s difficult to separate our perceptions of a person from the first, visual impressions we have of them. It’s similarly difficult to separate the factual or logical content of a persuasive argument from the fashion in which it is delivered. Chiang writes about superhuman persuasion over video through enhancing nonverbal cues, but this also affects voice and text, though perhaps to a lesser extent. Just like absent-minded students might gloss over a hidden division by zero in an otherwise compelling proof that 1=2, the absent-minded reader can be all too easily pulled in by writing that seems well-reasoned but on further examination lacks substance.</p>

<p>(Despite my resistance, I still managed to learn some things about rhetoric and the world from extemp. I think that would be less true if I had competed today, though I may have learned some things about prompting LLMs instead.)</p>

<h2 id="some-theories-of-rhetoric-from-an-evolutionary-perspective">Some theories of rhetoric from an evolutionary perspective</h2>

<p>A minor digression - what does it mean for an argument to seem well-reasoned? And why do rhetorical techniques work in general? I think there are several plausible evolutionary psychology-based perspectives on rhetoric.</p>

<p>First, ideas which are delivered persuasively may be more likely to be true. A person who puts a lot of effort into making their position sound good probably put a lot of effort into researching the argument as a whole. Story-telling lets the speaker transfer the feelings of truthiness and tradition associated with familiar stories onto their position. Confidence and appeals to emotion fall under this category as well. A speaker who argues passionately and with emotion signals a deeper confidence in the idea and raises the stakes by putting their social reputation on the line. If they’re wrong, it damages their credibility more severely, so they’re more likely to have put greater effort into ensuring they’re right.</p>

<p>Second, effective rhetoric makes you believe that the speaker is part of your ingroup and thus their ideas are aligned with (and will result in the greater adoption of) your values, regardless of their truth value. From this lens, story-telling demonstrates that the speaker is the kind of person who knows your stories deeply enough to find the underlying patterns and frameworks (such as the monomyth) and therefore is more likely to align with your values. Many other rhetorical techniques, such as social proof and reciprocity, would fall into this category. I think charisma - humor, relatability, and general likeability - falls here as well, as understanding the audience intimately is a prerequisite to deploying these techniques effectively.</p>

<p>In small tribes where social reputation matters and deception is costly, these heuristics likely served us well. The most persuasive person was often the most knowledgeable, most invested, and most aligned with group interests.</p>

<p>But, as they are wont to do, our monkey brains become liabilities in our modern information environment.</p>

<h2 id="the-great-decoupling">The Great Decoupling</h2>

<p>Conversations around rhetoric sometimes paint it as a bad thing - a distraction from the pure truth-seeking that leads to better outcomes. I think there are good reasons why rhetoric is effective and that it in fact helps us find better ideas quickly. In short, the persuasiveness of an argument is often a reasonably good indicator of its truth value.</p>

<p>It would be easy and convenient to draw the line there, but unfortunately it’s not a stable equilibrium.</p>

<p>We’ve seen time and time again that there are bad actors in the system, those who deploy charisma and rhetoric with the end goal of seeming right and convincing others rather than finding or spreading the truth. And so, perhaps since the invention of language, the persuasiveness of an argument has become increasingly decoupled from its truth value.</p>

<p>The advent of mass communication - whether through the printing press, radio, television, or internet - has only accelerated this decoupling. As mass communication makes it easier to reach larger audiences, it has also increased the returns on rhetoric, making it more and more profitable to invest solely in the delivery of a message without regard for its factual content. Demagogues, advertisers, and influencers ceaselessly push forward the frontiers of this decoupling. Radio allowed charismatic dictators to reach millions simultaneously. Television greatly increased the importance of image and appearance in political discourse. Social media algorithms reward content that generates engagement over content that promotes understanding. Each technological leap has made it easier to optimize for persuasion over truth.</p>

<p>AI will be the final nail in the coffin.</p>

<h2 id="llms-completely-decouple-persuasiveness-and-truth-value">LLMs completely decouple persuasiveness and truth value</h2>

<p>The more I engage with AI, the more I worry about its effect on our ability to distinguish between the truth and true-sounding nonsense, an ability that is already being overwhelmed in the algorithmic age. Some researchers are already <a href="https://arxiv.org/pdf/2403.14380">suggesting</a> that LLMs can be as or more persuasive than humans in certain contexts. Both <a href="https://model-spec.openai.com/2025-04-11.html#avoid_targeted_political_manipulation">OpenAI</a> and <a href="https://www.anthropic.com/legal/aup">Anthropic</a> have explicitly set policy for their LLMs to avoid their use for political campaigning and the generation of misinformation. But the problem runs far deeper than just political manipulation.</p>

<p>LLMs represent the democratization of access to and deployment of superhuman rhetorical ability at scale. This will have structural consequences for our ability to communicate at the most fundamental levels.</p>

<p>An LLM can generate text that sounds like it was written by a domain expert, complete with appropriate jargon, confident assertions, and seemingly sophisticated reasoning - all while being fundamentally wrong about key facts. Unlike human charlatans, who might slip up or show inconsistencies, LLMs can maintain a facade of expertise with superhuman consistency.</p>

<p>Where human deception requires individual effort and carries reputational risk, AI-generated content can be produced at massive scale at limited cost and consequence for the generator. We’re already seeing this with AI-generated academic papers, fake reviews, and synthetic social media personas. If you thought bots were bad in the early 21st century, you’re in for an incredibly rude awakening over the next few years.</p>

<p>Social media and search algorithms are already creating echo chambers. Future AI systems will be able to amplify and leverage these situations to tailor their rhetorical approach to individual psychological profiles, optimizing their persuasive techniques for maximum effectiveness on each specific person. This is Chiang’s enhanced video taken to its logical extreme - not just improving delivery but customizing the entire argument structure to exploit individual cognitive biases.</p>

<p>As AI becomes better at mimicking the signals we’ve traditionally used to assess credibility - institutional affiliation, writing quality, internal consistency - these heuristics become unreliable.</p>

<p>The result is a kind of rhetorical inflation: as everyone gains access to superhuman persuasive ability, the baseline level of sophisticated-sounding argumentation rises dramatically, but without any corresponding increase in the actual truth value of the claims being made.</p>

<p>We’re destroying our ability to communicate and connect at an unprecedented scale.</p>

<h2 id="when-must-we-lobotomize-ourselves-1">When must we lobotomize ourselves?</h2>

<p>Calli represents a rebellion against enhanced beauty, giving individuals control over how they want to engage with aesthetic manipulation.</p>

<p>As superhuman persuasion becomes more common, we will need a way to protect ourselves - to opt out of rhetoric. From a naive perspective, this could look like browser extensions that strip emotional language from posts, only presenting factual claims. AI assistants trained to identify and flag rhetorical techniques, in hopes that they’re slightly less insidious out of the shadows. Social norms around clearly labeling opinion versus fact, emotional appeal versus logical argument.</p>

<p>In practice, I anticipate it will be much more difficult to separate rhetoric from facts. Chiang explains calli as working by blocking certain neural pathways associated with the recognition of beauty, as exemplified by clear skin, symmetry, and facial proportions. A precise definition of beauty can be hard to pin down. Unfortunately, defining truth is even harder. In offloading the separation of rhetoric from fact to a third party, we inherently agree to hand over the duty to define what the facts are. Philosophers have spent millennia trying to establish the natures of <a href="https://plato.stanford.edu/entries/epistemology/">knowledge</a> and <a href="https://plato.stanford.edu/entries/truth/">truth</a>.</p>

<p>Will our ability to define and understand the truth outpace a bad actor’s ability to muddy the waters?</p>

<p>Even if we can develop such cognitive defenses, will it be worth it?</p>

<p>There’s real value in rhetoric and persuasion that goes beyond manipulation. Good rhetoric can make important truths more accessible and memorable. The civil rights movement succeeded not just because it was morally right, but because great orators like Martin Luther King Jr. could communicate that moral truth in ways that moved people to action. Appeals to emotion are appeals to part of what makes us human. If we completely deafen ourselves to rhetoric - effectively lobotomizing ourselves in a sense - we must accept the immeasurable tragedy of losing the ability to appreciate so much of the human experience, from great texts to everyday passion.</p>

<p>But as AI makes rhetorical manipulation increasingly powerful and accessible, we may not have a choice. Just as Chiang’s story considers blocking facial expressions and vocal intonation to defend against enhanced videos, we may need to develop tools to help us separate the logical content of arguments from their emotional packaging.</p>

<p>The need is becoming increasingly urgent. In a world where anyone can sound like an expert and any argument can be made to seem compelling, our heuristics for detecting truth have crossed from unreliable to flat-out dangerous.</p>

<p>The lesson I take from Chiang isn’t about beauty or rhetoric specifically. At a deeper level, it’s about the importance of maintaining agency over our own perceptions. As AI makes it easier than ever to manipulate how we think and feel, the ability to choose what influences us - and how - may become one of our most precious freedoms.</p>

<p>The technology to enhance rhetoric already exists. The question is whether we’ll develop the wisdom to sometimes turn it off.</p>

<p>Do you like what I’m saying, or just how I’m saying it?</p>]]></content><author><name></name></author><summary type="html"><![CDATA[When must we lobotomize ourselves?]]></summary></entry><entry><title type="html">Postgres performance: how poor statistics affect join strategies</title><link href="http://www.knowingken.com/postgres-part-1" rel="alternate" type="text/html" title="Postgres performance: how poor statistics affect join strategies" /><published>2024-01-11T00:00:00+00:00</published><updated>2024-01-11T00:00:00+00:00</updated><id>http://www.knowingken.com/postgres-part-1</id><content type="html" xml:base="http://www.knowingken.com/postgres-part-1"><![CDATA[<h1 id="postgres-performance---a-deep-dive-on-how-poor-statistics-affect-join-strategies">Postgres performance - a deep-dive on how poor statistics affect join strategies.</h1>

<p><strong>Summary:</strong></p>
<ul>
  <li>The Postgres query planner can produce query plans with much longer runtimes than optimal due to poor choice of join methods.</li>
  <li>One reason the planner chooses non-optimal join methods is that it underestimates the number of rows produced by previous joins, an error that compounds in more complex queries with multiple joins.</li>
</ul>

<p>The Postgres query planner is pretty neat. For a given SQL query, there can be multiple plans that return the same results, but not all plans are created equal. The planner analyzes information related to the datasets and joins in the query, and tries to find the optimal (fastest) way to execute the query.</p>

<p>Like most underlying tools, when the planner does its job well, nobody notices it. It’s when the planner fails horrifically that it draws attention, and if you’re like me - a relative newcomer to query optimization - you might not even know that the query planner exists, much less the effect it has on your query latencies. This post attempts to explain how a gap in the query planner could lead to slower performance by several orders of magnitude, as chronicled from the perspective of a product manager turned ad-hoc-database-administrator-for-a-day.</p>

<h2 id="choosing-the-right-query-plan-has-significant-effects-on-query-performance">Choosing the right query plan has significant effects on query performance.</h2>
<p>As noted earlier, a key function of the query planner is to determine the fastest way to execute a given query. One major variable between execution plans is the join methods used to combine data from various tables. Each join method carries a set of trade-offs, and depending on the characteristics of the underlying tables, performance between join methods can vary by multiple orders of magnitude.</p>

<p>A refresher on join terminology: a join is a process that merges data from two tables, referenced as the left and right(-side) table for clarity. The tables are joined using a shared column or set of columns called join attributes. For example, if you have a table with product information and a table with order information, you might join the two on product ID, which would be the join attribute.</p>

<p>The three join methods that Postgres uses are:</p>
<ul>
  <li>Nested loop join: As the name suggests, for each row in the left table, we loop over every row in the right table. This has the advantage of low overhead - no sorting or other preparation is needed - but processing costs increase dramatically as the table sizes scale. In a naive implementation, if we have M rows in the left table and N rows in the right table, we’ll need to scan M x N rows in total.</li>
  <li>Merge join: Both tables are sorted on the join attribute and then scanned from the top down. This way, we only have to run through each row once on each side - scanning a total of M + N rows. The trade-off is the computation involved in the sort process - without diving into the time and space complexities of sorting algorithms, we can assume that this adds some overhead.</li>
  <li>Hash join: The right table is loaded into a hash table, a structure that makes it very fast to look up rows by their join attribute (which becomes the hash key). Then we run through each row of the left table and look up the matching right-side rows in the hash table. The trade-off here is the overhead of building and storing the hash table.</li>
</ul>
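<p>The M x N vs. M + N distinction can be made concrete with a toy sketch of the nested loop and hash joins over lists of (key, value) rows. This is purely illustrative - it’s not how Postgres implements these joins internally:</p>

```python
def nested_loop_join(left, right):
    # For each left row, scan every right row: M x N comparisons.
    return [(l, r) for l in left for r in right if l[0] == r[0]]

def hash_join(left, right):
    # Build a hash table over the right side, then probe it once per
    # left row: roughly M + N work, at the cost of building the table.
    table = {}
    for r in right:
        table.setdefault(r[0], []).append(r)
    return [(l, r) for l in left for r in table.get(l[0], [])]

# Joining products to orders on product ID (the join attribute):
products = [(1, "widget"), (2, "gadget")]
orders = [(1, "order-a"), (1, "order-b"), (2, "order-c")]
assert nested_loop_join(products, orders) == hash_join(products, orders)
```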

<p>A thorough exploration of all the trade-offs involved in choosing the optimal join methods is outside the scope of this post (and my expertise). Suffice it to say that - and this is a highly simplified take missing a lot of nuance - nested loop joins are better if both tables are small, merge joins are better for tables that are already sorted, and hash joins are better for large tables. (TODO: Placeholder for deep-dive on join strategy trade-offs.)</p>

<h2 id="the-query-planner-uses-statistics-including-row-estimates-to-inform-crucial-query-planning-decisions">The query planner uses statistics, including row estimates, to inform crucial query planning decisions.</h2>
<p>So, how does the planner choose which join method to use? Let’s assume a join selection algorithm that runs as follows, leveraging the comparative advantages of each join method (again, highly simplified):</p>
<ol>
  <li>If both the left and right tables are already sorted, use a merge join.</li>
  <li>Else, if the total number of rows M x N is larger than 100, use a hash join.</li>
  <li>Else, use a nested loop join.</li>
</ol>

<p>This means that some pieces of information are crucial for the planner to make a well-informed join method choice:</p>
<ul>
  <li>The sort status of both tables</li>
  <li>The size of both tables</li>
</ul>

<p>As we’ll come to see, inaccurate information at this stage can have a significant effect on performance. For example, if the query planner believes that the input tables each only have 1 row, then the product of rows is just 1 - an ideal case for the nested loop join. If instead both input tables have 1,000 rows, the nested loop will need to process a total of 1,000,000 rows. Such a mismatch would lead the query planner to wildly underestimate the computation costs involved and select an inefficient nested loop over a hash join.</p>
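<p>As a sketch, that simplified selection algorithm might look like this (a caricature: the real planner compares estimated costs across plans rather than applying hard rules):</p>

```python
def choose_join(left_sorted, right_sorted, m_rows, n_rows):
    # Toy version of the simplified selection rules above.
    if left_sorted and right_sorted:
        return "merge join"
    if m_rows * n_rows > 100:
        return "hash join"
    return "nested loop join"

# With a bad 1-row estimate per side, the planner picks the cheap loop...
assert choose_join(False, False, 1, 1) == "nested loop join"
# ...but if each side actually holds 1,000 rows, that loop must scan
# 1,000,000 row pairs, where a hash join would have been the better pick.
assert choose_join(False, False, 1000, 1000) == "hash join"
```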

<h2 id="postgres-assumes-columns-are-uncorrelated-when-calculating-row-estimates">Postgres assumes columns are uncorrelated when calculating row estimates.</h2>
<p>How could such large misestimates arise? One culprit is correlation. By default, the planner assumes that column values are not correlated. This can lead to incorrect row estimates if, for example, columns used in a filter are correlated.</p>

<p>For example, take a table that contains the hair color and state of residence of everyone living in the U.S. To make this concrete, let’s say the total population is 300 million, of which 150 million have brown hair and 20 million live in New York.</p>

<p>Postgres collects basic table and column statistics when you analyze the table. This information includes the number of rows in the table (cardinality), as well as statistics around the values that appear in each column, including the relative incidence of the most common values (selectivity). In this example, the cardinality of the table is 300 million, the selectivity of brown hair is 150m/300m = 50%, and the selectivity of New York is 20m/300m = 6.67%.</p>

<p>Let’s say you want to find the number of brown-haired New Yorkers. If these values are uncorrelated, as the query planner assumes, then you can get a pretty good estimate by multiplying the cardinality (300m - the total number of records) by the selectivity of each factor (50% for brown hair and 6.67% for New York). This gives you roughly 10 million, and unless you have special information about New York being especially attractive (or repulsive!) to brunettes, this seems like a reasonable guess.</p>
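<p>The arithmetic, spelled out with the same numbers:</p>

```python
# Uncorrelated case: estimated rows = cardinality x product of selectivities.
cardinality = 300_000_000
sel_brown = 150_000_000 / cardinality  # 50%
sel_ny = 20_000_000 / cardinality      # ~6.67%

estimate = cardinality * sel_brown * sel_ny
assert round(estimate) == 10_000_000   # ~10 million brown-haired New Yorkers
```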

<h2 id="correlated-columns-can-lead-to-wildly-inaccurate-row-estimates">Correlated columns can lead to wildly inaccurate row estimates.</h2>
<p>That’s all well and good when the variables are uncorrelated, but what if they’re not?</p>

<p>For example, let’s say that the table also contains the zip code of each person. Assume that 25,000 people live in zip code 10001, which (known to us, but unbeknownst to Postgres) is located entirely within the state of New York. By definition, the number of New Yorkers in zip code 10001 is the same as the number of residents in zip code 10001 - that is, 25,000.</p>

<p>However, by default, Postgres doesn’t know that these two attributes are correlated. Using the same naive methodology as before, the planner will multiply the cardinality of the table (300m) by the selectivity of each factor (6.67% for New York, 0.0083% for zip code 10001) and estimate that only ~1,670 people meet both conditions. This underestimates the actual number by a factor of 15.</p>
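<p>Running the same naive multiplication on the correlated columns shows the gap:</p>

```python
# Correlated case: zip 10001 lies entirely within New York, but the
# planner multiplies the selectivities as if they were independent.
cardinality = 300_000_000
sel_ny = 20_000_000 / cardinality   # ~6.67%
sel_zip = 25_000 / cardinality      # ~0.0083%

planner_estimate = cardinality * sel_ny * sel_zip
actual = 25_000  # every resident of 10001 is a New Yorker

assert round(planner_estimate) == 1667          # the naive estimate
assert round(actual / planner_estimate) == 15   # off by a factor of ~15
```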

<h2 id="bad-row-estimates-lead-to-bad-query-plans">Bad row estimates lead to bad query plans.</h2>
<p>Imagine if we added further conditions on similarly correlated attributes such as city of residence or area code - these correlation errors would compound and lead to even larger misestimates. Given what we know about how critical accurate row estimates are to inform query planning and join strategies, it’s evident that the planner’s underestimations could have significant effects on query performance.</p>

<p>So we have a bit of a problem. In the following post, I’ll discuss how Postgres uses extended statistics to deal with correlation within a table, the limitations on correlations between tables, and other strategies for generating more accurate estimates and better query plans when faced with this situation.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Postgres performance - a deep-dive on how poor statistics affect join strategies.]]></summary></entry><entry><title type="html">Building a payments product: introduction</title><link href="http://www.knowingken.com/building-payments-product-intro" rel="alternate" type="text/html" title="Building a payments product: introduction" /><published>2024-01-07T00:00:00+00:00</published><updated>2024-01-07T00:00:00+00:00</updated><id>http://www.knowingken.com/building-payments-product-intro</id><content type="html" xml:base="http://www.knowingken.com/building-payments-product-intro"><![CDATA[<h1 id="building-a-payments-product-introduction">Building a Payments Product: Introduction</h1>

<p>Payments are just moving money from one place to another, right? How hard can it be?</p>

<p>This is the introduction for a series covering the fundamentals of standing up a new payments processing product.</p>

<p>I spent a few years as a product manager at a large bank. One of my major projects was launching a new payments product which involved integrating a third party digital wallet provider as a payment partner. The goal was to enable the bank’s corporate clients to send money to their customers’ digital wallets without each corporate client having to integrate with the digital wallet provider itself. These integrations incur a large start-up cost as well as ongoing administrative overhead - think development costs, contract negotiations, funding arrangements, monitoring, etc. - so a corporate client gets significant value out of being integrated with a relatively small number of banks through which they can reach a relatively large number of payment endpoints.</p>

<h2 id="some-basic-terminology">Some basic terminology</h2>

<p>If corporations paying their customers (or in payments lingo, a B2C or business-to-consumer payment, crucially distinct from a C2B payment flowing in the other direction) is not intuitive, join the club. You, like most consumers, almost certainly act as a payor (a payment sender) several times as frequently as you act as a payee (a payment recipient). A few common examples of a B2C payment flow are:</p>
<ul>
  <li>Payroll - whether you’re an airline pilot or a rideshare driver, your employer needs to be able to pay you for work done.</li>
  <li>Refunds - if you return a purchase to a merchant, they need a way to refund your money.</li>
  <li>Insurance claims - if your apartment floods, your insurance provider needs a way to pay out on your insurance claim.</li>
</ul>

<p>This post is written from the perspective of a bank or payment provider. “Payment” generically refers to a cash flow between two parties in any direction. I’ll use the terms outbound payment, disbursement, or payout interchangeably to mean a cash flow from the bank/client to the end customer, while inbound payment, receipt, or pay-in will mean a cash flow from the end customer to the bank/client. It’s important to segment based on the direction of the flow because different flows have varying needs and priorities.</p>

<h2 id="first-understand-your-customers-and-market">First, understand your customers and market.</h2>

<p>Today, there are a number of digital wallet providers in the US. This is a relatively recent development - a few years ago, there were really only two dominant players in the space. To better guide our decision-making around partner selection, it became important to understand what motivates the primary buyers of payment products - usually corporate treasurers or those in a similar role. One of their responsibilities is to find ways to make payouts to customers, with major considerations being (roughly in order of priority): reach, cost, customer experience, and speed.</p>

<p>If you are building a payments product, you will almost certainly need to integrate with a partner or network at some stage, and understanding your customers’ needs helps provide a framework for evaluating potential partners. (You can disregard the integration-related portions if you are reading this in the year 3000 and the global financial system is run through an all-encompassing hive mind.)</p>

<h3 id="they-want-to-pay-everyone">They want to pay everyone.</h3>

<p>The value of reach emerges from a few base considerations: you want to be able to pay all your customers and offer choice and flexibility, but you don’t want to implement dozens of payment methods. Even though integrating through a central service provider such as a bank means you can pick up an incremental payment method at a significantly lower cost than a DIY integration, there is still cost involved.</p>

<p>You might guess that reach is a simple matter of market share - the more customers, the better reach, but there are some complications. For various reasons, businesses can be much more concerned with ensuring each and every customer has at least one payment option vs. providing many options that overlap. In other words, a business may prefer a payment method that only reaches 2% of its customer base if those customers would otherwise be unserved over a method that has 20% overall consumer penetration that overlaps entirely with populations that are already served. These businesses may be willing to cough up transaction costs that are loss-making at the transaction level, customer level, and possibly even at the customer segment level based on strategic, reputational, or regulatory concerns.</p>

<p>Takeaway: niche payment methods with high costs and low overall adoption might be viable products if they provide access to underserved populations.</p>
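<p>The incremental-reach logic above can be sketched in a few lines of Python. This is a hypothetical illustration - the customer IDs, coverage sets, and penetration numbers are invented to mirror the 2%-vs-20% example, not drawn from any real provider data:</p>

```python
# Hypothetical sketch: scoring a payment method by incremental reach
# (otherwise-unserved customers) rather than raw penetration.
# All customer IDs and coverage sets below are invented.

def incremental_reach(candidate: set, existing_methods: list) -> int:
    """Count customers the candidate reaches that no existing method does."""
    already_served = set().union(*existing_methods) if existing_methods else set()
    return len(candidate - already_served)

# A 100-customer base; two existing methods already cover customers 0-79.
existing = [set(range(0, 60)), set(range(40, 80))]

# Method A: 20% penetration, but entirely overlapping the served population.
method_a = set(range(0, 20))
# Method B: only 2% penetration, but it reaches part of the unserved tail.
method_b = {80, 81}

print(incremental_reach(method_a, existing))  # 0 new customers served
print(incremental_reach(method_b, existing))  # 2 new customers served
```

<p>Despite its far lower overall penetration, method B is the one that expands coverage.</p>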

<h3 id="they-dont-want-to-spend-a-lot-of-money-to-pay-people">They don’t want to spend a lot of money to pay people.</h3>

<p>A second priority is costs. I like to think of these in two parts: bank costs and network costs. Bank costs are the typical costs associated with offering a product - technology, implementation, sales, support, etc. Network costs are charged by the organization which runs the payment rail. In the mainstream US banking system, this is either The Clearing House (TCH) or the Federal Reserve. Traditionally, payments between large financial institutions leverage the TCH network because large banks are more likely to be members of the TCH network, but ultimately, both TCH and the Fed’s services are nearly identical, and banking products abstract over any remaining differences before they reach a client. Payment products on traditional rails such as checks, ACH, and domestic wires are largely commoditized.</p>

<p>Costs become more interesting and differentiated when examining more esoteric or emerging rails, such as the card networks (with very different implications depending on credit vs. debit), real-time gross settlement systems, digital wallets, and cryptocurrency / distributed ledger money. Without getting too side-tracked, the economics of international wires can also be highly variable as a result of the fascinating complexities of correspondent banking arrangements.</p>

<p>Takeaway: costs, while a priority, are unlikely to be the differentiating factor between competing products on the same payment rail but become more significant when comparing payment rails.</p>

<h3 id="they-want-their-customers-to-be-happy">They want their customers to be happy.</h3>

<p>Third is the customer experience. The overwhelming majority of end users neither know nor care about payments systems and technology. They will not appreciate the nuances of settlement, reconciliation, tracking, or dispute resolution processes across the countless stakeholders which might touch a payment. From their perspective, if there is a problem with the payment, there’s a problem with the company they’re dealing with.</p>

<p>Payment providers and systems can vary significantly across these areas. Traditional rails are highly regulated and therefore tend to have consistent, if bureaucratic, support. Newer rails tend towards a more self-service approach. Large payment providers invest significantly in building experienced operations and support departments to abstract away most of these differences from the customer’s perspective, but they are still ultimately constrained by the rules of the rails. (This assumes you’re a high-value enterprise customer; the consumer and small business experience can be much more frustrating.)</p>

<p>Takeaway: a bad payments experience, no matter how loosely related to anything a client has control over, reflects poorly on the client nonetheless, while good payments experiences can boost customer loyalty and satisfaction; businesses are willing to pay for movement on the spectrum away from the former towards the latter.</p>

<h3 id="they-have-a-need-for-speed">They have a need for speed.</h3>

<p>Last, but not least, is speed. Particularly important in disbursements is the fact that customers want their money, and they want it now. The importance that a client places on speed is dependent on the use cases and customer segments they deal with. Some use cases, such as insurance claims or gig economy payouts, naturally carry a higher sense of urgency due to the end customer’s needs. A casual gig worker who is working a shift to buy something today needs their money today; a salaried professional with a financial buffer places a lower value on faster settlement. Furthermore, customer expectations are shifting over time, and as other parts of their lives become faster-paced, they naturally expect faster payments as well. Speed is deliberately broken out from other facets of the customer experience as it has become a major differentiating factor in the way payment products are positioned and marketed, with a significant gap between delayed settlement systems and faster payments schemes.</p>

<p>Takeaway: time is money, and people will spend money so that they can get their money sooner.</p>

<h3 id="you-should-abstract-away-secondary-concerns-from-your-customer">You should abstract away secondary concerns from your customer.</h3>

<p>You might notice that accuracy, reliability, and risk (fraud, regulatory, etc.) are not on this list. Those are, with a few exceptions, pretty standard across most disbursement products and don’t serve as a differentiator. There are occasions where these deserve to be considered more thoroughly when positioning a new payments product - most prominently among less established and emerging payment rails - but most payment providers have done a relatively good job of abstracting these away from the client experience. Of course, these will be relevant if you’re designing a payments product, but that is a topic for another time.</p>

<h2 id="integrations-are-a-big-investment-so-keep-the-long-term-in-mind">Integrations are a big investment, so keep the long term in mind.</h2>

<p>After evaluating the two major competitors in the digital wallet space, you might find that they are closely matched in terms of costs, customer experience, and speed. Reach becomes the dominating differentiator. As noted earlier, while useful, the total number of people reached by each wallet provider is not necessarily the most important factor in this analysis. If one provider reaches 50% of the US, but your clients’ customers aren’t in that 50%, then their reach is useless for your needs. Based on analysis of major clients who had expressed interest in a digital wallet solution, including research and interviews to determine their focus customer segments, one provider had significantly better penetration for the specific product use cases and positioning we were targeting.</p>

<p>There are also considerations related to the long-term strategy for the product, such as international opportunities or lateral expansion leveraging the relationship. Selecting a payment rail or partner is often only the first step in a months- or years-long process to launch a product, and it’s important to maintain a long-term view of the integration and partnership.</p>

<p>Now that we’ve built a basic understanding of the payments industry and have chosen a partner, it’s time to explore product-specific decisions. While this post focused more on the business side, the next section will delve into the various technical decisions and tradeoffs that are instrumental in bringing a payments product from idea to reality.</p>

<p>The overwhelming costs of complete eradication.</p>

<h2 id="there-is-a-difference-between-optimal-and-ideal">There is a difference between optimal and ideal.</h2>

<p>Patrick McKenzie writes that “<a href="https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/">the optimal amount of fraud is greater than zero</a>”.
This is interesting because fraud is widely considered a Bad Thing.
Our intuition may be that the optimal amount of Bad Things is zero.
After all, things that are Bad have negative effects.
Wouldn’t we all be better off with fewer Bad Things?</p>

<p>In a vacuum, I think the answer is yes.
And to put it slightly differently - the <em>ideal</em> amount of Bad Things is zero.
Unfortunately, few of us have the privilege of living in a vacuum.
The rest of us may look up to ideals but end up having to make decisions based on what is <em>optimal</em> - that is, the ideal state of things given a certain context and set of constraints.</p>

<h2 id="when-should-an-optimal-amount-of-a-bad-thing-not-be-zero">When should an optimal amount of a Bad Thing not be zero?</h2>
<p>This rests on the following assumptions:</p>
<ol>
  <li>Resources are limited.</li>
  <li>Changing the amount of a Thing costs resources.</li>
  <li>The naturally occurring amount of a Thing is non-zero.</li>
  <li>The further away from its natural amount, the more resources it takes to change the amount of a Thing. This is most common when interventions cannot be perfectly targeted and instead are implemented through a stochastic process.</li>
  <li>Without consistent pressure, the amount of a Thing tends to revert to its natural amount.</li>
</ol>

<h2 id="a-non-zero-number-of-examples">A non-zero number of examples.</h2>
<p>There are <a href="/tradeoffs">tradeoffs in everything</a>.
I think about this a lot, especially when people tell me that I should have zero tolerance for something.</p>

<h3 id="the-optimal-amount-of-fraud-is-non-zero-revisited">The optimal amount of fraud is non-zero, revisited.</h3>

<p>Taking McKenzie’s example of card fraud and (1) as a given, the rest of the case becomes: (2) Reducing card fraud has direct costs, such as the time and money spent on detecting and preventing fraud, as well as indirect costs, such as lost revenue from customers frustrated by anti-fraud measures. (3) Fraud naturally occurs at a non-zero rate. (4) Anti-fraud measures have non-zero error rates. As a result, as the amount of fraud decreases, the relative amount of false positives increases. (5) When effective anti-fraud programs cease, the amount of fraud increases.</p>

<p>The only way to have zero fraud is to have no commerce.</p>
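<p>A toy model makes the interior optimum concrete. This is my own construction, not McKenzie’s - the constants and the shape of the prevention-cost curve are invented purely to satisfy assumptions (3) and (4): fraud losses scale with the residual rate, while prevention costs blow up as the rate approaches zero.</p>

```python
# Toy model (invented constants): total cost as a function of the
# residual fraud rate r. Losses scale with r; prevention cost grows
# without bound as r approaches zero (assumption 4).

NATURAL_RATE = 0.05   # fraud rate with no countermeasures (assumption 3)
LOSS_PER_UNIT = 1000  # dollars lost per unit of fraud rate

def prevention_cost(r: float) -> float:
    # Cheap to trim fraud near the natural rate, ruinous to approach zero.
    return 10.0 * (NATURAL_RATE / r - 1.0)

def total_cost(r: float) -> float:
    return LOSS_PER_UNIT * r + prevention_cost(r)

# Grid-search the rate that minimizes total cost.
rates = [i / 10000 for i in range(1, 501)]  # 0.0001 .. 0.05
best = min(rates, key=total_cost)
print(f"optimal residual fraud rate: {best:.4f}")  # strictly greater than zero
```

<p>The minimizer lands strictly between zero and the natural rate: spending to eliminate the last sliver of fraud costs more than the fraud itself.</p>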

<h3 id="the-optimal-amount-of-bugs-is-non-zero">The optimal amount of bugs is non-zero.</h3>
<p>I work as a product manager. Sometimes, I will get questions about why I let bugs get into production. <del>After I am done blaming my engineering team,</del> I end up explaining, in slightly different words, that the optimal amount of bugs is non-zero. (1) Given. (2) Researching and fixing bugs takes time and energy. (3) Almost all software is of sufficient complexity that a certain level of unexpected behavior is… expected. (4) The existing testing framework catches most major bugs and is itself optimized between accuracy, speed, and cost. Expanding it to catch the incredibly niche bug you found (and the other slightly different permutations of the bug depending on the software configuration) would mean no more feature delivery. Ever. (5) As the codebase changes, testing and QA procedures have to change in concert or they will become less useful at catching bugs.</p>

<p>The only way to have zero bugs is to have no software.</p>

<h3 id="the-optimal-amount-of-personal-injury-is-non-zero">The optimal amount of personal injury is non-zero.</h3>

<p>A few days ago, I stubbed my toe. I was not very happy about it at the time, but on further reflection, I think the occasional injury is a natural consequence of this optimization paradigm. (1) Given. (2) Avoiding injuries incurs a cost in terms of activities I must avoid. (3) I am not superhuman enough to naturally and effortlessly avoid all injuries. (4) I don’t know beforehand what activities will lead to an injury. Statistics invariably show that you can get injured doing the most mundane activities, so the more minor the injury I want to avoid, the greater the cost in terms of activities I cannot partake in. (5) Every moment brings new opportunities for injuries if I am not carefully trying to avoid them.</p>

<p>The only way to have zero injuries is to have no life.</p>

<h2 id="this-is-not-always-true-but-it-is-useful">This is not always true, but it is useful.</h2>
<p>When might this not hold?
I think assumptions 1 and 2 are widely accepted and minimally controversial. I am less confident that 3-5 are universally applicable across all Bad Things - specifically, I am suspicious of the reliance on a “natural” amount of a Thing. But I cannot think of a good counter-example at the moment.</p>

<p>In other words, this model may not be perfect. Please get in touch if you have ideas on how to better construct the principles, see a major (or minor) gap, have a counter-example, or want to discuss in general.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The optimal amount of most Bad Things is non-zero.]]></summary></entry><entry><title type="html">Write more, even if it’s bad</title><link href="http://www.knowingken.com/write-bad" rel="alternate" type="text/html" title="Write more, even if it’s bad" /><published>2023-09-12T00:00:00+00:00</published><updated>2023-09-12T00:00:00+00:00</updated><id>http://www.knowingken.com/write-bad</id><content type="html" xml:base="http://www.knowingken.com/write-bad"><![CDATA[<h1 id="write-more-even-if-its-bad">Write more, even if it’s bad.</h1>

<p>At least you get the gist of it.</p>

<h2 id="dont-let-writing-be-blocked-by-perfectionism">Don’t let writing be blocked by perfectionism.</h2>
<p>Hi, my name is Ken, and I’m a perfectionist.
I first tried perfectionism as a kid, and since then, I’ve always looked for ways to make things more perfect.
(Don’t ask me how many times I rewrote that sentence.)
Perfectionism leads to the worst time of writer’s block for me.</p>

<p>How could I possibly start an essay without having a perfect outline in mind, one that weaves so effortlessly through all of my key points at such a level of detail that it may as well be the entire writing itself?
Even with an outline in mind, it seems that whenever I start writing, all the thoughts that seemed so effortlessly self-proving come out in a jumbled slurry.
It is horrifying at times.
And it becomes less painful to not write, to not even open the door to that chance.</p>

<h2 id="reject-perfection-embrace-vulnerability">Reject perfection, embrace vulnerability.</h2>
<p>To add yet another block, writing for an unknown audience is scary.
It’s like throwing thoughts into the void, except the void is an 800 pound gorilla who hates being pelted with imperfect thoughts and has an ample supply of bricks to return to you.
Also the gorilla is responsible for your job and wealth and friends and reputation and has become such a tortured metaphor that he is especially enraged and sensitive to any stray imperfections.</p>

<p>Is that catastrophizing? Maybe.
Regardless, the gorilla always hangs out at the periphery of my vision when I’m writing.
He tells me not to publish anything that has a single word out of place, a single sentence unedited.</p>

<p>Rejecting the gorilla is hard, and I don’t really have great advice on how to do it.
I’ve found that I write more freely at night, when I’m a little tired and there are fewer inhibitions on the words that hurtle out of my mind.
You might want to fight the gorilla of perfection with another animal of choice.
Perhaps a giraffe of social commitment that is empowered when you tell your friends that you’ve started blogging, and the shame of having a blog with only two pages overwhelms the gorilla.
Or the lion of delusional optimism, wherein you become so confident about your inevitable success that the gorilla fades away into the shadows.
Whatever it is, find something to push you forwards that overpowers the friction and inertia that make it so easy to stay still.</p>

<p>(Looking back, I’m going to call night-time writing the rhino of reduced inhibitions. I don’t know why this swerved so completely into random and increasingly unnecessary animal metaphors, but I can’t not commit to the bit at this stage. I also need to shoe-horn in a monkey somewhere, then I can title this section “embrace monke”.)</p>

<h2 id="even-bad-writing-is-good">Even bad writing is good.</h2>
<p>I’ve written lots of things that I wouldn’t say I’m proud of.
Not because they are actively awful, but because I get this nagging feeling that they could be better in ways I can’t quite identify.
I choose to publish them anyways.
Only through practice can you get better at something.
I’m trying to get better at writing and at sharing things that are still in draft and not yet perfect.
Even throughout my rambling and meandering, as long as you roughly understand what I’m going for, that’s a win.</p>

<p>Epistemic status: self-evidencing.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Write more, even if it’s bad.]]></summary></entry><entry><title type="html">Everything is Tradeoffs</title><link href="http://www.knowingken.com/tradeoffs" rel="alternate" type="text/html" title="Everything is Tradeoffs" /><published>2023-09-04T00:00:00+00:00</published><updated>2023-09-04T00:00:00+00:00</updated><id>http://www.knowingken.com/tradeoffs</id><content type="html" xml:base="http://www.knowingken.com/tradeoffs"><![CDATA[<h1 id="everything-is-tradeoffs">Everything is Tradeoffs</h1>

<p>Decision-making can be hard. Even seemingly innocuous choices come with a barrage of tradeoffs.
At a fundamental level, every choice you make has an opportunity cost: all of the other choices you could have made that are incompatible with your current choice.
Even the time and energy spent making a decision is a cost.</p>

<h2 id="yes-everything">Yes, everything.</h2>

<p>Why might this be non-obvious, particularly when applied across all the choices we make in a day?
Many times, the problem is relatively inconsequential, so picking a random point reasonably near the Pareto frontier is fine, and you can quickly discard most options because they are strictly worse.
With more important decisions, it’s possible to quickly narrow down the relevant tradeoffs by using heuristics such as values and principles (assuming your <a href="/values">values are useful</a>).
Only a very small subset of decisions make it to the stage where we consciously consider the tradeoffs involved.</p>
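<p>The “discard options because they are strictly worse” step is just Pareto filtering. Here’s a minimal sketch - the options and their two scoring dimensions are made up for illustration:</p>

```python
# Hypothetical sketch: dropping options that are Pareto-dominated, i.e.
# strictly worse than some other option on every dimension that matters.
# Options and scores are invented; higher is better on both axes.

def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(options):
    return [o for o in options if not any(dominates(other, o) for other in options)]

# (price_score, quality_score) pairs.
options = [(3, 9), (5, 5), (9, 2), (2, 2), (4, 4)]
print(pareto_front(options))  # [(3, 9), (5, 5), (9, 2)]
```

<p>The dominated options (2, 2) and (4, 4) drop out immediately; only the genuine tradeoffs along the frontier remain to be weighed.</p>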

<h2 id="awareness-of-tradeoffs-is-a-superpower">Awareness of tradeoffs is a superpower.</h2>

<p>I’m not suggesting that we need to carefully evaluate all of the decisions we make.
Some, like the order in which you put on clothes, have battle-worn heuristics. For example, putting on underwear before pants is pretty infallible.
But awareness of the underlying reality - that tradeoffs <em>are</em> involved in every decision - can help you recognize the situations in which the general heuristics may not align with your goals.
If you’re a superhero or otherwise rebelling against the societal tyranny of clothing order, then you might make a different decision - trading off comfort for whatever reason it is that Superman wears underwear on the outside. (I googled it so you don’t have to: <a href="https://screenrant.com/why-superman-underwear-outside-costume-explained/">Underpants on tights were signifiers of extra-masculine strength and endurance in 1938.</a>)</p>

<h2 id="be-suspicious-of-things-without-tradeoffs">Be suspicious of things without tradeoffs.</h2>

<p>This is also a reminder that there’s no such thing as a free lunch.
If something seems too good to be true, it probably is - and it’s better to know why upfront than to be blindsided by an unexpected tradeoff later on.
If a proposal only has positives, find the negatives before they find you.
Look out for tradeoffs everywhere, because everything is tradeoffs.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Everything is Tradeoffs]]></summary></entry><entry><title type="html">Make your values less useless</title><link href="http://www.knowingken.com/values" rel="alternate" type="text/html" title="Make your values less useless" /><published>2023-08-27T00:00:00+00:00</published><updated>2023-08-27T00:00:00+00:00</updated><id>http://www.knowingken.com/values</id><content type="html" xml:base="http://www.knowingken.com/values"><![CDATA[<h1 id="make-your-values-less-useless">Make your values less useless.</h1>

<p>If everything is valued, nothing is. It doesn’t have to be this way.</p>

<h2 id="most-values-are-useless">Most values are useless.</h2>

<p>I bet you’ve seen company values that seem to be laundry lists of every generic, positive-adjacent adjective that the drafter could imagine. Integrity. Kindness. Generosity. Boldness. Accountability. Fun. Customer-centricity. Learning. Innovation. Diversity. Teamwork. Passion.</p>

<p>As pleasant as they may sound, I don’t find these value lists to be particularly compelling.
It’s hard to put a finger on why, but I think it’s because they don’t strike me as being useful or applicable to the day-to-day work that goes on in the company.</p>

<p>For example - what does it mean to have integrity as a value?
To prefer telling the truth over lying, all else equal? Is that really a conflict that comes up frequently and intensely enough to merit an explicit reminder?
What happens when telling the truth conflicts with another value, such as kindness or boldness?</p>

<h2 id="values-dont-have-to-be-useless">Values don’t have to be useless.</h2>

<p>This is all fine if you accept that corporate values need to be nothing more than cliches.
Maybe not all companies need to be trailblazers.
Maybe it’s fine to slot in placeholder values with minimal thought and focus on other things.
Or maybe values are just observations - not intended to direct, but to describe.</p>

<p>I think well-written and useful values can be something more: a signal at the essence of what differentiates a company and its culture.
At their best, values are a prominent north star to coordinate groups and guide decision-making at scale, cutting through the natural overhead of large organizations.
If you believe in the power of values to represent a shared dream - to unite and inspire - then tepid and generic values are a tragedy.</p>

<h2 id="useful-values-are-lean">Useful values are lean.</h2>

<p>People have bounded attention spans and memories.
If you have a long list of values, you might as well have no values, because nobody is going to remember them anyways.</p>

<p>Even if you do manage to remember all these values, the more you have, the more likely they are to conflict.
When you have conflicting values, you get bogged down in debates over which values take precedence for particular decisions.
These debates slow decision-making and lead to internal strife unless everybody involved is well-aligned on which values come first.
(One way to relax this limit is to stack-rank your values, which solves for conflicts, but not for memorability.)
You can fall back to personal judgment, but each time you do so, the importance of the values erodes.</p>

<p>Like many tools, values become significantly less useful if you forget them or can’t figure out which one to use.
Keep your list of values short, memorable, and differentiating.</p>

<h2 id="useful-values-force-trade-offs">Useful values force trade-offs.</h2>

<p>Useful values can be turned into principles that repeatedly inform decision-making.
Take the <a href="https://agilemanifesto.org/">Agile Manifesto</a>:</p>
<blockquote>
  <p>Individuals and interactions <em>over</em> processes and tools. <br />
Working software <em>over</em> comprehensive documentation. <br />
Customer collaboration <em>over</em> contract negotiation. <br />
Responding to change <em>over</em> following a plan.</p>
</blockquote>

<p>Each value is worded in a way that unambiguously describes a trade-off.
There are many ways you can do this, but I like to structure each of my values as “Prefer X over Y”.
Crucially, both X and Y must be reasonably desirable in order for the value to be useful.
In other words, if the value could be reversed (i.e., “Prefer Y over X”) and still present an acceptable approach, then it is actually meaningful.
Values have to inform hard choices, and they can’t do so if the written paragon is a weak-willed statement like “prefer honesty and respect over treachery and genocide”.</p>

<h2 id="useful-values-reflect-reality">Useful values reflect reality.</h2>

<p>Values are vulnerable to becoming divorced from the realities of a person’s or company’s actions.
Even if the values you write down are the same ones in your head on day 1, time and experience rarely leave them unscathed.
Every choice is a referendum on their continued importance, and unless you keep them in the forefront of your decision-making processes, they’re likely to dim over time.
Once values stop being present in decision-making, they become afterthoughts.
From there, it’s a short trip to irrelevance.</p>

<p>Similarly, values need to be revisited and refreshed as an entity changes or environmental conditions shift.
A company that has grown from one person to ten thousand while keeping the same set of values is almost certainly deluding itself, either about the appropriateness of its values when it first adopted them, their current applicability, or continuity between the two.</p>

<p>There may appear to be a tension between reinforcement and adaptability.</p>
<p>Reinforcement doesn’t mean that a value needs to remain static.
In fact, the visible stress that occurs when a vestigial value is prioritized in a changing environment is a key indicator that the value should be revisited.
This stress would never surface if the value wasn’t constantly being evoked.
As such, value reinforcement and evolution are inherently intertwined, and a useful value system cannot exist without both.</p>

<h2 id="make-your-values-more-useful">Make your values more useful.</h2>

<p>Almost all lists of company values are superficial.
They’re full of platitudes that cannot be used to inform real-world choices because they’re too numerous, all of nominally equal importance, and rarely referenced.</p>

<p>To make your values useful, limit their number, structure them in a way that forces tradeoffs, and reinforce them by tightly integrating them into decision-making processes.
All of this applies to personal values, too.</p>

<p>Values don’t have to be empty, but you have to do a lot of work to ensure they’re not.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Make your values less useless.]]></summary></entry></feed>