Unintelligent Design

2022-8-15

I started writing the below when I saw a squirrel run up a tree, and was at the time thinking about Mito's goal of 5% growth weekly. I ended up here. Lots of unfinished work at the end.

Section 1: Introduction and Roasts

In which I argue that our current decision making tools are not designed to perform well in the environments most of us are operating in. I try and understand where exactly our decision making tools go wrong, as motivation for Section 2, where I propose a new set of strategies for decision making that lead to more effective operation.

Chapter 1: Squirrel Strategies

Ms. Squirrel as a ~fun~ example of how processes can create long-term planning with no explicit long-term thinking.

The Eastern Gray Squirrel

In warm months, the Eastern Gray Squirrel prepares for winter by stashing nuts and seeds across many hidey-holes. In colder months, she returns to each stash spot to retrieve them.

But she doesn’t dig up all the nuts and seeds she buries. In fact, she’ll forget about almost half of them.

Given that Ms. Squirrel does die of starvation, it’s natural to wonder why she would do such a bad job eating all the food in her pantry - remembering where you hide your food is an effective strategy for fighting starvation.

But the larders of the Eastern Gray Squirrel do not go to waste - they just aren't used that winter. These nuts and seeds grow into bushes and trees that produce nuts and seeds of their own. And so probably-dead squirrels become farmers of the forest.

These crops are harvested and stored by squirrels living in the same area generations later - maybe not the original farmer, but her children, or grandchildren, or later descendants.

Long-term planning without long-term thinking

These squirrels, obviously, aren’t doing any sort of explicit long-term planning when they forget their nuts and seeds. At least, we can confidently say Ms. Squirrel isn’t thinking “at least my 10x great grandchildren get to munch” as she starves to death.

But then, doesn’t the stupidest Eastern Gray Squirrel do a better job caring for future generations than all but the most ardent climate activists? Us big-brains know that deforestation is driving climate change, and yet how many of us, even with our increasingly-panicked calls to "think of the children," can claim to have planted a single tree?

My point here isn’t that squirrels are ecological geniuses (or that humans are evil and that we should copy Ms. Squirrel - I eat enough nuts and seeds as is). I’m trying to figure out how squirrels end up planning for the far future without doing any of the explicit long-term thinking that we humans seem so obsessed with.

Evolution built that squirrel

When looking to explain the curious behavior of the Eastern Gray Squirrel, we must turn to our tools for explaining much of animal behavior: an evolutionary toolkit.

So we ask: how did squirrels end up this way, where they appear to be doing some sort of accidental long-term planning?

The answer, it turns out, can be understood through the process that created Ms. Squirrel in the first place.

Persistence of the lineage

The simplest model of evolution is that "survival of the fittest organism" drives progress and persistence of a species over time. This is a good-enough approximation, but it is a bit misleading about which organism evolution is really concerned with.

Imagine a squirrel has a mutation that allows it to outcompete all other squirrels around it and so survive. That’s great for the squirrel. Now, imagine that that adaptation also causes all of the squirrel’s children to die (say, the mutation makes Ms. Squirrel particularly murderous of any and all Baby Squirrels). This is not so adaptive, and this mutation will die out.

In other words, the thing that must survive is not a single organism in a single generation, but a lineage of animals over time.

The organisms that win out in an evolutionary setting are those that lodge themselves as far into the future as possible. Doing so requires not just caring about yourself, but about your children, and children's children, etc. If all your children will die in 5 generations because you’re chopping down all the trees and not planting any more, then you best join the climate activists on the picket line.

And so, because the process that builds squirrels is concerned not with a single generation but the whole lineage, we get squirrels that end up doing a damn-good job of caring for their environment over the long term, without a single squirrel needing to think a single long-term thought.

Chapter 2: Introducing Uncertainty

I explain my experience with uncertainty by introducing you to Mito, my company. I clarify what type of uncertainty I’ll be rambling about here (so it’s less uncertain).

Lack of business fundamentals

The summer before my senior year of college, bad sleep and a class field trip to San Francisco convinced me I should create a company. I was going to build: a tool that sat on top of software like Photoshop and iMovie and made them collaborative. My girlfriend came up with the name: Saga - a generalized version control system.

Knowing little about company building, I spent the first two or so months sitting in SF coffee shops and trying to write some Python code to realize my vision. I didn’t have a specific target I was working towards. I was mostly trying to figure out if I could write any useful code (which wasn’t something I had done in school before). I remember feeling very lonely, and not making much progress.

I, unsurprisingly, knew very little about most of what I was attempting.

For one, I was not a good programmer. I had worked on exactly one medium-sized Python project before in a previous research role, which meant that I knew enough to shoot myself in the foot with it. I also spent a shocking amount of time fretting about Python’s performance limitations, and wondering if I had picked the wrong language to work with. My code was not good, and I wrote slowly.

For two, I didn’t understand what was required to build a product people actually used. I thought I would be able to write some Python code, and these users I was dreaming of would just bust down the door of the coffee shop I was squatting in at the time. My plan, although never explicitly stated or written down, was to make a thing, and then wait for people to use it.

Uncertainty can be lack of knowledge

Imagine you’re arriving in a new city, and the two-day-old salmon you ate before your plane flight isn’t sitting right, so you’re desperately looking for the shortest walk to the nearest bathroom. From your perspective, the most efficient path to the closest bathroom is uncertain.

This path isn’t uncertain because it doesn’t exist. There’s some specific path you could follow that would be the shortest: left at the water fountain, take ten steps, try not to shit yourself, and then walk through the doors on the right. With time, concentrated study, the help of Google, or just walking around the airport terminal, you’d eventually be able to figure this path out.

This is the first fundamental type of uncertainty that I dealt with at Mito: uncertainty due to lack of knowledge.

When I started a company, being a bad programmer and not knowing how to think about company building were one piece of the uncertainty that I was dealing with. If I had built a company before, or programmed much at all, I likely wouldn’t have run into these problems - or at least, I would have had a much easier time overcoming them.

Unexpected Pivots

After three months of SF coffee shops with nothing really to show for it, I realized I was ineffective alone: a non-working prototype, no users, and no clue where to go from there.

So, I brought on my two best friends from college, and we formed a company. Both of them were starting from the same point as me: knowing nothing about company building, programming, or really anything at all.

As such, our tiny company had no process for anything we did. We didn’t write down decisions we made (I can’t remember if we made decisions, to be honest). We spent the next 5 months or so applying to accelerators, developing the MVP (which was bad Python built on top of my terrible Python), and generally arguing about what was the missing thing to make our company successful.

After 5 further months of very little visible progress, two surprising events occurred:

  1. We were invited to interview at YCombinator, after applying for the second time.
  2. We focused our product on version control for Excel, after some advice from an advisor.

We, surprisingly, got into YCombinator, and spent the entirety of the batch building a shitty prototype of the Excel version control system. We had a few paying users by the end of the batch, but none of them ever used the product.

In the final days of our YC batch, we pivoted again. We had spent the past 3 months talking to spreadsheet users (we had learned that talking to users was good, by now), and so we naturally decided to build a tool that solved the large-data and repeatability problems we had observed with spreadsheets. We called this new tool: Mito - Edit a spreadsheet, Generate Python Code.

Uncertainty can be more than lack of knowledge

Now, you’re buying a house in San Francisco. You read up on the local housing market to make sure that you’re in a great neighborhood, meet with all of your neighbors, and do the due diligence necessary to be sure of your investment. Then, on the day you finally sign the papers and purchase the house, a mega-earthquake hits SF, and your house burns down.

This is the second type of uncertainty: uncertainty due to unpredictable events. You failed to account for this uncertainty in your due diligence.

Hold up - you say - you should have accounted for it! Everyone knows a mega-earthquake was coming to SF eventually. This isn’t unexpected at all! You should have just factored this into your evaluation...

But this earthquake isn’t meant as a specific example of an event to think of - rather as a general example of the things you will inevitably fail to foresee. If not an earthquake, or a fire, or a foreign power’s attack on America, or a riot over bread prices, or an alien arriving and abducting your neighborhood specifically, or a snorgleborgle bog, then it will be something else entirely.

Uncertainty from unpredictable events is the second type of uncertainty I encountered at Mito. Our first 8 months of experience helped us execute our initial vision for the Excel version control system better than the previous attempt at generalized version control. We might have been able to guess - or at least hope - that we’d get into YC. But we never thought we would end up building a spreadsheet tool, even though we were the ones who eventually made the decision to do so!

The realized outcome is not even an outcome we could have possibly imagined when we started.

Startups and uncertainty

With all three products I’ve spent time working on, I was wildly unknowledgeable about the domain of each of them. I decided to build a version control tool despite knowing at most 2 git commands, then decided to build an Excel tool having used Excel once, and then decided to build a data science tool despite never really having done real data science.

On one hand: this is a terrible way to try to build a business. Having deep experience in the problems you’re trying to solve really does help in building a product. On the other hand: it’s hard to construct a more uncertain environment to operate in.

For me, working at a startup has been very revealing in this respect.

Most of the problems I solved in school are a simple sort of puzzle with a simple sort of answer that you can get to formulaically: recognize the problem, search through other problems you’ve seen that are similar, and then adapt the solution to that problem.

Working in a startup is not like this.

Creatives didn’t seem to have a good tool for maintaining versions. Wasn’t there a joke about “movie_final_final_export_5.mp4”? This seemed like the problem to solve - and with a product no less. But I had no set of problems I’d seen before that looked anything like that.

Then, when no one used our product, I tried to figure out what the problem was again. Was our tool not useful? Did our potential users not experience the problem? Were we communicating ineffectively about what the product could do? I wasn’t sure of the problem, and I certainly didn’t have any problems to match it against.

Then, when users started downloading our product but churning almost immediately, I tried to figure out why. Was the onboarding process bad? Was our product offering bad? Were there crashes users were experiencing that we weren’t measuring?

In this startup, all the problems I’ve run into have uncertainty built into them. This book is the result of Mito forcing me to figure out how to make decisions effectively despite this uncertainty.

What this book is about

The heart of this book is an analysis of decision making under uncertainty, as well as a set of proposed tools for operating effectively in uncertain environments as a human being.

As such, these tools must deal differently with the different types of uncertainty one might encounter. The tools one needs for dealing with their lack of knowledge about the fastest way to the bathroom are not the same tools that one needs for dealing with the potential of alien abduction.

Chapter 3: Decision Making from Birth

In which I talk about our inborn and born-out strategies for decision making: where they come from, and why they might fall short.

Early decision making

The first decision I really remember making: after getting off the bus one Kindergarten afternoon, I decided to throw a rock at the short, decorative lamppost in my front yard. My dad was watching from the window, and so my birthday present that year was a piece of glass and a long conversation about respect, trust, and making good decisions.

From birth, we’re bombarded with tools for decision making. We start with simple tools - like The Golden Rule. Just do your best to understand that other people have feelings too, Nate! We then graduate to Pro/Con lists somewhere around third grade, which were exciting mostly because they felt so fair and balanced.

Then, in late high school, you start being asked seriously about your plans for the future. Not that “what do you want to be when you grow up” wasn’t a question we all got from a young age, but this is the first time I felt like I was being asked about my plans for a real reason - there were decisions to make about the future now: where to go to school, what to major in, what classes to take?

The choice seems almost hilarious now, but I wasn’t sure if I wanted to study mechanical engineering or computer science. I thought I’d prefer working with my hands. A single lab class taught me that I hate the hyper-precise physical world and the machines that build it, and my plans solidified once I fell in love with computers.

To the future

We humans aren't the only animals that create long term plans, but the variety, detail, and complexity present in our plans are unique. I plan for the homework I have to do in the upcoming days, I plan for how I’m going to avoid social engagements months away, and I plan for what city to have a shitty apartment in a few years from now. As a society, we try and plan for our children, and often even for social projects and works on the order of hundreds of years.

A “Plan” here means “a detailed proposal for doing or achieving something.” There’s a lot of different things that fall into this bucket.

Plans for the future

At the start of most days, I make a TODO list with what I’d hope to accomplish: get outside for a walk, finish up programming the new code optimization passes PR for Mito, write a bit of this book, don’t get too distracted by YouTube.

These TODO lists are reasonably accomplishable (again, avoiding YouTube) - and most of the things on this plan are short-term.

I have other types of plans though. When I was 18, I planned to go to 4 years of college. When I was 21, I planned to start a company. Within Mito, we have multi-month long initiatives. These plans are not close in spirit to “take a walk” - and it shows.

Short-term planning vs. long-term planning

It might seem like the only distinction between a short-term plan and a long-term plan is timeline. If it makes sense to create one type of plan, then it seems obvious the other type of plan makes sense as well.

But just grouping these both under the title "plan" hides a serious difference between these two structures: how present uncertainty is in each of them.

Inherent uncertainty is different between long and short-term plans for obvious reasons. On any specific day, the odds I can't get to my todo list because Aunt March falls down the stairs are quite low. On the other hand, most of Mito’s multi-month initiatives end up being disrupted by a change in priorities, a product pivot, or (literally) a broken shoulder. The odds a short-term plan gets disrupted by an unexpected event are low, but the odds a long-term plan gets disrupted by an unexpected event are high.

Similarly, the relevance of lack of knowledge driven uncertainty almost always increases with longer-term plans as well. The longer you’re going to operate on some plan, the more you need to know up front to make a successful plan, and the more likely you don’t know these things.

Increasing uncertainty

But there’s more that determines the uncertainty of a plan than just its timeline: it’s also dependent on the environment you’re planning in.

The physical sciences, like Chemistry and Physics, lend themselves to plan-making fairly easily: for a set temperature and pressure, the specific molecules you put in a container will react a certain way. You can plan for this reaction with as close to certainty as there is.

If we were in an environment where the laws of Physics changed fairly often, planning for this reaction would go from certain to totally uncertain - unless the chemist’s plan was to die.

How effective plans are in a system is a function of how much uncertainty is present in that system. How much uncertainty is present in the system is a function of how quickly change is occurring, and how predictable that change is.

Our changing environment

Over the past 10,000 years of human history, the rate of change in our environments has not been constant.

Although it’s not totally clear what went down in the annals of prehistory, what is clear is that for most humans in most of human history, life did not change with the speed it does today. I'm writing this on Notion on the Cloud on my Computer, using Electricity - my grandmother was using writing tools invented two thousand years ago.

We’ve found ourselves in an environment that changes dramatically quicker and less predictably than most of our evolutionary context. But like any other evolutionary trait, our planning tools are preserved from the earlier times.

Just as the calorie-limited context we evolved in produced an evolutionary trait of “binge if you find sugar” - a trait that has led to particularly maladaptive behavior in our current context (see: me eating 10 chocolate chip cookies last night) - perhaps these increasing levels of uncertainty have made our desire to plan obsolete?

Planning for Mito in Jupyter Notebooks

Mito, our spreadsheet that generates Python code, started out as an extension for JupyterLab, which is one of the most popular data science environments currently in use. JupyterLab has a predecessor, called a Jupyter Notebook. For the first 578 days of its existence, Mito only worked in JupyterLab, not Jupyter Notebooks.

Notably, though, there are 2x more Jupyter Notebook users than JupyterLab users. We had selected Lab for our first version just because we didn’t know this at first. This was fine for about a year, but eventually we realized that many of our users wanted Mito to work in a Jupyter Notebook.

Doing so, we thought, would be a huge technical lift - and so we pushed this work further and further back, waiting about 6 months before we even really considered it.

The first task was making a plan of attack. I spent about 2 days laying out the steps, which ended up looking something like:

  1. Step 1: Create an Architectural Review - a document that describes how Notebook extensions work at a high level, their main differences from JupyterLab extensions, and what APIs we should be on the lookout for.
  2. Step 2: Create My Very First Notebook Extension - literally get it to extend the notebook in any way you want.
  3. Step 3: Get a single Python package working in JupyterLab and Jupyter Notebook, ideally in the latter with a single extra installation command.
  4. etc.

Each of these steps had three estimated time bounds: a minimum time, an expected time, and a max time. This allowed me to create a confidence interval for the total time it would take to support notebooks. My conclusion from all of these steps was that it would take at least 14 days of full-time work to add notebook support, and potentially as long as 2 months.
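
(For the curious: the estimation itself was nothing fancier than classic three-point estimation. Here’s a minimal sketch of the calculation in Python - the step names and numbers below are made up for illustration, not the actual estimates from my plan.)

```python
import math

# Each step gets a (minimum, expected, maximum) estimate in days.
# These are hypothetical numbers, not the real ones from the Mito plan.
steps = {
    "architectural review": (1, 2, 4),
    "first notebook extension": (1, 3, 7),
    "single package for Lab + Notebook": (3, 6, 14),
    "port the frontend": (5, 10, 25),
}

# PERT-style: mean = (min + 4*expected + max) / 6, stdev = (max - min) / 6.
total_mean = sum((lo + 4 * mid + hi) / 6 for lo, mid, hi in steps.values())
total_std = math.sqrt(sum(((hi - lo) / 6) ** 2 for lo, mid, hi in steps.values()))

# Treat the total as roughly normal and report a ~95% interval.
print(f"expected: {total_mean:.1f} days")
print(f"~95% interval: {total_mean - 2 * total_std:.1f} to {total_mean + 2 * total_std:.1f} days")
```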

Given how long I expected it to take, and that I wanted to do it in a contiguous chunk, we kept pushing back the start date, delaying our support on Notebooks for another 2 months.

Adding Mito to Jupyter Notebooks

When it came time to implement, I skipped all the steps in my plan, and just attempted to add Notebook support directly. It took me 8 hours to do.

Ok, Nate sucks at planning

There’s no doubt - I do suck at planning. I had little experience with JupyterLab, no experience with Jupyter Notebooks, not much experience with porting software, not much experience with software generally - these are all uncertainty from lack of information, and they all conspired to make me a terrible planner.

In the final calculation, I was somewhere between 14x and 60x more pessimistic than I should have been. But can’t all of us relate to this exact feeling?

If you actually tracked the long-term planning I do in practice, you would inevitably discover the “four stages of my New Year’s resolution:”

  1. Only a fraction of the plans that are made even get started. A much smaller fraction get completed. Think of almost every New Year’s resolution.
  2. Of the plans that get completed, only a fraction of them have the desired effect that the planner intended. How many “go to the gym every day this year” plans end in fitness vs. a debilitating injury?
  3. Of those plans that get completed, how many of them actually go according to plan? Like my plan for Notebook support, perhaps they are wildly pessimistic. More often, they are probably wildly optimistic and overlook some key detail.
  4. Of those plans that had the desired effect, many of them have unintended consequences that make the plan questionable at "achieving the goal" that the plan-creator had in the first place. Becoming a gym rat has many other effects - at least, I try and hide my recent yoga obsession from all but my closest friends.

For myself at least, I estimate I complete less than 5% of the plans I make, and less than half of these achieve what I want them to. Think plans like "I want this job" or "I want to try keto" or "I'm gonna follow this exercise routine." I would say that less than 1% of the plans I create (and really intend to follow) actually achieve my goals.

Why I keep planning, in spite of my failures

So, given that I can’t seem to plan for shit - why do I keep doing it?

Well, making a todo list is quite fun - some days, it’s more fun than doing my actual work.

Just like plans, sex feels good. That’s by design: evolutionary processes want to encourage reproduction. But the new environment we’re in lets us access porn on our phones at any time, and just because sex feels good doesn't mean that you should wake up every morning and start your workday by watching porn.

Perhaps plans are the same. They made us more effective at operating in our evolutionary past, in the (relatively) predictable and consistent environments we evolved in, and so evolutionary processes made them feel good. But waking up every morning and making a plan for the future is just mental masturbation.

There are many examples of evolutionarily adaptive traits that no longer serve us. Our focus on sex and the environment of porn is one. Our focus on quick calories and access to corn syrup is another. Given how much more uncertain our world is than it was in the past, planning may have become maladaptive in the same way.

Chapter 4: Worse than Useless

In which I argue that planning requires a world model, and that creating a world model, in many cases, is worse than useless and can be harmful.

A plan for accomplishing something requires belief that the specific steps proposed by the plan will have some specific effect.

In other words, if you want to accomplish cooking an omelette, and (Step 1) is to crack the eggs, then you implicitly believe that cracking the eggs is a necessary (or at least helpful) step in making an omelette.

So, there’s some model you have internally, inside your head, about how the world will respond to your actions. I’ll call this a world model.

The difference between your world model and the world

The most well-tested world model in history was that of Newton’s dynamics. For over 200 years, thousands of people conducted millions of experiments - to the effect that late-1800s physical scientists believed that the world was close to being fully understood. Then, in a shocking turn of events, Einstein came along and showed that Newton’s world model was just a rough approximation of what was really going on.

There are two ways to read the above story.

The first: there is a human with their understanding of the world, which they capture in a world model. Over time, through theorizing and experiments, they improve the model to be closer to the truth of the world, which allows them to be better at prediction. This is a story of human progress through understanding.

The second: there is a human with their understanding of the world, which they capture in a world model. Over time, through theorizing and experiments, they improve the model to be closer to the truth of the world. At the same time, their confidence in their world model improves, potentially at a faster rate than their understanding does. This is the story of human failure through overconfidence. Despite our improving understanding of the world, we do not get better at operating in it. Better models lead to more confidence than they deserve, and in turn the operational errors we make are correspondingly larger.

Einstein is unique in that he didn’t think that his world model was an accurate view of the world - he expected that it would be disproven (or improved upon) in his lifetime as well. He just thought it was closer to the ultimate truth than Newton’s was.

But this is not the interpretation of relativity that gets passed to us, or that most of us have. We think only about how much closer we are to the truth, not about the overconfidence that this belief might bring us.

Scholes as an example of model failure

I will now give a hilarious example of model failure in practice, using examples drawn directly from Nassim Nicholas Taleb’s work. (Indeed, much of this chapter and the following draw from his arguments, although you will see where we diverge later on. Don’t worry if you haven’t read his books, although I would recommend it.)

The total volume in financial markets, at this point in time, is dominated by financial derivatives. So, if you’re at all interested in finance, you may have heard of options (they are among those things that folks on Robinhood mortgage their house to buy). If you haven’t, for our purposes, you can think of options as simply a financial product that allows you to hedge your risk against some underlying asset.

The most famous formula for pricing options is the Black-Scholes formula. Given a set of input parameters about the underlying asset and the option, this formula will output a price that you can pay for that option. It was developed by a Mr. Black and a Mr. Scholes in 1973. Scholes, as an award for his work on this equation, won a Nobel prize in 1997.
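
For concreteness, here is what the formula looks like as a few lines of Python - just the textbook Black-Scholes price for a European call option, nothing LTCM-specific:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Price of a European call: spot S, strike K, time to expiry T (years),
    risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# e.g. a one-year at-the-money call, 20% volatility, 2% rates: ~8.9
print(black_scholes_call(S=100, K=100, T=1.0, r=0.02, sigma=0.20))
```

Notice that every input is directly observable except sigma, the volatility, which has to be estimated - the price the formula spits out is only as good as your model of the uncertainty you feed into it.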

Mr. Scholes was not just an academic, but also a board member of the infamous hedge fund Long-Term Capital Management (LTCM). LTCM was initially a very successful fund, with annualized returns that averaged over 30% in the first 3 years of its operation.

In its fourth year of operation, 1 year after Mr. Scholes received a Nobel prize, LTCM lost 4 billion dollars over 4 months due to a combination of high leverage and exposure to global financial crises. LTCM collapsed, was bailed out, and in doing so caused shockwaves that almost brought down large portions of the economy with it.

Scholes, the man who co-invented the formula for risk-free pricing of options - who won the highest economic honor for his work - had created models that worked for a few years, before they broke catastrophically. High leverage was responsible for LTCM’s collapse, and the source of high leverage was the overconfidence that Black-Scholes (and other models like it) gives the model’s operator.

The financial modeling efforts of Scholes did not lead to more effective operation. Instead, they led to overconfidence in the form of high leverage, and this high leverage brought down the company he was on the board of, and almost the entire financial system with it.

How uncertainty relates to model error

This example clarifies how exactly uncertainty and better models interact: the new models that we have make us (the operator of the model) more likely to trust that there is no source of uncertainty in the world outside of the context of the model.

As a result of this trust in “only modeled uncertainty,” we take risks that we would not take otherwise, if we believed the world was a more uncertain place in ways our model did not capture. For LTCM, this inherent uncertainty manifested as financial crises.

The other very interesting property that this example highlights: LTCM actually did very well for its first 3 years of operation. Models can, and often do, lead to better short-term performance on the metrics that we care about. But one would much rather have a consistent 8% over 10 years than 30% for 3 years before total bankruptcy - a dollar compounding at 8% for 10 years ends up worth about $2.16, while a dollar compounding at 30% for 3 years is worth about $2.20 right up until it goes to zero. So, just because models lead to effective operation in the short term does not mean that we can conclude that operating with the model is more effective than operating without the model.

We’re not just worried about reproduction (of the fund) for a single year. Hedge fund folks are looking for the survival of the hedge-fund lineage.

Chapter 5: Decision Making Under Uncertainty

In which I engage with a leading thinker on decision making, and argue that his lack of operating experience limits his ability to propose actually implementable decision making strategies.

Exposure

So, perhaps the decision-making strategies we learn from birth are not attuned to uncertainty. I’m hardly the first person to point this out. The first thinker I ever encountered who discussed decision making under uncertainty was the previously mentioned Nassim Nicholas Taleb.

In his book Antifragile (which I read right after our pivot to Mito), Taleb argues that effective operation in uncertain environments requires moving away from world models and estimations of failure probabilities, as these are inherently unknowable.

Instead, he argues we should move to evaluating systems through the effects that possible events will have on them. If stressing events have a weakening effect on the system, it is a fragile system; if stressing events have a strengthening effect on the system, it is an antifragile system.

Instead of trying to account for inherent uncertainty, which we definitionally cannot do, we move to judge a system through its local, immediate properties of fragility and antifragility.

Limitations of future effect evaluation

I actually spent a few months trying to apply this during the early days of Mito. I tried hard not to think about how likely (or unlikely) it was that “we would land a large enterprise client if we added this feature” - as this was very uncertain to ever happen. Instead, I thought about “how does adding this feature make this tool robust against stressing events.”

Yeah, I couldn’t figure out what the hell that meant either.

So, while Taleb’s strategies are a step in the right direction, in that they respect uncertainty and avoid too much thinking about a future we cannot possibly reason about well, they don’t fully solve our problem.

For one, one is always required to make some implicit estimation of probabilities about future events. If I judged every new Mito feature in the context of “the effects of a super-massive asteroid hitting earth before deployment” - my decision making process would lead me to quit feature development and just work on my doomsday bunker to ensure Mito’s survival.

Of course, no one really thinks this is the right thing to be doing (although if you switched asteroid for pandemic, perhaps more people would agree). So there is clearly some probability cutoff point where we don’t consider the effects of future events, if they are too outlandish to worry about.

Moreover, antifragility isn’t a useful property to analyze for many things. A product isn’t an evolving thing outside of the work I explicitly put into it - so perhaps I can think about making our “product process” (which does evolve) antifragile to stressing events - but at the end of the day I need some tool beyond antifragility for making decisions on specific features.

These tools, per our above analysis, cannot rely on some reasoning about the future. Instead, we must explore the present, as antifragility does, or perhaps try and learn from the past.

From future event effects to past event effects

Nassim’s general framework for innovation is broader than just the concept of antifragility. He also introduces the concept of an option - a generalization of the financial product we talked about above. An option is literally just that: one possible thing that you can do.

Instead of planning for the future, he argues, we should continue to do the things that have worked in the past. He calls this “realizing an option.”

Notably, doing so does not require reasoning about future probabilities of events, or even the effects of those events; rather we just need to keep doing what has been working for us so far.

As an example of taking an option: let’s imagine that you’re marketing your product through 3 channels, or doing 3 different types of exercise, and you want to chop the least effective one and keep the most effective one. Generating options was doing these three things in the first place, and realizing the options would be doubling down on one of them.

Past event effects and optionality

As with antifragility, one faces issues when attempting to put this into practice in a startup.

To summarize simply, it’s totally impossible to figure out which option to take, because it’s not even remotely clear what options are on the table in the first place, not to mention general struggles in figuring out “what has worked” or even what “working” really should mean.

Struggles of figuring out “what was working” in the past

At Mito, we do heavy tracking of our retention, which is a measure of how many users return to our product after using it once. This is a great metric for figuring out how much users like our product, and is something that we explicitly target as a highest-level goal.
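
To make “retention” concrete, here’s a minimal sketch of how you might compute it from a log of usage events. The event data and the 28-day window below are made up for illustration - this isn’t Mito’s actual analytics pipeline:

```python
from datetime import date, timedelta

# Hypothetical usage log: one (user_id, day) row per day a user opened the tool.
events = [
    ("alice", date(2022, 7, 4)), ("alice", date(2022, 7, 12)),
    ("bob",   date(2022, 7, 5)),
    ("carol", date(2022, 7, 6)), ("carol", date(2022, 7, 20)),
]

def retention(events, window_days=28):
    """Fraction of users who come back at least once within
    `window_days` of their first use."""
    first_seen, returned = {}, set()
    for user, day in sorted(events, key=lambda e: e[1]):
        if user not in first_seen:
            first_seen[user] = day
        elif day - first_seen[user] <= timedelta(days=window_days):
            returned.add(user)
    return len(returned) / len(first_seen)

print(retention(events))  # 2 of 3 users returned -> 0.666...
```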

When this number went up and down in the past, we would go through a process of “retro” - which was just us attempting to use Nassim Nicholas Taleb’s optionality framework (not necessarily by name) to figure out which product changes led to the effects we were observing.

After attempting to retrospect for a few months, we realized there were a few major issues:

  1. The product changes that we think are big are not necessarily the changes our users think are big. Rebuilding our spreadsheet from scratch feels big because it was a lot of code, but that really just biases us. Users don’t care how much work we put into a feature - just how much it improves their life.
  2. Some changes we make are not changes we are even aware of. For example, we certainly fixed some bugs by ditching old code, but these bugs weren’t bugs we knew about. Perhaps these unknown bug fixes were what caused the massive increase in retention - we would never know.
  3. There is no way to look for negative causes, which is to say things we did that stopped bad effects. If we said “no” to developing a feature that would have led to terrible performance, then making this decision effectively increased retention. But there’s no way of knowing how much of our progress is driven by negative vs. positive changes.
  4. Even if we can determine which changes had impacts, it is still almost impossible to say why the changes had the impact that they did. For example, imagine we observe that adding a new way to get data into Mito improves data import rates, and so we conclude users are using the new feature effectively. But perhaps we just unbroke some other feature in the process of adding the new one. And so we draw the incorrect conclusion “we need to add more features” when actually the conclusion should be “we need to build a tool that is less buggy.”

The basic conclusion here is that it’s pretty much impossible to figure out which product changes led to improvements in the metrics we care about, if we’re just looking back retrospectively.

What is immediately clear, then, is that any option-based strategy of execution needs to do more than just say “tinker, and take what works.” It needs to define a process for actually figuring out what these options are, evaluating them in a way that is robust against noise, pruning those that aren’t working, and doubling down on those that work. This is very hard.

Furthermore, given the complexities and biases on display in the above list, it’s natural to ask: is it actually beneficial to look back and try to measure this in the first place?

The nutritional epidemiology fallacy

“Well,” you might say, “I agree that it’s pretty hard to figure out what caused changes in the past, but surely not looking back at any of our old data is dumb. It’s always better to have more information vs having less.” Attempting to realize options is better than not attempting to draw conclusions, right?

Not really! In the case where the thing you measure is very likely to be biased, and in turn very likely to mislead you, the best thing to do is to not look at this old data at all.

Nutritional epidemiology (the study of how [what people eat] affects [their risk of disease]) is a great example here, as it’s very similar to our situation. Researchers use non-randomized, historical data (with large reporting errors, given self-reporting) to attempt to tease out cause and effect between diet and disease.

And look! It turns out that eating a handful of berries a day can slow your cognitive decline by 2.5 years. Of course, this is an absurd effect size, and obviously bullshit.

And yet, my totally lovely and very intelligent mother had a berry phase once. I’m pretty sure she also interpreted berries as including other small fruits like figs - which, it turns out, are pretty much just a sugar bomb and probably didn’t help dad’s diabetes. Sorry, Pop.

More data and more analysis is not better than less if that data and analysis is going to mislead you. In this case, avoiding data that you know is going to mislead you is actually the best bet.

Against “more data and better models is better” generally

In two cases now, I have argued that more “knowledge” of a certain type is not necessarily beneficial to the operator.

In the first case, I adapted Nassim Nicholas Taleb’s argument that models tend to fragilize their operators by increasing humans’ trust in them; since confidence outpaces understanding, the model ends up exposing the operator to risks they cannot account for due to uncertainty.

In the second case, I disagreed with Nassim Nicholas Taleb’s argument that just “tinkering and then taking an option” is the solution to this problem of model-less operation. Looking into the past does not give us the ability to conclude cause and effect, and so the option we choose to realize is just as likely to reflect our biases about what we think worked as it is to be the thing that actually worked.

In both cases, what limits us is our human bias; we do not have the ability to look either forwards or backwards in time, we just have the ability to think in the present; and our biases are inescapable when we’re thinking!

Chapter 6: Experiments

In which I introduce experiments as the only real tool for learning from the past, but note that they suffer from limitations that make them hard to implement in most real-world cases that aren’t drug testing.

Introduce experiments

In some ways, the impossibility of establishing cause and effect by reflecting on the past is not surprising. There’s really only one way of establishing causality in a complex system you didn’t design, and it’s through an experiment. For our purposes, experiments take the following shape:

  1. Formulate a hypothesis. This must be something specific, like “drug X reduces death rates by 5%” or “bug fix Y increases importing by 10%”.
  2. Randomize into groups. For us, two groups will do.
    1. Half of drug users get drug X, and half of Mito users get bug fix Y.
    2. The other half will get a placebo. For a drug, this can be a pill. For a bug fix, this isn’t well defined.
  3. Conduct the experiment, doing your very best to not fuck anything up - from changing some other variable, fixing some other bug, accidentally randomizing badly, or anything else minor that will invalidate your results.
  4. Collect and analyze the data. Again, don’t fuck up while you do this. Reject or accept your hypothesis.

In the analysis, you can see if your intervention actually had the intended effect that you hypothesized it would. Note that your hypothesis is crucial here: the smaller the effect size you hypothesize, the more users (or patients) you need in each group for the experiment to have enough statistical power to detect that effect at a given significance level.
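
To give a rough feel for that relationship, here’s a sketch of a standard sample-size calculation for comparing two proportions (the “bug fix Y” style hypothesis), using the normal approximation. The baseline import rate and the hypothesized lifts are invented numbers:

```python
from scipy.stats import norm

def sample_size_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Users needed in each group to detect the difference between two
    proportions with a two-sided test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return numerator / (p_treatment - p_control) ** 2

# e.g. 30% of users import data today; we hypothesize the fix lifts it to 33%.
print(round(sample_size_per_group(0.30, 0.33)))  # ~3,800 users per group
print(round(sample_size_per_group(0.30, 0.45)))  # ~160 per group; bigger effects need far fewer users
```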

Limitations in determining cause and effect

Experiments are the only serious way to determine potential cause and effect, and even they suffer from many issues. Simply put, they allow us to isolate a system and change a single variable to see how it affects another variable of interest.

Note here, though, that we’re dealing with potential cause and effect. An experiment does not allow you to really establish cause and effect, only that there might be a cause and effect relationship.

Let’s consider an example where you’re introducing a new heart attack prevention drug that works by blocking all sodium absorption into the blood (this sounds like it would kill you, but it’s a thought experiment). You have a hypothesis that excess sodium in the blood is the cause of heart attacks.

You conduct the experiment. You give half of the randomly sampled patients your drug, and the other half a placebo, and record heart attacks in both groups.

Heart attacks don’t exist in your treatment group. That’s great! Hypothesis confirmed, right? No salt in the blood means no heart attacks, so we can conclude that salt in the blood is the cause of heart attacks.

Not exactly! Just imagine the following (not-so-far-off) process: increased salt concentration in the blood causes conversion of glucose to fructose, fructose drives up blood pressure, and this causes heart attacks. That’s the real cause and effect.

So: does salt cause heart attacks? Sure, but only if you’re also eating glucose! And you only measured heart attacks - not absorbing any salt would be terrible for you for other reasons, presumably.

On ruling out cause and effect

It might seem like an experiment would at least allow us to rule out some cause and effect. Imagine that this drug fails to prevent heart attacks. Since there is no salt in the blood, and there are still heart attacks, then salt cannot possibly be the cause of heart attacks, right?

Again, we have to be careful here. Perhaps salt in the blood is a cause of heart attacks, but not the only cause of heart attacks. Maybe heart attacks happen in exactly two cases: there is too much salt in the blood, or not enough in the blood.

Complex systems are very complex, and the conclusions we can draw from interacting with them — even in the most structured and educational way we can dream of — are very limited!

Other issues with experiments

Note that the above philosophical concerns pale in comparison to the practical problems of running experiments: in many cases, they are impossible, and in many other cases, they are wildly expensive.

In most realistic situations, these two limitations make experiments to establish plausible cause and effect (or rule it out) impossible.

In the case of Mito, the technical complexity of running an experiment is non-trivial. We need a way to toggle specific features for specific sets of users, and to measure which users are in which set. More than this, though, we need to make sure that the experiment runs long enough that we can draw real conclusions.
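
To give a flavor of the toggling piece: one common approach (sketched here for illustration, not something we actually built into Mito) is deterministic bucketing, where a hash of the user ID decides which variant a user sees, so the assignment is random-ish across users but stable for any one user. The bucketing is the easy part - the measurement and the patience are where the real costs show up.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_fraction: float = 0.5) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.
    The same (user_id, experiment) pair always gets the same answer."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish value in [0, 1]
    return "treatment" if bucket < treatment_fraction else "control"

# The app then branches on the assignment and logs it alongside usage events.
if assign_variant("user-123", "bug-fix-y") == "treatment":
    pass  # serve the version with bug fix Y
```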

Doing so poses its own limitations on our product process - costs that we are unable to bear right now!

What’s the point

If we cannot conclude anything about cause and effect, what is the point of experiments then? Well, you can still conclude useful things from an experiment - just not necessarily the things you might first jump to.

If you do give a heart attack prevention drug, and it prevents heart attacks dramatically and in a statistically significant way (don’t make me get into it), then this is meaningful. You might not be able to conclude anything about cause and effect, but you will be able to conclude that this drug might be worth giving to some folks in some cases!

Note here that not making conclusions about cause and effect is very important! Just as with looking backwards at history, doing so is most likely to just confirm your biases and show you what you want to see - rather than allowing you to operate more effectively.

Chapter 7: Mechanistic Explanations

In which I argue that the complexity that can be found in mechanistic explanations of complex systems is bad for a variety of reasons.

What I’m rejecting above

So, we’ve got a few things we’re rejecting here:

  1. Planning for the future. Uncertainty means that no matter how good your model gets, you’ll overestimate it, and get burned.
  2. Learning from the past. Your biases mean that no matter how much you look, you’ll just end up justifying the effects you see with the causes you want to blame, rather than seeing the truth.
  3. Experiments to learn cause and effect. Experiments might teach you about what effects occur for a given intervention, but they cannot reveal the internal mechanisms of a complex system, and they are very expensive.

What these things have in common

In all of these cases, we see a similar pattern. Where one goes wrong is the process of trying to build a model that explains the mechanisms for operation - a model that attempts to use cause and effect to figure out how to operate effectively. With a plan for the future, this model is implied. With learning from the past and experiments, this is exactly the sort of knowledge you’re aiming for.

I call this form of explanation mechanistic for obvious reasons.

Mechanistic explanations for effects make up most of the explanations that we see in the world. Things like:

  1. He fell on the ice because it was slippery.
  2. Inflation is up because fuel prices are rising.
  3. Psychedelic mushrooms can help people with PTSD by creating new neural pathways.

All of these explanations are mechanistic explanations: they explain some effect with a cause.

The infinite regress of the mechanistic explanations

As soon as you start looking a bit deeper at any of these explanations, you realize there is a whole universe of simplifications that they hide.

Ok, so he slipped on ice because it’s slippery. Why is ice slippery? Oh, it’s because a thin layer of water forms on top of the ice when you step on it. Well, why does that water form? Well, it’s because ice is less dense than water, so stepping on it compresses it and turns it back to water. Well, why is ice less dense than water? Etc. Ah, so inflation is up because fuel prices are rising. Why are fuel prices rising? Well, because Russia is waging war on Ukraine right now. Why is that? Etc. Etc. Etc.

At some point, somewhere in your mechanistic explanation, you must not go deeper. You give up - claiming both a) this is an appropriate level of detail, and b) you’ve included all the relevant factors.

But as we’ve explored above: it’s likely that you’ve done neither of these two things, and will never be able to. It’s very possible that we humans, with our limited computational power and tiny brains, will never be able to go deep enough.

Where mechanistic explanations work

There are notable examples of where mechanistic explanations work. Theories of gravity and relativity propose some specific ways that bodies interact with each other.

With Physics, science can reduce things to a single subcomponent to study with an experiment; but this is hard in the world of economics. Physical entities like molecules compose into well-defined systems with well-defined contexts, and this allows us to construct models with mechanistic explanations that work in practice.

This is not true of the many other contexts where we attempt to apply models - and namely, it is not true in the complex, opaque, and uncertain systems that make up much of the world that we’re attempting to operate in!

How mechanistic explanations lead to complexity

We will also note this of mechanistic explanations: they are, by their very nature, concerned with the details. At least, they are concerned with enough of the details to lead to accurate modeling.

This, in turn, leads to complexity for the operator of the model! As anyone who has taken an economics class can tell you, these models are not so easy to learn to apply.

Chapter 8: The Costs of Complexity

In which we argue that the complexity that comes from mechanistic explanations is also just expensive.

A simple decision making model

Let’s start with a simple decision making model, one that most humans use on a daily basis:

If that food smells like it’s rotten, don’t eat it.

This is a very simple heuristic. You smell the food, and in that moment, you make an easy call.

Now, let’s imagine for a second that we weren’t inborn with an aversion to rotten smelling foods. How would we go about making decisions on what to eat?

Well, in 1978, a Mr. Scholes would invent a device that could detect poisons and sickening agents in food - for which he would receive a Nobel prize. You’d avoid smelling your food (that’s a lot of effort), and instead would just rely on his device to check anything you eat.

In the short term, it would be great. You’d catch some food that didn’t smell rotten but would have made you a bit sick otherwise, due to low levels of bad bacteria. You also would get to eat some food that smelled rotten, as the device could detect that this specific rotten food was as of yet edible. You’d leverage this device to eat rotten smelling food that wasn’t poisonous, and for 3 years, you’d reduce your food-waste by 30%.

In the fourth year, your device would malfunction due to an abnormally shaped peanut getting stuck in the input tube without you realizing. You’d fail to scan the steak you were about to cook (although if you had smelled it, you’d have noticed it was wildly rotten-smelling), and after cooking and eating it, you would lose 100% of your value, and you’d die. Your death would send shockwaves through the rest of the financial system.

Of course, when placed in that context, complex model making can be made to look very absurd. And yet, modeling what bacteria and compounds make a human sick isn’t much harder than modeling the economy of a country. So why would we use an explicit world model there?

Cost of complexity #1: complexity leads to overconfidence

We’ve made this point above multiple times, so we’ll keep it short here. It was the confidence in the world model - and specifically the confidence that our world model wouldn’t break for reasons it did not capture - that led to our demise.

The example above illustrates the first reason we want to keep this world model simple: the complexity of the world model leads to overconfidence, and in turn to worse failure modes.

Cost of complexity #2: decision making costs

The second cost of complexity is the strict operational cost of making decisions with complex world models. Although technology can lighten the load, having to buy, store, maintain, and use a piece of technology to smell your food for you is a hassle. Just smelling it yourself is easy.

Decision making is a funny creature, in that it can expand to fill any container. The infinite regress of the world we exist in means that there’s always another point to make, always another take to take. A group of people can debate, bike-shed, and discuss any issue as long as they’d like (or hate) to.

The complexity that comes from mechanistic world models elevates these discussions to front and center. Decision making becomes a process of enumeration, of making all the points, and of trying to think through all the levers that could be pulled to get the desired effect. I’ve found myself spending so much time in decision making that it takes weeks to get to the actual process of executing the decision!

And this is not just a function of social dynamics. Indeed, the Jupyter Notebook plan of attack that I made all by myself took longer than actually implementing it (while also not being useful).

In many cases, the decision is unimportant (or misunderstood) for reasons you cannot possibly conceive of when you’re making it. All that ends up mattering is the operating that you did, and the things you learned from it in the process on your way to making the next decision.

Past, present and future

So, where does this leave us? Looking to the future to see where we’re going is worse than useless, because of uncertainty. Looking to the past to figure out what worked is detrimental, given our biases and complexities in collecting real data about cause and effect. Experiments are massively limited in scope, and the mechanistic models they help create are expensive to implement.

We reject any and all mechanistic explanations as a tool for operating in complex and uncertain environments.

All we are left to work with is the present. This, in the end, is the one real operational recommendation we can take from Nassim Nicholas Taleb. The easiest thing to be really sure of is what is happening in the current moment.

Chapter 9: Thinking Dumb and Slow

In which I note the similarity between many of the above ideas and some other popular decision making ethics, but note where I differ.

Other decision making frameworks

I am hardly the first person to call out the costs of experiments, the failures of nutritional epidemiology, or how hard it is to plan in uncertain environments. I’m also certainly not the first to try to think of a way out of the current ethic of decision making we mostly engage in.

I’d like to spend some time here engaging with these other methods of decision making. Some of these methods are popular, some of them used to be popular. Some of them are just coming into vogue. Many of them attempt to diagnose the reasons for our decision making woes in terms of fundamental traits of humans.

The methods of decision making that I’ll be talking about here will all be individual decision making tools. By this, I mean tools that you can apply by yourself to make a call - not tools like democracy, or hierarchy, or any other organizational structure. We’ll touch on those much-larger questions in the later and more-unsatisfying chapters of this book.

Modern Rationalism

Examples: Slate Star Codex, Nick Bostrom, the Effective Altruist (EA) Movement, the FTX crypto exchange, LessWrong, OvercomingBias.

Modern rationalism is a loose collection of decision making procedures, understandings, and ethics that grew up on the internet; I will refer to its adherents as rationalists.

Rationalists often cite Eliezer Yudkowsky’s Sequences as a cornerstone of the modern rationalist movement, and through his writings on LessWrong and OvercomingBias we can see the general theory of rationalist decision making:

  1. Acknowledge that you, as a human, have biases that cloud your decision making and understanding. Try and understand these biases so that you can avoid them.
  2. Now that you’re avoiding your biases, find “true beliefs” - which are beliefs that have predictive power.
  3. Use these “true beliefs” to accomplish the goals that you want as effectively as possible.

Rationalist converts think of rationalism as systematized winning, and indeed at this level of description it is hard to see what is wrong with the procedure that rationalists describe. Isn't avoiding our biases to try and approach the truth, and then accomplishing our goals as effectively as possible, legit?

Thinking Fast and Slow

Thinking Fast and Slow came out in 2011, and any treatment of modern theories of decision making would be remiss not to mention it.

This book is a descriptive account of the decision making procedures that human beings use. There is "System 1," which is a fast, intuitive, and easily misled automatic decision making process, and there is "System 2," which is a slower, methodical, and conscious decision making process. Human decision making is flawed because we put too much trust in the mostly-invisible System 1, which in turn exposes us to our biases in ways we do not expect.

The book ends each chapter with a section called "Speaking of X," where the author provides a list of ways to communicate about the bias under discussion. Example sentences include things like "Are we sure we want to invest in this stock? Reversion to the mean is likely, and it probably will not continue to do better than average."

Interestingly, in the years since it was published, it has come under fire for overstating the power of its conclusions due to some underpowered studies. To quote the author:

What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples.

The inconsistency of mechanistic explanations for failures of mechanistic explanations

The above ironic example highlights one major issue with both Modern Rationalism and the Thinking Fast and Slow approach: there's very little evidence that identifying and being able to talk about one's biases actually allows one to avoid them effectively.

Our issues run deeper than that, though. These decision making frameworks understand biases as internal facts of the human mind that break our decision making tools by making us overvalue some options and discount others in a way that does not reflect reality. And then, as a result of biases causing a misevaluation of options, we make the wrong decision.

But note what we've done here: we've constructed a causal, mechanistic explanation for decision making itself. Human beings, this causal story goes, are at least in part mechanical decision making processes with a broken evaluation tool: a tool that is broken because of "confirmation bias" and "availability bias" and "question substitution bias."

Nowhere do these decision making frameworks, which frame themselves as overcoming bias, admit that adopting the framework is itself a decision - one made with all the very same biases, which make evaluating the framework as tricky a proposition as any of the decisions the framework is meant to help with.

TODO: I think the above could be cleaned up and clarified a bit. I’m trying to argue there’s a sort of logical inconsistency that appears in these frameworks - they call out bias without really letting us avoid it in the main decision of accepting these frameworks.

What’s the point of a decision making framework?

At the end of each Thinking Fast and Slow chapter there is a section called "Speaking of X," where X is the bias we were informed of during the chapter. For example, in the chapter on the availability heuristic, we learn that humans estimate base-rate statistics from whatever information is most easily accessible to them. This leads them to consistently overestimate some values and underestimate others. The "Speaking of intuitive predictions" section then advises saying things like "he's a long way from the market; there's always room for regression to the mean. Let's take into account the strength of our evidence, and regress our position to the mean." TODO: clean this up with a better example.

This is where this book gets very silly. Of course, just hearing a few sentences that apply the idea of reversion to the mean to acknowledging uncertainty isn't going to lead anyone to make better decisions.

To test this, I attempted to use some of these exact sentences in a product meeting where we were discussing new features, as well as the uncertain question of how well I thought they would lead to retention changes. TODO: actually attempt to do this and see what happens!

Of course, no one should be surprised that just hearing about our biases is not enough to overcome them. If things were that easy, we could have talked our way into good decisions a long time ago.

In the modern rationalist movement, on the other hand, it is easy to find examples of people discussing the "changes to the probability of nuclear war based on the war in Ukraine." Indeed, if you join rationalist forums, you will very likely be made to feel like a right old idiot for not playing these estimation games over email; the subject matter is so important and the numbers so high that not engaging with it feels like a hubristic death wish. TODO: dig up this email thread and see specifically what the numbers were.

TODO: I dont’ really know what my criticism of this is. It’s just like, so stupid, I can’t even express it. There’s clearly no predicting it at all.

Whole body yes

TODO: this is relying on implicit intuition

Become an expert

TODO: this gives examples of

Relying on Implicit Intuition

TODO: How can I respond to this without giving an mechanistic explination of failures? I am really not sure here.

Chapter 10: Down with Bayesianism

In which I do an actual analysis of

TODO: I should go and look at less wrong from befor Covid, and see how people did. I should talk about how the vast majority of these people footgun, which is crazy.

TODO: I should try and find other examples of communities who explicit apply things wrong.


Section 2: New Decision Making Tools

In which I use the learnings from the above section to propose a new decision making ethic, and argue that this performs better in practice. I also argue it is more fun to implement.

Chapter 10: making decisions about decision making

In which I argue that having a framework for decision making is itself the first decision one needs to make. I argue that we do not need a top down approach to decision making, but rather can choose to approach our decision making framework similar to how we approach specific decisions themselves.

Top-down decision making frameworks

Modern Rationalism and Thinking Fast and Slow both have the same general top-down description for decision making: understand your biases better, learn to avoid them, which will allow you to build better explicit models (where they make sense), and then apply them.

These are both top-down decision making frameworks. The process for making decisions is specified before any specific decisions are brought up - the process is general, and can be applied as such.

There are a few benefits to top-down decision making tools. First and foremost, a top-down decision making framework is highly legible and formalizable, which makes it easy to communicate, critique, and improve in the abstract.

Even above, the fact that I can describe “rationalism” as a specific sequence of steps and approach to a problem allows me to engage with it very easily - this makes it easy to learn. Moreover, the fact that this process can be described abstractly makes it easy to convince yourself it’s worth applying - it just sounds so reasonable!

The process of adopting a new top-down decision making framework

There’s an interesting dynamic that occurred when I was first exposed to rationalism, that went something like this:

  1. I was exposed to rationalist blogs, thoughts, etc. - starting with an introduction through smart and cool coworkers that I had.
  2. I read and did my best to understand rationalist positions and their general top-down decision making strategies.
  3. I began to parrot certain rationalist arguments to friends and family about things like AI alignment, food security, effective altruism, and more, without calling myself a rationalist.
  4. My parroting of the arguments made me think that I might agree with rationalists a lot, so I started calling myself a rationalist.
  5. I did very little thinking about explicitly applying this top-down framework when I encountered new problems, but continued to call myself and think of myself as a rationalist.

There are a few really interesting points about how this process went for me, which I will explore in the sections below as a helpful way of trying to understand what making decisions and decision making frameworks might look like.

Moving from the specific to the general

First, note that I moved from parroting arguments created by a rationalist process to accepting the general decision making framework. Although this might not be true for most people who accept rationalism, I think it likely mirrors many people's experiences with decision making tools: you parrot the specific decisions before some "acceptance" of the overall framework.

The simplistic model of “learned about a top-down decision making model, and then adopted it” is not so simple. The fact that the model itself is designed in a top-down way does not mean that it was adopted in this manner - and in fact, a gradual, incremental approach to adoption was actually how things occurred for me.

My current decision making framework selects the new decision making framework

Moreover, the process of accepting the new, top-down decision making framework of rationalism was done with non-rationalist decision making tools. I did not use the tools I was now deciding were better to make the decision to switch to them. In fact, there was very little explicit thought about moving to calling myself a rationalist (if that can even be pin-pointed) - it just kind of happened through the incremental process described above.

This is the second important point: if we’re ever going to let our decision making processes evolve (which we surely should as we learn more about the world!), we need to be aware of the fact that this evolution process is driven by the decision making process itself.

This implies a sort of interesting path-dependence with respect to decision making processes: the decision making framework you currently have evaluates, and may evolve itself into, the new one, and so the decision making tool you're starting with determines where you might end up.

Note that in my case, my initial decision making framework was not explicit: it was an implicit sort of “does it ViBe” or a “do I like it” or something that occurred inside my head. I won’t hypothesize about the mechanisms of action, but it certainly wasn’t any explicit thinking.

Sticky-ness doesn’t imply a good decision making framework

So, in some ways, I feel like not much more than a sequence of decisions: some of these decisions, like "where should I put my foot," are made by my unconscious mind, and some of them are conscious decisions like "what feature should our product build." Some of these conscious decisions are meta-decisions, about what decision making framework to use. The non-meta, conscious decision making is done by whatever decision making process I selected in the meta-game at that point in time.

So it's not unreasonable to think of myself as pretty much just a series of decision making algorithms that are evolving over time. And now that I've got some sort of evolutionary process identified, I want to make sure that it's serving me.

It's natural to think that the decision making process advances according to a fitness metric called "performs well in the real world" - that I go from some process of decision making that is not as good at making effective decisions to one that does better. But if the first 10 chapters of this book argue anything, it is that this is explicitly not the case for humans. We spend our time mentally masturbating with plans, or building models that lead to fragile decisions.

Why might this be the case? Well, it’s because the evolutionary goals of the decision making process are not “be good at decision making” but rather something way more complex. Some combination of being good at decision making, being easily implementable, and, as planning demonstrates, being fun!

One key takeaway for us is this: just because a decision making framework is widely used, or enjoyable to deploy, does not make it a good decision making framework. The only thing that makes a good decision making framework is that it makes good decisions - which is actually the most fun thing of all.

Accepting a bottom-up decision framework

Another key takeaway from the above is that we should not expect to specify a full, final decision making process in this document - partly because representing a decision making process as a static thing is unrealistic, but also because there's certainly no way we'll get it right without a lot of tinkering.

The decision making process described here will instead be a set of starting conditions - both in terms of structures and processes for decision making. Furthermore, the decision making procedure described in later chapters will attempt to give some structure and process to the meta-decision of what the decision making procedure should look like!

Chapter 11: Introducing Present Heuristics

In which I introduce the notion of a "present heuristic" as a tool for making conscious decisions, and argue that present heuristics are a sufficient tool for good decision making, as long as we make sure to focus on the lineage.

So, it makes little sense to plan for the future with an explicit world model in complex systems - as this planning is as likely to fragilize us in the long-term as enable us in the short-term. And it makes little sense to look back at the past to try and learn from it, as the selection and sampling errors are effectively inescapable in any complex system. So what are we left with? Just the present.

Similarly, we've argued above that the complexity in decision making procedures has massive costs that make it not worth it in almost all complex systems. So what are we left with? Just simple decision making rules.

Thus, we end up with a present heuristic.

Defining a present heuristic

Let’s circle back to the simple heuristic we used above:

If that food smells like it’s rotten, don’t eat it.

This heuristic is based entirely on the immediate moment. You smell some meat, and in that specific moment, you can tell it smells bad. So you decide not to eat it.

This is a present heuristic: a decision making rule that is concerned primarily with the state of the world that you can currently observe with your own senses.

By collecting information with these senses, and then making a decision based on this information, you avoid misleading yourself with the past or the future. This is the "present" aspect of present heuristics.

By limiting ourselves to "heuristics," we limit the complexity of our decision making rules. Simply put, we force the tools we're using to make decisions to be simple, to avoid the costs of complexity.

Note here that when we talk about heuristics, we are not referring to the subconscious, instant heuristics that Thinking Fast and Slow argues that humans apply automatically. Instead, these heuristics are simple but explicit rules.

The obvious limitations of a present heuristic

Even this very simple present heuristic immediately raises some interesting challenges in applying it.

For one, people purposefully eat rotten-smelling food all the time. Many cheeses, kimchi, shark meat - all the great wonders of flavor - are just a few examples of the rotten-smelling foods that many of us eat and enjoy, ignoring (or even totally forgetting about) the fact that this food smells rotten in the first place.

So perhaps we should construct a new heuristic:

If a food smells like it’s rotten, don’t eat it. Unless you have seen other people eat it and not get sick, or if you’ve yourself eaten it before and not gotten sick.

At first glance, this addendum to the rule breaks the two conditions we set on present heuristics: it relies on data you’ve seen in the past, and it is at least twice as complex as the original heuristic that we presented.

The sliding scale of present heuristics

So how do we maintain “present” and “heuristic” when we broke both of them in our first simple example of a present heuristic?

Well, present heuristics are not an end goal. They are an ideal that we can never reach. The general thesis of this book is that present heuristics can be the most effective and robust decision making tools we have available to us. In practice, many of the decision making tools that we end up deploying are not in fact pure present heuristics, but sit on the spectrum closer to present heuristics than is currently standard.

On Past-focused Present Heuristics

I have recently been trying to create a new parkour sport (shameful, I know). Generally, the rules are: pick a rocky stream, and run through it for a set distance as fast as you can, jumping from rock to rock.

Not all rocks, in this game, are made equal. Big, solid rocks are fine, but the rocks you need to watch out for are those that move when you jump on them, sending you falling into the water (why am I running through the stream? idk.).

I rely on the following present heuristic when I’m running the stream by my house, which I do multiple times per week:

If you walked on a rock before, it’s fine to jump to. If it’s a big rock, then it’s probably fine to jump to as well.

Note here that this heuristic doesn’t feel like a present heuristic - it feels like it relies on some past observation of data.

But notably, rather than applying some heuristic to the past, I am simply remembering and reusing a present heuristic that I already applied in the past.

An example of a past heuristic would be something like TODO: insert here

Reuse of previous present heuristics

When, then, is it appropriate to reuse present heuristics that you applied in the past without needing to reapply them?

The first, and most obvious, condition for reuse is that the environment in which you're attempting to reapply the heuristic must be the same. In the case of my stream, I have to redefine routes and learn which rocks to trust every time there is a big storm that shifts rocks and debris downstream. You cannot continue to rely on the past application of a present heuristic if the environment has changed.

Second, you need to make sure that you really are relying on a past application of a present heuristic vs. a present application of a present heuristic. The main way to tell the difference between these two things is how you relate the (input) that you control to the (output) that is of interest to you.

As an example:

  1. When I step on a rock, I immediately know whether it is stable or not. If it doesn't tip me into the water now, I can conclude that it will not tip me into the water on future runs.
  2. If I take a new medicine, and it doesn’t make me feel sick immediately, I cannot conclude that it will never make me feel sick. The output of sickness for a pill can be days, weeks, or even years from the moment I take it.

Thus, to make sure you're reapplying a present heuristic from the past rather than applying a past heuristic to the present, check the time delta between the input and the output you're evaluating: is it immediate, or almost immediate? If not, you might not be relying on a present heuristic.

Returning to our cheese-eating limitations

And thus, our updated rotten-food eating heuristic is cast in a new light:

If a food smells like it’s rotten, don’t eat it. Unless you have seen other people eat it and not get sick, or if you’ve yourself eaten it before and not gotten sick.

First, blue cheese is pretty much blue cheese. So the context of your cheese eating is pretty much static. Second, the effect time between eating bad food and getting sick is almost always less than 12 hours, at least in my limited (and painful) experience.

So, we’re not so far away from a present heuristic. Really, we are just reapplying a present heuristic from the past in the present moment :)

Chapter 12: Present Heuristics are Enough

In which I relate present heuristics back to Ms. Squirrel, and argue that if a single simple heuristic created intelligent humans, similar heuristics can probably capture whatever goals you want, as long as they are iteratively applied.

Evolution as a present heuristic

It might seem like the above is both limited and contrived; we’ve introduced the notion of a present heuristic, but how can we possibly argue that simple decision making procedures like this will actually result in useful outcomes?

Well, Ms. Squirrel was constructed by a process with just a single present heuristic.

Evolution doesn’t worry about how fit you were in the past, or how fit you might be in the future. It doesn’t worry about how well you fit inside a product funnel, or whether you are a high-value customer or a low-value customer.

Evolution simply worries about how fit you are at the current moment; if you're not fit enough, you die, and so your genes don't go forth. If you're fit enough, you survive for another moment, and maybe in that moment you'll get to reproduce.

This single, simple present heuristic is enough to create the human brain.

Your business goals and present heuristics

Unless you're some fancy AI startup trying to create general artificial intelligence, your business goals are almost certainly less complex than the human brain. As such, it's totally reasonable to construct a set of present heuristics that capture your goals and allow you to accomplish them.

But note that there's a bit more structure than just this present heuristic that created our brains. We oversimplified a bit by arguing that all that was needed was a single, simple present heuristic. Of course, there is only one heuristic when it comes to which animals survive - but that doesn't say much about which new animals come to exist.

This is the second part of the evolutionary system: an iterative environment, where at each iteration, mutation occurs. Then, application of the present heuristic decides which things continue to survive into the future.

Chapter 13: Creating an Iterative Environment

In which I introduce the notion of an iterative environment. This environment is both where the present heuristics are applied to create evolving strategies, and also where the present heuristics themselves evolve.

Applying present heuristics in an iterative environment

In the evolutionary environment that is the world, there are a few properties that we’d like to focus on:

  1. A simple present heuristic that decides whether a specific instance of an organism lives or dies.
  2. An iterative environment where specific instances of organisms reproduce in a (potentially) mutated way.

Note here the relationship between simple present heuristics and the iterative environment. The iterative environment generates potential options, and the simple present heuristic strikes down those that are not performing well according to itself.

Why present heuristics make sense in an iterative environment

How does this evolutionary system make sense at all? It's possible (and indeed easy) to imagine a different set of conditions that lead to intelligent life.

For example, we might (and do) imagine that an all-knowing God constructed all living things to fit into a specific niche: the classic creationist argument that bananas fit so well into a human hand that they must have been designed as snacks.

Setting aside an omniscient god, it's totally possible to imagine an evolutionary system that evolves according to a different ethic:

  1. There is one organism that is much smarter than the other organisms.
  2. This one organism controls the habitat, food, breeding habits, and perhaps even genes of all the other animals, deciding what lives and survives.

It might seem like this system is not so different from the system described above; we're just changing the present heuristic for fitness. But in fact this heuristic is quite different. In the first case, survival has no long-term thinking component - in the second, the long-term thinking of the one smart organism (think: us humans) is the primary component of selection.

Why to create an iterative environment for yourself

In any case, the fact that the evolutionary system that created intelligent life was not based on future planning tells us that planning is not needed to create structures that were previously unimaginable (literally, there were no brains to imagine it). Generating lots of options through mutation and then selecting forward those that work has created the largest, most robust, complex, interesting, and beautiful system one could possibly imagine.

And at the same time, the unbelievable diversity of life that has been generated through this process is worth noting too. There are bees that create psychedelic honey, bugs that can freeze themselves solid, shrimp that can see a million more colors, humans with their big brains, and a million other unbelievable and shocking organisms that we have yet to discover.

One only needs to spend a few weeks looking around to see that the diversity of strategy in any human realm of execution is so much less diverse and creative. Seemingly, there are those that chase the primary social dream of the current time (money and success), and those that fall off this path. No one really gets that interesting - at least, I have yet to meet anyone who produces psychedelic honey - and those that do we usually laud as true originals, celebrities, cult leaders, or some other (morally ambiguous) person of interest.

For me, and perhaps for you, creating an iterative environment for yourself will hopefully allow you to capture this experimental nature and use it to push yourself into new areas. In some areas, you'll be a dodo, and those strategies will go extinct. In other places, potentially as a dust mite in someone's eyebrows, you'll find a niche.

Creating an iterative environment for yourself

So let’s imagine you’re in some uncertain environment and you’re looking to operate inside of it using present heuristics and an iterative environment. What would this look like? We take the two components of the evolutionary environment we created above:

  1. A simple present heuristic that decides whether a specific instance of an organism lives or dies → A set of simple present heuristics that decide if you continue or stop a specific strategy.
  2. An iterative environment where specific instances of organisms reproduce in a (potentially) mutated way → An explicitly iterative process where you combine, mutate, and ideate on new strategies, and introduce some of them.

Let me get specific and talk about how both of these systems exist in Mito:

  1. We have a current set of strategies and processes we use at Mito. For example, to communicate with our users, we have an initial drip sequence that goes out to new users. We also aim to send out one or two basic product tips per week. These email sets are a primary email communication strategy. I'll note here that we don't really document this explicitly, and the "current set of strategies" is just something we all know about. There's no need to write everything down in all cases (although in many cases, it can help to track strategies explicitly).
  2. We have a weekly sprint process, where we meet and go over the results for the week. One of the things we discuss is communication with users: if we're talking to them enough, if we have other ideas for getting in contact with them in a helpful way for both us and them, and more.

Note that what strategies consist of, and what timeline you should be trying to iterate on, depend highly on the setting. Also, what does it mean to "go over results"? We explore more specific implementation details in later sections, and so will pause on the specifics for now.

Focusing on execution vs. planning

The structure I describe, on top of the aforementioned benefits, has another really useful one: it keeps you focused on execution rather than planning.

Programming, riding a bike, building a product, making friends, kissing: most things worth doing are best learned by actually doing them. Present heuristics are explicitly designed to keep planning minimal and cheap, so that you don’t spend time planning but rather spend time actually doing things.

At the end of the day, one of the most important (controllable) inputs to one's success in any environment is how good one's execution is - and the best way to get better at executing is to execute. As such, any system that keeps one focused on executing is key!

Note here that just because execution is more important than planning doesn't mean one should only keep their head down and execute. The iterative approach makes sure that you look up and force yourself to iterate, to improve, and to mutate - to try new things that may not work, or may be the best strategy you've ever tried.

Iterative environments and your health

One thing that an iterative environment makes immediately clear is that you're engaged in an explicit marathon made up of smaller sprints.

Something that this makes obvious: you cannot afford to die of burnout.

In fact, given that just giving up is the leading cause (probably before death) of unfinished projects, burning out and so killing your lineage is a big no-go. As such, any sprint-strategy that might burn you out is immediately a no-go.

For me, this means working reasonable hours, sleeping well, not making my startup the only thing in my life. I have a feeling that this is true for most other founders, although what specifically you can handle you know better than I!

Chapter 14: Introducing Metrics

In which I argue that mutation and killing those things that do not work requires having some understanding of what is working, which requires metrics!

A metric for success

I argued above that looking back at the past and attempting to construct a causal narrative of what happened is pretty much just destined to lock you into your biases rather than help you go in the right direction. So, what does it mean then, when I say that an iterative execution framework with present heuristics in Mito requires “meeting to go over results?”

At the highest-level, an iterative execution framework needs some objective metric that judges whether or not it succeeds in accomplishing what you want it to.

There are a few things we need to define here. First, we need to talk about what a metric is in the first place. Then, we’ll talk about why some objectivity in the metric is necessary, and what limitations come as a result.

In later chapters, we will get into practical tools and tips for defining a metric; as I learned over the past 2 years at Mito, it is not so easy!

What a metric is

When I refer to a metric, I am actually talking about two things:

  1. The measure of some variable.
  2. A specific target goal for that variable within each iteration.

In Mito’s context, an example of a metric is “5% more revenue at the end of this week than at the start” or “2% higher retention this month than last month.” In a personal context, a metric might be “get outside and go on a walk 20 days this month” or “message a friend at least once a week.” There is a (measure) of some variable, and then an explicit target for what the measurement should look like.
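To make this two-part shape concrete, here is a minimal sketch in Python; the numbers and names are hypothetical, not anything Mito actually runs:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Metric:
    """A measurement of some variable, plus an explicit target for this iteration."""
    name: str
    measure: Callable[[], float]  # how to read the variable right now
    target: float                 # what the measurement should be by the end of the iteration

    def hit(self) -> bool:
        return self.measure() >= self.target

# Hypothetical numbers, just to show the two parts of a metric together:
revenue_at_week_start = 1000.0
revenue_now = 1070.0

weekly_revenue_growth = Metric(
    name="5% more revenue at the end of this week than at the start",
    measure=lambda: (revenue_now - revenue_at_week_start) / revenue_at_week_start,
    target=0.05,
)

print(weekly_revenue_growth.hit())  # True: 7% growth clears the 5% target
```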

On the objectivity of metrics

Imagine that you’re trying to define an iterative execution environment for your business. You have two options for the sort of metric you want to construct:

  1. 2% retention growth month over month.
  2. New users like the product more than the month before.

The first is a hard measurement. It's a real number that you can look at using any good analytics tool. The second is a soft, non-objective measurement. You could measure it by talking to people, by asking for survey information, etc.

There are a few reasons to be searching for an objective metric to capture your goals.

The first, and most practical, is that measurement error is both real and very common. As I will explore in later sections, it took upwards of 6 months for Mito to construct a retention metric that was measuring what we wanted it to. Although they do not solve this problem by any means, objective metrics make it easier to make sure you're measuring what you think you are - subjective metrics just introduce another level of challenge here.

The second problem is that of accountability. I won’t say much on it here, beyond reminding future Nate to include this accountability in the later social implications section of this book. Don’t forget! TODO this last sentence.

The third problem with subjective metrics is how they encourage one to lean into one's biases, much like narrative construction from the past. As an example, a single call with a user who really likes Mito often leaves us with the feeling of success - even if the vast majority of our users are unsatisfied. By leaning on a subjective metric, we're much more likely to just project our own emotional landscape of Mito's current state onto the metric, rather than really measuring what we want to measure.

On subjective metrics that feel objective

There is an interesting category of metrics that Mito has had in the past that I refer to as “subjective metrics that feel objective.”

As an example of this: the technical process in Mito (the part of the company that I lead, that is responsible for actually writing code) has the goal each week of “completing 80% of the sprint stories that we set out at the start of the week.”

On one hand, this feels very much like an objective metric. We write down all our sprint stories at the start of the week (change the website language, test and deploy the new concatenate functionality, fix a bug in the installer). And then at the end of the week we can look back and see if we actually did 80% of these stories.

But what is interesting here is that because we also set up the stories each week, effectively what we're doing is redefining a new metric every week. Each week, as we construct the sprint stories, we effectively define a new metric to hit: 80% of those specific sprint stories. This, in turn, allows us to subjectively play with how much we commit to. Rather than being held to some objective outcome, we again fall into the issues of measurement error and the leaning into narrative construction and biases that come with subjective measurement.

Sometimes, it is possible to take a process with a subjective metric that feels objective and turn it into a process that has a better, objective metric. For Mito, we took the "teams" process (which was the process that worked with teams to get them signed up for Mito) from an "80% of tasks done" goal to a "20% revenue growth in teams month over month" goal. We will explore this specific case in later sections.

Limitations of objective metrics

But in other cases, moving a process to an objective metric can be very hard. The technical process is fundamentally about implementing whatever work the rest of the company deems valuable, in the fastest, most robust, and most long-term sustainable way. How do we define an objective metric that measures this?

It is quite hard to do so. Development evades attempts to define a rigorous metric of productivity that is not easily game-able (just ask the FB interns who get return offers based on the number of lines of code they write).

We will explore some other limitations of objective metrics in the sections below.

A metric with no causal narrative

A metric is meant to tell you if the iterative execution environment you're executing inside of is succeeding or not. A metric is not meant to tell you a) why you are succeeding or not succeeding, or b) how you might go about succeeding. Indeed, if it facilitated either of these things, we would just be creating a mechanistic explanation.

How then, does it help? If you cannot use a metric to judge specific strategies or present heuristics, and it doesn’t guide the creation of new strategies, what the hell does it even do?

What a metric can do is control the rate of mutation in strategies and heuristics in your iterative execution environment. If you are meeting the metric part of your goal, then you still mutate, but you do so slowly. If you are not meeting the metric part of your goal, then you should kill more strategies, or at least mutate them faster.

The general idea here is that success should be met with exploitation of what is working. You cannot say exactly why what you are doing is working (in fact, some of what you are doing might be worse than doing nothing) - but at least you are succeeding according to your metric. And if you are not succeeding, you encourage exploration of new strategies and heuristics that might help you succeed.
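As a rough sketch of this exploit/explore knob (the fractions here are made up purely for illustration):

```python
def mutation_pressure(metric_hit: bool) -> dict:
    """Decide how aggressively to change strategies next iteration, based only on
    whether the metric's target was hit - not on any story about why."""
    if metric_hit:
        # Succeeding: exploit what exists, mutate slowly.
        return {"kill_fraction": 0.0, "mutate_fraction": 0.1}
    # Not succeeding: kill more strategies and mutate the rest faster.
    return {"kill_fraction": 0.3, "mutate_fraction": 0.5}

print(mutation_pressure(metric_hit=False))  # {'kill_fraction': 0.3, 'mutate_fraction': 0.5}
```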

The process of deciding which strategies and heuristics to kill and which to keep we explore in more detail in later sections. But you must do your best to avoid creating causal narratives that really just serve your biases, and not your ability to execute well! TODO.

Chapter 15: Practical Tips for Metrics

In which I give the 5 most important rules for defining good highest-level metrics, and give examples of how we didn’t follow them in Mito, and how this caused things to break.

Melding a metric and success

The most important feature of a metric is that a) it captures what successful execution would be, while b) actually making you (the operator) feel that success.

As an example, we could define a metric for Mito’s growth that is “Mito is used in at least one more fishery each month,” and aim to target this. But of course, this wouldn’t be success for Mito. No offense to the fisherpeople, but we don’t give a damn about them using Mito over anyone else.

When thinking about what successful execution is, it is useful to try and restrict yourself to what successful execution must be. That is: if the thing this execution environment is attempting to help succeeds, what must be true in the future? For a product like Mito, we must have revenue.

This in turn naturally leads to the metric of revenue growth. While we could target other things like “total number of users” (including non-paying users), this would be a mistake if success is indeed defined by revenue, as it is for most businesses in the long-term.

A brief aside on network effects and goals

Things get admittedly wishy-washy here, when it comes to avoiding planning.

For example, for some companies, avoiding revenue for as long as possible is actually the best bet. Most social networks (blarg) require massive growth before they have the chance to get value from their network in the form of ads.

First, this author would like to note his general suspicion about the ethical landscape around these business models. If one is executing on a company that looks like this, for what my opinion is worth, I vote don't.

If you must, then the highest-level metric you want may indeed just be a raw user count. If you're not a company that relies on network effects, then revenue should indeed be the highest-level metric!

Achieving your metric’s goal should feel like success

On top of capturing those things that must be true if your business is to succeed, a good metric should also make you, the operator, feel like you're succeeding!

The primary reasons for this are psychological ones.

First, a metric that feels like success is just much more practically motivating. If you can feel success in your bones, you’re much more likely to try and reach for it. The value of trying hard to meet a metric that actually means success is obvious.

Second, this iterative execution environment is meant to be a replacement for planning, which is a pleasurable mental activity. If we don’t provide some pleasant mental benefit to this iterative execution environment, you’re much more likely to go back to long-term planning, much to the disadvantage of your business.

The metric should be just on the limit of achievability

Your metric should be achievable if you try hard, and not achievable if you don’t try hard.

Note that hard here does not mean you burn yourself out by working 100s of hours. Hard might mean working hard in terms of hours for a day or two (avoiding burnout for lineage reasons), or, even better, working hard on the things that have the biggest bang for buck.

This is for similar reasons to the above point. Mito has a revenue growth goal: if that revenue goal was 1000% growth a week, it would clearly be impossible, and so as a result, totally not motivating. Similarly, if the goal was .0001% a week, it would be achievable without any effort, and so not actually be a metric at all.

The alignment of metric timelines with work timelines

Your metric should be on a timeline that matches the work it encourages.

As an example, let’s imagine that you’re a company whose bread and butter is enterprise sales - and you have an aggressive growth goal of ~20% revenue a month. There are a couple of ways to represent this as a metric:

  1. 20% revenue increase month-over-month
  2. .6% revenue increase day-over-day
  3. ~.025% revenue increase hour-over-hour
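These equivalents just fall out of compounding; a quick sketch, assuming a 30-day, 720-hour month:

```python
# Convert a monthly growth target into equivalent per-day and per-hour rates
# by compounding (assuming 30 days and 720 hours in a month).
monthly_growth = 0.20

daily = (1 + monthly_growth) ** (1 / 30) - 1    # ~0.0061, i.e. ~0.6% per day
hourly = (1 + monthly_growth) ** (1 / 720) - 1  # ~0.00025, i.e. ~0.025% per hour

print(f"{daily:.4%} per day, {hourly:.5%} per hour")
```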

Except, anyone who has done a drop of enterprise sales knows that a) the contracts are large, and b) the timelines are long. There's no chance you'll hit your hourly goal - the daily one is still pretty much nonsense, but monthly is more reasonable.

In practice, this alignment of timelines is important because it stops you from working hard on reaching a metric that isn’t useful. The hourly metric above is so absurd as to be not actionable, but if you had a daily revenue increase goal, it might encourage you to mutate strategies to try to extract small amounts of additional revenue from existing customers. This is not a good long-term growth strategy.

In Mito's case, we used to have a retention growth metric that was on a weekly timespan, but this was not long-term enough for us to make real progress on the metric. As a result, we would spend Thursday and Friday of each week sending users personalized emails to try and convince them to come back to the tool - despite the fact that we never observed these personalized emails working. It was simply that we had no other levers to pull!

Avoiding noise and measuring signal

Make sure that the change you are targeting in your metric is something you can actually measure meaningfully with the population size you have.

Here, we start getting more technical, but no less important. If you're defining a metric that is a change in a number, then you need to make sure that the number you are measuring is actually measuring something meaningful.

A short story to illustrate this point. At Mito, we used to target a 2% retention increase week over week (if this sounds very aggressive to you, it is - already breaking some of the above rules). In any case, this metric had another issue: we only had about 100 users coming on board per week, and the "natural variability" in the one-week retention metric was much, much greater than 2%.

If we made no changes at all to the product, and sent the exact same communications to users both weeks, we would routinely get variability in the retention measurement of > 4%. As such, trying to measure a real change of 2% was clearly impossible.

It took us a very long time to recognize this problem, and in the meantime we spent much of our time struggling with (and demoralized by) the fact that we were not hitting our goal. It was impossible for us to hit our goal because our goal wasn't measuring something real!

Practical tips for measuring signal and not noise:

  • Be clear about the size of the effect you’re trying to measure.
  • Look at the natural variation on the underlying metric. Make sure that it is less than this effect size, or measuring this will be very tough.
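One rough way to sanity-check this, treating one-week retention as a simple yes/no outcome per user (a simplification, but enough to see the size of the noise):

```python
import random

def measured_retention(n_users: int, true_rate: float) -> float:
    """Simulate one week's measured retention for a cohort of n_users, each of
    whom independently retains with probability true_rate."""
    retained = sum(random.random() < true_rate for _ in range(n_users))
    return retained / n_users

# With ~100 new users per week and an unchanging true retention of 30%, the
# week-to-week measurement bounces around by several percentage points even
# though nothing about the product changed - swamping a 2% target effect.
random.seed(0)
print([f"{measured_retention(100, 0.30):.0%}" for _ in range(10)])
```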

TODO: be clear about what this looks like in practice.

Metric should be easy to evaluate

TODO: talk about this!

Metrics should not be complicated to understand

TODO: talk about how we went to a subset of the cohort, and this introduced this notion of sample variance, which leads to distrust, and in turn less striving toward the metric.

Make sure you’re comparing across cohorts

If your metric is comparing things (which it probably is - at least past to present), then make sure you’re comparing things that actually make sense to be compared.

This might seem very obvious, and perhaps it is, but I can say that this is something that is actually quite easy to fuck up.

TODO: give the example where we were comparing earlier upgrade success to later upggrade success, but the earlier people had 3 more weeks to make the call.

Mito used to define retention metrics in terms of months of the year, but it turns out that 31 days is actually not super easily comparable to 28. That is an obvious example, but it actually took us some time to realize.

There are many more non-obvious examples of where you are not comparing cohorts but rather looking at some other “total number” metric. These metrics are often known as “vanity metrics” because they feel good, but in fact are not a good measurement of progress. A great example of this is just a total user count number.

In practice, it feels like you're comparing across cohorts, since you're comparing the total number of users this month to the total last month. But these two "cohorts" are defined over different months! As such, rather than measuring something that tells you if your product is improving, you're just measuring something that tells you if you're marketing more, or spending more, etc.

Avoid vanity metrics. Try and build your metrics across cohorts, if at all possible.
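As a toy illustration, with entirely hypothetical numbers:

```python
# Hypothetical numbers: two monthly signup cohorts of different sizes.
last_month = {"signups": 400, "retained_after_30_days": 80}
this_month = {"signups": 700, "retained_after_30_days": 126}

# Vanity view: the total number of retained users went up, so things look great.
print(this_month["retained_after_30_days"] > last_month["retained_after_30_days"])  # True

# Cohort view: the retention *rate* actually fell from 20% to 18% - the total
# only grew because more people signed up, not because the product improved.
last_rate = last_month["retained_after_30_days"] / last_month["signups"]  # 0.20
this_rate = this_month["retained_after_30_days"] / this_month["signups"]  # 0.18
print(this_rate > last_rate)  # False
```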

Metric and external communication

It's nice if your metric is a useful tool for communicating with the outside world. Practically, this means that if you tell someone your metric (and that you've been hitting it), they say "cool" rather than "why that one?"

This isn't just so that you can brag about your business - try to avoid using it for this. Rather, it is so that you can use your highest-level metric in the same way that others use a long-term plan.

With high-growth businesses, employees often join not because of the current state of the product or the revenue, but because of the potential for future growth. Communicating this potential for future growth can be very challenging if you don't have plans for the future - but if your metric can communicate how you've been succeeding, then you're good!

Chapter 16: Targeting Metrics and Prioritization

In which I talk about how to think about targeting a metric, decision making tools and model processes, and how this relates to prioritization.

On prioritizing to hit the metric

With any good metric, you're going to have to do some work to hit it. And with any work, there's a limited amount that you can accomplish in the timeframe you have.

As such, you need to prioritize which work to work on. How do you do this?

The most obvious answer, and where I always find myself starting, is to make a list of all the ideas I have for work I could do, and then to narrow this down to the things I think I should do (and have enough time for).

In some ways, what I'm doing internally is taking the list of things I could do and lining them up based on how much I think they will help - a bang-for-buck sort of argument. The answer is to use a certain class of present heuristics.

The difference between a proxy and a measurement

Why it’s important to be clear with your proxies

Chapter 16: The Structure We Have Created

In which I take a step back and review the structure we’ve created. Despite taking a few chapters to describe in detail, the actual structure is quite simple.

Execution strategies

There are some set of specific actions that you are taking to get things done. In a business, this might be sending product updates to users, adding new features, talking to users, raising money, or more.

That aim for a metric

The goal of these execution strategies is to hit a metric, which is a specific objective measurement as well as a goal for the value (or growth in the value) of that measurement over a time period.

Inside an iterative environment

These strategies are designed and executed inside of an iterative environment (one that iterates at least as often as the metric's timeline), where you get multiple chances at bat.

By evaluating strategies with present heuristics

These strategies are evaluated and evolved during these iterations through the use of present heuristics, which are decision making tools that do their best to avoid building mechanistic explanations, and are instead based on local information you can evaluate right now.
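The whole structure fits in a few lines; here is a sketch with illustrative stand-ins for the strategies, heuristics, and metric (none of this is a real API):

```python
import random

# Illustrative stand-ins: strategies are just named things we run each iteration,
# and a present heuristic is a yes/no rule evaluated on local, current information.
strategies = ["drip emails to new users", "weekly product tips", "cold outreach"]
present_heuristics = [lambda s: len(s) > 0]  # stand-in for e.g. "is this still cheap to run?"

def metric_hit() -> bool:
    return False  # e.g. "did retention grow 2% this iteration?"

def mutate(existing: list[str]) -> str:
    return random.choice(existing) + " (tweaked)"

def run_iteration(current: list[str]) -> list[str]:
    # Execute the current strategies for the iteration (elided), then keep only
    # those that every present heuristic is happy with.
    survivors = [s for s in current if all(h(s) for h in present_heuristics)]
    # The metric only sets the rate of change: mutate slowly when it's hit,
    # mutate faster when it's not.
    n_new = 1 if metric_hit() else 3
    return survivors + [mutate(survivors) for _ in range(n_new)]

print(run_iteration(strategies))
```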

What is missing from the above

There’s a bunch of implementation details missing from the above.

The most obvious, probably, is that I haven’t been clear where these present heuristics supposedly should come from.

I also have been very light on details about what mutating, evolving, and killing strategies looks like in practice. How might we do this?

We will take each of these questions in turn, starting with the first: how does one go about constructing present heuristics?

Chapter 16: Constructing Present Heuristics from Scratch

In which I talk about where it is possible to construct present heuristics from scratch, where it is not, and how to create present heuristics without attempting to create mechanistic explanations.

Reviewing a present heuristic

We introduced present heuristics above as: a decision making rule that is concerned primarily with the state of the world that you can currently observe with your own senses.

Let's be specific about what we mean by "current observable state of the world." This is a loose definition of current. If evaluating the present heuristic takes an hour, it's still present. If it takes a day, it's on the edge of being not present. A week is certainly not the present (what did you do last Sunday?). Through this restriction to the current time, we avoid building incorrect causal narratives and mechanistic explanations about complex systems in uncertain environments.

In practice, some details of the past can be relied on for the construction of a present heuristic, if you were tracking these things in the past. Examples:

  • If you keep track of the hours a specific feature takes to implement as you’re implementing it, you can rely on this number when constructing a present heuristic.
  • If you wrote down how each user felt about a specific communication as you were talking to them, you can use this information when constructing a present heuristic.
  • Etc.

In practice, this means that you can construct a present heuristic by relying on data collected through present observations made at any point in time. Note that this is not the same as looking back retroactively and trying to remember what the present evaluations were at that point in the past - your memory of the past really cannot be relied on (TODO: cite this!)
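One low-tech way to do this is to log the observation at the moment it happens; a sketch, with a hypothetical log file and feature name:

```python
import csv
from datetime import date

def log_feature_effort(name: str, estimated_days: float, actual_days: float,
                       path: str = "feature_log.csv") -> None:
    """Append one row per feature, recorded while the work is happening, so the
    numbers are present observations rather than later reconstructions."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), name, estimated_days, actual_days])

# Hypothetical entry, logged the day the work finished:
log_feature_effort("new installer flow", estimated_days=10, actual_days=3)
```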

An example of constructing a present heuristic

At Mito, we have a metric of increasing our month-over-month retention by >2%. To do so, we have a variety of heuristics that we apply to make decisions about what changes to work on for product.

For 6 months, we wanted to add support for Jupyter Notebooks to Mito. However, as the technical lead, I estimated that this would take upwards of 2 months to do, and so kept pushing it off (my first mistake was making a plan here).

When we eventually got around to implementing this work, it took a grand total of 1 week, which was something that I tracked while it was happening. My planning and estimations, as a result of being off by an order of magnitude, resulted in our most requested feature taking 6 months longer to complete than it should have. Shame on me.

I had done two things:

  1. I had written down my initial estimate (2 months of work), and written justification for using this estimate to push off the work. This made it clear that I was not misremembering the past: we had decided not to do this work as a result of this estimate.
  2. I tracked how long this integration took me from start to finish (about 1 week). This made it clear, from a present evaluation perspective, how absolutely far off my time estimations were.

This allowed us to create the following present heuristic:

Don't let time estimates decide what gets built; just start on whatever currently seems most valuable.

In practice, this present heuristic just encourages us to prioritize as much as we can to work on the work we believe to be the most valuable. Don’t spend time messing about with fragile estimations and plans, just get started on those things that seem like they would have the most bang.

The other present heuristics I got from this project

There are a variety of other heuristics that were developed from this project, as a result of me documenting the entirety of the creation process while it was occurring, including:

At the

On timelines

  • Immediate (less than a day)
  • Not a week
  • Def not a year

Isn’t this absurdly limiting?

yes.

  1. There are other ways to get good heuristics, as we will see below.

Chapter 17: Stealing Present Heuristics from Others

In which I talk about how you can rip off present heuristics from other folks, and some conditions to evaluate if other present heuristics make sense in your context.

When others’ present heuristics are relevant

Operators (not thought leaders)

Antifragility

Chapter 18: Cultural Sources of Present Heuristics

In which I respond to a class of popular current thinkers who argue that {the Bible | the old people | culture generally} is a good source of present heuristics that are applicable.

On cultural sources of present heuristics

You'll note that the above addendum - allowing yourself to eat food if you've seen other people eat it and not get sick - isn't really present, is it? It's based on some historical information about other people that you have observed.

Learning from other people

When learning from someone, don't ask "what decision did you make" but rather "how did you come to that decision."

This is a great source for mutation. For example, take why a company failed (that Pebble article) - the answer isn't "I didn't think of a long-term vision" - the question is why you didn't think of a long-term vision! "I didn't think it was important" then leads to: what did you prioritize, and why?

Chapter 19: The General Shape of Good Present Heuristics

In which I talk about some observed properties that many useful present heuristics have in common, and how they can be useful when evaluating new present heuristics. In some ways, these are present heuristics for present heuristics.

Chapter 20: Time to mutate

In which I talk about what mutation means in the context of strategies inside an iterative environment with present heuristics and metrics: where, when, why, and how to approach it.

TODO: be clear about what this means, give some examples where we fucked this up in Mito.

Meta-iteration: the changing present heuristics

TODO: argue about how one might think about needing to change your present heuristics. How do you know if you’re doing well enough? The answer: metrics!

Chapter 15: Evolving Strategies with Metrics

Why iteration and mutation are required

A present heuristic like “animal fitness”

TODO: talk about how this uniquely relates to each type of uncrtinaty

Chapter 13: Present Heuristics and Mutation

In which I note the relationship between present heuristics and mutation, and also note that the mutation might not just occur at the decision level but also at the meta-level of the heuristic? IDK...

Present heuristics are the process for selection? Not the animals themselves. But what if we put the present heuristics through a selection process as well?

TODO: this is an interesting meta-relationship I need to explore more!

Anti-fragility as a present heuristic

Chapter 9: Present Heuristics and Mutation

In which I explore the relationship between present heuristics and mutation, and argue that iteration is a core component of successfully applying present heuristics effectively.

Even in our rock example, we iteratively did things. This is important.

TODO: argue that reapplying present heuristics that worked in the past is key for humans, who do not have unlimited ability to redefine them - this is where we differ from nature.

TODO: argue that killing the things that don’t work is a key part of making mutation work.

Chapter 10: Metrics Drive Mutation

In which I argue that mutation and killing those things that do not work requires having some understanding of what is working, which requires metrics!

TODO: talk about how to develop good metrics. All of the things in the doc, about effect size, etc. And then also timeline match up, and also the lack of knowledge you have up front - so just making your metrics “what must be true for success.”

TODO: also argue that you need to be really clear with definitions

Chapter 10: On Non-iterative Things

In which I argue that almost everything can be defined in an iterative way, and that not doing so is likely a cop-out rather than an actual limitation of the system you’re working with.

TODO: Argue about SpaceX vs. the Space Shuttle, and how one is winning - there isn’t anything more “monolithic” than a rocket ship - it all goes up at once, to the literal moon - and yet we can still iterate here!

Chapter 8: The Risk of Ruin

In which I put some conditions on a very important limitation of present heuristics that all should respect.

TODO: talk about how conclusions about food not being poisonous are tough in the case of tail events - which food mostly does not have.

Chapter 8: Heuristics for Heuristics

In which I argue about why we start with basic heuristics, how we can abstract from there, and where and why this abstraction ends.

The decision making tool proposed above is that of a present heuristic - but it’s natural to wonder how to actually construct these in the first place. How might we go about creating the list of present heuristics that we use to make decisions?

Present Heuristics from Present Experience

Fundamentally, given that a present heuristic is a thing that comes from present experience, it’s not surprising that you should do your best to create them from present experience.

The structure for this is personal, I think. For me, I like to do retrospectives on those things which

TODO: argue what good present heuristics might look like? Things like “keeping options open” and “staying focused” and “prioritizing”? Or something. IDK.

Chapter 10: On Evolutionary Systems

Chapter 10: On Learning from Others

  • We try to

Anti-fragility as a present heuristic

as we’re mostly going to just confirm the things we already believe and further lock ourselves into


Chapter X: but our models and planning work so well

TODO: take the example of the space shuttle, and the development of rockets, as a great example where it seems like an explicit world model from science led to effective operation.

Then compare it to Elon Musk’s SpaceX, and show that an iterative approach using really simple decision making procedures (and link to them) is actually what leads to the most effective progress on rockets.

So even in the most salient examples - those things that seem like we couldn’t have done them without plans - this is where we see that rapid iteration and learning actually lead to much better outcomes in the long term.

TODO: figure out how to unify and address some long-term vision with iteration being the most effective way to get there. Note that Elon Musk’s long-term plan was very light on details.

Chapter X: On the social effects of no long-term planning

I


Long term vision

The key thing here is that this is where long-term vision is necessary: to keep people motivated. This should be a loose structure, something you can lean on like Elon Musk’s plan, but that doesn’t really drive any day-to-day decisions.

TODO: I need to insert a section (probably a chapter) here about the relationship between individual decision making tools and a group. There’s a totally different question here called “organizational structure” which results in things like democracy, hierarchy, etc.

This is not a book about that sort of decision making - but I think there’s potentially a way to generalize this individual work to the general case of groups. Note that the main issue is the social dynamic: other people pretty much think this is bullshit.

Chapter X: On Creating an Iterative Environment inside of a Non-iterative one

In which I explore how to create an iterative environment inside of a larger structure that might have other constraints you need to meet. How can you begin to execute like this inside of a larger structure?

Chapter X: the ethical implications of present heuristics

  • Note that ethical injunctions are often present heuristics (do not kill, in this current moment) - and not based on a particularly complex world model.
  • Talk about how we need to make sure our effective operation is actually a good thing - these are tied up in one and the same.
  • Do some heuristic development for this.

Sources:

  • Risk, Uncertainty, and Profit

    • https://fraser.stlouisfed.org/files/docs/publications/books/risk/riskuncertaintyprofit.pdf
    • Notes:
      • Chapter 1, paragraph 1 has a good bit on what abstraction does and why it is required in economics.
      • He makes some points about how physics and the hard sciences can reduce things to a single subcomponent to study with an experiment; but this is hard in the world of economics:
        • Chapter: Reducibility and composability of models.
          • There is emergent behavior when it comes to molecules (see: humans), but when you’re studying and making predictions based on physics, there
          • In every context, combining molecules together in the same way will result in the same reaction.
          • This is not true of an economic context, and indeed the opposite is true; in fact, I’d argue that defining the relevant context is in-fact impossible.
        • There’s some good stuff in chapter one where he makes the argument about how the search for laws/modeling generally is a good enterprise for what it allows you to predict. This is exactly what I am looking to refute on my chapter on model failure, and the difference between confidence in our model and the actual delta between model and item
        • “When the number of factors taken into account in deduction becomes large, the process rapidly becomes unmanageable and errors creep in, while the results generally loose in generality” - page 8
  • Decision Making under Uncertainty (I need to read this!)

    • https://algorithmsbook.com/files/dm.pdf

    • Note the examples they include in the introduction: the stock market is not like the rest of them... doh

    • 1.4.6: “An agent must be able to quantify its uncertainty to make informed decisions in uncertain environments” - literally wtf. How does this make any sense in the real world.

    • Decision Tree

      A decision tree is a tool for storing a compact representation of a discrete multivariate distribution of n binary variables.

      Instead of needing O(2^n) storage for a full lookup table over the variables, you only need O(COUNT(DISTINCT probabilities)) entries, which notably is upper bounded by O(2^n), and in some cases can be much smaller.

    • A full support distribution just means everything in the input space has positive probability?

      • A rather simple distribution is the multivariate uniform distribution, which assigns a constant probability density everywhere there is support
    • I am making my way through this text. It is followable.

    • I need to learn Bayes’ rule, and actually try and derive it and apply it. The derivation is easy, but what does applying it look like... (one tiny worked example is sketched just after this list)

    • Bayesian Networks

      • Primarily an efficiency thing! You can do the same reasoning over just a full joint distribution, but it requires you to store a ton more parameters.
      • How? Well there’s this notion of Conditional Independence which I am too tired to look at right now, but I should do my best to understand!
  • Against Bayesianism

    https://josephnoelwalker.com/139-david-deutsch/

    • Lots of good stuff that relates Popper to my thinking around inductivism, etc.
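Picking up the Bayes’ rule note in the sources above: here is a minimal worked sketch of what applying it can look like. The diagnostic-test setting and all of the numbers are my own made-up illustration, not from any of the sources.

    # A made-up diagnostic-test example of applying Bayes' rule:
    # P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

    p_disease = 0.01              # prior: 1% of people have the disease
    p_pos_given_disease = 0.95    # test sensitivity
    p_pos_given_healthy = 0.05    # false positive rate

    # total probability of testing positive
    p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

    # Bayes' rule
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(p_disease_given_pos)    # ~0.16: a positive test still leaves an ~84% chance you're healthy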

On why you can’t look at successful people

TODO: three arms argument. Maybe it’s the best. But everyone is just born with two arms, and no one thinks to try anything different.

On the social dynamics of being against planning

TODO: this is the hardest chapter. This is why I am writing this book. How to do this?

After about 8 months of this, we decided to pivot again.

We’d been working heavily

The summer before my senior year of college, I was struck

I decided I was going to make a startup. I had an idea for a tool that would allow you to take applications like Photoshop or iMovie and make them collaborative. I called it “Saga: Generalized Version Control.” I spent 3 months hacking together what I called an MVP in Python. After about 3 months, I brought on my two best friends from college (and high school), and we formed a company.

For the next semester or so, we applied to accelerators, built

The precursor

I’ve been attempting to start a company for the past 3 years, give or take a few months. I

At time of writing, the author is a co-founder of a small startup.

Chapter 2: Decision-making in uncertainty

As explored above, plans only really break down when they are made in uncertain environments, where the likelihood of some “disrupting event” is reasonably high. For a complex enough plan, over a long enough time frame, this likelihood ends up at 1.
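As a rough sketch of why (with numbers that are purely my own illustration, and assuming independent disruptions, which real life certainly does not grant us): if each step of a plan has even a small probability $p$ of being disrupted, then over $n$ steps

$$ P(\text{at least one disruption}) = 1 - (1-p)^n \rightarrow 1 \text{ as } n \rightarrow \infty $$

With $p = 0.05$ per week, a one-year plan already carries $1 - 0.95^{52} \approx 0.93$ - a roughly 93% chance of hitting at least one disrupting event.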

We’re in a bit of a lurch here. Complex, uncertain environments are those where plans are the most absurd, but at first glance they are also the places where planning calls to us the most. Calls for “let’s make a plan” happen only when you’re planning a particularly complex trip involving a car and boat and plane, and 3 of your 7 uncles, not when you’re walking down the street to the store.

To be clear, what we’re looking for here is not a replacement for planning. What we’re looking for is strategies for decision making in uncertainty.

Model Environments for Uncertainty

Let’s get a bit more formal. What do I mean when I say uncertainty? Let’s introduce some simple examples of what sort of environments I’m talking about. Notably here, I’m looking to avoid abstractions that miss the point - for example, environments that give too much structure to the uncertainty.

I’m not looking to present these decision making environments as anything other than demonstrating the differences between uncertainty, lack of information, etc. The whole point about uncertain environments is that they aren’t easily modeled!

Uncertainty and lack of information

There are two different ways in which “uncertainty” will manifest in our model environments; inherent uncertainty, and lack of information.

Inherent uncertainty: buying a house in San Francisco

Inherent uncertainty is when, no matter how much information you have about the system, there remain some real unknowns that you cannot even know exist.

A great example of inherent uncertainty is buying a house in San Francisco. You can read up on the local housing market, make sure you’re in a great neighborhood, meet with all of your neighbors, and do all of the due diligence necessary to be convinced that this house is a good investment. Then, on the day you finally sign the papers and purchase the house, an earthquake strikes SF, and your house burns down.

Now - you might recoil at this example - “everyone knows earthquakes might strike SF — you should have accounted for this!” you exclaim. But the earthquake here isn’t meant as a specific example of what you should think of - rather as an example of something you’ll inevitably forget. If not an earthquake, then a fire, or a foreign power’s attack on America, or a riot over bread prices, or an alien arriving and abducting your neighborhood, or a snorgleborgle bog, or something else entirely.

It is the very nature of these events to resist prediction. Indeed, you couldn’t begin to list these events, let alone estimate the probability that each occurs.

Lack of information: a walk in a new city

Uncertainty stemming from lack of information means that you, as an agent in the system, simply don’t know the relevant facts (or even, what the relevant facts are).

As an example: imagine you’re in a new city, and you’re about to shit your pants, and so you want to walk to the nearest bathroom as quickly as possible. The path there is uncertain - not because some unknown event will disrupt your walk, but simply because you don’t know what the

Chapter 3: Existing Strategies for Decision Making in Uncertainty

From future event probabilities to future event effects

There is a well-known thread of argument, most commonly presented and popularized by Nassim Nicholas Taleb in his book Antifragile, where he argues that uncertain environments mean you need to move away from estimating failure probabilities, as these are inherently unknowable.

Instead, we should move to evaluating systems through the effects that possible events will have on them. If stressing events have a weakening effect on the system, it is a fragile system; if stressing events have a strengthening effect on the system, it is an antifragile system. This means that we move from some future estimation to immediate, local evaluation - and mostly avoid trying to reason about the uncertainty at all.

Issues with future event effect evaluation

While this is a step in the right direction (and indeed, this writing is in some major way inspired by Antifragile), it doesn’t fully solve our problem.

For one, you are still required to make some implicit estimation of probabilities about future events. If you consider the effect of a super-massive asteroid hitting earth as a potential future event, then every system is fragile to this event - and the decision making equation really changes in favor of “drop everything and prepare for an asteroid.”

Of course, no one really thinks this is the right thing to be doing (although if you switched asteroid for pandemic, perhaps more people would agree). So there’s some cutoff point where we don’t consider the effects of future events, if they are too outlandish (even if they are clearly possible, ask the dinosaurs).

Evaluating future effects still requires some evaluation of the probability of future events, which is exactly what we were trying to avoid in the first place.

From future event effects to past event effects

In uncertain environments, we’d like to move away from an evaluation of the future. But how then do we make decisions?

This is a great question. Nassim Nicholas Taleb has an answer: we continue to do the things that worked in the past, something he calls “realizing an option.”

Notably, doing so does not require reasoning about future probabilities of events, or really the effects of those events; rather we just need to keep doing what has been working for us.

Past event effects and optionality

Nassim Nicholas Taleb argues that tinkering (mutation), and then seeing what works and continuing to do it (taking the option), is the real key to innovation. Here, his recommendations break down again, this time as a result of his lack of operating experience and of actually being an innovator.

In building Mito, a couple of facts became immediately obvious about realizing optionality: it’s totally impossible to figure out which option to take, let alone to be sure what options you’ve created for yourself, not to mention the almost impossible question of figuring out “what has worked” or what “worked” even really means.

This is the hard part of taking an option, and the strategies we lay out in the rest of this document are meant to attack it specifically.

A quick aside on the struggles of figuring out “what was working” in the past

At Mito, we do heavy tracking of our retention, which is a measure of how many users return to our product after using it once. This is a great metric for figuring out how much users like our product, and is something that we explicitly target as a highest level goal.

When this number went up and down in the past, we would go through a process of “retro” - where we would attempt to figure out which changes we made to the product had positive or negative impacts, so we could make more or less of those changes respectively. Notably: this was us attempting to use Nassim Nicholas Taleb’s optionality framework.

But after some thinking, we realized there were a few challenges:

  1. The product changes that we think are big are not necessarily the changes our users think are big. Rebuilding our spreadsheet library from scratch feels big because it was a lot of code, but that just biases us into thinking our users care as well. Users don’t care how much work we put in.
  2. Some changes we make are not in the release notes, or even something we are aware of. For example, we certainly fixed some sheet-crashing bugs that didn’t make it into release notes. Perhaps this was what caused the massive increase in retention, because 20% of people encountered them.
  3. There is no way to look for negative causes, which is to say things we did that stopped bad things. E.g. if we said “no” to developing a feature that crashes the sheet, this counterfactually increased retention (by not decreasing it). But we have no way of knowing which negative causes did what, let alone replicating them. There’s also no way of knowing how much of our progress is driven by negative vs. positive changes.
  4. Even if we can determine which changes had impacts, it is still almost impossible to say why the changes had the impact they did. For example, consider that we observe that adding a file browser improves importing rates. But it turns out the two interfaces are actually equivalent from the user perspective; it’s just that the new one crashes less. So we might draw the conclusion “we need a better interface for file selection” when actually the conclusion should be “we need to build a tool that crashes less.”

The basic conclusion from this is that it is practically impossible to figure out which product changes actually led to improvements in the metrics we care about, if we’re just looking back.

So notably, realizing options requires knowing which options are paying off. But in the case of complex product questions, doing so retroactively is effectively impossible.

Furthermore, given the complexities and biases on display in the above question, it’s natural to ask the question: is it actually beneficial to look back and try to measure this in the first place?

How biases can lead to past analysis being harmful rather than helpful: or, the nutritional epidemiology fallacy.

“Well,” you might say, “I agree that it’s pretty hard to figure out what caused changes in the past, but surely not looking back at any of our old data is dumb. It’s always better to have more information vs having less.”

Not really! In the case where the thing you measure is very likely to be biased, and in turn very likely to mislead you, the best thing to do is to not look at this old data to try and draw conclusions at all.

Nutritional epidemiology [the study of how what people eat affects their risk of disease] is a great example here, as it’s very similar to our situation. Researchers use non-randomized, historical data (with large reporting errors, given self-reporting) to attempt to tease out cause and effect between diet and disease.

And look! It turns out that eating a handful of berries a day can slow your cognitive decline for 2.5 years.

Except, of course, this is an absurd effect size, and totally bullshit. And yet, my mother went on a berry kick. In fact, she misinterpreted berries as including other small fruits like Figs, which it turns out are a sugar bomb, and may have contributed to my father developing type 2 diabetes. Sorry, dad.

Cause and effect

Really, none of the above should be surprising. There really is only one way of establishing causality, and that’s through a process of experiment. Doing that for Mito is possible (A/B testing), if incredibly hard. Doing this for non-repeatable tasks (e.g. most real things in life) is pretty much impossible.

Really, I don’t claim to know cause and effect of anything that falls into the uncertain realm, unless there exists a study that does an experiment. This includes relationships, world politics, etc.

As a quick note on the social effects of such maxims: most people really love planning, no matter what they tell you. Pretty much everyone alive loves discussing models of cause and effect. Put simply, the above comment is well outside how most people like to operate. You’ll (fairly often) find yourself in a situation where your friends are speculating on cause and effect about really complex things.

I can’t really say how to handle these, as I haven’t figured it out yet. To take a minor-rationalist approach, I think it depends on what your goals are in the conversations.

In most cases, I believe that joining in on speculating (and then coming back to writing this book) is a good strategy. The only thing I’ve really been able to convince people of is that I am a bit loony, and perhaps a conservative in the closet.

Chapter 4: Past, Future, and Present

So: planning for the future is dumb, because of uncertainty about what will happen. Planning from the past is stupid because we are more likely to fit our biases to the past than to really understand what happened, which is a fundamentally impossible enterprise.

What is left to us? Two things, really: experiments, and present-heuristics.

Experiments

Experiments are the only legit way to determine potential cause and effect, and even they suffer from many issues that make this very challenging. Simply put, they allow us to isolate a system so that we can change a single variable and see how it affects another variable of interest.

Note here: potential cause and effect. An experiment does not allow you to really establish cause and effect, only that something might have a cause and effect.

This is not just some “oh, experiments are hard and expensive” complaint, but something very fundamental to the complex nature of the systems we study.

If you change A, and are measuring B, you could conclude that “changing A changes B.” But in fact, it could be that the change to A also affected C, and only when C and A change together does B change.

A → B is not the same as A + C → B. Note here I am not saying A → C → B, but rather: the effect of the “treatment” is A and C together, not just A. That is, the systems are complex enough that even doing as well as we possibly can, we can never be 100% sure that we are isolating the single variable that is moving.
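Here is a minimal sketch of that failure mode. The “system” and the “treatment” are entirely made up by me for illustration: B only responds when A and C move together, but our treatment secretly moves both, so a naive experiment reads as clean evidence that A alone drives B.

    def b(a, c):
        # the hypothetical system under study: B responds only when A and C change together
        return 1.0 if (a and c) else 0.0

    def treatment():
        # we believe this only changes A, but it also changes C without our knowing
        return True, True   # (a, c)

    def control():
        return False, False

    print(b(*treatment()) - b(*control()))  # 1.0 - looks like "changing A changes B"
    print(b(True, False))                   # 0.0 - but A alone, with C held fixed, does nothing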

Other issues with experiments

The above concerns pale in comparison to the real criticism of experiments. In many cases, they are simply impossible. In others, they are prohibitively expensive. In most realistic situations, these two limitations make experiments to establish plausible causes and effects (or rule them out) impossible.

On ruling out other causes and effects

It might seem like an experiment might allow us to rule out some cause and effect. But we have to be careful here. For example, imagine we introduce a drug D that causes A to move, but this has no effect on B.

You might say this allows you to conclude that A does not cause B. But in fact, D might just be stopping B from changing. That is, you cannot conclude much beyond the effect of your treatment in that specific scenario - and nothing about the causal mechanisms of the treatment.

Mendelian Randomization

Present Heuristics

So, we cannot model or plan for the future in most relevant cases (or at least, doing so exposes us to dangers ← TODO: argue this well!). And we cannot easily look at the past for guidance without letting the narratives and biases we have rule our life. So what are we left with?

Well, we are left with the present.

  1. Don’t try and tell a story about the past
  2. Don’t try and predict the future
  3. All that’s left: the present
  4. So, we need to be able to use models that are only “answerable” in the present. That is all we have.

Can we really say heuristics are better?

The main argument of this book is simply:

  1. We create models of the world from our understanding of the past, which is fundamentally unknowable.
  2. We create models of the world for the purpose of planning for the future, which also fundamentally does not work.
  3. To combine these points: for most real world systems that are worth our time to study, outside of the context of theoretical physics, mathematics, models are fundamentally broken in a way that makes them more likely to hurt you than help you.
  4. Present-time heuristics are cheaper and more effective than these models at helping you operate in the real world.

Now, we’re in a real quandary. Making an argument like “present-time heuristics are better than models for most complex systems of interest” certainly needs some justification. If I were a decision theorist, I would then go about proving this in some model.

It might look something like: I have a real world system S, that I am modeling with M, which can be seen as a process from S → A, where A are my actions. But for any complex S, M is broken, and actually, some other decision making process H (for heuristic) leads to better results under these distributions.

Notably, what we have above is a meta-model, a model of a model of a system.

Thus, we reach the fundamental challenge of this argument: there is no way to prove that present-heuristics for decision making are better than formal models without using a formal model, but in this way the proof working out would also invalidate itself, as it would be invalidating the formal model it relies on.

What does this mean about heuristics

So, if we cannot prove that present-heuristics are better, what can we do?

We can operate. And through operation, we can see the results of our decision making.

This process has an ethic of doing. This process argues that the complexity of the real world is fundamentally irreducible to our understanding, and that this complexity is really all there is.

Chapter ?: On Heuristic Development

TODO: how, then, can we develop heuristics?

This theory of operation is pretty simple.

Here’s the problem: my thesis might be that “models are bad” and that “these local heuristics are better.” But proving that might require a model (of the bad sort), which might mean that the only way to prove that the heuristics are better requires actually accepting the heuristics in the first place.

And, worse, proving that the models work (from within the models) might succeed. So we end up with a setting where we have a provable thing vs. an unprovable one.

Really, the conclusion here is that the fundamental complexity of the world cannot be communicated through models; the things that matter are not the things we have the capacity to model.

Metrics

  1. Driving metric
  2. Non-actionable metric
    1. Real-time retention number

Should we measure what affected our retention?

Good question. I’ll tell ya what - at the start of this document, I thought yes! It’s something I’ve done before, and so I got cracking the way I normally do, and made this:

...

At least no one can accuse me of laziness.

But then I realized the above issues with looking at the changes we made like this. I realized that I was filtering the changes we made down to the things that I thought were big - exactly the bias described above. And I realized, as I was writing the release notes this morning, that the lack of detail I put in them for all of history means they are a really incomplete record of the changes we made anyway.

I do not believe we currently have the infrastructure to figure out what impacted our retention, and I further believe that attempting to figure out cause and effect from historical data is way more likely to mislead us than to lead us in the right direction - to focusing on product changes that make an impact!

And the cost of us being misled? We decide to focus on product work that hasn’t changed things in the past, even though we expect it to, and get frustrated when it doesn’t change anything.

Rationality and decision making in uncertainty

There’s a common thread of reasoning that goes something like the following:

  1. Yes, we acknowledge that you do not have a perfect decision making process for your decisions, as you’re in an uncertain environment.
  2. However, you do have some decision making process / evaluation procedure to arrive at the decision you did.
  3. Thus, this is better than nothing, so you should go with the decision from your decision procedure, rather than some other, more-basic-and-stupid-sounding strategy, like randomly selecting from the possible options.

This informal argument is also formalized into some scary-sounding mathematics that make it hard to disagree with such things. Consider the following example.

TODO:

Uncertainty in existing decision making models

  • https://arxiv.org/abs/1905.09638 ← note that this describes the same two types of uncertainty that I came up with, all by myself. How cool!

Chapter 2: The Past, the Future, and the Present

In the sections below, we will develop some basic mathematics to show that the above is not always true.

A toy model of a decision in uncertainty

Let’s imagine you are an actor with a single choice among a set $D$ of options. Let’s make the strong assumption that each option $d \in D$ has a payout of $payout(d)$.

Note that $payout(d)$ here is fundamentally unknowable to us, as the agent making the decision, as we are in an uncertain environment. Instead, you have $estimated\_payout(d)$, which is your best estimation of this payout. Note that the more uncertain the environment, the more likely it is that $payout(d)$ and $estimated\_payout(d)$ differ.

The above argument holds that

But let’s consider a few cases where this isn’t the case:

  1. Your mental biases lead you to overvalue

TODO: this whole section develops a model that pretty much just says “uh, yeah, what if you’re wrong; which isn’t super useful. We need to argue about tail events that flip the payoff from negative to positive or something, to really say something useful. Also, it would be nice if I could figure out how to bring in the different types of uncertainty within the structure of the payout - maybe there is payout (which is some real, unknowable distribution), best_known_payout (which is a distribution that differs from the first, but is the most educated possible agent, and then actual_known_payout).

I also feel like there would be some useful examples here of where uncertainty has entered into almost-reasonable decision making. How about: Chernobyl (RBMK reactors don’t explode), etc.

You have some evaluation procedure you can run, decision(D) -> d. That is, given the set of options, it returns the best option.
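A minimal sketch of this setup in code (all of the distributions and numbers are my own stand-ins, purely for illustration): the agent can only run decision(D) over its estimates, and the more uncertain the environment, the further the chosen option’s true payout can be from the best available one.

    import random

    random.seed(0)

    D = range(10)                                    # the set of options
    payout = {d: random.gauss(0, 1) for d in D}      # true payouts: unknowable to the agent
    UNCERTAINTY = 2.0                                # more uncertainty -> larger estimation error
    estimated_payout = {d: payout[d] + random.gauss(0, UNCERTAINTY) for d in D}

    def decision(options):
        # the evaluation procedure: pick the option with the best *estimated* payout
        return max(options, key=lambda d: estimated_payout[d])

    chosen = decision(D)
    best = max(D, key=lambda d: payout[d])
    print(payout[chosen], payout[best])  # with enough uncertainty these can be far apart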

Now imagine each

How can we go about operating in complex, uncertain environments?

If we’re no longer going to deal in plans, then one option is to

Chapter 2: Squirrel Strategies

Necessary Conditions for Squirrel Strategies


TODO: I need to think about the dynamics and distribution about the above!

Chapter 3: It’s Loud in Here

In the past 3 years of attempting to build a company, I’ve learned a lot. Some of the most surprising learnings are around how to create good “fitness metrics.” I’ve written about it extensively, but the TLDR is:

  1. Be clear about what effect size we’re expecting.
  2. Make sure the metric has the ability to measure that effect size (e.g. is robust against noise).
  3. Compare across cohorts. Aka, make sure you’re measuring the different groups on the same metric on a symmetric time period.

When creating squirrel strategies, the same holds true. If, on some hedonistic quest, I decide my measure of fitness is just pure happiness, and my pure happiness moves up and down each month independent of my actual actions, then this metric will cull and evolve those actions randomly. Not only must there be an effect of my actions on the fitness metric, but I need to be sure that what I am measuring is indeed the effect, and not just random noise.
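A tiny sketch of checking this (the numbers are hypothetical, my own illustration): compare the month-to-month noise in the metric against the effect size you realistically expect your actions to have. If the noise dwarfs the effect, the metric is culling actions at random.

    import random, statistics

    random.seed(0)

    NOISE_SD = 10.0         # how much the metric moves month to month with no action at all
    EXPECTED_EFFECT = 2.0   # the effect size I realistically expect an action to have

    baseline_months = [random.gauss(50, NOISE_SD) for _ in range(12)]
    print("noise in the metric:", round(statistics.stdev(baseline_months), 1))
    print("expected effect:    ", EXPECTED_EFFECT)
    # If the expected effect is a fraction of the noise, a single month's move tells you
    # nothing about whether an action "worked" - the failure described above.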

Moreover, my human brain is no simple product. With an (increasingly less) shitty spreadsheet data science tool, it is reasonably appropriate to treat each new month of users as a reasonably differentiated cohort. In the case of my mind, this is not the case: each new month is not a clean new starting point - not a fresh cohort entering the metric - as my mindspace is a function of all previous months.

But just because not all our conditions hold does not mean we toss them all out. If we want to drive our behavior with some primary metric, we still need to make sure that this metric measures a real effect (and an effect we care about).

Chapter 4: the cost and joy of decision making

Chapter 5: heuristics and uncertainty

TODO: build some models that make an argument that heuristics can actually perform better than other things.

Chapter 6: a concrete proposal for decision making in uncertainty

  1. Don’t spend much time debating decisions. It’s an increasingly expensive cost.
  2. Continue to do things that worked in the past.
  3. Stop doing things that didn’t work in the past.
  4. Spend a lot of your energy trying to figure out if an event worked, or didn’t work, or if you simply don’t have enough signal to tell. If you don’t have enough signal to tell, figure out how to amplify it.
    1. TODO: we need a chapter on cause and effect, and talking about how hard this really is, and really what the problem is here is that we don’t really know what causes what.

Chapter 7: you don’t know what happened in the past

  1. There is the issue we raise above about attribution which is very challenging.
  2. On a personal level, in your own life, this is hard.
  3. With a product and thousands of people, this is even harder.
  4. In history, this is impossible:
    1. Our historical understanding is mostly myth that represents how we want to think about the world.
    2. The Dawn of Everything, how we really don’t
  5. The only way to know which option to take is to run explicit experiments.
    1. In some contexts, experiments are impossible.

Chapter 8: quantified self, n=1 experiments

HMM. I need to try this to write about it realistically.

Chapter 9: Bringing Squirrel Strategies to Teams

Chapter 3: Business and Uncertainty

I’ve written extensively about how to come up with metrics that allow you to accurately judge if your product is doing better. At a high level, the three key things to keep in mind are

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2657782 ← This paper does a better job justifying my thinking than anything I’d be able to write myself!

Chapter 10: meta strategies

  1. If all of this is true, then we should expect the metrics to evolve as well.
  2. So we need to be dynamic about what we’re targeting

Types of Metrics

  • There is a growth-based one
  • And there is one where we want to target a specific binary outcome
    • Maybe we can transform this into a non-binary metric by making the percentage of time periods in which we hit it go up over time, until it reaches one? (See the sketch just below.)
    • Why start low? For the same reason that we only target the max we can hit - realistic goals are necessary.
  • These are not the same, and should be treated differently... how?
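A toy sketch of that transformation (the data is made up): turn the binary “did we hit the target this period” into a non-binary metric by tracking the fraction of recent periods in which we hit it, and target that fraction climbing toward one.

    hits = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]   # 1 = hit the target in that period
    WINDOW = 4

    rolling_hit_rate = [
        sum(hits[max(0, i - WINDOW + 1): i + 1]) / min(WINDOW, i + 1)
        for i in range(len(hits))
    ]
    print(rolling_hit_rate)  # drive this toward 1.0 over time, starting from a realistic level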

Existing Literature

  • Decision Theory Primer: https://plato.stanford.edu/entries/decision-theory
    • Completeness and Transitivity are clearly bullshit, and not real things that anyone maintains (namely, the set of known outcomes is fundamentally unknowable, and attempting to structure or introspect it is terrible).
    • The money pump argument is so dumb; agents would just stop going in a circle because it would lead to them falling apart. Agents don’t just reason at one step - they reason about the future as well!
    • Ordinal vs. cardinal utility is important; percentage improvements imply a cardinal utility? Do we really want to target growth to completion? This is interesting:
      • The main question: this is about decision making under uncertainty - does it require a concrete targeting metric? I feel like the answer is yes, but this needs to be justified.
      • Because we do have some goals that
    • When money is involved in the equation, one could think about the utilities as being explicitly cardinal and comparable; but this is not really the case (or at least, not really how anyone thinks about it).
    • Paradoxes introduced in the formal mathematics and the corresponding responses are nonsense, and a demonstration that these formalisms are fucking stupid; agents don’t operate like this.
      • Why do we expect artificial agents to work like this? I am not sure

https://www.econlib.org/archives/2013/08/economath_fails.html

For goal setting, ambivalent about goal achieving

TODO: something about how long-term and short-term plans differ in their relationship to the goal (how directly they are concerned with it), and how this indirectness is actually better, as it leads you to focus on the execution rather than the

Anxiety, zen, and the future

But we evolved to do long-term planning!

As you might be able to guess from the above, this is an essay that argues for restricting the cases in which you do long-term planning. This, at first glance, seems pretty dumb.

After all, humans did evolve brains that are capable of doing long-term planning for a reason, right?

I argue that the sorts of long-term plans that the human brain evolved for are not what we find ourselves planning for most of the time, in the modern environment we exist in currently. More specifically:

  1. Most of the environments that we operate in would be fundamentally unrecognizable to 99.9999% of our ancestors. For 99.99999% of our ancestors, the environments they operated in would have been recognizable to each other.
  2. The environments we're operating in are evolving at a pace that humans have never before seen. This rapid change is a new feature of our environments.

Both of these facts mean that we’re in a hyper-novel, increasingly complex, and rapidly moving environment. All of these facts combine to mean one thing: our long-term planning infrastructure, which was raised on planning for next year in the neighboring valley, is woefully unequipped to plan for the future that currently exists.

Human hubris

I think that if you asked most people about their ability to create and follow through with long-term plans that accomplish their goals, most people would say they believe in their own planning, or at the very least that planning for the future is, in most cases, a very worthwhile thing.

I think, though, if you tracked the long-term planning most people did in practice, you would discover:

  1. Only a fraction of the plans that are made even get started, and a much smaller fraction get completed.
  2. Of the plans that get completed, only a fraction of them would have the desired effect that the planner had.
  3. Of those plans that had the desired effect, many of them have unintended consequences that make this plan questionable at "achieving the goal" that the plan-creator had in the first place.

For myself at least, I estimate I complete less than 5% of the plans I made, and less than half of these achieve what I want them to. Think plans like "I want this job" or "I want to try keto" or "I'm gonna follow this exercise routine." I would say that less than 1% of the plans I create (and really intend to follow) actually achieve my goals.

So why do we keep planning then?

In reflecting on this, the question immediately becomes: why do I keep creating long-term plans, if they don't accomplish my long-term goals?

I think the reason is that long-term planning, as an act in itself, is a close relative of day-dreaming. Planning allows you to imagine a future world where your goals are accomplished and you are happy. Having a plan to get to that world makes it seem possible or even likely.

If we think that long-term planning really did make sense for most of history - when we weren’t operating in the current, hyper-novel modern environment - and that it was our evolutionary niche, then it makes perfect sense that it is pleasurable. Evolution likes to reward us for putting in work on things that help our lineage survive. See: pleasure from sex.

But if the new, hyper-novel modern environment we’re operating in makes the plans we create 100x less useful, then this pleasure in planning for the future drives us nowhere. Long-term planning in hyper-novel environments is just bullshit that feels good. Feel free to do it, but don’t delude yourself: it’s just mental masturbation.

Long-term planning, but from the past

Other than long-term planning, the other thing that the human brain is particularly good at is learning. And what’s nice about learning is that you can learn from the past.

Now: the past is not some simple linear history of events. Effects and causes are almost impossible to untangle, and figuring out why something happened is really quite hard, even with as simple a question as "why aren't we friends anymore."

To make things worse, each person reporting history to you has some structure or narrative they already believe, through which the history is relayed - making it very hard to know if the facts you’re getting are worth anything.

But reflection on history, especially history you were there for, is orders of magnitude easier than planning for the future. At least you have some chance of untangling cause and effect!

The point here is that we need not claim all of our higher functioning is non-functional in the hyper-novel environments we live in. We can insist we won’t long-term plan while still learning from the past and using it to make decisions about the “current state of the world.”

How to make decisions without long-term planning

How do you actually operationalize all the above into a process that makes decisions and drives things forward? At a high-level, we're looking to create an evolutionary sandbox.

This evolutionary sandbox must preserve the structure of evolution, if it hopes to reap the benefits that it creates. These structures include:

  1. A measure of fitness that is concerned with the survival of a lineage, not just a single entity.
  2. Death of unfit entities, and preservation of more fit entities.
  3. Creation of new structures that have the chance to compete.

We leave the specifics of this structure to the specific area you're applying this to! See some basic examples below, though.
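As one basic example, here is a minimal sketch of such a sandbox. Every name and number is my own stand-in, not a real Mito process: candidate strategies compete on a measured fitness, the less fit half dies each epoch, and mutated copies of the survivors enter the pool.

    import random

    random.seed(0)

    def fitness(strategy):
        # stand-in for a real measurement over the strategy's lineage (e.g. a retention metric)
        return sum(strategy) + random.gauss(0, 0.5)

    def mutate(strategy):
        # creation of a new structure that gets the chance to compete
        child = list(strategy)
        i = random.randrange(len(child))
        child[i] += random.choice([-1, 1])
        return child

    population = [[0, 0, 0] for _ in range(8)]
    for epoch in range(20):
        population.sort(key=fitness, reverse=True)
        survivors = population[:4]                                       # the more fit are preserved
        children = [mutate(random.choice(survivors)) for _ in range(4)]  # new competitors
        population = survivors + children                                # the unfit half has died

    print(max(population, key=fitness))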

How this relates to the MVP

  • MVP is a single mutation
  • Do single, easy experiments, and then mutate them
  • Father and the Table:
    • He built this really nice table as his first woodworking project.
    • It stopped him from using the woodworking space, as the table was too nice
    • If he had done something shittier, it wouldn't have stopped him
  • Speed is a relevant question here: the length of an epoch.

How this relates to science (evolution v revolution)

I made the same model for evolution and science

Science is an evolutionary process, isn’t it? These genes are the ones that survive

The nature of scientific revolutions: the idea is that having a structure to operate in allows for progress while also limiting it. There needs to be a revolution

I wonder if evolution is like that: or if it is constant and small or if sometimes there are huge new jumps

Models that are relevant

  • https://open.spotify.com/episode/2SxPQ2IvkmwLsfkZZLHlib?si=f32df0ee14a14e3c ← this podcast triggered this:
  • Viewing cancer through an evolutionary lens:
    • The idea here is that cancer is an evolutionary growth
    • But note: this is not predator prey relationships, because when the predator (our immune system) kills the prey (the cancer), the predator also dies. So the dynamics are different here.
    • So actually we're looking at something closer to: https://evolution.berkeley.edu/the-relevance-of-evolution/agriculture/refuges-of-genetic-variation-controlling-crop-pest-evolution/ (Evolutionary Models of Pests!)
  • The idea:
    • When you are killing the cancer, you actually want to not kill all of it, if the treatment is working.
    • Because the cancer will then be dominated by cells that are resistant to the treatment.
    • So, you do this thing called a refuge:
      • You don't put pesticide on the entire field, but only on some of it.
      • Because there is a cost to being resistant to the pesticide, when the pesticide is not applied, the non-resistant pests will outcompete the resistant ones and repopulate the entire field:
        • Note that this is a process that takes a bit, and the net result is that integrated pest load is lower during this time.
      • Then, you can do this again! You can just control the pests again, and by continually "reintroducing" pests that are not resistant, you can keep controlling them.
    • The other idea: when you look at extinction events, you get to extinction by lots of small perturbations, not one big thing:
      • The point here is that when a population gets small, it becomes very susceptible to small effects that knock it down even farther.
      • This means that we shouldn't really be looking for a single, silver bullet to cure something like cancer, but rather we should knock it low (using the above strategies), and then throw the kitchen sink at it to get rid of the remainder of it (these small perturbations).

How does this relate to the above document.

  • The first major idea: the model that you pick to model something non-evolutionary needs to accurately represent the dynamics of the new system you're operating in.
    • Cancer is not predator/prey. It is pest/crop.
    • Similarly, when thinking about what memes are; they seem like their dynamics would be closest to viruses or parasites:
      • https://pubmed.ncbi.nlm.nih.gov/8919665/: there is a tradeoff between how a parasite exploits the host organism and how much it sucks out of it.
      • Also, what about mutually beneficial parasites? Is this a thing that would most closely model the dynamics of the system of memes...
    • Another point here: I do think memes are evolutionary objects, and the ideas that I am choosing to continue with (I should move out) are really just local memes (or memes over time); I need to think about unifying these in my model!
  • The second major idea: extinction events, and the dynamics around extinction are very worth of study for understanding evolutionary dynamics in other contexts.
    • The main example that we use of extinction is dinosaurs, but really this is the exception (a single event that leads to their extinction) rather than the rule.
    • Usually, there is a major cause or two, and then there are small further causes that really knock the species off; notably, it's pretty hard to attribute it to a single effect.
    • When it comes to understanding survival of ideas/memes/things, it really helps to understand how those things would die:
      • I am not sure how exactly this enters into the model... where would I actually use this in decision making? IDK - but these dynamics must be useful for killing off ideas...
        • Here's one idea: imagine I have some harmful meme (e.g. one about how success or sex will bring me happiness, vs. finding happiness in myself).
        • I want it to go extinct (for now, reminder that it might reevolve). Well, I need to first do a major perturbation, until I don't have it.
          • The main perturbation to the narrative might be the psychedelics, and a real meditation on how the narrative is negative.
          • Then, I might go on the most fun vacation that I can think of, and try and have as much fun as possible to do it.

Evolution and Antifragility

  • A big question here is how you can end up at many of the same conclusions when analyzing things through an evolutionary lens (e.g. Rick Johnson's Peter Attia podcast on avoiding fructose), and through the lens of anti-fragility (e.g. when Nassim Taleb says that he no longer eats apples, as they have been bred beyond recognition).
  • But then you realize: antifragility is a key property of evolution. Evolutionary systems are in some ways defined by their antifragility. It might just be the case that anything that is antifragile is in fact an evolutionary system.
  • The takeaways for me: the formalization of the system, specifically with respect to non-linearity (and long-tails) is one part of this that I don't understand and really should focus on understanding.

The problem with single interaction thinking

  • Inspiration
    • I went to pee after writing about how this related to antifragility, and I saw a book on game theory, and I have spent the past few months talking about how we never think about the iterated games; then I realized this was very related to this question of "ruin" that nassim nicholas taleb always talks about! This was in the course of a single pee.
  • When we're using formal models or tools to analyze things, and we use a model that only captures one time period, we miss the evolutionary dynamics of the system.
    • When an economist labels someone as "risk averse" because they won't bet on something that has a positive expected value, they aren't recognizing that in the long term, all of these models (that require full input) do in fact go bust! Thus, we miss things here.
      • Coding for this question

        • I wanted to see how long it would take to go bust, but this is a bit annoying to simulate.
        • It is identical to the problem: if you have a coin with some probability of coming up heads, say t, do you always eventually get a run of heads of a certain length? What is the formula that describes how long that takes...
        import random
        
        def main():
            NUM_EXPERIMENTS = 100    # number of simulated flip sequences per length
            PROB_HEADS = .2          # probability a single flip comes up heads
            LOOKING_LENGTH = 10      # the run of consecutive heads we are looking for
        
            for length in range(1000, 100000, 1000):
                experiments_with_run = 0
                for _ in range(NUM_EXPERIMENTS):
                    count = 0
                    for _ in range(length):
                        if random.random() < PROB_HEADS:
                            count += 1
                            if count >= LOOKING_LENGTH:
                                experiments_with_run += 1
                                break
                        else:
                            count = 0
        
                # fraction of simulated sequences of this length that contained the run
                print(f"For {length}, {experiments_with_run / NUM_EXPERIMENTS} had chains of {LOOKING_LENGTH} length")
        
        main()
        
      • Math for this question (TODO)

        • With a fair coin, in a sequence of length n, what are the odds that you will get at least k heads in a row, where k <= n? (One exact route via a recurrence is sketched just after this list.)

          $$ \sum_{i=k}^{i=n} P(\text{exists a sequence of length i}) $$

        • But this is hard, because then we don't know how many sequences of that length there can be!

        • I also kinda want to prove something different here...

        • I am doing this math on paper and making progress. It is fun, and I am learning a bit of math that is genuinely helpful.

          • I wonder if a simulator would be useful!
    • The flip side of this is how I used to analyze protocols; I would say "it can be attacked, so it is insecure" - but really this misses so much of the dynamics that determine whether a system will survive the attack... and that is the crucial part of the dynamic (can it stay alive!).
  • Therefore: I am entirely against static models. If your model does not have a notion of the evolution of the system over time, then I think it's a bullshit, non-predictive model.
    • E.g. in modeling the stock market, we need to be able to think about how actors will optimize their latency to the market - and then you get HFT.
    • And then other agents will react with the threat of regulation (which is something that is not in the models, for example).
  • The missing thing in these models is not that the notion of "rationality" is limited, but rather that the models are static - they only score a single round, and so never see what repeated play does to the player.
    • (This is the same coin question as above: I wanted to ask, if I have a coin with some probability of heads, how long until a long run of heads shows up?)
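
A minimal sketch of the ruin point above (the payoff numbers are my own example, not anything from the notes): a bet that pays +50% or -40% with equal probability has positive expected value on every round, yet a player who keeps putting their whole bankroll on it ends up ruined on essentially every simulated path.

    import random
    import statistics

    def simulate_bettor(num_rounds, up=1.5, down=0.6):
        # Multiply wealth by `up` or `down` with equal probability each round.
        wealth = 1.0
        for _ in range(num_rounds):
            wealth *= up if random.random() < 0.5 else down
        return wealth

    def main():
        NUM_BETTORS = 10_000
        NUM_ROUNDS = 1_000

        # A single round has expected value 0.5*1.5 + 0.5*0.6 = 1.05 > 1,
        # so a one-period model says "always take the bet."
        final_wealth = [simulate_bettor(NUM_ROUNDS) for _ in range(NUM_BETTORS)]
        ruined = sum(1 for w in final_wealth if w < 0.01) / NUM_BETTORS

        # But the per-round growth factor along a single path is
        # sqrt(1.5 * 0.6) ~= 0.95 < 1, so almost every trajectory decays.
        print(f"median final wealth: {statistics.median(final_wealth):.3g}")
        print(f"fraction with <1% of starting wealth: {ruined:.1%}")

    main()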
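
And one standard way to set up the run-length math (my own reconstruction, not something worked out in the notes above): instead of summing over run lengths, track the probability that no run has appeared yet. Let a_n be the probability that a sequence of n flips, each heads with probability t, contains no run of k consecutive heads. Conditioning on the position of the first tail gives:

    $$ a_n = 1 \quad (n < k), \qquad a_n = \sum_{j=1}^{k} (1-t)\, t^{j-1}\, a_{n-j} \quad (n \ge k) $$

The answer to the question above is then 1 - a_n, which is easy to compute by dynamic programming and to check against the simulation.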

My dream about an evolutionary sandbox

  • Wednesday, December 8th; I just had a dream about how intelligent life evolved;
  • it was that someone created a worldwide game with prizes, where the goal was to create creatures that then went and evolved.
  • There was a huge development because someone got to a "bird."
  • There was some confusion, because there was some crypto stuff involved - ownership of the game and such - that didn't really make sense to me.
  • Then, for some reason, the place we were talking about it in turned into an all-out fight, and we fought to the death.
  • Anyways, an MMORPG seems like a really interesting angle to do evolution (a toy sketch follows this list);
    • You could be the creature, and evolution is done by the game?
    • You control the environment, evolution is done by the game?
    • You write the evolution algorithm on the genes?
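
A toy sketch of what "evolution is done by the game" could look like - everything here (the float-vector genomes, the made-up fitness function) is invented for illustration, not part of the dream. The game scores a population, keeps the best, and breeds mutated copies.

    import random

    def fitness(genome, target):
        # Higher is better: negative squared distance to the environment's target.
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    def evolve(pop_size=50, genome_len=4, generations=200):
        target = [random.uniform(-1, 1) for _ in range(genome_len)]  # the "environment"
        population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # The game scores everyone, keeps the top quarter, and breeds
            # mutated copies of the survivors.
            population.sort(key=lambda g: fitness(g, target), reverse=True)
            survivors = population[: pop_size // 4]
            population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
        return max(population, key=lambda g: fitness(g, target)), target

    best, target = evolve()
    print("target:", [round(x, 2) for x in target])
    print("best:  ", [round(x, 2) for x in best])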

I don't like this idea

  • First of all, cults are really, supremely fucked up: https://www.theatlantic.com/national/archive/2014/06/the-seven-signs-youre-in-a-cult/361400/
  • Second, what if you used the tools of cult deprogramming to get rid of the negative influence of society and goal-setting in my brain?

Asexual vs sexual reproduction

Asexual is good if the future looks exactly the same. Sexual is better if the future is more uncertain, as this increases random variation.

This is relevant to how one evolves ideas. Specifically, I should think about what I'm doing and then have a baby with someone else's idea.

I could even use GPT-3 to join these two ideas together (a rough sketch follows). Notably, it should be a mate I find attractive, but also different from me!
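
A rough sketch of the GPT-3 idea-breeding, using the completion-style OpenAI Python library; the model name, prompt, and parameters are my own guesses, not anything prescribed above.

    import os
    import openai

    # "Having a baby with someone else's idea": hand GPT-3 two ideas and ask
    # for an offspring that inherits from both.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    def breed_ideas(mine, theirs):
        prompt = (
            f"Idea A: {mine}\n"
            f"Idea B: {theirs}\n"
            "Describe one new idea that inherits the core of A and the core of B, "
            "plus one surprising mutation:\n"
        )
        response = openai.Completion.create(
            model="text-davinci-002",
            prompt=prompt,
            max_tokens=200,
            temperature=0.9,  # higher temperature ~ more "mutation"
        )
        return response.choices[0].text.strip()

    print(breed_ideas(
        "decision making without explicit goals",
        "antifragility as a defining property of evolutionary systems",
    ))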

On culture as literally false but metaphorically true

The argument is that people figure out how to do things without really knowing why they work. But also, there clearly are terrible and long-lived superstitions.

So I think a better way to think about this is old vs. new cultural things. Old things are more likely to be metaphorically true.

But only a certain type. I think there's a relationship to power and violence: for cultural traits that maintain their dominance through such forces, it's a lot more likely that they are maladaptive.

Todo: I should find things that were unarguably maladaptive, that were around for a while, and see how they stuck around, to try and falsify this!

The fitness metric is the goal

One of the main theses of this argument is that removing goals is necessary. However, the structure requires a fitness metric, which itself becomes the goal. I've done a bit of thinking about what we might make the fitness metric in order to escape this paradox.

The first option is that we make the fitness metric the survival of the framework itself. This is a bit of a meta-trick; it's also pretty much bullshit, because then the framework can just be defined as whatever I happen to be doing at that point in time (if the framework is represented in the framework). And since the framework should be in the framework, and must necessarily be able to be changed, this fitness metric effectively reduces the framework to nothing at all other than "what I'm doing", which doesn't sound very useful.

Another issue here is that if I choose a single value, something like happiness or satisfaction or money, then by optimizing that one value I will likely drive the other values down. This is effectively the problem of AI alignment: capturing what I want out of my existence is actually a very complex metric. The relationship between this formalization and AI alignment makes me believe that AI systems will likely be evolutionary in nature.

One could imagine making a tuple of metrics that together define fitness, but then this is just a different way to formalize and implement goal-setting.

Another idea is to make the fitness metric how much desire is created by the action, and then minimize it. The idea here is that if desire really is the root of the problem, then optimizing for removing it will solve it. This sounds bizarre, but might be promising. Though it also feels like life would be very boring without desire.

But then I wonder if there are different types of desire that it would be worth subtyping on. The desire to kiss my girlfriend who's sitting in front of me is probably good and healthy and fine and something I'd be happy optimizing, but the desire to cheat isn't, or even the desire to date around and date really pretty people.

It does seem like optimizing for removing desire would have the effect of reducing what I desire, if I do it successfully, but the problem here is that accepting this as my fitness metric requires changing many of the preferences that I currently hold, which are mostly based around what I want and what I desire.

That is, I started thinking about this fitness metric as a way to formalize going towards what I already wanted, but what I'm realizing is that going towards those things will obviously have the same problems with or without the structure. Really, a solution like optimizing for happiness is no different from what I am doing currently.

So I'm in this interesting place where I feel like I'm being asked to rescind or change my preferences, but it's my preference to do that, because at a meta level I recognize that my preferences aren't actually leading me towards being happy and healthy. So in some ways, if I'm immediately happy with a solution, the structure as described has clearly failed, because I haven't escaped from my preferences enough.

Another option that I just realized is to base this metric on some sort of community goal. For example, instead of validating the success of something based on how I feel about it, I could validate it based on how other people feel about it. Or, if that's too weird, I could base it on, for example, how much empathy for other people it gives me. The motivation here is that this drives the process through a more communal approach, which might be very fundamental to how these systems of intelligence evolved in the first place. It's often argued that human beings' social nature is what allowed, and justified, the growth of our brains. Maybe this is how I should operate as well.

Moreover, it escapes the problem that I am defining this entirely based on myself. That is probably a very 21st-century American/revolutionary idea, and not necessarily in a good way.

https://www.youtube.com/watch?v=yDHm7lGArRU