I read Paul Graham; I am a big fan of his writing.
One day he wrote:
Things are always breaking at YC [Y Combinator, his company], because our strategy is to find bottlenecks by hitting them. That may sound irresponsible, but in practice it’s the way most complex systems get optimized.
Let’s play with his words, though, and instead speak about “high performance” systems as opposed to complex systems. Until last night it had not occurred to me what the impact of that statement would be, if true.
The statement is almost a tautology in the sense that it is self-proving. A high-performance system is, almost by definition, something that has reached, or almost reached, the limits of performance. If it hasn’t, then it isn’t high-performance. And something that is close to the limits of its performance will fail, periodically. And by failing, you can learn why it failed, and improve. But only by crossing the threshold of what was previously thought possible can one extend the boundaries of performance.
So, it makes sense, and it feels true. And if you concede that it is true, a number of things suddenly fall into place. But they come from one major thing: failure should be an objective in-and-of-itself. One should seek to fail.
On a personal note, this felt like a major insight for me. Of course, I’m not the first to this party. Thomas Watson’s quote about doubling your failure rate comes to mind. But for me these sorts of things never had a real grounding, and they never quite landed on me. But now, for me at least, the reasoning is more clear. Failure is not just an indicator of ambition or tenacity. More fundamentally, it is a sine qua non of outsized success.
Fundamentally, this implies a shift from thinking “what do I want to do next” — the implication being of course: “what should I succeed at (or try to succeed at) next?” — to thinking “how do I want to fail? How do I want to purposefully reach beyond what I think I can do?”
I’ve heard of the concept of a “failure resume,” a document listing initiatives the writer undertook but ultimately did not achieve, used in a hiring process alongside a traditional resume showing successes. What would the optimal balance of these two documents be? Being full on one and empty on the other would indicate either a lack of ambition (no failures) or an inability to learn from mistakes (no successes).
It seems that you would want these entries to be somewhat in balance.
So what is failure, then? Is it an indicator? Is it a result? It seems to me that (like success, its sibling with a flipped sign) it is an instrument that you use (to advance yourself as a person), and that, by itself, it should carry no positive or negative connotation. But, for better or for worse, it seems to come with a personal stigma, which must be overcome.
Crops in future generations will not be grown in exactly the same places they are grown today. Some existing agriculture areas will lose their viability due to climatic shifts. Other areas may pick up the slack — but only if we properly incentivize investment in the right places.
Subsidized crop insurance distorts the financial equation for farmers. It may very well be that many farmers are sitting on land that no longer offers a true risk-adjusted opportunity for profit. But when the government literally takes risk out of the equation, there is no incentive to do anything different.
A few days ago I added a single line to the “about me” section of this blog.
I am a big believer in the Ideological Turing Test
Well — what is that exactly?
First, let’s talk about the Turing Test. This test, proposed by Alan Turing, a founding figure of computer science, defines a key milestone in artificial intelligence. The idea is that if a human is systematically unable to tell machine from human in the course of conversation, then the machine can be said to be “thinking” like a human. To pass the test, of course, the machine has to be very good at imitating a human being.
So that brings us to our variation. I am sure that versions of this idea have existed for a very long time, but the genesis of this idea in the form that it reached me can be traced to Bryan Caplan. The idea is that in a debate between two opponents, the competitor that can more ably imitate the other’s argument is the more credible of the two. Why?
Mill states it well: “He who knows only his own side of the case knows little of that.” If someone can correctly explain a position but continue to disagree with it, that position is less likely to be correct. And if ability to correctly explain a position leads almost automatically to agreement with it, that position is more likely to be correct. … It’s not a perfect criterion, of course, especially for highly idiosyncratic views. But the ability to pass ideological Turing tests - to state opposing views as clearly and persuasively as their proponents - is a genuine symptom of objectivity and wisdom.
I think this is unbelievably spot-on. Caplan, and others, go further and suggest that in long-running debates (like those between different camps of economists, which is the sort of conflict that inspired Caplan to develop this idea) that actual competitions be organized to determine a winner. But in general the logic of this thinking speaks for itself. In so many mini-debates I see the opponents mis-stating each others’ positions to the point where the points of contention are undefined, or at least understood differently by each participant. In these cases, no progress is made through discourse, and that’s a shame.
So, to repeat a homely adage, “Seek first to understand, then to be understood.”
I was inspired to write this post while perusing the latest scandal around Herbalife, a very large company which appears to be a giant pyramid scheme. This presentation, done by a prominent hedge fund that has taken a massive short position, is an impressive takedown of the company. (Here is an accompanying website.)
While the company produces cash for shareholders, the thesis is that it will eventually deteriorate due to legal issues and an unsustainable business model. The most interesting perspective, however, came from an investor with a slightly different take:
“I am utterly convinced by everything in Bill Ackman’s presentation except the final conclusion — that Herbalife’s stock will collapse,” said Hempton, who also is a noted short seller. ”I took a long position on Christmas Eve.
“I suspect that Herbalife is so profitable and so powerful they will see Mr. Ackman’s attack off — and the easiest way to do that is to buy back stock (and make the stock go up),” he adds. “Mr. Ackman has given them the incentive to return their huge (but tainted) profits to shareholders (and I plan to be a recipient shareholder).”
This illustrates a fundamental problem with socially responsible investing (SRI), or paying a premium for one type of asset over another, holding relative risk and return constant: there will always be people that only care about the money.
This means two things will happen: first, this group will bid back up the price of non-socially-responsible investments (because they only care about return characteristics), confounding any attempt to create a non-financial premium. Second, this opportunity to correct the price imbalance just gives this group an opportunity to make more money! So not only should SRI not have a lasting effect, but it should also reward non-SRI investors in the bargain!
To illustrate this, imagine that there are two groups of people (“do-gooders” and “cold-hearts”) and two types of investments (“SRI” and “non-SRI”). Let’s imagine that the SRI and non-SRI investments offer the same risk/return characteristics and are priced at par — 100.
Let’s say that to the do-gooders, the SRI assets have a non-financial premium of 20, and the non-SRI assets have a non-financial premium of -20, so the two types of assets are worth 120 and 80, respectively. To the cold-hearts, there is no non-financial premium; both are just worth 100.
If we modeled this in two stages, first the do-gooders would sell their non-SRI assets to buy SRI assets until they hit their equilibrium prices of 120 and 80, respectively. Then, the cold-hearts would just undo all that, effectively arbitraging across the two (financially) equivalent classes of assets, until their prices were back at 100/100.
Of course, this interplay would not happen in discrete stages — it happens continuously — but the effect is the same. SRI is a broken concept. To drive investment towards the things that we want, we need to make them more profitable, plain and simple.
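The two-stage story above can be sketched in a few lines of arithmetic. All the numbers are the hypothetical ones from the example (fair value of 100, a non-financial premium of 20), not market data:

```python
# Illustrative two-stage model of the SRI pricing argument.
# Numbers are the hypothetical ones from the text, not market data.

FAIR_VALUE = 100   # financial value of both asset types
SRI_PREMIUM = 20   # do-gooders' non-financial premium

# Stage 1: do-gooders trade until prices reflect their premia.
sri_price = FAIR_VALUE + SRI_PREMIUM      # 120
non_sri_price = FAIR_VALUE - SRI_PREMIUM  # 80

# Stage 2: cold-hearts arbitrage the two financially identical assets:
# sell the overpriced SRI asset, buy the underpriced non-SRI asset,
# and hold until both prices revert to fair value.
profit_per_unit = (sri_price - FAIR_VALUE) + (FAIR_VALUE - non_sri_price)
print(profit_per_unit)  # 40: the do-gooders' premium, captured twice
```

The cold-hearts pocket the entire premium the do-gooders were willing to pay, which is exactly the "reward for non-SRI investors" described above.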
from a good friend who decided after a lifetime of gun ownership to sell his assault weapons:
As far as bans/controls going forward, if the line is going to be meaningful, it has to be drawn at semiautomatic weapons with detachable magazines. This is lenient enough to allow bolt-action rifles, which need the bolt to be worked between each shot and are the overwhelming practical preference for hunters and marksmen alike, while being strict enough to exclude what an almost comically ill-informed media describes as “military-grade” weapons. The common denominator between the handgun that a gang member uses on the street and the AR-15 that a schizophrenic uses to kill moviegoers is not that they were designed for military use in combat, but rather that they both a) fire one shot per trigger pull without reloading between shots, and b) accept rounds from a detachable magazine of varying size that can be switched out for a fresh one in a matter of a couple seconds. It’s this technology that escalates a shooting from one or two dead to twenty or thirty dead — you can get more rounds in the air faster, and there’s almost no reload time in which you’re vulnerable.
That’s why I’m already frustrated that people are talking about renewing the Assault Weapons Ban of 1994. It was a meaningless piece of legislation, because rather than addressing what really makes these weapons so potent, it defined an assault weapon as any firearm that has more than two of an enumerated list of features: things like a pistol grip, a bayonet lug, a collapsible buttstock, a detachable magazine (thankfully), or a grenade launcher attachment. So what happened is that manufacturers continued making the same firearms, but they would simply hack off the bayonet lug or pin the buttstock in the extended position. Using these tricks, the rifle that Lanza used would have been completely legal during the ban. I bought one myself during the ban.
It’s sad but true. People need to move. Climate change will affect us in a million ways, and a necessary (but insufficient!) condition of being able to carry out the massive undertaking of adaptation is by allowing market signals to work.
Right after Sandy passed through the East Coast, I wrote:
One area that will be interesting to observe following this hurricane is the insurance market, as insurers revisit the premia they charge for wind, water, and fire insurance. If they decide that the threat of hurricanes on the highly populated East Coast is higher in a changed climate, then the cost to reinsurers to, in turn, insure themselves through catastrophe bonds or other insurance-linked securities will drive primary-market prices higher. Higher premia for these things will send a signal that it is better (all other things equal) to live or develop elsewhere.
Because of the quickening pace of disaster, those who want insurance or are required to buy it now face much higher costs in risky areas. Premiums for homeowners’ insurance (which covers wind damage) doubled in Florida between 2002 and 2007, tripling in some areas after the 2004-5 hurricane seasons, if insurance was available at all.
Many insurers have raised their premiums because of increased risk estimates, higher cost of reinsurance (insurers transfer part of their risk to international reinsurers), the requirement by regulators and rating agencies that insurers hold more capital in order to reduce the likelihood of insolvency, and the need to provide shareholders with an attractive return.
I speculated that this might be the case, but it appears that rising prices are already occurring. This is the most objective indication that we have that climate change is already underway.
Insurance market prices are effectively representations of distributions. The price of car insurance is a representation of the likelihood that you’ll get into an accident. The distribution for 18-25 year-olds is different than for 30-40 year-olds, and the prices are different. Since our climate is a distribution, insurance price changes for climate-dependent phenomena should be an indicator that the distribution has changed (or, in other words, that the climate has changed).
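To make the "prices represent distributions" point concrete, here is a toy premium calculation. The loss amounts, probabilities, and loading factor are my own illustrative numbers, not actuarial data:

```python
# Toy illustration: an insurance premium as the expected loss under a
# probability distribution of outcomes, times a loading factor for
# expenses and profit. All numbers are made up for illustration.

def premium(prob_of_loss, loss_amount, loading=1.25):
    """Expected loss times a loading factor."""
    return prob_of_loss * loss_amount * loading

# If the climate "distribution" shifts so that a damaging storm becomes
# more likely, the premium moves with it; the price change is what
# reveals the shift in the underlying distribution.
old = premium(0.01, 200_000)  # roughly a 1-in-100-year event
new = premium(0.02, 200_000)  # same loss, doubled probability
print(old, new)  # 2500.0 5000.0
```

A doubled loss probability doubles the premium, which is why rising premia in Florida can be read as an estimate that the distribution itself has moved.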
Assuming that there is a competitive market for climate-dependent insurance in Florida, the evidence cited above is unbiased evidence that climate change has occurred and is occurring. Setting a price too high will mean lost business for the company, and judging by how many people have dropped their wind coverage, this consequence has been suffered by the insurance companies.
Why is this important? In my opinion, it’s that the market is telling us that climate change is happening. Although the scientific evidence is staggering, it has never been quite enough to convince many. Market evidence is rarely cited, and I think it deserves more attention.
A second point that the authors make is relevant to adaptation: subsidized public flood insurance (i.e., flood insurance that is not only administered publicly but also run at below break-even prices) and homeowners dropping other forms of climate-dependent insurance (flood, wind, etc.) are skewing incentives, promoting increased development and the perpetuation of development in areas that are relatively more exposed to climate risks. This is a problem. The market signals of climate change should be felt, and increased prices borne, by those who are ultimately exposed to the risks. Why? People need the incentive to move, and as the world changes, our geography and our economy need to change along with it (and, actually, ahead of the changes if possible).
Nate Silver’s forecast would have done better had he gotten a few wrong.
I really like 538, and they have done a great job. I have one minor quibble with how they represent their forecasts and their results. 538’s forecasts are not binary predictions; they are probability estimates. That means that of all the races the model puts at 70% probability, the leading candidate should win 70% of the time - not 100% of the time. But 538 and many others are saying that it got 50 of 50 states right, and that confuses the picture a lot. What does it mean to call Florida “correctly” when the model gave Obama a 50.1% chance of winning?
The example I used when explaining it to some friends was that if a weatherman, for 10 straight days, said there was a 55% chance of rain, and it rained every day - you could say that the weatherman correctly predicted the weather every day, but the correct interpretation is that his estimates were too low.
The code below takes 538’s estimates right before the election, with the probability figure being the one 538 assigned to the eventual winner. It assumes that 538’s probability estimates were exactly correct, and shows the distribution of the number of states called in error across 10,000 simulations. As you can see, if 538’s estimates were exactly correct, it is significantly more likely that the model would have gotten 1, 2, or 3 states wrong than none at all.
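The original code is not shown here, so below is a reconstruction of the simulation described above. The per-state win probabilities are illustrative stand-ins (most states near-certain, a handful close), not 538’s actual pre-election numbers:

```python
import random
from collections import Counter

# Illustrative stand-ins for the 50 probabilities 538 assigned to the
# eventual winner of each state; these are NOT 538's actual numbers.
win_probs = [0.999] * 40 + [0.95] * 5 + [0.8, 0.75, 0.7, 0.6, 0.501]

def states_wrong(probs):
    """One simulated election: count states where the favorite loses."""
    return sum(random.random() > p for p in probs)

random.seed(0)  # make the simulation reproducible
runs = [states_wrong(win_probs) for _ in range(10_000)]

# Distribution of the number of "missed" states, assuming the
# probabilities were exactly right.
dist = Counter(runs)
for misses in sorted(dist):
    print(misses, dist[misses] / len(runs))
```

Even with probabilities this favorable, the favorite loses at least one state in the large majority of simulated elections, which is the point: a perfectly calibrated forecaster should expect to "miss" a couple of states.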
This is not to say that his estimates weren’t spot on - there is a good chance they were - I only want to point out that there is a little bit more going on than saying that 538 got 50/50 states “right”.
Each of these modelers has taken a different approach to a difficult problem - namely how to estimate the outcome of an election using available data. There are tradeoffs around each approach and there’s a fairly lively debate about which approach is most enlightening. I am excited to see how they fare tomorrow.
On the other hand, you have a bunch of contributors making proclamations about what will happen, seemingly untethered to the evidence available to them. They show only the results of their internal model (if one exists), not the model itself. Obviously many of them are motivated by something other than the “search for truth”, but even if they were, their contribution is generally useless. Here’s why.
First, a model can be useful even if its predictions are wrong. Especially when it is predicting things that are compositional in nature (i.e. not single events in and of themselves, but composed of many events). Second, a model can make a correct prediction even when the model itself is wrong.
Most importantly, when two projections disagree, it’s impossible to work out why they disagree unless we know how the models that produced them work in the first place.
This last point is the most important. Without a model, you are treating projection like opinion.