Are technologies of the future a wise investment in the present? by Chip Kent

In last quarter’s letter (2018Q2 “Technology Of The Future”), I explored four new technologies -- CRISPR gene editing, self-driving cars, nuclear fusion, and quantum computers -- with the potential to change our world.  In the future, these technologies will make our health better, our commutes safer, our energy cheaper, and our computers more powerful.

If the future looks so bright, and if we know what technologies might change the world, then why not invest in them?  Unfortunately, it is far easier to predict which technologies might change the future than it is to predict which particular early-stage company might get lucky and make it big.

To illustrate this principle, let us consider a world-changing technology of the past: cars.  In the late 1800s, cars were the hot new technology, and Detroit was the Silicon Valley of its day.  The auto industry went from one-off, expensive, hand-built wooden buggies to technological wonders produced by the millions.  Today, Americans buy almost 20 million new cars each year.

Many of the cars that U.S. consumers buy are produced by the “American” brands of GM, Ford, and Chrysler.  What we forget are all of the dead American car companies. Have you ever heard of DeLorean, Duesenberg, Packard, Studebaker, Auburn, Edsel, or Tucker?  How about Bacon and Beggs? In total, there have been about 3,000 U.S. car companies whose names range from ABC to Zip. Almost all of the 3,000 companies had died by 1920.  When car technology was new, automobile innovation exploded. In turn, this innovation led to an explosion in the number of car manufacturers, but eventually, this multitude of options collapsed into just 3 major players.  Would you have been able to pick the 3 winners out of 3,000 choices? Only 0.1% of U.S. auto businesses became winners.

Similar dynamics occur during other technological revolutions, and in broad terms, we can divide these dynamics into the three stages of an industry’s life cycle.  In the first stage, the new technology creates a proliferation of both ideas and businesses trying to profit from those ideas. Many of these businesses never turn a profit.  In the second stage, the dust settles from the explosion of new businesses, and a few winners begin to dominate the industry. These winners typically enjoy a period of steady growth and solid profits.  In the third stage, the once-hot technology becomes commonplace and easily reproduced. At this point, new businesses appear, which compete to produce the good at the lowest possible price (think cheap Chinese knock-off).  During this second proliferation of producers, margins significantly decrease.


It is possible to profit by investing in each of these three stages, but each stage requires a different investing strategy.  

Most first-stage investments go to zero, but there is an occasional huge winner.  This is where Venture Capital (VC) funds invest.  A VC fund may invest in 10 businesses.  When the VC fund managers select investments, they must make sure that each investment has the potential to increase by at least 10x.  That is because, in a typical fund made up of 10 businesses, 7 will die, 2 will roughly break even, and 1 will succeed.  If the fund ends up with 2 big winners, then investors are very happy.  First-stage investing is a game of low odds and high variability.
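That 7/2/1 arithmetic can be sketched in a few lines.  The outcome multiples below are hypothetical round numbers chosen to match the description, not data from any actual fund:

```python
# A stylized VC portfolio: 7 investments go to zero, 2 break even,
# and 1 returns 10x the capital invested.
outcomes = [0] * 7 + [1] * 2 + [10]   # multiple of capital returned per deal

fund_multiple = sum(outcomes) / len(outcomes)
print(fund_multiple)  # 1.2 -- the single 10x winner carries the whole fund
```

Without the 10x potential in every pick, the one winner cannot offset the seven zeros, which is why VC managers screen for it.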

As an industry transitions from the first stage to the second stage, the weak businesses fail, leaving a few winners.  With less competition, these winners have higher odds of success. Second-stage companies can potentially grow 10-30% per year for decades.  This is a game of higher odds and lower variability. These are my favorite investments.

Eventually, industries transition to stage three.  In stage three, the technology becomes common, competition increases, and margins decrease.  The winners of stage three are the businesses that can produce the technology at the lowest cost.  For example, at one point, knitting textiles was high tech.  The US and the UK were the global leaders, and New England was the Silicon Valley of its age.  However, eventually textile technology proliferated globally.  Now, the US and UK cannot compete with third-world sweatshops.  Textile technology is cheap, common, and available.  Today, the winning textile businesses are the producers with the lowest labor costs.

It is tough to invest in stage-three companies.  Competition abounds, and a stage-three winner must sustain low production costs, or its razor-thin margins, and with them its profits, will evaporate.  This is a game of low odds and high variability.

Occasionally, a stage-three business has built-in advantages which give it an edge in the marketplace.  Saudi Aramco is one such example. Everyone has the technology to pull oil from the ground, but very few can produce a barrel of oil for less than $10, like Aramco does.  Aramco won the geological lottery, so it is able to profit no matter what happens to the rest of the industry. This illustrates what makes a good stage-three investment.   

It is possible to make good investments in emerging technologies, but most of these opportunities look more like lottery tickets than wise investments.  In the first stage, the vast majority of investments lose, while only a few win. Instead, I prefer to concentrate on the second stage -- a game of higher odds and lower variability.  Warren Buffett explained it this way in his 2000 annual letter:

At Berkshire, we make no attempt to pick the few winners that will emerge from an ocean of unproven enterprises. We’re not smart enough to do that, and we know it. Instead, we try to apply Aesop’s 2600-year-old equation to determine opportunities in which we have reasonable confidence as to how many birds are in the bush and when they will emerge ...
— Warren Buffett

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

Which four technologies might change your world? by Chip Kent

As a technology nerd, I’m always looking to the future.  For the last 16,000 years, humans have produced a steady stream of new technologies that have drastically improved our lives.  Early inventions such as agriculture or copper tools took thousands of years to spread around the world.  Newer inventions such as the internet or smartphones became global within a decade.

What’s next?  A few technologies that I see as potentially world-changing are: CRISPR, self-driving cars, nuclear fusion, and quantum computers.


1) CRISPR

CRISPR (pronounced “crisper”) is a class of DNA sequences found in bacteria and archaea which function as an immune system for those cells.  CRISPRs are found in ~50% of bacteria and ~90% of archaea.  Each CRISPR contains a snippet of DNA which matches a virus seen at some point in the cell’s evolutionary past.  These snippets allow CRISPR to detect and destroy similar viruses during subsequent attacks.

In 2013, scientists figured out how to hijack CRISPR to perform highly targeted genome edits.  As such, CRISPR provides us with an unparalleled tool to change life as we know it.  Just one year later, in 2014, over 1000 research papers discussed how to use CRISPR to change human cells, modify yeasts to make biofuels, alter crop strains, and change mosquitos to eliminate malaria.  By the start of 2018, 86 people in China had already had their genes edited by CRISPR.  It is a brave new world.

2) Self-Driving Cars

In the not too distant future, taxi drivers and bus drivers will be a thing of the past, and you will be able to take a nap or check your email while commuting to the office.  

The 2004 DARPA Grand Challenge was a battle between autonomous vehicles to navigate the Mojave Desert.  The “winning” vehicle only completed 7 miles of the course.  Today, 14 years later, the two leading autonomous driving companies -- Waymo (Google) and Cruise (GM) -- have logged millions of autonomous miles, and both companies average 5,000 or more miles between human interventions.

Next year, GM will begin production of a car without a steering wheel or pedals.  The future is coming quickly.

This rapid surge in autonomous vehicle technology has been made possible by new artificial intelligence algorithms as well as massive performance leaps in GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit) hardware to run such algorithms.

Traditional computer algorithms execute a set of instructions devised by their human creators. These creators can then devise tests to ensure that the programs are performing the prescribed rules.  However, the new AI algorithms that power autonomous vehicles are very different. The AI algorithm creators lay out how the AI “brain” is wired up.  Then, the human creators feed the AI algorithms massive amounts of data so that the AI algorithms can “learn” what to do. As a result, the machines are now programming themselves.  We can no longer understand what the program is doing, examine how it is making decisions, or even test that the machine is doing what we want. Because of this, using an autonomous vehicle is effectively a leap of faith that the machine has learned to drive better than a human.
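The difference between the two approaches can be shown with a deliberately tiny sketch.  The braking distances, labels, and threshold below are invented for illustration; real driving systems fit millions of parameters, but the basic idea is the same -- the behavior comes from examples rather than from a hand-written rule:

```python
# Hypothetical labeled examples: (distance to obstacle in meters, should brake?)
examples = [(1, True), (5, True), (9, True), (11, False), (20, False), (50, False)]

# "Learning": choose the threshold as the midpoint between the closest
# brake/no-brake examples, instead of a human hard-coding the number.
brake_dists = [d for d, brake in examples if brake]
go_dists = [d for d, brake in examples if not brake]
threshold = (max(brake_dists) + min(go_dists)) / 2

def should_brake(distance_m):
    return distance_m < threshold

print(threshold)  # 10.0 -- this number came from the data, not the programmer
```

With one learned number, a human can still inspect the result; with millions of learned numbers, as in a real driving system, that inspection becomes effectively impossible -- which is the leap of faith the paragraph above describes.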

3) Nuclear Fusion

The media typically describes nuclear fusion as a technology that is always 30 years away.  Performing nuclear fusion is not a challenge. Right now, it is possible to perform nuclear fusion in your garage.  Instead, the challenge is performing fusion in such a way that it produces more energy than it consumes.    

With little media fanfare, fusion technology has steadily improved at a massive rate.  Compared to the fusion of the 1960s, today’s nuclear fusion has a 100,000+x improvement in the triple product (temperature x density x confinement time).  In the coming decades, better numerical simulations and new, innovative reactor designs may finally push nuclear fusion over the break-even threshold as a commercially viable energy source.  

4) Quantum Computers

Quantum computers were first proposed in the early 1980s by Nobel Prize-winning Caltech physicist Richard Feynman.  Standard computers store information as and compute upon bits -- zeros and ones.  By contrast, quantum computers store information as and compute upon qubits (quantum bits) -- a fuzzy blur somewhere between zero and one.

Quantum computers have proven extremely difficult to build.  Tiny atomic vibrations create enough noise to destroy a quantum state and to ruin a computation.  Physicists have been developing ways to avoid these errors by creating computers that run at temperatures near absolute zero and by coupling multiple qubits in special ways to correct for errors.  

Quantum hardware has progressed at a rapid rate.  In 1998, the first working 2-qubit computer was created.  Today, Google has created a 72-qubit computer named Bristlecone.  Within the next year or so, quantum computers will have enough qubits to be more powerful than any classical computers for certain types of problems.

One important problem that quantum computers excel at is factoring integers.  While this may seem unimportant, integer factorization underlies much of the public-key cryptography which keeps our computers and our communications secure.  Right now, there is an arms race between the speed of new quantum computers and the technology that keeps our computers and communications secure.  Let’s hope that quantum-resistant cryptography makes rapid progress before quantum computers allow anyone to decrypt and read all internet traffic.
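The link between factoring and secrecy can be seen in a toy RSA-style sketch.  The primes and message below are made-up tiny values (real keys use primes hundreds of digits long, which is exactly why classical factoring fails and quantum factoring matters):

```python
# Toy RSA sketch; requires Python 3.8+ for the modular inverse pow(e, -1, phi).
p, q = 61, 53                  # secret primes
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent, derived from the secret primes

message = 65
cipher = pow(message, e, n)            # anyone can encrypt with (n, e)
assert pow(cipher, d, n) == message    # only the key holder can decrypt

# An attacker who can factor n rebuilds the private key from public data alone:
factor = next(f for f in range(2, n) if n % f == 0)
d_attacker = pow(e, -1, (factor - 1) * (n // factor - 1))
assert pow(cipher, d_attacker, n) == message   # the secret is recovered
```

The brute-force loop above is hopeless for real key sizes on classical hardware; a large quantum computer running Shor's algorithm would not be.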


What is a fair interest rate? by Chip Kent

What is the most important variable in investing?  The answer is simple: future interest rates. Interest rates drive all other investor behavior.  For example, if interest rates are high, then home buyers are unwilling to pay as much for a house, because they will have large interest payments. Conversely, if interest rates are low, then home buyers are willing to pay more for a house, because they are not devoting as much cash flow to interest payments.  The same rules apply to all financial assets. If interest rates are high, it pushes asset prices down, and if interest rates are low, then it pulls asset prices up.  (For more details, see my previous article, “What is a baseball team, a bond, or a business worth?”)
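The mechanics behind that rule can be shown with a simple present-value calculation using hypothetical numbers: the price of a perpetual annual cash flow is the cash flow divided by the interest rate, so doubling the rate halves the price.

```python
# Present value of a perpetual $1,000 annual cash flow at two discount rates.
cash_flow = 1000
for rate in (0.03, 0.06):
    print(rate, round(cash_flow / rate))  # 0.03 -> 33333, 0.06 -> 16667
```

The same inverse relationship drives house prices, bond prices, and stock prices alike.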

Current interest rates are computed from bond prices.  These current interest rates fluctuate up and down based on several factors such as how investors are feeling, how the central bank (Federal Reserve) has decided to manipulate markets, etc.  These dynamic fluctuations mean that current interest rates may or may not reflect reasonable interest rates, given a long-term view.

Since interest rates influence the prices of all other financial assets, a natural question arises: what should interest rates be?  I think the most thoughtful answer to this question came from a 1993 paper by John Taylor of Stanford.  Taylor looked at the interest-rate problem through the lens of Optimal Control Theory.  While Optimal Control Theory sounds complex, it is conceptually very simple. As an example, take a self-driving car.  If the car drifts too far to the left, it steers right. If the car drifts too far to the right, it steers left. Small, incremental steers to the left or right keep the car between the lines.

Taylor’s model applies the same reasoning to the economy.  Instead of steering a car with a wheel, the central bank steers the economy with interest rates.  If the economy is growing too quickly or if current productivity is unsustainable, then the central bank should raise interest rates to rein things in.  On the other hand, if the economy is weak or if productivity is languishing, then the central bank should lower interest rates to stimulate activity. By manipulating interest rates, the central bank can attempt to steer the economy and keep it between the lines.

Subsequent research has built upon Taylor’s work.  These papers primarily (1) tweak constants so that interest rate adjustments are more or less aggressive and (2) switch which factors are used to steer the economy (e.g. unemployment rate instead of productivity).  While these modifications produce slightly different numbers, the results are typically quite similar to Taylor’s original model.


The chart above shows one version of a Taylor Rule, computed with Federal Reserve data.  The red line is the actual market interest rate, and the blue line is the Taylor Rule target interest rate.  From this chart, we can make two interesting observations. First, during the 1970s, the Taylor Rule suggested that rates needed to be higher than they actually were in order to control inflation.  However, it was not until the 1980s that the Federal Reserve raised rates enough to finally moderate inflation. Second, from 2000-2008, the Taylor Rule once again suggested that interest rates needed to be higher than they actually were.  If the Federal Reserve had raised interest rates, it might have softened the housing bubble or avoided it entirely.


The chart above focuses on the last decade, and like the previous chart, it is extremely interesting.  Fallout from the Great Recession left the US with weak growth and a very poor employment market. Many Taylor-variant models suggested that the Federal Reserve should impose negative interest rates in order to force the economy forward.  Some models suggested negative rates for a year or two, while other models suggested that we should have had negative rates all the way until 2016. While Europe ventured into negative interest rates, the US did not. It is possible that the sluggish economic growth and weak inflation we experienced over the last decade resulted simply from limiting interest rates to zero, rather than letting interest rates go negative as some Taylor models suggested.

Currently, the economy is beginning to heat up, unemployment is extremely low, and inflation is beginning to appear.  The various Taylor-like models now say that the economy is either between the lines or possibly is getting overheated.  As a result, almost all models prescribe interest rates of 4% or possibly more.

Most people would be stunned to see interest rates quickly rise to 4%.  Yet if the Federal Reserve continues to run interest rates significantly below 4%, it may negatively impact the economy.  As I said before, future interest rates are the most important variable in investing. Right now, a gap exists between what interest rates should be -- according to Taylor -- and what they are.  As mindful investors, we should be aware of the unintended outcomes that may result from this discrepancy.


When should you tell a CEO, "You're fired"? by Chip Kent

Out of all the jobs on the planet, being CEO is possibly the hardest job to be fired from for poor performance.  This sounds counterintuitive, but in reality: 1) the exact criteria for what constitutes good versus bad CEO performance are rarely laid out; 2) the CEO can always blame a subordinate for negative outcomes; and 3) the boss who can fire the CEO (the board of directors) more often than not offers no real governance and instead simply rubber-stamps anything set before it.

Of the CEO’s functions, the most important is allocating the business’s cash flow.  While the subject sounds complex, a CEO can really only allocate cash flow to a few purposes:  (1) paying taxes, (2) servicing debt, (3) reinvesting in the business, (4) repurchasing shares, (5) acquiring other businesses, and (6) disbursing cash to equity holders (dividends).

Options 1 and 2 -- taxes and debt -- are fairly straightforward.  Equity holders are in partnership with debtholders and the government.  Death and taxes are inevitable.  Not only does the government always get its cut of the profits, but the government gets to decide how big that cut is.  Over the last few decades, the government’s cut has been high.  Over the next few years, it appears that the government has voluntarily given up some of its cut, which leaves more profits for equity holders.  As for debt, the CEO decides how much debt the business should have and how much risk versus opportunity such debt poses to the business.  More debt may mean more money for short-term growth, but it also makes the business more susceptible to bankruptcy during a downturn.

Options 3-6 are the most interesting and the most commonly misunderstood.  In the financial media, it is common to hear statements like “a company should pay a dividend” (Option 6) or “repurchasing shares is good” (Option 4).  Unfortunately, neither statement is always true (sometimes these might be good choices, and sometimes they might be poor choices), and the truth cannot be distilled down to a simple sound bite.

In reality, only one factor determines whether a CEO’s decision is a good capital allocation or a poor one, and that factor is the expected return of this choice versus other alternatives.  Unfortunately, a CEO may struggle to determine how to allocate capital, because he or she likely ascended to the CEO role from a position that did not require financial literacy.  Without the base knowledge of how to determine what is or is not a good capital allocation, how can the CEO make good decisions?  Often, CEOs who are financially ignorant rely on management consultants, and management consultants are incentivized to allocate capital in ways that maximize the consultant’s fees, rather than in ways that maximize shareholder profits.  Additionally, a CEO’s compensation plan may be structured such that what is best for the CEO is not what is best for the shareholder.  In a recent egregious instance, a new CEO sold a business, which we owned, at a massive discount to its intrinsic value with zero premium to the market price.  Shameful!  The CEO will receive a multi-million dollar payout for a few months of “work” while the shareholders get the shaft.

Back to Option 3: When should a CEO reinvest in a business?  When the return is higher than other alternatives.  Amazon and Berkshire Hathaway are fantastic examples of businesses that have effectively reinvested almost all of their profits over decades.  With Amazon, Jeff Bezos has grown sales at greater than 20% per year while establishing multiple monopolies.  With Berkshire, Warren Buffett has compounded the business’s book value at greater than 20% per year since 1965.  Both Bezos and Buffett apply a hurdle rate to every investment: if a project is unlikely to return 20% per annum, they will not fund it.
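To see what that hurdle compounds into, here is the arithmetic -- a stylized calculation assuming a constant 20% rate and ignoring taxes, not Berkshire's actual record:

```python
# Compounding $1 of capital at 20% per year from 1965 through 2017.
capital = 1.0
for year in range(1965, 2018):
    capital *= 1.20

print(f"{capital:,.0f}x")   # roughly 15,000x the starting capital
```

This is why a disciplined hurdle rate, applied for decades, matters far more than any single investment decision.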

By contrast, more ordinary CEOs do not have such investment hurdles, and as a result, they succumb to two common problems.  The first problem happens when a CEO increases earnings simply by increasing the amount of capital used to generate the earnings.  These new earnings are produced with a very poor return on capital.  Buffett described the situation like this:

When returns on capital are ordinary, an earn-more-by-putting-up-more record is no great managerial achievement. You can get the same result personally while operating from your rocking chair. Just quadruple the capital you commit to a savings account and you will quadruple your earnings. You would hardly expect hosannas for that particular accomplishment. Yet, retirement announcements regularly sing the praises of CEOs who have, say, quadrupled earnings of their widget company during their reign — with no one examining whether this gain was attributable simply to many years of retained earnings and the workings of compound interest.
— Warren Buffett

The second problem happens when a CEO reinvests in a slowly dying business with high capital requirements.  Imagine a manufacturing business that is trying to compete against a cheaper foreign competitor.  In this scenario, the rational path forward is to limit reinvestment and to return as much capital as possible to investors.  Unfortunately, in most circumstances, the CEO continues buying the latest gadgets, with hopes that such gadgets will somehow permanently fend off the cheaper competitors.  They almost never do.

Option 4: When should a CEO repurchase shares?  When they are cheap!  A quality CEO should have a very good idea of what his business is worth.  When the business trades significantly below that value, it is a good time to purchase shares.  When the business trades significantly above that value, it is a bad time to purchase shares.  For instance, Buffett has indicated that Berkshire will consider repurchasing shares if they ever fall to 120% of book value.  Any price above this does not clear Buffett’s investment hurdle for Berkshire to repurchase shares.  John Malone of Liberty Media takes this option one step further.  Malone not only repurchases (buys) shares when they are cheap, but he issues (sells) new shares when they are expensive.  He then uses the proceeds to buy back shares when the price finally falls.  What a brilliant insight!  Using this strategy, Malone has returned in excess of 20% per year since 1973.

Unfortunately, most CEOs have no idea what their business is worth.  Instead, like most investors, they choose to buy high and to sell low.  The figure below clearly demonstrates the problem.  The higher the market (green line), the higher the amount of share repurchases (dark blue bars).  If a CEO were rational and understood what his business was worth, then he would do the opposite: purchase large numbers of shares when the market is down and taper off buying as the market goes up.  So much for rationality.


Share repurchases can also mask how executives pillage a company.  Many executive compensation plans pay out an outrageous fraction of a company’s earnings to executives.  These payouts typically happen via stock options.  To mask the dilution caused by such stock grants, many companies purchase offsetting amounts of stock, at any price -- but many investors fail to notice this sleight of hand.

Option 5: When should a CEO purchase another business?  When it is cheap!  Such a transaction should only happen when the CEO can acquire -- at a very good price -- the future cash flows or assets of the purchased business.  (Note that combining the businesses may reduce some overhead, leading to cash flows greater than the separate businesses.)  

Unfortunately, the average CEO shows the same skill at acquiring other businesses as he does at repurchasing his own company’s shares.  The chart below shows merger and acquisition activity over time.  You can see large activity peaks in 2000, 2007, and 2015, with troughs in 1991, 2002, and 2009.  Peak buying coincides with market tops (2000 & 2007), and inactivity coincides with market bottoms (2002 & 2009).  Once again, so much for buying low and selling high.


Buffett summarized the situation this way:

The sad fact is that most major acquisitions display an egregious imbalance: They are a bonanza for the shareholders of the acquiree; they increase the income and status of the acquirer’s management; and they are a honey pot for the investment bankers and other professionals on both sides. But, alas, they usually reduce the wealth of the acquirer’s shareholders, often to a substantial extent. That happens because the acquirer typically gives up more intrinsic value than it receives. Do that enough, says John Medlin, the retired head of Wachovia Corp., and “you are running a chain letter in reverse.” ... The acquisition problem is often compounded by a biological bias: Many CEOs attain their position in part because they possess an abundance of animal spirits and ego.
— Warren Buffett

Option 6: When should a CEO pay a dividend to shareholders?  The answer depends on what other opportunities are available.  A dividend should not be paid if reinvesting in the business, repurchasing shares, or purchasing other businesses can be done with returns higher than an investor is likely to get with cash.  On the other hand, if the business is in a long-term decline, if the stock price is high, and if there are no cheap businesses to purchase, then paying a dividend may make sense.  However, there is a catch.  Uncle Sam has decided to take two cuts out of the dividend pie.  The first cut is via the corporate tax, and the second cut is via the individual income tax.  Up until recently, this could mean a 35% corporate tax followed by a 35% dividend tax, so only 42% of the business profit may make it to the shareholder.  This double taxation makes dividends very tax inefficient, and it skews the optimal strategy towards other avenues for capital deployment.
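The arithmetic behind that 42% figure is worth making explicit:

```python
# Double taxation of $1.00 of business profit paid out as a dividend,
# using the letter's historical 35% rates for both levels of tax.
corporate_tax = 0.35
dividend_tax = 0.35

profit = 1.00
after_corporate = profit * (1 - corporate_tax)          # 0.65
to_shareholder = after_corporate * (1 - dividend_tax)   # 0.4225

print(round(to_shareholder, 4))  # 0.4225 -- about 42 cents per dollar of profit
```

Each tax is applied to what remains after the previous one, so the two 35% cuts compound into a combined take of nearly 58%.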

As an investor, why should you care what makes a good CEO versus a poor one?  You should care because, over time, a CEO’s capital deployment decisions either increase or decrease the intrinsic value of a business.  Good decisions can lead to rapid, exponential increases in value, while bad decisions are the equivalent of setting fire to millions of dollars.  Unfortunately, most CEOs do not make good capital allocation decisions, yet they try to sell their decisions to investors as if they were wise choices.  Frequently, the only way to tell a CEO, “You’re fired,” is by cutting his business from your portfolio.


Did Harvey cause Houston's worst flood since glyptodons roamed the Earth? by Chip Kent

This August, Hurricane Harvey dumped 33 trillion gallons of water on the Houston area, flooding 50 counties, 100,000 homes, and 1 million cars.  Along with a few Cecropia investors, my brother and his family were among those affected.

In the days immediately after the water began to recede, Brooke and I put in long hours gutting my brother’s house, so that it could begin drying out.  One evening, exhausted, we collapsed on my parents’ couch.  As we sat, we heard the news station describe the flood as a once-in-40,000 year event.  

40,000 years, really?  40,000 years ago North America was filled with mastodons, mammoths, cheetahs, giant sloths, camels, and even glyptodons (think 2-ton armadillo).  Calling the flood a once-in-40,000 year event seems especially strange, when in a 1994 flood, I canoed through the same neighborhoods that flooded again in 2017.  Was Harvey really a once-in-40,000 year flood, or a once-in-23 year flood?


I don’t believe for a minute that the Harvey flooding was a once-in-40,000 year event.  What I do believe is that the news station’s 40,000-year estimate resulted from atrocious mathematical reasoning -- the same atrocious mathematical reasoning that regularly appears in finance.

Flawed mathematical reasoning -- be it in finance or in flood prediction -- commonly results from three errors:  (1) the new situation not being like the past; (2) simple models not reflecting reality; and (3) humans drastically misestimating rare events.  Let us look at a few illustrative cases.

First, let us consider Hurricane Harvey.  The first question to ask is: does reliable information exist for major floods over the past 40,000 years?   I assume not.  Reliable data from a USGS analysis of the 1994 flood showed that only 44% of measurement stations exceeded a 100-year flood -- meaning that 56% of measurement stations recorded flood levels that would occur more often than once per century.  This data suggests that the 2017 flood falls far short of the once-per-40,000 years estimate (which takes us to flaw #3 -- humans drastically misestimate rare events).  Additionally, it is worth noting that much of the flooding happened near waterways that have been dammed.  The dams altered the water’s natural flow, but above and beyond this, some allege that improper release of water from the dams made the flooding worse than it would have been.  Clearly the dams complicated Harvey’s effects, and that brings us to flaw #2 -- simple models may not reflect reality.
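A back-of-the-envelope check shows how the rare-event arithmetic works (treating each year as independent, which is itself a modeling assumption): even a genuine 100-year flood has roughly a one-in-five chance of striking at least once over a 23-year span like 1994-2017.

```python
# Chance of at least one "100-year flood" (1% annual probability)
# over the 23 years between the 1994 and 2017 floods.
p_annual = 1 / 100
years = 23

p_at_least_once = 1 - (1 - p_annual) ** years
print(round(p_at_least_once, 3))   # -> 0.206
```

So two major floods in 23 years is far more consistent with a once-per-century event than with a once-per-40,000-years event.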

From Harvey, let us turn to the world of finance.  Long Term Capital Management (LTCM) was a multi-billion dollar hedge fund founded in 1994 by a group of elite bond traders and two Nobel prize winners.  For the first four years, LTCM returned 21%, 43%, 41%, and 27% with relatively little volatility.  Under the covers, LTCM executed a conceptually simple strategy.  LTCM looked for cases where one bond was cheap relative to another bond.  When LTCM found such cases, it would sell the expensive bond and buy the cheap bond.  Frequently, this meant that LTCM had bought less liquid bonds and had sold short highly liquid bonds.  Furthermore, in order to juice its returns, LTCM leveraged this trade by more than 25-to-1.  
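The danger of that leverage is plain arithmetic.  With hypothetical round numbers: at 25-to-1, a 4% adverse move in the positions wipes out the equity entirely.

```python
equity = 100            # hypothetical $100M of investor capital
leverage = 25
positions = equity * leverage    # $2.5B of bond positions

adverse_move = 0.04     # a 4% adverse move in the spread trade
loss = positions * adverse_move
print(loss)             # 100.0 -- the entire equity base is gone
```

Small spread trades earn small returns, so leverage is how such a fund "juices" results; the same multiplier turns a modest dislocation into ruin.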

While the world behaved “normally”, LTCM reaped enormous profits.  However, 1998 was not a normal year.  Salomon Brothers, one of LTCM’s competitors, decided to exit its arbitrage business in July 1998.  Salomon’s portfolio was similar to LTCM’s, so when Salomon liquidated its positions, that drove down the prices of LTCM’s long positions, and it drove up the prices of LTCM’s short positions, leading to significant losses.  Just afterwards, in August and September of 1998, Russia decided to default on its ruble-denominated bonds.  At the time, it was unthinkable that a sovereign government would default on locally denominated bonds, since conventional wisdom assumed that the government would simply print money to pay for the bonds.  (That brings us to flaw #1: the new situation not being like the past.)

Faced with such an anomaly, investors panicked, driving up the prices of liquid bonds (which LTCM had sold short) and driving down the prices of illiquid bonds (which LTCM had bought).  This adverse move in bond spreads, combined with LTCM’s massive leverage, led to crushing losses.  In the end, 1998 was not like the years preceding it:  Salomon Brothers, a major bond-trading player, exited the market, and the odds of Russia defaulting on its bonds were far higher than LTCM anticipated (see flaw #3:  humans drastically misestimate rare events).  These errors not only wiped out LTCM’s investors; they also forced the Federal Reserve to broker a bailout of LTCM by its creditors to avoid a widespread failure of the financial system.


For the third case, let us consider a contemporary example:  Bitcoin.  Bitcoin is a decentralized digital payment system based upon cryptographic algorithms.  Bitcoin allows users to transfer payment, more or less anonymously, without the need for an intermediary, such as a bank.

Two theories about Bitcoin prevail.  The first theory rests on the fact that Bitcoin is designed to have a maximum of 21 million Bitcoins.  These Bitcoins are “mined” by running computationally expensive calculations on a computer.  This fixed size leads some people to assert that Bitcoins are analogous to stores of value that have a fixed total size, such as gold.  By extension then, the fixed-size theory says that Bitcoins must have an intrinsic value, because there are only a finite number of them.  

The contrasting theory, proposed by others like Warren Buffett, is that Bitcoin resembles a checkbook -- a mechanism of transmitting value -- and therefore, it has no inherent value.  In both cases, the analogies draw on past beliefs, and as we saw with flaw #1 -- sometimes the new situation is not like the past.  

Which theory will be right?  Do Bitcoins have intrinsic value, or are they inherently worthless, like the pages in a checkbook?  Does a sane foundation underpin the Bitcoin craze (see the figure below for Bitcoin’s price history), or is Bitcoin the modern equivalent of buying tulip bulbs in Holland in the 1630s?  Whatever happens, it is unlikely that both theories will prove correct.


Let us think about Bitcoin in terms of flaw #3 -- how humans drastically misestimate the odds of low-probability events.  The first risk case is fairly straightforward.  Since Bitcoin offers a reasonable amount of anonymity, various types of criminals have popularized Bitcoin for transferring assets.  In the most notable instance, until the site was shut down in 2013, essentially anything, including heroin, driver’s licenses, stolen credit cards, and weapons, could be bought from the Silk Road website and paid for anonymously with Bitcoin.  In another notable case, a virus held Los Angeles Valley College computers hostage for $28,000 in Bitcoin.

The US and Chinese governments have begun scrutinizing this illicit money transfer and money laundering, imposing financial regulations, like Know-Your-Customer.  Such scrutiny may make transacting illegal activities more difficult with Bitcoin, and that may reduce demand.

A more interesting risk case is quantum computing.  Bitcoin’s cryptographic algorithms assume that certain calculations are fast to perform and others are slow.  This computational asymmetry is at the root of Bitcoin’s security model.  However, quantum computers throw a wrench into this security model.  Unlike classical computers, which use on-off switches (zeroes and ones), quantum computers use the physics of quantum mechanics to perform calculations.  Because quantum computers work via a completely different mechanism, they can make quick work of many classes of problems that are very difficult on classical computers.  In the last decade, we have seen massive progress in the effort to develop a general-purpose quantum computer, and within a few years, quantum computers should be available that can crack Bitcoin’s cryptographic algorithms -- assuming that some government entity does not already have such computers.  In this scenario, have Bitcoin’s proponents correctly estimated the threat from quantum computers?  After all, how would you like to have your net worth in a currency that can be hacked?

In all three cases -- Harvey’s flood, LTCM’s collapse, and Bitcoin’s spike -- people fell prey to the same flaws in their mathematical reasoning.  Whatever decision you face (For instance, is flood insurance worth the cost?  Is Bitcoin a money maker or a mania?) remember: (1) the new situation may not be like the past; (2) simple models may not reflect reality; and (3) humans drastically misestimate the odds of rare events.  Two-ton armadillos roamed the earth 40,000 years ago, and however far-fetched, rare, or unimaginable your worst-case scenario seems, it may be more likely to happen than you imagine.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

What is a baseball team, a bond, or a business worth? by Chip Kent

Over the years, I have read countless articles on the market and have discussed financial concepts with many people.  From these interactions, I have observed that the vast majority of investors (and authors) have no idea how to calculate what a business is worth.  

The preponderance of investors use what Howard Marks calls “first order thinking.”  In the first-order worldview, good news means that a stock should move up, and bad news means that a stock should move down.  At first glance, this thinking seems very logical.  Such thinking is very simple and not mentally taxing.  Unfortunately, such thinking leads to booms and busts.  Assume a stock has a sequence of positive news.  The first-order worldview would have the stock increase in price for each bit of news, without regard for its initial or final price.  

Reality is far more complex.  Higher-order thinking requires an investor to ask: “Even though the news is good, is this price too high?” or “Even though the news is bad, is the price too low?”  Understanding what a business is worth provides both an anchor to answer such questions and a rational basis to go against the crowd.

Let us begin by considering the simplest possible business.  This is a very steady business with no debt that makes $1M of free cash flow every year.  (Free cash flow is how much cash a business generates after it pays its operating expenses and capital expenditures.)  How much is this business worth?  To buy this business, would you pay $2M?  $10M?  $100M?

It is actually impossible to answer this question with the information I have given you.  To determine what the business is worth, you need to compare it against a “guaranteed” investment, such as high-grade bonds.  To calculate what this business is worth, divide its free cash flow by the interest rate of the “guaranteed” investment -- because at the resulting price, the business will produce the same return as high-grade bonds.  If interest rates were 10%, then the business is worth $1M/10%=$10M.  If interest rates were 1%, then the business would be worth $1M/1%=$100M.  At the end of the day, interest rates drive what all financial assets are worth, be they bonds, businesses, or baseball teams.
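The arithmetic above can be sketched in a few lines (a toy perpetuity valuation, using the figures from the example):

```python
def business_value(free_cash_flow: float, rate: float) -> float:
    """Value a steady, no-growth stream of free cash flow as a perpetuity:
    the price at which the business yields the same return as the
    'guaranteed' alternative."""
    return free_cash_flow / rate

fcf = 1_000_000  # $1M of free cash flow per year

print(business_value(fcf, 0.10))  # at 10% rates -> $10M
print(business_value(fcf, 0.01))  # at 1% rates  -> $100M
```

Note how a 10x drop in rates produces a 10x jump in value, with nothing about the business itself changing.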

Let us now make the business slightly more complex.  The business now has $1M of debt, but it still produces the same $1M of free cash flow after interest payments.  What is a business with debt worth?  To understand this, consider an everyday analogy: owning a home with a mortgage.  Having a mortgage does not affect what price you could sell your home for (your home’s value) -- but having a mortgage does affect your equity as an owner (homeowner’s equity = value of home - value of mortgage).  Similarly, whether or not a business has debt, its overall value remains the same (just as having a mortgage does not affect a home’s sale price).  But as with a mortgage, debt does decrease the business owner’s equity, which equals the value of the business minus the value of the debt.  In the above example, assuming free cash flow of $1M, debt of $1M, and interest rates of 10%, the owner’s equity in the debt-laden business would be ($1M/10% - $1M) = $9M.
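As a toy calculation (using the example’s figures), the owner’s equity is the perpetuity value of the free cash flow minus the debt:

```python
def owner_equity(free_cash_flow: float, rate: float, debt: float) -> float:
    # Debt does not change the value of the business itself, only the
    # owner's slice of it: equity = business value - debt, just as
    # homeowner's equity = home value - mortgage.
    return free_cash_flow / rate - debt

print(owner_equity(1_000_000, 0.10, 1_000_000))  # $10M value - $1M debt = $9M equity
```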

Clearly, reality is more nuanced than these simple cases.  Businesses may be growing or shrinking, interest rates may be changing, and legislators may be revising the rules the business plays by.  The uncertainty of these factors inevitably leads to ranges for what a business is worth.  For example, if interest rates will be somewhere between 9% and 11%, the simple debt-free business would be worth between $9.1M and $11.1M.

Currently, long-term interest rates are around 2.5%, but historically, long-term interest rates have averaged closer to 5%.  This difference in rates corresponds to a 2x difference in what the stock market is worth.  If long-term interest rates stay put for the next 30 years, stocks will be worth much more than they currently are, but if rates increase to their historical average, stocks will be worth somewhat less than they currently are.

At the end of the day, there are two truths about what a business is worth.  First, interest rates dictate what all financial assets are worth, from bonds to businesses to baseball teams.  (Remember the 10x increase in the business’s value when interest rates dropped from 10% to 1%.)  Second, a business can be worth only as much as the cash that can be extracted from it, adjusted for the fact that money in the future is worth less than money today.  In the short term, emotions may drive a business’s stock price (witness the short-lived bubbles in money-losing companies), but in the long term, logic and numbers prevail.  The market eventually gets things right.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent



Who wins: the sloth or the hare? by Chip Kent

You have heard me mention Peter Lynch in previous articles.  Lynch, a value investor who ran Fidelity’s Magellan Fund, averaged a 29% annual return from 1977 to 1990.  In other words, if you had invested $1k in Lynch’s fund in 1977, it would have grown to roughly $28k by 1990.  Lynch’s return was stellar, but the real question is: “How did his investors do?”

The answer is quite surprising.  Lynch calculated that the average investor in his fund made only 7% per year between 1977 and 1990.  For this average investor, $1k only grew to $2.5k!  How could this be?  Where did all their money go?

Lynch explained it this way:  When his fund had a setback, money would flow out of the fund through redemptions.  Once the fund’s performance improved, money would pile back into the fund, having missed the recovery.  Investors bought high and sold low, and it cost them dearly.

What Lynch’s investors did to themselves seems so strange that it must be a fluke.  But in reality, the evidence indicates that when an investor looks in the mirror, he sees the worst enemy of his investing success:  himself. 

My friend Eric Falkenstein examines the actual returns that investors achieve in his book, The Missing Risk Premium.  Over the long haul, the US stock market has had an inflation-adjusted return of about 6-7% per year.  Yet, through mistiming the market and paying transaction costs, the average investor drops that 6-7% per year return all the way down to a 2-3% per year return, or just 0-1% per year after paying taxes.  Ouch!

Which investors do better?  Once again, the answer may surprise you.  According to an internal Fidelity study of client accounts between 2003-2012, the best-performing accounts belonged to customers who forgot they had their Fidelity accounts.

Do institutional investors, who command huge research budgets and an army of analysts, perform better than individuals?  Unfortunately not.  A 2009 study in the Financial Analysts Journal analyzed 80,000 institutional investment decisions between 1984-2007.  The study concluded that the investment products receiving new contributions underperformed products experiencing withdrawals over the following one, three, and five years.

Joel Greenblatt, another well-known value investor with a 40% annual return, discussed a similar phenomenon in his book, The Big Secret.  Greenblatt studied which managers performed in the top 25% for 2000-2010.  For those managers with the best record over the decade:

  • 97% spent at least 3 out of 10 years in the bottom 50% of performance.
  • 79% spent at least 3 out of 10 years in the bottom 25% of performance.
  • 47% spent at least 3 out of 10 years in the bottom 10% of performance.

When asked about the study, Greenblatt said:

You’re pretty sure that none of their clients actually stuck with them to get the good returns. And to beat the market you have to do something [a] little different than the market.  You’ve gotta zig and zag a little differently.  But clients are not very patient.

In the studies discussed above, investors shared two common errors:  first, being impatient; and second, buying high and selling low.  It is easy to identify whether or not we are patient investors -- just look at your turnover or your average holding period (see the graph above).  What is harder to understand is why investors so often buy high and sell low -- a behavioral phenomenon that is both irrational and counterproductive.  Perhaps the simplest way to explain this phenomenon is to consider the two ways that humans tend to make purchasing decisions, epitomized by how we buy chicken vs. how we buy perfume.

How do you buy chicken?  When my wife goes to the grocery store, she will purchase chicken if it is below an acceptable price -- in our area, $3.00/lb for chicken breast.  If the price is above this acceptable threshold, the Kent household will dine on turkey, beef, pork, or vegetables.  On the other hand, if chicken is on sale, Brooke backs up the Mazda and fills the garage freezer.  Brooke understands the economic value of chicken relative to the alternatives.  As the price goes down, the quantity we purchase goes up.

How you buy perfume is an entirely different beast.  Having been gifted with both an X and a Y chromosome, I have zero ability to differentiate between good and bad perfumes.  Lengthy discussions with a perfumer add nothing to my understanding.  As a result, if I’m buying perfume, I buy an expensive bottle, because if it costs more, it must be better, right?  Most consumers apply a similar rationale to watches:  a Cartier must be better than a Timex because it costs more, right?  (But does it tell time any better?)  With luxury goods, consumers invert their normal demand response.  As the price goes up, we assume the product is more desirable, and we purchase more.  Conversely, as the price goes down, we assume the product is less desirable, and we purchase less.  Strange but true.

Given the two ways that people tend to make buying decisions -- chicken vs. perfume -- which method do people tend to use when buying stocks?  Suppose a stock goes “on sale,” and its price decreases.  Do investors stampede to the exchanges to buy more?  Or, suppose a stock’s price increases.  Do investors rush to sell?  In my observation, the vast majority of investors purchase stocks -- or other investments -- exactly like they purchase perfume or luxury goods.  The more the price increases, the more they want to buy.  The more the price decreases, the more they want to sell.  However, as the studies above have shown, such behavior is quite expensive, and it costs investors a huge price in their lifetime returns.  In contrast, a value-investing approach, like ours, forces investors to view stocks the same way we view chicken.  This mindset gives value investors an advantage over the herd.

As food for thought, do you think of stocks as chicken or perfume?

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent


What do earthquakes, terrorists, and investments have in common? by Chip Kent

Recently, a very good friend of mine asked me when the next recession would be.  I frequently receive such questions.  As a fund manager, I’m expected to know such things -- or at least, I’m expected to spout off a bunch of gibberish so that I sound as if God himself told me the answer. 

Quite simply, I don’t know when the next recession will be.  While such questions are intellectually interesting, they are not likely to have predictable answers.  In fact, as we will explore below, the odds of timing most financial events are similar to the odds of timing an earthquake or a terrorist attack.

Nate Silver’s excellent and very readable book, The Signal And The Noise: Why So Many Predictions Fail -- But Some Don’t, provides some very interesting data for understanding what is predictable.  Nate’s claim to fame is his website, FiveThirtyEight, which has used statistical models to predict US election outcomes with remarkable accuracy.

If you are like me, your elementary-school teacher told you: “Scientists are working very hard to predict when earthquakes will occur, and they are on the verge of a major breakthrough.”  However, to date this breakthrough has not happened.  Various scientists have had interesting ideas, but none have panned out for predicting an earthquake’s timing. 

Does this mean that earthquakes are not predictable?  No -- in fact, earthquakes are very predictable, if we ask the right questions about them.

The figure below plots the size of an earthquake vs the number of times such an earthquake occurred between January 1964 and March 2012.  The plot is a nice straight line, which indicates a very predictable system.  As a result, if we ask the right question about earthquakes (“How often does a magnitude 7 earthquake occur?”), then their behavior is very predictable.  However, if we ask the wrong question about earthquakes (“When will the next earthquake hit Los Angeles?”), then their behavior is not predictable.
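That straight line on a log plot is what makes the “right” question answerable.  As a sketch, here is how one might fit such a line and extrapolate it, using the standard Gutenberg-Richter form log10(N) = a - b*M (the earthquake counts below are made up for illustration, not real catalog data):

```python
import math

# Hypothetical annual counts of earthquakes at or above each magnitude
# (illustrative numbers, not real catalog data).
counts = {5.0: 1500, 6.0: 150, 7.0: 15}

# Least-squares fit of the Gutenberg-Richter form: log10(N) = a - b*M.
ms = list(counts)
ys = [math.log10(counts[m]) for m in ms]
n = len(ms)
m_bar = sum(ms) / n
y_bar = sum(ys) / n
slope = (sum((m - m_bar) * (y - y_bar) for m, y in zip(ms, ys))
         / sum((m - m_bar) ** 2 for m in ms))
b = -slope
a = y_bar + b * m_bar

# The "right" question -- "How often does a magnitude 8 quake occur?" --
# is answered by extrapolating the fitted line.
n8 = 10 ** (a - b * 8.0)
print(f"b-value: {b:.2f}, expected M>=8 quakes per year: {n8:.1f}")
```

The fit answers “how often,” but nothing in it says “when” or “where” -- exactly the distinction drawn above.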

Terrorist events are very similar to earthquakes -- it is extremely hard to predict when they will occur.  In fact, terrorist events are so difficult to predict that the U.S. government resorts to spying on every person worldwide (including you) in hopes of timing the next terrorist attack.  Yet despite this intrusive spying, terrorist attacks still regularly occur, both in the U.S. and abroad.

Does this mean that terrorist events are not predictable?  Again, terrorist events are predictable if we ask the right questions about them.

The figure below plots the number of fatalities from a terrorist event vs the number of such events in NATO countries between 1979 and 2009.  The dot to the far right is September 11, 2001.  Again, the plot is a nice straight line, which indicates a very predictable system.  As a result, if we ask the right question about terrorist events (“On average, how often do we expect to see a terrorist attack that kills at least 100 people?”), then the answer is very predictable.  However, if we ask the wrong question about terrorist events (“When will New York City experience its next terrorist attack?”), then the answer is not predictable.

The frightening extrapolation of this plot is that an event killing 100,000 people will likely occur, on average, about once per 150 years.  Such devastation could easily result from a crude nuclear weapon detonating within a city.  Assuming an average lifetime, there is roughly a 1-in-2 chance of such a tragedy occurring during your lifetime.
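That lifetime figure can be sanity-checked with a simple arrival model.  Assuming such events follow a Poisson process at the extrapolated once-per-150-years rate, and assuming an 80-year lifetime (my number), the chance comes out near 40% -- the same order as the 1-in-2 figure:

```python
import math

rate_per_year = 1 / 150    # one such attack per 150 years, on average
lifetime_years = 80        # assumed average lifetime

# Probability of at least one arrival during a lifetime, if attacks
# follow a Poisson process at the extrapolated rate.
p = 1 - math.exp(-rate_per_year * lifetime_years)
print(f"chance during a lifetime: {p:.0%}")
```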

In investing, the same principles apply.  If we ask the right question, then there is predictability in the market.  However, if we ask the wrong question, then the market is totally random.

For example, consider predicting the average return of the S&P 500.  The figures below use a price-to-earnings ratio to predict the average return of the market over 1-year and 20-year periods.  Over 1 year, the market’s behavior is extremely random and unpredictable.  By contrast, over 20-year periods, the market’s behavior has been quite predictable.

This is the exact reason why we own the businesses in Cecropia’s portfolio for many years.  Over short periods, like a couple of years, randomness dominates.  However, over much longer periods, there is predictability.

Back to our original question.  When will the next recession occur? 

A recession is two consecutive quarters of decreasing GDP (Gross Domestic Product).  The following figure shows predicted GDP vs actual GDP between 1986 and 2006.  As you can see, there is no correlation between predicted GDP and actual GDP.  The result: GDP and recession predictions are all noise and no signal, and it is likely that no one can predict with accuracy when the next recession will be.
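Under that two-consecutive-quarters definition, identifying a recession in a GDP series after the fact is purely mechanical; the hard (and, per the figure, apparently impossible) part is forecasting the series itself.  A minimal sketch with made-up quarterly figures:

```python
def in_recession(gdp):
    """Flag each quarter that completes at least two consecutive
    quarters of decreasing GDP."""
    return [i >= 2 and gdp[i] < gdp[i - 1] < gdp[i - 2]
            for i in range(len(gdp))]

# Hypothetical quarterly GDP levels: two down quarters mid-series.
gdp = [100, 101, 102, 101, 100, 101, 102]
print(in_recession(gdp))  # only the quarter completing two declines is flagged
```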

(As an aside, note how likely Trump’s plan for 5-6% GDP growth is.)

One of the most critical and rarely discussed questions in investing is:  “What is predictable, and what is just noise?”  Our fund concentrates on what has been predictable.  Every position we own is based on one or more statistical views of the world that we believe to be predictive.  Over a short period, like one year, I can’t say whether a security we own will increase or decrease in price.  However, I can say that, statistically, owning securities such as ours has produced good returns over long periods of time.

I’ll conclude with the prescient wisdom of Peter Lynch, a value investor who averaged a 29.2% annual return while running Fidelity’s Magellan Fund from 1977-1990.  Speaking about the “wrong” question, Peter said:

I spend about 15 minutes a year on economic analysis. The way you lose money in the stock market is to start off with an economic picture. I also spend 15 minutes a year on where the stock market is going.

Peter understood what is predictable and what is just noise. 

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent


How does code breaking relate to investing? by Chip Kent

Wars have always pushed technology in interesting directions.  For instance, since conflict began, soldiers have needed to communicate securely on the battlefield.  From the dawn of written communication through World War I, codes were made and codes were broken, but people had very little mathematical understanding of information.

During World War II, machine-based encryption took hold.  The most famous instance of WWII cryptography is the German Enigma machine.  The Enigma was a keyboard connected to a series of rotors.  When an operator pressed a key, the rotors scrambled the output.  To descramble a message, you had to know the configuration of the rotors before the message was typed.

To combat the German Enigma, the Allies were forced to understand the basic mathematics of information.  Alan Turing and a team of Allied cryptographers leveraged seemingly insignificant details -- like Enigma’s inability to encode a letter as itself -- to break the mathematical armor protecting Enigma.  While this mathematical battle remained classified for many years, it has since been depicted in two great movies:  “The Imitation Game” and “Enigma.”

Claude Shannon, a U.S. wartime cryptographer who worked for AT&T’s Bell Labs, met Alan Turing when Turing was stationed in Washington, D.C., for two months in 1943.  Impressed with and influenced by Turing’s work, by 1944 Shannon had single-handedly created a complete theory of information.  Shannon’s theories show how to precisely quantify information, how much information can travel over a wire, how to ensure that communication is reliable, how to compress data, and more.  Shannon’s work is incredible, and it has certainly impacted your life more than Einstein’s more famous theories.  Without Shannon, for instance, there would be no internet.

By now, you are wondering: “How in the world does this relate to investing?”  In 1956, John Kelly, a scientist at AT&T’s Bell Labs, sought to understand noise issues in AT&T’s long-distance telephone signal.  While thinking about improving the telephone system, Kelly began thinking about gambling -- a risque subject for a mathematician in the 1950s.  One question Kelly asked was: “What would happen if a gambler had a source of useful information, but the source was not always right?”

As an example, consider a gambler in Chicago betting on horse races taking place in New York.  This Chicago gambler has a contact in New York who will call him with the results of the race just before betting ends in Chicago.  Unfortunately, during an exciting race, the New York crowd can get loud and rowdy.  As a result, the Chicago gambler may not be able to understand what his source says, and therefore, he may bet on the wrong horse.

Kelly’s analysis of placing bets with unreliable information came to be known as “The Kelly Criterion.”  The Kelly Criterion says that the optimal strategy is to maximize the expected logarithm of your wealth -- equivalently, your long-run compound growth rate.

To achieve this, you should bet more in cases where you are more likely to win more money, and additionally, you should bet more in cases where you are more certain of the outcomes.
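For a bet with just two outcomes, those two rules collapse into the classic closed form f* = (bp - q) / b, where p is the win probability, q = 1 - p, and b is the payoff per dollar wagered.  A sketch (the example numbers are mine, purely illustrative):

```python
def kelly_binary(p: float, b: float) -> float:
    """Optimal bankroll fraction for a bet won with probability p,
    paying b-to-1 on a win and losing the stake on a loss."""
    q = 1 - p
    return (b * p - q) / b

# A 55% chance of winning an even-money (1-to-1) bet:
print(kelly_binary(0.55, 1.0))   # bet 10% of bankroll

# Both levers raise the optimal bet, as described above:
print(kelly_binary(0.55, 2.0))   # bigger payoff  -> bet 32.5%
print(kelly_binary(0.60, 1.0))   # more certainty -> bet 20%
```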

In the professional gambling world, the Kelly Criterion is very important.  For example, in blackjack, a good card counter only has a 0.5% advantage over the casino.  With such a small edge, it is imperative that a card counter optimally use the available information.  By adjusting bet sizes using the Kelly Criterion, a good card counter can improve his edge to roughly 1%.

In his book “The Dhandho Investor”, Mohnish Pabrai gives a clear and concise example from the investing world.  For the business Stewart Enterprises, Pabrai estimates the outcomes to be:

[Table: Pabrai’s estimated probabilities and corresponding returns for an investment in Stewart Enterprises]

I’ll spare you the complicated mathematics involved in calculating the Kelly Criterion.  However, what the formula says is that, given the above probabilities and expected returns, an investor choosing between allocating capital to Stewart Enterprises and holding cash should be 97.5% invested in Stewart.
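Since Pabrai’s exact table is reproduced only as an image, here is a runnable sketch with an assumed outcome distribution -- an 80% chance the investment doubles, a 19% chance it breaks even, and a 1% chance of total loss (my numbers, chosen so the optimizer lands on the 97.5% figure, not necessarily Pabrai’s).  Kelly’s prescription is to pick the allocation f that maximizes expected log wealth:

```python
import math

# Assumed outcome distribution for the investment (my numbers):
# (probability, return per dollar invested)
outcomes = [(0.80, 1.00),   # investment doubles
            (0.19, 0.00),   # investment breaks even
            (0.01, -1.00)]  # total loss

def growth(f):
    """Expected log of wealth after allocating fraction f to the
    investment and holding the rest in cash."""
    return sum(p * math.log(1 + f * r) for p, r in outcomes)

# Grid search over allocations; f = 1 is excluded because a total loss
# there wipes out the whole portfolio (log 0).
best = max((i / 10_000 for i in range(10_000)), key=growth)
print(f"Kelly allocation: {best:.1%}")
```

With these assumed odds, the expected-log-growth curve peaks at f = 79/81, about 97.5%.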

At this point, Pabrai had not heard of the Kelly Criterion, and he invested 10% of his fund’s assets in Stewart Enterprises.  If Pabrai had known of Kelly, would he have invested more?  Since I know about the Kelly Criterion, would I have invested more than 10%?  Maybe not.  Here is why.

There are five primary reasons not to bet the full amount suggested by the simple Kelly Criterion.  (We can account for these cases using a more complex extension of Kelly’s work.)

1) Opportunity costs.  Kelly’s theory says that you should never make investments where there is a probability of total loss for the portfolio.  Let’s assume that we can choose between allocating capital to cash, to Stewart Enterprises, and to Trawets Enterprises.  Trawets has exactly the same payoff odds as Stewart, and the outcomes of Stewart and Trawets are independent.  As a result, if we allocated based on the simple Kelly Criterion, we would invest 97.5% of the portfolio in Stewart and 97.5% of the portfolio in Trawets.  Because we have invested almost double our assets, there is a non-zero chance that the portfolio will experience a total loss.  As a result, when presented with multiple investment opportunities, the optimal amount to invest in each security is less than the amount that you would invest if the only options were cash and a single security.

2) Overbetting is more harmful than underbetting.  If you know the exact probabilities for all outcomes of an investment, then the Kelly Criterion tells you the optimal amount of capital to allocate to the investment.  If you allocate slightly less, you will get a little less return and less volatility.  On the other hand, if you invest more than the Kelly Criterion suggests, you will get less return and more volatility -- not a good combination.  Furthermore, betting in excess of a certain threshold will guarantee negative long-run growth, no matter how favorable each of your individual bets is.  In the case of Stewart Enterprises, a bet of 97.5% is optimal.  Reducing the bet by 2.5% to 95% will reduce the return and the volatility very slightly.  On the other hand, increasing the bet by 2.5% to 100% will almost certainly lead to ruin in the long run, because in 1% of cases the investment is a total loss.

Keep in mind that the optimal Kelly size is dictated by the actual odds of the problem, rather than what we estimate the odds to be.  In cases where there is uncertainty in what the actual odds are, investing a fraction of what Kelly suggests protects us from accidentally overbetting if we have estimated the odds too optimistically.
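The asymmetry is easy to see numerically.  With an illustrative set of odds (my assumption: 80% chance the investment doubles, 19% chance it breaks even, 1% chance of total loss, for which full Kelly is about 97.5%), expected log growth falls off gently below the optimum and catastrophically above it:

```python
import math

# Illustrative odds (my assumption): 80% chance the investment doubles,
# 19% chance it breaks even, 1% chance of total loss.  Full Kelly for
# these odds is about 97.5%.
outcomes = [(0.80, 1.00), (0.19, 0.00), (0.01, -1.00)]

def growth(f):
    """Expected log wealth per period; -inf signals certain eventual ruin."""
    try:
        return sum(p * math.log(1 + f * r) for p, r in outcomes)
    except ValueError:  # log of zero: the 1% total-loss case wipes us out
        return float("-inf")

for f in (0.90, 0.9753, 1.00):
    print(f"allocate {f:7.2%} -> growth rate {growth(f):.4f}")
# Underbetting gives up only a sliver of growth; allocating 100% makes
# eventual ruin certain, because 1% of the time the multiplier is zero.
```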

3) Investing the full Kelly size results in drawdowns beyond the comfort of most investors.  By investing a fraction of the Kelly size, the portfolio’s volatility becomes easier to stomach, without giving up too much return.  For instance, in the Stewart Enterprises case, investing full Kelly size results in an average return of 50%, but with a 1% chance of losing 97.5% of your portfolio’s value.  Investing half Kelly size reduces the average return to 31%, but the maximum loss, which still occurs 1% of the time, shrinks to 49%.

4) Infrequent extreme events happen much more frequently than we appreciate.  As much as we would prefer not to think about it, extreme events -- like a nuclear weapon detonating in a major city -- do happen, albeit rarely.  Because these events are rare and extreme, we inevitably underestimate the odds of them occurring when we compute possible investment outcomes.  Again, using a fractional Kelly strategy insulates us from such estimation errors. 

5) We may never reach the “long run.”  It can be mathematically proven that for fixed goals -- such as multiplying your capital by 100x or 1,000x -- the Kelly investor will reach the goal, on average, in less time than all other strategies.  To achieve these results, however, an investor or bettor must be able to make a sufficient number of investments or bets.  The catch is that an investor may be unwilling or unable to make enough bets to attain the desired goal with sufficient odds.  In such cases, it may be optimal to invest less than what the simple Kelly strategy would suggest.

While many people are familiar with Warren Buffett, very few are familiar with how he actually allocates capital.  His approach is clearly at odds with the bulk of the financial industry.  Buffett thinks like a Kelly investor.  He commonly bets 25%-40% of his net worth on a single company, and he bets more than that in situations with higher certainty and higher payouts.  Here is how Buffett described his allocation process:

I have 2 views on diversification. If you are a professional and have confidence, then I would advocate lots of concentration. For everyone else, if it’s not your game, participate in total diversification. The economy will do fine over time. Make sure you don’t buy at the wrong price or the wrong time. That’s what most people should do, buy a cheap index fund and slowly dollar cost average into it. If you try to be just a little bit smart, spending an hour a week investing, you’re liable to be really dumb.

If it’s your game, diversification doesn’t make sense. It’s crazy to put money into your 20th choice rather than your 1st choice. “Lebron James” analogy. If you have Lebron James on your team, don’t take him out of the game just to make room for someone else. If you have a harem of 40 women, you never really get to know any of them well.

Charlie and I operated mostly with 5 positions. If I were running 50, 100, 200 million, I would have 80% in 5 positions, with 25% for the largest. In 1964 I found a position I was willing to go heavier into, up to 40%. I told investors they could pull their money out. None did. The position was American Express after the Salad Oil Scandal. In 1951 I put the bulk of my net worth into GEICO. Later in 1998, LTCM was in trouble. With the spread between the on-the-run versus off-the-run 30-year Treasury bonds, I would have been willing to put 75% of my portfolio into it. There were various times I would have gone up to 75%, even in the past few years. If it’s your game and you really know your business, you can load up. [...]

In stocks, it’s the only place where when things go on sale, people get unhappy. If I like a business, then it makes sense to buy more at 20 than at 30. If McDonalds reduces the price of hamburgers, I think it’s great.
— Warren Buffett, 2008

The portfolio of Mohnish Pabrai (the well-known value investor mentioned earlier) likewise demonstrates Kelly investing.

As this discussion began with information theory, so it will end.  The great information theorist Claude Shannon was also a great investor.  His long-term return was reported to be 28% per year -- exceeding Buffett’s return for the same period.  From the limited information available on Shannon’s portfolio, we know that he would take positions in excess of 80% in a single security.  I’m certain that Shannon’s investment ride was extremely bumpy, but he made optimal decisions with the available, uncertain information.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

What would the Mesopotamians think of our interest rates? by Chip Kent

For the last 35 years, interest rates have steadily declined from around 16% to around 1%.  Since 2008, our country has been in a near-zero interest rate environment.  35 years is a long time.  The only market participants who have experienced rising interest rates are at or approaching retirement age.  All other market participants have only experienced a slow and steady decrease in interest rates (as seen in the chart below) -- which corresponds to a slow and steady increase in bond prices.

[Chart: U.S. interest rates over the last 35 years]

Such extended, one-way price moves tend to produce beliefs like “bonds are safe” or “you can’t lose money in bonds,” etc.  While these beliefs may be accurate during a one-way decrease in interest rates with tame inflation, reality is quite a bit more complex. 

While the last 35 years have experienced a one-way decrease in interest rates, the preceding 35 years corresponded to a one-way increase in interest rates.  During such increases in interest rates, bonds can become worth less.  Furthermore, if a bond is held until maturity, the holders will get back their money, but the money they do get back will likely be worth far less than expected, since high interest rates typically correspond to high inflation.  (On the figure below, note how high interest rates typically correspond with high inflation, while low interest rates typically correspond with low inflation.)
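The way rising rates make existing bonds worth less is plain discounting. A minimal sketch with a hypothetical 30-year, 3% coupon bond (illustrative numbers, not from the letter):

```python
def bond_price(face, coupon_rate, market_rate, years):
    """Present value of a fixed-coupon bond's cash flows at a given market rate."""
    coupon = face * coupon_rate
    pv = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    return pv + face / (1 + market_rate) ** years

print(bond_price(1000, 0.03, 0.03, 30))  # ~1000: priced at par when rates match
print(bond_price(1000, 0.03, 0.05, 30))  # ~693: rates rise to 5%, roughly a 30% loss
```

The holder who waits to maturity still collects the $1,000 face value -- but, as noted above, in a high-rate world that money typically buys far less.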

Andy Haldane from the Bank of England has done an interesting analysis on interest rates.  From various sources, he has compiled a 5,000-year history of prevailing interest rates -- from Mesopotamia to today.  Over this 5,000-year history, interest rates spent most of their time between about 4% and 6%.  More remarkably, our current near-zero interest rates are at a 5,000-year low.  I guess the Mesopotamians were unwilling to lend for zero return.

As time passes since our recent deflationary event (the Great Recession), I expect interest rates will eventually begin trending towards more historically common levels.  Over time, I expect governments will have a hard time restraining spending and money printing, which will result in increased inflation.  In the case of the Great Depression, roughly a decade after the depression began, interest rates started their upward march. 

Furthermore, over the last few years, a stronger and stronger chorus of investors has piled into lower and lower quality bonds.  The reasoning typically runs: “I used to be able to get a 5% return; now I have to get at least a 4% return.”  Instead of stopping to think about whether a 4% return is reasonable in the current 1% return environment, the investor purchases any investment yielding at least 4%.  To achieve these “must have” yields, investors end up grabbing any garbage that promises the required yield (e.g. Greek government bonds at 5%, Petrobras 100-year bonds at 6.85%, etc.).  I see such risk-agnostic behavior as concerning, and I see the grab for garbage as very concerning when combined with a 5-millennium low in interest rates.

So, what does all of this mean for stocks?

Since the 1870’s, the average earnings yield (earnings / price) has been about 6% -- which corresponds to a P/E ratio of 16.7.  This long-term average earnings yield is close to the 4%-6% long-term average of interest rates.  Therefore, a reasonable P/E of the market is something like 1/(interest rate).  If interest rates stay in the 1%-2% range for an extended period, stocks could reasonably trade at 50x-100x earnings (2x-4x current prices).  Similarly, interest rates could increase to 4% or more without leading to a lasting drop in stock prices.  This analysis is at odds with the common media perceptions that increasing interest rates must correspond with a decrease in stock prices and that stocks in general are currently overpriced.
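The heuristic above is one line of arithmetic. A quick sketch of P/E ≈ 1/(interest rate):

```python
def implied_pe(rate):
    """Rough 'fair' P/E implied by an earnings yield equal to the interest rate."""
    return 1.0 / rate

for r in (0.06, 0.04, 0.02, 0.01):
    print(f"rate {r:.0%}: implied P/E {implied_pe(r):.1f}")
# 6% recovers the long-run average P/E of ~16.7; 1%-2% implies 50x-100x earnings.
```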

There is one more peculiar implication of a zero interest rate world worth discussing.  In the last few years, stock prices of money-losing businesses, which can best be described as toxic sludge, have outperformed the stock prices of money-making businesses. 

A business is worth the sum of all future cash flows adjusted for the time value of money.  A dollar tomorrow should be worth less than a dollar today.

Things become very strange as interest rates approach zero.  With zero interest rates, cash flow in 30 years is worth the same as cash flow tomorrow.  Assuming zero interest rates will last forever allows investors to use this bizarre calculus to justify the price of a money-losing, highly-speculative biotech or social media company because the company “will eventually make a lot of money”. 

Because the hypothetical cash flows of a “make money someday business” exist far in the future, the value of the business is very sensitive to interest rate assumptions.  Furthermore, with a rise in interest rates, lenders will begin demanding more to loan the money to keep the business operational long enough to reach the hypothetical payday.  A small change in rates could rapidly bring the party to an end.
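To see how sensitive far-off cash flows are to the rate assumption, discount a single hypothetical $100 cash flow arriving 30 years out:

```python
def present_value(cashflow, years_out, rate):
    """Discount a single future cash flow back to today."""
    return cashflow / (1 + rate) ** years_out

for r in (0.00, 0.01, 0.04):
    print(f"rate {r:.0%}: $100 in 30 years = ${present_value(100, 30, r):.2f} today")
# At 0% the distant dollar is worth face value; at 4% it is worth about $31 --
# a modest rise in rates erases most of a "make money someday" valuation.
```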

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

Could selling a Jaguar have predicted Brexit? by Chip Kent

This quarter’s defining feature was Brexit, which caused significant price movements for both the entire market and our portfolio.  Clearly, none of the price movements were in a good direction.  Over the long-term, however, I don’t expect Brexit to have a meaningful impact on the U.S. economy.

The E.U. is constructed in a manner very similar to the Articles of Confederation, which governed the U.S. in the 1780s.  Just as the Articles of Confederation eventually failed because they lacked a strong central government, so too may the E.U. suffer the same fate.

As a personal example, about 15 years ago, I sold a vintage Jaguar XKE to a gentleman in Germany -- in his words, so he “could drive it very fast on the Autobahn.”  Hans was extremely detail-oriented and discovered that he could dodge a 20% tax by importing the car into the Netherlands and having it trucked to Germany, rather than importing it directly into Germany.  This clearly was a sign of a dysfunctional system.

One area of potential concern is banking in the E.U.  After the 2008-2009 mess, the U.S. forced its banks to clean up their balance sheets and reduce leverage.  The E.U. still has not completely addressed that problem.  

Another aspect of Brexit is the fate of banking in London.  London maintains its position as Europe’s global banking hub because treaties allow banks to locate in London and operate throughout Europe.  With Brexit, many of those treaties will need to be renegotiated.  If the renegotiation is not successful, London-based banks will face challenges.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

How does the current market compare to long-term history? by Chip Kent

Over the last few years, the market has put a premium on growth, momentum (industry “mumbo jumbo” for buying things that have gone up a lot recently), money-losing companies, passive investing in whatever indices have gone up the most recently, and money-losing IPOs.  This environment is the antithesis of our fundamental approach.  Consider that, over the last 12 months, buying only money-losing companies would have gained 20-50%.  By contrast, for 2015, buying the cheapest value stocks and shorting the most expensive value stocks would have lost 13%.  These outcomes are the opposite of what the businesses’ fundamentals dictate.  Over the long-run, I do not believe that the current trends are sustainable.

The current period marks the longest duration on record where value has underperformed.  Value has not been this cheap relative to growth since the peak of the dotcom bubble.  The following table and plot provide some insight into the market’s current behavior.

[Table and plot: value vs. growth performance]

(Data from Fred Piard)


Despite the current market environment, I believe that the long-term will be kind to our value-investing strategy.  As long as I continue to do a good job valuing companies, I believe our effort will be rewarded -- even if it takes some time.  In the long-term, the market is efficient and businesses eventually trade around what they are worth.  In the short-term, anything can -- and often does -- happen.

Although unpleasant, it is very important that value strategies sometimes underperform in the short-term.  Historically, a value strategy underperforms in about one-third of all years and has a 10% chance of underperforming over a 2-year horizon.  However, over horizons of 5 years or more, value-investing strategies have historically outperformed more than 75% of the time.  In other words, value investing works, but it doesn’t always work.

We not only expect our value-investing strategy to underperform occasionally -- but more importantly, this fact works to our advantage. If value investing worked every year, every month, every day, and in every market environment, then everyone would be a value investor.  If everyone were a value investor, then our opportunities to invest in mis-priced businesses would evaporate.  However, the reality is that value investing doesn’t always work (like now).  That reality keeps out competition, which allows the strategy to continue to work over the long-term.

Right now, many self-proclaimed value investors have thrown in the towel -- or their investors have thrown in the towel for them.  Yes, the market’s current bumpiness is unpleasant, but it is this bumpiness that winnows out our competition.  Less competition makes it easier for us to work towards our goal of above-average long-term returns, provided that we stay the course.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

Could simple arithmetic estimate when the oil glut will end? by Chip Kent

The price of oil is elastic over the long-term but inelastic over the short-term.  Over the long-term, the price of oil is determined by the cost of extracting the last barrel of oil the world requires.  Given enough time, oil production adjusts up or down in response to market prices; similarly, given enough time, demand adjusts up or down in response to market prices as consumers choose between a new SUV or a new hybrid car.

Over the short-term, however, market prices react dramatically to small perceived changes in supply and demand.  Over short periods, production and demand both remain relatively constant, as consumers continue driving the same cars.  If more oil is produced than is demanded, the excess ends up in storage, driving down the price; by contrast, if less oil is produced than is demanded, prices skyrocket.

As a physicist, I like to do back-of-the-envelope calculations to understand the world around me. These calculations are very crude but often yield important insights.  Let’s look at the current oversupply of oil.

The world produces 96.6M barrels/day of oil.  For the sake of simplicity, let’s use 100M barrels/day.  If the world stopped drilling new wells, oil production would fall 5-8%/year as existing oil fields become less productive.  At the same time, oil demand grows by 1-2%/year.  Combining these two factors, the world needs 6-10% more oil production each year.  This 6-10M barrels/day of new production is roughly equivalent to the total production of the U.S., Russia, or Saudi Arabia -- and it must come online every single year.

Billions are invested each and every year to achieve the new 6-10M barrels/day of production.  As the price of oil has fallen, investment has fallen as well.  Currently, oil producers are only investing 65-75% of what they would have invested during a normal year.  As a result, instead of producing 6-10M barrels/day of new production, we should only expect 3.9-7.5M barrels/day of new production -- basically a 2.1-2.5M barrels/day shortfall.

In October 2014, the oil market was estimated to be oversupplied by 1-2M barrels/day.  With the decreased level of investment by oil producers, this market should take roughly 6-12 months to come back into balance.  Add on another 6 months for the time it took to wind down existing drilling.  This would indicate the oil market should come back into balance in the October 2015 to April 2016 range.  Using up excess global inventories will extend the estimate a few more months.
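The paragraphs above reduce to a few lines of arithmetic. This sketch simply re-runs the letter's own numbers (million barrels/day throughout):

```python
production = 100.0                          # rounded world output, M bbl/day

# New production needed yearly: field decline (5-8%) plus demand growth (1-2%).
needed_low = production * (0.05 + 0.01)     # 6.0
needed_high = production * (0.08 + 0.02)    # 10.0

# Investment running at 65-75% of normal cuts new production accordingly.
supply_low = needed_low * 0.65              # 3.9
supply_high = needed_high * 0.75            # 7.5
shortfall_low = needed_low - supply_low     # 2.1
shortfall_high = needed_high - supply_high  # 2.5

# Months to work off a 1-2M bbl/day oversupply at that shortfall rate.
fastest = 12 * 1.0 / shortfall_high
slowest = 12 * 2.0 / shortfall_low
print(f"rebalance in roughly {fastest:.0f}-{slowest:.0f} months")
```

Add the ~6 months of drilling wind-down and some time to drain excess inventories, and the crude arithmetic lands in the letter's late-2015 to mid-2016 window.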


These calculations are clearly very crude, but they do indicate that global production and demand should balance out over the coming months.  Because short-term prices are highly influenced by emotions, oil prices may take more or less time to adjust.

Just as a baby can’t be created in one month by getting nine women pregnant, only time will stabilize the oil market.  However, once demand exceeds supply, oil prices may react sharply.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

Does your stomach trump your brain? by Chip Kent

I’ll begin this article with two quotes from Peter Lynch, a value investor who ran Fidelity’s Magellan Fund and averaged a 29% annual return from 1977 to 1990.

The key organ for the stock market is not the brain, but the stomach.

I can’t recall ever once having seen the name of a market timer on Forbes’ annual list of the richest people in the world.  If it were truly possible to predict corrections, you’d think somebody would have made billions by doing it.

This quarter, the overall market experienced a correction of more than 10% within 5 trading days -- “correction” being industry jargon for losing a lot of money very quickly.  We haven’t experienced such a move for a few years, but during the last 115 years, investors experienced on average:

  • -5% market correction: 3x / year

  • -10% market correction: 1x / year

  • -20% market correction: 1x / 3.5 years

  • -30% market correction: 1x / 8 years

Such fluctuations are a fact of life in equity investing.  The last few years of tranquility were unusual; the recent fluctuation was, in fact, entirely normal.  Since market fluctuations are much larger than fluctuations in the economics of the underlying businesses, these market fluctuations present opportunities for value investors.  Although unnerving, such volatility is our friend.

The other fact of life is that human beings are emotional animals.  We crave certainty, especially in financial markets, where such certainty cannot possibly exist.  Psychologically, we are programmed to panic at wild market swings.  Dealing with market swings rationally is a learned response.


A market loss of 10% in only five trading days is unusual.  Since 1976, there have been only six times when the market has fallen 10% or more in five trading days.

The recovery time after the previous declines was relatively short.  Even in the 1987 case, where recovery appears long, the market still ended the year with a gain.

Going forward, I expect precipitous drops to become much more common than in the past.  Over the last two decades, computers have taken over the operation of the markets.  As I discussed in a previous article, “The High Cost of High Frequency Trading,” these computers are very dumb.  They know the price of everything and the value of nothing.  As a result, markets appear to be well-functioning and liquid -- until they are not.

Real-time financial media has enabled financial panic to spread faster than ever before, while internet trading, cell phone trading, and algorithmic trading have allowed that panic to be acted upon with unprecedented speed.  For example, during the recent correction, “investors” were willing to pay almost 5% of their portfolio to insure against losses over the next 30 days!  That is panic.

Fear of loss drives much investor behavior and leads people to make short-term, irrational decisions to ease that fear.  Suddenly, self-professed long-term investors are unable to control themselves and decide that the only sensible thing to do is become “market timers.”  These decisions hurt their long-term returns and provide opportunity for those who are prepared to focus their energies on the things that count and that can be controlled.

Emotion is one of the investor’s greatest enemies.  Fear makes it hard to remain optimistic about holdings whose prices are plummeting, just as envy makes it hard to refrain from buying the appreciating assets that everyone else is enjoying owning.  Superior investors may not be insulated, but they manage to act as if they are. 
— Howard Marks

Market corrections always raise the question of hedging.  Running a hedged portfolio at all times is obvious, right?  Not quite.  What’s clear to the broad consensus of investors is almost always wrong.  When the costs of hedging are considered, hedging is much less attractive -- at least in the current market.

Far more money has been lost by investors preparing for corrections, or trying to anticipate corrections, than has been lost in corrections themselves. 
— Peter Lynch

An investor could hedge by (1) shorting a broad market index, (2) buying puts on a broad market index, or (3) shorting a basket of the most toxic stocks he can find.  Currently the S&P 500 is cheap relative to bonds.  If an investor shorts the S&P or buys puts on the S&P, and bond yields stay low, he will likely lose money on the hedge.  Bond yields would have to rise above 5% -- which I think is unlikely -- or the market would need to appreciate significantly before shorting the index would seem like a good idea.  Similarly, shorting a basket of terrible businesses tends to cost a few percent per year.  And don’t forget that toxic sludge went up 20%-50% in the last 12 months.

In the current environment, I estimate that each of the three hedging strategies would cost on the order of 5%/year.  While a smoother ride would be nice, I personally don’t think it is worth the cost.  I would prefer a lumpy 15% return to a smooth 10% return -- over 10 years, 15%/year compounds to about 4.0x while 10%/year compounds to only about 2.6x, leaving the hedged portfolio roughly 36% smaller.
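Compounding the two return streams makes the size of the 5%/year drag concrete (the 15% and 10% figures are the ones used in the text):

```python
def grow(rate, years=10):
    """Wealth multiple from compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

lumpy, smooth = grow(0.15), grow(0.10)
print(f"lumpy 15%/yr:  {lumpy:.2f}x")           # ~4.05x
print(f"smooth 10%/yr: {smooth:.2f}x")          # ~2.59x
print(f"hedged portfolio ends {1 - smooth / lumpy:.0%} smaller")
```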

Having said that, there are situations where an investor could make money on a hedge.  For example, in both 1987 and 2000, stocks were very expensive relative to bonds, so hedging stock exposure would have made sense.

Most long/short hedge funds have significantly underperformed since 2009 because they insisted on running short positions even though the market was historically undervalued.  While I’m certain the smooth performance of these funds made them easy to sell to prospective investors after the preceding tumultuous years, being market-neutral ended up costing those investors a lot of money.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

How are interest rates like a thermostat? by Chip Kent

As I write this, my basement office is quite cold while my upstairs family room is quite comfortable.  A single thermostat controls both parts of the house.  I could choose to make my office very comfortable while making my family room very hot, but there is no way to make both my office and the family room comfortable.  It is just not possible.

The Federal Reserve faces a similar dilemma.  It has one knob (interest rates) which impacts multiple parts of the economy.  Any tweak to the knob will improve some areas of the economy while making other areas worse.

Since 2008, the Fed has set interest rates extremely low -- at least I consider zero to be low.  These low rates have stimulated spending and economic growth.  As you would expect, low interest rates have encouraged buyers to take on debt to buy couches, clothes, cars, stocks, and homes.  After 7 years of this policy, debt levels have grown, and asset prices have started to become concerning.  Increasing interest rates would curb the growth of US debt and asset prices, helping us to avoid another 2008-like event.

At the same time that the Fed wants to raise rates, Europe has lowered rates in an effort to stimulate its economy.  Currently 10-year US government debt pays 2.3% while German debt pays only 0.7%.  This interest rate differential has led many Europeans to convert their euros to dollars in an effort to take advantage of higher US interest rates.  The increased demand for dollars relative to euros has driven up the value of the dollar, making US products more expensive overseas.  Because about half of sales for the largest US businesses originate overseas, a decrease in overseas sales significantly impacts corporate profitability.

The Fed is confronted with a tough dilemma.  Raising interest rates can get US debt levels and asset prices under control.  Unfortunately, raising interest rates will also drive up the US dollar and therefore drive down exports and corporate profitability.  Just as I have to choose between discomfort in either my office or the family room, the Fed must choose between discomfort in different parts of the economy.  Any adjustment the Fed makes to interest rates will create discomfort.  The question is: “Who will be comfortable?”  

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

How do taxes engineer your future? by Chip Kent

By reading hundreds of annual reports a year, I see many trends.  One interesting trend is the globalization of revenue.  Over the last few decades, US-based businesses have steadily grown their revenue overseas.  In fact, the largest 500 US businesses now generate about half of their sales overseas.

The growth of overseas sales combined with US tax law has produced a peculiar situation.  When a US business generates a profit overseas, it is first taxed by the country where the revenue was generated.  When the business finally decides to bring the cash it has generated back to the US, the US government slaps it with a 35% tax.

CEOs are not stupid.  When faced with the 35% repatriation tax, they choose to build factories and hire employees overseas instead.  Furthermore, US businesses have been migrating their headquarters to countries with lower corporate tax rates.  As an example, a recent merger of a US and an Italian business placed the headquarters in England in order to minimize the combined business’s tax rate.

In a world with easy travel and fast communications, taxation becomes yet another variable to tweak in order to optimize a business’s profitability.  Unfortunately, the incentives created by current US tax law move economic and job growth from the US to countries with more favorable tax treatment.  Future US economic and job growth is being engineered, for good or ill, by our tax law.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

Can you bargain hunt in an expensive market? by Chip Kent

Several partners and potential partners have asked me about how value investing works in an expensive market.

Since the 2009 bottom, the market has run up considerably.  This run-up creates angst and anxiety among investors.  The natural question is: in an expensive market, do opportunities still exist for value investors?  Absolutely.

In the United States, there are roughly 5,000 listed stocks that are large and liquid enough to invest in.  Of these 5,000 stocks, at any given moment, some will be undervalued, some will be fairly valued, and some will be overvalued.  

Our fund targets an absolute, long-term return.  As a result, every stock we purchase must clear an absolute return hurdle before we will invest in it.  Our return hurdle is simple: for us to buy a stock, we must believe that it has the potential to double to triple in price over the next 2-3 years, while posing a minimal risk of permanent loss of capital.
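For a sense of how steep that hurdle is, the implied annualized returns are easy to back out:

```python
def cagr(multiple, years):
    """Annualized return implied by reaching `multiple`x in `years` years."""
    return multiple ** (1 / years) - 1

print(f"double in 3 years: {cagr(2, 3):.0%}/yr")   # ~26%/yr, the gentlest case
print(f"triple in 2 years: {cagr(3, 2):.0%}/yr")   # ~73%/yr, the most aggressive
```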

In a cheap market, many stocks will clear this return hurdle, making my job easy.  In an expensive market, like our current one, very few stocks will clear this return hurdle, making my job more difficult. (See the chart below.)


[Chart: number of stocks worth buying over time]

In both situations, however, some securities exist which meet our buying criteria.  Even when an expensive market narrows our pool of prospects, we still have some bargain securities to choose from.  Remember, out of the 5,000 U.S. listed stocks, we only need to find 10 bargains to fill our portfolio.  We can accomplish this even in an expensive market.  There is always a bear market somewhere.

Over time, undervalued stocks move towards fair value, overvalued stocks move towards fair value, and fairly valued stocks become either overvalued or undervalued.  Stock prices fluctuate up and down.  For instance, the average stock’s yearly high is about 30% above its yearly low.  Clearly, the value of the underlying business did not fluctuate this much, but the stock price did.


To put value investing in an everyday context, imagine that your neighbor Joe approaches you.  Joe offers to sell you his home for half of what you know you could sell it for in three years.  Without hesitation, you immediately accept Joe’s offer.  You don’t stop to check the price of the S&P 500, nor do you consider the status of any global military conflicts.  You don’t even look up what changes to Federal Reserve policy might occur.  You see Joe’s bargain for what it is, and you capitalize on it.

Now imagine that a different neighbor, Paul, stops by the day after you buy Joe’s home. Paul offers to sell you his home for one-third of what you could sell it for in three years. Probably, you regret that you weren’t lucky enough to receive Paul’s offer first, but still, you never question that you got a good bargain in buying Joe’s home.

Value investing functions in the same way, but with one twist.  Suppose that you receive bids, every single day, from potential buyers of Joe’s home.  You don’t have to sell them Joe’s home, but you do have to look at their daily bids.  Undoubtedly, these bids will fluctuate up and down (sometimes drastically), depending on which people bid, on how many people bid, or on how the people felt when they bid.  Often, the daily bids will not reflect the intrinsic value of Joe’s home.  As the owner, your job is to accept a bid only once it reflects what you know Joe’s home is worth.  Until you receive such a bid, it is best to tune out the noise of the daily auction.

As value investors, we do not buy pieces of paper (“stocks”). Instead, we buy pieces of businesses. Our job is to seek out an above-average business being sold at a below-average price, then wait for the business’s market price to reflect its intrinsic value.  With 5,000 U.S. businesses being auctioned off each and every day, there will always be a few screaming deals.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.

How much does high-frequency trading cost you? by Chip Kent

I spent more than 7 years deeply involved in high-frequency trading (HFT). During this time, I was a senior quantitative analyst and a portfolio manager at a leading option market-making hedge fund, which engaged in HFT. This position gave me a unique view of how current incentives in the electronic securities markets encourage an HFT arms race, which hurts investors in the long term.

The securities markets began so that two parties could meet and exchange an object of economic value at a price established by market participants. In all of these markets, parties with better information had a competitive advantage, because they could adjust their supply or their demand in response to the information they received.

The telegraph’s invention ushered in a new, electronic era in which those with better technology also possessed better information. The speed of this electronic communication increased as we progressed from the telegraph, to the telephone, to stand-alone computers, to networked computers connected to electronic exchanges.

This last evolution -- networked computers connected to electronic exchanges -- gave birth to HFT. In simplest terms, HFT is automated trading: it uses powerful computers to transact a very large number of orders at speeds faster than a human can process. By 2013, HFT accounted for about 50%, by volume, of all U.S. equities trades.

Today’s state-of-the-art computing hardware is so good that we have approached the physical limits imposed by the speed of light. A good HFT system makes decisions in a few millionths of a second. During such a short time, light can travel only a few thousand feet. This physical limitation necessitates the use of exotic, special-purpose hardware which HFT companies co-locate at the exchanges.
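As a rough back-of-the-envelope check (the 3-microsecond decision window below is my own stand-in for “a few millionths of a second,” not a figure from the article):

```python
# How far does light travel during one assumed HFT decision window?
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second, in vacuum
FEET_PER_METRE = 3.28084

def light_distance_feet(seconds: float) -> float:
    """Distance light travels in `seconds`, expressed in feet."""
    return SPEED_OF_LIGHT_M_PER_S * seconds * FEET_PER_METRE

print(round(light_distance_feet(3e-6)))  # -> 2951, i.e. "a few thousand feet"
```

In one microsecond, light covers less than 1,000 feet, which is why physical distance to the exchange matters and why firms pay to co-locate.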

HFT is a double-edged sword. On the positive side, electronic trading, and HFT in particular, has significantly narrowed the bid/ask spread that market participants pay to transact, and that reduces costs to investors. On the negative side, compressing the bid/ask spread has decreased the quantity available to trade at the bid/ask, and that makes it difficult to purchase more than a few hundred shares at the bid/ask. This small quoted size leads to very thin markets. In turn, these thin markets are extremely susceptible to supply/demand dislocations, as we observed during the May 2010 “flash crash” and during similar mini flash crashes which occur in a few securities each week.

HFT contributes to these flash crashes because of the computer hardware and software that it requires. Consider how this plays out: when an HFT system receives data from the exchange, it must react instantly, adjust its calculations, and send new trading instructions back to the exchange, all within a few millionths of a second. By contrast, a human eye blink takes about 300 milliseconds -- on the order of 100,000 times longer than the few-microsecond turnaround that HFT systems require.
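The gap between human and machine reaction times is easy to make concrete (again taking 3 microseconds as my own illustrative decision window):

```python
# One human eye blink measured in HFT decision windows.
BLINK_SECONDS = 300e-3  # a human eye blink, ~300 milliseconds
HFT_SECONDS = 3e-6      # one assumed HFT decision window

ratio = BLINK_SECONDS / HFT_SECONDS
print(f"One blink spans {ratio:,.0f} HFT decision windows")  # -> 100,000
```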

To achieve this microsecond turnaround, state-of-the-art HFT systems minimize how many computations they perform. The exotic hardware that HFT firms employ to reduce latency constrains computation even further: the limited silicon available on FPGAs (field-programmable gate arrays) and ASICs (application-specific integrated circuits) caps how much work these chips can do on each decision.

Unfortunately, minimizing computations means throwing out a great deal of error-checking, and that makes our markets very brittle. HFT contributes to problems in the markets, because it represents a huge hidden liability for the companies that practice it, as well as for their counterparties in the markets.

Knight Capital is one well-known example of HFT’s effects. In August 2012, Knight Capital’s HFT trading platform went haywire, quoting absurd prices for 148 NYSE stocks. This incident cost Knight $440M, and ultimately, its business.

More recently, in August 2013, Goldman’s HFT system quoted absurd prices to the U.S. equity options markets. Had the exchanges not nullified these trades, Goldman would have lost approximately $500M. In this instance, the ones who truly lost were Goldman’s counterparties in these trades. These counterparties traded with Goldman based on the absurd, HFT-generated prices. After the counterparties received confirmation from the exchanges of these trades, they hedged their positions with stock. The exchanges nullified Goldman’s trades, but they did not nullify these hedges. This left Goldman’s counterparties with most of the loss.

In both the Goldman and the Knight cases, error-checking was sufficiently lax for the problems to persist for an eternity in trading terms: 17 minutes for Goldman and 30 minutes for Knight. Such persistent, recurrent failures destroy trust in the robustness of the marketplace.

Our current electronic exchanges encourage HFT by design. HFT provides an enormous revenue stream for the exchanges, because the exchanges charge HFT firms fees for trading, fees for co-locating hardware, and fees for data feeds. Consider just the fees for data feeds. The exchanges sell multiple data feeds of varying speeds, and the price for the fastest feeds, which HFT firms demand, is several times the price of the slowest feed. The exchanges further encourage HFT by offering volume discounts; with these volume discounts, trading more leads to a lower cost per trade.

Between exchange fees, exotic hardware, and specialized software developers, it is easy for HFT shops to spend well in excess of $20M/year just to keep their systems competitive. Indeed, some HFT firms spend well over $100M/year. Not only that, but the required overhead is growing rapidly. A technological arms race exists between the HFT firms: all firms must invest in the latest and greatest technology, as soon as one competitor does.

For a $100M firm, this overhead of $20M/year amounts to an annualized expenditure of 20%. However, for a $1B firm, this overhead represents only 2% annually. Clearly, this puts better-capitalized firms at an advantage.
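The scaling argument is simple division, using the figures from the paragraph above:

```python
# Fixed overhead as a fraction of firm capital: the same $20M/year bill
# weighs ten times as heavily on a firm one-tenth the size.
ANNUAL_OVERHEAD = 20e6  # ~$20M/year to stay technologically competitive

for capital in (100e6, 1e9):
    share = ANNUAL_OVERHEAD / capital
    print(f"${capital / 1e6:,.0f}M firm pays {share:.0%} of capital per year")
# -> $100M firm pays 20% of capital per year
# -> $1,000M firm pays 2% of capital per year
```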

If we allow this technological arms race to continue, it will significantly decrease competitiveness in the marketplace. Quite simply, smaller HFT firms will be unable to bear the overhead to stay in business. We are on an unfortunate trajectory to have just the four most capitalized HFT firms provide liquidity on the exchanges. This decreased competition is not good for investors.

The current paradigm is one of brittle, thin markets with little competition. Shouldn’t we change this paradigm for the better? Change starts by recognizing that a difference exists between HFT and electronic trading. In the computer age, we should expect our markets to be electronic. Computers are much more efficient than a bunch of men yelling at each other on the exchange floor. On the other hand, these yelling men can pause to think before mindlessly executing a trade. If we choose to slow down the electronic speed game, then we could (1) give the machines more time to contemplate the consequences of their actions before submitting an order, and (2) increase the competition between firms providing liquidity in the market, since reducing the speed would likewise reduce the required overhead.

If the SEC enacted a few simple exchange requirements, the results would be threefold: (1) the pace of trading would slow drastically, (2) electronic trading systems would have sufficient time to check for errors, and (3) careful firms would no longer be at a disadvantage to their less careful competitors.

What might these exchange requirements be? First, the exchanges should add a random delay to all submitted orders before activating them. Second, the exchanges should enforce a minimum lifetime for all orders before they can be canceled. Third, the exchanges should remove volume discounts for trading, which would reduce the back-and-forth churn in the market.

A delay of 0.1 seconds is unnoticeable to a human, but it is an eternity for a computer. Microscopic as this delay is to us, it would provide sufficient time for error-checking, even for an HFT system running on low-cost hardware.
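As a toy sketch of how the first two proposals might look in an exchange’s order handling (entirely my own illustration; the delay and lifetime values are assumptions, not proposed figures from any exchange):

```python
import random

ACTIVATION_DELAY_MAX_S = 0.1  # random delay before an order goes live (assumed)
MIN_LIFETIME_S = 0.1          # minimum rest time before a cancel (assumed)

class Order:
    """An order under the proposed random-delay / minimum-lifetime rules."""

    def __init__(self, submitted_at: float):
        self.submitted_at = submitted_at
        # The exchange assigns a random activation time on arrival, so no
        # firm can rely on deterministic sub-millisecond timing.
        self.active_at = submitted_at + random.uniform(0.0, ACTIVATION_DELAY_MAX_S)

    def is_active(self, now: float) -> bool:
        return now >= self.active_at

    def can_cancel(self, now: float) -> bool:
        # Cancel requests are rejected until the order has rested for its
        # minimum lifetime after activation.
        return now >= self.active_at + MIN_LIFETIME_S

order = Order(submitted_at=0.0)
print(order.can_cancel(0.05))  # -> False: still inside the minimum lifetime
print(order.can_cancel(0.25))  # -> True: past any possible delay plus lifetime
```

The point of the randomness is that a strategy built on winning a pure speed race stops paying off once activation times are unpredictable at the 0.1-second scale.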

The exchanges are now publicly traded companies, so they must answer ultimately to their shareholders. If we slow down HFT trading, it will kill the exchanges’ HFT revenue stream, and that will be bad for their businesses. As a result, we should not expect the exchanges to initiate or to go along with reduced-speed trading. Instead, it is most likely that change will have to be forced upon them.

Before the rise of HFT, each communication improvement came with error-checking by humans. However, the current generation of HFT technology has less and less error-checking, both by humans and by computers. We can have markets which are robust, electronic, and competitive, but to achieve this goal, we have to end the HFT arms race. Slowing down the speed of trading will not be easy, but ultimately, it will produce the best results for investors and for the markets.

David R. “Chip” Kent IV, PhD
Portfolio Manager / General Partner
Cecropia Capital
Twitter: @chip_kent

Nothing contained in this article constitutes tax, legal or investment advice, nor does it constitute a solicitation or an offer to buy or sell any security or other financial instrument.  Such offer may be made only by private placement memorandum or prospectus.