zero sum game #my take

“Zero sum game” is a vague term. One of my financial-math professors said every market is a zero sum game. After class I pointed out to him that over the long term, the stock (as well as gov bond) market grows in value [1], so the aggregate “sum” is positive. If AA sells her 50 shares to BB, who later sells them back to AA, both can end up richer. With a gov bond, if you buy it at par, collect some coupons, then sell it at par, everyone makes money. My professor agreed, but he said his context was the very short term.

Options (if held to expiry) and futures look more like a ZSG to me, over any horizon.

If an option is exercised then I’m not sure, since the underlying asset bought (unwillingly) could appreciate the next day, so the happy seller and the unwilling buyer could both grow richer. That looks like a non-zero-sum game.

The best example of a ZSG is a football bet among friends, with a bookie; the best example of an NZSG is the property market. Of course we must do the “sum” in a stable currency and ignore inflation.

[1] including dividends but excluding IPO and delisting.


Export Credit Agency — some basics

Each national government has such an “exim bank”, funded by the ministry of finance. (There are also some multinational lenders like the Asian Development Bank, the World Bank…) Their mandate is to support their own exporters against *default risk*. The ECA guarantees to the supplier that even if the overseas client (the importer) defaults, the ECA will cover the supplier. Technically it’s a loan to the importer, to be paid back. For the non-commercial risks affecting large deals (up to several billion dollars), ECAs have a natural advantage over commercial banks — they are financed by the government and can handle political and other cross-border risks.

Political risk is quite high, yet the guarantee fee charged by the ECA is very low. This paradox disappears once you understand that those big deals support domestic job creation and the tax/revenue generation of the large national exporters, so even if the fee charged by the ECA is arguably insufficient to cover the credit risk it takes on, the decision still makes sense. I think these ECAs are using taxpayers’ money to help home-grown exporters.

However, the ECA won’t blindly give big money to unknown foreign importers. Due diligence is required.

ECAs are usually profitable on the back of the fees they charge (something like 1% above Libor). I guess the default intensity is statistically lower than feared, perhaps thanks to the risk analysis by the various parties. Risk assessment is the key “due diligence” and also the basis of the pricing. The #1 risk event being assessed is importer default. The exporters (suppliers) are invariably blue-chip corporations with a track record, and know what they are doing. 80% of defaults (whether by the importer, the exporter or the lending bank) are due to political risk rather than commercial risk.

Many entities take part in the risk assessment, each bringing special expertise and insight. The commercial bank has big teams dealing with ECAs; the exporter needs to assess the buyer’s credit; the ECA has huge credit-review teams… There are also specialist advisory firms who do not lend money. If any one of them identifies a high risk it can’t quantify and contain, I would say it’s only logical and prudent to hesitate.

The exporter first approaches a commercial bank (or a group of them). The bank then seeks a *guarantee* from the national ECA. The guarantee covers 90% to 100% of the bank loan, so the bank has a very small credit exposure. (ECAs themselves have very high credit ratings.) In the event of a default, the bank or exporter is compensated by the ECA.
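The coverage arithmetic is simple; here is a minimal sketch with entirely hypothetical deal numbers:

```python
# Hypothetical deal numbers (not from any real transaction)
loan = 2_000_000_000   # $2bn loan to the overseas importer
coverage = 0.95        # ECA guarantees 95% of the bank loan

bank_exposure = loan * (1 - coverage)  # bank's residual credit exposure
eca_exposure = loan * coverage         # picked up by the ECA on default
print(bank_exposure, eca_exposure)     # ~$100m residual vs ~$1.9bn covered
```

With 95% coverage the bank keeps only a sliver of the credit risk, which is why these deals can be financed at all.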

They mostly cover capital-goods exports, such as airplanes/trains/ships, power plants and infrastructure equipment, with long-term repayment… So the suppliers are mostly blue-chip manufacturers. These loans are tricky because

· Long term, so event risk is much higher

· The entity to assess is a foreign entity, often in a developing country

· Big amount, so potential financial loss is sometimes too big for a single commercial lender

China, Japan and Korea are some of the biggest exporter nations.

interest rate hike hitting FX rate

I feel in most major economies the central bank manages the interest rate, which directly affects the FX rate; the FX rate doesn’t affect the interest rate, at least not directly. Higher interest rates attract foreign capital and cause the currency to appreciate — when rates go up, so do yields on assets denominated in that currency, which increases demand from investors and pushes up the value of the currency in question.

Does a rate hike lead to inflation, which hurts the currency in question?

A rate hike hurts corporations (including exporters) and the balance of payments. Would that hurt the currency in question? I doubt it.

Fed rate hikes are carefully managed based on growth data. Therefore a rate hike is conditional on US recovery, which means a stronger USD.

Economic growth could also mean reduced government bond issuance, i.e. reduced QE, i.e. slower national-debt growth, which helps the USD.

beta definition in CAPM – confusion cleared

In CAPM, beta (of a stock like ibm) is defined as
* cov(ibm excess return, mkt excess return), divided by
* variance of the mkt excess return

I was under the impression that variance is the measured “dispersion” among the recent 60 monthly returns over 5 years (or another period). Such a calculation yields a beta value that’s heavily influenced, or “skewed”, by the last 5Y’s performance; another observation window is likely to give a very different beta value. This beta is based on such unstable input data, yet we treat it as a constant and use it to predict the ratio of ibm’s return over the index return! Suppose we are lucky, so the last 12M gives beta=1.3, the last 5Y yields the same, and year 2000 also yields the same. We could still be unlucky in the next 12M, and this beta fails completely to predict that ratio… Wrong thought!
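A minimal numpy sketch of this window-dependence — simulated monthly returns with a “true” beta of 1.3, where two different 60-month windows give two different estimates (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 years of simulated monthly excess returns; the stock's "true" beta is 1.3
n_months = 120
mkt = rng.normal(0.005, 0.04, n_months)
stock = 1.3 * mkt + rng.normal(0.0, 0.05, n_months)

def beta(stock_ret, mkt_ret):
    # CAPM beta estimate: cov(stock, mkt) / var(mkt)
    return np.cov(stock_ret, mkt_ret)[0, 1] / np.var(mkt_ret, ddof=1)

# Two different 60-month (5Y) observation windows give two different estimates
beta_early = beta(stock[:60], mkt[:60])
beta_late = beta(stock[60:], mkt[60:])
print(beta_early, beta_late)  # both roughly near 1.3, yet different
```

Even with a genuinely fixed population beta, the sample estimate moves from window to window — which is the instability the paragraph above worries about.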

One of the roots of the confusion is the 2 views of variance, esp. with time-series data.

A) the “statistical variance”, or sample variance — basically computed from 60 consecutive observations over 5 years. If these 60 numbers come from drastically different periods, then the sample variance won’t represent the population.

B) the “probability variance”, “theoretical variance”, or population variance, assuming the population has a fixed variance. This is abstract. Suppose ibm’s stock price were influenced mostly by temperature (or another factor not influenced by human behavior), so the inherent variance in the “system” is time-invariant. Note the distribution of daily returns can be completely non-normal — binomial, uniform etc. — but the variance should be fixed, or at least stable. I feel the population variance can change with time, but should be fairly stable during the observation window — slow-changing.

My interpretation of the beta definition was based on an unstable, fast-changing variance. In contrast, CAPM theory is based on a fixed or slow-moving population variance — the probability context, basically the IID assumption. CAPM assumes we can estimate the population variance from history and that this variance value will remain valid in the foreseeable future.

In practice, practitioners (usually?) use a historical sample to estimate the population variance/covariance. This is basically the statistical context A).

Imagine the inherent population variance changed as frequently as the stock price itself — it would be futile to even estimate it. In most time-series contexts, models assume some stability in the population variance.

capm – learning notes

capm is a “baby model” — the simplest of linear models. I guess capm’s popularity is partly due to this simplicity. 2 big assumptions —
Ass1a: Over 1 period, every individual security has a return that’s normal i.e. from a Gaussian noisegen with a time-invariant mean and variance.

Ass1b: there’s a fixed correlation between every pair of securities’ noisegens — joint normal. Therefore any portfolio (2 assets or more) has a normal return.

Ass2: over a 2-period horizon, the 2 serial returns are iid.

In the above idealized world, capm holds. (All the assumptions are challenged by real data.) In real stock markets, these assumptions could hold reasonably well in some contexts.

capm tries to forecast expected return of a stock (say google). Other models like ARCH (not capm) would forecast variance of the return.

Expected return is important in the industry — investors compare expected returns. Mark said the expected return provides risk-neutral probability values and enables us to price a security, i.e. determine a fair value.

Personally, i don’t have faith in any forecast over the next 5 years, because I have seen many forecasts fail to anticipate crashes. However, the 100Y stock-market history does give me comfort that over 20 years the stock mkt is likely to provide a positive return higher than the risk-free rate.

Suppose Team AA comes up with a forecast mkt return of 10% over the next 12 months. Team BB uses capm to infer a beta of 1.5 (often using the past 5 years of historical returns). Then, using the capm model, Team CC forecasts google’s 12M expected return to be 1.5 * 10% = 15%.

In the idealized world, beta_google is a constant. In practice, practitioners assume beta could be slow-changing. Over 12M, we could say 1.5 is the average or aggregate beta_google.
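The Team AA/BB/CC arithmetic can be sketched in a few lines (all numbers are the hypothetical ones from the example, with the risk-free rate assumed negligible):

```python
# Hypothetical numbers from the example: mkt forecast 10%, beta 1.5
risk_free = 0.0      # assume a negligible risk-free rate, so excess ~ raw return
mkt_forecast = 0.10  # Team AA's 12M market return forecast
beta_google = 1.5    # Team BB's beta estimate from 5Y history

# CAPM: E[r_google] - rf = beta * (E[r_mkt] - rf)
google_forecast = risk_free + beta_google * (mkt_forecast - risk_free)
print(round(google_forecast, 4))  # 0.15, i.e. 15%
```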

Personally I always feel an expected return of 15% is misleading if I suspect the variance is large. However, I do want to compare expected returns — high uncertainty doesn’t discredit a reasonable estimate of the expected return.

“Market portfolio” is defined as the combined portfolio of all investors’ portfolios. In practice, practitioners use a stock index, and the index return serves as the mkt return. Capm claims that under strict conditions, the 12M expected return on google is proportional to the 12M expected mkt return, and the scaling factor is beta_google. Capm assumes the mkt return and google return are random (noisegen), but if you repeated the experiment 99 million times the average returns would follow capm.

UIP carry trade n risk premium

India’s INR interest rate could be 8.8% while USD earns 1.1% a year. Economically, from an asset-pricing perspective, to earn the IR differential (the carry trade), you have to assume FX risk — specifically the possible devaluation of INR, and INR inflation, during the holding period.

In reality, I think INR doesn’t devalue by the 7.7% predicted by UIP, but inflation is indeed higher in India.
In a lagged OLS regression, today’s IR differential is a reasonable leading indicator, or predictor, of next year’s exchange rate. Once we have the alpha and beta from that OLS, we can also write down the expected return (of the carry trade) in terms of today’s IR differential. Such a formula provides a predicted excess return, which means the carry trade earns a so-called “risk premium”.
Note, similar to the DP, this expected return is a dynamic risk premium (lead/lag), whereas CAPM (+FamaFrench?) assumes a constant, time-invariant expected excess return.
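A toy version of that lagged OLS, on simulated data — the 0.2 slope and every other number here are made-up placeholders, not real INR/USD estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated annual data (all numbers made up): today's IR differential vs
# next year's FX appreciation of the high-yield currency
n = 30
ir_diff = rng.normal(0.05, 0.02, n)               # today's interest-rate differential
fx_next = 0.2 * ir_diff + rng.normal(0, 0.03, n)  # next year's FX move (hypothetical link)

# Lagged OLS: fx_next = alpha + beta * ir_diff
beta_hat, alpha_hat = np.polyfit(ir_diff, fx_next, 1)

# Expected carry-trade return given today's differential:
# earn the differential, plus the predicted FX move
today_diff = 0.088 - 0.011   # e.g. 8.8% INR minus 1.1% USD
expected_carry = today_diff + (alpha_hat + beta_hat * today_diff)
print(beta_hat, expected_carry)
```

The regression coefficients turn today’s observable differential into a *dynamic* expected-return forecast, which is exactly the lead/lag structure the note contrasts with CAPM’s constant premium.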

benchmark a static factor model against CAPM

Let me put my conclusion up front — I now feel these factor models are an economist’s answer to the big mystery of “why some securities have consistently higher excess returns than others.” I assume this pattern is clear when we look long term, like decades. I feel in this context the key assumption is iid, so we are talking about a steady state — all the betas are assumed time-invariant, at least during a 5Y observation window.

There are many steady-state factor models including the Fama/French model.

Q: why do we say one model is better than another (which is often the CAPM, the base model)?

1) I think a simple benchmark is the month-to-month variation. A good factor model would “explain” most of the month-to-month variation. We first pick a relatively long period like 5 years — we basically “confine” ourselves to some 5Y historical window like 1990 to 1995. (Over another 5Y window, the betas would likely be different.)

We then pick some security to *explain*. It could be a portfolio or some index of an asset class.

We use historical data to calibrate the 4 betas (assuming 4 factors). These beta numbers are assumed to be steady-state during the 5Y. The time-varying (volatile) factor values, combined with the time-invariant constant betas, give a model estimate of the month-to-month returns. Does the estimate match the actual returns? If it’s a good match, then we say the model “explains” most of the month-to-month variation. Such a model is very useful for hedging and risk management.
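A sketch of this first benchmark on simulated data — calibrate 4 steady-state betas by OLS over a 60-month window, then check how much of the month-to-month variation the fitted model explains (factor series and numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# 5Y (60 months) of simulated data: 4 hypothetical factor return series
n_months, n_factors = 60, 4
factors = rng.normal(0, 0.03, (n_months, n_factors))
true_betas = np.array([0.9, 0.4, -0.2, 0.3])   # the "steady-state" exposures
# Target portfolio return = factor exposures + idiosyncratic noise
port = factors @ true_betas + rng.normal(0, 0.01, n_months)

# Calibrate the 4 betas by OLS over the 5Y window (betas held constant)
X = np.column_stack([np.ones(n_months), factors])  # intercept + 4 factors
coef, *_ = np.linalg.lstsq(X, port, rcond=None)
fitted = X @ coef

# How much month-to-month variation does the model "explain"?
r2 = 1 - np.var(port - fitted) / np.var(port)
print(coef[1:])  # estimated betas, close to true_betas
print(r2)        # high R^2 -> model explains most monthly variation
```

A high R² here is exactly the “good match” criterion: the constant betas plus the volatile factor values reproduce most of the monthly movement.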

2) A second benchmark is less intuitive. Here, we check how accurate the 2 models are at “explaining” _steady_state_ average return.

Mark Hendricks' Econs HW2 used GDP, recession and corp profits as 3 factors (without the market factor) to explain some portfolios' returns. We basically use the 5Y average factor data (not month-to-month), combined with the steady-state betas, to come up with a 5Y average return on the portfolio (a single number), and compare this number to the portfolio's actual average return. If they match well, then we say… “good pricing capability”!
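The second benchmark is just a dot product of averages; here is the arithmetic with placeholder numbers (these are not Hendricks' actual data):

```python
import numpy as np

# All numbers hypothetical: 5Y averages of 3 macro factors
# (GDP growth, recession indicator, corp profits)
factor_avg = np.array([0.025, 0.10, 0.04])
betas = np.array([1.2, -0.1, 0.5])        # steady-state betas from calibration

model_avg_return = betas @ factor_avg      # model's 5Y average return (one number)
actual_avg_return = 0.045                  # the portfolio's actual 5Y average
pricing_error = actual_avg_return - model_avg_return
print(model_avg_return, pricing_error)     # small error -> "good pricing capability"
```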

I feel this is an economist’s tool, not a fund manager’s tool. Each target portfolio is probably a broad asset class, and beta_GDP is different for each asset class.

Suppose GDP+recession+corpProfit prove to be a good “pricing model”. Then we could use various economic data to forecast GDP etc., knowing that a confident forecast of the GDP “factor” would give us a confident forecast of the return on that asset class. This would help macro funds like GMO make asset-allocation decisions.

In practice, to benchmark this “pricing quality”, we need a sample size. Typically we compare the 2 models' pricing errors across various asset classes and over various periods.

When people say that in a given model (like UIP) a certain risk (like uncertainty in FX-rate movements) is not priced, it means the factor model doesn’t include that factor. I guess you could say the beta for this factor is hardcoded to 0.

risk premium — clarified

risk premium (RP) is defined as the expected (excess) return. An RP value is an “expected next-period excess return” (ENPER) number calculated from current data, using specific factors. An RP model specifies those factors and the related parameters.

Many people call these factors “risk factors”. The idea is, any “factor” that generates excess return must entail a risk. If any investor earns that excess return, then she must be (knowingly/unknowingly) assuming that risk. The Fama/French value factor and size factor are best examples.

Given a time series of historical returns, some people simply take the average as the Expected. But I now feel the context must include an evaluation date, i.e. the date of observation. Any data known prior to that moment can be used to estimate an Expected return over the following period (like 12M). Different people use different models to derive that forward estimate, i.e. a prediction. The various estimates create a supply/demand curve for the security, and when all the estimates hit the marketplace, price discovery takes place.

Some simple models (like CAPM) assume a time-invariant, steady-state/equilibrium expected return. CAPM basically assumes that each year there’s a big noisegen that influences the return of each security. This single noisegen generates the return of the broad “market”, and every security is “correlated” with it, as measured by its beta. Each individual security’s return also has its own uncertainty, so a beta of 0.6 doesn’t imply the stock return will be exactly 60% of the market return. Given a historical time series on any security, CAPM simply takes the average return as the unconditional, time-invariant estimate of the steady-state/equilibrium long-term return.

How do we benchmark 2 steady-state factor models? See other blog posts.

Many models (including the dividend-yield model) produce dynamic estimates, using a recent history of some market data to estimate the next-period return. So how do I use this dynamic estimate to guide my investment decisions? See other posts.

Before I invest, my estimate of that return needs to be quite a bit higher than the risk-free return, and this excess return, i.e. the “risk premium”, needs to be high enough to compensate for the risk I perceive in the security. Before investing, every investor must feel the extra return is adequate to cover the risk she sees in the security. The only security without a “risk premium” is the risk-free bond.

negative beta, sharpe, treynor

corr=1 means perfect positive correlation, but doesn’t tell us whether a 1-unit increase in X comes with a 0.001-unit or a 1000-unit increase in Y.

When we compare the returns of a fund or stock vs a stock index, we are interested in the relative size of change, or the “magnifying effect”. Beta helps here.

A “normal” beta close to 1.0 means when the mkt grows[1] 5%, then ibm also grows about 5%. Note this growth is fast-changing — all prices are volatile. As shown in other posts on beta, many other CAPM variables are not volatile, but could be slow-changing.

[1] assuming a low risk-free rate, so excess return and “return” are practically no different.

A large beta like 1.5 is more volatile — a “magnifier” stock, such as tech stocks. A 5% drop in the index is likely to see a 7.5% drop in this asset.

Beta < 1 means a "stable" stock that moves in sync with the market but at lower magnitude.

Negative beta means the asset tends to move against the market — short positions, or something else.

A negative Sharpe ratio indicates your fund underperforms the riskless asset (like a gov bond in your fund's currency). The denominator (the std of the fund return), be it large or small, isn't responsible for this negativity.

Treynor Ratio is negative if

case1: beta is positive and the fund underperforms the risk-free rate.

case2: beta is negative and the fund outperforms the risk-free rate. This means the fund manager has performed well, managing to reduce risk while getting a return better than the risk-free rate.
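A quick numeric sketch of case2 — a hypothetical negative-beta fund that beats the risk-free rate, giving a positive Sharpe but a negative Treynor (all numbers invented):

```python
# Hypothetical fund: outperforms the risk-free rate but carries a negative beta
fund_return = 0.05
risk_free = 0.03
fund_std = 0.10
beta = -0.5   # e.g. net short exposure

excess = fund_return - risk_free   # positive: fund beats the risk-free rate
sharpe = excess / fund_std         # positive (~0.2)
treynor = excess / beta            # negative (~-0.04) despite the outperformance
print(sharpe, treynor)
```

So the sign of Treynor alone is ambiguous: you must also look at the sign of beta.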

book value of leverage

A simple analog is the leverage of a property bought with an (unsecured) commercial loan.

Suppose the house was bought for $600k with a $480k loan. After a few years, the loan stays at $480k (to be paid off at maturity), but the house doubles to $1.2m.

The book value of EQ is still 600k − 480k = $120k, but the current EQ would be 1.2m − 480k = $720k.

The book value of leverage was and is still 600/120 = 5.0

The current value of leverage would be 1200k/720k ≈ 1.67, which is lower and safer.

Now the bleak picture — suppose the asset value drops from 600k to 500k. Book leverage remains 600/120 = 5.0.
The current value of leverage is 500/(500 − 480) = 25.0 — dangerously high. A further decline in asset valuation would wipe out the equity and the entire account would be under water. Some say the property is under water, but I feel really we are talking about the borrower and owner of the property — I call it the account.
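The arithmetic of the example, as a small sketch (numbers taken from the scenario above):

```python
# Numbers from the property example above
asset_book, loan = 600_000, 480_000
equity_book = asset_book - loan            # $120k

book_leverage = asset_book / equity_book   # 5.0, frozen at purchase time

def current_leverage(asset_now):
    # current equity = current asset value minus the fixed loan balance
    return asset_now / (asset_now - loan)

print(current_leverage(1_200_000))  # house doubles: ~1.67, lower and safer
print(current_leverage(500_000))    # house drops: 25.0, dangerously high
```

Note how book leverage never moves while current leverage explodes as equity shrinks toward zero.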
(Book value of) Leverage in the “literature” is defined as

(book value of) ASset / EQuity (book value)

= (LIability + EQ) / EQ …. (all book values)

The denominator is much lower as a book value than as a current value. For a listed company, the current value of total equity is the total market cap == current share price * total shares issued so far. In contrast, the book value is the initial capital of the founder + the actual dollars raised through the IPO, ignoring the increase in value of each share. Why is this book value less useful? We need to understand the term “shareholder equity”. This term logically means the “value” of the shares held by the shareholders (say a private club of …. 500 teachers). Like the value of your house, this “value” increases over time.