# keep learning 活到老学到老 (never too old to learn)


# Category: z_bank`econ

# fintech companies: eg

# share buy-back #basics

# ETF share creation #over-demand context

# zero sum game #my take

# Export Credit Agency — some basics

# interest rate hike hitting FX rate

# beta definition in CAPM – confusion cleared

# capm – learning notes

# UIP carry trade n risk premium

# benchmark a static factor model against CAPM

CapitalIQ describes itself as a traditional fintech business.

Bloomberg is also considered a fintech business. I believe the Interactive Data RTS business is fintech as well.


- shares outstanding — reduced, since the repurchased shares (say 100M out of 500M total outstanding) are no longer available for trading.
- Who pays cash to whom? The company pays existing public shareholders (buying on the open market), so the company pays out hard cash, reducing its cash position.
- EPS — benefits, often leading to immediate price appreciation
- Total assets — reduced, improving ROA/ROE
- Demonstrates a comfortable cash position
- Initiated by — management, when they think the stock is undervalued
- Perhaps requested by — existing shareholders hoping to make a profit
- Signals the company has excess capital
- A.k.a. “share repurchase”
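The EPS effect can be shown with the note's hypothetical share counts (100M repurchased out of 500M outstanding); the $1B net income below is an assumed number purely for illustration:

```python
# Illustrative buyback arithmetic. Net income is an assumed figure;
# share counts are the hypothetical numbers from the note above.
net_income = 1_000_000_000            # assumed annual net income: $1B
shares_before = 500_000_000           # shares outstanding before buyback
shares_bought_back = 100_000_000      # shares repurchased
shares_after = shares_before - shares_bought_back

eps_before = net_income / shares_before   # 2.0
eps_after = net_income / shares_after     # 2.5

# Same earnings, fewer shares: EPS rises mechanically.
```

With unchanged earnings, EPS jumps from $2.00 to $2.50 simply because the denominator shrank.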

http://www.etf.com/etf-education-center/7540-what-is-the-etf-creationredemption-mechanism.html is detailed.

Imagine a DJ-tracking ETF by Vanguard has NAV = $99,000 per unit, but is trading at $101,000. Overpriced. So the AP will jump in for arbitrage — by buying the underlying stocks and selling a single ETF unit. Here’s how the AP does it.

- The AP buys the underlying DJ constituent stocks at the exact composition, for $99,000, and exchanges those for one unit of the ETF from Vanguard.
- No one is buying the ETF in this step, contrary to the intuition.
- So now a brand-new unit of this ETF is created and is owned by the AP.

- The AP sells this ETF unit on the open market for $101,000, putting downward pressure on the price.

Q: So how does the hot money get used to create the new ETF shares?

A: It doesn’t. The hot money becomes profit to the earlier ETF investors. Neither the ETF provider nor the AP receives the hot money.
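The arbitrage arithmetic above, as a tiny sketch (the two prices come from the example; fees and trading costs are ignored):

```python
# Creation/redemption arbitrage from the example: the AP assembles the
# basket at NAV, swaps it for a new ETF unit, and sells the unit at the
# (higher) market price.
nav_per_unit = 99_000     # cost of buying the underlying basket
market_price = 101_000    # price the new ETF unit fetches on the open market

arbitrage_profit = market_price - nav_per_unit   # per unit, before fees
```

The AP's selling pushes the market price down toward NAV, which is exactly the mechanism that keeps ETFs trading close to their underlying value.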


“Zero sum game” is a vague term. One of my financial math professors said every market is a zero sum game. After the class I pointed out to him that over the long term, the stock market (as well as the gov bond market) grows in value [1], so the aggregate “sum” is positive. If AA sells her 50 shares to BB, who later sells them back to AA, both can become richer. With a gov bond, if you buy it at par, collect some coupons, and sell it at par, then everyone makes money. My professor agreed, but said his context was the very short term.

Options (if expired) and futures look more like ZSG to me, over any horizon.

If an option is exercised then I’m not sure, since the underlying asset, bought unwillingly, could appreciate the next day, so the happy seller and the unwilling buyer could both grow richer. That looks like a non-zero-sum game.

The best example of a ZSG is a football bet among friends, with a bookie; the best example of a NZSG is the property market. Of course we must do the “sum” in a stable currency and ignore inflation.

[1] including dividends but excluding IPO and delisting.


Each national government has such an “exim bank” (export-import bank), funded by the ministry of finance. (There are also multinational development banks like the Asian Development Bank and the World Bank…) Their mandate is to support their own exporters against ***default risk***. The ECA guarantees to the supplier that even if the overseas client (importer) defaults, the ECA will cover the supplier. It is technically a loan to the importer, to be paid back. For the non-commercial risks affecting large deals (up to several billion dollars), ECAs have a natural advantage over commercial banks: they are financed by the government and can deal with political and other risks across borders.

Political risk is quite high, but the guarantee fee charged by the ECA is very low. This paradox disappears once you see that those big deals support domestic job creation, plus the tax and revenue generation of the large national exporters. So even if the fee is arguably insufficient to cover the credit risk taken on, the decision still makes sense. I think these ECAs are using taxpayers’ money to help home-grown exporters.

However, the ECA won’t blindly give big money to unknown foreign importers. Due diligence is required.

ECAs are usually profitable on the back of the fees they charge (something like 1% above Libor). I guess the default intensity is statistically lower than feared, perhaps thanks to the risk analysis by the various parties. Risk assessment is the key “due diligence” and also the basis of the pricing. The #1 risk event being assessed is importer default. The exporters (suppliers) are invariably blue-chip corporations with a track record, and know what they are doing. 80% of defaults (whether by the importer, the exporter or the lending bank) are due to political risk rather than commercial risk.

Many entities take part in the risk assessment, bringing with them special expertise and insight. The commercial bank has big teams dealing with ECA; the exporter needs to assess the buyer’s credit; the ECA has huge credit review teams… There are also specialist advisory firms who do not lend money. If any one of them identifies a high risk they can’t quantify and contain, I would say it’s only logical and prudent to hesitate.

The exporter first approaches a commercial bank (or a group of banks). The bank then seeks a ***guarantee*** from the national ECA. The guarantee covers 90% to 100% of the bank loan, so the bank has a very small credit exposure. (ECAs themselves have very high credit ratings.) In the event of a default, the bank or exporter is compensated by the ECA.
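A minimal sketch of how such a guarantee splits the exposure; the loan size, coverage ratio and default probability below are all assumptions for illustration (only the 90-100% coverage range comes from the text):

```python
# Illustrative exposure split under an ECA guarantee.
# All parameters are assumed for illustration.
loan = 2_000_000_000    # assumed $2B export loan
coverage = 0.95         # assumed coverage, within the 90-100% range
pd_importer = 0.02      # assumed probability of importer default
lgd = 1.0               # assume total loss given default, for simplicity

bank_exposure = loan * (1 - coverage)          # residual bank exposure
bank_expected_loss = bank_exposure * pd_importer * lgd
eca_expected_loss = loan * coverage * pd_importer * lgd
```

Under these made-up numbers the bank carries only $100M of a $2B loan, which is why the bank's credit exposure is described as very small.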

They mostly cover capital-goods exports, such as airplanes/trains/ships, power plants and infrastructure equipment, with long-term repayment… So the suppliers are mostly blue-chip manufacturers. These loans are tricky because

- long term, so event risk is much higher
- the entity to assess is a foreign entity, often in a developing country
- big amounts, so the potential financial loss is sometimes too big for a single commercial lender

China, Japan and Korea are some of the biggest exporter nations.


I feel that in most major economies the central bank manages the interest rate, which directly affects the FX rate. The FX rate doesn't affect the interest rate, at least not directly.

http://www.investopedia.com/articles/basics/04/050704.asp — higher interest rates attract foreign capital and cause the currency to appreciate.

http://www.economicshelp.org/macroeconomics/exchangerate/factors-influencing/ — Higher interest rates cause an appreciation.

http://fxtrade.oanda.com/learn/top-5-factors-that-affect-exchange-rates – When interest rates go up, so do yields for assets denominated in that currency; this leads to increased demand by investors and causes an increase in the value of the currency in question.

A rate hike often accompanies inflation. Does the inflation hurt the currency in question?

A rate hike hurts corporations (including exporters) and the balance of payments. Would that hurt the currency in question? I doubt it.

The Fed rate hike is carefully managed based on growth data. Therefore, a rate hike is conditional on US recovery, which implies a stronger USD.

Economic growth could also mean reduced government bond issuance, i.e. reduced QE, i.e. slower national-debt growth, which helps the USD.


In CAPM, beta (of a stock like ibm) is defined in terms of

* cov(ibm excess return, mkt excess return), and

* variance of the mkt excess return

i.e. beta = cov(ibm excess return, mkt excess return) / var(mkt excess return).
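This definition can be illustrated with a few lines of plain Python; the five monthly excess returns below are made-up toy numbers (a real calculation would use e.g. 60 monthly observations):

```python
# Sample beta = cov(stock excess return, mkt excess return) / var(mkt excess return).
# The return series are made-up toy data, not real ibm/market numbers.
ibm = [0.02, -0.01, 0.03, 0.00, 0.015]   # monthly excess returns (toy)
mkt = [0.01, -0.02, 0.02, 0.005, 0.01]   # monthly market excess returns (toy)

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    # sample covariance; cov(x, x) is the sample variance
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

beta = cov(ibm, mkt) / cov(mkt, mkt)
```

Note the denominator is the market variance, not the stock's own variance.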

I was under the impression that variance is the measured “dispersion” among the recent 60 monthly returns over 5 years (or another window). Such a calculation yields a beta value that’s heavily influenced, or “skewed”, by the last 5Y’s performance; another observation window is likely to give a very different beta. This beta is based on such unstable input data, yet we treat it as a constant and use it to predict the ratio of ibm’s return over the index return! Suppose we are lucky, so the last 12M gives beta = 1.3, the last 5Y yields the same, and year 2000 also yields the same. We could still be unlucky in the next 12M, and this beta fails completely to predict that ratio… Wrong thought!

One of the roots of the confusion is the 2 views of variance, esp. with time-series data.

A) the “statistical variance”, or sample variance — basically computed from 60 consecutive observations over 5 years. If these 60 numbers come from drastically different periods, then the sample variance won’t represent the population.

B) the “probability variance”, “theoretical variance”, or population variance, assuming the population has a fixed variance. This is abstract. Suppose ibm stock price is influenced mostly by temperature (or another factor not influenced by human behavior), so the inherent variance in the “system” is time-invariant. Note the distribution of daily returns can be completely non-normal — it could be binomial, uniform etc — but the variance should be fixed, or at least stable. I feel the population variance can change with time, but it should be fairly stable during the observation window — slow-changing.

My earlier interpretation of the beta definition was based on an unstable, fast-changing variance. In contrast, CAPM theory is based on a fixed or slow-moving population variance — the probability context, basically the IID assumption. CAPM assumes we can estimate the population variance from history and that this value will remain valid in the foreseeable future.

In practice, practitioners (usually?) use a historical sample to estimate the population variance/covariance. This is basically the statistical context A).

Imagine the inherent population variance changes as frequently as stock price itself. It would be futile to even estimate the population variance. In most time-series contexts, most models assume some stability in the population variance.
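The sample-vs-population distinction can be illustrated with simulated data: below, the generating (population) variance is fixed, yet two different 5Y windows of the same series give different sample variances. All parameters are made up.

```python
import random

# Simulated monthly returns from a FIXED population: mean 0, stdev 4%.
# The population variance never changes, but sample variances over
# different windows still differ.
random.seed(42)
returns = [random.gauss(0.0, 0.04) for _ in range(120)]  # 10Y of fake monthly returns

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

var_first_5y = sample_var(returns[:60])   # first 60 months
var_last_5y = sample_var(returns[60:])    # last 60 months
```

Both estimates hover near the true 0.0016, but they are not equal: window choice alone moves the estimate even in this idealized fixed-variance world.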


capm is a “baby model” — the simplest of linear models. I guess capm’s popularity is partly due to this simplicity. 2 big assumptions —

Ass1a: Over 1 period, every individual security has a return that’s normal i.e. from a Gaussian noisegen with a time-invariant mean and variance.

Ass1b: there’s a unique correlation between every pair of securities’ noisegens — joint normal. Therefore any portfolio (2 assets or more) has a normal return.

Ass2: over a 2-period horizon, the 2 serial returns are iid.

In the above idealized world, capm holds. (All assumptions challenged by real data.) In real stock markets, these assumptions could hold reasonably well in some contexts.

capm tries to forecast expected return of a stock (say google). Other models like ARCH (not capm) would forecast variance of the return.

Expected return is important in the industry. Investors compare expected return. Mark said the expected return will provide risk neutral probability values and enable us to price a security i.e. determine a fair value.

Personally, i don’t have faith in any forecast over the next 5 years because I have seen many forecasts failing to anticipate crashes. However, the 100Y stock market history does give me comfort that over 20 years stock mkt is likely to provide a positive return that’s higher than the risk-free rate.

Suppose Team AA comes up with a forecast mkt excess return of 10% over the next 12 months. Team BB uses capm to infer a beta of 1.5 (often using the past 5 years of historical returns). Then using the capm model, Team CC forecasts google’s 12M expected excess return to be 1.5 * 10% (both numbers being excess returns over the risk-free rate).

In the idealized world, beta_google is a constant. In practice, practitioners assume beta could be slow-changing. Over 12M, we could say 1.5 is the average or aggregate beta_google.

Personally I always feel expected return of 15% is misleading if I suspect variance is large. However, I do want to compare expected returns. High uncertainty doesn’t discredit the reasonable estimate of expected return.

“Market portfolio” is defined as the combined portfolio of all investor’s portfolios. In practice, practitioners use a stock index. The index return is used as mkt return. Capm claims that under strict conditions, 12M expected return on google is proportional to 12M expected mkt return and the scaling factor is beta_google. Capm assumes the mkt return and google return are random (noisegen) but if you repeat the experiment 99 million times the average returns would follow capm.
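The Team AA/BB/CC forecast chain can be written down in a few lines. The risk-free rate of zero is an assumption here, chosen so that 1.5 * 10% comes out as in the example:

```python
# CAPM forecast chain: E[r_stock] - rf = beta * (E[r_mkt] - rf).
# All inputs are the example's hypothetical numbers.
mkt_expected_excess = 0.10   # Team AA's 12M market excess-return forecast
beta_google = 1.5            # Team BB's beta estimate from ~5Y of history
rf = 0.0                     # assumed risk-free rate, for simplicity

google_expected_excess = beta_google * mkt_expected_excess   # Team CC: 15%
google_expected_total = rf + google_expected_excess
```

With a nonzero risk-free rate, the total-return forecast would be rf + 1.5 * (10% - 0%) only if the 10% is already an excess return; mixing up total and excess returns here is a common slip.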


India’s INR interest rate could be 8.8% while USD earns 1.1% a year. Economically, from an asset-pricing perspective, to earn the IR differential (the carry trade), you have to assume FX risk, specifically possible devaluation of the INR and INR inflation during the holding period.

In reality, I think the INR doesn't devalue by the 7.7% predicted by UIP, but inflation is indeed higher in India.

In a lagged OLS regression, today's IR differential is a reasonable leading indicator (predictor) of next year's exchange-rate change. Once we have the alpha and beta from that OLS, we can also write down the expected return of the carry trade in terms of today's IR differential. Such a formula provides a predicted excess return, which means the carry trade earns a so-called “risk premium”.
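A minimal sketch of such a lagged regression in plain Python. All the IR-differential and FX-change numbers below are made up for illustration; under pure UIP we'd expect alpha = 0 and beta = -1, so the expected carry excess return would be zero.

```python
# Lagged OLS sketch: regress next year's FX change (of the high-yield
# currency) on today's IR differential. All data points are made up.
ir_diff = [0.077, 0.060, 0.050, 0.065, 0.080, 0.070]          # today's IR differential
fx_chg_next = [-0.03, -0.02, -0.01, -0.025, -0.035, -0.028]   # next year's FX change

def ols(x, y):
    # simple one-regressor OLS: returns (alpha, beta)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    alpha = my - beta * mx
    return alpha, beta

alpha, beta = ols(ir_diff, fx_chg_next)

# Carry-trade return ~ IR differential + FX change, so the model-implied
# expected excess return given today's differential d is:
d_today = 0.077
expected_excess = d_today + (alpha + beta * d_today)
```

With this toy data, beta is negative but well above -1, so the differential is not fully offset by depreciation and the predicted excess return is positive: the "risk premium" in the text.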

Note, similar to the DP, this expected return is a dynamic risk premium (lead/lag), whereas CAPM (+FamaFrench?) assumes a constant, time-invariant expected excess return.


http://bigblog.tanbin.com/2014/04/risk-premium-clarified.html explains …

Let me put my conclusion up front — now I feel these factor models are an economist’s answer to the big mystery of “why some securities have consistently higher excess returns than other securities.” I assume this pattern is only clear when we look long term, like decades. I feel in this context the key assumption is iid, so we are talking about a steady state — all the betas are assumed time-invariant, at least during a 5Y observation window.

There are many steady-state factor models including the Fama/French model.

Q: why do we say one model is better than another (which is often the CAPM, the base model)?

1) I think a simple benchmark is the month-to-month variation. A good factor model would “explain” most of the month-to-month variation. We first pick a relatively long period like 5 years, basically “confining” ourselves to some 5Y historical window like 1990 to 1995. (Over another 5Y window, the betas are likely different.)

We then pick some security to *explain*. It could be a portfolio or some index of an asset class.

We use historical data to calibrate the 4 betas (assuming 4 factors). These beta numbers are assumed steady-state during the 5Y. The time-varying (volatile) factor values, combined with the time-invariant betas, give a model estimate of the month-to-month returns. Does the estimate match the actual returns? If it’s a good match, then we say the model “explains” most of the month-to-month variation. Such a model is very useful for hedging and risk management.
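A sketch of benchmark 1): fixed betas, time-varying factor values, and an R-squared comparing fitted vs actual month-to-month returns. Every factor value, beta and return below is made up; a real exercise would calibrate the betas over the full 5Y window.

```python
# Benchmark 1): how much month-to-month variation do fixed betas plus
# time-varying factor values "explain"? All numbers are made up.
factors = [  # each row: one month's realizations of 4 factors
    [0.010, 0.002, -0.003, 0.004],
    [-0.020, 0.001, 0.002, -0.001],
    [0.015, -0.002, 0.001, 0.003],
    [0.005, 0.003, -0.001, 0.000],
]
betas = [1.2, 0.5, -0.3, 0.8]          # assumed steady-state over the window
actual = [0.018, -0.024, 0.019, 0.009]  # the portfolio's actual monthly returns

# model-fitted return each month: dot product of betas with that month's factors
fitted = [sum(b * f for b, f in zip(betas, row)) for row in factors]

mean_actual = sum(actual) / len(actual)
ss_res = sum((a - f) ** 2 for a, f in zip(actual, fitted))
ss_tot = sum((a - mean_actual) ** 2 for a in actual)
r_squared = 1 - ss_res / ss_tot   # closer to 1 = more variation "explained"
```

A high R-squared here is exactly what "explains most of the month-to-month variation" means; benchmark 2) instead compares only the window-average returns.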

2) A second benchmark is less intuitive. Here, we check how accurate the 2 models are at “explaining” _steady_state_ average return.

Mark Hendricks’ Econs HW2 used GDP, recession and corporate profits as 3 factors (without the market factor) to explain some portfolios’ returns. We basically use the 5Y average factor data (not month-to-month), combined with the steady-state betas, to come up with a 5Y average return on each portfolio (a single number), and compare this number to the portfolio’s actual average return. If the averages match well, then we say… “good pricing capability”!

I feel this is an economist's tool, not a fund manager's tool. Each target portfolio is probably a broad asset class. The beta_GDP is different for each asset class.

Suppose GDP+recession+corpProfit prove to be a good “pricing model”. Then we could use various economic data to forecast GDP etc, knowing that a confident forecast of this GDP “factor” would give us a confident forecast of the return in that asset class. This would help macro funds like GMO make asset-allocation decisions.

In practice, to benchmark this “pricing quality”, we need a sample size. Typically we compare the 2 models' pricing errors across various asset classes and over various periods.

When people say that in a given model (like UIP) a certain risk (like uncertainty in FX rate movement) is not priced, it means this factor model doesn't include this factor. I guess you can say beta for this factor is hardcoded to 0.
