proof-of-work ] blockchain #key questions

Based on my discussions with some experts in the field.

Say Cody just paid Bob some amount in bitcoins. At this time, there could be a large number of pending (unaccepted) transactions like this one waiting to be accepted into the linked list (i.e. the block chain).

One or more miners, without coordination, can pick up this transaction plus up to 999 other transactions and attempt to add them to the block chain.

They have to overcome a computationally intensive challenge by brute force: they need to find a number (called a nonce) such that the integer hash code from the hashing function

H(localhost timestamp, // not always in sync with other hosts
[T1, T2, … T1000], // 1 or more pending transactions
the miner’s Id, // typically the IP
nonce,
previous block’s hash // forming a linked list, i.e. the block chain
) < barrier

The barrier is a binary number with 3 (or more) leading zeros. In other words, the challenge is mining for a nonce that gives a low-enough hash code, i.e. one with enough leading zero bits. Once a miner finds a “solution” and creates a valid block, she broadcasts the block (whose hash code has the required leading zeros) to the network. If one node verifies it, other nodes will soon verify it too. There’s no central authority to decide when to accept. A node accepts the block and uses its hash as the “previous block hash” in a new block, as described in TOWP.
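
To make the challenge concrete, here is a minimal Python sketch of the idea, assuming SHA-256 and the toy field layout above. It is a simplification, not the real Bitcoin block format; the difficulty, miner id and transaction strings are made up.

import hashlib, time

def block_hash(prev_hash, transactions, miner_id, timestamp, nonce):
    # concatenate the block fields and hash them, mimicking H(...) above
    payload = f"{prev_hash}|{transactions}|{miner_id}|{timestamp}|{nonce}".encode()
    return int.from_bytes(hashlib.sha256(payload).digest(), "big")

def mine(prev_hash, transactions, miner_id, difficulty_bits=18):
    # barrier: the hash must be below 2**(256 - difficulty_bits),
    # i.e. it must have at least difficulty_bits leading zero bits
    barrier = 1 << (256 - difficulty_bits)
    timestamp = time.time()
    nonce = 0
    while True:                                  # brute force
        h = block_hash(prev_hash, transactions, miner_id, timestamp, nonce)
        if h < barrier:
            return timestamp, nonce, h           # found a "solution"
        nonce += 1

def verify(prev_hash, transactions, miner_id, timestamp, nonce, difficulty_bits=18):
    # verification is a single hash computation, trivially fast compared to mining
    barrier = 1 << (256 - difficulty_bits)
    return block_hash(prev_hash, transactions, miner_id, timestamp, nonce) < barrier

# toy usage: mine one block on top of a known previous hash, then verify it
ts, nonce, h = mine(prev_hash=0, transactions=["Cody->Bob 5 BTC"], miner_id="10.0.0.7")
assert verify(0, ["Cody->Bob 5 BTC"], "10.0.0.7", ts, nonce)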

The head block of the linked list is special: it has no previous block! (A Git repo has only one such “block”, but not so in a blockchain.) The 2nd block includes the first block’s hash. The 3rd block includes the 2nd block’s hash, and hence indirectly the first block’s hash. Therefore, for any given block KK, its entire ancestry lineage is hashed into KK.

  • TOWP — TheOriginalWhitePaper
  • POW — proofOfWork
  • lucky — Some miner could be lucky and find a “solution” on the first try, but the challenge is so tough that brute force is the only approach.
  • LevelOfDifficulty — the number of required leading zeros, like 3. When hardware speeds improve, the difficulty is increased (the protocol re-targets it based on how fast recent blocks were found), as explained in TOWP
  • immutable — Just like Git, every block is immutable once accepted into the chain.
  • The most common hash functions are SHA-256 and scrypt.

Q: how fast is the verification vs POW
A: POW is supposed to take 10 min on average. Verification should take nanoseconds, according to the experts I spoke to. Besides the “attached” transactions, I don’t know what other inputs there are to the verification, but it computes a hash code and compares it to something.

Q: Fundamentally, how would verification fail when an owner double-spends a coin?

Q: what if two miners both pick Cody/Bob’s transaction and broadcast their “solution” hash codes?
A: The first miner would win the race (and get the reward after some delay).

Q: How are two independent linked lists handled in one host?

Q: what’s the need for POW?

Q: what’s the motivation for a miner to take up a transaction?
A: there’s a 25-coin reward, or possibly higher if Cody or Bob increases the reward (i.e. attaches a transaction fee)

Q: does each coin have an id?
A: no, but there is an ID for each transaction, each block and each account.

Q5: what if verification fails?
A: I feel it’s rare but possible. If it happens, the node would ignore the block.

Q5b: how would the miner know? A: Not sure.

 


2 motivations for a firm to get listed

  • Traditional motivation — get funding from public investors. Receive millions of dollars of working capital etc. Some tech startups like Spotify have plenty of cash from venture capital and don’t need this money.
  • new motivation — help employees, existing investors (incl. venture capitalists) cash out.

share buy-back #basics

  • shares outstanding — reduced, since the repurchased shares (say 100M out of 500M total outstanding) are no longer available for trading.
  • Who pays cash to whom? The company pays existing public shareholders (buying on the open market), so the company needs to pay out hard cash! This reduces the company’s cash position.
  • EPS — improves, since the same earnings are spread over fewer shares, often leading to immediate price appreciation (see the sketch after this list)
  • Total assets — reduced, improving ROA/ROE
  • Demonstrates a comfortable cash position
  • Initiated by — management, when they think the stock is undervalued
  • Perhaps requested by — existing shareholders hoping to make a profit
  • Typically happens when the company has excess capital
  • A.k.a “share repurchase”
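
A back-of-envelope Python sketch of the EPS and ROE effect. All numbers are made up, and the forgone interest on the cash spent is ignored.

earnings       = 1_000_000_000   # annual net income, assumed unchanged by the buy-back
shares_before  = 500_000_000
buyback_shares = 100_000_000     # repurchased at a hypothetical $10 each
price          = 10.0

shares_after = shares_before - buyback_shares
eps_before   = earnings / shares_before                 # 2.00
eps_after    = earnings / shares_after                  # 2.50, EPS improves

equity_before = 6_000_000_000                           # assumed book equity
equity_after  = equity_before - buyback_shares * price  # cash paid out shrinks assets/equity
roe_before    = earnings / equity_before                # ~16.7%
roe_after     = earnings / equity_after                 # 20.0%, ROE improves

print(f"EPS {eps_before:.2f} -> {eps_after:.2f}, ROE {roe_before:.1%} -> {roe_after:.1%}")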

ETF share creation #over-demand context

http://www.etf.com/etf-education-center/7540-what-is-the-etf-creationredemption-mechanism.html is detailed.

Imagine a DJ-tracking ETF by Vanguard has NAV = $99,000 per share, but is trading at $101,000. Overpriced. So the AP (Authorized Participant) will jump in for arbitrage, by buying the underlying stocks and selling a single ETF unit. Here’s how the AP does it (a numeric sketch follows the list).

  1. AP Buys the underlying DJ constituent stocks at the exact composition, for $99,000
  2. AP exchanges those for one unit of ETF from Vanguard.
    1. No one is buying the ETF in this step, contrary to the intuition.
    2. So now a brand new unit of this ETF is created and is owned by the AP
  3. AP SELLs this ETF unit on the open market for $101,000, putting downward pressure on the price.
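
A numeric Python sketch of the same three steps, using the made-up prices above (trading costs and creation fees ignored):

nav_per_unit = 99_000.0    # value of the underlying basket behind one ETF unit
market_price = 101_000.0   # where the ETF is trading, i.e. overpriced

cost_of_basket = nav_per_unit   # step 1: AP buys the DJ constituent stocks
# step 2: AP delivers the basket to Vanguard and receives one newly created ETF unit
proceeds = market_price         # step 3: AP sells that unit on the open market

arbitrage_profit = proceeds - cost_of_basket   # 2,000 per unit, before costs
print(f"AP gross arbitrage profit per unit: ${arbitrage_profit:,.0f}")
# the extra supply from step 3 pushes the ETF price down toward NAV,
# while the basket buying in step 1 nudges the underlying stocks up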

Q: So how does the hot money get used to create the new ETF shares?
A: It doesn’t. The hot money becomes profit to the earlier ETF investors. Neither the ETF provider nor the AP receives the hot money.

zero sum game #my take

“Zero sum game” is a vague term. One of my financial math professors said every market is a zero sum game. After the class I pointed out to him that, over the long term, the stock (as well as gov bond) market grows in value [1], so the aggregate “sum” is positive. If AA sells her 50 shares to BB, who later sells them back to AA, they can both become richer. With a gov bond, if you buy it at par, collect some coupons, and sell it at par, then everyone makes money. My professor agreed, but he said his context was the very short term.

Options (if they expire unexercised) and futures look more like a ZSG to me, over any horizon.

If an option is exercised then I’m not sure, since the underlying asset bought unwillingly (think of an assigned put writer forced to buy) could appreciate the next day, so the happy seller and the unwilling buyer could both grow richer. Looks like a non-zero-sum game.

The best example of a ZSG is a football bet among friends, with a bookie; the best example of an NZSG is the property market. Of course we must do the “sum” in a stable currency and ignore inflation.
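
A tiny Python illustration of the “sum”, with made-up numbers (one stable currency, inflation ignored as noted above):

# zero-sum: a football bet between two friends (assume the bookie takes no cut here)
bet_gains = {"AA": +100, "BB": -100}
print(sum(bet_gains.values()))       # 0, the gains sum to zero

# non-zero-sum: AA buys a stock at 100 and later sells to BB at 120;
# BB then collects a 5 dividend and the price drifts up to 140
aa_gain = 120 - 100                  # +20, realized
bb_gain = (140 - 120) + 5            # +25, paper gain plus dividend
print(aa_gain + bb_gain)             # +45: a positive sum, funded by the growing,
                                     # dividend-paying business, not by another trader's loss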

[1] including dividends but excluding IPO and delisting.

Export Credit Agency — some basics

Each national government has such an “exim bank”, funded by the ministry of finance. (There are also some multilateral institutions like the Asian Dev Bank and the World Bank.) Their mandate is to support their own exporters with respect to *default risk*. The ECA guarantees to the supplier that even if the overseas client (importer) defaults, the ECA will cover the supplier. It’s technically a loan to the importer, to be paid back. For the non-commercial risks affecting large deals (up to several billion dollars), the ECAs have a natural advantage over commercial banks: they are financed by the government and can deal with political and other risks across borders.

Political risk is quite high, yet the guarantee fee charged by the ECA is very low. This paradox disappears once you understand that those big deals support domestic job creation and the tax/revenue generation of the large national exporters, so even if the fee charged by the ECA is arguably insufficient to cover the credit risk it takes on, the decision still makes sense. I think these ECAs are using taxpayers’ money to help the home-grown exporters.

However, the ECA won’t blindly give big money to unknown foreign importers. Due diligence is required.

The ECAs are usually profitable on the back of the fees they charge (something like 1% above Libor). I guess the default intensity is statistically lower than feared, perhaps thanks to the risk analysis by the various parties. Risk assessment is the key “due diligence” and also the basis of the pricing. The #1 risk event being assessed is importer default. The exporters (suppliers) are invariably blue-chip corporations with a track record, and they know what they are doing. 80% of the defaults (whether by the importer, the exporter or the lending bank) are due to political risk rather than commercial risk.

Many entities take part in the risk assessment, each bringing special expertise and insight. The commercial bank has big teams dealing with ECAs; the exporter needs to assess the buyer’s credit; the ECA has huge credit review teams. There are also specialist advisory firms who do not lend money. If any one of them identifies a high risk it can’t quantify and contain, I would say it’s only logical and prudent to hesitate.

The exporter first approaches a commercial bank (or a group of banks). The bank then seeks a *guarantee* from the national ECA. The guarantee covers 90% to 100% of the bank loan, so the bank has a very small credit exposure. (The ECAs themselves have very high credit ratings.) In the event of a default, the bank or exporter would be compensated by the ECA.

They mostly cover capital-goods exports, such as airplanes/trains/ships, power plants and infrastructure equipment, with long-term repayment. So the suppliers are mostly blue-chip manufacturers. These loans are tricky because

  • Long term, so event risk is much higher
  • The entity to assess is a foreign entity, often in a developing country
  • Big amount, so the potential financial loss is sometimes too big for a single commercial lender

China, Japan and Korea are some of the biggest exporter nations.

interest rate hike hitting FX rate

I feel that in most major economies the central bank manages the interest rate, which directly affects the FX rate. The FX rate doesn't affect the interest rate, at least not directly.

http://www.investopedia.com/articles/basics/04/050704.asp — higher interest rates attract foreign capital and cause the currency to appreciate.

http://www.economicshelp.org/macroeconomics/exchangerate/factors-influencing/ — Higher interest rates cause an appreciation.

http://fxtrade.oanda.com/learn/top-5-factors-that-affect-exchange-rates – When interest rates go up, so do yields for assets denominated in that currency; this leads to increased demand by investors and causes an increase in the value of the currency in question.

Does a rate hike lead to inflation, which would hurt the currency in question?

A rate hike hurts corporations (including exporters) and the balance of payments. Would that hurt the currency in question? I doubt it.

A Fed rate hike is carefully managed based on growth data. Therefore, a rate hike is conditional on US recovery, which implies a stronger USD.

Economic growth could also mean reduced government bond issuance, i.e. reduced QE, i.e. slower national debt growth, which helps the USD.

beta definition in CAPM – confusion cleared

In CAPM, beta (of a stock like ibm) is defined as
* cov(ibm excess return, mkt excess return), divided by
* variance of the mkt excess return

I was under the impression that this variance is the measured “dispersion” among the recent 60 monthly returns over 5 years (or another period). Such a calculation would yield a beta value that’s heavily influenced or “skewed” by the last 5Y’s performance. Another observation window is likely to give a very different beta value. This beta is based on such unstable input data, yet we treat it as a constant and use it to predict the ratio of ibm’s return over the index return! Suppose we are lucky, so the last 12M gives beta = 1.3, the last 5Y yields the same, and year 2000 also yields the same. We could still be unlucky in the next 12M, and this beta fails completely to predict that ratio… Wrong thought!

One of the roots of the confusion is the 2 views of variance, esp. with time-series data.

A) the “statistical variance”, or sample variance. Basically computed from 60 consecutive monthly observations over 5 years. If these 60 numbers come from drastically different periods, then the sample variance won’t represent the population.

B) the “probability variance”, “theoretical variance”, or population variance, assuming the population has a fixed variance. This is abstract. Suppose ibm’s stock price were influenced mostly by temperature (or another factor not influenced by human behavior), so the inherent variance in the “system” is time-invariant. Note the distribution of daily returns can be completely non-normal (could be binomial, uniform etc.), but the variance should be fixed, or at least stable. I feel the population variance can change with time, but it should be fairly stable during the observation window, i.e. slow-changing.

My interpretation of the beta definition was based on an unstable, fast-changing variance. In contrast, CAPM theory is based on a fixed or slow-moving population variance, i.e. the probability context. Basically the IID assumption. CAPM assumes we can estimate the population variance from history and that this value will remain valid in the foreseeable future.

In practice, practitioners (usually?) use a historical sample to estimate the population variance/covariance. This is basically the statistical context A).
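
For example, a minimal Python sketch of this historical estimation (context A), with simulated monthly excess returns standing in for real data:

import numpy as np

def estimate_beta(stock_excess, mkt_excess):
    # sample covariance divided by the sample variance of the market excess return
    cov = np.cov(stock_excess, mkt_excess, ddof=1)[0, 1]
    return cov / np.var(mkt_excess, ddof=1)

# toy data: 60 monthly excess returns (5 years), simulated only for illustration
rng = np.random.default_rng(0)
mkt = rng.normal(0.005, 0.04, 60)              # market excess returns
ibm = 1.3 * mkt + rng.normal(0.0, 0.03, 60)    # "true" beta of 1.3 plus idiosyncratic noise
print(round(estimate_beta(ibm, mkt), 2))       # roughly 1.3, but another 60-month window
                                               # would give a somewhat different estimate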

Imagine the inherent population variance changed as frequently as the stock price itself. It would then be futile to even estimate the population variance. In most time-series contexts, models assume some stability in the population variance.

capm – learning notes

capm is a “baby model”, the simplest of linear models. I guess capm’s popularity is partly due to this simplicity. 2 big assumptions:
Ass1a: over 1 period, every individual security has a return that’s normal, i.e. drawn from a Gaussian noisegen with a time-invariant mean and variance.

Ass1b: there’s a unique correlation between every pair of securities’ noisegens. Jointly normal. Therefore any portfolio (2 assets or more) has a normal return.

Ass2: over a 2-period horizon, the 2 serial returns are iid.

In the above idealized world, capm holds. (All assumptions are challenged by real data.) In real stock markets, these assumptions could hold reasonably well in some contexts.

capm tries to forecast the expected return of a stock (say google). Other models like ARCH (not capm) forecast the variance of the return.

Expected return is important in the industry. Investors compare expected returns. Mark said the expected return provides risk-neutral probability values and enables us to price a security, i.e. determine a fair value.

Personally, I don’t have faith in any forecast over the next 5 years, because I have seen many forecasts fail to anticipate crashes. However, the 100Y stock market history does give me comfort that over 20 years the stock mkt is likely to provide a positive return higher than the risk-free rate.

Suppose Team AA comes up with a forecast mkt return of 10% over the next 12 months. Team BB uses capm to infer a beta of 1.5 for google (often using the past 5 years of historical returns). Then, using the capm model, Team CC forecasts google’s 12M expected return to be 1.5 * 10% = 15% (ignoring the risk-free rate for simplicity).
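
The same arithmetic as a Python sketch, with made-up numbers; the risk-free rate is included explicitly, which the 1.5 * 10% shorthand drops:

def capm_expected_return(beta, expected_mkt_return, risk_free=0.0):
    # E[r_stock] = r_f + beta * (E[r_mkt] - r_f)
    return risk_free + beta * (expected_mkt_return - risk_free)

beta_google      = 1.5    # Team BB's estimate from 5Y of history
mkt_forecast_12m = 0.10   # Team AA's 12-month market forecast

print(capm_expected_return(beta_google, mkt_forecast_12m))         # 0.15 with the risk-free rate at zero
print(capm_expected_return(beta_google, mkt_forecast_12m, 0.02))   # 0.14 with a 2% risk-free rate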

In the idealized world, beta_google is a constant. In practice, practitioners assume beta could be slow-changing. Over 12M, we could say 1.5 is the average or aggregate beta_google.

Personally I always feel an expected return of 15% is misleading if I suspect the variance is large. However, I do want to compare expected returns. High uncertainty doesn’t discredit a reasonable estimate of the expected return.

“Market portfolio” is defined as the combined portfolio of all investors’ portfolios. In practice, practitioners use a stock index, and the index return is used as the mkt return. Capm claims that, under strict conditions, the 12M expected (excess) return on google is proportional to the 12M expected mkt (excess) return, and the scaling factor is beta_google. Capm assumes the mkt return and the google return are random (noisegen), but if you repeat the experiment 99 million times the average returns would follow capm.