when there’s (implicit) measure+when there is none

Needs a Measure – r or mu. Whenever we see “drift”, it means expected growth, or the Mean of some distribution (of a N@T). There’s a probability measure in the context. This could be the physical measure, a T-fwd measure, a stock-numeraire measure, or the “risk-neutral measure”, i.e. with the MoneyMarketAcct as the numeraire.

Needs a Measure – dW. Brownian motion is always a probabilistic notion, under some measure

Needs a Measure – E[…] is expectation of something. There’s a measure in the context.

Needs a Measure – Pr[..]

Needs a Measure – martingale

Regardless of measure – time-zero fair price of any contract. The same price should result from derivation under any measure.

Regardless of measure – arbitrage is arbitrage under any measure

option pricing – 5 essential rules and their assumptions

PCP — arb + extremely tight bid/ask spread + European vanilla option only. GBM Not assumed. Any numeraire fine.

Same drift as the numeraire — tradeable + arb + numeraire must be bond or a fixed-interest bank account.

no-drift — tradeable + arb + price deflated by the numeraire, under that numeraire’s measure

Ito — BM or GBM in the dW term. tradable not assumed. Arb allowed.

BS — tradable + arb + GBM + constant vol

quasi constant parameters in BS

dS/S = a dt + b dW [1]

[[Hull]] says this is the most widely used model of stock price behavior. I guess this is the basic GBM dynamic. Many “treasures” hidden in this simple equation. Here are some of them.

I now realize a (usually denoted μ) and b (usually denoted σ) are “quasi-constant parameters”. The initial model basically assumes constant [2] a and b. In a small adaptation, a and b are modeled as time-varying parameters. In a sense, ‘a’ can be seen as a Process too, as it changes over time unpredictably. However, few researchers regard a as a Process. I feel a is a long-term/steady-state drift. In contrast, many treat b as a Process — the so-called stochastic vol.

Nevertheless in equation [1], a and b are assumed to be fairly slow-changing, more stable than S. These 2 parameters are still, strictly speaking, random and unpredictable. On a trading desk, the value of b is typically calibrated at least once a day (OCBC), and up to 3 times an hour (Lehman). How about on a volatile day? Do we calibrate b more frequently? I doubt it. Instead, implied vol would be high, and the market maker may widen the bid/ask spread even further.

As an analogy, the number of bubbles in a large boiling kettle is random and fast-changing (changing by the second). It is affected by temperature and pressure. These parameters change too, but much slower than the “main variable”. For a short period, we can safely assume these parameters constant.

Q: where is √ t
A: I feel equation [1] doesn’t have it. In this differential equation about the instantaneous change in S, dt is assumed infinitesimal. However, for a given “distant future” from now, t is given and not infinitesimal. Then the lognormal distribution has a dispersion proportional to √ t
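As a sketch in the same notation (treating a and b as constants over [0, t]), integrating equation [1] gives

S_t = S_0 · exp( (a − b²/2)·t + b·W_t ),    so    ln S_t ~ N( ln S_0 + (a − b²/2)·t , b²·t )

i.e. the standard deviation of ln S_t grows like b·√t. The √t lives in the distribution of the time-t value, not in the SDE itself.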

[2] The adjective “constant” is defined along time axis. Remember we are talking about Processes where the Future is unknown and uncertain.

change measure but using cash numeraire #drift

Background — “Drift” sounds simple and innocent, but no no no.
* it requires a probability measure
* it requires a numeraire
* it implies there’s one or (usually) more random processes with some characteristics.

It’s important to estimate the drift. Seems essential to derivative pricing.
—————————————————————————
BA = a bank account paying a constant interest rate, continuously compounded. No uncertainty, no pr-distro, about any future value on any future date. $1 today (time-0) becomes exp(rT) at time T with pr=1, under any probability measure.

MMA = money market account. More realistic than the BA. Today (time-0), we only know tomorrow’s value, not further.
Z = the zero coupon bond. Today (time-0) we already know the future value at time-T is $1 with Pr=1 under any probability measure. Of course, we also know the value today as this bond is traded. Does any other asset have such a deterministic future value? The BA does, but it’s unrealistic.
S = IBM stock
Now look at some tradable asset X. It could be a stock S or an option C or a futures contract … We must must, must assume X is tradable without arbitrage.
—- Under BA measure and cash as numeraire.
   X0/B0 = E (X_T/B_T) = E (X_T)/B_T   =>
   E (X_T)/X0 = B_T/B0
Interpretation – X_T is random and non-deterministic, but its expected value (BA measure) follows the _same_ drift as BA itself.
—- Under BA measure and using BA as numeraire or “currency”,
   X0/B0 = E (X_T/B_T)
Interpretation – evaluated with BA as currency, the value of X will stay constant with 0 drift.
—- Under T-measure and cash numeraire
   X0/Z0 = E (X_T/Z_T) = E (X_T)/$1   =>
   E (X_T)/X0 = 1/Z0
Interpretation — X_T is random and non-deterministic, but its expected value (Z measure) follows the _same_ drift as Z itself.
—- Under T-measure and using Z as numeraire or “currency”,
   X0/Z0 = E (X_T/Z_T)
Interpretation – evaluated with the bond as currency, the value of X will stay constant with 0 drift.
—- Under IBM-measure and cash numeraire
   X0/S0 = E (X_T/S_T)
Interpretation – can I say X follows the same drift as IBM? No. The equation below doesn’t hold because S_T can’t come out of E()!
     !wrong —>       E (X_T)/X0 = S_T/S0    ….. wrong!
—- Under IBM-measure and IBM numeraire… same equation as above.
Interpretation – evaluated with IBM stock as currency, the value of X will stay constant with 0 drift.

Now what if X is non-tradable i.e. not the price process of a tradable asset? Consider random variable X = 1/S. X won’t have the drift properties above. However, a contract paying X_T is tradeable! So this contract’s price does follow the drift properties above. See http://bigblog.tanbin.com/2013/12/tradeablenon-tradeable-underlier-in.html
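A compact way to state the pattern above: for any numeraire N (with its associated measure) and any tradable X, assuming no arbitrage,

X_0 / N_0 = E( X_T / N_T )    …. expectation under the N-measure

When N_T is deterministic (the BA or the zero bond), N_T can be pulled out of the expectation, giving E(X_T) = X_0 · B_T/B_0 = X_0 · exp(rT) under the BA measure, and E(X_T) = X_0 / Z_0 under the T-measure. In both cases the expected drift of X matches the numeraire’s own drift. When N_T is random (the IBM stock), it cannot be pulled out, which is exactly why the “wrong” equation above fails.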

numeraire paradox

Consider a one-period market with exactly 2 possible time-T outcomes w1 and w2.

Among the tradable assets is G. At termination, G_T(w1) = $6 or G_T(w2) = $12. Under G-measure, we are given Pr(w1) = Pr(w2) = 50%. It seems at time-0 (right now) G_0 should be $9, but it turns out to be $7! Key – this Pr is inferred from (and must be consistent with) the current market price of another asset [1]. Without another asset, we can’t work out the G-distro. In fact I believe every asset’s current price must be consistent with this G-measure Pr … or arbitrage!

Since every asset’s current price should be consistent with the G-Pr, I feel the most useful asset is the bond. The bond’s current price works out to Z_0 = $0.875. This implies a predictable drift rate.

I would say under bond numeraire, all assets (G, X, Z etc) have the same drift rate as the bond numeraire. For example, under the Z-numeraire, G has the same drift as Z.

Q: under Z-measure, what’s G’s drift?
A: $7 -> $8

It’s also useful to work out under Z-measure the Pr(w1) = 66.66% and Pr(w2) = 33.33%. This is using the G_0, G_T numbers.

Now can there be a 0-interest bank account B? In other words, could B_T = B_0 = $1? No, since such prices imply a G-measure Pr(w1) of 5/7 (verified below), contradicting the given 50%. So this bank account’s current price is inconsistent with whatever asset was used in [1] above.
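Working the numbers (a sketch using the pricing identity X_0/G_0 = E( X_T/G_T ) under the G-measure):

Z_0 / G_0 = E( Z_T / G_T ) = ½·(1/6) + ½·(1/12) = 1/8,   so Z_0 = 7/8 = $0.875

Under the Z-measure, G_0 / Z_0 = E( G_T ), so E( G_T ) = 7 / 0.875 = 8 = 6·p + 12·(1−p), giving p = Pr(w1) = 2/3, matching the “$7 -> $8” drift and the 66.66%/33.33% distro quoted above.

For a 0-interest bank account (B_T = B_0 = $1), B_0 / G_0 = E( 1/G_T ) under the G-measure, i.e. 1/7 = q/6 + (1−q)/12 = (q+1)/12, giving q = 5/7, which contradicts the given Pr(w1) = 50%.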

The most common numeraires (bank accounts and discount bonds) have just one “outcome”. (In a more advanced context, the bank account outcome is uncertain, due to stoch interest rates.) This stylized example is different. Given a numeraire with multiple outcomes, it’s useful to infer the bond numeraire. It’s generally easier to work with one-outcome numeraires. I feel it’s even better if we know both the exact terminal price and the current price of this numeraire — I guess only the discount bond meets this requirement.

I like this stylized 1-period, 2-outcome world.
Q1: Given Z_T, Z_0, G_0, G_T [2], can I work out the G-Pr (i.e. distro under G-numeraire)? Can I swap the roles and work out the Z-Pr?
A: I think we can work out both distros and they aren’t identical!

Q2: Given G_0 and the G_T possible values[2] without Z prices, can we work out the G-Pr (i.e. distro under G-numeraire)?
A: No. Without a second asset we can’t pin down the distro; in a high vs a low interest-rate world, the Pr implied by G_T would be different.

[2] these are like pre-set enum values. We only know these values in this unrealistic world.

"uninitialized" is usually either a pointer or primitive type

See also c++ uninitialized “static” objects ^ stackVar

1) uninitialized variable of primitive types — contains rubbish
2) uninitialized pointer — very dangerous.

We are treating rubbish as an address! This address may happen to be Inside or Outside this process’s address space.

Read/write on this de-referenced pointer can lead to crashes. See P161 [[understanding and using C pointers]].

There are third-party tools to help identify uninitialized pointers. I think it’s by source code analysis. If function3 receives an uninitialized pointer it would look completely normal to the compiler or runtime.

3) uninitialized class instance? Possible. Every class instance in c++ will have its memory layout well defined, though a field therein may fall into category 1) or 2) above.

Ashish confirmed that a pointer field of a class will be uninitialized by default.

4) uninitialized array of pointers could hold wild pointers

5) I think POD class instances can also show up uninitialized. See https://stackoverflow.com/questions/4674332/declaring-a-const-instance-of-a-class
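A minimal c++ sketch of categories 1) to 3) above (names are made up; the UB lines are left commented out) —

#include <cstdio>

struct Trade {          // a POD-style class, purely for illustration
    int qty;            // primitive field: rubbish if left uninitialized
    double* px;         // pointer field: a wild pointer if left uninitialized
};

int main() {
    int i;              // category 1: uninitialized primitive, holds rubbish
    int* p;             // category 2: uninitialized pointer, rubbish treated as an address
    Trade t;            // category 3: default-init leaves both fields uninitialized
    (void)i; (void)p; (void)t;

    // std::printf("%d\n", i);    // UB: reading an uninitialized int
    // std::printf("%d\n", *p);   // UB: dereferencing a wild pointer, may or may not crash
    // *t.px = 1.0;               // UB: same problem via the uninitialized pointer field

    Trade t2{};         // value-initialization zeroes a POD: qty == 0, px == nullptr
    std::printf("%d %p\n", t2.qty, static_cast<void*>(t2.px));
    return 0;
}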

volatile in c++ #hardware eg

(See other posts on volatile.) I now get more and more questions on this keyword.

Now I think a volatile variable can be updated by another thread, by another process via shared-memory[1], or by hardware (via interrupts). P206 [[from c to c++]] has a “hardware” example.

[1] see effModernC++

P178 [[understanding and using c pointers]] shows how to read a hardware port, declared as a const pointer to a volatile integer object —

unsigned int volatile * const port = ….
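A compilable sketch of the same idea (the register address is made up; on a hosted OS you would not actually dereference it) —

// hypothetical memory-mapped status register (the address is illustrative only)
unsigned int volatile * const port = reinterpret_cast<unsigned int volatile*>(0x40000000u);

// Busy-wait until the device sets bit 0. Without volatile, the compiler could
// legally hoist the load out of the loop and spin forever on a cached value.
// The pointer itself is const (never re-seated); the pointed-to value is volatile
// because hardware, another process (shared memory) or another thread may change it.
void waitForReady() {
    while ((*port & 1u) == 0u) {
        // spin
    }
}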

Unlike in java (where volatile applies only to fields), in c++ volatile can also qualify local variables, data members, and member functions. The placement rules are similar to “const”.
For memory fence, see post on c++volatile^atomic.
–experts all agree q(volatile) is unrelated to threading.


IV spr/hib, SQL, …

———- Forwarded message ———-

From: Hai Yi
Date: 23 January 2014 11:24
Subject: Citi interview (01/22/14)

this is a tough one, a real torture

Q: Input an array of strings; how to eliminate the dups and output a new array, without iterating over each element?
A: Take advantage of Set#addAll and Set#toArray.
Q: Write factorial in two ways.
A: Recursive and non-recursive.
Q: How to customize serialization? What methods to use?
A: Normally serialization in Java works this way: an object to be serialized, say A, needs to implement the Serializable interface; from the program, use ObjectInputStream/ObjectOutputStream to read in / write out A.
To customize the serialization, implement readObject/writeObject inside A, and both methods should be “private”.
A second way to customize serialization is to implement Externalizable. Note this one is NOT a marker interface; it has two methods, readExternal / writeExternal.
Q: The dynamic proxy in Java: what’s its name?
A: Proxy.
Q: JNI is loaded into which area of the heap? Say, for example, an Oracle driver.
Q: How to record a hit counter in an application server, in a multi-threading environment?
A: AtomicInteger.
Q: Optimistic lock and pessimistic lock: what exactly is the SQL in Oracle to implement them?
A: For an optimistic lock, use an additional column in the table, of type “timestamp” or “version”, and check it before updating.
For a pessimistic lock, add a “FOR UPDATE” clause to the SQL, effectively locking access to the selected rows.
Q: What’s the default autowiring in Spring?
A: Autowired by type.
Q: What JVM arguments are used for GC?
Q: What are the tools for a memory dump?
Q: Throughput drops from 1 million transactions per second to 1 transaction in an application server; what’s the primary suspect?
Q: How to prevent a field from being persisted?
A: The transient keyword.
Q: How to implement lazy loading in Hibernate?
A: Lazy loading is one of Hibernate’s fetching strategies; its purpose is to load the related children of an entity only when the user needs them.
If using XML config, in hbm.xml, use the attribute fetch = “select”.
If using annotations, do this:
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "stock")
     @Fetch(FetchMode.SELECT)
Q: How to use Query in Hibernate? And Criteria?
Q: What patterns are implemented in Hibernate?
Q: ApplicationContext: what does it offer beyond BeanFactory?
Q: maven, phase:goal. Explain.
Q: Macro processor in Linux?
Now I know you are just playing me…

std::array+alternatives #RAII dtor

feature wish list — RAII; fixed size; bound check

https://stackoverflow.com/questions/16711697/is-there-any-use-for-unique-ptr-with-array compares vector ^ unique_ptr ^ std::array. Conclusion — vector is more powerful, more versatile, more widely useful, less restrictive. std::array is the most restrictive of the three.

All alternatives to std::array:

  • vector and raw array
  • boost::scoped_array and boost::shared_array — least quizzed.
  • std::unique_ptr<int[]> dynArr(new int[theSize]) # can be allocated on heap at runtime, and you don’t need to call delete[]

The dtor in each data structure is a key feature if you want RAII:

  1. std::array — dtor probably destroys each element one by one. No q(delete) is called since the elements live inside the array object itself (often on the stack)
  2. scoped_array — dtor calls delete[], which is a key difference from scoped_ptr, according to my book [[beyond the c++ standard lib]]
  3. vector — dtor destroys the elements, then its allocator releases the heap buffer (operator delete under the default allocator), since vector keeps the underlying growable array on the heap.
  4. unique_ptr<T[]> — yes as expected. This valgrind experiment shows the array-specialization uses delete[], whereas the vanilla instance uses regular q(delete).
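A small sketch exercising three of the alternatives; each cleans up via its own dtor (RAII), with no manual delete[] anywhere —

#include <array>
#include <cstdio>
#include <memory>
#include <vector>

int main() {
    // std::array: fixed compile-time size, elements stored inside the object
    // (often on the stack); at() gives a bound check.
    std::array<int, 4> a{1, 2, 3, 4};
    std::printf("%d\n", a.at(3));               // at() throws std::out_of_range on a bad index

    // vector: runtime size, growable, heap-backed buffer released by its dtor.
    std::vector<int> v(4, 7);
    std::printf("%d\n", v[2]);

    // unique_ptr<T[]>: runtime size, not growable; the array specialization
    // calls delete[] in its dtor, so no explicit delete[] is needed.
    const std::size_t theSize = 4;
    std::unique_ptr<int[]> dynArr(new int[theSize]{});
    dynArr[2] = 42;
    std::printf("%d\n", dynArr[2]);

    return 0;   // all three release their storage here, via RAII
}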

c++buy-side data support IV (wq)

(world quant?)

local vs my in perl?
what’s bless in perl?
multiprocessing vs multithreading modules in python?

From a simple salary table, select the row with the 2nd highest salary. Address in my blog on top9

toss a dice 3 times. What’s pr(getting 3 different numbers)
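For the dice question, counting ordered outcomes:

Pr(3 different numbers) = (6·5·4) / 6³ = 120/216 = 5/9 ≈ 0.56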

Q: advantages of operator overloading?
A: sometimes you have no choice, like op= and Smart pointers
A: eg: STL containers offer bracket operator
A: eg: iterators offer increment
A: if you have a special_number class, you would consider +/-

Q: vector vs linked list

Q: difference between private and protected keywords in c++? Seldom quizzed!
A: P613 [[c++primer]] says

  • members of the subclass are unaffected by these priv/prot derivation-access-specifier. (Instead, they are controlled by the base class member’s priv/prot classification.)
  • users and children of the subclass are affected!

Q: Scan a multi-line text file just once to pick a line at “random” i.e. where each line has the same probability. Not well-defined problem. Don’t spend too much time.
A: save the lines in a vector. At end of the scan, we know the vector size. Pick a random non-negative int below vector.size()
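A sketch of the vector answer (one pass to collect, then one random index; the names and the fixed seed are arbitrary) —

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) return 1;                       // usage: prog <file>
    std::ifstream in(argv[1]);
    std::vector<std::string> lines;
    std::string line;
    while (std::getline(in, line)) lines.push_back(line);   // single scan
    if (lines.empty()) return 1;

    std::srand(12345);                            // any seed; rand() quality is not the point
    std::size_t idx = std::rand() % lines.size(); // uniform enough for illustration
    std::cout << lines[idx] << '\n';
    return 0;
}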


[[practical api design]] java

Written by the founder and initial architect of NetBeans, this book has a java root. Freely available in pdf, I find this ebook rather detailed, not a lot of high-level (vague) concepts like most design/architecture books including the classics. It seems to deal with some real-world coding and design decisions. I would say these are not life-and-death decisions but still everyday decisions. (In contrast, those other books address decisions I don’t encounter or care about at all; they seem to belong to another level.) Therefore this book is closer to my life as a developer.

There’s a chapter against “extreme” (unhealthy) advice. Unconventional critique:
** an api must be beautiful
** an api must be correct
** an api must be simple
** an api must have good performance
** an api must be 100% compatible
** an api must be symmetrical

chapter on creating a threadsafe API, to be used by clueless and untrained programmers.

chapter on supervising the development of an API

Section on overcoming the fear of committing to a stable API

Section on Factory being better than constructor

Section on making everything final

pointer arg – 2 patterns

When we see a function take in a pointer argument, we should realize there are only 2 correct patterns. If neither pattern applies, then it’s likely a misuse.

I think this is very simple knowledge, easy to apply, easy to remember, but not everyone knows it.

* readonly mode – pointer to const. The function receives the object in readonly mode.

* update mode – pointer to non-const. The function may modify the object.
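A minimal sketch of the two signatures (the Order struct and function names are made up) —

#include <string>

struct Order {
    std::string id;
    double qty = 0;
};

// readonly mode: pointer-to-const, the function can only inspect the object
double notional(const Order* o, double px) {
    return o ? o->qty * px : 0.0;
}

// update mode: pointer-to-non-const, the function is expected to modify the object
void fill(Order* o, double filledQty) {
    if (o) o->qty -= filledQty;
}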

PCP -> div, delta, fwd-contract …

How do we internalize the PCP implications? They are hard to remember, easy to get wrong. We need to know the limitations/assumptions of each “rule of thumb”. Some rules are more fundamental than other rules.

PCP is more fundamental than BS.

PCP is more fundamental than GBM.

PCP equation applies to both 1) terminal values and 2) pre-maturity values. In fact, given 2 replicating portfolios, their values must match at all times. What if one of the securities involved is non-tradable, like a dividend-paying stock? See posts on PCP+dividend.

PCP applies only to European options, not American or binary options.
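For reference, the PCP equation itself (European call and put, same strike K and expiry T, non-dividend underlier, continuously compounded rate r):

C_t − P_t = S_t − K·exp(−r(T−t)),   for all 0 ≤ t ≤ T

With a known dividend yield q, the stock leg becomes S_t·exp(−q(T−t)); see the PCP+dividend posts.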

python – important details tested on IKM

Today I did some IKM python quiz. A few tough questions on practical (not obscure) and important language details. You would need to invest months to learn these. I feel a regular python coding job of 2 years may not provide enough exposure to reach this level.

Given the amount of effort (aka O2, laser) invested, I feel LROTI would be much higher in c++, java and WPF (slightly less in general c# and swing). One problem with LROTI on WPF is churn.

As we grow older and have less time to invest, LROTI is a rather serious consideration. I no longer feel like superman in terms of learning new languages.

present value of 22 shares, using share as numeraire

We all know that the present value of $1 to be received in 3Y is (almost always) below $1, basically equal to exp(-r*3) where r := the continuously compounded risk-free interest rate. This is like an informal, working definition of PV.

Q: What about a contract where the (no-dividend) IBM stock is used as currency or “numeraire”? Suppose contract pays 33 shares in 3Y… what’s the PV?

%%A: I feel the PV of that cash flow is 33*S_0, i.e. 33 times the current IBM stock price.
I feel this “numeraire” has nothing to do with probability measure. We don’t worry about the uncertainty (or probability distribution) of future dollar price of some security. The currency is the IBM stock, so the future value of 1 share is exactly 1, without /uncertainty/randomness/ i.e. it’s /deterministic/.
—–
Similarly, given a zero bond will mature (i.e. cash flow of $1) in 3Y, PV of that cash flow is Z_0 i.e. the current market value of that bond.
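A replication sketch in the same spirit: to deliver 33 shares at time T, just buy 33 shares now and hold them (no dividends, nothing to reinvest). The cost today is 33*S_0, so

PV(33 shares at T) = 33*S_0,   and with the stock as numeraire:  V_0/S_0 = E( 33·S_T / S_T ) = 33

Similarly the zero bond gives a PV of Z_0 per $1 of terminal cash flow, as stated above.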

R-programming resources #ebooks …

–ebooks (master copy is in USB drive)
There are also decent ebooks outside CRAN.

http://cran.r-project.org/doc/manuals/R-intro.pdf — more techie
http://cran.r-project.org/doc/contrib/Ding-R-intro_cn.pdf

http://cran.r-project.org/doc/manuals/R-data.pdf — includes excel integration
http://cran.r-project.org/doc/contrib/usingR.pdf — good
http://cran.r-project.org/doc/contrib/Verzani-SimpleR.pdf — more stats

avoid overhead of dynamic memory allocation – alloca etc

I now think there are various overheads with DMA (dynamic memory allocation):

* search the free list for a suitable chunk
* fragmentation
* If an allocation is needed where the heap is almost used up, glibc must grab more memory from the kernel, then hand out a slice of it.

Linux alloca() and variable-length arrays both avoid some of this overhead. See P267 [[linux sys programming]].

If a low-latency module does a ton of malloc(), then alloca() might outperform malloc() easily. We should benchmark.
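A small sketch (Linux-specific, since alloca() is not standard c++; the function and sizes are made up) —

#include <alloca.h>   // Linux; not part of standard C++
#include <cstdio>
#include <cstring>

// Hot-path function that needs a small per-call scratch buffer.
// alloca() carves it out of the current stack frame: no free-list search,
// no fragmentation, and it is released automatically when the function returns.
// Caveat: the size must stay small, or the stack overflows.
void hotPath(const char* msg) {
    std::size_t n = std::strlen(msg) + 1;
    char* scratch = static_cast<char*>(alloca(n));   // stack allocation, nothing to free
    std::memcpy(scratch, msg, n);
    std::printf("%s\n", scratch);
}

int main() {
    hotPath("low-latency scratch buffer");
    return 0;
}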

N(d1) >> N(d2) | high σ, r==0, S==K

N(d1) = Pr(ST > S0) , share-measure
N(d2) = Pr(ST > S0) , RN-measure

For simplicity, T = 1Y,  S= K = $1.

First, forget the formulas. Imagine a GBM stock price with high volatility without drift. What’s the prob [terminal price exceeding initial price]? Very low. Basically, over the intervening period till maturity, most of the diffusing particles move left towards 0, so the number of particles that lands beyond the initial big-bang level is very small. The “distribution” curve is squashed to the left. [1]

However, this “diffusion” and distribution curve would change dramatically when we change from RN measure to share-measure. When we change to another measure, the “probability mass” in the Distribution would shift. Here, N(d1) and N(d2) are the prob of the same event, but under different measures. The numeric values can be very different, like 84% vs 16%.

Under share measure, the GBM has a strong drift (cf zero drift under RN) —

dS = σ²S dt + σ S dW

Therefore when σ√T is high, most of the diffusing particles move right and will land beyond the initial value, which leads to Pr(ST > S0) close to 100%.

— Now the formula view —
With those nice r, S, K, T values,

d1 =  σ√T /2
d2 = –σ√T /2

Remember for a standard normal distribution, if d1 and d2 are 1 and -1 (i.e. σ=2 with T=1), then N(d1) would be about 84% and N(d2) about 16%.
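Plugging into the standard formulas (a quick check):

d1 = [ ln(S/K) + (r + σ²/2)·T ] / (σ√T),    d2 = d1 − σ√T

With r=0 and S=K, d1 = σ√T/2 and d2 = −σ√T/2. Taking σ=2 and T=1: N(d1) = N(1) ≈ 84.1% and N(d2) = N(−1) ≈ 15.9%.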

[1] See posts

http://bigblog.tanbin.com/2013/12/gbm-with-zero-drift.html
http://bigblog.tanbin.com/2014/01/prst-k-s0-k-and-r0-intuitively.html

Pr(S_T > K | S_0 > K and r==0), intuitively

The original question — “Assuming S_0 > K and r = 0, denote C := time-0 value of a binary call. What happens to C as ttl -> 0 or ttl -> infinity. Is it below or above 0.5?”

C = Pr(S_T > K), since the discounting to PV is non-issue. So let’s check out this probability. Key is the GBM and the LN bell curve.

We know the bell curve gets more squashed [1] to 0 as ttl -> infinity. However, E S_T == S_0 at all times, i.e. average distance to 0 among the diffusing particles is always equal to S_0. See http://bigblog.tanbin.com/2013/12/gbm-with-zero-drift.html

[1] together with the median. Eventually, the median will be pushed below K. Concrete illustration — S_0 = $10 and K = $4. As TTL -> inf, the median of the LN bell curve will gradually drop until it is below K. When that happens, Pr(S_T > K) drops below 0.5, and it keeps falling towards 0 as ttl -> infinity.

——–
ttl -> 0. The particles have no time to diffuse. LN bell curve is narrow and tall, so median and mean are very close and merge into one point when ttl -> 0. That means median = mean = S_0.

By definition of the median, Pr(S_T > median) = 0.5, so Pr(S_T > S_0) = 0.5. But K is below S_0, so Pr(S_T > K) is higher. When the LN bell curve is a thin tower, Pr(S_T > K) -> 100%.
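The same conclusion from the formulas (with r = 0, the binary call value is just the RN probability):

C = Pr(S_T > K) = N(d2),   where d2 = [ ln(S_0/K) − σ²T/2 ] / (σ√T)

Since ln(S_0/K) > 0: as ttl -> 0, the first term dominates, so d2 -> +∞ and C -> 1; as ttl -> infinity, the −σ²T/2 term dominates (it grows like T while the denominator grows like √T), so d2 -> −∞ and C -> 0.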

independence^correlation btw 2 RV, my take

I feel correlation is more  a statistics concept, less a probability concept. In contrast, Independence has 2 interpretations — in prob vs stats — see other posts in this blog.

In a theoretical model, the color vs the points (rank) on a random poker card are independent, so out of 9999 trials, the data collected should show very, very low correlation, but perhaps non-zero correlation!

From this example, I feel in a theoretical model, correlation isn’t important. However, in real world statistics, correlation is probably more important than Ind. As described in other posts, I feel ind is shades of grey, to be measured … using correlation as the measure.

Whenever someone says 2 thingies are independent, I think of a logical, theoretical (probabilistic) model. In the real world, we are never really sure how true that is.

Whenever someone talks about correlation/covariance, I think of statistics on observed data.

It’s well known that 2 uncorrelated variables may be dependent.

Many people prefer to say “A and B are uncorrelated” when they really mean “they don’t depend on or influence each other”. I feel most of the time the meaning is imprecise and unclear.

predicted future px USUALLY exceeds current px – again

This is a pretty important concept …

Extreme example – Suppose the home price is believed to be unpredictable or unstable for the next 3 months[1], short-term rental is impossible, and there are many equivalent houses on sale. You just bought one of these equivalent houses but need it only in 3 months, and you would have all the cash by then. Do you prefer to “settle” (i.e. pay cash and get the key) now (A) or in 3 months (B)? You prefer B, because A means you must start paying mortgage interest 3 months earlier.

Now suppose the seller exploits your preference for B and asks $1 more to do a fwd deal (B) instead of a spot deal (A). You would be wise to still prefer B, because the interest amount is likely to be thousands.

So $1 is too cheap. But what’s a fair price for the fwd deal? I think it’s exactly the spot price plus the mtg interest amount. For most securities, fwd price [2] is Higher than spot. (A few assets are exceptions and therefore important[3].) First suppose fwd price == spot price as of today, and ignore the positive/negative signs below —

* if interest_1 < rent_3, then seller gains. Some competing seller would sacrifice a bit of gain to sell at a lower fwd price. Fwd price is then driven down below spot price. This is the high-coupon case.
* if interest_1 > rent_3, then seller loses. She would simply reject the proposed trade. She would have to charge a Higher fwd price to compensate for her loss. This is the usual case, where rent_3 is $0 and there’s no repo or rent market for this asset.

I feel the fair theoretical fwd price is not affected by implied volatility, or by any kind of trend. A trend can continue or reverse. I feel the calculation of fwd price is based on assumption of constant asset price or random movement for the next x months.

[1] I think in most cases of fair pricing we assume the asset’s price has no up/down trend.
[2] If the spot contract doesn’t have a pair of start/end dates, i.e. a straightforward “cash-on-delivery” instrument, then I think in many cases “fwd price” means ” delayed settlement”.

[3] Their fwd price is Lower —
– High-coupon bonds such as treasury
– High-dividend stocks
– many currency pairs

Why the premium vs discount? There’s arbitrage mathematics at play. For most products, a fwd seller could 1) borrow cash to 2) buy the underlier today, 3) lend it out for the fwd term (say 90 days), and 4) deliver it on the fwd start date as promised. All deals are executed simultaneously today, so all prices are fixed together, and any profit is locked in.
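The standard cash-and-carry result behind this (continuously compounded financing rate r, income/“rent” yield q on the asset):

F_0 = S_0 · exp( (r − q)·T )

F_0 > S_0 in the usual case q < r (no income on the asset), and F_0 < S_0 when q > r, which covers the exceptions in [3]: high-coupon bonds, high-dividend stocks, and high-yield currencies.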

[[safeC++]] discourages implicit conversion via OOC/cvctor

See other posts about OOC and cvctor. I am now convinced by [[safeC++]] that it’s better to avoid both. Instead, use an AsXXX() method if converting from YYY to XXX is needed. The reason is type safety. In an assignment (including function input/output), it is slightly hacky if LHS is NOT a base type of RHS. Implicit conversion subverts the compiler’s type enforcement: given a function declared as f(XXX), it should ideally be illegal to pass in a YYY. However, the implicit converters break this clean rule through the back door.

As explained concisely on P8 of [[safeC++]], the OOC is provided specifically to support implicit conversion. In comparison, the cvctor is more likely a careless mistake when not marked “explicit”.

Favor explicit conversion over implicit conversion. Some manager at Millennium pointed out that c++ syntax has too many back doors and is too “implicit”. Reading a piece of code, you don’t know what it does unless you have lots of experience/knowledge about all the “back doors”.
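A minimal sketch of the guideline (the Price class and submit() are made up for illustration) —

class Price {
public:
    explicit Price(double d) : val(d) {}      // cvctor marked explicit: no silent double -> Price
    double asDouble() const { return val; }   // named, explicit conversion out (the AsXXX idea)
    // deliberately no operator double(): no silent Price -> double either
private:
    double val;
};

void submit(const Price&) { /* ... */ }       // a function declared to take a Price

int main() {
    Price p(100.25);            // fine: explicit construction
    // submit(100.25);          // does not compile: implicit double -> Price is blocked
    submit(Price(100.25));      // the caller must spell out the conversion
    double d = p.asDouble();    // explicit, named conversion
    (void)d;
    return 0;
}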

use delete() or delete[] @@ #briefly

An address stored in a pointer (ptr-Object or ptr-Nickname — see http://bigblog.tanbin.com/2012/07/3-meanings-of-pointer-tip-on-delete-this.html) can mean either a “lone-wolf” or a “wolf pack”. 
Specifically, the logical (rather than physical) data at the address could be an array of objects or a single one.

The op-new and op-new[] operators both return an address. If you received the address without knowing which new operator was used, then it’s impossible to know how to delete it. As [[effC++]] points out, deleting the wrong way is UndefinedBehavior. 

The suggestion in [[safeC++]] is to avoid smart array (like shared_array) but use vector instead.

On a similar note, if your function receives a char*, you would have no idea whether it’s a single char or a string.
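A small sketch of the pairing rules, plus the vector alternative recommended above (Dog is just an example type) —

#include <vector>

struct Dog { int age = 0; };

int main() {
    Dog* lone = new Dog;        // "lone wolf": must be freed with delete
    Dog* pack = new Dog[3];     // "wolf pack": must be freed with delete[]

    delete lone;                // correct pairing
    delete[] pack;              // correct pairing; plain delete here would be UB

    // Safer: let a container own the pack, so the question never arises.
    std::vector<Dog> kennel(3); // its dtor releases all three Dogs automatically
    return 0;
}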

[[safeC++]]assertion technique(versatile), illustrated with NPE

Update: how to add a custom error message to assert — https://stackoverflow.com/questions/3692954/add-custom-messages-in-assert

This is a thin book. Concise and practical (rather than academic) guidelines.

#1 guideline: enlist compiler to catch as many errors as possible.

However, some errors will pass the compiler and only show up at run time. Unlike on P11, I will treat programmer errors and other run-time errors alike – we need an improvement over the “standard” outcome, which is UB (undefined behavior) where we may or may not see any error message anywhere.

#2 guideline: In [[safeC++]], such an improvement is offered in the form of assertions, in other words, run-time checks. The author gives them a more accurate name “diagnostics”.

2.1) now the outcome is guaranteed termination, rather than the undefined behavior.
2.2) now there’s always an error message + a stack trace. Now 2.2) sounds like non-trivial improvement. Too good to be true? The author is a practicing programmer in a hedge fund so I hope his ideas are real-world.

Simplest yet realistic example of #2 is NPE (i.e. null pointer deref). NPE is UB and could (always? probably not) crash. I doubt there’s even an error message. Now with a truly simple “wrapper” presented on P53-54, an NPE could be diagnosed __in_time__ and a fatal exception thrown, so the program is guaranteed to terminate, with an error message + stack trace.

Like a custom new/delete (to record allocations), here we replace the raw pointer with a wrapper. This is a recurring pattern: replace built-in c++ constructs with our own wrappers to avoid UB and get run-time diagnostics —

$ this wrapper is actually a simple smart ptr
$ traditional smart ptr templates
$ custom new, delete
$ vector
$ Int class replacing int data type
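A minimal sketch of the checked-pointer idea (not the book’s exact class): every dereference is tested, so a null deref becomes a thrown exception with a message instead of UB —

#include <cstdio>
#include <stdexcept>

template <typename T>
class CheckedPtr {
public:
    explicit CheckedPtr(T* p = nullptr) : raw(p) {}
    T& operator*() const {
        if (!raw) throw std::runtime_error("CheckedPtr: null dereference");
        return *raw;
    }
    T* operator->() const {
        if (!raw) throw std::runtime_error("CheckedPtr: null dereference");
        return raw;
    }
private:
    T* raw;
};

int main() {
    int x = 42;
    CheckedPtr<int> good(&x);
    std::printf("%d\n", *good);        // prints 42

    CheckedPtr<int> bad;               // holds nullptr
    try {
        std::printf("%d\n", *bad);     // diagnosed at run time, not UB
    } catch (const std::exception& e) {
        std::printf("caught: %s\n", e.what());
    }
    return 0;
}

A real version would also capture file/line, print a stack trace, and terminate rather than throw, to match the “guaranteed termination” goal above.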

The key concepts —
% assertion
% diagnostics
% run time

Q: Can every UB condition be diagnosed this way? Not sure, but the most common ones seem to be.

overload operator<< as non-friend #MIAX

Q: without the “friend” keyword, how do you support “dump” output like q(cout<<myDog<<endl), where

Dog myDog; // has private fields

Note you can modify Dog class implementation.

%%A: add public const accessors (e.g. returning const pointers/references to the private fields), so the non-friend q(operator<<) can print them.
%%A: change Dog from a class to a struct.
A: c++ reflection is harder than java. Probably overkill here.
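A sketch of the accessor approach (Dog’s fields and getters are made up) —

#include <iostream>
#include <string>

class Dog {
public:
    Dog(std::string n, int a) : name(std::move(n)), age(a) {}
    // public const accessors let a NON-friend operator<< read the private state
    const std::string& getName() const { return name; }
    int getAge() const { return age; }
private:
    std::string name;
    int age;
};

// free function, not a friend: it only touches Dog's public interface
std::ostream& operator<<(std::ostream& os, const Dog& d) {
    return os << "Dog{" << d.getName() << ", " << d.getAge() << "}";
}

int main() {
    Dog myDog("Rex", 3);
    std::cout << myDog << std::endl;
    return 0;
}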

Linq – deferred ^ immediate execution

update — java 8 streams …

(I feel this is a low-level but popular question…)

– if an operation returns a single value (aka aggregates) then ImmEx. ([[c# indepth]]  P281)

– if an operation returns another sequence, then defEx ([[c# indepth]]  P281)

– ToList() — ImmEx

– sort uses ImmEx in SQL. Ditto Linq — [[c# indepth]]  P369

– regular SQL Select with multiple output rows? Uses defEx. As you retrieve one row, all subsequent rows are subject to change.

– SQL Select with a single-row output? probably ImmEx.