objA querying objB – vague OO-speak

Update — I think architects and “savvy communicators” often practice vague OO-speak to perfection. We may need to practice it.

In OO discussions, we often say “Look here, objectA will query objectB”, by calling getTimestamp(). This is a typical example of vague/misleading OO-speak. It’s as vague as a presidential candidate promising to “reduce the deficit in 4 years” without any concrete details. Note non-OO programmers don’t develop such vague communication habits.

Here are some of the fundamental ambiguities.

* A must get a handle on B before calling B’s non-static methods. The way to get the correct B instance is often non-trivial. That sentence invariably glosses over such details.

* It’s a method in A — methodA1 — not just “object A”, that queries B. This is a crucial point glossed over in that sentence.

** The original sentence implies A is some kind of active object that can initiate actions, but in reality methodA1 is itself called by another method, so the real initiator is often an external entity such as a GUI user, an incoming message, the main thread of a batch job, or a web-service/EJB request. ObjectA is NOT the initiator entity.

* The return value of the query is either saved in A or used on the fly inside methodA1. The original sentence doesn’t mention this, and implies objectA has some kind of intelligence/bizlogic about how to use the return value. In some cases the method in objectA does have that “intelligence”, but it should not be glossed over. More often, methodA1 simply passes the query result to the upstairs method above methodA1. That upstairs method may “belong” to a different object. All these details are glossed over in the original sentence.

Here’s an even more vague phrase – “A calling B”. This glosses over essentials such as the arguments and where they come from.
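To make the glossed-over details concrete, here is a minimal sketch (all names — A, B, method_a1, get_timestamp — are hypothetical, mirroring the discussion above):

```python
class B:
    def get_timestamp(self):
        return 1234567890  # pretend query result


class A:
    def __init__(self, b):
        # Detail #1 glossed over: A must first obtain a handle on the correct B instance.
        self._b = b

    def method_a1(self):
        # Detail #2: the query happens inside a specific method, not "object A" at large.
        ts = self._b.get_timestamp()
        # Detail #3: often A has no "intelligence" about ts; it simply passes it upstairs.
        return ts


# Detail #4: the real initiator is an external caller (a GUI event, an incoming
# message, a batch job's main thread), not objectA itself:
result = A(B()).method_a1()
```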


pbref^pbclone DEFINED : 3QnA any lang, Part1

pbref includes 2 categories – pbcref (const-ref) and pbrref (regular ref). It’s best to update all posts …

  lhs_var =: rhs_var // pseudo code

Reading this assignment in ANY language, we face the same fundamental question –

Q1: are we copying the object’s address (assume 32-bit) or cloning the object? Note input/output for a function call is a trickier form of assignment. In most languages, this is essentially identical to a regular LHS/RHS assignment. [1]
Q2: “Is LHS memory pre-allocated (to be bulldozed) or yet to be allocated (malloc)?”
Q3: does the variable occupy a different storage than the referent/pointee object? See http://bigblog.tanbin.com/2011/10/rooted-vs-reseat-able-variables-c-c.html

With these questions, we will see 2 different types of assignment. I call them pass-by-reference (pbref) vs pass-by-cloning (pbclone, which includes op= and copy-ctor). These questions are part of my Lesson 1 in Python and C++. Here are some answers:

java primitives – pbclone
java non-primitives – pbref
c# value types – pbclone
c# reference types – pbref
c++ nonref by default – pbclone
c++ nonref passing into function ref-param — pbref
c++ pointer – usually pbclone on the 32-bit pointer
c++ literal – pbclone?
c++ literal passing into function ref-param?
php func by default – pbclone
php func ref-param – pbref
php func ref-return – pbref
perl subroutine using my $newVar — pbclone
perl subroutine using $_[0] — pbref
python immutables (string, tuple, literals, frozenset) – pbref — copy-on-write
** python primitive — no such thing. “myInt=3” is implemented the same way as a string. See post on Immutable, initialize etc
python list, dict, set …- pbref
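The “no Python primitives” point is easy to verify interactively — small-int assignment binds a reference, just like a string (a quick sketch):

```python
# Python has no primitive types: "myInt=3" binds a name to an int object,
# exactly the way a string assignment binds a name to a str object.
x = 3
y = x
assert x is y          # same referent — pbref, not pbclone

s = "abc"
t = s
assert s is t          # ditto for strings
```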

Q: what if RHS of pbref is a literal?
A: then pbref makes no sense.

Q: what if RHS is a function returning by reference?
A: you can get pbref or pbclone

[1] C++ and Perl parameter passing is multifaceted.
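A quick Python demonstration of the pbref/pbclone distinction, including the function-call-as-assignment point from Q1 (variable names are arbitrary):

```python
import copy

# pbref: Python assignment copies the reference, not the object
a = [1, 2, 3]
b = a                 # b is an alias for the same referent
b.append(4)
assert a == [1, 2, 3, 4]
assert a is b

# pbclone must be requested explicitly
c = copy.copy(a)
assert c == a and c is not a
c.append(5)
assert a == [1, 2, 3, 4]   # the original is untouched

# a function call's input is the same kind of assignment: param =: arg
def f(lst):
    lst.append(99)    # mutates the caller's object — pbref
f(a)
assert a[-1] == 99
```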

equivalence of array^pointer? my take

Update — the spreadsheet is a newer post.
First let’s clarify what we mean by “pointer”.

Q: an array name is comparable to a pointer ….? Nickname? PointerObject? or PureAddress?
%%A: an array name like myArray is a nickname for a pure address. See post on 3 meanings of “pointer”. However, in this post we are comparing myArray with a myPointer, where myPointer is … either a PointerObject or PureAddress
%%A: myArray is like a const ptr “int * const myConstPtr”

The apparent “equivalence” of array vs pointer is a constant source of confusion. Best (one-paragraph) pocket-sized summary — http://c-faq.com/aryptr/practdiff.html

http://c-faq.com/aryptr/aryptrequiv.html is a longer answer.

#1 (principle for dealing with the confusion) physical layout. When in doubt, always go back to fundamentals.

A simple array on the stack is a permanent name plate [2] on a permanent block of memory, physically dissimilar to a pointer Variable. However, Pointer Arithmetic (PA) and Array Indexing (AI) are equivalent. I feel AI is implemented using PA.

#2) syntax

Pointer variable is syntactically a more _flexible_ and more powerful construct than array. However, the strong syntactical restriction on array is a feature not a limitation. Look at strongly typed vs dynamic languages.

I feel most operations on an array are implemented using pointer operations. (Not conversely — many pointer operations are syntactically unavailable on arrays.) Specifically, passing an array into a function is implemented by a pointer bitwise copy.

That’s the situation with arrays on the Stack. On the heap, the situation is more confusing. Consider “new int[5]” — the return value Must be saved in a pointer variable, but the underlying is a nameless array. In contrast, see [2] above. Also, that pointer variable can rebind/reseat.
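Sticking to Python for the examples in this collection, ctypes can make the AI-implemented-via-PA point concrete: a C array is a contiguous block, and indexing element 2 is the same as dereferencing base-address + 2*sizeof(int). (The array contents here are arbitrary.)

```python
import ctypes

# contiguous block, like C's "int arr[5]"
arr = (ctypes.c_int * 5)(10, 20, 30, 40, 50)
base = ctypes.addressof(arr)     # the "pure address" the array name stands for

# Array Indexing: arr[2]
via_index = arr[2]
# Pointer Arithmetic: *(base + 2 * sizeof(int))
via_ptr = ctypes.c_int.from_address(base + 2 * ctypes.sizeof(ctypes.c_int)).value

assert via_index == via_ptr == 30
```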

attributes of an option instrument

When we say “I am short 30 OTM July IBM put options struck at 103”, we mix static instrument attributes with a non-static attribute.

A call option is always a call option (unless in FX). The underlier and the call/put attribute are part of the product static data.
The strike/expiry are also part of the product static data. These are the 4 defining attributes of an option instrument in real
business conversation. It’s important to notice the difference between static and dynamic attributes.

Moneyness (ITM/OTM) is a dynamic attribute. Moneyness is like indebtedness. You can be in-debt now but later become debt-free.

A negative vega position is always negative vega. This characteristic never changes. It’s like a person’s ethnicity. However, this
is not an instrument attribute, but a position attribute. Ditto for the quantity.
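The static/dynamic split above can be sketched in code (class and field names are hypothetical, not from any real system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: the 4 defining attributes never change
class OptionInstrument:
    underlier: str               # e.g. "IBM"
    call_put: str                # "C" or "P" — static (unless in FX)
    strike: float
    expiry: str                  # e.g. "2024-07"

@dataclass
class Position:
    instrument: OptionInstrument
    quantity: int                # a position attribute, NOT an instrument attribute

def moneyness(inst, spot):
    """Dynamic attribute: depends on the current spot, so it is computed, not stored."""
    if inst.call_put == "P":
        return "ITM" if spot < inst.strike else "OTM"
    return "ITM" if spot > inst.strike else "OTM"

put = OptionInstrument("IBM", "P", 103.0, "2024-07")
pos = Position(put, -30)                  # "short 30"
assert moneyness(put, 110.0) == "OTM"     # OTM today...
assert moneyness(put, 95.0) == "ITM"      # ...ITM later, like indebtedness
```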

SL – how to grab dispatcher, random tips

Q: how many dispatchers are there in one SL process?
%%A: 1, according to many c# guys. However, a lot of literature mentions “a dispatcher”

Q: can you get hold of the dispatcher from Any thread Always?
A: http://www.jeff.wilcox.name/2010/04/propertychangedbase-crossthread/ says not always

Q: which source is best to get the dispatcher?
A: Deployment.Current.Dispatcher is preferred over Application.Current.RootVisual.Dispatcher. See http://www.jeff.wilcox.name/2010/04/propertychangedbase-crossthread/ and other sites

Q: Is SynchronizationContext better than dispatcher?
A: http://www.jeff.wilcox.name/2010/04/propertychangedbase-crossthread/ says dispatcher is easier
A: http://dotnetslackers.com/articles/silverlight/Multithread-Programming-in-Silverlight-4.aspx demonstrates both.

Q: how many UI threads are there?
%%A: 1, according to many c# guys.
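Silverlight specifics aside, what “the dispatcher” does can be sketched language-neutrally: a single UI thread drains a work queue, and any background thread may post a callable onto it (this toy model stands in for Dispatcher.BeginInvoke; it is not the real SL implementation):

```python
import queue
import threading

work = queue.Queue()        # the dispatcher's queue
results = []

def ui_thread():
    # the single UI thread drains the queue; None is a shutdown sentinel
    while True:
        job = work.get()
        if job is None:
            break
        job()

t = threading.Thread(target=ui_thread)
t.start()

# any background thread can "BeginInvoke" by posting a callable
work.put(lambda: results.append("updated from background"))
work.put(None)
t.join()
assert results == ["updated from background"]
```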

Stop orders — LimitOrder, StopLoss, TakeProfit …

See posts on limit^stop orders…

To understand stop orders, it’s instructive to contrast Limit orders vs Stop orders.

Your Limit B/S order gets executed when market moves … in your favor.
Your Stop B/S order gets executed when market moves  … against you, regardless whether you have an open position or not.
** Stop S is a Sell. You prepare a Stop S Below the mkt. It’s executed when mkt moves Down, but before it moves Down too much. (If you buy and then your Stop Sell quickly executes, you end up buying high and selling low. This would be due to a miscalculated buy, where you want to cut losses quickly.)
** Stop B is a Buy. You prepare a Stop B Above the mkt. It’s executed when mkt moves Up, but before Up too much.

Stop orders are mostly used for stop-loss (SL). Other uses tend to be related to SL.

For example, after an FX trade at 1.22, you often prepare a stop-loss (SL) offset trade and a similar take-profit (TP) offset trade. You prepare both, but only one of them will get executed. SL and TP will box up your 1.22 “position open price”, i.e. one of them is placed at a higher price and the other at a lower price around your position.

For a buy@1.22
* the StopLoss is a sell at Lower price — at a Loss (buy high sell low)
* the TakeProfit is a sell at Higher price.

Now, the TP is just a regular limit order, while the SL is a stop order.

I think the exchange only knows LO and market orders. I think the exchange order book consists of LO only.

Here’s a fundamental difference between LO and SO. When triggered, a stop order becomes a market order, executed at the next available market price. Stop orders guarantee execution but do not guarantee a particular price. Therefore, stop orders may incur slippage — there is a substantial risk that stop-loss orders left to protect open positions held overnight will be executed significantly worse than their specified price. I guess this happens when the mkt moves against you too fast and too many like-minded investors have a SL order at the same price. That’s why Saxo told me a SL order only “usually” limits the losses.

In contrast, LO is price-guaranteed by the exchange (though not sure about FX where there’s no exchange).
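The SL/TP “boxing” and the LO-vs-SO difference can be sketched as a toy model (all names and prices are illustrative, not any venue’s real matching logic):

```python
# Toy model of a long position opened at 1.22, boxed by a TP above and a SL below.
ENTRY = 1.22
take_profit = 1.25   # limit sell Above entry — price-guaranteed when it fills
stop_loss   = 1.19   # stop sell Below entry — becomes a market order when touched

def on_tick(price, next_available_price):
    """Return (order_type, fill_price) if a resting order is triggered, else None."""
    if price >= take_profit:
        return ("limit", take_profit)          # LO: the price is guaranteed
    if price <= stop_loss:
        # SO: execution guaranteed, price NOT guaranteed — slippage possible
        return ("market", next_available_price)
    return None

assert on_tick(1.26, 1.26) == ("limit", 1.25)
# mkt gaps through the stop: filled worse than 1.19 — the slippage described above
assert on_tick(1.15, 1.14) == ("market", 1.14)
assert on_tick(1.22, 1.22) is None
```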

icon in SL-OOB causing silent death


I realized that if icon.png is zero-byte, a SL OOB (out-of-browser) app fails with inconsistent behavior. Launched from the desktop shortcut, a blank screen shows up. From MSVS, F5 or Ctrl-F5 shows no screen, no error, no log, even if you enable exception debugging. Even the constructor in App.xaml.cs won’t get invoked.

GarbageCollect in JVM ^ .NET

Disclaimer — I’m no developer of garbage collector and have no real insight. This is just a collection of hearsay online comments.

Diff: Large Object heap

Diff: config parameters? JVM has (far) more. One of the very few dotnet config params is gcConcurrent.
http://msdn.microsoft.com/en-us/library/at1stbec.aspx points out

By default, the VM (i.e. the runtime) uses concurrent garbage collection, which is optimized for latency. If you set gcConcurrent to false, the VM uses non-concurrent garbage collection, which is optimized for throughput.

See first paragraph of http://msdn.microsoft.com/en-us/library/at1stbec.aspx
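Per that MSDN page, the setting lives in the application’s .config file; a minimal fragment:

```xml
<configuration>
  <runtime>
    <!-- optimize GC for throughput instead of latency -->
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
```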