update jtable from inside an event listener: short sample

Note: this sample fires one event only. In general, an event listener (running on the EDT) often generates cascading events to update other JComponents.
Note: this is a stateful listener, which is common. The simplest listeners are stateless functors.
Note: setModel() — “swapping” in a new model.

Note the JTable object vs the table model object — two distinct objects at different addresses!

The listener ctor can receive the JTable object and save it as a field —
public SelectionChangeListener(JTable table) {
    this.table = table;
}
// later, inside the event handler (which Swing invokes on the EDT):
DefaultTableModel model = new DefaultTableModel(…);
this.table.setModel(model); // "swap" in the new model
this.table.repaint(); // safe here — listener callbacks already run on the EDT


pimpl -> bridge pattern

My PIMCO interviewer (Burak?) pointed out, again, the link between pimpl and the bridge pattern. Perhaps he read [[effC++]].

See other posts about pimpl.

Based on whatever little I know about c++ design patterns, I feel bridge is arguably the most powerful, colorful pattern. Here’s one variation of the pattern, featuring a bridge over 2 trees. See [[head first design patterns]] and this detailed yet simple sample code.

First you need to thoroughly master the pimpl idiom. Realize the pimpl classes provide a service to a client.
-> To grasp the bridge pattern, remember the client is unimportant. It’s outside our picture.
-> Next, look at the service side of the client-service divide. Realize we have refactored a single service class into an empty public facade[1] + a private impl (therefore PIMPL) class.
-> Next realize both classes can be abstract and subclassed.
-> We end up with 2 trees i.e. 2 class hierarchies. P292 [[Duffy]]
Note the field in the empty public facade should be a pointer, for maximum polymorphism and easy copying.
-> We see a link between the 2 trees since the facade HasA pointer to the pimpl class.
-> That link is the bridge!

[1] imprecisely

For more flexibility, we can use a factory to manufacture the subclasses. The factory must return pointers to avoid slicing.

event pseudo-field ^ hidden delegate field

— Based on http://www.yoda.arachsys.com/csharp/events.html
First let’s review c# properties. Example — property “age” in class Animal is really a pair of get()/set() methods manipulating a hidden realAge field. You can do anything in the get() and set(), even ignore realAge, or do without the realAge variable entirely. To your clients, age appears to be a field — a pseudofield.

An event is a pseudofield referring to a hidden delegate field. MyEvent is really a pair of add()/remove() methods to append/remove each delegate (one at a time) from the realDelegate field.

If you declare MyEvent without explicit add/remove, the compiler generates them around a hidden delegate field, just like an auto-implemented property.

Compared to properties, there’s some additional syntax complexity. The type of the hidden delegate field is the type mentioned after keyword “event”.

in a param declaration, const and & are both "decorators"

Sound byte — in a param declaration, const and & are 2 standard “decorators”

Q: void Func1(const vector<int> & v) { /* some_complex_expression */ } // It’s easy to get distracted by the some_complex_expression, but put it aside — what kind of param is v?
A: v is a reference to some vector object, and you can’t modify object state via this handle. It’s known as a “const reference”, but
* This doesn’t mean the object is immutable.
* This only means the object is unmodifiable using this particular handle.

Note — when you pass an arg to this param, you specify neither const nor a reference —
vector<int> myVector; Func1(myVector); // the compiler binds a const ref to the nonref variable myVector

Jargon warning — A “handle” is a loose concept, can be a pointer, a ref, or a nonref variable.

pimpl, phrasebook

pointer – the private implementation instance is held by (smart) pointer, not by reference or by value
** P76 [[c++codingStandards]] suggests boost shared_ptr

big3 – you often need to avoid the synthesized dtor, copier and op=. See P…. [[c++codingStandards]]

FCD – see other posts

encapsulate – the “private” class is free to evolve
** wrapper class vs a private class — the public methods in the wrapper class simply delegate to the private class. [[c++succinctly]] has a complete pimpl example showing such “delegations”.

shared_ptr (sptr) vs auto_ptr (aptr)

Both of them delete intelligently. Both overload -> to mimic ptr syntax. See http://publib.boulder.ibm.com/infocenter/comphelp/v8v101/index.jsp?topic=/com.ibm.xlcpp8a.doc/language/ref/cplr329.htm

aptr has unconventional copier/assignment, therefore unacceptable to STL containers; sptr is the only solution I know when storing pointers into containers — not scoped_ptr, not weak_ptr.

sptr is ref-counting; aptr isn’t — each aptr’s “ref count” is always 1.

aptr is sole-ownership; shared_ptr is shared ownership — that’s why there’s a shared ref count.

shared_ptr ref counter implementation — one non-intrusive design uses a doubly-linked list to glue all sister shared_ptr instances together; boost’s default uses a separately allocated counter. See http://www.boost.org/doc/libs/1_44_0/libs/smart_ptr/smarttests.htm.

Q: can you delete a smart ptr as you do a raw pointer?
%%A: no. a smart ptr is an instance of a class template; you can’t call “delete someNonrefVariableOfAnyClass”

Q: if a smart ptr is a field, what happens during destruction?
%%A: see post on DCB. Field destructors are automatically invoked.

Key weakness of ref-counting is the cycle. Use weak_ptr for the back-pointer to break it.

For container usage with inheritance, see http://www.devx.com/cplus/10MinuteSolution/28347/1954

FX vol fitting, according to a practitioner online

— Based on http://quantdev.net/market/bootstrapping/4-buildinganfxvolsurface
The market provides fairly liquid volatility quotes out to about 2 years for only 3 types of instruments —

ATM Straddle
Risk Reversal
Strangle

The quotes are provided in terms of volatility for a specific delta. For example: a quote will be given for the volatility for a 25 delta Strangle, or a 10 delta Risk Reversal for a specific maturity. In order to construct our volatility surface we need quotes for an ATM Straddle, a 25 delta Strangle and a 25 delta Risk Reversal, and a 10 delta Strangle and a 10 delta Risk Reversal with a range of maturities.

We can imply the volatility for the specific deltas at a particular maturity by using our quotes. The 50 delta implied vol is simply the volatility of the ATM Straddle. An ATM Straddle is a call and a put with the same strike and maturity, and chosen so the delta of the straddle is zero.

The 25 delta call implied volatility is the ATM Straddle volatility + (25 delta Risk Reversal volatility / 2) + the 25 delta Strangle volatility.

On a smile curve, the x-axis would include {10Δ, 25Δ, 50Δ, 25Δ, 10Δ} in symmetry. The volatility calculated for the 25Δ call (OTM) is the same as that for a 75Δ put (ITM), so for call values you go along the curve from End A to End B, and for put values you go along the curve from End B to End A.

The same quantdev.net page says (but I doubt it): “Usually you would then turn the vol curve for each maturity so it is denominated in strike rather than delta, as it then becomes much easier to use in practice given that you know the strike for the option you want to price, but not the delta.”

interpreting const var declarations c++

(based on http://www.parashift.com/c++-faq-lite/const-correctness.html)

#1 simple rule — read   B a c k w a r d s
Q: What does ….Fred const * p1…. mean?
A: reading backwards, p1 is a ….ptr to a const Fred…. Similarly,
Fred const & r1 — means r1 is a ….ref to a const Fred….
Fred * const p — means p is a ….const ptr to a Fred….
Fred const * const p — means p is a ….const ptr to a const Fred….

Now we are ready to deal with func declarations.
#2 simple rule — identify var declarations embedded in func declarations. Essentially look at return type declarations and parameter declarations.

Fred const& operator[] (unsigned * const index) const; ← subscript operators often come in pairs

– return variable Fred const& — ref to a const Fred
– param unsigned * const — const ptr to an unsigned, just like a final Object parameter in java — you can’t reseat this ptr.

You can test most of these easily in eclipse.

table constraints

For simple data validation like Price > 0, you can implement the check in the application loading into the table, or in the application reading from the table, or you can use table constraints.

* reusable — What if another source system (beside LE) needs to load into this table? Table constraints are automatically reusable with zero effort.

* testing — Table constraints are so simple and reliable that we need not test them.

* Table constraints are easier to switch on/off. We can drop/add each constraint individually, without source code migration and regression tests

* flexible — we can adjust a table constraint more easily than in application. Drop and create a new constraint. No source code change. No test required. No regression test either.

* modular — table constraints are inherently more modular and less coupled than application modules. Each constraint exists on its own and can be removed and adjusted on its own. They don’t interfere with each other.

* risk — Table constraints are more reliable and there will be less risk of bad data affecting our trailers. Validation in application can fail due to bugs. That’s why people measure “test coverage”.

* gate keeper — Table constraints are more reliable than validations in application. They are gate keepers — There’s absolutely no way to bypass a constraint, not even by bcp.

* visible — Table constraints are more visible and gives us confidence that no data could possibly exist in violation of the rule.

* data quality — You know …. both suffer from data quality issues. They know they should validate more, but validation in application code is non-trivial. The reality is we are short on resources.

* if we keep a particular validation as a table constraint, then we don’t have to check it inside the loader AND again in the downstream trailer calculator (less testing too). You mentioned a reusable validation module; this way we don’t need any.

* hand-work — In contingency situations, it’s extremely valuable to have the option to issue SQL insert/update/bcp. Table constraints still offer some data validation. From my experience, I have not seen many input feed tables that never need hand work. I believe within 6 months after go-live, we will need hand insert/update on this table.

* Informatica — Informatica is a huge investment waiting for ROI and we might one day consider using it to load lot data. Table constraints work well with Informatica.

* LOE — The more validation we implement in application, the higher the total LOE. That’s one reason we have zero constraint in our tables. We are tight on resources.

* As a principle, people usually validate as early as possible, and avoid inserting any invalid data at all. Folks reading our tables (perhaps from another team) may not know “Hey, this is a raw input table so not everything is usable.” Once we load stuff into a commissions table, people usually think it’s usable data. Out of our 100+ tables, do you know which ones can have invalid data?

sticky strike vs sticky delta

http://en.wikipedia.org/wiki/Volatility_smile#Evolution:_Sticky is brief and clear.

Heuristics show —

Equity vol smile curve is typically sticky-strike as spot moves. When we re-anchor a surface to a new spot, the curve plotted against absolute strike looks unchanged, but the curve plotted against delta would shift. I think the curve plotted against relative strike would shift too. I don’t think it’s common to plot the equity vol smile against delta.

FX vol smile curve is typically sticky delta. When we re-anchor a surface to a new spot rate, the curve plotted against delta would stay fairly stable.