c++iterate sequence, grouping every 4 items

https://github.com/tiger40490/repo1/blob/cpp1/cpp/binTree/serialize_deser.cpp has working solution that I came up myself

for(int i=0; getline(myStringstream, token, ','); ++i){
      v.push_back(token);
      if (i%4 < 3) continue; // keep collecting until we have 4 fields
      string id=v[0], idLe=v[2], idRi=v[3];
      Data d=stoi(v[1]);
      // ... rebuild one tree node from the 4 fields ...
      v.clear();
}

 


## Are those leetcoders really stronger@@

You wrote “I think many leetcoders are fresh graduates, or young people, who have more time and energy. I know I cannot compete with them …”

Q: Are those leetcoders really stronger in coding tests? Generally yes, but it depends on the type of coding question.

  • For a weekend take-home assignment …. I am not sure they would produce better quality code than us (I’m not very good either). Code smells matter to my past interviewers.
    • They may not run as many tests as we do.
    • I tend to spend 200%+ more time on the assignment
  • For white-board ….. or remote dumb editor pair programming (west coast favorite), they may not be able to explain their thought process as well as I do.
    • My 2007 Google white-board interview used pseudo code, so the leetcoders’ advantage would be much lower.
  • For completely original questions created by a hiring manager and sent via the HackerRank platform, leetcoders may not be fast enough to solve them, since leetcode problems can’t include every newly invented problem.
    • I hit two problems related to shortest-path algorithm, which can be similar to leetcode problems. I also hit very unusual problems.
  • For multi-threading coding questions, leetcode doesn’t help.
    • hit me at least 8 times — UBS, Barcap, Gelber, Bbg ..

4 technique gains { coding drill #skill-improvement

An essential review can be good once in a while, but over-analysis can reduce our motivation for practice:(

See also my google-sheet on codingIV types and skills required —

Note I don’t target a skill like “passing all Leetcode tests” since those tests require a very refined algorithm, but most coding tests focus on just a decent algorithm with normal test cases. See my letter sent to friends about “hard problems”

| benefit | quick-n-dirty | syntax+idiom | other parts of ECT best practices |
|---|---|---|---|
| Can we remember after X years? | 3Y | #highly specific items unsure | 4Y |
| Did we improve through past IV? | y; they are wake-up calls | | |
| Can we improve it on the job? | actually no | rarely | slightly |
| important in short codility IV? | crucial | y | n |
| important in white-board pair-coding? | secondary | FB only, not GOOG | n |
| important in long take-home assignment? | n | n | y |

—-#1 benefit: syntax+idioms. See ##examples@syntax learning via mock coding IV 

I also realized my data structure syntax grasp is insufficient with STL etc. In timed coding tests I had no time to search online. I feel STL syntax is messier.

—-benefit: ECT speed of completion.

I was able to complete many timed online tests but was still rejected. Still, I did better than before.

—-benefit: best practices — designs, code smells

—-benefit: key ideas (sometimes insight) into a classic problem. A real insight is powerful and reusable. This is mostly useful when we hit a “previous problem”. See https://bintanvictor.wordpress.com/2018/06/09/study-top-100-common-coding-challenges-to-learn-key-techniques/

This is the #1 benefit to XR.

I feel I hit something similar to previous problems about 50% of the time.

realistic+significant Legacy as a developer..possible@@

See also %% Original contributions as a Wall St techie

Background — I often feel there’s no (social) value in my career.

Q: How about system architect or app-owner in a big firm?

I feel a successful and stable app can survive 5 to 20 years before it gets replaced.

Q: How about documentation for an (open source or otherwise) long-living technology? Documentation can even take the form of StackOverFlow QnA.

This “legacy” may not last very long as better answers could supersede your answer any time. In fact, as the topical technology evolves, your answer is doomed to become outdated gradually. Even a very popular computer book becomes outdated over 20 years.

Also, this doesn’t feel realistic for me.

[18] y I don’t like leetcode+similar sites #XR

wipe-out — I would say this letter described something like a wipe-out experience…

Hi XR,

I may need your advice here. On 23 June I tried one medium level question on LeetCode. (Before Bloomberg interviews, I tried the Bloomberg CodeCon website for a few days, with similar experience, but there’s no posted solution 🙂

To my dismay, it took me 4+ hours because I hit lots of “devils in the implementation details”.

* I dare not submit my code and run LeetCode test cases because I am scared to see tons of failures, a huge discouragement.
* I am tempted to look at the top solutions, but they are likely much cleaner and better than mine, a huge discouragement.
* Until I pass all test cases, this question is not done and I would feel guilty giving up. In contrast, my own coding projects can be 50% done and I can leave them as is. I would not feel like a quitter. My own projects have no standard test cases to show I’m 50% done or 90% done. You could say LeetCode acceptance criteria are uncompromising and uncomfortable.

Since I don’t want to be a quitter, I must keep spending hours and hours on this one question. The longer I go, the more tired and I just feel increasingly frustrated that I can’t complete even one question. In my own projects, I could give up without shame or guilt.

* Even if I were to complete a few questions 100%, I would not feel proud because other coders have completed a hundred questions, a huge discouragement.

These discouragements would presumably destroy the precious joy of coding. This “joy of coding” is precious because it’s so hard to get — I need plenty of energy, plenty of quiet time with background music, and a positive mood to take on tough challenges …. Each hour of coding practice easily consumes twice the energy of other types of learning.

Here are other drawbacks to LeetCode:

On the LeetCode code editor, I’m unable to run my own small tests until I have a complete solution. In contrast, in my own coding projects, after I have a small module written I can test it, get some immediate satisfaction and make progress on that module. On the LeetCode site, I feel like digging and digging in darkness, until I see some light at the end of the tunnel.

In conclusion:
* skill improvement is not higher than my own coding practice
* satisfaction, positive feedback .. is much lower
* stress is higher

Therefore, I don’t look forward to doing it again. To continue my coding practice, I would stay away from these web sites.

Then something hit me — I now realize my son is not as lazy as I thought. As a Chinese father I automatically feel he is not putting the same amount of effort as other kids in his school. But looking at myself .. Am I putting in the same amount of effort as LeetCoders? If we compare to the wrong peer group, we inevitably feel lazy, inferior, inadequate, sub-standard. Such a comparison is not the best way to motivate our kids. It is erosive, hurtful, counter-productive.

In the long run, his classmates who put in more effort doing math problems now may not do better than him. They might do better in standardized exams, but what about university level (no more standardized tests) ?

specify(by ip:port) multicast group to join

http://www.nmsl.cs.ucsb.edu/MulticastSocketsBook/ has zipped sample code showing

mc_addr.sin_port = htons(thePort);

bind(sock, (struct sockaddr *) &mc_addr, sizeof(mc_addr)); // set the group port, not local port!
—-
mc_req.imr_multiaddr.s_addr = inet_addr("224.1.2.3");

setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
(void*) &mc_req, sizeof(mc_req)); // set the group IP by sending an IGMP join-request

Note setsockopt() actually sends a request!

====That’s for multicast receivers.  Multicast senders use a simpler procedure —

mc_addr.sin_addr.s_addr = inet_addr("224.1.2.3");
mc_addr.sin_port = htons(thePort);

sendto(sock, send_str, send_len, 0, (struct sockaddr *) &mc_addr, …

[18] weekly coding drill plan

  • No need to pass all the tests on leetcode! See tried 3 “hard” leetcode Q’s #tests100%
  • focus on effort, the process of learning
    • not the result like how many questions I have “learned”. Result is unreliable to measure.
  • time slot? mostly weekends. Weekday evenings are probably tough but I did it many times
  • time quantum? Ideally 30-90 min, not 3 hours like in the past.
  • interview format to target?
    1. codility
    2. white-board
    3. take-home?
  • skills to target
    1. pure algo ideas
    2. ECT
    3. syntax
  • select the top 50 classic problems or contrived but popular questions.
  • don’t waste time on questions with many “dislikes”.
  • level? Some past problems are very hard. We could still try those “hard” problems when we feel engaged and confident to grow through them. If too hard, just read the solutions straightaway.
  • question pool? retry — past questions in blog. 温故而知新
  • question pool? Leetcode — if the problem is engaging and classic, even if easy
  • language? py or c++ are easier (than java) to test on my laptops
  • target? 60 minutes a week (better than 0 minutes as in the past). This includes coding, blogging, email discussions
  • how many weeks? 3, starting from late Apr 2019
  • involve friends to increase the engagement — CSY, Ashish, XR, YH, Deepak, Alok

At what juncture would kernel scheduler run@@

The kernel scheduler has an algorithm and is therefore implemented as a sequence of instructions. You can think of it as some black-box function/routine.

I think it is Not really a long-running background process. In Linux, I believe it is an on-demand routine, but not run on behalf of any process.

Background — Many familiar on-demand kernel routines do run on behalf of an “owner process” —

  • accessing a file or socket
  • accessing some device such as NIC
  • accessing memory

However, other on-demand kernel routines (often interrupt handlers) do not have an “owner process”. Here are some routines —

  • reacting to timer interrupts
  • reacting to low-level emergency hardware interrupts like …. ?

So the scheduler is a classic example. I think scheduler can get triggered by timer interrupts. See P 350 [[linux kernel]]

%%data science CV

  • verizon — support vector machine
  • uchicago ?
  • PWM cost accounting analytics
  • chartered statistical data analysis, data cleansing, curve fitting ..
  • stirt risk analytics, curve building
  • stirt cloud computing
  • barclays high volume …
  • nyse high volume data parsing, analysis, big data storage, parallel computing …
  • AWS

price sensitivities = #1 valuable output of risk-run

[[complete guide]] P433, P437 …

After reading these pages, I can see that per-deal PnL and mark-to-market numbers are essential, but to the risk manager, the most valuable output of the deal-by-deal “risk run” is the family of sensitivities such as delta, gamma, vega, dv01, duration, convexity, correlation to a stock index (which is different from beta), ..

Factor-shocks (stress test?) would probably use the sensitivity numbers too.

In Baml, the sensitivity numbers are known as “risk numbers”. A position has high risk if it has high sensitivity to its main factor (whatever that is.)

VaR can overstate/understate diversification benefits

| scenario | understate the curse of concentration | overpraise a diversified portfolio |
|---|---|---|
| mathematically | definitely possible | probably not |
| correlated crisis | yes, possible, since VaR treats the tail as a black box | yes; the portfolio becomes highly correlated, not really diversified |
| chain reaction | possible; actually, a chain reaction is still better than all-eggs]1-basket | yes; diversification breaks down |

Well-proven in academia — VaR is, mathematically, not a coherent risk measure as it violates sub-additivity. Best illustration — two uncorrelated credit bonds can each have $0 VaR, but as a combined portfolio the VaR is non-zero. The portfolio is actually well diversified, but VaR would show risk is higher in the diversified portfolio — illogical, because the individual VaR values are simplistic. This is a flaw of the mathematical construction of VaR.

Even in a correlated crisis, the same could happen — based on probability distribution, individual bond’s 5% VaR is zero but portfolio VaR is non-zero.

A $0 VaR value is completely misleading. It can leave a big risk (a real possibility) completely unreported.

[[Complete guide]] P 434 says the contrary — VaR will always (“frequently”, IMHO) say the risk of a large portfolio is smaller than the sum of the risks of its components so VaR overstates the benefit of diversification. This is mathematically imprecise, but it does bring my attention to the meltdown scenario — two individual VaR amounts could be some x% of the $X original investment, and y% of $Y etc, but if all my investments get hit in GFC and I am leveraged, then I could lose 100% of my total investment. VaR would not capture this scenario as it assumes the components are lightly correlated based on history. In this case, the mathematician would cry “unfair”. The (idealized) math model assumes the correlation numbers to be reliable and unchanging. The GFC is a “regime change”, and can’t be modeled in VaR, so VaR is the wrong methodology.

interrupt+signal ] kernel

There are precise answers in the kernel code, but here are my high-level, imprecise pointers, based on [[linux kernel]]:

  • Based on my “adapted Intel terminology”, interrupts from cpu are known as cpu Exceptions, whereas interrupts from other devices are known as hardware interrupts
    • interrupting hardware includes hardware clocks, keyboard, NIC and other I/O devices.
    • cpu exceptions are generated by problematic/erroneous instructions
  • LG2 — the “software interrupts” concept is not so well-defined, and is used primarily for 1) notifying the debugger 2) implementing system calls

Both hardware interrupts and cpu exceptions can generate SIG* signals. Signals are much higher-level constructs than the hardware-level interrupts. Signal as a concept is 50% kernel 50% UserMode:

  • A signal is always addressed to a user process.
  • signal delivery is kernel’s job; signal handling is user process’s job

I feel interrupts^signals are somewhat like cells^molecules — at different levels!

swap on eq futures/options: client motive

Q1: why would anyone want to enter a swap contract on an option/futures (such a complex structure) rather than trading the option/futures directly?

Q2: why would anyone want to use swap on an offshore stock rather than trading it directly?

More fundamentally,

Q3: why would anyone want to use swap on domestic stock?

A1: I believe one important motivation is restrictions/regulation.  A trading shop needs a lot of approvals, licenses, capital, disclosures … to trade on a given futures/options exchange. I guess there might be disclosure and statutory reporting requirements.  If the shop can’t or doesn’t want to bother with the regulations, they can achieve the same exposure via a swap contract.

This is esp. relevant in cross-border trading. Many regulators restrict access by offshore traders, as a way to protect the local market and local investors.

A3: One possible reason is transparency, disclosure and reporting. I guess many shops don’t want to disclose their positions in, say, AAPL. The swap contract can help them conceal their position.

how is mkt data used ] buy-side FI analytics@@

This is a BIG bond asset manager… They use 2-factor HJM model, among others.

They use EOD market data for risk measure + risk sensitivity calculations. No real time.

Models were written by 40+ quants untrained in c++. The 16-strong IT team integrates the models

I asked “Do you use liquid fixed income market data mostly to calibrate models and use the model to price illiquid instruments?”

A: both

  • To calibrate model — every day, as explained in [[complete guide]] P436
  • To derive valuation directly on existing positions if the instruments are comparable (between the ref-data instrument and the position instrument)

kernel bypass : possible usage ] RTS

Partially hypothetical usage scenario/proposal.

“Bypass” means .. bypassing standard kernel functions and using faster, lighter firmware instead.

“Bypass” means .. every network packet would go straight from NIC to user application, without passing through tcp/ip stack in the kernel.

“Bypass” probably means bypassing the socket buffer in kernel

Background — Traditional packet processing goes through the tcp/ip software stack, implemented as a family of kernel functions. Whenever a network packet is received, the NIC writes the packet to a circular array and raises a hardware interrupt. The i-handler (interrupt handler routine) and bottom-half will then perform packet processing using the kernel socket buffer, and finally copy the packet to a UserModeBuffer.

Note the two separate buffers. In RTS parser config file, we configure them as sock_recv_buf vs read_buf for every channel, regardless of TCP or multicast. The socket buffer is accessible by kernel only (probably unused when we turn on kernel bypass.) as kernel can’t expose a fast-changing memory location to a slow userland thread. I believe userland thread uses read() or similar functions to drain the socket buffer, so that kernel can refill the socket buffer. See [[linux kernel]] and https://eklitzke.org/how-tcp-sockets-work

With kernel bypass,

  • the Network card (NIC) has a FPGA chip, which contains the low-level packet processing software (actually firmware “burned” into fpga)
  • This firmware replaces tcp/ip kernel functions and delivers the packets directly and automatically to application UserModeBuffer. However, my parser relies more on another feature —
  • The SolarFlare firmware also lets my parser (user applications) read the NIC circular array directly. The zero-copy technique could bypass not only the socket buffer but also the UserModeBuffer.

My parser uses SolarFlare NIC for both multicast and tcp.

The kernel bypass API was only used in some low-level modules of the framework; it is disabled by default and configurable for each connection defined in the configuration file.

http://jijithchandran.blogspot.com/2014/05/solarflare-multicast-and-kernel-bypass.html is relevant.

string comparison is costly #loop-unroll`

P120 [[ algos in a nutshell ]] claims that string comparison is considered expensive.

String comparison requires byte-by-byte comparison in a loop. Sometimes people need to write this loop by hand in assembly, and sometimes unroll the loop to fill the instruction pipeline.

I have no reason to doubt the authors. I believe this technique has proved effective in practice.

factoryMethod: based on arg ^ host object

I used to believe a factory method always relies on arg to tell it what type of instance to manufacture.

Now I know you can call myFactoryInstance.makeTrade(); // no arg, but the host object provides the clue as to what type of instance to manufacture. The factory method is virtual.

See https://stackoverflow.com/questions/5739611/differences-between-abstract-factory-pattern-and-factory-method

python usage in FI quant lib #Pimco

In one of the world’s biggest fixed income buy-side firms’ quant library, the codebase is 3/4 c++ and 1/4 python including pandas, numpy, machine learning, grid computing modules. I think this is similar to the Macquarie FICC quant lib.

C++ is much faster, but data structures are very limited including STL containers.

I think the funds hold mostly bonds and mortgages. How about futures, IRS? Perhaps for hedging?

numPad problem: generator

Q (Leetcode problem 17)… Given a string containing digits from 2-9 inclusive, return all possible letter combinations (not permutations) that the number could represent.

2: abc
3: def
4: ghi
5: jkl
6: mno
7: pqrs
8: tuv
9: wxyz

Example:

Input: “23”
Output: [“ad”, “ae”, “af”, “bd”, “be”, “bf”, “cd”, “ce”, “cf”].

Output need not be sorted but I would aim to print each word as sorted and also print all words in ascending order

—–
Group all the er’s into bag2, then all the qi’s into bag7… Generate the strings for each bag independently. After that, problem becomes

Q2: given N (say 11) sets of unique strings, pick one from each set and concatenate the N strings as one output. Generate all outputs. I feel this can’t use the cookbook recipe since input is not one “pool” string but N sets. I think iterative is fine.

idea: Loop variable to keep the N indices (iterators) into each set
idea (dp + yield): generate the output for N=2. save to a collection. then take in next Set.

–yield-generator redrawC() generates …
input “88” we have 6 combos? tt tu tv uu uv vv
input “888” we have 10 combos? ttt ttu ttv tuu tuv tvv uuu uuv uvv vvv

–we need good variable names.
For the 9 digits, every digit is immediately mapped to a name string like ‘2’ -> “er” and I hope not to use the digits any more.
Java would use enum

To minimize confusion, Create typedef LOB as alias for either vector<char> or the string form. Will have 8 const LOB instances. Java would use enum

struct Bundle{
set<vector<char>> clubOfWords;
size_t repeatOfThisButton;
LOB lob; //compile-time constant
};

The utility function would be
Bundle gen(vector<char> const & lob /*lettersOnOneButton*/ , int repeat). This function is fairly simple. for er_5, we have 3^5 possible words in the club

sort input into 222223444 then create map:
“er_5” -> a bundle
“san1” -> a bundle
“si_3” -> a bundle

A major milestone is when the map is populated with the clubs. Now generate combos … better to use the “append” approach.

liquid products2calibrate model→price exotics #UChicago

Essential domain knowledge, practiced in industry and also endorsed by academia.

1) On a daily basis (or otherwise periodically) use market data to calibrate a model’s parameters. Choose the more liquid instruments …

Note if you don’t re-calibrate frequently, those parameters could become obsolete, just like database index statistics.

2) use the model to price illiquid, exotic products.

Example — In my exam/interview, Professor Yuri pointed out that callable bonds, caps and floors (yes these are options) are the liquid products with liquid market data, and useful for calibration.

deleted dtor

This is an obscure feature of c++11. As stated in my blogpost big4 can be synthesized as deleted #dtor, a class Acct can have its dtor deleted by the compiler!

So how can a program function with an object of this class?

First, note the only way to create that object is by operator-new. But compiler won’t allow us to do “delete myAcctPtr”. I think the only way compiler would even create an executable is by ensuring there’s no delete on Acct pointers.

I think the program will run until shutdown, so the Acct object on heap is never deleted.

binary search in rotated sorted array

https://leetcode.com/problems/search-in-rotated-sorted-array/description/ has the requirement. I don’t want to look at other people’s solution, so I have reproduced the requirements below. I have not encountered this problem in any coding interview.

Q: Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand. (i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]). You are given a target value to search. If found in the array return its index, otherwise return -1. You may assume no duplicate exists in the array. Your algorithm’s runtime complexity must be in the order of O(log n).

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/binSearchRoatedArr.cpp is my solution

–Solution 2:

first run a binary search to locate the pivot point. Then run an O(1) test to discard one of the 2 segments. We are left with the remaining segment as a regular sorted array. Run binary search on it.

compiler loop-unrolling could hurt i-cache

[[art of unix programming]] P290/291 advocates (presumably based on experience) bending over backward to ensure the central data structure + time-critical loop NEVER fall out of i-cache/d-cache.

I think this is a useful sound bite to impress interviewers.

Developers need to know the size of L1 cache, L2 cache, L3 cache in our hardware. Then target the central data structure to one of them. Say L2 cache. We want to size our data structure to fit in there and never break the boundary.

“Small is beautiful” applies to both code and data.

Example 1 : cpu vs i-cache (IMHO more convincing than Example 2)
Example 2 : cpu vs d-cache

In both examples, if cpu becomes fast enough then we ought to make it do more work in order to economize the caches.

— Example 1
Loop-unrolling enlarges text-section of the binary, so the additional “text” must compete for scarce instruction-cache.

[[art of unix programming]] claims that at least in some (not theoretical) cases, this compiler optimization may backfire i.e. hurt rather than improve performance.

The critical situation – suppose the loop runs in one of the extremely hot functions that need to remain in the instruction-cache, then we want to minimize code footprint.

Loading this hot function over and over from main memory can be worse than executing the original loop (without unrolling). This happens when cpu execution speed improves over the years but main memory access speed remains a bottleneck.

— Example 2
A second example “claimed” in the same paragraph, revolves around pre-computed look-up table for frequent access. This kind of optimization strategy can shave cpu cycles, but can increase the fierce competition for scarce L1 cache.

In other words, Pre-computing of a data table was designed to save cpu time but could backfire if the increased data footprint leads to data-cache misses.

We are again talking about extremely hot data that needs to remain in L1 cache. If this pre-computed data is accessed frequently, it can hog the L1 cache.

In such a case, removing the pre-computation, and re-computing each time, could be faster.

I think this problem can become real –
• If cpu is becoming much faster in recent years, or the computation gets simplified, then the cpu saving would diminish.
• At the same time, if the pre-computed data size grows larger, then the “hogging” would worsen.

ROTI@learning ED/IRS/FX-swap/repo #math-lite

Note this effort is after my basic bond math study, though I often count this effort as part of the broader “bond math” study.

Basic bond-math knowledge has robust demand on Wall St. Without hard evidence I feel ROTI is decent in basic bond math study. Q1: How is the ROTI in this study?

I feel many of the jargon terms in this space are common and essential knowledge:)

  • swap rate; comparative advantage;
  • OIS; Libor;
  • basis risk;
  • collateral;
  • curve building

However, this self-study rarely helped me:

  • MSFM course
  • Stirt job interview

Q1b: How is the market depth and robust demand of this skill?
A: not used much in the trading buy-side, but some asset management and most sell-side do need this know-how.

Note this topic is generally math-lite and much simpler than option math, so I was able to self-study:) See fixation@ROTI…dampens job satisfaction+joy@learning

Q2: how is the knowledge retention rate?
A2: decent. Thin->thick yes but not yet thick->thin

in c++11 overload resolution, TYPE means..

In the presence of rvr, compiler must more carefully choose overloads based on type of argument and type of parameter.

  • I think type-of-parameter is the declared type, as programmer wrote it.

No change in c++11. No surprise.

  • “argument” means the object. Type-of-argument means…?
  1. Traditionally it means int vs float
  2. Traditionally it means const vs non-const
  3. (Traditionally, it means Acct vs TradingAcct but this is runtime type info, not available at compile time.)
  4. In c++11, it also means rval-obj vs regular object. You can convert regular object to a rval-object via … move(). But What if argument is a rval-variable, declared as “int&& param” ?

This is one of the trickiest confusions about rvr. The compiler “reasons” differently than a human reasons. [[effModernC++]] tried to explain it on P2 and P162 but I don’t get it.

Yes the ultimate runtime int object is an rval-obj (either naturally-occurring or moved), but when compiler sees the argument is an “int&& param”, compiler treats this argument as lvalue as it has a Location !

My blogpost calling std::move() inside mv-ctor  includes my code experiments.

FRA^ED-fut: actual loan-rate fixed when@@

Suppose I’m IBM, and need to borrow in 3 months’ time. As explained in typical FRA scenario, inspired by CFA Reading 71, I could buy a FRA and agree to pay a pre-agreed rate of 550 bps.  What’s the actual loan rate? As explained in that post,

  • If I borrow on open market, then actual loan rate is the open-market rate on 1 Apr
  • If I borrow from the FRA dealer GS, then loan rate is the pre-agreed 550 bps
  • Either way, I’m indifferent, since in the open-market case, whatever rate I pay is offset by the p/l of the FRA

Instead of the FRA, I could go short the eurodollar futures. This contract is always cash-settled, so the actual loan rate is probably the open-market rate, but whatever market rate I pay is offset by the p/l of the futures contract.

Q: bond price change when yield goes to zero

Can bond yield become negative? Yes: in 2015-2017 many bonds traded at negative yields. https://www.ft.com/content/312f0a8c-0094-11e6-ac98-3c15a1aa2e62 shows a realistic example of a vanilla bond trading at $104. Yield is negative: you pay $104 now and will get $100 repayment, so you are guaranteed to lose money.

Mathematically, when yield approaches negative 100 percent, price goes to infinity.

When yield approaches zero, bond price would go to the arithmetic sum of all coupons + repayment.

spare time as slack resource #blogging,parenting,localSys

In 2017 I wrote to Gerald Robinson that spare time depletion would become one of the key challenges/pains of my forthcoming/upcoming adjustments.

For many (probably most) of my peers, if they don’t feel the drive to keep learning in their spare time, the resource depletion doesn’t feel so harmful, hazardous, stressful.. Besides this “learning”, I also have a need for quiet time for reflection.

— commute — Bayonne commute is tolerable only when I have plenty of spare time as “slack resource”.

— Parenting — (esp. my son) became a huge commitment and strain on my slack resources.

— Additional paid annual leave is supposed to be a slack resource but I think it doesn’t reduce the pressure to perform

… but I did spend these hours (weekends + evenings) in office to learn localSys, provided I’m engaged, in the mood.

Need to think harder about how to turn these large quantities of high-quality spare resource into stress-relief. These resources feel like untapped oil reserve.

spreadsheet concretize #Junli Part2

Note the java algo is event-queue based — every newly concretized cell is an event added to the event queue. When we encounter this cell again after a dequeue, all registered dependents of this cell are checked. If the check results in a new cell being concretized, that cell is enqueued.

In contrast, my c++ algo is a modified BFT. The key idea is: whenever a branch node can’t be concretized (due to an unresolved upstream reference) we basically ignore that node’s subtree. The other root nodes’ BFT would eventually visit this node, unless there’s a cycle.

I believe both algorithms are relatively easy to visualize at a high level. Which algo implementation is more tricky and error-prone? I guess the BFT but not really sure.

— Topo-sort — “topological sorting” is the reusable general technique for similar problems like event scheduling. As I described to Kyle, the idea is “Given a directed graph, assign (artificial) integer ranks to all nodes so that every arrow is from a low rank to a high rank”

There are linear time algorithms to assign the ranks. I think some form of BFT may work… need more thinking.

I think it depends on what’s more natural — start from leaf nodes or start from root nodes. The start level would use lower integers.

For a typical spreadsheet, I feel it’s natural to start from nodes that have no downstream.

My c++ implementation was similar to Kahn’s algorithm.
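The C++ implementation isn’t reproduced here, but Kahn’s algorithm itself fits in a few lines. Here is an illustrative Python sketch (the cell names and the topo_order name are mine, not from the repo); every upstream cell must also appear as a key:

```python
from collections import deque

def topo_order(deps):
    """deps maps each cell to the set of upstream cells it references.
    Returns the cells ordered so every cell follows its upstreams
    (Kahn's algorithm). Raises on a cyclic reference."""
    # build downstream adjacency + in-degree (count of unresolved upstreams)
    dependents = {cell: [] for cell in deps}
    indegree = {cell: len(ups) for cell, ups in deps.items()}
    for cell, ups in deps.items():
        for up in ups:
            dependents[up].append(cell)

    ready = deque(c for c, d in indegree.items() if d == 0)  # no upstreams: concretizable now
    order = []
    while ready:
        cell = ready.popleft()
        order.append(cell)
        for child in dependents[cell]:
            indegree[child] -= 1
            if indegree[child] == 0:      # all upstreams concretized
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("cyclic reference among cells")
    return order

# A1 is a literal; B1 references A1; C1 references A1 and B1
print(topo_order({'A1': set(), 'B1': {'A1'}, 'C1': {'A1', 'B1'}}))  # -> ['A1', 'B1', 'C1']
```

This assigns the “artificial ranks” implicitly: a cell’s position in the output is its rank, and every arrow goes from an earlier position to a later one.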

[[Algorithms]] P 550 presents an elegant DFT algo, but it is not so intuitive to me yet. I think the DFT can’t solve this spreadsheet.

–Some additional notes on the c++ implementation

  • 🙂 I encountered far fewer seg-faults than in other projects. I think it’s because very few arrays (including vectors) are used.
  • Before I look up a map/set, I always use count() to verify.
  • 🙂 I didn’t need to check against end() as lower_bound() and find() functions would require.
  • no smart ptr needed. container of raw ptr worked well. See source code comments
  • In fact, container of cell names (as strings) is even “safer”.

python IV: xp-verification^QQ questions

Many CVs include claims like — "used py for sys automation". If an interviewer wants to verify such claims, there are too many realistic IV questions to choose from.

Too many, so I want to be selective. Many verification questions require extensive learning (and periodic re-learning to compensate for low usage). See [14] mock/realistic python IV

In fact, I do come with wider-ranging python project experience than most candidates. If out of 30 "verification questions" I struggle with 50%, then other candidates probably /fare worse/.

If you look at java/c++ interviewers, they don’t ask so many "experience verification" questions. On the contrary, QQ questions by definition means "not required in projects"

min-cost partitioning #c++Flex #rare

Q: You will be given a natural number array and a threshold value T. The threshold represents the maximum length of subarrays that may be created for the challenge. Each sub-array you create has a cost equal to maximum integer within the sub-array. Your challenge is to partition the entire array into sub-arrays no longer than the threshold, and do it at minimum cost.

Function Description
Complete the function calculateCost in the editor below. The function must return an integer denoting the minimum cost of partitioning the array.

calculateCost has the following parameter(s):
a[a[0],…a[n-1]]: the integer array to be divided into sub-arrays
k: the threshold value (the T above), i.e. the maximum size of any sub-array

Constraints
• 1 ≤ n ≤ 5000
• 1 ≤ k ≤ 500
• 1 ≤ a[i] ≤ 100000

For example, for T=2 and original array {1,5,2}, you have two ways to partition it:

  • {1} {5,2} total cost = 1 + 5 = 6 (this is lowest cost)
  • {1,5} {2} total cost = 5 + 2 = 7

— My greedy AlgoAA:

Update: thanks to XR here is an edge case to break AlgoAA: {49,50,99,0,98}

I will use the terms “group” and “subarray” interchangeably. A lone wolf is a group of one node.

I would first identify the global peak value, like 99. Should this node be a lone wolf? No. I can prove that it should “absorb” a neighbor node and become a subarray of two [1]. Should it absorb a 3rd node? I think I can again prove that it should. Therefore my greedy algorithm would first create a subarray of size K around the peak, leaving behind a left segment (and also a right segment), where we apply the same greedy algorithm.

[1] my informal proof — suppose the left neighbor has value 6 and is a lone wolf in the final grouping. We can improve this final grouping by merging this node with the peak. Total cost would reduce by 6. In another scenario suppose this node (value 6) is within subarray #12. Again, we can break up subarray #12, move out this “6” and merge it with the peak, without breaking any rule or increasing total cost.

So what algorithm to create the first subarray around the peak? Let’s assume k=3. There are up to 3 candidate groups, since the peak can be the first node, 2nd node or last node in its group. We can use a sliding window (of width 3) to identify the best among the candidates.

Q: why start from the peak not start from end of the array?
A: If you do, you may separate 2nd highest node from the peak, when they are adjacent. My AlgoAA would identify this situation early on, and put them in the same group.

— My greedy AlgoBB:

Each time the window slides, we compare the new window with the best window so far. The comparison is based first on the 2nd-highest value in the window; if tied, on the 3rd-highest, and so on.

I think this is not hard to implement — convert each window to a heap then compare top to bottom.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/minCostPartition_Flex.cpp is a briefly tested implementation .. 60% confident.
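Since AlgoAA has a known counter-example, a safer route is the standard O(n·k) dynamic programming: dp[i] = min over j=1..k of dp[i-j] + max(a[i-j..i-1]). Here is a Python sketch of that DP (my own naming; the linked C++ may differ):

```python
def calculate_cost(a, k):
    """dp[i] = min cost of partitioning the first i elements.
    The last sub-array covers a[i-j:i] for some j in 1..k and
    costs max(a[i-j:i])."""
    n = len(a)
    INF = float('inf')
    dp = [0] + [INF] * n
    for i in range(1, n + 1):
        seg_max = 0
        for j in range(1, min(k, i) + 1):
            seg_max = max(seg_max, a[i - j])      # max of a[i-j:i], grown leftward
            dp[i] = min(dp[i], dp[i - j] + seg_max)
    return dp[n]

print(calculate_cost([1, 5, 2], 2))            # the example above: {1} {5,2} -> 6
print(calculate_cost([49, 50, 99, 0, 98], 2))  # XR's counter-example to AlgoAA
```

The inner loop grows the candidate last sub-array leftward one element at a time, so seg_max is maintained in O(1) per step instead of rescanning the slice.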

limit-IOC ^ market-IOC

Limit IOC (Immediate-or-Cancel): Can be used for FX Spot and CFD.

An instruction to fill as much of an order as possible within pre-defined tolerances of a limit price, immediately (5 second Time-to-Live).

Unlike Market IOC orders, Limit IOC orders allow a Client to control the maximum slippage that they are willing to accept.

Under normal market conditions a Market IOC order will be filled in full immediately. In the event that it isn’t, any residual amount will be cancelled. Price Tolerance cannot be added on a Market IOC order, meaning that a client cannot control slippage.

“didn’t like my face”: we aren’t his top-favorite #bbg

Hi Deepak,

I now think there’s another reason that SIG, Bloomberg, LiquidNet, CapitalDynamics and other employers didn’t make me an offer even though I probably passed the technical screening.

In our chats, I used the generic term “didn’t like my face” as an umbrella term for several different factors. Today I want to mention a new factor – “what if this candidate takes my offer and continues to shop around?”

I believe some companies shun that risk. When in doubt, they reject. When they make an offer they want to ensure the candidate will accept. They want to see “Hey we are clearly the favorite in his mind and he is in a hurry. If we make him an offer he will likely accept right away.”

Clearly, I’m not that type of candidate. I often come across as a “job shopper”, through my non-verbal language, or even through my explicit verbal answers. For example, when asked “Why are you looking to change job” I often answer “I’m actually doing fine on my current job but there are better opportunities like the role in your company.”

Q: so you didn’t write c++ for rebus, only configured it #CSY

Q: so you didn’t write c++ for rebus (or any engine), only configured it?
A: Well, the major (I won’t use superlatives like “biggest” or “real”) challenge in my project is understanding the non-trivial legacy codebase. Once I (I won’t use “we”) reach sufficient understanding, it’s relatively straightforward to implement the required change in a “pinhole surgery”. The best change in this system is usually isolated in a few files and a few functions, among thousands.

I would say that analysis is 50% of the effort, and design, testing, debugging constitute 45% to 49% of the effort and code change is up to 5%. These percentages can vary depending on the type of change. Analysis could be 30 to 70% of the effort.

Frequently, the moment we figure out how things work inside the system, we hit that Aha moment and the problem is basically solved.

In terms of c++ work, as a team we bend over backward to minimize source code change in favor of config change, but there are exceptions —

  • I made many bug fixes.
  • Occasionally, I refactor a function as part of a change. This is unpopular in my team due to the risks. The team basically says Don’t fix something that’s not broken.
  • When we upgraded to c++11, I had to adapt my modules.
  • I added performance instrumentation, without affecting business logic
  • I often need to add logging in my local codebase as part of analysis and testing.
  • (I won’t use “we” here.)

I also need to read lots of existing code as part of analysis.

gdb: dump STL container %%experience

First let’s talk about custom containers. GDB would show the field names of an object, but frequently not the values. I guess integer values might show up, but more than half the fields are pointers (a char-array field would be easy to print, though).

If I call a function on the object, I have to be very lucky and very careful. q(->) has never worked for me so far, so I need to use q(*) to de-reference every pointer before calling a method on the pointee, and pray it works.

http://www.yolinux.com/TUTORIALS/src/dbinit_stl_views-1.03.txt works on std::map …

A simple experiment using https://github.com/tiger40490/repo1/blob/cpp1/cpp/88miscLang/containerDumpOperator.cpp

  • g++ -g theFile.cpp && gdb -iex 'add-auto-load-safe-path .' ./a.out
  • (gdb) print *(li._M_impl._M_start+1) # can print 2nd element if it’s std::string or double
    • Note before vector initialization, gdb already shows the addresses inside the vector, but some addresses are not populated. Just retry after the initialization.
  • std::unordered_map is doable:
    • (gdb) print **(tm._M_buckets) # prints first pair in a hash table bucket
    • (gdb) print *((**(tm._M_buckets))._M_next) # next pair in the same bucket
  • std::map content is harder
    • (gdb) print *(int*)(tm._M_t._M_impl._M_header._M_left+1) # prints one key
    • (gdb) print *(int*)(tm._M_t._M_impl._M_header._M_right+1) # prints another key in the pair
    • (gdb) print *(int*)((void*)(tm._M_t._M_impl._M_header._M_right+1)+sizeof(int)) #prints the value in the pair.
      • the (void*) cast is needed before we add sizeof(int). Without the cast, the pointer arithmetic would be in units of the pointee type, not bytes.
      • from the key field to the value field, we move by 4 bytes (i.e. sizeof(int), the key type) from 0x6050e0 to 0x6050e4. It’s actually easy to manually type .. print *0x6050e4
      • I suspect the _M_right pointer is seated at the “color” field. Increment to the key field?

pthread_join() retriev`return value: uncommon practice

Most pthreads programs don’t retrieve the return value via pthread_join().

https://stackoverflow.com/questions/3692591/return-versus-pthread-exit-in-pthread-start-functions has a comment by the author of boost::thread (reference implementation for c++11 thread library). He said

(3) I never use the return value of a thread in raw POSIX threads. However, I tend to use higher level facilities such as the Boost thread library, and more recently the C++0x thread library, which provide alternative means for transferring values between threads such as futures, which avoid the problems associated with memory management that you allude to.

Therefore, even though you felt it was unnatural to store the per-thread computed results in a global array, in practice it’s not bad. It is inherently thread-safe, because each thread writes only its own element, so there’s no data sharing.

That was my preferred solution, but to experiment, I also used new() to return a value to pthread_join(). Personally, I am always wary of using new() in one function and the corresponding delete() in another function … unreliable. As much as possible, I use smart pointers to manage new/delete.

https://github.com/tiger40490/repo1/edit/cpp1/cpp/thr/parallelSum_Pimco.cpp shows both solutions
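The pthreads versions live in the linked file; the “global array” pattern itself translates directly to Python. A sketch (the function and its names are mine, loosely modeled on the parallel-sum exercise):

```python
import threading

def parallel_sum(nums, n_workers=4):
    """Split nums into n_workers chunks; each worker writes its partial
    sum into its own slot of `results` -- no sharing, so no lock needed."""
    results = [0] * n_workers                     # one slot per thread
    chunk = (len(nums) + n_workers - 1) // n_workers

    def worker(idx):
        lo = idx * chunk
        results[idx] = sum(nums[lo:lo + chunk])   # writes only slot idx

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()        # like pthread_join() without retrieving a return value
    return sum(results)

print(parallel_sum(list(range(101))))  # -> 5050
```

Note join() here transfers no value at all; the per-slot array plays the role that a future would play in the higher-level facilities the quote mentions.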

Cloneable, Object.clone(), Pen.Clone() #java

A few learning points.

The Object.clone() implementation is not that important, because I should always override it in my class like Pen, but here are some observations about this Object.clone():

  • shallow copy, not deep copy, not very useful.
  • this protected method is mostly meant to be invoked by a subclass:
    • If your variable points to some Pen object whose class doesn’t implement Cloneable, and you call clone() from either the same package or a subclass, then you hit CloneNotSupportedException.
    • if your Pen class implements Cloneable, then it should override the clone() method
    • [1] The only way the default Object.clone() gets invoked is when a Cat class implements Cloneable but doesn’t override clone(), and you call clone() from either the same package or a subclass

Cloneable is a special marker interface. Without it, Object.clone() throws CloneNotSupportedException, but if you override clone() then this exception may never hit. It’s an obscure detail.

  • I think you can override clone() without implementing Cloneable, but this is tricky and non-standard.
  • You could also implement Cloneable without overriding clone() .. see [1]

java: protected^package-private

https://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html (java8) shows two nice tables:

  • There’s no more “private protected”
  • default access level is better known as “package-private” — strictly more restrictive than protected (protected is closer to public). The 2nd table shows that
    • a “package-private” member of Alpha is accessible by Beta (same package) only, whereas
    • a “protected” member of Alpha is accessible by Beta and Alphasub
    • Therefore, “neighbors are more trusted than children”

I find it hard to remember, so here are some sound bites

  1. “protected” keyword only increases visibility never decreases it.
  2. So a protected field is more accessible than default (package-private)
    • As an example, without “protected” label on my field1, my subclasses outside the package cannot see field1.
  3. same-package neighbors are local and trusted more than children outside the package, possibly scattered over external jars

For a “protected” field1, a non-subclass in the same package can see it just as it can see a default-accessible field2

Not mentioned in the article, but when we say “class Beta can access a member x of Alpha”, it means that the compiler allows you to write, inside Beta methods, code that mentions x. It could be myAlpha.x or it could be Alpha.x for a static member.

detach()^pthread_exit()^pthread_join() at end@main()

See https://stackoverflow.com/questions/3559463/is-it-ok-to-call-pthread-exit-from-main

Q: When would main() call ..join() vs ..exit()?

  • I would call pthread_join() if the main thread has something to do after the child threads complete.
  • I would call pthread_exit() if main thread has nothing to do and should leave the child threads running.
    • However, if the child thread object is destructed…
  • I would call detach() to create a daemon child thread, which would automatically terminate with host process — according to MS training.
  • I would not call any of them from main() if I want main() to bring down entire process… The normal return of main() kills all threads, unlike jvm — see Anthony Williams comment in  https://stackoverflow.com/questions/3692591/return-versus-pthread-exit-in-pthread-start-functions

For a c++11 std::thread object, the rules differ from pthreads. If a std::thread object is destructed (e.g. at the end of main()) while still joinable, it invokes std::terminate()! So before any return statement, main() needs to call child.detach(), so as to leave the child thread running.

Explained more in std::thread dtor std::terminate()

swap^cash equity trade: key differences

I now feel an equity swap is an OTC contract; whereas an IBM cash buy/sell is executed on the exchange.

  • When a swap trade settles, the client has established a contract with a Dealer. It’s a binding bilateral contract having an expiry, and possibly collateral. You can’t easily transfer the contract.
  • When a cash trade settles, the client has ownership of 500 IBM shares. No contract. No counterparty. No expiry. No dealer.

I think a cash trade is like buying a house. Your ownership is registered with the government. You can transfer the ownership easily.

In contrast, if you own a share in coop or a REIT or a real-estate private equity, you have a contract with a company as the counterparty.

Before a dealer accepts you as a swap trading partner, you must be a major company to qualify to be counterparty of a binding contract. A retail investor won’t qualify.

## good-looking designs/ideas #white elephant

I think a true architect knows the difference. The best design is often not good-looking at all, and may even look hopelessly outdated.

Not all of them are classified “white elephants”. Not all of them are “designs”.

  1. lock-free — worthwhile?
  2. multi-threading — not always significantly faster than multi-Processing. I find single-threaded mode fastest and cleanest
  3. sharedMem — not always significantly faster than sockets. I briefly discussed these two choices with my friend Deepak M, in the parser/rebus context. I feel it may not be faster.
    1. other fancy IPC techniques? I am most familiar with sockets …
  4. java generic module (beyond collections) — look impressive, can be hard to maintain but doesn’t buy us much
  5. converting a java system to c++ — doesn’t always bring significant performance gains
  6. forward() and move() instead of cloning
  7. hash-table — not always faster than RBTree
  8.  noSQL — not always significantly faster than an RDBMS with lots of RAM. I find rdbms much more reliable and well understood. The indices, temp tables, joins, column constraints, triggers, stored procs add lots of practical value that can dramatically simplify the main application. I understand the limitations of rdbms, but most of my data stores are not so big.
  9. RPC and web services? Probably necessary, but I still don’t know how reliable they are
  10. thick client? I still feel web UI is simplest

     

PendingNew^New: OrdStatus[39]

“8” has special meaning

  • in tag 35, it means execution report
  • in tag 39 and tag 150, it means status=rejected.

PendingNew and New are two possible statuses for a given order.

PendingNew (39=A, 150=A) is relatively simple. The downstream system sends a PendingNew to upstream as soon as it receives the “envelope”, before opening, validating or saving it. I would say even a bad order can go into PendingNew.

New (39=0, 150=0) is significant. It’s an official acknowledgement (or confirmation) of acceptance. It’s synonymous with “Accept” and “Ack”. I think it means fully validated and saved for execution. For an intermediate system, usually it waits for an Ack i.e. 39=0 from exchange before sending an Ack to the upstream. Tag 39 is usually not modified.

I see both A and 0 frequently in my systems, in execution report messages.

For a market Buy order, I think it will be followed by (partial) fills, but not guaranteed, because there may be no offers, or execution could fail for any reason. For a dealer system, execution can fail due to inventory shortage. I implemented such an execution engine in 95G.

I’m no expert on order statuses.

[14] mock/realistic python IV

There are many xp-verification questions listed here. I have a blogpost about their relevance.

ess=[[essentialRef]]; cook=cookbook 1st edition

Q: optimize – how did you optimize perf?
A: I didn’t need to for my scripts.
A: https://bintanvictor.wordpress.com/2012/03/11/python-some-performance-tips-andy/

Q: try/catch usage? A: only in file IO
Q: immutable – what data types are mutable/immutable? QQ topic?
Q: threading – global interpreter lock? A: never really needed threading
Q: debugger? A: I didn’t need it
Q: command line arguments stored where? A: sys.argv?

Q: xml – what xml parser did you use? ess477
Q: read a config file?
Q: logging?
Q: DB access?
Q78: exit normally? A: sys.exit()
Q78b: normally, “raise Exception” would print to stderr stream, but how do you direct that to a log file instead?
A: set sys.stderr stream. dive205 i.e. [[dive into python]]

Q: how do you handle newline diff between OS? ess114
Q: truth table? e68 (i.e. [[essentialRef]])
Q: how do you edit on windows and deploy to linux? A: samba, ftp.

— sys admin
Q: how do you pass input files to your py script? cook
A: fileinput module is convenient.

Q: how is PYTHONPATH used with sys.path?
Q: another py — how do you execute another py script? Ess89
Q: what command line options do you use often?

Q5: how do you launch an external program?
Q5b: how do you capture its output? [[cookbook]] has a slightly advanced solution
Q5c: how do you call a unix shell command? A: subprocess (or os.system for quick jobs); shutil covers file operations like cp/mv

Q: exit with an error msg? cook540
A: raise SystemExit (‘msg’)

— data structures
Q: diff between listcomp and genexp? How would you choose?
Q: split a string on multiple delimiters? cook37 explains re.split()
Q: iterating in reverse? cook119
==coding IV
Q: how do you print to a file rather than stdout?
A: e115 — print >> anOpenFile, ‘whatever’ # using the print KEYWORD
A: c144 — print (‘whatever’, file=anOpenFile) # using the print() function
Q: concat 2 lists?
Q: initialize a dict
Q: initialize a 2D array? m52 (i.e. [[perl to python migration]])
Q: walk a directory
Q: use a dict as a “seen”
Q: iterate a dict? mig87
Q: iterate a file
Q: interpolate into a string? ess115. c61 i.e. [[cookbook]]
Q: date/time usage (datetime module is OO;  time module is procedural?)
Q: trim both ends of a string? strip()
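A few of the coding-IV items above have one-liner answers. Here they are in runnable Python 3 (my own snippets, not taken from the referenced books):

```python
import re

# listcomp builds the whole list at once; genexp is lazy -- prefer genexp for one pass
squares_list = [x * x for x in range(5)]   # [0, 1, 4, 9, 16]
squares_gen = (x * x for x in range(5))    # nothing computed yet
assert sum(squares_gen) == sum(squares_list) == 30

# split a string on multiple delimiters with re.split
tokens = re.split(r'[;,\s]+', 'a,b; c  d')
assert tokens == ['a', 'b', 'c', 'd']

# concat 2 lists; use a dict as a "seen"; iterate a dict
merged = [1, 2] + [3]
seen = {}
for item in merged:
    seen[item] = True
assert sorted(seen) == [1, 2, 3]

# trim both ends of a string
assert '  hi \n'.strip() == 'hi'
print('all mini-answers passed')
```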

perl regex top3 tips #modifier /m /s

https://docstore.mik.ua/orelly/perl3/lperl/ch09_05.htm shows

The part of the (haystack) string that actually matched the (needle) pattern is automatically stored in q[  $& ]

Whatever came before the matched section is in $` and whatever was after it is in $'. Another way to say that is that $` holds whatever the regular expression engine had to skip over before it found the match, and $' has the remainder of the string that the pattern never got to. If you glue these three strings together in order, you’ll always get back the original string.

— /m /s clarification:

  1. By default, q($) + q(^) won’t match newline. /m targets q($) and q(^)
  2. By default, the dot q(.) won’t match newline. /s targets the dot.
  3. The /m and /s both help get newlines matched, in different contexts.

Official doc says:

  1. /m  Treat the string being matched against as multiple lines. That is, change "^" and "$" from matching the start of the string’s first line and the end of its last line to matching embedded start and end of each line within the string.
  2. /s  Treat the string as single line. That is, change "." to match any character whatsoever, even a newline, which normally it would not match.
  3. Used together, as /ms, they let the "." match any character whatsoever, while still allowing "^" and "$" to match, respectively, just after and just before newlines within the string.
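Python’s re module offers direct analogues: re.M corresponds to perl’s /m and re.S to /s, which makes the rules above easy to verify:

```python
import re

text = "cat\ndog"

# default: ^ and $ anchor only at the very start/end of the whole string
assert re.findall(r'^\w+$', text) == []
# re.M (like perl /m): ^ and $ also match at embedded newlines
assert re.findall(r'^\w+$', text, re.M) == ['cat', 'dog']

# default: the dot refuses to match a newline
assert re.search(r'cat.dog', text) is None
# re.S (like perl /s): the dot matches any character, newline included
assert re.search(r'cat.dog', text, re.S) is not None
print('re.M and re.S behave as the perl doc describes')
```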

loading .gdbinit

My experiments show that $HOME/.gdbinit is discovered. I actually changed the $HOME env variable:)

However, I hit

  warning: not using untrusted file "/v/global/user/b/bi/bint/.gdbinit"

, even though I added q(gdb -iex ‘set auto-load safe-path …’). I guess the warning comes up before the -iex option takes effect

  gdb -ix '/path/to/.gdbinit' # also failed due to untrusted file
  g++ -std=c++0x -g dump.cpp && gdb -iex 'add-auto-load-safe-path .' ./a.out # working


pink sheets #learning notes

The pink sheets are a stock quotation service on unlisted stocks.

  • Many are penny stocks, trading for extremely low prices,
  • some are legitimate foreign companies that don’t wish to file reports with the SEC.
  • … There’s less regulation, less transparency, more risk of fraud in these stocks.

OTC Markets Group offers this service.

PinkSheet stocks are considered non-hedgeable in some swap dealer systems. I guess liquidity is too low.

https://www.fool.com/knowledge-center/what-are-the-pink-sheets.aspx is good intro.

ETF mkt maker #RBC

The market maker (a dealer) does mostly prop trading, with very light client flow.

Q1: creation/redemption of units?
A: yes the ETF market maker participates in those. When the dealer has bought lots of underlier stocks, it would create units; when the dealer has bought a large inventory of units, it would redeem them (convert to underliers)

Q1b: what’s the motivation for dealer to do that?
A: there’s profit to be made

Q3: restrictions on short position held by a dealer?
A: there are restrictions on how long you can hold a short position without a borrow (stock loan). For regular investors it could be a few days or within 0 sec. For a market maker, it is definitely longer, like 5 days

Q3b: how about size of the short position?
A: probably not. However, if a dealer has a huge short position and looking for a borrow, the stock loan could be very expensive.

Q: how is the bid or ask price decided in the market maker system? Is it similar to the citi muni system? In a competitive, highly liquid market, demand is sensitive to price.
A: fairly simple because the underliers’ bid/ask are well-known and tight. For a bond ETF, the spread is bigger.
A: inventory level in the dealer’s account is another factor
A: pressure in the market micro-structure is another factor. If you see heavy bidding and few offers, then you may predict price rise

gdb symbol-loading too time-consuming

After I attach gdb, it immediately starts a prolonged symbol loading process. It’s better to skip the loading, and selectively load some symbols.

https://ascending.wordpress.com/2007/09/02/a-couple-of-gdb-tricks/ describes how to use ~/.gdbinit, but I had no permission to write to the ~/

gdb -iex 'set auto-solib-add off' …. # worked

–loading a particular *.so file

I got “No loaded shared libraries match the pattern” and fixed it by

shar file.name.so # q(shar) abbreviates gdb’s “sharedlibrary” command; bare name instead of /full/path/to/file.name.so.some.version

fiber^thread, conceptually #shallow

I believe the concept of fiber is not standardized across languages. Here are some general observations

  • fibers are unknown to kernel. They are similar to userland threads that are implemented in userland thread libraries, rather than implemented by kernel and system calls.
  • like userland threads, a fiber adds less load on the kernel. See [[pthreads]]
  • diff: fibers are even more light-weight than threads
  • diff: fibers are usually short-lived, perhaps similar to tasks
  • diff: fibers have smaller stacks
  • a few languages (I’ve only heard this about scala) support millions of concurrent fibers on one OS. For threads, with an IO-heavy workload, you can probably run tens of thousands of threads on a single JVM.

exposure:=semi-automatic(shallow)Accu #$valuable contexx

Opening eg — In RTS team, granted I didn’t get deep[2] socket experience or latency /engineering/ experience, but over the years semi-automatically I would get some valuable exposures, by raising good questions about .. sockets; reliable order-book replication; error recovery; throughput engineering…

eg — in mvea team, I can get some valuable exposures to FIX; large scale and reliable equity OMS; low-latency (caching); order routing; automatic hedging; partial fills; limit orders; order cancels/amends; busts… Even if I don’t get deep experience on this job, my resume could claim genuine experience! Strategic positioning … (shallow) Accumulation

eg — in citi-muni, I got exposure to mkt-making; event-driven limit order repricing; PnL roll-up; mark-to-market; swing; JMS; Gemfire…

Key points about the “pattern”:

  • thanks to the strategic contexx, you get to accumulate (semi)automatically
  • robust commercial value in the skill
  • shallow [2] accumulation — I call it “exposure”, enough to impress some interviewers.
  • [1] small amount of effort — much lower than GTD, getting a job/certificate, losing weight
  • consistent effort ..

However, as the years go by, many developers stop digging with questions and others ignore the opportunities to dig into the difficult codebase because … they don’t have to:(. The automatic learning is a valuable option if you put in some legwork [1]. In contrast, some jobs don’t offer much automatic learning —

  • OC team: not so mainstream. I could still learn some WCF; reliable windows servers;
  • Qz team: poor portability. I could still learn some curve building; ticking risk;

[2] In contrast, here are examples of “deep” experience (hopefully serving as a protective entry barrier ) —

  1. from-scratch (95G) wait/notify solution
  2. from-scratch (95G) sybase stored proc to manage inventory in the face of competing orders
  3. home-prj order book replication in 2 coding interviews — Jump + iRage
  4. home-prj FIX client/server https://github.com/tiger40490/repo1/tree/jProj/java/com/tanbinFIX
  5. home-prj swing GUI to auto-update a table viewer

 

max path sum problemS #high fan-out/fan-in

Note “path” implies directed (not bidirectional) edges, i.e. a DAG

  • variation (rare): bi-directional edges
  • variation: single origin, single destination
  • variation: single origin, any destination among 55 leaf nodes
  • variation: each node has a +/- value
  • variation (rare): each edge has a +/- value. Edges usually outnumber nodes, as illustrated in the balloon-burst problem

–Common Special case: pyramid (see blogpost on recombinant binTree). The balloon problem has high fan-out.

I would try DP. For each node AA, determine (and store) the max cum-sum up to AA. Need to start from the layer 1 nodes (1 hop from origin), then use Layer 1 data to update layer 2 nodes.

Markov-like — For any node AA, the max cum-sum to AA only depends on the previous layer. No need to even care about a “fullpath” to AA. This property leads to dramatic *simplification*. Efficiency is a by-product of this simplification.
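For the pyramid special case the layer-by-layer DP is tiny. An illustrative Python sketch (single origin at the apex, any destination on the bottom layer; the names are mine):

```python
def max_pyramid_path(rows):
    """rows[i] holds the i+1 node values of layer i; from rows[i][j] you may
    step to rows[i+1][j] or rows[i+1][j+1].  For each node, keep the max
    cum-sum from the apex -- only the previous layer matters (Markov-like)."""
    best = rows[0][:]                           # layer-0 cum-sums (just the apex)
    for row in rows[1:]:
        prev = best
        # each node's best = its value + better of its (up to) two parents
        best = [v + max(prev[max(j - 1, 0)], prev[min(j, len(prev) - 1)])
                for j, v in enumerate(row)]
    return max(best)                            # any destination on the last layer

print(max_pyramid_path([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]))  # -> 23 (3+7+4+9)
```

The max/min index clamping handles the edge nodes, which have only one parent; everything else is exactly the “use layer N data to update layer N+1” rule described above.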

–Common Special case: non-recombinant binary tree.

I would use DFT + Kadane. Each origin-to-leaf path is handled independently. No layering.

Note Kadane algo handles +/- node values.

2 pitfalls on my accu path #portable,MktDepth..

Say you stay in one team for 5 years, hoping to accumulate expertise and insight

  • Trap 1: local system knowledge, 100% nonportable
    • eg: Qz
    • Tibrv wrappers in 95G is not so bad
  • Trap 1b: limited standardization across companies.
    • eg: cryptocurrency
    • eg: OMS framework? M b2bTradigEngine ^ SIG ^ ETSFlow
    • eg: software mkt data dissemination cf raw mkt data parsing
  • Trap 2 : poor market depth
    • eg: vol fitter

at run time, function must be predefined ] python #c++

As shown in https://github.com/tiger40490/repo1/blob/py1/py/str/testAbbr_ez.py, you can randomly shuffle your functions bodies (i.e. definitions) as long as … at run time, each function call can resolve to a function already seen by the interpreter, not a function to be defined later in the script.

Easiest way is to call main() after all function bodies. By the time main() runs and uses any of those functions, the interpreter runtime has already parsed the function body.

If you move func8() body to after the call to main() (I didn’t say “the body” of main()), then python will complain that func8 is undefined. It’s as bad as:

print var3; var3=3

(Note there’s no declaration vs definition for python functions.)

C is stricter — at Compile time (not run time), when the compiler encounters a name like func5(), it insists that a func5() declaration (or definition) has already been seen.

To satisfy the compiler, people put function declarations in include files. Function definition obeys ODR.
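A minimal self-contained demonstration of the python rule: main() mentions func8 before func8’s body appears in the file, yet the call succeeds because the name is looked up only when main() actually runs (undefined_func below is deliberately never defined):

```python
def main():
    # func8 is merely a name here; it's resolved when main() runs
    return func8()

def func8():
    # defined textually after main(), but before main() is called -- fine
    return 42

print(main())  # -> 42

# calling a name before the interpreter has executed its def fails:
try:
    undefined_func()   # hypothetical name, never defined
except NameError:
    print("NameError, as the post describes")
```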

## 2 portable skills gained{each job

These are portable, longevity skills either for IV or GTD.

+1 +2 +3 .. = subjective valuation as of 2018
[grey color = portable financial domain skill, not tech skill ]

  • NIE +1 — 1) hacking open source php
  • GS +3 higher value than all other jobs —
    • 1) java GTD 2) SQL 3) database tuning 4) Database batch processing architecture
  • Citi +1~2 — 1) bond math 2) market-maker quote pricing system “architecture”
  • 95G +2 high value over mere 5 months —
    • 1) java wait/notify 2) store-proc for trade booking 3) FIX connectivity 4) Tibrv (basics only) and MOM system architecture
  • Barc +1 — 1) option math 2) analytics library integration
  • OC +1 — 1) c# GTD 2) basic MSVS 3) quote distribution architecture
  • Stirt +0 — 1) curve building basics. This job paid reasonably well, so I won’t prefer it for a low-paying top “contexx”
  • Mac +1 — 1) serious use of standard (portable) python 2) devops including git
  • RTS +2 — 1) c++ instrumentation 2) raw mkt data feed 3) order book replication 4) bash/perl for personal automation 5) socket (basics only)
  • mvea +1 — 1) lowLatency, high volume eq trading architecture 2) c++ (not C) GTD in one of the biggest c++ trading platforms including multi-file build, gdb, crash investigation 3) advanced language features including TMP and shared_ptr

case-insensitive string search #KMP #Abhinav

[[c++cookbook]] P174 suggests preparing the needle by creating an all-upper-case copy of it, while leaving the (bigger) haystack unchanged.

I discussed this with my friend Abhinav. With std::search, we should probably upper-case the haystack as well. The std::search implementation can run M*N char-comparisons.

Even with the efficient KMP algorithm, typical complexity is M+N char-comparisons. So my solution is no worse than the authors’ solution.
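For comparison, a sketch using the predicate overload of std::search, which compares each char pair case-insensitively on the fly, so neither the needle nor the haystack needs an upper-cased copy (ciFind is my own helper name):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Case-insensitive substring search: std::search accepts a binary predicate,
// so we upper-case each char pair on the fly instead of copying either string.
std::size_t ciFind(const std::string& haystack, const std::string& needle) {
    auto it = std::search(
        haystack.begin(), haystack.end(), needle.begin(), needle.end(),
        [](char a, char b) {
            return std::toupper(static_cast<unsigned char>(a)) ==
                   std::toupper(static_cast<unsigned char>(b));
        });
    return it == haystack.end() ? std::string::npos
                                : static_cast<std::size_t>(it - haystack.begin());
}
```

This still costs up to M*N comparisons; a KMP-based variant would need the same on-the-fly comparison idea.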

##[18]orgro lens:which past accu proved long-term # !!quant

(There’s a recoll on this accumulation lens concept…. )

This post is Not focused on IV or GTD. More like zbs.

Holy grail is orgro, thin->thick->thin…, but most of my endeavors fell short. I have no choice but keep shifting focus. A focus on apache+mysql+php+javascript would have left me with rather few options.

  • —-hall of famers
  • 1) [T] data structure theory + implementation in java, STL, c# for IV — unneeded in projects
  • 2) [CRT] core java knowledge including java OO has seen rather low churn,
    • comparable to c++
    • much better than j2EE and c#
  • 3) [T] threading? Yes insight and essential techniques. Only for interviews. C# is adding to the churn.
  • 4) [j] java/c++/c# instrumentation using various tools. Essential for real projects and indirectly helps interviews
  • [C] core C++ knowledge
  • [C] GTD knowledge in perl/python/sh scripting
  • [j] google-style algo quiz — Only for high-end tech interviews. Unneeded in any project
  • [R] SQL? yes but not a tier one skill like c++ or c#
  • coding IV — improved a lot at RTS
  • ————————also-ran :
  • devops
  • [C] personal productivity scripts
  • [T] probability IV
  • [C] regex – needed for many coding interviews and real projects
  • [C] low level C skills@RTS {static; array; cStr; reinterpret_cast;  enum; typedef; namespace; memcpy}
  • [!T] bond math? Not really my chosen direction, so no serious investment
  • [!T] option math?
  • SQL tuning? not much demand in the trading interviews, but better in other interviews
  • [R] Unix — power-user GTD skills.. instrumentation, automation? frequently used but only occasionally quizzed
  • [R] Excel + VBA? Not my chosen direction
  • [jR !D] JGC +jvm tuning

–strengths
C= churn rate is comfortable
D = has depth, can accumulate
R= robust demand
T= thin->thick->thin achieved
j|J = relevant|important to job hunting

self-help industry: your life is stuck in a rut

The self-help industry’s messages (SMS) resonate with me.

  • burn or rot
  • I often feel a lack of direction
  • i often feel my spare time is not productive
  • I often feel left behind on the slow track, but most of us are, anywhere I look, including the managers.
  • I often feel I’m not living life to the full
  • I often feel I’m not growing, learning anything new, but it’s the norm
  • When I feel my life is “not that bad”, the self-help industry would question me “Really?”
    • marketable skill — i feel lucky that I moved into finance tech, but a non-finance job like telecom would be fine too.
    • marketable skill — I feel it’s good that I moved out of perl into java with MktDepth…
    • marketable skill — I feel lucky to discover personal strengths in lowLevel java/c++/threading/unix…
    • I feel good about the father’s job I’m doing
    • I feel good about my investments
    • I feel good about my Singapore home and my commute
    • I feel 80% good about my healthy lifestyle
    • (just a brief subset relevant to this topic)

[19]Cod`drill in our 50’s #CSY

  • I agree that competing with younger guys on (speed) coding tests is like a math competition or sports competition, where experience doesn’t matter. But in our chosen profession, this is a reality to be accepted.
  • I agree that on Wall St etc. there exist decent jobs with modest, reasonable coding tests. I hope coding tests do not grow in popularity, but I think they will.
  • I agree that you and I are not less intelligent than those strong candidates at coding tests. Your steadfast conviction is inspiring and uplifting. I tend to have self-doubts.

I agree that most peers at my age have stopped coding practice. Some individuals still do (Deepak is just one example). I see them as role models, just like male yoga practitioners among a sea of women. Most of my peers see coding drill as a chore, a hard job, a pain in the ass, but I’m different.

For the next 5 to 10 years I hope to find joy in coding drill. Coding drill is like jigsaw puzzles and board games — mental gymnastics to keep our brain young.

There is no shame in revisiting college-level subjects. At age 44 I still studied calculus and passed math exams with flying colors. I am proud of myself, not ashamed. I can probably help my son and my daughters with math through college. Not many dads can do that.

I will say this again — some technical interviews ask probability and SQL questions. These are college-level topics. We just accept them and prepare for these interview questions.

ExecType[150] to complement 35=8

Tag 150 shows the execution-report type, so it only (and always, according to Rahul) accompanies 35=8, not other 35=* messages.

With 35=8 alone, we won’t know what type of report this is.

The first execution report we get on a given order should show 150=0 i.e. new, not a fill. I believe only the real exchange (not the intermediate routing nodes) would send this message.

I think sometimes we get 35=8;150=A i.e. pending-new, presumably before 150=0. Why? I don’t think we need to bother now.

max-profit buy 1 sell any-bought #pimco java Q8

Pimco Java HackerRank Q8

Q8: Each minute, your trading platform allows you to either buy one share, sell any number of shares that you own (short sell forbidden), or not make any transaction at all. Your task is to find the maximum profit you can obtain with an optimal trading strategy.

I remember having issues with some HackerRank test cases — the fix was to use a 64-bit long rather than the default java int.

This problem appears to be very similar to day-trading in hindsight #Nsdq but requires drastically different thinking:

  • a single scan to the left, starting from the last price point.
  • any time there’s a left-ward drop, we have a trading opportunity!

https://github.com/tiger40490/repo1/blob/py1/py/array/maxProfit_buy1sellAny.py is my tested solution, peer-reviewed by Ashish.
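The linked python file is my tested solution; below is a C++ re-sketch of the same right-to-left scan, using 64-bit accumulation to avoid the int-overflow issue mentioned above (maxProfit is my own name, not from the HackerRank harness):

```cpp
#include <vector>

// Scan from the last price point leftward, tracking the highest price seen
// so far (i.e. to the right of the current minute). Any cheaper price to its
// left is a trading opportunity: buy one share there, sell at that future high.
// 64-bit accumulation avoids overflowing a 32-bit int on large inputs.
long long maxProfit(const std::vector<int>& prices) {
    long long profit = 0;
    int highestToRight = 0;  // assumes non-negative prices
    for (auto it = prices.rbegin(); it != prices.rend(); ++it) {
        if (*it > highestToRight)
            highestToRight = *it;            // new best selling price
        else
            profit += highestToRight - *it;  // buy here, sell at the future high
    }
    return profit;
}
```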

theoretical complexity!!guarantee4 relative-adv #STIRT

  • case: Stirt curve building, fixed income risk..
  • case: Qz technical complexities? But not theoretical, so I can’t read a book and then reason about it.
    • given more time, I will develop some in-depth understanding of Qz
  • case: code generator for python and excel add-on

I’m not so fast at penetrating cognitive barriers (the build-up of local brain power), but theoretical complexity is still my relative advantage.

Somehow, all of these cases happened in Singapore, in perm roles! I guess as a contractor I’m given relatively isolated, carved-out, well-defined projects.

[19] camp-out4cod`drill #+localSys

My most effective coding drill is less like hackathon, more like camp-out. My localSys learning is similar.

When I’m in the mood — and I know it will last a few hours up to a day — I need to shut out all distractions and capture the momentum until it subsides.

  • weekend assignment: redmart
  • weekend assignment: Jump/iRage orderbook
  • weekend assignment: Houston (quantlab?)
  • weekend assignment: GS tickDB
  • weekend assignment: DRW
  • weekend assignment: nsdq
  • Chartered unix hacking

shared_ptr {vector} for array@heap

You can’t create a raw ptr from array-new and then use it to create a shared_ptr. The final dtor of the shared_ptr club would call delete, not the required array-delete (delete[]).

I would prefer shared_ptr<vector<int>>. See https://stackoverflow.com/questions/13061979/shared-ptr-to-an-array-should-it-be-used. The vector would be allocated on heap [1]. The heap vector would be deleted by the last “club member”; this deletion triggers the (RAII) dtor of the vector, which cleans up the memory of the underlying raw array.

[1] In contrast, when we instantiate a vector object as a local object the vector “shell” including the housekeeping fields are allocated on stack. See housekeeping^payload fields: vector,string,shared_ptr

If you must use shared_ptr<int> instead, then https://www.acodersjourney.com/top-10-dumb-mistakes-avoid-c-11-smart-pointers/ shows a simple custom deleter to invoke array-delete
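A sketch of both options; makeHeapArray and makeRawArray are my own names, and std::default_delete<int[]> stands in for the hand-written deleter in that article:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Preferred: shared_ptr owns a heap vector. When the last club member dies,
// the vector dtor (RAII) releases the underlying raw array.
std::shared_ptr<std::vector<int>> makeHeapArray(std::size_t n) {
    return std::make_shared<std::vector<int>>(n);  // n value-initialized ints
}

// If a raw array is unavoidable: supply an array-deleter, so destruction
// uses delete[] rather than the (wrong) plain delete.
std::shared_ptr<int> makeRawArray(std::size_t n) {
    return std::shared_ptr<int>(new int[n](), std::default_delete<int[]>());
}
```

Since C++17, std::shared_ptr<int[]>(new int[n]) also selects delete[] automatically.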

growth factor ] string/vector/hashtable #xLang

  1. std::string
  2. vector
  3. python list
  4. ArrayList
  5. hashtables

… all have algorithms to decide exactly how much extra capacity to acquire during re-allocation. Usually capacity grows by a factor of up to 2.0.
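A small probe of vector’s policy (capacityJumps is my own helper); the exact factor is implementation-defined — 2.0 in libstdc++ and libc++, roughly 1.5 in MSVC — so don’t hard-code it:

```cpp
#include <cstddef>
#include <vector>

// Append n elements and record each distinct capacity value, revealing the
// re-allocation policy: capacity changes only occasionally, growing by an
// implementation-defined factor each time.
std::vector<std::size_t> capacityJumps(std::size_t n) {
    std::vector<int> v;
    std::vector<std::size_t> jumps;
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (jumps.empty() || v.capacity() != jumps.back())
            jumps.push_back(v.capacity());
    }
    return jumps;
}
```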

##150k@light load(GTD): which FTE#past+fut

The real deciding factor is the coworker benchmark (+ PIP/stigma). Are there managers tolerant of team members below the benchmark? Josh, Srini of Citi-muni..?

Even in a less demanding company, pressure can be high.

  • employer:  slower ibanks like Citi, UBS
  • employer:  some commercial banks like OC, BONY
  • employer:  large traditional buy side like AIG, Vanguard
  • employer:  3rd type financial firms like exchanges/ECNs, data vendors (Reuters?), financial product vendors,
  • employer:  non-finance like telcos
  • employer:  startups but they tend to use new technologies
  • less glamorous — like mkt data, back office
  • new codebase with short history —  like OC, StirtRisk
  • smaller codebase — like RTS
  • older workforce — like RTS
  • older technologies — like SQL, C, socket

## sell-side eq e-trading arch features #MS,Baml..

Mostly inspired by the MS equity order-management “frameworks”

  • message-based, not necessarily MOM.
    • FIX messages are the most common
    • SOAP messages are also possible.
    • BAML system is based on MOM (tibrv)
  • message routing based on rules? Seems to be central to some sell-side /bloated/ “platforms” consisting of a constellation of processes.
  • event-driven
    • client newOrder, cancel requests
    • trading venue (partial) fills
    • Citi muni reoffer is driven by market data events, but here I focus on equity systems
    • Stirt realtime risk is driven by market data events + new trade booking events
    • buy-side would have order-origination events, but here I focus on sell-side systems
  • market data subscription? Actually not so important to some eq trading engines. Buy-side would make trading decisions based on market data, but a sell-side won’t.

GS tech challenge !! beyond me

My GS technical challenge was tough but I conquered it convincingly! There were some technical challenges beyond me (such as the initial rule-engine rewrite project), but most were within my grasp.

Somehow, over the years a negative self-image has overshadowed the positive evidence.

  • Everyone said I was technically good enough.
  • My mortgage solution code review was well-received. Mark then organized a “seminar” where I presented my mortgage solution to more than 10 guys.
  • Yang was tough but he liked my designs in errorMemos.
  • I grasped the errorMemos code-base quickly, cleaned it up and was able to add new features quickly
  • I convinced many that I was a local technical expert in my team. Perhaps Chad was even stronger, but that doesn’t change the above fact — consider Roger Federer.

longest palindrome subsequence !! subarray #50%

This problem is probably tough but not so common, so let’s not spend too much time. If there’s an elegant idea, I should attempt the problem first, then read it. To keep things simple, just assume there are only 3 unique chars.

I feel there should be some DP solution as a shorter haystack is clearly easier than a longer haystack

====DP idea 2 (efficiency is decent but clarity is outstanding — satisfaction)
At each position i in the haystack string, we keep a Club of …. “Members” each a palindrome subseq ending at i.
Requirement: Every eligible Member must be in the Club.

For the next position after incrementing i, this DP algo builds the new Club using all earlier Clubs. We look at each [4] Member in each earlier Club. All of these Members are unique by construction 🙂 Member is a COW class, having
* an immutable length
* an immutable “seq” array of subscripts representing the subseq — no two Members have identical contents in this->seq. The last array element is always i
* an immutable hashtable (or array) of “unused chars on my left”. Each entry is {unused char -> stack of positions}. Note this stack is naturally sorted.
This local optimization eliminates the expensive backscan.

The Club class is immutable. Conceptually, there’s no reason to modify a Club once constructed.

What do we do when we examine a Member? If a Member can “grow” (rightward or both ways) to use the new char, then this Member clones itself, grows and joins the new Club. The “growth” shall update the hashtable IFF growing both ways.

The new char itself is automatically a (trivial) Member of this new Club. Crucially, this procedure is how a new palindrome subsequence gets created.

At any time, before i++, the latest constructed Club includes all the palindrome subseq ending at i.
Similar invariants hold for earlier Clubs.
This level of complete control is kinda remarkable, thanks to the elegance of this DP algorithm.

That’s a fairly complete solution, but there might exist efficiency improvements.

— pruning the ever-growing Clubs
Each Club is immutable but at every position we create a new Club.

Say Club-5 (for i=5) has 44 Members, Club-6 has 22 Members, Club-7 has 53 Members. At position 8, We need to examine each Club. However, Some of the Members across the Clubs may be hopelessly short. This can be seen when the unseen section has only 2 chars.

If we can prune the Clubs (treating them as mutable) we can hopefully reduce the total computation cost.

For this purpose, we can keep “Groups” of Members. Group-3 are all the known Members of length 3. This feature doesn’t increase run time cost.

We only need two simple variables to start pruning
* R := remaining chars = len(haystack) – i
* LeadingPack.len
LeadingPack.len – R is the pruning criterion. Any pack whose len falls below it is hopeless and gets pruned.

Because pruning has the potential to drastically reduce computation, every record breaker is good news, giving us a chance to prune many existing Members.

— An experimental, optimistic technique — The default implementation would examine every known Member but we can be lazier. As we increment i, we will focus on Group-6 i.e. the leading pack of longest Members, all length 6, across all Clubs. At a given position, when there are only 15 chars on the forward scan, One Group-6 Member might keep growing 15 steps. In that case, this Member can safely be declared the winner, without looking at the “other” Groups.

We would need to remember the original IDs of the Group-6

This is an optimistic algo.

In fact, I might pick one length-6 Member. But if this pick stops growing, then I pick another from the leading pack… backtracking? IFF all of Group-6 stop growing, then we look at Group-5.

I feel this may not work so well, since there’s a high chance that some group-1 Member has the best potential. The current length of a Member is not a good predictor of ultimate length.

If this algo works it is again using auxDS, my strength
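For contrast, the textbook O(n²) interval-DP solution (not the Club algorithm sketched above), where dp[i][j] holds the LPS length within s[i..j] — note it only computes the length, whereas the Club idea enumerates the actual subsequences:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Classic interval DP for longest palindromic subsequence.
// dp[i][j] = LPS length within s[i..j]; answer is dp[0][n-1].
int longestPalinSubseq(const std::string& s) {
    int n = static_cast<int>(s.size());
    if (n == 0) return 0;
    std::vector<std::vector<int>> dp(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; ++i) dp[i][i] = 1;       // single char is a palindrome
    for (int len = 2; len <= n; ++len) {
        for (int i = 0; i + len - 1 < n; ++i) {
            int j = i + len - 1;
            if (s[i] == s[j])
                dp[i][j] = dp[i + 1][j - 1] + 2;    // both ends extend the inner LPS
            else
                dp[i][j] = std::max(dp[i + 1][j], dp[i][j - 1]);
        }
    }
    return dp[0][n - 1];
}
```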

g++ q[-L] vs [-l] .. [-I] vs [-isystem]

-L and -l (i.e. the big and small "L") both affect the linker only.

See also MSVS linker option -L vs -l #small L

-L (similar to -I) introduces a new search directory, for *.so or *.a file search.

Q: does the order of -l options matter?
A: yes. Recall the -liberty …

Q: does the order of -L options matter?
A: yes — the directories are searched in the order they appear on the command line.


-I and -isystem both affect the preprocessor only.

There’s no “-i” option per se.

 

hunt down CORRECT include file+directory

When I get an error like “unrecognized symbol”, obviously a header file is missing.

This is a relatively easy challenge since it involves ascii source files, not binaries, so searching is fast.

  1. I start with some known include directories, and run find-grep looking for a declaration of the symbol. Hopefully I find only one declaration, in the correct header file to include.
  2. then I need to guess the correct form of #include
  3. Then I need to add the directory as an -I command-line option