where to see if a func param is optional #c++

You need to check the declaration (if any). Default values are specified in the declaration aka function prototype.

Checking the function definition, I found no clue — default values don’t appear there.
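A minimal illustration (hypothetical function, a sketch rather than anything from the original interview):

// widget.h — the declaration carries the default value
double scale(double x, double factor = 2.0);

// widget.cpp — the definition must NOT repeat the default
double scale(double x, double factor) { return x * factor; }

// a caller that sees only the header can write scale(3.14) and get factor = 2.0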


max-profit # SCB-FM

Q: maximize profit given a price series. You can buy, sell, buy, sell any number of times, each time one share. No buy-buy-sell-sell allowed.

Q: ok you said it’s O(N), can you do it in one scan?

====my answer

If price keeps dropping, then no trade possible

If price keeps rising steadily, then only one buy-sell pair

If I get peak-trough-peak-trough… then I must discard the first peak since I can’t do anything with it.
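A minimal sketch of the classic one-scan greedy — summing every positive day-to-day increment captures exactly the trough-to-peak rises described above:

#include <vector>
#include <algorithm>

// every positive delta belongs to some trough-to-peak rise that one
// (buy, sell) pair can capture, so the sum of positive deltas is the max profit
int maxProfitUnlimited(const std::vector<int>& prices) {
    int profit = 0;
    for (size_t i = 1; i < prices.size(); ++i)
        profit += std::max(0, prices[i] - prices[i - 1]);
    return profit;
}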

 

SCB-FM design IV

Q: In your parser class, onData(seq, packet) gets called by the UDP framework with

  • out of sequence packets
  • duplicate packets

Inside your onData(), you need to invoke the client callback function, like callback(packet), but in sequence and without duplicates. How do you achieve it?

====My solution

  • keep the packets in a circular buffer or a deque in the parser. Save each packet in a slot keyed by the seq number of the packet. Remember the nextSeqToSend. If we get a higher sequence, just warehouse it in the buffer.
  • (Interviewer didn’t ask) How do you reuse space in the circular buffer? when I’m warehousing packet #109999 in slot #9999, then logically the packet in the first slot was already sent out, so I can safely “wrap around” to save next packet in there. I can implement my system to ensure it is actually safe.

Q: if you use a deque, how do you allocate slot for packet #5 while waiting for #4?
%%A: I would allocate for both, but keep the #4 slot vacant. Not sure if std::deque has this API
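A minimal sketch of the circular-buffer idea above (C++17; all names including Packet, callback and Resequencer are my own, and it assumes the gap between nextSeqToSend and the newest arrival never exceeds the buffer capacity):

#include <vector>
#include <optional>
#include <cstdint>

struct Packet { /* payload */ };
void callback(const Packet&); // the client callback (assumed given)

class Resequencer {
    std::vector<std::optional<Packet>> buf_; // circular buffer; slot = seq % capacity
    uint64_t nextSeqToSend_ = 1;
public:
    explicit Resequencer(size_t capacity) : buf_(capacity) {}

    void onData(uint64_t seq, const Packet& p) {
        if (seq < nextSeqToSend_) return;   // duplicate of an already-sent packet
        auto& slot = buf_[seq % buf_.size()];
        if (slot) return;                   // duplicate of a warehoused packet
        slot = p;                           // warehouse it
        // drain: dispatch in sequence for as long as the next expected seq is present
        while (true) {
            auto& next = buf_[nextSeqToSend_ % buf_.size()];
            if (!next) break;
            callback(*next);
            next.reset();                   // free the slot for wrap-around reuse
            ++nextSeqToSend_;
        }
    }
};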

 

SCB-FM eTrading IV

Q: tell me more about your pricing projects
Q: is your project management style agile?

Q: what’s const correctness and “mutable”
Q: cpu cache optimization

(open) Q: forward vs backward iteration of array
%%A: I don’t know of any perf difference

Q: make_shared advantage over calling ctor of shared_ptr?
%%A: memory leak… Correct. See http://www.enseignement.polytechnique.fr/informatique/INF478/docs/Cpp/en/cpp/memory/shared_ptr/make_shared.html
%%A: one allocation only
%%A: perfect forwarding
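A quick illustration of these points (hypothetical Widget type; a sketch, not from the interview):

#include <memory>

struct Widget { Widget(int, double) {} };

void demo() {
    // two heap allocations: one for Widget, one for the control block
    std::shared_ptr<Widget> a(new Widget(1, 2.0));

    // one combined allocation; ctor args are perfectly forwarded to Widget
    auto b = std::make_shared<Widget>(1, 2.0);
}

// the historical leak: in f(std::shared_ptr<Widget>(new Widget(1, 2.0)), g()),
// a pre-C++17 compiler could run g() between "new Widget" and the shared_ptr
// ctor; if g() threw, the raw Widget leaked. make_shared closes that window.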

Q: is shared_ptr thread safe?
%%A: yes, but only for the reference-count increments/decrements
%%A: if one thread copies inst3 while another thread resets inst3, then I don’t know if it’s thread safe. See thread-unsafe shared_ptr: tiny examples

Q5: any experience with c++11?
Q5a: what are the c++11 code modernization changes you made? Examples?

Q: auto_ptr vs unique_ptr
%%A: unique_ptr can be moved (explicitly), not copied. auto_ptr’s “copy” actually transfers ownership, modifying the source.
%%A: unique_ptr can go into containers. Yes see unique^shared^auto_ptr #container

so-called visible progress: Unreasonable expectations

Every week, I actually make more progress than my fellow daddies with kids and commute etc. However, once in a while, in retrospect, I fall apart, belittle and cast doubt on my progress, and point out the (unfortunately invisible) long-term effect.

Putting on a critical thinker’s hat, I feel that for most guys in my situation, it’s no mean achievement to maintain current condition and make small progress, with invisible long-term effect. Anything more is asking too much, and requires luck, talent, determination, context etc.

  • –ranked by …? I want to highlight the unsung heroes…
  • cholesterol, dental, belly and weight? maintaining is no mean achievement
  • loving relationship with wife? maintained, even strengthened
  • relationship with in-laws? improved, as visible long term progress. More important — relationship with my own parents maintained
  • boy’s renzi (Chinese character recognition) and Chinese reading? improved slightly. Not really long term visible progress but at least he maintained
  • physical flexibility? maintained .. yes! Improvement? yes some, with huge effort
  • stamina? maintained … no mean achievement
  • financial domain knowledge? I expanded to FX; market data; low-latency equity; FIX exchange trading…. Visible progress but shallow.
  • algo and coding test performance?
  • bonding with kids? constantly building, deepening… Not by default, but by effort.
  • c++/c# conquered as a visible long term progress. Probably more important — java skill level maintained.
  • financial assets (mostly holding well against inflation)? yes visible progress but I tend to belittle it. Building this portfolio actually required persistent effort, years of analysis, experiments, ..

our coding drills r different: fundamental reason #le2XR

Fundamentally, one chooses how to practice based on past interview experience of his own, not hearsay.

Question for you — how many times in the last 18 months did you get a coding interview that required 30+ minutes per question?

A: about 20 out of 25 positions in my recent experience, not including a BGC-partners onsite where they gave me 4 different questions, each only 20 minutes.

Most of my coding rounds are hackerrank/codility or weekend take-homes.

  • Now I think for you the answer is 10% (but did you include those hacker rank tests?)
  • Now I think for you, mostly you don’t need to compile ! That’s why you call them “algorithm questions” rather than coding questions.

Even if I persuade you on the importance of edit-compile-test speed, or the value of python, sooner or later you would doubt “Really? How come I seldom need to write real code so fast in an interview?”. You would eventually stop practicing with real-code and refocus on pure algorithms, by reading key ideas behind common questions.

If you spend hours of focused energy writing real code as practice, and feel worn out, your own, personal experience would eventually kick in and remind you that it’s not worth it.

Conversely, if I were to follow your method to memorize key ideas only, my personal experience would soon shout out — “Take a step back and look again, you are already pretty fast coming up with data structure and algorithm ideas, but your REAL weakness is implementing them — too slow !”

I listed all recent interview episodes in tabular format — https://bintanvictor.wordpress.com/2018/04/28/some-worthwhile-coding-iv-experiences/.

  • 100% of big tech companies require at least one coding question lasting more than 30 minutes.
  • 100% of big buy-side shops require at least one coding question lasting more than 30 minutes.
  • More than 50% of investment bank jobs also require it.
  • For the other c++ financial companies (the so-called “third party” players) like Bloomberg, exchanges, brokers, market data providers, about 80% of the jobs require it.

risk-neutral means..illustrated by CIP

Background — all of my valuation procedures are subjective, like valuing a property, an oil field, a commodity …

Risk-Neutral has always been confusing, vague and abstract to me. CIP ^ UIP, based on Mark Hendricks’ notes, has an illustration —

  • RN means .. regardless of individuals’ risk profiles … therefore objective
  • RN means .. partially [1] backed by arbitrage arguments, but often theoretical
    • [1] partially can mean 30% or 80%
    • If it’s well supported by arbitrage argument, then replication becomes theoretical foundation of RN pricing

 

##FX/eq projects ] PWM, for SG IV

—-fx or eq
* Swing app to monitor multiple live database tables and auto-refresh upon new quotes/trades and position updates.
* Post-trade Affirmation for complex derivative deals. A workflow application. Operations verify deal attributes by phone and Affirm on the deal to send out trade confirm.
* Statistical feed “blender” (with multiple purposes), needed to remove outliers among diverse indicative eFX forward quotes. Outliers tend to mess up our auto pricers, tiered pricers and cross rate pricers.
* (Ledger balance?) PnL report. Nightly batch to apply marks on option positions and compute PnL. Reports on both client and firm accounts.
* Volatility surface fitter for FX (& equity index/single-names). Volatility smile curve fitter. Quote inversion. Option position mark-to-market, driven by live quotes from multiple live eFX markets.
* Workflow engine for Error trade reconciliation/chargeback. A single error ticket can compute PnL and chargeback across multiple (up to 100) error trades and cover trades. Every quarter, this engine reconciles hundreds of error trades whose total values add up to millions of dollars.

* Deal entry screen for trade booking after a retail dealer executes over phone. If trade validation fails, we notify the dealer to amend and resubmit. Same swing app is also used by spot/swap/options dealers to book voice trades.
—-eq
system? data normalization to take care of splits and dividends
system? basic back testing of simple strategies
system? re-sampling

* (EOS) web-based trade/order entry, with multiple validations. Used primarily by investment advisers across Americas, Europe and Asia. Thousands (up to 5-figure) of orders received daily.
** showing real time bid/ask from GS + bid/ask from trading venues
** supports most order types — limit/market/FOK/IOC/SL

* swing-based real time order book display. Updates to displayed order books come from the backend OMS via real time messaging.
* Also contributed to the OMS backend. Orders originate exclusively from private bank clients. We then monitor their cancels/amends and execution reports (via firm-wide messaging hub) from exchanges, ECN and GS private liquidity pool. Total message volume up to 6 figures.

—-fx
We are the interface to multiple ECNs to 1) receive quotes, enrich and publish to PWM clients 2) forward client orders to ECN in 2-leg spread trades 3) execute trades received from ECN 4) generate and validate (against tri-arb) cross rates using ECN quotes. Also contributed to auto quoters and RFQ engine. Supervised and monitored the progress of high-frequency eFX applications. Performed eFX development activities such as requirement gathering, design, testing and deployment. Personal contributions included
* Tiered quote pricer. Each FX customer falls into one of the tiers. When a currency pair updates in our pricer, all registered clients would get a new quote (??delivered to their Financial Advisors??). Silver tier clients receive a better bid/ask spread than regular clients; Gold tier gets the best quote.
* eFX rate update/distribution engine to downstream real time risk, PnL, position marking systems
* eFX option real time risk report (unfinished). Option position risk calc is rather slow, so each user selects a limited number of positions into his watch-list. Watched positions get periodically updated based on live ECN rates to show real-time risk.
* (questionable project) Compute cross rates for PWM trades that are large, recurring and involving illiquid currencies. Cross rates computation from daily close USD buy/sell rates.
————
— blender
http://www.olsendata.com/fileadmin/Publications/Tech_Papers/FXBlenderDoc_01.pdf (C:\0x\xz_ref)
outliers are particularly damaging to market making systems.
input – transaction prices and indicative quotes
output – only indicative quotes

— cross rate calculator

What we compute are backdated cross rates. A PWM (non-trading) client agrees on a currency conversion AAA/BBB. She agrees on the AAA amount, and waits a few hours/days to get the actual BBB amount, just like a cross-currency credit card payment —

MasterCard exchange rates are based on multiple market sources (such as Bloomberg, Reuters, Central Banks, and others). These rates are collected during our daily rate setting process. The exchange rates displayed on the web site are derived from the buy and sell rates included in the MasterCard daily T057 Currency Conversion Rate File.

MasterCard applies the USD as unique reconciliation currency to manage all currency conversions globally. Due to possible rounding differences, the published calculated cross-rates may not precisely reflect the actual rate applied to the transaction amount when converting to the cardholder billing amount. The calculated cross-rates will be loaded onto the MasterCard web site on a daily basis.

MasterCard applies the exchange rate to transactions at the time of settlement, not at the point of authorization of the sale.

–ECN interface
(my code doesn’t use FIX. Our volume is lower. conversion from FIX to xml is encapsulated in a library)

FXall, Currenex, Hotspot, TradeWeb, Bloomberg. Example applications:
* auto quoters
* price feeds
* market data feeds
* post trade feeds
* execution gateways

mgr position risk: age-unfriendly

Statistically, very few IT managers can maintain the income level beyond age 55.

I believe those managers in 30’s and 40’s are often more capable, more competitive and more ambitious.

Even if you are above average as a manager, the chance of rising up is statistically slim and you end up contending against the younger, hungrier, /up-and-coming/ rising stars.

max within sliding window #0%

Q (leetcode hard problem 239): Given an array nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position, return the max item in it… in (amortized) constant time per move.

====analysis====

feature — Note each item lives a fixed number of cycles, so insert/delete are both constrained.
feature — Compare the min-stack problem. This is the queue analogue — a max-queue.
idea — Compare the skyline problem.
idea — sorted linked list with a hashmap pointing into it for O(1) deletion. For insert? (I’m thinking of binary heap again.) I can efficiently insert above the highest (or 2nd highest) node, or below the lowest (or 2nd lowest).
idea — can we use the sum?
idea — fixed capacity binary heap — is insertion faster than O(logN)?
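For the record, the standard technique — i.e. exactly the max-queue hinted at above — is a monotonic deque. A minimal sketch (the standard technique, not among my ideas above):

#include <deque>
#include <vector>

// the deque front always holds the index of the current window max;
// each index enters and leaves the deque once => amortized O(1) per move
std::vector<int> maxSlidingWindow(const std::vector<int>& nums, int k) {
    std::deque<int> dq; // indices; their values decrease from front to back
    std::vector<int> out;
    for (int i = 0; i < (int)nums.size(); ++i) {
        if (!dq.empty() && dq.front() <= i - k) dq.pop_front();          // expired index
        while (!dq.empty() && nums[dq.back()] <= nums[i]) dq.pop_back(); // dominated values
        dq.push_back(i);
        if (i >= k - 1) out.push_back(nums[dq.front()]);
    }
    return out;
}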

longest consecutive ints ] matrix 70% done


Q: Given a n*n square matrix where all numbers are distinct, find the maximum length path (starting from any cell) such that all cells along the path are in increasing order with a difference of 1. We can move in 4 directions from a given cell (i, j), i.e., we can move to (i+1, j) or (i, j+1) or (i-1, j) or (i, j-1) with the condition that the adjacent cells have a difference of 1.

Example:

Input: mat[][] = {
{1, 2, 9}
{5, 3, 8}
{4, 6, 7}}
Output: 4
The longest path is 6-7-8-9.

–sol1, probably O(NN) since each node is checked no more than 5 times.

1) typewriter search for the first unvisited node. Exit if all visited. Once a node is found, designate it as cur.
2) down-search .. explore cur’s four neighbors to see if any == cur-1.
3) if found, then designate that node as cur and continue the down-search.
4) At end of down-search, Go back to #2) and start up-search.
5) At end of up-search we have a linked list. IFF list size > 1, then all nodes on it are marked as visited.
6) go back to #1.
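A minimal sketch of sol1 (my own, untested). Since all values are distinct, each cell has at most one neighbor holding value+1 and at most one holding value-1, so each chain walk is deterministic:

#include <vector>

int longestConsecutivePath(const std::vector<std::vector<int>>& mat) {
    int n = (int)mat.size(), best = 1;
    if (n == 0) return 0;
    std::vector<std::vector<bool>> visited(n, std::vector<bool>(n, false));
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    // find the unique neighbor whose value == mat[r][c] + delta, if any
    auto step = [&](int r, int c, int delta, int& nr, int& nc) {
        for (int d = 0; d < 4; ++d) {
            int rr = r + dr[d], cc = c + dc[d];
            if (rr >= 0 && rr < n && cc >= 0 && cc < n && mat[rr][cc] == mat[r][c] + delta) {
                nr = rr; nc = cc; return true;
            }
        }
        return false;
    };
    for (int i = 0; i < n; ++i) for (int j = 0; j < n; ++j) {
        if (visited[i][j]) continue;   // typewriter search for an unvisited node
        int len = 1, r = i, c = j, nr, nc;
        while (step(r, c, -1, nr, nc)) { r = nr; c = nc; ++len; }       // down-search
        int r2 = i, c2 = j;
        while (step(r2, c2, +1, nr, nc)) { r2 = nr; c2 = nc; ++len; }   // up-search
        visited[r][c] = true;          // (r,c) is the chain bottom; mark the whole chain
        while (step(r, c, +1, nr, nc)) { r = nr; c = nc; visited[r][c] = true; }
        if (len > best) best = len;
    }
    return best;
}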

self-hate due to hearsay 300k salary #XR

.. You seem to feel the (hearsay) income level of 300k is the minimum you need to feel good about yourself. In that case, your worry and negative self-assessment about income is misaligned with reality.

A bit of real statistics to chew on – rank all countries by GDP per-capita. Most of the top 10 richest countries have population below 9 million including Switzerland and most of the Northern Europe countries.

Q: How many countries are richer than U.S. *and* with a population above 20 million?
Answer: zero. Japan, Germany, UK, France, … are all less rich than the U.S. Now, I believe you don’t want to compare with developing countries like China, Korea, Taiwan, India; let’s first focus on rich countries —

  • I believe half the American families earn less than 60k combined income, so do you think half the American families are struggling to survive every day?
  • I would estimate (based on my knowledge) more than half the *families* across the rich countries earn less than USD 60k, but you are implying that a single income of 300k is the minimum you need?
  • USD 300k single income would put you in the top 2% in any rich country, but you feel that’s the minimum you need?
  • USD 300k is higher than at least half the doctors’ and lawyers’ income across the rich countries, so you seem to say most doctors and lawyers are struggling to survive based on their income
  • My wife’s income is about SGD 30k. A regular teacher’s salary in Singapore is about SGD 50k. Singapore is, by most rankings, more affluent than the U.S. and teachers are a large, white-collar workforce. By your standard, even a 500% increase in a Singapore teacher’s income would still be too low for you.
  • In one of the most expensive cities of our world – London, a USD 300k salary would be top 2%. I know from many sources that London finance IT salary is lower than New York. A 700-pound daily contract rate is “extremely rare” (unheard of to many people) but it works out to only USD 230k, yet you would think that’s not enough to survive. Mind you, London is more expensive than New York.
  • Some would say London is still cheaper than … Hongkong. A UBS VP position I tried was at HKD 1.2 million, about half your minimum standard.
  • I have friends in Shanghai and Beijing – the most expensive Chinese cities (along with Shenzhen). A 300k USD salary would be one in 500 jobs over there, but you think it’s barely enough for you. They would guess you live in a city where home price is 10 times higher than Shanghai/Beijing but in reality, your home is more affordable — A comparable apartment (not a 2-storey house with backyard) in Beijing/Shanghai would cost at least USD 1.5 million.

You are living in an ivory tower (and refusing to step out to the real world) if you hold on to that irrational and negative view. You sound like a guy complaining about his 10-room, 3-story mansion. It’s not depression but hallucination.

If you carry this habit into parenting, then beware — your kids could be top of their class but you may still feel they are not good enough because they didn’t win a gold medal. Would be tragic. I think Chinese parents are demanding but most are not at that level. We love our kids and accept them. We ought to love and accept ourselves.

I recommend [[compassion and self-hate]] by Theodore Issac Rubin, my favorite American self-help writer. His books changed my life. I feel good to know he is now 95. I’m grateful to Dr Rubin; I’m grateful to my dad; I’m grateful to Buddhism teachings; and I’m grateful when I answer your questions — I often get a chance to look into myself. I thank our creator to give me the analytical powers (though not as powerful as Dr Rubin) to dissect the layers and uncover the core issues in my psyche. As I endeavor to answer your questions I often reach a deeper understanding of my own pains, stupidity, irrationality and unfairness to myself and love ones.

%% absorbency isn’t highest #don’t scold kids

Obviously 99% of us have limited technical talent like learning capacity, memory, problem solving speed .. but today I will focus on another factor “absorbency” – the capacity to endure repetitive practice, sustained focus, and build the /mileage/ required for mastery. No one has unlimited absorbency. We all reach saturation point sooner or later.

Opening example — When I was trying rock climbing, my fingers and forearms would get too tired for me to continue. On the other hand, as a teenager, I could do 600 sit-ups in 10 minutes non-stop.

Absorbency is one of the most important technical talents, esp. in the long run. I often feel my son is weaker on absorbency when practicing piano, Chinese handwriting and math. I think he is improving. His swimming practice has always been good.

Some individuals are not the brightest/fastest but can tolerate high amounts of practice and can become formidable like 郭靖

Let’s shift focus to my own technical absorbency. I want to say I’m not the strongest so I should not blame my son.

  1. Coding drill – I can do more than most peers, but then some grads can solve more than 5 problems a day, and pass all Leetcode tests.
  2. Quant study – I did more than half my homework on my own. Some classmates were able to practice and study more.
  3. Getting a grasp on a big codebase – absorbency required, beside learning speed + memory.
  • eg flight — how much technical learning you get on a 15H long flight is a measure of your technical absorbency.
  • eg coding drill — Rahul, Abhinav and I each find joy in coding drills, but many of my friends seem to focus on reading the ideas rather than practicing and solving problems independently. Focusing on the end result rather than the journey is no fun. The joy increases your absorbency
  • Positive feedback — is a key factor affecting absorbency. Coding practice and software programming offers immediate feedback.
  • self-doubt — I frequently (once a month?) question my endeavor (and the elusive strategic value) .. detrimental to absorbency. It’s already hard to absorb the practice even without these self-doubts.
    * yoga
    * risk analytics
    * socket

max-profit: at most 2G trades#proven but untested 80%

Q(Leetcode): Say you have a price history as array. Design an algorithm to find the maximum profit. You may complete at most 2G transactions, consisting of exactly G (eg 3) buys and G sells. You may not engage in multiple transactions at the same time (i.e. you must sell the stock before you buy again). No short sell please.

No O() requirement.

====analysis=====

I feel the first challenge is to list all (not more than 10) scenarios. This step has taken me a few days, even though I drew many examples.

–brute force: construct all possible pairs, rank them and pick top 2.

–solution 2:

  1. Identify all the turning points so we end up with hlhlhl… We can eliminate or ignore the other points.
  2. * identify the best pair using the max-profit algo. denote them as l1/hj
  3. * In the subarray before l1, find the best pair
  4. * in the subarray after hj, find the best pair
  5. pick the best among the two and denote it as p2
  6. Now look into the subarray l1 to hj. If there’s no enclosed pairs within then we have a simple case — use l1/hj and p2. But let’s assume there are at least 2 nodes enclosed. I will denote entire subarray as l1 h1 l2 h2 … lj hj (where l1-hj is the max-profit)
  7. * use max-profit algo to find the worst loss from h1 to lj. Suppose it’s h3 to l5.
  8. If this loss exceeds p2, then we will return l1/h3 and l5/hj. Otherwise, return l1/hj and p2

This solution uses the max-profit algo 4 times (*).

can we use a matrix?

We can keep track of all profitable pairs i.e. le/ri indices, and also a pointer to the current best pair that’s not overlapping with “me”.

After creating 2nd pair, IFF no overlap, then we update the pointers in both instances.

After creating 7th pair, if it doesn’t overlap with the #3 highest pair, then check-update the pointer in #3.

I think if we can efficiently keep track of these then it should work.

I feel the basic decision is to break the best pair or keep it

case: need to break the highest pair into 2 pairs,
case: best pair + another pair outside. I think this is easy..
case: 1,18,2,19,15,16. Perhaps the best case to solve.

peek@solution,get hint,then solve the problem yourself

I have found it rather important to follow this routine — Suppose a given easy or hard leetcode problem is too unfamiliar to me (might be familiar to you) so I have very few ideas, all on the wrong track. I would peek at the discussions to get some useful hint. I stop reading right there and go back to my drawing board and spend the next hour (or day) conceiving my own solution.

This has worked more than half the times. Benefit? I get to implement and remember my own solution.

There is a more important but subtle benefit — that hint is often related to a Reusable problem-solving technique, the holy grail of coding drill.

The way I see it, there are only a dozen or so (below 20) reusable techniques. Even if I were to memorize all of them, they would be useless to me because the real challenge is how to Apply a technique to a given problem. The way I learn to apply a technique is

* make sure I feel the hint i.e. it connects some dots in my mind
* experiment with various ways to use the hint to tackle the problem
* some experiments show progress; some become dead-end because they are the wrong ways to apply the technique
* keep sharpening "pattern recognition" until the key characteristics of the problem becomes clear in my mind.

The more I experiment with the hint, the more familiar I get… familiar with the problem, the hint, and the hitherto hidden gem behind the hint. That gem is often (not always) a reusable technique.

I won’t learn how to apply a Reusable technique if I don’t experiment, if I read a solution directly. The dots won’t connect. Pattern-recognition will still develop but slowly. Worse still, I may completely miss the hidden Reusable technique in the solution.

I always believe the real value in doing a Leetcode problem is the technique, reusable or not.

The solution I conceive will Not be as optimal as the one in the discussion … but it is easier to digest. The optimal solution in the discussion is usually rather concise (since the authors want to look cool) and elegant, but I don’t need to learn an elegant solution. I need to learn a Reusable technique. For that purpose, my own solution is easier to digest because it uses only a bit of the new technique + many familiar techniques.

remove subsequent duplicates in slist #SocGen

Q (SocGen hackerrank): given the head node of a linked list that might have duplicate values,  return the head node of a sanitized linked list without duplicates.

eg: Node@0x7089 (value=3) ->Node@0x6481 (value=2) ->Node@0x7921 (value=3) ->Node@0x6308 (value=7) ->null

would become Node@0x7089 (value=3) ->Node@0x6481 (value=2) ->Node@0x6308 (value=7) ->null

Note the first node (value=3) is still in the list, but all subsequent nodes with (value=3) are discarded.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/removeDupNodes_SocGen.cpp is my tested solution

  • The two-pointer solution sounds simple but is over-complicated and error-prone during implementation.
  • fake-link-head technique is again useful

why I said you looked too dissatisfied #XR

In roughly 5 out of every 10 chats (too many 🙂) you reveal serious dissatisfaction with your own situation. We are similar in this respect, so I feel the same way. Precisely because of that, I feel you have no need to be so dissatisfied, still less so distressed.

I have never heard you mention your father. Mine gave me valuable guidance in this area. More importantly, he gave it repeatedly — my thinking habits are very hard to change; I have always had the temperament and confidence of an independent thinker: truly stubborn, even incorrigible. I am grateful that he never tired of pointing out how I was a fool creating my own torment.

Gratitude alone is useless. More importantly, his wisdom and patience gradually won me over, and I came to realize I am not incorrigible after all.

Many of the differences between your views and mine trace back to my father. For example: school districts; elite universities’ admission preference for disadvantaged groups; which country’s education system is more successful; evaluating children too early…

Let’s talk about personal careers instead. I am keenly aware my IQ/EQ is limited, so there is really no need to compare myself with highly paid technologists (let alone management talent). That’s why I say my current situation is good — if anything, I should be quietly congratulating myself.

Coding drills need not have an economic payoff — such as an offer from a top Silicon Valley or HFT firm. I value ability improvement and skill accumulation more. Even if, a few years on, the accumulation amounts to little, I hope to feel at peace with it.

Given how my UChicago master’s turned out, feeling at peace is genuinely not easy. My summary — financial math positions are too few and demand more than my ability; the salary is not necessarily much higher than a programmer’s; and there are no contract roles to speak of. It cannot leverage my (and CSDoctor’s) coding strengths and experience. In short, when I chose this master’s program in 2013 I did not understand the realities well enough. Only after boarding the ship did I find out.

That painful experience is behind my caution, hesitation and reluctance toward every new technology and new domain.

Since I am not optimistic about the money prospects in those domains, I am also less dissatisfied with the status quo than you are. That said,

* I’m good at scripting/SQL/data-processing compared to other developers I know;
* I like analyzing complex data, with attention to details;
* I have formal math training including statistics

So IF there’s some high-paying domain for me, I am open to it. That’s a big IF. The way I see it, most of those data analyst jobs are not paying well. If it pays well, it would be too hard to get in.

get ideas@top100 common Q #XR

Update — Now I realize my energy is limited so I need to allocate my spare time between

  • real coding
  • reading the key ideas

Also, I feel no positive feedback when reading the key ideas. I hit de-motivation.

Hi XR,

I now agree with your analysis — if I understand and memorize 1 key point about each problem, by reading leetcode, those points will help me significantly in a coding interview. This is one of the 4 benefits of coding practice #syntax,ECT,BP

  • If the problem is easy, then those key points will save me from running out of time.
  • If the problem is hard for everyone, then interviewer would have a lower expectation i.e. no need to complete it 100%. If I am on the right track (while other candidates are not) then i have an edge.
    1. case in point —- Last Friday I told you about shortest-path problems. I learned the technique from past interviews.
    2. case in point —- remember I told you about my own facebook question (3-way sorter/classifier #FB) Your swap technique is a valuable key point, even though it won’t help me finish the solution within 30 minutes
    3. case in point —- In 2018 I encountered comparable int array shuffling problems within 2 months at MS and LiquidNet. Key points are relevant.
    4. case in point —- In a 2018 interview, I learned to use N-by-N matrix to represent a N-node graph, then I used this key idea in another coding interview.

I wonder if I could get myself to follow a plan similar to yours:

  • every weekend (+ weekday evenings) spend 30 – 120 minutes.
  • First select some “interesting” and popular question. If no idea, put it on the back-burner
  • after a while, give up and read the solution to get the ideas
  • eg: edit distance
  • eg: punctuating continuous sentence
  • eg: regex
  • eg: maximum all-black sub-matrix

edit distance

The DP idea — compare matrix-path-counter, which is easier and more obvious than this one.

Q72 on Leetcode: Given two words word1 and word2, find the minimum number of operations required to convert word1 to word2. You have the following 3 operations permitted on a word:

  1. Insert a character
  2. Delete a character
  3. Replace a character

Comment — Top 100, naturally occurring. I won’t bother to pass all Leetcode tests esp. the load tests. If I pass all non-load tests I would consider my solution decent.

https://github.com/tiger40490/repo1/tree/py1/py/str has my implementation based on a DP idea online, and a spreadsheet illustration. The idea is elegant once you wrap your mind around it.
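For reference, the classic DP recurrence — I believe the linked implementation follows the same idea:

#include <string>
#include <vector>
#include <algorithm>

// dp[i][j] = min ops to convert word1[0..i) into word2[0..j)
int editDistance(const std::string& a, const std::string& b) {
    size_t m = a.size(), n = b.size();
    std::vector<std::vector<int>> dp(m + 1, std::vector<int>(n + 1));
    for (size_t i = 0; i <= m; ++i) dp[i][0] = (int)i; // delete all of a
    for (size_t j = 0; j <= n; ++j) dp[0][j] = (int)j; // insert all of b
    for (size_t i = 1; i <= m; ++i)
        for (size_t j = 1; j <= n; ++j)
            dp[i][j] = (a[i-1] == b[j-1])
                ? dp[i-1][j-1]                  // chars match: free
                : 1 + std::min({dp[i-1][j-1],   // replace
                                dp[i-1][j],     // delete from a
                                dp[i][j-1]});   // insert into a
    return dp[m][n];
}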

===analysis===
Starting with the small string (length S), the challenge is to project as many of the S chars as possible onto the large string (length L). If we can project 5 chars at most, then … (wrong — the remaining S-5 chars need replacement, and the other L-S chars need insertion.)

–idea2: draw all the projection arrows from S to L. In a good-projection, every arrow on the right should be more slanted than every arrow on the left. We want the largest good-projection. In the opening example, the largest would have 5 arrows, …

—-
None of these ideas has proven effective.

cross-currency eq swap

At Time 1, CK (a hedge fund based in Japan) buys IBM priced at USD 10, paying JPY 1000. 11 months later, IBM is still at USD 10 which is now JPY 990. CK faces a paper loss due to FX. I will treat USD as asset currency. CK bought 10 USD at 100 yen and now each USD is worth 99 yen only.

Now consider a comparable swap trade.

At Time 1, the dealer (say GS) buys and holds IBM on client’s behalf. How did GS pay for the shares? GS received JPY 1000 from CK and used it to get [1] 10 dollars to pay for the stock.

Q: What (standard) solutions do GS have to eliminate its own FX risk and remain transparent to client? I think GS must pass on the FX risk to client.

[1] GS probably bought USDJPY on the street. Who GS bought from doesn’t matter, even if that’s another GS trader. For an illiquid currency, GS may not have sufficient inventory. Even if GS has inventory under trader Tom, Tom may not want to Sell the inventory at the market rate at this time. Client ought to get the market rate always.

GS’s own account is now long USDJPY at price 100 and GS wants USD to strengthen. (If GS effectively passes on the FX risk, then CK is also long USDJPY.)

I believe GS needs to Sell USDJPY to CK at price 100, to effectively and completely transfer the FX risk to the client. After that,

  • GS is square USDJPY.
  • CK is Long USDJPY at price 100.

I believe the FX rate used in this trade must be communicated to CK.

11 months later, GS hedge account has $0 PnL since IBM hasn’t moved. GS FX account is square. In contrast, CK suffers a paper loss due to FX, since USD has weakened.

As a check, notice that this outcome is identical to the traditional trade, where CK buys USDJPY at 100 to pay for the stock. Therefore, this deal is a fair deal.

Q: Does GS make any money on the FX?
A: I don’t think so. If they do, it’s Not by design. By design, GS ought to Sell USDJPY to client at fair market price, therefore can’t profit from it.

in size-N array find The duplicate int #1 to N+1 #pigeonhole

https://leetcode.com/problems/find-the-duplicate-number/description/: Given an immutable int array nums containing n + 1 elements where each element is between 1 and n (inclusive), prove that at least one duplicate number must exist. You are guaranteed that there is only one duplicate number, find the duplicate value in O(1) space, below O(NN) time. The culprit may repeat many times.

I didn’t bother to write the code.

===== analysis =====

contributed by a user and highly contrived:(
many likes:)

–bisection solution in O(N logN) time and O(1) space. I came up with this solution within a minute.

  1. Divide the full range [1 to n] into 2 almost-equal ranges (i.e. if n = 2K+1, then i use [1 to K] and [K+1 to n] as 2 ranges)
  2. Count how many nodes are in each range. Clearly one of the two ranges must have too many elements.
  3. Remember the boundary of that bad range so from now on we will ignore those nodes falling into the good range. We will use 2 variables to update/improve the boundary, until they coincide.
  4. within the bad range, repeat Step 1.

Key insight — progressive bisection.. non-recursive.

Key insight — applying the pigeonhole principle, we split the conceptual range. The more common (but ineffective) technique would split the physical array.
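Though I didn’t bother with the code, here is a minimal sketch of the bisection idea (my own, untested):

#include <vector>

// bisect the VALUE range [1, n], not the physical array; pigeonhole:
// if more than (mid - lo + 1) elements fall in [lo, mid], the culprit is there
int findDuplicate(const std::vector<int>& nums) {
    int lo = 1, hi = (int)nums.size() - 1; // values are in [1, n]; array has n+1 elements
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2, count = 0;
        for (int x : nums)
            if (lo <= x && x <= mid) ++count;
        if (count > mid - lo + 1) hi = mid; // too many pigeons in [lo, mid]
        else lo = mid + 1;
    }
    return lo;
}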

generate simple paths between 2 graph nodes

Q1: given 2 nodes in a graph containing N (eg 121) nodes, potentially with cycles, generate all simple paths between the pair. A simple path has no cycle. (In other words, length + 1 ==  # unique nodes in a simple path)

  • I think there are classic math algorithms for it, because this is part of basic graph theory. Here are some applications of this type of algorithm —
  • Q1b (special case of Q1): given 2 nodes in a C by R matrix grid, where every node is connected to (up to) four neighbors, generate all cycle-free paths.
    • I can solve this problem in python
  • Q2 (easy one based on Q1): generate all simple paths between any node pair in a graph. The shortest simple path has length=0. Longest simple path can potentially visit every node exactly once.
  • A: first generate all 121-Choose-2 node pairs. For each pair, solve Q1. Lastly generate the 121 trivial paths of length=0.
  • Q2b (special case of Q2): given a C by R (eg 11×11) matrix grid, where every node is connected to (up to) four neighbors, generate all simple paths.
  • Q2c (easy one based on Q2): given a binary tree containing no cycles, generate all paths.

— my DFT implementation (probably not 100% correct) , where each “trail” either fails or becomes a path.

  1. from NodeA start a breadcrumb/trail. We can’t revisit any node already visited on current breadcrumb,
    1. if this is a matrix, then instead of a hashtable, we can also use a shadow matrix, but the breadcrumb is much smaller than a shadow matrix
  2. if we can reach a node surrounded by nodes on the same breadcrumb, then the trail fails
  3. else we will reach NodeB 🙂 Print the breadcrumb

By construction, we won’t see duplicate paths 🙂

https://github.com/tiger40490/repo1/blob/py1/py/grid/classic_count4waySimplePaths.py is the implementation

–BFT? I don’t think it can print each unique path

shortest path btw 2 graph nodes #binary matrix as illustration

Q: given 2 cells in a binary matrix (1=black, 0=white=blocked), check whether the pair are connected and if yes return the shortest path. There exists a path of length 1 between any 2 cells IFF they are side by side or stacked one atop the other.

count paths between 2 bTree nodes #PimcoQ9 Ashish — arguably harder than this problem, but this problem allows moving in four directions.

binary-matrix island count #DeepakM technique is more applicable. A BFT path should work.

  • every reachable node is painted Green (like 2)
  • we give up after our queue is empty

https://github.com/tiger40490/repo1/blob/py1/py/grid/classic_connectedPair.py is the implementation, briefly tested.

fewest jumps to reach right end #triple jump

Q(Leetcode): Given an array of non-negative integers, you are initially positioned at the first index of the array. Each element in the array represents the maximum permitted jump length from that position. Your goal is to reach the last index in the minimum number of jumps.

https://github.com/tiger40490/repo1/blob/py1/py/array/tripleJump.py is my solution, not tested on Leetcode.

==== analysis =====
Typical greedy algorithm. I will jump leftward.

Suppose there are N=99 nodes in the array. I will pre-scan the N nodes to build a shadow array of integer records, each a BestLefNode. (The first record is unused.)

If BestLefNode[44] == 33, it means that based on known data, the left-most (furthest) node we can jump to from Node #44 is Node #33.

When we visit Node #7 during the scan, we will update 0 or more BestLefNode records, from #8 onward.

As soon as we update BestLefNode[N-1] i.e. right-most record, we exit the initial scan since the optimal solution is now available. For example, if rightmost BestLefNode has value #88, that means the furthest node we can reach from the right end is Node #88, so we will jump to #88 and then check the best destination From #88.
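For comparison, the standard forward-greedy technique — a different approach from the BestLefNode pre-scan above (assumes the last index is reachable):

#include <vector>
#include <algorithm>

// within the current jump's reach, track how far the NEXT jump could extend;
// each time the current reach is exhausted, count one more jump
int minJumps(const std::vector<int>& nums) {
    int jumps = 0, curEnd = 0, farthest = 0;
    for (int i = 0; i + 1 < (int)nums.size(); ++i) {
        farthest = std::max(farthest, i + nums[i]);
        if (i == curEnd) {      // must take another jump now
            ++jumps;
            curEnd = farthest;
        }
    }
    return jumps;
}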

longest consecutive ints]O(N) #zebra

Popularity — 1000+ likes on Leetcode … possibly popular

Q(Leetcode): Given an unsorted array of integers, find the longest consecutive element sequence, in O(N) time. Eg: given [100, 4, 200, 1, 3, 2] return [1,2,3,4]

I call this the zebra problem because  every consecutive sequence of int is a black stripe and the gaps between them are white stripes. We want the widest black stripe. Obviously, each stripe has minimum size 1.

https://github.com/tiger40490/repo1/blob/py1/py/array/zebra.py is my O(N) solution, not tested on Leetcode.

========

What’s UnionFind? A reusable technique?

Like inserting interval #merging #80% done, I feel this is a data structure problem.

To keep things simple, I will first run one iteration to remove all duplicate items.

I will use a hashtable where the key is a known item. The value is a pointer to a “segment” object.

A segment stores the min and max values. All integers within [min, max] of the segment are always known-items during my scan of input array.

When a new item is either min-1 or max+1, we expand the segment by adjusting the extremes…

The trick is joining two segments, without link pointers. After joining, we don’t really adjust the min/max fields. We only update the max-length global variable if needed.

To keep the hashtable small, I can optionally delete from it but we don’t want to do a range delete within the loop — O(NN)
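A minimal sketch realizing the segment-joining idea — store the segment length at each boundary value; interior entries are allowed to go stale, because a new item can only ever join segments at their boundaries:

#include <unordered_map>
#include <vector>
#include <algorithm>

int longestConsecutive(const std::vector<int>& nums) {
    std::unordered_map<int, int> len; // value -> length of the segment it bounds
    int best = 0;
    for (int x : nums) {
        if (len.count(x)) continue;                    // duplicate item, skip
        int left  = len.count(x - 1) ? len[x - 1] : 0; // segment ending at x-1
        int right = len.count(x + 1) ? len[x + 1] : 0; // segment starting at x+1
        int total = left + right + 1;                  // join the two segments via x
        len[x] = total;
        len[x - left]  = total;                        // refresh far-left boundary
        len[x + right] = total;                        // refresh far-right boundary
        best = std::max(best, total);
    }
    return best;
}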

list all(and only)commit hashes from branching point to a descendant commit

Background — Suppose you made 3 commits on your feature branch named “pairx”, but meanwhile, someone added 4 commits in the master branch. Therefore there is now a divergence in the commit history graph.

Oftentimes you need to visualize the divergence. You need to know exactly which 3 commits are on your branch after the common ancestor.

git log master..pairx # listing the 3 additional commits in pairx branch, right after the common ancestor i.e. the branching point

git log pairx..master # to show those 4 commits.

generate combinationSum compositions #backtrack up] trie+tree

Q: https://leetcode.com/problems/combination-sum/description/

Given a set of unique candidate numbers and a target number, find all unique combinations of candidates, where each combination sums to target. Each candidate may be used repeatedly.

My solution is https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/comboSum.cpp , showing a reusable backtracking technique below. However, the backtracking relies on a key insight. Suppose we have target = 7 and X=2,Y=3,Z=4 as the candidates.

  • when we try a Y, we don’t need to try any more Xs. For example, If we are trying XY, then all XX* solutions are already handled by earlier recursive calls.
  • each combo sequence is naturally pre-sorted. A blessing … but also a curse when X+X+Y and Y+X+X are considered two distinct formulas. Latest code in github can support this requirement too.
                 root
          x        y     z
      /   |  \
     x    y   z       
    / \   | \  \
 xxx  xxy | xyz \ 
         xyy     xzz
void //can return something if needed

recurs( solutionsFound &, //growing
        curPartialSolution &, 
// above collections could be global variables, to simplify things

        remainingCandidates, /*probably an immutable global array. 
If we need the remaining array to shrink, we can still rely on startIndex to skip used candidates.*/

        gapFromTarget, 
        startIndex, //used to select from remaining candidates
) 

Inside this function, we scan remaining candidates starting from startIndex. Typically in one iteration

  1. we add a new candidate into curPartialSolution
  2. we call recurs
  3. we remove the last added candidate from curPartialSolution to restore the original curPartialSolution — backtracking up the tree.
  4. move on to the next candidate
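A runnable condensation of these steps (my reconstruction, not the linked code; globals used to keep the recursion signature small):

#include <cstdio>
#include <vector>

std::vector<std::vector<int>> solutionsFound;
std::vector<int> curPartialSolution;
const std::vector<int> candidates = {2, 3, 4}; // X, Y, Z in the example

void recurs(int gapFromTarget, size_t startIndex) {
    if (gapFromTarget == 0) { solutionsFound.push_back(curPartialSolution); return; }
    for (size_t i = startIndex; i < candidates.size(); ++i) {
        if (candidates[i] > gapFromTarget) continue;
        curPartialSolution.push_back(candidates[i]); // try this candidate
        recurs(gapFromTarget - candidates[i], i);    // i (not i+1): reuse is allowed
        curPartialSolution.pop_back();               // backtrack up the tree
    }
}

int main() {
    recurs(7, 0); // target = 7 -> prints "2 2 3" and "3 4"
    for (const auto& s : solutionsFound) {
        for (int v : s) std::printf("%d ", v);
        std::printf("\n");
    }
}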

irrational envy for all-round high flyer peers

When I first identify an acquaintance as an all-round high flyer, his (her) “note-worthy” achievements were invariably rather few, thanks to my automatic filter on his other “success” stories … because those kinds of “successes” are, at a deep and personal level, unimportant to me. But then those things insidiously sneak past my defence into my inferiority complex and /infest/. Extremely irrational and not worth it.

I would rather feel inferior to someone (I know well) with many unrelated yet worthy achievements [3]. I doubt there’s any in my circle.

Incidentally, when a public speaker is introduced on stage, the audience often hear a list of “successes” which are mostly unimportant to me.

(Even though none of them is a friend I know well enough) Over the years there were a small number of acquaintances [1] I have singled out. Once I single one out, I tend to lose my critical thinking and see many unimportant/insignificant/secondary “achievements” as enviable. Critical thinking is badly, badly needed at such a juncture!

Incidentally, one of the most effective ways to feel not-inferior is a lucrative job offer, even if I don’t take it.

The initial “enviable achievements” are usually one of these 5
1) income, almost always managerial [2]
2) investment, mostly property
3) beautiful wife
* fitness, flexibility and body shape
* short commute

The other factors are usually (as they should) in my “don’t-care/unimportant-to-me” list, but they sneak into my inferiority complex.

* (multiple) degrees from prestigious universities? Actually most of them are inferior to me!
* academic kids
* competitions and awards to himself or kids
* branded employers? many of them have fewer than mine
* running his own side business? I did while many of them didn’t
* wife professional career
* work-life balance… “easy job”? Questionable. Most high-paying jobs require effort
* writing and music skills? I achieved more than most of them!
* publications
* cars? is a liability not an asset!
* green card
* vacations to many places? Huge cost, no real gain for me
* managerial success at an early age
* golf skills? I couldn’t care less when I’m not losing my critical thinking.
* networking skill, smooth personality? I’m not this type

[2] as soon as I hear the MD title of some ex-classmate, I lose my critical thinking defence.

Better consider [[compassion and self hate]] and Buddhist teaching

[1] Besides ML, here are some questionable names. Many of them I barely know the name and job title, so my inferiority is fundamentally similar to my infatuation over the Indonesian girl Sandy, whom I spoke to fewer than 3 times.
* Lu Nuo — I only know he has a writing hobby …
* Cai Hongyu
* Tao YY — I don’t really know how he is doing
* Yang Yang
* Xie Xiaoli

[3] briefly on myself — math, piano, c#, swing, properties, blog, helping friends with job hunting

maximum path sum through binTree #60%

Q: (Leetcode “hard” Q124) Given a non-empty binary tree, find the maximum path sum. For this problem, a path is defined as any sequence of nodes from some starting node to any node in the tree along the parent-child connections. The path must contain at least one node and does not need to go through the root.

====analysis====

optimal solution(?) — DFT. Along each root-to-leaf path, use the max-subarray (Kadane) algo and store maxSumEndingHere value in each node.
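For the record, the standard post-order recursion — a different technique from the Kadane-per-path idea above:

#include <algorithm>
#include <climits>

struct Node { int val; Node *left, *right; };

// returns the best downward chain-sum starting at n;
// side effect: best tracks the best "bent" path (left chain + n + right chain)
int downSum(Node* n, int& best) {
    if (!n) return 0;
    int l = std::max(0, downSum(n->left, best));  // drop negative chains
    int r = std::max(0, downSum(n->right, best));
    best = std::max(best, l + n->val + r);        // path bending at n
    return n->val + std::max(l, r);               // only one arm may continue upward
}

int maxPathSum(Node* root) {
    int best = INT_MIN;
    downSum(root, best);
    return best;
}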

prior knowledge can make you look brainy: algo IV

After solving the min-stack problem, I found something paradoxical, even nonsensical — someone said this question is considered “easy”, but if you have not seen it before then you may find it extremely hard. I thought O(1) worst case was impossible. I have experienced many similar situations where prior knowledge can make you look very brainy.
  • example — detect loops in a singly linked list, where the loop could be far out.
  • example — reverse an extremely long singly linked list which breaks any recursive solution, so you must use non-recursion
  • example — in a regular unsorted binary tree (every node has up to 2 child nodes and no uplink to parent) how to print all nodes in-order but with O(1) additional space and O(N) time
I feel west coast (and Bloomberg) tend to ask this type of questions because they expect their candidates to study and acquire the knowledge before applying.
If you don’t study, and hope to come up with a reasonable solution on the spot, then you must be extremely intelligent or lucky. I think the math Olympiad gold medalists also need to study before their competitions.

tech design “debate” with mgr

Some pointers from Rahul and other colleagues

  • [GS] pick your battle
  • manager will be the one holding the baby after you leave the team.
  • I feel I need to pay attention to the para-linguistics, and not focus solely on the technical side
  • I tend to hold strong views on design question that aren’t really major. I tend to argue and hold my ground even when it’s really a minor design decision. I appear stubborn and argumentative when I suspect the decision makers have not fully understood my points. Once I see that manager completely understands my arguments, I would stop pushing.
  • I feel i should put a limit on how much time I’m “costing” my mgr. After mgr has spent, say, 10 minutes listening to my points, I should probably “give up”.
  • [MS] Mgr probably hopes that I accept the collective decision and be “happy”, not reluctantly ..
  • [MS] need to avoid giving the impression that I have made up my mind about which design is best and only here to persuade.
  • [MS] need to avoid giving the impression that I’m not really open to new input
  • [MS] try to show unbiased .. without a favorite, even though my proposal is clearly my favorite
  • I hope to strike a balance between striving for a better design, and minimizing conflict
  • Try to detach the egos from the competing designs

order partially filled but closed

Three scenarios (assuming a big order to sell 9M shares):

  1. client doesn’t like the partials so far, and cancels the rest of the order
  2. IOC limit order is partially filled because at the requested price level there’s not enough quantity
  3. IOC market order is partially filled since there’s not enough quantity.
    • This is uncommon, but possible for an illiquid stock.

order slice^execution: jargon

An execution is also known as a fill, often a partial fill.

  • A slice is part of a request; an execution is part of a response to some request

A slice can have many fills, but a fill is always for a single request.

  • An execution always comes from some exchange, back to buy-side clients, whereas
  • A request (including a slice) always comes from upstream (like clients) to downstream (like exchanges)
  • Slicing is controlled by OMS systems like HFT; Executions are controlled by exchanges.
  • Clients can send a cancel for a slice before it’s filled; Executions can be busted only by exchanges.

generate all abbr starting from longest.. +! recursion

I won’t insist on relative ordering among the shortest.

Idea 1 — Start with the longest abbreviation i.e. the original string S, assuming 5 characters.

  1. populate the smallHM with the original word
  2. copy every char except the first. Save into bigHM, then print/process this abbreviation.
  3. copy every char except the 2nd and print
  4. ..
  5. copy every char except the last. Now we have 5 strings in bigHM (a Level-4 hashmap), each length S-1=4
  6. make smallHM point to bigHM object; point bigHM to an empty hm
  7. now take a string from smallHM (Level-4 collection) and generate 4 shorter strings and save them in bigHM (a Level-3 collection), each length S-2=3
  8. now take 2nd string from Level-4 …
  9. After we finish Level-4, we have generated 20 strings in Level-3, but there are only 10 distinct items! so we need a L3 hashmap.

 

inserting interval #merging #80% done

Q (Leetcode): Given a set of non-overlapping intervals, insert a new interval into the existing intervals (merge if necessary) and print the updated list of intervals. The intervals are given as a vector sorted by start time.

–analysis–

Now I feel the #1 main data structure is a doubly linked list (dlist) of Segment objects:

  • { segment_left_mark,
  • ptr to next node, ptr to prev node
  • optionally a (bool or) enum having A/B, where A means current segment is AboveWater (an interval) or BelowWater i.e. a gap}.

Every time this dlist is modified, we would update a “helper container” — a tree of node pointers, sorted by the segment_left_mark value. The tree helps successive inserts. However, if each insert(vector intervals) call receives a sorted vector, then we can binary search the vector and don’t need the tree.

First, binary search to locate the left mark among all existing marks. Ditto right mark. Based on these 2 results, there are many cases.

  1. done — Case (simple) both fall into the same existing interval. No op
  2. done — case (simple) both fall into the same gap segment. Create 2 new segments and insert into the dlist
  3. done — case (simple) one boundary falls into a gap, the other into an adjacent interval — just adjust the segment_left_mark without inserting a new segment
  4. done — case — bridge: both boundaries fall into different intervals. Adjust segment_left_mark of 2 affected segments, then link up the two to skip the intermediate segments
  5. done — case — wipeout: both boundaries fall into different gaps, wiping out at least 1 interval.
  6. done — case (most complex) — one falls into an interval, the other into a non-adjacent gap.
  7. case — incoming interval left boundary is lower than all boundaries, but right boundary falls into some segment
  8. case — incoming interval is very low
  9. case (special) — if an interval becomes adjacent to another, then merge the two.

Need a sorted tree of all marks + array of segments. Redundant but helpful.

Each segment (interval or gap) is represented by {left mark, right mark} where left <= right. I will save the segment objects into (a linked list and) an array. Even elements are interval objects and odd elements are gap objects. Now superseded by the dlist.

I think this problem is all about corner cases. Perhaps start with the complex cases which will take care of the simpler cases. No need to pass Leetcode tests. Due to the pointer complexity, I prefer python.

https://github.com/tiger40490/repo1/blob/py1/py/linklist/insertInterval.py is my solution but I dare not test on Leetcode

staircase problem #CSY@Bbg

Q (bbg question posed to CSY): given a staircase of height N (eg 3), you can reach it by three steps 1,1,1 or two steps 1,2 or 2,1, or a single step of 3. Generate all paths for a given N.

CSY gave a recursive solution and the interviewer asked “Can you find a non-recursive solution?”

I feel this is simpler than AQR factorization and the CombinationSum problems.

Is this the same as N-boy-split? No, the split is harder. With the split, ABC can have a “step” of AC.

easy to test — https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/staircase_CSY.cpp is my tested solution.

Q: why path count == 2^(n-1)?
Answer from CSY: f(1)=1, f(2)=2, f(n) = f(n-1)+f(n-2)+…+f(1) = 2^(n-1)
A: For a staircase of n, you can take a step of 1 and have f(n-1) paths, or you can take a step of 2 and have f(n-2) paths…

–jargon file:

a STEP from LEVEL 0 to 2 has LENGTH=2, and skips level 1; a PATH consists of 1 or more STEPS; a FORMULA is a complete PATH, and always starts at level 0 and may hit intermediate levels #2, #5, #6 …

–BFT solution. Suppose N = 5. We model each path as a directory path from root. Each queue item is a path represented by a linked list or vector of capacity N.

I hope this solution avoids duplicates… Yes it does:)

  1. enqueue all “first levels” /1; /2; /3; /4; /5
  2. then dequeue and visit first path i.e. “1”. In this case, there are 4 levels remaining in the staircase, so enqueue /1/1;  /1/2;  /1/3;  /1/4
  3. then dequeue and visit 2nd path in the queue i.e. “/2”. enqueue /2/1; /2/2;  /2/3

Every time (after dequeue) we see a path has total length==N, we print it out as a formula.

–DP iterative solution to be implemented

  1. build the formulas for a length-2 staircase: /1/1 + /2 i.e. two formulas in FormulaArrayForStaircase2 of “fa2”
  2. then build the formulas for a length-3 staircase — fa3 = firstStepOf2 -> fa1 + firstStepOf1 -> fa2
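A minimal sketch of this DP idea, generalized so that a height-n staircase combines a first step of any length s with the formulas for height n-s:

#include <string>
#include <vector>

// fa[n] = all formulas for a height-n staircase, built bottom-up
std::vector<std::string> formulas(int N) {
    std::vector<std::vector<std::string>> fa(N + 1);
    fa[0] = {""};                        // one empty formula for height 0
    for (int n = 1; n <= N; ++n)
        for (int s = 1; s <= n; ++s)     // first step of length s
            for (const auto& rest : fa[n - s])
                fa[n].push_back("/" + std::to_string(s) + rest);
    return fa[N]; // for N=3: /1/1/1, /1/2, /2/1, /3 — i.e. 2^(N-1) formulas
}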

max rectangle ] histogram

Q: https://leetcode.com/problems/largest-rectangle-in-histogram/description/. Given N possibly recurring non-negative integers representing the histogram’s bar heights, and given the width of each bar is 1, find the area of largest rectangle in the histogram.

Visually well-defined problem. Kind of naturally-occurring. Very simple data structure. No O() requirement, so I will just try my own solution.

https://github.com/tiger40490/repo1/blob/py1/py/array/maxHistoBox.py is my solution. 100% passed on Leetcode.

==== analysis — heavy on data structure design.

Key insight — one scan to update a clever data structure.

key insight — data structure is not per bar, but per height!

For every bar J, there exists an enclosing max-rectangle of J’s height. We can just compare all of these rectangles.

We might start with two extreme candidates
1) the peak — whose enclosing rectangle is likely slender — O(N) one scan to find all the peaks
2) the lowest bar — whose enclosing rectangle has width N — O(N)

If we paint the histogram as a binary matrix, then this is equivalent to another problem — max all-black submatrix #DP #zhurong — but I think there exist better solutions like O(N logN) or O(N*S) …

–algo with O[N*S] where S:= #unique heights. The binary search doesn’t show up as logS.

A pre-scan to get all distinct heights. For each distinct height, we maintain a RunRecord object {bestRun, currentRunStart, height}, in a sorted map {height -> record}. In py, I can use a pre-sorted vector of Records, sorted on height

In main scan, As we encounter a new bar of height J, we update these records.

  • if not falling or rising
    • record-J and each record-H below J must have a current run … extend that run (no-op)
  • if rising from height H
    • each record up to H must have a current run … extend that run by no-op
      • iterate the treemap up to H
    • iterate treemap from H+1 to J. start a new run for each record
  • if falling from height P to J
    • record-J and each record-H (where H <J) must have a current run … extend that run
    • iterate treemap from J+1 to P … each record-K must have a current run, indicated by a valid currentRunStart; this record’s current run has just ended. We update bestRun and put an invalid value into currentRunStart.

At end of the main scan, every record has a bestRun i.e. the duration. I can then calc the area under each bestRun and return the max.

personal projects: any ROI ] salary@@

See also

  1. most(new/old)specializations turn out non-strategic
  2. gzTsn category
  3. t_gzSpecialize11 tag

Q: given the limited spare time we working fathers have, what specific (learning?) pet projects can enhance our salary?
A: invariably poor ROI, mostly a mirage. Moving up is the most common success story.

  • xp: MSFM?
  • xp: c++
  • xp: low latency engineering knowledge, including sockets, kernel, pthread …
  • prepare for west coast high-end interviews
  • coding practice
  • data science

find min substring containing(but not limited to)all my chars

Q (leetcode): Given a string Haystack and a string T, find the minimum window in Haystack which contains (at least) all the characters in T according to the frequencies. Time complexity O(n). Eg: minWindow(ccbabccbabcb, bbc)==bcb

If there is such a window, you are guaranteed that there will always be only one unique minimum window in Haystack. <– I thought this guarantee means something but it doesn’t.

Without loss of generality, I will assume the chars are a-z. I believe those Leetcode corner cases will use only 3 chars

—analysis—

For a single-string problem, use an array indexed by ASCII code. I can convert T to such an array to store the required frequencies (reqFrq)

I can construct a shadow array, same length as Haystack with these payloads:

  • if the hay is not in reqFrq, then payload is a special value like nullptr
  • if the hay is in reqFrq, then….?

–SolSW: sliding-window based

  1. Scan Haystack from left and keep count of actual frequency (check against reqFrq each time). I will inevitably find the earliest good window. By construction, both ends of this window are in reqFrq.
    • Note the entire haystack is more than a good window.
  2. Now I slide the fixed-sized window. If I find another good window, with extra chars on the left, then I have found a shorter window, so I truncate my window on the left
  3. continue Step 2
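A python sketch of SolSW, using the standard expand-right/shrink-left variant (Counter-based; my own illustration, not exactly the steps above):

from collections import Counter

def min_window(haystack, t):
    need = Counter(t)       # reqFrq: required frequencies
    missing = len(t)        # how many required chars still missing
    best, left = "", 0
    for right, ch in enumerate(haystack, 1):   # right = index just past ch
        if need[ch] > 0:
            missing -= 1
        need[ch] -= 1
        if missing == 0:                       # window [left, right) is good
            while need[haystack[left]] < 0:    # drop surplus chars on the left
                need[haystack[left]] += 1
                left += 1
            if not best or right - left < len(best):
                best = haystack[left:right]
            need[haystack[left]] += 1          # give up the leftmost required char,
            missing += 1                       # then search for the next window
            left += 1
    return best

print(min_window("ccbabccbabcb", "bbc"))  # bcb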

max all-black submatrix #ZR

Same problem as https://leetcode.com/problems/maximal-rectangle/description/

Q: Given a 2D binary matrix filled with white(0) and black(1) cells, find the largest all-black rectangle. See raiserchu’s mail on 12 Sep 13. There is a clever DP solution, probably O(NN).

—Analysis:

Worst case — A standard chess board? We can’t do better than O(N^2) since there are N^2 cells to read.

— sol3 O(NNN) new idea based on max rectangle ] histogram: treat the top J:=2 rows as a histogram, find the max rectangle therein, then J:=3 …

  • Scan #1 O(NN): build a shadow matrix “histogram” where each integer in the cell is the height (possibly 0) of the bar anchored therein. In other words, if a cell value=5 then there are exactly 4 consecutive black cells above this (black) cell
  • Scan #2a: for each row in the shadow matrix, we run the proven algo in O(NS). Note there’s no help from the previous row :(
    • S:= #unique heights
  • Scan #2 := the entire scan of all rows. so worst case we hit O(NNS)

Can we do better by reducing scan #2a complexity to O(N), by making use of the previous row results?
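A python sketch of sol3 under my assumptions (matrix of 0/1 ints; the per-row “Scan #2a” reuses the simple per-height O(N*S) histogram scan):

def maximal_black_rectangle(matrix):
    if not matrix:
        return 0
    heights = [0] * len(matrix[0])
    best = 0
    for row in matrix:
        for c, cell in enumerate(row):     # Scan #1: grow or reset bar heights
            heights[c] = heights[c] + 1 if cell == 1 else 0
        for H in set(heights) - {0}:       # Scan #2a: O(N*S) histogram scan
            run = 0
            for h in heights:
                run = run + 1 if h >= H else 0
                best = max(best, run * H)
    return best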

— sol4:

Scan #1 O(NN): build a shadow matrix “histogram” where each integer in the cell is the height (possibly 0) of the bar anchored therein

Scan #2 for each cell, remember the currentRunStart column index i.e. from that column until current column, we have an all-black box of height == current bar height

— My brute force solution 1: Each rectangle is identified by 2 vertices, i.e. 4 integers. Without loss of generality, we require the “high” corner to have higher x-coordinate and higher y-coordinate than the “low” corner. (We can assume the y-axis runs upward.) With this O(N^4) nested loop we can iterate over all possible rectangles:

Lock low corner
Move high corner in typewriter (zigzag) steps i.e.
  hold highY and move highX step by step
  process the (series of) resulting rectangles
  increment highY and repeat
Move the low corner in typewriter steps and repeat

Key observation: any “bad pixel” disqualifies every rectangle containing it.

— My solution 2:
1) Save all bad pixels in SQL table Bad, 
indexed by x-coordinate and 
indexed by y-coordinate

Table can be in-memory. Many sorted maps (skiplist or RB tree) support range selection. Gelber interviewer showed me how to use a SQL table to solve algo problems.

2) Follow the nested loop to iterate over all possible rectangles, either disqualify it or save/update its area in maxFound. Here’s how to disqualify efficiently:

For each rectangle under evaluation, we have 4 numbers (lowX, lowY) and (highX, highY).

select 1 from Bad where lowX < Bad.x and Bad.x < highX and lowY < Bad.y and Bad.y < highY

If any row is hit, then the rectangle is disqualified. In fact all high corners at the same horizontal level are disqualified too, so in the nested loop we skip ahead to increment highY

3) At end of nested loop, maxFound is the final answer.
— my earlier solution 1:
1) Iterate over all possible rectangles and save them in a SQL table Rec, indexed by the 4 integers. No need to validate each (time-consuming). Next we start elimination
2) Iterate over all bad pixels. For each bad pixel (X, Y) found, delete from Rec where Rec.lowX < X and X < Rec.highX and Rec.lowY < Y and Y < Rec.highY

Now all remaining rows are valid candidates
3) max ( (x2 – x1)*(y2 – y1) )
— Here’s my partial solution:
We can effectively ignore all the “good pixels”.

1) Look at the x coordinates of all bad pixels. Sort them into an array. Find the largest gap. Suppose it’s between x=22 and x=33. Our candidate rectangle extends horizontally from 23 to 32, exactly. Notice there’s no bad pixel within this vertical band [1].
2) Look at the y coordinates of all bad pixels. Sort them into an array. Find the largest gap. Suppose it’s between y=15 and y=18. Our candidate rectangle extends vertically from 16 to 17, exactly.
[1] This candidate rectangle can expand all the way vertically, which may yield a bigger rectangle.
Ditto horizontally.

##observations@high-volume,latency sensitive eq trading sys #CSY

This is probably the biggest sell-side equity order-management-system (OMS) out there, written in c++11. Daily order volume is probably the highest among all investment banks, presumably 6 to 7 figures based on my speculation, though a lot of orders get canceled, rejected or unfilled. I can’t reveal too many internal details due to compliance.

In contrast, GS used to get about a million individual trades a day, probably not counting the high-frequency small trades.

  • I have not seen a message queue so far but they could be hidden somewhere. Earlier I heard people telling me Tibco (and similar messaging middlewares) were popular in fixed income and other trading but now I doubt it. Queues add latency.
    • We do use some pub-sub MOM but not for order messages therefore not part of order flow.
  • I haven’t noticed any locking or condition variable so far. I think single-threaded mode is faster than synchronized multi-threading. Multiple instances of the same software run in parallel across machines. I think this is in many ways better than one big monolithic process hosting many threads. We have 4 threads per instance in some cases.
  • socket programming is not needed in any module. I believe the applications communicate via FIX, SOAP etc, on top of well-encapsulated TCP library modules.
  • RDBMS is loaded into cache at Start-of-Day and seldom accessed intra-day. I confirmed it with an ex-DBA colleague
  • no garbage collection like in java and dotnet
  • heavy use of CRTP. I don’t remember seeing many virtual functions.
  • The most important message is the order object, represented by a FIX message. The order object gets enriched and modified by multiple functions in a chain. Then it is sent out via FIX session to the next machine. As in any OMS, the order object is stateful. I still don’t know where the order objects are saved. I would think they are persisted somewhere so a crash won’t wipe out pending orders.
    • (Elsewhere, I have seen very lean and mean buy-side OMS systems that don’t persist any order! After crash, it would query the exchange for order states.)
  • The 2nd most important message is probably the response object, represented by a FIX msg. If there are 100,000 order objects then there are roughly 300,000 response objects. Each order generates multiple responses such as Rejection, PendingNew, New, PartialFill, PendingCancel, Cancelled… Response objects probably don’t need to be persisted in my view.
  • The 3rd most common message is the report message object, again in FIX format. Each order object probably generates at least one report, even if rejected. Report objects sound simple but they carry essential responsibilities, not only regulatory reporting and client confirmations, but also trade booking, trade capture… If we miss an execution report the essential books and records (inventory, positions..) would be messed up. However, these reports are not so latency sensitive.

airport gate #maximum people alive

https://careercup.com/question?id=5153263227764736 defines the problem

Q (Amazon): In a city, the years of birth/death of people who were born and died between year 1900 and 2000 are given. Write an algorithm to find the year in which the maximum number of people were alive. Note the years are not unique and not sorted

Similarly,

Q (FlexTrade): For an airport gate system, flight arrival/departure times are given for yesterday. What’s the maximum number of gates required at the busiest time?


Solution1: O(N logN) merge-sort all timestamps, then scan it in one pass. If an arrival, then increment counter; if a departure then decrement it.
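A python sketch of Solution1 (my own illustration; I assume a death in year Y means the person was alive during Y, hence the Y+1 below, and the same one-scan works for the airport gates with arrival=+1 and departure=-1):

def busiest_year(births, deaths):
    events = [(y, +1) for y in births] + [(y + 1, -1) for y in deaths]
    events.sort()                       # O(N logN) merge of all timestamps
    alive = best = best_year = 0
    for year, delta in events:          # one scan: increment/decrement a counter
        alive += delta
        if alive > best:
            best, best_year = alive, year
    return best_year, best

print(busiest_year([1900, 1901, 1950], [1948, 1951, 1990]))  # (1901, 2)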

??Solution2 (assuming arrival times are pre-sorted) Using hashtable, keyed by arrival time. Value is a count of flights arriving at that time. Every arrival creates or updates in the hashtable. Every departure deletes or decrements. Maintain a separate total count.

I think we still need sorting.

Solution3: O(N). Use array if all the years are small integers. (Regular timestamp is also small integers — 0 to 2355 in steps of 5.) Fill all arrival/departure events as +1/-1 in an array indexed by year.

Longest Parentheses run with multiple hierarchies

Q (Leetcode): Given a string containing nothing but the characters ‘(‘ and ‘)’, find the length of the longest valid (well-formed) parentheses substring.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/str/maxParensRun.cpp is my solution 100% tested on Leetcode

–My Single-iteration solution:

Challenge is data structure. I ended up with 2 data structures to be updated during the iteration

  1. A stack (holding openers’ index values) to locate the matching openers
  2. an array to save “scores”

For each closer, I will record the position of the matching opener, then compute the distance (minimum two).
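Here is a python sketch of the same single-iteration idea, simplified: instead of a separate score array (my 2nd data structure above), the stack is seeded with a sentinel index -1 marking the last unmatched position. This is an assumption-laden variant, not my exact c++ solution:

def longest_valid_parens(s):
    stack = [-1]              # index of the last unmatched position (sentinel)
    best = 0
    for i, ch in enumerate(s):
        if ch == '(':
            stack.append(i)   # remember the opener's index
        else:                 # a closer: try to match the most recent opener
            stack.pop()
            if stack:
                best = max(best, i - stack[-1])   # distance past last unmatched
            else:
                stack.append(i)                   # unmatched closer: new barrier
    return best

print(longest_valid_parens(")()())"))  # 4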

simple Windows c++ (python)set-up #Eclipse is better4java

See also easiest windows GCC installer #c++17

GitBash + StrawberryPerl is the best combo for me on Windows. I put g++ commands in bash scripts to automate my GCC build and test. Note my typical project has at most 20 source files.

GitBash + StrawberryPerl + Notepad++ is better for me than any c++ IDE like Eclipse CDT (4 installations), Bloodshed (4), CodeBlocks (1), NetBeans…

  • I don’t need code completion..
  • I don’t need to jump into a type definition.
  • I don’t need my debugger to be visual. StrawberryPerl includes gdb
  • I use Notepad++ for text search across hundreds of files
  • I use Notepad++ for search/replace
  • I use diff and git-diff to see code changes
  • I used github for version history

I’m such a die-hard fan of command line that the only GUI development tool I use is notepad++ and it’s optional. Since 2015, my Linux c++ code editor has been VIM, which I used on large codebases consisting of 1000+ source files.

For about 2 years EclipseCDT was my default choice; then Bloodshed became my default as it is simpler than EclipseCDT. However, both are still over-complicated. I didn’t have a favorite until 2017, when I discovered Notepad++/GitBash/StrawberryPerl

Q: why do I use Eclipse for Java but not for c++?
A: Eclipse is more problematic, more messy for c++ than for java. The benefits (convenience, automation..) are offset by the high TCO (total cost of ownership). For example, java debugger, java code navigation, java renaming/refactor all work better than their c++ counterparts.

Q: how about MS VisualStudio?
A: I managed a large c++ codebase on MSVS 2015 in 2015-2016, about a year+. Too complicated, and the c++ experience is worse than MSVS for c#. I would never prefer it over my command line set-up, if given a choice.

By the way, GitBash works well with a standard python installation, but I do recommend the tweak at https://stackoverflow.com/questions/32597209/python-not-working-in-the-command-line-of-git-bash

easiest MSWindows GCC installer #c++17

http://strawberryperl.com is an easy installer for Perl and it bundles GCC. I was able to compile in c++17 mode:

g++ -std=c++17 my.cpp

My experiences with cygwin and mingw were both horrible.

  1. cygwin — too bulky and unnecessary, and affected my existing Windows features. I tried cygwin once many years ago and quickly concluded that it was designed for people unlike me. Those people would have only positive experiences with cygwin, which I don’t share.
  2. mingw — supposed to be minimalist, but I tried installing it at least 3 (up to 6) times and it was always painful.

In contrast, I installed strawberry about 3 times and once on a friend’s laptop … always quick and easy. No tweaking required. GCC works out of the box.

Through almost 10 installations of GCC on various windows laptops, I have come to the obvious conclusion.

git-stash survival guide

Two simple steps:

git stash #no args

# Now git diff will no longer show the uncommitted changes:) …later

git stash pop

# Now git diff shows the original uncommitted changes

If git-stash-pop hits conflict on 1 file out of 3

  • the one file content will show >>>>. I will leave it alone for now
  • the other 2 files would show the original uncommitted changes. git-diff-HEAD would show these changes.
  • now I can use git-reset-HEAD and git-diff

reference(instead of ptr) to smart ptr instance

I usually pass smart pointers by value (copy-constructor or move-constructor), just like copying a raw ptr.  Therefore the code below looks unnatural:

unique_ptr<Trade> & ref2smartPtr

Actually it is rather common because

  • As Herb Sutter suggested, when we need to put pointer into containers, we should avoid raw ptr. Unique ptr is the default choice, and the first choice, followed by shared_ptr
  • I often use unique_ptr as a map value. The operator[] return type is a reference to the value type, i.e. a reference to unique_ptr
  • I may also put unique_ptr into a vector…. ditto for vector operator[]

ways to create feature branch

A few ways to create new feature branch

If you want the branch to show up in Stash, then it is probably simpler to create it on Stash website (choose the correct parent branch …). After that

git fetch # no arg after "fetch"... will show the new branch downloaded to local repository as "remote branch"
git branch -a # list all (local and remote) branches
git checkout feature/new_br_from_stash # creates a matching local branch

Using command line without Stash

git branch feature/my_br # creates a brand-new branch named feature/my_br, in the local repo only.
git branch # lists all available branches in local repo
git checkout feature/my_br # selects my_br as the default branch.
git branch -m feature/my_BR # renames the default branch to feature/my_BR, in the local repo only.
git push origin feature/my_BR # creates and copies branch to Stash, assuming branch doesn't exist there.

(Note Git branches are designed to be disposable, so in theory you could create a branch for a small change, then merge and throw it away. No best practice or recommendation yet.)

All feature branches, including temporary throw-away branches, can be deleted from Stash.

git | fork auto-sync

https://confluence.atlassian.com/bitbucketserver/keeping-forks-synchronized-776639961.html says

Fork syncing helps you to keep your fork in Bitbucket Server up-to-date with changes in the upstream repository. Bitbucket Server can do this automatically for all branches (and tags) you haven’t modified in the fork.

I guess forking (and pull requests) is not a feature of git per se but a feature of Bitbucket and GitHub servers. I think the server gets notified upon a new commit and runs a fast-forward git pull.

git | rebasing: basics

For me, rebase is the procedure to re-sequence commits… A simple form of history rewrite.

git pull --rebase # moves your local commits to the end of the linked list
How do I merge multiple Work-In-Progress commits into a single commit?

The following will take the last four commits and give you an interactive session that allows you to (optionally) merge them.

git rebase -i HEAD~4

More details are here http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html

git | reverting a file: various ways

I didn’t like git-revert as it complicates history. See https://www.atlassian.com/git/tutorials/undoing-changes/git-revert

How do I revert a file to remove bad commits?

git checkout HEAD -- <file or directory>

This use of git-checkout is unrelated (as far as I know) to the commit-level checkout. This solution is officially /sanctioned/ … Some git commands would suggest …

(use "git checkout -- <file>..." to discard changes in working directory)

How to Revert uncommitted changes

git reset --hard HEAD  # reverts tracked files back to the head
git clean -fxd .   # deletes untracked files in current dir. You may want to cd to the repository base first

[Victor] My vote for best explanation of git-reset goes to https://git-scm.com/blog/2011/07/11/reset.html. Detailed, but more readable than the manual. Many examples are useful to me.

Warning — if you want to temporarily play with an older version of file1.java, then be careful. Make sure you save the current tip of your branch. If you git-reset-hard then you may lose that tip!

How to Revert branches back to origin

git fetch origin
git reset --hard origin/yourBranch

git-cherry-pick commits to push to main branch

How to Cherry-Pick Commits to Push to main ‘develop’ branch

Say you are working on a feature branch with other people, with you all making commits and pushing to the remote replica of this branch. At some point you will want your commits merged with develop/production. Git does provide a method of choosing which commits you would like to push. The easiest way to explain this is with an example. We assume that all the following commands are run in a command line tool.

git checkout feature/commodity
git log -3 --pretty=oneline #look at the 3 most recent commits on the current branch - in a nicely displayed format
$ git log -3

Looking at this log, I want to push only the latest change to ‘develop’ branch (via a pull request), the commit associated with the e5a67d7088622dd8821dfdd8e2fafa2dc938b75c hash number. Keeping a record of this hash number I then create a new feature branch from ‘develop’ branch in Stash. Assume this new branch is called feature/cherry_pick

git fetch #update local repo to get this new branch info
git checkout feature/cherry_pick #switch to this new branch, at the moment is exactly the same as origin/develop
git cherry-pick e5a67d7088622dd8821dfdd8e2fafa2dc938b75c #where the hash number is the commit I actually want to merge into 'develop'
git commit #usually unnecessary -- a clean cherry-pick commits automatically to local feature/cherry_pick; only needed after resolving conflicts
git push #push this change to the remote feature/cherry_pick branch

You can now create a pull request in Stash to merge this cherry-picked commit into ‘develop’

git | diff tools #BeyondCompare,Stash..

Git – diff tools (BeyondCompare, Stash …)

–Method: git diff

This is a basic but versatile tool. It looks at two commits you provide. Note each tag represents a commit. Each branch name also represents the commit at the tip of the branch.

–Method: Stash/bitbucket comparison between branches/tags

Stash supports a directional PR diff, distinct from the git-diff command. Based on the pull-request concept, Stash treats your two commits as a FROM and a TO and shows the sequence of commits between them, as if in a pull-request.
If you click the arrow to reverse the direction, then Stash will always show a different diff result!
  • If from A to B there’s a fast forward merge, then Stash PR diff from B to A would show zero commit, since such a pull request would be empty.
  • If between A and B there’s no fast-forward merge, this PR-diff would show fewer commits than git-diff.
In summary, PR-diff is directional; git-diff is directionless. This directional diff feature is kind of unique to Stash and therefore valuable.

–Method: have two local repos (clones) to check out different branches

Figure out how to clone two physical repos. After that you could use many conventional diff tools.

–Method: git diff using Beyond Compare 3

http://www.scootersoftware.com/support.php?zz=kb_vcs#gitwindows worked for me, including a 3-way merge GUI —

git mergetool feature/my_br6 origin/feature/my_br6 -- buildChoops.sh

–Method: git diff using Winmerge

see http://stackoverflow.com/questions/1881594/use-winmerge-inside-of-git-to-file-diff, updated June 2015

I hope there are similar solutions for other diff GUI tools.

–Method: download remote version as complete file, then diff that against local file

Here’s my favorite choice to diff a remote branch version vs a local branch version:

mv buildChoops.sh buildChoops.local
git checkout origin/feature/my_br6 buildChoops.sh # download the file from the remote branch into the current directory locally
... now run any diff tool between the two files

tidy up messy commits, for pull request

Git – tidy up messy commits, for pull request

A common situation: you make many incremental, quick and dirty commits on your branch, undoing and redoing changes. You don’t need to push the entire history to a shared branch like Develop.

Many solutions exist. Git offers many commands to rewrite history on a branch. Many git users use them regularly. We could use them to clean up the commits on a feature branch before pulling them into a pull request. Here are some of the methods.

This will rewrite the history of your branch, you should only do this if no one is sharing your branch (or you don’t care about them).

1st approach – rebase, if your history is clean

This example will let you interactively select from the last four commits.

git rebase -i HEAD~4
# Alternatively, specify the last-good commit:
git rebase -i e8a9cf093   # Rebase all subsequent commits

2nd approach – If your history is too complicated (many merges etc.)

git tag backup                      # we do this so that we don't lose our history
git checkout -B temp origin/develop # checkout a clean temporary branch from develop
git merge --squash <branch>         # merge our old branch in, squishing all the commits
git commit
git branch -f <branch> temp         # force the branch to point at the new commit

3rd approach – git reset and commit again

Suppose you have 5 commits to squash,

git reset --soft <a previous commit hash, or something like HEAD~5> # rollback to the named commit.
# reset --soft modifies only commit history, not work tree or index.
# Now you can look at the files involved in those 5 commits, staged for the next commit
git diff --cached --name-status
git commit -m 'your msg'
# git push ... and create pull request

All methods so far assume the complicated commit history is in local repo not on a Stash branch. If your Stash branch has stuff you want to pull into Develop, but your Stash branch has complicated history, then …

4th approach – create a pull-request candidate branch

# create the candidate_branch from Develop, on stash
git tag backup                      # we do this so that we don't lose our history
git fetch                           # will show the new branch
git checkout candidate_branch
git merge --squash old_branch       # merge our old branch in, squashing all the commits
git commit                          # the default msg shows list of files. you might want to uncomment that in the msg.
git push origin candidate_branch    # now the candidate branch is up-to-date on Stash. Ready for pull request.

5th approach – rewrite public history

Dangerous – Know what you are doing
# clean up your local branch as illustrated above
git push --force origin   your_Stash_branch    # overwrite the Stash branch and overwrite public history - Dangerous!

6th approach – edit commit message after many merges, using git-filter-branch

The git-rebase and git-cherry-pick commands often fail in the presence of a complicated history of merges, file deletes etc. Try filter-branch.

Dangerous – can modify many commit messages by accident
# clean up the branch xyz
git checkout -b xyzRewrite # Always do the rewrite on a cloned branch please
git filter-branch --force --msg-filter 'sed "s/addYENRates/[CFMQUANT-262] addYENRates/" ' e635d034ea8..HEAD

At end of the command, 1st hash is the parent of the flawed commit. 2nd hash must be HEAD (or perhaps the name of a branch).

Command should tell you the branch is rewritten. If no commit message matches the pattern, then the command would tell you the branch is left unchanged, so it’s safe if you have a typo that matches nothing.

In either case, you can then run git diff against the original branch and verify they are identical content-wise.

In this real example we were lucky the message “addYENRates” was rather unique. If it shows up on 2 commits, then both would be rewritten.

overcoming exchange-FIX-session throughput limit

Some exchanges (CME?) limit each client to 30 orders per second. If we have a burst of orders to send, I can see two common solutions: A) upstream queuing B) multiple sessions

  1. upstream queuing is a must in many contexts. I think this is similar to TCP flow control.
    • queuing in MOM? Possible but not the only choice
  2. an exchange can allow 100+ FIX sessions for one big client like a big ibank.
    • Note a big exchange operator like nsdq can have dozens of individual exchanges.

Q: is there any (sender self-discipline) flow control in intranet FIX?
A: not needed.
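For illustration, a minimal python sketch of upstream queuing with a sliding-window throttle. Everything here (ThrottledSender, session.send, the 30/s limit) is hypothetical:

import time
from collections import deque

class ThrottledSender:
    def __init__(self, session, max_per_sec=30):
        self.session = session            # hypothetical FIX session with .send(msg)
        self.max_per_sec = max_per_sec
        self.sent_times = deque()         # timestamps of sends in the last second
        self.backlog = deque()            # upstream queue absorbing the burst

    def send(self, msg):
        self.backlog.append(msg)
        self.drain()                      # a real system would also drain on a timer

    def drain(self):
        now = time.time()
        while self.sent_times and now - self.sent_times[0] > 1.0:
            self.sent_times.popleft()     # slide the 1-second window
        while self.backlog and len(self.sent_times) < self.max_per_sec:
            self.session.send(self.backlog.popleft())
            self.sent_times.append(now)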

rotate square matrix 90-degree #swap

https://leetcode.com/problems/rotate-image/discuss/18872/A-common-method-to-rotate-the-image shows

/*
 * clockwise rotate
 * first reverse up to down, then swap the symmetry 
 * 1 2 3     7 8 9     7 4 1
 * 4 5 6  => 4 5 6  => 8 5 2
 * 7 8 9     1 2 3     9 6 3
*/

For anticlockwise, personally I would reverse horizontally, then same swap

This technique relies on swap(). What if the payload objects are blackboxes (or immutable) so you can’t even see contents? Just swap the 2 pointers.
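A python sketch of the clockwise trick quoted above (reverse up-to-down, then swap across the main diagonal), operating in place:

def rotate_clockwise(m):
    m.reverse()                    # first reverse up to down
    for i in range(len(m)):        # then swap the symmetry (transpose)
        for j in range(i + 1, len(m)):
            m[i][j], m[j][i] = m[j][i], m[i][j]

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
rotate_clockwise(m)
print(m)  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]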

–My own solution is also reasonable, possibly more versatile. Suppose the matrix is N by N

m=N-1. Start from [0 0]: find its new home, move in, and kick out the old tenant at [0 m]… Each group has 4 nodes: a -> b -> c -> d -> a

See also spiral … If N is odd, then the kernel node stays unchanged. If N is even, then the innermost shell is a 2×2 matrix.

aggregation unit

I now believe there are at least two purposes, not necessarily reflected in any systems I worked on.

  • Purpose: FINRA regulatory reporting on aggregate short positions on a given stock like AAPL. Probably under Regulation SHO
  • Purpose: Self trades (also wash trades) that create a false impression of activity. I believe trading volume for AAPL would be artificially inflated by these trades. “Bona fide” trade reporting is expected. To deal with self-trades, a firm needs to exclude them in trade reporting. But what if a self-trade involves two trading accounts or two algorithms? Are the two systems completely unrelated (therefore not self-trade) or do both come under a single umbrella (therefore self-trade)? That’s why we assign an “aggregation unit” to each account. If the two accounts share an AggUnit then yes, it’s a self-trade.

O(1) space or O(1) search or O(N) sort : tricks

Every time I see O(1) space required on an array problem, I think of …. swapping.

Every time I see O(1) space required on a list problem, I ask is it a ….. linked list.

Every time I see O(N) time required on an array problem, I think of … radix sort, applicable to 64-bit integers, 64-bit floats and strings.

Every time I see O(1) search, I think of … hash table and radix array

overvalued analytics applications #CSDoctor

Is a GPS navigator necessary? Are side signal lights necessary? Some things are more necessary than others.

Trading system, risk systems, market data systems are mainstream. In contrast, I have found out that pricing analytics system is not really mainstream. Many buy-side and most smaller sell-side firms don’t use any pricing analytics beyond rudimentary derivations from raw market data.

There are also a number of sophisticated derivative and fixed-income analytics vendors including Bloomberg, Murex, Numerix.. These vendors focus on analytics so their customers don’t need deep expertise. OCBC’s quant team’s main job was validating the analytics offered by Murex.

Pricing analytics tools are “advisory” and not mandatory. The creators of those tools (like CSDoctor) tend to over-value their creations as if they are going to make the traders faster, safer, more profitable. In reality, traders can always choose not to use them.

As a contrast, take market data as an example – 80% of trading shops need to build or buy market data systems as they can’t operate without them.

Don’t use null/None to indicate empty

Many developers return None (python) or null (java), or put such a value in a container, to positively indicate an empty value.

This practice creates unnecessary ambiguity (rather than reduce ambiguity) because many built-in modules authors use None/null when they have no choice.  If you positively return this value, it’s harder to differentiate the various scenarios. This becomes an unnecessary ambiguity when troubleshooting in production.

In Java, I often create a dummy instance of a MyType and return it to mean “empty”. I think this is applicable to c++ too.

In python, I tend to return a negative number from a function that ordinarily returns  a MyType, since python functions can return type1 in ContextA but type2 in ContextB! Python list also allows unrelated types.
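A python sketch of the dummy-instance idea (Trade, EMPTY_TRADE and lookup are all hypothetical names):

class Trade:
    pass

EMPTY_TRADE = Trade()     # dedicated sentinel: "looked up, positively empty"

def lookup(trade_id, table):
    # returns EMPTY_TRADE rather than None, so a None coming out of some
    # built-in module can't be confused with our own "empty" result
    return table.get(trade_id, EMPTY_TRADE)

trades = {1: Trade()}
assert lookup(2, trades) is EMPTY_TRADE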

## some of my controversial decisions #home,imm,retire..

Hi Junli,

You don’t need to reply. This is my periodic review of “everything in my life”.

I have recently implemented a few controversial decisions about my career, investment, family..

(As an example, the biggest is moving back to U.S. alone and starting the green card process.)

I make major decisions carefully and slowly (unless decisiveness is needed), but an observer may say I’m not a good decision maker and point out my track record. Actually I don’t remember anyone pointing them out, not even my family members. The person who points a finger at my “unwise” decisions is the “judge” in my head…

Here are some of those controversial decisions

  • I will not give up Singapore citizenship, and I will retire in Singapore, relying on the Singapore government for my retirement. Singapore system is much more caring and efficient than China or U.S. systems.
  • I plan to work till 70 or older, perhaps for a token salary. I will keep up my interview skills.
  • I have stayed away from most of the new technologies — javascript, mobile apps, big data, social media, noSQL, block-chain … Instead, I have embraced the shrinking domain of c++
  • I feel my relationship and communication skills are not my strengths so through a series of trials-and-errors I have decided to stick to a technical career.
  • I’m staying in Bayonne, planning to buy my first home here. The schools are just above average.
  • I have always preferred home locations that don’t need a car.
  • At age 44 I decided to leave my family in Singapore and come to the U.S. to start the GC process

tried 3″hard”leetcode Q’s #tests !! 100%

I tried Q4, Q10, Q23.

Observation — they are not really harder in terms of pure algo. I found some “medium” questions actually harder than Q4/Q23 in terms of pure algo.

Besides the algorithm, there are other factors that make a problem hard. For me and my peers, coding speed and syntax are a real problem. So the longer my program, the harder the problem becomes. Some of the “medium” questions require longer solutions than these “hard” problems.

Logistics of instrumentation is another factor. Some problems are easy to set up and easy to debug, whereas 3D, graph or recursive problems are tedious to set up and often confusing when you try to debug with print’s.

There’s another factor that can make any “medium” problem really hard

priority queue 2 advantage over RBtree

  • binary heap is based on array — no memory footprint of pointer attributes/fields. Also cache friendly.
  • https://en.cppreference.com/w/cpp/algorithm/push_heap shows individual insert and delete_max are both logN in worst case
  • For mass-insert, per node is O(1) in heap. For RBtree, it takes logN tries to find the right insert position.
  • Heap reading max() is O(1). RBTree can achieve the same — we can locate the next max right after delete(), so delete() is still O(logN), but max() would be reduced to O(1).

I think removing max is O(logN) for both.
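A quick python demo of these complexity claims using the built-in binary heap (heapq is a min-heap; negate keys for a max-heap):

import heapq

nums = [5, 1, 9, 3]
heapq.heapify(nums)          # mass-insert: O(N) total, O(1) amortized per node
print(nums[0])               # peek min: O(1) -> 1
heapq.heappush(nums, 2)      # single insert: O(logN)
print(heapq.heappop(nums))   # remove min: O(logN) -> 1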

See lecture notes https://courses.cs.washington.edu/courses/cse373/02au/lectures/lecture11l.pdf and SOF post on
https://stackoverflow.com/questions/6147242/heap-vs-binary-search-tree-bst

pick java if you aspire 2be arch #py,c#

If you want to be architect, you need to pick some domains.

Compared to python.. c#.. cpp, Java appears to be the #1 best language overall for most enterprise applications.

  • Python performance limitations seem to require proprietary extensions. I rarely see a pure python server that’s heavy-duty.
  • c# is less proven, less mature. More importantly it doesn’t work well with the #1 platform — linux.
  • cpp is my 2nd pick. Some concerns:
    • much harder to find talents
    • Fewer open-source packages
    • java is one of the cleanest languages. cpp is a blue-collar language, rough around the edges and far more complex.

reverse slist in K-groups

https://leetcode.com/problems/reverse-nodes-in-k-group/description/ is the problem I tried today, not a classic problem. Challenge is not the algorithm per-se but the Edit-Compile-Test-Debug cycle. I think some of us can come up with a conceptual algorithm quickly, but to implement it correctly took me hours.

Similarly, the problems below are not tough due to algorithm, but the ECTD cycle can take hours, sometimes due to c++ iterator pitfalls, sometimes because we can’t easily visualize the data structure. I wrestled with all of these problems, so please feel free to try them and discuss with me.

* print any tree (you can start with a binary) by level, in zigzag sequence
* given a linked list, write a function to remove all nodes greater than 55 (or any user input). Return the head of the modified list.
* https://www.geeksforgeeks.org/zigzag-or-diagonal-traversal-of-matrix/
* https://www.geeksforgeeks.org/create-a-matrix-with-alternating-rectangles-of-0-and-x/
* https://bintanvictor.wordpress.com/2018/02/06/spiral-number-printer/

As decided last week, I didn’t bother to run the Leetcode test suite. Those tests make me feel frustrated, worthless, defeated, inferior, a weakling, a quitter…. Without these tests I ran my own tests and I feel like a joyful hacker.

Even though I may not pass all Leetcode tests, I feel my code is reasonable quality and I’m proud of it.

—-Problem is well-defined but not very common.

Given a linked list, reverse the nodes of a linked list k at a time and return its modified list.

k is a positive integer and is less than or equal to the length of the linked list. If the number of nodes is not a multiple of k then the left-out nodes at the end should remain as they are. O(1) space. Hopefully O(N) time.

—-My sol1: use my existing O(1) solution but now keep a count.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/linkedListReverseInK_group.cpp

The first group and the last group are both tricky and can take up hours.
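A compact python sketch of the idea (recursive for brevity, so it uses O(N/k) stack rather than the strict O(1) space of my iterative c++ version above):

class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse_k_group(head, k):
    probe = head
    for _ in range(k):         # are there at least k nodes left?
        if not probe:
            return head        # fewer than k: leave this tail group as-is
        probe = probe.next
    prev, cur = None, head
    for _ in range(k):         # reverse exactly k nodes
        cur.next, prev, cur = prev, cur, cur.next
    head.next = reverse_k_group(cur, k)   # old head is now this group's tail
    return prev                # new head of this group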

Math+intelligence ] trading != high value-add

–Examples of math applied outside traditional proven quant domains like VaR:
  • Trade analytics and execution analytics systems — analyzing past executions, using statistical tools to derive some empirical or parametric distribution.
  • Sell-side pre-trade analytics to evaluate a proposed trade….
  • Real-time risk analytics

Q: How much math in these systems? Not that much.

  • Fundamentally, trading domain is math-lite…
  • Risk management is slightly more mathematical due to large data set, relaxed latency requirement, many scenarios
  • The fancier and more advanced math, the more dubious

Q: Value-add? Questionable. That’s one reason why most financial institutions don’t spend billions building such systems. They do spend billions on traditional automation systems.

Q: Who would want to pay to use these systems? Rather Few.

Q: Python? Possibly.

case study: CSDoctor’s — value-add@analytics #CSDoctor

Are equities simpler than FICC@@

I agree that FICC products are more complex, even if we exclude derivatives

  • FI product valuations are sensitive to multiple factors such as yield curve, credit spread
  • FI products all have an expiry date
  • We often calculate a theoretical price since market price is often unavailable or illiquid.
  • I will omit other reasons, because I want to talk more (but not too much) about …

I see some complexities (mostly) specific to equities. Disclaimer — I have only a short few years of experience in this space. Some of the complexities here may not be complex in many systems but may be artificially, unnecessarily complex in one specific system. Your mileage may vary.

  • Many regulatory requirements, not all straightforward
  • Restrictions – Bloomberg publishes many types of restrictions for each stock
  • Short sale — Many rules and processes around short sale
  • Benchmarks, Execution algorithms and alphas. HFT is mostly on equities (+ some FX pairs)
  • Market impact – is a non-trivial topic for quants
  • Closing auctions and opening auctions
  • Market microstructure
  • Order books – are valuable, not easy to replicate, and change by the second
  • Many orders in a published order book get cancelled quickly. I think some highly liquid government bonds may have similar features
  • Many small rules about commission and exchange fees
  • Aggregate exposure — to a single stock… aggregation across accounts is a challenge mostly in equities since there are so many trades. You often lose track of your aggregate exposure.
  • Exchange connectivity
  • Order routing
  • Order management

hearsay c++IV: Cubist

Q: TCP connection close .. handshakes?

Q3: why is new() so slow?

Q3b: what if I used array-new but then regular delete?

Q: implement Fib calculation at compile time

Q: write code to extract names and floating point numbers from a free-form string (no constraints)

malloc()performance #tips #CSY

See https://stackoverflow.com/questions/161053/which-is-faster-stack-allocation-or-heap-allocation

  • stack allocation is much faster than heap allocation but you may not notice the difference
  • custom heap allocator to replace malloc() can be as fast as stack allocation
  • Many shops like Facebook create custom allocators because standard malloc is too slow

Why is malloc so slow? Many online commentators point their fingers at the complexity of heap memory (and free list) management.

I don’t think system call is the issue. System calls are very infrequent with malloc.

2 reasons: BM is poor model for bond price

Reason 1 — the terminal value is known. It’s more than deterministic: it’s exactly $100 at maturity. Brownian Motion doesn’t match that.
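In symbols, assuming an arithmetic Brownian Motion for the bond price:

S_T = S_0 + \mu T + \sigma W_T  \quad\Rightarrow\quad  S_T \sim \mathcal{N}(S_0 + \mu T,\ \sigma^2 T)

i.e. the terminal price has non-zero variance at every horizon, whereas a default-free bond must hit par exactly: S_T = 100 with probability 1.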

Reason 2 — the drift estimate is too hard, and the price too sensitive to it. A BM process has a drift value. You can be very careful and thorough in estimating it, but any minor change in the drift estimate would produce very large differences in the price evolution, if the bond’s lifespan is longer than 10Y.

I separate pure-algo from real-coding questions

Many job seekers simply say “algorithm question” or “coding interview” as if they mean the same. I see significant differences between two types of coding test — compilable code vs pure algo.

Here’s one difference to start with — real-coding questions require us to be very familiar with syntax and very fast edit-compile-test cycles. I often struggle even when my idea was correct. I often run out of time. No such requirement in a white-board algo test.

I try to visualize a spectrum image like

<=== harder on language/implementation challenges (blue side) … (red side) increasingly harder on algorithm ===>

  • (On the far left “Ultraviolet” side of the spectrum) multi-threading coding questions are light on pure computer science algorithms.
  • (Also on the far left side) some weekend coding assignments are very serious about design trade-offs, reasonable assumptions, thorough automated testing … No such requirement in a pure-algo test.
  • (On the mid-left side) real coding questions can have a code-smell issue, esp. in the weekend take-home assignment. I failed many times there even though my code worked fine.
  • (In the middle) many Hackerrank and Codility questions involve difficult algorithms and fast edit-compile-test. Very hard for me.
  • (On the right side) pure algorithm questions are sometimes on a white board, and even over the phone if very short. Light on syntax; no edit-compile-test; no code smells.
  • (On the far right side) some west coast white-board interviews require candidates to ask clarifying questions, and later eyeball-test our code without a compiler. No such requirement in a real-coding interview.
  • (On the far right side) very tough algorithm questions care only about O(), not about language efficiency. Pick your language. The toughest algorithm questions don’t even care about O() so long as you can find any solution within 15 minutes.

I often perform better with pure algo questions than real-coding questions. My worst game is codility/hackerrank.

I find it imprecise to call all of them “algorithm questions” and disregard the key distinctions between the two types. Both types have some algorithm elements, but the real-coding questions often require additional skills such as quick completion, language efficiency, and code smells knowledge. A bug can often take us 10 minutes to locate — something rare in pure-algo questions.

Effective strategies to prepare for real-coding vs pure algo interviews are rather different. In this email I will only mention a few.
1) for speed, we need to practice and practice writing real code and solving short problems; you can try timing yourself.
2) for pure algo, we can use python; we can look at other people’s solutions; we still need to practice implementing the ideas
3) for white-board, we need to practice talking through a problem solution… need to practice asking clarifying questions and eyeball-testing
4) for weekend take-home tests, practice is not effective for me. I won’t prepare.
5) for multi-threading problem, need to practice with basic multi-threading problems.

factory patterns explained graphically

https://vivekcek.wordpress.com/2013/03/17/simple-factory-vs-factory-method-vs-abstract-factory-by-example/ has nice block diagrams and examples.

CC) “Simple factory” is the most widely used one but not a widely known pattern, not even a named pattern. “SimpleFactory” is just an informal name

I mostly used simple factory and not the other factory patterns.

Any time I have a method that creates and returns polymorphic objects from a family (like different Trades), I would name the entire class SomeFactory. I often add some states and additional features but as a class or struct, it is still a simple manufacturer.
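A minimal python sketch of this simple-factory shape (TradeFactory and the Trade subtypes are made-up names):

class Trade: pass
class BondTrade(Trade): pass
class EquityTrade(Trade): pass

class TradeFactory:
    # one create() returning polymorphic objects from a family
    def create(self, asset_class):
        if asset_class == "bond":
            return BondTrade()
        if asset_class == "equity":
            return EquityTrade()
        raise ValueError("unknown asset class: " + asset_class)

trade = TradeFactory().create("bond")   # caller never names the concrete type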

BB) factory method — a family of manufacturer classes. Each supports a virtual create() method. Each is responsible for instantiating just one specific type of object. NokiaFactory.create() only creates NokiaPhone objects.

AA) AbstractFactory — too complex and needed in very special situations only.

merge K presorted lists #O(?)

Q: Merge k sorted linked lists and return it as one sorted list. Analyze and describe its complexity.

https://github.com/tiger40490/repo1/blob/py1/py/linklist/merge4lists.py is my solution.

I feel this is mostly an optimization challenge. I can think of a few solutions

–Sol1: merge 2nd list into first. Then merge 3rd list into first …

https://leetcode.com/problems/merge-k-sorted-lists/solution/ shows that this has higher runtime cost than the brackets solution.

Reason is, each 2-merge-to-1 must visit every node in both lists. So the first list nodes get visited K times!

–Sol1b: brackets.

There are only (log K) levels in the bracket so any list gets visited that many times.

–Sol3: in-place

We maintain K node-pointers for the K lists (K teams)

We also maintain a pointer to the last-added node in the merged list.

The first node of each of the K lists is put into a min-heap. The winner (smallest) team becomes the “current list”. The winner team then offers its next node, which we add into the heap; the new winning team is popped next, and so on.
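A python sketch of Sol3, simplified to K pre-sorted python lists instead of linked lists; the heap holds (value, team, position) tuples:

import heapq

def merge_k_sorted(lists):
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)                       # first node of each team
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)       # winner (smallest) team
        out.append(val)
        if j + 1 < len(lists[i]):             # winner offers its next node
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out                                # O(N logK) total

print(merge_k_sorted([[1, 4], [2, 5], [3]]))  # [1, 2, 3, 4, 5]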

%%Name some Original contributions as a wall St techie@@

Background — expert test-takers and Math Olympians don’t create value. Grandpa is more than an expert in his domain — he created ground-breaking original research.

See also realistic+significant legacy of me as a developer@@

Generally, developers make more lasting impact than other roles. Developers are seen as value-creators. Here are my original contributions as a Wall St techie:

  1. 95G — I created the stored proc as the basis of concurrency control
  2. 95G — I created the wait/notify framework in B2B
  3. Barc — emulating the functional programming spreadsheet in java
  4. —-Below are less “original”
  5. RTS — nyse integrated feed XDP parser. Huge impact to billions of downstream users.

python slicing q( [] ) always returns a (possibly modified) clone

See mystr[:-2] copy_truncate last 2 chars #mystr[2:]

See copy_reverse list/string/tuple by [::-1]

See python initialize list/dict : perf

Among the various list cloning solutions on https://stackoverflow.com/questions/2612802/how-to-clone-or-copy-a-list:

  • slicing is consistent with string/tuple
  • list(oldList) is similar to java API
  • copy.deepcopy is useful across many containers
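A few quick self-checks of the clone behaviour:

lst = [1, 2, 3, 4]
clone = lst[:]                      # full-slice clone
assert clone == lst and clone is not lst
assert lst[:-2] == [1, 2]           # copy, truncating the last 2 items
assert lst[::-1] == [4, 3, 2, 1]    # reversed copy
assert "abc"[::-1] == "cba"         # same syntax works on string/tuple
assert list(lst) is not lst         # list() clones too, like the java copy-ctor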

custom-basket ^ portflio trading

A client can ask a broker to buy “two IBM, one MSFT” either as AA) a custom basket or BB) a portfolio. The broker handles them differently.

Only the Basket (not the portfolio) is “listed” on Bloomberg (+ competitors but not on any exchanges). Client can see the pricing details in Bloomberg, with a unique basket identifier.

Booking — the basket trade is recorded as a single indivisible position; whereas the portfolio trade gets booked as individual positions. Client can only sell the entire basket; whereas the portfolio client can sell individual component stocks.

Fees — There is only one brokerage fee for the basket, but 5 for a portfolio of 5 stocks.

The broker or investment advisor can have a view and advice on a basket as a whole.

Corporate actions should be handled in the basket automatically.

I feel a portfolio is more flexible and informal than a custom basket, which in turn is less formalized and less regulated than an index-tracking ETF.

Tower IV: c++,algo,past prj..

Q1: how could you use enum to improve performance
AA: use it to replace strings
AA: enum is used in template programming to move computation from run time to compile time. See Q4 and SFINAE in github.
%%A: see clever use of enum(char?) in demanding c++ app

Q1b: you put in your change but it turns out to hurt performance. Any idea?

Q4: how do you compute Fib(N) like Fib(99999) at compile time using template programming?
A: See my Fibonacci code in github

Q: how would you use multicast to send a message to a group of registered users?
%%A: less data copying than TCP

Q: In O(1) space, Given an unsorted array of natural numbers, how do you find any pair to produce a target sum? What’s your best time complexity?
%%A: if I assume the integer size is 32-bit then a fixed-size O(1) space radix structure can sort the array in O(N). See also my blog post on [[locate a pair with targetSum=55 #bbg IV #Morris]]
%%A: if integer size is unlimited, then I think the best is O(NN)
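For comparison, a python sketch of a different O(1)-space trade-off: in-place sort plus two pointers, O(N logN) time, which is not the radix idea in my answer:

def find_pair(nums, target):
    nums.sort()    # in-place; note python's Timsort is not strictly O(1) space
                   # (heapsort in c++ would give a strict guarantee)
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return nums[lo], nums[hi]
        if s < target:
            lo += 1                 # need a bigger sum
        else:
            hi -= 1                 # need a smaller sum
    return None

print(find_pair([8, 47, 30, 25], 55))  # (8, 47)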

Q: How is your muni bond pricing engine designed internally?
Q2b: what kind of pricing rules live in the cache?
Q2c: how many threads to react to a given event?

Q: how did you calculate FX fwd points?
%%A: use the interest rates in the two currencies and the spot FX rate.
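The usual covered-interest-parity form (a simple-compounding sketch; r_quote and r_base are the two deposit rates, T the tenor):

F = S \cdot \frac{1 + r_{quote} \, T}{1 + r_{base} \, T}, \qquad \text{fwd points} = (F - S) \times 10^4 \ \text{(for a 4-decimal pip)}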

Q: how is your implied volatility computed from an option price?
%%A: if there’s not a closed-form formula, then Newton’s method or something similar should be able to converge quickly.

Q3 (OO design): How would you design a game software with Animal objects, various birds, and fish?
Q3a: Say the system is running fine, but now you need to add ostrich class?

coding drill: LASTING value4family10Y well-being ]U.S.

Background: for many years until late 2017, I have focused on QQ, and neglected coding tests.

See the benefits of coding IV practice in 4 benefits of coding practice #beatFront

  • If you bring your pure algo skill from 4 to 6, it will remain around that level, or slightly lower.
    • It will help you with your high-end job interviews for 5-10 years.
    • a lot of this “skill” consists of the key ideas behind the top 100 common coding questions. (I tried about 1/3 of them.) I feel it’s crucial to review the high-value problems to refresh 1 or 2 key ideas for each problem.
    • Note the various max-profit problems look similar but Key ideas are vastly different.
  • If you rely on your work projects, you will remain around 3:( Your projects won’t train you for those coding challenges in terms of BestPractice, ECT or syntax 😦
  • One of the skills to improve is “syntax” — You may think the STL syntax will become familiar naturally, but in reality over a year our projects used no std::map no lower_bound() no sort() no getline() so most of the basic syntax were not developed!
  • One builds the ECT skill with focused practice, outside the day job. No shortcut. Programmers without this “mileage” won’t have this skill. Just like driving or any athletic or artistic skills.

I’m reluctant to break down by “skill” but ..

  1. syntax used for coding test — Yes we can remember most of it for 10Y, based on my experience with perl, java, php
  2. ECT — easy to lose the speed. I feel some of the ECT improvement does remain.
  3. pure algo ideas — already discussed above

Low “churn” — most coding challenges have remained unchanged for 20 years.

I also realized that my c++ ECT/BP proficiency is different from my java ECT/BP proficiency.

In terms of health benefits, this is more beneficial than QQ study. More anti-aging benefits, like physical exercise.

fixation@ROTI : too result-oriented

fixation@ROTI/payoff/success/result/accu … dampens job satisfaction+joy@learning.

This affects my “engagement”. Granted, we should not ignore these ROTI factors or those “smells”; instead we should evaluate our direction and take stock, but let’s not overdo it.

  • +ve Eg: Barcap option math
  • +ve Eg: Barcap swing learning
  • +ve Eg: RTS socket programming
  • -ve Eg: git
  • -ve Eg: curve building
  • -ve Eg: WCF

Consider a tour guide aiming for the tip at the end.
Consider Grandpa in his research career.
Consider a singer like 王杰 or the last few years of 邓丽君。

Q: has that increased your income or benchmark score? # more time in office, shorter commute, MSFM, c# ….

  1. This question can be posed to grandpa.
  2. This question can be posed to any education institute including the “top schools 名校”. Ironically the same questioners seem to be /fixated/ on these top schools for their kids. So for these people, this question is self-contradictory.
  3. This question can be posed to my friends engaged in quantitative investment analysis.

This question is harmful, misleading, derogatory, discriminatory, browbeating, pessimistic/fatalistic, myopic, … This question tosses aside many important things to our lives, our joys, and satisfaction —

  • career safety net
  • exploration of personal talents and personal interests
  • “in-demand” satisfaction
  • market depth
  • mobility between firms
  • freedom — I don’t want to feel “trapped”
  • observation (even conviction) on various topics, based on in-depth personal research

## Are those leetcoders really stronger@@

You wrote “I think many leetcoders are fresh graduates, or young people, who have more time and energy.  I know I cannot compete them …”

Q: Are those leetcoders really stronger in coding tests? Generally yes, but it depends on the type of coding question.

  • For weekend take-home assignments …. I am not sure they would produce better quality code than us (I’m not very good either). Code smells matter to my past interviewers.
    • They may not run as many tests as we do.
    • I tend to spend 200%+ more time on the assignment
  • For white-board ….. or remote dumb editor pair programming (west coast favorite), they may not be able to explain their thought process as well as I do.
    • My 2007 Google white-board interview used pseudo code, so the leetcoders’ advantage would be much lower.
  • For completely original questions created by a hiring manager and sent via the HackerRank platform, leetcoders may not be fast enough to solve them, since leetcode can’t include every newly invented problem.
    • I hit two problems related to shortest-path algorithm, which can be similar to leetcode problems. I also hit very unusual problems.
  • For multi-threading coding questions, leetcode doesn’t help.
    • hit me at least 8 times — UBS, Barcap, Gelber, Bbg ..

realistic+significant Legacy as a developer..possible@@

See also %% Original contributions as a Wall St techie

Background — I often feel there’s no (social) value in my career.

Q: How about system architect or app-owner in a big firm?

I feel a successful and stable app can survive 5 to 20 years before it gets replaced.

Q: How about documentation for an (open source or otherwise) long-living technology? Documentation can even take the form of StackOverFlow QnA.

This “legacy” may not last very long as better answers could supersede your answer any time. In fact, as the topical technology evolves, your answer is doomed to become outdated gradually. Even a very popular computer book becomes outdated over 20 years.

Also, this doesn’t feel realistic for me.

why I don’t like leetcode+similar sites #XR

Hi XR,

I may need your advice here. On 23 June I tried one medium level question on LeetCode. (Before Bloomberg interviews, I tried the Bloomberg CodeCon website for a few days, with a similar experience, but there’s no posted solution.)

To my dismay, it took me 4+ hours because I hit lots of "devils in the implementation details".

* I dare not submit my code and run LeetCode test cases because I am scared to see tons of failures, a huge discouragement.
* I am tempted to look at the top solutions, but they are likely much cleaner and better than mine, a huge discouragement.
* Until I pass all test cases, this question is not done and I would feel guilty to give up. In contrast, my own coding projects can be 50% done and I can leave it as is. I would not feel like a quitter. My own projects have no standard test cases to show I’m 50% done or 90% done. You could say LeetCode acceptance criteria is uncompromising and uncomfortable.

Since I don’t want to be a quitter, I must keep spending hours and hours on this one question. The longer I go, the more tired and I just feel increasing frustrated that I can’t complete even one question. In my own projects, I could give up without shame or guilt.

* Even if I were to complete a few questions 100%, I would not feel proud because other coders have completed a hundred questions, a huge discouragement.

These discouragements would presumably destroy the precious joy of coding. This "joy of coding" is precious because it’s so hard to get — I need plenty of energy, plenty of quiet time with background music, and a positive mood to take on tough challenges …. Each hour of coding practice easily consumes twice the energy of other types of learning.

Here are other drawbacks to LeetCode:

On the Leetcode code editor, I’m unable to run my own small tests until I have a complete solution. In contrast, in my own coding projects, after I have a small module written I can test it and get some immediate satisfaction and make progress on that module. On LeetCode site, I feel like digging and digging in darkness, until I see some light at end of the tunnel.

In conclusion
* skill improvement is not higher than my own coding practice

* satisfaction, positive feedback .. is much lower
* stress is higher
Therefore, I don’t look forward to doing it again. To continue my coding practice, I would stay away from these web sites.

Then something hit me — I now realize my son is not as lazy as I thought. As a Chinese father, I automatically feel he is not putting in the same amount of effort as other kids in his school. But looking at myself .. am I putting in the same amount of effort as the LeetCoders? If we compare ourselves to the wrong peer group, we inevitably feel lazy, inferior, inadequate, sub-standard. Such a comparison is not the best way to motivate our kids. It is erosive, hurtful and counter-productive.

In the long run, his classmates who put in more effort on math problems now may not do better than him. They might do better in standardized exams, but what about at university level (no more standardized tests)?

specify(by ip:port) multicast group to join

http://www.nmsl.cs.ucsb.edu/MulticastSocketsBook/ has zipped sample code showing

mc_addr.sin_port = htons(thePort);

bind(sock, (struct sockaddr *) &mc_addr, sizeof(mc_addr)); // set the group port, not local port!
—-
mc_req.imr_multiaddr.s_addr = inet_addr("224.1.2.3");

setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
(void*) &mc_req, sizeof(mc_req)); // set the group IP by sending an IGMP join-request

Note setsockopt() with IP_ADD_MEMBERSHIP actually sends an IGMP join request!

====That’s for multicast receivers.  Multicast senders use a simpler procedure —

mc_addr.sin_addr.s_addr = inet_addr("224.1.2.3");
mc_addr.sin_port = htons(thePort);

sendto(sock, send_str, send_len, 0, (struct sockaddr *) &mc_addr, …
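
Putting the receiver snippets together, here is a minimal self-contained sketch (my own illustration; the group 224.1.2.3 and port 12345 are made-up values):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <cstdio>
  #include <cstring>

  int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in mc_addr;
    std::memset(&mc_addr, 0, sizeof(mc_addr));
    mc_addr.sin_family = AF_INET;
    mc_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    mc_addr.sin_port = htons(12345);                  // the group port, not a local port
    bind(sock, (struct sockaddr *) &mc_addr, sizeof(mc_addr));

    ip_mreq mc_req;                                   // join request: group IP + local interface
    mc_req.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
    mc_req.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mc_req, sizeof(mc_req)); // sends IGMP join

    char buf[1500];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);
    std::printf("received %zd bytes\n", n);
    close(sock);
    return 0;
  }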

At what juncture would kernel scheduler run@@

The kernel scheduler has an algorithm and is therefore implemented as a sequence of instructions. You can think of it as a black-box function/routine.

I think it is not really a long-running background process. In Linux, I believe it is an on-demand routine, but one that does not run on behalf of any process.

Background — Many familiar on-demand kernel routines do run on behalf of an “owner process” —

  • accessing a file or socket
  • accessing some device such as NIC
  • accessing memory

However, other on-demand kernel routines (often interrupt handlers) do not have an “owner process”. Here are some routines —

  • reacting to timer interrupts
  • reacting to low-level emergency hardware interrupts like …. ?

So the scheduler is a classic example. I think the scheduler can be triggered by timer interrupts. See P350 [[linux kernel]].

price sensitivities = #1 valuable output of risk-run

[[complete guide]] P433, P437 …

After reading these pages, I can see that per-deal PnL and mark-to-market numbers are essential, but to the risk manager, the most valuable output of the deal-by-deal “risk run” is the family of sensitivities such as delta, gamma, vega, dv01, duration, convexity, correlation to a stock index (which is different from beta), ..

Factor-shocks (stress test?) would probably use the sensitivity numbers too.

In Baml, the sensitivity numbers are known as “risk numbers”. A position has high risk if it has high sensitivity to its main factor (whatever that is.)

VaR can overstate/understate diversification benefits

Two questions, by scenario: can VaR understate the curse of concentration, and can it overpraise a diversified portfolio?

  • mathematically: understating is definitely possible; overpraising is probably not.
  • correlated crisis: understating, yes, possible, since VaR treats the tail as a black box; overpraising, yes, since the portfolio becomes highly correlated and is not really diversified.
  • chain reaction: understating, possible (though a chain reaction is still better than all-eggs-in-one-basket); overpraising, yes, since diversification breaks down.

Well proven in academia — VaR is, mathematically, not a coherent risk measure, as it violates sub-additivity. Best illustration — two uncorrelated credit bonds can each have $0 VaR, yet the combined portfolio has non-zero VaR. The portfolio is actually well diversified, but VaR shows the diversified portfolio as riskier: illogical, because the individual VaR values are simplistic. This is a flaw of the mathematical construction of VaR.
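
A toy numeric illustration (my own numbers): suppose each bond independently defaults with probability 3%, losing $100, and otherwise loses nothing. For each bond alone, P(loss > 0) = 3% < 5%, so the 5% VaR is $0. For the two-bond portfolio, P(at least one default) = 1 − 0.97² ≈ 5.9% > 5%, so the 5% portfolio VaR is $100, strictly greater than $0 + $0, violating sub-additivity.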

Even in a correlated crisis, the same could happen — based on probability distribution, individual bond’s 5% VaR is zero but portfolio VaR is non-zero.

A $0 VaR value is completely misleading. It can leave a big risk (a real possibility) completely unreported.

[[Complete guide]] P434 says the contrary — VaR will always (“frequently”, IMHO) report the risk of a large portfolio as smaller than the sum of the risks of its components, so VaR overstates the benefit of diversification. This is mathematically imprecise, but it does bring my attention to the meltdown scenario — two individual VaR amounts could be some x% of the $X original investment, and y% of $Y etc, but if all my investments get hit in a GFC and I am leveraged, then I could lose 100% of my total investment. VaR would not capture this scenario because it assumes the components are lightly correlated based on history. In this case, the mathematician would cry “unfair”: the (idealized) math model assumes the correlation numbers are reliable and unchanging. The GFC was a “regime change” that can’t be modeled in VaR, so VaR is the wrong methodology.

maturity bucketing #StirtRisk

[[complete guide]] P 457 pointed out VaR systems often need to aggregate cashflow amounts across different deals/positions, based on the “due date” or “maturity date”.

Example — if on 12/31 there are 33 payable amounts and 88 receivable amounts across deals, they all get aggregated into the same 12/31 bucket.

I think bucketing is more important in these cases:

  • a bond has a maturity date and coupon dates
  • a swap has multiple reset dates
  • most fixed income products
  • derivative products — these always have expiry dates

In StirtRisk, I think we also break down that 12/31 one day bucket by currency — 12/31 USD bucket, 12/31 JPY bucket, 12/31 AUD bucket etc.
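
A toy sketch of this (date, currency) bucketing, with made-up field names and amounts (not from any real system):

  #include <iostream>
  #include <map>
  #include <string>
  #include <utility>
  #include <vector>

  struct Cashflow { std::string date, ccy; double amount; }; // hypothetical position record

  int main() {
    std::vector<Cashflow> flows = {
      {"12/31", "USD", -33.0}, {"12/31", "USD", 88.0}, {"12/31", "JPY", 500.0}};
    std::map<std::pair<std::string, std::string>, double> bucket; // keyed by (date, ccy)
    for (size_t i = 0; i < flows.size(); ++i)
      bucket[std::make_pair(flows[i].date, flows[i].ccy)] += flows[i].amount; // net payables vs receivables
    for (std::map<std::pair<std::string, std::string>, double>::const_iterator
           it = bucket.begin(); it != bucket.end(); ++it)
      std::cout << it->first.first << ' ' << it->first.second << " net " << it->second << '\n';
    return 0;
  }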

Q: I wonder why this is so important to VaR and other market risk systems. (I do understand it hits “credit risk”.)
%%A: For floating rate products, the cashflow amount on a future date depends on market factors.
%%A: FX rate on a future date 12/31 is subject to market movements
%%A: contingent claim cashflow depends heavily on market prices.
%%A: if 12/31 falls within 10D, then 10D VaR would be impacted by the 12/31 market factors

swap on eq futures/options: client motive

Q1: why would anyone want to enter a swap contract on an option/futures (such a complex structure) rather than trading the option/futures directly?

Q2: why would anyone want to use swap on an offshore stock rather than trading it directly?

More fundamentally,

Q3: why would anyone want to use swap on domestic stock?

A1: I believe one important motivation is restrictions/regulation. A trading shop needs a lot of approvals, licenses, capital, disclosures … to trade on a given futures/options exchange. I guess there might also be disclosure and statutory reporting requirements. If the shop can’t, or doesn’t want to, bother with the regulations, it can achieve the same exposure via a swap contract.

This is esp. relevant in cross-border trading. Many regulators restrict access by offshore traders, as a way to protect the local market and local investors.

A3: One possible reason is transparency, disclosure and reporting. I guess many shops don’t want to disclose their positions in, say, AAPL. The swap contract can help them conceal their position.

how is mkt data used ] buy-side FI analytics@@

This is a BIG bond asset manager… They use 2-factor HJM model, among others.

They use EOD market data for risk measure + risk sensitivity calculations. No real time.

The models were written by 40+ quants untrained in c++. The 16-strong IT team integrates the models.

I asked “Do you use liquid fixed income market data mostly to calibrate models and use the model to price illiquid instruments?”

A: both

  • To calibrate model — every day, as explained in [[complete guide]] P436
  • To derive valuations directly on existing positions, if the instruments are comparable (between the ref-data instrument and the position instrument)

kernel bypass : possible usage ] RTS

Partially hypothetical usage scenario/proposal.

“Bypass” means .. bypassing standard kernel functions and using faster, lighter firmware instead.

“Bypass” means .. every network packet would go straight from NIC to user application, without passing through tcp/ip stack in the kernel.

Background — traditional packet processing goes through the tcp/ip software stack, implemented as a family of kernel functions. Whenever a network packet is received, the NIC writes the packet to a ring buffer and raises a hardware interrupt. The i-handler (interrupt handler routine) and bottom-half then perform packet processing in the kernel socket buffer, and finally copy the payload to a user-mode buffer.

Note the two separate buffers. In our parser config file, we configure them as sock_recv_buf vs read_buf. The former is accessible by kernel only and is not used when we turn on kernel bypass.

In contrast, with kernel bypass,

  • the network card (NIC) has an FPGA chip, which contains the low-level packet-processing software (actually firmware “burned” into the FPGA)
  • This firmware replaces the tcp/ip kernel functions and delivers packets directly to the application. However, my parser relies more on another feature —
  • The SolarFlare firmware also lets my parser (a user application) access the NIC ring-buffer directly. This zero-copy technique bypasses the socket receive buffer in the kernel.

My parser uses SolarFlare NIC for both multicast and tcp.

The kernel-bypass API was used only in some low-level modules of the framework; it is disabled by default and configurable for each connection defined in the configuration file, as illustrated below.
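
A hypothetical connection block in such a config file (the key names sock_recv_buf and read_buf come from the paragraph above; everything else is invented for illustration):

  # hypothetical parser config; block structure and the kernel_bypass key are made up
  connection feedA {
      kernel_bypass = true    # use the SolarFlare path, skip the kernel tcp/ip stack
      sock_recv_buf = 4MB     # kernel-side socket buffer, unused when bypass is on
      read_buf      = 8MB     # user-mode buffer the parser reads from
  }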

http://jijithchandran.blogspot.com/2014/05/solarflare-multicast-and-kernel-bypass.html is relevant.

factoryMethod: based on arg ^ host object

I used to believe a factory method always relies on an argument to tell it what type of instance to manufacture.

Now I know you can call myFactoryInstance.makeTrade(); // no arg — the host object provides the clue as to what type of instance to manufacture, because the factory method is virtual.

See https://stackoverflow.com/questions/5739611/differences-between-abstract-factory-pattern-and-factory-method
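
A minimal c++ sketch of this virtual factory method, with made-up types (BondDesk, FxDesk etc):

  #include <iostream>
  #include <memory>

  struct Trade { virtual ~Trade() = default; virtual void print() const = 0; };
  struct BondTrade : Trade { void print() const override { std::cout << "bond trade\n"; } };
  struct FxTrade   : Trade { void print() const override { std::cout << "fx trade\n"; } };

  struct Factory {                                    // abstract creator
    virtual ~Factory() = default;
    virtual std::unique_ptr<Trade> makeTrade() const = 0;   // virtual factory method, no arg
  };
  struct BondDesk : Factory {
    std::unique_ptr<Trade> makeTrade() const override { return std::make_unique<BondTrade>(); }
  };
  struct FxDesk : Factory {
    std::unique_ptr<Trade> makeTrade() const override { return std::make_unique<FxTrade>(); }
  };

  int main() {
    std::unique_ptr<Factory> f = std::make_unique<FxDesk>();
    f->makeTrade()->print();   // the host object (an FxDesk) decided: prints "fx trade"
  }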

python usage in FI quant lib #Pimco

In one of the world’s biggest fixed-income buy-side firms’ quant library, the codebase is 3/4 c++ and 1/4 python, including pandas, numpy, machine learning and grid computing modules. I think this is similar to the Macquarie FICC quant lib.

C++ is much faster, but its available data structures are quite limited, mostly the STL containers.

I think the funds hold mostly bonds and mortgages. How about futures, IRS? Perhaps for hedging?

HFT developers seldom need to optimize latency

I bet 90% of HFT developers (non-architects) never need to optimize latency or throughput, based on my experience in

  • RTS
  • mvea

Why? The latency-sensitive codebase is low-level and stable, so little change is required. Regular developers only need to use the existing frameworks and avoid latency penalties.

If the frameworks are strict, then there are few chances to hit a latency penalty.

The best way to acquire professional experience in latency engineering — put your ideas to work in a real project and tune them until they work. Look at my wait/notify code in 95G. I think Kam did something similar in the TREP project in RTS.

If you don’t have such an opportunity, then you must read up in your spare time and ask some good questions in a good contexx.

liquid products2calibrate model→price exotics

Essential domain knowledge, practiced in industry and also endorsed by academia.

1) On a daily basis (or otherwise periodically), use market data to calibrate a model’s parameters. Choose the more liquid instruments …

Note if you don’t re-calibrate frequently, those parameters could become obsolete, just like database index statistics.

2) use the model to price illiquid, exotic products.

Example — In my exam/interview, Professor Yuri pointed out that callable bonds, caps and floors (yes, these are options) are the liquid products with liquid market data, useful for calibration.

binary search in rotated sorted array

https://leetcode.com/problems/search-in-rotated-sorted-array/description/ has the requirement. I don’t want to look at other people’s solution, so I have reproduced the requirements below. I have not encountered this problem in any coding interview.

Q: Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand. (i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]). You are given a target value to search. If found in the array return its index, otherwise return -1. You may assume no duplicate exists in the array. Your algorithm’s runtime complexity must be in the order of O(log n).

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/binSearchRoatedArr.cpp is my solution

–Solution 2:

First run a binary search to locate the pivot point. Then run an O(1) test to discard one of the two segments. We are left with the remaining segment, a regular sorted array. Run a binary search on it.
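
Below is my own untested sketch of Solution 2, assuming no duplicates:

  #include <vector>

  int searchRotated(const std::vector<int>& a, int target) {
    int n = (int)a.size();
    if (n == 0) return -1;
    // Step 1: binary search for the pivot, i.e. the index of the smallest element
    int lo = 0, hi = n - 1;
    while (lo < hi) {
      int mid = (lo + hi) / 2;
      if (a[mid] > a[hi]) lo = mid + 1;   // smallest element lies right of mid
      else                hi = mid;
    }
    const int pivot = lo;
    // Step 2: O(1) test to discard one of the two sorted segments
    if (pivot == 0)                                  { lo = 0;     hi = n - 1;     } // never rotated
    else if (target >= a[0] && target <= a[pivot-1]) { lo = 0;     hi = pivot - 1; } // left segment
    else                                             { lo = pivot; hi = n - 1;     } // right segment
    // Step 3: regular binary search on the chosen segment
    while (lo <= hi) {
      int mid = (lo + hi) / 2;
      if (a[mid] == target) return mid;
      if (a[mid] < target) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
  }

Both phases are binary searches, so the whole thing stays O(log n) as required.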

ROTI@learning ED/IRS/FX-swap/repo #math-lite

Note this effort came after my basic bond math study, though I often count it as part of the broader “bond math” study.

Basic bond-math knowledge is in robust demand on Wall St, and without hard evidence I feel the ROTI of basic bond math study is decent. Q1: How is the ROTI of this follow-on study?

I feel many of the jargon terms in this space are common and essential knowledge:)

  • swap rate; comparative advantage;
  • OIS; Libor;
  • basis risk;
  • collateral;
  • curve building

However, this self-study rarely helped me:

  • MSFM course
  • Stirt job interview

Q1b: How is the market depth and robust demand of this skill?
A: not used much on the trading buy-side, but some asset-management firms and most sell-side desks do need this know-how.

Note this topic is generally math-lite and much simpler than option math, so I was able to self-study:) See fixation@ROTI…dampens job satisfaction+joy@learning

Q2: how is the knowledge retention rate?
A2: decent. Thin->thick yes but not yet thick->thin

FRA^ED-fut: actual loan-rate fixed when@@

Suppose I’m IBM and need to borrow in three months’ time. As explained in the typical FRA scenario (inspired by CFA Reading 71), I could buy an FRA and agree to pay a pre-agreed rate of 550 bps. What’s the actual loan rate? As explained in that post,

  • If I borrow on the open market, then the actual loan rate is the open-market rate on 1 Apr
  • If I borrow from the FRA dealer GS, then the loan rate is the pre-agreed 550 bps
  • Either way, I’m indifferent, since in the open-market case, whatever rate I pay is offset by the P/L of the FRA

Instead of an FRA, I could go short the eurodollar futures. This contract is always cash-settled, so the actual loan rate is probably the open-market rate, but whatever market rate I pay is offset by the P/L of the futures contract.
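
To put numbers on the offset (my own toy figures): say the FRA rate is 550 bps and the 3-month rate fixes at 600 bps on 1 Apr. Borrowing on the open market costs me 600 bps, but the FRA (or the short ED futures) pays out roughly 50 bps on the notional, netting my cost back to about 550 bps. If the rate fixes at 500 bps instead, I pay only 500 bps on the loan but lose roughly 50 bps on the derivative; either way my all-in rate is locked near 550 bps.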

Q: bond price change when yield goes to zero

Can bond yield become negative? Yes: in 2015–2017 many bonds traded at negative yields. https://www.ft.com/content/312f0a8c-0094-11e6-ac98-3c15a1aa2e62 shows a realistic example of a vanilla bond trading at $104 with a negative yield: you pay $104 now and will get $100 in repayment, so you are guaranteed to lose money.

Mathematically, as yield approaches −100% (i.e. 1+y → 0, not −100 bps), price goes to infinity.

When yield approaches zero, bond price approaches the arithmetic sum of all coupons plus the principal repayment.
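
To make both limits concrete, here is the standard price formula in my own notation (annual coupon C, principal F, maturity T years):

  P(y) = \sum_{t=1}^{T} \frac{C}{(1+y)^t} + \frac{F}{(1+y)^T}

At y = 0 every discount factor is 1, so P = T·C + F, i.e. the arithmetic sum of coupons plus repayment. As y → −1 (−100%), each (1+y)^t → 0, so P → ∞.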

min-cost partitioning #c++Flex #rare

Q: You are given a natural-number array and a threshold value T. The threshold represents the maximum length of the sub-arrays you may create. Each sub-array you create has a cost equal to the maximum integer within it. Your challenge is to partition the entire array into sub-arrays no longer than the threshold, at minimum total cost.

Function Description
Complete the function calculateCost in the editor below. The function must return an integer denoting the minimum cost of partitioning the array.

calculateCost has the following parameter(s):
a[0..n-1]: the integer array to be divided into sub-arrays
k: the threshold value, i.e. the maximum size of any sub-array (called T above)

Constraints
• 1 ≤ n ≤ 5000
• 1 ≤ k ≤ 500
• 1 ≤ a[i] ≤ 100000

For example, for T=2 and original array {1,5,2}, you have two ways to partition it:

  • {1} {5,2} total cost = 1 + 5 = 6 (this is lowest cost)
  • {1,5} {2} total cost = 5 + 2 = 7

— My greedy AlgoAA:

Update: thanks to XR here is an edge case to break AlgoAA: {49,50,99,0,98}

I will use the terms “group” and “subarray” interchangeably. A lone wolf is a group of one node.

I would first identify the global peak value, like 99. Should this node be a lone wolf? No. I can prove that it should “absorb” a neighbor node and become a subarray of two [1]. Should it absorb a 3rd node? I think I can again prove that it should. Therefore my greedy algorithm would first create a subarray of size K around the peak, leaving behind a left segment (and also a right segment), where we apply the same greedy algorithm.

[1] My informal proof — suppose the left neighbor has value 6 and is a lone wolf in the final grouping. We can improve this final grouping by merging this node with the peak; total cost drops by 6. In another scenario, suppose this node (value 6) is within subarray #12. Again, we can break up subarray #12, move this “6” out and merge it with the peak, without breaking any rule or increasing total cost.

So what algorithm to create the first subarray around the peak? Let’s assume k=3. There are up to 3 candidate groups, since the peak can be the first node, 2nd node or last node in its group. We can use a sliding window (of width 3) to identify the best among the candidates.

Q: why start from the peak not start from end of the array?
A: If you do, you may separate 2nd highest node from the peak, when they are adjacent. My AlgoAA would identify this situation early on, and put them in the same group.

— My greedy AlgoBB:

Each time the window slides, we compare the new window with the best window so far. The comparison is first based on the 2nd-highest value in the window; if tied, then on the 3rd-highest value, and so on.

I think this is not hard to implement — convert each window to a heap then compare top to bottom.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/minCostPartition_Flex.cpp is a briefly tested implementation .. 60% confident.
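
Since XR’s edge case breaks AlgoAA, here is a standard O(n·k) dynamic-programming alternative (my own sketch, not part of the original greedy discussion). dp[i] is the minimum cost of partitioning the first i elements; the inner loop tries every legal length j for the last sub-array.

  #include <algorithm>
  #include <climits>
  #include <vector>

  // dp[i] = min cost to partition a[0..i-1]; the last group is a[i-j..i-1], 1 <= j <= k
  long long calculateCost(const std::vector<int>& a, int k) {
    const int n = (int)a.size();
    std::vector<long long> dp(n + 1, LLONG_MAX);
    dp[0] = 0;
    for (int i = 1; i <= n; ++i) {
      int grpMax = 0;                                 // running max of the candidate last group
      for (int j = 1; j <= k && j <= i; ++j) {
        grpMax = std::max(grpMax, a[i - j]);
        dp[i] = std::min(dp[i], dp[i - j] + grpMax);
      }
    }
    return dp[n];
  }

On the {1,5,2}, T=2 example this returns 6, and it handles {49,50,99,0,98} correctly because it examines every legal last group instead of committing greedily.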

limit-IOC ^ market-IOC

Limit IOC (Immediate-or-Cancel): Can be used for FX Spot and CFD.

An instruction to fill as much of an order as possible within pre-defined tolerances of a limit price, immediately (5 second Time-to-Live).

Unlike Market IOC orders, Limit IOC orders allow a Client to control the maximum slippage that they are willing to accept.

Under normal market conditions a Market IOC order will be filled in full immediately. In the event that it isn’t, any residual amount will be cancelled. Price tolerance cannot be added to a Market IOC order, meaning a client cannot control slippage.

“didn’t like my face” : not top-favorite

Hi Deepak,

I now think there’s another reason that SIG, Bloomberg, LiquidNet and other employers didn’t make me an offer even though I passed technical screening.

In our chats, I used the generic term "didn’t like my face" as an umbrella term for several different factors. Today I want to mention a new factor – "what if this candidate takes my offer and continues to shop around?"

I believe some companies don’t like that risk. When they make an offer, they want to be sure the candidate will accept. They want to see: "Hey, we are clearly the favorite in his mind and he is in a hurry. If we make him an offer he will likely accept right away."

Clearly, I’m not that type of candidate. I often come across as a "job shopper", through my non-verbal language or even through my explicit verbal answers. For example, when asked "Why are you looking to change job" I often answer "I’m actually doing fine on my current job but there are better opportunities like the role in your company."

So, please beware of the subtle signals you send to interviewers.

Victor

Q:so you didn’t write c++for rebus only configured it #CSY

Q: so you didn’t write c++for rebus(or any engine) only configured it?
A: Well, the major (I won’t use superlatives like “biggest” or “real”) challenge in my project is understanding the non-trivial legacy codebase. Once I (I won’t use “we”) reach sufficient understanding, it’s relatively straightforward to implement the required change as a “pinhole surgery”. The best kind of change in this system is isolated to a few files and a few functions, among thousands.

I would say analysis is 50% of the effort; design, testing and debugging constitute 45% to 49%; and the code change itself is up to 5%. These percentages vary with the type of change; analysis could be 30% to 70% of the effort.

Frequently, the moment we figure out how things work inside the system, we hit that Aha moment and the problem is basically solved.

In terms of c++ work, as a team we bend over backward to minimize source code change in favor of config change, but there are exceptions —

  • I made many bug fixes.
  • Occasionally, I refactor a function as part of a change. This is unpopular in my team due to the risks. The team basically says Don’t fix something that’s not broken.
  • When we upgraded to c++11, I had to adapt my modules.
  • I added performance instrumentation, without affecting business logic
  • I often need to add logging in my local codebase as part of analysis and testing.
  • (I won’t use “we” here.)

I also need to read lots of existing code as part of analysis.

gdb: dump STL container %%experience

First let’s talk about custom containers. GDB shows the field names of an object, but frequently not the values. I guess integer values might show up, but more than half the fields are pointers (a char-array field, though, is easy to print).

If I call a function on the object, I have to be very lucky and very careful. q(->) has never worked for me so far, so I need to use q(*) to de-reference every pointer before calling a method on the pointee, and pray it works.

http://www.yolinux.com/TUTORIALS/src/dbinit_stl_views-1.03.txt works on std::map …

A simple experiment using https://github.com/tiger40490/repo1/blob/cpp1/cpp/88miscLang/containerDumpOperator.cpp

  • g++ -g theFile.cpp && gdb -iex 'add-auto-load-safe-path .' ./a.out
  • (gdb) print *(li._M_impl._M_start+1) # can print 2nd element if it’s std::string or double
    • Note before vector initialization, gdb already shows the addresses inside the vector, but some addresses are not populated. Just retry after the initialization.
  • std::unordered_map is doable:
    • (gdb) print **(tm._M_buckets) # prints first pair in a hash table bucket
    • (gdb) print *((**(tm._M_buckets))._M_next) # next pair in the same bucket
  • std::map content is harder
    • (gdb) print *(int*)(tm._M_t._M_impl._M_header._M_left+1) # prints one key
    • (gdb) print *(int*)(tm._M_t._M_impl._M_header._M_right+1) # prints another key in the pair
    • (gdb) print *(int*)((void*)(tm._M_t._M_impl._M_header._M_right+1)+sizeof(int)) #prints the value in the pair.
      • the (void*) is needed before we add sizeof(value_type). Without the cast, the pointer arithmetic would be different.
      • from the key field to value field, we move by 4 bytes (i.e. sizeof value_type) from  0x6050e0 to 0x6050e4. It’s actually easy to manually type .. print *0x6050e4
      • I suspect the _M_right pointer is seated at the “color” field. Increment to the key field?

pthread_join() retriev`return value: bad practice

Most pthreads programs don’t retrieve the return value via pthread_join().

https://stackoverflow.com/questions/3692591/return-versus-pthread-exit-in-pthread-start-functions has a comment by the author of boost::thread (reference implementation for c++11 thread library). He said

(3) I never use the return value of a thread in raw POSIX threads. However, I tend to use higher level facilities such as the Boost thread library, and more recently the C++0x thread library, which provide alternative means for transferring values between threads such as futures, which avoid the problems associated with memory management that you allude to.

Therefore, even though you felt it was unnatural to store the per-thread computed results in a global array, in practice it’s not bad. It is inherently thread-safe because there’s no data sharing: each thread writes only to its own slot, in truly parallel mode.
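
Here is a minimal sketch of that global-array idiom (the per-thread computation is a stand-in; error handling omitted):

  #include <pthread.h>
  #include <cstdio>

  enum { NUM = 4 };
  static long results[NUM];                // one slot per thread: no sharing, no locks

  static void* worker(void* arg) {
    long id = (long)arg;                   // thread index smuggled through the void*
    results[id] = id * id;                 // stand-in for the real per-thread computation
    return nullptr;                        // return value deliberately unused
  }

  int main() {
    pthread_t tid[NUM];
    for (long i = 0; i < NUM; ++i)
      pthread_create(&tid[i], nullptr, worker, (void*)i);
    long total = 0;
    for (int i = 0; i < NUM; ++i) {
      pthread_join(tid[i], nullptr);       // ignore the return value entirely
      total += results[i];                 // safe: thread i has been joined
    }
    std::printf("total=%ld\n", total);
    return 0;
  }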

That was my preferred solution, but to experiment, I also used new to return a value through pthread_join(). Personally, I am always wary of calling new() in one function and the corresponding delete() in another function … unreliable. As much as possible, I use smart pointers to manage new/delete.

https://github.com/tiger40490/repo1/edit/cpp1/cpp/thr/parallelSum_Pimco.cpp shows both solutions

Cloneable, Object.clone(), Pen.Clone() #java

A few learning points.

The Object.clone() implementation is not that important, because I should always override it in my own class (like Pen), but here are some observations about Object.clone():

  • shallow copy, not deep copy, so not very useful.
  • this protected method is mostly meant to be invoked by a subclass:
    • If your variable points to some Pen object that is not Cloneable, and you call clone() from either the same package or a subclass, then you hit CloneNotSupportedException.
    • if your Pen class implements Cloneable, then it should override the clone() method
    • [1] The only way the default Object.clone() gets invoked is when a Cat class implements Cloneable but doesn’t override clone(), and you call clone() from either the same package or a subclass

Cloneable is a special marker interface. Without it, clone() triggers CloneNotSupportedException; if you override clone() yourself, the exception may never be hit. It’s an obscure detail.

  • I think you can override clone() without implementing Cloneable, but this is tricky and non-standard.
  • You could also implement Cloneable without overriding clone() .. see [1]

java access levels: protected^package-private..

https://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html (java8) shows two nice tables:

  • There’s no more “private protected”.
  • The default access level is better known as “package-private” — strictly more restrictive than protected (protected is closer to public). The 2nd table shows that
    • a “package-private” member of Alpha is accessible by Beta (same package) only, whereas
    • a “protected” member of Alpha is accessible by Beta and Alphasub

I find it hard to remember, so here are some sound bites:

  1. the “protected” keyword only increases visibility, never decreases it.
  2. So a protected field is more accessible than default (package-private)
    • As an example, without “protected” on field1, subclasses outside the package cannot see field1.
  3. same-package neighbors are trusted more than children outside the package

For a “protected” field1, a non-subclass in the same package can see it, just as it can see a default-accessible field2.

Not mentioned in the article, but when we say “class Beta can access a member x of Alpha”, it means that the compiler allows you to write, inside Beta methods, code that mentions x. It could be myAlpha.x or it could be Alpha.x for a static member.

swap^cash equity trade: key differences

I now feel an equity swap is an OTC contract, whereas an IBM cash buy/sell is executed on an exchange.

  • When a swap trade settles, the client has established a contract with a Dealer. It’s a binding bilateral contract having an expiry, and possibly collateral. You can’t easily transfer the contract.
  • When a cash trade settles, the client has ownership of 500 IBM shares. No contract. No counterparty. No expiry. No dealer.

I think a cash trade is like buying a house: your ownership is registered with the government, and you can transfer the ownership easily.

In contrast, if you own a share in coop or a REIT or a real-estate private equity, you have a contract with a company as the counterparty.

Before a dealer accepts you as a swap trading partner, you must be a major company to qualify as the counterparty of a binding contract. A retail investor won’t qualify.

PendingNew^New: OrdStatus(tag 39)

PendingNew and New are two possible statuses for a given order.

PendingNew (39=A, 150=A) is relatively simple. The downstream system sends a PendingNew to upstream as soon as it receives the “envelope”, before opening, validating or saving it. I would say even a bad order can go into PendingNew.

New (39=0, 150=0) is significant. It’s an official acknowledgement (or confirmation) of acceptance, synonymous with “Accept” and “Ack”. I think it means the order is fully validated and saved for execution. An intermediate system usually waits for an Ack (39=0) from the exchange before sending its own Ack upstream; tag 39 is usually passed through unmodified.
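
A hedged illustration using the tags above (35=8 marks an execution report; the ClOrdID in tag 11 is invented and other fields are omitted):

  35=8|11=ORD123|39=A|150=A|...    <- PendingNew: envelope received
  35=8|11=ORD123|39=0|150=0|...    <- New: officially accepted (Ack)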

For a market Buy order, I think it will be followed by (partial) fills, but not guaranteed, because there may be no offers, or execution could fail for any reason. For a dealer system, execution can fail due to inventory shortage. I implemented such an execution engine in 95G.

I’m no expert on order statuses.

perl regex modifier /m /s clarified

  1. By default, q($) + q(^) won’t match newline. /m targets q($) and q(^)
  2. By default, the dot q(.) won’t match newline. /s targets the dot.
  3. The /m and /s both help get newlines matched, in different contexts.

Official doc says:

  1. /m  Treat the string being matched against as multiple lines. That is, change "^" and "$" from matching the start of the string’s first line and the end of its last line to matching embedded start and end of each line within the string.
  2. /s  Treat the string as single line. That is, change "." to match any character whatsoever, even a newline, which normally it would not match.
  3. Used together, as /ms, they let the "." match any character whatsoever, while still allowing "^" and "$" to match, respectively, just after and just before newlines within the string.

loading .gdbinit

My experiments show that $HOME/.gdbinit is discovered. I actually changed the $HOME env variable:)

However, I hit

  warning: not using untrusted file "/v/global/user/b/bi/bint/.gdbinit"

, even though I added q(gdb -iex 'set auto-load safe-path …'). I guess the warning comes up before the -iex option takes effect.

  gdb -ix '/path/to/.gdbinit' # also failed due to untrusted file
  g++ -std=c++0x -g dump.cpp && gdb -iex 'add-auto-load-safe-path .' ./a.out # working


pink sheets #learning notes

The pink sheets are a stock quotation service for unlisted stocks.

  • Many are penny stocks, trading for extremely low prices,
  • some are legitimate foreign companies that don’t wish to file reports with the SEC.
  • … There’s less regulation, less transparency, more risk of fraud in these stocks.

OTC Markets Group offers this service.

PinkSheet stocks are considered non-hedgeable in some swap dealer systems. I guess liquidity is too low.

https://www.fool.com/knowledge-center/what-are-the-pink-sheets.aspx is a good intro.

ETF mkt maker #RBC

The market maker (a dealer) does mostly prop trading, with very light client flow.

Q1: creation/redemption of units?
A: yes, the ETF market maker participates in those. When the dealer has bought a lot of the underlier stocks, it creates units; when the dealer has accumulated a large inventory of units, it redeems them (converts them back to underliers).

Q1b: what’s the motivation for dealer to do that?
A: there’s profit to be made

Q3: restrictions on short position held by a dealer?
A: there are restrictions on how long you can hold a short position without a borrow (stock loan). For regular investors it could be a few days, or even zero seconds. For a market maker it is definitely longer, like 5 days.

Q3b: how about size of the short position?
A: probably no restriction. However, if a dealer has a huge short position and is looking for a borrow, the stock loan could be very expensive.

Q: how is the bid or ask price decided in the market maker system? Is it similar to the citi muni system? In a competitive, highly liquid market, demand is sensitive to price.
A: fairly simple, because the underliers’ bid/ask quotes are well-known and tight. For a bond ETF, the spread is bigger.
A: the inventory level in the dealer’s account is another factor
A: pressure in the market micro-structure is another factor. If you see heavy bidding and few offers, then you may predict a price rise

gdb symbol-loading too time-consuming

After I attach gdb, it immediately starts a prolonged symbol-loading process. It’s better to skip that loading and selectively load some symbols afterwards.

https://ascending.wordpress.com/2007/09/02/a-couple-of-gdb-tricks/ describes how to use ~/.gdbinit, but I had no permission to write to the home directory.

gdb -iex 'set auto-solib-add off' …. # worked

–loading a particular *.so file

I got “No loaded shared libraries match the pattern” and fixed it by

shar file.name.so # shar abbreviates gdb’s "sharedlibrary" command; pass the short name instead of /full/path/to/file.name.so.some.version