self-hate due to hearsay 300k salary #XR

.. You seem to feel the (hearsay) income level of 300k is the minimum you need to feel good about yourself. In that case, your worry and negative self-assessment about income is misaligned with reality.

A bit of real statistics to chew on – rank all countries by GDP per capita. Most of the top 10 richest countries have populations below 9 million, including Switzerland and most of the Nordic countries.

Q: How many countries are richer than U.S. *and* with a population above 20 million?
Answer: zero. Japan, Germany, the UK, France, … are all less rich than the U.S. Now, since I believe you don’t want to compare with developing countries like China, Korea, Taiwan and India, let’s first focus on rich countries —

  • I believe half the American families earn less than 60k combined income, so do you think half the American families are struggling to survive every day?
  • I would estimate (based on my knowledge) more than half the *families* across the rich countries earn less than USD 60k, but you are implying that a single income of 300k is the minimum you need?
  • USD 300k single income would put you in the top 2% in any rich country, but you feel that’s the minimum you need?
  • USD 300k is higher than at least half the doctors’ and lawyers’ incomes across the rich countries, so you seem to be saying most doctors and lawyers are struggling to survive on their incomes.
  • My wife’s income is about SGD 30k. A regular teacher’s salary in Singapore is about SGD 50k. Singapore is, by most rankings, more affluent than the U.S. and teachers are a large, white-collar workforce. By your standard, even a 500% increase in a Singapore teacher’s income would still be too low for you.
  • In one of the most expensive cities of our world – London – a USD 300k salary would be top 2%. I know from many sources that London finance IT salaries are lower than New York’s. A GBP 700 daily contract rate is “extremely rare” (unheard of to many people), but it works out to only about USD 230k, yet you would think that’s not enough to survive on. Mind you, London is more expensive than New York.
  • Some would say London is still cheaper than … Hong Kong. A UBS VP position I tried for was at HKD 1.2 million, about half your minimum standard.
  • I have friends in Shanghai and Beijing – the most expensive Chinese cities (along with Shenzhen). A 300k USD salary would be one in 500 jobs over there, but you think it’s barely enough for you. They would guess you live in a city where home price is 10 times higher than Shanghai/Beijing but in reality, your home is more affordable — A comparable apartment (not a 2-storey house with backyard) in Beijing/Shanghai would cost at least USD 1.5 million.

You are living in an ivory tower (and refusing to step out to the real world) if you hold on to that irrational and negative view. You sound like a guy complaining about his 10-room, 3-story mansion. It’s not depression but hallucination.

If you carry this habit into parenting, then beware — your kids could be top of their class but you may still feel they are not good enough because they didn’t win a gold medal. That would be tragic. I think Chinese parents are demanding, but most are not at that level. We love our kids and accept them. We ought to love and accept ourselves.

I recommend [[compassion and self-hate]] by Theodore Isaac Rubin, my favorite American self-help writer. His books changed my life. I feel good to know he is now 95. I’m grateful to Dr Rubin; I’m grateful to my dad; I’m grateful to Buddhist teachings; and I’m grateful when I answer your questions — I often get a chance to look into myself. I thank our creator for giving me the analytical powers (though not as powerful as Dr Rubin’s) to dissect the layers and uncover the core issues in my psyche. As I endeavor to answer your questions I often reach a deeper understanding of my own pains, stupidity, irrationality and unfairness to myself and loved ones.


My absorbency[def]!=highest #don’t scold boy 耐得住寂寞

See also 耐得住寂寞,也是天分 #absorbency for students

Obviously 99% of us have limited technical talents such as learning capacity, memory and problem-solving speed .. but today I will focus on another factor, “absorbency | 耐得住寂寞” – the capacity to endure repetitive practice and sustained focus, often in solitude, and build the /mileage/ required for mastery. No one has unlimited absorbency. Sooner or later we all reach a saturation point, or “breaking point” (Liu Shuo).

Opening example — I often tell my friends “Study is not hard work for me.”

example — When I was trying rock climbing, my fingers and forearms would get too tired for me to continue. On the other hand, as a teenager, I could do 600 sit-ups in 10 minutes non-stop.

example — jogging and yoga turn out to be my most heroic acts of self-mastery.

Absorbency is one of the top 5 most important technical talents (possibly top 3), especially in the long run.

Some fellow parents (like Li Yi) said the vast majority of primary school pupils are not very different in IQ. I would add that differences in absorbency are huge. I often feel my son is weaker on absorbency when practicing piano, Chinese handwriting and math. I think he is improving. His swimming practice has always been good. My daughter shows more absorbency for handwriting, character recognition (renzi), and piano.

Some individuals are not the brightest/fastest but can tolerate high amounts of practice and can become formidable like 郭靖, 小龙女, compared to 杨过.

Note the word “absorbency” is more precise than “effort”. Absorbency is more about capacity, less about attitude or motivation.

Let’s shift focus to my own technical absorbency. I want to point out I’m not the strongest so I should not blame my son.

  1. Coding drill – I can do more than most peers, but then some grads can solve more than 5 problems a day, and pass all Leetcode tests.
  2. Quant study – I did more than half my homework on my own. Some classmates were able to practice and study more.
  3. Getting a grasp on a big codebase – absorbency required, beside learning speed + memory.
  • eg flight — how much technical learning you get on a 15H long flight is a measure of your technical absorbency.
  • eg coding drill — Rahul, Abhinav and I each find joy in coding drill, but many of my friends seem to focus on reading the ideas rather than practicing and solving problems independently. Focusing on the end result rather than the journey is no fun. The joy increases your absorbency.
  • Positive feedback — is a key factor affecting absorbency. Coding practice and software programming offers immediate feedback.
  • self-doubt — I frequently (once a month?) question my endeavor (and the elusive strategic value) .. detrimental to absorbency. It’s already hard to absorb the practice even without these self-doubts.
    * yoga
    * risk analytics
    * socket

max-profit at-most-2K trades #proven but untested

Q(Leetcode): Say you have a price history as array. Design an algorithm to find the maximum profit. You may complete at most 2K transactions, consisting of exactly K (eg 2) buys and K sells. You may not engage in multiple transactions at the same time (i.e. you must sell the stock before you buy again). No short sell please.

No O() requirement.

====analysis=====

I feel first challenge is to list all (not more than 10) scenarios. This step has taken me a few days, even though I drew many examples.

–Idea 3 for 2K, based on Leetcode discussion

f[2k, ii] represents the max profit up until prices[ii] using at most 2k transactions. 
f[2k, ii] = max(f[2k, ii-1], prices[ii] + max_for_all_jj(f[2k-2, jj-1] - prices[jj]))

There are two possibilities:
  1. the optimal solution at [2k,ii] doesn’t involve the price point ii, so the solution is f[2k,ii-1]
  2. the optimal solution at [2k,ii] involves a Sell at price point ii. In this scenario, the last buy is at some previous price point jj, and before jj we have an optimal solution at [2k-2, jj-1]
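To make Idea 3 concrete, here is a minimal Python sketch of that recurrence (my own hedged rendering, untested on Leetcode; I count k buy/sell pairs, so k pairs equal 2k transactions, and carry a running best to avoid the inner max_for_all_jj scan):

```python
def max_profit(prices, k):
    """Max profit with at most k buy/sell pairs (i.e. 2k transactions)."""
    if not prices or k == 0:
        return 0
    n = len(prices)
    # f[t][i] = max profit up to prices[i] using at most t pairs
    f = [[0] * n for _ in range(k + 1)]
    for t in range(1, k + 1):
        # best = max over jj <= i of f[t-1][jj-1] - prices[jj]
        best = f[t - 1][0] - prices[0]
        for i in range(1, n):
            # either skip price point i, or sell at i after the best earlier buy
            f[t][i] = max(f[t][i - 1], prices[i] + best)
            best = max(best, f[t - 1][i - 1] - prices[i])
    return f[k][n - 1]

# max_profit([1, 18, 2, 19, 15, 16], 2) → 34
```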

–solution 1 (O(NN) brute force): construct all possible pairs, rank them and pick top 2.

–solution 2 (O(N) only works for K=2)

  1. Identify all the turning points so we end up with HLHLHL… We can eliminate or ignore the other points.
  2. * identify the best pair using the max-profit algo. denote them as L1/Hj
  3. * In the subarray before L1, find the best pair
  4. * in the subarray after Hj, find the best pair
  5. pick the better of the two and denote it as p2
  6. Now look into the subarray L1 to Hj. If there’s no enclosed pairs within then we have a simple case — use L1/Hj and p2. But let’s assume there are at least 2 nodes enclosed. I will denote entire subarray as L1 H1 L2 H2 … Lj Hj (where L1-Hj is the max-profit)
  7. * use max-profit algo to find the worst loss from H1 to Lj. Suppose it’s H3 to L5.
  8. If this loss exceeds p2, then we will return L1/H3 and L5/Hj. Otherwise, return L1/Hj and p2.

This solution uses the watermark algo 4 times (*).

I feel the basic decision is whether to break the best pair or keep it.

case: need to break the highest pair into 2 pairs,
case: best pair + another pair outside. I think this is easy..
case: 1,18,2,19,15,16. Perhaps the hardest case to solve.

–other ideas, for K > 2

can we use a matrix?

We can keep track of all profitable pairs i.e. le/ri indices, and also a pointer to the current best pair that’s not overlapping with “me”.

After creating 2nd pair, IFF no overlap, then we update the pointers in both instances.

After creating 7th pair, if it doesn’t overlap with the #3 highest pair, then check-update the pointer in #3.

I think if we can efficiently keep track of these then it should work.

minimum hardware requirement to run linux #historical observation

  1. Bell Labs Unix, in its early days, was able to run on hardware considered “underpowered” even by the standards of that day — P33 [[art of unix programming]]. I believe contemporary OS kernels were unable to run on such low-end machines.
  2. Linux (P77) has a similar characteristic. In the 1990’s the big commercial Unixes targeted enterprise-class hardware but Linux emphasized doing more with less. Today, Linux powers 99% of the world’s most powerful supercomputers but Linux also runs on low-end or obsolete hardware.

In both cases, I feel an OS designed with a very low minimum hardware requirement turned out to be more efficient, more adaptable, more versatile and more powerful than its conventional competitors.

peek@solution,get hint,then solve the problem yourself

I have found it rather important to follow this routine — Suppose a given easy or hard leetcode problem is too unfamiliar to me (might be familiar to you) so I have very few ideas, all on the wrong track. I would peek at the discussions to get some useful hint. I stop reading right there and go back to my drawing board and spend the next hour (or day) conceiving my own solution.

This has worked more than half the time. Benefit? I get to implement and remember my own solution.

There is a more important but subtle benefit — that hint is often related to a Reusable problem-solving technique, the holy grail of coding drill.

The way I see it, there are only a dozen or so (below 20) reusable techniques. Even if I were to memorize all of them, they would be useless to me because the real challenge is how to Apply a technique to a given problem. The way I learn to apply a technique is

* make sure I feel the hint i.e. it connects some dots in my mind
* experiment with various ways to use the hint to tackle the problem
* some experiments show progress; some become dead-end because they are the wrong ways to apply the technique
* keep sharpening "pattern recognition" until the key characteristics of the problem become clear in my mind.

The more I experiment with the hint, the more familiar I get… familiar with the problem, the hint, and the hitherto hidden gem behind the hint. That gem is often (not always) a reusable technique.

I won’t learn how to apply a Reusable technique if I don’t experiment, if I read a solution directly. The dots won’t connect. Pattern-recognition will still develop but slowly. Worse still, I may completely miss the hidden Reusable technique in the solution.

I always believe the real value in doing a Leetcode problem is the technique, reusable or not.

The solution I conceive will Not be as optimal as the one in the discussion … but it is easier to digest. The optimal solution in the discussion is usually rather concise (since the authors want to look cool) and elegant, but I don’t need to learn an elegant solution. I need to learn a Reusable technique. For that purpose, my own solution is easier to digest because it uses only a bit of the new technique + many familiar techniques.

## eg@portableGTD skills: valuable but never quizzed

Background — survival skills not tested in IV are usually [1] localSys and [2] portable GTD

  • — dev-tool knowledge
  • my posts on c++ build errors
  • Many posts on gdb
  • most posts on eclipse and MSVS
  • posts on git/cvs
  • posts on bash
  • posts on vi, less, find
  • — beyond tool knowledge
  • correlate logs, config, source code, data store…
  • script automation and shell customization skills
  • techniques and familiarity with jira

extern^static on var^functions

[1] https://stackoverflow.com/questions/14742664/c-static-local-function-vs-global-function confirmed my understanding that static local function is designed to allow other files to define different functions under this name.

| | extern var | file-scope static var | static func | extern-C func |
|---|---|---|---|---|
| purpose 1 | single object shared across files | | [1] applies (99% sure) | name mangling |
| purpose 2 | | private variable | private function | |
| alternative 1 | static field | anonymous namespace | anonymous namespace | no alternative |
| alternative 2 | singleton | | | |
| advantage 1 | | won’t get used by mistake from another file | won’t get used by mistake from another file | |
| disadv 1 | “extern” is disabled if you provide an initializer | no risk; very simple | | |
| put in shared header? | be careful | should be fine | should be fine | not sure |

MS-Outlook defer-send rule #3spaces

I have used this type of rule in many companies. My current set-up is —

Apply this rule after I submit a msg marked as Normal importance: defer delivery by 1 minute.

I wish there were an “importance = A or B” condition.

y I said you looked too dissatisfied #XR

In roughly 5 out of every 10 chats (too many 🙂) you reveal serious dissatisfaction with your own situation. We are similar in this respect, so I know the feeling. Precisely because of that, I feel you have no need to be so dissatisfied, still less miserable.

I have never heard you mention your father. My father gave me precious guidance in this area. More importantly, he gave it repeatedly — my thinking habits are very hard to change. I have always had an independent, confident cast of mind, truly stubborn, even incorrigible. I am grateful that he never tired of pointing out my self-inflicted worries.

Gratitude alone achieves nothing. What matters more is that his wisdom and patience gradually won me over, and I realized I am not incorrigible after all.

Many of the differences between your views and mine trace back to my father. For example: school districts; elite universities favoring disadvantaged groups; which country’s education system is more successful; evaluating kids too early…

Let’s talk about careers. I am keenly aware of my limited IQ/EQ, so there is really no need to compare myself with highly paid technologists (let alone management talent). That’s why I say my current situation is decent — I should be quietly counting my blessings.

Coding drill doesn’t have to bring economic returns — such as an offer from a top Silicon Valley or HFT firm. I care more about improving my abilities and accumulating skills. A few years from now, even if the accumulation turns out modest, I hope I can still feel at peace with the effort.

Given how my UChicago master’s turned out, feeling at peace is genuinely not easy. My summary — financial math positions are too few and demand more ability than I have; the pay is not necessarily much higher than a programmer’s; and there are no contract roles to speak of. The field doesn’t leverage my (and CSDoctor’s) coding strengths and experience. So when I chose that master’s program in 2013, I didn’t understand the facts well enough. Only once aboard did I find out.

That painful experience shaped my caution, hesitation and reluctance toward every new technology and new domain.

Since I don’t see much money in those domains, I am not as dissatisfied with the status quo as you are. That said,

* i’m good at scripting/SQL/data-processing compared to other developers I know;
* I like analyzing complex data, with attention to details;
* I have formal math training including statistics

So IF there’s some high-paying domain for me, I am open to it. That’s a big IF. The way I see it, most of those data analyst jobs are not paying well. If it pays well, it would be too hard to get in.

[18] get ideas@G200 common Q #XR

Update — Now I realize my energy is limited so I need to allocate my spare time between

  • real coding
  • reading the key ideas

Also, I feel no positive feedback when reading the key ideas. I hit de-motivation.

Hi XR,

I now agree with your analysis — if I understand and memorize 1 key point about each problem, by reading leetcode, those points will help me significantly in a coding interview. This is one of the 4 benefits of coding practice #syntax,ECT,BP

  • If the problem is easy, then those key points will save me from running out of time.
  • If the problem is hard for everyone, then the interviewer would have a lower expectation, i.e. no need to complete it 100%. If I am on the right track (while other candidates are not) then I have an edge.
    1. case in point — Last Friday I told you about shortest-path problems. I learned the technique from past interviews.
    2. case in point — remember I told you about my own Facebook question (3-way sorter/classifier #FB)? Your swap technique is a valuable key point, even though it won’t help me finish the solution within 30 minutes.
    3. case in point — In 2018 I encountered comparable int-array shuffling problems within 2 months at MS and LiquidNet. Key points are relevant.
    4. case in point — In a 2018 interview, I learned to use an N-by-N matrix to represent an N-node graph, then I used this key idea in another coding interview.

I wonder if I could get myself to follow a plan similar to yours:

  • every weekend (+ weekday evenings) spend 30 – 120 minutes.
  • First select some “interesting” and popular question. If no idea, put it on the back-burner
  • after a while, give up and read the solution to get the ideas
  • eg: edit distance
  • eg: punctuating continuous sentence
  • eg: regex
  • eg: maximum all-black sub-matrix

edit distance

The DP idea — compare matrix-path-counter, which is more visual and easier than this one.

Q72 on Leetcode: Given two words word1 and word2, find the minimum number of operations required to convert word1 to word2. You have the following 3 operations permitted on a word:

  1. Insert a character
  2. Delete a character
  3. Replace a character

Comment — Top 100, naturally occurring. I won’t bother to pass all Leetcode tests esp. the load tests. If I pass all non-load tests I would consider my solution decent.

https://github.com/tiger40490/repo1/tree/py1/py/str has my implementation based on a DP idea online, and a spreadsheet illustration. The idea is elegant once you wrap your mind around it.
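For comparison, the classic DP recurrence can be sketched in a few lines of Python (a textbook sketch, not necessarily the exact idea behind the linked spreadsheet):

```python
def edit_distance(word1, word2):
    m, n = len(word1), len(word2)
    # dp[i][j] = min ops to convert word1[:i] into word2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all i chars
    for j in range(n + 1):
        dp[0][j] = j                      # insert all j chars
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if word1[i - 1] == word2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]   # chars match: free
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # replace
                                   dp[i - 1][j],      # delete
                                   dp[i][j - 1])      # insert
    return dp[m][n]

# edit_distance("horse", "ros") → 3
```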

===analysis===
Starting with the small string (length S), the challenge is to project as many of the S chars as possible onto the large string (length L). If we can project 5 chars at most, then … (wrong — the remaining S-5 chars need replacement, and the other L-S chars need insertion.)

–idea2: draw all the projection arrows from S to L. In a good-projection, every arrow on the right should be more slanted than every arrow on the left. We want the largest good-projection. In the opening example, the largest would have 5 arrows, …

—-
None of these ideas has proven effective.

in size-N array find The duplicate int #1 to N+1 #pigeonhole Abhinav

https://leetcode.com/problems/find-the-duplicate-number/description/: Given an immutable int array nums containing n + 1 elements where each element is between 1 and n (inclusive), at least one duplicate value must exist (pigeonhole). You are guaranteed there is only one duplicate value; find it in O(1) space and below O(NN) time. The culprit may repeat many times.

I didn’t bother to write the code.

===== analysis =====

contributed by a user and highly contrived:(
many likes:)

–bisection solution in O(N logN) time and O(1) space. I came up with this solution within a minute.

  1. Divide the full range [1 to n] into 2 almost-equal ranges (i.e. if n = 2K+1, then I use [1 to K] and [K+1 to n] as the 2 ranges)
  2. Count how many nodes are in each range. Clearly one of the two ranges must have too many elements.
  3. Remember the boundary of that bad range so from now on we will ignore those nodes falling into the good range. We will use 2 variables to update/improve the boundary, until they coincide.
  4. within the bad range, repeat Step 1.

Key insight — progressive bisection.. non-recursive.

Key insight — applying the pigeonhole principle, we split the conceptual range. The more common (but ineffective) technique would split the physical array.
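The bisection idea above can be sketched as follows (a minimal Python sketch of the value-range bisection; variable names are mine):

```python
def find_duplicate(nums):
    """nums has n+1 elements drawn from 1..n; returns the duplicated value."""
    lo, hi = 1, len(nums) - 1       # bisect the VALUE range [1, n], not the array
    while lo < hi:
        mid = (lo + hi) // 2
        # count how many elements fall into the lower half [lo, mid]
        count = sum(lo <= x <= mid for x in nums)
        if count > mid - lo + 1:    # pigeonhole: too many values in this half
            hi = mid                # the culprit hides in [lo, mid]
        else:
            lo = mid + 1            # the culprit hides in [mid+1, hi]
    return lo

# find_duplicate([1, 3, 4, 2, 2]) → 2
```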

generate no-loop paths: graph node A→B

Q1: given 2 nodes in a graph containing N (eg 121) nodes, potentially with cycles, generate all simple paths between the pair. A simple path has no cycle. (In other words, for a simple path, length + 1 ==  # unique nodes)

I think there are classic math algorithms for it, because this is part of basic graph theory. Here are some applications of this type of algorithms —

  • Q1b (special case of Q1): given 2 nodes in a C-by-R rectangular grid, where every node is connected to (up to) four neighbors, generate all cycle-free paths.
    • Below, I solved this problem in python
  • Q2 (simple application of Q1 algo): generate all simple paths between ALL node pair in a graph. The shortest simple path has length=0. Longest simple path can potentially visit every node exactly once.
  • A: first generate all 121-Choose-2 node pairs. For each pair, solve Q1. Lastly generate the 121 trivial paths of length=0.
    • repetitive 😦
  • Q2b (special case of Q1): In a C-by-R rectangular grid, where every node is connected to (up to) four neighbors, generate all cycle-free paths between all pairs.
  • Q2c (easy one based on Q2): given a binary tree containing no cycles, generate all paths.

— A1b: my DFT implementation (probably not 100% correct), where each “trail” either fails or becomes a path.

  1. from NodeA start a breadcrumb/trail. We can’t revisit any node already visited on current breadcrumb,
    1. if this is a matrix, then instead of a hashtable, we can also use a shadow matrix, but the breadcrumb is much smaller than a shadow matrix
  2. if we reach a node surrounded by nodes on the same breadcrumb, then the trail fails at a dead-end
  3. else we will reach NodeB 🙂 Print the breadcrumb

By construction, we won’t see duplicate paths 🙂

https://github.com/tiger40490/repo1/blob/py1/py/grid/classic_count4waySimplePaths.py is the implementation
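For a general adjacency-list graph, the breadcrumb idea can be sketched as follows (a hedged sketch, simpler than the grid version; `graph` maps each node to its neighbor list):

```python
def all_simple_paths(graph, src, dst):
    """graph: dict node -> list of neighbors. Returns every cycle-free path src..dst."""
    paths = []
    trail = [src]          # the breadcrumb
    on_trail = {src}       # O(1) membership test for the breadcrumb

    def dfs(node):
        if node == dst:
            paths.append(list(trail))   # the trail became a path
            return
        for nb in graph[node]:
            if nb not in on_trail:      # never revisit a node on the current breadcrumb
                trail.append(nb)
                on_trail.add(nb)
                dfs(nb)
                trail.pop()             # backtrack
                on_trail.remove(nb)

    dfs(src)
    return paths

g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
# all_simple_paths(g, 1, 4) → [[1, 2, 3, 4], [1, 3, 4]]
```

By construction no path is generated twice, since each path corresponds to exactly one sequence of recursive calls.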

–BFT? I don’t think BFT can print each unique path

shortest path:2nodes]binary matrix #BFT

Q: given 2 cells in a binary matrix (1=black, 0=white=blocked), check whether the pair is connected and, if yes, return the shortest path. A path of length 1 exists between 2 cells IFF they are side by side or stacked on top of each other.

count paths between 2 bTree nodes #PimcoQ9 Ashish is arguably a harder problem, but this problem allows moving in four directions.

The binary-matrix island count #DeepakM technique is more applicable. A BFT should work —

  • every reachable node is painted Green (like 2)
  • we give up after our queue is empty

https://github.com/tiger40490/repo1/blob/py1/py/grid/classic_connectedPair.py is the implementation, briefly tested.
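A minimal BFT sketch with parent pointers for path recovery (my own hedged version, not the linked implementation):

```python
from collections import deque

def shortest_path(grid, src, dst):
    """grid[r][c]==1 means open; src/dst are (row, col). Returns a path or None."""
    R, C = len(grid), len(grid[0])
    parent = {src: None}            # doubles as the visited set (the "green paint")
    q = deque([src])
    while q:
        cell = q.popleft()
        if cell == dst:             # walk the parent chain back to src
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < R and 0 <= nc < C and grid[nr][nc] == 1 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                q.append((nr, nc))
    return None                     # queue drained: the pair is not connected
```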

fewest jumps to reach right end #triple jump

Q(Leetcode): Given an array of non-negative integers, you are initially positioned at the first index of the array. Each element in the array represents the maximum permitted jump length from that position.

https://github.com/tiger40490/repo1/blob/py1/py/array/tripleJump.py is my solution, NOT tested on Leetcode.

==== analysis =====
Typical DP + greedy algorithm. I will jump leftward starting from the right end [1].

Suppose there are N=99 nodes in the array. I will pre-scan the N nodes to build a shadow array of integer records, each a BestLefNode. (The first record is unused.)

Eg: If BestLefNode[44] == 33, it means that based on known data so far, the left-most (furthest) node we can jump to from Node #44 is Node #33.

When we visit Node #7 during the rightward scan, we will update 0 or more BestLefNode records from #8 onward.

As soon as we update BestLefNode[N-1] i.e. right-most record, we exit the initial scan since the optimal solution is now available. For example, if rightmost BestLefNode has value #88, that means the furthest node we can reach from the right end is Node #88, so we will jump to #88 and then check the best destination From #88.

[1] Why not start from the left end and jump rightward? I think there’s no symmetry in this problem. From Node 1 the immediately-reachable nodes form a continuous region.
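For reference, the well-known left-to-right greedy also solves this in O(N). Note this is a different design from my right-end BestLefNode idea above (a textbook sketch, not my solution):

```python
def min_jumps(nums):
    """Fewest jumps from index 0 to the last index; nums[i] = max jump length."""
    jumps = cur_end = farthest = 0
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])  # best reach using one more jump
        if i == cur_end:        # exhausted the current jump's range: must jump now
            jumps += 1
            cur_end = farthest
    return jumps

# min_jumps([2, 3, 1, 1, 4]) → 2
```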

longest consecutive ints]O(N) #zebra

Popularity — 1000+ likes on Leetcode … possibly popular

Q(Leetcode #128): Given an unsorted array of integers, find the longest consecutive element sequence, in O(N) time. Eg: given [100, 4, 200, 1, 3, 2] return [1,2,3,4]

I call this the zebra problem because  every consecutive sequence of int is a black stripe and the gaps between them are white stripes. We want the widest black stripe. Obviously, each stripe has minimum size 1.

https://github.com/tiger40490/repo1/blob/py1/py/array/zebra.py is my O(N) solution, not tested on Leetcode.

========

What’s UnionFind? A reusable technique?

Like inserting interval #merging #80% done, I feel this is a data-structure problem.

To keep things simple, I will first run one iteration to remove all duplicate items.

I will use a hashtable where each key is a known item. The value is a pointer to a “segment” object.

A segment stores the min and max values. All integers within [min, max] of the segment are always known-items during my scan of input array.

When a new item is either min-1 or max+1, we expand the segment by adjusting the extremes…

The trick is joining two segments, without link pointers. After joining, we don’t really adjust the min/max fields. We only update the max-length global variable if needed.

To keep the hashtable small, I can optionally delete from it but we don’t want to do a range delete within the loop — O(NN)
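The segment/hashtable design above is one way. For contrast, here is the well-known O(N) set-based sketch, which finds each black stripe by walking right from its left edge (a textbook technique, not my segment design):

```python
def longest_consecutive(nums):
    s = set(nums)                # dedupe + O(1) membership
    best = []
    for x in s:
        if x - 1 not in s:       # x is the left edge of a black stripe
            y = x
            while y + 1 in s:    # walk the stripe rightward
                y += 1
            if y - x + 1 > len(best):
                best = list(range(x, y + 1))
    return best

# longest_consecutive([100, 4, 200, 1, 3, 2]) → [1, 2, 3, 4]
```

Each stripe is walked exactly once (only from its left edge), so total work is O(N) despite the nested loop.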

git | list all (and only) commits from branching point to a descendant commit

Background — Suppose you made 3 commits on your feature branch named “pairx”, but meanwhile, someone added 4 commits to the master branch. Therefore there is now a divergence in the commit history graph.

Oftentimes you need to visualize the divergence. You need to know exactly which 3 commits are on your branch after the common ancestor.

git log master..pairx # listing the 3 additional commits in pairx branch, right after the common ancestor i.e. the branching point

git log pairx..master # to show those 4 commits.
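A few related standard git commands, for completeness (using the same `pairx` branch as above):

```shell
# the 3 commits on pairx but not on master
git log --oneline master..pairx

# the 4 commits on master but not on pairx
git log --oneline pairx..master

# the branching point (common ancestor) itself
git merge-base master pairx

# both sides of the divergence in one graph
git log --oneline --graph master pairx
```

The double-dot `A..B` means: commits reachable from B but not from A.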

generate combinationSum compositions #backtrack up] trie+tree

Q: https://leetcode.com/problems/combination-sum/description/

Given a set of unique candidate numbers and a target number, find all unique combinations of candidates, where each combination sums to target. Each candidate may be used repeatedly.

My solution is https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/comboSum.cpp , showing a reusable backtracking technique described below. It’s clever and brief. However, the efficiency is questionable. Memoization might be applicable here.

This backtracking relies on a key insight. Suppose we have target = 7 and X=2,Y=3,Z=4 as the candidates.

  • when we try a Y, we don’t need to try any more Xs. For example, if we are trying XY, then all XX* solutions are already handled by earlier recursive calls.
  • each combo sequence is naturally pre-sorted internally.
  • Also, “lower” formulas are generated before “higher” formulas
root
├── x
│   ├── xx
│   │   ├── xxx
│   │   └── xxy
│   ├── xy
│   │   ├── xyy
│   │   └── xyz
│   └── xz
│       └── xzz
├── y
│   └── …
└── z
    └── …
void recurs(               // can return something if needed
    solutionsFound &,      // growing
    curPartialSolution &,
    // the above collections could be global variables, to simplify things

    remainingCandidates,   /* probably an immutable global array.
                              If we need the remaining array to shrink, we can
                              still rely on startIndex to skip used candidates. */

    gapFromTarget,
    startIndex)            // used to select from remaining candidates

Inside this function, we scan remaining candidates starting from startIndex. Typically in one iteration

  1. we add a new candidate into curPartialSolution
  2. we call recurs
  3. we remove the last added candidate from curPartialSolution to restore the original curPartialSolution — backtracking up the tree.
  4. move on to the next candidate
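The four steps above can be sketched in Python (a hedged rendering of the same backtracking; the global collections become closure variables, and sorting enables an early break):

```python
def combination_sum(candidates, target):
    candidates = sorted(candidates)     # sorted, so we can break early
    solutions = []
    partial = []                        # curPartialSolution

    def recurs(gap, start):             # gap = gapFromTarget
        if gap == 0:
            solutions.append(list(partial))
            return
        for i in range(start, len(candidates)):
            c = candidates[i]
            if c > gap:
                break                   # sorted: no later candidate fits either
            partial.append(c)           # step 1: add a candidate
            recurs(gap - c, i)          # step 2: recurse (i, not i+1: reuse allowed)
            partial.pop()               # step 3: backtrack up the tree
            # step 4: loop moves on to the next candidate

    recurs(target, 0)
    return solutions

# combination_sum([2, 3, 4], 7) → [[2, 2, 3], [3, 4]]
```

Note the output order matches the bullet above: "lower" formulas come out before "higher" ones.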

I feel this is more greedy than DP

irrational envy for all-round high flyers

When I first identify an acquaintance as an all-round high flyer, his (her) “note-worthy” achievements were invariably rather few, thanks to my automatic filter on his other “success” stories … because those kinds of “successes” are, at a deep and personal level, unimportant to me. But then those things insidiously sneak past my defence into my inferiority complex and /infest/. Extremely irrational and not worth it (不值得).

I would rather feel inferior to someone (I know well) with many unrelated yet worthy achievements [3]. I doubt there’s any in my circle.

Incidentally, when a public speaker is introduced on stage, the audience often hear a list of “successes” which are mostly unimportant to me.

(Even though none of them is a friend I know well enough) Over the years there were a small number of acquaintances [1] I have singled out. Once I single one out, I tend to lose my critical thinking and see many unimportant/insignificant/secondary “achievements” as enviable. Critical thinking is badly, badly needed at such a juncture!

Incidentally, one of the most effective ways to feel not-inferior is a lucrative job offer, even if I don’t take it.

The initial “enviable achievements” are usually one of these 5:
1) income, almost always managerial [2]
2) investment, mostly property
3) beautiful wife
4) fitness, flexibility and body shape
5) short commute

The other factors are usually (as they should) in my “don’t-care/unimportant-to-me” list, but they sneak into my inferiority complex.

* (multiple) degrees from prestigious universities? Actually most of them are inferior to mine!
* academic kids
* competitions and awards to himself or kids
* branded employers? many of them have fewer than I have
* running his own side business? I did while many of them didn’t
* wife professional career
* work-life balance… “easy job”? Questionable. Most high-paying jobs require effort
* writing and music skills? I achieved more than most of them!
* publications
* cars? is a liability not an asset!
* green card
* vacations to many places? Huge cost, no real gain for me
* managerial success at an early age
* golf skills? i couldn’t care less when I’m not losing my critical thinking.
* networking skill, smooth personality? I’m not this type

[2] as soon as I hear the MD title of some ex-classmate, I lose my critical thinking defence.

Better consider [[compassion and self hate]] and Buddhist teaching

[1] Besides ML, here are some questionable names. Many of them I barely know beyond the name and job title, so my inferiority is fundamentally similar to my infatuation with the Indonesian girl Sandy, whom I spoke to fewer than 3 times.
* Lu Nuo — I only know he has a writing hobby …
* Cai Hongyu
* Tao YY — I don’t really know how he is doing
* Yang Yang
* Xie Xiaoli

[3] briefly on myself — math, piano, c#, swing, properties, blog, helping friends with job hunting

max path sum through binTree #self-tested

Q1: Given a non-empty binary tree of signed integers, find the maximum path sum. For this problem, a path is defined as any sequence of nodes from any starting node to any node in the tree along the parent->child connections. The path must contain at least one node and does not need to go through the root. No uplink. No cycle.

Q2 (Leetcode “hard” Q124): uplink provided. No clue yet

====analysis====

My solution — DFT. Along each root-to-leaf path, use the max-subarray (Kadane) algo and store maxSumEndingHere value in each node, for reuse.

Q: is there any duplicate work?
A: I hope not, thanks to memoization i.e. Node::meh field

Q: do we visit every path?
A: I think so.

I simplified the idea further in

https://github.com/tiger40490/repo1/blob/cpp1/cpp/algo_binTree/maxPathSum.cpp

Time complexity is .. O(V+E) = O(N), since I visit every node and follow each edge once only.
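The meh/Kadane idea can be sketched as follows (a minimal Python sketch, assuming a path may bend at a node to join both subtrees, as in Leetcode 124; `meh` stands for maxSumEndingHere):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_path_sum(root):
    best = [root.val]          # best path sum seen anywhere so far

    def meh(node):             # max path sum ending at node, extending down one side
        if node is None:
            return 0
        left = max(meh(node.left), 0)     # a negative branch is better dropped
        right = max(meh(node.right), 0)
        # a path may bend at this node, joining both branches
        best[0] = max(best[0], node.val + left + right)
        return node.val + max(left, right)

    meh(root)
    return best[0]
```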

There might be algorithmically superior solutions on leetcode but I don’t want it to affect my joy, motivation and momentum.

prior knowledge can make you look brainy: algo IV

After solving the min-stack problem, I found something paradoxical, even nonsensical — someone said this question is considered “easy”, yet if you have not seen it before you may find it extremely hard. I thought O(1) worst case was impossible. I have experienced many similar situations where prior knowledge can make you look very brainy.
  • example — detect loops in a singly linked list, where the loop could be far out.
  • example — reverse an extremely long singly linked list which breaks any recursive solution, so you must use non-recursion
  • example — in a regular unsorted binary tree (every node has up to 2 child nodes and no uplink to parent) how to print all nodes in-order but with O(1) additional space and O(N) time
I feel west coast (and Bloomberg) tend to ask this type of questions because they expect their candidates to study and acquire the knowledge before applying.
If you don’t study, and hope to come up with a reasonable solution on the spot, then you must be extremely intelligent or lucky. I think the math Olympiad gold medalists also need to study before their competitions.
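For the record, the min-stack trick itself is tiny once known (a common sketch: store the running minimum alongside each element):

```python
class MinStack:
    def __init__(self):
        self._data = []        # stores (value, min_so_far) pairs

    def push(self, x):
        cur_min = min(x, self._data[-1][1]) if self._data else x
        self._data.append((x, cur_min))

    def pop(self):
        return self._data.pop()[0]

    def top(self):
        return self._data[-1][0]

    def get_min(self):         # O(1) worst case, the "impossible" part
        return self._data[-1][1]
```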

tech design “debate” with mgr

Some pointers from Rahul and other colleagues

  • [GS] pick your battle
  • manager will be the one holding the baby after you leave the team.
  • I feel I need to pay attention to the para-linguistics, and not focus solely on the technical side
  • I tend to hold strong views on design question that aren’t really major. I tend to argue and hold my ground even when it’s really a minor design decision. I appear stubborn and argumentative when I suspect the decision makers have not fully understood my points. Once I see that manager completely understands my arguments, I would stop pushing.
  • I feel I should put a limit on how much time I’m “costing” my mgr. After mgr has spent, say, 10 minutes listening to my points, I should probably “give up”.
  • [MS] Mgr probably hopes that I accept the collective decision and be “happy”, not reluctantly ..
  • [MS] need to avoid giving the impression that I have made up my mind about which design is best and only here to persuade.
  • [MS] need to avoid giving the impression that I’m not really open to new input
  • [MS] try to show unbiased .. without a favorite, even though my proposal is clearly my favorite
  • I hope to strike a balance between striving for a better design, and minimizing conflict
  • Try to detach the egos from the competing designs

y an order partially filled would get cancelled

Three scenarios: (Assuming a big order to sell 9M)

  1. client doesn’t like the partials so far, and cancels the rest of the order
  2. IOC limit order is partially filled because … at the requested price level there’s not enough quantity
  3. IOC market order is partially filled since there’s not enough quantity.
    • This is uncommon, but possible for an illiquid stock and big order.

Note a fully filled order has nothing left to cancel. All you can hope for is a bust initiated by exchange.

order slice^fill: jargon

An execution is also known as a fill, often a partial fill.

  • A slice is part of a request; An execution is part of a response

A slice can have many fills, but a fill is always for a single request.

  • An execution always comes from some exchange, back to buy-side clients, whereas
  • A request (including a slice) always comes from upstream (like clients) to downstream (like exchanges)
  • Slicing is controlled by OMS systems like HFT; Executions are controlled by exchanges.
  • Clients can send a cancel for a slice before it’s filled; Executions can be busted only by exchanges.