priorityQueue can delete any node

http://www.mathcs.emory.edu/~cheung/Courses/171/Syllabus/9-BinTree/heap-delete.html shows how to delete any node in a priority queue, i.e. a binary heap.
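A minimal python sketch of the technique from that page (function and variable names are mine): overwrite the victim slot with the last element, then restore the heap invariant by sifting up or down.

```python
def delete_at(heap, i):
    """Delete heap[i] from a min-heap in O(log N): move the last
    element into slot i, then sift it up or down as needed."""
    last = heap.pop()
    if i == len(heap):  # victim was the last element; nothing to fix
        return
    heap[i] = last
    # sift up while the moved element is smaller than its parent
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        parent = (i - 1) // 2
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent
    # otherwise sift down toward the smaller child
    n = len(heap)
    while True:
        small = i
        for c in (2 * i + 1, 2 * i + 2):
            if c < n and heap[c] < heap[small]:
                small = c
        if small == i:
            break
        heap[i], heap[small] = heap[small], heap[i]
        i = small
```

For example, on the valid min-heap `[5, 9, 7, 12, 11, 8]`, `delete_at(heap, 1)` removes the 9 and leaves a valid min-heap.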

In a toy order book, you can use a priority queue. See https://algs4.cs.princeton.edu/24pq/ but I don’t know of any serious, commercial usage.
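To illustrate the toy-order-book idea (class and method names are mine, not from algs4): two heaps keyed by price, with bid prices negated so heapq's min-heap behaves as a max-heap.

```python
import heapq

class ToyOrderBook:
    """Toy limit order book: two priority queues keyed by price.
    Bids store negated prices so the min-heap acts as a max-heap."""
    def __init__(self):
        self.bids = []   # entries: (-price, qty)
        self.asks = []   # entries: (price, qty)

    def add_bid(self, price, qty):
        heapq.heappush(self.bids, (-price, qty))

    def add_ask(self, price, qty):
        heapq.heappush(self.asks, (price, qty))

    def best_bid(self):
        return -self.bids[0][0] if self.bids else None

    def best_ask(self):
        return self.asks[0][0] if self.asks else None
```

The heap tops give the best bid/ask in O(1); inserts are O(log N). A serious order book would need cancellation (i.e. delete-any-node, as above) and price-level aggregation.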

context switch: 1-10 microsec

See also #CPU-cycles]1 time slice ~ millions #AshS 

In an MS interview, Deepak was asked about the (typical) duration of a TaskSwitch or context switch. I think 20 millisec is way too long.

https://stackoverflow.com/questions/21887797/what-is-the-overhead-of-a-context-switch and other sites suggest “below 10 microsecond”

— One of the most interesting and popular focus points is the difference between thread switching vs process switching

The L1-cache (private to a cpu core) reload happens if a core switches to another Process…?

 

Daily sense@goodWork: 2 factors

As a conscientious professional, my ever-present daily sense of “good work” depends mostly on

  • AA) visible output, visible to boss and other people. AA is dominant and all but determines my sense of good work.
  • BB) personal sacrifice — sometimes invisible to others. Factors include the number of hours I spend in the office

AA without high BB? sometimes I feel a bit guilty, but usually I feel proud of myself. Remember 95G, Barcap…

Now, grandpa’s view (echoed by others) is to reduce the influence of AA, to desensitize myself. AA is an external factor, not even an objective measurement. It’s perilous to let AA control my sense of good work.

问心无愧 (a clear conscience). 尽我所能 (I did my best).

##[16] lower jobs:Plan B if !!dev-till-70

I had blogged about this before, such as the blogger-pripri post on “many long term career options”

Hongzhi asked what if you can’t go to the US.

* Some Singapore companies simply keep their older staff, since they can still do the job and the cost of layoff is too high. My experience at OC/Macq convinced me to have very low hope.
* Support and maintenance tech roles
* Some kind of managerial role? I guess I would be mediocre but could possibly do it; still, I feel hands-on work is easier and safer.
* Teach in a Poly or private school ? possibly not my strength
* Run some small business such as Kindergarten with wife

##xp staying-on after PIP, with dignity

  • +xp GS: informal, unofficial PIP, before I went to Kansas City for July 4th. After it, I stayed for close to a year.
  • +xp Stirt: I overcame the setback at end of 2014 and scored Meet/Meet albeit without a bonus.

Now I think it is possible to stay on with dignity, even pride. I didn’t do anything disgraceful.

I hope that at my age now I grow older and wiser: less hyper-sensitive, less vulnerable, more mellow, more robust, even more resilient.

That would be a key element of personal growth and enlightenment.

FTE or contractor: lowest stress]U.S. as family@@

After hibernation, for the first few years in the U.S. as a family, quite possibly my stress level (and wife’s) would grow to a record high.

Q: Would a contract or VP job give me stress relief?
A: Clearly, contract job feels lighter, despite the lower salary and lower job security.

Boss personality is the biggest wildcard, but I would say job nature (esp. expectation) is crucial/vital. VPs are benchmarked and ranked. As contractor, I think renewal based on budget is the main appraisal.

MSOL zoom by mouse/touchpad/touchscreen

My home laptop screen resolution is such that zoom-in is required when reading outlook messages. How do I use the keyboard to zoom in easily on a new message?

— touch screen two-finger pinch works, even though I had disabled it on my laptop

— Ctrl + mouse scroll-up is similar to the slider.

With a mouse, this gesture is probably proven and reliable. With a touchpad, I relied on two-finger scroll.

Two-finger up-scroll + Ctrl does zoom in 🙂

— two-finger page scroll without Ctrl key : is unrelated to zooming

Windows gives us two choices. I chose “Down motion means scroll down”. Intuitively, the two-finger gesture feels like dragging the vertical scroll-bar on a webpage or in IntelliJ.

Note IntelliJ scrolling is otherwise impossible with touch-screen or touchpad!

pointer as class field #key notes

Pointer as a field of a class is uninitialized by default, as explained in uninitialized is either pointer or primitive. I think Ashish’s test shows it

However, such a field creates an opportunity for mv-ctor. The referent object is usually on the heap or in static memory.

Note “pointer” has a well-defined meaning, as explained in 3 meanings of POINTER + tip on q[delete this]

cancel^thread.stop() ^interrupt^unixSignal

Cancel, interrupt and thread.stop() are three less-quizzed topics that show up occasionally in java/c++ interviews. They are fundamental features of the concurrency API. You can consider this knowledge as zbs.

As described in ##thread cancellation techniques: java #pthread,c#, thread cancellation is supported in c# and pthreads, whereas java indirectly supports it.

— cancel and java thread.stop() are semantically identical, but java thread.stop() is deprecated and unsafe.

PTHREAD_CANCEL_ASYNCHRONOUS is usable in very restricted contexts. I think this is similar to thread.stop().

— cancel and interrupt both define stop points.  In both cases, the target thread can ignore the cancellation/interrupt request, or check for it at the stop points.

Main difference between cancel and interrupt ? Perhaps just the wording. In pthreads there’s only cancel, no interrupt, but in java there’s no cancel.

https://stackoverflow.com/questions/16280418/pthread-cancel-asynchronous-cancels-the-whole-process

Note an interrupted java thread probably can’t resume the interrupted blocking operation where it left off.

Not sure if Unix signal handlers also support stop points.

Java GC on-request is also cooperative. You can’t force the GC to start right away.

Across the board, the only immediate (non-cooperative) mechanism is power loss. Even a keyboard “kill” is subject to programmed software behavior, typically in the OS scheduler.

comfort,careerMobility: CNA+DeepakCM #livelihood

https://www.channelnewsasia.com/news/commentary/mid-career-mobility-switch-tips-interview-growth-mindsets-11527624 says

“The biggest enemy of career mobility is comfort … Comfort leads us to false security and we stop seeking growth, both in skills and mindset agility. I see all the time, even amongst very successful senior business people, that the ones who struggle with career advancement, are the ones whose worlds have become narrow – they engage mainly with people from their industry or expertise area, and their thinking about how their skills or experience might be transferable can be surprisingly superficial.”

The comfort-zone danger … resonates with Deepak and me.

— My take:

The author casts a negative light on the comfort-zone, but comfort is generally a good thing. Admittedly, those individuals (as described) pay a price for their comfort-zones, but there are 10 times more people who desire that level of comfort, however imperfect it is.

Comfort for the masses means livelihood. For me, comfort has the additional meaning of carefree.

Whereas the author’s focus is maximum advancement and not wasting one’s potential, i.e. FOLB and endless greed, our goal is long-term income security (including job security [1]). This goal is kinda the holy grail for IT professionals. Comfort as described in CNA is temporary, not robust enough. Therefore, I see three “states of comfort in livelihood”

  • Level 1 — no comfort. Most people are in this state. Struggling to make ends meet, or unbearable commute or workload (my GS experience)
  • Level 2 — short (or medium) term comfort, is the subject of the CNA quote.  Definition of “short-term” is subjective. Some may feel 10Y comfort is still temporary. Buddhists would recognize the impermanence in the scheme.
  • Level 3 — long-term comfort in livelihood. Paradoxically, a contingency contractor can achieve this state of livelihood security if he can, at any time, easily find a decent job, like YH etc. I told Deepak that on Wall St, (thanks to dumb luck) Java provides a source of long-term comfort and livelihood security. Detachment needed!

[1] income security is mostly job security. Fewer than 1% of the population can achieve income security without job security. These lucky individuals basically have financial freedom. But do take into account (imported) inflation, medical, housing, unexpected longevity,,

Deepak pointed out a type of Level-2 comfort — professional women whose main source of meaning, duty, joy is the kids they bring up. For them, even with income instability, the kids can provide comfort for many years.

Deepak pointed out a special form of Level-3 carefree comfort — technical writers. They can have a job till their 80’s. Very low but stable demand. Very little competition. Relatively low income.

Deepak felt a key instability in the IT career is technology evolution (“churn” in my lingo), which threatens to derail any long-term comfort. I said the “change of the guard” can be very gradual.

— Coming back to the carefree concept. I feel blessed with my current carefree state of comfort. It’s probably temporary, but pretty rare.

Many would point to my tech learning, and challenge me — Is that carefree or slavery to the Churn? Well, I have found my niche, my sweet spot, my forte, my 夹缝 (narrow crevice), with some moat, some churn-resistance.

Crucially, what others perceive as constant learning driven by survival instinct is now my lifelong hobby that keeps my brain active. Positive stress in a carefree life.

The “very successful senior business people”, i.e. high-flyers, are unlikely to be carefree, given the heavy responsibilities. Another threat is the limelight, the competition for power and riches. In contrast, my contractor position is not nearly as desirable, enviable, similar to the technical writers Deepak pointed out.

3stressors: FOMO^PIP^ livelihood[def]

  • PIP
  • FOMO/FOLB including brand envy
  • burn rate stress esp. the dreaded job-loss

jolt: FSM dividend has never delivered the de-stressor as /envisioned/. In contrast, my GRR has produced /substantial/ nonwork income, but still insufficient to /disarm/ or blunt the threat of PIP ! In my experience, Job-loss stressor is indeed alleviated by this income or the promise thereof 🙂

Excluding the unlucky (broken, sick,,) families, I feel most “ordinary” people’s stress primarily comes from burn rate i.e. making ends meet, including job-loss fear. I feel the middle class families around me could survive at a much lower theoretical burn rate of SGD 3.5-4.5k (or USD 6k perhaps… no 1st-hand experience) but they choose the rat race of keeping up with the Joneses (FOMO). Therefore, their burn rate becomes 10k. See also SG: bare-bones ffree=realistic #WBank^wife^Che^3k and covid19$$handout reflect`Realistic burn rate

For some, FOMO takes them to the next level — bench-marking against the high-flyers.

—– jolt: PIP^job-loss fear

For the middle class, any burn rate exceeding 3k is a real (albeit subconscious) stressor, because the working adults need to keep a job and build up a job-loss contingency reserve. Remember Davis Wei…. 3 months was tough for him? How about 30 years? In a well-publicized OCBC survey during covid19, most Singaporean workers couldn’t last 6M

With a full time job, salaried workers experience a full spectrum of stressors including PIP. PIP would be impotent/toothless if the job is like a hobby. I would say very few people have such a job.

Aha .. Contract career is free of PIP.

For me (only me), job loss is a lighter stressor than PIP fear. In fact, I don’t worry about end of contract [2] or bench time. I worry more about a humiliating bonus. I’d rather lose a contract job than receive a token bonus after a PIP.

I think PIP is the least-shared of the three stressors [1]. Even though some percentage of my fellow IT professionals have experienced PIP, they seem to shrug it off. In contrast, I lick my wound for years, even after it turns into a permanent scar. Most people assume that my PIP fear is fundamentally related to cashflow worry, but I am confident about job hunting. So my PIP fear is all about self-esteem, unrelated to cashflow.

[1] In the covid19 aftermath (ongoing), the SG government worries mostly about job loss i.e. livelihood. Next, they worry about career longevity, in-demand skills, long-term competitiveness, housing, healthcare and education… all part of the broader “livelihood” concept. As parents, we too worry about our kids’ livelihood.

[2] Because I have a well-tested, strong parachute, I’m not afraid of falling out (job loss)

Q: imagine that after Y2, this job pays me zero bonus, and boss gives some explicit but mild message of “partial meet”. Do I want to avoid further emotional suffering and forgo the excellent commute + flexible hours + comfortable workload + hassle-free medical benefits?
A: I think Khor Siang of Zed would stay on. I think ditto for Sanjay of OC/MLP. Looking at my OC experience, I think I would stay on.

Life-chances are more about livelihood and less about FOMO.

— Deepak’s experience

Deepak converted from contractor to perm in mid 2014, but on 30 Oct 2014 lost his job in the UK. He sat on the bench for thirteen months and started working in Nov 2015, in the U.S. This loss of income was a real blow, but in terms of psychological scars, I think the biggest were 1) visa 2) job-interview difficulties. He didn’t have a PIP scar.

 

unrolled linked list with small payload: d-cache efficiency

This blogpost is partly based on https://en.wikipedia.org/wiki/Unrolled_linked_list.

This data structure is designed as a replacement for vanilla linked list with 2 + 1 features

  1. mid-stream insert/delete — is slightly slower than linked list, much better than deque
  2. traversal — d-cache efficiency during traversal, esp. with small payloads
    • This is the main advantage over linked list

There’s a 3rd task, slightly uncommon for linked list

  1. jump by index — quasi-random access.. See [17]deque with uneven segments #GS

— comparison with deque, which has a fixed segment length, except the head and tail segments

  • Deque offers efficient insert/delete at both ends. Mid-stream insert/delete would require shifting, just as in a vector
  • Deque offers O(1) random access by index, thanks to the fixed segment length
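A tiny python sketch of the structure (CAP and all names are mine; a real implementation would size CAP to the cache line, and python can't demonstrate the d-cache benefit, only the shape):

```python
class _Node:
    __slots__ = ("elems", "next")
    def __init__(self):
        self.elems = []   # up to CAP payloads stored contiguously
        self.next = None

class UnrolledList:
    """Each node packs up to CAP small payloads contiguously, so a
    traversal touches far fewer cache lines than one-payload-per-node."""
    CAP = 4

    def __init__(self):
        self.head = _Node()

    def append(self, x):
        node = self.head
        while node.next:
            node = node.next
        if len(node.elems) == self.CAP:   # full: start a new node
            node.next = _Node()
            node = node.next
        node.elems.append(x)

    def insert_after_value(self, target, x):
        """Mid-stream insert: O(CAP) shifting inside one node; a full
        node is split in half instead of shifting everything after it."""
        node = self.head
        while node:
            if target in node.elems:
                if len(node.elems) == self.CAP:
                    half = _Node()
                    half.elems = node.elems[self.CAP // 2:]
                    node.elems = node.elems[: self.CAP // 2]
                    half.next = node.next
                    node.next = half
                    if target not in node.elems:
                        node = half
                node.elems.insert(node.elems.index(target) + 1, x)
                return
            node = node.next
        raise ValueError(target)

    def __iter__(self):
        node = self.head
        while node:
            yield from node.elems
            node = node.next
```

Note how a mid-stream insert never shifts beyond one node, unlike a deque or vector.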

[20] charmS@slow track #Macq mgrs#silent majority

another critique of the slow track.

My Macq managers Kevin A and Stephen Keith are fairly successful old-timers. Such an individual would hold a job for 5-10 years, growing in know-how and effectiveness (influence,,,). Once every few years they move up a rank. In their eyes, a job hopper or contractor like me is hopelessly left on the slow track — a rolling stone gathers no moss.

I would indeed have felt that way if I had not gained the advantages of burn rate + passive incomes. No less instrumental are my hidden advantages like

  • relatively carefree hands-on dev job, in my comfort zone
  • frugal wife
  • SG citizenship
  • stable property in HDB
  • modest goal of an average college for my kids
  • See also G5 personal advantages: Revealed over15Y

A common cognitive/perception mistake is missing the silent majority of old timers who don’t climb up. See also …

read/write volatile var=enter/exit sync block

As explained in 3rd effect@volatile introduced@java5

  • writing a volatile variable is like exiting a synchronized block, flushing all temporary writes to main memory;
  • reading a volatile variable is like entering a synchronized block, reloading all cached shared mutables from main memory.

http://tutorials.jenkov.com/java-concurrency/volatile.html has more details.

https://stackoverflow.com/questions/9169232/java-volatile-and-side-effects also addresses “other writes“.

denigrate%%intellectual strength #ChengShi

I have a real self-esteem problem as I tend to belittle my theoretical and low-level technical strength. CHENG, Shi was the first to point out “你就是比别人强” (you are simply stronger than others).

  • eg: my grasp of middle-school physics was #1 strongest across my entire school (a top Beijing middle school) but I often told myself that math was more valuable and more important
  • eg: my core-java and c++ knowledge (QQ++) is stronger than most candidates’ (largely due to absorbency++) but I often say that project GTD is more relevant. Actually, to a technical expert, knowledge is more important than GTD.
  • eg: I gave my dad an illustration — medical professor vs GP. The Professor has more knowledge but GP is more productive at treating “common” cases. Who is a more trusted expert?
  • How about pure algo? I’m rated “A-” stronger than most, but pure algo has far lower practical value than low-level or theoretical knowledge. Well, this skill is highly sought-after by many world-leading employers.
    • Q: Do you dismiss pure algo expertise as worthless?
  • How about quant expertise? Most of the math has limited and questionable practical value, though the quants are smart individuals.

Nowadays I routinely trivialize my academic strength/trec relative to my sister’s professional success. To be fair, I should say my success was more admirable if measured against an objective standard.

Q: do you feel any IQ-measured intelligence is overvalued?

Q: do you feel anything intellectual (including everything theoretical) is overvalued?

Q: do you feel entire engineering education is too theoretical and overvalued? This system has evolved for a century in all successful nations.

The merit-based immigration process focuses on expertise. Teaching positions require expertise. When laymen know you are a professional, they expect you to have expertise. What kind of knowledge? Not GTD but a published body of jargon and “bookish” knowledge based on verifiable facts.

scan an array{both ends,keep`min/max #O(N)

I feel a reusable technique is

  • scan the int array from Left  and keep track of the minimum so far. Optionally save it in an array
  • scan the int array from right and keep track of the maximum so far. Optionally save it in an array
  • save the difference of these two arrays

With these three shadow arrays, many problems can be solved visually and intuitively, in linear time.

eg: max proceeds selling 2 homes #Kyle

How about the classic max profit problem?

How about the water container problem?
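The classic single-transaction max-profit problem illustrates the technique directly; a sketch, with names mine:

```python
def max_profit(prices):
    """Best profit from one buy followed by one sell.
    min_left[i]  = min price in prices[0..i]   (scan from the left)
    max_right[i] = max price in prices[i..end] (scan from the right)
    Answer = max over i of max_right[i] - min_left[i]."""
    n = len(prices)
    min_left = [0] * n
    max_right = [0] * n
    lo = float("inf")
    for i in range(n):                 # left-to-right scan, tracking min
        lo = min(lo, prices[i])
        min_left[i] = lo
    hi = float("-inf")
    for i in reversed(range(n)):       # right-to-left scan, tracking max
        hi = max(hi, prices[i])
        max_right[i] = hi
    # the "difference" array, visually: best sell on/after i minus best buy on/before i
    return max(max_right[i] - min_left[i] for i in range(n))
```

For example, `max_profit([7, 1, 5, 3, 6, 4])` is 5 (buy at 1, sell at 6). The shadow arrays can be drawn out on paper, which is the visual/intuitive appeal.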

range bump-up@intArray 60% #AshS

https://www.hackerrank.com/challenges/crush/problem Q: Starting with a 1-indexed array of zeros and a list of operations, for each operation add a “bump-up” value to each of the array element between two given indices, inclusive. Once all operations have been performed, return the maximum in your array.

For example, given an array of 10 zeros, your list of 3 operations is as follows:

    a b k
    1 5 3
    4 8 7
    6 9 1

Add the values of k between the indices a and b inclusive:

index->	 1 2 3  4  5 6 7 8 9 10
	[0,0,0, 0, 0,0,0,0,0, 0]
	[3,3,3, 3, 3,0,0,0,0, 0]
	[3,3,3,10,10,7,7,7,0, 0]
	[3,3,3,10,10,8,8,8,1, 0]

The largest value is 10 after all operations are performed.

====analysis

This (contrived) problem is similar to the skyline problem.

— Solution 1 O(minOf[N+J, J*logJ ] )

Suppose there are J=55 operations. Each operation is a bump-up by k, on a subarray. The subarray has left boundary = a, and right boundary = b.
Step 1: Sort the left and right boundaries. This step is O(N) by counting sort, or O(J logJ) by comparison sort. A conditional implementation can achieve O(minOf[N+J, J*logJ ] )

In the example, after sorting, we get 1 4 5 6 8 9.

Step 2: one pass through the sorted boundaries. This step is O(J).
Aha — the time complexity of this solution boils down to the complexity of sorting J small positive integers whose values are below n.
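A sketch of Steps 1-2 via the O(J*logJ) comparison-sort route (names mine): each operation becomes a +k event at a and a -k event at b+1; sort the events, then sweep once tracking the running sum.

```python
def array_manipulation(n, ops):
    """ops: list of (a, b, k), 1-indexed inclusive boundaries.
    Returns the max value after all bump-ups, without building the array.
    n (the array size) would only matter for the counting-sort variant."""
    events = []
    for a, b, k in ops:
        events.append((a, k))        # bump-up takes effect at index a
        events.append((b + 1, -k))   # ...and stops after index b
    # sorting tuples puts -k before +k at equal positions,
    # which avoids a spurious peak at a shared boundary
    events.sort()
    best = running = 0
    for _, delta in events:
        running += delta
        best = max(best, running)
    return best
```

On the example above, the running sum peaks at 10 between the sorted boundaries 4 and 6.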

3overhead@creating a java stackframe]jvm #DQH

  • additional assembly instruction to prevent stack overflow… https://pangin.pro/posts/stack-overflow-handling mentions 3 “bang” instructions for each java method, except some small leaf methods
  • safepoint polling, just before popping the stackframe
  • (If the function call receives more than 6 arguments) the first 6 args go in registers and the remaining args on the stack. The ‘mov’ to the stack involves more instructions than a register write, and the subsequent retrieval from the stack likely hits L1 cache, still slower than a register read.

age40-50career peak..really@@stereotype,brainwash,,

stereotype…

We all hear (and believe) that the 40-50 period is “supposed” to be the peak period in the life of a professional man. This expectation is created by the mass-media (and social media such as LinkedIn) brainwash that presents middle-aged managers as the norm. If not a “manager”, then a technical architect or a doctor.

[[Preparing for Adolescence]] illustrates the peer pressure (+self-esteem stress) felt by the adolescent. I feel a Deja-vu. The notion of “normal” and “acceptable” is skewed by the peer pressure.

Q: Out of 100 middle-aged (professional or otherwise) guys, how many actually reach the peak of their career in their 40’s?
A: Probably below 10%.

In my circle of 40-somethings, the norm is plateau or slow decline, not peak. The best we could do is keep up our effort and slow down the decline, be it wellness, burn rate, learning capacity, income,,,

It’s therefore hallucinatory to feel left behind on the slow track.

Q: at what age did I peak in my career?
A: I don’t want to overthink about this question. Perhaps towards the end of my first US era, in my late 30s.

I think middle-aged professional guys should read [[reconciliations]] by Theodore Rubin. The false expectation creates immense burden.

const data member initialization: simple on the surface

The well-known Rule 1 — a const data member must be initialized exactly once, no more no less.

The lesser-known Rule 2 — for class-type data member, there’s an implicit default-initialization feature that can kick in without us knowing. This default-init interacts with ctor initializer in a strange manner.

On a side note, [[safeC++]] P38 makes clever use of Rule 2 to provide primitive wrappers. If you use such a wrapper in place of a primitive field (non-const), then you eliminate the operational risk of “forgetting to initialize a non-const primitive field”.

The well-known Rule 3 — the proper way to explicitly initialize a const field is the ctor initializer, not inside ctor body.

The lesser-known Rule 4 — at run-time, once control passes into the ctor body, you can only modify/edit an already-initialized field. Illegal for a const field.

To understand these rules, I created an experiment in https://github.com/tiger40490/repo1/blob/cpp1/cpp/lang_misc/constFieldInit.cpp

— for primitive fields like int, Rule 2 doesn’t apply, so we must follow Rule 1 and Rule 3.

— for a class-type field like “Component”,

  • We can either leave the field “as is” and rely on the implicit Rule 2…., or
  • If we want to initialize explicitly, we must follow Rule 3. In this case, the default-init is suppressed by the compiler.

In either case, there’s only one initialization per const field (Rule 1)

joinable instance of std::thread

[[effModernC++]] P 252 explains why in c++ joinable std::thread objects must not get destroyed. Such a destruction would trigger std::terminate(), therefore, programmers must make their std::thread objects non-joinable before destruction.

The key is a basic understanding of “joinable”. Informally, I would say a joinable std::thread has a real thread attached to it, even if that real thread has finished running. https://en.cppreference.com/w/cpp/thread/thread/joinable says “A thread that has finished executing code, but has not yet been joined is still considered an active thread of execution and is therefore joinable.”

An active std::thread object becomes unjoinable

  • after it is joined, or
  • after it is detached, or
  • after it is “robbed” via std::move()

The primary mechanism to transition from joinable to unjoinable is via join().

std::thread key points

To become eligible for scheduling, a Java thread needs start(), but a c++ std::thread becomes eligible immediately after initialization, i.e. once it is constructed with its target function.

For this reason, [[effModernC++]] dictates that if a class Runner has both an int field and a std::thread field, the std::thread field should be declared last, so it is initialized last in the constructor. The int field needs to be fully initialized before the new thread might read it.

Q1: Can you initialize the std::thread field in the constructor body?
A: yes, unless the std::thread field is declared const

Now let’s say there’s no const field.

Q2: can the Runner copy ctor initialize the std::thread field in the ctor body, via move()?
A: yes, provided the ctor parameter is a non-const reference to Runner.
A: no if the parameter is a const reference to Runner. move(theConstRunner) would evaluate to a const rvalue reference, which cannot bind to a non-const rvr. std::thread ctor and op= only accept a non-const rvr, because std::thread is move-only.

See https://github.com/tiger40490/repo1/tree/cpp1/cpp/sys_thr for my experiments.

##[18]G4qualities I admire]peers: !!status #mellow

I ought to admire my peers’ [1] efforts and knowledge (not their STATUS) on :

  1. personal wellness
  2. parenting
  3. personal finance, not only investment and burn rate
  4. mellowness to cope with the multitude of demands, setbacks, disappointments, difficulties, realities about the self and the competition
  5. … to be compared to:
    • zbs, portable GTD, not localSys
    • how to navigate and cope with office politics and big-company idiosyncrasies.

Even though some of my peers are not the most /accomplished/ , they make a commendable effort. That attitude is admirable.

[1] Many people crossing my path are … not really my peers, esp. those managers in China. Critical thinking required.

I don’t have a more descriptive title for this blogpost.

2011 white paper@high-perf messaging

https://www.informatica.com/downloads/1568_high_perf_messaging_wp/Topics-in-High-Performance-Messaging.htm is a 2011 white paper by some experts. I have saved the html in my google drive. Here are some QQ  + zbs knowledge pearls. Each sentence in the article can expand to a blogpost .. thin->thick.

  • Exactly under what conditions would TCP provide low-latency
  • TCP’s primary concern is bandwidth sharing, to ensure “pain is felt equally by all TCP streams”. Consequently, a latency-sensitive TCP stream can’t have priority over other streams.
    • Therefore, one recommendation is to use a dedicated network having no congestion or controlled congestion. Over this network, the latency-sensitive system would not be victimized by the inherent speed control in TCP.
  • to see how many received packets were delayed (on the receiver end) due to out-of-order segments (OOS), use netstat -s
  • TCP guaranteed delivery is “better later than never”, but latency-sensitive systems prefer “better never than late”. I think UDP is the choice.
  • The white paper features an in-depth discussion of group rate. Eg: one mkt data sender feeding multiple (including some slow) receivers.

 

analyzing my perception of reality

Using words and numbers, I am trying to “capture” my perceptions (intuitions + observations + a bit of insight) of the c++/java job market trends, past and future. There’s some reality out there, but each person, including the expert observer, has only a limited view of that reality, based on limited data.

Those numbers look impressive, but actually similar to the words — they are mostly personal perceptions dressed up as objective measurements.

If you don’t use words or numbers then you can’t capture any observation of the “reality”. Your impression of that reality [1] remains hopelessly vague. I now believe vague is the lowest level of comprehension, usually as bad as a biased comprehension. Using words + numbers we have a chance to improve our perception.

[1] (without words you can’t even refer to that reality)

My perceptions shape my decisions, and my decisions affect my family’s life chances.

My perceptions shape my selective listening. Gradually, actively, my selective listening would modify my “membrane” of selective listening! All great thinkers, writers update their membrane.

I am not analyzing reality. Instead, I am basically analyzing my perception of the reality, but that’s the best I can do. I’m good at analyzing myself as an object.

Refusing to plan ahead because of high uncertainty is lazy, is pessimistic, is doomed.

latency zbs in java: lower value cf c++@@

Warning — latency measurement gotchas … is zbs but not GTD or QQ

— My tech bet — Demand for latency QQ will remain higher in c++ than java

  • The market’s perception would catch up with reality (assuming java is really no slower than c++), but the catch-up could take 30 years.
  • the players focused on latency are unused to the interference [1] by the language. C++ is more free-wheeling
  • Like assembly, c++ is closer to hardware.
  • In general, java is by design not as natural a choice for low latency as c++, so even if java can match c++ in performance, it requires too much tweaking.
  • related to latency is efficiency. java is a high-level language and less efficient at the low level.

[1] In the same vein, (unlike UDP) TCP interferes with data transmission rate control, so even if I control both sender and receiver, I still have to cede control to TCP, which is a kernel component.

— jvm performance tuning is mainstream and socially meaningful iFF we focus on
* machine saturation
* throughput
* typical user-experience response time

— In contrast, a narrow niche area is micro-latency as in HFT

After listening to FPGA, off-heap memory latency … I feel the arms race of latency is limited to high-speed trading only. latency technology has limited economic value compared to mobile, cloud, cryptocurrency, or even data science and machine learning.

Churn?

accu?

 

find all subsums divisible by K #Altonomy

Q: modified slightly from Leetcode 974: Given an array of signed integers, print all (contiguous, non-empty) subarrays having a sum divisible by K.

https://github.com/tiger40490/repo1/blob/py1/py/algo_arr/subarrayDivisibleByK.py is my one-pass, linear-time solution. I consider this technique an algoQQ. Without prior knowledge, an O(N) solution is inconceivable.

I received this problem in an Altonomy hackerrank. I think Kyle gave me this problem too.

===analysis

Sliding window? I didn’t find any use.

Key idea — build data structure to remember cumulative sums, and remember all the positions hitting a given “cumsum level”. My homegrown solution kSub3() shows more insight !
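A sketch in that same spirit (this is not the repo's kSub3; names are mine): one pass over cumulative sums, remembering every position where each residue was seen, since sum(nums[i..j]) % k == 0 exactly when two prefix sums share a residue.

```python
from collections import defaultdict

def subarrays_divisible_by_k(nums, k):
    """Return (start, end) index pairs, inclusive, of every contiguous
    non-empty subarray whose sum is divisible by k (k > 0).
    sum(nums[i+1..j]) % k == 0  iff  prefix(j) % k == prefix(i) % k,
    so remember all positions hitting each cumsum residue."""
    seen = defaultdict(list)   # residue -> positions where it occurred
    seen[0].append(-1)         # empty prefix, so subarrays starting at 0 count
    out = []
    total = 0
    for j, x in enumerate(nums):
        total += x
        r = total % k          # python's % is non-negative for k > 0
        for i in seen[r]:
            out.append((i + 1, j))
        seen[r].append(j)
    return out
```

The bookkeeping pass is O(N); only enumerating the (possibly quadratic number of) matching pairs costs more. Counting instead of enumerating gives the pure O(N) Leetcode 974 solution.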

enumerate()iterate py list/str with idx+val

The built-in enumerate() is a nice optional feature. If you don’t want to remember this simple syntax, then yes you can just iterate over xrange(len(the_sequence)) (range in python 3)

https://www.afternerd.com/blog/python-enumerate/#enumerate-list is illustrated with examples.

— to enumerate backward,

Since enumerate() returns an iterator, and iterators can’t be reversed directly, you need to convert it to a list first.

for i, v in reversed(list(enumerate(vec))): …
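A runnable illustration of both directions (the toy list is mine):

```python
vec = ["a", "b", "c"]

# forward: (index, value) pairs
fwd = list(enumerate(vec))

# backward: enumerate() returns an iterator, which reversed() cannot
# consume directly, hence the list() conversion
bwd = list(reversed(list(enumerate(vec))))
```

Note the original indices are preserved in the backward pass, which is the whole point versus simply reversing vec.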

c++nlg pearls: xx new to refresh old 知新而后温故

Is this applicable in java? I think so, but my focus here is c++.

— 温故而知新 (reviewing the old to learn the new) is less effective at my level. thick->thin, reflective.

— 知新而后温故 (learning the new, then reviewing the old) — x-ref, thin->thick->thin learning.

However, the pace of learning new knowledge pearls could appear very slow and disappointing. 5% new learning + 95% refresh. In such a case, the main benefit and goal is the refresh. Patience and Realistic expectation needed.

In some situations, the most effective learning is 1% new and 99% refresh. If you force yourself to 2% new and 98% refresh, learning would be less effective.

This technique is effective with distinct knowledge PEARLS. Each pearl can be based on a sentence in an article but developed into a blogpost.

 

non-volatile field can have volatile behavior #DQH

Unsafe.getObjectVolatile() and setObjectVolatile() should be the only access to the field.

I think for an integer or bool field (very important use cases), we need to use Unsafe.putIntVolatile() and Unsafe.getIntVolatile()

Q: why not use a volatile field?
A: I guess in some designs, a field need not be volatile at most access points, but at one access point it needs to behave like a volatile field.  Qihao agrees that we want to control when to insert a load/store fence.

Non-volatile behavior usually has lower latency.

 

half%%peers could be Forced into retirement #Honglin

Reality — we are living longer and healthier.

Observation — compared to old men, old women tend to have more of a social life and more involvement with grandchildren.

I suspect that given a choice, half the white-collar guys in my age group actually wish to keep working past 65 (or 70), perhaps at a lower pace. In other words, they are likely to retire not by choice. My reasoning for the suspicion — besides financial needs, many in this group do not have enough meaningful, “engaging” things to do. Many would suffer.

It takes long-term planning to stay employed past 65.

I think most of the guys in this category do not prepare well in advance and will find themselves unable to find a suitable job. (We won’t put it this way, but) They will be kinda forced into early retirement. The force could be health or in-demand skillset or …

[19] keen observer@SG workers in their 40s

“Most of them in the 40s are already stable and don’t want to quit. Even though the pay may not be so good, they’re willing to work all the way[1]. It’s an easy-going life.”

The observer was comparing SG (white or blue collar) employees across age groups, and this is the brief observation of the 40-something. This observation is becoming increasingly descriptive of me… Semi-retired on the back of my passive income streams.

[1] I interpret “all the way” as all the way to retirement age, no change of direction, not giving in to boredom, sticking to the chosen career despite occasional challenges (pains, disappointments, setbacks).

 

local variables captured in nested class #Dropcopy

If an (implicitly final) local variable [1] is captured inside a nested class, where is the variable saved?

https://stackoverflow.com/questions/43414316/where-is-stored-captured-variable-in-java explains that the anonymous or local class instance has an implicit field to hold the captured variable !

[1] the local variable can be an arg passed into the enclosing function. It could be a primitive type or a reference type, i.e. a heapy thingy

The java compiler secretly adds this hidden field. Without this field, a captured primitive would be lost and a captured heapy would be unreachable when the local variable goes out of scope.

A few hours later, when the nested class instance needs to access this data, it would have to rely on the hidden field.

 

lambda^anon class instance ] java

A java lambda expression is used very much like an instance of an anonymous class. However, http://tutorials.jenkov.com/java/lambda-expressions.html#lambda-expressions-vs-anonymous-interface-implementations pointed out one interesting difference:

The anonymous instance in the example has a named field. A lambda expression cannot have such fields. A lambda expression is thus said to be stateless.

get collection sum after K halving operations #AshS

Q: given a collection of N positive integers, you perform K operations like “halve the biggest element and replace it with its own ceiling”. Find the collection sum afterwards.

Note the collection size is always N. Note K (like 5) could exceed N (like 2), but I feel it would be trivial.

====analysis====

This is a somewhat contrived problem.

I think O(N + K log min(N,K)) is pretty good if feasible.
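
Assuming the intended data structure is a max-heap, a sketch (function name is mine) runs in O(N + K log N):

```python
import heapq
import math

def sum_after_k_halvings(nums, k):
    heap = [-x for x in nums]     # negate: heapq is a min-heap, we need the max
    heapq.heapify(heap)           # O(N)
    for _ in range(k):            # K operations, O(log N) each
        biggest = -heapq.heappop(heap)
        heapq.heappush(heap, -math.ceil(biggest / 2))
    return -sum(heap)
```

Once every element reaches 1, further operations are no-ops (ceil(1/2) == 1), which is why a large K beyond that point is trivial.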

git | merge-commits and pull-requests

Key question — Q1: which commit would have multiple parents?

— scenario 1a:

  1. Suppose your feature branch brA has a commit hash1 at its tip; and master branch has tip at hashJJ, which is the parent of hash1
  2. Then you decide to simply q[ git merge brA ] into master

In this simple scenario, your merge is a fast-forward merge. The updated master would now show hash1 at the tip, whose only parent is hashJJ.

A1: No commit would have multiple parents. Simple result. This is the default behavior of git-merge.

Note this scenario is similar to https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-request-merges#rebase-and-merge-your-pull-request-commits

However, the github and bit-bucket pull-request flows don’t support it exactly.

— scenario 1b:

Instead of a simple git-merge, what about a pull request? A pull-request uses q[ git merge --no-ff brA ] which (I think) unconditionally creates a merge-commit hashMM on master.

A1: now hashMM has two parents. In fact, git-log shows hashMM as a “Merge” with two parent commits.

Result is unnecessarily complex. Therefore, in such simple scenarios, it’s better to use git-merge rather than pull request.

https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-request-merges explains the details.

— Scenario 2: What if ( master’s tip ) hashJJ is Not the parent of hash1?

Now master and brA have diverged. I think you can’t avoid a merge commit hashMM.

A1: hashMM

— Scenario 3: continue from Scenario 1b or Scenario2.

3. Then you commit on brA again, creating hash2.

Q: What’s the parent node of hash2?
A: I think git actually shows hash1 as the parent, not hashMM !

Q: is hashMM on brA at all?
A: I don’t think so but some graphical tools might show hashMM as a commit on brA.

I think now master branch shows hashMM having two parents (hashJJ + hash1), and brA shows hash1 -> hash2.

I guess that if after the 3-way-merge, you immediately re-create (or reset) brA from master, then hash2’s parent would be hashMM.


Note

  • direct-commit on master is implicitly fast-forward, but merge can be fast-forward or non-fast-forward.
  • fast-forward merge can be replaced by a rebase as in Scenario 1a. Result is same as direct-commit.
  • forced merge-commit (Scenario 1b, with --no-ff) and 3-way merge (Scenario 2) both create a merge-commit.
  • git-pull includes a git-merge without --no-ff

Optiver coding hackathon is like marathon training

Hi Ashish,

Looking back at the coding tests we did together, I feel it’s comparable to a form of “marathon training” — I seldom run longer than 5km, but once in a while I get a chance to push myself way beyond my limits and run far longer.

Extreme and intensive training builds up the body capacity.

On my own, it’s hard to find motivation to run so long or practice coding drill at home, because it requires a lot of self-discipline.

Nobody has unlimited self-discipline. In fact, those who run so much or take on long-term coding drills all have something besides self-discipline. Self-discipline and brute-force willpower are insufficient to overcome the inertia in every one of these individuals. Instead, the invisible force, the wind beneath their wings, is some form of intrinsic motivation. These individuals find joy in the hard drill.

( I think you are one of these individuals — I see you find joy in lengthy sessions of jogging and gym workout. )

Without enough motivation, we need “organized” practice sessions like real coding interviews or hackathons. This Optiver coding test could probably improve my skill level from 7.0 to 7.3, in one session. Therefore, these sessions are valuable.

[18] latency in a typical sell-side DMA box: 10micros

(This topic is neither GTD nor zbs, but relevant to some QQ interviewers.)

https://www.youtube.com/watch?v=BD9cRbxWQx8 is a 2018 presentation.

  1. AA is when a client order hits a broker
  2. Between AA and BB is the entire broker DMA engine in a single process, which parses the client order, maintains order state, consumes market data and creates/modifies the outgoing FIX msg
  3. BB is when the broker ships the FIX msg out to exchange.

Edge-to-edge latency from AA to BB, if implemented in a given language:

  • python ~ about 50 times longer than java
  • java – can aim for 10 micros if you are really really good. Dan recommends java as a “reasonable choice” iFF you can accept 10+ micros. Single-digit microsecond shops should “take a motorbike not a bicycle”.
  • c# – comparable to java
  • FPGA ~ about 1 micro
  • ASIC ~ 400 ns

— c/c++ can only aim for 10 micros … no better than java.

The stronghold of c++, the space between java and fpga, is shrinking … “constantly” according to Dan Shaya. I think “constantly” is like the growth of Everest … perhaps by 2.5 inches a year

I feel c++ is still much easier, more flexible than FPGA.

I feel java programming style would have to become more unnatural than c++ programming in order to compete with c++ on latency.

— IPC latency

Shared memory beats TCP hands down. For an echo test involving two processes:

Using an Aeron-based messaging application, 50th percentile is 250 ns. I think NIC and possibly kernel (not java or c++) are responsible for this latency.

sponsored DMA

Context — a buy-side shop (say HRT) uses a DMA connection sponsored by a sell-side like MS (or Baml or Instinet) to access NYSE. MS provides a DMA platform like Speedway.

The HRT FIX gateway would implement the NYSE FIX spec. Speedway also has a FIX spec for HRT to implement. This spec should include minor customizations of the NYSE spec.

I have seen the HPR spec. (HPR is like an engine running in Baml or GS or whatever.) The HPR spec seems to talk about customizations for NYSE, Nsdq etc …re Gary chat.

Therefore, the HRT FIX gateway to NYSE must implement, in a single codebase,

  1. NYSE spec
  2. Speedway spec
  3. HPR spec
  4. Instinet spec
  5. other sponsors’ spec

The FIX session would be provided (“sponsored”) by MS or Baml, or Instinet. I think the HRT FIX gateway would connect to some IP address belonging to the sponsor like MS. Speedway would forward the FIX messages to NYSE, after some risk checks.

VWAP=a bmark^executionAlgo

In the context of broker algos (i.e. execution algos offered by a broker), vwap is

  • A benchmark for a bulk order
  • An execution algo aimed at the benchmark. The optimization goal is to minimize slippage against this benchmark. See other blogposts about slippage.

The vwap benchmark is simple, but the vwap algo implementation is non-trivial, often a trade secret.

Avichal: too-many-distractions

Avichal is observant and sat next to me for months. Therefore I value his judgment. Avichal is the first to point out I was too distracted.

For now, I won’t go into details on his specific remarks. I will simply use this simple pointer to start a new “thread”…

— I think the biggest distraction at that time was my son.

I once (never mind when) told grandpa that I want to devote 70% of my energy to my job (and 20% to my son), but whenever I wanted to settle down and dive deep into my work, I felt the need and responsibility to adjust my schedule, cater to my son, and entice him to study a little more.

My effort on my son is like driving uphill with the hand-brake on.

As a result, I couldn’t have a sustained focus.

gradle: dependency-jar refresh, cache, Intellij integration..

$HOME/.gradle holds all the jars from all previous downloads.

[1] When you turn on debug, you can see the actual download : gradle build –debug.

[2] Note IDE java editor can use version 123 of a jar for syntax check, but the command line compilation can use version 124 of the jar. This is very common in all IDEs.

When I make a change to a gradle config,

  • Intellij prompts for gradle import. This seems to trigger an unnecessary re-download of all jars — very slow.
  • Therefore, I ignore the import. I think as a result, the Intellij java editor [2] would still use the previous jar version, as the old gradle config is in effect. I live with this because my focus is on the compilation.
  • For compilation, I use the gradle “build” action (probably similar to a command-line build). Very fast but why? Because only one dependency jar is refreshed [3]
  • Gary used debug build [1] to prove that this triggers a re-download of specific jars iFF you delete the jars from $HOME/.gradle/caches/modules-2/files-2.1

[3] For a given dependency jar, “refresh” means download a new version as specified in a modified gradle config.

— in console, run

gradle build #there should be a ./build.gradle file

Is java/c# interpreted@@No; CompiledTwice!

category? same as JIT blogposts

Q: are java and c# interpreted? QQ topic — academic but quite popular in interviews.

https://stackoverflow.com/questions/8837329/is-c-sharp-partially-interpreted-or-really-compiled shows one explanation among many:

The term “Interpreter” referencing a runtime generally means existing code interprets some non-native code. There are two large paradigms — parsing: read the raw source code and take logical actions; bytecode execution: first compile the code to a non-native binary representation, which requires far fewer CPU cycles to interpret.

Java originally compiled to bytecode, then went through an interpreter; now, the JVM reads the bytecode and just-in-time compiles it to native code. CIL does the same: The CLR uses just-in-time compilation to native code.

C# compiles to CIL, which the JIT compiles to native; by contrast, Perl immediately compiles a script to bytecode, and then runs this bytecode through an interpreter.
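
CPython itself is an example of the second paradigm — the standard library’s dis module exposes the bytecode that the interpreter executes:

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the function body to bytecode up front; its VM then
# interprets these instructions one by one (the classic reference
# interpreter has no JIT)
ops = [ins.opname for ins in dis.get_instructions(add)]
```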

bone health for dev-till-70 #CSY

Hi Shanyou,

I have a career plan to work as a developer till my 70’s. When I told you, you pointed out bone health, to my surprise.

You said that some older adults suffer a serious bone injury and become immobile. As a result, the rest of the body suffers — weight, heart, lungs, and many other organs. I now believe loss of mobility is a serious health risk.

These health risks directly affect my plan to work as a developer till my 70’s.

Lastly, loss of mobility also affects our quality of life. My mom told me about this risk 20 years ago. She has since become less vocal about this risk.

Fragile bones become more common when we grow older. In their 70’s, both my parents suffered fractures and went through surgeries.

See ## strengthen our bones, reduce bone injuries #CSY for suggestions.

available time^absorbency[def#4]:2 limiting factors

see also ## identify your superior-absorbency domains

Time is a quintessential /limiting factor/ — when I try to break through and reach the next level on some endeavor, I often hit a /ceiling/ not in terms of my capacity but in terms of my available time. This is a common experience shared by many, therefore easy to understand. In contrast, a more subtle experience is the limiting factor of “productive mood” [1].

[1] This phrase is vague and intangible, so sometimes I speak of “motivation” — not exactly same and still vague. Sometimes I speak of “absorbency” as a more specific proxy.

“Time” is used as per Martin Thompson.

  • Specific xp: Many times I took leaves to attend an IV. The time + absorbency is a precious combination that leads to breakthrough in insight and muscle-building. If I only provide time to myself, most of the time I don’t achieve much.
    • I also take leave specifically to provide generic “spare time” for myself but usually can’t achieve the expected ROTI.
  • Specific xp: yoga — the heightened absorbency is very rare, far worse than jogging. If I provide time to myself without the absorbency, I won’t do yoga.
  • the zone — (as described in my email) i often need a block of uninterrupted hours. Time is clearly a necessary but insufficient condition.
  • time for workout — I often tell my friends that lack of time sounds like an excuse given the mini-workout option. Well, free time still helps a lot, but motivation is more important in this case.
  • localSys — absorbency is more rare here than coding drill, which is more rare than c++QQ which is more rare than java QQ
  • face time with boy — math practice etc.. the calm, engaged mood on both sides is very rare and precious. I tend to lose my cool even when I make time for my son.
  • laptop working at train stations — like MRT stations or 33rd St … to capture the mood. Available time by itself is useless

exec algo: with-volume

— WITH VOLUME
Trade in proportion to actual market volume, at a specified trade rate.

The participation rate is fixed.

— Relative Step — with a rate following a step-up algo.

This algo dynamically adjusts aggressiveness (participation rate) based on the relative performance of the stock versus an ETF. The strategy participates at a target percentage of overall market volume, adjusting aggressiveness when the stock is significantly underperforming (buy orders) or outperforming (sell orders) the reference security since today’s open.

An order example: “Buy 90,000 shares 6758.T with a limit price of ¥2500. Work the order with a 10% participation rate, scaling up to 30% whenever the stock is underperforming the Nikkei 225 ETF (1321.OS) by 75 basis points or more since the open.”

If we notice the reference ETF has a 2.8% return since open and our 6758.T has a 2.05% return, then the engine would assume 6758.T is significantly underperforming its peers (in its sector). The engine would then step up the participation to 30%, buying more aggressively, perhaps using bigger and faster slices.

What if the ETF has dropped 0.1% and 6758.T has dropped 0.85%? This would be unexpected since our order is a large order boosting the stock. Still, the other investors might be dumping this stock. The engine would still perceive the stock as underperforming its peers, and step up the buying speed.
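
The step-up rule in the example can be sketched as a toy function — the names, defaults and simplifications are mine, not any vendor’s engine:

```python
def participation_rate(stock_ret, etf_ret, base=0.10, boosted=0.30,
                       threshold_bps=75, side='buy'):
    """For a buy order, step up participation when the stock underperforms
    the reference ETF by at least threshold_bps since the open."""
    underperf_bps = (etf_ret - stock_ret) * 10000
    if side == 'buy' and underperf_bps >= threshold_bps:
        return boosted
    return base
```

For example, stock +2.0% vs ETF +3.0% is a 100bps underperformance, stepping the rate up to 30%; a real engine would also handle the sell side, rounding at the threshold, and much more.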

Y alpha-geeks keep working hard #speculation

Based on my speculation, hypothesis, imagination and a tiny bit of observation.

The majority of the effective, efficient, productive tech professionals don’t work long hours, because they already earn enough. Some of them could retire if they wanted to.

Some percentage of them quit a big company or a high position, sometimes to join a startup. One of the reasons — already earned enough. See my notes on envy^ffree

Most of them value work-life balance. Half of them act on this value rather than merely talk about it.

Many of them still choose to work hard because they love what they do, or want to achieve more, not because no-choice. See my notes on envy^ffree

by-level traversal output sequence: isBST #AshS

Q: given a sequence of payload values produced from a by-level traversal of a binary tree, could the tree be a BST?

Ashish gave me this question. We can assume the values are floats.

====analysis

(Not contrived, somewhat practical)

— idea 1:

Say we have lined up the values found on level 11. We will split the line-up into sections, each split point being a Level-10 value.

Between any two adjacent values on level 10 (previous level), how many current-level values can legally fit in? I would say up to 2.

In other words, each section can have two nodes or fewer. I think this is a necessary (sufficient?) condition.

–idea 2: One-pass algo

Try to construct a BST as we consume the sequence.

Aha — there’s only one possible BST we can build. If we can’t build one, then return false. I think most sequences can produce a BST, but not all — e.g. the level-order sequence q[ 2 3 1 ] admits no BST, since the only candidate tree (root 2, left child 1, right child 3) lists 1 before 3 by level.
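
Idea 2 can be sketched with a queue of (low, high) ranges — consume the sequence left to right, fitting each value into the earliest node slot that accepts it (my sketch, assuming distinct values):

```python
from collections import deque

def can_be_bst_level_order(seq):
    """True if seq could be the by-level traversal of some BST."""
    if not seq:
        return False
    INF = float('inf')
    q = deque([(seq[0], -INF, INF)])   # (node value, allowed low, allowed high)
    i = 1
    while q and i < len(seq):
        val, lo, hi = q.popleft()
        if i < len(seq) and lo < seq[i] < val:   # fits as the left child
            q.append((seq[i], lo, val))
            i += 1
        if i < len(seq) and val < seq[i] < hi:   # fits as the right child
            q.append((seq[i], val, hi))
            i += 1
    return i == len(seq)                         # every value found a slot
```

For instance [7, 4, 12, 3, 6, 8, 1, 5, 10] passes, while [11, 6, 13, 5, 12, 10] fails — the 10 fits no remaining slot.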

(unordered)map erase: prefer by-key !! itr #AshS

Relevant in coding tests like speed-coding, take-home, onsite. Practical knowledge is power !

As shown in https://github.com/tiger40490/repo1/blob/cpp1/cpp/lang_misc/mapEraseByVal.cpp

.. by-key is cleaner, not complicated by the iterator invalidation complexities.

You can save all the “bad” keys, and later erase one by one, without the invalidation concerns. You can also print the keys.

If you were to save the “bad” iterators instead, then once you erase one iterator, are the other iterators affected? No, but I don’t want to have to remember that.

STL iterator invalidation rules, succinctly has a succinct summary, but I don’t prefer to deal with these complexities when I have a cleaner alternative solution.
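
The same save-the-bad-keys-then-erase pattern carries over to Python, where mutating a dict during iteration likewise raises an error:

```python
d = {1: 'ok', 2: 'bad', 3: 'ok', 4: 'bad'}

# deleting inside a loop over d.items() would raise
# RuntimeError: dictionary changed size during iteration,
# so collect the offending keys first, then erase by key
bad_keys = [k for k, v in d.items() if v == 'bad']
for k in bad_keys:
    del d[k]
```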

std::pair mvctor = field-by-field

std::pair has no pointer field so I thought it needs no meaningful mvctor, but actually std::pair mvctor is defaulted i.e. field-wise move i.e. each field is moved.

If a pair holds a vector and a string, then the vector would be move-constructed, and so would the string.

Q1: So what kind of simple class would have no meaningful mvctor?
%%A: I would say a class holding no pointer whatsoever. Note it can embed another class instance as a field.

Q2: so why is std::pair not one of them?
A: std::pair is a template, so the /concrete/ field type can be a dynamic container including std::string.

All dynamic containers use pointers internally.

fine print ] source code

Q1: “Here is 95% of the logic” — In what contexts is such a documentation considered complete? Answered at the end.

When we programmers read source code and document the “business logic” implemented thereby, we are sometimes tempted to write “My write-up captures the bulk of the business logic. I have omitted minor details, but they are edge cases. At this stage we don’t need to worry about them”. I then hope I have not glossed over important details. I hope the omitted details are just fine print. I was proven wrong time and again.

Sound bite: source code is all details, nothing but details.

Sound bite: everything in source code is important detail until proven otherwise. The “proof” takes endless effort, so in reality, everything in source code is important detail.

The “business logic” we are trying to capture actually consists of not only features and functionalities, but also functional fragments, i.e. the details.

When we examine source code, a visible chunk of code with explicit function names, variable names, or explicit comments is hard to miss. Those are the “easy parts”, but what about those tiny functional fragments …

  • Perhaps a short condition buried in a complicated if/while conditional
  • Perhaps a seemingly useless catch block among many catches.
  • Perhaps a break/continue statement that seems to serve no purpose
  • Perhaps an intentional switch-case fall-through
  • Perhaps a seemingly unnecessary sanity check? I tend to throw in lots of them
  • Perhaps some corner-case error-handling module that looks completely redundant and forgettable, esp. compared to other error handlers.
  • Perhaps a variable initialization “soon” before an assignment
  • Perhaps a missing curly brace after a for-loop header

( How about the equal sign in “>=” … Well, that’s actually a highly visible fragment, because we programmers have trained vision to spot that “=” buried therein. )

Let me stress again. The visibility, code size … of a functional fragment is no indication of its relative importance. The low-visibility, physically small functional fragments can be just as important as a more visible functional fragment.

To the computer, all of these functional fragments are equally significant. Each could have impact on a production request or real user.

Out of 900 such “functional fragments”, which ones deal with purely theoretical scenarios that would never arise (eg extraneous data) .. we really don’t know without analyzing tons of production data. One minor functional fragment might get activated by a real production data item. So the code gets executed unexpectedly, usually with immediate effect, but sometimes invisibly, because its effect is concealed by subsequent code.

I would say there are no fine-prints in executable source code. Conversely, every part of executable source code is fine print, including the most visible if/else. Every executable code has a real impact, unless we use real production data to prove otherwise.

A1: good enough if you have analyzed enough production data to know that every omitted functional fragment is truly unimportant.

intellij=cleaner than eclipse !

intellij (the community version) is much cleaner than eclipse, and no less rich in features.

On a new job, My choice of java ide is based on
1) other developers in the team, as I need their support

2) online community support — as most questions are usually answered there
I think eclipse beats intellij

3) longevity — I hate to learn a java ide and lose the investment when it loses relevance.
I think eclipse beats intellij, due to open-source

4) other factors include “clean”

The most popular tools are often vastly inferior for me. Other examples:
* my g++ install in strawberryPerl is better than all the windows g++ installs esp. msvs
* my git-bash + strawberryPerl is a better IDE than all the fancy GUI tools
* wordpress beats blogger.com hands down
* wordpad is a far simpler rich text editor than msword or browsers or mark-down editors

value-based type as mutex #Optional.java

Value-based type is a new concept in java. Optional.java is the only important example I know, so I will use it as illustration.

One of the main ideas about value types is the lack of object identity (or perhaps their identity is detectable only to the underlying implementation i.e. JVM not Java applications). In such a world, how could we tell whether variables aa and bb “really” are the same or different?

Q: why avoid locking on value-based objects?
%%A: locking is based on identity. See why avoid locking on boxed Integers
A: https://stackoverflow.com/questions/34049186/why-not-lock-on-a-value-based-class

Side note — compared to java, c++ has a smaller community and collective brain power so discussions are more limited.

Y avoid us`boxed Integer as mutex

https://stackoverflow.com/questions/34049186/why-not-lock-on-a-value-based-class section on “UPDATE – 2019/05/18” has a great illustration

Auto-boxing of a small value like “33” is actually required (by the JLS) to reuse a cached Integer object, while boxing larger values usually produces distinct objects each time. The compiler/runtime has leeway to cache more, just as c++ compilers have leeway to optimize.

Remember: locking is based on object identity.
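
CPython has a direct analog — small ints are cached, so object identity (the basis of locking) is unreliable for such “value” objects:

```python
# CPython caches small ints (-5..256): two "different" 33's are in fact
# one object, so identity-based logic is unreliable for values
a, b = 33, 33
small_same = a is b          # True: both reference the cached object

# big ints built at runtime are genuinely distinct objects
# (int() defeats compile-time constant folding/merging)
c, d = int('1000000000'), int('1000000000')
large_same = c is d          # False
```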

wage+homePrice: biased views@China colleagues

— salary

Many China colleagues (YH, CSY, CSDoctor, Jenny as examples) say their classmates back in China earn much more than they do in the U.S.  These colleagues seem to claim the typical Chinese professional earns more than his U.S. counterpart.

Reality — those classmates are outliers. The average salary in China is much lower than U.S. Just look at statistics.

Some of these guys (like CSY) feel inferior and regret coming to the U.S.

— cost of reasonable lifestyle

Many Chinese friends complain that the cost level is higher in China than in the U.S. and Singapore. A young MLP colleague (Henry) said an RMB 500k/year income feels insufficient to a Chinese 20-something.

In reality, per-sqft property prices are indeed higher in some Chinese cities than in the U.S. For everything else in China, the cost level is much lower than in the U.S. Just look at statistics.

success in long-term learning: keen!=interest

For both my son and my own tech learning over the long term, “interest” is not necessarily the best word to capture the key factor.

I was not really interested in math (primary-secondary) or physics (secondary). In college, I tried to feel interested in electronics, analog IC design etc but was unsuccessful. At that level, extrinsic motivation was the only “interest”, the only real motivation in me. Till today I don’t know if I have found a real passion.

Therefore, the strongest period of my life to look at is not college but before college. Going through my pre-U schools, my killer strength was not so much “interest” but more like keenness — sharp, quick and deep, absorbency…

Fast forward to 2019, I continue to reap rewards due to the keenness — in terms of QQ and zbs tech learning. Today I have stronger absorbency than my peers, even though my memory, quick-n-deep, sharpness .. are no longer outstanding.

Throughout my life, looking at yoga, karaoke, drawing, sprinting, debating, piano, .. if I’m below-average and clueless, then I don’t think I can maintain “interest”.

Optional.java notes

Q: if an optional is empty, will it remain forever empty?

— An Optional.java variable could be null but should never be, as this instance needs state to hold at least the boolean isPresent.

If a method is declared to return Optional&lt;C&gt;, then the author needs to ensure she doesn’t return a null Optional ! This is not guaranteed by the language.

https://dzone.com/articles/considerations-when-returning-java-8s-optional-from-a-method illustrates a simple rule — use a local var retVal throughout then, at the very last moment, return Optional.ofNullable(retVal). This way, retVal can be null but the returned reference is never null.

If needed, an Optional variable should be initialized to Optional.empty() rather than null.

https://dzone.com/articles/considerations-when-returning-java-8s-optional-from-a-method shows the pitfalls of returning Optional from a method.

I feel this is like borrowing from a loan shark to avoid accidental credit card interest charge.

If you are careful, it can help you avoid the NPE of the traditional practice, but if you are undisciplined (like most of us), then this new strategy is even worse —

Traditional return type is something like String, but caller has to deal with nulls. New strategy is supposed to remove the risk/doubt of a null return value, but alas, caller still needs to check null first, before checking against empty!

–immutability is tricky

  1. the referent object is mutable
  2. the Optional reference can be reseated, i.e. not q[ final ]
  3. the Optional instance itself is immutable.
  4. Therefore, I think an Optional == a mutable ptr to a const wrapper object enclosing a regular ptr to a mutable java object.

Similarity to String.java — [B/C]

Compare to shared_ptr instance — [A] is true.

  • C) In contrast, a shared_ptr instance has mutable State, in terms of refcount etc
  • B) I say Not-applicable as I seldom use a pointer to a shared_ptr

— get() can throw exception if not present

— not serializable

— My motivation for learning Optional is 1) QQ 2) simplify my design in a common yet simple scenario

https://www.mkyong.com/java8/java-8-optional-in-depth/ is a demo, featuring … flatMap() !!

OO-modeling: c++too many choices

  • container of polymorphic Animals (having vtbl);
  • Nested containers; singletons;
  • class inheriting from multiple supertypes ..

In these and other OO-modeling decisions, there are many variations of “common practices” in c++ but in java/c# the best practice usually boils down to one or two choices.

No-choice is a Very Good Thing, as proven in practice. Fewer mistakes…

Dynamic languages, by contrast, rely on a single big hammer and make everything look like a nail….

This is another example of “too many variations” in c++.

dev jobs ] Citi SG+NY #Yifei

My friend Yifei spent 6+ years in ICG (i.e. the investment banking arm) of Citi Singapore.

  • Over 6Y no layoff. Stability is Yifei’s #1 remark
  • Some old timers stay for 10+ years and have no portable skill. This is common in many ibanks.
  • Commute? Mostly in Changi Biz Park, not in Asia Square
  • Low bonus, mostly below 1M
  • VP within 6 years is unheard-of for a fresh grad

I feel Citi is rather profitable and not extremely inefficient, just less efficient than other ibanks.

Overall, I have a warm feeling towards Citi and I wish it would survive and thrive. It offers good work-life balance, much better than GS, ML, LB etc

[17] deadlock involving 1 lock(again) #StampedLock

— Scenario 3: If your app uses just one lock, you can still get a deadlock. Eg: ThreadM (master) starts ThreadS (slave). S acquires a lock. M calls the now-deprecated suspend() method on S, then tries to acquire the lock, which is held by the suspended S. S is waiting to be resumed by M — deadlock.

Now the 4 conditions of deadlock:
* wait-for cycle? indeed
* mutually exclusive resource? Just one resource
* incremental acquisition? indeed
* non-Preemptive? Indeed. But if there’s a third thread, it can restart ThreadS.

— Scenario 2: a single thread using a single non-reentrant lock (such as StampedLock.java and many c++ lock objects) can deadlock with itself. A common mis-design.
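A tiny sketch of this Scenario-2 risk with StampedLock (class and method names are mine); tryWriteLock() is used so the demo returns instead of actually hanging:

```java
import java.util.concurrent.locks.StampedLock;

public class SelfDeadlock {
    // StampedLock is non-reentrant: the same thread cannot re-acquire it.
    // A blind second writeLock() here would block forever -- a
    // single-thread, single-lock deadlock. tryWriteLock() demonstrates the
    // re-acquisition failing (stamp 0) without hanging the demo.
    static boolean wouldSelfDeadlock() {
        StampedLock lock = new StampedLock();
        long stamp = lock.writeLock();      // first acquisition succeeds
        long second = lock.tryWriteLock();  // same thread tries again
        lock.unlockWrite(stamp);
        return second == 0;                 // 0 means "lock not available"
    }
}
```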

— Scenario 1:

In multithreaded apps, I feel single-lock deadlock is very common. Here’s another example — “S gets a lock, and waits for ConditionA. Only M can create ConditionA, but M is waiting for the lock.”

Here, ConditionA can be many things, including a Boolean flag, a JMS message, a file, a count to reach a threshold, a data condition in DB, a UI event where UI depends partly on ThreadM.

P90 [[eff c#]] also shows a single-lock deadlock. Here’s my recollection — ThrA acquires the lock and goes into a while-loop, periodically checking a flag to be set by ThrB. ThrB wants to set the flag but needs the lock.

 

debugger stepping into library

I often need my debugger to step into library source code.

Easy in java:

c++ is harder. I need to find more details.

  • in EclipseCDT, STL source code is available to the IDE (probably because class templates usually live in header files), and the debugger is able to step through it, though not smoothly.

Overall, I feel debugger support is significantly better in VM-based languages than in c++, even though debuggers were invented before these newer languages.

I guess the VM or the “interpreter” can serve as an “interceptor” between debugger and target application. The interceptor can receive debugger commands and suspend execution of the target application.

complacent guys]RTS #comfortZone #DeepakCM

Deepak told me that Rahul, Padma etc stayed in RTS for many years and became “complacent” and uninterested in tech topics outside their work. I think Deepak has sharp observations.

I notice many Indian colleagues (compared to East European, Chinese..) are uninterested in zbs or QQ topics. I think many of them learn the minimum needed to pass tech interviews. CSY has this attitude on coding IV but the zbs attitude on socket knowledge.

–> That’s a fundamental reason for my QQ strength on the WallSt body-building arena.

If you regularly benchmark yourself externally, often against younger guys, you are probably more aware of your standing, your aging, the pace of tech churn, … You live your days under more stress, both negative and positive stress.

I think these RTS guys may benchmark internally once in a while, if ever. If the internal peers are not very strong, then you would get a false sense of strength.

The RTS team may not have any zbs benchmark, since GTD is (99.9%) the focus for the year-end appraisal.

These are some of the reasons Deepak felt 4Y is the max .. Deepak felt H1 guys are always on their toes and therefore more fit for survival.

fear@large codebase #web/script coders

One Conclusion — my c++ /mileage/ made me a slightly more confident, and slightly more competent programmer, having “been there; done that”, but see the big Question 1 below.

— Historical view

For half my career I avoided enterprise technologies like java/c++/c#/SQL/storedProc/Corba/sockets… and favored light-weight technologies like web apps and scripting languages. I suspect that many young programmers also feel the same way — no need to struggle with the older, harder technologies.

Until GS, I was scared of the technical jargon, complexities, low-level APIs, debuggers/linkers/IDEs, compiler errors and opaque failures in java/SQL … (even more scared of C and Windows). Scared of the larger, more verbose codebases in these languages (cf the small php/perl/javascript programs)… so scared that I had no appetite to study these languages.

— many guys are unused to large codebases

Look around your office. Many developers have at most a single (rarely two) project involving a large codebase. Large like 50k to 100k lines of code excluding comments.

I feel the devops/RTB/DBA or BA/PM roles within dev teams don’t require the individual to take on those large codebases. Since it’s no fun, time-consuming and possibly impenetrable, few of them would take it on. In other words, most people who try would give up sooner or later. Searching in a large codebase is perhaps their first challenge. Even figuring out a variable’s actual type can be a challenge in a compiled language.

Compiling can be a challenge esp. with C/c++, given the more complex tool chain, as Stroustrup told me.

Tracing code flow is a common complexity across languages but worse in compiled languages.

In my experience, perl/php/py/javascript codebases are usually small like pets. When they grow into big creatures they are daunting and formidable just like compiled-language projects. Some personal experiences —
* Qz? Not a python codebase at all
* pwm comm perl codebase? I would STILL say the codebase would be bigger if using a compiled language

Many young male/female coders are not committed to large scale dev as a long-term career, so they probably don’t like this kinda tough, boring task.

— on a new level

  • Analogy — if you have not run marathons you would be afraid of it.
  • Analogy — if you have not coached a child on big exams you would be afraid of it.

I feel web (or batch) app developers often lack the “hardcore” experience described above. They operate at a higher level, cleaner and simpler. Note Java is cleaner than c++. In fact I feel weaker as a java programmer compared to a c++ programmer.

Q1: I have successfully mastered a few sizable codebases in C++, java, c#. So how many more successful experiences do I need to feel competent?
A: ….?

Virtually every codebase feels too big at some point during the first 1-2 years, often when I am in a low mood, despite the fact that, in my experience, I became competent with many of these large codebases.
I think Ashish, Andrew Yap etc were able to operate well with limited understanding.
I now see the whole experience as a grueling marathon. Tough for every runner, but I tend to start the race assuming I’m the weakest — impostor syndrome.
Everyone has to rely on logs and primitive code-browsing tools; any special tools are usually of marginal value. With java, a live debugger is the most promising tool but offers only limited pain-relief. Virtually all of my fellow developers face exactly the same challenges, so we all have to guess — I mean Yang, Piroz, Sundip, Shubin, … virtually all of them, even the original authors of the codebase. Even after spending 10Y with a codebase, we could face opaque issues. However, these peers are more confident in the face of ambiguity.

[19] 4 Deeper mtv2work4more$ After basic ffree

Note sometimes I feel my current ffree is so basic it’s not real ffree at all. At other times I feel it is real, albeit basic, ffree. After achieving my basic ffree, here are 4 deeper motivations for working hard for even more money:

  • am still seeking a suitable job for Phase B. Something like a light-duty, semi-retirement job providing plenty of free time (mostly for self-learning, blogging, helping kids). This goal qualifies as a $-motivation because … with more financial resources, I can afford to take some desirable Phase-B jobs at lower pay. In fact, I did try this route in my 2019 SG job search.
  • I wish to spend more days with grandparents — need more unpaid leaves, or work in BJ home
  • more respect, from colleagues and from myself
  • stay relevant for 25 years. For next 10 years, I still want more upstream yet churn-resistant tech skills like c++.

–Below are some motivations not so “deep”

  • better home location (not size) — clean streets; shorter commute; reasonable school.. Eg Bayonne, JC
  • Still higher sense of security. Create more buffers in the form of more diversified passive incomes.

— Below are some secondary $-motivations

  • more time with kids? Outside top 10 motivations.
  • better (healthy) food? usually I can find cheaper alternatives
  • workout classes? Usually not expensive

profilers for low-latency java

Most (simple) java profilers are based on jvm safepoints. At a safepoint, they can use the JVM API to query the JVM. Safepoint-based profiling is relatively easy to implement.

s-sync (Martin) is not based on safepoint.

Async profiler is written for openJDK, but some features are usable on the Zing JVM. Async profiler is based on process-level counters, so it can’t really measure micro-latency with any precision.

Perf is an OS-level profiler, probably based on kernel counters.

“Strategic” needs a re-definition #fitness

“Strategic” i.e. long-term planning/t-budgeting needs a re-definition. quant and c# were two wake-up calls that I tragically missed.

For a long time, the No.1 strategic t-expense was quant, then c#/c++QQ, then codingDrill (the current yellowJersey).

Throughout 2019, I considered workout time as inferior to coding drill .. Over weekends or evenings I often feel nothing-done even though I push myself to do “a bit of” yoga, workout, math-with-boy, or exp tracking.

Now I feel yoga and other fitness t-spend is arguably more strategic than tech muscle building. I say this even though fitness improvement may not last.

Fitness has arguably the biggest impact on brain health and career longevity

git | reword historical commit msg

Warning — may not work if there’s a recent merge-in on your branch

Find the target commit and its immediate parent commit.

git rebase -i the.parent.commit

First commit in the list would be your target commit. Use ‘r’ for the target commit and don’t change other commits. You will land in vim to edit the original bad commit msg. Once you save and quit vim, the rebase will complete, usually without error.

Now you can reword subsequent commit messages.

c++low-^high-end job market prospect

As of 2019, c++ low-end jobs are becoming scarce but high-end jobs continue to show robust demand. I think you can see those jobs across many web2.0 companies.

Therefore, it appears that only high-end developers are needed. The way they select candidates is … QQ. I have just accumulated the minimum critical mass for self-sustained renewal.

In contrast, I continue to hold my position in high-end coreJava QQ interviews.

conclusions: mvea xp

Is there an oth risk? Comparable to MSFM, my perception of the whole experience shapes my outlook and future decision.

  • Not much positive feedback besides ‘providing new, different viewpoints’, but Josh doesn’t give positive feedback anyway
  • should be able to come back to MS unless requirements are very stringent
  • Josh might remember Victor as more suitable for greenfield projects.
  • I think Josh likes me as a person and understands my priorities. I did give him 4W notice and he appreciated it.
  • I didn’t get the so-called “big picture” that Josh probably valued. Therefore I was unable to “support the floor” when the team is out. The last time I achieved that was in Macq.
  • work ethic — A few times I worked hard and made personal sacrifices. Josh noticed.
  • In the final month, I saw myself as fairly efficient in wrapping up my final projects, including the “serialization” work listed below

Q: I was brought in as a seasoned c++ old hand. Did I live up to that image? Note I never promised to be an expert
A: I think my language knowledge (zbs, beyond QQ) was sound
A: my tool chain GTD knowledge was as limited as other old hands.

Q: was the mvea c++ codebase too big for me?
A: No, given my projects are always localized. and there were a few old hands to help me out.

I had a few proud deliveries where I had some impetus to capture the momentum (camp out). Listed below. I think colleagues were impressed to some extent even though other people probably achieved more. Well, I don’t need to compare with those and feel belittled.

This analysis revealed that Josh is not easily impressed. Perhaps he has a high standard, as he never praised anyone openly.

  • I identified two stateless calc engines in pspc. Seeing the elegant simplicity in the design, I quickly zoomed in, stepped over and documented the internal logic, and replicated it in a spreadsheet.
  • my pspc avg price sheet successfully replicated a prod “issue”, shedding light on a hitherto murky part of the codebase
  • I quickly figured out serialization as the root cause of the outage, probably within a day.
  • I made two brave attempts to introduce my QOT innovation
  • My 5.1 Brazil pspc project was the biggest config project to date. I single-handedly overcame many compilation (gm-install) and startup errors. In particular, I didn’t leave the project half-cooked, even though I had the right to do so.
  • I made small contributions to the python test set-up

##With time2kill..Come2 jobjob blog

  • for coding drill : go over
    • [o] t_algoClassicProb
    • [o] t_commonCodingQ22
    • t_algoQQ11
    • open questions
  • go over and possibly de-list
    1. [o] zoo category — need to clear them sooner or later
    2. [o] t_oq tags
    3. t_nonSticky tags
    4. [o] t_fuxi tags
    5. Draft blogposts
    6. [o] *tmp categories? Sometimes low-value
    7. remove obsolete tags and categories
  • Hygiene scan for blogposts with too many categories/tags to speed up future searches? Low value?
  • [o=good for open house]

##command line c++toolchain: Never phased out

Consider the C++ build chain + dev tools on the command line. They never get phased out, never lose relevance, never become useless, at least till my 70s. New tools always, always keep the old features. In contrast, java and newer languages don’t need so many dev tools, and their tools are more likely to use a GUI.
— Top examples of similar things (I don’t have a good adjective):
  • unix command line power tools
  • unix shell scripting for automation
  • C API: socket API, not the concepts
— secondary examples:
  • C API: pthreads
  • C API: shared memory
  • concepts: TCP+UDP, http+cookies
Insight — unix/linux tradition is more stable and consistent. Windows tradition is more disruptive.

Note this post is more about churn (phased-out) and less about accumulation (growing depth)

finding1st is easier than optimizing

  • Problem Type: iterate all valid choices without duplicates — sounds harder than other types, but usually the search space is quite constrained and tractable
    • eg: regex
    • eg: multiple-word search in matrix
  • Problem Type: find best route/path/combo, possibly pruning large subtrees in the search space — often the hardest type
  • Problem Type: find first — usually easier

In each case, there are recurring patterns.

automation scripts for short-term GTD

Background — automation scripts have higher values in some areas than others

  1. portable GTD
    • Automation scripts often use bash, git, SQL, gradle, python/perl… similar to other portable GTD skills like instrumentation know-how
  2. long-term local (non-portable) GTD
  3. short-term local (non-portable) GTD

However, for now let’s focus on short-term local GTD. Automation scripts are controversial in this respect. They take up lots of time but offer some measurable payoff

  • they could consolidate localSys knowledge .. thick -> thin
  • They can serve as “executable-documentation”, verified on every execution.
  • They reduce errors and codify operational best practices.
  • They speed up repeated tasks. This last benefit is often overrated — only in rare contexts is a task so repetitive that we get tired and have to slow down without automation.

— strength — Automation scripts are a competitive strength of mine, even though I’m not the strongest.

— respect – Automation scripts often earn respect and start a virtuous cycle

— engagement honeymoon — I often experience rare engagement

— Absorbency — sometimes I get absorbed, though the actual value-add is questionable .. we need to keep in mind the 3 levels of value-add listed above.

jGC heap: 2 unrelated advantages over malloc

Advantage 1: faster allocation, as explained in other blogposts

Advantage 2: the programmer can "carelessly" create a "local" object in any method (call it method1), pass the object (by reference) into other methods, and happily forget about freeing the memory.

In this extremely common set-up, the reference itself is a stack variable in method1, but the heapy thingy is "owned" by the GC.

In contrast, c/c++ requires some "owner" to free the heap memory, otherwise memory would leak. There’s also the risk of double-free. Therefore, we absolutely need clearly documented ownership.
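A minimal Java sketch of Advantage 2 (the Order class and makeOrder() are hypothetical names): the "local" object escapes the method, yet nobody has to free it.

```java
public class GcOwnership {
    static class Order {
        final double price;
        Order(double p) { price = p; }
    }

    // The "local" object created here outlives the method call.
    // The reference variable o lives on the stack, but the heapy
    // thingy is owned by the GC -- no free() / delete needed.
    static Order makeOrder() {
        Order o = new Order(99.5);
        return o; // in C++ some documented "owner" would have to delete this
    }
}
```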

max-sum non-adjacent subSequence: signedIntArr#70%

Q: given a signed int array a[], find the best sub-sequence sum, where no two adjacent elements of a[] may both be chosen. Target: O(N) time and O(1) space.

====analysis

— O(1) idea 2:

Only two simple variables are needed: m_1 holds m[cur-1] and m_2 holds m[cur-2]. At each position, compare a[cur] + m_2 vs m_1.

https://github.com/tiger40490/repo1/blob/py1/py/algo_arr/nonAdjSeqSum.py is self-tested DP solution.
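The linked python file is the tested solution; below is my own Java sketch of idea 2, assuming the empty sub-sequence (sum 0) is allowed:

```java
public class NonAdjSum {
    // O(N) time, O(1) space. m1 = best sum using a[0..cur-1],
    // m2 = best sum using a[0..cur-2]. Empty sub-sequence allowed,
    // so an all-negative array yields 0.
    static int maxNonAdjSum(int[] a) {
        int m2 = 0, m1 = 0;
        for (int x : a) {
            int cur = Math.max(m1, m2 + x); // skip a[cur], or take it on top of m2
            m2 = m1;
            m1 = cur;
        }
        return m1;
    }
}
```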

FB 2019 ibt Indeed onsite

  • coding rounds — not as hard as the 2011 FB interview — regex problem
    • Eric gave positive feedback to confirm my success but perhaps other candidates did even better
    • No miscommunication as happened in the VersionedDict.
    • The 2nd Indeed interviewer failed me even though I “completed” the problem. The pregnant interviewer may follow suit.
  • data structure is fundamental to all the problems today.
  • SDI — was still my weakness but I think I did slightly better this time
  • Career round — played to my advantage as I have multiple real war stories. I didn’t prepare much. The stories just came out raw and hopefully authentic
  • the mini coding rounds — played to my advantage as I reacted fast, thanks to python, my leetcode practice …

So overall I feel getting much closer to passing. Now I feel One interview is all it takes to enter a new world

* higher salary than HFT
* possibly more green field
* possibly more time to research, not as pressed as in ibanks

sorting collection@chars|smallIntegers

Whenever an algo problem requires sorting a bunch of English letters, it’s always, always better to use counting sort: O(N) rather than O(N logN). Therefore, given a single (possibly long) string, sorting its characters should Never be O(N logN)!

Similarly, in more than one problem, we are given a bunch of integers bound within a limited range. For example, array index values could be presented to us as a collection that we want to sort. Sorting such integers should always, always be counting sort.
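A minimal Java counting sort over ASCII characters (the method name is mine): one O(N) counting pass plus a fixed-size frequency table.

```java
public class CountingSort {
    // O(N) sort for a string of ASCII characters: count occurrences,
    // then emit each character in ascending code order.
    static String sortChars(String s) {
        int[] freq = new int[128]; // fixed-size table: one slot per ASCII code
        for (char c : s.toCharArray()) freq[c]++;
        StringBuilder sb = new StringBuilder(s.length());
        for (char c = 0; c < 128; c++)
            for (int i = 0; i < freq[c]; i++) sb.append(c);
        return sb.toString();
    }
}
```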

GTD fear-graph: delays^opacity^benchmark..

  • — manifestations first, fundamental, hidden factors last
  • PIP, stigma, damagedGood. Note I’m not worried about cashflow
  • [f] project delays
  • [f] long hours
  • [f] long commute
  • large codebase in brown field. Am slower than others at reading code.
  • [f] opacity — is worse than complexity or codebase size
  • figure-out speed benchmark — including impostor’s syndrome
  • [f = faced by every team member]
The connections among these “nodes” look complex but may not be really complex.
PIP is usually the most acute, harmful item among them. I usually freak out, though in hindsight I always feel I tried my best and should not feel ashamed. Josh is one manager who refuses to use PIP, so under Josh I had no real fear.

dev career]U.S.: fish out of water in SG

Nowadays I feel in-demand on 1) Wall St, 2) with the web2.0 shops. I also feel welcome by 3) the U.S. startups. In Singapore, this feeling of in-demand was sadly missing. Even the bank hiring managers considered me a bit too old.

Singapore banks only have perm jobs for me, which feel unsuitable, unattractive and stressful.

In every Singapore bank I worked, I felt visibly old and left behind. Guys at my age were all managers… painful.

expertise: Y I trust lecture notes more than forums

Q: A lot of times we get technical information in forums, online lecture notes, research papers. why do I trust some more than others?

  1. printed books and magazines — receive the most scrutiny, because once printed, mistakes are harder to correct; for this fundamental reason, publishers apply extra scrutiny before printing.
  2. research papers — receive a lot of expert peer review and most stringent editorial scrutiny, esp. in a top journal and top conference. There’s also internal review within the authors’ organization, who typically cares about its reputation.
  3. college lecture notes — are more “serious” than forums,
    • mostly due to the consequence of misinforming students.
    • When deciding what to include in lecture notes, many professors are conservative and prudent when adding less established, less proven research findings. The professor may mention those findings verbally but more prudent about her posted lecture notes.
    • research students ask lots of curious questions and serve as a semi-professional scrutiny of the lecture notes
    • lecture notes are usually written by PhD holders
  4. Stackoverflow and Quora — have quality control via voting by accredited members.
  5. the average forum — least reliable.

[11]concurrent^serial allocation in JVM^c++

–adapted from online article (Covalent)

Problem: Multithreaded apps create new objects at the same time. During object creation, memory is locked. On a multi-CPU machine (threads running concurrently) there can be contention.

Solution: Allow each thread to have a private piece of the EDEN space — a Thread-Local Allocation Buffer (TLAB):
-XX:+UseTLAB
-XX:TLABSize=
-XX:+ResizeTLAB

You can also analyse TLAB usage with -XX:+PrintTLAB.

Low-latency c++ apps use a similar technique. http://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919 reveals insights into lock contention in malloc().

Q: When do U choose c++over python4personal coding #FB

A facebook interviewer asked me “I agree that python is great for coding interviews. When do you choose c++?” I said

  1. when I need to use template meta programming
  2. when I need a balanced tree

Now I think there are other reasons

  1. when I practice socket programming
  2. when I practice memory mgmt techniques — even java won’t give me the low-level controls
  3. when I parse binary data using low-level constructs like reinterpret_cast, endianness conversion
  4. when I practice pthreads — java is easier

I didn’t showcase c++ but I felt very confident and strong about my c++, which probably shined through in front of the last interviewer (from the architect team)

I think my c++/python combo was good combo for the FB onsite, even though only one interviewer noticed my c++ strength.

%%FB onsite coding/SDI questions

— Q: design type-ahead i.e. search suggestion. Scalability is key.

— Q: innerProduct2SparseArray (Table aa, Table bb). First you need to design the undefined “Table” class to represent a sparse array.
Then you need to write real code for innerProduct2SparseArray(), assuming the two Tables are already populated according to your definition.

— Q: add2DecimalsSameLength (string decimal1, string decimal2) to return a string version of the sum. You can’t use a python (arbitrary-precision) integer to hold the sum — pretend integers have limited size as in java/c++. You must use strings instead.

Aha — carry can only be 0 or 1
I forgot to add the last carry as an additional digit beyond the length of the input strings 😦
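A Java sketch of the digit-by-digit addition (the method name and the same-length, digits-only assumption are mine), with the easily-forgotten final carry handled:

```java
public class StringAdd {
    // Adds two equal-length strings of digits '0'-'9', right to left.
    static String addSameLength(String a, String b) {
        StringBuilder sb = new StringBuilder();
        int carry = 0; // carry is always 0 or 1 -- the "aha" above
        for (int i = a.length() - 1; i >= 0; i--) {
            int sum = (a.charAt(i) - '0') + (b.charAt(i) - '0') + carry;
            sb.append((char) ('0' + sum % 10));
            carry = sum / 10;
        }
        if (carry == 1) sb.append('1'); // the easily-forgotten final carry digit
        return sb.reverse().toString();
    }
}
```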

— Q: add2longBinary(string binaryNum1, string binaryNum2). I told interviewer that my add2BigDecimal solution should work, so we skipped this problem.

— Q: checkRisingFalling(listOfInts) to return 1 for rising, -1 for falling and 0 for neither. Rising means every new num is no lower than previous
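A possible Java sketch of checkRisingFalling (treating an all-equal sequence as neither rising nor falling is my assumption; the problem statement doesn’t say):

```java
public class Monotone {
    // 1 = rising (each num no lower than previous), -1 = falling, 0 = neither.
    static int checkRisingFalling(int[] nums) {
        boolean rising = true, falling = true;
        for (int i = 1; i < nums.length; i++) {
            if (nums[i] < nums[i - 1]) rising = false;  // a drop breaks "rising"
            if (nums[i] > nums[i - 1]) falling = false; // a rise breaks "falling"
        }
        if (rising && !falling) return 1;
        if (falling && !rising) return -1;
        return 0; // mixed, or constant (both flags still true)
    }
}
```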

— Q: checkDiameter(rootOfBinTree)

save iterators and reuse them+!invalidation risk

For a static STL container, the iterator objects can be safely stored and reused. The dreaded iterator invalidation is a risk only under structural changes.

Many coding interview questions allow (even require) me to save those iterators, storing them in a vector, a hash table … Afterwards, we retrieve a saved iterator and visit its next/previous element.

SDI: 3 ways to expire cached items

server-push update ^ TTL ^ conditional-GET # write-through is not cache expiration

Few online articles list these solutions explicitly. Some of these are simple concepts but fundamental to DB tuning and app tuning. https://docs.oracle.com/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#COHDG198 compares write-through ^ write-behind ^ refresh-ahead. I think refresh-ahead is similar to TTL.

B) cache-invalidation — some “events” would trigger an invalidation. Without invalidation, a cache item would live forever with an infinite TTL, like the list of China provinces.

After cache proxies get the invalidation message in a small payload (bandwidth-friendly), the proxies discard the outdated item, and can decide when to request an update. The request may be skipped completely if the item is no longer needed.

B2) cache-update by server push — IFF bandwidth is available, server can send not only a tiny invalidation message, but also the new cache content.

IFF combined with TTL, or with reliability added, then multicast can be used to deliver cache updates, as explained in my other blogposts.

T) TTL — more common. Each “cache item” embeds a time-to-live data field a.k.a expiry timestamp. Http cookie is the prime example.

In Coherence, it’s possible for the cache proxy to pre-emptively request an update on an expired item. This would reduce latency but requires a multi-threaded cache proxy.

G) conditional-GET in HTTP is a proven industrial strength solution described in my 2005 book [[computer networking]]. The cache proxy always sends a GET to the database but with a If-modified-since header. This reduces unnecessary database load and network load.

W) write-behind (asynchronous) or write-through — in some contexts, the cache proxy handles not only Reads but also Writes. Read requests read from or add to the cache, while Write requests update both the cache proxy and the master data store. Drawback — in a distributed topology, updates from other sources are not visible to “me” the cache proxy, so I still rely on one of the other 3 means.

| scenario | TTL | eager server-push | conditional-GET |
|---|---|---|---|
| frequent query, infrequent updates | efficient | efficient | frequent but tiny requests between DB and cache proxy |
| latency important | OK | lowest latency | slower lazy fetch, though efficient |
| infrequent query | good | wastes DB/proxy/NW resources, as “push” is unnecessary | efficient on DB/proxy/NW |
| frequent update | unsuitable | high load on DB/proxy/NW | efficient conflation |
| frequent update + query | unsuitable | can be wasteful | perhaps most efficient |

 

@outage: localSys know-how beats generic expertise

When a production system fails to work, do you contact

  • XX) the original developer or the current maintainer over last 3Y (or longer) with a no-name college degree + working knowledge of the programming language, or
  • YY) the recently hired (1Y history) expert in the programming language with PhD and published papers

Clearly we trust XX more. She knows the localSys and likely has seen something similar.

Exception — what if YY has a power tool like a remote debugger? I think YY may gain fresh insight that XX is lacking.

XX may be poor at explaining the system design. YY may be a great presenter without low-level and hands-on know-how.

If you discuss the language and underlying technologies with XX, she may show very limited knowledge… Remember Andrew Yap and Viswa of RTS?

Q: Who would earn the respect of teammates, mgr and external teams?

XX may have a hard time getting a job elsewhere .. I have met many people like XX.

lockfree stack with ABA fix #java AtomicStampedRef

http://15418.courses.cs.cmu.edu/spring2013/article/46 explains ABA in the specific context of a … lockfree stack. To counter the ABA problem, it uses CAS2 i.e. CAS on a group of two objects — the original object + a stamp.

Java CAS2? I believe AtomicStampedReference is designed specifically for it.

http://tutorials.jenkov.com/java-util-concurrent/atomicstampedreference.html#atomicstampedreference-and-the-a-b-a-problem explains the AtomicStampedReference solving the ABA problem but in the last section it doesn’t clearly explain the benefit of get().

Also, the retry needs a loop.
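Here is my minimal sketch of a Treiber-style lockfree stack using AtomicStampedReference (class and method names are mine). Note the retry loop around each CAS2, and the stamp incremented on every successful push/pop, so an A→B→A head change remains detectable:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Treiber stack whose head pointer is paired with a version stamp.
class StampedStack<T> {
    private static class Node<T> {
        final T val; Node<T> next;
        Node(T v) { val = v; }
    }
    private final AtomicStampedReference<Node<T>> head =
            new AtomicStampedReference<>(null, 0);

    public void push(T v) {
        Node<T> n = new Node<>(v);
        int[] stamp = new int[1];
        for (;;) { // the retry loop
            Node<T> old = head.get(stamp); // reads ref + stamp atomically
            n.next = old;
            // CAS2: both the reference and the stamp must match
            if (head.compareAndSet(old, n, stamp[0], stamp[0] + 1)) return;
        }
    }

    public T pop() { // returns null if the stack is empty
        int[] stamp = new int[1];
        for (;;) {
            Node<T> old = head.get(stamp);
            if (old == null) return null;
            if (head.compareAndSet(old, old.next, stamp[0], stamp[0] + 1))
                return old.val;
        }
    }
}
```

Even if another thread pops “A” and pushes it back, the stamp has advanced, so the stale CAS fails and the loop retries.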

====Actually, many ABA illustrations are simplistic. Consider this typically simplistic illustration of a CAS stack:

  1. Thread 0 begins a POP and sees “A” as the top, followed by “B”. Thread 0 saves “A” and “B”, before committing anything.
  2. Thread 1 begins and completes a POP , returning “A”.
  3. Thread 1 begins and completes a Push of “D”.
  4. Thread 1 begins and completes a Push of the same “A”. So now the actual stack top has A above D above B, i.e. with D inserted just below the original top-dog “A”
  5. Thread 0 sees that “A” is “still” on top and returns “A”, setting the new top to “B”. Therefore, Node D is lost.

— With a vector-of-pointer implementation, Thread 0 needs to save its integer position within the stack. At Step 5, it would then detect that the A sitting at the stack top is now at a higher position than observed earlier — a symptom of ABA.

The rest is simple. Thread 0 should then query the new item (D) below A. Lastly, the CAS would compare current stack top position with the saved position, before committing.

However, in reality I think a vector-of-ptr is non-trivial to implement if we have two shared mutable things to update via CAS: 1) the stackTopPosition integer variable and 2) the (null) ptr in the next slot of the vector.

However, if Thread 1 replaces B with D, keeping the stack size unchanged, then the ABA would be undetectable by position check alone.

— with a linked list implementation, I think we only have node addresses, so at Step 5, Thread 0 can’t tell that D has been inserted between A and B.

You may consider using stack size as a second check, but that would have similar complexity to CAS2 while being less reliable.

impostor’s syndrome: IV^on-the-job

I feel like an impostor more on the job than in interviews. I have strong QQ (+ some zbs) knowledge during interviews, and I feel that strength more and more often in my c++ interviews, in addition to java ones.

Impostor’s syndrome is all about benchmarking. In job interviews, I sometimes come up stronger than the interviewers, esp. with QQ topics, so I sometimes feel the interviewer is the impostor !

In my technical discussions with colleagues, I also feel like an expert. So I probably make them feel like impostors.

So far, all of the above “expert exchanges” are devoid of any localSys. When the context is a localSys, I have basically zero advantage. I often feel the impostor’s syndrome because I probably oversold during the interview and set up sky-high expectations.

any zbs in algo questions@@

I think Rahul may have a different view, but I feel some QQ questions qualify as zbs, while very few of the algo tricks do.

In contrast, data structure techniques are more likely to qualify as zbs.

Obviously, the algorithm/data-structure research innovations qualify as zbs, but they are usually too complex for a 45-min coding interview.

single-writer lockfree data structure@@

Most of the lockfree data structures I have seen are designed for two or more writer threads.

Q: what if in your design, there can be at most one writer thread but some reader threads. Any need for lockfree?
A: I don’t think so. I think both scenarios below are very common.

  • Scenario: If notification required, then mutex + condVar
  • Scenario: In many scenarios, notification is not needed — Readers simply drop by and read the latest record. In java, a volatile field suffices.
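A minimal Java sketch of the second scenario (class and method names are mine): one writer publishes via a volatile field; readers simply drop by and read the latest value with no lock or CAS:

```java
public class SingleWriterBox {
    // Exactly one writer thread calls set(); any number of readers call get().
    // volatile guarantees readers see the latest published value --
    // no lock, no CAS, no notification machinery needed.
    private volatile long latest;

    public void set(long v) { latest = v; } // writer thread only
    public long get() { return latest; }    // any reader thread
}
```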

 

multi-core as j4 multi-threading #STM

Q: why choose multi-threaded design instead of single-threaded processes?

Most publications mention multi-core hardware as an answer. Questionable.

With multi-threading, you can run 30 threads in one process. Or you can run 30 single-threaded processes as in the RTS parser and Rebus — industrial-strength, proven solutions. Both designs make use of all CPU cores.

Between these two designs, heap memory efficiency can be different, as the 30 threads are able to share 99GB of objects in the same address space, but the 30 processes would need shared memory.

I feel the lesser-known middle-ground design is … 30 threads each running in single-threaded mode. By definition, these 30 threads share only immutable data; anything mutable is thread-local.

In any multi-threaded design, those 30 threads can share the text segment, i.e. memory occupied by code. The text segment tends to be smaller than the heap footprint.

In a boss-worker design, the worker threads may need to share a few mutable objects, so these worker threads are not strictly STM. They are quasi-STM.

funcPtr as argument, with default value

Suppose you write a dumpTree() which takes a “callback” parameter, so you can invoke callback(aTreeNodeOfTheTree).

Sounds like a common requirement?

Additionally, suppose you want “callback” to have a default value of no-op, basically a nullptr.

Sounds like a common requirement?

I think c/c++ doesn’t have easy support for this. In my tested code https://github.com/tiger40490/repo1/blob/cpp1/cpp/algo_binTree/binTreeUtil.h, I have to define a wrapper function without the callback parameter. Wrapper would call the real function, passing in a nullptr as callback. Note nullptr needs casting.

clean language==abstracted away{hardware

Nowadays I use the word “clean” language more and more.

Classic example — java is cleaner than c# and c++. I told a TowerResearch interviewer that java is easier to reason with. Fewer surprises and inconsistencies.

The hardware is not as “consistent” as we would wish. Those languages closer to the hardware (and programmers of those languages) must deal with the “dirty” realities. To achieve consistency, many modern languages make heavy use of the heap and smart pointers.

For consistency, everything is treated as a heapy thingy, including functions, methods, metadata about types …

For consistency, member functions are sometimes treated as data members.

latency favors STM; lockfree is for moderate latency

Strictly single-threaded mode means no shared mutable state. Therefore, really low-latency apps should probably avoid lockfree programming, which assumes the presence of shared mutable state. If there is no shared mutable state, then the lockfree constructs would impose an unneeded performance penalty.

Now I think the real low-latency systems always prefer Single-Threaded-Mode. But is it feasible?

  • xtap and (loosely) Rebus are both STM — very high performance, proven designs.
  • Nasdaq’s new java-based architecture is STM, including their matching engine.
  • The matching engines in many exchanges/ECNs are STM. Remember FXAll interview?

 

job satisfaction^salary #CSDoctor

I feel CSDoctor has reasonable job satisfaction. If I were in his role my job satisfaction would be lower.

His sub-domain in finance is niche. Equity volatility data analysis, not a big infrastructure that needs a team to support. I guess it could be considered a glorified spreadsheet. I think the users are front office traders looking at his data to make strategic business decisions.

His technical domain is very niche. The team (4 people) made a conscious decision to use c# early on, and has not really changed to java. C# is shunned by West Coast and all internet companies including mobile, cloud and big-data companies. Even Wall St is increasing investment in javascript based GUI, instead of c# GUI.

I see very high career risk in such a role. What if I get kicked out? (Over 7 years the team did kick out at least 2 guys.. Last-in-first-out.) What if I don’t get promoted beyond VP and want to move out?

I won’t enjoy such a niche system. It limits career mobility. 路越走越窄 (the road gets narrower and narrower).

This is one of the major job dis-satisfactions in my view. Other dis-satisfactions:

  • long hours — I think he is not forced to work long hours. He decides to put in extra hours, perhaps to get promotion.
  • stress — (due to ownership) CSDoctor is motivated by the stress. I would not feel such high motivation, if put in his shoes.
  • commute — I believe he has 1H+ commute. I feel he has very few choices in terms of jobs closer to home, because his domain is extremely niche.

His major advantage is salary, not really rank.

STL container=#1 common resource owner #RAII

Stroustrup said every resource (usually a heapy thingy) needs to have an owner, who will eventually return the resource.

By his definition, every resource has an acquire/release protocol. These resources include locks, DB connections and file handles. The owner is the object responsible for the release.

  • The most common resource owner is the STL container. When a std::vector or std::unordered_multimap … is destructed, it would release/return its heap memory to the memory manager.
  • The best-known resource-owner is the family of smart pointers.
  • You can also combine them as a container of smart pointers.
  • All of these resource owners rely on RAII

central data-store updater in write-heavy system

I don’t know how often we encounter this stringent requirement —

Soccer world cup final, or big news about Amazon … millions of users post comments on a web page, and all comments need to be persisted and shown on some screen.

Rahul and I discussed some simple design. At the center is a single central data store.

  • In this logical view, all the comments are available at one place to support queries by region, rating, keyword etc.
  • In the physical implementation, could use multiple files or shared-memory or distributed cache.

Since the comments come in a burst, this data store becomes the bottleneck. Rahul said there are two unrelated responsibilities on the data store updaters. (A cluster of updaters might be possible.)

  1. immediately broadcast each comment to multiple front-end read-servers
  2. send an async request to some other machine that can store the data records. Alternatively, wait to collect enough records and write to the data store in a batch

Each read-server has a huge cache holding all the comments. The server receives the broadcast and updates its cache, and uses this cache to service client requests.

val@pattern_recognition imt reusable_technique

Reusable coding techniques include my classic generators, DP, augmented trees, array-element swap, bigO tricks like hash tables and median-of-3 pivot partitioning.

  • One of the best examples of reusable technique — generate “paths/splits” without duplicates. Usually tough.

More useful than “reusable techniques” are pattern-recognition insight into the structure and constraints in a given coding problem. Without these insights, the problem is /impenetrable/intractable/.

Often we need a worst input to highlight the hidden constraint. The worst input can sometimes quickly eliminate many wrong pathways through the forest and therefore help us see the right pathway.

However, in some context, a bigO trick could wipe out the competition, so much so that pattern recognition, however smart, doesn’t matter.

 

##broad viability factors for dev-till-70

In this blogpost I focus on broad, high-level viability factors for my dev-till-70 career plan.

  • see j4 dev-till-70
  • — #1 factor is probably health. See my blog tag. I believe I’m in control of my destiny.
  • However, in reality physical health might be the real limiting factor.
  • — #2 factor is probably market demand
  • Wall St contract market is age-friendly — [19] y WallSt_contract=my best Arena #Grandpa
  • c++ is good skillset for older guys. Core java is also good.
  • churn
  • Jiang Ling of MS said even at an old age, “easy to find paid, meaningful coding projects, perhaps working at home”. Xia Rong agreed. I kinda prefer an office environment, but if the work location is far away, at least there exists an option to take it on remotely.
  • — A G5 factor is my competitiveness among candidates. See Contract: unattractive to young developers
  • coding drill?

toy^surgeon #GS-HK@lockfree

H:= interviewer, an OMS guy in GS-Hongkong, bent on breaking the candidate.

Q3: any real application of CAS in your project?

Q3b: why did you choose CAS instead of traditional lock-based? Justify your decision.
%%A: very simple usage scenario .. therefore, we were very sure it was correct and not slower, probably faster [1]. In fact, it was based on published solutions, customized slightly and reviewed within my team.
%%A: In this context, I was free to make my own decisions. Actually my manager liked this kind of new idea. Users didn’t need to know this level of detail.

[1] uncontended lock acquisition is still slower than CAS

Q3d: how much faster than lock-based?
%%A: I didn’t benchmark. I know it wasn’t slower.

H: but any change is risky
%%A: no. There was no existing codebase to change.

H: but writing new code is a change
%%A: in that case, lock-based solution is also risky.

H: “Not slower” is not a good answer.

%%A: I don’t think I can convince you, but let me try one last time. Suppose my son wants to try a new toy. We know it doesn’t cost more than a familiar toy. We aren’t sure if he would actually keep it, but there’s no harm trying it out.

I said this as a last-ditch effort, since I had all but lost the chance to convince him. So I took a risky sharp turn.

H: but this is production not a toy !

So he had me on the hook! What I should have said:

A: No we didn’t roll it out to production without testing. Look at what google lab does.

A: well, my wife went through laser surgery. The surgeon was very experienced and tried a relatively new technique. She was not a guinea pig. Actually the procedure is new but shown to be no worse than the traditional techniques. Basically, we don’t need to do lots of benchmarks to demonstrate a new technique is worth trying. For simple, safe, well-known-yet-new techniques, it’s not always a bad idea to try it on a small sample. Demanding extensive analysis and benchmark is a way to slow down incremental innovations.

A: the CAS technique has been well-researched for decades and tried in many systems. I also used it before.

find any black-corner subMatrix #52%

https://www.geeksforgeeks.org/find-rectangle-binary-matrix-corners-1/

Q: given a black/white matrix, find any rectangle whose all four corners are black.
Q2: list all of them
Q3 (google): find the largest

— idea 2: record all black cell locations and look out for 3-corner groups

a Rec class with {northwest corner, northeast corner, southwest corner}

First pass: for each pair of black cells on the same row, record it in a hashmap {northwest cell -> list of x-coordinates to its right}. We will iterate the list later on.

2nd pass: scan each column. For each pair on the same col, say cells A and C, use A to look up the list of northeast corners in the hashmap. Each northeast corner B would give us a 3-corner group. For every 3-corner group, check if the fourth corner exists.

— idea 1: row by row scan.
R rows and C columns. Assuming C < R i.e. slender matrix

For the first row, record each ascending pair [A.x, B.x] (no need to save the y coordinate) in a big hashset. If S black cells on this row, then O(SS).

In next row, For each new pair, probe the hashset. If hit, then we have a rectangle, otherwise add to the hashset. If T black cells on this row, then O(TT) probes and (if no lucky break) O(TT) inserts.

Note one single hashset is enough. Any pair matching an earlier pair identifies a rectangle. The matching would never occur on the same row 🙂 Optionally, We can associate a y-coordinate to each record, to enable easy calculation of area.

After all rows are processed, if no rectangle, we have O(SS+TT+…). Worst case the hashset can hold C(C-1)/2 pairs, so we are bound by O(CC). We also need to visit every cell, in O(CR)

If C > R, then we should process column-by-column, rather than row-by-row

Therefore, we are bound by O( min(CC,RR) + CR). Now min(CC,RR) <= CR, so we are bound by O(CR) .. the theoretical limit.

— idea 4: for each diagonal pair found, probe the other corners
If there are H black cells, we have up to HH pairs to check 😦 In each check, the success rate is too low.

— Idea 5: Brute force — typewriter-scan. For each black cell, treat it as a top-left corner and look rightward for a top-right corner. If found, then scan down for a pair of bottom-left/bottom-right corners. If not found, finish scanning the given row and discard the row.

For a space/time trade-off,

  • once I find a black cell on current row, I put it in a “horizontal” vector.

standard SQL to support pagination #seek#Indeed

Context — In the Indeed SDI, each company page shows the latest “reviews” or “articles” submitted by members. When you scroll down (or hit Next10 button) you will see the next most recent articles.

Q: What standard SQL query can support pagination? Suppose each record is an “article” and page size is 10 articles.

I will assume each article has an auto-increment id (easier than a unique timestamp) maintained in the database table. This id enables the “seek” method.  The first page (usually the latest 10 articles) is sent to the browser.  Then the “fetch-next” command from the browser would contain the last id fetched. When this command hits the server, should we return the next 10 articles after that id (AA), or (BB) should we check the latest articles again and skip the first 10? I prefer AA. BB can wait until the user refreshes the web page.

The SQL:2008 industry standard supports both (XX) the top-N fetch feature and (YY) the offset feature, but for several reasons [1], only XX is recommended:

select * from Articles where id < :lastFetchedId order by id desc fetch first 10 rows only

[1] http://www.use-the-index-luke.com clearly explains that the “seek” method is superior to the “offset” method. The BB scenario above is one confusing scenario affecting the offset method. Performance is also problematic when offset value is high. Fetching the 900,000th page is roughly 900,000 times slower than the first page.

WallSt=age-friendly to older guys like me

I would say WallSt is Open to older techies.

I wouldn’t say WallSt is kind to old techies.

I would say WallSt is age-friendly

I would say WallSt offers a bit of the best features of age-friendly professions such as doctors and accountants.

Q: I sometimes feel WallSt hiring managers are kind to older techies like me, but really?
A: I feel WallSt hiring managers are generally a greedy species but there are some undercurrents :

  • 🙂 traditionally, they have been open to older techies who are somewhat less ambitious, less driven to move up, less energetic, less willing to sacrifice personally. This tradition is prevalent for decades in U.S. work culture and I believe it will stay. No such tradition in China and Singapore,
  • 🙂 U.S.hir`mgr may avoid bright young candd #Alan
  • 🙂 some older guys do perform well above expectation. Capability decline with age is very mild and questionable in many individuals, but work ethics differ among individuals, unrelated to age.
  • 😦 if an older guy needs to be cut, many hiring managers won’t hesitate… merciless.

Overall, I think Wall St hiring managers are open to older guys but not sympathetic or merciful. They are profit-driven, not compassionate. The fact that I am so welcome on Wall St is mostly due to my java/c++ QQ, not anyone’s kindness.

I thank God. I don’t need to thank Wall St.

j4 dev-till-70 [def]

  • j4: I’m good at coding. We want to work with what we are good at. I have worked 20 years so by now I kinda know what I’m good at.
  • j4: given my mental power, use it or lose it.
  • j4: real world problem-solving, not hypothetical problems, not personal problems.
  • j4: responsibility/ownership — over some module
    • teaching also works
    • volunteering also works
  • j4: interaction with young people
    • teaching also works
  • j4: respect — from coworkers and users. I want to be a shining example of an old but respectable hands-on techie but not an expert.
    • teaching also works
  • j4: service — provide a useful service to other people. Who are these other people? LG2
    • teaching also works
  • j4: meaningful work? vague
  • j4: be relevant to the new economy? vague

advantage@old_techie: less2lose]race !!江郎才尽

See also age40-50career peak..really@@stereotype,brainwash,,

At my age, compared to my younger years, the long-term consequences of a PIP-type event are less severe and less destructive, including the blow to long-horizon self-confidence. Reason? That horizon isn’t as long for me as for the younger competitors on the rise. Imagine that on a new job you struggle to clear the bar

  • in figure-things-out speed,
  • in ramp-up speed,
  • in independent code-reading…
  • in delivery speed
  • in absorbency,
  • in level of focus
  • in memory capacity — asking the same questions over and over.
  • in dealing with ambiguity and lack of details
  • in dealing with frequent changes

As a 30-something, you would feel terrified, broken, downcast, desperate …, since you are supposed to be at your prime in terms of capacity growth. A study quoted by CNA [3] found that younger members of the workforce were significantly more stressed than older workers. You would worry about having “passed my peak” way too early, and facing a prolonged decline … 江郎才尽 (talent running dry).

In contrast, an older techie like me starts the same race in a new team, against a lower level of expectation [1] and have less to prove, so I can compete while carrying less baggage.

A related advantage — some (perhaps mediocre) older techies have gone through a lot of ups and downs, so we have some wisdom. The other side of the coin — we could fall into the trap of 刻舟求剑 (clinging to outdated reference points).

Managers and coworkers naturally have a lower expectation of older techies. Grandma’s wisdom — she always reminds me that I don’t have to benchmark myself against younger team members. In some teams, I can follow her advice.

Any example? Bill Pinsky, Paul and CSY of RTS?

[1] WallSt contract market

[3] https://www.channelnewsasia.com/news/commentary/how-to-cope-burn-out-work-demands-stress-tired-office-11496502?cid=h3_referral_inarticlelinks_24082018_cna

opaque c++trouble-shooting: bustFE streamIn

This is a good illustration of fairly common opaque c++ problems, the most dreadful/terrifying species of developer nightmares.

The error seems to be somewhat consistent but not quite.

Reproducing it in the dev environment was a first milestone. Adding debug prints proved helpful in this case, but sometimes it would take too long.

In the end, I needed a good hypothesis, before we could set out to verify it.

     81     bool SwapBustAction::streamInImpl(ETSFlowArchive& ar)
     82     { // non-virtual
     83       if (exchSliceId.empty())
     84       {
     85         ar >> exchSliceId;
     86       }
    104     }
    105     void SwapBustAction::streamOutImpl(ETSFlowArchive& ar) const
    106     { // non-virtual
    107       if (exchSliceId.size())
    108       {
    109         ar << exchSliceId;
    110       }

When we save the flow element to file, we write out the exchSliceId field conditionally as on Line 107, but when we restore the same flow element from file, the function looks for the exchSliceId field unconditionally as on Line 85. When the function can’t find this field in the file, it hits BufferUnderflow and aborts the restore of the entire flow chain.

The serialization file uses field delimiters between the exchSliceId field and the next field which could be a map. When the exchSliceId field is missing, and the map is present, the runtime would notice an unusable data item. It throws a runtime exception in the form of assertion errors.

The “unconditional” restore of exchSliceId is the bug. We need to check the exchSliceId field is present in the file, before reading it.

In my testing, I only had a test case where exchSliceId was present. Insufficient testing.

##skillist to keep brain active+healthy

— generally I prefer low-churn (but not lethargic) domains to keep my brain reasonably loaded:
[as] classic data structure+algorithms — anti-aging
[s] C/C++,
SQL,
c++ build using the tool chain, and trouble-shooting using instrumentation
c++ TMP
[s] memory-mgmt
[s] socket,
[s] mkt data,
[s] bond math,
basic http?
[a=favor accu ]
[s=favor slow-changing domains]
— Avoid churn
  • jxee,
  • c#
  • scripting,
  • GUI
— Avoid white hot domains popular with young bright guys … too competitive, but if you can cope with the competition, then it could keep your brain young.
  • quant
  • cloud? but ask Zhao Bin
  • big data
  • machine learning

anagramIndexOf(): frqTable+slidingWindow

Q: Similar to java indexOf(string pattern, string haystack), determine the earliest index where any permutation of pattern starts.

====analysis

https://github.com/tiger40490/repo1/blob/py1/py/algo_str/anagramIndexOf.py is my tested solution featuring

  • O(H+P) where H and P are the lengths
  • elegant sliding window with frequency

Aha — worst input involves only 2 letters in haystack and pattern. I used to waste time on the “average” input.

On the spot I had no idea, so I told the interviewer (pregnant) about the brute-force solution: generate all P! permutations and look for each one in the haystack.

Insight into the structure of the problem — the pattern can be represented by a frequency table, therefore, this string search is easier than regex!

Then I came up with a frequency table constructed for the pattern. Then I explained checking each position in the haystack. The interviewer was fine with O(H*P) so I implemented it fully, with only 5 minutes left. Basically my implementation was presumably slower than other candidates’.

A few hours later, I realized there’s an obvious linear-time sliding-window solution, but it would have taken more than the available time to implement. The interviewer didn’t hint at all that there was a linear-time solution.

— difficulty and intensity

I remember feeling extremely intense and extreme pressure after the interview, because of the initial panic. So on the spot I did my best. I improved over the brute force solution after calming down.

Many string search problems require DP or recursion-in-loop, and would be too challenging for me. This problem was not obvious and not easy for me, so I did my best.

I didn’t know how to deep clone a dict. The Indeed.com interviewers would probably consider that a serious weakness.

dev-till-70: 7external inputs

Context — professional (high or low end) programmer career till my 70’s. The #1 derailer is not physical health [3] but my eventual decline of “brain power” including …?

[3] CSY and Jenny Lu don’t seem to agree.

This discussion is kinda vague, and my own thoughts are likely limited in scope, not systematic. Therefore, external inputs are extremely useful. I posed the same questions to multiple friends

Q2: what can I do now given my dev-till-70 plan defined above.
Q1: how do I keep my brain healthy, and avoid harmful stress?

— Josh felt that the harmful stress in his job was worse in his junior years when he didn’t know the “big picture”. Now he feels much better because he knows the full context. I said “You are confident you can hold your end of the log. Earlier you didn’t know if you were good enough.”

— Grandpa gave the Marx example — in between intense research and writing, Marx would solve math problems to relax the brain. I said “I switch between algo problem solving and QQ knowledge”

— Alex V of MS — Ask yourself
Q: compare to the young grads, what job function, what problems can you handle better? My mental picture of myself competing against the young guys is biased against my (valuable) battlefield experience. Such experience is discounted to almost $zero in that mental picture!

When I told Alex my plan to earn a living as a programmer till 70, Alex felt I definitely need a technical specialization. Without it, I would have very little hope competing with people 40 years younger. I said I intend to remain a generalist. Alex gave some examples of skills younger people may not have the opportunity to learn.

  • low-latency c++
  • c++ memory mgmt
  • specific product knowledge
  • — I said
  • sockets
  • .. I have a few skillist blogposts related to this

— Sudhir
Mental gymnastics is good, like board games and coding practice and Marx’s math practice, but all of these are secondary to (hold your breath) … physical workout, including aerobic and strength training!

Grandpa said repeatedly the #1 key factor is physical health, though he didn’t say physical health affects brain capacity.

I told Sudhir that I personally enjoy outdoor exercise more than anything else. This is a blessing.

Also important is sleep. I think CSDoctor and grandpa are affected.

Sudhir hinted that lack of time affects sleep, workout and personal learning.

  • Me: I see physical exercise and sleep as fundamental “protections” of my brain. You also pointed out when we reach home we often feel exhausted. I wonder if a shorter commute would help create more time for sleep/workout and self-study. If yes, then is commute is a brain-health factor?
  • Sudhir: Absolutely, shorter commutes are always better, even if that means we can only afford smaller accommodation. Or look for a position that allows working remotely more frequently.

Sudhir also felt (due to current negative experience) an encouraging team environment is crucial to brain health. He said mental stress is necessary, but fear is harmful. I responded  “Startup might be better”.

— Jenny Lu felt by far the most important factor is consistent physical exercise to maintain /vitality/. She felt this is more important than mental exercise.

I said it is hard to maintain consistency. She replied that it is doable and necessary. See ##what U r good@: U often perceive as important2everyone

— Junli felt mental exercise and physical exercise are both important.

When I asked him what I can do to support dev-till-70, he identified several demand-side factors —

  • He mentioned 3 mega-trends — cloud; container; micro-service.
    • Serverless is a cloud feature.
  • He singled out Spring framework as a technology “relevant till our retirement time”

— CSY pointed out the risk of bone injury.

He said a major bone injury in old age can lead to immobility and the start of a series of declines in many body parts.

— XR’s supply/demand-oriented answer is simple– keep interviewing. He felt this is the single most effective thing I can do for dev-till-70.

— Chris Ma: ##[15]types@Work2slow brain aging #campout

freelist in pre-allocated object pool #DeepakCM

Deepak CM described a pre-allocated free-list used in his telecom system.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/lang_66mem/fixedSizedFreeList.cpp is my self-tested implementation. I am proud of the low-level details that I had to nail down one by one.

He said his system initialization could make 400,000 new() calls to allocate 400,000 dummy objects and put them into a linked list. Personally, my design /emulates/ it with a single malloc() call. This is at startup time.

During the day, each new msg will overwrite [1] a linked node retrieved at the Head of the slist.

[1] using operator=(). Placement-new would be needed if we use a single malloc()

Every release() will link in the node at the Tail of the slist. Can we link it in at the Head instead? I think so. Benefit — it would leave a large chunk of contiguous free space near the tail. Reduced fragmentation.

Illustration — Initially the physical addresses in the slist are likely consecutive like addr1 -> addr2 -> addr3…. After some release() calls, it would look like a random sequence.

Using return-to-Head, I would get

  • pop, pop, pop: 4->5->..
  • rel 2: 2->4->5..
  • pop: 4->5->….
  • rel 1: 1->4->5…

— The API usage:

  • void ffree(void *)
  • void * fmalloc(size_t), possibly used by placement new

TreeNode lock/unlocking #Rahul,DeepakCM

Q: given a static binary tree, provide an O(H) implementation of lock/unlock subject to one rule — before committing a lock/unlock on any node AA, we must check that all of AA’s subtree nodes are currently in the “free” state, i.e. already unlocked. If the check fails, we must abort the operation.

H:= tree height

====analysis

— my O(H) solution, applicable on any k-ary tree.

Initial tree must be fully unlocked. If not, we need pre-processing.

Each node will hold a private hashset of “locked descendants”. Every time after I lock a node AA, I will add AA into the hashset of AA’s parent, AA’s grandparent, AA’s great-grandparent etc. Every time after I unlock AA, I will do the reverse, i.e. removal.

I said “AFTER locking/unlocking” because there’s a validation routine

bool canChange(node* AA){ return AA->lockedDescendants.empty(); } // to be used before locking/unlocking.

Note this design requires uplink. If no uplink available, then we can run a pre-processing routine to populate an uplink lookup hash table { node -> parent }

— simplified solution: Instead of a hashset, we may make do with a count, but the hashset provides useful info.

 

longest run@same char,allow`K replacements #70%

https://leetcode.com/problems/longest-repeating-character-replacement/

Q: Given a string s that consists of only uppercase English letters, you can perform at most k operations on that string. In one operation, you can choose any character of the string and change it to any other character. Find the length of the longest sub-string containing all repeating letters you can get after performing the above operations.

Here’s my description of the same problem:

Q: Suppose we stand by a highway and watch cars of each color. Only 26 possible colors. Cars pass fast, so sometimes we miscount.

My son says “I saw 11 red cars in a row in the fast lane”.
My daughter says “I saw 22 blue cars in a row in the middle lane”
We allow kids to miss up to 3 cars in their answer. In other words, my son may have seen only 8, 9 or 10 red cars in a row.

When we review the traffic video footage of N cars in a single lane, determine the max X cars in a row of the same color, allowing k mistakes. K < N.
====analysis
Suppose k is 3

— solution 1: O(N) use 2 variables to maintain topFrq and w i.e. winSize

Within a sliding window of size w, maintain a frq table. Initialize w to a conservative value of 4 (i.e. k+1).

If we notice the top frequency satisfies w-k <= topFrq, then we are lucky: we can be less conservative and expand the current window backward (possibly safer than forward).

After expansion, immediately try further expansion. Iff expansion is impossible, i.e. w - topFrq > k, then slide the window.

If the correct answer is 11, i.e. there’s an 11-substring containing 8 reds, I feel my sliding window will not miss it.

key^payload: realistic treeNode #hashset

I think only SEARCH trees have keys. “Search” means key search. In a search tree (and hashset), each node carries a (usually unique) key + 0 or more data fields as payload.

Insight — Conceptually the key is not part of the payload. Payload implies “additional information”. In contrast, key is part of the search tree (and hashset) infrastructure, similar to auto-increment record IDs in database. In many systems, the key also has natural meaning, unlike auto-incremented IDs.

Insight — If a tree has no payload field, then useful info can only exist in the key. This is fairly common.

For a treemap or hashmap, the key/value pair can be seen as “key+payload”. My re_hash_table_LinearProbing.py implementation saves a key/value pair in each bucket.

A useful graph (including tree , linked list, but not hashset) can have payloads but no search keys. The graph nodes can hold pointers to payload objects on heap [1]. Or The graph nodes can hold Boolean values. Graph construction can be quite flexible and arbitrary. So how do you select a graph node? Just follow some iteration/navigation rules.

Aha — similar situation: linked list has well-defined iterator but no search key . Ditto vector.

[1] With pre-allocation, a static array substitutes for the heap.

Insight — BST, priorityQ, and hash tables need search key in each item.

I think non-search-TREES are rarely useful. They mostly show up in contrived interview questions. You can run BFT or pre|post-order DFT, but not BFS/DFS, since there’s nothing to “search”.

Insight — You may say search by payload, but some trees have Boolean payloads.

std::move(): robbed object still usable !

Conventional wisdom says after std::move(obj2), this object is robbed and invalid for any operation…. Well, not quite!

https://en.cppreference.com/w/cpp/utility/move specifies exactly what operations are invalid. To my surprise, a few common operations are still valid, such as this->clear() and this->operator=().

The way I read it — if (but not iFF) an operation wipes out the object content regardless of current content, then this operation is valid on a robbed/hollowed object like our obj2.

Crucially, any robbed object should never hold a pointer to any “resource”, since that resource is now used by the robber object. Most known movable data types hold such pointers. The classic implementation (and the only implementation I know) is by pointer reseat.