JVM object referencing: does GC require indirection@@

For low-latency systems, any indirection in the critical path is a huge cost. That’s why virtual functions are often de-virtualized to avoid dynamic dispatch.

Is there any indirection when GC relocates a live object?

java≠a natural choice for low latency

I think java could deliver similar latency numbers to c/c++, but the techniques are probably unnatural to java:

  • STM — really low-latency systems should use single-threaded mode (STM). STM is widely used and well proven. Concurrency is java’s biggest advantage but unfortunately not effective in low-latency work.
  • DAM — (dynamically allocated memory) needs strict control, but DAM usage permeates mainstream java.
  • arrays — Latency engineering favors contiguous-memory arrays rather than object graphs such as hash tables, lists, trees, or arrays of heap pointers… C pointers were designed around tight integration with arrays, and subsequent languages have all moved away from arrays. Programming with raw arrays in java is unnatural.
    • struct — Data structures in C have a second dimension beside arrays, namely structs. Like arrays, structs are very compact, wasting no memory, and can live on heap or off heap. In java, this would translate to a class with only primitive fields. Such a class is unnatural in java.
  • GC — Low latency doesn’t like a garbage collector thread that can relocate objects. I don’t feel confident discussing this topic, but I feel GC is a handicap in the latency race. Suppressing GC is unnatural for a GC language like java.

My friend Qihao commented —

There are more management barriers than technical barriers to low-latency java. One common example is the “suppressing GC is unnatural” issue.

mgr role stress: app-ownership

Someone like Josh, Larry … needs to be on the ball and keep track of the large number of (big or small [1]) changes affecting “his baby”.

In contrast, as a foot soldier I only have an obligation (professional responsibility) to keep an eye on my project or my module. Some foot soldiers are eager to learn, but usually their scope of responsibility is much smaller.

Stay on top of “everything” — as a parent I have to be on top of everything about my kids and my house.

LocalSys — If I don’t understand one of many things in and outside my team’s applications, I don’t need to struggle to find out and clarify my understanding.

[1] it’s not always straightforward to determine if a change is big or small. The app owner needs to quickly estimate the impact of each change. Sometimes the estimate is inaccurate. There’s a bit of risk.

scan from both ends, keeping min/max

I feel a reusable technique is

  • scan the int array from left and keep track of the minimum so far. Optionally save it in an array
  • scan the int array from right and keep track of the maximum so far. Optionally save it in an array
  • save the element-wise difference of these two arrays in a third array

With these three shadow arrays, many problems can be solved visually and intuitively, in linear time.

eg: max proceeds selling 2 homes #Kyle

How about the classic max profit problem?

How about the water container problem?
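To make the technique concrete, here is a minimal Python sketch (the function name `max_profit` is mine) applying the left-min shadow array to the classic single-transaction max-profit problem:

```python
def max_profit(prices):
    """Best profit from one buy + one later sell, via shadow arrays:
    left_min[i]  = min(prices[0..i])     (scan from left)
    right_max[i] = max(prices[i..n-1])   (scan from right)
    Answer = max over i of right_max[i] - left_min[i].  O(n) time."""
    n = len(prices)
    if n == 0:
        return 0
    left_min = [0] * n
    right_max = [0] * n
    left_min[0] = prices[0]
    for i in range(1, n):
        left_min[i] = min(left_min[i - 1], prices[i])
    right_max[n - 1] = prices[n - 1]
    for i in range(n - 2, -1, -1):
        right_max[i] = max(right_max[i + 1], prices[i])
    # the third shadow array is the element-wise difference
    return max(right_max[i] - left_min[i] for i in range(n))
```

For example, `max_profit([7, 1, 5, 3, 6, 4])` returns 5 (buy at 1, sell at 6).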

sorting J integers, each in [1,N]

Q: What’s the time complexity of sorting J integer scores, all in [1, N] and possibly non-unique?

This is classic counting sort. My book [[algo in a nutshell]] incorrectly says O(J). Incorrect when N is large.

My analysis below uses a total of three dimensions N, J and K, where N and J can be vastly different, and K <= min(N, J), but K could be much smaller than either.

====analysis

Textbook counting sort is listed at the bottom. If N is smaller than J, then O(N+J) is dominated by O(J). Now suppose N is huge (like the revenue figure of a company).

Suppose there are K distinct scores among the J scores. I would want a constant-time translation from each distinct score to a distinct counter. I wish there were a perfect hash from the K scores to K buckets, but I think that’s impossible since the K distinct values are not known in advance. Even with imperfect hashing, I can get O(1) translation from each score value to a distinct counter. I will iterate over the J scores. For each,

  • look up the corresponding counter in the hash table.
  • If the score is new i.e. missing from the hash table,
    • then create a counter
    • add the “pair” {score -> counter} into the hash table.
    • Also insert the score into a min-heap
  • increment the counter

O(K logK) : Now pop the min-heap K times, each time getting a distinct score in ascending order. Look up the score in the hash table to get its counter. If the counter for score 55 holds 3, then output 55 three times. This output would be a sorted sequence of the original J scores.

— comparing with alternatives: Mine is O(J + K logK). If K*logK exceeds N, then I would fall back to standard counting sort.

The comparison-based sort would be O(J logJ), inferior to mine if K is much smaller than J.

The textbook counting sort would be O(J + N) , using N independent counters.
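The O(J + K logK) procedure above can be sketched in Python (a hedged illustration, not production code; `sort_scores` is my name):

```python
import heapq

def sort_scores(scores):
    """Counting-sort variant for huge N: O(J + K logK), where K is the
    number of distinct values among the J scores."""
    counter = {}   # score -> occurrence count (the O(1) translation)
    heap = []      # min-heap of the K distinct scores
    for s in scores:
        if s not in counter:       # score is new: create a counter,
            counter[s] = 0         # register the pair {score -> counter},
            heapq.heappush(heap, s)  # and insert the score into the min-heap
        counter[s] += 1
    out = []
    while heap:                    # K pops, each O(logK)
        s = heapq.heappop(heap)    # distinct scores in ascending order
        out.extend([s] * counter[s])
    return out
```

E.g. `sort_scores([55, 3, 55, 55, 3, 9])` yields `[3, 3, 9, 55, 55, 55]`.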

range bump-up@intArray 60% #Ashish

https://www.hackerrank.com/challenges/crush/problem Q: Starting with a 1-indexed array of zeros and a list of operations, for each operation add a “bump-up” value to each of the array elements between two given indices, inclusive. Once all operations have been performed, return the maximum in your array.

For example, given an array of 10 zeros, your list of 3 operations is as follows:

    a b k
    1 5 3
    4 8 7
    6 9 1

Add the values of k between the indices a and b inclusive:

index->	 1 2 3  4  5 6 7 8 9 10
	[0,0,0, 0, 0,0,0,0,0, 0]
	[3,3,3, 3, 3,0,0,0,0, 0]
	[3,3,3,10,10,7,7,7,0, 0]
	[3,3,3,10,10,8,8,8,1, 0]

The largest value is 10 after all operations are performed.

====analysis

This (contrived) problem is similar to the skyline problem.

— Solution 1 O(minOf[N+J, J*logJ ] )

Suppose there are J=55 operations. Each operation is a bump-up by k, on a subarray. The subarray has left boundary = a, and right boundary = b.
Step 1: Sort the left and right boundaries. This step is O(N+J) by counting sort, or O(J logJ) by comparison sort. A conditional implementation can achieve O(minOf[N+J, J*logJ ] )

In the example, after sorting, we get 1 4 5 6 8 9.

Step 2: one pass through the sorted boundaries. This step is O(J).
Aha — the time complexity of this solution boils down to the complexity of sorting J small positive integers whose values are at most N.
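As a side note, the O(N+J) branch can also be reached directly with the standard difference-array technique commonly used on this HackerRank problem — bump at a, cancel at b+1, then one prefix-sum pass. A minimal Python sketch (`max_after_bumps` is my name):

```python
def max_after_bumps(n, operations):
    """Difference-array solution, O(n + J) overall.
    Each (a, b, k) op touches only two slots; one prefix-sum pass
    reconstructs the running value and tracks the maximum."""
    diff = [0] * (n + 2)        # 1-indexed; slot n+1 absorbs b+1 overflow
    for a, b, k in operations:
        diff[a] += k            # value rises by k starting at index a
        diff[b + 1] -= k        # ...and falls back after index b
    best = running = 0
    for i in range(1, n + 1):
        running += diff[i]      # running == final value at index i
        best = max(best, running)
    return best
```

On the example above, `max_after_bumps(10, [(1, 5, 3), (4, 8, 7), (6, 9, 1)])` returns 10.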

3overhead@creating a java stackframe]jvm #Qihao

  • additional assembly instruction to prevent stack overflow… https://pangin.pro/posts/stack-overflow-handling mentions 3 “bang” instructions for each java method, except some small leaf methods
  • safepoint polling, just before popping the stackframe
  • (If the function call receives more than 6 arguments) the first 6 args go in registers and the remaining args on the stack. The ‘mov’ to the stack involves more instructions than a register move. The subsequent retrieval from the stack likely hits L1 cache, slower than a register read.

age40-50career peak..really@@stereotype,brainwash,

stereotype…

We all hear (and believe) that the 40-50 period is “supposed” to be the peak period in the life of a professional man. This expectation is created by the mass-media (and social-media, such as LinkedIn) brainwash that presents middle-aged managers as the norm. If not a “manager”, then a technical architect or a doctor.

[[Preparing for Adolescence]] illustrates the peer pressure (+self-esteem stress) felt by the adolescent. I feel a déjà vu. The notion of “normal” and “acceptable” is skewed by peer pressure.

Q: Out of 100 middle-aged (professional or otherwise) guys, how many actually reach the peak of their career in their 40’s?
A: Probably below 10%.

In my circle of 40-somethings, the norm is plateau or slow decline, not peak. The best we could do is keep up our effort and slow down the decline, be it wellness, burn rate, learning capacity, income…

It’s therefore hallucinatory to feel left behind on the slow track.

Q: at what age did I peak in my career?
A: I don’t want to overthink this question. Perhaps towards the end of my first US era, in my late 30s.

I think middle-aged professional guys should read [[reconciliations]] by Theodore Rubin. The false expectation creates immense burden.

const data member initialization: simple on the surface

The well-known Rule 1 — a const data member must be initialized exactly once, no more no less.

The lesser-known Rule 2 — for class-type data member, there’s an implicit default-initialization feature that can kick in without us knowing. This default-init interacts with ctor initializer in a strange manner.

On a side note, [[safeC++]] P38 makes clever use of Rule 2 to provide primitive wrappers. If you use such a wrapper in place of a (non-const) primitive field, then you eliminate the operational risk of “forgetting to initialize a non-const primitive field”.

The well-known Rule 3 — the proper way to explicitly initialize a const field is the ctor initializer, not inside ctor body.

The lesser-known Rule 4 — at run-time, once control passes into the ctor body, you can only modify/edit an already-initialized field. Illegal for a const field.

To understand these rules, I created an experiment in https://github.com/tiger40490/repo1/blob/cpp1/cpp/lang_misc/constFieldInit.cpp

— for primitive fields like int, Rule 2 doesn’t apply, so we must follow Rule 1 and Rule 3.

— for a class-type field like “Component”,

  • We can either leave the field “as is” and rely on the implicit Rule 2…., or
  • If we want to initialize explicitly, we must follow Rule 3. In this case, the default-init is suppressed by the compiler.

In either case, there’s only one initialization per const field (Rule 1)

joinable instance of std::thread

[[effModernC++]] P 252 explains why in c++ joinable std::thread objects must not get destroyed. Such a destruction would trigger std::terminate(); therefore, programmers must make their std::thread objects non-joinable before destruction.

The key is a basic understanding of “joinable”. Informally, I would say a joinable std::thread has a real thread attached to it, even if that real thread has finished running. https://en.cppreference.com/w/cpp/thread/thread/joinable says “A thread that has finished executing code, but has not yet been joined is still considered an active thread of execution and is therefore joinable.”

An active std::thread object becomes unjoinable

  • after it is joined, or
  • after it is detached, or
  • after it is “robbed” via std::move()

The primary mechanism to transition from joinable to unjoinable is via join().

std::thread key points

A Java thread needs start() before it becomes eligible to run, but a c++ std::thread becomes eligible immediately after it is initialized with its target function.

For this reason, [[effModernC++]] dictates that between an int field and a std::thread field in a given class Runner, the std::thread field should be declared last, so it is initialized last in the constructor. The int field needs to be already initialized if it is needed in the new thread.

Q1: Can you initialize the std::thread field in the constructor body?
A: yes, unless the std::thread field is declared const

Now let’s say there’s no const field.

Q2: can the Runner copy ctor initialize the std::thread field in the ctor body, via move()?
A: yes provided the ctor parameter is non-const reference to Runner.
A: no if the parameter is a const reference to Runner. move(theConstRunner) would evaluate to a const rvalue reference, which cannot bind to std::thread’s move operations. std::thread ctor and op= only accept a non-const rvr, because std::thread is move-only

See https://github.com/tiger40490/repo1/tree/cpp1/cpp/sys_thr for my experiments.

2011 white paper@high-perf messaging

https://www.informatica.com/downloads/1568_high_perf_messaging_wp/Topics-in-High-Performance-Messaging.htm is a 2011 white paper by some experts. I have saved the html in my google drive. Here are some QQ + zbs knowledge pearls. Each sentence in the article can expand to a blogpost .. thin->thick.

  • Exactly under what conditions would TCP provide low-latency
  • TCP’s primary concern is bandwidth sharing, to ensure “pain is felt equally by all TCP streams”. Consequently, a latency-sensitive TCP stream can’t have priority over other streams.
    • Therefore, one recommendation is to use a dedicated network having no congestion or controlled congestion. Over this network, the latency-sensitive system would not be victimized by the inherent speed control in TCP.
  • to see how many received packets are delayed (on the receiver end) due to out-of-sequence (OOS) arrival, use netstat -s
  • TCP guaranteed delivery is “better later than never”, but latency-sensitive systems prefer “better never than late”. I think UDP is the choice.
  • The white paper features an in-depth discussion of group rate. Eg: one mkt data sender feeding multiple (including some slow) receivers.

 

analyzing my perception of reality

Using words and numbers, I am trying to “capture” my perceptions (intuitions + observations + a bit of insight) of the c++/java job market trends, past and future. There’s some reality out there, but each person, including the expert observer, has only a limited view of that reality, based on limited data.

Those numbers look impressive, but actually similar to the words — they are mostly personal perceptions dressed up as objective measurements.

If you don’t use words or numbers then you can’t capture any observation of the “reality”. Your impression of that reality [1] remains hopelessly vague. I now believe vague is the lowest level of comprehension, usually as bad as a biased comprehension. Using words + numbers we have a chance to improve our perception.

[1] (without words you can’t even refer to that reality)

My perceptions shape my decisions, and my decisions affect my family’s life chances.

My perceptions shape my selective listening. Gradually, actively, my selective listening would modify my “membrane” of selective listening! All great thinkers, writers update their membrane.

I am not analyzing reality. Instead, I am basically analyzing my perception of the reality, but that’s the best I could do. I’m good at analyzing myself as an object.

Refusing to plan ahead because of high uncertainty is lazy, is pessimistic, is doomed.

latency zbs in java: lower value cf c++@@

Warning — latency measurement gotchas … is zbs but not GTD or QQ

— My tech bet — Demand for latency QQ will remain higher in c++ than java

  • The market’s perception would catch up with reality (assuming java is really no slower than c++), but the catch-up could take 30 years.
  • the players focused on latency are unused to the interference [1] by the language. C++ is more free-wheeling
  • Like assembly, c++ is closer to hardware.
  • In general, by design Java is not as natural a choice for low latency as c++ is, so even if java can match c++ in performance, it requires too much tweaking.
  • related to latency is efficiency. java is a high-level language and less efficient at the low level.

[1] In the same vein, (unlike UDP) TCP interferes with data transmission rate control, so even if I control both sender and receiver, I still have to cede control to TCP, which is a kernel component.

— jvm performance tuning is mainstream and socially meaningful iFF we focus on
* machine saturation
* throughput
* typical user-experience response time

— In contrast, a narrow niche area is micro-latency as in HFT

After listening to FPGA, off-heap memory latency … I feel the arms race of latency is limited to high-speed trading only. Latency technology has limited economic value compared to mobile, cloud, cryptocurrency, or even data science and machine learning.

Churn?

accu?

 

find all subsums divisible by K #Altonomy

Q: modified slightly from Leetcode 974: Given an array of signed integers, print all (contiguous, non-empty) subarrays having a sum divisible by K.

https://github.com/tiger40490/repo1/blob/py1/py/algo_arr/subarrayDivisibleByK.py is my one-pass, linear-time solution. I consider this technique an algoQQ. Without prior knowledge, an O(N) solution is inconceivable.

I received this problem in an Altonomy hackerrank. I think Kyle gave me this problem too.

===analysis

Sliding window? I didn’t find any use.

Key idea — build a data structure to remember cumulative sums, and remember all the positions hitting a given “cumsum level”. My homegrown solution kSub3() shows more insight!
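I won’t reproduce kSub3() here, but a common one-pass sketch of the same key idea: group prefix-sum positions by their remainder mod K; any two positions sharing a remainder delimit a qualifying subarray. Counting is O(N), but enumerating every subarray is output-sensitive (the answer list itself can be quadratic). A hedged Python sketch (function name is mine):

```python
from collections import defaultdict

def subarrays_divisible_by_k(arr, k):
    """Return (start, end) pairs, end exclusive, of every contiguous
    non-empty subarray whose sum is divisible by k.
    Positions i < j with equal prefix-sum remainders delimit arr[i:j]."""
    positions = defaultdict(list)   # remainder -> prefix positions hit
    positions[0].append(0)          # the empty prefix
    prefix = 0
    result = []
    for j, x in enumerate(arr, start=1):
        prefix = (prefix + x) % k   # Python % is non-negative, so
        for i in positions[prefix]:  # negative ints need no special care
            result.append((i, j))
        positions[prefix].append(j)
    return result
```

On the Leetcode 974 example `subarrays_divisible_by_k([4, 5, 0, -2, -3, 1], 5)` yields 7 subarrays.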

enumerate()iterate py list/str with idx+val

The built-in enumerate() is a nice optional feature. If you don’t want to remember this simple syntax, then yes you can just iterate over range(len(the_sequence)) (xrange in Python 2).

https://www.afternerd.com/blog/python-enumerate/#enumerate-list is illustrated with examples.

— to enumerate backward,

Since enumerate() returns a lazy iterator, which can’t be reversed directly, you need to convert it to a list first.

for i, v in reversed(list(enumerate(vec))):
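A complete runnable version of the backward enumeration:

```python
vec = ['a', 'b', 'c']
# enumerate() yields a lazy iterator, which reversed() cannot consume,
# so materialize it into a list first
pairs = list(reversed(list(enumerate(vec))))
for i, v in pairs:
    print(i, v)   # prints 2 c, then 1 b, then 0 a
```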

c++nlg pearls: xx new to refresh old 知新而后温故

Is this applicable in java? I think so, but my focus here is c++.

— 温故而知新 (reviewing the old to learn the new) is less effective at my level. thick->thin, reflective.

— 知新而后温故 (learning the new, then revisiting the old) — x-ref, thin->thick->thin learning.

However, the pace of learning new knowledge pearls can appear very slow and disappointing: 5% new learning + 95% refresh. In such a case, the main benefit and goal is the refresh. Patience and realistic expectations are needed.

In some situations, the most effective learning is 1% new and 99% refresh. If you force yourself to 2% new and 98% refresh, learning would be less effective.

This technique is effective with distinct knowledge PEARLS. Each pearl can be based on a sentence in an article but developed into a blogpost.

 

non-volatile field can have volatile behavior #Qihao

Unsafe.getObjectVolatile() and setObjectVolatile() should be the only access to the field.

I think for an integer or bool field (very important use cases), we need to use Unsafe.putIntVolatile() and Unsafe.getIntVolatile()

Q: why not use a volatile field?
A: I guess in some designs, a field need not be volatile at most access points, but at one access point it needs to behave like a volatile field. Qihao agrees that we want to control when to insert a load/store fence.

Non-volatile behavior usually has lower latency.

 

half%%peers could be forced into retirement

Reality — we are living longer and healthier.

Observation — compared to old men, old women tend to have more of a social life and more involvement with grandchildren.

I suspect that given a choice, half the white-collar guys in my age group actually wish to keep working past 65 (or 70), perhaps at a lower pace. In other words, their eventual retirement will not be by choice. My reasoning for the suspicion — besides financial needs, many in this group do not have enough meaningful, “engaging” things to do. Many would suffer.

It takes long-term planning to stay employed past 65.

I think most of the guys in this category do not prepare well in advance and will find themselves unable to find a suitable job. (We won’t put it this way, but) they will be kinda forced into early retirement. The force could be health or in-demand skillset or …

keen observer@SG workers in their 40s

“Most of them in the 40s are already stable and don’t want to quit. Even though the pay may not be so good, they’re willing to work all the way. It’s an easy-going life.”

The observer was comparing across age groups. I think “all the way” means no change of direction, not giving in to boredom, sticking to the chosen career despite occasional challenges.

This description is becoming increasingly accurate about me… semi-retired on the back of my passive income streams.

local variables captured in nested class #Dropcopy

If an (implicitly final) local variable [1] is captured inside a nested class, where is the variable saved?

https://stackoverflow.com/questions/43414316/where-is-stored-captured-variable-in-java explains that the anonymous or local class instance has an implicit field to hold the captured variable!

[1] the local variable can be an arg passed into the enclosing function. It could be a primitive type or a reference type, i.e. a heapy thingy.

The java compiler secretly adds this hidden field. Without this field, a captured primitive would be lost and a captured heapy would be unreachable when the local variable goes out of scope.

A few hours later, when the nested class instance needs to access this data, it has to rely on the hidden field.

 

lambda^anon class instance ] java

A java lambda expression is used very much like an instance of an anonymous class. However, http://tutorials.jenkov.com/java/lambda-expressions.html#lambda-expressions-vs-anonymous-interface-implementations pointed out one interesting difference:

The anonymous instance in the example has a named field. A lambda expression cannot have such fields. A lambda expression is thus said to be stateless.

get collection sum after K halving operations #Ashish

Q: given a collection of N positive integers, you perform K operations like “halve the biggest element and replace it with the ceiling of that half”. Find the collection sum afterwards.

Note the collection size is always N. Note K (like 5) could exceed N (like 2), but I feel it would be trivial.

====analysis====

This is a somewhat contrived problem.

I think O(N + K log min(N,K)) is pretty good if feasible.
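A max-heap makes the K operations straightforward — O(N) to heapify plus O(K logN) for the pops and pushes, close to the bound above. A hedged Python sketch (heapq is a min-heap, so values are negated; the function name is mine):

```python
import heapq

def sum_after_k_halvings(nums, k):
    """Pop the max, push back the integer ceiling of its half, K times.
    O(N) heapify + O(K logN) for the K operations."""
    heap = [-x for x in nums]     # negate: min-heap acts as max-heap
    heapq.heapify(heap)
    for _ in range(k):
        biggest = -heapq.heappop(heap)
        heapq.heappush(heap, -((biggest + 1) // 2))  # ceil(biggest/2)
    return -sum(heap)
```

E.g. with [10, 7] and K=3: 10→5, 7→4, 5→3, leaving {4, 3} with sum 7.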

git | merge-commits and pull-requests

Key question — Q1: which commit would have multiple parents?

— scenario 1a:

  1. Suppose your feature branch brA has a commit hash1 at its tip; and master branch has tip at hashJJ, which is the parent of hash1
  2. Then you decide to simply q[ git merge brA ] into master

In this simple scenario, your merge is a fast-forward merge. The updated master would now show hash1 at the tip, whose only parent is hashJJ.

A1: No commit would have multiple parents. Simple result. This is the default behavior of git-merge.

Note this scenario is similar to https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-request-merges#rebase-and-merge-your-pull-request-commits

However, the github or bit-bucket pull-request flow doesn’t support it exactly.

— scenario 1b:

Instead of a simple git-merge, what about a pull request? A pull-request uses q[ git merge --no-ff brA ] which (I think) unconditionally creates a merge-commit hashMM on master.

A1: now hashMM has two parents. In fact, git-log shows hashMM as a “Merge” with two parent commits.

Result is unnecessarily complex. Therefore, in such simple scenarios, it’s better to use git-merge rather than pull request.

https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-request-merges explains the details.

— Scenario 2: What if ( master’s tip ) hashJJ is Not parent of hash1?

Now master and brA have diverged. I think you can’t avoid a merge commit hashMM.

A1: hashMM

— Scenario 3: continue from Scenario 1b or Scenario2.

3. Then you commit on brA again, creating hash2.

Q: What’s the parent node of hash2?
A: I think git actually shows hash1 as the parent, not hashMM!

Q: is hashMM on brA at all?
A: I don’t think so but some graphical tools might show hashMM as a commit on brA.

I think now the master branch shows hashMM having two parents (hash1 + the previous master tip), and brA shows hash1 -> hash2.

I guess that if after the 3-way-merge, you immediately re-create (or reset) brA from master, then hash2’s parent would be hashMM.


Note

  • direct-commit on master is implicitly fast-forward, but merge can be fast-forward or non-fast-forward.
  • fast-forward merge can be replaced by a rebase as in Scenario 1a. Result is same as direct-commit.
  • a no-ff merge-commit on a fast-forwardable branch (Scenario 1b) and a 3-way merge (Scenario 2) both create a merge-commit.
  • git-pull includes a git-merge without --no-ff

Optiver coding hackathon is like marathon training

Hi Ashish,

Looking back at the coding tests we did together, I feel it’s comparable to a form of “marathon training” — I seldom run longer than 5km, but once a while I get a chance to push myself way beyond my limits and run far longer.

Extreme and intensive training builds up the body capacity.

On my own, it’s hard to find motivation to run so long or practice coding drill at home, because it requires a lot of self-discipline.

Nobody has unlimited self-discipline. In fact, those who run so much or take on long-term coding drills all have something besides self-discipline. Self-discipline and brute-force willpower are insufficient to overcome the inertia in every one of these individuals. Instead, the invisible force, the wind beneath their wings, is some form of intrinsic motivation. These individuals find joy in the hard drill.

( I think you are one of these individuals — I see you find joy in lengthy sessions of jogging and gym workout. )

Without enough motivation, we need “organized” practice sessions like real coding interviews or hackathons. This Optiver coding test could probably improve my skill level from 7.0 to 7.3, in one session. Therefore, these sessions are valuable.

[18] latency in a typical broker DMA box

(This topic is not GTD not zbs, but relevant to some QQ interviewers.)

https://www.youtube.com/watch?v=BD9cRbxWQx8 is a 2018 presentation.

  1. AA is when a client order hits a broker
  2. Between AA and BB is the entire broker DMA engine in a single process, which parses the client order, maintains order state, consumes market data and creates/modifies the outgoing FIX msg
  3. BB is when the broker ships the FIX msg out to exchange.

Edge-to-edge latency from AA to BB, if implemented in a given language:

  • python ~ about 50 times longer than java
  • java – can aim for 10 micros if you are really really good. Dan recommends java as a “reasonable choice” iFF you can accept 10+ micros. Single-digit microsecond shops should “take a motorbike not a bicycle”.
  • c# – comparable to java
  • FPGA ~ about 1 micro
  • ASIC ~ 400 ns

— c/c++ can only aim for 10 micros … no better than java.

The stronghold of c++, the space between java and fpga, is shrinking … “constantly” according to Dan Shaya. I think “constantly” is like the growth of Everest.. perhaps by 2.5 inches a year

I feel c++ is still much easier, more flexible than FPGA.

I feel java programming style would become more unnatural than c++ programming in order to compete with c++ on latency.

— IPC latency

Shared memory beats TCP hands down. For an echo test involving two processes:

Using an Aeron-based messaging application, 50th percentile is 250 ns. I think NIC and possibly kernel (not java or c++) are responsible for this latency.

sponsored DMA

Context — a buy-side shop (say HRT) uses a DMA connection sponsored by a sell-side like MS (or Baml or Instinet) to access NYSE. MS provides a DMA platform like Speedway.

The HRT FIX gateway would implement the NYSE FIX spec. Speedway also has a FIX spec for HRT to implement. This spec should include minor customization of the NYSE spec.

I have seen the HPR spec. (HPR is like an engine running in Baml or GS or whatever.) The HPR spec seems to talk about customization for NYSE, Nsdq etc … re Gary chat.

Therefore, the HRT FIX gateway to NYSE must implement, in a single codebase,

  1. NYSE spec
  2. Speedway spec
  3. HPR spec
  4. Instinet spec
  5. other sponsors’ spec

The FIX session would be provided (“sponsored”) by MS or Baml, or Instinet. I think the HRT FIX gateway would connect to some IP address belonging to the sponsor like MS. Speedway would forward the FIX messages to NYSE, after some risk checks.

VWAP=bmark^execAlgo

In the context of broker algos (i.e. execution algos offered by a broker), vwap is

  • A benchmark for a bulk order
  • An execution algo aimed at the benchmark. The optimization goal is to minimize slippage against this benchmark. See other blogposts about slippage.

The vwap benchmark is simple, but the vwap algo implementation is non-trivial, often a trade secret.

Avichal: too-many-distractions

Avichal is observant and sat next to me for months. Therefore I value his judgment. Avichal is the first to point out I was too distracted.

For now, I won’t go into details on his specific remarks. I will simply use this simple pointer to start a new “thread”…

— I think the biggest distraction at that time was my son.

I once (never mind when) told grandpa that I want to devote 70% of my energy to my job (and 20% to my son), but now whenever I want to settle down and deep-dive into my work, I feel the need and responsibility to adjust my schedule, cater to my son, and try to entice him to study a little bit more.

My effort on my son is like driving uphill with the hand-brake on.

As a result, I couldn’t have a sustained focus.

gradle: dependency-jar refresh, cache, Intellij integration..

$HOME/.gradle holds all the jars from all previous downloads.

[1] When you turn on debug, you can see the actual download: gradle build --debug.

[2] Note IDE java editor can use version 123 of a jar for syntax check, but the command line compilation can use version 124 of the jar. This is very common in all IDEs.

When I make a change to a gradle config,

  • Intellij prompts for a gradle import. This seems to trigger an unnecessary re-download of all jars — very slow.
  • Therefore, I ignore the import. I think as a result, the IntelliJ java editor [2] would still use the previous jar version, as the old gradle config is in effect. I live with this because my focus is on the compilation.
  • For compilation, I use the gradle “build” action (probably similar to a command-line build). Very fast, but why? Because only one dependency jar is refreshed [3]
  • Gary used debug build [1] to prove that this triggers a re-download of specific jars iFF you delete the jars from $HOME/.gradle/caches/modules-2/files-2.1

[3] For a given dependency jar, “refresh” means download a new version as specified in a modified gradle config.

— in console, run

gradle build #there should be a ./build.gradle file

Is java/c# interpreted@@ #JIT

category? same as JIT blogposts

Q: are java and c# interpreted? QQ topic — academic but quite popular in interviews.

https://stackoverflow.com/questions/8837329/is-c-sharp-partially-interpreted-or-really-compiled shows one explanation among many:

The term “Interpreter” referencing a runtime generally means existing code interprets some non-native code. There are two large paradigms — parsing: read the raw source code and take logical actions; bytecode execution: first compile the code to a non-native binary representation, which requires much fewer CPU cycles to interpret.

Java originally compiled to bytecode, then went through an interpreter; now, the JVM reads the bytecode and just-in-time compiles it to native code. CIL does the same: The CLR uses just-in-time compilation to native code.

C# compiles to CIL, which JIT compiles to native; by contrast, Perl immediately compiles a script to a bytecode, and then runs this bytecode through an interpreter.

bone health for dev-till-70 #CSY

Hi Shanyou,

I have a career plan to work as a developer till my 70’s. When I told you, you pointed out bone health, to my surprise.

You said that some older adults suffer a serious bone injury and become immobile. As a result, other body parts suffer, including weight, heart, lung, and many other organs. I now believe loss of mobility is a serious health risk.

These health risks directly affect my plan to work as a developer till my 70’s.

Lastly, loss of mobility also affects our quality of life. My mom told me about this risk 20 years ago. She has since become less vocal about this risk.

Fragile bones become more common when we grow older. In their 70’s, both my parents suffered fractures and went through surgeries.

See ## strengthen our bones, reduce bone injuries #CSY for suggestions.

available time^absorbency[def#4]:2 limiting factors

see also ## identify your superior-absorbency domains

Time is a quintessential /limiting factor/ — when I try to break through and reach the next level on some endeavor, I often hit a /ceiling/ not in terms of my capacity but in terms of my available time. This is a common experience shared by many, therefore easy to understand. In contrast, a more subtle experience is the limiting factor of “productive mood” [1].

[1] This phrase is vague and intangible, so sometimes I speak of “motivation” — not exactly same and still vague. Sometimes I speak of “absorbency” as a more specific proxy.

“Time” is used as per Martin Thompson.

  • Specific xp: Many times I took leaves to attend an IV. The time + absorbency is a precious combination that leads to breakthrough in insight and muscle-building. If I only provide time to myself, most of the time I don’t achieve much.
    • I also take leave specifically to provide generic “spare time” for myself but usually can’t achieve the expected ROTI.
  • Specific xp: yoga — the heightened absorbency is very rare, far rarer than with jogging. If I provide time to myself without the absorbency, I won’t do yoga.
  • the zone — (as described in my email) i often need a block of uninterrupted hours. Time is clearly a necessary but insufficient condition.
  • time for workout — I often tell my friends that lack of time sounds like an excuse given the mini-workout option. Well, free time still helps a lot, but motivation is more important in this case.
  • localSys — absorbency is more rare here than coding drill, which is more rare than c++QQ which is more rare than java QQ
  • face time with boy — math practice etc.. the calm, engaged mood on both sides is very rare and precious. I tend to lose my cool even when I make time for my son.
  • laptop working at train stations — like MRT stations or 33rd St … to capture the mood. Available time by itself is useless

exec algo: with-volume

— WITH VOLUME
Trade in proportion to actual market volume, at a specified trade rate.

The participation rate is fixed.

— Relative Step — with a rate following a step-up algo.

This algo dynamically adjusts aggressiveness (participation rate) based on the relative performance of the stock versus an ETF. The strategy participates at a target percentage of overall market volume, adjusting aggressiveness when the stock is significantly underperforming (buy orders) or outperforming (sell orders) the reference security since today’s open.

An order example: “Buy 90,000 shares 6758.T with a limit price of ¥2500. Work order with a 10% participation rate, scaling up to 30% whenever the stock is underperforming the Nikkei 225 ETF (1321.OS) by 75 basis points or more since the open.”

If we notice the reference ETF has a 2.8% return since open and our 6758.T has a 2.05% return, then the engine would assume 6758.T is significantly underperforming its peers (in its sector). The engine would then step up the participation to 30%, buying more aggressively, perhaps using bigger and faster slices.

What if the ETF has dropped 0.1% and 6758.T has dropped 0.85%? This would be unexpected since our order is a large order boosting the stock. Still, the other investors might be dumping this stock. The engine would still perceive the stock as underperforming its peers, and step up the buying speed.
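The step-up rule in both scenarios can be sketched as a small function. This is my own illustration, not a real engine’s API — the function name, default rates and threshold are all taken from the example order above:

```cpp
// Illustrative sketch of the Relative-Step rule for a BUY order.
// Returns the participation rate (as a fraction of market volume):
// step up from baseRate to boostedRate when the stock underperforms
// the reference ETF by at least thresholdBps since today's open.
double participationRate(double stockReturn, double etfReturn,
                         double baseRate = 0.10, double boostedRate = 0.30,
                         double thresholdBps = 75.0) {
    // underperformance since open, converted to basis points
    double underperformanceBps = (etfReturn - stockReturn) * 10000.0;
    return underperformanceBps >= thresholdBps ? boostedRate : baseRate;
}
```

With the numbers above: ETF +2.8% vs stock +2.05% is 75bps of underperformance, so the rate steps up to 30%; ETF −0.1% vs stock −0.85% is also 75bps of underperformance, with the same effect.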

Y alpha-geeks keep working hard #speculation

Based on my speculation, hypothesis, imagination and a tiny bit of observation.

The majority of effective, efficient, productive tech professionals don’t work long hours, because they already earn enough. Some of them could retire if they wanted to.

Some percentage of them quit a big company or a high position, sometimes to join a startup. One of the reasons — already earned enough. See my notes on envy^ffree

Most of them value work-life balance. Half of them live this value rather than merely pay lip service to it.

Many of them still choose to work hard because they love what they do, or want to achieve more, not because they have no choice. See my notes on envy^ffree

by-level traversal: isBST #Ashish

Q: given a sequence of payload values produced from a by-level traversal of a binary tree, could the tree be a BST?

Ashish gave me this question. We can assume the values are floats.

====analysis

(Not contrived, not practical)

— idea 1:

Say we have lined up the values found on level 11. We will split the line-up into sections, each split point being a Level-10 value.

Between any two adjacent values on level 10 (previous level), how many current-level values can legally fit in? I would say up to 2.

In other words, each section can have two nodes or fewer. I think this is a necessary (sufficient?) condition.

— idea 2: one-pass algo

Construct a BST as we consume the sequence.

Aha — there’s only one possible BST we can build.
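Idea 2 can be sketched as follows (my own illustration, assuming distinct values): insert the values in the given order with standard BST insertion — each value has exactly one legal position, so only one BST can result — then verify that a by-level (BFS) traversal of that tree reproduces the input.

```cpp
#include <memory>
#include <queue>
#include <vector>

struct Node {
    double val;
    std::unique_ptr<Node> left, right;
    explicit Node(double v) : val(v) {}
};

// Standard BST insertion; duplicates not expected.
void insert(std::unique_ptr<Node>& root, double v) {
    if (!root) { root = std::make_unique<Node>(v); return; }
    insert(v < root->val ? root->left : root->right, v);
}

// Could `seq` be the by-level traversal of some BST?
bool isBstLevelOrder(const std::vector<double>& seq) {
    std::unique_ptr<Node> root;
    for (double v : seq) insert(root, v);   // builds the only possible BST

    std::vector<double> bfs;                // by-level traversal of that BST
    std::queue<Node*> q;
    if (root) q.push(root.get());
    while (!q.empty()) {
        Node* n = q.front(); q.pop();
        bfs.push_back(n->val);
        if (n->left)  q.push(n->left.get());
        if (n->right) q.push(n->right.get());
    }
    return bfs == seq;                      // must reproduce the input
}
```

The correctness hinges on the “aha” above: in a valid by-level sequence every node’s parent appears earlier, so level-order insertion reconstructs exactly one candidate tree, and the final comparison rejects everything else.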

 

(unordered)map erase: prefer by-key !! itr #Ashish

Relevant in coding tests like speed-coding, take-home, onsite. Practical knowledge is power !

As shown in https://github.com/tiger40490/repo1/blob/cpp1/cpp/lang_misc/mapEraseByVal.cpp

.. by-key is cleaner, not complicated by the iterator invalidation complexities.

You can save all the “bad” keys, and later erase one by one, without the invalidation concerns. You can also print the keys.

if you were to save the “bad” iterators, then once you erase one iterator, are the other iterators affected? No, but I don’t want to remember.

“STL iterator invalidation rules, succinctly” has a good summary, but I prefer not to deal with these complexities when I have a cleaner alternative solution.
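A minimal sketch of the by-key idiom (names are mine, not from the linked file): collect the “bad” keys in a first pass, then erase them one by one — no iterator from the first pass survives into the second, so invalidation rules never come into play.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Remove all entries with negative values, erasing by key.
void eraseBadEntries(std::unordered_map<std::string, int>& m) {
    std::vector<std::string> badKeys;   // pass 1: save keys only
    for (const auto& kv : m)
        if (kv.second < 0) badKeys.push_back(kv.first);

    for (const auto& k : badKeys)       // pass 2: erase by key
        m.erase(k);                     // no iterators involved
}
```

(Since C++11 you can also do a single loop with `it = m.erase(it);`, which returns the next valid iterator, but the two-pass by-key version is harder to get wrong under interview pressure.)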

std::pair mvctor = field-by-field

std::pair has no pointer field, so I thought it needs no meaningful mvctor; but actually the std::pair mvctor is defaulted, i.e. field-wise move: each field is moved.

If a pair holds a vector and a string, then the vector would be move-constructed, and so would the string.
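The field-wise move can be observed directly: after the move, the vector field’s heap buffer pointer carries over to the destination, proving no element-by-element copy happened. A minimal sketch (helper name is mine):

```cpp
#include <string>
#include <utility>
#include <vector>

using PVS = std::pair<std::vector<int>, std::string>;

// Move-construct a pair: the defaulted move-ctor moves each field,
// so the vector's heap buffer is stolen rather than copied.
PVS stealPair(PVS& src) {
    return PVS(std::move(src));   // field-wise move, no element copies
}
```

The vector’s move constructor is required to preserve pointers/iterators into the buffer (they now refer into the destination), which is what makes the pointer comparison below a valid check.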

Q1: So what kind of simple class would have no meaningful mvctor?
%%A: I would say a class holding no pointer whatsoever. Note it can embed another class instance as a field.

Q2: so why is std::pair not one of them?
A: std::pair is a template, so the /concrete/ field type can be a dynamic container including std::string.

All dynamic containers use pointers internally.

fine print]source code

Q1: “Here is 90% of the logic” — when is such documentation complete? Answered at the end.

When we programmers read source code and document the “business logic” implemented thereby, we are sometimes tempted to write “My write-up captures the bulk of the business logic. I have omitted minor details, but they are edge cases. At this stage we don’t need to worry about them”. I then hope I have not glossed over important details. I hope the omitted details are just fine print. I was proven wrong time and again.

Sound bite: source code is all details, nothing but details.

Sound bite: everything in source code is important detail until proven otherwise. The “proof” takes endless effort, so in reality, everything in source code is important detail.

The “business logic” we are trying to capture actually consists not only of features and functionalities, but also of functional fragments, i.e. the details.

When we examine source code, a visible chunk of code with explicit function names, variable names, or explicit comments is hard to miss. Those are the “easy parts”, but what about the tiny functional fragments … Perhaps a short condition buried in a complicated if/while conditional? Perhaps a seemingly useless catch block among many catches. Perhaps a break/continue statement that seems to serve no purpose? Perhaps some corner-case error-handling module that looks completely redundant and forgettable, esp. compared to other error handlers. Perhaps a missing curly brace after a for-loop header?

( How about the equal sign in “>=” … Well, that’s actually a highly visible fragment, because we programmers have trained vision to spot that “=” buried therein. )

Let me stress again. The visibility or code size of a functional fragment is no indication of its relative importance. A low-visibility, physically small functional fragment can be just as important as a more visible one.

To the computer, all of these functional fragments are equally significant. Each could have impact on a production request or real user.

Out of 900 such “functional fragments”, which ones deal with purely theoretical scenarios that would never arise (e.g. extraneous data)? We really don’t know without analyzing tons of production data. One minor functional fragment might get activated by a real production data item. Then the code gets executed unexpectedly, usually with immediate effect, but sometimes invisibly, because its effect is concealed by subsequent code.

I would say there is no fine print in executable source code. Conversely, every part of executable source code is fine print, including the most visible if/else. Every piece of executable code has a real impact, unless we use real production data to prove otherwise.

A1: good enough if you have analyzed enough production data to know that every omitted functional fragment is truly unimportant.

intellij=cleaner than eclipse !

intellij (the community version) is much cleaner than eclipse, and no less rich in features.

On a new job, my choice of java ide is based on
1) other developers in the team, as I need their support

2) online community support — as most questions are usually answered there
I think eclipse beats intellij

3) longevity — I hate to learn a java ide and lose the investment when it loses relevance.
I think eclipse beats intellij, due to open-source

) other factors include “clean”

The most popular tools are often vastly inferior for me. Other examples:
* my g++ install in strawberryPerl is better than all the windows g++ installs esp. msvs
* my git-bash + strawberryPerl is a better IDE than all the fancy GUI tools
* wordpress beats blogger.com hands down
* wordpad is a far simpler rich text editor than msword or browsers or mark-down editors

java value-based types #Optional.java

Optional.java is the only important example I know, so I will use it as illustration.

One of the main ideas about value types is that they have no notion of identity (or perhaps their identity is detectable only to JVM not Java applications). In such a world, how could we tell whether variables aa and bb “really” are the same or different?

Q: why avoid locking on value-based objects?
%%A: locking is based on identity. See why avoid locking on boxed Integers
A: https://stackoverflow.com/questions/34049186/why-not-lock-on-a-value-based-class

c++ has a smaller community and collective brain power so discussions are more limited.

Y avoid us`boxed Integer as mutex

https://stackoverflow.com/questions/34049186/why-not-lock-on-a-value-based-class section on “UPDATE – 2019/05/18” has a great illustration

Auto-boxing of a value like 333 usually produces distinct objects each time, but could also produce the same object repeatedly. (For values in [-128, 127], such as 33, the JLS actually guarantees the same cached Integer every time.) The compiler/runtime has the freedom to optimize, just as in c++.

Remember: locking is based on object identity.

wage+cost levels: biased views@China colleagues

— salary

Many China colleagues (YH, CSY, CSDoctor, Jenny as examples) say their classmates earn much more than them in the U.S. These colleagues seem to claim the typical Chinese counterpart earns more than in the U.S.

Reality — those classmates are outliers. The average salary in China is much lower than U.S. Just look at statistics.

Some of these guys (like CSY) feel inferior and regret coming to the U.S.

— cost of reasonable lifestyle

Many Chinese friends complain that the cost level is higher in China than in the U.S. and Singapore. A young MLP colleague (Henry) said RMB 500k/year feels insufficient to a Chinese 20-something.

In reality, per-sqft property price is indeed higher in some cities than in the U.S. For almost everything else, China cost level is much lower than in the U.S. Just look at statistics.

success in long-term learning: keen!=interest

For both my son and my own tech learning over the long term, “interest” is not necessarily the best word to capture the key factor.

I was not really interested in math (primary-secondary) or physics (secondary). In college, I tried to feel interested in electronics, analog IC design etc, but was unsuccessful. At that level, extrinsic motivation was the only “interest” and the real motivation in me. Till today I don’t know if I have found a real passion.

Therefore, the strongest period of my life to look at is not college but before college. Going through my pre-U schools, my killer strength was not so much “interest” but more like keenness — sharp, quick and deep, absorbency…

Fast forward to 2019, I continue to reap rewards due to the keenness — in terms of QQ and zbs tech learning. Today I have stronger absorbency than my peers, even though my memory, quick-n-deep, sharpness .. are no longer outstanding.

Throughout my life, looking at yoga, karaoke, drawing, sprinting, debating, piano .. if I’m below-average and clueless, then I don’t think I can maintain “interest”.

Optional.java notes

Q: if an optional is empty, will it remain forever empty?

— An Optional.java variable could be null but should never be, as the instance needs state to hold at least the boolean isPresent.

If a method is declared to return Optional&lt;C&gt;, then the author needs to ensure she doesn’t return a null Optional! This is not guaranteed by the language.

https://dzone.com/articles/considerations-when-returning-java-8s-optional-from-a-method illustrates a simple rule — use a local var retVal throughout, then at the very last moment return Optional.ofNullable(retVal). This way, retVal can be null but the returned reference is never null.

If needed, an Optional variable should be initialized to Optional.empty() rather than null.

— immutability is tricky

  1. the referent object is mutable
  2. the Optional reference can be reseated, i.e. not q[ final ]
  3. the Optional instance itself is immutable.
  4. Therefore, I think an Optional == a mutable ptr to a const wrapper object enclosing a regular ptr to a mutable java object.
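Item 4’s analogy can be spelled out in C++ terms (my own illustration; Widget stands in for the referent type):

```cpp
struct Widget { int state; };   // plays the mutable java object

struct OptWrapper {             // plays the immutable Optional instance
    Widget* const referent;     // fixed "regular ptr" to a mutable object
};

// The Optional *variable* then behaves like:  const OptWrapper*
// -- reseatable itself, but the wrapper it points to cannot change.
```

So the variable can be re-pointed at a different (possibly empty) wrapper, the wrapper itself never mutates, yet the referent object remains freely mutable — matching items 1–3 above.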

Similarity to String.java — [B/C]

Compare to shared_ptr instance — [A] is true.

  • C) In contrast, a shared_ptr instance has mutable State, in terms of refcount etc
  • B) I say Not-applicable as I seldom use a pointer to a shared_ptr

— get() can throw NoSuchElementException if the value is not present

— not serializable

— My motivation for learning Optional is 1) QQ 2) simplify my design in a common yet simple scenario

https://www.mkyong.com/java8/java-8-optional-in-depth/ is a demo, featuring … flatMap() !!

OO-modeling: c++too many choices

  • container of polymorphic Animals (having vtbl);
  • Nested containers; singletons;
  • class inheriting from multiple supertypes ..

In these and other OO-modeling decisions, there are many variations of “common practices” in c++ but in java/c# the best practice usually boils down to one or two choices.

No-choice is a Very Good Thing, as proven in practice. Fewer mistakes…

Dynamic languages, by contrast, rely on a single big hammer and make everything look like a nail….

This is another example of “too many variations” in c++.

dev jobs ] Citi SG+NY #Yifei

My friend Yifei spent 6+ years in ICG (i.e. the investment banking arm) of Citi Singapore.

  • Over 6Y no layoff. Stability is Yifei’s #1 remark
  • Some old timers stay for 10+ years and have no portable skill. This is common in many ibanks.
  • Commute? Mostly in Changi Biz Park, not in Asia Square
  • Low bonus, mostly below 1M
  • VP within 6 years is unheard-of for a fresh grad

I feel Citi is rather profitable and not extremely inefficient, just less efficient than other ibanks.

Overall, I have a warm feeling towards Citi and I wish it would survive and thrive. It offers good work-life balance, much better than GS, ML, LB etc

debugger stepping into library

I often need my debugger to step into library source code.

Easy in java.

c++ is harder. I need to find more details.

  • in EclipseCDT, STL source code is available to the IDE (probably because class templates are usually in the form of header files), and the debugger is able to step through it, though not very smoothly.

Overall, I feel debugger support is significantly better in VM-based languages than in c++, even though debuggers were invented before these newer languages.

I guess the VM or the “interpreter” can serve as an “interceptor” between debugger and target application. The interceptor can receive debugger commands and suspend execution of the target application.

complacent guys]RTS #DeepakCM

Deepak told me that Rahul, Padma etc stayed in RTS for many years and became “complacent” and uninterested in tech topics outside their work. I think Deepak has sharp observations.

I notice many Indian colleagues (compared to East European or Chinese colleagues) are uninterested in zbs or QQ topics. I think many of them learn the minimum to pass tech interviews. CSY has this attitude on coding IV but the zbs attitude on socket knowledge.

–> That’s a fundamental reason for my QQ strength on the WallSt body-building arena.

If you regularly benchmark yourself externally, often against younger guys, you are probably more aware of your standing, your aging, the pace of tech churn, … You live your days under more stress, both negative and positive stress.

I think these RTS guys may benchmark internally once in a while, if ever. If the internal peers are not very strong, then you get a false sense of strength.

The RTS team may not have any zbs benchmark, since GTD is (99.9%) the focus for the year-end appraisal.

These are some of the reasons Deepak felt 4Y is the max .. Deepak felt H1 guys are always on their toes and therefore more fit for survival.

fear@large codebase #web/script coders

One conclusion — my c++ /mileage/ made me a slightly more confident and slightly more competent programmer, having “been there; done that”, but see the big Question 1 below.

— Historical view

For half my career I avoided enterprise technologies like java/c++/c#/SQL/storedProc/Corba/sockets… and favored light-weight technologies like web apps and scripting languages. I suspect that many young programmers also feel the same way — no need to struggle with the older, harder technologies.

Until GS, I was scared of the technical jargon, complexities, low-level APIs, debuggers/linkers/IDEs, compiler errors and opaque failures in java/SQL … (even more scared of C and Windows). Scared of the larger, more verbose codebases in these languages (cf the small php/perl/javascript programs)… so scared that I had no appetite to study these languages.

— many guys are unused to large codebases

Look around your office. Many developers have at most a single (rarely two) project involving a large codebase. Large meaning 50k to 100k lines of code excluding comments.

I feel the devops/RTB/DBA or BA/PM roles within dev teams don’t require the individual to take on those large codebases. Since it’s no fun, time-consuming and possibly impenetrable, few of them would take it on. In other words, most people who try would give up sooner or later. Searching in a large codebase is perhaps their first challenge. Even figuring out a variable’s actual type can be a challenge in a compiled language.

Compiling can be a challenge esp. with C/c++, given the more complex tool chain, as Stroustrup told me.

Tracing code flow is a common complexity across languages but worse in compiled languages.

In my experience, perl,php,py,javascript codebases are usually small like pets. When they grow to big creatures they are daunting and formidable just like compiled language projects. Some personal experiences —
* Qz? Not a python codebase at all
* pwm comm perl codebase? I would STILL say the codebase would be bigger if written in a compiled language

Many young male/female coders are not committed to large scale dev as a long-term career, so they probably don’t like this kinda tough, boring task.

— on a new level

  • Analogy — if you have not run marathons you would be afraid of it.
  • Analogy — if you have not coached a child on big exams you would be afraid of it.

I feel web (or batch) app developers often lack the “hardcore” experience described above. They operate at a higher level, cleaner and simpler. Note Java is cleaner than c++. In fact I feel weaker as a java programmer compared to a c++ programmer.

Q1: I have successfully mastered a few sizable codebases in C++, java, c#. So how many more successful experiences do I need to feel competent?
A: ….?

Virtually every codebase feels too big at some time during the first 1-2 years, often when I am in a low mood, despite the fact that in my experience, I was competent with many of these large codebases.
I think Ashish, Andrew Yap etc were able to operate well with limited understanding.
I now see the whole experience as a grueling marathon. Tough for every runner, but I tend to start the race assuming I’m the weakest — impostor syndrome.
Everyone has to rely on logs and primitive code-browsing tools. Any special tools are usually of marginal value. With java, the live debugger is the most promising tool but still limited pain-relief. Virtually all of my fellow developers face exactly the same challenges, so we all have to guess. I mean Yang, Piroz, Sundip, Shubin, … virtually all of them, even the original authors of the codebase. Even after spending 10Y with a codebase, we could face opaque issues. However, these peers are more confident against ambiguity.

[19] 4 Deeper mtv2work4more$ After basic ffree

Note sometimes I feel my current ffree is so basic it’s not real ffree at all. At other times I feel it is real, albeit basic, ffree. After achieving my basic ffree, here are four deeper motivations for working hard for even more money:

  • am still seeking a suitable job for Phase B. Something like a light-duty, semi-retirement job providing plenty of free time (mostly for self-learning, blogging, helping kids). This goal qualifies as a $-motivation because … with more financial resources, I can afford to take some desirable Phase-B jobs at lower pay. In fact, I did try this route in my 2019 SG job search.
  • I wish to spend more days with grandparents — need more unpaid leaves, or work in BJ home
  • more respect, from colleagues and from myself
  • stay relevant for 25 years. For next 10 years, I still want more upstream yet churn-resistant tech skills like c++.

–Below are some motivations not so “deep”

  • better home location (not size) — clean streets; shorter commute; reasonable schools, e.g. Bayonne, JC
  • Still higher sense of security. Create more buffers in the form of more diversified passive incomes.

— Below are some secondary $-motivations

  • * more time with kids? Outside top 10 motivations.
  • * better (healthy) food? usually I can find cheaper alternatives
  • * workout classes? Usually not expensive

profilers for low-latency java

Most (simple) java profilers are based on jvm safepoints. At a safepoint, they can use the JVM API to query the JVM. Safepoint-based profiling is relatively easy to implement.

s-sync (Martin) is not based on safepoint.

Async profiler is written for openJDK, but some features are usable on the Zing JVM. Async profiler is based on process-level counters, so it can’t really measure micro-latency with any precision.

Perf is an OS-level profiler, probably based on kernel counters.

“Strategic” needs a re-definition #fitness

“Strategic” i.e. long-term planning/t-budgeting needs a re-definition. quant and c# were two wake-up calls that I tragically missed.

For a long time, the No.1 strategic t-expense was quant, then c#/c++QQ, then codingDrill (the current yellowJersey).

Throughout 2019, I considered workout time as inferior to coding drill .. Over weekends or evenings I often feel nothing-done even though I push myself to do “a bit of” yoga, workout, math-with-boy, or exp tracking.

Now I feel yoga and other fitness t-spend is arguably more strategic than tech muscle building. I say this even though fitness improvement may not last.

Fitness has arguably the biggest impact on brain health and career longevity

git | reword historical commit msg

Warning — may not work if there’s a recent merge-in on your branch

Find the target commit and its immediate parent commit.

git rebase -i the.parent.commit

First commit in the list would be your target commit. Use ‘r’ for the target commit and don’t change other commits. You will land in vim to edit the original bad commit msg. Once you save and quit vim, the rebase will complete, usually without error.

Now you can reword subsequent commit messages.

c++low-^high-end job market prospect

As of 2019, c++ low-end jobs are becoming scarce but high-end jobs continue to show robust demand. I think you can see those jobs across many web2.0 companies.

Therefore, it appears that only high-end developers are needed. The way they select candidates is … QQ. I have just accumulated the minimum critical mass for self-sustained renewal.

In contrast, I continue to hold my position in high-end coreJava QQ interviews.

conclusions: mvea xp

Is there an oth risk? Comparable to MSFM, my perception of the whole experience shapes my outlook and future decision.

  • Not much positive feedback beside ‘providing new, different viewpoints’, but Josh doesn’t give positive feedback anyway
  • should be able to come back to MS unless very stringent requirement
  • Josh might remember Victor as more suitable for greenfield projects.
  • I think Josh likes me as a person and understands my priorities. I did give him 4W notice and he appreciated.
  • I didn’t get the so-called “big picture” that Josh probably valued. Therefore I was unable to “support the floor” when team is out. The last time I achieved that was in Macq.
  • work ethic — A few times I worked hard and made personal sacrifices. Josh noticed.
  • In the final month, I saw myself as fairly efficient in wrapping up my final projects, including the “serialization” item listed below

Q: I was brought in as a seasoned c++ old hand. Did I live up to that image? Note I never promised to be an expert
A: I think my language knowledge (zbs, beyond QQ) was sound
A: my tool chain GTD knowledge was as limited as other old hands.

Q: was the mvea c++ codebase too big for me?
A: No, given my projects were always localized, and there were a few old hands to help me out.

I had a few proud deliveries where I had some impetus to capture the momentum (camp out). Listed below. I think colleagues were impressed to some extent even though other people probably achieved more. Well, I don’t need to compare with those and feel belittled.

This analysis revealed that Josh is not easily impressed. Perhaps he has high standards, as he never praised anyone openly.

  • * I identified two stateless calc engines in pspc. Seeing the elegant simplicity in the design, I quickly zoomed in, stepped over and documented the internal logic and replicated it in spreadsheet.
  • * my pspc avg price sheet successfully replicated a prod “issue”, shedding light into a hitherto murky part of the codebase
  • * I quickly figured out the serialization root cause of the outage
  • * I had two brave attempts to introduce my QOT innovation
  • * My 5.1 Brazil pspc project was the biggest config project to date. I single-handedly overcame many compilation (gm-install) and startup errors. In particular, I didn’t leave the project half-cooked, even though I had the right to do so.
  • I made small contributions to the python test set-up

##With time2kill..Come2 jobjob blog

  • for coding drill : go over
    • [o] t_algoClassicProb
    • [o] t_commonCodingQ22
    • t_algoQQ11
    • open questions
  • go over and possibly de-list
    1. [o] zoo category — need to clear them sooner or later
    2. [o] t_oq tags
    3. t_nonSticky tags
    4. [o] t_fuxi tags
    5. Draft blogposts
    6. [o] *tmp categories? Sometimes low-value
    7. remove obsolete tags and categories
  • Hygiene scan for blogposts with too many categories/tags to speed up future searches? Low value?
  • [o=good for open house]

c++screening Questions for GregR

Similar to https://bintanvictor.wordpress.com/2017/09/15/more-mcq-questions-on-core-java-skills/, hopefully these questions can be given over phone.

Other topics easy to quiz over phone: smart pointers, std::thread, rvr/move, big4

  • — Q3: Suppose you have a simple class “Account” with only simple data fields like integers and strings, we can get an Account object constructed on 1) heap or in 2) data segment. Where else can it be constructed? For clarity, Data segment holds things like global variables.  [on stack]
  • Q3b: Suppose Account class has a default constructor Only. In which cases above is this constructor used to instantiate the Account object? [all three cases]
  • Q3c: in which one of the three cases must we use pointer to access the constructed object? [the Heap case ]
  • Q3d: in which one of the three cases above do we have a risk of memory leak? [the Heap case]
  • Q3e: in which of the three cases can the Account object construction happen before main() function starts? Hint: dynamic vs static allocation [ data segment case ]
  • Q3e2 (advanced): for a static Account object allocated in data segment, is the construction always before main()? [Not always. Consider local static variables]
  • Q3f (advanced): RAII relies on which of the 3 types of allocation? Hint: RAII invokes the destructor automatically and reliably. [stack allocation]
  • Q3g: can you have an array of Account objects constructed on heap? [yes]
  • Q3h: YES we can get such an array of Account objects, using array-new, but do we get a pointer or an array or do we get the first array element in an Account variable? [pointer]
  • Q3i: what happens when one of these Account objects is destructed? [destructor would run]
  • Q3j: OK let’s assume Account destructor runs. In which cases above is the destructor Guaranteed to run? Note we have four destruction scenarios so far — on stack, in data-segment, on heap and after array-new. [all cases, including static objects]
  • Q3k: what if we don’t define any destructor in Account class, in which cases above is destructor skipped? [Never skipped in any  case]
  • Q3L: In the array-new case, suppose 100 Account objects are constructed on heap, when we destruct the array via array-delete, how many times would the Account destructor run? [ 100 times]
  • — Q4: I have a variable var1 whose type is integer pointer, and I pass var1 directly into a function, what types of functions below can accept this argument?
    • A: funcA takes a parameter of non-constant reference to integer
    • B: funcB takes a parameter of constant reference to integer
    • * C: funcC takes a parameter of pointer to integer
    • D: funcD takes a parameter of bare integer (not pointer or reference)
  • Q4b: I want to call funcX(100+200+300), what types of function can accept this argument? [B/D]
  • Q4c: I have a variable var2 as an integer variable, and I want to pass it in like funcX(var2), what types of functions below can accept this argument? [A/B/D]
  • — Q5: which casts below are platform-specific? [correct answer is marked with *]
    • A: static_cast
    • B: down_cast
    • C: const_cast
    • D: dynamic_cast
    • * E: reinterpret_cast
  • Q5b: which casts are Not part of the c++ standard? [B]
  • Q5c: which casts are closest to the old-school cast in C language? [A]
  • Q5d: which casts are likely to require the vtable? [D]
  • Q5e: which casts usually work with pointers and references Only? [D/E]
  • Q5f (advanced): when applied on pointers, which casts could produce an address different from the input pointer? [D]
  • — Q6: which standard data types below usually require heap allocation [see asterisks]
    • A: integer
    • * B: the string in standard library
    • C: plain vanilla array of 20 timestamps
    • * D: the list in standard library
    • * E: the vector in standard library
    • * F: the map in standard library
    • * G: unordered_map
    • H: C-style string
  • Q6b: which data types are available in C? [A C H]
  • Q6c: which data types by design won’t grow to accommodate more elements? [C H] A is a trivial case.
  • Q6d (advanced): which implementation classes for these data types must include code to allocate an array on heap? [B E G]
  • Q6d2 (more advanced, if last question was correctly answered): which one among these three classes may skip array allocation in a common optimization? [B in the form of small-string optimization]
  • Q6e: which data types offer random-access iterators capable of jumping by an arbitrary offset? [B C E H]
  • Q6f: which data types offer amortized constant-time lookup? [BCEH and G]
  • Q6g (advanced): which data type(s) offer unconditional guaranteed performance for insertion/deletion at any position and in all scenarios ? [D F]
  • Q6h (advanced): which data structures are allocated on heap but don’t require reallocation of existing elements? [list and map]
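To illustrate Q5f, here is a minimal sketch (class names are illustrative) showing how dynamic_cast can yield an address different from the input pointer under multiple inheritance, on common ABIs where the second base subobject sits at a nonzero offset:

```cpp
// Sketch for Q5f: under multiple inheritance, the Base2 subobject lives at
// an offset inside Derived, so pointer conversions must adjust the address.
// dynamic_cast<void*> recovers the most-derived object's address.
struct Base1 { virtual ~Base1() = default; int x = 1; };
struct Base2 { virtual ~Base2() = default; int y = 2; };
struct Derived : Base1, Base2 {};

bool addressChanges() {
    Derived d;
    Base2* b2 = &d;                            // implicit up-cast adjusts the pointer
    void* mostDerived = dynamic_cast<void*>(b2); // back to the most-derived address
    return static_cast<void*>(&d) != static_cast<void*>(b2)
        && mostDerived == static_cast<void*>(&d);
}
```

On mainstream compilers (Itanium ABI, MSVC) the up-cast to Base2 shifts the address past the Base1 subobject, and dynamic_cast undoes the shift.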

## command-line c++ dev tools: Never phased out

Consider the C++ build chain + dev tools on the command line. They never get phased out, never lose relevance, never become useless — at least till my 70s. New tools always, always keep the old features. In contrast, java and newer languages don’t need so many dev tools; their tools are more likely to use a GUI.
  • — Top examples of similar things (I don’t have good adjectives)
  • unix command line power tools
  • unix shell scripting for automation
  • C API: socket API, not the concepts
  • — secondary examples
  • C API: pthreads
  • C API: shared memory
  • concepts: TCP+UDP, http+cookies
Insight — unix/linux tradition is more stable and consistent. Windows tradition is more disruptive.

Note this post is more about churn (phased-out) and less about accumulation (growing depth)

finding1st is easier than optimizing

  • Problem Type: iterate all valid choices without duplicates — sounds harder than other types, but usually the search space is quite constrained and tractable
    • eg: regex
    • eg: multiple-word search in matrix
  • Problem Type: find best route/path/combo, possibly pruning large subtrees in the search space — often the hardest type
  • Problem Type: find first — usually easier

In each case, there are recurring patterns.

automation scripts for short-term GTD

Background — automation scripts have higher value in some areas than others

  1. portable GTD
    • Automation scripts often use bash, git, SQL, gradle, python/perl… similar to other portable GTD skills like instrumentation know-how
  2. long-term local (non-portable) GTD
  3. short-term local (non-portable) GTD

However, for now let’s focus on short-term local GTD. Automation scripts are controversial in this respect. They take up lots of time but offer some measurable payoffs:

  • they could consolidate localSys knowledge .. thick -> thin
  • They can serve as “executable-documentation”, verified on every execution.
  • They reduce errors and codify operational best practices.
  • They speed up repeated tasks. This last benefit is often overrated. In rare contexts, a task is so repetitive that we get tired and have to slow down.

— strength — Automation scripts are a competitive strength of mine, even though I’m not the strongest.

— respect – Automation scripts often earn respect and start a virtuous cycle

— engagement honeymoon — I often experience rare engagement

— Absorbency — sometimes I get absorbed, though the actual value-add is questionable .. we need to keep in mind the 3 levels of value-add listed above.

jGC heap: 2 unrelated advantages over malloc

Advantage 1: faster allocation, as explained in other blogposts

Advantage 2: the programmer can "carelessly" create a "local" object in some method1, pass the object (by reference) into other methods, and happily forget about freeing the memory.

In this extremely common set-up, the reference itself is a stack variable in method1, but the heapy thingy is "owned" by the GC.

In contrast, c/c++ requires some "owner" to free the heap memory, otherwise memory would leak. There’s also the risk of double-free. Therefore, we absolutely need clearly documented ownership.

max-sum non-adjacent subSequence: signedIntArr#70%

Q: given a signed int array a[], find the best sub-sequence sum, where no two adjacent elements of a[] both show up in the chosen sub-sequence. O(N) time and O(1) space.

====analysis

— O(1) idea 2:

Only two simple variables are needed: m_1 holds m[cur-1] and m_2 holds m[cur-2], where m[i] is the best sum achievable over a[0..i]. At each step, compare a[cur] + m_2 vs m_1 and keep the larger as m[cur].

https://github.com/tiger40490/repo1/blob/py1/py/algo_arr/nonAdjSeqSum.py is a self-tested DP solution.
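A minimal C++ sketch of the two-variable DP above (my assumption: an empty selection with sum 0 is allowed, so an all-negative input yields 0):

```cpp
#include <algorithm>
#include <vector>

// Max sum of a sub-sequence with no two adjacent elements chosen.
// m1 plays the role of m[cur-1]; m2 plays the role of m[cur-2].
int maxNonAdjSum(const std::vector<int>& a) {
    int m2 = 0, m1 = 0; // empty prefix: best sum is 0 (pick nothing)
    for (int x : a) {
        int cur = std::max(m1, m2 + x); // skip a[cur], or take it on top of m[cur-2]
        m2 = m1;
        m1 = cur;
    }
    return m1;
}
```

O(N) time, O(1) space, exactly the two scalars described above.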

FB 2019 ibt Indeed onsite

  • coding rounds — not as hard as the 2011 FB interview — regex problem
    • Eric gave positive feedback to confirm my success but perhaps other candidates did even better
    • No miscommunication as happened in the VersionedDict.
    • The 2nd Indeed interviewer failed me even though I “completed” the problem. The pregnant interviewer may follow suit.
  • data structure is fundamental to all the problems today.
  • SDI — was still my weakness but I think I did slightly better this time
  • Career round — played to my advantage as I have multiple real war stories. I didn’t prepare much. The stories just came out raw and hopefully authentic
  • the mini coding rounds — played to my advantage as I reacted fast, thanks to python, my leetcode practice …

So overall I feel I’m getting much closer to passing. Now I feel one interview is all it takes to enter a new world:

* higher salary than HFT
* possibly more green field
* possibly more time to research, not as pressed as in ibanks

sorting collection@chars|smallIntegers

Given a single (possibly long) string, sorting its characters should Never be O(N logN)!

Whenever an algo problem requires sorting a bunch of English letters, it’s always, always better to use counting sort — O(N) rather than O(N logN).

Similarly, in more than one problem, we are given a bunch of integers bound within a limited range. For example, array index values could be presented to us as a collection to be sorted. Sorting such integers should always, always be counting sort.
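A minimal counting-sort sketch for lowercase English letters (function name is illustrative; assumes input contains only a–z):

```cpp
#include <array>
#include <string>

// Counting sort for lowercase English letters: O(N) time, O(1) extra space
// (the histogram has a fixed size of 26, independent of input length).
std::string countingSortLetters(const std::string& s) {
    std::array<int, 26> cnt{};            // zero-initialized histogram
    for (char c : s) ++cnt[c - 'a'];
    std::string out;
    out.reserve(s.size());
    for (int i = 0; i < 26; ++i)
        out.append(cnt[i], static_cast<char>('a' + i));
    return out;
}
```

The same histogram idea extends to any integer collection with a known, limited range.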

GTD fear-graph: delays^opacity^benchmark..

  • — manifestations first, fundamental, hidden factors last
  • PIP, stigma, damagedGood. Note I’m not worried about cashflow
  • [f] project delays
  • [f] long hours
  • [f] long commute
  • large codebase in brown field. Am slower than others at reading code.
  • [f] opacity — is worse than complexity or codebase size
  • figure-out speed benchmark — including impostor’s syndrome
  • [f = faced by every team member]
The connections among these “nodes” look complex but may not be really complex.
PIP is usually the most acute, harmful item among them. I usually freak out, though in hindsight I always feel I tried my best and should not feel ashamed. Josh is one manager who refuses to use PIP, so under Josh I had no real fear.

dev career]U.S.: fish out of water in SG

Nowadays I feel in-demand on 1) Wall St, 2) with the web2.0 shops. I also feel welcome by 3) the U.S. startups. In Singapore, this feeling of in-demand was sadly missing. Even the bank hiring managers considered me a bit too old.

Singapore banks only have perm jobs for me, which feel unsuitable, unattractive and stressful.

In every Singapore bank I worked, I felt visibly old and left behind. Guys at my age were all managers… painful.

expertise: Y I trust lecture notes more than forums

Q: A lot of times we get technical information in forums, online lecture notes, research papers. why do I trust some more than others?

  1. printed books and magazines — receive the most scrutiny, fundamentally because once printed, mistakes are harder to correct.
  2. research papers — receive a lot of expert peer review and most stringent editorial scrutiny, esp. in a top journal and top conference
  3. college lecture notes — are more “serious” than forums,
    • mostly due to the consequence of misinforming students.
    • When deciding what to include in lecture notes, many professors are conservative and prudent about adding less established, less proven research findings. The professor may mention those findings verbally but is more prudent about her posted lecture notes.
    • research students ask lots of curious questions and serve as a semi-professional scrutiny of the lecture notes
    • lecture notes are usually written by PhD holders
  4. Stackoverflow and Quora — have quality control via voting by accredited members.
  5. the average forum — least reliable.

[11]concurrent^serial allocation in JVM^c++

–adapted from online article (Covalent)

Problem: multithreaded apps create new objects at the same time. During object creation, the allocation area is locked, so on a multi-CPU machine (threads running concurrently) there can be contention.

Solution: Allow each thread to have a private piece of the EDEN space. Thread Local Allocation Buffer
-XX:+UseTLAB
-XX:TLABSize=
-XX:+ResizeTLAB

You can also analyse TLAB usage with -XX:+PrintTLAB

Low-latency c++ apps use a similar technique. http://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919 reveals insights into lock contention in malloc()

Q: When do U choose c++over python4personal coding #FB

A facebook interviewer asked me “I agree that python is great for coding interviews. When do you choose c++?” I said

  1. when I need to use template meta programming
  2. when I need a balanced tree

Now I think there are other reasons

  1. when I practice socket programming
  2. when I practice memory mgmt techniques — even java won’t give me the low-level controls
  3. when I parse binary data using low-level constructs like reinterpret_cast, endianness conversion
  4. when I practice pthreads — java is easier

I didn’t showcase c++ but I felt very confident and strong about my c++, which probably shined through in front of the last interviewer (from the architect team)

I think my c++/python combo was a good one for the FB onsite, even though only one interviewer noticed my c++ strength.

%%FB onsite coding/SDI questions

— Q: design type-ahead i.e. search suggestion. Scalability is key.

— Q: innerProduct2SparseArray (Table aa, Table bb). First you need to design the undefined “Table” class to represent a sparse array.
Then you need to write real code for innerProduct2SparseArray(), assuming the two Tables are already populated according to your definition.

— Q: add2DecimalsSameLength (string decimal1, string decimal2) to return a string version of the sum. Can’t use the python integer to hold the sum, as in java/c++ integers have limited size. You must use string instead.

Aha — carry can only be 0 or 1
I forgot to add the last carry as an additional digit beyond the length of the input strings 😦
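A C++ sketch of the digit-by-digit addition, including the final-carry step I missed (function name is illustrative; assumes equal-length, non-negative decimal inputs per the question):

```cpp
#include <algorithm>
#include <string>

// Add two non-negative decimal numbers of equal length, given as strings.
// Aha from above: the carry at each position can only be 0 or 1.
std::string addDecimalStrings(const std::string& a, const std::string& b) {
    std::string sum;
    int carry = 0;
    for (int i = static_cast<int>(a.size()) - 1; i >= 0; --i) {
        int d = (a[i] - '0') + (b[i] - '0') + carry;
        carry = d / 10;                       // 0 or 1
        sum.push_back(static_cast<char>('0' + d % 10));
    }
    if (carry) sum.push_back('1');            // the final carry digit -- easy to forget!
    std::reverse(sum.begin(), sum.end());     // digits were built right-to-left
    return sum;
}
```

The same loop structure works for add2longBinary by replacing 10 with 2.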

— Q: add2longBinary(string binaryNum1, string binaryNum2). I told interviewer that my add2BigDecimal solution should work, so we skipped this problem.

— Q: checkRisingFalling(listOfInts) to return 1 for rising, -1 for falling and 0 for neither. Rising means every new num is no lower than previous

— Q: checkDiameter(rootOfBinTree)

save iterators and reuse them without invalidation risk

For an STL container undergoing no structural changes, the iterator objects can be safely stored and reused.

The dreaded iterator invalidation is a risk only under structural changes.

Many coding interview questions allow (even require) me to save those iterators and store them in a vector, a hash table …

Afterwards, we retrieve a saved iterator and visit its next/previous element.
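A small sketch (names illustrative) using std::list, whose iterators stay valid across unrelated inserts — so the saved iterators can be reused even after some structural changes:

```cpp
#include <list>
#include <string>
#include <unordered_map>

// Save std::list iterators in a hash table and reuse them later.
// std::list iterators stay valid unless their own element is erased.
std::string nextOf(const std::string& key) {
    std::list<std::string> lst{"a", "b", "c"};
    std::unordered_map<std::string, std::list<std::string>::iterator> pos;
    for (auto it = lst.begin(); it != lst.end(); ++it)
        pos[*it] = it;               // save each iterator
    lst.push_back("d");              // structural change elsewhere in the list
    auto it = pos.at(key);           // retrieve a saved iterator
    return *++it;                    // visit its next element
}
```

Note this safety is container-specific: the same trick with a std::vector would be invalidated by the push_back.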

SDI: 3 ways to expire cached items

server-push update ^ TTL ^ conditional-GET # write-through is not cache expiration

Few online articles list these solutions explicitly. Some of them are simple concepts but fundamental to DB tuning and app tuning. https://docs.oracle.com/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#COHDG198 compares write-through ^ write-behind ^ refresh-ahead. I think refresh-ahead is similar to TTL.

B) cache-invalidation — some “events” would trigger an invalidation. Without invalidation, a cache item would live forever with an infinite TTL, like the list of China provinces.

After cache proxies get the invalidation message in a small payload (bandwidth-friendly), the proxies discard the outdated item, and can decide when to request an update. The request may be skipped completely if the item is no longer needed.

B2) cache-update by server push — IFF bandwidth is available, server can send not only a tiny invalidation message, but also the new cache content.

IFF combined with TTL, or with reliability added, then multicast can be used to deliver cache updates, as explained in my other blogposts.

T) TTL — more common. Each “cache item” embeds a time-to-live data field a.k.a expiry timestamp. Http cookie is the prime example.

In Coherence, it’s possible for the cache proxy to pre-emptively request an update on an expired item. This would reduce latency but requires a multi-threaded cache proxy.

G) conditional-GET in HTTP is a proven industrial-strength solution described in my 2005 book [[computer networking]]. The cache proxy always sends a GET to the database but with an If-Modified-Since header. This reduces unnecessary database load and network load.

W) write-behind (asynchronous) or write-through — in some contexts, the cache proxy handles not only Reads but also Writes. So the Read requests will read or add to cache, and Write requests will update both the cache proxy and the master data store. Drawback — in a distributed topology, updates from other sources are not visible to “me” the cache proxy, so I still rely on one of the other 3 means.

| scenario | TTL | eager server-push | conditional-GET |
| --- | --- | --- | --- |
| frequent query, infrequent update | efficient | efficient | frequent but tiny requests between DB and cache proxy |
| latency important | OK | lowest latency | slower lazy fetch, though efficient |
| infrequent query | good | wastes DB/proxy/NW resources, as “push” is unnecessary | efficient on DB/proxy/NW |
| frequent update | unsuitable | high load on DB/proxy/NW | efficient conflation |
| frequent update + query | unsuitable | can be wasteful | perhaps most efficient |

 

@outage: localSys know-how beats generic expertise

When a production system fails to work, do you contact

  • XX) the original developer, or the current maintainer over the last 3Y (or longer), with a no-name college degree + working knowledge of the programming language, or
  • YY) the recently hired (1Y history) expert in the programming language with PhD and published papers

Clearly we trust XX more. She knows the localSys and likely has seen something similar.

Exception — what if YY has a power tool like a remote debugger? I think YY may gain fresh insight that XX is lacking.

XX may be poor at explaining the system design. YY may be a great presenter without low-level and hands-on know-how.

If you discuss the language and underlying technologies with XX he may show very limited knowledge… Remember Andrew Yap and Viswa of RTS?

Q: Who would earn the respect of teammates, mgr and external teams?

XX may have a hard time getting a job elsewhere .. I have met many people like XX.

lockfree stack with ABA fix #AtomicStampedReference

http://15418.courses.cs.cmu.edu/spring2013/article/46 explains ABA in the specific context of a lockfree stack. To counter the ABA problem, it uses CAS2, i.e. CAS on a pair of objects — the original object + a stamp.

Java CAS2? I believe AtomicStampedReference is designed for it.

http://tutorials.jenkov.com/java-util-concurrent/atomicstampedreference.html#atomicstampedreference-and-the-a-b-a-problem explains the AtomicStampedReference solving the ABA problem but in the last section it doesn’t clearly explain the benefit of get().

Also, the retry should be placed in a loop.

====Actually, many ABA illustrations are simplistic. Consider this illustration:

  1. Thread 0 begins a POP and sees “A” as the top, followed by “B”. Thread 0 saves “A” and “B” before committing.
  2. Thread 1 begins and completes a POP , returning “A”.
  3. Thread 1 begins and completes a push of “D”.
  4. Thread 1 pushes “A” back onto the stack and completes. So now the actual stack top has A above D above B.
  5. Thread 0 sees that “A” is on top and returns “A”, setting the new top to “B”.
  6. Node D is lost.

— With a vector-of-pointer implementation, Thread 0 needs to save integer position within the stack. At Step 5, it should then notice the A sitting at stack top is now at a higher position than before, and avoid ABA.

The rest is simple. Thread 0 should then query the new item (D) below A. Lastly, the CAS would compare current stack top position with the saved position, before committing.

However, in reality I think a vector-of-ptr is extremely complex to implement if we have two shared mutable things to update via CAS: 1) the stackTopPosition integer and 2) the (null) ptr in the next slot of the vector.

— with a linked list implementation, I think we only have the node addresses, so at Step 5, Thread 0 can’t tell that D has been inserted between A and B.

You may consider using stack size as a second check, but it would have similar complexity to CAS2 while being less reliable.

array-based order book: phrasebook

For HFT + mkt data + .. this is a favorite interview topic, kind of similar to retransmission.

Perhaps look at … https://web.archive.org/web/20141222151051/https://dl.dropboxusercontent.com/u/3001534/engine.c has a brief design doc, referenced by https://quant.stackexchange.com/questions/3783/what-is-an-efficient-data-structure-to-model-order-book

  • TLB?
  • cache efficiency

impostor’s syndrome: IV^on-the-job

I feel like an impostor more on the job than in interviews. I have strong QQ (+ some zbs) knowledge during interviews — more and more often in my c++ interviews, in addition to java interviews.

Impostor’s syndrome is all about benchmarking. In job interviews, I sometimes come up stronger than the interviewers, esp. with QQ topics, so I sometimes feel the interviewer is the impostor !

In my technical discussions with colleagues, I also feel like an expert. So I probably make them feel like impostors.

So far, all of the above “expert exchanges” are devoid of any localSys. When the context is a localSys, I have basically zero advantage. I often feel the impostor’s syndrome because I probably oversold during the interview and set up sky-high expectations.

any zbs in algo questions@@

I think Rahul may have a different view, but I feel some QQ questions qualify as zbs, while very few of the algo tricks do.

In contrast, data structure techniques are more likely to qualify as zbs.

Obviously, the algorithm/data-structure research innovations qualify as zbs, but they are usually too complex for a 45-min coding interview.

single-writer lockfree data structure@@

All the lockfree data structures I have seen have 2 writer threads or more.

Q: what if in your design, there can be at most one writer thread but some reader threads. Any need for lockfree?
A: I don’t think so. I think both scenarios below are very common.

  • Scenario: If notification required, then mutex + condVar
  • Scenario: In many scenarios, notification is not needed — Readers simply drop by and read the latest record. In java, a volatile field suffices.
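A C++ analogue of the java volatile-field approach (names illustrative) — a lone std::atomic with release/acquire ordering, no lock and no lockfree data structure:

```cpp
#include <atomic>

// Single-writer, multi-reader: the writer publishes the latest record and
// readers simply load it. Release/acquire ordering plays the role of
// Java's volatile field here.
std::atomic<long> g_latest{0};

void writerPublish(long record) {
    g_latest.store(record, std::memory_order_release);
}

long readerPeek() {
    return g_latest.load(std::memory_order_acquire);
}

// single-threaded demo helper, for illustration only
long demoPublish(long v) {
    writerPublish(v);
    return readerPeek();
}
```

For a multi-word record, the same pattern would publish a pointer to an immutable snapshot instead of a single long.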

 

multi-core as j4 multi-threading #STM

Q: why choose multi-threaded design instead of single-threaded processes?

Most publications mention multi-core hardware as an answer. Questionable.

With multi-threading, you can run 30 threads in one process. Or you can run 30 single-threaded processes as in RTS parser and Rebus — industrial strength proven solution. Both designs make use of all CPU cores.

Between these two designs, heap memory efficiency can be different, as the 30 threads are able to share 99GB of objects in the same address space, but the 30 processes would need shared memory.

I feel the lesser-known middle-ground design is 30 threads each running in single-threaded mode. By definition, these 30 threads share only immutable data; anything mutable is thread-local.

In any multi-threaded design, the 30 threads can share the text segment, i.e. the memory occupied by code. The text segment tends to be smaller than the heap footprint.

In a boss-worker design, the worker threads may need to share very few mutable objects, so these worker threads are not strictly STM. They are quasi-STM.

funcPtr as argument, with default value

Suppose you write a dumpTree() which takes a “callback” parameter, so you can invoke callback(aTreeNodeOfTheTree).

Sounds like a common requirement?

Additionally, suppose you want “callback” to have a default value of no-op, basically a nullptr.

Sounds like a common requirement?

I think c/c++ doesn’t have easy support for this. In my tested code https://github.com/tiger40490/repo1/blob/cpp1/cpp/algo_binTree/binTreeUtil.h, I have to define a wrapper function without the callback parameter. Wrapper would call the real function, passing in a nullptr as callback. Note nullptr needs casting.
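For what it’s worth, in simple non-template cases a plain default argument of nullptr plus a null check seems to work without a wrapper (a sketch; names here are illustrative, not from the linked repo):

```cpp
#include <string>

struct TreeNode { int val; TreeNode *left, *right; };

std::string g_out; // collects visited values, for demonstration only

// Callback parameter defaulting to nullptr (a no-op); the null check
// inside the traversal replaces the separate wrapper function.
void dumpTree(TreeNode* root, void (*callback)(TreeNode*) = nullptr) {
    if (!root) return;
    dumpTree(root->left, callback);
    if (callback) callback(root);      // skip the call when no callback given
    dumpTree(root->right, callback);
}

void record(TreeNode* n) { g_out += std::to_string(n->val); }

std::string demo() {
    TreeNode l{1, nullptr, nullptr}, r{3, nullptr, nullptr};
    TreeNode root{2, &l, &r};
    g_out.clear();
    dumpTree(&root);                   // default no-op: nothing recorded
    std::string silent = g_out;
    dumpTree(&root, record);           // explicit callback: in-order visit
    return silent + "|" + g_out;
}
```

The casting complication mentioned above tends to bite once the callback type is templated or overloaded.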

clean language==abstracted away{hardware

Nowadays I use the word “clean” language more and more.

Classic example — java is cleaner than c# and c++. I told a TowerResearch interviewer that java is easier to reason with. Fewer surprises and inconsistencies.

The hardware is not “consistent” as wished. Those languages closer to the hardware (and programmers of those languages) must deal with the “dirty” realities. To achieve consistency, many modern languages make heavy use of heap and smart pointers.

For consistency, everything is treated as a heapy thingy, including functions, methods, metadata about types …

For consistency, member functions are sometimes treated as data members.

job satisfaction^salary #CSDoctor

I feel CSDoctor has reasonable job satisfaction. If I were in his role my job satisfaction would be lower.

His financial domain is rather niche. Equity volatility data analysis, not a big infrastructure that needs a team to support. I guess it could be considered a glorified spreadsheet. I think the users are front office traders looking at his data to make strategic business decisions.

His technical domain is very niche. The team (4 people) made a conscious decision to use c# early on, and has not really changed to java. C# is shunned by West Coast and all internet companies including mobile, cloud and big-data companies. Even Wall St is increasing investment in javascript based GUI, instead of c# GUI.

I see very high career risk in such a role. What if I get kicked out? (Over 7 years the team did kick out at least 2 guys.. Last-in-first-out.) What if I don’t get promoted beyond VP and want to move out?

I won’t enjoy such a niche system. It limits career mobility. 路越走越窄.

This is one of the major job dissatisfactions in my view. Other dissatisfactions:

* long hours — I think he is not forced to work long hours. He decides to put in extra hours, perhaps to get promotion.

* stress — CSDoctor is motivated by the stress. I would not feel such high motivation, if put in his shoes.

* commute — I believe he has 1H+ commute. I feel he has very few choices in terms of jobs closer to home, because his domain is extremely niche.

STL container=#1 common resource owner #RAII

Stroustrup said every resource (usually a heapy thingy) needs to have an owner, who will eventually return the resource.

By his definition, every resource has an acquire/release protocol. These resources include locks, DB connections and file handles. The owner is the Object responsible for the release.

  • The most common resource owner is the STL container. When a std::vector or std::unordered_multimap … is destructed, it would release/return the resource to the heap memory manager.
  • The best-known resource-owner is the family of smart pointers.
  • You can also combine them as a container of smart pointers.
  • All of these resource owners rely on RAII
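A minimal sketch combining two of the owners above — a container of smart pointers (names illustrative):

```cpp
#include <memory>
#include <vector>

struct Widget { int id; explicit Widget(int i) : id(i) {} };

// Two RAII owners combined: the vector owns its heap array, and each
// unique_ptr owns one Widget. When `owners` goes out of scope, both
// layers are released automatically -- no explicit delete anywhere.
int demoRaii() {
    std::vector<std::unique_ptr<Widget>> owners;
    for (int i = 0; i < 3; ++i)
        owners.push_back(std::make_unique<Widget>(i));
    int sum = 0;
    for (const auto& w : owners) sum += w->id;
    return sum; // all Widgets and the vector's array freed on return
}
```

The release happens in the destructors, which run even if an exception unwinds the stack — the essence of RAII.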

central data-store updater in write-heavy system

I don’t know how often we encounter this stringent requirement —

Soccer world cup final, or big news about Amazon … millions of users post comments on a web page, and all comments need to be persisted and shown on some screen.

Rahul and I discussed some simple design. At the center is a single central data store.

  • In this logical view, all the comments are available at one place to support queries by region, rating, keyword etc.
  • In the physical implementation, we could use multiple files, shared memory, or a distributed cache.

Since the comments come in a burst, this data store becomes the bottleneck. Rahul said there are two unrelated responsibilities on the data store updaters. (A cluster of updaters might be possible.)

  1. immediately broadcast each comment to multiple front-end read-servers
  2. send an async request to some other machine that can store the data records. Alternatively, wait to collect enough records and write to the data store in a batch

Each read-server has a huge cache holding all the comments. The server receives the broadcast and updates its cache, and uses this cache to service client requests.

val@pattern_recognition imt reusable_technique

Reusable coding techniques include my classic generators, DP, augmented trees, array-element swap, and bigO tricks like hash tables and median-of-3 pivot partitioning.

  • One of the best examples of reusable technique — generate “paths/splits” without duplicates. Usually tough.

More useful than “reusable techniques” are pattern-recognition insight into the structure and constraints in a given coding problem. Without these insights, the problem is /impenetrable/intractable/.

Often we need a worst-case input to highlight the hidden constraint. The worst input can sometimes quickly eliminate many wrong pathways through the forest and therefore help us see the right pathway.

However, in some context, a bigO trick could wipe out the competition, so much so that pattern recognition, however smart, doesn’t matter.

 

##broad viability factors for dev-till-70

In this blogpost I focus on broad, high-level viability factors for my dev-till-70 career plan.

  • see j4 dev-till-70
  • — #1 factor is probably health. See my blog tag. I believe I’m in control of my destiny.
  • However, in reality physical health might be the real limiting factor.
  • — #2 factor is probably market demand
  • Wall St contract market is age-friendly — [19] y WallSt_contract=my best Arena #Grandpa
  • c++ is good skillset for older guys. Core java is also good.
  • churn
  • Jiang Ling of MS said even at an old age, “easy to find paid, meaningful coding projects, perhaps working at home”. Xia Rong agreed. I kinda prefer an office environment, but if the work location is far away, at least there exists an option to take it on remotely.
  • — A G5 factor is my competitiveness among candidates. See Contract: unattractive to young developers
  • coding drill?

##strongER trec taking up large codebases

I have grown from a sysAdmin to a dev
I have grown from web and scripting pro into a java pro then c++ pro !
Next, I hope to grow my competence with large codebase.

I feel large codebase is the #1 differentiator separating the wheat from the chaff — “casual” vs hardcore coders.

With a large codebase, I tend to focus on the parts I don’t understand, regardless of whether that’s 20% or 80% of the code I need to read. I can learn to live with that ambiguity. I guess Rahul was good at that.

In a few cases, within 3M I was able to “take up” a sizable brown field codebase and become somewhat productive. As I told Kyle, I don’t need bottom-up in-depth knowledge to be competent.

In terms of count of successes taking up sizable brown-field codebase, I am a seasoned contractor, so I would score more points than a typical VP or an average old timer in a given system.

  • eg: Guardian — don’t belittle the challenge and complexity
  • eg: mvea c++ codebase — My changes were localized, but the ets ecosystem codebase was huge
  • eg: mvea pspc apportionment/allocation logic + price matching logic
  • eg: mtg comm (small green field), integrated into a huge brown field codebase
  • eg: StirtRisk personalization — I made it work and manager was impressed
  • eg: StirtRisk bug fixes on GUI
  • eg: Macq build system — many GTD challenges
  • eg: Moodles at NIE
  • eg: Panwest website contract job at end of 2016
  • ~~~ the obvious success stories
  • eg: AICE — huge stored proc + perl
  • eg: Quest app owner
  • eg: error memos
  • eg: RTS

Contract: unattractive to young developers

GregR convinced me that most of the young developers aren’t interested in Wall St contract jobs. I can’t remember the reasons but something like

In my experience the contractors I see are mostly above 40 or at least 35+. The younger guys (Nikhil..) tend to be part of a contract agency like Infosys.

The upshot — I have fewer strong competitors. Most of the older guys are not so competitive in either IV or GTD on the job. In particular, their figure-things-out speed is slower.

toy^surgeon #GS-HK@lockfree

H := the interviewer, an OMS guy at GS-Hongkong, bent on breaking the candidate.

Q3: any real application of CAS in your project?

Q3b: why did you choose CAS instead of traditional lock-based? Justify your decision.
%%A: very simple usage scenario .. therefore, we were very sure it was correct and not slower, probably faster [1]. In fact, it was based on published solutions, customized slightly and reviewed within my team.
%%A: In this context, I would make my decisions. Actually my manager liked this kinda new ideas. Users didn’t need to know this level of details.

[1] uncontended lock acquisition is still slower than CAS

Q3d: how much faster than lock-based?
%%A: I didn’t benchmark. I know it wasn’t slower.

H: but any change is risky
%%A: no. There was no existing codebase to change.

H: but writing new code is a change
%%A: in that case, lock-based solution is also risky.

H: “Not slower” is not a good answer.

%%A: I don’t think I can convince you, but let me try one last time. Suppose my son wants to try a new toy. We know it doesn’t cost more than a familiar toy. We aren’t sure if he would actually keep it, but there’s no harm trying it out.

I said this as a last-ditch effort, since I had all but lost the chance to convince him. So I took a risky sharp turn.

H: but this is production not a toy !

So he was on the hook! What i should have said:

A: No we didn’t roll it out to production without testing. Look at what google lab does.

A: well, my wife went through laser surgery. The surgeon was very experienced and tried a relatively new technique. She was not a guinea pig. Actually the procedure is new but shown to be no worse than the traditional techniques. Basically, we don’t need to do lots of benchmarks to demonstrate a new technique is worth trying. For simple, safe, well-known-yet-new techniques, it’s not always a bad idea to try it on a small sample. Demanding extensive analysis and benchmark is a way to slow down incremental innovations.

A: the CAS technique has been well-researched for decades and tried in many systems. I also used it before.

find any black-corner subMatrix #52%

https://www.geeksforgeeks.org/find-rectangle-binary-matrix-corners-1/

Q: given a black/white matrix, find any rectangle whose all four corners are black.
Q2: list all of them
Q3 (google): find the largest

— idea 2: record all black cell locations and look out for 3-corner groups

a Rec class with {northwest corner, northeast corner, southwest corner}

First pass: for each pair on the same row, create a pair object and save it in a hashmap {northwest cell -> list of x-coordinates to its right}. We will iterate the list later on.

2nd pass: scan each column. For each pair on the same column, say cells A and C, use A to look up the list of northeast corners in the hashmap. Each northeast corner B would give us a 3-corner group. For every 3-corner group, check if the fourth corner exists.

— idea 1: row by row scan.
R rows and C columns. Assuming C < R i.e. slender matrix

For the first row, record each ascending pair [A.x, B.x] (no need to save the y coordinate) in a big hashset. If there are S black cells on this row, that’s O(SS) pairs.

In the next row, for each new pair, probe the hashset. On a hit we have a rectangle; otherwise add the pair to the hashset. If there are T black cells on this row, that’s O(TT) probes and (if no lucky break) O(TT) inserts.

Note one single hashset is enough. Any pair matching an earlier pair identifies a rectangle. The matching would never occur on the same row 🙂 Optionally, We can associate a y-coordinate to each record, to enable easy calculation of area.

After all rows are processed, if no rectangle, we have O(SS+TT+…). Worst case the hashset can hold C(C-1)/2 pairs, so we are bound by O(CC). We also need to visit every cell, in O(CR)

If C > R, then we should process column-by-column, rather than row-by-row

Therefore, we are bound by O( min(CC,RR) + CR). Now min(CC,RR) < CR, so we are bound by O(CR) .. the theoretical limit.
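As a sketch, idea 1 translates into a short Python routine (my own illustration, not from the blogpost’s repo; the function name and return format are made up):

```python
def find_black_rectangle(matrix):
    """Row-by-row scan (idea 1). Each row contributes its pairs of black
    column indices; any pair already seen on an earlier row closes a
    rectangle. Returns ((rowA, rowB), (col1, col2)) or None."""
    seen = {}  # (col1, col2) -> earliest row where this pair appeared
    for r, row in enumerate(matrix):
        cols = [c for c, cell in enumerate(row) if cell == 1]  # black == 1
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                pair = (cols[i], cols[j])
                if pair in seen:               # matched an earlier row
                    return ((seen[pair], r), pair)
                seen[pair] = r
    return None
```

Note the single hashmap never exceeds C(C-1)/2 entries, matching the bound above when we scan along the shorter dimension.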

— idea 4: for each diagonal pair found, probe the other corners
If there are H black cells, we have up to HH pairs to check 😦 In each check, the success rate is too low.

— Idea 5: brute force — typewriter scan. For each black cell, treat it as a top-left corner and look rightward for a top-right. If found, scan down for a bottom-left/bottom-right pair. If not found, finish the current row and move on to the next.

For a space/time trade-off,

  • once I find a black cell on the current row, I put it in a “horizontal” vector.

standard SQL to support pagination #Indeed

Context — In the Indeed SDI, each company page shows the latest “reviews” or “articles” submitted by members. When you scroll down (or hit Next10 button) you will see the next most recent articles.

Q: What standard SQL query can support pagination? Suppose each record is an “article” and page size is 10 articles.

I will assume each article has an auto-increment id (easier than a unique timestamp) maintained in the database table. This id enables the “seek” method. The first page (usually the latest 10 articles) is sent to the browser. Then the “fetch-next” command from the browser would contain the last id fetched. When this command hits the server, should we (AA) return the next 10 articles after that id, or (BB) check the latest articles again and skip the first 10? I prefer AA. BB can wait until the user refreshes the web page.

The SQL-2008 industry standard supports both the (XX) top-N feature and the (YY) offset feature, but for several reasons [1], only XX is recommended:

select * from Articles where id < lastFetchedId order by id desc fetch first 10 rows only

[1] http://www.use-the-index-luke.com clearly explains that the “seek” method is superior to the “offset” method. The BB scenario above is one confusing scenario affecting the offset method. Performance is also problematic when offset value is high. Fetching the 900,000th page is roughly 900,000 times slower than the first page.

WallSt=age-friendly to older guys like me

I would say WallSt is Open to older techies.

I wouldn’t say WallSt is kind to old techies.

I would say WallSt is age-friendly

I would say WallSt offers a bit of the best features of age-friendly professions such as doctors and accountants.

Q: I sometimes feel WallSt hiring managers are kind to older techies like me, but really?
A: I feel WallSt hiring managers are generally a greedy species but there are some undercurrents :

  • 🙂 traditionally, they have been open to older techies who are somewhat less ambitious, less driven to move up, less energetic, less willing to make personal sacrifices. This tradition has been prevalent for decades in U.S. work culture and I believe it will stay. No such tradition in China or Singapore.
  • 🙂 U.S. hiring managers may avoid bright young candidates #Alan
  • 🙂 some older guys do perform well above expectations. Capability decline with age is very mild and questionable in many individuals; work ethic differs among individuals, unrelated to age.
  • 😦 if an older guy needs to be cut, many hiring managers won’t hesitate… merciless.

Overall, I think Wall St hiring managers are open to older guys but not sympathetic or merciful. They are profit-driven, not compassionate. The fact that I am so welcome on Wall St is mostly due to my java/c++ QQ, not anyone’s kindness.

I thank God. I don’t need to thank Wall St.

j4 dev-till-70 [def]

  • j4: I’m good at coding. We want to work with what we are good at. I have worked 20 years so by now I kinda know what I’m good at.
  • j4: given my mental power, use it or lose it.
  • j4: real world problem-solving, not hypothetical problems, not personal problems.
  • j4: responsibility/ownership — over some module
    • teaching also works
    • volunteering also works
  • j4: interaction with young people
    • teaching also works
  • j4: respect — from coworkers and users. I want to be a shining example of an old but respectable hands-on techie but not an expert.
    • teaching also works
  • j4: service — provide a useful service to other people. Who are these other people? LG2
    • teaching also works
  • j4: meaningful work? vague
  • j4: be relevant to the new economy? vague

advantage@old_techie: less to lose in the race !! unlike a young man fearing 江郎才尽 (talent running dry early)

Imagine that on a new job you struggle to clear the bar:

  • in figure-things-out speed,
  • in ramp-up speed,
  • in independent code-reading…
  • in delivery speed
  • in absorbency,
  • in level of focus
  • in memory capacity — asking the same questions over and over.
  • in dealing with ambiguity and lack of details
  • in dealing with frequent changes

As a 30-something, you would feel terrified, broken, downcast, desperate…, since you are supposed to be at your prime in terms of capacity growth. You would worry about passing your peak way too early and facing a prolonged decline … 江郎才尽 (talent running dry early).

In contrast, an older techie like me starts the race in a new team against a lower level of expectation [1] and has nothing (or less) to prove, so I can compete with less emotional baggage.

A related advantage — older techies have gone through a lot, so we have some wisdom. The other side of the coin — we could fall into the trap of 刻舟求剑 (applying outdated lessons to a changed situation).

Managers and coworkers naturally have lower expectations of older techies. Grandma’s wisdom — she always reminds me that I don’t have to benchmark myself against younger team members all the time. In some teams, I can follow her advice.

Any example? Bill Pinsky, Paul and CSY of RTS?

[1] WallSt contract market

opaque c++ trouble-shooting: bustFE streamIn

This is a good illustration of a fairly common class of opaque c++ problems, the most dreadful/terrifying species of developer nightmares.

The error seems to be somewhat consistent but not quite.

Reproducing it in the dev environment was the first milestone. Adding debug prints proved helpful in this case, but sometimes that would take too long.

In the end, I needed a good hypothesis, before we could set out to verify it.

     81     bool SwapBustAction::streamInImpl(ETSFlowArchive& ar)
     82     { // non-virtual
     83       if (exchSliceId.empty())
     84       {
     85         ar >> exchSliceId;
     86       }
    104     }
    105     void SwapBustAction::streamOutImpl(ETSFlowArchive& ar) const
    106     { // non-virtual
    107       if (exchSliceId.size())
    108       {
    109         ar << exchSliceId;
    110       }

When we save the flow element to file, we write out the exchSliceId field conditionally as on Line 107, but when we restore the same flow element from file, the function looks for this exchSliceId field unconditionally as on Line 85. When the function can’t find this field in the file, it hits BufferUnderflow and aborts the restore of entire flow chain.

The serialization file uses field delimiters between the exchSliceId field and the next field, which could be a map. When the exchSliceId field is missing and the map is present, the runtime notices an unusable data item and throws a runtime exception in the form of assertion errors.

The “unconditional” restore of exchSliceId is the bug. We need to check the exchSliceId field is present in the file, before reading it.

In my testing, I only had a test case where exchSliceId was present. Insufficient testing.
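The bug pattern (conditional write, unconditional read) is easy to reproduce in any serialization scheme. Below is a tiny Python emulation of the pattern; it is only a sketch, not the production ETSFlowArchive code, and all names are made up:

```python
def stream_out(buf, exch_slice_id, next_field):
    """Writer: like streamOutImpl, the id is written only when non-empty."""
    if exch_slice_id:
        buf.append(exch_slice_id)
    buf.append(next_field)

def stream_in_buggy(buf):
    """Reader: like streamInImpl, reads the id unconditionally. When the
    id was never written, it swallows the next field and then underflows."""
    exch_slice_id = buf.pop(0)  # may actually be next_field!
    next_field = buf.pop(0)     # IndexError (buffer underflow) in that case
    return exch_slice_id, next_field
```

The symmetric fix is to write a presence marker (or detect one) so the reader only consumes the field when it actually exists in the stream.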

##skillist to keep brain active+healthy

— generally I prefer low-churn (but not lethargic) domains to keep my brain reasonably loaded:
[as] classic data structure+algorithms — anti-aging
[s] C/C++,
SQL,
c++ build tool chain, and trouble-shooting using instrumentation
c++ TMP
[s] memory-mgmt
[s] socket,
[s] mkt data,
[s] bond math,
basic http?
[a = favors accumulation]
[s = favors slow-changing domains]
— Avoid churn
  • jxee,
  • c#
  • scripting,
  • GUI
— Avoid white-hot domains popular with young bright guys … too competitive. But if you can cope with the competition, they could keep your brain young.
  • quant
  • cloud? but ask Zhao Bin
  • big data
  • machine learning

anagramIndexOf(): frqTable+slidingWindow

Q: Similar to java indexOf(string pattern, string haystack), determine the earliest index where any permutation of pattern starts.

====analysis

https://github.com/tiger40490/repo1/blob/py1/py/algo_str/anagramIndexOf.py is my tested solution featuring

  • O(H+P) where H and P are the lengths
  • elegant sliding window with frequency

Aha — worst input involves only 2 letters in haystack and pattern. I used to waste time on the “average” input.

On the spot I had no idea, so I told the (pregnant) interviewer about a brute-force solution: generate all P! permutations of the pattern and look for each one in the haystack.

Insight into the structure of the problem — the pattern can be represented by a frequency table, therefore, this string search is easier than regex!

Then I came up with a frequency table constructed for the pattern, and explained checking each position in the haystack. The interviewer was fine with O(H*P), so I implemented it fully, with only 5 minutes left. My implementation was presumably slower than other candidates’.

A few hours later, I realized there’s an obvious linear-time sliding window solution, but it would have taken more than the available time to implement. Interviewer didn’t hint at all there was a linear time solution.
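For the record, here is a sketch of that linear-time sliding-window solution in Python (my own rendering, consistent with but not copied from the repo solution; the Counter comparison costs O(26) per step):

```python
from collections import Counter

def anagram_index_of(pattern, haystack):
    """Earliest index in haystack where some permutation of pattern
    starts, or -1. Roughly O(H + P) with a fixed-size alphabet."""
    P, H = len(pattern), len(haystack)
    if P == 0 or P > H:
        return -1
    need = Counter(pattern)
    window = Counter(haystack[:P])     # frequency table of current window
    if window == need:
        return 0
    for i in range(P, H):
        window[haystack[i]] += 1       # slide: admit one char on the right
        left = haystack[i - P]
        window[left] -= 1              # ... and retire one on the left
        if window[left] == 0:
            del window[left]           # drop zero counts so == compares cleanly
        if window == need:
            return i - P + 1
    return -1
```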

— difficulty and intensity

I remember feeling extremely tense and under extreme pressure after the interview, because of the initial panic. On the spot I did my best, and I improved over the brute-force solution after calming down.

Many string search problems require DP or recursion-in-loop, and would be too challenging for me. This problem was not obvious and not easy for me, so I did my best.

I didn’t know how to deep clone a dict. The Indeed.com interviewers would probably consider that a serious weakness.

dev-till-70: 7 external inputs

Context — professional (high or low end) programmer career till my 70’s. The #1 derailer is not physical health [3] but my eventual decline of “brain power” including …?

[3] CSY and Jenny Lu don’t seem to agree.

This discussion is kinda vague, and my own thoughts are likely limited in scope, not systematic. Therefore, external inputs are extremely useful. I posed the same questions to multiple friends.

Q2: what can I do now given my dev-till-70 plan defined above.
Q1: how do I keep my brain healthy, and avoid harmful stress?

— Josh felt that the harmful stress in his job was worse in his junior years when he didn’t know the “big picture”. Now he feels much better because he knows the full context. I said “You are confident you can hold your end of the log. Earlier you didn’t know if you were good enough.”

— Grandpa gave the Marx example — in between intense research and writing, Marx would solve math problems to relax the brain. I said “I switch between algo problem solving and QQ knowledge”

— Alex V of MS — Ask yourself
Q: compared to the young grads, what job functions, what problems can you handle better? My mental picture of myself competing against the young guys is biased against my (valuable) battlefield experience. Such experience is discounted to almost $zero in that mental picture!

When I told Alex my plan to earn a living as a programmer till 70, he felt I definitely need a technical specialization. Without one, I would have very little hope competing with people 40 years younger. I said I intend to remain a generalist. Alex gave some examples of skills younger people may not have the opportunity to learn:

  • low-latency c++
  • c++ memory mgmt
  • specific product knowledge
  • — I added:
  • sockets
  • .. I have a few skillist blogposts related to this

— Sudhir
Mental gymnastics is good, like board games, coding practice and Marx’s math practice, but all of these are secondary to (hold your breath) … physical workout, including aerobic and strength training!

Grandpa said repeatedly the #1 key factor is physical health, though he didn’t say physical health affects brain capacity.

I told Sudhir that I personally enjoy outdoor exercise more than anything else. This is a blessing.

Also important is sleep. I think CSDoctor and grandpa are affected.

Sudhir hinted that lack of time affects sleep, workout and personal learning.

  • Me: I see physical exercise and sleep as fundamental “protections” of my brain. You also pointed out that when we reach home we often feel exhausted. I wonder if a shorter commute would help create more time for sleep/workout and self-study. If yes, then is commute a brain-health factor?
  • Sudhir: Absolutely, shorter commutes are always better, even if that means we can only afford smaller accommodation. Or look for a position that allows working remotely more frequently.

Sudhir also felt (due to a current negative experience) that an encouraging team environment is crucial to brain health. He said mental stress is necessary, but fear is harmful. I responded, “A startup might be better”.

— Jenny Lu felt by far the most important factor is consistent physical exercise to maintain vitality. She felt this is more important than mental exercise.

I said it is hard to maintain consistency. She replied that it is doable and necessary.

— Junli felt mental exercise and physical exercise are both important.

When I asked him what I can do to support dev-till-70, he identified several demand-side factors —

  • He mentioned 3 mega-trends — cloud; container; micro-service.
    • Serverless is a cloud feature.
  • He singled out Spring framework as a technology “relevant till our retirement time”

— CSY pointed out the risk of bone injury.

He said a major bone injury in old age can lead to immobility and the start of a series of declines in many body parts.

— XR’s demand-oriented answer is simple– keep interviewing. He felt this is the single most effective thing I can do for dev-till-70.

freelist in pre-allocated object pool #DeepakM

Deepak CM described a pre-allocated free-list used in his telecom system.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/lang_66mem/fixedSizedFreeList.cpp is my self-tested implementation. I am proud of the low-level details that I had to nail down one by one.

He said his system initialization could make 400,000 new() calls to allocate 400,000 dummy objects and put them into a linked list. My design /emulates/ this with a single malloc() call. This all happens at startup time.

During the day, each new msg will overwrite [1] a linked node retrieved at the Head of the slist.

[1] using operator=(). Placement-new would be needed if we use a single malloc()

Every release() will link in the node at the Tail of the slist. Can we link it in at the Head? I think so. Benefit — it would leave a large chunk of contiguous free space near the tail, i.e. reduced fragmentation.

Illustration — Initially the physical addresses in the slist are likely consecutive like addr1 -> addr2 -> addr3…. After some release() calls, it would look like a random sequence.

Using return-to-Head, I would get

  • pop, pop, pop: 4->5->..
  • rel 2: 2->4->5..
  • pop: 4->5->….
  • rel 1: 1->4->5…

— The API usage:

  • void ffree(void *)
  • void * fmalloc(size_t), possibly used by placement new
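Here is a minimal Python emulation of such a pool, using an index-linked free list over one contiguous array. This is a sketch of the idea (with the return-to-Head variant), not Deepak’s telecom code:

```python
class FixedPool:
    """Pre-allocated pool: one block of `capacity` slots, threaded into a
    singly linked free list by slot index."""
    def __init__(self, capacity):
        self.slots = [None] * capacity                     # the single "malloc"
        self.next_free = list(range(1, capacity)) + [-1]   # slot i -> next free slot
        self.head = 0                                      # free-list head

    def acquire(self, payload):
        if self.head == -1:
            raise MemoryError("pool exhausted")
        idx = self.head
        self.head = self.next_free[idx]
        self.slots[idx] = payload       # overwrite in place, like operator=()
        return idx

    def release(self, idx):
        self.slots[idx] = None
        self.next_free[idx] = self.head  # return-to-Head variant
        self.head = idx
```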

TreeNode lock/unlocking #Rahul,DeepakCM

Q: given a static binary tree, provide an O(H) implementation of lock/unlock, subject to one rule — before committing a lock/unlock on any node AA, we must check that all of AA’s subtree nodes are currently in the “free” state, i.e. already unlocked. If the check fails, we must abort the operation.

H:= tree height

====analysis

— my O(H) solution, applicable to any k-ary tree.

Initial tree must be fully unlocked. If not, we need pre-processing.

Each node will hold a private hashset of “locked descendants”. Every time after I lock a node AA, I will add AA into the hashsets of AA’s parent, grandparent, great-grandparent, etc. Every time after I unlock AA, I will do the reverse, i.e. removal.

I said “AFTER locking/unlocking” because there’s a validation routine

bool canChange(node* AA){ return AA->lockedDescendants.empty(); } // to be used before locking/unlocking

Note this design requires an uplink. If no uplink is available, we can run a pre-processing routine to populate an uplink lookup hash table { node -> parent }.

— simplified solution: Instead of a hashset, we may make do with a count, but the hashset provides useful info.
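A Python sketch of the hashset design (my own rendering; it assumes every node has an uplink, per the note above):

```python
class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.locked = False
        self.locked_desc = set()   # currently-locked nodes in my subtree

def can_change(node):              # the validation routine
    return not node.locked_desc   # i.e. all descendants are free

def lock(node):
    if node.locked or not can_change(node):
        return False
    node.locked = True
    p = node.parent
    while p:                       # O(H) walk up the ancestor chain
        p.locked_desc.add(node)
        p = p.parent
    return True

def unlock(node):
    if not node.locked or not can_change(node):
        return False
    node.locked = False
    p = node.parent
    while p:                       # reverse bookkeeping, also O(H)
        p.locked_desc.discard(node)
        p = p.parent
    return True
```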

 

longest run@same char,allow`K replacements #70%

https://leetcode.com/problems/longest-repeating-character-replacement/

Q: Given a string s that consists of only uppercase English letters, you can perform at most k operations on that string. In one operation, you can choose any character of the string and change it to any other character. Find the length of the longest sub-string containing all repeating letters you can get after performing the above operations.

Here’s my description of the same problem:

Q: Suppose we stand by a highway and watch cars of each color. There are only 26 possible colors. Cars pass fast, so sometimes we miscount.

My son says “I saw 11 red cars in a row in the fast lane”.
My daughter says “I saw 22 blue cars in a row in the middle lane”
We allow kids to miss up to 3 cars in their answer. In other words, my son may have seen only 8, 9 or 10 red cars in a row.

When we review the traffic video footage of N cars in a single lane, determine the max X cars in a row of the same color, allowing k mistakes, where k < N.
====analysis
Suppose k is 3

— solution 1: O(N) use 2 variables to maintain topFrq and w i.e. winSize

Within a sliding window of size w, maintain a frq table. initialize w to a good conservative value of 4 (i.e. k+1).

If we notice the top frequency is good enough, i.e. w-k <= topFrq, then we are lucky: we can be less conservative and expand the current window backward (possibly safer than forward).

After an expansion, immediately try further expansion. Iff impossible, i.e. w - topFrq > k, then slide the window.

If the correct answer is 11, i.e. there’s an 11-substring containing 8 reds, I feel my sliding window will not miss it.
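For comparison, the standard linear-time solution to the Leetcode version keeps exactly the two variables described (topFrq and window size), and the window slides without ever shrinking. A Python sketch:

```python
from collections import Counter

def longest_run(s, k):
    """Length of the longest substring that can be made uniform with at
    most k replacements. O(N) time, O(26) space."""
    counts = Counter()
    left = top_frq = best = 0
    for right, ch in enumerate(s):
        counts[ch] += 1
        top_frq = max(top_frq, counts[ch])
        if right - left + 1 - top_frq > k:   # window invalid: slide, never shrink
            counts[s[left]] -= 1
            left += 1
        best = max(best, right - left + 1)
    return best
```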

key^payload: realistic treeNode #hashset

I think only SEARCH trees have keys. “Search” means key search. In a search tree (or hashset), each node carries a (usually unique) key plus zero or more data fields as payload.

Insight — Conceptually the key is not part of the payload. Payload implies “additional information”. In contrast, key is part of the search tree (and hashset) infrastructure, similar to auto-increment record IDs in database. In many systems, the key also has natural meaning, unlike auto-incremented IDs.

Insight — If a tree has no payload field, then useful info can only exist in the key. This is fairly common.

For a treemap or hashmap, the key/value pair can be seen as “key+payload”. My re_hash_table_LinearProbing.py implementation saves a key/value pair in each bucket.

A useful graph (including a tree or linked list, but not a hashset) can have payloads but no search keys. The graph nodes can hold pointers to payload objects on the heap [1], or the graph nodes can hold Boolean values. Graph construction can be quite flexible and arbitrary. So how do you select a graph node? Just follow some iteration/navigation rules.

Aha — a similar situation: a linked list has a well-defined iterator but no search key. Ditto vector.

[1] With pre-allocation, a static array substitutes for the heap.

Insight — BST, priorityQ, and hash tables need search key in each item.

I think non-search trees are rarely useful. They mostly show up in contrived interview questions. You can run BFT or pre|post-order DFT, but not BF-Search / DF-Search, since there’s nothing to “search”.

Insight — You may say search by payload, but some trees have Boolean payloads.

std::move(): robbed object still usable !

Conventional wisdom says that after std::move(obj2), the object is robbed and invalid for any operation…. Well, not quite!

https://en.cppreference.com/w/cpp/utility/move specifies exactly what operations are invalid. To my surprise, a few common operations are still valid, such as clear() and operator=().

The way I read it — if (but not iff) an operation wipes out the object content regardless of current content, then the operation is valid on a robbed/hollowed object like our obj2.

Crucially, any robbed object should no longer hold a pointer to any “resource”, since that resource is now owned by the robber object. Most movable data types hold such pointers. The classic implementation (and the only implementation I know) is pointer reseating.

how many ways to decode #60%

Q(Leetcode 91): A message containing letters from A-Z is being encoded to numbers using the following mapping:

‘A’ -> 1, ‘B’ -> 2, … ‘Z’ -> 26
Given a non-empty string containing only digits, determine the total number of ways to decode it.

====analysis

I think this is similar to the punctuation problem.

— my bottom-up solution

At each position in the string, keep a “score” number that represents how many ways to decode the left-substring ending here.

Useful — define my convenient jargon: we will say the encodings 10 to 26 are “high letters”, and the encodings 1 to 9 are “low letters”. If there are 95 ways to decode a string, I will call them 95 “formulas”.

At position 33, I will look at score[31] (say, 95) and score[32] (say, 97). If the two-char substring str[32:34] is between 10 and 26, then score[33] should include the 95 ways to decode str[:32]. Those 95 “formulas” each grow by one high letter.

If str[33] is not ‘0’, then score[33] should also include the 97 ways to decode str[:33], because those 97 “formulas” can grow one low letter.

Aha — The 95 and the 97 formulas are all distinct because of the ending letter

I think we only need two variables to hold the previous two scores, but it’s easier to code with a score[] array.
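The bottom-up idea fits in a few lines of Python with two rolling variables (a sketch; the handling of leading or stranded ‘0’ digits is my addition):

```python
def num_decodings(s):
    """Count decodings of a digit string under A=1 .. Z=26."""
    if not s or s[0] == '0':
        return 0
    prev2, prev1 = 1, 1            # scores for the empty prefix and s[:1]
    for i in range(1, len(s)):
        cur = 0
        if s[i] != '0':                       # grow one "low letter" 1-9
            cur += prev1
        if 10 <= int(s[i-1:i+1]) <= 26:       # grow one "high letter" 10-26
            cur += prev2
        prev2, prev1 = prev1, cur
    return prev1
```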

y C++ will live on #in infrastructure

I feel c++ will continue to dominate the “infrastructure” domains while application developer jobs will continue to shift towards modern languages.

Stroustrup was confident that the lines of source code out there basically ensure the c++ compiler will still be needed 20 years out. I asked him, “What competitors do you see in 20 years?” He estimated there are billions of lines of c++ source code out there.

I said C would surely survive and he dismissed it. Apparently, many of the hot new domains rely on c++. My examples below all fall under the “infrastructure” category.

  • mobile OS
  • new languages’ runtimes such as the dotnet CLR, JVM
  • blockchain mining — compute intensive
  • TensorFlow
  • AlphaGo
  • google cloud
  • Jupyter for data science
  • Most deep learning base libraries are written in c++, probably for efficiency

hash table expansion: implementation note #GS

(I don’t use the word “rehash” as it has another meaning in java hashmap. See separate blogpost.)

Note this blogpost applies to separate chaining as well as linear probing.

As illustrated in my re_hash_table_LinearProbing.py, the sequence of actions in an expansion is tricky. Here is what worked:

  1. compute bucket id
  2. insert
  3. check new size. If exceeding load factor, then
    1. create new bucket array
    2. insert all entries, including the last one
    3. reseat the pointer at the new bucket array

If you try to reduce the double-insertion of the last entry, you would try moving Step 2 to later. This is tricky and likely buggy.

Say the computed bucket id was 25, so after the expansion you insert at (or around) Bucket 25, but this 25 was based on the old “capacity”. When we look up this key, we would use the current capacity to get a bucket id of 9, so we won’t find the key.
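The working sequence can be sketched in Python, in the spirit of (but not copied from) re_hash_table_LinearProbing.py: insert first, then rebuild into the bigger array (re-inserting every entry, including the last one), and reseat the pointer only at the end.

```python
class HashSetLP:
    """Linear-probing hash set illustrating the expansion sequence."""
    def __init__(self, capacity=8):
        self.buckets = [None] * capacity
        self.size = 0

    def _insert_into(self, buckets, key):
        i = hash(key) % len(buckets)
        while buckets[i] is not None:           # linear probing
            if buckets[i] == key:
                return False                    # duplicate
            i = (i + 1) % len(buckets)
        buckets[i] = key
        return True

    def add(self, key):
        if not self._insert_into(self.buckets, key):   # step 1+2: insert first
            return False
        self.size += 1
        if self.size / len(self.buckets) > 0.7:        # step 3: check load factor
            bigger = [None] * (2 * len(self.buckets))
            for k in self.buckets:                     # re-insert all entries,
                if k is not None:                      # including the last one
                    self._insert_into(bigger, k)
            self.buckets = bigger                      # reseat the pointer last
        return True

    def contains(self, key):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not None:
            if self.buckets[i] == key:
                return True
            i = (i + 1) % len(self.buckets)
        return False
```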

zero out rows/columns +! auxDS

Q (Leetcode 73): Given an m x n matrix, if an element is 0, set its entire row and column to 0. Do it in place in O(1) space. There is no time-complexity requirement.

I will assume all signed ints.

====analysis
I find this problem very well-defined, but the O(1) space constraint is highly contrived. I think it only needs a clever trick, not a really reusable technique.

Reusable technique — for a mutable array of small positive integers, with a stringent space constraint, try to save indices in the original array.

Aha — I only need to know the full set of rowIDs and columnIDs.

— My O(minimum(m,n)) space solution 1:
zeroRowCnt := how many rows to be zeroed out
zeroColCnt := how many columns to be zeroed out

Compare the two. Suppose zeroRowCnt == 11 is smaller. I will save the 11 rowIDs in a collection. Then, in a first pass, zero out every dirty column. Then use the saved rowIDs to zero out the dirty rows.
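A sketch of solution 1 in Python. For brevity this version always saves the rowIDs (so it is O(zeroRowCnt) space); the full solution would first compare zeroRowCnt with zeroColCnt and save the smaller id-set:

```python
def set_zeroes(matrix):
    """Solution 1: remember the ids of dirty rows, zero out dirty
    columns during a direct scan, then zero the remembered rows."""
    m, n = len(matrix), len(matrix[0])
    zero_rows = [i for i in range(m) if 0 in matrix[i]]   # saved rowIDs
    for j in range(n):                     # zero out by column first
        if any(matrix[i][j] == 0 for i in range(m)):
            for i in range(m):
                matrix[i][j] = 0
    for i in zero_rows:                    # then the saved rows
        for j in range(n):
            matrix[i][j] = 0
```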

–My O(1) space idea 2 — more elaborate than the published solution.

Aha — Can we save the 11 rowID’s in a column to be zeroed out?

insight — The “save indices in the original int array” technique is a halo trick in contrived coding problems. Is it practically useful in the real world? I doubt it.

Compare zeroRowCnt and zeroColCnt as earlier. Get first rowID among the 11. Suppose it’s Row #3.

Now we know Row#3 has some zeros, so find the first column having a zero. It might be the last column (farthest east). Wherever it is, we pick that column as our “bookkeeper column”.

Visual insight — Suppose bookkeeper is Column #33. Then a[3,33] would be the first zero if we scan entire matrix by-row-and-internally-by-column

We scan row by row again (since we don’t remember those 11 rowIDs), starting after that first rowID. For every rowID found, we will zero out one corresponding cell in bookkeeper column.

Insight — We should end up with exactly 11 zeros in that column. Can’t exceed 11 (only 11 rows having zeros). Can’t fall below 11 (we save all 11 rowIDs)

From now on, freeze that column until further notice. Now zero out each Column to be zeroed out, but leave out our bookkeeper column.

Lastly, follow our bookkeeper column to zero out every “dirty row”.
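Putting idea 2 together as a Python sketch (my own rendering of the steps above; it uses constant extra space beyond a few scalars):

```python
def set_zeroes_O1(matrix):
    """Idea 2: use one existing column of the matrix as the 'bookkeeper'
    to remember which rows are dirty. O(1) extra space."""
    m, n = len(matrix), len(matrix[0])
    first_zero_row = next((i for i in range(m) if 0 in matrix[i]), None)
    if first_zero_row is None:
        return                                  # no zeros at all
    bk = matrix[first_zero_row].index(0)        # bookkeeper column
    for i in range(m):                          # mark every dirty row in column bk
        if 0 in matrix[i]:
            matrix[i][bk] = 0
    for j in range(n):                          # zero dirty columns, freezing bk
        if j != bk and any(matrix[i][j] == 0 for i in range(m)):
            for i in range(m):
                matrix[i][j] = 0
    for i in range(m):                          # follow the bookkeeper markers
        if matrix[i][bk] == 0:
            for j in range(n):
                matrix[i][j] = 0
    for i in range(m):                          # bookkeeper column itself is dirty
        matrix[i][bk] = 0
```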

func overloading: pro^con #c++j..

Overloading is a common design tool in java and other languages, but more controversial in c++. As an interviewer, I once asked for “justification for using overloading as a design tool”. I kinda prefer printInt/printStr/printPtr… rather than overloading print(). I like explicit type differentiation, similar to the [[safe c++]] advice on asEnum()/asString() conversion functions.

Besides the key features listed below, the most common j4 is convenience/readability….. a non-technical justification!

— ADL — is a key c++ feature to support smart overload resolution

— TMP — often relies heavily on function overloading

— optional parameters and default arguments — unsupported in java, so overloading is the alternative.

— visitor pattern — uses overloading. See https://wordpress.com/post/bintanvictor.wordpress.com/2115

— ctor and operator overloading — no choice. Unable to use differentiated function names

— C language — doesn’t support overloading. In a sense, overloading is non-essential.

Name mangling is a key ABI bridge from c++ to C