4 components@latency

  1. Propagation delay — the time required for a message to travel from the sender to the receiver; a function of distance (and of the signal’s propagation speed).
  2. Transmission delay — the time required to push all of the packet’s bits into the link; a function of the packet’s length and the data rate of the link, and independent of distance.
  3. Processing delay — the time required to process the packet header, check for bit-level errors, and determine the packet’s destination. Personally, I would think application logic also introduces processing delay.
  4. Queuing delay — the time the packet waits in a queue until it can be processed. Personally, I would guess that a larger buffer tends to exacerbate queuing delay. I would also say there are often multiple queuing points along the critical path, as at an airport.

From https://hpbn.co/primer-on-latency-and-bandwidth/.
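To make the four components concrete, here is a back-of-envelope sketch in Python (my own illustrative numbers, not taken from the hpbn chapter) of a one-way latency estimate:

DISTANCE_M     = 5_570_000    # e.g. New York to London, metres (approximate)
SPEED_IN_FIBRE = 2.0e8        # roughly 2/3 of the speed of light, metres/second
PACKET_BITS    = 1500 * 8     # packet length in bits
LINK_BPS       = 10_000_000   # link data rate: 10 Mbps

propagation_ms  = DISTANCE_M / SPEED_IN_FIBRE * 1000   # grows with distance: ~27.9 ms
transmission_ms = PACKET_BITS / LINK_BPS * 1000        # grows with packet size, shrinks with bandwidth: 1.2 ms
processing_ms   = 0.05   # assumed per-hop header processing
queuing_ms      = 0.5    # assumed wait in router buffers

total_ms = propagation_ms + transmission_ms + processing_ms + queuing_ms
print(f"total one-way latency ~ {total_ms:.1f} ms")   # ~29.6 ms

On a long-haul path like this, propagation dwarfs transmission; on a LAN the balance reverses.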

 


engagement+spare time utilization: %%strength

Very few peers are so conscious of burn^rot. Higher utilization of spare time was a key strength during my US peak + my dotcom peak + also my high school years. We could analyze what’s common and what’s different between these peaks…

Outside those peaks, I also used this strength to complete my UChicago program, but the tangible benefit is smaller.

(This is different from efficiency on the job. Many efficient colleagues spend less time in office but get more done.  My style involves sacrificing personal spare time and family time.)

Looking forward, I guess this strength could be strategic for research-related domains, including any job involving some elements of research and accumulation.

A recurring piece of manager praise for me is “broad-based”, which relates to this strength.

What if I hadn’t worked this hard # kids

Over my 20Y career, I showed _some_ talent professionally.

In contrast, I showed significantly more talent in school. My massive study effort increased my abilities [1] to the extent that people don’t notice the difference between my talent and my abilities. Even my IQ score improved, thanks to my intellectual curiosity and absorbency. If these efforts are considered a talent, then yes, I have the talent of diligence.

[1] eg — Chinese composition, English grammar/vocabulary, many knowledge-intensive subjects

Q1: What if I had put in just an average amount of effort in school and at work? Some people (mostly guys I really don’t know well) seem to put in sporadic efforts that average out to “just average”, yet still reached a professional level similar to mine, or higher.
A: For my academic excellence, persistent effort was necessary.

A: I guess sporadic effort could be enough to reach my level of professional “success” for very bright and lucky people. But I doubt any professional programmer reached a level higher than mine without consistent effort over many years.

Professional success also depends on opportunity, on positioning and timing, on interpersonal and relationship skills, and on judgment (判断力). None of these depends directly on consistent effort.

An example of positioning — my break into the Wall St Java market, versus my peers who didn’t make that move.

To analyze this question, I need to single out an under-appreciated talent — absorbency capacity, i.e. the ability to endure solitude and monotony (耐得住寂寞), where I score in the 97th percentile.

Q2: If my son only puts in sporadic and average efforts, could he end up losing out in the competition?

trauma creates knee-jerk reactions to PIP #Deepak

Hi Deepak,

You endured a traumatic episode without a job for months. I had, on a smaller scale, traumatic experiences under managers who didn’t appreciate my effort and demanded improvements in performance. I felt like damaged goods.

I now believe these traumatic experiences shape an individual’s outlook, to put it mildly. In each individual’s career, there are only one or two defining experiences. These singular experiences tend to leave a long, deep scar on our psyche.

In my career, the biggest pain is not job loss. In fact, as I said last time, in hindsight my job loss was a positive turning point. My biggest pains were always negative performance reviews. I’m so scared and scarred that I now assign a disproportionate value to my manager’s assessment, and basically ignore other people’s assessments and the level of difficulty of my role. What I ignore are crucial factors; ignoring them is an irrational decision and leads to a distorted perception of myself relative to coworkers.

I developed a naive, knee-jerk reaction: as soon as I get a negative assessment from a manager, I immediately see myself as damaged goods, inferior and incompetent, when in reality the role expectations could be wholly unsuitable for me. Imagine you are expected to give salesy presentations to upper management and you are seen as not persuasive enough, not technical enough.

versioned-queue problem

I think this problem is mostly about data structure, not algorithm.

Q: Design and implement a Version-Queue. A Version-Queue maintains a version number along with normal Queue functionality. Every version is a snapshot of the entire queue. Every operation[Enqueue/Dequeue] on the Queue increments its version.

Implement the following functions:

1. Enqueue – appends an element at the end of the queue.
2. Dequeue – returns the top element of the queue.
3. Print – it takes a version number as input and prints the elements of the queue of the given version. The version number input can also be an old/historical version number.

E.g. if the current version number of the queue is 7 and the input to this function is 5, then it should print the elements of the queue when its version number was 5.

For simplicity, assume the elements are integers.

We expect you to write a helper program to test the above data structure, which reads input from stdin and prints output to stdout.

Input format:
First line should have an integer n (number of operations). This should be followed by n number of lines, each denoting one operation.
e.g.
6
e 1
e 4
d
e 5
p 2
p 4

‘e’ stands for enqueue, ‘d’ for dequeue, and ‘p’ for print.

— My design —
In addition to the regular queue data structure, we need a few helper data structures.

All current + historical queue elements are saved as individual elements in a “Snapshot” vector, in arrival order. This vector never decreases in size even during dequeue. Two pointers represent the trailing edge (dequeue) and leading edge (enqueue).

(minor implementation detail — since it’s a vector, the two pointers should be implemented as two integer index variables, because real pointers and iterators would get invalidated by automatic resizing.)

Every enqueue operation moves the right pointer one step to the right;
every dequeue operation moves the left pointer one step to the right.
(Neither pointer ever moves left.)

With this vector, I think we can reconstruct each snapshot in history.

Every pointer increment is recorded in a Version table, which is a vector of Version objects {Left pointer, Right pointer}. For example, if Version 782 has {L=22, R=55} then the snapshot #782 is the sub-array from Index 22 to 55.
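A minimal Python sketch of this design (my own illustration, using Python lists and integer indices in place of the vector and the two pointers, and assuming integer elements as the problem states):

class VersionedQueue:
    def __init__(self):
        self.snapshot = []        # every element ever enqueued, in arrival order; never shrinks
        self.versions = [(0, 0)]  # version k -> (left, right) indices into snapshot

    def enqueue(self, x):         # O(1): append the element and advance the right index
        self.snapshot.append(x)
        left, right = self.versions[-1]
        self.versions.append((left, right + 1))

    def dequeue(self):            # O(1): advance the left index and return the front element
        left, right = self.versions[-1]
        if left == right:
            raise IndexError("dequeue from empty queue")
        self.versions.append((left + 1, right))
        return self.snapshot[left]

    def print_version(self, v):   # O(M): print the queue as it was at version v
        left, right = self.versions[v]
        print(*self.snapshot[left:right])

Running the sample input through it — enqueue(1), enqueue(4), dequeue(), enqueue(5) — print_version(2) shows “1 4” and print_version(4) shows “4 5”.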

Additional space costs:
O(K) where K = number of operations

Additional time costs:
Enqueue operation — O(1). Insert at the end of the Snapshot vector and the Version vector.
Dequeue operation — O(1). Move a pointer + insert into the Version vector.
Print operation — O(M) where M = maximum queue capacity.

Minor point — To avoid automatic vector resizing, we need to estimate in advance the limit on K, i.e. the number of operations. If you tell me you get millions of operations a microsecond, then over a day there could be trillions of “versions”, so the Versions vector and the Snapshot vector need sufficient initial capacity.

Note we could get 9000 enqueue operations in a row.

Virtual_Machine^guest_OS

https://www.virtualbox.org/manual/ch01.html#virtintro explains that

The guest OS runs in a virtual machine or “vm”. A “vm”

  • usually refers to a container process, if it’s “live”;

  • more often, a vm means a vm-config, i.e. a collection of parameters defining a physical container process to be started.

It’s important to realize (taking a Windows host OS as an example) that a vm is strictly an application with a window, like a browser or a shell. As such, this application has its own config data saved on disk.

 

many IV failures before 1st success #c++/HFT/CIV

  • I had so many failures at c++ interviews before I started passing. Now I look like a c++ rock star to some.
  • I had so many failures at HFT interviews before I started passing at DRW, SIG, Tower
  • I had so many failures at remote speed coding before I started passing.

With west-coast-type companies, including nsdq, I can see my ranking rising. If I try 10 more times, the chance of further progress is more than 70%.

I /may never/ become a rock star in these west coast CIVs but I see my potential as a dark horse in 1) white-board CIV, 2) pure-algo

“May never” means “may not happen” … Scott Meyers

[18] CONTINUOUS coding drill #Shi

My friend CSY said some students, like his son, could conceivably focus “all their time” on one skill (coding drill) for four years in college, so they will “surely” outperform.

I pointed out that I am often seen as such an individual, but my speed coding interview performance is hardly improving.

I pointed to the number of Leetcode problems I have solved with all tests passed. It grew by up to 10 each year, but a student can solve 10 Leetcode problems in half a day.

I gave an analogy of my Macq manager’s weekly slow-jogging, for health not for performance. Consistent jogging like that is great for health. Health is more important than athletic performance. I said jogging and my coding drill are life-style hobbies and recreations.

For years I practiced continuous self-learning on

  • java, c++, SQL, MOM, swing — IV and GTD growth
  • Unix, python — mostly GTD
  • quant

I consider my continuous self-learning a key competitive advantage, and an important part of my absorbency capacity.

I asked a bright young Chinese grad, ChengShi. He practiced real coding for months in college. I said “fast and accurate” is the goal, and he agreed. A few years into his full-time job, he stopped that practice. The result? He said that for the same coding problem he could make it work in 10 minutes back then, but now it would probably take 20 minutes. I think this is typical of the student candidates.

I asked, “Do you know anyone who keeps up the coding drill?” He couldn’t name any individual but made a few points:

  • he believed the only reason for coding drill is job hunting
  • when I said that continuous practice all year round makes a big difference compared to practicing only when job hunting, he agreed wholeheartedly and reiterated the same observation/belief
  • but he disagreed that continuous coding drill would lead to significant professional growth as a programmer, so he would probably channel his spare energy elsewhere.
  • I think he has other ideas of what significant growth means. At my age, I don’t foresee any “significant growth”.

master The yield-generator

https://github.com/tiger40490/repo1/tree/py1/py/algo_combo_perm uses python q[ yield ] to implement classic generators of

  • combinations
  • permutations
  • abbreviations
  • redraw

Key features and reasons why we should try to memorize it

  1. very brief code, yet not too dense (cryptic) .. helps us remember.
  2. reliability — brevity means fewer hiding places for a bug. I added automated tests for basic quality assurance, and the solution is published.
  3. versatile — one single function to support multiple classic generators.
  4. yield functions’ suspend-resume feature is particularly powerful and possibly a power drill. This is my first opportunity to learn it.
  5. instrumentation — relatively easy to add prints to see what’s going on.
  6. halo — yield-based generator functions are a halo and also a time-saver
  7. elegant — brief, simple, relatively easy to understand
  8. recursive — it calls itself recursively in a loop, therefore fairly brief but powerful
  9. useful in take-home assignments
  10. identity-aware — note that each call to myYieldFunc(44) creates a new generator object; once a given generator object has reached the end of its execution, it produces no more output.
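For reference, a generic yield-based combinations generator in this recursive style might look like the following (my own sketch, not the exact code from the repo):

def combos(pool, k):
    """Yield every k-element combination of pool, preserving order."""
    if k == 0:
        yield []              # the single empty combination
        return
    for i in range(len(pool) - k + 1):
        head = pool[i]        # pick one element, then recurse on the remaining suffix
        for tail in combos(pool[i + 1:], k - 1):
            yield [head] + tail

for c in combos("abcd", 2):
    print("".join(c))         # ab ac ad bc bd cd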

— How I might try to memorize it

  1. If needed, we just need to reproduce it quickly from memory a few times
  2. I added comments to help me understand and memorize it

G3 survival capabilities #health;burn rate

Out of the subconscious realm, I hand-picked a couple of personal “qualities” [2] as the (three or five) pillars of my life [1], the most important pillars since 2007. 2007 is the /watershed year/.

These are the pillars for my past, present and my future, immigration plan, family housing, retirement plan, … There’s possibly nothing new here, but this blogpost provides another perspective into the “making” of my career and the foundation of my entire life. As such, this is not a worthless reflection. Still, avoid over-thinking as there are other worthwhile reflections.

I think everyone can consider the same question — “Using words that are as specific as you can find, name 2 or more personal survival capabilities, hopefully the most important ones.”

  • AA) IV prowess — (since Chartered). Compare CSY’s experience.
    • self-renewal to stay relevant — lifelong learning. Am rather successful so far
    • my reflective blogging is actually part of my career way-finding, motivation…
    • absorbency — of dry, theoretical, low-level domains. /continuous/ plow-back without exhaustion
    • theoretical complexity — aptitude and absorbency
    • lower-level — overall, i’m stronger than my peers at low-level
  • BB) my capacity to keep a well-paying dev job (not as “unique” as the other 2 capabilities). Even in the three Singapore cases, I was able to hold it for about 2 years, or receive a transfer or severance.
    • figure-things-out speed?  Not really my strength but my competence
    • camp-out, extra hours
    • attention to details
    • getting the big picture?
  • [G3] personal burn-rate — important to my feeling of long term security.
    • See the Davis chat on old-timers.
  • — The following factors are less “unique” but I want to give them credit too
  • [G3] my healthy lifestyle and bread-earning longevity — in the long run this factor would prove increasingly important. It enables me to keep working long past retirement age.
  • [G5] my bold investment style [2]
  • my education and self-learning capabilities including English skills
  • [G3 = a top-3/Group3 factor]

! benchmarking — AA is a single-winner competition, whereas BB is about staying above minimum standard in the team.

! strengthening — I continue to plow back and build my AA/BB inner strength (内力), focusing on localSys + coding drill. A huge amount of energy, absorbency, continuous effort and dedication is needed (cf. XR, YH, CSY…), though since 2018 I have noticed the ROTI is not as high as before 2012.

[1] to some extent, my family’s life also rests on these same pillars, but I tend to under-estimate the capabilities of my family members and over-estimate my earning power.
[2] in this post I don’t want to include my financial assets since I want to focus on personal qualities

effectively ignore whitespace change]code review #perl

$ perl -pe 's/\s//g' tcm_creator.py > new
$ git checkout some-commit tcm_creator.py
$ perl -pe 's/\s//g' tcm_creator.py > old
$ diff old new # should show no discrepancy

This worked for me. The entire file is flattened into a single long string.

Iff the whitespace change is always confined within a line, i.e. no line splitting/joining, then perl -l -pe is more useful.

 

async messaging-driven #FIX

A few distinct architectures:

  • architecture based on UDP multicast. Most of the other architectures are based on TCP.
  • architecture based on FIX messaging, modeled after exchange-to-bank messaging, using multiple request/response messages to manage one stateful order (see the toy sketch after this list)
  • architecture based on pub-sub topics, much more reliable than multicast
  • architecture based on one-to-one message queue
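As a toy illustration of the FIX-style bullet above (entirely my own sketch, not a real FIX engine), one stateful order can be managed through multiple asynchronous request/response messages over two queues, correlated by an order id:

import queue, threading

requests  = queue.Queue()   # outbound "to exchange" messages
responses = queue.Queue()   # inbound "from exchange" messages

def fake_exchange():
    # consume each request and asynchronously emit an acknowledgement, then a fill
    while True:
        msg = requests.get()
        if msg is None:
            break
        responses.put({"order_id": msg["order_id"], "type": "ACK"})
        responses.put({"order_id": msg["order_id"], "type": "FILL", "qty": msg["qty"]})

threading.Thread(target=fake_exchange, daemon=True).start()

oid = 1
requests.put({"order_id": oid, "type": "NEW", "qty": 100})   # request #1 for this order

state = "PENDING"
while state != "FILLED":            # the order's state evolves as responses arrive
    resp = responses.get()
    if resp["order_id"] != oid:     # correlate each response to the right order
        continue
    state = "ACKED" if resp["type"] == "ACK" else "FILLED"
    print(resp["type"], "->", state)

requests.put(None)                  # stop the fake exchange thread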

reasons to limit tcost@SG job hunt #XR

XR said a few times that it is too time-consuming to prepare for job interviews each time. The 3 or 4 months he spent have no long-term value. I immediately voiced my disagreement, because I took IV fitness training as a lifelong mission, just like jogging, yoga or chin-ups.

This view remains my fundamental perspective, but my disposable time is limited. If I can save that time and spend it on some meaningful endeavors [1], then it’s better to have a shorter job hunt.

[1] Q: what endeavors?
A: yoga
A: diet
A: stocks? takes very little effort
A: ?

strategic value of MOM]tech evolution

What’s the long-term value of MOM technology? “Value” to my career and to the /verticals/ I’m following such as finance and internet. JMS, Tibrv (and derivatives) are the two primary MOM technologies for my study.

  • Nowadays JMS (tibrv to a lesser extent) seldom features in job interviews and job specs, but the same can be said about servlet, xml, Apache, java app servers .. I think MOM is falling out of fashion, but it is not a short-lived fad technology. MOM will remain relevant for decades. I saw this longevity when deciding to invest my time.
  • Will socket technology follow the trend?
  • [r] Key obstacle to MOM adoption is perceived latency “penalty”. I feel this penalty is really tolerable in most cases.
  • — strengths
  • [r] compares favorably in terms of scalability, efficiency, reliability, platform-neutrality.
  • encourages modular design and sometimes decentralized architecture. Often leads to elegant simplification in my experience.
  • [r] flexible and versatile tool for the architect
  • [rx] There has been extensive lab research and industrial usage to iron out a host of theoretical and practical issues. What we have today in MOM is a well-tuned, time-honored, scalable, highly configurable, versatile, industrial strength solution
  • works in MSA
  • [rx] plays well with other tech
  • [rx] There are commercial and open-source implementations
  • [r] There are enterprise as well as tiny implementations
  • — specific features and capabilities
  • [r] can aid business logic implementation using content filtering (doable in rvd+JMS broker) and routing
  • can implement point-to-point request/response paradigm
  • [r] transaction support
  • can distribute workload as in 95G
  • [r] can operate in-memory or backed by disk
  • can run from firmware
  • can use centralized hub/spoke or peer-to-peer (decentralized)
  • easy to monitor in real time. Tibrv is subject-based, so you can easily run a listener on the same topic
  • [x=comparable to xml]
  • [r=comparable to RDBMS]

[19] 2 reasons Y I held on to c++ NOT c#

In both cases, I faced a steep /uphill/ climb in terms of GTD-traction, engagement, sustained focus, and a smaller-than-expected job market [1] .. so why did I hold on to c++ but abandon c#?

[1] actually c# was easier than c++ in GTD-traction, entry barrier, opacity

Reason: GUI — 95% of the c# jobs I saw were GUI jobs, but GUI is not something I decided to take on. The server-side c# job market has remained extremely small.

Reason: in 2015 after Qz, I made the conscious decision to refocus on c++. I then gained some traction in GTD and IV, enough to get into RTS. By then, it was very natural for me to hold on to c++.

— minor reasons

Reason: west coast coding tests — python and c/c++ are popular