git | merge-commits and pull-requests

Key question — Q1: which commit would have multiple parents?

— scenario 1a:

  1. Suppose your feature branch brA has a commit hash1 at its tip, and the master branch's tip is hashJJ, which is the parent of hash1
  2. Then you decide to simply q[ git merge brA ] into master

In this simple scenario, your merge is a fast-forward merge. The updated master would now show hash1 at the tip, whose only parent is hashJJ.

A1: No commit would have multiple parents. Simple result. This is the default behavior of git-merge.

Note this scenario is similar to https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-request-merges#rebase-and-merge-your-pull-request-commits

However, the GitHub and Bitbucket pull-request flows don't support it exactly.

— scenario 1b:

Instead of a simple git-merge, what about a pull request? A pull request uses q[ git merge --no-ff brA ], which (I think) unconditionally creates a merge-commit hashMM on master.

A1: now hashMM has two parents. In fact, git-log shows hashMM as a “Merge” with two parent commits.

The result is unnecessarily complex. Therefore, in such simple scenarios, it's better to use git-merge rather than a pull request.

https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-request-merges explains the details.

— Scenario 2: What if hashJJ (master's tip) is Not the parent of hash1?

Now master and brA have diverged. I think you can't avoid a merge commit hashMM.

A1: hashMM

— Scenario 3: continue from Scenario 1b or Scenario2.

  3. Then you commit on brA again, creating hash2.

Q: What’s the parent node of hash2?
A: I think git actually shows hash1 as the parent, not hashMM !

Q: is hashMM on brA at all?
A: I don’t think so but some graphical tools might show hashMM as a commit on brA.

I think now the master branch shows hashMM having two parents (hashJJ + hash1), and brA shows hash1 -> hash2.

I guess that if after the 3-way-merge, you immediately re-create (or reset) brA from master, then hash2’s parent would be hashMM.
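Since commit parentage is the crux of Q1, the three scenarios can be modeled with a tiny sketch (hashJJ/hash1/hashMM/hash2 are the hypothetical names from above, not real git objects):

```python
class Commit:
    """Minimal stand-in for a git commit: a name plus a list of parents."""
    def __init__(self, name, parents):
        self.name, self.parents = name, parents

hashJJ = Commit("hashJJ", [])        # old master tip
hash1  = Commit("hash1", [hashJJ])   # brA tip; hashJJ is its only parent

# Scenario 1a: fast-forward merge -- master just moves to hash1, no new commit
master = hash1
assert len(master.parents) == 1

# Scenario 1b: git merge --no-ff -- a merge-commit hashMM with two parents
hashMM = Commit("hashMM", [hashJJ, hash1])
assert len(hashMM.parents) == 2

# Scenario 3: a new commit on brA -- its parent is hash1, not hashMM
hash2 = Commit("hash2", [hash1])
assert hash2.parents == [hash1]
```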


Note

  • direct-commit on master is implicitly fast-forward, but merge can be fast-forward or non-fast-forward.
  • a fast-forward merge can be replaced by a rebase as in Scenario 1a. The result is the same as a direct-commit.
  • the non-fast-forward merge (Scenario 1b) and the 3-way merge (Scenario 2) both create a merge-commit.
  • git-pull includes a git-merge without --no-ff

Optiver coding hackathon is like marathon training

Hi Ashish,

Looking back at the coding tests we did together, I feel they are comparable to a form of “marathon training” — I seldom run longer than 5km, but once in a while I get a chance to push myself way beyond my limits and run far longer.

Extreme and intensive training builds up the body capacity.

On my own, it’s hard to find motivation to run so long or practice coding drill at home, because it requires a lot of self-discipline.

Nobody has unlimited self-discipline. In fact, those who run so much or take on long-term coding drills all have something besides self-discipline. Self-discipline and brute-force willpower are insufficient to overcome the inertia in every one of these individuals. Instead, the invisible force, the wind beneath their wings, is some form of intrinsic motivation. These individuals find joy in the hard drill.

( I think you are one of these individuals — I see you find joy in lengthy sessions of jogging and gym workout. )

Without enough motivation, we need “organized” practice sessions like real coding interviews or hackathons. This Optiver coding test could probably improve my skill level from 7.0 to 7.3, in one session. Therefore, these sessions are valuable.

[18]latency stat: typical sell-side DMA box:10 μs

(This topic is not GTD not zbs, but relevant to some QQ interviewers.)

https://www.youtube.com/watch?v=BD9cRbxWQx8 is a 2018 presentation.

  1. AA is when a client order hits a broker
  2. Between AA and BB is the entire broker DMA engine in a single process, which parses the client order, maintains order state, consumes market data and creates/modifies the outgoing FIX msg
  3. BB is when the broker ships the FIX msg out to exchange.

Edge-to-edge latency from AA to BB, if implemented in a given language:

  • python ~ about 50 times longer than java
  • java – can aim for 10 micros if you are really really good. Dan recommends java as a “reasonable choice” iFF you can accept 10+ micros. Single-digit microsecond shops should “take a motorbike not a bicycle”.
  • c# – comparable to java
  • FPGA ~ about 1 micro
  • ASIC ~ 400 ns

— c/c++ can only aim for 10 micros … no better than java.

The stronghold of c++, the space between java and fpga, is shrinking … “constantly” according to Dan Shaya. I think “constantly” is like the growth of Everest: perhaps 2.5 inches a year.

I feel c++ is still much easier, more flexible than FPGA.

I feel java programming would have to become more unnatural than c++ programming in order to compete with c++ on latency.

Kenneth of MLP said his engine gets a CMF-format order message from the PM (AA), does some minimal checks, and (BB) sends it as FIX to the broker. Median latency from AA to BB is 40 micros.

— IPC latency

Shared memory beats TCP hands down. For an echo test involving two processes:

Using an Aeron same-host messaging application, 50th percentile is 250 ns. I think NIC and possibly kernel (not java or c++) are responsible for this latency.

Kenneth said shared memory latency (also Aeron same-host) is 1-4 micros, measured between XX) the PM writes the order object into shm, and AA) the engine reads the order from shm.

AlmostIncreasing #AshS

Q: (from GrassHopper Nov 2020): given an int array (size < 100000), can you make the array strictly increasing by removing at most one element?

https://github.com/tiger40490/repo1/tree/cpp1/cpp/algo_arr has my solution tested locally.

====analysis: well-understood, simple requirement. Simple idea as implemented in sol2kill(). But there are many clumsy ways to implement this idea.

There are also several less-simple ideas that could be useful in other problems

— idea: scanning from both ends. When the left pointer hits a roadblock (stops rising) at CC, we know BB and CC may need to go, but AA is safe, so we can say max-from-left is AA or higher.

When the right pointer hits a roadblock at BB, then we know the only roadblock in existence is AA.BB.CC.DD. So min-from-right is DD or lower. So DD must exceed AA, and one of BB and CC must be strictly between AA/DD.

If the right pointer hits a roadblock far away from CC, then it's probably hopeless.

This idea is truly one-pass, whereas my simple idea is arguably two-pass.
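A minimal sketch of the simple idea (my paraphrase, not the repo's sol2kill(); the function names are mine): find the first violation, then check whether dropping the left or the right element of that pair fixes everything.

```python
def almost_increasing(arr):
    """True if removing at most one element makes arr strictly increasing."""
    def ok_without(skip):
        # is arr strictly increasing if we pretend arr[skip] is absent?
        prev = None
        for i, v in enumerate(arr):
            if i == skip:
                continue
            if prev is not None and v <= prev:
                return False
            prev = v
        return True

    for i in range(1, len(arr)):
        if arr[i] <= arr[i - 1]:
            # first violation: the answer hinges on dropping arr[i-1] or arr[i]
            return ok_without(i - 1) or ok_without(i)
    return True  # already strictly increasing
```

Since ok_without() is invoked at most twice (only at the first violation), the whole thing stays O(n).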

SG dev salary: FB^banks

Overall, it’s a positive trend that non-finance employers are showing increasing demand for candidates at my salary level. More demand is a good thing for sure.

Even though these tech shops can’t pay me the same as MLP does, 120k still makes a good living

— https://news.efinancialcareers.com/sg-en/3001699/salaries-pay-facebook-singapore  is a curated Aug-2020 review of self-reported salary figures on glassdoor.com

Mid-level dev base salary is SGD 108k, much lower than in the U.S. This is a vanilla dev role without any specialized skill mentioned (by the self-reporter) such as data science or security.

FB Singapore has 1000 headcount including devs, but I think the mix might be similar to the BAML mix in Harborfront — most of the devs are not in front office.

— google SG: https://news.efinancialcareers.com/sg-en/3001375/google-salaries-pay-bonuses-singapore is on google, but I find the dev salary figures unreliable.

https://www.quora.com/How-much-do-Google-Singapore-Software-Engineers-earn is another curated review.

— banking tech: https://news.efinancialcareers.com/sg-en/3000694/banking-technology-salaries-singapore is published under the same byline but could be authored by someone else.

This EFC site is finance-centric, so this data is more detailed, better curated. Very close to my first-hand observations.

sponsored DMA

Context — a buy-side shop (say HRT) uses a DMA connection sponsored by a sell-side like MS (or Baml or Instinet) to access NYSE. MS provides a DMA platform like Speedway.

The HRT FIX gateway would implement the NYSE FIX spec. Speedway also has a FIX spec for HRT to implement. This spec should include minor customization on the NYSE spec.

I have seen the HPR spec. (HPR is like an engine running in Baml or GS or whatever.) The HPR spec seems to talk about customization for NYSE, Nsdq etc …re Gary chat.

Therefore, the HRT FIX gateway to NYSE must implement, in a single codebase,

  1. NYSE spec
  2. Speedway spec
  3. HPR spec
  4. Instinet spec
  5. other sponsors’ spec

The FIX session would be provided (“sponsored”) by MS or Baml, or Instinet. I think the HRT FIX gateway would connect to some IP address belonging to the sponsor like MS. Speedway would forward the FIX messages to NYSE, after some risk checks.

VWAP=a bmark^executionAlgo

In the context of broker algos (i.e. execution algos offered by a broker), vwap is

  • A benchmark for a bulk order
  • An execution algo aimed at the benchmark. The optimization goal is to minimize slippage against this benchmark. See other blogposts about slippage.

The vwap benchmark is simple, but the vwap algo implementation is non-trivial, often a trade secret.
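The benchmark side really is simple. A sketch (hypothetical trade list; not any broker's actual implementation):

```python
def vwap(trades):
    """trades: list of (price, quantity) pairs from the tape."""
    notional = sum(px * qty for px, qty in trades)
    volume = sum(qty for _, qty in trades)
    return notional / volume

# slippage of a buy order's average fill vs the benchmark:
# positive means we paid more than vwap
tape = [(10.0, 100), (11.0, 300)]
print(vwap(tape))  # 10.75
```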

Avichal: too-many-distractions

Avichal is observant and sat next to me for months. Therefore I value his judgment. Avichal is the first to point out I was too distracted.

For now, I won’t go into details on his specific remarks. I will simply use this simple pointer to start a new “thread”…

— I think the biggest distraction at that time was my son.

I once (never mind when) told grandpa that I wanted to devote 70% of my energy to my job (and 20% to my son), but whenever I wanted to settle down and dive deep into my work, I felt the need and responsibility to adjust my schedule, cater to my son, and try to entice him to study a little bit more.

My effort on my son is like driving uphill with the hand-brake on.

As a result, I couldn’t have a sustained focus.

gradle: dependency-jar refresh, cache, Intellij integration..

$HOME/.gradle holds all the jars from all previous downloads.

[1] When you turn on debug, you can see the actual download: gradle build --debug.

[2] Note IDE java editor can use version 123 of a jar for syntax check, but the command line compilation can use version 124 of the jar. This is very common in all IDEs.

When I make a change to a gradle config,

  • IntelliJ prompts for a gradle import. This seems to be an unnecessary re-download of all jars — very slow.
  • Therefore, I ignore the import. I think as a result, the IntelliJ java editor [2] would still use the previous jar version, as the old gradle config is in effect. I live with this because my focus is on the compilation.
  • For compilation, I use the gradle “build” action (probably similar to a command-line build). Very fast, but why? Because only one dependency jar is refreshed [3]
  • Gary used a debug build [1] to prove that this triggers a re-download of specific jars iFF you delete the jars from $HOME/.gradle/caches/modules-2/files-2.1

[3] For a given dependency jar, “refresh” means download a new version as specified in a modified gradle config.

— in console, run

gradle build #there should be a ./build.gradle file

Is java/c# interpreted@@No; CompiledTwice!

category? same as JIT blogposts

Q: are java and c# interpreted? QQ topic — academic but quite popular in interviews.

https://stackoverflow.com/questions/8837329/is-c-sharp-partially-interpreted-or-really-compiled shows one explanation among many:

The term “interpreter”, referencing a runtime, generally means existing code interprets some non-native code. There are two large paradigms — parsing: read the raw source code and take logical actions; bytecode execution: first compile the code to a non-native binary representation, which requires much fewer CPU cycles to interpret.

Java originally compiled to bytecode, then went through an interpreter; now, the JVM reads the bytecode and just-in-time compiles it to native code. CIL does the same: The CLR uses just-in-time compilation to native code.

C# compiles to CIL, which the JIT then compiles to native code; by contrast, Perl immediately compiles a script to bytecode, and then runs this bytecode through an interpreter.
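CPython follows the same two-step model described for Perl: compile to bytecode, then interpret. The stdlib dis module makes the bytecode visible:

```python
import dis

def add_one(x):
    return x + 1

# list the opcodes the interpreter loop will execute for this function
ops = [ins.opname for ins in dis.get_instructions(add_one)]
print(ops)  # exact opcodes vary by CPython version
```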

bone health for dev-till-70 #CSY

Hi Shanyou,

I have a career plan to work as a developer till my 70’s. When I told you, you pointed out bone health, to my surprise.

You said that some older adults suffer a serious bone injury and become immobile. As a result, other body parts suffer, including weight, heart, lung, and many other organs. I now believe loss of mobility is a serious health risk.

These health risks directly affect my plan to work as a developer till my 70’s.

Lastly, loss of mobility also affects our quality of life. My mom told me about this risk 20 years ago. She has since become less vocal about this risk.

Fragile bones become more common when we grow older. In their 70’s, both my parents suffered fractures and went through surgeries.

See ## strengthen our bones, reduce bone injuries #CSY for suggestions.

available time^absorbency[def#4]:2 limiting factors

see also ## identify your superior-absorbency domains

Time is a quintessential /limiting factor/ — when I try to break through and reach the next level on some endeavor, I often hit a /ceiling/ not in terms of my capacity but in terms of my available time. This is a common experience shared by many, therefore easy to understand. In contrast, a more subtle experience is the limiting factor of “productive mood” [1].

[1] This phrase is vague and intangible, so sometimes I speak of “motivation” — not exactly same and still vague. Sometimes I speak of “absorbency” as a more specific proxy.

“Time” is used as per Martin Thompson.

  • Specific xp: Many times I took leave to attend an IV. The time + absorbency is a precious combination that leads to breakthroughs in insight and muscle-building. If I only provide time to myself, most of the time I don't achieve much.
    • I also take leave specifically to provide generic “spare time” for myself but usually can't achieve the expected ROTI.
  • Specific xp: yoga — the heightened absorbency is very rare, far worse than jogging. If I provide time to myself without the absorbency, I won’t do yoga.
  • the zone — (as described in my email) i often need a block of uninterrupted hours. Time is clearly a necessary but insufficient condition.
  • time for workout — I often tell my friends that lack of time sounds like an excuse given the mini-workout option. Well, free time still helps a lot, but motivation is more important in this case.
  • localSys — absorbency is more rare here than coding drill, which is more rare than c++QQ which is more rare than java QQ
  • face time with boy — math practice etc.. the calm, engaged mood on both sides is very rare and precious. I tend to lose my cool even when I make time for my son.
  • laptop working at train stations — like MRT stations or 33rd St … to capture the mood. Available time by itself is useless

exec algo: with-volume

— WITH VOLUME
Trade in proportion to actual market volume, at a specified trade rate.

The participation rate is fixed.

— Relative Step — with a rate following a step-up algo.

This algo dynamically adjusts aggressiveness (participation rate) based on the relative performance of the stock versus an ETF. The strategy participates at a target percentage of overall market volume, adjusting aggressiveness when the stock is significantly underperforming (buy orders) or outperforming (sell orders) the reference security since today's open.

An order example: “Buy 90,000 shares 6758.T with a limit price of ¥2500. Work order with a 10% participation rate, scaling up to 30% whenever the stock is underperforming the Nikkei 225 ETF (1321.OS) by 75 basis points or more since the open.”

If we notice the reference ETF has a 2.8% return since open and our 6758.T has a 2.05% return, then the engine would assume 6758.T is significantly underperforming its peers (in its sector). The engine would then step up the participation to 30%, buying more aggressively, perhaps using bigger and faster slices.

What if the ETF has dropped 0.1% and 6758.T has dropped 0.85%? This would be unexpected since our order is a large order boosting the stock. Still, the other investors might be dumping this stock. The engine would still perceive the stock as underperforming its peers, and step up the buying speed.
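The step-up rule behind both examples can be sketched as follows (rate and threshold numbers come from the order example above; the function name and sell-side mirroring are my assumptions):

```python
BASE_RATE, STEPPED_RATE = 0.10, 0.30  # from the order example
THRESHOLD_BP = 75                     # step up at 75bp-or-more underperformance

def participation_rate(stock_ret_bp, etf_ret_bp, side="buy"):
    """For a buy order, step up when the stock underperforms the reference
    ETF by THRESHOLD_BP or more since the open; mirror-image for sells."""
    underperf_bp = etf_ret_bp - stock_ret_bp
    if side == "sell":
        underperf_bp = -underperf_bp  # sells step up on OUTperformance
    return STEPPED_RATE if underperf_bp >= THRESHOLD_BP else BASE_RATE

print(participation_rate(205, 280))   # stock +2.05% vs ETF +2.8% -> 0.3
print(participation_rate(-85, -10))   # stock -0.85% vs ETF -0.1% -> 0.3
```

Note both worked examples land exactly on the 75bp threshold, which is why the engine steps up in each case.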