mkt data tech skills: !portable !shared

Raw mkt data tech skill is more valuable than soft mkt data skill, even though it's further away from "the money":

  • standard — Exchange mkt data formats won't change much. They feel like an industry standard
  • the future — most OTC products are moving to electronic trading and will have market data to process
  • more essential than many other modules in a trading system. However ….. I guess only a few systems need to deal with raw market data; most downstream systems deal only with soft market data.

Q1: If you compare 5 typical market data gateway dev [1] jobs, can you identify a few key tech skills shared by at least half the jobs, excluding widely used "generic" skills like math, hash tables, polymorphism etc?

Q2: If there is at least one, how important is it to a given job? Is it merely one of the important required skills, or a make-or-break survival skill?

My view — I feel there is no shared core skill set. I venture to say there's no single answer to Q1.

In contrast, look at quant developers. They all need skills in c++/excel, Black-Scholes, bond math, swaps, …

In contrast, also look at dedicated database developers. They all need non-trivial SQL and schema design. Many need stored procs. Tuning is needed for large tables.

Now look at a market data gateway for OPRA. Two firms' job requirements will share some common tech skills like throughput (TPS) optimization and fast storage.

If latency and TPS requirements aren’t stringent, then I feel the portable skill set is an empty set.

[1] There are also many positions whose primary duty is market data but not raw market data, not large volume, not latency-sensitive. The skill set is even more different. Some don't need development skill on market data — they only configure some components.

accumulation: contractor vs FTE #XR

XR,

You said that we contractors don't accumulate (build up) as FTEs do.

I do agree that after the initial 2Y of tough learning, some FTEs can reap the monetary rewards, whereas consultants are often obliged, by contract, to leave the team. (Although there are long-term contracts, they don't always work out as promised.)

Here's my experience at GS over 2.5Y. The later months had much lower "bandwidth" tension, i.e. they required less learning and figuring-things-out: less stress, less negative feedback, less worry about my own competence, more confidence, more feeling of control because I was more familiar with the local system. If my compensation had risen to 150k, I would say that money amounted to "reaping the reward". In reality, the monetary accumulation was an empty promise.

As a developer stays longer, the accumulation in terms of his value-add to the team is natural and likely [1]. Managers like to point out that after an FTE has stayed in the team for 2Y, her competence, her design, her solutions, her suggestions and her value-add per year all grow higher every year. If her initial value-add to the company is quantified as $100k, it might grow by 30% every year. Alas, that doesn't always translate into compensation.

So much for accumulation of personal income. How about accumulation of tech skill? Staying in one system usually means less exposure to other, or newer, technologies. Some developers prefer to be shielded from newer technologies; I embrace them. I feel my technical accumulation is greater when I keep moving from company to company.

[1] There are exceptions. About 5% of the old-timers are, in my view, organizational /dead-weight/. Their value-add doesn't grow and is routinely surpassed within a year by bright new joiners. Often the company can't let them go for political, legal or ethical reasons.

You said IV questions change so much over time (ignoring the superficial changes) that the IV skills we acquire today are useless in 5Y and we have to learn new IV skills all over again. This is not intuitive to me. Please give one typical example if you can without a lot of explaining (I understand your time constraints). I guess you mean technology churn? If I prepare for a noSQL or big data interview, then I will probably face technology churn.

On the other hand, in my experience, many interview topics remain ever-green, including some hard topics — algorithms (classic algos and creative algos), classic data structures, concurrency, java OO, pass-by reference/value, SQL, unix commands, TCP/UDP sockets, garbage collection, asynchronous/synchronous concepts, pub/sub, producer/consumer, thread pool concepts… In the same vein, most coding tests are similar to those I first received 10 years ago. So the study of these topics does accumulate to some extent.
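To illustrate how evergreen these topics are, here is a minimal bounded producer/consumer sketch in c++, the kind of exercise that has barely changed in a decade. The names and sizes below are my own illustration, not from any particular interview.

  #include <condition_variable>
  #include <iostream>
  #include <mutex>
  #include <queue>
  #include <thread>

  // Bounded-buffer producer/consumer -- an evergreen interview exercise.
  std::queue<int> buf;
  std::mutex mtx;
  std::condition_variable cv;
  const std::size_t CAP = 8;       // arbitrary capacity, for illustration
  bool done = false;

  void producer() {
      for (int i = 0; i < 20; ++i) {
          std::unique_lock<std::mutex> lk(mtx);
          cv.wait(lk, [] { return buf.size() < CAP; });     // block while full
          buf.push(i);
          cv.notify_all();
      }
      { std::lock_guard<std::mutex> lk(mtx); done = true; }
      cv.notify_all();
  }

  void consumer() {
      for (;;) {
          std::unique_lock<std::mutex> lk(mtx);
          cv.wait(lk, [] { return !buf.empty() || done; }); // block while empty
          if (buf.empty() && done) return;
          int v = buf.front();
          buf.pop();
          cv.notify_all();                                  // wake the producer
          lk.unlock();
          std::cout << "consumed " << v << '\n';
      }
  }

  int main() {
      std::thread p(producer), c(consumer);
      p.join();
      c.join();
  }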

edit1file]big python^c++ prod system

Q1: suppose you work in a big, complex system with 1000 source files, all in python, and you know a change to a single file will only affect one module, not a core module. You have tested it and run a 60-minute automated unit test suite. You didn't run the prolonged integration test that's part of the department-level full release. Would you and the approving managers have the confidence to release this single python file?
A: yes

Q2: change "python" to c++ (or java or c#). You already followed the routine to build your change into a dynamic library, tested it thoroughly and ran the unit test suite but not the full integration test. Do you feel safe releasing this library?
A: no.

Assumption: the automated tests were reasonably well written. I have never worked in a team with measured test coverage; I would guess 50% is a high bar and often impractical. Even with high measured test coverage, the risk of a bug is roughly the same. I never believed higher unit test coverage is a vaccination: diminishing returns, low marginal benefit.

Why the difference between Q1 and Q2?

One reason — the source file is compiled into a library (or a jar) along with many other source files. This library is now a big component of the system, rather than one of 1000 python files. Managers will see a library change in c++ (or java) versus a single-file change in python.

Q3: what if the change is to a single shell script, used to start/stop the system?
A: yes. Managers can see the impact is small and isolated. The unit of release is clearly a single file, not a library.

Q4: what if the change is to a stored proc? You have tested it and run the full unit test suite but not a full integration test. Will you release this single stored proc?
A: yes. One reason is transparency of the change. Managers can understand this is an isolated change, rather than a library change as in the c++ case.

How do managers (and anyone other than yourself) actually visualize the amount of code change?

  • With python, it's a single file, so they can use "diff".
  • With a stored proc, it's a single proc; in source control, they can diff this single proc.
  • With c++ or java, the unit of release is a library. What if, in this new build, besides your change, some other change was included by accident? You can't diff a binary 😦

So I feel transparency is the first reason. Transparency of the change gives everyone (not just yourself) confidence about the size/scope of this change.

The second reason is isolation. I feel a compiled language (esp. c++) is more "fragile", and the binary modules more "coupled" and inter-dependent. When you change one source file and release it in a new library build, it could lead to subtle, intermittent concurrency issues or memory leaks in another module, outside your library (see the sketch at the end of this section). Even if you, as the author, see evidence that this won't happen, other people have seen innocent one-line changes give rise to bugs, so they have reason to worry.

  • All 1000 files (in compiled form) run in one process in a c++ or java system.
  • A stored proc change could affect DB performance, but that's easy to verify. A stored proc won't introduce subtle problems in an unrelated module.
  • A top-level python script runs in its own process. A python module runs in the host process of the top-level script, but a typical top-level script imports just a few custom modules, not 1000 modules. Much better isolation at run time.

There might be python systems where the main script actually runs in a process with hundreds of custom modules (not counting standard-library modules). I have not seen one.
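To make the c++ fragility point concrete, here is one well-known failure mode: binary (ABI) coupling. The struct name below is made up purely for illustration; the point is that a seemingly local, one-line header change silently invalidates every module compiled against the old layout, with no compiler or linker error, which is exactly why a library-level release worries people more than a single python file.

  // Toggle OLD_LAYOUT to mimic a module built against the previous header.
  #include <iostream>

  #ifdef OLD_LAYOUT                // what other modules were compiled against
  struct Book { double bid; double ask; };
  #else                            // your "one-line" change in the new build
  struct Book { long seqNum; double bid; double ask; };
  #endif

  int main() {
      std::cout << "sizeof(Book) = " << sizeof(Book) << '\n';
      // If one shared library is built with OLD_LAYOUT and another without,
      // the two disagree on object size and member offsets -- silent memory
      // corruption at run time, not a compile error. A python or stored-proc
      // change has no comparable failure mode.
  }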

C for latency^TPS can use java

I’m 98% confident — low latency favors C/C++ over java [1]. FPGA is _possibly_ even faster.

I'm 80% confident — throughput (in real-time data processing) is achievable in C, java, optimized python (Facebook?), optimized php (Yahoo?) or even a batch program. When you need to scale out, Java seems the #1 popular choice as of 2017. Most of the big data solutions seem to treat java as first among equals.

In the "max throughput" context, I believe the critical java code path can be optimized to the same efficiency as C. JIT can achieve that. A python or php module can achieve that too, perhaps using native extensions.

[1] Actually, JIT-compiled java code can sometimes run faster than compiled C code (see my other posts such as https://bintanvictor.wordpress.com/2017/03/20/how-might-jvm-beat-cperformance/)

MOM+threading ] low latency apps

Piroz told me that trading IT job interviews tend to emphasize multi-threading and MOM. Some also cover SQL. I now feel all of these are unwelcome in low latency trading.

A) MOM – The HSBC interviewer was the first to point out to me that MOM adds latency. Their goal is to get the (market) data from producer to consumer as quickly as possible, with minimum stops in between.

Then I found that the ICE/IDC system has no middleware between the feed parser and the order book engine (named Rebus). The 29West documentation echoes this: "Instead of implementing special messaging servers and daemons to receive and re-transmit messages, Ultra Messaging routes messages primarily with the network infrastructure at wire speed. Placing little or nothing in between the sender and receiver is an important and unique design principle of Ultra Messaging."

B) threading – ST (single-threaded) is generally the fastest, in theory and in practice (I only have a small sample size). I feel the fastest trading engines are ST, with no shared mutable state.

MT is OK if the threads don't compete for resources like CPU, I/O or locks. Compared to ST, most lock-free designs introduce extra latency, such as retries.
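For example, a typical lock-free update spins on compare-and-swap; under contention, every failed CAS is a retry, which is exactly the extra latency a single-threaded design never pays. A generic sketch (not from any production engine; a real counter would just use fetch_add, but the explicit loop exposes the retry cost):

  #include <atomic>
  #include <iostream>
  #include <thread>
  #include <vector>

  std::atomic<long> total{0};
  std::atomic<long> retries{0};    // how many CAS attempts were wasted

  void add(long x) {
      long cur = total.load(std::memory_order_relaxed);
      // classic lock-free retry loop: reload and retry until the CAS wins
      while (!total.compare_exchange_weak(cur, cur + x,
                                          std::memory_order_acq_rel,
                                          std::memory_order_relaxed)) {
          retries.fetch_add(1, std::memory_order_relaxed);
      }
  }

  int main() {
      std::vector<std::thread> pool;
      for (int t = 0; t < 4; ++t)
          pool.emplace_back([] { for (int i = 0; i < 100000; ++i) add(1); });
      for (auto& th : pool) th.join();
      std::cout << "total=" << total.load()
                << " CAS retries=" << retries.load() << '\n';
  }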

C) SQL – as stated elsewhere, flat files are much faster than a relational DB. How about an in-memory relational DB?

Rebus, the order book engine, is in-memory.

CFM quant devops job in hindsight

  • #1 wanted: c++ tooling — including MSVS, which was one of my key weaknesses
    • ROI? more than I could have gained without this job, but not a lot
  • #2 wanted: quant dev
    • ROI? zero
  • wanted: python extension dev
    • ROI? zero
  • wanted: excel add-in dev
    • ROI? zero
  • wanted: cloud experience including EC2 and S3
    • ROI? low
  • unexpected gain: devops experience, but market value is low
  • unexpected gain: python GTD, esp. the import feature
  • unexpected gain: c++11, its tool-chain and features
  • unexpected gain: git GTD. Never asked in IVs though

Overall, I stayed too long; 6M might have been enough. In the skill list above,

  • quant and cloud are high-value targets (but zero realized gain)
  • most of the items have low value in my upcoming interviews.
  • as to GTD: c++ tooling, python and git have value

Q: would a java job give more ROI?
A: I don’t think so.

more threads won’t help throughput if I/O bound

To keep things concrete, think of the output interface as the I/O in question.

The paradox — given a busy, I/O-bound server, conventional wisdom says more threads would increase CPU utilization, but the CPU work queue is already pretty much empty, whereas the I/O queue is constantly full, as the I/O subsystem keeps working at full capacity.

The ideal is simultaneous saturation of CPU and I/O. Suggestion: we could "steal" a cpu core from this engine and use it for other tasks; additional threads or processes basically achieve that purpose. In other words, the cpu cores are no longer dedicated to this engine.

Assumption — adding more I/O hardware is not possible. (Perhaps scaling out to more nodes could help.)

If the CPU cores are dedicated, then there’s no way to improve things without adding more I/O capacity. There’s simply too much CPU overcapacity.
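A back-of-the-envelope model makes this visible. It is my own illustration, assuming roughly constant CPU and I/O cost per request:

\[ \text{throughput} \;\le\; \min\!\left( \frac{N_{\text{cores}}}{t_{\text{cpu}}},\; \frac{C_{\text{io}}}{b_{\text{req}}} \right) \]

where \(N_{\text{cores}}\) is the number of cores, \(t_{\text{cpu}}\) the CPU time per request, \(C_{\text{io}}\) the I/O bandwidth and \(b_{\text{req}}\) the I/O volume per request. Adding threads can only push actual throughput toward the first term; when the second term is the smaller one (the I/O-bound case), the ceiling doesn't move.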

stress@jobHunt^GTD

Both mental stress and physical stress. Let's take a step back and compare the stress intensity during a job hunt vs the GTD stress on a job.

Many people say it's too stressful and tiring to keep interviewing, compared to holding a long-term job in a big company. Well, I have blogged many times that I need to keep interviewing…. So far the stress is manageable.

On a regular job, the GTD stress level ranges from 5 to 7 on a scale of 10 (Donald Trump on women;). It often rises to 8.

It became 9 or 10 whenever (as an FTE) the boss gave negative feedback; I know this from several past experiences. In contrast, contract projects felt much better.

(To be fair, I did improve after the negative feedback.)

During my job hunts, including the challenging Singapore leg, my stress level felt like 4 to 7, but it was often positive stress, perhaps due to positive progress and positive energy.

Conclusion — I have always felt more confident on the open market than when recovering from a setback on a job.

skill: deepen^diversify^stack up

Since 2010, I have carefully evaluated and executed 3 broad strategies:

  1. deepen – for zbs + IV
  2. diversify or branch out – breaking into new markets
  3. stack-up – cautiously
  • eg: Deepened my java/SQL/c++/py knowledge for IV and GTD. See post on QQ vs ZZ.
  • eg: diversified to c++, c#, quant, swing…
  • eg: diversify? west coast.
  • eg: diversify? data science
  • eg: diversify? research + teach
  • eg: stack-up to learn spring, hibernate, noSQL, GWT.

Stack-up — These skills are unlikely to unlock new markets. Lower leverage.

GTD stress/survival on the job? None of these strategies helps with that directly. In any case, based on my observation, GTD skill seldom advances my career as a contractor; it could create a bit of spare time, but it's a challenge to make good use of that spare time.

I feel German's strategy goes beyond tech skills. He treats tech skills as a tool, used to implement his business ideas and create business value. I feel his target role is a mix of architect and BA, but more like a "product visionary". It's somewhat like stack-up.

c++GTD^IV muscle building: X years xp: U can cut by half

What specific topics should you improve for c++ (not pure algo) coding IVs? (X years of experience doesn't guarantee anything.)

  • Write code to print alternate items in reverse
  • Write a ref-counted string (see the sketch below)
  • Write an auto-ptr
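For the ref-counted string exercise, here is a minimal sketch of the kind of answer expected (no copy-on-write, not thread-safe); the class and member names are my own:

  #include <cstring>
  #include <iostream>
  #include <utility>

  class RcString {                 // minimal ref-counted string sketch
      char* data_;
      std::size_t* count_;         // shared reference count
  public:
      explicit RcString(const char* s)
          : data_(new char[std::strlen(s) + 1]), count_(new std::size_t(1)) {
          std::strcpy(data_, s);
      }
      RcString(const RcString& other) : data_(other.data_), count_(other.count_) {
          ++*count_;               // share the buffer
      }
      RcString& operator=(RcString other) {   // copy-and-swap, self-assignment safe
          std::swap(data_, other.data_);
          std::swap(count_, other.count_);
          return *this;            // old buffer released when 'other' is destroyed
      }
      ~RcString() {
          if (--*count_ == 0) { delete[] data_; delete count_; }
      }
      const char* c_str() const { return data_; }
      std::size_t use_count() const { return *count_; }
  };

  int main() {
      RcString a("hello");
      RcString b(a);               // b shares a's buffer
      std::cout << b.c_str() << ", refs=" << a.use_count() << '\n';  // hello, refs=2
  }

A thread-safe variant would make the counter a std::atomic<std::size_t>, which is essentially what std::shared_ptr does.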

What specific topics should you improve for c++ QnA IVs? (X years of experience doesn't guarantee anything.)

So in any of these areas, X years spent using a language could leave you still a total beginner!