[18]why I avoid Java jobs #XR

My best recruiter, Greg, has discussed java roles with me many, many times, in depth. Greg really listens and understands me. He showed me that java roles can pay on par with c++ roles, if not higher.

I also agree with you that in financial IT, java is the reigning top dog and there’s no credible challenger in sight. I think java will remain top dog for 10 more years or longer.

Q: So then why do I turn down all java opportunities?

If java is a wing on my back, I’m not amputating it. I am committed to keeping my java skills up to date, and I may switch to java in x months.

However, over many years I have invested heavily in c++ as another wing. Finally, this wing now feels firmer and stronger and I am going to test it, again, and again. If I start looking at java I’m going to lose focus.

c++ is harder, both in projects and in interviews. It keeps my brain active — anti-aging. If I were to remain in java, I would feel bored and get old faster.

Many fellow c++ developers tell me java developers far outnumber c++ developers, so competition in the java field is tougher. Therefore, they feel more secure in the c++ job market. I share that feeling to some extent.

I do feel stronger and more confident after conquering c# and c++. I also feel stronger due to python because I can now add python to my power tools.

As programmers, we all feel stagnant sometimes in our career growth. I tried many paths to break out of my stagnation. After java swing (partial success), c# (success), quant (unsuccessful) and python, c++ is now my chosen path.

I feel your chosen break-out path is data science applied to trading strategy discovery. Just as I sacrifice financially to grow my c++ wing, you may also make a sacrifice for your new career direction. I am sure you will learn something meaningful and your sacrifice will not be in vain.

Lastly, allow me to repeat that I don’t feel the need to earn the highest salary available to me. I am not a slave to pay rate. I carry no debt burden, so I have the financial freedom to take on new challenges that are worthwhile to me.

In 10 years I might regret it (“… not so worthwhile, just another broken dream like the quant dream”), but at this moment I am convinced: yes, worthwhile.

This is one of my many answers to the titular question. I’m asked the same question repeatedly and have given various answers.

Gayle: no rush2write code@@ #whiteboard

Gayle strongly advocates getting a brute-force solution as soon as possible and walking through the “idea” in detail before coding. However, I’d rather start coding up my idea much earlier, since I need more time for coding.

I guess Gayle’s experience shows that some interviewers care less about runnable, working code than about the thought process. For a tricky algo, if the idea is sound, then you basically pass….? See also the post “with algo cracked, ECT = a trivial skill (雕虫小技)”.

However, I feel an elaborate, detailed “idea” (item 2 below) alone is not enough to pass. I frequently run out of time implementing things like https://www.geeksforgeeks.org/print-a-given-matrix-in-spiral-form/. The algorithm is simple and not worth 40%, yet I was too slow completing a basic implementation. “Done is better than perfect”: a basic implementation is better than no implementation. You need to demonstrate that you can implement a working solution.
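For reference, here is a minimal c++ sketch of that spiral print, my own basic implementation (assuming a non-empty rectangular matrix):

#include <iostream>
#include <vector>
using namespace std;

// print a rectangular matrix in spiral order: top row, right column, bottom row, left column, repeat
void printSpiral(const vector<vector<int>>& mat) {
    if (mat.empty()) return;
    int top = 0, bottom = (int)mat.size() - 1;
    int left = 0, right = (int)mat[0].size() - 1;
    while (top <= bottom && left <= right) {
        for (int c = left; c <= right; ++c) cout << mat[top][c] << ' ';     // top row, left to right
        ++top;
        for (int r = top; r <= bottom; ++r) cout << mat[r][right] << ' ';   // right column, top to bottom
        --right;
        if (top <= bottom) {                                                // bottom row, right to left
            for (int c = right; c >= left; --c) cout << mat[bottom][c] << ' ';
            --bottom;
        }
        if (left <= right) {                                                // left column, bottom to top
            for (int r = bottom; r >= top; --r) cout << mat[r][left] << ' ';
            ++left;
        }
    }
    cout << '\n';
}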

Here’s my estimate of the point distribution in a typical whiteboard interview:

  1. 10%: clear understanding of the requirements + clarifying questions
  2. 35% (sometimes up to 50%): a detailed “idea”, even if unimplemented, with decent performance enhancements
  3. 35% (roughly 20-40%): a basically working implementation
  4. 5%: performance analysis; can be tricky
  5. 5%: readability; modularized, clean code
  6. 5%: the eyeball test; hard to get right if the code is convoluted with many variables
  7. 5%: minor bugs and corner cases

2types: learn local sys for zbs/IV/survival

See also app owner^contractor#German6YY keep chang`job: zbs-perspective@@zbs+nlg insight accumu due to X years@a job: !! by default 

As a newbie, I received “career advice” to invest substantial time learning about my local system plus its upstream/downstream systems. It should make you a better developer … zbs. It will improve your figure-out speed, productivity and KPI, and make you more valuable to the team. It will help your bonus, promotion etc.

In reality, I see two types of learning — local implementation as-is (I call it asis) and “working designs” (I call it WW):

  1. for portable zbs growth? WW learning is valuable, though zbs growth is very slow. You are often hired because you are seen as a veteran with the WW know-how.
  2. for QQ interviews? both types help minimally; QQ interviews are notoriously theoretical, so asis know-how has limited value.
  3. for design/architecture interviews? WW helps; asis doesn’t.
  4. for coding interviews? neither helps.
  5. for figure-out speed? asis is essential; WW is irrelevant.
    1. for survival and GTD? obviously, asis is essential.
  6. for bonus and promotion? very indirect. Even if I show productivity and quality, the manager may not like me. This is the #1 reason some people should NOT follow the “career advice”.

Overall, I feel this learning is

  • secondary for interview muscle building
  • essential for survival and GTD
  • marginally relevant for promotion

Q: How does a system improve over time?
A: Improvement suggestions from developers are supposed to be key, but most of those suggestions are sidelined for political and personal subjective reasons.

homemade matrix using deque@deque #py easier

Matrix questions are quite popular in interviews.

My own experiment https://github.com/tiger40490/repo1/blob/cpp1/cpp1/spiralFB.cpp shows:

  • it’s better to default-populate the matrix with zeros; afterwards, you can easily overwrite individual cells without a bounds check
  • it’s easy to insert a new row anywhere; a vector would be inefficient here
  • to insert a new column, we need a simple loop, as in the sketch below
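Here is a minimal c++ sketch of the deque-of-deques approach described above (variable names are mine, loosely based on the spiralFB.cpp experiment):

#include <deque>
#include <iostream>
using namespace std;

int main() {
    size_t height = 5, width = 8;
    // default-populate with zeros, so any cell can later be overwritten without a bounds check
    deque<deque<int>> matrix(height, deque<int>(width, 0));

    matrix[1][4] = 99;                              // overwrite one cell: 2nd row, 5th item

    matrix.push_front(deque<int>(width, 0));        // insert a new row at the top: cheap with deque

    for (auto& row : matrix) row.push_front(0);     // insert a new column on the left: a simple loop

    for (const auto& row : matrix) {                // print for an eyeball test
        for (int cell : row) cout << cell << ' ';
        cout << '\n';
    }
}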

In python, a 5-row, 8-column matrix can be zero-initialized with a list comprehension:
width, height = 8, 5
Matrix = [[0 for x in range(width)] for y in range(height)]

In any programming language, the underlying data structure is a uniform pile of horizontal arrays (the rows), therefore it’s crucial (and tricky) to understand the indexing. It mirrors matrix indexing: Mat[0,1] refers to the first row, 2nd element.

[1] Warning — The concept of “column” is mathematical (matrix) and non-existent in our implementation, therefore misleading! I will avoid any mention of it in my source code. No object no data structure for the “column”!

[2] Warning — another confusion due to mathematics training. Better to avoid Cartesian coordinates. Point(4,1) means x=4, y=1, which lands on the 2nd row, 5th item, therefore arr[1][4] — you need to swap the subscripts.

                          1st subscript                2nd subscript
max subscript             44                           77
height (allocation)       45 (not an index value)
width (allocation)                                     78 (not an index value)
example value             1                            4
meaning                   picks the 2nd row            picks the 5th item in that row
variable name             rowId, whichRow              subId [1]
Cartesian coordinate [2]  y=1 (the left index)         x=4
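To illustrate warning [2], here is a tiny c++ sketch; the Point struct and the cellAt function are my own illustration:

#include <vector>
struct Point { int x; int y; };          // Cartesian: x = horizontal, y = vertical

int cellAt(const std::vector<std::vector<int>>& matrix, Point p) {
    // swap the subscripts: the vertical coordinate y picks the row, then x picks the item in that row
    return matrix[p.y][p.x];             // Point(4,1) -> matrix[1][4] : 2nd row, 5th item
}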

choose python^c++for IDE test

Some hiring teams have an official policy: focus on coding skills and let the candidate pick any language they like. I can see that some interviewers are genuinely language-agnostic.

90% of hiring teams also have a timing expectation, and many candidates are deemed too slow. This is obvious in online coding tests; we have all failed those due to timing. (However, I guess that on a whiteboard python is not so much faster to code.)

If these two factors are important to a particular position, then python is likely better than c++ or java or c#.

  • Your code is shorter and easier to edit. No headers to include
  • error messages are much shorter than c++ compiler errors
  • no uninitialized variables
  • c++ pointer or array-indexing errors can crash without any error message; dynamic languages like python always raise an error with a message
  • Edit-Compile-Test cycle is reduced to Edit-Test
  • Python offers some shortcuts to tricky c++ tasks, such as
    • String: split,search, and many other convenient features. About 1/3 of the coding questions require non-trivial string manipulation.
    • For debugging: Easy printing of vector, map and nested containers. No iteration required.
    • Nested containers. Basically, the more complex the data structure, the more time-saving python is.
    • easy search in containers
    • iterating over any container, or over the characters of a string, is very easy — even easier than the c++11 “auto” syntax
    • Dictionary lookup failure can return a default value
    • multiple simultaneous return values — very easy in python functions
    • a python function can even return a bool half the time and a list the other half!

If the real challenge lies in the algorithm like an NP-complete problem, then any language is equally hard, but I feel timed coding tests are never that hard. A Facebook seminar presenter emphasized that across tech companies, every single coding problem is always, always solvable within the time limit.

compiler selecting mv-ctor

This is a key part of understanding move-semantics. Let’s set the stage:

  • you overload a traditional insert(Amount const &) with a move version insert(Amount &&)
  • you pass an argument into insert(), without an explicit std::move
  • (For this example, I want to keep things simple by avoiding constructors, but the rules are the same.)

Q1: When would the compiler select the rvr version?

P22 [[c++stdLib]] has a limited outline. Here’s my illustration

  • if I pass in a temporary, like insert(originalAmount + 15), then the argument is an rvalue, so the rvr version is selected
  • if I pass in a regular variable, like insert(originalAmount), then the argument is an lvalue, so the traditional version is selected

After we are clear on Q1, we can look at Q2.

Q2: how would std::move help?
A: insert(std::move(originalAmount)); // if we know the object behind originalAmount is no longer needed.
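To make Q1 and Q2 concrete, here is a minimal, self-contained c++ sketch; the Amount type is just a stub, and operator+ is added only to produce a temporary:

#include <iostream>
#include <utility>

struct Amount { double value; };

void insert(Amount const &) { std::cout << "lvalue overload (traditional copy path)\n"; }
void insert(Amount &&)      { std::cout << "rvalue overload (move path)\n"; }

Amount operator+(Amount const & a, double x) { return Amount{a.value + x}; }

int main() {
    Amount originalAmount{100};

    insert(originalAmount + 15);        // temporary => rvalue => insert(Amount &&) selected
    insert(originalAmount);             // named variable => lvalue => insert(Amount const &) selected
    insert(std::move(originalAmount));  // explicit std::move => rvalue => insert(Amount &&),
                                        // only safe if the object behind originalAmount is no longer needed
}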

conflation: design IV

I have hit this same question twice. Q: in a streaming price feed, IBM prices arrive on the queue, but you don’t want consumer thread AA to use “outdated” prices, while consumer BB needs a full history of the prices.

I see two conflicting requirements from the interviewer, and I will point this conflict out to the interviewer.

I see a need for two channels: in-band + out-of-band.

  1. in-band only — if full tick history is important, then the consumers have to /process/ every tick, even if outdated. We can have dedicated systems just to record ticks, with latency. For example, Rebus receives every tick, saves it and sends it out without conflation.
  2. dual-band — If your algo engine needs to catch opportunities at minimal latency, then it can’t afford to care about history. It must ignore history. I will focus on this requirement.
  3. in-band only — Combining the two, if your super-algo-engine needs to analyze tick-by-tick history and also react to the opportunities, then the “producer” thread alone has to do all work till order transmission, but I don’t know if it can be fast enough. In general, the fastest data processing system is single-threaded without queues and minimal interaction with other data stores. Since the producer thread is also the consumer thread for the same message, there’s no conflation. Every tick is consumed! I am not sure about the scalability of this synchronous design. FIFO Queue implies latency. Anyway, I will not talk further about this stringent “combo” requirement.

https://tabbforum.com/opinions/managing-6-million-messages-per-second?print_preview=true&single=true says “Many firms mitigate the data they consume through the use of simple time conflation. These firms throw data on the floor based solely on the time that data arrived.”

In the Wells interview, I proposed a two-channel design. The producer simply updates a “notice board” with the latest price for each of 999 tickers. Registered consumers get notified out-of-band, on some messaging thread, to re-read the notice board [1]. This async design has a latency; I don’t know how tolerable that is, but I feel async and MOM are popular and tolerable in algo trading. I should check my book [[all about HFT]]…

In-band only — However, the HSBC manager (Brian?) seemed to imply that for minimum latency, the socket-reader thread must run the algo all the way and send the order out to the exchange in one big function.

Out-of-band only — two market-leading investment-bank gateways actually publish periodic updates regardless of how many raw input messages hit them. Not event-driven, and not monitoring every tick!

  • Lehman eq options real time vol publisher
  • BofA Stirt Sprite publishes short-term yield curves on the G10 currencies.

[1] The notification should not contain price numbers. Doing so defeats conflation and brings us back to a FIFO design.
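For illustration, a minimal c++ sketch of the notice-board (last-value cache) idea in the two-channel design above; the class and member names are my own, and this is not any production design:

#include <array>
#include <atomic>
#include <condition_variable>
#include <mutex>

// Producer overwrites the latest price per ticker; consumers are notified out-of-band
// (the notification carries no price, per footnote [1]) and then re-read the board.
class NoticeBoard {
    static const int NUM_TICKERS = 999;
    std::array<std::atomic<double>, NUM_TICKERS> latest{}; // one slot per ticker; older ticks are silently overwritten (conflation)
    std::mutex mtx;
    std::condition_variable cv;
    long version = 0;                                      // bumped on every update, guarded by mtx
public:
    void publish(int tickerId, double price) {             // producer (socket reader) thread
        latest[tickerId].store(price, std::memory_order_release);
        { std::lock_guard<std::mutex> lk(mtx); ++version; }
        cv.notify_all();                                    // out-of-band signal: "something changed, re-read"
    }
    double read(int tickerId) const {                       // consumer AA reads only the latest price
        return latest[tickerId].load(std::memory_order_acquire);
    }
    void waitForUpdate(long& lastSeen) {                    // consumer blocks until the board has changed
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [&] { return version != lastSeen; });
        lastSeen = version;
    }
};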

STL trees: insert sorted data one-by-one

The STL standard requires insert() and find() to be O(log N), so the tree has to remain fairly balanced at all times.

Q: When inserting sorted data one-by-one, what’s the frequency of re-balancing?

%%A: I didn’t find any definitive answer. I think it’s quite frequent. The system has to assume “this could be the last insertion and there will be many queries”, so re-balancing has to kick in whenever the tree becomes lopsided.

Incidentally, if you have the sorted data in some temp container and you insert it /in one gulp/, then the STL standard requires the tree container to provide O(N) insertion, which is better than O(N log N).
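A small c++ sketch contrasting the insertion styles discussed above (the container choice and data are my own example):

#include <cstdio>
#include <set>
#include <vector>

int main() {
    std::vector<int> sorted = {1, 2, 3, 4, 5, 6, 7, 8};

    // one-by-one insertion: O(log N) per insert, O(N log N) overall,
    // with re-balancing whenever the red-black tree becomes lopsided
    std::set<int> tree1;
    for (int x : sorted) tree1.insert(x);

    // one-by-one insertion with a hint: amortized O(1) per insert when the hint is
    // correct, i.e. each new element is inserted just before the hint (end())
    std::set<int> tree2;
    for (int x : sorted) tree2.insert(tree2.end(), x);

    // "one gulp" range construction from an already-sorted range: O(N)
    std::set<int> tree3(sorted.begin(), sorted.end());

    std::printf("%zu %zu %zu\n", tree1.size(), tree2.size(), tree3.size());
}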

volume alone doesn’t qualify as big data

The Oracle nosql book lists four “V”s that qualify a system as a big-data system. I added my annotations:

  1. Volume
  2. Velocity
  3. Variety of data format — if any two data formats account for more than 99% of the data in your system, then it doesn’t meet this definition. For example, FIX is one format.
  4. Variability in value — Does the system treat each datum equally?

Most of the so-called big-data systems I have seen don’t have all four V’s. All of them have some Volume, but none has the Variety or the Variability.

I would venture to say that

  • 1% of the big-data systems today have all four V’s
  • 50%+ of the big-data systems have neither Variety nor Variability
    • 90% of financial big-data systems are probably in this category
  • 10% of the big-data systems have 3 of the 4 V’s

The reason these systems are considered “big data” is the big-data technologies applied to them. You may call it “big-data technologies applied on traditional data”.

See #top 5 big-data technologies

Does my exchange data qualify? Definitely high volume and velocity, but no Variety or Variability.

##satisfying Occupation { partial financial freedom

(relevant to “post-65”…)

As I told German and Yihai, I feel my passive income may reach half of my burn rate — some level of financial freedom! So how do I want to spend my remaining career on more satisfying endeavors?

  • data science and machine learning? I feel the market depth is poor, and I don’t feel confident I can outshine others
  • more in-depth c++, like ICE projects
  • Wenqiang’s innovative xml tool? I thought it had “value” and commercial potential, but I was wrong
  • dotnet? I used to feel it was up-and-coming, with strong potential
  • swing? I used to feel it needed help, but no no … I feel it’s losing ground hopelessly.
  • See also after 65..contribute to stackOverFlow+OSS testing

Elements of a satisfying job after I achieve partial financial freedom:

  1. respect from team; decent benchmarking within the team.
  2. commute
  3. freedom to blog and experiment with coding in the office. Eg: OC, ICE
  4. —LG2
  5. trySomethingNew? minor. Can enhance job satisfaction slightly. More relevant is “engagement”, rather than mindless tasks.
  6. sustainable social value and social impact. Elusive. Idealistic. May have to Let Go.
  7. workload? I don’t mind if I really like the work, but there will always be some drudgery workload in any job.
  8. salary?
  9. market depth? less important after I have (partial) financial freedom?