FB 2019 ibt Indeed onsite

  • coding rounds — not as hard as the 2011 FB interview (the regex problem)
    • Eric gave positive feedback confirming my success, but perhaps other candidates did even better
    • No miscommunication like the one in the Indeed VersionedDict round.
    • The 2nd Indeed interviewer failed me even though I “completed” the problem. The pregnant interviewer may follow suit.
  • data structures were fundamental to all of today’s problems.
  • SDI — still my weakness, but I think I did slightly better this time
  • Career round — played to my advantage as I have multiple real war stories. I didn’t prepare much; the stories just came out raw and hopefully authentic
  • the mini coding rounds — played to my advantage as I reacted fast, thanks to python and my leetcode practice …

So overall I feel I’m getting much closer to passing. Now I feel one successful interview is all it takes to enter a new world —

* higher salary than HFT
* possibly more greenfield work
* possibly more time to research, not as pressed as in ibanks


##algo Q categories: %%strength/weakness

— relative strengths, compared to typical candidates:

  • data-structure heavy
  • union find
  • generator — more experienced than average
  • geometry
  • sliding window
  • whiteboard relative to IDE

— weaknesses

  1. DP applied to string
  2. DP — I usually can’t visualize it or wrap my mind around it
    • I have put in more effort than others, but haven’t connected the dots or developed intuition
  3. recursion in a loop — over my head
  4. 2D grid and matrix
    1. but I have put in more effort than others
  5. num-array. CSY is stronger
  6. priorityQ

 

## coding drill 19 Jul

  1. https://leetcode.com/problems/factorial-trailing-zeroes/
  2. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31671&action=edit — merge sort in-place

— presumably solved

  1. https://leetcode.com/problems/group-anagrams/
  2. https://leetcode.com/problems/merge-two-sorted-lists/
  3. https://wordpress.com/post/bintanvictor.wordpress.com/31650 — pacific/atlantic
  4. https://wordpress.com/post/bintanvictor.wordpress.com/31657 — max-product subarray
  5. https://leetcode.com/problems/contains-duplicate/
  6. https://leetcode.com/problems/majority-element/
  7. https://wordpress.com/post/bintanvictor.wordpress.com/21764 find min in rotated array
  8. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31592&action=edit — T.isSubtree(S)
  9. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31792&action=edit — longest run@same char
  10. 3-sum

— previously solved problems, forgotten and refreshed

  1. Leetcode #139 word-break … not really “medium”
  2. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31745&action=edit — max sub difference
  3. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31749&action=edit — min range to include all clubs
  4. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31736&action=edit — find lone-wolf among AAABBBCCC
  5. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31755&action=edit — flip binTree
  6. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31758&action=edit — serialize linked list: each node points to a random node

— unsolved:

  1. https://leetcode.com/problems/valid-parenthesis-string/
  2. https://bintanvictor.wordpress.com/wp-admin/post.php?post=31825&action=edit — K-palindrome

 

convert 512bit int to bit-vector #python=best

In coding questions, int-to-bit_vector conversion is common. I now think python offers a simple clean solution, applicable to integers of any size 🙂

'{0:032b} <-32|8-> {0:08b}'.format(myInt) # formats myInt as a 32-char binary string and, simultaneously, an 8-char one
  • The first zero refers to argument #0 (we have only one argument).
  • The zero after colon means zero-padding
  • The last ‘b’ means binary
  • Don’t forget your (dental) braces 🙂

Note this is the str.format() method, not the global builtin function format()

The return value is a python string, easily convertible to a list

What if the number is too large? My {0:08b} example shows that if the int is above 255, then python does the safe thing — extending the output beyond 8 characters, rather than truncating 🙂

To convert a bit-vector back to an int, use int(bitStr, 2) # 2 is the base i.e. binary. If you hold a list of characters, join it into a string first.
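
A minimal round-trip demo of the above, assuming Python 3 (myInt and bitVec are illustrative names):

myInt = 300   # above 255, so the 8-bit spec auto-expands rather than truncates
print('{0:032b} <-32|8-> {0:08b}'.format(myInt))
# 00000000000000000000000100101100 <-32|8-> 100101100

bitVec = list('{0:032b}'.format(myInt))  # 32-element list of '0'/'1' characters
assert int(''.join(bitVec), 2) == myInt  # bit-vector back to int, base 2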

versioned dict #indeed

Suppose a resume consists of key/value pairs of strings. We need to support two operations —

  1. Each write(key, value) operation creates or updates a key with a new value, and returns an incremented integer vid i.e. version ID. A version ID represents a point in history, i.e. a snapshot of the entire resume
  2. Each read(vid) returns a key/value dictionary as a complete resume snapshot
  3. * In a realistic system, we also need a deleteVersion(vid) but no need to optimize for it

Single threaded system. Version 0 is an empty resume.

Q: design an efficient solution.
Q: after creating N versions via N write() operations, what’s the time and space complexity of the next write()? What’s the time and space complexity of the next read()?

——-

Without loss of generality, let’s say there are K fields in resume version N. Note K <= N, since each write() touches a single key. I should have differentiated K and N; the distinction is useful for clarity of thinking.

—Design 5, in hindsight, optimized for read(): a simple solution would snapshot the entire dict at each write() i.e. each version. Space complexity is bad (though I now believe time complexity is more important). Worst case K==N: version 1 creates key1, version 5 creates key5, etc.

A single read() is presumably O(K), just to read the K fields. I was convinced this was the theoretical limit for read() complexity, because I was thinking of network serialization: either my system or the client system needs to iterate over all fields of the resume. Now in hindsight I feel the interviewer was thinking in java mode: returning the entire dict by reference is O(1) rather than O(K) … I didn’t get his hint when he said O(K) read was not really the theoretical limit.

Each write() clones the dict for the new vid and saves the new dict in a vector of pointers (!). Therefore write() is O(K).
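
A minimal sketch of design 5, assuming the single-threaded setting stated above (class and variable names are mine):

class VersionedDict5:                     # design 5: snapshot per write()
    def __init__(self):
        self.snapshots = [{}]             # version 0 is an empty resume

    def write(self, key, value):          # O(K): clone the latest snapshot
        snap = dict(self.snapshots[-1])
        snap[key] = value
        self.snapshots.append(snap)
        return len(self.snapshots) - 1    # the new vid

    def read(self, vid):                  # O(1): return by reference, java-style
        return self.snapshots[vid]

Returning the stored dict by reference is what makes read() O(1); a defensive copy would bring it back to O(K).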

—My design 1, based on the same idea, used a vector for each key. However, my read() had to create the returned dict on the fly, in O(K). I didn’t think of return-by-reference/pointer.

—My design 2 is a minor optimization that removes the cloning of unchanged values. I thought I was on to something efficient, but in java (and python?) all strings are copied by reference anyway, so my optimization is meaningless in java.

—My design 3 is lazy write. So write() only appends to one of the K vectors. The other K-1 vectors are updated only when needed by a later read() or write(). Amortized cost?

This O(1) write() and O(?) read() complexity can be emulated and surpassed by the next design …

—My design 4 used K RBTrees (small trees) to optimize for frequent write() at O(1). Each write appends to one tree. STL insert-with-hint is O(1); no such feature in java or python. Read() would construct a new hashmap and return it.

Design 4b would use a separate vector of Record{vid, value} to replace the small tree.

Read(vid) requires a binary search in each [3] of the K trees. After N updates, the total node count across all K trees is N, so even a naive search would not exceed O(N).
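
A sketch of design 4b in python, with bisect standing in for the RBTree’s ordered lookup (names are mine; vids are appended in increasing order):

import bisect
from collections import defaultdict

class VersionedDict4b:
    def __init__(self):
        self.vids = defaultdict(list)     # key -> sorted vids of its writes
        self.vals = defaultdict(list)     # key -> values, parallel to vids
        self.vid = 0

    def write(self, key, value):          # O(1) append to one key's history
        self.vid += 1
        self.vids[key].append(self.vid)
        self.vals[key].append(value)
        return self.vid

    def read(self, vid):                  # O(K log N): binary search per key
        snap = {}
        for key, kv in self.vids.items():
            i = bisect.bisect_right(kv, vid)   # last write at or before vid
            if i:
                snap[key] = self.vals[key][i - 1]
        return snap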

If needed, I can also maintain, at O(1) cost, a “replay vector of originals”, i.e. the original {key, value} from each write(). We can use this vector for replay.

[3] Q1: Can we avoid scanning all K trees when vid is small or K is much smaller than N ?

A: Like rebus, let’s maintain a separate vector (or RBTree) of “birthday” records {vid, pointer to a single small tree}. The birthday structure expands, at O(1) cost, by one element whenever a new small tree is born i.e. a new key is created. My read(vid=55) would binary-search this birthday structure using vid to locate the last small tree born before version 55, then iterate backward to visit each small tree born before version 55. This optimized read() uses binary search throughout, eliminating unnecessary linear search.
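
A tiny sketch of the birthday structure, using a plain sorted vector (names are mine):

import bisect

birth_vids = []   # sorted vids at which each new key was created; O(1) append
birth_keys = []   # the key born at the matching vid

def keys_born_by(vid):
    """Keys whose small tree was born at or before vid, via binary search."""
    i = bisect.bisect_right(birth_vids, vid)
    return birth_keys[:i]   # read(vid) only needs to visit these small trees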

Q1b: Worst case read():  K == N i.e. every write() creates a new key. So read() is still O(N)?

A: read() can become O(A), where A := the difference between the queried vid and a smaller vid previously cached.
A: If we find that in practice K is close to N, we can use an additional data structure to memoize previous read(vid) results i.e. the new hashmaps. I can use a sparse vector of records {pointer to read(vid) result}, indexed by vid. Each read() would save its result, at O(1) cost [4], in this memoization vector. For the next read(vid=77) I would binary-search this memoization vector to find the closest lower neighbor (eg 55), then replay from vid=77 towards 55, using XR’s algorithm.

[4] see my blogpost on memoization keyed by auto-increment id
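
A sketch of the memoize-and-replay idea, assuming the replay vector of originals above (writes[v-1] holds the single {key, value} of write #v); I replay forward here for simplicity, rather than XR’s backward direction:

import bisect

class MemoizedReader:
    def __init__(self, writes):
        self.writes = writes         # writes[v-1] == (key, value) of vid v
        self.cached_vids = [0]       # sorted vids whose snapshots are cached
        self.cache = {0: {}}         # vid -> full snapshot; version 0 is empty

    def read(self, vid):
        i = bisect.bisect_right(self.cached_vids, vid) - 1
        base = self.cached_vids[i]           # closest cached vid at or below
        snap = dict(self.cache[base])        # O(K) copy of cached snapshot
        for v in range(base + 1, vid + 1):   # replay A = vid - base writes
            key, value = self.writes[v - 1]
            snap[key] = value
        if vid not in self.cache:            # memoize for future reads
            bisect.insort(self.cached_vids, vid)
            self.cache[vid] = snap
        return snap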

Note the RBTree choice supports efficient deletion from both the birthday tree and the small trees.

—-

Now I feel a big string representing the entire resume is still under 100KB. I had assumed we were talking about millions of fields of gigabytes each.

For a long time I was confused and assumed that each write() could update multiple key/value pairs.

xp: pure algo questions often require QQ 打通脉络 (connecting the dots)

  • eg: the bbg 2017 pure algo question on tree serialization required a standard BFS — just a few lines, but I struggled for a long time. If you memorize the key parts and practice enough, then the *knowledge* shows up as an algo skill (see the BFS sketch after this list)
  • eg: the facebook regex problem is very hard for anyone I know, unless they have worked on a similar problem before and *know* the tricks.
  • eg: generating permutations and combinations can be tricky unless you *know* the key points.
  • eg: DP bottom-up vs top-down with memoization
  • eg: SCB-FM IV: stack-based queue
  • .. many more examples
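
For reference, a minimal sketch of the standard BFS (level-order) serialization the first example refers to; just a few lines once you *know* the pattern (the Node class and the '#' sentinel are my choices):

from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def serialize(root):
    out, q = [], deque([root])
    while q:
        node = q.popleft()
        if node is None:
            out.append('#')              # sentinel for a missing child
        else:
            out.append(str(node.val))
            q.append(node.left)          # enqueue children, level by level
            q.append(node.right)
    return ','.join(out)

print(serialize(Node(1, Node(2), Node(3))))   # 1,2,3,#,#,#,#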

打通知识脉络 — roughly, “connecting the threads of knowledge” — is the full Chinese phrase

required experience: QQ imt CIV

A few years into my technology career, I observed that Solaris/Oracle was harder to self-teach at home than linux/mysql/php/python/perl/javascript, because even a high-school student can install and hack with the latter. The entry barrier was non-existent.

Similarly, I now observe that Wall St QQ-IV demands longer experience than west-coast-style coding IV. Coding drill can use Leetcode over months; QQ requires not so much project experience as a lot of interview experience. It takes more than months of exploration and self-learning.

  • example — tcp/ip. My friend Shanyou preferred to read books systematically, but a book touches on hundreds of topics. He wouldn’t know which topics interviewers like to dig into.
  • example — c++ TMP. A fresh grad can read about it but won’t know interviewers’ favorite topics
  • compelling example — java concurrency. A fresh grad can build up theoretical knowledge but won’t have my level of insight

Many inexperienced candidates are highly appreciated in west coast interviews. No such appreciation on Wall St, because Wall St VP or contract roles require tried-and-tested work experience.

  • Wall St interviews are selective in terms of experience
  • West Coast coding interviews are selective in terms of … speed and optimality

%% absorbency: experiment/SDI imt speed-coding

When you find yourself in a high-absorbency mood, favor speed-coding (#1 most draining). See the list of less-draining drills at the bottom.

I can read dry QQ topics for hours each day for many days, but I tend to lose steam after coding for a few hours. Speed-coding drill drains my “laser” energy faster than QQ reading/blogging/experiments. When my laser is weakened (by boredom), I must drill harder to get through the “brick walls”.

I guess many fellow programmers enjoy coding more than reading. I feel lucky that in my interviews, knowledge tests still outweigh coding tests.

Q: What part of coding drill is worst on my absorbency?

A: speed-coding implementation is worst. It drains my laser energy fastest. After coding for a few hours I always feel like a deflated balloon and a discharged battery, and I need a full day to recharge.

I think frustration is the key. Self-expectation (about progress and traction) and self-image create the frustration and the drain.

Instead of traction, I often feel stuck and overspent.

I feel disappointed with myself (Deepak self-identifies as “annoyed”).

Q: Can we stop comparing with others and just compare with our past selves? Doable sometimes. Consider the Leetcode speed-coding contest #Rahul

— Just like yoga

  • if I work on easy problems I feel I’m wasting my time
  • if I work on tough problems it feels painful and draining, and I want to give up. After the practice I need hours to recover.

Q: … So can we find easier coding drills that I could enjoy (as Rahul suggested)? Definitely not easy. I think the first difficult step is self-acceptance: that I can’t improve much at this age.

Q (excellent question): What type of “coding” drill can I do for hours like reading/blogging?

  • pseudo-code algo on paper/whiteboard is lighter. No ECT, so I am swift and efficient. Less draining/frustrating.
  • SDI is most fun, least boring, not draining/frustrating. I can spend hours on an SDI. I feel a bit of accu. More like QQ, less like coding drill.
  • concurrency coding questions are less draining, as the other guys are not faster
  • c++/java language-feature QQ experiments are more like QQ. I can spend hours on a QQ experiment. More interesting as there’s no time-line, no benchmark, no frustration. Also the other guys are not stronger. I feel some accu, exactly like reading about these features
  • review of my previous code is much less draining (than writing new solutions) as there’s no time-line and the code is already working
  • analyzing patterns and reusable techniques (very few) in past problems. Thick->thin is the holy grail; I work hard towards it.
  • reading syntax and ECT tips in books and my blog

 

Will tough coding IV decline]10Y@@ #Rahul

It’s possible that as easy coding IVs spread or grow, tough coding IVs decline i.e. become less widespread. In that case, my t-investment now will have lower ROTI than expected.

It’s possible that as easy coding IVs spread or grow, tough coding IVs also grow … for differentiation of talent.

Rahul responded to my question and pointed out the two styles of interviewers he observed:

* interested in can/can’t solve — can this candidate get the problem solved in time?
* interested in thought process

Rahul felt the first type will want to use ever tougher problems to pick outstanding candidates… selectivity

Rahul felt the second type will want to use easier problems to get more insight into the candidate’s thought process.

Rahul felt that, to differentiate, employers can compare completion times on easy questions.

%% pure-algo/dStructure skill@recent CIV

Hi Friends,

Over the recent (about 12) months I have attempted several coding interviews:

  • passed a Standard Chartered pure-algo test over the phone: short questions but non-trivial
  • probably passed a big weekend coding assignment given by an e-commerce start-up. I later found out my solution is similar to a published topological-sort algorithm, so my algo is probably close to optimal.
  • passed another big weekend coding assignment given by Nasdaq
  • passed two separate paper+pencil coding tests at two reputable sell-side firms (i.e. banks or brokers)
  • passed a speed-coding test at a reputable internet company headquartered in Texas
  • probably passed a few bloomberg coding tests, both remote and onsite, on computers or whiteboard
  • (All recent failed coding tests happened at high-frequency trading shops .. very picky.)

All of these positive experiences convinced me that my algo and esp. data structure skills are improving, so I gave myself a rating of “A-minus” among Wall Street candidates. I felt esp. confident with white-board coding; with a real compiler, my proficiency is lower relative to other candidates.

Then I failed, on technical grounds, at two big internet companies (FB and … let’s say company XX). Ironically, these were white-board pure-algo problems involving non-trivial data structures — my traditional strengths. So what’s going on? Unexpected failures deserve analysis.

— First, I think I was still too slow by their standard. One interviewer (at XX) told me they wouldn’t reject candidates for slow completion, but I believe that’s not what she actually meant to say (she is not a native speaker). Suppose a candidate takes a long time to come up with a sub-optimal solution. I actually believe that, given more time, he would find an optimal solution (I am that type of candidate), but interviewers would have to assume he is weaker than those who complete the optimal solution quickly.

Interviewers often say they look for thought process, but that’s probably something they look for in addition to finding decent solutions in time. I think I tend to show a clear (not always correct) thought process, but that’s not enough.

I think those interviewers really look for “near-optimal solutions, not memorized beforehand”, so candidates need to 1) find a good solution and 2) be able to explain it when challenged.

— Second, I think the competitors are stronger (faster) than on Wall St. I call them “west-coast competitors”, though they could be working in any country, all competing for west-coast tech jobs. I guess they practice hundreds of problems on Leetcode. Such heavy practice doesn’t guarantee success, but it increases their winning chances.

— Third, I think my big-O estimates need improvement. I made multiple mistakes in the XX coding rounds, and elsewhere.

Big-O has never been a show-stopper in my Wall St interviews, so I didn’t pay close attention to the details.

— Fourth, I think I have not tried often enough. If I keep trying similar internet companies I might get lucky and score a technical win.

—-

Given the stiff competition and high standards of west coast coding tests, I now feel it’s extremely important to set intermediate goals:

  1. goal — pass some quick algo quizzes given by west coast interviewers over the phone. I used to get such quizzes in Google phone rounds
  2. goal — pass some initial (simpler) coding rounds via webex/coderpad
  3. goal — impress at least one interviewer in one coding round. I can often sense that an interviewer is delighted to hear my idea
  4. goal — impress multiple interviewers, even if my solution is not optimal. I achieved this at Bloomberg
  5. goal — pass all coding rounds at a reputable west-coast tech shop — my original goal, never achieved
  6. goal — 100% technical win i.e. pass all technical rounds including system-design interviews. Note you can still fail the soft-skill test.
  7. goal — get an offer

With these intermediate goals, the picture is no longer black and white, but grayscale. We might have hit some of the intermediate goals already 🙂

prevalence@speed coding among coding-IV

25-33% of my past coding IVs (counted by the hour) were tough speed-coding. The remaining hours were:

  • Easy speed-coding problems, easy to me
  • no-coding Pure-algo quizzes, often telephonic
  • Take-home assignments
  • Threading
  • latency engineering
  • test of language knowledge
  • SDI without implementation

— west-coast coding IV ≅ speed-coding contest

The skillset is unneeded on any job. For GTD or PKI, this skillset is not in the top 10.

Yet in these contests you need everything fast. Many students practice heavily to gain speed.

 

prior knowledge does make you look brainy: algo IV

After solving the min-stack problem, I found something paradoxical, even nonsensical — someone said this question is considered “easy”, yet if you have not seen it before you may find it extremely hard. I thought O(1) worst case was impossible. I have experienced many similar situations where prior knowledge can make you look very brainy (a min-stack sketch follows at the end of this section).
  • example — detect loops in a singly linked list, where the loop could be far out.
  • example — reverse an extremely long singly linked list, which breaks any recursive solution, so you must use a non-recursive one
  • example — in a regular unsorted binary tree (every node has up to 2 child nodes and no uplink to parent) how to print all nodes in-order but with O(1) additional space and O(N) time
I feel west coast (and Bloomberg) interviewers tend to ask this type of question because they expect candidates to study and acquire the knowledge beforehand.
If you don’t study, and hope to come up with a reasonable solution on the spot, then you must be extremely intelligent or lucky. I think math Olympiad gold medalists also need to study before their competitions.
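
For the record, here is the min-stack trick that once seemed impossible to me: a second stack records the minimum-so-far at each depth, making push/pop/getMin all O(1) worst case (a common textbook sketch, not necessarily the only solution):

class MinStack:
    def __init__(self):
        self.data, self.mins = [], []

    def push(self, x):
        self.data.append(x)
        # record the minimum of the whole stack as of this depth
        self.mins.append(x if not self.mins else min(x, self.mins[-1]))

    def pop(self):
        self.mins.pop()
        return self.data.pop()

    def getMin(self):          # O(1) worst case
        return self.mins[-1]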

xx: weekendCIV^codingDrill^work

The amount of tech learning and zbs growth from weekend coding assignments differs from coding drill or work projects:

  • intensity — highest
  • QQ halos — comparable to my language-feature experiments — improve QQ
  • scale — larger than coding drill but not too large like work projects
  • BP — is a major learning focus and evaluation criteria
  • absorbency — highest
  • sustained focus — lasting over a few days, pretty rare for me

I separate pure-algo from real-coding questions

Many job seekers simply say “algorithm question” or “coding interview” as if they meant the same thing. I see significant differences between the two types of coding test — compilable code vs pure algo.

Here’s one difference to start with — real-coding questions require us to be very familiar with syntax and very fast edit-compile-test cycles. I often struggle even when my idea is correct; I often run out of time. No such requirement in a white-board algo test.

I try to visualize a spectrum image like

<=== harder on language+implementation (blue side) … (red side) increasingly harder on pure algorithm ===>

  • (On the far-left “ultraviolet” side of the spectrum) multi-threading coding questions are light on pure computer-science algorithms.
  • (Also on the far left) some weekend coding assignments are very serious about design trade-offs, reasonable assumptions, thorough automated testing … No such requirement in a pure-algo test.
  • (On the mid-left) real coding questions can have code-smell issues, esp. in weekend take-home assignments. I failed many times there even though my code worked fine.
  • (In the middle) many Hackerrank and Codility questions involve difficult algorithms plus fast edit-compile-test. Very hard for me.
  • (On the right side) pure algorithm questions are sometimes on a white board, or even over the phone if very short. Light on syntax; no edit-compile-test; no code smells.
  • (On the far right) some west coast white-board interviews require candidates to ask clarifying questions, and later to eyeball-test the code without a compiler. No such requirement in a real-coding interview.
  • (At the extreme right end) very tough algorithm questions care only about O(), not language efficiency; pick your language. The toughest algorithm questions don’t even care about O(), so long as you can find any solution within 15 minutes.

I often perform better on pure-algo questions than real-coding questions. My worst game is codility/hackerrank.

I find it imprecise to call all of them “algorithm questions” and disregard the key distinctions between the two types. Both types have some algorithm element, but real-coding questions often require additional skills such as quick completion, language efficiency, and code-smell awareness. A bug can often take us 10 minutes to locate — something rare in pure-algo questions.

Effective strategies to prepare for real-coding vs pure-algo interviews are rather different. In this email I will only mention a few:
1) for speed, we need to practice writing real code and solving short problems; try timing yourself.
2) for pure algo, we can use python; we can look at other people’s solutions; we still need to practice implementing the ideas.
3) for white-board, we need to practice talking through a problem solution, asking clarifying questions, and eyeball-testing.
4) for weekend take-home tests, practice is not effective for me. I won’t prepare.
5) for multi-threading problems, we need to practice the basic problems.

nsdq=lightweight west coast CIV

  • like west-coast, no threading coding questions (unlike bbg)
  • like west-coast, no memory-mgmt questions
  • like west-coast, no questions on containers as implemented in any language
  • arch design question — more in-depth than Indeed
  • algo + whiteboard
  • unlike west-coast, there were a few knowledge questions, but nothing in-depth like on Wall St.
  • unlike west-coast, there was a weekend assignment

Now I feel some west coast interviews might be similar to the nsdq interview.

IDE IV=easier4some !!4me

I guess many (a majority?) of candidates consider IDE coding IV easier than white-board. I think they rely on the IDE as a coding aid. Many programmers are not used to “demo” coding on a white board … stressful, unnatural?

The “coding aid” feature helps me too, but it helps my competitors more!

If on white-board (or paper) my relative rating is A-, then on IDE I would score B:

  • my relative weakness on IDE — ECT speed
  • my relative strength on white-board — clarity of thinking/explanation

## coding IV P/F

See also ##failed c++cod`IV: home^IDE^whiteboard:4 beat-fronts

  • paper — shorthand for paper, dumb editor, white-board etc
  • webex and onsite are somewhat similar, usually pair-programming; the take-home setting is vastly different
  • pts (points) — awarded to myself, indicating the performance, value, difficulty ..
  • p/f is based only on the interviewer’s subjective judgement
  • pp means passed convincingly, with flying colors
  • [1] I focused on a working solution! See %% priorities in a take-home coding IV
  • [2] highlight: where practice can make a difference in a make-or-break test
| lang | location | IDE? | P/F | firm/person | pts 1-5 | year | Notes | category | type@practice needed [2] |
|------|----------|------|-----|-------------|---------|------|-------|----------|--------------------------|
| py | webex | IDE | | GS-HK | 4 | 2019 | see “highest point ]%%visible python proficiency” | | |
| | onsite | paper | | FB | | 2019 | full day | | |
| c/py | onsite | paper | p | FB | 5 | 2019 | first round | algo | Leetcode medium/hard |
| c | weekend | IDE | | HRT | 5 | 2019 | parser | dStruct heavy | |
| py | onsite | paper | 😦 | Indeed | 3 | 2019 | | algo | algo practice |
| c | weekend | IDE | they gave up | redMart | 5 | 2019 | | | |
| c/py | home | hackerrank | no news | Altonomy | 4 | 2019 | 4 problems #Deepak | | |
| c | onsite | paper | p | Quoine | 1 | 2019 | balanced brackets #5-min job | algo | |
| py | webex | IDE | p | Indeed | 5 | 2019 | see my blogposts | algo | speed-coding; memorize syntax |
| c | home | hackerrank | p | TradeWeb | 1 | 2019 | CSY: basic STL map manipulation | algo | STL + basic algo practice needed definitely |
| c | phone | phone | p | SCB-FM | 2 | 2018 | pure algo: 1 easy + 2 hard ones. I over-performed | algo | array/slist problems |
| c | weekend | IDE | 😦 | quantlab | 4 | 2018 | Houston HFT; weekend coding drill ROTI | dStruct | |
| c/thr | home | IDE | p | Pimco quant | 3 | 2018 | 2H parallel programming | thr | study pthreads |
| c | home | hackerrank | p | FlexTrade | 3 | 2018 | slist cleanup #github | std algo | array+list problems |
| java | home | hackerrank | 😦 | Pimco #190m | 2 | 2018 | Q9: path through matrix. Ineffi [1] | std algo | too hard, but some 2D practice can help |
| java | home | hackerrank | 😦 | Pimco #190m | | 2018 | Q8: buy 1 sell any | algo | array problems |
| c | onsite | paper | p | LiquidNet | 2 | 2018 | 30 minutes | algo | array problems |
| c | home | codility | 😦 | Mako | 4 | 2018 | monkey. Ineffi [1] | algo | too hard |
| c | home | codility | 😦 | Mako | 3 | 2018 | quasiconstant [1] | algo | array practice |
| c | onsite | paper | p | SIG | 2 | 2018 | 15-30min … RAII smart ptr | std | lang |
| c | onsite | paper | p | SIG | 3 | 2018 | push_back | std | lang |
| py | onsite | IDE | p | SIG | 3 | 2018 | movie query | dStruct | algo practice |
| c | weekend | hackerrank | p | Promethean | 5 | 2018 | 2 tough graph problems | algo | too hard but some QQ can help |
| c/py | home | hackerrank | pp | Kraken | 3 | 2018 | 3 (small?) problems | algo | algo |
| any | onsite | paper | 😦 | Nsdq | | 2018 | oracle day-trading | algo | array problems |
| any | onsite | paper | 😦 | Nsdq | | 2018 | check array 0-N in-situ | algo | array@int problems |
| java | home | IDE | pp | Nsdq | 4 | 2018 | drone: medium-level implementation | SDI | hard to prepare |
| c | onsite | paper | pp | CVA | 3 | 2018 | O(N) in-situ filtering | algo | array problems |
| c | onsite | paper | 😦 | FB | | 2018 | 2 short problems | std algo | algo |
| c | onsite | | 😦 | Trex | | 2018 | exchange messaging | SDI | no practice can help |
| py | onsite | IDE | pp | Trex | 1 | 2018 | 3 short tricky problems, only 1 implemented | algo | algo practice can help |
| c | onsite | paper | pp | Quantum | 1 | 2018 | virt func not declared in base | trivial | lang |
| c/thr | webex | paper | pp | bbg | 4 | 2018 | single-thr pool | QQ | lang |
| c | webex | paper | pp | bbg | 3 | 2018 | shared ptr ctor/dtor | QQ | lang |
| py | onsite | paper | pp | bbg | 5 | 2018 | continuousSentence | algo | backtracking, string problems |
| c/thr | onsite | paper | ? | Wells | 2 | 2017 | concurrent queue | std | |
| c | onsite | paper | p | Wells | 1 | 2017 | remove_spaces. Ineffi | trivial | string problems |
| c | onsite | paper | p | bbg 2 teams | 2 | 2017 | array reshuffle | | lang |
| c | onsite | IDE | p | bbg 2 teams | 1 | 2017 | isBST() | std | tree problems |
| c | onsite | paper | P! | bbg 2 teams | 1 | 2017 | string | | string problems |
| c | onsite | paper | 😦 | bbg 2 teams | | 2017 | filters on screen | dStruct heavy | dStruct |
| C | home | hackerrank | 😦 | Thesys | | 2017 | 3 problems out of 5 | algo | algo |
| C | home | IDE | pp | Thesys | 5 | 2017 | first N prime Fibonacci | | hard to prepare |
| c | webex | IDE | pp | bbg | 2 | 2017 | top N active stocks | dStruct heavy | dStruct |
| c | onsite | paper | pp | BAML | 1 | 2017 | various | | lang |
| c | weekend | IDE | 😦 | GS | 3 | 2017 | TickEngine | dStruct heavy | dStruct problems can help |
| C | webex | IDE | pp | bbg | 2 | 2017 | free slots | dStruct heavy | algo |
| C | webex | paper | p | bbg 1st round | 2 | 2017 | tree serialization | std algo | tree algo |
| Java | onsite | paper | pp | BGC | 2 | 2017 | multiple short programs | | lang |
| Java | weekend | IDE | pp | BGC | 3 | 2017 | connected or not | | hard to prepare |
| Java/thr | weekend | IDE | p | HSBC | 4 | 2017 | 3 big questions; barely passed | | hard to prepare |
| Java/thr | home | IDE | pp | pimco | 4 | 2017 | iterator again | | lang |
| Java | onsite | paper | pp | pimco-Zoltan | 2 | 2017 | a few short problems | | lang? |
| c | webex | IDE | 😦 | Citadel | | 2017 | array shrinking | | array problems |
| py | webex | paper | ? | Broadway | | 2017 | hashtable; code (on github) works | std algo | implement std containers |
| cpp | weekend | IDE | ? | iRage | 3 | 2015 | order book again; no response | dStruct heavy | hard to prepare |
| py | home | codility | p | baml | | 2016 | Ashish: applying to Qz | | algo |
| Java | home | codility | 😦 | baml | | 2015 | Qz FX option team; too lazy | | algo |
| c | home | codility | 😦 | Jump 3rd | | 2015 | | | algo |
| c | weekend | IDE | pp | jump 1st | 3 | 2012 | order book | dStruct heavy | hard to prepare |
| c | weekend | IDE | 😦 | DRW | 3 | 2015 | Tetris; code quality | | hard to prepare |
| c | webex | paper | 😦 | bbg-London | | 2015 | unable to understand Q1 of 4 | std algo | algo |
| c# | webex | paper | pp | eikon | 2 | 2013 | short, easy questions | QQ | lang; won’t practice |
| java | home | IDE | 😦 | MS | 4 | 2011 | FIX; big take-home test | | hard to prepare |
| swing | webex | IDE | pp | Barx | 2 | 2012 | swing | QQ | lang; won’t practice |
| C | home | IDE | 😦 | Mac | | 2012 | 2 short problems | | lang |
| java | home | IDE | 😦 | MS-comm | | 2012 | too lazy | | hard to prepare |
| c | onsite | paper | pp | Cantor | 1 | 2011 | auto-ptr | QQ | lang; won’t practice |
| java | onsite | paper | 😦 | Barc SOR | | 2011 | recursion | | recursion |
| java | onsite | IDE | pp | RBC | 1 | 2011 | 30-45 min, bond tick conversion | | lang |
| java | onsite | IDE | 😦 | UBS | | 2011 | Suntec | | lang |
| java/thr | onsite | whiteboard | 😦 | UBS | | 2011 | Jack Yang: lockfree stack | QQ | |
| java/thr | onsite | whiteboard | 😦 | Barc | | 2011 | wait/notify + lockfree | QQ | |
| java/thr | onsite | IDE | 😦 | Lab49 | | 2010 | executor | | lang |
| java/thr | home | IDE | pp | Gelber | 3 | 2011 | multithreaded | | hard to prepare |
| C | home | IDE | 😦 | Amazon | | 2011 | didn’t take it seriously | | algo |
| C | onsite | whiteboard | 😦 | FB | | 2012 | regex QQ | std algo | too hard, but some QQ can make a diff |
| any | onsite | whiteboard | 😦 | Goog | | 2007 | | | algo |