phone pad: generate all combos #40%

Modified from Leetcode problem 17. Suppose there is nothing but one red button, and there are C (say 7) letters on it.

Q: With 4 (=N) draws from 3 (=C) letters (same phone pad button), how many permutations? C^N.

Q: With 4 (=N) draws from 3 (=C) letters, how many combinations?

For N=1, Ans=3
For N=2, Ans=6
For N=3, Ans=? Not obvious

–sol1:

Key: Each combo is represented as a sorted vector (ascending). There’s one-to-one mapping between such a vector and a combo.

let’s say C=3 and N grows from 1 to 3. For N=2, I put all combos on 3 rows (3 vectors or lists):

  • aa ab ac #all that start with a
  • bb bc #all that start with b
  • cc # lone wolf

For N=3,

  • first container is populated by prepending ‘a’ to all 3 “old” containers of items
  • 2nd container is populated by prepending ‘b’ to 2nd-and-lower old containers
  • 3rd container is always a lone wolf
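
Below is a minimal python sketch of sol1 (letters hard-coded as a/b/c i.e. C=3; names are illustrative):

  # rows[i] holds all combos (sorted strings) starting with letter i
  def combos(letters, N):
      rows = [[ch] for ch in letters]          # N=1 base case
      for _ in range(2, N + 1):
          newRows = []
          for i, ch in enumerate(letters):
              # prepend letters[i] to every combo in row i and all lower rows
              newRows.append([ch + c for row in rows[i:] for c in row])
          rows = newRows
      return [c for row in rows for c in row]

  print(combos('abc', 2))       # ['aa','ab','ac','bb','bc','cc'] i.e. 6
  print(len(combos('abc', 3)))  # 10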

–sol-i: incremental solution — for N=3, given one combo, find the “next” combo. For example after bbc, we should get bcc.

I guess this design is actually fairly simple and efficient, probably similar to  https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/nextCombo/iterative.cpp


given unsorted ints, find median #O(N)

Classic problem —

Q: Given an unsorted int array, find its median in O(N). For simplicity, we can first assume array size is odd.

I think we can use any data structure. We can scan the array multiple times.

—-simple partition algo

Pick a random item in the array as pivot value and partition the array. Let’s say we are unlucky and get 12% low vs 88% high. So we discard the 12% and repeat. Suppose then we get 66% vs 22% (high section). We would then discard the 22%.

So we are likely to require about 2N “visits”. I don’t think it would degrade to O(N logN).
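
A minimal quickselect sketch of this partition idea, assuming odd array size:

  import random

  def find_median(nums):
      k = len(nums) // 2                 # rank of the median
      remaining = list(nums)
      while True:
          pivot = random.choice(remaining)
          below = [x for x in remaining if x < pivot]
          equal = [x for x in remaining if x == pivot]
          if k < len(below):
              remaining = below                    # discard the high section
          elif k < len(below) + len(equal):
              return pivot                         # median == pivot
          else:
              k -= len(below) + len(equal)         # discard the low section
              remaining = [x for x in remaining if x > pivot]

  print(find_median([7, 1, 5, 3, 9]))   # 5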

—-fancy idea

In one scan find the max and min. Calculate the mid value. Say this is 23.5, not even an integer. I will use this as a pivot value to partition the array into 2 segments.

Suppose N=101, so I’m looking for #51 item. Suppose left segment has 10 and right segment has 91, so I discard left segment. Now I’m looking for the #41 in the remaining array.

Now I find max and min again (if necessary). Instead of taking the midpoint between them, I will use a weighted pivot of (10*min+91*max)/101, so the pivot is shifted to the right, based on suspicion of a right-heavy histogram.

–complexity?

On average, I should get N+N/2+N/4 … <2N

Worst case? The illustration above is rather unlucky since histogram is right-heavy. So my “weighted pivot” idea should alleviate it.

sequence@{max within sliding window} O(N) no deque #40%

Q (leetcode hard problem 239): Given an array nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position, return the max item in it. Total run time should be O(N) but I think within each window we may not always get O(1).

https://www.interviewbit.com/problems/sliding-window-maximum/ is the same problem stated more clearly.

No need to produce a new value whenever the window slides. Just produce the final list in O(N). It may take much longer than O(1) to have the first output element!

====analysis====

Since there exists an O(N) solution, I’m confident I can figure out an O(N) solution

idea — first scan to identify all the troughs. Items around a trough are to be removed

–latest idea, not using a deque, therefore likely original —

Update — consider a rising sequence before going any further

in my showman-stack, every element {idx,height} will show up on stage for at least one episode. I would say each showman goes on stage either when it first enters the sliding window, or before it leaves the sliding window. If that’s the case, at each slide I only need to consider the head and tail.

first forward scan populates this deque and ensures only showmen are included. Let’s work out some examples. The first part of the first scan would simply find the max among the first K items. The 2nd scan might be combined with the first, but for now my goal is clear and focused — produce a clean stack of showmen only

  • degenerate case — monotonic sequence requires virtually all elements be included
  • if a low point is “close to” a high point already in the stack, then skip it.
  • rules on the forward scan —
    • every new item needs to be pushed to the stack as it could be higher than all followers. The logic lies in what happens before the push
    • before pushing a new item, top of the stack is always the immediate predecessor. Need to add assert.

pseudo code:

O(N) pre-scan to find all troughs. For each of N items, record the preceding trough.

check if the most recent trough is within range. If not then nothing to pop. Assuming it is.

while new item >= top of stack && the 2nd top is within range (distance <= K): keep popping

while new item >= top of stack && distance between them is below K && there would still exist an earlier higher item within range:
//at end of the while loop, we could actually produce some output, but I would postpone it to the 2nd scan

For the last K items in array, we need some O(K) special handling.

I think both conditions are necessary for popping the stack. Are these two conditions alone sufficient justification to remove the top?
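
For reference, a minimal sketch of the standard monotonic-deque solution (not the showman-stack idea above). The deque holds indices whose heights are decreasing, so the head is always the max of the current window:

  from collections import deque

  def max_sliding_window(nums, k):
      dq, out = deque(), []
      for i, x in enumerate(nums):
          while dq and nums[dq[-1]] <= x:   # pop dominated "showmen"
              dq.pop()
          dq.append(i)
          if dq[0] <= i - k:                # head fell out of the window
              dq.popleft()
          if i >= k - 1:
              out.append(nums[dq[0]])
      return out

  print(max_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]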

longest substring without repeating chars #untested

Q(leetcode Q3): Given a string, find the longest substring without repeating characters.

–Sol1 O(N):
keep a never-shrinking sliding window + a “hashmap” of chars in it. Actually the HM is a 26-element integer array of frequencies.

Every time the lagging edge of the window moves by one, by definition one char drops out, so we remove that char from the HM by decrementing its frequency. If the frequency hits 0 then we also decrement a global var uniqCnt describing the HM.

IFF uniqCnt == windowSz then the window is clean.

Every time we see a clean window and it’s longer than the longest clean window, we update our record.
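
A minimal sketch of Sol1, assuming lowercase a-z input; the window (lag..lead) never shrinks below best+1:

  def longest_unique_substring(s):
      freq = [0] * 26                  # the "HM": per-char frequency in window
      uniqCnt = lag = best = 0
      for lead, ch in enumerate(s):
          idx = ord(ch) - ord('a')
          freq[idx] += 1
          if freq[idx] == 1:
              uniqCnt += 1
          if lead - lag + 1 > best + 1:        # window too long: drop one char
              dropped = ord(s[lag]) - ord('a')
              freq[dropped] -= 1
              if freq[dropped] == 0:
                  uniqCnt -= 1
              lag += 1
          if uniqCnt == lead - lag + 1:        # clean window
              best = lead - lag + 1
      return best

  print(longest_unique_substring('pwwkew'))    # 3 i.e. "wke"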

max all-black submatrix O(LN)#ZR

Same problem as https://leetcode.com/problems/maximal-rectangle/description/

Q: Given a 2D binary matrix (L by N) filled with white(0) and black(1) cells, find the largest all-black rectangle. See raiserchu’s mail on 12 Sep 13. There is a clever DP solution, probably O(LN).

—Analysis:

Worst case — A standard chess board? We can’t do better than O(LN) since there are LN cells to read.

— sol5:

First scan O(LN) to record, in each cell {bar height; horizontalBarStart/End}.

— sol4 unfinished

Scan #1 O(LN): build a shadow matrix “histogram” where each integer in the cell is the height (possibly 0) of the bar anchored therein.

Scan #2 O(LN) for each cell, remember the currentRunStart column index i.e. from that column until current column, we have an all-black box of height == current bar height

— sol3 O(LNS), new idea based on max rectangle ] histogram — treat the top 2 (denote J:=2) rows as a histogram. Find the max rectangle therein. Then J:=3 …

  • Scan #1 O(LN): build a shadow matrix “histogram” where each integer in the cell is the height (possibly 0) of the bar anchored therein. In other words, if a cell value=5 then there are exactly 4 consecutive black cells above this (black) cell. Build it incrementally, level by level, top to bottom.
  • Scan #2a: for each row in the shadow matrix, we run the proven algo in O(NS). Note there’s no help from the previous row:(
    • S:= #unique heights, N:= matrix width 
  • Scan #2 := the entire scan of L rows. so worst case we hit O(LNS)

Q: Can we do better by reducing scan #2a complexity to O(N), by making use of the previous row results?
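
For what it’s worth, the well-known stack-based algo already gets scan #2a to O(N) without any help from the previous row, giving O(LN) overall. A hedged sketch:

  def maximal_rectangle(matrix):
      n = len(matrix[0])
      heights = [0] * n
      best = 0
      for row in matrix:
          for c in range(n):                 # scan #1: incremental histogram
              heights[c] = heights[c] + 1 if row[c] == 1 else 0
          best = max(best, max_histogram(heights))   # scan #2a in O(N)
      return best

  def max_histogram(heights):
      stack, best = [], 0                    # indices of rising bars
      for i, h in enumerate(heights + [0]):  # sentinel 0 flushes the stack
          while stack and heights[stack[-1]] >= h:
              top = stack.pop()
              width = i - stack[-1] - 1 if stack else i
              best = max(best, heights[top] * width)
          stack.append(i)
      return best

  print(maximal_rectangle([[1, 0, 1, 0, 0],
                           [1, 0, 1, 1, 1],
                           [1, 1, 1, 1, 1],
                           [1, 0, 0, 1, 0]]))   # 6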

— My brute force solution 1: Each rectangle is identified by 2 vertices, i.e. 4 integers. Without loss of generality, we require the “high” corner to have higher x-coordinate and higher y-coordinate than the “low” corner. (We can assume the y-axis runs upward.) With this O(N^4) nested loop we can iterate over all possible rectangles:

Lock low corner
Move high corner in typewriter (zigzag) steps i.e.
  hold highY and move highX step by step
  process the (series of) resulting rectangles
  increment highY and repeat
Move the lower corner in typewriter steps and repeat

Key observation: any “bad pixel” disqualifies every rectangle containing it.

— Here’s my partial solution:
We can effectively ignore all the “good pixels”.

1) Look at the x coordinates of all bad pixels. Sort them into an array. Find the largest gap. Suppose it’s between x=22 and x=33. Our candidate rectangle extends horizontally from 23 to 32, exactly. Notice there’s no bad pixel within this vertical band [1].
2) Look at the y coordinates of all bad pixels. Sort them into an array. Find the largest gap. Suppose it’s between y=15 and y=18. Our candidate rectangle extends vertically from 16 to 17, exactly.
[1] This candidate rectangle can expand all the way vertically, which may give a bigger rectangle. Ditto horizontally.

SCB-FM algo Q2b stack-based queue

Q: given a hardware-based stack API consisting of 3 functions {pop/push/isEmpty}, please implement a queue api consisting of 3 functions {enqueue/dequeue/isEmpty}

https://leetcode.com/problems/implement-queue-using-stacks/description/ is similar

====analysis====

service dequeue from a hidden stack.

When hidden stack is empty, pop all nodes from visible stack to hidden stack

isEmpty() must add up two sizes.
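
A minimal two-stack sketch, with python lists standing in for the hardware stack’s push/pop/isEmpty:

  class StackQueue:
      def __init__(self):
          self.visible = []       # receives every enqueue
          self.hidden = []        # services every dequeue

      def enqueue(self, x):
          self.visible.append(x)

      def dequeue(self):
          if not self.hidden:                 # refill only when hidden is empty
              while self.visible:
                  self.hidden.append(self.visible.pop())
          return self.hidden.pop()

      def isEmpty(self):
          return not self.visible and not self.hidden

  q = StackQueue()
  q.enqueue(1); q.enqueue(2); q.enqueue(3)
  print(q.dequeue(), q.dequeue(), q.dequeue())   # 1 2 3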

LFU cache #cf.LRU #untested

Q LFU (Least-Frequently-Used) cache to support the following operations: get and put in O(1)
* get(key) – Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
* put(key, value) – Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least frequently used item before inserting a new item. For the purpose of this problem, when there is a tie (i.e., two or more keys that have the same frequency), the least recently used key would be evicted.

====Analysis

  1. dstruct — centry i.e. CacheEntry node {key, value, hitCount, lastHit (timestamp), pointers to prev/next centries, (optional) ptr to host LinkNode}, to be used in an inner link list.
    • invariant: hitCount can only increase
  2. dstruct — inner list of centry nodes
    • invariant: list always sorted by lastHit. We can remove any intermediate node, but incoming node is always added to the Tail
  3. dstruct — fixed-sized (rehash-free) hashtable {key -> ptr to centry}
  4. dstruct — LinkNode {level, inner list-of-centry, prev/next pointers } where all centry objects share the same hitCount denoted “level”.
  5. dstruct — outer list of LinkNodes, always sorted by level

“bubble-up” operation — Whenever a centry gets a cache-hit, its hitCount increments. It immediately and unconditionally bubbles up to the LinkNode one level higher (to be created in O(1) if necessary) ((
* [o1] query the hashtable and follow ptr to remove the centry from old linkNode
* [o1] append the centry to new linkNode, at Tail of inner list. The new LinkNode could be non-existent but Never empty!
* [o1] optionally, new host linkNode’s address is saved in the centry;
))

  • Get() hit — relatively easy. Update the hitCount and bubble up
  • Get() miss — trivial
  • Put() Update — similar to get-hit
  • Insertion (possibly after deletion) — [o1] append to Tail of the Level-1 LinkNode (to be created if necessary) and add to hashtable
  • Deletion — always from list to hashtable, never the converse
    • [o1] identify lowest level in existence, then delete the head (i.e. eviction target) of inner list
    • when a linkNode becomes empty, it must disappear from the outer list, to prevent build-up of consecutive empty LinkNodes leading to linear search for eviction target. Imagine aaaaa bbbbb c[Now need to evict an “a”]. Therefore, array of LinkNode is unacceptable.
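
Below is a hedged python sketch of the same design, with an OrderedDict per level standing in for the hand-rolled inner lists (OrderedDict gives O(1) tail-append and head-pop, i.e. lastHit order within a level):

  from collections import defaultdict, OrderedDict

  class LFUCache:
      def __init__(self, capacity):
          self.cap = capacity
          self.val = {}                          # key -> value
          self.freq = {}                         # key -> hitCount
          self.level = defaultdict(OrderedDict)  # hitCount -> keys, LRU order
          self.minFreq = 0

      def _bubble_up(self, key):                 # move key one level higher
          f = self.freq[key]
          del self.level[f][key]
          if not self.level[f]:                  # empty LinkNode must disappear
              del self.level[f]
              if self.minFreq == f:
                  self.minFreq = f + 1
          self.freq[key] = f + 1
          self.level[f + 1][key] = None          # append at Tail

      def get(self, key):
          if key not in self.val:
              return -1
          self._bubble_up(key)
          return self.val[key]

      def put(self, key, value):
          if self.cap <= 0:
              return
          if key in self.val:
              self.val[key] = value
              self._bubble_up(key)
              return
          if len(self.val) >= self.cap:          # evict head of lowest level
              evict, _ = self.level[self.minFreq].popitem(last=False)
              del self.val[evict], self.freq[evict]
          self.val[key], self.freq[key] = value, 1
          self.level[1][key] = None
          self.minFreq = 1

  c = LFUCache(2)
  c.put(1, 1); c.put(2, 2)
  print(c.get(1))    # 1
  c.put(3, 3)        # evicts key 2, the least frequently used
  print(c.get(2))    # -1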

max-profit # easy SCB-FM

Q: maximize profit on a given a price series … you can buy, sell, buy, sell any number of times, each time one share. No buy-buy-sell-sell allowed, i.e. at the time of buying, you must have no holding.

Q: ok you said it’s O(N), can you do it in one scan?

====my answer (without hindsight)

If price keeps dropping, then no trade possible

If price keeps rising steadily, then only one buy-sell pair

if I get peak-trough-peak-trough… then I must discard the first peak since I can’t do anything with it.
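
A minimal one-scan sketch: with unlimited buy-sell pairs of one share each, the max profit is simply the sum of all upward moves:

  def max_profit(prices):
      return sum(max(b - a, 0) for a, b in zip(prices, prices[1:]))

  print(max_profit([7, 1, 5, 3, 6, 4]))   # 7 = (5-1) + (6-3)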

max-profit@most 2K trades #proven but untested

Q(Leetcode): Say you have a price history as array. Design an algorithm to find the maximum profit. You may complete at most 2K transactions, consisting of exactly K (eg 2) buys and K sells. You may not engage in multiple transactions at the same time (i.e. you must sell the stock before you buy again). No short sell please.

No O() requirement.

====analysis=====

I feel first challenge is to list all (not more than 10) scenarios. This step has taken me a few days, even though I drew many examples.

–solution 1 (brute force): construct all possible pairs, rank them and pick top 2.

–solution 2 (only works for K=2)

  1. Identify all the turning points so we end up with HLHLHL… We can eliminate or ignore the other points.
  2. * identify the best pair using the max-profit algo. denote them as L1/Hj
  3. * In the subarray before L1, find the best pair
  4. * in the subarray after Hj, find the best pair
  5. pick the best among the two and denote it as p2
  6. Now look into the subarray L1 to Hj. If there’s no enclosed pairs within then we have a simple case — use L1/Hj and p2. But let’s assume there are at least 2 nodes enclosed. I will denote entire subarray as L1 H1 L2 H2 … Lj Hj (where L1-Hj is the max-profit)
  7. * use max-profit algo to find the worst loss from H1 to Lj. Suppose it’s H3 to L5.
  8. If this loss exceeds p2, then we will return L1/H3 and L5/Hj. Otherwise, return L1/Hj and p2

This solution uses the watermark algo 4 times (*).

can we use a matrix?

We can keep track of all profitable pairs i.e. le/ri indices, and also a pointer to the current best pair that’s not overlapping with “me”.

After creating 2nd pair, IFF no overlap, then we update the pointers in both instances.

After creating 7th pair, if it doesn’t overlap with the #3 highest pair, then check-update the pointer in #3.

I think if we can efficiently keep track of these then it should work.

I feel basic decision is to break the best pair or keep it

case: need to break the highest pair into 2 pairs,
case: best pair + another pair outside. I think this is easy..
case: 1,18,2,19,15,16. Perhaps the hardest case to solve.
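
For reference, a hedged sketch of the textbook hold/free DP for at-most-K buy-sell pairs (a different route from solution 2 above):

  def max_profit_k(prices, K):
      # hold[j]: best cash while holding after the j-th buy
      # free[j]: best cash after the j-th sell
      hold = [float('-inf')] * (K + 1)
      free = [0] * (K + 1)
      for p in prices:
          for j in range(1, K + 1):
              hold[j] = max(hold[j], free[j - 1] - p)   # j-th buy at p
              free[j] = max(free[j], hold[j] + p)       # j-th sell at p
      return free[K]

  print(max_profit_k([1, 18, 2, 19, 15, 16], 2))   # 34 = (18-1) + (19-2)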

remove subsequent duplicates in slist #SocGen

Q (SocGen hackerrank): given the head node of a linked list that might have duplicate values,  return the head node of a sanitized linked list without duplicates.

eg: Node@0x7089 (value=3) ->Node@0x6481 (value=2) ->Node@0x7921 (value=3) ->Node@0x6308 (value=7) ->null

would become Node@0x7089 (value=3) ->Node@0x6481 (value=2) ->Node@0x6308 (value=7) ->null

Note the first node (value=3) is still in the list, but all subsequent nodes with (value=3) are discarded.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/removeDupNodes_SocGen.cpp is my tested solution

  • The two-pointer solution sounds simple but is over-complicated and error-prone during implementation.
  • fake-link-head technique is again useful
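
A minimal sketch using a hashset plus the fake-link-head technique noted above:

  class Node:
      def __init__(self, value, next=None):
          self.value, self.next = value, next

  def remove_dup(head):
      fake = Node(None, head)            # fake link head
      seen, prev = set(), fake
      while prev.next:
          if prev.next.value in seen:
              prev.next = prev.next.next     # unlink the duplicate
          else:
              seen.add(prev.next.value)
              prev = prev.next
      return fake.next

  node = remove_dup(Node(3, Node(2, Node(3, Node(7)))))
  while node:
      print(node.value, end=' ')    # 3 2 7
      node = node.next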

edit distance

The DP idea — compare matrix-path-counter, which is less obvious and easier than this one.

Q72 on Leetcode: Given two words word1 and word2, find the minimum number of operations required to convert word1 to word2. You have the following 3 operations permitted on a word:

  1. Insert a character
  2. Delete a character
  3. Replace a character

Comment — Top 100, naturally occurring. I won’t bother to pass all Leetcode tests esp. the load tests. If I pass all non-load tests I would consider my solution decent.

https://github.com/tiger40490/repo1/tree/py1/py/str has my implementation based on a DP idea online, and a spreadsheet illustration. The idea is elegant once you wrap your mind around it.
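
A minimal sketch of that DP idea — dp[i][j] is the edit distance between the first i chars of word1 and the first j chars of word2:

  def edit_distance(word1, word2):
      m, n = len(word1), len(word2)
      dp = [[0] * (n + 1) for _ in range(m + 1)]
      for i in range(m + 1):
          dp[i][0] = i                   # delete all i chars
      for j in range(n + 1):
          dp[0][j] = j                   # insert all j chars
      for i in range(1, m + 1):
          for j in range(1, n + 1):
              if word1[i - 1] == word2[j - 1]:
                  dp[i][j] = dp[i - 1][j - 1]
              else:                      # replace / delete / insert
                  dp[i][j] = 1 + min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1])
      return dp[m][n]

  print(edit_distance('horse', 'ros'))   # 3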

===analysis===
Starting with the small string (length S), the challenge is to project as many of the S chars as possible onto the large string (length L). If we can project 5 chars at most, then … (wrong — the remaining S-5 chars need replacement, and the other L-S chars need insertion.)

–idea2: draw all the projection arrows from S to L. In a good-projection, every arrow on the right should be more slanted than every arrow on the left. We want the largest good-projection. In the opening example, the largest would have 5 arrows, …

—-
None of these ideas has proven effective.

in size-N array find The duplicate int #1 to N+1 #pigeonhole Abhinav

https://leetcode.com/problems/find-the-duplicate-number/description/: Given an immutable int array nums containing n + 1 elements where each element is between 1 and n (inclusive), prove that at least one duplicate number must exist. You are guaranteed that there is only one duplicate value; find it in O(1) space, below O(N^2) time. The culprit may repeat many times.

I didn’t bother to write the code.

===== analysis =====

contributed by a user and highly contrived:(
many likes:)

–bisection solution in O(N logN) time and O(1) space. I came up with this solution within a minute.

  1. Divide the full range [1 to n] into 2 almost-equal ranges (i.e. if n = 2K+1, then I use [1 to K] and [K+1 to n] as the 2 ranges)
  2. Count how many nodes are in each range. Clearly one of the two ranges must have too many elements.
  3. Remember the boundary of that bad range so from now on we will ignore those nodes falling into the good range. We will use 2 variables to update/improve the boundary, until they coincide.
  4. within the bad range, repeat Step 1.

Key insight — progressive bisection.. non-recursive.

Key insight — applying the pigeonhole principle, we split the conceptual range. The more common (but ineffective) technique would split the physical array.
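
A minimal sketch of the bisection solution — O(N logN) time, O(1) space:

  def find_duplicate(nums):
      lo, hi = 1, len(nums) - 1          # value range [1, n]
      while lo < hi:
          mid = (lo + hi) // 2
          count = sum(1 for x in nums if lo <= x <= mid)
          if count > mid - lo + 1:       # pigeonhole: [lo, mid] is the bad range
              hi = mid
          else:
              lo = mid + 1
      return lo

  print(find_duplicate([1, 3, 4, 2, 2]))   # 2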

longest consecutive ints]O(N) #zebra

Popularity — 1000+ likes on Leetcode … possibly popular

Q(Leetcode): Given an unsorted array of integers, find the longest consecutive element sequence, in O(N) time. Eg: given [100, 4, 200, 1, 3, 2] return [1,2,3,4]

I call this the zebra problem because  every consecutive sequence of int is a black stripe and the gaps between them are white stripes. We want the widest black stripe. Obviously, each stripe has minimum size 1.

https://github.com/tiger40490/repo1/blob/py1/py/array/zebra.py is my O(N) solution, not tested on Leetcode.

========

What’s UnionFind? A reusable technique?

Like inserting interval #merging #80% done, I feel this is a data structure problem.

To keep things simple, i will first run one iteration to remove all duplicate items.

I will use a hashtable where the key is a known item. The value is a pointer to a “segment” object.

A segment stores the min and max values. All integers within [min, max] of the segment are always known-items during my scan of input array.

When a new item is either min-1 or max+1, we expand the segment by adjusting the extremes…

The trick is joining two segments, without link pointers. After joining, we don’t really adjust the min/max fields. We only update the max-length global variable if needed.

To keep the hashtable small, I can optionally delete from it, but we don’t want to do a range delete within the loop — O(N^2)
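
For reference, a hedged sketch of the common O(N) hashset solution, simpler than the segment-object design above:

  def longest_consecutive(nums):
      known = set(nums)
      best = 0
      for x in known:
          if x - 1 in known:
              continue                   # not the left edge of a black stripe
          end = x
          while end + 1 in known:
              end += 1
          best = max(best, end - x + 1)
      return best

  print(longest_consecutive([100, 4, 200, 1, 3, 2]))   # 4 i.e. [1,2,3,4]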

maximum path sum through binTree #untested 70%

Q: (Leetcode “hard” Q124) Given a non-empty binary tree of signed integers, find the maximum path sum. For this problem, a path is defined as any sequence of nodes from any starting node to any node in the tree along the parent->child connections. The path must contain at least one node and does not need to go through the root. No uplink

====analysis====

optimal solution(?) — DFT. Along each root-to-leaf path, use the max-subarray (Kadane) algo and store maxSumEndingHere value in each node, for reuse.
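
A minimal sketch of that DFT idea: each node returns the best downward path-sum ending at itself (Kadane-flavored), while a global tracks the best path bending through any node:

  class Node:
      def __init__(self, val, left=None, right=None):
          self.val, self.left, self.right = val, left, right

  def max_path_sum(root):
      best = [float('-inf')]
      def down(node):                       # maxSumEndingHere at this node
          if not node:
              return 0
          left = max(down(node.left), 0)    # drop negative sub-paths
          right = max(down(node.right), 0)
          best[0] = max(best[0], node.val + left + right)
          return node.val + max(left, right)
      down(root)
      return best[0]

  # path 15 -> 20 -> 7 beats anything through the -10 root
  print(max_path_sum(Node(-10, Node(9), Node(20, Node(15), Node(7)))))  # 42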

inserting interval #merging

Q (Leetcode): Given a set of non-overlapping intervals, insert a new interval into existing intervals (merge if necessary) and print the updated list of intervals. Intervals are given as a vector sorted by start time.

–analysis–

Now I feel the #1 main data structure is a doubly linked list (dlist) of Segment objects:

  • { segment_left_mark,
  • ptr to next node, ptr to prev node,
  • optionally a (bool or) enum having value A or B, where A means the current segment is AboveWater (an interval) and B means BelowWater (a gap) }

Every time this dlist is modified, we would update a “helper container” — a tree of node pointers, sorted by the segment_left_mark value. The tree helps successive inserts. However, if each insert(vector intervals) call receives a sorted vector then we can binary search the vector and don’t need the tree.

First, binary search to locate the left mark among all existing marks. Ditto right mark. Based on these 2 results, there are many cases.

  1. done — Case (simple) both fall into the same existing interval. No op
  2. done — case (simple) both fall into the same gap segment. Create 2 new segments and insert into the dlist
  3. done — case (simple) one boundary falls into a gap, the other falls into an adjacent interval — just adjust the segment_left_mark without inserting a new segment
  4. done — case — bridge: both boundaries fall into different intervals. Adjust segment_left_mark of 2 affected segments, then link up the two to skip the intermediate segments
  5. done — case — wipeout: both boundaries fall into different gaps, wiping out at least 1 interval.
  6. done — case (most complex) — one falls into an interval, the other into a non-adjacent gap.
  7. case — incoming interval left boundary is lower than all boundaries, but right boundary falls into some segment
  8. case — incoming interval is very low
  9. case (special) — if an interval becomes adjacent to another, then merge the two.

Need a sorted tree of all marks + array of segments. Redundant but helpful.

Each segment (interval or gap) is represented by {left mark, right mark} where left <= right. I will save the segment objects into (a linked list and) an array. Even elements are interval objects and odd elements are gap objects. Now superseded by the dlist.

I think this problem is all about corner cases. Perhaps start with the complex cases which will take care of the simpler cases. No need to pass Leetcode tests. Due to the pointer complexity, I prefer python.

https://github.com/tiger40490/repo1/blob/py1/py/linklist/insertInterval.py is my solution but I dare not test on Leetcode
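
For reference, a hedged sketch of the common vector-based solution, side-stepping the dlist pointer complexity above:

  def insert_interval(intervals, new):
      out, i, n = [], 0, len(intervals)
      while i < n and intervals[i][1] < new[0]:   # strictly left of new
          out.append(intervals[i]); i += 1
      while i < n and intervals[i][0] <= new[1]:  # overlapping: merge into new
          new = [min(new[0], intervals[i][0]), max(new[1], intervals[i][1])]
          i += 1
      out.append(new)
      out.extend(intervals[i:])                   # strictly right of new
      return out

  print(insert_interval([[1, 3], [6, 9]], [2, 5]))   # [[1, 5], [6, 9]]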

staircase problem #CSY@Bbg

Q (bbg question posed to CSY): given a staircase of height N (eg 3), you can reach it by three steps 1,1,1 or two steps 1,2 or 2,1, or a single step of 3. Generate all paths for a given N.

CSY gave a recursive solution and interviewer asked “Can you find a non-recursive solution”?

I feel this is simpler than AQR factorization and the CombinationSum problems.

Is this the same as N-boy-split? No — split is harder. With the split, ABC can have a “step” of AC.

easy to test — https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/staircase_CSY.cpp is my tested solution.

Q: why path count == 2^(n-1)?
Answer from CSY: f(1)=1, f(2)=2, f(n) = f(n-1)+f(n-2)+…+f(1)+1 = 2^(n-1)
A: For a staircase of n, you can take a step of 1 and have f(n-1) paths, or a step of 2 and have f(n-2) paths… plus the lone step of n, which contributes the final +1

–jargon file:

a STEP from LEVEL 0 to 2 has LENGTH=2, and skips level 1; a PATH consists of 1 or more STEPS; a FORMULA is a complete PATH, and always starts at level 0 and may hit intermediate levels #2, #5, #6 …

–BFT solution. suppose N = 5. We model each path as a directory path from root. Each queue item is a path represented by a linked list or vector of capacity N.

I hope this solution avoids duplicates… Yes it does:)

  1. enqueue all “first levels” /1; /2; /3; /4; /5
  2. then dequeue and visit the first path i.e. “/1”. In this case, there are 4 levels remaining in the staircase, so enqueue /1/1;  /1/2;  /1/3;  /1/4
  3. then dequeue and visit 2nd path in the queue i.e. “/2”. enqueue /2/1; /2/2;  /2/3

Every time (after dequeue) we see a path has total length==N, we print it out as a formula.

–DP iterative solution to be implemented

  1. build the formulas for a length-2 staircase: /1/1 + /2 i.e. two formulas in FormulaArrayForStaircase2 or “fa2”
  2. then build the formulas for a length-3 staircase — fa3 = firstStepOf2 -> fa1 + firstStepOf1 -> fa2 + the lone step /3
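
A minimal sketch of this iterative DP — fa[n] holds all formulas for height n, built from fa[1..n-1] plus the lone step /n:

  def staircase_paths(N):
      fa = {0: [[]]}                       # one empty path for height 0
      for n in range(1, N + 1):
          fa[n] = [[step] + path
                   for step in range(1, n + 1)
                   for path in fa[n - step]]
      return fa[N]

  print(staircase_paths(3))   # [[1,1,1], [1,2], [2,1], [3]] i.e. 2^2 paths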

max rectangle ] histogram

Q: https://leetcode.com/problems/largest-rectangle-in-histogram/description/. Given N possibly recurring non-negative integers representing the histogram’s bar heights, and given the width of each bar is 1, find the area of largest rectangle in the histogram.

Visually well-defined problem. Kind of naturally-occurring. Very simple data structure. No O() requirement, so I will just try my own solution.

https://github.com/tiger40490/repo1/blob/py1/py/array/maxHistoBox.py is my solution. 100% passed on Leetcode.

==== analysis — heavy on data structure design.

Key insight — one scan to update a clever data structure.

key insight — data structure is not per bar, but per height!

For every bar J, there exists an enclosing max-rectangle of J’s height. We can just compare all of these rectangles.

We might start with two extreme candidates
1) the peak — whose enclosing rectangle is likely slender — O(N) one scan to find all the peaks
2) the lowest bar — whose enclosing rectangle has width N — O(N)

If we paint the histogram as a binary matrix, then this is equivalent to another problem: max all-black submatrix #DP #zhurong, but I think there exist better solutions like O(N logN) or O(N*S) …

–algo with O(N*S) where S := #unique heights. The binary search doesn’t show up as logS.

A pre-scan to get all distinct heights. For each distinct height, we maintain a RunRecord object {bestRun, currentRunStart, height}, in a sorted map {height -> record}. In py, I can use a pre-sorted vector of Records, sorted on height

In main scan, As we encounter a new bar of height J, we update these records.

  • if not falling or rising
    • record-J and each record-H below J must have a current run … extend that run (no-op)
  • if rising from height H
    • each record up to H must have a current run … extend that run by no-op
      • iterate the treemap up to H
    • iterate treemap from H+1 to J. start a new run for each record
  • if falling from height P to J
    • record-J and each record-H (where H <J) must have a current run … extend that run
    • iterate treemap from J+1 to P … each record-K must have a current run, indicated by a valid currentRunStart; this record’s current run has just ended. We update bestRun and put an invalid value into currentRunStart.

At end of the main scan, every record has a bestRun i.e. the duration. I can then calc the area under each bestRun and return the max.
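
A hedged sketch of this O(N*S) idea, with a plain dict {height -> currentRunStart} standing in for the treemap of RunRecords (a real sorted map would avoid rescanning all heights on each rise/fall):

  def max_rect_histogram(heights):
      runStart = {}               # height -> start index of its current run
      best = prev = 0
      for i, h in enumerate(heights + [0]):    # sentinel 0 ends every run
          if h > prev:                         # rising: start new runs
              for k in set(x for x in heights if prev < x <= h):
                  runStart[k] = i
          elif h < prev:                       # falling: close runs above h
              for k in [x for x in runStart if x > h]:
                  best = max(best, k * (i - runStart.pop(k)))
          prev = h
      return best

  print(max_rect_histogram([2, 1, 5, 6, 2, 3]))   # 10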

find min substring containing (but not limited to) all my chars

Update — a similar sliding window is used in longest substring without repeating chars

Q (leetcode): Given a string Haystack and a string T, find the minimum window in Haystack which contains (at least) all the characters in T according to the frequencies. Time complexity O(n). Eg: minWindow(ccbabccbabcb, bbc)==bcb

If there is such a window, you are guaranteed that there will always be only one unique minimum window in Haystack. <– I thought this guarantee means something but it doesn’t.

Without loss of generality, I will assume the chars are a-z. I believe those Leetcode corner cases will use only 3 chars

—analysis—

For single-string problem, use array indexed by ascii code. I can convert T to such an array to store the required frequencies (reqFrq)

I can construct a shadow array, same length as Haystack with these payloads:

  • if the hay is not in reqFrq, then payload is a special value like nullptr
  • if the hay is in reqFrq, then….?

–SolSW: sliding-window based

  1. Scan Haystack from left and keep count of actual frequency (check against reqFrq each time). I will inevitably find the earliest good window. By construction, both ends of this window are in reqFrq.
    • Note the entire haystack is more than a good window.
  2. Now I slide the fixed-sized window. If I find another good window, with extra chars on the left, then I have found a shorter window, so I truncate my window on the left
  3. continue Step 2
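
For reference, a hedged sketch of the textbook expand/shrink variant of SolSW:

  from collections import Counter

  def min_window(haystack, t):
      reqFrq = Counter(t)
      missing = len(t)             # chars still needed, with multiplicity
      best, left = '', 0
      for right, ch in enumerate(haystack):
          if reqFrq[ch] > 0:
              missing -= 1
          reqFrq[ch] -= 1
          while missing == 0:      # good window: truncate on the left
              if not best or right - left + 1 < len(best):
                  best = haystack[left:right + 1]
              reqFrq[haystack[left]] += 1
              if reqFrq[haystack[left]] > 0:
                  missing += 1
              left += 1
      return best

  print(min_window('ccbabccbabcb', 'bbc'))   # bcb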

airport gate #maximum people alive

https://careercup.com/question?id=5153263227764736 defines the problem

Q (Amazon): In a city, the year of birth/death of people who were born and died between year 1900 and 2000 are given. Write an algorithm to find the year in which the max number of people were alive. Note the years are not unique and not sorted

Similarly,

Q (FlexTrade): For an airport gate system, flight arrival/departure times are given for yesterday. What’s the maximum number of gates required at the busiest time?


Solution1: O(N logN) merge-sort all timestamps, then scan it in one pass. If an arrival, then increment counter; if a departure then decrement it.

??Solution2 (assuming arrival times are pre-sorted) Using hashtable, keyed by arrival time. Value is a count of flights arriving at that time. Every arrival creates or updates in the hashtable. Every departure deletes or decrements. Maintain a separate total count.

I think we still need sorting.

Solution3: O(N). Use array if all the years are small integers. (Regular timestamp is also small integers — 0 to 2355 in steps of 5.) Fill all arrival/departure events as +1/-1 in an array indexed by year.
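
A minimal sketch of Solution3 — +1/-1 events in a year-indexed array, then one scan accumulating the running count (assuming a person counts as alive through the death year):

  def year_of_max_alive(pairs):             # pairs of (birthYear, deathYear)
      delta = [0] * 102                     # index 0 -> 1900 … 101 -> 2001
      for birth, death in pairs:
          delta[birth - 1900] += 1
          delta[death - 1900 + 1] -= 1
      alive = best = bestYear = 0
      for y, d in enumerate(delta):
          alive += d
          if alive > best:
              best, bestYear = alive, 1900 + y
      return bestYear

  print(year_of_max_alive([(1900, 1950), (1940, 1990), (1945, 1985)]))  # 1945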

Longest Parentheses run with multiple hierarchies

Q (Leetcode): Given a string containing nothing but the characters ‘(‘ and ‘)’, find the length of the longest valid (well-formed) parentheses substring.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/str/maxParensRun.cpp is my solution 100% tested on Leetcode

–My Single-iteration solution:

Challenge is data structure. I ended up with 2 data structures to be updated during the iteration

  1. A stack (holding openers’ index values) to locate the matching openers
  2. an array to save “scores”

For each closer, I will record the position of the matching opener, then compute the distance (minimum two).
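
For reference, a hedged sketch of a common stack-based variant: the stack holds indices of unmatched chars, and the distance to the last unmatched index gives the current valid run:

  def longest_valid_parens(s):
      stack = [-1]                  # base marker: index before the string
      best = 0
      for i, ch in enumerate(s):
          if ch == '(':
              stack.append(i)
          else:
              stack.pop()           # try to match this closer
              if not stack:
                  stack.append(i)   # unmatched closer becomes the new base
              else:
                  best = max(best, i - stack[-1])
      return best

  print(longest_valid_parens(')()())'))   # 4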

reverse slist in K-groups

https://leetcode.com/problems/reverse-nodes-in-k-group/description/ is the problem I tried today, not a classic problem. The challenge is not the algorithm per se but the Edit-Compile-Test-Debug cycle. I think some of us can come up with a conceptual algorithm quickly, but implementing it correctly took me hours.

Similarly, the problems below are not tough due to algorithm, but the ECTD cycle can take hours, sometimes due to c++ iterator pitfalls, sometimes because we can’t easily visualize the data structure .. I wrestled with all of these problems, so please feel free to try them and discuss with me.

* print any tree (you can start with a binary) by level, in zigzag sequence
* given a linked list, write a function to remove all nodes greater than 55 (or any user input). Return the head of the modified list.
* https://www.geeksforgeeks.org/zigzag-or-diagonal-traversal-of-matrix/
* https://www.geeksforgeeks.org/create-a-matrix-with-alternating-rectangles-of-0-and-x/
* https://bintanvictor.wordpress.com/2018/02/06/spiral-number-printer/

As decided last week, I didn’t bother to run the Leetcode test suite. They make me feel frustrated, worthless, defeated, inferior, weakling, quitter…. Without these tests I ran my own tests and I feel like a joyful hacker.

Even though I may not pass all Leetcode tests, I feel my code is reasonable quality and I’m proud of it.

—-Problem is well-defined but not very common.

Given a linked list, reverse the nodes of a linked list k at a time and return its modified list.

k is a positive integer and is less than or equal to the length of the linked list. If the number of nodes is not a multiple of k then the left-out nodes at the end should remain as they are. O(1) space. Hopefully O(N) time.

—-My sol1: use my existing O(1) solution but now keep a count.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/linkedListReverseInK_group.cpp

The first group and the last group are both tricky and can take up hours.
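
Below is a recursive sketch for contrast (not O(1) space like my iterative sol1, since the call stack is O(N/k) deep):

  class Node:
      def __init__(self, value, next=None):
          self.value, self.next = value, next

  def reverse_k_group(head, k):
      probe, cnt = head, 0                 # check there are k nodes left
      while probe and cnt < k:
          probe, cnt = probe.next, cnt + 1
      if cnt < k:
          return head                      # last partial group: leave as is
      prev, cur = reverse_k_group(probe, k), head
      for _ in range(k):                   # reverse this group of k
          cur.next, prev, cur = prev, cur, cur.next
      return prev

  node = reverse_k_group(Node(1, Node(2, Node(3, Node(4, Node(5))))), 2)
  while node:
      print(node.value, end=' ')           # 2 1 4 3 5
      node = node.next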

merge K presorted lists #O(?)

Q: Merge k sorted linked lists and return it as one sorted list. Analyze and describe its complexity.

Note K could be much larger than N.

https://github.com/tiger40490/repo1/blob/py1/py/linklist/merge4lists.py is my solution.

I feel this is mostly an optimization challenge. I can think of a few solutions

–Sol1: merge 2nd list into first. Then merge 3rd list into first …

https://leetcode.com/problems/merge-k-sorted-lists/solution/ shows that this has higher runtime cost than the brackets solution.

Reason is, each 2-merge-to-1 must visit every node in both lists. So the first list nodes get visited K times!

–Sol1b: brackets.

There are only (log K) levels in the bracket so any list gets visited that many times.

–Sol3: in-place (inefficient)

We maintain K node-pointers for the K lists (K teams)

We also maintain a pointer to the last-added node in the merged list.

The first node of each of the K lists is put into a min-heap. The winner (smallest) team becomes the “current list”. The winner team then offers its next node, which we add into the heap. Repeat with the new winning team until all nodes are consumed.

What if N=1 and K is 1 billion?
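
A minimal sketch of Sol3 with a min-heap of head nodes — O(total nodes * logK) — using python lists standing in for the linked lists:

  import heapq

  def merge_k_lists(lists):
      heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
      heapq.heapify(heap)
      out = []
      while heap:
          val, i, j = heapq.heappop(heap)   # winner team offers its node
          out.append(val)
          if j + 1 < len(lists[i]):         # same team offers the next node
              heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
      return out

  print(merge_k_lists([[1, 4, 5], [1, 3, 4], [2, 6]]))  # [1,1,2,3,4,4,5,6]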

min-cost partitioning #c++Flex #rare

Q: You will be given a natural number array and a threshold value T. The threshold represents the maximum length of subarrays that may be created for the challenge. Each sub-array you create has a cost equal to maximum integer within the sub-array. Your challenge is to partition the entire array into sub-arrays no longer than the threshold, and do it at minimum cost.

Function Description
Complete the function calculateCost in the editor below. The function must return an integer denoting the minimum cost of partitioning the array.

calculateCost has the following parameter(s):
a[a[0],…a[n-1]]: the integer array to be divided into sub-arrays
k: the threshold value, i.e. the maximum size of any sub-array

Constraints
• 1 ≤ n ≤ 5000
• 1 ≤ k ≤ 500
• 1 ≤ a[i] ≤ 100000

For example, for T=2 and original array {1,5,2}, you have two ways to partition it:

  • {1} {5,2} total cost = 1 + 5 = 6 (this is lowest cost)
  • {1,5} {2} total cost = 5 + 2 = 7

— My greedy AlgoAA:

Update: thanks to XR here is an edge case to break AlgoAA: {49,50,99,0,98}

I will use the terms “group” and “subarray” interchangeably. A lone wolf is a group of one node.

I would first identify the global peak value, like 99. Should this node be a lone wolf? No. I can prove that it should “absorb” a neighbor node and become a subarray of two [1]. Should it absorb a 3rd node? I think I can again prove that it should. Therefore my greedy algorithm would first create a subarray of size K around the peak, leaving behind a left segment (and also a right segment), where we apply the same greedy algorithm.

[1] my informal proof — suppose the left neighbor has value 6 and is a lone wolf in the final grouping. We can improve this final grouping by merging this node with the peak. Total cost would reduce by 6. In another scenario suppose this node (value 6) is within subarray #12. Again, we can break up subarray #12, move out this “6” and merge it with the peak, without breaking any rule or increasing total cost.

So what algorithm to create the first subarray around the peak? Let’s assume k=3. There are up to 3 candidate groups, since the peak can be the first node, 2nd node or last node in its group. We can use a sliding window (of width 3) to identify the best among the candidates.

Q: why start from the peak not start from end of the array?
A: If you do, you may separate 2nd highest node from the peak, when they are adjacent. My AlgoAA would identify this situation early on, and put them in the same group.

— My greedy AlgoBB:

Each time after the window slides, we will compare the new window with the best window so far. The comparison is first based on the 2nd highest value in the window. If tied, then compare the 3rd highest value in the window, and so on.

I think this is not hard to implement — convert each window to a heap then compare top to bottom.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/minCostPartition_Flex.cpp is a briefly tested implementation .. 60% confident.
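
For reference, a hedged sketch of a straightforward O(n*k) DP, unlike the greedy AlgoAA/BB — dp[i] is the min cost to partition the first i items:

  def calculate_cost(a, k):
      n = len(a)
      dp = [0] + [float('inf')] * n
      for i in range(1, n + 1):
          grpMax = 0
          for L in range(1, min(k, i) + 1):     # last group = a[i-L : i]
              grpMax = max(grpMax, a[i - L])
              dp[i] = min(dp[i], dp[i - L] + grpMax)
      return dp[n]

  print(calculate_cost([1, 5, 2], 2))            # 6, matching the example
  print(calculate_cost([49, 50, 99, 0, 98], 2))  # 246 on XR’s edge case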

case-insensitive string search #KMP #Abhinav

[[c++cookbook]] P174 suggests preparing the needle by creating an all-upper-case copy, but leaving the (bigger) haystack unchanged.

I discussed with my friend Abhinav. With std::search, we should probably upper-case the haystack as well. The std::search implementation can run M*N char-comparisons.

Even with the efficient KMP algorithm, typical complexity is M+N char-comparisons. So my solution is no worse than the authors’ solution.

max-profit buy 1 sell any-bought #pimco java Q8

Pimco Java HackerRank Q8

Q8: Each minute, your trading platform allows you to either buy one share, sell any number of shares that you own (short sell forbidden), or not make any transaction at all. Your task is to find the maximum profit you can obtain with an optimal trading strategy.

I remember having issues with some HackerRank test cases. Should use 64-bit int rather than the default java int.

This problem appears to be very similar to day-trading in hindsight #Nsdq but requires drastically different thinking:

  • a single scan to the left, starting from the last price point.
  • any time there’s a left-ward drop, we have a trading opportunity!

https://github.com/tiger40490/repo1/blob/py1/py/array/maxProfit_buy1sellAny.py is my tested solution, peer-reviewed by Ashish.
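
A minimal sketch of that single leftward scan — track the highest price seen so far to the right; every lower price point is a buy opportunity:

  def max_profit_buy1_sell_any(prices):
      profit = peak = 0
      for p in reversed(prices):
          peak = max(peak, p)          # a leftward drop reveals a trade
          profit += peak - p           # buy one share here, sell at the peak
      return profit

  print(max_profit_buy1_sell_any([1, 3, 1, 2]))   # 3 = (3-1) + (2-1)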

count paths between 2 bTree nodes #PimcoQ9 Ashish

The DP idea — compare matrix-path-counter and EditDistance

  • showcasing efficient queue in python.
  • showcasing using (x,y) coordinates as dictionary key
  • showcasing find max value in a dictionary

–Requirement: (https://leetcode.com/problems/unique-paths-ii/description/ is similar)

See Q9.pdf in the email to Ashish. Here are some key points:

Consider a maze mapped to a matrix with an upper left corner at coordinates (row, column) = (0, 0). Any movement must be in increasing row or column direction. You must determine the number of distinct paths through the maze. You will always start at position (0, 0), the top left, and end up at (max(row), max(column)), the bottom right.
1 1 0 1
1 1 1 1
As an example, consider the above matrix where 1 indicates an open cell and 0 indicates blocked. You can only travel through open cells, so no path can go through the cell at (0, 2). There are two distinct paths to the goal. As a 2nd example, matrix below has 10 paths:
1 1 1 1
1 1 1 1
1 1 1 1

https://github.com/tiger40490/repo1/blob/py1/py/2d/classic_pathCountQ9.py is my solution.

==== Analysis ====
I see a binary tree, where each node is a cell. Cell (0,0) has a down node (1,0) + a right node (0,1). I feel this is similar to a more generic problem “count paths between 2 nodes in a binary tree”

–DFT:

😦 I implemented something like a DFT but it was too slow with some test cases

😦 can overflow stack

🙂 DFT can print each unique path. I think BFT can’t.

🙂 DFT is easier to implement than BFT

–DynamicProgramming BFT solution from origin, as I described to Bill Pinsky:

This solution is more versatile than the one from Ashish. It can handle any directed graph.

Mark each node as “computed” once we have computed a score denoting how many paths-from-root there are. Save the score in a shadow matrix.

Origin node has score 1. Blocked cell has score 0.

Start BFT from origin. The two immediate neighbors are set to 1 (or 0 if blocked). Every node can be computed by adding up above-score + left-score. (This algo is simpler than the EditDistance algo.)

Performance Keynote — My implementation was extremely slow (albeit correct) until I added an “if already computed, then continue” early in the loop

Q: why is score[1,1] accessed 4 times?
A: node[1,1] is added to the queue twice. Each dequeue would need one check.
A: Also score[1,2] need to access score[1,1] as a parent. Ditto score[2,1]
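
A minimal sketch of this BFT + shadow-matrix scoring (a dict standing in for the shadow matrix):

  from collections import deque

  def count_paths(matrix):
      rows, cols = len(matrix), len(matrix[0])
      score = {}                              # (r, c) -> paths-from-origin
      q = deque([(0, 0)])
      while q:
          r, c = q.popleft()
          if (r, c) in score:
              continue                        # already computed — crucial!
          if matrix[r][c] == 0:
              score[(r, c)] = 0               # blocked cell
          elif (r, c) == (0, 0):
              score[(r, c)] = 1               # origin
          else:                               # above-score + left-score
              score[(r, c)] = score.get((r - 1, c), 0) + score.get((r, c - 1), 0)
          for nr, nc in ((r + 1, c), (r, c + 1)):
              if nr < rows and nc < cols:
                  q.append((nr, nc))
      return score[(rows - 1, cols - 1)]

  print(count_paths([[1, 1, 0, 1],
                     [1, 1, 1, 1]]))          # 2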

–Ashish Singh gave me a much simpler solution, as shown in my github code.

partition list into good^bad sections #c++LiquidNet

Q: template<typename I, typename P>
I myPartition(I bb, I const end, P isgood); //return first Bad node in the range after shuffling all Bad ones to the end. If no Bad node, then return the original end.

You can call it like firstBad = myPartition(li.begin(), li.end(), isEven). Itr and Pred are template params or well-defined typedefs.

Note we can’t assume the iterators provide sort order. We can only inc/dec and check equality.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/partition_LNet.cpp is my solution but it’s hard to reason because I move two rather than one pointer. Interviewer gave a simpler solution with similar time complexity.

Lessons:

  • in a two-iterator algo, if every move on each iterator has some logic, then it is probably over-complicated. In this case, we could move one iterator according to some logic, and simply move the other iterator to the same spot.
  • “bb should be at or before ee” — the invariant is critical but hard to maintain in the face of complex logic
  • There were many scenarios in my design. There are fewer scenarios in the interviewer’s design.

movie query #SIG

Like the other coding question from the same employer, this is mostly about nested data structure.

Q1: Given sample csv file below (unsorted), implement an efficient query function get_movies(genre, Start_year, End_year). Optimize for query, not initial loading. Bonus points if you make it work within 75 minutes.

  #movieId,title,date,genres
  1,Toy Story 3,2010,Adventure|Animation|Children|Comedy|Fantasy
  2,Toy Story,1995,Adventure|Animation|Children|Comedy|Fantasy
  ...... 20,000 more lines

https://github.com/tiger40490/repo1/blob/py1/py/movieQuery_SIG.py is my working code, good enough for a small data set like movies, but inefficient for bigger data set like newspaper articles.

Q2: what if the number of genres could be thousands, like a tag cloud.

— Q1 analysis
My initial thought was on binary search, with O(logN) + O(E-S) which is often equivalent to O(N). Locate first and last item in a sorted list, then iterate over enclosed items.

After some brief discussion with interviewer, I came up with a design of O(1) + O(E-S), arguably the fastest, featuring a jagged 2D array:

  • Assumption: genres are shared and show up repeatedly in the file, rather than random strings that may show up only once.
  • use contiguous array to hold all movies, array indexed by year.
  • Within each “year bucket”, store multiple movies in a contiguous array indexed by genreId.
    • Note there is no gap in the genreId /numbers/. I initially said hardcoded genre enum but the interviewer (Ken) challenged me with frequently changing genres. I now feel we can generate genreId as we load the csv file. This is comparable to some perfect-hashing described in the Q2 analysis below.
  • each “cell” holds a list-of-shared_ptr-to-movie
  • each movie object is on heap, and owned by shared_ptr’s
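
A hedged sketch of this jagged design, with dicts standing in for the contiguous arrays; genreId values are generated while loading:

  from collections import defaultdict

  class MovieDB:
      def __init__(self, rows):            # rows of (movieId, title, year, genres)
          self.genreId = {}                # genre name -> dense id
          self.byYear = defaultdict(lambda: defaultdict(list))
          for movieId, title, year, genres in rows:
              movie = (movieId, title, year)
              for g in genres.split('|'):
                  gid = self.genreId.setdefault(g, len(self.genreId))
                  self.byYear[year][gid].append(movie)

      def get_movies(self, genre, start_year, end_year):   # O(1) + O(E-S)
          gid = self.genreId.get(genre)
          out = []
          if gid is not None:
              for y in range(start_year, end_year + 1):
                  out.extend(self.byYear.get(y, {}).get(gid, ()))
          return out

  db = MovieDB([(1, 'Toy Story 3', 2010, 'Adventure|Animation|Children|Comedy|Fantasy'),
                (2, 'Toy Story', 1995, 'Adventure|Animation|Children|Comedy|Fantasy')])
  print(db.get_movies('Comedy', 1990, 2010))   # both movies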

— Q2 analysis:
If tags are user-generated, random and uncoordinated, then the total population of tags could be unlimited. Our genreId would go up to millions. Solution? Perhaps use a separate hash table within each “year bucket” instead.

However, since the query is “by tag” it implies that the most important tags are shared.

At this point, I feel design should depend on the real, not imaginary, context, so it’s kind of pointless to speculate what type of tags we will get.

check array@0-N in-situ #Nsdq#contrived

— Q1: Given a size-5 array of integers. For every element x: 0<= x <=4. Array would be considered Good if all elements are unique, i.e. all the numbers 0 to 4 show up exactly once each. Please write a program to check if the array is good. If Bad, then print every number that’s showing more than once. O(N) time and O(1) space please.

We will denote array size as sz, so N == sz-1.  N=4 and sz=5 in the opening example.

— comments:

I feel this question (even without the bonus) is too hard to complete on the spot, unless you have done it before.

I made a mistake that would be unforgivable in west coast and Bloomberg interviews — failed to clarify requirements and assumed input array is immutable.

— Q1b (bonus): Also print every missing number. (Requires iterating entire input.)

Example A {1,2,0,2,0} has repeating 0 and 2, and missing 3 and 4. Example B {3,1,4,2,0} is Good.

— comments:

If you feel the O() complexity requirements are daunting, then work out a less efficient solution first, and improve on it.

I feel in real projects, we always have enough memory to build a separate array, so the O(1) space requirement is contrived. A O(N) time and O(N) space solution is easier. Therefore, I rate this coding question as contrived and relatively non-standard. Yet the required techniques are “universal” in many high-end interviews.

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/array/checkArr0-N_Nsdq.cpp has my 99% solution. The unfinished part is trivial.
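
A hedged sketch of the swap-based in-situ technique (it mutates the input, unlike my original immutability assumption):

  def check_array(a):
      for i in range(len(a)):
          # park value a[i] at its home index, unless home already holds it
          while a[i] != i and a[a[i]] != a[i]:
              j = a[i]
              a[i], a[j] = a[j], a[i]
      good = True
      for i, v in enumerate(a):
          if v != i:
              good = False
              print('repeated:', v, 'missing:', i)
      return good

  print(check_array([1, 2, 0, 2, 0]))   # repeated 2 and 0; missing 3 and 4
  print(check_array([3, 1, 4, 2, 0]))   # True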

max-profit #Nsdq short-sell

Q1: given a time series of price points within a past day, there are many ways to trade the day — one buy-sell, five buy-sells, or do-nothing … Short-sell allowed, but you must start and end the day holding no shares. For example, if you sell 9 times then you must buy 9 times, each time 1 lot (100 shares) exactly. Write a function to trade the day and produce the highest profit.  Note you can analyze the price points with perfect hindsight.

Interviewer asked for real code. Very little time given, so I believe the right solution is short, and much simpler than the question on “array of 0-N” (where interviewer asked for pure algorithm).

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/oracleDayTrader_Nsdq.cpp is my buggy “greedy” solution.

https://github.com/tiger40490/repo1/blob/py1/py/array/maxProfit_Nsdq.py is my new solution.

Q2: now you are allowed to buy UP-TO 100 shares each time. All other rules remain. When if ever is it optimal to buy/sell fewer than 100 shares (a.k.a. an odd lot)?
%%A: never

Q3: same as Q1 but now your holding must always be long 1 lot, short 1 lot or zero holding.

–(broken?) brute force solution I gave on the spot:

Start with just four price points 1,2,3,4. Name every pair A=1>2; B=1>3; C=1>4; D=2>3; E=2>4; F=3>4. Each pair represents a buy/sell round trip, with a profit (ignore it if unprofitable).

How many “plays” i.e. how many ways to trade the day? 2^6 plays.

Just evaluate each play and find the best. Beware that ABD is same as B x 2 and should be scaled down by 2. Similarly, AEBFC == C x 3

convert a recursive algo to iterative #inOrderWalk

Suppose you have just one function being called recursively. (2-function scenario is similar.) Say it has 5 parameters. Create a struct named FRAME (having 5 fields + possibly a field for lineNo/instructionPointer.)

Maintain a stack holding the Frame instances. Each time the recursive algorithm adds to the call stack, we add to our stack too.

Wiki page on inorder tree walk  has very concise recursive/iterative algos. https://github.com/tiger40490/repo1/blob/py1/py/tree/iterative_InOrderWalk.py is my own attempt that’s not so simple. Some lessons:

  • Differentiate between popping vs peeking the top.
  • For a given node, popping and printing generally happen at different times without any clear pattern.
    • the sequence of pop() is probably a pre-order tree walk
    • the sequence of print is an in-order tree walk
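
A minimal sketch of the concise iterative walk — dive down the left spine, then pop, print and turn right:

  class Node:
      def __init__(self, val, left=None, right=None):
          self.val, self.left, self.right = val, left, right

  def in_order(root):
      stack, cur = [], root
      while stack or cur:
          while cur:                 # push the whole left spine
              stack.append(cur)
              cur = cur.left
          cur = stack.pop()
          print(cur.val, end=' ')    # the in-order position
          cur = cur.right

  in_order(Node(2, Node(1), Node(3)))   # 1 2 3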

NxN matrix: graph@N nodes #IV

Simon Ma of CVA team showed me this simple technique.

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/miscIVQ/tokenLinked_Friend.cpp is my first usage of it.

  • I only needed half of all matrix cells (excluding the diagonal cells) because relationships are bilateral.
  • Otherwise, if graph edges are directed, then we need all N(N-1) off-diagonal cells since A->B is not the same as B->A.

My discrete math textbook shows this is a simplified form of representation and can’t handle self-links or parallel edges. The vertex-edge (incidence) matrix is more robust.
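
A minimal sketch — N nodes, undirected graph, so only the upper triangle above the diagonal is needed:

  N = 4
  adj = [[False] * N for _ in range(N)]

  def link(a, b):                   # store each bilateral edge exactly once
      lo, hi = min(a, b), max(a, b)
      adj[lo][hi] = True

  def linked(a, b):
      lo, hi = min(a, b), max(a, b)
      return adj[lo][hi]

  link(0, 2); link(1, 3)
  print(linked(2, 0), linked(0, 1))   # True False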

max palindrome substring #realIV Deepak

https://github.com/tiger40490/repo1/blob/py1/py/str/longestPalindrome.py is one solution, not O(N) at all.

— Design 1 — first {O(N)} identify each “core” which is defined as

  • “ABA”
  • at least 2 count of the same char like AA

Then {O(N)?} for each core, scan both ways in lock step until we see a mismatch. Then we know the length of this palindrome.

To test, we can generate random strings using a-b.

— linear time solutions are on wikipedia, but probably not so obvious
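
A minimal sketch of Design 1 — expand around every core (a single char, or a pair of equal chars); O(N^2) worst case, not O(N):

  def longest_palindrome(s):
      best = ''
      for i in range(len(s)):
          for lo, hi in ((i, i), (i, i + 1)):      # odd and even cores
              while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
                  lo, hi = lo - 1, hi + 1          # scan both ways in lock step
              if hi - lo - 1 > len(best):
                  best = s[lo + 1:hi]
      return best

  print(longest_palindrome('babad'))   # bab (aba is equally valid)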

given int array,find triplets: x divides y;y divides z #Trex

Within a sequence of unique natural numbers, count the occurrences of so-called triplets (x,y,z) where x divides y and y divides z.

I feel this is not a classic, but the technique is somewhat reusable.

https://github.com/tiger40490/repo1/blob/py1/py/array/tripletsInPreSorted_Trex.py is the improved solution after lots of hints.

— a useful technique:

Construct each {factor, multiple} pair and save it in a collection. Any new pair constructed will be checked against all existing pairs. The “check” is the trick. My 25 Apr 2018 commit shows the most efficient “check” I can think of.

list of pairs -> list of triplets -> list of quadruplets -> list of quintuplets … Build up pairs to construct triplets. Use triplets to construct quadruplets. Use quadruplets to construct quintuplets.

The pairs collection doesn’t need to be fully built before we start constructing triplets.
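
For reference, a hedged sketch of a simpler O(N^2) counting variant — for each middle element y, multiply the count of its divisors by the count of its multiples:

  def count_triplets(nums):
      total = 0
      for y in nums:
          divisors = sum(1 for x in nums if x != y and y % x == 0)
          multiples = sum(1 for z in nums if z != y and z % y == 0)
          total += divisors * multiples    # every (x, y, z) combination works
      return total

  print(count_triplets([1, 2, 4, 8]))   # 4: 1-2-4, 1-2-8, 1-4-8, 2-4-8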

convert-to-0 using4transformations #Trex

Q: given a natural number, write a function to “transform” it to zero in a minimum number of transformations. Four transformations allowed:
A) add 1
B) subtract 1
C) divide by 2 if divisible
D) divide by 4 if divisible

My own idea:

1) attempt D for as long as possible. Repeat for C. Now we have an odd number K
2) if K can be written as 4N-1, then do A, otherwise B.
3) go back to step 1

https://github.com/tiger40490/repo1/blob/py1/py/trivial/4conversion_ez.py is my code
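
A minimal sketch of the same greedy idea (not proven minimal; a BFS over small inputs could verify it):

  def to_zero(n):
      steps = 0
      while n:
          if n % 4 == 0:
              n //= 4              # D, for as long as possible
          elif n % 2 == 0:
              n //= 2              # C
          elif n % 4 == 3:
              n += 1               # A: K has the 4N-1 form
          else:
              n -= 1               # B
          steps += 1
      return steps

  print(to_zero(8))    # 3 steps: 8 -> 2 -> 1 -> 0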

3-way sorter/classifier #FB

— Requirement — You are given a list of possibly repeating integers. Each belongs to exactly one category, either category Low, Medium or High. There is a function to return the category for any integer. Please write a constant-space function to output all category-L members, followed by all the category-M members, and finally the category-H members.

If the input list has 9 trillion integers, the output will have 9 trillion integers too.
Within category Low, you can print in any order.
Within category Medium, you can print in any order.
Within category High, you can print in any order.

Personally, I would suggest (to make things concrete) that L means single-digit; M means double-digit and H means 3-digit or higher.

I failed to clarify requirements — input is array or linked list?

=== solutions

— 1st design — edit the same list in-place. Extract-insert each L or M element to group all elements into three sections: L on the left, H on the right. The first design uses an array, so I maintain 2 markers (integers).

Invariant — the mm marker points to start of the M section. hh marker points to start of the H section, and a current marker points to start of unclassified section. L section is on the left of mm.

  • Every time I see a L element, I move it to beginning of array, and increment mm. (It’s more efficient to move it to left of mm.)
  • Every time I see a M element, I move it to the left of hh, and increment hh
  • Every time I see a H element, I do nothing.

— Second design avoids the inefficient array shift, using STL linked list.

Q: When to adjust mm and hh iterators?
A: Only when erase() hits mm or hh. This is important and easily missed. There’s no error message. Result may not show any difference. As a general rule

“Always check every iterator (except unneeded) when you do a list::erase()”

I plan to implement 1st design in python list (vector internally) and 2nd design in std::list.

–3rd design swap-based. If we only have an array and not a linked list, then swap-based design is efficient

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/array/3waySorterArray.cpp is my tested solution
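
A minimal sketch of the swap-based 3rd design (Dutch-national-flag style):

  def three_way_sort(a, category):        # category(x) returns 'L', 'M' or 'H'
      mm, cur, hh = 0, 0, len(a) - 1      # L section < mm; H section > hh
      while cur <= hh:
          c = category(a[cur])
          if c == 'L':
              a[mm], a[cur] = a[cur], a[mm]
              mm += 1; cur += 1
          elif c == 'H':
              a[cur], a[hh] = a[hh], a[cur]
              hh -= 1                     # swapped-in element is unclassified
          else:
              cur += 1

  nums = [5, 123, 42, 7, 999, 18]
  three_way_sort(nums, lambda x: 'L' if x < 10 else 'M' if x < 100 else 'H')
  print(nums)   # [5, 7, 42, 18, 999, 123] — L, then M, then H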

N-queen

Q1: generate one solution

Q2: generate all solutions to the 4-queen problem without
A: there are up to 4^4 = 256 possible solutions so just iterate over all, but let’s say we want to extend the Q1 solution

https://github.com/tiger40490/repo1/blob/py1/py/2d/classic_nQueens.py is my tested solution. The only important function is placeOnRow(), which took me 17 minutes to write on white-board, but I missed something very important — restoring Mat[r][c] to UNFILLED before continue
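
A minimal backtracking sketch in the spirit of placeOnRow(), including the crucial restore before trying the next column:

  def n_queens(N):
      cols = [-1] * N                  # cols[r] = column of the queen on row r
      solutions = []

      def safe(r, c):
          return all(cols[pr] != c and abs(cols[pr] - c) != r - pr
                     for pr in range(r))

      def place_on_row(r):
          if r == N:
              solutions.append(cols[:])
              return
          for c in range(N):
              if safe(r, c):
                  cols[r] = c
                  place_on_row(r + 1)
                  cols[r] = -1         # restore before continuing
      place_on_row(0)
      return solutions

  print(n_queens(4))   # [[1, 3, 0, 2], [2, 0, 3, 1]]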

4×4 sudoku: backtracking #FB

4-queen is one of the simplest problems in the backtracking category, but I will look at a mini sudoku. Sudoku was once asked in a Facebook 45-min white-board coding test. https://github.com/tiger40490/repo1/blob/py1/py/2d/sudoku4.py is my tested implementation.

I name the four values as color1  color2  color3  color4. A placement is {a color, row index, column index}.

Basic idea — at each empty cell, we try all four possible colors. If color1 is allowed, then we call the same function recursively to fill the next empty cell. Then (crucially) we check the status. If OK, then game over. If NOK, then we try another color. If no more color to try, then (crucially) we reset current cell to unfilled and return NOK

Key to the algo — backtracking is implemented by the restoration and returning NOK

Key to the algo — each empty cell would start a new recursive stack frame. But what data is saved on that stack frame?

Key to the implementation — know what variables go into the loop, what go to the stack and what can be global (simplest). In this case the returned next cell indices CANNOT be globals because each stack frame must remember “next-cell-after-me”

Key to the implementation — simplify state. Each stack frame doesn’t need to remember the “breadcrumb” of all previous placements. Each layer of recursive call only need to remember the colors already tried for its own current cell.

Key to the implementation — factor out simple utilities if they are well-defined and easy to implement. See comment in code.

The important function requires no more than 10 lines! So an interviewer could say “This is something a candidate can write in 45 minutes, assuming all the simple utilities are already available to him, namely failedSomething() and findNextCell2fill().”

Truth is, most of us can’t write these 10 lines in an hour on the white-board if we don’t have the correct idea. My initial white-board implementation took 18 minutes and missed one important part —

Mat[r][c] = UNFILLED_COLOR # restore

I then took about an hour to make it work.
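
Below is a hedged, self-contained 4×4 sketch of that backtracking; ok() stands in for failedSomething() and next_cell() for findNextCell2fill():

  UNFILLED, COLORS, N = 0, (1, 2, 3, 4), 4

  def ok(mat, r, c):                         # did placing mat[r][c] break a rule?
      v = mat[r][c]
      if mat[r].count(v) > 1 or [mat[i][c] for i in range(N)].count(v) > 1:
          return False
      br, bc = r - r % 2, c - c % 2          # top-left of the 2x2 box
      box = [mat[i][j] for i in range(br, br + 2) for j in range(bc, bc + 2)]
      return box.count(v) == 1

  def next_cell(mat):
      for r in range(N):
          for c in range(N):
              if mat[r][c] == UNFILLED:
                  return r, c
      return None

  def fill(mat):                             # the ~10 important lines
      cell = next_cell(mat)
      if cell is None:
          return True                        # game over: OK
      r, c = cell
      for color in COLORS:
          mat[r][c] = color
          if ok(mat, r, c) and fill(mat):
              return True
      mat[r][c] = UNFILLED                   # restore, then return NOK
      return False

  grid = [[1, 0, 0, 0], [0, 0, 3, 0], [0, 4, 0, 0], [0, 0, 0, 2]]
  fill(grid)
  print(grid)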

punctuate ContinuousSentence us`dictionary #2Q

I believe https://leetcode.com/problems/word-break/description/ is identical.

Q (bbg): Given a dictionary of English words (containing no numbers no underscores) and a very long sentence, someone has removed all the spaces. Please write a program to restore it by adding a space after each word. If there’s no way to parse the sentence just return False, otherwise return True to indicate you found at least one way to parse the sentence.

The sentence can have repeated words.

I was allowed to use either white-board or computer.

Brute force? I struggled for a long time with some vague idea. Interviewer asked me to implement any solution I have, however inefficient but I was unable to.

Key observation — This is a typical QQ type of algo question. If you remember one key point, then you have an advantage.

Key observation — syntax and ECT is not really hard. Only simple string operations. Challenge is pure algo.

Key observation — the recursive backtracking solution is working and clever-looking but actually inferior. Still, recursive thinking is valuable and gave me a first solution. I’m not ashamed of my recursive solution.

Key observation — This problem is considered not too hard because …. the solution is short! But most of us can’t conceive such a solution! I won’t trust the difficulty level in leetcode.com

https://github.com/tiger40490/repo1/blob/py1/py/str/continuousSentenceBBG.py shows my recursive backtracking solution. Interviewers basically said it works.

–More efficient solution: non-recursive, no backtracking

Suppose the sentence has 99 chars ch[0], ch[1] … ch[98]. We maintain an "aboveWater[99]" bool array. A ch[33] is above water if some dictionary word ends exactly at ch[33] — e.g. a 4-char bridge word spanning ch[30..33] — and the char just before the word, ch[29], is itself above water (or the word starts at ch[0]).

The bool array starts all underwater. We scan forward once, marking each qualifying char as aboveWater.

Once the left section of the array is analyzed, we don’t need to revisit it by backtracking.
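A minimal c++ sketch of that forward scan (simplified: it tries every word length at every position):

#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>
using namespace std;

//aboveWater[i] is true if sent[0..i] can be fully parsed into dictionary words
bool canParse(string const & sent, unordered_set<string> const & dict){
  size_t const n = sent.size();
  vector<bool> aboveWater(n, false);
  for (size_t i = 0; i < n; ++i)
    for (size_t start = 0; start <= i; ++start){
      if (start > 0 && !aboveWater[start - 1]) continue; //left anchor must be above water
      if (dict.count(sent.substr(start, i - start + 1))){
        aboveWater[i] = true;
        break;
      }
    }
  return n > 0 && aboveWater[n - 1];
}
int main(){
  unordered_set<string> dict{"a", "app", "apple", "pie"};
  cout << canParse("applepie", dict) << endl;  //1
  cout << canParse("applepeer", dict) << endl; //0
}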

–Q2(Leetcode): Given a non-empty string s and a dictionary wordDict containing a list of non-empty words, add spaces in s to construct a sentence where each word is a valid dictionary word. Return ALL such possible sentences.

%%A: My github solution should solve it even if not optimal. Need to test.

binary-matrix island count #DeepakM

Q: https://leetcode.com/problems/number-of-islands/description/

https://github.com/tiger40490/repo1/blob/py1/py/2d/ has my solution. I don’t want to spend the time passing all leetcode tests! Diminishing return

https://www.geeksforgeeks.org/find-number-of-islands/ is conceptually identical, though using a different criterion for "connected" — diagonal neighbors are also "connected"

Hi Deepak

I realize for "connected" problems, there's definitely a graph underneath, so graph traversal is required. I guess BFT or DFT will both find all the nodes connected to "me".

Given a 1000 x 1000 all-black matrix, I think DFT recursion could go as deep as 1,000,000 levels and overflow the stack.

A high-level (vague) idea is

  • Scan in type-writer fashion. Suppose there are 10×10 = 100 cells, each either black or white. I paint each cell brown [1] once visited. I also use an integer counter to keep track of how many cells are already visited.
  • During the scan, if a cell is already visited, I ignore it and move on.
  • Any time I find a black cell, I start a BFT (I dislike DFT) to find all connected black cells. Once I find all the black cells connected to "me", I resume the type-writer scan.
  • Once my counter hits 100, I have visited all cells and will exit.

[1] you paint it white, which is probably better.
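A sketch of the scan + BFT, painting visited black cells white (per footnote [1]) so no shadow matrix is needed:

#include <iostream>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

int countIslands(vector<vector<int> > grid){ //pass by value: we deface the matrix
  int const R = grid.size();
  int const C = R ? grid[0].size() : 0;
  int islands = 0;
  for (int r = 0; r < R; ++r)
    for (int c = 0; c < C; ++c){ //type-writer scan
      if (grid[r][c] != 1) continue; //white, or black but already painted
      ++islands;
      queue<pair<int,int> > q;
      q.push(make_pair(r, c));
      grid[r][c] = 0; //paint on enqueue, to avoid double-visiting
      while (!q.empty()){
        int const x = q.front().first, y = q.front().second;
        q.pop();
        int const dx[] = {1,-1,0,0}, dy[] = {0,0,1,-1}; //4-way connectivity
        for (int k = 0; k < 4; ++k){
          int const nx = x + dx[k], ny = y + dy[k];
          if (nx < 0 || nx >= R || ny < 0 || ny >= C) continue; //bounds check
          if (grid[nx][ny] != 1) continue;
          grid[nx][ny] = 0;
          q.push(make_pair(nx, ny));
        }
      }
    }
  return islands;
}
int main(){
  vector<vector<int> > g = {{1,1,0},{0,1,0},{0,0,1}};
  cout << countIslands(g) << endl; //2
}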

I thought this problem was too hard and not worth study, but you convinced me that it may come up and I can at least give a vague solution  … better than no solution.

Compared to a binary tree, walking over a matrix is /operationally/ (not conceptually) harder because

  • need to check array bounds. Error-prone and time consuming in coding tests
  • The (invisible) links will not cover all cells. To avoid missing a cell, we still need a full matrix scan. The integration of tree walk + matrix scan is unintuitive, to put it mildly.
  • During the tree-walk, you don’t want to visit any cell too many times or get lost in an endless loop. A brute-force technique — shadow matrix to remember all visited cells.
  • … However, once you get over these nitty-gritty operational complexities, tree-walk in a matrix is not really harder. These algo questions can therefore frustrate and fail many candidates untrained in the technique.

find any black-corner submatrix #unsolved

https://www.geeksforgeeks.org/find-rectangle-binary-matrix-corners-1/

Q: given a black/white matrix, find any rectangle whose all four corners are black.

Brute force — typewriter-scan. For each black cell, treat it as a top-left corner and look rightward for a top-right. If found, scan downward for a matching pair of bottom-left/bottom-right corners. If none is found, finish the given row and discard it.

For a space/time trade-off,

  • once I find the black cells on the current row, I put every pair of black columns in a "horizontal" collection
  • if the same column pair was already recorded for an earlier row, those two rows and two columns give the four black corners. See the sketch below.
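A short sketch of that column-pair idea (my completion of the bullets above, untested):

#include <iostream>
#include <set>
#include <utility>
#include <vector>
using namespace std;

//row-by-row scan, remembering every pair of black columns seen so far.
//A repeated pair means two rows share two black columns: a rectangle
bool hasBlackCornerRect(vector<vector<int> > const & m){
  set<pair<int,int> > seenPairs;
  for (size_t r = 0; r < m.size(); ++r){
    vector<int> blackCols; //the "horizontal" collection for this row
    for (size_t c = 0; c < m[r].size(); ++c)
      if (m[r][c]) blackCols.push_back(c);
    for (size_t i = 0; i < blackCols.size(); ++i)
      for (size_t j = i + 1; j < blackCols.size(); ++j)
        if (!seenPairs.insert(make_pair(blackCols[i], blackCols[j])).second)
          return true; //this column pair appeared on an earlier row
  }
  return false;
}
int main(){
  vector<vector<int> > m = {{1,0,1},{0,1,0},{1,0,1}};
  cout << hasBlackCornerRect(m) << endl; //1: rows 0 and 2, columns 0 and 2
}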

##short-n-hard algo@simple data structures

90% of the hardest coding questions use simple data structures. Some of these algos are

  • very short but baffling
  • free of fancy data structures
  • impossible to invent on the spot — they require prior study

–Linked list

Brent — check for loop in linked list

–binary tree

Morris — in-order walk ]O(N) time #O(1)space

–string

  • regex
  • KMP string search

–array

FB2: look-n-say sequence #80%

This is one of the 3 sample FB coding questions, fairly popular
Q: Implement a function that outputs the Look and Say sequence, as explained in https://en.wikipedia.org/wiki/Look-and-say_sequence
1
11
21
1211
111221
312211
13112221
1113213211
31131211131221
13211311123113112211
–Analysis
Iterative solution should do. Based on the last string, we generate a new string: simply scan and split it into segments, i.e. runs of identical digits. Each segment appends two things to the new string: the run length, then the digit.
Corner cases? No
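A minimal sketch of that scan-and-split:

#include <iostream>
#include <string>
using namespace std;

//scan the previous term, splitting it into runs of identical digits;
//each run appends "<count><digit>" to the new term
string nextTerm(string const & prev){
  string out;
  for (size_t i = 0; i < prev.size(); ){
    size_t j = i;
    while (j < prev.size() && prev[j] == prev[i]) ++j; //extend the run
    out += to_string(j - i) + prev[i];
    i = j;
  }
  return out;
}
int main(){
  string term = "1";
  for (int i = 0; i < 10; ++i){ //prints the 10 terms listed above
    cout << term << endl;
    term = nextTerm(term);
  }
}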

FB3: edit distance #80%

This trivial problem surprised me
  1. surprise — python string/list techniques are often easier than c++, so I need to adapt to the language
  2. surprise — took 40 min on white board, mostly because of c++ style thinking
A popular coding question for phone or white-board screening.

Q: Write a function that returns whether two words are exactly “one edit” away using the following signature:

bool OneEditApart(string s1, string s2);
An edit is:
  • Inserting one character anywhere in the word (including at the beginning and end)
  • Removing one character
  • Replacing one character

Examples:

OneEditApart("cat", "dog") = false
OneEditApart("cat", "cats") = true
OneEditApart("cat", "cut") = true
OneEditApart("cat", "cast") = true
OneEditApart("cat", "at") = true
OneEditApart("cat", "act") = false
https://github.com/tiger40490/repo1/blob/py1/py/str/diffBy1.py is my tested code. I first spent 40 minutes to write on white board but it is basically bug-free.
I was writing python in c++ mode and spent too much time on the len-off-by-1 case until I decided to insert into the shorter string. Then I found a much simpler solution …
.. #1 lesson learned in python string matching:
After finding a mismatch char, comparing two slices is much much simpler than insertion technique or iterator adjustment. No performance hit. If you want to avoid string copy, you can even extract a function to take string references.
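The same slice-comparison lesson transplanted to c++, where substr() plays the role of the python slice:

#include <cassert>
#include <string>
using namespace std;

bool OneEditApart(string s1, string s2){
  if (s1.size() < s2.size()) s1.swap(s2); //ensure s1 is the longer (or equal) one
  if (s1.size() - s2.size() > 1) return false;
  for (size_t i = 0; i < s2.size(); ++i){
    if (s1[i] == s2[i]) continue;
    if (s1.size() == s2.size())
      return s1.substr(i + 1) == s2.substr(i + 1); //replacement case
    return s1.substr(i + 1) == s2.substr(i);       //insertion/removal case
  }
  return s1 != s2; //no mismatch so far: true iff s1 has exactly one extra trailing char
}
int main(){
  assert(!OneEditApart("cat", "dog"));
  assert( OneEditApart("cat", "cats"));
  assert( OneEditApart("cat", "cut"));
  assert( OneEditApart("cat", "cast"));
  assert( OneEditApart("cat", "at"));
  assert(!OneEditApart("cat", "act"));
}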

detect loop in slist #Part 2

Note there’s an open question at the end

Don’t use fast-slow iterators. Each of the 3 solutions below has advantages over the fast-slow-iterator solution.

–solution: list-reversal, based on https://blog.ostermiller.org/find-loop-singly-linked-list and implemented in https://github.com/tiger40490/repo1/blob/py1/py/slist/slistReverseLoopDetect.py
Basically, iterate from aa to bb to cc … and reverse each arrow. Suppose dd points to bb initially forming a loop. At some point the snapshot is

  • null <- aa <- bb <- cc <- dd  while currentNode is dd and nextNode is bb.

If we go on, we would be back at aa, proving the existence of a loop.

Benefit of this solution — O(1) space complexity and O(N) time complexity

constraint — won’t work on read-only lists.

–solution: count nodes against memory usage — my invention
We can get a reliable estimate of the memory usage of the entire data structure, and divide it by per-node footprint, so we know within good approximation how many nodes exist. If we count more than that, there must be duplicates.

https://blog.ostermiller.org/find-loop-singly-linked-list shows one way to estimate the memory usage — keep track of the maximum and minimum addresses among all visited nodes.

The low level languages usually provide addresses. The higher level languages usually provide memory usage. In realistic applications, there is always a way to get an estimate of memory usage.

–solution: Brent. See my code https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/cycleDetect.cpp

If we keep count of the total number of increments, we should see that every time we double “power”, that total count is a power of 2. If we have incremented 32 times, then we know { mu + lambda } must exceed 32… Advantages over the simple 2-iterator solution:

  1. tortoise is much faster, so if mu (merge point, or loop starting point) is far out, then we will find it faster. With the standard solution, tortoise takes a long time to reach the merge point.
  2. can return loop size

I now believe I can predict in which “phase” we will detect the loop — If lambda is between 2^5 and 2^6, then we will detect the loop in Phase 6, and we can’t detect in Phase 5
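A bare-bones rendering of the power-of-2 version (my own sketch; the github file is the fuller treatment):

#include <cstddef>
#include <iostream>
using namespace std;

struct Node{ Node * next; };

//hare moves one step at a time; whenever the step count reaches the current
//power of 2, teleport the tortoise to the hare and double "power"
size_t loopSize(Node * head){ //returns lambda i.e. loop size, or 0 if no loop
  Node * tortoise = head;
  Node * hare = head;
  size_t power = 1, steps = 0;
  while (hare){
    hare = hare->next;
    ++steps;
    if (hare == tortoise) return steps; //rendezvous inside the loop
    if (steps == power){ //tortoise jump
      tortoise = hare;
      power *= 2;
      steps = 0;
    }
  }
  return 0; //fell off the end of the list: no loop
}
int main(){
  Node n3 = {NULL}, n2 = {&n3}, n1 = {&n2}, n0 = {&n1};
  n3.next = &n1; //0 -> 1 -> 2 -> 3 -> back to 1, so lambda = 3
  cout << loopSize(&n0) << endl; //3
}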

I suspect power-of-3 is also working:

  • If the slower iterator stops moving completely, then the algorithm is broken because the slower iterator could remain outside the loop.
  • If the slower iterator tailgates the faster iterator and the following distance always stays within a static limit (like 99), then the algorithm is broken because the loop size could exceed 99, so the two iterators would have no chance of rendezvous.
  • ==> If the following distance grows gradually, AND the follower is never stuck forever, I think eventually they will meet —
    • Suppose right after a tortoise jump, powOfN bumps up to 81 but actual loop length is shorter, like 80. Now within the next 81 iterations, the hare would move step by step while the tortoise remains. They would rendezvous for sure.
  • —- pros and cons for power-of-3 over power-of-2 —-
  • pro: if a big loop involves root, tortoise won’t have to jump too early too frequently ==> Fewer comparison needed.
  • con: if there’s a tiny loop far out, tortoise would end up jumping too late?

hackerrank IV: full query

/*requirement: given 99 sentences and 22 queries, check each query against all 99 sentences. All the words in the query must show up in the sentence to qualify.
*/
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_set>
#include <vector>
using namespace std;

//true iff every word in the query appears as a whole word in the sentence
bool check(string const & q, unordered_set<string> const & senWords){
  istringstream lineStream(q);
  string qword;
  while (lineStream >> qword)
    if (senWords.count(qword) == 0) return false;
  return true;
}
void textQueries(vector<string> const & sentences, vector<string> const & queries) {
  //tokenize each sentence once; a raw substring find() would wrongly
  //match a query word inside a longer word, e.g. "a" inside "cat"
  vector<unordered_set<string> > wordSets;
  for (size_t i=0; i<sentences.size(); ++i){
    istringstream ss(sentences[i]);
    unordered_set<string> words;
    string w;
    while (ss >> w) words.insert(w);
    wordSets.push_back(words);
  }
  for (size_t q=0; q<queries.size(); ++q){
    string output1query;
    for (size_t i=0; i<sentences.size(); ++i){
      if (check(queries[q], wordSets[i])) output1query.append(to_string(i)+" ");
    }
    if (output1query.empty()) output1query = "-1";
    cout<<output1query<<endl;
  }
}
int main(){
  vector<string> sen={"a b c d", "a b c", "b c d e"};
  vector<string> que={"a b", "c d", "a e"};
  textQueries(sen, que);
}

bbg-Eq: longest abbreviation #easier

Given a long word with N non-unique characters, we know there can be up to 2^N abbreviations (including the original word, and including the empty string).

Requirement: I give you a list of strings and you suspect some of them could be an abbreviation of your word. Find the longest string among them that’s a valid abbreviation. Optimize for time complexity.

I feel this problem appears simple but not easy to complete quickly

https://github.com/tiger40490/repo1/blob/py1/py/str/testAbbr_ez.py is my python solution, different from the c++ solution below. Not sure which has more bugs.


I started with string::find() and hit the problem of non-unique characters. Interviewer hinted char-matching — the key point I needed for this type of problem.

#include <iostream>
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

typedef string candidate;

vector<candidate> v{"monkey","aaaaaaaaaaaaaaaaaa", "mamalon", "monk"};
string const haystack{"mamalonkey"};
size_t hsize=haystack.size();

//Tip: can use typedef alias in argument:)
//Note: sort() needs a strict weak ordering, i.e. '<' not '<='
bool lenComp(candidate const & a, candidate const & b){
  return a.size()<b.size();
}
ostream & operator<<(ostream & os, vector<candidate> const & c){
  size_t const sz = c.size();
  for (size_t i=0;i<sz; ++i){
        os<<c[i]<<" ";
  }
  os<<endl;
  return os;
}

//single-pass char-matching: is c a subsequence of the haystack? 🙂
bool check1candidate(candidate const & c){
  if (c.empty() || c.size() > hsize) return false;

  for(size_t i=0, h=0; h<hsize; ++h){
        if (c[i] == haystack[h]){
          if (i==c.size()-1) return true;
          ++i;
        }
  }
  return false;
}
int main(){
  sort(v.begin(), v.end(), lenComp);
  cout<<v;
  for (int i=v.size()-1; i>=0; --i){
    if (check1candidate(v[i])){
        cout<<"winner is "<<v[i];
        return 0;
    }
  }
}

bbg-FX: in-place array shuffle #splice

Update: In-Place usually requires 1) splice 2) swap.

Requirement: given a list like

{1,2,3,4,  25,29,44,33,  159,139,155,150,  177,176,169,180} which describes four Persons
{id1,id2,..age1,age2, weight1,weight2,…height1,height2,..}

Write a program to output each person’s full attributes like

{1, 25, 159, 177,   2, 29, 139, 176,   3, 44, 155, 169,   4, 33, 150, 180}

Note the number of attributes (in this case 4) is an input parameter to your program. You don’t know (or care about) the name of each attribute. If you are told there are 37 attributes in each person, just output attr1 .. attr37.

Note the input and output sequence are important.

Q (Bonus question): do it in-place without creating a new collection. You can still have a small number of variables in your code.
A: one way to “cheat” is to append each output person to end of the original vector. It will double in size, so we will truncate the first half.
A: I believe I can use the “homeless/displacement” technique, carrying one displaced value around each permutation cycle. See the sketch below.
A: I now feel many array reshuffles can be done in place with a single temp variable, but this may not be one of them.
A: how about starting with a linked list?

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/insitu_reshuffle_attrib_bbg.cpp has ..
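Here's a sketch of the displacement idea: treat the flat input as an A×P matrix (A attributes, P persons) stored row-major, and transpose it in place. The placed[] bitmap is bookkeeping; a cycle-leader test could remove it for true O(1) space:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

//element at flat index i must move to (i*A) % (size-1); first and last stay put
void reshuffle(vector<int> & v, size_t const A){
  size_t const sz = v.size();
  vector<bool> placed(sz, false);
  for (size_t start = 1; start + 1 < sz; ++start){
    if (placed[start]) continue;
    size_t cur = start;
    int homeless = v[start]; //the displaced value carried around the cycle
    do {
      size_t const dest = cur * A % (sz - 1);
      swap(homeless, v[dest]); //drop the carried value at its home; pick up the occupant
      placed[cur] = true;
      cur = dest;
    } while (cur != start);
  }
}
int main(){
  vector<int> v{1,2,3,4, 25,29,44,33, 159,139,155,150, 177,176,169,180};
  reshuffle(v, 4); //4 attributes
  for (size_t i = 0; i < v.size(); ++i) cout << v[i] << " "; //1 25 159 177  2 29 139 176 ...
  cout << endl;
}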

first N prime Fibonacci: cache server #Thesys

The challenge is not in Fib or prime, but in the algo around them.

//Requirement: For a given input number X, print
//the first X prime Fib numbers. Each output is a Fib and prime
//
//This version is memory efficient as it only remembers the last
//two Fib numbers + all previous prime Fib numbers
//
//The typedef of Fib is nice documentation
//
//My version of isPrime() is simpler
#include <iostream>
#include <string>
#include <vector>
#include <cstdlib>
#include <cctype> //isdigit
using namespace std;

typedef unsigned int Fib;
bool isPrime(int i){
        if (i <= 1) return false;
        for(int j=2; j<=i/2; ++j) {
            if(i%j==0) return false;
        }
        return true;
}

pair<Fib,bool> generate1Fib(){
  static Fib parent = 1;
  static Fib grandparent = 0;
  Fib newFib = parent + grandparent;
  grandparent = parent;
  parent = newFib;
  return make_pair(newFib, isPrime(newFib));
}

void print1stXFibPrimes(size_t const X){
    static vector<Fib> fibPrimes;
    int count = 0;
    for(int j=0; j<fibPrimes.size(); ++j){
                cout<<fibPrimes[j]<<" ";
                if (++count >= X) return;
    }

    while( count<X ) {
          pair<Fib,bool> new1 = generate1Fib();
          if (new1.second){
                fibPrimes.push_back(new1.first);
                cout<<new1.first<<" ";
                ++count;
          }
    }
}
inline bool isInteger(const string & s) {
   if(s.empty() || ((!isdigit(s[0])) && (s[0] != '-') && (s[0] != '+'))) return false ;
   char * p ;
   strtol(s.c_str(), &p, 10) ;
   return (*p == 0) ;
}
int main(int argc, char *argv[]){
  for(int i=1; i<argc; ++i){
    string arg(argv[i]);
    if (isInteger(arg)){
      cout<<"calculate the first "<<arg<<" prime fibs"<<endl;
      print1stXFibPrimes( stoi(arg)) ;
      cout<<endl;
    }else
      cout<<arg<<" is not an integer\n";
  }
}

top N active stocks #bbg#cache,VO

Requirement: implement top_n(int N): top N most active stocks

Requirement 2: add a top_n_last_x_minute(int N, int X): top N among those last traded within the last X millisec

This program also demonstrates how to avoid storing the same string twice, as map key and also as field of the map value

//
#include <unordered_map> //requires -std=c++0x
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#include <chrono>
#include <assert.h>
using namespace std;

struct Rec{
  string const * strPtr; //a best-effort to avoid storing duplicate strings
  int cumvol; //cum vol
  long lastUpd; //millisec since epoch

  Rec(int vol): strPtr(NULL), cumvol(vol),
    lastUpd(chrono::duration_cast<chrono::milliseconds>(
      chrono::system_clock::now().time_since_epoch()).count()) { }

  bool operator<(Rec const & other) const{
        return this->cumvol < other.cumvol;
  }
  friend ostream & operator<<(ostream & os, Rec const & r){
        os<<r.lastUpd<<"-"<<*(r.strPtr)<<"  : "<<r.cumvol;
        return os;
  }
};

typedef unordered_map<string, Rec>::iterator ITR;

class App{
  unordered_map<string, Rec> lookup;
  vector<Rec> v;
  bool dirty;
public:
  App(): dirty(true){}
  void update(string const & symbol, int vol){
    this->dirty = true;
    ITR it = lookup.find(symbol);
    if (it == lookup.end()){
        pair<ITR, bool> pair = lookup.insert(make_pair(symbol, Rec(vol)));
        assert(pair.first->second.strPtr == NULL);
        pair.first->second.strPtr = //second is the value in the pair
          & pair.first->first; //the key of the new pair
    }else{
        assert(it->second.strPtr != NULL);
        it->second.cumvol += vol;
        it->second.lastUpd = chrono::duration_cast<chrono::milliseconds>(
          std::chrono::system_clock::now().time_since_epoch()).count();
    }
  }
  void top_n(int N, unsigned long X=-1){ //unsigned -1 gives the largest positive value!
    size_t sz=lookup.size();
    if (dirty){
      cout<<"resorting due to updates to database ..."<<endl;
      v.clear();
      for(ITR i=lookup.begin(); i!= lookup.end(); ++i){
        v.push_back(i->second);
      }
      sort(v.begin(), v.end());
      dirty = false;
    }
    long now = chrono::duration_cast<chrono::milliseconds>(
      std::chrono::system_clock::now().time_since_epoch()).count();
    //cout<<now<<" is now"<<endl;
    for(int k=sz-1, count=0; k>=0; --k){
        if (now - v[k].lastUpd > X) continue;
        cout<<v[k]<<endl;
        if (++count >= N) break;
    }
  }
};
int main(){
  App app;
  app.update("ibm", 1000);
  app.update("gs", 700);
  app.update("gs", 500);
  app.update("goog", 600);
  app.update("msft", 800);
  app.update("ge", 500);
  app.update("ibm", 160);
  app.top_n(6,29);
  app.update("c", 400);
  app.top_n(3,39);
}

detect loop in slist #Part 1

Q: A singly-linked list (slist) contains a loop. You are given nothing but the head node. With O(1) space complexity, how do you locate the join node? For example,

0(head)->1->2->…101->102->103->4(again), so #4 is the merge point

Here’s Deepak’s ingenious trick

  1. first use the 2-pointer trick to find any node inside the loop.
  2. find the length (say, 55, denoted LL) of the loop using a single moving pointer, starting from that node
  3. now we can discard that node
  4. Now start a pointer B from head and move LL steps.
  5. Now put pointer A at head
  6. Now move A and B in locksteps. They will meet for sure, at the merge point. Here’s the proof:

Suppose the merge point is at MM, i.e. MM steps from head. When A and B starts moving in locksteps,

  • how far is B from MM? MM steps!
  • how far is A from MM? MM steps too.

Note LL value could be small like 1.

#include <iostream>
#include <assert.h>
using namespace std;

struct Node{
  Node const * p;
  int const data;
  Node(int _data, Node const * _p): p(_p), data(_data){}
  friend ostream & operator<<(ostream &os, Node const & node){
        os<<node.data<<" / "<<&node<<" -> "<<node.p<<endl;
        return os;
  }
};

Node _9(9, NULL);
Node _8(8, &_9);
Node _7(7, &_8);
Node _6(6, &_7);
Node _5(5, &_6);
Node _4(4, &_5);
Node _3(3, &_4);
Node _2(2, &_3);
Node _1(1, &_2);
Node _0(0, &_1);
Node & root = _0;
Node const * mergePoint = &_1;

//how many distinct nodes in the loop
size_t getLoopLen(Node const & root){
  Node const * brunner = &root;
  Node const * frunner = &root;
  for(;;){
        frunner = frunner->p->p;
        brunner = brunner->p;
        if (frunner == brunner) break;
  }
  cout<<"now the two runners have met somewhere in the loop: "<<*frunner ;
  for(int ret = 1; ;++ret){
        frunner = frunner->p ;
        if (frunner == brunner) return ret;
  }
}

Node const * getMergePoint(Node const & root){
  size_t LL = getLoopLen(root);
  cout<<"# of nodes in loop = "<<LL<<endl;
  Node const * frunner = &root;
  for(int i = 0; i<LL; ++i){ frunner = frunner->p; }
  //cout<<"front runer is "<<*frunner;
  Node const * brunner = &root;
  for(;frunner != brunner;){
        brunner = brunner->p;
        frunner = frunner->p;
  }
  return frunner;
}

int main(){
  _9.p = mergePoint;
  Node const * ret = getMergePoint(root);
  cout<<"Merge point is "<<*ret;
  assert(ret == mergePoint);
}

check if a word can reshuffle to palindrome

requirement: https://codecon.bloomberg.com/challenger-series/29. With minimal time and space complexity, the corner cases are tricky. I decided to add a null terminator:)

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
  string S;
  cin>>S;
  vector<char> v(S.begin(), S.end());
  sort(v.begin(), v.end());
  v.push_back(0); //null char
  char last=v[0];
  size_t occur=0;
  bool isOddFound = false;

  for(int i=0; i<v.size(); ++i) {
    bool isNew = v[i] != last;
    //cout<<"--> "<<v[i]<<" isNew = "<<isNew<<endl;
    if (!isNew){
        ++occur;
        //cout<<last<<" occured "<<occur<<" times"<<endl;
        continue;
    }
    //cout<<last<<" occured total "<<occur<<" times"<<endl;
    if (occur % 2){
      if(isOddFound){
        cout<<"no"<<endl;
        return 0 ;
      }
      isOddFound = true;
    }
    last = v[i];
    occur = 1;
  }
  cout<<"yes"<<endl;
}

milePerGallon #Deepak SIG

Similar to GS Tick Query question but without optimization requirements.

–Requirement: Your Choice of Editor, OS, Compiler & Programming Language. Total time allotted for this  test is 75 mins, The further rounds of Interview will be based on this programming test.

A group of N people go for driving. When ever they refill the fuel in their vehicles, they update a text file with the following fields

<Person>, <Car>, <No of Miles driven on that day, before the fill>, <NO of Gallons filled i.e. gallons used for those miles>,<Date of fill>

All the fields are comma separated. A sample such file is given below

John, Ford, 350, 20, 20160921
John, Ford, 220, 13, 20160920
John, Ford, 230, 35, 20160918
John, Ford, 300, 22.5, 20161112
Jonathan, Mercedes GLA, 220, 13, 20160920
Mishal, Ford Escort, 230, 35, 20160919

Write a function with the following parameters
GetMPG(name_of_person, Start_date, End_date)
such that it should return the following information for each User/driver
1) Name of Car
2) Miles per Gallon for the mentioned period for each of the Car

A person can have more than one car but never share his cars. If there is no record for the mentioned date range, the function should not return anything for that specific car.

–analysis

https://github.com/tiger40490/repo1/blob/py1/py/milePerGallon.py is my python solution.
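A c++ sketch of the same parsing/aggregation. The input file name fills.txt is my assumption; since the dates are yyyymmdd, plain string comparison orders them correctly:

#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
using namespace std;

void GetMPG(string const & person, string const & startDate, string const & endDate){
  ifstream fs("fills.txt");
  map<string, pair<double,double> > perCar; //car -> {total miles, total gallons}
  string line;
  while (getline(fs, line)){
    stringstream ss(line);
    string name, car, miles, gallons, date;
    getline(ss, name, ','); getline(ss, car, ',');
    getline(ss, miles, ','); getline(ss, gallons, ','); getline(ss, date, ',');
    //drop the leading space left by the ", " separators
    auto trim = [](string & s){ s.erase(0, s.find_first_not_of(' ')); };
    trim(name); trim(car); trim(miles); trim(gallons); trim(date);
    if (name != person || date < startDate || date > endDate) continue;
    perCar[car].first  += stod(miles);
    perCar[car].second += stod(gallons);
  }
  //cars with no record in the range never enter the map, so nothing is printed for them
  for (map<string, pair<double,double> >::iterator i = perCar.begin(); i != perCar.end(); ++i)
    cout << i->first << ": " << i->second.first / i->second.second << " MPG" << endl;
}
int main(){
  GetMPG("John", "20160918", "20160921"); //Ford: (350+220+230)/(20+13+35) MPG
}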

next_perm@N color socks #complex #100%

A common IDE coding challenge — given x pairs of socks of various colors, generate all the permutations, in ascending order. Each color has a value.

–Solution 1: std::next_permutation() and prev_permutation()

–solution 2: I can probably write my own next_perm(). Using This function we can generate an ascending sequence of permutations starting from the current content of a vector.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/nextPerm.cpp is my iterative solution, but should use lower_bound()

https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/nextPermOfNSocks.py is my python solution, overcoming the lower_bound problem.
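Back to solution 1, a minimal demo. Starting from the sorted vector, std::next_permutation() produces each distinct arrangement of the duplicate socks exactly once, in ascending order:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main(){
  vector<int> socks{1,1,2,2,3,3}; //two socks per color value; must be sorted first
  do {
    for (size_t i = 0; i < socks.size(); ++i) cout << socks[i];
    cout << endl;
  } while (next_permutation(socks.begin(), socks.end()));
  //prints 6!/(2!2!2!) = 90 distinct permutations, no duplicates
}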

relating my perm/comb algos

–next_perm: I have an iterative solution

–next perm 5C3: iterative algo, growing global collection

–next_combo 5C3: I have a (approx) 10-line recursive algo. (Iterative algo is non-ideal). Passing in two collections.

–next_abbreviation of any length: I have a (approx) 10-line recursive algo. Using global collection, this algo is simpler than next_combo

–So far, these algos are decent but have nothing in common.

next abbreviation@word with repeat char #2^N #100%

This is a real coding question at 10gen/mongoDB

https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/abbrIterative.py is a simple iterative py solution. Not ascending.

Q1: Given a word with unique letters (like “abc”), can you write a c++ program to generate all abbreviations in any order?
Q2: same question, but don’t use any external storage beside character arrays.
Q3: same question, but output in ascending order? Expected output:

  1. a
  2. ab
  3. abc
  4. ac
  5. b
  6. bc
  7. c

It’s daunting to generate this sequence without recursion. https://github.com/tiger40490/repo1/blob/cpp1/cpp1/combo_permu/abbr_ascendRecursive.cpp is my recursive solution to Q3.

Q4: what if the word has non-unique letters, like “mongo”? The only solution I know relies on an external lookup device.

find anagram groups]file #1-pass, FB

Requirement: if any 2 words are anagrams, that’s a group. Within a file, identify all such groups.

Normalize each word using counting sort on the characters of the word. There’s no competing alternative in this isolated part of the solution. Main part of the solution is sorting all words based on the normalized version. There are many alternative solutions to this sort.

desirable — memory efficiency. create no or few objects besides the strings read in from the file.

desirable — avoid the need to create/allocate linkNode or wrapper objects around original words. In c++, such a wrapper is smaller than in java, probably a pointer to the original word + a link pointer

Solution 1 — insert into a sorted set, using a customized comparison (subclass binary_function) based on the normalized string and something unique, such as line number or object address. After all the inserts, when you iterate the set, an anagram group would show up in a row. We can also keep a separate array or iterators (pointers) that point to the first word in each group — emulating a multimap.

Without the something unique, the set would drop words.

I think insertion is inefficient since a normalized string is regenerated for the same word over and over. To overcome this, we could create a wrapper object with 2 fields — normalized string + a pointer to the original word. However, if “SAT” is a very common normalized string then we create multiple copies of it.

Solution 2 — separate chaining. Maintain a hashed map (faster) or sorted map keyed by the normalized strings, just a single copy for each normalized string. Value of each map entry is a linked list of the original words.

#include <iostream>
#include <iomanip>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <map>
#include <algorithm>
using namespace std;

map<vector<char>, size_t> m;
template<typename T> ostream & operator<<(ostream & out , vector<T> const & v){
        for(int i=0; i<v.size(); ++i) out<<setw(1)<<v[i];
        out<<" ";
        return out; //flowing off the end of a value-returning function is undefined behavior
}
int main(){
  ifstream fs("words.txt");
  string line, word;
  while(getline(fs, line)){
     stringstream ss(line);
     while(getline(ss, word, ' ')){
        //trim
        int endpos = word.find_last_not_of(" ");
        if (endpos < string::npos) word = word.substr(0, endpos+1);
        if (word.empty()) continue;
        //lower case
        transform(word.begin(), word.end(), word.begin(), ::tolower);
        //sort
        vector<char> sorted(word.begin(), word.end()); sort(sorted.begin(), sorted.end());

        int count = ++m[sorted]; //1st time we get 1 🙂
        cout<<sorted<<"\t<- "<<word<<" :" <<count<<endl;
     }
  }
}

generate random line-up@Any combo@N boys #70%

A standard permutation/combination problem in some coding tests. You are often required to iterate all of them.

Given N cities, how many permutations of any combination are there in total?

My iterative sum formula: answer(N)= N_choose_0 + N_choose_1 * 1! + N_choose_2 * 2! + … + N_choose_N * N!

N_choose_0 = 1 (the empty line-up)
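A quick sanity check of the sum: each term N_choose_k * k! equals N!/(N-k)!, i.e. the previous term times a falling factor.

#include <iostream>
using namespace std;

int main(){
  unsigned const N = 5;
  unsigned long long total = 0, term = 1; //term == N!/(N-k)! for the current k
  for (unsigned k = 0; k <= N; ++k){
    total += term;   //k=0 contributes 1, the empty line-up
    term *= (N - k); //next falling factor
  }
  cout << total << endl; //326 for N=5
}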

–recursive algo: derive answer(N) from answer(N-1)

–iterative algo:

  • Suppose we have an iterative_next_perm(list) function already written.
  • suppose we have an iterative_next_combo(N, 3) that generates all combinations of 3 out of N distinct chars.

Then, here’s a solution — call

iterative_next_combo(N,1)
iterative_next_combo(N,2)
iterative_next_combo(N,3)…
iterative_next_combo(N,N), in a loop (of N iterations).

Suppose one of the N calls generated 221 combinations. Run a loop (of 221 iterations) each time pass one combination into iterative_next_perm(combo). So our main program has only 2 nested loops. Most of the complexities are encapsulated in iterative_next_perm() and iterative_next_combo()

next split@N boys #51% #hard

N boys are all unique “entities”, each identified by student id. Later we can look into “next split of N colored balls, having J unique colors”.

In a split, every boy belongs to exactly one group (minimum 1 member). We could rely on python’s default sort order, but for now let’s define a sort order between two groups AA and BB:

  1. sort by group size
  2. If same size, sort by the lowest student id in AA vs in BB.

Every “split” is recorded as a sorted list of groups. I will define a sort order between 2 splits:

  1. sort by list size
  2. if same size, sort by first group, then by 2nd group..

Q1: how many ways to split N boys?
%%A: First we line up N boys — there are N! line-ups. In each line-up, we have 2^(N-1) splits because there are N-1 slots to place a group-boundary. This is a way to iterate over ALL splits, but without uniqueness.

Q2: Write a program to generate all splits in some order. For example, with 3 boys h,i,j:

  1. h/i/j in a giant group…
  2. –3 split-of-2-groups: h, i/j
  3. i, h/j
  4. j, h/i…
  5. –1 split of 3 singular groups: h, i, j
  6. ==== total 5 distinct splits showing 10 group objects but not all distinct

With 3+1 boys h,i,j,k: First add k to each of 10 group objects:

  1. h/i/j/k
  2. h/k, i/j
  3. h, i/j/k
  4. i/k, h/j
  5. i, h/j/k
  6. j/k, h/i
  7. j, h/i/k
  8. h,i, j/k
  9. h,j, i/k
  10. i, j, h/k — so far, 10 splits. Now 5 more splits featuring k in its own singular group
  11. k, h/i/j
  12. h, k, i/j
  13. i, k, h/j
  14. j, k, h/i
  15. h,i,j,k

—- analysis: I didn’t know the count upfront; for N boys it is the Bell number B(N): 5 splits for 3 boys, 15 for 4, 52 for 5. An iterative function will probably work:

After I listed out exactly 5 splits of 3 boys, I added another boy. Here’s the algo I used — for each of the 5 existing splits, add this boy to each of that split’s groups in turn, or put him in a new singular group of his own. See the sketch below.
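A minimal sketch of that grow-by-one-boy algo (each group shown as a string for brevity; the sort orders defined above are omitted):

#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main(){
  vector<vector<string> > splits; //all splits built so far
  splits.push_back(vector<string>(1, "h")); //the single split of the first boy
  string const boys[] = {"i", "j"};
  for (int b = 0; b < 2; ++b){ //add boys one by one
    vector<vector<string> > next;
    for (size_t s = 0; s < splits.size(); ++s){
      for (size_t g = 0; g < splits[s].size(); ++g){ //join an existing group
        vector<string> copy(splits[s]);
        copy[g] += "/" + boys[b];
        next.push_back(copy);
      }
      vector<string> copy(splits[s]); //or form a singular group
      copy.push_back(boys[b]);
      next.push_back(copy);
    }
    splits = next;
  }
  for (size_t s = 0; s < splits.size(); ++s){ //prints the 5 splits of h,i,j
    for (size_t g = 0; g < splits[s].size(); ++g) cout << splits[s][g] << "  ";
    cout << endl;
  }
}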

next Perm@3boys out@5, non-recursive #complex

Latest -- https://github.com/tiger40490/repo1/blob/cpp1/cpp1/combo_permu/iterative_nextPerm3boysOf5.cpp


// Requirement: generate all permutations of C(like 3) chars
//from N(like 5). The count should be N!/(N-C)!
// Bonus: generate in ascending order, where 'ascending' is
//defined on the basis that within the original word, a char
//has lower value than any char on its right. This is more clear
//when the word itself is a sorted string, but actually it's
//not needed.
//
//global collection is simpler than stack variables. Easier to visualize
//and uses less memory
#include <iostream>
#include <sstream>
#include <vector>
#include <deque>
#include <algorithm>
#include <iomanip>
#include <assert.h>
using namespace std;

string _tmp="abcd"; vector<char> const pool(_tmp.begin(), _tmp.end());
vector<size_t> poolIndex;
size_t const C = 3, N = pool.size(); //wanted: all permutations of C, out of the pool of items

//global collection, to be updated in each recursive layer.
vector<vector<size_t> > coll;
// Note each item (like a char or a color or a studentId) is
// represented in the global collection by an unsigned integer.
// This is the positional index in the original "pool" of items.
// This scheme ensures the permutations produced is ascending

string show(vector<size_t> const & v, string headline="", bool isPrint=true){
  stringstream ss;
  for (int i=0; i<v.size(); ++i){
    ss<<setw(5)<<pool[v[i]];
  }
  if (isPrint) cout<<ss.str()<<" <-- "<<headline<<v.size()<<endl;
  return ss.str();
}
void dump(string headline="", bool isAssert=true){
  size_t const cnt = coll.size();
  assert(cnt);
  size_t const len = coll[0].size();
  size_t exp = 1;
  for (int t=N; t>N-len; --t) exp *= t; //loop len times
  assert(cnt == exp);

  stringstream ss;
  string last = "";
  for(int i=0; i<cnt; ++i){
    string item=show(coll[i], "", false);
    ss<<item<<endl;
    if (!isAssert) continue;
    assert(last<item && "should be strictly asending");
    last = item;
  }
  cout<<headline<<"\t size "<<cnt<<endl<<ss.str()<<endl;
}

//RVO should kick in to skip the copying upon return
vector<size_t> const find_unused(vector<size_t> const & perm, size_t const len){
      vector<size_t> tmp4set_diff(perm); //permutations are by definition unsorted.
      sort(tmp4set_diff.begin(), tmp4set_diff.end());
      vector<size_t> unused(N-len);
      set_difference(poolIndex.begin(), poolIndex.end(), tmp4set_diff.begin(),tmp4set_diff.end(),unused.begin());
      //show(tmp4set_diff, "tmp4set_diff"); show(poolIndex, "poolIndex"); show(unused, "unused");
      return unused;
}

//RVO should kick in to skip the copying upon return
vector<size_t> const new_perm(vector<size_t> const & perm, size_t unused){
        vector<size_t> newPerm(perm);
        newPerm.push_back(unused);
        //show(newPerm, "newPerm");
        return newPerm;
}
//This algo is considerably more complex than many recursive algos
//we have written recently, largely due to the set_difference()
void next_perm_from_pool_iterative(){
  for(size_t loop=0;loop<9 /*useful during dev to control when to exit*/;++loop){
    if (coll.empty()){
      for(size_t j=0; j<pool.size(); ++j){
        coll.push_back(vector<size_t>(1, j));
        poolIndex.push_back(j);
      }
      // dump("^^^^ after initialization of coll ^^^^");
    }
    assert(!coll.empty());
    size_t const len=coll[0].size();
    assert(loop+1 == len);
    if (len == C) {
      cout<<"Done:)"<<endl;
      return;
    }
    vector<vector<size_t> > newcoll;
    for(int kk=0; kk<coll.size(); ++kk){
      vector<size_t> const & perm = coll[kk];
      vector<size_t> unused(find_unused (perm, len));

      //now unused contains the pool items not in the current perm.
      //Suppose there are 3 unused items, we will create 3 new permutations
      //by appending each one to the current perm
      for(vector<size_t>::iterator it=unused.begin(); it != unused.end(); ++it){
        newcoll.push_back(new_perm(perm, *it));
      }
    }
    coll = newcoll;
    dump("^^^^ end of iteration ^^^^");
  }
}

int main(){
  next_perm_from_pool_iterative();
}

next_Perm@3boys out@5 #80%

algo-practice: generate permutation@3, using 5 distinct chars

Such a skill might be needed in some coding IV sooner or later. Let’s write this in py or c++. Formally,

Q1: generate all permutations of 3, from 5 distinct chars, in any order.
Q2: generate all permutations of 3, from 5 distinct chars, in ascending order. You can sort the 5 chars first.
Q2b: once you have generated one permutation, how do you identify The next?

Note the same solution is a generalization of std::next_permutation(), so once you have this solution you have that solution.

How many? 5-choose-3 * 3! = 5!/(5-3)! = 5*4*3 = 60, denoted Answer(3). Paradoxically, Answer(4) == Answer(5) = 120

–algorithm 1 for Q1

  • 1st generate 5 single-char strings;
  • then for each generate 4 two-char strings. We get 20 strings.

–algorithm 1b for Q1: rec(5,3) will call rec(5,2). rec(5,2) has 20 items, each of which can generate 3 items for rec(5,3), because each item has 2 characters and a void to be filled by one of the 3 unused chars.

The 20 items returned should be a pair{vector of 2, vector of 3}

This produces a sorted collection:)

See my code  https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/nextPerm%403boysFrom5.py

next_Combo@3 boys out@5 # 100%

Given abcde, how many combinations of 3? 5-choose-3 = 10 by the standard formula, but let’s implement it in py or c++.

It’s useful to define a score for a combination — sort the combination into a string. The string is the score. Now,

Q1: generate all combinations@3 from 5 distinct chars, in ascending score.
Q2 (more importantly) given any combination of 3, generate the next higher combination of 3.  Output each combination as a sorted string to keep things clean. Extremely effective constraint and simplification.

1) https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/nextComboOf3from6boys.py is a decent recursive solution.

2) Perhaps we can modify the iterative generate random perm@3boys out@5 algorithm:

Key: Scan from the end of the string to identify the position2upgrade (p2u). Compare the current (i.e. last) position with the unused characters. If I can’t upgrade it [e.g. abe], then move left, which is guaranteed to be a lower character. Now compare the new “me” with only the unused characters (in this algo we don’t allow swaps within the string). If I can’t upgrade “me”, move left again.

If current position is upgradeable, then set it as p2u. Upgrade to the lowest unused character. Then reset all characters on my right and return.

  1. abc
  2. abd
  3. abe
  4. acd
  5. ace
  6. ade //last 2 letters are the max. p2u = 0
  7. bcd
  8. bce
  9. bde
  10. cde

next_Combo@3 boys out@5 #recursive descending

//This recursive version suffers from stack overflow
//but it's able to print out combinations in Descending order and
//maintains the relative positions between any 2 items
//
//This version reduces vector cloning by growing/shrinking the prefix vector
#include <iostream>
#include <sstream>
#include <vector>
#include <iomanip> //setw
#include <algorithm>  //sort
#include <assert.h>
using namespace std;
size_t calls=0, combos=0;
size_t const C=3; //how many in each combination
vector<char> emptyV;

template<typename T> void dump(vector<T> const & p, string const & s){
  cout<<"------------ "<<s<<" ------------ size = "<<p.size()<<endl;
  for(int i=0; i<p.size(); ++i) cout<<setw(5)<<p[i];
  if (p.size()) cout<<endl;
}
template<typename T> void show(vector<T> const * p, vector<T> const * v = NULL){
  ++ combos;
  stringstream ss;
  for(int i=0; i<p->size(); ++i) ss<<setw(5)<<(*p)[i];
  if (v){
    //cout<<",";
    for(int i=0; i<v->size(); ++i) ss<<setw(5)<<(*v)[i];
  }
  static string last("ZZZZZ");
  string combo=ss.str();
  cout<<"combo: "<<combo<<endl; assert(last >= combo);
  last = combo;
}

template<typename T> void recurs(vector<T> & prefix, vector<T> const & pool){
  ++calls;
  dump(prefix, "entering ... prefix"); dump(pool, "pool");
  if (prefix.size()            == C) return show(&prefix);
  if (prefix.size()+pool.size()== C) return show(&prefix, &pool);
  assert (!pool.empty());
  vector<T> const newPool(pool.begin()+1, pool.end());
  recurs(prefix, newPool);
  prefix.push_back(pool[0]);
  recurs(prefix, newPool);//this function returns only after all the layer are done
  prefix.pop_back();
}

int main() {
  string tmp = "abcde";
  vector<char> v(tmp.begin(), tmp.end());
  assert(C <= v.size());
  recurs(emptyV, v);
  cout<<calls<<"  calls to the recursive function to generate "<<combos<<endl;
}

next_Combo@3 boys out@5 #recursive

//This recursive version suffers from stack overflow but
//it's able to print out combinations in ascending order and
//maintains the relative positions between any 2 items
//
//This version reduces vector cloning by growing/shrinking
//global objects but with higher complexity
//
//However, global variables actually simplify the logic!
#include <iostream>
#include <sstream>
#include <deque>
#include <iomanip> //setw
#include <algorithm>  //sort
#include <assert.h>
//#define DEBUG
using namespace std;
size_t calls=0, combos=0;
size_t const C=3; //how many in each combination
string tmp = "abcde";
deque<char> pool(tmp.begin(), tmp.end());
deque<char> prefix;

template<typename T> void dumpDeque(deque<T> const & p, string const & headline){
  cout<<"-- "<<headline<<" -- size = "<<p.size()<<endl;
  for(int i=0; i<p.size(); ++i) cout<<setw(5)<<p[i];
  cout<<endl;
}
template<typename T> void showCombo(deque<T> const * p){
  ++ combos;
  stringstream ss;
  for(int i=0; i<p->size(); ++i) ss<<setw(5)<<(*p)[i];
  static string last;
  string combo=ss.str();
  cout<<"combo: "<<combo<<endl;
  assert(last <= combo && "should be ascending");
  last = combo;
}

template<typename T> void recurs(){
  ++calls;
#ifdef DEBUG
  cout<<"-------------\nentering "; dumpDeque(prefix, "prefix"); dumpDeque(pool, "pool");
#endif
  if (prefix.size() == C) return showCombo(&prefix);
  if (pool.empty()) return;
  T poolhead = pool.front(); pool.pop_front();

  prefix.push_back(poolhead); //add poolhead to prefix

  //this 1st recursive function call starts a rather deep call stack and prints
  //all combinations with the given (new longer) prefix
  recurs<T>();//use the longer prefix and the shorter pool
  prefix.pop_back();//restore prefix
  recurs<T>();
  pool.push_front(poolhead); //restore pool, needed by the 2nd call in the parent stack
#ifdef DEBUG
  cout<<"^^^^^^ restored before returning "; dumpDeque(prefix, "prefix"); dumpDeque(pool, "pool");
#endif
}

int main() {
  assert(C <= pool.size());
  recurs<char>();
  cout<<calls<<"  calls to the recursive function to generate "<<combos<<endl;
}

next_Combo@3 boys out@5 #iterative complex

//Without loss of generality, each combination is internally represented
//as a sorted vector (ascending).
//There's one-to-one mapping between such a vector and a combination
#include <iostream>
#include <vector>
#include <iomanip> //setw
#include <algorithm>  //sort
#include <assert.h>
using namespace std;
size_t changes=0, calls=0;
size_t const C=3; //how many in each combination

template<typename ITR> bool isAscending (ITR const b, ITR const end){
  for (ITR last = b, i = b; i!=end; last = i, ++i){ //advance 'last' alongside i, comparing neighbors
    if (*last > *i) {
      cout<<*last<<" should be lower (or equal) but is higher than "<<*i<<endl;
      return false;
    }
  }
  return true;
}
template<typename T> void dump(vector<T> & v,  bool isAssert = true){
  for(int i=0; i<v.size(); ++i) {
    cout<<setw(5)<<v[i];
    if (i == C-1) cout<<"  unused:";
  }
  cout<<endl;
  for(int i=0; i<v.size(); ++i){
    cout<<setw(5)<<i;
    if (i == C-1) cout<<"  unused:";
  }
  cout<<endl<<"---------------------------------"<<endl;
  if(isAssert){
    assert(isAscending(v.begin(), v.begin()+C) && "1st C elements should be ascending after next_combo (not next_perm)");
    assert(isAscending(v.begin()+C, v.end()) && "unused section should be ascending");
  }
}

template<typename T> bool reshuffle(vector<T> & v, int p2u){
//      cout<<"after swap"<<endl; dump(v);
        sort(v.begin()+p2u+1, v.end());
//      cout<<"after sorting everyting to the right of p2u"<<endl; dump(v);

        if (p2u == C-1){
                sort(v.begin()+C, v.end());
                ++changes;
                return true;
        }
        assert(p2u<C-1);
        //now reset everything to my right
        //now locate the best_man (next item after the new p2u) .. can use binary search
        for(int i=p2u+1; i<v.size() ; ++i){
          if (i==v.size()){ //p2u is highest possible!
                //cout<<"p2u is already highest"<<endl;
                sort(v.begin()+C, v.end());
                ++changes;
                return true;
          }
          if (v[p2u]<v[i]){
                //cout<<"best man = "<<i<<endl;
                for(int j=0; p2u+1+j<=C-1; ++j){
                  swap(v[p2u+1+j], v[i+j]);
                }
                sort(v.begin()+C, v.end());
                ++changes;
                return true;
          }
        }//for
        // now must return!

        assert(1==0 && "should never reach here");
        cout<<"after best_man search"<<endl; dump(v);
}

// reshuffles vector to the next higher combo
//Assuming 5-choose-3, the 1st 3 chars represent the combination,
//and the remaining characters at the end of the vector are
//unused in the current combination.
template<typename T> bool next_combo(vector<T> & v){
  ++calls;
  dump(v );
  if (v.size() == C) return false; // C-choose-C == 1 !

  for(int p2u=C-1; /*last position*/ p2u >=0 ;--p2u){
    for (int unusedItem=C; unusedItem<v.size(); ++unusedItem){ //scan the unused section of the array
        if (v[p2u] < v[unusedItem]) {
          assert(p2u<unusedItem);
          swap(v[p2u], v[unusedItem]);  //p2u should not change further
        //cout<<"identified "<<p2u<<" as position to upgrade... Will reset subsequent positions, and return"<<endl;
          return reshuffle(v, p2u);
        }
    }
    // no p2u identified yet. move p2u marker to the left
  }//for
  cout<<"no more higher combo. This is the end"<<endl;
  return false;
}
int main() {
//  vector<float> v{111,222,333,444,555,666};
  string tmp = "abcdefg";
  vector<char> v(tmp.begin(), tmp.end());
  assert(C <= v.size());
  for(; calls<9992; ){
        if (!next_combo(v)){
         cout<<changes<<" changes performed till the highest combo; next_combo() call count = "<<calls<<endl;
         return 0;
        }
  }
}

tail-recursion Fibonacci # tricky]python

Tail recursion is a “halo” skill in coding interviews. It turns out that most recursive functions can be reworked into the tail-call form, according to http://chrispenner.ca/posts/python-tail-recursion.

The same author also demonstrates

  1. python recursion stack depth is about 1000 only, so deep recursion is unpopular in python
  2. python doesn’t support tail recursion
  3. some decorator trick can simulate tail recursion in python

—————-

Easiest demo problem is factorial(N). For Fibonacci, https://stackoverflow.com/questions/22111252/tail-recursion-fibonacci has a very short python implementation (though I suspect python doesn’t optimize tail recursion). Let me rephrase the question:

Q: Given f(firstVal=0, secondVal=1, length=0) returns 0, f(0,1,1) returns 1, can you implement f(0,1,N) using recursion but in O(N) time and O(1) space? Note Fib(N) ==f(0,1,N)

Key points in the python solution:

  • Start with iterative algo, then convert it to tail recursion.
  • use 2 extra arguments to hold last two intermediate values like Fib(2) Fib(3) etc
  • We saw in the iterative solution that memory usage is O(1), a good sign that tail recursion might be possible.
  • if you observe the sequence of Fib() values computed in the blackbox, actually, you see Fib(2), Fib(3) … up to Fib(N), exactly like the iterative solution.
  • solution is extremely short but non-trivial

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/FibTailRecurse.cpp is my very brief implementation
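A sketch matching the f(0,1,N) signature above (my own rendering, not necessarily identical to the github version):

#include <iostream>

//tail recursion: the recursive call is the very last action, so an
//optimizing compiler can reuse the same stack frame -- O(1) space
long f(long firstVal, long secondVal, int length){
  if (length == 0) return firstVal;
  return f(secondVal, firstVal + secondVal, length - 1);
}
int main(){
  for (int n = 0; n < 10; ++n) std::cout << f(0, 1, n) << " "; //0 1 1 2 3 5 8 13 21 34
}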

show free slots between meetings #bbg c++11

Q: You are given a list of teleconference booking like {9,10} meaning from 9am to 10am. We can multi-task, so overlap is fine. Can you print out all the free slots within the day? You can safely assume all the input data are sensible and between 9 to 18, i.e. our workday.

I solved this problem on the spot. My weakness is still the ECT cycle esp. the compiler errors.

I was familiar with the inverse problem of listing all busy sessions. So I decided to *focus* on that. I took a calculated risk that once I get that to work, I will have the cool confidence to solve the original problem.

corner case: two adjacent meetings. I added comment to specify exactly how to check for that
corner case: what if the entire day is one busy session?

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int const SOD = 0, EOD=24;

struct Intv{ //can be a single meeting, a joined session or a free window
  int bgn, end; //end = 10 means 10 o'clock sharp. Will become float
  Intv(int a, int b): bgn(a), end(b){}
  bool operator<(Intv const & other) const { return this->bgn < other.bgn;}

  //can be a free function if no private fields
  friend ostream & operator<<(ostream & os, Intv const & a){
        os<<a.bgn<<'-'<<a.end;
        return os;
  }
};
template<typename T> ostream & operator<<(ostream & os, vector<T> const & v){
        for(int i = 0; i<v.size(); ++i) os<<v[i]<<"  ";
        os<<endl;
        return os;
}
void printBusy(vector<Intv> const & arg){
  if (arg.empty()) return;
  vector<Intv> & v = const_cast<vector<Intv>&> (arg);
  sort(v.begin(), v.end());
  size_t const sz = v.size(); //cache
  Intv growable = v[0];
  for(size_t i = 1; i<sz; ++i){
        Intv & req = v[i];
        if (growable.end < req.bgn){ //strict '<': back-to-back meetings (end == next bgn) merge into one busy session
          cout<<growable<<endl; //close current session
          growable = req;//start new session
          continue;
        }
        growable.end = max(growable.end, req.end);
  }
  cout<<"last session : "<<growable<<endl;
}
void printWindow(vector<Intv> const & arg){ //subsequent exercise
        if (arg.empty()){
                cout<<"free slot "<<Intv(SOD, EOD)<<endl;
                return;
        }
        vector<Intv> &v = const_cast<vector<Intv> &>(arg); //avoid vector deep cloning
        sort(v.begin(), v.end());
        cout<<v<<endl;
        if (v[0].bgn > SOD){
          cout<<"SOD free slot "<<Intv(SOD, v[0].bgn)<<endl;
        }
        Intv growing(v[0]); //current busy period
        for(int i=1; i<v.size(); ++i){
          //Heart of the algo. each new meeting either extends the current
          //busy period or prints a free window before it
          Intv & meeting = v[i];
          //cout<<i<<" a meeting: "<<meeting<<endl;
          //cout<<"growing = "<<growing<<endl;
          if (growing.end < meeting.bgn){
                cout<<"free slot "<<Intv(growing.end, meeting.bgn)<<endl;
                growing = meeting;
          }else{
                growing.end = max(growing.end, meeting.end);
          }
        }
        if (growing.end < EOD)
                cout<<"EOD free slot "<<Intv(growing.end, EOD)<<endl;
}
int main() {
        //pass in initializer list to initialize a const vector
        printBusy({Intv(15,17), Intv(7,9), Intv(5,7)});
}

LRU cache #Part 1 #Bbg

I believe my friend got this question in a bbg phone interview.

Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations:

get(key) – Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) – Set or insert the value if the key is not already present. When the cache reached its capacity, it should invalidate the least recently used item before inserting a new item.

Follow up:
Could you do both operations in O(1) time complexity?

==Analysis==

Hash table to support lookup. The “value” in the hash table is a pointer to a link node in a doubly linked list; the link node also holds the key/value pair — a 2-way linkage.

The list grows at the head for every new key/value pair, so the tail is the oldest pair. Every time a key/value is accessed via the hash table, we move its link node to the head.

When capacity is reached, we would remove the tail node. Using the key in that node, we also remove from hash table.
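A minimal sketch of this design: std::list plays the doubly linked list, and list::splice() moves a node to the head in O(1) without invalidating the iterators stored in the hash table.

#include <iostream>
#include <list>
#include <unordered_map>
using namespace std;

class LRUCache{
  typedef list<pair<int,int> >::iterator NodeItr;
  size_t const capacity;
  list<pair<int,int> > mru; //front = most recently used {key, value}
  unordered_map<int, NodeItr> pos; //key -> list node
public:
  explicit LRUCache(size_t cap): capacity(cap) {}
  int get(int key){
    unordered_map<int, NodeItr>::iterator it = pos.find(key);
    if (it == pos.end()) return -1;
    mru.splice(mru.begin(), mru, it->second); //move the node to the head
    return it->second->second;
  }
  void put(int key, int value){
    if (get(key) != -1){ //values are positive per the requirement, so -1 means absent.
      mru.front().second = value; //get() already moved this node to the head
      return;
    }
    if (mru.size() == capacity){ //evict the tail i.e. the oldest pair
      pos.erase(mru.back().first);
      mru.pop_back();
    }
    mru.push_front(make_pair(key, value));
    pos[key] = mru.begin();
  }
};
int main(){
  LRUCache c(2);
  c.put(1, 100); c.put(2, 200);
  cout << c.get(1) << endl; //100; key 1 becomes most recent
  c.put(3, 300);            //evicts key 2
  cout << c.get(2) << endl; //-1
}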

reverse words in a string containing a sentence

Q: A sentence can contain 2 or more words, spaces and punctuation marks but no hyphen. You are given a sentence as a c-string. Reverse the words in O(1) space O(N) time. “a1 b2 c3” becomes “c3 b2 a1”.

%%A: first pass to reverse the c-string character-wise by swapping. 2nd pass — scan left to right and keep the current-word-start and current-word-end positions. Once we have a word, reverse the chars in it by swapping.
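A sketch of that two-pass technique:

#include <algorithm>
#include <cstring>
#include <iostream>
using namespace std;

void reverseWords(char * s){
  size_t const n = strlen(s);
  reverse(s, s + n); //pass 1: reverse the whole buffer character-wise
  for (size_t start = 0; start < n; ){ //pass 2: re-reverse each word in place
    while (start < n && s[start] == ' ') ++start; //skip spaces
    size_t end = start;
    while (end < n && s[end] != ' ') ++end; //find the end of the word
    reverse(s + start, s + end);
    start = end;
  }
}
int main(){
  char buf[] = "a1 b2 c3";
  reverseWords(buf);
  cout << buf << endl; //c3 b2 a1
}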

min-stack #bbg

My friend Abhinav (not his real name, to protect his privacy) got this question at Bloomberg internship interview. I added some details to make it clearer:

Q: Design a stack that supports push, pop, top, and retrieving the minimum element all with O(1) time complexity in the worst case.

There exists a function compare(Item A, Item B) that returns 1 if A is greater, 0 if equal, and -1 if A is smaller.

  • getMin() — Retrieve the minimum element in the stack.
  • push(x) — Push element x onto stack.
  • pop() — Removes the element on top of the stack.
  • top() — Get the top element.

==== analysis =====

The most efficient getMin() data structure is the binary heap, but insert/delete runs in O(logN). Therefore I felt the requirements here are computationally impossible. But in this context, we only need to support deleting the last added item 🙂

Key insight — popping is a constrained form of deletion. It’s hard to support O(1) while supporting unconstrained deletions, BUT with a constraint on deletions, all operations can be O(1).

I need a regular stack + a helper data structure. A linked list or vector can support the stack operations

–The helper — Another (sorted) stack to hold record-breakers and record-matchers.

IFF a new minimum (record breaker) or another item matching the existing minimum (record-matcher) is added, we push it to the sorted stack.

After every pop(), we check the popped item. If equal to top of sorted stack, then pop the sorted stack.

At any time, top of the sorted stack is the current minimum.
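A minimal sketch of the two-stack design (int keys for brevity, in place of the compare() function):

#include <cassert>
#include <stack>
using namespace std;

class MinStack{
  stack<int> data;
  stack<int> mins; //record-breakers and record-matchers; top is the current minimum
public:
  void push(int x){
    data.push(x);
    if (mins.empty() || x <= mins.top()) mins.push(x); //breaker or matcher
  }
  void pop(){
    if (data.top() == mins.top()) mins.pop();
    data.pop();
  }
  int top()    const { return data.top(); }
  int getMin() const { return mins.top(); }
};
int main(){
  MinStack s;
  s.push(5); s.push(3); s.push(3); s.push(7);
  assert(s.getMin() == 3);
  s.pop(); s.pop(); //removes 7 and one of the 3's
  assert(s.getMin() == 3); //the matcher 3 is still in place
  s.pop();
  assert(s.getMin() == 5);
}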

–Here’s a design to beat binary_heap, based on finite-sized (32 or 64-bit int) keys

Assuming the comparison is based on a 32-bit integer key (strings or floats can also use radix sort), I will use a radix array structure — perhaps 4 arrays of 256 elements, or a 4×256 matrix. Ultimately the payload stored in the data structure is pointers to stack nodes. This saves memory since a stack node may be 99MB.

Every time we push a node, we derive the new key value in O(1), use the key value to locate its ‘home’ in the radix structure and store the new node’s address in a linked list therein. After the push(), we can compare and update (in constant time) a global variable pointing to the minimum node.

Each stack node also has a back-pointer to the iterator into the list, so before pop() we can use the iterator to locate the object in the radix structure and delete it from the host linked list. We will also update the global variable.

find lowest natural number !! in%%list#codility

Q: Write a function int solution(int[] A);  that, given an array A of N integers, returns the smallest natural number that does not occur in A. For example, given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.

• each element of array A is a 32-bit signed int
• expected worst-case time complexity is O(N);
• expected worst-case space complexity is O(N).
* Elements of input arrays can be modified.

https://leetcode.com/problems/first-missing-positive/description/ is similar but O(1) space and average O(N) time!

—— my analysis —–
This is not testing coding abilities (ECT, syntax…), but comp science algo.

Codility is a fully automated system. I believe the O(N) requirement will not be checked by a human. The Codility system may not be able to determine O() by parsing the source code. In fact, a O(N) source code may run slower than a O(N logN) solution and fail the Codility hidden tests. Therefore, we can put aside the O(…) requirement initially.

One way to achieve O(N) is radix sort on A, dropping any negative values. Then iterate the sorted array and check for the earliest “gap”. Worst case — you get 1,2,3… without gap, so answer is the 1+ largest array element.
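—-solution: bucket-marking, O(N) time and O(N) space, matching the stated targets. The answer always lies in 1..N+1, so mark which of 1..N occur, then scan for the first gap:

#include <iostream>
#include <vector>
using namespace std;

int solution(vector<int> const & A){
  int const N = A.size();
  vector<bool> seen(N + 2, false); //seen[v] == true iff v occurs, for v in 1..N
  for (int k = 0; k < N; ++k)
    if (A[k] >= 1 && A[k] <= N) seen[A[k]] = true; //values outside 1..N can't matter
  for (int i = 1; ; ++i)
    if (!seen[i]) return i; //terminates by i == N+1 at the latest
}
int main(){
  cout << solution(vector<int>{1,3,6,4,1,2}) << endl; //5
  cout << solution(vector<int>{1,2,3}) << endl;       //4
  cout << solution(vector<int>{-1,-3}) << endl;       //1
}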

—-solutionB: make_heap O(1) space but O(N logN) time. Build a min-heap based on the array in O(1) space, then keep calling min().

  • make_heap shows random access container (vector used in demo), and “rearrange”, implying O(1) space
  • make_heap shows O(N) to construct the heap

min() is O(1) but delete_min() is O(log N), so overall we probably get O(N logN)

—-solutionC: radix sort. https://en.wikipedia.org/wiki/Radix_sort#In-place_MSD_radix_sort_implementations shows an in-place binary radix sort.

First pass to transform all negative numbers to 0.

O(W*N) where W is width of the largest integer. If we assume 64-bit then W is a constant:)

http://www.drdobbs.com/parallel/parallel-in-place-radix-sort-simplified/229000734 is another in-place radix sort.

Monkey-jump problem{Ashish IV@Baml #solved

On the 2D-coordinate plane, a monkey starts from point [a,b] and wants to reach [P,Q]. From any point, the monkey can only jump to one of two points:

  1. [x,y] up to [x, x+y]
  2. [x,y] out to [x+y, y]

Q: Can you write bool canReach(unsigned long long a,b,P,Q)
A: start from [P,Q] and have no decision to make!

#include <iostream>
using namespace std;
struct Point {
    int x, y;
    Point(int x, int y) : x(x), y(y) {}
    friend ostream& operator<<(ostream& os, Point const & n){
      os<<n.x<<","<<n.y;
      return os;
    }
};
bool canReach(unsigned int a, unsigned int b, unsigned int P, unsigned int Q){
  Point lower(a,b), higher(P,Q);
  for(Point p=higher;;cout<<p<<endl){
    if (p.x == lower.x && lower.y == p.y){
        cout<<"Good:)"<<endl;
        return true;
    }
    if (p.x==p.y || p.x<lower.x || p.y<lower.y){
        cout<<"Failed:( Can't jump further"<<endl;
        return false;
    }
    if (p.x<p.y){ p.y = p.y-p.x ; continue;} 
    if (p.x>p.y){ p.x = p.x-p.y ; continue;}
  }
}
int main() {
  cout<<canReach(1,10, 57,91);
}

factorize a natural number #AQR tough

My friend Dilip received this question in a 2017 AQR on-site.

Q: given a natural number (like 8), write a function to output every factorization such as (2,4) (2,2,2). You can ignore or include the trivial factorization (1,8). You can use recursion if you want.
— (incomplete) Analysis

  1. I would first generate all the prime numbers up to sqrt(N)
  2. among them, i would find all the factors. Once I find a prime factor x, keep dividing by x so I know in total I have 3 x’s, 2 y’s and 1 z, or (x,x,x,y,y,z). I call them 6 non-distinct “prime factors”.

From there, I might be able to write a (recursive?) function to output the factorization formulas. The ideal algo automatically avoids duplicate factorizations but Here’s my non-ideal design: generate all 2-way “splits”, all 3-way splits… If I keep all my “splits” in a hashtable, I can detect duplicates. So just treat the 6 factors as 6 distinct factors. Now the problem is well-defined — next split@N boys .

— trie solution based on generate combinationSum compositions #backtrack up] trie+tree

— recursive solution by CSY

  • no prime number needed! A major drawback — if the target number is odd, we would still keep testing 2, 4, 6, 8 as possible divisors!

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/combo_permu/factorize_AQR.cpp is very short solution by CSY. Here’s my analysis —

  • I believe every time the factorize(60) function finds a small factor like 2, it pushes the factor onto a global stack, then run factorize() on the quotient i.e. 30 — wherein every factorization formula on 30 is “decorated” with the stack.

https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/factorize_AQR.py is my modified/improved python solution

  • I replaced the global vector with a local immutable list on each call stack. It helps me reason. This is also thread-friendly, if the target number is large.
  • It’s highly instructive to work out the expected output from the recursive loops, as in my comments.
  • Just like the continuousSentence problem, the recursive solution is clever-looking but not scalable.
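
For illustration, here’s a minimal recursive sketch in C++ (my own, not CSY’s exact code). Forcing the trial divisor to be non-decreasing avoids duplicate formulas without any hashtable:

#include <iostream>
#include <vector>
using namespace std;
// print every non-trivial factorization of n, using factors >= minFactor
void factorize(int n, int minFactor, vector<int> & prefix){
  for (int d = minFactor; d * d <= n; ++d){
    if (n % d) continue;
    prefix.push_back(d);
    for (int f : prefix) cout << f << " x "; // one formula: prefix + quotient
    cout << n / d << endl;
    factorize(n / d, d, prefix); // recurse on the quotient, same minFactor
    prefix.pop_back();
  }
}
int main(){
  vector<int> prefix;
  factorize(8, 2, prefix); // prints 2 x 4, then 2 x 2 x 2
}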

generate paths in ternary matrix #Promethean TomJerry

–Requirement:

Similar to binary-matrix cluster count #DeepakM #DST+fullScan but the connection is modeled differently.

–Original requirements: Input matrix has an object in each cell

  • 1 means wall or barrier
  • 0 means passage
  • 2 means cheese station

Jerry is at [x][y]. Tom starts at [0][0] and must collect all the cheese and deliver it to Jerry. Find the shortest path. Tom can only move one (U/D/L/R) step at a time.

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/2d/TomJerry.cpp is my half-finished code. It builds a graph connecting all the cells (each a Node object). It’s able to check feasibility. Todo:

  • breadth-first walk from Tom to Jerry, without revisiting any node. Hopefully we can discover all possible paths (i.e. collecting all cheese)
  • determine the shortest.

 

tokens shared among friends #Promethean

See also NxN matrix for graph of N nodes

Requirement: There are N friends numbered from 1 to N. There are M pairs of links, where each (x, y) pair is connected by a shared integer token described by tokenId. Any two friends, x and y, can be connected directly by multiple tokens, or indirectly (without a directly shared token), because if friends x and y share token t and friends y and z also share token t, then x and z are also said to share token t.

Note if x/y shares t and u/v share t, then x and u may be unconnected!

Find the maximal product of x and y for any directly or indirectly connected (x , y ) pair such that x and y share the maximal number of tokens with each other. If x/y have 3 tokens connecting them, and u/v also have 3 tokens, then we compare x*y vs u*v.

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/miscIVQ/tokenLinked_Friend.cpp showcasing nested containers like map<int, list<set<int>>>. I believe STL is harder than java, python etc.

sorted circular array max() in O(log N)

Q: A sorted array A[] with distinct elements is rotated at some unknown point, the task is to find the max element in it.

Expected Time Complexity : O(Log n)

–Analysis —
It takes constant time to determine whether the list starts ascending or descending.

(Take the first 3 elements. If they are not sorted, then the entire task is simple — the answer is the max among the three, because we have hit the turning point.)

Suppose we know it’s ascending; then binary-search forward to locate the turn.
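
A minimal C++ sketch of that binary search (assuming ascending order before rotation, distinct elements):

#include <iostream>
#include <vector>
using namespace std;
int maxOfRotated(vector<int> const & A){
  int lo = 0, hi = (int)A.size() - 1;
  while (lo < hi){                     // binary-search for the turn (the minimum)
    int const mid = lo + (hi - lo) / 2;
    if (A[mid] > A[hi]) lo = mid + 1;  // turn is to the right of mid
    else hi = mid;                     // turn is at mid or to its left
  }
  int const n = A.size();
  return A[(lo + n - 1) % n];          // element just before the turn is the max
}
int main(){
  vector<int> A{4, 5, 6, 7, 0, 1, 2};
  cout << maxOfRotated(A) << endl; // 7
}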

c++CollabEdit/Broadway IV: implement hash table#python

Q: implement a hash table class in any language. You could use existing implementations of linked list, array, hash function…

Q: talk about how you would implement rehash?
%%A: hash code won’t change for the key objects. But I would rerun the modulus against the new bucketCount. Based on the new index values, I would create the linked lists in each new bucket. Every pair needs to be relocated. Lastly I need to get rid of the old bucket array.
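
A minimal C++ sketch of that rehash flow (my own illustration: separate chaining, int keys only, no delete):

#include <iostream>
#include <vector>
#include <list>
#include <utility>
#include <functional> // std::hash
using namespace std;
class HashTable {
  vector<list<pair<int,int> > > buckets;
  size_t count = 0;
public:
  explicit HashTable(size_t n = 4) : buckets(n) {}
  void put(int key, int val){
    if (count + 1 > buckets.size()) rehash(2 * buckets.size());
    auto & chain = buckets[hash<int>()(key) % buckets.size()];
    for (auto & kv : chain) if (kv.first == key){ kv.second = val; return; }
    chain.push_back(make_pair(key, val));
    ++count;
  }
  int const * get(int key) const {
    auto const & chain = buckets[hash<int>()(key) % buckets.size()];
    for (auto const & kv : chain) if (kv.first == key) return &kv.second;
    return nullptr;
  }
private:
  void rehash(size_t newSize){
    vector<list<pair<int,int> > > fresh(newSize);
    for (auto const & chain : buckets)
      for (auto const & kv : chain) // hash codes unchanged; only the modulus changes
        fresh[hash<int>()(kv.first) % newSize].push_back(kv);
    buckets.swap(fresh); // old bucket array discarded here
  }
};
int main(){
  HashTable ht;
  ht.put(1, 100); ht.put(1, 200);              // insert then update the same key
  for (int k = 2; k < 10; ++k) ht.put(k, k*k); // forces a few rehashes
  cout << *ht.get(1) << endl;                  // 200
}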

Q: how would you test your hash table?
%%A: try inserting (key1, val1), then (key1, val2), then look up key1
%%A: if I know any common weakness of the hash function, then test those.
%%A: trigger rehash

Q: what could go wrong in a multi-threaded context?
%%A: things like lost update or duplicate entries

Q: What concurrency solution would you choose for best performance?
%%A: could use lockfree algo at each point of writing to the bucket array or writing to a linked list.

minimum-cost array shrinking #Citadel

Input is a vector of positive integers. You are to shrink it progressively. Each step you remove 2 selected elements, and replace with their sum. Therefore vector size drops by 1 at each step, until there’s one element left.

At each step there’s a cost, which is defined as that sum.

Eg: {4,2,1} becomes {4,3} after you combine 2/1. The cost at this step is the sum 2+1=3.

Q1: For a given vector, find a sequence of steps with the lowest total cost. Test your code in c++. (This is the classic greedy used in Huffman coding: always combine the two smallest remaining elements, hence the min-heap below.)
Q2: how would you optimize your code, assuming high data volume?

#include <vector>
#include <queue>
#include <algorithm>
#include <iostream>
#include <string>
#include <functional> // std::greater
using namespace std;

vector<int> vec = { 3,2,1 }; // a std::set will fail when duplicate values show up like {3,3}
priority_queue<int, vector<int>, std::greater<int> > pq(vec.begin(), vec.end());

void dumpVector(string msg) {
  cout << msg << ", size = " << vec.size() << endl;
  for (auto it = vec.begin(); it != vec.end(); ++it) cout << *it << ' ';
  cout << endl;
}

// alternative to the heap: linear scan for the minimum, then erase it
int operateVector(int sum = 0) {
  auto lowestItem = min_element(vec.begin(), vec.end());
  sum += *lowestItem;
  vec.erase(lowestItem); // now lowestItem is an invalidated iterator and unusable

  //remove() is bad as it removes all (not one) items having the target value!
  //v.erase(std::remove(v.begin(), v.end(), *lowestItem), v.end());

  dumpVector("after erase");
  return sum;
}

void dumpHeap(string msg) {
  auto clone = pq;
  cout << msg << ", size = " << clone.size() << endl;
  for (; !clone.empty(); clone.pop()) {
    std::cout << clone.top() << ' ';
  }
  cout << endl;
}
int operateHeap(int sum = 0) {
  sum += pq.top();
  pq.pop();
  //dumpHeap("after pop");
  return sum;
}

int f1(int sum = 0) {
  return operateHeap(sum);
}
int main() {
  int totalCost = 0;
  for (; pq.size() > 1;) {
    int sum = f1(f1()); // nested calls: pop the two smallest and add them up
    pq.push(sum);
    dumpHeap("after push");
    totalCost += sum;
  }
  cout << "total cost = " << totalCost << endl;
  return 0;
}

construct graph from list of connections #BGC java

Given an input file showing a list of {string, string} pairs, build a connection graph.

If you have a correct connection graph, then you can easily determine the connectedness (bool) of any 2 nodes. In a social-network, this bool flag indicates whether 2 individuals are completely unconnected or somehow connected.

—-analysis:
I see this as a social-network. Any pair represents an edge connecting 2 nodes.  At any time there are a number of disconnected islands. The next pair could 1) merge 2 islands or 2) add a node to an existing island or 3) create a new island 4) do nothing, if the 2 nodes are already in some existing island

  • Any known node appears exactly once in the entire graph, in exactly one of the islands.
  • All nodes are contained in a lookup table or hashmap  {node -> island}
  • Each island can be a implemented as a hashset of nodes.

So here’s a proposed algo to process a new pair {A, B}. Look for A and B in the  graph. 3 scenarios + a dummy scenario:

  • (Scenario 3) If both A and B are newcomers, then they form a new island.
  • if both A and B are already in the graph,
    • (Scenario 4) if they are in the same island, then exit. Nothing to do
    • (Scenario 1) else we can merge the 2 islands
  • (Scenario 2) If A is in island 3 but B is a newcomer, then B joins island 3

The merge operation is expensive. The big lookup table needs update but here’s an alternative:

  • At merge time, the smaller island would have all the nodes moved to the bigger island. When the island is empty, it gets a pointer “this.redirect” to the bigger island.
  • lookup table needs no update, avoiding locking a global object.
  • At query time, we look up the table to get the original island, then we follow its pointer (defaults to null) until the island is non-empty.
  • endless loop? would only be a programming error.
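
This redirect scheme is essentially union-find. A minimal C++ sketch (my own illustration, with path-halving in place of the empty-island redirect pointer; node names are hypothetical):

#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;
unordered_map<string, string> parent; // node -> next node on the chain to its island's representative
string find(string x){
  while (parent[x] != x){ parent[x] = parent[parent[x]]; x = parent[x]; } // path halving
  return x;
}
void addPair(string const & a, string const & b){
  if (!parent.count(a)) parent[a] = a; // scenario 2/3: a newcomer forms its own island
  if (!parent.count(b)) parent[b] = b;
  string ra = find(a), rb = find(b);
  if (ra != rb) parent[ra] = rb;       // scenario 1: merge the 2 islands
}                                      // scenario 4: already same island, nothing to do
bool connected(string const & a, string const & b){
  return parent.count(a) && parent.count(b) && find(a) == find(b);
}
int main(){
  addPair("alice","bob"); addPair("carol","dan"); addPair("bob","carol");
  cout << boolalpha << connected("alice","dan") << endl; // true
  cout << connected("alice","eve") << endl;              // false
}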

maxProfit, maxSubArraySum #Kadane

One of the top 3 all-time favorite algo questions, worth studying in-depth. I know only two algorithms — (Kadane) disposableCurSubArray and lowWaterMark (my own and Xinfeng Zhou’s). I think both are equivalent from a few angles.

My algo is intuitive (to me) for level input (delta input can be transformed into levels) . One pass. I feel it’s not inferior in any way.

In contrast, disposableCurSubArray is intuitive for delta input.

https://github.com/tiger40490/repo1/tree/py1/py/array has a tested solution.

https://www.geeksforgeeks.org/largest-sum-contiguous-subarray/ has a simpler solution, to be understood.

  1. special case: If all numbers are negative, then the final answer is the negative number closest to zero, as a single-element array.
  2. special case: If all numbers are positive, then the final answer is the entire array
  3. so the tricky case must have a mix of negative/positive

Here’s my explanation of Kadane’s algo:

  • Delta Input array has arr[0], arr[1] .. arr[n]. let’s denote maxSubsumEndingAt(i) as B[i]. The max Subarray Ending At #55 is a continuous subarray starting somewhere before #55. It can be a lone-wolf containing only #55, as an important special case. In fact, in the mixed positive/negative array, we will surely encounter this case.
  • (2-pointer algo) We maintain a left marker and a right marker. Both move to the right. maxSubArrayEndingAt(i) is basically an accumulating subarray from left marker to right marker.
  • B[i+1] == max(B[i] + arr[i+1] –vs– arr[i+1] )
    • if  B[3] is negative i.e. all 4 sub-array sums ending in arr[3] are all negative, then B[4] should not include any of them. We can discard them for good and “start afresh“(while keeping the global maxSumSeen)
    • else, there exists a “positive” sub-array ending at arr[3], so we keep growing it until it becomes negative.
    • (See github code) I can imagine a left-marker, the start of the current max subarray. We will move the right marker to grow the sub-array if the current sub-array is useful i.e. positive. Otherwise, we start afresh and the left marker is set to right-marker
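
A minimal C++ sketch of the B[i] recurrence above (my own illustration):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
int maxSubArraySum(vector<int> const & arr){
  int best = arr[0];       // global maxSumSeen
  int endingHere = arr[0]; // B[i]: max subarray sum ending at i
  for (size_t i = 1; i < arr.size(); ++i){
    // B[i+1] = max(B[i] + arr[i+1], arr[i+1]); start afresh when B[i] is negative
    endingHere = max(endingHere + arr[i], arr[i]);
    best = max(best, endingHere);
  }
  return best;
}
int main(){
  vector<int> v{-2, 1, -3, 4, -1, 2, 1, -5, 4};
  cout << maxSubArraySum(v) << endl; // 6, the subarray {4,-1,2,1}
}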

Here’s my attempt to connect my lowWaterMark to Kadane’s algo:

I suspect that whenever we move the left marker, we always have a new lowWaterMark.

(Level is defined as 0 _before_ the first element. In case III, the low water mark is usually a negative level.) Suppose A few elements after hitting low water mark level=-66, we hit level=-22. This is not a new water mark. maxSubsumEndingAt[this element] is actually positive since there exists a subarray starting right after the “level=-66” element!

When we hit level=-77, a new water mark, the maxSubsumEndingAt[this element] is basically zero, as the disposableCurSubArray is discarded. We start afresh to accumulate. Essentially, we reset the “base” to be the new water mark.


substring search – Boyer-Moore

I think single-string coding questions are evergreen. The insights and techniques might be …reusable?

Brute force search — shift by one, then attempt to match the entire substring. My code below is more efficient than brute force: mostly it still shifts by one each time, but checks just one char per shift.

Q: write a python function match(string haystack, string nonRegexNeedle), following the simplified Boyer-Moore algo:

Try matching at the left end. If mismatch, then identify the LEFT-most offending char (J) in haystack. Now locate the RIGHT-most position of J in needle (if not found then simply shift all the way past the J.) and right-shift the needle to align the two J.

Q: So when would we right-shift  by one? Not sure. So I don’t know if my code below is 100% correct in all cases.

def visualize(haystack, substring, offset):
  print('\t--' + '0123456789 123456789 12345-----')
  print('\t--' + haystack)
  print('\t--' + ' '*offset + substring)
  print('\t--' + '-------------------------------')

def match(haystack, substring):
  offset = 0 #offset from start of haystack, where substring is now
  while(True):
    #now locate first offender in haystack
    offenderOffset, offender = -9999, ''
    for i, char in enumerate(substring):
       offenderOffset = offset+i
       if substring[i] != haystack[offenderOffset]:
         offender = haystack[offenderOffset]
         print(offender + ' <-- spotted at offenderOffset = ', offenderOffset)
         visualize(haystack, substring, offset)
         break

    if offender == '': return True

    # now back-scan substring to locate offender, and then shift to align
    # not really forward-scan haystack
    while True:
      offset += 1
      if offset + len(substring) > len(haystack):
           print('Gave up completely:')
           visualize(haystack, substring, offset)
           return False
      if offenderOffset - offset == -1:
        print('gave up on aligning: ')
        visualize(haystack, substring, offset)
        break
      if substring[offenderOffset - offset] == offender:
        print('aligned: ')
        visualize(haystack, substring, offset)
        break # go back to start of outer while loop

print(match('aborb bdb', 'abors'))

 

find sub-array with exact target sum #O(N)#1pass clever

#include <cstdio>

/* determines if there is a subarray of arr[] with sum equal to 'sum'.
Nice and simple algorithm. Works for exact match only, and assumes
non-negative elements (the sliding window relies on that).
*/
void subArraySum(int const arr[], int const n, int const sum)
{
	int curr_sum = 0;

	/* Add elements one by one to curr_sum and if curr_sum exceeds
the sum, then remove starting elements */
	for (int le = 0, ri = 0; ri <= n-1; ri++) {
		curr_sum += arr[ri];
		/* If curr_sum exceeds the sum and sub-array isn't single,
then remove the starting elements */
		while (curr_sum > sum && le < ri) {
			printf("curr_sum = %d too high. le = %d; ri = %d\n", curr_sum, le, ri);
			curr_sum = curr_sum - arr[le];
			le++;
		}

		if (curr_sum == sum) {
			printf("Sum found between indexes %d and %d\n", le, ri);
			return;
		}
		// now curr_sum is too small or the sub-array is single
	}
	printf("No subarray found\n");
}
int main()
{
	int const arr[] = { 11, 24, 2, 4, 8, 7 };
	int const n = sizeof(arr) / sizeof(arr[0]);
	int const sum = 7;
	subArraySum(arr, n, sum);
}

 

my quicksort in python/c++

import random

# partition the given section by fixing the anchor
def partitionUsingFarRight(A, le, ri):
    pivotValue = A[ri]  # immutable value
    pivotPos = i = ri
    while True:
        if i <= le: return pivotPos
        i -= 1
        if A[i] > pivotValue:
            swap(A, pivotPos-1, i)
            swap(A, pivotPos-1, pivotPos)
            pivotPos -= 1

def partitionUsingFarLeft(A, le, ri):
    # optional: swap a random object to the far left
    swap(A, random.randint(le, ri), le)
    benchmark = A[le]
    ret = i = le
    while True:
        if i == ri: return ret
        i += 1  # move pointer
        if A[i] < benchmark:  # 3-step adjustment
            swap(A, ret+1, i)
            swap(A, ret+1, ret)
            ret += 1

def partition(A, le, ri):
    return partitionUsingFarLeft(A, le, ri)

def recurse(A, le, ri):
    if le >= ri: return
    print('entering partition()...', A[le:ri+1], ' pivotVal =', A[ri])
    anchor = partition(A, le, ri)
    print('...after partition()   ', A[le:ri+1])
    recurse(A, le, anchor-1)
    recurse(A, anchor+1, ri)

def swap(A, x, y):
    A[x], A[y] = A[y], A[x]

def qsort(A):
    recurse(A, 0, len(A)-1)
    print(A)

qsort([222,77,6,55,3,11,5,2,88,9,66,22,8,44,1,33,99,7,66])

Above is py, below is c++


#include <iostream>
#include <vector>

std::vector<int> A{77, 11, 66,22,33,99,44,88, 77, 55, 0};
int const size = A.size();

void dump(int l=0, int r=size-1) {
	for (int i = l; i <= r; ++i)
		std::cout << A[i] << ' ';
	std::cout <<std::endl;
}

template <typename T>
void swap(int pos1, int pos2) {
	if (A[pos1] == A[pos2]) return;
	T tmp = A[pos1];
	A[pos1] = A[pos2];
	A[pos2] = tmp;
}

/*partition the region [l,r] such that all elements smaller than
pivotValue are on the left of pivotPosition
*/
template <typename T>
int partitionUsing1st(int l, int r) {
	T const pivotVal = A[l];
	int pivotPos = l;
	for (int i = l+ 1; i <= r; ++i) { 
		if (A[i] >= pivotVal) continue;
		swap<int>(pivotPos + 1, i);
		swap<int>(pivotPos + 1, pivotPos);
		++pivotPos;
	}
	return pivotPos;
}
template <typename T>
int partitionUsingLast(int l, int r) {
	T const pivotVal = A[r];
	int pivotPos = r;
	for (int i = r - 1; i >= l; --i) {
		if (A[i] <= pivotVal) continue;
		swap<int>(pivotPos - 1, i);
		swap<int>(pivotPos - 1, pivotPos);
		--pivotPos;
	}
	return pivotPos;
}
/*based on [[Algorithms unlocked]], should but doesn't minimize swaps!
Lime zone -- items smaller than pivot value
Red zone -- items bigger than pivot value
Ugly zone -- items yet to be checked
*/
template <typename T>
int partitionMinimalSwap(int le, int ri) {
	T const pivotVal = A[ri];
	// two boundaries exist between zones
	int redStart = le;
	// we start with redStart == uglyStart == l, which means item at le is Unknown
	for (int uglyStart = le; uglyStart < ri; ++uglyStart) {
		if (A[uglyStart] < pivotVal) {
			swap<int>(uglyStart, redStart);
			redStart++;
		}
	}
	swap<int>(ri, redStart);
	return redStart;
}

template <typename T>
void recurse(int l, int r) {
	if (l >= r) return; // recursion exit condition
	int const anchor = partitionMinimalSwap<T>(l, r);
	recurse<T>(l, anchor-1);
	recurse<T>(anchor+1, r);
}

int main() {
	recurse<int>(0, size-1);
	dump();
	return 0;
}

1st lonewolf]unsorted int array

Q: if an integer appears only once in an array, then it’s a lone wolf. Find the first lone wolf in a given array.

This problem is mostly about efficient data structures.

— one idea — O(N) one-pass. linked list + hash table. https://github.com/tiger40490/repo1/blob/cpp1/cpp1/array/1stLoneWolf.cpp is my tested implementation.

In a green linked list, each link node holds {index}. Every index “point to” a lone wolf.

Every lone wolf int or repeated int is also in a hashtable {int -> {color, iterator in linked list}}

For every node encountered, we add it to the hashtable. If it is a new int, it’s appended to the linked list; if it turns from green to brown, then we remove it from linked list.
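
A minimal C++ sketch of that one-pass idea (my own illustration; simplified so the list holds values rather than indices):

#include <iostream>
#include <vector>
#include <list>
#include <unordered_map>
#include <iterator>  // prev
#include <stdexcept>
using namespace std;
int firstLoneWolf(vector<int> const & arr){
  list<int> greens; // candidate lone wolves, in arrival order
  unordered_map<int, pair<bool, list<int>::iterator> > seen; // value -> {isGreen, position in greens}
  for (int v : arr){
    auto it = seen.find(v);
    if (it == seen.end()){                          // brand new: paint green
      greens.push_back(v);
      seen[v] = make_pair(true, prev(greens.end()));
    } else if (it->second.first){                   // second sighting: green -> brown
      greens.erase(it->second.second);
      it->second.first = false;
    }                                               // third+ sighting: already brown
  }
  if (greens.empty()) throw runtime_error("no lone wolf");
  return greens.front();
}
int main(){
  vector<int> a{4, 5, 4, 6, 5, 7};
  cout << firstLoneWolf(a) << endl; // 6
}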

— one idea — O(N) two-pass. linkedHashMap
* any “new” element will be appended to a LinkedHashMap and painted green
* any “seen” element will be repainted brown in the same container.
* How to test for newness? Any examined element is tested by contains().
* 2nd scan to return 1st green.

— One idea — hash table + heap in fwd scan

  • use a hash table to hold payload objects, and progressively mark each payload green or brown
  • each payload consists of {color, index}. The key in the hash table is arr[index]
  • As we create a payload (to put into hash table), we also save its address in a min-heap, sorted on the index
  • After the scan, extract minimum from the heap, until we get a green element.

 

max product (j-i)*min(arr[i], arr[j]), streaming mode #Ashish

In the static, non-streaming context, the optimal solution is perhaps in my gmail. The brute force would evaluate roughly N*N/2 pairs. We can reduce that significantly.

Start with 1st/last, which is quite possibly the final winner. Save this as maxProductOfLoopA and evaluate a[0] vs a[9] (i.e. the last).

  • Suppose a[0] > a[9], then 1/9 , 2/9 , 3/9 etc are automatically eliminated.
  • Suppose a[0] < a[9], then 0/8, 0/7, 0/6 etc are automatically eliminated.
  • if a[0] == a[9], then 0/8 etc are automatically eliminated

You can visualize it as removing an outer layer from a NxN matrix. Note the matrix is triangular and has exactly one outer row and one outer column at any moment. In the first step, you either remove the outer row or outer column.

Suppose you removed the */9 column. In the 2nd step, we compute the 2nd product at 0/8, do a 2nd evaluation of a[0] vs a[8], and remove another outer layer.

In about N steps we should reduce the matrix to 1 remaining cell. This cell could be the final winner so we must evaluate it.
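
A minimal C++ sketch of this layer-peeling, in its classic two-pointer form (my own illustration):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
long long maxProduct(vector<int> const & a){
  long long best = 0;
  for (int i = 0, j = (int)a.size() - 1; i < j; ){
    best = max(best, (long long)(j - i) * min(a[i], a[j]));
    if (a[i] < a[j]) ++i; // pairs i/(j-1), i/(j-2).. are eliminated
    else --j;             // pairs (i+1)/j, (i+2)/j.. are eliminated
  }
  return best;
}
int main(){
  vector<int> a{1, 8, 6, 2, 5, 4, 8, 3, 7};
  cout << maxProduct(a) << endl; // 49 i.e. (8-1)*min(8,7)
}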

—-

Hi Ashish,

Let me change your question slightly. Input numbers come individually in a stream. Among all possible pairs of numbers, we want to compute and publish the latest maximum product:

    A(i, j) * B(i, j)

, where A(i, j) is the time lag j-i and B(i, j) is minimum(arr[i], arr[j])

NB: We will need to keep all numbers seen so far, since the winning pair might include the first elements.

At any time there’s a current max. When we receive the 89th element, enter loop A:

compute the product for 1st/89th i.e. arr[0]/arr[88]
* If arr[0] > arr[88], then just exit the loop with this product as maxProductOfLoopA. No need to try 2nd/89th or 3rd/89th, as all those products are all smaller than 1st/89th. (This is the genius of the solution you told me.)
* otherwise, compute the product for 2nd/89th. If it exceeds maxProductOfLoopA, then update maxProductOfLoopA. Now check if arr[1] > arr[88]. If yes just exit loop A with maxProductOfLoopA
* otherwise compute the product for 3rd/89th….

Once we exit loopA, update the current max product using maxProductOfLoopA.

For streaming situation, I think this is one good solution, if not the optimal solution.

[12]back-scan any container,print`every Other item #MS

Have I overspent my time on this once-asked question?

The umbrella question — write a utility function to iterate any container and print out every Other element backwards?

Good coding practice! I think this is all about iterator syntax knowledge (my weakness) not algorithm (my strength)!

Note this is really about knowledge not  coding abilities. QQ not ZZ.

Iterator declaration is a can of worms 😦 I might need to give up on this.

#include <iostream>
#include <vector>
#include <list>
#include <set>
using namespace std;

template<class _InIt>  void printAlternateItem2itr(_InIt _First, _InIt _Last){
	bool flag = true;
	// if the iterator is from rbegin, then ++ would reverse it!
	for (_InIt it = _First; it != _Last; ++it, flag=!flag) {
		if (flag) cout << *it << ' ';
	}
	cout << endl;
}
template <typename CONT> void printAlternateItemBackward(CONT const & cont) {
	printAlternateItem2itr(cont.rbegin(), cont.rend());
}
int main() {
	//vector<int> cont = { 11,2,3,4,5,6,7,18 };
	//list<int> cont = { 11,2,3,4,5,6,7,18 };
	string cont = "0123456789a";
	set<int> cont2 = { 11,2,33,44,55,66,77,88,99 };
	printAlternateItemBackward(cont);
	printAlternateItemBackward(cont2);
	int arr[] = { 11,2,3,4,5,6,7,18,9 };
	int size = sizeof(arr) / sizeof(arr[0]);
	printAlternateItem2itr(arr, arr + size); //forward only
}

—-
Q: is comparison defined on all iterators?
A: No. I now think only random-access iterators support it; linked list iterators don’t.

%%Q: what’s the signature of STL find()? I will use those declarations of iterators in my function. (Actually the map and set containers have member functions find() outperforming std::find)

%%Q: from a const container, can u get a non-const iterator?

Q: why don’t you take a container as input? Why must you take iterators?
%%A: it’s more common to take iterator, but in this case container will do. All containers provide rbegin() or begin() including string. Raw array doesn’t but the iterator increment won’t work for raw arrays anyway.


Separate question
Q: OO design — how would you represent Order state transition graph in an OMS?

rotate array by K slots in O(1)space O(N) time

Can write a c++ solution as an ECT practice. Easy to verify. Added to enroute.txt.

[[EPI300]] 6.13 is probably a much shorter and more readable solution!

A: find the greatest common divisor between N and K. If it's 3, then the array consists of 3 interleaved subsequences. We will "process" each subsequence simultaneously since they are completely independent. The first subsequence consists of Slots 0,3,6,9… Within each subsequence, K' and N' (scaled down by the GCD) have no common divisor, therefore an iteration will visit every element exactly once until we revisit the first element.

Q: rotate a large array (N elements) by K positions. Both N and K are potentially large, so we need 1) time efficiency 2) memory efficiency. Ideally, O(1) space and O(N) time.

–analysis
If K = 1, then just shift every element right, rotating the last element to the head. If K = 2, just repeat once. But K is large.

You can assume it’s an array of pointers, or you can assume it’s an array of integers — Same challenge.

The brute-force solution requires O(K*N), shifting all N elements K times.

Without loss of generality, let’s rotate by 5 slots. If N is multiple of 5, like 30, then there are 5 individual, independent, interleaved arrays of 10. Just rotate each by 1 slot.

If N is not a multiple of 5, like 31, then view the array as a circular array and we hop by 5 slots each time. So our journey will start from position 0 (or any position) and cover every position and come back to the starting position. This can be proven by contradiction —

Sooner or later we must be re-visiting some position. We know that position is reachable from position 0, so the distance between them must be multiple of 5. So position 0 must be revisited.
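
A minimal C++ sketch of this hopping idea, the classic "juggling" algorithm (my own illustration; assumes C++17 for std::gcd):

#include <iostream>
#include <vector>
#include <numeric>   // std::gcd (C++17)
#include <algorithm> // std::swap
using namespace std;
// O(N) time, O(1) space: one hop-cycle per GCD class
void rotateRight(vector<int> & a, int k){
  int const n = a.size();
  k %= n;
  if (k == 0) return;
  for (int start = 0, g = gcd(n, k); start < g; ++start){
    int cur = start, held = a[cur];
    do {
      int next = (cur + k) % n; // hop k slots, wrapping around
      swap(held, a[next]);
      cur = next;
    } while (cur != start);
  }
}
int main(){
  vector<int> a{1, 2, 3, 4, 5, 6};
  rotateRight(a, 2);
  for (int x : a) cout << x << ' '; // 5 6 1 2 3 4
  cout << endl;
}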

 

find every 3-subsets adding up to 9 #Ashish

Q: You have 9 poker cards, numbered 1 to 9. Given two integers SUM (for example 11) and COUNT (for example, 3), construct every combination of 3 cards, who add up to the target 11.

Within each combination, if we need to generate each permutation, it’s as simple as calling next_permutation() within an array (which is a combination).

You can only choose each card 0 or 1 time, i.e. no redraw.

I used to feel this was dynamic programming. Now I feel we have no choice but to iterate over all combinations. We have an algo to generate ascending combinations. We can store them in mini arrays, each one ascending. We could use binary search in each mini-array.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/sizeN-subset_TargetSum_Ashish.cpp is a very short recursive solution by my friend CSY.

//There are N distinct poker cards numbered 1 to N. Find all combinations
//of C cards such that each combo adds up to the same given Target
//
//Based on https://bintanvictor.wordpress.com/2017/11/08/generate-next_combo3-boys-out5-recursive/
//
//Note some range of the generated sequence is ascending, permitting
//binary search like lower_bound, but it's not easy to tweak the algo
//to skip ahead. There's no random access iterator here!
#include <iostream>
#include <sstream>
#include <deque>
#include <iomanip> //setw
#include <algorithm>  //sort
#include <assert.h>
//#define DEBUG
using namespace std;
size_t calls=0, combos=0;
size_t const C=3; //how many in each combination
size_t const Target=12;
deque<int> pool{1,3,4,5,8};
deque<int> prefix;

template<typename T> void dumpDeque(deque<T> const & p, string const & headline){
  cout<<"-- "<<headline<<" -- size = "<<p.size()<<endl;
  for(int i=0; i<p.size(); ++i) cout<<setw(5)<<p[i];
  cout<<endl;
}
template<typename T> int showCombo(deque<T> const * p){
  ++ combos;
  size_t sum = 0;
  stringstream ss;
  for(int i=0; i<p->size(); ++i){
        ss<<setw(5)<<(*p)[i];
        sum+= (*p)[i];
  }
  static string last;
  string combo=ss.str();
  cout<<"combo: "<<combo;
  if (sum == Target) cout<<" <- Hit!";
  cout<<endl;
  assert(last <= combo && "should be ascending");
  last = combo;
  return 0;
}

template<typename T> int recurs(){
  ++calls;
#ifdef DEBUG
  cout<<"-------------\nentering "; dumpDeque(prefix, "prefix"); dumpDeque(pool, "pool");
#endif
  if (prefix.size() == C) return showCombo(&prefix); //prefix alone is the combo
  if (pool.empty()) return 0;
  T poolhead = pool.front(); pool.pop_front();

  prefix.push_back(poolhead); //add poolhead to prefix

  //this 1st recursive function call starts a rather deep call stack and prints
  //all combinations that start with the given (new longer) prefix
  recurs<T>();//use the longer prefix and the shorter pool

  prefix.pop_back();//restore prefix
  recurs<T>();
  pool.push_front(poolhead); //restore pool, needed by the 2nd call in the parent stack
#ifdef DEBUG
  cout<<"^^^^^^ restored before returning "; dumpDeque(prefix, "prefix"); dumpDeque(pool, "pool");
#endif
  return 0;
}

int main() {
  assert(C <= pool.size());
  sort(pool.begin(), pool.end());
  recurs<int>();
  cout<<calls<<"  calls to the recursive function to generate "<<combos<<endl;
}

[15] EPI300 skyline #event-handler

Reusable technique – SQL

PROBLEM STATEMENT (equivalent to Leetcode Q 218)
We have to design a program which helps drawing the skyline of a two-dimensional city given the locations of the rectangular buildings in the city. Each building B_i is represented by a triplet of (L_i, R_i, h_i) where L_i and R_i are the left and right coordinates of the ith building, and h_i is the height. In the diagram below there are 8 buildings, represented from left to right by the triplets (1, 5, 11), (2, 7, 6), (3, 9, 13), (12, 16, 7), (14, 25, 3), (19, 22, 18), (23, 29, 13) and (24, 28, 4).

Input
The input of the program is a sequence of building triplets. The triplets are sorted by L_i (the left coordinate of the building) in ascending order.

====analysis====

A data structure challenge. Once I had the correct data structure, the problem solved itself (迎刃而解).

Q1 —– For any given x, determine the height in the skyline.
Note If x == R_i, then the ith building doesn’t count. In other words, If you look at the first building, the 1-to-5 range it covers is a half-open interval, sometimes written as [1,5) as this range includes 1 but excludes 5. You can think of [1,5) as [1, 4.99999999] approximately.

A1(brute force): look at each building and decide if it “covers” the point X. Given the pre-sort, most buildings aren’t relevant. A complete solution would be

select max(h) from myTable t where t.L <= x and x < t.R

To support this solution, the objects could be stored in two sorted data structures, one sorted by L_i and one sorted by R_i.

Q2 —- draw the skyline.
A2: evaluate Q1 (i.e. get the height) at every L_i and R_i value. This solution is probably suboptimal.

Q2b (Leetcode Q218) —- draw the skyline by outputting a sequence of {x,h} pairs showing the change in height (represented by a new height h) at each vertical edge (marked by x).

Is it possible to do this in one scan after preprocessing the triplets with some clever data structures? [[EPI300]] may have a solution. Here’s my own proposal —

Pre-sort the N buildings into a list by L_i, and sort the same buildings into another list by R_i, and merge the 2 lists into a big sorted list of 2N pointers to N unique objects. Each building shows up twice. Each of the 2N entries consists of {building_object_id, x, boolean flag left_or_right }. We will go through this big list one-pass.

I convert this list into a sequence of “events” as we move along the x-axis. That’s why all left/right edges must be sorted into a single sequence.

Main scan — As we hit the left edge of a building, we include this building in the Alive container (probably a BST). In Alive, we keep the buildings sorted by height. We also maintain a lookup table { building_id, pointer/iterator into Alive}. As we hit the right edge of a building, we remove it from Alive. (No need to remove from lookup since we won’t see the same building again.)

As we hit any edge, we need to determine the impact on the skyline if any. This is when we make use of the Alive system. For any edge, there’s impact iff the target building is taller than all other in Alive.
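
Here's a compact sweep-line sketch of the Q2b proposal (my own illustration, assuming C++17; a multiset plays the role of the Alive container, and erase-by-value replaces the lookup table, so each edge costs O(log N)):

#include <iostream>
#include <vector>
#include <array>
#include <algorithm>
#include <set>
using namespace std;
// each event is {x, signed height}: negative = left edge, positive = right edge
vector<pair<int,int> > skyline(vector<array<int,3> > const & triplets){
  vector<pair<int,int> > events, out;
  for (auto const & b : triplets){
    events.push_back({b[0], -b[2]}); // at equal x, taller left edges sort first
    events.push_back({b[1],  b[2]}); // at equal x, shorter right edges sort first
  }
  sort(events.begin(), events.end());
  multiset<int> alive{0}; // heights of currently-alive buildings, plus ground level
  int prevTop = 0;
  for (auto const & [x, h] : events){
    if (h < 0) alive.insert(-h);      // left edge: building becomes alive
    else alive.erase(alive.find(h));  // right edge: remove exactly one instance
    int top = *alive.rbegin();        // tallest alive building
    if (top != prevTop){ out.push_back({x, top}); prevTop = top; }
  }
  return out;
}
int main(){
  vector<array<int,3> > b{{1,5,11},{2,7,6},{3,9,13}};
  for (auto const & [x, h] : skyline(b)) cout << '(' << x << ',' << h << ") ";
  cout << endl; // (1,11) (3,13) (9,0)
}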

locate a pair with targetDiff=55 #popular

http://www.geeksforgeeks.org/write-a-c-program-that-given-a-set-a-of-n-numbers-and-another-number-x-determines-whether-or-not-there-exist-two-elements-in-s-whose-sum-is-exactly-x/ has a nice algo for “sum=55”

This technique becomes more interesting when the problem is modified to “find a pair whose difference is 55”. Without loss of generality, let’s keep everything positive.

–my solution:

  1. sort the original integers
  2. duplicate the array, flip the signs and reverse it
  3. append the positive array to get a combined, sorted array, whose left half are all negative
  4. now start the 2 pointers, but each constrained within its half.
//Requirement: find all pairs in the array whose difference = D
//
//If there are negative numbers, then the simplest solution is
//to shift everything up to positive

#include <iostream>
#include <iomanip>   // setw
#include <vector>
#include <algorithm> // sort
using namespace std;

vector<int> v{3,1,1,7,4,9};
int const SZ = v.size();
int const D = 2;

void dump(){
  for(int i=0; i<v.size(); ++i) cout<<setw(3)<<v[i];
  cout<<endl;
  for(int i=0; i<v.size(); ++i) cout<<setw(3)<<i;
  cout<<endl;
}
int main(){
  sort(v.rbegin(), v.rend());
  for (int i=SZ-1; i>=0; --i) v.push_back(-1 * v[i]);
  dump();
  for(int left=0, right=v.size()-1; left < SZ && right>=SZ;){
        int diff = v[left] + v[right];
        if (diff == D) {
                cout<<v[left]<<" - "<<-1*v[right]<<endl;
                if (v[left+1] == v[left]) ++left;
                else --right;
        }
        else if (diff < D) --right;
        else ++left;
  }
}

locate a pair with targetSum=55 : pre-sorted #O(N) 1-pass

I came up with this idea within 5 sec — carefully adjust le and ri pointers from both ends towards center.

  • if arr[le] + arr[ri] is too big, then move ri inward (shrinking the sum)
  • else move le inward (growing the sum)
  • repeat until success or le+1==ri (see the sketch below)
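
A minimal C++ sketch (my own illustration; assumes pre-sorted input, and C++17 for the structured binding):

#include <iostream>
#include <vector>
using namespace std;
// returns the pair of indexes {le, ri}, or {-1, -1} if no pair sums to target
pair<int,int> pairWithSum(vector<int> const & a, int target){
  int le = 0, ri = (int)a.size() - 1;
  while (le < ri){
    int const sum = a[le] + a[ri];
    if (sum == target) return {le, ri};
    if (sum > target) --ri; // too big: shrink from the high end
    else ++le;              // too small: grow from the low end
  }
  return {-1, -1};
}
int main(){
  vector<int> a{1, 4, 9, 13, 42, 55}; // pre-sorted
  auto const [le, ri] = pairWithSum(a, 55);
  cout << le << ' ' << ri << endl; // 3 4, i.e. 13 + 42
}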

My friend Ashish was asked a related question — unsorted array .. must return original indices of the pair. It’s relatively easy to adjust my algo to meet this requirement.

  • before sorting, save the original indices in a multimap or map<val, set<index>>
  • after we found a pair, look up to get the indices.

3c++London bbg algo IV #all about array

Q: given a string of characters, find the longest “run” of any repeating character.

Q: given an array of integers, and a target, determine if there’s any pair that add up to the target. Return true/false.

Q: maximum profit problem. Given a time series of historical spot FX rates, find the maximum possible trading profit by exactly one buy one sell.
A: I have seen this many times. First write pseudo-code.

FB 2questions #regex..

—Q1: write a stateless utility function to take in any string consisting of a-z, and return a string with all the original characters appearing once only. All 2nd and subsequent occurrences should be removed. eg: abcbdc → abcd

They don’t mind c/c++/java, but I feel they prefer c-string.

I don’t want to new up a string in my utility, so I decided to take in an output char* buffer which should be large enough.

I guess this is all about robust and fast coding.
—Q2: Suppose there’s a regex grammar that can accept exactly 3 constructs
1) literals — any combination of abcdefghijkljmnopqrstuvwxyz,
2) dot (.) that matches any one character
3) asterisk — a standard quantifier

For example,

“a*” can match empty string, a single a, or aa, or aaa etc.
“.*” matches any string including the empty string.
“a*b*a” should match “a” and also match “aa”, or “ba”, or “aaa”

—-
Now, write a utility function bool match(char const * regex, char const * S);

Note entire string S must match entire regex. We don’t want to get a false positive when only a substring of S matches the regex.
—– some tips—–

  1. Easy part — if regex ends in literals, then they must “eat up” the tail of S exactly.
  2. After this step, if regex ends with a dot, it will just eat up the last char of S. After this step, regex definitely ends in a star.
  3. If 2nd char in regex is not a star, then first char must eat up head of S.
  4. Repeat previous step until we see star as 2nd char.
  5. Now we can basically split the regex into segments. Each segment is either a single literal, a single dot, or a non-star followed by a star.

Interviewer said — First assume dot is not allowed in regex. We will deal with the dot later.

—– analysis
EPI Question 6.23 is very similar to my Facebook interview. I told FB interviewer that we don’t NEED such problem solving at work. He said this is one method (standard method?) they assess a candidate’s coding abilities.

I feel it (and google questions) is like IQ test or probability test or abstract reasoning test — irrelevant to work, but used to assess learning capacity. They don’t want a dumb person on their team.

The truth is, practice is key. There are patterns. A very very smart person will still fail if without enough practice. A mediocre guy can speed up significantly if practised enough. I said the same thing to My Hwa Chong math teacher Tony Tan. This factor is also the reason I became so strong in physics.

Q: A simple regex (SRE) can use alphanumerics, dot and star. It is enclosed between implicit ^ and $. Write a match() function taking a SRE and a word. No white space allowed.

https://github.com/tiger40490/repo1/blob/py1/py/regexFB.py is my implementation. Here are a few test cases:

assert not match('', '.')
assert match('xyab-abc', '.*ab.*c.*x*')
assert match('xyab-abc', '.*ab.*c.*.*')
assert match('xyab-abc', '.*ab.*c')
assert match('aaaaaaab-abc', 'a*aab.*c')
assert match('aab-abc', 'a*aab.*c')
assert match('aabc', 'a*aab.*..*x*')
assert not match('abc', 'a*aab.*c')
assert match('a-aab', 'a*.*a*aab')
assert not match('a-aab', 'a*a*aab')

gym scheduler #greedy#Junli

I find this algorithm extremely simple but not easy to recognize. The extreme simplicity indicates some important/valuable insight, but I'm not sure what.

Write a scheduler for a single gym equipment(like a room). Every morning at 8am before start of day, we receive a batch file containing all booking requests for that day. We run the scheduler once, and all accepted bookings are confirmed and finalized. Each request is identified by a pair of start/end times on the current day. No 2 requests share the same start/end times (perhaps because the duplicates are already filtered out). Exercise duration is in blocks of 1 minute. We earn $1 rental for each session regardless of duration.  Optimization goal is to allocate as many sessions as possible. We don’t care idle time; it doesn’t cost us. Let’s say the candidate for the “optimal” allocation plan is PlanK.
—–solution——
Pick the booking that ends earliest. Alternatively, pick the booking that starts later than all others.
(I feel this is not a dynamic programming algorithm, but a greedy algorithm.)

If there’s a last-minute user (4.59pm – 5pm, let’s call it AA), then it’s always good to accept. This last minute user AA doesn’t block others. AA is a minimum-hindrance session.

If PlanK ends with any other session (call it BB), BB surely starts before 4.59pm. BB is a bigger hindrance than AA is. If we replace BB with the last-minute user AA, we would produce a “safer” plan – PlanL. PlanL is at least as optimal as PlanK, and can potentially /accommodate/ additional sessions. Therefore PlanK without AA isn’t the optimal. The optimal plan ought to include the last-minute allocation.

That’s the key to the optimization. The last-to-begin session has minimum hindrance and should always be selected. After that selection, eliminate all requests that are now hindered. Then among the remaining  requests, look for the last-to-begin.

———-
Exploiting symmetry, you can reverse the algorithm by picking the first-to-end session.
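
A minimal C++ sketch of the first-to-end variant (my own illustration; assumes C++17):

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
// earliest-finish-time greedy, the mirror image of the last-to-begin argument
int maxSessions(vector<pair<int,int> > requests){ // each {start, end} in minutes
  sort(requests.begin(), requests.end(),
       [](auto const & a, auto const & b){ return a.second < b.second; });
  int count = 0, freeAt = 0;
  for (auto const & [start, fin] : requests)
    if (start >= freeAt){ ++count; freeAt = fin; } // accept; hindered requests get skipped
  return count;
}
int main(){
  vector<pair<int,int> > r{{540,600},{590,660},{600,630},{640,700}};
  cout << maxSessions(r) << endl; // 3: accept 540-600, 600-630, 640-700
}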

c# cod`IV – struct/disconnected

–req#0–
O(1): Insert(Employee e)
O(1): Delete(int id)
O(1): LookupById(int id)
O(1): LookupByEmail(string email)
–req#1–
Must not corrupt:
EmployeeLookup el = new EmployeeLookup();
Employee e = new Employee(1, "one@example.com");
el.Insert(e);
e.Id = 5;
MUST BE TRUE: el.LookupById(1).Id == 1
MUST BE FALSE: e.Id == el.LookupById(1).Id
e = el.LookupById(1);
e.Id = 10;
MUST BE TRUE: el.LookupById(1).Id == 1
MUST BE FALSE: e.Id == el.LookupById(1).Id
—– my solution:
public struct Employee {
    public int Id;
    public string Email;
    public Employee(int id, string email) { Id = id; Email = email; }
}
public class EL { //EmployeeLookup; Employee being a struct gives copy/value semantics
    Dictionary<int, Employee> d1 = new Dictionary<int, Employee>();
    Dictionary<string, Employee> d2 = new Dictionary<string, Employee>();
    //d1 add and d2 add should both fail or both succeed
    public bool Insert(Employee emp) { //insert into d1/d2
        if (!d1.TryAdd(emp.Id, emp)) return false;
        if (!d2.TryAdd(emp.Email, emp)) {
            d1.Remove(emp.Id); //rollback
            return false;
        }
        return true;
    }
    public Employee LookupById(int id) { //returns a struct, i.e. a copy
        Employee found;
        if (!d1.TryGetValue(id, out found)) {
            //log and throw....
        }
        return found; //a copy, so the caller can never corrupt the stored value
    }
}

[12] locate]sentence all permutations@99words #dragonSearch

https://leetcode.com/problems/substring-with-concatenation-of-all-words/description/ is very similar

Q1: you are given a list of (possibly repeating) words denoted L that are all the same length , and a long string denoted S. Find a substring of S that is a concatenation of each word in the list exactly once and without any intervening characters. Such a substring is known as a dragon.

To simplify the problem, you can assume it occurs at least once in the sentence.

Example:.
L: “fooo”, “barr”, “wing”, “ding”, “wing”.
S: “lingmindraboofooowingdingbarrwingmonkeypoundcake”.
Answer — fooowingdingbarrwing.

For O(), let’s assume there are H characters in haystack, W characters in each word, and N words (possibly repeating)

Q1b: what if word length is not uniform?

Solution-A — See https://github.com/tiger40490/repo1/blob/py1/py/str/dragonSearch.py with O(HN)

I hope this can be adapted for Q1b — see source comments

Solution-B1 (Brute force) — For each permutation, scan by indexOf() or strstr() or string::find(). Guaranteed to find a solution in finite time. Doable if N is small, since there are O(N!) permutations.

Solution-B2 (Brute force) — WN-long sliding window over haystack. Each window is examined to determine if it’s a dragon.

This is reasonable if H is small relative to N.

Solution-C — a potentially faster solution if the list is large — Permutation of 5 would be 120 😦

Phase 1: For each word in the list, i’d count how many times it occurs. The word with lowest occurrence would be a “candidate”.

If every word has the same occurrence (say 3), I think this is still faster than Solution-B, because all but one of the occurrences is a false positive. It’s relatively easy to screen them out.

As soon as I find a word with occurrence=1 exactly, I exit Phase 1 with a super candidate. For this discussion, I assume we have a super candidate.

Update — In such a case, I can also use the Solution A approach. Even if the best candidate has occurrence = 2 I can consider this approach…

Phase 2a: remove the super candidate from the list. Blindly read the next 4 character in S after our super candidate. See if any word starts with that…..

Phase 2b: do the same leftward. Perhaps by reversing all the strings.

[12]locate]long sentence a permutation of 3 words #src

/* Requirement: you are given a list of words denoted L that are all the same length , 
and a long string denoted S. Find a substring of S that is a concatenation of each 
word in the list exactly once and without any intervening characters. 
This substring is guaranteed to occur exactly once in S.

Showcase how to populate a map containing list,
* without excessive list cloning, and
* without explicit new
Showcase auto keyword
Showcase unordered_multiset
Showcase rbegin()
*/
#include <iostream>
#include <vector>
#include <list>
#include <unordered_map>
#include <unordered_set>
#include <memory> // make_shared
#include <string>
using namespace std;

string const S = "barrfooolingmindraboofooowingdingbarrwingmonkeypounddingdingbarrwingfooocake";
vector<string> L{"fooo", "barr", "wing", "ding", "wing"};
unordered_map<string, list<size_t> > lookup; //list of positions where the word appears
unordered_multiset<string> pool; //global variable

template<typename T> ostream & operator<<(ostream & os, list<T> const & l){
  for(auto it = l.begin(); it != l.end(); ++it) os<<*it<<" ";
  os<<endl;
  return os;
}

shared_ptr<list<size_t> > countOccurrence(string const & word){
  shared_ptr<list<size_t> > ret= make_shared<list<size_t> >();

  for(size_t pos=0, hit=0;;pos=hit+4){
        hit = S.find(word, pos);
        if (hit == string::npos) break;
        ret->push_back(hit);
  }
  return ret;
}
int step( bool isForward){
        return isForward? 4 : -4;
}
char search1way(unsigned int pos2try, bool isForward){
    for(int marker = pos2try+step(isForward); ; marker+=step(isForward)){
        string const & substr4 = S.substr(marker,4);
        //cout<<"substr4 = "<<substr4<<endl;

        //now lets see if the substr is one of our words
        auto itr = pool.find(substr4);
        if (pool.end() == itr) break;
        pool.erase(itr);
        cout<<"found   "<<substr4<<"  at position "<<marker<<". words remain unfound = "<<pool.size()<<endl;
        if (pool.empty())  return 0;
    }
    return 'r' ; //has words remaining in pool;
}
int main(){
  pair<string, int> winner = make_pair("", 9999);
  for(int i=L.size()-1; i>=0; --i){
        string const & word = L[i];
        if (lookup.count(word)) continue;
        shared_ptr<list<size_t> > occ = countOccurrence(word);
        lookup.insert(make_pair(word, *occ));
        if (occ->size() < winner.second) winner = make_pair(word, occ->size());
        cout<<word<<" maps to "<<lookup[word];
  }
  cout<<winner.first<<" has the lowest occurrence = "<<winner.second<<endl;

  list<size_t> const & winnerPos = lookup[winner.first];
  for(auto it = winnerPos.rbegin(); it!=winnerPos.rend(); ++it){
    int pos2try=*it;
    pool = unordered_multiset<string>(L.begin(), L.end());
    pool.erase(pool.find(winner.first)); //guaranteed
    cout<<"trying  "<<winner.first<<"  at position "<<pos2try<<" as the anchor word. Words remain unfound = "<<pool.size()<<endl;
    if (0 == search1way(pos2try, true)) return 0;
    cout<<"searching backward..\n";
    if (0 == search1way(pos2try, false)) return 0;
    cout<<pos2try<<" trial failed :(\n\n";
  }
}

spreadsheet concretization algorithm #Junli

Q: A spreadsheet consists of a two-dimensional array of cells, labelled A1, A2, etc. Rows are identified using letters, columns by numbers. Each cell contains either a numeric value or an expression. Expressions contain numbers, cell references, and any combination of ‘+’, ‘-‘, ‘*’, ‘/’ (4 basic operators). Task: Compute all RPN expressions, or point out one of the Cyclic dependency paths.

——— Here is “our” design ———-
I feel it’s unwise to start by detecting circles. Better concretize as many cells as possible in Phase 1.

* First pass — construct all the RPN objects and corresponding cell objects. An RPN holds all the concrete or symbolic tokens. A cell has an rpn and also a cell name. If a cell is completely concrete, then calculate the result, and add the cell to a FIFO queue.

Also construct a p2d i.e. precedent2dependent map of <Name, Set<Name>>: a look-up from each precedent cell's Name to the set of its dependents' Names. This will help us fire update events. If you wonder why we use Name: in this context, a name is a unique identifier for a cell. I use a simple hash map.

* 2nd pass — process the queue of concrete cells. For every cell removed from the queue, get its name and concrete value into a pair (call it ppair since it’s a Precedent). Look up p2d to get all dependents. Fire update events by feeding the ppair to each dependent cell, which will use the ppair to concretize (part of) its expression. If any dependent cell gets fully concretized, add it to the queue.

Remember to remove the ppair cell from p2d.

Only 2 data structures needed — queue and p2d.

* Phase 1 over when queue depletes. If p2d is still non-empty, we have cyclic dependency (Phase 2). All concrete cells have been “applied” on the spreadsheet yet some cells still reference other cells.

All remaining cells are guaranteed to be involved in some circle(s). To print out one circle, just start from any cell and follow first link and you are bound to hit the starting point.
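
A skeletal C++ sketch of the two passes (my own illustration with hypothetical cells B1=A1+1 and C1=B1+A1; a plain count of unresolved references stands in for the RPN concretization):

#include <iostream>
#include <string>
#include <queue>
#include <unordered_map>
#include <unordered_set>
using namespace std;
int main(){
  unordered_map<string, unordered_set<string> > p2d{
    {"A1", {"B1", "C1"}}, {"B1", {"C1"}} };
  unordered_map<string, int> unresolved{ {"A1", 0}, {"B1", 1}, {"C1", 2} };
  queue<string> concrete; // FIFO queue of fully-concretized cells
  for (auto const & kv : unresolved)
    if (kv.second == 0) concrete.push(kv.first);
  size_t done = 0;
  while (!concrete.empty()){
    string cell = concrete.front(); concrete.pop();
    cout << "concretized " << cell << endl;
    ++done;
    for (auto const & dep : p2d[cell])          // fire update events
      if (--unresolved[dep] == 0) concrete.push(dep);
  }
  if (done < unresolved.size()) cout << "cyclic dependency detected\n";
}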

c++ Push all zeros in array to the end #easy

Basic programming exercise

#include <iostream>
#include <iterator>
#include <algorithm> // copy

using namespace std;
int a[] = {0,0,-1,2,0,4,0,0,8,0};
/*
Push all the zeros of a given array to the end of the array, in place only,
keeping the order of the non-zeros. Ex 1,2,0,4,0,0,8 becomes 1,2,4,8,0,0,0
*/
int main(int argc, char *argv[])
{
   size_t size = sizeof(a)/sizeof(a[0]);
   copy(a, a+size, ostream_iterator<int>(cout, " "));
   // stable compaction: copy every non-zero forward, then zero-fill the tail
   size_t writePos = 0;
   for (size_t readPos = 0; readPos < size; ++readPos)
       if (a[readPos] != 0) a[writePos++] = a[readPos];
   while (writePos < size) a[writePos++] = 0;
   cout<<"\n new: \n";
   copy(a, a+size, ostream_iterator<int>(cout, " "));
   cin.get();
}

mode finder algo #java boxing

“We have a set of integers, the size is 1 mill. The value of
those integers can go as large as 1 trillion. So the set might look
like [1,3, 3, 100, 23, 100, 100, 89,999999, 89, 100000000, 10000000,
10000000000000, 39, 3, 23, 1,1,1,1,1,1000, 10000000000000….], now we
need to find out the individual single integer which is mostly
repeated. Say if integer 3 repeated 1000 times, no other integers have
dups as many as it does, integer 3 will be the one we are looking for.”

I did a Java solution using HashMap. Now I feel auto-boxing is a growing performance drag as volume increases. Whenever you compute the hashcode of an Integer key, you must box it up into a heap object. Counting up also requires unboxing then boxing. C++ would use plain integers.

A Java solution can avoid auto-boxing completely. Build a custom HMap class with an array of buckets, each containing a pointer to a custom LList of Pair objects. Each Pair instance consists of 2 primitive integers + pointer to the “next” Pair object.

How about a sparse array?

island rainfall problem

#include <cstdio>
#include <cassert>
#include <iostream>
#include <algorithm> // max_element, swap
using namespace std;
int const island[] = { 54, 50, 54, 54, 52, 55, 51, 59, 50, 56, 52, 50 };
///////////////   Pos # 0   1   2   3   4   5   6   7   8   9  10  11
int const size = sizeof(island) / sizeof(int);
int accu = 0;
//adapted from STL
template <typename ForwardIterator>
ForwardIterator max_element_last(ForwardIterator scanner, ForwardIterator const end) {
  ForwardIterator ret = scanner;
  if (scanner == end)
    return ret; //empty range, with zero elements!
  while (++scanner != end)
    if (*ret <= *scanner) //"<=" means find LAST
      ret = scanner;
  return ret;
}
//print height and position of a column
void print1(int const* const pos, char const * const label) {
  printf("%s=%d/%d ", label, *pos, (int)(pos - island));
}
void printAll(int const* const L, int const* const l, int const* const h,
              int const* const H) {
  if (l < h) {
    print1(L, "wallL");
    print1(l, "ptr");
    printf("  ");
    print1(h, "ptr");
    print1(H, "wallH");
  } else {
    print1(H, "wallH");
    print1(h, "ptr");
    printf("  ");
    print1(l, "ptr");
    print1(L, "wallL");
  }
  printf("%d=Accu\n", accu);
}
//Rule: move the lo-side pointer only
void onePassAlgo(){
  int *loptr; //moving pointer, moving inward
  int *wallLo, *wallHi; //latest walls
  int *h;

  //1st we ASSUME the first left-side wall will be lower than the first right-side wall
  wallLo = loptr = const_cast<int*>(island);
  wallHi = h = const_cast<int*>(island) + size - 1;
  //2nd, we validate that assumption
  if (*wallLo > *wallHi) {
    std::swap(wallLo, wallHi);
    std::swap(loptr, h);
  }
  // now lo is confirmed lower than the hi side
  printAll(wallLo, loptr, h, wallHi);
  printf("All pointers initialized (incl. 2 walls)\n");
  while (loptr != h) {
    if (*loptr > *wallHi) {
      wallLo = wallHi;
      wallHi = loptr;
      std::swap(loptr, h);
      //printf("new wallHi:");
    } else if (*loptr >= *wallLo) { //see the >=
      wallLo = loptr;
      //printf("wallLo updated:");
    } else {
      assert(*loptr < *wallLo);
      accu += (*wallLo - *loptr);
      printf("adding %d liter of water at Pos_%d (%d=Accu)\n", *wallLo - *loptr,
             (int)(loptr - island), accu);
    }
    printAll(wallLo, loptr, h, wallHi);
    // only by moving the loptr (not h) can we confidently accumulate water
    if (loptr < h)
      ++loptr; //lo side is on the left, move loptr right
    else
      --loptr; //lo side is on the right, move loptr left
  }
}
void twoPassAlgo() { //less convoluted
  int const* const peak = max_element_last(island, island + size);
  printf("highest peak (last if multiple) is %d, at Pos %d\n", *peak,
         (int)(peak - island));

  //forward scan towards peak
  int const* pos = island; //left edge of island
  int const* wall = pos;
  for (++pos; pos < peak; ++pos) {
    if (*wall > *pos) {
      accu += *wall - *pos; // accumulate water
      printf("adding %d liter of water at Pos#%d (T=%d)\n", *wall - *pos,
             (int)(pos - island), accu);
      continue;
    }
    //ALL new walls must match or exceed the previous wall.
    printf("found new wall of %d^ at Pos#%d\n", *pos, (int)(pos - island));
    wall = pos;
  }
  cout << "^^^ end of fwd scan ; beginning backward scan vvv\n";
  //backward scan
  pos = island + size - 1;
  wall = pos;
  for (--pos; pos > peak; --pos) {
    if (*wall > *pos) {
      accu += *wall - *pos; // accumulate water
      printf("adding %d liter of water at Pos#%d (T=%d)\n", *wall - *pos,
             (int)(pos - island), accu);
      continue;
    }
    //Note all new walls must match or exceed the previous wall.
    printf("found new wall of %d^ at Pos#%d\n", *pos, (int)(pos - island));
    wall = pos;
  }
}
int main(int argc, char *argv[]) {
  twoPassAlgo();
  accu = 0;
  cout << "-----------------------------\n";
  onePassAlgo();
}
/*
 Requirement -- a one-dimensional island is completely covered with columns of bricks.
 If between Column A (height 9) and Column B (height 10) all columns are lower, then
 we get a basin to collect rainfall. Watermark height (absolute) will be 9. We can
 easily calculate the amount of water. If I give you all the column heights, give me
 the total rainfall collected.
 Code showcasing
 - stl algo over raw array
 - array/pointer manipulation
 - array initialization
 - array size detection
 - std::max_element modified
 - std::swap
 */

##classic algos using 2 cooperating data structures

* LRU cache — i.e. least-recently-used cache. requires a hash table + some kind of sequence container

Hash table is the only choice to provide constant-time lookup.

To maintain the order, we need some kind of sequence container.

* non-recursively print a directory tree breadth-first (then again depth-first). P171 [[c#precisely]] has a concise 10-line solution.

You need a queue (for breadth-first) or stack (for depth-first) to hold the nodes to print

You need a “seen” set to remember which nodes are already printed.

* tree-print is the “most-interviewed” example of recursion-to-iteration problem. All such problems require at least 2 data structures, at least one of them a stack

* implement a queue using 2 stacks.

implement a thread-safe queue using 2 stacks

Your idea is basically

#include <stack>
#include <stdexcept>
#include <mutex>

template <typename Element>
class QueueFrom2Stacks {
    std::stack<Element> stack1, stack2;
    std::mutex mtx; // one lock guarding both stacks
public:
    void enqueue(Element const & e){
        std::lock_guard<std::mutex> g(mtx);
        stack1.push(e);
    }
    Element dequeue(){
        std::lock_guard<std::mutex> g(mtx);
        if (stack2.empty()) transferFromStack1to2();
        if (stack2.empty())
            throw std::runtime_error("QUEUE_EMPTY"); // or return a special value
        Element ret = stack2.top();
        stack2.pop();
        return ret;
    }
private:
    void transferFromStack1to2(){ // caller already holds the lock
        while (!stack1.empty()){
            stack2.push(stack1.top());
            stack1.pop();
        }
    }
};

We can improve performance a bit with a proactive transfer — as soon as stack2 becomes empty. However, this must be done on another thread.

slist: remove consecutive dupes #Macq

#include
#include
struct litem {
     char data;
     litem* next;
};
void myDelete(litem * i){
  delete i;
}
int remove_consecutive_duplicates( litem*& list ){
  vector trash; // collection of bad nodes' address, to be DELETE'd
  litem* keep = list; // this pointer only points to “good” nodes to keep
  for (litem* p = keep->next; p;  p=p->next ){
    if (keep->data != p->data) {
      keep->next=p; // often unnecessary, but very cheap
      keep=p;
      continue;
    }
    trash.push_back(p);
  }
  keep->next = 0; // keep is now the last good node. It may or may not point to null. This  is cheap insurance.
  int ret = (int) trash.size();
  std::for_each(trash.begin(), trash.end(), myDelete);
  // now, trash.size() == ret probably, as trash holds a bunch of stray pointers, but just in case size changes in the foreach loop, we save the original size in advance.
  return ret;
}
void dump(litem*& list) {
    for (litem* p = list; p; p = p->next) {
      cout << p->data << " > ";
    }
    cout << endl;
}
int main23(){
   litem* a = new litem();
   a->data = 'a';
   litem* a2 = new litem();
   a2->data = 'a';
   litem* a3 = new litem();
   a3->data = 'a';

   litem* b = new litem();
   b->data = 'b';
   litem* b2 = new litem();
   b2->data = 'b';
   litem* b3 = new litem();
   b3->data = 'b';
   litem* b4 = new litem();
   b4->data = 'b';

   litem* c = new litem();
   c->data = 'c';
   litem* c2 = new litem();
   c2->data = 'c';
   litem* c3 = new litem(); // created but never linked into the list
   c3->data = 'c';
  
   a->next = b;
   b->next = c;
   c->next = c2;
   c2->next = a2;
   a2->next = b2;
   b2->next = b3;
   b3->next=b4;
   b4->next=a3;
   dump(a);
   remove_consecutive_duplicates(a);
   dump(a);
   cin.get();
}

recursive -> iterative — level 2

In this example, the first call's job needs the 2nd call's result. We build up another stack while depleting the args stack. This emulates the recursive descent->ascent.

 static List<Character> elements = new ArrayList<Character>();
 static {
  elements.add('a');
  elements.add('b');
  elements.add('c');
  elements.add('d');
 }
 static List<String> combinations = new LinkedList<String>();
 static LinkedList<List<Character>> stack1 = new LinkedList<List<Character>>();
 static LinkedList<Character> stack2 = new LinkedList<Character>();

static void iterativePrintCombinations() {
  stack1.add(elements);
  while (!stack1.isEmpty()) {
    List<Character> chars = stack1.removeLast();
    char firstChar = chars.get(0);

    System.out.println("stack2 = " + stack2);

    if (chars.size() == 1) {
      combinations.add("");
      combinations.add("" + firstChar);
      continue;
    }
    stack2.add(firstChar);
    stack1.add(chars.subList(1, chars.size()));
  }

  while (!stack2.isEmpty()) {
    append1CharToEveryString(stack2.removeLast());
    System.out.println(combinations);
  }
}

private static void append1CharToEveryString(char firstChar) {
  List<String> tmp = new LinkedList<String>();
  for (String s : combinations) {
    tmp.add(s + firstChar);
  }
  combinations.addAll(tmp);
}

static void getCombinations(List<Character> letters) {
  if (letters.size() == 1) {
    combinations.add("");
    combinations.add("" + letters.get(0));
    return;
  }
  getCombinations(letters.subList(1, letters.size()));

  // here we absolutely need the nested call to complete before we proceed
  List<String> tmp = new LinkedList<String>();
  for (String s : combinations) {
    tmp.add(s + letters.get(0));
  }
  combinations.addAll(tmp);

  System.out.println(combinations);
}



			

data structure to hold spreadsheet content

Q: Assume string content in each cell. As a start let’s target 1,000,000 rows by 1,000,000 columns. Goals
– memory
– random access
– growable

%%A: Intermediate solution (good habit): a 3-column DB table with a composite key (row,col). Now how do we represent this in a data structure? First attempt: a hashmap whose key is a customized big integer, where the higher 64 bits hold the row number and the lower 64 bits the col number. Value is a reference with copy-on-write. Hash access with a given (row,col) is considered random access at O(1).

Now I think a second try would be a hashmap where

main key -> row number
value -> sub-hashmap, where
sub-key -> col number
sub-value -> cell content

Its twin hashmap keeps col numbers as main keys. Every update to the spreadsheet requires updating both.
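A bare-bones sketch of the row-keyed half (names are illustrative; the column-keyed twin, kept in sync on every update, is omitted). Only non-empty cells consume memory, so 1,000,000 x 1,000,000 is fine as long as the sheet is sparse:

#include <string>
#include <unordered_map>

class Spreadsheet {
  // outer map keyed by row, inner map keyed by column; O(1) average on both levels
  std::unordered_map<long, std::unordered_map<long, std::string>> rows;
public:
  void set(long row, long col, std::string const& content) {
    rows[row][col] = content; // grows on demand
  }
  std::string get(long row, long col) const {
    auto r = rows.find(row);
    if (r == rows.end()) return "";
    auto c = r->second.find(col);
    return c == r->second.end() ? "" : c->second;
  }
};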

estimate cubic root (IV)

Q: Given any double, how do you create an algorithm to estimate its /cube root/ to 3 decimal places?
%%A: first multiply it by 8 (i.e. 2^3) repeatedly until we approach the limit of a 64-bit long integer; each such multiply scales the cube root by exactly 2, so we can divide the result back down at the end. We can safely drop the decimal part since it's guaranteed to be negligible. Now we have a large (at least 61-bit) INTEGER to work with, which is easier than a float/double.

Original question was designed to ask: how would you as a human estimate that, without a computer.
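A sketch of the same scale-to-integer idea, using a single multiply by 1000^3 in place of the repeated multiply-by-8, so the integer answer carries the 3 decimal places directly. All names here are mine:

#include <cstdio>

// returns floor(cbrt(scaled)), by binary search on integers only
long long intCubeRoot(long long scaled) {
  long long lo = 0, hi = 2000000; // 2e6 cubed = 8e18, near the 64-bit limit
  while (lo < hi) {
    long long mid = (lo + hi + 1) / 2;
    if (mid * mid * mid <= scaled) lo = mid;
    else hi = mid - 1;
  }
  return lo;
}

int main() {
  double x = 2.0;                          // assumes x * 1e9 fits a long long (x below ~8e9)
  long long scaled = (long long)(x * 1e9); // multiply by 1000^3: cube root scales by 1000
  long long r = intCubeRoot(scaled);       // = cbrt(x) * 1000, truncated
  printf("cbrt(%g) ~ %lld.%03lld\n", x, r / 1000, r % 1000); // prints cbrt(2) ~ 1.259
  return 0;
}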

recursion – whiteboard cod`IV tips

  • spend some time writing down the external interface of the recursive func — exactly what to print/return, and what global variables to update.
  • define end-of-recursion condition explicitly. Interviewers want to see that.
  • list all important local variables vs global variables on the side.
  • Some variables must be local — on the stack. All other variables had better be global variables.
    • sometimes local variables are simpler — throw-away temp
    • sometimes global variables are simpler — fewer stateful objects.
    • Simplifies the visualization. Any variable passed into a recursive call is additional state on every stack frame and makes the logic harder to reason about; this gets more serious when there are many such local states to keep in mind. (See the sketch after this list.)
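To make the tips concrete, a tiny made-up example (print all subsets of a set), with the external interface, end-of-recursion condition, and the local-vs-global split called out in comments:

#include <iostream>
#include <string>
#include <vector>

std::vector<char> elements{'a', 'b', 'c'}; // global: read-only input
std::string prefix;                        // global: shared mutable state, fewer stack params

// external interface: prints every subset whose undecided suffix starts at `idx`;
// `idx` must be local (stack) state -- it differs on every frame
void subsets(size_t idx) {
  if (idx == elements.size()) { // explicit end-of-recursion condition
    std::cout << '{' << prefix << "}\n";
    return;
  }
  subsets(idx + 1);                 // exclude elements[idx]
  prefix.push_back(elements[idx]);
  subsets(idx + 1);                 // include elements[idx]
  prefix.pop_back();                // restore global state before ascending
}

int main() { subsets(0); } // prints all 8 subsets of {a,b,c}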

bitwise operators, my summary

1 XOR an unknown bit among neighboring bits => TOGGLE the bit

1 AND an unknown bit among neighboring bits => SELECT the bit to examine.

0 AND an unknown bit among neighboring bits => RESET the bit to 0 and ignore it among the neighboring bits

1 OR an unknown bit among neighboring bits => SET the bit to 1

To test equality of 2 “trains” of bits, xor them. 0 means equal.

If you are looking for a one-phrase intro to XOR, consider

) TOGGLE ie toggle the selected bits
) DIFFERENCE ie difference gate

If you apply a toggler twice, you get the original.

If you apply an all-1 toggler, you get the “bitwise NOT” also known as “one’s complement”
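The rules above as a self-checking demo (C++14 binary literals; each assert documents the expected result):

#include <cassert>
#include <cstdint>

int main() {
  uint8_t x = 0b10110010;
  assert((x ^ 0b00000100) == 0b10110110); // XOR with 1: TOGGLE bit2
  assert((x & 0b00000010) != 0);          // AND with 1: SELECT bit1 to examine
  assert((x & 0b11111101) == 0b10110000); // AND with 0: RESET bit1, ignore it
  assert((x | 0b00001000) == 0b10111010); // OR with 1: SET bit3
  assert((x ^ x) == 0);                   // XOR as equality test: 0 means equal
  uint8_t toggler = 0b00000100;
  assert(((x ^ toggler) ^ toggler) == x); // applying a toggler twice restores the original
  assert((uint8_t)(x ^ 0xFF) == (uint8_t)~x); // all-1 toggler == bitwise NOT (one's complement)
  return 0;
}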