find any black-corner subMatrix #52%

https://www.geeksforgeeks.org/find-rectangle-binary-matrix-corners-1/

Q: given a black/white matrix, find any rectangle whose four corners are all black.
Q2: list all of them
Q3 (google): find the largest

— idea 2: record all black cell locations and look out for 3-corner groups

a Rec class with {northwest corner, northeast corner, southwest corner}

First pass: for each pair of black cells on the same row, create a pair object and save it in a hashmap {northwest cell -> list of x-coordinates to its right}. We will iterate the list later on.

2nd pass: scan each column. For each pair on the same col, say cell A and C, use A to look up the list of northeast corners in the hashmap. Each northeast corner B would give us a 3-corner group. For every 3-corner group, check if the fourth corner exists.

— idea 1: row by row scan.
R rows and C columns. Assuming C < R i.e. slender matrix

For the first row, record each ascending pair [A.x, B.x] (no need to save the y coordinate) in a big hashset. If there are S black cells on this row, then O(SS).

In the next row, for each new pair, probe the hashset. If hit, then we have a rectangle; otherwise add the pair to the hashset. If there are T black cells on this row, then O(TT) probes and (if no lucky break) O(TT) inserts.

Note one single hashset is enough. Any pair matching an earlier pair identifies a rectangle. The matching would never occur on the same row 🙂 Optionally, we can associate a y-coordinate to each record, to enable easy calculation of area.

After all rows are processed, if no rectangle, we have O(SS+TT+…). Worst case the hashset can hold C(C-1)/2 pairs, so we are bound by O(CC). We also need to visit every cell, in O(CR)

If C > R, then we should process column-by-column, rather than row-by-row

Therefore, we are bound by O( min(CC,RR) + CR). Now min(CC,RR) < CR, so we are bound by O(CR) .. the theoretical limit.
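Here is a minimal python sketch of idea 1 (function and variable names are mine); for a slender matrix, transpose first so the smaller dimension supplies the column-pairs:

def find_black_rectangle(matrix):
    seen = set()                  # column-pairs seen on earlier rows
    for y, row in enumerate(matrix):
        blacks = [x for x, cell in enumerate(row) if cell]
        for i in range(len(blacks)):
            for j in range(i + 1, len(blacks)):
                pair = (blacks[i], blacks[j])
                if pair in seen:  # 2 corners here + 2 on an earlier row
                    return pair   # the two column IDs of a rectangle
                seen.add(pair)
    return None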

— idea 4: for each diagonal pair found, probe the other corners
If there are H black cells, we have up to HH pairs to check 😦 In each check, the success rate is too low.

— Idea 5: Brute force — typewriter-scan. For each black cell, treat it as a top-left corner and look rightward for a top-right corner. If found, then scan down for a pair of bottom-left/bottom-right corners. If none, then finish the given row and discard the row.

For a space/time trade-off,

  • once I find a black cell on current row, I put it in a “horizontal” vector.

anagramIndexOf(): frqTable+slidingWindow

Q: Similar to java indexOf(string pattern, string haystack), determine the earliest index where a permutation of pattern starts.

====analysis

https://github.com/tiger40490/repo1/blob/py1/py/algo_str/anagramIndexOf.py is my tested solution featuring

  • O(H+P) where H and P are the lengths
  • elegant sliding window with frequency

Aha — worst input involves only 2 letters in haystack and pattern. I used to waste time on the “average” input.

On the spot I had no idea, so I told the interviewer (pregnant) about the brute force solution: generate all P! permutations of the pattern and look for each one in the haystack.

Insight into the structure of the problem — the pattern can be represented by a frequency table, therefore, this string search is easier than regex!

Then I came up with a frequency table constructed for the pattern. Then I explained checking each position in the haystack. The interviewer was fine with O(H*P) so I implemented it fully, with only 5 minutes left. My implementation was presumably slower than other candidates’.

A few hours later, I realized there’s an obvious linear-time sliding window solution, but it would have taken more than the available time to implement. Interviewer didn’t hint at all there was a linear time solution.
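For the record, a sliding-window sketch along the lines of that linear solution (not my tested github version; note the Counter comparison adds an alphabet-size factor per slide):

from collections import Counter

def anagram_index_of(pattern, haystack):
    P, H = len(pattern), len(haystack)
    if P > H: return -1
    need = Counter(pattern)            # frequency table of the pattern
    window = Counter(haystack[:P])     # frequency table of first window
    if window == need: return 0
    for i in range(P, H):              # slide right edge to i
        window[haystack[i]] += 1
        left = haystack[i - P]
        window[left] -= 1
        if window[left] == 0:
            del window[left]           # keep the two Counters comparable
        if window == need: return i - P + 1
    return -1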

— difficulty and intensity

I remember feeling extremely intense, under extreme pressure after the interview, because of the initial panic. On the spot I did my best. I improved over the brute force solution after calming down.

Many string search problems require DP or recursion-in-loop, and would be too challenging for me. This problem was not obvious and not easy for me, so I did my best.

I didn’t know how to deep clone a dict. The Indeed.com interviewers would probably consider that a serious weakness.

TreeNode lock/unlocking #Rahul

Q: given a static binary tree, provide O(H) implementation of lock/unlocking subject to one rule — Before committing locking/unlocking on any node AA, we must check that all of AA’s subtree nodes are currently in the “free” state, i.e. already unlocked. Failing the check, we must abort the operation.

H:= tree height

====analysis

— my O(H) solution, applicable on any k-ary tree.

Initial tree must be fully unlocked. If not, we need pre-processing.

Each node will hold a private hashset of “locked descendants”. Every time after I lock a node AA, I will add AA into the hashset of AA’s parent, AA’s grandparent, AA’s great-grandparent etc. Every time after I unlock AA, I will do the reverse i.e. removal.

I said “AFTER locking/unlocking” because there’s a validation routine

bool canChange(node* AA){ return AA->lockedDescendants.empty(); } // to be used before locking/unlocking.

Note this design requires uplink. If no uplink available, then we can run a pre-processing routine to populate an uplink lookup hash table { node -> parent }

— simplified solution: Instead of a hashset, we may make do with a count, but the hashset provides useful info.
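A python sketch of the design (names mine), assuming uplinks are present:

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.locked = False
        self.locked_descendants = set()   # the per-node hashset

def can_change(node):                     # the validation routine
    return not node.locked_descendants

def lock(node):
    if node.locked or not can_change(node): return False
    node.locked = True
    anc = node.parent                     # O(H) walk up the ancestors
    while anc:
        anc.locked_descendants.add(node)
        anc = anc.parent
    return True

def unlock(node):
    if not node.locked or not can_change(node): return False
    node.locked = False
    anc = node.parent                     # the reverse i.e. removal
    while anc:
        anc.locked_descendants.discard(node)
        anc = anc.parent
    return True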

 

longest run@same char,allow`K replacements #70%

https://leetcode.com/problems/longest-repeating-character-replacement/

Q: Given a string s that consists of only uppercase English letters, you can perform at most k operations on that string. In one operation, you can choose any character of the string and change it to any other character. Find the length of the longest sub-string containing all repeating letters you can get after performing the above operations.

Here’s my description of the same problem:

Q: Suppose we stand by a highway and watch cars of each color. Only 26 possible colors. Cars pass fast, so sometimes we miscount.

My son says “I saw 11 red cars in a row in the fast lane”.
My daughter says “I saw 22 blue cars in a row in the middle lane”
We allow kids to miss up to 3 cars in their answer. In other words, my son may have seen only 8, 9 or 10 red cars in a row.

When we review the traffic video footage of N cars in a single lane, determine the max X cars in a row of the same color, allowing k mistakes. K < N.
====analysis
Suppose k is 3

— solution 1: O(N) use 2 variables to maintain topFrq and w i.e. winSize

Within a sliding window of size w, maintain a frq table. initialize w to a good conservative value of 4 (i.e. k+1).

If we notice the top frq is 2, better than (w-k), i.e. w-k <= topFrq, then we are lucky: we can be less conservative and expand the current window backward (possibly safer than fwd).

After expansion, immediately try further expansion. IFF impossible, i.e. w – topFrq > k, then slide the window.

If the correct answer is 11, i.e. there’s an 11-substring containing 8 reds, I feel my sliding window will not miss it.
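A compact sketch of solution 1 in python — the window never shrinks, it only slides, so the final window size is the answer (names mine):

from collections import defaultdict

def longest_run(s, k):
    freq = defaultdict(int)       # frq table within the window
    top_frq = 0                   # best top frequency seen so far
    left = 0
    for right, ch in enumerate(s):
        freq[ch] += 1
        top_frq = max(top_frq, freq[ch])
        if (right - left + 1) - top_frq > k:   # too many mistakes
            freq[s[left]] -= 1                 # slide, don't shrink
            left += 1
    return len(s) - left          # final (max) window size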

zero out rows/columns +! auxDS

Q (Leetcode 73): Given a m x n matrix, if an element is 0, set its entire row and column to 0. Do it in-place in O(1) space. No time complexity requirement.

I will assume all signed ints.

====analysis
I find this problem very well-defined, but the O(1) space requirement is highly contrived. I think it only needs some clever trick, not a really reusable technique.

Reusable technique — for array of integers with stringent space complexity, we can save indices in the array

Aha — I only need to know the full set of rowIDs and columnIDs.

— My O(minimum(m,n)) space solution 1:
zeroRowCnt:=how many rows to be zeroed out
zeroColCnt  :=how many columns to be zeroed out

Compare the two. Suppose zeroRowCnt == 11 is smaller. I will save the 11 rowID’s in a collection. Then scan horizontally to zero out by column. Then use the rowIDs to zero out by row
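A sketch of solution 1, written for the case where rows are the smaller side (otherwise transpose the roles). The column pass only sees original zeros, because zeroing a column never writes into a column that hasn’t been checked yet:

def zero_out(matrix):
    zero_rows = [r for r, row in enumerate(matrix) if 0 in row]  # save rowIDs
    for c in range(len(matrix[0])):       # zero out by column, in place
        if any(row[c] == 0 for row in matrix):
            for row in matrix:
                row[c] = 0
    for r in zero_rows:                   # then zero out by row
        matrix[r][:] = [0] * len(matrix[r])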

–My O(1) space idea 2 — more elaborate than the published solution.

Aha — Can we save the 11 rowID’s in a column to be zeroed out?

Compare zeroRowCnt and zeroColCnt as earlier. Get first rowID among the 11. Suppose it’s Row #3.

Now we know Row#3 has some zeros, so find the first column having a zero. It might be the last column (farthest east). Wherever it is, we pick that column as our “bookkeeper column”.

Visual insight — Suppose bookkeeper is Column #33. Then a[3,33] would be the first zero if we scan entire matrix by-row-and-internally-by-column

We scan row by row again (since we don’t remember those 11 rowIDs), starting after that first rowID. For every rowID found, we will zero out one corresponding cell in bookkeeper column.

Insight — We should end up with exactly 11 zeros in that column. Can’t exceed 11 (only 11 rows having zeros). Can’t fall below 11 (we save all 11 rowIDs)

From now on, freeze that column until further notice. Now zero out each Column to be zeroed out, but leave out our bookkeeper column.

Lastly, follow our bookkeeper column to zero out every “dirty row”.

minimize average distance to N cells]matrix #Kyle 60%

Q: given N red cells in a white matrix, find the best cell anywhere in the matrix, whose average distance to the N red cells is minimized. You can imagine N homes deciding to build a waterpoint.

Distance is Manhattan distance, i.e. the number of horizontal plus vertical steps.

=====analysis

Insight — vertical vs horizontal dimensions are independent, so this is really a one-dimension problem.

center of gravity?

Insight — rate of change should be zero at the trough…
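Completing that thought: in 1-D the sum of absolute distances has zero rate-of-change exactly at a median (the center of gravity minimizes squared distance instead). So, assuming Manhattan distance, a median per dimension solves it:

def best_cell(red_cells):             # red_cells: list of (row, col)
    n = len(red_cells)
    rows = sorted(r for r, _ in red_cells)
    cols = sorted(c for _, c in red_cells)
    return rows[n // 2], cols[n // 2] # a 1-D median minimizes sum |distance|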

count lower ints to my right

https://leetcode.com/problems/count-of-smaller-numbers-after-self/ is labelled “hard”.

Q: given an integer array nums , return a new counts array, wherein counts[i] is the number of smaller elements to the right of nums[i]

====analysis

An order-statistics tree (i.e. an augmented RBTree) should give O(N logN). However, the actual algorithm is not clear to me.

One scan from right. Insert each node into this tree. Before inserting a node of value like 22, we will query the tree getRank(22).

Implementation wise, it’s hard to create a self-balancing BST from scratch. So I think an unbalanced BST might do.

Also, there might be some alternative solutions, like mergesort??
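A sketch of the unbalanced-BST idea with subtree sizes (O(N logN) on random input, O(NN) worst case; names mine):

class Node:
    __slots__ = 'val', 'left', 'right', 'size'
    def __init__(self, val):
        self.val, self.left, self.right, self.size = val, None, None, 1

def insert_count(root, val):          # returns (root, #existing nodes < val)
    if root is None:
        return Node(val), 0
    root.size += 1
    if val <= root.val:               # equal values go left, uncounted
        root.left, cnt = insert_count(root.left, val)
        return root, cnt
    left_size = root.left.size if root.left else 0
    root.right, cnt = insert_count(root.right, val)
    return root, cnt + left_size + 1  # root + its left subtree are smaller

def count_smaller_to_right(nums):
    root, out = None, []
    for v in reversed(nums):          # one scan from the right
        root, cnt = insert_count(root, v)
        out.append(cnt)
    return out[::-1]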

longest descending path through matrix #60%

https://leetcode.com/problems/longest-increasing-path-in-a-matrix/

Q: given an int-matrix that allows 4-way moves, find the longest path with strictly descending nodes. Lateral moves disallowed.

====analysis

I have code to generate all paths… generate loopfree paths: graph node A→B

— Solution 1: O(N logN)
O(N) First pass  to construct “waterflow” graph — visit N nodes. For each, check the four neighbors. Add an outgoing edge (in a Node.edgeList) to a neighbor node if it is strictly lower.

Now put all nodes into a min-heap, according to node height. Can be part of first pass.

Second pass we visit each item in the min-heap. For each node, compute the longest descending path starting therefrom. Record this path length as Node.score. Note if a node is surrounded by higher-or-equal nodes, then it has score 0, just like the lowest node.

A higher node AA is always upstream to some “computed” nodes. (We don’t care about any incoming edges into AA). So pick the max of (up to four) lower neighbors’ scores. Add 1 to get AA’s score. This computation is O(1) but the heap pop() makes second pass O(N logN)

Note the lowest node may not be part of the longest path, if it is surrounded by the highest nodes.
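A sketch of solution 1, with a sorted list standing in for the min-heap; here score counts nodes rather than edges:

def longest_descending_path(matrix):
    R, C = len(matrix), len(matrix[0])
    cells = sorted((matrix[r][c], r, c) for r in range(R) for c in range(C))
    score = [[1] * C for _ in range(R)]      # node count of best path from here
    for h, r, c in cells:                    # lowest nodes are computed first
        for rr, cc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= rr < R and 0 <= cc < C and matrix[rr][cc] < h:
                score[r][c] = max(score[r][c], 1 + score[rr][cc])
    return max(max(row) for row in score)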

getByRank() in sorted matrix: priorityQ^RBTree

https://leetcode.com/problems/kth-smallest-element-in-a-sorted-matrix/

====analysis

recombinant binTree pyramid, where “water” flows east or south.

  • first level has one node .. lowest value. Add it to pq (i.e. priorityQ)
  • pop the pq and insert the two downstream nodes
  • total K pops, each pop is followed by up to 2 inserts

Heap will grow to up to K items, so each pop will be up to logK

Total O(K logK). To achieve this time complexity, we can also use an RBTree. The tree nodes can come from a pre-allocated array.
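A sketch of the pq version; a seen-set handles the recombinant edges so no node is inserted twice:

import heapq

def get_by_rank(matrix, k):              # kth smallest, k >= 1
    R, C = len(matrix), len(matrix[0])
    pq = [(matrix[0][0], 0, 0)]          # first level: the lowest value
    seen = {(0, 0)}
    for _ in range(k - 1):
        _, r, c = heapq.heappop(pq)      # K pops in total
        for rr, cc in ((r + 1, c), (r, c + 1)):   # water flows south or east
            if rr < R and cc < C and (rr, cc) not in seen:
                seen.add((rr, cc))
                heapq.heappush(pq, (matrix[rr][cc], rr, cc))
    return pq[0][0]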

lowest missing+ve int#Codility #70%

Q: Write a function int solution(int[] A);  that, given an array A of N integers, returns the smallest natural number that does not occur in A. For example, given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.

• each element of array A is a 32-bit signed int
• expected worst-case time complexity is O(N);
• expected worst-case space complexity is O(N).
* Elements of input arrays can be modified.

https://leetcode.com/problems/first-missing-positive/description/ is similar but O(1) space and average O(N) time!

—— my analysis —–

The mutability and O(1) space hint at saving array indices as payload!

—- Idea A:

first scan to swap non-positives to the end and remember the new boundary (say #111).

In the same scan also determine min and max. Suppose min=55.

Another scan to reduce all payloads by 55 so they fall in the range of 0 to max-55.

Now use CSY technique .. check array@0-N in-situ #Nsdq#contrived
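For reference, a sketch of that in-situ technique applied directly to the original array — each value in 1..N is swapped to its home slot, and the whole thing is O(N) despite the nested loop because every swap homes one value for good:

def lowest_missing_positive(A):
    n = len(A)
    for i in range(n):
        while 1 <= A[i] <= n and A[A[i] - 1] != A[i]:
            j = A[i] - 1
            A[i], A[j] = A[j], A[i]   # park A[i] at its home slot
    for i in range(n):
        if A[i] != i + 1:
            return i + 1
    return n + 1
# lowest_missing_positive([1, 3, 6, 4, 1, 2]) == 5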

—-solutionB: make_heap O(1) space but O(N logN) time. Build a min-heap based on the array in O(1) space, then keep calling min().

  • make_heap shows random access container (vector used in demo), and “rearrange”, implying O(1) space
  • make_heap shows O(N) to construct the heap

min() is O(1) but delete_min() is O(log N), so overall we probably get O(N logN)

—-solutionC: radix sort. https://en.wikipedia.org/wiki/Radix_sort#In-place_MSD_radix_sort_implementations shows an in-place binary radix sort.

First pass to transform all negative numbers to 0. Then iterate the sorted array and check for the earliest “gap”. Worst case — you get 1,2,3… without gap, so answer is the 1+ largest array element.

O(W*N) where W is width of the largest integer. If we assume 64-bit then W is a constant:)

http://www.drdobbs.com/parallel/parallel-in-place-radix-sort-simplified/229000734 is another in-place radix sort.

get_majority_elem]unsorted array,O(1)space #90%

Q: Given an array of size n, find the majority element. The majority element is the element that appears more than ⌊ n/2 ⌋ times. You may assume that the array is non-empty and the majority element always exists in the array.

====analysis
worst input: odd length, only X and Y. X occurs once more than Y.

hash table solution needs O(N) space since there can be N/2 distinct values. To improve space complexity, how about quick select? Discard the smaller side and pick another random pivot.

Median-finder algorithm can solve this problem, using std::nth_element() which uses QuickSelect… O(1) space despite recursive set-up.

— idea 3 O(1) space: random pick then verify
Random pick and count the occurrence of this pick. Is it more than N/2? Within a few trials we should find a good pick.
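Sketch of idea 3 — since the majority occupies more than half the array, each trial succeeds with probability above 1/2, so the expected trial count is below 2 and expected time is O(N):

import random

def majority_element(nums):
    n = len(nums)
    while True:
        pick = nums[random.randrange(n)]                 # random pick
        if sum(1 for v in nums if v == pick) > n // 2:   # verify
            return pick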

min int_range2include all clubs#presorted 60% careercup

Q (careercup): You have k uneven lists of pre-sorted integers, N ints in total. Find the smallest range that includes at least one number from each of the k lists. For example,
List 1: [4, 10, 15, 24, 26]
List 2: [0, 9, 12, 20]
List 3: [5, 18, 22, 30]
The smallest range here would be [20, 24] as it contains 24 from list 1, 20 from list 2, and 22 from list 3
==== Analysis
Looks rather contrived but very popular.
— solution 1 O(N): moving window algo:
O(N) merge all K clubs’ items into a big array, where payload includes the original clubId of each item. We end up with
[0/2nd 4/1st 5/3rd 9/2nd 10/1st …]

Once I come up with this big array, I feel confident to conceive an O(N) moving-window algo.

A ‘basic window’ is any subarray having representatives from each club.
A ‘tight window’ is a basic window that can’t shrink further, i.e. the 1st and last items are the only reps of their clubs.

Now findInitialWindow(), similar to findNextWin().
Simple procedure —
As we build the window, we update a hashtable like {club1: position 1,3,6; club2: position 0,5; club3: position 2,4,..}
Basically an array of queues.
We update this table each time we increment the front pointer in search of the next basic window.
When we find a basic window, this table should have all K queues non-empty and at least one of them singular, probably the last updated queue.

Now shrink this window via shrink(). Here’s a fast algo but more tricky and no O() impact —
For each queue, find the last added i.e. right-most position.
Put these K positions in a container(actually no container needed).
Now out of these K positions, find the minimum. Say 22. I claim 22 is the left edge of a smaller window.
Keep shrinking if we can but I doubt we can.

Here’s a slow shrink algo —
For the current window, look at the clubId of the left edge (back ptr).
Use the clubId to locate a queue in the table.
Head item in the queue should match the position of back ptr.
If not empty, pop this queue and increment the back ptr by one; else exit shrink()

Now we have a tight window, we remember its size (and update current winner).

After shrink(), we call findNextWin() —
Suppose the basic window’s earliest item is a clubX.
Now we move back ptr (clubX queue is now empty).
Now we move front pointer i.e. right edge of the window, looking for the next clubX item.

* findNextWin() is O(1) per front ptr increment. Only touches the tail of queues
* shrink() is O(1) per back ptr increment. Only touches the head of queues
* therefore, at most N front ptr moves and N back ptr moves.. O(N)

PriorityQ can speed up some, but won’t improve bigO
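A compact sketch of the moving-window algo. Per-club counters replace the table of queues (same O(), since we only need to know when a club’s rep count hits zero), and sorted() stands in for the merge of the pre-sorted clubs:

from collections import defaultdict

def min_range(clubs):
    merged = sorted((v, cid) for cid, club in enumerate(clubs) for v in club)
    count = defaultdict(int)          # reps of each club inside the window
    have, need = 0, len(clubs)
    back, best = 0, None
    for front, (v, cid) in enumerate(merged):
        count[cid] += 1
        if count[cid] == 1: have += 1
        while have == need:           # basic window found: shrink it
            lo, lo_cid = merged[back]
            if best is None or v - lo < best[1] - best[0]:
                best = (lo, v)        # remember the tight window
            count[lo_cid] -= 1
            if count[lo_cid] == 0: have -= 1
            back += 1
    return best
# min_range([[4,10,15,24,26], [0,9,12,20], [5,18,22,30]]) == (20, 24)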

Pacific/Atlantic #51%

https://leetcode.com/problems/pacific-atlantic-water-flow/

Q: Given an m*n matrix of non-negative integers representing the height of each unit cell in a continent, the “Pacific ocean” touches the left and top edges of the matrix and the “Atlantic ocean” touches the right and bottom edges. Water can only flow in four directions (up, down, left, or right) from a cell to another one with height equal or lower.

Print all cells where water can flow to both the Pacific and Atlantic ocean.

====Analysis
peek? not yet
— solution 1:
Create m * n nodes in a 2D grid. Two more nodes:
All north|west boundary nodes have an edge From Pacific node
All south|east boundary nodes have an edge From Atlantic node

Between two adj cells, there’s 1 or 2 directed edges, from low to high. Intuition — tracing pollutant upstream. An arrow from P to Q means Q is suspect.

How do I represent the graph? Each node can hold a list of “suspect neighbors” (higher or equal).

Run BFT or DFT from the Pacific node and turn on a flag in each reachable node.

Run the same from Atlantic. Every reachable node with the flag set is a double-suspect to be printed.
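Solution 1 in python — BFT upstream from each ocean’s boundary nodes, then intersect the two flag sets:

from collections import deque

def pacific_atlantic(matrix):
    R, C = len(matrix), len(matrix[0])
    def upstream(starts):             # trace pollutant: low -> higher-or-equal
        seen, q = set(starts), deque(starts)
        while q:
            r, c = q.popleft()
            for rr, cc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= rr < R and 0 <= cc < C and (rr, cc) not in seen \
                        and matrix[rr][cc] >= matrix[r][c]:
                    seen.add((rr, cc))
                    q.append((rr, cc))
        return seen
    pac = upstream([(0, c) for c in range(C)] + [(r, 0) for r in range(R)])
    atl = upstream([(R-1, c) for c in range(C)] + [(r, C-1) for r in range(R)])
    return sorted(pac & atl)          # the double-suspects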

clean-up: non-overlapping intervals #70%

Q (L435): Given a collection of intervals, find the minimum number of intervals you need to remove to make the rest of the intervals non-overlapping.

Note:

  1. You may assume the interval’s end point is always bigger than its start point.
  2. Intervals like [1,2] and [2,3] have borders “touching” but they don’t overlap each other.

==== analysis:

I think this is the same as the greedy room scheduler: schedule the earliest-ending task, so as to maximize accepted meetings.

A deselected meeting is an interval removed.
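The greedy scheduler in a few lines:

def min_removals(intervals):
    intervals.sort(key=lambda iv: iv[1])    # earliest-ending first
    kept, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:               # touching borders don't overlap
            kept, last_end = kept + 1, end
    return len(intervals) - kept            # deselected meetings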

signed-int: longest K-sum subArray #60%

Q: given a signed-int array, find the longest subarray having sum=K

realistic scenario: we know someone executed a buy and a sell and made $K profit. There are many possible buy-sell points when I review the price points. We know this trader held the position longest to gain exposure, so which buy-sell points did he use?

====analysis

First scan to build cumulative sum — the sum-array. Looks like a price chart. Initial value is the first original element.

Now scan this sum-array. At each element s[j],

• I save s[j] in a hashtable {s[j] -> j} IFF no such key yet.
  • Then I look for the (by design, earliest) element s[i] such that s[i] + K == s[j]. If found in the hash table, then compute hold:=j-i and update the global longest_hold if necessary.

Note the hash table’s values (positions) are all earlier than j, since we are scanning forward.

Can consolidate to one scan.

Tip — hash table is usable to reduce O() thanks to exactly-K.
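The consolidated one-scan version in python:

def longest_k_sum_subarray(nums, K):
    first_pos = {0: -1}           # prefix sum -> earliest position
    s = longest = 0
    for j, v in enumerate(nums):
        s += v
        if s - K in first_pos:    # s[i] + K == s[j]
            longest = max(longest, j - first_pos[s - K])
        first_pos.setdefault(s, j)   # save IFF no such key yet
    return longest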

append+maxInRangeK #Rahul

Q: given a sequence of integers, implement two operations

  1. Operation 1: append another integer. Size becomes N
  2. Operation 2: given two arbitrary subscripts into the sequence, (they define a sub-array of size K), find the max

Rahul saw discussions without any optimal solution. I think a simple vector-based algo can achieve amortized O(1) for Append and O(logK) for Max.

AuxDS required.

— idea 1: segment the sequence into fixed segments
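To flesh out idea 1 (names mine): fixed segments give amortized O(1) append and roughly O(√N) queries; replacing the segment layer with a segment tree would bring the query down toward the O(logK) claimed above.

class MaxInRange:
    def __init__(self, seg=64):
        self.seg, self.data, self.seg_max = seg, [], []
    def append(self, x):                  # amortized O(1)
        if len(self.data) % self.seg == 0:
            self.seg_max.append(x)        # start a new segment
        else:
            self.seg_max[-1] = max(self.seg_max[-1], x)
        self.data.append(x)
    def range_max(self, lo, hi):          # inclusive subscripts
        first, last = lo // self.seg, hi // self.seg
        if first == last:
            return max(self.data[lo:hi + 1])
        mid = max(self.seg_max[first + 1:last], default=float('-inf'))
        return max(mid, max(self.data[lo:(first + 1) * self.seg]),
                   max(self.data[last * self.seg:hi + 1]))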

 

wildcard matching #more tractable than regex_FB

Q (Leetcode44): Given an input string (s) and a pattern (p), implement wildcard pattern matching with support for ‘?’ and ‘*’. The matching should cover the entire input string (not partial).

‘?’ Matches any single character.
‘*’ Matches any sequence of characters (including the empty sequence). The star is not a quantifier. I think a single star is like the “.*” in perl.

No space allowed. I think a wildcard can even sit at the end of the string. I think we could see two wildcards in a row. Two stars can be treated as one star.

=====analysis

My topdn-memoization solution (on github) was accepted at Leetcode.

I feel this is simpler than the regex_FB.py problem, therefore easier to absorb. I think either DP or my original recursive solution should work.
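The skeleton of a topdn-memoization approach (not my exact github code):

from functools import lru_cache

def is_match(s, p):
    @lru_cache(maxsize=None)
    def match(i, j):                  # does s[i:] match p[j:] ?
        if j == len(p): return i == len(s)
        if p[j] == '*':               # eat nothing, or eat one char
            return match(i, j + 1) or (i < len(s) and match(i + 1, j))
        return i < len(s) and p[j] in ('?', s[i]) and match(i + 1, j + 1)
    return match(0, 0)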

touch not cross: path between 2 corners

Q1: from p 183 [[discrete math]]: given an n x n grid. Start from the northwest corner, moving south or east each step, towards the southeast corner. The diagonal connecting the two corners can be touched from the north, but not crossed. Print all paths.

easier to treat origin as [0,0] and end as [N,N]

DFT will require deep recursion.
BFT (with color) where each node remembers all paths-from-root? Kinda brute force

Insight — Actually this is not necessarily a graph problem though it can be solved that way.

Q2 (accepted@leetcode ): Given n pairs of parentheses, write a function to generate all combinations of well-formed parentheses. For example, given n = 3, a solution set is:

[ “((()))”,
“(()())”,
“(())()”,
“()(())”,
“()()()” ]

====analysis
Four related problems — These two problems are related to the abbreviation generator i.e. the combination generator.

However, the abbr/combo generators are more versatile and possibly overkill for this problem. These two problems can use a bit array to represent the output. I think my solution on github is probably considered inelegant but I don’t care. BigO insight — the number of paths (or valid strings) is the Catalan number, which grows exponentially with N, so no solution can beat that output size.
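A minimal backtracking sketch for Q2; the closes < opens guard is exactly the “touch but don’t cross the diagonal” rule of Q1:

def gen_parens(n):
    out = []
    def grow(prefix, opens, closes):
        if closes == n:
            out.append(prefix)
            return
        if opens < n:
            grow(prefix + '(', opens + 1, closes)
        if closes < opens:            # never cross the diagonal
            grow(prefix + ')', opens, closes + 1)
    grow('', 0, 0)
    return out
# gen_parens(3) returns the 5 strings listed above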

In general, our own ideas are often inefficient. If efficient, then often inelegant by some arbitrary interview standard. Still more valuable than learning standard solutions. One of the biggest values is xRef which helps build insight, intuition, thick->thin.

best2subArrays show`max diff in sums #70%

Q (careercup): Given an array of N signed integers. Find two disjoint contiguous sub-arrays such that the absolute difference between the two sub-array sums is maximized.  The sub-arrays should not overlap. O(n^2) algorithm was not accepted.

eg: [2 -1 -2 1 -4 2 8] ans: (-1 -2 1 -4) (2 8), diff = 16

==== analysis
int-array problems are my relative weakness but I have done quite a few. I feel this is more about the algo rather than the implementation.

Without loss of generality, let’s assume the best result includes a min subarray on the left and a max subarray on the right.

–idea 1: a partition point at 5 splits the array to 0-4 and 5-end
For a given partition point, we can find the min subarray on the left and also max subarray on the right.

A O(N^2) solution can be designed based on this idea — at each partition point, O(N) search of min subarray sum on the left and max subarray sum on the right

—idea 2: 2 colliding pointers le and ri.
0-le contains a min subarray, and ri-end contains a max subarray. But how do I move the 2 pointers?

— idea 3: O(N) scan fwd to record at each position “max sum ending here” and “min sum ending here” .. two numbers. Ditto reverse scan. Total 4 numbers per tuple.

A subarray is never empty, as produced from my minSubarraySum.py.

End up with N-array of tuples like {a,b,c,d} . Then scan this array in O(N)
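A python sketch of idea 3 — two Kadane sweeps per direction, then one scan over the split points:

def max_diff_two_subarrays(a):
    def sweeps(arr):                  # prefix best max/min subarray sums
        max_end = min_end = arr[0]    # subarrays are never empty
        best_max, best_min = [max_end], [min_end]
        for v in arr[1:]:
            max_end = max(v, max_end + v)
            min_end = min(v, min_end + v)
            best_max.append(max(best_max[-1], max_end))
            best_min.append(min(best_min[-1], min_end))
        return best_max, best_min
    lmax, lmin = sweeps(a)            # best within a[0..i]
    rmax, rmin = [s[::-1] for s in sweeps(a[::-1])]   # best within a[i..]
    return max(max(lmax[i] - rmin[i + 1], rmax[i + 1] - lmin[i])
               for i in range(len(a) - 1))
# max_diff_two_subarrays([2, -1, -2, 1, -4, 2, 8]) == 16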

max-sum path up+down binTree #FB

Q2 (Leetcode “hard” Q124): find the max path sum, where a “path” has at minimum one node A and can include A’s left child and/or A’s right child. No uplink available.

====analysis

I see two types of paths

  1. down the tree, starting at some upstream node A ending at a node in A’s subtree
  2. up-down the tree, starting at some A’s left subtree, ending somewhere in A’s right subtree.

For 1) https://bintanvictor.wordpress.com/wp-admin/post.php?post=23360&action=edit has my tested solution

For 2: post-order walk to update each node with a (signed) max-path-sum-from-here. See https://bintanvictor.wordpress.com/wp-admin/post.php?post=31601&action=edit. The 2 values from the left+right children can solve this sub-problem.

Q: can we use the 2) solution for 1)? I think so
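The post-order walk for 2) in a few lines — each node reports its best downward path, while the up-down combination through the node updates a global best:

def max_path_sum(root):
    best = [float('-inf')]
    def meh(node):                # max path sum starting here, going down
        if node is None: return 0
        le = max(meh(node.left), 0)     # drop negative side-paths
        ri = max(meh(node.right), 0)
        best[0] = max(best[0], node.val + le + ri)  # up-down through node
        return node.val + max(le, ri)
    meh(root)
    return best[0]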

given any string, generate splits into palindromes

https://leetcode.com/problems/palindrome-partitioning/

Q: Given a string s, partition s such that every substring of the partition is a palindrome. Return all possible palindrome partitioning of s.

====analysis

I found this question quite hard and had no confidence .. overwhelmed, like many DP problems. But then I came up with two DP solutions .. https://github.com/tiger40490/repo1/blob/py1/py/algo_str/genPalindromSplits.py

I don’t bother to run the leetcode tests as they tend to deflate my burning joy, absorbency… precious stuff.

insight — the word “partition” is horribly confusing. It can mean two very important entities in the problem domain — either a palindrome substring or a complete family of substrings that form the original word. It’s crucial to avoid this word

insight — challenge is not O() but a working solution that generates all splits without repetition

–idea 1: recursive top-down DP

memoization is not easy since I used generator.

–idea 2: iterative bottom-up DP
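To make idea 1 concrete, a recursive sketch minus the memoization — each call splits off every palindromic head, so no duplicate families are possible:

def palindrome_splits(s):
    if not s: return [[]]
    out = []
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head == head[::-1]:            # head is a palindrome
            for rest in palindrome_splits(s[i:]):
                out.append([head] + rest)
    return out
# palindrome_splits('aab') == [['a', 'a', 'b'], ['aa', 'b']]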

invalid/unbalanced brackets: kernel #62%

Q: given a string of N left+right brackets, you can find out how many invalid chars to remove to make it balanced. Say it’s R. There are up to N-choose-R ways to make it balanced. Print all unique balanced strings in compact form.

====analysis

subproblem: the minimum number of invalid chars to remove

Useful preprocessing technique — on the left end, any closers must be removed. Same on the right end. No choice 🙂 We would end up with a valid char on each end. This is nice but optional in my Idea3 below.

— my idea 3 to count minimum cuts

Aha — There will surely be some “kernels” i.e. opener-then-closer in a row. First scan I will remove them, then remove more kernels. This is always optimal if we aim to minimize the cuts

  • [] ] [ [] ] [ [] ] becomes ]
  • []][[] becomes ][
  • [] [ [] [] [] [] becomes [
  • [[[][]][[]] becomes [
  • [[][]]][[]]][[] becomes ]][

What remain are the positions of bad chars. I need to remember these positions.

Case: closers only. Let’s look at one position like #55. We can cut #55 or another closer at an earlier position.

Case: openers only. Similar to the above.

Case: closers-openers. The original string is partitioned into exactly two sections, each similar to the above cases.
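The counting subproblem reduces to one pass — every closer that finds an opener partner belongs to a (possibly nested) kernel, and what survives is exactly the closers-then-openers shape described above:

def min_cuts(s):
    unmatched_closers = opener_depth = 0
    for ch in s:
        if ch == '[':
            opener_depth += 1
        elif opener_depth:            # this closer completes a kernel
            opener_depth -= 1
        else:
            unmatched_closers += 1    # a bad closer
    return unmatched_closers + opener_depth
# min_cuts('[]][[]') == 2   i.e. the surviving ][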

Preorder+Inorder list → binTree #50%

https://leetcode.com/problems/construct-binary-tree-from-preorder-and-inorder-traversal/ says — Q: Given preorder and inorder traversal of a tree, construct the binary tree. You may assume that duplicates do not exist in the tree.

For example, given

preorder = [3,9,20,15,7]
inorder = [9,3,15,20,7]

Return the following binary tree…

====analysis

The int values in the int arrays are meaningless and useless. I can improve things by assigning incremental ranks [1] to each node in the inorder list. Then use those ranks to rewrite the preorder list. After that, walk the preorder list:

  • first node is root
  • any subsequent node can be placed precisely down the tree.

I solved a similar problem of converting preorder list into BST — https://bintanvictor.wordpress.com/wp-admin/post.php?post=19713&action=edit
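A sketch using the rank idea — the rank lookup replaces the “rewrite the preorder list” step, and each preorder value is consumed exactly once:

class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def build_tree(preorder, inorder):
    rank = {v: i for i, v in enumerate(inorder)}   # incremental ranks
    it = iter(preorder)
    def build(lo, hi):            # subtree for inorder ranks [lo, hi)
        if lo >= hi: return None
        root = TreeNode(next(it)) # next preorder item is the root
        mid = rank[root.val]
        root.left = build(lo, mid)
        root.right = build(mid + 1, hi)
        return root
    return build(0, len(inorder))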

merge 2 binTrees by node position

Q (leetcode 617): https://leetcode.com/problems/merge-two-binary-trees/submissions/

==== Analysis

https://github.com/tiger40490/repo1/blob/py1/py/algo_tree/merge2Tree.py is a short, simple solution fully tested on leetcode.com but hard to run offline. Elegant in terms of implementation.

Insight — Challenge is implementation. Input and return type of the DFT function are tricky but serve as useful implementation techniques.

Labelled as easy, but pretty hard for me.

— Idea 1: BFT. When a node has any null child, put null into queue. Not so simple to pair up the two iterations

— Solution 2: DFT on both trees. Always move down in lock steps. When I notice a node in Tree A is missing child link that Tree B has, then I need to suspend the Tree A DFT?

My DFT function would have two parameters nodeInA and nodeInB. One of them could be null , but the null handling is messy.

Aha — I set the input parameters to dummy objects, to avoid the excessive null checks. In this problem, this technique is not absolutely necessary, but it is very useful in general.
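For contrast, a sketch using early returns instead of dummy objects — in this particular problem the null handling collapses nicely:

def merge_trees(a, b):
    if a is None: return b        # the other tree takes over wholesale
    if b is None: return a
    a.val += b.val                # reuse tree A's nodes
    a.left = merge_trees(a.left, b.left)
    a.right = merge_trees(a.right, b.right)
    return a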

 

us`lowest N natural nums,count BST#70%#Rahul

how do you make use of the previous results for N=2 to tackle N=3?
f(z) denotes the count of unique BSTs consisting of the first z natural numbers
f(0)=f(1)=1

For N=21, we can have 1 node on the left side, 19 nodes on the right side

  • for odd z, express it as z := 2y+1 where y > 0
    f(2y+1) = [ f(0)*f(2y) + f(1)*f(2y-1) + f(2)*f(2y-2) + … + f(y-1)*f(y+1) ]*2 + f(y)*f(y)

Note the first argument a increments from 0 up to y-1; the *2 exploits the left-right mirror symmetry.

  • for even z, express it as z:=2y where y > 0
    f(2y) = [ f(0)*f(2y-1) + f(1)*f(2y-2) + f(2)*f(2y-3) + … + f(y-1)*f(y) ]*2

Let’s try this formula on f(2) (y=1) … = 2. Good.
Let’s try this formula on f(3) (y=1) … = 5. Good.

–N=3:
if ‘1’ is root, then there are f(2) BSTs
if ‘3’ is root, then there are f(2) BSTs
if ‘2’ is root, then there are f(1) * f(1) BSTs
–N=9:
if ‘1’ is root, there are f(8) BSTs
if ‘9’ is root, there are f(8) BSTs
if ‘4’ is root, there are f(3) left-subtrees and f(5) right-subtrees, giving f(3)*f(5) BSTs

This is not a coding challenge but a math challenge.
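As a cross-check, the plain recurrence (without the mirror-symmetry halving) in python; names are mine:

def count_bsts(n):                # f(z) in the notation above
    f = [1] * (n + 1)             # f(0) = f(1) = 1
    for z in range(2, n + 1):
        # pick each value as root: a nodes on the left, z-1-a on the right
        f[z] = sum(f[a] * f[z - 1 - a] for a in range(z))
    return f[n]
# count_bsts(2) == 2, count_bsts(3) == 5, count_bsts(9) == 4862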

Q2: output all BSTs. We need a way to represent (serialize) a BST?

nth largest element in unsorted array #QuickSelect

Q: Find the kth largest element in an unsorted array. Note that it is the kth largest element in the sorted order, not the kth distinct element.

I think this is mostly a bigO algoQQ problem.

std::nth_element is linear on average .. https://stackoverflow.com/questions/11068429/nth-element-implementations-complexities talks about QuickSelect algo

— idea 6: priority Q (Fib-heap) of size k
if any item is higher than the min, then pop min O(logK) and insert in O(1)
— idea 6b: priority Q
Linear time to build it
— idea 5: 95th percentile problem from NYSE
— idea 4: multiple scans
— idea 3: segments
— Sol2: O(N). use the O(N) algo in the blog on “given int array, find median ] O(N)”. Then discard one of the two segments. Then repeat.
Note: Each time the target element must exist in one of the 2 segments.

O(N) + O(N/2) + O(N/4) … -> O(N)

— Sol2a: Use the O(N) qsort partition algo to anchor a (random) pivot element to create two segments. Our target must exist in one of the two, so discard the other by adjusting the le/ri boundaries.

This idea is same as the most voted solution in leetcode discussion.
O(N) on average — we get O(N)+O(N/2) + O(N/4) + … < O(2N)

Note average complexity is acceptable, just as in hashtables!
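Sol2a in python, with a 3-way partition so recurring pivot values land in the middle:

import random

def kth_largest(nums, k):
    target = len(nums) - k            # subscript in ascending order
    lo, hi = 0, len(nums) - 1
    while True:
        pivot = nums[random.randint(lo, hi)]
        le, ri, i = lo, hi, lo        # 3-way partition around pivot
        while i <= ri:
            if nums[i] < pivot:
                nums[i], nums[le] = nums[le], nums[i]
                le += 1; i += 1
            elif nums[i] > pivot:
                nums[i], nums[ri] = nums[ri], nums[i]
                ri -= 1
            else:
                i += 1
        if target < le: hi = le - 1   # discard the other segment
        elif target > ri: lo = ri + 1
        else: return pivot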

balloon burst #DP optimization #50%

Q [ Leetcode 312]: not really classic: Given n (up to 500) balloons, indexed from 0 to n-1. Each balloon is painted with a number on it represented by array “nums”. You are asked to burst all the balloons one by one. If you burst balloon i you will get nums[left] * nums[i] * nums[right] coins. Here left and right are adjacent indices of i. After the burst, the left and right then become adjacent. Find the maximum coins you can collect by bursting the balloons wisely.

If you burst a leftmost balloon, you collect 1*it*rightNeighbor coins. In other words, when multiplying 3 numbers, any absentee is a one.

0 ≤ nums[i] ≤ 100

Example: Input: [3,1,5,8]
Output: 167
Explanation: nums = [3,1,5,8] –> [3,5,8] –> [3,8] –> [8] –> []
coins = 3*1*5 + 3*5*8 + 1*3*8 + 1*8*1 = 167
==analysis:
int-array optimization problem.
Might be related to some classic problem.

Let’s define a generic math-function of 3 balloon IDs score(myle, me, myri). In this problem, score() is simply “return myle*me*myri “, but in the next problem, score() could be any math function of the three inputs.

I see each possible snapshot (having K balloons, i.e. at level K) as a graph node. Exactly 2^N nodes in the graph, i.e. 2^N possible snapshots i.e. 2^N combinations of these N balloons.

Every edge has a score. To compute the score, we only need the two nodes (snapshots) of the edge to identify the 3 balloons for score().

Pyramid — Let’s assume at bottom is “origin” i.e. snapshot of the original array ..Level 500; on top is “phi” i.e. snapshot of the empty array .. Level 0.

The problem transforms into a max path sum problem between these 2 nodes.

–solution-1 DP
From origin to any given node, there are many distinct paths each with a total score up to that node. If a node has 55 paths to it, the max sum among the 55 paths would be the uprank (upward rank) of the node.

If the node also has 44 paths from phi, the max sum among the 44 paths would be the downrank (downward rank) of the node. This is an interesting observation, but not needed in this solution since every edge is evaluated exactly once.

To our delight, the uprank of a node AA at Level-5 depends only on the upranks of its Level-6 parent nodes (N-5 of them), so we don’t need to remember all the distinct paths to AA :). Our space complexity is the size of the previous level + current level.

We just need to compute the uprank of every node at Level 6, then use those numbers to work out Level 5…. the Level 4 … all the way to phi.

If there are x nodes at Level 6 and y nodes at Level 5, then there are 6x == (N-5)y edges linking the two levels.

Time complexity is O(V+E) i.e. visit every edge.

Level n: 1 node
Level n-1: n nodes
Level n-2: nc2 nodes

Level 2: nc2 nodes
Level 1: n nodes
Level 0: 1 node

Each node at level K has K child nodes above. This graph now suggests the max-path-sum algo (with edge scores), but it might be the only way to solve the problem, like the bbg odometer.

consider a DP algo to update the score at each node at level K, i.e. the max sum from origin till here, via its parent nodes at level K+1

But Level 2 has too many (N-choose-2) nodes. Can we prune the tree, from either origin or phi?
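For comparison, the published approach is an interval DP, not the snapshot graph above: pick the last balloon to burst inside each open interval. O(N^3) time, O(NN) space:

def max_coins(nums):
    vals = [1] + nums + [1]           # absentee neighbors are ones
    n = len(vals)
    best = [[0] * n for _ in range(n)]
    # best[le][ri]: max coins bursting everything strictly between le and ri
    for width in range(2, n):
        for le in range(n - width):
            ri = le + width
            best[le][ri] = max(
                best[le][k] + vals[le] * vals[k] * vals[ri] + best[k][ri]
                for k in range(le + 1, ri))   # k bursts last
    return best[0][n - 1]
# max_coins([3, 1, 5, 8]) == 167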

tick`95%mark #Kam #70%

“Ticking 95th percentile server” is the blog title I would use. Original question is on paper/pencil, scanned and sent to my gmail, from my friend Deepak CM. I find this problem rather realistic with practical usage, rather than fake and contrived. I treat it as a System Design + implementation question.

Q: Using only std library, write a c++ program to process a live stream of 128,000,000 (or much more) double numbers, representing temperature readings not necessarily unique. As the temperatures come in, print the current 95th percentile on-demand. I call it the lucky “winner”. We can use the nearest-rank percentile definition.

====Idea 1: the “given unsorted ints, find median in O(N)” algo can be tweaked for any percentile, but unfortunately it is not “ticking”

====design 2, for static data set
use an “order statistic tree i.e. a RBTree where each node remembers the size of its subtree. (A leaf node has size 1.)
====design 3, optimized for high volume of updates like 128 million updates, not optimized for frequent query

The entire temperature range is divided into non-overlapping segments, each represented by a segment-head temperature i.e. the lower bound [1b]. Each segment has a range (i.e.distance to next segment head), size (i.e.item count) and density (i.e. size/range ratio). We mostly care about “size” only.

We need a RB-tree (or sorted vector) containing P=1024 [1] nodes, each an unsorted container[3]. The RB-tree serves to maintain the containers i.e segments.

Each incoming temperature is quickly “routed” to the correct container and simply appended therein, increasing its size.

Upon query request, we will use the latest segment sizes to build a cumulative profile, and run an O(logP) binary search to identify the one segment containing the “winner”. This segment size would hopefully be much smaller than 128,000 [2] and far more /tractable/.

–Within the chosen segment of size S, we can use a vector to sort in O(S logS) the temperatures and identify the winner.  After completing a query, the chosen container will become (partially) sorted, helping subsequent queries if this segment is picked again.

Since we only support 95th percentile, chance is good that this segment will be picked most of the time. If x% of the queries hit this segment, then I will convert this “favorite segment” to a RB-tree.

Alternatively, we can also use the O(S) algorithm in Idea 1, but the container won’t become sorted.

–priming

[2] 128,000 is 1/1024th of the original sample size… not ideal. The segments need to be initialized carefully, during a priming phase, inspired by JIT compilers. Shall we assume roughly uniform distribution or Gaussian distribution? Assuming we know the total sample size is 128 million, I will use the first 100,000 temperatures to select the 1024 segment heads. The segments are structured not for equal length (in temperature) or equal size (i.e. element count). In fact the low segments can be very long and very crowded.

Instead, the segment heads are chosen so that between 94th percentile and 96th percentile we have half of all segments. These small segment sizes will be much smaller than 128,000 and quicker to manipulate.
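A skeletal python version of design 3 (names mine; the heads come from the priming phase, and heads[0] should be an effective minus-infinity):

import bisect, math

class PercentileTicker:
    def __init__(self, heads):        # ascending segment-head temperatures
        self.heads = heads
        self.segments = [[] for _ in heads]   # unsorted containers
        self.total = 0
    def add(self, t):                 # O(logP) route + O(1) append
        i = bisect.bisect_right(self.heads, t) - 1
        self.segments[max(i, 0)].append(t)
        self.total += 1
    def percentile95(self):           # nearest-rank definition
        target = math.ceil(0.95 * self.total)
        for seg in self.segments:     # cumulative profile, lowest first
            if target <= len(seg):
                return sorted(seg)[target - 1]   # sort only this segment
            target -= len(seg)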

–Foot notes:

Q: what if some of the containers grow too big like three times 128,000,000/1024. The priming/estimate was ineffective.
A: Not a problem unless the winner happens to be in such a container.
A: One idea is to split up such a container when we notice it, and grow a new node on the RB-tree. std::map::insert() can take a hint for the position where new node can be inserted. Note we don’t want to split a node JJ too early since JJ may not grow any further subsequently and may end up no larger than other nodes as other nodes keep growing.

[1] Sizing of P — First we estimate total sample size. If unknown, then set P := 1024 so all nodes stay in L1-cache (typically 32KB). If we assume 16 bytes/node ( 8 bytes pointer to container + 8 bytes double ), then 32KB can hold 2000 nodes.

If query becomes more frequent, I can increase P by 1024, sacrificing insertion.

[1b] The node values are “lower-bounds” and don’t have to be really the minimum of the original temperatures in the container. We can probably cheat with 4-byte floats, and we get away with 2700 twelve-byte tree nodes.

[3] slist vs vector — vector might be faster due to pre-allocation, provided a node will never grow beyond a capacity. Vector has reserve(). (Note resize() is the wrong choice.)

O(1)getRandom+add+del on Bag #Rahul

Q: create an unordered multiset with O(1) add(Item), del(Item) and a getRandom() having the  probability of returning any item  based on the PMF.

Rahul posed this question to our Princeton PhD candidate, who needed some help on the Bag version.

====my solution:
On the spot, I designed a vector<Item> + hashmap<Item, hashset<Pos>>. The hashset records the positions within the vector.

Aha — Invariant — My vector will be designed to have no empty slots even after many del(). Therefore vec[ random() * vec.size() ] will satisfy getRandom() PMF.

add() would simply (in O(1)) append to the vector, and to the hashset.

— del() algo is tricky, as Rahul and I agreed. Here’s an illustration: Let’s say Item ‘A’ appears at positions 3,7,8,11,16 and B appears at positions 2,5,31 (the last in the vector). del(A) needs to remove one of the A’s and move the B@31 into that A’s position.

  1. Suppose the PMF engine picks vec[11] which is an A.
  2. unconditionally O(1) find the item at the last position in vector. We find a B, which is different from our ‘A’
  3. Here’s how to physically remove the A from position 11:
  4. O(1) replace ‘A’ with ‘B’ at position 11 in the vector
  5. O(1) remove 11 from A’s hashset and add 11 into B’s hashset, so A’s hashset size decrements.
  6. O(1) remove 31 from B’s hashset, so B’s hashset size remains
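The design in python — del() moves the tail item into the hole, so the vector never has empty slots:

import random
from collections import defaultdict

class Bag:
    def __init__(self):
        self.vec = []                     # no empty slots, ever
        self.pos = defaultdict(set)       # item -> positions in vec
    def add(self, item):                  # O(1)
        self.pos[item].add(len(self.vec))
        self.vec.append(item)
    def get_random(self):                 # PMF by multiplicity
        return self.vec[random.randrange(len(self.vec))]
    def delete(self, item):               # O(1)
        if not self.pos[item]: return False
        hole = self.pos[item].pop()       # a position holding `item`
        last_pos = len(self.vec) - 1
        last_item = self.vec[last_pos]
        self.vec[hole] = last_item        # move tail item into the hole
        self.pos[last_item].add(hole)
        self.pos[last_item].discard(last_pos)
        self.vec.pop()
        return True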

SCB-FM stack-based FIFO in O(1)amortized

Q: given a hardware-based stack API consisting of 3 functions {pop/push/isEmpty}, please implement a queue api consisting of 3 functions {enqueue/dequeue/isEmpty}

https://leetcode.com/problems/implement-queue-using-stacks/description/ is similar

====analysis====

service dequeue from a hidden stack.

When hidden stack is empty, pop all nodes from visible stack to hidden stack. Amortized O(1) pop()

isEmpty() must add up two sizes.

[[python cookbook]] P658 implements this classic algo in 9 lines.
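The classic algo, for completeness:

class StackQueue:
    def __init__(self):
        self.visible = []             # receives enqueue()
        self.hidden = []              # services dequeue()
    def enqueue(self, x):
        self.visible.append(x)
    def dequeue(self):                # amortized O(1)
        if not self.hidden:           # refill reverses the order
            while self.visible:
                self.hidden.append(self.visible.pop())
        return self.hidden.pop()
    def is_empty(self):               # add up two sizes
        return not self.visible and not self.hidden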

LFU cache #cf.LRU #72%

Q LFU (Least-Frequently-Used) cache to support the following operations: get and put in O(1)
* get(key) – Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
* put(key, value) – Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least frequently used item before inserting a new item. For the purpose of this problem, when there is a tie (i.e., two or more keys that have the same frequency), the least recently used key would be evicted.

====Analysis

  1. dstruct — centry i.e. CacheEntry node {key, value, hitCount, lastHit (timestamp), (optional) ptr to host LinkNode}, to be used in an inner linked list.
    • invariant: hitCount can only increase
  2. dstruct — inner minilist of centry nodes
    • invariant: list always sorted by lastHit. We can remove any intermediate node, but incoming node is always added to the Tail
  3. dstruct — fixed-sized (rehash-free) hashtable {key -> ptr to centry}, needed for mid-stream laser-removal
  4. dstruct — LinkNode {level, minilist-of-centry} where all centry objects share the same hitCount denoted “level”.
  5. dstruct — outer list of LinkNodes, always sorted by level

“bubble-up” operation — Whenever a centry gets a cache-hit, its hitCount increments. It immediately and unconditionally bubbles up to the LinkNode one level higher (to be created in O(1) if necessary) ((
* [o1] query the hashtable and follow ptr to remove the centry from the minilist in an old LinkNode
* [o1] insert the centry to the new level, at Tail of minilist. The new LinkNode could be non-existent but Never empty!
* [o1] optionally, new host LinkNode’s address is saved in the centry
))

  • Get() hit — relatively easy. Update the hitCount and bubble up
  • Get() miss — trivial
  • Put() Update — similar to get-hit
  • Insertion (possibly after deletion) — [o1] append to the minilist Tail in the Level-1 LinkNode (to be created if necessary) and add to hashtable
  • Deletion — always from list to hashtable, never the converse
    • [o1] identify lowest level present, then delete the head (i.e. eviction target) of minilist
    • when a linkNode becomes empty, it must disappear from the outer list, to prevent build-up of consecutive empty LinkNodes leading to linear search for eviction target. Imagine aaaaa bbbbb c[Now need to evict an “a”]. Therefore, array of LinkNode is unacceptable.
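A compact python rendition of the design — OrderedDicts play the minilists (insertion order == lastHit order) and a min_level variable replaces the sorted outer list:

from collections import OrderedDict, defaultdict

class LFUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.kv = {}                  # key -> (value, level)
        self.level = defaultdict(OrderedDict)   # level -> minilist of keys
        self.min_level = 0
    def _bubble_up(self, key):        # unconditionally move one level higher
        value, lv = self.kv[key]
        del self.level[lv][key]
        if not self.level[lv] and self.min_level == lv:
            self.min_level = lv + 1   # the old LinkNode disappeared
        self.level[lv + 1][key] = None          # append at Tail
        self.kv[key] = (value, lv + 1)
    def get(self, key):
        if key not in self.kv: return -1
        self._bubble_up(key)
        return self.kv[key][0]
    def put(self, key, value):
        if self.cap == 0: return
        if key in self.kv:            # update: similar to get-hit
            self._bubble_up(key)
            self.kv[key] = (value, self.kv[key][1])
            return
        if len(self.kv) >= self.cap:  # evict head of the lowest level
            victim, _ = self.level[self.min_level].popitem(last=False)
            del self.kv[victim]
        self.kv[key] = (value, 1)     # insertion lands at Level 1
        self.level[1][key] = None
        self.min_level = 1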

max-profit at-most-2K trades #proven but untested

Q(Leetcode): Say you have a price history as array. Design an algorithm to find the maximum profit. You may complete at most 2K transactions, consisting of exactly K (eg 2) buys and K sells. You may not engage in multiple transactions at the same time (i.e. you must sell the stock before you buy again). No short sell please.

No O() requirement.

====analysis=====

I feel first challenge is to list all (not more than 10) scenarios. This step has taken me a few days, even though I drew many examples.

–Idea 3 for 2K, based on Leetcode discussion

f[2k, ii] represents the max profit up until prices[ii] using at most 2k transactions.

f[2k, ii] = max( f[2k, ii-1], prices[ii] + max_for_all_jj( f[2k-2, jj-1] - prices[jj] ) )

Two possibilities:

  1. the optimal solution at [2k,ii] doesn’t involve the price point ii, so the solution is f[2k,ii-1]
  2. the optimal solution at [2k,ii] involves a Sell at price point ii. In this scenario, the last buy is at some previous price point jj, and before jj we have an optimal solution at [2k-2, jj-1]
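Idea 3 in python — best_jj carries the inner max, so each round is one pass, O(K*N) overall:

def max_profit(prices, K):            # at most K buy-sell round trips
    if not prices: return 0
    f = [0] * len(prices)             # f[2k, ii] for the current k
    for _ in range(K):
        prev = f[:]                   # f[2k-2, ...]
        best_jj = prev[0] - prices[0] # max of f[2k-2, jj-1] - prices[jj]
        for ii in range(1, len(prices)):
            f[ii] = max(f[ii - 1], prices[ii] + best_jj)
            best_jj = max(best_jj, prev[ii - 1] - prices[ii])
    return f[-1]
# max_profit([1, 18, 2, 19], 2) == 34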

–solution 1 (O(NN) brute force): construct all possible pairs, rank them and pick top 2.

–solution 2 (O(N) only works for K=2)

  1. Identify all the turning points so we end up with HLHLHL… We can eliminate or ignore the other points.
  2. * identify the best pair using the max-profit algo. denote them as L1/Hj
  3. * In the subarray before L1, find the best pair
  4. * in the subarray after Hj, find the best pair
  5. pick the best among the two and denote it as p2
  6. Now look into the subarray L1 to Hj. If there’s no enclosed pairs within then we have a simple case — use L1/Hj and p2. But let’s assume there are at least 2 nodes enclosed. I will denote entire subarray as L1 H1 L2 H2 … Lj Hj (where L1-Hj is the max-profit)
  7. * use max-profit algo to find the worst loss from H1 to Lj. Suppose it’s H3 to L5.
  8. If this loss exceeds p2, then we will return L1/H3 and L5/Hj. Otherwise, return L1/Hj and p2

This solution uses the watermark algo 4 times (*).

I feel basic decision is to break the best pair or keep it

case: need to break the highest pair into 2 pairs,
case: best pair + another pair outside. I think this is easy..
case: 1,18,2,19,15,16. Perhaps the hardest case to solve.

–other ideas, for K > 2

can we use a matrix?

We can keep track of all profitable pairs i.e. le/ri indices, and also a pointer to the current best pair that’s not overlapping with “me”.

After creating 2nd pair, IFF no overlap, then we update the pointers in both instances.

After creating 7th pair, if it doesn’t overlap with the #3 highest pair, then check-update the pointer in #3.

I think if we can efficiently keep track of these then it should work.

reverse slist in K-groups

https://leetcode.com/problems/reverse-nodes-in-k-group/description/ is the problem I tried today, not a classic problem. Challenge is not the algorithm per se but the Edit-Compile-Test-Debug cycle. I think some of us can come up with a conceptual algorithm quickly, but implementing it correctly took me hours.

Similarly, the problems below are not tough due to algorithm but the ECTD cycle can take hours, sometimes due to c++ iterator pitfalls, sometimes because we can’t easily visualize the data structure .. I wrestled with all of these problems, so please feel free to try them and discuss with me.

* print any tree (you can start with a binary) by level, in zigzag sequence
* given a linked list, write a function to remove all nodes greater than 55 (or any user input). Return the head of the modified list.
* https://www.geeksforgeeks.org/zigzag-or-diagonal-traversal-of-matrix/
* https://www.geeksforgeeks.org/create-a-matrix-with-alternating-rectangles-of-0-and-x/
* https://bintanvictor.wordpress.com/2018/02/06/spiral-number-printer/

As decided last week, I didn’t bother to run the Leetcode test suite. They make me feel frustrated, worthless, defeated, inferior, a weakling, a quitter…. Without these tests I ran my own tests and I feel like a joyful hacker.

Even though I may not pass all Leetcode tests, I feel my code is reasonable quality and I’m proud of it.

—-Problem is well-defined but not very common.

Given a linked list, reverse the nodes of a linked list k at a time and return its modified list.

k is a positive integer and is less than or equal to the length of the linked list. If the number of nodes is not a multiple of k then the left-out nodes at the end should remain as they are. O(1) space. Hopefully O(N) time.

—-My sol1: use my existing O(1) solution but now keep a count.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/linkedListReverseInK_group.cpp

The first group and the last group are both tricky and can take up hours.

size-N array find The duplicate int #1~N+1#Abhinav

https://leetcode.com/problems/find-the-duplicate-number/description/: Given an immutable int array nums containing n + 1 elements where each element is between 1 and n (inclusive), prove that at least one duplicate number must exist. You are guaranteed that there is only one duplicate number, find the duplicate value in O(1) space, below O(NN) time. The culprit may repeat many times.

I didn’t bother to write the code.

===== analysis =====

contributed by a user and highly contrived:(
many likes:)

–bisection solution in O(N logN) time and O(1) space. I came up with this solution within a minute.

  1. Divide the full range [1 to n] into 2 almost-equal ranges (i.e. if n = 2K+1, then I use [1 to K] and [K+1 to n] as 2 ranges)
  2. Count how many nodes are in each range. Clearly one of the two ranges must have too many elements.
  3. Remember the boundary of that bad range so from now on we will ignore those nodes falling into the good range. We will use 2 variables to update/improve the boundary, until they coincide.
  4. within the bad range, repeat Step 1.

Key insight — progressive bisection.. non-recursive.

Key insight — applying the pigeonhole principle, we split the conceptual range. The more common (but ineffective) technique would split the physical array.
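The bisection in python — the counting is over the conceptual value range, never the physical array:

def find_duplicate(nums):             # O(N logN) time, O(1) space
    lo, hi = 1, len(nums) - 1         # value range [1, n]
    while lo < hi:
        mid = (lo + hi) // 2
        if sum(1 for v in nums if lo <= v <= mid) > mid - lo + 1:
            hi = mid                  # pigeonhole: low half too crowded
        else:
            lo = mid + 1
    return lo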

generate loopfree paths: graph node A→B|anyPair

Q1: given 2 nodes in a graph containing N (eg 121) nodes, potentially with cycles, generate all simple paths between the pair. A simple path has no cycle. (In other words, for a simple path, length + 1 ==  # unique nodes)

I think there are classic math algorithms for it, because this is part of basic graph theory. Here are some applications of this type of algorithms —

  • Q1b (special case of Q1): given 2 nodes in a C-by-R rectangular grid, where every node is connected to (up to) four neighbors, generate all cycle-free paths between the pair.
    • Below, I solved this problem in python
  • Q2 (simple application of Q1 algo): generate all simple paths between ALL node pair in a graph. The shortest simple path has length=0. Longest simple path can potentially visit every node exactly once.
  • A: first generate all 121-Choose-2 node pairs. For each pair, solve Q1. Lastly generate the 121 trivial paths of length=0.
    • repetitive 😦
  • Q2b (special case of Q2): In a C-by-R rectangular grid, where every node is connected to (up to) four neighbors, generate all cycle-free paths between ALL pairs. I believe this simple leetcode problem#79 does it.
    • Now I think the algo is much simpler than I imagined. Should really code it by hand.
  • Q2c (easy one based on Q2): given a binary tree containing no cycles, generate all paths.

— A1b: my DFT implementation (probably not 100% correct), where each “trail” either fails or becomes a path.

  1. from NodeA start a breadcrumb/trail. We can’t revisit any node already visited on current breadcrumb,
    1. if this is a matrix, then instead of a hashtable, we can also use a shadow matrix, but the breadcrumb is much smaller than a shadow matrix
  2. if we reach a node surrounded by nodes on the same breadcrumb, then the trail fails at a dead-end
  3. else we will reach NodeB 🙂 Print the breadcrumb

By construction, we won’t see duplicate paths 🙂

https://github.com/tiger40490/repo1/blob/py1/py/algo_grid/classic_count4waySimplePaths.py is the implementation

–BFT? I don’t think BFT can print each unique path

fewest jumps to reach right end #triple jump

Q(Leetcode 45): Given an array of non-negative integers, you are initially positioned at the first index of the array. Each element in the array represents the maximum permitted jump length from that position. Reach the last index in the fewest jumps.

==== analysis =====
https://github.com/tiger40490/repo1/blob/py1/py/array/tripleJump.py is my solution, NOT tested on Leetcode. I won’t bother to test on leetcode. Protect my joy, momentum, absorbency

My solution not only determines feasibility but also finds the fewest jumps.

Typical DP + greedy algorithm. I will jump leftward starting from the right end [1].

Suppose there are N=99 nodes in the array. I will pre-scan the N nodes to build a shadow array of integer records, each a BestLefNode. (The first record is unused.)

Eg: If BestLefNode[44] == 33, it means that based on known data so far, the left-most (furthest) node we can jump to from Node #44 is Node #33.

Suppose the original array shows that from Node #7 we can jump 11 steps ahead. When we visit Node #7 during the rightward scan, we will update (up to) 11 BestLefNode records. These 11 records are located at #8 onwards. Each record will be updated with “7” if appropriate.

As soon as we update BestLefNode[N-1] i.e. right-most record, we exit the initial scan since the optimal solution is now available. For example, if rightmost BestLefNode has value #88, that means the furthest node we can reach from the right end is Node #88, so we will jump to #88 and then check the best destination From #88.

[1] why not start from left end and jump rightward? No I think there’s no symmetry in this problem. From Node 1 the immediately-reachable nodes are a continuous region.
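A direct sketch of the scheme above (BestLefNode naming from the post). The pre-scan may cost O(N * maxJump) in the worst case:

def fewest_jumps(nums):
    n = len(nums)
    best_lef = [None] * n             # shadow array of records
    for i in range(n):                # rightward scan
        for j in range(i + 1, min(i + nums[i], n - 1) + 1):
            if best_lef[j] is None:   # first writer is the left-most
                best_lef[j] = i
        if best_lef[n - 1] is not None:
            break                     # optimal data for the right end ready
    jumps, pos = 0, n - 1
    while pos > 0:                    # jump leftward from the right end
        if best_lef[pos] is None:
            return -1                 # right end unreachable
        pos = best_lef[pos]
        jumps += 1
    return jumps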

longest consecutive ints]O(N) #zebra

Popularity — 1000+ likes on Leetcode … possibly popular

Q(Leetcode #128): Given an unsorted array of integers, find the longest consecutive element sequence, in O(N) time. Eg: given [100, 4, 200, 1, 3, 2] return [1,2,3,4]

I call this the zebra problem because  every consecutive sequence of int is a black stripe and the gaps between them are white stripes. We want the widest black stripe. Obviously, each stripe has minimum size 1.

https://github.com/tiger40490/repo1/blob/py1/py/array/zebra.py is my O(N) solution, not tested on Leetcode.

========

What’s UnionFind? A reusable technique?

Like inserting interval #merging #80% done, I feel this is a data structure problem.

To keep things simple, I will first run one iteration to remove all duplicate items.

I will use a hashtable where the key is a known item. The value is a pointer to a “segment” object.

A segment stores the min and max values. All integers within [min, max] of the segment are always known-items during my scan of input array.

When a new item is either min-1 or max+1, we expand the segment by adjusting the extremes…

The trick is joining two segments, without link pointers. After joining, we don’t really adjust the min/max fields. We only update the max-length global variable if needed.

To keep the hashtable small, I can optionally delete from it but we don’t want to do a range delete within the loop — O(NN)
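The segment-joining in python — only the two extremes of a segment need correct lengths, so joining needs no link pointers:

def longest_consecutive(nums):
    length_at = {}                    # boundary item -> segment length
    best = 0
    for v in set(nums):               # duplicates removed up front
        left = length_at.get(v - 1, 0)     # adjoining segment below
        right = length_at.get(v + 1, 0)    # adjoining segment above
        joined = left + right + 1
        length_at[v] = joined
        length_at[v - left] = joined       # update the min extreme
        length_at[v + right] = joined      # update the max extreme
        best = max(best, joined)
    return best
# longest_consecutive([100, 4, 200, 1, 3, 2]) == 4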

max-sum path Down binTree #self-tested

Q1: Given a non-empty binary tree of signed integers, find the maximum path sum. For this problem, a path is defined as any sequence of nodes from any starting node to any node in the tree along the parent->child connections. The path must contain at least one node and does not need to go through the root. No uplink. No cycle.

Luckily, there’s no published solution for this modified leetcode problem 🙂

====analysis====

My solution — DFT. Along each root-to-leaf path, use the max-subarray (Kadane) algo and store maxSumEndingHere value in each node, for reuse.

Q: is there any duplicate work?
A: I hope not, thanks to memoization i.e. Node::meh field

Q: do we visit every path?
A: I think so.

I simplified the idea further in

https://github.com/tiger40490/repo1/blob/cpp1/cpp/algo_binTree/maxPathSum.cpp

Time complexity is .. O(V+E) = O(N), since I visit every node and follow each edge once only.

There might be algorithmically superior solutions on leetcode but I don’t want it to affect my joy, motivation and momentum.

inserting interval #merging

Q (Leetcode): Given a set of non-overlapping intervals, insert a new interval into the existing intervals (merge if necessary) and print the updated list of intervals. Intervals are given as a vector sorted by start time.

–analysis–

Now I feel the #1 main data structure is a doubly linked list (dlist) of Segment objects:

  • { segment_left_mark,
  • ptr to next node, ptr to prev node
  • optionally a (bool or) enum having A/B, where A means the current segment is AboveWater (an interval) and B means BelowWater (a gap)}.

Every time this dlist is modified, we would update a “helper container” — a tree of node pointers, sorted by the segment_left_mark value. The tree helps successive inserts. However, if each insert(vector intervals) call receives a sorted vector, then we can binary-search the vector and don’t need the tree.

First, binary search to locate the left mark among all existing marks. Ditto right mark. Based on these 2 results, there are many cases.

  1. done — Case (simple) both fall into the same existing interval. No op
  2. done — case (simple) both fall into the same gap segment. Create 2 new segments and insert into the dlist
  3. done — case (simple) one boundary falls into a gap, the other falls into an adjacent interval — just adjust the segment_left_mark without inserting a new segment
  4. done — case — bridge: both boundaries fall into different intervals. Adjust segment_left_mark of 2 affected segments, then link up the two to skip the intermediate segments
  5. done — case — wipeout: both boundaries fall into different gaps, wiping out at least 1 interval.
  6. done — case (most complex) — one falls into an interval, the other into a non-adjacent gap.
  7. case — incoming interval left boundary is lower than all boundaries, but right boundary falls into some segment
  8. case — incoming interval is very low
  9. case (special) — if an interval becomes adjacent to another, then merge the two.

Need a sorted tree of all marks + array of segments. Redundant but helpful.

Each segment (interval or gap) is represented by {left mark, right mark} where left <= right. I will save the segment objects into (a linked list and) an array. Even elements are interval objects and odd elements are gap objects. Now superseded by the dlist.

I think this problem is all about corner cases. Perhaps start with the complex cases which will take care of the simpler cases. No need to pass Leetcode tests. Due to the pointer complexity, I prefer python.

https://github.com/tiger40490/repo1/blob/py1/py/linklist/insertInterval.py is my solution but I dare not test on Leetcode
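
For contrast, the plain vector-based approach is a single linear pass; a sketch (note that touching intervals merge here, which also covers the adjacency case 9):

def insert_interval(intervals, new):
    lo, hi = new
    out, i, n = [], 0, len(intervals)
    while i < n and intervals[i][1] < lo:   # strictly left of the new interval
        out.append(intervals[i]); i += 1
    while i < n and intervals[i][0] <= hi:  # overlapping or touching: absorb
        lo = min(lo, intervals[i][0])
        hi = max(hi, intervals[i][1])
        i += 1
    out.append([lo, hi])
    out.extend(intervals[i:])               # strictly right
    return out

print(insert_interval([[1, 3], [6, 9]], [2, 5]))   # [[1, 5], [6, 9]]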

max rectangle ] histogram

Q: https://leetcode.com/problems/largest-rectangle-in-histogram/description/. Given N possibly recurring non-negative integers representing the histogram’s bar heights, and given the width of each bar is 1, find the area of largest rectangle in the histogram.

Visually well-defined problem. Kind of naturally-occurring. Very simple data structure. No O() requirement, so I will just try my own solution.

https://github.com/tiger40490/repo1/blob/py1/py/array/maxHistoBox.py is my solution. 100% passed on Leetcode.

==== analysis — heavy on data structure design.

Key insight — one scan to update a clever data structure.

key insight — data structure is not per bar, but per height!

For every bar J, there exists an enclosing max-rectangle of J’s height. We can just compare all of these rectangles.

We might start with two extreme candidates
1) the peak — whose enclosing rectangle is likely slender — O(N) one scan to find all the peaks
2) the lowest bar — whose enclosing rectangle has width N — O(N)

If we paint the histogram as a binary matrix, then this is equivalent to another problem, max all-black submatrix #DP #zhurong, but I think there exist better solutions like O(N logN) or O(N*S) …

–homegrown algo with O(N*S) where S := #unique heights. The binary search doesn’t show up as logS.

A pre-scan to get all distinct heights. For each distinct height, we maintain a RunRecord object {bestRun, currentRunStart, height}, in a sorted map {height -> record}. In py, I can use a pre-sorted vector of Records, sorted on height

In the main scan, as we encounter a new bar of height J, we update these records.

  • if not falling or rising
    • record-J and each record-H below J must have a current run … extend that run (no-op)
  • if rising from height H
    • each record up to H must have a current run … extend that run by no-op
      • iterate the treemap up to H
    • iterate treemap from H+1 to J. start a new run for each record
  • if falling from height P to J
    • record-J and each record-H (where H <J) must have a current run … extend that run
    • iterate treemap from J+1 to P … each record-K must have a current run, indicated by a valid currentRunStart; this record’s current run has just ended. We update bestRun and put an invalid value into currentRunStart.

At end of the main scan, every record has a bestRun i.e. the duration. I can then calc the area under each bestRun and return the max.
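
A simplified Python sketch of the per-height run idea. To keep it short it scans every distinct height at every bar (no treemap range iteration), so it is plainly O(N*S):

def max_histo_box(heights):
    distinct = sorted(set(heights))
    cur_start = {h: None for h in distinct}   # None = no current run
    best_run = {h: 0 for h in distinct}
    for i, bar in enumerate(heights + [0]):   # sentinel 0 ends every run
        for h in distinct:
            if h <= bar:
                if cur_start[h] is None:
                    cur_start[h] = i          # a new run starts at height h
            elif cur_start[h] is not None:
                best_run[h] = max(best_run[h], i - cur_start[h])
                cur_start[h] = None           # the run just ended
    return max((h * best_run[h] for h in distinct), default=0)

print(max_histo_box([2, 1, 5, 6, 2, 3]))   # 10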

find min substr contain`all my fav chars

Update — a similar sliding window is used in longest substring without repeating chars

Q (leetcode): Given a string Haystack and a string T, find the minimum window in Haystack which contains (at least) all the characters in T according to the frequencies. Time complexity O(n). Eg: minWindow(ccbabccbabcb, bbc)==bcb

If there is such a window, you are guaranteed that there will always be only one unique minimum window in Haystack. <– I thought this guarantee means something but it doesn’t.

Without loss of generality, I will assume the chars are a-z. I believe those Leetcode corner cases will use only 3 chars

—analysis—

For single-string problem, use array indexed by ascii code. I can convert T to such an array to store the required frequencies (reqFrq)

I can construct a shadow array, same length as Haystack with these payloads:

  • if the hay is not in reqFrq, then payload is a special value like nullptr
  • if the hay is in reqFrq, then….?

–SolSW: sliding-window based

  1. Scan Haystack from left and keep count of actual frequency (check against reqFrq each time). I will inevitably find the earliest good window. By construction, both ends of this window are in reqFrq.
    • Note the entire haystack is itself a good window, just not a minimal one.
  2. Now I slide the fixed-sized window. If I find another good window, with extra chars on the left, then I have found a shorter window, so I truncate my window on the left
  3. continue Step 2
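
A minimal Python sketch of SolSW: one pass, shrink from the left whenever the window turns good, then deliberately break the window to hunt for the next one. The ascii-indexed array is replaced by a Counter for brevity:

from collections import Counter

def min_window(haystack, t):
    need = Counter(t)          # required frequency per char (reqFrq)
    missing = len(t)           # required chars still missing from the window
    start = end = left = 0     # best window so far is haystack[start:end]
    for right, ch in enumerate(haystack, 1):   # right = exclusive end
        if need[ch] > 0:
            missing -= 1
        need[ch] -= 1
        if missing == 0:                        # [left, right) is a good window
            while need[haystack[left]] < 0:     # drop surplus chars on the left
                need[haystack[left]] += 1
                left += 1
            if end == 0 or right - left < end - start:
                start, end = left, right
            need[haystack[left]] += 1           # give up the leftmost required
            missing += 1                        # char, so the hunt continues
            left += 1
    return haystack[start:end]

print(min_window("ccbabccbabcb", "bbc"))   # bcb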

Longest Parentheses run with multiple hierarchies

Q (Leetcode): Given a string containing nothing but the characters ‘(‘ and ‘)’, find the length of the longest valid (well-formed) parentheses substring.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/str/maxParensRun.cpp is my solution 100% tested on Leetcode

–My Single-iteration solution:

Challenge is data structure. I ended up with 2 data structures to be updated during the iteration

  1. A stack (holding openers’ index values) to locate the matching openers
  2. an array to save “scores”

For each closer, I will record the position of the matching opener, then compute the distance (minimum two).
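
For reference, the classic single-stack variant needs no separate scores array: the stack holds indices of unmatched chars, and every closer measures the distance back to the latest unmatched index. A sketch:

def max_parens_run(s):
    stack = [-1]               # sentinel: index just before the current valid run
    best = 0
    for i, ch in enumerate(s):
        if ch == '(':
            stack.append(i)
        else:
            stack.pop()
            if not stack:
                stack.append(i)            # unmatched closer = new sentinel
            else:
                best = max(best, i - stack[-1])
    return best

print(max_parens_run(")()())"))   # 4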


min-cost partitioning #c++Flex #rare

Q: You will be given a natural number array and a threshold value T. The threshold represents the maximum length of subarrays that may be created for the challenge. Each sub-array you create has a cost equal to maximum integer within the sub-array. Your challenge is to partition the entire array into sub-arrays no longer than the threshold, and do it at minimum cost.

Function Description
Complete the function calculateCost in the editor below. The function must return an integer denoting the minimum cost of partitioning the array.

calculateCost has the following parameter(s):
a = [a[0],…a[n-1]]: the integer array to be divided into sub-arrays
k: the threshold value, i.e the maximum size of any sub-array

Constraints
• 1 ≤ n ≤ 5000
• 1 ≤ k ≤ 500
• 1 ≤ a[i] ≤ 100000

For example, for T=2 and original array {1,5,2}, you have two ways to partition it:

  • {1} {5,2} total cost = 1 + 5 = 6 (this is lowest cost)
  • {1,5} {2} total cost = 5 + 2 = 7

— My greedy AlgoAA:

Update: thanks to XR here is an edge case to break AlgoAA: {49,50,99,0,98}

I will use the terms “group” and “subarray” interchangeably. A lone wolf is a group of one node.

I would first identify the global peak value, like 99. Should this node be a lone wolf? No. I can prove that it should “absorb” a neighbor node and become a subarray of two [1]. Should it absorb a 3rd node? I think I can again prove that it should. Therefore my greedy algorithm would first create a subarray of size K around the peak, leaving behind a left segment (and also a right segment), where we apply the same greedy algorithm.

[1] my informal proof — suppose the left neighbor has value 6 and is a lone wolf in the final grouping. We can improve this final grouping by merging this node with the peak; total cost would reduce by 6. In another scenario, suppose this node (value 6) is within subarray #12. Again, we can break up subarray #12, move out this “6” and merge it with the peak, without breaking any rule or increasing total cost.

So what algorithm to create the first subarray around the peak? Let’s assume k=3. There are up to 3 candidate groups, since the peak can be the first node, 2nd node or last node in its group. We can use a sliding window (of width 3) to identify the best among the candidates.

Q: why start from the peak not start from end of the array?
A: If you do, you may separate 2nd highest node from the peak, when they are adjacent. My AlgoAA would identify this situation early on, and put them in the same group.

— My greedy AlgoBB:

Each time after the window slide, we will compare the new window with the best window so far. The comparison is first based on the 2nd highest value in the window. If tied, then compare 3rd highest value in the window..

I think this is not hard to implement — convert each window to a heap then compare top to bottom.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/minCostPartition_Flex.cpp is a briefly tested implementation .. 60% confident.
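
Since the {49,50,99,0,98} edge case breaks AlgoAA, the textbook O(n*k) DP makes a safer baseline. A sketch of my own, independent of the greedy algos and the Flex submission:

def calculate_cost(a, k):
    n = len(a)
    INF = float('inf')
    dp = [0] + [INF] * n               # dp[i]: min cost to partition a[:i]
    for i in range(1, n + 1):
        seg_max = 0
        for j in range(i - 1, max(i - k, 0) - 1, -1):   # last group is a[j:i]
            seg_max = max(seg_max, a[j])
            dp[i] = min(dp[i], dp[j] + seg_max)
    return dp[n]

print(calculate_cost([1, 5, 2], 2))   # 6, matching the worked example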

max-profit buy 1 sell any-bought #pimco java Q8

Pimco Java HackerRank Q8

Q8: Each minute, your trading platform allows you to either buy one share, sell any number of shares that you own (short sell forbidden), or not make any transaction at all. Your task is to find the maximum profit you can obtain with an optimal trading strategy.

I remember having issues with some HackerRank test cases. Should use 64-bit int rather than the default java int.

This problem appears to be very similar to day-trading in hindsight #Nsdq but requires drastically different thinking:

  • a single scan to the left, starting from the last price point.
  • any time there’s a left-ward drop, we have a trading opportunity!

https://github.com/tiger40490/repo1/blob/py1/py/array/maxProfit_buy1sellAny.py is my tested solution, peer-reviewed by Ashish.
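
A minimal sketch of that right-to-left scan: every leftward drop is a buy, sold later at the running maximum seen to its right:

def max_profit(prices):
    best_sell = profit = 0
    for p in reversed(prices):
        if p > best_sell:
            best_sell = p                 # a new best price to sell at
        else:
            profit += best_sell - p       # buy here, sell at best_sell
    return profit

print(max_profit([1, 2, 100]))   # 197: buy at 1 and at 2, sell both at 100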

count paths between 2 binTree nodes #PimcoQ9 Ashish,Pinsky

The DP idea — compare matrix-path-counter and EditDistance

  • showcasing efficient queue in python.
  • showcasing using (x,y) coordinates as dictionary key
  • showcasing find max value in a dictionary

— Requirement: (https://leetcode.com/problems/unique-paths-ii/description/ is similar)
See Q9.pdf in the email to Ashish. Here are some key points:

Consider a maze mapped to a 2d-grid with an upper left corner at coordinates (row, column) = (0, 0). Any movement must be in increasing row or column direction. You must determine the number of distinct paths through the maze. You will always start at position (0, 0), the top left, and end up at (max(row), max(column)), the bottom right.
1 1 0 1
1 1 1 1
As an example, consider the above matrix where 1 indicates an open cell and 0 indicates blocked. You can only travel through open cells, so no path can go through the cell at (0, 2). There are two distinct paths to the goal. As a 2nd example, matrix below has 10 paths:
1 1 1 1
1 1 1 1
1 1 1 1

https://github.com/tiger40490/repo1/blob/py1/py/2d/classic_pathCountQ9.py is my solution.

==== Analysis ====
I see a binary tree, where each node is a cell. Cell (0,0) has down/left node (1,0) + right node (0,1). I feel this is similar to a more generic problem “count paths between 2 nodes in a binary tree”

— DFT:
😦 I implemented something like a DFT but it was too slow with some test cases
😦 can overflow stack
🙂 DFT can print each unique path. I think BFT can’t.
🙂 DFT is easier to implement than BFT

— DynamicProgramming BFT solution from origin, as I described to Bill Pinsky:
This solution is more versatile than the one from Ashish. It can handle any directed graph.

Mark each node as “computed” once we have computed a score denoting how many paths-from-root there are. Save the score in a shadow matrix.

Origin node has score 1. Blocked cell has score 0.

Start BFT from origin. The two immediate neighbors are set to 1 (or 0 if blocked). Every node can be computed by adding up above-score + left-score. (This algo is simpler than the EditDistance algo.)

Memoization Performance note — My implementation was extremely slow (albeit correct) until I added an “if already computed, then continue” early in the loop

This Layered DP algo is also described in max-path-sum problems

Q: why is score[1,1] accessed 4 times?
A: node[1,1] is added to the queue twice. Each dequeue would need one check.
A: Also score[1,2] need to access score[1,1] as a parent. Ditto score[2,1]
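
A row-by-row Python sketch of the same shadow-matrix DP. The plain double loop visits cells in a valid order automatically, so no queue is needed:

def count_paths(grid):
    R, C = len(grid), len(grid[0])
    score = [[0] * C for _ in range(R)]    # the shadow matrix
    score[0][0] = grid[0][0]               # origin: 1 if open, 0 if blocked
    for r in range(R):
        for c in range(C):
            if grid[r][c] == 0 or (r, c) == (0, 0):
                continue                   # blocked cells keep score 0
            above = score[r-1][c] if r else 0
            left = score[r][c-1] if c else 0
            score[r][c] = above + left
    return score[R-1][C-1]

print(count_paths([[1, 1, 0, 1], [1, 1, 1, 1]]))   # 2, as in the 1st example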

— Ashish Singh gave me a much simpler solution, as shown in my github code.

sum@arbitrarySubArray, mutable int #Rahul#segmentTree

Q: given an array of mutable integers, implement subArraySum(le, ri), and updateElement(idx, newVal)

This is data-structure heavy. You need correct data structure to support efficient update/query.

Assumption A: Without loss of generality, i will assume original array length is a power of two, such as 8

— Idea 1: carefully segment the array. Maintain an array of Segment objects {le, sum}

The segments can shrink/expand based on heuristics. For now, I will assume “Segment.le” is immutable.

Every update() will update the Segment.sum in exactly one segment per level.

At the leaf level, there are 8 segments of length one or two. (Given Assumption A, it would be two.)

Next level I will have 4 segments. Each segment at this level consists of exactly 2 leaf segments. Similar to Fenwick tree and segmented binary tree, update() and query() are both O(log N)
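
For the update/query API itself, the Fenwick tree mentioned above is the lighter-weight classic; a self-contained sketch (my own, and it does not need Assumption A):

class FenwickTree:
    def __init__(self, arr):
        self.n = len(arr)
        self.tree = [0] * (self.n + 1)
        self.vals = list(arr)              # keep raw values to compute deltas
        for i, v in enumerate(arr):
            self._add(i + 1, v)
    def _add(self, i, delta):              # internal: 1-based index
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i
    def _prefix(self, i):                  # sum of the first i elements
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s
    def update_element(self, idx, new_val):             # O(log N)
        self._add(idx + 1, new_val - self.vals[idx])
        self.vals[idx] = new_val
    def sub_array_sum(self, le, ri):                    # inclusive, O(log N)
        return self._prefix(ri + 1) - self._prefix(le)

ft = FenwickTree([1, 2, 3, 4])
print(ft.sub_array_sum(1, 2))   # 5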

update each cell with d2nearest 0 #DeepakCM

(Deepak 2019) tough matrix problem: given a black/white matrix with very few white cells, for each black cell (a 1), compute the least number of horizontal/vertical steps (shortest distance) to the nearest white cell (a 0).

Given a Matrix with 1’s and very few 0’s, replace all the 1’s in the matrix with the adjacent distance to nearest 0. There can be more than one ‘0’ in the matrix
Ex : Input: Matrix contains more than one ‘0’ Matrix = {
1, 1, 1, 1, 1,
1, 1, 1, 1, 1,
0, 1, 0, 1, 1,
1, 1, 1, 1, 1,
1, 1, 0, 1, 1 };
Output = {
2, 3, 2, 3, 4,
1, 2, 1, 2, 3,
0, 1, 0, 1, 2,
1, 2, 1, 2, 3,
2, 1, 0, 1, 2 }

——————-
Theoretical limit: O(N)

I will denote W := # white cells. Optimal solution should/might be O(N). For N = 1 quintillion, the run-time should NOT grow even when there are more white locations.

If we know for sure the scores in four neighbors, then it’s O(1) to work out my score. Which cells have known scores? those next to the zeros.

–idea 4: same as frontier but using a queue to hold frontier cells.
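
Idea 4 is the classic multi-source BFS; a minimal sketch:

from collections import deque

def dist_to_nearest_zero(matrix):
    R, C = len(matrix), len(matrix[0])
    dist = [[-1] * C for _ in range(R)]      # -1 marks green (unfinalized)
    q = deque()
    for r in range(R):
        for c in range(C):
            if matrix[r][c] == 0:
                dist[r][c] = 0
                q.append((r, c))             # every 0-cell seeds the frontier
    while q:
        r, c = q.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < R and 0 <= nc < C and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1    # finalized, no longer green
                q.append((nr, nc))
    return dist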

–idea 3 (frontier):
Does this algo work with any graph?
What if the white cells are on the perimeter and the frontier shrinks?
How would two frontiers join?

initial-scan #0a to initialize all non-white locations to -1 (indicating “green”). Save the number of greens in a “greenCount”
initial-Scan #0b to collect all white locations. For each white location F with some green neighbor, save F in a “frontier” collection, perhaps a linked list.

When “saving” also cache (using a 4-bit integer) the nongreen neighbors, to avoid repeated memory access.

Also create an empty “new frontier” collection.

The initial scans can be combined but at the cost of simplicity.

Invariants before any subsequent update-scan —

  • Every frontier location has some green neighbor.
  • new frontier collection is empty.
  • greenCount is up to date.

update-scan #1 update each green location adjacent to the frontier: set its score to 1; it is now finalized and no longer green. If such a finalized location F has a green neighbor, then save F in the new frontier collection.

After the scan, assert the new and old frontier collections have no overlap. Now swap old and new frontier collections and clear the new collection.

update-scan #2 for each location in the frontier, update adjacent green locations to 2, finalized and no longer green. If such a finalized location F has a green neighbor, then save F in the new frontier collection.

Green count should now reduce. When it becomes 0 we are done.

big-O? At each scan, the new frontier is usually larger than the old frontier until an inflection point. If before each update-scan we keep a count of the frontier collection size, I think these counts won’t add up to exceed N. Therefore, total complexity is O(N) provided the fanout is a constant like 4. If fanout is unlimited, then possibly O(V+E) since we visit each node and each edge up to 3 times.

–idea 2 (shells):
scan #1 to save all white cell locations, and save all black cell locations in a shadow matrix (bool shadow matrix of the same size as original matrix) and a blacklist (hashtable indexed by cell location)
For each white, compute its distance to the center. At the end of this scan, designate one white cell as red, i.e. the one closest to the center.

scan #1b[O(N)] update all black cells with d2red. Now we have some baseline values, to be improved
scan #2 [O(W)] for each white, update all cells around it with distance 1. Remove the updated cells from the “blacklist”

Also set the bool in the shadow matrix

Scan #3 for each white, update the 2nd shell

details?

If a black cell is updated by 5 white cells in the same iteration, then all 5 whites would be equally distant, so the first of them would remove the black cell from blacklist.

So each black cell is only updated once .. O(N)?

–idea 1 (DP) incomplete:
Scan#1 from top and left, update each cell with a “TL score” i.e. shortest dist ignoring the Bottom-Right quadrant of cells i.e. cells right-and-lower of current.

consider a typical cell on 2nd row. what’s tl score? Can compute using upper and left neighbors? will it ignore a white on the right?

Scan#2 from bottom right, to compute a BR score for each cell

Scan#3 (can be part of Scan#2) combine the data

Rationale — for each white cell, the shortest path can be in either BR quadrant (Scan2) or (Scan1) the other 3 quadrants.
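
Idea 1 can be completed: the blind spot of Scan#1 (a white to the right or below) is exactly what Scan#2 repairs, and taking the min of the two scans per cell is the combine step. A sketch of this classic two-pass DP (it assumes at least one 0 exists):

def dist_two_pass(matrix):
    R, C = len(matrix), len(matrix[0])
    INF = R + C                     # exceeds any possible distance
    d = [[0 if matrix[r][c] == 0 else INF for c in range(C)] for r in range(R)]
    for r in range(R):              # Scan#1: upper and left neighbors only
        for c in range(C):
            if r: d[r][c] = min(d[r][c], d[r-1][c] + 1)
            if c: d[r][c] = min(d[r][c], d[r][c-1] + 1)
    for r in range(R-1, -1, -1):    # Scan#2: lower and right neighbors only
        for c in range(C-1, -1, -1):
            if r < R-1: d[r][c] = min(d[r][c], d[r+1][c] + 1)
            if c < C-1: d[r][c] = min(d[r][c], d[r][c+1] + 1)
    return d                        # Scan#3 (the combine) is folded into Scan#2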

SDI: elevator design #DeepakCM #70%

My friend Deepak gave me this Basic Requirements:

  • N-level building. You can assume 5 for now
  • Each level’s lift lobby has up/down buttons
  • Inside each lift there are N buttons for the N target floors
  • Any time, system can receive requests from any button

My basic design is not optimized for efficiency. The number of pending requests will stay below 20 since there are only that many buttons, so we iterate over all requests frequently.

My design effort is heavily focused on data structure — The more complex the requirements, the more I need to focus on clean, concise, sound data structure. They may not be necessary — a less optimal data structure can also work, but an optimal data structure helps us tremendously to cope with the complexity. I feel this problem is tractable once the data structures take shape.

Q: what if lift is in motion towards some target when a lift-lobby button is pressed and it happens to be serviceable?
A: Like the pencil solution to the space-pen challenge, my endless loop in main() may qualify as a simple solution to this daunting challenge. The system wakes up frequently to check for new requests. When sleeping, it ignores all inputs.

This simple design, if viable, avoids asynchronous or multi-threading complexities.  http://pubs.vmware.com/foundry1/pg/wwhelp/wwhimpl/common/html/wwhelp.htm?context=pg&file=Foundry_PG_concepts.3.30.html is a similar single-threaded design

For simplicity, I assume the new requests from all buttons show up in some buffer (or database), so we can poll it to see them. There’s no interrupt or callback.

Q: in your design, there are “targets” set by in-lift passengers vs. up/down requests (from lift lobbies) assigned by the system. How do you prioritize between the two types?
A: In general, targets are at higher priority than assignments, but if lift is already moving down towards Level 2 for an assigned request, it will move down all the way till that level.

I don’t want to spend too much time on any module since the correct emphasis/focus could be different in a real world design or design interview.

find median@2sorted arrays #Trex untested

https://leetcode.com/problems/median-of-two-sorted-arrays/description/ is similar except X and Y can be unequal length. My solution solves the harder, generalized problem.

This “coding” question is really a math problem. Once you work out the math techniques, the coding is simple.

Designate arr1 as the shorter array. Compare med(arr1) vs med(arr2).

Suppose the former is lower. I can discard the lower half of arr1 (s items). Can I discard the highest s items in arr2? I think so, because the upper half of arr2 cannot contain the combined median, so any subset of it can be discarded.

Repeat until arr1 is completely discarded or left with a single element, which might be the final median. Now the answer is close to the median of the remaining arr2.

–For the equal-length problem, My own idea on the spot — find the median of X and median of Y. If med(X) < med(Y) then discard the lower portion of X i.e. the “XB group”, and higher portion of Y (“YA group”). Then repeat.

  • Note len(XB) == len(YA) == min(len(X), len(Y))/2 := K. So every iteration would shrink the shorter array by half (i.e. K), and shrink the longer array by K. K would drop in value in next iteration.
  • loop exit — When the shorter of the two (say it’s X) shrinks to length 1, we are lucky — find the numbers around median(Y) and adjust the answer based on X[0].

Insight — Why can’t the final “winner” be somewhere in the XB group? Because XA + YA already constitute half the population, and all of them are higher.

I always like concrete examples. So Suppose there are 512 items in the lower portion “XB group”, and the higher portion “XA” has 512 items. Suppose there are 128 items each in YB and YA groups. So in this iteration, we discard YA and the lowest 128 items in XB.

Definition of lower portion —
* all lower items up to but not including med(X) If len(X) is odd
* exactly the lower half of X if len(X) is even
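
The discard-half idea generalizes neatly when packaged as a kth-smallest helper; a sketch of that classic formulation (mine, and untested on Leetcode, like the rest of this section):

def kth(a, b, k):          # k-th (1-based) smallest in the union of two sorted lists
    if len(a) > len(b):
        a, b = b, a        # keep a as the shorter array
    if not a:
        return b[k - 1]
    if k == 1:
        return min(a[0], b[0])
    i = min(len(a), k // 2)
    j = k - i
    if a[i - 1] <= b[j - 1]:
        return kth(a[i:], b, k - i)    # a's lowest i items can't hold the k-th
    else:
        return kth(a, b[j:], k - j)    # ditto for b's lowest j items

def median_2sorted(x, y):
    n = len(x) + len(y)                # assumes n >= 1
    if n % 2:
        return kth(x, y, n // 2 + 1)
    return (kth(x, y, n // 2) + kth(x, y, n // 2 + 1)) / 2.0

print(median_2sorted([1, 3], [2]))    # 2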

bbg: allocate half the players to NY^SF

Always good to be concrete, so without loss of generality, let’s say J=99 i.e. 198 players.

Q1: Given 2J players, we need to allocate exactly half of them to NY and the rest to San Francisco, to form two teams for a inter-city contest. We receive the NY fare and SF fare for each player, as 2J (i.e. 198) pairs of integers. How can we minimize the total fares?

Q2(bonus): what if 2J+1 players, and you can decide which city gets an extra player?

— Analysis: how many possible allocations? 2J-choose-J, a huge number that grows exponentially with J, so brute force is impractical.

If for any player the two fares both go up by $9800, it doesn’t change our algorithm at all. Therefore, we only care about the fare difference (N-S) for each player.

— solution: I will assume most players live near SF so SF fares are lower. I tabulate and sort the 198 fare differences “NY fare – SF fare” and suppose indeed at least 99 (half) are positive[1]. Therefore, my “base” is SF i.e. my base allocation is everyone-allocated-to-SF. Next I must pick 99 (half) of them and allocate to NY.

I will avoid the higher values in the table.

I simply pick the lower 99 (half) in the table, because these are the 99 “extra” costs I will incur. I want to minimize the total of these 99 values, whether they are mostly positive or mostly negative.

  • Note the highest number in the table indicates the most expensive player for NY relative to SF. If this number is negative, then SF is more expensive than NY for all players, so follow the rule in [1]; also notice he is then the least expensive player for SF relative to NY. Regardless of positive/negative sign, we want to keep this person in SF.
  • Note the lowest number in the table indicates the least expensive player for NY relative to SF. If this number is negative, then SF is more expensive than NY for this player — normal. Regardless, we want to allocate her to NY.

[1] if proven otherwise (by the data), then we could easily treat NY as base to keep the solution intuitive. Actually even if we don’t change the base, the algorithm still works, albeit unintuitively.

A2: Say J=99, so 199 players in total. We already picked 99 for NY. Do we pick one more for NY or keep the remaining 100 in SF?

Just look at the next lowest value in the table. If positive, then NY is more expensive for him, so keep him in SF.
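
A compact sketch of the sort-by-difference allocation. For A2, just inspect the sign of the value at the boundary, as described above:

def min_total_fare(fares):             # fares: list of (ny, sf) pairs, length 2J
    order = sorted(range(len(fares)), key=lambda i: fares[i][0] - fares[i][1])
    half = len(fares) // 2
    ny, sf = order[:half], order[half:]    # lowest NY-SF differences fly to NY
    return sum(fares[i][0] for i in ny) + sum(fares[i][1] for i in sf)

print(min_total_fare([(10, 20), (30, 200), (400, 50), (30, 20)]))   # 110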

punctuate ContinuousSentence

I believe https://leetcode.com/problems/word-break/description/ is identical.

Q (bbg): Given a dictionary of English words (containing no numbers no underscores) and a very long sentence, someone has removed all the spaces. Please write a program to restore it by adding a space after each word. If there’s no way to parse the sentence just return False, otherwise return True to indicate you found at least one way to parse the sentence.

The sentence can have repeated words.

I was allowed to use either white-board or computer.

Brute force? I struggled for a long time with some vague idea. Interviewer asked me to implement any solution I had, however inefficient, but I was unable to.

Key observation — This is a typical QQ type of algo question. If you remember one key point, then you have an advantage.

Key observation — syntax and ECT is not really hard. Only simple string operations. Data structure is simple. Challenge is an elegant (precious) pure algo.

Key observation — the recursive backtracking solution is working and clever-looking but actually not the very best. Still, recursive thinking is valuable and gave me a first solution. I’m not ashamed of my recursive solution.

Key observation — This problem is considered not too hard because …. the solution is short! But most of us can’t conceive such a solution! I won’t trust the difficulty level in leetcode.com

https://github.com/tiger40490/repo1/blob/py1/py/str/continuousSentenceBBG.py shows my recursive backtracking solution. Interviewers basically said it works.

There’s an inherent tree structure in this greedy backtracking algo. This tree has the power to meet the tougher requirement in Q2. As such, this algorithm is more powerful than the algo below.

— More efficient solution: non-recursive, no backtracking

Suppose the sentence has 99 chars ch[0], ch[1] … ch[98]. We maintain an “aboveWater[99]” bool array. For example, ch[33] is aboveWater if there exists a 4-char bridge word ending at ch[33] and starting right after an aboveWater character ch[29].

The bool array is assumed all-underwater initially. We scan it forward once, checking and marking selected chars as aboveWater.

Once the left section of the array is analyzed, we don’t need to revisit it by backtracking.

The simplest algo — forward-iterate every position. At each position j,
look at the last 2 chars j-1~j
look at the last 3 chars j-2~j
look at the last 4 chars j-3~j
… If any is a bridge word starting right after an aboveWater position, then mark j as aboveWater and move on.
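
A minimal sketch of this aboveWater scan (checking bridge words of any length, not just 2~4):

def can_punctuate(sentence, words):
    words = set(words)
    n = len(sentence)
    above = [False] * (n + 1)          # above[j]: sentence[:j] fully parsed
    above[0] = True                    # the empty prefix is trivially aboveWater
    for j in range(1, n + 1):
        for i in range(j - 1, -1, -1): # candidate bridge word sentence[i:j]
            if above[i] and sentence[i:j] in words:
                above[j] = True
                break
    return above[n]

print(can_punctuate("iamgroot", ["i", "am", "groot"]))   # True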

— Q2(Leetcode): Given a non-empty string s and a dictionary wordDict containing a list of non-empty words, add spaces in s to construct a sentence where each word is a valid dictionary word. Return ALL such possible sentences.

%%A: My github solution should solve it even if not optimal. Need to test.

25-horse puzzle #Google

Q: A single racetrack. P=25 mechanical horses each with a unique speed. You can race up to 5 horses each time, and get a printout of the ranking, but no timing.

What’s the minimum number of races to identify the fastest #1 #2 #3 horses?

Youtube and my green quant-IV book both have the analysis.

——–analysis——–

Quick elimination — each race eliminates a minimum of 2 horses, but can we eliminate faster?

What if we only need the top T=1 and P is 9? Two races exactly.

Here’s my algo #1

  1. first pick 5 random horses,
  2. race them and eliminate slowest 2 (reduce population by 2). Name the top 3 as AA/BB/CC
  3. race CC with 4 untested random horses. Four scenarios:
    1. if 3 or more beat CC, then eliminate the two slowest including CC, and put the top 3 in a group with AA/BB. Go back to Step b)
    2. if 2 beat CC, then eliminate the three slowest including CC, and put top 2 and an untested horse in a group with AA/BB. Go back to b)
    3. if only one horse beat CC, then eliminate the four slowest including CC, and put the #1 and 2 untested horses in a group with AA/BB. Go back to step b)
    4. if CC is #1 (happy path), then eliminate the other four horses and go back to step c)

Worst case — first race actually had the slowest 5, then we hit case C1 every time. So we only eliminate 2 horses each race

best case — first race had 5 fastest. total 6 races [1] but this is statistically unlikely. If you are not the most lucky, then we will need more than 6 races.

[1] first race eliminates 2… next 5 races each eliminates 4 only if CC is always fastest in every race.

–algo #2:

  1. [5 races] Split into 5 groups and run 5 unrelated races. Discard slowest 2 in each group. We end up with 15 horses.
  2. [1 race] Now race the five #1s. Name them ABCDE. The D and E groups are completely eliminated (-6), the C group is left with C alone (-2), and B group’s #3 is eliminated (-1). We end up with 6 horses. I will name the six as A/B/C and the less-tested A2 A3 B2. There are only three possibilities for the top 3:
    1. ABC
    2. A A2 A3
    3. A B B2
  3. [1 race] so I will race 5 of the 6 horses, sparing A since it’s already the final champion.
  4. — total 7 races best or worst case.

next_perm@N color socks #complex #100%

A common IDE coding challenge — given x pairs of socks of various colors, generate all the permutations, in ascending order. Each color has a value.

–Solution 1: std::next_permutation() and prev_permutation()

–solution 2: I can probably write my own next_perm(). Using this function we can generate an ascending sequence of permutations starting from the current content of a vector.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/combo_permu/nextPerm.cpp is my iterative solution, but should use lower_bound()

https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/nextPermOfNSocks.py is my python solution, overcoming the lower_bound problem.
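
The classic next-permutation step, which copes with duplicate values (socks come in pairs) naturally; a Python sketch:

def next_perm(arr):                    # in-place; False if already the highest
    n = len(arr)
    i = n - 2
    while i >= 0 and arr[i] >= arr[i + 1]:   # find the rightmost ascent
        i -= 1
    if i < 0:
        return False
    j = n - 1
    while arr[j] <= arr[i]:            # rightmost element bigger than arr[i]
        j -= 1
    arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1:] = reversed(arr[i + 1:])      # minimize the tail
    return True

socks = [1, 1, 2]
while True:
    print(socks)                       # [1,1,2] [1,2,1] [2,1,1]
    if not next_perm(socks):
        break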

generate random line-up@Any combo@N boys #70%

A standard permutation/combination problem in some coding tests. You are often required to iterate all of them.

Given N cities, how many permutations of any combinations are there in total?

My iterative sum formula: answer(N)= N_choose_0 + N_choose_1 * 1! + N_choose_2 * 2! + … + N_choose_N * N!

N_choose_0 * 0! = 1, counting the empty line-up.

–recursive algo: use answer(N-1) to answer(N)

–iterative algo:

  • Suppose we have an iterative_next_perm(list) function already written.
  • suppose we have an iterative_next_combo(N, 3) that generates all combinations of 3 out of N distinct chars.

Then, here’s a solution — call

iterative_next_combo(N,1)
iterative_next_combo(N,2)
iterative_next_combo(N,3)…
iterative_next_combo(N,N), in a loop (of N iterations).

Suppose one of the N calls generated 221 combinations. Run a loop (of 221 iterations) each time pass one combination into iterative_next_perm(combo). So our main program has only 2 nested loops. Most of the complexities are encapsulated in iterative_next_perm() and iterative_next_combo()
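
A compact sketch of that 2-nested-loop structure, with itertools standing in for the hypothetical iterative_next_combo() and iterative_next_perm():

from itertools import combinations, permutations

def all_lineups(chars):
    for k in range(len(chars) + 1):              # iterative_next_combo(N, k)
        for combo in combinations(chars, k):
            for perm in permutations(combo):     # iterative_next_perm(combo)
                yield perm

print(sum(1 for _ in all_lineups("abc")))   # 16 == 1 + 3*1! + 3*2! + 1*3!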

next_Perm@3boys out@5 #80%

algo-practice: generate permutation@3, using 5 distinct chars

Such a skill might be needed in some coding IV sooner or later. Let’s write this in py or c++. Formally,

Q1: generate all permutations of 3, from 5 distinct chars, in any order.
Q2: generate all permutations of 3, from 5 distinct chars, in ascending order. You can sort the 5 chars first.
Q2b: once you have generated one permutation, how do you identify The next?

Note the same solution is a generalization of std::next_permutation(), so once you have this solution you have that solution.

How many? 5-choose-3 * 3! = 5!/(5-3)! = 5*4*3 = 60, denoted Answer(3). Paradoxically, Answer(4) == Answer(5) = 120

–algorithm 1 for Q1

  • 1st generate 5 single-char strings;
  • then for each generate 4 two-char strings. We get 20 strings.

–algorithm 1b for Q1: rec(5,3) will call rec(5,2). rec(5,2) has 20 items, each of which can generate 3 items for rec(5,3), because each item has 2 characters and a void to be filled by each of the 3 unused chars.

Each of the 20 items returned should be a pair {vector of 2 used chars, vector of 3 unused chars}

This produces a sorted collection:)

See my code  https://github.com/tiger40490/repo1/blob/py1/py/combo_perm/nextPerm%403boysFrom5.py
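
Incidentally, itertools already emits lexicographic order when the input pool is pre-sorted, which makes a handy oracle for testing a hand-rolled Q2 solution:

from itertools import permutations

def k_perms_sorted(chars, k=3):        # all k-permutations, ascending
    return [''.join(p) for p in permutations(sorted(chars), k)]

print(len(k_perms_sorted("eabdc")))    # 60, and the list is already sorted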


next_Combo@3 boys out@5 #iterative complex

//Without loss of generality, each combination is internally represented
//as a sorted vector (ascending).
//There's one-to-one mapping between such a vector and a combination
#include <iostream>
#include <vector>
#include <iomanip> //setw
#include <algorithm>  //sort
#include <assert.h>
using namespace std;
size_t changes=0, calls=0;
size_t const C=3; //how many in each combination

template<typename ITR> bool isAscending (ITR const b, ITR const end){
  for (ITR last = b, i = b; i!=end; last = i, ++i){ //compare each adjacent pair
    if (*last > *i) {
      cout<<*last<<" should be lower (or equal) but is higher than "<<*i<<endl;
      return false;
    }
  }
  return true;
}
template<typename T> void dump(vector<T> & v,  bool isAssert = true){
  for(int i=0; i<v.size(); ++i) {
    cout<<setw(5)<<v[i];
    if (i == C-1) cout<<"  unused:";
  }
  cout<<endl;
  for(int i=0; i<v.size(); ++i){
    cout<<setw(5)<<i;
    if (i == C-1) cout<<"  unused:";
  }
  cout<<endl<<"---------------------------------"<<endl;
  if(isAssert){
    assert(isAscending(v.begin(), v.begin()+C) && "1st C elements should be ascending after next_combo (not next_perm)");
    assert(isAscending(v.begin()+C, v.end()) && "unused section should be ascending");
  }
}

template<typename T> bool reshuffle(vector<T> & v, int p2u){
//      cout<<"after swap"<<endl; dump(v);
        sort(v.begin()+p2u+1, v.end());
//      cout<<"after sorting everyting to the right of p2u"<<endl; dump(v);

        if (p2u == C-1){
                sort(v.begin()+C, v.end());
                ++changes;
                return true;
        }
        assert(p2u<C-1);
        //now reset everything to my right
        //now locate the best_man (next item after the new p2u) .. can use binary search
        for(size_t i=p2u+1; i<v.size() ; ++i){
          if (v[p2u]<v[i]){
                //cout<<"best man = "<<i<<endl;
                for(size_t j=0; p2u+1+j<=C-1; ++j){
                  swap(v[p2u+1+j], v[i+j]);
                }
                sort(v.begin()+C, v.end());
                ++changes;
                return true;
          }
        }//for
        //a best man always exists here: since p2u < C-1, the old v[C-1] is still
        //to our right and exceeds the new v[p2u], so the loop above must return
        assert(1==0 && "should never reach here");
        return false;
}

// reshuffles vector to the next higher combo
//Assuming 5-choose-3, the 1st 3 chars represent the combination,
//and the remaining characters at the end of the vector are
//unused in the current combination.
template<typename T> bool next_combo(vector<T> & v){
  ++calls;
  dump(v );
  if (v.size() == C) return false; // C-choose-C == 1 !

  for(int p2u=C-1; /*last position*/ p2u >=0 ;--p2u){
    for (int unusedItem=C; unusedItem<v.size(); ++unusedItem){ //scan the unused section of the array
        if (v[p2u] < v[unusedItem]) {
          assert(p2u<unusedItem);
          swap(v[p2u], v[unusedItem]);  //p2u should not change further
        //cout<<"identified "<<p2u<<" as position to upgrade... Will reset subsequent positions, and return"<<endl;
          return reshuffle(v, p2u);
        }
    }
    // no p2u identified yet. move p2u marker to the left
  }//for
  cout<<"no more higher combo. This is the end"<<endl;
  return false;
}
int main() {
//  vector<float> v{111,222,333,444,555,666};
  string tmp = "abcdefg";
  vector<char> v(tmp.begin(), tmp.end());
  assert(C <= v.size());
  for(; calls<9992; ){
        if (!next_combo(v)){
         cout<<changes<<" changes performed till the highest combo; next_combo() call count = "<<calls<<endl;
         return 0;
        }
  }
}

how many rolls to see all 6 values

Q: A fair dice has 6 colors. What’s the expected number of rolls to see all 6 colors?

This is a probability (not IT) interview question my friend Shanyou received.

My analysis:

Suppose it takes 3.1357913 rolls to get 2 distinct colors. how many additional rolls does it take to get the next distinct color? This is equivalent to

“How many coin tosses to get a head, given Pr(head)=4/6 (i.e. another distinct value)” — a Geometric distribution. Why 4/6? Because out of six colors, the four “new” colors are considered successes.

Once we solve this problem then it’s easy to solve “how many additional rolls to get the next distinct value” until we get all 6 values.
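
Assembling the geometric pieces (each stage needs an expected 1/p rolls, and the first roll always succeeds):

$E[\text{rolls}] = \frac{6}{6} + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = 14.7$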

https://math.stackexchange.com/questions/28905/expected-time-to-roll-all-1-through-6-on-a-die is an accepted solution.

LRU cache #Part 1

I believe my friend got this question in a bbg phone interview.

Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations:

get(key) – Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) – Set or insert the value if the key is not already present. When the cache reached its capacity, it should invalidate the least recently used item before inserting a new item.

Follow up:
Could you do both operations in O(1) time complexity?

==Analysis==

Hash table to support lookup. The “value” is a pointer to a link node. Link node also has key/value — 2-way linkage.

The list is a FIFO and grows at the tail for every new key/value pair, so the head is the earliest pair. Every time a key/value is accessed via the hash table, we move its node to the tail.

When capacity is reached, we would remove the head node, i.e. the least recently used. Using the key in that node, we also remove the entry from the hash table.
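
A Python sketch: OrderedDict bundles the hash table and the linked list described above into one structure (head = least recently used, tail = most recent):

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.od = OrderedDict()
    def get(self, key):
        if key not in self.od:
            return -1
        self.od.move_to_end(key)         # accessed: move its node to the tail
        return self.od[key]
    def put(self, key, value):
        if key in self.od:
            self.od.move_to_end(key)
        self.od[key] = value
        if len(self.od) > self.cap:
            self.od.popitem(last=False)  # evict the head i.e. the LRU pair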

min-stack #bbg

My friend Abhinav (not his real name, to protect his privacy) got this question at Bloomberg internship interview. I added some details to make it clearer:

Q: Design a stack that supports push, pop, top, and retrieving the minimum element all with O(1) time complexity in the worst case.

There exists a function compare(Item A, Item B) that returns 1 if A is greater, 0 if equal, and -1 if A is smaller.

  • getMin() — Retrieve the minimum element in the stack.
  • push(x) — Push element x onto stack.
  • pop() — Removes the element on top of the stack.
  • top() — Get the top element.

==== analysis =====

The most efficient getMin() data structure is the binary heap, but insert/delete runs in O(logN). Therefore I felt the requirements here are computationally impossible. But in this context, we only need to support deleting the last added item 🙂

Key insight — popping is a constrained form of deletion. It’s hard to hit O(1) while supporting unconstrained deletions, BUT with a constraint on deletions, all operations can be O(1).

I need a regular stack + a helper data structure. A linked list or vector can support the stack operations

— The helper — a naturally sorted stack (or vector or deque) to hold record-breakers and record-matchers.

IFF a new minimum (record breaker) or another item matching the existing minimum (record-matcher) is added, we push it to the sorted stack.

After every pop(), we check the popped item. If equal to top of sorted stack, then pop the sorted stack.

At any time, top of the sorted stack is the current minimum.

Vector and deque are actually fine. Interviewer may feel they are inferior to a stack, but with a modified requirement, they may become very useful.
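
A minimal sketch of the two-stack design, with a plain list playing the sorted helper stack:

class MinStack:
    def __init__(self):
        self.main = []
        self.mins = []          # naturally sorted; top = current minimum
    def push(self, x):
        self.main.append(x)
        if not self.mins or x <= self.mins[-1]:
            self.mins.append(x)        # record breaker or record matcher
    def pop(self):
        x = self.main.pop()
        if self.mins and x == self.mins[-1]:
            self.mins.pop()
        return x
    def top(self):
        return self.main[-1]
    def get_min(self):
        return self.mins[-1]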

— Here’s a design to beat binary_heap, based on finite-sized (32 or 64-bit int) keys

Assuming the comparison is based on a 32-bit integer key (strings or floats can also use radix sort), I will use a radix array structure: perhaps 4 arrays of 256 elements, or a 4×256 matrix. Ultimately the payloads stored in the data structure are pointers to stack nodes. This saves memory since a stack node may be 99MB.

Every time we push a node, we derive the new key value in O(1), use the key value to locate its ‘home’ in the radix structure and store the new node’s address in a linked list therein. After the push(), we can compare and update (in constant time) a global variable pointing to the minimum node.

Each stack node also has a back-pointer to the iterator into the list, so before pop() we can use the iterator to locate the object in the radix structure and delete it from the host linked list. We will also update the global variable.

tokens shared among friends #Promethean

See also NxN matrix for graph of N nodes

Requirement: There are N friends numbered from 1 to N. There are M pairs of links, where each (x , y ) pair is connected by a shared integer token described by tokenId. Any two friends, x and y , can be connected directly by multiple tokens, or indirectly (without directly shared token) because if friends x and y share token t and friends y and z also share token t , then x and z are also said to share token t.

Note if x/y shares t and u/v share t, then x and u may be unconnected!

Find the maximal product of x and y for any directly or indirectly connected (x , y ) pair such that x and y share the maximal number of tokens with each other. If x/y have 3 tokens connecting them, and u/v also have 3 tokens, then we compare x*y vs u*v.

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/miscIVQ/tokenLinked_Friend.cpp showcasing nested containers like map<int, list<set<int>>>. I believe STL is harder than java, python etc.

construct graph from list of connections #BGC java

Given an input file showing a list of {string, string} pairs, build a connection graph.

If you have a correct connection graph, then you can easily determine the connectedness (bool) of any 2 nodes. In a social-network, this bool flag indicates whether 2 individuals are completely unconnected or somehow connected.

—-analysis:
I see this as a social-network. Any pair represents an edge connecting 2 nodes.  At any time there are a number of disconnected islands. The next pair could 1) merge 2 islands or 2) add a node to an existing island or 3) create a new island 4) do nothing, if the 2 nodes are already in some existing island

  • Any known node appears exactly once in the entire graph, in exactly one of the islands.
  • All nodes are contained in a lookup table or hashmap  {node -> island}
  • Each island can be a implemented as a hashset of nodes.

So here’s a proposed algo to process a new pair {A, B}. Look for A and B in the graph. 3 scenarios + a dummy scenario:

  • (Scenario 3) If both A and B are newcomers, then they form a new island.
  • if both A and B are already in the graph,
    • (Scenario 4) if they are in the same island, then exit. Nothing to do
    • (Scenario 1) else we can merge the 2 islands
  • (Scenario 2) If A is in island 3 but B is a newcomer, then B joins island 3

The merge operation is expensive. The big lookup table needs update but here’s an alternative:

  • At merge time, the smaller island would have all the nodes moved to the bigger island. When the island is empty, it gets a pointer “this.redirect” to the bigger island.
  • lookup table needs no update, avoiding locking a global object.
  • At query time, we look up the table to get the original island, then we follow its pointer (defaults to null) until the island is non-empty.
  • endless loop? would only be a programming error.
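
A Python sketch of the island bookkeeping. This variant moves smaller islands into bigger ones and updates the lookup table at merge time, trading the redirect-pointer trick above for simplicity:

class Islands:
    def __init__(self):
        self.island_of = {}                # node -> island (a set of nodes)
    def connect(self, a, b):
        ia, ib = self.island_of.get(a), self.island_of.get(b)
        if ia is None and ib is None:      # scenario 3: brand new island
            isl = {a, b}
            self.island_of[a] = self.island_of[b] = isl
        elif ia is None:                   # scenario 2: A joins B's island
            ib.add(a); self.island_of[a] = ib
        elif ib is None:
            ia.add(b); self.island_of[b] = ia
        elif ia is not ib:                 # scenario 1: merge smaller into bigger
            if len(ia) < len(ib):
                ia, ib = ib, ia
            for node in ib:
                ia.add(node)
                self.island_of[node] = ia
        # scenario 4: same island already -> nothing to do
    def connected(self, a, b):
        return a in self.island_of and self.island_of[a] is self.island_of.get(b)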

count coordinate points enclosed by polygon #codecon 70%

Q: On a 49-by-49 super-sized GO-board, there are 2500 coordinate points. Now I give you a 15-side polygon described using the [x,y] coordinates of the 15 vertexes. List all enclosed points.

A: Here’s my tentative algorithm. We will paint and repaint all coordinate points in lime and red as we process the 15 vertexes one by one. At the end of drawing the polygon, we will know which color is enclosed. (Without loss of generality, assume vertexes are never on board boundary.)

1st, assume lines are never completely vertical. We will easily deal with vertical lines later.

  1. with 1st vertex, nothing to paint
  2. After 2nd vertex, we have a closed line (i.e. both ends defined), but we will treat it as an “open line” which is a line with both ends on board boundary.
    1. We paint half the board lime and the other half red. All 2500 coordinate points are painted. Any point on the actual closed line (not the open line) is left unpainted.
    2. How do we decide the color? Basically, when moving from Vertex 1 to Vertex 2, points on the left of the direction of travel are painted lime. Specifically, if we are moving rightward, then the points below are painted red; if moving leftward, then below is lime.
  3. When we process 3rd vertex, we create a half-open line, meaning it has only one end defined (the previous vertex). The other end is actually the new vertex but we pretend the line extends beyond it and ends at boundary of the board. This is a divider into either lime or red section — new vertex’s current color will tell us. Suppose it cuts into the red, then part of the red must turn lime , i.e. we repaint the affected points lime. We will repeat this step for each remaining vertex.
    1. so which part to change color? follow the rule in 2.2
    2. Note we only repaint part of the newly-divided region. We don’t touch points in the unaffected region.
  4. When we process 4th vertex, we again create a dividing half-open line, either in the red or lime region.
  5. After we process the last vertex, in addition to the half-open line, we create a final line, this time a closed line with both ends defined. We repaint the “outside” points
  6. Completed. The color of [0,0] is the outside color.

Once we have a “database” or “lookup function” of all the “inside” coordinate points, let’s return to the original Bloomberg algo question (https://codecon.bloomberg.com/contest/5948031613881024512/3908)

First, we paint all unpainted points (i.e. on the closed lines) as inside points. Then we iterate over the 49 x 49 squares on the board one by one.

  • IFF all 4 corners are “inside” then it’s an inside square.

Math-wise, the main challenge appears to be Step 2.2. Solution: suppose we hold x=2 constant and check all the coordinate points on that vertical line. All the points “below” the “open line” can be painted lime.

IFF the line is exactly vertical, then all the points to the right are considered “below”.

Data structure — 2D array for the board, similar to my Tetris program. A Point class
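
For comparison, the standard ray-casting test decides inside/outside per point with no painting at all; a sketch (points exactly on an edge are ambiguous here, much like the unpainted points above):

def point_in_polygon(px, py, verts):    # verts: list of (x, y) vertex pairs
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        if (y1 > py) != (y2 > py):      # this edge straddles the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside     # ray crosses the edge once
    return inside

print(point_in_polygon(1, 1, [(0, 0), (4, 0), (4, 4), (0, 4)]))   # True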

[15] EPI300 skyline #event-handler

Reusable technique – SQL

PROBLEM STATEMENT (equivalent to Leetcode Q 218)
We have to design a program which helps drawing the skyline of a two-dimensional city given the locations of the rectangular buildings in the city. Each building B_i is represented by a triplet of (L_i, R_i, h_i) where L_i and R_i are the left and right coordinates of the ith building, and h_i is the height. In the diagram below there are 8 buildings, represented from left to right by the triplets (1, 5, 11), (2, 7, 6), (3, 9, 13), (12, 16, 7), (14, 25, 3), (19, 22, 18), (23, 29, 13) and (24, 28, 4).

Input
The input of the program is a sequence of building triplets. The triplets are sorted by L_i (the left coordinate of the building) in ascending order.

====analysis====

A data structure challenge. Once I had the correct data structure, the problem practically solved itself.

Q1 —– For any given x, determine the height in the skyline.
Note If x == R_i, then the ith building doesn’t count. In other words, If you look at the first building, the 1-to-5 range it covers is a half-open interval, sometimes written as [1,5) as this range includes 1 but excludes 5. You can think of [1,5) as [1, 4.99999999] approximately.

A1(brute force): look at each building and decide if it “covers” the point X. Given the pre-sort, most buildings aren’t relevant. A complete solution would be

Select max(h) from myTable t where t.l <= x and x < t.r

To support this solution, the objects could be stored in two sorted data structures, one sorted by L_i and one sorted by R_i.

Q2 —- draw the skyline.
A2: evaluate Q1 (i.e. get the height) at every L_i and R_i value. This solution is probably suboptimal.

Q2b (Leetcode Q218) —- draw the skyline by outputting a sequence of {x,h} pairs showing the change in height (represented by a new height h) at each vertical edge (marked by x).

Is it possible to do this in one scan after preprocessing the triplets with some clever data structures? [[EPI300]] may have a solution. Here’s my own proposal —

Pre-sort the N buildings into a list by L_i, and sort the same buildings into another list by R_i, and merge the 2 lists into a big sorted list of 2N pointers to N unique objects. Each building shows up twice. Each of the 2N entries consists of {building_object_id, x, boolean flag left_or_right }. We will go through this big list one-pass.

I convert this list into a sequence of “events” as we move along the x-axis. That’s why all left/right edges must be sorted into a single sequence.

Main scan — As we hit the left edge of a building, we include this building in the Alive container (probably a BST). In Alive, we keep the buildings sorted by height. We also maintain a lookup table { building_id, pointer/iterator into Alive}. As we hit the right edge of a building, we remove it from Alive. (No need to remove from lookup since we won’t see the same building again.)

As we hit any edge, we need to determine the impact on the skyline if any. This is when we make use of the Alive system. For any edge, there’s impact iff the target building is taller than all other in Alive.
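
A Python sketch of the event scan, with a heapq plus lazy deletion standing in for the Alive BST:

import heapq

def skyline(triplets):
    events = []
    for L, R, h in triplets:
        events.append((L, 0, -h, R))      # left edge; -h so taller sorts first
        events.append((R, 1, h, R))       # right edge
    events.sort()
    alive = [(0, float('inf'))]           # (neg height, right edge); the ground
    out = []
    for x, is_right, h, R in events:
        if not is_right:
            heapq.heappush(alive, (h, R))
        while alive[0][1] <= x:           # lazy-delete buildings that ended
            heapq.heappop(alive)
        cur = -alive[0][0]
        if not out or out[-1][1] != cur:  # impact on the skyline?
            out.append((x, cur))
    return out

print(skyline([(1, 5, 11), (2, 7, 6), (3, 9, 13)]))   # [(1, 11), (3, 13), (9, 0)]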

100-gunmen puzzle

Q: Suppose 100 gunmen stand in a big circle arranged by height, clockwise from #1 (lowest) to #100 (highest). In first shooting round, #1 starts off by shooting clockwise, so #2 gone. Then #3 shoots clockwise at #4, and so on.  How many rounds will there be and who will be the last standing?

Q2 (adapted from Q1): Before #1 gets shot or shoots again, first round ends, and the lowest among the remaining gunmen continues to shoot clockwise. In this case #1 will never die. No puzzle to solve.

Assume each gunmen stays in his spot, and his spot has his name.

Analysis –
Let me first clarify the terminology.
* Each round has a “starter” shooter. The round ends immediately before he shoots again or immediately before he gets shot.
* Each round has a InitialCount := number of gunmen at start of that round.

— my Solution —
Round 1: starter is #1. InitialCount=100. This is an even round, so the starter remains.
Round 2: starter is #1. InitialCount=50. This is an even round, so the starter remains.

End of Round 2 the remaining shooters are #1 #5 #9… #97. They are 4 (i.e. 2^2) stops apart.
Round 3: starter is #1. InitialCount = 25. This is an odd round, so starter will shift anticlockwise by 2^2 to #97. Why am I so sure? Since InitialCount is odd, the highest (among the 25) gunmen will NOT die and will become the next round’s starter.

End of Round 3 the remaining gunmen are 8 (i.e. 2^3) stops apart. Therefore, highest two are #89 #97.
Round 4: starter is #97. InitialCount = 13. This is an odd round, so starter will shift anticlockwise by 2^3 to #89.

End of Round 4, the starter (#97) is a “sitting duck” to be shot soon.
Round 5: starter is #89. InitialCount = 7. This is an odd round, so starter will shift anticlockwise by 2^4 to #73.

End of Round 5, #89 is a “sitting duck” to be shot soon.
Round 6: starter is #73. InitialCount = 4, which is a power of 2, so #73 will remain the starter for all subsequent rounds and therefore the last standing.
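
As a cross-check (my own addition): this is the Josephus problem with k=2, whose closed form gives the same survivor:

$n = 2^m + L \Rightarrow W(n) = 2L + 1; \qquad 100 = 2^6 + 36 \Rightarrow W(100) = 73$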

dynamic glass-drop #BNP

http://en.wikipedia.org/wiki/Dynamic_programming#Egg_dropping_puzzle is a similar but more ambiguous problem.

If we toss a wine glass out of a 3rd floor window it may break, but it may not if tossed from a 2nd floor window.

Q: with exactly two wine glasses taken from a (Japanese:) production line, you want to identify starting at exactly which level of a 11-floor tower this brand of glass will break if tossed out of the window. Equivalently, determine the highest safe floor. Optimization goal — minimize worst-case cost. Top floor is known to break, so answer might be 1st floor, 2nd floor,… or top floor — 11 possible answers, but 10 floors to test.

Needless to say, if the glass breaks at 6th floor, it will surely break at a higher level.

—- Analysis —-
With just a single glass, you have no strategy. You have to try from the 1st floor up. If your classmate starts on the 2nd floor and breaks her only glass there, she is disqualified, because with no more glass to use it’s now impossible to know whether it would survive the 1st floor.

I feel this is NOT a probability problem, so each toss is bound to pass or fail with 100% certainty but we just don’t know it.

We don’t want to toss first glass @2nd, @4th, @6th… because too many first-glass tests. Neither do we want to toss first glass @5th because if broken, then 2nd glass must try @1@2@3@4. Therefore first glass toss should be somewhere between ( @2, @5 ) exclusive.

—-%% solution, with heavy use of symbols —–
I feel this problem is tractable by dynamic programming. Let
* L denote the highest _safe__ floor known so far, and
* U denote the lowest _unsafe_ floor known so far. U is initially at top floor, and L is initially 0, not 1
* u denote U-1

u >= L is an invariant; they are the upper and lower bounds. (u-L) is the maximum number of floors to test. When u-L = 0, the final Answer is the current value of U.

Here’s a typical example
6 —- U known unsafe
5 —- u, so 3 floors to test (u-L)
4
3
2 —- L known safe

z8 denotes the worst case # tests required to locate the Answer with both glasses when u-L = 8 i.e. 8 floors to test. Could be z(L=0, u=8) or z(L=1,  u=9). Same thing.

For example, z(1,3) means worst case # tests when 1st and 4th storeys need no testing. Note z(1,3) == z(2,4) == z2

z1 = 1 i.e. one toss needed
z2 = 2
z3 = 2 // B! C || BA. The untested floors are named (upward) ABCDEFG (if 7 floors to try), and trailing ! means broken.

z4 = 3 // B! CD || BA
z5 = 3 // C! +2 || C then z2 // if C broken, then must test last glass at D and E
z6 = 3 // C! +2 || C z3  // ABCDEF
z7 = 4 // (D! + 3 || D z3) or (C! + 2 || C z4)
z8 = 4 // (D! + 3||D z4) or the smarter (C! + 2||C z5)
z9 = 4 // (D! + 3||D z5) or (C! + 2 || C z6)
z10 = 4 // (D!+3||D z6). Can’t use (C! + 2 || C z7) — for an 11-storey tower, we need 4 tosses.

I think there are cleverer implementations. This is not brute force. It builds up useful results with lower z() values and use them to work out higher z() values — dynamic programming.

This solution not only finds where to start the test and the worst-case # tests. It also works out the exact decision tree.
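
A tiny sketch of the z() recurrence: if the first toss is at the k-th untested floor and the glass breaks, the 2nd glass tests the k-1 floors below linearly; if it survives, z(n-k) floors remain:

def worst_case_tosses(n):          # n = number of floors to test, 2 glasses
    z = [0] * (n + 1)
    for m in range(1, n + 1):
        z[m] = min(max(k, 1 + z[m - k]) for k in range(1, m + 1))
    return z[n]

print(worst_case_tosses(10))   # 4, matching the hand-derived z10
print(worst_case_tosses(100))  # 14, matching Kyle's insight below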

–Kyle’s insight

For a building of 100 floors, the max cost is $14 i.e. 14 drops, so the first floor to try is the 14th. I didn’t believe him, so he showed

  • if glass AA breaks, then subsequent max cost is $13 as BB must try floor 1,2,3…13. Total worst cost = $14
  • else AA would try 14+13=27th floor. If AA breaks, then subsequent max cost is $12 as BB must try #15,#16,..#26. Total cost is still $14
  • else AA would try 27+12=39th floor. If AA breaks, then subsequent max cost is $11. Total cost is still $14..
  • …AA’s 10th try is #95, at a $10 cost
  • On the 11th try, AA will try #99. If AA breaks, then subsequent max cost is $3 i.e. #96,97,98. Total cost is $14
  • else on the 12th try, AA will try #100. Total cost is $12

https://brilliant.org/wiki/egg-dropping/ explains the math x(x+1)/2 = 100, giving x ≈ 13.65; rounding up, the first floor to try is #14.