The interviewer (Richard) was kind enough to offer me a tip early on, so we didn’t waste time (which could easily have caused us to run out of time).
Q: given K pre-sorted immutable lists, each holding up to N items, return an iterator that, on demand, yields each of the (up to K*N) items in sorted order.
Estimate time and space complexities.
— I first proposed pair-wise merge. Since there are logK passes, each touching all NK items, time complexity == O(NK logK).
Space complexity is trickier. In the very first pass I have to create a single array of up to NK items, but I can then reuse that array in every subsequent merge, so space complexity == O(NK). I mistakenly said NK*logK; the interviewer wasn’t familiar with this solution and didn’t correct me.
See https://www.geeksforgeeks.org/sort-array-two-halves-sorted/. Suppose there are 8 lists to merge. I merge A+B into the first quarter of the big array (size NK), then C+D into the 2nd quarter… In the next pass, I merge AB + CD in-place within the first half of the big array.
The advantage of this solution — once the single merged array is created, each iteration is O(1). This is valuable when run-time iteration speed is crucial but initial warm-up cost is unimportant.
bigO insight — merging K pre-sorted arrays holding M = NK items in total is O(M logK), structurally the same recurrence as merge sort?
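A minimal sketch of the pair-wise merge idea. For simplicity it allocates fresh output lists each pass rather than reusing one big NK array as described above, so its peak extra space is closer to 2NK, but the logK-passes time structure is the same:

```python
def merge_two(a, b):
    """Standard two-pointer merge of two sorted lists, O(len(a)+len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def pairwise_merge(lists):
    """Merge adjacent pairs repeatedly: ~logK passes, each pass O(NK)."""
    if not lists:
        return []
    while len(lists) > 1:
        merged = []
        for k in range(0, len(lists) - 1, 2):
            merged.append(merge_two(lists[k], lists[k + 1]))
        if len(lists) % 2:            # odd list out: carry it to the next pass
            merged.append(lists[-1])
        lists = merged
    return lists[0]
```

E.g. `pairwise_merge([[1, 4], [2, 5], [3]])` yields `[1, 2, 3, 4, 5]`.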
— Then the interviewer suggested iterating over the K lists, so I implemented the solution in https://github.com/tiger40490/repo1/blob/py1/py/88miscIVQ/itr_K_presortedLists_FB.py
- Space complexity = K
- Time complexity:
- next() O(logK) due to popping and re-inserting. I mentioned that a Fibonacci heap has O(1) insert
- init() O(K) via heapify
- hasNext() O(1)
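A hedged re-sketch of that heap-based iterator (not the exact code in the linked repo): the heap holds at most one (value, list_index, element_index) entry per input list, giving the O(K) space and O(logK) next() above.

```python
import heapq

class KWayIterator:
    """Lazy K-way merge over pre-sorted immutable lists."""

    def __init__(self, lists):
        # Seed the heap with the head of each non-empty list; heapify is O(K).
        self.lists = lists
        self.heap = [(lst[0], k, 0) for k, lst in enumerate(lists) if lst]
        heapq.heapify(self.heap)

    def hasNext(self):                  # O(1)
        return bool(self.heap)

    def next(self):                     # O(logK): one pop, at most one push
        val, k, i = heapq.heappop(self.heap)
        if i + 1 < len(self.lists[k]):  # advance within the same source list
            heapq.heappush(self.heap, (self.lists[k][i + 1], k, i + 1))
        return val
```

Usage: repeatedly call `next()` while `hasNext()` is true to stream the NK items in sorted order.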
How about an RBTree instead of a heap? Space complexity is unchanged. Time complexity:
- next() O(logK) for both remove and insert
- init() O(K logK), worse than the priorityQ’s O(K) heapify
- hasNext() O(1)
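Python has no built-in red-black tree, so as a stand-in this sketch keeps the per-list heads in a sorted list via `bisect.insort`. Note the stand-in’s insert and pop(0) are actually O(K) because of element shifting; a real RBTree would give the O(logK) remove+insert discussed above:

```python
import bisect

class OrderedIterator:
    """K-way merge using a sorted container of (value, list_idx, elem_idx)."""

    def __init__(self, lists):
        self.lists = lists
        self.tree = []                  # stand-in for an RBTree: K ordered inserts
        for k, lst in enumerate(lists):
            if lst:
                bisect.insort(self.tree, (lst[0], k, 0))

    def hasNext(self):                  # O(1)
        return bool(self.tree)

    def next(self):
        val, k, i = self.tree.pop(0)    # remove smallest (O(logK) in a real tree)
        if i + 1 < len(self.lists[k]):  # insert successor from the same list
            bisect.insort(self.tree, (self.lists[k][i + 1], k, i + 1))
        return val
```

Behavior matches the heap version; only the per-operation constants and asymptotics of the underlying container differ.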
— The FB interviewer asked why I prefer to keep state in a global variable rather than a per-record field.
%%A: runtime heap allocation is slower when each record is bigger. In contrast, the global dictionary is tiny and likely stays in the L1 cache.