print height@subtree at every node #post-order

For these problems, elegant implementation is expected.

Note in-order traversal is defined only for binary trees, whereas post-order works for any tree.

== Q: given any tree without cycles, print the size of the subtree at every node

I think a post-order walk works — each node’s subtree size is 1 plus the sum of its children’s subtree sizes (sketch below).
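
A minimal sketch of this post-order size computation, assuming a generic tree node with a children vector (the Node/children names are my assumptions, not from the original post):

#include <iostream>
#include <vector>
struct Node { std::vector<Node*> children; };

int printSizes(Node* n){                              // returns size of subtree rooted at n
  int size = 1;                                       // count myself
  for (Node* c : n->children) size += printSizes(c);  // post-order: children first
  std::cout << n << " size=" << size << "\n";         // node address + size
  return size;
}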

== Q: given any tree without cycles, print the height of the subtree at every node. Also print the node address, just for identification

Intuition — dft post-order walk. Need to become more familiar with the detailed steps. https://www.techiedelight.com/calculate-height-binary-tree-iterative-recursive/ has details.
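
A similar hedged sketch for height, reusing the hypothetical Node above (a leaf gets height 1 here; a 0-based convention works equally well):

#include <algorithm>
#include <iostream>

int printHeights(Node* n){                            // returns height of subtree rooted at n
  int tallestChild = 0;
  for (Node* c : n->children)
    tallestChild = std::max(tallestChild, printHeights(c));  // post-order: children first
  std::cout << n << " height=" << tallestChild + 1 << "\n";  // node address + height
  return tallestChild + 1;
}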

== Q (Leetcode): Given a binary tree, determine if it is height-balanced — a binary tree in which the depth of the two subtrees of every node never differ by more than 1.

I can try it on Leetcode — no need to set up a test harness myself, but I might get frustrated.

Intuition — post-order walk. A simple recursive algo — for any node, find its subtree height (including itself) and return it. Before returning, also compare the left vs right subtree heights. A sketch follows.
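
A hedged sketch of this recursion — return the subtree height, or -1 as soon as any subtree turns out unbalanced (the le/ri field names are assumptions, matching the bbg-FX code further down):

#include <algorithm>
#include <cstdlib>
struct BiNode { BiNode *le, *ri; };

int heightOrMinus1(BiNode* n){
  if (!n) return 0;
  int lh = heightOrMinus1(n->le);                    // post-order: children first
  int rh = heightOrMinus1(n->ri);
  if (lh < 0 || rh < 0 || std::abs(lh - rh) > 1) return -1;  // unbalanced below or here
  return 1 + std::max(lh, rh);
}
bool isBalanced(BiNode* root){ return heightOrMinus1(root) >= 0; }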


identical binTree

Q: Given two binary trees, write a function to check if they are the same or not. Two binary trees are considered the same if they are structurally identical and the nodes have the same key.

====analysis:

Look at hints? No need!

— solution 1: I can use the serialization solution and compare the two serialized strings in real time.

— solution 2: BFT but ensure each branch level has strictly 2^n items including nulls

Intuition — If both sides match up on every level, then identical

This solution works if each tree node has a fixed number of child nodes.

— solution 2b: a null node at level N will reduce Level N+1 nodes by two.

— solution 3: recursion
bool same(Node* node1, Node* node2){
  if (!node1 || !node2) return node1 == node2;   // identical only if both are null
  return node1->key == node2->key
      && same(node1->left, node2->left)
      && same(node1->right, node2->right);
}
The implementation issues to watch are null handling and keeping the return-value convention consistent.

— Idea 9: BFT but each queue item is {payload, childrenCount}

This solution doesn’t work for binTree as it can’t detect “null leftChild” vs “null rightChild”.

This solution works if each node can have arbitrary number of child nodes, but I don’t know how common this is.

isSymmetric(root of a binary tree)

Leetcode published solution has a clever BFT solution, easier to implement.

–idea 1 (untested): BFT to list all nodes at a given level
For each node’s enqueue() (including the null nodes), record the path-from-root as a list. As a lighter alternative to this “list”, the path can degenerate to the last step, as a le/ri flag.

Now scan each level from both ends. The left item’s path should mirror the right item’s path.

(Before the scan, confirm the level’s node count is even.)
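
A hedged sketch of idea 1 — collect each level (keeping the nulls) and check it reads the same from both ends; the node field names (le/ri/data) are assumptions:

#include <vector>
struct SymNode { int data; SymNode *le, *ri; };

bool isSymmetric(SymNode* root){
  if (!root) return true;
  std::vector<SymNode*> level{root};
  while (true){
    // two-ended scan: null-ness and payloads must mirror each other
    for (size_t i = 0, j = level.size() - 1; i < j; ++i, --j){
      if ((level[i] == nullptr) != (level[j] == nullptr)) return false;
      if (level[i] && level[i]->data != level[j]->data) return false;
    }
    std::vector<SymNode*> next;          // build next level, keeping null children
    for (SymNode* n : level){
      if (!n) continue;
      next.push_back(n->le);
      next.push_back(n->ri);
    }
    if (next.empty()) return true;       // no real nodes left => symmetric
    level.swap(next);
  }
}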

–in-order dft
first scan records the paths to each leaf node. (Optionally, each path can include east/west directions.)
2nd scan does the same but always visits right child first.

The two outputs should match

partition list into good^bad sections #c++LiquidNet

Q: template<typename I, typename P>
I myPartition(I bb, I const end, P isgood); //return first Bad node in the range after shuffling all Bad ones to the end. If no Bad node, then return the original end.

You can call it like firstBad = myPartition(li.begin(), li.end(), isEven). Itr and Pred are template params or well-defined typedefs.

Note we can’t assume the iterator provides sort order. We can only inc/dec and check equality.

https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/partition_LNet.cpp is my solution, but it’s hard to reason about because I move two pointers rather than one. The interviewer gave a simpler solution with similar time complexity, sketched below.
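
A minimal sketch in the spirit of the simpler design (and of std::partition for forward iterators): one iterator scans, the other marks the good/bad boundary, and we swap payloads — whether swapping payloads (rather than relinking nodes) is acceptable is an assumption:

#include <algorithm>   // std::iter_swap

template<typename I, typename P>
I myPartition(I bb, I const end, P isgood){
  I firstBad = bb;                        // everything before firstBad is good
  for (I cur = bb; cur != end; ++cur){
    if (isgood(*cur)){
      std::iter_swap(firstBad, cur);      // pull the good payload to the boundary
      ++firstBad;                         // the other iterator simply catches up
    }
  }
  return firstBad;                        // first Bad node, or end if none
}

This matches the first Lesson below — only one iterator (cur) has any real logic; firstBad just advances to the next boundary spot.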

Lessons:

  • in a two-iterator algo, if every move on each iterator has some logic, then it is probably over-complicated. In this case, we could move one iterator according to some logic, and simply move the other iterator to the same spot.
  • “bb should be at or before ee” — the invariant is critical but hard to maintain in the face of complex logic
  • There were many scenarios in my design. There are fewer scenarios in the interviewer’s design.

isPreorderBST(randomArray) #G4G

This was asked in a 2018 HackerRank interview, so not as important as an on-site coding question. However, I see this question as a classic.

— G4G solution based on sorted stack

https://www.geeksforgeeks.org/check-if-a-given-array-can-represent-preorder-traversal-of-binary-search-tree/ has a tested solution but it’s too cryptic. I added instrumentation to help myself understand it. See my github code.

Insight — at any time, the stack is sorted with the top being the smallest value. If the new item is the highest item so far, then the stack is wiped clean!

The algo is fairly simple —

  • initialize stack with arr[0] i.e. root
  • for every new item xx, start looping
    • if xx falls below stack top or stack is empty, then push xx, otherwise pop and loop
  • so when would we fail? The check relies on the “root” variable, which holds the most recently popped value — a floor that every later item must exceed. Any new item below this floor means the sequence can’t be a BST pre-order. See the sketch below.
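
A hedged sketch of the sorted-stack check described above (my rendering of the G4G idea): the stack holds a decreasing chain (top = smallest), and “root” is the most recently popped value, i.e. the floor for everything that follows. Pushing arr[0] happens naturally in the first iteration.

#include <climits>
#include <stack>
#include <vector>

bool isPreorderBST(const std::vector<int>& pre){
  std::stack<int> s;
  int root = INT_MIN;                      // the floor; it only ever rises
  for (int x : pre){
    if (x < root) return false;            // x violates the BST floor
    while (!s.empty() && s.top() < x){     // x belongs in some right subtree
      root = s.top();                      // raise the floor
      s.pop();                             // stack is wiped clean if x is the max so far
    }
    s.push(x);
  }
  return true;
}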

— My solution 2

My analysis — After we have seen some number of nodes, there’s exactly one possible tree we could construct, so let’s construct it.

Note during the construction, each new node has only one place to go (key insight) and after that we can check if it breaks pre-order.

https://github.com/tiger40490/repo1/blob/py1/py/algo_tree/isPreOrderBST.py is my tested code, probably less elegant than the G4G solution, but I’m still proud of it.

I don’t think my solution is inferior: the G4G stack solution is O(N) amortized (each element is pushed and popped at most once), but mine has a readability advantage — longer but not necessarily slower.

 

binary-matrix island count #DeepakCM

Q: https://leetcode.com/problems/number-of-islands/description/

https://github.com/tiger40490/repo1/blob/py1/py/grid/ has my solution. I don’t want to spend the time passing all leetcode tests! Diminishing return

https://www.geeksforgeeks.org/find-number-of-islands/ is conceptually identical, though it uses a different criterion for “connected” — diagonal neighbors are “connected”.

Hi Deepak

I realize for “connected” problems, there’s definitely a graph underneath, so graph traversal is required. I guess BFT or DFT will both find all the nodes connected to “me”.

Given a 1000 x 1000 all-black matrix, I think DFT recursion could go roughly 1,000,000 levels deep and overflow the stack.

A high-level (vague) idea is

  • Scan in type-writer fashion. Suppose there are 10×10 = 100 cells, either black or white. I paint each cell brown [1] once visited. I also use an integer counter to keep track of how many cells are already visited.
  • During the scan, if a cell is already visited, I will ignore it and move on.
  • Any time I find a black cell, I will start a BFT (I dislike DFT) to find all connected black cells. Once I find all the black cells connected to “me”, I resume the type-writer scan.
  • Once my counter hits 100, I have visited all cells and will exit.

[1] you paint it white, which is probably better.
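
A hedged sketch of the BFT flood-fill described above, assuming the grid is a vector of 0/1 ints; visited black cells are repainted to 0, as in footnote [1]:

#include <queue>
#include <utility>
#include <vector>

int countIslands(std::vector<std::vector<int>>& g){
  int rows = g.size(), cols = rows ? g[0].size() : 0, count = 0;
  for (int r = 0; r < rows; ++r) for (int c = 0; c < cols; ++c){
    if (g[r][c] != 1) continue;            // type-writer scan skips visited/white cells
    ++count;                               // found a new island
    std::queue<std::pair<int,int>> q;      // BFT from this black cell
    q.push({r, c}); g[r][c] = 0;           // paint on enqueue to avoid duplicates
    while (!q.empty()){
      auto cell = q.front(); q.pop();
      int di[] = {1,-1,0,0}, dj[] = {0,0,1,-1};    // 4-way connectivity
      for (int k = 0; k < 4; ++k){
        int ni = cell.first + di[k], nj = cell.second + dj[k];
        if (ni < 0 || ni >= rows || nj < 0 || nj >= cols) continue;  // bounds check
        if (g[ni][nj] != 1) continue;
        g[ni][nj] = 0;                     // mark visited
        q.push({ni, nj});
      }
    }
  }
  return count;
}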

I thought this problem was too hard and not worth study, but you convinced me that it may come up and I can at least give a vague solution  … better than no solution.

Compared to a binary tree, walking over a matrix is /operationally/ (not conceptually) harder because

  • need to check array bounds. Error-prone and time consuming in coding tests
  • The (invisible) links will not cover all cells. To avoid missing a cell, we still need a full matrix scan. The integration of tree walk + matrix scan is unintuitive, to put it mildly.
  • During the tree-walk, you don’t want to visit any cell too many times or get lost in an endless loop. A brute-force technique — shadow matrix to remember all visited cells.
  • … However, once you get over these nitty-gritty operational complexities, tree-walk in a matrix is not really harder. These algo questions can therefore frustrate and fail many candidates untrained on the technique.

FB: spiral number pattern

This is a fairly popular coding question.

Q: given a positive integer N, print all integers from 1 to N² in an N-by-N matrix, following a spiral pattern, starting from top-left. Please implement it in

int[][] spiral(int n); //to output

9 8 7
2 1 6
3 4 5

Facebook focuses on corner cases and complexity analysis.

https://github.com/tiger40490/repo1/blob/cpp1/cpp1/spiralFB.cpp is my implementation — iterative, deque<deque> matrix.
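
For reference, a minimal sketch (not the deque<deque> version above) that reproduces the sample output: walk the standard clockwise spiral from the top-left corner, writing N*N down to 1.

#include <vector>

std::vector<std::vector<int>> spiral(int n){
  std::vector<std::vector<int>> m(n, std::vector<int>(n, 0));
  int val = n * n;
  int top = 0, bottom = n - 1, left = 0, right = n - 1;
  while (val >= 1){
    for (int c = left;   c <= right  && val >= 1; ++c) m[top][c]    = val--;  // top edge
    ++top;
    for (int r = top;    r <= bottom && val >= 1; ++r) m[r][right]  = val--;  // right edge
    --right;
    for (int c = right;  c >= left   && val >= 1; --c) m[bottom][c] = val--;  // bottom edge
    --bottom;
    for (int r = bottom; r >= top    && val >= 1; --r) m[r][left]   = val--;  // left edge
    ++left;
  }
  return m;
}

For n = 3 this fills exactly the 9 8 7 / 2 1 6 / 3 4 5 pattern shown above.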

https://www.quora.com/How-do-I-print-spiral-number-patterns-using-only-loops-conditions-and-basic-math-i-e-without-the-use-of-libraries-data-structures-arrays-on-c

detect cycle in slist #Part 2

Note there’s an open question at the end

Don’t use fast–slow iterators. Each of the 3 solutions below has advantages over the fast–slow iterator solution.

–solution: list-reversal, based on https://blog.ostermiller.org/find-loop-singly-linked-list and implemented in https://github.com/tiger40490/repo1/blob/py1/py/slist/slistReverseLoopDetect.py
Basically, iterate from aa to bb to cc … and reverse each arrow. Suppose dd points to bb initially forming a loop. At some point the snapshot is

  • null <- aa <- bb <- cc <- dd  while currentNode is dd and nextNode is bb.

If we go on, we would be back at aa, proving the existence of a loop.

Benefit of this solution — O(1) space complexity and O(N) time complexity

constraint — won’t work on read-only lists.
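
A hedged C++ sketch of the reversal idea, assuming a minimal writable node; note it leaves the list reversed (the python version linked above is the full implementation):

struct RNode { RNode* next; };     // hypothetical node, for illustration only

bool hasLoopByReversal(RNode* head){
  RNode *prev = nullptr, *cur = head;
  while (cur){
    RNode* next = cur->next;
    cur->next = prev;              // reverse this arrow
    prev = cur;
    cur = next;
    if (cur == head) return true;  // walked back to the head => loop exists
  }
  return false;                    // fell off the end => no loop
}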

–solution: count nodes against memory usage — my invention
We can get a reliable estimate of the memory usage of the entire data structure, and divide it by per-node footprint, so we know within good approximation how many nodes exist. If we count more than that, there must be duplicates.

https://blog.ostermiller.org/find-loop-singly-linked-list shows one way to estimate the memory usage — keep track of the maximum and minimum addresses among all visited nodes.

The low level languages usually provide addresses. The higher level languages usually provide memory usage. In realistic applications, there is always a way to get an estimate of memory usage.

–solution: Brent. See my code https://github.com/tiger40490/repo1/blob/cpp1/cpp/linkedList/cycleDetect.cpp

If we keep count of the total number of increments, we should see that every time we double “power”, that total count is a power of 2. If we have incremented 32 times then we know { mu + lambda } must exceed 32… Advantages over the simple two-iterator solution:

  1. tortoise is much faster, so if mu (merge point, or loop starting point) is far out, then we will find it faster. With the standard solution, tortoise takes a long time to reach the merge point.
  2. can return loop size

I now believe I can predict in which “phase” we will detect the loop — If lambda is between 2^5 and 2^6, then we will detect the loop in Phase 6, and we can’t detect in Phase 5
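
A minimal sketch of the Brent iteration (power-of-2 phases), assuming a bare node with just a next pointer; it returns the loop length, or 0 if the list simply ends:

struct ListNode { ListNode* next; };   // hypothetical minimal node

size_t brentLoopLen(ListNode* head){
  if (!head) return 0;
  size_t power = 1, steps = 1;         // steps taken by the hare in the current phase
  ListNode *tortoise = head, *hare = head->next;
  while (hare && hare != tortoise){
    if (steps == power){               // phase ends: teleport tortoise to the hare
      tortoise = hare;
      power *= 2;                      // next phase is twice as long
      steps = 0;
    }
    hare = hare->next;
    ++steps;
  }
  return hare ? steps : 0;             // when they meet, steps == loop length
}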

I suspect power-of-3 also works:

  • If the slower iterator stops moving completely, then the algorithm is broken because the slower iterator could remain outside the loop.
  • If the slower iterator tailgates the faster iterator and the following distance always stays within a static limit (like 99), then the algorithm is broken because the loop size could exceed 99, so the two iterators would have no chance of rendezvous.
  • ==> If the following distance grows gradually, AND the follower is never stuck forever, I think eventually they will meet —
    • Suppose right after a tortoise jump, powOfN bumps up to 81 but actual loop length is shorter, like 80. Now within the next 81 iterations, the hare would move step by step while the tortoise remains. They would rendezvous for sure.
  • —- pros and cons for power-of-3 over power-of-2 —-
  • pro: if a big loop involves the root, the tortoise won’t have to jump too early or too frequently ==> fewer comparisons needed.
  • con: if there’s a tiny loop far out, tortoise would end up jumping too late?

bbg-FX: check if a binary tree is BST #no deep recursion

A binary search tree has an “ascending” property — every node in my left sub-tree is smaller than me, and every node in my right sub-tree is bigger.

Q1: given a binary tree, check if it’s a BST. (No performance optimization required.) Compile and run the program.

Suppose someone tells you that the lastSeen variable can start with a random (high) initial value — what kind of test tree would flush out the bug? My solution is below, but let’s look at Q1b first.

Q1b: what if the tree could be deep so you can’t use recursion?
A: use BFT. Note that checking each node only against its direct children is not sufficient (the sample tree below defeats such a check); each queued node should carry a valid (min, max) range for its payload. See the sketch after the recursive code.


I made a mistake with lastSeen initial value. There’s no “special” initial value to represent “uninitialized”. Therefore I added a boolean flag.

Contrary to the interviewer’s claim, local statics are automatically initialized to zero (https://stackoverflow.com/questions/1597405/what-happens-to-a-declared-uninitialized-variable-in-c-does-it-have-a-value), but zero or any other fixed initial value is unsuitable (the payload can be any large negative value), so we still need the flag.

#include <iostream>
#include <climits>
using namespace std;

struct Node {
    int data;
    Node *le, *ri;
    Node(int x, Node * left = NULL, Node * right = NULL) : data(x), le(left), ri(right){}
};
/*    5
  2       7
 1 a
*/
Node _7(7);
Node _a(6);
Node _1(1);
Node _2(2, &_1, &_a);
Node _5(5, &_2, &_7); //root

bool isAscending = true; //first assume this tree is BST
void recur(Node * n){
//static int lastSeen; // I don't know what initial value can safely represent a special value

  //simulate a random but unlucky initial value, which can break us if without isFirstNodeDone
  static int lastSeen = INT_MAX; //simulate a random but unlucky initial value
  static bool isFirstNodeDone=false; //should initialize to false

  if (!n) return;
  if (n->le) recur(n->le);

  // used to be open(Node *):
  if (!isAscending) return; //check before opening any node
  cout<<"opening "<<n->data<<endl;
  if (!isFirstNodeDone){
        isFirstNodeDone = true;
  } else if (lastSeen > n->data){
        isAscending=false;
        return; //early return to save time
  }
  lastSeen = n->data;

  if (n->ri) recur(n->ri);
}

int main(){
  recur(&_5);
  cout<< (isAscending?"ok":"nok");
}
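
A hedged sketch of the Q1b answer (BFT where each queued entry carries the open (low, high) range its payload must fall in), reusing the Node struct above; it treats duplicate payloads as violations, unlike the recursive version:

#include <queue>

bool isBstBFT(Node* root){
  struct Entry { Node* n; long long lo, hi; };        // exclusive bounds for n->data
  std::queue<Entry> q;
  if (root) q.push({root, LLONG_MIN, LLONG_MAX});
  while (!q.empty()){
    Entry e = q.front(); q.pop();
    if (e.n->data <= e.lo || e.n->data >= e.hi) return false;
    if (e.n->le) q.push({e.n->le, e.lo, e.n->data});  // left subtree stays below me
    if (e.n->ri) q.push({e.n->ri, e.n->data, e.hi});  // right subtree stays above me
  }
  return true;
}

On the sample tree above, isBstBFT(&_5) returns false because node "a" (payload 6) sits in the left subtree of 5.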

detect cycle in slist #Part 1

Q: A singly-linked list (slist) contains a loop. You are given nothing but the head node. With O(1) space complexity, how do you locate the join node? For example,

0(head)->1->2->…101->102->103->4(again), so #4 is the merge point

Here’s Deepak’s ingenious trick

  1. first use the 2-pointer trick to find any node inside the loop.
  2. find the length (say, 55, denoted LL) of the loop using a single moving pointer, starting from that node
  3. now we can discard that node
  4. Now start a pointer B from head and move LL steps.
  5. Now put pointer A at head
  6. Now move A and B in locksteps. They will meet for sure, at the merge point. Here’s the proof:

Suppose the merge point is at MM, i.e. MM steps from head. When A and B start moving in lockstep,

  • how far is B from MM? MM steps — B starts LL steps ahead of A, and after MM more steps those LL extra steps amount to exactly one lap of the loop, landing B on MM.
  • how far is A from MM? MM steps too, so they meet at MM.

Note LL value could be small like 1.

#include <cassert>
#include <iostream>
using namespace std;

struct Node{
  Node const * p;
  int const data;
  Node(int _data, Node const * _p): p(_p), data(_data){}
  friend ostream & operator<<(ostream &os, Node const & node){
        os<<node.data<<" / "<<&node<<" -> "<<node.p<<endl;
        return os;
  }
};

Node _9(9, NULL);
Node _8(8, &_9);
Node _7(7, &_8);
Node _6(6, &_7);
Node _5(5, &_6);
Node _4(4, &_5);
Node _3(3, &_4);
Node _2(2, &_3);
Node _1(1, &_2);
Node _0(0, &_1);
Node & root = _0;
Node const * mergePoint = &_1;

//how many distinct nodes in the loop
size_t getLoopLen(Node const & root){
  Node const * brunner = &root;
  Node const * frunner = &root;
  for(;;){
        frunner = frunner->p->p;
        brunner = brunner->p;
        if (frunner == brunner) break;
  }
  cout<<"now the two runners have met somewhere in the loop: "<<*frunner ;
  for(int ret = 1; ;++ret){
        frunner = frunner->p ;
        if (frunner == brunner) return ret;
  }
}

Node const * getMergePoint(Node const & root){
  size_t LL = getLoopLen(root);
  cout<<"# of nodes in loop = "<<LL<<endl;
  Node const * frunner = &root;
  for(int i = 0; i<LL; ++i){ frunner = frunner->p; }
  //cout<<"front runer is "<<*frunner;
  Node const * brunner = &root;
  for(;frunner != brunner;){
        brunner = brunner->p;
        frunner = frunner->p;
  }
  return frunner;
}

int main(){
  _9.p = mergePoint;
  Node const * ret = getMergePoint(root);
  cout<<"Merge point is "<<*ret;
  assert(ret == mergePoint);
}

check if a word can reshuffle to palindrome

requirement: https://codecon.bloomberg.com/challenger-series/29. With minimal time and space complexity, the corner cases are tricky. I decided to add a null terminator:)

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
  string S;
  cin>>S;
  vector<char> v(S.begin(), S.end());
  sort(v.begin(), v.end());
  v.push_back(0); //null char
  char last=v[0];
  size_t occur=0;
  bool isOddFound = false;

  for(int i=0; i<v.size(); ++i) {
    bool isNew = v[i] != last;
    //cout<<"--> "<<v[i]<<" isNew = "<<isNew<<endl;
    if (!isNew){
        ++occur;
        //cout<<last<<" occured "<<occur<<" times"<<endl;
        continue;
    }
    //cout<<last<<" occured total "<<occur<<" times"<<endl;
    if (occur % 2){
      if(isOddFound){
        cout<<"no"<<endl;
        return 0 ;
      }
      isOddFound = true;
    }
    last = v[i];
    occur = 1;
  }
  cout<<"yes"<<endl;
}
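
An alternative sketch (a frequency count rather than the sort-plus-sentinel above): a word can be reshuffled into a palindrome iff at most one character occurs an odd number of times.

#include <iostream>
#include <string>
#include <unordered_map>

int main(){
  std::string s;
  std::cin >> s;
  std::unordered_map<char, int> freq;
  for (char c : s) ++freq[c];                          // count each character
  int oddCount = 0;
  for (const auto& kv : freq) oddCount += kv.second % 2;
  std::cout << (oddCount <= 1 ? "yes" : "no") << std::endl;
}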

seek successor of a given node in a BST #root node unknown

EPI300 Q14.2.

I feel code is hard to write, as it’s hard to visualize or build the tree.

An in-order walk must enter at the root. If we only have the current node as input, then I assume there’s an uplink in each node. Here’s my algo:

Case 1: if I have a right child, then descend there and then descend left all the way; return that leftmost node. Easy case.

Case 2: now we know I have no right child. Am I a left child? If yes then return my parent. Easy case.

Case 3: Now we know I am a right child, without my own right child. This is the worst case. My left child (if any) is irrelevant, so effectively I am a leaf node — a right leaf node. Solution: move up step by step, and stop at the first up-move that arrives at a parent from its left side; that parent is the successor. (If we reach the root without such a move, I am the last node and have no successor.)
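
A hedged sketch of the three cases, assuming each node carries an uplink "up" besides le/ri (all field names are my assumptions):

struct UpNode { int data; UpNode *le, *ri, *up; };

UpNode* successor(UpNode* me){
  if (me->ri){                             // case 1: right child exists
    UpNode* cur = me->ri;
    while (cur->le) cur = cur->le;         // descend left all the way
    return cur;
  }
  UpNode *cur = me, *parent = me->up;      // cases 2 and 3: climb
  while (parent && parent->ri == cur){     // keep climbing while I am a right child
    cur = parent;
    parent = parent->up;
  }
  return parent;   // first ancestor reached from its left side; null means no successor
}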

For predecessor, I found this algo online, but it can’t move up, so it can’t support Cases 2 and 3.

/* Find the inorder predecessor of current */
pre = current->left;
while (pre->right != NULL && pre->right != current)
    pre = pre->right;

Based on that, here’s my successor algo:

/* Find the inorder successor of current */
pre = current->right;
while (pre->left != NULL && pre->left != current)
    pre = pre->left;

maxProfit, max-sum subArray #Kadane

One of the top 3 all-time favorite algo questions, worth studying in-depth. I know only two algorithms — (Kadane) disposableCurSubArray and lowWaterMark (my own and Xinfeng Zhou). I think both are equivalent from a few angles.

My algo is intuitive (to me) for level input (delta input can be transformed into levels). One pass. I feel it’s not inferior in any way.

In contrast, disposableCurSubArray is intuitive for delta input i.e. maxSubArray.

https://github.com/tiger40490/repo1/tree/py1/py/array has a tested solution with lots of explanations.

https://www.geeksforgeeks.org/largest-sum-contiguous-subarray/ has a simpler solution, to be understood. I rewrote it in https://github.com/tiger40490/repo1/blob/cpp1/cpp/array/maxSubarraySum.cpp

  1. special case: If all numbers are negative, then the final answer is the largest (least negative) number as a lone-wolf subarray.
  2. special case: If all numbers are positive, then final answer is the entire array
  3. so the tricky case must have a mix of negative/positive

Here’s my explanation of Kadane’s algo:

  • Delta Input array has arr[0], arr[1] .. arr[n]. let’s denote maxSubsumEndingAt(i) as B[i]. The max Subarray Ending At #55 is a continuous subarray starting somewhere before #55. It can be a lone-wolf containing only #55, as an important special case. In fact, in the mixed positive/negative array, we usually encounter this case.
  • (2-pointer algo) We maintain a left marker and a right marker. Both move to the right. maxSubArrayEndingAt(i) is basically an accumulating subarray from left marker to right marker.
  • B[i] = max(B[i-1] + arr[i], arr[i])
    • if  B[3] is negative i.e. all 4 sub-array sums ending in arr[3] are all negative, then B[4] should not include any of them. We can discard them for good and “start afresh“(while keeping the global maxSumSeen)
    • else, there exists a “positive” sub-array ending at arr[3], so we keep growing it until it becomes negative.
    • (See github python code) I can imagine a left-marker, the start of the current max subarray. We will move the right marker to grow the sub-array if the current sub-array is useful i.e. positive. Otherwise, we start afresh and the left marker jumps and coincide with right-marker
  • Note sign(B[i-1]) is key but sign(arr[i]) is irrelevant
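
A hedged sketch of the disposableCurSubArray recurrence above (assuming a non-empty delta-input array):

#include <algorithm>
#include <vector>

int maxSubArraySum(const std::vector<int>& arr){
  int best = arr[0], cur = arr[0];            // B[0] and the global maxSumSeen
  for (size_t i = 1; i < arr.size(); ++i){
    cur = std::max(arr[i], cur + arr[i]);     // B[i] = max(B[i-1] + arr[i], arr[i])
    best = std::max(best, cur);               // keep the global maxSumSeen
  }
  return best;
}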

Here’s my attempt to connect my lowWaterMark to Kadane’s algo:

I suspect that whenever we move the left marker, we always have a new lowWaterMark.

(Before the first element, Level is defined as 0. In case 3, the mixed case, the low water mark is usually a negative level.) Suppose a few elements after hitting the low water mark level=-66, we hit level=-22. This is not a new water mark. maxSubsumEndingAt[this element] is actually positive, since there exists a subarray starting right after the “level=-66” element!

When we hit level=-77, a new water mark, the maxSubsumEndingAt[this element] is basically zero, as the disposableCurSubArray is discarded. We start afresh to accumulate. Essentially, we reset the “base” to be the new water mark.



			

rotate array by K slots in O(1)space O(N) time

Can write a c++ solution as an ECT practice. Easy to verify. Added to enroute.txt.

[[EPI300]] 6.13 is probably a much shorter and more readable solution!

A: find the greatest common divisor (GCD) of N and K. If it’s 3, then the array consists of 3 interleaved subsequences. We will “process” the subsequences independently since they don’t interact. The first subsequence consists of slots 0,3,6,9… Within each subsequence, K’ and N’ (scaled down by the GCD) have no common divisor, therefore an iteration will visit every element exactly once until we revisit the first element.

Q: rotate a large array (N elements) by K positions. Both N and K are potentially large, so we need 1) time efficiency 2) memory efficiency. Ideally, O(1) space and O(N) time.

–analysis
If K = 1, then just shift every element right, rotating the last element to the head. If K = 2, just repeat once more. But K is large.

You can assume it’s an array of pointers, or you can assume it’s an array of integers — Same challenge.

The brute-force solution requires O(K*N), shifting all N elements K times.

Without loss of generality, let’s rotate by 5 slots. If N is a multiple of 5, like 30, then there are 5 individual, independent, interleaved subsequences of 6 elements each. Just rotate each by 1 slot.

If N is not a multiple of 5, like 31, then view the array as a circular array and we hop by 5 slots each time. So our journey will start from position 0 (or any position) and cover every position and come back to the starting position. This can be proven by contradiction —

Sooner or later we must revisit some position. We know that position is reachable from position 0, so the distance between them must be a multiple of 5 — hence position 0 itself must be revisited.
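
A hedged sketch of the GCD/interleaved-cycles idea (the classic “juggling” rotation), written here as a rotation to the right by K to match the K=1 example above; O(N) time, O(1) extra space:

#include <numeric>   // std::gcd (C++17)
#include <vector>

void rotateRightByK(std::vector<int>& a, size_t k){
  size_t n = a.size();
  if (n == 0 || (k %= n) == 0) return;
  size_t cycles = std::gcd(n, k);            // number of independent interleaved cycles
  for (size_t start = 0; start < cycles; ++start){
    int held = a[start];                     // lift one element to open a hole
    size_t cur = start;
    while (true){
      size_t from = (cur + n - k) % n;       // element that should land in slot cur
      if (from == start) break;
      a[cur] = a[from];
      cur = from;
    }
    a[cur] = held;                           // close the cycle
  }
}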

 

locate a pair with targetDiff==55

http://www.geeksforgeeks.org/write-a-c-program-that-given-a-set-a-of-n-numbers-and-another-number-x-determines-whether-or-not-there-exist-two-elements-in-s-whose-sum-is-exactly-x/ has a nice algo for “sum=55”

This technique becomes more interesting when the problem is modified to “find a pair whose difference is 55”. Without loss of generality, let’s keep everything positive.

–my solution:

  1. sort the original integers
  2. duplicate the array, flip the signs and reverse it
  3. append the positive array to get a combined, sorted array, whose left half are all negative
  4. now start the 2 pointers, but each constrained within its half.
#include <algorithm>
#include <iomanip>
#include <iostream>
#include <vector>
using namespace std;

//Requirement: find all pairs in the array whose difference = D
//
//If there are negative numbers, then the simplest solution is
//to shift everything up to positive

vector<int> v{3,1,1,7,4,9};
int const SZ = v.size();
int const D = 2;

void dump(){
  for(int i=0; i<v.size(); ++i) cout<<setw(3)<<v[i];
  cout<<endl;
  for(int i=0; i<v.size(); ++i) cout<<setw(3)<<i;
  cout<<endl;
}
int main(){
  sort(v.rbegin(), v.rend());
  for (int i=SZ-1; i>=0; --i) v.push_back(-1 * v[i]);
  dump();
  for(int left=0, right=v.size()-1; left < SZ && right>=SZ;){
        int diff = v[left] + v[right];
        if (diff == D) {
                cout<<v[left]<<" - "<<-1*v[right]<<endl;
                if (v[left+1] == v[left]) ++left;
                else --right;
        }
        else if (diff < D) --right;
        else ++left;
  }
}

locate a pair with targetSum==55 #bbg IV #Morris

Update

Q: any O(1) space sort?

Q: From an unsorted array of positive integers, is it possible to find a pair of integers that sum up to a given sum?

Constraints: This should be done in O(n) and in-place without any external storage like arrays, hash-maps, but you can use extra variables/pointers.

If this is not possible, can there be a proof given for the same?

—–Initial analysis—-
I wish I were allowed to use a hash table of “wanted” values. (Iterate once and build the hashtable. For each new value encountered, check if it is in the “wanted” list…) A sketch is below.
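
A sketch of that (disallowed) hash-table idea — one pass, O(N) time but O(N) extra space:

#include <unordered_set>
#include <vector>

bool hasPairWithSum(const std::vector<int>& a, int target){
  std::unordered_set<int> wanted;            // values that would complete a pair
  for (int x : a){
    if (wanted.count(x)) return true;        // x is somebody's "wanted" value
    wanted.insert(target - x);
  }
  return false;
}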

I feel this is typical of west coast algorithm quiz.

I feel it’s technically impossible, but proof?  I don’t know the standard procedure to prove O(n) is impossible. Here’s my feeble attempt:

Worst case — the pair happens to be the 1st and last in the array. Without external storage, how do we remember the 1st element?  We can only use a small number of variables, even if the array size is 99999999. As we iterate the array, I guess there would be no clue  that 1st element is worth remembering. Obviously if we forget the 1st element, then when we see the last element we won’t recognize they are the pair.
—–2nd analysis—-
If we can find a O(n) in-place sort then problem is solvable [1]. Let’s look at Radix sort, one of the most promising candidates. https://stackoverflow.com/questions/463105/in-place-radix-sort has an implementation for DNA strings.

Assumption 1: the integers all have a maximum “size” in terms of digits — let’s say 32-bit. Then yes, radix sort is O(n), but I’m not sure about its space complexity. Now, with any big-O analysis we impose no limit on the sample size. For example we could have 999888777666555444333 integers. Now, 32-bit gives about 4 billion distinct “pigeon-holes”, so by the pigeon-hole principle most integers in our sample have to be duplicates!

Therefore, Assumption 1 is questionable. In fact, some programming languages impose no limit on integer size. One integer, be it 32 thousand bits or 32 billion bits, could use up as much memory as there is in the system. Therefore, Assumption 1 is actually superfluous.

Without Assumption 1, and if we allow our sample to be freely distributed, we must assume nothing about the maximum number of digits. I would simply use

Assumption 2: maximum number of digits is about log(n). In that case radix sort is O(n log(n)), not linear time:(

—– new analysis as of Jun 2017 —-
Can the Morris algo sort an array without external storage? If yes, then use the 2-moving-pointer algo in “locate a pair whose diff=55”.

However, Morris traversal needs a tree, not an array.

[1] locate a pair with targetSum=55 : pre-sorted #O(N) 1-pass