c++ low-latency connectivity IV (nQuant) #2

This IV is heavy on low-level QQ in C/C++. Such obscure knowledge won’t help GTD and is not significant zbs, though it may improve your design.
Q3a: Memory alignment – what if on the stack I declare 2 char variables?
Q3b: what if I have 2 char fields in a struct?
Q3c: I have two 64-bit ints, one misaligned. When I use them what problems will I have? See https://lemire.me/blog/2012/05/31/data-alignment-for-speed-myth-or-reality/
Q1: If inside a member function I call “delete this”, what happens?
%%A: what if “this” points to an object embedded as a field inside an umbrella object? The deallocation would go ahead, but when the umbrella object is later destructed, the same memory may get deallocated again. This is confirmed in the FAQ (http://www.parashift.com/c++-faq-lite/delete-this.html)
%%A: how do we know the host obj is on heap, stack or global area
Q1b: To achieve heap-only, my class has private ctors and private op= and a static factory method. Will it work?
%%A: according to moreEffC++ P146, I would say yes, with certain caveats.
Q2: What’s reinterpret_cast vs dynamic_cast vs static_cast?
Q2b: What other casts are there?
Q: Placement new – can I use the regular “delete”?
%%A: probably no. Need to call the dtor manually? See P42 moreEffC++
Q: How does tcp handshake work? (I don’t know why this nlg is even relevant)
A: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_establishment
Q: Some tcp parameter to speed it up?
A: larger TCP window size
Q: tcp client to specify a non-random port? See post on bind()
Q: If a c++ app runs fine in debug build (compiler optimizations removed), but crashes in release mode, what guesses/clues do you have?
%%A: conditional compilation, like in my c# project
%%A: the compiler optimization leads to unusual execution speed between 2 threads, and cooks up a rare corner case
%%A: I have seen assertions turned on in debug build (a debug STL can also be used), so we know one data file F1 is unusable, and another data file F2 is usable. In release build, someone else tries F1 and it crashes somewhere else.

sync primitives in pthreads^winThreads

(see also post on “top 2 threading constructs in java ^ c#”)

[[Art of concurrency]] P89 has a 3-pager on pthreads vs win threads, the 2 dominant thread libraries. It claims “… most of the functionality in one model can be found in the other.”

In pthreads (like java), the 2 “most common” controls are mutex and condition var

In win threads library, the 2 essential controls are

* event WaitHandles — kernel objects. Also known as event mutexes, these are comparable to condVar, according to P175 [[object-oriented multithreading using c++]]
* locks
** mutex — kernel object, cross process
** CRITICAL_SECTION — userland object, single-process

nQuant c++IV #1 #monitoring

Overall, I remember nQuant IV was rather low-level and heavy on optimization. Mostly QQ.

Q6: how would you implement a userland thread?
%%A: setjmp and longjmp, as in early JVM

Q6b: how, exactly? I feel it’s better to be modest and offer my tentative guesses.

Q6c: how is setjmp different from goto? Seldom asked!
AA: http://ecomputernotes.com/what-is-c/function-a-pointer/what-is-the-difference-between-goto-and-longjmp-and-setjmp and http://www.geekinterview.com/question_details/3330

Q: by default, is a linux mutex cross-process?
%%A: I guess the pthreads mutex by default isn’t. By contrast, a windows mutex is cross-process.
AA: Beyond the default, linux mutex can be cross-process, by using shared memory — http://stackoverflow.com/questions/9389730/is-it-possible-to-use-mutex-in-multiprocessing-case-on-linux-unix

Q: in linux, what’s the difference between a process and a thread?

Q: start 5 copies of the current process, using fork? (Can use the example in [[head first c]] )

Q: If a variable in (2-process) shared memory is marked volatile, is that a reasonable usage of the volatile keyword?
AA: now i think so. Similar to a variable writable by a temperature sensor. http://embeddedgurus.com/barr-code/2012/01/combining-cs-volatile-and-const-keywords/ shows use of “const volatile” on a variable in shared memory.
AA: [[moving from c to c++]] section on Volatile (P75) seems to agree

Q: what’s a limitation of the select() system call when there are too many sockets to check and each message (~2KB) is important?
A: select has max socket count. Use epoll() — http://stackoverflow.com/questions/5357445/select-max-sockets

Q: what linux command to monitor memory usage by a given process, showing size of heap, stack, text, even broken down to shared lib vs static lib
A: cat /proc/{pid}/smaps
A: pmap -x {pid}

Q: given a one-line c program “while(1);”, launch it and /usr/bin/top would show 100% cpu usage. What does it mean?

Q: what’s inside a socket? My socket book has detailed diagrams.
%%A: http://bigblog.tanbin.com/2011/06/5-parts-in-socket-object.html

Q: can a socket be shared by 2 processes?
AA: Yes. See https://bintanvictor.wordpress.com/2017/04/29/socket-shared-between-2-processes/

———I feel most of the questions below are rarely asked.

Q: how many sockets can a single process open?
%%A: not too sure. A few hundred?
A: http://stackoverflow.com/questions/410616/increasing-the-maximum-number-of-tcp-ip-connections-in-linux

Q: what linux command to monitor network performance?
%%A: Besides netstat, I have seen tools that report error rates indicating saturation
A: ss


Q: how to remove the word “option” from a resume, unless it is a sub-word? Use perl or python
%%A: Word boundary symbol?

Q: when a user land thread makes a syscall, what’s the implication?
%%A: the thread enters kernel mode?

Q: what’s offered on Layer 2? Can IP simply operate on top of physical layer without Layer 2? Too deep and seldom asked.
A: ethernet is a L2 technology.
A: Each device on a network has a hardware address or MAC address, used by the data link layer

Libor, Eurodollar, OIS, Fed Fund rate … common features

deposit — All are based on the simple instrument of “deposit” — $1 deposited today grows to $1.00x

unsecured — when I deposit my $1 with you, you may go down with my money. Credit risk is low but non-zero.

inter-bank — the deposit (or the lending) is between banks. The lending rate is typically higher when lending to non-banks.

short-term — overnight, 3M etc, up to 12M.

arbitrage involving a convex/concave contract

(I doubt this knowledge has any value outside the exams.) Suppose a derivative contract is written on S(T), the terminal price of a stock. Assume a bank account with 0 interest rate either for deposit or loan. At time 0, the contract can be overpriced or under-priced, each creating a real arbitrage.

Basic reality (not an assumption): the stock price at any time is non-negative.

— If the contract is concave, like L = log S, then a stock position (+ bank account) can super-replicate the contract (it can’t sub-replicate). The stock position’s range-of-possibilities graph is a straight tangent line touching the concave curve from above, at an S(T) value equal to S(0), which is typically $1 or $10 in these examples. The super-replication portfolio should have a time-0 price higher than the contract’s; otherwise there is an arbitrage from selling the contract and buying the replication.

How about C := (100 − S^2), with S(0) = $10 and C(0) = 0? Let’s try {-20S, -C, +$200}, so V(t=0) = $0 and V(t=T) = S^2 − 20S + 100. At termination,

If S=10, V = 0 ←global minimum

If S=0, V= 100

If S=11, V= 1

How about C:=sqrt(S)? S(0) = $1 and C(0) = $1? Let's try {S, +$1, -2C}. V(t=0) = 0. V(t=T) = S + 1 – 2 sqrt(S). At termination,

If S=0, V = 1

If S=1, V= 0 ←global minimum

If S=4, V= 1

If S=9, V= 4

— If the contract is convex, like exp(S), 2^S, S^2 or 1/S, then a stock position (+ bank account) can sub-replicate the contract (it can’t super-replicate). The replication’s range-of-possibilities graph is a straight tangent line touching the convex curve from below. This sub-replication should have a time-0 price below the contract’s; otherwise arbitrage by buying the contract and selling the replication.

string, debugging + other tips: [[moving from c to c++]]

[[moving from c to c++]] is fairly practical. Not full of good-on-paper “best practice” advice.

P132 don’t (and why) put “using” in header files
P133 nested struct
P129 varargs suppressing arg checking
P162 a practical custom Stack class non-template
P167 just when we could hit “missing default ctor” error. It’s a bit complicated.

–P102 offers practical tips on c++ debugging

* macro DEBUG flag can be set in #define and also … on the compiler command line
* frequently people (me included) don’t want to recompile a large codebase just to add DEBUG flag. This book shows simple techniques to turn on/off run-time debug flags
* perl’s Dumper receives a variable $abc and dumps the value of $abc and also ….. the VARIABLE NAME “abc”. C has a similar feature via the preprocessor stringize operator “#”

— chapter on the standard string class — practical, good for coding IV

* ways to initialize

* substring

* append

* insert

[[linux programmer’s toolbox]]

MALLOC_CHECK_ is a glibc env var
–debugger on optimized code
P558 Sometimes without compiler optimization performance is unacceptable.

To prevent optimizer removing your variables, mark them volatile.

An inline function may not appear in call stack. Consider “-fno-inline”

–P569 double-free may not show issues until the next free() or malloc()

–P470 – 472 sar command 
can show per-process performance data

can monitor network devices

—P515 printf + macros for debugging

buffering behavior differs between terminal ^ log files

c# coding practices for IV

These questions will seldom hit the phone round…

Q: Implement IEnumerable foo(IEnumerable aa, IEnumerable b) such that foo returns the strings found in either aa or b but not both.

To keep things simple, let’s assume aa holds unique elements, and b too.

Q: regex engine supporting quantifier “*”, and the wild-card “.”
Q: IteratorTest from MS
Q: circular buffer
Q: reverse a linked list in-place
Q: rotate a long array by 3
Q: spreadsheet concretization

(610610 blog has a letter to XR, which shows a list of coding practice projects I completed.)

familiarity with local system(+mainstream tools) is KPI for GTD

(I have been learning this lesson since 2007, but Not before.)

We techies, including the coolest techies, can get stressed out by technical issues, by deadlines, by users and managers. These really are the same kind of issue — the damn thing doesn’t work. This not-working issue constitutes a major stressor, though not the only one. When things are seriously bad, our basic competency comes into question, meaning managers question whether we are up to the job. When this stressor gets bad,

we feel like treated differently than other team members
we lose manager’s trust, and we get micro-managed,
we are required to justify our (big or small) designs
we are forced to abandon what we wrote (99% working) and adopt another design imposed on us
we are asked to explain each delay
we are forced to work late
we voluntarily decide to cancel our vacation
we worry about our job
we have no time/energy to hit the gym
we lose sleep and lose appetite

On the other hand, when we are clearly capable of handling this type of issue, we feel invincible and on top of the challenges. We can GetThingsDone — see blog post on GTD.

To effectively reduce this stressor, we really need to get comfortable with the “local system”. Let’s assume a typical financial system uses java/c#/c++, SQL, sproc, scripts, svn, autosys … In addition, there are libraries (spring, gemfire, hadoop etc) and tools like IDE, Database tools, debuggers… riding on these BASE technologies.

We are expected/required to know all of these tools [1] pretty well. If, however, we are slow or unfamiliar with one of them, we get blamed as underperforming and are expected to catch up with our colleagues. Therefore each tool can become a stressor.

[1] see also http://bigblog.tanbin.com/2012/11/2-kinds-of-essential-developer-tools-on.html

Therefore a basic survival skill is familiarity with all of these tools + the local system. If I’m familiar with all the common issues [2] in my local system then I can estimate effort, and I can tell my boss/users how long it takes. Basically, I’m on top of the tech challenge.

[2] If some part of java (say the socket layer or the concurrent iterators) never gives me problems, then I really don’t need that familiarity.

Q: how about non-mainstream tools like spring-integration, jmock? Just like local system knowledge. Investing your time learning these is not highly strategic.

When I change job, again there’s a body of essential know-how I need in order to / fend off / the stressors. Part of this know-how I already have – the portable tech knowledge. Frequently, my new team would use some unfamiliar java feature. More seriously, the local system knowledge is often the bulk of the learning load. If I were in a greenfield development phase I would write part of the local system, and I would have a huge advantage.

A major class of tools is poorly understood, with insufficient proven solutions:
– About half of Windows tools.
** OC GMDS systems kept crashing for no reason.
** When I got started with perl/php/javascript/mysql vs VBA/dos FTP, I soon noticed the difference in quality.