c++ concurrency is harder to learn than java

The C++ concurrency learning curve has lower /leverage/ and higher effort than Java's.

* well-established — Java was designed with concurrency at its core, so its memory model is mature and sound. There are a few very well-respected books on java concurrency (and some secondary books). As a result, concurrency best practices are more established in java than in arguably any other language as of 2017.

* templates — Many concurrency constructs use templates, possibly with specializations.

* C — Java developers never need to deal with the system-level concurrency library (invariably in C), but the c++ developer may need to descend to the C level.

* multiple libraries — See post on c++ concurrency libraries. https://bintanvictor.wordpress.com/2017/04/05/common-cconcurrency-libraries/

* Low-level — C++ is a lower-level language than java.
** The concurrency classes must deal with the big 4.
** Memory management also complicates concurrency.
** Not only heap objects but also primitives on the stack are accessible from multiple threads, thanks to pointers and references.
** The atomic class templates are also lower-level than java’s, and much harder to get right.

* There are many mutex constructs: std::mutex, recursive_mutex, timed_mutex, shared_mutex, plus the lock wrappers lock_guard and unique_lock (sketch below).
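A minimal sketch (not from any particular post) of a stack-allocated atomic shared with worker threads by reference, plus two of the lock wrappers; the worker lambda and the counts are made up for illustration:

```cpp
#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::atomic<int> counter{0};   // a primitive on main's stack, shared with workers by reference
    std::mutex mtx;                // plain mutex; the library also has recursive_mutex, timed_mutex, shared_mutex (C++17)
    std::vector<int> order;

    auto worker = [&counter, &mtx, &order](int id) {
        ++counter;                                   // lock-free atomic increment
        std::lock_guard<std::mutex> lk(mtx);         // scoped lock, released when the lambda returns
        order.push_back(id);
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(worker, i);
    for (auto& t : pool)
        t.join();

    std::unique_lock<std::mutex> ul(mtx, std::defer_lock);  // unique_lock: deferrable, movable, works with condition_variable
    ul.lock();
    return counter.load() == 4 ? 0 : 1;
}
```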

## 10 c++ coding habits to optimize perf

Many of these suggestions are based on [[optimized c++]]

· #1 Habit – in c++ at least, ++counter performance is strictly “better than or equal to” counter++. Absent a compelling reason, I prefer the former.
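A small illustration, assuming a class-type iterator (std::map here); countEntries is a made-up function:

```cpp
#include <map>
#include <string>

// For class-type iterators, it++ must create and return a copy of the old
// iterator; ++it does not. With plain ints the compiler usually generates the
// same code either way, so this matters mainly for iterators and other classes.
int countEntries(const std::map<int, std::string>& m) {
    int n = 0;
    for (auto it = m.begin(); it != m.end(); ++it)   // pre-increment: no temporary
        ++n;
    return n;
}
```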

· #2 Habit – in a for loop, one of the beginning and ending values is often more expensive to evaluate than the other. Choose the more expensive one as the beginning value, so it is evaluated only once rather than on every iteration. Some people object that the compiler can cache the more expensive end value, but my 2016 tests showed otherwise.
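A sketch of the habit, using strlen as the hypothetical expensive value; countUpper is a made-up function:

```cpp
#include <cstddef>
#include <cstring>

// The init expression of a for loop runs once; the condition runs every pass.
// Counting down from the expensive value (strlen) to the cheap one (0) means
// strlen is evaluated exactly once.
int countUpper(const char* s) {
    int n = 0;
    for (std::size_t i = std::strlen(s); i-- > 0; )  // expensive value sits in the init slot
        if (s[i] >= 'A' && s[i] <= 'Z')
            ++n;
    return n;
}
```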

· If a method can be static, then make it static. Good for performance and semantics.
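A tiny illustration; PriceUtil and mid() are hypothetical names:

```cpp
// A member function that reads no instance state can be static: no `this`
// pointer is passed, and the signature itself documents that the call cannot
// touch object state.
class PriceUtil {
public:
    static double mid(double bid, double ask) { return (bid + ask) / 2.0; }
};

double m = PriceUtil::mid(99.5, 100.5);   // callable without any instance
```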

· For a small if-else-if block, put the most likely scenario first. May affect readability. Worthwhile only in a hot spot.
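A sketch with an assumed 90/9/1 traffic split; the enum and handler are made up:

```cpp
// If roughly 90% of messages are quotes, testing for Quote first means most
// calls resolve after a single comparison.
enum class MsgType { Quote, Trade, Heartbeat };

int handle(MsgType t) {
    if (t == MsgType::Quote)   return 1;   // dominant case checked first
    if (t == MsgType::Trade)   return 2;
    return 3;                              // rare case last
}
```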

· For a long if-elif-elif-elif-elif block, a switch statement’s performance is strictly “greater than or equal to” the if-chain’s.
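A sketch; the exchange ids and basis-point values are invented:

```cpp
// With a dense set of case labels the compiler can emit a jump table, so the
// dispatch cost does not grow with the number of branches the way a long
// if/else-if chain's does.
int commissionBps(int exchangeId) {
    switch (exchangeId) {
        case 1:  return 10;
        case 2:  return 12;
        case 3:  return 8;
        case 4:  return 15;
        case 5:  return 11;
        default: return 20;
    }
}
```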

· A for-loop starts by checking the condition (the 2nd component in the header). If this initial check is redundant (as it often is), then use a do-while loop.
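A sketch assuming the caller guarantees a non-empty buffer:

```cpp
#include <cstddef>

// With len > 0 guaranteed (an assumed precondition for this sketch), a
// do-while drops the redundant first test that a for/while loop would perform.
unsigned checksum(const unsigned char* buf, std::size_t len) {
    unsigned sum = 0;
    std::size_t i = 0;
    do {
        sum += buf[i++];
    } while (i < len);      // condition evaluated only after each pass
    return sum;
}
```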

· Call a loop in a function, rather than calling a function in a loop. Another micro-optimization.
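A sketch contrasting the two shapes; function names are made up:

```cpp
#include <cstddef>
#include <vector>

// Putting the loop inside the function means one call per batch instead of one
// call (plus argument passing) per element.
double scaleOne(double x) { return x * 1.0001; }

void scaleAll_slow(std::vector<double>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = scaleOne(v[i]);          // a function call inside the loop
}

void scaleAll_fast(std::vector<double>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= 1.0001;                 // the loop lives inside one function call
}
```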

## boost modules used in finance and asked in IV

#1) shared_ptr (+ intrusive_ptr) — I feel more than half (70%?) of the “finance” boost usage is here. Every architect who chooses boost will use shared_ptr (see the sketch after this list).
#2) boost thread
#3) serialization
#4) boost::any
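A minimal boost::shared_ptr sketch; the Trade struct is invented, and in post-C++11 code std::shared_ptr is the drop-in equivalent:

```cpp
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <string>

struct Trade { std::string sym; double qty; };

int main() {
    boost::shared_ptr<Trade> t = boost::make_shared<Trade>();
    t->sym = "IBM";
    t->qty = 100;
    boost::shared_ptr<Trade> t2 = t;           // refcount now 2
    return t2.use_count() == 2 ? 0 : 1;        // pointee freed when both copies go out of scope
}
```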

Gregory said bbg used shared_ptr, hash tables and function binders from boost

Overall, I feel these topics are mostly QQ types, frequently asked in IV (esp. those in bold) but not central to the language. I would put only smart pointers in my Tier 1 or Tier 2.

—— other modules used in my systems
* noncopyable — for singletons
** private derivation
** prevents the synthesized op= and copy constructor, since the base class is noncopyable.
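A sketch of the idiom described above; ConfigRegistry is a made-up class name:

```cpp
#include <boost/noncopyable.hpp>

// Private derivation from boost::noncopyable suppresses the synthesized copy
// constructor and op=, so the single instance cannot be duplicated.
class ConfigRegistry : private boost::noncopyable {
public:
    static ConfigRegistry& instance() {
        static ConfigRegistry theOne;   // constructed on first use
        return theOne;
    }
private:
    ConfigRegistry() {}                 // clients cannot construct their own
};
```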

* polymorphic_cast
* numeric_cast — mostly int types, also float types
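A numeric_cast sketch; it assumes a 32-bit int so the narrowing actually overflows:

```cpp
#include <boost/numeric/conversion/cast.hpp>
#include <iostream>

// boost::numeric_cast throws when the value does not fit the target type,
// unlike a silent static_cast.
int main() {
    long long big = 3000000000LL;
    try {
        int narrowed = boost::numeric_cast<int>(big);   // throws if int is 32-bit
        std::cout << narrowed << '\n';
    } catch (const boost::numeric::bad_numeric_cast& e) {
        std::cout << "out of range: " << e.what() << '\n';
    }
    return 0;
}
```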

* operators ?
* bind?
* tuple
* regex
* scoped_ptr — as a non-copyable [1] stack variable
[1] different from auto_ptr, which transfers ownership on copy
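A scoped_ptr sketch; Widget is a made-up type:

```cpp
#include <boost/scoped_ptr.hpp>

// The pointee is deleted when the scope exits, and unlike auto_ptr there is no
// silent ownership transfer, because copying and assignment do not compile.
struct Widget { int id; };

void useWidget() {
    boost::scoped_ptr<Widget> w(new Widget());
    w->id = 42;
    // boost::scoped_ptr<Widget> w2(w);   // error: scoped_ptr is non-copyable
}                                         // Widget deleted here
```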

[[linux programmer’s toolbox]]

MALLOC_CHECK_ is a glibc env var that controls how malloc/free react to heap corruption such as a double free.
–debugger on optimized code
P558 Sometimes, without compiler optimization, performance is unacceptable, so you may have to debug the optimized binary.

To prevent the optimizer from removing your variables, mark them volatile.
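A sketch; busyWait is a made-up function:

```cpp
// Under -O2 the compiler may keep `spins` in a register or delete it outright,
// making it impossible to watch in gdb; `volatile` forces real memory accesses.
// A debugging aid only, not a thread-synchronization tool.
void busyWait(long n) {
    volatile long spins = 0;        // survives optimization, inspectable in the debugger
    for (long i = 0; i < n; ++i)
        spins = spins + 1;
}
```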

An inlined function may not appear in the call stack. Consider “-fno-inline”.

–P569 double-free may not show issues until the next free() or malloc()
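A sketch of that failure mode; running with MALLOC_CHECK_=2 (see above) makes glibc abort at the faulty call instead:

```cpp
#include <cstdlib>

// The second free() often passes silently; the heap corruption may only
// surface on a later malloc()/free().
int main() {
    char* p = static_cast<char*>(std::malloc(32));
    std::free(p);
    std::free(p);                                     // bug: double free
    char* q = static_cast<char*>(std::malloc(32));    // corruption may show up here
    std::free(q);
    return 0;
}
```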

–P470-472 sar command
* can show per-process performance data
* can monitor network devices

—P515 printf + macros for debugging

buffering behavior differs between a terminal and a log file
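A sketch of such a macro; DBG is a made-up name and ##__VA_ARGS__ is a GCC/Clang extension:

```cpp
#include <cstdio>

// stderr is unbuffered; stdout is line-buffered on a terminal but fully
// buffered when redirected to a log file, so output ordering can differ.
#define DBG(fmt, ...) \
    do { std::fprintf(stderr, "%s:%d " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__); } while (0)

int main() {
    int fills = 7;
    DBG("fills=%d", fills);
    return 0;
}
```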