Background: https://bintanvictor.wordpress.com/2015/12/31/wall-st-survial-how-fast-you-figure-things-out-relative-to-team-peers/ explains why “figure things out quickly” is such a make-or-break factor.
In my recent experience, compiler optimization is the #1 challenge: it can mess up GDB step-through. In a big project with an automated build, it is often tricky to disable every optimization flag like “-O2”.
More fundamentally, it’s often impossible to tell whether the compiled binary in front of you was built optimized or not. The binary itself rarely shows it.
Still, compared to other challenges in figuring things out, this one is tractable.
I notice that absolutely none of my C++ veteran colleagues (I asked only 3) is a gdb expert, the way some are concurrency experts, algo experts, …
Most of my C++ colleagues avoid (are reluctant to use?) console debuggers. Many are more familiar with GUI debuggers such as Eclipse and MSVS. All agree that prints are often a sufficient debugging tool.
Actually, those other domains are more theoretical and therefore produce “experts”.
Maybe I haven’t met enough true C++ alpha geeks. I bet many of them have very good gdb skills.
I would /go out on a limb/ to say that gdb is a powerful tool and can save lots of time. It’s similar to adding a meaningful toString() or operator<< to your custom class.
Crucially, it could help you figure things out faster than your team peers. I first saw this potential when learning remote JVM debugging in GS.
— My view on prints —
In perl and python, I use prints exclusively and have never needed interactive debuggers. However, in java/c++/c# I have relied heavily on debuggers. Why the stark contrast? No good answer.
Q: when are prints not effective?
A: when the edit-compile-test cycle is long, not automated, yet frequent (like 40 times in 2 hours), and when there is real delivery pressure. Note the test part could involve many steps, many files, and other systems.
A: when you can’t edit the file at all. I have never seen this in practice.
A less-discussed fact — prints are simple and reliable. GUI and console debuggers are often poorly understood. Consider step-through: optimization, threads, and exceptions often have unexpected impacts. Or consider program-state inspection: many variables are hard to “open up” in console debuggers, whereas a print statement can show var.func1() directly.
See also post on alpha geeks…
See also post on transparent languages
See also post on how fast you figure things out relative to peers
See also https://bintanvictor.wordpress.com/2017/03/26/google-searchable-softwares/
Tuning? Never experienced this challenge in my projects.
NPE? Never really difficult in my experience.
#1 complexity/opacity/lack of google help
eg: understanding a hugely complex system like the Quartz dag and layers
eg: replaying raw data — why does multicast work consistently while TCP fails consistently?
eg: adding ssl to Guardian. Followed the standard steps but didn’t work. Debugger was not able to reveal anything.
#2 Intermittent, hard to reproduce
eg: memory leaks — a classic example in theory, though not in my experience
eg: crashes in GMDS? Not really my problem.
eg: Quartz preferences screen frequently but intermittently fails to remember the setting. Unable to debug into it i.e. opaque.
I always prioritize instrumentation over effi/productivity/GTD.
A peer could be faster than me in the beginning, but if she lacks instrumentation skill with the local code base, there will be more and more tasks she can’t solve without luck.
In reality, many tasks can be done with superficial “insight”, without instrumentation, with old-timer’s help, or with lucky search in the log.
What if the developer had not added that logging? You are dependent on that developer.
I could be slow in the beginning, but once I build up (over x months) real instrumentation insight, I will be more powerful than my peers, including some old-timers. I think the Stirt-tech London team guru (John) was such a guy.
In reality, even though I prioritize instrumentation it’s rare to make visible progress building instrumentation insight.
See also https://bintanvictor.wordpress.com/wp-admin/edit.php?s&post_status=all&post_type=post&action=-1&m=0&cat=560907660&filter_action=Filter&paged=1&action2=-1
* build up instrumentation toolset
* Burn weekends, but first … build momentum and foundation including the “instrumentation” detailed earlier
* control distractions — parenting, housing, personal investment, … I didn’t have these in my younger years. I feel they take up O2 and also sap the momentum.
* Focus on output that’s visible to the boss — output your colleagues could also deliver, so you have nowhere to hide. Clone if you need to. CSDoctor told me to buy time first, so you can later rework things “under the hood”, like quality or design.
* Limit the amount of “irrelevant” questions/research when you notice they are taking up your O2 or dispersing the laser. Perhaps defer them.
Inevitably, this analysis relies on my past work experiences. Productivity (aka GTD) is a subjective, elastic yardstick. #1 most important is the GTD rating by the boss. It sinks deep… #2 is self-rating https://bintanvictor.wordpress.com/2016/08/09/productivity-track-record/
With a transparent language, I am very likely (high correlation) to have higher GTD/productivity/KPI.
Bootstrap — with a transparent language, I’m confident to download an open source project and hack it (Moodle …). With an opaque language like C++, I can download, make and run it fine, but to make changes I often face the opacity challenge. Other developers are often more competent at this juncture.
Learning — The opaque parts of a language require longer and tougher learning, but sometimes offer low market value or low market depth.
Competitiveness — I usually score higher percentiles in IV, and lower percentiles in GTD. The “percentile spread” is wider and worse with opaque languages. Therefore, I feel I’m faking competence (滥竽充数) more often.
In this context, transparency is defined as the extent to which you can use __instrumentation__ (like debugger or print) to understand what’s going on.
- Larger systems tend to use advanced language features, which are less transparent.
- The more low-level, the less transparent.
— Most of the items below are “languages” capable of expressing some complexity:
- [T] stored proc unless complex ones, which are uncommon
- [T] java threading is transparent to me, but not to other developers
- [S] java reflection-based big systems
- [T] regular c++, c# and java apps
- [O]… but consider java dynamic proxy, which is used in virtually every non-trivial package
- [T] most python scripts
- [S] … but look at python import and other hacks. Import is required in large python systems.
- [O] quartz
- [S] Spring underlying code base. I initially thought it was transparent. Transparent to Piroz
- [O] Swing visual components
- [O] C# WCF, remoting
- [T] VBA in Excel
- — below are not “languages” even in the generalized sense:
- [S] git .. semi-transparency became stressor cos KPI!
- [O] java GC … but NO STRESS cos so far this is not a KPI
- [O] MOM low level interface
- [S] memory leak detectors in java, c#, c++
- [O] protobuf. I think the java binding uses proxies
- [T] XML + simple SAX/DOM
- [T = transparent]
- [S = semi-transparent]
- [O = opaque]