Don’t use null/None to indicate empty

Many developers return None (python) or null (java), or put such a value into a container, to positively indicate an empty value.

This practice creates unnecessary ambiguity (rather than reducing ambiguity) because many built-in module authors use None/null when they have no choice. If you positively return this value, it’s harder to differentiate the various scenarios. This becomes an unnecessary ambiguity when troubleshooting in production.

In Java, I often create a dummy instance of MyType and return it to mean “empty”. I think this is applicable to c++ too.

In python, I tend to return a negative number from a function that ordinarily returns a MyType, since python functions can return type1 in ContextA but type2 in ContextB! A python list also allows unrelated types.
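A minimal C++ sketch of the dummy-instance idea above (the Quote type and findQuote are made-up names for illustration):

```cpp
#include <string>

// Hypothetical record type; the static empty() instance is the positive
// "empty" marker, so callers never receive a null pointer.
struct Quote {
    std::string symbol;
    double price = 0;
    bool isEmpty() const { return symbol.empty(); }
    static const Quote& empty() {
        static const Quote e{};   // the dummy instance meaning "no result"
        return e;
    }
};

// Lookup that never returns null: on a miss it returns the dummy instance.
const Quote& findQuote(const std::string& symbol) {
    static const Quote ibm{"IBM", 120.5};
    if (symbol == "IBM") return ibm;
    return Quote::empty();
}
```

Callers test `isEmpty()` explicitly, so a miss is unambiguous and distinct from any “no choice” null produced deeper in a library.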


non-local static class-instance: pitfalls

Google style guide and this MSDN article both warn against non-local static objects with a ctor/dtor.

  • (MSDN) construction order is tricky, and not thread-safe
  • dtor order is tricky. Some code might access an object after destruction 😦
  • (MSDN) regular access is also thread-unsafe, unless immutable, for any static object.
  • I feel any static object, including static fields and local statics, can increase the risk of memory leak since they are destructed very, very late. What if they hold a growing container?

I feel stateless global objects are safe, but perhaps they don’t need to exist.
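One standard way to sidestep the construction-order pitfall is the construct-on-first-use idiom (a sketch; registry/addEntry are hypothetical names). Since C++11 the initialization of the local static is also thread-safe:

```cpp
#include <string>
#include <vector>

// Construct-on-first-use (Meyers singleton): the container is built the
// first time registry() is called, so cross-file construction order no
// longer matters.
std::vector<std::string>& registry() {
    static std::vector<std::string> r;   // local static, not namespace-scope
    return r;
}

// Any file can safely call registry() during its own static initialization.
void addEntry(const std::string& name) { registry().push_back(name); }
```

The destruction-order problem remains (some code could still call registry() during shutdown), which is why some codebases deliberately leak the object instead.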

edit 1 file in big python^c++ production system #XR

Q1: suppose you work in a big, complex system with 1000 source files, all in python, and you know a change to a single file will only affect one module, not a core module. You have tested it and run a 60-minute automated unit test suite. You didn’t run the prolonged integration test that’s part of the department-level full release. Would you and the approving managers have the confidence to release this single python file?
A: yes

Q2: change “python” to c++ (or java or c#). You already followed the routine to build your change into a dynamic library, tested it thoroughly and ran unit test suite but not full integration test. Do you feel safe to release this library?
A: no.

Assumption: the automated tests were reasonably well written. I never worked in a team with measured test coverage. I would guess 50% is too high and often impractical. Even with high measured test coverage, the risk of bugs is roughly the same. I never believe higher unit test coverage is a vaccination — diminishing returns, low marginal benefit.

Why the difference between Q1 and Q2?

One reason — the source file is compiled into a library (or a jar), along with many other source files. This library is now a big component of the system, rather than one of 1000 python files. The managers will see a library change in c++ (or java) vs a single-file change in python.

Q3: what if the change is to a single shell script, used for start/stop the system?
A: yes. Manager can see the impact is small and isolated. The unit of release is clearly a single file, not a library.

Q4: what if the change is to a stored proc? You have tested it and run the full unit test suite but not a full integration test. Will you release this single stored proc?
A: yes. One reason is transparency of the change. Managers can understand this is an isolated change, rather than a library change as in the c++ case.

How do managers (and anyone except yourself) actually visualize the amount of code change?

  • With python, it’s a single file so they can use “diff”.
  • With stored proc, it’s a single proc. In the source control, they can diff this single proc. Unit of release is traditionally a single proc.
  • with c++ or java, the unit of release is a library. What if this new build, besides your change, accidentally includes some other change? You can’t diff a binary 😦

So I feel transparency is the first reason. Transparency of the change gives everyone (not just yourself) confidence about the size/scope of this change.

Second reason is isolation. I feel a compiled language (esp. c++) is more “fragile” and the binary modules more “coupled” and inter-dependent. When you change one source file and release it in a new library build, it could lead to subtle, intermittent concurrency issues or memory leaks in another module, outside your library. Even if you, as the author, see evidence that this won’t happen, other people have seen innocent one-line changes give rise to bugs, so they have reason to worry.

  • All 1000 files (in compiled form) run in one process for a c++ or java system.
  • A stored proc change could affect DB performance, but it’s easy to verify. A stored proc won’t introduce subtle problems in an unrelated module.
  • A top-level python script runs in its own process. A python module runs in the host process of the top-level script, but a typical top-level script will include just a few custom modules, not 1000 modules. Much better isolation at run time.

There might be python systems where the main script actually runs in a process with hundreds of custom modules (not counting the standard library modules). I have not seen it.

java collection in a static field/singleton can leak memory

Beware of collections in static fields or singletons. By default they are unbounded, so by default they pose a risk of unexpected growth leading to memory leak.

Solution — Either soft or weak reference could help.

Q: why is soft reference said to support memory/capacity-sensitive cache?
A: only when memory capacity becomes a problem, will this soft reference show its value.

Q: is WeakHashMap designed for this purpose?
A: not an ideal solution. See other posts about the typical usage of WHM

Q: what if I make an unmodifiable facade?
A: you would need to ensure no one has the original, read-write interface. So if everyone uses the unmodifiable facade, then it should work.

##c++good habits like java q[final]

  • Good habit — internal linkage
    • ‘static’ keyword on file-level static variables
    • anonymous namespace
  • if (container.end() == itrFromLower_bound) {…}
  • use typedef to improve readability of nested containers
  • typedef — “typedef int FibNum” as source documentation
  • initialize local variables upon declaration.
  • use ‘const/constexpr’ whenever it makes sense. The c++11 constexpr is even better.
  • [c++11] – use ‘override’ or ‘final’ keywords when overriding inherited virtual methods. The compiler will complain if there’s nothing to override.
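A few of these habits in one C++11 sketch (all names are made up):

```cpp
#include <vector>

namespace {                        // anonymous namespace => internal linkage
    constexpr int kMaxRetries = 3; // constexpr, initialized at declaration
}

typedef std::vector<std::vector<int>> Matrix;  // typedef aids readability

struct Base {
    virtual int id() const { return 0; }
    virtual ~Base() = default;
};

struct Derived final : Base {
    // 'override': compiler errors out if there is nothing to override
    int id() const override { return kMaxRetries; }
};

int demo() {
    Matrix m(2, std::vector<int>(2, 0));  // clearer than the nested spelling
    Derived d;
    const Base& b = d;
    return b.id() + static_cast<int>(m.size());  // 3 + 2
}
```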

goto: justifications

Best-known use case: break out of deeply nested for/while blocks.

  • Alternative — extract into a function and use early return instead of goto. If you need “goto CleanUp”, you can extract the cleanup block into a function returning the same data type and replace the goto with “return cleanup()”
  • Alternative — extract the cleanup block into a void function and replace the goto with a call to cleanup()
  • Alternative (easiest) — exit or throw exception

Sometimes none of the alternatives is easy. Refactoring the code to avoid goto would require too much testing, approval and release effort, and the code is in a critical module in production. Laser surgery is preferred — introduce the goto.
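The best-known use case, sketched (find2D is a hypothetical search over a 2-D grid):

```cpp
#include <cstddef>

// goto's classic justification: one jump exits both loops at once.
int find2D(const int grid[3][3], int target) {
    int found = -1;
    for (std::size_t r = 0; r < 3; ++r)
        for (std::size_t c = 0; c < 3; ++c)
            if (grid[r][c] == target) {
                found = static_cast<int>(r * 3 + c);
                goto done;            // break out of the nested loops
            }
done:
    return found;
}
```

The first alternative above would wrap the loops in a helper and replace `goto done;` with `return …;`.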


retreat to raw ptr from smart ptr ASAP

Raw ptr is in the fabric of C. Raw pointers interact/integrate with countless parts of the language in complex ways. Smart pointers are advertised as drop-in replacements but that advertisement may not cover all of those “interactions”:

  • double ptr
  • new/delete/free
  • ptr/ref layering
  • ptr to function
  • ptr to field
  • 2D array
  • array of ptr
  • ptr arithmetics
  • compare to NULL
  • ptr to const — smart ptr to const should be fine
  • “this” ptr
  • factory returning ptr — can it return a smart ptr?
  • address of ptr object

Personal suggestion (unconventional) — stick to the known best practices of smart ptr (such as storing them in containers). In all other situations, do not treat them as drop-in replacements but retrieve and use the raw ptr.
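A sketch of that suggestion: keep the smart pointer as the owner, but retreat to the raw pointer via get() when calling pointer-centric code (legacyLength is a made-up legacy API):

```cpp
#include <memory>

// Hypothetical C-style API that only understands raw pointers.
int legacyLength(const char* s) {
    int n = 0;
    while (s && s[n]) ++n;
    return n;
}

int demo() {
    // The smart pointer owns the buffer...
    std::unique_ptr<char[]> buf(new char[4]{'a', 'b', 'c', '\0'});
    // ...but we "retreat" to the raw pointer at the call site via get().
    // Ownership stays with buf; the callee only borrows the raw pointer.
    return legacyLength(buf.get());
}
```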

what to test #some brief observations

For a whitebox coder self-test, it’s tough to come up with realistic corner cases. It takes insight about the many systems that make up the real world. I would argue this type is the most valuable kind of regression test.

If you see a big list of short tests, then some of them are trivial and simply take up space and waste developer time. So I’m no big fan of elaborate testing frameworks.

Some blackbox tester once challenged a friend of mine to design test cases for an elevator — presumably inputs from the call buttons on each level + the in-lift buttons.

%%jargon — describing c++ templates

The official terminology describing class templates is clumsy and inconsistent with java/c#. Here’s my own jargon. Most of the candidate words (specialize, instantiate) already have a technical meaning, so I have to pick new words like “concretize” or “generic”.

Rule #1) The last Word is key. Based on [[C++FAQ]]
– a class-template — is a template not a class.
– a concretized template-class — is a class not a template

“concretizing” a template… Official terminology is “instantiating”

A “concretized class” = a “template class”. Don’t confuse it with a “concrete” class, which means non-abstract

A “unconcretized / generic class template” = a “class template”.
(Note I won’t say “unconcretized class” as it’s self-contradictory.)

A “non-template” class is a regular class without any template.

Note concretizing = instantiating, completely different from specializing!

“dummy type name” is the T in the generic vector<T>

“type-argument”, NOT “parameter“, refers to an actual, concrete type like the int in vector<int>
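The jargon in a tiny sketch (Box is a made-up class-template):

```cpp
// A class-template (generic, unconcretized): a template, not a class.
template <typename T>   // T is the "dummy type name"
struct Box {
    T value;
};

// Concretizing (officially "instantiating") with the type-argument int
// produces a template-class: a real class, not a template.
using IntBox = Box<int>;

int demo() {
    IntBox b{42};
    return b.value;
}
```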

beware of creating reference to dereferenced pointer

In one code sample (p190 [[c++in24hour]]), we see that once you delete a pointer, a reference to the pointee becomes a dangling reference. This is more common in multi-threaded code.

I guess sometimes we do create a reference to a dereferenced pointer, but we have standard guards to avoid dangling references. What are these guards? Not sure.

Reference param is best practice in c++. What if caller passes an unwrapped pointer into a function’s ref param? How do you ensure the underlying heap/stack thingy doesn’t get deallocated?
– Single-threaded — if the pointee is accessible only on this thread, it can’t possibly “vaporize” beneath my feet. The caller won’t (technically impossible) delete the ptr before the func returns.
– multi-threaded — all hell breaks loose. We need guards.

There are many similar multi-threading “delete” issues to be solved with ownership control. Only one function on one thread should delete.
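A minimal single-threaded illustration of the hazard (here the only danger is touching the reference after the delete):

```cpp
#include <cstddef>
#include <string>

// A reference bound to a dereferenced heap pointer dangles once the
// pointer is deleted. (Sketch of the hazard; do not imitate.)
std::size_t danglingDemo() {
    std::string* p = new std::string("hello");
    std::string& ref = *p;          // reference to the dereferenced pointer
    std::size_t n = ref.size();     // fine: pointee still alive
    delete p;                       // ref now dangles; any further use of
                                    // ref would be undefined behavior
    return n;
}
```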

factory returning smart ptr^pbclone #Sutter2013

Background — This is one example of “too many variations”. A simple idea becomes popular; people start to mix-n-match it with other ideas and create many, many interesting questions. In practice we should stick to a few best practices and avoid the hundreds of other variations.

Update: the article is a little too dogmatic. A few take-aways:

  • returning raw ptr can indeed leak in real life
  • For a polymorphic type, return a unique_ptr by default. See also [[effModernC++]]. (Return shared_ptr only if the factory saves it in internal state, but who does?)
  • For a non-polymorphic type, return a clone. Safely assume it’s copyable or movable in c++11.
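The polymorphic-type take-away in a sketch (Shape and makeShape are hypothetical names):

```cpp
#include <memory>

struct Shape {
    virtual int sides() const = 0;
    virtual ~Shape() = default;
};

struct Triangle : Shape {
    int sides() const override { return 3; }
};

// Factory for a polymorphic type: return unique_ptr by default.
// The caller owns the object; nothing leaks even on exceptions.
std::unique_ptr<Shape> makeShape() {
    return std::make_unique<Triangle>();
}
```

A caller who really needs shared ownership can still promote it: `std::shared_ptr<Shape> s = makeShape();`.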

I think it’s quite common (see [[safe c++]] P51) to have a factory function creating a heap “resource”. Given it’s on the heap, the factory must return the object by pointer, and it’s easy to hit a memory leak. The standard solution is to return a ref-counted smart ptr.

Q: what if I forget to destruct the obj returned?
A: I feel it’s uncommon and unnatural. (More common to forget deletion.) Often the object returned i.e. the smart pointer object is on the stack – no way to “forget”.

Note if a raw pointer variable is on the stack, then stack unwinding will reclaim the pointer variable itself, but won’t call delete on the pointee!

Q: what if the object returned is saved as a field like myUmbrella.resource1
A: I feel this is fine because the myUmbrella object will be destructed by someone’s responsibility. That would automatically destruct the smart pointer field this->resource1.

Alternative to the ref count smart ptr, we can also return a scoped pointer. See [[safe c++]].

Q: what if the factory needs to return a pointer to a stack object?
A: I don’t think so. Pointee goes out of scope….
A: I feel most “resources” are represented by a heapy thingy. Alternatively, it’s also conceivable to hold a “permanent” resource, e.g. a global — memory management is then a non-issue.

By the way, [[safe c++]] also mentions the __common__ pattern of a (non-factory) function returning a pointer to an existing object, rather than a “new SomeClass”. The challenges and solutions are different.

append _v2 to methods and classes

Relax — it’s fairly painless to use the IDE to rename any method or class back and forth.

When I feel one class is fairly complete among 20 incomplete or “volatile” classes, I mark it someClass_v9, signifying it’s version 0.9. When I create a mockup class I often mark it MyClass_v1 — useful when putting together a paper aircraft to build out a prototype. It helps to see these little information-radiators when so many things are in flux.

standard practice around q[delete]

See also post about MSDN overview on mem mgmt…

  • “delete” risk: too early -> stray pointer
  • “delete” risk: too “late” -> leak. You will see steadily growing memory usage
  • “delete” risk: too many times -> double free
  • … these risks are resolved in java and dotnet.

For all simple scenarios, I feel the default and standard idiom to program the delete is RAII. This means the delete is inside a dtor, which in turn means there’s a wrapper class, perhaps some smart ptr class.

It also means we create stack instances of
– the wrapper class
– a container holding the wrapper objects
– an umbrella holding the wrapper object
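A bare-bones sketch of the idiom: the delete lives in the dtor of a wrapper class, and we create a stack instance of the wrapper (illustration only; in real code prefer std::unique_ptr):

```cpp
struct Resource { int id; };   // hypothetical heap resource

class ResourceHolder {
    Resource* p_;
public:
    explicit ResourceHolder(int id) : p_(new Resource{id}) {}
    ~ResourceHolder() { delete p_; }   // the one standard place for delete
    // forbid copying so two holders never double-free the same pointee
    ResourceHolder(const ResourceHolder&) = delete;
    ResourceHolder& operator=(const ResourceHolder&) = delete;
    int id() const { return p_->id; }
};

int demo() {
    ResourceHolder h(7);   // stack instance of the wrapper class
    return h.id();
}                          // dtor runs here; no manual delete, no leak
```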

Should every departure/deviation be documented?

I feel it’s not the best idea to pass a pointer into some cleanup function and, inside the function, delete the passed-in pointer. What if the pointee was already deleted, or is still /in active service/, or is a non-heap object, or is embedded in a container or umbrella… See P52 [[safe c++]]

renaming folder in svn

C) Conservative approach – clone the folder, verify it, check in, observe [3], then schedule to remove obsolete folder.

** Caveat – Before removal, everyone needs to know the old folder is obsolete and not to be used.
** Caveat – history lost

[3] it can take months to uncover a problem induced by the renaming.

D) The Tortoise “rename” is convenient but potentially dangerous. It usually works. Acceptable on unimportant svn folders. Use your judgement.

%%jargon – Consumer coder, Consumer class

When we write a utility, an API, or a data class to be used by other programmers or as components (or “services” or “dependencies”) in other modules, we often strain to find an unambiguous and distinct term that refers to “the other side” whom we are working to serve. The common choices of words are all ambiguous due to overload —
“Client” can mean client-server.
“User” can mean business user.
“App developer”? Both me and “the other side” are app developers

My Suggestions —

How about “downstream coder”, or “downstream classes”, or “downstream app” ?

How about “upper-layer coder”, “upper-layer classes”, “upper-layer app”, “upper-layer modules”
How about “upper-level coder”, “upper-level classes”, “upper-level app”, “upper-level modules”
How about “Consumer coder”, “Consumer class”, or “Consumer app”?

##coding guru tricks (tools) learnt across Wall St teams

(Blogging. No need to reply.)

Each time I join a dev team, I tend to meet some “gurus” who show me a trick. If I am in a team for 6 months without learning something cool, that would be a low-calibre team. After Goldman Sachs, I don’t remember a sybase developer who showed me a cool sybase SQL trick (or any generic SQL trick). That’s because my GS colleagues were too strong in SQL.

After I learn something important about an IDE, in the next team again I become a newbie to the IDE since this team uses other (supposedly “common”) features.

eg: remote debugging
eg: hot swap
eg: generate proxy from a web service
eg: attach debugger to running process
eg: visual studio property sheets
eg: MSBuild

I feel this happens to a lesser extent with a programming language. My last team uses some c++ features and next c++ team uses a new set of features? Yes but not so bad.

Confucius said “Among any 3 people walking by, one of them could be a teacher for me”. That’s what I mean by guru.

Eg: a Barcap colleague showed me how to make a simple fixed-size cache with FIFO eviction-policy, based on a java LinkedHashMap.
Eg: a guy showed me a basic C# closure in action. Very cool.
Eg: a Taiwanese colleague showed me how to make a simple home-grown thread pool.
Eg: in Citi, I was lucky enough to have a lot of spring veterans in my project. They seem to know 5 times more spring than I do.
Eg: a sister team in GS had a big, powerful and feature-rich OO design. I didn’t know the details but one thing I learnt was — the entire OO thing has a single base class
Eg: GS guys taught me remote debugging and hot replacement of a single class
Eg: a guy showed me how to configure windows debugger to kick-in whenever any OS process dies an abnormal death.
Eg: GS/Citi guys showed me how to use spring to connect jconsole to the JVM management interface and change object state through this backdoor.
Eg: a lot of tricks to investigate something that’s supposed to work
Eg: a c# guy showed me how to consolidate a service host project and a console host project into a single project.
Eg: a c# guy showed me new() in generic type parameter constraints

These tricks can sometimes fundamentally change a design (of a class, a module or sub-module)

Length of experience doesn’t always bring a bag of tricks. It’s ironic that some teams could be using, say, java for 10 years without knowing hot code replacement, so these guys had to restart a java daemon after every tiny code change.

Q: do you know anyone who knows how to safely use stop(), resume(), suspend()?
Q: do you know anyone who knows how to read query plan and predict physical/logical io statistic spit out by a database optimizer?

So how do people discover these tricks? Either learn from another guru or by reading. Then try it out, iron out everything and make the damn code work.

%%unconventional code readability tips – for prod support

[[The Art of Readable Code]] has tips on naming, early return (from functions), de-nesting,  variable reduction, and many other topics…. Here are my own thoughts.

I said in another blog that “in early phases (perhaps including go-live) of an enterprise SDLC, a practical priority is instrumentation.” Now I feel a 2nd practical priority is readability and traceability, esp. for the purpose of live support. Here are a few unconventional suggestions that some authorities will undoubtedly frown upon.

Avoid cliche method names as they carry less information in the log. If a util function isn’t virtual but widely used, try to name it uniquely.

Put numbers in names — the less public but important class/function names. You don’t want people outside your team to notice these unusual names, but names with embedded numbers are more unique and easier to spot.

Avoid function overloads (unavailable in C). They reduce traceability without adding value.

Log with individualized, colorful even outlandish words at strategic locations. More memorable and easier to spot in log.

Asserts – make them easy to use and encourage yourself to use them liberally. Convert comments to asserts.

If a variable (including a field) is edited from everywhere, then allow a single point of write access – choke point. This is more for live support than readability.

How do global variables fit in? If a Global is modified everywhere (no choke point), then it’s hard to understand its life cycle. I always try to go through a single point of write-access, but Globals are too accessible and too open, so the choke point is advisory and easily bypassed.
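A sketch of the choke-point idea (names are invented): every write funnels through one function, which is then the single place to log, assert, or set a breakpoint.

```cpp
#include <string>

namespace {
    int g_retryLimit = 3;   // the "global" under discussion
}

// The advisory single point of write access. The 'who' tag exists purely
// for traceability in the log (logging itself omitted in this sketch).
void setRetryLimit(int v, const std::string& who) {
    (void)who;              // real code would log: who changed it, to what
    g_retryLimit = v;
}

int retryLimit() { return g_retryLimit; }
```

As the paragraph above notes, the global itself remains reachable, so in C++ this choke point is advisory unless the variable is made private to one translation unit.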

quiet confidence on go-live day

I used to feel “Let’s pray no bug is found in my code on go-live day. I didn’t check all the null pointers…”

I feel it’s all about …blame, even if managers make it a point to avoid blame.

Case: I once had a timebomb bug in my code. All tests passed but production system failed on the “scheduled” date. UAT guys are not to blame.

Case: Suppose you used hardcoding to pass UAT. If things break on go-live, you bear the brunt of the blame.

Case: if a legitimate use case is mishandled on go-live day, then
* UAT guys are at fault, including the business users who signed off. Often the business comes up with the test cases. The blame question is “why wasn’t this use case specified?”
* Perhaps a more robust exception framework would have caught such a failure gracefully, but often the developer doesn’t bear the brunt of the blame.
**** I now feel business reality discounts code quality in terms of airtight error-proofing
**** I now feel business reality discounts automated testing for Implementation Imperfections (II). See

Now I feel if you did a thorough and realistic UAT, then you should have quiet confidence on go-live day. Live data should be no “surprise” to you.

Model dependent on view

See also

The longer version of the guideline is “model class source code should not mention views.” A model class should not be “implemented” using any specific view. Instead, it should be view-agnostic.

Import — view classes are usually in another package. Should not import them.

Reusable — Imagine MyModel uses MyView. MyModel is usable only with MyView and unusable otherwise.

Knowledge of MyView — The fact that MyModel uses MyView implies that MyModel methods need to “open up” MyView to use its members or to pass a MyView variable around. This means whoever using MyModel needs to know something about MyView.

Change/impact — Changes to MyView can break MyModel. It can happen at compile-time (good), or no impact at compile-time but fails silently at run-time. On the other hand, the change/impact from M to V is normal and expected.

Layering — (This is the best illustration of the layering principle in OO.) M is at a lower layer below V. Lower layers should be free of higher-layer knowledge.

Separation of concerns — M author should not be concerned with (not even the interfaces of) upper layers. She should focus on providing a simple clean “service” to upper layers. This makes lower-layer classes easy to understand.

Coupling — This is also one of the good illustrations of tight coupling.

Two-way dependency — (This is the best illustration of two-way dependency.) If V is implemented using M, M should not “depend on” V. Basic principle. Two-way dependency is tight coupling.
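One common way to keep M free of V is a generic callback: the model notifies listeners without ever naming a view class (Model here is a made-up example, not a claim about any particular framework):

```cpp
#include <functional>
#include <vector>

// The model exposes a generic subscription point. No view type appears
// anywhere in this file, so no view header is imported and no view
// change can break the model at compile time.
class Model {
    int value_ = 0;
    std::vector<std::function<void(int)>> listeners_;
public:
    void subscribe(std::function<void(int)> cb) {
        listeners_.push_back(std::move(cb));
    }
    void set(int v) {
        value_ = v;
        for (auto& cb : listeners_) cb(v);  // notify without knowing who listens
    }
    int get() const { return value_; }
};
```

A view registers itself via subscribe(), so the dependency arrow points only from V to M.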

tuning/optimization needs "targeted" input data

My personal focus is performance profilers for latency engineering, but this principle also applies to other tuning — tuning needs “targeted” even “biased” input data, biased towards the latency-critical scenarios.

See also the post on 80/20.

The “80/20” item in [[More Effective C++]] points out that we must feed a valid input data set to the performance profiler. I think it’s easy to get carried away by a good profiler’s findings, without realizing that the really important users (who?) or the critical operations may not be represented by the test case.

This sounds like more than an engineer’s job duty — find the relevant input data sets that deserve performance-profiling most. Perhaps the business analyst or project manager knows who and what functionality is performance-critical.

80/20 rule, diminishing return…

In latency (or any other) tuning, I’m always suspicious and critical about common assumptions and best-practices… I don’t completely subscribe to the “80/20” item in [[more eff c++]] but it raised valid points. One of them was the need for “targeted” input data — see the preceding post.

Bottleneck is a common example of 80/20….

Diminishing return is another issue. A discussion on DR assumes we have a feedback loop.

feedback loop — It’s so important to have a typical (set of) test setup to measure latency after tuning. Without it we risk shooting in the dark. In many situations that test-setup isn’t easy…. Good feedback loops focus on the most important users. In some trading apps, it’s the most latency-sensitive [1] trades.

Illustration of a reliable feedback loop — to test a SQL query you just tuned, measure the 2nd run of the query, because you want all the data to be in cache when you measure.

cost/benefit — sometimes there’s too much unknown or complexity in a module, so it would take too much time to get clarity — high cost in terms of man-days or delivery time frame. After the research it may turn out to be a hopeless direction — low benefit

[1] largest? no. Largest trades are often not latency sensitive.

practice blind-walk on a tight rope, and grow as true geek

A fellow developer told me that if a c++ class is deployed to production, then it’s better not to rename a method in it.

In another case — an initial system release to production — I was the designer of the spring config loading mechanism. I was so confident about how my code works that I simply dropped an override config file into a special folder and I knew my code would take care of it. A colleague suggested we test it in QA environment first. I said we need to develop some level of self-confidence by deliberately taking small leaps of faith in real production environment.

I developed similar self-confidence in GS when I directly ran update/delete in production database (when no choice). My release was approved on the last day of the monthly cycle. I had just 2 hours to finish the release. So I put the entire month-end autosys batch (tens of millions of dollars of payout processing) on-hold to complete my release. However, the first run produced bad data in the production DB, so I had to clean it up, tweak the stored proc and table constraints a bit before rerun. I stayed till 11pm. If I were to test everything in QA first, it would take 3 times longer and I would lose concentration, and would not be able to react to the inevitable surprises during such a complex release.

As a developer, if you always play safe and test, test, test, and dare not trust your own memory and deductive thinking, then you will not improve your memory and deduction. You will not quickly master all the one-liners, all the IDE refactors, …

Top Lasik surgeons can perform the operation swiftly (which is good for the patient). A young tight-rope walker will one day need to practice blind-walk. Master wood-carving artists develop enough self-confidence to not need to pencil down the cuts before cutting…

long constructor signature vs tons of setters

A mild code smell is a long list of (say 22) constructor parameters to set 22 fields. The alternative is 22 setters.
If some field should be final, then the ctor pain is justified.

If some of the 22 fields are optional, then it’s good to initialize only the compulsory fields in the ctor. Optional fields can use setter.

If one field’s initialization is lengthy, then this single initialization can prolong the ctor call, during which the incomplete new-born reference can leak. Therefore, it’s perhaps safer to postpone this slow initialization to a setter.

If setter has non-trivial logic for a field, then setter is a reasonable choice.

The best benefit of the ctor route is the final keyword. With setters, we need to guard against access to a field before it’s initialized. Final fields are beautiful.

It’s often hard to tell which argument is which when looking at the 22 arguments in a ctor call (e.g. when they include literals). However, a cheap sweetener is the IDE refactor introduce-local-variable. In contrast, setters serve as simple documentation, though some IDEs are unable to update setter names when we rename a field.
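The trade-off sketched in C++, where a const member plays the role of Java’s final field (Order and its fields are invented):

```cpp
#include <string>

// Compulsory fields go through the ctor (and can be const, the C++ analog
// of Java's final); optional fields use setters.
class Order {
    const std::string symbol_;   // compulsory + immutable: ctor-only
    int quantity_;               // compulsory
    std::string note_;           // optional: setter is fine
public:
    Order(std::string symbol, int quantity)
        : symbol_(std::move(symbol)), quantity_(quantity) {}
    void setNote(std::string n) { note_ = std::move(n); }
    const std::string& symbol() const { return symbol_; }
    int quantity() const { return quantity_; }
    const std::string& note() const { return note_; }
};
```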

learning common code smells (time-savers)


Every experienced and self-respecting developer cares about maintainability, because everyone of us has first hand experience tracing, changing, and “visualizing” convoluted logic in other people’s code. Once you know the pain, you intuitively try to reduce it for your successors — altruistic instinct.

At the management level, every experienced /application-manager/ recognizes the importance of maintainability since they all have witnessed the dire consequence (earthquake, explosion, eruption…[1]) of making changes to production codebase without fully understanding the codebase. More frequently, management knows how much developer Time (most Wall St teams are tiny — nimble and quick turnaround) it takes to find out why our system behaves this way. Bottom line — unreadable code directly impacts an application team’s value-add.

Technical debt is built up when coders take shortcuts, hide dirt under the carpet,  and “borrow time”.

If we ask every developer in each of our past teams to write down their dreaded code smells, some common themes will emerge — like rampant code duplication,  VeryLongVeryUgly methods, VLVU method signatures or 10-level nested blocks. Best practice means no universally-hated code smells.

Beyond those universally-hated, code smells can be widely hated, hated by some, frowned-on or tolerated … — different levels of smelliness.

I am trying to understand the common intolerances of common developers (not the purists). You seem to suggest there’s no point trying to. If someone says there’s a code smell in my code, I’d like to know if it’s universally hated, or hated-by-some.

In many cases, our code is seldom debugged/modified/reviewed by others. In those cases, it doesn’t matter that much.

Happy new year …

[1] In reality, they often get the “patient” to sign the not-responsible-for-death waiver before operating on the patient. They also have a battery of UAT test cases to make sure all the (common) use cases are covered. So the code change is a best effort and may break under unusual circumstances.

make your app self-sustainable after you are gone

I always hope that my app can run for years after I'm gone, not get killed soon after. Here are some realities and some solutions.

If your app does get killed soon after, then it's a management strategic error. Waste of resources.

For a 6-month contractor on wall street, it's quite common for the code to be maintained by an inexperienced guy. Knowledge transfer is usually rather brief and ineffective. (“Effective” means new maintainer runs the app for a few cycles and trace log/source code to resolve issues independently, with author on stand-by.)

All your hacks put into the code to “get it to work” are lost with you when you are gone.

Tip: really understand the users' priorities and likely changes. Try to identify the “invariant” business logic. The implementation of this core logic is more likely to survive longer. Probably more than 50% (possibly 90%) of your business logic isn't invariant. If really 90% of the logic isn't invariant, then the requirement is unstable and any code written by anyone is likely a throw-away.

Tip: thoroughly test all the realistic use cases. Don't waste time on the imaginary —
Tip: don't waste time writing 99 nice-looking automated tests. Focus on the realistic use cases. Automated tests take too much effort to understand. Maintainer won't bother with it since she already has too much to digest.

Tip: use comment/wiki/jira to document your hacks. I don't think high-level documentation is useful, since developers can figure it out. Since time is very limited, be selective about what constitutes a “hack”. If they can figure it out then don't waste time documenting in wiki. Source code comment is usually the best.
Tip: get new maintainer involved early.
Tip: pray that requirements don't change too much too fast after you are gone.

code change while switching JDBC library?

When a new vendor implements the JDBC API, they create classes implementing the PreparedStatement java interface. They ship the class files of the new PreparedStatement implementation classes. They don’t need to ship PreparedStatement.class, which is a standard one, unchanged. Suppose I want to change my java app to use this vendor, I need to make exactly one SC (source code) change – import the new PreparedStatement class. Note we make no more and no fewer SC changes. That’s the “state of the art” in java.

The C solution to “API standardization” is different but time-honored. For example, if a vendor implements pthreads, they implement the pthread_create() function (and structs like pthread_mutex_t) in a new .c file. They ship the binary. Now suppose I want to change my working C application and switch to this vendor, from an older implementation…

Q: what SC changes?
%%A: possibly none. (None means no recompile needed.) When I compile and link, I probably need to pull in the new binary.

Q: Do they have to ship the header file?
%%A: I don’t think so. I feel the header file is part of the pthreads standard.

Now the interesting bit — suppose I want to use a vendor with small extensions. SC change is different — include a slightly different header file, and change my thread function calls slightly. If the SC changes are small enough, it’s practically painless.

Now we see a real-world definition of “API” — If a vendor makes changes that require application SC change, then that impacts the API.

For java, the API includes all members public or protected. For C, it’s more about header files.
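The java half of this idea can be sketched in Python for brevity — a client coded against a stable interface, where switching vendors is a one-line change. The vendor names below are invented for illustration.

```python
# Hypothetical sketch: "API standardization" in miniature.
# The client depends only on a stable interface; switching vendors
# changes exactly one line (the binding below), nothing else.

class PreparedStatementIface:
    """Stable, standardized surface -- loosely analogous to java.sql.PreparedStatement."""
    def execute(self, sql):
        raise NotImplementedError

class VendorAStatement(PreparedStatementIface):
    def execute(self, sql):
        return "vendorA ran: " + sql

class VendorBStatement(PreparedStatementIface):
    def execute(self, sql):
        return "vendorB ran: " + sql

# The one SC change to switch vendors -- the "import" line, so to speak:
PreparedStatement = VendorBStatement          # was: VendorAStatement

def run_report():
    stmt = PreparedStatement()
    return stmt.execute("select 1")

print(run_report())   # vendorB ran: select 1
```

Everything downstream of `run_report()` is untouched by the vendor switch — that's the "no more and no fewer SC changes" property.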

instance fields vs long argument list

One of the “mild” code smells is the habit of converting instance-method params into fields. It complicates code tracing. Much cleaner is a local var – writable only in a local scope. In contrast, a field (or global) can be updated from many places, so it’s harder to visualize data flow.

Practical Scenario – If there are 5 objects I keep passing among a bunch of instance-methods, I would consider converting them to instance fields of the class (so methods receive fewer args). As a middle ground between fields and long signatures, perhaps an immutable method object is acceptable.
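The middle-ground “immutable method object” might look like this Python sketch (the field names are invented for illustration):

```python
from dataclasses import dataclass

# A middle ground between long signatures and instance fields:
# bundle the recurring parameters into one immutable "method object".

@dataclass(frozen=True)          # frozen => immutable, safe to pass around
class TradeContext:
    account: str
    book: str
    currency: str
    trade_date: str
    settle_date: str

def price(ctx: TradeContext, qty: int) -> str:
    # qty stays a plain local argument; the shared 5 travel together,
    # and no class-level mutable state is introduced
    return f"{qty} priced for {ctx.account}/{ctx.book} in {ctx.currency}"

ctx = TradeContext("ACC1", "BOOK7", "USD", "2011-05-01", "2011-05-03")
print(price(ctx, 100))   # 100 priced for ACC1/BOOK7 in USD
```

Because the context object is frozen, it keeps the local-variable virtue of being traceable — no method can silently rewrite it mid-flow.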

## practical app documentation in front office

+ Email? default “documentation” system, grossly inefficient but #1 documentation tool in practice.
+ Source code comments? default documentation system

— above are “organic” products of regular work; below are special documentation efforts — they tend to be neglected —

+ Jira? rarely but can be used as organic documentation. easy to link
+ [jsw] Cheatsheet, run book and quick guide? same thing, very popular.
+ [jsw] Troubleshooting guide on common production issues
+ Design document? standard practice but often gets out of date. A lot of effort. Often too high-level and not detailed enough for developers/implementation guys — written for another audience. Some people tell me a requirement spec should be detailed enough that developers could write code against it, but in reality, I have seen extremely detailed specs and developers always have questions needing real-time answers.

+ Unit test as documentation? Not sure. In theory only.
+ Wiki? widespread. Easy to link
+ sharepoint? less widely used than wiki as documentation tool. organized into a file tree.

The more “organic”, the easier, and the better its take-up rate; The more artificial, the more resistance from developers.

[w = wiki]
[j = jira]
[s = sharepoint]

code change right before sys integration test

Reality – each developer has many major projects and countless “noise” tasks simultaneously. Even those “successfully” completed projects have long tails — Never “completely completed” and out of your mind.

Reality – context switch between projects has real cost both on the brain and on your computer screen — If you keep too many files, emails, browsers open, you lose focus and productivity.

Q: for something like SIG, should you wrap up all code changes 3 days before SIT (sys int test) or the night before SIT?
A: night before. During the SIT your mind will be fresh and fully alert. Even if you finish 3 days before, on the night before you are likely to find more changes needed, anyway.

software engineering where? wall street?

When I look at a durable product to be used for a long time, I judge its quality by its details. Precision, finishing, raw material, build for wear and tear…. Such products include wood furniture, musical instruments, leather jackets, watches, … I often feel Japanese and German (among others) manufacturers create quality.

Quick and dirty wall street applications are low quality by this standard, esp. for code maintainers.

Now software maintainability requires a slightly different kind of quality. I judge that quality, first and foremost, by correctness/bugs, then coverage for edge/corner cases such as null/error handling, automated tests, and code smell. Architecture underpins but is not part of code quality. Neither is performance, assuming our performance is acceptable.

There's a single word that sums up what's common between manufacturing quality and software quality — engineering. Yes software *is* engineering but not on wall street. Whenever I see a piece of quality code, it's never in a fast-paced environment.

code-cloning && other code smells in fast-paced GTD Wall St

In fast-paced Wall St, you can copy-paste, as long as you remember how many places to keep in-sync. In stark contrast, my ex-colleague Chad pointed out that even a one-liner sybase API call should be made a shared routine and a choke point among several execution paths. If everyone uses this same routine to access the DB, then it’s easier to change code. This is extreme DRYness. The opposite extreme is copy-paste or “code cloning” as some GS veteran described. Other wall st developers basically said Don’t bother with refactoring. I’m extremely uncomfortable with such quick and dirty code smells, but under extreme delivery pressure, this is often the only way to hit deadlines. Similarly,

* Use global variables.
** Use global static collections. Remember my collection of locks in my ECN OMS kernel?
** Instead of passing local variables as arguments, make them static or instance fields.
** Use database or env var as global variables.
** Use spring context as global variables.

* Tolerate duplicate variables (and functions) serving the same purpose. Don’t bother to refactor, simplify or clean up. Who cares about readability? Not your boss! Maybe the next maintainer but don’t assume that.
** Given the requirement that a subclass field and a parent field should point to the same object — due to bugs, sometimes they don’t. Best solution is to clean up the subclass, but don’t bother.
** 2 fields (out of 100) should always point to the same object, but due to bugs, sometimes they don’t. Best solution is to remove one, but don’t bother.
** a field and a field of a method param should always point to the same object, so the field in the param’s class is completely unnecessary distraction. Should clean up the param’s class, but don’t bother.
** a field and method parameter actually serve the same purpose.
*** maybe they refer to the same object, so the method param is nothing but noise. Tolerate the noise.
** tolerate a large number (100’s) of similar methods in the same class. Best to consolidate them to one, but don’t bother.
** tolerate many similar methods across classes. Don’t bother to consolidate them.
** tolerate tons of unused variables. Sometimes IDE can’t even detect those.

– use macro to substitute a checked version of vector/string. It’s cleaner but more work to use non-macro solutions.
– don’t bother with any validation until someone complains. Now, validation is often the main job of an entire application, but if your boss doesn’t require you to validate, then don’t bother. If things break, blame upstream or downstream for not validating. Use GarbageInGarbageOut as your defence.
– VeryLongVeryUgly methods, with multi-level nested if/while
– VLVU signatures, making it hard to understand (the 20 arguments in) a CALL to such a method.
– many methods could be made static to simplify usage — no need to construct these objects 9,999 times. It’s quite safe to refactor, but don’t bother to refactor.
– Don’t bother with null checks. You often don’t know how to handle nulls. You need to ask around why the nulls — extra work, thankless. If you can pass UAT, then NPE is (supposedly) unlikely. If it happens in production, you aren’t the only guy to blame. But if you finish your projects too slow, you can really suffer.
– Don’t bother with return value checks. Again, use UAT as your shield.
– use goto or exceptions to break out of loops and call stacks. Use exception for random jump.
– Ignore composed method pattern. Tolerate super long methods.
– Use if/else rather than virtual functions
– Use map of (String,Object) as cheap value objects
– Forget about DRY (Don’t Repeat Yourself)
– Tolerate super long signatures for constructors, rather than builder pattern ( [[EffectiveJava]] )
– don’t bother to close jdbc connections in the finally block
– open and close jdbc connections whenever you like. Don’t bother to reuse connections. Hurts performance but who cares:)
– Don’t pass jdbc connections or jdbcTemplates. In any context use a public static method to get hold of a connection.
– create new objects whenever you feel like it, instead of reusing them efficiently. Of course stateless objects can be safely reused, and creating new ones is cheap. It hurts performance but who cares about performance?
– Use spring queryForMap() whenever convenient
– Tolerate tons of similar anon inner classes that should be consolidated
– Tolerate 4 levels of inner classes (due to copy-paste).
– Don’t bother with thread pools. Create any number of threads any time you need them.
– tolerate clusters of objects. No need to make a glue class for them
– Use public mutable fields; don’t bother with getters/setters
– Use JMX to open a backdoor into production JVM. Don’t insist on password control.
– print line# and method name in log4j. Don’t worry about performance hit
– Don’t bother with factory methods; call ctor whenever quick and dirty
– Don’t bother with immutables.
– tolerate outdated, misleading source documentation. Don’t bother to update. Who cares:)
– some documentation across classes contradicts each other. Don’t bother to straighten it out.
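Chad's "choke point" idea from the top of this section — the opposite of all the copy-paste above — can be sketched in a few lines of Python. The routine names are invented for illustration:

```python
# Sketch of the "choke point" discipline: every execution path funnels
# through ONE shared routine for DB access, so a later change (auditing,
# retry, vendor switch) happens in exactly one place.

calls = []   # stand-in for an audit log

def run_query(sql):
    """The single choke point. Even a one-liner DB call goes through here."""
    calls.append(sql)                 # e.g. add auditing later, in ONE place
    return f"rows for [{sql}]"

def path_a():
    return run_query("select * from trades")

def path_b():
    return run_query("select * from positions")

path_a()
path_b()
print(len(calls))   # 2 -- both paths funneled through the choke point
```

With code cloning, the same change would have to be replayed in every copy — and you'd have to remember how many copies exist.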

avoid convenience high-level wrappers; program against low-level APIs

(another blog post.)

Many examples of the same observation — convenient high-level wrappers insulate/shield a developer from the (all-important) underlying API. Consequently, he can’t demonstrate his experience and mastery in that particular “real” technology — he only knows how to use the convenience wrappers.

example — servlet — in one of my GS trading desks, all web apps were written atop an in-house framework “Panels”, which was invented after servlet but before JSP, so it offers many features similar or in addition to JSP. After years of programming in Panels, you would remember nothing at all about the underlying servlet API, because that is successfully encapsulated. Only the Panels API authors work against the underlying servlet API. Excellent example of encapsulation, standardization, layering and division of labor.

Note the paradox — guys who wrote those wrappers are considered (by colleagues and job interviewers) as stronger knowledge experts than the users of the wrappers. So, everyone agrees knowledge of the underlying technology is more valuable.

example — Gemfire — one of my trading engines was heavy on Gemfire. The first thing we created was a bunch of wrapper classes to standardize how to access gemfire. Everyone must use the wrappers. If you are NOT the developer of those wrappers, then you won’t know much about gemfire API even after 12 months of development. Interviewers will see that you have close to zero gemfire experience — unfair!

example — sockets — i feel almost everywhere sockets are used, they are used via wrappers. Boost, ACE, java, c#, python, perl all offer wrappers over sockets. However, socket API is critical and a favorite interview topic. We had better go beyond those wrappers and experiment with underlying sockets. I recently had to describe my network programming experience. I talked about using the perl telnet module, which is a wrapper over a wrapper over sockets. I feel the interviewer was looking for tcp/udp programming experience. But for that system requirement, the perl telnet module was the right choice. That right choice is a poor choice for knowledge-gain.

example — thread-pool — One of my ML colleagues told me (proudly) that he once wrote his own thread pool. I asked why not use the JDK and he said his is simple and flexible — infinitely customizable. That experience definitely gave him confidence to talk about thread pools in job interviews.

If you ask me when a custom thread pool is needed, I can only hypothesize that the JDK pools are general-purpose, not optimized for the fastest systems. Fastest systems are often FPGA or hardware based but too costly. As a middle ground, I’d say you can create your customized threading tools including a thread pool.

example — Boost::thread — In C++ threading interviews, I often say I used a popular threading toolkit — Boost::thread. However, interviewers repeatedly asked how boost::thread authors implement certain features. If you use boost in your project, you won’t bother to find out how those tools are internally implemented. That’s the whole point of using boost. Boost is a cross-platform toolkit (more than a wrapper) and relies on things like win32 threads or pthreads. Many interviewers asked about pthreads. If you use a high-level threading toolkit, you avoid the low-level pthreads details.

example — Hibernate — If you rely on hibernate and HQL to generate native SQL queries and manage transaction and identities, then you lose touch with the intricate multitable join, index-selection, transaction and identity column issues. Won’t do you good for interviews.

example — RV — one of my tibco RV projects used a simple and nice wrapper package, so all our java classes don’t need to deal with tibrv objects (like transport, subject, event, rvd….). However, interviewers will definitely ask about those tibrv objects and how they interact — non-trivial.

example — JMS — in GS, the JMS infrastructure (Tibco EMS) was wrapped under an industrial-strength GS firmwide “service”. No applications are allowed to access EMS directly. As a result, I feel many (experienced) developers don’t know JMS api details. For example, they don’t use onMessage() much and never browse a queue without removing messages from it. We were so busy just to get things done, put out the fires and go home before midnight, so who has time to read what he never needs to know (unless he plans to quit this company)?

example — spring jms — I struggled against this tool in a project. I feel it is a thin wrapper over JMS but still obscures a lot of important JMS features. I remember stepping through the code to track down the call that started the underlying JMS server and client sessions, and i found it hidden deeply and not well-documented. If you use spring jms you would still need some JMS api but not in its entirety.

example — xml parsing — After many projects, I still prefer SAX and DOM parsers rather than the high-level wrappers. Interviewers love DOM/SAX.

example — wait/notify — i used wait/notify many times, but very few developers do the same because most would choose a high-level concurrency toolkit. (As far as I know, all threading toolkits must call wait/notify internally.) As a result, they don’t know wait/notify very well but luckily, wait/notify is not as complicated as the other APIs mentioned earlier.

example — FIX — one of my FIX projects uses a rather elaborate wrapper package to hide the complexity of FIX protocol. Interviewers will ask about FIX, but I don’t know any from that project because all FIX details were hidden from me.

example — raw arrays — many c++ authors say you should use vector and String (powerful, standard wrappers) to replace the low-level raw arrays. However, in the real world, many legacy apps still use raw arrays. If you avoid raw arrays, you have no skill dealing with them in interviews or projects. Raw arrays integrate tightly with pointers and can be extremely tricky. Half of C/C++ complexities stem from pointers.

example — DB tuning — when Oracle 10 came out, my DBA friend told me the toughest DBA job is now easier because oracle 10 features excellent self-tuning. I still doubt that because query tuning is always a developer’s job and the server software has limited insight. Even instance tuning can be very complicated so DBA hands-on knowledge is probably irreplaceable. If you naively believe in Oracle marketing and rely on self-tuning, you will soon find yourself falling behind other DBAs who look hard into tuning issues.

financial app testing – biz scenario^implementation error

Within the domain of _business_logic_ testing, I feel there are really 2 very different targets/focuses – Serious Scenario (SS) vs Implementation Imperfections (II). This dichotomy cuts through every discussion in financial application testing.

* II is the focus of junit, mock objects, fitnesse, test coverage, Test-driven-development, Assert and most of the techie discussions
* SS is the real meaning of quality assurance (QA) and user acceptance (UA) test on Wall St. In contrast II doesn’t provide assurance — QA or UA.

SS is about real, serious scenario, not the meaningless fake scenarios of II.

When we find out bugs have been released to production, in reality we invariably trace root cause to incomplete SS, and seldom to II. Managers, users, BA, … are concerned with SS only, never II. SS requires business knowledge. I discussed with a developer (Mithun) in a large Deutsche bank application. He pointed out their technique of verifying intermediate data in a workflow SS test. He said II is simply too much effort and little value.

NPE (null pointer exception) is a good example of II tester’s mindset. Good scenario testing can provide good assurance that NPE doesn’t happen in any acceptable scenarios. If a scenario is not within scope and not in the requirement, then in that scenario system behavior is “undefined”. Test coverage is important in II, but if some execution path (NPE-ridden) is never tested in our SS, then that path isn’t important and can be safely left untested in many financial apps. I’m speaking from practical experience.

Regression testing should be done in II testing and (more importantly) SS testing.

SS is almost always integrated testing, so mock objects won’t help.

XR’s memory efficiency in large in-memory search (DB diff via xml)

A real world Problem: 2 DB tables have very similar content. The owner decided to periodically export each table into a 200MB xml file. Each row becomes an element in xml. Each field an attribute. Business Users want to compare the rows on-demand via a servlet. Need to show per-field differences.

Performance? A few seconds to compare the xml files and show result.

Due to various overheads while reading and writing files, -Xmx1000m is required.

Insight — duplicate objects are the chief performance issue. Key technique is an object registry (like SSN in US), where each recurring object (strings, Floats too!) is assigned a 32-bit Oid. Dates are converted to long integers (memory-efficient). To compare 2 corresponding fields, just compare their Oids. String comparison is much slower than int comparison.

Use an AtomicInteger as Oid auto-increment generator — thread-safe.

– load each XML into memory, but never instantiate 2 identical objects. Use the Oid.
** construct an {Object,Oid} hashmap?
– now build object graphs, using the Oid’s. Each row is represented by an int array
– 1m rows become 1m int arrays, where 0th array element is always Column 0, 1st element always Column 1 …
– Since Column 3 is primary key, we build a hashmap (primary key Oid, int array) using 1st xml file
– now we go through the 1m rows from 2nd xml file. For each row, we construct its int array, use the primary key’s Oid to locate the corresponding int array from the hashmap, and compare the 2 int arrays.

Note xml contains many partial rows — some of the 50 fields might be null. However, every field has a name. Therefore our int-array always has 50 elements, with nulls represented by -1.

Table has 50 columns, 400,000 rows. Each field is char(20) to char(50).
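The whole object-registry technique fits in a short Python sketch (the original was java; sample data and names below are invented, with 3 columns instead of 50):

```python
from itertools import count

# Sketch of the "Oid" registry: each distinct field value is interned once
# and assigned a small integer id, so rows become int tuples and field
# comparison is int comparison, not string comparison.

oid_gen = count(1)          # auto-increment generator (AtomicInteger in the java original)
registry = {}               # value -> Oid
NULL_OID = -1               # missing fields represented by -1

def oid(value):
    if value is None:
        return NULL_OID
    if value not in registry:           # never two ids for identical values
        registry[value] = next(oid_gen)
    return registry[value]

def encode_row(fields):
    """A row of N fields becomes a fixed-length tuple of N ints."""
    return tuple(oid(f) for f in fields)

# Column 0 plays the primary key here. Build a lookup from file 1,
# then stream file 2 and diff the int arrays.
file1 = [("key1", "alice", "100"), ("key2", "bob", None)]
file2 = [("key1", "alice", "150"), ("key2", "bob", None)]

lookup = {row[0]: row for row in map(encode_row, file1)}
diffs = []
for raw in file2:
    row = encode_row(raw)
    old = lookup[row[0]]
    for col, (a, b) in enumerate(zip(old, row)):
        if a != b:
            diffs.append((raw[0], col))

print(diffs)   # [('key1', 2)] -- only column 2 of key1 differs
```

Identical strings across both files map to the same int, so memory holds one copy of each distinct value and the per-field diff is a cheap `!=` on ints.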

%%jargon — describing c# enums

Consider public enum Weekday{Mon=1, Tue, Wed, Thu,Fri,Sat,Sun} This declares an enum TYPE.

Since these aren’t singletons, we would get many INSTANCES of Weekday.Wed (being a value type), but only 7 MEMBERs in this Type. At runtime, we generally work with (pass and receive) INSTANCES, not MEMBERS. Members are a more theoretical concept.

Each MEMBER has a name and an Integer value. Only 7 names and 7 integer values in this Type. (Warning – Names are unique but not integer values.)

We can get name or integer value from an INSTANCE.
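A rough Python analogue of the same jargon (with the caveat that Python enum members ARE singletons, unlike C#'s value-type instances):

```python
from enum import Enum

# A Weekday TYPE with 7 MEMBERS; each member carries a name and an
# integer value, just like the C# declaration in the post.

class Weekday(Enum):
    Mon = 1
    Tue = 2
    Wed = 3
    Thu = 4
    Fri = 5
    Sat = 6
    Sun = 7

d = Weekday.Wed                # at runtime we pass around members
print(d.name, d.value)         # Wed 3 -- name and integer value per member
print(len(Weekday))            # 7 members in the Type
```

The "names unique, integer values not necessarily" warning holds here too: in Python, a duplicate integer value would make the second name a mere alias of the first member.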

automated test in non-infrastructure financial app

I worked in a few (non-infrastructure) financial apps under different senior managers. Absolutely zero management buy-in for automated testing (atest).

“automated testing is nice to have” — is the unspoken policy.
“regression test is compulsory” — but almost always done without automation.
“Must verify all known scenarios” — is the slogan but if there are too many (5+) known scenarios and not all of them critical then usually no special budget or test plan.

I feel atest is a cost/risk analysis for the manager. Just like market risk system. Cost of maintaining a large atest and QA system is real. It is justified on
* Risk, which ultimately must translate to costs.
* speed up future changes
* build confidence

Reality is very different on wall street. [1]
– Blackbox Confidence[2] is not provided by test coverage but by “battle-tested”. Many minor bugs (catch-able by atest but were not) will not show up in a fully “battle tested” system; but ironically a system with 90% atest coverage may show many types of problems once released. Which one enjoys confidence?

– future changes are only marginally enhanced by atest. Many atests become irrelevant. Even if a test scenario remains relevant, test method may need a complete rewrite.

– in reality, system changes are budgeted in a formal corporate process. Most large banks deal with man-months so won’t appreciate a few days of effort saving (actually not easy) due to automated tests.

– Risk is a vague term. For whitebox, automated tests provide visible and verifiable evidence and therefore provides a level of assurance, but i know as a test writer that a successful test can, paradoxically, cover up bugs. I never look at someone’s automated tests and feel that’s “enough” test coverage. Only the author himself knows how much test coverage there really is. Therefore Risk reduction is questionable even at whitebox level. Blackbox is more important from Risk perspective. For a manager, Risk is real, and automated tests offer partial protection.

– If your module has high concentration of if/else and computation, then it’s a different animal. Automated tests are worthwhile.

[1] Presumably, IT product vendors (and infrastructure teams) are a different animal, with large install bases and stringent defect tolerance level.
[2] users, downstream/upstream teams, and managers always treat your code as blackbox, even if they analyze your code. People maintaining your codebase see it as a whitebox. Blackbox confidence is more important than Whitebox confidence.

release test failures and code quality

(blogging again) XR, you mentioned some colleagues often release (to QA) code that can’t complete an end-to-end test .. (Did I get you wrong?)

That brings to mind the common observation “95% of your code and your time is handling the unexpected”. That’s nice advertisement and not reality — a lot of my GS colleagues aggressively cut corners on that effort (95% -> 50%), because they “know” what unexpected cases won’t happen. They frequently assume (correctly) a lot of things about the input. When some unexpected things happen, our manager can always blame the upstream or the user so we aren’t at fault. Such a culture rewards cutting-corners.

I used to handle every weird input combination, including nulls in all the weird places. I took the inspiration from library writers (spring, or tomcat…) since they can’t assume anything about their input.

I used to pride myself on my code “quality” before the rude awakening – such quality is unwanted luxury in my GS team. I admit that we are paid to serve the business. Business don’t want to pay for extra quality. Extra quality is like the loads of expensive features sold to a customer who just needs the basic product.

As we discussed, re-factoring is quality improvement at a price and at a risk. Business (and manager) may not want to pay that price, and don’t want to take unnecessary risk. I often do it anyway, in my own time.

static methods perfect; static fields dangerous

Hi XR,

Over the last 5 projects, I am moving into a new design direction — use static (rather than non-static) methods whenever possible, while keeping mutable static fields to a minimum.

A digression first — a note on local “auto” variables as a superior alternative to instance fields. Instance fields represent object state, which is often shared. System complexity (as measured by testing effort) is proportional to the number of stateful variables — mostly instance fields + some static fields. In contrast, local variables, including some function parameters [3], are temporary, on the stack, far more thread-friendly, and often stateless except for temporary state. The best ones are scratch-pad variables that become unreachable soon, so they don’t pollute, don’t linger, and have no side effects.

[3] If an object passed in as argument has a large “scope” somewhere else, then it doesn’t enjoy the simplicity of local variables.

Similar to local vars, the best static methods are stateless and thread-friendly — these methods don’t change object state i.e. instance fields or static fields. For example, most static debugger statements are stateless; most type converters are stateless; factory methods are often stateless as they create new objects; most math functions are static; many string utils are static

Static methods are often simpler. If you were to implement the same logic with instance methods, they often end up entangled in a web of dependencies.

I can easily move a static method between classes. Perfect lego blocks.

Object dependency is a major part of system complexity. Look at dependency injection frameworks. Java Swing is complex partly for this — Compared to the average non-swing app, I feel the average swing app has more objects and more inter-dependencies.

If you design objects to represent business domain entities (like accounts, requests, patients, shipments, houses..), then instance methods represent behavior of objects; static methods represent … utilities

When I look at a non-trivial OO system, and estimate the percentage of code in instance methods vs static methods, I find invariably instance methods outnumber static methods by at least 3 times (75% vs 25%). Habitually, I refactor them into static methods whenever feasible. Time well spent, as system invariably gets simpler and cleaner.

C# elevates static methods pattern into a language feature — static class

[[the art of readable code]] P98 says — to restrict access to class members, make as many methods static as possible. This lets the reader know “these lines of code are isolated from those variables”.
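The contrast can be sketched in Python (module-level functions playing the role of static methods; the toy class and numbers are invented):

```python
# A stateless "static-style" helper vs a method entangled with object state.
# The stateless version is a perfect lego block: movable between modules,
# thread-friendly, and trivially testable with no setup.

class Account:
    def __init__(self, balance):
        self.balance = balance          # mutable state -> testing surface grows

    def apply_interest(self, rate):     # instance method: mutates shared state
        self.balance *= (1 + rate)

def compute_interest(balance, rate):    # stateless: inputs in, output out
    return balance * (1 + rate)

# The stateless function needs no object construction and no fixture:
print(compute_interest(100.0, 0.5))    # 150.0

a = Account(100.0)
a.apply_interest(0.5)                  # same arithmetic, but via hidden state
print(a.balance)                       # 150.0
```

Testing `compute_interest` needs one line; testing `apply_interest` needs an object in a known state — and any other code touching `a.balance` widens what you must reason about.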

Any experience to share?

create void methods

Many developers told me they seldom write void methods. Instead, their methods by default return an int/char status or something similar.

There are justifications for void methods. I think void methods apply widely, not just when you have no choice.

* if you use void now, you can more easily add a return type later. More flexible.
* if you return something but don’t read it, then other programmers may get confused/mis-led. Inevitably you end up adding comments to explain that.
* if you read the return value even though you don’t need it, then that’s extra work, and other programmers will feel they need to understand what’s going on.
* for some simple methods, void makes the method body simpler. Non-void requires 1 or sometimes several return statements. What a waste if you don’t need it at all!
* The need to indicate success/failure can be met with judicious use of runtime exceptions. Checked exceptions would force a try-catch on every caller.
* IDE refactoring frequently creates void methods.
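The two styles side by side, sketched in Python (function names invented; a Python function returning nothing is the analogue of a java void method):

```python
# "void" style vs status-code style for the same operation.

def save_record_void(db, key, value):
    """void style: no return value; failure signaled by a runtime exception."""
    if key is None:
        raise ValueError("key must not be None")   # no try/except forced on callers
    db[key] = value                                # implicitly returns None

def save_record_status(db, key, value):
    """status-code style: the caller must remember to check the int."""
    if key is None:
        return -1
    db[key] = value
    return 0

db = {}
save_record_void(db, "k1", "v1")          # clean call site, nothing to check
status = save_record_status(db, "k2", "v2")
print(db, status)
```

The void call site reads cleanly; the status-code version leaves an int that every reader must wonder about — exactly the confusion the second bullet describes.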

design patterns often increase code size

Using nested if and instanceof.., you can implement complex logic in much fewer lines, and the logic is centralized and clearly visible.

If you refactor using OO design techniques, you often create many Types and scatter the logic.

Case in point — Error Memos workflow kernel. We could use factory, polymorphism, interfaces…

Case in point — chain of responsibility
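A toy illustration of the size/visibility trade-off (the memo classes are invented, loosely echoing the Error Memos example):

```python
# Version 1: centralized if/isinstance -- fewer lines, the whole decision
# visible in one spot.

class Memo: pass
class ErrorMemo(Memo): pass
class UrgentMemo(Memo): pass

def route_v1(memo):
    if isinstance(memo, UrgentMemo):
        return "page on-call"
    elif isinstance(memo, ErrorMemo):
        return "queue for review"
    return "archive"

# Version 2: the OO refactoring -- more Types, and the routing logic
# scattered across three classes instead of one function.

class Memo2:
    def route(self): return "archive"
class ErrorMemo2(Memo2):
    def route(self): return "queue for review"
class UrgentMemo2(Memo2):
    def route(self): return "page on-call"

print(route_v1(UrgentMemo()))     # page on-call
print(ErrorMemo2().route())       # queue for review
```

Version 2 buys extensibility (add a type without touching existing code) at the cost of more lines and scattered logic — the trade-off the post is pointing at.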

asserts for production system diagnosis

I think Assertions were added in java 1.4 partly for documentation and partly to help us investigate production systems — restart the system with -ea and you may see error messages before a null pointer exception.

Put another way, without asserts enabled, the error condition (stack trace etc) might be inexplicable and baffling. You enable asserts to let the (production!) system fail earlier so you can see the early signs previously unreported/unnoticed. This is probably faster than reproducing the problem in QA.

Ideally you want to add more logging or error checking, but asserts offer some unique advantages.
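A Python analogue of the same fail-early idea (the lookup function is invented; note Python asserts are on by default and disabled with -O, the inverse of java's opt-in -ea):

```python
# Asserts let the system fail early with a clear message, instead of a
# baffling downstream crash (e.g. a KeyError several layers away).

def lookup_price(prices, symbol):
    assert symbol in prices, f"unknown symbol: {symbol}"   # early, explicit sign
    return prices[symbol]

prices = {"IBM": 170.0}
print(lookup_price(prices, "IBM"))        # 170.0

try:
    lookup_price(prices, "GOOG")
except AssertionError as e:
    print("early failure:", e)            # early failure: unknown symbol: GOOG
```

Without the assert, the failure would surface as a bare KeyError somewhere in the call stack — the "inexplicable and baffling" condition the post describes.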

how I estimate workload #XR

Hi XR,

There’s a lot of theory and many book chapters and courses on app development estimation. I don’t know how managers in my department do it. I’m sure the factors below affect their estimations.

* the right way of solving a problem can shorten a 2-day analysis/discussion to 2 hours, provided someone points out the correct way at the beginning. In our bigger department (100+ developers), many managers make estimates assuming each developer knows the standard (not always best) solutions from the onset. If I am new, I could spend hours and hours trying something wrong.

* as in your struts form-validation example, a good framework/design can reduce a 1-month job to 1-week.

* tool — the right software tool can reduce a 3-hour job to 30-minutes. Example — I once mentioned to you testing tools. Also consider automated regression testing.

* A common complaint in my department (100+ developers) — a poorly documented system can turn a 2-day learning curve into a week.

* I suspect my colleagues do fewer test cases than i do, given the same system requirement. This is partly because they are confident and familiar with the system. This could mean 30% less effort over the entire project.

de-couple – using abstract^concrete class

When authors say a client class A “depends on a concrete class” B, they mean A instantiates new B(). This is less advisable than using an abstract class or interface. Now I think there are many hidden costs and abstract arguments.

One of the most understandable argument is testability. But in this post, we talk about another argument — the “expected services” between client object A and “service” object B.

Scenario 0: client object A instantiates new B(). The maintainers of A and B have to assume every public method [1] of B is required. Suppose A and B are maintained in separate companies, like spring^hibernate. The 2 maintainers dare not change many things in A or B. But all software needs changes!

Scenario 1: an Interface offers more flexibility — decoupled. If A only uses interface C, then both maintainers know the services that B must support. The A maintainer can swap in (at runtime) another implementation of interface C. Likewise, the B maintainer has more room for maneuver.

Analogy: I wrote HTTP clients in telnet and Perl. They follow the HTTP protocol so they can interact with any web server, thanks to a published standard on the expected behaviour of client and server. Both sides depend on the abstract HTTP protocol only. A java Interface strives for the same flexibility and decoupling.

During OO design, I think it’s fine to introduce as many interfaces as you like. I think interfaces don’t add baggage or tie us down as classes do. Remember your team lead makes one DAO interface for every dao class!

[1] Public fields, too.
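A minimal Java sketch of Scenario 1 — class names A, B and interface C follow the text, but the method names are my own hypothetical choices:

```java
// The interface spells out the only services A may rely on.
interface C {
    String serve();
}

// B's maintainer is free to change anything outside serve().
class B implements C {
    public String serve() { return "real service"; }
    public void internalHousekeeping() { /* safe to change at will */ }
}

// A depends only on the abstract C, never on new B().
class A {
    private final C provider;
    A(C provider) { this.provider = provider; } // implementation injected
    String run() { return "A got: " + provider.serve(); }
}

public class DecoupleDemo {
    public static void main(String[] args) {
        System.out.println(new A(new B()).run());   // prints: A got: real service
        // swap in another implementation, e.g. a test stub:
        System.out.println(new A(() -> "stub").run()); // prints: A got: stub
    }
}
```

Because C has a single abstract method, even a lambda can stand in for B — that is exactly the “room for maneuver” both maintainers gain.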

understand DB tables in a relatively large system

What do we mean “relatively large”?
– team size 5 – 15 including system support and developers
– tables 50 – 300

time budget for a basic understanding of all the autosys jobs?
time budget for a basic understanding of all the tables? 20h

I see 3 unambiguous groups of tables
– R: reference tables, mostly read-only, populated from upstream
– P: product-specific, “local” tables, since we have several independent streams of products
– C: core tables shared among many products

best way to sanity-check java method arguments?

Say you are writing a utility method Class1.getByIndex(int arrayIndex). What’s the best way to enforce that arrayIndex is non-negative?

– throw an unchecked exception and let it propagate, possibly killing the thread or the app? Recommended by many experts. I guess the downside is tolerable, and there are no better candidates as a standard, universal solution.
– throw a checked exception and force other programmers to try-catch? I think the overhead grows when you set this as standard practice.
– return a boolean result? Many methods/constructors can’t.

Ideally, you want to remind other programmers to check their input first, but not force them to put up a useless try-catch.
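A sketch of the first option using the common unchecked IllegalArgumentException — the class and method names come from the text, but the backing array and its contents are hypothetical:

```java
public class Class1 {
    private static final String[] DATA = {"a", "b", "c"}; // hypothetical backing data

    static String getByIndex(int arrayIndex) {
        if (arrayIndex < 0) {
            // unchecked, so callers are not forced into try-catch,
            // but a bad argument still fails fast and loudly
            throw new IllegalArgumentException(
                "arrayIndex must be non-negative: " + arrayIndex);
        }
        return DATA[arrayIndex];
    }

    public static void main(String[] args) {
        System.out.println(getByIndex(1)); // prints b
        try {
            getByIndex(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```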

baseclass dependent on subclass

A parent calls a method overridden by a subclass .. template method. A lot of plugins, callbacks and the Hollywood principle use something like template method.

Say you have just one class. Within it, you call your own methods. The private and final methods are fine, but all other methods are virtual (i.e. subject to overriding). That’s problematic if it happens by accident rather than by design. Now I think a baseclass depending on a subclass is a very likely mistake.

The MIT course note (in an earlier blog) points out the danger when a super class method B.m1() calls m2(), which is defined in B but overridden in subclass C.

In this case, “dependent” means “when the subclass C.m2() changes, B may need change, since B’s behaviour is affected.”

Q: whose m2() runs?
A: the innermost onion’s m2(), in this case, C.m2()

Q: how do u specify B’s m2() to be used in m1()? B.m2 ?
A: B.m2() will probably fail to compile — that syntax requires m2 to be static.
A: I think u need to provide a private final m2a(), and make m2() call m2a().

If your class B needs to call your own method m2(), you need to decide whether to call the m2 defined in this class, or in a subclass (by dynamic binding), or in a parent class. Not sure if it’s always possible to implement your decision.
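A minimal sketch of the Q&A above — class names B and C come from the text, but the method bodies are my own reconstruction, not the book’s Example 60:

```java
class B {
    // the private (implicitly non-overridable) helper m2a() pins down B's own logic
    private String m2a() { return "B's logic"; }

    String m2() { return m2a(); }               // virtual: subject to overriding

    String m1()  { return "m1 sees " + m2(); }  // dynamic binding: innermost onion wins
    String m1b() { return "m1b sees " + m2a(); } // pinned to B's own logic
}

class C extends B {
    @Override
    String m2() { return "C's logic"; }
}

public class OnionDemo {
    public static void main(String[] args) {
        B b = new C();
        System.out.println(b.m1());  // prints: m1 sees C's logic  (C.m2() runs)
        System.out.println(b.m1b()); // prints: m1b sees B's logic (override bypassed)
    }
}
```

This shows both answers at once: m1() gets the innermost onion’s m2(), while m1b() uses the private helper to stay immune to subclassing.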

For a full example, see Example 60 [[ java precisely ]]

wiki for enterprise app documentation

Justification: jargon. In a non-trivial enterprise app’s documentation, there are usually too many (hundreds of) jargon terms for an uninitiated reader. Wiki can help. Let’s look at an example. Under generous assumptions, every time a document mentions “Price Spread”, that term becomes a hyperlink to the wiki page for this jargon.

Justification: linking. linking up hundreds of related topics.

Justification: Easy update. Everyone can update every wiki page, without approval.


Hi Raymand,

You mentioned documentation. Documentation is not only important for users, but also important for you to get a complete understanding of the entire project. After writing the documentation, hopefully you will be able to better describe the system to a new person such as your next client or a new interviewer.

I think in US and Singapore, many interviewers respect job candidates who understand past projects in-depth.

How stable is your job? Are you learning some new technologies such as java, dotnet, SQL? I recently improved my understanding of SQL joins, correlated queries, java OO, java exceptions, java threads, jsp/javabeans, servlet sessions… Now I feel like a stronger survivor.

poorly modularized

An interviewer described that 2 C++ developers from Wealth Management were asked to help on a quantitative project for 2 weeks. I immediately commented that the design must be highly modularized. I felt the #1 success factor in that /context/ was modularization.

In my experience, poorly modularized code impedes the initial learning curve, unit-testing and debugging.

The new developers would have to tread very carefully, in fear of breaking client code. Poorly modularized code in my projects had less clean interfaces between a client (a dependent) and a provider module. High cohesion leads to cleaner and simpler interfaces.

Perhaps the #1 characteristic of modularization is cohesion. Highly modularized designs almost always show high cohesion. The module given to the 2 “outsiders[1]” would be more self-contained and concerned with just 1 or very few responsibilities. Look at the old FTTP loader class. The old loader had to take care of diverse responsibilities, including instantiation logic for uplink ports and gwr ports. I feel a separate uplink factory is high-cohesion. Separation of concerns.

Object-oriented design is a specialized version of Module-oriented design. Most of the OO design principles apply to modularization. My comment basically /reflected/ the importance of clean and /disciplined/ OO design.

[1] zero domain knowledge.

agile methodology %%xp

Challenge: too many agile principles
Sugg: focus on a few, relate to your personal xp, develop stories —
understandable, memorable, convincing

–xp: frequent releases

–xp: devops automation
RTS, Mac

–xp: frequent and direct feedback from users

–xp: frequent changes to requirements

–xp: “Working software is the principal measure of progress”
pushmail: repeated code blocks; many functions sharing 99% identical code;

–xp: “Working software is the principal measure of progress”
nbc: top3.txt (expert brackets) not following mvc

–xp: “Working software is delivered frequently (weeks rather than months)”
nbc: a half-finished product with hundreds of bugs is released to users to evaluate

–Agile home ground:

Low criticality
Senior developers
Requirements change very often
Small number of developers
Culture that thrives on chaos

–(traditional) Plan-driven home ground:

High criticality
Junior developers
Low requirements change
Large number of developers
Culture that demands order

app design in a fast-paced financial firm#few tips

#1 design goal? flexibility (for change). Decouple. Minimize colleagues’ source code change.

characteristic: small number of elite developers in-house (on Wall Street)
-> learn to defend your design
-> -> learn design patterns
-> automate, since there isn’t enough manpower

characteristic: too many projects to finish but too few developers and too little time
-> fast turnaround

characteristic: reputation is more important here than other firms
-> unit testing
-> automated testing

characteristic: perhaps quite a large data volume, quite data-intensive
-> perhaps “seed” your design around data and data models

characteristic: wide-spread use of stored procs, but many java designs don’t work well with stored procs. Consider hibernate.
-> learn coping strategies

characteristic: “approved technologies”
characteristic: developers move around
-> maintenance left to other guys
-> documentation is ideally “less necessary” if your design is easy to understand
-> learn documentation tools like javadoc

safety in a dynamic financial environment

* to promote testing, try to write tests first, at least in 1% of the cases. Over time u may find shortcuts and simplified solutions
* Many systems aren’t easy to unit-test or regression-test. To increase the applicability of testing, allocate time/budget to study best practices
* allocate resources to learn automation tools like Perl, Shell, IDE, diff tools, Windows tools
* automated test
* regression test
* automated doc-generation
* keep track of volumes of written messages
* pair-programming
* involve the users early and at every step

1st steps in java design: file naming

Consider choosing package names for your objects that reflect how your application is layered. For example, the domain objects in the sample application can be located in the package. More specialized domain objects would be located in subpackages under the package. The business logic begins in the com.meagle.service package and DAO objects are located in the com.meagle.service.dao.hibernate package. The presentation classes for forms and actions reside in com.meagle.action and com.meagle.forms, respectively. Accurate package naming provides a clear separation for the functionality that your classes provide, allows for easier maintenance when troubleshooting, and provides consistency when adding new classes or packages to the application.