file (de)serialization, for array of simple structures

Not needed for IV.

// simple and fast (de)serialization to file given an array of structures

#include <fcntl.h>
#include <unistd.h>   // open/write/read/close
#include <cstring>    // strncpy
#include <iostream>
using namespace std;

size_t const len = 15;

struct A {
        int i1;
        char cstr[len];
        //string s4; // doesn't really work: a std::string holds a heap pointer,
                     // so its raw bytes are meaningless when read back
        A(int i = 999, string cs = "default c-string", string s = "default std::string"): i1(i) {
                strncpy(cstr, cs.c_str(), len - 1);
                cstr[len - 1] = '\0'; // strncpy need not null-terminate
        }
};
size_t const cnt = 2, siz = cnt * sizeof(A);
A arr[cnt], ar2[cnt];

char fname[] = "/tmp/,.dat";

int main() {
        arr[0] = A(1, "grin", "backbone");
        arr[1] = A(2, "frown", "try/except/else");

        int fd = open(fname, O_CREAT | O_WRONLY | O_TRUNC, S_IRUSR | S_IWUSR);
        write(fd, arr, siz); // dump the raw bytes of the whole array in one shot
        close(fd);

        int fd2 = open(fname, O_RDONLY);
        read(fd2, ar2, siz); // read them straight back into a second array
        close(fd2);

        for (size_t idx = 0; idx < cnt; ++idx) {
                A * tmp = ar2 + idx;
                cout << tmp->i1 << " ; " << tmp->cstr << endl; //tmp->s4<<endl;
        }
}

sharing port or socket #index page

Opening example – we all know that once a local endpoint is occupied by a TCP server process, another process can’t bind to it.

However, various scenarios exist to allow some form of sharing.

financial app developer profession: !so bad

(A letter I didn’t send out)

Hi Daniel (Yuan),

I think the programmer profession, and financial IT in particular, is not that bad:

· Entry barrier — not too high nor too low

· Job security — actually reasonable, despite the threats of globalization and of younger competitors. I think the high entry barrier in financial IT works to our advantage

o I was told some managers have job security concerns too

o I was told U.S. university professors also face elimination (淘汰).

· ☹ Work Stress — significant variation across firms and across teams, but generally more stress than in other professions.

· ☹ Workload — much higher than in other professions. Programming is a knowledge-intensive job.

· Job market demand — very high for IT skills. Therefore people from other professions are drawn here.

o Market depth is excellent. You can find plenty of jobs from $50k (easy to cope with) up to $300k

· ☹ Age discrimination — a real concern; some professions (like research) do even better here.

· Career Longevity — (I will delay or skip retirement) Reasonable in the U.S. Some professions (like doctors) are even better.

· Income — well above average among all professions. Just look at national statistics and world-wide statistics.


GTD: algo trading engine from scratch: ez route

Start by identifying some high-quality, flexible, working code base that’s as close to our requirements as possible. Then slowly add X) business features + Y) optimizations (on throughput, latency etc.). I feel [Y] is harder than [X], though [X] gives higher business value. Latency tuning is seldom high-value, but data volume could be a show-stopper.

Both X/Y enhancements could benefit from the trusty old SQL or an in-memory data store[1]. We can also introduce MOM. These are mature tools to help X+Y. [3]

As I told many peers, my priorities as architect are 1) instrumentation 2) transparent languages 3) product maturity

GTD Show-stopper: data rate overflow. Already addressed
GTD Show-stopper: frequent[4] crashes. Unlikely to happen if you start with a mature working code base. Roll back to last-working version and retest incrementally. Sometimes the crash is intermittent and hard to reproduce 😦 Good luck with those.

To blast through the stone walls, you need power tools like instrumentation, debuggers … I feel these are more important to GTD than optimization skills.

To optimize, you can also introduce memory manager such as the ring buffer and custom allocator in TP, or the custom malloc() in Facebook. If performance doesn’t improve, just roll back as in Rebus.

For backend, there are many high or low cost products, so they are effectively out of scope, including things like EOD PnL, position management, risk management, reporting. Ironically, many products in these domains advertise themselves as “trading platforms”. In contrast, what I consider in-scope would be algo executor, OMS[2], market data engine [2], real time PnL.

— The “easy route” above is probably an over-simplification, but architects must be cautiously optimistic to survive the inevitable onslaught of adversities and setbacks —

It’s possible that such a design gradually becomes outdated like GMDS or the Perl codebase in PWM-commissions, but that happens to many architects, often through no fault of their own. The better architects may start with a more future-proof design, but more likely, the stronger architects are better at adjusting both the legacy design and the new requirements.

Ultimately, you are benchmarked against your peers in terms of how fast you figure things out and GTD….

Socket tuning? Might be required to cope with data rate. Latency is seldom a hard requirement.

Threading? A single-threaded model is probably best; prefer multiple processes over multiple threads.

Shared memory? Even though shared memory is the fastest way to move data between processes, the high-performance, high-throughput ticker plant uses TCP/multicast instead.

MOM? For a high-speed market data gateway, many banks use MOM because it’s simple and flexible.

Inter-process data encoding? TP uses a single simplified FIX-like, monolithic format “CTF”. There are thousands of token types defined in a “master” data dictionary — semi-static data.

GUI for trader? I would think HTML+JavaScript is the most popular and quickest. For a barebones trading engine, the GUI is fairly simple.

Scheduled tasks? These are less common in high-speed trading engines and seldom latency-sensitive. I would rely on database or java/c++ async timers. For batch tasks, I would use scripts/cron.

Testing? I would use scripts as much as possible.

[1] e.g. the GMDS architect chose memory-mapped files, which was the wrong choice.
[2] Both require an exchange interface.
[3] A data store is a must; MOM is optional.
[4] If it crashes once a day we could still cope. Most trading engines can shut down when the market is closed.

ssh host1 ssh host2 “cmd1; cmd2”

This simple automation script demonstrates how to ssh two layers deep, through a gateway host, to run a command on the machine behind it.

Obviously you need to set up authorized_keys on each hop.

# assumes $date and $tgz are defined earlier in the script
set -x
# single-quote the inner command so the semicolon-separated commands all run on
# bxbrdr2, not on uidev1; $tgz and ${date} still expand locally thanks to the
# outer double quotes
ssh -q uidev1 "ssh -q bxbrdr2 'tar cfvz $tgz /data/mnt/captures/tp5/lfeeds/nysemkt/nysemkt-primary.2017${date}_03*'"
ssh -q uidev1 "ssh -q bxbrdr2 'hostname; ls -l ~/$tgz'"
ssh -q uidev1 scp -pq bxbrdr2:$tgz .   # hop 1: copy from bxbrdr2 to uidev1
set +x
ssh -q uidev1 "hostname; ls -l ~/$tgz"
scp -pq uidev1:$tgz .                  # hop 2: copy from uidev1 to local
ls -l $tgz

contractor^mgr^FTE-dev ]U.S.

In the U.S. context, I feel the FTE developer position is, on average, the least appealing, though some do earn a lot, such as some quant developers and CS PhDs. My very rough ranking is

  1. senior mgr
  2. contractor
  3. FTE-dev

Without bonus, FTE-dev pay is often the lowest, and the bonus is seldom guaranteed.

I exclude the pure quants (or physicians) as a different field from IT.


##2017%%agile advantages: b aggressive

  • healthy job market
  • advantage: short commute. Job not too demanding. Plenty of time available for self-study
  • advantage: I can work day and night to get things done
  • advantage: I’m not aiming for a promotion, so I can try many interviews
  • advantage: my quant credentials and zbs is probably top tier among techies
  • advantage: domain nlg
  • advantage: health and stamina
  • advantage? data analysis aptitude
  • advantage: I have Singapore + java as a safety net -> risk capacity.
  • advantage: am open to relocation
  • advantage: am open to short term contracts
  • advantage: am open to pay cuts
  • advantage: no family.
  • advantage: c++/c# in addition to java
  • advantage: above-average SQL, Unix, python/perl/bash

If I hit $220k as a contractor, my self-image and self-esteem would improve. I would feel confident, no longer inferior. In fact, I would feel better than the managers since I don’t rely on that one employer.