simplest SDE (not PDE) given by Greg Lawler

Page 91 of Greg Lawler's lecture notes states that the most basic, simple SDE
  dX_t = A_t dB_t     (1)
can be intuitively interpreted this way: X_t is a process that at time t evolves like a BM with zero drift and variance parameter A_t^2.

In order to make sense of it, let's backtrack a bit. A regular BM with zero drift and variance parameter 33 is a random walker. At any time, say 64 days after the start (taking days as the unit of time), the walker still has zero drift and variance parameter 33. The position of this walker is a random variable ~ N(0, 64*33). However, if we look at the next interval, from time 64 to 64.01, the BM's increment is a different random variable ~ N(0, 0.01*33).
This is a process with a constant variance parameter. In contrast, our X_t process has a... time-varying variance parameter! This random walker at time 64 is also a BM walker with zero drift, but with variance parameter A_t^2. If we look at the interval from time 64 to 64.01, then (because A_t is slow-changing) the BM's increment is a random variable ~ N(0, 0.01*A_t^2), with t = 64.
In fact, the LHS "dX_t" represents exactly that signed increment. As such, it is a random variable ~ N(0, A_t^2 dt).

Formula (1) is another signal-plus-noise formula, but without a signal term. It precisely describes the distribution of the next increment, and that is as precise a description as possible.
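The interpretation above can be sketched numerically. Below is a minimal Euler-Maruyama simulation of (1), where A(t) = 1 + t is a made-up, slowly-varying variance function chosen only for illustration:

```python
import math
import random

random.seed(0)

# A hypothetical, slowly-varying A_t (not from Lawler's notes; for illustration only)
def A(t):
    return 1.0 + t

def simulate_path(T=1.0, n_steps=1000):
    """Euler-Maruyama sketch of dX_t = A_t dB_t (zero drift)."""
    dt = T / n_steps
    x, t, path = 0.0, 0.0, [0.0]
    for _ in range(n_steps):
        dB = random.gauss(0.0, math.sqrt(dt))  # BM increment ~ N(0, dt)
        x += A(t) * dB                         # dX_t = A_t dB_t
        t += dt
        path.append(x)
    return path

path = simulate_path()

# Near t = 0.64, increments over a small dt should look like N(0, dt * A(0.64)^2)
dt = 0.01
samples = [A(0.64) * random.gauss(0.0, math.sqrt(dt)) for _ in range(20000)]
var_hat = sum(s * s for s in samples) / len(samples)
print(round(var_hat, 4), round(dt * A(0.64) ** 2, 4))  # the two numbers agree closely
```

The empirical variance of the small increments matches dt * A_t^2, which is exactly the "N(0, dt A_t^2)" reading of formula (1).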

Note that the Black-Scholes equation (BS-E) is a PDE, not an SDE, because BS-E has no dB or dW term.

python getattr to call a method reflectively

The getattr function uses the same lookup rules as ordinary attribute access, and you can use it with both ordinary attributes and methods:

result = obj.method(args)

func = getattr(obj, "method2")
result = func(args)

# or, in one line:
result = getattr(obj, "method2")(args)
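Here's a self-contained, runnable sketch; Greeter is a made-up class for illustration:

```python
class Greeter:
    def greet(self, name):
        return "hello, " + name

obj = Greeter()

# direct call
direct = obj.greet("world")

# reflective call: look the method up by name, then call the bound method
method_name = "greet"                 # could come from config or user input
func = getattr(obj, method_name)
reflective = func("world")

# getattr also accepts a default, avoiding AttributeError for missing names
missing = getattr(obj, "no_such_method", None)

print(direct == reflective)  # True
print(missing is None)       # True
```

The third argument to getattr is handy when the method name comes from untrusted input and may not exist on the object.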

filtration +! periodic observations

In the stochastic process (not "statistics") literature, at least at the beginner level, I often see mathematicians sidestep the notion of a time-varying process. I think they want a more general and more rigorous terminology, so they prefer filtration.

I feel that most of the time, a filtration unfolds through time.

Here's one artificial filtration without a time element: cast a bunch of dice at once (like my story cube) but reveal them one at a time.
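That dice story can be sketched in a few lines; the sigma-algebra machinery is elided, and a revealed prefix of the dice stands in for the information available at stage n:

```python
import random

random.seed(42)

# Cast all five dice at once: every outcome is already realized up front.
dice = [random.randint(1, 6) for _ in range(5)]

# "F_n" = the information available after revealing the first n dice.
# There is no physical time here, only an ordering of revelations.
def revealed(n):
    return tuple(dice[:n])

# M_n = sum of the revealed dice. Once the first n dice are revealed,
# M_n is no longer random, even if nobody has added them up yet.
M = [sum(revealed(n)) for n in range(len(dice) + 1)]
print(M)  # M[0] == 0, and each step adds the newly revealed die
```

The point is that the "information flow" is just the ordering of revelations; nothing in the model requires a clock.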

Stoch Lesson 38 parameters of BM

Lawler defines BM with two parameters, drift and variance v, but the meaning of the variance is tricky.

Note a BM is a TVRV (time-varying random variable); notice the difference between an N@T and a TVRV. An N@T can be modeled by a Gaussian variable with some variance. The variance v of a BM, however, is about the variance of an increment. Specifically, the increment over an interval deltaT is a regular Gaussian RV with variance deltaT*v.
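A quick numerical sanity check of that increment rule (v = 33 is just the number used earlier in this post):

```python
import math
import random

random.seed(1)

v = 33.0   # variance parameter of the BM (drift is 0)
dt = 0.01  # length of the observation interval

# Each increment over an interval of length dt is ~ N(0, dt * v),
# regardless of where the interval starts.
n = 50000
increments = [random.gauss(0.0, math.sqrt(dt * v)) for _ in range(n)]

mean_hat = sum(x for x in increments) / n
var_hat = sum(x * x for x in increments) / n
print(round(mean_hat, 2), round(var_hat, 2))  # close to 0 and 0.33
```

The sample variance lands near dt*v = 0.33, not near v itself, which is the tricky point: v describes increments per unit time, not the position.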

Fn-measurable, adapted-to-Fn — in my own language

(Very basic jargon…)

In the discrete context, Fn represents the sequence F1, F2, F3, … and denotes a growing accumulation of information.

If some quantity Mn is Fn-measurable, it means that once we get the n-th packet of information, Mn is no longer random. It is now measurable, though possibly still unknown to us. I would venture to say that Mn is already realized by this time: the poker card is already drawn.

If a process Mt is adapted to Ft, then Mt is Ft-measurable…

tips for bbg cod`IV

  • chrome (not Opera) supports Alt-R
  • Show 67 lines of source code
    • chrome F11 to toggle full screen
    • chrome can zoom out to 75% to show 67 lines in the editor, if you can't find a bigger monitor.
    • use hackerrank top right icon -> settings -> SMALL font
    • put multiple lines in one line as much as possible
    • Can we hide the address bar?
  • hackerrank top right icon -> settings -> light (not dark) scheme shows line highlight better, but dark shows text better.
  • add comments to show your thinking

Here’s some code we can copy-paste:

#include <iostream>
#include <string>
#define ss if(1>0)cout
using namespace std;   int i1, i2, i3, N;   string s, s1, s2, s3;
int main(){
    ss<<"s = "<<s<<endl;
}


Avg(X-squared) always imt square[avg(X)], 2nd look

E[X^2] is always at least as large as E^2[X] (and strictly larger unless Var[X] = 0).

Confused about which is larger? Quick reminder: think of a population X distributed uniformly over {-5, 5}, so E[X] = 0 while E[X^2] = 25. More generally,

If the population has both positive and negative members, then averaging will reduce the magnitude by cancelling out a lot of extreme values.

In the common scenario where the population is all positive, it's slightly less intuitive, but we can still look at an outlier. Averaging usually dilutes the outlier's impact, but if we square every member first, the outlier gains much more weight.

One step further,

E[X^2] = E^2[X] + Var[X]
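A quick numeric check of this identity on a small all-positive population with one outlier (the values are made up for illustration):

```python
# E[X^2] = (E[X])^2 + Var[X], checked on a tiny population
xs = [1.0, 2.0, 3.0, 4.0, 100.0]
n = len(xs)

mean = sum(xs) / n                          # E[X]
mean_sq = sum(x * x for x in xs) / n        # E[X^2]
var = sum((x - mean) ** 2 for x in xs) / n  # population Var[X]

print(mean_sq >= mean ** 2)                     # True
print(abs(mean_sq - (mean ** 2 + var)) < 1e-9)  # the identity holds exactly
```

Here the single outlier (100) pushes E[X^2] to 2006 while E^2[X] is only 484; the gap, 1522, is exactly Var[X].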

GTD topics for c++ coding drill

After I gain confidence with the basic tasks and move on to higher tasks, I frequently find gaping holes in my foundation. I then face a scary choice: if I admit it and shift to a lower gear, I subconsciously feel fake and deeply disappointed with my progress ("after so many years, still at Level 1"). Well, the truth is, even after 10 years of Java (or C#, SQL or Perl), most developers would lack some of the Level 1 skills, because they never needed to use them. Basics such as equals/hashCode, wait/notify, interrupt, the most basic operations on threads, big-O of collection operations (basic), overriding vs overloading…

— Here are some __coding__ practice areas for GTD (not IV) improvement in c++:
* basic array, pointer/reference, free functions, func pointers. No classes. Plain C
* basic class inheritance, virtual function, new/delete, big3
* string manipulation: understated in books but heavily quizzed.
* basic IO using console and files
* consumption of existing templates. Vast majority of Wall St c++ teams don’t create templates (or design thread constructs). That’s the job of specialists in the bank.
* STL – basic and advanced.
* basic macros including assertion
* a lot of multi-unit compilation topics — much trickier than you think
* static
———-See separate blog posts on the most essential skills with STL

** containers are the real reason to use STL. Iterators and algos are reluctantly adopted.
** remove isn’t common. Add, lookup, find and iteration are popular.
** …

–less quizzed
* basic operator overloading
* basic smart pointers