RESTful^SOAP web service, briefly

REST stands for Representational State Transfer. Basically, each unique URL is a representation of some object. You can get the contents of that object with an HTTP GET, and use a POST, PUT, or DELETE to modify the object (in practice, most services use a POST for this).

— SOAP vs REST (most interviewers probably focus here) —
* REST uses only the standard HTTP verbs GET, POST, PUT, DELETE; SOAP exposes custom methods like “getAge()”
* SOAP takes more dev effort, despite its name (Simple Object Access Protocol)
* SOAP dominates enterprise apps
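A minimal sketch of the REST idea above — one URL per object, with the HTTP verb deciding the operation. This is a toy in-memory store in plain Python, not a real web framework; `handle` and `store` are my own illustrative names:

```python
# Toy REST dispatcher: each unique URL names one object;
# the HTTP method (GET/POST/PUT/DELETE) decides what happens to it.
store = {}

def handle(method, url, body=None):
    if method == "GET":
        return store.get(url)            # read the representation
    if method in ("POST", "PUT"):
        store[url] = body                # create/replace the representation
        return body
    if method == "DELETE":
        return store.pop(url, None)      # remove it, returning the old value

handle("PUT", "/users/42", {"name": "Ann"})
handle("GET", "/users/42")   # -> {"name": "Ann"}
```

Contrast with SOAP style, where you would instead call a custom operation like getAge() on a service endpoint.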


simplest SDE (not PDE) given by Greg

P91 of Greg Lawler’s lecture notes states that the most basic, simple SDE
  dXt = At dBt     (1)
can be intuitively interpreted this way: Xt is a process that at time t evolves like a BM with zero drift and variance parameter At^2.

In order to make sense of it, let’s backtrack a bit. A regular BM with zero drift and variance parameter 33 is a random walker. At any time, say 64 days after the start (taking days as the unit of time), the walker still has zero drift and variance parameter 33. The position of this walker is a random variable ~ N(0, 64*33). However, if we look at the next interval, from time 64 to 64.01, the BM’s increment is a different random variable ~ N(0, 0.01*33).
That is a process with a constant variance parameter. In contrast, our Xt process has a … time-varying variance parameter! This random walker at time 64 is also a BM walker with zero drift, but with variance parameter At^2. If we look at the interval from time 64 to 64.01, then (since At changes slowly) the BM’s increment is a random variable ~ N(0, 0.01*At^2).
In fact, the LHS “dXt” represents exactly that signed increment. As such, it is a random variable ~ N(0, At^2 dt).

Formula (1) is another signal-noise formula, but without a signal. It precisely describes the distribution of the next increment. This is as precise as possible.

Note BS-E (the Black-Scholes equation) is a PDE, not an SDE, because BS-E has no dB or dW term.
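The interpretation above can be checked with a tiny Euler-Maruyama simulation of formula (1) in plain Python. The function name `simulate` is my own; I reuse the variance parameter 33 from the BM example, so with constant At = sqrt(33) the terminal value XT should come out ~ N(0, 33*T):

```python
import math
import random

def simulate(A, T=1.0, n=1000, seed=0):
    """Euler-Maruyama for dXt = A(t) dBt (zero drift)."""
    rng = random.Random(seed)
    dt = T / n
    x, t = 0.0, 0.0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))  # BM increment ~ N(0, dt)
        x += A(t) * dB                      # so dXt ~ N(0, A(t)^2 dt)
        t += dt
    return x

# Constant A(t) = sqrt(33) recovers the plain BM: X1 ~ N(0, 33)
x1 = simulate(lambda t: math.sqrt(33.0))
```

Averaging x1*x1 over many independent seeds should land near 33, matching the N(0, At^2 dt) claim increment by increment.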

filtration without periodic observations

In the stochastic probability (not “statistics”) literature, at least at the beginner level, I often see mathematicians sidestep the notion of a time-varying process. I think they want more generalized and more rigorous terminology, so they prefer “filtration”.

I feel most of the time, filtration takes place through time.

Here’s one artificial filtration without a time element: cast a bunch of dice at once (like my story cubes) but reveal them one at a time.
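The dice story can be sketched in a few lines of Python. All the randomness is realized up front (the cast); information arrives one reveal at a time, and anything computed from the revealed dice only — here, a running maximum, my own choice of example — is “measurable” after that reveal:

```python
import random

# "Filtration without time": five dice cast at once, everything
# already realized, but revealed one die at a time.
rng = random.Random(7)
dice = [rng.randint(1, 6) for _ in range(5)]

# After k reveals we know dice[:k+1].  running_max[k] depends
# only on that knowledge, so it is determined ("measurable")
# at step k, even though later dice are still hidden from us.
running_max = []
best = 0
for d in dice:
    best = max(best, d)
    running_max.append(best)
```

Note the later dice are not random in any physical sense — they are already lying on the table — we just haven’t looked yet. That is the flavor of Fn-measurability below.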

Fwd: python ctor call syntax

In terms of ctor syntax, I think python is more flexible and less “clean” than java. More like c++.

        return riskgenerator.BasicDealValuationGenerator(envDetails)

Luckily I know BasicDealValuationGenerator is a class in the riskgenerator module, so I know this is likely to be calling the ctor.
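To illustrate: in python a ctor call is syntactically identical to a plain function call — no “new” keyword as in java. The class body below is a hypothetical stand-in for the real BasicDealValuationGenerator; only the name comes from the snippet above:

```python
# Hypothetical stand-in for riskgenerator.BasicDealValuationGenerator.
class BasicDealValuationGenerator:
    def __init__(self, env_details):      # the "ctor" in python terms
        self.env_details = env_details

# Ctor call: looks exactly like calling a function named
# BasicDealValuationGenerator -- you must already know it's a class.
gen = BasicDealValuationGenerator({"region": "EU"})
```

This is why, without knowing the module, you cannot tell a ctor call from an ordinary function call — unlike java, where “new” gives it away.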

Stoch Lesson 38 parameters of BM

Lawler defined BM with 2 params – drift and variance parameter v – but the meaning of this variance is tricky.

Note a BM is about a TVRV, and notice the difference between a N@T and a TVRV. A N@T could be modeled by a Gaussian variable with some variance. The variance parameter v of a BM is about the variance of an increment. Specifically, the increment over deltaT is a regular Gaussian RV with variance = deltaT*v.
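A quick empirical check of that last sentence, reusing v = 33 from the SDE note above (my own choice of deltaT):

```python
import math
import random

# BM increment over deltaT is ~ N(0, v*deltaT), NOT N(0, v).
rng = random.Random(1)
v, dT = 33.0, 0.25
incs = [rng.gauss(0.0, math.sqrt(v * dT)) for _ in range(20000)]

# Sample second moment should sit near v*dT = 8.25, not near 33.
sample_var = sum(x * x for x in incs) / len(incs)
```

So v is a rate (variance per unit time); the variance of any actual Gaussian you draw scales with the length of the interval.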

Fn-measurable, adapted-to-Fn — in my own language

(Very basic jargon…)

In the discrete context, Fn represents F1, F2, F3 … and denotes a sequence or accumulation of information.

If some quantity Mn is Fn-measurable, it means that once we get the n-th packet of information, Mn is no longer random. It is now measurable, though possibly still unknown to us. I would venture to say Mn is already realized by this time: the poker card is already drawn.

If a process Mt is adapted to Ft, then Mt is Ft-measurable…

HackerRank tips for c++ coding IV

  • chrome (not Opera) supports Alt-R
  • chrome F11 to toggle full screen
  • chrome can zoom out 80% to show more lines in editor, if you can’t find a bigger monitor.
  • I guess we need to parse stdin, so get familiar with getline() and “cin >>”
  • How (fast) you type and erase … is all recorded, so better use the scratchpad
  • add comments to show your thinking

Here’s some code we can copy-paste:

#include <iostream>
#include <algorithm>
#include <string>
#include <vector>
using namespace std;
int i1, i2, i3, N;
string s, s1, s2, s3;
int main(){
    cin >> std::ws;            // special manipulator: skip leading whitespace
    getline(cin, s);           // read one full line from stdin
    cout << "s = " << s << endl;
    return 0;
}

Avg(X-squared) always imt square[avg(X)], 2nd look

E[X^2] is always at least as large as E^2[X] (strictly larger unless Var[X] = 0).

Confused about which is larger? Quick reminder: think of a population X uniform on {-5, 5}, so E[X] = 0 while E[X^2] = 25. More generally,

If the population has both positive and negative members, then averaging will reduce the magnitude by cancelling out a lot of extreme values.

In the common scenario where population is all positive, it’s slightly less intuitive, but we can still look at an outlier. Averaging usually reduces the outlier’s impact, but if we square every member first the outlier will have more impact.

One step further,

E[X^2] = E^2[X] + Var[X]
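The identity is easy to verify exactly on a small finite population; the {-5, 5} example above gives E[X] = 0, E[X^2] = 25, Var[X] = 25. The helper name `moments` is my own:

```python
def moments(pop):
    """Return (mean, mean of squares, variance) of a finite population."""
    n = len(pop)
    m = sum(pop) / n
    m2 = sum(x * x for x in pop) / n
    return m, m2, m2 - m * m      # Var[X] = E[X^2] - E^2[X]

m, m2, var = moments([-5, 5])     # m = 0, m2 = 25, var = 25
# m2 == m*m + var, and since var >= 0, m2 >= m*m always.
```

Trying an all-positive population like [1, 2, 3] shows the same thing: m = 2, m2 = 14/3, and the gap m2 - m*m = 2/3 is exactly the variance.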