python pseudo constructors

These Python builtin functions have something in common:

* pseudo [1] constructors — manufacture an object of the specified type
* conversion constructors — converting some input value into an object of the specified type

[1] Actually these are builtin functions (builtin types, in fact) rather than constructors. I guess the fine differences between builtin functions, keywords and operators are not that important at this stage.

P64 [[essential ref]] lists these and more, as “type conversion” functions.

– str()
– dict()
– list()
– tuple()
– set()

– file() — very similar to open(), but Python 2 only; it was removed in Python 3
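A quick demo of the conversion-constructor behavior — each builtin below manufactures a new object of its type from the input value:

```python
# Each builtin converts its input into a new object of the named type.
print(str(3.14))             # '3.14'  — float to string
print(list("abc"))           # ['a', 'b', 'c']  — string to list of chars
print(tuple([1, 2]))         # (1, 2)  — list to tuple
print(dict([("a", 1)]))      # {'a': 1}  — list of pairs to dict
print(sorted(set("aabbc")))  # ['a', 'b', 'c']  — duplicates dropped by set()
```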


higher order function ^ first class function

Higher Order Function — A “boss” function that takes in smaller worker functions. Usually these workers are “applied” on a sequence.

Simplest Example: Python filter()/reduce()/map()

First Class Function — Each function is treated like a regular object and passed in/out of a HOF. Often implemented as a lambda (or closure). C++ functors qualify too — [[Essential C++]] P127.

Why “FirstClass”? I think objects are the only first-class things in traditional languages; an FCF promotes functions to that same status.

Simplest Example: Python lambda
Python has other HOF features. I guess decorators might qualify too, since a decorator takes a function and returns a function.
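The two concepts side by side — map()/filter()/reduce() are the HOFs, and the lambdas passed in are the first-class “workers” (note reduce() moved to functools in Python 3):

```python
from functools import reduce  # reduce() is no longer a builtin in Python 3

nums = [1, 2, 3, 4, 5]
evens = list(filter(lambda n: n % 2 == 0, nums))  # HOF applies the worker to a sequence
squares = list(map(lambda n: n * n, nums))        # another HOF, another lambda worker
total = reduce(lambda a, b: a + b, nums)          # folds the sequence to one value
print(evens, squares, total)  # [2, 4] [1, 4, 9, 16, 25] 15
```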

STL functors qualify as FCF. STL algorithms taking those functors qualify as HOF. For example, std::for_each takes a functor and applies it to each element of a range.

C# Linq has many HOF methods like ForEach(), Aggregate() and many aggregate functions. Many other C# functions take (unicast) delegate arguments.
C# lambdas (and anonymous delegates) are the typical FCF.

python q(import)directive complexity imt java/c++/c#

I would say it “does more work”, not just that it’s “more complicated”…

Name conflicts and name resolution are the main purpose of import/using/include in java/c++/c#. (Side question — can C++ header files execute arbitrary statements? A header is just textually pasted in, so it can contain anything legal at that point — e.g. a static object whose constructor runs at startup — but not free-standing statements at namespace scope. Minor question.)

In contrast, python modules are executed line by line the first time they are imported — P300 [[programming python]]. This can include arbitrary statements. This is the major departure from “tradition”.

I guess “from … import …” is more traditional.
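A minimal demo of the executed-on-first-import behavior (the module name mymod_demo is invented; the module is written to a temp directory just for this sketch):

```python
import os
import sys
import tempfile

# Write a throwaway module whose top-level code has a visible side effect.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "mymod_demo.py"), "w") as f:
    f.write("counter = []\n")
    f.write("counter.append('executed at import time')\n")  # arbitrary statement

sys.path.insert(0, tmpdir)
import mymod_demo   # first import: the module body runs, line by line
import mymod_demo   # second import: served from sys.modules, NOT re-executed
print(mymod_demo.counter)  # ['executed at import time']  — ran exactly once
```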

WPF binding system queries VM upon PropertyChanged event pointed out …

The INotifyPropertyChanged interface contains an event pseudo-field called PropertyChanged.

Whenever a property on a ViewModel object (or a Model object) has a new value, it can (should?) raise the PropertyChanged event to notify the WPF binding system. Upon receiving that notification, the binding system queries the property, and the bound property on some UI element receives the new value. I believe this is how new data hits the screen.

I believe the callbacks run on the UI thread, similar to Swing’s event-dispatch thread.

In order for WPF to know which property on the ViewModel object has changed, the PropertyChangedEventArgs class exposes a PropertyName property of type String. You must be careful to pass the correct property name into that event argument; otherwise, WPF will end up querying the wrong property for a new value.

For PropertyChanged to work, I think we have to use XAML binding with a property name. In contrast, the alternative — assigning ItemsSource in codebehind — probably doesn’t participate in this notification.
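The notify-then-query flow is essentially an observer pattern. A language-neutral sketch in Python (all names here are invented for illustration — this is not the WPF API):

```python
class ViewModel:
    """Plays the role of a WPF ViewModel implementing INotifyPropertyChanged."""
    def __init__(self):
        self._name = ""
        self.property_changed_handlers = []  # stands in for the PropertyChanged event

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value
        # Raise "PropertyChanged", passing the property name (like PropertyChangedEventArgs).
        for handler in self.property_changed_handlers:
            handler(self, "name")

class BindingSystem:
    """Stands in for the WPF binding system: subscribes, then queries on notification."""
    def __init__(self, vm):
        self.screen = {}  # stands in for the bound UI element
        vm.property_changed_handlers.append(self.on_property_changed)

    def on_property_changed(self, vm, prop_name):
        # Upon notification, query the named property and push the new value to the UI.
        # A wrong prop_name here would query the wrong property — hence the caution above.
        self.screen[prop_name] = getattr(vm, prop_name)

vm = ViewModel()
binding = BindingSystem(vm)
vm.name = "Alice"
print(binding.screen["name"])  # Alice — new data hits the "screen"
```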

sample code showing boost scoped_lock#not c++14

#include <iostream>
#include <boost/thread.hpp>
#include <string>
#include <sstream>
#include <list>
using namespace std;

boost::posix_time::milliseconds sleepTime(1);

template<typename T>
class MyQueue {
public:
 void enqueue(const T& x) {
  cout << "\t\t\t > enqueuing ... " << x << "\n";
  boost::mutex::scoped_lock myScopedLock(this->mutex_);
  cout << "\t\t\t >> just got lock ... " << x << "\n";
  list_.push_back(x);
  // A scoped_lock is destroyed (and thus unlocked) when it goes out of scope
 }
 T dequeue() {
  boost::mutex::scoped_lock lock(this->mutex_);
  if (list_.empty()) {
   throw 0; // scoped_lock unlocks during stack unwinding
  }
  T tmp = list_.front();
  list_.pop_front();
  cout << "< dequeue " << tmp << "\n";
  return tmp;
 }
private:
 std::list<T> list_;
 boost::mutex mutex_;
};

MyQueue<std::string> queueOfStrings;
int reps = 5;

void sendSomething() {
 for (int i = 0; i < reps; ++i) {
  stringstream st;
  st << i;
  queueOfStrings.enqueue("item_" + st.str());
  boost::this_thread::sleep(sleepTime);
 }
}

void recvSomething() {
 for (int i = 0; i < reps * 3; ++i) {
  try {
   queueOfStrings.dequeue();
  } catch (int ex) {
   cout << "<- - (    ) after releasing lock, catching " << ex << "\n";
   boost::this_thread::sleep(sleepTime);
  }
 }
}

int main() {
 boost::thread thr1(sendSomething);
 boost::thread thr2(recvSomething);
 thr1.join();
 thr2.join();
}

double-ptr usage #5 – pointer allocated on heap

Whenever a pointer Object (a 32-bit object [1]) is allocated on the heap, there’s usually (always?) a double pointer somewhere.

new int*; //returns an address, i.e. the address of our pointer Object. If you save this address in a var3, then var3 must be a double-ptr.

int ** var3 = new int*; //compiles fine, but you should not dereference **var3 — the heap-allocated pointer is uninitialized

However, I feel we seldom allocate a pointer on the heap directly. More common patterns of a pointer ending up on the heap are
– if an Account class has a pointer field, then an Account on the heap would have this pointer field allocated on the heap.
– a pointer array allocated on the heap

[1] assuming a 32-bit machine. – initial phrasebook

producer/consumer – upgraded.

buffer – as explained elsewhere in my blog, there’s a buffer in any async design. In the ExecutorCompletionService scenario, the buffer is the “completion queue”. In the classic producer/consumer scenario, the buffer is the item queue.

items = “tasks” – In a P/C setup, Thread 1 could be producing “items” and Thread 2 could be taking the items off the buffer and using them. In the important special case of task items, the consumer thread (possibly a worker thread) picks tasks off the queue and executes them. CompletionService is all about task items.

tasks executed by..? – in P/C with task queue, tasks are executed by consumer. In CompletionService, tasks are executed by the mysterious “Service”, not consumer. See CompletionService javadoc.

3-party – one party more than P/C. Besides the P and C threads, the task executor could run on another thread.
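A rough Python analogue of the CompletionService idea, using concurrent.futures: the executor plays the mysterious “Service” that actually runs the tasks, and as_completed() plays the completion queue that the consumer drains:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(n):
    # Executed by the executor's worker threads — not by the consumer.
    return n * n

with ThreadPoolExecutor(max_workers=3) as executor:
    # Producer side: submit task items to the "Service".
    futures = [executor.submit(task, n) for n in range(5)]
    # Consumer side: drain the completion queue as results become ready.
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9, 16]
```

Note the 3-party structure: the submitting thread, the draining thread (here the same main thread), and the pool's worker threads doing the execution.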