All sync objects on Windows, across languages, are based on primitive kernel sync objects. (How about sync objects in the JVM on Windows? I guess the same.) Sync objects are not language constructs; they are provided by the OS. I feel the stream construct is also something provided by the OS.
([[cpow]] means [[concurrent programming on Windows]])
The CLR Mutex class is a “crude” wrapper over some kernel object(s) — crude because using this Mutex involves an expensive kernel transition.
In contrast, the CLR monitor is also based on kernel objects, but minimizes kernel transitions and is more efficient/cheaper. See P188 [[cpow]]. I feel this efficiency is achieved at the expense of features. For example, CLR monitors aren’t cross-process.
CLR monitor offers 1) mutual exclusion and 2) condVar features. The entire CLR monitor, including the cond var, is based on kernel objects, according to [[cpow]]. The CLR monitor is considered a “higher level of abstraction from the basic kernel objects”.
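To make the two monitor features concrete, here is a minimal sketch (my own illustration, not from [[cpow]]): a one-slot producer/consumer handoff where `lock` gives mutual exclusion and `Monitor.Wait`/`Monitor.Pulse` act as the condVar.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class MonitorDemo
{
    static readonly object gate = new object();     // a CLR monitor is attached to any object
    static readonly Queue<int> queue = new Queue<int>();

    public static int ConsumeOne()
    {
        lock (gate)                                 // 1) mutual exclusion
        {
            while (queue.Count == 0)
                Monitor.Wait(gate);                 // 2) condVar: releases the lock and sleeps
            return queue.Dequeue();
        }
    }

    public static void Produce(int x)
    {
        lock (gate)
        {
            queue.Enqueue(x);
            Monitor.Pulse(gate);                    // wake one waiting consumer
        }
    }

    static void Main()
    {
        var consumer = new Thread(() => Console.WriteLine(ConsumeOne()));
        consumer.Start();
        Produce(42);
        consumer.Join();                            // prints 42
    }
}
```

Note the `while` (not `if`) around `Monitor.Wait` — the standard guard against spurious/stolen wakeups.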
From the above analysis, the CLR offers 2 categories of sync objects — 1) wrappers over kernel objects, and 2) CLR-specific constructs.
1) Examples in the wrapper category — anything based on WaitHandle including Mutex/Semaphore
2) Examples in the CLR category —
– monitor class including Wait/Pulse
– the *Slim classes
Key differences between the two —
$ slow – kernel mode “transition” required in the wrapper objects. Therefore much slower.
$ IPC – only kernel sync objects are usable across processes. If no IPC required, then the non-kernel constructs are better (much faster).
$ P/Invoke – you could use P/Invoke to simulate many WaitHandle-based constructs
$ predate – there are win32 constructs to access the same kernel objects. They predate CLR. I think the wrappers are similar to the win32 constructs.
win32 – wrappers over win32 native constructs, (presumably) like file handles and other OS handles.
** p/invoke – these wrappers save you the p/invoke calls
kernel – the underlying objects are kernel constructs and probably involve kernel “services”
predate – the kernel constructs predate the dotnet framework. I think they are part of the win32 API.
conditionVar – I feel these are not like the condition variables offered by thread libraries
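The IPC point above can be sketched with a *named* Mutex, which is backed by a kernel object visible to other processes. The name `"MyApp.SingleInstance"` is made up for this sketch; a second process opening the same name would see `createdNew == false`.

```csharp
using System;
using System.Threading;

class NamedMutexDemo
{
    static void Main()
    {
        // A named mutex is a kernel object shared machine-wide -- the classic
        // single-instance-application pattern.
        using (var mu = new Mutex(true, "MyApp.SingleInstance", out bool createdNew))
        {
            if (!createdNew)
            {
                Console.WriteLine("another process already holds the mutex");
                return;
            }
            Console.WriteLine("acquired cross-process mutex");
            // ... do single-instance work ...
            mu.ReleaseMutex();  // must be released by the owning thread (thread affinity)
        }
    }
}
```

An unnamed `Mutex` skips the kernel object's name but still pays the kernel transition; for purely in-process exclusion, `lock` is much cheaper.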
– some important dotnet constructs using wait handles
* Mutex class
* Semaphore class
* signal events like AutoResetEvent and ManualResetEvent. Despite the confusing name, these are unrelated to dotnet events (the delegate-based kind).
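A quick sketch of the signal-event idea (my own example): `AutoResetEvent` starts unsignalled; `Set()` releases exactly one waiter and then automatically resets.

```csharp
using System;
using System.Threading;

class EventDemo
{
    static readonly AutoResetEvent ready = new AutoResetEvent(false); // starts unsignalled
    static int payload;

    public static void Main()
    {
        var worker = new Thread(() =>
        {
            ready.WaitOne();             // block until signalled (a kernel wait)
            Console.WriteLine(payload);  // prints 42
        });
        worker.Start();

        payload = 42;
        ready.Set();                     // release one waiter, then auto-reset
        worker.Join();
    }
}
```

A `ManualResetEvent` differs only in that `Set()` leaves the gate open for all waiters until someone calls `Reset()`.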
As illustrated in [[threading in c#]], CBO (ContextBoundObject) is about “serializing all instance methods”. Now I guess the CBO construct relies on the SC construct.
SC (SynchronizationContext) is about marshalling calls to the GUI thread. Each thread has zero or one SC instance; the GUI thread always has one, created for it automatically.
In http://www.codeproject.com/Articles/31971/Understanding-SynchronizationContext-Part-I, I feel the sync context construct is similar to the dispatcher construct. Both are effectively “handles” on the UI message pump. Since other threads can’t directly pass “tasks” to the UI thread, they must use a handle like these. Assuming sc1 covers the GUI thread, sc1.Post (some_delegate) is like Dispatcher.BeginInvoke(some_delegate)
Similar to the Thread.CurrentThread static property, SC.Current is a static property whose value is per-thread: Thread1 calling SynchronizationContext.Current would get object1, while Thread2 calling SynchronizationContext.Current would get object2.
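A testable sketch of `Post` without a GUI (my own example): the base-class `SynchronizationContext` simply dispatches the delegate to the thread pool, whereas a GUI framework installs a subclass whose `Post` marshals to the UI message pump — the `Dispatcher.BeginInvoke` analogy above.

```csharp
using System;
using System.Threading;

class ScDemo
{
    public static void Main()
    {
        // Base-class behaviour: Post queues the callback to the thread pool.
        // A WinForms/WPF subclass would instead queue it to the UI thread.
        var ctx = new SynchronizationContext();
        using (var done = new ManualResetEvent(false))
        {
            int result = 0;
            ctx.Post(_ => { result = 42; done.Set(); }, null); // async, fire-and-forget
            done.WaitOne();                                    // wait for the callback
            Console.WriteLine(result);                         // prints 42
        }
    }
}
```

`Send` is the synchronous sibling (like `Dispatcher.Invoke`), blocking until the delegate has run.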
http://stackoverflow.com/questions/1949789/using-synchronizationcontext-for-sending-events-back-to-the-ui-for-winforms-or-w advocates WPF Dispatcher instead of SC
http://www.howdoicode.net/2013/04/net-thread-pool.html has some good tips
Pool threads are always background threads; this is un-configurable.
Consider creating your own thread if synchronization is needed — if we rely on synchronization to start, wait on and stop the work done by the thread, then the task duration will very likely increase and could soon lead to starvation. If we have to use WaitHandles, locks or other synchronization techniques, then creating our own threads is the better option.
Consider creating your own thread for longer-duration work — the thread pool should be used only when the work is of short duration. The faster a thread completes its work and goes back to the pool, the better the performance. If pool threads are assigned very long work, starvation can occur after a while: new pool threads are not created beyond a limit, and if all threads are busy, the work may not be performed at all.
In the current CLR, each “CLR thread” maps to exactly one kernel/native thread once it starts running. Before starting, it maps to no kernel thread.
There's a kernel data structure for each running thread, including the stack. The dotnet CLR thread enriches that data structure with additional info. One important piece of additional info is the GC info: for each thread, the CLR needs to know how to find local pointers to heap objects. Unencumbered by this special GC requirement, the kernel doesn't need this additional data structure. See P86 [[concur programming on windows]]
Smart spin — both use smart spin. Traditionally, spinning means hogging the driver’s seat unproductively, so other threads don’t get time slices. Smart-spin constructs don’t hog the driver’s seat for long.
lockfree — both contribute to lockfree algorithms, but
** SpinLock does use CAS (Interlocked operations) internally to acquire the lock, though the CAS is hidden from you.
** SpinWait doesn’t perform CAS itself but is often used inside CAS retry loops
context switch — minimized. Both constructs minimize context switches. The spinning thread doesn’t give up the driver’s seat but spins briefly and then (hopefully) enters the critical section.
optimistic — “brief” spin and retry. This is essentially the same idea as the JVM lock’s adaptive spinning, according to my analysis. See MSDN.
SpinLock is comparable to a regular lock. A thread hitting a contended regular lock immediately and automatically gives up the CPU (context switch). With a spinlock, the thread spins briefly if contended, then hopefully (optimist!) enters the critical section without a context switch. There are many differences, but that’s the big picture.
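A minimal `SpinLock` usage sketch (my own example) — note the `bool taken` ref-parameter pattern and that the struct must not be copied or marked `readonly`:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SpinLockDemo
{
    static SpinLock spin = new SpinLock();  // a struct: never copy it, never mark it readonly
    public static int counter;

    public static void Increment()
    {
        bool taken = false;
        try
        {
            spin.Enter(ref taken);  // brief spin if contended; no context switch on the fast path
            counter++;              // keep the critical section tiny -- that's the whole point
        }
        finally
        {
            if (taken) spin.Exit();
        }
    }

    static void Main()
    {
        Parallel.For(0, 1000, _ => Increment());
        Console.WriteLine(counter); // prints 1000
    }
}
```

If the critical section were long or blocking, a regular `lock` (which parks the waiter) would beat the spinlock.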
No difference between java and c#.
Non-daemon threads (the default) will run their course and can block host-process exit after the main thread exits.
Daemon threads will NOT block host-process exit after the main thread exits.
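In c# the daemon flag is `Thread.IsBackground` (in java it is `setDaemon(true)`). A quick sketch:

```csharp
using System;
using System.Threading;

class BackgroundDemo
{
    public static void Main()
    {
        var daemon = new Thread(() => Thread.Sleep(Timeout.Infinite));
        daemon.IsBackground = true;   // "daemon": won't keep the process alive
        daemon.Start();

        var fg = new Thread(() => Console.WriteLine("foreground work done"));
        fg.Start();                   // IsBackground defaults to false
        fg.Join();

        // When Main returns, the process exits even though `daemon` is still sleeping,
        // because only foreground (non-daemon) threads block process exit.
    }
}
```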
I find the thread id number typically too low and not “realistic” enough. I am looking for a unique-looking identifier, one which I can search in log files.
Here’s my attempt to get it.
/* According to MSDN:
 * An operating-system ThreadId has no fixed relationship to a managed thread, because an unmanaged host
 * can control the relationship between managed and unmanaged threads. Specifically, a sophisticated host
 * can use the CLR Hosting API to schedule many managed threads against the same operating system thread,
 * or to move a managed thread between different operating system threads.
 */
var winThrId = AppDomain.GetCurrentThreadId(); // deprecated, but returns the current OS thread id
var thId = "/T" + Thread.CurrentThread.ManagedThreadId + "/" + winThrId + "/ ";
Biggest difference in my mind is signaling, i.e. notification between threads or processes. The origin of the semaphore is a signaling device.
Below we focus on c#
P22 [[threading in c#]] outlines c# locks.
Q: Mutex vs Semaphore in c#? (Note Mutex is not a simple lock)
%%A: Mutex is a lock usable either locally or for IPC. C# Semaphore is a counting semaphore.
%%A: Mutex is like a semaphore of count 1
Both are instantiable classes. In contrast, the lock keyword and the wait/pulse features don’t belong to an instantiable class (Monitor is a static class).
Both can be used locally (inter-thread) or IPC.
Both derive from WaitHandle, with signalling capability
Diff: thread affinity — a Mutex must be released by the thread that acquired it; a Semaphore has no owner thread.
Note Mutex is about 50 times slower than a simple lock
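The “Mutex is like a semaphore of count 1” answer can be sketched (my own example): a `Semaphore(2, 2)` admits two holders, and a third attempt fails until a permit is released.

```csharp
using System;
using System.Threading;

class MutexVsSemaphoreDemo
{
    public static void Main()
    {
        // A counting semaphore admitting two concurrent holders.
        using (var sem = new Semaphore(initialCount: 2, maximumCount: 2))
        {
            sem.WaitOne();
            sem.WaitOne();                        // both permits taken
            Console.WriteLine(sem.WaitOne(0));    // False: no permit left, would block
            sem.Release(2);                       // any thread may Release -- no owner
        }

        // Mutex behaves like a semaphore of count 1, but with thread affinity:
        // only the owning thread may call ReleaseMutex().
        using (var mu = new Mutex())
        {
            mu.WaitOne();
            mu.ReleaseMutex();
        }
    }
}
```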