The kernel scheduler is an algorithm, and is therefore implemented as a sequence of instructions. You can think of it as a black-box function/routine.
I think it is not really a long-running background process. In Linux, I believe it is an on-demand routine, but one that does not run on behalf of any process.
Background — Many familiar on-demand kernel routines do run on behalf of an “owner process” —
- accessing a file or socket
- accessing some device such as NIC
- accessing memory
However, other on-demand kernel routines (often interrupt handlers) do not have an “owner process”. Here are some routines —
- reacting to timer interrupts
- reacting to low-level emergency hardware interrupts like …. ?
So the scheduler is a classic example. I think the scheduler can get triggered by timer interrupts. See P 350 [[linux kernel]]
“Linux kernel thread cannot run user programs”, as explained in [[UnderstandingLinuxKernel]].
Removing ‘Linux’ … Other Unix variants do use kernel threads to run both 1) user programs and 2) kernel routines. This is my understanding from reading about the JVM…
Removing ‘kernel’… Linux userland threads do run user programs.
Removing ‘thread’… Linux kernel processes or interrupt routines could possibly run under a user process’s pid.
- To the kernel, there are many similarities between the “thread” construct and the “process” construct. In fact, a (non-kernel) thread is often referred to as a LightWeightProcess in many kernels such as Solaris and Linux.
- socket — threads in a process can access the same socket; two processes usually can’t access the same socket, unless … parent-child. See post on fork()
- memory — thread AA can access all heap objects, and even Thread BB’s stack objects. Two processes can’t share these, except via shared memory.
- context switching — is faster between threads than between processes.
- creation — some thread libraries can create threads without the kernel knowing. No such thing for a process.
- a non-kernel thread can never exist without an owner process. In contrast, every process has a parent process, which could be long gone.
(This is largely a QQ question. Some may consider it zbs.)
Background — the Linux kernel provides two types of locks — spinlocks and blocking mutexes, as in https://www.kernel.org/doc/htmldocs/kernel-locking/locks.html . Here I focus on the mutex. I think it is far more relevant to userland applications.
https://lwn.net/Articles/575460/ has good pointers:
- I believe a context switch is expensive since the CPU cache has to be repopulated. Therefore, optimistic spinning is beneficial.
- the blocking mutex is used inside the kernel, and perhaps not directly used by userland apps
- implemented using a spinlock + some wait_lock
- maintains a wait_list, not visible to any userland app.
I think some interrupts are generated by software, but here I focus on hardware interrupt handlers.
- pseudo-function — Each handler is like a pseudo-function containing a series of instructions that will run on a cpu core
- top priority — interrupt context is higher priority than kernel context or process context. You can say “interrupt” means “emergency”. Emergency vehicles don’t obey traffic rules.
- However, an interrupt handler “function” can get interrupted by another interrupt. The kernel somehow remembers the “stack”
- not preemptible — except in the nested-interrupt scenario above, the kernel can’t suspend a hardware interrupt handler mid-stream and put another series of instructions in the “driver’s seat”
- no PID — since there’s no preemption, we don’t need a pid associated with this series of instructions.
Suppose you have 2 cores, so 2 kernel threads can run simultaneously. If they are deadlocked, what would the cpus be doing? Nothing but spin. I believe there’s no concept of “blocking” in the kernel.
Now suppose there’s another core, but the current process is waiting for I/O. What can this core do? Nothing but spin
“An IPC resource is persistent: unless explicitly removed by a process it is kept in memory and remains available until system shutdown.”
I just found this sentence in [[understandingLinuxKernel]] section on “System V IPC”.
“IPC resource” includes shared mem and semaphore.
https://syscalls.kernelgrok.com can sort the functions by function id
http://asm.sourceforge.net/syscall.html is ordered by function id
- open() close() read() write()
- socket() connect() accept()
- recvfrom() sendto()
- shutdown() is for socket shutdown and is more granular than the generic close()
- epoll family
http://www.boost.org/doc/libs/1_65_0/doc/html/interprocess/sharedmemorybetweenprocesses.html#interprocess.sharedmemorybetweenprocesses.sharedmemory.xsi_shared_memory points out
- Boost.Interprocess provides portable shared memory in terms of POSIX semantics. I think this is the simplest or default mode of Boost.Interprocess. (There are at least two other modes.)
- Unlike POSIX shared memory segments, SysV shared memory segments are not identified by names but by ‘keys’. The SysV shared memory mechanism is quite popular and portable, and it’s not based on file-mapping semantics; it uses special system functions.
- We could say that memory-mapped files offer the same interprocess communication services as shared memory, with the addition of filesystem persistence. However, as the operating system has to synchronize the file contents with the memory contents, memory-mapped files are not as fast as shared memory. Therefore, I don’t see any market value in this knowledge.
https://www.ibm.com/developerworks/library/l-semaphore/index.html — i have not read it.
My [[beginning linux programming]] book also touches on the differences.
I feel this is less important than the sharedMem topic.
- The posix semaphore is part of pthreads
- The sysV semaphore is part of IPC and often mentioned along with sysV sharedMem
The counting semaphore is the best known and the easiest to understand.
- The pthreads semaphore can be used this way or as a binary semaphore.
- The system V semaphore can be used this way or as a binary semaphore. See http://portal.unimap.edu.my/portal/page/portal30/Lecturer%20Notes/KEJURUTERAAN_KOMPUTER/SEM10809/EKT424_REAL_TIME_SYSTEM/LINUX_FOR_YOU/12_IPC_SEMAPHORE.PDF