We should not be too distracted by the raw numbers. Most of the positions surveyed are irrelevant to me, such as PHP, HTML, Ruby, Objective-C, and embedded C.
https://www.virtualbox.org/manual/ch01.html#virtintro explains that
a guest OS runs in a virtual machine or “vm”. A “vm”
- usually refers to a container process, if it’s “live”
- more often means a vm-config, i.e. a collection of parameters defining a physical container process to-be-started.
It’s important to realize (taking a Windows host OS as an example) that a vm is strictly an application with a window, like a browser or a shell. As such, this application has its own config data saved on disk.
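The process-vs-config distinction can be sketched abstractly. This is a hypothetical illustration, not the VirtualBox API; every name below is made up:

```python
# Hypothetical sketch, not the VirtualBox API: a "vm" in the config sense is
# just parameters saved on disk; a "vm" in the live sense is a container
# process started from such a config.
from dataclasses import dataclass

@dataclass
class VmConfig:          # the on-disk sense of "vm": parameters only
    name: str
    ram_mb: int
    disk_image: str

@dataclass
class RunningVm:         # the "live" sense: a container process
    config: VmConfig
    pid: int

def start(config: VmConfig, pid: int) -> RunningVm:
    """Starting a VM turns an on-disk config into a live process."""
    return RunningVm(config=config, pid=pid)

cfg = VmConfig(name="dev-sandbox", ram_mb=2048, disk_image="dev.vdi")
vm = start(cfg, pid=12345)
print(vm.config.name, vm.pid)   # prints: dev-sandbox 12345
```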
Based on prior experience, I feel I would not use Linux as a desktop.
- Avoid overspending time learning the various “interesting” details
- Avoid overspending time understanding the installation issues
- Avoid overspending $ to buy stuff
Just do the minimum to set up a sandbox and get the coding interview done. The GTD/zbs of the Linux desktop has zero market value. It doesn’t enhance my skills with “professional Linux”, as proven over the years.
Each time I tried, I had many problems installing and using Linux desktops:
- Ubuntu from XR: the installation was successful, but there were many problems using it.
- Oracle VirtualBox
Close to 100,000 as of 2018
NanoXML was designed to be much smaller than DOM or SAX parsers. This design goal is now carried by the NanoXML/Lite version.
The other version, NanoXML/Java, is now the standard version.
Q: Why is the SSCFI expert system model-based? Why is the model central to this expert system?
* According to published data, the largest body of this expert system’s “knowledge” is about line records, even though the system’s main job is something else, i.e. diagnosis. This is common among real-world expert systems.
* I guess among expert systems, model-based designs form one well-known type. A one-liner introduction is “An expert system based on fundamental knowledge of the design and function of an object. Such systems are used to diagnose equipment problems, for example.”
* Circuit models help the system survive and continue to function despite two difficulties:
1) many line records are unreliable (non-standard)
2) much of the test equipment (used to test circuits) is unreliable (often misconfigured)
Most, if not all, of the arguments below are my hypotheses, with limited evidence.
* I feel an intelligent expert system can “reason” and use judgement, just like humans do with an internal model. The more comprehensive the model, the more it can reason and make sense of confusing data.
* I feel fault isolation may require the system to keep track of test results, to be interpreted in context. The circuit model is part of that context.
* I feel test data are perhaps correlated. The relations can be hard to identify; a model helps. A human tester, too, relies on a circuit model to correlate data.
* I feel test results have patterns, as experienced human testers know: perhaps patterns about brands and models, about seasons, about circuit types and designs. These could presumably be incorporated into the circuit model.
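The hypotheses above can be made concrete with a toy sketch. Everything here is hypothetical (circuit types, resistance ranges, thresholds are invented); it only shows the general idea of interpreting a measurement against a circuit model instead of trusting it blindly:

```python
# Toy sketch (all names and numbers hypothetical): why a circuit model is
# central. A test reading is interpreted *against* the model, so an
# unreliable line record or a misconfigured tester shows up as an
# inconsistency rather than being accepted at face value.

# A tiny "circuit model": expected resistance range per circuit type.
CIRCUIT_MODEL = {
    "loop-A": {"resistance_ohms": (400, 600)},
    "loop-B": {"resistance_ohms": (900, 1100)},
}

def diagnose(circuit_type: str, measured_ohms: float) -> str:
    """Compare a (possibly unreliable) measurement with the model."""
    lo, hi = CIRCUIT_MODEL[circuit_type]["resistance_ohms"]
    if lo <= measured_ohms <= hi:
        return "consistent with model: no fault indicated"
    if measured_ohms > 10 * hi:
        return "open circuit suspected (or tester misconfigured; retest)"
    return "out of model range: fault or bad line record; cross-check"

print(diagnose("loop-A", 500))      # consistent with model
print(diagnose("loop-A", 99999))    # wildly off: suspect the tester too
```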
Most complex software favors strong typing. I feel it’s not all due to ignorance, inertia, corporate politics, or the marketing machine. Some brave and sharp technical minds ….
I think large teams need clean, well-defined module-to-module interfaces (module ~= class). A variable (mostly a pointer to a chunk of memory) should have well-defined operations on it.
The precision comes at a cost (development time, inflexibility), but large teams usually need more coordination and control. At the heart of it is “identification”.
In the military, in hospitals, in government, and also in large companies, identification is part of everyday life. It provides a foundation for security and coordination.
At the heart of OO modelling is the translation of real-world security policies into built-in system rules. Strong typing means precise type identification.
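As a hedged illustration of typing-as-identification (the names below are invented, not from any particular codebase): a small wrapper type makes a module boundary self-identifying, where a bare int would not be.

```python
# Hypothetical sketch: strong typing as "identification" at a module boundary.
# AccountId is an invented example type, not from any real system.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountId:
    value: int

def close_account(acct: AccountId) -> str:
    """The signature itself identifies exactly what the caller must pass."""
    return f"closed account {acct.value}"

print(close_account(AccountId(42)))   # prints: closed account 42
# close_account(42) would be flagged by a static checker such as mypy,
# because a bare int carries no "identification".
```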