Here’s the infrastructure. Exactly one microagent is installed on each machine to be monitored. An “environment” is defined by a base name on a machine, and the relationship between microagents and environments is strictly 1:M. Suppose we have 2 microagents (on 2 machines) and 3 environments under each, so 6 distinct environments in total. A command like “dir” or “ipconfig” can execute in any one of the 6 environments, say Environment #1; we can also run the same command on Environment #2, #3, #4, #5, or on all 6 environments. Another command, “path”, can likewise hit any environment.
If we single out one microagent, one environment under it, and run one command against it, the command output is the status of one “service”. So a service is identified by a 3-tuple: a particular microagent, a particular environment, and a particular command. With 2 microagents, 3 environments under each, and 4 commands, we could have up to 2 × 3 × 4 = 24 services. I use many different terms to refer to a service.
Sometimes I call it a query. You keep firing the same query to get updates from the microagent.
Sometimes I call it a chat room. All clients registered for that chat room would get all updates.
Sometimes I call it a message generator.
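The service identity and the 24-service arithmetic above can be sketched in a few lines. The agent, environment, and command names here are illustrative placeholders, not the real system’s values:

```python
from itertools import product

microagents = ["agentA", "agentB"]            # one per monitored machine
environments = ["env1", "env2", "env3"]       # base names under each agent
commands = ["dir", "ipconfig", "path", "netstat"]

# A "service" is the 3-tuple (microagent, environment, command).
services = [(a, e, c) for a, e, c in product(microagents, environments, commands)]
print(len(services))  # 2 * 3 * 4 = 24
```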
For each service, GUI clients continuously monitor its status. A client connects to the server to subscribe to updates. The server maintains about 100 such “services”, like chat rooms; each one generates messages every few seconds, and the server pushes them to the registered clients using WCF.
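The subscribe-and-push pattern can be sketched in-process like this. The real system delivers updates through WCF duplex callbacks; here plain functions stand in for client callback channels:

```python
from collections import defaultdict

subscribers = defaultdict(list)   # service key -> registered client callbacks

def subscribe(service, callback):
    subscribers[service].append(callback)

def push_update(service, status):
    # Called by the service's timer every few seconds: fan the message
    # out to every client registered for this "chat room".
    for cb in subscribers[service]:
        cb(service, status)

received = []
subscribe(("agentA", "env1", "dir"), lambda s, m: received.append((s, m)))
push_update(("agentA", "env1", "dir"), "OK")
print(received)  # [(('agentA', 'env1', 'dir'), 'OK')]
```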
In terms of topology: exactly one server instance in the network, at least 3 microagent-enabled app-server machines, and many, many client machines.
Trick: the server doesn’t know when one of the registered clients has gone offline, so I often noticed it sending updates to 13 clients when only one or zero were actually alive. I created a dictionary of connections keyed by client IP address, so we never keep two duplicate clients to update; when the same IP registers twice, one of the entries must be dead.
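The dedupe-by-IP trick reduces to using a dictionary write as a replace rather than an append. A hypothetical sketch, with strings standing in for the real connection/callback objects:

```python
connections = {}   # client IP -> connection object for push callbacks

def register(ip, conn):
    # If the same IP registers again, the old entry (presumed dead)
    # is silently replaced instead of kept as a duplicate target.
    connections[ip] = conn

register("10.0.0.5", "conn-old")
register("10.0.0.5", "conn-new")   # client restarted and reconnected
print(connections)  # {'10.0.0.5': 'conn-new'}
```

The design choice here: rather than detecting dead clients, the server guarantees at most one registration per IP, so a restarted client automatically displaces its stale ghost.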
Trick: many message generators (“services” or “chat rooms”) share the standard update interval of 60 seconds, each driven by its own private timer. The timers all start at server start time, but I decided to give them different initial delays: one generator fires on the 1st second of every minute, another on the 2nd second of every minute, and so on. This spreads out the load on all parties.
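A minimal sketch of the staggered-start idea, assuming one-second spacing between generators within the shared 60-second interval (`initial_delay` and `start_generator` are illustrative names, not the real system’s API):

```python
import threading

INTERVAL = 60  # shared update interval in seconds

def initial_delay(generator_index, interval=INTERVAL):
    # Stagger generators one second apart within the shared interval,
    # wrapping around once there are more generators than seconds.
    return generator_index % interval

def start_generator(index, fire):
    def tick():
        fire(index)
        threading.Timer(INTERVAL, tick).start()   # reschedule self
    # Same period for everyone, but a distinct first-fire offset.
    threading.Timer(initial_delay(index), tick).start()

print([initial_delay(i) for i in (0, 1, 59, 60)])  # [0, 1, 59, 0]
```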
Trick: when a microagent is offline, the central server would keep hitting it on schedule (say every 5 seconds), driven by the timer. That is expensive because the calling thread must block until the connection times out. I decided to reduce the timer frequency once a microagent is seen offline, and restore it after the microagent becomes reachable again.
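The backoff rule is essentially a function from agent reachability to the next polling interval. A sketch with assumed interval values (the 5-second normal cadence is from the text; the 60-second backoff is an illustrative choice):

```python
NORMAL_INTERVAL = 5      # seconds between polls while the agent is reachable
BACKOFF_INTERVAL = 60    # slower cadence while the agent is offline

def next_interval(agent_online):
    # Slow the timer down while the microagent is unreachable, so threads
    # don't pile up blocking on connection timeouts; restore on recovery.
    return NORMAL_INTERVAL if agent_online else BACKOFF_INTERVAL

print(next_interval(True), next_interval(False))  # 5 60
```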
Trick: some queries on the microagent take a long time (20 seconds). Before the first query completes, the 2nd query in the series could hit the same microagent, overloading both sides. I decided to set a busy flag on each query, so the next time a thread-pool thread “wants” to fire the query, it sees the flag and simply returns, without blocking.
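The busy flag amounts to a non-blocking test-and-set per query. A sketch of that guard (the `Query` class is hypothetical; a non-blocking lock acquire plays the role of the flag):

```python
import threading

class Query:
    """If a previous run of this query is still in flight, the next
    timer-driven attempt returns immediately instead of queuing up."""

    def __init__(self, run):
        self._run = run
        self._busy = threading.Lock()

    def fire(self):
        # acquire(blocking=False) acts as an atomic test-and-set busy flag.
        if not self._busy.acquire(blocking=False):
            return False          # previous run still in progress; skip
        try:
            self._run()
            return True
        finally:
            self._busy.release()  # clear the flag once the query finishes

q = Query(lambda: None)
print(q.fire())  # True: flag was clear, so the query ran
```

The key property is that the skipping thread never blocks; it goes straight back to the thread pool, so a slow 20-second query costs at most one thread at a time.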