For a basic web server, the resources (on disk) and objects (in memory) hosted in the server are mostly static files.
For a php/perl/python powered web server, the objects hosted would be the scripts to print html. There are almost always some resources beneath those scripts.
Simpler example — for an ftp server, the resources managed are the files.
Another simple example — time server. The resource beneath the server is the host OS.
For a database server, the resources managed by the server are the tables. The server performs heavy-duty CRUD operations on the tables. The most trivial operation — a simple select — is comparable to apache serving a static page.
For a CORBA or RMI server, there are actual “remote” objects and corresponding “skeleton” objects hosted in the server’s memory.
How about a regular java server? Resources — disk files, the database, and other servers on the network. More important are the objects hosted in the java server. They all live in the JVM.
* domain entity objects are well-defined, such as
** (Hibernate) entity objects from data sources,
** message objects, and
** objects created from user input
** more generally, objects from external data brought into java via some interface are usually domain entity objects.
* temporary objects — can lead to memory leaks if not reclaimed systematically.
* infrastructure objects, such as spring beans and MOM system objects. I think 3rd party java packages often introduce many infrastructure objects.
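To make the first category above concrete, here is a minimal sketch of a domain entity object built from external data (user input). The class name, fields, and the map-based "form" are all made up for illustration.

```java
import java.util.Map;

// Hypothetical domain entity: external data (a submitted form) brought
// into java via an interface becomes a well-defined entity object.
class LeadEntity {
    private final String name;
    private final String email;

    public LeadEntity(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // objects created from user input are usually domain entity objects
    public static LeadEntity fromUserInput(Map<String, String> form) {
        return new LeadEntity(form.get("name"), form.get("email"));
    }

    public String getName()  { return name; }
    public String getEmail() { return email; }
}
```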
useful when you have 2 database instances (perhaps on 2 DB boxes) that are not a DB cluster. What’s their relationship? Perhaps one is a replica of the other.
If the 2 db instances form a db cluster, then a db client ( your servlet ) gets a single point of contact — no need for fancy multipool. The DB cluster /masquerades/ as a single regular DB instance.
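The "fancy multipool" idea can be sketched as client-side failover across independent pools. This is a rough sketch, not WebLogic's actual MultiPool API; the class name and the string stand-ins for connections are made up.

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical multipool: when the 2 DB instances are NOT a cluster, the
// client side must pick a pool itself, e.g. fail over from a primary
// replica's pool to a secondary's.
class Multipool {
    private final List<Supplier<String>> pools; // each supplier hands out a "connection"

    public Multipool(List<Supplier<String>> pools) {
        this.pools = pools;
    }

    // try the first pool; on failure, fall back to the next one
    public String getConnection() {
        RuntimeException last = new IllegalStateException("no pools configured");
        for (Supplier<String> pool : pools) {
            try {
                return pool.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure, try the next pool
            }
        }
        throw last;
    }
}
```

With a db cluster, none of this client-side logic is needed: the cluster masquerades as one DB instance behind one pool.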
the first time a query is run, Liquid Data saves its results into a query results cache. The next time the query is run with the same parameters, Liquid Data checks the cache configuration and, if the results have not expired, quickly retrieves the results from the cache.
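The hit/expiry logic described here can be sketched in a few lines of java. The class, the TTL policy, and the string cache key are assumptions for illustration, not Liquid Data's real implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of a query-results cache with expiry: first run saves the
// results; later runs with the same key reuse them until they expire.
class QueryCache {
    private static class Entry {
        final Object results;
        final long expiresAt;
        Entry(Object results, long expiresAt) {
            this.results = results;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public QueryCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // key = query text plus parameters; loader runs the real query on a miss
    public Object get(String key, Supplier<Object> loader) {
        Entry e = cache.get(key);
        if (e != null && System.currentTimeMillis() < e.expiresAt) {
            return e.results; // fresh hit: skip the database entirely
        }
        Object results = loader.get(); // miss or expired: run the query
        cache.put(key, new Entry(results, System.currentTimeMillis() + ttlMillis));
        return results;
    }
}
```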
similar to EHcache read-only mode
statement caching — mm2
web page caching
jsp pre-compiling before first visitor
thread pool — “execute queue” in weblogic
support for db-cluster
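The "execute queue" item in the list above can be sketched with a plain java.util.concurrent thread pool: requests queue up and a bounded set of worker threads serves them. The pool size and the doubling "work" are arbitrary stand-ins, not WebLogic's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of an execute-queue-style thread pool: submitted jobs wait in
// the pool's queue until one of the fixed worker threads picks them up.
class ExecuteQueueDemo {
    public static int runJobs(int nJobs) {
        ExecutorService pool = Executors.newFixedThreadPool(3); // 3 worker threads
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < nJobs; i++) {
            final int id = i;
            futures.add(pool.submit(() -> id * 2)); // stand-in for request handling
        }
        int sum = 0;
        try {
            for (Future<Integer> f : futures) sum += f.get(); // wait for results
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return sum;
    }
}
```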
the ejb technology offers 2 extremely important frameworks to enterprise users — orm (object/relational mapping) and dist_obj (distributed objects)
It may offer other frameworks that are less important
– clusterable biz obj
Let’s read about cmr and cross-check Binu’s observations
Lead management system, with 60 tables, each represented by a cmp bean.
in only 2 cases was the inter-bean relationship managed by container-managed relationship (cmr). One of the 2 was the Lead-Address relationship, where each lead id is linked to an Address bean. When the container loads a particular Lead bean from the DB, the associated Address bean is loaded too. When saving the beans to the DB, we used the SLSB to save the Lead bean and the Address bean separately
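The eager Lead-Address loading can be illustrated with plain java objects. The classes below and the in-memory map are hypothetical stand-ins for the CMP beans and the real tables, just to show the shape of the relationship.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: loading a Lead eagerly pulls in its associated Address, as in
// the cmr case described above; saving is done separately by the facade.
class LeadStore {
    public static class Address {
        public final String city;
        public Address(String city) { this.city = city; }
    }

    public static class Lead {
        public final int id;
        public final Address address; // loaded together with the Lead
        public Lead(int id, Address address) {
            this.id = id;
            this.address = address;
        }
    }

    private final Map<Integer, String> addressTable = new HashMap<>(); // leadId -> city

    // the facade saves Lead and Address separately in the real system
    public void saveAddress(int leadId, String city) {
        addressTable.put(leadId, city);
    }

    // one load call returns the Lead with its Address already attached
    public Lead loadLead(int leadId) {
        return new Lead(leadId, new Address(addressTable.get(leadId)));
    }
}
```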
Q: Why were the majority of the relationships not managed by cmr?
A: performance. Hibernate has replaced cmr in subsequent projects.

Q: how are the cmp beans used in terms of clients?
A: The slsb facade is the only “client”, using the local interface.

Q: How does the system know which Address bean to load for a given Lead bean?
A: Use ejb-ql.

Q: justification for cmr in this project?
A: basically nothing but ORM. The entity beans are related as in the DB, so the objects need to be linked up in some way.
I have seen authors using the twin terms “context-scope” and “application-scope” interchangeably.
This is the first thing to bear in mind. After that, recognize that
* context-scope is related to ServletContext, a name that appears repeatedly in servlet code and config files
* application-scope is related to the web-app, a name that appears in web.xml among other files.
Lastly, remember that we are talking about ATTRIBUTES when we mention these scopes ie
context-scope attributes = application-scope attributes
P309 [[ head first servlet ]]
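To make the attribute idea concrete, here is a sketch where a plain map stands in for the ServletContext, so it runs outside a container; with the real servlet api you would call getServletContext().setAttribute(...) and getAttribute(...). The attribute name and the "servlet" methods are made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: context-scope attributes = application-scope attributes, i.e.
// one attribute map shared by every servlet in the web-app. A plain map
// stands in for the single ServletContext per web-app.
class ContextScopeDemo {
    static final Map<String, Object> contextAttributes = new ConcurrentHashMap<>();

    // one servlet sets an attribute...
    static void servletA() {
        contextAttributes.put("appName", "demo"); // like ctx.setAttribute(...)
    }

    // ...and any other servlet in the same web-app sees it
    static Object servletB() {
        return contextAttributes.get("appName"); // like ctx.getAttribute(...)
    }
}
```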
there’s some logic/intelligence involved in pool growth/shrinking, conn reclaim … That logic is somehow provided by the servlet container, within the same JVM. I think it’s provided by the “swimming pool manager”.
“container-managed conn pooling” is a common phrase. Servlet Container maintains a pool of connection objects — each a chunk of memory.
A primitive implementation is a hashmap in the container’s memory: a hashmap of physical (i.e. in-memory) PooledConnection objects.
“swimming pool manager” is a wall (that can’t be bypassed) between the servlets and the pool of PooledConnection objects, and exposes to the servlets essentially 2 methods: getConnection() and closeConnection(). Not sure about the exact method names. Reacting to these calls, the swimming pool manager hands out a “physical” connection to the servlet, or reclaims it from the servlet back into the pool.
“swimming pool manager” is the single “spokesman” and single point of access of the pool.
“swimming pool manager” is request-driven. I think a class will “send it a message” by calling poolManager.getConnection() or poolManager.closeConnection()
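A primitive pool manager along those lines might look like this. PooledConnection here is an empty stand-in for the real physical connection object, and as noted the real method names may differ.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "swimming pool manager": the single point of access to
// the pool, reacting to getConnection()/closeConnection() messages.
class PoolManager {
    public static class PooledConnection { /* stands in for a chunk of memory */ }

    private final Deque<PooledConnection> idle = new ArrayDeque<>();

    public PoolManager(int size) {
        for (int i = 0; i < size; i++) idle.push(new PooledConnection());
    }

    // hand out a physical connection from the pool
    public synchronized PooledConnection getConnection() {
        if (idle.isEmpty()) throw new IllegalStateException("pool exhausted");
        return idle.pop();
    }

    // reclaim the connection into the pool; nothing is really closed
    public synchronized void closeConnection(PooledConnection c) {
        idle.push(c);
    }

    public synchronized int idleCount() { return idle.size(); }
}
```

A real manager would add the growth/shrink and reclaim intelligence mentioned above; this sketch only shows the hand-out/reclaim cycle.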
In Weblogic, a swimming pool manager (the hashmap of conn objects) may need 2 helpers — dataSource and a pool driver.
* I think you need to specify the driver in your jdbc calls
* You can choose to use dataSource when obtaining a connection from the swimming pool manager.
– value of dd (deployment descriptors) for learning purposes
for beginners, source code could be too much to swallow.
dd xml format is very structured and concise compared to source code, which contains lots of seemingly “useless” code.
relationships between several java objects are not clear from reading source code. dd is better.
many critical infrastructure pieces are not defined in source code.
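As an illustration of the point about relationships, a hypothetical spring-style dd fragment: the wiring between two objects is visible at a glance, with none of the surrounding java code. All ids, class names, and values here are made up.

```xml
<beans>
  <!-- the leadDao -> dataSource relationship, declared in one line -->
  <bean id="leadDao" class="com.example.LeadDao">
    <property name="dataSource" ref="dataSource"/>
  </bean>

  <!-- an infrastructure piece defined only here, not in source code -->
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="url" value="jdbc:mysql://localhost/leads"/>
  </bean>
</beans>
```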