include non-key columns in a non-clustered index to reduce IO counts

http://msdn.microsoft.com/en-us/library/ms190806.aspx shows this lesser-known performance trick. If a query uses a particular NCI (non-clustered index), and all the output fields are “included” on the index, then the query won’t hit the data pages at all.
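
A minimal sketch in SQL Server syntax; trades, sym, qty and price are invented names:

  CREATE NONCLUSTERED INDEX ix_trades_sym
  ON trades (sym)           -- key column, used in the WHERE clause
  INCLUDE (qty, price);     -- non-key columns stored on the index leaf pages

  -- all output fields are on the index, so no data page reads:
  SELECT sym, qty, price FROM trades WHERE sym = 'IBM';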

How about sybase? http://dba.stackexchange.com/questions/15319/equivalent-of-include-clause-while-creating-non-clustered-index-in-sybase says it’s not supported. But see also https://support.ca.com/cadocs/0/CA%20ERwin%20Data%20Modeler%20r9%201-ENU/Bookshelf_Files/HTML/ERwin%20Help/index.htm?toc.htm?define_sybase_table_index_properties.html

clustered index looks like arrayList + sList

(based on the sybase ebook [[How Indexes Work]]) When you insert a row into a data page with a 10-row capacity (2KB page, 200B/row), you may get one of

* arrayList behavior — if the page isn’t full (e.g. 6 of the 10 slots taken), the new row goes into its sorted position (say 2nd), shifting all subsequent rows (5 of them) on the same page.

* linkedList behavior (page split) — page already full. A new tree node (i.e. data page) is allocated. About half of the 10 rows move into the new page. All previous/next page-pointers are adjusted. In addition, we must adjust page-pointers in
** higher index page and
** non-clustered indexes.

sybase 15 dedicated cpu

In one of my sybase servers with a large database load, we managed to dedicate one of the 16 cpu cores to a specific stored proc, so no other process could use that core.

We also managed to dedicate a processor to a specific JDBC connection.

This let us ensure a high priority task gets enough CPU resource allocation.

What’s the name of the feature?
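
I believe the feature is ASE’s Logical Process Manager, i.e. engine groups plus execution classes. A hedged sketch from memory; the proc names and signatures below are approximate and version-dependent:

  sp_addengine 3, 'PriceEG'                   -- assumption: put engine 3 into its own engine group
  go
  sp_addexeclass 'HighEC', 'HIGH', 0, 'PriceEG'          -- execution class tied to that engine group
  go
  sp_bindexeclass 'critical_proc', 'PR', NULL, 'HighEC'  -- bind the stored proc ('PR') to the class
  go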

datachange && auto update-stat in sybase 15

— based on P22 [[ new features guide to ASE 15 ]]

The holy grail — let users determine the objects, schedules and datachange thresholds[1] to automate the stat-update process.

[1] “update only when required”

– datachange is a function, used as in “select datachange(….”
– the datachange function returns a percentage — what percent of the data changed, due to CUD (create/update/delete).
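
Usage, roughly; the table name is invented and the signature, as I recall it, is datachange(object_name, partition_name, column_name):

  select datachange('trades', null, null)        -- % of the whole table changed
  select datachange('trades', null, 'trade_dt')  -- % changed with respect to one column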

25G/table – sybase comfort zone

Many people feel Sybase is unsuitable for large tables. How about a 25GB table?

I worked with a few post-trading systems (commissions, trade-level revenues, settlement…), where each table’s data occupies 10 – 25GB, with another 10 – 25GB for all indices of that table. Each row is typically 1 – 2KB, fitting within a Sybase data page, so such a table typically holds 10+ million rows.

My Sybase trainer said Sybase is faster than oracle/db2 for smaller tables, below 100mil rows.

My database colleagues also feel Sybase is fast with 10-25GB/table.

block-oriented storage – j4 columnar

As suggested on http://en.wikipedia.org/wiki/Column-oriented_DBMS, reading a megabyte of sequentially stored disk data takes no more time than a single random access on block-oriented storage. I call it wholesale vs retail. In such a context, storing related items physically close together is an essential optimization.

I believe this is a fundamental fact-of-life in tuning, algorithms and system design.

I believe this is the basis of the B+tree RDBMS index. In contrast, in-memory indexing uses no block-oriented storage and probably doesn’t use a B+tree.

I believe this is one of the top 3 fundamental premises of columnar DB. Imagine a typical user query selects columns of data, not rows of data…
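
A hypothetical illustration: this aggregate touches a single column, so a columnar store streams just that one column off disk sequentially, while a row store must read every (wide) row.

  select avg(notional) from trades   -- trades/notional are invented names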

#1 database scale-out technique (financial+

Databases are easy to scale-up but hard to scale-out.

Most databases get the biggest server in the department, as a database is memory-intensive, I/O-intensive and cpu-intensive, taking up all the kernel threads available. When load grows, it’s easy to buy a bigger server — so-called scale-up.

Scale-out, whereby you increase throughput linearly by adding nodes, is harder. Application-tier scale-out is much easier, so scalable architects should take early precautions on the DB side.

If the DB is read-mostly, then you are lucky. Just add slave nodes. If unlucky, then you need to scale out the master node.

The most common DB scale-out technique is sharding, i.e. HORIZONTAL partitioning (the cut runs horizontally, left to right). Simplest example — a federated table, one partition per year.

I worked in a private wealth system. The biggest DB was the client sub-ledger, partitioned by geographical region into 12 partitions.

We also used Vertical partitioning — multiple narrow tables.

Both vertical and horizontal partitions can enhance performance.
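
A minimal sketch of the one-partition-per-year idea, in MySQL-style syntax (table and columns are invented):

  CREATE TABLE sub_ledger (
      acct_id BIGINT,
      txn_dt  DATE,
      amount  DECIMAL(18,2)
  )
  PARTITION BY RANGE ( YEAR(txn_dt) ) (
      PARTITION p2009 VALUES LESS THAN (2010),
      PARTITION p2010 VALUES LESS THAN (2011),
      PARTITION pmax  VALUES LESS THAN MAXVALUE
  );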

perf ^ functional-query — 2 sought-after skills on DB

Q: As a Wall st IT hiring mgr, what single DB skill is the most valuable?

Depends on who you ask. Many laymen and most experts would say tuning skills. It has become the default answer; perhaps 99% of hiring managers would say tuning is the #1 DB skill.

However, I have worked with many developers. When push comes to shove, I know the badly needed skill is “implementing complex logic”. Wall St is all about time to market, quick and dirty. Most Wall St systems keep all of their financial data in a database, with huge amounts of non-trivial logic in SQL. SQL know-how (mostly SELECT) can blast through brick walls — see post on blast. Therefore if I were hiring, my #1 focus would be the know-how to implement complex query logic, using 8-way outer joins, CASE, group-by, correlated sub-selects …
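
A flavor of what I mean, with an outer join, CASE and a correlated sub-select in one statement (all table/column names invented):

  select c.region,
         sum(case when t.side = 'B' then t.qty else -t.qty end) as net_qty,
         (select max(r.run_dt) from batch_runs r
           where r.region = c.region) as last_run      -- correlated sub-select
  from clients c
  left outer join trades t on t.client_id = c.client_id
  group by c.region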

So here we are — perf vs SQL tricks. These are the 2 most valuable DB skills on Wall Street.

Incidentally, both depend on detailed knowledge of a tiny language known as SQL. It’s remarkable that SQL ranks alongside java, Unix, OO, threading etc as a top technical skill, even though the SQL language itself is much, much smaller than the other languages.

A tiny subset of the tuning interview topics to study —
* show plan, show statistics io
* sproc
* hints
* index selection, design
* index stats
* join types, join order
* temp tables

perf ^ functional — techniques, know-how

See also — post on perf^complex query on databases
Background —
* when reading code or publications, we often see 2 broad categories of _techniques_, design skills, know-how…
* When evaluating a developer, we often focus on these 2 types of skills

A) techniques to achieve AAA+ sys performance
B) techniques to achieve basic functionality

I would argue knowledge about B outweighs A in most Wall Street development jobs. “First get it to _show_ all the essential functionality, then optimize” is the common, pragmatic strategy. Uncertainty and project risk are the key drivers. Once we show stakeholders we have a base deliverable, everyone feels reassured.

Q: In pursuing quick and dirty, would we end up with a non-scalable design?
A: seldom.

Q: how about low-latency, massive market data feeder, risk overnight batch?
A: I don't have experience there. I would speculate that in these specialized fields, the base design is always a battle-tested design[1]. On top of a sound design, quick-and-dirty will not lead to unsolvable issues.

[1] Futuristic designs belong to the lab. Most Wall St systems have serious funding and aren't experimental.

DB performance – focus on read or write?

1) Concurrent Write is the focus of DB performance tuning in some low-latency apps, esp. execution engines at the heart of ECNs and broker/dealer systems. Every execution must be permanently recorded (transactionally) before responses go out to counterparties and downstream. DB reads are avoided with caching.

1b) For market data engines, non-transactional DB write is still slower than flat file write. If a flat-file-write operation is A, then a database write operation involves A+B. If you don’t need the DB features, then you don’t need B. A is always faster than A+B.

2) Read is the focus of DB performance in other domains where data volume is huge — search, reporting, ERP, back office, asset management, BI, data warehousing.

———
Those are the 2 domains I see more often. There are other contexts —

Write can be the bottleneck in high-volume batch ETL. We used to rely on sybase bcp.

OODB indexing inside memory (java and c++

We understand how indexing helps disk-based queries, but a large in-memory DB also needs indexing. Gemfire is one example. A query without the aid of an index iterates through every object in the collection. If an index is available that matches the where-clause, the query iterates only over the indexed set, thus reducing the query processing time.

The best example is a point query “where trade.cusip=IBM”. Suppose we have a gigantic 4GB hash map with millions of {tradeId, trade} pairs. With an index you iterate over a subset only. Perhaps the index is a sorted map of {cusip, tradeId}. It’s sorted to support range queries, and the index field must implement Comparable.

How large can a hash map grow? I once put 200MB into a hash map, easily.

In practice, keeping 100GB of data in one machine is too expensive hardware-wise. When such a big server becomes obsolete, it’s probably useless. Most people prefer the more economical server farm, where data is distributed.

The VMware book [[Virtualizing and Tuning Large Scale Java Platforms]] explains JVM memory size limits.

For a modest machine, the index itself could be too large to fit. Since an in-memory index is usually a red-black tree (or a skip list, but not a hash table), we need to split the index tree among physical machines. Beware: B+tree is suitable for disk-based indexing only.

when to denormalize, briefly

%%A: when select performance is more important than DML performance
%%A: read-mostly DB

Q: any specific type of application you know
A: reporting, query, analysis, research, search

Q: How about data warehousing? Materialized views can be created from a join (?), therefore denormalized?
A: Not even close, though both can help performance. A materialized view consists entirely of derived, redundant data. When the original data changes, this copy of redundant data is marked obsolete. If you add a bunch of triggers to always update the materialized view to keep it in sync with the original, then maybe it’s equivalent to a denormalized set of tables.
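
A hedged Oracle-style sketch (invented tables; refresh options vary a lot by setup):

  CREATE MATERIALIZED VIEW mv_client_positions
  REFRESH COMPLETE ON DEMAND      -- rebuilt periodically; goes stale in between
  AS
  SELECT c.client_id, c.region, SUM(p.qty) AS total_qty
  FROM   clients c JOIN positions p ON p.client_id = c.client_id
  GROUP  BY c.client_id, c.region;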

* joins — big joins are the problem
* summary tables like in GS are examples of denormalized tables.
* star schema on wikipedia

sort a large joined table by any field

Hi ZR,

Let’s start with a different challenge — say I have a big table of 10GB data, and a lot of users. If I peek at all the queries by these users, I see they search by every single column! One day I see a column that’s not the first column of any index, and I know we have a real problem — a search by this column is a full table scan. Even if this happens once a month, it’s unacceptable, as it means one of the valid queries simply won’t perform.

My solution — create indices for every column, but make this table readonly for 23 hours a day, updating it during the 1-hour window. Having so many indices would slow down insert/update/delete, and writers may lock some data and slow down queries.

Assumption — if there are enough indices, then search will be fast enough. This is not true if an index has a very fat key, each index page contains just one key value, and the index tree is very, very deep. Also, some searches can’t run fast — search by gender, search where age > 0, search for non-null names… The optimizer will be smart enough to ignore the indices when FTS is the best query plan. I think these cases are rare and obvious, so the assumption is probably safe.

Now let’s modify the challenge — a big “worktable” after a join, not a big physical table. Say table A has columns a1 a2 a3 .., table B has b1, b2, b3 .. and they are joined.

My solution — create indices on all search columns (in our case, all columns), experiment, and keep the physical tables writable all day. Since table A is now much narrower than the earlier big table, writes may be faster. Writes to tables A and B can execute independently. This is similar to the java ConcurrentHashMap design.

From my experience, if table A has 10 columns, it’s ok to create that many indices. If a big table has 200 columns, i am not sure.

Now, your question of sorting a large joined table by any field. I believe the last solution may work. Every sort column is covered by an index, so the driver table will be the host table of the sort column, using the index on the sort column. All rows will come out in the right sequence without further sorting.
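
A sketch of the idea (invented index names; whether the optimizer actually picks this plan must be verified with showplan):

  create index ix_a_a2 on A(a2)   -- one index per searchable/sortable column
  create index ix_b_b1 on B(b1)

  -- to sort the join result by a2, the optimizer can walk ix_a_a2, making A
  -- the driver table and probing B row by row; no explicit sort step needed
  select A.a1, A.a2, B.b3
  from A join B on B.b1 = A.a1
  order by A.a2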

http://en.wikipedia.org/wiki/Star_schema

practical, everyday sybase tuning in GS (accord2Nikhil

My GS system was database centric. The most important queries hit large 10GB tables. We dealt with slow queries (mostly select) on a daily basis. If a query is slower than usual:

* query plan — is the join correct?
* query plan — is the correct index used?
* query plan — any table scan?
* update stat right away if necessary. Note we already run that automatically on a weekly basis.
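
The update-stats step, roughly (table name invented; sp_recompile makes cached query plans pick up the new stats):

  update statistics trades
  go
  exec sp_recompile trades
  go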

Most of the time that’s enough. Occasionally, we also use
* partition (horizontal) into history tables
* partition (vertical) into 2 narrower tables (Main comm and trd tables + Trade Staging table)

index-only scan

“index-only” means “no data page reads”.

Before we explain index-only queries, we need to explain that most indexed queries first read the index, then[2] read the data pages [1].

A typical index-only query uses a composite index [3]. Another post about “narrow index keys” explains that the index key width is the column widths added up.

See also
http://books.google.com/books?id=JSVhe-WLGZ0C&pg=PA298&lpg=PA298&dq=index-only+scan&source=web&ots=Llbg8PAiJ9&sig=PhRIwHc78MNuso1xMJTLDnkJn-U&hl=en#PPA297,M1

http://www-1.ibm.com/support/docview.wss?uid=swg1IY94275
http://bioivlab.ils.unc.edu/course_wikis/INLS_623/ClassWork_INLS623_Spring2008/index.php/Kyle
http://www.google.com/search?q=index%2Donly+scan

[1] for structure of a page, read the db2 training manual P171
[2] follows the index leaf nodes
[3] we have no time to cover single-column index-only queries
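
A quick illustration (invented table; this is the same trick as the INCLUDE-columns post above):

  create index ix_pos_cusip_qty on positions(cusip, qty)   -- composite index

  -- both output fields live in the index, so no data page reads:
  select cusip, qty from positions where cusip = '459200101'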

boolean column ] database@@

Q: how do you design your table columns to hold boolean flags such as isOK, isAlive, isActive, which are heavily used in where-clauses? The challenge is, no index is possible, at least in sybase => FTS. For now, assume you want to select the minority of the rows. If you want the majority, then FTS may be legitimate.

A: Basically you only care about yes/no, but I would suggest using 0 for the majority of the rows, and 1-255 (or 1 ~ FFFF) for the minority. Keep this column narrow — 1 or 2 bytes — then build an index.
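
A sketch of that suggestion (invented names; tinyint is a 1-byte type in sybase):

  alter table orders add is_active tinyint default 0 not null  -- 0 = majority, 1 = minority
  create index ix_orders_active on orders(is_active)

  select * from orders where is_active = 1   -- selective enough for the index to pay off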

Even though ms sql and mysql can perhaps create an index on a binary or BIT column, it’s 1) less well-supported than regular indices and 2) questionable. In an enterprise environment, it’s politically safer to stick to regular indices.

http://www.sql-server-performance.com/article_print.aspx?id=347&type=art has examples.

http://sqlserver2000.databases.aspfaq.com/can-i-create-an-index-on-a-bit-column.html has examples.

favor narrow index keys

According to the DB2 trainer, narrow index keys are always better. Many principles in DB tuning have exceptions, but not this one. The wider the key, the fewer entries per node page, the more layers in the B+tree, and the more disk reads. Each layer usually [1] translates to one disk read.

Usually a point search on the index starts from the root, and reads one node per layer.

How wide is an index key? I asked the DB2 trainer. It’s pretty much the widths of the index columns added together. Compressing the key is doable.

[1] often the root node and the first layer are in memory and require no disk read. Remember there are easily hundreds of indices on a machine.

index Lesson#5: understand non-clustered before clustered index

As explained in the post on the 3+1 page types, each pointer on the index leaf page points to a table row (not a table page). That’s a non-clustered index.

(Get a firm grip on regular index structure before you read further, or risk permanent head damage – PhD.)

A clustered index requires a twist. As in a non-clustered index, a pointer in the branch node points to a PAGE. Unlike non-clustered, this PAGE is a table page.

Similar to an index leaf page, our clustered table page stores records sorted. In contrast, a table page in a heap table (without a clustered index) stores records in insertion order, presumably.

Each table page holds a lower number of data rows — lower than the number of index objects on an index leaf page. Think about why …

.. because a table row is fatter than an index object.

index Lesson#3: B+ tree index — key-pointer pairs

Nodes: root page, leaf pages and branch (or intermediate) pages. It’s a -> t r e e <-. Physically, each node is an index PAGE on disk. There’s another post on these PAGES.

Logically, a node contains key-pointer pairs. I call these “objects”. You can represent them better in xml. A leaf page probably contains thousands of pairs, which read like
name rowId
————-
Tom -> 12989
May -> 2390
May -> 907
May -> 2090 // non-unique index
This is, of course, an index on firstName.

Now you see that in a leaf node, an object consists of
1) a key, ie a value in the index column (consider a single-column index for simplicity)
2) a pointer to exactly 1 table ROW. The pointer is a physical address, known as row-id in sybase, ROWID in oracle, RID in DB2

In a sybase non-clustered index, each object contains a ptr to 1 row, not 1 page.

In non-leaf nodes, an object consists of
1) a key
2) a pointer to a child PAGE (Consider a unique index for simplicity)

http://www.itworld2.com/frmdsBplus.aspx
http://db.ucsd.edu/CSE232W99/Indexing/sld031.htm
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20020_1251/html/databases/X59082.htm

index Lesson#6: clustered index combines leaf pages and table pages

clustered index — physical ordering. It’s important to know what physical ordering means and what it doesn’t mean. First, understand the index tree structure. Ignore the B+tree jargon; just get the tree idea. Next, really understand that

    The (usually thousands) entries on a single index page are sorted [1]. Pages’ disk addresses aren’t sorted by the key [2].

Page addresses are neither contiguous nor sorted. Imagine you insert a Magic row belonging at the exact midpoint: you may get a page split, and the new row gets a new disk page at a random disk address.

For any tree structure, node physical address is neither contiguous nor sorted.

The enormous power of indexing (CI or NCI) derives from the tree and feature [1], despite feature [2]. If I take any index (CI or NCI) and read the table using it, the data do come out sorted, thanks to the tree.

Unlike non-clustered, Clustered Index leaf pages contain table data — explained better in another post. That’s the so-called __physical_ordering__.

A clustered index combines { directed tree + sorted list }

See also http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20020_1251/html/databases/X59082.htm

index Lesson#1: 3+1 page types for an index

Before studying key-ptr pairs, it’s good to know the 3+1 types of disk pages relevant to an index.
* root page
* branch page
* index leaf page
—————– above are index pages, below isn’t [2] —
* table page — not in the index.

Above is actually a 4-level hierarchy, typical of a non-clustered index. Each pointer (among hundreds) on an index-leaf page points to a table ROW (not a table page). Therefore you can say the data rows are the real leaves of the tree.

Remember, an index tree is a directed graph, starting at the root node, ending on the table ROWs.

In any clustered or non-clustered index, there’s always
= 1 root page, no more no less
= 0 or more branch pages
= 0 or more leaf pages [1]
= 1 or more table pages

[2] A clustered index combines leaf pages and table pages. See other posts.
[1] leaf node is usually present

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20020_1251/html/databases/X59082.htm

join order ] oracle – brief notes

* n-way
* Oracle can take an hour to choose a join order
* To limit it, you can tell the Oracle optimizer how many permutations to look into. P406 [[ oracle sql tuning ]]
* “ordered” hint tells the optimizer to follow the join order specified in the FROM clause

But what join order would you, the developer, prefer? I feel we should avoid re-scanning a large table.
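
The ordered hint, roughly (invented tables; the FROM-clause order becomes the join order):

  SELECT /*+ ORDERED */ *
  FROM   small_lookup s, big_trades b   -- drive from the small table,
  WHERE  b.sym = s.sym                  -- so big_trades is scanned via the join only once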

Async query (Sybase + others)

Q: What do we mean by Asynchronous query or Asynchronous DB connection?

A: Best answer, based on http://search.cpan.org/~mewp/sybperl-2.18/CTlib/CTlib.pm:
The Sybase API knows about 1) synchronous (ie blocking, the default), 2) deferred IO (polling) and 3) async IO (call-back) modes. In synchronous mode, database requests block (see below) until a response is available. In deferred IO mode, database requests return immediately (with a return code of CS_PENDING), and you check for completion with the ct_poll() call. In async IO mode, the API uses a completion callback to notify the application of pending data for async operations.

—– blocking ^ polling in Sybase and other databases, based on http://orafaq.com/maillist/oracle-l/2000/07/19/2071.htm:

Asynchronous connection to a database means that multiple commands can be passed to the database server and processed simultaneously. For example, there is no need to wait for the query to return before sending another command to the database server.

Instead of waiting for a “Finished” message to be sent back from the database server, the calling application sends a query status message to the database server at set times, asking “Are you done yet?” (polling). When the database server says “Yes”, the result packets start coming back.

The client can send 2 signals to the server
* a periodic status check — “is my query completed and ready for download?”
* a cancel

—– polling ^ call-back, based on http://search.cpan.org/~mewp/sybperl-2.18/CTlib/CTlib.pm:

To enable fully async mode, you need to have your completion callback Perl-subroutine registered, and you need to also enable async notifications, as well as setting up a timeout for requests.

Your completion callback perl-subroutine is either invoked when ct_poll() is notified of a request completion, or by the OpenClient API directly in full async mode.

DB view quiz

q: table rows are usually unique in a standard design, but many derived views are not. Example please?
A: if you select just 1 column.

q: why views can’t be indexed like tables?
A: no physical storage of the data

q: name an obvious headache of using “distinct” in a view definition? Name a solution.
A: I think sybase 15 allows it.

j4stored procedures #sybase

Update — in a 2018 MS interview I answered this same question:

Q: stored procedure vs regular query
%%A: input validation esp. for insert/update
%%A: a stored proc can reduce 5,000,000 rows to 2,000 before sending them out over the wire. Contrast a join query (in a proc) vs join-emulation in the application
%%A: regular query can’t create temp tables easily
%%A: data-related business logic can go into the database.
%%A: Say my proc uses the first query’s result in a 2nd query. That’s clumsy with inline queries.
AA: server-side transformation can benefit from cache hits inside the (DB) server.
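
A minimal sketch of the row-reduction point (invented table; the aggregation runs server-side, so only the small result set crosses the wire):

  create procedure top_symbols @min_qty int
  as
  begin
      select sym, sum(qty) as total_qty
      into   #agg                       -- temp table, trivial inside a proc
      from   trades
      group  by sym
      having sum(qty) > @min_qty        -- millions of rows in, a few thousand out

      select * from #agg order by total_qty desc
  end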

There are many advantages to using stored procedures (unfortunately in Sybase they do not handle the text/image types):

    • 1) Modularity (%% #1 object-oriented design principle: isolate changes => reduce impact, reduce regression test, unit test && dependency injection %%) – during application development, the application designer can concentrate on the front-end and the DB designer can concentrate on the ASE.
    • 1b) If query changes often, we hate to recompile and redeploy entire app every time — high risk. Consider extracting the inline queries into a command file, but proc is better.
    • 1c) proc can be unit-tested.
    • 1d) Independent version control.
    • 2) Security (%% consider views %%) – you can revoke access to the base tables and only allow users to access and manipulate the data via the stored procedures. Ensures all writes go through validation.
    • 4) Minimise blocks and deadlocks – it is a lot easier to handle a deadlock if the entire transaction is performed in one database request, also locks will be held for a shorter time, improving concurrency and potentially reducing the number of deadlocks. Further, it is easier to ensure that all tables are accessed in a consistent order (%% 1 of 4 deadlock conditions %%) if code is stored centrally rather than dispersed among a number of apps.
    • 4b) %% In zed I used to do mass delete and mass update, which held the lock forever. Cursor is more “nice” (unix jargon), locking and releasing a subset of rows (or just 1) at a time. Less contention and less deadlocking. %%
    • 3) Performance – stored procedures are parsed and a query plan is compiled. This information is stored in the system tables and it only has to be done once.
    • 3b) Network – if you have users on a WAN (slow connection), stored procedures will improve throughput because fewer bytes (%% ? %%) need to flow down the wire from the client to ASE.

clustered index — when (not) to use

Clustered indexes sort and store the data rows in the table based on their key values. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order.

The index can comprise multiple columns (a composite clustered index).

— 2 + 1 usages of clustered index
* A clustered index is particularly efficient on columns that are often searched for ranges of values. After the row with the first value is found using the clustered index, rows with subsequent indexed values are guaranteed to be physically adjacent (see the sketch after this list).

* Also, if there is a column(s) that is used frequently to sort the data retrieved from a table, it can be advantageous to cluster (physically sort) the table on that column(s) to save the cost of a sort each time the column(s) is queried.

%% this last usage isn’t compelling %% * Clustered indexes are also efficient for finding a specific row when the indexed value is unique. For example, the fastest way to find a particular employee using the unique employee ID column emp_id is to create a clustered index or PRIMARY KEY constraint on the emp_id column. Note PRIMARY KEY constraints create clustered indexes automatically if no clustered index already exists on the table and a nonclustered index is not specified when you create the PRIMARY KEY constraint. If you can’t create a clustered index on emp_id, a clustered index could be created on lname, fname (last name, first name), because employee records are often grouped and queried in this way rather than by employee ID.
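
The range-scan usage, sketched (invented table):

  create clustered index ix_trades_dt on trades(trade_dt)

  -- after the first qualifying row is located, the rest of the range is
  -- physically adjacent: sequential page reads, no random hops
  select * from trades where trade_dt between '2011-01-01' and '2011-01-31'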

–Clustered indexes are not a good choice for the following attributes:
* Columns that undergo frequent changes
* identity columns
* very few distinct values, such as boolean columns

unmatched primary-key is normal in 2nf

In a normalized (2nf) design, given

  Warehouse {id (prikey) , address, manager, established ….}

many tables (Drivers, Customers, Suppliers…) will contain warehouse_id as a forkey. The forkey values can often be a proper subset of the prikey values.

Example: when a brand new warehouse is empty and unused (=> unmatched by any forkey), we still want to keep its address in our database.

In performance tuning, to denormalize “down” from 2NF, you might merge the warehouse table into

  Stock {item, warehouseName, address …} where the composite key is item-warehouseName

I think this reduces data page access to nothing but the Stock data pages, without hitting any Warehouse data pages.

Problem: when you delete the last item from a very, very old warehouse, you lose its address.

partitioned table: sybase^oracle

— based on http://manuals.sybase.com/onlinebooks/group-as/asg1251e/commands/@Generic__BookTextView/34421;hf=0

An unpartitioned table with no clustered-index (%% irrelevant %%) … each insertion into the table uses the last (newest) page of the chain. Adaptive Server holds an exclusive lock on the last page while it inserts the rows, blocking other concurrent transactions from inserting data into the table.

Partitioning a table with the partition clause of the alter table command creates additional page chains. Each chain has its own last page, which can be used for concurrent insert operations. This improves insert performance by reducing page contention.

If the table is spread over multiple physical devices, partitioning also improves insert performance by reducing I/O contention (lower than page-contention) while the server flushes data from cache to disk.
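
The partition clause itself is tiny (invented table; pre-15 ASE syntax per the manual page above):

  alter table trades partition 8   -- 8 page chains => 8 concurrent insertion points
  -- and to undo:
  alter table trades unpartition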

avoid clustered index on identity column

If you do a large number of inserts and you have built your clustered index on an Identity column, you will have major contention and deadlocking problems. This instantly creates a hot spot in your database at the point of the last inserted row, and it causes bad contention when multiple insert requests arrive at once. See the post on [[ batch feature wishlist — integrate ]]. There are 2 solutions, based on http://www.faqs.org/faqs/databases/sybase-faq/part14

Solution #1: create your clustered index on a field that will somewhat randomize the inserts across the physical disk (such as last name, account number, social security number, etc) and then create a non-clustered index based on the identity field that will “cover” any eligible queries.

The drawback here, as pointed out in the Identity Optimization section in more detail, is that clustering on another field doesn’t truly resolve the concurrency issue. The hot spot simply moves from the last data page to the last index page of the Identity column index (no longer clustered).

%%Every insert requires a write to the non-clustered index on the IC. Since the IC increases with every insert, these index writes occur at the highest IC values. Visualize a B-tree index and you’ll see these index writes mostly happen on the right-most leaf node.%%

— based on http://www.4guysfromrolla.com/ASPScripts/PrintPage.asp?REF=%2Fwebtech%2Fsqlguru%2Fq021700-1.shtml

Recall that a clustered index physically sorts the data pages in a table. If you put a clustered index on an IDENTITY column, then all of your inserts will happen on the last (newest) page of the table – and that page is locked for the duration of each insert.

Solution #2: turn on insert row-level locking (in SQL 6.5), which locks only the row being inserted, reducing lock contention. You can also move your clustered index to a different column, thereby scattering the inserts around the table.
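
Solution #1 in DDL form (invented table and columns):

  -- cluster on a field that scatters inserts across the disk:
  create clustered index ix_emp_lname on employees(last_name)
  -- non-clustered index on the identity column, widened to "cover" common queries:
  create nonclustered index ix_emp_id on employees(emp_id, last_name, first_name)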

batch feature wishlist — integrate

Integrate with DB and other external systems

* sensible default DB commit frequency. Batch jobs need a different commit frequency than interactive applications. P505 [[ mysql stored procedures ]]. Without this feature, a novice batch user would commit on every update. Commit frequency should be configurable.
* [x] Expect capabilities — integrate with interactive systems
* [x] ability to restart a remote server
* integration with stored procedures and triggers
* integration with ftp, auto-reconnect
* [x] integration with servlets etc, with support for cookies[x]. batch^interactive? A batch can flood and overwhelm a URL.
* avoid db contention due to concurrent updates to the same “page” ie data block. Spread updates over separate data blocks for parallel updates.
* resilient to network outage. Batch vs interactive?
* integration with MQ, usually as a sender

— infrastructure support
* adjustable load level on external partner systems. Batch jobs can generate higher load than normal on unprepared “trading-partners”. Persistent XML configuration. Ideally, load level is adjustable in real time.
* detect peak hours for a database/URL and back off. For a mission-critical DB, a friend’s website, a shared scarce resource, or a resource-tight website, our batch may need extra intelligence to detect changes to peak hours, instead of a hard-coded schedule. It takes just a single outbreak of flooding (due to a change in the peak-hour schedule) to bring down a site, damage trust, spoil our image, or even generate bad publicity.

sql tuning — pre-sales and post-sales

pre (ie before tuning): keep optimizer mode (optimizer stability?)
pre-pre: choose optimizer mode

pre: decide which sql statements to tune
pre-pre: tune the most-used sql – OR – the most disk-read-heavy sql?
pre-pre: set instance-wide optimizer_mode first before tuning

post (ie after tuning): ongoing ranking of top sql statements to tune

read` DB execution plan

A2: FTS

1st step — get an overview. To reduce confusion, always remove line-wrap and reveal the indentation (Oracle). Like Python(?), indentation indicates flow, including nested flow constructs.

2nd step — get a basic idea of the different table/index access methods. These are the basic vocab of “explain”

q: do you read it inside-out or sequentially?

q2: what’s the most obvious thing to look for in an execution plan?

q: besides the most obvious, what else can you look for? No simple answer yet.
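
In sybase the starting point is usually (real commands, invented query):

  set showplan on
  set statistics io on
  go
  select * from trades where sym = 'IBM'
  go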

2 ways to expire a cache

on-demand invalidation ^ TTL

Articles hint at them, but few list these 2 modes explicitly. This is a simple concept but fundamental to DB tuning and app tuning.

A) TTL — more common. Each “cache item” has a time-to-live, a.k.a. an expiry date.

B) invalidation — some “events” trigger an invalidation. Without invalidation, a cache item would live forever with an infinite TTL, like the list of China provinces.

You can combine A and B. I think a cookie is an example.
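
A DB-flavored sketch of the two modes (invented cache table):

  -- A) TTL: each item carries its own expiry date
  select val from cache_items
  where  k = 'cn_provinces' and expires_at > getdate()

  -- B) invalidation: an event removes (or marks stale) the item on the spot
  delete from cache_items where k = 'cn_provinces'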

3 types of tuning practitioners.

1) A 3-year “experienced coder” may know how to produce solid functionality with good-enough performance most of the time, but knows close to nothing about why his code has this performance. He has less than 1% of the written knowledge on tuning.

2) A “bookish tuning professional” may not know the limits of the written theory. Her theoretical knowledge could be non-trivial, based on many tuning books and online articles.

3) A “questioning practitioner” knows only the essential theory but relies heavily on utilities to /interrogate/ the target system and verify the tuning theories. As a questioning practitioner, he questions the tuning utilities as well. Profiling helps you pinpoint bottlenecks and calculate the tcost (processing time) at each stage.

Look at an abstract/generic transaction-processing system. For each of millions of transactions, the total processing time is a simple sum of the processing times at each stage. This is an extremely simple, well-understood and powerful truth. Every tuning approach relies on this truth. This is one basis of profiling. Common tuning utilities:

EXPLAIN, showplan…
Perl profiling
JVM profiling jconsole, JRA
GC monitor
PHP @@ apache benchmark and P316 [[ programming php ]]