outer table predicate in ON-clause #my take

(INNER table predicate in where-clause vs ON-clause is well covered online and also in my own http://bigblog.tanbin.com/2011/05/whats-different-between-these-2-sqls.html)

Note the intermediate table is unaffected by the where-clause. The query processor always uses the on-clause exclusively to build an intermediate joined table, before applying the where-clause. This is the logical view we can safely assume. Physically, the query processor may optimize away the where/on distinction, but the output is always consistent with the logical view.

Q1: can we _always_ assume that LL-outerjoin-RR intermediate table _always_ includes all LL rows if no outer table predicate in ON-clause?
A: yes with the big “if”
%%A: If you need to filter on the outer table, do it in the where-clause, which is better understood. Avoid the ON-clause.

Q2: Boldly dropping the big “if”, can we _always_ assume that LL-outerjoin-RR intermediate table _always_ includes all LL rows _regardless_ of outer table predicate in ON-clause?
%%A: probably yes.

If you really, really want an outer table predicate in the ON-clause, I assume you have no good reason and just feel adventurous.
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32300.1502/html/sqlug/sqlug169.htm shows that even with an outer table predicate in the ON-clause, we still get “… all LL rows”

select title, titles.title_id, price, au_id
from titles left join titleauthor
on titles.title_id = titleauthor.title_id
and titles.price > $20.00
title                  title_id   price   au_id       
--------------------   ---------  ------  ---------------
The Busy Executive’s   BU1032     19.99   NULL
Cooking with Compute   BU1111     11.95   NULL
You Can Combat Compu   BU2075      2.99   NULL
Straight Talk About    BU7832     19.99   NULL
Silicon Valley Gastro  MC2222     19.99   NULL
The Gourmet Microwave  MC3021      2.99   NULL
The Psychology of Com  MC3026      NULL   NULL
But Is It User Friend  PC1035     22.95   238-95-7766
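To make the logical view concrete, here is a toy Java model (names and data mine, not from the Sybase docs) of LL left-join RR, where the ON predicate alone builds the intermediate table:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BiPredicate;

public class LeftJoinSketch {
    // Logical model of LL left-join RR: the ON predicate alone builds the
    // intermediate table; unmatched LL rows survive with a null RR side.
    static List<String[]> leftJoin(List<String> ll, List<String> rr,
                                   BiPredicate<String, String> on) {
        List<String[]> out = new ArrayList<>();
        for (String l : ll) {
            boolean matched = false;
            for (String r : rr)
                if (on.test(l, r)) { out.add(new String[]{l, r}); matched = true; }
            if (!matched) out.add(new String[]{l, null}); // every LL row appears
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> ll = Arrays.asList("A", "B");
        List<String> rr = Arrays.asList("A");
        // The ON predicate includes an "outer table predicate" (l must be "A"):
        // B still appears in the result, just with a null RR side, matching the
        // titles/titleauthor example.
        List<String[]> joined = leftJoin(ll, rr, (l, r) -> l.equals(r) && l.equals("A"));
        System.out.println(joined.size()); // 2: all LL rows present
    }
}
```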

sybase rand() could return 0 and 1

“The rand function uses the output of a 32-bit pseudo-random integer generator. The integer is divided by the maximum 32-bit integer to give a double value between 0.0 and 1.0”

There’s a non-zero chance of getting the max integer, which gives 1 when divided by max. Same probability for 0/max which gives 0.

convert (int, rand()*3) can return 0, 1, 2 and 3, with a non-zero probability for 3 and equal distribution among 0, 1 and 2. The non-zero chance is something like 2^-32, or 2**(-32) in python syntax.
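Assuming the division works exactly as the quoted passage describes, a small Java model of the edge case:

```java
public class RandEdge {
    // Mimic the quoted behavior: a 32-bit value divided by the 32-bit max,
    // which yields a double in the CLOSED interval [0.0, 1.0].
    static double scaled(long raw32) {
        return (double) raw32 / 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        System.out.println(scaled(0L));          // 0.0 -- rand() can return 0
        System.out.println(scaled(0xFFFFFFFFL)); // 1.0 -- and 1
        // So convert(int, rand()*3) can produce 3, not just 0, 1, 2:
        System.out.println((int) (scaled(0xFFFFFFFFL) * 3)); // 3
        System.out.println((int) (scaled(0xFFFFFFFEL) * 3)); // 2
    }
}
```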

sybase 15 dedicated cpu

In one of my sybase servers with a large database load, we managed to dedicate one of the 16 CPU cores to a specific stored proc, so no other process could use that core.

We also managed to dedicate a processor to a specific JDBC connection.

This let us ensure a high priority task gets enough CPU resource allocation.

What’s the name of the feature?

datachange && auto update-stat in sybase 15

— based on P22 [[ new features guide to ASE 15 ]]

The holy grail — let users determine the objects, schedules and datachange thresholds[1] to automate the stat-update process.

[1] “update only when required”

– datachange is a function used in “select datachange(….”
– datachange function returns a percentage — what percentage of the data changed, due to CUD (create/update/delete).

25G/table – sybase comfort zone

Many people feel Sybase is unsuitable for large tables. How about a 25GB table?

I worked with a few post trading systems (commissions, trade-level revenues, settlement…), where each table’s data occupies 10 – 25GB, with another 10 – 25GB for all indices of that table. Each row is typically 1 – 2KB, fitting within a Sybase data page, so such a table typically holds 10+ million rows.

My Sybase trainer said Sybase is faster than oracle/db2 for small tables below 100mil rows.

My database colleagues also feel Sybase is fast with 10-25GB/table.

OUTPUT param in sybase

Sound bite — You write “OUTPUT” once each in the service proc AND the client proc.

[ Service proc ] CREATE procedure serviceProc (@p1,@p2,… @lastParam int OUTPUT) /* usually last param */
[ Service proc ] /* in the body */ select @lastParam = 123

1) [ client proc ] exec @ret = serviceProc 'valueFor_p1', 'valueFor_p2'…. @someVarDeclaredLocally OUTPUT
2) [ client proc ] exec @ret = serviceProc @p1=..,@p2=…. @lastParam = @someVarDeclaredLocally OUTPUT

Note the strange syntax in @lastParam = @someVarDeclaredLocally — assigning left to right!

Note @someVarDeclaredLocally must be declared locally. @lastParam must NOT, since it’s not a variable at all. It’s a TAG.

In both 1) and 2), you don’t provide a value for @lastParam (like you do @p1), but you specify a L-value variable to RECEIVE the output

how to get java to capture printing from sybase stored proc

In my experience on Wall St, Sybase stored procs can get very complex. A basic technique is the lowly “print”. It beats “select” because under error conditions all selects into a log table are rolled back, while print output still reaches the client.

Sometimes Sybase print output doesn’t get returned to java. For jdbc I had a simple reusable method to while-loop through a series of warnings. Here’s my technique for spring JdbcTemplate. Note the documented “logging all warnings” may not work. If you don’t override handleWarnings() like I did, then all warnings become exceptions, so super.query()’s return value is lost — a real show stopper.

import java.sql.SQLException;
import java.sql.SQLWarning;
import java.sql.Statement;
import java.util.List;

import javax.sql.DataSource;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.SQLWarningException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class GenericProcedureCaller extends JdbcTemplate {
    private static final Log log = LogFactory.getLog(GenericProcedureCaller.class);

    public GenericProcedureCaller(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    protected void handleWarnings(Statement stmt) {
        try {
            super.handleWarnings(stmt);
        } catch (SQLWarningException e) {
            log.info("\t\t v v v output from database server v v v v ");
            SQLWarning warn = e.SQLWarning();
            while (warn != null) {
                log.info(warn);
                warn = warn.getNextWarning();
            }
            log.info("\t\t ^ ^ ^ output from database server ^ ^ ^ ^ ");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Override
    public List query(String sql, RowMapper rowMapper) throws DataAccessException {
        boolean oldSetting = isIgnoreWarnings();
        // set to false to capture "prints" from the proc, but there's a side
        // effect: every warning now surfaces as an SQLWarningException,
        // which handleWarnings() above turns back into log output.
        setIgnoreWarnings(false);
        try {
            return super.query(sql, rowMapper);
        } finally {
            setIgnoreWarnings(oldSetting);
        }
    }
}

verify SingleConnectionDataSource – sybase

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.BeansException;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

/**
 * @author btan
 */
public class SybaseUtil {
    private static final Log log = LogFactory.getLog(SybaseUtil.class.getName());

    static private JdbcTemplate singleConnectionHandle;
    static private JdbcTemplate handle;

    public static void main(String[] args) {
        showServer();
        log("abc 'd' ");
    }

    static public JdbcTemplate getHandle() {
        if (singleConnectionHandle != null && handle != null)
            return handle;
        AbstractApplicationContext context = TradeEngineMain.getContext();
        if (context == null) {
            try {
                context = new ClassPathXmlApplicationContext("applicationContext.xml");
            } catch (BeansException e) {
                e.printStackTrace();
                context = new ClassPathXmlApplicationContext("dataSources.xml");
            }
        }
        DataSource dataSource = (DataSource) context.getBean("mtsDataSource");
        handle = new GenericProcedureCaller(dataSource);

        try {
            dataSource = new SingleConnectionDataSource(dataSource.getConnection(), true);
        } catch (SQLException e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
        singleConnectionHandle = new GenericProcedureCaller(dataSource);
        return handle;
    }

    static public JdbcTemplate getSingleHandle() {
        getHandle();
        return singleConnectionHandle;
    }

    static public String showServer() {
        JdbcTemplate handle = getHandle();
        StringBuilder ret = new StringBuilder("\n\n\t == Sybase Details ==");
        ret.append("\n\t " + handle.queryForObject(
                "select @@servername + ' <-- ' + @@version", String.class));
        ret.append("\n\t " + getConnectionDetails());
        ret.append("\n\t " + getSpidDetails());
        ret.append("\n\n");
        log.debug(ret);
        return ret.toString();
    }

    static private String getSpidDetails() {
        int spid = getSingleHandle().queryForInt("select @@spid");
        String sql = String.format("sp_who '%s'", spid);
        String ret = "spid=" + spid + " ";
        ret += getSingleHandle().queryForObject(sql, new RowMapper() {
            public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                return rs.getString("loginame") + " " + rs.getString("hostname");
            }
        });
        System.err.println(ret);
        return ret;
    }

    static private String getConnectionDetails() {
        Connection conn3 = null;
        try {
            conn3 = getSingleHandle().getDataSource().getConnection();
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
        String ret = System.identityHashCode(conn3) + "=identityHash " + conn3;
        System.err.println(ret);
        return ret;
    }

    static public void log(String s) {
        if (s == null || s.trim().isEmpty())
            return;
        s = s.replaceAll("'", "");
        String insert = String.format("insert snoop (charp1) select '%s'", s);
        getSingleHandle().update(insert);
        showServer();
    }
}



check sybase user connections

The total number of allocated connections is 1700.

1> sp_configure 'number of user connections'
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of user connections              25     1990886        1700         1700 number               dynamic

sybase open client, open server, DB-library

http://manuals.sybase.com/onlinebooks/group-cnarc/cng1110e/ctref/@Generic__BookTextView/292;pt=294#X explains

The Sybase product line includes 1) servers and 2) tools for building servers:

  • SQL Server is a database server. SQL Servers manage information stored in one or more databases.
  • Open Server provides the tools and interfaces needed to create a custom server application.

SQL Server and Open Server applications are similar in some ways …But they also differ:

  • An application programmer must create an Open Server application, using Server-Library's building blocks and supplying custom code. SQL Server is complete and does not require custom code.

Open Client provides two core programming interfaces: Client-Library and DB-Library

  • Open Client Client-Library supports server-managed cursors and other new features that were added to the System 10 product line.
  • Open Client DB-Library provides support for older Open Client applications, and is a completely separate programming interface from Client-Library. DB-Library is documented in the Open Client DB-Library/C Reference Manual.

Client-Library programs also depend on CS-Library, which provides routines that are used in both Client-Library and Server-Library applications.

y sybase still popular on Wall Street

Reason: years ago, sybase made sure it had the fastest insert performance, targeting market data feeds. However, the fastest market data feeds use flat files, according to a 2sigma engineer.

Reason: for smaller databases, sybase outperforms DB2
Reason: oracle is not a friendly company.
Reason: As of 2011, MSSQL is still quite similar to sybase
Reason: bcp is supposed to be the fastest way to store large amounts of data
Reason: mssql is not optimized for unix but most wall st database servers run unix.

A few answers from veterans
1) Miao did data warehousing for more than 10 years. Sybase is the most “memorable” RDBMS, mostly due to performance. Miao summarized “Oracle — marketing, Sybase — performance”. I mentioned that sybase runs on unix and windows — one size fits all, whereas Oracle is mostly used on unix, with more opportunity for customization and optimization. However, Miao shook his head and said sybase performance wins. BCP (“Bulk CoPy”), the sybase bulk loader, was singled out as high performance.

2) my Wall St. DB2/sybase trainer said sybase performs better for small tables (up to 100 mil rows) but favored DB2 for large tables.

3) another Wall St. veteran told me sybase syntax is convenient. Personally, I feel sybase is not inferior to Oracle, and has more convenient syntax.

These are the sybase users I know — GS, MS, ML, Citi, Barc

y sybase still popular on Wall Street, again

Despite many people complaining and predicting the decline of sybase on Wall St, sybase continues to be the database of choice in everyday decisions.

Protection — Sybase was built for Wall Street. It doesn’t have major shortcomings for Wall St.
Protection — sybase is considered robust enough.
Protection — sybase tuning is much easier than Oracle, according to an Oracle employee
Protection: performance — One veteran, a data warehousing expert, found bcp much faster than Oracle or others. My DB2 trainer said for small tables (below 100 mil rows) sybase outperforms Oracle/DB2

Protection: legacy — As explained in my blog, a lot of Wall St banks prefer stored procedures rather than embedded SQL in java/c/c#. They go to extreme lengths to externalize all embedded queries into stored proc. As a Wall St veteran puts it, to convert Sybase stored proc to Oracle is such a big effort it’s like rewriting the app. I feel it’s like rewriting from C to java

Unlike java/dotnet code, SQL is dense in business logic. About 99% of SQL source code is pure business logic, but less than 25% of java code is. To rewrite a single SQL from Sybase syntax to Oracle is often equivalent to rewriting a few java classes: risky, test-intensive, fearful, complicated. Backward compatibility is hard to prove. Even embedded SQL uses Sybase features, so converting to DB2 would require the same backward-compatibility assurance.
 
New applications might use Oracle, DB2 or MSSQL.

sybase – number of user connection

1> sp_configure 'number of user connection'
2> go

 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of user connections              25     1990886        1700         1700 number               dynamic

1> sp_monitorconfig 'number of user connection'
2> go

Usage information at date and time: Dec  2 2010 10:35AM.

 Name                      Num_free    Num_active  Pct_act Max_Used    Num_Reuse
 ------------------------- ----------- ----------- ------- ----------- -----------
 number of user connection         376        1324  77.88         1363           0

 

RE: Dealing with duplicate records in table which has no primary key

-----Original Message-----

Sent: Tuesday, December 07, 2010 11:47 AM

Subject: Dealing with duplicate records in table which has no primary key

 

How to remove duplicate records if table has no primary key

1. rename FOO to FOO_BAK, create FOO as select distinct * from FOO_BAK

 

[Bin] Did you test?

[Bin] It does work. Nice! How about  

 

— Solution 2:

 

Select distinct * into NEW_FOO from FOO

 

Revoke all access from FOO and NEW_FOO. Then create a public view to either point to FOO or NEW_FOO, so we can easily point applications to either of the 2 tables.

 

This preserves the original FOO table as is.

 

— Solution 3:

 

Select * into FOO_BAK from FOO

Truncate table FOO

Insert FOO select distinct * from FOO_BAK

alter table FOO add ident int identity — adding identity column so no more identical rows from now on

 

This avoids sp_rename, a troublesome Sybase system procedure.

sybase c++ driver used on wall street

http://manuals.sybase.com/onlinebooks/group-cnarc/cng1110e/dblib/@Generic__BookView

  • An application can call a stored procedure in two ways: by executing a command buffer containing a Transact-SQL execute statement or by making a remote procedure call (RPC).
  • Remote procedure calls have a few advantages over execute statements:
    • An RPC passes the stored procedure’s parameters in their native datatypes, in contrast to the execute statement, which passes parameters as ASCII characters. Therefore, the RPC method is faster and usually more compact than the execute statement, because it does not require either the application program or the server to convert between native datatypes and their ASCII equivalents.
    • It is simpler and faster to accommodate stored procedure return parameters with an RPC, instead of an execute statement. With an RPC, the return parameters are automatically available to the application. (Note, however, that a return parameter must be specified as such when it is originally added to the RPC via the dbrpcparam routine.) If, on the other hand, a stored procedure is called with an execute statement, the return parameter values are available only if the command batch containing the execute statement uses local variables, not constants, as the return parameters. This involves additional parsing each time the command batch is executed.
  • To make a remote procedure call, first call dbrpcinit to specify the stored procedure that is to be invoked. Then call dbrpcparam once for each of the stored procedure’s parameters. Finally, call dbrpcsend to signify the end of the parameter list. This causes the server to begin executing the specified procedure. You can then call dbsqlok, dbresults, and dbnextrow to process the stored procedure’s results. (Note that you will need to call dbresults multiple times if the stored procedure contains more than one select statement.) After all of the stored procedure’s results have been processed, you can call the routines that process return parameters and status numbers, such as dbretdata and dbretstatus.
  • If the procedure being executed resides on a server other than the one to which the application is directly connected, commands executed within the procedure cannot be rolled back.
  • For an example of a remote procedure call, see Example 8 in the online sample programs.

allpages locking ≠ table-level locking

Nikhil,

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/locking7.htm

Allpages locking locks both data pages and index pages. When a query updates a value in a row in an allpages-locked table, the data page [1] is locked with an exclusive lock. Any index pages affected by the update are also locked with exclusive locks.

[1] not the entire table

nested transactions in sybase

If proc1 does “begin tran; exec proc2”, and proc2 does a “begin tran…”, how many transactions are active at that point? The way to test it in sybase is simple, open a session in sqsh, isql or aqua studio and

insert t … — a table
begin tran tr1
insert t…
commit tr1

— open another session and revoke insert permission

insert t… — causing an error and a rollback

— now see what’s rolled back.
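For reference, ASE tracks nesting with the @@trancount counter: each nested begin tran just increments it, and only the commit that brings it back to zero is real. A toy Java model of that bookkeeping (my own sketch, not server code):

```java
public class TrancountModel {
    int trancount = 0;           // models @@trancount
    boolean committed = false;
    boolean rolledBack = false;

    void beginTran()  { trancount++; }                         // nested begin just increments
    void commitTran() {                                        // only the outermost commit is real
        if (trancount > 0 && --trancount == 0) committed = true;
    }
    void rollback()   { trancount = 0; rolledBack = true; }    // rollback undoes all levels

    public static void main(String[] args) {
        TrancountModel t = new TrancountModel();
        t.beginTran();   // proc1: begin tran
        t.beginTran();   // proc2: begin tran -> @@trancount is 2, still ONE transaction
        t.commitTran();  // proc2's commit only decrements
        System.out.println(t.committed); // false: nothing durable yet
        t.commitTran();  // proc1's commit makes it real
        System.out.println(t.committed); // true
    }
}
```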

sybase if-else

Background: after a few years of usage, i still feel the need for a few simple rules, like the simple rules in “perlreftut”. Here
they are, each in 5 words or less.

* select => parentheses. Any select in an if-condition had better wear a pair of parentheses.

* select => single value. Any select in an if-condition must produce a single value, including count()

* begin. Begin-end is harmless. It may increase code size a bit but actually often helps readability.

* no slip-through. In an if-else, either the if block or the else block must execute. Nothing (including nulls) will slip through between the 2.

sybase bcp and identity column

bcp and identity columns are reliable, proven features, but together they surprised me — if I bcp out a table to a file, wipe out the table, then bcp in, occasionally I see the identity values completely changed. I think it’s because bcp-in ignored the identity values.

Solution1: use a tool like Aqua Studio to generate the insert statements (hopefully not 200,000,000 rows), then enable identity-insert

Solution2: Sybase site says — By default, when you bulk copy data into a table with an IDENTITY column, bcp assigns each row a temporary IDENTITY column value of 0. As it inserts each row into the table, the server assigns it a unique, sequential IDENTITY column value, beginning with the next available value. To enter an explicit IDENTITY column value for each row, specify the -E (UNIX) flag. Refer to the Utility Guide for your platform for more information on bcp options that affect IDENTITY columns.

count(nullable_field) can return 0 – tricky in outer join

Opening quiz: how do you get count() to return 0? I believe most novice SQL developers don’t know.

See the vmware questions in pearl blog. Tested in sybase.

select count(field_allowing_nulls) — would not count the /null-cells/. See P 91 [[ introduction to sql ]]. Note you can think of the table as a spreadsheet with named and numbered columns, numbered rows, and numbered cells.

select count(*), count(1234) — would simply count all rows in the entire table without filtering

This null-behavior is esp. important in outer joins.

Generalize -> Just like count(), many aggregate functions like agg_func(field_allowing_nulls) filter out null values before any calculation. Best understood with avg(), which must discard all null values before computing.
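The null-skipping rule is easy to model outside SQL; a small Java sketch (plain collections, names mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class NullSkippingAggregates {
    public static void main(String[] args) {
        // A nullable column, as a list: two real values, two SQL NULLs.
        List<Integer> col = Arrays.asList(10, null, 30, null);

        long countStar = col.size();                                    // count(*)  -> 4
        long countCol  = col.stream().filter(Objects::nonNull).count(); // count(col) -> 2

        // avg(col) discards NULLs before dividing: (10+30)/2, not /4.
        double avg = col.stream().filter(Objects::nonNull)
                        .mapToInt(Integer::intValue).average().orElse(0);

        System.out.println(countStar + " " + countCol + " " + avg); // 4 2 20.0
    }
}
```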

t-sql: GO

Think of it like this: Cut up your script into multiple files, separated by the “GO” statement. Run each of these files individually, but use the same connection. That’s all “GO” does.

Server never sees GO. GO is a keyword in client apps such as sqsh. I think java/perl apps don’t use GO.

Until you are clear on the fundamentals above, avoid the confusing questions over auto-commit, transaction and GO. I think auto-commit means every GO-batch is a self-contained transaction. If a GO-batch issues a begin tran without ending it, then the server will keep it open.

Another confusing question is GO and stored proc.
* i believe a java/perl app calling a proc won’t use GO at all.
* when sqsh calls the proc, it needs GO to mark end-of-batch
* Most complicated scenario is when you create the proc in sqsh. I usually wrap the create-begin-end piece in one go-batch.
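Since the client, not the server, does the splitting, the mechanics are simple; a tiny Java sketch (my own code, not from any client tool) of how a client cuts a script on GO before sending each batch:

```java
import java.util.ArrayList;
import java.util.List;

public class GoSplitter {
    // Split a T-SQL script the way sqsh/isql do: "GO" on its own line ends a
    // batch. The server itself never sees the word GO.
    static List<String> batches(String script) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (String line : script.split("\n")) {
            if (line.trim().equalsIgnoreCase("go")) {
                if (cur.length() > 0) out.add(cur.toString().trim());
                cur.setLength(0);
            } else {
                cur.append(line).append('\n');
            }
        }
        if (cur.toString().trim().length() > 0) out.add(cur.toString().trim());
        return out;
    }

    public static void main(String[] args) {
        String script = "insert t values (1)\ngo\nbegin tran\ninsert t values (2)\ncommit\ngo\n";
        System.out.println(batches(script).size()); // 2 batches, no "go" inside either
    }
}
```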

modify a check constraint ] sybase

According to my brief inet search, i think you must do a drop-add.

1) sp_helpconstraint table1 — to see the old constraint name
2) alter table table1 drop constraint con1
3) alter table table1 add constraint con2 check (….) — must specify a constraint name [1].

[1] when adding/creating columns, you don’t need to specify a constraint name

Constraints do not apply to the data that already exists in the table at the time the constraint is added.

raiserror, fatal^non-fatal errors ] sybase

http://www.sqlteam.com/article/handling-errors-in-stored-procedures

– fatal errors (caused by one statement) cause the sproc to abort immediately at the offending statement, ignoring subsequent statements. You can’t “handle” or “react to” these errors; the proc aborts before you can even read @@error.

– non-fatal errors let subsequent statements run. These are the only type of errors you can handle.

Segmentation Fault from sqsh

“Segmentation Fault (core dumped)” happened to my sqsh scripts many times without obvious reason. A few causes

Experiment: Put lots of commands after “/* “. These should be ignored. Run the script via sqsh -> OK. Now put a “go” at the very end -> segmentation

Experiment: remove all “/* */” comments on top, and pump into sqsh -> OK

Experiment: remove content between /* */, and pump into sqsh -> OK

Experiment: add the “--”, and pump into sqsh -> OK; remove the “--” ==> segmentation fault

/*
— $Id$
*/

sybase triggers to print debug msg

(“debug msg” to show what’s going on. )

Looks like a jdbc insert might fail due to the debug msg. EVEN if you can keep your debug msg and keep jdbc happy, there would still be a lingering doubt — debug msg is not perfectly compatible with java. In view of the cost of production support, to play safe in a large, fast-paced environment:

Suggestion: enable debug msg during trigger development only, unless you are very sure how to make the debug msg compatible.

This affects sybase, and may affect mssql.

bcp fragility imt INSERT #possibly

Created date default convert(date,getDate()) not null

Such a column is OK with insert, but to satisfy/pacify bcp, you must change datatype to datetime:

Created datetime default convert(date,getDate()) not null

No official explanation. Here’s my hypothesis. bcp, trying to be the fastest, bypasses triggers. And conversions too, perhaps. The explicit convert() is NOT ignored during table creation.

bcp IN

Q: what if a column is an identity?
A: -N

Q: what if i want to insert a null into Col2?
A: put nothing between the field terminators

Q: what if i want my triggers to enforce data integrity?
A: bcp, to enable its maximum speed for loading data, does not fire rules and triggers. http://manuals.sybase.com/onlinebooks/group-as/asg1250e/util/@Generic__BookTextView/6210;pt=1806/*#X describes some solution.

Q: what if i want to use the default column value for Col3?
A: see http://manuals.sybase.com/onlinebooks/group-as/asg1250e/util/@Generic__BookTextView/6210;pt=1806/*#X
A: i couldn’t get it to work. My biggest concern with bcp in-mode is that there’s no usable error message (no error file found) to help debug =} give up on bcp for now. No time

sybase date/time data types

(Based on some online resource) Use datetime, smalldatetime, date, and time to store absolute date and time information. Use timestamp to store binary-type information.

In versions earlier than 12.5.1, only datetime and smalldatetime were available. As of version 12.5.1, date and time are these separate datatypes:

– date
– time
– smalldatetime
– datetime

* [8] datetime columns hold dates between January 1, 1753 and December 31, 9999. datetime values are accurate to 1/300 second on platforms that support this level of granularity. Storage size is 8 bytes: 4 bytes for the number of days since the base date of January 1, 1900 and 4 bytes for the time of day.

* [4] smalldatetime columns hold dates from January 1, 1900 to June 6, 2079, with accuracy to the minute. Its storage size is 4 bytes: 2 bytes for the number of days after January 1, 1900, and 2 bytes for the number of minutes after midnight.

* [4] date columns hold dates from January 1, 0001 to December 31, 9999. Storage size is 4 bytes.

* [4] many wall street systems use int to represent dates.

* time is between 00:00:00:000 and 23:59:59:999. You can use either military time, or 12AM for midnight and 12PM for noon. A time value must contain either a colon or the AM or PM signifier. AM or PM may be in either uppercase or lowercase.
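A quick Java sanity check on the smalldatetime bounds above (my own arithmetic, not from the manual): the day count from the 1900-01-01 base to the 2079-06-06 ceiling exactly fills a 2-byte field, and minutes-after-midnight fits in 2 bytes too.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class SmalldatetimeRange {
    public static void main(String[] args) {
        long days = ChronoUnit.DAYS.between(
                LocalDate.of(1900, 1, 1),   // smalldatetime base date
                LocalDate.of(2079, 6, 6));  // smalldatetime ceiling
        System.out.println(days);           // 65535 == 2^16 - 1, max unsigned short

        long minutesPerDay = 24 * 60;       // minutes after midnight: 1440 < 65536
        System.out.println(minutesPerDay);
    }
}
```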

When entering date and time information, always enclose the time or date in single or double quotes.

If you pass getDate() into a Date column, you lose the time portion.
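The same trap has a client-side analogue in Java (my own illustration): converting a full timestamp to a date-only value discards the time-of-day, just as a date column does.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

public class DateTruncation {
    public static void main(String[] args) {
        // A full timestamp, like getDate() produces on the server...
        LocalDateTime now = LocalDateTime.of(2011, 12, 7, 11, 47, 30);
        // ...squeezed into a date-only value: the time portion is discarded.
        LocalDate dateOnly = now.toLocalDate();
        System.out.println(dateOnly); // 2011-12-07
    }
}
```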

input-validation triggers — sybase

A few websites say that “at the time of writing, sybase has no before-trigger”, unlike Oracle, mysql or DB2.

Solution? You know the new values are already in the table, but you can UPDATE those new values and commit, or issue an explicit rollback. This would ensure only valid data get committed.

sybase dbo and ownerships

based on http://www.benslade.com/tech/OldIntroToSybase/security.html

Database tables, stored procedures, etc. are owned by users in a database — regular developer accounts or special accounts. Full syntax for referring to a table, view, or stored procedure is [1]:

database.owner.object

There is a special (different from regular login accounts) user in each database called the database owner or “dbo”. The dbo has all privileges within her own database. Each database has one and only one “dbo” user.

The database owner (dbo) can also own objects. In most production environments, the dbo (rather than developer users) owns the tables and stored procedures in a database. This is to preempt developer resignation (or passing away:(

[1] I think in many systems “owner” is omitted cos there’s no other owner besides dbo. When the owner is omitted, I believe the server tries the current session’s user name first, with dbo as the second choice.

sybase c++ self-quiz

See http://manuals.sybase.com/onlinebooks/group-cnarc/cng1110e/dblib/@Generic__BookTextView/503;pt=505#X

Q:2 standard header files?
A: 1) sybfront.h, 2) sybdb.h, as seen in MTS include files

Q: where are the custom structs defined?
A: sybfront.h — also includes type definitions for datatypes that can be used in program variable declarations

Q: what must be the first DB-Library/C routine in the program?
A: dbinit() – This routine initializes DB-Library/C. It must be the first DB-Library/C routine in the program

Q: how is the “dbproc” object set up?
A: dbinit -} dblogin -} dbopen
Let’s look at dbopen() – The dbopen routine opens a connection between the application and a server and creates the dbproc object. It uses the LOGINREC supplied by dblogin to log in to the server. It returns a DBPROCESS (dbproc) structure, which serves as the conduit (or handle/connection/session) for information between the application and the server. After this routine has been called, the application is connected with SQL Server and can now send Transact-SQL commands to SQL Server and process any results.

Q: dbcmd() vs dbsqlexec()?
A: buffer building vs GO

sybase tempdb basics

1984 — first introduced. 23 years. mature.

duration — a tempdb can exist for
* one sql
* part of an sql
* a user session
* multiple sessions

There’s one default temporary database, known as “tempdb”. ASE-15 supports multiple additional temporary databases. J4? Performance.

Lock contention, specifically.

choosing ASA

0.3 What applications is ASA good for?

1) A major area for ASA is applications that need to distribute a database with the application as a general storage area for internal components, where the database is not a major part of the deliverable.

2) ASA has excellent ODBC support, which makes it very attractive to tools oriented towards ODBC.


0.4 When would I choose ASA over ASE?

    • Ease of administration,e.g., self-tuning optimizer, db file is an OS file (not partition).
    • Lower footprint – runs on “smaller” machines.
    • Lower cost, ASA is definitely cheaper than ASE on the same platform.

sub-query as column expression (sybase)

— sybase
my $subselect = qq[ select min(ApprovalDate) from MtgAccts where …];
my $sql = "select *, convert(varchar, Day1ofMonth, 112) as 'numeric_month', ($subselect) as 'approval_asof'…

http://sqlzoo.net/1.htm (tested on Oracle, MSSQL, Mysql)

* To list number of population “superpowers” by region

SELECT region, count(*),
(select count(*) from bbc a where population > 100123000 and a.region=b.region) as superpowers
FROM bbc b group by region

sybperl ^ DBD::Sybase

features^ease — my short comparison of the 2, based on hearsay. In other words, sybperl exposes more sybase features but DBD::Sybase is easier to learn since it’s standard DBI/DBD. How easy? Nothing more than a change from “dbi:Oracle:..” to “dbi:Sybase:…”.

Other aspects?

* ease of use@@ Sybase::Simple is part of sybperl, but can’t reach the ease of DBD::Sybase
* Bugs and stability@@ I think by now DBD::sybase is mature and stable, but presumably less battle-tested than sybperl for certain rarely used features.
* performance@@ no idea