Thursday, 6 March 2025

Notes

Reporting: SQL Performance and Tuning

This is a list of some techniques used successfully at several different sites.



Getting Maximum Index Usage

1) Verify tables have had "update statistics" run on them, and
have had "sp_recompile" run on them.

2) Verify any declared variables have the same data
type as their corresponding columns - this is a common
pitfall.

3) Force index usage as follows, with a hint:
from customer (index idx_customer2)

4) Use SET TABLE COUNT
Example: set table count 6
Then compile the procedure in the same session.

5) If temp tables are being used, put the temp table
creation statements in one procedure, and the
processing SQL in another procedure. This allows
the optimizer to form a query plan on the already
established tables (see the sketch after this list).
Example:
proc_driver calls proc_create_temp_tables
then,
proc_driver calls proc_generate_report
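
A minimal sketch of this pattern, with hypothetical names (proc_generate_report,
proc_driver, #rpt_data, employee). Note that ASE drops a #temp table when the
procedure that created it returns, so in this sketch the driver both creates and
populates the table; the report procedure is then compiled on first execution
against an already established table:

-- create the temp table in the session first, so the report proc compiles
create table #rpt_data (
    emp_id int   not null,   -- assumed datatypes
    salary money not null
)
go

create proc proc_generate_report as
    -- compiled at first execution, against an already established table
    select emp_id, sum(salary) 'total_salary'
    from #rpt_data
    group by emp_id
return
go

drop table #rpt_data
go

create proc proc_driver as
    -- temp table creation kept separate from the processing SQL
    create table #rpt_data (
        emp_id int   not null,
        salary money not null
    )
    insert #rpt_data
    select emp_id, salary from employee
    exec proc_generate_report
return
go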




General SQL Programming

- Plan for growth. Assume the driver table doubled or tripled in size; would
the report still function?

- Avoid dumb comparisons in the where clause, like
where @emp_id > 0 (see the example after this list)

- use "WHERE EXISTS ( )" rather than "WHERE NOT EXISTS"

- use "!=" rather than "<>"

- use "IS NOT NULL" rather than "<>NULL"

- use "IS NULL" rather than "=NULL"

- avoid distinct if possible; see the cursor loop option below

- use meaningful names for temp tables ... don't use #temp (lame)
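
A quick before/after illustrating a couple of the rules above, on a hypothetical
employee table:

-- avoid: a no-op variable test, and "=NULL" instead of "is null"
select emp_id, last_name
from employee
where @emp_id > 0
and term_date = null

-- prefer:
select emp_id, last_name
from employee
where term_date is null
go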


Report Structure Approaches

1) Single query

Single query reports are rare - usually they involve getting a simple list
together.

- Don't try to 'shoehorn' SQL into one statement. Shorter programs are
great for C or Perl applications, but this is not the case in SQL.
Think "Bigger is Better" (and more maintainable).

- Keep queries from using more than four tables if possible.



2) Cursor on driver table(s), with IF..THEN processing in loop

Using a cursor for complex reports almost always increases performance
when large tables and a lot of joins are involved.

- Keep cursor queries from using more than two tables if possible, and
make sure the cursor query performs well on its own.

- Try to have a unique key of some sort available within the tables involved.
Strange results have been known to occur when a cursor is scanning
rows that are exactly alike.

- Don't use cursors for updating.

- Use IF statements for filtering results even further. In most cases,
a code construct like the one below is better than cramming the
logic into a where clause.


IF <cond-a>
BEGIN

    IF <cond-b> and <cond-c>
        .....
    ELSE
        ....

END


3) Set processing without cursors

This technique should be attempted when even a cursor construct fails to
achieve the desired performance.

Basically, the driver query is re-run with each iteration of the loop.

Sample, with cursor:

declare @emp_id int, @last_name varchar(40), @salary money   -- assumed datatypes

declare cursor1 cursor for
select emp_id, last_name, salary
from employee

open cursor1

fetch cursor1 into @emp_id, @last_name, @salary

while (@@sqlstatus = 0)
begin

    < processing >

    fetch cursor1 into @emp_id, @last_name, @salary
end

close cursor1
deallocate cursor cursor1


Sample, with set processing:


declare @emp_id int, @last_name varchar(40),
        @salary money, @loop int   -- assumed datatypes

select @emp_id = 0, @loop = 1

while (@loop > 0)
begin

    set rowcount 1

    -- grab the next row, in emp_id order
    select
        @emp_id = emp_id,
        @last_name = last_name,
        @salary = salary
    from employee
    where emp_id > @emp_id
    order by 1

    select @loop = @@rowcount

    set rowcount 0

    if @loop > 0
    begin

        < processing >

    end

end


Transaction Log Filling Up ?

If the transaction log is filling up, for tempdb or the main database, there
is likely something wrong with the report logic.

Things to check:

- Instead of repetitively updating each row, can the values be obtained
ahead of time, and then inserted in a single transaction?

- Are the "joined" updates hitting each row exactly once? When updating
using a join statement, make sure that the tables in question
are joined in a way that avoids duplicate rows. Try running the
SQL statement as a SELECT first - check it out.

- Are you cramming 500,000 rows from a temp table into a db table?
Try eliminating the temp table.

- Create indexes on updated/inserted tables after the fact.

- Use "set rowcount" along with "waitfor delay" if log problems persist;
a sketch follows below.


*** A proper DBA will never extend the log segment based on the needs of a single process.

Forcing an index in a query

Sometimes it is necessary to force an index when the optimizer has incorrect statistics.


-- Example: Force the idx_emp index in this query


select t1.emp_id, t1.emp_name, t2.status
from employee t1 (index idx_emp),
audit_flags t2
where t2.emp_id = t1.emp_id
go

Isolation Levels

Setting the isolation level can eliminate contention problems when reports
are run on the same database as the online application.

Three flavors to choose from, for troublesome reports, queries, and updates:


1)
select cus_id
from customer_location
where cus_id < 1000000
at isolation read uncommitted

-- > Allows the table to be read (a la Oracle) even when update page locks are pending.


2)
select cus_id
from customer_location noholdlock
where cus_id < 1000000

-- > Allows big queries to run without locking pages / tables.


3)
/* For updates: */


-- Session #1

begin transaction

update invoice_item
set discount_amt = 0
where invoice_id < 2000000

commit transaction

-- Session #2

set transaction isolation level 0

Queries against invoice_item in session #2 will NOT be blocked, even before the commit occurs.
Updates in session #2 that touch pages locked by the session #1 transaction WILL be blocked until the commit occurs.

*** Mixing 1 & 2 from above is not recommended.


Transact SQL: Finding duplicate rows in a table

     



This example finds cargo records that have duplicate destination ids.


select cargo_id, dest_id
from routing t1
where
( select count(*)
from routing t2
where t2.dest_id = t1.dest_id ) > 1
go
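
An equivalent set-based form, often cheaper than the correlated subquery,
lists each duplicated dest_id once with its count:

select dest_id, count(*) 'dupes'
from routing
group by dest_id
having count(*) > 1
go
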
Using Temporary Tables

     




Temp tables allow developers to create and scan tables within a stored procedure - and
have the tables totally isolated from all other database connections. This is very
valuable when results need to be processed several times within a loop, or when
a complex result set is expected (like a crosstab). Note that temp table transactions
are logged within tempdb (exception: select into statements, which are minimally logged).



create proc proc_gen_report (@region_id integer) as

declare @total money


/* standard create */

create table #rpt (
store_id integer not null,
store_cd char(5) not null,
inv_count integer not null,
total_sales money not null
)


/* create using select into - make sure 'select into' is turned on */

select t1.cus_id, t1.cus_name, sum(t2.inv_amount) 'inv_summary'
into #cus_invoices
from customer t1, invoice t2
where t2.cus_id = t1.cus_id
group by t1.cus_id, t1.cus_name


/* Processing occurs, using temp table(s) where needed. */
/* Temp tables can be used in joins, aggregates, updates, etc. */


drop table #rpt
drop table #cus_invoices

return
go
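
Calling the procedure is then just (the region id value here is hypothetical):

exec proc_gen_report @region_id = 5
go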


Safely delete a large number of rows without blowing the t-log

It is key to monitor the transaction log during large deletes.


Controlled delete, for large data sets, allows transaction log to clear.



The syntax for this operation is different for ASE 15 ... be sure to comment
out the 12.X code if you are using ASE 15.X


--This example: 9000 rows in a batch, transaction log
--needs to clear after 200mb is allocated between batches

select @@servername, db_name()
go


set nocount on
go

set rowcount 9000
go
declare @rows integer,
        @trows integer,
        @mb money
select @rows = 1, @trows = 0, @mb = 0
while @rows > 0
begin
    delete from invoice_history
    select @rows = @@rowcount
    select @trows = @trows + @rows
    waitfor delay '00:00:02'

    -- ASE 12.X (object id 8 = syslogs, the transaction log)
    select @mb = ( data_pgs (8, doampg) ) * (@@maxpagesize/1000.0) / 1000
    from sysindexes where id = 8

    -- ASE 15.X
    select @mb = ( ( data_pages (db_id(), 8, doampg) ) * (@@maxpagesize/1000.0) / 1000.0)
    from sysindexes where id = 8

    select @trows 'rows deleted', @mb 'tLog MB'

    while @mb > 200
    begin
        waitfor delay "00:02:00"

        -- ASE 12.X
        select @mb = ( data_pgs (8, doampg) ) * (@@maxpagesize/1000.0) / 1000
        from sysindexes where id = 8

        -- ASE 15.X
        select @mb = ( ( data_pages (db_id(), 8, doampg) ) * (@@maxpagesize/1000.0) / 1000.0)
        from sysindexes where id = 8

        select @mb "trans log MB"
    end

end
go


select 'Table is now empty:', 'invoice_history', count(1) from invoice_history
go


Running SQL within a script

This script accepts a Sybase command as a parameter and executes it.

#!/usr/bin/ksh
#------------------------------------------------------
# File: sybexec
# Process Sybase command, output goes to std output
# Parameter: SQL command, in quotes
#
# Sample call: sybexec "sp_helpdb billing_db"
#------------------------------------------------------
intfile=/apps/sybase/interfaces

eval /apps/sybase/bin/isql -Sserver -I$intfile -Ujsmith -Pyankees << finis

$1
go

finis
Shared memory setting



Below are examples of setting shared memory on Linux for Sybase.
For Solaris, you need to modify /etc/system (see the sketch below).

echo 134217728 > /proc/sys/kernel/shmmax     # 128 MB

echo 999999999 > /proc/sys/kernel/shmmax     # ~1 GB
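
On older Solaris releases the equivalent /etc/system tunable is
shmsys:shminfo_shmmax (the 1 GB value below is just an example; a reboot
is required):

set shmsys:shminfo_shmmax=1073741824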




PT Notes

Deferred Name Resolution


Symptom

- Deferred Name Resolution turned on
- Stored procedure created with dynamic SQL which includes a reference to a #temp table
- Error 208 is observed:

Msg 208, Level 16, State 1:
Server 'myserver', Procedure 'testproc', Line 13:
#tempsysobjects not found. Specify owner.objectname or use sp_help to check
whether the object exists (sp_help may produce lots of output).

Environment

Sybase Adaptive Server Enterprise (ASE) 15.5
SAP Adaptive Server Enterprise (ASE) 15.7
SAP Adaptive Server Enterprise (ASE) 16.0
Reproducing the Issue

1. Configure deferred name resolution:

sp_configure 'deferred name resolution', 1
go

OR

set deferred_name_resolution on
go

2. Create a procedure with dynamic SQL that creates a #temp table:


create proc testproc as

declare @SQLStatement varchar(6000)

select @SQLStatement =
    'select * ' +
    'into #tempsysobjects ' +
    'from master..sysobjects'

exec (@SQLStatement)

select top 1 * from #tempsysobjects
go

3. Execute the procedure:

exec testproc
go

4. Turn off deferred name resolution:

sp_configure 'deferred name resolution', 0
go

OR

set deferred_name_resolution off
go
Cause

With deferred name resolution enabled, name resolution for the procedure is
postponed until it is first executed. At that point the entire procedure is
resolved before the dynamic SQL runs, so #tempsysobjects does not yet exist
and error 208 is raised.


sp_depends shows a missing stored procedure / trigger after ddlgen

Symptom
- From the source server, use ddlgen to generate DDL for the database
- On the target server, load the DDL that was generated from the source server, run sp_depends on an object, and find a missing stored procedure / trigger
- Msg 208, Level 16, State 1:

Environment

SAP Adaptive Server Enterprise (ASE)
SAP Adaptive Server Enterprise For Business Suite
SAP Adaptive Server Enterprise Cluster Edition
Sybase Adaptive Server Enterprise (ASE)
Sybase Adaptive Server Enterprise Cluster Edition
Reproducing the Issue

Here are two different scenarios for how this may happen:

1. Using ddlgen:

From the source server, run sp_depends on table "titles"; it shows 3 stored procedures, 2 triggers, and 1 view:

isql -Usa -Ppassword -Sase_server
use pubs2
go
sp_depends titles
go
Things inside the current database that reference the object.

object                type
--------------------- ----------------
dbo.deltitle          trigger
dbo.history_proc      stored procedure
dbo.title_proc        stored procedure
dbo.titleid_proc      stored procedure
dbo.titleview         view
dbo.totalsales_trig   trigger

....

Run ddlgen from the source server to generate DDL for this database:

ddlgen -Usa -Ppassword -Sase_server -TDB -Npubs2 -opubs2.ddl

Load the DDL into the target server to create the same objects there:

isql -Usa -Ppassword -Sase_server2 -ipubs2.ddl

On the target server, run sp_depends on table "titles"; it shows 3 stored procedures, 1 trigger, and 1 view - the trigger "totalsales_trig" is missing on the target server:

isql -Usa -Ppassword -Sase_server2
use pubs2
go
sp_depends titles
go

Things inside the current database that reference the object.

object                type
--------------------- ----------------
dbo.deltitle          trigger
dbo.history_proc      stored procedure
dbo.title_proc        stored procedure
dbo.titleid_proc      stored procedure
dbo.titleview         view

...
2. sysdepends loses dependency information when objects are dropped:

Create a table and a procedure that depends on it:

create table t1 (x int)
go
create procedure p1 as select * from t1
go
select * from sysdepends where depid = object_id("t1")
go

id          number  depid       depnumber  status  selall  resultobj  readobj  columns
----------- ------- ----------- ---------- ------- ------- ---------- -------- --------
431691137   1       415691080   0          1       1       0          0        0x02

(1 row affected)

Drop and recreate the table t1:

drop table t1
go
create table t1 (x int)
go
select * from sysdepends where depid = object_id("t1")
go

id          number  depid       depnumber  status  selall  resultobj  readobj  columns
----------- ------- ----------- ---------- ------- ------- ---------- -------- --------

(0 rows affected)
Cause

sysdepends assumes the dependencies are created in the right order.
If the objects were created out of order (for example, a trigger that uses a table is created before the table, or a procedure that uses a view is created before the view), no rows exist in sysdepends for those dependencies.
Looking at the output from loading the DDL script into the target server, an error is reported when creating trigger "totalsales_trig": table "titles" is not found (it has not been created yet):
<<<<< CREATING Trigger - "pubs2.dbo.totalsales_trig" >>>>>
Msg 208, Level 16, State 1:

Server 'sp110', Procedure 'totalsales_trig', Line 16:

titles not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
Msg 208, Level 16, State 1:

Server 'sp110', Procedure 'totalsales_trig', Line 23:

titles not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
Resolution

- For the ddlgen issue: recreate the missing object(s) on the target server.
- For dropped objects: drop and recreate the dependent stored procedure / trigger.

Keywords

sp_depends sysdepends sybmigrate ddlgen

## Auto Start/Stop Sybase server @ OS Level #######

See also:
https://sap-database.blogspot.com/2017/05/startup-script-to-run-sybase-ase.html

----------------------------------------------------- working code, with root privilege -----

cd /etc/systemd/system/
vi sybase.service

[Unit]
Description=Sybase ASE Server 16.0
After=network.target

[Service]
Environment=PW="Password" DS=sybase_ASE
WorkingDirectory=/apps/sybase/server/16.04/ASE-16_0/install
Type=forking

ExecStart=/sbin/runuser sybase -c ". /apps/sybase/server/16.04/SYBASE.sh; /apps/sybase/server/16.04/ASE-16_0/bin/startserver -f RUN_${DS} -f RUN_${DS}_BS"
ExecStop= /sbin/runuser sybase -c ". /apps/sybase/server/16.04/SYBASE.sh; printf 'shutdown\ngo' | isql -Usa -P$PW -S$DS"

[Install]
WantedBy=multi-user.target
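Then register and start the service with the standard systemd commands:

systemctl daemon-reload
systemctl enable sybase.service
systemctl start sybase.service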

########### SQL FRAME COMMAND ###################

-- Generate a "use <db> / sp_changedbowner" frame for each user database:
select 'use', name, char(10), 'go', char(10), 'sp_changedbowner', name, 'true', char(10), 'go'
from sysdatabases
where name not in ('sybsystemdb','sybsystemprocs','master','model','tempdb','sybsecurity')
order by suid
go


################# OS level COMMANDS #####################

sybase> lscpu | grep -i "cpu(s):"
sybase> cat /proc/cpuinfo | grep "processor" | wc -l
4

sybase> cat /proc/meminfo | grep "MemTotal"
MemTotal:        7878772 kB
sybase>
############### user permission issue ###################

use user_db
go
-- move all of userID's objects to dbo ownership
alter all userID.* modify owner dbo
go
-- drop the user, then alias the login to dbo
sp_dropuser userID
go
sp_addalias userID, dbo
go
-- verify
sp_helpuser userID
go
####### Script for DB CREATE / LOAD ##### edit and use as needed ##############


#!/bin/bash

# Sybase server details
SERVER_NAME="sybase_ASE"
#USERNAME="sa_id"
#PASSWORD="sa_pwd"

# Get parameters from user
DB_NAME=$1
#DATA_SIZE=$2
#LOG_SIZE=$3
USERNAME=$2
PASSWORD=$3
PATH_NAME=$4
# Create device names
#DEVICE_NAME="${DB_NAME}D1"
#LOG_DEVICE_NAME="${DB_NAME}L1"

# Create physical device names
#PHYSICAL_DATA_DEVICE_NAME="/apps/sybase/data05/userdata/${DEVICE_NAME}.dat"
#PHYSICAL_LOG_DEVICE_NAME="/apps/sybase/data04/userlog/${LOG_DEVICE_NAME}.dat"

# Create device command
#CREATE_DEVICE_CMD="disk init name = '${DEVICE_NAME}', physname = '${PHYSICAL_DATA_DEVICE_NAME}', size = '${DATA_SIZE}'"
#CREATE_LOG_DEVICE_CMD="disk init name = '${LOG_DEVICE_NAME}', physname = '${PHYSICAL_LOG_DEVICE_NAME}', size = '${LOG_SIZE}'"

# Create database command
#CREATE_DB_CMD="create database ${DB_NAME} on ${DEVICE_NAME} = '${DATA_SIZE}' log on ${LOG_DEVICE_NAME} = '${LOG_SIZE}'"

# load database command
LOAD_DB_CMD="load database ${DB_NAME} from '$PATH_NAME/${DB_NAME}.dat'"

# online database command
online_database=" online database ${DB_NAME}"

# Execute commands using isql.
# Note: keep the commented-out creation steps outside the here-document;
# lines beginning with "#" inside it would be sent to isql and break the batch.
isql -U $USERNAME -P $PASSWORD -S $SERVER_NAME -w 1000 << EOF
$LOAD_DB_CMD
go
$online_database
go
EOF
#############################################################

 

Display Locks on Tables

This procedure will display the names of the locked tables
(shared, update, etc.) across databases.

create procedure sp_locks
@spid1 int = NULL, /* server process id to check for locks */
@spid2 int = NULL /* other process id to check for locks */
as

declare @length int
declare @msg varchar(250)



if @@trancount = 0
begin
set transaction isolation level 1
set chained off
end

/* Print warning message about cursor lock info:
** 18052, "The class column will display the cursor name for locks
** associated with a cursor for the current user and the cursor id
** for other users."
*/
exec sp_getmessage 18052, @msg out
print @msg

/*
** Show the locks for both parameters.
*/
if @spid1 is not NULL
begin
select @length = max(datalength(db_name(dbid)))
from master..syslocks
where spid in (@spid1, @spid2)

if (@length > 15)

select spid, locktype = name, table_name = object_name(id, dbid), page,
dbname = db_name(dbid), class
from master..syslocks l, master..spt_values v
where l.type = v.number
and v.type = "L"
and spid in (@spid1, @spid2)
else
select spid, locktype = name, table_name = object_name(id, dbid), page,
dbname = convert(char(15), db_name(dbid)), class
from master..syslocks l,
master..spt_values v
where l.type = v.number
and v.type = "L"
and spid in (@spid1, @spid2)
end

/*
** No parameters, so show all the locks.
*/
else
begin
select @length = max(datalength(db_name(dbid)))
from master..syslocks

if (@length > 15)
select spid, locktype = name, table_name = object_name(id, dbid), page,
dbname = db_name(dbid), class
from master..syslocks l, master..spt_values v
where l.type = v.number
and v.type = "L"
order by spid, dbname, table_name, locktype, page

else
select spid, locktype = name, table_name = object_name(id, dbid), page,
dbname = convert(char(15), db_name(dbid)), class
from master..syslocks l, master..spt_values v
where l.type = v.number
and v.type = "L"
order by spid, dbname, table_name, locktype, page
end
return (0)
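
Typical calls, once the procedure is installed (sp_ procedures normally live
in sybsystemprocs):

-- all locks on the server
sp_locks
go

-- locks held by spids 12 and 14 only
sp_locks 12, 14
go
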
Sybase ASE: Setting up user assigned custom tempdbs

Sybase ASE allows users to be assigned specific tempdb databases - this allows for
better process segregation and contention reduction.

-- Create a custom tempdb; assign user jsmith and the application cognos to it

create temporary database tempdb_app
on tmp004 = "4G",
tmp005 = "4G"
log on tmp006 = "4G"
go

sp_tempdb 'bind','lg','jsmith','db','tempdb_app'
go
sp_tempdb 'bind','app','cognos','db','tempdb_app'
go

-- Show the assignments

sp_tempdb 'show'
go
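
To remove a binding later, sp_tempdb also accepts 'unbind' (shown here for the
login binding above):

sp_tempdb 'unbind', 'lg', 'jsmith'
go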