Tuesday 22 November 2011

How much memory is actually available for ASE?

The following is a way to determine the actual memory available for ASE:

• Determine the total physical memory available on the machine.
• Subtract the memory required for the operating system.
• Subtract the memory required for other software such as Backup Server, Monitor Server and other Adaptive Server software.
• Subtract the memory required for any third-party software installed on the machine.
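As a worked example with illustrative numbers: on a host with 32 GB of physical RAM, reserving 4 GB for the operating system, 1 GB for Backup Server and Monitor Server, and 1 GB for third-party agents leaves roughly 26 GB for ASE. That figure would then be given to ASE via 'max memory', which (a point worth remembering) is expressed in 2K units:

-- 26 GB = 26 * 1024 * 1024 KB / 2 = 13631488 2K units (numbers are illustrative)
sp_configure 'max memory', 13631488
go
-- verify the setting
sp_configure 'max memory'
go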

Tuesday 15 November 2011

INTRODUCTION TO ASE 15.7

The market appeal and reach of Sybase ASE has dramatically increased with the 2010 acquisition of Sybase by SAP, the leader in business applications. SAP Business Suite applications, as well as almost all SAP applications running on SAP NetWeaver technology, now run on Sybase ASE. These are indeed exciting times for Sybase ASE.
Sybase ASE 15.7 is the latest release in the ASE 15x product line; and it’s a feature-packed release. This document provides an overview of some of these new capabilities, and the benefits customers can expect from moving to ASE 15.7.
ASE 15.7 offers important enhancements for:





• Optimizing for Storage
• Application Developer Productivity
• Performance
• Enhanced Diagnostics & Monitoring
• Security Administration

OPTIMIZED FOR STORAGE
Data growth is a serious challenge for enterprises. It can be attributed to a number of reasons: regulations requiring longer data retention, increased number of transactions, growth in unstructured data (images, text, audio and video), or the need to maintain data for longer periods of time for decision support systems. While disk itself has become cheaper, the growth in data continues to cause serious challenges for data centers.
The cost associated with data is not singular and limited to a single copy of the data. On the contrary, one unit of data on average translates into a minimum of 3-5 copies of it within the data center itself, as well as additional copies beyond the data center. Copies of the production data exist in development and testing systems, back-up and archived systems, secondary decision support systems as well as offsite backup and disaster recovery systems. Storage costs quickly get compounded when hardware costs, software license costs, data center floor space and power consumption costs, data transfer and bandwidth costs, and labor costs are thrown into the overall cost of maintaining this data. Clearly the oft-heard refrain "disk is cheap" can only be stretched so far before costs start piling up.
And this does not even consider performance costs due to I/O demands and cache memory requirements when processing this data!
ASE 15.7 introduces the following to help reduce storage requirements:

• Compression: regular data and LOBs
• In-row LOBs
• Creation of non-materialized non-NULL columns without a data copy
• Ability to shrink the transaction log

Compression

In prior releases, ASE customers had the ability to compress backups, helping reduce offline storage costs. ASE 15.7 now allows in-database compression for active data sets. Both regular relational data and unstructured data (large objects, or LOBs) can be compressed. Early internal benchmarks indicate that compression ratios can readily reach 50-80%.
ASE 15.7 uses a number of compression strategies to achieve high compression ratios. These include compression within a single row to compress away empty space/zeroes in fixed-length columns. At the page/block level, both page-dictionary and page-index compression strategies are used: repeated data items, as well as repeated sets of data items, are replaced by a single reference, resulting in dramatic savings for duplicated data. Data can be compressed at the database, table or partition level, which allows customers to make separate compression choices for active and inactive data.
ASE 15.7 also supports in-database LOB compression. Given that LOBs can be very large (up to 2 GB), compression can yield very significant space savings. Two lossless compression algorithms are supported, FastLZ and ZLib: the former has lower CPU usage and execution times, while the latter achieves higher compression ratios.
Customers have the flexibility to choose LOB compression at database, table or column level.
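As a sketch of what the DDL looks like (table and column names are hypothetical; verify the exact clause spellings and compression levels against the 15.7 reference manual):

-- server-level switch (assumption: compression is licensed and enabled this way)
sp_configure 'enable compression', 1
go
-- page-level compression for regular data; 'row' is the other choice
create table orders
(order_id int      not null,
 status   char(20) not null)
with compression = page
go
-- LOB compression on a text column; the level value trades CPU against ratio
create table order_notes
(order_id int not null,
 note     text null)
with lob_compression = 5
go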

In-Row LOBs

Modern applications increasingly need to manage and manipulate unstructured data, stored as LOB columns (e.g. text, image, unitext, XML). LOBs can vary in size from a few bytes to gigabytes. Historically, ASE optimized LOB storage by allocating separate storage pages pointed to from the parent data row, which is very efficient for larger LOBs. ASE 15.7 enhances this by also supporting in-row LOBs for situations where LOBs are fairly small and can readily fit within the parent row.
This has two advantages: storage savings by optimizing for the space allocated for small LOBs, as well as performance benefits for retrieval of small LOBs by reducing the I/O overhead.
ASE seamlessly handles the migration of LOBs from in-row to off-row storage (or vice versa) to always optimize for the storage required as LOBs change in size due to updates.
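A hedged sketch of the column clause (the table is hypothetical, and the 400-byte in-row limit is illustrative):

-- note bodies up to ~400 bytes stay in the parent row; larger ones move off-row
create table customer_notes
(cust_id int not null,
 body    text in row (400) null)
go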



Shrinking the Transaction Log

ASE 15.7 adds the ability to shrink a transaction log. Transaction logs can grow very large for a number of reasons: they may have been enlarged to handle log-full situations, extended to support one-time operations that required a lot of space, or simply sized from generous estimates during capacity planning.
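My recollection is that 15.7 exposes the shrink through alter database ... log off; treat the exact form below as an assumption and check the manuals before use:

-- assumption: release unused log space residing on device logdev1 back from the database
alter database mydb log off logdev1
go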
 
APPLICATION DEVELOPER PRODUCTIVITY


ASE 15.7 offers numerous enhancements designed to boost the productivity of application developers. These include:
• Simpler and more efficient LOB retrieval and manipulation
• Greater flexibility around concurrency and isolation levels
• Enhanced SQL expressiveness
• Support for scripting language drivers
ASE 15.7 greatly improves the handling of LOBs. In addition to the support for in-row LOBs discussed above, ASE 15.7 introduces the concept of LOB locators. LOB locators are essentially handles (pointers) to LOBs cached in ASE server memory. Client applications can send and receive locators as host variables and parameter markers. When a client application needs to fetch, search, or update LOBs, it communicates with the server through the locators rather than passing the actual LOBs back and forth. This greatly reduces network traffic between client and server, and also reduces the need for client-side memory.
A number of T-SQL commands are provided to work with LOBs. These commands allow easy manipulation of LOBs (truncation and concatenation of LOBs, overwriting portions of a LOB, extracting substrings in a LOB, and so on). ODBC clients can use these T-SQL commands to manipulate LOBs. The jConnect JDBC driver supports access to the locators using Blob, Clob and NClob classes.
In addition to the above, LOBs can now also be passed as parameters to stored procedures. This adds greater flexibility to manipulate LOBs within stored procedures and reduces code complexity for developers.
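A small sketch of the stored-procedure side (procedure and table names are hypothetical; the text parameter is the 15.7 capability being shown):

create procedure p_save_note
    @cust_id int,
    @body    text          -- LOB parameters to procedures are new in 15.7
as
    insert customer_notes (cust_id, body) values (@cust_id, @body)
go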
ASE 15.7 also provides a number of T-SQL updates that allow for greater SQL expressiveness as well as control over concurrency and isolation level semantics. Some of these include:

Non-materialized Non-NULL Columns Without Data Copy

In the past adding non-NULL columns to an existing table required a complete data copy. This can be particularly cumbersome for very large tables. ASE 15.7 now allows creation of non-materialized non-NULL columns. This means columns get created without actually making a data copy (which can require a large amount of temporary space for the data copy, equal to the original table size). This saves both space and time.
When such columns are actually updated, their values are instantiated (physically stored in the row) behind the scenes.
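A hedged sketch; NOT MATERIALIZED is my recollection of the 15.7 clause, and the table is hypothetical:

-- add a non-NULL column to a very large table without a data copy
alter table big_audit_table
    add status varchar(10) default 'NEW' not null not materialized
go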


• Support for the MERGE statement, to allow transferring rows from a source (table, view or derived table) into a target table
• SELECT FOR UPDATE, to allow selected rows to be locked for a later update
• Ability to release read-cursor locks at cursor close for active transactions, at all isolation levels
• Various nested select statement enhancements
• Ability to run all system procedures and selected DDL commands in chained transaction mode

Another area where ASE 15.7 benefits application developers is through Sybase-provided and supported scripting language drivers. Initial support is provided for Python, PHP and Perl. Application developers can now write their database applications in these languages, making it easy to leverage the vast pool of developer talent fluent in them.
ENHANCED DIAGNOSTICS & MONITORING

ASE 15.7 provides numerous enhancements around diagnostics and monitoring. Some of these include:
• Ability to save abstract plan information in the statement cache
• Ability to output a cached plan in XML
• Ability to analyze dynamic parameters before running a query, which can help avoid inefficient query plans
• Ability to monitor lock timeouts
• Ability to display statistics and histograms for various objects via a system procedure (a T-SQL equivalent of the standalone optdiag utility)
• Ability to monitor cursor statements
• Ability to analyze the generated query plan to understand the factors behind ASE's choice of plan (the sp_opt_querystats system procedure)
These and a number of other enhancements allow application developers to probe deeply into the decisions made by the ASE optimizer and identify opportunities to further tune for performance. The monitoring tables have also been enhanced to provide even more information for sophisticated application developers and DBAs.

SECURITY ADMINISTRATION
Several security improvements are introduced in ASE 15.7. These include:
• Object ownership change
• Support for login profiles
• Split knowledge/Dual control for protecting encryption keys (a requirement for the payments card industry data standard, PCI-DSS)
• Improved encryption for FIPS 140-2 compliance — encrypting role passwords with SHA256, and encrypting external server passwords and hidden SQL text with AES256
• Single sign-on and end-to-end Kerberos credential forwarding
ASE 15.7 makes it easy to change object ownership. This is often needed when the person who created an object (table, procedure etc.) is not the one maintaining it, whether because an employee changed responsibilities or left the company, because of mergers and acquisitions, or because development and maintenance are outsourced.
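A hedged sketch of the transfer command; the alter ... modify owner form (and the preserve permissions option) is from memory, so verify it against the 15.7 docs:

-- move a table from a departed developer's schema to a maintenance login
alter table jsmith.sales_history modify owner maint_user preserve permissions
go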
Another powerful feature that simplifies security administration is support for login profiles. Before ASE 15.7, each ASE login had to be managed individually, which is cumbersome and makes it difficult to implement policy changes that often affect a large number of users.
ASE 15.7 introduces login profiles: SQL-defined containers of login attributes and their values (such as default database, login script, automatically activated roles, and authentication mechanism). Groups of logins can share a login profile. Profiles are easy to administer and provide the flexibility to implement security policies and comply with changing regulations.
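A minimal sketch, assuming the 15.7 create login profile / alter login commands (names are illustrative):

-- one profile shared by all trading-desk logins
create login profile trading_profile with default database trading
go
alter login jsmith modify login profile trading_profile
go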
PERFORMANCE ENHANCEMENTS 

ASE 15.7 focuses on two specific areas for performance:
• Reducing query latency, especially for dynamic SQL
• ASE kernel updates to better utilize parallel hardware

ASE 15.7 introduces various internal optimizations that reduce query execution time overhead. These include caching and reusing information from query compilation across multiple executions, reducing the metadata sent across the network for result sets, a global cache for dynamic SQL so prepared statements can be reused across different sessions, more efficient execution of queries that touch no table and return no result set, and more efficient handling of cursor fetch statements, together with the ability to cache cursor query plans in the statement cache.
ASE 15.7 also provides an updated kernel that can better leverage parallel hardware. The kernel updates enhance ASE scalability on systems with a very large number of processors, processor cores, and hardware threads. They also provide better integration of I/O event processing and reduce I/O overhead within the ASE server. These enhancements not only improve I/O performance but also help ASE operate well in virtualized environments.

Monday 14 November 2011

monitor sybase tempdb log space

You can use the script below to monitor tempdb; it can also be configured to send an email alert if tempdb free data or log space falls below 10%.

select "Data Size," = str(sum(size * abs(sign(segmap - 4))) / 512.0, 7, 2)+',',
"Data Used," = str(sum((size - curunreservedpgs(dbid, lstart, unreservedpgs)) * abs(sign(segmap - 4))) / 512.0, 7, 2)+',',
"Data Free," = str(100.0 * sum((curunreservedpgs(dbid, lstart,unreservedpgs)) * abs(sign(segmap - 4))) / sum(size * abs(sign(segmap- 4))), 3) + "%"+',',
"Log Size," = str(sum(size * (1 - abs(sign(segmap - 4)))) / 512.0, 7, 2)+',',
"Log Used," = str(sum((size - curunreservedpgs(dbid, lstart, unreservedpgs))* (1 - abs(sign(segmap - 4)))) / 512.0, 7, 2)+',',
"Log Free" = str(100.0 * sum((curunreservedpgs(dbid, lstart,unreservedpgs))* (1 - abs(sign(segmap - 4)))) / sum(size * (1 - abs(sign(segmap - 4)))), 3) + "%"
from master..sysusages
where segmap < 5 and dbid=2

You then wrap this in a shell script with conditional logic that checks whether the data or log free percentage has fallen below your threshold.
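If you prefer to keep the check in T-SQL, here is a minimal sketch derived from the query above (the 10% threshold and message are illustrative; a shell wrapper would grep the output and send the mail):

declare @log_free_pct numeric(5,1)
select @log_free_pct = 100.0 * sum(curunreservedpgs(dbid, lstart, unreservedpgs) * (1 - abs(sign(segmap - 4))))
                             / sum(size * (1 - abs(sign(segmap - 4))))
from master..sysusages
where segmap < 5 and dbid = 2      -- dbid 2 is tempdb
if @log_free_pct < 10.0
    print 'ALERT: tempdb log free space is below 10 percent'
go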

Thursday 10 November 2011

Free Disk Space, Negative Logsegment Space

1) Display Available Device Space - Free Disk Space

Determining the amount of free space on a server's set of devices is not an easy task; the procedure below produces a nice report showing total allocations and available space on each device.
create procedure sp_freedisk
as
set nocount on
select
    logical_name  = substring(d.name, 1, 12),
    physical_name = substring(d.phyname, 1, 17),
    vdevno        = d.low / power(2, 24),
    -- sizes below assume a 2K server page size (pages * 2 / 1024 = MB)
    size          = (d.high - d.low + 1) * 2 / 1024,
    reserved      = isnull(sum(u.size) * 2 / 1024, 0),
    left          = (d.high - d.low + 1) * 2 / 1024 - isnull(sum(u.size) * 2 / 1024, 0)
from master.dbo.sysdevices d, master.dbo.sysusages u
where d.status & 2 = 2                                 -- database devices only
  and u.vstart / power(2, 24) =* d.low / power(2, 24)  -- =* is the legacy right-outer-join
                                                       -- operator, so empty devices still appear
group by
    substring(d.name, 1, 12),
    substring(d.phyname, 1, 17),
    d.low / power(2, 24),
    (d.high - d.low + 1) * 2 / 1024
order by vdevno
return

2) logsegment shows negative space


DBName          SegName         Allocated    Available    PctUsed
--------------- --------------- ------------ ------------ ---------
reports         default           289000.0      13153.5       95.4
reports         logsegment          3010.0       3857.7      -28.1
reports         system            289000.0      13153.5       95.4

-- Negative value for log free space.
-- The following fix is undocumented, from Sybase.
-- Connect to your server as sa and run the following;
-- the server needs to be recycled for the fix to take effect.

dbcc gam(dbname, 0, 0, "fix")
go
dbcc usedextents(dbname, 0, 1, 1)
go


"How to check free space in a Sybase (ASE) database"
 
If we would like to check the free and occupied space of data and logs in our Sybase (ASE) databases, we can use a script like the one below:

declare @pagesize int
select @pagesize = @@maxpagesize
SELECT "Database Name" = CONVERT(char(20), db_name(D.dbid)),
       "Data Size"  = STR(SUM(CASE WHEN U.segmap != 4 THEN U.size*@pagesize/1048576 END),10,1),
       "Used Data"  = STR(SUM(CASE WHEN U.segmap != 4 THEN U.size - curunreservedpgs(U.dbid, U.lstart, U.unreservedpgs) END)*@pagesize/1048576,10,1),
       "Data Full%" = STR(100 * (1 - 1.0 * SUM(CASE WHEN U.segmap != 4 THEN curunreservedpgs(U.dbid, U.lstart, U.unreservedpgs) END)/SUM(CASE WHEN U.segmap != 4 THEN U.size END)),9,1) + "%",
       "Log Size"   = STR(SUM(CASE WHEN U.segmap = 4 THEN U.size*@pagesize/1048576 END),10,1),
       "Free Log"   = STR(lct_admin("logsegment_freepages", D.dbid)*@pagesize/1048576,10,1),
       "Log Full%"  = STR(100 * (1 - 1.0 * lct_admin("logsegment_freepages", D.dbid) /
                      SUM(CASE WHEN U.segmap = 4 THEN U.size END)),8,1) + "%"
FROM master..sysdatabases D,
     master..sysusages U
WHERE U.dbid = D.dbid
  AND ((D.dbid > 3) AND (D.dbid < 31513) AND (D.status != 256))
GROUP BY D.dbid
ORDER BY db_name(D.dbid)
go


Save it to a file, e.g. check.sql, and execute it.


Example: isql -Usa -P -i check.sql





If we want to get information for all databases (including system databases), we need to delete one line from the script:
AND ((D.dbid > 3) AND (D.dbid < 31513) AND (D.status != 256))
        
  1. On larger databases, and on servers still running (unfortunately) Sybase ASE 12.5.4, this script can fail with an arithmetic overflow error.
  2. In that case, change the first line to declare @pagesize numeric(19,0). If everything is on ASE 15.0 or later, declare @pagesize bigint also works.



3) Table Partitioning
Insert performance on partitioned tables is improved, as multiple
'entry points' (last-page entries) are created. Partitioned tables
require slightly more disk space and also need a chunk of memory.

/* create 4 partitions for the invoice table */

alter table invoice
partition 4
go

bcp pubs2..publishers out "/optware/sybase/publishers.dat" -Uuser1 -Pabc123 -c &

To bcp into several partitions, split your file into 4 parts and bcp them into the partitions.

You have to specify the partition number like this:

bcp mydb..mytable:1 in ...
bcp mydb..mytable:2 in ...
bcp mydb..mytable:3 in ...

bcp mydb..mytable:4 in ...

Partitioning in ASE 15

Semantic "smart" partitioning is new in ASE 15.
Tables can be partitioned by range, value, or round-robin. By default, all tables in ASE 15 are partitioned, round-robin.
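A hedged sketch of a range-partitioned table (column names and boundary values are illustrative; check the values <= syntax against the 15.x docs):

create table sales
(trade_date datetime not null,
 amount     money    not null)
partition by range (trade_date)
(p2011h1 values <= ('Jun 30 2011'),
 p2011h2 values <= (MAX))
go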
4) Clustered vs Non-clustered Indexes

Typically, a clustered index will be created on the primary key
of a table, and non-clustered indexes are used where needed.



Non-clustered indexes

• Leaves are stored in a b-tree
• Lower overhead on inserts vs clustered
• Best for single-key queries
• Last page of the index can become a 'hot spot'

Clustered indexes

• Records in the table are sorted physically by key values
• Only one clustered index per table
• Higher overhead on inserts, if a re-org of the table is required
• Best for queries requesting a range of records
• Index must exist on the same segment as the table

Note! With "lock datapages" or "lock datarows", clustered indexes
are sorted physically only upon creation. After that, the indexes
behave like non-clustered indexes.
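For illustration (table and column names are hypothetical):

-- one clustered index per table; on an APL table this physically sorts the rows
create clustered index idx_c_orders on orders (order_id)
go
-- non-clustered index for single-key lookups
create nonclustered index idx_nc_orders_cust on orders (cust_id)
go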

6) Setting the thresholds

Threshold settings allow customized procedures to be run when database segments
approach a defined capacity.

The "last chance threshold" is set by default, to execute sp_thresholdaction within the
current database, when a segment reaches 95% of capacity. The procedure sp_thresholdaction
needs to be created by the DBA. Here is a sample:


create proc sp_thresholdaction (
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int )
as
declare @msg   varchar(80),
        @date1 datetime,
        @fname varchar(80),
        @fdate varchar(20),
        @fpath varchar(40)

select @fpath = '/usr/dumps/logs/'
select @date1 = getdate()

-- build an MMDDHHMI timestamp suffix for the dump file name
select @fdate =
    convert(varchar(2), datepart(MM, @date1)) +
    convert(varchar(2), datepart(DD, @date1)) +
    convert(varchar(2), datepart(HH, @date1)) +
    convert(varchar(2), datepart(MI, @date1))

select @fname = @fpath + 'log_' + @dbname + @fdate + '.dmp'
select @msg = '***!! Last Chance Threshold reached, for ' + @dbname + '(' + @segmentname + ')'
print @msg

-- only dump the transaction log when the log segment fired the threshold
if @segmentname = 'logsegment'
    dump tran @dbname to @fname

return

Other threshold levels can be created for specific segments. They can be set up
to print informational messages to the error log as a forewarning to the DBA. Here's a
sample that shows the command syntax:


1> sp_addthreshold dbname,logsegment,400,'proc_log_threshold'
2> go
Adding threshold for segment 'logsegment' at '400' pages.
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
(return status = 0)

8) Renaming a database
Renaming a database requires that it first be put in single-user mode.


-- Renaming a database

use master
go

sp_dboption 'warehouse003','single user',true
go
use warehouse003
go
checkpoint
go
use master
go



sp_renamedb 'warehouse003','warehouse009'
go



sp_dboption 'warehouse009','single user',false
go
use warehouse009
go
checkpoint
go

use master
go

sp_helpdb warehouse009
go

Wednesday 9 November 2011

Fragmentation and its Resolution Steps

Measuring and Dealing with Object-Level Fragmentation Within ASE

Table and index fragmentation occurs as a result of heavy activity, especially deletes and updates. As tables grow in size, this problem becomes more acute.

There is no doubt about the benefits of adding indexes to tables, and for the most part little work is needed to keep them maintained. However, some maintenance is required, as indexes can become fragmented during data modifications, especially deletes and updates. This fragmentation can become a source of performance problems for queries.

What is Object-Level Fragmentation?

Object (table/index)-level fragmentation is essentially made up of extents whose pages are not sequential, extents that contain empty pages, pages that have empty rows, or rows that have only been logically deleted thus far. All of these conditions will result in more I/Os being required to return the required data than would be necessary if there was no fragmentation.
So what exactly is index fragmentation? It comes in two forms: external fragmentation and internal fragmentation. Both result in inefficient use of pages within an index, either because the logical order of the pages is wrong (external fragmentation) or because the amount of data stored within each page is less than the page can hold (internal fragmentation). Whichever type occurs, we may face performance issues with our queries because of it.
Simply put, fragmentation is any condition that causes more than the optimal amount of disk I/O to be performed in accessing a table, or causes the disk I/Os that are performed to take longer than they optimally would. Optimal performance of select queries occurs when the data pages are as contiguous as possible within the database and packed as fully as possible.

Types of ASE Tables

When a table is created, the default locking scheme is the server-wide setting of the configuration parameter "lock scheme", whose default value is "allpages" (APL). In addition, ASE allows you to set the table locking scheme to "datapages" (DPL) or "datarows" (DOL), as shown in the example after this list:
1. In APL, both data pages and index pages are locked during an operation.
2. In DPL, only data pages are locked; there is no index page locking.
3. In DOL, only rows are locked; there is no index page locking.
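A quick example of choosing the scheme (the table is illustrative; the lock clause is standard ASE syntax):

-- pick the locking scheme at create time
create table trades
(trade_id int not null,
 qty      int not null)
lock datarows
go
-- or change it afterwards (this can cause the table and its indexes to be rebuilt)
alter table trades lock datapages
go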
Parameters to Check for the Degree of Index Fragmentation

ASE provides a number of parameters to indicate the level of fragmentation in the indexes:
1. data-page-cluster-ratio, dpcr
2. index-page-cluster-ratio, ipcr
3. data-row-cluster-ratio, drcr
4. space utilization, sput
5. large I/O efficiency, lgio
- The "data-page-cluster-ratio" of the table indicates how many times a sequential read of the page chain will have to jump to one extent and then back to continue the read. The more jumps, the less clustered pages are within extents and thus the more fragmented. These jumps require extra I/O and over time can become expensive to perform.
- The "index-page-cluster-ratio" is the same as the "data-page-cluster-ratio" except that it measures how well clustered index leaf pages are in relation to their extents. It is a very good indicator of fragmentation of the index pages.
- The "data row cluster ratio", is another cluster ratio that is used to cost the number of data pages that need to be accessed during scans using a particular index.
- The Space Utilization, "sput" gives you an idea of how efficiently space is being used by the object—in other words, whether it’s fragmented or not. The value uses the average row size and the number of rows to estimate the minimum number of pages for the object and then compares this to the current actual number of pages.
- Large I/O efficiency ("lgio") estimates the number of useful pages brought in by a single large I/O; put another way, how many pages in a single large I/O will contain needed rows?
For DPL or DOL tables with clustered indexes, dpcr and drcr indicate the level of data fragmentation. For DPL or DOL tables with non-clustered indexes, both the ipcr and sput figures are useful. For APL tables with clustered indexes, both sput and lgio are good indicators of data fragmentation. For APL tables with non-clustered indexes, look at sput. The rule of thumb is: the lower these values, the higher the degree of fragmentation.

Working Out Index Fragmentation

In versions prior to ASE 12.5.1, we could get information about fragmented indexes through the ASE utility optdiag. The output from optdiag contains all sorts of information, including the "Derived statistics":
OptDiag/12.0.0.8/P/EBF 11689 ESD1/Sun_svr4/OS 5.6/2047/32bit/OPT/Thu Feb 12 21:33:41 2004
Adaptive Server Enterprise/12.0.0.8/P/EBF 11689 ESD1/Sun_svr4/OS 5.6/2047/32bit/FBO/Fri Feb 13 03:52:51 2004

Server name: "PROD_SERVER"

Specified database: "PROD"
Specified table owner: not specified
Specified table: "op_services_log_detail_history"
Specified column: not specified

Table owner: "dbo"
Table name: "op_services_log_detail_history"

Statistics for index: "op_services_log_detail_hst_idx" (clustered)
Index column list: "op_services_run_id"
Data page count: 877600
Empty data page count: 0
Data row count: 31346435.0000000000000000
Forwarded row count: 0.0000000000000000
Deleted row count: 0.0000000000000000
Data page CR count: 111527.0000000000000000
OAM + allocation page count: 10561
First extent leaf pages: 0
Data row size: 46.0000000000000000
Index height: 3

Derived statistics:
Data page cluster ratio: 0.9976207839562443
Space utilization: 0.8150023207684015
Large I/O efficiency: 0.9836183166408136
For this example, we chose a fairly large APL table in excess of 31 million records. This table had a non-unique clustered index with a large number of overflow pages. The section under "Derived statistics" provides the information required to determine the level of fragmentation in this table. The two parameters that are important in this case are sput and lgio.
The low "space utilization" figure is normally associated with an APL table with a non-unique clustered index. This can lead to lots of overflow pages as each key can have many rows. The non-unique clustered index is built on the column "op_services_run_id".

Using DERIVED_STAT Function to Estimate Fragmentation

ASE 12.5.0.3 introduced the function:

DERIVED_STAT("table_name","index_name","option")

By using this function, one saves a good deal of time and effort in picking up specific fragmentation indicators without running optdiag. The derived_stat function can provide a number of indicators; however, it is left to the DBA to identify the tables and indexes that need to be defragged.
In general, the closer the value of these parameters to 1 (range 0-1), the better, and the less fragmentation you have. As explained earlier, for different table and index types, these parameters have different significance.
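For example (the long option names are the ones I recall; short forms such as 'sput' may also be accepted, so verify against the docs):

select ipcr = derived_stat('QMCASEEVENTAUDIT', 'idx_nc_caseid', 'index page cluster ratio'),
       sput = derived_stat('QMCASEEVENTAUDIT', 'idx_nc_caseid', 'space utilization')
go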
As a DBA, I wrote a shell script that carries out table and index "fragmentation analysis" for a given database. Note that if you have a pre-12.5.0.3 database, as long as you have a test environment with ASE 12.5.0.3 or higher installed, you can load the database there and run the analysis against the loaded copy.
We used the algorithm explained under section "Parameters to Check for the Degree of Index Fragmentation" to recommend which tables and indexes we need to defrag.
The parameters we look at are:
table_type = DOL or DPL and index_type = clustered:     examine dpcr & drcr
table_type = DOL or DPL and index_type = non-clustered: examine ipcr & sput
table_type = APL and index_type = clustered:            examine sput & lgio
table_type = APL and index_type = non-clustered:        examine sput

The actions are twofold. If you have a DPL or DOL table, then carrying out "reorg rebuild" will do the trick. If you have an APL table with a clustered index, drop and recreate the clustered index. That action will rebuild all other indexes. If you have an APL table without a clustered index, then drop and recreate the index affected on that table.
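In T-SQL terms, the two remedies look like this (the index column list is a placeholder; reorg rebuild is standard syntax):

-- DPL/DOL table: rebuild the table and all of its indexes
reorg rebuild QMEXCEPTION
go
-- APL table with a clustered index: drop and recreate the clustered index;
-- recreating it rebuilds the non-clustered indexes as a side effect
drop index TMTMSTREAM.idx_uc_TMTMSTREAM
go
create unique clustered index idx_uc_TMTMSTREAM on TMTMSTREAM (stream_id)   -- column list is hypothetical
go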
So what criteria do you use to select indexes for defragging? Given the nature of defragging, a maintenance window is most often required. Taking the time factor into account, the objective is to select large tables (tables with a high number of records) whose applicable derived-stat parameters are below optimal values. As a good indicator, I go for tables with more than 50,000 records having the applicable derived-stat parameters below 0.9. An example of indexes chosen for defragging is shown in the table below:
Name TableType Records IndexName IndexType
------------------------------ --------- ----------- ------------------------------ ---------------
QMCASEEVENTAUDIT DPL 41924732 idx_nc_caseid non-clustered
QMCASEEVENTAUDIT DPL 41924732 idx_u_QMCASEEVENTAUDIT non-clustered
QMAUTHORISEREQUESTAUDIT APL 33911989 idx_nc_caseid non-clustered
QMAUTHORISEREQUESTAUDIT APL 33911989 idx_nc_created non-clustered
QMAUTHORISEREQUESTAUDIT APL 33911989 idx_u_QMAUTHORISEREQUESTAUDIT non-clustered
QMEXCEPTION DOL 10882587 idx_nc_state non-clustered
QMEXCEPTION DOL 10882587 idx_u_exceptionid clustered
QMEXCEPTION DOL 10882587 idx_nc_tradeid non-clustered
QMEXCEPTION DOL 10882587 idx_u_QMEXCEPTION non-clustered
QMEXCEPTION DOL 10882587 idx_nc_exceptiontypeid non-clustered
TMTMSTREAM APL 1405635 idx_uc_TMTMSTREAM clustered
TMTMSTREAM APL 1405635 idx_nc_TMTMSTREAM non-clustered
QMWORKQUEUETOVIEW APL 1122526 idx_u_QMWORKQUEUETOVIEW non-clustered
QMWORKQUEUETOVIEW APL 1122526 idx_unc_QMWORKQUEUETOVIEW non-clustered
TMTMTRADE APL 783232 idx_uc_TMTMTRADE clustered
TMTMTRADE APL 783232 idx_nc_TMTMTRADE non-clustered
SCHEDULED_EVENTS DOL 448277 idx_c_SCHEDULED_EVENTS_2 clustered
SCHEDULED_EVENTS DOL 448277 idx_u_SCHEDULED_EVENTS_1 non-clustered
SCHEDULED_EVENTS DOL 448277 idx_nc_SCHEDULED_EVENTS_3 non-clustered
SCHEDULED_EVENTS DOL 448277 idx_nc_SCHEDULED_EVENTS_4 non-clustered
QMCASEATTRIBUTE APL 225735 idx_u_QMCASEATTRIBUTE clustered
QMTRANSITION APL 189327 stepid non-clustered
QMTRANSITION APL 189327 idx_u_QMTRANSITION non-clustered
QMROLETOSTEP APL 152061 idx_u_QMROLETOSTEP non-clustered
QMROLETOSTEP APL 152061 idx_unc_roleid_stepid non-clustered
EVT_MESSAGE DOL 100547 idx_uc_EVT_MESSAGE_1 clustered
QMCASE DPL 77554 idx_u_QMCASE non-clustered
QMCASE DPL 77554 idx_u_nc_exceptionid clustered
QMCASE DPL 77554 idx_nc_workqueueid_state non-clustered
QMUSERGROUPMAPAUDIT APL 57790 idx_nc_created non-clustered
QMUSERGROUPMAPAUDIT APL 57790 idx_nc_useraccountid non-clustered
QMUSERGROUPMAPAUDIT APL 57790 idx_u_QMUSERGROUPMAPAUDIT non-clustered
As an example, take the index "idx_nc_caseid" on table "QMCASEEVENTAUDIT".

This is a data-page-locked table with in excess of 41 million records, and the index itself is non-clustered. Earlier we identified the derived-stat parameters "ipcr" and "sput" as the fragmentation indicators for this type of table and index. The actual values are shown below:
 
name                 index_name      records      size        ipcr        sput
-------------------- --------------- ------------ ----------- ----------- -----------
QMCASEEVENTAUDIT     idx_nc_caseid   41924732     3515414     0.729916    0.763700
 
Note that both "ipcr" and "sput" are below their optimal values, indicating that this index is fragmented.
The same principle applies to other tables as well. By creating an algorithm we can easily identify tables and indexes that need defragging.

INSTALLATION OF DEFRAGGING ROUTINES
See the cron example below.

#----------------------------------------------------------------------------------------------------------#
##
## Fragmentation report on all user databases daily
## Mich Talebzadeh 23/12/2004
##
00 6 * * 1-6 /local/usts02_sybase/sybase/dba/frag_analysis.ksh USTSAC2 > /local/usts02_sybase/sybase/logs/USTSAC2/frag_analysis.err 2>&1
00 8 * * 1-6 /local/usts02_sybase/sybase/dba/frag_analysis.ksh USTSAC2 STS_CS > /local/usts02_sybase/sybase/logs/USTSAC2/frag_analysis_STS_CS.err 2>&1
#----------------------------------------------------------------------------------------------------------#

The frag_analysis.ksh routine updates DBA_CONTROL..frag_databases. If a record already exists for the database, the WhenFragLastRun column is timestamped; otherwise a new record is created with a default TopNTables value of 10 (the number of tables to defrag). If you want to exclude a database from defragging, set Exclude = 1 for that database in frag_databases.

1> use DBA_CONTROL
2> go
1> select * from frag_databases
2> go
DBName TopNTables WhenFragLastRun WhenRebuildLastDone Exclude
------------------------------ ----------- -------------------------- -------------------------- -------
test 10 Jan 14 2005 4:49PM Jan 14 2005 2:06PM 0
CORD_CORPORATE 10 Jan 25 2005 6:00AM NULL 0
DBA_CONTROL 10 Jan 25 2005 6:00AM NULL 0
PINGTEST 10 Jan 25 2005 6:00AM NULL 0
PUBLISH_SUBSCRIBE 10 Jan 25 2005 6:01AM NULL 0
STS_DEUSD 10 Jan 25 2005 6:02AM Jan 19 2005 8:33PM 0
sqlcqr_master 10 Jan 25 2005 6:02AM NULL 0
(7 rows affected)

The other table of interest is DBA_CONTROL..frag_info. This is the main table holding information about which tables and indexes need defragging; frag_analysis.ksh populates it. For a given database, it simply deletes the old records and creates new ones. After running frag_analysis.ksh YOU NEED TO EXCLUDE TABLES THAT SHOULD NOT BE PICKED UP BY rebuild_indexes.ksh, because there may not be enough free space in the database or a large enough time window. For example, for STS_QM we had to exclude the following three tables and their indexes from reorg rebuild:

1> update frag_info set Exclude = 1 where DBName = 'STS_QM'
2> and Name in ('QMEXCEPTION','SCHEDULED_EVENTS','EVT_MESSAGE')
3> go
(10 rows affected)


Upon completion of this routine, a summary email of the work done is sent to the list of addresses in the "forward_email" function in ~sybase/dba/common_functions.ksh.

Output files
The main output file is the report file; the other is the log file. Both live in the ~/logs/server_name directory:

sybaseAC@ubnsts-s02:/local/usts02_sybase/sybase/logs/USTSAC2% ltr frag*
-rw------- 1 sybaseAC sybase 0 Jan 12 06:00 frag_analysis.err
-rw------- 1 sybaseAC sybase 12213 Jan 12 11:59 frag_analysis_USTSAC2.report
-rw------- 1 sybaseAC sybase 1502 Jan 12 11:59 frag_analysis_USTSAC2.log

WHAT TO WATCH!

Any time you run frag_analysis.ksh, the routine deletes the old records for a database in DBA_CONTROL..frag_info and repopulates them. If you intend to exclude any table from index rebuild or reorg rebuild, set the Exclude column to 1 for that table before running the rebuild_indexes.ksh routine.
frag_analysis.ksh is non-destructive and can be run at any time; it only updates the DBA_CONTROL tables.

Doing Reorg rebuild and index rebuild

To do the actual reorg and rebuild of indexes, run ~/dba/rebuild_indexes.ksh. This routine expects at minimum a server_name and a database_name. You can give multiple databases after the server name; they will be processed in the order entered. You can also run rebuilds for different servers and databases (or the same server and different databases) simultaneously. The examples below should help:

00 16 * * 6 /local/usts02_sybase/sybase/dba/rebuild_indexes.ksh USTSAC2 test > /local/usts02_sybase/sybase/logs/USTSAC2/rebuild_indexes_USTSAC2_test.err 2>&1
00 16 * * 6 /local/usts02_sybase/sybase/dba/rebuild_indexes.ksh USTSAC2 test STS_CS > /local/usts02_sybase/sybase/logs/USTSAC2/rebuild_indexes_USTSAC2.err 2>&1

This routine relies on the information in the DBA_CONTROL..frag_info table, as discussed above. There is a log file for the server itself, and for each database there is a sql file and a log file. Examples are given below.

sybaseAC@ubnsts-s02:/local/usts02_sybase/sybase/logs/USTSAC2% ltr rebuild_indexes_USTSAC2_*12Jan2005_*
-rw------- 1 sybaseAC sybase 1828 Jan 12 12:11 rebuild_indexes_USTSAC2_SERVER_WIDE_12Jan2005_12:11_67.log
-rw------- 1 sybaseAC sybase 28791 Jan 12 12:11 rebuild_indexes_USTSAC2_DATABASE_test_index_rebuild_script_12Jan2005_12:11_67.sql
-rw------- 1 sybaseAC sybase 27091 Jan 12 12:36 rebuild_indexes_USTSAC2_DATABASE_test_12Jan2005_12:11_67_progress.log

Notes
Before running this routine, ensure that both DBA_CONTROL..frag_databases and frag_info contain correct information. The top N tables are picked up from the frag_databases table; the index names to be processed come from frag_info.
rebuild_indexes.ksh sets the server configuration option 'number of sort buffers' to 1500 for the duration of the index rebuild and restores the original value on completion.
In order to perform reorg rebuild, the database option 'select into' is turned on for the database, so you need to:
1. Stop the transaction log dumps
2. Back up the database
Once the operation for that database has finished, 'select into' is turned back off (if the database already had it on, nothing is changed). Ensure that you back up the database and restart the transaction log dumps if applicable.
1. Extract the files from frags.tar. This creates a directory called "frags".
2. Put frag_analysis.ksh, common_functions.ksh and rebuild_indexes.ksh under the ~sybase/dba directory.
3. Put environment.ksh under the ~/<SERVER_NAME> directory. This is the environment file for the server <SERVER_NAME>.
4. Modify this file to set the environment variables SYBASE and DSQUERY correctly. For example:
export SYBASE=/local/usts02_sybase/sybase/ase12_52-11942-64
export DSQUERY=USTSAC2
5. Run DBA_CONTROL.sql. This creates two tables: DBA_CONTROL..frag_info and DBA_CONTROL..frag_databases.
6. To get information on tables and indexes to be defragged, run $SYBASE/dba/frag_analysis.ksh, for example from a cron job as shown earlier. The routine takes two parameters, server_name and an optional database_name. With only server_name given, it performs fragmentation analysis for all user databases; with <server_name> <database_name>, it updates fragmentation info for that database only.

SYBASE PERFORMANCE TUNING AND MDA TABLES

INTRODUCTION TO MDA TABLES:


Starting with ASE 12.5.0.3, a new feature called 'MDA tables' is available to ASE users. The MDA tables provide access to low-level monitoring information in ASE and can be queried with regular SQL select statements.

Salient Features


• The MDA tables are proxy tables, defined in the master database by the installmontables script
• They consume no space; the table definitions map to ASE RPC calls
• 39 proxy tables in the master database (35 in ASE 12.5)
  – Can be accessed with regular SQL statements
  – When queried, tables are populated on the fly from in-memory structures
  – No data is stored in the master database
  – The proxy tables may also be created in a database other than master
• No license needed: included in the ASE base product
• Only a small performance impact on ASE (<5%)
  – Resource usage in ASE per table/query/entire server
  – Current activity in ASE per spid/query/procedure/table
  – Recent activity: recently completed statements, with the resources they required
• Provide server status information even when tempdb is full

Some examples of practically relevant information:
– Amount of memory occupied in the data cache by each table or index
– Most frequently used tables/procedures
– Top-N queries for CPU, I/O, elapsed time
– Finding unused indexes
– SQL text of currently executing/recently executed statements
– Automatically killing user processes that have been idle for more than X minutes

STEPS TO SET UP MDA TABLES


To set up the MDA tables in ASE 12.5, Sybase provides the script installmontables in $SYBASE/ASE-12_5/scripts. In ASE 15, the MDA installation script is folded into installmaster. Below are the detailed steps to set up the MDA tables in ASE 12.5.

isql -Udba -S$DSQUERY -P*****
sp_configure 'enable cis', 1
go
use master
go
sp_addserver loopback, null, @@servername
go
set cis_rpc_handling on
go
exec loopback...sp_who  -- note: 3 dots!
go
exit
isql -Udba -S$DSQUERY -P***** -i $SYBASE/$SYBASE_ASE/scripts/installmontables -o /optware/sybase/admin/create/utils/logs/mdainstallation_out

isql -Udba -S$DSQUERY -P*****
use master
go
grant role mon_role to dba
go
exec sp_configure "enable monitoring", 1
go
exec sp_configure "sql text pipe active", 1
go
exec sp_configure "sql text pipe max messages", 100
go
exec sp_configure "plan text pipe active", 1
go
exec sp_configure "plan text pipe max messages", 100
go
exec sp_configure "statement pipe active", 1
go
exec sp_configure "statement pipe max messages", 100
go
exec sp_configure "errorlog pipe active", 1
go
exec sp_configure "errorlog pipe max messages", 100
go
exec sp_configure "deadlock pipe active", 1
go
exec sp_configure "deadlock pipe max messages", 100
go
exec sp_configure "wait event timing", 1
go
exec sp_configure "process wait events", 1
go
exec sp_configure "object lockwait timing", 1
go
exec sp_configure "SQL batch capture", 1
go
exec sp_configure "statement statistics active", 1
go
exec sp_configure "per object statistics active", 1
go
exec sp_configure "max SQL text monitored", 2048
go
exec sp_configure "performance monitoring option", 1
go





DETERMINING THE SIZE OF THE MDA TABLES

The sizing depends on which table you are configuring. These sp_configure parameters determine the number of rows in the historical ("pipe") tables:
• errorlog pipe max messages
• plan text pipe max messages
• sql text pipe max messages
• statement pipe max messages
• deadlock pipe max messages

The value of each parameter is the number of rows per engine. The correct size depends on:
• the rate at which rows are written to the table, and
• the frequency with which queries will be run against the table.
Rate x frequency = size. For example, with 2 engines, 5000 rows per minute per engine, and a select * from monSysStatement run every 5 minutes, "statement pipe max messages" should be greater than or equal to 5000 x 5 = 25000, and the result set will be about 50000 rows (25000 per engine x 2 engines). The errorlog and deadlock pipes are usually much smaller than the plan text, sql text and statement pipes.

STEPS TO ARCHIVE MDA DATA


For problem analysis, it is necessary to preserve transient data from the MDA tables into regular tables. Below is the list of tables which need to be archived regularly:

         monSysSQLText
         monSysPlanText
         monSysStatement
         monErrorLog
         monDeadLock
         monProcedureCache
         monEngine
         monSysWaits
         monProcessActivity
         monDeviceIO
         monIOQueue

To archive data from these tables, create new permanent tables with the same structure as the MDA tables, plus an extra timestamp column that records the start time of each archival run.

STEPS TO PURGE ARCHIVED MDA DATA


There is a stored procedure mda_data_purge in mon_db which deletes MDA data more than 15 days old.

Syntax:
use mon_db
go
exec mda_data_purge
go
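The body of mda_data_purge is not shown in this post; a minimal sketch of what such a procedure might look like, assuming the archive tables created in the next section with their timestamp column:

create procedure mda_data_purge
as
-- remove archived rows older than 15 days; repeat for each archive table
delete mon_db..monSysSQLText   where timestamp < dateadd(dd, -15, getdate())
delete mon_db..monSysStatement where timestamp < dateadd(dd, -15, getdate())
delete mon_db..monErrorLog     where timestamp < dateadd(dd, -15, getdate())
-- ... and so on for the remaining archive tables
go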

SQL TO ARCHIVE DATA FROM THE MDA TABLES


Setup Archive Tables:
select *, getdate() timestamp into mon_db..monSysSQLText from master..monSysSQLText where 1 = 2
select *, getdate() timestamp into mon_db..monSysPlanText from master..monSysPlanText where 1 = 2
select *, getdate() timestamp into mon_db..monSysStatement from master..monSysStatement where 1 = 2
select *, getdate() timestamp into mon_db..monErrorLog from master..monErrorLog where 1 = 2
select *, getdate() timestamp into mon_db..monDeadLock from master..monDeadLock where 1 = 2
select *, getdate() timestamp into mon_db..monProcedureCache from master..monProcedureCache where 1 = 2
select *, getdate() timestamp into mon_db..monEngine from master..monEngine where 1 = 2
select *, getdate() timestamp into mon_db..monSysWaits from master..monSysWaits where 1 = 2
select *, getdate() timestamp into mon_db..monProcessActivity from master..monProcessActivity where 1 = 2
select *, getdate() timestamp into mon_db..monDeviceIO from master..monDeviceIO where 1 = 2
select *, getdate() timestamp into mon_db..monIOQueue from master..monIOQueue where 1 = 2

Copy data to Historical Tables:
There is a procedure mda_data_archive, scheduled to run every 5 minutes, that archives data from the important MDA tables. It is wrapped in a UNIX executable (/optware/sybase/admin/oper/utils/appdba/mda_data_collector) and scheduled in crontab as follows:

05,10,15,20,25,30,35,40,45,50,55 * * * * /optware/sybase/admin/oper/utils/appdba/mda_data_collector >/dev/null 2>&1

Syntax:
use mon_db
go
exec mda_data_archive
go


USEFUL REPORTS FROM THE MDA TABLES

Monitoring Index Utilization


select "Database" = db_name(DBID), "Table" = object_name(ObjectID, DBID),
   IndID = IndexID, UsedCount, LastUsedDate, OptSelectCount, LastOptSelectDate
from mon_db..monOpenObjectActivity
where timestamp between startdate and enddate
order by UsedCount

Monitoring Table Usage


select "Database" = db_name(DBID), "Table" = object_name(ObjectID, DBID),
    IndexID, RowsInserted, RowsDeleted, RowsUpdated, LockWaits
from mon_db..monOpenObjectActivity
where timestamp between startdate and enddate
order by RowsInserted desc

Monitoring Table Scans


select "Database" = db_name(DBID), "Table" = object_name(ObjectID, DBID),
   IndID = IndexID, UsedCount, LastUsedDate
from mon_db..monOpenObjectActivity
where IndexID = 0 and UsedCount > 0
and timestamp between startdate and enddate
go

Identifying ‘hot’ tables


select TableName = object_name(ObjectID, DBID), IndexID, LogicalReads, PhysicalReads, Operations, LockWaits
from mon_db..monOpenObjectActivity
where timestamp between startdate and enddate
order by 1, 2
go

Analyzing Stored Procedure Performance (Elapsed Time)


select ProcName = isnull(object_name(ProcedureID, DBID), "UNKNOWN"),
  DBName = isnull(db_name(DBID), "UNKNOWN"),
  ElapsedTime = datediff(ms, min(StartTime), max(EndTime))
from mon_db..monSysStatement
where ProcedureID != 0
  and timestamp between startdate and enddate
group by SPID, DBID, ProcedureID, BatchID


Most Frequently Used Stored Procedures


select ProcedureName = isnull(object_name(ProcedureID, DBID), "UNKNOWN"),
 "Database" = db_name(DBID),
 "Execs" = count(*)
from mon_db..monSysStatement
where ProcedureID != 0
  and timestamp between startdate and enddate
group by DBID, ProcedureID
order by 3 desc
go

Determine Procedure Cache Hit Ratio

select Requests, Loads,
"Ratio" = convert(numeric(5,2),(100 - (100 * ((1.0 * Loads)/ Requests))))
from mon_db..monProcedureCache
where timestamp between startdate and enddate
go

Process Consuming the Most CPU, and Its Plan


select ps.SPID, ps.CpuTime,
       pst.LineNumber, pst.SQLText
from master..monProcessSQLText pst,
     master..monProcessStatement ps
where ps.SPID = pst.SPID
  and ps.CpuTime = (select max(CpuTime) from master..monProcessStatement)
order by SPID, LineNumber


Engine Statistics


select EngineNumber, SystemCPUTime "CPU_TIME_IN_I/O", UserCPUTime "CPU_TIME_IN_USER_PROCESS", IdleCPUTime "IDLE_CPU_TIME", CPUTime "TOTAL_CPU_TIME" from mon_db..monEngine
where timestamp between startdate and enddate
Note: UserCPUTime reflects bad queries (table scans in memory, cursors, etc.); SystemCPUTime reflects physical and network I/O.

Useful Stored Procedures


Attached is the list of useful stored procedures.



Unless noted otherwise, the common parameters are: @num_sec_delay (int), the seconds delay between output runs; @num_iter (int), the number of iterations; and @dont_format (char(1)), which suppresses output formatting when not null.

SP: sp__monbackup
Usage: sp__monbackup
Shows information about database backups from the MDA tables: the last time each database was backed up, whether the backup failed, and whether a backup is currently running.

SP: sp__moncache
Usage: sp__moncache @num_sec_delay, @num_iter, @dont_format
Cache statistics in vmstat/iostat format; run it either once or in a loop (@num_iter times with @num_sec_delay seconds between runs).

SP: sp__mondump
Usage: sp__mondump
MDA table dump. Dumps the current server state, a composite report of everything going on at the current time. Intended for post-mortems of problems on production systems where you do not have time to work out the cause because you must fix the problem immediately: run sp__mondump, fix the problem, and look at the output later to diagnose.

SP: sp__monengine
Usage: sp__monengine @num_sec_delay, @num_iter, @dont_format
Run once, the proc does NOT show a breakdown of %usage and %idle, because those numbers are only available since server start, which of course is not very useful. Uses vmstat/iostat format; run once or in a loop.

SP: sp__monio
Usage: sp__monio @num_sec_delay, @num_iter, @dont_format
I/O usage by device.

SP: sp__monlock
Usage: sp__monlock @num_sec_delay, @num_iter, @dont_format
Improved version of sp__lock with additional lock-state information. While that in itself is not that useful, watching the change in locks can be.

SP: sp__monlocksql
Usage: sp__monlocksql
Shows information about locks AND the SQL that causes them. Rather useful, although you need to run it from some form of windowed isql or it will scroll off and be unusable.

SP: sp__monnet
Usage: sp__monnet @num_sec_delay, @num_iter, @dont_format
Shows network I/O usage from the MDA tables.

SP: sp__monobj
Usage: sp__monobj @num_sec_delay, @num_iter, @dont_format, @object_name
Shows the highest-usage objects based on the MDA tables, using a heuristic to find the busiest objects:
LogicalReads/100 + PhysicalReads/10 + PhysicalWrites + RowsInserted + RowsDeleted + RowsUpdated + LockRequests + LockWaits

SP: sp__monopenobj
Usage: sp__monopenobj @num_sec_delay, @num_iter, @dont_format, @object_name
Shows I/O on open objects from the MDA tables.

SP: sp__monpwaits
Usage: sp__monpwaits @SPID, @num_sec_delay, @num_iter, @dont_format
Shows wait information for the spid of interest.

SP: sp__monrunning
Usage: sp__monrunning
Shows processes running for more than 100 ms.

SP: sp__monspid
Usage: sp__monspid @spid
Shows everything about a running spid. Can be used to investigate locks, blocks, slow-running procs, etc.

SP: sp__monsql
Usage: sp__monsql @spid, @dont_format
Shows currently executing SQL statements if no spid is passed, and all SQL history for a spid if one is passed.

SP: sp__montableusage
Usage: sp__montableusage
Shows comprehensive information about how tables are used in the db.

SP: sp__montopn
Usage: sp__montopn @spid
Top processes in use, ordered by cputime.

SP: sp__monunusedindex
Usage: sp__monunusedindex @dont_format, @no_print
Shows indexes that have not been used since server start.

SP: sp__monusedtables
Usage: sp__monusedtables @owner, @objname
Shows I/O on used tables.

SP: sp__monwaits
Usage: sp__monwaits @num_sec_delay, @num_iter, @dont_format
Shows wait info; vmstat/iostat format, run once or in a loop.




Source Code of SPs:-

These SPs have been emailed to the rediffmail account.


APPENDIX B

Useful Links



APPENDIX C


Other Useful SQLs used for performance monitoring:

CHECK BLOCKS

select spid,blocked from sysprocesses where blocked > 0

PHYSICAL_IO of blocked process

select spid,physical_io from sysprocesses where spid in (select blocked from sysprocesses where blocked > 0)

Details of processes that are blocked


select spid,suser_name(suid) Login,db_name(dbid),cmd,time_blocked from sysprocesses
where blocked > 0
order by time_blocked

select 'dbcc sqltext('+convert(varchar,spid)+')' from sysprocesses
where blocked > 0

select 'sp_showplan '+convert(varchar,spid)+',null,null,null' from sysprocesses
where blocked > 0

Details of blocking processes (the processes doing the blocking)

select spid,suser_name(suid) Login,db_name(dbid),cmd,cpu,loggedindatetime,physical_io from sysprocesses
where spid in (select blocked from sysprocesses where blocked > 0)
order by cpu desc

select spid,suser_name(suid) Login,db_name(dbid),cmd,cpu,loggedindatetime,physical_io from sysprocesses
where spid in (select blocked from sysprocesses where blocked > 0)
order by loggedindatetime

dbcc traceon(3604)

select 'dbcc sqltext('+convert(varchar,spid)+')' from sysprocesses
where spid in (select blocked from sysprocesses where blocked > 0)

select 'sp_showplan '+convert(varchar,spid)+',null,null,null' from sysprocesses
where spid in (select blocked from sysprocesses where blocked > 0)


CHECK PHYSICAL_IO OF LONG-RUNNING TRANSACTIONS

select a.spid,a.physical_io,db_name(b.dbid) 'DATABASE NAME' from sysprocesses a,syslogshold b
where a.spid = b.spid

LARGEST TABLES IN A DATABASE

select o.name, sum(s.pagecnt) * 2 SIZE_KB   -- pagecnt * 2 assumes a 2K page size
from sysobjects o, systabstats s
where o.id = s.id and o.type = 'U'
group by o.name
order by SIZE_KB desc