Channel: SQLServerCentral » SQL Server 2008 » SQL Server 2008 Performance Tuning » Latest topics
Viewing all 730 articles
Browse latest View live

CHECKDB Error

Hi Experts,

We got the error below from DBCC CHECKDB. Please help.

Msg 8951, Level 16, State 1, Line 2
Table error: table 'TB1' (ID 1694329496). Data row does not have a matching index row in the index 'TB1_IPAddress_NCI' (ID 3). Possible missing or invalid keys for the index row matching:
Msg 8955, Level 16, State 1, Line 2
Data row (1:36527917:6) identified by (TB1ID = 'DF1E2856-8D53-DF11-9752-0024E861B15C' and AccountID = 'F136841E-22AF-DC11-BE8A-000423B9CF59' and TB1Number = 'X00ASM52') with index values 'IPAddress = 1099810826 and TB1ID = 'DF1E2856-8D53-DF11-9752-0024E861B15C' and AccountID = 'F136841E-22AF-DC11-BE8A-000423B9CF59' and TB1Number = 'X00ASM52''.
Msg 8951, Level 16, State 1, Line 2
Table error: table 'TB1' (ID 1694329496). Data row does not have a matching index row in the index 'TB1_IPAddress_NCI' (ID 3). Possible missing or invalid keys for the index row matching:
Msg 8955, Level 16, State 1, Line 2

TIA
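Since both errors point at the nonclustered index (index ID 3, and index IDs 2 and up are always nonclustered), the index can usually be rebuilt from the intact data rows rather than repaired with CHECKDB. A sketch, assuming the database name is yours and the table/index names are taken from the error text:

[code="sql"]-- Disabling a nonclustered index discards its pages; the REBUILD then
-- recreates every index row from the (undamaged) data rows.
ALTER INDEX TB1_IPAddress_NCI ON dbo.TB1 DISABLE;
ALTER INDEX TB1_IPAddress_NCI ON dbo.TB1 REBUILD;

-- Verify the corruption is gone before trusting the database again
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;[/code]

This avoids REPAIR_ALLOW_DATA_LOSS entirely; it is only appropriate because the errors are confined to a nonclustered index, not the base data.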

Insert Trigger and @@rowcount problem - Performance Issue

In an insert trigger this code is sometimes quite slow: "SELECT @numrows = @@rowcount". The code is used to determine whether a row was inserted into the table. So, two questions:

1. Isn't this redundant? How can you get to an insert trigger if a row isn't being inserted?
2. Sometimes that line of code has a very long duration (like 37450 ms in a SQL trace). Any clues to point me in a direction?

My initial thought is to just delete the code, but I am working with a commercial package, so I have to be a little careful.

Thanks for reading!
John
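For reference, the pattern in question usually looks like the sketch below (object names are hypothetical). Note that the check is not redundant: an AFTER trigger fires once per statement, including statements like INSERT ... SELECT that happen to insert zero rows, so @@ROWCOUNT can legitimately be 0.

[code="sql"]CREATE TRIGGER trg_MyTable_Insert ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    -- @@ROWCOUNT must be captured as the very first statement:
    -- almost anything that runs before it (even a SET) resets it.
    DECLARE @numrows INT = @@ROWCOUNT;

    -- Bail out early when the triggering statement inserted nothing
    IF @numrows = 0 RETURN;

    -- ... rest of trigger body ...
END;[/code]

Reading @@ROWCOUNT itself is essentially free, so a 37-second duration on that line in a trace more likely reflects the statement waiting (blocking, recompilation) than the assignment itself.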

The best way to store Key Value type of data

What is the best way to store key/value type data? I mean, should we go with redundant data, or is there anything better than that? Thanks.
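A minimal sketch of the usual relational approach (an entity-attribute-value style table; all names here are illustrative, not from the original post):

[code="sql"]CREATE TABLE dbo.Settings
(
    EntityId     INT            NOT NULL,
    SettingKey   VARCHAR(100)   NOT NULL,
    SettingValue NVARCHAR(4000) NULL,
    CONSTRAINT PK_Settings PRIMARY KEY CLUSTERED (EntityId, SettingKey)
);

-- Typical lookup: the clustered key (EntityId, SettingKey)
-- turns this into a single index seek rather than a scan.
SELECT SettingValue
FROM dbo.Settings
WHERE EntityId = 42
  AND SettingKey = 'Theme';[/code]

The trade-off: this avoids redundant wide rows, but every value shares one (string-typed) column, so type safety and per-key constraints move into application code.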

Index Tuning never finishes

I have been given several large databases to tune. They have never had their statistics updated, and I cannot tell when the indexes were last rebuilt. I tried to set up a maintenance plan to rebuild all indexes; it ran for 12 hours and had not finished. I then modified the plan to reorganize the indexes instead, which did not fare any better. I then ran a query to find which indexes with over 1000 pages were fragmented. Three indexes were returned: one was 45% fragmented with over 4 million pages, and another was 90% fragmented with over 2 million pages.

Any suggestions as to how best to clean up these indexes? Or should I just let the process run over the weekend and see if it completes, and then apply a regular weekly tuning method?
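A sketch of the targeted approach (table name is hypothetical): identify only the indexes worth touching, then rebuild them one at a time instead of rebuilding everything in one maintenance-plan pass.

[code="sql"]-- Fragmented indexes over 1000 pages; LIMITED mode keeps the scan cheap
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000
  AND ips.avg_fragmentation_in_percent > 30
ORDER BY ips.page_count DESC;

-- Rebuild each offender individually, off-hours; SORT_IN_TEMPDB moves
-- the sort work out of the user database's files during the rebuild.
ALTER INDEX IX_BigIndex ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON);[/code]

Rebuilding an index also updates that index's statistics with a full scan, so the never-updated statistics problem shrinks as a side effect for the rebuilt indexes.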

Find out top memory consuming queries

Hi, Checking whether anybody has a T-SQL script to display the top 10 memory-consuming queries? Thank you.
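One sketch using the memory-grant DMV (this shows queries currently holding or waiting on grants, not a historical top 10, which is a limitation worth noting):

[code="sql"]SELECT TOP 10
       mg.session_id,
       mg.requested_memory_kb,
       mg.granted_memory_kb,
       mg.used_memory_kb,
       t.text AS query_text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
ORDER BY mg.requested_memory_kb DESC;[/code]

Rows with a NULL grant_time are queries still waiting for their memory grant, which is itself a useful symptom of memory pressure.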

Recursive CTE problem

Hi. I have a problem with this one ITVF. It was working fine, but I had to change it so that it limits humidity to a maximum and minimum during periods of time. The periods are located in the dbo.HumidityChangePeriods table, so I had to convert the function from just summing up humidity changes to a recursive CTE. This function is used in a view, so it gets called a lot. Can anyone share ideas on how to speed this up? I attached the execution plan.

[code="sql"]USE tempdb;

-- CREATE TABLES USED
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[numbers]') AND type in (N'U'))
DROP TABLE [dbo].[numbers]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[numbers]( [n] [int] IDENTITY(1,1) NOT NULL) ON [PRIMARY]
GO
SET IDENTITY_INSERT dbo.numbers ON;
WITH cte (n) AS
(
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns
)
INSERT INTO numbers (n) SELECT n FROM cte;
SET IDENTITY_INSERT dbo.numbers OFF;

IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[numbers]') AND name = N'NumberN')
DROP INDEX [NumberN] ON [dbo].[numbers] WITH ( ONLINE = OFF )
GO
CREATE UNIQUE CLUSTERED INDEX [NumberN] ON [dbo].[numbers] ( [n] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
WITH cte AS
(
    SELECT 1 AS periodId, CONVERT(DATE, '2000-01-01') AS periodBegin, CONVERT(DATE, '2000-02-29') AS periodEnd, 0.0 AS unitChange UNION ALL
    SELECT 2 AS periodId, CONVERT(DATE, '2000-03-01') AS periodBegin, CONVERT(DATE, '2000-04-01') AS periodEnd, 0.09375 AS unitChange UNION ALL -- 0.097
    SELECT 3, CONVERT(DATE, '2000-04-02'), CONVERT(DATE, '2000-06-15'), 0.16 UNION ALL        -- 0.162
    SELECT 4, CONVERT(DATE, '2000-06-16'), CONVERT(DATE, '2000-07-30'), 5.0/45.0 UNION ALL    -- 0.114 (actually 5/45)
    SELECT 5, CONVERT(DATE, '2000-07-31'), CONVERT(DATE, '2000-08-31'), 0.046875 UNION ALL    -- 0.048
    SELECT 6, CONVERT(DATE, '2000-09-01'), CONVERT(DATE, '2000-11-30'), -12.0/91.0 UNION ALL  -- -0.133 (actually -12/91)
    SELECT 7, CONVERT(DATE, '2000-12-01'), CONVERT(DATE, '2000-12-31'), 0.0
)
SELECT * INTO dbo.HumidityChangePeriods FROM cte;

CREATE UNIQUE CLUSTERED INDEX [HumiditychangeperiodsPeriodbeginPeriodendUC] ON [dbo].[HumidityChangePeriods] ( [periodBegin] ASC, [periodEnd] ASC)
WITH (STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

--- PROBLEMATIC FUNCTION STARTS HERE
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[calculateCurrentHumidity]') AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
DROP FUNCTION [dbo].[calculateCurrentHumidity]
GO
CREATE FUNCTION dbo.calculateCurrentHumidity(@identity INT, @beginDate DATE, @endDate DATE, @startHumidity FLOAT)
RETURNS TABLE
RETURN
-- generate all dates for time period
WITH days AS
(
    SELECT DATEADD(DD, numbers.n - 1, @beginDate) AS calcDay
    FROM dbo.numbers
    WHERE n < DATEDIFF(DAY, @beginDate, @endDate) + 1
)
-- move dates to year 2000 and add year as grouping
, dayCountsPerPeriod AS
(
    SELECT DATEADD(YEAR, -1 * YEAR(calcDay) + 2000, calcDay) AS movedDate, YEAR(calcDay) AS calcYear
    FROM days
)
-- calculate days and humidity change in period
, changes AS
(
    SELECT COUNT(*) AS days,
           SUM(unitChange) AS change,
           ROW_NUMBER() OVER (ORDER BY calcYear, periodId) AS rno
    FROM dayCountsPerPeriod
    JOIN dbo.HumidityChangePeriods AS periods ON movedDate BETWEEN periodBegin AND periodEnd
    GROUP BY calcYear, periodId
)
-- calculate humidity change recursively because humidity cannot go over 65 and below 20
, rcte AS
(
    SELECT CASE WHEN @startHumidity - changes.change > 65.0 THEN 65.0
                WHEN @startHumidity - changes.change < 20.0 THEN 20.0
                ELSE @startHumidity - changes.change END AS humidity,
           days, change, rno
    FROM changes
    WHERE rno = 1
    UNION ALL
    SELECT CASE WHEN humidity - changes.change > 65.0 THEN 65.0
                WHEN humidity - changes.change < 20.0 THEN 20.0
                ELSE humidity - changes.change END AS humidity,
           changes.days, changes.change, changes.rno
    FROM rcte
    JOIN changes ON rcte.rno + 1 = changes.rno
    -- maxrecursion is 100 so stop before that
    WHERE rcte.rno < 50
)
-- return last value
SELECT TOP 1 @identity AS id, CASE WHEN rcte.rno > 48 THEN -1 ELSE humidity END AS humidity
FROM rcte
ORDER BY rno DESC
GO

-- TESTING
CREATE TABLE #TestData
(
    testDataId INT NOT NULL,
    StartHumidity NUMERIC(10,4) NULL,
    BeginDate DATE NULL,
    EndDate DATE NULL,
    EndHumidity NUMERIC(10,4) NULL,
    Explanation VARCHAR(100) NULL
)
INSERT INTO #TestData (testDataId, StartHumidity, BeginDate, EndDate, EndHumidity, Explanation)
VALUES (1, 60, '2013-02-01', '2013-01-02', NULL, 'Begin date bigger than end date'),
       (2, 60, '2013-01-01', '2013-01-02', 60, 'winter time'),
       (3, 60, '2013-01-01', '2013-05-01', 52.36, ''),
       (4, 60, '2013-01-01', '2013-09-01', 38.5, ''),
       (5, 60, '2013-09-01', '2013-12-01', 65, 'max humidity is 65%'),
       (6, 25, '2013-04-02', '2013-11-06', 28.7, 'Minimum reached and then starting to wet'),
       (7, 65, '2012-06-16', '2013-11-06', 52.2, ''),
       (8, 65, '2010-06-16', '2013-11-06', 33.2, ''),
       (9, 60, '2013-01-01', '2014-12-31', 41.0, '');

SELECT td.testDataId, result.id, td.BeginDate, StartHumidity,
       result.humidity AS currentHumidity, EndHumidity AS expectedHumidity
FROM #TestData AS td
OUTER APPLY dbo.calculateCurrentHumidity(testDataId, BeginDate, EndDate, StartHumidity) AS result
--WHERE testDataId IN (5);

-- CLEAN UP
DROP TABLE #TestData;
DROP TABLE [dbo].[HumidityChangePeriods];
DROP TABLE [dbo].[numbers];[/code]

CTE WITH CONDITION TAKING TOO LONG............

Hi Folks,

[code="sql"];WITH Cte AS
(
    --- some joining query here with 3 tables
    --- first table has 300 million (30 crore) rows, second has 60 million (6 crore), third has 791 rows
)
SELECT * FROM cte[/code]

The above query runs in 4 minutes 28 seconds.

[code="sql"];WITH Cte AS
(
    --- same joining query with the same 3 tables
)
SELECT * FROM cte WHERE booleancolumn <> 0
-- or
SELECT * FROM cte WHERE booleancolumn = 1[/code]

This query runs for more than two hours. The column in the WHERE condition doesn't have any indexes; I assumed none was needed, because it holds only 0 or 1. Am I doing anything wrong? Please share your comments. I can't post the exact query.

Performance issues with SP

Hi,

We have a stored procedure which updates table A every 5 minutes all day (9 AM to 7 PM). Table A holds only one day's worth of data, and at the end of the day it moves all its data to table B. Table B has 80 million rows and maintains all the historical data.

The SP usually finishes in 3 seconds, but randomly every day the execution time increases to 23 seconds and users start getting timeouts. I have to run UPDATE STATISTICS on tables A and B and recompile the SP to get execution back to 3 seconds. The execution plan is the same before and after the issue. We run a reindex and update statistics with full scan on both tables every morning.

Any help is appreciated. Please let me know if you need more details.

Thanks,
Sree

procedures used

Hi Experts,

We have a list of procedures used when accessing a particular application. Is there any way to check how long these procedures take to complete? If I run Profiler and click the app buttons, I can see those procedures running behind the scenes. Can I get the time each one ran? Is there any other way besides Profiler?
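A lighter-weight alternative to Profiler is the procedure-stats DMV, available from SQL Server 2008. A sketch (aggregates apply only while the procedures' plans remain in cache):

[code="sql"]-- Cumulative timings per cached procedure; elapsed times are in microseconds
SELECT TOP 20
       OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_us,
       ps.last_elapsed_time,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_elapsed_time DESC;[/code]

Run it after clicking through the application, and the procedures you saw in Profiler should appear with their accumulated execution counts and durations.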

Hit on DB from a User

Hi Experts,

We have a DB user named Supply, and the application hits the DB with this username. Is there any way to find the hits happening to the database from it?

Thanks in advance.
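One way to see the activity for that login right now is the session and request DMVs. A sketch, assuming 'Supply' is the login name the application connects with:

[code="sql"]SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       r.command,
       r.status,
       t.text AS current_sql
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
  ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.login_name = 'Supply';[/code]

This shows a point-in-time snapshot; for a historical record of every hit you would need a server-side trace or login auditing instead.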

Full Text Index / Contains

How do I optionally include a CONTAINS search in my query, based on the input? In the situation below, I don't want to search if the criteria is empty. In non-FTS I would do the following:

[code="sql"]DECLARE @Criteria1 varchar(10) = 'find', @Criteria2 varchar(10) = ''

SELECT *
FROM TableX
WHERE (@Criteria1 = '' OR Column1 = @Criteria1)
  AND (@Criteria2 = '' OR Column2 = @Criteria2) -- Correctly returns True, if no criteria supplied[/code]

How do I implement this using CONTAINS? I tried different approaches, like:

[code="sql"]SELECT *
FROM TableX
WHERE (CONTAINS(Column1, @Criteria1))
  AND (CONTAINS(Column2, @Criteria2)) -- Error regarding no search word[/code]

[code="sql"]SELECT *
FROM TableX
WHERE (CONTAINS(Column1, @Criteria1))
  AND (CONTAINS(Column2, @Criteria2)) -- Returns False, excluding the whole record (unwantedly)[/code]

[code="sql"]SELECT *
FROM TableX
WHERE (@Criteria1 = '' OR CONTAINS(Column1, @Criteria1))
  AND (@Criteria2 = '' OR CONTAINS(Column2, @Criteria2)) -- Runs very slowly and inefficiently[/code]

Do I have to create a separate query for each combination of parameters, or is there a certain way of writing the CONTAINS statement? In other words, how do I get CONTAINS to return True immediately when the criteria is empty?
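One common workaround (a sketch, not the only option) is to branch explicitly so that every path the optimizer sees has a fixed set of CONTAINS predicates; this avoids both the "no search word" error and the slow OR-combined plan:

[code="sql"]IF @Criteria1 = '' AND @Criteria2 = ''
    SELECT * FROM TableX;
ELSE IF @Criteria2 = ''
    SELECT * FROM TableX WHERE CONTAINS(Column1, @Criteria1);
ELSE IF @Criteria1 = ''
    SELECT * FROM TableX WHERE CONTAINS(Column2, @Criteria2);
ELSE
    SELECT * FROM TableX
    WHERE CONTAINS(Column1, @Criteria1)
      AND CONTAINS(Column2, @Criteria2);[/code]

The branch count grows with the number of optional criteria, so beyond two or three parameters, building the statement dynamically with sp_executesql (including only the CONTAINS clauses for non-empty criteria) scales better.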

Tuning query with full-text search

I have a problem with the execution time for the query below. It is 2-3 seconds; I can accept times below 1 second. I use this query for the search engine in an online shop. I think the problem is the execution time of the full-text search and of inventSumShopIntegration_ART. InventSum uses an index seek, but the query cost is very high.

1. How can I optimize my query?
2. Why does the index seek on inventSumShopIntegration_ART have a very high cost?

I attached the query plan: http://www.sendspace.com/file/1pw9p9

Indexes on the tables:

1. ARTAOLINVENTTABLE
a) IT_ITEMNAMEIDX - clustered, unique - full-text index; fields: ITEMNAME
b) IT_ITEMSEARCHIDX - unique; fields: ItemId, ItemBrand, ImportIndex, QuotationStatus (included columns: salesUnit, ItemBrand, Amount)
2. ECPPRESENTATION
a) ECP_ITEMSEARCHIDX - clustered, non-unique; fields: primaryPhoto_ART, imageName_ART, hasImage (included columns: RefId)
b) ECP_ITEMIDIDX - non-unique, non-clustered; fields: RefId
c) PK_ECPPresentation - clustered, unique; fields: RecId
3. INVENTITEMBARCODE
a) IB_ITEMIDX - clustered, unique; fields: ItemId, ItemBarCode
b) IB_ITEMIDIDX - non-unique, non-clustered; fields: itemId
c) IB_BARCODEIDX - non-unique, non-clustered; fields: itemBarCode
4. INVENTSUMSHOPINTEGRATION_ART
a) IS_ITEMIDX - non-clustered, unique; fields: availphysical, itemId
b) PK_INVENTSUM - unique, clustered; fields: itemId
c) IS_AVAILIDX - non-unique, non-clustered; fields: availphysical

[code="sql"]SELECT DISTINCT IT.ITEMID, IT.ITEMNAME, IT.ITEMBRAND, ISU.AVAILPHYSICAL,
       IT.AMOUNT, IT.SALESUNIT, IT.ImportIndex, EP.IMAGENAME_ART
FROM ARTAOLINVENTTABLE IT
LEFT JOIN INVENTITEMBARCODE IB
  ON IT.ITEMID = IB.ITEMID
INNER LOOP JOIN INVENTSUMSHOPINTEGRATION_ART ISU
  ON IT.ITEMID = ISU.ITEMID
LEFT JOIN ECPPRESENTATION EP
  ON IT.ITEMID = EP.REFID AND EP.HASIMAGE = 1 AND EP.PRIMARYPHOTO_ART = 1
WHERE (CONTAINS(IT.ITEMNAME, '("Papier*" OR FORMSOF(THESAURUS, Papier)) AND ("ksero*" OR FORMSOF(THESAURUS, ksero)) AND ("A3*" OR FORMSOF(THESAURUS, A3))')
       OR IT.ITEMID = 'Papier ksero a3'
       OR IT.IMPORTINDEX = 'Papier ksero a3'
       OR IB.ITEMBARCODE = 'Papier ksero a3')
  AND (ISU.AVAILPHYSICAL > 0 OR ISU.AVAILPHYSICAL = 0)
  AND (IT.QUOTATIONSTATUS = 1 OR IT.QUOTATIONSTATUS = 2)
ORDER BY IT.ITEMNAME ASC[/code]

Stored Procedure Running Slow via SSIS vs Management Studio

I've been having some pesky tuning problems with stored procedures running via DTEXEC and through BIDS. When I run the stored proc (right-click, run task), it is slow, very slow. When I run "EXEC sproc_Name" in Management Studio, it runs in seconds. I've tried many things:

1. Ending all the statements within the sproc with semicolons.
2. Dropping and recreating the sproc.
3. Adjusting different BIDS options, like IsStoredProc (off and on).

I tuned each and every statement within the sproc, and when I run it in Management Studio, it is good; in BIDS, not so much. Finally I just dumped the T-SQL code into the task, and that seemed to work, but it eventually slowed down again. But then when I ran the code by itself, it was OK.

BIDS is on a different server than SQL Server, but I ran Management Studio from the same server as BIDS. When the sproc is running, I look at Activity Monitor and I can see that the sproc/code is running. I don't see any blocks or wait statistics, so I'm at a loss as to what could be causing it to run so slowly. I would think that once SQL Server got the command from SSIS, it would run the same as if it had gotten the command from Management Studio.

Please help/advise. Thanks.
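One thing worth checking (an assumption on my part, since the "fast in SSMS, slow from another client" symptom often comes down to different session SET options producing separately cached, differently sniffed plans): compare the plan attributes of the cached copies of the procedure.

[code="sql"]-- Two cached entries for the same statement with different set_options
-- values mean SSIS and SSMS are compiling and using separate plans.
SELECT st.text, qs.plan_handle, pa.attribute, pa.value
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE st.text LIKE '%sproc_Name%'   -- substitute the real procedure name
  AND pa.attribute = 'set_options';[/code]

ARITHABORT is the usual culprit: SSMS sets it ON by default while other clients often do not, so each connection type gets its own plan and only one of them may have been compiled with representative parameters.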

Monitoring Software

Hello Everybody,

I am sure this was discussed earlier, so any links will be appreciated. We are in the process of getting new monitoring software for our SQL Server production environments. The choice is between Foglight, Idera SQL Diagnostic Manager, Embarcadero DB PowerStudio XE for DBA, and Redgate. I personally like Idera, having worked with it a while back, but it is quite expensive.

Additionally, someone mentioned MS System Center for SQL Server; I had never heard of it until today. Is anybody using it? Does anyone have any fact sheets, or has anyone run comparisons of the above software, including MS System Center?

Thank you

Stored procedure execution plan reuse

I have a very complex stored procedure that takes about 45 seconds to run the first time; after that it takes at most a second to rerun, with the same parameters or even different ones. Every once in a while the procedure takes a long time again, because SQL Server is recompiling it. Is there a way to save the execution plan and reuse it under my control, not under SQL Server's control?
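SQL Server 2008 does offer a mechanism for this: plan guides can freeze the plan that is currently in cache. A sketch (the procedure name and plan-guide name are hypothetical):

[code="sql"]-- Find the cached plan for the expensive procedure
DECLARE @plan_handle varbinary(64);
SELECT TOP 1 @plan_handle = qs.plan_handle
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%MyComplexProc%';

-- Freeze that plan so future compilations reuse it
EXEC sp_create_plan_guide_from_handle
     @name        = N'PG_MyComplexProc',
     @plan_handle = @plan_handle;[/code]

The trade-off is that a frozen plan no longer adapts to data changes, so the occasional 45-second recompile is traded for the risk of a permanently stale plan; OPTION (KEEPFIXED PLAN) on the key statements is a softer alternative that only suppresses statistics-driven recompiles.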

Trying to gauge performance between production workload run on same DB with different FK's and indexes...

I've been tasked with checking to see if a particular workload file (taken from our production environment over a 3-hour span) will run quicker on our DEV box against the same DB [i]with[/i] a bunch of FKs and indexes on it, and again with[i]out[/i] those objects. I'm hoping that my approach is correct:

1) Restore the DEV db from production.
2) Run the FK and index create script.
3) Replay the workload.
4) Note the elapsed time.
5) Re-restore the production DB.
6) Replay the workload (without running the FK and index create script).
7) Compare "replay times" for my ultimate answer.
8) Have a beer.

Does that sound about right? Some other questions: even though the workload file was gathered from the production box with a filter on DBname like '%<mydbname>%', when I replay the file on the DEV box I see that some commands are run against other databases (not just system). How is that happening?

I want to make sure I'm on an even playing field for each replay. Do I need to issue a "dropcleanbuffers" or "freeproccache" command on the databases after restoring from production?

Thanks SSC!
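On the even-playing-field question, a sketch of the usual reset run immediately before each replay (server-wide and intrusive, so only on the DEV box):

[code="sql"]CHECKPOINT;             -- flush dirty pages so DROPCLEANBUFFERS can evict everything
DBCC DROPCLEANBUFFERS;  -- empty the buffer pool: both replays start with cold data cache
DBCC FREEPROCCACHE;     -- clear cached plans: both replays pay compilation cost equally[/code]

Running all three before replay 1 and again before replay 2 means neither run inherits warm caches from the restore or from the other run, so the elapsed-time comparison reflects the FK/index difference rather than caching luck.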

Poor performance of parameterised NOT IN clause - SQL 2008

I'm comparing the logical reads done by two slightly differently phrased parameterised SQL statements that return the same results from an SAP database table:

[code="sql"]DECLARE @P1 varchar(3) = '400';
DECLARE @P2 varchar(2) = '30';
DECLARE @P3 varchar(2) = '60';

SELECT "COUNTER" AS c, "REFCOUNTER" AS c, "STATUS" AS c
FROM "CATSDB"
WHERE "MANDT" = @P1
  AND NOT ("STATUS" = @P2 OR STATUS = @P3)
ORDER BY "COUNTER"[/code]

In my case the query stats say: Scan count 2, logical reads 22084. But when I run:

[code="sql"]DECLARE @P1 varchar(3) = '400';
DECLARE @P2 varchar(2) = '30';
DECLARE @P3 varchar(2) = '60';

SELECT "COUNTER" AS c, "REFCOUNTER" AS c, "STATUS" AS c
FROM qe7."CATSDB"
WHERE "MANDT" = @P1
  AND NOT "STATUS" IN (@P2, @P3)
ORDER BY "COUNTER"[/code]

the query stats say: Scan count 2, logical reads 113454.

The CATSDB table has an index consisting of MANDT, STATUS, REFCOUNTER. Why do these two statements do such different numbers of logical reads and take such different times?

Thanks.

How to make "Update Statistics" faster?

Hi Team,

We have a table with a one-column primary key and a unique non-clustered index with 16 columns. The table has more than 20,695,780 rows, and every day between 30,000 and 35,000 new records are inserted. After the insert, many other processes refer to this table. Every day, after the insert and before proceeding further, we run UPDATE STATISTICS on this table with a 30 percent sample; after that, the remaining processes run smoothly. The problem is that the UPDATE STATISTICS takes more than 15 minutes.

We need one of the following:
- a way to make this UPDATE STATISTICS faster, or
- a way to run the update stats on this table in parallel with the other processes, or
- to use REBUILD INDEXES instead of UPDATE STATISTICS (we need to know whether it updates all statistics or not), or
- any other solution by which we can save these 15 minutes.

Please advise.
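Two options worth testing, sketched below with hypothetical object names. On the REBUILD question: rebuilding an index updates that index's statistics with the equivalent of a full scan as a side effect, but it does not touch column statistics, so it is not a full replacement for UPDATE STATISTICS on the table.

[code="sql"]-- Option 1: update only the statistics the nightly insert actually skews,
-- instead of every statistic on the table
UPDATE STATISTICS dbo.BigTable (IX_BigTable_Unique16)
WITH SAMPLE 30 PERCENT;

-- Option 2: rebuild the one index whose statistics matter most;
-- this refreshes its stats with a full scan as a side effect
ALTER INDEX IX_BigTable_Unique16 ON dbo.BigTable REBUILD;[/code]

Since roughly 30,000 new rows against 20 million existing ones changes the distribution very little, a smaller sample on just the affected statistics may give equally good plans in far less than 15 minutes.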

Improve Performance - Update query with OR in WHERE

Hi! How do we improve the performance of this UPDATE query? Currently it takes 1 minute to update 17,643 rows.

Query:
[code="sql"]Declare @date datetime = '09-Dec-2013'

update #tmp_sp_abc
set test = rtrim(xyz_test)
from #tmp_sp_abc t1, t_h_bg_pc_it t2
where (t2.id_i = t1.i or t2.id_s1 = t1.s)
  and t1.r_type = 1
  and t2.[date] = @date[/code]

Table row counts:
- #tmp_sp_abc -> 125,352 rows
- t_h_bg_pc_it -> 14,798 rows

t_h_bg_pc_it has 300 columns with a primary key on the id_i column, and #tmp_sp_abc has 11 columns with no primary key and no indexes. I found that the OR condition is the root cause of the time consumption, but I can't change it. I tried adding indexes on:
- table t_h_bg_pc_it, columns [xyz_test], [id_i], [id_s1], [date]
- table #tmp_sp_abc, columns [i], [s], [r_type] INCLUDE [test]

but that saved only 5 seconds. Attaching the execution plan snaps (without the above indexes and with them).

[img]http://i.stack.imgur.com/JaWCV.jpg[/img]

Please advise.
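A common rewrite to test (a sketch using the same table and column names as the post, with one assumption flagged in the comment): split the OR into two UPDATEs so that each can seek on its own join column instead of forcing a scan.

[code="sql"]DECLARE @date datetime = '09-Dec-2013';

-- First pass: match on id_i (covered by the primary key)
UPDATE t1
SET test = RTRIM(t2.xyz_test)
FROM #tmp_sp_abc AS t1
JOIN t_h_bg_pc_it AS t2 ON t2.id_i = t1.i
WHERE t1.r_type = 1
  AND t2.[date] = @date;

-- Second pass: match on id_s1 for rows the first pass did not touch.
-- ASSUMPTION: test is NULL before the update; if not, track updated
-- rows with a flag column instead of this IS NULL filter.
UPDATE t1
SET test = RTRIM(t2.xyz_test)
FROM #tmp_sp_abc AS t1
JOIN t_h_bg_pc_it AS t2 ON t2.id_s1 = t1.s
WHERE t1.r_type = 1
  AND t2.[date] = @date
  AND t1.test IS NULL;[/code]

The semantics differ slightly from the original OR (which lets either match win nondeterministically when both apply), so verify the row counts match before adopting it.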

Improve SIMPLE SELECT to return 2 LAC rows faster?

Hi Team,

We have a stored procedure in which we create a temporary table with more than 200,000 (2 lakh) rows and 30 columns: 2 datetime, 4 INT, 5 numeric(20,3), and the rest VARCHAR(500). The SP returns a simple SELECT of this temporary table as its final output. The problem is that this final SELECT statement takes more than 3 minutes to list all the records.

We tried adding an identity column to the temp table with a non-clustered index on it; the final SELECT does not include this column in the select list, only in the ORDER BY clause. But it still takes more than 2.5 minutes. We need a solution to get this SELECT statement's output faster.

Please advise.