Channel: SQLServerCentral » SQL Server 2008 » SQL Server 2008 Performance Tuning » Latest topics

Rebuilding Indexes on a large table is killing our log file

Hello everyone,

I have the following note table:

id pk int
timestamp datetime, indexed
orderid int, clustered index
note varchar(max)

Fill factor: 90%. Table size: 18 GB. Database size: 110 GB.

Our reports start out at the Order table and we join to notes when needed off the order id; that is why the clustered index is on the orderid column. This table gets fragmented to 8% in three weeks' time. When we rebuild indexes on this table, it jacks our log up to 70 GB+. Our replication moves all database changes to another server in another part of the country. Normally this replication works great, including when we reindex other tables, but reindexing this table takes 2 hours and it takes the replication 24 hours to recover.

Solutions we have tried:

- Reorganize index instead of rebuild: takes longer, and the log actually grows larger.
- Rebuild one index at a time: the timestamp index is fine, but the clustered index is the issue.
- Set our log to simple recovery mode: since the log file gets hit just as much, but truncates often, our replication spool still sees the change.

We used to have the clustered index on the timestamp column 3 years ago, and we found that our reports ran much faster once we switched to the orderid field.

Any suggestions?

Thank you,
Jason
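A minimal sketch of the one-index-at-a-time approach with a log backup between the two rebuilds, so the log can truncate between the two large transactions. All names (dbo.Note, the indexes, the NotesDB database, the backup path) are hypothetical, and note the caveat: this limits how large the log file itself grows, but the replication log reader still sees every logged change.

[code="sql"]-- Hypothetical names throughout; assumes FULL recovery with log backups in place.
ALTER INDEX IX_Note_Timestamp ON dbo.Note REBUILD WITH (SORT_IN_TEMPDB = ON);
BACKUP LOG NotesDB TO DISK = N'X:\Backup\NotesDB_log_1.trn';  -- let the log truncate

ALTER INDEX IXC_Note_OrderId ON dbo.Note REBUILD WITH (SORT_IN_TEMPDB = ON);
BACKUP LOG NotesDB TO DISK = N'X:\Backup\NotesDB_log_2.trn';[/code]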

Gathering performance information

I have been tasked to monitor the performance of queries running on our dev/test machine. Can I store the output from using SET STATISTICS TIME ON? I want to monitor the execution times of stored procedures when used by SSRS reports.
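One alternative worth noting: rather than capturing SET STATISTICS TIME output, sys.dm_exec_procedure_stats (available in SQL Server 2008+) exposes cumulative per-procedure timings that can be queried and stored on a schedule. A minimal sketch:

[code="sql"]SELECT DB_NAME(ps.database_id) AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_microseconds,
       ps.total_worker_time  / ps.execution_count AS avg_cpu_microseconds
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_elapsed_time DESC;[/code]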

Serious intermittent performance issues

Hi,

We are getting intermittent performance issues on our SQL Server that happen once or twice a month. When it happens the whole SQL Server becomes unresponsive and I have to use the dedicated administrator connection to connect. The last two times it happened I looked in sys.dm_exec_requests and could see that there is one stored procedure in particular that has about 70 entries. The first of those two times we just failed over to another node, so I didn't really have time to look into the issue; the second time (yesterday) I made sure that we gathered some info before failing over.

I found that again there were about 70 lines in sys.dm_exec_requests for the stored procedure in question. I thought that I would comment out the SP, and sure enough the requests for this SP started to disappear; however, this did not fix the problem overall. We still had about 70 lines in sys.dm_exec_requests for various SPs.

Of these, many have command EXECUTE, wait_resource OBJECT: 5:1417080505:0 [COMPILE], wait_type LCK_M_X, and sql_handle and plan_handle of 0x000000000000000000000000000000000000000000000000. These ones have the highest wait times of all the requests. Many other lines have wait_type LCK_M_SCH_M and wait_resource of either METADATA: database_id = 5 SECURITY_CACHE($hash = 0x2532319d:0x1) or METADATA: database_id = 5 METADATA_CACHE($hash = 0x4ae22766:0x0).

As a side note, we do have one SP that calls OPEN SYMMETRIC KEY and CLOSE SYMMETRIC KEY. This stored procedure was running at the time the issue was happening, and while it was running there were a few exec requests with wait_resource OBJECT: 9:366624349:0 [COMPILE]. Using SQL Load Generator on our dedicated dev instance and loading it up with 20 concurrent requests running this SP, I can get it to cause the OBJECT: 9:366624349:0 [COMPILE] wait_resource.

There are no jobs running that would be modifying schema; however, we do have a job running every 5 minutes that does a bulk insert of data for each customer. Maybe this contributed to the LCK_M_SCH_M locks?

Does anybody have any idea as to what could be the issue here? Any help would be really great. I have attached an Excel dump of the exec requests if that helps, and I can provide more info if I have not been clear enough.

Our setup is 32-bit, dual-core Xeon 3 GHz, 6 GB RAM (AWE not enabled), if this helps.
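For the next occurrence, a minimal sketch of a capture query that records the blocked requests, their wait details, and the statement text (where the handle is not zeroed out) before failing over:

[code="sql"]SELECT r.session_id, r.status, r.command, r.wait_type,
       r.wait_resource, r.wait_time, r.blocking_session_id,
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50          -- skip system sessions
ORDER BY r.wait_time DESC;[/code]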

Fetching value of previous record

Hi,

I am facing a performance issue when fetching the value of the previous record. My query goes here:

[code="sql"]SELECT a.nTokenNo, MAX(a.nSeqNo)
FROM tmptableA a
LEFT JOIN tmptableB b ON a.nTokenNo = b.nTokenNo AND b.nSeqNo < a.nSeqNo[/code]

tmptableA contains around 35,000,000 records and tmptableB contains 4,000,000 records.

Regards,
Saumik
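The b.nSeqNo < a.nSeqNo condition is a triangular join, which gets very expensive at these row counts. A minimal sketch of the usual SQL Server 2008 pattern for "previous record" using ROW_NUMBER (shown on tmptableA alone; the CTE name is illustrative, and whether it fits the two-table requirement depends on what the MAX is for):

[code="sql"]WITH Numbered AS
(
    SELECT nTokenNo, nSeqNo,
           ROW_NUMBER() OVER (PARTITION BY nTokenNo ORDER BY nSeqNo) AS rn
    FROM tmptableA
)
SELECT cur.nTokenNo, cur.nSeqNo, prev.nSeqNo AS previous_nSeqNo
FROM Numbered AS cur
LEFT JOIN Numbered AS prev
       ON prev.nTokenNo = cur.nTokenNo
      AND prev.rn = cur.rn - 1;[/code]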

Is UNION ALL on 2 SELECTs better performance-wise than 2 INSERTs into a @Table variable and then SELECT * FROM @Table?

If I have an SP in which the following actions occur:

1 - DECLARE @Table TABLE (.......)
2 - INSERT INTO @Table SELECT F1, F2, F3, ... FROM ..joins.. WHERE Somefield = @Somefield_value
3 - INSERT INTO @Table SELECT F1, F2, F3, ... FROM ..joins.. WHERE SomeOTHERfield = @SomeOTHERfield_value
4 - SELECT * FROM @Table

[u]Question[/u]: Can I expect some performance benefit if I rewrite the above 4 steps as one SELECT with OR in the WHERE clause, or if I use UNION ALL on 2 SELECT statements? Should UNION ALL or UNION in most cases necessarily be better performance-wise than using a @Table variable or one big SELECT statement?

Thanks for your ideas!
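A minimal sketch of the UNION ALL rewrite; dbo.SomeTable stands in for the joins elided in the post, and the variable declarations are illustrative. UNION ALL avoids both populating the table variable and the duplicate-removing sort that plain UNION would add:

[code="sql"]DECLARE @Somefield_value INT, @SomeOTHERfield_value INT;  -- illustrative types

SELECT F1, F2, F3
FROM dbo.SomeTable
WHERE Somefield = @Somefield_value

UNION ALL

SELECT F1, F2, F3
FROM dbo.SomeTable
WHERE SomeOTHERfield = @SomeOTHERfield_value;[/code]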

yikes

Have a prod server with a 3rd-party app which is buried this morning. SQL CPU is at 95%, there's no blocking or deadlocking, and we're getting lots of suspended SQL statements. No long-running queries.
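A minimal sketch of one quick triage step in this situation: aggregate wait statistics often point at what the suspended requests are waiting on. The exclusion list of benign waits here is abbreviated and illustrative:

[code="sql"]SELECT TOP (10) wait_type, waiting_tasks_count,
       wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;[/code]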

Will the covering index be used by one query only?

If I create a covering index after analyzing a specific query like:

[code="sql"]SELECT F3, F4, F5 FROM Table1 WHERE F1 = @F1 AND F2 = @F2[/code]

with the index created as follows:

[code="sql"]CREATE INDEX IXc_Table1__F1F2
ON Table1 (F1, F2)
INCLUDE (F3, F4, F5)[/code]

my question is: will this index be used only by this specific query, or can other queries start using it as well? By other queries I mean queries like:

[code="sql"]SELECT F3, F4 FROM Table1 WHERE F1 = @F1 AND F2 = @F2[/code]

or

[code="sql"]SELECT F3, F4, F5 FROM Table1 WHERE F1 = @F1[/code]

(queries with a different number of fields in the SELECT part and different WHERE clause predicates, but which still partially contain fields used in both the index key and INCLUDE lists)?
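A minimal sketch of how to check, after the fact, whether other queries have in fact been using the index: sys.dm_db_index_usage_stats counts seeks, scans, and lookups per index:

[code="sql"]SELECT OBJECT_NAME(us.object_id) AS table_name, i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.last_user_seek
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
  AND i.name = N'IXc_Table1__F1F2';[/code]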

Running Client-side Profiler trace or Server-side trace?

A reference like this one, http://support.microsoft.com/kb/929728, suggests that the SQL Server instance is affected considerably while a client-side GUI-based trace is running. It points out that such an adverse effect is observed using SQL 2000 or 2005. Another article here, [url=http://www.sqlservercentral.com/articles/Performance+Tuning/71549/]http://www.sqlservercentral.com/articles/Performance+Tuning/71549/[/url], suggests running a server-side automated trace (not using the Profiler GUI but T-SQL/Agent).

I plan to run a trace capturing several SP-related events only, but I want to run it for a couple of days on production servers (one SQL Server is 2008 and another 2012). The first KB article mentioned above says that symptoms such as noticeable server slowdown apply to versions 2000 and 2005. Does anyone know if this concern has diminished in versions 2008 and 2012? Or does the same hold for the 2008 and 2012 servers I am going to run this long trace on, so that I should still consider server-side traces?
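A minimal sketch of the server-side alternative, writing to a file with no GUI in the loop. The path is illustrative, and only two event/column pairs are wired up here (a real trace script sets many more):

[code="sql"]DECLARE @traceid INT, @maxfilesize BIGINT, @on BIT;
SET @maxfilesize = 512;   -- MB per rollover file
SET @on = 1;

-- Path is illustrative; SQL Server appends .trc automatically.
EXEC sp_trace_create @traceid OUTPUT, 0, N'X:\Traces\sp_trace', @maxfilesize, NULL;

EXEC sp_trace_setevent @traceid, 10, 1, @on;    -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 10, 13, @on;   -- RPC:Completed, Duration

EXEC sp_trace_setstatus @traceid, 1;            -- start the trace
[/code]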

What performance metrics to show before and after optimization/tuning to demonstrate improvement

Question: I am optimizing the performance of a few dozen SPs that I identified via a Profiler trace as taking the most time while being the ones most often executed, and did certain things to improve performance for those particular SPs: created some indexes, modified some T-SQL inside those SPs, etc.

Now, after I roll out those changes to production, I will need to show that my improvements/optimization actually accomplished something and that the performance metrics are now improved. Is the best way to prove that the changes brought positive results to also run a Profiler trace and note that the same SPs' avg(duration) and Reads have become smaller values? Or is there something else that is useful to show that the modifications (mainly a few dozen new indexes) have worked and performance is improved? I am thinking about what would be most useful to include in my report to management that will show improvement after this tuning/optimization.

Thanks.
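A minimal sketch of the before/after comparison, assuming each trace is saved to a table (dbo.TraceData is an illustrative name): running the same aggregation over the "before" and "after" tables gives directly comparable numbers per procedure:

[code="sql"]SELECT ObjectName,
       COUNT(*)      AS executions,
       AVG(Duration) AS avg_duration_us,
       AVG(Reads)    AS avg_reads
FROM dbo.TraceData
WHERE EventClass = 10               -- RPC:Completed
GROUP BY ObjectName
ORDER BY COUNT(*) * AVG(Duration) DESC;[/code]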

Does Profiler Trace capture the same data as shown by STATISTICS IO ON?

[u]Question about STATISTICS IO[/u]:

1) Is the info provided when a SQL statement is run after SET STATISTICS IO ON the same as the data collected by a Profiler trace? (Reads, for example: is it the same as the logical reads shown by STATISTICS IO ON?)

In other words, I am trying to figure out what else I need to show, after optimization is applied, as metrics demonstrating improved performance. For example, if I run a Profiler trace, should I just point out that avg(Duration) and/or avg(Reads) for the same SP is a lower number compared to the previous trace?

(This is a peculiar situation: I was given the general task to improve SP performance on the server, but was not told what user processes initiate the SPs or what area of the application(s) is slow or under-performing in any way. So what I did was run the trace and simply identify the top 30 SPs that are often executed, sorted by Count*Avg(duration) DESC, to see which ones take the most time on the server. My improvement was targeted around those SPs and the tables/indexes that the SQL in those SPs accesses. I have all that trace data and analysis saved. Now, upon applying the suggested changes in production, I want to show certain improvement, and I will use the same metrics from a new Profiler trace.)

2) Or should I still [b]test-run each individual SP BEFORE and AFTER modifications with SET STATISTICS IO ON?[/b]
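A minimal sketch of the per-procedure spot check (procedure and parameter names are illustrative). Note that STATISTICS IO reports logical reads per statement, in pages, while Profiler's Reads column on RPC:Completed covers the whole call:

[code="sql"]SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.SomeProcedure @SomeParam = 1;   -- illustrative procedure and parameter

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;[/code]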

Understanding how MAXDOP impacts performance.

I'm not trying to solve a specific problem; rather, I'm trying to gain a better understanding of the underlying SQL engine.

BOL and many online articles talk about MAXDOP and various recommended practices. Some articles recommend setting it to the total physical cores, some recommend setting it to 1 and using hints to override if necessary, and some talk about trying a query both at 1 and at a higher value and seeing which one is better.

Having read these articles, what I'd like to find out more about is WHY lowering MAXDOP to 1 would improve performance. I ran into this recently and solved a performance issue by setting MAXDOP to 1, but I found myself wondering why this would result in an improvement. I'm sure there is overhead in the parallelism of the query, but when the performance benefit can be as much as 100 times, I realize I don't really know what's going on under the hood.

Can anyone point me in a direction where I can fill in the blanks?

Thanks,
Jeffrey Kretz
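For reference, a minimal sketch of the per-query form of the experiment the articles describe (table names are illustrative): hinting one query rather than changing the server-wide setting isolates the cost of parallelism, i.e. the exchange operators and CXPACKET waits a parallel plan introduces:

[code="sql"]SELECT o.OrderId, SUM(d.Amount) AS total
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d ON d.OrderId = o.OrderId
GROUP BY o.OrderId
OPTION (MAXDOP 1);   -- serial plan for this query only[/code]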

Baselining info to compare before and after SP performance-tuning modifications?

It appears that the main challenge I am looking at now is BASELINING, as I am trying to figure out [b]if there is ANYTHING else at all, in addition to Profiler trace data, that would be valuable to save now, and then read the same info AFTER I apply certain performance-tuning modifications, to compare the NEW metrics to the OLD ones.[/b]

Does anyone know what else would be critical to include in the current baseline metrics? I assume that saving and comparing the data from sys.dm_exec_query_stats or other DMVs is no good because it is cumulative and is cleared only at a restart of the server, and no restart is planned.

My main purpose is improving the performance of 30 to 40 stored procedures, for which I have created additional indexes and modified some T-SQL in some of the SPs. And I need to be able [b]to clearly show in my report to management the POSITIVE difference that my tuning modifications have made[/b]. I would appreciate any useful practical advice. Thanks.
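On the cumulative-DMV objection: a minimal sketch of snapshotting sys.dm_exec_query_stats into a table (the table name is illustrative), so a before/after delta can be computed without any restart:

[code="sql"]SELECT GETDATE() AS captured_at,
       qs.sql_handle, qs.plan_handle, qs.execution_count,
       qs.total_worker_time, qs.total_elapsed_time, qs.total_logical_reads
INTO dbo.QueryStatsBaseline
FROM sys.dm_exec_query_stats AS qs;
-- ...apply the tuning changes, let the workload run, then capture a second
-- snapshot and subtract the baseline counters per sql_handle.[/code]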

SQL Server Profiler question (resource issue)

I keep having a long-running Profiler trace shut down because the C drive is full. I am saving the output to a table, so is there a method to stop the direction to the screen?

We are saving transactions from a software package to the database, and some of the transactions aren't saved, nor do they error, so I am trying to find out if the programmer has inadvertently rolled back a transaction. The issue is that when I try to recreate it by submitting the transaction, it processes, so I am thinking this condition is created by how it is submitting previous transactions, and maybe the program has a variable that triggers a rollback. I have seen this before, but my transaction volume was much lower then and this was not an issue, so I was able to show them where they were calling the rollback, and we could infer from the previous transactions what caused it.

Thoughts?
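One hedged option: a server-side trace writes straight to a file on whatever drive you point it at (nothing goes to the screen, and nothing lands on C: unless you put the file there), and the file can be loaded into a table afterwards. A minimal sketch, with an illustrative path and table name:

[code="sql"]SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'X:\Traces\sp_trace.trc', DEFAULT);[/code]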

Could somebody decipher these perfmon results?

Hi guys,

Our server crashed yesterday morning. We get this issue a few times a month. I was looking into this before; however, I didn't have any data to look at to help the investigation. I have now got the perfmon results from the crash, but I am having trouble deciphering them. The crash happened at 08:30 and the perfmon results are here: https://drive.google.com/file/d/0BzpIqI3rYv2PVlhydkpqam1NYU0/edit?usp=sharing

If somebody with a better idea of perfmon results could take a look at the events at 08:30 and tell me what you think, that would be amazing.

Recommended Indexes

Hi all,

Hopefully someone can shed some light on this issue I'm seeing. I'm using Glenn Berry's scripts to get recommended indexes from a particular instance/database, and I have implemented the index (so I think). However, the query still returns it as a "missing" index.

Code to find missing indexes:

[code="sql"]SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS index_advantage,
       migs.last_user_seek,
       mid.statement AS [Database.Schema.Table],
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.unique_compiles,
       migs.user_seeks,
       migs.avg_total_user_cost,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
        ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
        ON mig.index_handle = mid.index_handle
ORDER BY index_advantage DESC;[/code]

The index that is recommended:

[code="other"]index_advantage:       147830591.8
Database.Schema.Table: [DB1].[dbo].[table1]
equality_columns:      [state]
inequality_columns:    [completedate]
included_columns:      [messageid], [type], [haschildren]
unique_compiles:       1
user_seeks:            742688[/code]

The index that is present has this definition, and it appears to be used (albeit not as much as expected based on the index_advantage) based on the index usage stats, but I can't figure out why it keeps coming up in the results when looking for recommended indexes.

[code="sql"]CREATE NONCLUSTERED INDEX [idx_nc_sql_portal_state_completedate] ON [dbo].[table1]
(
    [state] ASC,
    [completedate] ASC
)
INCLUDE ([messageid], [type], [haschildren])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO[/code]

Index usage stats:

[code="other"]Index Name:   idx_nc_sql_portal_state_completedate
Total Writes: 3887142
Total Reads:  40948[/code]

Thanks in advance for any help!
Steve

View with WHERE vs. WHERE referencing the view

Not sure if this question has a "general" answer or not.

At work, I ran into a view that is only for a certain type of member (call them "special") and selects from the member table. But there is no WHERE clause filtering for them, such as WHERE MemberType = 'SPECIAL', so the view will return all 3,000,000 records instead of the 25,000 'special' member records. The selection of "SPECIAL" members is done in the code that references the view:

[code="sql"]SELECT *
FROM Table_A a
JOIN V_SpecialMembers v ON v.member = a.member
WHERE v.MemberType = 'SPECIAL'[/code]

I am wondering if there is a performance difference if the WHERE MemberType = 'SPECIAL' is in the view, or outside in the code that uses the view. Intuitively, I would think the WHERE should be in the view to restrict the number of rows it selects, but I know that my intuition and the facts can be far apart!

EDIT: A quick test on the member table with a simple query & view returns the results in the same time for both, and both show the same execution plan.
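For reference, a minimal sketch of the alternative under discussion, with the filter inside the view (dbo.Member is an assumed name for the member table). With a simple view like this, the optimizer expands the view into the outer query and can push the predicate down either way, which would explain the identical plans seen in the EDIT:

[code="sql"]CREATE VIEW dbo.V_SpecialMembers
AS
SELECT *
FROM dbo.Member
WHERE MemberType = 'SPECIAL';[/code]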

Query was executing fine and suddenly started taking time

Hi all experts,

I am facing a problem. I have a query on PROD which used to execute within 2-4 minutes until some days back. Now it's taking more than an hour, and I really don't know what is happening; nothing has changed over this period. We checked the execution plan and found that it is using an index seek. I can't think what the cause could be. Has anyone faced a similar situation? Any advice is highly appreciated.

Thanks.
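A minimal sketch of one common first check when a query degrades with no code change: stale statistics (parameter sniffing against a cached plan is the other usual suspect). The table name is illustrative; it would be whichever tables the query touches:

[code="sql"]SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.SomeTable')
ORDER BY last_updated;[/code]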

Profiler Trace: finding Stored Procedures that are slow(est) and resource-consuming

I am running a few traces capturing specifically these 2 events: RPC:Completed and SP:StmtCompleted.

What concerns me is that when I later analyze the trace data (querying the table, aggregating, etc.), I may not be summarizing the correct metrics, because for ONE RECORD for the RPC:Completed event there may be 3 records for the SP:StmtCompleted event, right? So, for example, the DURATION value may be recorded four times in such a case for the same SP EXEC, but in my analysis I would be summing all four, which seems incorrect. For one EXEC <SPname> (RPC:Completed), the trace may contain many rows related to the same occurrence of EXECing this SP if a certain number of SQL statements are executed inside it.

[b]Should I then NOT be using the SP:StmtCompleted event, so as not to mess up my analysis/metrics of performance?[/b]

What also concerns me is that there may be EXECs of SPs that are not reflected in RPC:Completed but rather only in SP:StmtCompleted or SQL:BatchCompleted events (which I also trace).
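A minimal sketch of an aggregation that avoids the double counting by keeping only whole-call completion events. In trace tables, RPC:Completed is EventClass 10 and SP:StmtCompleted is EventClass 45; batch executions arrive as SQL:BatchCompleted (EventClass 12) and would need grouping by TextData, since they carry no ObjectName. dbo.TraceData is an illustrative table name:

[code="sql"]SELECT ObjectName,
       COUNT(*)      AS executions,
       AVG(Duration) AS avg_duration_us
FROM dbo.TraceData
WHERE EventClass = 10      -- RPC:Completed only; skip the EventClass 45 rows
GROUP BY ObjectName
ORDER BY COUNT(*) * AVG(Duration) DESC;[/code]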

SQLDIY: Gather Virtual File Statistics Using T-SQL

Hi,

As in the link below, this is an excellent T-SQL script for gathering IO tracking in a database:
http://sqlserverio.com/2011/02/08/gather-virtual-file-statistics-using-t-sql-tsql2sday-15/#comment-166976

Please let me know if anyone is using this script. I have tested it on SQL 2008 R2 on a development server, but the corresponding database file names are not displayed; instead, the management database's file names come up on every execution of the query. For example, for the EPTDB database, the file names EPTDB_Data and EPTDB_log are not coming up.

Execution script:

[code="sql"]DECLARE @RC INT,
        @StartTime DATETIME,
        @databaseID INT

SELECT @StartTime = GETDATE(),
       @databaseID = DB_ID()

EXEC @RC = Gathervirtualfilestats '00:02:00', 30, -1, -1[/code]

Output result script:

[code="sql"]SELECT TOP 10 DB_NAME(DBID) AS 'databasename',
       FILE_NAME(FileID) AS 'filename',
       Reads / (IntervalInMilliSeconds / 1000) AS 'readspersecond',
       Writes / (IntervalInMilliSeconds / 1000) AS 'writespersecond',
       (Reads + Writes) / (IntervalInMilliSeconds / 1000) AS 'iopersecond',
       CASE
           WHEN (Reads / (IntervalInMilliSeconds / 1000)) > 0
                AND IostallReadsInMilliseconds > 0
               THEN IostallReadsInMilliseconds / Reads
           ELSE 0
       END AS 'iolatencyreads',
       CASE
           WHEN (Writes / (IntervalInMilliSeconds / 1000)) > 0
                AND IostallWritesInMilliseconds > 0
               THEN IostallWritesInMilliseconds / Writes
           ELSE 0
       END AS 'iolatencywrites',
       CASE
           WHEN ((Reads + Writes) / (IntervalInMilliSeconds / 1000) > 0
                 AND IostallInMilliseconds > 0)
               THEN IostallInMilliseconds / (Reads + Writes)
           ELSE 0
       END AS 'iolatency',
       RecordedDateTime
FROM management.dbo.VirtualFileStats
WHERE DBID = 6
      AND FirstMeasureFromStart = 0
ORDER BY RecordID[/code]
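On the file-name symptom: FILE_NAME(FileID) resolves the file ID in the current database context, which here is the management database, so management's file names come back regardless of DBID. A minimal sketch of a fix via sys.master_files, assuming the VirtualFileStats table stores the file ID in a FileID column, as the output query suggests:

[code="sql"]SELECT DB_NAME(v.DBID) AS databasename,
       mf.name         AS filename,   -- resolved against the right database
       v.Reads,
       v.Writes,
       v.RecordedDateTime
FROM management.dbo.VirtualFileStats AS v
JOIN sys.master_files AS mf
  ON mf.database_id = v.DBID
 AND mf.file_id = v.FileID
WHERE v.DBID = 6;[/code]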

Histogram question

[size="1"]RANGE_HI_KEY RANGE_ROWS EQ_ROWS DISTINCT_RANGE_ROWS AVG_RANGE_ROWS10005138 6055 137 330 18.3484810007165 3262 212 95 34.33684[/size]please help from above histogram how QO it calculate "ESTIMATED ROWS" from below range col >10005139 and col <10005300.