List the top ten slow stored procedures in SQL Server (2024)

I have a pure OLTP system which I'm trying to further optimize. For this I use a set workload that runs a large number of big transactions (mimicking my production system).

Inside these transactions a LOT of updates and inserts take place.

My idea was to replace all these update (and insert) statements with stored procedures to optimize plan re-use and speed things up even more. I've further optimized these stored procedures to use local variables, join with a static numbers table (for updating multiple rows at a time), etc.

However, when running the exact same load (resetting buffers, updating statistics, same conditions), the code without stored procedures performs more batches/min and thus finishes quicker.

Anything obvious I might be missing? Has anyone had a similar experience who could offer some insight into where my thinking is wrong?

As a side note, even with these stored procedures I am still suffering from a reported bloated (ad hoc) plan cache.
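To put a number on that bloat, a query along these lines against sys.dm_exec_cached_plans shows how many plans of each type are cached and how much memory they take (rough sketch):

    -- Plan cache broken down by object type; a large 'Adhoc' share with
    -- low use counts is the usual sign of single-use plan bloat.
    SELECT objtype,
           cacheobjtype,
           COUNT(*)                                         AS plan_count,
           SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
    FROM   sys.dm_exec_cached_plans
    GROUP  BY objtype, cacheobjtype
    ORDER  BY size_mb DESC;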

Appreciate any feedback or ideas!

• Michael L John (One Orange Chip, Points: 27023)

"I have a pure OLTP system which I'm trying to further optimize. For this I use a set workload that runs a large number of big transactions (mimicking my production system)."

Can you further define "big transaction"?

"My idea was to replace all these update (and insert) statements with stored procedures to optimize plan re-use and speed things up even more. I've further optimized these stored procedures to use local variables, join with a static numbers table (for updating multiple rows at a time), etc."

Local variables and a join with a static numbers table may not mean better performance.

"However, when running the exact same load (resetting buffers, updating statistics, same conditions), the code without stored procedures performs more batches/min and thus finishes quicker."

How exactly are you resetting buffers and updating statistics?

Can you post some of your ad hoc code, and the corresponding stored procedure that was created? It's kind of tough to make some sort of analysis and recommendation without seeing the actual code.

Stijn977 (SSC Eights!, Points: 962)

Hi Sir, talk about a quick turnaround. I was actually already editing my post, so please allow me to give some more details:

"Can you further define 'big transaction'?"

==> 500 to 1,000 updates plus tens of inserts per transaction. I understand a transaction should be as small as possible; in this case the application code has processed all changes in memory and needs to (at set times) write all changes to the database.

"Local variables and a join with a static numbers table may not mean better performance."

==> The local variables are what I found (online best practice) to be a good way to (try and) avoid parameter sniffing and plan bloat. But indeed it doesn't seem to be working, as a bloated cache still gets reported.

==> The static numbers table is for the case where multiple rows need to be updated. At the moment this is done by way of: UPDATE ... SET ... WHERE id = 1 OR id = 2 OR ... id = x

"How exactly are you resetting buffers and updating statistics?"

==> I restart the service and I run sp_updatestats.

"Can you post some of your ad hoc code, and the corresponding stored procedure that was created? It's kind of tough to make some sort of analysis and recommendation without seeing the actual code."

==> Sure, all these update SPs look like this (for a single row):

    ALTER PROCEDURE [dbo].[sp_ClientNotes_update]
        @id       uniqueidentifier,
        @ordering smallint      = NULL,
        @title    nvarchar(20)  = NULL,
        @content  nvarchar(MAX) = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @local_ordering smallint = NULL;
        DECLARE @local_title    nvarchar(20);
        DECLARE @local_content  nvarchar(MAX);

        SET @local_ordering = @ordering;
        SET @local_title    = @title;
        SET @local_content  = @content;

        UPDATE tbl_ClientNotes
        SET    ordering = ISNULL(@local_ordering, ordering),
               title    = ISNULL(@local_title, title),
               content  = ISNULL(@local_content, content)
        WHERE  id = @id;
    END

Those for multiple rows look similar, but JOIN with a split function (working with the statically generated numbers table) which is passed a string of comma-separated ids and returns a table. I'm not at work at the moment, so I'm typing this example from memory (see the rough sketch of the multi-row variant just below). 🙂 I'm more than willing to change or drop any of these SP designs; I'm merely trying to test and find what works best given my environment. Much appreciate your time and feedback!
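To make the multi-row case concrete, it looks roughly like this; the procedure and splitter names here are only illustrative, and the splitter is assumed to return one id per row in a Value column:

    -- Illustrative multi-row variant: join the target table to the split ids
    -- instead of chaining "id = 1 OR id = 2 OR ...".
    -- dbo.split_ids is a placeholder for the numbers-table-based splitter.
    CREATE PROCEDURE dbo.sp_ClientNotes_update_multi
        @ids      nvarchar(MAX),        -- comma-separated list of ids
        @ordering smallint = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE n
        SET    n.ordering = ISNULL(@ordering, n.ordering)
        FROM   dbo.tbl_ClientNotes AS n
        JOIN   dbo.split_ids(@ids) AS s
               ON n.id = CONVERT(uniqueidentifier, s.Value);
    END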

Michael L John (One Orange Chip, Points: 27023)

Stijn Vervaet (1/11/2016): "Hi Sir, talk about a quick turnaround. I was actually already editing my post, please allow me to give some more details."

Thank you.

"The local variables are what I found (online best practice) to be a good way to (try and) avoid parameter sniffing and plan bloat. But indeed it doesn't seem to be working, as a bloated cache still gets reported."

In actuality, you want parameter sniffing. Where this becomes a problem is when the parameters being used change. As an example, take three parameters. One time the query is executed with parameter 1 containing a value and parameters 2 and 3 null. The next time, 1 is null, 2 has a value, and 3 is null. And so forth. The first time the query is run, the execution plan is created. Subsequent runs with different parameters may not use the most optimal plan. You may have created a "catch-all query", described in detail by Gail Shaw: //sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/ Take some time and read this article. See if it corresponds to what you are trying to do, and if so, try to apply some of the techniques it describes. From the query you provided below, this is exactly the problem.
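As a rough illustration (this is not your actual procedure, and the name is made up), the pattern the article describes, with its recompile mitigation applied, looks like this:

    -- Simplified sketch of the "catch-all" update with OPTION (RECOMPILE),
    -- one of the mitigations described in the linked article. The other is
    -- building the statement dynamically so only the supplied columns appear.
    CREATE PROCEDURE dbo.sp_ClientNotes_update_recompile
        @id       uniqueidentifier,
        @ordering smallint      = NULL,
        @title    nvarchar(20)  = NULL,
        @content  nvarchar(MAX) = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE dbo.tbl_ClientNotes
        SET    ordering = ISNULL(@ordering, ordering),
               title    = ISNULL(@title, title),
               content  = ISNULL(@content, content)
        WHERE  id = @id
        OPTION (RECOMPILE);   -- plan is compiled for the actual parameter
                              -- values on every execution, at extra CPU cost
    END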

"How exactly are you resetting buffers and updating statistics?"

==> "I restart the service and I run sp_updatestats."

Restarting SQL is probably not a good method. You can perform these commands instead: DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS. You also may not want to do this for every execution of an ad hoc query or a procedure; you probably only want to start from scratch if the query or proc is changed.
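Between timed runs on a dev box, something along these lines keeps the comparison repeatable without a service restart (sketch only; don't run it on production):

    CHECKPOINT;               -- flush dirty pages so the next command empties the cache
    DBCC DROPCLEANBUFFERS;    -- clear the buffer pool
    DBCC FREEPROCCACHE;       -- clear the plan cache
    EXEC sp_updatestats;      -- the statistics refresh you are already doing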

"Can you post some of your ad hoc code, and the corresponding stored procedure that was created? It's kind of tough to make some sort of analysis and recommendation without seeing the actual code."

==> Sure, all these update SPs look like this (for a single row):

    ALTER PROCEDURE [dbo].[sp_ClientNotes_update]
        @id       uniqueidentifier,
        @ordering smallint      = NULL,
        @title    nvarchar(20)  = NULL,
        @content  nvarchar(MAX) = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @local_ordering smallint = NULL;
        DECLARE @local_title    nvarchar(20);
        DECLARE @local_content  nvarchar(MAX);

        SET @local_ordering = @ordering;
        SET @local_title    = @title;
        SET @local_content  = @content;

        UPDATE tbl_ClientNotes
        SET    ordering = ISNULL(@local_ordering, ordering),
               title    = ISNULL(@local_title, title),
               content  = ISNULL(@local_content, content)
        WHERE  id = @id;
    END

Assume you need to update only one column in the table. This syntax will still update every column in the table.

"Those for multiple rows look similar, but JOIN with a split function (working with the statically generated numbers table) which is passed a string of comma-separated ids and returns a table. I'm not at work at the moment, so I'm typing this example from memory. 🙂"

If the splitter function you are using contains a loop, you may be better served to find one that doesn't. One of the better ones is here, by Jeff Moden: //www.sqlservercentral.com/articles/Tally+Table/72993/
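For reference, the splitter from that article is an inline, set-based function; once installed it is called like this (sketch) and can be joined to the table being updated:

    -- DelimitedSplit8K comes from the linked article; it takes a delimited
    -- string and a single-character delimiter and returns ItemNumber and Item
    -- (input is limited to 8000 bytes).
    SELECT ItemNumber, Item
    FROM   dbo.DelimitedSplit8K('10,20,30', ',');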

• Stijn977 (SSC Eights!, Points: 962)

Thank you Michael for your thoughts. As a sole DBA it's sometimes hard to come up with a good solution without somebody to spar ideas back and forth with.

- On the parameter-sniffing 'catch-all' query: Good point on the catch-all query being generated. I will have to study the article you mentioned and try the dynamic SQL example out tomorrow. I'm running SQL 2012 and, before trying local variables, had experimented with adding the RECOMPILE hint. The result was a huge CPU increase and overall performance degradation. This might have been expected, and maybe a sign I should get "a more powerful server", but I decided to continue searching in different directions. I want to try the dynamic SQL generation, but something tells me that (especially since some of my tables have a LOT of columns, since this is a pure OLTP database) there will be considerable overhead in doing so. But I will try! 🙂

- On resetting buffers and updating statistics: Agreed the DBCC commands are more convenient, but since I am on a dev system (working with a backup from production), I figured it would not matter all that much. Sometimes I perform multiple runs of the workload, not resetting the buffer and instead letting the (memory) cache build, and see how this performs over time. What's important for me is that I can compare the processed workload one-for-one (code with/without stored procedures). But I appreciate you mentioning this.

- On the UPDATE itself: The SP has a parameter for every column. My initial thinking was to build the update statement in such a way that if a parameter is NULL (i.e. no value is passed), the column would be updated with its current value. It is now starting to dawn on me that this might be very inefficient, especially since some tables are so large. It seems the only (??) solution for this would be to dynamically generate the SQL. But then wouldn't I be better off sticking with plain ol' update queries (generated by the application code, which generates SQL to only update those fields that have changed)? I guess I will find out.

> Careful conclusion: there might not be an efficient way to update tables by way of an SP, especially if the table has a lot of columns AND which fields need updating changes on a per-statement basis. Which would lead me back to my original issue of reported cache bloat / ad hoc plans taking up a lot of space, which might be unavoidable?

I did experiment with turning on 'optimize for ad hoc workloads' and even ALTERing the DB for forced parameterization, but this only seemed to hurt performance.

- Finally, on the split function: I don't use a loop. I use the one described here []:

    CREATE FUNCTION inline_split_me(@param nvarchar(MAX))
    RETURNS TABLE AS
    RETURN(SELECT ltrim(rtrim(convert(nvarchar(4000),
                  substring(@param, Number,
                            charindex(N',' COLLATE Slovenian_BIN2,
                                      @param + convert(nvarchar(MAX), N','),
                                      Number) - Number)))) AS Value
           FROM   Numbers
           WHERE  Number <= 3500)

If you see any improvements I could make to the DSQL (I'll compare to your example shortly, Michael), I'm all ears.

    CREATE PROCEDURE gnr8_upd8_club_sequence_records
        @id bigint,
        @club_overall_match_stats bigint = NULL,
        @type tinyint = NULL,
        @amount int = NULL,
        @start_date datetime2 = NULL,
        @end_date datetime2 = NULL
    AS
    BEGIN
        SET NOCOUNT ON

        DECLARE @SQL_Cmd nvarchar(MAX);
        DECLARE @SQL_Params nvarchar(MAX);

        SET @SQL_Cmd = N'UPDATE dbo.fixtures WITH (REPEATABLEREAD) SET ';

        IF (@club_overall_match_stats IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'club_overall_match_stats = @_club_overall_match_stats,';
        IF (@type IS NOT NULL)       SET @SQL_Cmd = @SQL_Cmd + N'type = @_type,';
        IF (@amount IS NOT NULL)     SET @SQL_Cmd = @SQL_Cmd + N'amount = @_amount,';
        IF (@start_date IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'start_date = @_start_date,';
        IF (@end_date IS NOT NULL)   SET @SQL_Cmd = @SQL_Cmd + N'end_date = @_end_date,';

        SET @SQL_Cmd = SUBSTRING(@SQL_Cmd, 1, LEN(@SQL_Cmd) - 1) + N' WHERE id = @_id';

        EXEC sp_executesql @SQL_Cmd,
             N'@_id bigint, @_club_overall_match_stats bigint, @_type tinyint, @_amount int, @_start_date datetime2, @_end_date datetime2',
             @_id = @id,
             @_club_overall_match_stats = @club_overall_match_stats,
             @_type = @type,
             @_amount = @amount,
             @_start_date = @start_date,
             @_end_date = @end_date
    END
    GO

Edit: please see the workload comparison below.
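For reference, the two settings I experimented with are switched on roughly like this (sketch; the database name is a placeholder):

    -- Instance level: cache only a small stub for first-time ad hoc batches.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Database level: force parameterization of ad hoc statements.
    ALTER DATABASE [MyOltpDb] SET PARAMETERIZATION FORCED;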

Michael L John (One Orange Chip, Points: 27023)

Again, can we see a comparison of the two actual execution plans? That would help much more than this chart from Idera.

Stijn977 (SSC Eights!, Points: 962)

Apologies Michael, I was trying to get my DSQL SP generator to work so I could test it, and forgot to provide you with the execution plans. I will provide these tomorrow when I'm back in the office. Since there are so many different update statements happening in the transaction, does it matter if I pick a large or a short one?

How to check the performance of a stored procedure in SQL Server?

In SQL Server Management Studio, select the procedure name and click Display Estimated Execution Plan. The estimated execution plan for natively compiled stored procedures shows the query operators and expressions for the queries in the procedure.
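For a runtime measurement rather than an estimate, one common pattern is to wrap a call to the procedure in statistics output; the procedure name and parameter values below are only placeholders taken from the thread above:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Example call; substitute your own procedure and parameters.
    EXEC dbo.sp_ClientNotes_update
         @id       = '00000000-0000-0000-0000-000000000000',
         @ordering = 1;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;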

How to speed up a stored procedure in SQL Server?

- Specify column names instead of using * in the SELECT statement; try to avoid SELECT *.
- Avoid temporary tables where possible; they usually increase a query's complexity.
- Create proper indexes; proper indexing will improve the speed of operations in the database.
- Use JOINs instead of subqueries and correlated subqueries.
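As a small illustration of the first and last points (the table and column names here are made up):

    -- Prefer explicit columns and a JOIN over SELECT * with a correlated subquery.
    SELECT c.id,
           c.name,
           n.note_count
    FROM   dbo.Clients AS c
    JOIN  (SELECT client_id, COUNT(*) AS note_count
           FROM   dbo.ClientNotes
           GROUP  BY client_id) AS n
           ON n.client_id = c.id;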

How do I find the slowest queries in SQL Server?

To track down slow-running queries in SQL Server, you can use the sys.dm_exec_query_stats view. Two important columns returned by this view are last_elapsed_time and last_worker_time. The last_elapsed_time column shows how long it took to execute the most recent request, in microseconds.
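A typical query over that view, joined to the statement text, looks something like this (illustrative):

    -- Top 10 cached statements by average elapsed time (times are in microseconds).
    SELECT TOP (10)
           qs.execution_count,
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
           qs.last_elapsed_time,
           qs.last_worker_time,
           SUBSTRING(st.text,
                     (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER  BY avg_elapsed_time DESC;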

How to find a slow stored procedure in SQL Server?

By checking cpu_time and total_elapsed_time of currently running statements in the sys.dm_exec_requests view, you can find out which queries are making the stored procedure slow. Similarly, for past executions, you can check last_elapsed_time and last_worker_time in the sys.dm_exec_query_stats view.
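For currently executing statements, the check is along these lines (illustrative):

    -- Currently executing requests, slowest first (times here are in milliseconds).
    SELECT r.session_id,
           r.status,
           r.cpu_time,
           r.total_elapsed_time,
           r.wait_type,
           st.text AS statement_text
    FROM   sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
    WHERE  r.session_id <> @@SPID        -- ignore this monitoring query itself
    ORDER  BY r.total_elapsed_time DESC;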
