Performance problems in VB | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Performance problems in VB

I need help with a performance problem I have when running a stored procedure from a web application or a VB form. Sometimes the performance is really bad, but when I run the stored procedure from Query Analyzer (QA) it's really fast. Running DBCC FLUSHPROCINDB solves the performance problem temporarily. The stored procedure I'm executing contains several functions and also a temp table. All information in the database is deleted every morning and new records are inserted using DTS packages. I've tried to use WITH RECOMPILE in the stored procedure but that doesn't help. Any suggestions?
Rebuild the indexes after uploading all the data.
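As a rough sketch, the rebuild could be the last step of the DTS package; the table names below are only placeholders for whatever tables the morning load fills (SQL Server 2000 syntax):

-- Rebuild every index on the freshly loaded tables.
-- An empty index name ('') means all indexes on the table are rebuilt.
DBCC DBREINDEX ('dbo.YourLoadedTable1', '', 90)  -- 90 = fill factor, adjust as needed
DBCC DBREINDEX ('dbo.YourLoadedTable2', '', 90)
GO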
OK, I could try that. But why does DBCC FLUSHPROCINDB solve the problem temporarily when the next day the performance is really slow again from the web application or my VB form? And what is causing this performance problem? Is it the index?
You’re inserting a lot of data from different sources in the morning. This means the indexes have a high probability of not being in optimum condition after the data is uploaded. If your indexes are not in optimum condition, most (if not all) table operations will perform badly.
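If you want to confirm that before changing anything, check the fragmentation right after the load; a minimal sketch, where the table name is a placeholder:

-- Show fragmentation for all indexes on the table (SQL Server 2000).
-- Low 'Scan Density' or high 'Logical Scan Fragmentation' means the indexes need a rebuild.
DBCC SHOWCONTIG ('dbo.YourLoadedTable1') WITH ALL_INDEXES
GO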
See if this helps you:
http://www.sql-server-performance.com/visual_basic_performance.asp
Madhivanan
Failing to plan is planning to fail
See if you find additional useful information here: http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/incbulkload.mspx
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
I support PASS Deutschland e.V.: http://www.sqlpass.de

I can’t understand why I don’t get any performance problems when executing the stored procedure from the QA, only from the VB form and my web application. If there is an index problem, shouldn’t that result in performance problems in the QA as well?
What does your VB code look like? —
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
I support PASS Deutschland e.V.: http://www.sqlpass.de

This is what my VB code looks like; please remember that executing DBCC FLUSHPROCINDB solves the performance problem temporarily!

Public Function GetSummary(ByVal UserId As Integer, ByVal Exceeded As Integer, ByVal SortOrder As String, ByVal Remarks As Integer, ByVal ShowReCalculate As Byte) As ADODB.Recordset
Dim rs As ADODB.Recordset
Dim strInputParam As String
strInputParam = "UserId: " & UserId & vbCrLf & "Exceeded: " & Exceeded & vbCrLf & "SortOrder: " & SortOrder & vbCrLf & "ShowReCalculate: " & ShowReCalculate & vbCrLf & "Remarks: " & Remarks
On Error GoTo errorHandler
Set rs = RunSPReturnRS("GetSummary", _
msConnectionString, _
Array("@UserId", adInteger, 4, UserId), _
Array("@Exceeded", adInteger, 4, Exceeded), _
Array("@SortOrder", adVarChar, 35, SortOrder), _
Array("@Remarks", adInteger, 4, Remarks), _
Array("@ShowReCalculate", adInteger, 4, ShowReCalculate))
Set GetSummary = rs
Set rs = Nothing
Exit Function

errorHandler:
Set rs = Nothing
AddErrorMsg Err, "GetSummary", strInputParam
End Function

Function RunSPReturnRS(ByVal SP As String, ByVal DSN As String, ParamArray Params() As Variant) As ADODB.Recordset
Dim sErrMsg As String
Dim rs As ADODB.Recordset
Dim cmd As ADODB.Command
On Error GoTo errorHandler
'Init the ADO objects & the stored proc parameters
10 Set rs = CreateObject("ADODB.Recordset")
20 Set cmd = CreateObject("ADODB.Command")
30 cmd.ActiveConnection = DSN
40 cmd.CommandText = SP
50 cmd.CommandType = adCmdStoredProc
'Set timeout to 240sec (90sec for asp timeout)
60 cmd.CommandTimeout = 240
70 collectParams cmd, Params
'Execute the query for readonly
80 rs.CursorLocation = adUseClient
90 rs.Open cmd, , adOpenDynamic, adLockBatchOptimistic
100 If rs.ActiveConnection.Errors.Count > 0 Then GoTo errorHandler
'Disconnect the recordset
110 Set cmd.ActiveConnection = Nothing
120 Set cmd = Nothing
'Return the resultant recordset
140 Set RunSPReturnRS = rs
Exit Function

errorHandler:
Set rs = Nothing
Set cmd = Nothing
Err.Raise Err.Number, "RunSPReturnRS" & " Line: " & Erl & "<-" & Err.Source, Err.Description
End Function

Sub collectParams(ByRef cmd As ADODB.Command, ParamArray argparams() As Variant)
Dim Params As Variant
Dim prm As ADODB.Parameter
Dim i As Integer
Dim v As Variant
Params = argparams(0)
On Error GoTo errorHandler
10 For i = LBound(Params) To UBound(Params)
20 If UBound(Params(i)) = 3 Then
' Check for nulls.
30 If IsNumeric(Params(i)(3)) Or Params(i)(1) = adNumeric Then
40 v = IIf(Params(i)(3) = "", 0, Params(i)(3))
50 Set prm = cmd.CreateParameter(Params(i)(0), Params(i)(1), adParamInput, Params(i)(2), v)
60 If Params(i)(1) = adNumeric Then
70 prm.Precision = Left(Params(i)(2), InStr(1, Params(i)(2), ",") - 1)
80 prm.NumericScale = Mid(Params(i)(2), InStr(Params(i)(2), ",") + 1)
90 End If
100 ElseIf TypeName(Params(i)(3)) = "String" Then
110 v = IIf(Params(i)(3) = "", "", Params(i)(3))
120 Set prm = cmd.CreateParameter(Params(i)(0), Params(i)(1), adParamInput, Params(i)(2), v)
130 Else
140 Set prm = cmd.CreateParameter(Params(i)(0), Params(i)(1), adParamInput, Params(i)(2))
150 v = Params(i)(3)
160 prm.Value = v
End If
170 cmd.Parameters.Append prm
Else
180 cmd.Parameters.Append cmd.CreateParameter(Params(i)(0), Params(i)(1), adParamInput, Params(i)(2), Params(i)(3))
End If
190 Next i
Exit Sub
errorHandler:
Err.Raise Err.Number, "collectParams" & " Line: " & Erl & "<-" & Err.Source, Err.Description
End Sub
Hm, since you have already tried WITH RECOMPILE in your proc, the only thing I can think of is statistics that are probably missing or stale. After loading the data, do you fire an UPDATE STATISTICS or sp_updatestats? Have you turned on the database options "auto create statistics" and "auto update statistics"?
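If they are not kept up to date automatically, a step like this at the end of the DTS package would be a minimal sketch; the database and table names are placeholders:

-- Check the current setting of the auto statistics options (SQL Server 2000).
EXEC sp_dboption 'YourDatabase', 'auto create statistics'
EXEC sp_dboption 'YourDatabase', 'auto update statistics'
GO
-- Refresh statistics on one table with a full scan ...
UPDATE STATISTICS dbo.YourLoadedTable1 WITH FULLSCAN
GO
-- ... or refresh all statistics in the current database in one go.
EXEC sp_updatestats
GO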
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
I support PASS Deutschland e.V.: http://www.sqlpass.de

Yes, I have turned on "auto create statistics" and "auto update statistics" on the database, but I don't run UPDATE STATISTICS or sp_updatestats after inserting data. The strange thing is that the application doesn't always run slow after inserting the data. It seems that it's the functions used in my stored procedure that cause the problem.
Ah, what functions?
Ok, so it looks like you recompiled the procedure, but you didn’t recompile other procs (if any) and functions executed inside the proc you are calling from VB.
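A hedged sketch of what that could look like after the load; GetSummary is the procedure from your post, the table name is a placeholder, and I'm not sure whether sp_recompile accepts a function name directly in SQL Server 2000, so marking the underlying table is the safer route:

-- Mark the outer procedure for recompilation on its next execution.
EXEC sp_recompile 'dbo.GetSummary'
-- Marking a table marks every stored procedure that references it for recompilation,
-- which should also cover the procedures called from inside GetSummary.
EXEC sp_recompile 'dbo.YourLoadedTable1'
GO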
Maybe it is a good candidate to RECOMPILE the stored procedures here and also to run UPDATE STATISTICS on a regular basis. Regarding DBCC FLUSHPROCINDB, this site notes that you may want to use this command before testing, to ensure that previous stored procedure plans won't negatively affect testing results.
Satya SKJ
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
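For reference, this is how DBCC FLUSHPROCINDB is usually shown being called; it clears the cached plans for one database only, and since it is an undocumented command, treat this as a sketch and use it with care:

-- Flush all cached procedure plans for the current database only.
DECLARE @dbid int
SET @dbid = DB_ID()
DBCC FLUSHPROCINDB (@dbid)
GO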
I've now made some additional tests and figured out that it's the function calls inside this procedure, at least two of them, that cause this performance problem. The performance problem doesn't necessarily start after reading the files via DTS packages in the morning. It can just as well start during the day, but running DBCC FLUSHPROCINDB solves the performance problem temporarily.
You should find that the biggest factor is the number of rows for which the function is called. There are two ways of optimizing, and preferably you implement both: (1) optimize the function itself; don't forget to double-check whether the function is being used to replace a less-than-simple WHERE clause. SQL Server is much better equipped to understand complicated WHERE clauses than humans, and can resolve them far more cheaply than calling a function for each row. (2) Reduce the number of rows for which the function is called.
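To illustrate point (1), a hedged sketch with made-up names (dbo.fnIsExceeded and the Orders table are placeholders, not objects from the original procedure):

-- Scalar function called once per row: the optimizer cannot use an index for this predicate.
SELECT OrderId
FROM dbo.Orders
WHERE dbo.fnIsExceeded(ActualValue, LimitValue) = 1

-- The same logic written as a plain WHERE clause: no per-row function call,
-- and the optimizer can consider indexes on the columns.
SELECT OrderId
FROM dbo.Orders
WHERE ActualValue > LimitValue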
Today I rebuilt the indexes after the bulk insert via DTS packages. This solved the performance problems. But one question still remains: why do the queries run fast in QA but slow from VB or the web when I don't re-create the indexes after a bulk insert?
One reason may be that the VB and web executions of the SP use a cached execution plan which is no longer efficient. When you execute the SP from Query Analyzer, one or more connection settings differ from the ones used by VB/web, which causes a new execution plan to be built, and this plan is more efficient than the cached one. When you rebuild the indexes, all execution plans on these tables are invalidated (I think, not checked yet), so the first SP execution afterwards generates a new plan and puts it in the procedure cache.
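One way to see this in SQL Server 2000 is to look for more than one cached plan for the same procedure, each compiled under different SET options (the setopts column); a rough sketch, assuming the procedure name GetSummary from the earlier posts:

-- Two rows with different setopts values usually mean QA and the ADO connection
-- used different session settings (ARITHABORT is the usual suspect).
SELECT cacheobjtype, objtype, usecounts, setopts, sql
FROM master.dbo.syscacheobjects
WHERE dbid = DB_ID()
  AND sql LIKE '%GetSummary%'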
The indexes must be pretty messed up after you've uploaded all the data in the morning. Perhaps a smaller fill factor might help, or a different index design. Assuming your database is offline while you're uploading the data, you could also drop the indexes on the target tables, then insert the data, and finally add the indexes again. It might even help reduce the upload time.
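A minimal sketch of that load pattern, with placeholder table, index and column names:

-- Drop the non-clustered index before the bulk load ...
DROP INDEX dbo.YourLoadedTable1.IX_YourLoadedTable1_SomeColumn
GO
-- ... run the DTS load / BULK INSERT here ...
-- ... then recreate the index, leaving some free space per page.
CREATE INDEX IX_YourLoadedTable1_SomeColumn
ON dbo.YourLoadedTable1 (SomeColumn)
WITH FILLFACTOR = 80
GO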
I'll monitor the performance carefully during the next week to see if this solves the problem. Many thanks for your advice!
quote: Originally posted by Adriaan
You should find that the biggest factor is the number of rows for which the function is called. There are two ways of optimizing, and preferably you implement both: (1) optimize the function itself; don't forget to double-check whether the function is being used to replace a less-than-simple WHERE clause. SQL Server is much better equipped to understand complicated WHERE clauses than humans, and can resolve them far more cheaply than calling a function for each row. (2) Reduce the number of rows for which the function is called.

A third option is to re-write the function as a table-valued function and join the results to the rest of your query. The overhead for functions is still present, but it occurs only once instead of once for each row.
Keith Payne
Technical Marketing Solutions
www.tms-us.com
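A hedged sketch of that idea with made-up names (dbo.fnUserSummary, Orders and Users are placeholders, not the real objects):

-- Inline table-valued function: the work is done once as a set, not once per row.
CREATE FUNCTION dbo.fnUserSummary ()
RETURNS TABLE
AS
RETURN
(
    SELECT UserId, SUM(Amount) AS TotalAmount
    FROM dbo.Orders
    GROUP BY UserId
)
GO

-- Join the function's result set to the rest of the query.
DECLARE @UserId int
SET @UserId = 42
SELECT u.UserName, s.TotalAmount
FROM dbo.Users u
JOIN dbo.fnUserSummary() s ON s.UserId = u.UserId
WHERE u.UserId = @UserId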
I think the problem lies in your condition Count > 0. Don't use the count (i.e. Count > 0), because it reads all the records first (which takes very long if there are too many records) before returning the result, especially with a client-side cursor. Use EOF instead. If you need the count, create a stored procedure that returns the count and use that rather than counting from client code; COUNT in SQL Server is faster than counting in a client recordset. Are you updating the recordset? If not, change the option to a read-only cursor in your VB code (currently you set it to dynamic with batch optimistic locking). Also, if the recordset is not huge (< 10000 rows) and the SQL clients are mostly Pentium IV machines, I suggest changing the cursor location to the server.