Hash Tables | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Hash Tables

We have a stored procedure that is running really slow and I’m wondering if the use of hash tables would significantly affect performance? Thanks.
Are you asking because you see hash tables in the execution plan, or because you’re creating your own, or because you want to create them? Did you run the Index Tuning Wizard for the procedure?
Hello. I’m asking because we have a stored procedure that has become very slow when returning 42,000 records and I’ve been asked to troubleshoot it and I noticed that it’s using hash tables. To give you an idea, here’s a copy of the proc. I appreciate any ideas at all. Thanks —————————————————————-
Stored procedure:
-----------------------------------------------------------------
CREATE PROCEDURE dbo.usp_AdvRpt_GetPortfolioAssetAllocation
@intShowAllRecords TinyInt=1,
@vchUserID varchar(8000)='',
@vchOrgunitID varchar(8000)='',
@dtmAsOn datetime,
@vchClientName varchar(150),
@vchPortfolioName varchar(150),
@intSortOrder int=0,
@intSortField int=1,
@intStartRecNo int=1,
@intPageSize int=50,
@intNoofRecords int=0 Output,
@intNoofPages int=0 Output
As
CREATE TABLE #TempTable(
UserID int,
UserName varchar(100),
ClientID INT,
ClientName varchar(150),
PortfolioID INT,
ProgramID INT,
PortfolioName VARCHAR(80),
PortfolioTotal NUMERIC(18,2),
MarketValue NUMERIC(18,2),
Equity NUMERIC(18,2),
Fixed NUMERIC(18,2),
CashEqv NUMERIC(18,2),
AdvisorTotal NUMERIC(18,2),
GrandTotal NUMERIC(18,2),
NoOfRecords INT
)
CREATE TABLE #MainTable(
RowID INT IDENTITY,
UserID int,
UserName varchar(100),
Phone varchar(20),
ClientID INT,
ClientName varchar(150),
PortfolioID INT,
ProgramID INT,
PortfolioName VARCHAR(80),
PortfolioTotal NUMERIC(18,2),
MarketValue NUMERIC(18,2),
Equity NUMERIC(18,2),
Fixed NUMERIC(18,2),
CashEqv NUMERIC(18,2),
AdvisorTotal NUMERIC(18,2),
GrandTotal NUMERIC(18,2),
NoOfRecords INT
)
CREATE TABLE #TempOrgUnitID (OrgUnitID int)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET ROWCOUNT 0
SET NOCOUNT ON
DECLARE @intStartPosition int
DECLARE @intENDPosition int
DECLARE @numGT NUMERIC(18,2),
@sinClLen SMALLINT,
@sinPOLen SMALLINT,
@sinAcLen SMALLINT,
@sinAcNoLen SMALLINT,
@Offset INT
SET @numGT=0
SELECT @sinClLen=1,@sinPOLen=1,@sinAcLen=1,@sinAcNoLen=1
IF @vchClientName='%' OR LEN(@vchClientName)=0
SET @sinClLen=0
IF @vchPortfolioName='%' OR LEN(@vchPortfolioName)=0
SET @sinPOLen=0
INSERT INTO #TempOrgUnitID
SELECT
DISTINCT
OrgunitID
FROM
Ufn_RPT_GetChildOrgUnitsForOrgunit(@vchOrgunitID)
INSERT INTO #TempTable(
UserID,
UserName,
ClientID,
ClientName,
PortfolioID,
ProgramID,
PortfolioName,
PortfolioTotal,
MarketValue,
Equity,
Fixed,
CashEqv
)
SELECT
AC.UserID,
U.UserName ,
AC.ClientID,
AC.ClientName AS ClientName,
AC.PortfolioID,
AC.ProgramID AS ProgramID,
AC.PortfolioName AS PortfolioName,
SUM(ISNULL(AC.PortfolioTotal,0)) AS PortfolioTotal,
SUM(ISNULL(AC.PortfolioTotal,0)+ISNULL(AC.PortaccruedIncome,0)) AS MarketValue,
CASE WHEN(SUM(AC.CashAndEquivTotal)<0) THEN
CASE WHEN (ISNULL(SUM(AC.EquityTotal),1)+ISNULL(SUM(AC.FixedTotal),1)) <> 0 THEN
(SUM(AC.EquityTotal)/(ISNULL(SUM(AC.EquityTotal),1)+ISNULL(SUM(AC.FixedTotal),1)))*100
ELSE 0 END
ELSE
CASE WHEN ISNULL(SUM(AC.PortfolioTotal),1)<> 0 THEN
(SUM(AC.EquityTotal)/ISNULL(SUM(AC.PortfolioTotal),1))*100
ELSE 0 END
END AS Equity,
CASE when(SUM(AC.CashAndEquivTotal)<0) THEN
CASE WHEN (ISNULL(SUM(AC.EquityTotal),1) + ISNULL(SUM(AC.FixedTotal),1)) <>0 THEN
SUM(AC.FixedTotal)/ (ISNULL(SUM(AC.EquityTotal),1) + ISNULL(SUM(AC.FixedTotal),1)) * 100
ELSE 0 END
ELSE
CASE WHEN ISNULL(SUM(AC.PortfolioTotal),1) <>0 THEN
(SUM(AC.FixedTotal)/ISNULL(SUM(AC.PortfolioTotal),1)) *100
ELSE 0 END
END AS Fixed,
CASE when(SUM(AC.CashAndEquivTotal)<0) THEN
0.00
ELSE
CASE WHEN ISNULL(SUM(AC.PortfolioTotal),1)<> 0 THEN
(SUM(AC.CashAndEquivTotal)/ISNULL(SUM(AC.PortfolioTotal),1))*100
ELSE 0 END
END AS CashEqv
FROM View_Account AC
INNER JOIN Ufn_RepFilter_GetUsers(@vchUserID) U on AC.Userid = U.Userid
INNER JOIN UsersOrgUnit UO ON U.UserID=UO.UserID
INNER JOIN #TempOrgUnitID TMP ON UO.OrgUnitID=TMP.OrgUnitID
LEFT OUTER JOIN dbo.MultiPhone MP ON MP.multiid=U.userid
AND MP.multicode='US' AND MP.preferredFlag='Y'
LEFT OUTER JOIN dbo.Phone PH ON PH.phoneid=MP.Phoneid
WHERE AC.ASon = @dtmASOn
AND ( (@sinClLen=0) or (AC.ClientName like @vchClientName) )
AND ( (@sinPOLen=0) or (AC.PortfolioName like @vchPortfolioName))
AND coalesce(AC.AcctOpeningDate,'01/01/1900') <= @dtmASon
AND coalesce(AC.AcctClosingDate,@dtmASon) >= @dtmASon
GROUP BY
U.UserName,
AC.UserID,
AC.ClientID,
AC.PortfolioID,
AC.ProgramID,
AC.ClientName,
AC.PortfolioName
INSERT INTO #MainTable( UserID,UserName,Phone,ClientID,ClientName,PortfolioID,ProgramID,PortfolioName,MarketValue,
Equity,Fixed,CashEqv)
SELECT
UserID,
UserName,
Phone,
ClientID,
ClientName,
PortfolioID,
ProgramID,
PortfolioName,
MarketValue as MarketValue,
Equity,
Fixed,
CashEqv
FROM #TempTable TMP
LEFT OUTER JOIN dbo.MultiPhone MP ON MP.multiid=TMP.userid
AND MP.multicode='US' AND MP.preferredFlag='Y'
LEFT OUTER JOIN dbo.Phone PH ON PH.phoneid=MP.Phoneid
ORDER BY
TMP.UserName ASC,
TMP.UserID ASC,
CASE
WHEN @intSortOrder = 0 and @intSortField = 1 THEN TMP.ClientName
WHEN @intSortOrder = 0 and @intSortField = 2 THEN TMP.PortfolioName
END ASC,
CASE
when @intSortOrder = 0 and @intSortField = 3 THEN TMP.MarketValue
when @intSortOrder = 0 and @intSortField = 4 THEN TMP.Equity
when @intSortOrder = 0 and @intSortField = 5 THEN TMP.Fixed
when @intSortOrder = 0 and @intSortField = 6 THEN TMP.CashEqv
END ASC,
CASE
WHEN @intSortOrder = 1 and @intSortField = 1 THEN TMP.ClientName
WHEN @intSortOrder = 1 and @intSortField = 2 THEN TMP.PortfolioName
END DESC,
CASE
when @intSortOrder = 1 and @intSortField = 3 THEN TMP.MarketValue
when @intSortOrder = 1 and @intSortField = 4 THEN TMP.Equity
when @intSortOrder = 1 and @intSortField = 5 THEN TMP.Fixed
when @intSortOrder = 1 and @intSortField = 6 THEN TMP.CashEqv
END DESC
SET @intNoofRecords = @@ROWCOUNT
--SELECT * FROM #MainTable order by RowID ASC
UPDATE B
SET AdvisorTotal=A.MarketValue,
NoOfRecords=A.NoOfRecords,
@numGT=@numGT+B.MarketValue
FROM ( SELECT UserID,
SUM(MarketValue) AS MarketValue,
Count(UserID) AS NoOfRecords
FROM #MainTable GROUP BY UserID ) A, #MainTable B
WHERE A.UserID=B.UserID
OPTION (KEEP PLAN)
————————————————————————————————
— Pagination
————————————————————————————————
if @intShowAllRecords = 0
begin
set @intStartPosition = @intStartRecNo
set @intENDPosition = @intStartRecNo
if @intNoofRecords <> 0
begin
set @intENDPosition = @intStartPosition + @intPageSize -1
END
set @intNoofPages = @intNoofRecords / @intPageSize
if (@intNoofRecords % @intPageSize) >= 1
begin
set @intNoofPages = @intNoofPages + 1
END
SELECT
UserID,
UserName,
Phone,
ClientID,
ClientName,
PortfolioID,
ProgramID,
PortfolioName,
MarketValue AS MarketValue,
Equity AS EquityPer,
Fixed AS FixedPer,
CashEqv AS CashPer,
AdvisorTotal AS AdvisorTotal,
@numGT AS GrandTotal,
NoOfRecords
FROM
#MainTable
WHERE
RowID >= @intStartPosition AND RowID <= @intENDPosition
ORDER BY RowID OPTION (KEEP PLAN)
END
ELSE
begin
SELECT
UserID,
UserName,
Phone,
ClientID,
ClientName,
PortfolioID,
ProgramID,
PortfolioName,
MarketValue AS MarketValue,
Equity AS EquityPer,
Fixed AS FixedPer,
CashEqv AS CashPer,
AdvisorTotal AS AdvisorTotal,
@numGT AS GrandTotal,
NoOfRecords
FROM
#MainTable
ORDER BY RowID OPTION (KEEP PLAN)
END
DROP TABLE #TempTable
DROP TABLE #MainTable
DROP TABLE #TempOrgUnitID
GO
-------------------------------------------------------------------------
quote:Originally posted by Adriaan Are you asking because you see hash tables in the execution plan, or because you’re creating your own, or because you want to create them? Did you run the Index Tuning Wizard for the procedure?

You mean whether there is a hash operation in the execution plan?
Does the procedure execute faster from Query Analyzer than from your app?
If so, from your app, instead of declaring the SQL command as a procedure with appended parameters, try declaring it as text, and set the text to "exec procname parameters… "

Can you be a bit more specific what "slow" means? 1 Minute, 1 hour?
When this proc is returning 42,000 rows, how many rows are in the tables involved?
Also, you're using UDFs all over the place. Are they necessary? They can bring your performance down significantly. —
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
I support PASS Deutschland e.V. (http://www.sqlpass.de)
In the tables in the view, 5 of them have around 43,000 rows and one table has 967,000 rows. The stored proc is taking about 30 seconds. How much do you think the UDFs slow performance? We have around 2,500 stored procs, and the UDFs make development infinitely easier. Thanks.
quote:Originally posted by FrankKalis Can you be a bit more specific what "slow" means? 1 Minute, 1 hour?
When this proc is returning 42,000 rows, how many rows are in the tables involved?
Also, you're using UDFs all over the place. Are they necessary? They can bring your performance down significantly.

It depends on how many rows are involved in the call to the UDF and what it is doing. I heard of someone who used a UDF to convert a bit value to 'T'/'F', which really killed their performance. Try removing one of the high-row-count UDFs, even if the result is no longer correct, just to see the impact.
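To illustrate the kind of per-row scalar UDF call joechang describes, here is a minimal sketch of inlining the bit-to-'T'/'F' conversion so it stays set-based. The table, column, and function names here are invented for illustration:

```sql
-- Hypothetical example: instead of calling a scalar UDF once per row, e.g.
--   SELECT dbo.ufn_BitToTF(IsActive) FROM dbo.SomeTable
-- inline the expression so the optimizer treats it as a plain column expression:
SELECT CASE WHEN IsActive = 1 THEN 'T' ELSE 'F' END AS IsActiveFlag
FROM dbo.SomeTable
```

The CASE version avoids the function-invocation overhead on every row, which is where the performance loss joechang mentions comes from.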
I reviewed the execution plan and there is one query that is doing a table scan TWICE on a temp table that has 42,000 rows. First, a query inserts 42,000 rows into the temp table; second, a query updates those 42,000 rows and does a table scan. Here it is: -------------------------------------------------
UPDATE B
SET AdvisorTotal=A.MarketValue,
NoOfRecords=A.NoOfRecords,
@numGT=@numGT+B.MarketValue
FROM ( SELECT UserID,
SUM(MarketValue) AS MarketValue,
Count(UserID) AS NoOfRecords
FROM #MainTable GROUP BY UserID ) A, #MainTable B
WHERE A.UserID=B.UserID
OPTION (KEEP PLAN)
--------------------------------------------------------------
Any thoughts would be greatly appreciated!!!
quote:Originally posted by joechang It depends on how many rows are involved in the call to the UDF and what it is doing. I heard of someone who used a UDF to convert a bit value to 'T'/'F', which really killed their performance. Try removing one of the high-row-count UDFs, even if the result is no longer correct, just to see the impact.

Do the temp tables have an index on the UserId column? I would hazard a guess that they don’t have one.
Ah, ha! It doesn't. Here's the statement:
create table #MainTable(
RowID INT IDENTITY,
UserID int,
UserName varchar(100),
Phone varchar(20),
ClientID INT,
AccountID int,
AccountName varchar(100),
CustodianAccountNo varchar(15),
CustodianAccountID int,
MoneyManagerName varchar(80),
InceptionDate datetime,
PortfolioTotal numeric(18,2),
MarketValue numeric(18,2),
Equity numeric(18,2),
Fixed numeric(18,2),
CashEqv numeric(18,2),
AdvisorTotal numeric(18,2),
GrandTotal numeric(18,2),
NoOfRecords int
)
quote:Originally posted by Adriaan Do the temp tables have an index on the UserId column? I would hazard a guess that they don’t have one.

And in a follow-up to your other thread – PLEASE do not start another thread on the same issue. The other thread is also incomplete in that it doesn't mention the multiple temp tables involved. I should have spotted this earlier, but you need an index on each temp table, not just on #Main.
OK. Thanks. I will give that a try as well. My apologies for the 2 threads.
quote:Originally posted by Adriaan And in a follow-up to your other thread – PLEASE do not start another thread on the same issue. The other thread is also incomplete in that it doesn't mention the multiple temp tables involved. I should have spotted this earlier, but you need an index on each temp table, not just on #Main.

You can create a clustered index on the temp table after you've defined it and done the insert into it, to speed up the select and the update. But I think the underlying problem is that you are trying to return 42,000 rows across your network. I always advocate limiting the number of rows transferred to something reasonable: an end user can't scroll through 42,000 rows and make any intelligent decisions with that kind of data volume. But if you have to return them all, you can try to improve the command as best as possible using the index on the temp table (created after the insert and before the update/select), and also look at any other parts of the query that go into building the inserts to the temp table.
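As a rough sketch of druer's suggestion (the index name is made up, and whether a clustered or nonclustered index is better depends on the data), the index would go right after the INSERT and before the UPDATE/SELECT:

```sql
-- Sketch: index the temp table on the join column, after it is loaded.
-- #MainTable already exists and has been populated at this point.
CREATE CLUSTERED INDEX IX_MainTable_UserID ON #MainTable (UserID)
```

Since the pagination queries filter on RowID, a nonclustered index on UserID (leaving RowID as the clustering candidate) may be worth testing as an alternative.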
There are a couple of other areas that might have a severe impact on performance as well. I noticed the developer is using LOTS of functions that return table variables, and there are functions within functions. Doesn't a UDF act similarly to a cursor in terms of performance? Also, there is a view being used that joins a table with 972,000 rows. I was going to try creating an indexed view to see if this improved the overall performance of the stored procedure. By the way, it's taking about 20 seconds to return 42,000 rows in groups of 50. Is that considered slow? Your thoughts are appreciated.
quote:Originally posted by druer You can create a clustered index on the temp table after you've defined it and done the insert into it, to speed up the select and the update. But I think the underlying problem is that you are trying to return 42,000 rows across your network. I always advocate limiting the number of rows transferred to something reasonable: an end user can't scroll through 42,000 rows and make any intelligent decisions with that kind of data volume. But if you have to return them all, you can try to improve the command as best as possible using the index on the temp table (created after the insert and before the update/select), and also look at any other parts of the query that go into building the inserts to the temp table.

Functions within functions will be way beyond cursors in terms of performance … in the wrong direction. Bad idea! They should be using views, derived tables, local table variables, temp tables … and perhaps, if there really is no other way, they might consider using a UDF that returns a table. One thing they should NEVER do is to pass some column values to a UDF to be evaluated, and then filter on the value returned by the UDF – this kills performance, since you’re telling SQL to just retrieve all data, and start munching away on all of it. Do evaluations in WHERE clauses, so SQL Server can decide how to take advantage of indexes.
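A minimal, hypothetical illustration of the anti-pattern Adriaan warns about (the function, table, and column names are invented; only the date columns echo the posted proc):

```sql
-- Anti-pattern: filtering on a scalar UDF result. SQL Server must call
-- the function for every row, so it cannot use an index seek.
SELECT AccountID
FROM dbo.Accounts
WHERE dbo.ufn_IsOpenOn(AcctOpeningDate, @dtmAsOn) = 1

-- Sargable rewrite: put the comparison directly in the WHERE clause so
-- the optimizer can consider an index on AcctOpeningDate.
SELECT AccountID
FROM dbo.Accounts
WHERE AcctOpeningDate <= @dtmAsOn
```

The second form gives the optimizer a plain column-versus-parameter predicate, which is what lets it choose an index seek instead of scanning and evaluating the UDF row by row.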
Great! I think I might be getting somewhere.

By the way, the functions are returning tables.

Here's an example:

In the sproc there are 2 UDFs. One in the FROM and one in a JOIN. Like this:

============================================
(1)
INSERT INTO #TempOrgUnitID
SELECT DISTINCT OrgunitID
FROM Ufn_RPT_GetChildOrgUnitsForOrgunit(@vchOrgunitID)

(2)
SELECT
AC.UserID,
U.UserName UserName,
'' as Phone,
AC.ClientID,
AC.AccountId,
AC.AccountName,
AC.CustodianAccountNo,
AC.CustodianAccountID,
PAS.InceptionDate,
SUM(AC.PortfolioTotal),
SUM(AC.PortfolioTotal + AC.PortaccruedIncome) MarketValue,
SUM(AC.EquityTotal) Equity,
SUM(AC.FixedTotal) Fixed,
SUM(AC.CashAndEquivTotal) CashEqv,
CAST(0 AS NUMERIC(18,2)) AdvisorTotal,
CAST(0 AS NUMERIC(18,2)) GrandTotal,
CAST(0 AS INT) NoOfRecords
FROM View_Account AC
INNER JOIN Ufn_RepFilter_GetUsers(@vchUserID) U ON AC.Userid = U.Userid
INNER JOIN UsersOrgUnit UO ON U.UserID=UO.UserID
INNER JOIN #TempOrgUnitID TMP ON UO.OrgUnitID=TMP.OrgUnitID
LEFT OUTER JOIN dbo.Port_AccountSummary PAS ON AC.CustodianAccountSNAM=PAS.SNAM
WHERE AC.Ason = @dtmAsOn
AND ( (@sinAcLen=0) OR (AC.AccountName LIKE @vchAccountName) )
AND ( (@sinAcNoLen=0) OR (AC.CustodianAccountNo LIKE @vchAccountNo) )
AND COALESCE(AC.AcctOpeningDate,'01/01/1900') <= @dtmAson
AND COALESCE(AC.AcctClosingDate,@dtmason) >= @dtmAson
GROUP BY
U.UserName,
AC.UserID,
AC.ClientID,
AC.AccountId,
AC.AccountName,
AC.CustodianAccountNo,
AC.CustodianAccountID,
PAS.InceptionDate
============================================================

Then, for the first UDF there is:

CREATE FUNCTION dbo.Ufn_RPT_GetChildOrgUnitsForOrgunit
(@vchOrgunitID varchar(8000))
RETURNS @TmpOrgUnits TABLE
(OrgunitID int)
AS
BEGIN

DECLARE @TmpSplitOrgunits TABLE (OrgunitID int, RowID int IDENTITY)
DECLARE @intOrgUnitID int, @intCount int, @intRowCount int

SET @intCount=1

INSERT INTO @TmpSplitOrgunits(OrgunitID)
SELECT Alpha FROM dbo.ufn_AlpIntlistToTbl(@vchOrgunitID)

SET @intRowCount=@@ROWCOUNT

WHILE @intCount<=@intRowCount
BEGIN
SET @intOrgUnitID=(SELECT OrgunitID FROM @TmpSplitOrgunits WHERE RowID=@intCount)

INSERT INTO @TmpOrgUnits
SELECT OrgUnitID FROM dbo.ufn_GetChildOrgUnitsForOrgunit(@intOrgUnitID)

SET @intCount=@intCount+1

END
RETURN
END

==============================================================

Then for the UDF within that UDF there's:

CREATE FUNCTION dbo.Ufn_RepFilter_GetUsers
(@vchUserID VARCHAR(8000))
RETURNS @Users TABLE
(UserID INT, UserName VARCHAR(100))
AS

BEGIN
DECLARE @intUserID INT
DECLARE @TempUserId TABLE ([Id] INTEGER IDENTITY, UserId INTEGER)
DECLARE @AccessibleUsers TABLE (UserID INT)
DECLARE @intMinRow INT, @intMaxRow INT

/* Split and populate the input comma-separated string of userids into a temp table */
INSERT INTO @TempUserId(UserID)
SELECT Alpha FROM dbo.ufn_AlpIntlistToTbl(@vchUserID)

SET @intMaxRow = @@ROWCOUNT
SET @intMinRow = 1

/* Loop for each user and get the list of accessible users */
WHILE @intMinRow <= @intMaxRow
BEGIN
SELECT @intUserID = UserId FROM @TempUserId WHERE [Id] = @intMinRow

INSERT INTO @AccessibleUsers(UserID)
SELECT UserID FROM dbo.ufn_GetUsers(@intUserID)

SET @intMinRow = @intMinRow + 1
END

/* Getting the username of all the accessible userids for the given userids */
INSERT INTO @Users
SELECT DISTINCT USR.UserID,
CASE
WHEN LTRIM(RTRIM(USR.FirstName))<>'' THEN
CASE
WHEN LTRIM(RTRIM(USR.LastName))<>'' THEN
UPPER(USR.LastName)+', '+UPPER(USR.FirstName)
ELSE
UPPER(USR.FirstName)
END
ELSE
CASE
WHEN LTRIM(RTRIM(USR.LastName))<>'' THEN
UPPER(USR.LastName)
END
END AS UserName

FROM Users USR
INNER JOIN @AccessibleUsers AU ON AU.UserID=USR.Userid

RETURN
END

============================================================

And it goes down two more levels from there, but I don't want your head to explode. I've never seen a developer do this before, and unfortunately there are 134 functions and 2,411 stored procedures.

I've inherited the problem and I have to fix it.

What do I do?

HELP! :)

By the way, they said everything was running fine until they got a large client's data.

quote:Originally posted by Adriaan Functions within functions will be way beyond cursors in terms of performance … in the wrong direction. Bad idea! They should be using views, derived tables, local table variables, temp tables … and perhaps, if there really is no other way, they might consider using a UDF that returns a table. One thing they should NEVER do is to pass some column values to a UDF to be evaluated, and then filter on the value returned by the UDF – this kills performance, since you're telling SQL to just retrieve all data, and start munching away on all of it. Do evaluations in WHERE clauses, so SQL Server can decide how to take advantage of indexes.
Take it one step at a time. *** Start by creating a single script, doing all that you need to do within this script. No external functions if you can avoid them! For each UDF that you have, you add a temp table. You create some indexes on columns that you know you’ll need for joins and/or filters. Copy the INSERT processes of each UDF. See if you can find ways to avoid the WHILE loops! Think set-based. When all data is inserted into the temp tables, just select with joins … This should already improve things. **** One thing that struck me: along the way, there are lists of values being split up. This is an indication that either the data is not correctly normalized (a little tricky to resolve) or that lists of parameters are being passed from one UDF to another (much easier to solve).
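For example, assuming SQL Server 2005 or later, the WHILE loop inside Ufn_RPT_GetChildOrgUnitsForOrgunit could in principle collapse into one set-based INSERT using CROSS APPLY. This is only a sketch; the function names and signatures are taken from the posted code, and the behavior would need verifying:

```sql
-- Sketch: expand every org unit in the comma-separated list in one
-- set-based statement instead of looping over the split rows one by one.
INSERT INTO #TempOrgUnitID (OrgUnitID)
SELECT DISTINCT C.OrgUnitID
FROM dbo.ufn_AlpIntlistToTbl(@vchOrgunitID) AS S
CROSS APPLY dbo.ufn_GetChildOrgUnitsForOrgunit(S.Alpha) AS C
```

CROSS APPLY invokes the table-valued function once per row of the split list, which replaces the manual counter loop while keeping the same output.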
Parameters seem to be passed between UDFs. Do you have any idea what kind of performance hit these nested UDFs would have? I want to develop an arsenal so I can go back to the developers and have them rethink the use of so many UDFs. Any advice there? Thanks for the help!
quote:Originally posted by Adriaan Take it one step at a time. *** Start by creating a single script, doing all that you need to do within this script. No external functions if you can avoid them! For each UDF that you have, you add a temp table. You create some indexes on columns that you know you’ll need for joins and/or filters. Copy the INSERT processes of each UDF. See if you can find ways to avoid the WHILE loops! Think set-based. When all data is inserted into the temp tables, just select with joins … This should already improve things. **** One thing that struck me: along the way, there are lists of values being split up. This is an indication that either the data is not correctly normalized (a little tricky to resolve) or that lists of parameters are being passed from one UDF to another (much easier to solve).

Another interesting thing I noticed is that the sproc uses two UDFs, and these two UDFs eventually call the same UDF at the end. I'm wondering if I can go right to that final UDF and get rid of all of the UDFs in between. Have you seen this before?
quote:Originally posted by photura Parameters seem to be passed between UDFs. Do you have any idea what kind of performance hit these nested UDFs would have. I want to develop an arsenal so I can go back to the developers and have them re-think the use of so many UDFs. Any advice there? Thanks for the help!!!
quote:Originally posted by Adriaan Take it one step at a time. *** Start by creating a single script, doing all that you need to do within this script. No external functions if you can avoid them! For each UDF that you have, you add a temp table. You create some indexes on columns that you know you’ll need for joins and/or filters. Copy the INSERT processes of each UDF. See if you can find ways to avoid the WHILE loops! Think set-based. When all data is inserted into the temp tables, just select with joins … This should already improve things. **** One thing that struck me: along the way, there are lists of values being split up. This is an indication that either the data is not correctly normalized (a little tricky to resolve) or that lists of parameters are being passed from one UDF to another (much easier to solve).

Like I described: let them solve it with temp tables to hold the data, and see how much that improves things. If they don't understand the concept, then they would appear to have the wrong mindset for the job at hand and – frankly – they shouldn't be working on database development in the first place.
From the top of this SP to the end, with over 42,000 rows affected, did you watch your memory usage grow in Windows? I think this could be affecting your program's speed! I am Tran Quang Phuong