SQL Server Performance

Updating FK Indexed Columns?

Discussion in 'T-SQL Performance Tuning for Developers' started by jpfish, Aug 3, 2005.

  1. jpfish New Member

    Was reading one of the articles posted on Sql-server-performance.com and had a quick question. Our developers have started putting all basic CRUD into stored procedures, and I've noticed, particularly with the update SPs, that they also update the FK columns in the table even when the data most likely hasn't changed. (Good news: they're not updating the PKs in these generic SPs.) Monitoring SHOWCONTIG on these tables, the indexes on the FK columns appear to be deteriorating rapidly, at least from a Scan Density perspective. I guess my question is: do you have any good articles/information on whether or not it's good practice to update FK columns even when the data hasn't changed? From what I've found so far it appears that it's not best practice; for the most part FKs rarely change, and most applications don't touch them after they've been inserted.

    Pardon my ignorance, but I'm told that this is the easiest way to leverage .Net DataSets and use a stored procedure to do the related work. Do you know of any workarounds, either on the .Net side or on the SQL Server side, that don't deviate too far from basic CRUD, where UPDATE statements only touch the columns that have really changed?
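    To make the pattern concrete, the generated update SPs look something like this (table and column names are invented for illustration, not our real schema):

```sql
-- Hypothetical example of a "generic" per-table update SP.
-- Note that the FK column OrderID is rewritten on every call,
-- even when the caller passes back the unchanged value.
CREATE PROCEDURE dbo.OrderLine_Update
    @OrderLineID int,
    @OrderID     int,    -- FK to Orders; almost never actually changes
    @Quantity    int,
    @UnitPrice   money
AS
    UPDATE dbo.OrderLine
    SET OrderID   = @OrderID,    -- written whether or not it changed
        Quantity  = @Quantity,
        UnitPrice = @UnitPrice
    WHERE OrderLineID = @OrderLineID
GO
```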
  2. satya Moderator

    If the data is updated frequently, then make sure to update the table statistics; DBCC INDEXDEFRAG will also help to some extent to reduce fragmentation on those tables.
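    Something along these lines (substitute your own database, table, and index names):

```sql
-- Refresh the optimizer statistics on the affected table
UPDATE STATISTICS dbo.OrderLine WITH FULLSCAN

-- Defragment the index on the FK column online (SQL Server 2000 syntax);
-- the parameters are database name, table name, index name
DBCC INDEXDEFRAG (MyDatabase, 'dbo.OrderLine', 'IX_OrderLine_OrderID')

-- Check the results, including Scan Density
DBCC SHOWCONTIG ('dbo.OrderLine', 'IX_OrderLine_OrderID')
```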

    Satya SKJ
    Moderator
    http://www.SQL-Server-Performance.Com/forum
    This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
  3. QueryAnalyzer New Member

    Never update the FK columns. (For that matter, the PK columns either.) You can treat this as a rule of thumb.
    There is a very good chance that you have indexes on these columns. Why unnecessarily add overhead?

    Happy Querying!
  4. QueryAnalyzer New Member

  5. Stalker New Member

    >>on whether or not its good practice to update FK columns even when the data hasn't changed

    What does that mean? Why would you update a column if you're not actually changing the value in it?

    >>where Updates statements only update the columns that have really changed.

    ? If you're updating a column, you're updating only that column, not the others.

  6. jpfish New Member

    I agree with what you're saying, but this is what I'm told: it's the easiest way to leverage .Net DataSets and generic SPs.
  7. Adriaan New Member

    Pardon my ignorance, but I have no idea what "to leverage the .Net Datasets and use a stored procedure to do the related work" could mean.

    Are you trying to insert data from a .Net application into a SQL Server database, and the FK columns in SQL Server are different from those implied in the .Net application, so you need an SP to convert the FK values before the data can be merged into the user tables?
  8. merrillaldrich New Member

    jpfish - I think I know what you mean -

    If your developers are working with .Net, then it's really easy to snag a table from the DB, transform it into a DataSet, and have the tooling autogenerate INSERT, UPDATE, and DELETE statements that work generically, allowing any offline edits to the data. The pattern is

    - Retrieve a bunch of rows
    - Edit them on the client side
    - Post all the changes back to the DB

    One of those generic UPDATE statements has to contain all columns, just in case one was edited at the client; the tool makes no distinction based on column content, so that the update handles every possible change. It "updates" both changed and unchanged columns for complete rows.

    These statements generally have the following form (just a sample from a project I have):

    SqlUpdCom.CommandText = "UPDATE LDAPUsers
        SET NDSFullName = @NDSFullName,
            LastName = @LastName,
            FirstName = @FirstName,
            ou = @ou,
            location = @location,
            street = @street,
            city = @city,
            state = @state,
            zip = @zip,
            email = @email,
            phone = @phone,
            fax = @fax,
            title = @title
        WHERE (NDSFullName = @Original_NDSFullName)
          AND (FirstName = @Original_FirstName OR (@Original_FirstName IS NULL AND FirstName IS NULL))
          AND (LastName = @Original_LastName OR (@Original_LastName IS NULL AND LastName IS NULL))
          AND (city = @Original_city OR (@Original_city IS NULL AND city IS NULL))
          AND (email = @Original_email OR (@Original_email IS NULL AND email IS NULL))
          AND (fax = @Original_fax OR (@Original_fax IS NULL AND fax IS NULL))
          AND (location = @Original_location OR (@Original_location IS NULL AND location IS NULL))
          AND (ou = @Original_ou OR (@Original_ou IS NULL AND ou IS NULL))
          AND (phone = @Original_phone OR (@Original_phone IS NULL AND phone IS NULL))
          AND (state = @Original_state OR (@Original_state IS NULL" ... etc.

    This works, and does have a certain "leverage" as you say, in that the client is free to perform any change (except for things that violate the schema) and the update function will still work. However, these statements are not the fastest, and I think it would help performance to tune the application so that they are written in a more specialized form and do only the update work that is really necessary.
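    To make that concrete, a specialized version for a screen that can only edit contact details might look like this (a sketch only; column names borrowed from the sample above, and NULL handling simplified for brevity):

```sql
-- Hand-tuned update: writes only the editable columns, never the
-- key columns, and skips the write entirely when nothing changed.
UPDATE LDAPUsers
SET phone = @phone,
    fax   = @fax,
    title = @title
WHERE NDSFullName = @NDSFullName    -- key column, used only to locate the row
  AND (phone <> @phone OR fax <> @fax OR title <> @title)  -- skip no-op writes
```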
  9. In our development shop, we don't allow CRUD operations except through stored procedures. One of the advantages of this approach is being able to check for changes before performing the update. Another is dramatically improved security, with auditing. The drawback is having to write a lot of stored procedure code, at least for us, because we always write our own rather than having a program produce a standard version. I have used ERwin to do some of the template work, though; it has a strong macro language just for that.

    We do update checks as a standard part of the procedures we write; that is one of the many advantages of using stored procedures. I have never worried about changing foreign keys, though. Your observation that "the indexes on these FK columns, at least from a Scan Density perspective, are deteriorating rapidly" is new and interesting to me. Have you been checking this on a number of tables? I would think foreign keys with large numbers of duplicate values would tend to lose scan density due to page splits, but not unique values. How many tables have you been checking, and what is the nature of the data in them?
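    For what it's worth, the change check we build into our update procedures looks roughly like this (illustrative table and columns, not our actual schema):

```sql
CREATE PROCEDURE dbo.Customer_UpdatePhone
    @CustomerID int,
    @Phone      varchar(20)
AS
    -- Only touch the row when the value really differs, so unchanged
    -- rows (and the indexes on them) are left alone.
    UPDATE dbo.Customers
    SET Phone = @Phone
    WHERE CustomerID = @CustomerID
      AND (Phone <> @Phone
           OR (Phone IS NULL AND @Phone IS NOT NULL)
           OR (Phone IS NOT NULL AND @Phone IS NULL))
GO
```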
