SQL Server Performance

Not defining FK constraints to boost performance?

Discussion in 'SQL Server 2005 Performance Tuning for DBAs' started by HSA, Jan 14, 2009.

  1. HSA New Member

    Hi:
    Our DBA keeps insisting that we should avoid defining FK constraints as much as possible (even when we do have a logical FK relationship) because they hurt performance (and create deadlocks, etc). He was not able to show me how this could happen though. Let's say you have tables A and B and table B has a column A_ID that references the PK of A. Let's assume your application never updates the PK of A or the column A_ID of B or deletes any row from A, but does a lot of updates to other fields in A and B. I guess this is a pretty typical scenario for enterprise applications. In this scenario, does adding the FK constraint really hurt the performance? (we may or may not have an index on column A_ID)
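    Something like this, just to illustrate what I mean (table and column names are made up):
    CREATE TABLE A (A_ID int NOT NULL PRIMARY KEY, SomeField varchar(50));
    CREATE TABLE B (B_ID int NOT NULL PRIMARY KEY,
        A_ID int NOT NULL CONSTRAINT FK_B_A REFERENCES A (A_ID),
        SomeOtherField varchar(50));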
    I am asking this because if the FK constraint doesn't hurt performance, I prefer to define them to guard against data inconsistencies that could happen when people manually change the DB (or when there is a bug in the application that does unexpected updates to the FK).
    Thanks a lot for your help in advance
  2. satya Moderator

    Welcome to the forums.
    Are you creating these FK constraints with any option such as NOCHECK?
    For better performance, avoid using NOCHECK in foreign key constraint definitions, or check your code carefully. As for query performance: a query that references a set of tables related by FK constraints will only do better if there is an index on the column(s) referred to in the WHERE clause.
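    For example, something along these lines (using the A and B tables from your post; the index name is just illustrative):
    -- a nonclustered index on the referencing column supports the FK checks
    -- as well as joins and WHERE clauses on that column
    CREATE NONCLUSTERED INDEX IX_B_A_ID ON B (A_ID);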
  3. HSA New Member

    Thank you for your reply. No, we don't use NOCHECK.
    Please note that I don't want to add the FKs to get better performance. I simply want to add them to guarantee data consistency. But our DBA believes that the mere existence of an FK constraint adds overhead and therefore prefers not to define the constraints unless there is a great risk of data inconsistency.
    So my question is: assuming we don't update the PK or FK columns, is there any extra overhead imposed if you add an FK constraint to your column?
  4. satya Moderator

    Nothing in terms of any noticeable performance difference; as I mentioned, if there is an index then it should be fine and you get optimum performance.
  5. FrankKalis Moderator

    [quote user="HSA"]
    Hi:
    Our DBA keeps insisting that we should avoid defining FK constraints as much as possible (even when we do have a logical FK relationship) because they hurt performance (and create deadlocks, etc). He was not able to show me how this could happen though. Let's say you have tables A and B and table B has a column A_ID that references the PK of A. Let's assume your application never updates the PK of A or the column A_ID of B or deletes any row from A, but does a lot of updates to other fields in A and B. I guess this is a pretty typical scenario for enterprise applications. In this scenario, does adding the FK constraint really hurt the performance? (we may or may not have an index on column A_ID)
    I am asking this because if the FK constraint doesn't hurt performance, I prefer to define them to guard against data inconsistencies that could happen when people manually change the DB (or when there is a bug in the application that does unexpected updates to the FK).
    [/quote]
    It is not all about performance, you know? [:)]
    I'd rather have a "slower" system with data I can trust than a lightning-fast system giving unreliable answers.
    Sure, there has to be some checking performed with a FOREIGN KEY, but I haven't come across a situation in which this checking turned out to be a bottleneck, even under heavy workload conditions.
    What I have sometimes deliberately done, however, is sacrifice the commonly suggested index on a FOREIGN KEY column. That is, when I only need to ensure data integrity and don't query that data.
    Get your DBA to prove his claim. If he can't, then he should stay quiet and let you do your job. If he is still unwilling to help you, escalate this to your boss and his, and let them sort it out. At the end of the day they are then responsible for the decision they've made.
    To be honest, I doubt that any one of them wants to go to their business users and tell them that they made a wrong decision because of inconsistent data from the database, but performance of the system is good. [;)]
  6. jeff_yao New Member

    I guess this DBA may be right about the effect of FKs on performance. In my environment, I do see a negative impact when inserts/updates hit table A, which has a foreign key to table B, and table B has about 2+ BILLION rows. The negative impacts include long blocking of others and long execution time for the statement itself. We finally disabled this type of foreign key and let the business logic take care of this special case.
    But in all other cases, the impact is negligible.
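    In case it helps anyone, disabling an FK (and re-enabling it later) looks roughly like this; the constraint name here is made up:
    -- disable the constraint (no more checking, and it also becomes untrusted)
    ALTER TABLE A NOCHECK CONSTRAINT FK_A_B;
    -- re-enable it later and re-validate the existing rows, so it becomes trusted again
    ALTER TABLE A WITH CHECK CHECK CONSTRAINT FK_A_B;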
    Jeff Yao
    SQL Server Consultant
    http://www.sqlservercentral.com/blogs/jeffrey_yao/
  7. FrankKalis Moderator

    [quote user="jeff_yao"]
    In my environment, I do see a negative impact when inserts/updates hit table A, which has a foreign key to table B, and table B has about 2+ BILLION rows. The negative impacts include long blocking of others and long execution time for the statement itself. We finally disabled this type of foreign key and let the business logic take care of this special case.
    [/quote]
    Well, of course there are always exceptions to the rule. However, even nowadays such large tables are not all that common. [:)]
  8. Kewin New Member

    imo, declining the use of FKs as DRI on the grounds that they 'hurt performance' is a sure sign you need to go find another job.
    (or get some education, it may just be a misunderstanding ofc) ;)
    The DBA's #1 job is to keep the data safe and true.
    (In reality, just the fact that a piece of data manages to find its way into the db makes it true, since it's there...)

    If, by chance (as noted in the thread), there is the odd table where an FK constraint proves to burden the system too much, then (again, just my .02) you have reached a point where the physical design is no longer correct and needs to be changed.
    It is *not* remedied by removing the FK.
    You are correct in that the main concern is to keep the data consistent.
    FKs are a wonderful tool for that. An FK also belongs to the logical model, not the physical one. (The logical model really has no concept of 'performance'.)
    So, to not break the logical model, one needs to find reasonable ways to implement those things into the physical model.
    /Kenneth
  9. jeff_yao New Member

    [quote user="Kewin"]
    The DBA's #1 job is to keep the data safe and true.
    (In reality, just the fact that a piece of data manages to find it's way into the db, makes it true, since it's there...)

    If, by chance (as noted in the thread), there is the odd table where an FK constraint proves to burden the system too much, then (again, just my .02) you have reached a point where the physical design is no longer correct and needs to be changed.
    It is *not* remedied by removing the FK.
    [/quote]
    I guess that you have not been involved in supporting many 3rd-party applications, where you do not have the luxury of making *any* change to the physical design. When the data grows out of its original design, as a DBA the first priority is to ensure a workable system, which is your employment base and why you are there in the first place. As for data validity, it is usually tolerable in the sense that "bad" data will not compromise the whole system. In the real world, there are lots of ideal things we want to achieve but cannot, due to various constraints. One obvious example in the DBA world: when you design a data model, do you want to stick to normalization rules to the extent that the whole system based on the model will suffer in performance?
    In my case, if the FK is not removed, the whole system suffers, meaning it is useless in the eyes of users when time-outs occur frequently. By removing the FK, it works great, and the chance that the FK (if not removed) would be violated is almost NIL (unless the system is hacked), because on the application side the insert/update always gets the PK id from the parent table before it goes to the child table to do the modification.
  10. Adriaan New Member

    One performance issue with a PK would be if it is on a natural (non-surrogate) key and the referenced key gets updated a lot, which then gets cascaded through the FK constraint. A surrogate key would not need to be updated.
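    To illustrate (table and constraint names are made up): with a natural key as the PK and ON UPDATE CASCADE, every change to the key fans out through the FK, while a surrogate key simply never changes.
    -- natural key as PK; any update to CustomerCode cascades into every referencing row
    CREATE TABLE Customer (CustomerCode varchar(20) NOT NULL PRIMARY KEY, Name varchar(100));
    CREATE TABLE CustomerOrder (OrderID int NOT NULL PRIMARY KEY,
        CustomerCode varchar(20) NOT NULL
            CONSTRAINT FK_CustomerOrder_Customer REFERENCES Customer (CustomerCode)
            ON UPDATE CASCADE);
    -- with an int IDENTITY surrogate key as the PK, the referenced value never changes
    -- and there is no cascading work to do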
  11. Kewin New Member

    Well, your guess is incorrect, I'm very well aware of real world issues.
    That wasn't the point.
    The OP was about the general statement made by his DBA about not using FKs for performance reasons. Period.

    We all have different ambition levels we live by.
    If someone's ambition of 'almost' is good enough for them, that's fine.
    'Almost' isn't my level of ambition, however.
    Now, don't you confuse 'ambition' with 'what we really can do in the real world'.
    The real world is full of daily compromises.
    It's our level of ambition, however, that dictates how we deal with those compromises, and how we leave them for those who may come after.
    /Kenneth
  12. Adriaan New Member

    Kenneth, that was just my real-world comment to yours - I don't think we disagree.
    Real-world systems use natural keys for cascading FKs. The point is that such FKs might cause performance hiccups once the referenced key column gets updated. You could improve on that by using a surrogate key for the FK, provided that you can alter the data structure.
  13. Kewin New Member

    I'm sorry if I was being unclear.

    Didn't mean to comment to anyone in particular, just to the op's original post.
    I agree with him, and I still think that his DBA is wrong in his assumptions, that's all.
    I'm perfectly aware that we don't live in a shiny bubble or a perfect world.
    We're surrounded by 'flawed designs' and 'imperfect models', yes, that's the way it is.
    But it doesn't mean that we have to resign ourselves to it and be content in silence.
    /Kenneth
  14. moh_hassan20 New Member

    When you create an FK, you ensure data integrity at the level of the database (not the application).
    If you don't create the FK, the application is responsible for doing extra work to keep the data consistent, which may affect performance and may be inefficient.


    As satya said, if you create a nonclustered index on the FK fields, you enhance performance and keep referential integrity.

    If there were a contradiction (which there is not) between data integrity and performance, data integrity should come first.
    As they say, "pay for one (data integrity), and get the next one free (performance)" [;)]
    review that tip:
    Foreign Key Constraints (Without NOCHECK) Boost Performance and Data Integrity
    http://www.microsoft.com/technet/abouttn/flash/tips/tips_122104.mspx



  15. simmur New Member

    I guess your DBA should be fired.
    The advice of not using FKs is really stupid.
  16. Kewin New Member

    [quote user="moh_hassan20"]
    review that tip:
    Foreign Key Constraints (Without NOCHECK) Boost Performance and Data Integrity
    http://www.microsoft.com/technet/abouttn/flash/tips/tips_122104.mspx
    [/quote]
    Just to add to the article: the bug Brian mentions about the wrong results isn't there anymore.
    When doing the examples on 2005, the results are correct.

    However, the behaviour is the same, though the article doesn't go into why that is.

    The 'trick' for speed is that since you have an FK on t2 referencing t1, you don't need to access t1.
    The optimizer is smart enough to know this, and will indeed not bother to burden you with accessing t1 - IF...
    the FK constraint on t2 can be trusted. If you also have an index on the FK, then it may be sufficient to just access the index pages, and not the data at all.
    If it's trusted, then the optimizer *knows* it doesn't need to check things out in t1. The DRI is there already.
    However, if it's not trusted, then it will discard the DRI and go check for itself, perhaps even not using indexes while doing it.
    This is where you may get 'lossy performance' from FKs (the same goes for CHECK constraints of all kinds).
    You make it untrusted when you use NOCHECK (which is a good reason to never ever use NOCHECK, not even to 'save time').
    NOCHECK makes the DRI useless.
    To see what you have, try out
    select OBJECTPROPERTY(object_id('fk_t2_t1'), 'CnstIsNotTrusted')
    and see what it returns in the first and second example.

    There's the explanation for why there's an access to a table that shouldn't be necessary, and also a mechanism that, if used right and wisely, can save a system a lot of unnecessary work.
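    To see the 'trick' in action, take a query along these lines (same t1/t2/fk_t2_t1 names as in the tip, and assuming t2.col1 is NOT NULL):
    -- if fk_t2_t1 is trusted, every t2 row is guaranteed to have exactly one match in t1,
    -- so the optimizer can drop the access to t1 from the plan entirely
    SELECT COUNT(*)
    FROM t2
    JOIN t1 ON t1.col1 = t2.col1;
    -- if the constraint is untrusted (NOCHECK), the plan has to touch t1 as well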

    /Kenneth
  17. rohit2900 Member

    [quote user="moh_hassan20"]
    review that tip:
    Foreign Key Constraints (Without NOCHECK) Boost Performance and Data Integrity
    http://www.microsoft.com/technet/abouttn/flash/tips/tips_122104.mspx
    [/quote]
    Can anybody explain to me the difference between
    ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
    REFERENCES t1(col1)
    And
    ALTER TABLE t2 WITH NOCHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1) REFERENCES t1(col1)
    ALTER TABLE t2 CHECK CONSTRAINT fk_t2_t1
  18. Kewin New Member

    The first ALTER:
    ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
    REFERENCES t1(col1)

    ..this creates an FK constraint *and* also validates its references. This is the way to always 'enable' FKs and CHECK constraints, with validation.
    The second ALTER:
    ALTER TABLE t2 WITH NOCHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1) REFERENCES t1(col1)
    ... this adds the FK without validating the existing data, making it untrusted and, if left in this condition, useless for the optimizer, which will not consider untrusted FKs/CHECK constraints.
    The third ALTER:
    ALTER TABLE t2 CHECK CONSTRAINT fk_t2_t1
    Well, this is a tricky one..
    This actually does nothing here: the constraint is already enabled, since WITH NOCHECK only skipped the validation.
    It doesn't validate the previous 'nocheck add' and it doesn't make the untrusted constraint trusted.
    If you insert something into t2 (i.e. value 2) that's not in t1, and then use:
    ALTER TABLE t2 WITH NOCHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1) REFERENCES t1(col1)
    and
    ALTER TABLE t2 CHECK CONSTRAINT fk_t2_t1
    .. neither will say anything or warn about the existing orphan in t2 that is missing its parent in t1.


    However, if one were to say;

    ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1
    ..this would do validation, and succeed only if the constraint checks out OK.
    It will also set the FK to trusted again.
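    A quick way to check the trusted state after each of those (same fk_t2_t1 name as above):
    -- returns 1 = not trusted (after the WITH NOCHECK add or a plain CHECK CONSTRAINT),
    -- 0 = trusted (after WITH CHECK ADD or WITH CHECK CHECK CONSTRAINT)
    SELECT OBJECTPROPERTY(OBJECT_ID('fk_t2_t1'), 'CnstIsNotTrusted') AS is_not_trusted;
    -- on 2005 you can also look it up in the catalog views
    SELECT name, is_disabled, is_not_trusted FROM sys.foreign_keys WHERE name = 'fk_t2_t1';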
    /Kenneth
