SQL Server Performance

SQL 2000 view slow after upgrade from 7

Discussion in 'Performance Tuning for DBAs' started by Jeremy54, Aug 11, 2003.

  1. Jeremy54 New Member

    We recently upgraded to SQL 2000 from SQL 7 and the only issue is that a relatively complex view, necessary to run critical systems, is running about 100 times slower while all of our other queries run much faster. The view hits 11 databases and tables owned by 3 different users.

    I compared the execution plan on our old server and the new server and found for several tables it is no longer using the secondary index that closely matches the query criteria, rather it is using the primary clustered index on the table ID (essentially doing a table scan).

    I have dropped and recreated the view and indexes, updated statistics, and turned up the parallelism cutoff to over 100, but have found nothing that helps.

    Any suggestions?

    Thanks,
    Jeremy
  2. gaurav_bindlish New Member

    Did you try turning off the Parallel Execution of the Query? What is the Service Pack of SQL Server that you are using?
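
    For reference, parallelism can be limited either per query or server-wide; a rough sketch in standard SQL Server 2000 syntax (the query below is just an illustration against the view from this thread):

    -- Per-query: ask the optimizer for a serial plan
    SELECT proj_number, week_ending, timespent
    FROM global_diary
    WHERE proj_number = 'C598'
    OPTION (MAXDOP 1)

    -- Server-wide: cap the degree of parallelism (an advanced option)
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max degree of parallelism', 1
    RECONFIGURE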

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  3. satya Moderator

    Also, Profiler is the best bet for finding the slow-running queries; capture them and submit the trace to the Index Tuning Wizard for any recommendations that could fine-tune the performance.
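
    For example, a saved Profiler trace can be pulled into a table for review and used as the workload for the Index Tuning Wizard; a minimal sketch (the file path and table name are only examples):

    -- Load a saved trace file into a table
    SELECT * INTO dbo.slow_view_trace
    FROM ::fn_trace_gettable('C:\traces\slow_view.trc', default)

    -- Assuming these columns were captured in the trace
    SELECT TextData, Duration, CPU, Reads
    FROM dbo.slow_view_trace
    ORDER BY Duration DESC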

    HTH

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  4. Jeremy54 New Member

    After I upped the parallel cutoff it did not try to use parallelism (according to the execution plan); is there some other way I should try to turn this off?

    We are using SQL 2000 8.00.760 (SP3).

  5. gaurav_bindlish New Member

  6. Jeremy54 New Member

    I added "OPTION (MAXDOP 1)" to my query and though it did not run as fast as it did before the upgrade, it did improve somewhat and I noticed a couple interesting items:

    Without OPTION (MAXDOP 1) specified, the plan shows only one of five large tables using the correct secondary index. By the way, the 5 large tables are combined using UNIONs, and the one table that was using the correct index contains data that matches the criteria while the others do not and thus return no rows (but searching these tables is what is slowing down the query).

    With OPTION (MAXDOP 1) specified it uses the correct index on three of the tables, so that does help, but it does not fix the issue completely.

    Does this shed any light?

    Jeremy

  7. bradmcgehee New Member

    You may also want to consider adding the appropriate hints to force the query optimizer to use the same indexes as before.
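
    For illustration, such a hint on the big table might look like this (a sketch using the TIME_ENTRIES table and SI_PROJECTS_WEEK_ENDING index names that appear in the DDL posted later in this thread):

    SELECT te.project_id, te.week_ending, te.timespent
    FROM diary.timeadmin.TIME_ENTRIES te WITH (INDEX(SI_PROJECTS_WEEK_ENDING))
    WHERE te.project_id = 'C598'
    AND te.week_ending BETWEEN '1-1-2000' AND '1-1-2020'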

    -----------------------------
    Brad M. McGehee, MVP
    Webmaster
    SQL-Server-Performance.Com
  8. gaurav_bindlish New Member

    Or, if the data is horizontally partitioned, as it seems to be, use check constraints on the partitioning column. This will give the optimizer a hint to look for data only in the specific table.

    BTW, this works very well with distributed partitioned views.
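
    A rough sketch only: this relies on each country table actually having a column the view filters on, which the schema in this thread may not have (the country code is a literal in the view, not a column), so the constraint below is purely hypothetical; the table name is taken from the DDL posted later in the thread:

    USE diary
    GO
    -- Hypothetical partitioning column; only useful if such a column really exists
    ALTER TABLE timeadmin.TIME_ENTRIES
    ADD CONSTRAINT CK_TIME_ENTRIES_country CHECK (neracountry = 'USA')
    GO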

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  9. Luis Martin Moderator

    If I were you, I'd run Profiler to capture the SQL and then run the Index Tuning Wizard to see what's new.

    Luis Martin
  10. Jeremy54 New Member

    I ran Profiler to capture the running of the large view and then used the Index Tuning Wizard, but it did not recommend any changes. The view hits many different databases and the wizard selects only one db to analyze; could that be why it is not suggesting any improvements?

  11. thomas New Member

    You need to take the Index Tuning Wizard's recommendations very carefully - they are not always appropriate - or will help one query, but harm others. It sounds useless in your situation anyway.

    If I was you I'd look at your index design from the ground up too to check it's all optimal - especially have a look at gaurav's idea of adding a constraint to aid the query optimizer - though that's only possible if your data is organised in a logically appropriate way.

    Also I'd second Brad's suggestion of forcing the optimizer to use the same indexes as previously by adding hints as appropriate - but you need to ensure these hints don't break your query in future if indexes are changed, renamed or dropped.

    Tom Pullen
    DBA, Oxfam GB
  12. Jeremy54 New Member

    I do not understand how the check constraint will help (or where I would put it), let me explain the situation in a little more detail to see if there are suggestions:

    We have 2 identically structured databases repeated 5 times (one pair for each country we do business in) and 1 global database, for a total of 11 databases making up this view. The data of interest are stored in one of the 2 country-specific databases and are made up of 5 tables ranging from 1.3 million rows to less than 10,000. The view I am interested in does a union to combine these 5 tables into one logical table I can use in our stored procedures and systems. Hitting this view is where I see the primary key used rather than the appropriate secondary index.

    I tried to force the tables to use the secondary index, but it took longer: the execution plan showed the use of the forced index, but then an additional Bookmark Lookup step that cost a huge percentage of the total execution time. Any suggestions?

    Jeremy
  13. thomas New Member

    can you give me an idea how uniqueness of rows is enforced in these tables? i mean, within one db for one country. have you got an ID field with the identity property, a PK or something?

    Tom Pullen
    DBA, Oxfam GB
  14. Jeremy54 New Member

    There is an Identity(1,1) field in each of these tables. There is a secondary index on the identity field (though I never bothered to make it check uniqueness, should I?). The clustered primary key on the table is not on the identity field.
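
    If it should enforce uniqueness, the index can be rebuilt as unique in place; a sketch using the index name from the DDL posted later in the thread (run in the relevant diary database):

    -- Rebuild the existing nonclustered index as a unique index
    CREATE UNIQUE INDEX SI_TIME_SERVER_TIME_ID
    ON timeadmin.TIME_ENTRIES (server_time_id)
    WITH FILLFACTOR = 90, DROP_EXISTING
    GO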

    Jeremy
  15. Luis Martin Moderator

    Please, will you send me a Profiler file so I can see the SQL statements in detail?
    If you prefer, via e-mail.

    Thanks,

    Luis Martin
  16. thomas New Member

    I guess it's too late to change the db design?

    your problem isn't an easy one to fix, i don't think. have you considered changing the clustered index to the identity column and then having a nonclustered index on the other column? this of course has massive implications if you use the (formerly clustered) column for regular order bys or group bys.
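
    A rough sketch of that swap, using the key and index names from the DDL posted later in the thread (the whole table is rebuilt, so test it on a copy first):

    USE diary
    GO
    -- Drop the existing clustered PK and put it back as nonclustered
    ALTER TABLE timeadmin.TIME_ENTRIES DROP CONSTRAINT PK_TIME_ENTRIES
    GO
    ALTER TABLE timeadmin.TIME_ENTRIES
    ADD CONSTRAINT PK_TIME_ENTRIES PRIMARY KEY NONCLUSTERED (employee_id, local_time_id)
    GO
    -- Cluster on the identity column instead
    CREATE UNIQUE CLUSTERED INDEX CIX_TIME_ENTRIES_server_time_id
    ON timeadmin.TIME_ENTRIES (server_time_id)
    GO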





    Tom Pullen
    DBA, Oxfam GB
  17. Jeremy54 New Member

    I thought it was bad to have the identity column as the clustered index because it created performance hotspots. These databases have been around for several versions of SQL Server, so maybe it is different now or not a concern?

    If you think it will fix the issue then I am willing to try it, I have two questions:
    1) can I do this while the system is online or do I need to bring it offline? If yes, how?
    2) Why did it work without trouble in SQL 7 and not give us trouble until SQL 2000?

    Thanks,
    Jeremy
  18. gaurav_bindlish New Member

    Actually with row level locking in place with SQL Server 2K, the problem of hotspots is almost gone.

    Yeah, the table will be locked while the index is dropped and recreated, so I won't advise doing it during business hours.

    Gaurav
    Moderator
    Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard
  19. Twan New Member

    I think that SQL Server 2000 is much more stringent on the use of non-covering nonclustered indexes... Is there any way that you can add a covering index to the table(s)? i.e. an index which contains all of the columns used in the query for that table, with the order of the columns being where-clause search columns before where-clause join columns before other columns.
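
    For illustration: in SQL 2000 "covering" means every column the query reads from that table has to be part of the index key itself, so the bookmark lookup disappears. A partial sketch; the table and column names are assumptions about the schema for illustration only, and wide columns (such as a varchar(255) text column) may make a truly covering index impractical:

    CREATE INDEX SI_TIME_ENTRIES_COVERING
    ON timeadmin.TIME_ENTRIES (project_id, week_ending, employee_id, category_id, title, show, billing_week)
    GO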

    Is it possible for you to post the table and view DDL?

    Twan
  20. Luis Martin Moderator

    All members:

    This is the SQL from Jeremy:

    select neracountry, server_time_id, emp_exp_number, proj_number,
    office, last_name, first_name, code_title_cat, title, sort_title,
    timespent, posted * case when neracountry='USA' then 1 when neracountry='UK' then 1
    when neracountry='SPA' then 1 when neracountry='AUS' then 1
    when neracountry='JPN' then 1 else 1 end as posted, billing_rate *
    case when neracountry='USA' then 1 when neracountry='UK' then 1
    when neracountry='SPA' then 1 when neracountry='AUS' then 1
    when neracountry='JPN' then 1 else 1 end as billing_rate, convert(char(12), week_ending, 107) as week_endingC, week_ending, convert(char(12), date, 107) as dateC, date, fullname,
    convert(char(255), diary) as diary, isnull('$', CurrencySymbolStart) as CurrencySymbol, show,0,0,0,0,0,0
    from global_diary
    where proj_number = 'C598' and week_ending between '1-1-2000' and '1-1-2020'
    and isnull(show,1) = isnull(1, isnull(show,1))
    and case when billing_week is null then 1 else 0 end <= 1

    Luis Martin
  21. thomas New Member

    "2) Why did it work without trouble in SQL 7 and not give us trouble until SQL 2000?"

    very, very good question - one you'd have to ask Microsoft. They're the ones who changed the query processor.

    Tom Pullen
    DBA, Oxfam GB
  22. Luis Martin Moderator

    I forgot to include the following information with the SQL above:

    CPU: 54172
    Reads: 783510
    Writes: 0
    Duration: 21970 ms

    Luis Martin
  23. Twan New Member

    I presume the SQL has been simplified to take variables out of it? If not, then there are several possible simplifications... If this is a statement from Profiler, then dynamic SQL is used to create this query, and again some improvements to the statement creation code could be made.

    without the view definition, it is hard to see what indexes could/should make a difference.

    Cheers
    Twan
  24. Jeremy54 New Member

    I have posted below the SQL necessary to build one of the countries (the US; the other countries are identical, just with different database names, and I can post those too if you like) and the global database.

    Thanks for your help.

    Jeremy

    --Here is code to create the "US Diary" database and 3 necessary tables:

    CREATE DATABASE [diary] ON (NAME = N'diarydata', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_data.mdf' , SIZE = 756, FILEGROWTH = 10%), (NAME = N'diarydata2', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_data2.ndf' , SIZE = 9, FILEGROWTH = 10%), (NAME = N'diarydata3', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_data3.ndf' , SIZE = 9, FILEGROWTH = 10%), (NAME = N'diarydata4', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_data4.ndf' , SIZE = 9, FILEGROWTH = 10%) LOG ON (NAME = N'diarylog', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_log.ldf' , SIZE = 297, FILEGROWTH = 10%), (NAME = N'diarylog2', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_log2.ldf' , SIZE = 5, FILEGROWTH = 10%), (NAME = N'diarylog3', FILENAME = N'e:\SQLServer2000\MSSQL\data\diary\diary_log3.ldf' , SIZE = 5, FILEGROWTH = 10%)
    COLLATE SQL_Latin1_General_CP1_CI_AS
    GO

    exec sp_dboption N'diary', N'autoclose', N'false'
    GO

    exec sp_dboption N'diary', N'bulkcopy', N'false'
    GO

    exec sp_dboption N'diary', N'trunc. log', N'true'
    GO

    exec sp_dboption N'diary', N'torn page detection', N'false'
    GO

    exec sp_dboption N'diary', N'read only', N'false'
    GO

    exec sp_dboption N'diary', N'dbo use', N'false'
    GO

    exec sp_dboption N'diary', N'single', N'false'
    GO

    exec sp_dboption N'diary', N'autoshrink', N'false'
    GO

    exec sp_dboption N'diary', N'ANSI null default', N'false'
    GO

    exec sp_dboption N'diary', N'recursive triggers', N'false'
    GO

    exec sp_dboption N'diary', N'ANSI nulls', N'false'
    GO

    exec sp_dboption N'diary', N'concat null yields null', N'false'
    GO

    exec sp_dboption N'diary', N'cursor close on commit', N'false'
    GO

    exec sp_dboption N'diary', N'default to local cursor', N'false'
    GO

    exec sp_dboption N'diary', N'quoted identifier', N'false'
    GO

    exec sp_dboption N'diary', N'ANSI warnings', N'false'
    GO

    exec sp_dboption N'diary', N'auto create statistics', N'true'
    GO

    exec sp_dboption N'diary', N'auto update statistics', N'true'
    GO

    use [diary]
    GO

    CREATE TABLE [timeadmin].[EMPLOYEES] (
    [employee_id] [char] (6) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [first_name] [char] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [last_name] [char] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [password] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [base_hours] [numeric](4, 2) NOT NULL ,
    [overtime] [bit] NOT NULL ,
    [office] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [title] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [computer_id] [int] NOT NULL ,
    [num_ids] [int] NOT NULL ,
    [show] [tinyint] NULL ,
    [billing_rate] [int] NULL ,
    [start_date] [datetime] NULL ,
    [end_date] [datetime] NULL ,
    [extension] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [mailsent] [tinyint] NULL ,
    [char] (60) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [login_id] [char] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]
    GO

    CREATE TABLE [timeadmin].[TIME_ENTRIES] (
    [server_time_id] [int] IDENTITY (1, 1) NOT NULL ,
    [project_id] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [category_id] [int] NOT NULL ,
    [diary] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [timespent] [numeric](4, 2) NOT NULL ,
    [date] [datetime] NOT NULL ,
    [employee_id] [char] (6) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [week_ending] [datetime] NOT NULL ,
    [submitted] [bit] NOT NULL ,
    [billed] [bit] NOT NULL ,
    [local_time_id] [int] NOT NULL ,
    [time_update] [datetime] NOT NULL ,
    [billing_week] [datetime] NULL ,
    [billing_rate] [numeric](6, 2) NULL ,
    [emp_director_id] [char] (6) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [bill] [numeric](6, 2) NULL ,
    [defer] [numeric](6, 2) NULL ,
    [posted] [numeric](6, 2) NULL ,
    [adjusted] [bit] NOT NULL ,
    [bill_type] [int] NULL ,
    [title] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [show] [int] NULL
    ) ON [PRIMARY]
    GO

    CREATE TABLE [timeadmin].[all_levels] (
    [project_id] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [level1] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [level2] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [level3] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [level4] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [level5] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [level6] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [category_id] [int] NOT NULL
    ) ON [PRIMARY]
    GO

    ALTER TABLE [timeadmin].[EMPLOYEES] WITH NOCHECK ADD
    CONSTRAINT [pk_EMPLOYEES] PRIMARY KEY CLUSTERED
    (
    [employee_id]
    ) WITH FILLFACTOR = 80 ON [PRIMARY]
    GO

    ALTER TABLE [timeadmin].[TIME_ENTRIES] WITH NOCHECK ADD
    CONSTRAINT [PK_TIME_ENTRIES] PRIMARY KEY CLUSTERED
    (
    [employee_id],
    [local_time_id]
    ) WITH FILLFACTOR = 80 ON [PRIMARY]
    GO

    ALTER TABLE [timeadmin].[all_levels] WITH NOCHECK ADD
    CONSTRAINT [PK_ALL_LEVELS] PRIMARY KEY CLUSTERED
    (
    [project_id],
    [category_id]
    ) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    ALTER TABLE [timeadmin].[EMPLOYEES] WITH NOCHECK ADD
    CONSTRAINT [DF__EMPLOYEES__passw__2DC6D687] DEFAULT ('nera') FOR [password],
    CONSTRAINT [DF__EMPLOYEES__base___2EBAFAC0] DEFAULT (7) FOR [base_hours],
    CONSTRAINT [DF__EMPLOYEES__overt__2FAF1EF9] DEFAULT (0) FOR [overtime],
    CONSTRAINT [DF__EMPLOYEES__compu__30A34332] DEFAULT ((-1)) FOR [computer_id],
    CONSTRAINT [DF__EMPLOYEES__num_i__3197676B] DEFAULT (0) FOR [num_ids],
    CONSTRAINT [DF__employees__show__2002C715] DEFAULT (1) FOR [show],
    CONSTRAINT [DF_EMPLOYEES_mailsent_1__11] DEFAULT (0) FOR [mailsent]
    GO

    ALTER TABLE [timeadmin].[TIME_ENTRIES] WITH NOCHECK ADD
    CONSTRAINT [DF_TIME_ENTRI_submitted] DEFAULT (0) FOR [submitted],
    CONSTRAINT [DF_TIME_ENTRI_billed_2__11] DEFAULT (0) FOR [billed],
    CONSTRAINT [DF_TIME_ENTRI_time_update] DEFAULT (getdate()) FOR [time_update],
    CONSTRAINT [DF_TIME_ENTRI_adjusted] DEFAULT (0) FOR [adjusted],
    CONSTRAINT [DF_TIME_ENTRI_bill_type] DEFAULT (0) FOR [bill_type],
    CONSTRAINT [DF_TIME_ENTRIES_show] DEFAULT (1) FOR [show]
    GO

    CREATE INDEX [SI_EMPLOYEES_OFFICE] ON [timeadmin].[EMPLOYEES]([office]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    CREATE INDEX [SI_EMPLOYEES_SHOW] ON [timeadmin].[EMPLOYEES]([show]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    CREATE INDEX [SI_BILLING_WEEK] ON [timeadmin].[TIME_ENTRIES]([billing_week]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    CREATE INDEX [SI_PROJECTS_WEEK_ENDING] ON [timeadmin].[TIME_ENTRIES]([project_id], [week_ending]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    CREATE INDEX [SI_TIME_EMPLOYEE_WEEK_ENDING] ON [timeadmin].[TIME_ENTRIES]([employee_id], [week_ending]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    CREATE INDEX [SI_TIME_SERVER_TIME_ID] ON [timeadmin].[TIME_ENTRIES]([server_time_id]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO







    --Here is the SQL to create the "US Billing" database and two necessary tables
    CREATE DATABASE [usconvert] ON (NAME = N'usconvertdata', FILENAME = N'E:\SQLServer2000\MSSQL\data\usconvert\usconvert_data.mdf' , SIZE = 2306, FILEGROWTH = 10%), (NAME = N'usconvertdata2', FILENAME = N'E:\SQLServer2000\MSSQL\data\usconvert\usconvert_data2.ndf' , SIZE = 12, FILEGROWTH = 10%) LOG ON (NAME = N'usconvertlog', FILENAME = N'E:\SQLServer2000\MSSQL\data\usconvert\usconvert_log.ldf' , SIZE = 1831, FILEGROWTH = 10%), (NAME = N'usconvertlog2', FILENAME = N'E:\SQLServer2000\MSSQL\data\usconvert\usconvert_log2.ldf' , SIZE = 17, FILEGROWTH = 10%)
    COLLATE SQL_Latin1_General_CP1_CI_AS
    GO

    exec sp_dboption N'usconvert', N'autoclose', N'false'
    GO

    exec sp_dboption N'usconvert', N'bulkcopy', N'false'
    GO

    exec sp_dboption N'usconvert', N'trunc. log', N'true'
    GO

    exec sp_dboption N'usconvert', N'torn page detection', N'false'
    GO

    exec sp_dboption N'usconvert', N'read only', N'false'
    GO

    exec sp_dboption N'usconvert', N'dbo use', N'false'
    GO

    exec sp_dboption N'usconvert', N'single', N'false'
    GO

    exec sp_dboption N'usconvert', N'autoshrink', N'false'
    GO

    exec sp_dboption N'usconvert', N'ANSI null default', N'false'
    GO

    exec sp_dboption N'usconvert', N'recursive triggers', N'false'
    GO

    exec sp_dboption N'usconvert', N'ANSI nulls', N'false'
    GO

    exec sp_dboption N'usconvert', N'concat null yields null', N'false'
    GO

    exec sp_dboption N'usconvert', N'cursor close on commit', N'false'
    GO

    exec sp_dboption N'usconvert', N'default to local cursor', N'false'
    GO

    exec sp_dboption N'usconvert', N'quoted identifier', N'false'
    GO

    exec sp_dboption N'usconvert', N'ANSI warnings', N'false'
    GO

    exec sp_dboption N'usconvert', N'auto create statistics', N'true'
    GO

    exec sp_dboption N'usconvert', N'auto update statistics', N'true'
    GO

    use [usconvert]
    GO

    CREATE TABLE [lonstr].[ms_title_cat] (
    [search_title] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [code_title_cat] [char] (26) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [sort_title] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]
    GO

    CREATE TABLE [lonstr].[office] (
    [office_id] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [name] [char] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [active] [int] NULL ,
    [code] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]
    GO

    CREATE UNIQUE CLUSTERED INDEX [ms_title_cat] ON [lonstr].[ms_title_cat]([search_title]) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    ALTER TABLE [lonstr].[office] WITH NOCHECK ADD
    CONSTRAINT [DF_office_active_1__22] DEFAULT (1) FOR [active]
    GO






    --Here is the SQL to create "Global" database, 1 necessary table, and the problem view
    CREATE DATABASE [intranet] ON (NAME = N'intranet_Data', FILENAME = N'E:\SQLServer2000\MSSQL\Data\intranet\intranet_Data.MDF' , SIZE = 2371, FILEGROWTH = 10%) LOG ON (NAME = N'intranet_Log', FILENAME = N'E:\SQLServer2000\MSSQL\Data\intranet\intranet_Log.LDF' , SIZE = 627, FILEGROWTH = 10%)
    COLLATE SQL_Latin1_General_CP1_CI_AS
    GO

    exec sp_dboption N'intranet', N'autoclose', N'false'
    GO

    exec sp_dboption N'intranet', N'bulkcopy', N'true'
    GO

    exec sp_dboption N'intranet', N'trunc. log', N'false'
    GO

    exec sp_dboption N'intranet', N'torn page detection', N'false'
    GO

    exec sp_dboption N'intranet', N'read only', N'false'
    GO

    exec sp_dboption N'intranet', N'dbo use', N'false'
    GO

    exec sp_dboption N'intranet', N'single', N'false'
    GO

    exec sp_dboption N'intranet', N'autoshrink', N'false'
    GO

    exec sp_dboption N'intranet', N'ANSI null default', N'false'
    GO

    exec sp_dboption N'intranet', N'recursive triggers', N'false'
    GO

    exec sp_dboption N'intranet', N'ANSI nulls', N'false'
    GO

    exec sp_dboption N'intranet', N'concat null yields null', N'false'
    GO

    exec sp_dboption N'intranet', N'cursor close on commit', N'false'
    GO

    exec sp_dboption N'intranet', N'default to local cursor', N'false'
    GO

    exec sp_dboption N'intranet', N'quoted identifier', N'false'
    GO

    exec sp_dboption N'intranet', N'ANSI warnings', N'false'
    GO

    exec sp_dboption N'intranet', N'auto create statistics', N'true'
    GO

    exec sp_dboption N'intranet', N'auto update statistics', N'true'
    GO

    use [intranet]
    GO

    CREATE TABLE [dbo].[conversion_rates] (
    [CountryStart] [char] (3) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [CountryEnd] [char] (3) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [rate] [numeric](24, 20) NOT NULL ,
    [CurrencyNameStart] [char] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CurrencyNameEnd] [char] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CurrencySymbolEnd] [char] (7) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CurrencySymbolStart] [char] (7) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[conversion_rates] WITH NOCHECK ADD
    CONSTRAINT [PK_conversion_rates] PRIMARY KEY NONCLUSTERED
    (
    [CountryStart],
    [CountryEnd]
    ) WITH FILLFACTOR = 90 ON [PRIMARY]
    GO

    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_NULLS ON
    GO


    /****** Object: View dbo.global_diary Script Date: 6/20/2002 8:07:32 AM ******/

    CREATE VIEW dbo.global_diary
    AS
    SELECT 'JPN' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diaryTokyo.timeadmin.time_entries te,
    diaryTokyo.timeadmin.employees e,
    Tokyo.lonstr.ms_title_cat t, tokyo.lonstr.office AS o, diaryTokyo.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'JPN'
    UNION
    SELECT 'USA' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diary.timeadmin.time_entries te,
    diary.timeadmin.employees e,
    usconvert.lonstr.ms_title_cat t, usconvert.lonstr.office AS o, diary.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'USA'
    UNION
    SELECT 'SPA' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diaryMadrid.timeadmin.time_entries te,
    diaryMadrid.timeadmin.employees e,
    MadridEuro.lonstr.ms_title_cat t, MadridEuro.lonstr.office AS o, diaryMadrid.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'SPA'
    UNION
    SELECT 'AUS' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diarySydney.timeadmin.time_entries te,
    diarySydney.timeadmin.employees e,
    sydneyconvert.lonstr.ms_title_cat t, sydneyconvert.lonstr.office AS o, diarySydney.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'AUS'
    UNION
    SELECT 'UK' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diaryLondon.timeadmin.time_entries te,
    diaryLondon.timeadmin.employees e,
    LondonConvert.lonstr.ms_title_cat t, LondonConvert.lonstr.office AS o, diaryLondon.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'UK'

    GO
    SET QUOTED_IDENTIFIER OFF
    GO
    SET ANSI_NULLS ON
    GO

  25. bambola New Member

    To begin with, since values in the different tables will not repeat, you can use UNION ALL instead of UNION. UNION alone returns distinct values, while UNION ALL does not perform that check.

    Bambola.
  26. Jeremy54 New Member

    I went ahead and reorganized the indexes as recommended above; all the primary keys are now clustered on the identity column. This change alone fixes my problem in 3 of the 5 databases, so now even without the OPTION (MAXDOP 1) clause it chooses the correct index for those 3 databases.

    Now the largest database and the 3rd largest still use the primary key (now on the identity field) rather than the index that matches the query criteria.

    Any ideas?

    Thanks,
    Jeremy
  27. Twan New Member

    If you take one part of the union statements and merge that with your original criteria, how is the performance of that?


    SELECT
    'UK' AS neracountry,
    server_time_id,
    e.employee_id as emp_exp_number,
    te.project_id as proj_number,
    o.name AS office,
    e.last_name,
    e.first_name,
    code_title_cat,
    te.title,
    sort_title,
    timespent,
    posted,
    te.billing_rate,
    week_ending,
    date,
    convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname,
    diary,
    CurrencySymbolStart,
    te.show,
    te.category_id,
    billing_week
    FROM diaryLondon.timeadmin.time_entries te,
    diaryLondon.timeadmin.employees e,
    LondonConvert.lonstr.ms_title_cat t,
    LondonConvert.lonstr.office AS o,
    diaryLondon.timeadmin.all_levels l,
    conversion_rates c
    WHERE te.employee_id = e.employee_id
    AND te.project_id = l.project_id
    and te.category_id = l.category_id
    and te.title *= t.search_title
    AND e.office = o.office_id
    and c.countryStart = 'UK'

    and te.project_id = 'C598'
    and week_ending between '1-1-2000' and '1-1-2020'
    and isnull(show,1) = isnull(1, isnull(show,1))
    and case when billing_week is null then 1 else 0 end <= 1


    Assuming that
    - conversion rates only has one record for each countrystart
    - office doesn't have a one to many relationship with time_entries
    - all_levels doesn't have a one to many relationship with time_entries

    NOTE that the parameters can make quite a difference, so if between 2000 and 2020 and project = c598 returns a lot of data (i have this figure of 10% in my head but this is unsubstantiated), then clustered index scan will be chosen
    what if you change the statement to:


    SELECT
    'UK' AS neracountry,
    server_time_id,
    e.employee_id as emp_exp_number,
    te.project_id as proj_number,
    office = ( select o.name
    from LondonConvert.lonstr.office AS o
    where e.office = o.office_id ),
    e.last_name,
    e.first_name,
    code_title_cat,
    te.title,
    sort_title,
    timespent,
    posted,
    te.billing_rate,
    week_ending,
    date,
    fullname = ( select convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end )
    from diaryLondon.timeadmin.all_levels l
    where
    l.project_id = te.project_id
    and l.category_id = te.category_id ),
    diary,
    CurrencySymbolStart = (select CurrencySymbolStart
    from conversion_rates c
    where c.countryStart = 'UK'),
    te.show,
    te.category_id,
    billing_week
    FROM diaryLondon.timeadmin.time_entries te,
    diaryLondon.timeadmin.employees e,
    LondonConvert.lonstr.ms_title_cat t
    WHERE te.employee_id = e.employee_id
    and te.title *= t.search_title

    and te.project_id = 'C598'
    and week_ending between '1-1-2000' and '1-1-2020'
    and isnull(show,1) = isnull(1, isnull(show,1))
    and case when billing_week is null then 1 else 0 end <= 1


    If still no good then how does this perform

    SELECT
    'UK' AS neracountry,
    server_time_id,
    e.employee_id as emp_exp_number,
    te.project_id as proj_number,
    e.last_name,
    e.first_name,
    te.title,
    timespent,
    posted,
    te.billing_rate,
    week_ending,
    date,
    diary,
    te.show,
    te.category_id,
    billing_week
    FROM diaryLondon.timeadmin.time_entries te,
    diaryLondon.timeadmin.employees e
    WHERE te.employee_id = e.employee_id
    and te.project_id = 'C598'
    and week_ending between '1-1-2000' and '1-1-2020'
    and isnull(show,1) = isnull(1, isnull(show,1))
    and case when billing_week is null then 1 else 0 end <= 1


    if none of this helps then
    - What is the showplan for the view (v7 compared with v2k)
  28. Jeremy54 New Member

    It is instantaneous when querying the table directly, and interestingly it always uses the correct index as well.

    The show plan in SQL 7 always uses the correct index, it is just in SQL 2000 that I now have 2 of the 5 databases using the primary key rather than the index.

  29. bambola New Member

    > Now the the largest database and the 3rd largest still use the primary key (now on the identity
    > field) rather than the index that matches the query criteria.

    How is the selectivity of this index?
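
    One way to check is to look at the density information SQL Server keeps for the index; a sketch using the index from the DDL posted above:

    DBCC SHOW_STATISTICS ('timeadmin.TIME_ENTRIES', SI_PROJECTS_WEEK_ENDING)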

    Bambola.
  30. Twan New Member


    are the tables in these databases larger than the others
    are they skewed (as per Bambola's question)
    is the collation the same on these databases

    what does the showplan say about these databases... are you able to do a showplan_all and post the output? (might be too long...)
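
    For example, something along these lines (a trimmed-down version of the problem query; the estimated plan, including EstimateRows per operator, is returned instead of the query results):

    SET SHOWPLAN_ALL ON
    GO
    SELECT proj_number, week_ending, timespent
    FROM global_diary
    WHERE proj_number = 'C598'
    AND week_ending BETWEEN '1-1-2000' AND '1-1-2020'
    GO
    SET SHOWPLAN_ALL OFF
    GO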
  31. Jeremy54 New Member

    This is for the DB that doesn't work:

    Statistics for INDEX 'SI_PROJECTS_WEEK_ENDING'.
    Updated Rows Rows Sampled Steps Density Average key length
    Aug 12 2003 1:31PM 1258363 36168 200 13.707669 16.0

    All density Average Length Columns
    5.5928412E-4 4.0 project_id
    3.0321407E-4 12.0 project_id, week_ending
    7.9468327E-7 16.0 project_id, week_ending,server_time_id


    This is for the DB that does work:
    Statistics for INDEX 'SI_PROJECTS_WEEK_ENDING'.
    Updated Rows Rows Sampled Steps Density Average key length
    Aug 12 2003 12:50PM 168301 168301 200 2.3578144E-4 16.0

    All density Average Length Columns
    7.7942322E-4 4.0 project_id
    4.8125512E-5 12.0 project_id, week_ending
    5.9417353E-6 16.0 project_id, week_ending, server_time_id


  32. Jeremy54 New Member

    Q: are the tables in these database larger than the others
    A: It is the largest and 3rd largest table, but the 2nd largest works correctly (stats shown above)

    Q: are they skewed (as per Bambola's question)
    A: Very big difference between the largest table (almost 1.3 million rows) and smallest (10,000)
    Q: is the collation the same on these databases
    A: Yes (I double checked the largest and 2nd largest, they are the same)

    Q: what does the showplan say about these databases... are you able to do a showplan_all and post the output? (might be too long...)
    A: I am not very familiar with the execution plan, but it clearly shows the two databases using the PK rather than the secondary index, and in the case of the large db that step takes a large percentage of the time. It is very large but I can try to post it if you want.

    Jeremy
  33. Jeremy54 New Member

    I noticed the statistics for the db that does not work looked strange, so I rebuilt the index in question; here are the updated stats for the DB that doesn't work:

    Statistics for INDEX 'SI_PROJECTS_WEEK_ENDING'.
    Updated Rows Rows Sampled Steps Density Average key length
    Aug 12 2003 3:00PM 1258382 1258382 200 7.1285336E-5 16.0

    All density Average Length Columns
    1.6450074E-4 4.0 project_id
    1.0005203E-5 12.0 project_id, week_ending
    7.9467128E-7 16.0 project_id, week_ending, server_time_id
  34. Twan New Member

    Hi Jeremy...

    If you create a view just against the large database, and try doing your select off that, does it still work slowly?

    If so, then what is the showplan for that statement...?

    Cheers
    Twan
  35. joechang New Member

    can you post of output of SET SHOWPLAN_ALL ON
    or email the output to me at jchang6@yahoo.com
    it would also be helpful to get the same for SQL7 if that is still available

    my observation is that the cost formulas (loop join 10 rows etc) have not changed, but sql 2000 is more sophisticated in estimating row count; of course, that sometimes causes a bad plan

    i also recall that sql 7 had problems with joins to a "view" with UNION, where it would apply the best index to the join for the first part of the UNION but not the remaining; it seems that jeremy is seeing the reverse
  36. Jeremy54 New Member

    Twan,

    I did as you said and it uses the index and works instantly. I then added in one of the small databases that normally uses the index, and now it gets interesting. I posted the execution tree below, but I saw in the execution plan through SQL Query Analyzer that the correct index is being used for the large table. The problem is that it then has a step called "Bookmark Lookup" that takes 97% of the time, about 30 seconds. This is the same step that I saw if I forced the view to use the correct index using hints. What is this, and is that the real problem?

    Thanks,
    Jeremy

    Compute Scalar(DEFINE:([Expr1058]=[Union1048]*Convert(If ([Union1037]='USA') then 1 else If ([Union1037]='UK') then 1 else If ([Union1037]='SPA') then 1 else If ([Union1037]='AUS') then 1 else If ([Union1037]='JPN') then 1 else 1), [Expr1059]=[Union1049]*Convert(If ([Union1037]='USA') then 1 else If ([Union1037]='UK') then 1 else If ([Union1037]='SPA') then 1 else If ([Union1037]='AUS') then 1 else If ([Union1037]='JPN') then 1 else 1), [Expr1060]=Convert([Union1050]), [Expr1061]=Convert([Union1051]), [Expr1062]=Convert([Union1053])))
     |--Sort(DISTINCT ORDER BY:([Union1037] ASC, [Union1038] ASC, [Union1039] ASC, [Union1040] ASC, [Union1041] ASC, [Union1042] ASC, [Union1043] ASC, [Union1044] ASC, [Union1045] ASC, [Union1046] ASC, [Union1047] ASC, [Union1048] ASC, [Union1049] ASC, [Union1050] ASC, [Union1051] ASC, [Union1052] ASC, [Union1053] ASC, [Union1054] ASC, [Union1055] ASC, [Union1056] ASC, [Union1057] ASC))
     |--Concatenation
     |--Compute Scalar(DEFINE:([Expr1013]=Convert(rtrim([all_levels].[level1])+If (rtrim([all_levels].[level2])='') then '' else (''+rtrim([all_levels].[level2]))+If (rtrim([all_levels].[level3])='') then '' else (''+rtrim([all_levels].[level3]))+If (rtrim([all_levels].[level4])='') then '' else (''+rtrim([all_levels].[level4]))+If (rtrim([all_levels].[level5])='') then '' else (''+rtrim([all_levels].[level5]))+If (rtrim([all_levels].[level6])='') then '' else (''+rtrim([all_levels].[level6])))))
     | |--Nested Loops(Left Outer Join, OUTER REFERENCES:([TIME_ENTRIES].[title]))
     | |--Nested Loops(Inner Join)
     | | |--Merge Join(Inner Join, MERGE:([all_levels].[category_id])=([TIME_ENTRIES].[category_id]), RESIDUAL:([TIME_ENTRIES].[category_id]=[all_levels].[category_id]))
     | | | |--Clustered Index Seek(OBJECT:([DiaryTokyo].[timeadmin].[all_levels].[PK_ALL_LEVELS]), SEEK:([all_levels].[project_id]='C598') ORDERED FORWARD)
     | | | |--Nested Loops(Inner Join, WHERE:([EMPLOYEES].[office]=[office].[office_id]))
     | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([TIME_ENTRIES].[employee_id]))
     | | | | |--Sort(ORDER BY:([TIME_ENTRIES].[category_id] ASC))
     | | | | | |--Filter(WHERE:(isnull([TIME_ENTRIES].[show], 1)=1 AND If ([TIME_ENTRIES].[billing_week]=NULL) then 1 else 0<=1))
     | | | | | |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([DiaryTokyo].[timeadmin].[TIME_ENTRIES]))
     | | | | | |--Index Seek(OBJECT:([DiaryTokyo].[timeadmin].[TIME_ENTRIES].[SI_PROJECTS_WEEK_ENDING]), SEEK:([TIME_ENTRIES].[project_id]='C598' AND [TIME_ENTRIES].[week_ending] >= 'Jan 1 2000 12:00AM' AND [TIME_ENTRIES].[week_ending] <= 'Jan 1 2020 12:00AM') ORDERED FORWARD)
     | | | | |--Clustered Index Seek(OBJECT:([DiaryTokyo].[timeadmin].[EMPLOYEES].[pk_EMPLOYEES]), SEEK:([EMPLOYEES].[employee_id]=[TIME_ENTRIES].[employee_id]) ORDERED FORWARD)
     | | | |--Table Scan(OBJECT:([Tokyo].[lonstr].[office]))
     | | |--Bookmark Lookup(BOOKMARK:([Bmk1010]), OBJECT:([intranet].[dbo].[conversion_rates]))
     | | |--Index Seek(OBJECT:([intranet].[dbo].[conversion_rates].[PK_conversion_rates]), SEEK:([conversion_rates].[CountryStart]='JPN') ORDERED FORWARD)
     | |--Clustered Index Seek(OBJECT:([Tokyo].[lonstr].[ms_title_cat].[ms_title_cat]), SEEK:([ms_title_cat].[search_title]=[TIME_ENTRIES].[title]) ORDERED FORWARD)
     |--Filter(WHERE:([Expr1028]='C598'))
     |--Compute Scalar(DEFINE:([Expr1027]=Convert([EMPLOYEES].[employee_id]), [Expr1028]=Convert([TIME_ENTRIES].[project_id]), [Expr1029]=Convert(Convert([office].[name])), [Expr1030]=Convert([EMPLOYEES].[last_name]), [Expr1031]=Convert([EMPLOYEES].[first_name]), [Expr1032]=Convert(Convert([TIME_ENTRIES].[title])), [Expr1033]=Convert([TIME_ENTRIES].[posted]), [Expr1034]=Convert([TIME_ENTRIES].[billing_rate]), [Expr1035]=Convert(rtrim([all_levels].[level1])+If (rtrim([all_levels].[level2])='') then '' else (''+rtrim([all_levels].[level2]))+If (rtrim([all_levels].[level3])='') then '' else (''+rtrim([all_levels].[level3]))+If (rtrim([all_levels].[level4])='') then '' else (''+rtrim([all_levels].[level4]))+If (rtrim([all_levels].[level5])='') then '' else (''+rtrim([all_levels].[level5]))+If (rtrim([all_levels].[level6])='') then '' else (''+rtrim([all_levels].[level6]))), [Expr1036]=Convert([TIME_ENTRIES].[diary])))
     |--Hash Match(Right Outer Join, HASH:([ms_title_cat].[search_title])=([TIME_ENTRIES].[title]), RESIDUAL:([TIME_ENTRIES].[title]=[ms_title_cat].[search_title]))
     |--Clustered Index Scan(OBJECT:([usconvert].[lonstr].[ms_title_cat].[ms_title_cat]))
     |--Nested Loops(Inner Join)
     |--Hash Match(Inner Join, HASH:([all_levels].[project_id], [all_levels].[category_id])=([TIME_ENTRIES].[project_id], [TIME_ENTRIES].[category_id]), RESIDUAL:([TIME_ENTRIES].[project_id]=[all_levels].[project_id] AND [TIME_ENTRIES].[category_id]=[all_levels].[category_id]))
     | |--Clustered Index Scan(OBJECT:([diary].[timeadmin].[all_levels].[PK_ALL_LEVELS]))
     | |--Hash Match(Inner Join, HASH:([EMPLOYEES].[employee_id])=([TIME_ENTRIES].[employee_id]), RESIDUAL:([TIME_ENTRIES].[employee_id]=[EMPLOYEES].[employee_id]))
     | |--Hash Match(Inner Join, HASH:([office].[office_id])=([EMPLOYEES].[office]), RESIDUAL:([EMPLOYEES].[office]=[office].[office_id]))
     | | |--Table Scan(OBJECT:([usconvert].[lonstr].[office]))
     | | |--Clustered Index Scan(OBJECT:([diary].[timeadmin].[EMPLOYEES].[pk_EMPLOYEES]))
     | |--Filter(WHERE:(isnull([TIME_ENTRIES].[show], 1)=1 AND If ([TIME_ENTRIES].[billing_week]=NULL) then 1 else 0<=1))
     | |--Bookmark Lookup(BOOKMARK:([Bmk1014]), OBJECT:([diary].[timeadmin].[TIME_ENTRIES]))
     | |--Index Scan(OBJECT:([diary].[timeadmin].[TIME_ENTRIES].[SI_PROJECTS_WEEK_ENDING]), WHERE:([TIME_ENTRIES].[week_ending]>='Jan 1 2000 12:00AM' AND [TIME_ENTRIES].[week_ending]<='Jan 1 2020 12:00AM'))
     |--Table Spool
     |--Bookmark Lookup(BOOKMARK:([Bmk1024]), OBJECT:([intranet].[dbo].[conversion_rates]))
     |--Index Seek(OBJECT:([intranet].[dbo].[conversion_rates].[PK_conversion_rates]), SEEK:([conversion_rates].[CountryStart]='USA') ORDERED FORWARD)
  37. Jeremy54 New Member

    Correction to my above post:

    Once I add in the smaller database to the view, it no longer uses the "correct" secondary index; instead it reverts to the primary key on the large table but uses the secondary index on the smaller table.

    Sorry for the confusion, I had left my test hint forcing the correct index in the view by mistake.

    Thanks,
    Jeremy
  38. joechang New Member

    i am guessing that because s2k has more sophisticated statistics capability on multiple columns where s7 could only guess row count effectively on the first column, somehow s2k has figured out to expect a relatively high row count for the [TIME_ENTRIES] table.
    both versions of sql server over weight the cost of a bookmark lookup, leading to potentially very bad execution plans, in this case, a really bad plan.
    basically the expected row count for [TIME_ENTRIES] based on:
    proj_number = 'C598' and week_ending between '1-1-2000' and '1-1-2020'

    is probably moderately high, so it decides to do a table scan rather than use the index on the two columns

    you could use join hints, but that could lead to other ugly problems.

    is it possible to change the clustered index on to the column sequence:
    [project_id], [week_ending], [server_time_id]
    or
    [project_id], [week_ending], [employee_id],[local_time_id]

    the primary key on [employee_id],[local_time_id] becomes nonclustered

    technically, all we want for the clustered index is
    [project_id], [week_ending], but if it is not unique, sql server tacks on something to make it unique, so you may as well put something yourself
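
    A sketch of that clustered key, assuming the primary key has first been recreated as nonclustered (the index name is just an example):

    -- Cluster on the search columns; server_time_id is tacked on to keep the key unique
    CREATE UNIQUE CLUSTERED INDEX CIX_TIME_ENTRIES_proj_week
    ON timeadmin.TIME_ENTRIES (project_id, week_ending, server_time_id)
    GO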

  39. Jeremy54 New Member

    I am willing to change the clustered index if it will help.

    Currently, the primary key is on the identity column (server_time_id) and is clustered. How do I mark the PK as not clustered so I can make a new clustered index?

    Thanks,
    Jeremy
  40. joechang New Member

    hold up a sec,
    based on the show plan all output:
    [TIME_ENTRIES] Est rows
    Country S2K S7
    London 15458
    Sydney 111
    Madrid 32k31
    diary 708k68
    Tokyo 113

    for some reason, the query to the view failed to apply the:
    [TIME_ENTRIES].[project_id]='C598'
    condition for the Diary (US?) until the end

    does a query to the US table only have the proper plan?
  41. Jeremy54 New Member

    I figured out how to change the clustered index and did so before I saw your post.

    The good news is the execution plan does seem to pick the proper index now (the clustered index, not the PK) in all databases. The bad news is it is still slow...this may be due to your observation.

    What do you mean by "does the US table only have the proper plan?"

    Thanks,
    Jeremy
  42. Jeremy54 New Member

    Joe, please disregard my last post, I received your email.
  43. Jeremy54 New Member

    Okay, the secondary indexes are now being used as expected. However, as Joe pointed out through analysis of the showplan, there is a large difference in the number of estimated rows from SQL 7 to SQL 2000. I looked through the show plan and noticed that we have a RIGHT OUTER JOIN in SQL 2000 but none in SQL 7 in addition to the estimated CPU costs being about 100 times higher than in SQL 7.

    Any suggestions?

    Thanks,
    Jeremy

  44. joechang New Member

    try creating a view with only the US part,
    then run the query against the US view,
    if it shows an index seek (not scan) to the [TIME_ENTRIES] table,
    then try the following:

    1) create the global view with the US query first, the Madrid tables next,
    ie, in reverse order of expected row count.

    2) parameterize the query:
    DECLARE @Proj CHAR(4), @WE1 datetime, @WE2 datetime
    SELECT @Proj = 'C598', etc

    SELECT xx from global_diary
    where proj_number = @Proj and week_ending between @WE1 and @WE2

    the idea is to fool s2k into thinking that @WE1 and @WE2 have a much narrower range
    also, run the following first to get rid of the old plan
    DBCC freeproccache

    3) alternatively, create views for each country,
    then run the query as

    select xx
    from US_diary
    where proj_number = 'C598' and week_ending between '1-1-2000' and '1-1-2020'
    UNION ALL
    select xx
    from restofworld_diary
    where proj_number = 'C598' and week_ending between '1-1-2000' and '1-1-2020'

    side note:
    I am guessing that you don't mean UNION, but rather UNION ALL, as the UNION implies a SORT DISTINCT
    ----------------------------------------------------------------------------
    CREATE VIEW dbo.US_diary
    AS
    SELECT 'USA' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diary.timeadmin.time_entries te,
    diary.timeadmin.employees e,
    usconvert.lonstr.ms_title_cat t, usconvert.lonstr.office AS o, diary.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'USA'
  45. Twan New Member


    How many rows are there in the big table for:
    - project = 'C598'
    - week_ending between '1-1-2000' and '1-1-2020'
    - both of the SARGS

    What happens if each part of the view is re-written as below? You may need to check that it still returns the same results, as I have made a few assumptions...

    SELECT
    'UK' AS neracountry,
    server_time_id,
    e.employee_id as emp_exp_number,
    te.project_id as proj_number,
    office = ( select o.name
    from LondonConvert.lonstr.office AS o
    where e.office = o.office_id ),
    e.last_name,
    e.first_name,
    code_title_cat,
    te.title,
    sort_title,
    timespent,
    posted,
    te.billing_rate,
    week_ending,
    date,
    fullname = ( select convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end )
    from diaryLondon.timeadmin.all_levels l
    where
    l.project_id = te.project_id
    and l.category_id = te.category_id ),
    diary,
    CurrencySymbolStart = ( select CurrencySymbolStart
    from conversion_rates c
    where c.countryStart = 'UK'),
    te.show,
    te.category_id,
    billing_week
    FROM diaryLondon.timeadmin.time_entries te,
    diaryLondon.timeadmin.employees e,
    LondonConvert.lonstr.ms_title_cat t
    WHERE te.employee_id = e.employee_id
    and te.title *= t.search_title

    Cheers
    Twan
  46. Hans New Member

    In the original select, for example for JPN as shown below, where are the join criteria between conversion_rates and the rest of the tables? Doesn't this mean a cross product between the join of the other tables and the conversion_rates table, which of course is eliminated by the UNION operator later?

    SELECT 'JPN' AS neracountry, server_time_id, e.employee_id as emp_exp_number, te.project_id as proj_number,
    o.name AS office, e.last_name, e.first_name, code_title_cat, te.title, sort_title,
    timespent, posted, te.billing_rate, week_ending, date, convert(char(255), rtrim(isnull(level1,'MISC')) +
    case rtrim(isnull(level2,'')) when '' then '' else '' + rtrim(isnull(level2,'')) end +
    case rtrim(isnull(level3,'')) when '' then '' else '' + rtrim(isnull(level3,'')) end +
    case rtrim(isnull(level4,'')) when '' then '' else '' + rtrim(isnull(level4,'')) end +
    case rtrim(isnull(level5,'')) when '' then '' else '' + rtrim(isnull(level5,'')) end +
    case rtrim(isnull(level6,'')) when '' then '' else '' + rtrim(isnull(level6,'')) end ) as fullname, diary, CurrencySymbolStart, te.show, te.category_id, billing_week
    FROM diaryTokyo.timeadmin.time_entries te,
    diaryTokyo.timeadmin.employees e,
    Tokyo.lonstr.ms_title_cat t, tokyo.lonstr.office AS o, diaryTokyo.timeadmin.all_levels l, conversion_rates c
    WHERE te.employee_id = e.employee_id AND te.project_id = l.project_id and te.category_id = l.category_id and
    te.title *= t.search_title AND e.office = o.office_id and c.countryStart = 'JPN'

    Hans
  47. Jeremy54 New Member

    Joe, I tried your suggestion and found the following:

    1) If I only hit the US db (largest) then it uses the index seek and works instantly as expected. If I add in the Madrid db (3rd largest) it also works instantly. Unfortunately, as soon as I add in any of the other 3 databases, it becomes slow again. The plan shows that the US and Madrid DBs switch to index scans and the newly added db uses an index seek.

    What could possibly be causing it to treat these databases differently? Should I be rebuilding these databases from scratch to see if I can fix it?

    2) Parameterizing it did not help.

    3) Yes, this works but we have thousands of projects; the value of the view was that I could use it for any project.

    Also, yes, I am using a union all.

    Hans, in my earlier posting of the view I left out some of the join criteria to conversion_rates, but if you add in another clause " and CountryEnd = 'USA'" that will suffice.

  48. Jeremy54 New Member

    Twan,

    How many rows are there in the big table for:
    - project = 'C598' **** ZERO, no US records *****
    - week_ending between '1-1-2000' and '1-1-2020' *** 716959 ***
    - both of the SARGS *** 716959 ***


    I rewrote the view as you directed and it did not help, it acted as described above making the US and Madrid use index scans and the other 3 databases using index seeks.

    What could be causing this?

    Jeremy
  49. Jeremy54 New Member

    I generated SQL from the 2nd largest DB (one that works as expected in all scenarios) and used that to build two test databases. I then inserted the data from the US and Madrid databases into these two new databases and modified the view to work off the new test dbs. It now works as expected, execution in less than 2 seconds.

    Now, I have no idea what is different about these databases, but I will script the two problem dbs and compare that to the new db to see if I can figure it out.

    Thanks for everyone's help, this forum and most importantly the contributors, are really top-notch.

    Jeremy
  50. bambola New Member

    Please let us know what you have found.

    Bambola.
  51. Jeremy54 New Member

    I compared the scripts and found a few things but nothing that seemed obvious, so I decided to use the "bad" database script to build a new database and populate it with the data from the bad db. I did this and, to my surprise, it worked (used the index seek rather than the scan).

    This means it is not a database issue, but instead appears to be how the db is running on my server. I ran a few more tests to narrow down where the problem could be:
    1) I restored a backup of the bad db into a test db, it had the same problem
    2) I dropped the big table in the new test db, repopulated it from the bad db, rebuilt indexes, and it works.
    3) I re-restored the backup and this time dropped only the indexes, rebuilt the indexes, this time it did not work.

    So, it looks like the large tables in my problem databases are corrupted or something like that. I will plan on dropping and recreating these tables in the live environment as soon as possible.

    Thanks for all the help,
    Jeremy
  52. Jeremy54 New Member

    By the way, I ran the DBCC CHECKDB on one of the problem db and it found nothing wrong...

    CHECKDB found 0 allocation errors and 0 consistency errors in database 'DiaryTest3'.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
  53. Twan New Member

    How bizarre!!!
  54. Jeremy54 New Member

    I just wanted to follow up and let you all know that I scheduled downtime last night and on the two live problem databases I dropped the problem tables and rebuilt them, re-created the indexes, re-populated with data from a backup, and now everything works fine.

    I have no idea why I needed to do this, but it worries me because I may have similar issues in other tables and databases that I cannot find. Maybe they are waiting to pounce when I least expect it...

    Thanks again for all your help!
    Jeremy
