
Is your cube build slower, and your tempdb larger since loading SP1? - UPDATE - See new posting 3/24/2008

See http://blogs.msdn.com/brismith/archive/2008/03/24/slow-olap-cube-builds-and-large-tempdb-revisited.aspx for the latest information on this problem.

There is an issue with some of the SP1 and hotfix 939594 changes that resolved the earlier problems of data missing in the cubes.  The query used to build the cubes has become more complex, and SQL Server 2005 is not coming up with the right execution plan for it.  The upshot is very heavy use of tempdb - which can grow very large - and a cube build that takes much longer than it did.
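
The following query is not part of the workaround itself - it is just a quick way (run in SQL Server Management Studio while the cube is building) to watch the tempdb growth described above:

SELECT name, size * 8 / 1024 AS size_mb   -- size is reported in 8KB pages
FROM tempdb.sys.database_files;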

You may have already seen the blog which gives a workaround for this issue - but as many of you may see this anew after loading SP1 I wanted to raise awareness again. Thanks to Noel, Kermit, JF and Thuy for helping get to the bottom of this one.

The problem is mostly seen if you have added lots of dimensions to the various cubes and also if you specify a dynamic date range.  A full build from earliest to latest is sometimes a workaround which reduces the impact on tempdb but is still slower than it should be.  My suggested steps which are also mentioned in the referenced blog and given even more coverage in the MSDN article are:-

1. Create a cube configured exactly how you want it (earliest to latest please - no dynamic date range) and while it is building capture a trace in SQL Profiler.  Profile the SQL Server Database Engine - the trace can be restricted to the Reporting database.

2. After a few minutes search in the running trace for MSP_EpmAssignmentByDay_OlapView.  If it isn’t found then wait a little while longer and try again.

3. Once found, right-click the line in the trace and select Extract Event Data (do NOT copy from the bottom pane).  In the Save As dialog save this as badquery.sql.

4. The current cube build can be stopped at this point.  If no one else is using Analysis Services then just restarting the service will kill it off.  If you do restart then you will need to close out of the Cube Status window, otherwise you can get errors with the next cube build.

5. In SQL Server Management Studio open a new query to the Database Engine and select the Reporting DB.  Copy the following text into the query window.

EXEC sp_create_plan_guide N'guide_forceorder',
    N'<exact query>',
    N'SQL',
    NULL,
    NULL,
    N'OPTION (force order)'

6. Open the badquery.sql file and select all (Ctrl-A or Edit, Select All), then copy (Ctrl-C or Edit, Copy), then select the <exact query> placeholder in the query window and paste (Ctrl-V).  You should select the < and > characters too.  This may leave some space between the ' and the SELECT, but this is fine.

7. Execute this command - it should finish successfully.  Build a new cube and it should use this execution plan and process more quickly.  To get an indication of the speed you could open badquery.sql, paste OPTION (FORCE ORDER) at the end of the query and then execute it (see the sketch after step 8).

8. You can also monitor in SQL Profiler to see if it is finishing quicker, by looking for the MSP_EpmAssignmentByDay_OlapView and seeing whether the SQL:BatchCompleted event arrives in a reasonable time.
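
As a rough illustration of steps 7 and 8 - the SELECT in badquery.sql is different on every system, so it is only referred to in comments here - the quick timing test and a check that the plan guide exists would look something like this:

-- Quick timing test from step 7: paste the full contents of badquery.sql here
-- and append the hint at the very end before executing:
--     <contents of badquery.sql>
--     OPTION (FORCE ORDER)

-- Confirm the plan guide created in step 5 exists and is enabled:
SELECT name, is_disabled
FROM sys.plan_guides
WHERE name = N'guide_forceorder';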

The reason to avoid dynamic date ranges is that they make the query text change every time the cube is built - the plan guide will then no longer match the query and will be ignored.

If you use a constant date range then this method can be applied - but read the MSDN article on the need to escape single quotation marks.  These will be around the dates - so for instance '12/31/2008' would need to be ''12/31/2008''.  Note that these are two individual single quotes, not a double quote character.
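
To make the escaping concrete, here is a sketch only - the statement text in a real plan guide is the full query pasted from badquery.sql, and the table, column and dates below are just placeholders to show the doubled quotes:

EXEC sp_create_plan_guide N'guide_forceorder',
    N'SELECT TimeByDay
      FROM dbo.MSP_TimeByDay
      WHERE TimeByDay BETWEEN ''1/1/2007'' AND ''12/31/2008''',
    N'SQL',
    NULL,
    NULL,
    N'OPTION (FORCE ORDER)'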

As an example, this workaround can mean the tempdb hardly gets touched (rather than growing to 30GB or more) and the cube builds in less than a tenth of the time.  Your mileage may vary.

I hope this helps - and if you don't understand any of the steps above then you probably shouldn't attempt this - speak to your DBA.  But if you are seeing this issue and need some assistance to address it then please open an incident with our support teams.  http://support.microsoft.com will give you the options.

Technorati Tags: Project Server 2007

Comments (19)
  • What about adding query hints to the original query?

  • Firstly we can't deliver that as quickly as this workaround - and secondly that could be challenging, as the query itself is dynamic based on how a customer's data set is structured and the configuration of their cubes in terms of dates, dimensions and measures.  It could be that some customers will not see this problem - so will not need the workaround.  In my very quick tests the same hint slows down the query in the RTM code - but again this would be very dependent on the data structure.  I'm sure we will either look to come up with an in-built query hint in certain circumstances - or see if SQL can improve the execution plan generation for this type of query.  Thanks for the posting Martin.

  • Brian,

    BEWARE! This does not seem to only apply to dynamic date ranges.  It seems to apply to any reconfiguration of the cube (adding/removing fields).  Not only did I not correct our original problem, I invalidated the slow-build fix.

    What I did:

    1) Suspecting the RBS to contain invalid characters, I removed it from the cube settings because I didn’t have enough time to run through the RBS looking for it, and our production cube was down.

    2) There was a resource name with an apostrophe in the last name.  I removed it by replacing it with a space

    3) I deleted a custom field we defined to hold our OBS.  We’re a large organization, and the OBS can stretch for a mile, never mind the 255 character limit.  We were getting reports of corrupted global.mpt files on people’s desktops.  There are posts out there about this.  It was probably an unnecessary move on my part, because EPM lookup tables are defined as nvarchar(255).  SQL probably truncated it anyway.  If nothing else, the Pro GUI would have taken care of truncation.  They were also users who may have used Pro to access our QA environment.  Regardless, it was suspect, so I eliminated the only field that referenced it (not the lookup table).

    4) I reinstated the cube using the procedure mentioned below.

    Short story:

    I inspected the badsqlquery.sql file noted in the fix, and sure enough, there was the RBS field I removed.  I highly suspect that it invalidated the plan, because cube build times returned to the 3 hours it took before the fix was applied.  Thankfully, it rebuilt.  Needless to say, I’ll have to reapply the fix, but man, what a pain in the [fill in the blank].

    Symptoms:

    We are experiencing cube corruption any time one of our users alters a PWA Data Analysis user view.  If they simply move fields around to get the pivot table they seek, the browser will hang - especially if you do this repeatedly, searching for the view you want.  If you go to the Analytics server (a dedicated box for analytics and SRS) it immediately goes haywire.  Pages/sec spikes, memory shoots to 2GB and continues to climb, and msmdsrv.exe goes into a steady state of 25% CPU.  It stays there until you restart Analysis Services.

    If I can take a short detour, I found a quick way to get the cube back on-line, as long as you don’t touch the cube configuration settings, and have applied the fix you explained prior.

    1) Stop analysis services

    2) Use the following SQL on the OLAP DB (not the cube) to clear the index

    a. Delete From dbo.OlapObjects;

    3) Start Analysis Services

    4) Launch the cube build from the PWA

    We have a ticket open with Microsoft.  Hopefully they’ll have a fix for this and the slow cube-build soon.

    To give you an idea of our environment, without going into too much detail, we run the following platform configurations.  The above occurs in either environment and is certainly repeatable.

    QA:

    2 load balanced front-end web/app servers

    1 dedicated SQL box

    1 dedicated AS/SRS box

    Prod:

    3 load balanced front-end web/app servers

    1 dedicated SQL box

    1 dedicated AS/SRS box

    We’re running, but not using Portfolio Server and it resides on a dedicated box, but shares the SQL box.  Portfolio Server is one product where I wish I kept the receipt - so I could return it.  But that’s another day.

  • Brian,

    If we have custom fields that contain formulas included in the cube dimensions or measures, could that contribute to the problems mentioned prior?

    Thanks.

  • Hi Ed,

    Yes, any change to cube configuration will need the workaround to be re-applied.  Sorry I did not make this clear.  If you change the configuration by adding or removing or even re-naming a dimension then the query will change - and hence will not match the execution plan.  Even changing it back to how it was may not mean the query is identical - the dimensions may be in a different order.
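
    If it helps, a minimal sketch of re-applying the workaround after a configuration change (assuming the guide was created with the name used in the post) is to drop the stale guide and then repeat the capture-and-create steps with the new query text:

    -- Drop the plan guide that no longer matches the query text
    EXEC sp_control_plan_guide N'DROP', N'guide_forceorder';
    -- Then repeat steps 1-7 in the post with a freshly captured badquery.sql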

    Brian.

  • Hi Ed,

    Regarding formulae in the custom fields - if the custom fields do not change then the workaround should stay valid.  However, it may well increase the build time and tempdb usage compared to fields with no formulae, so you may see the problem with smaller data volumes.

    Brian.

  • Hi Brian,

    Why can't I use the formula

    Sum(PeriodsToDate([Time].[All]),[Measures].[Work]) in the Project Server 2007 Portfolio Analyser?

  • Hi Brian --

    Just wanted to mention that we've been able to partly address this issue with a SQL Server fix in a couple of cases (should work for all cases).  The fix is here:

    http://support.microsoft.com/kb/942444

    Requires SQL Server 2005 SP2 CU4 or later, and turning on a couple of trace flags.  In my testing, I went from tempdb growing to more than 25GB before I killed the process, to tempdb not growing beyond 1GB with this fix activated.  Also know of at least 3 other customers who found it did the trick.
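
    Just to show the mechanics - and as a sketch only, since the actual flag numbers are the ones listed in the KB article and are not repeated here - a trace flag can be enabled globally at runtime with DBCC TRACEON, or added as a -T startup parameter so it survives a service restart:

    -- DBCC TRACEON (<flag number from KB 942444>, -1);  -- -1 enables the flag globally
    DBCC TRACESTATUS (-1);  -- lists the trace flags currently enabled globally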

    Hope that helps...

    Justin (MSFT SSAS Support)

  • Thanks Justin - and yes, that seems to do the trick.  The only time I've seen it fail, it was quickly fixed with an update of the reporting DB statistics.
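
    A minimal sketch of that statistics refresh (the database name here is only an example - substitute your own Reporting database name):

    USE ProjectServer_Reporting;   -- example name only
    EXEC sp_updatestats;           -- refreshes statistics on all user tables in the reporting DB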

    Brian.

  • Hi,

    I am getting the error below:

    [7/18/2008 4:51 PM] Cube build session started

    [7/18/2008 4:51 PM] Analysis Services session started

    [7/18/2008 4:51 PM] Analysis Services session failed with the following error: Invalid object name 'OlapObjects'.

    ===== Process Completed =====

    [7/18/2008 4:52 PM] Failed to build the OLAP cubes. Error: Analysis Services session failed with the following error: Invalid object name 'OlapObjects'.

    Can you please let me know why this is happening and how to solve it?

  • Hi Susan,

    OlapObjects is a table in the repository database - from the error I would guess that the SQL script in the TechNet article was not executed - or that it was executed against the wrong DB.  Perhaps you created an OlapObjects table in the master database.  Running the script in the correct DB should resolve this issue.

    Best regards,

    Brian

  • Brian,

    Can you please confirm whether any of the more recent hotfixes fixed this problem, or whether in your experience this is still a common problem?  We have applied all hotfixes and I am wondering if attempting this will help or possibly make matters worse, as the last post was 29 July 2008.

    Regards

  • Hi RyDaMa,

    It can certainly still be a problem if you are not setting the trace flags in your SQL Server.  We have seen this issue again recently - mainly due to customers upgrading SQL Server to the latest SP and assuming the fix is now permanent.  It is not - so the trace flags are still required to overcome the problem.  Also note that even with the trace flags building can be slow if indexes are stale.

    Just to be clear - there was never a bug with Project - just a potentially very complex query that exposed a bug with SQL Server.

    Best regards,

    Brian.
