
    Developer Webcast this week!

    • 4 Comments

    If you remember my blog about the Developer Community from a few weeks ago - and joined up - then you should have seen an invite to a Webcast this week. I have been invited to talk about extending development from Project Server to Microsoft Office SharePoint Server. The topics I am going to cover are:

    • The programmatic creation of list items in the WSS lists that Project uses, and the use of the Object Link Provider to expose these items in PWA
    • Finding your way around the parts of a SharePoint farm - such as identifying the Shared Service Providers in the farm
    • Using the Business Data Catalog as a way to get Project Server data exposed in other parts of SharePoint.

    This final topic has been the most interesting to work on, and is related to the work Christophe blogged about on searching Project data - but this time using Project data in SharePoint lists. I will be showing how you could add a column to your SharePoint lists that shows Project names - and other related information. Other uses of this might be to show Resource names, or the RBS, as a Business Data column. While I am on the topic of Christophe - he has been very busy lately and has some great postings on TechEd 2008, SQL Server 2008 and general performance improvement tips. And finally, a very useful link to the Project Server 2007 hot fixes RSS feed.

    We might be building cube now... But not if you scheduled it for half-past!

    • 3 Comments

    In support we frequently have those moments that make us smile. Often they aren't things I would choose to blog about - but I will share this one. I certainly don't mean to trivialize bugs - and I do appreciate that they can cause a great deal of inconvenience to our customers (one particular customer comes to mind in this case) - but they can have their amusing moments!

    For those that just want the short story - always schedule your cube builds on the hour. If you choose 12:30, 1:30 and so on, it will not build when you want it to. If you want the long version then read on...

    For this particular support incident we were troubleshooting a cube building issue where the customer was scheduling a cube build and it wasn't happening on time - but some time later. The delay seemed variable, but in some cases it looked like the build was using Coordinated Universal Time (UTC, essentially GMT). I set my cube build for 4:30pm that afternoon, set my ULS logs to give verbose output for the Analysis Cube Build, and started SQL Profiler traces to see what was going on in the database - my usual steps for "following the data". I saw that the cube build didn't happen at 4:30pm, so thought I might have a repro and left it all running overnight.

    The following morning I could see the cube had built at 2:30am - which fitted the pattern, as I am in PST (GMT-8) - so I started through the logs and traces to see what was going on. Only 3 million rows in my SQL traces, so it shouldn't take long... I also set another cube build for 8:00 while I worked through the logs - and this one worked. Then I set one for 8:30 - which failed; 9:00 worked. I could see the pattern, and with this information and the logs I managed to find the bug.

    The amusing part was looking in the ULS logs for the previous afternoon and, at a few milliseconds past 4:30, seeing this entry:

    Cube building timer job invoked and we might be building cube now

    It was as if the SharePoint timer service (which was the process that gave this message) wasn't going to take any accountability for what happened next! Or, more correctly, the developer who coded the message! If you are a regular reader you will know that the cube build can fail for a number of reasons, so perhaps I shouldn't have been surprised that the message was a little evasive. It did however make me dig deeper to understand this part of the cube build process, which I hadn't investigated in depth before.

    It follows a similar path to the provisioning of a PWA site, and for similar reasons: it needs to use a WSS timer job, but the SSP account does not have permission to create one - so this is what happens. When you click Save on the cube build settings page for a periodic update, it saves the information to the MSP_WEB_CUBE_ADMIN table in the _Published database, and also puts a row into the MIPScheduledJob table in SharedServices1_DB (your name may vary). This is the timer job request. After a minute or so it is read by the SharePoint timer service and put into the Objects table of the SharePoint_Config database - at which point it appears as a timer job in the timer job definitions. At the appointed time it will run, you will see a job in the TimerRunningJobs table, and this is when the message gets posted to the ULS logs. If all works well this would be followed by another couple of messages saying the cube building job has started. In my case I didn't see these messages in my log until 2:30am - even though they referred back to the requested time of 4:30pm.

    Cube building job has started for scheduled time: 3/25/2008 4:30 PM

    [CBS] Status message: Cube build request message has been added to the Project Server queue
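
    If you want to follow that chain through the databases yourself, the sketch below shows the kind of inspection queries I mean. The table names come straight from the description above; I am using SELECT * deliberately, as the exact column layouts are not documented and may vary between builds.

    -- Follow a scheduled cube build through its three stopping points.
    -- Run each query against the database noted in the comment.

    -- 1. The saved schedule (run against the _Published database)
    SELECT * FROM dbo.MSP_WEB_CUBE_ADMIN;

    -- 2. The timer job request (run against the SSP database, e.g. SharedServices1_DB)
    SELECT * FROM dbo.MIPScheduledJob;

    -- 3. The job at its appointed run time (run against SharePoint_Config)
    SELECT * FROM dbo.TimerRunningJobs;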

    In fact, once the timer job is in place, you will see the "might be building cube" message at every 30 minutes past the hour - even after it builds the cube. So the real reason it says "might" is that only once during the day (if you have daily builds) will it actually be building the cube when that message appears!

    A couple of other interesting points about this process: the times in the SharePoint databases are held in UTC (which is by design, and not the reason for the strange build times). However, the requested build time in the MSP_WEB_CUBE_ADMIN table is held as an integer defining how many half-hours past midnight the cube should be built: 12:30am would be 1, 3:00am would be 6, and so on. It is the decoding of this value for odd numbers that appears to be at the root of this bug.
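
    As a quick illustration of that encoding, the sketch below decodes the stored integer back into a clock time. The variable name is just a stand-in - the actual column name isn't given here - and the arithmetic is the point:

    -- Decode "number of half-hours past midnight" into a time of day.
    DECLARE @halfHoursPastMidnight int;
    SET @halfHoursPastMidnight = 6;   -- 1 = 12:30am, 6 = 3:00am, and so on
    SELECT DATEADD(minute, @halfHoursPastMidnight * 30,
                   CONVERT(datetime, '19000101')) AS ScheduledBuildTime;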

    For the main issue though - that cubes scheduled for the half hours do not get built on time - we do have a hotfix in the pipeline.  We might release this in April :).

    Slow OLAP Cube Builds and Large TempDB - Revisited

    • 12 Comments

    After quite a bit of work on this one, it appears that SP1 (and a slightly earlier hotfix) were almost coincidental to this problem rather than being the cause - and it can occur even without SP1 if you add enough dimensions to your cubes. The best resolution is a combination of the first two, or all three, of the following items:

    1.  Keep your statistics up to date!  In minor cases updating the statistics on your reporting database can help - but in all cases it is required to get the full benefit.  You should set up a maintenance plan, or initially for testing you could use sp_updatestats on your reporting database (there is a small sketch after this list).  See Christophe's recent blog for some guidance here - database maintenance is one of the keys to better queue performance.

    2.  If you see a "No Join Predicate" warning in the execution plan of the large query causing the problem, then apply SQL Server 2005 SP2 Cumulative Update 6 http://support.microsoft.com/kb/946608 .  The SQL bug is resolved by hotfix 942444, which is in that rollup.  Please also read http://support.microsoft.com/kb/942444 carefully, particularly the details of setting trace flags 4101 and 4121 - which must be set to activate this hotfix.  See the remarks section at http://msdn2.microsoft.com/en-us/library/ms188396.aspx for more details on setting trace flags.  Use the command line options -T4101 and -T4121, and set these in the SQL Server Configuration Manager for the instance of SQL Server 2005 used for your reporting database.

    *** Update - SQL Server 2008 R2 - CU7 plus setting the -T4136 flag will get you back to good build times if you have upgraded and gone slow again - thanks to Flo ***

    *** Update 2 - I think this should be -T4199 for 2008 R2 to do exactly the same - -T4136 is for parameter sniffing - perhaps that results in the good execution plan too ***

    3.  If this still isn't giving you the speed of cube build you require then it might be time to consider upgrading your server hardware.  In our tests with very large cube builds you can improve the build times with more RAM - and also x64 hardware (which also allows you to support even more RAM).
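
    For step 1, here is the minimal sketch mentioned above, assuming a reporting database called pwa_Reporting (substitute your own name). A scheduled maintenance plan is the longer-term answer; step 2 appears only as a comment, since it is configuration rather than T-SQL:

    -- Step 1 (for testing): refresh all statistics in the reporting database.
    USE pwa_Reporting;   -- substitute your reporting database name
    EXEC sp_updatestats;

    -- Step 2 is configuration, not T-SQL: add -T4101;-T4121 to the startup
    -- parameters of the SQL Server 2005 instance in SQL Server Configuration
    -- Manager, then restart the service (see KB942444 for the details).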

    If you are not sure about step 2 - then see my previous posting, which will help you identify the bad query - and then paste this into a SQL query window against your reporting database and display the estimated execution plan.  In the execution plan - which will be quite large - you are looking for something like this:

    [Screenshot: section of the estimated execution plan, showing a join operator with a warning triangle and a Table Spool]

    And if you hover over the warning triangle you should see a warning of "No Join Predicate".  Also note the expense (74% of the cost of the query) attributed to the Table Spool.  This is where tempdb is getting heavily used.

    As an example of potential improvements after following these steps (without any hardware upgrades) we have seen cube builds of 20+ hours complete in 45 minutes, and a 1 hour 15 minute build come down to less than 1 minute.  Your mileage may vary :).

    You may still see quite heavy tempdb usage, and this is to be expected with such a large query - it should not be a concern.  If your server doesn't have the space for the tempdb SQL is trying to use after following these steps, then it is time to invest in more disk space!

    Lies and out-dated statistics

    • 0 Comments

    The saying goes that you can prove anything with statistics - and I am sure the same figures are being used by the McCain, Obama and Clinton camps to tell vastly different stories.  But enough about politics - what has this to do with Project Server 2007?  SQL Server uses statistics too - and if they are not up to date it might make the wrong decisions!

    As an example, if you were trying to find me in a database and there were indexes on my department, surname and date of birth, and you knew my name, birth month (but not the actual day or year) and my department, what is the best way to find me?  With a name like Smith, you probably wouldn't want to start there.  So perhaps you would find all the people in Project Support, then look for any with the surname Smith, then trim down to the ones born in the right month, and finally see if any (or hopefully the only one) is called Brian.  If you were looking for Chris Boyd or Christophe Fiessinger then you might start your filtering on their names first - as there aren't as many Boyds or Fiessingers as there are Smiths.  What you are doing is building an execution plan, and usually the most efficient way is to trim down the number of results at the earliest point in the plan.  SQL Server does this by keeping statistical data about the contents of data tables and indexes, and builds plans according to this data.  If the data is not accurate it can lead to execution plans that are not optimal.
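
    If you are curious what those statistics actually look like, DBCC SHOW_STATISTICS will display them - the histogram, the densities, and when they were last updated. The table and index names below are purely illustrative:

    -- Show the statistics (histogram, density, last-updated date) behind an index.
    -- 'dbo.Employees' and 'IX_Employees_Surname' are hypothetical names.
    DBCC SHOW_STATISTICS ('dbo.Employees', 'IX_Employees_Surname');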

    Maintenance plans in SQL Server give you the ability to schedule jobs to reorganize your indexes to ensure they are efficiently stored both for reading and writing - and also update your statistics to ensure SQL will make the best possible decisions when deciding a query plan.  For SharePoint databases you should avoid using the default rebuild index option of the maintenance plan wizard as it does not re-apply existing index options - but you can script your own tasks to do this.
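
    As a starting point for scripting it yourself, the sketch below rebuilds every index in the current database using the undocumented (but long-standing) sp_MSforeachtable helper - treat it as a sketch to build a proper maintenance job from, not a finished one:

    -- Rebuild all indexes on every table in the current database (sketch).
    -- Add any index options you rely on (FILLFACTOR etc.) to a WITH clause
    -- explicitly - re-applying options is exactly what the wizard's default
    -- rebuild task does not do for you.
    EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';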

    There are some great resources online for maintenance of your databases - I'd suggest the recent Whitepaper titled Database Maintenance for Microsoft® SharePoint® Products and Technologies and also for general background reading the SQL Books Online, Database Engine Administrator InfoCenter is an excellent resource. 

    I will be following up in the next few weeks with blogs specifically about two areas where a good set of up to date statistics can improve the performance in Project Server 2007.

    • Building Data Analysis cubes - outdated statistics are one of the causes of recent problems seen with slow cube building and heavy tempdb usage.
    • Project Center Views - the error that can appear after around 30 seconds if you have many projects, and also with large projects that you drill into.

    OLAP and Data Analysis - timeouts, plan guides and tempdb

    • 3 Comments

    UPDATE: Be sure to see the more recent post - http://blogs.msdn.com/brismith/archive/2008/03/24/slow-olap-cube-builds-and-large-tempdb-revisited.aspx  

    Some of this has been in previous postings or comments on postings - but I thought it was time for a round-up.  As a quick intro, this blog will look at more information on the tempdb growth some of you may have experienced since Project Server 2007 SP1, as well as timeouts that you might have hit due to the same problem.  I'll also talk more about the SQL Server 2005 plan guides used to overcome this problem - and finally the errors you may see after your reporting database gets rebuilt - or refreshed, as we call it in the queue.

    TEMPDB Growth

    In SP1, and in a hotfix just prior to SP1, we introduced some dimensions to the cube that we had missed in the initial release.  These related to some of the flags for generic and team resources, and a few other things.  For customers that were also using many other dimensions and had large datasets, these changes gave longer cube build times and also increased the use of tempdb, as the SQL execution plan made use of table spools, which create temporary tables in the tempdb database.  In many cases the workaround described in my previous blog gives some relief from this problem.  You may still need a reasonable amount of tempdb, so don't bother shrinking tempdb below the level it would normally use - it will just slow things down as it has to grow again next time.
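
    If you want to keep an eye on how big tempdb actually is, a quick check like this is enough (file sizes are stored in 8 KB pages, hence the arithmetic):

    -- Current size of the tempdb files in MB (size is stored in 8 KB pages).
    SELECT name, size * 8 / 1024 AS SizeMB
    FROM tempdb.sys.database_files;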

    In some cases the increased cube build time may lead to the timeout issue dealt with below - and applying the workaround can also be challenging.  To help understand the workaround a bit more, on to plan guides...

    Plan Guides

    SQL Server Books Online has always been a great resource, since the early days of SQL Server.  This is still true, and the plan guide documentation can be found at Understanding Plan Guides.  Basically, whenever SQL runs a query it will look for any plan guides (in the system table sys.plan_guides - so select * from sys.plan_guides will show what is set), and if the query_text in that table matches the query it will follow the hints in that table.  The query needs to be an exact match, so any change in date or dimensions (or even the order of dimensions added to a cube) may "break" the plan guide, and the workaround would need to be re-applied.  As the default name of the plan guide (from the workaround) will already exist, either a new name needs to be used or the existing plan guide needs to be dropped from the database.  An example of the command to drop a plan guide called guide_forceorder would be:

    EXEC sp_control_plan_guide N'DROP', N'guide_forceorder';

    You can also enable and disable plan guides using the sp_control_plan_guide system stored procedure.
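
    For completeness, disabling and re-enabling look like this, using the same plan guide name as above:

    -- Temporarily disable, then re-enable, the plan guide from the workaround.
    EXEC sp_control_plan_guide N'DISABLE', N'guide_forceorder';
    EXEC sp_control_plan_guide N'ENABLE', N'guide_forceorder';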

    To see if a plan guide is being used you can run a Profiler trace and add the Showplan XML event type (located under the Performance node).  Then in the trace you should find the Showplan XML event for the affected query, and it will start something like:

    <ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan" Version="1.0" Build="9.00.3054.00"><BatchSequence><Batch><Statements><StmtSimple PlanGuideDB="pwa_Reporting" PlanGuideName="guide_forceorder"><QueryPlan CachedPlanSize="1463" CompileTime="336" CompileCPU="336" CompileMemory="9432"><RelOp NodeId="0" PhysicalOp="Compute Scalar" LogicalOp="Compute Scalar" EstimateRows="1" EstimateIO="0" EstimateCPU="1e-007" AvgRowSize="1625" EstimatedTotalSubtreeCost="778.134" Parallel="0" EstimateRebinds="0" EstimateRewinds="0"><OutputList><ColumnReference Column="Expr1007"/><ColumnReference….

    If you don't find this then perhaps the query in the plan doesn't match what Analysis Services is using to pull the data.

    Timeouts

    One of my tips for support engineers is that if something fails at very specific times then be suspicious.  So when a cube build fails at just over an hour, suspect that the default External Command Timeout is still set in Analysis Services.  The default is 3600 and is in seconds - so it is equal to 1 hour.  Right-click the Analysis Services server in Management Studio, select Show Advanced (All) Properties, and check the External Command Timeout.  Increase as appropriate - 36000 would be 10 hours.  For the SP1 issue you may find that this is the first error you hit - then once you get past this, the tempdb problem may give you issues.

    The error for the timeout issue is:

    Failed to build the OLAP cubes. Error: Analysis Services session failed with the
    following error: Failed to process the Analysis Services database <cube name> on
    the <Analysis Server name> server. Error: Internal error: The operation terminated
    unsuccessfully


    Internal error: The operation terminated unsuccessfully. Internal error: The
    operation terminated unsuccessfully. OLE DB error: OLE DB or ODBC error:
    Unspecified error. Errors in the OLAP storage engine: An error occurred while
    processing the 'Assignment Timephased' partition of the 'Assignment Timephased'
    measure group for the 'Assignment Timephased' cube from the <cube name>
    database.

    The initial part of the error may be different depending on the exact point the timeout stops things.  The first part will also be localized if you are running a language pack.

    Refreshing the reporting database loses the Assignment dimensions

    The error for this problem is:

    Failed to build the OLAP cubes. Error: Invalid call to GetJoinForCustomFieldDimension. Cannot join "Task Non Timephased" fact with a custom field of entity Assignment

    It is caused when you have added dimensions to your assignment cube and then done something that caused the reporting database to refresh.  Restoring from certain administrative backups will do this.  The fix is easy - just remove the dimensions from the assignment cube, save the configuration, then add them back again and save again.  To be sure this is your issue, take a look at the view MSP_EpmAssignment_OlapView in the reporting database.  If the last column is AssignmentBaseline10BudgetMaterialWork then no dimensions are added; if the repository is expecting some dimensions you will get the error.  A screenshot of the view with added dimensions will look something like:

    [Screenshot: the MSP_EpmAssignment_OlapView columns, with the added dimension columns visible after AssignmentBaseline10BudgetMaterialWork]
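
    If you would rather not eyeball the view, a query like this returns its last column (run it against the reporting database) - if it comes back as AssignmentBaseline10BudgetMaterialWork then no assignment dimensions are present:

    -- Return the last column of the assignment OLAP view.
    SELECT TOP 1 c.name
    FROM sys.columns AS c
    WHERE c.object_id = OBJECT_ID('dbo.MSP_EpmAssignment_OlapView')
    ORDER BY c.column_id DESC;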

    I hope this helps to understand more about the cube build process and some of the problems you can run into.
