
  • Brian Smith's Microsoft Project Support Blog

    Failure Audit message in SQL Server - Event ID: 18456 every minute?


    This is an event log error I have seen in Project Server 2007 on various farms going right back to the Beta, and I finally found some time to track it down.  It didn't seem to be breaking anything on my server, but it made it difficult to read the logs and see other "important" stuff.  This is the error:

    Event Type:    Failure Audit
    Event Source:    MSSQLSERVER
    Event Category:    (4)
    Event ID:    18456
    Date:        1/17/2008
    Time:        1:29:00 PM
    User:        DOMAIN\User
    Computer:    SERVERNAME
    Description:
    Login failed for user 'DOMAIN\user'. [CLIENT: <local machine>]

    I did a SQL Profiler trace to see where it was coming from and discovered the cause was a SQL Server Agent job called SharedServices_DB_Job_DeleteExpiredSessions that was running every minute.  The reason for the failure was that I did not have a SharedServices_DB on that server.  I did once, but my test server gets changed around a fair bit and this was a remnant that didn't get cleaned up.  I'm not sure whether it would normally get removed, or whether I did something which left it hanging around.
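    If you'd rather not run a Profiler trace, a quick look at msdb can show which SQL Server Agent jobs exist and how their last run went.  A sketch (the job name filter matches the job from my server; adjust it for yours):

```sql
-- List SQL Server Agent jobs and the outcome of their most recent runs.
-- run_status: 0 = failed, 1 = succeeded, 3 = canceled, 4 = in progress.
SELECT j.name,
       j.enabled,
       h.run_date,
       h.run_status
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysjobhistory AS h
       ON h.job_id = j.job_id
      AND h.step_id = 0              -- step 0 rows hold the overall job outcome
WHERE j.name LIKE '%DeleteExpiredSessions%'
ORDER BY h.run_date DESC;
```

    A job that keeps showing run_status 0 every minute is a good candidate for the source of the audit failures.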

    This could also happen with Microsoft Office SharePoint Server 2007 even if Project isn't installed, as it relates to the Shared Services Provider.  To disable the job, open SQL Server Management Studio, connect to your database engine, expand SQL Server Agent and then Jobs, right-click the offending job, and select the Disable option.
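    If you prefer T-SQL to the UI, the same thing can be done with the sp_update_job procedure in msdb (the job name shown is the one from my server; substitute your own):

```sql
-- Disable the orphaned job rather than deleting it, in case it is needed later.
EXEC msdb.dbo.sp_update_job
     @job_name = N'SharedServices_DB_Job_DeleteExpiredSessions',
     @enabled  = 0;
```

    Disabling rather than deleting keeps the job definition around in case it turns out to belong to a database you still need.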


    There will likely be other valid jobs there too - for your real SharedServices databases that still exist.  The bad one will show as failed on its last execution if you look in the Job Activity Monitor.


    Not a big problem - but at least disabling will keep the logs looking clean, and will save a few CPU cycles for some real work.

    Technorati Tags: Project Server 2007


    What is special about the "Administrator Account" when provisioning a new PWA site?


    Nothing much really.  But we do often get the question "How do I change the administrator account?"  If you go to the Manage PWA page you will see that it is greyed out.  You can't change it here - but then you don't really need to.  This account is just put in the database so that you have an admin in the system and can log in.  I guess this could be a problem if that person is the only admin and is not available to log in, but the admin's first job should be to create a second admin.  The first administrator is also set as the primary administrator of the site collection created for the PWA site.  You can update the user in this case using stsadm -o siteowner.  The parameters for this command are -url <url of site> -ownerlogin <DOMAIN\user> -secondarylogin <DOMAIN\user>.  You could also use stsadm to add users to Project Server using the -o projcreateentity operation.  See Christophe's posting for full details of the stsadm commands for Project.
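    For example, a siteowner command line might look like this (the URL and account names are placeholders - use your own PWA site URL and accounts):

```shell
stsadm -o siteowner -url http://server/pwa -ownerlogin DOMAIN\newadmin -secondarylogin DOMAIN\backupadmin
```

    Run this from a command prompt on a server in the farm, with the stsadm.exe directory on your path.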

    One gotcha we have come across when provisioning against an existing set of databases is that if the account you use for the administrator account already exists in the database then it must also be an active user.  If it is an inactive account then provisioning will fail with event IDs 7013 and 6966.

    Technorati Tags: Project Server 2007


    Project Server Queue - When is an error not an error?


    When it is a status message.

    One of the challenges when interacting with an asynchronous queue is error handling.  In a connected application it is easy to interact with the user and tell them what went wrong - but if they have submitted a timesheet and something about it isn't quite right then it is harder to return this information.  In the Manage Queue screen the problem will show as a failed job and an error will display, but in many cases it isn't really that anything broke - just that something about the timesheet wasn't right.  You may see this more when using the PSI web services directly from a custom application than when using the normal timesheet interface, as the interface is aware of the various states and will not let you submit something that isn't going to go through.  However, there could be timing issues that would make even our timesheet jobs fail - and generally this information will display with the timesheet.

    If you are developing a timesheet application you should consider the best way to get this information back to the resource through your application - as they may not see their queue jobs if they don't generally use PWA. 

    Technorati Tags: Project Server 2007


    Project Server 2007 Queue - How many threads is enough?


    Working on a case recently I could see from the application logs that a server was having a hard time.  Timeouts, out of memory exceptions, lots of red - so something was up.  The customer's comment was that this was period end processing so lots of timesheets and updates going through.  The queue was meant to avoid this problem, so I looked at the queue settings and both project and timesheet queues were set to use 10 threads, rather than the default 4.  This could well have been leading to their problems!

    The queue is designed to limit the rate at which work gets processed, so that the peaks that can overload the server get spread out.  In project management terms, think of it as leveling for the server - stopping the server from trying to do too many things at once.  Another analogy is my weekend to-do list.  If I had a list of 10 things I would spend at least Saturday morning deciding what to do, and which my least favorite task would be.  Then I might end up swapping tasks and my productivity would be poor.  My wife has learned that I am "single-threaded" and best throughput is achieved by giving me one job at a time.

    How many threads is the right number?  This very much depends on your server configuration and also the workload mix, in terms of both the volume of transactions and the size of each transaction.  You can obviously publish many more 10-line projects than 1000-line projects in the same time.  One rule of thumb some of our field guys use as a starting point is to set the number of threads to the number of available processors (or cores).  So if you have a single dual-processor machine as your application server then 2 threads might be a good starting point; if you have four dual-core processors then you might be able to use 8 threads.  This will also depend on farm topology and other applications running on the same servers.  You wouldn't want to use these figures and also have the server acting as a Web Front-End or running search or other processor-intensive activities.

    Monitoring performance counters and the application and ULS logs will enable you to fine tune the queue to work with your normal server loads - but please don't just increase the number of queue threads expecting to make things work faster.  Time is nature's way of keeping everything from happening at once - for project server we achieve the same thing with the queue!

    Technorati Tags: Project Server 2007


    Is your cube build slower, and your tempdb larger since loading SP1? - UPDATE - See new posting 3/24/2008


    See http://blogs.msdn.com/brismith/archive/2008/03/24/slow-olap-cube-builds-and-large-tempdb-revisited.aspx for the latest information on this problem.

    There is an issue with some of the SP1 and hotfix 939594 fixes, which resolve earlier problems of data missing from the cubes.  The query has become more complex, and SQL Server 2005 is not coming up with the right execution plan for it.  The upshot is heavy use of tempdb - which can grow very large - and a cube build that takes much longer than it did.

    You may have already seen the blog which gives a workaround for this issue - but as many of you may see this anew after loading SP1 I wanted to raise awareness again. Thanks to Noel, Kermit, JF and Thuy for helping get to the bottom of this one.

    The problem is mostly seen if you have added lots of dimensions to the various cubes and also if you specify a dynamic date range.  A full build from earliest to latest is sometimes a workaround which reduces the impact on tempdb, but it is still slower than it should be.  My suggested steps, which are also mentioned in the referenced blog and given even more coverage in the MSDN article, are:

    1. Create a cube set exactly how you want it (earliest to latest please - no dynamic date range) and, while it is building, capture a trace in SQL Profiler.  Profile the SQL database engine; the trace can be restricted to the Reporting database.

    2. After a few minutes, search the running trace for MSP_EpmAssignmentByDay_OlapView.  If it isn't found then wait a little while longer and try again.

    3. Once found, right-click the line in the trace and select Extract Event Data (do not copy from the bottom pane).  In the Save As dialog, save this as badquery.sql.

    4. The current cube build can be stopped at this point.  If no one else is using Analysis Services then simply restarting the service will kill it off.  If you do restart then you will need to close the Cube Status window, otherwise you can get errors with the next cube build.

    5. In SQL Server Management Studio, open a new query against the database engine and select the Reporting DB.  Copy the following text into the query window.

    EXEC sp_create_plan_guide N'guide_forceorder',
        N'<exact query>',
        N'SQL',
        NULL,
        NULL,
        N'OPTION (FORCE ORDER)'

    6. Open the badquery.sql file, Select All (Ctrl-A or Edit, Select All), Copy (Ctrl-C or Edit, Copy), then select the <exact query> placeholder in the query window and Paste (Ctrl-V).  You should select the < and > characters too.  This may leave some space between the ' and the SELECT, but this is fine.

    7. Execute this command - it should finish successfully.  Build a new cube; it should use this plan guide and process more quickly.  To get an indication of the speed, you could open badquery.sql, paste OPTION (FORCE ORDER) at the end of the query, and execute it.

    8. You can also monitor in SQL Profiler to see if it is finishing more quickly, by looking for MSP_EpmAssignmentByDay_OlapView and seeing whether the SQL:BatchCompleted event comes in a reasonable time.

    The reason to avoid dynamic date ranges is that they make the query text change every time it runs - the plan guide will then not match and will be ignored.

    If you use a constant date range then this method can be applied - but read the MSDN article on the need to escape single quotation marks.  These will be around the dates - so, for instance, '12/31/2008' would need to become ''12/31/2008''.  Note that these are two individual single quotes, not a double quote.
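    For instance, a plan guide for a captured query containing literal dates would need them doubled up, roughly like this (the SELECT text and column name here are illustrative - the real text must be the exact query from badquery.sql):

```sql
-- Each single quote inside the quoted query text is doubled so that
-- sp_create_plan_guide receives the date literals intact.
EXEC sp_create_plan_guide N'guide_forceorder',
    N'SELECT ... WHERE TimeByDay >= ''1/1/2008'' AND TimeByDay <= ''12/31/2008''',
    N'SQL',
    NULL,
    NULL,
    N'OPTION (FORCE ORDER)'
```

    If the text passed to sp_create_plan_guide does not match the query SQL Server actually runs, character for character, the guide simply will not be applied.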

    As an example, this workaround can mean the tempdb hardly gets touched (rather than growing to 30GB or more) and the cube builds in less than a tenth of the time.  Your mileage may vary.

    I hope this helps - and if you don't understand any of the steps above then you probably shouldn't attempt this - speak to your DBA.  But if you are seeing this issue and need some assistance to address it then please open an incident with our support teams.  http://support.microsoft.com will give you the options.

    Technorati Tags: Project Server 2007
