July, 2010

  • The Search Blog

    5 SEO features for your Content Management System


    Imagine you have just received the keys to a shiny new, custom-built, special edition sports car which you have had on order for months.  You have the keys in your hand, the car is sitting on your driveway, the sun is shining and you've got a free afternoon…what do you do?

    1) Leave the car sitting on the drive and spend a few hours reading through the detailed user instruction guide?

    2) Slam the front door of your house, get in the car and take it for a spin around the neighbourhood?

    If you are like 99% of the people I presented to a couple of months ago, you would go for option 2.  The lesson?  No matter how good your instructions/guidelines are, you cannot guarantee that every user will read them.  In fact, I can guarantee that the majority will not!

    I delivered a presentation internally at Microsoft about content management systems (CMS) and SEO.  Here are the top 5 CMS features I recommended to increase the likelihood of content production teams considering SEO whilst using a content management system…

    1) Auto title length highlighting

    It's easy to set a rule that titles must be less than 65 characters in length; however, it is important that users understand why this is necessary when they are creating content (otherwise they will find a reason NOT to conform to it).  We are currently developing a concept to show the title text as green to start with, then change to orange as the length approaches 65 characters…


    And then red along with a warning message when the 65 character limit is passed…


    The hyperlink provides information about why titles should be less than 65 characters, and how that helps the content rank better.  A surprisingly high number of users simply don't realise that the primary purpose of the page title is to serve as the title shown in search results!  Once they are informed of this, it is difficult for them to find a reason why the title should be longer.
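    The traffic-light concept described above can be sketched in a few lines of client-side script.  The 65-character limit comes from the text; the point at which the field turns orange is an assumed value for illustration:

```javascript
// Sketch of the traffic-light title check described above.
// The 65-character limit is from the text; the 50-character point
// at which the field turns orange is an assumption.
const TITLE_LIMIT = 65;
const WARN_FROM = 50;

function titleStatus(title) {
  if (title.length > TITLE_LIMIT) return 'red';    // over the limit: warn the user
  if (title.length >= WARN_FROM) return 'orange';  // approaching the limit
  return 'green';                                  // comfortably short
}
```

    In a CMS edit form this would run on every keystroke, recolouring the title field and revealing the warning message (with its explanatory hyperlink) in the 'red' state.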

    2) Warn about similar/duplicate content

    As your site matures, your content teams grow, and the amount of published content increases, it becomes more and more difficult to keep track of previously published pages.  When creating a new page it is important to find similar content which may need to be redirected to the new page to avoid duplicate content, or which may remove the need to create the new page completely.  Similar/related content is also useful to identify for cross-linking opportunities.

    Rather than trying to educate users to check for duplicate/similar content each time they create a new page, why not automatically warn them about similar content directly in the CMS?  At its simplest level, this could be achieved by using the Bing API to run a site search on your site based on the keywords in the title of the page.


    If Bing considers a particular page on your site to be relevant to the keywords in the title of the page you are creating, the chances are that the previously created page is a good candidate for cross-linking or redirecting.
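    A minimal sketch of that idea follows.  The `siteSearch` callback stands in for whatever search service you wire up (e.g. the Bing API); its query syntax, the `example.com` domain, and the response shape are all assumptions for illustration, not the real API:

```javascript
// Hypothetical sketch: warn CMS users about previously published pages
// that look similar to the one they are creating, by running a
// site-restricted search on the new page's title keywords.
async function findSimilarPages(pageTitle, siteSearch) {
  // `siteSearch` is an assumed adapter around your search API of choice.
  const results = await siteSearch(`site:example.com ${pageTitle}`);
  // Any page the engine already ranks for these keywords is a candidate
  // for cross-linking, or for redirecting to the new page.
  return results.map(r => r.url);
}
```

    The CMS would surface the returned URLs to the author before the new page is published.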

    3) Allow keywords in URLs

    It is very common to see content management systems which generate a URL which looks something like this…


    …which does not provide any indication as to the content of the page, and misses an opportunity to increase ranking by getting keywords into the URL.

    Whether you achieve it by allowing your users to enter their own URL for each page created, or by automatically generating the page name based on the page title, make sure that your URLs do not fall into the trap of being non-descriptive and against SEO recommendations.  A great example of a system which generates great SEO-friendly URLs is the WordPress blogging platform.  In WordPress the URLs are generated based on the page title, e.g.…


    Of course, steps need to be taken to ensure that page titles are not duplicated, of which there are two commonly used methods…

    • Do not allow duplicate page titles (this is good for SEO anyway)
    • Add a unique identifier to the end of the URL, e.g…
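    Both ideas can be sketched together: a WordPress-style slug generated from the page title, with the unique identifier appended only on collision.  The exact slug rules and the id suffix format are assumptions for illustration:

```javascript
// Sketch of keyword-rich URL generation from a page title.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '')  // drop punctuation
    .trim()
    .replace(/\s+/g, '-');         // spaces become hyphens
}

// Second bullet above: append a unique identifier when the slug
// would collide with an existing page (id format is an assumption).
function uniqueUrl(title, pageId, existingSlugs) {
  const slug = slugify(title);
  return existingSlugs.has(slug) ? `${slug}-${pageId}` : slug;
}
```

    Enforcing unique page titles (the first bullet) makes the fallback rare, but keeping it means the CMS never silently overwrites a URL.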

    4) Auto suggest keywords

    Have you noticed the Bing ‘Related Searches’ box shown every time you make a search?…


    These searches may or may not contain the original keywords, and are based on Bing's analysis of common customer intent (based on previous search behaviour).  The FIRST thing I love about this is that it provides you with easy access to related search queries which you might want to consider for any given search.  The SECOND thing I love is that this data is also available programmatically via the Bing API.  I am not (much of) a developer, so I have not yet tried this myself, but I imagine it would not be too difficult to pull in this data and present it to users of a CMS as they type the page title (please check to make sure you are not violating the terms of use for the API)…


    If you have a large website, it may even be possible to generate this data yourself based on the internal search strings from your own customers.  The advantage with this approach is that you could enhance the data with actual customer search volumes so that your CMS users could prioritise some phrases over others…
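    A sketch of that internal-data approach follows.  The log format (a phrase-to-monthly-volume map) and the word-overlap matching rule are assumptions; a real implementation would use whatever your site-search analytics actually expose:

```javascript
// Hypothetical sketch: suggest keywords from your own internal search
// logs, ranked by actual customer search volume so CMS users can
// prioritise some phrases over others.
function suggestKeywords(pageTitle, searchLog, limit = 5) {
  const words = pageTitle.toLowerCase().split(/\s+/);
  return Object.entries(searchLog)
    .filter(([phrase]) => words.some(w => phrase.includes(w)))
    .sort((a, b) => b[1] - a[1])   // highest search volume first
    .slice(0, limit)
    .map(([phrase]) => phrase);
}
```

    Wired into the title field of the CMS, this would show authors the phrases customers genuinely search for, ordered by demand, as they type.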


    Getting users to research keywords when creating content is one of the most challenging tasks in my job; however, integrating keyword suggestions directly into the CMS in this way would make it MUCH easier.

    5) Content lifecycle management

    The only constant in life is change…


    During the lifetime of your website, you can be almost certain that there will be a need for content to be moved or removed.  Whilst content management systems typically enable certain users to do both, they rarely initiate the essential steps to inform search engines that a piece of content has been moved to a new location, or to prompt them to remove a page from their index.  Here are a couple of process flows to show what your publishing systems need to do in order to effectively move and remove a page for search engines…
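    The two flows boil down to answering with the right HTTP status at the old URL.  The status codes below are standard HTTP semantics rather than something the CMS vendors prescribe, and the page object shape is an assumption for illustration:

```javascript
// Sketch of the two lifecycle flows described above.
// Moved content: 301 (permanent redirect) tells engines to transfer
// the page, and its authority, to the new URL.
// Removed content: 410 Gone prompts engines to drop it from the index.
function lifecycleResponse(page) {
  if (page.movedTo) {
    return { status: 301, location: page.movedTo };
  }
  if (page.removed) {
    return { status: 410 };
  }
  return { status: 200 };  // page is live: serve it normally
}
```

    The key point is that the CMS should emit these responses automatically when an author moves or deletes a page, rather than relying on the author to remember.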





    Chris Moore is a Program Manager working on Search Engine Optimisation at Microsoft.  Follow him on Twitter

  • The Search Blog

    5 great Tweets about search from the past week



    Here are 5 search related topics which I have found interesting over the past week.  Please re-tweet any which you feel worth sharing…

    1. More signs that facebook is moving in to locations services SOON! http://tinyurl.com/2ws9bdt [re-tweet]
      Location-based marketing is getting big! Domino's Pizza and Starbucks are just two examples of companies who have used Foursquare to reach consumers and make money.  Facebook's move to the world of check-ins will undoubtedly skyrocket location-enabled user data and present businesses with awesome opportunities to offer contextually relevant local information to users.  I would love to own a bar or nightclub right now, so many opportunities! :-)

    2. Bing U.S. Search Share Up 7% In June, Google Down 1% http://tinyurl.com/36ywjyu [re-tweet]
      …according to the latest comScore data.

    3. Twitter May Let Users Pay for Self-Promotion [RUMOR] http://tinyurl.com/2fwlf52 [re-tweet]
      Whilst promoted tweets have started popping up already, the recent rumour that Twitter will offer users the ability to promote themselves (and get more followers) for cash is big news!   In the past, individuals wishing to gain Twitter followers had to establish a reputation, build a network of contacts and tweet content which people want to subscribe to and share.  Will that change when individuals can simply pay for followers?  It will be interesting to see how that one plays out…

    4. Use microformats to make your events more discoverable on google http://tinyurl.com/39ex2v7 [re-tweet]
      Microformat adoption has still got a lot of room to grow, but this post explains the benefits and the ways you can get involved if you have quality content with well-marked microformats.

    5. 5 interesting things about links inferred from google's latest patent http://tinyurl.com/32lzk2x [re-tweet]
      Danny from SearchEngineLand spent a LONG time reading through a patent application by Google and managed to pull out some interesting insights into how Google's algorithm probably treats links on a page.  Whilst the facts have already been suggested by other sources (such as SEOmoz), it's interesting to read through and get SOME level of validation direct from Google.


    Chris Moore is a Program Manager working on Search Engine Optimisation at Microsoft.  Follow him on Twitter

  • The Search Blog

    How to implement a drop down correctly for SEO


    Whilst drop-down boxes are commonly used to present web users with a long list of choices for a destination page, if they are not implemented correctly the 'links' will not be counted by search engines and no PageRank will be passed.

    Example of search engines not ‘seeing’ links…

    For example, if you look on this page - http://support.microsoft.com/kb/935791 you will see a drop down box which is linking to translations of the English article…


    However, Yahoo Site Explorer does not register the English article as linking to the Italian (or any other language) version of the Article (http://support.microsoft.com/kb/935791/it)…


    And neither does Google Webmaster tools…


    Which supports the fact that neither search engine is able to interpret the links in the drop-down box as real links.

    Why do search engines not see these links?

    If we look at a simplified version of the code behind the drop down box, this is what we see…


    There are in fact no HTML hyperlinks (<a>) pointing to the local versions of the article, but instead there is a Javascript function which redirects to the page based on the selection made by the user. 

    Whilst this works fine when using the webpage, search engine crawlers are not capable of interpreting the Javascript, so they do not register that the English page is linking to any translated versions listed in the drop down.
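    A reconstructed sketch of that pattern is shown below.  This is illustrative, not the actual support.microsoft.com code; the markup in the comment and the function name are assumptions (in a browser you would call it as `goToPage(this, window.location)`, with the location object passed in here only to keep the sketch testable):

```javascript
// Reconstructed sketch of the UNCRAWLABLE pattern described above.
// The <select> options hold URLs and navigation happens purely in script:
//
//   <select onchange="goToPage(this, window.location)">
//     <option value="/kb/935791/it">Italiano</option>
//     ...
//   </select>
//
function goToPage(select, loc) {
  // Crawlers never execute this, so there is no <a> link for them to
  // follow and the destination pages receive no PageRank.
  loc.href = select.options[select.selectedIndex].value;
}
```
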

    A crawlable drop down list

    Our site manager and SEO champ Charles Li from China recently found a solution to this problem which allowed us to continue to provide our customers with a drop-down list, but in a way that the search engine crawlers would recognise the links and pass PageRank to the destination pages.  The modified drop-down list can be viewed on this page - http://support.microsoft.com/fixit


    This list differs from the previous example because it is created using ACTUAL HTML hyperlinks, contained within a DIV tag which is hidden/displayed when the drop down box is clicked on.

    Importantly, if Javascript and CSS are not enabled then all of the links will be visible (and CRAWLABLE) on the page.  So when a search engine crawler reaches the content, it will see all of the links without having to interpret the Javascript.  PageRank will be passed to the destination pages.
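    The improved pattern can be sketched as follows.  The markup in the comment is illustrative (not the actual support.microsoft.com code): the links are ordinary `<a>` elements inside a DIV, and the script does nothing except show or hide that DIV:

```javascript
// Sketch of the CRAWLABLE drop-down described above.  Illustrative markup:
//
//   <div id="lang-menu" style="display:none">
//     <a href="/kb/935791/it">Italiano</a>
//     ...real <a> links, so crawlers see them even with JS/CSS off...
//   </div>
//
// Script only toggles visibility; the links themselves are plain HTML.
function toggleMenu(menu) {
  menu.style.display = menu.style.display === 'none' ? 'block' : 'none';
}
```

    Because the hyperlinks exist in the HTML regardless of whether the toggle ever runs, crawlers count them as real links and PageRank flows to the destination pages.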

    The Result

    The first version of this page used a similar format drop-down box to the first example shown (a form-based Javascript function).  Neither Google nor Yahoo registered the links which were contained in that version of the drop-down.

    After making the switch to the improved CSS/Javascript drop down, both Google and Yahoo! now register the links, e.g. here are the acknowledged links for the Chinese version of the page…



    We also saw a significant increase in the referrals to the affected pages which coincided with this change.

    Author: Chris Moore is a program manager from Microsoft working on Search Engine Optimisation.  Follow him on Twitter
