When you type a word or phrase into a search engine, there are a number of things that could go wrong. You might not know how a term is spelled or, in your rush to jump to the results, you could transpose or otherwise mistype some characters.
Spelling alteration is a popular search technique used to translate apparent typographical errors, alternative spellings, and synonyms into an improved query that returns the best possible results on the first try.
But this approach is not without its pitfalls. You might enter a word correctly that's not widely used but has a neighbor in the dictionary that's much more popular on the Internet. One person's spelling error could be another's perfect query. Which results should the search engine provide, and how should any useful alternative searches be represented?
That's the task being offered to researchers and students around the world in the Speller Challenge, presented by Microsoft Research in partnership with Bing. The goal is to develop a spelling alteration system for web search built on large-scale statistical data mining.
A common approach to spelling alteration is the noisy channel model, in which the received query (q) is treated as a noise-corrupted version of an intended target query (c). In this model, the spelling alteration system recovers c from q and returns the results for c. How best to identify query/target pairs and estimate the statistics that drive this recovery is the active research problem that underlies this challenge.
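The noisy channel model scores each candidate c by the product of a prior P(c) (how likely users are to intend c) and an error model P(q|c) (how likely c is to be mistyped as q). The toy sketch below illustrates the scoring; all probabilities and queries are made-up values for illustration, not real data.

```python
# Toy sketch of the noisy channel model for spelling alteration.
# All probabilities below are illustrative assumptions, not real data.

# P(c): prior probability that a user intends candidate query c
language_model = {
    "britney spears": 0.7,
    "brittany spears": 0.3,
}

# P(q | c): probability that intended query c was typed as observed query q
error_model = {
    ("britny spears", "britney spears"): 0.4,
    ("britny spears", "brittany spears"): 0.1,
}

def best_alteration(q):
    """Return the candidate c maximizing P(c) * P(q | c)."""
    scored = {
        c: p_c * error_model.get((q, c), 0.0)
        for c, p_c in language_model.items()
    }
    return max(scored, key=scored.get)

print(best_alteration("britny spears"))  # -> britney spears
```

A real system would estimate P(c) from query logs or a web-scale n-gram model and P(q|c) from observed query/correction pairs, which is exactly the estimation problem the challenge poses.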
But that's just the foundation. Place the spelling alteration task in the context of web search, and you have another dimension to consider. In many spelling applications, target queries are assumed to be composed of tokens (i.e., words and phrases) drawn from a predetermined vocabulary. The limitations of a fixed lexicon are well known: it can lead the speller not only to miss "real word" errors but also to misrecognize out-of-vocabulary tokens as errors.
In the context of search, the scale of the web magnifies this problem considerably. The challenge is therefore not necessarily to alter queries to conform to a specific dictionary of words and phrases, but rather to propose alterations that retrieve relevant documents with high ranking scores.
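Both failure modes of a fixed lexicon are easy to demonstrate. In the sketch below, the tiny lexicon and the example queries are illustrative assumptions: an out-of-vocabulary proper noun gets flagged as a misspelling, while a "real word" error slips through unflagged.

```python
# Sketch of why a fixed-lexicon speller fails at web scale.
# The tiny lexicon below is an illustrative assumption.
lexicon = {"the", "best", "player", "three", "there"}

def fixed_lexicon_flags(query):
    """Flag every token not found in the lexicon as a spelling error."""
    return [tok for tok in query.split() if tok not in lexicon]

# An out-of-vocabulary proper noun is wrongly flagged as an error:
print(fixed_lexicon_flags("best zune player"))  # -> ['zune']

# A "real word" error ("there" typed instead of "three") goes undetected:
print(fixed_lexicon_flags("there best player"))  # -> []
```

On the web, new names, products, and slang appear constantly, so no fixed vocabulary can keep up; ranking relevant documents is a more robust target than dictionary conformance.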
If this sounds like the type of problem you (or the search developer in your life) would enjoy solving, the task is to build the best speller web service that proposes the most plausible spelling alternatives for a wide range of search queries. Spellers are encouraged to take advantage of cloud computing and must be submitted to the challenge in the form of REST (Representational State Transfer) web services.
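A submission can be sketched as a small HTTP service that accepts a query and returns scored alternatives. The following minimal sketch uses only the Python standard library; the URL parameter name ("q") and the tab-separated plain-text response format are assumptions for illustration, not the actual Speller Challenge API contract, which is defined by the challenge rules.

```python
# Minimal sketch of a REST speller web service (standard library only).
# The query parameter name and response format here are hypothetical;
# consult the Speller Challenge specification for the real contract.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def propose_alterations(query):
    """Placeholder speller: return (alteration, score) pairs for a query.
    A real entry would plug in a noisy-channel or other statistical model."""
    return [(query, 1.0)]  # trivially echo the query as its own alteration

class SpellerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        query = params.get("q", [""])[0]
        body = "\n".join(
            f"{alt}\t{score}" for alt, score in propose_alterations(query)
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

# To serve locally, run:
#   HTTPServer(("localhost", 8080), SpellerHandler).serve_forever()
```

Because the service is stateless, it maps naturally onto cloud hosting, which is where the encouragement to use cloud computing comes in.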
For the purpose of the Speller Challenge, a development dataset (derived from the publicly available TREC queries that are based on the 2008 Million Query Track) will be made available to the public through the Microsoft Web N-gram service. This TREC Evaluation Dataset is annotated by using the same guidelines and processes as in the creation of the Bing Test Dataset, which is the dataset used to select the winners.
The top five competitors will receive the following prizes:
—Evelyne Viegas, Director of Semantic Computing for the External Research division of Microsoft Research
Check out our latest challenge: blogs.msdn.com/.../building-a-better-speller
Cast Your Spell on Spelling
The Spelling Alteration for Web Search Workshop looks at web-scale natural language processing, with a focus on search-query spelling correction.