Microsoft Server Speech Platform 10.1 Released (SR and TTS in 26 languages)


Want to build a speech application with speech recognition and synthesis?

Install the packages below, and you can use SR and TTS in 26 languages easily.

The 26 languages are:

Language                Region            Culture
--------                ------            -------
Catalan                 Spain             ca-ES
Chinese (Hong Kong)     Hong Kong SAR     zh-HK
Chinese (Simplified)    China             zh-CHS
Chinese (Traditional)   Taiwan            zh-TW
Danish                  Denmark           da-DK
Dutch                   Netherlands       nl-NL
English                 Australia         en-AU
English                 Canada            en-CA
English                 India             en-IN
English                 United Kingdom    en-GB
English                 United States     en-US
Finnish                 Finland           fi-FI
French                  Canada            fr-CA
French                  France            fr-FR
German                  Germany           de-DE
Italian                 Italy             it-IT
Japanese                Japan             ja-JP
Korean                  Korea             ko-KR
Norwegian (Bokmal)      Norway            nb-NO
Polish                  Poland            pl-PL
Portuguese              Brazil            pt-BR
Portuguese              Portugal          pt-PT
Russian                 Russia            ru-RU
Spanish                 Spain             es-ES
Spanish                 Mexico            es-MX
Swedish                 Sweden            sv-SE

 

Feel free to try TTS in all 26 languages and post your feedback about them here.
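If you are unsure which of the 26 languages are actually installed on a given machine, the Microsoft.Speech API can enumerate them. A minimal sketch, assuming the Runtime, SDK, and at least one language pack are installed and the project references Microsoft.Speech.dll:

```csharp
using System;
using Microsoft.Speech.Recognition;
using Microsoft.Speech.Synthesis;

class InstalledLanguages
{
    static void Main()
    {
        // List every installed server recognizer and its culture (e.g. en-US).
        foreach (RecognizerInfo ri in SpeechRecognitionEngine.InstalledRecognizers())
        {
            Console.WriteLine("SR:  {0} ({1})", ri.Name, ri.Culture.Name);
        }

        // List every installed server voice and its culture.
        using (SpeechSynthesizer synth = new SpeechSynthesizer())
        {
            foreach (InstalledVoice v in synth.GetInstalledVoices())
            {
                Console.WriteLine("TTS: {0} ({1})", v.VoiceInfo.Name, v.VoiceInfo.Culture.Name);
            }
        }
    }
}
```

If a language you installed does not appear here, check that the language pack's bitness (x86 vs. x64) matches the runtime you installed.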


Leave a Comment
  • The French Canadian voice, Harmonie, needs some more tuning. It does not sound that great compared to Acapela or RealSpeak.

  • Hi, Denis:

    Thanks for the feedback. Do you have any specific comments from an intelligibility, prosody/naturalness, or sound quality perspective?

  • How do I create SR and TTS for my language? Besides our national language, we have many regional languages. It would be nice if we could preserve those languages. Thanks for any guidance.

  • Hi, DeDE:

    For your regional languages, do they have a different lexicon or pronunciation system?

    It seems you want to create dialect voices. There are no public tools to do so yet, but I think this is a direction for TTS in the future.

  • Wait, is this for Server OS only? I installed it on Windows 7, but no voices show up in the Text to Speech control panel.

  • Why not make these server-only voices available on the desktop?

  • Hi, Server:

    These voices are optimized for server scenarios; for example, their output format is 8 kHz, 16-bit.

    You can still use the voices on a desktop by writing an app with the SDK, but they will not appear in the Text to Speech control panel.
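To illustrate the reply above, here is a minimal sketch of driving a server voice from desktop code with the Speech Platform SDK. It assumes a TTS language pack such as en-US is installed and the project references Microsoft.Speech.dll; the output file name is illustrative:

```csharp
using Microsoft.Speech.Synthesis;

class ServerTtsDemo
{
    static void Main()
    {
        using (SpeechSynthesizer synth = new SpeechSynthesizer())
        {
            // Pick any installed server voice matching the requested culture.
            synth.SelectVoiceByHints(VoiceGender.NotSet, VoiceAge.NotSet, 0,
                                     new System.Globalization.CultureInfo("en-US"));

            // Server voices produce 8 kHz 16-bit audio; write it to a WAV file
            // rather than the default audio device.
            synth.SetOutputToWaveFile("hello.wav");
            synth.Speak("Hello from the Server Speech Platform.");
        }
    }
}
```

Writing to a WAV file (or a stream) is the typical server-style output path; playback through the sound card is what the desktop control panel integration would normally handle.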

  • Yes, each regional language has a different lexicon and pronunciation system.

    Each region has its own language and culture, just like a mini world.

    I hope we can get the SDK. Thanks for explaining; keeping my hopes up. :)

  • Can you explain what this package is? How does it relate to UCMA? Is it a new package, and if so why is it version 10.1?

  • I think this is great news for us. However, I must admit I'm confused. Can you please do a post that explains the difference between this new "Speech Platform", the UCMA 2.0 SDK, and the Microsoft.Speech and System.Speech namespaces?

    I think this new release takes the speech features from UCMA and the recognizer and TTS engines from OCS and makes them available for reuse. So, with this I can build server speech apps without OCS. Is this correct?

  • Hi, Michael:

    The UCMA SDK ships with OCS. The Speech Platform is now a standalone SDK.

  • Does this require us to buy a separate license? I have a TTS synthesis application running on a server and would like to move from Desktop Speech to Server Speech.

  • Hi, Iggy:

    From a technology point of view, you can certainly do this.

    On the licensing question, I am not a lawyer, but you can check the license agreement to see whether it matches your scenario and needs.

  • Can't get it going.

    1. Running Windows 7 Pro x64 with Visual Studio 2008.

    2. Installed the Runtime x86, SDK x86, and SR en-GB.

    3. In VS, set the platform target to x86 for all configurations.

    Copied and pasted the code from the documentation file:

    using Microsoft.Speech.Recognition;

    ...

       private void Form1_Load(object sender, EventArgs e)

       {

         // Create a new SpeechRecognitionEngine instance.

         sre = new SpeechRecognitionEngine();

         // Create a simple grammar that recognizes "red", "green", or "blue".

         Choices colors = new Choices();

         colors.Add("red");

         colors.Add("green");

         colors.Add("blue");

         GrammarBuilder gb = new GrammarBuilder();

         gb.Append(colors);

         // Create the actual Grammar instance, and then load it into the speech recognizer.

         Grammar g = new Grammar(gb);

         sre.LoadGrammar(g);

         // Register a handler for the SpeechRecognized event.

         sre.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(sre_SpeechRecognized);

         sre.SetInputToDefaultAudioDevice();

         sre.RecognizeAsync(RecognizeMode.Multiple);

       }

       // Simple handler for the SpeechRecognized event.

       void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)

       {

         MessageBox.Show(e.Result.Text);

       }

       SpeechRecognitionEngine sre;

    The program compiles but doesn't recognize my voice.

  • Hi, thanks for your post!

    Well, I have been trying to use this in C#. I'm importing the COM Microsoft Speech Object Library version 10.0 (I guess that's the one that belongs to this SDK). I'm programming on Windows 7, with VS 2010.

    OK, so I'm just trying to use the following code:

    SpVoice voz = new SpVoice();

    voz.Speak("Hi there");

    But it throws an exception: HRESULT: 0x8004503A.

    Don't know what I'm doing wrong. Could you please help me?

    Thanks!
