Hello,
I lost an earlier version of this message, so I’m sorry if this is terse. Currently, Linux users need to add TTS voices to their card templates using an add-on (AwesomeTTS and HyperTTS being the standard choices). They’re great at what they do, but neither plays well with other speech synthesis software that may already be installed, and both require modifying each template to get TTS working. Speech Dispatcher offers features such as separate per-application queues and message priorities, and it acts as a fairly standard interface to many different TTS providers such as Festival and eSpeak. It is used by default on many Linux systems to provide speech synthesis to large projects such as Firefox, Emacs, and Orca.
I’m not familiar with Anki’s code, but would it be feasible for Linux Anki users to get native TTS support through speech-dispatcher? It has a Python API.
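To make that concrete, here is a minimal sketch of what using it might look like, assuming the speechd Python bindings that ship with speech-dispatcher are installed (python3-speechd on Debian/Ubuntu); the client name "anki" and the spoken text are just placeholders, not anything from Anki’s code:

```python
# Minimal sketch: speak a string through whatever synthesizer
# speech-dispatcher is configured to use.
import speechd

client = speechd.SSIPClient('anki')   # register this application with the daemon
client.set_language('en')             # voice selection is left to the daemon's configuration
client.set_rate(20)                   # speech rate, roughly -100 (slow) to 100 (fast)
client.speak('This card text would be read aloud.')  # returns immediately; the daemon handles playback
client.close()
```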
A feature overview from info speech-dispatcher:
Speech Dispatcher from user's point of view:
* ability to freely combine applications with your favorite synthesizer
* message synchronization and coordination
* less time devoted to configuration of applications
Speech Dispatcher from application programmer's point of view:
* easy way to make your applications speak
* common interface to different synthesizers
* higher level synchronization of messages (priorities)
* no need to take care about configuration of voice(s)
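On the priorities point, the bindings expose the SSIP priority levels, so Anki could let the daemon decide what interrupts or supersedes what instead of managing audio playback itself. A rough sketch, again with made-up text:

```python
import speechd

client = speechd.SSIPClient('anki')

# Card content: normal priority, queued and spoken in order.
client.set_priority(speechd.Priority.MESSAGE)
client.speak('Question: what is the capital of France?')

# A passing UI notification: low priority, which the daemon may skip
# if higher-priority messages are queued or being spoken.
client.set_priority(speechd.Priority.NOTIFICATION)
client.speak('Deck synced.')

client.close()
```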