[BRLTTY] A SpeechAPI for BrlTTY

Samuel Thibault samuel.thibault at ens-lyon.org
Tue Aug 30 11:46:08 EDT 2005


Hi,

Sébastien and I have already been thinking about this issue for some
time. We raised it at LSM'04, were advised not to bother with speech,
and we actually agree. The main reasons I can remember are (Séb, please
complete):

- The way speech dispatching works is _very_ different from the way
braille dispatching works, so very little code could be reused: the way
clients use libbrlapi is simply not suited to the way they would use a
speech library.

- There already _are_ people thinking about this. Duplicating their work
is asking for future incompatibility. Just using the existing speech
dispatcher framework seems much more reasonable: add a pseudo TTS driver
to get access to other speech synthesizers, and add a speech dispatcher
daemon so that other programs can access brltty's speech synthesizers
(see the sketch after this list). And if speech dispatcher is not
powerful enough, it should be enhanced, not competed with.

- The portability issue is already handled by brltty itself, so speech
already benefits from it.

- We (Séb and I) are not used to working with speech synthesizers at
all :).
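
To illustrate the second point, a client that just talks to speech
dispatcher directly would look roughly like this. This is only a minimal
sketch against speechd's C client library (libspeechd);
spd_open/spd_say/spd_close come from that library, and the exact header
name may vary with the version:

/*
 * Minimal sketch of a client speaking through speech dispatcher
 * instead of through a new brltty-specific speech API.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libspeechd.h>   /* speechd C client library (name may vary) */

int main(void)
{
    /* Connect to the speech dispatcher daemon. */
    SPDConnection *conn = spd_open("example", "main", NULL, SPD_MODE_SINGLE);
    if (conn == NULL) {
        fprintf(stderr, "could not connect to speech dispatcher\n");
        return EXIT_FAILURE;
    }

    /* Queue a message: priorities, queueing and synthesizer selection
     * are handled by the daemon, not by the client. */
    spd_say(conn, SPD_TEXT, "Hello from a speech dispatcher client");

    spd_close(conn);
    return EXIT_SUCCESS;
}

Note how different this is from a braille client: no display size, no
cursor routing, no key events, just text and a priority.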

That said, some of the "daemon" part of the code could indeed be shared
(establishing sockets, for instance).
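
For instance, the listening socket setup that any such daemon needs is
roughly the following; this is only a generic sketch, and the socket
path is just an example, not an actual brltty or speechd path:

/*
 * Generic sketch of the daemon-side code that could be shared:
 * creating a listening Unix-domain socket.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int open_listening_socket(const char *path)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return -1; }

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path);  /* remove a stale socket from a previous run */

    if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0
        || listen(fd, 5) < 0) {
        perror("bind/listen");
        close(fd);
        return -1;
    }
    return fd;  /* the daemon then accept()s client connections on it */
}

This kind of code is exactly the same whether the clients then speak a
braille protocol or a speech protocol on the connection.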

To sum up, you propose reusing BrlAPI's framework, but I'd say only the
technical work (sockets/daemon/...) can be reused. For the rest
(protocol/speech dispatching/...), I'd say the existing work from
speechd should just be used.

Regards,
Samuel

