[BRLTTY] A SpeechAPI for BrlTTY

Nicolas Pitre nico at cam.org
Tue Aug 30 11:38:44 EDT 2005


On Tue, 30 Aug 2005, Yannick PLASSIARD wrote:

> Hello,
> 	I realized that, now that we have BrlAPI to allow other 
> applications to communicate with a braille display, what about a 
> SpeechAPI to communicate with the TTS used by BrlTty?

Look at BRLTTY's speech FIFO.  That might be useful to you already.
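
For instance, here is a minimal sketch (in C) of how an external 
application could hand text to that FIFO to be spoken.  The FIFO path 
below is only an assumed example; use whatever path BRLTTY has been 
configured to create for speech input:

    /* Minimal sketch: write text into BRLTTY's speech FIFO from
     * another process.  The FIFO path is an assumed example. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        const char *fifoPath = "/var/run/brltty-speech";  /* assumed path */
        const char *text = "hello from an external application\n";

        int fd = open(fifoPath, O_WRONLY);  /* blocks until BRLTTY reads */
        if (fd == -1) { perror("open"); return 1; }

        if (write(fd, text, strlen(text)) == -1) perror("write");
        close(fd);
        return 0;
    }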

> I think this should be done (meaning I could do it), but it needs to 
> be planned and discussed to avoid useless work. 
> For now I can think of three directions to start:
> * Build a complete client/server/library stack, like BrlAPI but 
> designed for speech.

There are many projects aiming for that already, and BRLTTY should be 
only one of their many possible clients.

> * Add extensions to the BrlAPI to send messages to the TTS.

That could be done but would be quite suboptimal IMHO.

> * Use an existing TTS dispatcher (speech-dispatcher?), and build a 
> bridge between brltty and the dispatcher (a pseudo TTS driver).

Exactly what I was saying above.

> For now, I think that the best solution would be the second one

I completely disagree.

> because:
> * The programmer has a single piece of code to deal with braille and 
> speech;

This is not a good design argument.  You should always aim for multiple 
smaller pieces of code rather than a single monolithic project.  

> * We can reuse all the "thread" and priority machinery used by BrlAPI, 
> which works well and has very few bugs, compared to building a new one 
> from scratch.

You should not have to rebuild anything from scratch.  Speech servers 
already exist.  Please consider using and improving an existing one 
which is not tied to BRLTTY.

> * It will be easier to port, maintain, and change a single library than 
> two.

Again I disagree.  It is far easier and more effective to maintain 
multiple small, independent libraries than to try to fit everything into 
a single package with interdependencies.  Independent libraries also 
have the advantage of being reusable by other projects that might be 
interested in speech only, without dragging along the braille support 
from BRLTTY.

You can then improve and debug those independent pieces separately, and 
even grow independent communities around them, without anyone needing to 
grasp the whole landscape.  Doing otherwise only increases complexity, 
and the first rule of software design is to limit complexity as much as 
possible.

All that is needed from BRLTTY at that point is a single speech driver 
that interfaces with that speech server through the speech server's API, 
without being further concerned with the speech server's internals.  It 
does not even need to know that some other application might be using 
the speech server at the same time.  Policy for speech handling, such as 
queue priorities and the like, is the speech server's business alone.
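
To make that concrete, here is a rough sketch of what such a bridge 
driver could look like when written against speech-dispatcher's 
libspeechd client library.  The bridge_* names are hypothetical 
placeholders for BRLTTY's actual speech driver entry points; only the 
spd_* calls are real libspeechd API:

    #include <libspeechd.h>  /* speech-dispatcher client library */

    /* Hypothetical driver hooks; a real BRLTTY speech driver would map
     * its entry points onto calls like these. */
    static SPDConnection *spd;

    static int bridge_open(void) {
        /* Connect to speech-dispatcher as an ordinary client. */
        spd = spd_open("brltty", "speech-bridge", NULL, SPD_MODE_SINGLE);
        return spd != NULL;
    }

    static void bridge_say(const char *text) {
        /* Just submit the text; queueing, priorities, and coexistence
         * with other clients are the speech server's business. */
        if (spd) spd_say(spd, SPD_TEXT, text);
    }

    static void bridge_mute(void) {
        /* Shown only to illustrate cancellation of this client's speech. */
        if (spd) spd_cancel(spd);
    }

    static void bridge_close(void) {
        if (spd) { spd_close(spd); spd = NULL; }
    }

    int main(void) {
        /* Stand-alone test: requires a running speech-dispatcher and
         * linking with -lspeechd. */
        if (!bridge_open()) return 1;
        bridge_say("speech bridge test");
        bridge_close();
        return 0;
    }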


Nicolas

