[BRLTTY] BrlAPI Raw key code mode?

Shérab Sebastien.Hinderer at ens-lyon.org
Fri Aug 25 16:28:11 EDT 2017


Hi,

Mario Lang (2017/08/24 16:54 +0200):
> I think we already have something like that, a predefined set of
> commands.  The problem is that we cannot predict what a BrlAPI client
> actually wants to do.  So we can make some assumptions about how a
> graphical screen reader works, but this will likely always be limiting
> to some types of clients (that might not even be written yet).

Fully agreed.

> Also, I am wondering how clients are going to implement their own
> bindings.
> JAWS for instance has an interactive key binding editor which is very
> nice for users.  You can define a new binding for a screen reader command by
> simply pressing the desired key combination interactively.
> I am suspecting that some BrlAPI clients would like to do something
> similar at some point.  Especially if the set of screen reader commands
> grows.  Some of these commands might not be bound by default to
> anything, waiting for the user to customize a binding if they really need the
> command.
> 
> If I haven't overlooked something, to me, what is missing is a way
> for the client to query the actual model the driver has
> detected.  With that, a client could implement their own key code
> handling, because they actually would know what they can expect from a
> readKey call.

This is something that is supposed to be provided by the BrlAPI
properties, once they are implemented.

> The client would still need to do their own key code
> handling, which seems fine to me if what is desired is more control.

To make this easier, and to guarantee as much consistency as possible
with the core's way of handling bindings, would it make sense to build
the code in charge of this as a library that clients could link to and
use?

> Exporting our knowledge of how certain NavigationKeys are named could
> also help the client to avoid duplicating too much stuff.

I believe this could come as a side-effect of the library suggested
above.

> In any case, if we proceed with an attempt to generalize for graphical
> screen readers, we really should talk to the Orca devs about what they
> actually want.  They might have some insights that we haven't
> thought about yet.

I am fully convinced it will be beneficial to talk to Orca developers.
In particular, I have always had the impression that Orca, like JAWS,
Window-Eyes, and VoiceOver, is a speech-centric screen review system,
while brltty is, in my opinion, braille-centric. By this I mean that the
features offered by these screen review systems, brltty excepted, seem
much richer to me when it comes to speech than when it comes to braille.
I also believe that these systems were developed with speech in mind and
that braille support was incorporated only afterwards; at least that is
my impression. As an example, I am pretty sure these tools can be used
quite easily with speech only, but I believe one could not use them with
only braille turned on, or at least it would not be as comfortable as
the pure speech experience.

Do you guys share this vision of things?

If you do, then I think talking to the Orca developers becomes even
more important. We would also need to think about how we, braille fans,
would like to access a graphical desktop with braille only.

Comments warmly welcome.

Shérab.

