[BRLTTY] Footsteps towards better accessibility in Linux
Aura Kelloniemi
kaura.dev at sange.fi
Tue May 6 21:15:20 UTC 2025
Hi,
On 2025-05-06 at 22:31 +0200, Samuel Thibault <samuel.thibault at ens-lyon.org> wrote:
> Aura Kelloniemi, on Tue, 06 May 2025 23:11:47 +0300, wrote:
> > I think that we could discuss decoupling these activities – i.e. separating
> > the BrlAPI server out of BRLTTY (possibly together with some additional features).
> > BRLTTY would then always communicate with the display(s) through BrlAPI.
> What would be the benefit, compared to the additional latency?
There are several. Most of them relate to usage scenarios where the Linux
virtual consoles are not the only environment the user works in.
- Resource usage: if the user does not use VTs (or they are not available),
having a core BRLTTY that reads unused screens, parses configuration files,
manages braille tables, starts a speech driver, etc. is useless. All of
BRLTTY's screen-reading features would be just extra bloat (and would also
be additional attack surface).
- Multiple displays: if the user uses multiple displays (like I sometimes do),
they cannot connect them all to the same BRLTTY instance. Running a second
BRLTTY for the second display does not help if the user wants to use
anything that relies on BrlAPI.
- Separation of roles/Unix philosophy: communicating with a braille display
and providing a user interface are different tasks, and they naturally belong
in different programs once the user wants to use only one of these features.
- Better support for BrlAPI-enabled applications: currently it is non-trivial
to write BrlAPI applications which do not base their input handling on
BRLTTY's command set. BRLTTY's command set works well if the application is
a terminal screen reader, but it becomes a limitation if the application
does something different.
For example, I have written an e-book reader for BrlAPI, but BRLTTY does not
have commands for navigating menus, exiting contexts, activating buttons,
etc. Thus the application works well only with my display and with my custom
BRLTTY key bindings (see the first sketch after this list).
Separating BrlAPI from terminal screen reading (BRLTTY) would require
rethinking the interface between the application and the BrlAPI server.
- Resource usage again: if the user runs several BRLTTYs (e.g. one for the
console, one for gnome-terminal, one for some other terminal emulator, etc.),
the BRLTTY binaries will all contain code for the BrlAPI server, and this
code will be kept in memory even though only the core instance of BRLTTY
runs the server.
- Device management: whether it is adding, removing or switching a braille
display from one connection type to another, all device management could be
done by the core server. This would mean that all clients could seamlessly
switch from one display to another, or the user could configure the server
to keep multiple displays synced together, link different clients to control
different displays independently, or whatever. Compare this to the current
udev-based setup, where connecting a USB braille display spawns a new BRLTTY
instance – most likely without a BrlAPI server, and certainly without
migrating old BrlAPI clients to the new server (see the second sketch after
this list).
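
To make the input problem concrete, here is a rough sketch of a BrlAPI client
in C – roughly my e-book reader boiled down to its skeleton. The calls are the
standard BrlAPI C API as far as I know; the display text and the navigation
logic are made up for illustration. Note how brlapi_readKey delivers BRLTTY
command codes such as BRLAPI_KEY_CMD_LNUP: a screen-reader vocabulary with
nothing that means "activate this button" or "leave this menu":

    /* Minimal BrlAPI client sketch; build with: gcc reader.c -o reader -lbrlapi */
    #include <stdio.h>
    #include <brlapi.h>

    int main(void) {
      /* NULL, NULL = connect to the local server with default settings. */
      if (brlapi_openConnection(NULL, NULL) == (brlapi_fileDescriptor)-1) {
        brlapi_perror("brlapi_openConnection");
        return 1;
      }
      /* Claim braille output for the current tty; a NULL driver name means
       * input arrives as BRLTTY commands, not raw driver keycodes. */
      if (brlapi_enterTtyMode(BRLAPI_TTY_DEFAULT, NULL) < 0) {
        brlapi_perror("brlapi_enterTtyMode");
        return 1;
      }
      brlapi_writeText(BRLAPI_CURSOR_OFF, "Chapter 1 > The beginning");

      brlapi_keyCode_t key;
      while (brlapi_readKey(1 /* block */, &key) > 0) {
        /* Strip the flag bits, keep the command type and code. */
        switch (key & (BRLAPI_KEY_TYPE_MASK | BRLAPI_KEY_CODE_MASK)) {
          case BRLAPI_KEY_CMD_LNUP: /* previous line -- fine for reading */ break;
          case BRLAPI_KEY_CMD_LNDN: /* next line -- fine as well */ break;
          /* But there is no BRLAPI_KEY_CMD_* meaning "open menu", "activate
           * button" or "exit context", so an application has to repurpose
           * unrelated commands or rely on custom key bindings. */
          default: goto quit;
        }
      }
    quit:
      brlapi_leaveTtyMode();
      brlapi_closeConnection();
      return 0;
    }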
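And a second sketch to illustrate the migration problem: a BrlAPI client picks
one server when it connects, and that binding is fixed for the connection's
lifetime. The ":1" host syntax (second local server) is how I understand the
BrlAPI host specification; the point is that there is no call for enumerating
displays or hopping to another server – which is exactly what a separated core
server could provide.

    /* Sketch: a client is pinned to one server, chosen at connect time. */
    #include <brlapi.h>

    int main(void) {
      brlapi_connectionSettings_t settings = { .auth = NULL, .host = ":1" };
      if (brlapi_openConnection(&settings, &settings)
          == (brlapi_fileDescriptor)-1) {
        brlapi_perror("brlapi_openConnection");
        return 1;
      }
      /* From here on, everything this client does talks to that one server
       * and its display; if the display is re-attached to another BRLTTY
       * instance, the client cannot follow it. */
      brlapi_closeConnection();
      return 0;
    }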
These are just off the top of my head. I assume there are other benefits, as
well as many issues to be resolved. But the most important point here is that
if we started using interfaces other than the Linux console, we would need to
handle different setups with minimal hassle. The problems I listed are already
present and are blocking me from using multiple devices effectively or from
migrating to a GUI.
--
Aura