[BRLTTY] [OT] How to integrate BSI into a Linux cell phone?

Rich rdm at cfcl.com
Mon Aug 17 16:21:31 EDT 2020


Thanks for the detailed, thoughtful response!  I've edited in my answers below.

> On Aug 17, 2020, at 09:03, Mario Lang <mlang at blind.guru> wrote:
> 
> Voice Recognition also conflicts with your idea of a CLI-based system.

It's certainly an unusual combination, but not unthinkable.  For example,
if Richard Stallman can dictate Emacs key combinations to a human typist,
a blind user could certainly dictate *nix commands using voice recognition.
But, given that the processing needs are currently unmet, let's move on.

> A physical keyboard doesn't sound like a real problem.

I'm mostly basing this on comments by a blind friend who uses an old, small
iPhone for much of her Internet access.  She is unhappy that all of the new
iPhones are larger than she wants or needs; carrying around a keyboard (even
a small, folding one) would be a nuisance.

> I guess the reason for that is that plain Linux systems running on a
> worn-out mobile device aren't really popular amongst blind users.

I think a lot of blind folks aren't Linux users, so convincing them to learn
a bunch of CLI magic may be an uphill slog.  I'd argue, however, that even a
"worn-out mobile device" might well run a CLI-based Linux system just fine.

> Cost-wise, Raspberry Pis are much more popular when it comes to
> building a low-cost computing solution.  ...

I've been following various RasPi-based projects (e.g., Stormux, VOISS) with
great interest.  However, although a RasPi can work well as a desktop machine,
it isn't a great solution as a portable device.  The main problem is the lack
of power management.  Without this, the device will either need to be booted
up before each use (feh!) or require an unreasonable number of batteries.  One
thing I like about cell phones is that they have power management hardware and
software, so they can be used as an "instant on" notetaker, etc.

> Are you saying you want to implement the motion tracking for the multitouch
> screen with the stack you mention above?  Or do you plan to write your
> applications with that stack?

I'm mostly thinking about the motion tracking, gesture interpretation, etc.
Once my code has generated some ASCII text, the shell or other Linux commands
would decide what to do with it.
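To make that last step concrete, here is a sketch (in C, under assumptions of my own) of how a gesture interpreter might turn a recognized six-dot braille chord into ASCII before handing it off.  The encoding (bit 0 = dot 1 through bit 5 = dot 6), the function name, and the tiny table are all hypothetical, not anything BRLTTY or BSI actually defines:

```c
/* Hypothetical sketch: once the multitouch code has decided which of the
 * six braille dots were touched, the chord still has to become ASCII text
 * for the shell.  Assumed encoding: bit 0 = dot 1 ... bit 5 = dot 6. */
static char chord_to_ascii(unsigned chord)
{
    switch (chord) {
    case 0x01: return 'a';   /* dot 1      */
    case 0x03: return 'b';   /* dots 1-2   */
    case 0x09: return 'c';   /* dots 1-4   */
    case 0x19: return 'd';   /* dots 1-4-5 */
    case 0x11: return 'e';   /* dots 1-5   */
    default:   return '\0';  /* unmapped; a real table covers the full cell */
    }
}
```

A full implementation would of course use a complete translation table (or a library such as liblouis) rather than a switch statement.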

> You seem to be contradicting yourself.  At the beginning, you argue that
> a plain Linux system is better because it doesn't have a GUI.  Now, you
> are looking for ways to interface with a windowing system.  I am confused.

Join the club :-).  The reason I mention a windowing system (really, desktop)
is that most Linux systems I've seen seem to bundle session management and
other features into something like MATE.  Ideally, however, I'd like the system
to work without that overhead.

>> Should I be looking into D-Bus, GTK, Qt, or what?
> 
> That really depends on what sort of applications you are trying to
> support on your system.  If I wanted to write a BSI for CLI on Linux,
> I'd be looking into the Linux kernel input device API.  You can simulate
> a keyboard in software with that.  How you talk to your multitouch
> screen devices is totally out of my knowledge.

It seems reasonable to assume that the rest of the system (e.g., desktops)
gets its keyboard input in that manner.  I'll definitely look into this.

> Frankly, you don't quite understand the ecosystem you are trying to adapt.

Quite true.

> Orca is only relevant if you decide to go for a GUI based
> system.  Emacspeak is pretty specific to Emacs users, although I guess
> it could be used as the main audio desktop if you want to go that way.
> BRLTTY should be easy to plug in, if you are either using virtual
> terminals or something that requires Orca.

Thanks again for the detailed response.

-r
