[BRLTTY] Is there a feature-compatible text-based browser

kperry at blinksoft.com
Sat Sep 27 19:48:57 UTC 2025


Yes, it can; you can have models offline.  In fact, I would like to create
a model specifically for my idea.  I just got a new laptop because my old
laptop could not handle creating my own models.  I would start with one of
the open-source models for doing graphics or something like that.

-----Original Message-----
From: BRLTTY <brltty-bounces at brltty.app> On Behalf Of Rob Hudson
Sent: Saturday, September 27, 2025 3:04 PM
To: Informal discussion between users and developers of BRLTTY.
<brltty at brltty.app>
Subject: Re: [BRLTTY] Is there a feature-compatible text-based browser

Can AI-based stuff operate while offline? In addition to paying for the
screen reader (presuming it goes on sale), would users also have to pay
for an AI subscription to use the screen reader? I do agree the old model
of screen reader is getting harder to maintain in the wake of new advanced
content generation, but is asking a user to pay for an additional
subscription to use their devices the answer?

----- Original Message -----
From: <kperry at blinksoft.com>
To: "'Informal discussion between users and developers of BRLTTY.'"
<brltty at brltty.app>
Date: Sat, 27 Sep 2025 14:58:33 -0400
Subject: Re: [BRLTTY] Is there a feature-compatible text-based browser

> Yes, I agree with this. Flipper wasn't the only screen reader that 
> explored these ideas. ASAP, for example, had watch windows and set 
> files that let you track information from different parts of the screen.
>
> What I'm working on now takes a very different approach. I've set up a 
> Raspberry Pi connected to my Windows machine, with HDMI out from the 
> PC going into the Pi's camera input. On the Pi I run a simple 
> Python-based OCR screen reader. It's still pretty basic, but the idea 
> is to move away from relying only on the back end of the operating 
> system. Now that devices like the Raspberry Pi are powerful enough to 
> do real-time OCR, I think it's worth
> asking: can a screen reader learn to use a computer the way a sighted 
> person does-purely from what's on the screen?
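>
> Roughly, the core loop looks like this - a minimal sketch, assuming an
> HDMI-to-CSI capture bridge that shows up as an ordinary V4L2 camera,
> with OpenCV, pytesseract, and the espeak command installed (the device
> index and polling interval here are placeholders, not my real setup):
>
> import subprocess, time
> import cv2, pytesseract
>
> cap = cv2.VideoCapture(0)   # HDMI capture bridge seen as a camera
> last_text = ""
> while True:
>     ok, frame = cap.read()
>     if not ok:
>         continue
>     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
>     text = pytesseract.image_to_string(gray).strip()
>     if text and text != last_text:   # respeak when the screen text changes
>         subprocess.run(["espeak", text])
>         last_text = text
>     time.sleep(0.5)                  # crude polling interval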
>
> Looking ahead, I imagine we could even use smart glasses as the main 
> interface with all devices. Those glasses could still talk directly to 
> the OS, but I want to see how far we can get by interpreting the 
> graphical interface with today's AI tools.
>
> Right now, my prototype just watches the monitor and speaks any 
> changes it detects. My next goal is to define "regions" that can be 
> spoken when needed, instead of everything at once. For example, the 
> clock region shouldn't be spoken unless you ask for it. Toolbars on a 
> webpage shouldn't be repeated constantly, while the main content 
> should only be read if it changes. That's still a big 
> oversimplification, which is why I want to bring AI into the mix.
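>
> To make the region idea concrete, I'm experimenting with something
> like this (just a sketch; the names, coordinates, and policies are
> made up for illustration):
>
> from dataclasses import dataclass
>
> @dataclass
> class Region:
>     name: str
>     box: tuple    # (x, y, width, height) in screen pixels
>     policy: str   # "on_demand", "on_change", or "ignore"
>
> REGIONS = [
>     Region("clock",        (1800, 1060, 120, 20), "on_demand"),
>     Region("toolbar",      (0, 0, 1920, 80),      "ignore"),
>     Region("main_content", (0, 80, 1920, 980),    "on_change"),
> ]
>
> def should_speak(region, changed, user_requested):
>     # on_demand regions stay silent unless explicitly requested;
>     # on_change regions speak only when their text differs from the
>     # last pass; ignore regions never speak.
>     if region.policy == "on_demand":
>         return user_requested
>     if region.policy == "on_change":
>         return changed
>     return False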
>
> One of my students, Braeden, built a project called View Point 
> (available at nibblenerds.com). It's an AI screen reader assistant, 
> powered by Gemini, that can already do a lot-even with your screen 
> reader turned off. Combining something like View Point with my 
> OCR-only screen reader could open up very interesting possibilities.
>
> This really forces us to rethink what a screen reader is and what it 
> should do. Instead of just being an interpreter for the OS, it could 
> become an intelligent companion that understands the screen, filters 
> information, and interacts with the user in more natural ways.
>
> Of course, if you throw braille in as a braille-first viewer, that is a
> whole other ball game, but there needs to be a lot of thought put into
> our next steps.
>
>
>
> -----Original Message-----
> From: BRLTTY <brltty-bounces at brltty.app> On Behalf Of Brian Buhrow
> Sent: Wednesday, September 17, 2025 3:01 PM
> To: Informal discussion between users and developers of BRLTTY.
> <brltty at brltty.app>
> Subject: Re: [BRLTTY] Is there a feature-compatible text-based browser
>
> 	hello.  I realize this is completely anecdotal, but in my
> experience, even when using graphical browsers such as Chrome or Edge,
> many dynamically generated web sites are virtually impossible to use.
> I recently learned this is, in large part, because UIA, which is how
> screen readers get most of their data today, is a single-threaded
> library which must be accessed from the root thread of the browser.
> This is why sighted users can begin interacting with such pages much 
> sooner than folks using screen readers.  Of course, there are 
> different definitions of "dynamic" and I agree with Ken that we should 
> try to specify what we mean.  For example, there are many pages where 
> there are a series of drop down menus and the choices which appear in 
> a given menu depend on choices made in another menu on the page.  If 
> the lynx-like interface were modified to communicate the menu choices 
> the user made at the time a drop down menu was closed, as opposed to 
> when the final submit button was pressed, most of the issues with 
> those types of pages would be resolved.  Pages that auto-update
> dynamically at short intervals, say, every 5-10 seconds, would be a
> problem for every nonvisual interface I can think of today.  For
> example, stock broker pages which show a stock ticker running across
> the bottom of the page.  Mostly the nonvisual user would elect to
> silence that portion of the page and ignore it, or grab snapshots of
> it at much slower refresh rates.
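>
> Going back to the dependent-menu example, one way to picture that
> change in a lynx-like interface is something like this (a sketch only;
> the widget class and its methods are hypothetical, not anything in
> lynx today):
>
> class Dropdown:
>     # A form select element as a text-browser widget.
>     def __init__(self, options, on_change):
>         self.options = options
>         self.on_change = on_change   # the page's change handler
>         self.value = None
>
>     def close(self, chosen):
>         # Report the choice the moment the menu closes, instead of
>         # waiting for the final submit, so dependent menus on the
>         # page can repopulate before the user moves on.
>         if chosen != self.value:
>             self.value = chosen
>             self.on_change(chosen)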
> Again, however, I think configuration options could be defined to help 
> users deal with these types of pages in a more efficient manner and 
> those options could be compatible with the lynx interface paradigm.  
> I'm thinking about the old DOS screen reader Flipper, which dealt with 
> issues like this by allowing the user to define "watch" windows which 
> could notify the user when something changed in a region of the 
> display, or "ignore" windows which could be completely ignored by the 
> screen reader regardless of what happened in the defined region.  
> There might be a discussion about whether some of the features should 
> live in the screen reader or the browser, but given how tightly modern 
> screen readers on Windows are tied to the browser, I don't see a real
> problem with making similar ties in a text-based interface.
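>
> For the watch-window part, something along these lines would probably
> do (a sketch only; it assumes frames arrive as numpy arrays, as in the
> OCR prototype described earlier in this thread, and simply hashes the
> pixels in each rectangle):
>
> import hashlib
>
> WATCH = {"ticker": (0, 1040, 1920, 40)}   # made-up coordinates
> last_seen = {}
>
> def check_watches(frame):
>     # Hash the raw pixels of each watch window; a changed digest
>     # means something in that region of the display changed.
>     for name, (x, y, w, h) in WATCH.items():
>         digest = hashlib.md5(frame[y:y+h, x:x+w].tobytes()).hexdigest()
>         if name in last_seen and last_seen[name] != digest:
>             print(f"{name} changed")   # or queue it for speech
>         last_seen[name] = digest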
>
> In either case, Firelynx is an interesting start to this project, as 
> are some of the projects Ken has alluded to in this thread, assuming 
> some or all of the projects mentioned can be made more generally 
> available to the blind developer community.
>
> -thanks
> -Brian
>
_______________________________________________
This message was sent via the BRLTTY mailing list.
To post a message, send an e-mail to: BRLTTY at brltty.app
For general information, go to: http://brltty.app/mailman/listinfo/brltty


