Re: [Accessibility] Call to Arms
From: Eric S. Johansson
Subject: Re: [Accessibility] Call to Arms
Date: Wed, 28 Jul 2010 13:04:17 -0400
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.7) Gecko/20100713 Thunderbird/3.1.1
On 7/28/2010 12:26 PM, Christian Hofstader wrote:
eric: I would love to, except much of the process requires computer programs
that I can't use with currently available speech recognition. Seriously,
Richard, you do have a chicken-or-egg problem. The Free Software Foundation
philosophy precludes enhancing nonfree software. I get that and I can make my
peace with it. The cost is that you lose volunteers from the disabled
community. If they can't enhance or expand on NaturallySpeaking's capabilities,
then they can't participate until you produce a large-vocabulary continuous
speech recognition system. Bit of an organizational pickle. But then again,
the other solution is too.
cdh: I don't know if this is possible, but if we can build a limited-vocabulary
speech recognition engine designed to work explicitly for programmers, which we
can put out as emacs macros, would a person incapable of using a keyboard be
able to help with the hacking? As it is emacs, we can do a whole lot of command
and control statements as well as meta commands that do a bunch of things with
few words, which would be useful to a hacker sans keyboard. If it can be done,
it can be done in emacs and emacspeak, and therefore this could be a very cool
place to start. It is probably a shorter route to getting hackers with this set
of disabilities up and going to help build the next generation.
Head-desk head-desk head-desk
Believe it or not, 90% of what you do when writing code is dealing with English
symbols modified according to an algorithm in your head. If you write
comments (a forgotten skill), you need to use real English in a specialized
framework to accommodate the stylistic needs of the code. When I wrote code, I
would write a small novel's worth of comments as I explored the idea of what I
needed to do in a piece of code. And yes, I kept the two in sync. To those who
claim that's difficult: if a crip can do it, <Eric says something rude about
those who won't>.
Every time you create code, not only do you have plain text modified by some
rules, you have contextual data helping you interpret or anticipate what you're
going to write. For example, if you know that something is an instance of a
class, then the first time you create it, you know you need to deal with a
constructor's type signature. When you use it on the right-hand side of an
equation, you know it's just an instance, and what follows it is the symbol
joining it to a method. This knowledge of the instance and its class tells you
what can be spoken next. Once the method has been selected, you know the type
signature you need to speak to.
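To make that concrete, here's a minimal sketch in Python (the names and code
are mine, invented for this message, not from any existing tool): the things
safe to speak after an instance name are just the public methods of its class,
and once a method is chosen, its signature tells you what comes next.

    import inspect

    def speakable_methods(obj):
        # The only things safe to speak after an instance name are the
        # public callables on the instance's class.
        return sorted(name for name in dir(obj)
                      if not name.startswith("_")
                      and callable(getattr(obj, name)))

    def signature_to_speak(obj, method_name):
        # Once a method is selected, its signature is what you have to
        # speak to next.
        return str(inspect.signature(getattr(obj, method_name)))

With s = "hello", speakable_methods(s) includes "replace", and
signature_to_speak(s, "replace") comes back as something like
"(old, new, count=-1, /)".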
This is one example where the interrupting-cow user interface would be really
helpful. When you stop after the instance name, it can tell you which methods
are safe to speak. If you pause after the method, it can tell you what the type
signature is and even provide a wizard framework where you can go through and
change arguments by name. This wizard framework would reduce cognitive
overload (what do I say to get what I want?) and make it easier to navigate
by voice. It would also make it easier to recursively go through the
argument-definition process with another instance/method etc. as one of the
arguments. I prefer to assign the output of all calls to separate variables and
pass those variables. I think that's an artifact of how speech recognition
works when writing code today.
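Here's roughly what I mean by the wizard, again a Python sketch with made-up
names rather than working software: walk the signature one argument at a time,
take whatever the user dictated by name, and fall back to defaults for anything
unspoken.

    import inspect

    def argument_wizard(func, spoken_values):
        # spoken_values maps an argument name to whatever the user
        # dictated for it ("change arguments by name").
        bound = {}
        for name, param in inspect.signature(func).parameters.items():
            if name in spoken_values:
                bound[name] = spoken_values[name]
            elif param.default is not inspect.Parameter.empty:
                bound[name] = param.default
            else:
                # A real interface would prompt by voice here; the
                # sketch just names the missing required argument.
                raise ValueError("say a value for required argument %r" % name)
        return func(**bound)

For the recursive case, one of the spoken values would itself be the variable
holding the result of an earlier instance/method pass, which is exactly why I
like putting each call's output in its own variable first.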
But we are not limited to this 40-year-old style of writing code. We can use
different notations, different methods of expressing code, like literate
programming, which is a better match for speech recognition use.
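As a sketch of what I mean, in the spirit of Knuth's WEB/noweb (not any
particular tool's exact syntax): most of what you dictate is ordinary English,
and only the small named chunk needs code-mode dictation.

    The scoring pass walks the word list once and counts exact matches
    against the target; this paragraph is plain dictation.

    <<count exact matches>>=
    matches = 0
    for word in words:
        if word == target:
            matches += 1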
Speaking the keyboard is a disaster. It will damage your voice; it will leave
you unable to speak faster than any other technique will. It is slow, it is
error-prone, and it drives you to think about solving the problem using more
difficult solutions. I'm sure there are other disasters with speaking the
keyboard that I've seen over the years, which I will pass along as they come
to me.
Here's an experiment you can inflict on someone who likes you enough to be a
guinea pig, but not so much that you'd ruin the relationship over it.
Sit them in front of the computer.
You sit on the other side of the computer.
Give them the instruction to type exactly what you say and nothing else. If
you go too fast, it's okay for them to drop words. Even if you threaten them
with bodily harm, they are to type only what you say. If it makes them feel
better, give them a telephone and dial 91.
When they are in an editor, start dictating code. Remember, if you go too
fast, they will drop words on you.
First, speak some text.
Second, speak some code.
Third, speak the code the way you are thinking of, with a limited,
non-word-oriented dictation system.
Now, save the file and have them e-mail it to all of us (hopefully keeping you
from cheating :-).
Now look at the text. If you're using a modern speech recognition engine, you
should have about one error in 20 words (roughly 95% word accuracy). Typos in
this test count as misrecognitions. If the person truly knew nothing about
coding, you'll see something that vaguely resembles your code, but it's not
going to be pretty.
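If you'd rather score the transcripts than eyeball them, a crude word-level
comparison is enough for this experiment; this is my sketch using only the
Python standard library, not a proper word-error-rate tool.

    import difflib

    def word_error_rate(reference, hypothesis):
        # Crude error rate: words that differ between what was said
        # (reference) and what was typed or recognized (hypothesis),
        # as a fraction of the reference length.
        ref, hyp = reference.split(), hypothesis.split()
        matched = sum(block.size for block in
                      difflib.SequenceMatcher(None, ref, hyp)
                      .get_matching_blocks())
        return (max(len(ref), len(hyp)) - matched) / len(ref)

One error in 20 words comes out as a word_error_rate of about 0.05.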
You'll probably also notice that you are much more tired in the third case than
you were in the second. That's not just dictation fatigue; that's actual
fatigue caused by mental overload and physical strain on the voice.
I'm sorry this sucks so badly and there's no simple answer, but think about
this: you have had some very smart people working on this problem for 15 years,
and we don't have a good solution. I'm hoping this good group of people can
have more luck coming up with a better one.
cdh: Also, an area about which I am nearly totally ignorant is on-screen
keyboards. I'm told that GNOME 3 has a much better replacement for GOK coming
that may also be useful for people who cannot use a standard keyboard.
Yes, I've heard that too. I am hopeful because, for extremely disabled people
who have no other option, it's opening a new world for them. Something like
this could be the basis for unicorn-stick input (using a touchscreen), toggle
switches, or scanning input.
I still think unicorn sticks are a drunken frat initiation prank.