
Enabling text editing for the visually challenged – A project by David Effendi

When I first looked through the ChorusText website and YouTube video, I found the concept pretty interesting. ChorusText is a device that sets out to facilitate text editing for the visually challenged by making it sight-independent. For a better understanding of what ChorusText is about, check out this video first before reading David Effendi’s replies to our email interview.

MFS: What made you create ChorusText?

David: “Computer text editing, as we know it today, is a sight-led activity. For those without sight, or with only very little remaining, it can be a challenge. Yet text editing is a very important part of our daily lives, so it is a real problem.

I simply want this problem nailed. I really hope that in 20 years’ time this becomes a non-issue, and there’s a completely sight-independent (and Libre!) alternative for editing text effectively.

It is crucial for it to be open source and to prioritize “Education” over everything else, including “Economy”. I believe that by making it open source and as easy as possible for people to tinker with, we can push the limits much further, collectively. Ideally, ChorusText becomes “that next project” one takes on after working through an “Arduino Starter Kit”.

The “Libre” part has got to be there as well. It is not an entertainment device or game console that one seeks amusement from, but rather a device that helps a person stay functional, without which s/he would be at a disadvantage compared to the rest of the population. A device meant to level the playing field should be universally open and avoid lock-ins as much as possible.”

MFS: When did you first build it and how long did you spend doing it? How did you start?

David: “The initial spark of the idea came in February 2014, and it was very different from the current state. At first I wanted to use a head-tracking IMU, but it has several shortcomings (it strays, can’t stay fixed on a point for a prolonged period, is prone to unintentional inputs, etc.).

After more brainstorming with friends, I remembered that SparkFun has these motorized slide potentiometers, which were demonstrated in one of their “New Product Friday” videos. They would solve the problems of straying, fixation, and unintentional inputs. And so I continued with the sliders. Real work began in May 2014.”

MFS: Were you working individually or as a team? What kind of challenges did you face?

David: “A friend (Dr Corey Brady from Northwestern University) helped with laser-cutting the acrylic enclosure. Having a full protective enclosure really brings it up a notch.

Before that, all the sliders and components were mounted on 3 small acrylic plates joined together to form a surface; cables were hanging underneath them and it was pretty fragile. Even moving it from one table to another could result in connections coming loose and the device becoming non-functional (and I’d be scratching my head for the next 15 minutes trying to figure out which cable came loose 🙂).

But with a full protective enclosure, I could bring it to Jakarta and back without any problems at all. This gives me something solid that I can bring to a table, put down, and let people try when I ask for their thoughts/opinions/ideas.

Previously I sent the design files to Seeed Studio for laser cutting, but unfortunately they could only cut acrylic up to 20cm x 20cm at that time (as of now, Seeed’s laser-cutting service can cut up to 30cm x 30cm). The current design needs 30cm x 30cm, which Dr Brady’s machine can do. Dr Brady has been kind enough to help the project using his own resources, but I need to find a local (Singapore) source for laser cutting in the long run.”

MFS: Is the current version the final version, or is it still in the prototyping stage?

David: “It is still a prototype (and I think it will remain a prototype for a long time, as there are more features I’d like to implement), unless “magic” happens through Google Summer of Code, the project attracts an army of developers, or something similar to that effect :)”

MFS: Is this open source or will this be a commercial item for sale?

David: “This is open source. One of the main reasons why I am doing this is so that text editing is no longer a challenge for the visually impaired. And I think an assistive device that is open source and “Libre” can bring this about better than a commercial one would.”

MFS: You mentioned on your website that the ultimate goal is for ChorusText to become an online, collaborative text-editing platform enriched with social and chat functionality. Can you explain what you mean by this?

David: “I’d like to implement some kind of chat functionality in ChorusText, where the user can simply turn a knob to “Chat” mode and send and receive text messages to and from friends, using the keyboard and the sliders.

Hopefully this helps mitigate the problem of social isolation, which affects many visually impaired people (if we can’t see, we don’t travel to see our friends as much; if we can’t travel, at least we can send messages over the internet). Right now I am looking into Telegram messenger’s API to see if integration is feasible.”
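A minimal sketch of what such an integration could look like, assuming the public Telegram Bot API over HTTPS; the bot token is a placeholder, and the routing of text between these functions and the sliders is left out:

```python
# Sketch: send and receive chat messages via the Telegram Bot API.
import requests

BOT_TOKEN = "123456:ABC-DEF"  # placeholder token from @BotFather
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def send_message(chat_id, text):
    """Deliver one line of text typed on the ChorusText keyboard."""
    requests.post(f"{API}/sendMessage", data={"chat_id": chat_id, "text": text})

def poll_messages(offset=None):
    """Yield incoming messages so they can be read out via the sliders."""
    params = {"timeout": 30}
    if offset is not None:
        params["offset"] = offset
    updates = requests.get(f"{API}/getUpdates", params=params, timeout=35).json()
    for update in updates.get("result", []):
        msg = update.get("message", {})
        if "text" in msg:
            yield update["update_id"], msg["chat"]["id"], msg["text"]
```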

“Also, it is possible to “send” physical movements over the internet (a servo connected to a plastic hand that rises up to give the user a “virtual hi-5”, for example), but the trick is to do so in a friendly and safe manner.
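As a rough illustration of the hi-5 idea, here is a sketch that commands a servo through a microcontroller over USB serial; the port and the “S&lt;angle&gt;” command protocol are hypothetical and would depend on the firmware:

```python
# Sketch: drive a "virtual hi-5" servo via a serial-attached microcontroller.
import time
import serial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # placeholder port

def virtual_hi_five():
    ser.write(b"S90\n")  # hypothetical command: raise the hand to 90 degrees
    time.sleep(1.0)      # hold the pose briefly
    ser.write(b"S0\n")   # lower it again, slowly enough to be safe
```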

Maybe we can turn the knob to “Collaborate” mode and edit a document/text together (like Etherpad-Lite or Google Docs, but with updates happening one sentence at a time instead of one keystroke at a time).

Any new sentence sent in by a user can be pushed to all users editing the text. Who knows what could happen when we bring minds together like this? (This is kind of experimental though.)
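A tiny sketch of the sentence-at-a-time idea: compare the old and new text and push only the sentences that were added (the sentence splitter here is deliberately naive):

```python
# Sketch: detect newly added sentences so only they get broadcast.
import re

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def new_sentences(old_text, new_text):
    """Sentences present in new_text but not in old_text."""
    seen = set(split_sentences(old_text))
    return [s for s in split_sentences(new_text) if s not in seen]

# Only the second sentence would be pushed to the other collaborators.
print(new_sentences("Hello there.", "Hello there. Nice to meet you."))
```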

From some discussions, there is also the idea of developing ChorusText into a device for accessing Wikipedia’s content, using the MediaWiki API. I think this is very interesting and worthwhile too: turn the knob to “Wikipedia” mode, type in some search words, and the search results become available via the sliders.
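A small sketch of that mode, using MediaWiki’s public search endpoint; the query and result handling are illustrative, and real code would feed the titles to the sliders:

```python
# Sketch: search Wikipedia through the MediaWiki API.
import requests

def wikipedia_search(query, limit=5):
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": limit,
        "format": "json",
    }
    data = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
    return [hit["title"] for hit in data["query"]["search"]]

print(wikipedia_search("assistive technology"))
```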

There’s also a much better-sounding text-to-speech engine called MaryTTS, and it would be great if we could use it or offer the user a choice of TTS engines. The current one is eSpeak, which is very lightweight and robust, but not as pleasant-sounding (monotonous and robotic, but I think it’s nicely geeky 🙂).
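One way to offer that choice, sketched under the assumption that a MaryTTS server is running on its default port (59125) and that eSpeak and aplay are installed as the fallback path:

```python
# Sketch: prefer MaryTTS when its server is up, fall back to eSpeak.
import subprocess
import requests

MARY_URL = "http://localhost:59125/process"  # MaryTTS default endpoint

def speak(text):
    try:
        params = {
            "INPUT_TEXT": text,
            "INPUT_TYPE": "TEXT",
            "OUTPUT_TYPE": "AUDIO",
            "AUDIO": "WAVE_FILE",
            "LOCALE": "en_US",
        }
        wav = requests.get(MARY_URL, params=params, timeout=2).content
        subprocess.run(["aplay", "-"], input=wav)  # pipe the WAV to ALSA
    except requests.RequestException:
        subprocess.run(["espeak", text])  # lightweight, robust fallback
```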

Also, from discussions with people I met at the GNOME Asia Summit, there were some ideas about desktop integration. There is already a screen reader on the Linux GNOME desktop that speaks out the text on the screen / under the mouse pointer. We could add on to that by sending the text to ChorusText in addition to the screen reader, so the text is also available via the sliders.
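On GNOME, the screen reader gets its text from the AT-SPI accessibility bus, so a sketch of this idea could listen on the same bus; forward_to_device() below is a placeholder for whatever transport the device ends up using:

```python
# Sketch: mirror AT-SPI text-insert events to ChorusText.
import pyatspi

def forward_to_device(text):
    print("to ChorusText:", text)  # placeholder: e.g. write to a serial port

def on_text_insert(event):
    # For object:text-changed:insert events, any_data holds the inserted text.
    if event.any_data:
        forward_to_device(event.any_data)

pyatspi.Registry.registerEventListener(on_text_insert, "object:text-changed:insert")
pyatspi.Registry.start()  # blocks, dispatching accessibility events
```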

It would be even better if we could tap into keyboard input events, so that following each keypress, a character is sent to ChorusText and the contents of the currently focused textbox match what’s on the device. I think this is by far the most interesting idea, but it needs more time to explore, especially as it falls outside my domain knowledge.
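On Linux, one plausible route is reading raw input events with the python-evdev library; the device path below is a placeholder, and the process needs read access to /dev/input:

```python
# Sketch: tap keyboard events with python-evdev and mirror key-downs.
from evdev import InputDevice, categorize, ecodes

keyboard = InputDevice("/dev/input/event0")  # placeholder device path

for event in keyboard.read_loop():
    if event.type == ecodes.EV_KEY:
        key = categorize(event)
        if key.keystate == key.key_down:
            print("key pressed:", key.keycode)  # e.g. forward to ChorusText
```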

All these are very interesting and definitely worth exploring.

But it also means that we are at a crossroads now. After getting the device to handle reading, typing, importing, and exporting, multiple paths lie ahead, but there remains only one developer. This is another reason for making ChorusText “Education First”, open source, and as easy as possible for other people to study, modify, and take wherever they want.”

MFS: Also, can you share whether you are based in Singapore or in Indonesia? If not in SG, could you tell us about the maker culture where you are?

David: “I am based in Singapore. For the past few years I had not been very active in the local makers community, because my son was still very young. But now he is older and I can afford to be more active (very much looking forward to it! 🙂 ).”

MFS: Have you attended the Maker Faire in SG before? How did you learn about it?

David: “This is actually my second time. Last year I showcased ChorusText too, but at the pcDuino booth. Dr Brady introduced me to Liu JingFeng, the founder of pcDuino, who came to Singapore for Mini Maker Faire 2014. He invited me to showcase in his booth since I am using a pcDuino for ChorusText. I am really thankful, especially since I missed the 2014 call for makers.”

MFS: What do you hope to take away from the event?

David: “Meeting people, brainstorming, raising awareness, and, who knows, hopefully getting more people interested in taking a look at ChorusText so that the number of developers grows. 🙂”

ChorusText is an open assistive device for people with low vision or blindness that lets them explore and edit text by means of touch and hearing. As you can tell from David’s sharing, his objective is to enable visually challenged people to communicate through ChorusText and to use the device for collaboration. If you are interested in David’s cause, check out his booth at Maker Faire Singapore, which takes place on 11 & 12 July next month!