Over the past few weeks we have been trying to understand how best to represent phonemes to support literacy skills in Arabic and English, and how best to display them alongside the words or multiwords added to the dictionary via the Symbol Management system. We have also discussed the need for recorded speech where synthesised speech or text to speech fails, for both MSA and Qatari Arabic. Research has shown how important phonemic awareness skills are for AAC users who go on to develop literacy skills, and it appears that listening to the sounds while seeing the text highlighted, along with finger pointing, helps reading skills (Vandervelden & Siegel, 1999).
One of the hardest problems in English is how to represent the sounds when the spelling of a word bears little resemblance to the spoken version. Decisions have to be made as to whether to use a system similar to that offered by the BBC, where sounds are written as combinations of vowels or consonants representing what is said, such as /th a ng k/ /y oo/, or to stay with the original spelling and simply divide the word into segments or syllables with the various blended or individual sounds, e.g. th a n k | y ou
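As a minimal sketch of the two options above (not project code; the entry structure and field names are purely illustrative), a dictionary record could store both a phonemic respelling and a spelling-based segmentation, with a simple display routine joining the units with spaces:

```python
# Hypothetical dictionary entry: keys and segmentations are illustrative only.
entry = {
    "text": "thank you",
    "phonemic": ["th", "a", "ng", "k", "y", "oo"],      # BBC-style respelled sounds
    "segments": ["th", "a", "n", "k", "|", "y", "ou"],  # original spelling, divided
}

def display(units):
    """Join segmentation units with spaces for on-screen display."""
    return " ".join(units)

print(display(entry["segments"]))  # th a n k | y ou
```

Keeping both forms side by side would let the interface switch between sound-based and spelling-based presentation without re-segmenting the word each time.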
Whilst discussing this matter with Professor Annalu Waller, Rolf Black, Andrea Kirton and Simon Judge at the Communication Matters Conference 2014, it became clear that the presentation should follow the way phonics is taught by primary school teachers, so that AAC users developing literacy skills can work alongside their classmates. In the UK, schemes such as Jolly Phonics are being used, and Andrea Kirton and Simon Judge are working on a phonic screen that might well be developed further to present the sounds with speech output, in a similar way to the Macmillan app developed by Vivid Interactive to provide speech therapists with the phonetic alphabet. With the English section of the Arabic Symbol Dictionary we may need to take this further, making clusters and blends part of the segmentation to aid the search and categorisation of words, for example using the listings provided in ‘Spotlight On Spelling: A Structured Guide To The Assessment And Teaching Of Spelling’ and the work of Cootes and Jamieson.
In Arabic, some thought is needed as to how phonemes are represented with the various diacritical marks. However, it is felt that by offering all the movements (diacritical marks), the text to speech (TTS) voices on offer will be able to provide acceptable pronunciation for most words, even if they fail on individual phonemes, where human recordings will be needed.
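The point about the movements can be illustrated at the text-encoding level. In Unicode the short vowels are combining marks layered on the base letters, so fully vowelled text carries the pronunciation information a TTS voice needs, while the bare consonant skeleton does not. A small sketch (assuming nothing beyond the Python standard library):

```python
import unicodedata

FATHA = "\u064E"  # ARABIC FATHA, a combining short-vowel mark

# k-t-b without diacritics vs. fully vowelled kataba ("he wrote").
bare = "\u0643\u062A\u0628"                                        # كتب
vowelled = "\u0643" + FATHA + "\u062A" + FATHA + "\u0628" + FATHA  # كَتَبَ

# Stripping the nonspacing marks (category "Mn") recovers the bare form,
# which is why undiacritised text is ambiguous for a TTS engine.
stripped = "".join(c for c in vowelled if unicodedata.category(c) != "Mn")
print(stripped == bare)  # True
```

The same skeleton كتب could equally be read as kutub ("books") with different movements, which is exactly why offering all the diacritics helps the synthesiser.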
Below you will find 16 rows with 28 representations of the Arabic alphabet, with possible phonemic variations, which can be read using the Arabic version of ATbar. As the phonemes are used in written Arabic, their letter shapes change: the shape of each letter alters depending on its position in the word or phrase. Arabic keyboards achieve this automatically! Here you are seeing all the letter combinations as if they were in their initial position. I should point out that corrections to this table may still need to be made by our Arabic-speaking experts, but it serves to show the type of discussions taking place at this stage of the research.
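The positional shaping described above can be inspected directly in Unicode, which, alongside the logical letters, also encodes the explicit presentation forms that a shaping engine normally selects for you. A short sketch using the letter beh (ب) as an example:

```python
import unicodedata

# The same logical letter BEH (U+0628) is rendered differently by position.
# Unicode encodes the four positional variants as separate presentation forms:
beh_forms = {
    "isolated": "\uFE8F",
    "final":    "\uFE90",
    "initial":  "\uFE91",
    "medial":   "\uFE92",
}

for position, char in beh_forms.items():
    print(position, char, unicodedata.name(char))
# e.g. initial ﺑ ARABIC LETTER BEH INITIAL FORM
```

In the table below every combination is shown in its initial form, so a rendering that picked the isolated or medial variant instead would look quite different even though the underlying letters are the same.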
Tullah has also been carrying out research in this area and has discovered an iPad app called ‘Sawti‘ developed by Ghada Alofisan of King Saud University, who has won awards for her work in this area and has presented at ICCHP. This is one of the first apps to offer Arabic AAC support, with symbols and their corresponding words spoken by male and female children’s voices. It offers users the chance to practise symbol / word recognition, with free text being read aloud with the synthesised voice. There are some colloquial Arabic words as well as MSA, and the user can choose when to use speech feedback.
The only problem we have found is that the voice changes depending on the symbol being read, which can be a little distracting, and some Arabic speakers questioned the way certain words were pronounced.
Both Arabic and English have such a wide range of pronunciation that we will have to agree on some guidelines for the way we work with voices / TTS and the way phonemes are presented.
Alarifi, B., Alrubaian, A., Alofisan, G., Alromi, N., & Al-Wabil, A. (2013). Towards an Arabic Language Augmentative and Alternative Communication Application for Autism. In A. Marcus (Ed.), Proceedings of HCI International 2013, DUXU/HCII 2013, Part II, LNCS 8013, pp. 333–341. Springer, Heidelberg.
Black, R., Waller, A., Pullin, G., & Abel, E. (2008). Introducing the PhonicStick: Preliminary evaluation with seven children. ISAAC 2008, Montreal, Canada. http://phonicstick.computing.dundee.ac.uk/publications/
Kirton, A., Judge, S., & P. B. (2014). Using Phonemes to Construct Utterances for Aided Communication. ISAAC 2014. doi:10.13140/2.1.3524.4162 http://openconf.faiddsolutions.com/modules/request.php?module=oc_program&action=summary.php&id=142
Trinh, H. (2011). Using a Computer Intervention to Support Phonological Awareness Development of Adults with Severe Speech and Physical Impairments. The 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, UK. Accessed 5th September 2014: http://src.acm.org/2012/HaTrinh.pdf
Trinh, H. (2012). iSCAN: A Phoneme-based Predictive Communication Aid for Nonspeaking Individuals. Proceedings of ASSETS ’12, the 14th International ACM SIGACCESS Conference on Computers and Accessibility. Accessed 5th September 2014: http://keithv.com/pub/iscan/iSCAN_Final.pdf
Vandervelden, M., & Siegel, L. (1999). Phonological Processing and Literacy in AAC Users and Students with Motor Speech Impairments. Augmentative and Alternative Communication, 15, 191–211. Accessed 5th September 2014: http://informahealthcare.com/doi/abs/10.1080/07434619912331278725