Behind the Book | Artificial Intelligence and Assistive Technology

A critical discussion by Dr. Marion Hersh, University of Glasgow

Introduction: Using assistive technology to overcome barriers

I am a disabled and neurodivergent researcher with many years of experience working on assistive technology (AT).  This is technology designed to support disabled and older people, overcoming the barriers they would otherwise experience and expanding the opportunities open to them.  There are many different types of assistive technology.  A few examples include wheelchairs, hearing aids, and screen readers and screen magnifiers, which read out or magnify the text on the screen for people with visual impairments or dyslexia.  There are also low-tech examples such as the long cane used by blind people or tools with ‘easy grip’ handles.

The approach of designing AT to overcome barriers is based on the social model of disability.  This states that disabled people experience attitudinal, environmental and infrastructural barriers, and that it is these barriers that are the problem, not their physical and mental differences (impairments).

Involving the end-user community

The involvement of the intended end-user community in designing AT is crucial to good design which meets the needs of this community.  Individuals understand their own needs, but other people, particularly non-disabled people, generally do not.  However, involvement has its limitations.  It can carry an underlying assumption that disabled people receive and use technologies, but do not develop them.  For instance, I recently presented a paper at an assistive technology conference on disabled researchers in artificial intelligence (AI) (Hersh, 2025).  This was literature-based, but I could not find any literature on disabled AI researchers, though there clearly are disabled AI researchers and I could be considered one of them.  This gap illustrates how disabled people are to some extent seen as passive and unable to develop and research technology.

Risks of AI use

There is increasing interest in the potential of AI in assistive technology.  However, some of this is ‘hype’ and there seems to be limited discussion of the potential risks.  These include privacy and security risks, the (inappropriate) replacement of human assistance by technology, and the possibility of some people, particularly those with dementia or limited experience with technology, confusing AI with reality.  These risks need to be taken seriously when considering the use of AI in AT, but space limitations mean they cannot be discussed in detail here.

Examples of AI use in AT: multiple languages and synthetic voices

A study I was involved in of AT use in 15 different countries (Hersh and Mouroutsou, 2019) found that differences in AT access, both between and within countries, depended on income and language.  You are most likely to find suitable AT if you speak English, followed by the ‘dominant’ European languages.  For speakers of other languages much less is available, and if you speak, for instance, an African language there may be nothing at all.  There is potential to use AI to produce AT in a wide range of languages.  However, AI translation is not totally accurate and could cause problems where safety is paramount or where (small) errors can significantly change the meaning.  I am unfortunately unable to find the details now, but I remember an incident from several years ago in which someone was killed at a level crossing due to local differences in language use: they understood it was safe to cross when in fact a train was approaching.  Language is not just about the choice of words and how they are fitted together; it is also part of a particular culture.  There may therefore be cultural issues affecting language use and translation that AI could have difficulty with.

A related area is the production of synthetic voices with different characteristics and in different languages.  This has applications in screen readers (which read out what is on a computer screen) for blind and low vision people and those with dyslexia.  Synthetic voices can also be used to provide speech output from a variety of different devices.  In both types of application there are issues of what needs to be set up in advance and what can be left to the user as choices from a number of options.  Users have varying levels of IT ability, so they should not be expected to carry out a complicated set-up, particularly in an unfamiliar language, before they can use a device or software.

AI in inclusive education and assistants

There have been various proposals for the use of AI in inclusive education, including to provide a structure to support note taking by disabled students.  However, a number of disabled students are unable to take notes in lectures, for instance Deaf students, who need to look at a sign language interpreter, and students who cannot combine audio and visual processing.  (Deaf with a capital D is used to indicate someone who signs and is part of the Deaf community.)  Could AI be used to produce notes?  And, if so, how would the quality and usefulness of these notes compare with those produced by a person?  Does human involvement add an extra dimension or extra value?  There are also wider issues of the different ways in which people learn and the factors that affect this, and whether AI support can provide the same quality of learning experience as human support.

Another area is the use of chatbots to provide advice and assistance that make it easier for disabled people to use a variety of technical systems, for instance to participate in digital educational games.  Chatbots are AI programs, generally used on the Internet, that engage in text or spoken conversation; they are intended to sound natural and converse in a similar way to a person.  While the focus here is on assistive technology and disabled people, the same assistants could be equally useful to non-disabled people.  However, while I think there is potential, my experience is of the text I want to look at being covered by assistance of no interest to me, or of options popping up which I have not asked for and do not want.  There is therefore a need for users to be able to control this type of assistance.  This should include it being very easy to turn off, and consideration of when it should be opt-in rather than automatically available with the option of opting out.

Final word

This brief discussion has only been able to consider some of the potential applications of AI in AT and some of the potential risks.  However, it has shown both the potential and the risks, and that considerable care needs to be taken to manage these risks when applying AI to AT.

References

Hersh, M., & Mouroutsou, S. (2019). Learning technology and disability—Overcoming barriers to inclusion: Evidence from a multicountry study. British Journal of Educational Technology, 50(6), 3329-3344.

Hersh, M. (2025, September). AI Research: Where are the Disabled Researchers and the Voices of Disabled People. In International Conference of the Association for the Advancement of Assistive Technology in Europe (pp. 209-216). Cham: Springer Nature Switzerland.

Biography

Dr. Marion Hersh is a senior lecturer in Biomedical Engineering at the University of Glasgow. They are both a chartered engineer and a chartered mathematician, with a first degree in mathematics and a PhD in control engineering. They currently carry out interdisciplinary research in assistive technology, disability studies, engineering ethics, and accessible and sustainable design. They have authored or edited five books published by Springer Nature, the most recent being Ethics and Human Behaviour in ICT Development: International Case Studies with a Focus on Poland, 2023.

Please contact marion.hersh@glasgow.ac.uk with any feedback or questions.