Amy Beeston
Principal projects

My research in human and machine listening is highly interdisciplinary, combining acoustics, psychophysics and computer science to better understand sound in varied domains.

Hearing aids for music

PI: Alinka Greasley, School of Music, University of Leeds
Dates: Jun 2017 – Jan 2018

This project is funded by the Arts and Humanities Research Council.

Research in music psychology has revealed a great deal about how people engage with music, covering not only performance by trained musicians but also everyday listening. The focus, however, has primarily been on people with 'normal' hearing. Very little is known about how deafness or hearing impairments affect music listening experiences, especially for hearing aid users. This project represents the first large-scale, systematic investigation of how music listening is affected by hearing aid technology.

Machine listening in artistic contexts

PI: Amy Beeston, Department of Computer Science, University of Sheffield
Dates: May 2017

This short, one-month project is funded by a Socially Enterprising Researcher award from the University of Sheffield's Engineering Researcher Society.

Much research effort worldwide is devoted to speech, education and medical applications of machine listening. On a personal level, however, I was motivated to join this effort by difficulties encountered while creating sonic controllers for use in live performance and sound installations. My work since then has involved developing machine listening techniques, particularly those inspired by human audition. In this project I aim to open new lines of communication and enquiry, and to survey the current state of machine listening in artistic contexts.

Acoustic detection and diagnosis of snoring

Partners: Passion For Life Healthcare and University of Sheffield
Dates: Apr 2015 – Apr 2017

This project is funded by means of a Knowledge Transfer Partnership grant from Innovate UK (formerly the Technology Strategy Board).

Products treating snoring and obstructive sleep apnoea (OSA) have historically been evaluated by subjective means, such as questionnaires given to the sufferer and bed partner. To provide an objective perspective, this project aims to develop software for the acoustic detection and diagnosis of snoring and OSA. The task is particularly challenging because a 'real world' solution must be robust to the background noise encountered in the home. A further challenge is to understand human auditory perception of snore sounds, particularly the nuisance they cause to others.
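
As a rough illustration of the noise-robustness problem, the sketch below flags candidate snore frames by comparing frame energy against an adaptive noise-floor estimate. The function name, frame length, percentile and margin are invented for illustration, and the feature set is far simpler than anything a deployable detector would use.

    import numpy as np

    def candidate_snore_frames(signal, sr, frame_len=0.05, margin_db=10.0):
        """Flag frames whose energy rises well above an estimated noise floor.

        A toy sketch only: a real detector would add spectral and temporal
        features, a trained classifier, and clinical validation.
        """
        n = int(sr * frame_len)
        frames = signal[:len(signal) // n * n].reshape(-1, n)
        energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        noise_floor = np.percentile(energy_db, 20)  # crude background estimate
        return energy_db > noise_floor + margin_db  # True = candidate snore frame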

Meeting the challenge of simultaneous talk for cochlear implant users

PI: Bill Wells, Department of Human Communication Sciences, University of Sheffield
Dates: Mar 2014 – Mar 2015

This project is funded by the Arts and Humanities Research Council under its Follow-on Fund for Impact and Engagement scheme, which supports innovative and creative engagements with new audiences and user communities.

Simultaneous or overlapping talk is known to be a particular problem for individuals with a hearing loss, even when they use a conventional hearing aid or cochlear implant. Until recently, many users needed optimum conditions to hold a satisfactory conversation even in one-to-one settings, and professionals steered clear of advising cochlear implant users on how to deal with overlapping talk, on the basis that such situations would simply be too hard to handle. However, recent improvements in the signal processing strategies used in cochlear implants mean that it is now more realistic for users to engage in conversations where overlapping talk occurs. We therefore aim to work with a group of adult cochlear implant users to develop useful training materials for handling overlapping talk in conversation.

An interactive demonstration of expressive timing in music

PI: Renee Timmers, Department of Music, University of Sheffield
Dates: Feb 2014 – Jun 2014

This project is funded by a Sheffield University 'wider participation' grant, which supports the development of an interactive music performance analysis demonstration that allows participants to explore performance expression in two domains: jazzy swing and romantic rubato.

With years of experience and practice, musicians gain implicit knowledge about where to speed up or slow down in music, when to lengthen or shorten notes, and where to place accents for special effect. As listeners, we are accustomed to these micro-scale variations in music, and are not necessarily aware of the extent to which performers vary their timing and tempo for expressive effect. Although the exact characteristics of a performance depend on a musician's individual interpretation of the music, there are certain regularities or rules that are commonly observed across different performances within a particular genre. The rise of digital tools in music analysis has allowed a (semi-)automatic investigation of such performance characteristics, and consequently has deepened our understanding of performance processes.
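
As a toy example of the kind of (semi-)automatic performance analysis described above, the sketch below derives local tempo and note-lengthening ratios from performed onset times. The onset times and nominal score durations are invented for illustration; in practice they would come from onset detection and score alignment.

    import numpy as np

    # Invented onset times (seconds) of five performed notes, and the nominal
    # duration (in beats) the score assigns to each of the four intervals.
    onsets = np.array([0.00, 0.52, 0.98, 1.55, 2.01])
    score_beats = np.array([0.5, 0.5, 0.5, 0.5])

    ioi = np.diff(onsets)                            # performed inter-onset intervals (s)
    local_tempo = 60 * score_beats / ioi             # local tempo in beats per minute
    global_rate = np.median(ioi / score_beats)       # typical seconds per beat
    lengthening = ioi / (score_beats * global_rate)  # >1 = expressively lengthened

    print(local_tempo.round(1), lengthening.round(2))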

Speech technology for language learning

PI: Thomas Hain, Department of Computer Science, University of Sheffield
Dates: Nov 2012 – Feb 2014

In collaboration with our industrial partners, we develop computer-assisted pronunciation training tools for Dutch learners of English.

In all areas of life there is a growing demand for people who speak languages well. Yet pronunciation and language training require a lot of practice, with exercises and feedback tailored to the specific needs of each individual learner. Most traditional language classes cannot deliver that level of support, and as a result, computer-assisted language learning has become an area of great interest to the general public. Our project concentrates on three themes: visualisation of pronunciation (and errors); utterance verification (speech recognition); and pronunciation quality assessment.
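
One common baseline for the quality-assessment theme, sketched below as an illustration rather than this project's actual system, is a goodness-of-pronunciation (GOP) style score that compares the recogniser's posterior for the intended phone against its strongest competitor. The function name, array shapes and inputs here are assumptions.

    import numpy as np

    def gop_scores(phone_log_posteriors, intended):
        """Goodness-of-pronunciation-style score per phone segment.

        phone_log_posteriors: (num_segments, num_phones) log-posteriors,
        e.g. averaged over the frames aligned to each segment (assumed input).
        intended: index of the phone the learner was supposed to produce.
        Returns 0 for a confident match; increasingly negative values mark
        segments worth flagging for pronunciation feedback.
        """
        rows = np.arange(len(intended))
        return phone_log_posteriors[rows, intended] - phone_log_posteriors.max(axis=1)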

Compensation for reverberation

PI: Guy Brown, Department of Computer Science, University of Sheffield
Dates: Oct 2008 – Jan 2015, part-time

My PhD research was funded within an EPSRC project on perceptual constancy in audition.

Reverberation poses a problem for artificial listening devices: automatic speech recognition suffers increased error rates when reflected sound energy is present. My approach to this topic lies in the development of computational models based on psychoacoustic principles of hearing. Like people, these models use auditory feedback signals to glean environmental cues from contextual sound, and use these cues to inform their recognition of spoken words in reverberant rooms. Unlike existing dereverberation methods, the proposed strategy addresses the rapid changes in acoustic conditions that occur in everyday listening.
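
To give a flavour of how context can supply such a cue, the sketch below fits a decay slope to the tail of a context sound's energy envelope; a slowly decaying tail hints at a more reverberant room. This is a toy stand-in under assumed inputs, not the model developed in the thesis.

    import numpy as np

    def context_decay_rate(envelope, sr, tail_fraction=0.2):
        """Crude reverberation cue: energy decay rate (dB/s) over the final
        portion of a smoothed energy envelope from preceding context sound.

        A less negative slope (slower decay) suggests a more reverberant
        room, which a recogniser could use to adjust its processing.
        """
        n_tail = max(2, int(len(envelope) * tail_fraction))
        tail_db = 10 * np.log10(envelope[-n_tail:] + 1e-12)
        t = np.arange(n_tail) / sr
        slope, _ = np.polyfit(t, tail_db, 1)  # returns [slope, intercept]
        return slope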