Welcome

I am a Professor in Computer Science at the University of Sheffield, and a member of the Speech and Hearing Research Group.

My research interests include speech recognition by humans and machines, hearing-aid signal processing for speech and music, audio-visual speech processing, machine listening and the application of machine learning to audio processing.

(Follow me on ResearchGate)

New

  • We have a new project, Cadenza, which will develop machine learning challenges for improving music listening for hearing-impaired listeners. Find out more.
  • We've launched the 2nd Clarity Enhancement Challenge for hearing-aid signal processing. To participate, visit the challenge website.
  • Registration for the Clarity-2022 Workshop on Machine Learning Challenges for Hearing Aids is now open.
  • Clarity EPSRC project: machine learning competitions for hearing aid development. Find out more and join our mailing list.

Research

Recent Publications

  1. Barker, J., Akeroyd, M., Cox, T. J., Culling, J. F., Firth, J., Graetzer, S., … Munoz, R. V. (2022). The 1st Clarity Prediction Challenge: A machine learning challenge for hearing aid intelligibility prediction. In Proceedings of the 23rd Annual Conference of the International Speech Communication Association (INTERSPEECH 2022). Incheon, Korea.
  2. Graetzer, S., Akeroyd, M. A., Barker, J., Cox, T. J., Culling, J. F., Naylor, G., … Viveros-Muñoz, R. (2022). Dataset of British English speech recordings for psychoacoustics and speech processing research: The clarity speech corpus. Data in Brief, 41, 107951. doi:10.1016/j.dib.2022.107951
  3. Deadman, J., & Barker, J. (2022). Improved Simulation of Realistically-Spatialised Simultaneous Speech Using Multi-Camera Analysis in The Chime-5 Dataset. In Proceedings of the 47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2022). Singapore: IEEE. 10.1109/ICASSP43922.2022.9746351
  4. Deadman, J., & Barker, J. (2022). Modelling Turn-taking in Multispeaker Parties for Realistic Data Simulation. In Proceedings of the 23nd Annual Conference of the International Speech Communication Association (INTERSPEECH 2022). Incheon and Korea.
  5. Zhang, J., Zorila, C., Doddipatla, R., & Barker, J. (2022). On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training. In Proceedings of the 23rd Annual Conference of the International Speech Communication Association (INTERSPEECH 2022). Incheon, Korea.