Publications

Journal Papers

  1. Alghamdi, N., Maddock, S., Barker, J., & Brown, G. J. (2017). The impact of automatic exaggeration of the visual articulatory features of a talker on the intelligibility of spectrally distorted speech. Speech Communication. 10.1016/j.specom.2017.08.010
  2. Gonzalez, J. A., Gómez, A. M., Peinado, A. M., Ma, N., & Barker, J. (2017). Spectral Reconstruction and Noise Model Estimation Based on a Masking Model for Noise Robust Speech Recognition. Circuits, Systems, and Signal Processing, 36(9), 3731–3760. 10.1007/s00034-016-0480-7
  3. Barker, J., Marxer, R., Vincent, E., & Watanabe, S. (2017). The third ‘CHiME’ speech separation and recognition challenge: Analysis and outcomes. Computer Speech and Language, 46, 605–626. 10.1016/j.csl.2016.10.005
  4. Vincent, E., Watanabe, S., Nugraha, A. A., Barker, J., & Marxer, R. (2017). An analysis of environment, microphone and data simulation mismatches in robust speech recognition. Computer Speech and Language, 46, 535–557. 10.1016/j.csl.2016.11.005
  5. Marxer, R., Barker, J., Cooke, M., & Garcia Lecumberri, M. L. (2016). A corpus of noise-induced word misperceptions for English. Journal of the Acoustical Society of America, 140(5), EL458–EL463. Retrieved from http://eprints.whiterose.ac.uk/108991/ [PDF]
  6. Gonzalez, J., Peinado, A., Ma, N., Gomez, A., & Barker, J. (2013). MMSE-based missing-feature reconstruction with temporal modeling for robust speech recognition. IEEE Transactions on Audio, Speech and Language Processing, 21(3), 624–635. 10.1109/TASL.2012.2229982 [PDF]
  7. Carmona, J. L., Barker, J., Gomez, A. M., & Ma, N. (2013). Speech spectral envelope enhancement by HMM-based analysis/resynthesis. IEEE Signal Processing Letters, 20(6), 563–566. 10.1109/LSP.2013.2255125
  8. Ma, N., Barker, J., Christensen, H., & Green, P. (2013). A hearing-inspired approach for distant-microphone speech recognition in the presence of multiple sources. Computer Speech and Language, 27(3), 820–836. 10.1016/j.csl.2012.09.001 [PDF]
  9. Barker, J., Vincent, E., Ma, N., Christensen, H., & Green, P. (2013). The PASCAL CHiME Speech Separation and Recognition Challenge. Computer Speech and Language, 27(3), 621–633. 10.1016/j.csl.2012.10.004 [PDF]
  10. Ma, N., Barker, J., Christensen, H., & Green, P. (2012). Combining speech fragment decoding and adaptive noise floor modelling. IEEE Transactions on Audio, Speech and Language Processing, 20(3), 818–827.
  11. Barker, J., Ma, N., Coy, A., & Cooke, M. (2010). Speech fragment decoding techniques for simultaneous speaker identification and speech recognition. Computer Speech and Language, 24(1), 94–111. 10.1016/j.csl.2008.05.003 [PDF]
  12. Barker, J., & Shao, X. (2009). Energetic and informational masking effects in an audio-visual speech recognition system. IEEE Transactions on Audio, Speech and Language Processing, 17(3), 446–458. 10.1109/TASL.2008.2011534 [PDF]
  13. Shao, X., & Barker, J. P. (2008). Stream weight estimation for multistream audio-visual speech recognition in a multispeaker environment. Speech Communication, 50(4), 337–353. 10.1016/j.specom.2007.11.002 [PDF]
  14. Christensen, H., Ma, N., Wrigley, S. N., & Barker, J. (2008). Improving source localisation in multi-source, reverberant conditions: exploiting local spectro-temporal location cues. Journal of the Acoustical Society of America, 123(5), 3294. 10.1121/1.2933688
  15. Cooke, M., Garcia Lecumberri, M. L., & Barker, J. P. (2008). The foreign language cocktail party problem: Energetic and informational masking effects in non-native speech perception. Journal of the Acoustical Society of America, 123(1), 414–427. 10.1121/1.2804952 [PDF]
  16. Coy, A., & Barker, J. (2007). An automatic speech recognition system based on the scene analysis account of auditory perception. Speech Communication, 49(7), 384–401. 10.1016/j.specom.2006.11.002 [PDF]
  17. Ma, N., Green, P., Barker, J., & Coy, A. (2007). Exploiting correlogram structure for robust speech recognition with multiple speech sources. Speech Communication, 49(12), 874–891. 10.1016/j.specom.2007.05.003 [PDF]
  18. Barker, J., & Cooke, M. (2007). Modelling speaker intelligibility in noise. Speech Communication, 49(5), 402–417. 10.1016/j.specom.2006.11.003 [PDF]
  19. Cooke, M., Barker, J., Cunningham, S., & Shao, X. (2006). An audio-visual corpus for speech perception and automatic speech recognition. Journal of the Acoustical Society of America, 120(5), 2421–2424. 10.1121/1.2229005 [PDF]
  20. Harding, S., Barker, J., & Brown, G. J. (2006). Mask estimation for missing data speech recognition based on statistics of binaural interaction. IEEE Transactions on Audio, Speech and Language Processing, 14(1), 58–67. 10.1109/TSA.2005.860354 [PDF]
  21. Barker, J., Cooke, M. P., & Ellis, D. P. W. (2005). Decoding speech in the presence of other sources. Speech Communication, 45(1), 5–25. 10.1016/j.specom.2004.05.002 [PDF]
  22. Palomäki, K. J., Brown, G. J., & Barker, J. (2004). Techniques for handling convolutional distortion with ‘missing data’ automatic speech recognition. Speech Communication, 43(1–2), 123–142. 10.1016/j.specom.2004.02.005 [PDF]
  23. Barker, J. P., & Cooke, M. P. (1999). Is the sine-wave speech cocktail party worth attending? Speech Communication, 27(3–4), 159–174. 10.1016/S0167-6393(98)00081-8 [PDF]
  24. Barker, J. P., & Cooke, M. P. (1996). Modeling the recognition of sine-wave sentences. Journal of the Acoustical Society of America, 100(4), 2682. 10.1121/1.416995

Book Chapters

  1. Barker, J., Marxer, R., Vincent, E., & Watanabe, S. (2017). The CHiME challenges: Robust speech recognition in everyday environments. In S. Watanabe, M. Delcroix, F. Metze, & J. R. Hershey (Eds.), New era for robust speech recognition – Exploiting deep learning.
  2. Mandel, M. I., & Barker, J. P. (2017). Multichannel spatial clustering using model-based source separation. In S. Watanabe, M. Delcroix, F. Metze, & J. R. Hershey (Eds.), New era for robust speech recognition – Exploiting deep learning.
  3. Barker, J. P. (2013). Missing Data Techniques: Recognition with Incomplete Spectrograms. In T. Virtanen, R. Singh, & B. Raj (Eds.), Techniques for Noise Robustness in Automatic Speech Recognition (pp. 371–398). Wiley. 10.1002/9781118392683.ch14
  4. Cooke, M. P., Barker, J. P., & Garcia Lecumberri, M. L. (2013). Crowdsourcing in Speech Perception. In M. Eskenazi, G.-A. Levow, H. Meng, G. Parent, & D. Sundermann (Eds.), Crowdsourcing for Speech Processing (pp. 137–169). John Wiley and Sons.
  5. Barker, J. P. (2006). Robust automatic speech recognition. In D.-L. Wang & G. J. Brown (Eds.), Computational Auditory Scene Analysis: Principles, Algorithms and Applications (pp. 297–350). Wiley/IEEE Press.

Conference Papers

  1. Loweimi, E., Barker, J., & Hain, T. (2017). Statistical normalisation of phase-based feature representation for robust speech recognition. In Proceedings of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2017). New Orleans, USA: IEEE.
  2. Hussain, A., Barker, J., Marxer, R., Adeel, A., Whitmer, W., Watt, R., & Derleth, P. (2017). Towards Multi-modal Hearing Aid Design and Evaluation in Realistic Audio-Visual Settings: Challenges and Opportunities. In Proceedings of the 1st ISCA International Workshop on Challenges in Hearing Assistive Technology (CHAT-2017) (pp. 29–34). Stockholm, Sweden.
  3. Loweimi, E., Barker, J., Saz Torralba, O., & Hain, T. (2017). Robust Source-Filter Separation of Speech Signal in the Phase Domain. In Proceedings of the 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017). Stockholm, Sweden.
  4. Loweimi, E., Barker, J., & Hain, T. (2017). Channel Compensation in the Generalised Vector Taylor Series Approach to Robust ASR. In Proceedings of the 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017). Stockholm, Sweden.
  5. Marxer, R., & Barker, J. (2017). Binary Mask Estimation Strategies for Constrained Imputation-Based Speech Enhancement. In Proceedings of the 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017). Stockholm, Sweden.
  6. Mandel, M. I., & Barker, J. (2016). Multichannel Spatial Clustering for Robust Far-Field Automatic Speech Recognition in Mismatched Conditions. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016). San Francisco, CA.
  7. Garcia Lecumberri, M. L., Barker, J., Marxer, R., & Cooke, M. (2016). Language effects in noise-induced word misperceptions. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016). San Francisco, CA.
  8. Loweimi, E., Barker, J., & Hain, T. (2016). Use of Generalised Nonlinearity in Vector Taylor Series Noise Compensation for Robust Speech Recognition. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016). San Francisco, CA.
  9. Abel, A., Marxer, R., Barker, J., Watt, R., Whitmer, B., Derleth, P., & Hussain, A. (2016). A data driven approach to audiovisual speech mapping. In Proceedings of Advances in Brain Inspired Cognitive Systems: 8th International Conference, BICS 2016 (pp. 331–342). Beijing, China.
  10. Tóth, A. M., Cooke, M., & Barker, J. (2016). Misperceptions arising from speech-in-babble interactions. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016). San Francisco, CA.
  11. Marxer, R., Cooke, M., & Barker, J. (2015). A Framework for the Evaluation of Microscopic Intelligibility Models. In Proceedings of the 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015). Dresden, Germany.
  12. Loweimi, E., Barker, J., Doulaty, M., & Hain, T. (2015). Long-term Statistical Feature Extraction from Speech Signals and its Application in Emotion Recognition. In Proc. 3rd International Conference on Statistical Language and Speech Processing (SLSP). Budapest, Hungary.
  13. Barker, J., Marxer, R., Vincent, E., & Watanabe, S. (2015). The third CHiME speech separation and recognition challenge: dataset, task and baselines. In Proc. 2015 IEEE Automatic Speech Recognition and Understanding (ASRU). Scottsdale, AZ.
  14. Ma, N., Marxer, R., Barker, J., & Brown, G. J. (2015). Exploiting synchrony spectra and deep neural networks for noise-robust automatic speech recognition. In Proc. 2015 IEEE Automatic Speech Recognition and Understanding (ASRU). Scottsdale, AZ.
  15. Lin, L., Barker, J., & Brown, G. J. (2015). The Effect of Cochlear Implant Processing on Speaker Intelligibility: A Perceptual Study and Computer Model. In Proceedings of the 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015). Dresden, Germany.
  16. Loweimi, E., Barker, J., & Hain, T. (2015). Source-filter Separation of Speech Signal in the Phase Domain. In Proceedings of the 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015). Dresden, Germany.
  17. Dabel, M. A., & Barker, J. (2015). On the Role of Discriminative Intelligibility Models for Speech Intelligibility Enhancement. In Proc. XVIII International Congress of Phonetic Sciences (ICPhS) 2015. Glasgow, UK.
  18. Alghamdi, N., Maddock, S., Brown, G. J., & Barker, J. (2015). A Comparison of Audiovisual and Auditory-only Training on the Perception of Spectrally-distorted Speech. In Proc. XVIII International Congress of Phonetic Sciences (ICPhS) 2015. Glasgow, UK.
  19. Foster, P., Sigtia, S., Krstulovic, S., Barker, J., & Plumbley, M. D. (2015). CHiME-Home: A Dataset for Sound Source Recognition in a Domestic Environment. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). New Paltz, NY.
  20. Alghamdi, N., Maddock, S., Brown, G. J., & Barker, J. (2015). Investigating the Impact of Artificial Enhancement of Lip Visibility on the Intelligibility of Spectrally-Distorted Speech. In Proc. 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing (FAAVSP 2015). Vienna, Austria.
  21. Dabel, M. A., & Barker, J. (2014). Speech pre-enhancement using a discriminative microscopic intelligibility model. In Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014). Singapore.
  22. Ma, N., & Barker, J. (2013). A fragment-decoding plus missing-data imputation system evaluated on the 2nd CHiME challenge. In Proceedings of the 2nd CHiME Workshop on Machine Listening in Multisource Environments (pp. 53–58). Vancouver, Canada. [PDF]
  23. Vincent, E., Barker, J., Watanabe, S., Le Roux, J., Nesta, F., & Matassoni, M. (2013). The second ‘CHiME’ Speech Separation and Recognition Challenge: Datasets, tasks and baselines. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vancouver, Canada: IEEE.
  24. Ma, N., & Barker, J. (2012). Coupling identification and reconstruction of missing features for noise-robust automatic speech recognition. In Proceedings of the 13th Annual Conference of the International Speech Communication Association (Interspeech 2012). Portland, Oregon.
  25. González, J. A., Peinado, A. M., Gómez, A. M., Ma, N., & Barker, J. (2012). Combining missing-data reconstruction and uncertainty decoding for robust speech recognition. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4693–4696). Kyoto, Japan: IEEE. 10.1109/ICASSP.2012.6288966 [PDF]
  26. Ma, N., Barker, J., Christensen, H., & Green, P. (2011). Recent advances in fragment-based speech recognition in reverberant multisource environments. In Proceedings of the 1st CHiME Workshop on Machine Listening in Multisource Environments (pp. 68–73). Florence, Italy.
  27. Cooke, M., Barker, J., Garcia Lecumberri, M. L., & Wasilewski, K. (2011). Crowdsourcing for word recognition in noise. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011). Florence, Italy.
  28. Ma, N., Barker, J., Christensen, H., & Green, P. (2011). Binaural cues for fragment-based speech recognition in reverberant multisource environments. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011) (pp. 1657–1660). Florence, Italy. [PDF]
  29. Ma, N., Barker, J., Christensen, H., & Green, P. (2011). Incorporating localisation cues in a fragment decoding framework for distant binaural speech recognition. In IEEE Joint Workshop on Hands-Free Speech Communication and Microphone Arrays (HSCMA’11) (pp. 207–212). Edinburgh, United Kingdom. 10.1109/HSCMA.2011.5942400
  30. Morales-Cordovilla, J. A., Ma, N., Sánchez, V., Carmona, J. L., Peinado, A. M., & Barker, J. (2011). A pitch based noise estimation technique for robust speech recognition with missing data. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4808–4811). Prague, Czech Republic: IEEE. 10.1109/ICASSP.2011.5947431 [PDF]
  31. Kabir, A., Giurgiu, M., & Barker, J. (2010). Robust automatic transcription of English speech corpora. In Proceedings of the 8th International Conference on Communications (COMM) (pp. 79–82). Bucharest, Romania. 10.1109/ICCOMM.2010.5509116 [PDF]
  32. Ma, N., Barker, J., Christensen, H., & Green, P. (2010). Distant microphone speech recognition in a noisy indoor environment: combining soft missing data and speech fragment decoding. In Proceedings of the ISCA Tutorial and Research Workshop on Statistical And Perceptual Audition. Makuhari, Japan.
  33. Christensen, H., & Barker, J. (2010). Speaker turn tracking with mobile microphones: combining location and pitch information. In Proceedings of the 18th European Signal Processing Conference (EUSIPCO-2010). Aalborg, Denmark.
  34. Christensen, H., Barker, J., Ma, N., & Green, P. (2010). The CHiME corpus: a resource and a challenge for Computational Hearing in Multisource Environments. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010). Makuhari, Japan.
  35. Kabir, A., Barker, J., & Giurgiu, M. (2010). Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription. In Proceedings of SPIE 7745, Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments (Vol. 774513). Wilga, Poland. 10.1117/12.872211
  36. Christensen, H., & Barker, J. (2009). Using location cues to track speaker changes from mobile, binaural microphones. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009). Brighton, UK.
  37. Christensen, H., Ma, N., Wrigley, S. N., & Barker, J. (2009). A speech fragment approach to localising multiple speakers in reverberant environments. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4593–4596). Taipei, Taiwan: IEEE. 10.1109/ICASSP.2009.4960653 [PDF]
  38. Arnaud, E., Christensen, H., Lu, Y.-C., Barker, J., Khalidov, V., Hansard, M., … Horaud, R. (2008). The CAVA Corpus: Synchronised Stereoscopic and Binaural Datasets with Head Movements. In ICMI ’08: Proceedings of the 10th International Conference on Multimodal Interfaces (pp. 109–116). Crete, Greece. 10.1145/1452392.1452414 [PDF]
  39. Ma, N., Barker, J., & Green, P. (2007). Applying duration constraints by using unrolled HMMs. In Proceedings of the 8th Annual Conference of the International Speech Communication Association (Interspeech 2007). Antwerp, Belgium.
  40. Christensen, H., Ma, N., Wrigley, S., & Barker, J. (2007). Integrating pitch and localisation cues at a speech fragment level. In Proceedings of the 8th Annual Conference of the International Speech Communication Association (Interspeech 2007). Antwerp, Belgium.
  41. Shao, X., & Barker, J. (2007). Audio-visual speech fragment decoding. In Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP 2007). Hilvarenbeek, The Netherlands. [PDF]
  42. Barker, J., Coy, A., Ma, N., & Cooke, M. (2006). Recent advances in speech fragment decoding techniques. In Proceedings of the 9th International Conference on Spoken Language Processing (Interspeech 2006) (pp. 85–88). Pittsburgh, PA.
  43. Brown, G. J., Harding, S., & Barker, J. (2006). Speech separation based on the statistics of binaural auditory features. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toulouse, France: IEEE. 10.1109/ICASSP.2006.1661434 [PDF]
  44. Coy, A., & Barker, J. (2006). A Multipitch Tracker for Monaural Speech Segmentation. In Proceedings of the 9th International Conference on Spoken Language Processing (Interspeech 2006) (pp. 1678–1681). Pittsburgh, PA.
  45. Palomäki, K. J., Brown, G. J., & Barker, J. (2006). Recognition of reverberant speech using full cepstral features and spectral missing data. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toulouse, France: IEEE. 10.1109/ICASSP.2006.1660014 [PDF]
  46. Shao, X., & Barker, J. (2006). Audio-visual speech recognition in the presence of a competing speaker. In Proceedings of the 9th International Conference on Spoken Language Processing (Interspeech 2006) (pp. 1292–1295). Pittsburgh, PA.
  47. Harding, S., Barker, J., & Brown, G. (2005). Mask Estimation Based on Sound Localisation for Missing Data Speech Recognition. In Proceedings of the 2005 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 537–540). Philadelphia, PA: IEEE. 10.1109/ICASSP.2005.1415169 [PDF]
  48. Coy, A., & Barker, J. (2005). Soft Harmonic Masks for Recognising Speech in the Presence of a Competing Speaker. In Proceedings of the 9th European Conference on Speech Communication and Technology (Interspeech 2005) (pp. 2641–2644). Lisbon, Portugal.
  49. Barker, J., & Coy, A. (2005). Towards Solving the Cocktail Party Problem through Primitive Grouping and Model Combination. In Proceedings of Forum Acusticum. Budapest, Hungary.
  50. Harding, S., Barker, J., & Brown, G. (2005). Binaural Feature Selection for Missing Data Speech Recognition. In Proceedings of the 9th European Conference on Speech Communication and Technology (Interspeech 2005) (pp. 1269–1272). Lisbon, Portugal.
  51. Barker, J. (2005). Tracking Facial Markers with an Adaptive Marker Collocation Model. In Proceedings of the 2005 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 665–668). Philadelphia, PA: IEEE. 10.1109/ICASSP.2005.1415492 [PDF]
  52. Coy, A., & Barker, J. (2005). Recognising Speech in the Presence of a Competing Speaker using a ‘Speech Fragment Decoder’. In Proceedings of the 2005 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 425–428). Philadelphia, PA: IEEE. 10.1109/ICASSP.2005.1415141 [PDF]
  53. Brown, G. J., Palomäki, K., & Barker, J. (2004). A Missing Data Approach for Robust Automatic Speech Recognition in the Presence of Reverberation. In Proceedings of the 18th International Congress on Acoustics (ICA) (pp. 449–452). Kyoto, Japan.
  54. Palomäki, K. J., Brown, G. J., & Barker, J. (2002). Missing data speech recognition in reverberant conditions. In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. I-65–I-68). Orlando, FL: IEEE. 10.1109/ICASSP.2002.5743655 [PDF]
  55. Barker, J., Cooke, M., & Ellis, D. (2002). Temporal integration as a consequence of multi-source decoding. In Proceedings of the ISCA Workshop on Temporal Integration in the Perception of Speech (TIPS). Aix-en-Provence, France.
  56. Brown, G. J., Wang, D. L., & Barker, J. (2001). A neural oscillator sound separator for missing data speech recognition. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 2001) (Vol. 4, pp. 2907–2912). Washington, DC. 10.1109/IJCNN.2001.938839 [PDF]
  57. Green, P., Barker, J., Cooke, M. P., & Josifovski, L. (2001). Handling Missing and Unreliable Information in Speech Recognition. In Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics (AISTATS-2001). Key West, FL. [PDF]
  58. Barker, J., Green, P., & Cooke, M. P. (2001). Linking Auditory Scene Analysis and Robust ASR by Missing Data Techniques. In Proceedings of the Workshop on Innovation in Speech Processing (WISP 2001). Stratford-upon-Avon, UK. [PDF]
  59. Barker, J., Cooke, M., & Ellis, D. (2001). Combining bottom-up and top-down constraints for robust ASR: The multisource decoder. In Proceedings of the Workshop on Consistent and Reliable Acoustic Cues for Sound Analysis (CRAC-01). Aalborg, Denmark. [PDF]
  60. Morris, A. C., Barker, J., & Bourlard, H. (2001). From Missing Data to Maybe Useful Data: Soft Data Modelling for Noise Robust ASR. In Proceedings of the Workshop on Innovation in Speech Processing (WISP 2001). Stratford-upon-Avon, UK. [PDF]
  61. Barker, J., Cooke, M., & Green, P. (2001). Robust ASR based on clean speech models: An evaluation of missing data techniques for connected digit recognition in noise. In Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech 2001) (pp. 213–216). Aalborg, Denmark. [PDF]
  62. Barker, J., Josifovski, L., Cooke, M. P., & Green, P. D. (2000). Soft decisions in missing data techniques for robust automatic speech recognition. In Proceedings of the 6th International Conference on Spoken Language Processing (Interspeech 2000). Beijing, China. [PDF]
  63. Barker, J., Cooke, M. P., & Ellis, D. P. W. (2000). Decoding speech in the presence of other sound sources. In Proceedings of the 6th International Conference on Spoken Language Processing (Interspeech 2000). Beijing, China. [PDF]
  64. Barker, J. P., & Berthommier, F. (1999). Evidence of correlation between acoustic and visual features of speech. In Proc. ICPhS ’99. San Francisco. [PDF]
  65. Barker, J. P., & Berthommier, F. (1999). Estimation of speech acoustics from visual speech features: A comparison of linear and non-linear models. In Proceedings of the ISCA Workshop on Auditory-Visual Speech Processing (AVSP) ’99. University of California, Santa Cruz. [PDF]
  66. Barker, J. P., Williams, G., & Renals, S. (1998). Acoustic confidence measures for segmenting broadcast news. In Proc. ICSLP ’98. Sydney, Australia. [PDF]
  67. Barker, J. P., Berthommier, F., & Schwartz, J. L. (1998). Is primitive AV coherence an aid to segment the scene? In Proceedings of the ISCA Workshop on Auditory-Visual Speech Processing (AVSP) ’98. Sydney, Australia. [PDF]
  68. Barker, J. P., & Cooke, M. P. (1997). Is the sine-wave cocktail party worth attending? In Proceedings of the 2nd Workshop on Computational Auditory Scene Analysis. Nagoya, Japan: Int. Joint Conf. Artificial Intelligence. [PDF]
  69. Barker, J. P., & Cooke, M. P. (1997). Modelling the recognition of spectrally reduced speech. In Proceedings of Eurospeech ’97 (pp. 2127–2130). Rhodes, Greece. [PDF]

Abstracts

  1. Ma, N., Brown, G., Barker, J., & Stone, M. (2017, August). Exploiting deep learning to inform spectral contrast enhancement for hearing-impaired listeners. In Proceedings of the 1st ISCA International Workshop on Challenges in Hearing Assistive Technology (CHAT-2017). Stockholm, Sweden.
  2. Cooke, M., Marxer, R., Garcia Lecumberri, M. L., & Barker, J. (2017, January). Lexical frequency effects in noise-induced robust misperceptions. In Proceedings of the 9th Speech in Noise Workshop (SpIN 2017). Oldenburg, Germany.
  3. Christensen, H., Barker, J., Lu, Y.-C., Xavier, J., Caseiro, R., & Araújo, H. (2009). POPeye: Real-time, binaural sound source localisation on an audio-visual robot-head. In Proceedings of the Conference on Natural Computing and Intelligent Robotics and NCAF.
  4. Christensen, H., & Barker, J. (2009). Simultaneous Tracking of Perceiver Movements and Speaker Changes Using Head-Centered, Binaural Data. In Proceedings of the Conference on Natural Computing and Intelligent Robotics and NCAF.

PhD Thesis

  1. Barker, J. P. (1998). The relationship between auditory organisation and speech perception: Studies with spectrally reduced speech (PhD thesis). University of Sheffield, U.K.