Welcome to my research pages. I am a PhD student studying in the Centre for Doctoral Training in Speech and Language Technologies at the University of Sheffield. I am working under the supervision of Prof. Thomas Hain and Dr. Stefan Goetze from the Department of Computer Science at the University of Sheffield. Please see the links below to find out more.
My background is in signal processing, statistical modelling and machine learning, with a particular focus on speech and audio applications. I have an MEng in Electronic Engineering with Music Technology Systems from the University of York, as well as an MSc in Mathematical Finance from the University of York. It was at York that I first gained experience with digital signal processing, audio restoration, machine learning and artificial intelligence techniques, which inspired me to become a researcher. My current research is heavily focussed on improving multi-speaker neural network architectures in adverse acoustic scenarios.
I am from the United Kingdom and am currently based in Sheffield. I spent most of my life in the Cotswolds in the South West of England until I moved to York to study for my MEng, and I have stayed up North ever since.
My primary research interests revolve around speech processing in adverse acoustic scenarios. My main current interests are in:
Other affiliations: The Department of Computer Science, Machine Intelligence for Natural Interfaces (MINI)
The working title of my PhD project is "Automatic Meeting Transcription in Highly Reverberant Environments". The project is part-funded by partners at UKRI and 3M Health Information Systems.
W Ravenscroft, S Goetze, T Hain (2023). Deformable Temporal Convolutional Networks for Monaural Noisy Reverberant Speech Separation. Proceedings of the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023).
G Close, W Ravenscroft, T Hain, S Goetze (2023). Perceive and predict: self-supervised speech representation based loss functions for speech enhancement. Proceedings of the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023).
W Ravenscroft, S Goetze, T Hain (2022). Utterance Weighted Multi-Dilation Temporal Convolution Networks for Monaural Speech Dereverberation. Proceedings of the 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022), Bamberg, Germany.
W Ravenscroft, S Goetze, T Hain (2022). Receptive Field Analysis of Temporal Convolutional Networks for Monaural Speech Dereverberation. Proceedings of 30th European Signal Processing Conference (EUSIPCO 2022).
W Ravenscroft, S Goetze and T Hain (2022). Att-TasNet: Attending to Encodings in Time-Domain Audio Speech Separation of Noisy, Reverberant Speech Mixtures. Frontiers in Signal Processing: Signal Processing Theory.
Email is my preferred method of contact, but feel free to connect with me via my other links as well.
Email: jwravenscroft1@sheffield.ac.uk
Twitter: @WillRavenscrof1
GitHub: @jwr1995
LinkedIn: William Ravenscroft
Google Scholar: William Ravenscroft
ResearchGate: William Ravenscroft