
The research project focuses on how sound data can be converted into information that humans and machines can understand and act upon. It started on 14 March 2016 and will run until 13 March 2019. The project is funded by the Engineering and Physical Sciences Research Council (EPSRC) with a funding value of £1,275,401. It is a joint project between the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey and the Acoustics Research Centre at the University of Salford.

A project overview can be found here.

In the media: MIT News - 'Computer learns to recognize sounds by watching video'

Mark Plumbley appeared in an article about machine learning by Larry Hardesty for MIT News.


In the media: BBC Radio 3 - 'The Verb'

Trevor Cox appeared on 'The Verb' on BBC Radio 3.


New publication: 'A Joint Detection-Classification Model for Audio Tagging of Weakly Labelled Data'

Kong, Qiuqiang, Yong Xu, Wenwu Wang, and Mark Plumbley (2016). "A Joint Detection-Classification Model for Audio Tagging of Weakly Labelled Data." arXiv preprint arXiv:1610.01797.


New publication: 'Deep neural network baseline for DCASE challenge 2016'

Kong, Qiuqiang, Iwona Sobieraj, Wenwu Wang, and Mark Plumbley (2016). "Deep neural network baseline for DCASE challenge 2016." DCASE 2016 Workshop, Budapest.


In the media: New Scientist - 'Binge-watching videos teaches computers to recognise sounds'

Mark Plumbley was interviewed by Aviva Hope Rutkin for New Scientist.
