Speech separation

Project name

Adaptive Speech Separation Using Neural Networks

Investigators

Dr Yan Li

Project summary

Hearing aids, video-conferencing systems and similar applications should be able to disentangle one sound from competing sounds, as human listeners do. Current techniques, however, simply amplify the desired signal and the competing noise without discrimination. The underlying problem involves multiple source signals and multiple sensors, where each sensor receives a mixture of the source signals. Blind signal separation is a technique for recovering the source signals from the observed mixtures when both the transmission channels and the original sources are unknown. This project aims to develop an adaptive neural-network algorithm for applications in hearing aids, video conferencing, noise cancellation, and speech enhancement.
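
As a rough illustration of the blind-separation setting described above, the sketch below mixes two synthetic super-Gaussian sources (a common stand-in for speech) through an unknown matrix and recovers them with a sample-by-sample natural-gradient ICA update. The sources, mixing matrix, nonlinearity and learning rate are illustrative assumptions only, not the project's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Two synthetic super-Gaussian (Laplacian) sources standing in for speech.
n = 10000
S = rng.laplace(size=(2, n))

# Each sensor observes an unknown linear mixture of the sources.
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])          # unknown mixing matrix (assumed for the toy example)
X = A @ S                           # observed sensor signals

# Adaptive separation: update an unmixing matrix W one sample at a time so
# that the outputs y = W x become statistically independent.
W = np.eye(2)
lr = 0.001
for epoch in range(5):
    for i in range(n):
        x = X[:, i:i + 1]           # one observed sample
        y = W @ x                   # current separated estimate
        g = np.tanh(y)              # score nonlinearity suited to super-Gaussian sources
        # Natural-gradient rule: W <- W + lr * (I - g(y) y^T) W
        W += lr * (np.eye(2) - g @ y.T) @ W

Y = W @ X                           # recovered sources (up to scaling and permutation)
print("Estimated unmixing matrix:\n", W)

Because blind separation has inherent ambiguities, the recovered signals match the originals only up to scaling and permutation; in a real hearing-aid or conferencing setting the separation would also have to track a changing acoustic channel, which is why an adaptive, sample-by-sample update of this kind is of interest.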