PhD Project Opportunities

The following research projects in my area are open to postgraduate applicants. Please contact me for more details or to discuss similar projects.


"Clever learners" - Adaptive machine learning for changing environments

In the real world, machine learning models trained on labelled data may become out of date as new scenarios are presented to them. This issue, known as concept drift, means that models become unreliable and unable to complete their trained tasks. This project is concerned with developing and testing approaches that enable machine learning models to adapt to changing environments, so that they are (1) aware that their input is changing and (2) able to exploit strategies to learn continuously with minimal disruption and maximum accuracy.
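As a minimal sketch of point (1), a model can watch its own prediction outcomes and flag drift when recent accuracy drops well below its long-run accuracy. The window size and threshold below are illustrative choices, not values prescribed by the project:

```python
from collections import deque

class DriftMonitor:
    """Flags suspected concept drift when accuracy over a recent window
    falls well below the long-run accuracy. A simplified, window-based
    sketch; window size and threshold are illustrative choices."""

    def __init__(self, window=50, drop_threshold=0.15):
        self.recent = deque(maxlen=window)   # sliding window of 0/1 outcomes
        self.total_correct = 0
        self.total_seen = 0
        self.drop_threshold = drop_threshold

    def update(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if drift is suspected."""
        self.recent.append(1 if correct else 0)
        self.total_correct += 1 if correct else 0
        self.total_seen += 1
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        overall = self.total_correct / self.total_seen
        windowed = sum(self.recent) / len(self.recent)
        return (overall - windowed) > self.drop_threshold

monitor = DriftMonitor()
# A stable stream (90% correct) followed by a degraded one (~10% correct).
stream = [True] * 90 + [False] * 10 + [i % 10 == 7 for i in range(100)]
flags = [monitor.update(c) for c in stream]
```

Once drift is flagged, point (2) is about how to respond, e.g. retraining on recent data or weighting newer examples more heavily.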



SoundGen: Creative and Targeted Sound Mixing, using Deep Neural Networks

The SoundGen project will deliver state-of-the-art techniques for effective sound generation and mixing. This work is inspired by recent developments in neural style transfer networks for image mixing. Sounds can be represented as spectrogram images, which have proven effective as representations for sound when used with neural network classifiers. This project proposes to use spectrograms in combination with CNNs that have been trained on a variety of sounds, to discover how specific feature maps of the CNN are associated with aspects of sound – similar to the way image neural style transfer networks operate. The project is funded by the D-Real scholarship scheme. Apply here
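To make the sound-as-image idea concrete, here is a stdlib-only sketch of a magnitude spectrogram: the signal is cut into overlapping frames and each frame's frequency content becomes one row of a time-by-frequency "image". A real pipeline would use an FFT with windowing (e.g. numpy or librosa); the naive DFT, frame size and hop length below are illustrative:

```python
import cmath
import math

def spectrogram(signal, frame_size=64, hop=32):
    """Magnitude spectrogram via a naive DFT on overlapping frames.
    Returns a list of rows (time) of magnitudes (frequency bins)."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        row = []
        for k in range(frame_size // 2):  # keep non-negative frequency bins
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                    for n in range(frame_size))
            row.append(abs(s))
        frames.append(row)
    return frames  # a time x frequency "image" a CNN could consume

# A pure tone at bin 8 of a 64-point frame lights up that frequency column.
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = spectrogram(tone)
```

The resulting 2D array is what would be fed to the pre-trained CNN whose feature maps the project proposes to analyse.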



Automated classification of internet video content

There has been enormous growth in the volume of video material posted on the internet for public and private consumption. For example, 300 hours of new video footage are uploaded to YouTube every minute [1]. A major challenge for businesses is to process the volume of uploaded video content. The video content needs to be classified into safe versus abusive content, and further into genres such as news, sports, comedy and education. Current methods focus heavily on the visual and/or text content of video, with less focus on the embedded audio content [2]. This project initially proposes to use an audio-led machine learning approach to classification, based on the premise that the audio content of a digital audio-visual segment will provide rich information for classification. We will enhance our approach by exploring the latest techniques in image content for building classification features. The student will have the freedom to explore new mechanisms for improving video classification results.
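Two classic audio features often used as inputs to such classifiers are short-term energy and zero-crossing rate, which together help separate, say, tonal speech or music from noise-like content. The sketch below is only illustrative of the kind of audio-led feature vector the project would start from, not the project's actual feature set:

```python
import math

def audio_features(samples):
    """Return (short-term energy, zero-crossing rate) for a list of samples.
    Both are standard low-level audio features for classification."""
    energy = sum(s * s for s in samples) / len(samples)
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    zcr = crossings / (len(samples) - 1)
    return energy, zcr

# A slowly oscillating (speech/music-like) signal crosses zero rarely;
# a rapidly alternating (noise-like) signal crosses on almost every sample.
speechlike = [math.sin(2 * math.pi * 3 * n / 100) for n in range(100)]
noisy = [(-1) ** n * 0.5 for n in range(100)]
```

In practice these scalars would be computed per frame and stacked with many other features (e.g. spectral ones) before training a classifier.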
Automated identification of fake user profiles in online media
The volume of social media content posted by users has grown enormously over the past decade. One of the challenges for social media providers is to validate user profile information against true user profiles. For example, children who claim to be older gain access to sites and content by creating user profiles with false age information. Other false information can include gender or location.
This project proposes to address this problem by automatically identifying risky user profiles, i.e. by developing a mechanism for detecting user content that does not match the declared user profile. This will involve building activity or content profiles associated with specific user profiles, using machine learning techniques. The output of the project will be of interest to any social media or general media business on the internet. The technologies will include machine learning techniques and text mining.
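A toy version of the content-versus-profile check might score how well a user's vocabulary matches their declared demographic. Everything below is invented for illustration – the marker word lists, the binary adult/teen split, and the scoring rule – whereas the project itself would learn such associations from data:

```python
# Hypothetical marker vocabularies; a real system would learn these
# associations from labelled data rather than hand-code them.
TEEN_MARKERS = {"homework", "lol", "omg", "class", "teacher"}
ADULT_MARKERS = {"mortgage", "commute", "invoice", "pension", "colleague"}

def profile_consistency(posts, declared_adult: bool) -> float:
    """Score in [-1, 1]: positive means the posted content supports the
    declared profile, negative means it contradicts it."""
    words = {w for post in posts for w in post.lower().split()}
    teen = len(words & TEEN_MARKERS)
    adult = len(words & ADULT_MARKERS)
    if teen + adult == 0:
        return 0.0  # no evidence either way
    balance = (adult - teen) / (teen + adult)
    return balance if declared_adult else -balance

posts = ["omg so much homework", "my teacher gave us extra class today"]
score = profile_consistency(posts, declared_adult=True)  # strongly negative
```

Profiles whose score falls below some threshold would then be flagged as risky for human review.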
Sentiment analysis of real-time online content
Every hour, more than 21 million new tweets are posted on Twitter and 300 hours of new video material are uploaded to YouTube. Media companies, advertisers and owners of branded products are amongst those keenly interested in the response to newly posted material. Gauging that response is difficult due to the volume of data posted, the range of topics, and the variety of data sources involved. It requires real-time clustering of information, topics and channels, combined with analysis of the overall sentiment of discussion around any one of these. This project aims to develop automated techniques to monitor the sentiment of topics within high-volume streams of social media data in real time. The techniques involved include machine learning and text analysis.
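The real-time aspect can be sketched as a bounded sliding window over incoming posts, each scored against a sentiment lexicon. The tiny lexicon and window size below are stand-ins; a real system would use a trained model or a full resource such as VADER:

```python
from collections import deque

# Hypothetical tiny lexicon, for illustration only.
LEXICON = {"love": 1, "great": 1, "good": 1, "hate": -1, "awful": -1, "bad": -1}

class StreamingSentiment:
    """Tracks the mean sentiment of the last `window` posts on a topic;
    a minimal sketch of real-time monitoring over a social stream."""

    def __init__(self, window=100):
        self.scores = deque(maxlen=window)  # old posts fall out automatically

    def push(self, text: str) -> float:
        """Score one incoming post and return the current windowed mean."""
        score = sum(LEXICON.get(w, 0) for w in text.lower().split())
        self.scores.append(score)
        return sum(self.scores) / len(self.scores)

mon = StreamingSentiment(window=3)
mon.push("I love this show")
mon.push("great episode")
latest = mon.push("awful ending, really bad")  # mean of [+1, +1, -2]
```

Because the window is bounded, memory stays constant no matter how large the stream grows, which is what makes the approach viable at Twitter/YouTube volumes.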
Detection of abusive content in social media in a multilingual environment
There has been enormous growth in the volume of user content posted to social media such as Twitter, Instagram, Facebook and YouTube. A major challenge for these platforms is to monitor and prevent the posting of abusive user content, be this bullying or other types of abusive text. Research in this area is ongoing. A particular gap, however, is the detection of content across multiple languages, as the tendency is to focus on English. This project addresses both the detection of abusive content and the challenge of doing so while taking account of multiple languages. The techniques used on the project will include machine learning for building automated modules for categorising content, and text mining for the parsing and analysis of text.
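A minimal sketch of the multilingual routing problem: a post is checked against the resources for its detected language, falling back to all of them when the language is unknown. The tiny per-language lexicons below are invented stand-ins; the project would replace them with per-language trained classifiers:

```python
# Invented stand-in lexicons; real systems train per-language classifiers.
ABUSE_LEXICONS = {
    "en": {"idiot", "loser"},
    "fr": {"imbécile", "crétin"},
    "de": {"idiot", "trottel"},
}

def is_abusive(text: str, lang: str) -> bool:
    """Route the post to its language's lexicon; an unknown language
    falls back to checking every lexicon (a conservative default)."""
    words = set(text.lower().split())
    if lang in ABUSE_LEXICONS:
        lexicons = [ABUSE_LEXICONS[lang]]
    else:
        lexicons = list(ABUSE_LEXICONS.values())
    return any(words & lex for lex in lexicons)
```

Even this toy version shows why English-only pipelines leave a gap: "quel imbécile" contains no English abuse terms and would pass an English-only filter.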
Automated Monitoring of Group Environment
In nursing homes and other assistive group environments, situations arise where residents receive a substandard level of care, such as verbal or physical abuse of patients by staff, or a lack of adequate exercise for patients. Initial solutions to external monitoring of these environments suggest the use of cameras [1][2], but this poses problems of privacy and the manual effort needed to analyse camera footage.
In such situations, we need solutions that deliver real-time alerts and/or post-incident analysis, using several streams of information, including visual, auditory and activity information from sensors. The interpretation of this sensor information needs to be both correct in identifying abusive incidents and fast, so that such problems are flagged as soon as possible.
For example, auditory signals from a microphone can monitor noise levels in a room. Audio patterns associated with incidents, such as raised voices, can be captured and classified using learning algorithms. Pressure sensors can be used to detect the length of time spent in bed, and so build up an activity profile for a resident, with any change in typical patterns highlighted. Camera footage cannot rely on manual monitoring, so it requires automated image processing to pick up patterns associated with problem incidents. These problems require the investigation and use of image processing and machine learning techniques to develop new and improved algorithms for detecting scenarios of interest.
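The pressure-sensor idea above can be sketched as a per-resident baseline with a simple deviation test: days whose time-in-bed differs strongly from the resident's typical pattern are flagged. The z-score rule and threshold are illustrative assumptions; a deployed system would adapt them per resident and combine them with the other sensor streams:

```python
import statistics

def flag_anomalies(daily_hours, z_threshold=2.0):
    """Flag days whose time-in-bed deviates strongly from the resident's
    own baseline, using a simple z-score test. The threshold is an
    illustrative choice, not a clinically validated value."""
    mean = statistics.mean(daily_hours)
    sd = statistics.stdev(daily_hours)
    return [abs(h - mean) > z_threshold * sd for h in daily_hours]

# A week of typical nights followed by one unusually long day in bed.
history = [8.0, 7.5, 8.2, 7.8, 8.1, 7.9, 8.0, 15.5]
flags = flag_anomalies(history)  # only the final day stands out
```

The same pattern (baseline plus deviation alert) applies to the audio and camera streams, with the learned classifiers supplying the per-event scores.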
In summary, the overall purpose of this project is to investigate and develop a set of solutions to assist in automated monitoring of group environments.