Title: PhD at Queen Mary University of London
Post by: Gru on December 22, 2021, 13:17:06 pm

Two PhD positions at Queen Mary University of London
Position 1: Efficient Deep Learning for Perception and Generation

A major challenge in deep learning is developing models that are compact, lightweight, and power-efficient, so that they can be deployed effectively on devices used by billions of people, such as XR glasses, smartphones, and tablets. Prominent methods for achieving these goals include designing efficient architectures via Neural Architecture Search, Network Pruning, and Quantization (including Binary Networks). Despite recent successes in all these areas, efficiency always comes at the cost of reduced accuracy. This PhD project will undertake fundamental research in the area of efficient Deep Learning, developing computationally efficient yet powerful models for perception and/or generation, building upon prior work by Tzimiropoulos and Patras (the supervisors).

Position 2: Machine Learning for Analysis of Affect and Mental Health

The project is in the area of Computer Vision and Machine Learning for the analysis of actions, activity, and behaviour, with applications in Affective Computing and Mental Health. More specifically, the focus is on Machine Learning methods for the analysis of facial expressions, body gestures, speech, and audio for understanding affective and mental-health state in context. The studentship will build on existing work on the analysis of facial non-verbal behaviour (e.g., "SchiNet: Automatic estimation of symptoms of schizophrenia from facial behaviour analysis", Bishay et al., IEEE Transactions on Affective Computing, 2019) and on work by Purver on affect and mental health using audio and Natural Language Processing (e.g., "Multi-modal fusion with gating using audio, lexical and disfluency features for Alzheimer's dementia recognition from spontaneous speech", 2021). At a methodological level, the work will focus on the development of novel Machine Learning methods for fusing information from vision and language.

Deadline: 30th January 2022
Contact: Prof. Georgios Tzimiropoulos, g.tzimiropoulos@qmul.ac.uk
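For readers unfamiliar with the Binary Networks idea named under the first position, here is a minimal illustrative sketch of a 1-bit linear layer trained with a straight-through estimator, written in PyTorch. It is not the supervisors' method; the class names, the XNOR-Net-style scaling factor, and all dimensions are assumptions made purely for illustration.

```python
# Illustrative sketch only: a 1-bit ("binary") linear layer, one flavour of the
# quantization techniques mentioned above. NOT the supervisors' method; all
# names and dimensions here are hypothetical.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped STE: pass gradients through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarized to {-1, +1} at forward time."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        # Real-valued weights are kept for the optimizer; the forward pass uses
        # their 1-bit quantization, rescaled by the mean absolute weight
        # (an XNOR-Net-style scaling factor).
        scale = self.weight.abs().mean()
        w_bin = BinarizeSTE.apply(self.weight) * scale
        return x @ w_bin.t()


if __name__ == "__main__":
    layer = BinaryLinear(16, 4)
    out = layer(torch.randn(2, 16))
    out.sum().backward()  # gradients reach the real-valued weights via the STE
    print(out.shape, layer.weight.grad.shape)
```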
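Likewise, for the "multi-modal fusion with gating" cited under the second position, the following is a minimal sketch of a gated fusion module in the spirit of a Gated Multimodal Unit: a learned sigmoid gate decides, per dimension, how much to trust each modality. It is not the cited paper's model; the feature dimensions and module names are hypothetical.

```python
# Illustrative sketch only: gating-based fusion of two modality embeddings.
# NOT the cited paper's architecture; dimensions and names are hypothetical.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Fuses a visual and a language embedding with a learned sigmoid gate."""

    def __init__(self, vis_dim, lang_dim, fused_dim):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, fused_dim)
        self.lang_proj = nn.Linear(lang_dim, fused_dim)
        # The gate sees both modalities and outputs a per-dimension weight.
        self.gate = nn.Linear(vis_dim + lang_dim, fused_dim)

    def forward(self, vis, lang):
        z = torch.sigmoid(self.gate(torch.cat([vis, lang], dim=-1)))
        # Convex combination of the two projected modalities.
        return z * torch.tanh(self.vis_proj(vis)) + (1 - z) * torch.tanh(self.lang_proj(lang))


if __name__ == "__main__":
    fusion = GatedFusion(vis_dim=512, lang_dim=768, fused_dim=256)
    fused = fusion(torch.randn(8, 512), torch.randn(8, 768))
    print(fused.shape)  # torch.Size([8, 256])
```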