PhD position on visual commonsense learning from images/videos to improve natural language understanding

This PhD will be developed within the framework of the AWARE project, whose aim is to investigate new methods to learn visual commonsense knowledge from images/videos in order to improve the natural language understanding (NLU) capabilities of current language models. A possible topic for the PhD is to research ways to extract commonsense knowledge from images/videos and to leverage that knowledge to improve NLU, but we are open to other related research ideas.

The student will join a thriving team of PhD students and researchers focused on the limitations of current Large Language Models such as GPT, and on how to overcome them.

Job id: 

The candidate should preferably have a BSc degree in computer science, telecommunications engineering, mathematics or physics, and an MSc in language technologies and/or machine learning. We are looking for individuals who are passionate about natural language processing and computer vision and have a strong background in computer science and related fields. Our ideal candidate has experience in machine learning, deep learning, and statistical analysis, as well as strong proficiency in programming languages such as Python. We welcome applicants from all backgrounds and are committed to creating an inclusive and supportive workplace.

Duration of Contract: 

3 years

Gross salary: 

The position is fully funded. The student will have all tuition fees covered and will receive a gross salary of 17,221€ (1st year), 17,823€ (2nd year) and 19,765€ (3rd year), which is sufficient to cover living expenses in the area (including housing in a shared apartment).

Location: 

San Sebastian
Starting date: 
The start date is flexible; candidates may join immediately.
Contact details for enquiries: 

The advisors will be Gorka Azkune and Eneko Agirre. If you have any questions, please do not hesitate to contact us at this address: Please include the job ID when contacting us.

To submit your application please follow this link.