Artificial intelligence for the Sciences (AI4theSciences) is an innovative, interdisciplinary and intersectoral PhD programme, led by Université Paris Sciences et Lettres and co-funded by the European Commission.
Supported by the European research and innovation programme Horizon 2020 (Marie Skłodowska-Curie Actions), AI4theSciences is uniquely shaped to train a new generation of researchers to the highest academic level in their main discipline (Physics, Engineering, Biology, Humanities and Social Sciences) while mastering the latest Artificial Intelligence and Machine Learning technologies that apply to their own field.
26 doctoral students will join Université PSL's doctoral schools in two academic cohorts to carry out work on subjects proposed and defined by PSL's scientific community.
The 2020 call offers up to 15 PhD positions across 24 PhD research projects. Candidates will be recruited through high-standard HR processes based on transparency, equal opportunities and excellence.
Description of the PhD subject: Learning dynamics in biological and artificial neural networks
Context - Motivation
Category learning in biological networks
Learning is a fundamental property of the cortex that enables a broad range of cognitive functions. Amongst them, speech acquisition forces our brain to create new internal auditory categories for speech-relevant sounds, and to associate new stimuli to preexisting categories.
However, how category learning shapes neural representations in the brain is an open question.
In the human brain, neural populations selective to speech acoustics have been identified in non-primary regions of the auditory cortex (Norman-Haigneré et al. 2018). Ferret electrophysiology shows that learning to categorize acoustic patterns alters responses in multiple auditory cortical regions (Atiani et al. 2014), with prominent and magnified representations in non-primary auditory areas (Elgueda et al. 2019). These results suggest that neural circuits for categorizing sounds are found in higher auditory areas.
But how these specialized circuits are shaped during learning remains unknown.
Category learning in artificial networks
This ability to categorize diverse signals into different meanings and classes is also a key ability of Deep Neural Networks (DNNs).
How DNNs transform complex and noisy signals into useful abstractions is the subject of intensive investigations in computer science, and has already revealed important connections with neuroscience.
Specifically, both the cortex and DNNs share central computational principles, such as hierarchical organization and spatio-temporal convolutions (Hassabis et al. 2017). These shared elements make artificial networks a potentially useful model for understanding hierarchical effects of categorization in sensory cortex (LeCun et al. 2015).
Stimulus representations in sensory cortical areas and DNNs exhibit many non-trivial similarities. Specifically, several studies have demonstrated that DNN activations can linearly predict neural responses in visual (Yamins & DiCarlo 2016), auditory (Kell et al. 2018) and language-responsive brain regions (Caucheteux & King, bioRxiv 2020). These results suggest that supervised training on visual and auditory tasks ultimately leads DNNs to generate representations similar to those of the brain.
Learning dynamics in neural networks
DNNs are thus being proposed as models of sensory and language processing in the brain. However, whether the learning dynamics in these models are similar to those in the brain is unknown.
This issue is critical both for building better models of the brain and for building machines that match the learning capacities of human listeners.
The overarching hypothesis of this project is that computational constraints force artificial and biological neural networks to adopt similar learning dynamics.
We will test this hypothesis by explicitly training ferrets and DNNs on a speech-related auditory task. We will then monitor (i) how DNNs distribute their learning resources during training (Goals 1 & 2), (ii) where and when changes occur in auditory cortex during training (Goal 2), and (iii) whether the functional changes induced by training are similar between the two systems (Goal 3).
The temporal sequence of such changes is, to date, unknown, partly because it is technically challenging to monitor individual neurons along the course of learning.
To tackle this question, our laboratory will perform state-of-the-art, high-resolution neuroimaging in the ferret (Bimbard et al. 2018; Landemard et al., bioRxiv 2020) to track learning dynamics in primary and non-primary fields of auditory cortex. Only a few studies have investigated how stimulus representations in DNNs change during learning, but preliminary evidence suggests that representations in lower layers of DNNs stabilize much earlier than those in higher layers (Raghu et al. 2017). Here, we will compare learning dynamics in (1) the auditory cortex and (2) DNNs throughout their respective training on the same task.
We postulate that training should induce prominent changes in speech-evoked responses of later (non-primary) stages, as found in humans.
Once the learning dynamics are characterized in both cortical and artificial networks, we will investigate single-unit activations in DNNs through all learning stages.
Co-supervisors Yves Boubenec and Jean-Rémi King are experts in ultrasound imaging and DNN-based neuroscientific analyses, respectively, as demonstrated by their recent publications (Bimbard et al. 2018; Landemard et al., bioRxiv 2020; Caucheteux & King, bioRxiv 2020).
Scientific Objectives, Methodology & Expected results
We will compare neural and simulated representations of test stimuli with Representational Similarity Analysis (RSA; Kriegeskorte et al. 2008). We will extend to ferrets previous human findings showing that the first layers of DNNs correlate more with representations in primary auditory cortex, whereas deep layers correlate more with those of higher auditory regions.
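The core of RSA is to summarize each system by a representational dissimilarity matrix (RDM) over the same stimulus set, then compare the two RDMs. A minimal sketch on synthetic data, with two category clusters standing in for learned sound categories:

```python
# Minimal RSA sketch (Kriegeskorte et al. 2008): build one RDM per system,
# then rank-correlate their pairwise dissimilarities. Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli = 40

# Stimuli drawn from two categories (cluster centers + per-stimulus noise),
# standing in for responses in one system (e.g. a DNN layer).
centers = 2.0 * rng.standard_normal((2, 60))
labels = np.repeat([0, 1], n_stimuli // 2)
dnn_layer = centers[labels] + rng.standard_normal((n_stimuli, 60))

# Second "system" (e.g. a cortical region): a noisy linear transform of the
# first, so the two representational geometries should agree.
mixing = rng.standard_normal((60, 30))
cortex = dnn_layer @ mixing + 0.5 * rng.standard_normal((n_stimuli, 30))

# RDM = condensed vector of pairwise correlation distances between stimuli.
rdm_dnn = pdist(dnn_layer, metric="correlation")
rdm_cortex = pdist(cortex, metric="correlation")

# Spearman correlation between RDMs: the RSA similarity score.
rho, _ = spearmanr(rdm_dnn, rdm_cortex)
print(f"RDM similarity (Spearman rho) = {rho:.2f}")
```

In the project itself, one RDM would be computed per DNN layer and per cortical field, yielding the layer-by-region correspondence described above.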
This project is highly interdisciplinary, combining advanced machine learning with quantitative analysis of large-scale neuroimaging data.
Neural data for Goal 1 are already available in the group of co-supervisor Yves Boubenec.
Mirroring the animal training, we will re-train the DNN on the same task as the ferrets, with arbitrarily defined category boundaries between speech and other sounds.
Our prediction is that post-training representations of sounds become less (or more) distant if they belong to the same (or different) categories.
Based on our previous work on the cortical encoding of speech acoustics (Norman-Haigneré et al. 2015, Landemard et al. bioRxiv 2020), we expect these differential representations to emerge in later (non-primary) stages of the processing hierarchy.
Changes in categorical representations will be modeled by adding a new classification layer on top of the last embeddings and performing supervised fine-tuning of the whole network with the training stimuli used in the animal experiments.
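This fine-tuning step can be sketched as follows. The backbone, data shapes and optimizer settings are toy assumptions (a real audio DNN would replace the small stand-in network); the point is attaching a fresh head and updating the whole network, not just the new layer.

```python
# Sketch: supervised fine-tuning of a whole network after attaching a new
# classification layer to its last embedding. PyTorch; all data are toy.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained audio DNN truncated at its last embedding.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                         nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 2)  # new layer: e.g. "speech" vs. "other sound"
model = nn.Sequential(backbone, head)

# Toy stand-ins for the animal's training stimuli and category labels.
x = torch.randn(64, 128)
y = torch.randint(0, 2, (64,))

loss_fn = nn.CrossEntropyLoss()
initial_loss = loss_fn(model(x), y).item()

# Fine-tune the *whole* network (backbone left unfrozen), as described above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
final_loss = loss.item()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Freezing the backbone and training only the head would be the alternative design; fine-tuning everything is what allows category learning to reshape representations at every depth, which is the effect the project aims to measure.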
Collaborator Sam Norman-Haigneré (Norman-Haigneré et al. 2015, 2018) will also work with co-supervisor Yves Boubenec on cross-species comparison of natural sound encoding in auditory cortex.
For this purpose, we will predict cortical activity from DNN activations along the course of learning. We will deploy dimensionality-reduction techniques such as Canonical Correlation Analysis (CCA) to capture shared learning dynamics in subspaces relevant to both the auditory cortex and the DNN.
Common neural dynamics will be dissected by studying the activations of single DNN units, an analysis that is not feasible in the auditory cortex.
This goal will benefit from the intersectoral background of co-supervisor Jean-Rémi King, who is a research scientist both at the École Normale Supérieure-PSL and Facebook Artificial Intelligence Research.
This project addresses fundamental problems in neuroscience that link sensory representations of complex sounds to their abstract meanings, which touches upon their utilization in symbolic systems such as music and language.
Sam Norman-Haigneré (MIT) will be a close collaborator on this project. He is an expert in auditory neuroscience and functional neuroimaging, themes central to this proposal, and has experience coupling machine learning with the analysis of functional neuroimaging data (Kell et al. 2018). The student will spend two months in the USA to benefit from Norman-Haigneré's advice and expertise.
Yves Boubenec and Jean-Rémi King
Created in 2012, Université PSL aims to develop interdisciplinary training programmes and science projects of excellence among its member institutions.
Its 140 laboratories and 2,900 researchers carry out high-level disciplinary research, both fundamental and applied, fostering a strong interdisciplinary approach.
The scope of Université PSL covers all areas of knowledge and creation (Sciences, Humanities and Social Science, Engineering, the Arts).
Its eleven component schools gather 17,000 students and have won more than 200 ERC grants. PSL was ranked 36th in the 2020 Shanghai ranking (ARWU).
Required Research Experiences
Skills / Qualifications