
Frédéric Lavigne

Professor - Université Nice Sophia Antipolis (UNS)

My research interests focus on the dynamics of learning and semantic processes in memory (experimental approach) and their underlying synaptic and neuronal mechanisms in the cerebral cortex (modeling approach).

CV

Frédéric Lavigne

Researcher/Professor in Cognitive Psychology.

Laboratoire BCL (Bases, Corpus, Langage) - UMR 7320 CNRS - Université de Nice - Sophia Antipolis, France. Language and Cognition team.

Research Background:

Postdoctoral positions:

  • Laboratoire de Neurophysiologie de la Perception et de l’Action - CNRS - Collège de France, Paris, France.
  • Laboratorium voor Experimentele Psychologie - Katholieke Universiteit, Leuven, Belgium.

Ph.D. in Cognitive Sciences: Laboratoire Language, Cognition, Ergonomie. CNRS - Ecole Normale Supérieure, Paris, France.

Advanced Master's Degree in Cognitive Sciences: Université Pierre et Marie Curie and EHESS, Paris, France.

Master's Degree in Neuroscience and Psychophysiology: Université Pierre et Marie Curie, Paris, France.

Latest HAL publications

for idHal "frederic-lavigne":

Title: Latching dynamics in neural networks with synaptic depression
Authors: Pascal Chossat, Maciej Krupa, Frédéric Lavigne
Year: 2016
Type: Preprint / working paper
Abstract: Priming is the brain's ability to activate a target concept more quickly in response to a related stimulus (the prime). Experiments point to an overlap between the populations of neurons coding for different stimuli, and show that prime-target relations arise during long-term memory formation. The classical modelling paradigm holds that long-term memories correspond to stable steady states of a Hopfield network with Hebbian connectivity. Experiments also show that short-term synaptic depression plays an important role in the processing of memories. This leads naturally to a computational model of priming called latching dynamics: a stable state (the prime) can become unstable, and the system may converge to another transiently stable steady state (the target). Hopfield network models of latching dynamics have been studied by numerical simulation, but the conditions for the existence of these dynamics had not been elucidated. In this work we use a combination of analytic and numerical approaches to confirm that latching dynamics can exist in the context of Hebbian learning, but that it lacks robustness and imposes a number of biologically unrealistic restrictions on the model. In particular, our work shows that the symmetry of the Hebbian rule is not an obstruction to the existence of latching dynamics, but that fine-tuning of the model's parameters is needed.
BibTeX: https://arxiv.org/pdf/1611.03645
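
The latching mechanism described in this abstract, in which a stable state destabilizes through activity-dependent fatigue and activity converges to a competing state, can be illustrated with a reduced two-state toy model. The sketch below uses threshold units with self-excitation, mutual inhibition, and an adaptation (fatigue) variable standing in for synaptic depression; all parameters are illustrative assumptions, not the Hopfield network analyzed in the paper.

```python
import numpy as np

# Minimal latching sketch: two competing states, each with a fatigue
# variable that eventually destabilizes it and hands activity over.
I, w, beta, g = 1.0, 0.5, 2.0, 2.0   # drive, self-excitation, inhibition, fatigue gain
dt, tau = 1.0, 50.0                  # time step and fatigue time constant
u = np.array([1.0, 0.0])             # state 0 plays the role of the "prime"
a = np.zeros(2)                      # fatigue (adaptation) variables

trace = []
for _ in range(400):
    inp = I + w * u - beta * u[::-1] - g * a   # net input to each state
    u = (inp > 0).astype(float)                # threshold activation
    a += (dt / tau) * (u - a)                  # fatigue builds while active
    trace.append(u.copy())
trace = np.array(trace)
# State 0 is active first; once its fatigue crosses threshold, activity
# latches to state 1, then back, alternating between the two attractors.
```

With these settings the active state's fatigue crosses threshold after roughly 70 steps, producing regular hand-offs between the two states, the qualitative signature of latching.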

Title: Beyond transitional probabilities: learning XOR in non-human primates
Authors: Arnaud Rey, Frédéric Lavigne, Fabien Mathy, Joël Fagot
Year: 2016
Type: Conference paper
Reference: Fifth Implicit Learning Seminar, Jun 2016, Lancaster, United Kingdom. 〈http://www.lancaster.ac.uk/implicit-learning-seminar/〉
Abstract: Paper presented at the Fifth Implicit Learning Seminar, Lancaster, UK.

Title: Machine Learning under the light of Phraseology expertise: use case of presidential speeches, De Gaulle - Hollande (1958-2016)
Authors: Mélanie Ducoffe, Damon Mayaffre, Frédéric Precioso, Frédéric Lavigne, Laurent Vanni, A Tre-Hardy
Year: 2016
Type: Conference paper
Reference: In Damon Mayaffre, Céline Poudat, Laurent Vanni, Véronique Magri, Peter Follette (Eds.), JADT 2016 - Statistical Analysis of Textual Data, Jun 2016, Nice, France. Presses de FacImprimeur, Volume 1, pp. 157-168. 〈https://jadt2016.sciencesconf.org/〉
Abstract: Author identification and text genesis have always been hot topics in the statistical analysis of textual data community. Recent advances in machine learning have seen the emergence of models that compete with state-of-the-art computational linguistic methods on specific natural language processing tasks (part-of-speech tagging, chunking, parsing, etc.). In particular, deep linguistic architectures build on knowledge of language specificities such as grammar or semantic structure, and are considered the most competitive thanks to their assumed ability to capture syntax. However, even though these methods have proven their efficiency, their underlying mechanisms remain hard to make explicit and to keep stable, from both a theoretical and an empirical point of view, which restricts their range of applications. Our work sheds light on the mechanisms involved in deep architectures when applied to natural language processing (NLP) tasks. The Query-By-Dropout-Committee (QBDC) algorithm is an active learning technique we designed for deep architectures: it iteratively selects the most relevant samples to add to the training set, so that the model improves the most when retrained on the new training set. In this article we do not go into the details of the QBDC algorithm, which has already been studied in the original QBDC article; rather, we confront the relevance of the sentences chosen by our active strategy with state-of-the-art phraseology techniques. We conducted experiments on the presidential discourses of presidents C. De Gaulle, N. Sarkozy and F. Hollande in order to demonstrate the interest of our active deep learning method for discourse author identification, and to compare the linguistic patterns extracted by our artificial approach with those of standard phraseology techniques.
Full text and BibTeX: https://hal.archives-ouvertes.fr/hal-01343209/file/JADT2016_Ducoffe_et_al.pdf
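
The committee-disagreement idea behind QBDC, several stochastic "dropout" copies of a trained model vote on unlabeled samples and the most contested samples are queried next, can be sketched generically. The tiny linear classifier, the dropout-on-weights committee, and the vote-entropy disagreement score below are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 10))          # stand-in trained weights: 2 classes, 10 features
X_pool = rng.normal(size=(100, 10))   # unlabeled pool of candidate samples

def committee_votes(X, W, n_members=20, p_drop=0.5):
    """Each committee member is the model with a random dropout mask applied."""
    votes = []
    for _ in range(n_members):
        mask = rng.random(W.shape) > p_drop      # drop ~half the weights
        votes.append(((W * mask) @ X.T).argmax(axis=0))
    return np.array(votes)                       # shape: (members, samples)

votes = committee_votes(X_pool, W)
# Vote-entropy disagreement: high when members split evenly between classes.
p1 = votes.mean(axis=0)                          # fraction of members voting class 1
eps = 1e-12
disagreement = -(p1 * np.log(p1 + eps) + (1 - p1) * np.log(1 - p1 + eps))
query_idx = np.argsort(disagreement)[-10:]       # 10 most contested samples to label next
```

In the full active learning loop, the queried samples would be labeled, added to the training set, and the model retrained before the next round of queries.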

Title: Semantic integration by pattern priming: experiment and cortical network model
Authors: Frédéric Lavigne, Dominique Longrée, Damon Mayaffre, Sylvie Mellet
Year: 2016
Type: Journal article
Reference: Cognitive Neurodynamics, Springer Verlag, 2016. 〈10.1007/s11571-016-9410-4〉
Abstract: Neural network models describe semantic priming effects through mechanisms of activation of the neurons coding for words, which rely strongly on the synaptic efficacies between pairs of neurons. Biologically inspired Hebbian learning defines efficacy values as a function of the activity of pre- and post-synaptic neurons only, and therefore generates only pair associations between words in the semantic network. However, the statistical analysis of large text databases points to the frequent occurrence not only of pairs of words (e.g., "the way") but also of patterns of more than two words (e.g., "by the way"). Learning these frequent word patterns is not reducible to associations between pairs of words: it must take into account the higher-level coding of three-word patterns. The processing and learning of word patterns therefore challenges the classical Hebbian learning algorithms used in biologically inspired models of priming. The aim of the present study was to test the effects of patterns on the semantic processing of words and to investigate how an inter-synaptic learning algorithm succeeds at reproducing the experimental data. The experiment manipulates the frequency of occurrence of three-word patterns in a multiple-paradigm protocol. Results show for the first time that target words benefit from more priming when embedded in a pattern with the two primes than when only associated with each prime in a pair. A biologically inspired inter-synaptic learning algorithm is tested that potentiates synapses as a function of the activation of more than two pre- and post-synaptic neurons. Simulations show that the network can learn patterns of three words and reproduce the experimental results.
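
The abstract's claim that three-word patterns are not reducible to pair associations can be made concrete with a small parity (XOR-style) construction of our own: two sets of triplets with identical pairwise co-occurrence statistics but no triplet in common, so any purely pairwise (Hebbian) store cannot distinguish them.

```python
from itertools import combinations

# Two triplet "corpora" over three binary slots: even parity vs odd parity.
even = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
odd  = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

def pair_counts(corpus):
    """Co-occurrence counts for every pair of slots and pair of values,
    i.e. all the information a pairwise associative store can record."""
    counts = {}
    for t in corpus:
        for i, j in combinations(range(3), 2):
            key = (i, j, t[i], t[j])
            counts[key] = counts.get(key, 0) + 1
    return counts

# Pairwise statistics are identical, yet the triplet sets never overlap:
# a learner restricted to pair associations cannot tell the corpora apart.
assert pair_counts(even) == pair_counts(odd)
assert set(even).isdisjoint(odd)
```

This is the same structural point made by the XOR learning work listed above: recognizing the triplets requires encoding the joint pattern, which is what the inter-synaptic learning algorithm is designed to capture.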

Title: Apprentissage de combinaisons XOR : cortex et comportement [Learning XOR combinations: cortex and behavior]
Authors: Frédéric Lavigne, Fabien Mathy
Year: 2015
Type: Conference paper
Reference: Invited talk, Station de primatologie UPS 846 CNRS, 2015, Rousset, France.

Title: Anything goes: Czech initial clusters in a dichotic experiment
Authors: Laurent Dumercy, Frédéric Lavigne, Tobias Scheer, Markéta Zikova
Year: 2014
Type: Conference paper
Reference: Olinco 2, Jun 2014, Olomouc, Czech Republic.

Title: Anything goes: Czech initial clusters in a dichotic experiment
Authors: Laurent Dumercy, Tobias Scheer, Markéta Zikova, Frédéric Lavigne
Year: 2014
Type: Conference paper
Reference: 22nd Manchester Phonology Meeting, May 2014, Manchester, United Kingdom.

Title: Inter-synaptic learning of combination rules in a cortical network model
Authors: Frédéric Lavigne, Francis Avnaïm, Laurent Dumercy
Year: 2014
Type: Journal article
Reference: Frontiers in Cognitive Science, Frontiers Media S.A., 2014. 〈10.3389/fpsyg.2014.00842〉

Title: A latch on Priming
Authors: Alberto Bernacchia, Giancarlo La Camera, Frédéric Lavigne
Year: 2014
Type: Journal article
Reference: Frontiers in Cognitive Science, Frontiers Media S.A., 2014, 5:869. 〈10.3389/fpsyg.2014.00869〉

Title: Anything goes: Czech initial clusters run against evidence from a dichotic experiment
Authors: Tobias Scheer, Markéta Zikova, Laurent Dumercy, Frédéric Lavigne
Year: 2013
Type: Conference paper
Reference: Formal Description of Slavic Languages 10, Dec 2013, Leipzig, Germany.
  • More results in the HAL collection of the BCL laboratory
  • See all results on the HAL platform