dSPACE and Intempora Sign Strategic Partnership

Paderborn, February 23, 2016 – In a press conference held today at embedded world 2016, dSPACE and Intempora announced an exclusive cooperation that aims to provide a superior tool chain for developing advanced driver assistance systems (ADAS) and highly automated driving functions. Under this agreement, dSPACE will globally and exclusively distribute RTMaps (Real-Time Multisensor applications) from Intempora, an innovative software environment for multisensor applications.

André Rolfsmeier (left), Lead Product Manager for Advanced Applications and Technologies at dSPACE, shaking hands with Nicolas du Lac, Managing Director of Intempora. (Photo: dSPACE GmbH)

Intempora was founded in 2000 based on research performed at the Center of Robotics of École des Mines de Paris (now MINES ParisTech). Since then, the company’s team of software engineers has been developing RTMaps and related products, turning them into a robust and easy-to-use software framework that meets the needs of demanding industrial customers. Among other activities, Intempora is a member of the Groupement ADAS, a group of members of the French Mov’eo cluster dedicated to the field of advanced driver assistance systems.

More information about the Strategic Partnership


3D Modeling for Autonomous Vehicles

During the MIG option, MINES ParisTech students are confronted with the realities of their future work.
The group of students participating in the MINES ParisTech MIG “3D modeling for autonomous vehicles” was accompanied by faculty and doctoral students of the Center for Robotics.


Ph.D. Defense – Guillaume Trehard

Guillaume Trehard is pleased to invite you to his Ph.D. defense, titled “Evidence theory applications for localization and mapping in an urban context”, on Friday the 5th, 2016, at 2:30 pm at MINES ParisTech (60 bd St Michel, Paris VI, RER B – Luxembourg), amphitheater L 109.

The jury will be composed of:

  • M. Christoph STILLER, Karlsruhe Institute of Technology (KIT), Examiner
  • Mme Véronique CHERFAOUI, Université de Technologie de Compiègne (UTC), Examiner
  • M. Raja CHATILA, CNRS, Examiner
  • M. Christian LAUGIER, INRIA, Examiner
  • M. Benazouz BRADAI, Valeo Driving Assistance Research, Examiner
  • M. Fawzi NASHASHIBI, INRIA, Examiner
  • Mme Evangéline POLLARD, INRIA, Examiner


Since its emergence in the early nineties, evidential theory has gained growing interest in the data fusion community. Its applications have spread throughout robotics, where its advantages complement traditional Bayesian frameworks. In the area of environment mapping in particular, the quality of the description provided by evidential representations has already been appreciated and put forward in the literature. Pushing this application up to simultaneous localization and mapping (SLAM) techniques, this thesis proposes a new version of maximum-likelihood SLAM in the evidential context, followed by an original scheme for its integration into a global localization and mapping solution. A practical evaluation of these algorithms is performed in the context of autonomous driving in urban environments, using laser range data from equipped vehicles. In addition, a solution for fusing this local mapping with a global semantic map is proposed, as a way to overcome the classical limits of these techniques under restricted budget constraints and with the public market in mind. The solutions developed in this thesis are validated on real data from three different experimental platforms, from Inria, Valeo and the KITTI database.
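As a minimal, hypothetical illustration of the evidential fusion underlying this line of work (this is not code from the thesis), Dempster's rule of combination can fuse two mass functions for a single occupancy-grid cell over the frame {free, occupied}, where the key 'FO' denotes the full frame (ignorance):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {F, O} (free, occupied).

    'FO' stands for the full frame, i.e. the mass assigned to ignorance.
    Returns the normalized combined masses (Dempster's rule).
    """
    # Conflict: mass assigned to contradictory hypotheses (F vs O)
    k = m1['F'] * m2['O'] + m1['O'] * m2['F']
    if k >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    norm = 1.0 - k
    return {
        # A set keeps the mass of every pair of focal elements whose
        # intersection equals it, renormalized by the non-conflicting mass.
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / norm,
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / norm,
        'FO': (m1['FO'] * m2['FO']) / norm,
    }

# Two laser observations that both moderately favour 'occupied' for one cell:
scan1 = {'F': 0.1, 'O': 0.6, 'FO': 0.3}
scan2 = {'F': 0.2, 'O': 0.5, 'FO': 0.3}
fused = dempster_combine(scan1, scan2)
```

Note how, unlike a plain Bayesian occupancy grid, the residual mass on 'FO' keeps track of how much remains unknown about the cell, which is one of the descriptive advantages the abstract refers to.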


Michel Fliess receives the Ampere Prize of the Academy of Sciences in 2015

This entry is only available in French…


Environment mapping and landmark extraction by passive 3D vision for robot navigation

Contact: bogdan.stanciulescu at

Position and duration: Postdoctorate – 12 Months full time contract

Starting date: 1st of April 2016

Qualifications and skills: Applicants must have a PhD in the field of computer science, electrical engineering, physics, or any other related field. The candidates need to have a strong background in scene interpretation, particularly in the following fields: 3D environment reconstruction, SLAM, feature extraction, scene recognition, visual object detection. The applicants must have good communication skills, be able to work in a team environment and have fluent English skills. French language knowledge is a plus, but not compulsory.

The application must contain information on the candidate’s research background and work experience, including:

  1. A motivation letter outlining background, experience and interest in the subject.
  2. A detailed CV, including personal contact information and list of publications.


Applications must be submitted by e-mail to bogdan.stanciulescu at with the subject: POSTDOCTORAL POSITION.

The Robotics Laboratory of MINES ParisTech (CAOR) has developed extensive competences and tools in computer vision and pattern recognition for real-time object detection and classification (people, vehicles, faces, etc.). One of CAOR’s algorithms was internationally recognised, ranking 2nd in the Pascal VOC 2006 challenge.

For its results in real-time object recognition and classification, CAOR received the Best Student Paper Award at the International Conference on Control, Automation, Robotics and Vision 2011, and also won the International Joint Conference on Neural Networks 2011 object recognition challenge.

The postdoctoral associate will be able to draw on CAOR’s experience in real-time video processing, robust signature extraction from multiple images, and machine learning.

Last but not least, the Robotics Lab has acquired solid experience in sensor data fusion for indoor SLAM. The Laboratory’s prototype « Corebots » won the DGA-ANR Carotte mobile-robot competition two times out of three, thanks to precise 3D environment mapping and localisation.


SLAM laser mapping by Corebots prototype

Read more


Silvère Bonnabel, recipient of the SEE-IEEE Alain Glavieux Prize 2015

This entry is only available in French…


International Workshop on Movement and Computing (MOCO’16) – Call for Papers

Following the successes of the two previous International Workshops on Movement and Computing, MOCO’14 at IRCAM (Paris, France) in 2014 and MOCO’15 at Simon Fraser University (Vancouver, Canada) in 2015, we are pleased to announce MOCO’16, which will be hosted in Thessaloniki, Greece. MOCO’16 will be organized by MINES ParisTech (France) in cooperation with Paris 8 University (France), the University of Macedonia, Thessaloniki (Greece) and Aristotle University of Thessaloniki (Greece).


International Workshop on Movement and Computing

The vision of MOCO’16 is to bring together academics, researchers, engineers, designers, technologists, technocrats, creative artists, anthropologists, museologists, ergonomists and other practitioners interested in the symbiosis between the human and the creative process, e.g. dancer and digital media, musician and instrument, craftsman and object. This symbiosis takes the form of an interactional relationship in which the human element is both a trigger and a transmitter, connecting perception (mind/environment interaction and cognition), knowledge (theoretical understanding of a process) and gesture (semantic motor skills). MOCO’16 invites researchers with experience in capturing the combined key elements of perception, knowledge and gesture/movement. It will also be of interest to artists who work at the intersection of art, meaning, cognition and technology, unlocking the hidden components of human creativity. The workshop further provides a forum for industrial partners, for whom the movements and gestures of workers and operators are key elements of ergonomics and health, to see and present state-of-the-art technologies.

A key feature of the MOCO’16 Workshop will be to open some of its demonstrations and artistic activities to the public-at-large in order to provide this extended audience with the opportunity to be informed about current scientific issues and topics by experts in an informal setting.

Read more


[Project]: SINETIC

Full title: Système Intégré Numérique pour les Transports Intelligents Coopératifs (Integrated Digital System for Cooperative Intelligent Transport)

Duration: from 01/10/2014 to 30/06/2017

Project type: French collaborative research project (FUI18)

Partners: Oktal (leader), Armines, Renault, Dynalogic, Civitec, All4Tec, EURECOM, IFSTTAR, INRIA, VDC

Synopsis:

Cooperative intelligent transport systems (C-ITS) enable data exchange between vehicles of all categories, the road infrastructure, nomadic systems, and control centres and other service-provider infrastructures, in order to make the mobility of people and goods safer, more efficient, more comfortable and more eco-responsible. The figure below depicts cooperative ITS as a global system.

The SINETIC project aims to build a complete simulation environment for designing cooperative intelligent transport systems, with two levels of granularity:

  • The system level, integrating all the components of the system (vehicles, infrastructure, management centres, etc.) and its realities (terrain, traffic, etc.).
  • The sub-system or component level, modelling in fine detail the characteristics and behaviours of the various components (vehicles, sensors, communication and localization systems, etc.) over limited but finely described geographical areas.

CAOR’s role: modelling a driving task in the context of autonomous vehicle use, through virtual reality experimentation.

CAOR participants: Philippe Fuchs, Olivier Hugues.

Certifying clusters: Systematic and Movéo

Full title:

Duration: from 01/10/2012 to 30/09/2015

Project type: French collaborative research project (FUI)

Partners:

Synopsis:

CAOR’s role:

CAOR participants: Philippe Fuchs

For more information:


Talk by Philippe Fuchs at “WHAT’S NEXT”, IMMERSION in 360°

Philippe Fuchs, Professor at the Center for Robotics, talked about 360° immersion at “What’s Next”, 360° Immersion, on October 13th at l’Usine – Saint-Denis. With the growing availability of low-cost HMDs to the general public, it seems legitimate to ask whether the use of this rather intrusive type of device raises public health questions.

You will find here the workshop’s participants…


INC Day 2015 – Keynote lecture by Alexis Paljic

Alexis Paljic was invited to give the inaugural presentation at the INC Day 2015 conference. This year, INC Day is dedicated to the many applications of Virtual Reality in Neuroscience, Psychology and Psychiatry.


Snapshot of Alexis Paljic’s (MINES ParisTech) keynote during the INC Day 2015 conference
A MINES ParisTech researcher and specialist in Virtual Reality, Alexis Paljic offered some elements of an answer to the question “How far can we trust VR for the Simulation of Real Human Activities?”.

Videos of the presentations are available on the INC Day 2015 website.