Software for quickly and semi-manually annotating images in directories. The method is only pseudo-manual because it relies on OpenCV's marker-based watershed algorithm. The general idea is to provide the markers manually with brushes and then launch the algorithm. If the segmentation needs correcting after a first pass, the user can refine the markers by drawing new ones on the erroneous areas (as shown in the video below).
The sources are available on GitHub.
v1.1.0: save the color image and add Ctrl+S to save all masks: Windows
PhD position opening
Position and duration: PhD Thesis – 3 Years full time contract
Starting date: 1st of October 2016
This PhD thesis subject is part of ongoing work between Mines ParisTech and the PSA Peugeot Citroën Group, focused on car safety. The context is the use of onboard visual digital information to inform the driver and improve safety. The digital information is acquired outside the vehicle through sensors. This work is confidential and may lead to a patent; for this reason, the usage scenario cannot be precisely described in this subject.
The thesis has two objectives: 1) reconstruct, in real time, a 3D view of the car's environment, acquired from a fixed point of view on a moving car, and 2) master the visual perception of that 3D data by a human positioned within the car, and ensure its relevance for a given set of scenarios.
The point of view for the 3D reconstruction does not have to be a very wide-angle view, but the refresh of the 3D data has to cope with the specific constraints of car driving. Indeed, at high speeds, processing time and delays can degrade the visual data presented to the driver, and the 3D reconstruction algorithms should account for spatial shifting effects.
One of the main purposes of this thesis is to perform a realistic but partial 3D reconstruction of the road environment, given the inherent constraints of the automotive field: speed of the vehicle, road visibility, etc. The aim is not a complete 3D scene reconstruction from a single viewpoint, but a real-time partial reconstruction that copes with the speed of the vehicle and the inherent perturbation phenomena: changes in the vehicle's direction and other secondary motions. If a certain speed threshold is exceeded, unwanted effects such as processing delays and rendering lag can occur, so image-acquisition strategies should be studied to compensate for these situations. In a second phase, the resulting algorithms and technology should be validated from the perceptual point of view through user-experience tests. Use cases and tests will be defined for the system's evaluation.
Description of work:
After a preliminary review of the state of the art in the field, the PhD candidate should propose relevant technical solutions and innovations over the existing technologies, able to respond to the challenges raised by the state of the art. The second phase will be the implementation of the algorithms on the vehicle platform.
Finally, realistic use cases will be defined from the safety and security point of view, set up, and evaluated for the system's assessment. One central question regarding the perceptual feasibility of the concept is the visual behavior of the user: it is unknown how users will deal with the presence, within the car, of visual information relative to events and objects outside the car. We will focus in particular on accommodative behaviors. Users may adopt one of two accommodative strategies: 1) switch between interior and exterior accommodation points, or 2) keep an outside-the-car accommodation and use the digital visual information available within the car in their peripheral visual field. The open question is which, if any, of these strategies is actually viable in the proposed interface.
MSc Degree in Electrical Engineering, Computer Science or Physics.
Scientific knowledge in Virtual or Augmented Reality, Computer Vision and real-time computer programming.
The successful candidate will be enrolled in the Mines-ParisTech doctoral program and will be an employee of the PSA Peugeot-Citroën Company.
Centre de Robotique / Mines ParisTech
60 boulevard Saint Michel
75272 Paris Cedex 06
Director: Philippe Fuchs: email@example.com
Scientific supervisor: Bogdan Stanciulescu: firstname.lastname@example.org
Scientific supervisor: Alexis Paljic: email@example.com
Mme Christine Vignaud
Phone: +33-(0)1 40 51 92 55
Following on from the two previous successes of the International Workshop on Movement and Computing (MOCO’14) at IRCAM (Paris, France) in 2014, as well as MOCO’15 at Simon Fraser University (Vancouver, Canada) in 2015, we are pleased to announce MOCO’16, which will be hosted in Thessaloniki, Greece. MOCO’16 will be organized by MINES ParisTech (France) in cooperation with Paris 8 University (France), the University of Macedonia, Thessaloniki (Greece) and the Aristotle University of Thessaloniki (Greece).
On Monday 7th March 2016, the radio show « Meetings of Civilisations » on the Municipal Radio of Thessaloniki FM100.6, presented by Dr. Argiro Moumtzidou, will host Dr. Sotiris Manitsaris, Researcher and General Chair of MOCO’16, and Professor Brigitte D’Andréa-Novel, both members of the Centre for Robotics at MINES ParisTech. Together with Gavriela Senteri, a student of the University of Macedonia, they will present the activities of MOCO’16, which cover presentations of scientific papers on Movement and Computing, demonstrations of technological paradigms, and contemporary artistic events open to the general public.
The MINES ParisTech Center for Robotics had the pleasure of welcoming a delegation from the University of Tokyo. The visit was an opportunity to present the ongoing projects and discuss future research collaborations.
Paderborn, February 23, 2016: In a press conference held today at embedded world 2016, dSPACE and Intempora have announced an exclusive cooperation that aims at providing a superior tool chain for developing advanced driver assistance systems (ADAS) and highly automated driving functions. In line with this agreement, dSPACE will globally and exclusively distribute RTMaps (Real-Time Multisensor applications) from Intempora, an innovative and unparalleled software environment for multisensor applications.
Intempora was founded in 2000 based on research performed at the Center of Robotics of École des Mines de Paris (now MINES ParisTech). Since then, the company’s team of software engineers has been working on the development of RTMaps and related products, turning them into a robust and easy-to-use software framework and meeting the needs of demanding customers from the industry. Among others, Intempora is a member of the Groupement ADAS, a group of members of the French Mov’eo cluster dedicated to the field of advanced driver assistance systems.
During the MIG option, students of MINES ParisTech are confronted with the realities of their future work.
The group of students participating in the MINES ParisTech MIG “3D modeling for autonomous vehicles” was accompanied by faculty and doctoral students of the Center for Robotics.
Guillaume Trehard is pleased to invite you to his Ph.D. defense, titled “Evidence theory applications for localization and mapping in an urban context”, on Friday the 5th, 2016, at 2:30 pm at Mines ParisTech (60 bd St Michel, Paris VI, RER B – Luxembourg), amphitheater L 109.
The jury will be composed of:
Since its emergence at the beginning of the nineties, evidence theory has gained growing interest in the data fusion community. Its applications have spread across the robotics field, where its advantages complement the traditional Bayesian frameworks. In the area of environment mapping in particular, the quality of the description provided by evidences has already been appreciated and put forward in the literature. By pushing this application up to simultaneous localization and mapping (SLAM) techniques, this document proposes a new version of maximum-likelihood SLAM in the evidential context, then proposes an original scheme for its integration in a global localization and mapping solution. A practical evaluation of these algorithms is performed in the context of autonomous driving in urban environments, using laser range data acquired from equipped vehicles. In addition, a solution to fuse this local mapping with a global semantic map is proposed as a way to overcome the classical limits of these techniques under restricted budget constraints, with an aim to address the public market. The solutions developed in this thesis are validated on real data from three different experimental platforms from Inria, Valeo and the KITTI database.
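For readers unfamiliar with the evidential framework, its core fusion operation, Dempster's rule of combination, can be sketched in a few lines. The frame of discernment and the mass values below are purely illustrative and not taken from the thesis.

```python
# Illustrative Dempster's rule of combination over a frame {occupied, free},
# as used when fusing evidence about one cell of an evidential grid map.
O, F = frozenset({'O'}), frozenset({'F'})
OF = O | F  # total ignorance: "occupied or free"

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule (normalized)."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    norm = 1.0 - conflict  # renormalize over the non-conflicting mass
    return {s: v / norm for s, v in fused.items()}

# Two sensor readings about the same cell, both leaning towards 'occupied';
# the combined belief in 'occupied' is reinforced while ignorance shrinks.
m1 = {O: 0.6, F: 0.1, OF: 0.3}
m2 = {O: 0.5, F: 0.2, OF: 0.3}
fused = combine(m1, m2)
```

Unlike a Bayesian posterior, the mass left on `OF` keeps an explicit account of ignorance, which is one of the advantages the abstract refers to.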
Entry only available in French…
Contact: bogdan.stanciulescu at mines-paristech.fr
Position and duration: Postdoctorate – 12 Months full time contract
Starting date: 1st of April 2016
Qualifications and skills: Applicants must have a PhD in the field of computer science, electrical engineering, physics, or any other related field. The candidates need to have a strong background in scene interpretation, particularly in the following fields: 3D environment reconstruction, SLAM, feature extraction, scene recognition, visual object detection. The applicants must have good communication skills, be able to work in a team environment and have fluent English skills. French language knowledge is a plus, but not compulsory.
The application must contain information on research background and work experience, including:
Applications must be submitted by e-mail to bogdan.stanciulescu at mines-paristech.fr with the subject: POSTDOCTORAL POSITION.
The Robotics Laboratory of Mines-ParisTech (CAOR) has developed extensive competences and tools in the field of computer vision and pattern recognition for real-time object detection and classification (people, vehicles, faces, etc.). One of the CAOR’s algorithms was internationally recognised, ranking 2nd in the Pascal VOC 2006 challenge.
For its results in real-time object recognition and classification, the CAOR received the Best Student Paper Award at the International Conference on Control, Automation, Robotics and Vision 2011, and won the object recognition challenge of the International Joint Conference on Neural Networks 2011.
The postdoctoral associate will be able to draw on the CAOR’s experience in real-time video processing, robust signature extraction from multiple images, and machine learning.
Last but not least, the Robotics Lab has acquired solid experience in sensor data fusion for indoor SLAM. The Laboratory’s prototype « Corebots » won the DGA-ANR Carotte mobile-robot competition two times out of three, thanks to precise 3D environment mapping and localisation.
SLAM laser mapping by Corebots prototype