Conference Poster, Year: 2018

Low-cost image annotation for supervised machine learning. Application to the detection of weeds in dense culture

Abstract

An open problem in robotized agriculture is the detection of weeds in dense culture. This problem can be addressed with computer vision and machine learning, but the bottleneck of supervised approaches lies in the manual annotation of training images. We propose two approaches to speed up the annotation of weed positions. The first uses synthetic images and eye-tracking to annotate images [4], which is at least 30 times faster than manual annotation by an expert; the second is based on real RGB and depth images collected with a Kinect v2 sensor. We generated a data set of 150 synthetic images in which weeds were randomly positioned. The images were viewed by two observers, and an eye tracker sampled their gaze positions during this task [5, 6]. Areas of interest were recorded as rectangular patches, and a patch is considered to contain weeds if the average fixation time in that patch exceeds 1.04 seconds. The quality of the visual annotation by eye-tracking is assessed in two ways. First, by direct comparison of the visual annotation with the ground truth: on average, 94.7% of all fixations on an image fell within the ground-truth bounding boxes. Second, as shown in Fig. 1, the eye-tracked annotated data are used as a training set for four machine learning approaches, and the recognition rates are compared with those obtained on the ground truth. These four methods, tested to assess the quality of the visual annotation, correspond to handcrafted features adapted to texture characterization, followed by a linear support vector machine binary classifier. Table 1 gives the average accuracy and standard deviation. Experimental results show that eye-tracked annotated data are almost identical to the in-silico ground truth, and that the performance of supervised machine learning on eye-tracked annotated data is very close to that obtained with the ground truth.
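The fixation-time labelling rule (threshold of 1.04 s) and the texture-feature-plus-linear-SVM pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the choice of local binary pattern histograms as the handcrafted texture features, and all function names below, are assumptions, since the poster only states that handcrafted texture features are followed by a linear SVM binary classifier.

```python
# Illustrative sketch only (assumptions noted above): label patches from eye-tracking
# fixation times, describe them with a simple texture feature, train a linear SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

FIXATION_THRESHOLD_S = 1.04  # average fixation time threshold reported in the abstract


def label_patch(avg_fixation_time_s: float) -> int:
    """Return 1 (weed) if the average fixation time in the patch exceeds the threshold."""
    return int(avg_fixation_time_s > FIXATION_THRESHOLD_S)


def texture_features(patch_gray: np.ndarray, n_points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Hypothetical texture descriptor: a normalized local binary pattern histogram."""
    lbp = local_binary_pattern(patch_gray, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist


def train_classifier(patches, fixation_times):
    """Train a linear SVM on patches labelled from eye-tracking fixation times."""
    X = np.vstack([texture_features(p) for p in patches])
    y = np.array([label_patch(t) for t in fixation_times])
    clf = LinearSVC()
    clf.fit(X, y)
    return clf
```

Under this sketch, the same training routine could be run once on eye-tracked labels and once on ground-truth labels to compare recognition rates, as done in the poster.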
No file deposited

Dates and versions

hal-02528734, version 1 (01-04-2020)

Identifiers

  • HAL Id: hal-02528734, version 1
  • OKINA: ua17391

Cite

Salma Samiei, Ali Ahmad, Pejman Rasti, Etienne Belin, David Rousseau. Low-cost image annotation for supervised machine learning. Application to the detection of weeds in dense culture. Computer Vision Problems in Plant Phenotyping (CVPPP 2018), 2018, Newcastle, United Kingdom. ⟨hal-02528734⟩