EP3274961A1 - Imaging of a macroscopic plant object - Google Patents
Imaging of a macroscopic plant objectInfo
- Publication number
- EP3274961A1 (application EP16718400.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- pixel
- dimensional
- data
- visible spectrum
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000003384 imaging method Methods 0.000 title claims abstract description 26
- 238000000034 method Methods 0.000 claims abstract description 31
- 238000001429 visible spectrum Methods 0.000 claims abstract description 25
- 238000012545 processing Methods 0.000 claims description 8
- 238000006073 displacement reaction Methods 0.000 claims description 7
- 238000004364 calculation method Methods 0.000 claims description 3
- 238000002329 infrared spectrum Methods 0.000 claims description 2
- 230000001419 dependent effect Effects 0.000 claims 2
- 241000196324 Embryophyta Species 0.000 description 32
- Atomic nitrogen Chemical compound N#N 0.000 description 4
- 230000005540 biological transmission Effects 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 238000001228 spectrum Methods 0.000 description 4
- 230000012010 growth Effects 0.000 description 3
- 238000012544 monitoring process Methods 0.000 description 3
- 241000209140 Triticum Species 0.000 description 2
- 235000021307 Triticum Nutrition 0.000 description 2
- 201000010099 disease Diseases 0.000 description 2
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 2
- 210000005069 ears Anatomy 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 229910052757 nitrogen Inorganic materials 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 241000195493 Cryptophyta Species 0.000 description 1
- 208000005156 Dehydration Diseases 0.000 description 1
- 241000233866 Fungi Species 0.000 description 1
- Ilexoside XXIX Chemical compound 0.000 description 1
- 241000219094 Vitaceae Species 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 239000012080 ambient air Substances 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 235000021021 grapes Nutrition 0.000 description 1
- 238000011081 inoculation Methods 0.000 description 1
- 238000011545 laboratory measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000008635 plant growth Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 229910052708 sodium Inorganic materials 0.000 description 1
- 239000011734 sodium Substances 0.000 description 1
- 239000002689 soil Substances 0.000 description 1
- 241000894007 species Species 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 230000005068 transpiration Effects 0.000 description 1
- 238000002211 ultraviolet spectrum Methods 0.000 description 1
- 238000009369 viticulture Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/571—Depth or shape recovery from multiple images from focus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/68—Food, e.g. fruit or vegetables
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/254—Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/047—Picture signal generators using solid-state devices having a single pick-up sensor using multispectral pick-up elements
Definitions
- the invention relates to macroscopic plant imaging.
- In the present application, "plant" covers both plants, including algae, and the macroscopic organisms commonly referred to as fungi, even if these organisms are not always recognized as plants proper.
- By "macroscopic" is meant that the object to be imaged is visible to the naked eye. This object therefore has dimensions of at least 1/10th of a millimeter, advantageously at least one millimeter, for example at least one centimeter. The dimensions of the object may, however, be less than ten meters, for example less than one meter.
- the invention can for example find an application in phenotyping, field counting, laboratory measurements of dimensions, etc.
- Phenotyping is a technique for determining how a given plant variety reacts to a given environment. More precisely, a large number of plants, of the same variety or not, are subjected to various manipulations, for example of the water state of the soil, the temperature (lighting with sodium lamps, for example), the CO2 content of the ambient air, the inoculation of diseases, etc.
- Imaging boxes comprising a set of sensors make it possible to estimate various parameters of the plants having thus grown in these various environments.
- an infra-red camera can be used to estimate nitrogen stress
- a thermal sensor can be used to estimate water stress, etc.
- the current imaging boxes can make it possible to characterize up to 1800 plants per day.
- Field counting is a technique for estimating the number of plant elements, for example the number of ears of wheat or bunches of grapes, produced in a given plot, from acquisitions made on that same plot, allowing a yield to be estimated, for example.
- Field counting conventionally involves pattern recognition software for the identification and counting of plant elements from images.
- the acquisition platforms used in macroscopic plant imaging may include several devices, for acquisition in the visible, in the infrared, etc.
- a 3D reconstruction can make it possible to estimate a leaf volume, rather than a leaf area, and more generally to obtain a depth information. This can be particularly useful when studying species competition, since the foliar cover of each plant can be evaluated more precisely.
- Document CA 2 764 135 describes an example of a 3D reconstruction method in which a TOF ("Time of Flight") camera and a color camera are used. The data from these two cameras are combined to reconstruct a 3D image of the plant.
- said two-dimensional image data being derived from at least one two-dimensional capture device, comprising for example an optical sensor, and these two-dimensional image data comprising data acquired beyond the visible spectrum, and
- a three-dimensional reconstruction of said object is performed from at least this acquired data beyond the visible spectrum.
- the invention may furthermore make it possible to avoid registration between acquisitions made outside the visible spectrum and the reconstructed 3D image, since the 3D image is reconstructed from these acquisitions outside the visible spectrum.
- this method has the advantage of implementation with little or no modifications of an existing platform, since most of these platforms already incorporate sensors outside the visible band.
- the macroscopic plant object may for example comprise: a macroscopic portion of a plant, for example the roots or a bud; the whole of this plant; or a macroscopic set of several plants, whether in competition or not.
- the received two-dimensional image data can also include data acquired in the visible spectrum, and these acquired data can also be used in the visible spectrum for the 3D reconstruction.
- the reconstruction process may be an SFF (Shape From Focus) process.
- N > 1 sets of two-dimensional image data can be received, each set being acquired at a corresponding depth of field and comprising the values of a plurality of pixels.
- Sets may include data acquired outside the visible spectrum, and possibly data acquired in the visible spectrum.
- N sets can come from the same two-dimensional acquisition device.
- the method can thus be implemented with relatively simple optical equipment.
- a registration step can be provided to match the pixels of these N sets.
- the method may thus comprise, for each matched pixel, a calculation step for estimating, from the N sets of received image data, a depth of field value corresponding to a maximum of sharpness.
- the depth of field value corresponding to the maximum sharpness value is then chosen, or a depth of field value corresponding to a maximum of sharpness is calculated by interpolation.
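The SFF steps just described (N registered acquisitions, a per-pixel sharpness value, the depth retained at the sharpness maximum) can be sketched as follows in Python/NumPy. The function name, the squared-gradient sharpness measure and the array layout are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def sff_depth_map(stack, depths):
    """Shape From Focus sketch: per-pixel depth from a focus stack.

    stack  : (N, H, W) array of N registered grayscale acquisitions,
             each made at a different depth of field.
    depths : length-N sequence of the corresponding depth-of-field values.

    For every pixel, a squared-gradient sharpness measure is computed on
    each image, and the depth whose image is sharpest there is retained.
    """
    n = stack.shape[0]
    sharpness = np.empty(stack.shape, dtype=float)
    for j in range(n):
        gy, gx = np.gradient(stack[j].astype(float))  # image gradients
        sharpness[j] = gx ** 2 + gy ** 2              # sharpness map
    j_max = np.argmax(sharpness, axis=0)              # index of the max
    return np.asarray(depths, dtype=float)[j_max]     # depth per pixel
```

On a toy two-image stack in which each image is sharp on a different side, the pixels near each sharp stripe receive the depth of the corresponding acquisition.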
- the invention is in no way limited to SFF reconstruction.
- the received two-dimensional image data can then comprise a single image, from which a 3D reconstruction is performed by analyzing the shape, on this image, of a previously known pattern.
- the reconstruction method may be a stereovision process.
- the received image data then comprises a left image and a right image.
- the data acquired in the non-visible spectrum may comprise data acquired in the infrared spectrum, for example data corresponding to wavelengths between 700 nm and 5 mm, advantageously between 780 nm and
- the data acquired in the non-visible spectrum may comprise data acquired in the ultraviolet spectrum, or the like.
- the two-dimensional image data received may comprise, for each pixel, a set of P > 1 pixel values, each value being derived from an acquisition made in a corresponding frequency band.
- P frequency bands are thus provided, for example P disjoint frequency bands covering a wide range of the spectrum, for example the visible and the infra-red.
- the method may then comprise, for at least one, and preferably each, pixel, and for at least one, and preferably each, of the N sets, a step of estimating an additional pixel value from the P > 1 pixel values corresponding to this pixel.
- the calculation step leading to the estimation of a depth of field for a pixel can then be carried out from at least one, and preferably N, additional values thus estimated for this pixel, and possibly with values of neighboring pixels, for example the additional values estimated for the neighboring pixels.
- for this additional value it is possible, for example, to choose one of the P values corresponding to this pixel, or else to proceed by interpolation.
- This additional pixel value may correspond for example to a maximum of reflected energy.
- the additional pixel value may be estimated to correspond to one or more frequency bands identified as particularly information-rich.
- this additional pixel value can be estimated so as to correspond to a maximum of sharpness.
- the additional pixel value can then be obtained by further using the values corresponding to the neighboring pixels.
- the method may comprise, for at least one, and preferably each, pixel, and for at least one, and preferably each, of the N sets, a step of combining at least some of the P pixel values corresponding to this pixel so as to obtain an enriched pixel value, for example a P-tuple of values. By repeating this step for each pixel of a set, an enriched image can thus be obtained.
- for each pixel it will then be possible to estimate, from the enriched image or images, for example from the N enriched images, a depth of field value corresponding to a maximum of sharpness.
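The estimation of an additional pixel value from the P per-band values might be sketched as below; the "max_energy" and "mean" combinations are illustrative assumptions (the text mentions, among other options, a maximum of reflected energy and interpolation).

```python
import numpy as np

def additional_pixel_value(bands, mode="max_energy"):
    """Collapse the P per-band values of each pixel into one value.

    bands : (P, H, W) array, one plane per frequency band (visible and
            non-visible), holding reflected-energy values.
    mode  : "max_energy" keeps, for each pixel, the strongest band;
            "mean" averages the bands (a crude interpolation).
            Keeping the whole P-tuple instead would give the enriched
            image mentioned in the text.
    """
    if mode == "max_energy":
        return bands.max(axis=0)   # maximum of reflected energy
    if mode == "mean":
        return bands.mean(axis=0)  # interpolation across bands
    raise ValueError("unknown mode: %r" % (mode,))
```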
- a device for imaging a macroscopic plant object comprising
- reception means for receiving from at least one two-dimensional image capture device two-dimensional image data of said object, said two-dimensional image data comprising data acquired beyond the visible spectrum and
- processing means arranged to carry out a three-dimensional reconstruction of said object from at least this acquired data beyond the visible spectrum.
- This imaging device can for example include, or be integrated in, one or more processors, for example a microprocessor, a DSP ("Digital Signal Processor"), or the like.
- the receiving means may for example comprise an input pin, an input port, or the like.
- the processing means may for example comprise a processor core, or CPU (of the "Central Processing Unit"), or other.
- the imaging device may further comprise transmission means for transmitting three-dimensional reconstruction data from the processing means to a display device, for example a screen, a 3D monitor and/or other, with a view to a 3D rendering of the reconstructed object.
- an imaging system for a macroscopic plant object comprising an imaging device as described above and at least one capture device able to acquire two-dimensional data out of the visible spectrum and at a plurality of depths of field.
- displacement means may be provided for moving all or part of the capturing device, for example simply a lens, in order to adapt the depth of field.
- control means for example all or part of a processor.
- These control means can be integrated in the same processor as the processing means of the imaging device, or not.
- connections can be provided between these control means and these displacement means, over which the control means send messages in order to control the movement of all or part of the capture device, and another connection between the capture device and the imaging device, over which the capture device emits messages carrying the acquired data.
- a computer program product comprising instructions for performing the steps of the method described above when these instructions are executed by a processor.
- This program can for example be downloaded, stored on a hard-disk-type medium, or other.
- Figure 1 shows schematically an example of an imaging system according to one embodiment of the invention
- FIG. 2 is a flowchart corresponding to an exemplary method according to one embodiment of the invention. Similar references from one figure to another may be used to designate identical or similar objects.
- an imaging system 100 of a macroscopic plant object 2 comprises a two-dimensional capture device 3 and a terminal 5.
- the object 2 may for example comprise a portion of a plant, as illustrated in the figure.
- the capture device 3 may for example comprise a still camera, a video camera, or the like.
- the terminal 5 may for example include a computer, a smartphone, or other.
- the capture device comprises a lens 32, movable along an optical axis by displacement means 31, 33.
- These displacement means may for example comprise a stepping motor 33 and a processor 31 driving this motor 33.
- the processor 31 can thus impose a displacement of the lens 32, and thus an adaptation of the depth of field.
- a sensor 34 makes it possible to generate electrical signals carrying information of energy reflected by the object 2. These signals are received by the processor 31 and sent by a unidirectional data bus 42 to the terminal 5.
- the displacement of the lens 32 is controlled by the terminal 5, via another bus 41, separate from the bus 42 so as not to clutter it.
- the terminal comprises an imaging device 1, here a microprocessor, comprising:
- transmission means 13, for example one or more output pins, for transmitting positioning control messages for the lens 32,
- processing means 12, for example a processor core or CPU, for developing the messages intended for the stepper motor 33 and for performing a 3D reconstruction of the object 2 from the 2D image data coming from the capture device 3,
- transmission means (not shown) of reconstructed 3D data from the CPU 12, for transmitting this data for example to a screen for display.
- the devices 1, 3 may be separate and possibly remote from each other, but these devices 1, 3 can also be embedded in the same apparatus, for example a smartphone or a dedicated terminal.
- FIG. 2 is a flowchart of a method implemented in the imaging device 1.
- two-dimensional image data (p(x, y, i, j)) is received. More precisely, the lens referenced 32 in FIG. 1 is regularly moved from one position to another along the optical axis, so that N > 1 acquisitions are made, each acquisition corresponding to a depth of field.
- the parameter j indexes the acquisitions.
- Each measured value p is a value of reflected energy.
- the P frequency bands are disjoint and form a partition including visible and infra-red.
- the parameter i indexes the frequency bands.
- a registration is then made to match the pixels of the N acquisitions. For example, some images can be enlarged and fixed points found from one image to another. As the depth of field varies from one acquisition to another, certain pixel values, especially on the edges of the images, cannot be matched with the pixel values of the image acquired at the shortest distance from the object.
- the parameters x ', y' indicate the positions of the pixels after registration.
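The registration by enlargement mentioned above could, in a very simplified form, look like the sketch below. The magnification factor between acquisitions is assumed known here; a real implementation would estimate it from the fixed points found from one image to another.

```python
import numpy as np

def register_scale(img, scale):
    """Resample an image about its centre by a known magnification factor
    (nearest-neighbour sampling), so that its pixels line up with a
    reference acquisition.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys = np.clip(np.round((np.arange(h) - cy) / scale + cy).astype(int), 0, h - 1)
    xs = np.clip(np.round((np.arange(w) - cx) / scale + cx).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]
```

With scale = 1 the image is unchanged; with scale > 1 the centre of the image is enlarged, and the edge pixels that fall outside the frame are the ones that cannot be matched, as noted above.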
- the registration 202 is implemented by integrating the data acquired outside the visible spectrum, which is relatively advantageous compared to a method in which the reconstruction would be based on data acquired in the visible only and which would then impose a registration between reconstructed 3D data and 2D data outside the visible.
- a loop is then set up in order to traverse each of the N acquisitions, with initialization steps 203 of the index j, incrementing 206 of this index j, and with further a loop output test 207.
- This step 204 is performed for each of the pixels. Then, it is estimated a sharpness value for each of the pixels, during a step 205.
- step 205 follows step 204 within the same loop, but one skilled in the art will understand that in practice two further loops, indexed by x', y', may be implemented, because the estimation of the sharpness value H(x', y', j) of a pixel is a function of the additional values obtained for the neighboring pixels.
- the set of pixels is then scanned to estimate an additional value for each of the pixels before making a sharpness estimation.
- a sharpness operator of the type known from the prior art can be used, for example a gradient-based operator, as described in the article by S. Pertuz et al., "Analysis of focus measure operators for shape-from-focus", Pattern Recognition 46 (2013), pages 1415-1432, or another operator.
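As one concrete example of such a gradient-type operator, here is a sketch of the modified Laplacian, one of the focus measures surveyed by Pertuz et al. (an illustrative reimplementation, not their code).

```python
import numpy as np

def modified_laplacian(img):
    """Sharpness map using the modified Laplacian:

        |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|

    Borders are handled by edge replication.
    """
    p = np.pad(img.astype(float), 1, mode="edge")
    lx = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ly = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return lx + ly
```

A flat image yields zero sharpness everywhere, while a pixel standing out from its neighbours yields a high response.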
- when all N acquisitions have been processed, the loop exit test 207 is positive and new loops, again not shown, are put in place in order to traverse all the pixels.
- for each of the pixels, corresponding to a pair (x', y'), a depth of field value is estimated during a step 208 from the N additional values obtained for this pixel. One actually looks for the depth of field corresponding to a maximum of sharpness.
- For example, the condition p(x', y', j'_MAX) = max_j ( p(x', y', j) ) is checked, p denoting here the additional value of the pixel.
- This data may comprise, for each pixel, the estimated depth of field value at step 208 for this pixel, as well as an energy value p (x ', y').
- This value p (x ', y') may be the additional value corresponding to the depth of field estimated at step 208, for example.
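The interpolation evoked for step 208 could, for instance, be a parabolic fit through the sharpness values around the discrete maximum; this common refinement is offered as an assumption, the text not specifying the interpolation used.

```python
def refine_depth(depths, sharpness):
    """Sub-step depth estimate at a sharpness maximum.

    A parabola is fitted through the sharpness values around the discrete
    maximum and the depth at its apex is returned; uniform spacing of
    'depths' is assumed.
    """
    j = max(range(len(sharpness)), key=sharpness.__getitem__)
    if j == 0 or j == len(sharpness) - 1:
        return float(depths[j])            # maximum at a stack boundary
    s0, s1, s2 = sharpness[j - 1], sharpness[j], sharpness[j + 1]
    denom = s0 - 2.0 * s1 + s2
    if denom == 0:
        return float(depths[j])            # flat neighbourhood
    offset = 0.5 * (s0 - s2) / denom       # apex position, in steps
    step = depths[j + 1] - depths[j]
    return float(depths[j] + offset * step)
```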
- the reconstructed image can then be analyzed in an automated manner, for example using shape recognition tools, particularly in the context of a field counting application.
- the step of transmitting the reconstructed image data to a 3D engine is optional.
- the reconstructed image data may alternatively be transmitted to processing means arranged to derive from these data a height value, particularly in the case of a growth monitoring type application.
- These means can, for example, determine a maximum (or minimum) depth value from the received depth of field values and assign this maximum or minimum depth value to the height value.
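The height derivation just described might be sketched as follows; the convention that a downward-looking camera makes the top of the plant the minimum reconstructed depth is an assumption for illustration.

```python
def height_from_depths(depth_values, camera_above=True):
    """Retain an extreme depth-of-field value as the plant height.

    With a camera looking down, the top of the plant is the closest
    reconstructed point, i.e. the minimum depth; otherwise the maximum
    may be retained instead.
    """
    values = list(depth_values)
    return min(values) if camera_above else max(values)
```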
- a spectrum is developed for each pixel.
- the sharpness of each pixel is estimated from the spectrum of this pixel and the spectra of the neighboring pixels.
- the method described above can be implemented in the context of phenotyping, field counting, growth monitoring in the laboratory, or the like.
- This method can be implemented with a monocular tool and makes it possible to obtain 3D information, which can contribute to the quality of the variety selection in the case of phenotyping, to counting in the case of field counting, or to height measurements in the case of growth monitoring.
- the invention can thus find applications in agriculture, especially in viticulture, for example to characterize a crop in real time with an onboard system, for example to evaluate damage in a field, detect diseases early, or characterize a number of plants per square meter, an early yield, a leaf volume, a nitrogen content, a number of plants seeded/sown, or other.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Imaging of a macroscopic plant object
The invention relates to macroscopic plant imaging.
In the present application, "plant" covers both plants, including algae, and the macroscopic organisms commonly referred to as fungi, even if these organisms are not always recognized as plants proper.
By "macroscopic" is meant that the object to be imaged is visible to the naked eye. This object therefore has dimensions of at least 1/10th of a millimeter, advantageously at least one millimeter, for example at least one centimeter. The dimensions of the object may, however, be less than ten meters, for example less than one meter.
The invention can for example find an application in phenotyping, field counting, laboratory dimension measurements, etc.
Phenotyping is a technique for determining how a given plant variety reacts to a given environment. More precisely, a large number of plants, of the same variety or not, are subjected to various manipulations, for example of the water state of the soil, the temperature (lighting with sodium lamps, for example), the CO2 content of the ambient air, the inoculation of diseases, etc. Measurements of plant growth, architecture and transpiration are then carried out in order to better correlate environmental conditions with the state of the plants.
Imaging boxes comprising a set of sensors make it possible to estimate various parameters of the plants grown in these various environments. For example, an infra-red camera can be used to estimate nitrogen stress, a thermal sensor to estimate water stress, etc.
Current imaging boxes can thus characterize up to 1800 plants per day.
A varietal selection can then be made on the basis of this characterization, which will have shown that, in a given environment, one variety prospers better than another. Field counting is a technique for estimating the number of plant elements, for example the number of ears of wheat or bunches of grapes, produced in a given plot, from acquisitions made on that same plot, allowing a yield to be estimated, for example. Field counting conventionally involves pattern recognition software for identifying and counting plant elements from images.
The acquisition platforms used in macroscopic plant imaging may comprise several devices, for acquisition in the visible, in the infrared, etc.
3D images are sought in order to have more information available. For example, on a 2D image in which two ears of wheat overlap, the software may count only one ear. A three-dimensional image can thus limit this risk of error. In a phenotyping application, a 3D reconstruction can make it possible to estimate a leaf volume, rather than a leaf area, and more generally to obtain depth information. This can be particularly useful when studying competition between species, since the foliar cover of each plant can then be evaluated more precisely.
It has been envisaged to add a scanner to a platform of the type known from the prior art, but there is a need for a simpler method.
A 3D reconstruction is therefore sought.
Document CA 2 764 135 describes an example of a 3D reconstruction method in which a TOF ("Time of Flight") camera and a color camera are used. The data from these two cameras are combined to reconstruct a 3D image of the plant.
There is a need for a 3D reconstruction of macroscopic plant objects that reconciles simplicity and high image quality.
A method of imaging a macroscopic plant object is proposed, wherein:
- two-dimensional image data of said object are received, these data coming from at least one two-dimensional capture device, comprising for example an optical sensor, and comprising data acquired beyond the visible spectrum, and
- a three-dimensional reconstruction of said object is performed from at least these data acquired beyond the visible spectrum.
Ainsi, on prend en compte des informations correspondant à des longueurs d'ondes en dehors de la bande visible pour la reconstruction 3D d'un objet végétal macroscopique. En utilisant cette information en-dehors de la bande visible, on peut améliorer la précision de la reconstruction 3D. Thus, information corresponding to wavelengths outside the visible band is taken into account for the 3D reconstruction of a macroscopic plant object. By using this information outside the visible band, the accuracy of the 3D reconstruction can be improved.
L'invention peut en outre permettre d'éviter les recalages entre acquisitions réalisées hors du spectre visible et l'image 3D reconstruite, puisque l'image 3D est reconstruite à partir de ces acquisitions hors du spectre visible. The invention may furthermore make it possible to avoid registration between acquisitions made outside the visible spectrum and the reconstructed 3D image, since the 3D image is reconstructed from these acquisitions outside the visible spectrum.
En outre, ce procédé présente l'avantage d'une mise en œuvre avec peu ou pas de modifications d'une plate-forme existante, car la plupart de ces plates-formes intègrent déjà des capteurs en-dehors de la bande visible. In addition, this method has the advantage of implementation with little or no modifications of an existing platform, since most of these platforms already incorporate sensors outside the visible band.
L'objet végétal macroscopique peut par exemple comprendre : - une portion macroscopique d'un végétal, par exemple les racines ou un bourgeon, The macroscopic plant object may for example comprise: - a macroscopic portion of a plant, for example the roots or a bud,
- l'ensemble de ce végétal, ou bien encore - all of this plant, or even
- un ensemble macroscopique de plusieurs végétaux, en état de concurrence ou non. On peut ainsi effectuer une reconstruction de plusieurs végétaux différents dans une même scène, ou au contraire des mêmes végétaux. - a macroscopic set of several plants, whether in competition with one another or not. It is thus possible to reconstruct several different plants in the same scene, or on the contrary the same plants.
Très avantageusement, les données d'images bidimensionnelles reçues peuvent comprendre aussi des données acquises dans le spectre visible, et on peut utiliser aussi ces données acquises dans le spectre visible pour la reconstruction 3D. Very advantageously, the received two-dimensional image data may also comprise data acquired in the visible spectrum, and this data acquired in the visible spectrum may also be used for the 3D reconstruction.
Avantageusement, le procédé de reconstruction peut être un procédé SFF (de l'anglais « Shape From Focus »). Ce type de procédé s'est avéré bien adapté à l'imagerie du végétal macroscopique. Advantageously, the reconstruction process may be an SFF (Shape From Focus) process. This type of method has proved to be well suited to macroscopic plant imaging.
Ainsi, au cours de l'étape de réception, on peut recevoir N > 1 ensembles de données d'images bidimensionnelles, chaque ensemble étant acquis à une profondeur de champ correspondante et comprenant des valeurs d'une pluralité de pixels. Les ensembles peuvent comprendre des données acquises en dehors du spectre visible, et éventuellement des données acquises dans le spectre visible. Thus, during the reception step, N > 1 sets of two-dimensional image data can be received, each set being acquired at a corresponding depth of field and comprising values of a plurality of pixels. The sets may comprise data acquired outside the visible spectrum, and possibly data acquired in the visible spectrum.
Ces N ensembles peuvent être issus d'un même dispositif d'acquisition bidimensionnelle. Le procédé peut ainsi être mis en œuvre avec un équipement optique relativement simple. These N sets can come from the same two-dimensional acquisition device. The method can thus be implemented with relatively simple optical equipment.
On peut prévoir une étape de recalage pour mettre en correspondance des pixels de ces N ensembles. A registration step can be provided to match pixels of these N sets.
On peut ensuite estimer pour au moins un, et de préférence chaque, pixel, une valeur de profondeur de champ correspondant à un maximum de netteté sur ces N acquisitions. Le procédé peut ainsi comprendre, pour chaque pixel mis en correspondance, une étape de calcul pour estimer, à partir des N ensembles de données d'images reçues, une valeur de profondeur de champ correspondant à un maximum de netteté. It is then possible to estimate, for at least one, and preferably each, pixel, a depth of field value corresponding to a maximum of sharpness over these N acquisitions. The method may thus comprise, for each matched pixel, a calculation step for estimating, from the N sets of received image data, a depth of field value corresponding to a maximum of sharpness.
Par exemple, on peut estimer pour au moins un, et de préférence chaque, pixel, et pour au moins un, et de préférence chaque, ensemble, une valeur de netteté à partir des données de cet ensemble. On choisit ensuite la valeur de profondeur de champ correspondant à la valeur de netteté maximale, ou bien encore on calcule par interpolation une valeur de profondeur de champ correspondant à un maximum de netteté. For example, one can estimate, for at least one, and preferably each, pixel, and for at least one, and preferably each, set, a sharpness value from the data of this set. The depth of field value corresponding to the maximum sharpness value is then chosen, or else a depth of field value corresponding to a maximum of sharpness is calculated by interpolation.
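To make the choice between these two options concrete, here is a minimal per-pixel sketch, assuming the N sharpness values and the N depth-of-field values are already available; the function name and the parabolic-interpolation variant are illustrative choices, not taken from the text.

```python
def depth_from_sharpness(sharpness, depths):
    """For one pixel, return the depth of field corresponding to a maximum
    of sharpness over the N acquisitions.

    sharpness: N sharpness values, one per acquisition.
    depths:    the N matching depth-of-field values (assumed evenly spaced).
    """
    j = max(range(len(sharpness)), key=lambda k: sharpness[k])
    # Interpolation variant: fit a parabola through the three samples
    # around the maximum to refine the depth estimate (interior maxima only).
    if 0 < j < len(sharpness) - 1:
        s0, s1, s2 = sharpness[j - 1], sharpness[j], sharpness[j + 1]
        denom = s0 - 2 * s1 + s2
        if denom != 0:
            offset = 0.5 * (s0 - s2) / denom  # vertex of the parabola
            return depths[j] + offset * (depths[j + 1] - depths[j])
    # Simple variant: keep the depth of the sharpest acquisition.
    return depths[j]
```

With a symmetric sharpness profile the interpolated depth coincides with the sharpest acquisition; an asymmetric profile shifts it toward the sharper neighbour.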
L'invention n'est en rien limitée à une reconstruction SFF. The invention is in no way limited to SFF reconstruction.
Par exemple, on pourrait choisir une technique de stéréovision active, dite aussi lumière structurée. On projette alors sur l'objet un motif connu, puis on capte par exemple une seule image. Les données d'images bidimensionnelles reçues peuvent alors comprendre une seule image, à partir de laquelle on effectue une reconstruction 3D en analysant la forme sur cette image de ce motif a priori connu. For example, one could choose an active stereovision technique, also called structured light. A known pattern is then projected onto the object, and then, for example, a single image is captured. The received two-dimensional image data can then comprise a single image, from which a 3D reconstruction is performed by analyzing the shape, on this image, of this a priori known pattern.
Alternativement, le procédé de reconstruction peut être un procédé de stéréovision. Les données d'images reçues comprennent alors une image gauche et une image droite. Alternatively, the reconstruction method may be a stereovision process. The received image data then comprises a left image and a right image.
Avantageusement, les données acquises dans le spectre non-visible peuvent comprendre des données acquises dans le spectre infrarouge, par exemple des données correspondant à des longueurs d'ondes entre 700 nm et 5 mm, avantageusement entre 780 nm et 50 μm, avantageusement entre 800 nm et 20 μm. Ces données peuvent s'avérer relativement pertinentes lorsque l'objet est un végétal. Alternativement, ou en complément, les données acquises dans le spectre non-visible peuvent comprendre des données acquises dans le spectre ultraviolet, ou autre. Advantageously, the data acquired in the non-visible spectrum may comprise data acquired in the infrared spectrum, for example data corresponding to wavelengths between 700 nm and 5 mm, advantageously between 780 nm and 50 μm, advantageously between 800 nm and 20 μm. This data may prove relatively relevant when the object is a plant. Alternatively, or in addition, the data acquired in the non-visible spectrum may comprise data acquired in the ultraviolet spectrum, or the like.
Dans un mode de réalisation avantageux, les données d'images bidimensionnelles reçues peuvent comprendre, pour chaque pixel, un jeu de P > 1 valeurs de pixel, chaque valeur étant issue d'une acquisition réalisée dans une bande de fréquences correspondante. On prévoit ainsi P bandes de fréquences, par exemple P bandes de fréquences disjointes et recouvrant une large gamme du spectre, par exemple le visible et l'infra-rouge. In an advantageous embodiment, the received two-dimensional image data may comprise, for each pixel, a set of P > 1 pixel values, each value coming from an acquisition made in a corresponding frequency band. P frequency bands are thus provided, for example P disjoint frequency bands covering a wide range of the spectrum, for example the visible and the infrared.
Dans le cas d'une reconstruction SFF en particulier, le procédé peut alors comprendre, pour au moins un, et de préférence chaque, pixel, et pour au moins un, et de préférence chacun, des N ensembles, une étape consistant à estimer une valeur de pixel additionnelle à partir des P > 1 valeurs de pixels correspondant à ce pixel. In the case of an SFF reconstruction in particular, the method may then comprise, for at least one, and preferably each, pixel, and for at least one, and preferably each, of the N sets, a step of estimating an additional pixel value from the P > 1 pixel values corresponding to this pixel.
L'étape de calcul conduisant à estimer une profondeur de champ pour un pixel peut ensuite être effectuée à partir d'au moins une, et de préférence de N, valeurs additionnelles ainsi estimées pour ce pixel, et éventuellement avec des valeurs de pixels voisins, par exemple des valeurs additionnelles estimées pour les pixels voisins. The calculation step leading to estimating a depth of field for a pixel can then be carried out starting from at least one, and preferably N, additional values thus estimated for this pixel, and possibly with values of neighboring pixels, for example, additional values estimated for the neighboring pixels.
Lors de l'estimation de cette valeur additionnelle on peut par exemple choisir une valeur parmi les P valeurs correspondant à ce pixel, ou bien encore procéder par interpolation. When estimating this additional value, it is possible, for example, to choose one value among the P values corresponding to this pixel, or else to proceed by interpolation.
Cette valeur de pixel additionnelle peut correspondre par exemple à un maximum d'énergie réfléchie. This additional pixel value may correspond for example to a maximum of reflected energy.
La valeur de pixel additionnelle peut être estimée de façon à correspondre à une ou plusieurs bandes de fréquences identifiées comme particulièrement riches en information. The additional pixel value may be estimated to correspond to one or more frequency bands identified as particularly information-rich.
Alternativement, cette valeur de pixel additionnelle peut être estimée de façon à correspondre à un maximum de netteté. La valeur de pixel additionnelle peut alors être obtenue en utilisant en outre les valeurs correspondant aux pixels voisins. Alternatively, this additional pixel value can be estimated so as to correspond to a maximum of sharpness. The additional pixel value can then be obtained by further using the values corresponding to the neighboring pixels.
L'invention n'est donc en rien limitée par la façon dont est déterminée la valeur de pixel additionnelle, pourvu que cette estimation soit fonction d'au moins certaines des P valeurs de pixels correspondant à ce pixel. Dans un autre mode de réalisation, le procédé peut comprendre, pour au moins un, et de préférence chaque, pixel, et pour au moins un, et de préférence chacun, des N ensembles, une étape consistant à combiner au moins certaines des P valeurs de pixel correspondant à ce pixel de façon à obtenir une valeur de pixel enrichie, par exemple un P-couplet de valeurs. En réitérant cette étape pour chaque pixel d'un ensemble, on peut ainsi obtenir une image enrichie. The invention is therefore in no way limited by the way in which the additional pixel value is determined, provided that this estimate is a function of at least some of the P pixel values corresponding to this pixel. In another embodiment, the method may comprise, for at least one, and preferably each, pixel, and for at least one, and preferably each, of the N sets, a step of combining at least some of the P pixel values corresponding to this pixel so as to obtain an enriched pixel value, for example a P-tuple of values. By repeating this step for each pixel of a set, an enriched image can thus be obtained.
Pour chaque pixel, on pourra estimer à partir de la ou des images enrichies, par exemple à partir de N images enrichies, une valeur de profondeur de champ correspondant à un maximum de netteté. For each pixel, it will be possible to estimate from the enriched image or images, for example from N enriched images, a depth of field value corresponding to a maximum of sharpness.
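The two per-pixel fusion strategies described above — a single additional value (here, the maximum reflected energy over the P bands) or an enriched value keeping several band values together, possibly restricted to bands identified as information-rich — might be sketched as follows (hypothetical names, not taken from the text):

```python
def additional_value(band_values):
    """Additional pixel value: keep the strongest of the P reflected-energy
    values measured for this pixel, whether it falls in the visible band
    or beyond it (e.g. infrared)."""
    return max(band_values)

def enriched_value(band_values, keep=None):
    """Enriched pixel value: keep several of the P band values together as
    a tuple, optionally restricted to an index list of information-rich
    bands (illustrative convention)."""
    if keep is None:
        return tuple(band_values)
    return tuple(band_values[i] for i in keep)
```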
Il est en outre proposé un dispositif d'imagerie d'un objet végétal macroscopique, comprenant There is further provided a device for imaging a macroscopic plant object, comprising
- des moyens de réception, pour recevoir d'au moins un dispositif de captation bidimensionnelle des données d'images bidimensionnelles dudit objet, ces données d'images bidimensionnelles comprenant des données acquises au-delà du spectre visible et reception means, for receiving from at least one two-dimensional image capture device two-dimensional image data of said object, said two-dimensional image data comprising data acquired beyond the visible spectrum and
- des moyens de traitement agencés pour effectuer une reconstruction tridimensionnelle dudit objet à partir au moins de ces données acquises au-delà du spectre visible. - processing means arranged to perform a three-dimensional reconstruction of said object at least from this data acquired beyond the visible spectrum.
Ce dispositif d'imagerie peut par exemple comprendre ou être intégré dans un ou plusieurs processeurs, par exemple dans un microprocesseur, un DSP (de l'anglais « Digital Signal Processor »), ou autre. This imaging device may for example comprise, or be integrated in, one or more processors, for example a microprocessor, a DSP ("Digital Signal Processor"), or the like.
Les moyens de réception peuvent par exemple comprendre une broche d'entrée, un port d'entrée, ou autre. The receiving means may for example comprise an input pin, an input port, or the like.
Les moyens de traitement peuvent par exemple comprendre un cœur de processeur, ou CPU (de l'anglais « Central Processing Unit »), ou autre. The processing means may for example comprise a processor core, or CPU ("Central Processing Unit"), or the like.
Le dispositif d'imagerie peut comprendre en outre des moyens de transmission pour transmettre des données de reconstruction tridimensionnelles issues des moyens de traitement vers un dispositif d'affichage, par exemple un écran, un moniteur 3D et/ou autre, en vue d'un rendu 3D de l'objet reconstruit. The imaging device may further comprise transmission means for transmitting three-dimensional reconstruction data from the processing means to a display device, for example a screen, a 3D monitor and / or other, with a view to a 3D rendering of the reconstructed object.
Il est en outre proposé un système d'imagerie d'un objet végétal macroscopique, comprenant un dispositif d'imagerie tel que décrit ci-dessus et au moins un dispositif de captation apte à acquérir des données bidimensionnelles hors du spectre visible et à une pluralité de profondeurs de champ. In addition, an imaging system for a macroscopic plant object is proposed, comprising an imaging device as described above and at least one capture device able to acquire two-dimensional data outside the visible spectrum and at a plurality of depths of field.
A cet effet, on peut prévoir des moyens de déplacement, pour entraîner en mouvement tout ou partie du dispositif de captation, par exemple simplement une lentille, afin d'adapter la profondeur de champ. For this purpose, displacement means may be provided for moving all or part of the capturing device, for example simply a lens, in order to adapt the depth of field.
Ces moyens de déplacement peuvent être pilotés par des moyens de contrôle, par exemple tout ou partie d'un processeur. Ces moyens de contrôle peuvent être intégrés dans le même processeur que les moyens de traitement du dispositif de gestion, ou non. These moving means can be controlled by control means, for example all or part of a processor. These control means can be integrated in the same processor as the processing means of the management device, or not.
Avantageusement, on peut prévoir une liaison entre les moyens de contrôle et ces moyens de déplacement, sur laquelle sont émis par les moyens de contrôle des messages afin de piloter le déplacement de tout ou partie du dispositif de captation, et une autre liaison entre le dispositif de captation et le dispositif d'imagerie, sur laquelle sont émis par le dispositif de captation des messages transportant les données acquises. Advantageously, a link can be provided between the control means and these displacement means, over which messages are sent by the control means in order to drive the movement of all or part of the capture device, and another link between the capture device and the imaging device, over which messages carrying the acquired data are sent by the capture device.
L'utilisation de ces deux liaisons distinctes peut permettre de diminuer l'encombrement de la liaison de données entre les dispositifs de captation et le dispositif d'imagerie. On pourra prévoir d'utiliser un bus unidirectionnel. The use of these two separate links can reduce the congestion of the data link between the capturing devices and the imaging device. It will be possible to use a unidirectional bus.
Il est en outre proposé un produit programme d'ordinateur comprenant des instructions pour effectuer les étapes du procédé décrit ci-dessus lorsque ces instructions sont exécutées par un processeur. Ce programme peut par exemple être téléchargé, sauvegardé sur un support de type disque dur, ou autre. There is further provided a computer program product comprising instructions for performing the steps of the method described above when these instructions are executed by a processor. This program can for example be downloaded, saved on a hard disk type support, or other.
Des modes de réalisation de l'invention sont à présent décrits en référence aux dessins annexés sur lesquels : Embodiments of the invention are now described with reference to the accompanying drawings in which:
La figure 1 montre schématiquement un exemple de système d'imagerie selon un mode de réalisation de l'invention ; Figure 1 shows schematically an example of an imaging system according to one embodiment of the invention;
La figure 2 est un organigramme correspondant à un exemple de procédé selon un mode de réalisation de l'invention. Des références semblables d'une figure à l'autre peuvent être utilisées pour désigner des objets semblables ou similaires. FIG. 2 is a flowchart corresponding to an exemplary method according to one embodiment of the invention. Similar references from one figure to another may be used to designate identical or similar objects.
En référence à la figure 1, un système d'imagerie 100 d'un objet végétal macroscopique 2 comprend un dispositif de captation bidimensionnelle 3 et un terminal 5. L'objet 2 peut par exemple comprendre une portion d'un plant végétal, comme illustré sur la figure. Referring to FIG. 1, an imaging system 100 for a macroscopic plant object 2 comprises a two-dimensional capture device 3 and a terminal 5. The object 2 may for example comprise a portion of a plant, as illustrated in the figure.
Le dispositif de captation 3 peut par exemple comprendre une caméra, un appareil-photo, ou autre. The capture device 3 may for example comprise a video camera, a still camera, or the like.
Le terminal 5 peut par exemple comprendre un ordinateur, un téléphone intelligent, ou autre. The terminal 5 may for example include a computer, a smartphone, or other.
Le dispositif de captation comprend une lentille 32, entraînable en mouvement le long d'un axe optique par des moyens de déplacement 31, 33. Ces moyens de déplacement peuvent par exemple comprendre un moteur pas à pas 33 et un processeur 31 pilotant ce moteur 33. Le processeur 31 peut ainsi imposer un déplacement de la lentille 32, et donc une adaptation de la profondeur de champ. The capture device comprises a lens 32, drivable in movement along an optical axis by displacement means 31, 33. These displacement means may for example comprise a stepping motor 33 and a processor 31 driving this motor 33. The processor 31 can thus impose a displacement of the lens 32, and thus an adaptation of the depth of field.
Un capteur 34 permet de générer des signaux électriques transportant des informations d'énergie réfléchie par l'objet 2. Ces signaux sont reçus par le processeur 31 et envoyés par un bus unidirectionnel de données 42 au terminal 5. A sensor 34 makes it possible to generate electrical signals carrying information of energy reflected by the object 2. These signals are received by the processor 31 and sent by a unidirectional data bus 42 to the terminal 5.
Le déplacement de la lentille 32 est contrôlé par le terminal 5, via un autre bus 41, distinct du bus 42, afin d'éviter d'encombrer le bus 42. The displacement of the lens 32 is controlled by the terminal 5 via another bus 41, separate from the bus 42, in order to avoid cluttering the bus 42.
Le terminal comprend un dispositif d'imagerie 1 , ici un microprocesseur, comprenant : The terminal comprises an imaging device 1, here a microprocessor, comprising:
- des moyens de réception 11 de données mesurées issues du dispositif de captation 3, par exemple une ou plusieurs broches d'entrée, - receiving means 11 for measured data coming from the capture device 3, for example one or more input pins,
- des moyens de transmission 13 de message de commande de positionnement de la lentille 32, par exemple une ou plusieurs broches de sortie, - transmission means 13 for a positioning control message for the lens 32, for example one or more output pins,
- des moyens de traitement 12, par exemple un cœur de processeur ou CPU, pour élaborer les messages destinés au moteur pas à pas 33, et pour effectuer une reconstruction 3D de l'objet 2 à partir des données d'images 2D issues du dispositif de captation 3, - processing means 12, for example a processor core or CPU, for generating the messages intended for the stepper motor 33 and for performing a 3D reconstruction of the object 2 from the 2D image data coming from the capture device 3,
- des moyens de transmission (non représentés) de données 3D reconstruites issues du CPU 12, pour transmettre ces données par exemple vers un écran en vue d'un affichage. transmission means (not shown) of reconstructed 3D data from the CPU 12, for transmitting this data for example to a screen for display.
Les dispositifs 1, 3 peuvent être distincts et éventuellement éloignés l'un de l'autre, mais on peut prévoir d'embarquer ces dispositifs 1, 3 dans un même appareil, par exemple un téléphone intelligent ou un terminal dédié. The devices 1, 3 may be separate and possibly remote from each other, but these devices 1, 3 may also be embedded in a single apparatus, for example a smartphone or a dedicated terminal.
La figure 2 est un organigramme d'un procédé mis en œuvre dans le dispositif d'imagerie 1. FIG. 2 is a flowchart of a method implemented in the imaging device 1.
Au cours d'une étape 201, on reçoit des données d'images bidimensionnelles {p(x,y,i,j)}. Plus précisément, la lentille référencée 32 sur la figure 1 est régulièrement déplacée d'une position à l'autre le long de l'axe optique, de sorte que N > 1 acquisitions sont réalisées, chaque acquisition correspondant à une profondeur de champ. During a step 201, two-dimensional image data {p(x,y,i,j)} is received. More precisely, the lens referenced 32 in FIG. 1 is regularly moved from one position to another along the optical axis, so that N > 1 acquisitions are made, each acquisition corresponding to a depth of field.
Sur la figure 2, le paramètre j indice les acquisitions. In FIG. 2, the parameter j indexes the acquisitions.
Pour chaque acquisition, on reçoit un ensemble de valeurs de pixels, x et y indiçant la position des pixels sur une image 2D. Chaque valeur p mesurée est une valeur d'énergie réfléchie. For each acquisition, we receive a set of pixel values, x and y indicating the position of the pixels on a 2D image. Each measured value p is a value of reflected energy.
En outre, pour une acquisition j donnée, pour chaque pixel correspondant à un couple (x,y), on reçoit P valeurs d'énergie réfléchie, chaque valeur d'énergie réfléchie correspondant à une bande spectrale. In addition, for a given acquisition, for each pixel corresponding to a pair (x, y), P values of reflected energy are received, each reflected energy value corresponding to a spectral band.
Les P bandes de fréquences sont disjointes et forment une partition incluant le visible et l'infra-rouge. The P frequency bands are disjoint and form a partition including the visible and the infrared.
Sur la figure 2, le paramètre i indice les bandes de fréquences. On procède ensuite à un recalage pour mettre en correspondance les pixels des N acquisitions réalisées. On procède par exemple en agrandissant certaines images et en repérant des points fixes d'une image à l'autre. La profondeur de champ variant d'une acquisition à l'autre, certaines valeurs de pixel, notamment sur les bords des images, ne pourront être mises en correspondance avec les valeurs de pixels de l'image acquise à la plus courte distance de l'objet. In FIG. 2, the parameter i indexes the frequency bands. A registration is then performed to match the pixels of the N acquisitions made. One proceeds, for example, by enlarging certain images and locating fixed points from one image to another. Since the depth of field varies from one acquisition to another, certain pixel values, in particular on the edges of the images, cannot be matched with the pixel values of the image acquired at the shortest distance from the object.
Sur la figure 2, les paramètres x', y' indiquent les positions des pixels après recalage. In FIG. 2, the parameters x ', y' indicate the positions of the pixels after registration.
Ainsi le recalage 202 est mis en place en intégrant les données acquises en-dehors du spectre visible, ce qui est relativement avantageux par rapport à un procédé dans lequel la reconstruction serait basée sur des données acquises dans le visible seulement et qui imposerait ensuite un recalage entre données 3D reconstruites et données 2D en-dehors du visible. Thus, the registration 202 is implemented by integrating the data acquired outside the visible spectrum, which is relatively advantageous compared with a method in which the reconstruction would be based on data acquired in the visible only and which would then require a registration between reconstructed 3D data and 2D data outside the visible.
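Under the strong simplifying assumption that the focus change only rescales the image about its centre, the enlargement step of this registration could be sketched as below; in practice the scale factor would be estimated from the fixed points tracked from one image to the next (the function and its parameters are hypothetical).

```python
def scale_about_center(points, factor, center):
    """Rescale pixel coordinates about the image centre to compensate the
    magnification change between two focus positions (registration sketch).

    points: iterable of (x, y) pixel coordinates.
    factor: scale factor, e.g. estimated from tracked fixed points.
    center: (cx, cy) coordinates of the image centre.
    """
    cx, cy = center
    return [(cx + factor * (x - cx), cy + factor * (y - cy))
            for x, y in points]
```

Pixels mapped outside the reference frame by this rescaling are precisely those, near the image edges, that cannot be matched with the closest-distance acquisition.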
Une boucle est ensuite mise en place afin de parcourir chacune des N acquisitions, avec des étapes d'initialisation 203 de l'indice j, d'incrémentation 206 de cet indice j, et avec en outre un test de sortie de boucle 207. A loop is then set up in order to traverse each of the N acquisitions, with initialization steps 203 of the index j, incrementing 206 of this index j, and with further a loop output test 207.
A des fins de simplicité, on n'a pas représenté les boucles permettant de parcourir l'ensemble des pixels d'une même acquisition, qui porteraient sur les couples (x',y'), mais l'homme du métier comprendra que les étapes 204 et 205 sont effectuées pour chacun des pixels après recalage. For the sake of simplicity, the loops for running through all the pixels of a single acquisition, which would bear on the pairs (x', y'), have not been shown, but the person skilled in the art will understand that steps 204 and 205 are performed for each of the pixels after registration.
Lors de l'exécution d'une boucle, et pour chacun des pixels recalés, on estime au cours d'une étape 204 une valeur de pixel additionnelle pE(x', y', j). Cette estimation est effectuée à partir des P valeurs correspondant à ce pixel. When running through a loop, and for each of the registered pixels, an additional pixel value pE(x', y', j) is estimated during a step 204. This estimate is made from the P values corresponding to this pixel.
Dans cet exemple, on choisit parmi ces P valeurs la valeur maximale, c'est-à-dire que pE(x', y', j) = max({p(x', y', i, j)}i=1,…,P). In this example, the maximum value is chosen among these P values, that is to say that pE(x', y', j) = max({p(x', y', i, j)}i=1,…,P).
Si cette valeur maximale est atteinte dans l'infrarouge, c'est ainsi une information acquise en dehors du spectre visible qui sera retenue pour la suite du procédé, permettant ainsi une meilleure qualité de reconstruction 3D. If this maximum value is reached in the infrared, it is thus information acquired outside the visible spectrum that will be retained for the rest of the process, thus allowing a better 3D reconstruction quality.
Cette étape 204 est effectuée pour chacun des pixels. Puis, on estime une valeur de netteté pour chacun des pixels, au cours d'une étape 205. This step 204 is performed for each of the pixels. Then, a sharpness value is estimated for each of the pixels, during a step 205.
Sur la figure 2, l'étape 205 succède à l'étape 204 à l'intérieur d'une même boucle, mais l'homme du métier comprendra qu'en pratique, deux autres boucles, indicées x', y', peuvent être mises en place car l'estimation de la valeur de netteté H(x',y',j) d'un pixel est fonction des valeurs additionnelles obtenues pour les pixels voisins. In FIG. 2, step 205 follows step 204 within the same loop, but the person skilled in the art will understand that, in practice, two other loops, indexed by x', y', may be put in place, because the estimation of the sharpness value H(x', y', j) of a pixel is a function of the additional values obtained for the neighboring pixels.
On parcourt donc l'ensemble des pixels pour estimer une valeur additionnelle pour chacun des pixels avant de procéder à une estimation de netteté. The set of pixels is then scanned to estimate an additional value for each of the pixels before making a sharpness estimation.
Au cours de cette étape 205, on pourra utiliser un opérateur de netteté du type connu de l'art antérieur, par exemple un opérateur basé sur un gradient, tel que décrit dans l'article de S. Pertuz et al., « Analysis of focus measure operators for shape-from-focus », Pattern Recognition (2013), pages 1415–1432, ou un autre opérateur. During this step 205, a sharpness operator of the type known from the prior art may be used, for example a gradient-based operator, as described in the article by S. Pertuz et al., "Analysis of focus measure operators for shape-from-focus", Pattern Recognition (2013), pages 1415–1432, or another operator.
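As a concrete instance of such a gradient-family focus measure, the sketch below computes the modified Laplacian of a single pixel, one of the operators covered by the Pertuz et al. survey; the plain-list image representation and the function name are illustrative choices only.

```python
def modified_laplacian_sharpness(img, x, y):
    """Sharpness of interior pixel (x, y) via the modified Laplacian:
    sum of the absolute second differences along x and along y.

    img is a 2D list of intensities, img[y][x]; in the method above these
    would be the additional pixel values of one acquisition.
    """
    lx = abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
    ly = abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x])
    return lx + ly
```

A uniformly lit (defocused) region yields a low value, while a sharp intensity transition yields a high value, which is what the depth selection of step 208 relies on.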
Lorsque la netteté de chaque pixel de chacune des N acquisitions est ainsi estimée, le test de sortie de boucle 207 est positif et de nouvelles boucles, toujours non représentées, sont mises en place afin de parcourir l'ensemble des pixels recalés. Pour chacun des pixels, correspondant à un couple (x',y'), on estime au cours d'une étape 208 une valeur de profondeur de champ à partir des N valeurs de netteté obtenues pour ce pixel. On cherche en fait la profondeur de champ correspondant à un maximum de netteté. When the sharpness of each pixel of each of the N acquisitions has thus been estimated, the loop exit test 207 is positive and new loops, again not shown, are put in place in order to run through all the registered pixels. For each of the pixels, corresponding to a pair (x', y'), a depth of field value is estimated during a step 208 from the N sharpness values obtained for this pixel. What is sought is in fact the depth of field corresponding to a maximum of sharpness.
For example, the index j'MAX is chosen for which the relation

p(x', y', j'MAX) = max({p(x', y', j'), 1 ≤ j' ≤ N})

holds.
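The per-pixel selection of step 208 amounts to an argmax over the stack of N registered sharpness maps. A minimal sketch, assuming the N additional values are stored as an (N, H, W) array (a hypothetical layout):

```python
import numpy as np

def depth_from_focus(stack):
    """For each pixel (x', y'), pick the acquisition index j'MAX whose
    additional (sharpness) value p(x', y', j') is maximal, as in step
    208, together with the corresponding energy value p(x', y') that
    step 209 may transmit alongside it.

    stack: array of shape (N, H, W) holding p(x', y', j').
    """
    j_max = np.argmax(stack, axis=0)   # index of the sharpest acquisition
    p_max = np.max(stack, axis=0)      # energy value at that index
    return j_max, p_max
```

Each selected index can then be mapped to a metric depth via the known focus displacement between the N acquisitions.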
Then, during a step 209, reconstructed image data are transmitted to a 3D engine. These data may comprise, for each pixel, the depth-of-field value estimated at step 208 for that pixel, as well as an energy value p(x', y').
This value p(x', y') may, for example, be the additional value corresponding to the depth of field estimated at step 208.
The reconstructed image can then be analyzed in an automated manner, for example using shape-recognition tools, in particular for an in-field counting application.
The step of transmitting the reconstructed image data to a 3D engine is optional. For example, transmission could instead be provided to processing means arranged to derive a height value from these data, in particular in the case of a growth-monitoring application. These means can, for example, determine a maximum (or minimum) depth value from the received depth-of-field values and assign that maximum or minimum depth-of-field value to the height value.
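The height derivation described above reduces to taking an extremum of the depth map. A minimal sketch, assuming a downward-looking camera for which the plant top is the minimum depth (the sign convention and function name are assumptions):

```python
import numpy as np

def height_from_depth(depth_map, camera_looks_down=True):
    """Derive a single height value from a per-pixel depth-of-field map,
    as the optional processing means might do.

    With a downward-looking camera, the plant top is the minimum depth;
    otherwise the maximum depth value is taken.
    """
    if camera_looks_down:
        return float(np.min(depth_map))
    return float(np.max(depth_map))
```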
In another embodiment of the invention, not illustrated, rather than reducing the P pixel values to a single additional value during step 204, a spectrum is constructed for each pixel. In other words, once all the pixels have been traversed, an enriched image has been obtained, and the sharpness of each pixel is estimated from the spectrum of that pixel and the spectra of the neighboring pixels.
The method described above can be implemented in the context of phenotyping, in-field counting operations, growth monitoring in the laboratory, or the like. This method can be implemented with a monocular tool and makes it possible to obtain 3D information, which can contribute to the quality of varietal selection in the case of phenotyping, to the counting itself in the case of in-field counting, or to height measurements in the case of growth monitoring.
The invention may thus find applications in agriculture, in particular in viticulture, for example to characterize a crop in real time with an on-board system, for example to assess damage in a field, detect diseases at an early stage, or characterize a number of plants per square meter, an early yield, a leaf volume, a nitrogen content, a number of emerged/sown plants, or other parameters.
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1552391A FR3034234B1 (en) | 2015-03-23 | 2015-03-23 | IMAGING A MACROSCOPIC VEGETABLE OBJECT |
PCT/FR2016/050634 WO2016151240A1 (en) | 2015-03-23 | 2016-03-22 | Imaging of a macroscopic plant object |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3274961A1 true EP3274961A1 (en) | 2018-01-31 |
Family
ID=53177655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16718400.1A Withdrawn EP3274961A1 (en) | 2015-03-23 | 2016-03-22 | Imaging of a macroscopic plant object |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3274961A1 (en) |
FR (1) | FR3034234B1 (en) |
WO (1) | WO2016151240A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3057095B1 (en) * | 2016-10-03 | 2019-08-16 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | METHOD FOR CONSTRUCTING A DEPTH MAP OF A SCENE AND / OR A COMPLETELY FOCUSED IMAGE |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5555464A (en) * | 1995-07-28 | 1996-09-10 | Lockheed Martin Corporation | Red/near-infrared filtering for CCD cameras |
DE102009023896B4 (en) | 2009-06-04 | 2015-06-18 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for detecting a plant |
- 2015-03-23: FR FR1552391A patent/FR3034234B1/en not_active Expired - Fee Related
- 2016-03-22: EP EP16718400.1A patent/EP3274961A1/en not_active Withdrawn
- 2016-03-22: WO PCT/FR2016/050634 patent/WO2016151240A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
FR3034234B1 (en) | 2017-04-21 |
FR3034234A1 (en) | 2016-09-30 |
WO2016151240A1 (en) | 2016-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chianucci | An overview of in situ digital canopy photography in forestry | |
EP2724203B1 (en) | Generation of map data | |
AU2016230926A1 (en) | Method and apparatus for processing spectral images | |
CN105675549A (en) | Portable crop parameter measurement and growth vigor intelligent analysis device and method | |
US10497139B2 (en) | Method and system for photogrammetric processing of images | |
US9294682B2 (en) | Information processing device, information processing method, and program for light source estimation | |
Fang et al. | High-throughput volumetric reconstruction for 3D wheat plant architecture studies | |
Mäkeläinen et al. | 2D hyperspectral frame imager camera data in photogrammetric mosaicking | |
CN106524909A (en) | Three-dimensional image acquisition method and apparatus | |
WO2014060657A1 (en) | Method for designing a passive single-channel imager capable of estimating depth of field | |
FR3011960A1 (en) | METHOD FOR IDENTIFICATION FROM A SPATIAL AND SPECTRAL OBJECT MODEL | |
WO2016151240A1 (en) | Imaging of a macroscopic plant object | |
CN110476412B (en) | Information processing apparatus, information processing method, and storage medium | |
EP3384255B1 (en) | Method of hyperspectral measurement | |
FR3067824B1 (en) | METHOD AND DEVICE FOR IMAGING A PLANT | |
KR102315329B1 (en) | Monitoring method of ecological disturbance species using drone hyperspectral imaging | |
JP2013033006A (en) | Spectroscopic information acquiring apparatus, spectroscopic information acquiring method, and program for spectroscopic information acquisition | |
JP6931401B2 (en) | Imaging equipment and image processing equipment | |
CN117456385A (en) | Bamboo forest canopy chlorophyll content estimation method based on unmanned aerial vehicle hyperspectral technology | |
CN113874710B (en) | Image processing method, image processing apparatus, imaging system, and program | |
FR2978276A1 (en) | METHOD FOR MODELING BUILDINGS FROM A GEOREFERENCED IMAGE | |
Zhao et al. | High throughput system for plant height and hyperspectral measurement | |
EP3380947B1 (en) | Method and device for forecasting cloudiness by statistical processing of data selected by spatial analysis | |
EP3847622B1 (en) | Measurement of land surface albedo, without needing a conventional albedometer | |
Ding et al. | Method for GPU-based spectral data cube reconstruction of integral field snapshot imaging spectrometers |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
| 17P | Request for examination filed | Effective date: 20171023
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the european patent | Extension state: BA ME
| DAV | Request for validation of the european patent (deleted) |
| DAX | Request for extension of the european patent (deleted) |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS
| 17Q | First examination report despatched | Effective date: 20180801
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
| 18D | Application deemed to be withdrawn | Effective date: 20190212