EP3969899A1 - Systems and methods for phenotyping

Systems and methods for phenotyping

Info

Publication number
EP3969899A1
Authority
EP
European Patent Office
Prior art keywords
sensor
sensors
images
plant
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20805595.4A
Other languages
English (en)
French (fr)
Other versions
EP3969899A4 (de)
Inventor
Lior COEN
Victor Alchanatis
Oshry MARKOVICH
Yoav ZUR
Daniel Koster
Yogev MONTEKYO
Hagai Karchi
Ilya Leizerson
Sharone ALONI
Anna BROOK
Zur Granevitze
Alon ZVIRIN
Yaron Honen
Ron Kimmel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hazera Seeds Ltd
Sensilize Ltd
Carmel Haifa University Economic Corp Ltd
Evogene Ltd
Technion Research and Development Foundation Ltd
Rahan Meristem (1998) Ltd
Opgal Optronics Industries Ltd
Israel Ministry of Agriculture and Rural Development
Elbit Systems Land and C4I Ltd
Agricultural Research Organization of Israel Ministry of Agriculture
Hazera Seeds Ltd
Original Assignee
Hazera Seeds Ltd
Sensilize Ltd
Carmel Haifa University Economic Corp Ltd
Evogene Ltd
Technion Research and Development Foundation Ltd
Rahan Meristem (1998) Ltd
Opgal Optronics Industries Ltd
Israel Ministry of Agriculture and Rural Development
Elbit Systems Land and C4I Ltd
Agricultural Research Organization of Israel Ministry of Agriculture
Hazera Seeds Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hazera Seeds Ltd, Sensilize Ltd, Carmel Haifa University Economic Corp Ltd, Evogene Ltd, Technion Research and Development Foundation Ltd, Rahan Meristem (1998) Ltd, Opgal Optronics Industries Ltd, Israel Ministry of Agriculture and Rural Development, Elbit Systems Land and C4I Ltd, Agricultural Research Organization of Israel Ministry of Agriculture, Hazera Seeds Ltd filed Critical Hazera Seeds Ltd
Publication of EP3969899A1 publication Critical patent/EP3969899A1/de
Publication of EP3969899A4 publication Critical patent/EP3969899A4/de
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/27Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection ; circuits for computing concentration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/251Colorimeters; Construction thereof
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/27Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection ; circuits for computing concentration
    • G01N21/274Calibration, base line adjustment, drift correction
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/02Food
    • G01N33/025Fruits or vegetables
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N2021/8466Investigation of vegetal material, e.g. leaves, plants, fruits
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00Features of devices classified in G01N21/00
    • G01N2201/12Circuits of general importance; Signal processing
    • G01N2201/129Using chemometrical methods
    • G01N2201/1296Using chemometrical methods using neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0098Plants or trees
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture

Definitions

  • the present invention relates to the field of phenotyping, particularly to systems and methods for collecting, retrieval and processing of data for accurate and sensitive analysis and prediction of a phenotype of an object, particularly a plant.
  • agricultural management may relate to plant breeding, developing new plant types, planning the location and density of future plantations, planning for selling or otherwise using the expected crops, or the like. These activities may be performed by agronomists consulting the land owner or user, who execute visual and other inspections of the plants and the environment and provide recommendations.
  • the reliance on manual labor significantly limits the capacity and response time which may lead to sub-optimal treatment and lower profits.
  • Crops phenotyping relies on non-destructive collection of data from plants over time. Developing precision management requires tools for collecting plant phenotypic data, environmental data and computational environment enabling high throughput processing of the data received. Translation of the processed data to an agriculture and/or eco-culture recommendation is further required.
  • U.S. Patent Application Publication No. 2004/0264761 discloses a system and method for creating 3-dimensional agricultural field scene maps comprising producing a pair of images using a stereo camera and creating a disparity image based on the pair of images, the disparity image being a 3-dimensional representation of the stereo images.
  • Coordinate arrays can be produced from the disparity image and the coordinate arrays can be used to render a 3-dimensional local map of the agricultural field scene.
  • Global maps can also be made by using geographic location information associated with various local maps to fuse together multiple local maps into a 3-dimensional global representation of the field scene.
  • U.S. Patent Application Publication No. 2013/0325346 discloses systems and methods for monitoring agricultural products, particularly fruit production, plant growth, and plant vitality.
  • the invention provides systems and methods for a) determining the diameter and/or circumference of a tree trunk or vine stem, determining the overall height of each tree or vine, determining the overall volume of each tree or vine, determining the leaf density and average leaf color of each tree or vine; b) determining the geographical location of each plant and attaching a unique identifier to each plant or vine; c) determining the predicted yield from identified blossom and fruit; and d) providing yield and harvest date predictions or other information to end users using a user interface.
  • WO 2016/181403 discloses an automated dynamic adaptive differential agricultural cultivation system, constituted of: a sensor input module arranged to receive signals from each of a plurality of first sensors positioned in a plurality of zones of a first field; a multiple field input module arranged to receive information associated with second sensors from a plurality of fields; a dynamic adaptation module arranged, for each of the first sensors of the first field, to compare information derived from the signals received from the respective first sensor with a portion of the information received by the multiple field input module and output information associated with the outcome of the comparison; a differential cultivation determination module arranged, responsive to the output information of the dynamic adaptation module, to determine a unique cultivation plan for each zone of the first field; and an output module arranged to output a first function of the determined unique cultivation plans.
  • WO/2017/123201 discloses systems, devices, and methods for data-driven precision agriculture through close-range remote sensing with a versatile imaging system.
  • This imaging system can be deployed onboard low-flying unmanned aerial vehicles (UAVs) and/or carried by human scouts.
  • the technology stack can include methods for extracting actionable intelligence from the rich datasets acquired by the imaging system, as well as visualization techniques for efficient analysis of the derived data products.
  • U.S. Patent Application Publications No. 2016/0148104 and 2017/0161560 disclose a system and method for automatic plant monitoring, comprising identifying at least one test input respective of a test area, wherein the test area includes at least one part of a plant; and generating a plant condition prediction based on the at least one test input and on a prediction model, wherein the prediction model is based on a training set including at least one training input and at least one training output, wherein each training output corresponds to a training input.
  • the plant conditions to be predicted include a current disease, insect and pest activity, deficiencies in elements, a future disease, a harvest yield, and a harvest time.
  • U.S. Patent No. 10,182,214 discloses an agricultural monitoring system composed of an airborne imaging sensor, configured and operable to acquire image data at sub-millimetric image resolution of parts of an agricultural area in which crops grow, and a communication module configured and operable to transmit to an external system image data content which is based on the image data acquired by the airborne imaging sensor.
  • the system further comprises a connector operable to connect the imaging sensor and the communication module to an airborne platform.
  • the present invention discloses systems and methods for determining and predicting phenotype(s) of a plant or of a plurality of plants.
  • the phenotypes are useful for managing the plant growth, particularly for precise management of agricultural practices, for example, breeding, fertilization, stress management including disease control, and management of harvest and yield.
  • the systems and methods of the invention may be based on, but not limited to data obtained during the growing season (presently obtained or recently obtained data) and on an engine having reference data and phenotypes, including engine trained to determine and/or predict a phenotype based on reference data previously obtained by the systems of the present invention and phenotypes corresponding to the reference data.
  • the systems and methods of the present invention provide for phenotypes at meaningful agricultural time points, including, for example, very early detection of biotic as well as abiotic stresses, including detecting of stress symptoms before the symptoms of the stress are visible to the human eye or to a single Red- Green-Blue (RGB) camera.
  • the system and methods of the present invention can capture the plant as a whole as well as plant parts and objects present on the plant parts, including, for example, the presence of insects or even insect eggs which predict a potential to develop a disease phenotype.
  • the present invention is based in part on a combination of (i) data obtained from a plurality of imaging sensors set at a predetermined geometrical relationship; (ii) means to effectively reduce variations in data readings resulting from the outdoor environmental conditions, sensors effects, and other factors including object positioning and angle of data acquisition; and (iii) computational methods of processing the data.
  • the processed data, synchronized and aligned across the various sensors, are highly reproducible, enabling both training an engine to set a phenotype and using the trained engine or another engine to determine and/or predict a phenotype based on newly obtained processed data.
  • the invention may also utilize improvement of internal sensor data resolution and blurring correction.
  • the present invention provides a system for detecting or predicting a phenotype of a plant, comprising: a plurality of imaging sensors of different modalities selected from the group consisting of: a Red-Green-Blue (RGB) sensor; a multispectral sensor; a hyperspectral sensor; a depth sensor; a time-of-flight camera; a LIDAR; and a thermal sensor, the plurality of sensors mounted on a bracket at predetermined geometrical relationships; a computing platform comprising at least one computer-readable storage medium and at least one processor for:
  • the data comprising at least two images of at least one part of a plant, the at least two images captured at a distance of between 0.05m and 10m from the plant; preprocessing the at least two images in accordance with the predetermined geometrical relationship, to obtain unified data;
  • the engine is a trained neural network or a trained deep neural network.
  • the processor is further adapted to display to a user an indicator helpful in verifying the reliability of the engine.
  • the indicator helpful in verifying the reliability of the engine is a class activation map of the engine.
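The class activation map referred to above can be visualized with a Grad-CAM-style computation. The following is only a minimal sketch assuming a PyTorch CNN; the model, the chosen layer and the input shape are illustrative placeholders, not the engine actually described in this disclosure.

```python
# Illustrative Grad-CAM-style class activation map for a trained CNN (placeholder model).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # stand-in for the trained phenotyping engine
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

layer = model.layer4                     # last convolutional block (illustrative choice)
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # placeholder input image tensor
scores = model(x)
cls = scores.argmax(dim=1).item()
scores[0, cls].backward()                # gradient of the predicted class score

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)    # per-channel importance
cam = F.relu((weights * activations["feat"]).sum(dim=1))      # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1] for display
```

Overlaying the normalized map on the input image shows which regions drove the prediction, which is what makes it useful as a reliability indicator for the user.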
  • the at least two images are captured at a distance of between 0.05m and 5m from the plant.
  • the processor is further adapted to:
  • the preprocessing comprises preprocessing the at least two enhanced images.
  • the at least one additional sensor is selected from the group consisting of: a light sensor, a global positioning system (GPS); a digital compass; a radiation sensor; a temperature sensor; a humidity sensor; a motion sensor; an air pressure sensor; a soil sensor; an inertial sensor and any combination thereof.
  • the at least one additional sensor is a light sensor.
  • preprocessing comprises at least one of: registration; segmentation; stitching; lighting correction; measurement correction; and resolution improvement.
  • the preprocessing comprises registering the at least two enhanced images in accordance with the predetermined geometrical relationships.
  • registering the at least two enhanced images comprises alignment of the at least two enhanced images.
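As an illustration of such registration, the sketch below warps one modality onto another under the simplifying assumption that the calibrated geometric relationship between two sensors can be expressed as a planar homography. OpenCV is assumed; the image sizes and matrix values are placeholders, not calibration results of the described system.

```python
# Illustrative registration of a thermal image onto the RGB frame via a precomputed homography.
import cv2
import numpy as np

# Synthetic stand-ins for a captured RGB image and a lower-resolution thermal image.
rgb = np.zeros((1080, 1440, 3), dtype=np.uint8)
thermal = (np.random.rand(288, 384) * 255).astype(np.uint8)

# Homography mapping thermal pixel coordinates to RGB pixel coordinates, derived in
# practice from the known translation/rotation between the mounted sensors (placeholder here).
H = np.array([[3.6, 0.0, 30.0],
              [0.0, 3.6, 25.0],
              [0.0, 0.0, 1.0]], dtype=np.float64)

h, w = rgb.shape[:2]
thermal_aligned = cv2.warpPerspective(thermal, H, (w, h), flags=cv2.INTER_LINEAR)

# Stack the aligned modalities into one unified array for later feature extraction.
unified = np.dstack([rgb, thermal_aligned])
```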
  • the preprocessing according to the teachings of the present invention provides for unified data enabling extracting the features in a highly accurate, reproducible manner.
  • the percentage of prediction accuracy depends on the feature to be detected. According to certain embodiments, the accuracy of the feature prediction is at least 60%.
  • the measurement correction comprises correction of data captured by at least one imaging sensor.
  • the computing platform may also be operative in receiving information related to mutual orientation among the sensors. According to certain embodiments, the computing platform may also be operative in receiving information related to mutual orientation between the sensors and at least one of an illumination source and a plant.
  • the plurality of imaging sensors comprises at least two of said imaging sensors. According to certain embodiments, the plurality of imaging sensors comprises two of said imaging sensors. According to certain embodiments, the plurality of imaging sensors consists of two of said imaging sensors. According to certain embodiments, the plurality of imaging sensors comprises at least three of said imaging sensors.
  • the plurality of imaging sensors comprises three of said imaging sensors. According to some embodiments, the plurality of imaging sensors consists of three of said imaging sensors.
  • the specific combination of the imaging sensors and optionally of the at least one additional sensor may be determined according to the task to be performed, including, for example, detecting or predicting a phenotype, the nature of the phenotype, the type and species of the plant or plurality of plants and the like.
  • the plurality of imaging sensors comprises an RGB sensor and a multispectral sensor or a hyperspectral sensor. According to some embodiments, the plurality of imaging sensors consists of an RGB sensor and a multispectral sensor or a hyperspectral sensor. According to certain alternative embodiments, the plurality of imaging sensors comprises an RGB sensor and a thermal sensor. According to some embodiments, the plurality of imaging sensors consists of an RGB sensor and a thermal sensor. According to some embodiments, the plurality of imaging sensors comprises an RGB sensor, a thermal sensor and a depth sensor. According to some embodiments, the plurality of imaging sensors consists of an RGB sensor, a thermal sensor and a depth sensor.
  • a combination of imaging sensors comprising an RGB sensor and a multispectral sensor, or a combination of imaging sensors comprising an RGB sensor, a thermal sensor and a depth sensor, provides for early detection of a phenotype of stress resulting from fertilizer deficiency, before stress symptoms are visible to the human eye or by a single RGB sensor.
  • an external lighting monitoring is added to the combination of imaging sensors.
  • the at least two images can provide for distinguishing between plant parts and/or objects present on the plant part.
  • the objects are plant pests or parts thereof.
  • the RGB sensor can provide for distinguishing between plant parts and/or objects present on the plant part.
  • multi-spectral and lighting sensors can provide for identifying significant signature differences between healthy and stressed plants.
  • RGB sensor may provide for detecting changes in leaf color
  • a depth sensor may provide for detecting changes in plant size and growth rate
  • a thermal sensor may provide for detecting changes in transpiration.
  • combinations of the above can provide for early detection and predicting stress resulting from lack of water or lack of fertilizer.
  • the plurality of imaging sensors provides at least one image of a plant part selected from the group consisting of a leaf, a petal, a flower, an inflorescence, a fruit, and parts thereof.
  • each of the plurality of imaging sensors or of the at least one additional sensors is calibrated independently of other sensors.
  • the plurality of imaging sensors and the at least one additional sensor are calibrated as a whole.
  • at least one calibration is radiometric calibration.
  • the preprocessing comprises at least one of: registration; segmentation; stitching; lighting correction; measurement correction; and resolution improvement.
  • the preprocessing comprises registering the at least two enhanced images in accordance with the predetermined geometrical relationships.
  • registering the at least two enhanced images comprises alignment of the at least two enhanced images.
  • the at least one additional sensor is selected from the group consisting of: a digital compass; a global positioning system (GPS); a light sensor for determining lighting conditions, such as a light intensity sensor; a radiation sensor; a temperature sensor; a humidity sensor; a motion sensor; an air pressure sensor; a soil sensor, and an inertial sensor.
  • the at least one additional sensor is a light sensor.
  • the at least one additional sensor can be located within the system or remote of the system. According to certain embodiments, the at least one additional sensor is located within the system, separate of the bracket mounted with the plurality of imaging sensors.
  • the at least one additional sensor is located within the system on said bracket at predetermined geometrical relationships with the plurality of imaging sensors.
  • the computing platform is located separate of the bracket mounted with the plurality of imaging sensors. According to certain embodiments, the computing platform is located on said bracket.
  • the system further comprises a command and control unit for coordinating activation of the plurality of sensors; and operating the at least one processor in accordance with the plurality of sensors and with the at least one additional sensor.
  • the command and control unit is further operative to perform at least one action selected from the group consisting of: setting a parameter of a sensor from the plurality of sensors; operating the at least one processor in accordance with a selected application; providing an indication to an activity status of a sensor from the plurality of sensors; providing an indication to a calibration status of a sensor from the plurality of sensors; and recommending to a user to calibrate a sensor from the plurality of sensors.
  • the system further comprises a communication unit for communicating data from said plurality of sensors to the computing environment.
  • the communication unit can be within the system or remote of the system.
  • the system further comprises a cover and at least one light intensity sensor positioned on the cover for enabling, for example, radiometric calibration of the system.
  • the system of the present invention can be stationary, mounted on a manually held platform, or installed on a moving vehicle.
  • the complex interaction between a plant genotype and its environment controls the biophysical properties of the plant, manifested in observable traits, i.e., the plant phenotype or phenome.
  • the system of the present invention can be used to determine and/or predict a plant phenotype of agricultural or ecological importance as long as the phenotype is associated with imagery data that may be obtained from the plant.
  • the system of the invention enables detection of a phenotype at an early stage, based on early primary plant processes which are reflected by imagery data, but are nonvisible to the human eye or cannot be detected by RGB imaging only.
  • the system of the present invention advantageously enables monitoring structural, color, and thermal changes of the plant and parts thereof, as well as changes of the plant or plant part surfaces, for example the presence of pests, particularly insects and insect eggs.
  • the phenotype is selected from the group consisting of: a biotic stress status, including potential to develop a disease, presence of a disease, severity of a disease, a pest activity and an insect activity; an abiotic stress status, including deficiency in an element or combination of elements, water stress and salinity stress; a feature predicting harvest time; a feature predicting harvest yield; a feature predicting yield quality; and any combination thereof.
  • Plant pests can include viruses, nematodes, bacteria, fungi, and insects.
  • the system is further configured to generate as output data at least one of the phenotype, a quantitative phenotype, an agricultural recommendation based on said phenotype, or any combination thereof.
  • the computing platform is further configured to deliver the output data to a remote device of at least one user.
  • the agricultural recommendation relates to yield prediction, including, but not limited to, monitoring male or female organs to estimate yield, monitoring fruit maturity, monitoring fruit size and number, monitoring fruit quality, nutrient management, and determining time of harvest.
  • the reproducible image data obtained by the systems and methods of the present invention can be used for accurate annotation of the obtained images.
  • a plant phenotype database can be produced to be used for training an engine to detect and/or predict a phenotype of a plant.
  • the present invention provides a system for training an engine for detecting or predicting a phenotype of a plant, comprising: a plurality of imaging sensors of different modalities selected from the group consisting of: a Red-Green-Blue (RGB) sensor; a multispectral sensor; a hyperspectral sensor; a depth sensor; a time-of-flight camera; a LIDAR; and a thermal sensor, the plurality of sensors mounted on a bracket at predetermined geometrical relationships; a computing platform comprising at least one computer-readable storage medium and at least one processor for:
  • the data comprising at least two images of at least one part of a plant, the at least two images captured at a distance of between 0.05m and 10m from the plant;
  • annotations for the unified data are associated with the phenotype of the plant.
  • the processor is further adapted to:
  • the preprocessing comprises preprocessing the at least two enhanced images.
  • the engine, processor and the at least one additional sensor are as described hereinabove.
  • the computing platform may also be operative in receiving information related to mutual orientation among the sensors.
  • the computing platform may further be operative in receiving information related to mutual orientation between the sensors and at least one of an illumination source and a plant.
  • training the engine is performed upon multiplicity of unified data obtained from images received at a plurality of time points.
  • The systems and methods of the present invention are not limited to phenotyping plants and can also be used for phenotyping other objects.
  • the present invention provides a system for detecting or predicting a state of an object, the system comprising:
  • a plurality of imaging sensors of different modalities selected from the group consisting of: a Red-Green-Blue (RGB) sensor; a multispectral sensor; a hyperspectral sensor; a depth sensor; a time-of-flight camera; a LIDAR; and a thermal sensor, the plurality of sensors mounted on a bracket at predetermined geometrical relationships; a computing platform comprising at least one computer-readable storage medium and at least one processor for:
  • the data comprising at least two images of at least one part of the object, the at least two images captured at a distance of between 0.05m and 10m from the object;
  • the processor is further adapted to:
  • the preprocessing comprises preprocessing the at least two enhanced images.
  • the computing platform may also be operative in receiving information related to mutual orientation among the sensors. According to further embodiments, the computing platform may also be operative in receiving information related to mutual orientation between the sensors and at least one of an illumination source and an object.
  • Embodiments of methods and/or devices herein may involve performing or completing selected tasks manually, automatically, or by a combination thereof. Some embodiments are implemented with the use of components that comprise hardware, software, firmware or combinations thereof. In some embodiments, some components are general-purpose components such as general purpose computers or processors. In some embodiments, some components are dedicated or custom components such as circuits, integrated circuits or software.
  • some parts of an embodiment may be implemented as a plurality of software instructions executed by a data processor, for example one which is part of a general-purpose or custom computer.
  • the data processor or computer may comprise volatile memory for storing instructions and/or data and/or a non-volatile storage, for example a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • implementation includes a network connection.
  • implementation includes a user interface, generally comprising one or more of input devices (e.g., allowing input of commands and/or parameters) and output devices (e.g., allowing reporting parameters of operation and results).
  • Fig. 1 is a schematic illustration of a system, in accordance with some embodiments of the disclosure.
  • Fig. 2A is a flowchart of a method for training an engine for determining a phenotype of a plant, in accordance with some embodiments of the disclosure
  • Fig. 2B is a flowchart of a method for preprocessing images, in accordance with some embodiments of the disclosure.
  • Fig. 3 is a flowchart of a method for calibrating a system for determining phenotypes, in accordance with some embodiments of the disclosure
  • Fig. 4 shows exemplary images taken by different sensors and registered, in accordance with some embodiments of the disclosure
  • Fig. 5 is a flowchart of a method for calibrating a thermal sensor, in accordance with some embodiments of the disclosure
  • Fig. 6 is a flowchart of a method for determining a phenotype of a plant, in accordance with some embodiments of the disclosure.
  • Fig. 7 shows exemplary images of leaves of plants exposed to fertilizer deficiency abiotic stress taken with a variety of imaging sensors and schematic demonstration of the analyses performed for each image;
  • Fig. 8 demonstrates the effect of fertilizer deficiency abiotic stress on leaves;
  • Fig. 9 shows exemplary images of leaves of plants exposed to fertilizer deficiency abiotic stress taken with a variety of imaging sensors and schematic demonstration of the analyses performed for the multi-modal images;
  • Fig. 10 shows that analysis performed on the images in their totality (i.e. not cropped, as described by Grad-CAM) may focus on regions of the images that are irrelevant to the problem, such as background elements that are not plant material. This phenomenon is especially true for images taken by RGB only (Fig. 10A), vs. images taken by the multi-modal stack (Fig. 10B);
  • Fig. 11 demonstrates images obtained by the multi-modal sensors (RGB, thermal and depth sensor) and by RGB sensor only after masking and schematic presentation of the analyses performed; and
  • Fig. 12 demonstrates that using a mask on the relevant plant parts (leaves) results in analysis which predominantly focuses on these plant parts and does not take into account irrelevant parts of the images, such as background features. Fig. 12A: RGB only; Fig. 12B: multi-modal stack.
  • One technical problem handled by the present disclosure is the need to use manual labor for monitoring one or more plants and their environment in order to determine the best treatment of the plants or to plan for future activities, to achieve agricultural or ecocultural goals.
  • Manual labor is costlier and less available, thus limiting its usage.
  • monitoring plant phenotypes by unprofessional human labor is less accurate and reproducible, while professional human labor may be extremely expensive and not always available.
  • plant phenotype refers to observable characteristics of the plant biophysical properties, the latter being controlled by the interaction between the genotype of the plant and its environment.
  • the aboveground plant phenotypes may be broadly classified into three categories, including structural, physiological, and temporal.
  • the structural phenotypes refer to the morphological attributes of the plants, whereas the physiological phenotypes are related to traits that affect plant processes regulating growth and metabolism.
  • Structural and physiological phenotypes may be assessed by referring to the plant as a single object and computing its basic geometrical properties, e.g., overall height and size, or by considering individual components of the plant, for example leaves, stem, flower, fruit, and further attributes such as leaf length, chlorophyll content of each leaf, stem angle, flower size, fruit volume and the like.
  • phenotyping refers to the process of quantitative characterization of the phenotype.
  • Such states may include, but are not limited to, biotic and abiotic stresses that reduce yield and/or quality. It may also be useful to monitor large areas in almost real time and get fast recommendations on crop growth conditions, the status of plant production and/or health, and the like.
  • Yet another technical problem handled by the present disclosure is the need for objective decision support systems overcoming the subjectivity of manual assessment of plant status, and the recommendations based on such assessment.
  • several components need to be considered and integrated, such as but not limited to the plant, the environment including soil, air temperature, visibility, inputs, pathogens, or the like.
  • the plant status is the bottom line of the factors mentioned above; however, it is the hardest to obtain by an automatic system since it requires, among others, high resolution and knowledge from experts. Additionally, or alternatively, the situation is dynamic as the conditions change, the plant grows, etc., thus requiring adjustment of the system.
  • the sensors may include one or more of the following: an RGB sensor, a multispectral sensor, a hyperspectral sensor, a thermal sensor, a depth sensor such as but not limited to a LIDAR or a stereo vision camera, or others.
  • the sensors are mounted on a bracket in predetermined geometrical relationship.
  • the system may also comprise additional sensors, such as a radiation sensor, a temperature sensor, a humidity sensor, a position sensor, or the like. It will be appreciated that data from sensors of one or more types may be useful for processing images captured by a sensor of another type. For example, a light intensity sensor may be required in order to process images taken by multispectral or hyperspectral sensors.
  • the term “a plurality” or “multiplicity” refers to “at least two”.
  • the RGB camera is selected from the group consisting of an automatic camera and a manual camera.
  • Each sensor may be calibrated individually, for example under laboratory conditions.
  • the sensors may then be installed on the bracket, and the system may be calibrated as a whole, to adjust for the different fields of view and to eliminate changing conditions within the system and environmental conditions.
  • calibrating one or more sensors may be performed after installation on the bracket.
  • decision mechanisms may be used, in particular neural networks, for which a training phase may take place, in which a multiplicity of images captured by different modalities may be acquired by the imaging sensors.
  • the images may be captured consecutively or sporadically.
  • the images may be pre-processed, possibly including the fusion of data received from the additional sensors, to eliminate noise and compensate for the differences in the sensors and their calibration parameters, the environment, and measurement errors, and to improve sensor signals (e.g. by improving their resolution).
  • the images may then be registered, for example one or more of the images may be transformed such that the images or parts thereof correspond and the locations of various objects appearing in multiple images are matched, to generate a unified image.
  • One or more captured images, or the unified image, may be annotated (also referred to as labeled) by a user, to indicate a status, a prediction, a recommendation, or the like.
  • Features may then be extracted from the unified image, using image analysis algorithms.
  • the features and annotations may then be used for training an artificial intelligence engine, such as a neural network, a deep neural network, or the like.
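A minimal training sketch follows, assuming the extracted features and the user annotations are available as arrays. scikit-learn's MLPClassifier stands in for the neural network or deep neural network described above, and all data, shapes and labels are synthetic placeholders; the last two lines preview the runtime use of the trained engine described next.

```python
# Illustrative training of a phenotype engine on extracted features and annotations.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins: one feature vector per unified image, one annotated label each.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 32))    # e.g. color / thermal / geometry features
labels = rng.integers(0, 2, size=200)    # e.g. 0 = healthy, 1 = stressed (annotation)

X_train, X_val, y_train, y_val = train_test_split(
    features, labels, test_size=0.2, random_state=0)

engine = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
engine.fit(X_train, y_train)
print("validation accuracy:", engine.score(X_val, y_val))

# Runtime phase: features from newly captured, identically preprocessed images are
# fed into the trained engine to obtain a phenotype prediction and a confidence score.
new_features = rng.normal(size=(5, 32))                  # placeholder new acquisition
print(engine.predict(new_features), engine.predict_proba(new_features).max(axis=1))
```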
  • a runtime phase may then take place, in which images of the same nature as used during the training phase are available.
  • the images of various plants or plant parts may be preprocessed and registered as in the training phase, and features may then be extracted.
  • the features and optionally additional data as acquired directly or indirectly from the additional sensors, may be fed into the trained engine, to obtain a corresponding status, prediction or recommendation.
  • the engine may be trained upon data collected from a multiplicity of systems, to provide for more sets of various conditions, possibly various imaging sensors, or the like.
  • a trained engine may be created per each plant type, per each plant type and location, or for multiple plant types and possibly multiple locations, wherein the plant type or the location may be indicated as a feature. Additionally, or alternatively, the engines may be obtained from other sources.
  • a trained engine may be created per each type of plant disease, or per each type and severity of a plant disease.
  • a trained engine may be created for multiple plant species and plant crops, multiple plant diseases and possibly multiple grades of disease severity, wherein the type of plant disease or the disease severity may be indicated as a feature.
  • Such engines may include, but are not limited to, rule-based engines, look-up tables (LUTs), or the like.
  • the system may be adapted for taking images from a relatively short distance, for example, between about 5 cm and about 10 m, between about 5 cm and about 5 m, or between about 50 cm and about 3 m, or between about 75 cm and about 2 m, or the like.
  • the difference in the viewing angles between different imaging sensors mounted on the bracket may be significant and cannot be neglected.
  • the resolution of the used sensors may be high, for example better than 1mm.
  • the system may be designed to be carried by a human, mounted on a car or on an agricultural vehicle, carried by a drone, or the like. In further embodiments, the system may be installed on a drone or another flying object, or the like.
  • the system may be designed to be mounted on a cellular phone.
  • One technical effect of the disclosure is the provisioning of a system for automatic determination of phenotypes of plants, such as but not limited to: yield components, yield prediction, properties of the plants, biotic stress, abiotic stress, harvest time and any combination thereof.
  • Another technical effect of the disclosure is the option to train such system to any plant and also to objects other than plants, at any environment, wherein the system can be used in any manner - carried by a human operator, installed on a vehicle, installed on a flying device such as a drone, or the like. Training on any such conditions provides for using the system to determine phenotypes or predictions from images captured with corresponding conditions.
  • Yet another technical effect of the disclosure is the option to train the system for certain plant types and conditions in one location, and reuse it in a multiplicity of other locations, by other growers.
  • Yet another technical effect of the disclosure is the provisioning of quantitative, reproducible results, in a consistent manner.
  • Yet another technical effect of the disclosure is the ability to detect the plant status and integrate it with additional aspects (e.g. environment, soil, etc.) to produce useful recommendation to the farmers.
  • FIG. 1 showing a schematic illustration of a system, in accordance with some embodiments of the disclosure.
  • the system generally referenced 100, comprises a bracket 104 such as a gimbal. Bracket 104 may comprise a pan stopper 124 for limiting the panning angle of the gimbal.
  • a plurality of sensors 108 may be mounted on the gimbal. Sensors 108 may be mounted separately or as a pre-assembled single unit. The geometric relations among sensors 108 are predetermined, and may be planned to accommodate their types and capturing distances, to make sure none of them blocks the field of view of the other, or the like.
  • Sensors 108 may comprise a multiplicity of different imaging sensors, each of which may be selected among an RGB camera, a multispectral camera imaging at various wavelengths, a depth camera and a thermal camera. Each of these sensors is further detailed below.
  • System 100 may further comprise a power source 120, for example one or more batteries, one or more rechargeable batteries, solar cells, or the like.
  • System 100 may further comprise at least one computing platform 128, comprising at least a processor and a memory unit.
  • Computing platform 128 may also be mounted on bracket 104, or be remote.
  • system 100 may comprise one or more collocated computing platforms and one or more remote computing platforms.
  • computing platform 128 can be implemented as a mobile phone mounted on bracket 104.
  • System 100 may further comprise communication component 116, for communicating with a computing platform 128. If computing platform 128 comprises components mounted on bracket 104, communication component 116 can include a bus, while if computing platform 128 comprises remote components, communication component 116 can operate using any wired or wireless communication protocol such as Wi-Fi, cellular, or the like.
  • System 100 can comprise additional sensors 112, such as but not limited to any one or more of the following: a temperature sensor, a humidity sensor, a position sensor, a radiation sensor, or the like. Some of additional sensors 112 may be positioned on bracket 104, while others may be located at a remote location and provide information via communication component 116. Additional sensors 112 may be mounted on bracket 104 at predetermined geometrical relationships with plurality of sensors 108. Predetermined geometrical relationships may relate to planned and known locations of the sensors relative to each other, comprising known translation and main axes rotation, wherein the locations are selected such that the fields of view of the various sensors at least partially overlap.
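The sketch below illustrates how such a predetermined geometrical relationship (a known rotation and translation between two mounted sensors) can be used to transfer a 3D point measured in one sensor frame into another sensor frame and project it into that sensor's image. All numeric values are placeholders, not calibration results of the described system.

```python
# Illustrative use of known inter-sensor extrinsics (rotation R, translation t).
import numpy as np

R = np.eye(3)                           # relative rotation (placeholder: sensors parallel)
t = np.array([0.05, 0.0, 0.0])          # 5 cm translation along the bracket (placeholder)

# Placeholder pinhole intrinsics of the RGB sensor (focal length in pixels, principal point).
K_rgb = np.array([[6666.0,    0.0, 2736.0],
                  [   0.0, 6666.0, 1824.0],
                  [   0.0,    0.0,    1.0]])

# A 3D point measured in the depth sensor's frame, roughly 1 m in front of it.
p_depth = np.array([0.10, -0.05, 1.00])

# Express the point in the RGB sensor's frame and project it to pixel coordinates.
p_rgb = R @ p_depth + t
u, v, w = K_rgb @ p_rgb
print("projected pixel:", u / w, v / w)
```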
  • Imaging sensors 108 may comprise an RGB sensor.
  • the RGB sensor may operate at high resolution, required for measuring geometrical features of plants, which may have to be performed at a level of a few pixels.
  • a BFS-U3-2006SC camera by Flir® may be used, having a resolution of 3648 x 5472 pixels, pixel size of 2.4 µm, sampling rate of 18 frames per second, and weight of 36 g, with a lens having a focal length of 16 mm.
  • Such optical properties provide for a field of view of 30° x 45°, with angular resolution of 0.009°. This implies that within a range of 1 m, one pixel covers a size of 0.15 mm.
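The quoted field of view, angular resolution and pixel footprint follow directly from the sensor geometry, as the short illustrative check below shows (calculation only, no patent-specific code).

```python
# Back-of-the-envelope check of the figures quoted above from the sensor geometry.
import math

pixels_h, pixels_v = 5472, 3648
pixel_size_m = 2.4e-6                  # 2.4 µm pixel pitch
focal_length_m = 16e-3                 # 16 mm lens

fov_h = 2 * math.degrees(math.atan(pixels_h * pixel_size_m / (2 * focal_length_m)))
fov_v = 2 * math.degrees(math.atan(pixels_v * pixel_size_m / (2 * focal_length_m)))
angular_res = math.degrees(math.atan(pixel_size_m / focal_length_m))
footprint_at_1m_mm = 1.0 * (pixel_size_m / focal_length_m) * 1000

print(f"FOV ≈ {fov_v:.0f}° x {fov_h:.0f}°, angular resolution ≈ {angular_res:.3f}°, "
      f"pixel footprint at 1 m ≈ {footprint_at_1m_mm:.2f} mm")
```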
  • Imaging sensors 108 may comprise a multispectral sensor, having a multiplicity of narrow bandwidths.
  • the multispectral sensor may operate with 7 channels of 10 nm full width at half maximum (FWHM), as shown in Table 1 below.
  • the multispectral sensor may be operative in determining properties required for evaluating biotic and abiotic stress of plants.
  • the channels in the green and blue areas provide for assessing chlorophyll a and anthocyanin: chlorophyll a is characterized by absorption in the blue region, and the gradient formed by two wavelengths in the green region provides a measure of the pigments.
  • the red channel provides data for measuring chlorophyll b.
  • the red and near infra-red channels may be located on the red edge, which provides a means for measuring changes in the geometric properties of the cells in the spongy mesophyll layer of the leaves and other general stress in the plant.
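As an illustration of how such bands are typically combined (not necessarily the computation used in this disclosure), the sketch below derives two common spectral indices from registered multispectral bands; the band names, reflectance values and array shapes are assumptions.

```python
# Illustrative spectral indices from registered multispectral reflectance bands.
import numpy as np

red      = np.random.rand(480, 640).astype(np.float32)   # placeholder reflectance bands
red_edge = np.random.rand(480, 640).astype(np.float32)
nir      = np.random.rand(480, 640).astype(np.float32)

eps = 1e-6
ndvi = (nir - red) / (nir + red + eps)            # general vigour / chlorophyll proxy
ndre = (nir - red_edge) / (nir + red_edge + eps)  # red-edge index, sensitive to early stress
```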
  • a multispectral camera by Sensilize® may be used, having a resolution of 640 x 480 pixels, weight of 36 g, with a lens having a focal length of 6 mm. These properties provide for a field of view of 35° x 27°, with angular resolution of 0.06°. This implies that within a range of 1 m, one pixel covers a size of 1 mm.
  • Imaging sensors 108 may comprise a depth sensor, operative for: complementing the data obtained by the RGB camera, such as differentiating between plant parts, with geometrical dimensions, thus enabling the measurement of geometrical sizes in true scale; and depicting the three-dimensional structure of plants for radiometric correction of the multispectral sensor and the thermal sensor.
  • the depth camera may provide an image of 1 mm resolution at 1 m distance and depth accuracy of 1 cm. However, as more advanced sensors become available, better performance can be achieved. Additionally, or alternatively, a depth map may be created by a time-of-flight camera, by a LIDAR, using image flow techniques, or the like.
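One possible way to obtain such a depth map, sketched below for a rectified stereo pair, uses the relation Z = f·B/d between depth, focal length, baseline and disparity. OpenCV's block matcher is assumed, and the images, focal length and baseline are synthetic placeholders rather than parameters of the described system.

```python
# Illustrative stereo depth map via block matching and Z = f * B / d.
import cv2
import numpy as np

# Synthetic rectified pair: the right image is the left shifted by ~8 pixels.
left = (np.random.rand(480, 640) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0   # BM output is fixed-point

focal_px = 1400.0    # focal length in pixels (placeholder)
baseline_m = 0.05    # 5 cm baseline (placeholder)

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```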
  • Imaging sensors 108 may comprise a thermal camera for measuring the temperature of plant parts, such as leaves.
  • the leaves temperature may provide an indication to the water status of the plant. Additionally, or alternatively, the temperature distribution over the leaves may provide indication for leaf injuries or lesions due to the presence of diseases or pests.
  • a Therm-App camera by Opgal® may be used, having a resolution of 384 x 288 pixels, pixel size of 17 µm, sampling rate of 12 frames per second, and weight of 100 g, with a lens having a focal length of 13 mm.
  • Such optical properties provide for a field of view of 30° x 22°, with angular resolution of 0.08°. This implies that within a range of 1 m, one pixel covers a size of 1.3 mm.
  • Analyzing a single thermal image for temperature distribution within the image provides for identifying relative stress, e.g., local anomaly on distinct leaves or plants, which may provide an early indication of a stress.
  • the temperature differences which indicate such stresses are of the order of few degrees.
  • a thermal camera with sensitivity of about 0.5° provides for detecting these differences.
  • higher sensitivity can be achieved.
  • In order to detect absolute stress, and provide a quantitative measure thereof, it may be required to normalize the leaf temperature in accordance with the environmental temperature, relative humidity, radiation and wind speed, which may be measured by other sensors.
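One common normalization of this kind, offered only as an illustration and not as the method of the present disclosure, is a Crop Water Stress Index (CWSI)-style ratio between the measured leaf temperature and wet/dry reference temperatures derived from the environmental readings.

```python
# Illustrative CWSI-style normalization of leaf temperature against weather references.
def cwsi(t_leaf_c: float, t_wet_c: float, t_dry_c: float) -> float:
    """Return a 0..1 stress index: 0 = fully transpiring leaf, 1 = non-transpiring leaf."""
    if t_dry_c <= t_wet_c:
        raise ValueError("dry reference must exceed wet reference")
    index = (t_leaf_c - t_wet_c) / (t_dry_c - t_wet_c)
    return min(max(index, 0.0), 1.0)

# Example with placeholder readings: leaf at 27.5 °C, references at 24 °C and 33 °C.
print(cwsi(27.5, 24.0, 33.0))   # ≈ 0.39
```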
  • the differences required to be measured in the leaf temperature depend on the plant water status. Higher accuracy provides for differentiating smaller differences in the water status of plants. For example, accuracy of 1.50 degrees Celsius has been proven sufficient for assessing the water state of grapevine and cotton plants.
  • Combinations of images from different imaging sensors may provide for various observations, for example:
  • a combination of RGB, multi-spectral or thermal sensors, and optionally external lighting monitoring, can provide for early detection of stresses before symptoms are visible to the human eye or in RGB images.
  • the combination of RGB, multi-spectral or thermal sensors, and optionally external lighting monitoring, can provide for early detection of stress caused by fertilizer deficiency.
  • a combination of RGB, thermal and depth sensors can provide for early detection of stress caused by fertilizer deficiency.
  • a combination of multi-spectral and lighting sensors can provide for identifying significant signature differences between healthy and stressed plants.
  • An RGB sensor can provide for distinguishing between plant parts.
  • An RGB sensor may thus provide for detecting changes in leaf color.
  • a depth sensor may provide for detecting changes in plant size and growth rate.
  • a thermal sensor may provide for detecting changes in transpiration. Combinations of the above can provide for early detection of lack of water and early detection of lack of fertilizer.
  • the combination of imaging sensors comprises an RGB sensor, a multi-spectral sensor, a depth sensor and a thermal sensor.
  • Additional sensors 112 may include an inertial sensor for monitoring and recording of optical head direction, which may be useful in calculating the light reflection from plant organs, independent of the experimental conditions.
  • An exemplary inertial sensor is the VMU931, having a total of nine axes for gyro, accelerometer and magnetometer, and equipped with calibration software. The inertial sensor may also be useful in assessing the motion between consecutive images, and thus evaluating the precise position of the system, for calculating the depth map and compensating for smearing effects caused by motion.
  • Additional sensors 112 may include other sensors, such as temperature, humidity, location, or the like.
  • All sensors mounted on bracket 104 may be controlled by a command and control unit, which may be implemented on computing platform 128 or a different platform.
  • the command and control unit may be implemented as a software or hardware unit, responsible for activating the mounted sensors with adequate parameter settings, which may depend, for example, on the plant, the location, the required phenotypes, or the like.
  • the command and control unit may be further operative to perform any one or more of the following actions: setting a parameter of a sensor from the plurality of sensors; operating the processor in accordance with a selected application; providing an indication to an activity status of a sensor; providing an indication to a calibration status of a sensor; and recommending to a user to calibrate.
  • the command and control unit may also be operative in initiating the preprocessing of the images, registering the images, providing the images or features thereof to the trained engine, providing the results to a user, or the like.
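  • As a loose illustration of these responsibilities (class, attribute and method names are hypothetical and not part of the disclosure), a command and control unit could be sketched as follows:

```python
from dataclasses import dataclass, field

@dataclass
class SensorHandle:
    """Illustrative wrapper around one mounted sensor."""
    name: str
    active: bool = False
    calibrated: bool = False
    params: dict = field(default_factory=dict)

class CommandAndControlUnit:
    """Sketch of the actions listed above: parameter setting, activation,
    and reporting of activity and calibration status."""
    def __init__(self, sensors):
        self.sensors = {s.name: s for s in sensors}

    def set_parameter(self, sensor_name, key, value):
        self.sensors[sensor_name].params[key] = value

    def activate_all(self):
        for s in self.sensors.values():
            s.active = True

    def status_report(self):
        # Per-sensor activity/calibration status and a calibration recommendation
        return {name: {"active": s.active,
                       "calibrated": s.calibrated,
                       "recommend_calibration": not s.calibrated}
                for name, s in self.sensors.items()}

unit = CommandAndControlUnit([SensorHandle("rgb"), SensorHandle("thermal")])
unit.set_parameter("thermal", "frame_rate", 12)
unit.activate_all()
print(unit.status_report())
```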
  • Computing platform 128 may be operative in communicating with all sensors, receiving images and other data therefrom, and continuing processing the images and data.
  • system 100 may optionally comprise a cover 132, and one or more light intensity sensors 136 positioned on cover 132.
  • Light intensity sensors 136 which may measure ambient light intensity in predefined wavelength bands, may be used to reduce the effect of different background light created by the differences in weather, clouds, time of day, or the like.
  • the light sensors may be used for the normalization of the images taken by multi-spectral and RGB cameras or by other optical sensors.
  • system 100 may comprise a calibration target for recording the light conditions by one or more sensors, for example sensors 108, comprising an additional set of similar sensors.
  • the calibration target may be a permanently mounted target or a target that performs a motion to appear in the field of view.
  • system 100 may be implemented as a relatively small device, such as a mobile phone or a small tablet computer, equipped with a plurality of capture devices, such as an RGB camera and a depth, thermal, hyperspectral or multispectral camera, with or without additional components such as cover 132 or others.
  • the various sensors may be located on the mobile phone with predetermined geometrical relationship therebetween.
  • Such device may already comprise processing, command and control, or communication capabilities and may thus require relatively little or no additions.
  • FIG. 2A showing a method for training an engine for determining a phenotype of a plant based on images taken by a system, such as the system of Fig. 1.
  • the device may be calibrated, in order to match the parameters of all sensors with their locations, and possibly with each other.
  • the device calibration is further detailed in association with Fig. 3 below.
  • a data set may be created, the data set comprising a multiplicity of images collected from the various imaging sensors 108, as described in association with Fig. 1 above and as calibrated in accordance with Fig. 3 below.
  • the data set may also comprise data received from additional sensors 112.
  • Each image or image set may thus be associated with information related to the parameters under which it was taken, and additional data, such as location, position, e.g., which direction the imaging sensor is facing, environmental temperature, plant type, soil data, or the like.
  • the images may be preprocessed to eliminate various effects and enhance resolution, and may be registered. Preprocessing is further detailed in association with Fig. 2B below.
  • one or more annotations may be received for each captured image or registered image, for example from a human operator.
  • the annotations may include observations related to specific part of a plant, size of an organ, color of an organ, a state of the plant, such as stress of any kind, pest, treatment, treatment recommendation, observation related to the soil, the plot, or the like.
  • the process of Fig. 2 may be performed retroactively, such that annotations reflect actual information which was not available at the time the images were taken, such as yield.
  • features may be extracted from one or more of the captured or registered images.
  • the features may relate to optical characteristics of the image, to objects identified within the images, or others.
  • the registration enables extracting features obtained from one or more sensors. For example, once RGB and thermal images are registered, valuable leaf temperature data can be obtained, which could not be obtained without the registration.
  • the extracted features, and optionally additional data as received from additional sensors 112, along with the provided annotations may be used to train an artificial intelligence engine, such as a neural network (NN), a deep neural network (DNN), or others. Parameters of the engine, such as the number of layers in a NN, may be determined in accordance with the available images and data.
  • the engine may be retrained as additional images, data, and annotations are received.
  • the engine training may also include testing, feedback and validation phases.
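  • A minimal sketch of such a training step, assuming a small fully-connected network over extracted feature vectors (the feature dimension, class labels and hyperparameters are placeholders for illustration):

```python
import torch
import torch.nn as nn

# Placeholder data: 200 samples of 32 features extracted from registered
# multi-modal images, annotated as 0 = unstressed, 1 = stressed.
features = torch.randn(200, 32)
labels = torch.randint(0, 2, (200,))

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# Simple check of the fit on the training data
accuracy = (model(features).argmax(dim=1) == labels).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```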
  • a separate engine may be created for each type of object, particularly each type of plant, each location/plot, geographical area, or the like.
  • one engine may serve a multiplicity of plant types, plots, geographical areas, or others, wherein the specific plant type, plot, or geographical area are provided as features which may be extracted from the additional data.
  • the engine may be tested by a user and enhanced, for example by adding additional data, changing the engine parameters, or the like. Testing may include operating the engine on some images upon which it was trained, or additional images, and checking the percentage of the responses which correspond to the human-provided labels.
  • the provided results may be examined, by the engine providing an indication of which area of the unified image or the images as captured demonstrates the differentiating factor that caused the recognition of the phenotype.
  • the indication may be translated to a graphic indication displayed over an image on a display device, such as a display of a mobile phone.
  • FIG. 2B showing a flowchart of steps in a method for preprocessing the images.
  • Preprocessing may include registration step 232.
  • Registration may provide for images taken by imaging sensors of different types to match, such that an object, object part, or feature thereof depicted in multiple images is identified cross-image. Registration may thus comprise alignment of the images.
  • registration provides for fusing information from different sensors to obtain information in a number of ranges, including visible light, Infra-Red, and multispectral ranges in-between.
  • information from the depth camera, which provides the distance between the system and the captured objects, may be used to register RGB images, thermal images and multi-spectral images, using also the optical structure of the system and the geometric transformation between the cameras.
  • a depth image may be loaded to memory, and the distance to a depicted object, such as a leaf, fruit or stalk, is evaluated.
  • Further images, for example RGB or multi-spectral images, may then be loaded, and using the geometric transformations between the cameras, each such image is transformed accordingly. Following the transformations, the images can be matched, and features may be extracted from their unification.
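  • A simplified sketch of this transformation step, assuming that the inter-camera geometry at the working distance estimated from the depth image can be approximated by a precomputed homography (the actual system may use a full 3-dimensional reprojection):

```python
import cv2
import numpy as np

def register_to_reference(secondary_img, homography, reference_shape):
    """Warp a secondary modality (e.g. a multi-spectral band) onto the
    reference camera's pixel grid using the inter-camera transformation."""
    h, w = reference_shape[:2]
    return cv2.warpPerspective(secondary_img, homography, (w, h))

# Hypothetical homography between the multi-spectral and RGB cameras,
# selected according to the depth-derived working distance.
H_ms_to_rgb = np.array([[1.02, 0.0, 12.5],
                        [0.0, 1.02, -8.0],
                        [0.0, 0.0, 1.0]])
ms_band = np.zeros((480, 640), dtype=np.uint8)   # placeholder multi-spectral band
aligned = register_to_reference(ms_band, H_ms_to_rgb, (480, 640, 3))
```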
  • registration may be performed by other methods, such as deep learning, or a combination of two or more methods.
  • Preprocessing may include segmentation step 234, in which one or more images or parts thereof are split into smaller parts, for example parts that depict a certain organ.
  • Preprocessing may include stitching step 236, in which two or more images or parts thereof are connected, i.e., each contributes parts or features not included in the others, for creating a larger image, for example of a plot comprising multiple plants.
  • Preprocessing may include lighting and/or measurement correction step 240.
  • Step 240 may comprise radiometric correction, which may include: 1. correcting for the target geometry, which relates to the surface evenness, pigment, humidity level and the unique reflection spectrum of the material of the captured object; 2. correcting errors caused by the atmosphere between the sensor and the captured object, including particles, aerosols and gases; and 3. correcting for the physical geometry of the system and the captured object, also known as the bidirectional reflectance distribution function (BRDF).
  • the BRDF may be calculated according to any known model or a model that will be developed in the future, such as the Cook-Torrance model, the GGX model, or the like.
  • Correction step 240 may include geometric transformations between images captured by different modalities, such as translation, scaling and rotations between images, for correcting the measuring angles, aspects of 3 dimensional view, or the like.
  • Preprocessing may include resolution improvement step 244, for improving the resolution of one or more of the images, for example images captured by a thermal sensor, a multi spectral camera, or the like.
  • Fig. 3 showing a flowchart of a method for calibrating a system for determining phenotypes, in accordance with some embodiments of the disclosure. Due to a possible usage of the system in identifying plant phenotypes, the system needs to be calibrated to relevant sizes as detailed above, for example a capture distance of between 0.1m and 10m, such as about 1m, and high resolution, for example better than 1mm.
  • each sensor may be calibrated individually for example by its manufacturer, in a lab, or the like.
  • parameters such as resolution, exposure times, frame rate or others may be set. These parameters may be varied during the image capturing, and the image may be normalized in accordance with the updated capturing parameters.
  • An exemplary method for calibrating a thermal sensor is detailed in Fig. 5 and the associated description below.
  • the imaging sensors may be assembled on the bracket and calibrated as a whole.
  • the mutual orientation among the sensors, and between the sensors and an illumination source or a plant, may be determined or obtained, and utilized.
  • the calibration process is thus useful in obtaining reliable physical output from all sensors, which is required for thermal and spectral analysis of the output. Due to the high variability of the different sensors, lab calibration may be insufficient, and field calibration may be required as well.
  • the fields of view of the various sensors may be matched automatically, manually, or by a combination thereof.
  • initial matching may be performed by a human, followed by finer automatic matching using image analysis techniques.
  • radiometric calibration may be performed, for neutralizing the effect of the different offsets and gains associated with each pixel of each sensor, and associate physical measures to each, expressed for example as Watt/Steradian.
  • the radiometric calibration is thus required for extracting reliable spectral information from the imaged objects or imaged scene, such as reflectance values in a reflective range, or emission values in a thermal range.
  • the extraction of a unique thermal signature of an object is affected by: a. various noises and disturbances; b. optical distortions; c. atmospheric disturbances and changes in the spectral composition of the environmental illumination; and d. the reflection and emission of radiation from the imaged object, which depends also on its environment.
  • some aspects of the sensors may be calibrated during the calibration of each sensor on step 304, while other aspects are handled when calibrating the device as a whole.
  • the thermal sensor is highly affected by changes between the conditions in the lab and in the field.
  • Factor (d) above, i.e., the reflection and emission of radiation from the imaged object, has an impact on the temperature measurement of a surface.
  • large angles between the perpendicular to the depicted surface and the optical axis of the sensor can cause errors in the temperature measurement.
  • the availability of geometric information provides for correcting such errors and more accurately assessing the leaf temperature and evaluating biotic and abiotic stress conditions.
  • Correcting the distortions and disturbances provides for receiving radiometric information from the system, as related to spectral reflection in each channel.
  • Such information provides reliable base for analysis and retrieval of required phenotypes, such as biotic and abiotic stress using spectral indices or other mathematical analyses.
  • distortion aberration correction may be performed.
  • This aberration can be defined as a departure of the performance of an optical system from the predictions of paraxial optics. Aberration correction is thus required to eliminate this effect.
  • This correction may be done, for example, by standard approaches involving a chessboard target for the distortion correction.
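  • A standard OpenCV chessboard calibration of the kind referred to above may be sketched as follows (the board size and image locations are assumptions for the sketch):

```python
import cv2
import numpy as np
import glob

pattern = (9, 6)  # inner corners per chessboard row and column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # assumed calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics and distortion coefficients, then undistort an image
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(gray, K, dist)
```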
  • IR resolution improvement may take place in order to improve the resolution of the thermal sensor.
  • On step 328, field-specific calibration may take place, in which parameters of the various sensors or their relative positioning may be enhanced, to correspond to the specific conditions at the field where the device is to be used.
  • This calibration is aimed at eliminating the effect of the changing mutual orientation between the light source, the looking direction of the sensor, and the normal to the capturing plane.
  • the calibration may determine an appropriate bidirectional reflectance distribution function (BRDF).
  • BRDF bidirectional reflectance distribution function
  • the exposure time and amplification of the sensors should be adjusted to the lighting conditions.
  • the captured images need to be normalized, for example in accordance with the following formula, wherein:
  • I(λ)i,j is the value of the specific MS waveband λ after calibration on pixel (i,j), in gl units;
  • I0(λ)i,j is the value of the specific MS waveband λ measured in front of the integration sphere on pixel (i,j), in Watt/steradian units;
  • the gain of the specific MS waveband λ is given in dB units; and
  • the radiometric calibration factor value of the corresponding band corrects the vignetting and supplies the physical units in Watt/steradian.
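  • One plausible form of such a normalization, assuming a multiplicative radiometric model and writing (as assumed symbols) $I(\lambda)_{i,j}$ for the calibrated value, $I_0(\lambda)_{i,j}$ for the measured value, $G(\lambda)$ for the gain in dB and $C(\lambda)$ for the radiometric calibration factor, is

$$I(\lambda)_{i,j} \;=\; C(\lambda)\,\frac{I_0(\lambda)_{i,j}}{10^{\,G(\lambda)/20}}$$

where the divisor in the exponent (10 or 20) depends on whether the gain is specified for power or for amplitude.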
  • the calibration output is thus a system comprising a plurality of imaging sensors, which can be operated in any environment, under any conditions and in any range.
  • the product of the radiometric and geometric correction provides for normalizing image values, in order to create a uniform basis for spectral signatures complying with the following rule: radiation hitting an object is converted into reflected radiation, transferred radiation, or absorbed radiation, in each wavelength separately.
  • On step 332, registration of the images taken by the various sensors and normalized may be performed.
  • Registration may comprise masking, in which the background of the RGB image is eliminated, such that an object of interest, for example leaves, is distinguished. Once registration is complete, all images are aligned, and the background of the other images may be eliminated in accordance with the same mask. The data relevant to the leaves or other parts of the plant can then be extracted.
  • the registration process may use any currently known algorithm, or any algorithm that will be known in the future, such as feature-based (SURF or SIFT) registration, RANSAC, intensity-based registration, cross-correlation, or the like.
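  • A minimal sketch of such a feature-based approach, using ORB keypoints (a freely available alternative to SURF/SIFT) matched and filtered with RANSAC to estimate a homography:

```python
import cv2
import numpy as np

def feature_register(moving, fixed):
    """Align the 'moving' image to the 'fixed' image via ORB features,
    brute-force matching and RANSAC homography estimation."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(moving, None)
    kp2, des2 = orb.detectAndCompute(fixed, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```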
  • Fig. 4 showing exemplary images taken by different sensors, following their registration. All images have been taken from a distance of about 1m and at a resolution that enables differentiating details smaller than 1mm; this relates to the specific images.
  • RGB image 404 was taken by an RGB camera at a wavelength as detailed for band number 8 in Table 1 above
  • RGB-HD image 408 was taken by a high-definition RGB camera
  • depth image 412 was taken by a depth camera, such that the color of each region indicates the distance between the camera and the relevant detail in the image
  • multi-spectral images 420 show the images taken in seven wavelengths, as detailed in bands number 1-7 of Table 1 above. It is seen that all images show the same details of the plant and its environment at the same size and location, thus enabling combining the images. The registration thus compensates for the different scales, parallax, and fields of view of the various sensors.
  • FIG. 5 showing a flowchart of an exemplary method for calibrating a thermal sensor, in accordance with some embodiments of the disclosure.
  • Dead pixels are pixels for which at least one of the following conditions holds: no response to temperature changes; initial voltage higher than the offset voltage; sensitivity that deviates from the sensitivity of the sensor by more than a predetermined threshold, for example 10%; or noise level exceeding the average noise level of the sensor by at least a predetermined threshold, for example 50%.
  • the correction of the dead pixels may be performed automatically by software.
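  • A possible automatic software correction along these lines, assuming per-pixel sensitivity and noise maps from the sensor characterization are available (the 10% and 50% thresholds follow the exemplary values above):

```python
import numpy as np

def correct_dead_pixels(frame, sensitivity, noise, sens_tol=0.10, noise_tol=0.50):
    """Flag pixels whose sensitivity deviates from the mean by more than
    sens_tol, or whose noise exceeds the mean noise by more than noise_tol,
    and replace each with the median of its non-dead 3x3 neighbours."""
    sens_dev = np.abs(sensitivity - sensitivity.mean()) / sensitivity.mean()
    dead = (sens_dev > sens_tol) | (noise > noise.mean() * (1 + noise_tol))
    corrected = frame.astype(float).copy()
    for r, c in zip(*np.where(dead)):
        r0, r1 = max(r - 1, 0), min(r + 2, frame.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, frame.shape[1])
        good = frame[r0:r1, c0:c1][~dead[r0:r1, c0:c1]]
        if good.size:
            corrected[r, c] = np.median(good)
    return corrected, dead
```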
  • non-uniformity correction may be performed, for bringing all pixels into a unified calibration curve, such that their reaction to energy changes is uniform.
  • the reaction of the sensor depends on the internal temperature and on the environmental temperature. In order to reduce the complexity, a linear model is created for each of these parameters separately.
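  • As one common realization of such a per-pixel linear model, a two-point correction computed from two uniform reference scenes may look as follows (the reference frames and temperatures are assumptions for the sketch; dead pixels are handled separately as described above):

```python
import numpy as np

def two_point_nuc(frame, cold_ref, hot_ref, t_cold, t_hot):
    """Two-point non-uniformity correction: per-pixel gain and offset
    computed from two uniform (e.g. blackbody) references, bringing all
    pixels onto a single linear response curve."""
    gain = (t_hot - t_cold) / (hot_ref - cold_ref)   # per-pixel gain
    offset = t_cold - gain * cold_ref                # per-pixel offset
    return gain * frame + offset
```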
  • On step 512, a second detection and correction of dead pixels may be performed, as detailed in association with step 504 above.
  • environmental temperature dependence adjustment may be performed.
  • the environmental temperature may be measured by a number of sensors located on the thermal sensor and the optical components.
  • One or more matrices may be determined, defining the relationship between the different measured temperatures and the energy level measured by the sensor. This relationship may then be fitted to a polynomial, for example of third or higher degree.
  • radiometric correction may be performed.
  • the energy measured by the thermal sensor is converted into temperature.
  • On step 524, a third detection and correction of dead pixels may be performed, as detailed in association with step 504 above.
  • the Crop Water Stress Index (CWSI) may then be calculated, wherein:
  • T min is the lower reference temperature of a completely non-stressed leaf at the same environmental conditions; and
  • T max is the upper reference temperature of a completely stressed leaf at the same environmental conditions.
  • T min and T max are either empirically estimated, practically measured at the same scene where the leaves are measured, or theoretically calculated from energy balance equations.
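  • With $T_{leaf}$ denoting the measured leaf temperature, the Crop Water Stress Index is commonly written as

$$\mathrm{CWSI} \;=\; \frac{T_{leaf} - T_{min}}{T_{max} - T_{min}}$$

so that a value near 0 corresponds to a completely non-stressed leaf and a value near 1 to a completely stressed leaf.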
  • a final check may be performed.
  • the first and second dead pixel corrections are performed automatically, while the third correction is performed manually.
  • FIG. 6 showing a flowchart of a method for determining a phenotype of a plant, in accordance with some embodiments of the disclosure. The method may be performed once the system is assembled and calibrated, and an engine has been trained with relevant data, comprising for example data related to the same plant, plot, or other characteristic.
  • At least some steps of the method of Fig. 6 may be performed by computing platform 128 of Fig. 1.
  • some of the steps may be performed by computing platform mounted on bracket 104, and other steps may be performed by one or more computing platforms located remotely from bracket 104, such as a cloud computer.
  • all processing may be performed by a cloud computer, which may receive the data and return the processing result to the computing platform mounted on bracket 104.
  • the steps may be performed by executing one or more units, such as an executable, a static library, a dynamic library, a function, a method, or the like.
  • At least two images may be received from at least two imaging sensors of different types, from the plurality of sensors 108 mounted on bracket 104 of system 100.
  • Each image may be an RGB image, a multi-spectral image, a depth image, a thermal image, or the like.
  • On step 604, data related to the positioning of system 100 may be received from additional sensors 112.
  • the data may be received from additional sensors mounted on bracket 104, additional sensors located remote from bracket 104, or a combination thereof.
  • the at least two images may undergo elimination of effects generated by the environmental conditions to obtain enhanced images, using the data obtained from the additional sensors.
  • the mutual orientation between imagers, illumination source and plant, may also be used.
  • Preprocessing may use calibration parameters obtained during system calibration, and corrections determined as detailed for example in association with step 212 above, such that the images are normalized.
  • the enhanced images may be preprocessed, as described, for example, in Fig. 2B above.
  • Preprocessing may include registration, in which the images are aligned, such that areas in two or more images representing the same objects or parts thereof appearing in two or more images are matched.
  • the registration provides for unified data, whether in the format of a unified image or in any other representation.
  • one or more features may be extracted from the unified data.
  • the features may be optical features, plant-related features, environment-related features, or the like.
  • the features may be extracted using image analysis algorithms.
  • the extracted features and optionally data items from the additional data may be provided to an engine, to obtain a phenotype of the plant, thus using the multi-dimensional sensor input to quantify or predict disease and/or stress level based on multi-modal data.
  • the engine may be a trained artificial intelligence engine, such as a neural network or a deep neural network, but may also be a non-trained engine, such as a rule engine, a look up table, or the like.
  • a combination of one or more engines may be used for determining a phenotype based on the features as extracted from the multi-modal sensors, for example using pre-trained models and non-trained models as a starting point in a network adapted to the analysis of data from a plurality of multi-modal sensors.
  • the phenotype can be provided to a user or to another system using any Input/Output device, written to a file, transmitted to another computing platform, or the like.
  • the results provided by the engine may be examined using a class activation map.
  • the engine may provide an indication of which area of the unified image or the images as captured demonstrates the differentiating factor that caused the recognition of the phenotype.
  • the indication may be translated to a graphic indication displayed over an image on a display device, such as a display of a mobile phone.
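  • A minimal Grad-CAM-style sketch of such a class activation map, using a torchvision backbone as a stand-in (the backbone, layer choice and input are placeholders, not the trained engine of the disclosure):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]

layer = model.layer4[-1]                      # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder input
score = model(img)[0].max()                   # score of the predicted class
score.backward()

# Weight each feature map by its average gradient, sum and rectify,
# then upsample to image size to overlay as a graphic indication.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
```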
  • Example 1 Early detection of abiotic stress - different imaging sensors and combinations thereof
  • Symptoms of abiotic stress were used to assess the effect of combining a plurality of imaging sensors of different modalities on early detection.
  • the biological system used was leaves of banana plantlets induced for abiotic stress by deficient fertilizer application.
  • the operator moved the tripod with the AgriEye set of cameras from plant to plant and collected the data from all the sensors using a tablet.
  • Data collection time was between 07:00 AM and 10:00 AM. Watering of the plants with or without fertilizer was at 13:00. All the collected data was uploaded to a database.
  • Late stress detection is defined as the symptoms being visible in an RGB image, e.g., a visible difference in plant size (height and leaf number).
  • Fig. 7 shows exemplary images of leaves taken by each of the sensors. As can be seen from the upper panel, the plants are small, and thus end-to-end capture is not effective. Determination of regions of interest (ROI) is required, as described, for example, in Vit et al., 2019 (Vit A et al., The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 16-20, 2019).
  • the middle panel of Figure 7 shows the ROI of the pictures taken by each of the imaging sensors.
  • the images of the ROI were analyzed by making use of a deep neural network (ResNet50), with weights pre-trained on a classification task from the ImageNet database.
  • the feature layer of the network was coupled to a 3-layer deep fully-connected neural network with the relevant classes as output, as is customary for a transfer learning scheme. Based on this analysis, the images were classified as "stressed" or "unstressed", referring to abiotic stress.
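  • A sketch of such a transfer-learning arrangement, assuming a torchvision ResNet50 backbone with ImageNet weights and illustrative widths for the 3-layer fully-connected head:

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone used as a frozen feature extractor
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False

# Replace the classification layer with a 3-layer fully-connected head
# whose output corresponds to the relevant classes ("stressed"/"unstressed").
backbone.fc = nn.Sequential(
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
```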
  • the ground-truth for the multi-modal data was set by the known description of the treatment (stress vs. no stress). Accuracy is defined as the percentage of correct classifications at the early stage out of the total number of cases.
  • Table 2 shows the accuracy (correct detections divided by the total number of detections) for analysis using the various sensors and combinations thereof, compared to the detection rate obtained from the RGB sensor only, where "early time-points" are defined as those time-points at which the symptoms are not yet visible to the naked eye (as judged by a trait expert), and as such are not expected to be distinguishable by an RGB sensor.
  • "late time-points" are those time-points where an expert (and thus potentially an RGB sensor) deemed the symptoms to be visible.
  • RGB and thermal sensor, or RGB and multi-spectral sensor at 670nm, enabled early detection of the abiotic stress symptoms, which were not visible using the RGB sensor only.
  • the combination of RGB + 670nm readings not only enabled the detection, but also improved its accuracy.
  • Example 2 Early detection of abiotic stress including registration steps
  • The imaging sensors comprised an RGB camera, a thermal camera (also referred to as InfraRed, IR) and a depth camera. The cameras used are as described in Example 1 hereinabove.
  • Fig. 8 shows images taken by RGB, IR and depth sensors, independently. This figure demonstrates that no significant difference is observed when the fertilizer was applied at different concentrations (67%, 100% or 200%). Accordingly, treatment“A” of 0% fertilizer was taken as inducing maximal stress, while treatments B-C were taken as not inducing stress on the examined plants.
  • Fig. 9 shows an exemplary image of leaves taken with RGB sensor only and of images taken with multiple sensors including RGB, thermal and depth sensors and analyzed as described for Fig. 7 hereinabove.
  • One problem in imaging of plants or other complicated objects, specifically in natural environment is that the image captures the object of interest as well as its surrounding.
  • Gradient-weighted Class Activation Mapping (Grad-CAM) was therefore used to examine which regions of the image contributed to the classification.
  • Figs. 10A and 10B demonstrate that using multiple sensors provides for better differentiation of the leaves (points or objects of interest), wherein in 42% of the cases the model identified the leaves (Fig. 10B), compared to only 26% in images taken by RGB only (Fig. 10A).
  • Figs. 11 and 12 demonstrate registration of the images taken by multiple sensors vs. RGB sensor.
  • the registration comprises masking in which the background of the RGB image is eliminated, such that artifacts of the surroundings are distinguishable from the leaves. Once registration is complete all images are aligned, and the background of the other images may be eliminated in accordance with the same mask. As is demonstrated in Figure 12, such registration results in higher percentage of detection of the object of interest (93% detection using RGB, Fig. 12A vs. 96% using the multiple sensors Fig. 12B).
  • Table 3 demonstrates that using multiple imaging sensors provides significantly improved detection of stress resulting from lack of fertilizer, compared to images taken by the RGB sensor only, at all the time points examined. Table 3: Detection of abiotic stress by single vs. multi-modal sensors after registration

EP20805595.4A 2019-05-13 2020-05-13 Systeme und verfahren zur phänotypisierung Pending EP3969899A4 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962846764P 2019-05-13 2019-05-13
PCT/IL2020/050515 WO2020230126A1 (en) 2019-05-13 2020-05-13 Systems and methods for phenotyping

Publications (2)

Publication Number Publication Date
EP3969899A1 true EP3969899A1 (de) 2022-03-23
EP3969899A4 EP3969899A4 (de) 2023-02-01

Family

ID=73288900

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20805595.4A Pending EP3969899A4 (de) 2019-05-13 2020-05-13 Systeme und verfahren zur phänotypisierung

Country Status (3)

Country Link
US (1) US20220307971A1 (de)
EP (1) EP3969899A4 (de)
WO (1) WO2020230126A1 (de)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11835390B2 (en) * 2020-04-14 2023-12-05 Utah State University Spatially estimating thermal emissivity
DE102021207009A1 (de) * 2021-07-05 2023-01-05 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren und Vorrichtung zum Erkennen eines Pflanzengesundheitszustands von Pflanzen für eine Landmaschine
CN113848208B (zh) * 2021-10-08 2023-12-19 浙江大学 一种植物表型平台及其控制系统
CN115996318B (zh) * 2023-03-22 2023-05-26 中国市政工程西南设计研究总院有限公司 一种苗木栽植数字化管理系统及方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7617057B2 (en) * 2005-12-21 2009-11-10 Inst Technology Development Expert system for controlling plant growth in a contained environment
US10520482B2 (en) * 2012-06-01 2019-12-31 Agerpoint, Inc. Systems and methods for monitoring agricultural products
WO2015006675A2 (en) * 2013-07-11 2015-01-15 Blue River Technology, Inc. Method for automatic phenotype measurement and selection
US20150130936A1 (en) * 2013-11-08 2015-05-14 Dow Agrosciences Llc Crop monitoring system

Also Published As

Publication number Publication date
US20220307971A1 (en) 2022-09-29
EP3969899A4 (de) 2023-02-01
WO2020230126A1 (en) 2020-11-19


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20211213

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20230105

RIC1 Information provided on ipc code assigned before grant

Ipc: G01N 21/84 20060101ALI20221223BHEP

Ipc: G06T 7/00 20170101ALI20221223BHEP

Ipc: G01S 17/88 20060101ALI20221223BHEP

Ipc: G01N 21/25 20060101ALI20221223BHEP

Ipc: G01N 21/31 20060101ALI20221223BHEP

Ipc: G01N 33/00 20060101AFI20221223BHEP