WO2019133973A1 - Crop health sensing system - Google Patents

Crop health sensing system Download PDF

Info

Publication number
WO2019133973A1
Authority
WO
WIPO (PCT)
Prior art keywords
crop
images
image
canopy
camera
Prior art date
Application number
PCT/US2018/068150
Other languages
French (fr)
Inventor
Scott Shearer
Christopher WIEGMAN
Andrew KLOPFENSTEIN
David W. SEE
Original Assignee
Ohio State Innovation Foundation
The Government Of The United States, As Represented By The Secretary Of The Air Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ohio State Innovation Foundation, The Government Of The United States, As Represented By The Secretary Of The Air Force filed Critical Ohio State Innovation Foundation
Publication of WO2019133973A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • Embodiments of the present disclosure relate to sensing of crop health below the canopy using various systems and image processing to provide assessment of crop health. Any method that can capture multiple images from within the canopy can be used.
  • remote sensing may be used, such as an unmanned aerial system (UAS), specifically a UAS outfitted with a "stinger" appendage.
  • Remote sensing provides producers and the agricultural community the opportunity to visualize entire fields from an aerial perspective.
  • the ability to detect and view targets of interest from a different vantage point provides significant benefits that have not been fully realized.
  • this invention, in one aspect, relates to a method of developing an image library collected from vantage points within a crop canopy, for specific crops (e.g. corn, wheat, soybeans, etc.) within agricultural regions of interest (e.g. Midwestern Cornbelt, etc.) to be utilized in conjunction with artificial intelligence (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.) for the purpose of diagnosing crop health problems.
  • the invention relates to a method of generating point clouds from multiple images collected from within the plant canopy, comprising the extraction of specific plant components and projection from 3D to 2D for the purpose of extracting image features to improve crop health problem diagnosis accuracy using artificial intelligence techniques (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.).
  • the invention, in another aspect, relates to a method of generating point clouds from multiple images collected from within the plant canopy comprising creation of solid models from the point clouds, extraction of specific plant components, and projection of surface features from 3D to 2D for the purpose of extracting and clarifying image features to improve crop health problem diagnosis accuracy using artificial intelligence techniques (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.).
  • the invention, in another aspect, relates to a method of assessing crop health using a learning algorithm capable of identifying different types of stresses of a crop of interest, comprising: accessing a sample image library comprising a plurality of images including images of deficient crop tissue, the deficient crop tissue images classified in the images according to the stress type; training the learning algorithm using the sample image library by applying an upsampling to provide approximately the same number of images of each stress type in the image library; extracting relevant features from an image; performing dimensionality reduction of each feature map of each image and retaining information most useful for classification; and determining a highest probability for each stress for each image in the library to provide a final classification.
  • Any of these methods may include collecting images of plants at a discrete sampling site from a vantage point within the crop canopy; generating a 3D model of the images of plants at the discrete sampling site; extracting and projecting samples of the images of plants into a 2D image; and matching features of the projected samples with images in an image library for diagnosing diseases at the discrete sampling site.
  • Any of these methods may include classifying the pixels in the image according to a predefined classification set of pixel values.
  • Any of these methods may include the image library including images of healthy crop tissue.
  • Any of these methods may include the stresses being abiotic/biotic stresses.
  • Any of these methods may include the stresses including any stress identifiable by visual inspection of crop tissue.
  • Any of these methods may include the crop of interest being corn.
  • Any of these methods may include the stresses being abiotic or biotic classes.
  • Any of these methods may include an additional class assessment.
  • Any of these methods may include extracting relevant features by running a sliding window across pixels of an image.
  • Any of these methods may include performing dimensionality reduction of each feature map of each image and retaining information most useful for classification.
  • Any of these methods may include determining a highest probability for each stress for each image in the library to provide a final classification.
  • a crop health assessment image capture system for use with the method according to any of the claims above comprises a motorized vehicle; a camera attached to a distal portion of the motorized vehicle; and an image storage medium for storing images captured by the camera for later processing.
  • Any of these systems may include a transmitter for transmitting images to a remote location.
  • Any of these systems may include at least one camera that is a multi- spectral camera.
  • Any of these systems may include the motorized vehicle being an unmanned vehicle.
  • Any of these systems may include the unmanned vehicle being a rotary wing drone.
  • a method of assessing the health of a crop having a crop canopy using an aerial crop health assessment system that includes an unmanned aerial vehicle, capable of being controlled remotely, a suspension rod extending from an underside of the unmanned aerial vehicle, and a camera attached to a distal end of the suspension rod, the method including flying the unmanned aerial vehicle to a crop location; positioning the unmanned aerial vehicle above a preselected location in the crop canopy; lowering the unmanned aerial vehicle to cause the camera to descend within the crop canopy; capturing images via the camera, the images of an area within the crop canopy; and processing the images to determine if health of the crop is compromised.
  • An aerial crop health assessment system comprising an unmanned aerial vehicle, capable of being controlled remotely; a suspension rod, extending from an underside of the unmanned aerial vehicle; a camera attached to a distal end of the suspension rod; and an image storage medium for storing images captured by the camera for later processing.
  • the invention relates to a method of assessing the health of a crop having a crop canopy using an aerial crop health assessment system, the aerial crop health assessment system comprising an unmanned aerial vehicle, capable of being controlled remotely, a suspension rod, extending from an underside of the unmanned aerial vehicle, and a camera attached to a distal end of the suspension rod, the method comprising flying the unmanned aerial vehicle to a crop location; positioning the unmanned aerial vehicle above a preselected location in the crop canopy; lowering the unmanned aerial vehicle to cause the camera to descend within the crop canopy; capturing images via the camera, the images of an area within the crop canopy; and processing the images to determine if health of the crop is compromised.
  • the invention relates to a method of developing an image library collected from vantage points within the crop canopy using the methodology described herein, for specific crops (e.g. corn, wheat, soybeans, etc.) within agricultural regions of interest (e.g. Midwestern Cornbelt, etc.) to be utilized in conjunction with artificial intelligence (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.) for the purpose of diagnosing crop health problems.
  • the invention relates to a method of generating point clouds from multiple images collected from within the plant canopy using the methodology described herein, and the extraction of specific plant components and projection from 3D to 2D for the purpose of extracting image features to improve crop health problem diagnosis accuracy using common artificial intelligence techniques (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.).
  • An advantage of the present invention is to provide an unmanned aerial stinger-suspended crop health assessment sensing tool.
  • FIG. 1 illustrates a schematic of an embodiment of the stinger suspended crop health sensing system according to principles described herein.
  • FIG. 2 is a flow chart suggesting how the Unmanned Aerial Stinger-Suspended Crop Health Sensing System may be used to sense crop health.
  • FIG. 3 is a flow chart summarizing statistical measures for assessing crop vigor from UAS remote sensing imagery from above the crop canopy.
  • FIG. 4 is a flow chart summarizing feature extraction and crop stress classification for assessing crop health from imagery obtained from within the crop canopy using stinger-suspended cameras.
  • FIG. 5 is a 3-D crop model, viewed parallel to crop rows.
  • FIG. 6 is a 3-D model showing orientation of image planes (and cameras) relative to crop canopy.
  • FIG. 7 is a 3-D model of plant components (leaves and ears).
  • FIG. 8 shows images of multiple leaves on individual plants from a 3-D model.
  • FIG. 9A shows a stinger inserted into crop canopy.
  • FIG. 9B shows a stinger insertion into crop canopy.
  • FIG. 9C shows a stinger insertion into crop canopy.
  • FIG. 9D shows a stinger insertion into crop canopy.
  • FIGS. 10A-C illustrate a solid model of the stinger with a camera head tilt angle of 45°; the isographic, right side, and front views are shown in A, B, and C, respectively.
  • FIG. 11 is a flow chart for the general image processing according to principles disclosed herein.
  • FIG. 12 shows an example of individual pixel classes.
  • FIGS. 13A and 13B show how a skewed frequency, having disproportionately more small blobs, indicates low vigor.
  • FIG. 14 shows a plot of the points on top of an additional user defined image.
  • FIG. 15 shows a camera mounted to the underside of the drone on the payload rack.
  • FIGS. 16-17 show wireless antennas (e.g., Ubiquiti).
  • FIGS. 18-21 are confusion matrices generated using image classification software (e.g. MATLAB) to indicate where machine-learning classification algorithm errors are accumulating.
  • image classification software e.g. MATLAB
  • FIGS. 22A and 22B illustrate image library counts according to at least two class division methodologies.
  • FIG. 23 illustrates a representative CNN architecture used for diagnosis of stresses in corn, detailing the two convolution and pooling layers with the fully connected and classification layers. The important detail is the lightweight nature of the neural network, which yields quick classifications.
  • FIGS. 24A and 24B illustrate an exemplary output of the system shown in FIG. 23.
  • FIG. 25 illustrates training loss and accuracy on abiotic/biotic stress according to an exemplary embodiment.
  • FIG. 26 illustrates training loss and accuracy on all stresses according to an exemplary embodiment.
  • One possible solution to the problem of early detection and diagnosis of lower canopy crop vigor is the use of devices to capture images from within the canopy, such as inserting a camera into the crop canopy. For example, lowering a camera from a boom or suspension rod from above, e.g. from a tractor or aerial vehicle, may be used.
  • an unmanned aerial system (UAS) may be provided with a stinger-suspended camera.
  • the UAS may be any known controllable or automated-flight-path UAS, which may be a multi-rotor or other appropriate UAS.
  • the UAS stinger may include a sampling probe.
  • the stinger allows for the suspension of a camera head and/or tissue, soil and/or spore samplers, thereby placing human-like scouting capabilities at virtually any location within agricultural production regions.
  • the stinger can be positioned and dropped into virtually any plant canopy by the UAS and pilot in command.
  • with advances in UAS technology, it may be possible for the UAS to fly the appropriate path to sense the crop vigor with minimal pilot or human intervention, or even fully automated, if allowable under local flight rules.
  • Insertion points may be determined based on information collected in an initial "fly over" by the UAS described herein or by a more traditional fixed wing aircraft or drone prior to insertion of the suspended devices of the UAS described herein in the canopy, e.g., target areas of reduced crop vigor. Such insertion points of reduced crop vigor may inform treatment decisions for other areas of the field that may not yet exhibit reduced crop vigor from above.
  • among the stinger-suspended technologies is a camera head capable of collecting multiple, synchronized images upon penetration of the plant canopy.
  • the camera head may capture multi-spectral bands (e.g., infrared, visible, and/or ultra-violet frequencies).
  • the camera may be mounted using an articulated mount so that the camera’s targeted field of view can be controlled remotely (or pre-programmed).
  • the camera's image capture band may be of any range appropriate for the analysis to be performed.
  • the camera may be panoramic or spherical or have a lens, such as fish eye, or articulation appropriate to capture in multiple directions.
  • the suspension device may additionally include a light source, such as LED lighting, to improve quality of image capture.
  • A schematic of an exemplary embodiment of a crop health sensing system 100 according to principles described herein is provided in FIG. 1.
  • the crop health sensing system includes a multi-rotor unmanned aerial vehicle or system (UAV/UAS) 102.
  • a camera 104, which may be multi-spectral, is suspended from the multi-rotor UAV 102 via a suspension rod or line 106.
  • the system 100 may include one or more than one camera. Three cameras 104a, 104b, 104c are illustrated in FIG. 1, but the system 100 is not limited to three cameras.
  • the system 100 further includes a wireless antenna 108 for communication with a ground station or other recipient.
  • the system 100 may further include a lamp or lamps for providing illumination to the camera, particularly while the camera is inserted into the crop canopy.
  • the lamp may be a light emitting diode (LED) lamp 110 or a series of LED lamps in a strip, or the like.
  • LED light emitting diode
  • the collected imagery may be downloaded and processed to generate a 3D model of the internal plant canopy.
  • the imagery may be transmitted in real time to a processing center to perform image analysis. It is conceivable that with the evolution of technology, the presently contemplated image preprocessing may be performed "on board" the UAS.
  • A flow chart of general steps of sensing crop health according to principles described herein is provided in FIG. 2.
  • a fixed wing or other vehicle capable of "fly over" of the crop canopy flies over the designated field, and captures images of the crop, e.g., by RGB (red, green, blue) NADIR imagery.
  • a crop vigor assessment may be conducted using the imagery to identify GPS coordinates (or other appropriate geo-locating information) of areas within the field of poor crop vigor as assessed from an upper view.
  • a crop health sensing system 100 according to principles described herein, e.g., as illustrated in FIG. 1, may then be deployed to those locations.
  • the camera 104 of the system 100 is inserted into the crop canopy to obtain RGB or multispectral images of the crop from below the canopy. The images provided by the camera may be transmitted back to a ground station via the wireless antenna 108 or stored on-board for later retrieval. Using the images, a 2D stress analysis may be performed with input from/comparison to information in a reference library. In addition, or alternatively, the images provided by the camera may be used to create a 3D model of the crop below the canopy (3D Intra-Canopy Modeling).
  • a 3D point cloud model may be created using the information and a 3D stress analysis performed with input from/comparison to the information in the reference library.
  • the reference library may be static or may be dynamic, such that it is updated after a diagnosis made using the system is confirmed, with images collected and modeling uploaded to build a more comprehensive reference library.
  • FIG. 3 is a flow chart of exemplary steps for providing the crop vigor assessment that is part of the overall health assessment according to principles described herein.
  • a crop assessment algorithm is initialized with a graphical user interface; flight paths, image resolutions, and image overlap are specified. Images are imported from the crop health sensing system (e.g. the camera inserted into the canopy), either by download after return of the image capture device (e.g., camera) to a ground or base station, after transmission from the image capture device in the field, or in real time or near real time while the image capture device is still in the field.
  • the image capture device may be provided with appropriate image storage capabilities, e.g., by on-board storage such as a hard drive, removable storage device or the like.
  • the crop images are standardized/processed (e.g. cropped in RGB) to perform an image pixel classification according to a pixel classification set. The pixel classification set may be loaded from an external data set or may be created or updated using images gathered from use of the present system, for example, by dynamically updating the pixel classification set in a feedback loop from the present system.
  • the images are processed to create binary images to allow for blob formation and categorization into positive or negative classes, for example related to the crop type, such as "corn" and "not corn" classes. Blob statistics are calculated, and a test conducted to determine if the data matches a particular population, such as a Chi Square Goodness of Fit Test.
  • a priori "crop vigor" ratings are compared to the Chi Squared statistic for generating the final "crop vigor" rating. Then, a vigor rating for each image is generated and mapped to the position coordinates, obtained from a Global Position System geolocator, for that image. The collective set of "crop vigor" ratings and corresponding position coordinates is used to map the "crop vigor" of an entire field.
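A minimal sketch of this rating step, assuming a binarized crop mask and an a priori blob-size distribution for a healthy canopy (the bin edges and reference frequencies below are illustrative, not the patent's values):

```python
# Group crop pixels into blobs, histogram the blob sizes, and run a
# Chi-Square goodness-of-fit test against the a priori healthy distribution.
import numpy as np
from scipy import ndimage, stats

def vigor_rating(crop_mask, healthy_freq,
                 bin_edges=(0, 50, 200, 1000, 5000, np.inf)):
    """crop_mask: binary image (1 = crop pixel). healthy_freq: a priori
    relative blob-size frequencies per bin for a healthy canopy."""
    labels, n = ndimage.label(crop_mask)                  # blob formation
    sizes = ndimage.sum(crop_mask, labels, range(1, n + 1))
    observed, _ = np.histogram(sizes, bins=bin_edges)
    expected = healthy_freq / np.sum(healthy_freq) * observed.sum()
    chi2, p = stats.chisquare(observed, expected)         # goodness of fit
    return p  # high p-value: blob sizes match the healthy population
```

Each per-image rating would then be paired with the image's GPS coordinates to build the field-wide vigor map described above.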
  • FIG. 4 is a flow chart for the stress diagnosis method using near real-time input from a crop health sensing system according to principles described herein. While the crop-insertion camera is remote from the base station, imagery may be wirelessly transmitted to the base station, e.g., during transit in the field or to/from the field. The 2D or 3D imagery captured by the camera from below the canopy may then be processed to perform 2D feature extraction or used to create a 3-D model of the crop below the canopy (3D Intra-Canopy Modeling). As in FIG. 2, a 2D stress analysis may be performed with input from/comparison to information in a reference library and actionable information collected.
  • a 3D point cloud model may be created using the information and a 3D stress analysis performed with input from/comparison to the information in the reference library.
  • the reference library may be static or may be dynamic, such that it is updated after a diagnosis made using the system is confirmed, with images collected and modeling uploaded to build a more comprehensive reference library.
  • the imagery may be compared to a biotic/abiotic classification model to determine if the stressors are biotic or abiotic. Actionable information is then generated as appropriate for the biotic/abiotic determination. While shown with respect to the wireless information transfer of FIG. 4, the biotic/abiotic classification is also applicable if the images are downloaded from the camera system not in real time/near real time (e.g., subsequent "hard wire" download) or by directly accessing the memory device, e.g., removable storage.
  • a canopy 3D model analysis technique provides for the extraction of individual plant components, and then comparison of plant component features with a known database of features for identification of crop health problems. For example, individual leaves from the lower part of the canopy are extracted and then flattened (2D image projection) to support analysis of leaf color, leaf margin shape and size, leaf venation and color striations, and lesions. This approach can be used for both the top and bottom sides of the leaf. The extracted features are then compared to an existing crop health database to identify multiple crop health problems including macro and micro nutrient deficiencies, disease and insect infestations, for example.
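As an illustration of the flattening step, here is a minimal sketch that assumes a single leaf has already been segmented from the point cloud; it fits a best-fit plane by SVD and projects the points onto it. The actual embodiment may use a more sophisticated surface unrolling:

```python
import numpy as np

def flatten_leaf(points_3d: np.ndarray) -> np.ndarray:
    """points_3d: (N, 3) leaf points. Returns (N, 2) flattened coordinates."""
    centered = points_3d - points_3d.mean(axis=0)
    # The first two right singular vectors span the best-fit plane of the
    # leaf surface; the third is its normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # project onto the in-plane axes
```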
  • the feature database may include, but not be limited to, leaf color, shape and size; venation pattern; margin physical shape; lesion shape, size, and number; and other physical features known to entomologists, plant pathologists, crop physiologists, soil scientists, agronomists, horticulturalists, enologists, and other crop production specialists or the like.
  • Concurrent with image data collection, the stinger or boom can be fitted with soil, tissue and spore sample collection capabilities. Material samples collected during concurrent image collection may prove vital for confirmation of the crop health diagnoses.
  • the stinger- or boom-suspended camera head and image processing procedures allow for timely diagnosis of crop stresses that may develop after crop canopy closure, so action can be taken to remediate yield reductions.
  • Remediation actions may include the application of fungicides, insecticides and fertilizers, or irrigation scheduling and management.
  • By using the stinger or boom to perform within-canopy imaging and/or material sampling from multiple locations in affected regions of a field, the extent of the problem can be more effectively characterized to determine the best possible management approach and resulting economic outcome. By precisely identifying the affected area, the farmer can reduce application amounts and resulting waste, saving not only money but also the environment.
  • image processing allows for an evaluation of the extent of stress involvement and potential yield loss, both at the individual plant level and on a more targeted sampling regime that is more representative of the entire field than current techniques.
  • analyses by plant components, such as individual leaves, by projecting from the 3D point cloud model onto a 2D surface, lead to a more rigorous, repeatable and quantitative measurement of critical crop health features.
  • a stinger-suspended camera is mounted to the underside of a UAS for positioning the stinger within the crop canopy at "area of interest" locations. While described here with respect to a UAS, a boom suspension via tractor or other ground-based transit mechanism is possible. At least one material sampling probe may also be suspended from the stinger. Upon reaching an "area of interest," the drone (UAS) descends, dropping the cameras/probes into the crop canopy so imagery of the lower canopy reaches can be obtained.
  • the UAS may be equipped with geo-sensing equipment, such as a Global Position System geolocator.
  • the stinger may include a 3m fiberglass rod, flexible UAS coupling and camera head mounting plate with multiple cameras.
  • the 3m rod length allows for sufficient canopy penetration while keeping the drone (UAS) far enough above the crop canopy to avoid rotor entanglement in the crop canopy.
  • the camera mounting plate is designed so that camera angles can be adjusted relative to the horizon.
  • the multi-rotor UAS is used to fly the stinger with camera head and/or sampling probes to the correct field position.
  • the UAS lift capacity is selected such that it exceeds the overall weight of the stinger, camera head and sampling probes, to ensure safe and reliable flights.
  • the crop health sensing system allows for qualitative assessment of crop health issues in a field.
  • a UAS flight ("fly over") is used to determine any locations of interest in a field, usually those that have lower crop vigor.
  • the crop health sensing system allows for a visual inspection of the lower reaches of the canopy and the associated algorithms enable a quantitative assessment of the crop health.
  • the exemplary crop health sensing system suspends one or numerous cameras beneath a UAS for remote data acquisition, enabling large area coverage with minimal effort.
  • the drone (UAS) flies to a desired location, then inserts cameras suspended approximately 12 feet beneath it into the canopy, and intra-canopy images are taken.
  • Figure 5 is a 3-D crop model, viewed parallel to crop rows. Blue squares represent the image plane of the photos used to generate the model.
  • Figure 6 is a 3- D model showing orientation of image planes (and cameras) relative to crop canopy.
  • Figure 7 is a 3-D model of plant components (leaves and ears). Leaf venation and margins are clearly visible pointing to the robust nature of the model formation process.
  • Figure 8 shows images of multiple leaves on individual plants from a 3-D model. Two corn plants, clearly visible to the left and right, with zoomed view at center projected onto the 2D surface for crop stress diagnosis. Note the differences in venation, coloring and leaf shape.
  • An exemplary Stinger Suspended Crop Health System was studied, in which the drone used to fly the Stinger is a DJI S1000+. Due to its high payload capacity of 25 lbs, it has proven an excellent UAS platform to handle the Stinger as well as insert and then remove it from the canopy.
  • the exemplary drone may fly to a series of pre-programmed locations determined with the Stressed Crop Identification Algorithm using DJI's standard flight controller software, DJI GO, then drop altitude, inserting the cameras into the crop canopy (Figures 6 and 7). After the cameras capture the relevant imagery, the drone will regain its set travel altitude and head to the next desired sample point and repeat the process until a select number (e.g., some or all) of the stressed locations have been visited.
  • Post flight, imagery is used to generate a 3D model of the internal crop canopy from each of the discrete sampling sites.
  • 3D model generation may be accomplished using image processing software (e.g., Agisoft's PhotoScan) by first forming a point cloud from corresponding pixels within multiple images.
  • the depth viewed in the 3D models is a result of the stereoscopic image overlap from the four cameras composing the camera head mounted at the distal end of the stinger.
  • Renderings of sample models are shown in Figures 5, 6 and 7. After the model is formed, individual leaves from plants are extracted and projected into a 2D plane.
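For intuition only, here is a hedged two-view sketch of the underlying idea using OpenCV; commercial tools like PhotoScan perform this across many images with bundle adjustment, and the camera intrinsic matrix K is an assumed input:

```python
import cv2
import numpy as np

def two_view_point_cloud(img1, img2, K):
    """Match corresponding pixels, recover relative pose, triangulate."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
    P2 = K @ np.hstack([R, t])                         # second camera pose
    pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)   # homogeneous 4xN
    return (pts4[:3] / pts4[3]).T                      # N x 3 point cloud
```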
  • FIGS. 9A, 9B, 9C, and 9D show a stinger inserted into crop canopy.
  • FIG. 10 is a solid model of the stinger with a camera head tilt angle of 45°. The isographic, right side, and front views are shown in A, B and C, respectively. The Center of Gravity is circled in red in each view.
  • a Stressed Detection Algorithm may be used to evaluate the crop vigor in the field.
  • the first step is to fly the field of interest, for example, with a small fixed wing drone, such as an "eBee", and take 200 to 300 images of the field depending on field size, giving 200 to 300 sample points for analysis.
  • Typical industry practice is to stitch these into a single image; however, this is a time-consuming process, often taking several hours to complete, and often not an effective one either.
  • While a fixed wing drone is described in this example for the initial fly over, the drone according to principles described herein may be appropriately outfitted with an aerial camera to perform this function in addition to the canopy insertion function by the suspended camera(s).
  • the Stress Detection Algorithm analyzes each image individually, greatly reducing processing time to around 20 minutes for an approximately 30 hectare field in initial testing. Furthermore, the output is much more actionable, being GPS coordinates of crop that exhibits low vigor. Please refer to the flow chart in FIG. 11 for a general overview of the algorithm. First, after starting the algorithm and entering some basic information, the images are imported from the drone and then cropped to focus on the center region. This is to reduce any overlap that occurs, as these flights are programmed to have 70%-85% side and end lap for the stitching process. After cropping, the pixels in the image are classified according to a predefined classification set, for example, using a technique developed by J. Leonard D.
  • the individual pixel classes are then reduced to positive or negative classes, for example "crop" and "not crop" classes, so the image can be binarized (FIG. 12).
  • the pixels classified as crop (corn crop shown as white) are then grouped into blobs using binary morphological image processing techniques. These techniques utilize filter masks to close any errant gaps in the corn blobs via a technique called dilation, followed by erosion to maintain object edge integrity.
  • the primary objective in these operations is to account for image noise or any errantly classified pixels from the original RGB drone image.
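A short sketch of the described cleanup, assuming a binarized classification mask is already in hand (the file name and kernel size are illustrative):

```python
import cv2
import numpy as np

binary = cv2.imread("classified_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
kernel = np.ones((5, 5), np.uint8)
dilated = cv2.dilate(binary, kernel)   # close errant gaps in the crop blobs
closed = cv2.erode(dilated, kernel)    # shrink back, preserving edge integrity
# Equivalently: cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```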
  • blob statistics are then evaluated using a Chi-Squared Goodness of Fit Test.
  • a skewed frequency, having disproportionately more small blobs, indicates low vigor, as the crop canopy has yet to close due to the slowed growth of the plants (FIG. 13).
  • low-vigor areas have higher numbers of soil and "non-crop" foreground pixels, which results in the reduced sizes of the blobs upon classification.
  • the final output of the algorithm can take multiple forms (e.g. a spreadsheet file with the vigor ratings at each image location, or a plot of the points on top of an additional user-defined image) in accordance with end-user requirements.
  • An example of the output is shown in FIG. 14.
  • the GPS coordinates are represented as dots whose color corresponds to their relative vigor levels and they are displayed on top of a user defined remote sensing image (e.g. AirScout) of the field.
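A small sketch of this map output with stand-in data; in practice the base image would be the user-defined remote sensing image and the points would come from the vigor algorithm:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: the extent, coordinates, and ratings are hypothetical.
lon_min, lon_max, lat_min, lat_max = -83.05, -83.03, 40.00, 40.01
field = np.full((100, 160, 3), 0.35)                  # placeholder field image
lons = np.random.uniform(lon_min, lon_max, 250)
lats = np.random.uniform(lat_min, lat_max, 250)
vigor = np.random.rand(250)                           # 0 = low, 1 = high vigor

plt.imshow(field, extent=(lon_min, lon_max, lat_min, lat_max), aspect="auto")
sc = plt.scatter(lons, lats, c=vigor, cmap="RdYlGn", s=40)  # green = high vigor
plt.colorbar(sc, label="relative crop vigor")
plt.savefig("vigor_map.png")
```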
  • A UAS with a vertical stinger provides a simple approach for inserting cameras into a crop canopy for image collection.
  • the crop of interest has been corn for demonstration of the crop health sensing system, as its canopy is less dense than other grain crops such as wheat or soybeans.
  • the Stinger is a 3.5 meter fiberglass rod with stiff rubber hose at the end that attaches to the drone payload mounts to allow for stinger flexure during takeoff and landing as well as providing damping as the stinger swings in the wind or changes in direction during flight.
  • At the other end is the camera mount.
  • the specific camera mount is subject to change depending on the characteristics of the crop canopy and the cameras mounted to it.
  • Camera control is another unique aspect of this overall process.
  • the initial problem is in determining when the Stinger has been inserted into the correct position for image acquisition.
  • While the flight controller is relatively reliable for general drone placement, real-time kinematic GPS is required to place the cameras exactly where they need to be for optimal image acquisition. If real-time kinematic GPS is unavailable, several alternative steps may be required to solve the low accuracy GPS concern.
  • a camera was mounted to the underside of the drone on the payload rack right where the top of the Stinger mounts to the drone (FIG. 15). This camera focuses on the Stinger itself and broadcasts through the DJI Lightbridge 2 flight control system to the pilot's ground control station. This lets the pilot in command (PIC) observe and confirm the location of the Stinger. This is also a useful safety feature, as it guarantees the Stinger is not caught on anything during the extraction process.
  • the current capacity of the network is around 200-450 megabytes per second, depending on signal quality and range. This means that any device connected to the network at a base station has remote access to the cameras in the field. Furthermore, this network also allows the user back on the ground to access the images immediately after acquisition. Although still under development, this system enables access to any image in near-real time after acquisition. This allows for processing much earlier than many current industry alternatives, which require the operators to wait until the UAS lands before downloading and processing the imagery.
  • the second use of the images is for stress diagnosis.
  • features on the leaves in the 2D images are extracted from a test set (Reference Library; refer to section 2.5 for more details). These features are extracted using the MATLAB command 'bagOfFeatures', which identifies distinguishing features for each image class (i.e. stress). These features include things such as lesion shape, the specific pattern of chlorosis (discoloration) on the leaf, distribution of lesions or chlorotic areas, etc.
  • the MATLAB application 'ClassificationLearner' was used with these extracted features to perform automated training of different classification models, such as decision trees, SVM (support vector machines), logistic regression and others, to evaluate which method has the most discriminatory ability among the image classes, or in simple terms, the highest successful stress diagnosis rate.
  • these models are then compared and matched to any leaf features in the images from the Stinger with the goal of diagnosing the stress.
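The passages above use MATLAB's bagOfFeatures and Classification Learner; the following is a rough Python analog (not the authors' code) of the same bag-of-visual-words idea: cluster local descriptors into a vocabulary, encode each leaf image as a word histogram, and train an SVM on the histograms. Vocabulary size and kernel are illustrative:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create()

def descriptors(img):
    # Local descriptors stand in for bagOfFeatures' feature extraction.
    _, d = orb.detectAndCompute(img, None)
    return d if d is not None else np.empty((0, 32), np.uint8)

def encode(img, vocab: KMeans):
    d = descriptors(img).astype(np.float32)
    if len(d) == 0:
        return np.zeros(vocab.n_clusters)
    words = vocab.predict(d)                        # quantize to visual words
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / hist.sum()                        # normalized word histogram

def train(images, labels, k=200):
    all_desc = np.vstack([descriptors(i) for i in images]).astype(np.float32)
    vocab = KMeans(n_clusters=k).fit(all_desc)      # visual vocabulary
    X = np.array([encode(i, vocab) for i in images])
    return vocab, SVC(kernel="rbf").fit(X, labels)  # stress classifier
```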
  • Biotic stresses are generally those that are caused by a biological agent, such as a disease or insect, and abiotic stresses are those that are not due to a living organism, such as nutrient deficiencies, drought or equipment damage.
  • the algorithms are capable of differentiating between these two general groups with around 85%-90% accuracy (refer to confusion matrices below).
  • the advantage of this tiered approach is that it specifies or limits the features to those that pertain to the specific disease group, increasing the ability to differentiate the different classes.
  • this same general process could be applied to the leaves in the 3D models.
  • artificial intelligence/neural networks may be used to determine the crop stressor, e.g., by Principal Component Analysis (PCA) for features extracted from images.
  • PCA uses orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called Principal Components.
  • the first Principal Component accounts for the largest possible variance, with successive Principal Components exhibiting the largest possible variance with the constraint that it is orthogonal to the previous Principal Component.
  • the resulting vectors are an uncorrelated orthogonal basis set.
  • PCA can be used to reduce data sets to a lower dimension by eliminating variables that contribute little to the classification accuracy.
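A minimal sketch of the PCA reduction described above, keeping the components that explain 95% of the variance (as in the classifier behind FIG. 20); the feature matrix here is a random stand-in:

```python
import numpy as np
from sklearn.decomposition import PCA

features = np.random.rand(500, 64)       # stand-in for extracted leaf features
pca = PCA(n_components=0.95)             # keep 95% of explained variance
reduced = pca.fit_transform(features)    # uncorrelated principal components
print(reduced.shape, pca.explained_variance_ratio_.cumsum()[-1])
```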
  • A confusion matrix generated by MATLAB is used to indicate where the machine learning classification algorithm is accumulating errors.
  • On the x-axis is the 'Predicted Class', or what the algorithm thought the image class was.
  • On the y-axis is the 'True Class', or what the image actually is.
  • the first image shows the first tier, separating the Biotic from Abiotic stresses.
  • FIG. 18 is a confusion matrix for a Medium Gaussian SVM classification model for Biotic/Abiotic classification with an overall accuracy of 88.5%.
  • FIG. 19 is a confusion matrix for Cubic SVM classification model for Biotic Stress classification with an overall accuracy of 66.1%.
  • FIG. 20 is a confusion matrix for an SVM classification model with Principal Component Analysis explaining 95% of variance for Abiotic stress classification.
  • FIG. 21 is a confusion matrix for Cubic SVM classification model for Abiotic Stress classification with an overall accuracy of 81.9%.
  • An exemplary methodology for reference library formation involved receiving information from various researchers conducting different disease studies. Whenever an extension educator was notified by an area farmer of a crop stress, project personnel would travel to the field for documentation. This process enables inclusion of diseases common to all of Ohio, for that particular growing season. Furthermore, this will be the best method for continued expansion of the library in the long term. The second way the library was formed was by taking pictures of the various test plots and associated stresses. This was the best method for building the library in the short term, as it provides the opportunity to walk through inoculated plots with high disease levels, meaning substantial numbers of images can be taken of a great variety of disease/stress manifestations. However, for the long term, this method will be limited to the stresses studied by the researchers.
  • a machine learning algorithm capable of identifying different types of stresses on corn leaves, including abiotic and biotic classes has been developed.
  • the model described herein allows for distinguishing between abiotic and biotic stresses via a convolutional neural network (CNN), which achieved ~96% validation accuracy.
  • a second model to distinguish between the seven (7) specific stresses and deficiency classes has been developed and may be used in addition to or separately from the abiotic/biotic stress identification.
  • Additional images may be added to the library to balance the dataset.
  • a class may be added to the library for normal corn leaf images to allow the machine to learn the characteristics of a normal leaf. From a practicality perspective, the algorithm will learn the difference between normal and abnormal.
  • the sample dataset leveraged for the development of the neural networks was the PLSDA library provided by Ohio State, although the present invention is not so limited.
  • This sample library consisted of 2,166 total images, as illustrated in FIG. 22A. Splitting the library into two classes resulted in 1,720 biotic stresses and 466 abiotic stresses, as illustrated in FIG. 22B. Splitting the library further into seven classes resulted in the distribution shown in the figure on the right.
  • An exemplary architecture implemented, as illustrated in FIG. 23, includes two phases of convolutional, activation, and pooling layers, followed by a fully-connected layer, activation, and an additional fully-connected layer, rounded out with a softmax classifier to provide the probability that an image belongs to a certain class.
  • the convolutional layers essentially run a sliding window across the entire pixel matrix to extract relevant features. This is a benefit of CNNs for image recognition; feature extraction is done automatically within the algorithm.
  • Pooling layers (also referred to as subsampling or down-sampling layers) reduce the dimensionality of each feature map while retaining the information most useful for classification.
  • This step also reduces the risk of overfitting by controlling the number of parameters and computations in the network. Drop out functions may also be used to further minimize the risk of overfitting.
  • the resulting output is produced by the softmax classifier.
  • the softmax function takes the vector of real-valued scores and converts it to a vector of values between zero and one (probability of the image belonging to each class).
  • the class with the highest probability determines the final classification, which is the final output produced by the code. Examples of the output are shown in FIGS. 24A and 24B for each network.
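A hedged sketch of the described lightweight network: two convolution and pooling phases, a fully-connected layer, and a softmax output, with dropout as discussed below. The layer sizes and the 128x128 input are illustrative assumptions, not the patent's exact hyperparameters:

```python
from tensorflow.keras import layers, models

def build_cnn(n_classes: int) -> models.Model:
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),              # assumed input size
        layers.Conv2D(32, 3, activation="relu"),        # sliding-window feature extraction
        layers.MaxPooling2D(),                          # dimensionality reduction
        layers.Dropout(0.25),                           # guard against overfitting
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),           # fully-connected layer
        layers.Dense(n_classes, activation="softmax"),  # per-class probabilities
    ])

model = build_cnn(n_classes=2)  # 2 for abiotic/biotic; 7 for specific stresses
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The softmax output vector sums to one, and the argmax over it yields the final classification shown in FIGS. 24A and 24B.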
  • the first attempted solution was to apply class weights to ensure the loss function did not penalize as heavily for the minority class. This, however, had a negligible impact on performance improvement.
  • a resampling method was applied, specifically choosing to up-sample the minority class to equal the majority (i.e., abiotic and biotic now had the same number of images). Down-sampling is another valid option but might result in losing valuable data points.
  • the resulting model performed at a true 96% validation accuracy and did well when testing on new images the network had not yet seen.
  • the performance was verified with cross validation, which randomly splits the data into 10 subsets of train/test sets; this technique resulted in an average validation accuracy of 97.67%.
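A sketch of the resampling and validation steps just described: up-sample the minority class (abiotic, 466 images) to match the majority (biotic, 1,720), then verify with 10-fold cross validation. Variable names are illustrative:

```python
import numpy as np
from sklearn.utils import resample
from sklearn.model_selection import StratifiedKFold

def upsample(X, y, minority_label):
    minority, majority = X[y == minority_label], X[y != minority_label]
    boosted = resample(minority, replace=True,          # sample with replacement
                       n_samples=len(majority), random_state=0)
    X_bal = np.vstack([majority, boosted])
    y_bal = np.concatenate([y[y != minority_label],
                            np.full(len(boosted), minority_label)])
    return X_bal, y_bal

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
# for train_idx, test_idx in cv.split(X_bal, y_bal): fit and score the CNN
```

In practice, the up-sampling would be applied inside each training fold so that duplicated minority images do not leak into the held-out split.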
  • a dropout technique was implemented after each convolutional layer. The visual in FIG. 25 illustrates the loss and accuracy through each epoch of training for the final neural network.
  • these neural networks may be transitioned into a commercially viable product for the intended use within a full technology stack, given a few considerations.
  • normal data should be collected for healthy corn leaves.
  • without healthy-leaf data, the two demonstrated neural networks will classify every image as abnormal, which is impractical for deploying algorithms in real time in the field.
  • the models may be retrained to learn the features of healthy leaves.
  • a near-term solution could be to collect as many images from Google as possible, and up-sample those to begin prototyping.
  • a solution for class imbalance in the machine learning application is to collect more data.
  • the library of images should continue to be built at least until the minority classes begin to equal the majority classes.
  • the minority classes are lacking the amount of unique data required to learn often subtle differences between various corn leaf stresses and deficiencies, which can be remedied.
  • the networks may be retrained and tuned for optimal performance.
  • the stinger/boom-suspended camera head and sampling probes, when combined with unique image processing and a crop health library of plant component physical features, allow for more cost-effective diagnosis of crop health and/or crop stressors in the lower reaches of the plant canopy.
  • remote sensing technologies, where visible features in the upper reaches of the plant canopy occur later in the plant growth stage, limit the impact of mitigating crop health problems (i.e., foliar application of macro or micro nutrients, application of insecticides and fungicides, or water management).
  • Stinger/boom suspended technology translates directly into actionable information (at lower cost when compared with human crop scout counterparts) so preventative or curative treatments can be ordered and applied to preserve crop quality and yield.
  • this process utilizes the mobility of a drone or other mechanized equipment such as a tractor, allowing for rapid deployment and large area coverage.
  • a compatible UAS platform with stinger-suspended camera head (or boom-suspended camera) and material sampling probes represents a one-time investment with the potential to substantially enhance and extend human capital. Plant health is at the forefront of many crop production management decisions, and scouting sessions prior to crop canopy closure do catch the onset of nutrient deficiencies, disease and insect infestations, but the most serious problems occur after canopy closure.
  • LAI (leaf area index)
  • SAVI (soil-adjusted vegetation index)
  • NDVI (normalized difference vegetation index)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A crop health assessment system for use with a canopy includes a vehicle, manned or unmanned (e.g., capable of being controlled remotely or preprogrammed); a suspension rod, extending into the crop canopy; a camera attached to a distal end of the suspension rod; and an image storage medium for storing images captured by the camera for later processing. The images are used to render models for canopy inspection and evaluation, allowing for targeted canopy inspection deep in fields without requiring a human crop scout to visit the location.

Description

CROP HEALTH SENSING SYSTEM
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[0001] This invention was made with government support under Subcontract Number 1919.02.05.92, Project Number 60050854 awarded by The Design Knowledge Company (Air Force Research Laboratory). The government has certain rights in the invention.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] Embodiments of the present disclosure relate to sensing of crop health below the canopy using various systems and image processing to provide assessment of crop health. Any method that can capture multiple images from within the canopy can be used. In an aspect of the invention, remote sensing may be used, such as an unmanned aerial system (UAS), specifically a UAS outfitted with a "stinger" appendage.
Background
[0003] Diagnosing deteriorating crop vigor at early stages can improve crop yield by allowing for early intervention. Traditionally, crop health/vigor has been monitored by human "crop scouts". These human crop scouts made decisions based on their view of a 10m to 15m buffered field border. Placing human crop scouts at key locations within a field provides the best opportunity to mitigate production problems. Currently, treatment decisions are made based on crop scouting reports where the scout often checks three or more locations in a field, and often those locations are located within 20m of the field border. Assuming the threshold indicators are met in the scouting locations, treatments are ordered for the entire field. If the presence of one of the above stresses is detected, the assumption is made that the entire field is infected, and the corrective measure applied accordingly. The current model for determining plant health involves observation by domain experts (crop scouts), but it is a subjective task that may lead to bias or human error. Inserting human crop scouts deep within a field is often neither realistic nor cost-effective, so a surrogate is needed.
[0004] Remote sensing provides producers and the agricultural community the opportunity to visualize entire fields from an aerial perspective. The ability to detect and view targets of interest from a different vantage point provides significant benefits that have not been fully realized. With the advent of Unmanned Aerial Systems (UAS), the prospects of cost-effective remote sensing are being realized by the agricultural community.
[0005] Using UAS to gather information over large areas can provide high resolution imagery of the upper portion of the crop canopy, allowing farmers and managers to detect potential problems. One of the most important benefits of remote sensing is the ability to make timely crop management decisions from a vantage point that reflects spatial management of complete fields.
[0006] However, even with remote sensing from an aerial perspective, decisions about crop vigor are made based on information about the upper portion or view of the canopy. Current sensing technology limits a crop scout’s view to the upper reaches of the crop canopy. Even using aerial surveillance, the upper crop canopy is usually the last location exhibiting visible signs of crop health problems. Most crop health problems originate in the lower portion of the canopy (i.e., nutrient translocation, disease and insect infestations, etc.). In many cases, by the time the upper reaches of the canopy become involved, remote sensing facilitates a post mortem analysis of the problem. Earlier detection through lower canopy sensing and sampling provides the best opportunity to treat crop health problems at a point in plant development necessary to mitigate substantial yield loss.
BRIEF SUMMARY OF THE DISCLOSURE
[0007] Accordingly, the present disclosure is directed to an Unmanned Aerial Stinger-Suspended Crop Health Sensing System that obviates one or more of the problems due to limitations and disadvantages of the related art. In accordance with the purpose(s) of this invention, as embodied and broadly described herein, this invention, in one aspect, relates to a method of developing an image library collected from vantage points within a crop canopy, for specific crops (e.g. corn, wheat, soybeans, etc.) within agricultural regions of interest (e.g. Midwestern Cornbelt, etc.) to be utilized in conjunction with artificial intelligence (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.) for the purpose of diagnosing crop health problems.
[0008] In another aspect, the invention relates to a method of generating point clouds from multiple images collected from within the plant canopy, comprising the extraction of specific plant components and projection from 3D to 2D for the purpose of extracting image features to improve crop health problem diagnosis accuracy using artificial intelligence techniques (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.).
[0009] In another aspect, the invention relates to a method of generating point clouds from multiple images collected from within the plant canopy comprising creation of solid models from the point clouds, extraction of specific plant components, and projection of surface features from 3D to 2D for the purpose of extracting and clarifying image features to improve crop health problem diagnosis accuracy using artificial intelligence techniques (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.).
[0010] In another aspect, the invention relates to a method of assessing crop health using a learning algorithm capable of identifying different types of stresses of a crop of interest, comprising: accessing a sample image library comprising a plurality of images including images of deficient crop tissue, the deficient crop tissue images classified in the images according to the stress type; training the learning algorithm using the sample image library by applying an upsampling to provide approximately the same number of images of each stress type in the image library; extracting relevant features from an image; performing dimensionality reduction of each feature map of each image and retaining information most useful for classification; and determining a highest probability for each stress for each image in the library to provide a final classification.
[0011] Any of these methods may include collecting images of plants at a discrete sampling site from a vantage point within the crop canopy; generating a 3D model of the images of plants at the discrete sampling site; extracting and projecting samples of the images of plants into a 2D image; and matching features of the projected samples with images in an image library for diagnosing diseases at the discrete sampling site.
[0012] Any of these methods may include classifying the pixels in the image according to a predefined classification set of pixel values.
[0013] Any of these methods may include the image library including images of healthy crop tissue.
[0014] Any of these methods may include the stresses being abiotic/biotic stresses.
[0015] Any of these methods may include the stresses including any stress identifiable by visual inspection of crop tissue.
[0016] Any of these methods may include the crop of interest being corn.
[0017] Any of these methods may include the stresses being abiotic or biotic classes.
[0018] Any of these methods may include an additional class assessment.
[0019] Any of these methods may include extracting relevant features being performed by running a sliding window across pixels of an image.
[0020] Any of these methods may include performing dimensionality reduction of each feature map of each image and retaining information most useful for classification.
[0021] Any of these methods may include determining a highest probability for each stress for each image in the library to provide a final classification.
[0022] Any of these methods may include the 3D model resulting from stereoscopic image overlap of images received from a plurality of cameras of an image capture device within the canopy.
[0023] In accordance with an aspect of the present invention, a crop health assessment system image capture system for use with the method according to any of the claims above comprises: a motorized vehicle; a camera attached to a distal portion of the motorized vehicle; and an image storage medium for storing images captured by the camera for later processing.
[0024] Any of these systems may include a transmitter for transmitting images to a remote location.
[0025] Any of these systems may include at least one camera that is a multi- spectral camera.
[0026] Any of these systems may include the motorized vehicle being an unmanned vehicle.
[0027] Any of these systems may include the unmanned vehicle being a rotary wing drone.
[0028] In accordance with an aspect of the present invention, a method of assessing the health of a crop having a crop canopy using an aerial crop health assessment system is provided. The aerial crop health assessment system includes an unmanned aerial vehicle, capable of being controlled remotely, a suspension rod, extending from an underside of the unmanned aerial vehicle, and a camera attached to a distal end of the suspension rod. The method includes flying the unmanned aerial vehicle to a crop location; positioning the unmanned aerial vehicle above a preselected location in the crop canopy; lowering the unmanned aerial vehicle to cause the camera to descend within the crop canopy; capturing images via the camera, the images of an area within the crop canopy; and processing the images to determine if health of the crop is compromised.
[0029] An aerial crop health assessment system comprises an unmanned aerial vehicle, capable of being controlled remotely; a suspension rod, extending from an underside of the unmanned aerial vehicle; a camera attached to a distal end of the suspension rod; and an image storage medium for storing images captured by the camera for later processing.
[0030] In another aspect, the invention relates to a method of assessing the health of a crop having a crop canopy using an aerial crop health assessment system, the aerial crop health assessment system comprising an unmanned aerial vehicle, capable of being controlled remotely, a suspension rod, extending from an underside of the unmanned aerial vehicle, and a camera attached to a distal end of the suspension rod, the method comprising flying the unmanned aerial vehicle to a crop location; positioning the unmanned aerial vehicle above a preselected location in the crop canopy; lowering the unmanned aerial vehicle to cause the camera to descend within the crop canopy; capturing images via the camera, the images of an area within the crop canopy; and processing the images to determine if health of the crop is compromised.
[0031] In yet another aspect, the invention relates to a method of developing an image library collected from vantage points within the crop canopy using the methodology described herein, for specific crops (e.g. corn, wheat, soybeans, etc.) within agricultural regions of interest (e.g. Midwestern Cornbelt, etc.) to be utilized in conjunction with artificial intelligence (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.) for the purpose of diagnosing crop health problems.
[0032] In yet another aspect, the invention relates to a method of generating point clouds from multiple images collected from within the plant canopy using the methodology described herein, and the extraction of specific plant components and projection from 3D to 2D for the purpose of extracting image features to improve crop health problem diagnosis accuracy using common artificial intelligence techniques (e.g. probabilistic methods, classifier and statistical learning, neural networks, etc.).
[0033] Additional advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
[0034] An advantage of the present invention is to provide an unmanned aerial stinger-suspended crop health assessment sensing tool.
[0035] Further embodiments, features, and advantages of the unmanned aerial stinger-suspended crop health assessment sensing tool, as well as the structure and operation of the various embodiments of the unmanned aerial stinger-suspended crop health assessment sensing tool, are described in detail below with reference to the accompanying drawings.
[0036] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] The accompanying figures, which are incorporated herein and form part of the specification, illustrate an unmanned aerial stinger-suspended crop health sensing system. Together with the description, the figures further serve to explain the principles of the unmanned aerial stinger-suspended crop health sensing system described herein and thereby enable a person skilled in the pertinent art to make and use the unmanned aerial stinger-suspended crop health sensing system.
[0038] FIG. 1 illustrates a schematic of an embodiment of the stinger suspended crop health sensing system according to principles described herein.
[0039] FIG. 2 is a flow chart suggesting how the Unmanned Aerial Stinger-Suspended Crop Health Sensing System would be deployed in practice.
[0040] FIG. 3 is a flow chart summarizing statistical measures for assessing crop vigor from UAS remote sensing imagery from above the crop canopy.
[0041] FIG. 4 is a flow chart summarizing feature extraction and crop stress classification for assessing crop health from imagery obtained from within the crop canopy using stinger-suspended cameras.
[0042] FIG. 5 is a 3-D crop model, viewed parallel to crop rows. [0043] FIG. 6 is a 3-D model showing orientation of image planes (and cameras) relative to crop canopy.
[0044] FIG. 7 is a 3-D model of plant components (leaves and ears).
[0045] FIG. 8 shows images of multiple leaves on individual plants from a 3-D model.
[0046] FIG. 9A shows a stinger inserted into crop canopy.
[0047] FIG. 9B shows a stinger insertion into crop canopy.
[0048] FIG. 9C shows a stinger insertion into crop canopy.
[0049] FIG. 9D shows a stinger insertion into crop canopy.
[0050] FIGS. 10A-C illustrate a solid model of stinger with camera head tilt angle of 45°. The isographic, right side and front views are shown in A, B and C, respectively.
[0051] FIG. 11 is a flow chart for the general image processing according to principles disclosed herein.
[0052] FIG. 12 shows an example of individual pixel classes.
[0053] FIGS. 13A and 13B show a skewed frequency distribution, having disproportionately more small blobs, which indicates low vigor.
[0054] FIG. 14 shows a plot of the points on top of an additional user-defined image.
[0055] FIG. 15 shows a camera mounted to the underside of the drone on the payload rack.
[0056] FIGS. 16-17 show wireless antennas (e.g., Ubiquiti).
[0057] FIGS. 18-21 are confusion matrices generated using image classification software (e.g. MATLAB) to indicate where machine-learning classification algorithm errors are accumulating.
[0058] FIGS. 22A and 22B illustrate image library counts according to at least two class division methodologies.
[0059] FIG. 23 illustrates a representative CNN architecture used for diagnosis of stresses in corn, detailing the two convolution and pooling layers with the fully connected and classification layers. The important detail is the lightweight nature of the neural network architecture, which yields quick classifications, in a system according to principles described herein.
[0060] FIGS. 24A and 24B illustrate an exemplary output of the system shown in FIG. 23. [0061] FIG. 25 illustrates training loss and accuracy on abiotic/biotic stress according to an exemplary embodiment.
[0062] FIG. 26 illustrates training loss and accuracy on all stresses according to an exemplary embodiment.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0063] Reference will now be made in detail to embodiments of the unmanned aerial stinger-suspended crop health sensing system with reference to the accompanying figures.
[0064] One possible solution to the problem of early detection and diagnosis of lower canopy crop vigor is the use of devices to capture images from within the canopy, such as inserting a camera into the crop canopy. For example, lowering a camera from a boom or suspension rod from above, e.g. from a tractor or aerial vehicle, may be used. In an aspect of the present system, an unmanned aerial system (UAS) may be provided with a stinger-suspended camera.
[0065] In an example implementation using a UAS, the UAS may be of any known controllable or automated flight path UAS, which may be a multi-rotor or other appropriate UAS. The UAS stinger may include a sampling probe. The stinger allows for the suspension of a camera head and/or tissue, soil and/or spore samplers, thereby placing human-like scouting capabilities at virtually any location within agricultural production regions. The stinger can be positioned and dropped into virtually any plant canopy by the UAS and pilot in command. As UAS technology evolves, it may be possible for the UAS to fly the appropriate path to sense the crop vigor with minimal pilot or human intervention, or the process may even be fully automated, if allowable under local flight rules. Insertion points may be determined based on information collected in an initial “fly over” by the UAS described herein, or by a more traditional fixed wing aircraft or drone prior to insertion of the suspended devices of the UAS described herein in the canopy, e.g., target areas of reduced crop vigor. Such insertion points of reduced crop vigor may inform treatment decisions for other areas of the field that may not yet exhibit reduced crop vigor from above.
[0066] Among the most cost-effective, stinger-suspended technologies is a camera head capable of collecting multiple, synchronized images upon penetration of the plant canopy. A multi-spectral band (e.g., infrared, visible, and/or ultra-violet frequency) camera or cameras can be selected and tuned for a particular crop health concern or application. The camera may be mounted using an articulated mount so that the camera’s targeted field of view can be controlled remotely (or pre-programmed). The camera’s image capture band may be of any range appropriate for the analysis to be performed. In addition, the camera may be panoramic or spherical, or have a lens, such as a fish eye lens, or articulation appropriate to capture in multiple directions. The suspension device may additionally include a light source, such as LED lighting, to improve the quality of image capture.
[0067] A schematic of an exemplary embodiment of a crop health sensing system 100 according to principles described herein is provided in FIG. 1. As illustrated in FIG. 1, the crop health sensing system includes a multi-rotor unmanned aerial vehicle or system (UAV/UAS) 102. A camera 104, which may be multi-spectral, is suspended from the multi-rotor UAV 102 via a suspension rod or line 106. The system 100 may include one or more than one camera. Three cameras 104a, 104b, 104c are illustrated in FIG. 1, but the system 100 is not limited to three cameras. The system 100 further includes a wireless antenna 108 for communication with a ground station or other recipient. The system 100 may further include a lamp or lamps for providing illumination to the camera, particularly while the camera is inserted into the crop canopy. As illustrated in the embodiment of FIG. 1, the lamp may be a light emitting diode (LED) lamp 110 or a series of LED lamps in a strip, or the like.
[0068] Upon return of the UAS to a field laboratory, the collected imagery may be downloaded and processed to generate a 3D model of the internal plant canopy. In an alternate aspect, the imagery may be transmitted in real time to a processing center to perform image analysis. It is conceivable that, with the evolution of technology, the presently contemplated image preprocessing may be performed “on board” the UAS.
[0069] While described above with respect to image capture using a UAS with a suspended camera, the present techniques are not so limited. It is possible to suspend a camera by other means, such as a tractor boom or even a hand-held boom, for example, with a camera system as described above. The following techniques may thus be applied to the captured images for crop health assessment.
[0070] A flow chart of general steps of sensing crop health according to principles described herein is provided in FIG. 2. As illustrated in FIG. 2, a fixed wing or other vehicle capable of “fly over” of the crop canopy flies over the designated field and captures images of the crop, e.g., by RGB (red, green, blue) NADIR imagery. A crop vigor assessment may be conducted using the imagery to identify GPS coordinates (or other appropriate geo-locating information) of areas within the field of poor crop vigor as assessed from an upper view. After areas of poor crop vigor are identified, a crop health sensing system 100 according to principles described herein, e.g., as illustrated in FIG. 1, is launched and directed to select areas of poor vigor identified by the fixed wing aircraft imagery. The camera 104 of the system 100 is inserted into the crop canopy to obtain RGB or multispectral images of the crop from below the canopy. The images provided by the camera may be transmitted back to a ground station via the wireless antenna 108 or stored on-board for later retrieval. Using the images, a 2D stress analysis may be performed with input from/comparison to information in a reference library. In addition, or alternatively, the images provided by the camera may be used to create a 3D model of the crop below the canopy (3D Intra-Canopy Modeling). A 3D point cloud model may be created using the information and a 3D stress analysis performed with input from/comparison to the information in the reference library. The reference library may be static or may be dynamic, such that it is updated after diagnosis using the system is confirmed, images collected, and modeling uploaded to build a more comprehensive reference library.
[0071] FIG. 3 is a flow chart of exemplary steps for providing the crop vigor assessment that is part of the overall health assessment according to principles described herein. As illustrated in FIG. 3, a crop assessment algorithm is initialized with a graphical user interface; flight paths, image resolutions, and image overlap are specified. Images are imported from the crop health sensing system (e.g. the camera inserted into the canopy), either by download after return of the image capture device (e.g., camera) to a ground or base station, after transmission from the image capture device in the field, or in real time or near real time while the image capture device is still in the field. The image capture device may be provided with appropriate image storage capabilities, e.g., by on-board storage such as a hard drive, removable storage device or the like. The crop images are standardized/processed (e.g. cropped in RGB) to perform an image pixel classification according to a pixel classification set; the pixel classification set may be loaded from an external data set or may be created or updated using images gathered from use of the present system, for example, by dynamically updating the pixel classification set in a feedback loop from the present system. The images are processed to create binary images to allow for blob formation and categorization into positive or negative classes, for example related to the crop type, such as ‘corn’ and ‘not corn’ classes. Blob statistics are calculated, and a test conducted to determine if the data matches a particular population, such as a Chi Square Test, to rate the crop vigor. A priori “crop vigor” ratings are compared to the Chi Squared statistic for generating the final “crop vigor” rating. Then, a vigor rating for each image is generated and mapped to the position coordinates, obtained from a Global Positioning System geolocator, for that image. The collective set of “crop vigor” ratings and corresponding position coordinates are used to map the “crop vigor” of an entire field.
[0072] FIG. 4 is a flow chart for the stress diagnosis method using near real-time input from a crop health sensing system according to principles described herein. While the crop-insertion camera is remote from the base station, imagery may be wirelessly transmitted to the base station, e.g., during transit in the field or to/from the field. The 2D or 3D imagery captured by the camera from below the canopy may then be processed to perform 2D feature extraction or used to create a 3-D model of the crop below the canopy (3D Intra-Canopy Modeling). As in FIG. 2, a 2D stress analysis may be performed with input from/comparison to information in a reference library and actionable information collected. A 3D point cloud model may be created using the information and a 3D stress analysis performed with input from/comparison to the information in the reference library. The reference library may be static or may be dynamic, such that it is updated after diagnosis using the system is confirmed, images collected, and modeling uploaded to build a more comprehensive reference library.
[0073] In addition, as shown in FIG. 4, the imagery may be compared to a biotic/abiotic classification model to determine if the stressors are biotic or abiotic. Actionable information is then generated as appropriate for the biotic/abiotic determination. While shown with respect to the wireless information transfer of FIG. 4, the biotic/abiotic classification is also applicable if the images are downloaded from the camera system not in real time/near real time (e.g., a subsequent “hard wire” download) or by directly accessing the memory device, e.g., removable storage.
[0074] In a method according to principles described herein, a canopy 3D model analysis technique provides for the extraction of individual plant components, and then comparison of plant component features with a known database of features for identification of crop health problems. For example, individual leaves from the lower part of the canopy are extracted and then flattened (2D image projection) to support analysis of leaf color, leaf margin shape and size, leaf venation and color striations, and lesions. This approach can be used for both the top and bottom sides of the leaf. The extracted features are then compared to an existing crop health database to identify multiple crop health problems including macro and micro nutrient deficiencies, disease and insect infestations, for example. The feature database may include, but not be limited to, leaf color, shape and size; venation pattern; margin physical shape; lesion shape, size, and number; and other physical features known to entomologists, plant pathologists, crop physiologists, soil scientists, agronomists, horticulturalists, enologists, and other crop production specialists or the like. Concurrent with image data collection, the stinger or boom can be fitted with soil, tissue and spore sample collection capabilities. Material samples collected during concurrent image collection may prove vital for confirmation of the crop health diagnoses.
[0075] Development of the stinger- or boom-suspended camera head and image processing procedures allows for timely diagnosis of crop stresses that may develop after crop canopy closure, so that action can be taken to remediate yield reductions. Remediation actions may include the application of fungicides, insecticides and fertilizers, or irrigation scheduling and management. By using the stinger or boom to do within-canopy imaging and/or material sampling from multiple locations in affected regions of a field, the extent of the problem can be more effectively characterized to determine the best possible management approach and resulting economic outcome. By precisely identifying the affected area, the farmer can reduce his application amounts and resulting waste, not only saving money but also protecting the environment. According to principles described herein, image processing allows for an evaluation of the extent of stress involvement and potential yield loss, both at the individual plant level and then on a more targeted sampling regime more representative of the entire field compared to current techniques. In addition, analysis of plant components, such as individual leaves, by projecting from the 3D point cloud model onto a 2D surface, leads to a more rigorous, repeatable and quantitative measurement of critical crop health features.
[0076] In an exemplary embodiment according to principles described herein, a stinger-suspended camera is mounted to the underside of a UAS for positioning the stinger within the crop canopy at “area of interest” locations. While described here with respect to a UAS, boom suspension via tractor or other ground-based transit mechanism is possible. At least one material sampling probe may also be suspended from the stinger. Upon reaching an “area of interest,” the drone (UAS) descends, dropping the cameras/probes into the crop canopy so imagery of the lower canopy reaches can be obtained. The UAS may be equipped with geo-sensing equipment, such as a Global Positioning System geolocator. Thus, the location of the camera/probe, and therefore the location of image capture and/or sample capture, may be determined using the Global Positioning System, associated with the image/sample (geo-tagged) and recorded. [0077] In an exemplary embodiment according to principles described herein, the stinger may include a 3m fiberglass rod, flexible UAS coupling and camera head mounting plate with multiple cameras. The 3m rod length allows for sufficient canopy penetration while keeping the drone (UAS) far enough above the crop canopy to avoid rotor entanglement in the crop canopy. The camera mounting plate is designed so that camera angles can be adjusted relative to the horizon. The multi-rotor UAS is used to fly the stinger with camera head and/or sampling probes to the correct field position. The UAS lift capacity is selected such that it exceeds the overall weight of the stinger, camera head and sampling probes, to ensure safe and reliable flights.
[0078] In an exemplary crop health sensing process, the crop health sensing system according to principles described herein allows for qualitative assessment of crop health issues in a field. First, a UAS flight (“fly over”) is used to determine any locations of interest in a field, usually any that have a lower crop vigor. Also, the crop health sensing system allows for a visual inspection of the lower reaches of the canopy, and the associated algorithms enable a quantitative assessment of the crop health. Put simply, the exemplary crop health sensing system suspends one or numerous cameras beneath a UAS for remote data acquisition, enabling large area coverage with minimal effort. The drone (UAS) flies to a desired location and then inserts cameras suspended approximately 12 feet beneath it into the canopy, and intra-canopy images are taken. These images are then used to render 3D models for general canopy inspection and evaluation, allowing for an ‘in-person’ view of the crop without having to physically visit the location. Secondly, these images are compared to a Reference Library containing imagery of diseases common to the region of operation for stress diagnosis. Armed with this geo-tagged diagnosis, it is possible to create prescription application maps to ameliorate the stress and increase yield, the ultimate goal. Camera angle may be controlled by maneuvering of the UAS or by remote control of an articulated mount.
[0079] Figure 5 is a 3-D crop model, viewed parallel to crop rows. Blue squares represent the image plane of the photos used to generate the model. Figure 6 is a 3-D model showing orientation of image planes (and cameras) relative to crop canopy. Figure 7 is a 3-D model of plant components (leaves and ears). Leaf venation and margins are clearly visible, pointing to the robust nature of the model formation process. Figure 8 shows images of multiple leaves on individual plants from a 3-D model. Two corn plants, clearly visible to the left and right, with zoomed view at center projected onto the 2D surface for crop stress diagnosis. Note the differences in venation, coloring and leaf shape. [0080] An exemplary Stinger Suspended Crop Health System was studied, in which the drone used to fly the Stinger is a DJI S1000+. Due to its high payload capacity of 25 lbs, it has proven an excellent UAS platform to handle the Stinger as well as insert and then remove it from the canopy. In a concept demonstration, the exemplary drone may fly to a series of pre-programmed locations determined with the Stressed Crop Identification Algorithm using DJI’s standard flight controller software, DJI GO, then drop altitude, inserting the cameras into the crop canopy (Figures 6 and 7). After the cameras capture the relevant imagery, the drone will regain its set travel altitude and head to the next desired sample point and repeat the process until a select number (e.g., some or all) of the stressed locations have been visited.
[0081] Post flight, imagery is used to generate a 3D model of the internal crop canopy from each of the discrete sampling sites. 3D model generation may be accomplished using image processing software (e.g., Agisoft’s PhotoScan) by first forming a point cloud from corresponding pixels within multiple images. In an exemplary embodiment, the depth viewed in the 3D models is a result of the stereoscopic image overlap from the four cameras composing the camera head mounted at the distal end of the stinger. Renderings of sample models are shown in Figures 5, 6 and 7. After the model is formed, individual leaves from plants are extracted and projected into a 2D plane. As an example, extracted leaves are shown in Figure 8. Striation patterns are clearly visible on the leaves and, when projected to 2D images, are standardized and normalized for feature matching with a pre-existing database (library) showing the patterns of known nutrient deficiencies, diseases, and insect infestations for diagnostic purposes.
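By way of illustration only, the projection of an extracted leaf from the 3D point cloud onto a 2D plane can be sketched as a plane fit. The short Python sketch below is an assumption for clarity, not the software used in the study: the leaf's best-fit plane is recovered by singular value decomposition of its centered point coordinates, and each point is re-expressed in that plane's 2D basis. The `leaf_points` input is a hypothetical (n x 3) array segmented from the canopy model.

```python
import numpy as np

def flatten_leaf(leaf_points):
    """Project a segmented 3D leaf (n x 3 array) onto its best-fit 2D plane."""
    centered = leaf_points - leaf_points.mean(axis=0)
    # Rows of vt are the leaf's principal directions; the two strongest span
    # the best-fit plane, and the weakest is the leaf's surface normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    plane_axes = vt[:2]               # 2 x 3 orthonormal basis for the plane
    return centered @ plane_axes.T    # n x 2 flattened leaf coordinates
```

The flattened coordinates can then be rasterized into a standardized 2D image for the feature matching described above.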
[0082] FIGS. 9A, 9B, 9C, and 9D show a stinger inserted into crop canopy. FIGS. 10A-C show a solid model of the stinger with a camera head tilt angle of 45°. The isographic, right side and front views are shown in A, B and C, respectively. The Center of Gravity is circled in red in each view.
[0083] For initial crop assessment, a Stress Detection Algorithm may be used to evaluate the crop vigor in the field. The first step is to fly the field of interest, for example, with a small fixed wing drone, such as an “eBee”, and take 200 to 300 images of the field depending on field size, giving 200 to 300 sample points for analysis. Typical industry practice is to stitch these into a single image; however, this is a time-consuming process, often taking several hours to complete, and often not an effective one either. Although a fixed wing drone is described in this example for the initial fly over, the drone according to principles described herein may be appropriately outfitted with an aerial camera to perform this function in addition to the canopy insertion function by the suspended camera(s).
[0084] The Stress Detection Algorithm analyzes each image individually, greatly reducing processing time to around 20 minutes for an approximately 30 hectare field in initial testing. Furthermore, the output is much more actionable, being GPS coordinates of crop that exhibits low vigor. Please refer to the flow chart in FIG. 11 for a general overview of the algorithm. First, after starting the algorithm and entering some basic information, the images are imported from the drone and then cropped to focus on the center region. This is to reduce any overlap that occurs, as these flights are programmed to have 70%-85% side and end lap for the stitching process. After cropping, the pixels in the image are classified according to a predefined classification set, for example, using a technique developed by J. Leonard D. Wolters, using the specific RGB triplet values to separate the pixels into specified groups. In brief, the algorithm performs this separation using a “one versus all” logistic regression to calculate the optimal classifiers for each group that gives the best separation from all other pixels in the other pixel classes.
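As a hedged illustration of the “one versus all” step, the sketch below uses scikit-learn's logistic regression in place of the original implementation; the class names and training triplets are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled training pixels: each row is an RGB triplet.
X_train = np.array([[40, 160, 50],    # a green crop pixel
                    [120, 90, 60],    # a brown soil pixel
                    [15, 15, 20]])    # a dark shadow pixel
y_train = np.array(['crop', 'soil', 'shadow'])

# multi_class='ovr' fits one "class versus all others" logistic classifier
# per pixel class, as described above.
clf = LogisticRegression(multi_class='ovr', max_iter=1000).fit(X_train, y_train)

def classify_pixels(rgb_image):
    """Label every pixel of an (H, W, 3) image with its predicted class."""
    h, w, _ = rgb_image.shape
    return clf.predict(rgb_image.reshape(-1, 3)).reshape(h, w)
```

In practice the classifiers would be trained on far more labeled pixels per class than the three shown here.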
[0085] After this initial classification, the individual pixel classes are then reduced to positive or negative classes, for example ‘crop’ and ‘not crop’ classes, so the image can be binarized (FIG. 12). The pixels classified as crop (corn crop shown as white) are then grouped into blobs using binary morphological image processing techniques. These techniques utilize filter masks to close any errant gaps in the corn blobs via a technique called dilation, followed by erosion to maintain object edge integrity. The primary objective in these operations is to account for image noise or any errantly classified pixels from the original RGB drone image. To quantitatively assess the vigor of the crop in the image, statistics (Chi Squared Goodness of Fit Test) regarding blob size and size distribution are calculated. The more normal the distribution, the higher the vigor. A skewed frequency, having disproportionately more small blobs, indicates a low vigor, as the crop canopy has yet to close due to the slowed growth of the plants (FIG. 13). Unlike the high vigor crop that has a full, healthy canopy that when classified in the imagery results in large, contiguous blobs, the low vigor crop has higher numbers of soil and ‘non-crop’ foreground pixels, which results in the reduced sizes of the blobs upon classification. The Chi Squared Test Statistic (the specific indicator of blob size frequency distribution from the Chi Squared Goodness of Fit Test) may be used to rate the crop vigor according to a rating scale developed. A random set of 25 drone images was rated according to their crop vigor. This vigor ranking was then plotted against the Chi Squared Test Statistic for those images, and the resulting formula from the exponential regression was used to relate Crop Vigor to the Chi Squared Test Statistic. The final output of the algorithm can take multiple forms (e.g. a spreadsheet file with the vigor ratings at each image location, or a plot of the points on top of an additional user-defined image) in accordance with end-user requirements. An example of the output is shown in FIG. 14. The GPS coordinates are represented as dots whose color corresponds to their relative vigor levels, and they are displayed on top of a user-defined remote sensing image (e.g. AirScout) of the field.
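The blob-statistics step can be sketched, under stated assumptions, with SciPy: a morphological closing (dilation followed by erosion) cleans the binary crop mask, connected pixels are labeled into blobs, and a Chi Squared Goodness of Fit statistic compares the blob-size histogram to an expected distribution. The bin layout and `expected_freq` values are illustrative assumptions, and the exponential regression mapping the statistic to a vigor rating appears only as a comment.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import chisquare

def vigor_statistic(crop_mask, expected_freq):
    """crop_mask: 2D boolean 'crop' pixel mask; expected_freq: assumed blob-size
    histogram for a healthy, closed canopy (same number of bins as observed)."""
    # Dilation followed by erosion closes errant gaps while preserving edges.
    closed = ndimage.binary_dilation(crop_mask, iterations=2)
    closed = ndimage.binary_erosion(closed, iterations=2)
    labels, n = ndimage.label(closed)                # group pixels into blobs
    sizes = ndimage.sum(closed, labels, range(1, n + 1))
    observed, _ = np.histogram(sizes, bins=len(expected_freq))
    expected = np.asarray(expected_freq, dtype=float)
    expected *= observed.sum() / expected.sum()      # match histogram totals
    stat, _ = chisquare(observed, f_exp=expected)
    # vigor = a * exp(b * stat), with a and b taken from the exponential
    # regression against the 25 manually rated images described above.
    return stat
```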
[0086] A UAS with a vertical stinger provides a simple approach for inserting cameras into a crop canopy for image collection. As described above, the crop of interest has been corn for demonstration of the crop health sensing system, as its canopy is less dense than other grain crops such as wheat or soybeans. The Stinger is a 3.5 meter fiberglass rod with a stiff rubber hose at the end that attaches to the drone payload mounts to allow for stinger flexure during takeoff and landing, as well as providing damping as the stinger swings in the wind or changes in direction during flight. At the other end is the camera mount. The specific camera mount is subject to change depending on the characteristics of the crop canopy and the cameras mounted to it. Currently it has consisted of a flat aluminum plate, but with recent advances in 3D printing, a vast array of design options is available.
[0087] Camera control is another unique aspect of this overall process. The initial problem is in determining when the Stinger has been inserted into the correct position for image acquisition. Although the flight controller is relatively reliable for general drone placement, real-time kinematic GPS is required to place the cameras exactly where they need to be for optimal image acquisition. If real-time kinematic GPS is unavailable, several alternative steps may be required to solve the low accuracy GPS concern. First, a camera was mounted to the underside of the drone on the payload rack, right where the top of the Stinger mounts to the drone (FIG. 15). This camera focuses on the Stinger itself and broadcasts through the DJI Lightbridge 2 flight control system to the pilot’s ground control station. This lets the pilot in command (PIC) observe and confirm the location of the Stinger. This is also a useful safety feature, as it guarantees the Stinger is not caught on anything during the extraction process.
[0088] Also, live streaming capabilities have been enabled on the Stinger cameras themselves. Although the specific process varies with the type of camera mounted, all methods currently utilize high-bandwidth, long-range wireless antennas (FIGS. 16-17). The current capacity of the network is around 200-450 megabytes per second, depending on signal quality and range. This means that any device connected to the network at a base station has remote access to the cameras in the field. Furthermore, this network also allows the user back on the ground to access the images immediately after acquisition. Although still under development, this system enables access to any image in near-real time after acquisition. This allows for processing much earlier than many current industry alternatives, which require the operators to wait until the UAS lands before downloading and processing the imagery.
[0089] After downloading, the images are then used in a variety of processes. First, they can be used to render 3D models of the crop canopy. This yields an ‘in-person view’ of the crop, allowing the user to interact with the canopy and inspect it in ways that an image does not. It gives a better indication as to plant size and canopy density, as well as a means to assess the general condition of the leaf, looking for stress indicators such as curling. Although these models have high potential as computational inputs, they are too large to handle. The density of the points (almost 100,000 for one leaf, over 5,000,000 for a single model) means they are too data rich to be used in any timely computation.
[0090] The second use of the images is for stress diagnosis. Using machine learning tools provided in MATLAB, features on the leaves in the 2D images are extracted from a test set (Reference Library; refer to section 2.5 for more details). These features are extracted using the MATLAB command ‘bagOfFeatures’, which identifies distinguishing features for each image class (i.e., stress). These features include things such as lesion shape, the specific pattern of chlorosis (discoloration) on the leaf, distribution of lesions or chlorotic areas, etc. Then, the MATLAB application ‘ClassificationLearner’ was used with these extracted features to perform automated training of different classification models, such as decision trees, SVM (support vector machines), logistic regression and others, to evaluate which method has the most discriminatory ability among the image classes, or in simple terms, the highest successful stress diagnosis rate. After the initial training with images from the Reference Library, these models are then compared and matched to any leaf features in the images from the Stinger with the goal of diagnosing the stress.
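For readers working outside MATLAB, a rough open-source analog of the ‘bagOfFeatures’/‘ClassificationLearner’ workflow may clarify the approach. The sketch below is an assumption, not the study's code: local ORB descriptors are clustered into a visual-word vocabulary, each leaf image becomes a word-occurrence histogram, and a cubic-kernel SVM is trained on those histograms.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

orb = cv2.ORB_create(nfeatures=500)

def descriptors(gray_image):
    """Local ORB descriptors for one grayscale leaf image."""
    _, desc = orb.detectAndCompute(gray_image, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def build_vocabulary(images, k=200):
    """Cluster all training descriptors into k 'visual words'."""
    stacked = np.vstack([descriptors(im) for im in images]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(stacked)

def encode(image, vocab):
    """Fixed-length histogram of visual-word occurrences for one image."""
    desc = descriptors(image).astype(np.float32)
    if len(desc) == 0:
        return np.zeros(vocab.n_clusters)
    hist = np.bincount(vocab.predict(desc), minlength=vocab.n_clusters)
    return hist / hist.sum()

# Hypothetical usage with Reference Library images and stress labels:
# vocab = build_vocabulary(train_images)
# X = np.array([encode(im, vocab) for im in train_images])
# clf = SVC(kernel='poly', degree=3).fit(X, train_labels)  # Cubic SVM analog
```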
[0091] This comparison/matching is performed in a multi-tiered fashion to increase system reliability and accuracy. First, the stresses are divided into two separate groups, Biotic and Abiotic Stresses. Biotic stresses are generally those that are caused by a biological agent such as a disease or insect, and Abiotic stresses are those that are not due to a living organism, such as nutrient deficiencies, drought or equipment damage. Currently, the algorithms are capable of differentiating between these two general groups with around 85%-90% accuracy (refer to the confusion matrices below). The advantage of this tiered approach is that it specifies or limits the features to those that pertain to the specific disease group, increasing the ability to differentiate the different classes. Furthermore, this same general process could be applied to the leaves in the 3D models. Although the tools in MATLAB are geared for 2D feature extraction, the same general approach applies: look for distinguishing features in the 3D point cloud from a known Reference Library and compare that to the field models for diagnosis. The main difficulty is the development of custom functions as well as handling the tremendous amount of data for timely computations.
[0092] Ultimately, artificial intelligence/neural networks may be used to determine the crop stressor, e.g., by Principal Component Analysis (PCA) for features extracted from images. PCA uses orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called Principal Components. The first Principal Component accounts for the largest possible variance, with each successive Principal Component exhibiting the largest possible variance under the constraint that it is orthogonal to the previous Principal Component. The resulting vectors are an uncorrelated orthogonal basis set. PCA can be used to reduce data sets to a lower dimension by eliminating variables that contribute little to the classification accuracy.
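As a small illustration of this step (assuming scikit-learn; the `features` array is a synthetic stand-in for extracted image features), PCA can be asked to retain just enough components to explain a chosen fraction of the variance, such as the 95% used for the Abiotic model of FIG. 20:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))   # stand-in for per-image feature vectors

# Keep the fewest principal components explaining 95% of the variance.
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(features)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```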
[0093] For the Abiotic stresses, current algorithms are diagnosing with an 80-85% accuracy, usually having the highest success with Nitrogen (80%-90%), but that is the best documented stress within this group. For the Biotic group, current accuracy is 65%-70%, indicating that more library images may be needed. Exemplary application of the present techniques has provided high success rates at distinguishing Corn Borer (85%-90%) due to the substantially different damage pattern (insect feeding); out of the fungal infections, most success is with Northern Corn Leaf Blight (65%-70%), but that is the most well documented with 956 images in the library. Overall classification accuracy can be increased through tuning the general process, i.e. the classification algorithm and feature extraction, but traditionally, the best results are with the largest datasets. In conclusion, to reach the desired goal of 85%-90% for all stresses and all groups, a substantial increase in the initial test set, the Reference Library, will be needed.
[0094] The following figures (FIGS. 18, 19, 20, and 21) are of a ‘Confusion Matrix’ generated by MATLAB and are used to indicate where the machine learning classification algorithm is accumulating errors. On the x-axis is the ‘Predicted Class’, or what the algorithm thought the image class was. On the y-axis is the ‘True Class’, or what the image actually is. As an example, the first image shows the first tier, separating the Biotic from the Abiotic stresses. One hundred percent of the time when Abiotic stress was predicted, the true class was an Abiotic stress. Eighty-seven percent of the time a Biotic stress was predicted, it was a Biotic stress. However, there was a false discovery rate of 13%, meaning that 13% of the time, when Biotic stress was predicted, it turned out to be Abiotic stress. FIG. 18 is a confusion matrix for the Medium Gaussian SVM classification model for Biotic/Abiotic classification with an overall accuracy of 88.5%. FIG. 19 is a confusion matrix for the Cubic SVM classification model for Biotic Stress classification with an overall accuracy of 66.1%. FIG. 20 is a confusion matrix for an SVM classification model with Principal Component Analysis explaining a 95% variance for Abiotic stress classification overall. FIG. 21 is a confusion matrix for the Cubic SVM classification model for Abiotic Stress classification with an overall accuracy of 81.9%.
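A confusion matrix of the same form can be computed outside MATLAB as follows (a minimal sketch with hypothetical labels, using scikit-learn):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted classes for five validation images.
y_true = ['Abiotic', 'Abiotic', 'Biotic', 'Biotic', 'Biotic']
y_pred = ['Abiotic', 'Biotic',  'Biotic', 'Biotic', 'Abiotic']

cm = confusion_matrix(y_true, y_pred, labels=['Abiotic', 'Biotic'])
print(cm)  # rows are the True Class, columns the Predicted Class
```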
[0095] An exemplary methodology for reference library formation involved receiving information from various researchers conducting different disease studies. Whenever an extension educator was notified by an area farmer of a crop stress, project personnel would travel to the field for documentation. This process enables inclusion of diseases common to all of Ohio for that particular growing season. Furthermore, this will be the best method for continued expansion of the library in the long term. The second way the library was formed was by taking pictures of the various test plots and associated stresses. This was the best method for building the Library in the short term, as it provides the opportunity to walk through inoculated plots with high disease levels, meaning substantial numbers of images can be taken of a great variety of disease/stress manifestations. However, for the long term, this method will be limited to the stresses studied by the researchers.
[0096] The Reference Library Example and Empirical Study Statistics are as follows:
1) 2,997 total images
2) 2,624 field images
3) 316 images taken in the lab under controlled lighting
4) Individual Image Set (Stress) Breakdown:
5) Common Rust
a) 3 image dates for set
b) 315 total images
i) 209 field images
ii) 106 controlled lighting images
6) Corn Borer Damage
a) 1 image date for set
b) 111 total images
i) 111 field images
ii) 0 controlled lighting images
7) Grey Leaf Spot
a) 3 image dates for set
b) 660 total images
i) 567 field images
ii) 93 controlled lighting images
8) Holcus Spot
a) 1 image date for set
b) 57 total images
i) 57 field images
ii) 0 controlled lighting images
9) Magnesium/Potassium Deficiency
a) 1 image date for set
b) 90 total images
i) 90 field images
ii) 0 controlled lighting images
10) Nitrogen Deficiency
a) 5 image dates for set
b) 697 total images
i) 580 field images (328 possibly useless, 252 'good' ones)
ii) 117 controlled lighting images
11) Northern Corn Leaf Blight
a) 3 image dates for set
b) 956 total images
i) 956 field images
ii) 0 controlled lighting images
12) Phosphorus Deficiency
a) 1 image date for set
b) 111 total images
i) 111 field images
ii) 0 controlled lighting images

[0097] In an additional aspect of the present disclosure, a machine learning algorithm capable of identifying different types of stresses on corn leaves, including abiotic and biotic classes, has been developed. The model described herein allows for distinguishing between abiotic and biotic stresses via a convolutional neural network (CNN), which achieved ~96% validation accuracy.
[0098] A second model, to distinguish between the seven (7) specific stress and deficiency classes, has been developed and may be used in addition to or separately from the abiotic/biotic stress identification. This additional CNN’s validation accuracy maxed out at 91%. Additional images may be added to the library to balance the dataset. Also, a class may be added to the library for normal corn leaf images to allow the machine to learn the characteristics of a normal leaf. From a practicality perspective, the algorithm will need to learn the difference between normal and abnormal. The following sections outline our approach, sample performance results, and further address recommendations for future work. Additional benefits of this approach entail application of geo-regional specific datasets. Stresses vary on a regional basis; Central Ohio will experience greater disease pressures than even Northwestern Ohio. The differences are even greater at a national level. For example, Common Rust is a severe issue in southern corn states such as Texas and Oklahoma, but is almost never an issue requiring management in Ohio, as the disease has to move north from southern states. The use of a region-specific reference library will enable tailored deployment of diagnostic algorithms for those locations, greatly increasing system reliability and accuracy. Unlike other machine learning techniques, application of CNNs in the diagnostic process makes this location-dependent training significantly simpler.
[0099] The sample dataset leveraged for the development of the neural networks was the PLSDA library provided by Ohio State, although the present invention is not so limited. This sample library consisted of 2,166 total images, as illustrated in FIG. 22A. Splitting the library into two classes resulted in 1,720 biotic stresses and 466 abiotic stresses, as illustrated in FIG. 22B. Splitting the library further into seven classes resulted in the distribution shown in the figure on the right.
[00100] There were two factors with the data that impacted the practicality of the networks in addition to the performance. First, normal images should be included to address this type of classification problem. From a practicality standpoint, while performance may be sufficient for distinguishing between two stress types, sending this algorithm into the field would result in substantially all images being classified with a stress or deficiency if normal images are not included. Second, when building a neural network, it is recommended to leverage a loss function for performance optimization so that imbalance between classes is minimized. In the present example, for the two-class classification, 80% of the data was labeled biotic stress. This labeling did not have a huge impact on performance since the abiotic stress class still had a significant number of images. However, the disparity in class imbalance (10:1 for Northern Corn Leaf Blight : Magnesium Deficiency) between the multi-class library had a significant impact on algorithm performance. Application of an up-sampling technique addressed this issue, but it is recommended moving forward to strive for balanced classes. Other recommendations will be further discussed in a later section.
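A minimal sketch of the up-sampling remedy described above (assuming images are grouped in a hypothetical `images_by_class` dictionary): randomly chosen images in each minority class are duplicated until every class matches the majority count, 956 for Northern Corn Leaf Blight in this library.

```python
import random

def upsample(images_by_class):
    """Duplicate minority-class images until all classes match the majority."""
    target = max(len(imgs) for imgs in images_by_class.values())
    balanced = {}
    for label, imgs in images_by_class.items():
        # Sample with replacement to bring this class up to the target count.
        extra = [random.choice(imgs) for _ in range(target - len(imgs))]
        balanced[label] = imgs + extra
    return balanced
```

Because up-sampling only duplicates existing images, the dropout regularization discussed below helps offset the added overfitting risk.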
[00101] Another factor to be considered in the CNNs, individually or in combination, is the clarity of the data. For example, images used in training should be representative of the images that may be taken in the field without the presence of a user. Removing images that would impact the algorithm’s search methods for identifying points of interest within each image, e.g., by the presence of a user or an arbitrary unimportant object, may be performed.
[00102] In the present example, development was completed in Python with the Keras deep learning framework running on a TensorFlow GPU backend, although the invention presented herein is not so limited; any suitable platform may be used. From Keras, LeNet, an existing CNN architecture allowing for custom builds, was leveraged.
[00103] The following sections outline the development and performance of example networks developed for two-class and multi-class classification of corn leaves with disease or deficiencies.
[00104] An exemplary architecture implemented, as illustrated in FIG. 23, includes two phases of convolutional, activation, and pooling layers, followed by a fully-connected layer, activation, and an additional fully-connected layer rounded out with a softmax classifier to provide the probability that an image belongs to a certain class. The convolutional layers essentially run a sliding window across the entire pixel matrix to extract relevant features. This is a benefit of CNNs for image recognition; feature extraction is done automatically within the algorithm. Pooling layers (also referred to as subsampling or down-sampling) perform dimensionality reduction of each feature map and retain the information most influential for classification. This step also reduces the risk of overfitting by controlling the number of parameters and computations in the network. Dropout functions may also be used to further minimize the risk of overfitting.
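This architecture can be sketched in the Keras framework noted above. The filter counts, kernel sizes, input size, and dense-layer width below are illustrative assumptions rather than the exact parameters used; `num_classes` would be 2 for the abiotic/biotic network or 7 for the multi-class network.

```python
from tensorflow.keras import layers, models

def build_lenet(input_shape=(28, 28, 3), num_classes=2):
    """LeNet-style CNN: two convolution/pooling phases, then dense layers."""
    model = models.Sequential([
        layers.Conv2D(20, (5, 5), padding='same', activation='relu',
                      input_shape=input_shape),          # convolution + activation
        layers.MaxPooling2D(pool_size=(2, 2)),           # pooling (down-sampling)
        layers.Dropout(0.25),                            # dropout against overfitting
        layers.Conv2D(50, (5, 5), padding='same', activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(500, activation='relu'),            # fully-connected layer
        layers.Dense(num_classes, activation='softmax')  # class probabilities
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```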
[00105] The resulting output is produced by the softmax classifier. After the fully-connected layers connect all relevant features from the convolutional and pooling layers, the softmax function takes the vector of real-valued scores and converts it to a vector of values between zero and one (probability of the image belonging to each class). The class with the highest probability determines the final classification, which is the final output produced by the code. Examples of the output are shown in FIGS. 24A and 24B for each network.
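The softmax conversion itself is compact enough to state directly; the sketch below shows how the vector of real-valued scores becomes a vector of class probabilities summing to one, with the largest entry determining the final classification.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())  # shift scores for numerical stability
    return e / e.sum()                 # probabilities summing to one

print(softmax(np.array([2.0, 1.0, 0.1])))  # approx. [0.66, 0.24, 0.10]
```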
[00106] The architecture for the binary classification problem was trained. At first pass, the class imbalance issue was not accounted for because the abiotic group, despite being outnumbered approximately 4:1, had a significant number of images. Deep learning techniques typically have the ability to naturally adjust for class imbalance if a sufficient amount of data is provided for the minority class. The first model resulted in 96% accuracy. This result may be somewhat misleading because the accuracy is no longer being compared to a 50:50 random draw; thus, 96% accuracy is not truly 96%. Testing the model on new images showed favoritism toward the biotic class, which meant the network was trained heavily on the majority class. Therefore, multiple class imbalance solutions were tried to account for the issue. The first attempted solution was to apply class weights to ensure the loss function did not penalize as heavily for the minority class. This, however, had a negligible impact on performance improvement. Next, a resampling method was applied, specifically choosing to up-sample the minority class to equal the majority (i.e., abiotic and biotic now had the same number of images). Down-sampling is another valid option but might result in losing valuable data points. The resulting model performed at a true 96% validation accuracy and did well when testing on new images the network had not yet seen. The performance was verified with cross validation, which randomly splits the data into 10 subsets of train/test sets; this technique resulted in an average validation accuracy of 97.67%. To minimize the risk of overfitting, a dropout technique was implemented after each convolutional layer. The visual in FIG. 25 illustrates the loss and accuracy through each epoch of training for the final neural network.
[00107] In another example, an architecture for a multi-class classification problem for seven subsets of stresses and nutrient deficiencies was trained. Like the first network, class imbalance was not adjusted for in the first pass to assess the impact on performance. After training the network, validation accuracy merely reached 69%. Again, this is not a pure 69% accuracy measure because there is not an equal probability of selecting from each classification group. To account for class imbalance, both up-sampling and down-sampling techniques were applied. Down-sampling, as expected, resulted in poor performance; validation accuracy peaked at 68%. For this technique, all classes were down-sampled to 89 images to match the minority class (Magnesium/Potassium Deficiency), which appears to be too little data to train an effective convolutional neural network. Next, the up-sampling technique was applied; all classes were up-sampled to 956 images to match the majority class (Northern Corn Leaf Blight). Training the model on up-sampled classes resulted in approximately 90% validation accuracy; cross validation ensured no bias was presented by the train/test split, achieving 91% average validation accuracy. After testing on new images, it appeared that the network was likely overfit to the training images due to the high duplicity of the minority classes created by up-sampling. To mitigate the risk of overfitting, dropout regularization was applied after each convolution and pooling layer, which randomly ignores certain neurons within the network to ensure specific weight is not placed on any one neuron (i.e., to prevent the network from overfitting to specific features). The visual in FIG. 26 illustrates the loss and accuracy through each training epoch for the final neural network.
[00108] As discussed above, two well-performing CNNs have been demonstrated to classify corn leaf stresses and deficiencies into binary classes and then, a step further, into seven specific categories. Both networks were tuned for class imbalances and overfitting issues for maximum performance.
[00109] To build on this work, these neural networks may be transitioned into a commercially viable product for the intended use within a full technology stack, given a few considerations. First, normal data should be collected for healthy corn leaves. At this point, the feasibility of distinguishing between different abnormalities has been demonstrated. The two demonstrated neural networks will classify every image as abnormal, which is impractical for deploying algorithms real-time in the field. However, if new images are collected, the models may be retrained to learn the features of healthy leaves. A near-term solution could be to collect as many such images from Google as possible, and up-sample those to begin prototyping.
[00110] Second, a solution for class imbalance in the machine learning application is to collect more data. Thus, the library of images should continue to be built at least until the minority classes begin to equal the majority classes. At this point, in the present example, the minority classes are lacking the amount of unique data required to learn the often subtle differences between various corn leaf stresses and deficiencies, which can be remedied. Again, once additional data is collected, the networks may be retrained and tuned for optimal performance.
[00111] The development of the stinger/boom-suspended camera head and sampling probes, when combined with unique image processing and a crop health library of plant component physical features, allows for more cost-effective diagnosis of crop health and/or crop stressors in the lower reaches of the plant canopy. In contrast, over-the-canopy remote sensing technologies, where visible features in the upper reaches of the plant canopy occur later in the plant growth stage, limit the impact of mitigating crop health problems (i.e., foliar application of macro or micro nutrients, application of insecticides and fungicides, or water management). Stinger/boom-suspended technology translates directly into actionable information (at lower cost when compared with human crop scout counterparts) so preventative or curative treatments can be ordered and applied to preserve crop quality and yield. Furthermore, unlike the current industry standard for crop stress diagnosis, a crop scout walking the field, this process utilizes the mobility of a drone or other mechanized equipment such as a tractor, allowing for rapid deployment and large area coverage. Also, in at least one aspect, by removing the recurring cost of a skilled/trained workforce, the use of a compatible UAS platform and stinger-suspended camera head or boom-suspended camera and material sampling probes represents a one-time investment with the potential to substantially enhance and extend the human capital. Plant health is at the forefront of many crop production management decisions, and scouting sessions prior to crop canopy closure do not catch the onset of nutrient deficiencies, disease and insect infestations, as the most serious problems occur after canopy closure. Traditional remote sensing techniques can detect the existence of these problems through the use of indicators such as leaf area index (LAI) or vegetative indices such as SAVI, NDVI, etc. However, these methods have not been shown to reliably diagnose the crop health problem. Rather, they effectively quantify crop yield loss post mortem, as sensing occurs at the upper reaches of the plant canopy.
[00112] Furthermore, because the UAS- or boom-suspended camera can visit more locations within the field per hour than its human counterpart (crop scout), it yields a better characterization of the crop health problem, leading to better informed management decisions and ultimately a better economic outcome. Not only is this level of detailed information useful in the scenario presented above for mitigating crop health problems, but in the event of irrevocable plant damage, it could be useful proof when filing for crop insurance.
[00113] While the present examples are described with reference to a single crop, e.g., corn, this invention is not limited to application to corn crops and may be applied to any crop using the principles described herein. While this invention is described with respect to an unmanned aerial vehicle, image capture is not so limited.
[00114] It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
[00115] Throughout this application, various publications may have been referenced. The disclosures of these publications in their entireties are hereby incorporated by reference into this application in order to more fully describe the state of the art to which this invention pertains.
[00116] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the present invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. A method of developing an image library collected from vantage points within a crop canopy, for specific crops (e.g., corn, wheat, soybeans, etc.) within agricultural regions of interest (e.g., the Midwestern Corn Belt, etc.), to be utilized in conjunction with artificial intelligence (e.g., probabilistic methods, classifier and statistical learning, neural networks, etc.) for the purpose of diagnosing crop health problems.
2. A method of generating point clouds from multiple images collected from within the plant canopy, comprising the extraction of specific plant components and projection from 3D to 2D for the purpose of extracting image features to improve crop health problem diagnosis accuracy using artificial intelligence techniques (e.g., probabilistic methods, classifier and statistical learning, neural networks, etc.).
3. A method of generating point clouds from multiple images collected from within the plant canopy, comprising creation of solid models from the point clouds, extraction of specific plant components, and projection of surface features from 3D to 2D for the purpose of extracting and clarifying image features to improve crop health problem diagnosis accuracy using artificial intelligence techniques (e.g., probabilistic methods, classifier and statistical learning, neural networks, etc.).
4. A method of assessing crop health using a learning algorithm capable of identifying different types of stresses of a crop of interest, comprising:
accessing a sample image library comprising a plurality of images, including images of deficient crop tissue, the deficient crop tissue images classified according to stress type;
training the learning algorithm using the sample image library by applying upsampling to provide approximately the same number of images of each stress type in the image library;
extracting relevant features from an image;
performing dimensionality reduction of each feature map of each image and retaining information most useful for classification; and
determining a highest probability for each stress for each image in the library to provide a final classification.
5. The method according to any of the preceding claims, further comprising:
collecting images of plants at a discrete sampling site from a vantage point within the crop canopy;
generating a 3D model of the images of plants at the discrete sampling site;
extracting and projecting samples of the images of plants into a 2D image; and
matching features of the projected samples with images in an image library for diagnosing diseases at the discrete sampling site.
6. The method according to any of the preceding claims, wherein the pixels in the image are classified according to a predefined classification set of pixel values.
7. The method according to any of the preceding claims, wherein the image library comprises images of healthy crop tissue.
8. The method according to any of the preceding claims, wherein the stresses are abiotic/biotic stresses.
9. The method according to any of the preceding claims, wherein the stresses include any stress identifiable by visual inspection of crop tissue.
10. The method according to any of the preceding claims, wherein the crop of interest is corn.
11. The method according to any of the preceding claims, wherein the stresses are classified into abiotic or biotic classes.
12. The method according to any of the preceding claims further comprising an additional class assessment.
13. The method according to any of the preceding claims, wherein extracting relevant features is performed by running a sliding window across pixels of an image.
14. The method according to any of the preceding claims further comprising performing dimensionality reduction of each feature map of each image and retaining information most useful for classification.
15. The method according to any of the preceding claims further comprising determining a highest probability for each stress for each image in the library to provide a final classification.
16. The method according to any of the preceding claims, wherein the 3D model results from stereoscopic image overlap from a plurality of cameras of an image capture device within the canopy.
17. A crop health assessment image capture system for use with the method according to any of the claims above, comprising:
a motorized vehicle;
a camera attached to a distal portion of the motorized vehicle; and
an image storage medium for storing images captured by the camera for later processing.
18. The crop health assessment image capture system of claim 17, further comprising a transmitter for transmitting images to a remote location.
19. The crop health assessment image capture system of any of claims 17-18, wherein at least one camera is a multi-spectral camera.
20. The crop health assessment image capture system of any of claims 17-19, wherein the motorized vehicle is an unmanned vehicle.
21. The crop health assessment image capture system of claim 20, wherein the unmanned vehicle is a rotary wing drone.
22. A method of assessing the health of a crop having a crop canopy using an aerial crop health assessment system, the aerial crop health assessment system comprising an unmanned aerial vehicle capable of being controlled remotely, a suspension rod extending from an underside of the unmanned aerial vehicle, and a camera attached to a distal end of the suspension rod, the method comprising:
flying the unmanned aerial vehicle to a crop location;
positioning the unmanned aerial vehicle above a preselected location in the crop canopy;
lowering the unmanned aerial vehicle to cause the camera to descend within the crop canopy;
capturing images via the camera, the images being of an area within the crop canopy; and
processing the images to determine if the health of the crop is compromised.
PCT/US2018/068150 2017-12-29 2018-12-31 Crop health sensing system WO2019133973A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762612082P 2017-12-29 2017-12-29
US62/612,082 2017-12-29

Publications (1)

Publication Number Publication Date
WO2019133973A1 (en) 2019-07-04

Family

ID=67068195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/068150 WO2019133973A1 (en) 2017-12-29 2018-12-31 Crop health sensing system

Country Status (1)

Country Link
WO (1) WO2019133973A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100111369A1 (en) * 2008-09-26 2010-05-06 Robert Lussier Portable Intelligent Fluorescence and Transmittance Imaging Spectroscopy System
US20130114641A1 (en) * 2011-11-07 2013-05-09 Brian Harold Sutton Infrared aerial thermography for use in determining plant health
US20130136312A1 (en) * 2011-11-24 2013-05-30 Shih-Mu TSENG Method and system for recognizing plant diseases and recording medium
WO2013096704A1 (en) * 2011-12-20 2013-06-27 Sadar 3D, Inc. Systems, apparatus, and methods for acquisition and use of image data
WO2017004074A1 (en) * 2015-06-30 2017-01-05 Precision Planting Llc Systems and methods for image capture and analysis of agricultural fields

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3968113A1 (en) * 2020-09-11 2022-03-16 Vivent SA Apparatus and method for assessing a characteristic of a plant
CN114063639A (en) * 2021-10-16 2022-02-18 长沙乐源土地规划设计有限责任公司 Unmanned aerial vehicle remote sensing information acquisition method and device
CN114063639B (en) * 2021-10-16 2023-08-08 长沙乐源土地规划设计有限责任公司 Unmanned aerial vehicle remote sensing information acquisition method and device
CN116612192A (en) * 2023-07-19 2023-08-18 山东艺术学院 Digital video-based pest and disease damage area target positioning method
CN116778343A (en) * 2023-08-15 2023-09-19 安徽迪万科技有限公司 Target image feature extraction method for comprehensive identification
CN116778343B (en) * 2023-08-15 2023-11-14 安徽迪万科技有限公司 Target image feature extraction method for comprehensive identification
CN117115668A (en) * 2023-10-23 2023-11-24 安徽农业大学 Crop canopy phenotype information extraction method, electronic equipment and storage medium
CN117115668B (en) * 2023-10-23 2024-01-26 安徽农业大学 Crop canopy phenotype information extraction method, electronic equipment and storage medium

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18894190; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 18894190; Country of ref document: EP; Kind code of ref document: A1)