WO2020039434A1 - Plant-monitor - Google Patents

Plant-monitor

Info

Publication number
WO2020039434A1
WO2020039434A1 (PCT/IL2019/050932)
Authority
WO
WIPO (PCT)
Prior art keywords
sounds
plant
plants
sound
computer system
Application number
PCT/IL2019/050932
Other languages
French (fr)
Inventor
Itzhak KHAIT
Raz SHARON
Yosef YOVEL
Lilach HADANY
Original Assignee
Ramot At Tel-Aviv University Ltd.
Application filed by Ramot At Tel-Aviv University Ltd.
Priority to US17/270,016, published as US20210325346A1
Priority to EP19850938.2A, published as EP3830662A4
Publication of WO2020039434A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 29/00 Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N 29/14 Using acoustic emission techniques
    • G01N 29/22 Details, e.g. general constructional or apparatus details
    • G01N 29/24 Probes
    • G01N 29/2437 Piezoelectric probes
    • G01N 29/26 Arrangements for orientation or scanning by relative movement of the head and the sensor
    • G01N 29/262 Scanning by electronic orientation or focusing, e.g. with phased arrays
    • G01N 29/36 Detecting the response signal, e.g. electronic circuits specially adapted therefor
    • G01N 29/42 Detecting the response signal by frequency filtering or by tuning to resonant frequency
    • G01N 29/44 Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N 29/4481 Neural networks
    • G01N 33/00 Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N 33/0098 Plants or trees
    • G01N 2291/00 Indexing codes associated with group G01N29/00
    • G01N 2291/02 Indexing codes associated with the analysed material
    • G01N 2291/024 Mixtures
    • G01N 2291/02466 Biological material, e.g. blood
    • G01N 2291/025 Change of phase or condition
    • G01N 2291/0258 Structural degradation, e.g. fatigue of composites, ageing of oils
    • G01N 2291/10 Number of transducers
    • G01N 2291/106 One or more transducer arrays

Definitions

  • The acquired tomato and tobacco sounds were analyzed using a trained classifier to distinguish the plants emitting the sounds and to indicate the status of the plants.
  • The sounds were divided into four groups according to the plant type, tomato (Tom) or tobacco (Tob), and the stress the plant suffered when it emitted the sound, drought (D) or severing (S): Tom-D, Tom-S, Tob-D, and Tob-S.
  • The distributions of the descriptors max intensity (in dB SPL), fundamental frequency (in kHz), and sound duration (in milliseconds) for the four sound groups are shown in Figs. 5A-5C, respectively.
  • A first feature vector (“Basic”) was based on a small set of basic physical descriptors of the sounds such as fundamental frequency, max intensity, and sound duration. Using the basic physical descriptors alone, the SVM classifier obtained a maximum accuracy between about 45% and 65% for identifying sounds in a pair of sound groups. Accuracy of identification as a function of sound group pair using the SVM classifier is given by graph line 501 in Fig. 5D.
  • A second feature vector (“MFCC”) was based on Mel Frequency Cepstral Coefficient (MFCC) descriptors. Using the MFCC feature vector, an SVM classifier attained 60-70% accuracy (graph line 502 in Fig. 5D).
  • A third feature vector (“Scattering”) was based on descriptors determined using a wavelet scattering convolution network, which provides a locally translation-invariant representation that is stable to time-warping deformations (a minimal code sketch of computing such descriptors appears after this list).
  • The wavelet scattering network extends the MFCC representations by computing modulation spectrum coefficients of multiple orders using a cascade of wavelet filter banks and modulus rectifiers.
  • Using the scattering feature vector, the SVM classifier achieved accuracy between about 70% and 80%, as shown by graph line 503 in graph 500.
  • Accuracy obtained using features determined by Linear Predictive Coding (LPC) is shown by graph line 504 in graph 500, and accuracy obtained operating on a feature vector comprising LPC and MFCC features is given by graph line 505 in graph 500.
  • Graph line 506 shows the accuracy of results obtained by an SVM classifier operating on feature vectors comprising LPC, MFCC, scattering network, and basic features. Particularly advantageous results were obtained by an SVM classifier operating on the third, wavelet scattering network feature vectors (graph line 503 in Fig. 5D). It was also found that all classifiers tested robustly distinguished plant-generated sounds from noise, with an accuracy of 90% or greater.
  • Fig. 5E shows a graph 520 of accuracy in classifying plant-generated sounds from an hour of recording as belonging to one of the groups of sounds described above, tomato-drought (Tom-D), tomato-severed (Tom-S), tobacco-drought (Tob-D), and tobacco-severed (Tob-S), in a pair of groups, as a function of the number of sounds recorded for the plant in an hour.
  • The sounds made by the plant were classified using an SVM classifier operating on wavelet scattering network feature vectors, with a majority vote over the classifications of all sounds that the plant emitted during the hour.
  • A Plant-monitor may have a memory storing a library of characterizing feature vectors for different plants and different statuses of the plants.
  • The library feature vectors may be used to process plant sounds to identify the plants making the sounds and to provide an indication of the status of the plants.
  • Another experimental Plant-monitor (not shown), similar to Plant-monitors 20 and 21, was used to record sounds generated by tomato plants in a more natural environment: a greenhouse without an acoustic isolation box, so that ambient noise was recorded together with plant-generated sounds.
  • The sound recording equipment was the same as that used in Example 1.
  • A library of background noise sounds was first generated by recording inside an empty greenhouse for several days.
  • A classifier based on a convolutional neural network (CNN) was trained, using the pre-recorded greenhouse noise and tomato sounds pre-recorded in an acoustic isolation box, to distinguish between tomato-generated sounds and greenhouse noise with a balanced accuracy score of 99.7%.
  • The balanced accuracy score is a percentage calculated by subtracting from a theoretical perfect accuracy of 100% a balanced failure rate, computed as the mean of the false positive rate and the false negative rate: balanced accuracy = 100% - (FPR + FNR)/2.
  • The trained classifier is hereinafter referred to as a greenhouse noise-detecting (GND) classifier.
  • The recordings were filtered using the CNN-based GND classifier to remove greenhouse noise, and the condition of the plants from which the recordings were made was determined to be drought-stressed or normally irrigated based on the count or frequency of plant-generated sounds in the hour-long recordings filtered by the GND classifier.
  • Filtered recordings containing 3 or more tomato plant-generated sounds during the 1-hour recording duration were classified as being recorded from drought-stressed plants, and filtered recordings containing 2 or fewer tomato plant-generated sounds during the 1-hour recording duration were classified as being recorded from normally irrigated plants.
  • The above-described classification process resulted in a balanced accuracy score of about 84.4% for distinguishing between 25 normally irrigated (control) plant recordings and 26 drought-stressed plant recordings.
  • A system for monitoring plants comprising: one or more sound collectors configured to receive and generate signals based on sounds produced by one or more plants; and a computer system configured to receive and process the signals to provide a profile of the one or more plants producing the sounds.
  • The plant profile comprises at least one or both of a plant identity and a plant status of the one or more plants.
  • The plant status comprises at least one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
  • The computer system is configured to generate at least one feature vector having components to represent the sounds.
  • The components of a feature vector of the at least one feature vector comprise at least one or any combination of more than one of a fundamental frequency, a max intensity, a mean intensity, a variance of intensity, a duration of the sounds, a frequency or count of incidence of the sounds, a variance of intervals between sounds, and an inter-quartile range of sound duration.
  • The components of a feature vector of the at least one feature vector comprise one or more features generated by a dimensionality-reducing algorithm that operates on a signal based on the sounds.
  • The one or more features are generated by a Mel Frequency Cepstral Coefficient (MFCC) algorithm or a function thereof, a wavelet scattering convolution network, a stacked autoencoder, or an embedding algorithm.
  • The one or more features are generated by a scattering convolution network.
  • The computer system is configured to process the at least one feature vector using at least one classifier to profile the one or more plants producing the sounds.
  • The at least one classifier comprises at least one or any combination of more than one of an artificial neural network (ANN), a support vector machine (SVM), Linear Predictive Coding (LPC), and/or a K Nearest Neighbors (KNN) classifier.
  • An ANN is used as a feature extractor that filters out non-plant-generated sounds from the recorded signal.
  • The one or more sound collectors comprise an array of sound collectors.
  • The computer system is configured to operate the array of sound collectors as a phased array to receive plant sounds from a limited spatial region.
  • The computer system is configured to operate the array of sound collectors as an array of speakers.
  • The computer system is configured to operate the array of speakers as a phased array of speakers to direct sound to a limited spatial region.
  • The sound is directed responsive to the plant profile of the one or more plants, the one or more plants optionally being located within the limited spatial region.
  • The computer system is configured to initiate administration of an agricultural treatment to the one or more plants responsive to the plant profile of the one or more plants.
  • The agricultural treatment is selected from the group consisting of water, a pesticide, an herbicide, or a fertilizer.
  • A sound collector of the one or more sound collectors is mounted to a mobile platform.
  • The mobile platform is optionally a ground-based vehicle or an aerial vehicle.
  • The sound collector is a microphone or a piezoelectric transducer.
  • The sound collector is configured to generate the signal responsive to sounds within a frequency range of from about 20 kHz to about 100 kHz.
  • The frequency range is from about 40 kHz to about 60 kHz.
  • A method comprising: receiving sounds that plants make; generating signals based on the sounds; and processing the signals to provide a profile of one or more plants producing the sounds.
  • The plant profile comprises at least one or both of a plant identity and a plant status.
  • The plant status comprises at least one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
  • The method comprises generating at least one feature vector having components to represent the sounds.
  • The components of a feature vector of the at least one feature vector comprise at least one or any combination of more than one of a fundamental frequency, a max intensity, a mean intensity, a variance of intensity, a duration of the sounds, a frequency or count of incidence of the sounds, a variance of intervals between sounds, and an inter-quartile range of sound duration.
  • The components of a feature vector of the at least one feature vector comprise one or more features generated by a dimensionality-reducing algorithm that operates on a signal based on the sounds.
  • The one or more features are generated by a Mel Frequency Cepstral Coefficient (MFCC) algorithm or a function thereof, a wavelet scattering convolution network, a stacked autoencoder, or an embedding algorithm.
  • The one or more features are generated by a scattering convolution network.
  • The at least one feature vector is processed using at least one classifier to profile the one or more plants producing the sounds.
  • The at least one classifier comprises at least one or any combination of more than one of an artificial neural network (ANN), a support vector machine (SVM), Linear Predictive Coding (LPC), and/or a K Nearest Neighbors (KNN) classifier.
  • An ANN is used as a feature extractor that filters out non-plant-generated sounds from the recorded signal.
  • The signals are generated responsive to sounds within a frequency range of from about 20 kHz to about 100 kHz.
  • The frequency range is from about 40 kHz to about 60 kHz.
  • The method comprises initiating administration of an agricultural treatment to the one or more plants responsive to the plant profile.
  • The agricultural treatment is selected from the group consisting of water, a pesticide, an herbicide, or a fertilizer.
  • Each of the verbs “comprise”, “include”, and “have”, and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements, or parts of the subject or subjects of the verb.
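
As referenced above, the following is a minimal sketch of computing wavelet-scattering descriptors for one detected plant sound. The kymatio package and the segment length, J, and Q values are illustrative assumptions, not details taken from the disclosure:

    import numpy as np
    from kymatio.numpy import Scattering1D  # assumed third-party package

    FS = 500_000   # sampling rate used in the experiments, Hz
    T = 2 ** 12    # samples per analyzed segment (about 8 ms), illustrative
    scattering = Scattering1D(J=8, shape=T, Q=8)  # illustrative J and Q

    def scattering_features(segment: np.ndarray) -> np.ndarray:
        """Fixed-length, locally translation-invariant descriptor vector."""
        x = np.zeros(T)
        n = min(len(segment), T)
        x[:n] = segment[:n]              # pad or truncate to length T
        x /= np.abs(x).max() + 1e-12     # amplitude normalization
        Sx = scattering(x)               # (n_coefficients, n_frames) grid
        return Sx.mean(axis=-1)          # time-average into a feature vector

Feature vectors of this form could then be classified by the SVM described above, for example with a majority vote over all sounds a plant emits during an hour.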

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Botany (AREA)
  • Wood Science & Technology (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A system for monitoring plants, the system comprising: one or more sound collectors configured to receive and generate signals based on sounds that plants make; and a computer system configured to receive and process the signals to provide a profile of a plant producing the sounds.

Description

PLANT-MONITOR
RELATED APPLICATIONS
[0001] The present application claims the benefit under 35 U.S.C. 119(e) of U.S. Provisional Application 62/719,709, filed on August 20, 2018, the disclosure of which is incorporated herein by reference.
FIELD
[0002] Embodiments of the disclosure relate to monitoring the condition of plants by listening to sounds that the plants make.
BACKGROUND
[0003] Modern agriculture that provides produce to feed the burgeoning global population is a complex industrial process that involves investment and management of natural and manmade resources, such as land, artificial soil, water, sunlight, nutrients, and pesticides, to promote plant growth that provides abundant, economical crop yields. Plant health, growth rate, and crop yields are subject to variables, such as weather, disease, and insect infestations, which may be difficult to anticipate and which operate to make efficient provision and timely administration of the resources a relatively complex undertaking. Whether for greenhouse, open field, or orchard agriculture, efficient and close monitoring of plant growth and health, and that of the grains, fruits, and vegetables the plants bear, may be particularly advantageous in facilitating effective management of the resources.
SUMMARY
[0004] An aspect of an embodiment of the disclosure relates to providing a system for monitoring the condition of plants by listening to sounds that the plants make. In an embodiment, the system, hereinafter also referred to as a “Plant-monitor”, comprises an array of sound collectors, optionally microphones or piezoelectric transducers, configured to generate signals responsive to sounds that plants make. The system comprises a processor configured to receive and process the signals to identify the type of plants making the sounds and/or to determine a status of the plants making the sounds. In an embodiment the processor uses a classifier to identify types of plants vocalizing the sounds and/or evaluate indications of plant status. Plant status may, by way of example, be based on at least one or any combination of more than one of indications of strain due to drought stress, structural damage, which may be brought about by violent weather and/or by infestation of pests, as well as plant maturity, density of foliage, and/or fruit.
[0005] Optionally, the plant sounds are ultrasonic sounds in a band from about 20 kHz to about 100 kHz, from about 40 kHz to about 70 kHz, or from about 40 kHz to about 60 kHz. In an embodiment the plurality of sound collectors are operated as a phased array acoustic antenna controllable to spatially scan a field of plants and acquire plant sounds as a function of location in the field.
[0006] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF FIGURES
[0007] Non-limiting examples of embodiments of the disclosure are described below with reference to figures attached hereto that are listed following this paragraph. Identical features that appear in more than one figure are generally labeled with a same label in all the figures in which they appear. A label labeling an icon representing a given feature of an embodiment of the disclosure in a figure may be used to reference the given feature. Dimensions of features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale.
[0008] Fig. 1 schematically shows a Plant-monitor scanning a field of plants to acquire and process plant sounds as a function of location in the field, in accordance with an embodiment of the disclosure;
[0009] Fig. 2 schematically shows an experimental setup for acquiring sounds made by, optionally tomato and tobacco, plants respectively challenged by different drought stress and physical injury, in accordance with an embodiment of the disclosure;
[0010] Fig. 3A shows a graph of a number of sounds per hour made by tomato and tobacco plants challenged by drought, the sounds acquired by the experimental set up shown in Fig. 2, in accordance with an embodiment of the disclosure;
[0011] Fig. 3B shows graphs of the amplitude and frequency distributions of the sounds graphed in Fig. 3A, in accordance with an embodiment of the disclosure;
[0012] Fig. 4A shows a graph of a number of sounds per hour made by tomato and tobacco plants that have been damaged, acquired by the experimental set up shown in Fig. 2, in accordance with an embodiment of the disclosure;
[0013] Fig. 4B shows graphs of the amplitude and frequency distributions of the sounds graphed in Fig. 4A, in accordance with an embodiment of the disclosure;
[0014] Figs. 5A-5C show graphs of a mean and standard deviations for intensity, frequency, and duration of sounds respectively made by tomato and tobacco plants that are challenged by drought and physical damage, acquired by the experimental set up shown in Fig. 2, in accordance with an embodiment of the disclosure;
[0015] Fig. 5D shows a graph of accuracy in classifying sounds made by tomato and tobacco plants challenged by drought and physical damage by a support vector machine (SVM) operating on feature vectors based on different types of descriptors, in accordance with an embodiment of the disclosure;
[0016] Fig. 5E shows a graph of accuracy in classifying sounds made by tomato and tobacco plants challenged by drought and physical damage by a support vector machine (SVM) as a function of number of sounds recorded for the plants, in accordance with an embodiment of the disclosure; and
[0017] Fig. 6A and Fig. 6B show confusion matrices of accuracy of identifying noise and plant sounds and indicating plant status, respectively, in accordance with an embodiment of the disclosure.
DETAILED DESCRIPTION
[0018] In the detailed discussion below, operation of a Plant-monitor comprising a plurality of sound collectors, optionally operating as a phased array, to acquire and analyze sounds made by plants in a field is discussed with reference to Fig. 1. Fig. 2 shows an experimental system used for recording and analyzing sounds made by tomato and tobacco plants under stress from drought and physical damage. Results of experiments carried out using the systems shown in Figs. 1-2, which show that the plants and the stresses to which they are subjected can be distinguished by their sounds, are discussed with reference to data shown in Figs. 3A-6B.
[0019] In the discussion, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended. Wherever a general term in the disclosure is illustrated by reference to an example instance or a list of example instances, the instance or instances referred to are by way of non-limiting example instances of the general term, and the general term is not intended to be limited to the specific example instance or instances referred to. Unless otherwise indicated, the word “or” in the description and claims is considered to be the inclusive “or” rather than the exclusive “or”, and indicates at least one of, or any combination of more than one of, the items it conjoins.
[0020] Fig. 1 schematically shows a Plant-monitor system 20 being used to monitor plant health and detect injury and/or strain to plants 50 in a field 52 of the plants as a function of their location in the field. Plant-monitor system 20 optionally comprises an array 22 of sound collectors 24 configured to generate signals responsive to sounds, schematically represented by beams of concentric arcs 40, made by the plants and transmit the signals to a computer system 26 for processing. The signals may be transmitted to computer system 26 via wire or wireless channels represented by dashed lines 42. Generating signals responsive to the plant sounds may be referred to as acquiring a soundtrack of the sounds or acquiring the sounds, and processing the signals may be referred to as processing the sounds. A sound collector of sound collectors 24 may be, by way of example, a microphone or a piezoelectric transducer.
[0021] In an embodiment computer system 26 may control sound collectors 24 to operate in a phased mode to scan field 52 and acquire soundtracks of the sounds 40 of plants 50 as a function of their location in field 52. By way of example, in Fig. 1 sound collector array 22 is shown controlled by computer system 26 to acquire plant sounds from a region of field 52 indicated by a circle 54.
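To make the phased-mode operation concrete, the following is a minimal delay-and-sum beamforming sketch; free-field propagation, the array geometry interface, and the speed-of-sound value are illustrative assumptions, not details from the disclosure:

    import numpy as np

    C_AIR = 343.0  # nominal speed of sound in air, m/s (assumed)

    def focus_on_point(signals, mic_positions, focal_point, fs):
        """Delay-and-sum focusing: align channels so sound from focal_point
        adds coherently while sound from elsewhere tends to cancel.

        signals: (n_mics, n_samples); mic_positions: (n_mics, 3) in meters;
        focal_point: (3,) in meters; fs: sampling rate in Hz.
        """
        dists = np.linalg.norm(mic_positions - focal_point, axis=1)
        delays = (dists - dists.min()) / C_AIR      # relative delays, seconds
        shifts = np.round(delays * fs).astype(int)  # relative delays, samples
        n = signals.shape[1] - shifts.max()
        aligned = [sig[s:s + n] for sig, s in zip(signals, shifts)]
        return np.mean(aligned, axis=0)             # focused soundtrack

Under this sketch, scanning field 52 amounts to sweeping focal_point over a grid of plant locations and analyzing each focused soundtrack separately.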
[0022] Optionally a sound collector of sound collectors 24 may be mounted on a mobile platform (not shown), by way of example a ground-based vehicle or an aerial vehicle such as a quadcopter or a drone. Optionally, the mobile platform is remote-controlled by a human operator, is semi-autonomous, or is fully autonomous. Optionally, the mobile platform is controlled by computer system 26.
[0023] Optionally, the acquired plant sounds comprise airborne sounds generated within a stem of the plant that can be recorded from a distance, by way of example about 5 centimeters, about 10 centimeters, about 25 centimeters, about 50 centimeters, or about 100 centimeters from the stem. The sound collectors may be placed in the field so that they are positioned appropriately to record sounds emanating from the stem of one or more selected plants. In configurations for which a sound collector of the sound collectors is mounted on a mobile platform, computer system 26 may be configured to control the mobile platform to position itself and the sound collector to be appropriately positioned to record sounds emanating from the stem of one or more selected plants.
[0024] In an embodiment computer system 26 processes the sounds to generate a profile of the one or more plants producing the sounds, the profile comprising an identity of one or more different types of plants in field 52, and/or their respective status, optionally as a function of their location. Optionally, the computer system processes the signals to generate a set of features that characterize the sounds, optionally as a function of location. The computer system may use a classifier to process feature vectors based on the set of features to identify the different types of plants and to determine their respective statuses, optionally as a function of location. The plant identity is optionally a plant species. The status of the plant may comprise one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
[0025] The feature vectors may comprise at least one or any combination of more than one of a fundamental frequency, a max intensity, a mean intensity, a variance of intensity, a duration of the sound, a frequency or count of sound incidence, a variance of intervals between sounds, and an inter-quartile range of sound duration. Additionally or alternatively, the feature vectors may comprise one or more features generated by an algorithm that reduces dimensionality of a signal based on the sounds. The one or more features may be generated by a Mel Frequency Cepstral Coefficient (MFCC) algorithm or a function thereof, a wavelet scattering convolution network, a stacked autoencoder, or an embedding algorithm.
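A minimal sketch of assembling such a feature vector for one detected sound is given below; librosa is used for the MFCC-type coefficients, and the parameter choices are illustrative assumptions rather than values specified by the disclosure:

    import numpy as np
    import librosa  # assumed third-party package for MFCC computation

    def feature_vector(sound: np.ndarray, fs: int) -> np.ndarray:
        """Basic physical descriptors of one sound plus time-averaged MFCCs."""
        spectrum = np.abs(np.fft.rfft(sound))
        freqs = np.fft.rfftfreq(len(sound), d=1.0 / fs)
        basic = np.array([
            freqs[np.argmax(spectrum)],   # dominant (fundamental) frequency, Hz
            np.max(np.abs(sound)),        # max intensity
            np.mean(np.abs(sound)),       # mean intensity
            np.var(sound),                # variance of intensity
            len(sound) / fs,              # duration, seconds
        ])
        # Short analysis window chosen for brief ultrasonic clicks (assumed).
        mfcc = librosa.feature.mfcc(y=sound.astype(float), sr=fs,
                                    n_mfcc=13, n_fft=512, hop_length=128)
        return np.concatenate([basic, mfcc.mean(axis=1)])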
[0026] In an embodiment, the recordings are filtered, by hardware means or software means, to select for sounds within a desired frequency range for further analysis in computer system 26. Optionally, the desired frequency range is from about 20 kHz to about 100 kHz, from about 40 kHz to about 70 kHz, or from about 40 kHz to about 60 kHz. Sound collectors 24 may comprise or be operatively connected to one or more amplifiers and/or band pass filters as appropriate for transmitting signals generated by a defined range of frequencies to computer system 26. Additionally or alternatively, computer system 26 may comprise electronic sound filtering and amplification functionalities to filter the soundtrack so that only sounds within a desired range of frequencies are kept for further analysis. In an embodiment, computer system 26 filters recordings of plant-generated sounds to remove non-plant-generated background sounds that may be produced in field 52. Background sounds may, by way of example, comprise sounds generated by wind, farm equipment operation, animals such as birds and insects, and the like. Optionally, the filtering is performed using a classifier trained to distinguish background sounds from plant-generated sounds.
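A minimal sketch of the software filtering step, assuming a scipy Butterworth band-pass filter; the filter order is an illustrative choice:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def keep_plant_band(recording: np.ndarray, fs: int,
                        lo: float = 20e3, hi: float = 100e3) -> np.ndarray:
        """Zero-phase band-pass keeping roughly 20-100 kHz content."""
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, recording)

    # e.g., filtered = keep_plant_band(raw_track, fs=500_000)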
[0027] The classifiers used to determine the status or identity of the plants that generated the sounds being analyzed, or to filter the plant recordings, may be one or more of: an artificial neural network (ANN) classifier, by way of example a convolutional neural network (CNN) or a deep neural network (DNN), a support vector machine (SVM) classifier, a Linear Predictive Coding (LPC) classifier, and/or a K Nearest Neighbors (KNN) classifier.
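By way of a hedged illustration, training one such classifier, an SVM, on labeled feature vectors might look like the sketch below; scikit-learn and the RBF-kernel choice are assumptions, and the label names are hypothetical:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_sound_classifier(X: np.ndarray, y: np.ndarray):
        """Fit an SVM on feature vectors X with labels y, e.g. labels such as
        "tomato-drought", "tobacco-severed", or "background-noise"."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, y)
        return clf

    # status = clf.predict(feature_vector(sound, fs).reshape(1, -1))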
[0028] In an embodiment, computer system 26 analyzes recordings comprising airborne sounds generated in a stem of a plant (or group of plants) to determine the species of the plant and/or to determine whether the plant was well irrigated, drought-stressed, or severed close to the ground. Optionally, the feature vector comprises features based on descriptors of the recorded plant-generated sounds determined using a wavelet scattering convolution network.
[0029] In an embodiment computer system 26 analyzes recordings comprising airborne sounds generated in a stem of a plant (or group of plants) to determine whether the plant was well irrigated, drought-stressed, or severed close to the ground based on a count or frequency of incidence of the plant-generated sound.
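A minimal sketch of such a count-based rule; the three-sounds-per-hour threshold echoes the greenhouse example earlier in this document but is otherwise an assumption:

    def hydration_status(sounds_per_hour: int, threshold: int = 3) -> str:
        """Classify hydration from the hourly count of detected plant sounds."""
        if sounds_per_hour >= threshold:
            return "drought-stressed"
        return "well irrigated"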
[0030] Computer system 26 may operate to manage plants 50 in field 52 based on the determined plant identities and their statuses. The plants in field 52 may be managed by controlling, by way of example, irrigation, application of fertilizer, or application of pesticides. For example, computer system 26 may determine substantially in real time, responsive to acquired plant sounds, that plants in a first region of field 52 are suffering from drought stress, and that plants in a second region are suffering from exposure to excessive soil water. In response, the computer system may control an irrigation system (not shown) to increase provision of water to the first region and decrease provision of water to the second region. Computer system 26 may also control sound collector array 22 to operate as an array of acoustic transmitters and focus sound at a first frequency and intensity in the first region that may cause the plants in the first region to reduce their stomata openings and reduce their rate of water loss. The computer system may focus sound of a second intensity and frequency that may cause plants in the second region to increase their stomata openings to increase a rate at which they lose water. By way of another example, computer system 26 may determine, substantially in real time, that plants 50 in a third region of field 52 are being physically damaged by pest infestation. In response the computer system may operate to dispatch a robot (not shown) to the third region to spray the region with a pesticide and/or to patrol the region to scare the pests off by frequently appearing in the region. By way of another example, computer system 26 may determine, substantially in real time, that plants 50 in a fourth region of field 52 are not growing as robustly as expected in terms of foliage or fruit production. In response the computer system may operate to dispatch a robot (not shown) to the fourth region to apply fertilizer to the region. By way of another example, computer system 26 may determine, substantially in real time, that a species of plants 50 in a fifth region of field 52 does not match an intended species, indicating weed proliferation. In response the computer system may operate to dispatch a robot (not shown) to the fifth region to spray the region with an herbicide, optionally an herbicide that is well-tolerated by the intended species.
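The management responses just described could be wired up as a simple dispatch over per-region profiles, as in the sketch below; the farm-control API (irrigation, dispatch_robot) and the status names are hypothetical:

    def manage_region(region_id: str, profile: dict, farm) -> None:
        """Map a region's plant profile to an agricultural action."""
        status = profile.get("status")
        if status == "drought-stressed":
            farm.irrigation.increase(region_id)    # hypothetical API
        elif status == "over-watered":
            farm.irrigation.decrease(region_id)
        elif status == "pest-damaged":
            farm.dispatch_robot(region_id, task="spray-pesticide")
        elif status == "weak-growth":
            farm.dispatch_robot(region_id, task="apply-fertilizer")
        elif status == "weed-detected":
            farm.dispatch_robot(region_id, task="spray-herbicide")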
[0031] Example 1
[0032] Fig. 2 shows a schematic of an experimental plant monitor 21 in accordance with an embodiment of the disclosure, including six microphones 24 for recording plant sounds and a computer system 26 configured to digitize, record, and process sounds captured by the microphones. The plants 25 and microphones 24 are placed inside an acoustically isolated box (“acoustic box”) 100.
[0033] The results shown in Figs. 3A-5E were acquired with the following equipment: The microphones used were condenser CM16 ultrasound microphones (Avisoft). The recorded sound was digitized using an UltraSoundGate 1216H A/D converter (Avisoft) and stored on a PC. The sampling rate was 500 kHz per channel, and a 15 kHz high-pass filter was used. Two microphones were directed at each plant stem from a distance of 10 centimeters (cm), and an instance of sound recording in a microphone was triggered by a sound exceeding 2% of the maximum dynamic range of the microphone. In subsequent analysis, only sounds that were recorded by both microphones were considered a “plant sound”.
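A minimal sketch of the trigger-and-coincidence logic just described; the coincidence window is an illustrative assumption:

    import numpy as np

    def triggers(channel: np.ndarray, full_scale: float, frac: float = 0.02):
        """Sample indices where the signal exceeds 2% of full scale."""
        return np.flatnonzero(np.abs(channel) > frac * full_scale)

    def plant_sound_events(ch_a, ch_b, fs, full_scale, window_s=0.001):
        """Keep only triggers seen by both microphones within window_s seconds."""
        a = triggers(ch_a, full_scale)
        b = triggers(ch_b, full_scale)
        w = int(window_s * fs)
        return [i for i in a if b.size and np.abs(b - i).min() <= w]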
[0034] In each of a plurality of recording sessions using Plant-monitor 21, three plants 25 were placed inside the acoustic box, with two microphones directed at each plant to eliminate false recordings. The interior of acoustic box 100 was limited to the plants being recorded and microphones 24. Plant sounds in the ultrasonic sound range between 15-250 kHz (kilohertz), for which ambient noise is less common, were recorded. The recorded sounds were generated by tomato (Solanum lycopersicum) and tobacco (Nicotiana tabacum) plants that were grown normally in a pot, subjected to drought stress, or subjected to damage by severing their stalks close to the ground. All plants were grown in a growth room at 25 degrees Celsius (deg. C) and were tested at 5-7 weeks after germination.
[0035] Reference is made to Fig. 3A. Sounds recorded from drought-stressed and control plants showed that drought-stressed plants emit significantly more sounds than control plants (p < e-7, Wilcoxon test). A drought experiment was performed in which all the plant sounds were recorded twice: a first recording session of 1 hour before drought treatment, and a second recording session (1 hour) after half of the plants experienced drought stress in the form of 4-6 days without being watered, until the soil moisture in the un-watered plants decreased to 5% of pre-drought levels. It was found that the mean number of sounds emitted by drought-stressed plants during one hour was 35.4 ± 6.1 and 11.0 ± 1.4 sounds for tomato and tobacco plants respectively (Fig. 3A). In contrast, the mean number of sounds emitted under control conditions was less than 1 per hour. Three controls were used: recording from the same plant before drought treatment (self-control), recording from a normally-watered same-species plant placed next to the drought-stressed plant (neighbor-control), and recording from an empty pot without a plant (Pot). Between 20 and 30 plants were sampled for each group.
[0036] Examples of sounds emitted by drought-stressed tomato and tobacco plants as functions of time are shown in graphs 301 and 302, respectively, in Fig. 3B. Frequency spectra of the emitted sounds are shown in graphs 303 (drought-stressed tomato) and 304 (drought-stressed tobacco). The mean intensity of the tomato plant sounds (Fig. 3B graph 301) was 61.6 ± 0.1 dBSPL (decibels sound pressure level) at a distance of 10.0 cm from the plants, with a mean frequency of 49.6 ± 0.4 kHz (Fig. 3B graph 303). The mean intensity of the tobacco sounds was 65.6 ± 0.4 dBSPL (Fig. 3B graph 302) at 10.0 cm, with a mean frequency of 54.8 ± 1.1 kHz (Fig. 3B graph 304).
[0037] Similarly to drought-stressed plants, cut plants also emitted significantly more sounds than under control conditions (p < 1e-7, Wilcoxon test). Plants in the treatment group were cut with scissors close to the ground immediately before the experimental recording session, and the sounds made by the stalk in the severed part of the plant, disconnected from the roots, were recorded. The pot soil was kept moist. As in the drought experiment, three controls were used: recording from the same plant before severing (self-control), recording from a normally-watered same-species plant placed next to the severed plant (neighbor-control), and recording from an empty pot without a plant (Pot). Between 20 and 30 plants were sampled for each group.
[0038] Reference is made to Fig. 4A. It was found that severed tomato and tobacco plants emitted 15.2 ± 2.6 and 21.1 ± 3.4 sounds, respectively, during one hour. The mean number of sounds emitted by control plants was less than 1 per hour (Fig. 4A). Sounds produced by cut tomato plants had a mean intensity of 65.6 ± 0.2 dBSPL measured at 10.0 cm (Fig. 4B graph 401) and a mean peak frequency of 57.3 ± 0.7 kHz (Fig. 4B graph 403). Sounds produced by cut tobacco plants had a mean intensity of 63.3 ± 0.2 dBSPL measured at 10.0 cm (Fig. 4B graph 402) and a mean peak frequency of 57.8 ± 0.7 kHz (Fig. 4B graph 404).
[0039] It was also found that tomato plant branches severed from the rest of the plant emitted substantially more sounds than intact branches.
[0040] It was also found that the stems of plants infected with a pathogen emitted substantially more sounds than those of healthy plants. Tomato plants were infected with one of three different strains of Xanthomonas: Xanthomonas euvesicatoria (Xcv) 85-10, Xanthomonas vesicatoria (Xcv) T2, or Xanthomonas vesicatoria (Xcv) T2 comprising a plasmid expressing the avirulence effector AvrXv3. With each strain, the infected plants emitted substantially more sounds than the same plants before infection or neighboring healthy plants. Increased emission of stem-generated sound from intact, well-watered plants is therefore an indication of pathogen infection.
[0041] It was also found that stem-generated sounds can be used to determine a developmental stage of a plant. Sounds generated by the stem of Mamilarius ramosissima plants were monitored over a period of time before and during blooming of flowers on the plants. Well-watered plants generally produced less than one sound per hour. However, a temporary increase in stem sound generation was consistently observed between 2 and 4 days prior to blooming. As such, a temporary increase in stem-generated sounds in well-watered, healthy, intact plants prior to flower blooming indicates impending flower blooming.
[0042] In accordance with an embodiment of the disclosure, the acquired tomato and tobacco sounds were analyzed using a trained classifier to distinguish plant sounds emitted by tomato and tobacco plants and to indicate the status of the plants. The sounds were divided into four groups according to the plant type, tomato (Tom) or tobacco (Tob), and the stress the plant suffered from when it emitted the sound, drought (D) or severing (S): Tom-D, Tom-S, Tob-D, and Tob-S. The distributions of the descriptors of maximum intensity (in dBSPL), fundamental frequency (in kHz), and sound duration (in milliseconds) for the four sound groups are shown in Figs. 5A-5C, respectively.
[0043] For each of six pairs of sound groups (Tom-D vs. Tom-S, Tob-D vs. Tob-S, Tom-D vs. Tob-D, Tom-D vs. Tob-S, Tom-S vs. Tob-D, Tom-S vs. Tob-S), the classifier was trained to identify to which group of the pair a given plant sound belonged. Each plant sound was represented by a feature vector, and the classifier operated on the feature vector representing a sound to determine to which group of plant sounds the sound belonged.

[0044] Different types of feature vectors representing the sound recordings from the four sound groups were used to determine the efficacy of different feature vectors for distinguishing the sounds. Reference is made to Fig. 5D, displaying a graph 500 showing accuracy of identification as a function of sound group pair obtained by applying an SVM classifier to the different types of feature vectors.
[0045] A first feature vector (“Basic”) was based on a small set of basic physical descriptors of the sounds, such as fundamental frequency, maximum intensity, and sound duration. Using the basic physical descriptors alone, the SVM classifier obtained an accuracy between about 45% and 65% for identifying sounds in a pair of sound groups. Accuracy of identification as a function of sound group pair using this feature vector is given by graph line 501 in Fig. 5D. A second feature vector (“MFCC”) was based on Mel Frequency Cepstral Coefficient (MFCC) descriptors. Using the MFCC feature vector, the SVM classifier attained 60-70% accuracy (graph line 502 in Fig. 5D). A third feature vector (“Scattering”) was based on descriptors determined using a wavelet scattering convolution network, which provides a locally translation-invariant representation that is stable to time-warping deformations. The wavelet scattering network extends the MFCC representation by computing modulation spectrum coefficients of multiple orders using a cascade of wavelet filter banks and modulus rectifiers. Using the Scattering feature vector, the SVM classifier achieved accuracy between about 70% and 80%, as shown by graph line 503 in graph 500. Accuracy obtained using features determined by Linear Predictive Coding (LPC) is shown by graph line 504 in graph 500, and accuracy obtained by the SVM classifier operating on a feature vector comprising LPC and MFCC features is given by graph line 505 in graph 500. Graph line 506 shows the accuracy obtained by an SVM classifier operating on feature vectors comprising LPC, MFCC, scattering network, and basic features. Particularly advantageous results were obtained by the SVM classifier operating on the third, wavelet scattering network feature vectors (graph line 503 in Fig. 5D). It was also found that all classifiers tested robustly distinguished plant-generated sounds from noise, with an accuracy of 90% or greater.
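As a rough, hypothetical illustration of this pairwise-classification pipeline (not the experimental code), the sketch below builds one fixed-length feature vector per sound using MFCC descriptors via librosa and scores a pair of sound groups with a cross-validated SVM; per the results above, the wavelet scattering features that performed best would substitute for the MFCC step:

    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    FS = 500_000  # sampling rate used in Example 1, in Hz

    def mfcc_vector(clip, fs=FS, n_mfcc=13):
        # Summarize one recorded sound clip as a fixed-length feature vector:
        # the mean of each MFCC coefficient over the clip's frames.
        m = librosa.feature.mfcc(y=np.asarray(clip, dtype=float), sr=fs,
                                 n_mfcc=n_mfcc)
        return m.mean(axis=1)

    def pairwise_accuracy(clips_a, clips_b):
        # Cross-validated accuracy of an SVM separating two sound groups,
        # e.g. Tom-D vs. Tom-S; clips_* are lists of 1-D sample arrays.
        X = np.stack([mfcc_vector(c) for c in list(clips_a) + list(clips_b)])
        y = np.array([0] * len(clips_a) + [1] * len(clips_b))
        return cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()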
[0046] Fig. 5E shows a graph 520 of the accuracy of classifying plant-generated sounds from an hour of recording as belonging to one of the groups described above, tomato-drought (Tom-D), tomato-severed (Tom-S), tobacco-drought (Tob-D), or tobacco-severed (Tob-S), in a pair of groups, as a function of the number of sounds recorded from the plant in the hour. The sounds made by the plant were classified using an SVM classifier operating on wavelet scattering network feature vectors, together with a majority vote over the classifications of all sounds that the plant emitted during the hour. Accuracies for classifying sounds made by a plant as belonging to one of the groups in group pairs (Tom-D)-(Tom-S), (Tom-D)-(Tob-D), (Tom-S)-(Tob-S), and (Tob-D)-(Tob-S) are given as functions of the number of sounds made by the plant during an hour by graph lines 521, 522, 523, and 524, respectively. From graph 520 it may be seen that plants that emitted 10 or more sounds in an hour were classified correctly 80-90% of the time in terms of their species and/or condition.
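The per-plant decision reduces to a majority vote over the per-sound classifications from the hour of recording; a minimal sketch of that aggregation step (illustrative, not the experimental code):

    from collections import Counter

    def classify_plant(sound_labels):
        # Majority vote over per-sound classifications from one hour of
        # recording; returns None when no sounds were recorded.
        if not sound_labels:
            return None
        return Counter(sound_labels).most_common(1)[0][0]

    # e.g. 12 sounds from one plant, each classified individually by the SVM:
    print(classify_plant(["Tom-D"] * 9 + ["Tom-S"] * 3))  # -> Tom-D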
[0047] The experimental results indicate that plant species may be identified by the sounds they make and that the sounds may be used to determine the status of the plants. In an embodiment of the disclosure, a Plant-monitor may have a memory storing a library of characterizing feature vectors for different plants and different statuses of the plants. The library feature vectors may be used to process plant sounds to identify the plants making the sounds and to provide an indication of the status of the plants.
[0048] Example 2
[0049] Another experimental Plant monitor (not shown), similar to Plant monitors 20 and 21, was used to record sounds generated by tomato plants in a more natural environment: a greenhouse, without an acoustic isolation box to prevent ambient noise from being recorded together with plant-generated sounds. The sound recording equipment was the same as used in Example 1. A library of background noise sounds was first generated by recording inside an empty greenhouse for several days. A classifier based on a convolutional neural network (CNN) was trained, using the pre-recorded greenhouse noise and pre-recorded tomato sounds recorded in an acoustic isolation box, to distinguish between tomato-generated sounds and greenhouse noise with a balanced accuracy score of 99.7%. The balanced accuracy score is a percentage score calculated by subtracting from a theoretical perfect accuracy rate of 100% a balanced failure rate calculated as the mean of the false positive rate and the false negative rate. An analysis of 1622 greenhouse noise recordings and 1378 tomato plant-generated sound recordings with the trained CNN classifier resulted in a false positive rate of 0.4%, in which the classifier misidentified greenhouse noise as a plant-generated sound, and a false negative rate of 0.2%, in which the classifier misidentified plant-generated sounds as greenhouse noise, giving a balanced failure rate of 0.3%. A confusion matrix of the above-noted results of analyzing the sounds to detect greenhouse noise with the CNN classifier is shown in Fig. 6A. An SVM classifier trained on the same data was also able to distinguish between tomato-generated sounds and greenhouse noise with similar accuracy. A successfully trained classifier capable of accurately detecting greenhouse noise as described hereinabove may be generically referred to herein as a “greenhouse noise-detecting classifier” (“GND classifier”).
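Expressed as a short calculation, the balanced accuracy definition above, applied to the reported rates, recovers the 99.7% score; this sketch is illustrative only:

    def balanced_accuracy(fpr_pct, fnr_pct):
        # Balanced accuracy = 100% minus the balanced failure rate,
        # the mean of the false positive and false negative rates.
        return 100.0 - (fpr_pct + fnr_pct) / 2.0

    # Reported rates for the CNN GND classifier: FPR 0.4%, FNR 0.2%.
    print(balanced_accuracy(0.4, 0.2))  # -> 99.7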
[0050] Subsequently, 1-hour recordings of drought-stressed and normally irrigated tomato plants were made in the greenhouse. As in Example 1, two microphones were directed at each plant stem from a distance of 10 cm, and recording on a microphone was triggered when a sound exceeded 2% of the microphone's maximum dynamic range. In subsequent analysis, only sounds that were recorded by both microphones were considered a “plant sound”. The sounds made by the normally-irrigated plants were recorded 1 day after irrigation. Sounds made by the drought-stressed plants were recorded after the plants had been left unirrigated for the preceding 5 days. The recordings were filtered using a CNN-based GND classifier to remove greenhouse noise, and the condition of the plants from which the recordings were made was determined to be drought-stressed or normally irrigated based on the count or frequency of the plant-generated sounds in the hour-long recordings filtered by the GND classifier. Filtered recordings containing 3 or more tomato plant-generated sounds during the 1-hour recording duration were classified as being recorded from drought-stressed plants, and filtered recordings containing 2 or fewer tomato plant-generated sounds during the 1-hour recording duration were classified as being recorded from normally irrigated plants. The above-described classification process resulted in a balanced accuracy score of approximately 84.4% for distinguishing between 25 normally irrigated (control) plant recordings and 26 drought-stressed plant recordings. The analysis based on sound count resulted in a false positive rate of 12%, in which recordings from normally irrigated tomato plants were misidentified as being from drought-stressed tomato plants, and a false negative rate of 19.2%, in which recordings from drought-stressed tomato plants were misidentified as being from normally-irrigated tomato plants. A confusion matrix of the above-noted results of analyzing the count of plant-generated sounds to distinguish normally-irrigated from drought-stressed plants is shown in Fig. 6B.
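A minimal sketch (not from the disclosure) of the count-threshold rule, with a check that the reported error rates reproduce the stated score; the 3-of-25 and 5-of-26 error counts are inferred from the reported 12% and 19.2% rates:

    def irrigation_status(n_sounds, threshold=3):
        # Count rule from Example 2: 3 or more plant sounds in a filtered
        # 1-hour recording -> drought-stressed; 2 or fewer -> normally irrigated.
        return "drought-stressed" if n_sounds >= threshold else "normally irrigated"

    # Reported errors: 12% FPR (3 of 25 control recordings) and
    # 19.2% FNR (5 of 26 drought recordings); balanced accuracy:
    print(100.0 - (100 * 3 / 25 + 100 * 5 / 26) / 2)  # -> ~84.4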
[0051] There is therefore provided, in accordance with an embodiment of the disclosure, a system for monitoring plants, the system comprising one or more sound collectors configured to receive and generate signals based on sounds produced by one or more plants; and a computer system configured to receive and process the signals to provide a profile of the one or more plants producing the sounds.
[0052] In an embodiment of the disclosure, the plant profile comprises at least one or both of a plant identity and a plant status of the one or more plants. Optionally, the plant status comprises at least one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
[0053] In an embodiment of the disclosure, the computer system is configured to generate at least one feature vector having components to represent the sounds. Optionally, the components of a feature vector of the at least one feature vector comprise at least one or any combination of more than one of a fundamental frequency, a max intensity, a mean intensity, a variance of intensity, a duration of the sounds, a frequency or count of incidence of the sounds, a variance of intervals between sounds, and an inter-quartile range of sound duration. Optionally, the components of a feature vector of the at least one feature vector comprise one or more features generated by an algorithm that reduces dimensionality of a signal that operates on the sounds. Optionally, the one or more features are generated by a Mel Frequency Cepstral Coefficient (MFCC) algorithm or a function thereof, a wavelet scattering convolution network, a stacked autoencoder, or an embedding algorithm. Optionally, the one or more features are generated by a scattering convolution network. Optionally, the computer system is configured to process the at least one feature vector using at least one classifier to profile the one or more plants producing the sounds. Optionally, the at least one classifier comprises at least one or any combination of more than one of an artificial neural network (ANN), a support vector machine (SVM), Linear Predictive Coding (LPC), and/or a K Nearest Neighbors (KNN) classifier.
[0054] In an embodiment of the disclosure, an ANN is used as a feature extractor that filters out non-plant-generated sounds from the recorded signal.
[0055] In an embodiment of the disclosure, the one or more sound collectors comprises an array of sound collectors. Optionally, the computer system is configured to operate the array of sound collectors as a phased array to receive plant sounds from a limited spatial region. Optionally, the computer system is configured to operate the array of sound collectors as an array of speakers. Optionally, the computer system is configured to operate the array of speakers as a phased array of speakers to direct sound to a limited spatial region. Optionally, the sound is directed responsive to the plant profile of the one or more plants, the one or more plants optionally being located within the limited spatial region.
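A conventional way to realize the receive-side phased-array behavior described above is delay-and-sum beamforming. The sketch below is a generic illustration, not the disclosure's specific method; it assumes free-field propagation at 343 m/s, and the array geometry and focus point are caller-supplied:

    import numpy as np

    C = 343.0  # assumed speed of sound in air, m/s

    def delay_and_sum(signals, mic_positions, focus_point, fs):
        # Steer an array of receivers toward focus_point by advancing each
        # channel by its relative propagation delay and averaging, so sounds
        # from the focused region add coherently.
        #   signals:       (n_mics, n_samples) time-aligned recordings
        #   mic_positions: (n_mics, 3) microphone coordinates, meters
        #   focus_point:   (3,) coordinates of the region to listen to
        dists = np.linalg.norm(mic_positions - focus_point, axis=1)
        delays = (dists - dists.min()) / C              # relative delays, s
        shifts = np.round(delays * fs).astype(int)      # in samples
        n = signals.shape[1] - shifts.max()
        return np.mean([s[k:k + n] for s, k in zip(signals, shifts)], axis=0)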
[0056] In an embodiment of the disclosure, the computer system is configured to initiate administration of an agricultural treatment to the one or more plants responsive to the plant profile of the one or more plants. Optionally, the agricultural treatment is selected from the group consisting of water, a pesticide, an herbicide or a fertilizer.

[0057] In an embodiment of the disclosure, a sound collector of the one or more sound collectors is mounted to a mobile platform. The mobile platform is optionally a ground-based vehicle or an aerial vehicle.
[0058] In an embodiment of the disclosure, the sound collector is a microphone or a piezoelectric transducer.
[0059] In an embodiment of the disclosure, the sound collector is configured to generate the signal responsive to sounds within a frequency range of from about 20 kHz to about 100 kHz. Optionally, the frequency range is from about 40 kHz to about 60 kHz.
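A front end honoring these ranges might band-pass filter the signal before feature extraction. A minimal SciPy sketch, with the 40-60 kHz band taken from the narrower optional range above and the filter order chosen arbitrarily for illustration:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def bandpass(x, fs, lo=40_000.0, hi=60_000.0, order=6):
        # Restrict a recording to the band in which plant sounds were observed;
        # fs must exceed twice the upper cutoff (e.g. the 500 kHz rate above).
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfilt(sos, np.asarray(x, dtype=float))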
[0060] There is also provided a method comprising: receiving sounds that plants make; generating signals based on the sounds; and processing the signals to provide a profile of one or more plants producing the sounds. Optionally, the plant profile comprises at least one or both of a plant identity and a plant status. Optionally, the plant status comprises at least one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
[0061] In an embodiment of the disclosure, the method comprises generating at least one feature vector having components to represent the sounds. Optionally, the components of a feature vector of the at least one feature vector comprise at least one or any combination of more than one of a fundamental frequency, a max intensity, a mean intensity, a variance of intensity, a duration of the sounds, a frequency or count of incidence of the sounds, a variance of intervals between sounds, and an inter-quartile range of sound duration. Optionally, the components of a feature vector of the at least one feature vector comprise one or more features generated by an algorithm that reduces dimensionality of a signal that operates on the sounds. Optionally, the one or more features are generated by a Mel Frequency Cepstral Coefficient (MFCC) algorithm or a function thereof, a wavelet scattering convolution network, a stacked autoencoder, or an embedding algorithm. Optionally, the one or more features are generated by a scattering convolution network. Optionally, the at least one feature vector is processed using at least one classifier to profile the one or more plants producing the sounds. Optionally, the at least one classifier comprises at least one or any combination of more than one of an artificial neural network (ANN), a support vector machine (SVM), Linear Predictive Coding (LPC), and/or a K Nearest Neighbors (KNN) classifier.
[0062] In an embodiment of the disclosure, an ANN is used as a feature extractor that filters out non-plant-generated sounds from the recorded signal.

[0063] In an embodiment of the disclosure, the signals are generated responsive to sounds within a frequency range of from about 20 kHz to about 100 kHz. Optionally, the frequency range is from about 40 kHz to about 60 kHz.
[0064] In an embodiment of the disclosure, the method comprises initiating administration of an agricultural treatment to the one or more plants responsive to the plant profile. Optionally, the agricultural treatment is selected from the group consisting of water, a pesticide, an herbicide or a fertilizer.
[0065] In the description and claims of the present application, each of the verbs “comprise”, “include” and “have”, and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements or parts of the subject or subjects of the verb.
[0066] Descriptions of embodiments of the disclosure in the present application are provided by way of example and are not intended to limit the scope of the disclosure. The described embodiments comprise different features, not all of which are required in all embodiments. Some embodiments utilize only some of the features or possible combinations of the features. Variations of embodiments of the disclosure that are described, and embodiments comprising different combinations of features noted in the described embodiments, will occur to persons of the art. The scope of the invention is limited only by the claims.

Claims

1. A system for monitoring plants, the system comprising:
one or more sound collectors configured to receive and generate signals based on sounds produced by one or more plants; and
a computer system configured to receive and process the signals to provide a profile of the one or more plants producing the sounds.
2. The system according to claim 1 wherein the plant profile comprises at least one or both of a plant identity and a plant status of the one or more plants.
3. The system according to claim 2 wherein the plant status comprises at least one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
4. The system according to any of claims 1-3 wherein the computer system is configured to generate at least one feature vector having components to represent the sounds.
5. The system according to claim 4 wherein the components of a feature vector of the at least one feature vector comprise at least one or any combination of more than one of a fundamental frequency, a max intensity, a mean intensity, a variance of intensity, a duration of the sounds, a frequency or count of incidence of the sounds, a variance of intervals between sounds, and an inter-quartile range of sound duration.
6. The system according to claim 4 or claim 5 wherein the components of a feature vector of the at least one feature vector comprise one or more features generated by an algorithm that reduces dimensionality of a signal that operates on the sounds.
7. The system according to claim 6, wherein the one or more features are generated by a Mel Frequency Cepstral Coefficient (MFCC) algorithm or a function thereof, a wavelet scattering convolution network, a stacked autoencoder, or an embedding algorithm.
8. The system according to claim 6, wherein the one or more features are generated by a scattering convolution network.
9. The system according to any of claims 4-8 wherein the computer system is configured to process the at least one feature vector using at least one classifier to profile the one or more plants producing the sounds.
10. The system according to claim 9 wherein the at least one classifier comprises at least one or any combination of more than one of an artificial neural network (ANN), a support vector machine (SVM), Linear Predictive Coding (LPC), and/or a K Nearest Neighbors (KNN) classifier.
11. The system according to any of the preceding claims wherein an ANN is used as a feature extractor that filters out non-plant-generated sounds from the recorded signal.
12. The system according to any of the preceding claims wherein the one or more sound collectors comprises an array of sound collectors.
13. The system according to claim 12 wherein the computer system is configured to operate the array of sound collectors as a phased array to receive plant sounds from a limited spatial region.
14. The system according to claim 13 wherein the computer system is configured to operate the array of sound collectors as an array of speakers and the computer system is configured to operate the array of speakers as a phased array of speakers to direct sound to the limited spatial region responsive to the profile of the plants within the limited spatial region.
15. The system according to any of the preceding claims wherein the computer system is configured to initiate administration of an agricultural treatment to the one or more plants responsive to the plant profile of the one or more plants.
16. The system according to claim 15, wherein the agricultural treatment is selected from the group consisting of water, a pesticide, an herbicide or a fertilizer.
17. The system according to any of the preceding claims wherein a sound collector of the one or more sound collectors is mounted to a mobile platform.
18. The system according to claim 17, wherein the mobile platform is a ground-based vehicle or an aerial vehicle.
19. The system according to any one of the preceding claims, wherein the sound collector is a microphone or a piezoelectric transducer.
20. The system according to any one of the preceding claims, wherein the signal is generated responsive to sounds within a frequency range of from about 20 kHz to about 100 kHz.
21. The system according to claim 20, wherein the frequency range is from about 40 kHz to about 60 kHz.
22. A method for monitoring plants, the method comprising:
receiving sounds that plants make;
generating signals based on the sounds; and
processing the signals to provide a profile of one or more plants producing the sounds.
23. The method according to claim 22 wherein the plant profile comprises at least one or both of a plant identity and a plant status.
24. The method according to claim 23 wherein the plant status comprises at least one or any combination of more than one of a state of hydration, structural integrity, plant maturity, density of foliage, level of pathogen infection, herbivore damage, and/or density of fruit.
25. The method according to any one of claims 22-24, comprising initiating administration of an agricultural treatment to the one or more plants responsive to the plant profile.
26. The method according to claim 25, wherein the agricultural treatment is selected from the group consisting of water, a pesticide, an herbicide or a fertilizer.
PCT/IL2019/050932 2018-08-20 2019-08-20 Plant-monitor WO2020039434A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/270,016 US20210325346A1 (en) 2018-08-20 2019-08-20 Plant-monitor
EP19850938.2A EP3830662A4 (en) 2018-08-20 2019-08-20 Plant-monitor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862719709P 2018-08-20 2018-08-20
US62/719,709 2018-08-20

Publications (1)

Publication Number Publication Date
WO2020039434A1 true WO2020039434A1 (en) 2020-02-27

Family

ID=69591411

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2019/050932 WO2020039434A1 (en) 2018-08-20 2019-08-20 Plant-monitor

Country Status (3)

Country Link
US (1) US20210325346A1 (en)
EP (1) EP3830662A4 (en)
WO (1) WO2020039434A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110288689A1 (en) * 2008-12-05 2011-11-24 National University Corporation Saitama University Evaluation method for botanical-integrity of vascular plant, irrigating method to vascular plant, film electret sensor and film ecm array
CN102394064A (en) * 2011-08-30 2012-03-28 浙江大学 Combined type plant audio frequency regulation and control method
US20120209612A1 (en) * 2011-02-10 2012-08-16 Intonow Extraction and Matching of Characteristic Fingerprints from Audio Signals
US20180017965A1 (en) * 2015-01-21 2018-01-18 Ramot At Tel-Aviv University Ltd. Agricultural robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013219474A1 (en) * 2013-09-26 2015-03-26 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. DEVICE AND METHOD FOR OBTAINING INFORMATION ON ONE OR MORE LIVES

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110288689A1 (en) * 2008-12-05 2011-11-24 National University Corporation Saitama University Evaluation method for botanical-integrity of vascular plant, irrigating method to vascular plant, film electret sensor and film ecm array
US20120209612A1 (en) * 2011-02-10 2012-08-16 Intonow Extraction and Matching of Characteristic Fingerprints from Audio Signals
CN102394064A (en) * 2011-08-30 2012-03-28 浙江大学 Combined type plant audio frequency regulation and control method
US20180017965A1 (en) * 2015-01-21 2018-01-18 Ramot At Tel-Aviv University Ltd. Agricultural robot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ACEVEDO, MIGUEL A. ET AL.: "Automated classification of bird and amphibian calls using machine learning: A comparison of methods", ECOLOGICAL INFORMATICS, vol. 4, no. 4, 30 September 2009 (2009-09-30), pages 206 - 214, XP026585456, DOI: 10.1016/j.ecoinf.2009.06.005 *
DOSTÁL, P. ET AL.: "Detection of Acoustic Emission Characteristics of Plant According to Water Stress Condition", ACTA UNIVERSITATIS AGRICULTURAE ET SILVICULTURAE MENDELIANAE BRUNENSIS, vol. 64, no. 5, 30 October 2016 (2016-10-30), pages 1465 - 71, XP055687907 *
See also references of EP3830662A4 *
VERGEYNST, LIDEWEI L. ET AL.: "Deciphering acoustic emission signals in drought stressed branches: the missing link between source and sensor", FRONTIERS IN PLANT SCIENCE, vol. 6, 2 July 2015 (2015-07-02), pages 494, XP055687910 *

Also Published As

Publication number Publication date
US20210325346A1 (en) 2021-10-21
EP3830662A4 (en) 2022-04-27
EP3830662A1 (en) 2021-06-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19850938; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019850938; Country of ref document: EP; Effective date: 20210301)