WO2024072357A1 - A field crop efficiency detection method - Google Patents
- Publication number: WO2024072357A1
- Application: PCT/TR2023/051035
- Authority
- WO
- WIPO (PCT)
Classifications
- G01N33/0098: Investigating or analysing materials by specific methods; plants or trees
- G06N20/00: Machine learning
- G06N3/02: Neural networks; G06N3/08: Learning methods
- G06Q50/02: ICT specially adapted for agriculture, fishing, forestry, mining
- G06T7/00: Image analysis
Definitions
- an absolute error is detected in each epoch and, if the error is higher than the desired value, training proceeds to the next epoch. If the desired error value is reached in any of the epochs, the training is terminated and no new layer is added.
- the data set is divided according to a predetermined number of epochs. When the maximum number of epochs is reached and the desired error value has still not been met, a new layer is added.
- the epoch value may be selected as 30.
- the maximum number of layers to be added is determined before processing.
- the embodiment uses a decreasing number of nodes from the input layer to the output layer and is therefore shaped like a pyramid. Since the input image size of the network is mxa, the number of filters in the last convolutional layer, which determines the attributes, is also selected as m.
- the maximum number of layers needed is defined as C and is calculated using a formula in which the total number of input nodes appears in the numerator and the number of nodes in the output layer in the denominator. Here, the size of an image in the data set is mxa, and the smallest image size obtained at the end of the layers is nxb.
- in the given example, the first term, 64x64, is the total number of input nodes in the numerator, the number of attribute filters is equal to 64, the smallest image size obtained at the end of the layers is nxn with n equal to 4, and the resulting maximum number of layers C is equal to 3.
- once the maximum number of layers has been determined, the number of layers already added can also be compared with this maximum before each new layer is added; if they are equal, the training is terminated.
- the maximum number of layers obtained can be assigned as "C"
- the number of layers in the layer sequence can be assigned as "N"
- by capping N at C in advance, the same result can be obtained with a single check instead of two checks in each cycle; besides, the layers of the sequence beyond the first C can optionally be removed from the sequence.
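One possible reading of this single-check optimization can be sketched in Python; the sequence contents and the value of C below are hypothetical, and capping at min(N, C) is an assumption about the intended logic rather than the patent's own implementation:

```python
def cap_layer_sequence(layer_sequence, c):
    """Assign N = min(N, C) once before training, so each cycle needs a
    single comparison against the capped bound instead of two separate
    checks; the layers beyond the first C entries can then be dropped
    from the sequence up front."""
    n = min(len(layer_sequence), c)
    return layer_sequence[:n]

# Hypothetical 5-layer sequence with a computed maximum of C = 3:
sequence = ["conv", "max-pool", "conv", "relu", "max-pool"]
capped = cap_layer_sequence(sequence, 3)   # ["conv", "max-pool", "conv"]
```

With the sequence capped, the training loop only compares the number of layers added against one bound.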
- colored field images of mxm pixels are accepted in the image input layer for the artificial neural network training process.
- numerical filters are repeatedly applied to this field image, revealing an activation map called a feature map, which shows the positions and strength of a feature detected in the image.
- Maximum pooling 2x2 is used and thus the calculation cost is reduced.
- a higher number of numerical filters than the number of filters available in the previous layer in the next convolutional layer are repeatedly applied to these new images, revealing a new activation map called the feature map, which shows the positions and strength of a feature detected in the images obtained in the previous layer.
- ReLU was used as the activation function in the hidden intermediate layer (ReLU layer) of the new images obtained.
- F(x) = max(0, x).
- the number of parameters to be learned is reduced again by using the maximum pooling 2x2 size.
- Numerical filters, which are odd in size, twice as many as the number of filters available in the previous layer in the next convolutional layer, are repeatedly applied to this new image, revealing a new activation map called the feature map, which shows the positions and strength of a feature detected in the images obtained in the previous layer.
- the activation function was applied to the new images obtained in the hidden intermediate layer (ReLU layer).
- the number of parameters to be learned is reduced again by using the maximum pooling 2x2 size.
- a flattening layer was then used to convert the images into a one-dimensional array.
- ReLU was used as the activation function in the hidden intermediate layer of the one-dimensional array obtained.
- a certain amount of dilution was applied to the data obtained to prevent overfitting.
- a new fully connected layer containing a smaller number of nodes than the previous layer was then used for the one-dimensional data.
- ReLU was used as the activation function in the hidden intermediate layer (ReLU layer) of the new one-dimensional array obtained.
- in the new dilution layer, a dilution rate smaller than the previous one was applied to the data obtained, again to prevent overfitting.
- the diluted data was then combined with a new fully connected layer containing a smaller number of nodes than the previous layer.
- ReLU was used as the activation function in the hidden intermediate layer of the new data array obtained, and this was combined with a fully connected output layer with 1 node to predict the yield at the end of the neural network.
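The layer walk-through above can be followed with a small shape-tracking sketch. The 64x64 input, the starting filter count, and the assumption that the odd-sized filters use size-preserving padding are all illustrative, not taken from the source:

```python
# Hypothetical stack mirroring the walk-through: three convolution stages
# (each doubling the filter count), 2x2 maximum pooling after each stage,
# then flattening into a one-dimensional array for the dense layers.
STACK = ["conv", "pool", "conv", "relu", "pool", "conv", "relu", "pool", "flatten"]

def flattened_length(stack, size, filters):
    """Track the feature-map size and filter count through the stack and
    return the length of the one-dimensional array after flattening."""
    for layer in stack:
        if layer == "conv":
            filters *= 2          # next conv layer: twice as many filters
        elif layer == "pool":
            size //= 2            # 2x2 maximum pooling halves each dimension
        elif layer == "flatten":
            return size * size * filters
    return None

# A hypothetical 64x64 input with 16 filters before the first stage:
length = flattened_length(STACK, 64, 16)   # 8 * 8 * 128 = 8192
```

The sketch makes the pyramid shape visible: spatial size shrinks (64, 32, 16, 8) while the filter count grows, and the fully connected layers then reduce the node count toward the single output node.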
- the training method of the artificial neural network for yield prediction with computer application can be provided as a program.
- the program comprises instructions for performing the steps of any one of claims 1-16.
- the computer-readable medium containing the instructions to perform the method steps according to any one of claims 1-16 can be used for the training of the artificial neural network for yield prediction.
- the artificial neural network trained according to the processes given above is used in a computer applied prediction method for yield prediction.
- a new field image is entered as an input into the artificial neural network trained with the steps of pre-determining the N layers, applying the first layer to the data set containing the field images and finding the prediction absolute error, comparing the obtained prediction absolute error with a predetermined value, applying the next layer if said absolute error is greater than the predetermined value and the number of layers applied is less than N, and terminating the training if the absolute error is less than the predetermined value or the number of layers applied is equal to N.
- in the prediction process, a Sigmoid function is preferably used as the activation function of the output layer.
- Sigmoid(x) expresses the yield data as a decimal number between 0 and 1. The software denormalizes this value to reach the actual yield value.
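The output activation and denormalization step can be sketched as follows; the linear min/max denormalization and its yield bounds are assumptions, since the source only states that the software converts the (0, 1) value back to an actual yield:

```python
import math

def sigmoid(x):
    """Output activation: maps the raw network output into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def denormalize(y, min_yield, max_yield):
    """Map the normalized (0, 1) prediction back to an actual yield value
    (hypothetical linear scaling between assumed yield bounds)."""
    return min_yield + y * (max_yield - min_yield)

normalized = sigmoid(0.0)                        # 0.5
actual = denormalize(normalized, 0.0, 500.0)     # 250.0 under the assumed bounds
```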
- the system arranged to perform the computer-applied prediction method for yield prediction comprises the following:
- an aircraft (10), preferably an unmanned aerial vehicle, in particular a drone,
- a processing component (31) for executing the artificial neural network trained according to claim 1 and creating a prediction for the image provided by the aircraft (10).
- the computer applied prediction method can be provided as a program.
- the program comprises instructions for performing the above prediction method steps.
- the computer-readable medium containing the instructions to perform the prediction method steps given above can be used for yield prediction.
Abstract
The invention relates to a machine learning training method for predicting the yield of the field based on the field images in which the field plant is planted, and the prediction method with the trained artificial intelligence and a system for carrying out these training and prediction methods.
Description
A FIELD CROP EFFICIENCY DETECTION METHOD
Technical Field
The invention relates to a machine learning training method for predicting the yield of the field based on the field images in which the field plant is planted, and the prediction method with the trained artificial intelligence and a system for carrying out these training and prediction methods.
State of the Art
Accurate determination of the productivity parameters of the fields is of great importance and therefore, genetic engineers try to determine the superior varieties in terms of yield or the diseases in the field plants that affect the yield to ensure maximum efficiency. This process is usually carried out manually and therefore causes extra cost and time loss.
In recent years, image processing methods have also started to be used to determine field yield. Within these methods, seed companies, universities, and institutes try to predict yield by calculating crop vegetation indices and leaf area index using satellite images. The collected images are used in machine learning, and the trained artificial intelligence predicts the yield from new images. However, predefined data collection times and cloud blockages reduce the efficiency of satellite images taken of the sites.
In the study of Haolu Li et al., titled "Identifying Cotton Fields from Remote Sensing Images Using Multiple Deep Learning Networks", a method for examining cotton fields with deep learning methods is explained. Herein, yield prediction is made from the images taken from the cotton field. The images taken from unmanned aerial vehicles are used in the system. The structure has convolutional layers that increase network depth. Preferably, a multi-level interconnected convolutional layer is used. Filters whose dimensions are odd numbers are used in these convolutional layers. A feature map is created in the layers, and the flow progresses using the properties obtained from each layer. The dimensions are reduced by using maximum pooling.
Document WO2021221704A1 discloses a system for analyzing a plurality of aerial images for abnormalities associated with growing one or more species or varieties of crop. The images obtained from the unmanned aerial vehicle are analyzed by passing through layered structures. A CNN is applied here to train the artificial intelligence. In addition, ReLU and Sigmoid functions are used in the layers. The system given in this document makes it possible to analyze how much the crops have grown and other information about the crop.
The system described in US10721859B2 relates to applications for improvement and monitoring of crops. The document describes the integration of the methods applied for monitoring and improving the crops with agricultural tools. Many layers are used in monitoring and analyzing the crops. In these layers, applications such as ReLU and Sigmoid activation functions, dilution, maximum pooling, and CNNs are applied together.
Multi-layered learning processes are used in these methods, and therefore the processing cost increases in both the training and prediction processes. In addition, an artificial intelligence trained where the general characteristics of each field, or at least of the field group in a certain region, differ does not give correct results for every field or region, and different training methods are generally needed.

All the problems mentioned above have made it necessary to make an innovation in the relevant field.
Objects and Brief Description of the Invention
The main object of the invention is to optimize training process costs in artificial intelligence trained for a field yield prediction.
The object of the invention is to provide a training method of artificial intelligence that can be easily adapted for different areas.
It was ensured that the trained artificial intelligence selected the necessary attributes for field crop yield prediction from the aircraft images.
The present invention relates to a computer-applied training method for an artificial neural network for predicting field plant yield, meeting the above-mentioned requirements. Accordingly, the method comprises the steps of pre-determining the N layers; applying the first layer to the data set containing the field images of the field plant and finding the prediction absolute error; comparing the obtained prediction absolute error with a predetermined value; applying the next layer if said absolute error is greater than the predetermined value and the number of layers applied is less than N; and terminating the training if the absolute error is less than the predetermined value or the number of layers applied is equal to N.
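The training steps above can be sketched as a loop; `train_and_evaluate` is a hypothetical callback standing in for one training pass over the data set that returns the resulting prediction absolute error:

```python
def incremental_training(layer_sequence, data_set, target_error, train_and_evaluate):
    """Apply the N predetermined layers one at a time: after each added
    layer, train and find the prediction absolute error; stop early when
    the error drops below the predetermined value, otherwise continue
    until all N layers have been applied."""
    model, error = [], float("inf")
    for layer in layer_sequence:            # at most N layers
        model.append(layer)                 # apply the next layer
        error = train_and_evaluate(model, data_set)
        if error < target_error:            # desired precision reached
            break                           # terminate the training
    return model, error

# Toy usage: pretend each extra layer halves the error.
layers = ["conv", "max-pool", "conv", "relu", "max-pool", "dense"]
model, err = incremental_training(
    layers, None, target_error=0.1,
    train_and_evaluate=lambda m, d: 1.0 / 2 ** len(m))
# stops after 4 layers with an error of 0.0625
```

In the toy run, the network terminates with fewer than N layers because the error target is met first, which is exactly the early-stopping behaviour the claimed method relies on.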
Thus, thanks to the different filters applied in each layer, the attributes are selected automatically and the minimum number of layers providing the desired precision of the artificial neural network is obtained; accordingly, the processing cost is reduced and the prediction speed is increased both during training and during prediction.
Although this method is especially suitable for artificial neural networks to be used for yield prediction in cotton fields, it can also be used in the training of artificial neural networks to be used for pure line prediction in corn fields and fungal disease prediction in sunflower fields.
Definitions of Figures Describing the Invention
The figures and related descriptions used to better explain the device developed by this invention are as follows.
Figure 1. A representative schematic view of the system subject to the invention
Figure 2. A flow diagram of the inventive method
Figure 3. A flow diagram of error checking and layer-adding
Figure 4. A flow diagram of an embodiment of layer-adding
Definitions of Components/Pieces/Parts of the Invention
To better explain the device developed by this invention, the parts and pieces in the figures are numbered and the corresponding numbers are given below.
10. Aircraft
11. Image receiving element
20. Server
30. Processing unit
31. Processing component
32. Memory component
40. Output unit
H. Communication unit
Detailed Description of the Invention
The subject matter of the invention relates to a machine learning training method for predicting yield based on the field images in which the field plant is planted, and the prediction method with the trained artificial intelligence and a system for carrying out these training and prediction methods.
As shown in Figure 1, the training method of the invention is designed to train artificial neural networks for a prediction method. Herein, the yield prediction of the relevant area is carried out with a new cotton field image fed into the artificial neural network trained with a data set with field images in which the field plant is planted.
The field mentioned here can be a cotton field, a sunflower field, or a corn field.
There is a need for field images provided from a certain altitude, preferably from an aircraft (10), in order to carry out the prediction. The aircraft (10) is preferably an unmanned aerial vehicle. Unmanned aerial vehicles, especially drones, eliminate problems such as cloud blockage by providing images from low altitude. The aircraft (10) further comprises an image-receiving element (11), preferably a camera, in order to provide said images. The image-receiving element (11) can provide both the input image for prediction and the images that will form the necessary data set for training.
The aircraft (10) further comprises a communication unit (H). The communication unit (H) is arranged to wirelessly send the images provided by the image receiving element (11) to a server (20) or a processing unit (30). Another communication unit (H) to communicate with the communication unit (H) may be provided on the server (20) and/or the processing unit (30).
The server (20) is arranged to store the training data set and, if necessary, send it to the processing unit (30). The server (20) may transmit the data set to the processing unit (30) through the communication unit (H) it has. In addition, it is possible to arrange embodiments in which the server (20) and the processing unit (30) are integrated.
The processing unit (30) includes at least one processing component (31). Herein, the processing component (31) may be configured to train the artificial neural network and/or to execute the trained artificial neural network. While it is possible to carry out both operations (training and prediction) through the same processing component (31), a separate processing component (31) may also be used for each of the operations.
In addition, the processing unit (30) may also include a memory component (32). The data set may be stored in this memory component (32); in this case, the use of the server (20) is not necessary. In addition, snapshots provided from the aircraft (10) and/or artificial neural network outputs may be stored in the memory component (32).
The artificial neural network outputs may be instantaneously used through an output unit (40). The output unit (40) comprises at least one screen. The output unit (40) may preferably be a computer, mobile phone or tablet. In addition, or alternatively, the instantaneous artificial neural network outputs may be sent to the server to be stored or transmitted to an output unit (40). This storage and transmission process can also be provided with the memory component (32).
As stated previously and shown in Figure 2, an artificial neural network is used for yield prediction. The yield prediction mentioned is the yield prediction for cotton fields, the prediction of pure lines with 100% homozygous value for corn fields, and the fungal disease prediction for sunflower fields. A method with multiple layers is used for the training of this artificial neural network.
In said training method, the aim is to reach a desired, or at least acceptable, prediction absolute error rate using at most a maximum number of layers. A predetermined layer sequence is used, and the number of layers in the sequence is taken as N, where N is an integer. At least one of the layers in the sequence, preferably the first, is a convolutional layer. In addition, the layer sequence may further comprise at least one, preferably all, of the maximum pooling, dilution, ReLU and fully connected layers.
The maximum pooling layers are used herein to reduce the number of parameters to be learned, and the dilution layers are used to prevent overfitting. The ReLU layer is preferred because it is non-linear, can be computed very simply and quickly, and its first-order derivative can be expressed in its own terms. In addition, the fully connected layers are used together with a flattening layer for converting the images into a one-dimensional array.
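The ReLU function and its first-order derivative mentioned above can be sketched in a few lines of Python (a minimal illustration only, not part of the patented method):

```python
def relu(x):
    """ReLU activation: F(x) = Maximum(0, x)."""
    return x if x > 0 else 0.0

def relu_derivative(x):
    """First-order derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return 1.0 if x > 0 else 0.0
```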
The multi-layer sequence (N = 19) shown in Figure 4 may be used as the layer sequence. Although filter sizes, dilution rates and number of nodes are given in this structure, they should not be perceived as restrictive.
Basically, in the training method, the prediction absolute error value is determined after applying a layer to the data set. The determined prediction absolute error value is compared to the predetermined desired value. If, as a result of the comparison, the prediction absolute error value is lower than the desired value, the training is terminated. In the opposite case, that is, if the prediction absolute error value is higher than the desired value, the next layer is added and the network is retrained. This layer addition process is continued until the absolute error reaches the desired value or until the layers in the sequence are exhausted, that is, until the number of layers added reaches N.
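The layer-addition loop described above can be sketched as follows. This is a hedged sketch: `train_and_evaluate` is a hypothetical helper, not from the patent, standing in for retraining the current model and returning its prediction absolute error.

```python
def grow_network(layer_sequence, desired_error, train_and_evaluate):
    """Add layers from the predetermined sequence one at a time until the
    prediction absolute error is below the desired value or all N layers
    in the sequence have been applied."""
    model = []                             # layers applied so far
    error = float("inf")
    for layer in layer_sequence:           # at most N = len(layer_sequence) steps
        model.append(layer)                # apply the next layer
        error = train_and_evaluate(model)  # retrain and measure absolute error
        if error < desired_error:          # acceptable error: stop adding layers
            break
    return model, error
```

For example, with a stand-in evaluator whose error falls as layers are added, the loop stops as soon as the error drops below the threshold.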
As shown in Figure 3, the data set is preferably not introduced into the training all at once. The data set is divided into a certain number of parts, which are taken into training in turn. The model is trained on the first piece, its performance is tested, and the weights are updated with backpropagation according to that performance. The model is then retrained with the next training piece and the weights are updated again. This process is repeated in each training step, and the most appropriate weight values for the model are calculated. The training on each piece is called an "epoch".
In the current training method, the absolute error is determined in each epoch, and if the error is higher than the desired value, training proceeds to the next epoch. If the desired error value is reached in any of the epochs, the training is terminated and no new layer is added. Preferably, the data set is divided into a predetermined number of epochs. When the maximum number of epochs is reached and the desired error value is still not met, a new layer is added. Preferably, the epoch value may be selected as 30.
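A minimal sketch of the epoch loop, assuming a hypothetical `train_piece` helper (not named in the patent) that runs backpropagation on one piece of the data set and returns the resulting absolute error; the default of 30 epochs follows the preferred value above:

```python
def train_with_epochs(model, pieces, desired_error, train_piece, max_epochs=30):
    """Train on the data set pieces in turn, one piece per epoch; stop
    early once the desired absolute error is reached, in which case no
    new layer needs to be added."""
    error = float("inf")
    for epoch in range(max_epochs):
        piece = pieces[epoch % len(pieces)]  # next part of the data set
        error = train_piece(model, piece)    # backpropagation + absolute error
        if error < desired_error:            # desired error reached: terminate
            break
    return error
```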
In a preferred embodiment, the maximum number of layers to be added is determined before processing. Here, the network has a decreasing number of nodes from the input layer to the output layer and is therefore pyramid-like. Since the input image size of the network is mxa, the number of filters in the last convolutional layer, which determines the attributes, is also selected as m. The maximum number of layers needed is defined as C, and C is calculated using the following formula:
Herein, mxa is the size of an image in the data set and nxb is the smallest image size obtained at the end of the layers. Preferably m=a and n=b. To illustrate, if m=64 is taken, the first term, 64x64, is the total number of input nodes of the network. The number of nodes in the output layer is given in the denominator. The number of attribute filters is equal to 64. The smallest image size obtained at the end of the layers is nxn, and the value of n is equal to 4. When the above equation is used, the maximum number of layers is equal to 3.
Once the maximum number of layers has been determined, the number of layers already added can be compared with the maximum number of layers before each new layer is added; when the number of layers added equals the maximum number of layers, the training is terminated. Alternatively, the maximum number of layers obtained as "C" can be assigned as the new value of "N", so that the same result is obtained with a single check instead of two checks in each cycle; in addition, all layers of the layer sequence after the first C can optionally be removed from the sequence.
As shown in Figure 4, in a preferred embodiment of the invention, coloured field images of mxm pixels are accepted in the image input layer for the artificial neural network training process. In the first convolutional layer, numerical filters are repeatedly applied to the field image, revealing an activation map called a feature map, which shows the positions and strength of a feature detected in the image. Maximum pooling of 2x2 size is then used, reducing the computation cost. In the next convolutional layer, a higher number of numerical filters than in the previous layer is repeatedly applied to these new images, revealing a new feature map that shows the positions and strength of a feature detected in the images obtained in the previous layer. ReLU is used as the activation function in the hidden intermediate layer (ReLU layer) for the new images obtained; if the value arriving at a node as input is expressed as x, the ReLU activation function is defined as F(x) = Maximum(0, x). The number of parameters to be learned is reduced again by maximum pooling of 2x2 size. In the next convolutional layer, numerical filters that are odd in size and twice as numerous as in the previous layer are repeatedly applied to this new image, revealing a new feature map that shows the positions and strength of a feature detected in the images obtained in the previous layer. The activation function is again applied in the hidden intermediate layer (ReLU layer), and the number of parameters to be learned is reduced once more by maximum pooling of 2x2 size.
A flattening layer is then used to convert the images into a one-dimensional array. ReLU is used as the activation function in the hidden intermediate layer for the one-dimensional array obtained. A certain amount of dilution is applied to the data obtained to prevent overfitting. A fully connected layer containing a smaller number of nodes than the previous layer is then used for the one-dimensional data. ReLU is used as the activation function in the hidden intermediate layer (ReLU layer) for the new one-dimensional array obtained. In a new dilution layer, a lesser amount of dilution than the previous dilution is applied to the data obtained, to prevent renewed overfitting.
The diluted data is then combined with a new fully connected layer containing a smaller number of nodes than the previous layer. ReLU is used as the activation function in the hidden intermediate layer for the new data array obtained. Finally, it is combined with a fully connected output layer with 1 node, which predicts the yield at the end of the neural network.
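The sequence just described can be summarised as a plain layer specification. This is a sketch only: the concrete filter counts, node counts and dilution (dropout) rates below are illustrative assumptions, since the patent states that the values shown in Figure 4 are not restrictive.

```python
# Illustrative N = 19 layer sequence following the description above:
# input -> conv/pool stages with doubling filter counts -> flatten ->
# fully connected layers with decreasing node counts -> 1-node output.
LAYER_SEQUENCE = [
    ("input",   {"size": (64, 64, 3)}),  # coloured mxm field image, m = 64 assumed
    ("conv",    {"filters": 16}),        # first feature map
    ("maxpool", {"size": (2, 2)}),       # reduce computation cost
    ("conv",    {"filters": 32}),        # more filters than the previous conv layer
    ("relu",    {}),                     # hidden intermediate (ReLU) layer
    ("maxpool", {"size": (2, 2)}),
    ("conv",    {"filters": 64}),        # odd-sized filters, twice the previous count
    ("relu",    {}),
    ("maxpool", {"size": (2, 2)}),
    ("flatten", {}),                     # images -> one-dimensional array
    ("dense",   {"nodes": 128}),
    ("relu",    {}),
    ("dropout", {"rate": 0.5}),          # dilution against overfitting
    ("dense",   {"nodes": 64}),          # fewer nodes than the previous layer
    ("relu",    {}),
    ("dropout", {"rate": 0.25}),         # lesser amount of dilution
    ("dense",   {"nodes": 32}),
    ("relu",    {}),
    ("dense",   {"nodes": 1}),           # single output node: the yield prediction
]
```

Note the pyramid-like shape: node counts shrink toward the single output node, matching the preferred embodiment.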
The training method of the artificial neural network for yield prediction with computer application can be provided as a program. The program comprises instructions for performing the steps of any one of claims 1-16.
In addition, the computer-readable medium containing the instructions to perform the method steps according to any one of claims 1-16 can be used for the training of the artificial neural network for yield prediction.
The artificial neural network trained according to the processes given above is used in a computer applied prediction method for yield prediction.
Accordingly, in the prediction method,
- the field image containing the field plant, provided from an aircraft (10), is entered as an input into the artificial neural network trained with the steps of: pre-determining the N layers; applying the first layer to the data set containing the field images and finding the prediction absolute error; comparing the obtained prediction absolute error with a predetermined value; applying the next layer if said absolute error is greater than the predetermined value and the number of layers applied is less than N; and terminating the training if the absolute error is less than the predetermined value or the number of layers applied is equal to N,
- yield prediction is performed by a processing unit, depending on the field image, using the artificial neural network.
In the prediction process, a Sigmoid function is preferably used as the activation function. It can be defined as follows:
Sigmoid(x) = 1 / (1 + e^(-x))
Sigmoid(x) defines the yield data as a decimal number between 0 and 1. The software denormalizes this value to reach the actual yield value.
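A minimal sketch of this output step, assuming min-max normalisation of the yield values; the bounds `min_yield` and `max_yield` are illustrative assumptions, not taken from the patent.

```python
import math

def sigmoid(x):
    """Sigmoid activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def denormalize(y, min_yield, max_yield):
    """Map the (0, 1) network output back to an actual yield value."""
    return min_yield + y * (max_yield - min_yield)
```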
The system arranged to perform a computer-applied prediction method for yield prediction comprises the following:
- an aircraft (10), preferably an unmanned aerial vehicle, in particular a drone,
- a processing component (31) for executing the artificial neural network trained according to claim 1 and creating a prediction for the image provided by the aircraft (10)
- an output unit (40) to show the prediction to the user or a memory component (32) and/or a server (20) to store the prediction.
The computer applied prediction method can be provided as a program. The program comprises instructions for performing the above prediction method steps.
In addition, the computer-readable medium containing the instructions to perform the prediction method steps given above can be used for yield prediction.
Claims
1. A computer-implemented training method for an artificial neural network for field plant yield prediction, characterized in that:
N number of layers is pre-determined, the first layer is applied to the data set containing the images of the field plant and the prediction absolute error is established, the obtained prediction absolute error is compared to a predetermined value, the next layer is applied if the absolute error is greater than the predetermined value and the number of layers applied is less than N; and wherein training is terminated if the absolute error is less than the predetermined value or the number of layers applied is equal to N.
2. A method according to Claim 1, characterized in that the order of the N number of layers is predetermined.
3. A method according to Claim 1 or 2, characterized in that at least one of the layers is a convolutional layer containing an mxm size filter.
4. A method according to Claim 3, characterized in that at least the first one of the layers is a convolutional layer containing an mxa size filter.
5. A method according to any one of the preceding claims, characterized in that at least one of the layers is the maximum pooling layer.
6. A method according to any one of the preceding claims, characterized in that at least one of the layers is a dilution layer.
7. A method according to any one of the preceding claims, characterized in that at least one of the layers is the ReLU layer.
8. A method according to any one of the preceding claims, characterized in that the layers are fully connected layers with a number of B nodes.
9. A method according to Claim 1, characterized in that the data set is divided into A number of epochs and the layers are applied to the epochs, and if the absolute error obtained after the layer applied to the epoch is greater than the predetermined value and the number of layers applied is less than N, the next convolutional layer is applied.
11. A method according to Claim 3 or 10, characterized in that a=m.
12. A method according to claim 10 or 11, characterized in that b=n.
13. A method according to any one of Claims 10-12, characterized in that if the number of layers applied is less than C, the next layer is applied.
14. A method according to any one of Claims 10-12, characterized in that C is assigned as the new value to N.
15. A method according to Claim 14, characterized in that only the first C elements of the predetermined layers are retrieved, and the others are removed from the layer sequence.
16. A method according to Claim 1, characterized in that said field plant is cotton and the absolute error is determined based on the result of the cotton yield prediction.
17. A method according to Claim 1, characterized in that said field plant is corn and the absolute error is determined according to the predicted result of pure lines containing 100% homozygous value.
18. A method according to Claim 1, characterized in that said field plant is sunflower and the absolute error is determined based on the result of the fungal disease prediction.
19. A computer-implemented training system for an artificial neural network for field plant yield prediction, characterized in that it comprises the following:
- A memory unit to store N predetermined layers and data sets,
- A processing unit that is configured to apply the first layer to the data set containing the images of the field plant and find the prediction absolute error, compare the obtained prediction absolute error with a predetermined value, apply the next layer if the absolute error is greater than the predetermined value and the number of layers applied is less than N, and terminate the training if the absolute error is less than the predetermined value or the number of layers applied is equal to N.
20. A computer program for artificial neural network training for field plant yield prediction, characterized in that it comprises instructions that, when carried out by a processing unit, enable the processing unit to perform the steps according to Claim 1.
21. A computer readable medium for an artificial neural network training for field plant yield prediction, characterized in that it comprises the instructions that, when carried out by a processing unit, enable the processing unit to perform the steps according to Claim 1.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TR2022/015007 TR2022015007A1 (en) | 2022-09-30 | FIELD CROP EFFICIENCY DETECTION METHOD | |
TR2022015007 | 2022-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024072357A1 true WO2024072357A1 (en) | 2024-04-04 |
Family
ID=90478904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/TR2023/051035 WO2024072357A1 (en) | 2022-09-30 | 2023-09-27 | A field crop efficiency detection method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024072357A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11185007B1 (en) * | 2020-11-20 | 2021-11-30 | Advanced Agrilytics Holdings, Llc | Methods and systems for prioritizing management of agricultural fields |
CN114119640A (en) * | 2022-01-27 | 2022-03-01 | 广东皓行科技有限公司 | Model training method, image segmentation method and image segmentation system |
US20220114491A1 (en) * | 2020-10-09 | 2022-04-14 | AquaSys LLC | Anonymous training of a learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23873353 Country of ref document: EP Kind code of ref document: A1 |