CN114818909A - Weed detection method and device based on crop growth characteristics - Google Patents


Info

Publication number: CN114818909A
Authority: CN (China)
Prior art keywords: image, weed, crop, detection, crops
Legal status: Granted; currently Active
Application number: CN202210427051.7A
Other languages: Chinese (zh)
Other versions: CN114818909B
Inventors: 柯善风, 吴国龙
Current Assignee: Zhennao Technology Shanghai Co ltd; Beidahuang Information Co ltd
Original Assignee: Zhennao Technology Shanghai Co ltd; Beidahuang Information Co ltd
Application filed by Zhennao Technology Shanghai Co ltd and Beidahuang Information Co ltd
Priority claimed from application CN202210427051.7A
Publications: CN114818909A (application), CN114818909B (grant)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Abstract

The invention relates to the technical field of weed detection, and discloses a weed detection method and device based on crop growth characteristics. The method comprises: acquiring images of the various plants in a field together with the growth period of the crop to obtain an image library to be detected; inputting the image library to be detected into a trained first classifier, performing content recognition on the images, and obtaining weed images according to the content recognition results; uploading the weed images as detection images, labeling them manually, and storing the images into a first sample library and a third sample library according to the manual labeling results; inputting the weed images into a trained second classifier, performing feature recognition on the images, removing crop images from the weed images according to the feature recognition results, and reporting the detection result. The invention learns the growth characteristics of the crop in each growth period through a neural network algorithm, and detects weeds through a reverse detection and identification method that recognizes "non-crops", which greatly improves identification accuracy.

Description

Weed detection method and device based on crop growth characteristics
Technical Field
The invention relates to the technical field of weed detection, in particular to a weed detection method and device based on crop growth characteristics.
Background
In China's agricultural planting management, effectively identifying weeds during the crop growth period is a very important farming task. Identifying weeds quickly, efficiently, and accurately makes it possible to estimate weed spread and distribution, and lays a solid foundation for the subsequent precise application of herbicides and control of dosage. This avoids pesticide waste and excessive pesticide residue in the soil, and meets the green-agriculture standards proposed by China.
At present, whether for dry-field crops such as corn and wheat or for paddy crops such as rice, field weeds are numerous and varied, for example crabgrass, sedge, Elytrigia repens, and barnyard grass. These weeds are highly similar to one another, hard to distinguish, and easily confused with crop seedlings, which makes early control difficult and increases the yield and economic losses caused by weed damage. The usual control method is for farmers or agricultural technicians, when inspecting seedling growth in the field, to judge the weed risk of the current plot manually from historical farming experience. This manual approach is inefficient and lags behind: it cannot guarantee that weeds are discovered at their initial growth stage and treated promptly with pesticide measures, so the risk of weed spread cannot be effectively avoided. In the middle and later stages of weed occurrence, weed identification algorithms or models are trained by collecting large-scale image data of different weed species, so that weed species can be analyzed and identified during field operations. However, this approach has the following problems:
the weeds were noted and the crop was ignored. The essence of weed identification is to find the difference between crops and weeds in the field, namely accurately identify the crops, and plants except the crops are weeds. However, most of the existing weed identification thinking methods neglect crop identification by acquiring image information of different types of weed structures, lack growth information of whole plants and sub-organs such as roots, stems, leaves and flowers in the growth stage of crops, and lack the establishment of a crop full-growth cycle image sample library.
Second, weed species are numerous and their characteristics differ across growth periods, so a sample library is difficult to construct, data are difficult to label, and the recognition accuracy of algorithm models trained with neural network algorithms is low. The weed species found with different crops in the field are numerous, and the feature expression of weeds growing in different regions, climates, and soils also varies. As plants evolve and adapt to their environments, collecting weed species images becomes even harder. Existing weed sample libraries are therefore incomplete, an image sample library covering weeds of different species and different growth periods cannot be constructed, and sample labeling is laborious, costly, and error-prone.
Third, existing methods focus on the texture information of weed surface images and ignore the intrinsic biological differences between weeds and crops. At present, weed identification is realized by acquiring image information of different weed species, which ignores the inherent physiological and growth differences between crops and weeds, such as plant structure (roots, stems, and leaves), plant height, leaf length, leaf width, leaf area, leaf color, and chlorophyll content.
Disclosure of Invention
The invention aims to provide a weed detection method and a weed detection device based on crop growth characteristics, so as to solve the problems described in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of weed detection based on crop growth characteristics, the method comprising:
acquiring images of various plants in the field and the growing period of crops to obtain an image library to be detected;
inputting the image library to be detected into a trained first classifier, performing content recognition on the images, and acquiring weed images according to content recognition results;
uploading the weed images as detection images, labeling them manually, and storing the images into a first sample library and a third sample library according to the manual labeling results;
inputting the weed images into a trained second classifier, performing feature recognition on the images, removing crop images from the weed images according to the feature recognition results, and reporting a detection result; the detection result comprises the weed images and a detection report based on the weed images;
the first classifier comprises a crop detection model, and the crop detection model is a neural network model obtained based on training of a first sample library;
the second classifier comprises a classification model and a target recognition model, and the acquisition step of the classification model comprises the following steps: constructing a crop growth characteristic parameter database; counting the parameter range and the mean value of the crop growth characteristics to obtain index characteristics and classification models of each parameter; the target recognition model is a neural network model obtained based on training of a third sample library.
As a further scheme of the invention: when the content recognition result or the feature recognition result is a crop image, the crop image is inserted back into the image library to be detected and the number of cycles is recorded; when the number of cycles reaches a preset threshold, the corresponding image is deleted from the image library to be detected.
As a further scheme of the invention: the training of the crop detection model comprises:
acquiring image data of the designated crop at each growth stage according to a preset acquisition height and an illumination angle;
storing the image data by taking the crop growth stage as a label to obtain an image database; wherein the image data is attached with a crop growth cycle time range attribute;
and taking the image database as a training data set, and respectively adopting a machine learning algorithm to carry out classifier training to obtain crop detection models of crops in different growth periods.
As a further scheme of the invention: the machine learning algorithm comprises AdaBoost, reinforcement learning and a deep learning neural network model, wherein the deep learning neural network model comprises VGG, YoLo-V5 and ResNet.
As a further scheme of the invention: the step of determining the classification model comprises:
obtaining the plant height through manual measuring ruler measurement or plant height measurement equipment;
obtaining a chlorophyll content value and a leaf color through a spectral sensor;
acquiring characteristic parameters of crops through an image recognition algorithm; the characteristic parameters comprise leaf length, leaf width, leaf area and leaf length-width ratio;
and counting the key growth characteristic parameters of each stage of crop growth respectively, to obtain the numerical range and standard deviation of each corresponding parameter, which are used as a secondary check classification model.
As a further scheme of the invention: the acquiring of the characteristic parameters of the crops through the image recognition algorithm comprises the following steps:
carrying out scale normalization pretreatment on the collected image;
converting the color image into a gray image, obtaining an interval-division threshold range from the gray-histogram statistics, and performing binarization so that the gray value of the background area is 0 and that of the foreground area is 255;
acquiring a crop outline through an edge detection Canny operator, and connecting edge pixels to form a closed connected region through a Hough transform method;
and calculating the characteristic parameters of the crops based on the closed connected region.
As a further scheme of the invention: the calculating the characteristic parameters of the crops based on the closed connected region comprises the following steps:
calculating the number of pixel points in the connected domain, and defining the leaf area as the contour area of the crop on the projection plane;
determining the maximum and minimum coordinates of the connected domain in the x-axis direction, and defining their difference as the leaf length of the crop;
determining the maximum and minimum coordinates of the connected domain in the y-axis direction, and defining their difference as the leaf width of the crop;
and calculating the leaf length-width ratio of the crops according to the existing characteristic values.
The technical scheme of the invention also provides a weed detection device based on the growth characteristics of crops, which comprises:
the image library generation module is used for acquiring images of plants in the field and the growth period of crops to obtain an image library to be detected;
the content identification module is used for inputting the image library to be detected into a trained first classifier, performing content identification on the images and acquiring weed images according to content identification results;
the manual labeling module is used for uploading the weed images as detection images, labeling them manually, and storing the images into a first sample library and a third sample library according to the manual labeling results;
the feature recognition module is used for inputting the weed images into a trained second classifier, performing feature recognition on the images, removing crop images from the weed images according to the feature recognition results, and reporting detection results; the detection result comprises the weed images and a detection report based on the weed images;
the first classifier comprises a crop detection model, and the crop detection model is a neural network model obtained based on training of a first sample library;
the second classifier comprises a classification model and a target recognition model, and the acquisition step of the classification model comprises the following steps: constructing a crop growth characteristic parameter database; counting the parameter range and the mean value of the crop growth characteristics to obtain index characteristics and classification models of each parameter; the target recognition model is a neural network model obtained based on training of a third sample library.
Compared with the prior art, the invention has the beneficial effects that: according to the invention, the growth characteristics of crops in each growth period are learned through a neural network algorithm, and weeds are detected through a reverse weed detection and identification method for identifying 'non-crops', so that the identification accuracy is greatly improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below; it is obvious that the drawings described below show only some embodiments of the present invention.
FIG. 1 is a block flow diagram of a weed detection method based on crop growth characteristics.
Fig. 2 is a diagram of the G_D_Net network structure.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Fig. 1 is a flow chart of a weed detection method based on crop growth characteristics, and in an embodiment of the present invention, the weed detection method based on crop growth characteristics includes:
acquiring images of various plants in the field and the growing period of crops to obtain an image library to be detected;
inputting the image library to be detected into a trained first classifier, performing content recognition on the images, and acquiring weed images according to content recognition results;
uploading the weed images as detection images, labeling them manually, and storing the images into a first sample library and a third sample library according to the manual labeling results;
inputting the weed images into a trained second classifier, performing feature recognition on the images, removing crop images from the weed images according to the feature recognition results, and reporting a detection result; the detection result comprises the weed images and a detection report based on the weed images;
the first classifier comprises a crop detection model, and the crop detection model is a neural network model obtained based on training of a first sample library;
the second classifier comprises a classification model and a target recognition model, and the acquisition step of the classification model comprises the following steps: constructing a crop growth characteristic parameter database; counting the parameter range and the mean value of the crop growth characteristics to obtain index characteristics and classification models of each parameter; the target recognition model is a neural network model obtained based on training of a third sample library.
Further, when the content recognition result or the feature recognition result is a crop image, the crop image is inserted back into the image library to be detected and the number of cycles is recorded; when the number of cycles reaches a preset threshold, the corresponding image is deleted from the image library to be detected.
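As a concrete illustration, the re-check loop described above can be sketched as follows (a minimal sketch; the function and field names are hypothetical and not taken from the patent):

```python
# Minimal sketch of the re-check loop: images classified as "crop" are
# re-inserted into the library to be detected, and dropped once their
# cycle count reaches a preset threshold. All names are illustrative.

def recycle_crop_images(library, crop_images, cycle_counts, max_cycles=3):
    """Re-insert crop-classified images; delete those cycled too often."""
    for img in crop_images:
        cycle_counts[img] = cycle_counts.get(img, 0) + 1
        if cycle_counts[img] >= max_cycles:
            # reached the preset threshold: remove from the library
            library.discard(img)
        else:
            # insert back into the library for another detection round
            library.add(img)
    return library, cycle_counts
```

With `max_cycles=3`, for example, an image is re-inserted twice and deleted on its third pass; the actual threshold value is left open by the patent.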
The invention provides a method for distinguishing weeds based on the crop growth environment and growth characteristics. First, visual or multispectral image data covering the full crop growth stage are acquired as training samples, and a target identification feature model is obtained through a deep neural network algorithm. According to the current growth period of the crop, one or more suitable feature models from adjacent growth periods are selected in turn to identify the image (first identification).
If a plant is not identified as the target crop, visual and multispectral sensors are used to obtain the key growth sub-characteristic parameters of the crop at each growth stage, such as plant height, leaf length, leaf width, leaf area, and leaf color, for comparative analysis (second check classification). If two or more parameters fall outside the pre-computed numerical range of the respective parameter, the plant is judged to be a weed.
Meanwhile, pictures of plants judged as non-target crops in the first identification are stored in a neural network training sample library for manual identification and labeling, and are used for feature extraction and training for crops and weeds respectively, so that the accuracy of the classification and image recognition algorithms used in the first identification and second verification continuously improves. The method reduces the difficulty of constructing a weed identification training sample library, improves the precision and accuracy of weed classification and identification, facilitates early treatment and prevention of weeds, reduces economic losses caused by weeds, and helps agricultural production managers provide a good growth environment for crops and achieve more precise operation management.
As a preferred embodiment of the technical solution of the present invention, the crop detection model includes:
acquiring image data of the designated crop at each growth stage according to a preset acquisition height and an illumination angle;
storing the image data by taking the crop growth stage as a label to obtain an image database; wherein the image data is attached with a crop growth cycle time range attribute;
and taking the image database as a training data set, and respectively adopting a machine learning algorithm to carry out classifier training to obtain crop detection models of crops in different growth periods.
Further, the machine learning algorithms include AdaBoost, reinforcement learning, and deep learning neural network models such as VGG, YOLOv5, and ResNet.
In one embodiment of this technical solution, the acquisition height and illumination angle of the visual sensing equipment are set to ensure the usability and clarity of the captured pictures of the specified crop. During the crop growth period, image data are collected at each growth stage of the specified crop (e.g., seedling, jointing, flowering, earing), labeled with the growth stage (e.g., Level_1 for seedling, Level_2 for jointing, Level_3 for flowering, Level_4 for earing), given a crop growth-cycle time-range attribute, and stored in an image database, which serves both as the neural network training sample library S1 and as the crop key-feature extraction sample library S2. If the crop grows quickly in a certain growth period and its phenotypic characteristics change greatly, that growth period can be subdivided into several sub-periods by week.
Using the S1 sample library data as the training data set, classifiers are trained with machine learning algorithms such as AdaBoost and reinforcement learning, or with deep learning network frameworks such as VGG, YOLOv5, and ResNet, to obtain target classification models A1 for crops in different growth periods. This patent also proposes a deep neural network, G_D_Net, improved from DenseNet and GoogLeNet, for model training and obtaining the target recognition classification model A1. The network borrows the Dense Block structure from DenseNet for feature reuse and the Inception structure from GoogLeNet, extracting and fusing features at multiple scales to reduce the probability of overfitting. Considering that images of the same crop in different periods may be very similar, the original image is also fed directly as a feature into the Dense Blocks so that it is reused and learned multiple times.
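The growth-stage labeling scheme for the S1 sample library might be organized as follows (an illustrative sketch; the record layout and field names are assumptions, not the patent's storage format):

```python
# Illustrative record layout for the training sample library S1: each
# image carries a growth-stage label (Level_1 .. Level_4) and the
# time-range attribute of that growth period. Field names are hypothetical.

GROWTH_STAGES = {
    "seedling": "Level_1",
    "jointing": "Level_2",
    "flowering": "Level_3",
    "earing": "Level_4",
}

def make_sample_record(image_path, stage, days_after_sowing_range):
    """Build one labeled sample record for the image database."""
    if stage not in GROWTH_STAGES:
        raise ValueError(f"unknown growth stage: {stage}")
    return {
        "image": image_path,
        "label": GROWTH_STAGES[stage],          # classifier training label
        "time_range": days_after_sowing_range,  # growth-cycle attribute
    }
```

Sub-periods (for fast-changing stages) would simply extend the `GROWTH_STAGES` mapping with finer labels.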
As a preferred embodiment of the technical solution of the present invention, the determining step of the classification model includes:
obtaining the plant height through manual measuring ruler measurement or plant height measurement equipment;
obtaining a chlorophyll content value and a leaf color through a spectral sensor;
acquiring characteristic parameters of crops through an image recognition algorithm; the characteristic parameters comprise leaf length, leaf width, leaf area and leaf length-width ratio;
and counting the key growth characteristic parameters of each stage of crop growth respectively, to obtain the numerical range and standard deviation of each corresponding parameter, which are used as a secondary check classification model.
Further, the obtaining of the characteristic parameters of the crops through the image recognition algorithm includes:
carrying out scale normalization pretreatment on the collected image;
converting the color image into a gray image, obtaining an interval-division threshold range from the gray-histogram statistics, and performing binarization so that the gray value of the background area is 0 and that of the foreground area is 255;
acquiring crop outlines through an edge detection Canny operator, and connecting edge pixels through a Hough transform method to form a closed connected region;
and calculating the characteristic parameters of the crops based on the closed connected region.
Specifically, the calculation of the characteristic parameters of the crops based on the closed connected region comprises:
calculating the number of pixel points in the connected domain, and defining the leaf area as the contour area of the crop on the projection plane;
determining the maximum and minimum coordinates of the connected domain in the x-axis direction, and defining their difference as the leaf length of the crop;
determining the maximum and minimum coordinates of the connected domain in the y-axis direction, and defining their difference as the leaf width of the crop;
and calculating the leaf length-width ratio of the crops according to the existing characteristic values.
The key growth characteristics of the crops are extracted by various methods using the S2 sample library. The common methods are as follows:
(1) obtaining the plant height through manual measuring ruler measurement or plant height measurement equipment;
(2) obtaining chlorophyll content value and leaf color through a spectral sensor;
(3) the characteristic parameters of the crop, such as leaf length, leaf width, leaf area and leaf length-width ratio, are obtained as follows, as shown in fig. 2:
First, the acquired image is preprocessed with scale normalization; the color image is then converted into a gray image, an interval-division threshold range is obtained from the gray-histogram statistics, and binarization is performed so that the gray value of the background area is 0 and that of the foreground area (the object area) is 255. Next, the crop contour is obtained with the Canny edge detection operator, and edge pixels are connected with the Hough transform to form a closed connected region. The number of pixels inside the connected region is counted, and the leaf area is defined as the contour area of the crop on the projection plane; the maximum and minimum coordinates of the connected region along the x-axis are found, and their difference is defined as the leaf length of the crop; the maximum and minimum coordinates along the y-axis are found, and their difference is defined as the leaf width of the crop. From these characteristic values, the leaf length-width ratio of the crop can be calculated.
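The final leaf-parameter computation on the binarized connected region can be sketched with NumPy (a simplified sketch on a pixel grid; the real pipeline first extracts the region via Canny and Hough transform, and conversion from pixels to physical units via camera calibration is omitted):

```python
import numpy as np

def leaf_parameters(mask):
    """Compute leaf area, length, width, and aspect ratio from a binary
    mask (foreground = 255, background = 0) containing one connected
    region, following the definitions in the text: area = foreground
    pixel count, length = x-extent, width = y-extent. Pixel units only."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty mask: no foreground region")
    area = int(xs.size)                 # pixel count = projected leaf area
    length = int(xs.max() - xs.min())   # max minus min coordinate on x-axis
    width = int(ys.max() - ys.min())    # max minus min coordinate on y-axis
    return {"area": area, "length": length, "width": width,
            "aspect_ratio": length / width if width else float("inf")}
```

Note that "max minus min coordinate" is the patent's own definition, so a 6-pixel-wide leaf yields a length of 5 pixel units.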
The key growth characteristic parameters of each crop growth stage, namely plant height, leaf length, leaf width, leaf area, leaf length-width ratio, leaf color, and chlorophyll content, are then counted statistically to obtain the numerical range and standard deviation of each corresponding parameter, yielding the secondary check classification model A2 (the secondary verification model).
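Building the secondary check model A2 amounts to per-stage statistics over the measured parameters; a minimal sketch using only the standard library (the parameter names and the range convention are assumptions for illustration):

```python
import statistics

def build_a2_model(samples):
    """samples: {stage: {param_name: [measured values]}}.
    Returns, per growth stage, the numeric range, mean, and standard
    deviation of each growth-characteristic parameter (plant height,
    leaf length, leaf width, etc.)."""
    model = {}
    for stage, params in samples.items():
        model[stage] = {}
        for name, values in params.items():
            model[stage][name] = {
                "min": min(values),
                "max": max(values),
                "mean": statistics.mean(values),
                "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
            }
    return model
```

In practice the per-parameter ranges would be derived from the S2 sample library, one table per growth stage.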
In one example of this technical solution, images of the plants in the field are obtained. Based on the images and the current crop growth period, the classifier C1 first identifies the images with the A1 algorithm, that is, with the feature model of the crop for the corresponding growth period. If the recognition rate is low, for example because crop growth is uneven, the feature models of adjacent growth periods can be used for a second or third identification; if a plant is recognized as the crop in any pass, it is judged to be a non-weed. Objects judged as weeds proceed to the next step.
A structural analysis of the target image then yields the key growth characteristics of the monitored object, such as plant height, leaf length, leaf width, leaf length-width ratio, leaf area, and chlorophyll content. The classifier C2 compares these characteristics with the feature ranges of the A2 algorithm: if more than two characteristics fall outside their ranges, the object is judged to be a weed; otherwise a suspicious-object alarm is reported.
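The C2 secondary check then counts out-of-range characteristics against the A2 ranges (a sketch under the assumption that "more than two" means a count strictly greater than two, as stated here; thresholds and names are illustrative):

```python
def c2_check(features, a2_ranges, threshold=2):
    """features: {param_name: measured value};
    a2_ranges: {param_name: (low, high)} from the A2 model.
    Returns ("weed", n) if more than `threshold` characteristics fall
    outside their ranges, otherwise ("suspicious", n)."""
    out_of_range = sum(
        1
        for name, value in features.items()
        if name in a2_ranges
        and not (a2_ranges[name][0] <= value <= a2_ranges[name][1])
    )
    label = "weed" if out_of_range > threshold else "suspicious"
    return label, out_of_range
```

Note the earlier summary paragraph uses "two or more"; with `threshold=1` this same sketch would implement that stricter reading.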
Meanwhile, pictures of weeds identified by the C1 classifier are reported to the algorithm development layer, which builds an S3 sample library from them through methods such as manual labeling, refreshes the S1 sample library appropriately, iteratively improves the accuracy of the A1 algorithm, and loads the refreshed A1 algorithm into C1. An A3 algorithm is then trained on the S3 sample library with machine learning or neural network algorithms. After comparing the accuracies of the A3 and A2 algorithms, if A3 recognizes more accurately than A2, the A3 algorithm can replace the A2 algorithm in the C2 classifier. With a sufficiently large sample library, the A3 algorithm can identify specific weed species, supporting the application scenario of targeted pesticide application against specific weeds. The A3 algorithm can also be obtained by training with the improved DenseNet- and GoogLeNet-based deep neural network G_D_Net framework proposed in this patent.
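The iterative replacement of A2 by A3 in the C2 classifier reduces to an accuracy comparison (illustrative only; how the accuracies are measured on a validation set is not specified by the patent):

```python
def maybe_swap_secondary_model(c2, a3_model, a2_accuracy, a3_accuracy):
    """Load A3 into the C2 classifier slot only when its measured
    accuracy exceeds that of the current A2 model; otherwise keep A2."""
    if a3_accuracy > a2_accuracy:
        c2["model"] = a3_model
        c2["version"] = "A3"
    return c2
```

The same comparison can be re-run after each refresh of the S3 sample library.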
The identification result is provided to weeding equipment, a smart-agriculture platform, or other devices through the reporting module of the apparatus. This makes it convenient for weeding equipment and agricultural workers to carry out weeding operations, or to perform manual image recognition and judgment on reported suspicious alarms and make corresponding improvements to the settings.
During identification, the method takes into account that dry-field crops grow on ridges; if weeds grow in the furrows, they can be distinguished and identified through the position information of the visual sensor.
Meanwhile, the detected weed images are collected and stored uniformly through the reporting module to form a weed image library, which provides data for subsequent manual labeling of weed species and supports weed classification and the precise application of herbicides.
Example 2
In an embodiment of the present invention, a weed detection apparatus based on crop growth characteristics is provided, the apparatus including:
the image library generation module is used for acquiring images of plants in the field, together with the crop growth period, to obtain an image library to be detected;
the content identification module is used for inputting the image library to be detected into a trained first classifier, performing content identification on the images and acquiring weed images according to content identification results;
the manual marking module is used for uploading the weed images as detection images for manual marking, and storing the images into a first sample library and a third sample library in a classified manner according to the manual marking results;
the characteristic recognition module is used for inputting the weed images into a trained second classifier, performing characteristic recognition on the images, rejecting crop images in the weed images according to characteristic recognition results, and reporting detection results; the detection result comprises a weed image and a detection report based on the weed image;
the first classifier comprises a crop detection model, and the crop detection model is a neural network model obtained based on training of a first sample library;
the second classifier comprises a classification model and a target recognition model, and the acquisition step of the classification model comprises the following steps: constructing a crop growth characteristic parameter database; counting the parameter range and the mean value of the crop growth characteristics to obtain index characteristics and classification models of each parameter; the target recognition model is a neural network model obtained based on training of a third sample library.
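The module pipeline above amounts to a two-stage cascade, which can be sketched schematically as follows (classifier internals are stubbed out; all names and image fields are illustrative, not from the patent):

```python
def detect_weeds(images, first_classifier, second_classifier):
    """Two-stage cascade: the first classifier screens out obvious crops from the
    image library; the second rejects remaining crop images via growth-feature
    checks, so only weed images reach the detection report."""
    suspected = [img for img in images if first_classifier(img) == "weed"]
    weeds = [img for img in suspected if second_classifier(img) == "weed"]
    return {"weed_images": weeds, "report": f"{len(weeds)} weed image(s) detected"}

# Stub classifiers keyed on hypothetical image fields, for illustration only
c1 = lambda img: "weed" if img["green_blob"] else "crop"
c2 = lambda img: "crop" if img["matches_crop_features"] else "weed"
```

Any callables with the same interface (e.g. the trained crop detection model and the growth-feature check) can be substituted for the stubs.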
To help those skilled in the art better understand the technical scheme of the invention, the invention provides G_D_Net, a deep neural network improved on the basis of DenseNet and GoogLeNet, for training the crop detection model and the target recognition model mentioned above. The G_D_Net network is implemented as follows:
the first, basic concept:
1. Picture size: w × h × d;
w is the length of the picture, h is the width, and d is the depth (number of channels); typically, a color picture has 3 channels and a grayscale picture has 1 channel.
2. Convolution operation: the main parameters are the number of input channels, the convolution kernel size, the moving stride, and the zero-padding amount:
Conv2d(input_channels, output_channels, kernel, stride, padding)
input_channels: the number of input channels;
output_channels: the number of output channels;
kernel: the convolution kernel size used for the convolution operation, e.g. 1 × 1, 3 × 3, or 5 × 5;
stride: the step size of the kernel, i.e. the number of pixels the kernel moves across the picture at each step;
padding: the number of rings of zero-valued pixels added around the picture; for example, padding = 1 adds one ring of zero pixels around the picture, which is mainly used so that convolution keeps the picture size unchanged.
3. And (3) pooling operation:
maximum pooling: usually a 2 × 2 kernel is selected and the largest of the 4 values is retained; the moving stride and padding are as above;
average pooling: usually a 2 × 2 kernel is selected and the average of the 4 values is computed; the moving stride and padding are as above.
4. BatchNorm2d operation: normalizes the data; the function nn.BatchNorm2d() takes the number of channels of the input picture as its argument, normalizes the data, and outputs a picture of unchanged length, width, and channel count.
5. ReLU operation: an activation function that applies f(x) = max(0, x) to each pixel of the input picture.
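The size rules above follow the standard output-size formula for convolution and pooling; a plain-Python check (illustrative, not part of the patent):

```python
def conv_out_size(in_size, kernel, stride=1, padding=0):
    """Standard convolution/pooling output-size formula:
    out = floor((in + 2 * padding - kernel) / stride) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

# 1x1 convolution with stride 1, padding 0 keeps the picture size
print(conv_out_size(224, kernel=1, stride=1, padding=0))  # 224
# 3x3 convolution with stride 1, padding 1 also keeps the picture size
print(conv_out_size(224, kernel=3, stride=1, padding=1))  # 224
# 2x2 pooling with stride 2 halves the picture size
print(conv_out_size(224, kernel=2, stride=2, padding=0))  # 112
```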
Second, introduction to each module of the G_D_Net network:
1. Dense Layer: the basic processing unit of the G_D_Net network. An image is input and processed in the following order; the final number of output channels is the manually set num_output_features:
1) the data is first normalized by a BN layer;
2) one ReLU operation increases network sparsity and mitigates overfitting;
3) one convolution layer: kernel size 1 × 1, stride 1, padding 0, number of output channels 4 × num_output_features;
4) a further BN layer followed by a ReLU operation;
5) one convolution layer: kernel size 3 × 3, stride 1, padding 1, number of output channels num_output_features.
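Steps 1)-5) map naturally onto a PyTorch module. The following sketch follows the description above and is illustrative only, not the patented implementation:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One Dense Layer: BN -> ReLU -> 1x1 conv (4*k channels) -> BN -> ReLU
    -> 3x3 conv (k channels), where k = num_output_features."""
    def __init__(self, in_channels, num_output_features):
        super().__init__()
        k = num_output_features
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 4 * k, kernel_size=1, stride=1, padding=0),
            nn.BatchNorm2d(4 * k),
            nn.ReLU(inplace=True),
            nn.Conv2d(4 * k, k, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, x):
        # Picture size is preserved; only the channel count changes to k
        return self.body(x)
```

Because stride 1 with the matching padding preserves the picture size, the layer only changes the number of channels.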
2. Dense Block:
A processing module composed of 6 Dense Layers; the input of each layer is the concatenation of the block input and the outputs of all preceding layers, performed with a concatenation function (cat()).
Because the whole process keeps the picture size (length × width) unchanged, splicing changes only the number of channels, and the calculation formula is: number of output channels = number of input channels + 6 × num_output_features.
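A Dense Block built from six such layers, each reading the concatenation of everything before it, could look like this in PyTorch (illustrative sketch; the DenseLayer definition repeats the structure given above):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 1x1 conv (4*k) -> BN -> ReLU -> 3x3 conv (k)."""
    def __init__(self, in_channels, num_output_features):
        super().__init__()
        k = num_output_features
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 4 * k, 1, stride=1, padding=0),
            nn.BatchNorm2d(4 * k), nn.ReLU(inplace=True),
            nn.Conv2d(4 * k, k, 3, stride=1, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class DenseBlock(nn.Module):
    """Six Dense Layers; each layer sees the concatenation of the block input
    and all earlier layer outputs, so output channels = in + 6*k."""
    def __init__(self, in_channels, num_output_features, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * num_output_features, num_output_features)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```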
3. Transition module: every Transition module is identical; it is placed between two Dense Blocks to reduce parameters and halve the picture size.
1) the data is first normalized by a BN layer; one ReLU operation follows, increasing network sparsity and mitigating overfitting;
2) one convolution layer: kernel size 1 × 1, stride 1, padding 0; the number of output channels is half the number of input channels;
3) one average pooling layer: kernel size 2 × 2, stride 2, which halves the picture size.
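The three Transition steps translate to, for example (illustrative PyTorch sketch, not the patented implementation):

```python
import torch
import torch.nn as nn

class Transition(nn.Module):
    """BN -> ReLU -> 1x1 conv halving the channel count -> 2x2 average pooling
    with stride 2 halving the picture size."""
    def __init__(self, in_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, in_channels // 2, kernel_size=1, stride=1, padding=0),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.body(x)
```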
4. Inception: convolution operations with different kernel sizes are applied to the same input to obtain features under different views, and the results are spliced into the final output.
Function: Inception(in_channels, ch1x1, ch3x3red, ch3x3, ch5x5red, ch5x5, pool);
from left to right the structure includes:
1) first column: one convolution layer, kernel size 1 × 1, stride 1, ch1x1 output channels;
2) second column: first a convolution layer, kernel size 1 × 1, stride 1, ch3x3red output channels; then another convolution layer, kernel size 3 × 3, stride 1, padding 1, ch3x3 output channels;
3) third column: first a convolution layer, kernel size 1 × 1, stride 1, ch5x5red output channels; then another convolution layer, kernel size 5 × 5, stride 1, padding 2, ch5x5 output channels;
4) fourth column: one maximum pooling layer, kernel size 3 × 3, stride 1, padding 1; then a convolution layer, kernel size 1 × 1, stride 1, pool output channels;
the output is the splice of the four columns' outputs; the picture size is unchanged and the number of output channels = ch1x1 + ch3x3 + ch5x5 + pool.
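The four columns and the channel-count formula can be sketched as follows (illustrative PyTorch; parameter names follow the function signature above):

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    """Four parallel columns over the same input, spliced along the channel axis.
    Output channels = ch1x1 + ch3x3 + ch5x5 + pool; picture size is unchanged."""
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch5x5red, ch5x5, pool):
        super().__init__()
        self.b1 = nn.Conv2d(in_channels, ch1x1, kernel_size=1, stride=1)
        self.b2 = nn.Sequential(                       # 1x1 reduce, then 3x3
            nn.Conv2d(in_channels, ch3x3red, kernel_size=1, stride=1),
            nn.Conv2d(ch3x3red, ch3x3, kernel_size=3, stride=1, padding=1),
        )
        self.b3 = nn.Sequential(                       # 1x1 reduce, then 5x5
            nn.Conv2d(in_channels, ch5x5red, kernel_size=1, stride=1),
            nn.Conv2d(ch5x5red, ch5x5, kernel_size=5, stride=1, padding=2),
        )
        self.b4 = nn.Sequential(                       # 3x3 max pool, then 1x1
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, pool, kernel_size=1, stride=1),
        )

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
```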
Third, G_D_Net model framework:
1) the input original image size (length × width × depth) is fixed at 224 × 224 × 3, i.e. 224 pixels long, 224 pixels wide, 3 channels; images of other sizes are resized in advance;
2) one convolution layer Conv1: kernel size 1 × 1, stride 1, padding 0, 1 output channel;
3) the result of Conv1 is spliced with the input (original image) to obtain a 4-channel picture;
4) the 4-channel picture is input into a first Dense Block with num_output_features set to 10; after Dense Block processing, the number of output channels is 4 + 6 × 10 = 64 and the output is 224 × 224 × 64;
5) after one Transition operation, the number of channels is halved and the picture size is halved; the output is 112 × 112 × 32;
6) the 32-channel picture is input into a second Dense Block with num_output_features set to 16; after Dense Block processing, the number of output channels is 32 + 6 × 16 = 128 and the output is 112 × 112 × 128;
7) after one Transition operation, the number of channels is halved and the picture size is halved; the output is 56 × 56 × 64;
8) the 64-channel picture is input into a third Dense Block with num_output_features set to 32; after Dense Block processing, the number of output channels is 64 + 6 × 32 = 256 and the output is 56 × 56 × 256;
9) one maximum pooling layer: kernel size 3 × 3, stride 2; the picture size is halved and the output is 28 × 28 × 256;
10) a first Inception, where in_channels = 256, ch1x1 = 128, ch3x3red = 128, ch3x3 = 192, ch5x5red = 32, ch5x5 = 96, pool = 96; the number of output channels is 128 + 192 + 96 + 96 = 512;
11) one maximum pooling layer: kernel size 3 × 3, stride 2; the picture size is halved and the output is 14 × 14 × 512;
12) a second Inception, where in_channels = 512, ch1x1 = 160, ch3x3red = 112, ch3x3 = 224, ch5x5red = 24, ch5x5 = 64, pool = 64; the number of output channels is 160 + 224 + 64 + 64 = 512;
13) a third Inception, where in_channels = 512, ch1x1 = 256, ch3x3red = 160, ch3x3 = 320, ch5x5red = 32, ch5x5 = 128, pool = 128; the number of output channels is 256 + 320 + 128 + 128 = 832;
14) one maximum pooling layer: kernel size 3 × 3, stride 2; the picture size is halved and the output is 7 × 7 × 832;
15) a fourth Inception, where in_channels = 832, ch1x1 = 384, ch3x3red = 192, ch3x3 = 384, ch5x5red = 48, ch5x5 = 128, pool = 128; the number of output channels is 384 + 384 + 128 + 128 = 1024;
16) one average pooling layer reduces the picture size to 1 × 1; the output is 1 × 1 × 1024, i.e. 1024 elements;
17) a Dropout operation with probability 0.4: each of the 1024 elements is set to 0 with probability 0.4;
18) a fully connected layer computes on the 1 × 1024 input to produce an output of size 1 × num_classes, where num_classes is the preset number of categories;
19) the num_classes output values each correspond to the probability of one category; the category with the highest probability is output as the prediction.
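The channel and size arithmetic of steps 1)-19) can be verified with a few lines of plain Python (the constants are taken from the steps above; this only checks shapes, not the network itself):

```python
def g_d_net_shapes(num_classes=2):
    """Walk the channel count c and picture side length s through steps 1)-19)."""
    c, s = 3, 224                  # 1) fixed input 224 x 224 x 3
    c += 1                         # 2)-3) Conv1 output (1 channel) spliced with input -> 4
    c += 6 * 10                    # 4) Dense Block 1, num_output_features = 10 -> 64
    c, s = c // 2, s // 2          # 5) Transition -> 32 channels, 112 x 112
    c += 6 * 16                    # 6) Dense Block 2, num_output_features = 16 -> 128
    c, s = c // 2, s // 2          # 7) Transition -> 64 channels, 56 x 56
    c += 6 * 32                    # 8) Dense Block 3, num_output_features = 32 -> 256
    s //= 2                        # 9) 3x3 max pooling, stride 2 -> 28 x 28
    c = 128 + 192 + 96 + 96        # 10) Inception 1 -> 512
    s //= 2                        # 11) max pooling -> 14 x 14
    c = 160 + 224 + 64 + 64        # 12) Inception 2 -> 512
    c = 256 + 320 + 128 + 128      # 13) Inception 3 -> 832
    s //= 2                        # 14) max pooling -> 7 x 7
    c = 384 + 384 + 128 + 128      # 15) Inception 4 -> 1024
    s = 1                          # 16) average pooling to 1 x 1
    return c, s, num_classes       # 17)-19) Dropout + fully connected -> num_classes
```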
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A method for detecting weeds based on the growth characteristics of crops, which comprises the following steps:
acquiring images of various plants in the field and the growing period of crops to obtain an image library to be detected;
inputting the image library to be detected into a trained first classifier, performing content recognition on the images, and acquiring weed images according to content recognition results;
uploading the weed images as detection images for manual marking, and storing the images into a first sample library and a third sample library in a classified manner according to the manual marking results;
inputting the weed image into a trained second classifier, performing feature recognition on the image, rejecting a crop image in the weed image according to a feature recognition result, and reporting a detection result; the detection result comprises a weed image and a detection report based on the weed image;
the first classifier comprises a crop detection model, and the crop detection model is a neural network model obtained based on training of a first sample library;
the second classifier comprises a classification model and a target recognition model, and the acquisition step of the classification model comprises the following steps: constructing a crop growth characteristic parameter database; counting the parameter range and the mean value of the crop growth characteristics to obtain index characteristics and classification models of each parameter; the target recognition model is a neural network model obtained based on training of a third sample library.
2. The weed detection method based on the growth characteristics of crops as claimed in claim 1, wherein when the content recognition result or the characteristic recognition result is a crop image, the crop image is inserted into the image library to be detected and the number of cycles is recorded; and when the cycle times reach a preset time threshold value, deleting the corresponding image from the image library to be detected.
3. The crop growth characteristic-based weed detection method according to claim 1, wherein the crop detection model comprises:
acquiring image data of the designated crop at each growth stage according to a preset acquisition height and an illumination angle;
storing the image data by taking the crop growth stage as a label to obtain an image database; wherein the image data is attached with a crop growth cycle time range attribute;
and taking the image database as a training data set, and respectively adopting a machine learning algorithm to carry out classifier training to obtain crop detection models of crops in different growth periods.
4. The crop growth characteristic-based weed detection method according to claim 3, wherein the machine learning algorithm comprises AdaBoost, reinforcement learning, and deep learning neural network models, the deep learning neural network models comprising VGG, YOLOv5, and ResNet.
5. The crop growth characteristic-based weed detection method according to claim 1, wherein the determination step of the classification model comprises:
obtaining the plant height through manual measuring ruler measurement or plant height measurement equipment;
obtaining a chlorophyll content value and a leaf color through a spectral sensor;
acquiring characteristic parameters of crops through an image recognition algorithm; the characteristic parameters comprise leaf length, leaf width, leaf area and leaf length-width ratio;
and respectively counting the key growth characteristic parameters at different stages of crop growth to obtain the numerical range and standard deviation of each corresponding parameter, which are used as a secondary-check classification model.
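The secondary check of claim 5, using per-stage parameter ranges, can be sketched as follows (the feature names and the statistics are hypothetical examples, not values from the patent):

```python
def secondary_check(features, stage_stats):
    """Secondary-check classification: a plant whose measured growth features all
    fall within the crop's per-stage [min, max] ranges is kept as crop; any
    out-of-range feature flags it as a suspected weed."""
    for name, value in features.items():
        lo, hi = stage_stats[name]
        if not (lo <= value <= hi):
            return "weed"
    return "crop"

# Hypothetical per-stage statistics (cm / ratio), for illustration only
stats = {"plant_height": (20, 40), "leaf_length": (8, 15), "leaf_aspect_ratio": (3, 6)}
```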
6. The weed detection method based on the growth characteristics of crops as claimed in claim 5, wherein the obtaining of the characteristic parameters of crops by image recognition algorithm comprises:
carrying out scale normalization pretreatment on the collected image;
converting the color image into a gray image, obtaining the interval-division threshold range from the statistics of the gray histogram, and performing binarization so that the gray value of the background area is 0 and that of the foreground area is 255;
acquiring crop outlines through an edge detection Canny operator, and connecting edge pixels through a Hough transform method to form a closed connected region;
and calculating the characteristic parameters of the crops based on the closed connected region.
7. The crop growth characteristic-based weed detection method according to claim 6, wherein the calculating characteristic parameters of the crop based on the closed connected area comprises:
calculating the number of pixel points in the connected domain, and defining the leaf area as the contour area of the crop on the projection plane;
determining the maximum and minimum coordinates of the connected domain in the x-axis direction, and respectively defining the difference of the maximum and minimum coordinates as the leaf length of the crop;
determining the maximum and minimum coordinates of the connected domain in the y-axis direction, and respectively defining the difference of the maximum and minimum coordinates as the leaf width of the crop;
and calculating the leaf length-width ratio of the crops according to the existing characteristic values.
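The parameter definitions in claim 7 can be illustrated directly on a binary mask with NumPy (the preceding Canny/Hough connected-region extraction is assumed to have produced `mask`; the toy mask is illustrative):

```python
import numpy as np

def leaf_parameters(mask):
    """Compute claim 7's features from a binary mask (255 = leaf region, 0 = background).
    Leaf area: number of foreground pixels (projected contour area).
    Leaf length / width: coordinate extents of the connected region along x / y."""
    ys, xs = np.nonzero(mask)
    area = int(xs.size)
    leaf_length = int(xs.max() - xs.min())
    leaf_width = int(ys.max() - ys.min())
    aspect_ratio = leaf_length / leaf_width if leaf_width else float("inf")
    return {"area": area, "length": leaf_length, "width": leaf_width,
            "aspect_ratio": aspect_ratio}

# Toy rectangular "leaf" spanning x = 3..12, y = 5..8
mask = np.zeros((20, 20), dtype=np.uint8)
mask[5:9, 3:13] = 255
```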
8. A weed detection device based on crop growth characteristics, the device comprising:
the image library generation module is used for acquiring images of plants in the field and the growth period of crops to obtain an image library to be detected;
the content identification module is used for inputting the image library to be detected into a trained first classifier, performing content identification on the images and acquiring weed images according to content identification results;
the manual marking module is used for uploading the weed images as detection images for manual marking, and storing the images into a first sample library and a third sample library in a classified manner according to the manual marking results;
the characteristic recognition module is used for inputting the weed images into a trained second classifier, performing characteristic recognition on the images, rejecting crop images in the weed images according to characteristic recognition results, and reporting detection results; the detection result comprises a weed image and a detection report based on the weed image;
the first classifier comprises a crop detection model, and the crop detection model is a neural network model obtained based on training of a first sample library;
the second classifier comprises a classification model and a target recognition model, and the acquisition step of the classification model comprises the following steps: constructing a crop growth characteristic parameter database; counting the parameter range and the mean value of the crop growth characteristics to obtain index characteristics and classification models of each parameter; the target recognition model is a neural network model obtained based on training of a third sample library.
CN202210427051.7A 2022-04-22 2022-04-22 Weed detection method and device based on crop growth characteristics Active CN114818909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210427051.7A CN114818909B (en) 2022-04-22 2022-04-22 Weed detection method and device based on crop growth characteristics


Publications (2)

Publication Number Publication Date
CN114818909A true CN114818909A (en) 2022-07-29
CN114818909B CN114818909B (en) 2023-09-15

Family

ID=82505585


Country Status (1)

Country Link
CN (1) CN114818909B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325240A (en) * 2020-01-23 2020-06-23 杭州睿琪软件有限公司 Weed-related computer-executable method and computer system
CN115413550A (en) * 2022-11-07 2022-12-02 中化现代农业有限公司 Beet plant protection method and beet plant protection equipment
CN115690590A (en) * 2023-01-04 2023-02-03 中化现代农业有限公司 Crop growth abnormity monitoring method, device, equipment and storage medium
CN117576560A (en) * 2023-11-17 2024-02-20 中化现代农业有限公司 Method, device, equipment and medium for identifying field weeds of northern spring corns

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859375A (en) * 2010-06-01 2010-10-13 南京林业大学 Method for recognizing inline crops and weeds of seedling stage in farmland
CN104899553A (en) * 2015-04-30 2015-09-09 浙江理工大学 Field crop row extraction method capable of resisting dense weed interference
EP3279831A1 (en) * 2016-08-03 2018-02-07 Bayer CropScience AG Recognition of weed in a natural environment using a digital image
CN109522797A (en) * 2018-10-16 2019-03-26 华南农业大学 Rice seedling and Weeds at seedling recognition methods and system based on convolutional neural networks
WO2019083336A1 (en) * 2017-10-27 2019-05-02 전북대학교산학협력단 Method and device for crop and weed classification using neural network learning
CN109740483A (en) * 2018-12-26 2019-05-10 南宁五加五科技有限公司 A kind of rice growing season detection method based on deep-neural-network
US20190147249A1 (en) * 2016-05-12 2019-05-16 Bayer Cropscience Aktiengesellschaft Recognition of weed in a natural environment
CN109961024A (en) * 2019-03-08 2019-07-02 武汉大学 Wheat weeds in field detection method based on deep learning
CN110020651A (en) * 2019-04-19 2019-07-16 福州大学 Car plate detection localization method based on deep learning network
CN110288041A (en) * 2019-07-01 2019-09-27 齐鲁工业大学 Chinese herbal medicine classification model construction method and system based on deep learning
CN110363224A (en) * 2019-06-19 2019-10-22 创新奇智(北京)科技有限公司 A kind of object classification method based on image, system and electronic equipment
CN110457989A (en) * 2019-06-19 2019-11-15 华南农业大学 The weeds in paddy field recognition methods of various dimensions extension feature is extracted based on convolutional neural networks
WO2020000253A1 (en) * 2018-06-27 2020-01-02 潍坊学院 Traffic sign recognizing method in rain and snow
CN111340141A (en) * 2020-04-20 2020-06-26 天津职业技术师范大学(中国职业培训指导教师进修中心) Crop seedling and weed detection method and system based on deep learning
CN112541383A (en) * 2020-06-12 2021-03-23 广州极飞科技有限公司 Method and device for identifying weed area
CN112906537A (en) * 2021-02-08 2021-06-04 北京艾尔思时代科技有限公司 Crop identification method and system based on convolutional neural network


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Ciro Potena et al.: "Fast and Accurate Crop and Weed Identification with Summarized Train Sets for Precision Agriculture", Advances in Intelligent Systems and Computing, vol. 531 *
侯晨伟; 陈丽: "Research on a weed identification method for maize at the seedling stage based on probabilistic neural networks", Agricultural Mechanization Research, no. 11 *
张健钦, 屈平, 邝朴生: "Research progress on the application of computer vision technology to weed identification", Journal of Hebei University (Natural Science Edition), no. 04 *
李南: "Research on fast crop identification methods for a weeding robot based on machine vision", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 08 *
郭鑫鑫; 任海铭; 周志贤: "Fast identification of field weeds based on deep belief networks", Computer Products and Circulation, no. 04 *
陆明; 李茂松; 申双和; 王春艳: "Application of image recognition technology in the inversion of crop agronomic information", Journal of Natural Disasters, no. 03 *


Also Published As

Publication number Publication date
CN114818909B (en) 2023-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant