CN115115935A - Automatic weed identification method, system, equipment and storage medium - Google Patents


Info

Publication number
CN115115935A
CN115115935A (application CN202210640118.5A)
Authority
CN
China
Prior art keywords
identification
weeds
images
weed
final
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210640118.5A
Other languages
Chinese (zh)
Inventor
龙晓波
付强
田冰川
清毅
刘京
赵健
尹合兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhi Biotechnology Co ltd
Original Assignee
Huazhi Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhi Biotechnology Co ltd filed Critical Huazhi Biotechnology Co ltd
Priority to CN202210640118.5A priority Critical patent/CN115115935A/en
Publication of CN115115935A publication Critical patent/CN115115935A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic weed identification method, system, equipment and storage medium. Image-level marking is applied to the weed distribution in the training set and pixel-level marking to the weed distribution in the test set, which greatly reduces the cost of sample labeling in deep learning, while marking the weeds for feature identification improves identification accuracy. The training set is divided into a plurality of sub-data sets that train a plurality of network models respectively, improving the robustness of model identification; the sub-data sets are continuously updated by comparing the prediction result graphs obtained on the test set with those obtained on the training set, and the network models are iteratively trained to obtain the final network models, so that training can be completed with only a small number of pixel-level labeled samples, reducing the workload that massive pixel-level labeling would otherwise bring. Finally, the mean of the identification results of the plurality of final network models is taken as the final result, improving the accuracy and robustness of model identification.

Description

Automatic weed identification method, system, equipment and storage medium
Technical Field
The invention relates to the technical field of weed identification, in particular to a method, a system, equipment and a storage medium for automatically identifying weeds.
Background
Weeds in rice fields do great harm to rice growth: they compete with the rice for nutrients, water and light, affect its growth, and reduce the quality and yield of the grain. To control field weeds, the common practice is to spray the entire operation area uniformly, which inevitably leads to excessive pesticide application and brings problems such as increased weed resistance, pesticide waste and environmental pollution. If, when spraying, the weed distribution of the rice field could be counted rapidly in real time region by region, the weed distribution information of each region could be combined into a weed distribution map of the whole field, enabling variable-rate and precise pesticide application, effectively reducing pesticide usage and improving the efficiency of spraying operations.
With the development of deep learning, convolutional neural networks have gradually become widely used in machine vision and achieved good results. Applying computer vision to weed identification is generally done with methods such as wavelet analysis, Bayesian discriminant models and support vector machines, based on features such as weed color, shape, texture and spatial distribution, and combinations of these features, to distinguish crops from weeds. Although these methods are easy to deploy, the environment of a typical crop planting area is complex, so identification based on such hand-crafted weed features has poor robustness and low accuracy.
Compared with an image recognition task, image semantic segmentation realizes segmentation and recognition of targets simultaneously and can provide fine-grained, high-level semantic information for subsequent visual tasks such as image analysis and understanding; it has gradually become a core technology in application scenarios such as remote sensing image analysis. Benefiting from large-scale training data with pixel-level labels and deep convolutional neural network models, the recognition accuracy of image semantic segmentation has improved remarkably. However, semantic segmentation methods based on deep convolutional neural networks rely on large-scale training data annotated to pixel granularity, and this time-consuming, labor-intensive and costly pixel-level labeling severely restricts further improvement of segmentation performance and the scalability of practical applications. To overcome these limitations, weakly supervised semantic segmentation based on image-level labels only requires the object classes present in a scene image to be given, without indicating their positions in the image. Compared with pixel-level labels, image-level labels can be annotated accurately and efficiently, which greatly reduces the time and cost of data labeling: a skilled annotator needs 5 to 7 minutes to produce a high-quality pixel-level label for a 256 × 256 image, whereas an image-level label takes only seconds or tens of seconds. At this stage, with the growth of satellites and various sensors, the number of available remote sensing images keeps increasing; whenever a new region needs to be segmented, a fine-tuning approach would require pixel-level labeling of that region's imagery all over again.
Obviously, labeling such a large amount of remote sensing image data with pixel-level labels to train a semantic segmentation model is not practical.
Disclosure of Invention
The present invention is directed to at least solving the problems of the prior art. Therefore, the invention provides an automatic weed identification method, system, equipment and storage medium, which can not only use weed features for identification and improve identification accuracy, but also reduce the workload brought by massive pixel-level labeling samples when a network model is trained, and can use a small amount of pixel-level labeling to complete the training of the network model.
In a first aspect, embodiments of the present invention provide an automatic weed identification method, comprising the steps of:
acquiring a plurality of images of a rice field, and dividing the plurality of images into a training set and a test set; wherein the number of images in the training set is greater than the number of images in the test set;
carrying out image-level marking on an outer frame of a weed position of the images in the training set, and carrying out pixel-level marking on a weed boundary of the images in the testing set;
constructing a plurality of network models, dividing the training set into sub-data sets equal in number to the network models, and inputting each sub-data set into the corresponding network model for training to obtain a plurality of identification models for identifying paddy field weeds;
identifying the test set according to each identification model to obtain the identification result graph output by each identification model; comparing all the identification result graphs to obtain the differing labels among them;
respectively training the corresponding recognition model on each sub-data set until the loss function of the recognition model reaches its minimum; when the recognition model is trained, the pixels corresponding to the differing labels are removed through a mask;
and testing all the converged identification models according to the test set to obtain the identification result output by each identification model, and taking the mean value of all the identification results as the final identification result of the weeds in the rice field.
According to the embodiment of the invention, at least the following technical effects are achieved:
By applying image-level marking to the weed distribution in the training set and pixel-level marking to the weed distribution in the test set, the cost of sample labeling in deep learning is greatly reduced, while marking the weeds for feature identification improves identification accuracy. The training set is divided into a plurality of sub-data sets that train a plurality of network models respectively, improving the robustness of model identification; the sub-data sets are continuously updated by comparing the prediction result graphs obtained on the test set with those obtained on the training set, and the network models are iteratively trained to obtain the final network models, so that training can be completed with only a small number of pixel-level labeled samples, reducing the workload that massive pixel-level labeling would otherwise bring. Finally, the mean of the identification results of the plurality of final network models is taken as the final result, improving the accuracy and robustness of model identification and preventing accidental errors in the final result.
According to some embodiments of the invention, the images are obtained by splicing high-definition rice field images acquired by the unmanned aerial vehicle and then slicing the images.
According to some embodiments of the invention, the constructed network model employs a DeepLab v3+ network.
According to some embodiments of the invention, the loss function is calculated as follows:
Loss_{t+1} = -Σ_i Σ_j L_ij(t) · log P_ij(t+1)
wherein Loss_{t+1} represents the loss function, t represents the number of iterations, i and j represent the row and column indices of the image respectively, L_ij(t) represents the label, and P_ij(t+1) represents the pixel value predicted by the recognition model at row i, column j.
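The loss formula appears only as an image in the original filing; as an illustrative sketch consistent with the definitions above, a pixel-wise cross-entropy over the labels L_ij(t) and predictions P_ij(t+1), with the mask of uncertain pixels applied, could be written as follows (the function name, the binary-label assumption, and the cross-entropy form itself are our assumptions, not the patent's exact formula):

```python
import numpy as np

def masked_pixel_loss(labels, probs, mask):
    """Pixel-wise cross-entropy between labels L_ij(t) and predicted weed
    probabilities P_ij(t+1), skipping the uncertain pixels removed by the
    mask (mask == 0). Binary labels (0 = non-weed, 1 = weed) are assumed
    for illustration."""
    eps = 1e-7  # avoid log(0)
    probs = np.clip(probs, eps, 1.0 - eps)
    # Per-pixel binary cross-entropy
    ce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # Average only over pixels the mask keeps
    return float(ce[mask.astype(bool)].mean())
```

As the uncertain pixels are excluded from the average, a disagreeing pixel cannot pull the loss up, which matches the iterative label-cleaning described in the method.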
According to some embodiments of the present invention, the formula for calculating the mean of all the identification results as the final identification result of the paddy field weeds comprises:
R_mean = (R_1 + R_2 + R_3 + R_4 + ... + R_N) / N
wherein R_1 to R_N represent the recognition result graphs of the corresponding recognition models, N represents the number of recognition models, and R_mean represents the final recognition result.
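The averaging step above can be sketched as a pixel-wise mean over the N result graphs (the function name is ours, for illustration only):

```python
import numpy as np

def ensemble_mean(result_maps):
    """Pixel-wise mean of the N recognition result graphs R_1..R_N:
    R_mean = (R_1 + R_2 + ... + R_N) / N."""
    return np.stack(result_maps, axis=0).mean(axis=0)
```

For example, averaging three binary result maps yields a per-pixel agreement score that can then be thresholded into the final weed map.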
According to some embodiments of the present invention, after taking the average of all identification results as the final identification result of the paddy field weeds, the method for automatically identifying the weeds further comprises the following steps:
and grading the final recognition result.
According to some embodiments of the invention, the calculation formula of the rating comprises:
P = P_G / (P_G + P_L)
R = P_G / (P_G + P_W)
F1 = (2 × P × R) / (P + R)
wherein P_G represents the number of pixels correctly identified as weeds, P_L represents the number of non-weed pixels incorrectly identified as weeds, P represents the precision of the final recognition result, P_W represents the number of pixels that should have been identified as weeds but were identified as non-weeds, R represents the recall of the final recognition result, and F1 represents the rating of the final recognition result.
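The three rating formulas above translate directly into code (the function name is our illustration; the counts correspond to true positives, false positives and false negatives):

```python
def rate_result(p_g, p_l, p_w):
    """Precision P, recall R and F1 from pixel counts.
    p_g: weed pixels identified correctly (true positives)
    p_l: non-weed pixels identified as weeds (false positives)
    p_w: weed pixels identified as non-weeds (false negatives)"""
    precision = p_g / (p_g + p_l)
    recall = p_g / (p_g + p_w)
    f1 = (2 * precision * recall) / (precision + recall)
    return precision, recall, f1
```

F1, as the harmonic mean of precision and recall, penalizes a result that scores well on only one of the two, which is why the method uses it as the single rating figure.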
In a second aspect, embodiments of the present invention provide an automatic weed identification system, comprising:
the image acquisition module is used for acquiring a plurality of images of the rice field and dividing the plurality of images into a training set and a test set; wherein the number of images in the training set is greater than the number of images in the test set;
the weed marking module is used for carrying out image-level marking on an outer frame of a weed position of the image in the training set and carrying out pixel-level marking on a weed boundary of the image in the testing set;
the network model building module is used for building a plurality of network models, dividing the training set into sub-data sets equal in number to the network models, and inputting each sub-data set into the corresponding network model for training to obtain a plurality of identification models for identifying the weeds in the rice field;
the network model training module is used for identifying the test set according to each identification model to obtain an identification result graph output by each identification model; comparing all the identification result graphs, and acquiring different labels among all the identification result graphs;
and the final identification result module is used for testing all the converged identification models according to the test set to obtain the identification result output by each identification model, and taking the mean value of all the identification results as the final identification result of the paddy field weeds.
In a third aspect, embodiments of the present invention provide an electronic device, including at least one control processor and a memory communicatively coupled to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the method of automatic weed identification of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the method for automatic weed identification of the first aspect.
It is to be noted that the advantageous effects of the second to fourth aspects over the prior art are the same as those of the automatic weed identification method of the first aspect, and will not be described in detail herein.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flow chart of an automatic weed identification method according to an embodiment of the present invention;
fig. 2 is a block diagram of an automatic weed identification method according to an embodiment of the present invention;
FIG. 3 is a block diagram of an automatic weed identification system according to one embodiment of the present invention;
fig. 4 is an electronic device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", and the like, indicate orientations and positional relationships based on the orientations and positional relationships shown in the drawings, and are used merely for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the device or element so referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be construed as limiting the present invention. Furthermore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Referring to fig. 1, in some embodiments of the present invention, there is provided an automatic weed identification method, comprising the steps of:
s100, acquiring a plurality of images of a rice field, and dividing the plurality of images into a training set and a test set; and the number of the images in the training set is greater than that of the images in the testing set.
S200, carrying out image-level marking on the outer frame of the weed position of the images in the training set, and carrying out pixel-level marking on the weed boundary of the images in the test set.
S300, constructing a plurality of network models, dividing the training set into sub-data sets equal in number to the network models, and inputting each sub-data set into the corresponding network model for training to obtain a plurality of identification models for identifying the weeds in the rice field.
S400, identifying the test set according to each identification model to obtain the identification result graph output by each identification model, and comparing all the identification result graphs to obtain the differing labels among them.
S500, training the corresponding recognition model on each sub-data set until the loss function of the recognition model reaches its minimum; when the recognition model is trained, the pixels corresponding to the differing labels are removed through a mask.
S600, testing all the converged identification models on the test set to obtain the identification result output by each identification model, and taking the mean of all the identification results as the final identification result of the paddy field weeds.
According to the automatic weed identification method provided by the embodiment of the invention, image-level marking is applied to the weed distribution in the training set and pixel-level marking to the weed distribution in the test set, which greatly reduces the cost of sample labeling in deep learning, while marking the weeds for feature identification improves identification accuracy. The training set is divided into a plurality of sub-data sets that train a plurality of network models respectively, improving the robustness of model identification; the sub-data sets are continuously updated by comparing the prediction result graphs obtained on the test set with those obtained on the training set, and the network models are iteratively trained to obtain the final network models, so that training can be completed with only a small number of pixel-level labeled samples, reducing the workload that massive pixel-level labeling would otherwise bring. Finally, the mean of the identification results of the plurality of final network models is taken as the final result, improving the accuracy and robustness of model identification and preventing accidental errors in the final result.
In some embodiments of the invention, the images are obtained by stitching high-definition rice field images acquired by an unmanned aerial vehicle and then slicing the result. The high-definition images are acquired by flying the unmanned aerial vehicle at a height of 50 m over the rice field area and are stitched with PIX4D software; images that do not belong to the rice field are removed first. The stitched high-definition image is then sliced into 256 × 256 PNG pictures using Python and the GDAL library, and the training set and test set are obtained by subsequent random partitioning.
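The patent slices the stitched orthomosaic with Python and the GDAL library; as a simplified sketch, the same tiling can be shown with plain array slicing (the function name and the choice to discard incomplete edge tiles are our assumptions, not the patent's):

```python
import numpy as np

def slice_into_tiles(mosaic, tile=256):
    """Cut a stitched field mosaic (H x W [x C] array) into tile x tile
    patches, discarding incomplete tiles at the right/bottom edges."""
    h, w = mosaic.shape[:2]
    return [mosaic[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]
```

In the actual workflow GDAL would additionally read the GeoTIFF in windows and preserve georeferencing per tile, which plain slicing does not attempt.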
In some embodiments of the invention, the constructed network model adopts the DeepLab v3+ network. The DeepLab family introduced atrous (dilated) convolution and improved upon it in a series of versions. DeepLab v1 added atrous convolution directly on top of VGG, but the effect was not ideal, so a CRF was used for post-processing optimization. DeepLab v2 added an ASPP module on the basis of v1; the introduction of ASPP improved the segmentation of targets at different scales, but CRF optimization was still optionally needed. DeepLab v3's Multi-Grid strategy draws on HDC and resolves the gridding problem of atrous convolution; at the same time, its modification of ASPP gives the network stronger representational capability, so DeepLab v3 no longer needs CRF refinement. Finally, DeepLab v3+ draws on the feature fusion strategy common in object detection, so the network retains more shallow information, while the addition of depthwise separable convolutions improves the speed of the segmentation network.
In some embodiments of the invention, the formula for calculating the loss function comprises:
Loss_{t+1} = -Σ_i Σ_j L_ij(t) · log P_ij(t+1)
wherein Loss_{t+1} denotes the loss function, t denotes the number of iterations, i and j denote the row and column indices of the image respectively, L_ij(t) denotes the label, and P_ij(t+1) denotes the pixel value predicted by the recognition model at row i, column j.
Through calculation of the loss function, it can be accurately judged whether the recognition network model has reached the convergence condition, and training of the recognition network model can be stopped in time.
In some embodiments of the present invention, the formula for calculating the final identification of the paddy field weeds by taking the average of all identification results comprises:
R_mean = (R_1 + R_2 + R_3 + R_4 + ... + R_N) / N
wherein R_1 to R_N represent the recognition result graphs of the corresponding recognition models, N represents the number of recognition models, and R_mean represents the final recognition result.
The final recognition result is obtained by averaging the recognition result graphs of the recognition network models, which improves the robustness and accuracy of recognition.
In some embodiments of the present invention, after taking the average of all the identification results as the final identification result of the weeds in the paddy field, the method for automatically identifying weeds further comprises the following steps:
and step S700, grading the final recognition result.
By rating the final recognition result, a reference standard is provided for judging the error level of the final recognition result.
In some embodiments of the invention, the calculation formula for the rating comprises:
P = P_G / (P_G + P_L)
R = P_G / (P_G + P_W)
F1 = (2 × P × R) / (P + R)
wherein P_G represents the number of pixels correctly identified as weeds, P_L represents the number of non-weed pixels incorrectly identified as weeds, P represents the precision of the final recognition result, P_W represents the number of pixels that should have been identified as weeds but were identified as non-weeds, R represents the recall of the final recognition result, and F1 represents the rating of the final recognition result.
By combining precision and recall, the final recognition result is evaluated with F1 as the accuracy evaluation index, ensuring the objectivity and robustness of the rating so that it correctly reflects the accuracy of the final recognition result.
Referring to fig. 2, in order to facilitate understanding by those skilled in the art, one embodiment of the present invention provides an automatic weed identification method, comprising the steps of:
firstly, acquiring high-definition rice field images by a 50-meter-height flying unmanned aerial vehicle in a rice field area, splicing the high-definition rice field images by using PIX4D software, slicing the spliced high-definition images into 256 × 256 PNG images by using a Python + gdal library, wherein 7000 images are provided, and randomly dividing the sliced PNG images into a training set and a test set, wherein 6500 images are provided in the training set, and the test set is 500.
Secondly, Labelme is used to mark the weeds in the pictures of the training set; this marking only requires an outer frame around the approximate position of the weed distribution (the rough extent around the weeds), averaging 5 seconds per picture. When the test set is marked, the weed boundaries (the detailed distribution, growth angles and the like) are marked at the pixel level, averaging 5 minutes per picture.
Thirdly, a Mask R-CNN environment is built and the TensorFlow 2.0 deep learning framework is installed; the training set is divided into N sub-data sets, N network models are built correspondingly, and a DeepLab v3+ network is trained on each sub-data set. The N trained recognition network models then recognize the test set to obtain N groups of recognition result graphs.
Fourthly, the labels of the N groups of recognition result graphs are compared. Where the labels of all N groups are the same, the label is judged correct and kept as the label of that pixel; where the labels differ, the correctness of the label cannot be judged, and the pixels corresponding to such uncertain labels are removed with a mask.
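The agreement check in this step can be sketched as follows (a pure-numpy illustration; the function name is ours, and using label 0 for the removed pixels is an assumption for display purposes only, since the mask, not the label value, is what excludes them from training):

```python
import numpy as np

def agreement_mask(label_maps):
    """Compare the N predicted label maps pixel by pixel. A pixel keeps
    its label only where all N models agree; where they disagree the
    mask is 0 and the pixel is excluded from the next training round."""
    stacked = np.stack(label_maps, axis=0)
    # True where every model predicts the same label as the first model
    mask = (stacked == stacked[0]).all(axis=0).astype(np.uint8)
    consensus = stacked[0] * mask  # agreed label, 0 where uncertain
    return consensus, mask
```

The returned mask is exactly what the next step feeds into the masked loss, so disagreement between models never contributes to the gradient.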
Fifthly, on the basis of the fourth step, the network models are trained again on the N sub-data sets; as in the fourth step, uncertain pixels are removed and the confirmed pixels participate in training. Training the network models, optimizing the labels, and retraining are iterated until the loss function of each network model reaches its minimum, giving the recognition network models. The calculation formula of the loss function is as follows:
Loss_{t+1} = -Σ_i Σ_j L_ij(t) · log P_ij(t+1)
wherein Loss_{t+1} represents the loss function, t represents the number of iterations, i and j represent the row and column indices of the image respectively, L_ij(t) represents the label, and P_ij(t+1) represents the pixel value predicted by the recognition model at row i, column j.
Sixthly, the test set is recognized by the N recognition network models to obtain recognition result graphs, and the mean of the recognition results of the N models is taken as the final result; the mean is calculated as follows:
R_mean = (R_1 + R_2 + R_3 + R_4 + ... + R_N) / N
wherein R_1 to R_N represent the recognition result graphs of the corresponding recognition models, N represents the number of recognition models, and R_mean represents the final recognition result.
And seventhly, grading the final recognition result, wherein a calculation formula of the grading is as follows:
P = P_G / (P_G + P_L)
R = P_G / (P_G + P_W)
F1 = (2 × P × R) / (P + R)
wherein P_G represents the number of pixels correctly identified as weeds, P_L represents the number of non-weed pixels incorrectly identified as weeds, P represents the precision of the final recognition result, P_W represents the number of pixels that should have been identified as weeds but were identified as non-weeds, R represents the recall of the final recognition result, and F1 represents the rating of the final recognition result.
Referring to fig. 3, an embodiment of the present invention provides an automatic weed identification system 1000, which includes an image acquisition module 1001, a weed marking module 1002, a network model construction module 1003, a network model training module 1004, and a final identification result module 1005, wherein:
the image acquisition module 1001 is used for acquiring a plurality of images of the rice field and dividing the plurality of images into a training set and a test set; wherein the number of images in the training set is greater than the number of images in the testing set;
the weed marking module 1002 is used for performing image-level marking on an outer frame of a weed position of the images in the training set and performing pixel-level marking on a weed boundary of the images in the testing set;
the network model building module 1003 is used for building a plurality of network models, dividing the training set into subdata sets with the same number as the network models, inputting each subdata set into the corresponding network model for training, and obtaining a plurality of identification models for identifying the weeds in the rice field;
the network model training module 1004 is used for identifying the test set according to each identification model to obtain an identification result graph output by each identification model; comparing all the identification result graphs to obtain different labels among all the identification result graphs;
and a final recognition result module 1005 for testing all the converged recognition models according to the test set to obtain the recognition result output by each recognition model, and taking the average value of all the recognition results as the final recognition result of the paddy field weeds.
It should be noted that, since the automatic weed identification system in the present embodiment is based on the same inventive concept as the above-mentioned automatic weed identification method, the corresponding contents in the method embodiments are also applicable to the present system embodiment, and are not described in detail herein.
Referring to fig. 4, another embodiment of the present invention further provides an electronic device 6000, which may be any type of smart terminal, such as a mobile phone, a tablet computer, a personal computer, and the like.
Specifically, the electronic device 6000 includes: one or more control processors 6001 and a memory 6002 (one control processor 6001 and one memory 6002 are taken as an example in fig. 4); the control processor 6001 and the memory 6002 may be connected by a bus or by other means (connection by a bus is taken as an example in fig. 4).
The memory 6002 serves as a non-transitory computer-readable storage medium that can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to an electronic device in an embodiment of the invention;
The control processor 6001 executes the non-transitory software programs, instructions, and modules stored in the memory 6002 to perform the various functional applications and data processing of the automatic weed identification method, i.e., to implement the automatic weed identification method of the above-described method embodiments.
The memory 6002 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created by use of the automatic weed identification method, and the like. Further, the memory 6002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 6002 optionally includes memory located remotely from the control processor 6001, and such remote memory may be connected to the electronic device 6000 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Stored in the memory 6002 are one or more modules that, when executed by the one or more control processors 6001, perform the automatic weed identification method of the above-described method embodiment, for example, the method steps of fig. 1 described above.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It should be noted that, since an electronic device in the present embodiment is based on the same inventive concept as the above-mentioned automatic weed identification method, the corresponding contents in the method embodiments are also applicable to the present device embodiment, and are not described in detail herein.
An embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions for performing the automatic weed identification method of the above embodiment.
It should be noted that, since a computer-readable storage medium in the present embodiment is based on the same inventive concept as the above-mentioned automatic weed identification method, the corresponding contents in the method embodiments are also applicable to the present apparatus embodiments, and are not described in detail herein.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of data such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired data and which can be accessed by the computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any data delivery media, as is known to one of ordinary skill in the art.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. An automatic weed identification method is characterized by comprising the following steps:
acquiring a plurality of images of a rice field, and dividing the plurality of images into a training set and a test set; wherein the number of images in the training set is greater than the number of images in the test set;
carrying out image-level marking on an outer frame of a weed position of the images in the training set, and carrying out pixel-level marking on a weed boundary of the images in the testing set;
constructing a plurality of network models, dividing the training set into subdata sets with the number equal to that of the network models, inputting each subdata set into the corresponding network model for training to obtain a plurality of identification models for identifying paddy field weeds;
identifying the test set according to each identification model to obtain an identification result graph output by each identification model; comparing all the identification result graphs to obtain different labels among all the identification result graphs;
respectively training the corresponding recognition model according to each subdata set until the loss function of the recognition model reaches the minimum; when the recognition model is trained, pixels corresponding to the different labels are removed through a mask;
and testing all the converged identification models according to the test set to obtain the identification result output by each identification model, and taking the mean value of all the identification results as the final identification result of the weeds in the rice field.
2. The automatic weed identification method according to claim 1, wherein the images are obtained by splicing high-definition paddy field images acquired by an unmanned aerial vehicle and then slicing the images.
3. The method for automatically identifying weeds of claim 1, wherein the constructed network model employs a deep lab v3+ network.
4. The method of claim 1, wherein the formula for calculating the loss function comprises:
Loss_{t+1} = -Σ_{i,j} [ L_{ij}(t)·log P_{ij}(t+1) + (1 - L_{ij}(t))·log(1 - P_{ij}(t+1)) ]

wherein the Loss_{t+1} represents the loss function, t represents the number of repetitions, i and j represent the row and column numbers of the image, respectively, the L_{ij}(t) represents a label, and the P_{ij}(t+1) represents the pixel value that the recognition model predicts at the ith row and the jth column.
5. The method for automatically identifying weeds of claim 4, wherein the calculation formula taking the mean value of all identification results as the final identification result of the weeds in the paddy field comprises:
R_mean = (R_1 + R_2 + R_3 + R_4 + ... + R_N) / N

wherein the R_1 to the R_N represent the identification result graphs of the corresponding identification models, N represents the number of identification models, and the R_mean represents the final identification result.
6. The method for automatically identifying weeds of claim 5, wherein after taking the mean value of all identification results as the final identification result of the weeds in the paddy field, the method for automatically identifying weeds further comprises the following steps:
and grading the final recognition result.
7. The method according to claim 6, wherein the calculation formula of the rating includes:
P = P_G / (P_G + P_L)

R = P_G / (P_G + P_W)

F1 = (2 × P × R) / (P + R)

wherein the P_G represents the number of pixels of correctly identified weeds, the P_L represents the number of non-weed pixels identified as weeds, the P represents the accuracy of the final identification result, the P_W represents the number of pixels that should be identified as weeds but whose identification result is non-weeds, the R represents the recall of the final identification result, and the F1 represents the rating of the final identification result.
8. An automatic weed identification system, comprising:
the image acquisition module is used for acquiring a plurality of images of the rice field and dividing the plurality of images into a training set and a test set; wherein the number of images in the training set is greater than the number of images in the test set;
the weed marking module is used for carrying out image-level marking on an outer frame of a weed position of the image in the training set and carrying out pixel-level marking on a weed boundary of the image in the testing set;
the network model building module is used for building a plurality of network models, dividing the training set into subdata sets with the number equal to that of the network models, inputting each subdata set into the corresponding network model for training to obtain a plurality of identification models for identifying the weeds in the rice field;
the network model training module is used for identifying the test set according to each identification model to obtain an identification result graph output by each identification model; comparing all the identification result graphs to obtain different labels among all the identification result graphs;
and the final identification result module is used for testing all the converged identification models according to the test set to obtain the identification result output by each identification model, and taking the mean value of all the identification results as the final identification result of the paddy field weeds.
9. An electronic device, characterized in that: it comprises at least one control processor and a memory communicatively connected with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the automatic weed identification method according to any one of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the method of automatically identifying weeds of any one of claims 1 to 7.
CN202210640118.5A 2022-06-08 2022-06-08 Automatic weed identification method, system, equipment and storage medium Pending CN115115935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210640118.5A CN115115935A (en) 2022-06-08 2022-06-08 Automatic weed identification method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115115935A true CN115115935A (en) 2022-09-27

Family

ID=83325942


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315493A (en) * 2023-11-29 2023-12-29 浙江天演维真网络科技股份有限公司 Identification and resolution method, device, equipment and medium for field weeds
CN117315493B (en) * 2023-11-29 2024-02-20 浙江天演维真网络科技股份有限公司 Identification and resolution method, device, equipment and medium for field weeds


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination