CN110135341B - Weed identification method and device and terminal equipment - Google Patents

Weed identification method and device and terminal equipment

Info

Publication number
CN110135341B
Authority
CN
China
Prior art keywords
feature map
weed
target
network
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910403138.9A
Other languages
Chinese (zh)
Other versions
CN110135341A (en)
Inventor
李春明
逯杉婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Science and Technology
Original Assignee
Hebei University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Science and Technology filed Critical Hebei University of Science and Technology
Priority to CN201910403138.9A priority Critical patent/CN110135341B/en
Publication of CN110135341A publication Critical patent/CN110135341A/en
Application granted granted Critical
Publication of CN110135341B publication Critical patent/CN110135341B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01M: CATCHING, TRAPPING OR SCARING OF ANIMALS; APPARATUS FOR THE DESTRUCTION OF NOXIOUS ANIMALS OR NOXIOUS PLANTS
    • A01M21/00: Apparatus for the destruction of unwanted vegetation, e.g. weeds
    • A01M21/02: Apparatus for mechanical destruction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Wood Science & Technology (AREA)
  • Evolutionary Biology (AREA)
  • Zoology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Environmental Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Pest Control & Pesticides (AREA)
  • General Engineering & Computer Science (AREA)
  • Insects & Arthropods (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention is suitable for the technical field of neural networks and provides a weed identification method, a weed identification device and terminal equipment. The method comprises the following steps: acquiring a weed image to be identified; inputting the weed image to be identified into a target convolutional neural network model to obtain a first feature map; inputting the first feature map into a target generative adversarial network, which performs noise-adding processing on the first feature map; and finally, inputting the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position. By combining the convolutional neural network model with the generative adversarial network, the weed identification method strengthens the robustness of the overall weed identification model and improves both the accuracy and the efficiency of weed identification.

Description

Weed identification method and device and terminal equipment
Technical Field
The invention belongs to the technical field of neural networks, and particularly relates to a weed identification method, a weed identification device and terminal equipment.
Background
Flowers, plants and trees in a lawn are the basis of a garden's natural landscape and are essential to its function. Lawn weeds usually grow alongside landscape plants; they come in many varieties, grow rapidly, and are highly resilient. They include not only dicotyledonous broadleaf weeds but also monocotyledonous grassy weeds and sedges. Once established, they compete with landscape plants for nutrients and growing space, can quickly develop into dominant populations if left uncontrolled, and may even cause the original natural landscape to age and degrade prematurely. Considerable cost is therefore invested in removing and controlling weeds during the daily maintenance of garden landscapes.
Accurate weed identification is the key problem that intelligent weeding technology must solve, and various identification techniques have been combined with pesticide-spraying robots. In the recent prior art, however, most weed identification techniques are limited to traditional methods such as threshold segmentation, MATLAB image processing, shallow convolutional neural networks, and multi-feature fusion extraction. These techniques are relatively simple to implement, but their identification efficiency and accuracy still need improvement.
Disclosure of Invention
In view of this, embodiments of the present invention provide a weed identification method, a weed identification device, and a terminal device, so as to solve the prior-art problem of poor weed identification efficiency and accuracy.
A first aspect of embodiments of the present invention provides a weed identification method, comprising:
acquiring a weed image to be identified;
inputting the weed image to be identified into a target convolutional neural network model to obtain a first feature map;
inputting the first feature map into a target generative adversarial network, and performing noise-adding processing on the first feature map;
and inputting the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position.
A second aspect of an embodiment of the present invention provides a weed identifying apparatus comprising:
the weed image identification module is used for acquiring a weed image to be identified;
the first feature map acquisition module is used for inputting the weed image to be identified into a target convolutional neural network model to obtain a first feature map;
the noise-adding processing module is used for inputting the first feature map into a target generative adversarial network and performing noise-adding processing on the first feature map;
and the weed identification module is used for inputting the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position.
A third aspect of embodiments of the present invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the weed identification method as described above when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the weed identification method as described above.
The weed identification method provided by the embodiment of the invention first acquires an image of the weeds to be identified; inputs the weed image to be identified into a target convolutional neural network model to obtain a first feature map; then inputs the first feature map into a target generative adversarial network, which performs noise-adding processing on the first feature map; and finally inputs the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position. By combining the convolutional neural network model with the generative adversarial network, the method strengthens the robustness of the overall weed identification model and improves both the accuracy and the efficiency of weed identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow diagram of a weed identification method provided by an embodiment of the present invention;
fig. 2 is a schematic flow chart of an implementation of S102 in fig. 1 according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an implementation of S202 in fig. 2 according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of an implementation of S104 in fig. 1 according to an embodiment of the present invention;
fig. 5 is a schematic structural view of a weeding robot according to an embodiment of the present invention;
FIG. 6 is a schematic structural view of a weed identification device provided in an embodiment of the present invention;
fig. 7 is a schematic diagram of a terminal device provided in an embodiment of the present invention;
fig. 8 is a schematic diagram of the joint network formed by the target convolutional neural network model and the target generative adversarial network provided by this embodiment.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above drawings are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus. Furthermore, the terms "first," "second," "third," etc. are used to distinguish between different objects and do not describe a particular order.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 shows the implementation flow of the weed identification method provided by an embodiment of the present invention. For convenience of explanation, only the parts relevant to this embodiment are shown, detailed as follows:
as shown in fig. 1, a weed identification method provided in an embodiment of the present invention includes:
s101: and acquiring an image of the weeds to be identified.
The embodiment of the invention provides a weeding robot whose overall system structure is shown in fig. 5. The weeding robot comprises a manipulator device 1, a weed collecting device 2, a mechanical arm device 3, a main control module 4, a power supply module 5, a vision module 6 and other devices. The vision module 6 is arranged above the four-wheel mobile platform and is used to capture the weed image to be identified and transmit it to the main control module 4 for analysis and processing. The main control module 4 is the control core of the weeding robot and can receive wireless communication signals from an upper computer to control the operation of the mechanical arm device 3. The analysis and processing of the visual images is also completed by the main control module 4, which converts the weed identification result into instructions and sends them to the mechanical arm device 3; the mechanical arm device 3 then operates at the appropriate time according to the identification result to achieve precise weeding.
The vision module 6 is installed above the front of the weeding robot with the camera's optical axis at 60 degrees to the vertical; it captures top-down images in JPG format for visual analysis by the main control module 4. The main control module 4 runs a Linux system, and the lawn weed and debris detection and identification algorithm is trained with the deep learning framework TensorFlow. After the weed distribution is obtained, the main control module controls the mechanical arm device 3 accordingly to achieve precise weeding.
In this embodiment, the execution subject of the weed identification method provided by the present invention may be the main control module 4 of the weeding robot, or another terminal device capable of communicating with the vision module 6; the following description takes the main control module 4 as an example.
In this embodiment, the main control module 4 first acquires the weed image to be identified sent by the vision module 6; the image contains weeds as well as other, useful plants, and has a fixed size.
S102: And inputting the weed image to be identified into a target convolutional neural network model to obtain a first feature map.
In this embodiment, the target convolutional neural network model (CNN) may be a Faster R-CNN network model. Through the target convolutional neural network model, a number of accurate candidate regions can be generated in the weed image to be identified, yielding a first feature map.
S103: And inputting the first feature map into a target generative adversarial network, and performing noise-adding processing on the first feature map.
In this embodiment, the noise-adding processing performed on the first feature map in the generative adversarial network produces a feature map with a good noise-adding effect, which facilitates identification of the first feature map by the target identification network.
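As an illustrative sketch of this noise-adding step (the patent does not specify the generator's architecture, so the layer sizes, the residual design, and all names below are assumptions), the generator can be a small convolutional network that learns an additive perturbation for the feature map:

```python
import tensorflow as tf

def build_noise_generator(channels=512):
    """Hypothetical generator: maps a feature map to a noised feature map.
    512 channels matches a VGG-16 conv5 output, which is an assumption."""
    inputs = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(inputs)
    perturbation = tf.keras.layers.Conv2D(channels, 3, padding="same")(x)
    # The "noised" first feature map is the input plus the learned perturbation.
    outputs = tf.keras.layers.Add()([inputs, perturbation])
    return tf.keras.Model(inputs, outputs, name="noise_generator")

generator = build_noise_generator()
feature_map = tf.random.normal([1, 37, 50, 512])   # stand-in for a first feature map
noised_map = generator(feature_map)                # same shape, noise added
```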
S104: And inputting the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position.
In this embodiment, the target identification network determines the classification probability of each candidate region in the first feature map and compares it with a set threshold: if the classification probability of a candidate region is greater than the threshold, weeds are judged to be present in that region; if it is less than or equal to the threshold, no weeds are judged present. The predicted weed position is then determined from the regions judged to contain weeds. Optionally, the set threshold may be 0.5, 0.6, or another value.
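For illustration only, the per-region threshold decision described above might look as follows (the function name and box format are hypothetical; 0.5 is one of the example thresholds mentioned in this embodiment):

```python
import tensorflow as tf

def decide_weed_regions(class_probs, boxes, threshold=0.5):
    """class_probs: [N] weed probability per candidate region;
    boxes: [N, 4] region coordinates. Keeps regions judged to contain weeds."""
    keep = class_probs > threshold   # weeds present only above the set threshold
    return tf.boolean_mask(boxes, keep), tf.boolean_mask(class_probs, keep)

probs = tf.constant([0.92, 0.31, 0.77])
boxes = tf.constant([[10., 10., 60., 80.],
                     [5., 5., 20., 20.],
                     [40., 30., 90., 95.]])
weed_boxes, weed_probs = decide_weed_regions(probs, boxes)  # keeps regions 0 and 2
```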
As can be seen from the above, the weed identification method provided by the embodiment of the present invention first acquires an image of the weeds to be identified; inputs the weed image to be identified into a target convolutional neural network model to obtain a first feature map; then inputs the first feature map into a target generative adversarial network, which performs noise-adding processing on the first feature map; and finally inputs the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position. By combining the convolutional neural network model with the generative adversarial network, the method strengthens the robustness of the overall weed identification model and improves both the accuracy and the efficiency of weed identification.
As shown in fig. 2, in an embodiment of the present invention, the target convolutional neural network model comprises a target VGG (Visual Geometry Group) network model and a target region proposal network model; fig. 2 shows the specific flow of S102 in the embodiment corresponding to fig. 1, which comprises:
s201: and inputting the weed image to be identified into the target VGG network model, and performing convolution feature extraction on the weed image to be identified to obtain a second feature map of the weed image to be identified.
In this embodiment, as shown in fig. 8, the target Faster R-CNN network model comprises the target VGG network model and the target region proposal network model. The target VGG network model is a network comprising 13 convolutional layers, 13 activation layers and 4 pooling layers. After the weed image to be identified is input into the target convolutional neural network model, it first passes through the target VGG network model, which computes the convolutional features to obtain the second feature map.
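A minimal sketch of such a backbone in TensorFlow (the framework named earlier), assuming the standard VGG-16 configuration: taking the output at block5_conv3 gives exactly 13 convolutional layers, each followed by a ReLU activation, and 4 pooling layers, since the fifth pooling layer is left out of the feature path as is conventional for Faster R-CNN:

```python
import tensorflow as tf

# Standard VGG-16 truncated at block5_conv3: 13 conv layers (each with a
# ReLU activation) and 4 max-pooling layers, matching the backbone described
# above. weights=None because the patent does not specify pretraining;
# ImageNet weights would be a common alternative.
vgg = tf.keras.applications.VGG16(include_top=False, weights=None)
backbone = tf.keras.Model(
    inputs=vgg.input,
    outputs=vgg.get_layer("block5_conv3").output,  # stop before the 5th pooling
    name="vgg16_backbone",
)

image = tf.random.normal([1, 600, 800, 3])  # fixed-size weed image to be identified
second_feature_map = backbone(image)        # spatial stride of 16
```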
S202: And inputting the second feature map into the target region proposal network model, and performing region candidate box division on the second feature map to generate the first feature map.
In this embodiment, the target region proposal network model (RPN) uses a Softmax two-class classification function to decide whether each location of the second feature map belongs to a target or to the background, obtaining accurate positions of the region candidate boxes for subsequent target identification and detection.
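A sketch of the RPN head under common Faster R-CNN assumptions (9 anchors per location and a 512-channel shared convolution are conventional choices, not stated in the patent):

```python
import tensorflow as tf

class RPNHead(tf.keras.layers.Layer):
    """Two-class (target vs. background) head of the region proposal network."""

    def __init__(self, num_anchors=9):
        super().__init__()
        self.shared = tf.keras.layers.Conv2D(512, 3, padding="same", activation="relu")
        self.cls = tf.keras.layers.Conv2D(num_anchors * 2, 1)  # target/background
        self.reg = tf.keras.layers.Conv2D(num_anchors * 4, 1)  # box refinements

    def call(self, feature_map):
        x = self.shared(feature_map)
        logits = tf.reshape(self.cls(x), [tf.shape(x)[0], -1, 2])
        # Softmax over the two classes for every anchor at every location,
        # as in the two-class decision described above.
        scores = tf.nn.softmax(logits)
        deltas = tf.reshape(self.reg(x), [tf.shape(x)[0], -1, 4])
        return scores, deltas

rpn = RPNHead()
scores, deltas = rpn(tf.random.normal([1, 37, 50, 512]))  # per-anchor outputs
```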
As shown in fig. 3, in an embodiment of the present invention, fig. 3 shows a specific implementation flow of step S202 in the embodiment corresponding to fig. 2, and a process thereof is detailed as follows:
s301: and generating a preset number of first region candidate frames in the second feature map according to the classification function of the target region proposed network model to obtain a third feature map.
In this embodiment, the preset number may be 300; that is, 300 first region candidate boxes may be generated in the second feature map to facilitate the subsequent determination of the weed positions.
S302: Inputting the second feature map and the third feature map into a pooling layer of the target region proposal network model, and resizing the preset number of first region candidate boxes in the third feature map to generate a first feature map marked with a preset number of second region candidate boxes, each of which has a fixed size.
In this embodiment, as shown in fig. 8, the pooling layer integrates the image features of the convolutional layers with the information of the first region candidate boxes, so that the input third feature map yields the preset number of fixed-size second region candidate boxes.
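One common way to realize this fixed-size resizing is TensorFlow's crop-and-resize, used below as an illustrative stand-in for the pooling layer; the 7x7 output size is a conventional Faster R-CNN choice, while the count of 300 proposals follows S301:

```python
import tensorflow as tf

def roi_pooling(feature_map, boxes, crop_size=(7, 7)):
    """feature_map: [1, H, W, C]; boxes: [N, 4] normalized (y1, x1, y2, x2).
    Returns [N, 7, 7, C]: one fixed-size crop per candidate box."""
    box_indices = tf.zeros([tf.shape(boxes)[0]], dtype=tf.int32)  # all from image 0
    return tf.image.crop_and_resize(feature_map, boxes, box_indices, crop_size)

feature_map = tf.random.normal([1, 37, 50, 512])
# 300 stand-in proposals with valid (y1, x1) < (y2, x2) coordinates.
y1x1 = tf.random.uniform([300, 2], 0.0, 0.5)
y2x2 = y1x1 + tf.random.uniform([300, 2], 0.1, 0.5)
pooled = roi_pooling(feature_map, tf.concat([y1x1, y2x2], axis=1))  # [300, 7, 7, 512]
```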
As shown in fig. 4, in an embodiment of the present invention, fig. 4 shows a specific implementation flow of S104 in fig. 1, which includes:
s401: inputting the first feature map subjected to the noise processing into a classification layer of the target identification network to obtain the classification probability of the first feature map;
s402: inputting the first feature map subjected to noise adding processing into a regression layer of the target identification network, and determining the optimal border of the weeds in the first feature map to obtain the predicted position of the weeds.
In this embodiment, as shown in fig. 8, the target identification network comprises a classification layer and a regression layer; the classification layer uses a classification loss function and the regression layer a regression loss function. The classification loss function determines the probability that weeds are present in each second region candidate box of the first feature map, and these per-box probabilities are combined to obtain the classification probability of weeds being present in the first feature map.
Second, the second region candidate boxes that contain no weeds are eliminated according to the per-box weed probabilities, and the remaining boxes that do contain weeds are fused to obtain the optimal bounding box indicating the weed position; the predicted weed position is then determined from the location of this optimal bounding box in the image.
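A standard concrete form of this elimination-and-fusion step is non-maximum suppression, sketched below with TensorFlow's built-in NMS; treating the surviving highest-probability boxes as the optimal bounding boxes is an assumption about the fusion described above:

```python
import tensorflow as tf

def select_weed_boxes(boxes, weed_probs, max_boxes=10, iou_threshold=0.5):
    """Suppress overlapping candidate boxes, keeping the highest-probability
    box in each cluster; the survivors indicate the predicted weed positions."""
    keep = tf.image.non_max_suppression(
        boxes, weed_probs, max_output_size=max_boxes, iou_threshold=iou_threshold)
    return tf.gather(boxes, keep), tf.gather(weed_probs, keep)

boxes = tf.constant([[10., 10., 60., 80.],       # two heavily overlapping boxes...
                     [12., 11., 62., 82.],
                     [200., 150., 260., 230.]])  # ...and one separate box
probs = tf.constant([0.92, 0.88, 0.77])
best_boxes, best_probs = select_weed_boxes(boxes, probs)  # keeps boxes 0 and 2
```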
In an embodiment of the present invention, before weed identification is performed, the method further comprises generating the target convolutional neural network, the target generative adversarial network, and the target identification network, as follows:
Step 1: obtaining weed training samples.
First, a data set is prepared: a large number of weed training samples serve as the training data set, and each training sample is annotated with bounding boxes indicating the weed positions and their classifications.
Step 2: inputting the weed training sample into an initial convolutional neural network model, and training the initial convolutional neural network model to obtain the target convolutional neural network model and a first training sample feature map.
In this embodiment, a large number of weed training samples are input into the initial convolutional neural network model to train it into the target convolutional neural network model; the first training sample feature map computed by the target convolutional neural network model can then be input into the initial generative adversarial network for its training.
Step 3: inputting the first training sample feature map into an initial generative adversarial network, which performs noise-adding on the first training sample feature map to obtain a second training sample feature map.
In this embodiment, the first training sample feature map is input into the initial generative adversarial network, whose generator performs noise-adding processing on it to produce the second training sample feature map, i.e. a noise-added sample feature map.
Step 4: inputting the second training sample feature map into an initial identification network to obtain the classification probability and the optimal bounding box of the second training sample feature map.
Step 5: feeding back the classification probability of the second training sample feature map to the initial generative adversarial network, adjusting the network parameters of the initial generative adversarial network according to that classification probability, and replacing the initial generative adversarial network of step 3 with the adjusted network.
In this embodiment, after the generator of the generative adversarial network adds noise to the first training sample feature map, the second training sample feature map is input into the initial identification network, whose classification layer acts as the discriminator of the initial generative adversarial network and yields the classification probability of the second training sample feature map. This classification probability is then fed back to the generator, which adjusts its own parameters accordingly, thereby tuning the noise-adding effect on the samples.
Further, the adjusted network serves as the new initial generative adversarial network, and steps 3 to 5 are repeated until the expected distribution of the classification loss function of the initial identification network equals the expected distribution of its regression loss function. At that point the noise-adding effect is considered optimal, training of the initial generative adversarial network and the initial identification network is complete, and the target generative adversarial network and the target identification network are obtained.
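A schematic version of one round of steps 3 to 5, with the identification network's classification layer in the discriminator role; the optimizer, the loss, and the sign convention are simplifying assumptions (the patent states only that the generator's parameters are adjusted from the fed-back classification probability), and the stopping test comparing the expected distributions of the classification and regression losses is omitted:

```python
import tensorflow as tf

gen_opt = tf.keras.optimizers.Adam(1e-4)

def adversarial_step(generator, recognizer, sample_features, labels):
    """generator and recognizer are hypothetical models like those sketched
    earlier; recognizer returns a weed probability per training sample."""
    with tf.GradientTape() as tape:
        noised = generator(sample_features)       # step 3: noise-adding
        class_probs = recognizer(noised)          # step 4: classification layer
        cls_loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(labels, class_probs))
        # Step 5: feed the classification result back to the generator.
        # Training the generator to raise the recognizer's loss (harder
        # samples) is one plausible adversarial objective.
        gen_loss = -cls_loss
    grads = tape.gradient(gen_loss, generator.trainable_variables)
    gen_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return cls_loss
```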
As can be seen from the above, in this embodiment the input image passes through several processes: convolutional features are computed with the VGG network, accurate candidate regions are generated with the RPN, noise is added with the generative adversarial network, and classification and regression training in the identification network yields the optimal weed bounding box. Because lawn weed images are highly similar and the targets are small within large images, parameters such as the maximum iteration count and the batch size are tuned for further optimization to maximize the detection and recognition rates, finally completing the detection and identification of weeds and debris in the lawn. After the main control module 4 obtains the weed distribution, it controls the mechanical arm device 3 accordingly to achieve precise weeding.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
As shown in fig. 6, an embodiment of the invention provides a weed identification apparatus 100 for performing the method steps in the embodiment corresponding to fig. 1, comprising:
the weed image identification module 110 is used for acquiring a weed image to be identified;
the first feature map acquisition module 120 is configured to input the weed image to be identified into a target convolutional neural network model to obtain a first feature map;
a noise-adding processing module 130, configured to input the first feature map into a target generative adversarial network and perform noise-adding processing on the first feature map;
and a weed identification module 140, configured to input the noise-added first feature map into the target identification network to obtain the classification probability of the first feature map and the predicted weed position.
As can be seen from the above, the weed identification method provided by the embodiment of the present invention first acquires an image of the weeds to be identified; inputs the weed image to be identified into a target convolutional neural network model to obtain a first feature map; then inputs the first feature map into a target generative adversarial network, which performs noise-adding processing on the first feature map; and finally inputs the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position. By combining the convolutional neural network model with the generative adversarial network, the method strengthens the robustness of the overall weed identification model and improves both the accuracy and the efficiency of weed identification.
In an embodiment of the invention, the target convolutional neural network model comprises a target VGG network model and a target region proposal network model; the first feature map acquisition module 120 in the embodiment corresponding to fig. 6 further comprises structures for executing the method steps of the embodiment corresponding to fig. 2, including:
the second feature map acquisition unit is used for inputting the weed image to be identified into the target VGG network model and performing convolution feature extraction on it to obtain a second feature map of the weed image to be identified;
and the first feature map acquisition unit is used for inputting the second feature map into the target region proposal network model and performing region candidate box division on it to generate the first feature map.
In an embodiment of the present invention, the first feature map obtaining unit further includes a structure for executing the method steps in the embodiment corresponding to fig. 3, and the structure includes:
the second classification subunit is used for generating a preset number of first region candidate boxes in the second feature map according to the classification function of the target region proposal network model to obtain a third feature map;
and the first feature map acquisition subunit is used for inputting the second feature map and the third feature map into a pooling layer of the target region proposal network model and resizing the preset number of first region candidate boxes in the third feature map to generate a first feature map marked with a preset number of second region candidate boxes, each of which has a fixed size.
In one embodiment of the present invention, the weed identification module 140 comprises:
the classification probability acquisition unit is used for inputting the noise-added first feature map into a classification layer of the target identification network to obtain the classification probability of the first feature map;
and the predicted position acquisition unit is used for inputting the noise-added first feature map into a regression layer of the target identification network and determining the optimal bounding box of the weeds in the first feature map to obtain the predicted weed position.
In an embodiment of the present invention, the weed identification apparatus 100 further comprises:
the training sample acquisition module is used for acquiring a weed training sample;
the convolutional neural network model training module is used for inputting the weed training samples into an initial convolutional neural network model and training it to obtain the target convolutional neural network model and a first training sample feature map;
the generative adversarial network training module is used for inputting the first training sample feature map into an initial generative adversarial network, which performs noise-adding on the first training sample feature map to obtain a second training sample feature map;
the initial identification network training module is used for inputting the second training sample feature map into an initial identification network to obtain the classification probability and the optimal bounding box of the second training sample feature map;
and the feedback module is used for feeding back the classification probability of the second training sample feature map to the initial generative adversarial network, adjusting its network parameters according to that classification probability, and replacing the previous initial generative adversarial network with the adjusted one, repeating until the expected distribution of the classification loss function of the initial identification network equals the expected distribution of its regression loss function, thereby obtaining the target generative adversarial network and the target identification network.
In one embodiment, the weed identification apparatus 100 further comprises other functional modules/units for carrying out the method steps of the above method embodiments.
Fig. 7 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 7, the terminal device 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72 stored in the memory 71 and executable on the processor 70. The processor 70, when executing the computer program 72, performs the steps in the weed identification method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the modules/units in the above device embodiments, such as the functions of modules 110 to 140 shown in fig. 6.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 72 in the terminal device 7.
The terminal device 7 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation of it; the terminal device may comprise more or fewer components than shown, combine some components, or have different components; for example, it may further comprise input/output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing the computer program and other programs and data required by the terminal device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (9)

1. A method of weed identification, comprising:
acquiring a weed image to be identified;
inputting the weed image to be identified into a target convolutional neural network model to obtain a first feature map;
inputting the first feature map into a target generative adversarial network, and performing noise-adding processing on the first feature map;
inputting the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position;
wherein, before the weed image to be identified is acquired, a target convolutional neural network, a target generative adversarial network and a target identification network are generated;
the generation of the target convolutional neural network, the target generative adversarial network and the target identification network comprises:
step 1, obtaining a weed training sample;
step 2, inputting the weed training sample into an initial convolutional neural network model, and training the initial convolutional neural network model to obtain the target convolutional neural network model and a first training sample feature map;
step 3, inputting the first training sample feature map into an initial generative adversarial network, which performs noise-adding on the first training sample feature map to obtain a second training sample feature map;
step 4, inputting the second training sample feature map into an initial identification network to obtain the classification probability and the optimal bounding box of the second training sample feature map;
step 5, feeding back the classification probability of the second training sample feature map to the initial generative adversarial network, adjusting the network parameters of the initial generative adversarial network according to that classification probability, and replacing the initial generative adversarial network in step 3 with the current one;
and repeating steps 3 to 5 until the expected distribution of the classification loss function of the initial identification network equals the expected distribution of its regression loss function, thereby obtaining the target generative adversarial network and the target identification network.
2. The weed identification method of claim 1, wherein the target convolutional neural network model comprises a target VGG network model and a target region proposal network model;
the inputting of the weed image to be identified into the target convolutional neural network model to obtain the first feature map comprises:
inputting the weed image to be identified into the target VGG network model, and performing convolution feature extraction on the weed image to be identified to obtain a second feature map of the weed image to be identified;
and inputting the second feature map into the target region proposal network model, and performing region candidate box division on the second feature map to generate the first feature map.
3. The weed identification method according to claim 2, wherein the inputting of the second feature map into the target region proposal network model and the performing of region candidate box division on the second feature map to generate the first feature map comprise:
generating a preset number of first region candidate boxes in the second feature map according to the classification function of the target region proposal network model to obtain a third feature map;
inputting the second feature map and the third feature map into a pooling layer of the target region proposal network model, and resizing the preset number of first region candidate boxes in the third feature map to generate a first feature map marked with a preset number of second region candidate boxes, wherein each second region candidate box has a fixed size.
4. The weed identification method according to claim 1, wherein the inputting of the noise-added first feature map into the target identification network to obtain the classification probability of the first feature map and the predicted weed position comprises:
inputting the noise-added first feature map into a classification layer of the target identification network to obtain the classification probability of the first feature map;
and inputting the noise-added first feature map into a regression layer of the target identification network, and determining the optimal bounding box of the weeds in the first feature map to obtain the predicted weed position.
5. A weed identification device, comprising:
the weed image identification module is used for acquiring a weed image to be identified;
the first feature map acquisition module is used for inputting the weed image to be identified into a target convolutional neural network model to obtain a first feature map;
the noise-adding processing module is used for inputting the first feature map into a target generative adversarial network and performing noise-adding processing on the first feature map;
the weed identification module is used for inputting the noise-added first feature map into a target identification network to obtain the classification probability of the first feature map and the predicted weed position;
the device further comprises:
the training sample acquisition module is used for acquiring a weed training sample;
the convolutional neural network model training module is used for inputting the weed training samples into an initial convolutional neural network model and training it to obtain the target convolutional neural network model and a first training sample feature map;
the generative adversarial network training module is used for inputting the first training sample feature map into an initial generative adversarial network, which performs noise-adding on the first training sample feature map to obtain a second training sample feature map;
the initial identification network training module is used for inputting the second training sample feature map into an initial identification network to obtain the classification probability and the optimal bounding box of the second training sample feature map;
and the feedback module is used for feeding back the classification probability of the second training sample feature map to the initial generative adversarial network, adjusting its network parameters according to that classification probability, and replacing the previous initial generative adversarial network with the adjusted one; this is repeated until the expected distribution of the classification loss function of the initial identification network equals the expected distribution of its regression loss function, thereby obtaining the target generative adversarial network and the target identification network.
6. The weed identification apparatus of claim 5, wherein the target convolutional neural network model comprises a target VGG network model and a target region proposal network model;
the first feature map acquisition module includes:
the second feature map acquisition unit is used for inputting the weed image to be identified into the target VGG network model and performing convolution feature extraction on it to obtain a second feature map of the weed image to be identified;
and the first feature map acquisition unit is used for inputting the second feature map into the target region proposal network model and performing region candidate box division on it to generate the first feature map.
7. The weed identification apparatus according to claim 6, wherein the first feature map acquisition unit comprises:
the second classification subunit is used for generating a preset number of first region candidate boxes in the second feature map according to the classification function of the target region proposal network model to obtain a third feature map;
and the first feature map acquisition subunit is used for inputting the second feature map and the third feature map into a pooling layer of the target region proposal network model and resizing the preset number of first region candidate boxes in the third feature map to generate a first feature map marked with a preset number of second region candidate boxes, wherein each second region candidate box has a fixed size.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201910403138.9A 2019-05-15 2019-05-15 Weed identification method and device and terminal equipment Active CN110135341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910403138.9A CN110135341B (en) 2019-05-15 2019-05-15 Weed identification method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910403138.9A CN110135341B (en) 2019-05-15 2019-05-15 Weed identification method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110135341A CN110135341A (en) 2019-08-16
CN110135341B true CN110135341B (en) 2021-05-18

Family

ID=67574169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910403138.9A Active CN110135341B (en) 2019-05-15 2019-05-15 Weed identification method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN110135341B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12035648B2 (en) 2020-02-06 2024-07-16 Deere & Company Predictive weed map generation and control system

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11589509B2 (en) 2018-10-26 2023-02-28 Deere & Company Predictive machine characteristic map generation and control system
US11957072B2 (en) 2020-02-06 2024-04-16 Deere & Company Pre-emergence weed detection and mitigation system
US11641800B2 (en) 2020-02-06 2023-05-09 Deere & Company Agricultural harvesting machine with pre-emergence weed detection and mitigation system
US11672203B2 (en) 2018-10-26 2023-06-13 Deere & Company Predictive map generation and control
US11653588B2 (en) 2018-10-26 2023-05-23 Deere & Company Yield map generation and control system
CN110807425B (en) * 2019-11-04 2024-02-27 金陵科技学院 Intelligent weeding system and weeding method
CN111126503B (en) * 2019-12-27 2023-09-26 北京同邦卓益科技有限公司 Training sample generation method and device
CN111340141A (en) * 2020-04-20 2020-06-26 天津职业技术师范大学(中国职业培训指导教师进修中心) Crop seedling and weed detection method and system based on deep learning
CN111553258B (en) * 2020-04-26 2023-06-13 江苏大学 Tea garden identification weeding method by utilizing convolutional neural network
CN112541383B (en) * 2020-06-12 2021-12-28 广州极飞科技股份有限公司 Method and device for identifying weed area
EP3861842B1 (en) * 2020-10-08 2023-08-16 Deere & Company Predictive weed map generation and control system
EP3981236B1 (en) * 2020-10-08 2024-07-10 Deere & Company Predictive map generation and control system
EP3981232B1 (en) * 2020-10-08 2024-07-17 Deere & Company Predictive map generation and control system
US20240020971A1 (en) * 2021-03-31 2024-01-18 Upl Ltd System and method for identifying weeds
CN115375997B (en) * 2022-08-23 2023-10-31 黑龙江工程学院 Sea surface target detection method, target detection device and terminal equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598287A (en) * 2018-10-30 2019-04-09 中国科学院自动化研究所 The apparent flaws detection method that confrontation network sample generates is generated based on depth convolution

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019019199A1 (en) * 2017-07-28 2019-01-31 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image conversion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598287A (en) * 2018-10-30 2019-04-09 中国科学院自动化研究所 The apparent flaws detection method that confrontation network sample generates is generated based on depth convolution

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yarn-dyed fabric defect identification using GAN and Faster R-CNN; Li Ming et al.; Journal of Xi'an Polytechnic University; 2018-12-31; Vol. 32, No. 6; pp. 663-669 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12035648B2 (en) 2020-02-06 2024-07-16 Deere & Company Predictive weed map generation and control system

Also Published As

Publication number Publication date
CN110135341A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110135341B (en) Weed identification method and device and terminal equipment
CN110689519B (en) Fog drop deposition image detection system and method based on yolo network
CN111340864A (en) Monocular estimation-based three-dimensional scene fusion method and device
CN110210434A (en) Pest and disease damage recognition methods and device
US11850747B2 (en) Action imitation method and robot and computer readable medium using the same
CN105320945A (en) Image classification method and apparatus
Naga Srinivasu et al. A comparative review of optimisation techniques in segmentation of brain MR images
CN109344813A (en) A kind of target identification and scene modeling method and device based on RGBD
CN104063686A (en) System and method for performing interactive diagnosis on crop leaf segment disease images
CN112036261A (en) Gesture recognition method and device, storage medium and electronic device
CN112465038A (en) Method and system for identifying disease and insect pest types of fruit trees
CN115512238A (en) Method and device for determining damaged area, storage medium and electronic device
CN109409209A (en) A kind of Human bodys' response method and apparatus
CN115810133A (en) Welding control method based on image processing and point cloud processing and related equipment
CN111091122A (en) Training and detecting method and device for multi-scale feature convolutional neural network
CN113344009B (en) Light and small network self-adaptive tomato disease feature extraction method
WO2024178904A1 (en) Crop water and fertilizer stress decision-making method and apparatus, and mobile phone terminal
CN103942554A (en) Image identifying method and device
CN117612087A (en) Bird-expelling method, equipment and system for transmission tower based on bird species identification
CN112560718A (en) Method and device for acquiring material information, storage medium and electronic device
CN116739739A (en) Loan amount evaluation method and device, electronic equipment and storage medium
CN116311228A (en) Uncertainty sampling-based corn kernel identification method and system and electronic equipment
CN111539350A (en) Intelligent identification method for crop diseases and insect pests
Qi et al. Method for Segmentation of Bean Crop and Weeds Based on Improved UperNet
CN114511753A (en) Target detection model updating method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant