CN110807425A - Intelligent weeding system and weeding method - Google Patents
- Publication number
- CN110807425A (application CN201911064435.1A)
- Authority
- CN
- China
- Prior art keywords
- module
- layer
- weeding
- size
- adopting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01D—HARVESTING; MOWING
- A01D34/00—Mowers; Mowing apparatus of harvesters
- A01D34/006—Control or measuring arrangements
- A01D34/008—Control or measuring arrangements for automated or remotely controlled operation
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N2021/8466—Investigation of vegetal material, e.g. leaves, plants, fruits
Abstract
The invention discloses an intelligent weeding system and a weeding method. The system comprises: an image acquisition module, a processor, a path planning module, a display device and a weeding module. The image acquisition module acquires an original farmland image with an OV-series camera and transmits it to the processor. The processor comprises a preprocessing module, which preprocesses the original farmland image, and an identification module, which performs weed identification training and detection on the image using an FPGA (field-programmable gate array) embedded with a lightweight binarized LeNet-5 network and drives the weeding module to weed after walking along the path preset by the path planning module. The display device is interconnected with the processor for auxiliary observation during testing and weeding. Compared with the floating-point operations of conventional neural-network frameworks, binary parameters are better suited to FPGA logic implementation, are lighter, lend themselves to pipelined operation, offer good real-time performance, and resolve the difficulty of balancing speed against accuracy.
Description
Technical Field
The invention relates to the technical field of neural networks, in particular to an intelligent weeding system and a weeding method.
Background
Weed removal in farmland is still mainly done by manual labor, which is inefficient, labor-intensive and carried out in harsh conditions, and is therefore ill-suited to the development of modern agriculture. With the development of precision agriculture, mechanical weeding has improved labor efficiency and reduced labor costs, making weed removal increasingly mechanized and intelligent.
Currently, most existing weed removal systems focus on the structural design of the weeding mechanism, while vision-based methods remain at the theoretical stage. Existing wireless remote-control approaches based on image acquisition do collect images, but still rely on a remote operator to issue instructions, so their level of intelligence needs improvement. The invention provides a deep-learning-based weeding robot: a fully automatic machine that removes weeds to assist vegetation growth. Vision-based real-time detection and operation are the key technical innovation, and help advance the construction of modern agriculture.
With the development of artificial intelligence and deep learning, vision-based target detection methods keep emerging, achieving very high accuracy while meeting a certain detection speed. However, weed identification methods remain at the theoretical research stage and have not been applied in practical systems. Existing image recognition methods fall into two main categories: traditional methods and neural-network-based methods. Traditional weed identification is fast and has good real-time performance, but its accuracy is low; deep-learning-based methods are highly accurate but are limited by hardware resources, costly and power-hungry, and thus hard to popularize.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the defects of the prior art, the invention provides an intelligent weeding system that addresses the problems of low weed identification accuracy, low speed and high hardware power consumption.
The technical scheme is as follows: the invention discloses an intelligent weeding system comprising: an image acquisition module, a processor, a path planning module, a display device and a weeding module. The image acquisition module acquires an original farmland image with an OV-series camera and transmits it to the processor. The processor comprises a preprocessing module, which preprocesses the original farmland image, and an identification module, which performs weed identification training and detection on the image using an FPGA (field-programmable gate array) embedded with a lightweight binarized LeNet-5 network and drives the weeding module to weed after walking along the path preset by the path planning module. The path planning module determines the path covered by the weeding module as it walks. The weeding module clears the weeds identified in the farmland according to the processor's instructions. The display device is interconnected with the processor for auxiliary observation during testing and weeding.
Further, comprising:
in the identification module, the lightweight LeNet-5 binarization network is characterized in that binarization operation is added after each convolution layer in the training process and the detection process on the basis of the LeNet-5 network, so that weight binarization is realized.
Further, comprising:
the binarization operation added after each convolution layer specifically comprises the following steps:
an input layer: the input image size is 128 × 128;
CB1 layer: the first convolution layer plus a binarization operation; the first convolution layer comprises 6 convolution kernels of size 5 × 5 with stride 1, the binarization is obtained with a sign function, and the output size is 128 × 128 × 6;
P1 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 64 × 64 × 6;
CB2 layer: the second convolution layer plus a binarization operation; the second convolution layer comprises 16 convolution kernels of size 3 × 3 with stride 1, the binarization is obtained with a sign function, and the output size is 64 × 64 × 16;
P2 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 32 × 32 × 16;
CB3 layer: the third convolution layer plus a binarization operation; the third convolution layer comprises 16 convolution kernels of size 3 × 3 with stride 1, the binarization is obtained with a sign function, and the output size is 32 × 32 × 16;
P3 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 16 × 16 × 16;
CB4 layer: the fourth convolution layer plus a binarization operation; the fourth convolution layer comprises 64 convolution kernels of size 3 × 3 with stride 1, the binarization is obtained with a sign function, and the output size is 16 × 16 × 64;
P4 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 8 × 8 × 64;
CB5 layer: the fifth convolution layer plus a binarization operation; the fifth convolution layer comprises 128 convolution kernels of size 3 × 3 with stride 1, the binarization is obtained with a sign function, and the output size is 8 × 8 × 128;
P5 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 4 × 4 × 128;
F6 layer: a fully connected layer with 64 nodes; the dot product of the input vector and the weight vector is computed, a bias is added, and the result is output through a ReLU function;
an output layer: 2 nodes representing vegetation and weeds respectively; the loss function is a squared loss function.
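As a quick sanity check on the layer sizes listed above, the spatial dimensions can be traced in a few lines of Python (a sketch: it assumes stride-1 'same'-padded convolutions and 2 × 2 stride-2 pooling, which is what the listed output sizes imply; `lenet5_bin_shapes` is an illustrative name, not from the patent):

```python
def lenet5_bin_shapes(h=128, w=128):
    """Trace the CB/P stage output shapes of the binarized LeNet-5 variant."""
    channels = [6, 16, 16, 64, 128]   # kernel counts of CB1..CB5
    shapes = []
    for c in channels:
        shapes.append((h, w, c))      # CBi: stride-1 'same' conv keeps h, w
        h, w = h // 2, w // 2         # Pi: 2x2 pooling with stride 2
        shapes.append((h, w, c))
    return shapes

stages = lenet5_bin_shapes()
print(stages[0])    # CB1 output: (128, 128, 6)
print(stages[-1])   # P5 output: (4, 4, 128), i.e. 2048 features feeding F6
```

Running the trace reproduces every intermediate size in the list, ending at 4 × 4 × 128 before the 64-node fully connected layer.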
Further, comprising:
the path that the route planning module was preset is the I shape unanimous with farmland plant plants route configuration.
Further, comprising:
the pretreatment module comprises brightness adjustment and mark and soil background separation, wherein the brightness adjustment adopts normalization processing to remove the influence of illumination change; separating the target from the soil background, and firstly, realizing binarization operation by adopting OTSU; then, obtaining a plant area after being processed by a morphological method; and finally, reserving the plant area image.
An intelligent weeding method, comprising:
S1: the image acquisition module acquires an original farmland image with an OV-series camera and transmits it to the processor;
S2: after the preprocessing module of the processor preprocesses the original farmland image, the identification module identifies weeds in the image using an FPGA embedded with the lightweight binarized LeNet-5 network, and drives the weeding module to weed after walking along the path preset by the path planning module;
S3: the weeding module removes the weeds identified in the farmland according to the processor's instructions;
S4: the display device assists in observing weed identification.
Beneficial effects: the method realizes pipelined operation and, compared with traditional feature-based identification, achieves high detection accuracy. Compared with the floating-point operations of conventional neural-network frameworks, binary parameters are better suited to FPGA logic implementation, are lighter, fit pipelined operation, and offer good real-time performance, resolving the difficulty of balancing speed and accuracy. The proposed weed removal scheme removes farmland weeds quickly, efficiently, at low cost and fully automatically, filling the gap in intelligent agricultural weeding technology.
Drawings
FIG. 1 is a schematic diagram of a system according to the present invention;
FIG. 2 is a schematic representation of a feature data set model according to the present invention;
FIG. 3 is a network flow chart of a LeNet-5 binarization network according to the present invention;
FIG. 4 is a logic diagram of the FPGA of the present invention;
FIG. 5 is a flow chart of an internal implementation of the FPGA of the present invention;
fig. 6 is a walking path planning diagram in the path planning module according to the present invention.
Detailed Description
The invention aims to provide a fast, efficient, low-cost and fully automatic intelligent farmland weed removal scheme, filling the gap in intelligent agricultural weeding technology. As shown in FIG. 1, the system comprises: an image acquisition module, a processor, a path planning module, a display device and a weeding module. The image acquisition module acquires original farmland images with an OV-series camera and is used to build a dataset of weeds and vegetation comprising labeled training and test sets; the parameters obtained after training are evaluated on the test set and tuned over multiple rounds so that the identification accuracy requirement of not less than 95% is met. The training platform is implemented on a high-performance GPU computer, as detailed in FIG. 2.
The processor comprises a preprocessing module and an identification module. The preprocessing module preprocesses the original farmland image; the identification module performs weed identification training and detection on the image using an FPGA embedded with the lightweight binarized LeNet-5 network, and drives the weeding module to weed after walking along the path preset by the path planning module. The processor is a Xilinx/Altera-series FPGA whose internal logic is implemented in a hardware description language. The weed identification system is developed around this FPGA processor, which is powerful enough to realize weed identification at low cost, low power consumption and high efficiency, and to issue the system's path planning and plowing-weeding instructions. A lightweight binarized LeNet-5 network suitable for implementing the identification algorithm inside the FPGA was devised accordingly.
The target identification method adds a binarization operation after each convolution layer, in both the training and the detection process, on the basis of the original LeNet-5 network. This realizes weight binarization and eases the FPGA's limited computation and storage resources, as detailed in FIG. 3. In addition, the final output of the network is also 0 or 1, i.e. a judgment of whether the plant is a weed: if so, 1 is output and the resulting high level instructs the machine to operate; otherwise 0 is output, the plowing-weeding instruction is not executed, and the machine continues forward.
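The sign-function weight binarization described above can be illustrated as follows (a minimal sketch, not the patent's FPGA implementation; mapping sign(0) to +1 is an assumed convention, and the 0/1 encoding corresponds to the binary values the patent stores in BRAM):

```python
import numpy as np

def binarize(w):
    """Sign-function weight binarization: +1 for w >= 0, -1 otherwise."""
    return np.where(w >= 0, 1.0, -1.0)

w = np.array([0.37, -0.12, 0.0, -2.5])   # example floating-point weights
wb = binarize(w)                          # -> [ 1., -1.,  1., -1.]
bits = ((wb + 1) // 2).astype(int)        # 0/1 encoding as stored in memory
```

With ±1 weights, the multiply-accumulate of a convolution reduces to additions and subtractions (or XNOR/popcount on the 0/1 encoding), which is what makes the network cheap to pipeline in FPGA logic.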
The algorithm mainly comprises the following contents:
1. an input layer: the input image sizes are unified to 128 × 128;
2. CB1 layer: convolution layer + binarization; 6 convolution kernels of size 5 × 5, stride 1, sign-function binarization; output size 128 × 128 × 6;
3. P1 layer: pooling layer, 2 × 2 sampling region, stride 2; output size 64 × 64 × 6;
4. CB2 layer: convolution layer + binarization; 16 convolution kernels of size 3 × 3, stride 1, sign-function binarization; output size 64 × 64 × 16;
5. P2 layer: pooling layer, 2 × 2 sampling region, stride 2; output size 32 × 32 × 16;
6. CB3 layer: convolution layer + binarization; 16 convolution kernels of size 3 × 3, stride 1, sign-function binarization; output size 32 × 32 × 16;
7. P3 layer: pooling layer, 2 × 2 sampling region, stride 2; output size 16 × 16 × 16;
8. CB4 layer: convolution layer + binarization; 64 convolution kernels of size 3 × 3, stride 1, sign-function binarization; output size 16 × 16 × 64;
9. P4 layer: pooling layer, 2 × 2 sampling region, stride 2; output size 8 × 8 × 64;
10. CB5 layer: convolution layer + binarization; 128 convolution kernels of size 3 × 3, stride 1, sign-function binarization; output size 8 × 8 × 128;
11. P5 layer: pooling layer, 2 × 2 sampling region, stride 2; output size 4 × 4 × 128;
12. F6 layer: fully connected layer with 64 nodes; the dot product of the input vector and the weight vector is computed, a bias is added, and the result is output through a ReLU function;
13. F7 output layer: 2 nodes representing vegetation and weeds respectively, using a squared loss function.
The method realizes pipelined operation and achieves higher detection accuracy than traditional feature-based identification. Compared with floating-point operation in conventional neural-network frameworks, binary parameters are better suited to FPGA logic implementation, are lighter, fit pipelined operation, and offer good real-time performance, resolving the difficulty of balancing speed and accuracy. The FPGA internal logic diagram is shown in FIG. 4: the image data flow is executed sequentially inside the FPGA, while walking-path planning with turning instructions (speed control, etc.) and the plowing-weeding instructions are executed in a mixed sequential and parallel relationship, completing the steps more fully and intelligently. The brightness adjustment step uses normalization to remove the influence of illumination change, making the system suitable for more lighting scenarios. The target is separated from the soil background as follows: first, binarization is performed with OTSU; then the plant region is obtained by morphological processing; finally, the plant-region image is retained for feature extraction in the next stage. The FPGA implementation of the algorithm is shown in FIG. 5: to extract image features in real time, the FPGA internal logic is implemented as a daisy chain with multiple rows of line-buffer memory. The trained parameters of the improved LeNet-5 network are binary (0 or 1); they are stored in BRAM and read out to participate in the computation as the daisy chain executes.
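The multi-row line-buffer (daisy-chain) scheme can be emulated in software to show the idea (a sketch under stated assumptions, not the patent's HDL: only as many image rows as the kernel height are buffered at a time while a window slides across them, just as an FPGA pipeline streams pixels):

```python
import numpy as np

def line_buffer_conv(img, kernel):
    """2-D convolution computed through a rolling set of row buffers."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    rows = [img[r] for r in range(kh)]         # initial line buffers
    for y in range(h - kh + 1):
        window_rows = np.stack(rows)           # the kh buffered rows
        for x in range(w - kw + 1):
            out[y, x] = np.sum(window_rows[:, x:x + kw] * kernel)
        if y + kh < h:                         # shift in the next image row
            rows.pop(0)
            rows.append(img[y + kh])
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3))
res = line_buffer_conv(img, k)   # matches a direct valid 2-D convolution
```

Each output pixel is produced as soon as its 3 × 3 neighborhood has streamed in, which is why the approach needs only `kh` rows of storage rather than the whole frame.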
The path planning module determines the path covered by the weeding module as it walks. An I-shaped or ring-shaped walking route ensures that the effective path covers the farmland area. The walking path is planned as in FIG. 6: the I-shaped route is consistent with the planting-row arrangement of the farmland crops and effectively covers the working area. Beforehand, a collision barrier rope must be set up around the periphery of the farmland area to be worked, ensuring smooth turning at the edges. W in the figure denotes the spacing between two crop rows, set as required before the system operates.
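The I-shaped coverage route can be sketched as a boustrophedon waypoint generator (illustrative only; `i_shaped_path` and its parameters are assumptions, with `row_spacing` playing the role of W in FIG. 6):

```python
def i_shaped_path(length, width, row_spacing):
    """Waypoints covering a rectangular field row by row, alternating direction."""
    path = []
    x = 0.0
    forward = True
    while x <= width:
        y0, y1 = (0.0, length) if forward else (length, 0.0)
        path.append((x, y0))          # enter the row
        path.append((x, y1))          # traverse to the far end
        x += row_spacing              # shift over by one row spacing W
        forward = not forward         # turn and come back down the next row
    return path

# a 50 m x 10 m field with 2 m between crop rows
p = i_shaped_path(length=50.0, width=10.0, row_spacing=2.0)
```

Consecutive waypoint pairs are the straight runs along crop rows; the short hops between pairs are the edge turns that the barrier rope is meant to protect.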
The weeding module clears the weeds identified in the farmland according to the processor's instructions. It is implemented with an industrial-grade mechanical arm and a large plowing hoe. Compared with traditional rolling and soil-turning implements, the mechanical arm operates flexibly and can perform hand-like motions; moreover, traditional rolling implements turn up large amounts of soil during operation, easily blocking the camera and contaminating the machine. The display device is interconnected with the processor for auxiliary observation during testing and weeding, connected to a display via a VGA or HDMI port, which is economical and easy to implement.
The system also comprises a power supply that powers the other modules; considering the weight of a storage battery, it can be realized with solar power.
Based on the above, the technical effect finally realized by the invention is as follows: the system walks along the effective path, acquires image data at the corresponding positions as it walks, and the processor judges whether a weed is present, issues a weed/no-weed instruction accordingly, and moves forward after the instruction has been executed.
On the basis of the weeding system, the invention also provides an intelligent weeding method, which comprises the following steps:
S1: the image acquisition module acquires an original farmland image with an OV-series camera and transmits it to the processor;
S2: after the preprocessing module of the processor preprocesses the original farmland image, the identification module identifies weeds in the image using an FPGA embedded with the lightweight binarized LeNet-5 network, and drives the weeding module to weed after walking along the path preset by the path planning module;
S3: the weeding module removes the weeds identified in the farmland according to the processor's instructions;
S4: the display device assists in observing weed identification.
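Steps S1–S4 can be summarized as one control cycle (hypothetical function names for illustration, not from the patent):

```python
def run_cycle(capture, preprocess, identify, weed, advance):
    """One S1-S4 control cycle of the weeding method."""
    img = capture()                 # S1: acquire a farmland image
    plant = preprocess(img)         # S2a: brightness + background removal
    is_weed = identify(plant) == 1  # S2b: binary network output, 1 = weed
    if is_weed:
        weed()                      # S3: execute the weeding instruction
    advance()                       # continue along the planned path
    return is_weed

# drive one cycle with stub components
log = []
result = run_cycle(
    capture=lambda: "raw_image",
    preprocess=lambda img: "plant_region",
    identify=lambda plant: 1,               # pretend the network says "weed"
    weed=lambda: log.append("weed"),
    advance=lambda: log.append("advance"),
)
```

The cycle repeats at every position along the planned path; when the identification output is 0, only the advance step runs.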
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.
Claims (6)
1. An intelligent weeding system, comprising: the system comprises an image acquisition module, a processor, a path planning module, a display device and a weeding module; the image acquisition module is used for acquiring an original farmland image by adopting OV series cameras and transmitting the original farmland image to the processor; the processor comprises a preprocessing module and an identification module, wherein the preprocessing module is used for preprocessing an original farmland image, and the identification module is used for performing weed identification training and detection according to the original farmland image by adopting an FPGA (field programmable gate array) embedded with a lightweight LeNet-5 binary network, driving the weeding module to walk by adopting a path preset by the path planning module and then weeding; the path planning module is used for determining a path covered by the weeding module when the weeding module walks; the weeding module is used for clearing weeds identified in farmlands according to the instruction of the processor; the display device is interconnected with the processor for auxiliary observation during testing and weeding.
2. The intelligent weeding system according to claim 1, wherein in the identification module, the lightweight binarized LeNet-5 network is obtained from the LeNet-5 network by adding a binarization operation after each convolution layer, in both the training process and the detection process, so as to binarize the weights.
3. The intelligent weeding system according to claim 2, wherein adding a binarization operation after each convolution layer specifically comprises:
Input layer: the input image size is 128 × 128;
CB1 layer: a first convolution layer followed by a binarization operation; the first convolution layer comprises 6 convolution kernels of size 5 × 5 with stride 1, the binarization is performed with the sign function, and the output size is 128 × 128 × 6;
P1 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 64 × 64 × 6;
CB2 layer: a second convolution layer followed by a binarization operation; the second convolution layer comprises 16 convolution kernels of size 3 × 3 with stride 1, the binarization is performed with the sign function, and the output size is 64 × 64 × 16;
P2 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 32 × 32 × 16;
CB3 layer: a third convolution layer followed by a binarization operation; the third convolution layer comprises 16 convolution kernels of size 3 × 3 with stride 1, the binarization is performed with the sign function, and the output size is 32 × 32 × 16;
P3 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 16 × 16 × 16;
CB4 layer: a fourth convolution layer followed by a binarization operation; the fourth convolution layer comprises 64 convolution kernels of size 3 × 3 with stride 1, the binarization is performed with the sign function, and the output size is 16 × 16 × 64;
P4 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 8 × 8 × 64;
CB5 layer: a fifth convolution layer followed by a binarization operation; the fifth convolution layer comprises 128 convolution kernels of size 3 × 3 with stride 1, the binarization is performed with the sign function, and the output size is 8 × 8 × 128;
P5 layer: a pooling layer with a 2 × 2 sampling region and stride 2; the output size is 4 × 4 × 128;
F6 layer: a fully connected layer with 64 nodes; the dot product of the input vector and the weight vector is computed, a bias is added, and the result is output through a ReLU function;
Output layer: 2 nodes, representing vegetation and weeds respectively; the loss function is the squared loss function.
4. The intelligent weeding system according to claim 1, wherein the path preset by the path planning module is i-shaped, consistent with the layout of the crop rows planted in the farmland.
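The patent does not spell out how the row-consistent path is generated; a common realization of such a row-by-row coverage pattern is a serpentine (boustrophedon) sweep, sketched here purely as an assumption:

```python
def serpentine_path(rows, cols):
    """Waypoints covering a rows x cols field grid row by row, reversing
    direction on alternate rows, so the robot sweeps every crop row once."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path
```

For a 2-row, 3-column grid this yields (0,0), (0,1), (0,2), (1,2), (1,1), (1,0): the weeding module finishes one row at the end nearest to the start of the next.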
5. The intelligent weeding system according to claim 1, wherein the preprocessing module performs brightness adjustment and separation of the target from the soil background; the brightness adjustment adopts normalization to remove the influence of illumination changes; the separation of the target from the soil background first adopts OTSU to perform binarization, then obtains the plant region through morphological processing, and finally retains the plant-region image.
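Claim 5 names OTSU for the plant/soil split but gives no implementation details, so the following is a standard textbook sketch of Otsu's method in pure Python (pixels above the returned threshold would be kept as the plant region before the morphological cleanup):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that maximizes between-class variance
    (Otsu's method) for a flat list of integer pixel values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                     # background weight up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                 # foreground weight above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal toy image (dark soil around gray level 10, bright plants around 200), the returned threshold falls between the two modes, so thresholding cleanly separates the plant pixels.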
6. An intelligent weeding method, comprising the following steps:
S1: the image acquisition module acquires an original farmland image with an OV-series camera and transmits the image to the processor;
S2: after the preprocessing module of the processor preprocesses the original farmland image, the identification module recognizes weeds in the image with an FPGA carrying an embedded lightweight binarized LeNet-5 network, and drives the weeding module to travel along the path preset by the path planning module and then weed;
S3: the weeding module removes the weeds identified in the farmland according to instructions from the processor;
S4: the display device assists in observing the weed recognition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911064435.1A CN110807425B (en) | 2019-11-04 | 2019-11-04 | Intelligent weeding system and weeding method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110807425A true CN110807425A (en) | 2020-02-18 |
CN110807425B CN110807425B (en) | 2024-02-27 |
Family
ID=69501001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911064435.1A Active CN110807425B (en) | 2019-11-04 | 2019-11-04 | Intelligent weeding system and weeding method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110807425B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111357468A (en) * | 2020-03-13 | 2020-07-03 | 西安海裕能源科技有限公司 | A full-automatic weeding robot for photovoltaic power plant |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3166025A1 (en) * | 2015-11-05 | 2017-05-10 | Facebook, Inc. | Identifying content items using a deep-learning model |
CN108898112A (en) * | 2018-07-03 | 2018-11-27 | 东北大学 | Near-infrared face liveness detection method and system |
CN109522797A (en) * | 2018-10-16 | 2019-03-26 | 华南农业大学 | Rice seedling and weed recognition method and system based on convolutional neural networks |
CN110135341A (en) * | 2019-05-15 | 2019-08-16 | 河北科技大学 | Weed identification method, apparatus and terminal device |
CN110245551A (en) * | 2019-04-22 | 2019-09-17 | 中国科学院深圳先进技术研究院 | Method for recognizing field crops under multi-weed working conditions |
Also Published As
Publication number | Publication date |
---|---|
CN110807425B (en) | 2024-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110297483B (en) | Method and device for obtaining boundary of area to be operated and operation route planning method | |
Nie et al. | Artificial intelligence and digital twins in sustainable agriculture and forestry: a survey | |
CN110084307B (en) | Mobile robot vision following method based on deep reinforcement learning | |
CN103823371B (en) | Agriculture Tree Precise Fertilization system and fertilizing method based on neural network model | |
Singh et al. | A systematic review of artificial intelligence in agriculture | |
CN112711900A (en) | Crop digital twin modeling method | |
Ma et al. | Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments | |
Badgujar et al. | Agricultural object detection with You Only Look Once (YOLO) Algorithm: A bibliometric and systematic literature review | |
CN115723138A (en) | Control method and device of agricultural robot, electronic equipment and storage medium | |
CN110807425B (en) | Intelligent weeding system and weeding method | |
CN116128672A (en) | Model-data combined driving intelligent greenhouse fertilizer preparation method and system | |
Sane et al. | Artificial intelligence and deep learning applications in crop harvesting robots-A survey | |
Ren et al. | A review of the large-scale application of autonomous mobility of agricultural platform | |
Meng et al. | Real-time statistical algorithm for cherry tomatoes with different ripeness based on depth information mapping | |
WO2024178904A1 (en) | Crop water and fertilizer stress decision-making method and apparatus, and mobile phone terminal | |
Ji et al. | Performance analysis of target information recognition system for agricultural robots | |
Martini et al. | Enhancing navigation benchmarking and perception data generation for row-based crops in simulation | |
Xu et al. | Geometric positioning and color recognition of greenhouse electric work robot based on visual processing | |
Wei et al. | Accurate crop row recognition of maize at the seedling stage using lightweight network | |
Fulkar et al. | Artificial Intelligence Cultivation: Transforming Agriculture for a Smart and Sustainable Future | |
Niu et al. | Sustainable mechatronic solution for agricultural precision farming inspired by space robotics technologies | |
Lee et al. | Developing a Self-Guided Field Robot for Greenhouse Asparagus Monitoring | |
KR102077219B1 (en) | Routing method and system for self-driving vehicle using tree trunk detection | |
Xin et al. | Key Issues and Countermeasures of Machine Vision for Fruit and Vegetable Picking Robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||