CN112446388A - Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model - Google Patents
- Publication number
- CN112446388A (application number CN202011410890.5A)
- Authority
- CN
- China
- Prior art keywords
- detection model
- network
- training
- image
- lightweight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Combinations of networks
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30188—Vegetation; Agriculture
Abstract
The invention discloses a method and a system for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model. The method comprises the following steps: acquiring a multi-class vegetable seedling image dataset and performing data enhancement on the image dataset; labeling the enhanced dataset, and dividing the labeled dataset into a training set, a verification set and a test set; building a lightweight two-stage detection model on the TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as the preposed basic network, fusing feature information from different levels of the preposed basic network with a feature pyramid network, and compressing the channel dimensions and number of fully connected layers of the detection head network; initializing the parameters of the lightweight two-stage detection model, and training the detection model on the training set based on stochastic gradient descent; and, after training is finished, inputting the image to be recognized into the detection model and outputting the category and position information of the vegetable seedlings. The method addresses the low accuracy and poor real-time performance of traditional vegetable seedling detection algorithms.
Description
Technical Field
The invention relates to the field of agricultural crop detection and identification, in particular to a method and a system for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model.
Background
Vegetables are rich in vitamins, minerals and dietary fiber, and are among the foods most important for maintaining nutritional balance and good health. In recent years, China's vegetable planting area has stabilized at about 300 million mu, with an annual output of roughly 700 million tons, surpassing grain to become the country's largest agricultural product by output. The rapid development of the vegetable industry meets people's daily needs, but problems such as excessive fertilization and pesticide overuse during cultivation harm both the ecological environment and human health. With the development of electronic and computer technology, automated intelligent agricultural equipment is gradually being applied to agricultural production, improving the yield and safety of vegetable crops through means such as targeted pesticide spraying, variable-rate fertilization and mechanical weeding.
Traditional vegetable seedling detection methods identify and locate vegetables based on one or more features such as color, shape, texture, spectrum and position. In practice, however, they can only detect crops in specific environments and are easily affected by factors such as natural illumination, background noise and occlusion by branches and leaves, which reduces recognition accuracy.
Compared with traditional methods, target detection based on deep learning has developed rapidly in recent years. Convolutional, pooling and fully connected layers extract features of the input image from shallow to deep levels, and accurate detection is achieved through classification and position regression. In the field of precision agriculture, deep-learning-based detection models are increasingly applied to crop identification and detection with remarkable results. According to the number of steps required to classify and localize targets, detection models can be divided into one-stage and two-stage models. One-stage models are represented by the SSD and YOLO series, and two-stage models by Faster R-CNN and R-FCN. Compared with one-stage models, two-stage models achieve higher recognition precision but take longer, making it difficult to meet the requirement of rapid crop detection in complex agricultural environments.
Disclosure of Invention
The invention provides a multi-class vegetable seedling identification method based on a lightweight two-stage detection model, aiming to improve both the precision and the speed of vegetable seedling detection in natural environments. The lightweight two-stage detection model adopts mixed depth separation convolution as the preposed basic network to process the input image, improving the speed and efficiency of image feature extraction; it introduces a feature pyramid network to fuse feature information from different levels of the preposed basic network, enhancing the recognition precision of the detection model on multi-scale targets; and, by compressing the channel dimensions and the number of fully connected layers of the detection head network, it reduces the model's parameter scale and computational complexity.
In a first aspect, the invention provides a method for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model, which comprises the following steps:
S01, acquiring a multi-class vegetable seedling image dataset and performing data enhancement on the image dataset;
S02, labeling the enhanced dataset, and dividing the labeled dataset into a training set, a verification set and a test set;
S03, building a lightweight two-stage detection model on the TensorFlow deep learning framework: designing a mixed deep separation convolutional neural network as the preposed basic network, fusing feature information from different levels of the preposed basic network with a feature pyramid network, and compressing the channel dimensions and number of fully connected layers of the detection head network;
S04, initializing the parameters of the lightweight two-stage detection model, and training the detection model on the training set based on stochastic gradient descent;
S05, after training is finished, inputting the image to be recognized into the detection model, and outputting the category and position information of the vegetable seedlings.
Optionally, the acquiring an image dataset of the multi-class vegetable seedlings in step S01, and performing data enhancement on the image dataset specifically includes:
(1.1) enabling the camera and the horizontal direction of crop rows to form an included angle of 80-90 degrees, enabling the camera to be 80cm away from the ground, and acquiring images of various vegetable seedlings under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set;
(1.2) data enhancing the image dataset by geometric transformation and color transformation.
Optionally, in the step S02, the enhancing data set is marked, and the marked data set is divided into a training set, a verification set, and a test set, which specifically includes:
(2.1) adopting labeling software to label the type and the position of the vegetable seedling in the enhanced data set;
and (2.2) randomly splitting the labeled dataset into a training set, a verification set and a test set at a ratio of 7:2:1.
Optionally, in step S03, on the tensrflow deep learning framework, a lightweight two-stage detection model is built, a hybrid deep separation convolutional neural network is designed as a pre-base network, a feature pyramid network is adopted to fuse different levels of feature information of the pre-base network, and the network channel dimension and the number of full connection layers of the detection head are compressed, which specifically includes:
(3.1) fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separable convolution neural network, and taking the mixed depth separable convolution neural network as a preposed basic network to perform feature acquisition on an input image;
(3.2) introducing a characteristic pyramid network to fuse different levels of characteristics of the preposed basic network, and inputting the fused characteristic diagram into a regional suggestion network to generate a series of sparse prediction frames;
and (3.3) in the detection head network, operating on the output features of the final stage of the mixed deep separation convolutional neural network with asymmetric convolution to generate a feature map with fewer channel dimensions, feeding the feature map together with the prediction boxes into 1 fully connected layer to obtain global features of the detection target, and completing target classification and position prediction with 2 parallel branches.
Optionally, the initializing of the lightweight two-stage detection model parameters in step S04 and the training of the detection model on the training set based on stochastic gradient descent specifically include:
(4.1) using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
(4.2) setting the hyper-parameters related to model training, and training with a multi-task loss function as the objective function based on stochastic gradient descent;
and (4.3) during training, calculating the loss function of each input sample with an online hard example mining strategy, sorting the losses from large to small, and updating the model weight parameters by back-propagating only the top 1% of hard samples with the largest losses.
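The online hard example mining of (4.3) reduces to sorting per-sample losses and back-propagating only the largest ones. A minimal sketch (the function name and the toy losses are hypothetical; the 1% fraction follows the text):

```python
import numpy as np

def select_hard_examples(losses, fraction=0.01):
    """Indices of the top `fraction` of samples ranked by loss,
    largest loss first (online hard example mining)."""
    losses = np.asarray(losses)
    k = max(1, int(len(losses) * fraction))  # keep at least one sample
    return np.argsort(losses)[::-1][:k]      # descending loss order

# Only the selected hard samples take part in the backward pass.
losses = [0.05, 2.3, 0.4, 1.1, 0.02, 3.7]
hard = select_hard_examples(losses, fraction=0.34)  # top 2 of 6 here
```

In an actual training loop, the gradients of the unselected samples would simply be zeroed or their losses excluded from the batch loss before back-propagation.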
In a second aspect, the present invention further provides a light-weight two-stage detection model-based multi-class vegetable seedling recognition system, including:
the image acquisition and enhancement module is used for acquiring the multi-class vegetable seedling image dataset and performing data enhancement on the image dataset;
the image labeling and classifying module is used for labeling the enhanced dataset and dividing the labeled dataset into a training set, a verification set and a test set;
the detection model building module is used for building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of characteristic information of the preposed basic network by adopting a characteristic pyramid network, and compressing the network channel dimension and the number of full connection layers of the detection head;
the detection model training module is used for initializing the lightweight two-stage detection model parameters and training the detection model on the training set based on stochastic gradient descent;
and the detection result output module is used for inputting the images to be recognized into the detection model after the training is finished and outputting the type and the position information of the vegetable seedlings.
Optionally, the image acquisition enhancing module specifically includes:
the image acquisition unit is used for enabling the camera to form an included angle of 80-90 degrees with the horizontal direction of the crop row and to be about 80cm away from the ground, and acquiring various vegetable seedling images under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set; and the image enhancement unit is used for performing data enhancement on the image data set through geometric transformation and color transformation.
Optionally, the image labeling and classifying module specifically includes:
the labeling unit is used for labeling the category and the position of the vegetable seedling in the enhanced data set by adopting labeling software;
and the classification unit is used for randomly splitting the labeled dataset into a training set, a verification set and a test set at a ratio of 7:2:1.
Optionally, the detection model building module specifically includes:
the pre-basic network unit is used for fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separation convolution neural network, and the mixed depth separation convolution neural network is used as a pre-basic network to carry out feature acquisition on an input image;
the feature information fusion unit is used for introducing a feature pyramid network to fuse different levels of features of the preposed basic network, and inputting the fused feature map into a regional suggestion network to generate a series of sparse prediction frames;
and the lightweight detection head unit is used for operating on the output features of the final stage of the mixed deep separation convolutional neural network with asymmetric convolution in the detection head network to generate a feature map with fewer channel dimensions, feeding the feature map together with the prediction boxes into 1 fully connected layer to obtain global features of the detection target, and completing target classification and position prediction with 2 parallel branches.
Optionally, the detection model training module specifically includes:
the initialization unit is used for using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
the training unit is used for setting the hyper-parameters related to model training and training with a multi-task loss function as the objective function based on stochastic gradient descent;
and the hard example mining unit is used for calculating the loss function of each input sample with an online hard example mining strategy during training, sorting the losses from large to small, and updating the model weight parameters by back-propagating only the top 1% of hard samples with the largest losses.
Compared with the prior art, the multi-class vegetable seedling identification method and system based on a lightweight two-stage detection model provided by the invention have the following advantages:
firstly, a mixed depth separation convolutional neural network is used as a preposed basic network to extract the characteristics of an input image, so that the calculated characteristic image pixels have different receptive fields, and the image characteristic extraction speed and efficiency are effectively improved;
fusing different levels of features of the preposed basic network by adopting a feature pyramid network, wherein the fused feature graph has enough resolution and stronger semantic information, and the detection precision of the multi-scale target can be enhanced;
thirdly, the detection head network is designed in a light weight mode, redundant parameters are reduced by compressing the number of network channel dimensions and the number of full connection layers, the calculated amount of the model is reduced, and the reasoning speed of the model is improved;
and fourthly, the multi-category vegetable seedling identification method and system based on the lightweight two-stage detection model have high identification precision and high reasoning speed, and can be applied to embedded agricultural mobile equipment with limited computing capacity and storage resources.
Drawings
Fig. 1 is a schematic flow chart of a multi-class vegetable seedling identification method based on a lightweight two-stage detection model according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a hybrid deep separation convolutional neural network according to an embodiment of the present invention;
fig. 3 is a schematic diagram of different-level feature structures of a feature pyramid network fusion hybrid depth separation convolutional neural network provided in the embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a lightweight two-stage target detection model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a multi-class vegetable seedling identification system based on a lightweight two-stage detection model according to an embodiment of the present invention.
Detailed Description
The embodiments below are described in detail with reference to the accompanying drawings. They are intended only to illustrate the technical solutions of the invention clearly and should not be taken to limit its scope of protection.
Fig. 1 is a schematic flow chart of a method for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
101. acquiring multi-category vegetable seedling image data sets, and performing data enhancement on the image data sets;
102. labeling the enhanced data set, and dividing the labeled data set into a training set, a verification set and a test set;
103. building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of characteristic information of the preposed basic network by adopting a characteristic pyramid network, and compressing the network channel dimension and the number of full connection layers of a detection head;
104. initializing the parameters of the lightweight two-stage detection model, and training the detection model on the training set based on stochastic gradient descent;
105. after training is finished, inputting the image to be recognized into the detection model, and outputting the category and position information of the vegetable seedlings.
The step 101 comprises the following specific steps:
(1.1) enabling the camera and the horizontal direction of crop rows to form an included angle of 80-90 degrees, enabling the camera to be 80cm away from the ground, and acquiring images of various vegetable seedlings under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set;
(1.2) data enhancing the image dataset by geometric transformation and color transformation;
for example, in the present embodiment, a Matlab tool is used for data enhancement. Geometric transformation: randomly dividing an original image data set into 2 parts, carrying out image rotation on one part, and selecting a rotation angle of-20 degrees, -5 degrees, 5 degrees and 20 degrees to generate a new image; and the other part randomly performs mirror image turning, horizontal turning and vertical turning on the image. Color transformation: the original image is transformed from RGB color space to HVS color space, and the brightness (Value) and Saturation (Saturation) of the image are randomly adjusted, wherein the brightness adjustment Value is 0.8 times, 0.9 times, 1.1 times and 1.2 times of the original Value, and the Saturation adjustment Value is 0.85 times, 0.95 times, 1.05 times and 1.15 times of the original Value. And combining the original image data set, the geometric transformation data set and the color transformation data set to form an enhanced image data set.
The step 102 comprises the following specific steps:
(2.1) adopting labeling software to label the type and the position of the vegetable seedling in the enhanced data set;
for example, LabelImg software is used for image annotation in this embodiment. Firstly, double-clicking LabelImg software to enter an operation interface, and opening a folder (Open Dir) where an image to be marked is located; then, setting a marked image storage directory (Change Save Dir), marking a target area in the current image by using Create \ nRectBox and setting a class name; finally, the labeled image (Save) is saved, and the Next image is clicked for marking (Next). The marked image is generated under the condition of saving a file path, the name of the xml file is consistent with the name of the marked image, and the file comprises information such as the name, the path, the target quantity, the category, the size and the like of the marked image;
and (2.2) randomly splitting the labeled dataset into a training set, a verification set and a test set at a ratio of 7:2:1.
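The 7:2:1 split in (2.2) amounts to shuffling the labeled samples and cutting the list at the 70% and 90% marks. A minimal sketch (the function name is hypothetical):

```python
import random

def split_dataset(items, ratios=(7, 2, 1), seed=42):
    """Shuffle labeled samples and split them into train/val/test at 7:2:1."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(100))  # 70 / 20 / 10 samples
```

In practice `items` would be the list of annotated image/xml file pairs.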
The step 103 comprises the following specific steps:
(3.1) fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separable convolution neural network, and taking the mixed depth separable convolution neural network as a preposed basic network to perform feature acquisition on an input image, wherein the specific process comprises the following steps:
in this embodiment, the deep learning framework selects TensorFlow, and performs program design based on Python language on a Windows 10 operating system, and the design idea of the hybrid deep separation convolutional neural network is as follows: let the input feature map be X(h,w,c)H represents the height of the feature map, w represents the width, c represents the number of channels, and the feature map is divided into g groups of sub-feature maps along the channel directioncs( s 1,2.. g) represents the number of channels of the s-th group of sub-feature maps, and c1+c2+...+cgC. Establishing g groups of different-size depth convolution kernelsm denotes a channel multiplier, kt×kt(t ═ 1,2.. g) denotes the t-th group convolution kernel size. And (3) operating the t group of input sub-feature maps and the corresponding depth convolution kernels to obtain a t group of output sub-feature maps, wherein the specific definition formula is as follows:
wherein x represents the characteristic image pixel line number, y represents the characteristic image pixel column number, ztRepresenting the number of channels of the t-th group of output feature maps, h representing the height of the input feature map, w representing the width of the input feature map, i representing the row number of the convolution kernel elements, j representing the column number of the convolution kernel elements,showing the output sub-feature map of the t-th group,representing the t-th group of input sub-feature maps,representing a t-th set of deep convolution kernels;
according to the calculation result of the formula, all the output sub-feature graphs are spliced in the channel dimension in an addition mode to obtain a final output feature graph, and the calculation formula is as follows:
wherein z represents the number of channels of the output characteristic diagram, and z is equal to z1+...+zg,Yx,y,zRepresenting the spliced output characteristic diagram;
the structure of the mixed deep separation convolutional neural network in this embodiment is shown in fig. 2, the maximum grouping number g of the feature map is 5, each group has the same number of channels, the size of the corresponding deep convolutional kernel is {3 × 3, 5 × 5,7 × 7,9 × 9,11 × 11}, the feature map is grouped and then operated with convolutional kernels of different sizes, and then the result is spliced to obtain an output. FIG. 2 is a graph obtained by dividing the convolution neural network into 5 stages (stages) according to the size of a feature map, wherein the feature map with the same size is the same Stage, and the scale ratio of the feature map in the adjacent stages is 2;
(3.2) introducing a feature pyramid network to fuse different levels of features of the preposed basic network, inputting the fused feature map into a regional suggestion network to generate a series of sparse prediction frames, wherein the specific process comprises the following steps:
in the embodiment, a feature pyramid network is merged into the hybrid depth separation convolutional neural network, as shown in fig. 3. In fig. 3, the mixed depth separation convolution sequentially generates feature maps of different stages in the bottom-up order, wherein x in Stage x/y (x is 1,2,3,4, 5; y is 2,4,8,16,32) represents the number of stages in which the feature maps are located, and y represents the reduction factor of the feature map size relative to the input image at this Stage. The stages 2-5 are respectively input to the feature pyramid network after being subjected to 1 × 1 convolution operation, wherein the 1 × 1 convolution has the function of keeping the number of channels input to the feature pyramid network by each Stage feature diagram consistent. And the feature pyramid network unit performs up-sampling on the input high-level feature map according to the top-down sequence to enlarge the resolution, and then performs fusion with the adjacent low-level features in an addition mode. On one hand, the fused feature graph is input into a subsequent network for predictionReasoning, on the other hand, continues to fuse with the underlying feature map through upsampling. The mixed depth separation convolution stages 2-5 correspond to the P2-P5 levels of the feature pyramid network, and P6 is obtained by downsampling Stage5 and is used for generating a prediction box in the area suggestion network without participating in fusion operation. Each level of { P2, P3, P4, P5, P6} is responsible for information processing of a single scale, and corresponds to {16 }2,322,642,1282, 25625 scale prediction frames, each prediction frame has 3 length-width ratios of {1:1,1:2,2:1}, and the prediction frames totally comprise 15 prediction frames for predicting the target object and the background;
(3.3) in a detection head network, computing the output characteristics of the mixed deep separation convolutional neural network at the last stage by using asymmetric convolution to generate a characteristic diagram with less channel dimensions, comprehensively accessing the characteristic diagram and a prediction frame into 1 full-connection layer to obtain global characteristics of a detection target, and finishing target classification and position prediction based on 2 parallel branches, wherein the specific process comprises the following steps:
in this embodiment, the lightweight detection head unit is constructed by compressing the network channel dimension and the parameter scale, and the specific design method is as follows: generating a feature map of an alpha multiplied by p channel by adopting a large-size asymmetric convolution aiming at a feature map output by a final stage of a mixed deep separation convolutional neural network, wherein alpha is a number which is irrelevant to a category and has a small numerical value, the value of alpha is 10, the value of p multiplied by p is equal to the number of grids after pooling of a candidate area, the value of p multiplied by p is 49, and a feature map of a 490 channel is obtained through calculation; then, introducing ROI Align operation to pool the feature information corresponding to the prediction frames with different sizes to generate a feature map with a fixed size, wherein the ROI Align operation acquires the numerical value of a pixel point with coordinates as floating point numbers by using a bilinear difference method, and the whole feature aggregation process is converted into a continuous operation; finally, accessing 1 full-connection layer to obtain global characteristics of the detected target, and completing target classification and position prediction based on 2 parallel branches; as used herein, large-scale asymmetric convolution consists of 1 × 15 and 15 × 12 convolution kernels; FIG. 4 is a schematic block diagram of a lightweight two-stage target detection model.
The step 104 comprises the following specific steps:
(4.1) using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
(4.2) setting hyper-parameters related to model training, and training by adopting a multi-task loss function as a target function based on a random gradient descent method, wherein the specific process comprises the following steps:
the momentum factor is 0.9, the weight decay coefficient is 5 × 10⁻⁴, the initial learning rate is 0.002, the decay rate is 0.9 with one decay every 2000 iterations, the accuracy of the training model is tested on the verification set, and the total number of iterations for model training is 50000;
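The stepwise learning-rate schedule above (initial rate 0.002, multiplied by 0.9 once every 2000 iterations) can be written as:

```python
def learning_rate(iteration, base_lr=0.002, decay_rate=0.9, decay_steps=2000):
    """Staircase exponential decay: the rate is multiplied by decay_rate
    once every decay_steps iterations (values from the embodiment)."""
    return base_lr * decay_rate ** (iteration // decay_steps)
```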
secondly, in the training process, a multi-task loss function is adopted to perform confidence discrimination of the target category and position regression, specifically defined as follows:
L_Total = L_RPN(p_l, a_l) + L_HEAD(p, u, o, g)

wherein

L_HEAD(p, u, o, g) = L_cls(p, u) + λ′[u ≥ 1] · L_DIoU(o, g)
The loss function of this embodiment comprises two parts, a region proposal network loss and a detection head loss, and each part comprises a classification loss and a position regression loss. The region proposal network loss takes the standard two-term form

L_RPN(p_l, a_l) = (1/N_cls) Σ_l L_cls(p_l, p_l*) + λ · (1/N_reg) Σ_l p_l* · L_reg(a_l, a_l*)

and the Distance-IoU loss is

L_DIoU = 1 − IoU(A, B) + ρ²(A_ctr, B_ctr) / c²

In the formulas, L_Total is the detection model loss; L_RPN is the region proposal network loss; L_HEAD is the detection head network loss; l is the anchor box index; p_l is the two-class prediction probability of the l-th anchor box; p_l* is the ground-truth discrimination value of the l-th anchor box; a_l is the prediction box corresponding to the l-th anchor box; a_l* is the real box corresponding to the l-th anchor box; p is the predicted class probability; u is the real class label; λ and λ′ are weight parameters; L_cls is the classification loss; L_reg is the position regression loss; N_cls is the number of sampled anchor boxes; N_reg is the number of sampled positive and negative samples; o is a prediction box output by the region proposal network; g is the real box corresponding to that prediction box; L_DIoU is the Distance-based intersection-over-union (DIoU) loss; A is the prediction box and B is the real box; c is the diagonal length of the minimum bounding box of A and B; ρ(·) is the Euclidean distance; A_ctr and B_ctr are the centre-point coordinates of the prediction box and the real box; and IoU (Intersection over Union) is the intersection ratio of the prediction box and the real box;
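The Distance-IoU (DIoU) loss named above augments the overlap term with a normalized centre-distance penalty, 1 − IoU + ρ²/c² in its standard definition. A minimal sketch in plain Python, using the box quantities defined in the description:

```python
def diou_loss(box_a, box_b):
    """Distance-IoU loss between two boxes (x1, y1, x2, y2):
    1 - IoU + squared centre distance over squared diagonal of the
    minimum enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared distance between box centres (rho^2)
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    # Squared diagonal of the minimum enclosing box (c^2)
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2
    return 1.0 - iou + rho2 / c2
```

Unlike plain IoU loss, the centre-distance term keeps a useful gradient even when the prediction box and the real box do not overlap at all.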
and (4.3) during training, computing the loss of each input sample with an online hard example mining strategy, sorting the losses in descending order, and selecting the top 1% of samples with the largest losses for back-propagation training to update the model weight parameters.
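The online hard example mining step reduces to sorting per-sample losses and keeping the top fraction; a sketch (the helper name is illustrative):

```python
def select_hard_examples(losses, fraction=0.01):
    """Online hard example mining: sort per-sample losses in descending
    order and keep the top `fraction` (1% in the embodiment).
    Returns the indices of the selected hard samples."""
    k = max(1, int(len(losses) * fraction))
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return order[:k]
```

Only the selected indices would then contribute gradients in the back-propagation pass.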
The step 105 comprises the following specific steps:
(5.1) in the trained detection model, setting the category confidence threshold to 0.5 and the intersection-over-union (IoU) threshold to 0.5;
and (5.2) inputting the image to be recognized into the trained detection model to obtain a multi-category vegetable seedling recognition result, wherein the recognition result comprises a target category label, a category confidence and a target position box.
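Steps (5.1)-(5.2) amount to discarding low-confidence detections and suppressing overlapping boxes at the IoU threshold. The sketch below uses class-agnostic greedy non-maximum suppression and a (label, confidence, box) tuple format, both of which are assumptions; the patent only specifies the two 0.5 thresholds:

```python
def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def postprocess(detections, conf_thresh=0.5, iou_thresh=0.5):
    """Drop detections below the confidence threshold, then greedily
    suppress any box overlapping an already-kept box by more than the
    IoU threshold. Each detection is (label, confidence, box)."""
    kept = []
    for det in sorted((d for d in detections if d[1] >= conf_thresh),
                      key=lambda d: d[1], reverse=True):
        if all(iou(det[2], k[2]) <= iou_thresh for k in kept):
            kept.append(det)
    return kept
```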
Fig. 5 is a schematic structural diagram of a multi-class vegetable seedling identification system based on a lightweight two-stage detection model according to an embodiment of the present invention, and as shown in fig. 5, the system includes:
the image acquisition enhancing module 501 is used for acquiring image data sets of multi-category vegetable seedlings and enhancing the data of the image data sets;
an image labeling and classifying module 502, configured to label the enhanced data set, and divide the labeled data set into a training set, a verification set, and a test set;
the detection model building module 503 is used for building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of characteristic information of the preposed basic network by adopting a characteristic pyramid network, and compressing the network channel dimension and the number of full connection layers of the detection head;
a detection model training module 504, configured to initialize the lightweight two-stage detection model parameters, and input a training set to the detection model for training based on a random gradient descent method;
and the detection result output module 505 is used for inputting the images to be recognized into the detection model after the training is finished and outputting the type and position information of the vegetable seedlings.
The image acquisition enhancing module 501 specifically includes:
the image acquisition unit is used for enabling the camera to form an included angle of 80-90 degrees with the horizontal direction of the crop row and to be about 80cm away from the ground, and acquiring various vegetable seedling images under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set; and the image enhancement unit is used for performing data enhancement on the image data set through geometric transformation and color transformation.
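The patent names only "geometric transformation and color transformation" without enumerating them; the sketch below shows typical choices (random flips, 90-degree rotation, brightness jitter), all of which are assumptions, applied to a single H × W × 3 image:

```python
import numpy as np

def augment(image, rng):
    """Illustrative geometric + colour augmentations for one uint8 image.
    The specific transforms are assumptions; the patent only names
    'geometric transformation and color transformation'."""
    if rng.random() < 0.5:
        image = image[:, ::-1]          # geometric: horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]          # geometric: vertical flip
    if rng.random() < 0.5:
        image = np.rot90(image)         # geometric: 90-degree rotation
    scale = rng.uniform(0.8, 1.2)       # colour: brightness jitter
    return np.clip(image.astype(np.float32) * scale, 0, 255).astype(np.uint8)
```

Note that any geometric transform applied to the image must also be applied to its bounding-box labels before the data enter training.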
The image labeling and classifying module 502 specifically includes:
the labeling unit is used for labeling the category and the position of the vegetable seedling in the enhanced data set by adopting labeling software;
and the classification unit is used for randomly splitting the labeling data set into a training set, a verification set and a test set according to the proportion of 7:2:1.
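The random 7:2:1 split can be sketched as follows (the fixed seed is an illustrative choice for reproducibility, not from the patent):

```python
import random

def split_dataset(samples, seed=42):
    """Randomly split a labelled dataset into training, validation and
    test sets at the 7:2:1 ratio used in the embodiment."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(n * 0.7), int(n * 0.2)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```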
The detection model building module 503 specifically includes:
the pre-basic network unit is used for fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separation convolution neural network, and the mixed depth separation convolution neural network is used as a pre-basic network to carry out feature acquisition on an input image;
the feature information fusion unit is used for introducing a feature pyramid network to fuse different levels of features of the preposed basic network, and inputting the fused feature map into a regional suggestion network to generate a series of sparse prediction frames;
and the lightweight detection head unit is used for calculating the output characteristics of the mixed deep separation convolutional neural network at the last stage by using asymmetric convolution in the detection head network to generate a characteristic diagram with less channel dimensions, comprehensively accessing the characteristic diagram and a prediction frame into 1 full-connection layer to obtain global characteristics of a detection target, and finishing target classification and position prediction based on 2 parallel branches.
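The parameter savings behind the depthwise-separable design can be checked by counting weights. The grouping scheme below (equal channel groups per kernel size in the depthwise stage, MixConv-style, followed by a 1 × 1 pointwise convolution) is an assumption about how the patent's mixed convolution is organized:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_params(kernel_sizes, c_in, c_out):
    """Weight count of a depthwise-separable convolution whose depthwise
    stage mixes several kernel sizes over equal channel groups,
    followed by a 1 x 1 pointwise convolution (bias ignored)."""
    group = c_in // len(kernel_sizes)           # channels per kernel size
    depthwise = sum(k * k * group for k in kernel_sizes)
    pointwise = c_in * c_out
    return depthwise + pointwise
```

For example, a 3 × 3 standard convolution from 32 to 64 channels uses 18432 weights, while a mixed {3, 5} depthwise stage plus pointwise projection uses 2592, which illustrates why this structure suits a lightweight pre-network.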
The detection model training module 504 specifically includes:
the initialization unit is used for using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
the training unit is used for setting hyper-parameters related to model training and training on the basis of a random gradient descent method by adopting a multi-task loss function as a target function;
and the difficult sample mining unit is used for computing the loss of each input sample with an online hard example mining strategy during training, sorting the losses in descending order, and selecting the top 1% of samples with the largest losses for back-propagation training to update the model weight parameters.
The detection result output module 505 specifically includes:
the threshold setting unit is used for setting the category confidence threshold to 0.5 and the intersection-over-union threshold to 0.5 in the trained detection model;
and the detection output unit is used for inputting the image to be recognized into the trained detection model to obtain the recognition result of the multi-class vegetable seedling, and the recognition result comprises a target class label, a class confidence coefficient and a target position frame.
Since the system corresponds one-to-one with the method of the invention, the calculation of the parameters described for the method also applies to the system and is not repeated here.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the invention and are intended to be included within the scope of the claims and the specification.
Claims (10)
1. A multi-category vegetable seedling identification method based on a lightweight two-stage detection model is characterized by comprising the following steps:
s01, acquiring multi-category vegetable seedling image data sets, and performing data enhancement on the image data sets;
s02, labeling the enhanced data set, and dividing the labeled data set into a training set, a verification set and a test set;
s03, building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of feature information of the preposed basic network by adopting a feature pyramid network, and compressing the network channel dimension and the number of full connection layers of a detection head;
s04, initializing the parameters of the lightweight two-stage detection model, and inputting a training set into the detection model to train based on a random gradient descent method;
and S05, inputting the image to be recognized into the detection model after training is finished, and outputting the type and position information of the vegetable seedling.
2. The method for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 1, wherein the step S01 specifically comprises:
(1.1) enabling the camera and the horizontal direction of crop rows to form an included angle of 80-90 degrees, enabling the camera to be 80cm away from the ground, and acquiring images of various vegetable seedlings under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set;
(1.2) data enhancing the image dataset by geometric transformation and color transformation.
3. The method for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 1, wherein the step S02 specifically comprises:
(2.1) adopting labeling software to label the type and the position of the vegetable seedling in the enhanced data set;
and (2.2) randomly splitting the labeling data set into a training set, a verification set and a test set according to the proportion of 7:2:1.
4. The method for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 1, wherein the step S03 specifically comprises:
(3.1) fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separable convolution neural network, and taking the mixed depth separable convolution neural network as a preposed basic network to perform feature acquisition on an input image;
(3.2) introducing a characteristic pyramid network to fuse different levels of characteristics of the preposed basic network, and inputting the fused characteristic diagram into a regional suggestion network to generate a series of sparse prediction frames;
and (3.3) operating the output characteristics of the final stage of the mixed deep separation convolutional neural network by using asymmetric convolution in a detection head network to generate a characteristic diagram with less channel dimensions, comprehensively accessing the characteristic diagram and a prediction frame into 1 full-connection layer to obtain global characteristics of a detection target, and finishing target classification and position prediction based on 2 parallel branches.
5. The method for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 1, wherein the step S04 specifically comprises:
(4.1) using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
(4.2) setting hyper-parameters related to model training, and training by adopting a multi-task loss function as a target function based on a random gradient descent method;
and (4.3) during training, computing the loss of each input sample with an online hard example mining strategy, sorting the losses in descending order, and selecting the top 1% of samples with the largest losses for back-propagation training to update the model weight parameters.
6. A multi-category vegetable seedling identification system based on a lightweight two-stage detection model, characterized by comprising:
the image acquisition and enhancement module is used for acquiring image data sets of the multi-category vegetable seedlings and performing data enhancement on the image data sets;
the image labeling and classifying module is used for labeling the enhanced data set and dividing the labeled data set into a training set, a verification set and a test set;
the detection model building module is used for building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of characteristic information of the preposed basic network by adopting a characteristic pyramid network, and compressing the network channel dimension and the number of full connection layers of the detection head;
the detection model training module is used for initializing the lightweight two-stage detection model parameters and inputting a training set into the detection model to train the detection model based on a random gradient descent method;
and the detection result output module is used for inputting the images to be recognized into the detection model after the training is finished and outputting the type and the position information of the vegetable seedlings.
7. The system for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 6, wherein the image acquisition enhancing module specifically comprises:
the image acquisition unit is used for enabling the camera to form an included angle of 80-90 degrees with the horizontal direction of the crop row and to be about 80cm away from the ground, and acquiring various vegetable seedling images under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set;
and the image enhancement unit is used for performing data enhancement on the image data set through geometric transformation and color transformation.
8. The system for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 6, wherein the image labeling and classifying module specifically comprises:
the labeling unit is used for labeling the category and the position of the vegetable seedling in the enhanced data set by adopting labeling software;
and the classification unit is used for randomly splitting the labeling data set into a training set, a verification set and a test set according to the proportion of 7:2:1.
9. The system for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 6, wherein the detection model building module specifically comprises:
the pre-basic network unit is used for fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separation convolution neural network, and the mixed depth separation convolution neural network is used as a pre-basic network to carry out feature acquisition on an input image;
the feature information fusion unit is used for introducing a feature pyramid network to fuse different levels of features of the preposed basic network, and inputting the fused feature map into a regional suggestion network to generate a series of sparse prediction frames;
and the lightweight detection head unit is used for calculating the output characteristics of the mixed deep separation convolutional neural network at the last stage by using asymmetric convolution in the detection head network to generate a characteristic diagram with less channel dimensions, comprehensively accessing the characteristic diagram and a prediction frame into 1 full-connection layer to obtain global characteristics of a detection target, and finishing target classification and position prediction based on 2 parallel branches.
10. The system for identifying multi-category vegetable seedlings based on the lightweight two-stage detection model as claimed in claim 6, wherein the detection model training module specifically comprises:
the initialization unit is used for using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
the training unit is used for setting hyper-parameters related to model training and training on the basis of a random gradient descent method by adopting a multi-task loss function as a target function;
and the difficult sample mining unit is used for computing the loss of each input sample with an online hard example mining strategy during training, sorting the losses in descending order, and selecting the top 1% of samples with the largest losses for back-propagation training to update the model weight parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011410890.5A CN112446388A (en) | 2020-12-05 | 2020-12-05 | Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011410890.5A CN112446388A (en) | 2020-12-05 | 2020-12-05 | Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112446388A true CN112446388A (en) | 2021-03-05 |
Family
ID=74739341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011410890.5A Withdrawn CN112446388A (en) | 2020-12-05 | 2020-12-05 | Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112446388A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112926605A (en) * | 2021-04-01 | 2021-06-08 | 天津商业大学 | Multi-stage strawberry fruit rapid detection method in natural scene |
CN113052255A (en) * | 2021-04-07 | 2021-06-29 | 浙江天铂云科光电股份有限公司 | Intelligent detection and positioning method for reactor |
CN113065446A (en) * | 2021-03-29 | 2021-07-02 | 青岛东坤蔚华数智能源科技有限公司 | Depth inspection method for automatically identifying ship corrosion area |
CN113076873A (en) * | 2021-04-01 | 2021-07-06 | 重庆邮电大学 | Crop disease long-tail image identification method based on multi-stage training |
CN113096080A (en) * | 2021-03-30 | 2021-07-09 | 四川大学华西第二医院 | Image analysis method and system |
CN113096079A (en) * | 2021-03-30 | 2021-07-09 | 四川大学华西第二医院 | Image analysis system and construction method thereof |
CN113192040A (en) * | 2021-05-10 | 2021-07-30 | 浙江理工大学 | Fabric flaw detection method based on YOLO v4 improved algorithm |
CN113408423A (en) * | 2021-06-21 | 2021-09-17 | 西安工业大学 | Aquatic product target real-time detection method suitable for TX2 embedded platform |
CN113420819A (en) * | 2021-06-25 | 2021-09-21 | 西北工业大学 | Lightweight underwater target detection method based on CenterNet |
CN113435302A (en) * | 2021-06-23 | 2021-09-24 | 中国农业大学 | GridR-CNN-based hydroponic lettuce seedling state detection method |
CN113449611A (en) * | 2021-06-15 | 2021-09-28 | 电子科技大学 | Safety helmet identification intelligent monitoring system based on YOLO network compression algorithm |
CN113468992A (en) * | 2021-06-21 | 2021-10-01 | 四川轻化工大学 | Construction site safety helmet wearing detection method based on lightweight convolutional neural network |
CN113486781A (en) * | 2021-07-02 | 2021-10-08 | 国网电力科学研究院有限公司 | Electric power inspection method and device based on deep learning model |
CN113572742A (en) * | 2021-07-02 | 2021-10-29 | 燕山大学 | Network intrusion detection method based on deep learning |
CN113822265A (en) * | 2021-08-20 | 2021-12-21 | 北京工业大学 | Method for detecting non-metal lighter in X-ray security inspection image based on deep learning |
CN113837058A (en) * | 2021-09-17 | 2021-12-24 | 南通大学 | Lightweight rainwater grate detection method coupled with context aggregation network |
CN113971731A (en) * | 2021-10-28 | 2022-01-25 | 燕山大学 | Target detection method and device and electronic equipment |
CN114187606A (en) * | 2021-10-21 | 2022-03-15 | 江阴市智行工控科技有限公司 | Garage pedestrian detection method and system adopting branch fusion network for light weight |
CN114332849A (en) * | 2022-03-16 | 2022-04-12 | 科大天工智能装备技术(天津)有限公司 | Crop growth state combined monitoring method and device and storage medium |
CN114359546A (en) * | 2021-12-30 | 2022-04-15 | 太原科技大学 | Day lily maturity identification method based on convolutional neural network |
CN116229052A (en) * | 2023-05-09 | 2023-06-06 | 浩鲸云计算科技股份有限公司 | Method for detecting state change of substation equipment based on twin network |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340141A (en) * | 2020-04-20 | 2020-06-26 | 天津职业技术师范大学(中国职业培训指导教师进修中心) | Crop seedling and weed detection method and system based on deep learning |
- 2020-12-05: application CN202011410890.5A published as CN112446388A (en); status: not active, withdrawn
Non-Patent Citations (6)
Title |
---|
ALYOSHA507: "《https://blog.csdn.net/weixin_41059269/article/details/99232245》", 11 August 2019 * |
IFREEWOLF99: "《https://blog.csdn.net/ifreewolf_csdn/article/details/101352352》", 25 September 2019 * |
QIRUI REN ET AL.: "Slighter Faster R-CNN for real-time detection of steel strip surface defects", 《2018 CHINESE AUTOMATION CONGRESS (CAC)》 * |
TSUNG-YI LIN ET AL.: "Feature Pyramid Networks for Object Detection", 《ARXIV:1612.031442V2》 * |
ZEMING LI ET AL.: ""Light-Head R-CNN: In Defense of Two-Stage Object Detector"", 《ARXIV:1711.07264V2》 * |
孙哲 等: "基于Faster R-CNN的田间西兰花幼苗图像检测方法", 《农业机械学报》 * |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113065446A (en) * | 2021-03-29 | 2021-07-02 | 青岛东坤蔚华数智能源科技有限公司 | Depth inspection method for automatically identifying ship corrosion area |
CN113096080B (en) * | 2021-03-30 | 2024-01-16 | 四川大学华西第二医院 | Image analysis method and system |
CN113096079A (en) * | 2021-03-30 | 2021-07-09 | 四川大学华西第二医院 | Image analysis system and construction method thereof |
CN113096079B (en) * | 2021-03-30 | 2023-12-29 | 四川大学华西第二医院 | Image analysis system and construction method thereof |
CN113096080A (en) * | 2021-03-30 | 2021-07-09 | 四川大学华西第二医院 | Image analysis method and system |
CN113076873A (en) * | 2021-04-01 | 2021-07-06 | 重庆邮电大学 | Crop disease long-tail image identification method based on multi-stage training |
CN113076873B (en) * | 2021-04-01 | 2022-02-22 | 重庆邮电大学 | Crop disease long-tail image identification method based on multi-stage training |
CN112926605A (en) * | 2021-04-01 | 2021-06-08 | 天津商业大学 | Multi-stage strawberry fruit rapid detection method in natural scene |
CN112926605B (en) * | 2021-04-01 | 2022-07-08 | 天津商业大学 | Multi-stage strawberry fruit rapid detection method in natural scene |
CN113052255A (en) * | 2021-04-07 | 2021-06-29 | 浙江天铂云科光电股份有限公司 | Intelligent detection and positioning method for reactor |
CN113192040A (en) * | 2021-05-10 | 2021-07-30 | 浙江理工大学 | Fabric flaw detection method based on YOLO v4 improved algorithm |
CN113192040B (en) * | 2021-05-10 | 2023-09-22 | 浙江理工大学 | Fabric flaw detection method based on YOLO v4 improved algorithm |
CN113449611A (en) * | 2021-06-15 | 2021-09-28 | 电子科技大学 | Safety helmet identification intelligent monitoring system based on YOLO network compression algorithm |
CN113468992B (en) * | 2021-06-21 | 2022-11-04 | 四川轻化工大学 | Construction site safety helmet wearing detection method based on lightweight convolutional neural network |
CN113408423A (en) * | 2021-06-21 | 2021-09-17 | 西安工业大学 | Aquatic product target real-time detection method suitable for TX2 embedded platform |
CN113408423B (en) * | 2021-06-21 | 2023-09-05 | 西安工业大学 | Aquatic product target real-time detection method suitable for TX2 embedded platform |
CN113468992A (en) * | 2021-06-21 | 2021-10-01 | 四川轻化工大学 | Construction site safety helmet wearing detection method based on lightweight convolutional neural network |
CN113435302B (en) * | 2021-06-23 | 2023-10-17 | 中国农业大学 | Hydroponic lettuce seedling state detection method based on GridR-CNN |
CN113435302A (en) * | 2021-06-23 | 2021-09-24 | 中国农业大学 | GridR-CNN-based hydroponic lettuce seedling state detection method |
CN113420819B (en) * | 2021-06-25 | 2022-12-06 | 西北工业大学 | Lightweight underwater target detection method based on CenterNet |
CN113420819A (en) * | 2021-06-25 | 2021-09-21 | 西北工业大学 | Lightweight underwater target detection method based on CenterNet |
CN113572742B (en) * | 2021-07-02 | 2022-05-10 | 燕山大学 | Network intrusion detection method based on deep learning |
CN113486781A (en) * | 2021-07-02 | 2021-10-08 | 国网电力科学研究院有限公司 | Electric power inspection method and device based on deep learning model |
CN113486781B (en) * | 2021-07-02 | 2023-10-24 | 国网电力科学研究院有限公司 | Electric power inspection method and device based on deep learning model |
CN113572742A (en) * | 2021-07-02 | 2021-10-29 | 燕山大学 | Network intrusion detection method based on deep learning |
CN113822265A (en) * | 2021-08-20 | 2021-12-21 | 北京工业大学 | Method for detecting non-metal lighter in X-ray security inspection image based on deep learning |
CN113837058A (en) * | 2021-09-17 | 2021-12-24 | 南通大学 | Lightweight rainwater grate detection method coupled with context aggregation network |
CN114187606A (en) * | 2021-10-21 | 2022-03-15 | 江阴市智行工控科技有限公司 | Garage pedestrian detection method and system adopting branch fusion network for light weight |
CN113971731A (en) * | 2021-10-28 | 2022-01-25 | 燕山大学 | Target detection method and device and electronic equipment |
CN114359546A (en) * | 2021-12-30 | 2022-04-15 | 太原科技大学 | Day lily maturity identification method based on convolutional neural network |
CN114359546B (en) * | 2021-12-30 | 2024-03-26 | 太原科技大学 | Day lily maturity identification method based on convolutional neural network |
CN114332849A (en) * | 2022-03-16 | 2022-04-12 | 科大天工智能装备技术(天津)有限公司 | Crop growth state combined monitoring method and device and storage medium |
CN116229052B (en) * | 2023-05-09 | 2023-07-25 | 浩鲸云计算科技股份有限公司 | Method for detecting state change of substation equipment based on twin network |
CN116229052A (en) * | 2023-05-09 | 2023-06-06 | 浩鲸云计算科技股份有限公司 | Method for detecting state change of substation equipment based on twin network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112446388A (en) | Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model | |
Koirala et al. | Deep learning–Method overview and review of use for fruit detection and yield estimation | |
Jiao et al. | AF-RCNN: An anchor-free convolutional neural network for multi-categories agricultural pest detection | |
CN111310861B (en) | License plate recognition and positioning method based on deep neural network | |
CN109685115B (en) | Fine-grained conceptual model with bilinear feature fusion and learning method | |
CN108564097B (en) | Multi-scale target detection method based on deep convolutional neural network | |
Le et al. | Deep learning for noninvasive classification of clustered horticultural crops–A case for banana fruit tiers | |
CN110222767B (en) | Three-dimensional point cloud classification method based on nested neural network and grid map | |
CN112633350B (en) | Multi-scale point cloud classification implementation method based on graph convolution | |
CN103955702A (en) | SAR image terrain classification method based on depth RBF network | |
CN107832797B (en) | Multispectral image classification method based on depth fusion residual error network | |
CN110363253A (en) | A kind of Surfaces of Hot Rolled Strip defect classification method based on convolutional neural networks | |
CN110321862B (en) | Pedestrian re-identification method based on compact ternary loss | |
Wang et al. | Precision detection of dense plums in orchards using the improved YOLOv4 model | |
CN108416270B (en) | Traffic sign identification method based on multi-attribute combined characteristics | |
CN110807485B (en) | Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
WO2023019698A1 (en) | Hyperspectral image classification method based on rich context network | |
CN110060273A (en) | Remote sensing image landslide plotting method based on deep neural network | |
CN110929746A (en) | Electronic file title positioning, extracting and classifying method based on deep neural network | |
CN110969121A (en) | High-resolution radar target recognition algorithm based on deep learning | |
CN111401380A (en) | RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization | |
Hussain et al. | A simple and efficient deep learning-based framework for automatic fruit recognition | |
CN113435254A (en) | Sentinel second image-based farmland deep learning extraction method | |
CN114492634B (en) | Fine granularity equipment picture classification and identification method and system |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20210305 |