CN113837039A - Fruit growth form visual identification method based on convolutional neural network

Fruit growth form visual identification method based on convolutional neural network

Info

Publication number
CN113837039A
Authority
CN
China
Prior art keywords
model
fruit
neural network
convolutional neural
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111067533.8A
Other languages
Chinese (zh)
Other versions
CN113837039B (en)
Inventor
吕继东
许浩
徐黎明
李文杰
邹凌
戎海龙
杨彪
马正华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou University
Original Assignee
Changzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou University filed Critical Changzhou University
Priority to CN202111067533.8A
Publication of CN113837039A
Application granted
Publication of CN113837039B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The invention relates to the technical field of convolutional neural networks, and in particular to a fruit growth form visual identification method based on a convolutional neural network, comprising the following steps: S1, image acquisition: collect fruit images of different growth forms in an orchard and label the images; S2, image enhancement: apply data enhancement to the acquired images to expand the data set; S3, build the convolutional neural network model; S4, optimize the network parameters with an SGD optimizer; S5, detect the test set with the trained optimal model and output the prediction box, category, and confidence of each target. The invention provides a fruit growth form identification method based on deep learning; compared with the Faster-RCNN and YOLO algorithms, it achieves higher identification accuracy and faster identification speed with fewer model parameters.

Description

Fruit growth form visual identification method based on convolutional neural network
Technical Field
The invention relates to the technical field of convolutional neural networks, in particular to a fruit growth form visual identification method based on a convolutional neural network.
Background
China is a major agricultural country, and fruit is the third-largest sector of its planting industry after grain and vegetables. In recent years the fruit industry in China has developed rapidly, with planting area and yield expanding quickly; it has achieved economies of scale and continues to grow. At present, however, most fruit is still picked by hand, which is time-consuming, labor-intensive, and physically demanding; moreover, as the population ages and the agricultural workforce shrinks, the cost of manual picking keeps rising, weakening the market competitiveness of fruit. Harvesting orchard fruit promptly and efficiently while reducing picking cost is therefore particularly important. A fruit and vegetable picking robot based on machine vision can make full use of its perception capability to recognize and pick fruit and improve picking efficiency, thereby raising economic returns and farmers' income; such robots have become a research hotspot in intelligent agricultural machinery both in China and abroad. However, few picking robots have become practical products and they are rarely deployed at scale, mainly because their level of intelligence is still low. In view of this situation, research on fruit-picking-robot technologies that enables mechanized, automated, and intelligent picking of orchard fruit is of great practical significance.
Fruits and vegetables grow in diverse forms, and a picking robot needs different picking mechanisms for different growth forms. In the operation of a fruit and vegetable picking robot, the first task is therefore to visually identify fruits and vegetables in their different growth forms, so that the robot can then select the corresponding method to pick them successfully. Most current research, however, addresses only the identification of fruits and vegetables in a single growth form; integrated identification across different growth forms remains a necessary but unsolved step. In the existing literature at home and abroad there is little dedicated research on visual identification of fruit and vegetable growth forms; the topic is touched on only in studies that judge the growth form of single fruits under overlapping or branch-and-leaf occlusion during fruit recognition. Zhang Yajing of China Agricultural University determines whether multiple fruits overlap by setting a single-fruit area threshold and computing the area of each region after segmenting the acquired image. Chua Jianrong et al. of Jiangsu University judge whether fruits overlap in a segmented citrus image using a minimum circumscribed rectangle side-length ratio threshold of a/b > 1.4. Researchers at China Agricultural University stipulate that if the overlap of the circumscribed circles of two or more apple fruits exceeds 1/2 of the smaller circle, an apple fruit is considered to have been split into several objects by branch-and-leaf occlusion and is counted as a branch-and-leaf-occluded fruit. Regarding identification of fruits and vegetables in multiple growth forms for picking robots, the patent with application number CN201310188346.4 proposes a coarse-to-fine identification method combining geometric calculation and area mapping. Most of the literature thus only judges the growth form of a single fruit or vegetable, and does so rather simply; the multi-growth-form identification method studied earlier by our team is traditional, complex, and of limited applicability. In short, there is as yet no complete and mature method for identifying fruits and vegetables in different growth forms.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: build a network based on deep learning to identify the growth forms of apple fruits, so that a picking robot can automatically and visually recognize fruit growth forms, laying the foundation for subsequently selecting the corresponding picking mechanism.
The technical scheme adopted by the invention is as follows: a fruit growth form visual identification method based on a convolutional neural network, comprising the following steps:
S1, image acquisition: fruit images of different growth forms are captured in several orchards with a single-lens reflex camera and labeled; the different growth forms comprise four categories: single fruit without branch occlusion, single fruit with branch occlusion, overlapping fruit without branch occlusion, and overlapping fruit with branch occlusion;
S2, image enhancement: data enhancement is applied to the collected fruit images to expand the data set; the enhancement methods include saturation adjustment, contrast adjustment, flipping, and sharpness adjustment, and the images are then randomly divided into a training set, a validation set, and a test set at a 6:2:2 ratio;
S3, building the convolutional neural network model: the network first down-samples through 4 convolution modules. Because the second and third down-sampling steps inevitably degrade some key information, 9 C3 networks are added after the second and the third convolution modules to improve network performance, and 3 C3 networks are added after each of the other convolution modules. The down-sampled features are fed into a spatial pyramid pooling layer, with 3 C3 networks inserted after the pooling layer. An up-sampling stage follows, consisting mainly of 2 convolution modules that adjust the number of channels and 2 up-sampling modules that enhance semantic information, with 3 C3 networks inserted after each up-sampling module. After up-sampling, the down-sampling and up-sampling operations are repeated: the features are down-sampled through 2 convolution modules, with 3 C3 networks inserted after each convolution module; they then pass through 2 up-sampling modules, with 3 C3 networks and 3 convolution modules inserted after each up-sampling module, the 3 convolution modules being used to adjust the number of channels; they then pass through two down-sampling modules, with 3 C3 networks inserted after each one. Finally the features are sent to the detection module for detection, and a hybrid activation function is used in the 12 convolution modules to improve the expressive capacity of the neural network;
The model detection module evaluates the quality of the trained model by computing 3 loss values, comprising a classification loss and a regression loss; the classification loss is divided into the classification loss of positive samples and the foreground/background prediction loss of positive and negative samples. The regression loss is computed with CIOU_Loss and the classification losses with BCELoss and BCEWithLogitsLoss respectively; the three loss values are summed as the index for evaluating model quality, and the smaller the loss value, the better the training, the optimal model being obtained once the loss value no longer changes;
the hybrid activation function formula is as follows:
f(x)=(p1-p2)x·σ[β(p1-p2)x]+p2x (1)
f(x)=xσ(x) (2)
where σ is the sigmoid function and p1, p2, and β are three learnable parameters used for adaptive adjustment; equation (1) is the ACON-C activation function and equation (2) is the SiLU activation function;
Compared with the currently most widely used ReLU function, equation (1) is non-saturating, smooth, and non-monotonic, and yields higher accuracy in deeper neural networks; to save training time and avoid overfitting, the invention uses the activation function of equation (1) in the 3×3 convolution modules of the down-sampling path and the activation function of equation (2) in the 1×1 convolution modules that adjust the number of channels;
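To make equations (1) and (2) concrete, the following is a minimal PyTorch sketch of the two activation functions; the module names, the per-channel parameter shapes, and the initial parameter values are illustrative assumptions rather than details disclosed by the patent.

```python
import torch
import torch.nn as nn

class AconC(nn.Module):
    """ACON-C activation, equation (1): f(x) = (p1 - p2)*x*sigmoid(beta*(p1 - p2)*x) + p2*x.
    p1, p2 and beta are learnable per-channel parameters (initialization is an assumption)."""
    def __init__(self, channels: int):
        super().__init__()
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        dpx = (self.p1 - self.p2) * x
        return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x

class SiLU(nn.Module):
    """SiLU activation, equation (2): f(x) = x * sigmoid(x)."""
    def forward(self, x):
        return x * torch.sigmoid(x)
```

Following the text above, AconC would sit in the 3×3 convolution modules of the down-sampling path and SiLU in the 1×1 channel-adjustment modules.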
S4, training the convolutional neural network model: the model is trained on the training set, and the weight parameters, bias parameters, and batch normalization weight parameters of the convolutional neural network are optimized with an SGD optimizer. Each iteration computes the gradient of a mini-batch and then updates each model parameter in the direction opposite to the gradient using the learning rate, which is gradually reduced as the number of iterations increases until the model converges;
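As a rough illustration of step S4, the sketch below trains with SGD and a gradually decaying learning rate in PyTorch; the model, data loader, loss function, and every hyperparameter value (learning rate, momentum, epoch count, cosine schedule) are placeholders assumed for illustration, not values stated in the patent.

```python
import torch

# model, train_loader and compute_loss are assumed to be defined elsewhere
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):                              # epoch count is an assumption
    for images, targets in train_loader:              # one mini-batch per iteration
        optimizer.zero_grad()
        loss = compute_loss(model(images), targets)   # sum of the three loss terms
        loss.backward()                               # gradient of the mini-batch
        optimizer.step()                              # step against the gradient
    scheduler.step()                                  # gradually reduce the learning rate
```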
S5, identifying the fruit growth form: the test set images are fed into the optimal convolutional neural network model trained in step S4 for forward propagation. Each returned prediction box has the format: center point + width and height + confidence + classification result. NMS is then performed with a confidence threshold and an IoU threshold, the prediction box is converted from center point and width/height to bottom-left and top-right corner coordinates, and the prediction results are saved.
The invention has the following beneficial effects:
the method provides a method for identifying the fruit growth form based on the deep learning technology, and compared with fast-RCNN, YOLOv3 and YOLOv4 algorithms, the method has higher identification accuracy and higher identification speed, and the number of model parameters is less; the invention enriches the technology of the current agricultural intellectualization in the research direction of fruit and vegetable growth form identification.
Drawings
FIG. 1 is a flow chart of a fruit growth morphology visual identification method based on a convolutional neural network according to the present invention;
FIG. 2 is a diagram of the network architecture of the present invention;
FIG. 3 is a block diagram of the C3 network of the present invention;
FIG. 4 is a diagram of a feature processing network architecture of the present invention;
FIG. 5 is a diagram illustrating the effect of identifying the growth morphology of fruits by using a morphology recognition model according to the present invention.
Detailed Description
The invention will be further described below with reference to the accompanying drawings and embodiments; the drawings are simplified schematics that illustrate the basic structure of the invention and therefore show only the parts relevant to it.
As shown in fig. 1, the embodiment of the present invention provides a fruit growth morphology visual identification method based on a convolutional neural network,
S1, image acquisition: fruit images of different growth forms are captured in several orchards with a single-lens reflex camera and labeled; the different forms comprise single fruit without branch occlusion, single fruit with branch occlusion, overlapping fruit without branch occlusion, and overlapping fruit with branch occlusion. The collected fruit images are classified and labeled with annotation software; samples whose pixel area is too small or unclear are not labeled, to prevent the neural network from overfitting, and targets near the image edge whose visible area is less than 15% are likewise not labeled, because their distinguishing features cannot be reliably identified.
S2, image enhancement: for identifying fruit growth forms in an orchard, weather and illumination change greatly over the course of a day, so whether the convolutional neural network can handle fruit and vegetable images collected under different illumination conditions depends on the completeness of the training data set. At the same time, too little training data yields a poor approximation and leaves the model under-constrained, causing overfitting. To enrich the experimental data set, data enhancement in terms of saturation, contrast, flipping, and sharpness is therefore applied to the acquired images;
the human visual system perceives the color of a target object as invariant under changes in illumination and imaging, but imaging equipment is extremely sensitive to environmental changes, and under different illumination conditions the captured image inevitably differs in color from the real scene. By adjusting contrast and saturation, the method changes the vividness of colors and the contrast between bright and dark regions of the image to strengthen the generalization ability of the neural network: the saturation is increased by 50% and the contrast enhancement factor is set to 1.5;
to further expand the data set, the original images are rotated by 180 degrees to improve the detection performance of the neural network. Images can also be unclear because the subject is too far away, an incorrect focal length is chosen during framing, or the camera moves, and blurred images degrade detection performance; therefore part of the original images are randomly selected and salt-and-pepper noise with a variance of 0.1 is added to simulate unclear images, and using these blurred images as samples further strengthens the robustness of the detection model. In addition, part of the original images are sharpened with an enhancement factor of 0.5 to compensate the image contours and enhance the edges, making the images clearer. The enhanced images are randomly divided into a training set, a validation set, and a test set at a 6:2:2 ratio.
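A minimal sketch of the augmentations and the 6:2:2 split described above, written with Pillow and NumPy; the patent gives no code, so the function names, the way the enhancement factors are applied, and the noise parameterization (an amount used in place of a variance) are assumptions.

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

def adjust_color(img: Image.Image) -> Image.Image:
    """Raise saturation by roughly 50% and apply a contrast enhancement factor of 1.5."""
    img = ImageEnhance.Color(img).enhance(1.5)
    return ImageEnhance.Contrast(img).enhance(1.5)

def flip_180(img: Image.Image) -> Image.Image:
    """Rotate the original image by 180 degrees."""
    return img.rotate(180)

def sharpen(img: Image.Image) -> Image.Image:
    """Sharpness adjustment with the enhancement factor of 0.5 stated in the text."""
    return ImageEnhance.Sharpness(img).enhance(0.5)

def salt_pepper(img: Image.Image, amount: float = 0.1) -> Image.Image:
    """Add salt-and-pepper noise to simulate unclear images."""
    arr = np.array(img)
    mask = np.random.rand(*arr.shape[:2])
    arr[mask < amount / 2] = 0        # pepper
    arr[mask > 1 - amount / 2] = 255  # salt
    return Image.fromarray(arr)

def split_dataset(paths, seed=0):
    """Randomly split the original and augmented image paths 6:2:2 into train/val/test."""
    rng = random.Random(seed)
    paths = list(paths)
    rng.shuffle(paths)
    n = len(paths)
    return paths[:int(0.6 * n)], paths[int(0.6 * n):int(0.8 * n)], paths[int(0.8 * n):]
```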
S3, building the convolutional neural network model. The constructed network contains 39 processing modules in total, and its specific structure is shown in FIG. 2. Before the training images are fed into the convolutional neural network, four images are spliced together by random scaling, random cropping, and random arrangement to increase the number of small targets and make the network more robust. Since the invention aims to improve the recognition rate while maintaining real-time detection efficiency, the image preprocessing module applies a slicing operation to the image, following the depthwise separable convolution principle, to reduce the amount of model parameter computation (a sketch of both operations is given below).
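The four-image splicing described above corresponds to a mosaic-style augmentation, and the slicing operation to a Focus-style layer; the sketch below illustrates both under those assumptions and is not the patent's exact preprocessing module (the matching transform of the label coordinates is omitted).

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def mosaic(imgs, size=640):
    """Splice four images (float tensors of shape 3,H,W) onto one canvas around a
    random center point; the corresponding label boxes would need the same transform."""
    canvas = torch.zeros(3, size, size)
    cx = random.randint(size // 4, 3 * size // 4)
    cy = random.randint(size // 4, 3 * size // 4)
    regions = [(0, 0, cx, cy), (cx, 0, size, cy), (0, cy, cx, size), (cx, cy, size, size)]
    for img, (x1, y1, x2, y2) in zip(imgs, regions):
        patch = F.interpolate(img[None], size=(y2 - y1, x2 - x1),
                              mode="bilinear", align_corners=False)[0]
        canvas[:, y1:y2, x1:x2] = patch            # place each rescaled image in its region
    return canvas

class Focus(nn.Module):
    """Slicing operation: move each 2x2 pixel neighborhood into the channel dimension
    before a convolution, halving the resolution without discarding pixel information."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch * 4, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                    x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1))
```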
After this brief preprocessing, the image is sent to the feature extraction module to fit the fruit growth forms, and is first down-sampled by three convolutions. The invention inserts 3 C3 networks after each convolution operation; the specific structure of the C3 network is shown in FIG. 3. The C3 module first splits the feature map of the base layer into two parts and then merges them through a cross-stage hierarchical structure, which reduces computation while preserving accuracy. A residual component is also added inside the C3 network, which strengthens the gradient propagated backward between layers, prevents the gradient from vanishing as the network deepens, and allows finer-grained features to be extracted without network degradation. After down-sampling, to convert the convolutional features of images of arbitrary scale to the same dimension, the method introduces a spatial pyramid pooling layer, which lets the convolutional neural network handle images of arbitrary scale and also helps avoid overfitting.
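A minimal sketch of a C3 module with a residual bottleneck and of a spatial pyramid pooling layer, consistent with the structure described above; the channel counts, kernel sizes, and pooling scales are assumptions.

```python
import torch
import torch.nn as nn

def conv_bn_act(c_in, c_out, k=1, s=1):
    """Convolution followed by batch normalization and an activation."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
                         nn.BatchNorm2d(c_out), nn.SiLU())

class Bottleneck(nn.Module):
    """Residual component: two convolutions plus a shortcut that keeps gradients flowing."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(conv_bn_act(c, c, 1), conv_bn_act(c, c, 3))

    def forward(self, x):
        return x + self.block(x)

class C3(nn.Module):
    """Split the base feature map into two branches, process one with bottlenecks,
    then merge them across stages (cross-stage partial structure)."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        c_half = c_out // 2
        self.branch1 = conv_bn_act(c_in, c_half, 1)
        self.branch2 = nn.Sequential(conv_bn_act(c_in, c_half, 1),
                                     *[Bottleneck(c_half) for _ in range(n)])
        self.merge = conv_bn_act(2 * c_half, c_out, 1)

    def forward(self, x):
        return self.merge(torch.cat([self.branch1(x), self.branch2(x)], dim=1))

class SPP(nn.Module):
    """Spatial pyramid pooling: max-pool at several scales and concatenate, so features
    from inputs of arbitrary scale end up with the same channel dimension."""
    def __init__(self, c_in, c_out, sizes=(5, 9, 13)):
        super().__init__()
        c_half = c_in // 2
        self.pre = conv_bn_act(c_in, c_half, 1)
        self.pools = nn.ModuleList(nn.MaxPool2d(k, stride=1, padding=k // 2) for k in sizes)
        self.post = conv_bn_act(c_half * (len(sizes) + 1), c_out, 1)

    def forward(self, x):
        x = self.pre(x)
        return self.post(torch.cat([x] + [p(x) for p in self.pools], dim=1))
```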
Small targets carry little pixel information, and the down-sampling process inevitably loses some of it, which creates a large gap in detection performance between large and small objects. The invention therefore performs up-sampling immediately after the down-sampling operations, using two up-sampling modules to preserve the semantic information of the feature maps as much as possible, and then continues down-sampling to propagate the strong localization features of the lower layers upward; the specific structure of this part is shown in FIG. 4. Where the input and output of the three routes are nodes of the same level, an extra edge is added in between for fusion, so more features are fused without extra cost. The method repeats the up-sampling and down-sampling steps to deepen the network and better fit fruit morphological features. After the backbone network has extracted the fruit morphological features, they are sent to the detection module to detect the validation set and compute the three model loss values and the mAP. The losses comprise a classification loss and a regression loss, the classification loss being divided into the classification loss of positive samples and the foreground/background prediction loss of positive and negative samples; the regression loss is computed with CIOU_Loss, the classification losses with BCELoss and BCEWithLogitsLoss respectively, and finally the three loss values are summed as the index for evaluating model quality.
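A hedged sketch of how the three loss terms could be combined: a CIoU loss for box regression plus binary cross-entropy for the class and foreground/background predictions. The CIoU implementation details and the equal weighting of the three terms are assumptions, not values taken from the patent.

```python
import math
import torch
import torch.nn as nn

bce_cls = nn.BCEWithLogitsLoss()   # classification loss on positive samples
bce_obj = nn.BCEWithLogitsLoss()   # foreground/background (objectness) loss

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss for boxes given as (x1, y1, x2, y2) tensors of shape (N, 4)."""
    iw = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    ih = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = iw * ih
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # squared center distance over the squared diagonal of the enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        torch.atan((target[:, 2] - target[:, 0]) / (target[:, 3] - target[:, 1] + eps)) -
        torch.atan((pred[:, 2] - pred[:, 0]) / (pred[:, 3] - pred[:, 1] + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return (1 - iou + rho2 / c2 + alpha * v).mean()

def total_loss(box_p, box_t, cls_p, cls_t, obj_p, obj_t):
    """Sum of the regression, classification and objectness losses (equal weights assumed)."""
    return ciou_loss(box_p, box_t) + bce_cls(cls_p, cls_t) + bce_obj(obj_p, obj_t)
```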
S4, training the convolutional neural network model: the model is trained on the training set, and the weight parameters, bias parameters, and batch normalization weight parameters of the convolutional neural network are optimized with an SGD optimizer to obtain the optimal model;
S5, the trained optimal model is loaded and the test set images are fed into it for forward propagation. Each returned prediction box has the format: center point + width and height + confidence + classification result. NMS is then performed with a confidence threshold and an IoU threshold, the prediction box is converted from center point and width/height to bottom-left and top-right corner coordinates, and finally the prediction results are saved. The effect is shown in fig. 5, where the number after each category label is the confidence of that category.
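A minimal sketch of the post-processing in step S5: predicted boxes are converted from center point and width/height to corner coordinates and then filtered with a confidence threshold and an IoU threshold using NMS. The use of torchvision's NMS operator and the threshold values shown are assumptions; the patent does not name a specific implementation.

```python
import torch
from torchvision.ops import nms

def postprocess(pred, conf_thres=0.25, iou_thres=0.45):
    """pred: (N, 5 + num_classes) rows of [cx, cy, w, h, objectness, class scores...]."""
    scores, classes = pred[:, 5:].max(dim=1)
    scores = scores * pred[:, 4]               # combine class score with objectness
    keep = scores > conf_thres                 # confidence threshold
    pred, scores, classes = pred[keep], scores[keep], classes[keep]

    # center point + width/height  ->  corner coordinates
    boxes = torch.empty_like(pred[:, :4])
    boxes[:, 0] = pred[:, 0] - pred[:, 2] / 2  # x1
    boxes[:, 1] = pred[:, 1] - pred[:, 3] / 2  # y1
    boxes[:, 2] = pred[:, 0] + pred[:, 2] / 2  # x2
    boxes[:, 3] = pred[:, 1] + pred[:, 3] / 2  # y2

    keep = nms(boxes, scores, iou_thres)       # IoU threshold
    return boxes[keep], scores[keep], classes[keep]
```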
The invention provides a fruit growth form identification method based on deep learning; compared with the Faster-RCNN, YOLOv3, and YOLOv4 algorithms, it achieves higher identification accuracy and faster identification speed with fewer model parameters. The invention enriches current agricultural intelligence technology in the research direction of fruit and vegetable growth form identification.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (3)

1. A fruit growth form visual identification method based on a convolutional neural network is characterized by comprising the following steps:
S1, image acquisition: fruit images of different growth forms are collected in an orchard and labeled, the different growth forms comprising: single fruit without branch occlusion, single fruit with branch occlusion, overlapping fruit without branch occlusion, and overlapping fruit with branch occlusion;
S2, image enhancement: data enhancement is applied to the collected fruit images to expand the data set; the enhancement methods include saturation adjustment, contrast adjustment, flipping, and sharpness adjustment, and the images are randomly divided into a training set, a validation set, and a test set at a 6:2:2 ratio;
S3, building the convolutional neural network model: the network consists of 1 image preprocessing module, 12 convolution modules, 1 spatial pyramid pooling layer, 4 up-sampling modules, 8 feature fusion blocks, 12 C3 modules, and 1 detection module. The network first down-samples through the 4 convolution modules, with 9 C3 networks added after the second and the third convolution modules respectively; after down-sampling the features are fed into the spatial pyramid pooling layer, and 3 C3 networks are inserted after the pooling layer. An up-sampling operation follows, comprising 2 convolution modules and 2 up-sampling modules, with 3 C3 networks inserted after each up-sampling module. After up-sampling, the down-sampling and up-sampling operations are repeated: the features are down-sampled through 2 convolution modules, with 3 C3 networks inserted after each convolution module; they then pass through 2 up-sampling modules, with 3 C3 networks and 3 convolution modules inserted after each up-sampling module; they then pass through two down-sampling modules, with 3 C3 networks inserted after each one. Finally the features are sent to the detection module for detection, and a hybrid activation function is used in the 12 convolution modules to improve the expressive capacity of the neural network;
S4, training the convolutional neural network model: the model is trained on the training set, and the weight parameters, bias parameters, and batch normalization weight parameters of the convolutional neural network are optimized with an SGD optimizer; each iteration computes the gradient of a mini-batch and then updates each model parameter in the direction opposite to the gradient using the learning rate, which is gradually reduced as the number of iterations increases until the model converges;
S5, identifying the fruit growth form: the test set images are fed into the convolutional neural network model trained in step S4 for forward propagation; each returned prediction box has the format: center point + width and height + confidence + classification result; NMS is then performed with a confidence threshold and an IoU threshold, the prediction box is converted from center point and width/height to bottom-left and top-right corner coordinates, and the prediction results are saved.
2. The visual fruit growth morphology recognition method based on the convolutional neural network as claimed in claim 1, wherein the formula of the hybrid activation function of step S3 is:
f(x)=(p1-p2)x·σ[β(p1-p2)x]+p2x (1)
f(x)=xσ(x) (2)
where σ is the sigmoid function and p1, p2, and β are three learnable parameters used for adaptive adjustment; equation (1) is the ACON-C activation function and equation (2) is the SiLU activation function; the activation function of equation (1) is used in the 3×3 convolution modules of the down-sampling path, and the activation function of equation (2) is used in the 1×1 convolution modules that adjust the number of channels.
3. The fruit growth form visual identification method based on the convolutional neural network according to claim 1, wherein the model detection module evaluates the quality of the trained model by computing a classification loss and a regression loss, the classification loss being divided into the classification loss of positive samples and the foreground/background prediction loss of positive and negative samples; the regression loss is computed with CIOU_Loss, the classification losses with BCELoss and BCEWithLogitsLoss respectively, and the three loss values are summed as the index for evaluating model quality, the optimal model being obtained when the loss value no longer changes.
CN202111067533.8A 2021-09-13 2021-09-13 Fruit growth morphology visual identification method based on convolutional neural network Active CN113837039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111067533.8A CN113837039B (en) 2021-09-13 2021-09-13 Fruit growth morphology visual identification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111067533.8A CN113837039B (en) 2021-09-13 2021-09-13 Fruit growth morphology visual identification method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN113837039A true CN113837039A (en) 2021-12-24
CN113837039B CN113837039B (en) 2023-10-24

Family

ID=78959224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111067533.8A Active CN113837039B (en) 2021-09-13 2021-09-13 Fruit growth morphology visual identification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN113837039B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11074711B1 (en) * 2018-06-15 2021-07-27 Bertec Corporation System for estimating a pose of one or more persons in a scene
WO2021139069A1 (en) * 2020-01-09 2021-07-15 南京信息工程大学 General target detection method for adaptive attention guidance mechanism
CN111784671A (en) * 2020-06-30 2020-10-16 天津大学 Pathological image focus region detection method based on multi-scale deep learning
CN113052210A (en) * 2021-03-11 2021-06-29 北京工业大学 Fast low-illumination target detection method based on convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
伍锡如; 雪刚刚; 刘英璇: "Design of a visual recognition system for a fruit-picking robot based on deep learning", Journal of Agricultural Mechanization Research, no. 02 *
谢飞; 穆昱; 管子玉; 沈雪敏; 许鹏飞; 王和旭: "Oral leukoplakia segmentation based on Mask R-CNN with a spatial attention mechanism", Journal of Northwest University (Natural Science Edition), no. 01 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109956A (en) * 2023-04-12 2023-05-12 安徽省空安信息技术有限公司 Unmanned aerial vehicle self-adaptive zooming high-precision target detection intelligent inspection method

Also Published As

Publication number Publication date
CN113837039B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
CN110929578A (en) Anti-blocking pedestrian detection method based on attention mechanism
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN112381764A (en) Crop disease and insect pest detection method
CN109034184A (en) A kind of grading ring detection recognition method based on deep learning
Lv et al. A visual identification method for the apple growth forms in the orchard
CN110288033B (en) Sugarcane top feature identification and positioning method based on convolutional neural network
CN111179216A (en) Crop disease identification method based on image processing and convolutional neural network
CN113657326A (en) Weed detection method based on multi-scale fusion module and feature enhancement
CN111783693A (en) Intelligent identification method of fruit and vegetable picking robot
CN110348503A (en) A kind of apple quality detection method based on convolutional neural networks
CN113223027A (en) Immature persimmon segmentation method and system based on PolarMask
Gao et al. Recognition and Detection of Greenhouse Tomatoes in Complex Environment.
CN110599458A (en) Underground pipe network detection and evaluation cloud system based on convolutional neural network
CN112380917A (en) A unmanned aerial vehicle for crops plant diseases and insect pests detect
CN113837039B (en) Fruit growth morphology visual identification method based on convolutional neural network
Kundur et al. Deep convolutional neural network architecture for plant seedling classification
CN114037737B (en) Neural network-based offshore submarine fish detection and tracking statistical method
CN116740337A (en) Safflower picking point identification positioning method and safflower picking system
CN115861768A (en) Honeysuckle target detection and picking point positioning method based on improved YOLOv5
CN115439690A (en) Flue-cured tobacco leaf image grading method combining CNN and Transformer
CN114120359A (en) Method for measuring body size of group-fed pigs based on stacked hourglass network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant