CN111898651B - Tree detection method based on Tiny YOLOV3 algorithm - Google Patents


Info

Publication number
CN111898651B
CN111898651B
Authority
CN
China
Prior art keywords
layer
tree
convolution
optimized
tiny
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010661911.4A
Other languages
Chinese (zh)
Other versions
CN111898651A (en)
Inventor
王新彦
吕峰
袁春元
江泉
易政洋
张凯
盛冠杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Priority to CN202010661911.4A priority Critical patent/CN111898651B/en
Publication of CN111898651A publication Critical patent/CN111898651A/en
Application granted granted Critical
Publication of CN111898651B publication Critical patent/CN111898651B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38Outdoor scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tree detection method based on an optimized Tiny YOLOV3 algorithm, which comprises the following steps: (1) collecting image data and randomly dividing it into a training set and a test set; (2) optimizing the traditional Tiny YOLOV3 algorithm by replacing part of the pooling layers in the shallow part of the neural network with convolution layers, removing the convolution layer with 1024 channels, and concatenating the feature maps of some shallow convolution layers of the network; (3) training with the training set to obtain an optimal detection model; (4) detecting the pictures in the test set and evaluating the detection accuracy and real-time performance of the test-set results. By processing image data of trunks and spherical trees collected in a lawn environment with the optimized Tiny YOLOV3 algorithm, the method obtains the tree category information in the image; compared with existing approaches, which either cannot determine the tree category or rely on traditional image processing algorithms, the tree identification algorithm is more efficient and convenient.

Description

Tree detection method based on Tiny YOLOV3 algorithm
Technical Field
The invention relates to a detection method, in particular to a tree detection method based on a Tiny YOLOV3 algorithm.
Background
Trees are an important component of the agricultural environment. Tree detection is the basis of environmental perception for unmanned agricultural machinery, and rapid, accurate tree detection is a precondition for agricultural robots to achieve autonomous obstacle avoidance, positioning and navigation, and agricultural intelligence; research on tree target detection is therefore of great significance.
Existing tree detection methods mainly comprise methods based on traditional image recognition and methods combining traditional image recognition with a classifier. Traditional image recognition techniques mainly realize tree detection based on pixel features or on a color space model, for example, roughly estimating trunk positions from lidar data, or detecting apple trunks by pixel classification. Chen et al., in the paper "Multi-feature fusion tree trunk detection and orchard mobile robot localization using camera/ultrasonic sensors", combine color histograms with trained classifiers to achieve trunk detection of orange trees. Such methods can handle multi-target detection of a single type of trunk, but cannot handle multi-type, multi-target detection where appearance and shape differ greatly. Deep learning methods such as Tiny YOLOV3 build a target detection model by training a neural network and have the advantages of being fast, efficient, and easy to port. However, the Tiny YOLOV3 network has many pooling layers in its feature extraction part, and the shallow convolution layers of the network are not fully utilized, so the detection precision is low.
Disclosure of Invention
The invention aims to provide a tree detection method with high detection precision based on the Tiny YOLOV3 algorithm.
The technical scheme is as follows: the tree detection method based on the optimized Tiny YOLOV3 algorithm comprises the following steps:
(1) Collecting image data containing trunk and spherical tree targets, randomly dividing the collected image data into a training set and a testing set, and labeling;
(2) Optimizing the traditional Tiny YOLOV3 algorithm: replacing part of the pooling layers in the shallow part of the neural network with a 1x1 convolution layer, a 3x3 convolution layer with stride 1, and a 3x3 convolution layer with stride 2; removing the convolution layer with 1024 channels; and concatenating the feature maps of some shallow convolution layers of the neural network;
(3) Training the neural network of the optimized Tiny YOLOV3 algorithm by using a training set to obtain an optimal detection model;
(4) Detecting the picture data in the test set with the model obtained in step (3), and evaluating the detection accuracy and real-time performance of the test-set detection results.
The beneficial effects are that: compared with the prior art, the invention has the following remarkable advantages:
(1) The AP values of the invention on trunks and spherical trees are 89.04% and 73.55% respectively, improvements of 1.84 and 5.3 percentage points over the traditional Tiny YOLOV3; the total mAP is 81.29%, the detection speed is 187 pictures per second, and the computational cost is 5.244 BFLOPS, a reduction of 3.76%. The optimized Tiny YOLOV3 algorithm provided by the invention therefore has higher detection precision and good real-time performance for tree detection.
(2) By processing image data of trunks and spherical trees collected in a lawn environment with the optimized Tiny YOLOV3 algorithm, the method obtains the tree category information in the image; compared with existing approaches, which either cannot determine the tree category or rely on traditional image processing algorithms, the tree identification algorithm is more efficient and convenient.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of the model building and optimization steps of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
(1) Image acquisition: a smartphone camera is used to collect image data containing trunks and spherical trees on the campus of Jiangsu University of Science and Technology and in the Dayang scenic area of Yancheng, Jiangsu Province, yielding the following sub-datasets: trunk and spherical tree. The number of photographs in each sub-dataset is shown in Table 1.
TABLE 1. Division of each sub-dataset into training and test sets
Division and labeling of the image dataset: the two sub-datasets are mixed and shuffled, then randomly divided into a training set and a test set at a ratio of 3:1, giving 6325 positive samples in the training set and 2164 in the test set; the training set and test set are mutually independent, and the detailed division is shown in Table 1. The data are annotated with labelImg to form label files suitable for training the optimized Tiny YOLOV3 network.
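The shuffle-and-split step can be sketched as follows (an illustrative sketch; the patent does not specify the shuffling procedure or random seed, so those are assumptions):

```python
import random

def split_dataset(samples, train_ratio=3, test_ratio=1, seed=42):
    """Mix, shuffle, and randomly divide samples into a training set and a
    test set at train_ratio:test_ratio (3:1 here), keeping the two disjoint."""
    rng = random.Random(seed)        # fixed seed only for reproducibility
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = len(shuffled) * train_ratio // (train_ratio + test_ratio)
    return shuffled[:cut], shuffled[cut:]
```

With the 8489 positive samples reported above, an exact 3:1 cut gives 6366/2123, close to (but not exactly matching) the patent's 6325/2164, whose ratio is only approximately 3:1.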
(2) The traditional Tiny YOLOV3 algorithm is optimized: a 1x1 convolution layer, a 3x3 convolution layer with stride 1, and a 3x3 convolution layer with stride 2 replace part of the pooling layers in the shallow part of the neural network, and shortcut and concatenate connections among some shallow convolution layers replace the convolution layer with 1024 channels.
First, the images in the training set are normalized. Second, the normalized images are input into the feature extraction layer of the optimized Tiny YOLOV3 neural network to obtain two feature maps of different scales. The two feature maps then undergo feature fusion, up-sampling, and feature fusion again to obtain two different tensors, and a loss value between the tensors and the ground truth is calculated through a loss function. Finally, the weights are updated by backpropagation, and after multiple iterations a neural network detection model based on the optimized Tiny YOLOV3 algorithm is obtained. As shown in FIG. 2, the method specifically comprises the following steps; steps (22) and (23) are shown in dashed boxes in FIG. 2:
(21) The input images in the training set are normalized to 416x416x3.
(22) The normalized image is input into the feature extraction layer of the optimized Tiny YOLOV3 neural network to obtain two feature maps of different sizes. The input image data first passes through three modules of a convolution layer followed by a max-pooling layer with stride 2, and its size becomes 52x52x128; then through a 1x1 convolution layer, scale unchanged, with the number of channels becoming 64; then through a 3x3 convolution layer, size unchanged; then through a 1x1 convolution layer, scale unchanged, with the number of channels becoming 128; and then through a 3x3 convolution layer with stride 2, after which the size becomes 26x26x128. This layer is marked as layer 10.
In this step, the max-pooling layer at the original layer 7 is changed into a 3x3 convolution layer; a 1x1 convolution layer is added before layer 7 to compress the channels, feature extraction is then performed through a 3x3 convolution layer, and finally channel fusion and expansion are performed through a 1x1 convolution layer, so that layer 7 becomes layer 10. Because a distant tree appears small in the image, a pooling layer directly compresses features spanning only a few pixels, so the network behind the pooling layer cannot extract small-target information.
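The size bookkeeping of steps (22) and (23) can be checked with standard output-size arithmetic (a sketch; the 3x3 convolutions are assumed to use padding 1 and the max-pools a 2x2 window, the usual Tiny YOLOV3 convention, which the text does not spell out):

```python
def out_size(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

s = 416                            # normalized input, 416x416x3
for _ in range(3):                 # three conv + stride-2 max-pool modules
    s = out_size(s, 3, 1, 1)       # 3x3 conv, stride 1, pad 1: size kept
    s = out_size(s, 2, 2, 0)       # 2x2 max-pool, stride 2: size halved
stem = s                           # 52, matching the 52x52x128 feature map
layer10 = out_size(stem, 3, 2, 1)  # stride-2 3x3 conv that replaces a pool: 26
coarse = out_size(layer10, 2, 2, 0)  # later stride-2 max-pool: 13x13 scale
```

This reproduces the 416 to 52 to 26 to 13 progression the description relies on.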
(23) The feature map then passes through a 1x1 convolution layer, scale unchanged, with the channels compressed to 64; then through a 1x1 convolution expansion layer, scale unchanged, with the channels expanded to 128; then through a 3x3 convolution layer, after which its size becomes 26x26x256, and this layer is marked as layer 13; then through a max-pooling layer with stride 2, after which the scale becomes 13x13 with 256 channels; then through a 3x3 convolution layer, after which the size becomes 13x13x512; and finally through a max-pooling layer with stride 1, the size remaining 13x13x512. This is layer 16.
The output feature maps of layers 10 and 13 each pass through a 3x3 convolution layer with stride 2 and are then concatenated with the output feature map of layer 16, forming a feature map with scale 13x13 and 896 channels; this then passes through a 3x3 convolution layer, a 1x1 convolution layer, and a 1x1 convolution layer, all with stride 1, and the output feature map size is 13x13x18.
This step removes the convolution layer with 1024 channels in the existing model, which accounts for the largest share of the computation, and replaces it with feature-map concatenation, so the computational cost of the whole model is reduced.
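The channel arithmetic of the splice can be verified directly (a sketch; it assumes, consistently with the sizes quoted above, that the stride-2 3x3 convolutions preserve the channel counts of layers 10 and 13):

```python
# Channels arriving at the 13x13 fusion point, per the description above.
layer16_ch = 512   # 13x13x512 output of the max-pool stack (layer 16)
layer10_ch = 128   # 26x26x128 map, brought to 13x13 by a stride-2 3x3 conv
layer13_ch = 256   # 26x26x256 map, brought to 13x13 the same way
fused_ch = layer16_ch + layer10_ch + layer13_ch  # concatenation adds channels
print(fused_ch)    # 896, matching the 13x13x896 feature map in the text
```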
(3) Training the neural network of the optimized Tiny YOLOV3 algorithm with the training set to obtain an optimal detection model.
(31) Setting the hyperparameters of the neural network: the momentum parameter is 0.9, the weight-decay regularization parameter is 0.0005, the picture angle variation parameter is 0, the saturation and exposure variation parameters are 1.5, the hue variation parameter is 0.1, the initial learning rate is 0.001, and the total number of training iterations is 50020.
(32) After the first 1000 training iterations, the learning rate follows the poly update policy; at iterations 40000 and 45000 the learning rate is each time further attenuated by a factor of 10, and multi-scale training is enabled.
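A sketch of this schedule, assuming a Darknet-style polynomial burn-in over the first 1000 iterations followed by step decay (the warm-up exponent of 4 is an assumption matching Darknet's default, which the patent does not state):

```python
def learning_rate(iteration, base_lr=0.001, burn_in=1000,
                  decay_steps=(40000, 45000), scale=0.1, power=4):
    """Warm up polynomially over the first burn_in iterations, then decay
    the rate by 10x at each listed iteration (40000 and 45000 here)."""
    if iteration < burn_in:
        return base_lr * (iteration / burn_in) ** power
    lr = base_lr
    for step in decay_steps:
        if iteration >= step:
            lr *= scale
    return lr
```

Under these assumptions the rate is 0.001 through most of training, 0.0001 after iteration 40000, and 0.00001 after iteration 45000.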
(33) Clustering the size of an anchoring frame on the tree training set by using a kmeans++ algorithm;
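Step (33) clusters anchor-box sizes with kmeans++; a minimal sketch using the 1 - IOU distance customary for YOLO anchor clustering (the patent does not name the distance metric, so that choice, and the fixed seed, are assumptions):

```python
import random

def iou_wh(box, centroid):
    """IOU of two (w, h) boxes anchored at a common corner."""
    inter = min(box[0], centroid[0]) * min(box[1], centroid[1])
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) pairs with kmeans++ seeding and a 1 - IOU distance."""
    rng = random.Random(seed)
    centroids = [rng.choice(boxes)]
    while len(centroids) < k:
        # kmeans++: sample the next seed proportional to squared distance
        dists = [min((1 - iou_wh(b, c)) ** 2 for c in centroids) for b in boxes]
        r = rng.uniform(0, sum(dists))
        acc = 0.0
        for b, d in zip(boxes, dists):
            acc += d
            if acc >= r:
                centroids.append(b)
                break
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:           # assign each box to its highest-IOU centroid
            idx = max(range(k), key=lambda i: iou_wh(b, centroids[i]))
            clusters[idx].append(b)
        centroids = [             # recompute centroids as cluster means
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids
```

On a toy set with clearly small and large boxes, the two returned anchors separate accordingly.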
(34) The loss function adopts the CIOU index, the IOU threshold for loss calculation is 0.7, multi-scale training is enabled, and random is set to 1. The loss function based on the CIOU index is:
L_CIOU = 1 - IOU + ρ²(b, b^gt)/c² + αv
v = (4/π²)·(arctan(w^gt/h^gt) - arctan(w/h))², α = v/((1 - IOU) + v)
In the above, L_CIOU represents the loss function using CIOU as the index; IOU represents the ratio of the intersection area to the union area of the prediction box and the ground-truth box; b and b^gt represent the center points of the prediction box and the ground-truth box respectively; ρ denotes the Euclidean distance between b and b^gt; c is the diagonal length of the smallest box enclosing both boxes; α is a trade-off parameter; v measures the consistency of the aspect ratios of the prediction box and the ground-truth box; and w^gt, h^gt and w, h are the width and height of the ground-truth box and the prediction box respectively.
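A numeric sketch of the CIOU loss for axis-aligned boxes given as (cx, cy, w, h) (the box encoding is an assumption; the formula itself is the standard CIOU definition):

```python
import math

def ciou_loss(pred, truth):
    """CIOU loss for two boxes given as (cx, cy, w, h)."""
    def corners(b):
        cx, cy, w, h = b
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    px1, py1, px2, py2 = corners(pred)
    tx1, ty1, tx2, ty2 = corners(truth)
    # IOU term: intersection over union
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = pred[2] * pred[3] + truth[2] * truth[3] - inter
    iou = inter / union
    # squared center distance rho^2 over squared enclosing-box diagonal c^2
    rho2 = (pred[0] - truth[0]) ** 2 + (pred[1] - truth[1]) ** 2
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and trade-off parameter alpha
    v = (4 / math.pi ** 2) * (
        math.atan(truth[2] / truth[3]) - math.atan(pred[2] / pred[3])) ** 2
    denom = (1 - iou) + v
    alpha = v / denom if denom > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes the loss is 0; for disjoint boxes it exceeds 1 because the distance penalty adds to the zero-IOU term.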
(35) The number of filters of the convolution layer before two YOLO layers of the optimized Tiny YOLOV3 algorithm is set to 21, and the number of categories of the optimized Tiny YOLOV3 algorithm is modified to 2.
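The filter count of 21 follows from the standard YOLO output layout, assuming the usual 3 anchors per detection scale (3 anchors is an assumption consistent with Tiny YOLOV3 defaults, not a value stated here):

```python
def yolo_filters(num_anchors, num_classes):
    """Each anchor predicts 4 box offsets, 1 objectness score, and one
    confidence per class, so the pre-YOLO conv needs this many filters."""
    return num_anchors * (4 + 1 + num_classes)

print(yolo_filters(3, 2))  # 21, the value set for the 2-class tree detector
```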
(4) Prediction of feature images
(41) The feature map is input into the first YOLO layer for prediction, and the loss value is calculated according to the CIOU-based index. Since the feature map entering this YOLO layer has a scale of 13x13, its receptive field is large, so this layer is responsible for detecting large targets.
(42) The layer-23 feature map is passed through a 1x1 convolution layer with stride 1 and a 2x up-sampling layer, and the result is concatenated with the layer-13 feature map to form a feature map of size 26x26x384; this then passes through a 1x1 convolution layer with stride 1, outputting a 26x26 feature map with 256 channels, and finally through a 1x1 convolution layer with stride 1, outputting a feature map of size 26x26x18.
(43) The feature map is input into the second YOLO layer for prediction, and the loss value is calculated according to the CIOU-based index. Since the feature map entering this YOLO layer has a scale of 26x26, its receptive field is smaller, so this layer is responsible for detecting smaller targets.
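Claim 4 notes that non-maximum suppression is applied during detection; a minimal greedy NMS sketch (the IOU threshold of 0.45 is an illustrative assumption, not a value from the patent):

```python
def nms(boxes, scores, iou_thresh=0.45):
    """boxes as (x1, y1, x2, y2); returns kept indices, highest score first."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)            # keep the highest-scoring box
        keep.append(best)
        # drop every remaining box that overlaps it too strongly
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Two heavily overlapping detections of the same tree collapse to the higher-scoring one, while a distant detection survives.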
(5) The optimized Tiny YOLOV3 neural network model is evaluated according to the evaluation indexes. The indexes AP and mAP are calculated as follows:
(51) Calculate the AP_c value of a class C. First, the precision P_c of a single image for that class is calculated as:
P_c = N(TruePositives)_c / N(TotalObjects)_c
In the above, N(TruePositives)_c represents the number of objects correctly predicted as class C in the single image, and N(TotalObjects)_c represents the total number of class-C objects in that image. Then:
AP_c = ΣP_c / N(TotalImages)_c
In the above, ΣP_c represents the sum of the class-C precisions over all images of the test set, and N(TotalImages)_c represents the total number of images in the test set that contain class C.
(52) Calculate the mAP value over all classes:
mAP = ΣAP / N(Classes)
In the above, ΣAP represents the sum of the average precisions of all classes of the test set, and N(Classes) represents the total number of classes in the test set.
Through calculation of the evaluation indexes AP and mAP, the AP values of the trunk and the spherical tree are 89.04% and 73.55% respectively, and the mAP is 81.29%, so the tree detection method based on the optimized Tiny YOLOV3 algorithm has high detection precision for trunks and spherical trees.
All 2059 pictures of the test set are detected in a total of 11 seconds, i.e. at an FPS of 187 pictures per second, so the tree detection method based on the optimized Tiny YOLOV3 algorithm meets the real-time requirement.
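The real-time figure follows from simple arithmetic on the numbers above:

```python
images, seconds = 2059, 11  # test-set pictures and total detection time
fps = images / seconds      # pictures detected per second
print(round(fps))           # 187, the reported FPS
```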

Claims (6)

1. A tree detection method based on an optimized Tiny YOLOV3 algorithm, characterized by comprising the following steps:
(1) Collecting image data containing trunk and spherical tree targets, randomly dividing the collected image data into a training set and a testing set, and labeling;
(2) Optimizing the traditional Tiny YOLOV3 algorithm: replacing part of the pooling layers in the shallow part of the neural network with a 1x1 convolution layer, a 3x3 convolution layer with stride 1, and a 3x3 convolution layer with stride 2; removing the convolution layer with 1024 channels; and concatenating the feature maps of some shallow convolution layers of the neural network; the optimization specifically comprises the following steps:
(21) Normalizing the image;
(22) On the basis of layer 6, performing multi-channel fusion through a 1x1 convolution layer; then extracting features through a 3x3 convolution layer; then expanding the number of channels through a 1x1 convolution layer;
(23) Changing the max-pooling layer of layer 7 into a 3x3 convolution layer with the stride of the convolution kernel set to 2; compressing the channels by adding a 1x1 convolution layer before layer 7, extracting features through a 3x3 convolution layer, and finally performing channel fusion and expansion through a 1x1 convolution layer, whereby layer 7 becomes layer 10;
(24) Adding two 1x1 convolution layers after layer 10 of the modified model as layers 11 and 12, the original layer 8 becoming layer 13; after feature extraction and dimension reduction through a 3x3 convolution layer with stride 2, layers 10 and 13 of the modified model are concatenated with layer 16; features are then extracted through a 3x3 convolution layer and the number of channels is reduced;
(3) Training the neural network of the optimized Tiny YOLOV3 algorithm by using a training set to obtain an optimal detection model;
(4) Detecting the picture data in the test set with the model obtained in step (3), and evaluating the detection accuracy and real-time performance of the test-set detection results.
2. The tree detection method based on the optimized Tiny YOLOV3 algorithm according to claim 1, wherein the step (3) specifically comprises the following steps:
(31) Setting initial parameters including picture input size, momentum parameters, weight attenuation regular term parameters, picture angle change parameters, saturation and exposure change parameters, tone change parameters, initial learning rate and training total wheel number;
(32) After the first 1000 training iterations, the learning rate follows the poly update policy, and at iterations 40000 and 45000 the learning rate is each time further attenuated by a factor of 10;
(33) Clustering the size of an anchoring frame on the tree training set by using a kmeans++ algorithm;
(34) The loss function adopts CIOU index, sets the IOU threshold value of the loss calculation, and starts multi-scale training;
(35) Setting the number of filters of a convolution layer before two YOLO layers of the optimized Tiny YOLOV3 algorithm, and modifying the number of categories of the optimized Tiny YOLOV3 algorithm.
3. The tree detection method based on the optimized Tiny YOLOV3 algorithm according to claim 1, wherein the loss function formula based on the CIOU index in the step (3) is as follows:
L_CIOU = 1 - IOU + ρ²(b, b^gt)/c² + αv, with v = (4/π²)·(arctan(w^gt/h^gt) - arctan(w/h))² and α = v/((1 - IOU) + v), wherein L_CIOU represents the loss function using CIOU as the index; IOU represents the ratio of the intersection area to the union area of the prediction box and the ground-truth box; b and b^gt represent the center points of the prediction box and the ground-truth box respectively; ρ denotes the Euclidean distance between b and b^gt; c is the diagonal length of the smallest box enclosing both boxes; α is a trade-off parameter; v measures the consistency of the aspect ratios of the prediction box and the ground-truth box; and w^gt, h^gt and w, h are the width and height of the ground-truth box and the prediction box respectively.
4. The tree detection method based on the optimized Tiny YOLOV3 algorithm according to claim 1, wherein in the step (4), non-maximum suppression is required when the picture data passes through the convolution feature extraction layer in the process of detecting the pictures in the test set by using the detection model.
5. The tree detection method based on the optimized Tiny YOLOV3 algorithm according to claim 1, wherein in the step (4), AP values of each category and mAP values of the overall category are used as evaluation indexes of detection accuracy.
6. The tree detection method based on the optimized Tiny YOLOV3 algorithm according to claim 1, wherein in step (4), an FPS value is used as the evaluation index of real-time performance, the FPS value being the number of test-set pictures detected per unit time.
CN202010661911.4A 2020-07-10 2020-07-10 Tree detection method based on Tiny YOLOV3 algorithm Active CN111898651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010661911.4A CN111898651B (en) 2020-07-10 2020-07-10 Tree detection method based on Tiny YOLOV3 algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010661911.4A CN111898651B (en) 2020-07-10 2020-07-10 Tree detection method based on Tiny YOLOV3 algorithm

Publications (2)

Publication Number Publication Date
CN111898651A CN111898651A (en) 2020-11-06
CN111898651B true CN111898651B (en) 2023-09-26

Family

ID=73192225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010661911.4A Active CN111898651B (en) 2020-07-10 2020-07-10 Tree detection method based on Tiny YOLOV3 algorithm

Country Status (1)

Country Link
CN (1) CN111898651B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112487915B (en) * 2020-11-25 2024-04-23 江苏科技大学 Pedestrian detection method based on Embedded YOLO algorithm
CN112488230A (en) * 2020-12-07 2021-03-12 中国农业大学 Crop water stress degree judging method and device based on machine learning
CN112418208B (en) * 2020-12-11 2022-09-16 华中科技大学 Tiny-YOLO v 3-based weld film character recognition method
CN112464911A (en) * 2020-12-21 2021-03-09 青岛科技大学 Improved YOLOv 3-tiny-based traffic sign detection and identification method
CN112633174B (en) * 2020-12-23 2022-08-02 电子科技大学 Improved YOLOv4 high-dome-based fire detection method and storage medium
CN112634245A (en) * 2020-12-28 2021-04-09 广州绿怡信息科技有限公司 Loss detection model training method, loss detection method and device
CN112835037B (en) 2020-12-29 2021-12-07 清华大学 All-weather target detection method based on fusion of vision and millimeter waves
CN112651376A (en) * 2021-01-05 2021-04-13 珠海大横琴科技发展有限公司 Ship detection method and device
CN112785561A (en) * 2021-01-07 2021-05-11 天津狮拓信息技术有限公司 Second-hand commercial vehicle condition detection method based on improved Faster RCNN prediction model
CN113420583A (en) * 2021-01-29 2021-09-21 山东农业大学 Color-changing wood monitoring system and method based on edge computing platform
CN113591963A (en) * 2021-07-23 2021-11-02 广州绿怡信息科技有限公司 Equipment side loss detection model training method and equipment side loss detection method
CN114937195A (en) * 2022-03-29 2022-08-23 江苏海洋大学 Water surface floating object target detection system based on unmanned aerial vehicle aerial photography and improved YOLO v3

Citations (6)

Publication number Priority date Publication date Assignee Title
CN110210452A * 2019-06-14 2019-09-06 Northeastern University Object detection method in a mine truck environment based on improved tiny-yolov3
CN110321999A * 2018-03-30 2019-10-11 Beijing DeePhi Intelligent Technology Co., Ltd. Neural network computation graph optimization method
CN110337669A * 2017-01-27 2019-10-15 Agfa HealthCare Multi-class image segmentation method
CN110414421A * 2019-07-25 2019-11-05 University of Electronic Science and Technology of China Action recognition method based on sequential frame images
CN110765865A * 2019-09-18 2020-02-07 Beijing Institute of Technology Underwater target detection method based on improved YOLO algorithm
CN110826520A * 2019-11-14 2020-02-21 Yanshan University Port grab bucket detection method based on improved YOLOv3-tiny algorithm


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Data-Driven based tiny-yolov3 method for front vehicle detection inducing SPP-Net; Xiaolan Wang, Shou Wang; Special Section on Intelligent Logistics Based on Big Data; entire document *
Real-time vehicle detection and tracking based on an enhanced Tiny YOLOV3 algorithm; Liu Jun, Hou Shihao; Transactions of the Chinese Society of Agricultural Engineering; entire document *

Also Published As

Publication number Publication date
CN111898651A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111898651B (en) Tree detection method based on Tiny YOLOV3 algorithm
CN112270249B (en) Target pose estimation method integrating RGB-D visual characteristics
CN109583483B (en) Target detection method and system based on convolutional neural network
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN108780508A (en) System and method for normalized image
CN114387520B (en) Method and system for accurately detecting compact Li Zijing for robot picking
CN113160062B (en) Infrared image target detection method, device, equipment and storage medium
CN111460968A (en) Video-based unmanned aerial vehicle identification and tracking method and device
CN112487915B (en) Pedestrian detection method based on Embedded YOLO algorithm
CN111368825B (en) Pointer positioning method based on semantic segmentation
CN113128335B (en) Method, system and application for detecting, classifying and finding micro-living ancient fossil image
CN113627229B (en) Target detection method, system, device and computer storage medium
CN109902576B (en) Training method and application of head and shoulder image classifier
CN114241003B (en) All-weather lightweight high-real-time sea surface ship detection and tracking method
CN113252584B (en) Crop growth detection method and system based on 5G transmission
CN108734200A (en) Human body target visible detection method and device based on BING features
CN111488766A (en) Target detection method and device
CN109615610B (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN113327253B (en) Weak and small target detection method based on satellite-borne infrared remote sensing image
CN113628170A (en) Laser line extraction method and system based on deep learning
CN117351371A (en) Remote sensing image target detection method based on deep learning
CN116778473A (en) Improved YOLOV 5-based mushroom offline real-time identification method and system
CN116246184A (en) Papaver intelligent identification method and system applied to unmanned aerial vehicle aerial image
CN115995017A (en) Fruit identification and positioning method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant