CN110287849A - A lightweight deep-network image object detection method suitable for the Raspberry Pi - Google Patents

A lightweight deep-network image object detection method suitable for the Raspberry Pi

Info

Publication number
CN110287849A
CN110287849A
Authority
CN
China
Prior art keywords
network
depth
image
convolution
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910534572.0A
Other languages
Chinese (zh)
Other versions
CN110287849B (en)
Inventor
任坤
黄泷
范春奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Industrial Internet (Beijing) Technology Group Co.,Ltd.
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910534572.0A priority Critical patent/CN110287849B/en
Publication of CN110287849A publication Critical patent/CN110287849A/en
Application granted granted Critical
Publication of CN110287849B publication Critical patent/CN110287849B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/29 Graphical models, e.g. Bayesian networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A lightweight deep-network image object detection method suitable for the Raspberry Pi belongs to the fields of deep learning and object detection. First, images containing the targets to be detected are collected and preprocessed for network training. Next, the preprocessed images are fed into a depthwise separable dilated convolutional neural network for feature extraction, producing feature maps at different resolutions. These feature maps are then fed into a feature pyramid network for feature fusion, generating fused feature maps that carry richer information. A detection network then classifies and localizes the targets on the fused feature maps, and finally non-maximum suppression is applied to obtain the best detection results. The invention overcomes two difficulties: image object detection methods based on deep neural networks are hard to deploy on the Raspberry Pi platform, while methods based on lightweight networks suffer from low detection accuracy on that platform.

Description

A lightweight deep-network image object detection method suitable for the Raspberry Pi
Technical field
The invention belongs to the fields of deep learning and object detection, and in particular relates to a lightweight deep-network image object detection method suitable for the Raspberry Pi.
Background art
Object detection is a fundamental task in computer vision. Its main purpose is to locate objects of interest in an input image or video, accurately classify each object, and provide a bounding box for it. Early object detection techniques combined manually extracted features with classifiers to accomplish the detection task. Manual feature extraction is not only complicated, but the extracted features also lack good representational power and robustness, so researchers proposed object detection methods based on convolutional neural networks. Convolutional neural networks can automatically learn useful image features, which removes the limitations of hand-designed features and also improves detection accuracy. These advantages quickly made convolutional-neural-network-based methods replace traditional methods as the mainstream research direction in object detection.
Currently, image object detection models based on convolutional neural networks are optimized by deepening the network to improve detection accuracy. As networks deepen, however, the hardware required for training grows from ordinary platforms to large high-performance servers, and the large-scale intensive computation makes it difficult to run deep detection models on resource-limited micro computing platforms such as the Raspberry Pi. To address this problem, current technical solutions mainly compress and accelerate deep convolutional neural networks, reducing network parameters and computation so that the memory footprint and compute requirements of the model fit low-end hardware, but at the cost of a substantial drop in detection accuracy.
Summary of the invention
In order to solve the above technical problems, the present invention provides a lightweight deep-network image object detection method suitable for the Raspberry Pi, overcoming both the difficulty of deploying deep-neural-network-based detection methods on the Raspberry Pi platform and the low detection accuracy of lightweight-network-based methods on that platform.
In order to achieve the above technical purposes, the technical solution of the present invention is as follows:
A lightweight deep-network image object detection method suitable for the Raspberry Pi, comprising the following steps:
(1) Collect images containing the targets to be detected and preprocess them for network training;
(2) Feed the images preprocessed in step (1) into a depthwise separable dilated convolutional neural network for feature extraction, obtaining feature maps at different resolutions;
(3) Feed the feature maps at different resolutions obtained in step (2) into a feature pyramid network for feature fusion, generating fused feature maps that carry richer information;
(4) Feed the fused feature maps generated in step (3) into a detection network to classify and localize the targets, and finally apply non-maximum suppression to obtain the best detection results.
Further, the detailed process of step (1) is as follows:
(a) Select the target categories to be detected, acquire images containing targets of these categories, and annotate the targets, i.e., mark the bounding box and category of every target appearing in each image;
(b) When the number of acquired images is insufficient, perform data augmentation using the existing images, creating more images by flipping, translation, rotation, adding noise, and similar methods so that the trained network performs better (a preprocessing sketch follows this list);
(c) Uniformly resize the images to 224*224 to match the network input size;
(d) Balance the dataset with respect to the numbers of positive and negative samples, and split it into a training set and a test set.
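By way of illustration, the preprocessing of step (1) could be sketched in Python with torchvision as follows. The augmentation magnitudes (flip probability, rotation angle, translation range, noise level) are illustrative assumptions; the patent names only the operation types and the 224*224 target resolution.

```python
import torch
import torchvision.transforms as T

# Minimal preprocessing sketch for step (1); parameter values are assumptions.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                      # (b) flipping
    T.RandomAffine(degrees=15, translate=(0.1, 0.1)),   # (b) rotation + translation
    T.Resize((224, 224)),                               # (c) unify resolution
    T.ToTensor(),
    T.Lambda(lambda x: (x + 0.01 * torch.randn_like(x)).clamp(0, 1)),  # (b) noise
])
```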
Further, the detailed process of step (2) is as follows:
(A) The input image first passes through a 7*7 standard convolution block for preliminary feature extraction, producing a 112*112*64 feature map, where 64 is the number of channels;
(B) The 112*112*64 feature map obtained in step (A) passes successively through 3 depthwise separable convolution blocks for deep feature extraction, yielding feature maps of 56*56*256, 28*28*512, and 14*14*1024;
(C) The 14*14*1024 feature map obtained in step (B) passes through a depthwise separable dilated convolution block for final feature extraction, producing a feature map at 14*14*1024 resolution.
The depthwise separable convolution blocks in step (B) substantially compress the network parameters; a detailed explanation follows:
A 3*3 standard convolution takes an input tensor L_i of size H_i*W_i*M and applies a convolution kernel K_s of size 3*3*M*N to obtain an output tensor L_j of size H_i*W_i*N, where H_i and W_i are the height and width of the input, M is the number of input channels, N is the number of output channels, and 3*3 is the spatial dimension of the kernel. The computational cost of the 3*3 standard convolution is:
H_i*W_i*M*N*3*3.
Depthwise separable convolution decomposes the standard convolution into two steps: a 3*3 depthwise convolution and a 1*1 pointwise convolution. The 3*3 depthwise convolution applies a single convolution kernel to each input channel. The pointwise convolution then linearly combines the outputs of the depthwise layer with 1*1 convolution kernels.
Depthwise separable convolution takes an input tensor L_i of size H_i*W_i*M, applies a depthwise convolution kernel K_d of size 3*3*1*M to obtain an intermediate tensor L_j of size H_i*W_i*M, and then applies a pointwise convolution kernel K_p of size 1*1*M*N to obtain the output tensor L_k of size H_i*W_i*N. The computational cost of the depthwise separable convolution is:
H_i*W_i*M*3*3 + H_i*W_i*M*N.
By factoring the convolution into a filtering step and a combining step, depthwise separable convolution reduces the computational cost to only 1/N + 1/9 of that of standard convolution.
A ReLU layer is applied after the pointwise (1*1) convolution to introduce nonlinearity, avoiding vanishing gradients and increasing network sparsity to prevent overfitting. No ReLU layer is added after the depthwise (3*3) convolution, in order to preserve information flow between feature maps and reduce computation.
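For illustration, a minimal PyTorch sketch of the depthwise separable block described above is given below, with the ReLU placed after the pointwise convolution only. Batch normalization and other details the patent does not specify are omitted.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3*3 depthwise convolution followed by 1*1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, stride=1, dilation=1):
        super().__init__()
        # Filtering step: one 3*3 kernel per input channel (groups=in_ch),
        # costing H*W*M*3*3 multiply-adds. No ReLU follows it.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        # Combining step: 1*1 linear combination across channels,
        # costing H*W*M*N multiply-adds.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.relu = nn.ReLU(inplace=True)   # nonlinearity after pointwise only

    def forward(self, x):
        return self.relu(self.pointwise(self.depthwise(x)))
```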
In addition, the depthwise separable dilated convolution block in step (C) can effectively enlarge the receptive field of the convolution kernels without increasing the number of network parameters, improving the recall rate and localization accuracy of targets.
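Continuing the sketch above, and under the same assumptions, setting the block's dilation rate to 2 enlarges the receptive field (a 3*3 kernel then covers a 5*5 area) while leaving both the parameter count and the spatial resolution unchanged:

```python
import torch

block = DepthwiseSeparableConv(1024, 1024, stride=1, dilation=2)
x = torch.randn(1, 1024, 14, 14)
print(block(x).shape)                               # (1, 1024, 14, 14): resolution kept
print(sum(p.numel() for p in block.parameters()))   # same count as at dilation 1
```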
Further, the detailed process of step (3) is as follows:
(I) Apply a 1*1 convolution to each of the 28*28*512 and 14*14*1024 feature maps obtained from the feature extraction of step (2), unifying the number of channels to 256 and obtaining 28*28*256 and 14*14*256 feature maps;
(II) Upsample the feature maps of different spatial resolutions obtained in step (I) to matching resolutions and fuse them, generating fused feature maps of 56*56*256, 28*28*256, and 14*14*256 that carry richer information (see the sketch below).
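A sketch of this fusion in PyTorch follows. The patent does not state whether fusion is element-wise addition or channel concatenation; addition is assumed here because it preserves the 256-channel widths quoted above.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeaturePyramid(nn.Module):
    """Top-down fusion of the 56*56*256, 28*28*512 and 14*14*1024 backbone maps."""
    def __init__(self):
        super().__init__()
        # Step (I): 1*1 convolutions unify channel counts to 256.
        self.lat56 = nn.Conv2d(256, 256, 1)
        self.lat28 = nn.Conv2d(512, 256, 1)
        self.lat14 = nn.Conv2d(1024, 256, 1)

    def forward(self, c56, c28, c14):
        p14 = self.lat14(c14)
        # Step (II): upsample and fuse with the next finer resolution.
        p28 = self.lat28(c28) + F.interpolate(p14, scale_factor=2, mode="nearest")
        p56 = self.lat56(c56) + F.interpolate(p28, scale_factor=2, mode="nearest")
        return p56, p28, p14   # fused maps: 56*56*256, 28*28*256, 14*14*256
```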
Further, the detailed process of step (4) is as follows:
(i) Take the fused feature maps obtained in step (II) as input and generate multiple default boxes for each pixel of the input feature map, which are then processed by a localization subnetwork and a classification subnetwork. The detection output has two parts: bounding box positions and class confidences;
(ii) The localization subnetwork predicts a bounding box for each default box; the classification subnetwork predicts the confidence of every class for each default box;
(iii) Apply non-maximum suppression to the class confidences and to the position offsets of the predicted boxes relative to the default boxes, select the predicted box that minimizes the target loss function as the best prediction, and obtain the target class and box position from it (see the sketch below).
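The sketch below illustrates steps (i) to (iii) using torchvision's NMS operator. The head kernel sizes, the class count, and the decoded boxes are assumptions or placeholders, since the patent does not fix them; decoding the predicted offsets against the default boxes is abbreviated.

```python
import torch
import torch.nn as nn
from torchvision.ops import nms

NUM_CLASSES = 21    # assumed: 20 target classes + background
NUM_DEFAULTS = 4    # default boxes per feature-map pixel, as in the embodiment

# (i)-(ii): localization and classification subnetworks on a 256-channel
# fused feature map; 3*3 head convolutions are an assumption.
loc_head = nn.Conv2d(256, NUM_DEFAULTS * 4, 3, padding=1)
cls_head = nn.Conv2d(256, NUM_DEFAULTS * NUM_CLASSES, 3, padding=1)

fused = torch.randn(1, 256, 14, 14)     # one fused feature map
offsets = loc_head(fused)               # one predicted box per default box
scores = cls_head(fused)                # per-class confidences per default box

# (iii): after decoding offsets into absolute (x1, y1, x2, y2) boxes,
# non-maximum suppression keeps the best non-overlapping predictions.
boxes = torch.rand(14 * 14 * NUM_DEFAULTS, 4) * 112   # placeholder decoded corners
boxes[:, 2:] += boxes[:, :2]                          # guarantee x2 > x1, y2 > y1
conf = torch.rand(14 * 14 * NUM_DEFAULTS)             # placeholder confidences
keep = nms(boxes, conf, iou_threshold=0.45)           # indices of retained boxes
```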
The target loss function L(x, l, c, g) of the detection network in step (iii) consists of a classification loss function L_conf(x, c) and a localization loss function L_loc(x, l, g):
L(x, l, c, g) = (1/N) * (L_conf(x, c) + α * L_loc(x, l, g))
where x denotes the default boxes on the feature map, l is a predicted box, c is the confidence predicted for each default box in each class, g is a ground-truth box, L_conf(x, c) is the softmax classification loss of the default boxes over the class score set c, L_loc(x, l, g) is the position loss function, N is the number of default boxes matched to ground-truth boxes, and the weight coefficient α is set to 1 by cross-validation.
The detection network achieves more accurate target localization and classification by optimizing this loss function.
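A sketch of this loss under SSD-style assumptions follows: softmax cross-entropy for L_conf as stated above, smooth-L1 for L_loc (an assumption; the patent does not name the localization loss), and no hard-negative mining.

```python
import torch
import torch.nn.functional as F

def detection_loss(cls_logits, loc_pred, cls_target, loc_target, pos_mask, alpha=1.0):
    """L(x, l, c, g) = (1/N) * (L_conf(x, c) + alpha * L_loc(x, l, g)).

    cls_logits: (num_boxes, num_classes) scores for every default box
    cls_target: (num_boxes,) class indices, background for unmatched boxes
    loc_pred, loc_target: (num_boxes, 4) predicted and target box offsets
    pos_mask: (num_boxes,) True where a default box matches a ground-truth box
    """
    n = pos_mask.sum().clamp(min=1).float()   # N: matched default boxes
    l_conf = F.cross_entropy(cls_logits, cls_target, reduction="sum")
    l_loc = F.smooth_l1_loss(loc_pred[pos_mask], loc_target[pos_mask],
                             reduction="sum")
    return (l_conf + alpha * l_loc) / n       # alpha = 1 per cross-validation
```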
The beneficial effects brought by adopting the above technical solution are as follows:
The present invention uses depthwise separable convolution to reduce redundancy in the feature maps, substantially compressing the network parameters with minimal accuracy loss and lowering the memory and compute requirements. It introduces depthwise separable dilated convolution to enlarge the receptive field of the feature maps, enhancing small-target detection and localization accuracy without increasing the number of parameters. It also performs multi-scale feature fusion with a feature pyramid, so that features at all scales carry rich image information, further improving the detection and localization accuracy of small targets. The method of the invention has low memory footprint and low compute demands, and can realize object detection tasks on the Raspberry Pi platform.
Description of the drawings
Fig. 1 is the flow diagram of the invention;
Fig. 2 is the model structure diagram of the invention.
Specific embodiments
In order to make the purpose, technical solution, and advantages of the method of the present invention clearer, the present invention is explained below in conjunction with the accompanying drawings and embodiments, which are not intended to limit the present invention:
Step 1: collect images containing the targets to be detected and preprocess them for network training.
Select the target categories to be detected, then acquire a large number of images containing targets of these categories and annotate them, i.e., mark the bounding box and category of every target appearing in each image;
When the number of acquired images is insufficient, perform data augmentation using the existing images, creating more images by flipping, translation, rotation, adding noise, and similar methods so that the trained network performs better;
Uniformly resize the images to 224*224 to match the input size;
Balance the dataset with respect to the numbers of positive and negative samples, and split it into a training set and a test set.
Step 2: feed the images preprocessed in step 1 into the depthwise separable dilated convolutional neural network for feature extraction to obtain feature maps at different resolutions.
In stage 1, the 224*224 input image is downsampled by a 7*7 standard convolution with stride 2, outputting a 112*112*64 feature map.
In stage 2, the 112*112*64 input feature map is downsampled by a 3*3 max-pooling layer and then passes through 3 depthwise separable convolution layers for feature extraction, outputting a 56*56*256 feature map.
In stage 3, the 56*56*256 input feature map is downsampled by a 3*3 depthwise separable convolution layer with stride 2 and then passes through 3 depthwise separable convolution layers for feature extraction, outputting a 28*28*512 feature map.
In stage 4, the 28*28*512 input feature map is downsampled by a 3*3 depthwise separable convolution layer with stride 2 and then passes through 5 depthwise separable convolution layers for feature extraction, outputting a 14*14*1024 feature map.
In stage 5, the 14*14 input feature map passes through a depthwise separable convolution layer with dilation rate 2, which enlarges the receptive field while keeping the spatial resolution of the feature map unchanged, outputting a 14*14*1024 feature map.
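Reusing the DepthwiseSeparableConv sketch from earlier, the five stages could be assembled as below. Only the stage output shapes are fixed by the text; the internal channel transitions, and whether the stride-2 layer counts toward a stage's block total, are assumptions.

```python
import torch.nn as nn

def make_stage(in_ch, out_ch, n_blocks, stride=1, dilation=1):
    blocks = [DepthwiseSeparableConv(in_ch, out_ch, stride=stride, dilation=dilation)]
    blocks += [DepthwiseSeparableConv(out_ch, out_ch, dilation=dilation)
               for _ in range(n_blocks - 1)]
    return nn.Sequential(*blocks)

backbone = nn.Sequential(
    nn.Conv2d(3, 64, 7, stride=2, padding=3),   # stage 1: 224 -> 112*112*64
    nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2, padding=1),       # stage 2 entry: 112 -> 56
    make_stage(64, 256, 3),                     # stage 2: 56*56*256
    make_stage(256, 512, 3, stride=2),          # stage 3: 28*28*512
    make_stage(512, 1024, 5, stride=2),         # stage 4: 14*14*1024
    make_stage(1024, 1024, 1, dilation=2),      # stage 5: 14*14*1024, dilated
)
```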
Step 3: feed the feature maps at different resolutions obtained in step 2 into the feature pyramid network for feature fusion, generating fused feature maps that carry richer information.
The feature maps finally output by stages 2 to 5 each pass through a 1*1 convolution that unifies the number of channels to 256.
Feature map A (the stage-5 output) passes through a 1*1 convolution and is fused with the 14*14 feature map B output by stage 4, giving the 14*14 feature map AB.
Feature map AB is upsampled to 28*28 and fused with the 28*28 feature map C output by stage 3, giving feature map ABC.
Feature map ABC is upsampled to 56*56 and fused with the 56*56 feature map D output by stage 2, giving feature map ABCD.
Step 4: feed the fused feature maps generated in step 3 into the detection network to classify and localize the targets, and finally apply non-maximum suppression to obtain the best detection results.
Take the fused feature maps obtained in step 3 as input and generate 4 default boxes for each pixel of the input feature map, which are then processed by the localization subnetwork and the classification subnetwork. The detection output has two parts: bounding box positions and class confidences;
The localization subnetwork generates one predicted box for each default box; the classification subnetwork predicts the confidence of every class for each default box;
Apply non-maximum suppression to the class confidences and to the position offsets of the predicted boxes relative to the default boxes, select the predicted box that minimizes the target loss function as the best prediction, and obtain the target class and box position from it.
The target loss function L(x, l, c, g) consists of the classification loss function L_conf(x, c) and the localization loss function L_loc(x, l, g):
L(x, l, c, g) = (1/N) * (L_conf(x, c) + α * L_loc(x, l, g))
where x denotes the default boxes on the feature map, l is a predicted box, c is the confidence predicted for each default box in each class, g is a ground-truth box, L_conf(x, c) is the softmax classification loss of the default boxes over the class score set c, L_loc(x, l, g) is the position loss function, N is the number of default boxes matched to ground-truth boxes, and the weight coefficient α is set to 1 by cross-validation.
The above embodiments only illustrate the technical idea of the present invention and do not limit its scope of protection; any change made on the basis of the technical solution according to the technical idea provided by the invention falls within the scope of protection of the present invention.

Claims (7)

1. A lightweight deep-network image object detection method suitable for the Raspberry Pi, characterized in that it comprises the following steps:
(1) Collect images containing the targets to be detected and preprocess them for network training;
(2) Feed the images preprocessed in step (1) into a depthwise separable dilated convolutional neural network for feature extraction, obtaining feature maps at different resolutions;
(3) Feed the feature maps at different resolutions obtained in step (2) into a feature pyramid network for feature fusion, generating fused feature maps that carry richer information;
(4) Feed the fused feature maps generated in step (3) into a detection network to classify and localize the targets, and finally apply non-maximum suppression to obtain the best detection results.
2. The lightweight deep-network image object detection method suitable for the Raspberry Pi according to claim 1, characterized in that the detailed process of step (1) is as follows:
(a) Select the target categories to be detected, acquire images containing targets of these categories, and annotate the targets, i.e., mark the bounding box and category of every target appearing in each image;
(b) When the number of acquired images is insufficient, perform data augmentation using the existing images;
(c) Uniformly resize the images to 224*224 to match the input size;
(d) Balance the dataset with respect to the numbers of positive and negative samples, and split it into a training set and a test set.
3. The lightweight deep-network image object detection method suitable for the Raspberry Pi according to claim 1, characterized in that the detailed process of step (2) is as follows:
(A) The input image first passes through a 7*7 standard convolution block for preliminary feature extraction, producing a 112*112*64 feature map, where 64 is the number of channels;
(B) The 112*112*64 feature map obtained in step (A) passes successively through 3 depthwise separable convolution blocks for deep feature extraction, yielding feature maps of 56*56*256, 28*28*512, and 14*14*1024;
(C) The 14*14*1024 feature map obtained in step (B) passes through a depthwise separable dilated convolution block for final feature extraction, producing a feature map at 14*14*1024 resolution.
4. The lightweight deep-network image object detection method suitable for the Raspberry Pi according to claim 3, characterized in that the depthwise separable convolution blocks in step (B) substantially compress the network parameters, explained as follows:
a 3*3 standard convolution takes an input tensor L_i of size H_i*W_i*M and applies a convolution kernel K_s of size 3*3*M*N to obtain an output tensor L_j of size H_i*W_i*N, where H_i and W_i are the height and width of the input, M is the number of input channels, N is the number of output channels, and 3*3 is the spatial dimension of the kernel; the computational cost of the 3*3 standard convolution is:
H_i*W_i*M*N*3*3;
depthwise separable convolution decomposes the standard convolution into two steps: a 3*3 depthwise convolution and a 1*1 pointwise convolution; the 3*3 depthwise convolution applies a single convolution kernel to each input channel; the pointwise convolution then linearly combines the outputs of the depthwise layer with 1*1 convolution kernels;
depthwise separable convolution takes an input tensor L_i of size H_i*W_i*M, applies a depthwise convolution kernel K_d of size 3*3*1*M to obtain an intermediate tensor L_j of size H_i*W_i*M, and then applies a pointwise convolution kernel K_p of size 1*1*M*N to obtain the output tensor L_k of size H_i*W_i*N; the computational cost of the depthwise separable convolution is:
H_i*W_i*M*3*3 + H_i*W_i*M*N.
5. The lightweight deep-network image object detection method suitable for the Raspberry Pi according to claim 1, characterized in that the detailed process of step (3) is as follows:
(I) Apply a 1*1 convolution to each of the 28*28*512 and 14*14*1024 feature maps obtained from the feature extraction of step (2), unifying the number of channels to 256 and obtaining 28*28*256 and 14*14*256 feature maps;
(II) Upsample the feature maps of different spatial resolutions obtained in step (I) to matching resolutions and fuse them, generating fused feature maps of 56*56*256, 28*28*256, and 14*14*256 that carry richer information.
6. The lightweight deep-network image object detection method suitable for the Raspberry Pi according to claim 1, characterized in that the detailed process of step (4) is as follows:
(i) Take the fused feature maps obtained in step (3) as input and generate multiple default boxes for each pixel of the input feature map, which are then processed by a localization subnetwork and a classification subnetwork; the detection output has two parts: bounding box positions and class confidences;
(ii) The localization subnetwork predicts a bounding box for each default box; the classification subnetwork predicts the confidence of every class for each default box;
(iii) Apply non-maximum suppression to the class confidences and to the position offsets of the predicted boxes relative to the default boxes, select the predicted box that minimizes the target loss function as the best prediction, and obtain the target class and box position from it.
7. The lightweight deep-network image object detection method suitable for the Raspberry Pi according to claim 6, characterized in that the target loss function L(x, l, c, g) of the detection network in step (iii) consists of a classification loss function L_conf(x, c) and a localization loss function L_loc(x, l, g):
L(x, l, c, g) = (1/N) * (L_conf(x, c) + α * L_loc(x, l, g))
where x denotes the default boxes on the feature map, l is a predicted box, c is the confidence predicted for each default box in each class, g is a ground-truth box, L_conf(x, c) is the softmax classification loss of the default boxes over the class score set c, L_loc(x, l, g) is the position loss function, N is the number of default boxes matched to ground-truth boxes, and the weight coefficient α is set to 1 by cross-validation.
CN201910534572.0A 2019-06-20 2019-06-20 Lightweight depth network image target detection method suitable for raspberry pi Active CN110287849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910534572.0A CN110287849B (en) 2019-06-20 2019-06-20 Lightweight depth network image target detection method suitable for raspberry pi

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910534572.0A CN110287849B (en) 2019-06-20 2019-06-20 Lightweight depth network image target detection method suitable for raspberry pi

Publications (2)

Publication Number Publication Date
CN110287849A true CN110287849A (en) 2019-09-27
CN110287849B CN110287849B (en) 2022-01-07

Family

ID=68004845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910534572.0A Active CN110287849B (en) 2019-06-20 2019-06-20 Lightweight depth network image target detection method suitable for raspberry pi

Country Status (1)

Country Link
CN (1) CN110287849B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180068463A1 (en) * 2016-09-02 2018-03-08 Artomatix Ltd. Systems and Methods for Providing Convolutional Neural Network Based Image Synthesis Using Stable and Controllable Parametric Models, a Multiscale Synthesis Framework and Novel Network Architectures
CN108288075A (en) * 2018-02-02 2018-07-17 沈阳工业大学 A kind of lightweight small target detecting method improving SSD
CN108229442A (en) * 2018-02-07 2018-06-29 西南科技大学 Face fast and stable detection method in image sequence based on MS-KCF
CN109214406A (en) * 2018-05-16 2019-01-15 长沙理工大学 Based on D-MobileNet neural network image classification method
CN109344821A (en) * 2018-08-30 2019-02-15 西安电子科技大学 Small target detecting method based on Fusion Features and deep learning
CN109784298A (en) * 2019-01-28 2019-05-21 南京航空航天大学 A kind of outdoor on-fixed scene weather recognition methods based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨俊 (Yang Jun): "Research on Object Detection Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008562B (en) * 2019-10-31 2023-04-18 北京城建设计发展集团股份有限公司 Human-vehicle target detection method with feature map depth fusion
CN111008562A (en) * 2019-10-31 2020-04-14 北京城建设计发展集团股份有限公司 Human-vehicle target detection method with feature map depth fusion
CN111047630A (en) * 2019-11-13 2020-04-21 芯启源(上海)半导体科技有限公司 Neural network and target detection and depth prediction method based on neural network
CN111047630B (en) * 2019-11-13 2023-06-13 芯启源(上海)半导体科技有限公司 Neural network and target detection and depth prediction method based on neural network
CN110991305B (en) * 2019-11-27 2023-04-07 厦门大学 Airplane detection method under remote sensing image and storage medium
CN110991305A (en) * 2019-11-27 2020-04-10 厦门大学 Airplane detection method under remote sensing image and storage medium
CN111191508A (en) * 2019-11-28 2020-05-22 浙江省北大信息技术高等研究院 Face recognition method and device
CN111325784A (en) * 2019-11-29 2020-06-23 浙江省北大信息技术高等研究院 Unsupervised pose and depth calculation method and system
CN111199227A (en) * 2019-12-20 2020-05-26 广西柳州联耕科技有限公司 High-precision image identification method
CN111242122A (en) * 2020-01-07 2020-06-05 浙江大学 Lightweight deep neural network rotating target detection method and system
CN111242122B (en) * 2020-01-07 2023-09-08 浙江大学 Lightweight deep neural network rotating target detection method and system
CN111199220A (en) * 2020-01-21 2020-05-26 北方民族大学 Lightweight deep neural network method for people detection and people counting in elevator
CN111199220B (en) * 2020-01-21 2023-04-28 北方民族大学 Light-weight deep neural network method for personnel detection and personnel counting in elevator
CN111204452A (en) * 2020-02-10 2020-05-29 北京建筑大学 Target detection system based on miniature aircraft
CN111204452B (en) * 2020-02-10 2021-07-16 北京建筑大学 Target detection system based on miniature aircraft
CN111340141A (en) * 2020-04-20 2020-06-26 天津职业技术师范大学(中国职业培训指导教师进修中心) Crop seedling and weed detection method and system based on deep learning
US11593587B2 (en) * 2020-05-20 2023-02-28 Electronics And Telecommunications Research Institute Object detection system and an object detection method
US20210365724A1 (en) * 2020-05-20 2021-11-25 Electronics And Telecommunications Research Institute Object detection system and an object detection method
CN111666836B (en) * 2020-05-22 2023-05-02 北京工业大学 High-resolution remote sensing image target detection method of M-F-Y type light convolutional neural network
CN111666836A (en) * 2020-05-22 2020-09-15 北京工业大学 High-resolution remote sensing image target detection method of M-F-Y type lightweight convolutional neural network
CN112115970A (en) * 2020-08-12 2020-12-22 南京理工大学 Lightweight image detection agricultural bird repelling method and system based on hierarchical regression
CN112183203A (en) * 2020-08-26 2021-01-05 北京工业大学 Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN112183203B (en) * 2020-08-26 2024-05-28 北京工业大学 Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN112132001B (en) * 2020-09-18 2023-09-08 深圳大学 Automatic tracking and quality control method for iPSC and terminal equipment
CN112132001A (en) * 2020-09-18 2020-12-25 深圳大学 Automatic tracking and quality control method for iPSC and terminal equipment
CN112183291A (en) * 2020-09-22 2021-01-05 上海蜜度信息技术有限公司 Method and system for detecting tiny object in image, storage medium and terminal
CN112115914B (en) * 2020-09-28 2023-04-07 北京市商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112115914A (en) * 2020-09-28 2020-12-22 北京市商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112347936A (en) * 2020-11-07 2021-02-09 南京天通新创科技有限公司 Rapid target detection method based on depth separable convolution
CN112435236B (en) * 2020-11-23 2022-08-16 河北工业大学 Multi-stage strawberry fruit detection method
CN112435236A (en) * 2020-11-23 2021-03-02 河北工业大学 Multi-stage strawberry fruit detection method
WO2022121075A1 (en) * 2020-12-09 2022-06-16 中科视语(北京)科技有限公司 Positioning method, positioning apparatus and electronic device for human head and shoulders area
CN112507872A (en) * 2020-12-09 2021-03-16 中科视语(北京)科技有限公司 Positioning method and positioning device for head and shoulder area of human body and electronic equipment
CN113270156A (en) * 2021-04-29 2021-08-17 甘肃路桥建设集团有限公司 Detection modeling and detection method and system of machine-made sandstone powder based on image processing
CN113468992A (en) * 2021-06-21 2021-10-01 四川轻化工大学 Construction site safety helmet wearing detection method based on lightweight convolutional neural network
CN113420651A (en) * 2021-06-22 2021-09-21 四川九洲电器集团有限责任公司 Lightweight method and system of deep convolutional neural network and target detection method
CN113487551A (en) * 2021-06-30 2021-10-08 佛山市南海区广工大数控装备协同创新研究院 Gasket detection method and device for improving performance of dense target based on deep learning
CN113487551B (en) * 2021-06-30 2024-01-16 佛山市南海区广工大数控装备协同创新研究院 Gasket detection method and device for improving dense target performance based on deep learning
CN113642662A (en) * 2021-08-24 2021-11-12 凌云光技术股份有限公司 Lightweight classification model-based classification detection method and device
CN113642662B (en) * 2021-08-24 2024-02-20 凌云光技术股份有限公司 Classification detection method and device based on lightweight classification model
CN113971731A (en) * 2021-10-28 2022-01-25 燕山大学 Target detection method and device and electronic equipment
WO2023165024A1 (en) * 2022-03-01 2023-09-07 北京交通大学 Training method for binary target detection neural network structure and model
CN114462555A (en) * 2022-04-13 2022-05-10 国网江西省电力有限公司电力科学研究院 Multi-scale feature fusion power distribution network equipment identification method based on raspberry pi
US11631238B1 (en) 2022-04-13 2023-04-18 Iangxi Electric Power Research Institute Of State Grid Method for recognizing distribution network equipment based on raspberry pi multi-scale feature fusion
CN114462555B (en) * 2022-04-13 2022-08-16 国网江西省电力有限公司电力科学研究院 Multi-scale feature fusion power distribution network equipment identification method based on raspberry group
CN115719445A (en) * 2022-12-20 2023-02-28 齐鲁工业大学 Seafood identification method based on deep learning and raspberry type 4B module

Also Published As

Publication number Publication date
CN110287849B (en) 2022-01-07

Similar Documents

Publication Publication Date Title
CN110287849A (en) A kind of lightweight depth network image object detection method suitable for raspberry pie
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN111047551B (en) Remote sensing image change detection method and system based on U-net improved algorithm
CN108009518A (en) A kind of stratification traffic mark recognition methods based on quick two points of convolutional neural networks
CN110188705A (en) A kind of remote road traffic sign detection recognition methods suitable for onboard system
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN108334847A (en) A kind of face identification method based on deep learning under real scene
CN113313082B (en) Target detection method and system based on multitask loss function
CN110084165A (en) The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN108549866B (en) Remote sensing airplane identification method based on dense convolutional neural network
CN111476133B (en) Unmanned driving-oriented foreground and background codec network target extraction method
CN110490252A (en) A kind of occupancy detection method and system based on deep learning
CN113255837A (en) Improved CenterNet network-based target detection method in industrial environment
CN115512103A (en) Multi-scale fusion remote sensing image semantic segmentation method and system
CN112633257A (en) Potato disease identification method based on improved convolutional neural network
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN114155371A (en) Semantic segmentation method based on channel attention and pyramid convolution fusion
CN113298817A (en) High-accuracy semantic segmentation method for remote sensing image
CN112308087A (en) Integrated imaging identification system and method based on dynamic vision sensor
CN113486712B (en) Multi-face recognition method, system and medium based on deep learning
CN113610024B (en) Multi-strategy deep learning remote sensing image small target detection method
CN114821434A (en) Space-time enhanced video anomaly detection method based on optical flow constraint
CN109948628A (en) A kind of object detection method excavated based on identification region
CN109241932A (en) A kind of thermal infrared human motion recognition method based on movement variogram phase property

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231009

Address after: No. 202-126, 2nd Floor, Building 18, Zone 17, No. 188 South Fourth Ring West Road, Fengtai District, Beijing, 100070

Patentee after: China Industrial Internet (Beijing) Technology Group Co.,Ltd.

Address before: No. 100 Pingleyuan, Chaoyang District, Beijing 100124

Patentee before: Beijing University of Technology