CN111832630A - Target detection method based on first-order gradient neural network - Google Patents

Target detection method based on first-order gradient neural network

Info

Publication number
CN111832630A
Authority
CN
China
Prior art keywords
gradient
image
neural network
order gradient
order
Prior art date
Legal status
Pending
Application number
CN202010583423.6A
Other languages
Chinese (zh)
Inventor
王堃
王铭宇
吴晨
Current Assignee
Chengdu Star Innovation Technology Co ltd
Original Assignee
Chengdu Star Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Star Innovation Technology Co., Ltd.
Priority to CN202010583423.6A
Publication of CN111832630A
Legal status: Pending

Classifications

    • G06F18/253 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Fusion techniques of extracted features
    • G06N3/045 — Physics; Computing; Computing arrangements based on biological models; Neural networks; Architecture; Combinations of networks
    • G06N3/047 — Physics; Computing; Computing arrangements based on biological models; Neural networks; Architecture; Probabilistic or stochastic networks
    • G06N3/08 — Physics; Computing; Computing arrangements based on biological models; Neural networks; Learning methods
    • G06V2201/07 — Physics; Computing; Image or video recognition or understanding; Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target detection method based on a first-order gradient neural network, relating to the field of intelligent recognition. The method comprises the following steps: capture a scene image while the vehicle is driving; convert the image to grayscale and smooth it to reduce high-frequency noise; extract the gradient magnitude of the image; extract the gradient magnitude of the first-order gradient neural network through the first-order gradient neural network; fuse the gradient magnitude with the gradient magnitude of the first-order gradient neural network; pass the fused feature map sequentially through convolutional layer Conv1, eight first-order gradient neural network convolutional Fire modules, and convolutional layer Conv10, then into a softmax classifier, which outputs the target detection result. Applied to autonomous driving, the invention achieves lightweight target detection with low power consumption and high recognition accuracy, balancing speed and accuracy, so that autonomous driving, a technology closely tied to daily life, has a stronger safety guarantee.

Description

Target detection method based on first-order gradient neural network
Technical Field
The invention relates to the field of intelligent identification, in particular to a target detection method based on a first-order gradient neural network.
Background
With the development of intelligent recognition, autonomous driving has attracted increasing attention; as a technology closely tied to human safety, it requires target detection that is both fast and accurate.
Existing target detection algorithms for autonomous driving, such as SSD, YOLO and Fast R-CNN, cannot balance speed and accuracy, and each class of detector has its own shortcomings. During actual driving, delayed or inaccurate target detection poses a great risk to personal safety. It is therefore necessary to construct a target detection method that balances speed and accuracy, so that autonomous driving, a technology closely tied to daily life, has a stronger safety guarantee.
Disclosure of Invention
The invention aims to provide a target detection method based on a first-order gradient neural network that achieves lightweight target detection and balances speed and accuracy.
The technical scheme adopted by the invention is as follows:
The invention relates to a target detection method based on a first-order gradient neural network, comprising the following steps:
Step 1: capture a scene image while the vehicle is driving;
Step 2: convert the image from step 1 to grayscale and smooth it to reduce high-frequency noise (a preprocessing sketch follows this list);
Step 3: extract the gradient magnitude of the image from step 2;
Step 4: on the basis of step 3, extract the gradient magnitude of the first-order gradient neural network through the first-order gradient neural network;
Step 5: fuse the gradient magnitude from step 3 with the gradient magnitude of the first-order gradient neural network from step 4;
Step 6: pass the fused feature map sequentially through convolutional layer Conv1, eight first-order gradient neural network convolutional Fire modules, and convolutional layer Conv10, then into a softmax classifier, which outputs the target detection result.
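The patent provides no code; the following is a minimal Python sketch of the preprocessing in steps 1–2, assuming NumPy and SciPy are available (the function name, the BT.601 weights and the σ default are illustrative choices, not from the patent):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def preprocess(rgb_frame: np.ndarray, sigma: float = 1.0) -> np.ndarray:
        """Steps 1-2: grayscale conversion plus Gaussian smoothing of a captured frame."""
        # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
        gray = rgb_frame[..., :3].astype(np.float64) @ np.array([0.299, 0.587, 0.114])
        # Gaussian smoothing suppresses the high-frequency noise mentioned in step 2.
        return gaussian_filter(gray, sigma=sigma)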
Further, extracting the gradient magnitude in step 3 comprises the following sub-steps:
Step 31: define a Gaussian function with standard deviation σ, where exp() denotes the exponential function with base e (Equation 1):
G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))
Step 32: convolve the image processed in step 2 with the directional derivatives Gx and Gy of the Gaussian function to obtain the gradient vectors Fx and Fy of the image in the horizontal and vertical directions (Equation 2):
Fx = F ∗ Gx,  Fy = F ∗ Gy
In Equation 2, F denotes the image processed in step 2, Fx the gradient vector of the input image in the horizontal direction, Fy the gradient vector of the image in the vertical direction, Gx the first derivative of the Gaussian function in the horizontal direction, and Gy the first derivative of the Gaussian function in the vertical direction;
Step 33: in Equation 2 the image F can be regarded as a discrete function F(x, y) with a different pixel value at each point, and the image gradient is a difference operation on the discrete function F(x, y); that is, the directional gradients of the image at point (x, y) are
horizontal direction (Equation 3): Fx(x, y) = F(x + 1, y) − F(x, y)
vertical direction (Equation 4): Fy(x, y) = F(x, y + 1) − F(x, y)
Therefore the gradient magnitude at point (x, y) is (Equation 5):
Ga = √(Fx(x, y)² + Fy(x, y)²)
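Equations 1–5 amount to derivative-of-Gaussian filtering followed by a per-pixel magnitude. A minimal sketch under the same assumptions as above (SciPy's order=1 option stands in for the convolutions with Gx and Gy; names and the σ default are illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gradient_magnitude(f: np.ndarray, sigma: float = 1.0) -> np.ndarray:
        """Equations 1-5: gradient magnitude Ga of a grayscale image."""
        # order=1 along an axis convolves with the first derivative of the
        # Gaussian along that axis, i.e. Fx = F * Gx and Fy = F * Gy (Equation 2).
        fx = gaussian_filter(f, sigma=sigma, order=(0, 1))  # horizontal gradient Fx
        fy = gaussian_filter(f, sigma=sigma, order=(1, 0))  # vertical gradient Fy
        # Equation 5: Ga = sqrt(Fx^2 + Fy^2) at every pixel.
        return np.sqrt(fx ** 2 + fy ** 2)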
Further, extracting the gradient magnitude of the first-order gradient neural network in step 4 comprises the following sub-steps:
The image captured while the vehicle is driving is formed by light reflected from objects around the vehicle body, and the illumination reflection model is (Equation 6):
F(x, y) = R(x, y)L(x, y),
where F(x, y) denotes the discrete image function, R(x, y) the illumination reflectance, and L(x, y) the illumination value at point (x, y);
Since R(x, y) depends on the surface characteristics of the photographed object itself and is insensitive to illumination, for two adjacent pixels (x, y) and (x + Δx, y), the illumination reflection model of pixel (x + Δx, y) can be written as (Equation 7):
F(x + Δx, y) = R(x + Δx, y)L(x + Δx, y),
Assuming L(x, y) is approximately smooth, so that L(x + Δx, y) ≈ L(x, y), subtracting Equation 6 from Equation 7 gives (Equation 8):
F(x + Δx, y) − F(x, y) ≈ [R(x + Δx, y) − R(x, y)]L(x, y),
Taking partial derivatives of Equation 8 yields (Equation 9):
∂F(x, y)/∂x ≈ (∂R(x, y)/∂x)·L(x, y),
and likewise in the vertical direction, which gives:
(∂F/∂y)/(∂F/∂x) ≈ (∂R/∂y)/(∂R/∂x);
Since R(x, y) is a parameter insensitive to illumination, the ratio of the gradient in the y direction to the gradient in the x direction can serve as an illumination-insensitive quantity. Letting F be the image under illumination, the gradient magnitude G of the gradient neural network is (Equation 10):
G = arctan(gradient_y(F) / gradient_x(F)),
where arctan is the arctangent function and gradient denotes the gradient operator.
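Equation 10 reduces to a per-pixel arctangent of the two directional gradients, which cancels the illumination term L(x, y). A minimal sketch under the same assumptions as above (the eps guard against division by zero is our addition, not part of the patent):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def illumination_insensitive_gradient(f: np.ndarray, sigma: float = 1.0,
                                          eps: float = 1e-8) -> np.ndarray:
        """Equation 10: G = arctan(gradient_y(F) / gradient_x(F))."""
        fx = gaussian_filter(f, sigma=sigma, order=(0, 1))  # gradient in x
        fy = gaussian_filter(f, sigma=sigma, order=(1, 0))  # gradient in y
        # The ratio Fy/Fx cancels L(x, y) (Equation 9), so the result depends
        # mainly on the reflectance R(x, y) and is insensitive to illumination.
        return np.arctan(fy / (fx + eps))  # eps avoids division by zero (our addition)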
Further, let Ga denote the gradient magnitude obtained by processing the image in step 3, and Gb the gradient magnitude of the first-order gradient neural network obtained by processing the image in step 4. Step 5 performs feature fusion of the gradient magnitude Ga and the first-order gradient neural network gradient magnitude Gb to obtain the fused feature y, where
Ga ∈ R^(W×H×D), Gb ∈ R^(W×H×D), y ∈ R^(W×H×D), 1 < j < W, 1 < i < H, 1 < d < D,
and j denotes the width of the image, i denotes the height of the image, and d denotes the number of channels of the image.
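The fusion formula image did not survive extraction. Since Ga, Gb and the fused feature y all share the shape W×H×D, the fusion is element-wise over (i, j, d); the sketch below assumes a per-element sum, which is one common same-shape fusion and only an assumption here:

    import numpy as np

    def fuse_features(ga: np.ndarray, gb: np.ndarray) -> np.ndarray:
        """Step 5 (assumed form): element-wise fusion of Ga and Gb into y."""
        assert ga.shape == gb.shape, "Ga and Gb must share the W x H x D shape"
        # y(i, j, d) combines the two gradient features at each width, height
        # and channel index; the patent's exact operator is not recoverable,
        # so a sum is assumed here.
        return ga + gb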
Further, in step 6 the eight first-order gradient neural network convolutional Fire modules are Fire2, Fire3, Fire4, Fire5, Fire6, Fire7, Fire8 and Fire9, and a Conv 1×1 convolution kernel is added after the outputs of Fire4, Fire6 and Fire8.
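Step 6 describes a SqueezeNet-style backbone (Conv1, Fire2–Fire9, Conv10, softmax). A PyTorch sketch follows; the channel sizes are SqueezeNet defaults, the single input channel matches a fused grayscale feature map, and the placement and width of the Conv 1×1 kernels after Fire4, Fire6 and Fire8 are assumptions, since the patent does not give these details:

    import torch
    import torch.nn as nn

    class Fire(nn.Module):
        """SqueezeNet-style Fire module: 1x1 squeeze, then parallel 1x1/3x3 expands."""
        def __init__(self, in_ch: int, squeeze: int, expand: int):
            super().__init__()
            self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze, 1), nn.ReLU(inplace=True))
            self.expand1 = nn.Sequential(nn.Conv2d(squeeze, expand, 1), nn.ReLU(inplace=True))
            self.expand3 = nn.Sequential(nn.Conv2d(squeeze, expand, 3, padding=1),
                                         nn.ReLU(inplace=True))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            s = self.squeeze(x)
            return torch.cat([self.expand1(s), self.expand3(s)], dim=1)

    class FirstOrderGradientNet(nn.Module):
        """Step 6: Conv1 -> Fire2..Fire9 (Conv 1x1 after Fire4/6/8) -> Conv10 -> softmax."""
        def __init__(self, num_classes: int, in_ch: int = 1):
            super().__init__()
            self.conv1 = nn.Sequential(nn.Conv2d(in_ch, 96, 7, stride=2),
                                       nn.ReLU(inplace=True), nn.MaxPool2d(3, stride=2))
            self.body = nn.Sequential(
                Fire(96, 16, 64),        # Fire2 -> 128 channels
                Fire(128, 16, 64),       # Fire3 -> 128
                Fire(128, 32, 128),      # Fire4 -> 256
                nn.Conv2d(256, 256, 1),  # assumed Conv 1x1 after Fire4
                Fire(256, 32, 128),      # Fire5 -> 256
                Fire(256, 48, 192),      # Fire6 -> 384
                nn.Conv2d(384, 384, 1),  # assumed Conv 1x1 after Fire6
                Fire(384, 48, 192),      # Fire7 -> 384
                Fire(384, 64, 256),      # Fire8 -> 512
                nn.Conv2d(512, 512, 1),  # assumed Conv 1x1 after Fire8
                Fire(512, 64, 256))      # Fire9 -> 512
            self.conv10 = nn.Conv2d(512, num_classes, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.conv10(self.body(self.conv1(x)))
            x = torch.flatten(torch.nn.functional.adaptive_avg_pool2d(x, 1), 1)
            return torch.softmax(x, dim=1)  # class scores, as in step 6

For example, FirstOrderGradientNet(num_classes=10)(torch.randn(1, 1, 224, 224)) returns a 1×10 score vector.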
In summary, by adopting the above technical scheme, the invention has the following beneficial effects:
1. The target detection method based on a first-order gradient neural network computes, for the image after smoothing and grayscale processing, both the gradient magnitude and the gradient magnitude of the first-order gradient neural network, then performs first-order gradient neural network image feature fusion, thereby realizing target detection with a first-order gradient neural network. The method centers on computing the first-order gradient magnitude, which has low computational complexity, so target detection runs with low power consumption; it achieves lightweight target detection with high recognition accuracy, balancing speed and accuracy, while the gradient features ensure that targets under different illumination can be detected and recognized. Autonomous driving, a technology closely tied to daily life, thus gains a safety guarantee.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. The following drawings illustrate only some embodiments of the invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without creative effort. The proportions of the components in the drawings do not represent the proportions of an actual design; the figures are only schematic diagrams of structure or position:
FIG. 1 is a block diagram of the target detection method based on a first-order gradient neural network according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
The present invention will be described in detail with reference to the accompanying drawings.
Example one
As shown in FIG. 1, the present invention is a target detection method based on a first-order gradient neural network, comprising the following steps:
Step 1: capture a scene image while the vehicle is driving;
Step 2: convert the image from step 1 to grayscale and smooth it to reduce high-frequency noise;
Step 3: extract the gradient magnitude of the image from step 2;
Step 4: on the basis of step 3, extract the gradient magnitude of the first-order gradient neural network through the first-order gradient neural network;
Step 5: fuse the gradient magnitude from step 3 with the gradient magnitude of the first-order gradient neural network from step 4;
Step 6: pass the fused feature map sequentially through convolutional layer Conv1, eight first-order gradient neural network convolutional Fire modules, and convolutional layer Conv10, then into a softmax classifier, which outputs the target detection result.
Further, extracting the gradient magnitude in step 3 comprises the following sub-steps:
Step 31: define a Gaussian function with standard deviation σ, where exp() denotes the exponential function with base e (Equation 1):
G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))
Step 32: convolve the image processed in step 2 with the directional derivatives Gx and Gy of the Gaussian function to obtain the gradient vectors Fx and Fy of the image in the horizontal and vertical directions (Equation 2):
Fx = F ∗ Gx,  Fy = F ∗ Gy
In Equation 2, F denotes the image processed in step 2, Fx the gradient vector of the input image in the horizontal direction, Fy the gradient vector of the image in the vertical direction, Gx the first derivative of the Gaussian function in the horizontal direction, and Gy the first derivative of the Gaussian function in the vertical direction;
Step 33: in Equation 2 the image F can be regarded as a discrete function F(x, y) with a different pixel value at each point, and the image gradient is a difference operation on the discrete function F(x, y); that is, the directional gradients of the image at point (x, y) are
horizontal direction (Equation 3): Fx(x, y) = F(x + 1, y) − F(x, y)
vertical direction (Equation 4): Fy(x, y) = F(x, y + 1) − F(x, y)
Therefore the gradient magnitude at point (x, y) is (Equation 5):
Ga = √(Fx(x, y)² + Fy(x, y)²)
Further, extracting the gradient magnitude of the first-order gradient neural network in step 4 comprises the following sub-steps:
The image captured while the vehicle is driving is formed by light reflected from objects around the vehicle body, and the illumination reflection model is (Equation 6):
F(x, y) = R(x, y)L(x, y),
where F(x, y) denotes the discrete image function, R(x, y) the illumination reflectance, and L(x, y) the illumination value at point (x, y);
Since R(x, y) depends on the surface characteristics of the photographed object itself and is insensitive to illumination, for two adjacent pixels (x, y) and (x + Δx, y), the illumination reflection model of pixel (x + Δx, y) can be written as (Equation 7):
F(x + Δx, y) = R(x + Δx, y)L(x + Δx, y),
Assuming L(x, y) is approximately smooth, so that L(x + Δx, y) ≈ L(x, y), subtracting Equation 6 from Equation 7 gives (Equation 8):
F(x + Δx, y) − F(x, y) ≈ [R(x + Δx, y) − R(x, y)]L(x, y),
Taking partial derivatives of Equation 8 yields (Equation 9):
∂F(x, y)/∂x ≈ (∂R(x, y)/∂x)·L(x, y),
and likewise in the vertical direction, which gives:
(∂F/∂y)/(∂F/∂x) ≈ (∂R/∂y)/(∂R/∂x);
Since R(x, y) is a parameter insensitive to illumination, the ratio of the gradient in the y direction to the gradient in the x direction can serve as an illumination-insensitive quantity. Letting F be the image under illumination, the gradient magnitude G of the gradient neural network is (Equation 10):
G = arctan(gradient_y(F) / gradient_x(F)),
where arctan is the arctangent function and gradient denotes the gradient operator.
Further, let Ga denote the gradient magnitude obtained by processing the image in step 3, and Gb the gradient magnitude of the first-order gradient neural network obtained by processing the image in step 4. Step 5 performs feature fusion of the gradient magnitude Ga and the first-order gradient neural network gradient magnitude Gb to obtain the fused feature y, where
Ga ∈ R^(W×H×D), Gb ∈ R^(W×H×D), y ∈ R^(W×H×D), 1 < j < W, 1 < i < H, 1 < d < D,
and j denotes the width of the image, i denotes the height of the image, and d denotes the number of channels of the image.
Further, in step 6 the eight first-order gradient neural network convolutional Fire modules are Fire2, Fire3, Fire4, Fire5, Fire6, Fire7, Fire8 and Fire9, and a Conv 1×1 convolution kernel is added after the outputs of Fire4, Fire6 and Fire8.
In summary, the present invention computes, for the image after smoothing and grayscale processing, both the gradient magnitude and the gradient magnitude of the first-order gradient neural network, then performs first-order gradient neural network image feature fusion, thereby realizing target detection with a first-order gradient neural network. The method centers on computing the first-order gradient magnitude, which has low computational complexity, so target detection runs with low power consumption; it achieves lightweight target detection with high recognition accuracy, balancing speed and accuracy, while the gradient features ensure that targets under different illumination can be detected and recognized. Autonomous driving, a technology closely tied to daily life, thus gains a safety guarantee.
The above description is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be made by those skilled in the art without inventive work within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope defined by the claims.

Claims (5)

1. A target detection method based on a first-order gradient neural network, characterized by comprising the following steps:
Step 1: capture a scene image while the vehicle is driving;
Step 2: convert the image from step 1 to grayscale and smooth it to reduce high-frequency noise;
Step 3: extract the gradient magnitude of the image from step 2;
Step 4: on the basis of step 3, extract the gradient magnitude of the first-order gradient neural network through the first-order gradient neural network;
Step 5: fuse the gradient magnitude from step 3 with the gradient magnitude of the first-order gradient neural network from step 4;
Step 6: pass the fused feature map sequentially through convolutional layer Conv1, eight first-order gradient neural network convolutional Fire modules, and convolutional layer Conv10, then into a softmax classifier, which outputs the target detection result.
2. The target detection method based on a first-order gradient neural network as claimed in claim 1, wherein extracting the gradient magnitude in step 3 comprises the following sub-steps:
Step 31: define a Gaussian function with standard deviation σ, where exp() denotes the exponential function with base e (Equation 1):
G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))
Step 32: convolve the image processed in step 2 with the directional derivatives Gx and Gy of the Gaussian function to obtain the gradient vectors Fx and Fy of the image in the horizontal and vertical directions (Equation 2):
Fx = F ∗ Gx,  Fy = F ∗ Gy
In Equation 2, F denotes the image processed in step 2, Fx the gradient vector of the input image in the horizontal direction, Fy the gradient vector of the image in the vertical direction, Gx the first derivative of the Gaussian function in the horizontal direction, and Gy the first derivative of the Gaussian function in the vertical direction;
Step 33: in Equation 2 the image F can be regarded as a discrete function F(x, y) with a different pixel value at each point, and the image gradient is a difference operation on the discrete function F(x, y); that is, the directional gradients of the image at point (x, y) are
horizontal direction (Equation 3): Fx(x, y) = F(x + 1, y) − F(x, y)
vertical direction (Equation 4): Fy(x, y) = F(x, y + 1) − F(x, y)
Therefore the gradient magnitude at point (x, y) is (Equation 5):
Ga = √(Fx(x, y)² + Fy(x, y)²)
3. The target detection method based on a first-order gradient neural network as claimed in claim 2, wherein extracting the gradient magnitude of the first-order gradient neural network in step 4 comprises the following sub-steps:
The image captured while the vehicle is driving is formed by light reflected from objects around the vehicle body, and the illumination reflection model is (Equation 6):
F(x, y) = R(x, y)L(x, y),
where F(x, y) denotes the discrete image function, R(x, y) the illumination reflectance, and L(x, y) the illumination value at point (x, y);
Since R(x, y) depends on the surface characteristics of the photographed object itself and is insensitive to illumination, for two adjacent pixels (x, y) and (x + Δx, y), the illumination reflection model of pixel (x + Δx, y) can be written as (Equation 7):
F(x + Δx, y) = R(x + Δx, y)L(x + Δx, y),
Assuming L(x, y) is approximately smooth, so that L(x + Δx, y) ≈ L(x, y), subtracting Equation 6 from Equation 7 gives (Equation 8):
F(x + Δx, y) − F(x, y) ≈ [R(x + Δx, y) − R(x, y)]L(x, y),
Taking partial derivatives of Equation 8 yields (Equation 9):
∂F(x, y)/∂x ≈ (∂R(x, y)/∂x)·L(x, y),
and likewise in the vertical direction, which gives:
(∂F/∂y)/(∂F/∂x) ≈ (∂R/∂y)/(∂R/∂x);
Since R(x, y) is a parameter insensitive to illumination, the ratio of the gradient in the y direction to the gradient in the x direction can serve as an illumination-insensitive quantity. Letting F be the image under illumination, the gradient magnitude G of the gradient neural network is (Equation 10):
G = arctan(gradient_y(F) / gradient_x(F)),
where arctan is the arctangent function and gradient denotes the gradient operator.
4. The target detection method based on a first-order gradient neural network as claimed in claim 3, wherein:
Ga denotes the gradient magnitude obtained by processing the image in step 3, and Gb the gradient magnitude of the first-order gradient neural network obtained by processing the image in step 4; step 5 performs feature fusion of the gradient magnitude Ga and the first-order gradient neural network gradient magnitude Gb to obtain the fused feature y, where
Ga ∈ R^(W×H×D), Gb ∈ R^(W×H×D), y ∈ R^(W×H×D), 1 < j < W, 1 < i < H, 1 < d < D,
and j denotes the width of the image, i denotes the height of the image, and d denotes the number of channels of the image.
5. The target detection method based on a first-order gradient neural network as claimed in claim 1, wherein in step 6 the eight first-order gradient neural network convolutional Fire modules are Fire2, Fire3, Fire4, Fire5, Fire6, Fire7, Fire8 and Fire9, and a Conv 1×1 convolution kernel is added after the outputs of Fire4, Fire6 and Fire8.
CN202010583423.6A (priority date 2020-06-23, filing date 2020-06-23) — Target detection method based on first-order gradient neural network — Pending — CN111832630A (en)

Priority Applications (1)

Application Number: CN202010583423.6A
Publication: CN111832630A (en)
Title: Target detection method based on first-order gradient neural network


Publications (1)

Publication Number: CN111832630A (en)
Publication Date: 2020-10-27

Family

ID=72898147

Family Applications (1)

Application Number: CN202010583423.6A (Pending)
Publication: CN111832630A (en)
Title: Target detection method based on first-order gradient neural network

Country Status (1)

Country Link
CN (1) CN111832630A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872477A (en) * 2009-04-24 2010-10-27 索尼株式会社 Method and device for detecting object in image and system containing device
CN107274416A (en) * 2017-06-13 2017-10-20 西北工业大学 High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
US20210174149A1 (en) * 2018-11-20 2021-06-10 Xidian University Feature fusion and dense connection-based method for infrared plane object detection
CN111259800A (en) * 2020-01-16 2020-06-09 天津大学 Neural network-based unmanned special vehicle detection method
CN111199220A (en) * 2020-01-21 2020-05-26 北方民族大学 Lightweight deep neural network method for people detection and people counting in elevator

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘祥楼 et al.: "Face recognition with a lightweight neural network fusing gradient features", 《激光与电子学进展》 (Advances in Laser and Electronics) *
王正友 et al.: "Target detection for railway drivers based on local features", 《中国科技论文》 (China Science Paper) *

Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
JP3151284B2 (en) Apparatus and method for salient pole contour grading extraction for sign recognition
CN114897864B (en) Workpiece detection and defect judgment method based on digital-analog information
CN111768392B (en) Target detection method and device, electronic equipment and storage medium
CN109034184B (en) Grading ring detection and identification method based on deep learning
CN111652852A (en) Method, device and equipment for detecting surface defects of product
CN107220962B (en) Image detection method and device for tunnel cracks
CN112288758B (en) Infrared and visible light image registration method for power equipment
CN110660048B (en) Leather surface defect detection method based on shape characteristics
CN113139549B (en) Parameter self-adaptive panoramic segmentation method based on multitask learning
CN115496976A (en) Visual processing method, device, equipment and medium for multi-source heterogeneous data fusion
CN111814852A (en) Image detection method, image detection device, electronic equipment and computer-readable storage medium
US8126275B2 (en) Interest point detection
CN108710881B (en) Neural network model, candidate target area generation method and model training method
CN108133226B (en) Three-dimensional point cloud feature extraction method based on HARRIS improvement
CN117636045A (en) Wood defect detection system based on image processing
CN117227247A (en) Intelligent positioning control method for carton processing
CN111415378B (en) Image registration method for automobile glass detection and automobile glass detection method
CN111832630A (en) Target detection method based on first-order gradient neural network
JP3251840B2 (en) Image recognition device
CN116205894A (en) Bearing roller defect detection method based on multi-information fusion
Akman et al. Computing saliency map from spatial information in point cloud data
CN115861259A (en) Lead frame surface defect detection method and device based on template matching
JP3253752B2 (en) Pattern evaluation method
CN113870342A (en) Appearance defect detection method, intelligent terminal and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination