CN110991374B - Fingerprint singular point detection method based on RCNN - Google Patents


Info

Publication number
CN110991374B
CN110991374B CN201911255304.1A CN201911255304A
Authority
CN
China
Prior art keywords
network
fingerprint
rcnn
singular point
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911255304.1A
Other languages
Chinese (zh)
Other versions
CN110991374A (en)
Inventor
漆进
王菁怡
杨轶涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201911255304.1A priority Critical patent/CN110991374B/en
Publication of CN110991374A publication Critical patent/CN110991374A/en
Application granted granted Critical
Publication of CN110991374B publication Critical patent/CN110991374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fingerprint singular point detection method based on RCNN, comprising the following steps: constructing a data set, enhancing the fingerprint image, segmenting the fingerprint image, detecting the singular points of the fingerprint image, and checking accuracy. Compared with traditional fingerprint singular point detection methods, the method innovatively applies convolution within an RCNN framework for detection, achieving high detection speed, high accuracy, and high efficiency. The image enhancement stage reduces the requirement on fingerprint image quality, and the use of a block-wise network simplifies the pipeline by omitting the data augmentation step required by prior processing methods.

Description

Fingerprint singular point detection method based on RCNN
Technical Field
The invention relates to an image singular point detection method, in particular to a fingerprint singular point detection method, belonging to the field of computer vision and deep learning.
Background
Because of their uniqueness, fingerprint images are now widely used as identification labels in access control, criminal investigation, and other applications; the owner of a fingerprint image can be determined by checking its consistency with images in a database. Singular points serve as essential global features and distinctive landmarks of a fingerprint image; they are invariant to rotation, deformation, and similar transformations, making them suitable for fingerprint retrieval, fingerprint classification, and other fingerprint identification scenarios.
The Poincaré index is widely applied to fingerprint singular point detection, but methods based on it are generally susceptible to image noise, perform poorly on low-quality fingerprint images, and tend to incur a large computational cost. Most existing singular point detection methods are improvements built on the Poincaré index. Combining the Poincaré index with a multi-scale detection algorithm, for instance, restricts the singular point computation to candidate regions and effectively improves detection speed, but the detection accuracy remains unsatisfactory; likewise, the performance of the zero-pole model combined with the Hough transform is limited by the accuracy of the Poincaré index.
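For background, the classical Poincaré index computation can be sketched as follows: sample the ridge orientation field counter-clockwise along a closed curve around a candidate point, sum the orientation differences (wrapped into (−π/2, π/2], since ridge orientations are only defined modulo π), and divide by 2π; a core point yields 1/2 and a delta point −1/2. This is a minimal illustrative sketch of the traditional technique the patent is contrasting against, not the patent's own method:

```python
import numpy as np

def poincare_index(orients):
    """Poincare index from ridge orientations (radians, defined mod pi)
    sampled counter-clockwise along a closed curve around a point."""
    total = 0.0
    n = len(orients)
    for k in range(n):
        d = orients[(k + 1) % n] - orients[k]
        # orientations are equivalent mod pi, so wrap each difference
        # into the interval (-pi/2, pi/2]
        while d > np.pi / 2:
            d -= np.pi
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)
```

For a synthetic core-like field (orientation equal to half the polar angle of the sample point) the index comes out to 1/2, for the mirrored delta-like field −1/2, and for a constant field 0.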
Deep convolutional neural networks have driven progress in many advanced computer vision directions and are widely used in biometric pattern recognition, video recognition, and related fields with good results. The RCNN family is highly effective for object detection: it uses a high-capacity convolutional neural network to process bottom-up region proposals for object localization and segmentation. When labeled training data are scarce, RCNN can fine-tune from pre-trained parameters, which markedly improves recognition performance. By combining supervised pre-training on a large sample set with fine-tuning on a small one, RCNN effectively addresses the difficulty of training on small samples and the resulting risk of overfitting.
Summary of the invention
In view of the defects of conventional methods, the invention provides a fingerprint singular point detection method based on RCNN, whose implementation process is shown in Fig. 1. Its purpose is to extract the fingerprint singular points shown in Fig. 2b from a fingerprint image such as Fig. 2a more efficiently and accurately, while reducing the requirement on the quality of the sample fingerprint image.
In order to achieve the purpose, the invention carries out the following steps after the computer reads the original fingerprint image:
step one, constructing a data set: acquire 256 × 320 original fingerprint grayscale images containing noise, manually enhance the images, label the ground truth, normalize the images, and divide them into a training set and a test set at a ratio of 8:2;
step two, image enhancement: construct an encoder-decoder convolutional neural network for image enhancement, composed of an encoding network module and a decoding network module. Train the image enhancement network on the original data set, and save the 256 × 320 fingerprint images predicted by the network as the input of step three;
step three, image segmentation: divide the enhanced fingerprint image into regions of size 41 × 41 along a grid, manually label the category of each region, represent the categories with a matrix as the ground truth, and set a probability threshold for screening the classification results. Train a Res-net classifier on the enhanced image data set. For the output of each region, retain the regions scoring above the probability threshold for singular point coordinate detection;
step four, singular point detection: take the region images containing singular points from step three as input and the normalized fingerprint coordinates as output, and train an FCN; in essence this performs regression on the proposed regions of interest;
step five, accuracy calculation: extract the prediction results of the FCN in step four, compare them with the true values, and calculate the prediction accuracy of the method. Using the Euclidean distance between the predicted point and the true point as the criterion, a point whose distance is below the threshold is regarded as successfully detected.
For step one, artificial image enhancement means applying image processing techniques such as filtering and noise reduction; labeling the ground truth means manually marking the positions of the singular points, reading their coordinates, and saving them as a csv file. Image normalization means dividing the gray value of every pixel by 255 so that all values lie in the range [0, 1].
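The normalization of step one is a one-line operation; the sketch below also includes an illustrative csv layout for the ground-truth coordinates, which is an assumption since the patent does not specify the column format:

```python
import csv
import numpy as np

def normalize_image(gray):
    """Divide every 8-bit gray value by 255 so pixels lie in [0, 1]."""
    return gray.astype(np.float32) / 255.0

def save_ground_truth(path, points):
    """Save manually labeled singular-point coordinates to a csv file.
    The (x, y, type) column layout here is an assumption."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "type"])
        writer.writerows(points)
```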
For step two, the image enhancement network consists of an encoder network and a decoder network. The encoder network is composed of two modules, each consisting of two identical convolutional layers (3 × 3 kernels, 16 and 64 channels in turn, stride 1) and one max pooling layer (window size 2 × 2). The decoder network consists of two modules, each comprising one upsampling layer (window size 2 × 2) and two convolutional layers (3 × 3 kernels, 64 and 16 channels in turn, stride 1), followed finally by one convolutional layer with a 1 × 1 kernel. During training, the mean squared error serves as the loss function, and parameters are optimized with stochastic gradient descent.
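The spatial sizes through this enhancement network can be traced with simple shape arithmetic. The sketch below assumes "same" padding on the 3 × 3 convolutions (the patent does not state the padding), so only the pooling and upsampling stages change the size and a 256 × 320 input is restored to 256 × 320 at the output:

```python
def conv3x3_same(h, w):
    """3 x 3 convolution, stride 1, 'same' padding: size unchanged
    (the padding choice is an assumption; the patent omits it)."""
    return h, w

def pool2x2(h, w):
    """2 x 2 max pooling: halves each spatial dimension."""
    return h // 2, w // 2

def upsample2x2(h, w):
    """2 x 2 upsampling: doubles each spatial dimension."""
    return h * 2, w * 2

def encoder_decoder_shape(h, w):
    """Trace the spatial size through the step-two network: two
    (conv, conv, pool) encoder modules, two (upsample, conv, conv)
    decoder modules, then a final 1 x 1 convolution."""
    for _ in range(2):                        # encoder
        h, w = conv3x3_same(*conv3x3_same(h, w))
        h, w = pool2x2(h, w)
    for _ in range(2):                        # decoder
        h, w = upsample2x2(h, w)
        h, w = conv3x3_same(*conv3x3_same(h, w))
    return h, w                               # 1 x 1 conv keeps the size
```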
For step three, the matrix labeling each region's categories is:

C = [c₁, c₂, c₃]

where C is the class matrix, cᵢ ∈ {0, 1}, i = 1, 2, 3; c₁ indicates whether the region contains a singular point, c₂ whether it contains a core point, and c₃ whether it contains a delta (triangle) point. The probability threshold depends on the particular data set and is generally slightly less than the maximum of the prediction probability. The specific structure of the Res-net in this step is: a convolutional layer with 5 × 5 kernel and 16 channels; a downsampling layer with 2 × 2 window and 16 channels; a convolutional layer with 5 × 5 kernel and 32 channels, with a residual connection extending to the next convolutional layer; a downsampling layer with 2 × 2 window and 64 channels; a convolutional layer with 5 × 5 kernel and 64 channels; and a fully connected layer. The training parameters of this network are set as in the network of step two.
For step four, the training set consists of the 41 × 41 region grayscale pictures that exceeded the probability threshold in step three; the singular point coordinates within each region must be computed from the coordinates labeled on the original image and then normalized, as follows:
xᵢ′ = xᵢ mod 41 (the offset of the point within its grid region)

x̄ᵢ = xᵢ′ / 41

where xᵢ is the original coordinate, xᵢ′ is the coordinate of the singular point within the region grayscale picture, x̄ᵢ is the normalized coordinate value, i = 1, 2, …, n, and n is the number of singular points of the fingerprint picture in the data set. The FCN of this step consists of four similar modules, each composed of two convolutional layers (3 × 3 kernels; 16, 64, 128, and 256 channels in turn) and one max pooling layer (window size 2 × 2); the number of fully connected layers is 2, with 256 and 2 nodes respectively. In this network, regression is performed with stochastic gradient descent, and the network learns by back-propagating the mean squared error. Because the input pictures are small, the CNN of this step learns effectively, so the output predictions have high accuracy.
For step five, the Euclidean distance used is:

d = √((p_x − g_x)² + (p_y − g_y)²)

where p_x, p_y and g_x, g_y are the horizontal and vertical coordinates of the predicted point and of the true singular point, respectively; a detection is successful when d is below the threshold.
For step five, the threshold is determined by the picture size, generally about one tenth of the image dimension; given the picture size here, the threshold is set to 20 pixels.
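The accuracy check of step five follows directly from the distance criterion; a minimal NumPy sketch using the 20-pixel threshold given above:

```python
import numpy as np

def detection_accuracy(pred, true, threshold=20.0):
    """pred, true: (n, 2) arrays of predicted and ground-truth
    singular-point coordinates; a detection succeeds when the
    Euclidean distance to the true point is below the threshold."""
    d = np.sqrt(((pred - true) ** 2).sum(axis=1))
    return float((d < threshold).mean())
```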
Description of the drawings:
FIG. 1 is a flow chart of one embodiment of the present invention.
FIGS. 2a and 2b show the original image and the detection result for the embodiment of FIG. 1.
The specific implementation process comprises the following steps:
the RCNN-based fingerprint singular point detection method is further described below with reference to a flowchart and an embodiment.
The whole method comprises the following five steps: constructing a data set, enhancing the fingerprint image, segmenting the fingerprint image, detecting singular point coordinates, and checking accuracy.
Step one: acquire 256 × 320 original fingerprint grayscale images containing noise, manually enhance the images, label the ground truth, normalize the images, and divide them into a training set and a test set at a ratio of 8:2;
Step two: construct an encoder-decoder convolutional neural network for image enhancement, composed of an encoding network module and a decoding network module. Train the image enhancement network on the original data set, and save the 256 × 320 fingerprint images predicted by the network as the input of step three;
Step three: divide the enhanced fingerprint image into regions of size 41 × 41 along a grid, manually label the category of each region, represent the categories with a matrix, and set a probability threshold for screening the classification results. Train a Res-net classifier on the enhanced image data set, and retain the regions scoring above the probability threshold for singular point coordinate detection;
Step four: take the region images containing singular points from step three as input and the normalized fingerprint coordinates as output, and train an FCN;
Step five: extract the prediction results of the FCN in step four, compare them with the true values, and calculate the prediction accuracy of the method. Using the Euclidean distance between the predicted point and the true point as the criterion, a point whose distance is below the threshold is regarded as successfully detected.
For step one, artificial image enhancement means applying image processing techniques such as filtering and noise reduction; labeling the ground truth means manually marking the positions of the singular points, reading their coordinates, and saving them as a csv file. Image normalization means dividing the gray value of every pixel by 255 so that all values lie in the range [0, 1].
For step two, the image enhancement network consists of an encoder network and a decoder network. The encoder network is composed of two modules, each consisting of two identical convolutional layers (3 × 3 kernels, 16 and 64 channels in turn, stride 1) and one max pooling layer (window size 2 × 2). The decoder network consists of two modules, each comprising one upsampling layer (window size 2 × 2) and two convolutional layers (3 × 3 kernels, 64 and 16 channels in turn, stride 1), followed finally by one convolutional layer with a 1 × 1 kernel. During training, the mean squared error serves as the loss function, and parameters are optimized with stochastic gradient descent.
For step three, the matrix labeling each region's categories is:

C = [c₁, c₂, c₃]

where C is the class matrix, cᵢ ∈ {0, 1}, i = 1, 2, 3; c₁ indicates whether the region contains a singular point, c₂ whether it contains a core point, and c₃ whether it contains a delta (triangle) point. The probability threshold depends on the particular data set and is generally slightly less than the maximum of the prediction probability. The specific structure of the Res-net in this step is: a convolutional layer with 5 × 5 kernel and 16 channels; a downsampling layer with 2 × 2 window and 16 channels; a convolutional layer with 5 × 5 kernel and 32 channels, with a residual connection extending to the next convolutional layer; a downsampling layer with 2 × 2 window and 64 channels; a convolutional layer with 5 × 5 kernel and 64 channels; and a fully connected layer. The training parameters of this network are set as in the network of step two.
For step four, the training set consists of the 41 × 41 region grayscale pictures that exceeded the probability threshold in step three; the singular point coordinates within each region must be computed from the coordinates labeled on the original image and then normalized, as follows:
xᵢ′ = xᵢ mod 41 (the offset of the point within its grid region)

x̄ᵢ = xᵢ′ / 41

where xᵢ is the original coordinate, xᵢ′ is the coordinate of the singular point within the region grayscale picture, x̄ᵢ is the normalized coordinate value, i = 1, 2, …, n, and n is the number of singular points of the fingerprint picture in the data set. The FCN of this step consists of four similar modules, each composed of two convolutional layers (3 × 3 kernels; 16, 64, 128, and 256 channels in turn) and one max pooling layer (window size 2 × 2); the number of fully connected layers is 2, with 256 and 2 nodes respectively. In this network, regression is performed with stochastic gradient descent, and the network learns by back-propagating the mean squared error. Because the input pictures are small, the CNN of this step learns effectively, so the output predictions have high accuracy.
For step five, the Euclidean distance used is:

d = √((p_x − g_x)² + (p_y − g_y)²)

where p_x, p_y and g_x, g_y are the horizontal and vertical coordinates of the predicted point and of the true singular point, respectively; a detection is successful when d is below the threshold.
For step five, the threshold is determined by the picture size, generally about one tenth of the image dimension; given the picture size here, the threshold is set to 20 pixels.
The fingerprint singular point detection method based on RCNN provided by the invention achieves high detection speed, high accuracy, and high efficiency on the strength of the RCNN framework; the image enhancement stage reduces the requirement on fingerprint image quality, and the block-wise network removes the need for a data augmentation step, simplifying the training process.
The method provided by the invention has been described in detail above, and its principle and implementation have been explained through specific examples; the description of the examples is intended only to aid understanding of the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application; the content of this specification should therefore not be construed as limiting the present invention.

Claims (9)

1. A fingerprint singular point detection method based on RCNN is characterized by comprising the following steps:
(1) Constructing a data set after the computer reads the original fingerprint image: acquire 256 × 320 original fingerprint grayscale images containing noise, first perform artificial image enhancement and label the ground truth, then normalize the images, and divide them into a training set and a test set at a ratio of 8:2;
(2) Construct an encoder-decoder convolutional neural network for image enhancement, composed of an encoding network module and a decoding network module; train the image enhancement network on the original data set, and save the 256 × 320 fingerprint images predicted by the network as the input of step (3);
(3) Divide the enhanced fingerprint image into a number of 41 × 41 regions along a grid, manually label the category of each region, represent the categories with a matrix as the ground truth, and set a probability threshold for screening the classification results; train a Res-net classifier on the enhanced image data set, and for the output of each region retain the regions above the probability threshold for step (4);
(4) Take the region images containing singular points from step (3) as input and the normalized fingerprint coordinates as output, and train an FCN, which performs regression on the proposed regions of interest;
(5) Extract the prediction results of the FCN in step (4), compare them with the true values, and calculate the prediction accuracy of the method; using the Euclidean distance between the predicted point and the true point as the criterion, a point whose distance is below the threshold is regarded as successfully detected.
2. The RCNN-based fingerprint singular point detection method of claim 1, wherein: in step (1), artificial image enhancement means applying image processing techniques such as filtering and noise reduction; labeling the ground truth means manually marking the positions of the singular points, reading their coordinates, and saving them as a csv file; and normalization means dividing the gray value of every pixel by 255 so that the values lie in the range [0, 1].
3. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the image enhancement network in step (2) consists of an encoder network and a decoder network; the encoder network is composed of two convolutional layers and a pooled downsampling layer, where the kernels of the two convolutional layers are 3 × 3 with 16 and 64 channels in turn and stride 1, and the window size of the max pooling downsampling layer is 2 × 2; the decoder network is composed of a pooled upsampling layer and two convolutional layers, where the window size of the upsampling layer is 2 × 2 and the kernels of the two convolutional layers are 3 × 3 with 64 and 16 channels in turn and stride 1; finally the data pass through a convolutional layer with a 1 × 1 kernel; the mean squared error is used as the loss function during training, and parameters are optimized with stochastic gradient descent.
4. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the matrix labeling each region's categories in step (3) is:

C = [c₁, c₂, c₃]

where C is the class matrix, cᵢ ∈ {0, 1}, i = 1, 2, 3; c₁ indicates whether the region contains a singular point, c₂ whether it contains a core point, and c₃ whether it contains a delta (triangle) point; the probability threshold is determined by the specific data set and is set to 0.95 times the maximum of the prediction probability.
5. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the specific structure of the Res-net in step (3) is: a convolutional layer with 5 × 5 kernel and 16 channels; a downsampling layer with 2 × 2 window and 16 channels; a convolutional layer with 5 × 5 kernel and 32 channels, with a residual connection extending to the next convolutional layer; a downsampling layer with 2 × 2 window and 64 channels; finally a convolutional layer with 5 × 5 kernel and 64 channels and a fully connected layer; the training parameters of this network are set the same as the network in step (2).
6. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the training set in step (4) consists of the 41 × 41 pixel region grayscale pictures that exceeded the probability threshold in step (3), and the singular point coordinates within each region are computed from the coordinates labeled on the original image and normalized as follows:

xᵢ′ = xᵢ mod 41

x̄ᵢ = xᵢ′ / 41

where xᵢ is the original coordinate, xᵢ′ is the coordinate of the singular point within the region grayscale picture, x̄ᵢ is the normalized coordinate value, i = 1, 2, …, n, and n is the number of singular points of the fingerprint picture in the data set.
7. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the FCN in step (4) consists of four similar modules, each composed of two convolutional layers and a pooled downsampling layer, where the convolution kernels are all 3 × 3, the channel counts of the four modules are 16, 64, 128, and 256 in turn, and the window size of the pooled downsampling layer is 2 × 2; the final fully connected part has 2 layers with 256 and 2 nodes respectively; in this network, regression is performed with stochastic gradient descent, and the network learns by back-propagating the mean squared error.
8. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the Euclidean distance used in step (5) is:

d = √((p_x − g_x)² + (p_y − g_y)²)

where p_x, p_y and g_x, g_y are the horizontal and vertical coordinates of the predicted point and of the true singular point, respectively, and a detection is successful when d is below the threshold.
9. The RCNN-based fingerprint singular point detection method of claim 1, wherein: the threshold in step (5) is 20 pixels.
CN201911255304.1A 2019-12-10 2019-12-10 Fingerprint singular point detection method based on RCNN Active CN110991374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911255304.1A CN110991374B (en) 2019-12-10 2019-12-10 Fingerprint singular point detection method based on RCNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911255304.1A CN110991374B (en) 2019-12-10 2019-12-10 Fingerprint singular point detection method based on RCNN

Publications (2)

Publication Number Publication Date
CN110991374A CN110991374A (en) 2020-04-10
CN110991374B true CN110991374B (en) 2023-04-04

Family

ID=70091666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911255304.1A Active CN110991374B (en) 2019-12-10 2019-12-10 Fingerprint singular point detection method based on RCNN

Country Status (1)

Country Link
CN (1) CN110991374B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818797B (en) * 2021-01-26 2024-03-01 厦门大学 Consistency detection method and storage device for online examination answer document images
CN113705519B (en) * 2021-09-03 2024-05-24 杭州乐盯科技有限公司 Fingerprint identification method based on neural network
CN115187570B (en) * 2022-07-27 2023-04-07 北京拙河科技有限公司 Singular traversal retrieval method and device based on DNN deep neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017133009A1 (en) * 2016-02-04 2017-08-10 广州新节奏智能科技有限公司 Method for positioning human joint using depth image of convolutional neural network
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
WO2018089210A1 (en) * 2016-11-09 2018-05-17 Konica Minolta Laboratory U.S.A., Inc. System and method of using multi-frame image features for object detection
CN108645498A (en) * 2018-04-28 2018-10-12 南京航空航天大学 Impact Location Method based on phase sensitivity light reflection and convolutional neural networks deep learning
WO2018214195A1 (en) * 2017-05-25 2018-11-29 中国矿业大学 Remote sensing imaging bridge detection method based on convolutional neural network
CN110472623A (en) * 2019-06-29 2019-11-19 华为技术有限公司 Image detecting method, equipment and system

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2375614B1 (en) * 2010-04-09 2014-05-07 Alcatel Lucent Method for broadcasting multimedia content
US10387773B2 (en) * 2014-10-27 2019-08-20 Ebay Inc. Hierarchical deep convolutional neural network for image classification
CN104933722B (en) * 2015-06-29 2017-07-11 University of Electronic Science and Technology of China Image edge detection method based on a Spiking convolutional neural network model
KR102592076B1 (en) * 2015-12-14 2023-10-19 Samsung Electronics Co., Ltd. Apparatus and method for object detection based on deep learning, and apparatus for learning thereof
US9904871B2 (en) * 2016-04-14 2018-02-27 Microsoft Technology Licensing, LLC Deep convolutional neural network prediction of image professionalism
US20170300811A1 (en) * 2016-04-14 2017-10-19 LinkedIn Corporation Dynamic loss function based on statistics in loss layer of deep convolutional neural network
US10229347B2 (en) * 2017-05-14 2019-03-12 International Business Machines Corporation Systems and methods for identifying a target object in an image
US11398088B2 (en) * 2018-01-30 2022-07-26 Magical Technologies, Llc Systems, methods and apparatuses to generate a fingerprint of a physical location for placement of virtual objects
CN108509839A (en) * 2018-02-02 2018-09-07 Donghua University Efficient gesture detection and recognition method based on region convolutional neural networks
US11380422B2 (en) * 2018-03-26 2022-07-05 Uchicago Argonne, Llc Identification and assignment of rotational spectra using artificial neural networks
CN108830908A (en) * 2018-06-15 2018-11-16 Tianjin University Rubik's cube color identification method based on artificial neural network
CN109214441A (en) * 2018-08-23 2019-01-15 Guilin University of Electronic Technology Fine-grained vehicle type recognition system and method
CN109543643B (en) * 2018-11-30 2022-07-01 University of Electronic Science and Technology of China Carrier signal detection method based on one-dimensional fully convolutional neural network
CN109767423B (en) * 2018-12-11 2019-12-10 Southwest Jiaotong University Crack detection method for asphalt pavement images
CN109815156A (en) * 2019-02-28 2019-05-28 北京百度网讯科技有限公司 Display test method, apparatus, device, and storage medium for visual elements in a page
CN109948566B (en) * 2019-03-26 2023-08-18 Jiangnan University Two-stream face anti-spoofing detection method based on weight fusion and feature selection
CN110232380B (en) * 2019-06-13 2021-09-24 Tianjin Fire Research Institute of the Ministry of Emergency Management Fire night scene restoration method based on Mask R-CNN neural network

Also Published As

Publication number Publication date
CN110991374A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110363182B (en) Deep learning-based lane line detection method
CN110287960B (en) Method for detecting and identifying curve characters in natural scene image
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN113486886B (en) License plate recognition method and device in natural scene
CN109840483B (en) Landslide crack detection and identification method and device
CN109886159B (en) Face detection method under non-limited condition
CN112307919B (en) Improved YOLOv3-based digital information area identification method in document image
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN112232371A (en) American license plate recognition method based on YOLOv3 and text recognition
CN112270317A (en) Traditional digital water meter reading identification method based on deep learning and frame difference method
CN103679187A (en) Image identification method and system
CN111652273A (en) Deep learning-based RGB-D image classification method
CN116311310A (en) Universal form identification method and device combining semantic segmentation and sequence prediction
CN116030396A (en) Accurate segmentation method for video structured extraction
CN113159215A (en) Small target detection and identification method based on Fast R-CNN
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance
CN111882000A (en) Network structure and method applied to small sample fine-grained learning
CN114863189A (en) Intelligent image identification method based on big data
CN111881803B (en) Face recognition method based on improved YOLOv3
CN113313678A (en) Automatic sperm morphology analysis method based on multi-scale feature fusion
CN109284752A (en) Rapid vehicle detection method
CN108537266A (en) Fabric texture defect classification method based on deep convolutional network
CN111832463A (en) Deep learning-based traffic sign detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant