CN109684964A - Face recognition method based on region-segmentation Haar-SIFT deep belief network - Google Patents


Info

Publication number
CN109684964A
CN109684964A (application CN201811538898.2A)
Authority
CN
China
Prior art keywords
haar
sift
network
algorithm
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811538898.2A
Other languages
Chinese (zh)
Inventor
史涛
任红格
秦琴
李福进
刘矗
张俊琴
李军
陈炫
陈俊吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Technology
Original Assignee
North China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China University of Science and Technology
Priority to CN201811538898.2A priority Critical patent/CN109684964A/en
Publication of CN109684964A publication Critical patent/CN109684964A/en
Withdrawn (current legal status)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

To address the limitations of the scale-invariant feature transform (SIFT) algorithm during feature extraction — the poor rotation- and scale-invariance of its rectangular sampling region, its excessive feature-vector dimensionality and its low matching rate — the present invention proposes an algorithm model based on a region-segmentation Haar-SIFT deep belief network. First, the SIFT algorithm is improved: Haar wavelet features are used to determine the main orientation and to describe the feature vector, and a circular region, which has better rotational invariance than a rectangular one, is segmented into sub-regions to establish a 32-dimensional feature vector. Then the network is trained layer by layer without supervision and fine-tuned with a supervised BP network, realizing self-learning and self-optimization. Finally, a deep belief network performs classification to recognize face images. Experiments applying the algorithm to the ORL and FERET face databases show that, under the influence of factors such as illumination, blur, rotation and pose, it effectively improves face recognition performance and matching rate.

Description

Face recognition method based on region-segmentation Haar-SIFT deep belief network
Technical field
The present invention relates to the field of artificial intelligence, and specifically to a face recognition method based on a region-segmentation Haar-SIFT deep belief network.
Background art
Research on face recognition technology is of great significance both in theory and in practical application: it not only deepens our understanding of the visual system itself, but also meets society's current demand for artificial-intelligence applications. Face recognition technology has been widely applied in many fields, and to further improve the quality of experience it brings, researchers have achieved one breakthrough after another in this area. In 2006, Hinton proposed the DBN algorithm, a representative deep learning method whose advantage is strong representational capacity; it has been successfully applied to handwritten digit recognition, dynamic human detection and many other areas. Between 1999 and 2004, Lowe proposed and refined the scale-invariant feature transform (SIFT) algorithm, which can be applied to object recognition, 3D modeling, robot mapping and navigation, image tracking and so on. Because the method is robust to changes caused by rotation, illumination and viewpoint, and remains stable under scale change and noise, it has been used to extract local facial features and has since been continuously optimized and improved in practical applications.
A related patent, application publication No. CN104899834A, discloses a blurred-image recognition method and device based on the SIFT algorithm: Gaussian smoothing, grayscale operations and motion blur are applied to a clear image to generate a blur space, the SIFT algorithm extracts the feature points of the blurred image to be recognized, and these are matched against the feature points of each image in the generated blur space to obtain the final recognition result.
The patent of application publication No. CN106920239A discloses an image key-point detection method based on an improved SIFT algorithm: an arbitrary sample point is taken in layer s of the DOG pyramid and compared successively to obtain extremum points, and the image features are finally described; the computation is small, and a high registration rate is guaranteed while maintaining registration accuracy.
The patent of application publication No. CN102622748A discloses a feature key-point matching method for the SIFT algorithm based on a mixed-behavior ant colony algorithm, which uses the city-block distance between key-point feature vectors as the similarity measure between key points in two images, reducing computational complexity and substantially increasing key-point matching accuracy and speed.
However, the above patents still suffer from the rectangular region's susceptibility to rotation and scale interference and from the excessively high dimensionality of the SIFT descriptor, and illumination and pose changes reduce their accuracy; the recognition accuracy obtained by the above methods therefore remains unsatisfactory.
Summary of the invention
To address the limitations of the traditional SIFT algorithm — the poor rotation- and scale-invariance of its rectangular sampling region during feature extraction, its excessive feature-vector dimensionality and its low matching rate — the present invention, building on the original SIFT algorithm, provides a face recognition method based on a region-segmentation Haar-SIFT deep belief network, so as to extract facial image features more accurately and complete the classification and recognition of face images.
To solve the described problem, the present invention adopts the following technical solution:
A face recognition method based on a region-segmentation Haar-SIFT deep belief network, which mainly improves the SIFT features: Haar wavelet features determine the main orientation and describe the feature vector, a circular region replaces the rectangular region and is segmented into sub-regions, and the result is combined with a deep belief network algorithm. The method comprises the following steps:
Step 1: facial features are first extracted with the improved SIFT algorithm, which divides into four parts. First, scale-space extremum detection; then key-point localization; then Haar wavelet features determine the main orientation: with the feature point as the center of a circular neighborhood, the horizontal and vertical Haar wavelet responses of all points inside a 60° sector are summed, the sector is rotated and the responses in the new position are recomputed and compared with those counted before, and the final main orientation of the feature point is the direction of the sector with the largest value after a full rotation. Finally, the key point is described: with the feature point as the center and the main orientation as the starting direction, the circular region is divided into 8 sub-regions of equal area; in each sub-region the Haar wavelet responses are taken, the x-direction response denoted dx and the y-direction response dy, giving the 4-dimensional descriptor (∑dx, ∑dy, ∑|dx|, ∑|dy|) per sub-region and, collectively, an 8 × 4 = 32-dimensional feature vector;
Step 2: the features extracted by the region-segmentation Haar-SIFT algorithm are input to the visible layer of the DBN; the first RBM is trained, its output serves as the input for training the second RBM, and the process is repeated until all RBMs are trained, yielding the optimal network parameters; the whole network is then fine-tuned with the BP algorithm to bring it to its optimum, and this network is used to learn layer by layer from top to bottom and mine the discriminative useful features of the test set, producing the final recognition result.
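The 32-dimensional descriptor of step 1 can be illustrated with a small numpy sketch: the circular neighborhood of a key point is cut into 8 equal 45° sectors measured from the main orientation, and each sector accumulates (∑dx, ∑dy, ∑|dx|, ∑|dy|). This is an illustrative reading only, not the patent's implementation: the per-pixel 2 × 2 Haar responses stand in for integral-image box filters, and all function names are ours.

```python
import numpy as np

def haar_responses(patch):
    # 2x2 Haar wavelet responses: dx = right column minus left column,
    # dy = bottom row minus top row (a stand-in for box filters).
    dx = patch[:, 1].sum() - patch[:, 0].sum()
    dy = patch[1, :].sum() - patch[0, :].sum()
    return dx, dy

def circular_haar_descriptor(img, cx, cy, radius, main_dir, n_sectors=8):
    """Accumulate (sum dx, sum dy, sum |dx|, sum |dy|) over each of
    n_sectors equal sectors of the circular neighbourhood, with sector
    angles measured from the main direction -> 8 x 4 = 32 dimensions."""
    desc = np.zeros((n_sectors, 4))
    h, w = img.shape
    for y in range(1, h):
        for x in range(1, w):
            r = np.hypot(x - cx, y - cy)
            if r == 0 or r > radius:
                continue  # outside the circular region (or the centre pixel)
            # sector index relative to the main direction
            ang = (np.arctan2(y - cy, x - cx) - main_dir) % (2 * np.pi)
            s = min(int(ang / (2 * np.pi / n_sectors)), n_sectors - 1)
            dx, dy = haar_responses(img[y - 1:y + 1, x - 1:x + 1])
            desc[s] += (dx, dy, abs(dx), abs(dy))
    return desc.ravel()
```

Because the sector index is taken relative to the main direction, rotating the image content together with the main direction leaves the descriptor unchanged — the rotational-invariance property for which the circular region is chosen.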
Adopting the above technical solution, the present invention has the following salient features compared with the prior art:
1. The circular region reduces the original SIFT algorithm's susceptibility to rotation and scale interference in the rectangular region and further suppresses edge responses; the Haar wavelet features then reduce the dimensionality of the extracted feature vectors, solving the original SIFT algorithm's problem of excessive feature-vector dimensionality; finally, taking the result as the input for training the deep network structure, in combination with the DBN algorithm, reduces the deep network's learning of unfavorable features, avoids redundancy and shortens network training time. Under uncontrollable factors such as illumination, blur, rotation and pose, face recognition performance and matching rate are effectively improved.
2. The rectangular region's susceptibility to rotation and scale interference and the excessively high dimensionality of the SIFT descriptor are resolved, giving good robustness to the effects of illumination and pose changes.
Brief description of the drawings
Fig. 1 is a schematic diagram of DOG scale-space extremum detection according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of determining the main orientation of a key point according to an embodiment;
Fig. 3 is a schematic diagram of edge-response suppression according to an embodiment;
Fig. 4 is a schematic diagram of the descriptor over the segmented circular region according to an embodiment;
Fig. 5 is a schematic diagram of the RBM model structure according to an embodiment;
Fig. 6 is a schematic diagram of the DBN model structure according to an embodiment.
Specific embodiment:
The invention is further described below with reference to embodiments. The purpose is only to aid understanding of the content of the invention; the cited cases are therefore not intended to limit the scope of protection of the present invention.
A face recognition method based on a region-segmentation Haar-SIFT deep belief network, which mainly improves the SIFT features: Haar wavelet features determine the main orientation and describe the feature vector, a circular region replaces the rectangular region and is segmented into sub-regions, and the result is combined with a deep belief network algorithm. The method comprises the following steps:
Step 1: features are first extracted with the improved SIFT algorithm, which divides into four parts. First, scale-space extremum detection; then key-point localization, as shown in Fig. 1: a key point is located by finding a local extremum of the difference-of-Gaussians (DOG) space, comparing each pixel with all of its neighbors — an intermediate sample point is compared with 26 points, namely the 8 adjacent points on its own scale and the corresponding 9 × 2 = 18 points on the scales above and below. Haar wavelet features then determine the main orientation: with the feature point as the center of a circular neighborhood, the horizontal and vertical Haar wavelet responses of all points inside a 60° sector are summed, the sector is rotated and the responses in the new position are recomputed and compared with those counted before, and the final main orientation is the direction of the sector with the largest value after a full rotation, as shown in Fig. 2. Finally, the key point is described; as shown in Fig. 3, the circular region further suppresses edge responses and increases the scale invariance of the descriptor. With the feature point as the center and the main orientation as the starting direction, the circular region is divided, as shown in Fig. 4, into 8 sub-regions of equal area; in each sub-region the Haar wavelet responses are taken, the x-direction response denoted dx and the y-direction response dy, giving the 4-dimensional descriptor (∑dx, ∑dy, ∑|dx|, ∑|dy|) per sub-region and, collectively, an 8 × 4 = 32-dimensional feature vector;
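The 26-neighbor extremum test described above can be written directly. A minimal numpy sketch, with the layout of the `dog` array (scale, row, column) and the function name being our assumptions:

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if dog[s, y, x] is a strict local extremum among its 26
    neighbours: 8 on its own DOG scale plus 9 on each adjacent scale.
    The 3x3x3 cube includes the point itself, so uniqueness of the
    extreme value enforces strictness."""
    v = dog[s, y, x]
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    if v > 0:
        return v >= cube.max() and (cube == v).sum() == 1
    return v <= cube.min() and (cube == v).sum() == 1
```

In a full detector this check runs at every interior sample point of every DOG octave, and surviving points are then refined and filtered during key-point localization.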
Step 2: the facial-feature matrix extracted by the region-segmentation Haar-SIFT algorithm serves as the input for building the deep belief network model. The RBM model is shown in Fig. 5: the first RBM is trained, its output serves as the input for training the second RBM, and the process repeats until all RBMs are trained, yielding the optimal network parameters; the whole network is then fine-tuned with a BP network to bring it to its optimum, and this network is used to learn layer by layer from top to bottom and mine the discriminative useful features of the test set, producing the final recognition result.
The DBN training procedure, shown in Fig. 6, divides broadly into two parts: pre-training and fine-tuning. Generally speaking, pre-training obtains the weights with an unsupervised greedy layer-by-layer method, training each layer's RBM in bottom-up order. In the pre-training stage, Gibbs sampling is used to train each RBM fully without supervision: the extracted features are input to the first RBM network and it is trained; the feature vector output after training — the hidden layer H0 in the figure — becomes the visible layer V1 of the second RBM network, so that the second RBM network is constituted by V1(H0)–H1 and is likewise fully trained; this continues until the entire DBN is trained. After pre-training, the network enters fine-tuning: the last layer of the DBN model is set as a BP network, through which the whole network is fine-tuned. This is a top-down process; the input feature vector of the BP network is the feature vector output by the RBMs, and the parameters of the entire DBN are adjusted, avoiding the drawback that weight initialization confines the search to a local optimum and prevents it from reaching the global optimum, reducing convergence time and raising training speed, so that the parameters of the whole network reach their optimum.
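The greedy layer-by-layer pre-training just described can be sketched with a small numpy RBM trained by single-step contrastive divergence (CD-1, one step of Gibbs sampling per update). This is an illustration only: CD-1, the epoch count, the learning rate and all names are our choices, not fixed by this description.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=30, lr=0.1):
    """Train one Bernoulli RBM with CD-1 (one Gibbs step per update)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    a = np.zeros(n_visible)  # visible bias
    b = np.zeros(n_hidden)   # hidden bias
    for _ in range(epochs):
        v0 = data
        ph0 = sigmoid(v0 @ W + b)                  # hidden probabilities
        h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sampled hidden states
        v1 = sigmoid(h0 @ W.T + a)                 # reconstruction
        ph1 = sigmoid(v1 @ W + b)
        # contrastive-divergence updates, averaged over the batch
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
        a += lr * (v0 - v1).mean(axis=0)
        b += lr * (ph0 - ph1).mean(axis=0)
    return W, b

def pretrain_dbn(features, layer_sizes):
    """Greedy layer-wise pre-training: each RBM is trained on the hidden
    activations of the RBM below it."""
    stack, x = [], features
    for n_hidden in layer_sizes:
        W, b = train_rbm(x, n_hidden)
        stack.append((W, b))
        x = sigmoid(x @ W + b)   # feed the activations upward
    return stack
```

In the full pipeline, the 32-dimensional region-segmentation Haar-SIFT vectors of the training faces would play the role of `features`.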
Haar wavelet features determine the main orientation and describe the feature vector; the circular region, which has better rotational invariance than a rectangular one, is segmented into sub-regions, remedying the SIFT algorithm's limitations during facial-image feature extraction — the rectangular region's poor rotation- and scale-invariance, the excessive feature-vector dimensionality and the low matching rate. The region-segmentation Haar-SIFT algorithm is combined with the deep belief network algorithm: the region-segmentation Haar-SIFT feature vectors of the samples are input to the visible layer of the DBN, the DBN is trained layer by layer and the entire deep belief network is fine-tuned with a BP network, finally completing the classification and recognition of face images.
The present invention improves the original SIFT algorithm into the region-segmentation Haar-SIFT algorithm and couples it with the deep belief network algorithm: facial local features extracted by region-segmentation Haar-SIFT serve as the input of the deep belief network, which, bottom-up and layer by layer, learns increasingly abstract facial features and classifies at the top layer; this reduces the learning of descriptions of unfavorable features, improves the discriminability of the features and, with global fine-tuning by the BP algorithm, shortens network training time. Experimental results show that the proposed algorithm effectively solves the rectangular region's susceptibility to rotation and scale interference and the excessively high dimensionality of the SIFT descriptor, and is therefore robust to the effects of illumination and pose changes. The circular region reduces the original SIFT algorithm's susceptibility to rotation and scale interference and further suppresses edge responses; the Haar wavelet features reduce the dimensionality of the extracted feature vectors, solving the problem of excessive feature-vector dimensionality; and coupling the method with the DBN algorithm as its input reduces the deep network's learning of unfavorable features, avoids redundancy and shortens training time. Under uncontrollable factors such as illumination, blur, rotation and pose, face recognition performance and matching rate are effectively improved.
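The global BP fine-tuning mentioned above can be sketched as ordinary backpropagation through the pre-trained sigmoid layers with a softmax classifier appended on top. A hedged numpy sketch: the softmax output layer, the hyper-parameters and all names are our assumptions — the description specifies only that a BP network adjusts the whole network's parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def finetune(stack, W_out, x, labels, epochs=200, lr=0.5):
    """Supervised fine-tuning: forward through the pre-trained sigmoid
    layers in `stack` (list of (W, b)), softmax on top, then backprop
    of the cross-entropy gradient into every layer. Mutates `stack`."""
    onehot = np.eye(W_out.shape[1])[labels]
    for _ in range(epochs):
        # forward pass, keeping every layer's activation
        acts = [x]
        for W, b in stack:
            acts.append(sigmoid(acts[-1] @ W + b))
        probs = softmax(acts[-1] @ W_out)
        # backward pass (gradients computed before each update)
        delta = (probs - onehot) / len(x)
        grad_out = acts[-1].T @ delta
        delta = delta @ W_out.T
        W_out = W_out - lr * grad_out
        for i in reversed(range(len(stack))):
            W, b = stack[i]
            delta = delta * acts[i + 1] * (1 - acts[i + 1])  # sigmoid'
            grad_W, grad_b = acts[i].T @ delta, delta.sum(axis=0)
            delta = delta @ W.T          # propagate to the layer below
            stack[i] = (W - lr * grad_W, b - lr * grad_b)
    return stack, W_out
```

Starting backpropagation from pre-trained rather than random weights is exactly the point made above: the unsupervised stage places the network near a good solution, and the supervised sweep only adjusts it.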
The foregoing is merely a preferred feasible example of the present invention and does not thereby limit its scope of protection; all equivalent changes made according to the description and drawings of the present invention are intended to be included within the scope of the present invention.

Claims (1)

1. A face recognition method based on a region-segmentation Haar-SIFT deep belief network, characterized in that it mainly improves the SIFT features: Haar wavelet features determine the main orientation and describe the feature vector, a circular region replaces the rectangular region and is segmented into sub-regions, and the result is combined with a deep belief network algorithm, the method comprising the following steps:
Step 1: facial features are first extracted with the improved SIFT algorithm, which divides into four parts. First, scale-space extremum detection; then key-point localization; then Haar wavelet features determine the main orientation: with the feature point as the center of a circular neighborhood, the horizontal and vertical Haar wavelet responses of all points inside a 60° sector are summed, the sector is rotated and the responses in the new position are recomputed and compared with those counted before, and the final main orientation of the feature point is the direction of the sector with the largest value after a full rotation. Finally, the key point is described: with the feature point as the center and the main orientation as the starting direction, the circular region is divided into 8 sub-regions of equal area; in each sub-region the Haar wavelet responses are taken, the x-direction response denoted dx and the y-direction response dy, giving the 4-dimensional descriptor (∑dx, ∑dy, ∑|dx|, ∑|dy|) per sub-region and, collectively, an 8 × 4 = 32-dimensional feature vector;
Step 2: the features extracted by the region-segmentation Haar-SIFT algorithm are input to the visible layer of the DBN; the first RBM is trained, its output serves as the input for training the second RBM, and the process is repeated until all RBMs are trained, yielding the optimal network parameters; the whole network is then fine-tuned with the BP algorithm to bring it to its optimum, and this network is used to learn layer by layer from top to bottom and mine the discriminative useful features of the test set, producing the final recognition result.
CN201811538898.2A 2018-12-17 2018-12-17 Face recognition method based on region-segmentation Haar-SIFT deep belief network Withdrawn CN109684964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811538898.2A CN109684964A (en) 2018-12-17 2018-12-17 Face recognition method based on region-segmentation Haar-SIFT deep belief network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811538898.2A CN109684964A (en) 2018-12-17 2018-12-17 Face recognition method based on region-segmentation Haar-SIFT deep belief network

Publications (1)

Publication Number Publication Date
CN109684964A true CN109684964A (en) 2019-04-26

Family

ID=66187770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811538898.2A Withdrawn CN109684964A (en) 2018-12-17 2018-12-17 Face recognition method based on region-segmentation Haar-SIFT deep belief network

Country Status (1)

Country Link
CN (1) CN109684964A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855758A (en) * 2012-08-27 2013-01-02 无锡北邮感知技术产业研究院有限公司 Detection method for vehicle in breach of traffic rules
CN103021183A (en) * 2012-12-07 2013-04-03 北京中邮致鼎科技有限公司 Method for detecting regulation-violating motor vehicles in monitoring scene
CN104966081A (en) * 2015-06-04 2015-10-07 广州美读信息技术有限公司 Spine image recognition method
CN106096658A (en) * 2016-06-16 2016-11-09 华北理工大学 Based on the Aerial Images sorting technique without supervision deep space feature coding
CN107729890A (en) * 2017-11-30 2018-02-23 华北理工大学 Face identification method based on LBP and deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZENG LUAN et al.: "A SIFT feature descriptor based on sector-area segmentation", Acta Automatica Sinica (《自动化学报》) *

Similar Documents

Publication Publication Date Title
Zhao et al. A survey on deep learning-based fine-grained object classification and semantic segmentation
Wu et al. Rapid target detection in high resolution remote sensing images using YOLO model
Sermanet et al. Overfeat: Integrated recognition, localization and detection using convolutional networks
CN109977798B (en) Mask pooling model training and pedestrian re-identification method for pedestrian re-identification
CN104463100B (en) Intelligent wheel chair man-machine interactive system and method based on human facial expression recognition pattern
CN108268859A (en) A kind of facial expression recognizing method based on deep learning
CN105718889B (en) Based on GB (2D)2The face personal identification method of PCANet depth convolution model
Li et al. Pedestrian detection based on deep learning model
CN102930302A (en) On-line sequential extreme learning machine-based incremental human behavior recognition method
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN108345866B (en) Pedestrian re-identification method based on deep feature learning
CN107463917A (en) A kind of face feature extraction method merged based on improved LTP with the two-way PCA of two dimension
CN113763417B (en) Target tracking method based on twin network and residual error structure
CN107729890A (en) Face identification method based on LBP and deep learning
Wang et al. Camera-based signage detection and recognition for blind persons
CN106127112A (en) Data Dimensionality Reduction based on DLLE model and feature understanding method
CN107784263B (en) Planar rotation face detection method based on improved accelerated robust features
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN112396036B (en) Method for re-identifying blocked pedestrians by combining space transformation network and multi-scale feature extraction
CN115527269B (en) Intelligent human body posture image recognition method and system
CN108509861B (en) Target tracking method and device based on combination of sample learning and target detection
CN107220607B (en) Motion trajectory behavior identification method based on 3D stationary wavelet
CN105957103A (en) Vision-based motion feature extraction method
Liu et al. Action recognition based on features fusion and 3D convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210120

Address after: 300384 No. 391 Binshui West Road, Xiqing District, Tianjin

Applicant after: TIANJIN University OF TECHNOLOGY

Address before: 063210 Tangshan City Caofeidian District, Hebei Province, Tangshan Bay eco Town, Bohai Road, 21

Applicant before: NORTH CHINA University OF SCIENCE AND TECHNOLOGY

WW01 Invention patent application withdrawn after publication

Application publication date: 20190426