CN105095867A - Rapid dynamic face extraction and identification method based on deep learning - Google Patents

Rapid dynamic face extraction and identification method based on deep learning

Info

Publication number
CN105095867A
CN105095867A (application number CN201510429994.3A)
Authority
CN
China
Prior art keywords
face
partial weight
value
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510429994.3A
Other languages
Chinese (zh)
Inventor
姚一鸣 (Yao Yiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Duozhi Science And Technology Development Co Ltd
Original Assignee
Harbin Duozhi Science And Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Duozhi Science And Technology Development Co Ltd filed Critical Harbin Duozhi Science And Technology Development Co Ltd
Priority to CN201510429994.3A priority Critical patent/CN105095867A/en
Publication of CN105095867A publication Critical patent/CN105095867A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

A rapid dynamic face extraction and identification method based on deep learning is disclosed. Face identification technology identifies input face images or video streams on the basis of human facial characteristics. A moving body is determined to be a human body by searching for Haar-like features of the upper part of the body, and color images in which the head region is larger than 39*39 pixels are intercepted through screening. Five facial feature regions, namely the left eye, right eye, nose tip, left mouth corner and right mouth corner, are determined through a partial weight shared convolution formula and a partial weight shared sampling formula so as to find the center points of the feature regions, and the deep relationship values of the feature regions are extracted and converted into a matrix. Finally, the values in the matrix are compared with the values in a database, probability analysis is carried out with a Gaussian model to obtain a positive or negative value, and the face is identified.

Description

Rapid dynamic face extraction and recognition method based on deep learning
technical field:
The present invention relates to the field of face recognition, and in particular to a rapid dynamic face extraction and recognition method based on deep learning.
background technology:
Face recognition technology identifies input face images or video streams on the basis of a person's facial features. It first judges whether a face is present; if a face exists, it further gives the position and size of each face and the positions of the main facial organs. Based on this information, it then extracts the identity features contained in each face and compares them with known faces, thereby identifying the identity of each face. Face recognition technology has a wide range of applications, such as entrance and exit management, access control and attendance, computer security, and intelligent alarms for pursuing and capturing fugitives in public security work. However, existing face recognition technology and its applications have the following deficiencies. (1) Recognition quantity, range and concurrency: only one person can be recognized at a time, and many people cannot be recognized simultaneously over a large area. (2) User position limitation and compulsion: the user must adjust his or her own position and face the camera squarely; a slightly turned or lowered head may make recognition impossible. (3) Response speed and efficiency: constrained by the user position limitation and compulsion, fast acquisition and recognition at a distance cannot be achieved. To overcome the above deficiencies of the prior art, the present invention provides a recognition method based on computer deep learning, which the author names the "deepmax algorithm". The method can extract and recognize faces in real time, dynamically, quickly and on a large scale, and can be better applied to the security, attendance and entrance/exit management systems of enterprises, schools and government offices. (4) Reduced hardware requirements: traditional methods place very strict demands on hardware in large-scale recognition, and for recognition at the scale of ten thousand people an ordinary computer cannot meet the computing demand at all. By means of single-instruction, dual floating-point operations, this method effectively solves the problem of excessive hardware usage; in a stress test with 100,000 randomly drawn people, identifying a person takes only 0.28 seconds, which essentially achieves real-time identification.
summary of the invention:
The object of the present invention is to provide a rapid dynamic face extraction and recognition method based on deep learning.
The above object is achieved by the following technical scheme:
1. A rapid dynamic face extraction and recognition method based on deep learning: first, the Haar-like features of the upper half of a human body are searched to determine that a moving object is a human being, after which color images in which the head region is larger than 39*39 pixels are intercepted through screening; then a partial weight shared convolution formula and a partial weight shared sampling formula are applied:
Partial weight shared convolution formula:

y_{i,j}^{(t)} = \sum_{r=0}^{m-1} \sum_{k,l} x_{i+k, j+l}^{(r)} \cdot w_{k,l}^{(u,v,r,t)} + b^{(u,v,t)}

wherein x_{i+k, j+l}^{(r)} is an input image pixel and y_{i,j}^{(t)} an output image pixel, the subscripts denoting the pixel coordinates; w_{k,l}^{(u,v,r,t)} and b^{(u,v,t)} are the weights to be trained, and because a new partial weight sharing technique is adopted, the superscripts (u, v) on w and b denote the local shared region; r = 0, 1, ..., m-1 indexes the channels of the previous layer, m in total; t indexes the channels of the current layer, n in total;
Partial weight shared sampling formula:

y_{i,j}^{(t)} = \tanh\left( g^{(u,v,t)} \cdot \max_{0 \le k, l < s} x_{i \cdot s + k, j \cdot s + l}^{(t)} + b^{(u,v,t)} \right)

wherein x_{i·s+k, j·s+l}^{(t)} is an input image pixel and y_{i,j}^{(t)} an output image pixel, the subscripts denoting the pixel coordinates; g^{(u,v,t)} and b^{(u,v,t)} are the weights to be trained; the max term takes the maximum value over the rectangular region from x_{i·s+0, j·s+0}^{(t)} to x_{i·s+s-1, j·s+s-1}^{(t)} as the sampled value, which is then multiplied by the parameter g, the offset b is added, and the final value is computed with the hyperbolic tangent formula. The five facial feature regions (the left eye, right eye, nose tip, left mouth corner and right mouth corner) are thereby determined and the center point of each feature region is found; from each center point, three groups of rectangles and three groups of squares are framed outward at random, and grayscale images are taken of these six groups, so that each position yields 12 images and the five feature positions yield 60 images in total.
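To make the two formulas above concrete, the following is a minimal NumPy sketch that evaluates them with plain loops (for readability rather than speed). The way the local shared region (u, v) is derived from the output position, and the use of a per-channel gain and offset in the sampling function, are simplifying assumptions of this sketch rather than details fixed by the patent text.

```python
import numpy as np

def partial_weight_shared_conv(x, w, b):
    """Partial weight shared convolution (sketch of the first formula).

    x : input maps, shape (m, H, W), one plane per previous-layer channel r
    w : weights,    shape (n, m, R, R, k, k); the output plane is divided into
        an R x R grid of local shared regions (u, v), each with its own k x k
        kernel per (r, t) channel pair
    b : offsets,    shape (n, R, R)
    """
    n, m, R, _, k, _ = w.shape
    H, W = x.shape[1:]
    out_h, out_w = H - k + 1, W - k + 1
    y = np.zeros((n, out_h, out_w))
    for t in range(n):
        for i in range(out_h):
            for j in range(out_w):
                u = i * R // out_h            # local shared region index (assumed mapping)
                v = j * R // out_w
                patch = x[:, i:i + k, j:j + k]          # all previous-layer channels r
                y[t, i, j] = np.sum(w[t, :, u, v] * patch) + b[t, u, v]
    return y

def partial_weight_shared_sampling(x, g, b, s=2):
    """Partial weight shared sampling (sketch of the second formula): take the
    maximum over each s x s block, multiply by the gain g, add the offset b,
    and apply the hyperbolic tangent. Here g and b are per channel only; the
    (u, v) local sharing is dropped for brevity."""
    n, H, W = x.shape
    y = np.zeros((n, H // s, W // s))
    for t in range(n):
        for i in range(H // s):
            for j in range(W // s):
                block = x[t, i * s:(i + 1) * s, j * s:(j + 1) * s]
                y[t, i, j] = np.tanh(g[t] * block.max() + b[t])
    return y
```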
Each image is then passed through a 7-layer neural network to obtain 160 feature values, so that each face yields 160*60 = 9600 feature values after computation; the feature values are placed into a neural matrix in a fixed arrangement;
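As an illustration of how the fixed arrangement can be realised, the following sketch stacks the 160 feature values of each of the 60 patches into a 60 x 160 matrix. The `seven_layer_net` callable is a placeholder for the 7-layer network, which is not specified here beyond its 160-value output.

```python
import numpy as np

def build_face_matrix(patches, seven_layer_net):
    """Assemble the per-face feature matrix described above.

    patches         : list of 60 image patches (12 per feature point, 5 points)
    seven_layer_net : callable returning 160 feature values for one patch
                      (placeholder for the 7-layer network)
    """
    assert len(patches) == 60, "expected 12 patches for each of the 5 feature points"
    rows = [np.asarray(seven_layer_net(p), dtype=np.float32) for p in patches]
    matrix = np.stack(rows)               # fixed row order keeps faces comparable
    assert matrix.shape == (60, 160)      # 9600 feature values per face
    return matrix
```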
The assembled matrix is transformed and probability analysis is carried out with a logistic regression model; if a positive result is obtained, the method proceeds to the next step, and if no total is positive, the subject is judged to be a stranger.
The face matrices for which the logistic regression model yields a result are extracted and compared with the matrix produced by the convolution, using a joint Gaussian mixture model to obtain a similarity probability; if this probability is greater than the required probability of occurrence, the face is judged to be the same person as a face stored in the database, otherwise the subject is judged to be a stranger;
Next, the partial weight shared convolution formula and the partial weight shared sampling formula are applied again to extract the deep relationship values of the feature regions, and all relationship values are converted into a matrix; finally, the values in the matrix are compared with the values in the database, a positive or negative value is obtained, and the identity of the face is determined.
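The following sketch puts the verification steps above into code under stated assumptions: the logistic regression weights are assumed to be trained elsewhere, a sigmoid score above 0.5 is interpreted as the "positive result", and a diagonal Gaussian per enrolled identity stands in for the joint Gaussian mixture model named in the text. All function and variable names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def verify_face(feat, lr_w, lr_b, db_means, db_stds, threshold):
    """Gate a 9600-value face feature vector with logistic regression, then
    score it against enrolled identities with a Gaussian similarity."""
    # Logistic regression gate: a non-positive outcome means "stranger".
    if sigmoid(np.dot(lr_w, feat) + lr_b) <= 0.5:
        return None

    # Per-identity diagonal-Gaussian likelihood (averaged over dimensions).
    best_id, best_prob = None, -np.inf
    for identity, (mu, sigma) in enumerate(zip(db_means, db_stds)):
        log_p = -0.5 * np.mean(((feat - mu) / sigma) ** 2
                               + np.log(2.0 * np.pi * sigma ** 2))
        prob = np.exp(log_p)
        if prob > best_prob:
            best_id, best_prob = identity, prob

    # Only accept the best match if its probability clears the threshold.
    return best_id if best_prob > threshold else None
```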
beneficial effect:
The present invention combines a neural network with region weight shared convolution and builds a deep learning network that locates the intuitive features of the face. The regional weight sharing plays a key role: because facial features fall in specific regions of the whole image, training different weights for these regions helps the whole network screen the different feature images more easily, so that facial features are located quickly and accurately; the deep neural network then extracts high-level features, which lets the whole system keep the same high accuracy under different lighting and brightness conditions. In the training stage, a rectangular face region is input and a convolution produces multiple feature maps; these feature maps are sampled, then convolved and sampled again, and after three such rounds high-order features of the face image are obtained. These features are passed through two full connections, finally giving ten output neurons whose outputs are the coordinate values of the five key features of the face. These coordinates are compared with the true feature coordinates in the face image, and the weights are adjusted with the stochastic gradient descent algorithm to raise the accuracy, so that the coordinates output by the system finally coincide with the true coordinates and the weights obtained are the optimal solution.
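As a sketch of this training stage, the loop below regresses the ten network outputs (the x and y coordinates of the five key points) onto annotated coordinates with a squared-error loss and adjusts the weights by stochastic gradient descent. It is written with standard PyTorch modules and assumes a `model` such as the network sketched under Embodiment 2 below; the loss choice and learning rate are assumptions of this sketch, not values taken from the patent.

```python
import torch
import torch.nn as nn

def train_landmark_regressor(model, loader, epochs=10, lr=0.01):
    """Fit landmark coordinates by stochastic gradient descent.

    model  : any torch.nn.Module mapping a 1x39x39 crop to 10 values
    loader : iterable of (face_crops, true_coords) tensor batches
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for crops, true_coords in loader:
            optimizer.zero_grad()
            pred = model(crops)                               # predicted coordinates
            loss = nn.functional.mse_loss(pred, true_coords)  # distance to ground truth
            loss.backward()                                   # backpropagate the error
            optimizer.step()                                  # SGD weight update
    return model
```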
brief description of the drawings:
Figure 1 is the face neural recognition network model of the present invention.
embodiment:
embodiment 1:
1. A rapid dynamic face extraction and recognition method based on deep learning, characterized in that: first, the Haar-like features of the upper half of a human body are searched to determine that a moving object is a human being, after which color images in which the head region is larger than 39*39 pixels are intercepted through screening; then a partial weight shared convolution formula and a partial weight shared sampling formula are applied:
Partial weight shared convolution formula:

y_{i,j}^{(t)} = \sum_{r=0}^{m-1} \sum_{k,l} x_{i+k, j+l}^{(r)} \cdot w_{k,l}^{(u,v,r,t)} + b^{(u,v,t)}

wherein x_{i+k, j+l}^{(r)} is an input image pixel and y_{i,j}^{(t)} an output image pixel, the subscripts denoting the pixel coordinates; w_{k,l}^{(u,v,r,t)} and b^{(u,v,t)} are the weights to be trained, and because a new partial weight sharing technique is adopted, the superscripts (u, v) on w and b denote the local shared region; r = 0, 1, ..., m-1 indexes the channels of the previous layer, m in total; t indexes the channels of the current layer, n in total;
Partial weight shared sampling formula:

y_{i,j}^{(t)} = \tanh\left( g^{(u,v,t)} \cdot \max_{0 \le k, l < s} x_{i \cdot s + k, j \cdot s + l}^{(t)} + b^{(u,v,t)} \right)

wherein x_{i·s+k, j·s+l}^{(t)} is an input image pixel and y_{i,j}^{(t)} an output image pixel, the subscripts denoting the pixel coordinates; g^{(u,v,t)} and b^{(u,v,t)} are the weights to be trained; the max term takes the maximum value over the rectangular region from x_{i·s+0, j·s+0}^{(t)} to x_{i·s+s-1, j·s+s-1}^{(t)} as the sampled value, which is then multiplied by the parameter g, the offset b is added, and the final value is computed with the hyperbolic tangent formula. The five facial feature regions (the left eye, right eye, nose tip, left mouth corner and right mouth corner) are thereby determined and the center point of each region is found; from each center point, three groups of rectangles and three groups of squares are framed outward at random, and grayscale images are taken of these six groups. Next, the partial weight shared convolution formula and the partial weight shared sampling formula are applied again (a first step sketched in code below) to extract the deep relationship values of the feature regions, and all relationship values are converted into a matrix; finally, the values in the matrix are compared with the values in the database, a positive or negative value is obtained, and the identity of the face is determined.
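To illustrate the first step of this embodiment (the Haar-like search over the upper half of the body followed by the 39*39 head-size screening), the sketch below uses OpenCV's stock upper-body cascade. The choice of detector parameters and the heuristic that the head occupies the top half of the detected upper-body box are assumptions of this sketch, not part of the claimed method.

```python
import cv2

def find_head_candidates(frame_bgr):
    """Detect upper bodies via Haar-like features and keep colour head crops
    larger than 39x39 pixels."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_upperbody.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    bodies = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

    heads = []
    for (x, y, w, h) in bodies:
        head = frame_bgr[y:y + h // 2, x:x + w]        # assumed head region: top half
        if head.shape[0] > 39 and head.shape[1] > 39:  # screening by head size
            heads.append(head)
    return heads
```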
Embodiment 2:
According to the rapid dynamic face extraction and recognition method based on deep learning of Embodiment 1, the system recognizes facial image features taken at random from the network; the green points are the recognized facial feature regions, namely the left eye, right eye, nose tip, left mouth corner and right mouth corner. The facial feature recognition network is illustrated schematically as follows: the input layer is a single-channel grayscale image; the first layer (the input layer) is a 39*39 pixel face; the second layer (a convolutional layer) has 20 feature maps of size 36*36 pixels; the third layer (a sampling layer) has 20 feature maps of size 18*18 pixels; the fourth layer (a convolutional layer) has 40 feature maps of size 16*16 pixels; the fifth layer (a sampling layer) has 40 feature maps of size 8*8 pixels; the sixth layer (a convolutional layer) has 60 feature maps of size 6*6 pixels; the seventh layer (a sampling layer) has 60 feature maps of size 3*3 pixels; the eighth layer (a convolutional layer) has 80 feature maps of size 2*2 pixels; the ninth layer is fully connected to the eighth layer and has 120 neurons; the tenth layer is fully connected to the ninth layer and has 10 neurons.
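The layer sizes above fully determine the kernel sizes (39 to 36 implies 4x4 kernels, 18 to 16 and 8 to 6 imply 3x3, 3 to 2 implies 2x2, and each sampling layer halves the map size), so the architecture can be sketched directly. In the sketch below, ordinary Conv2d and MaxPool2d layers stand in for the partial weight shared convolution and sampling layers of the invention, which have no stock PyTorch equivalent; the tanh activations follow the hyperbolic tangent used in the sampling formula.

```python
import torch
import torch.nn as nn

# Sketch of the ten-layer network of Embodiment 2 (sizes as stated in the text).
face_point_net = nn.Sequential(
    nn.Conv2d(1, 20, kernel_size=4), nn.Tanh(),   # layer 2: 20 maps, 36x36
    nn.MaxPool2d(2),                              # layer 3: 20 maps, 18x18
    nn.Conv2d(20, 40, kernel_size=3), nn.Tanh(),  # layer 4: 40 maps, 16x16
    nn.MaxPool2d(2),                              # layer 5: 40 maps, 8x8
    nn.Conv2d(40, 60, kernel_size=3), nn.Tanh(),  # layer 6: 60 maps, 6x6
    nn.MaxPool2d(2),                              # layer 7: 60 maps, 3x3
    nn.Conv2d(60, 80, kernel_size=2), nn.Tanh(),  # layer 8: 80 maps, 2x2
    nn.Flatten(),
    nn.Linear(80 * 2 * 2, 120), nn.Tanh(),        # layer 9: 120 neurons
    nn.Linear(120, 10),                           # layer 10: 10 outputs (5 x,y pairs)
)

# Shape check on a dummy 39x39 grey crop: the output is ten coordinate values.
assert face_point_net(torch.zeros(1, 1, 39, 39)).shape == (1, 10)
```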

Claims (1)

1. A rapid dynamic face extraction and recognition method based on deep learning, characterized in that: first, the Haar-like features of the upper half of a human body are searched to determine that a moving object is a human being, after which color images in which the head region is larger than 39*39 pixels are intercepted through screening; then a partial weight shared convolution formula and a partial weight shared sampling formula are applied:
Partial weight shared convolution formula:

y_{i,j}^{(t)} = \sum_{r=0}^{m-1} \sum_{k,l} x_{i+k, j+l}^{(r)} \cdot w_{k,l}^{(u,v,r,t)} + b^{(u,v,t)}

wherein x_{i+k, j+l}^{(r)} is an input image pixel and y_{i,j}^{(t)} an output image pixel, the subscripts denoting the pixel coordinates; w_{k,l}^{(u,v,r,t)} and b^{(u,v,t)} are the weights to be trained, and because a new partial weight sharing technique is adopted, the superscripts (u, v) on w and b denote the local shared region; r = 0, 1, ..., m-1 indexes the channels of the previous layer, m in total; t indexes the channels of the current layer, n in total;
Partial weight shared sampling formula:

y_{i,j}^{(t)} = \tanh\left( g^{(u,v,t)} \cdot \max_{0 \le k, l < s} x_{i \cdot s + k, j \cdot s + l}^{(t)} + b^{(u,v,t)} \right)

wherein x_{i·s+k, j·s+l}^{(t)} is an input image pixel and y_{i,j}^{(t)} an output image pixel, the subscripts denoting the pixel coordinates; g^{(u,v,t)} and b^{(u,v,t)} are the weights to be trained; the max term takes the maximum value over the rectangular region from x_{i·s+0, j·s+0}^{(t)} to x_{i·s+s-1, j·s+s-1}^{(t)} as the sampled value, which is then multiplied by the parameter g, the offset b is added, and the final value is computed with the hyperbolic tangent formula. The five facial feature regions (the left eye, right eye, nose tip, left mouth corner and right mouth corner) are thereby determined and the center point of each region is found; from each center point, three groups of rectangles and three groups of squares are framed outward at random, and grayscale images are taken of these six groups. Next, the partial weight shared convolution formula and the partial weight shared sampling formula are applied again to extract the deep relationship values of the feature regions, and all relationship values are converted into a matrix; finally, the values in the matrix are compared with the values in the database, probability analysis is carried out with a Gaussian model to obtain a positive or negative value, and the identity of the face is determined.
CN201510429994.3A 2015-07-21 2015-07-21 Rapid dynamic face extraction and identification method based deep learning Pending CN105095867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510429994.3A CN105095867A (en) 2015-07-21 2015-07-21 Rapid dynamic face extraction and identification method based deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510429994.3A CN105095867A (en) 2015-07-21 2015-07-21 Rapid dynamic face extraction and identification method based deep learning

Publications (1)

Publication Number Publication Date
CN105095867A true CN105095867A (en) 2015-11-25

Family

ID=54576256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510429994.3A Pending CN105095867A (en) 2015-07-21 2015-07-21 Rapid dynamic face extraction and identification method based deep learning

Country Status (1)

Country Link
CN (1) CN105095867A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140079297A1 (en) * 2012-09-17 2014-03-20 Saied Tadayon Application of Z-Webs and Z-factors to Analytics, Search Engine, Learning, Recognition, Natural Language, and Other Utilities
CN104361327A (en) * 2014-11-20 2015-02-18 苏州科达科技股份有限公司 Pedestrian detection method and system
CN104408435A (en) * 2014-12-05 2015-03-11 浙江大学 Face identification method based on random pooling convolutional neural network
CN104463172A (en) * 2014-12-09 2015-03-25 中国科学院重庆绿色智能技术研究院 Face feature extraction method based on face feature point shape drive depth model
CN104484658A (en) * 2014-12-30 2015-04-01 中科创达软件股份有限公司 Face gender recognition method and device based on multi-channel convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许可 (Xu Ke): "卷积神经网络在图像识别上的应用的研究" [Research on the Application of Convolutional Neural Networks in Image Recognition], 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913117A (en) * 2016-04-04 2016-08-31 北京工业大学 Intelligent related neural network computer identification method
CN106096518A (en) * 2016-06-02 2016-11-09 哈尔滨多智科技发展有限公司 Quick dynamic human body action extraction based on degree of depth study, recognition methods
CN106874857A (en) * 2017-01-19 2017-06-20 腾讯科技(上海)有限公司 A kind of living body determination method and system based on video analysis
CN106874857B (en) * 2017-01-19 2020-12-01 腾讯科技(上海)有限公司 Living body distinguishing method and system based on video analysis
WO2018137226A1 (en) * 2017-01-25 2018-08-02 深圳市汇顶科技股份有限公司 Fingerprint extraction method and device
CN106934377A (en) * 2017-03-14 2017-07-07 深圳大图科创技术开发有限公司 A kind of improved face detection system
CN106934377B (en) * 2017-03-14 2020-03-17 新疆智辰天林信息科技有限公司 Improved human face detection system
CN107992859A (en) * 2017-12-28 2018-05-04 华慧视科技(天津)有限公司 It is a kind of that drawing method is cut based on Face datection
CN108416265A (en) * 2018-01-30 2018-08-17 深圳大学 A kind of method for detecting human face, device, equipment and storage medium
CN109998496A (en) * 2019-01-31 2019-07-12 中国人民解放军海军工程大学 A kind of autonomous type body temperature automatic collection and respiratory monitoring system and method
CN114220142A (en) * 2021-11-24 2022-03-22 慧之安信息技术股份有限公司 Face feature recognition method of deep learning algorithm
CN115424383A (en) * 2022-10-10 2022-12-02 广州睿泰智能设备科技股份有限公司 Intelligent access control management system and method

Similar Documents

Publication Publication Date Title
CN105095867A (en) Rapid dynamic face extraction and identification method based deep learning
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN104866829B (en) A kind of across age face verification method based on feature learning
CN104572804B (en) A kind of method and its system of video object retrieval
CN105518744B (en) Pedestrian recognition methods and equipment again
Guo et al. Background subtraction using local SVD binary pattern
CN108875600A (en) A kind of information of vehicles detection and tracking method, apparatus and computer storage medium based on YOLO
CN110084156A A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature
CN106599883A (en) Face recognition method capable of extracting multi-level image semantics based on CNN (convolutional neural network)
CN106650806A (en) Cooperative type deep network model method for pedestrian detection
CN108764085A (en) Based on the people counting method for generating confrontation network
CN104700078B (en) A kind of robot scene recognition methods based on scale invariant feature extreme learning machine
CN105138954A (en) Image automatic screening, query and identification system
CN106446862A (en) Face detection method and system
Rahimpour et al. Person re-identification using visual attention
Wang et al. Head pose estimation with combined 2D SIFT and 3D HOG features
CN109711416A (en) Target identification method, device, computer equipment and storage medium
CN104680545B (en) There is the detection method of well-marked target in optical imagery
CN109753864A (en) A kind of face identification method based on caffe deep learning frame
TW201308254A (en) Motion detection method for comples scenes
CN109902613A (en) A kind of human body feature extraction method based on transfer learning and image enhancement
CN108564040A (en) A kind of fingerprint activity test method based on depth convolution feature
CN106909884A (en) A kind of hand region detection method and device based on hierarchy and deformable part sub-model
CN104820711A (en) Video retrieval method for figure target in complex scene
CN105069403B (en) A kind of three-dimensional human ear identification based on block statistics feature and the classification of dictionary learning rarefaction representation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: Room 105-23248, No. 6 Baohua Road, Hengqin, Zhuhai, Guangdong 519000 (central office)

Applicant after: Zhuhai wisdom Technology Co., Ltd.

Address before: Floor 3, Building 1, No. 25, Tai Po, Nangang District, Harbin 150090

Applicant before: HARBIN DUOZHI SCIENCE AND TECHNOLOGY DEVELOPMENT CO., LTD.

CB02 Change of applicant information
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151125

WD01 Invention patent application deemed withdrawn after publication