CN109002799A - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN109002799A
Authority
CN
China
Prior art keywords
face
region
rotation operation
face region
texture features
Prior art date
Legal status
Granted
Application number
CN201810796508.5A
Other languages
Chinese (zh)
Other versions
CN109002799B (en)
Inventor
金益
Current Assignee
Suzhou Vocational University
Original Assignee
Suzhou Vocational University
Priority date: 2018-07-19
Filing date: 2018-07-19
Publication date: 2018-12-14
Application filed by Suzhou Vocational University
Priority to CN201810796508.5A
Publication of CN109002799A: 2018-12-14
Application granted
Publication of CN109002799B: 2021-08-24
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Abstract

The invention discloses a face recognition method comprising: obtaining an image to be recognized; obtaining a first face region using an edge detection algorithm and a second face region using a reference RGB value range, and taking the region where the first face region and the second face region overlap as a third face region; obtaining a feature region from the third face region, defining a rectangular region centered on the feature region, successively rotating the rectangular region about its center through 0° to 180°, dividing the rectangular region into multiple sub-regions after each rotation operation, and extracting the texture features of each sub-region; and comparing the texture feature set corresponding to each rotation operation with a database and computing a similarity for each rotation operation, and, if the similarity for every rotation operation exceeds a set threshold, taking the corresponding face in the database as the recognized face. The invention avoids interference from occluding objects and increases the amount of feature information, improving recognition accuracy.

Description

Face recognition method
Technical field
The present invention relates to the technical field of image processing, and more particularly to a face recognition method.
Background art
Face recognition is widely applied in video surveillance and identity verification. However, head tilt, occluding objects, shooting angle and lighting all affect recognition accuracy. Moreover, in some situations the enrolled person provides few face images, often only one, so the feature information that can be obtained is limited, which further reduces recognition accuracy. A face recognition method that solves these problems is therefore needed.
Summary of the invention
An object of the present invention is to provide a face recognition method that obtains a first face region and a second face region using an edge detection algorithm and a reference RGB value range respectively, uses their overlapping region as the region for feature extraction so that occluding objects do not affect feature extraction, and increases the amount of feature information through rotation and scaling operations, improving recognition accuracy.
To achieve these objects and other advantages, the present invention provides a face recognition method comprising:
Step 1: obtaining an image to be recognized;
Step 2: obtaining a first face region from the image to be recognized according to an edge detection algorithm, obtaining the region of the image whose color falls within a reference RGB value range as a second face region, and taking the region where the first face region and the second face region overlap as a third face region;
Step 3: obtaining a feature region from the third face region, defining a rectangular region centered on the center of the feature region, successively rotating the rectangular region about the center through 0° to 180°, dividing the rectangular region into multiple sub-regions after each rotation operation, and extracting the texture features of each sub-region to obtain a texture feature set corresponding to each rotation operation;
Step 4: comparing the texture feature set corresponding to each rotation operation with a database and computing a similarity for each rotation operation; if the similarity for every rotation operation exceeds a set threshold, taking the corresponding face in the database as the recognized face;
Step 5: if the similarity for one or more rotation operations is below the set threshold, scaling the third face region by a factor of 0.1 to 10 and repeating steps 3 and 4 on the scaled third face region.
Preferably, in the face recognition method, the reference RGB value range includes an RGB value range for facial skin, an RGB value range for the lips and an RGB value range for the eyes.
Preferably, in the face recognition method, the feature region is the eyes or the mouth.
Preferably, in the face recognition method, the rectangular region is uniformly divided into multiple sub-regions of identical rectangular shape.
Preferably, in the face recognition method, the texture features of corresponding sub-regions are compared one by one, and the similarity corresponding to each rotation operation is the ratio of the number of sub-regions with identical texture features to the total number of sub-regions.
Preferably, in the face recognition method, the database contains the face images of enrolled people and the texture feature sets corresponding to the rotation operations on those face images, the texture feature sets in the database being obtained in the same way as in step 3.
The present invention provides at least the following beneficial effects:
The present invention obtains the first face region and the second face region using an edge detection algorithm and a reference RGB value range respectively, searches their overlapping region for the feature region, places the feature region and its surrounding area in a rectangular region, divides the rectangular region into multiple sub-regions, extracts the texture features of the sub-regions and compares them with the database as a texture feature set. Local features are thus fully exploited, recognition accuracy is improved, and the influence of occluding objects on recognition is reduced. The present invention also rotates the rectangular region multiple times and scales the overlapping region, obtaining a texture feature set for each operation, which strengthens the use of local features and further improves recognition accuracy.
Further advantages, objects and features of the invention will in part be set forth in the following description and in part will become apparent to those skilled in the art upon study and practice of the invention.
Detailed description of the embodiments
The present invention is further described in detail below with reference to embodiments, so that those skilled in the art can practice it with reference to the description.
It should be understood that terms such as "having", "containing" and "comprising" used herein do not preclude the presence or addition of one or more other elements or combinations thereof.
The present invention provides a face recognition method comprising:
Step 1: obtaining an image to be recognized;
Step 2: obtaining a first face region from the image to be recognized according to an edge detection algorithm, obtaining the region of the image whose color falls within a reference RGB value range as a second face region, and taking the region where the first face region and the second face region overlap as a third face region;
Step 3: obtaining a feature region from the third face region, defining a rectangular region centered on the center of the feature region, successively rotating the rectangular region about the center through 0° to 180°, dividing the rectangular region into multiple sub-regions after each rotation operation, and extracting the texture features of each sub-region to obtain a texture feature set corresponding to each rotation operation;
Step 4: comparing the texture feature set corresponding to each rotation operation with a database and computing a similarity for each rotation operation; if the similarity for every rotation operation exceeds a set threshold, taking the corresponding face in the database as the recognized face;
Step 5: if the similarity for one or more rotation operations is below the set threshold, scaling the third face region by a factor of 0.1 to 10 and repeating steps 3 and 4 on the scaled third face region.
In the above technical solution, a database is established in advance. The database contains the identity information and face images of the enrolled people and the face feature information obtained from those face images. The face feature information is obtained as follows: the face region in the face image is obtained, the feature region is obtained from the face region, the feature region and its surrounding area are placed in a rectangular region, rotation operations are performed, the rectangular region is divided into multiple sub-regions after each rotation operation, the texture features of the sub-regions are extracted, and a texture feature set corresponding to each rotation operation is obtained; the face region is also scaled by a factor of 0.1 to 10, and the rotation, division and extraction are repeated to obtain a texture feature set corresponding to each scale-rotation combination. The database therefore contains local texture information of the facial feature region and its surrounding area at multiple angles, as well as the same local texture information after scaling, so it fully reflects the facial features of the enrolled person and improves recognition accuracy.
In the recognition process, the image to be recognized is first captured. The first face region is obtained with an edge detection algorithm, and the second face region is then obtained with the reference RGB value range, i.e., pixels whose values fall within the reference RGB range are assigned to the second face region. Because the first face region is obtained from edges alone, it may be impossible to tell whether an occluding object (such as a hat or a mask) covers the face, while the color-based method may mistake objects in the background for the face. Using the region where the two coincide as the third face region for subsequent processing therefore avoids, to a certain extent, interference from occluders or the environment during recognition. The third face region is processed by the same method used to obtain the face feature information in the database, see steps 3 and 4. The feature region is the eyes, the nose, the mouth, etc., which are easy to locate with the prior art; the rotation angles are preferably 0°, 60°, 120° and 180°, and the scale factors are preferably 0.5 and 2.
The acquired face feature information is then compared with the database: the texture features of each sub-region are compared one by one, and the similarity of the texture feature set corresponding to each rotation operation is determined. If all similarities exceed the set threshold, the person to be recognized is judged to be the enrolled person. If one or more similarities are below the set threshold, the third face region is scaled by the first factor, the rotation, division and extraction operations are performed again, and the result is compared with the corresponding data in the database. If the similarities are still not all above the set threshold, the third face region is scaled by the second factor and the rotation, division, extraction and comparison are repeated. If the similarities are all above the set threshold, the person to be recognized is judged to be the enrolled person; otherwise scaling, rotation, division, extraction and comparison are repeated until the similarities are all above the set threshold, or, if the requirement cannot be met at any scale factor, the person to be recognized is judged not to be the enrolled person. It can be seen that the present invention obtains the face region by two methods, edge detection and color, avoiding interference from occluding objects during recognition, and uses texture information at multiple scales and multiple angles, avoiding missed recognition and improving recognition accuracy.
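The patent leaves the choice of edge detector and contour heuristic open. The following Python sketch (using OpenCV and NumPy) shows one plausible reading of step 2: a Canny edge map with a largest-contour rule for the first face region, a reference-RGB-range mask for the second, and their intersection as the third. The Canny thresholds, the largest-contour heuristic and the function name third_face_region are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def third_face_region(image_bgr, rgb_low, rgb_high):
    """Step 2 sketch: intersect an edge-based face candidate with a
    colour-based one.  The Canny thresholds and the 'largest contour'
    rule are assumptions; the patent only says 'edge detection algorithm'."""
    # First face region: edge detection, then fill the largest contour.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask_edge = np.zeros(gray.shape, dtype=np.uint8)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask_edge, [largest], -1, 255, thickness=cv2.FILLED)

    # Second face region: pixels whose colour lies in the reference RGB range.
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    mask_colour = cv2.inRange(rgb, np.asarray(rgb_low, dtype=np.uint8),
                              np.asarray(rgb_high, dtype=np.uint8))

    # Third face region: the overlap of the two candidates.
    return cv2.bitwise_and(mask_edge, mask_colour)
```

The rgb_low/rgb_high arguments are the reference range derived from the enrolled images; one possible derivation is sketched below.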
In another technical solution of the face recognition method, the reference RGB value range includes an RGB value range for facial skin, an RGB value range for the lips and an RGB value range for the eyes. A preferred reference RGB value range is provided here: the reference RGB value range can be obtained from the facial skin, lip and eye colors of all enrolled people, for example by averaging them.
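As a minimal sketch of how such a reference range might be derived, assuming the averaging approach mentioned above plus a fixed tolerance band (the tolerance value and the function name are illustrative, not specified in the patent):

```python
import numpy as np

def reference_rgb_range(pixel_samples, tolerance=25):
    """Derive one reference RGB range (e.g. for skin, lips or eyes) from
    pixels sampled across all enrolled faces.  Mean +/- a fixed tolerance
    is an assumption; the text only says the ranges can be obtained from
    the enrolled colours, e.g. by averaging."""
    samples = np.asarray(pixel_samples, dtype=np.float32).reshape(-1, 3)
    mean = samples.mean(axis=0)                     # per-channel mean (R, G, B)
    low = np.clip(mean - tolerance, 0, 255).astype(np.uint8)
    high = np.clip(mean + tolerance, 0, 255).astype(np.uint8)
    return low, high

# Hypothetical usage: one range per facial component.
# skin_low, skin_high = reference_rgb_range(skin_pixels)
# lip_low,  lip_high  = reference_rgb_range(lip_pixels)
# eye_low,  eye_high  = reference_rgb_range(eye_pixels)
```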
In another technical solution of the face recognition method, the feature region is the eyes or the mouth. A preferred feature region is provided here: the eyes and the mouth carry more texture features, and each person's lip texture is unique, so the eyes and the mouth are well suited to the recognition method of the invention.
In another technical solution of the face recognition method, the rectangular region is uniformly divided into multiple sub-regions of identical rectangular shape. Rectangular sub-regions are easier to compare: the sub-regions to be compared are overlaid and compared directly, which improves computation speed.
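A sketch of step 3 under these preferences follows, using the 0°, 60°, 120° and 180° angles given in the description; the 4x4 grid, uniform LBP histograms and scikit-image are assumptions, since the patent does not name a particular texture descriptor.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def texture_feature_sets(patch_gray, angles=(0, 60, 120, 180), grid=(4, 4)):
    """Step 3 sketch: rotate the rectangular patch about its centre, split
    each rotated copy into equal rectangular cells and describe every cell
    by a uniform-LBP histogram (an assumed texture feature)."""
    h, w = patch_gray.shape[:2]
    centre = (w / 2.0, h / 2.0)
    feature_sets = {}
    for angle in angles:
        m = cv2.getRotationMatrix2D(centre, angle, 1.0)
        rotated = cv2.warpAffine(patch_gray, m, (w, h))
        cell_h, cell_w = h // grid[0], w // grid[1]
        cells = []
        for r in range(grid[0]):
            for c in range(grid[1]):
                cell = rotated[r * cell_h:(r + 1) * cell_h,
                               c * cell_w:(c + 1) * cell_w]
                lbp = local_binary_pattern(cell, P=8, R=1, method="uniform")
                hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                cells.append(hist)
        feature_sets[angle] = cells   # one texture feature set per rotation
    return feature_sets
```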
In another technical solution of the face recognition method, the texture features of corresponding sub-regions are compared one by one, and the similarity corresponding to each rotation operation is the ratio of the number of sub-regions with identical texture features to the total number of sub-regions. Whether the textures of two rectangular sub-regions are identical can be decided empirically or from statistical results, the similarity is the proportion of identical sub-regions, and the preferred set threshold is 80%.
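A sketch of this similarity rule: each pair of corresponding cells is declared identical or not (here by thresholding a chi-square-style histogram distance, which is an assumption since the patent leaves the per-cell test to empirical or statistical tuning), the rotation's similarity is the fraction of identical cells, and acceptance requires every rotation to clear the preferred 80% threshold.

```python
import numpy as np

def rotation_similarity(cells_query, cells_enrolled, cell_threshold=0.25):
    """Similarity for one rotation = matching cells / total cells (claim 5).
    The chi-square-style distance and its cut-off are illustrative assumptions."""
    matches = 0
    for hq, he in zip(cells_query, cells_enrolled):
        hq, he = np.asarray(hq), np.asarray(he)
        distance = 0.5 * np.sum((hq - he) ** 2 / (hq + he + 1e-10))
        if distance < cell_threshold:
            matches += 1
    return matches / len(cells_query)

def accept(query_sets, enrolled_sets, threshold=0.80):
    """Step 4 sketch: accept only if every rotation clears the 80% threshold."""
    return all(rotation_similarity(query_sets[a], enrolled_sets[a]) >= threshold
               for a in query_sets)
```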
In another technical solution of the face recognition method, the database contains the face images of the enrolled people and the texture feature sets corresponding to the rotation operations on those face images, and the texture feature sets in the database are obtained in the same way as in step 3. The database thus obtains its face feature information by the same method used in the recognition process itself, which makes comparison straightforward.
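Putting the pieces together, a hedged sketch of steps 3 to 5 might look as follows. It reuses the texture_feature_sets and accept helpers from the sketches above; the patch_extractor callback (which crops the rectangular region around the eyes or mouth) and the {person_id: {scale: feature_sets}} database layout are assumptions about how an enrollment database could be organized.

```python
import cv2

def identify(face_region, patch_extractor, database,
             scales=(0.5, 2.0), threshold=0.80):
    """Steps 3-5 sketch: try the original third face region first, then the
    preferred 0.5x and 2x scalings if any rotation falls below the threshold."""
    for scale in (1.0,) + tuple(scales):
        scaled = face_region if scale == 1.0 else cv2.resize(
            face_region, None, fx=scale, fy=scale)
        query_sets = texture_feature_sets(patch_extractor(scaled))
        for person_id, per_scale in database.items():
            # Compare against the enrolled feature sets for the same scale.
            enrolled_sets = per_scale.get(scale, per_scale[1.0])
            if accept(query_sets, enrolled_sets, threshold):
                return person_id        # recognised as this enrolled person
    return None                         # not any enrolled person
```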
Although embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and the embodiments, and the invention can be applied to various fields suited to it. Those skilled in the art can readily make further modifications, so without departing from the general concept defined by the claims and their equivalents, the present invention is not limited to the specific details and embodiments shown and described herein.

Claims (6)

1. A face recognition method, characterized by comprising:
Step 1: obtaining an image to be recognized;
Step 2: obtaining a first face region from the image to be recognized according to an edge detection algorithm, selecting from the image to be recognized the region whose color falls within a reference RGB value range as a second face region, and taking the region where the first face region and the second face region overlap as a third face region;
Step 3: obtaining a feature region from the third face region, defining a rectangular region centered on the center of the feature region, successively rotating the rectangular region about the center through 0° to 180°, dividing the rectangular region into multiple sub-regions after each rotation operation, and extracting the texture features of each sub-region to obtain a texture feature set corresponding to each rotation operation;
Step 4: comparing the texture feature set corresponding to each rotation operation with a database and computing a similarity for each rotation operation; if the similarity for every rotation operation exceeds a set threshold, taking the corresponding face in the database as the recognized face;
Step 5: if the similarity for one or more rotation operations is below the set threshold, scaling the third face region by a factor of 0.1 to 10, and repeating steps 3 and 4 on the scaled third face region.
2. The face recognition method according to claim 1, characterized in that the reference RGB value range includes an RGB value range for facial skin, an RGB value range for the lips and an RGB value range for the eyes.
3. The face recognition method according to claim 1, characterized in that the feature region is the eyes or the mouth.
4. The face recognition method according to claim 1, characterized in that the rectangular region is uniformly divided into multiple sub-regions of identical rectangular shape.
5. The face recognition method according to claim 4, characterized in that the texture features of corresponding sub-regions are compared one by one, and the similarity corresponding to each rotation operation is the ratio of the number of sub-regions with identical texture features to the total number of sub-regions.
6. The face recognition method according to claim 1, characterized in that the database contains the face images of enrolled people and the texture feature sets corresponding to the rotation operations on the face images of the enrolled people, the texture feature sets in the database being obtained in the same way as in step 3.
CN201810796508.5A 2018-07-19 2018-07-19 Face recognition method Active CN109002799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810796508.5A CN109002799B (en) 2018-07-19 2018-07-19 Face recognition method


Publications (2)

Publication Number Publication Date
CN109002799A (en) 2018-12-14
CN109002799B (en) 2021-08-24

Family

ID=64596747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810796508.5A Active CN109002799B (en) 2018-07-19 2018-07-19 Face recognition method

Country Status (1)

Country Link
CN (1) CN109002799B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1811793A (en) * 2006-03-02 2006-08-02 复旦大学 Automatic positioning method for characteristic point of human faces
US20090310828A1 (en) * 2007-10-12 2009-12-17 The University Of Houston System An automated method for human face modeling and relighting with application to face recognition
US20170343481A1 (en) * 2016-05-27 2017-11-30 Purdue Research Foundation Methods and systems for crack detection
CN106250843A (en) * 2016-07-28 2016-12-21 北京师范大学 A kind of method for detecting human face based on forehead region and system

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886213A (en) * 2019-02-25 2019-06-14 湖北亿咖通科技有限公司 Fatigue state judgment method, electronic equipment and computer readable storage medium
CN109886213B (en) * 2019-02-25 2021-01-08 湖北亿咖通科技有限公司 Fatigue state determination method, electronic device, and computer-readable storage medium
CN112131915B (en) * 2019-06-25 2023-03-24 杭州海康威视数字技术股份有限公司 Face attendance system, camera and code stream equipment
CN112131915A (en) * 2019-06-25 2020-12-25 杭州海康威视数字技术股份有限公司 Face attendance system, camera and code stream equipment
CN110956768A (en) * 2019-12-05 2020-04-03 重庆电子工程职业学院 Automatic anti-theft device of intelligence house
CN111582983A (en) * 2020-05-07 2020-08-25 悠尼客(上海)企业管理有限公司 Personalized control method based on face recognition and customer behaviors
CN111814603A (en) * 2020-06-23 2020-10-23 汇纳科技股份有限公司 Face recognition method, medium and electronic device
CN111814603B (en) * 2020-06-23 2023-09-05 汇纳科技股份有限公司 Face recognition method, medium and electronic equipment
CN111768545A (en) * 2020-06-28 2020-10-13 北京伟杰东博信息科技有限公司 Traffic safety monitoring method and system
CN111768545B (en) * 2020-06-28 2021-07-23 广东邦盛北斗科技股份公司 Traffic safety monitoring method and system
CN113011277A (en) * 2021-02-25 2021-06-22 日立楼宇技术(广州)有限公司 Data processing method, device, equipment and medium based on face recognition
CN113011277B (en) * 2021-02-25 2023-11-21 日立楼宇技术(广州)有限公司 Face recognition-based data processing method, device, equipment and medium
CN113420663A (en) * 2021-06-23 2021-09-21 深圳市海清视讯科技有限公司 Child face recognition method and system
CN113420663B (en) * 2021-06-23 2022-02-22 深圳市海清视讯科技有限公司 Child face recognition method and system

Also Published As

Publication number Publication date
CN109002799B (en) 2021-08-24


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant