CN109002799B - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN109002799B
CN109002799B (application CN201810796508.5A)
Authority
CN
China
Prior art keywords
face
region
rotation operation
recognition method
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810796508.5A
Other languages
Chinese (zh)
Other versions
CN109002799A (en)
Inventor
金益 (Jin Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Vocational University
Original Assignee
Suzhou Vocational University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Vocational University filed Critical Suzhou Vocational University
Priority to CN201810796508.5A priority Critical patent/CN109002799B/en
Publication of CN109002799A publication Critical patent/CN109002799A/en
Application granted granted Critical
Publication of CN109002799B publication Critical patent/CN109002799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method comprising the following steps: acquiring an image to be recognized; obtaining a first face region with an edge detection algorithm and a second face region from a reference RGB value range, and taking the overlap of the first and second face regions as a third face region; acquiring a feature region from the third face region, determining a rectangular region from the center of the feature region, sequentially rotating the rectangular region around that center through 0-180 degrees, dividing the rectangular region into a plurality of small regions after each rotation operation, and extracting the texture feature of each small region; comparing the texture feature set of each rotation operation with a database, calculating a similarity for each rotation operation, and, if the similarity for every rotation operation is higher than a set threshold, taking the corresponding face in the database as the recognized face. The invention avoids the influence of occlusions on recognition, increases the amount of feature information, and improves recognition accuracy.

Description

Face recognition method
Technical Field
The invention relates to the technical field of image processing, and more particularly to a face recognition method.
Background
Face recognition is widely used in video surveillance and identity verification, but head tilt, occlusions, shooting angle, and lighting all affect recognition accuracy. Moreover, in some settings a registered person provides only a few face images, often just one, so the feature information that can be extracted is limited, which also degrades accuracy. A face recognition method that addresses these problems is therefore desirable.
Disclosure of Invention
The invention aims to provide a face recognition method that obtains a first face region with an edge detection algorithm and a second face region from a reference RGB value range, uses their overlap as the feature extraction region so that occlusions do not corrupt feature extraction, and increases the amount of feature information through rotation and scaling operations, thereby improving recognition accuracy.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided a face recognition method including:
step one, acquiring an image to be recognized;
step two, acquiring a first face region from the image to be recognized according to an edge detection algorithm, selecting the region of the image whose color falls within a reference RGB value range as a second face region, and taking the overlap of the first face region and the second face region as a third face region;
step three, acquiring a feature region from the third face region, determining a rectangular region from the center of the feature region, sequentially rotating the rectangular region around that center through 0-180 degrees, dividing the rectangular region into a plurality of small regions after each rotation operation, extracting the texture feature of each small region, and obtaining a texture feature set for each rotation operation;
step four, comparing the texture feature set of each rotation operation with the database and calculating a similarity for each rotation operation, and, if the similarity for every rotation operation is higher than a set threshold, taking the corresponding face in the database as the recognized face;
step five, if the similarity for one or more rotation operations is lower than the set threshold, scaling the third face region by a factor of 0.1-10 and repeating steps three and four on the scaled third face region.
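The region pipeline of steps one and two can be sketched as follows. This is a minimal illustration, not the patent's implementation: the patent does not name its edge detection algorithm, so a simple gradient-magnitude mask stands in for the first face region, and the reference RGB range passed in is hypothetical.

```python
import numpy as np

def third_face_region(image_rgb, lower_rgb, upper_rgb, edge_thresh=30):
    """Overlap of an edge-based mask and a color-based mask (steps one and two).

    image_rgb is an H x W x 3 uint8 array; lower_rgb/upper_rgb bound a
    hypothetical reference RGB value range.
    """
    gray = image_rgb.mean(axis=2)
    # First face region: a crude gradient-magnitude edge mask, standing in
    # for the patent's unspecified edge detection algorithm.
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    first = (gx + gy) > edge_thresh
    # Second face region: pixels whose color lies inside the reference range.
    second = np.logical_and.reduce(
        [(image_rgb[..., c] >= lower_rgb[c]) & (image_rgb[..., c] <= upper_rgb[c])
         for c in range(3)])
    # Third face region: where both masks agree.
    return first & second
```

Taking the overlap means a background object that merely shares skin color is rejected by the edge mask, while an occluder whose color falls outside the reference range is rejected by the color mask.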
Preferably, in the face recognition method, the reference RGB value range includes an RGB value range for facial skin, an RGB value range for the lips, and an RGB value range for the eyes.
Preferably, in the face recognition method, the feature region is an eye or the mouth.
Preferably, in the face recognition method, the rectangular region is uniformly divided into a plurality of small regions of identical rectangular shape.
Preferably, in the face recognition method, the texture features of corresponding small regions are compared respectively, and the similarity for each rotation operation is the ratio of the number of small regions whose texture features match to the total number of small regions.
Preferably, in the face recognition method, the database stores each registered person's face image and the texture feature sets produced by the rotation operations on that image, the texture feature sets in the database being obtained in the same manner as in step three.
The invention at least comprises the following beneficial effects:
A first face region and a second face region are obtained with an edge algorithm and a reference RGB value range respectively, a feature region is located within their overlap, the feature region and its surrounding area are enclosed in a rectangular region, the rectangular region is divided into a plurality of small regions, and the texture features of the small regions are extracted and compared with the database as a texture feature set. This makes full use of local features, improves recognition accuracy, and reduces the effect of occlusions on the recognition result. By rotating the rectangular region several times and scaling the overlap region, separate texture feature sets are obtained, which strengthens the use of local features and further improves recognition accuracy.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Detailed Description
The present invention is further described in detail below with reference to examples so that those skilled in the art can practice the invention with reference to the description.
It will be understood that terms such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
The invention provides a face recognition method, which comprises the following steps:
step one, acquiring an image to be recognized;
step two, acquiring a first face region from the image to be recognized according to an edge detection algorithm, selecting the region of the image whose color falls within a reference RGB value range as a second face region, and taking the overlap of the first face region and the second face region as a third face region;
step three, acquiring a feature region from the third face region, determining a rectangular region from the center of the feature region, sequentially rotating the rectangular region around that center through 0-180 degrees, dividing the rectangular region into a plurality of small regions after each rotation operation, extracting the texture feature of each small region, and obtaining a texture feature set for each rotation operation;
step four, comparing the texture feature set of each rotation operation with the database and calculating a similarity for each rotation operation, and, if the similarity for every rotation operation is higher than a set threshold, taking the corresponding face in the database as the recognized face;
step five, if the similarity for one or more rotation operations is lower than the set threshold, scaling the third face region by a factor of 0.1-10 and repeating steps three and four on the scaled third face region.
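Step five's scaling operation can be sketched with a nearest-neighbour resampler. This is only an illustrative stand-in: the patent specifies the allowed range of factors (0.1-10) but not an interpolation method.

```python
import numpy as np

def scale_nn(region, factor):
    """Nearest-neighbour scaling of the third face region for step five.

    The patent allows factors from 0.1 to 10 and prefers 0.5 and 2; it does
    not name an interpolation method, so nearest-neighbour is assumed here.
    """
    h, w = region.shape[:2]
    nh = max(1, int(round(h * factor)))
    nw = max(1, int(round(w * factor)))
    # For each output row/column, pick the nearest source row/column.
    ys = np.clip((np.arange(nh) / factor).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / factor).astype(int), 0, w - 1)
    return region[np.ix_(ys, xs)]
```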
In the above technical solution, a database is established in advance. It contains each registered person's identity information, face image, and the face feature information obtained from that image. The feature information is obtained as follows: the face region is extracted from the face image, a feature region is obtained from the face region, the feature region and its surrounding area are enclosed in a rectangular region, and rotation operations are performed; after each rotation operation the rectangular region is divided into a plurality of small regions, the texture feature of each small region is extracted, and a texture feature set is obtained for each rotation operation; the face region is then scaled by factors of 0.1-10 and the rotation, division, and extraction are repeated, yielding a texture feature set for each scale-rotation combination. The database therefore contains local texture information of the face feature region and its surroundings at multiple angles, together with scaled local texture information, which fully reflects the registered person's facial features and improves recognition accuracy. During recognition, the image to be recognized is first acquired, a first face region is obtained with the edge algorithm, and a second face region is obtained from the reference RGB value range, i.e. it contains the pixels whose values fall inside that range. The first face region, obtained from edges alone, cannot reveal whether an occluder (a hat, a mask, and the like) covers the face, while color alone may mistake a background object for a face; taking the overlap of the two regions as the third face region therefore avoids, to a degree, interference from occluders and the environment.
The face feature information of the third face region is acquired the same way as the feature information in the database, i.e. by steps three and four. The feature region is the eyes, nose, mouth, or similar, easily located with existing techniques; the preferred rotation angles are 0, 60, 120, and 180 degrees, and the preferred scaling factors are 0.5 and 2. The extracted feature information is then compared with the database: the texture features of corresponding small regions are compared one by one and a similarity is computed for each rotation operation. If every similarity is higher than the set threshold, the person to be recognized is judged to be the registered person. If one or more similarities fall below the threshold, the third face region is scaled by the first factor and the rotation, division, extraction, and comparison are repeated; if needed, it is scaled by the second factor and compared against the corresponding data in the database in the same way. If every similarity after some scaling exceeds the threshold, the person is judged to be the registered person; if the threshold is never reached after all scaling factors have been tried, the person is judged not to be the registered person. By obtaining the face region through both an edge algorithm and color, the invention avoids interference from occlusions, and by using texture information at multiple scales and angles it avoids missed recognitions, improving accuracy.
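The rotation-and-division procedure of step three, with the preferred angles of 0, 60, 120, and 180 degrees, might look like the following sketch. The patent does not specify its texture feature, so the mean gradient magnitude per cell is a hypothetical stand-in, and the nearest-neighbour rotation is likewise only illustrative.

```python
import numpy as np

def rotate_nn(patch, angle_deg):
    """Rotate a 2-D patch about its center (nearest-neighbour, illustrative)."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    ys, xs = np.indices((h, w))
    # Inverse mapping: for each output pixel, sample the nearest source pixel.
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return patch[sy, sx]

def texture_feature_sets(patch, angles=(0, 60, 120, 180), grid=(4, 4)):
    """One texture feature set per rotation operation (step three).

    The rectangular region is rotated, divided into equal rectangular cells,
    and a per-cell statistic (mean gradient magnitude, a stand-in for the
    patent's unspecified texture feature) is collected.
    """
    gh, gw = grid
    sets = {}
    for angle in angles:
        rotated = rotate_nn(patch, angle)
        ch, cw = rotated.shape[0] // gh, rotated.shape[1] // gw
        feats = []
        for i in range(gh):
            for j in range(gw):
                cell = rotated[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
                gx = np.abs(np.diff(cell, axis=1)).mean()
                gy = np.abs(np.diff(cell, axis=0)).mean()
                feats.append(gx + gy)
        sets[angle] = feats
    return sets
```

Each rotation yields its own feature set, so the database and the query both carry one set per angle (and, after step five, one per scale-angle combination).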
In another technical solution, in the face recognition method, the reference RGB value range includes an RGB value range for facial skin, an RGB value range for the lips, and an RGB value range for the eyes. This preferred reference range may be obtained by sampling the facial skin, lip, and eye colors of all registered persons, for example as averages.
In another technical solution, in the face recognition method, the feature region is an eye or the mouth. These are preferred feature regions: the eyes and mouth carry more texture features, each person's lip texture is unique, and both suit the recognition method of the invention.
In another technical solution, the face recognition method uniformly divides the rectangular region into a plurality of small regions of identical rectangular shape. Rectangular small regions are easy to compare: the small regions being compared are overlaid for the comparison, which speeds up computation.
In another technical solution, the face recognition method compares the texture features of corresponding small regions respectively, and the similarity for each rotation operation is the ratio of the number of small regions whose texture features match to the total number of small regions. Whether two small regions have the same texture may be decided empirically or statistically, and a threshold of 80% is preferred.
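The similarity measure described above, the proportion of matching small regions compared against the preferred 80% threshold, can be sketched as follows. The tolerance-based equality test is a hypothetical stand-in for the patent's empirically or statistically decided same-texture judgement.

```python
def similarity(set_a, set_b, tol=1e-6):
    """Proportion of corresponding small regions whose texture features match.

    Equality within a tolerance stands in for the patent's empirical
    "same texture" judgement; set_a and set_b are equal-length lists of
    per-cell texture features.
    """
    same = sum(1 for a, b in zip(set_a, set_b) if abs(a - b) <= tol)
    return same / len(set_a)

def is_recognized(query_sets, db_sets, threshold=0.8):
    """Step four: the face matches only if every rotation operation's
    similarity exceeds the threshold (the patent prefers 80%)."""
    return all(similarity(query_sets[angle], db_sets[angle]) > threshold
               for angle in query_sets)
```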
In another technical solution, the database stores each registered person's face image and the texture feature sets produced by the rotation operations on that image, the texture feature sets in the database being obtained in the same manner as in step three. Because the database's feature information is acquired the same way as during recognition, comparison is straightforward.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments. It is fully applicable to various fields suited to the invention, and further modifications will readily occur to those skilled in the art; the invention is therefore not limited to the details shown and described herein, provided the general concept defined by the claims and their equivalents is not departed from.

Claims (6)

1. A face recognition method, characterized by comprising the following steps:
step one, acquiring an image to be recognized;
step two, acquiring a first face region from the image to be recognized according to an edge detection algorithm, selecting the region of the image whose color falls within a reference RGB value range as a second face region, and taking the overlap of the first face region and the second face region as a third face region;
step three, acquiring a feature region from the third face region, determining a rectangular region from the center of the feature region, sequentially rotating the rectangular region around that center through 0-180 degrees, dividing the rectangular region into a plurality of small regions after each rotation operation, extracting the texture feature of each small region, and obtaining a texture feature set for each rotation operation;
step four, comparing the texture feature set of each rotation operation with the database and calculating a similarity for each rotation operation, and, if the similarity for every rotation operation is higher than a set threshold, taking the corresponding face in the database as the recognized face;
step five, if the similarity for one or more rotation operations is lower than the set threshold, scaling the third face region by a factor of 0.1-10 and repeating steps three, four, and five on the scaled third face region.
2. The face recognition method of claim 1, wherein the reference RGB value range includes an RGB value range for facial skin, an RGB value range for the lips, and an RGB value range for the eyes.
3. The face recognition method of claim 1, wherein the feature region is an eye or the mouth.
4. The face recognition method of claim 1, wherein the rectangular region is uniformly divided into a plurality of small regions of identical rectangular shape.
5. The face recognition method of claim 4, wherein the texture features of corresponding small regions are compared respectively, and the similarity for each rotation operation is the ratio of the number of small regions whose texture features match to the total number of small regions.
6. The face recognition method of claim 1, wherein the database stores the registered person's face image and the texture feature sets produced by the rotation operations on that image, the texture feature sets in the database being obtained in the same manner as in step three.
CN201810796508.5A 2018-07-19 2018-07-19 Face recognition method Active CN109002799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810796508.5A CN109002799B (en) 2018-07-19 2018-07-19 Face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810796508.5A CN109002799B (en) 2018-07-19 2018-07-19 Face recognition method

Publications (2)

Publication Number Publication Date
CN109002799A CN109002799A (en) 2018-12-14
CN109002799B (en) 2021-08-24

Family

ID=64596747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810796508.5A Active CN109002799B (en) 2018-07-19 2018-07-19 Face recognition method

Country Status (1)

Country Link
CN (1) CN109002799B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886213B (en) * 2019-02-25 2021-01-08 湖北亿咖通科技有限公司 Fatigue state determination method, electronic device, and computer-readable storage medium
CN112131915B (en) * 2019-06-25 2023-03-24 杭州海康威视数字技术股份有限公司 Face attendance system, camera and code stream equipment
CN110956768A (en) * 2019-12-05 2020-04-03 重庆电子工程职业学院 Automatic anti-theft device of intelligence house
CN111582983A (en) * 2020-05-07 2020-08-25 悠尼客(上海)企业管理有限公司 Personalized control method based on face recognition and customer behaviors
CN111814603B (en) * 2020-06-23 2023-09-05 汇纳科技股份有限公司 Face recognition method, medium and electronic equipment
CN111768545B (en) * 2020-06-28 2021-07-23 广东邦盛北斗科技股份公司 Traffic safety monitoring method and system
CN113011277B (en) * 2021-02-25 2023-11-21 日立楼宇技术(广州)有限公司 Face recognition-based data processing method, device, equipment and medium
CN113420663B (en) * 2021-06-23 2022-02-22 深圳市海清视讯科技有限公司 Child face recognition method and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN1811793A (en) * 2006-03-02 2006-08-02 复旦大学 Automatic positioning method for characteristic point of human faces
CN106250843A (en) * 2016-07-28 2016-12-21 北京师范大学 A kind of method for detecting human face based on forehead region and system

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8090160B2 (en) * 2007-10-12 2012-01-03 The University Of Houston System Automated method for human face modeling and relighting with application to face recognition
US10753881B2 (en) * 2016-05-27 2020-08-25 Purdue Research Foundation Methods and systems for crack detection

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN1811793A (en) * 2006-03-02 2006-08-02 复旦大学 Automatic positioning method for characteristic point of human faces
CN106250843A (en) * 2016-07-28 2016-12-21 北京师范大学 A kind of method for detecting human face based on forehead region and system

Also Published As

Publication number Publication date
CN109002799A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN109002799B (en) Face recognition method
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN109344874A (en) A kind of automatic chromosome analysis method and system based on deep learning
Li et al. Person-independent head pose estimation based on random forest regression
Jiang et al. Mathematical-morphology-based edge detectors for detection of thin edges in low-contrast regions
CN112232332B (en) Non-contact palm detection method based on video sequence
CN106650628B (en) Fingertip detection method based on three-dimensional K curvature
Li et al. A dorsal hand vein pattern recognition algorithm
Puhan et al. A novel iris database indexing method using the iris color
Lin et al. A feature-based gender recognition method based on color information
CN116665258B (en) Palm image finger seam segmentation method
Dahal et al. Incorporating skin color for improved face detection and tracking system
CN104156689B (en) Method and device for positioning feature information of target object
JP2018088236A (en) Image processing device, image processing method, and image processing program
Yang et al. A skeleton extracting algorithm for dorsal hand vein pattern
Mlyahilu et al. A fast fourier transform with brute force algorithm for detection and localization of white points on 3d film pattern images
Chang et al. A novel retinal blood vessel segmentation method based on line operator and edge detector
Zhou et al. A robust algorithm for iris localization based on radial symmetry and circular integro differential operator
Noruzi et al. Robust iris recognition in unconstrained environments
CN111914585A (en) Iris identification method and system
Yi et al. Face detection method based on skin color segmentation and facial component localization
Lakshmi et al. Plant leaf image detection method using a midpoint circle algorithm for shape-based feature extraction
Cheng et al. Self-assessment for optic disc segmentation
CN117075730B (en) 3D virtual exhibition hall control system based on image recognition technology
Dewi et al. Robust pupil localization algorithm under off-axial pupil occlusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant