CN110334643B - Feature evaluation method and device based on face recognition
- Publication number: CN110334643B
- Application number: CN201910587430.0A
- Authority
- CN
- China
- Prior art keywords
- feature
- face
- region
- picture
- pixel point
- Prior art date
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a feature evaluation method and device based on face recognition. A face picture is segmented by its pixel points to obtain feature regions corresponding one-to-one to the parts of the face; a feature vector is extracted from each feature region and averaged to determine an average feature vector, and the feature point set of each feature region in the face picture is determined from the average vector and the individual feature vectors; feature evaluation is then performed on the user corresponding to the face picture according to these feature point sets to obtain several items of feature data for the user. Compared with the prior art, the facial parts are first segmented and the feature points of each region are then obtained from that region's feature vectors, so feature recognition can proceed region by region. This raises the similarity between feature points recognized from the same person's facial features and thus improves the accuracy of subsequent feature evaluation.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular to a feature evaluation method and device based on face recognition.
Background
Face recognition here means aggregating the face pictures in an album that belong to the same person into one cluster. During aggregation, feature points are usually identified directly from the whole picture. Because pose, expression and the like may differ between face pictures, the similarity between the feature points identified from the same person's facial features drops, the pictures belonging to that person cannot be aggregated accurately, and the accuracy of subsequent feature evaluation is therefore low.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a feature evaluation method and device based on face recognition that improve the accuracy of feature evaluation.
To solve the above problem, an embodiment of the present application provides a feature evaluation method based on face recognition, suitable for execution in a computing device and comprising at least the following steps:
identifying each first pixel point in the face picture;
inputting each first pixel point into a trained decision tree and recursively segmenting the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face;
extracting a first feature vector from each first feature region, calculating their average to determine a first average feature vector, and connecting the first average feature vector with each first feature vector to obtain first feature point sets corresponding one-to-one to the first feature regions; wherein each first feature point set comprises a plurality of first feature points;
performing feature evaluation on the user corresponding to the face picture according to the plurality of first feature point sets to obtain several items of feature data for the user; wherein the feature data include age, gender and expression.
Further, the method further comprises:
acquiring an original picture containing the user's head portrait, and cropping and compressing the original picture to obtain the face picture.
Further, the decision tree is trained as follows:
acquiring a training picture containing a human face;
identifying each second pixel point of the training picture, inputting the second pixel points into the decision tree to be trained, and recursively segmenting the training picture according to the second pixel points based on a cascade classifier to obtain second feature regions corresponding one-to-one to the parts of the face;
extracting a second feature vector from each second feature region, calculating their average to determine a second average feature vector, and connecting the second average feature vector with each second feature vector to obtain a second feature point set for each second feature region in the training picture; wherein each second feature point set comprises a plurality of second feature points;
and comparing the second feature point sets with the corresponding predicted feature point sets, and adjusting the calculation parameters of the cascade classifier according to the comparison result until the comparison error between the second feature point sets and the predicted feature point sets falls within a preset range.
Further, the first feature regions comprise a face contour region, an eye region, a nose region, a mouth region and an ear region;
and inputting each first pixel point into a trained decision tree and recursively segmenting the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face specifically comprises:
inputting each first pixel point into the trained decision tree and segmenting the face picture according to the first pixel points to obtain a face contour region representing the face contour image, and then segmenting the face contour region according to all the first pixel points inside it to obtain the eye region, nose region, mouth region and ear region.
Further, an embodiment of the present application further provides a feature evaluation device based on face recognition, including:
the pixel identification module, configured to identify each first pixel point in the face picture;
the region segmentation module, configured to input each first pixel point into a trained decision tree and recursively segment the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face;
the feature extraction module, configured to extract a first feature vector from each first feature region, calculate their average to determine a first average feature vector, and connect the first average feature vector with each first feature vector to obtain first feature point sets corresponding one-to-one to the first feature regions; wherein each first feature point set comprises a plurality of first feature points;
the feature evaluation module, configured to perform feature evaluation on the user corresponding to the face picture according to the plurality of first feature point sets to obtain several items of feature data for the user; wherein the feature data include age, gender and expression.
Further, the device further comprises:
the picture processing module, configured to acquire an original picture containing the user's head portrait and to crop and compress the original picture to obtain the face picture.
Further, the decision tree is trained as follows:
acquiring a training picture containing a human face;
identifying each second pixel point of the training picture, inputting the second pixel points into the decision tree to be trained, and recursively segmenting the training picture according to the second pixel points based on a cascade classifier to obtain second feature regions corresponding one-to-one to the parts of the face;
extracting a second feature vector from each second feature region, calculating their average to determine a second average feature vector, and connecting the second average feature vector with each second feature vector to obtain second feature point sets corresponding one-to-one to the second feature regions; wherein each second feature point set comprises a plurality of second feature points;
and comparing the second feature point sets with the corresponding predicted feature point sets, and adjusting the calculation parameters of the cascade classifier according to the comparison result until the comparison error between the second feature point sets and the predicted feature point sets falls within a preset range.
Further, the first feature regions comprise a face contour region, an eye region, a nose region, a mouth region and an ear region;
and the region segmentation module is specifically configured to:
input each first pixel point into the trained decision tree and segment the face picture according to the first pixel points to obtain a face contour region representing the face contour image, and then segment the face contour region according to all the first pixel points inside it to obtain the eye region, nose region, mouth region and ear region.
Implementing the embodiments of the present application has the following beneficial effects:
In the feature evaluation method and device based on face recognition provided by the embodiments of the present application, the face picture is segmented by its pixel points to obtain feature regions corresponding one-to-one to the parts of the face; a feature vector is extracted from each feature region and averaged to determine an average feature vector, and the feature point set of each feature region in the face picture is determined from the average vector and the individual feature vectors; feature evaluation is then performed on the user corresponding to the face picture according to these feature point sets to obtain several items of feature data for the user. Compared with the prior art, the facial parts are first segmented and the feature points of each region are then obtained from that region's feature vectors, so feature recognition can proceed region by region. This raises the similarity between feature points recognized from the same person's facial features and thus improves the accuracy of subsequent feature evaluation.
Drawings
FIG. 1 is a schematic flowchart of a feature evaluation method based on face recognition according to an embodiment of the present application;
FIG. 2 is a schematic view of region segmentation provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a feature point acquisition result provided by an embodiment of the present application;
FIG. 4 is a flowchart of a decision tree training method provided by an embodiment of the present application;
FIG. 5 is a flowchart of a feature evaluation method based on face recognition according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of a feature evaluation device based on face recognition according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a feature evaluation device based on face recognition according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Referring to FIG. 1, which is a flowchart of a feature evaluation method based on face recognition according to an embodiment of the present application, the feature evaluation method includes steps S11 to S14, as follows:
step S11, each first pixel point in the face picture is identified.
In this embodiment, the face picture may be in any format. After a picture is received, it is uniformly converted into a preset format, which avoids subsequent picture recognition errors caused by differing formats. Each first pixel point in the picture is identified by conventional means in the prior art, which are not repeated here.
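As a concrete illustration, the following is a minimal sketch of step S11 in Python, assuming OpenCV and NumPy; the function names and the uniform in-memory BGR representation are illustrative choices, not details from the patent:

```python
# Minimal sketch of step S11 (assumption: OpenCV decoding pictures of any
# common format into one uniform BGR array stands in for the "preset
# format" conversion described above).
import cv2
import numpy as np

def load_face_picture(path: str) -> np.ndarray:
    """Read a picture in any common format and return a uniform pixel array."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)  # JPEG/PNG/BMP/... all decode alike
    if img is None:
        raise ValueError(f"could not decode picture: {path}")
    return img

def first_pixel_points(img: np.ndarray):
    """Enumerate each first pixel point as (x, y, BGR value)."""
    h, w = img.shape[:2]
    for y in range(h):
        for x in range(w):
            yield x, y, img[y, x]
```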
Step S12, each first pixel point is input into a trained decision tree, and the face picture is recursively segmented according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face.
In this embodiment, the first feature regions include a face contour region, an eye region, a nose region, a mouth region and an ear region.
Specifically, each first pixel point is input into the trained decision tree, and the face picture is segmented according to the first pixel points to obtain a face contour region representing the face contour image; the face contour region is then segmented according to all the first pixel points inside it to obtain the eye region, nose region, mouth region and ear region.
In this embodiment, the input picture is recursively partitioned into rectangles according to its pixels using a classification and regression tree (CART) algorithm within a cascade of classifiers. The trained decision tree judges, from the characteristics of the current pixel point, whether that pixel belongs to a given rectangle, and so on, yielding the rectangles of the face contour, eyes, nose, mouth and ears, i.e. the first feature regions. A schematic diagram of the recursive region segmentation is shown in FIG. 2.
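To make the region-segmentation idea concrete, here is a hedged sketch: the patent trains its own decision tree, and OpenCV's stock Haar cascade classifiers (a related but different cascade technique) are substituted here purely to show a face picture being carved into rectangular feature regions; the cascade file names are OpenCV's bundled models, not the patent's:

```python
# Illustrative only: OpenCV Haar cascades stand in for the patent's trained
# decision tree. Each detector returns rectangles, mirroring the recursive
# "face contour first, then inner parts" segmentation described above.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def segment_regions(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    regions = {}
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return regions
    x, y, w, h = faces[0]                     # face contour region first
    regions["face_contour"] = (int(x), int(y), int(w), int(h))
    face_roi = gray[y:y + h, x:x + w]         # then segment inside the contour
    eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=5)
    regions["eyes"] = [(int(x + ex), int(y + ey), int(ew), int(eh))
                       for ex, ey, ew, eh in eyes]
    return regions
```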
Step S13, a first feature vector is extracted from each first feature region and their average is calculated to determine the first average feature vector; the first average feature vector is then connected with each first feature vector to obtain the first feature point sets corresponding one-to-one to the first feature regions.
The first feature point set includes a plurality of first feature points.
In this embodiment, an average shape is computed over the first feature vectors of the first feature regions (i.e., the feature point average is selected) and connected with the first feature vectors to obtain the first feature point set of each first feature region. Together, the first feature point sets comprise 72 feature points in total, with which the parts of a face picture are marked, as shown in FIG. 3.
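The averaging-and-connecting step can be pictured numerically. The sketch below assumes NumPy, uses random data in place of real region descriptors, and splits the 72 points across five regions as 24/16/12/12/8, a split the patent does not specify:

```python
# Toy sketch of step S13 (assumption: each region already yields a
# (points x 2) array of feature vectors; random data stands in for them,
# and the 24/16/12/12/8 split of the 72 points is purely illustrative).
import numpy as np

rng = np.random.default_rng(0)
region_vectors = {name: rng.normal(size=(n_pts, 2))
                  for name, n_pts in [("contour", 24), ("eyes", 16),
                                      ("nose", 12), ("mouth", 12), ("ears", 8)]}

def feature_point_set(vectors: np.ndarray) -> np.ndarray:
    """Connect the average feature vector with each region feature vector."""
    mean_vec = vectors.mean(axis=0, keepdims=True)   # first average feature vector
    return np.concatenate([mean_vec, vectors], axis=0)

point_sets = {name: feature_point_set(v) for name, v in region_vectors.items()}
print(sum(len(v) for v in region_vectors.values()))  # 24+16+12+12+8 = 72 points
```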
Step S14, feature evaluation is performed on the user corresponding to the face picture according to the plurality of first feature point sets, obtaining several items of feature data for the user.
The feature data include age, gender and expression.
In this embodiment, feature evaluation is performed on the face picture according to the plurality of first feature point sets using an artificial intelligence algorithm, obtaining the user's facial data, including age, gender, expression, face shape, facial attractiveness (颜值) and other information.
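The patent leaves the "artificial intelligence algorithm" open, so the following sketch simply trains a generic scikit-learn classifier on flattened landmark coordinates, one model per attribute; the synthetic data, the model choice and the 144-dimensional input are all assumptions:

```python
# Hedged sketch of step S14: a generic classifier maps the 72 (x, y)
# feature points to one attribute (here gender); age and expression would
# get their own models. Labels and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 72 * 2))   # 200 faces, 72 landmarks x (x, y)
y_gender = rng.integers(0, 2, size=200)    # synthetic gender labels

gender_model = RandomForestClassifier(n_estimators=100, random_state=0)
gender_model.fit(X_train, y_gender)

x_new = rng.normal(size=(1, 72 * 2))       # feature points of a new face
print("predicted gender class:", gender_model.predict(x_new)[0])
```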
Further, as shown in FIG. 4, the decision tree training method provided in this embodiment includes:
step S21, obtaining a training picture containing a human face.
Step S22, each second pixel point of the training picture is identified, the second pixel points are input into the decision tree to be trained, and the training picture is recursively segmented according to the second pixel points based on a cascade classifier to obtain second feature regions corresponding one-to-one to the parts of the face.
Step S23, a second feature vector is extracted from each second feature region and their average is calculated to determine the second average feature vector; the second average feature vector is then connected with each second feature vector to obtain second feature point sets corresponding one-to-one to the second feature regions.
The second feature point set includes a plurality of second feature points.
Step S24, the second feature point sets are compared with the corresponding predicted feature point sets, and the calculation parameters of the cascade classifier are adjusted according to the comparison result until the comparison error between the second feature point sets and the predicted feature point sets falls within a preset range.
In this embodiment, the decision tree is trained by feeding in training pictures one after another, so that the selected feature points keep converging until they approach the feature points of a typical face, at which point training of the decision tree is complete.
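The convergence criterion of steps S21 to S24 can be shown with a deliberately simplified stand-in: the sketch below replaces the cascade regressor with a single linear map fitted by gradient steps, stopping once the comparison error falls inside the preset range; every numeric choice is an assumption, not a value from the patent:

```python
# Toy training loop illustrating steps S21-S24 (assumption: a linear map
# from flattened pixels to 72 (x, y) landmarks replaces the real cascade;
# only the "adjust parameters until the error is within a preset range"
# logic is meant to carry over).
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_landmarks = 64, 72 * 2
W_true = rng.normal(size=(n_pixels, n_landmarks))   # stands in for real faces

pictures = rng.normal(size=(500, n_pixels))         # flattened training pictures
predicted_sets = pictures @ W_true                  # predicted feature point sets

W = np.zeros((n_pixels, n_landmarks))               # calculation parameters
lr, preset_range = 0.1, 1e-2
for step in range(1000):
    second_sets = pictures @ W                      # second feature point sets
    error = second_sets - predicted_sets            # comparison result
    if np.mean(error ** 2) < preset_range:          # error within preset range
        break
    W -= lr * pictures.T @ error / len(pictures)    # adjust the parameters
print(f"stopped at step {step}, mse {np.mean(error ** 2):.4f}")
```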
The embodiment of the present application provides a feature evaluation method based on face recognition in which the face picture is segmented by its pixel points to obtain feature regions corresponding one-to-one to the parts of the face; a feature vector is extracted from each feature region and averaged to determine an average feature vector, and the feature point set of each feature region in the face picture is determined from the average vector and the individual feature vectors; feature evaluation is then performed on the user corresponding to the face picture according to these feature point sets to obtain several items of feature data for the user. Compared with the prior art, the facial parts are first segmented and the feature points of each region are then obtained from that region's feature vectors, so feature recognition can proceed region by region. This raises the similarity between feature points recognized from the same person's facial features and thus improves the accuracy of subsequent feature evaluation.
Referring to fig. 5, a flowchart of a feature evaluation method based on face recognition according to another embodiment of the present application is shown. In addition to the steps shown in fig. 1, the method further comprises:
step S10, obtaining an original picture containing a user head portrait, and performing cutting compression on the original picture to obtain a face picture.
In this embodiment, the original picture is acquired through an image acquisition device, such as the camera of the user's mobile phone (with the user's authorization) or other devices. Cropping and compressing the original picture reduces its storage footprint and read time, which saves storage space, eases subsequent extraction and recognition, and improves operating efficiency.
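A minimal sketch of this preprocessing step, assuming OpenCV; the crop box, target size and JPEG quality are illustrative values, not figures from the patent:

```python
# Illustrative crop-and-compress for step S10 (assumed values: a caller-
# supplied head-portrait box, a 256x256 target size, JPEG quality 80).
import cv2

def crop_and_compress(src_path: str, dst_path: str, box: tuple,
                      size=(256, 256), quality=80) -> None:
    """Crop the original picture to the head-portrait box, shrink it, and
    save it JPEG-compressed to cut storage space and read time."""
    img = cv2.imread(src_path, cv2.IMREAD_COLOR)
    if img is None:
        raise ValueError(f"could not read {src_path}")
    x, y, w, h = box                                  # head-portrait bounding box
    face = cv2.resize(img[y:y + h, x:x + w], size,
                      interpolation=cv2.INTER_AREA)   # shrink for compactness
    cv2.imwrite(dst_path, face, [cv2.IMWRITE_JPEG_QUALITY, quality])
```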
Further, referring to FIG. 6, a schematic structural diagram of a feature evaluation device based on face recognition according to an embodiment of the present application is shown. The device comprises the following modules:
the pixel identification module 101 identifies each first pixel point in the face picture.
In this embodiment, the face picture may be in any format. After a picture is received, it is uniformly converted into a preset format, which avoids subsequent picture recognition errors caused by differing formats. Each first pixel point in the picture is identified by conventional means in the prior art, which are not repeated here.
The region segmentation module 102 inputs each first pixel point into a trained decision tree and recursively segments the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face.
In this embodiment, the first feature regions include a face contour region, an eye region, a nose region, a mouth region and an ear region.
The region segmentation module 102 is specifically configured to input each first pixel point into the trained decision tree, segment the face picture according to the first pixel points to obtain a face contour region representing the face contour image, and then segment the face contour region according to all the first pixel points inside it to obtain the eye region, nose region, mouth region and ear region.
In this embodiment, the input picture is recursively partitioned into rectangles according to its pixels using a classification and regression tree (CART) algorithm within a cascade of classifiers. The trained decision tree judges, from the characteristics of the current pixel point, whether that pixel belongs to a given rectangle, and so on, yielding the rectangles of the face contour, eyes, nose, mouth and ears, i.e. the first feature regions.
The feature extraction module 103 extracts a first feature vector from each first feature region, calculates their average to determine the first average feature vector, and then connects the first average feature vector with each first feature vector to obtain the first feature point sets corresponding one-to-one to the first feature regions.
The first feature point set includes a plurality of first feature points.
In this embodiment, an average shape is computed over the first feature vectors of the first feature regions (i.e., the feature point average is selected) and connected with the first feature vectors to obtain the first feature point set of each first feature region. Together, the first feature point sets comprise 72 feature points in total, with which the parts of a face picture are marked.
The feature evaluation module 104 performs feature evaluation on the user corresponding to the face picture according to the plurality of first feature point sets to obtain several items of feature data for the user.
The feature data include age, gender and expression.
In this embodiment, feature evaluation is performed on the face picture according to the plurality of first feature point sets using an artificial intelligence algorithm, obtaining the user's facial data, including age, gender, expression, face shape, facial attractiveness (颜值) and other information.
The embodiment of the present application provides a feature evaluation device based on face recognition that segments the face picture by its pixel points to obtain feature regions corresponding one-to-one to the parts of the face; extracts a feature vector from each feature region and averages them to determine an average feature vector, determining the feature point set of each feature region in the face picture from the average vector and the individual feature vectors; and performs feature evaluation on the user corresponding to the face picture according to these feature point sets to obtain several items of feature data for the user. Compared with the prior art, the facial parts are first segmented and the feature points of each region are then obtained from that region's feature vectors, so feature recognition can proceed region by region. This raises the similarity between feature points recognized from the same person's facial features and thus improves the accuracy of subsequent feature evaluation.
Further, referring to FIG. 7, a schematic structural diagram of a feature evaluation device based on face recognition according to another embodiment of the present application is shown. In addition to the structure shown in FIG. 6, the device further includes:
the picture processing module 100, configured to acquire an original picture containing the user's head portrait and to crop and compress the original picture to obtain the face picture.
In this embodiment, the original picture is acquired through an image acquisition device, such as the camera of the user's mobile phone (with the user's authorization) or other devices. Cropping and compressing the original picture reduces its storage footprint and read time, which saves storage space, eases subsequent extraction and recognition, and improves operating efficiency.
While the foregoing describes preferred embodiments of the present application, it should be noted that a person of ordinary skill in the art may make several improvements and modifications without departing from the principle of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.
A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments may be implemented by a computer program stored on a computer-readable storage medium which, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Claims (8)
1. A feature evaluation method based on face recognition, characterized by comprising at least the following steps:
identifying each first pixel point in the face picture;
inputting each first pixel point into a trained decision tree and recursively segmenting the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face;
extracting a first feature vector from each first feature region, calculating their average to determine a first average feature vector, and connecting the first average feature vector with each first feature vector to obtain first feature point sets corresponding one-to-one to the first feature regions; wherein each first feature point set comprises a plurality of first feature points;
performing feature evaluation on the user corresponding to the face picture according to the plurality of first feature point sets to obtain several items of feature data for the user; wherein the feature data include age, gender and expression.
2. The feature evaluation method based on face recognition according to claim 1, further comprising:
acquiring an original picture containing the user's head portrait, and cropping and compressing the original picture to obtain the face picture.
3. The feature evaluation method based on face recognition according to claim 1, wherein the decision tree is trained as follows:
acquiring a training picture containing a human face;
identifying each second pixel point of the training picture, inputting the second pixel points into the decision tree to be trained, and recursively segmenting the training picture according to the second pixel points based on a cascade classifier to obtain second feature regions corresponding one-to-one to the parts of the face;
extracting a second feature vector from each second feature region, calculating their average to determine a second average feature vector, and connecting the second average feature vector with each second feature vector to obtain a second feature point set for each second feature region in the training picture; wherein each second feature point set comprises a plurality of second feature points;
and comparing the second feature point sets with the corresponding predicted feature point sets, and adjusting the calculation parameters of the cascade classifier according to the comparison result until the comparison error between the second feature point sets and the predicted feature point sets falls within a preset range.
4. The feature evaluation method based on face recognition according to claim 1, wherein the first feature regions comprise a face contour region, an eye region, a nose region, a mouth region and an ear region;
and wherein inputting each first pixel point into a trained decision tree and recursively segmenting the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face specifically comprises:
inputting each first pixel point into the trained decision tree and segmenting the face picture according to the first pixel points to obtain a face contour region representing the face contour image, and then segmenting the face contour region according to all the first pixel points inside it to obtain the eye region, nose region, mouth region and ear region.
5. A face recognition-based feature evaluation device, comprising:
the pixel identification module, configured to identify each first pixel point in the face picture;
the region segmentation module, configured to input each first pixel point into a trained decision tree and recursively segment the face picture according to the first pixel points to obtain first feature regions corresponding one-to-one to the parts of the face;
the feature extraction module, configured to extract a first feature vector from each first feature region, calculate their average to determine a first average feature vector, and connect the first average feature vector with each first feature vector to obtain first feature point sets corresponding one-to-one to the first feature regions; wherein each first feature point set comprises a plurality of first feature points;
the feature evaluation module, configured to perform feature evaluation on the user corresponding to the face picture according to the plurality of first feature point sets to obtain several items of feature data for the user; wherein the feature data include age, gender and expression.
6. The feature evaluation device based on face recognition according to claim 5, further comprising:
the picture processing module, configured to acquire an original picture containing the user's head portrait and to crop and compress the original picture to obtain the face picture.
7. The feature evaluation device based on face recognition according to claim 5, wherein the decision tree is trained as follows:
acquiring a training picture containing a human face;
identifying each second pixel point of the training picture, inputting the second pixel points into the decision tree to be trained, and recursively segmenting the training picture according to the second pixel points based on a cascade classifier to obtain second feature regions corresponding one-to-one to the parts of the face;
extracting a second feature vector from each second feature region, calculating their average to determine a second average feature vector, and connecting the second average feature vector with each second feature vector to obtain second feature point sets corresponding one-to-one to the second feature regions; wherein each second feature point set comprises a plurality of second feature points;
and comparing the second feature point sets with the corresponding predicted feature point sets, and adjusting the calculation parameters of the cascade classifier according to the comparison result until the comparison error between the second feature point sets and the predicted feature point sets falls within a preset range.
8. The feature evaluation device based on face recognition according to claim 5, wherein the first feature regions comprise a face contour region, an eye region, a nose region, a mouth region and an ear region;
and wherein the region segmentation module is specifically configured to:
input each first pixel point into the trained decision tree and segment the face picture according to the first pixel points to obtain a face contour region representing the face contour image, and then segment the face contour region according to all the first pixel points inside it to obtain the eye region, nose region, mouth region and ear region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
CN201910587430.0A (CN110334643B) | 2019-06-28 | 2019-06-28 | Feature evaluation method and device based on face recognition
Publications (2)
Publication Number | Publication Date
CN110334643A | 2019-10-15
CN110334643B | 2023-05-23
Family
ID: 68143853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
CN201910587430.0A | Feature evaluation method and device based on face recognition | 2019-06-28 | 2019-06-28
Country Status (1)
Country | Link
CN | CN110334643B (en)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title
CN101584575A | 2009-06-19 | 2009-11-25 | 无锡骏聿科技有限公司 | Age assessment method based on face recognition technology
CN103902960A | 2012-12-28 | 2014-07-02 | 北京计算机技术及应用研究所 | Real-time face recognition system and method thereof
WO2014180093A1 | 2013-05-10 | 2014-11-13 | Tencent Technology (Shenzhen) Company Limited | Systems and methods for facial property identification
CN104504365A | 2014-11-24 | 2015-04-08 | 闻泰通讯股份有限公司 | System and method for smiling face recognition in video sequence
WO2019033571A1 | 2017-08-17 | 2019-02-21 | 平安科技(深圳)有限公司 | Facial feature point detection method, apparatus and storage medium
WO2019085338A1 | 2017-11-01 | 2019-05-09 | 平安科技(深圳)有限公司 | Electronic apparatus, image-based age classification method and system, and storage medium
CN108268838A | 2018-01-02 | 2018-07-10 | 中国科学院福建物质结构研究所 | Facial expression recognizing method and facial expression recognition system
Also Published As
Publication Number | Publication Date
CN110334643A | 2019-10-15
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- TA01: Transfer of patent application right (effective date of registration: 2023-04-19)
  - Applicant after: Zhiyu Zhilian Technology Co.,Ltd., 11/F, Building 1, Aofeng Plaza, No. 2 Aofeng Road, Taijiang District, Fuzhou City, Fujian Province, 350000
  - Applicant before: GUANGDONG AOYUANAO PURCHASER ELECTRONIC COMMERCE CO.,LTD., Zone C (self-made), Room 101, No. 10 Jingang Avenue, Nansha District, Guangzhou City, Guangdong Province, 511457
- GR01: Patent grant