CN110874567B - Color value judging method and device, electronic equipment and storage medium


Info

Publication number
CN110874567B
Authority
CN
China
Prior art keywords
feature
face
color value
features
value
Prior art date
Legal status
Active
Application number
CN201910901612.0A
Other languages
Chinese (zh)
Other versions
CN110874567A (en)
Inventor
孙汀娟
黄竹梅
周雅君
赵星
李恒
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910901612.0A priority Critical patent/CN110874567B/en
Publication of CN110874567A publication Critical patent/CN110874567A/en
Priority to PCT/CN2020/093341 priority patent/WO2021057063A1/en
Application granted granted Critical
Publication of CN110874567B publication Critical patent/CN110874567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A color value judging method, the method comprising: acquiring a face image to be judged, on which color value judgment needs to be performed; extracting a plurality of first feature points from the face image to be judged using a face dotting (facial landmarking) technique; calculating a plurality of first features according to the coordinates of the plurality of first feature points; determining, from the plurality of first features, a first feature matched with a preset feature type as a first standard feature; normalizing the salient features among the first features according to the first standard feature to obtain a plurality of first feature ratio values, the salient features being features with high discriminative power for the color value; judging the plurality of first feature ratio values using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged; and outputting the first color value judging result. The invention also provides a color value judging device, an electronic device and a storage medium. The invention can improve the accuracy of color value judgment.

Description

Color value judging method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of machine learning technologies, and in particular, to a color value determining method, a color value determining device, an electronic device, and a storage medium.
Background
"Yan Zhi" is a popular network word in recent years, and news about the value of the color can now be seen on each large internet almost every day. The simplest explanation of the color values is long-term phase, which is the judgment of the quality of the appearance characteristics. "Yan Zhi" also has metrics that can be measured and compared, including: "low value", "high value", "Yan Zhi acts as" and "burst table" etc. Of these, "high value" and "Yan Zhi act" are long and attractive, while "low value" is long and unsightly.
At present, color value judging methods mainly score a face according to how closely the positions and proportions of its features approach the golden ratio: the closer a face is to the golden ratio, the higher the color value score. However, different people have different aesthetic standards, and aesthetic tastes change as times change, so judging the color value by the golden ratio alone is not very accurate.
Therefore, how to judge a user's color value more accurately is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a color value determination method, apparatus, electronic device, and storage medium capable of improving accuracy of color value determination.
A first aspect of the present invention provides a color value determination method, the method including:
acquiring a face image to be judged, which needs to be subjected to color value judgment;
extracting a plurality of first characteristic points in the face image to be judged by using a face dotting technology;
calculating a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between two given parts;
determining the first feature matched with a preset feature type as a first standard feature from a plurality of first features;
according to the first standard features, carrying out standardization processing on the salient features in the first features to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination;
judging a plurality of first feature ratio values by using a pre-trained face value judging model to obtain a first face value judging result of the face image to be judged;
and outputting the first color value judging result.
In one possible implementation manner, the determining the plurality of first feature ratios using a pre-trained face value determining model, and obtaining a first face value determining result of the face image to be determined includes:
For each first feature ratio, acquiring a first feature type of the first feature ratio;
judging whether a plurality of first feature ratio values belong to a feature ratio value range matched with the first feature type in the color value grade or not by using a pre-trained color value judging model aiming at each preset color value grade;
if the plurality of first feature ratio values belong to the feature ratio value range matched with the first feature type in the color value grade, determining that the color value grade is the grade of the color value to be determined;
and if the grade of the to-be-determined face value is one, determining that the grade of the to-be-determined face value is a first face value judgment result of the to-be-judged face image.
In one possible implementation, the method further includes:
if there are a plurality of color value grades to be determined, sorting the plurality of color value grades to be determined from high to low according to the color value grade to obtain a color value grade sorting queue;
and determining any to-be-determined face value grade at the middle position in the face value grade sorting queue as a first face value judging result of the face image to be judged.
In one possible implementation, the method further includes:
If all the first feature ratios do not belong to the feature ratio range matched with the first feature type in the color value grade, determining that the first feature ratios belong to a preset color value grade;
and determining the preset color value grade as a first color value judging result of the face image to be judged.
In one possible implementation, the method further includes:
acquiring a plurality of face sample images to be trained;
extracting a plurality of second feature points in the face sample image for each face sample image;
calculating a plurality of second features according to the coordinates of the plurality of second feature points, wherein the second features comprise the length of each part of the face in the face sample image and the distance between two given parts;
determining the second feature matched with the preset feature type from the plurality of second features as a second standard feature;
according to the second standard features, carrying out standardization processing on the plurality of second features to obtain a plurality of second feature ratio values;
selecting a significant feature with large color value discrimination from the plurality of second features according to the distribution condition of the plurality of second feature ratio values;
And constructing a color value judging model according to the salient features in the second features.
In one possible implementation manner, the constructing a color value determination model according to the salient features in the second features includes:
learning a salient feature of the plurality of second features;
determining a plurality of color value grades corresponding to the salient features in the plurality of second features and a feature ratio range corresponding to each color value grade;
judging whether the characteristic ratio ranges of different color value grades accord with extreme value consistency;
and if the characteristic ratio ranges of the different color value grades accord with the extreme value consistency, generating a color value judging model according to the obvious characteristics in the plurality of second characteristics, the plurality of color value grades and the characteristic ratio range corresponding to each color value grade.
In one possible implementation manner, the color value judging model corresponds to a plurality of color value grades divided according to the color value, each color value grade includes all feature types, and the feature ratio range corresponding to a feature type differs between different color value grades.
A second aspect of the present invention provides a color value determination apparatus, the apparatus comprising:
The acquisition module is used for acquiring the face image to be judged, which is required to be subjected to face value judgment;
the extraction module is used for extracting a plurality of first characteristic points in the face image to be judged by using a face dotting technology;
the computing module is used for computing a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between two given parts;
the determining module is used for determining the first characteristic matched with a preset characteristic type as a first standard characteristic from a plurality of first characteristics;
the processing module is used for carrying out standardization processing on the salient features in the plurality of first features according to the first standard features to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination;
the judging module is used for judging a plurality of first feature ratio values by using a pre-trained face value judging model to obtain a first face value judging result of the face image to be judged;
and the output module is used for outputting the first color value judging result.
A third aspect of the present invention provides an electronic device comprising a processor and a memory, the processor being arranged to implement the color value determination method when executing a computer program stored in the memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the color value determination method.
By the above technical scheme, a face image to be judged, on which color value judgment needs to be performed, can be acquired; a plurality of first feature points are extracted from the face image to be judged using a face dotting technology; a plurality of first features are calculated according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between two given parts; the first feature matched with a preset feature type is determined from the plurality of first features as a first standard feature; the salient features among the first features are standardized according to the first standard feature to obtain a plurality of first feature ratio values, wherein the salient features are features with high discriminative power for the color value; the plurality of first feature ratio values are judged using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged; and the first color value judging result is output. In the invention, the salient features in the face image are obtained, their data are standardized, and they are input into the color value judging model to finally obtain the color value judging result. Throughout the color value judging process, the objects on which the color value judging model operates are the salient features, i.e. the features with high discriminative power for the color value; because it is these salient features that are judged, the resulting first color value judging result is representative of the color value of the face image to be judged, so the accuracy of color value judgment can be improved.
Drawings
Fig. 1 is a flowchart of a color value determining method according to a preferred embodiment of the present invention.
Fig. 2 is an exemplary diagram of a face image of the present disclosure.
Fig. 3 is a functional block diagram of a color value determining apparatus according to a preferred embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing the color value determining method.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The color value judging method of the embodiment of the invention is applied to the electronic equipment, and can also be applied to a hardware environment formed by the electronic equipment and a server connected with the electronic equipment through a network, and the method is jointly executed by the server and the electronic equipment. Networks include, but are not limited to: a wide area network, a metropolitan area network, or a local area network.
A server may refer to a computer system that provides services to other devices (e.g., electronic devices) in a network. For example, if a personal computer provides a file transfer protocol (FTP) service to the outside, it may also be called a server. In a narrow sense, a server refers to certain high-performance computers that provide services to the outside through a network; compared with an ordinary personal computer, a server has higher requirements on stability, security, performance and the like, and therefore differs from an ordinary personal computer in hardware such as the CPU, chipset, memory, disk system and network interfaces.
The electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), an embedded device and the like. The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of multiple network servers, or a cloud consisting of a large number of hosts or network servers based on cloud computing. The user device includes, but is not limited to, any electronic product that can perform human-computer interaction with a user through a keyboard, a mouse, a remote controller, a touch pad, a voice control device or the like, for example a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA) and the like.
Fig. 1 is a flowchart of a color value determining method according to a preferred embodiment of the present invention. The sequence of steps in the flowchart may be changed and some steps may be omitted according to different needs.
S11, the electronic equipment acquires a face image to be judged, which needs to be subjected to face value judgment.
The face image to be judged is a frontal face image.
As an alternative embodiment, before step S11, the method further includes:
acquiring a plurality of face sample images to be trained;
extracting a plurality of second feature points in the face sample image for each face sample image;
calculating a plurality of second features according to the coordinates of the plurality of second feature points, wherein the second features comprise the length of each part of the face in the face sample image and the distance between two given parts;
determining the second feature matched with the preset feature type from the plurality of second features as a second standard feature;
according to the second standard features, carrying out standardization processing on the plurality of second features to obtain a plurality of second feature ratio values;
selecting a significant feature with large color value discrimination from the plurality of second features according to the distribution condition of the plurality of second feature ratio values;
And constructing a color value judging model according to the salient features in the second features.
The face sample images are face images prepared in advance; each face sample image carries color value grade information, and the color value grade is labeled in advance according to popular aesthetic judgment.
The second feature points are points marked on the external contour of the face and the edges of the organs, and a pre-trained face dotting technology can be used for acquiring a plurality of second feature points and coordinates thereof in the face sample image.
The second features include the length of each part of the face in the face sample image and the distance between two given parts, for example the length of the eyes: 2 cm, or the distance from the bottom of the nose to the mouth: 2 cm.
Wherein the second feature ratio refers to the ratio of the second feature to the second standard feature.
In this alternative implementation, feature points are first extracted from the prepared face sample images, and the length of each part and the distance between two given parts in each face sample image are calculated from the coordinates of the feature points. These lengths and/or distances are then normalized, i.e. each length and/or distance is converted into a ratio to the standard feature (the feature ratio). Through this normalization, every face is scaled to a representation in which the standard feature equals 1 unit, so that different faces can be compared with each other and the features of the same face can be compared with one another. In addition, since the differences between face measurements are small, the data should be kept to at least six decimal places. For example, in the embodiment of the present invention, the face width along the line through the two eyes is selected as the standard feature; if the face width in a face sample image is 10 cm and the length of the eyes is 2 cm, the feature ratio obtained by normalizing the eye-length feature is 0.200000.
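A minimal sketch of this normalization step, assuming the raw measurements are held in a dictionary and the standard feature is the face width along the eye line; the function and key names are illustrative, not taken from the patent.

```python
def normalize_features(raw_features_cm, standard_key="face_width_at_eyes"):
    """Convert absolute lengths/distances (cm) into ratios of the standard feature."""
    standard = raw_features_cm[standard_key]
    return {
        name: round(value / standard, 6)  # keep at least six decimal places, per the text
        for name, value in raw_features_cm.items()
        if name != standard_key
    }

# Example from the text: face width 10 cm, eye length 2 cm -> ratio 0.200000
ratios = normalize_features({"face_width_at_eyes": 10.0, "eye_length": 2.0})
print(ratios)  # {'eye_length': 0.2}
```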
After the feature ratios of the sample face images are obtained, the distribution of each feature ratio across the different color value grades is examined to find the feature types whose ratios best discriminate between grades. In the embodiment of the present invention, box plots of each feature ratio across the different color value grades are generated from the feature ratios and color value grades of the sample face images, and from these box plots several salient features whose distributions differ markedly between grades are determined: the length of the upper court, the length of the middle court, the distance between the eyes, the length of the eyes, the width of the nose, the width of the eyes, and so on. These salient features are then trained on to obtain the color value judging model. Each box plot consists of the rectangular boxes generated from the feature ratio within each color value grade.
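The patent does not prescribe tooling for this step; the sketch below assumes the sample ratios and grade labels sit in a pandas DataFrame, and it replaces the visual reading of the box plots with a simple spread-of-medians heuristic. All function and column names are assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt

def plot_grade_boxes(df: pd.DataFrame, feature: str, grade_col: str = "grade"):
    """Draw one box per color value grade for a single feature ratio."""
    df.boxplot(column=feature, by=grade_col)
    plt.title(f"{feature} by color value grade")
    plt.suptitle("")
    plt.show()

def select_salient_features(df: pd.DataFrame, grade_col: str = "grade", top_k: int = 9):
    """Rank features by how far apart their per-grade medians lie,
    a crude stand-in for visually comparing the box plots."""
    scores = {}
    for feature in df.columns.drop(grade_col):
        medians = df.groupby(grade_col)[feature].median()
        spread = df[feature].std()
        scores[feature] = (medians.max() - medians.min()) / (spread + 1e-9)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]
```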
Fig. 2 is an exemplary diagram of a face image of the present disclosure. As shown in fig. 2, 71 points may be extracted from the face image using a face dotting technique, and the position coordinates of each feature point are recorded at the same time. The distances and lengths of the "three courts and five eyes" (a traditional division of the face into three vertical sections and five eye-widths) can then be calculated; for example, the length of "five eyes 1" is the distance from point 13 to point 17 in the face image shown in fig. 2. In the same way, all the distances that strongly influence subjective judgments of the color value are calculated, such as the length of the eyes, the width of the nose, the width of the mouth, the length of the upper court, the length of the middle court, the length of the lower court, and so on.
Among these, 9 salient features can be selected, such as the vertical distance from point 6 to the line through points 51-52 (i.e., the length of the lower court), the vertical distance from the line through points 26-39 to the line through points 51-52 (i.e., the length of the middle court), the distance from point 30 to point 17 (i.e., the distance between the eyes), the distance from point 10 to point 2 (i.e., the face width), the distance from point 11 to point 1 (i.e., the face width), the vertical distance from the line through points 58-62 to the line through points 51-52 (i.e., the distance from the mouth to the nose), the distance from point 50 to point 53 (i.e., the width of the nose), the distance from point 15 to point 19 (i.e., the width of the eyes), and the distance from point 30 to point 34 (i.e., the length of the eyes).
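A sketch of these geometric measurements, given the 71 landmark coordinates as (x, y) tuples indexed as in Fig. 2. The point indices mirror the text, but the landmark layout itself is defined by the (unspecified) face dotting model, and the vertical distances are approximated by mean y-coordinate differences, so treat this purely as an illustration.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def vertical_gap(points_a, points_b):
    """Vertical distance between the horizontal lines through two point groups,
    approximated by the difference of their mean y coordinates."""
    ya = sum(p[1] for p in points_a) / len(points_a)
    yb = sum(p[1] for p in points_b) / len(points_b)
    return abs(ya - yb)

def salient_measurements(pts):  # pts: dict mapping landmark index -> (x, y)
    return {
        "lower_court":   vertical_gap([pts[6]], [pts[51], pts[52]]),
        "middle_court":  vertical_gap([pts[26], pts[39]], [pts[51], pts[52]]),
        "eye_distance":  dist(pts[30], pts[17]),
        "face_width_a":  dist(pts[10], pts[2]),
        "face_width_b":  dist(pts[11], pts[1]),
        "mouth_to_nose": vertical_gap([pts[58], pts[62]], [pts[51], pts[52]]),
        "nose_width":    dist(pts[50], pts[53]),
        "eye_width":     dist(pts[15], pts[19]),
        "eye_length":    dist(pts[30], pts[34]),
    }
```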
Specifically, the constructing a color value determination model according to the salient features in the second features includes:
learning a salient feature of the plurality of second features;
determining a plurality of color value grades corresponding to the salient features in the plurality of second features and a feature ratio range corresponding to each color value grade;
judging whether the characteristic ratio ranges of different color value grades accord with extreme value consistency;
and if the characteristic ratio ranges of the different color value grades accord with the extreme value consistency, generating a color value judging model according to the obvious characteristics in the plurality of second characteristics, the plurality of color value grades and the characteristic ratio range corresponding to each color value grade.
The number and names of the color value grades are preset. For example, the color value may be divided into 5 grades, denoted A, B, C, D and E from the highest color value to the lowest; it may equally be divided into more or fewer grades. The characters A, B, C, D and E are merely preset identifiers for the different color value grades, and other characters may be used instead, which is not specifically limited in the embodiment of the present invention.
In the embodiment of the present invention, when the salient features are trained, the feature ratio range corresponding to each salient feature in each color value grade is determined from the maximum and minimum values of the box plot of that salient feature in that grade. After the feature ratio ranges of a salient feature for the different color value grades have been determined, it must be checked whether these ranges satisfy extreme value consistency. For example, suppose the feature ratio ranges of a salient feature for the five color value grades are [a1, b1], [a2, b2], [a3, b3], [a4, b4] and [a5, b5], and the color value grade is monotonically increasing in this salient feature, i.e. the higher the color value grade, the larger both the maximum and the minimum of the corresponding feature ratio. If the ranges satisfy a1 <= a2 <= a3 <= a4 <= a5 and b1 <= b2 <= b3 <= b4 <= b5, the feature ratio ranges of the different color value grades are determined to satisfy extreme value consistency, and the color value judging model is generated according to the salient features among the second features, the plurality of color value grades and the feature ratio range corresponding to each color value grade.
Optionally, if the feature ratio ranges of the different color value grades do not satisfy extreme value consistency, the offending range must be adjusted. Continuing the example above, with ranges [a1, b1], [a2, b2], [a3, b3], [a4, b4], [a5, b5] and the color value grade monotonically increasing in the salient feature, if some bound violates a1 <= a2 <= a3 <= a4 <= a5 or b1 <= b2 <= b3 <= b4 <= b5, that bound is replaced by the corresponding bound of the neighbouring grade. For example, if a1 > a2 <= a3 <= a4 <= a5, the value of a1 is changed to the value of a2 so that a1 <= a2 <= a3 <= a4 <= a5 holds.
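A hedged sketch of the extreme value consistency check and the repair step described above, for a feature whose ratio is assumed to increase with the color value grade; `ranges` holds the [a_i, b_i] pairs ordered from the lowest to the highest grade, and the function names are illustrative.

```python
def is_extremum_consistent(ranges):
    """True if both lower and upper bounds are non-decreasing across grades."""
    lows = [r[0] for r in ranges]
    highs = [r[1] for r in ranges]
    return (all(lows[i] <= lows[i + 1] for i in range(len(lows) - 1)) and
            all(highs[i] <= highs[i + 1] for i in range(len(highs) - 1)))

def repair_ranges(ranges):
    """If a bound breaks monotonicity (e.g. a1 > a2), replace it with the
    neighbouring grade's bound so that a1 <= a2 <= ... and b1 <= b2 <= ... hold."""
    fixed = [list(r) for r in ranges]
    for i in range(len(fixed) - 2, -1, -1):     # sweep right to left
        fixed[i][0] = min(fixed[i][0], fixed[i + 1][0])
        fixed[i][1] = min(fixed[i][1], fixed[i + 1][1])
    return fixed
```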
The color value judging model corresponds to a plurality of color value grades divided according to the color value; each color value grade includes all feature types, and the feature ratio range corresponding to a feature type differs between different color value grades.
In the color value judging model, each feature type has a corresponding feature ratio range (the value range of its feature ratio), and within each color value grade every feature type has a feature ratio range matched with that feature type.
The feature types include, for example, eye length, eye width, nose length, nose width, mouth length, and the like.
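One possible in-memory representation of the trained model implied by this description: every grade lists the same feature types, each with its own feature ratio range. The grade names follow the A to E example; the numeric ranges are placeholders for illustration only, not values from the patent.

```python
# grade -> {feature type -> (lower bound, upper bound) of the feature ratio}
VALUE_MODEL = {
    "A": {"eye_length": (0.210000, 0.260000), "nose_width": (0.240000, 0.280000)},
    "B": {"eye_length": (0.195000, 0.250000), "nose_width": (0.230000, 0.275000)},
    "C": {"eye_length": (0.180000, 0.240000), "nose_width": (0.220000, 0.270000)},
    # ... grades D and E, and the remaining salient feature types
}
```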
S12, the electronic equipment extracts a plurality of first characteristic points in the face image to be judged by using a face dotting technology.
The first feature points are points marked on the outer contour of the face and the edges of the facial organs; a face dotting technique can be used to acquire the plurality of first feature points and their coordinates in the face image to be judged.
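The patent does not name a concrete face dotting implementation. As an assumption, the sketch below uses dlib's 68-point landmark predictor as a stand-in for the 71-point model of Fig. 2; the model file path is a placeholder.

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # placeholder path

def extract_feature_points(image):
    """Return {index: (x, y)} landmark coordinates for the first detected face."""
    faces = detector(image, 1)
    if not faces:
        raise ValueError("no face found in the image to be judged")
    shape = predictor(image, faces[0])
    return {i: (shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)}
```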
And S13, the electronic equipment calculates a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between two given parts.
The first features refer to the length of a given part or the distance between two given parts, for example the length of the eyes: 2 cm, or the distance from the bottom of the nose to the mouth: 2 cm. A plurality of the first features may be calculated from the coordinates of the first feature points.
S14, the electronic equipment determines the first characteristic matched with the preset characteristic type as a first standard characteristic from a plurality of first characteristics.
The first standard feature is a feature specified in advance. The embodiment of the present invention selects the face width along the line through the two eyes as the first standard feature. The feature types are preset, for example eye length, eye width, nose length, nose width, mouth length, and the like.
S15, the electronic equipment performs standardization processing on the salient features in the first features according to the first standard features to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination.
Wherein the first feature ratio refers to the ratio result of the salient feature to the first standard feature.
Specifically, reference may be made to the processing method adopted in the model training, which is not described herein.
S16, the electronic equipment judges a plurality of first feature ratio values by using a pre-trained face value judging model, and a first face value judging result of the face image to be judged is obtained.
The face value judging result refers to a face value grade obtained after the face value judging model is used for judging the first feature ratio of the face image to be judged.
Specifically, the determining the plurality of first feature ratio values using a pre-trained face value determining model, and obtaining a first face value determining result of the face image to be determined includes:
for each first feature ratio, acquiring a first feature type of the first feature ratio;
judging whether a plurality of first feature ratio values belong to a feature ratio value range matched with the first feature type in the color value grade or not by using a pre-trained color value judging model aiming at each preset color value grade;
If the plurality of first feature ratio values belong to the feature ratio value range matched with the first feature type in the color value grade, determining that the color value grade is the grade of the color value to be determined;
and if the grade of the to-be-determined face value is one, determining that the grade of the to-be-determined face value is a first face value judgment result of the to-be-judged face image.
In this optional embodiment, when all the first feature ratios of the face image to be judged fall within their respective feature ratio ranges in the same color value grade, that color value grade may be determined as the color value grade of the face image to be judged. For example, when the color value of a face image is judged using feature X and feature Y, the judgment result is color value grade A only if both of the following hold: the feature ratio of feature X falls within the feature ratio range of feature X in color value grade A, and the feature ratio of feature Y falls within the feature ratio range of feature Y in color value grade A. It is therefore necessary to check, for each first feature ratio of the face image to be judged, whether all the first feature ratios fall within their corresponding feature ratio ranges in the same color value grade. For each first feature ratio, its feature type, i.e. the first feature type, is obtained first; then, for each color value grade, the pre-trained color value judging model is used to judge whether the plurality of first feature ratios fall within the feature ratio ranges matched with their first feature types in that color value grade. If exactly one color value grade meets the requirement, that color value grade to be determined is taken as the first color value judging result of the face image to be judged.
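A minimal sketch of this matching rule: a grade becomes a candidate (a "color value grade to be determined") only if every first feature ratio falls inside that grade's range for the corresponding feature type. The `model` argument follows the illustrative structure sketched earlier; all names are assumptions.

```python
def candidate_grades(feature_ratios, model):
    """feature_ratios: {feature_type: ratio}; model: {grade: {feature_type: (low, high)}}."""
    candidates = []
    for grade, ranges in model.items():
        if all(ranges[ft][0] <= ratio <= ranges[ft][1]
               for ft, ratio in feature_ratios.items()):
            candidates.append(grade)
    return candidates
```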
As an alternative embodiment, the method further comprises:
if there are a plurality of color value grades to be determined, sorting the plurality of color value grades to be determined from high to low according to the color value grade to obtain a color value grade sorting queue;
and determining any to-be-determined face value grade at the middle position in the face value grade sorting queue as a first face value judging result of the face image to be judged.
If there are a plurality of color value grades to be determined, i.e. a plurality of candidate first color value judging results, the plurality of color value grades to be determined are sorted from high to low according to the color value grade to obtain a color value grade sorting queue, and a color value grade at the middle position of the sorting queue is determined as the first color value judging result. If there is only one color value grade at the middle position, it is determined as the first color value judging result; if there are two color value grades at the middle position, one of them is taken as the first color value judging result according to a preset rule. For example, in the embodiment of the present invention the color value is divided into five grades A, B, C, D and E from high to low; if the color value grades to be determined preliminarily obtained by the model are the four grades A, B, C and D, the first color value judging result is determined to be grade B according to the preset rule.
As an alternative embodiment, the method further comprises:
if all the first feature ratios do not belong to the feature ratio range matched with the first feature type in the color value grade, determining that the first feature ratios belong to a preset color value grade;
and determining the preset color value grade as a first color value judging result of the face image to be judged.
In this optional embodiment, if the plurality of first feature ratios do not fall within the feature ratio ranges matched with their first feature types in any color value grade, a preset color value grade may be determined as the first color value judging result. For example, in the embodiment of the present invention the color value is divided into five grades A, B, C, D and E from high to low; if the judgment result of the face image does not satisfy any of the grades A to E, it is judged as grade C.
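A sketch of the resolution rules in the last few paragraphs: a single candidate grade is returned directly, several candidates are sorted from high to low and the middle one is taken (the higher of the two middle grades when the count is even, matching the A, B, C, D to B example), and no candidate falls back to a preset default grade. The grade ordering and the default "C" are assumptions drawn from the A to E example.

```python
GRADE_ORDER = ["A", "B", "C", "D", "E"]  # high to low

def resolve_grade(candidates, default="C"):
    if not candidates:
        return default                                    # no grade matched
    ordered = sorted(candidates, key=GRADE_ORDER.index)   # high to low
    return ordered[(len(ordered) - 1) // 2]               # middle; higher-middle when even
```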
S17, the electronic equipment outputs the first color value judging result.
In the embodiment of the invention, after the face value judgment result of the face image to be judged is obtained, namely the first face value judgment result is obtained, the first face value judgment result can be output to an interface/page interacted with a user.
In the method flow described in fig. 1, a face image to be determined, which needs to be subjected to face value determination, may be obtained; extracting a plurality of first characteristic points in the face image to be judged by using a face dotting technology; calculating a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between certain two parts; determining the first feature matched with a preset feature type as a first standard feature from a plurality of first features; according to the first standard features, carrying out standardization processing on the salient features in the first features to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination; judging a plurality of first feature ratio values by using a pre-trained face value judging model to obtain a first face value judging result of the face image to be judged; and outputting the first color value judging result. Therefore, the salient features in the face image can be obtained, the data of the salient features are standardized, the salient features are input into a face value judging model, face value judgment is carried out on the salient features, finally, a face value judging result is obtained, in the whole face value judging process, the object aimed by the face value judging model is the salient features, the salient features are the features with large face value distinguishing degree, the salient features are judged, and the obtained first face value judging result can represent the face value of the face image to be judged, so that the accuracy of face value judgment can be improved.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Fig. 3 is a functional block diagram of a color value determining apparatus according to a preferred embodiment of the present invention.
In some embodiments, the color value determination means is operated in an electronic device. The color value determination means may comprise a plurality of functional modules consisting of program code segments. Program code for each program segment in the color value determining apparatus may be stored in a memory and executed by at least one processor to perform some or all of the steps in the color value determining method described in fig. 1.
In this embodiment, the color value determination device may be divided into a plurality of functional modules according to the functions it performs. The functional module may include: the device comprises an acquisition module 201, an extraction module 202, a calculation module 203, a determination module 204, a processing module 205, a judgment module 206 and an output module 207. The module referred to in the present invention refers to a series of computer program segments capable of being executed by at least one processor and of performing a fixed function, stored in a memory. In some embodiments, the function of each module will be described in detail in the following embodiments.
An obtaining module 201, configured to obtain a face image to be determined, where the face image needs to be determined by color value.
The face image to be judged is a face front image.
The extracting module 202 is configured to extract a plurality of first feature points in the face image to be determined using a face dotting technique.
The first feature points are points marked on the outer contour of the face and the edges of the facial organs; a face dotting technique can be used to acquire the plurality of first feature points and their coordinates in the face image to be determined.
The calculating module 203 is configured to calculate a plurality of first features according to coordinates of the plurality of first feature points, where the first features include lengths of each part of the face and distances between two parts in the face image to be determined.
The first features refer to the length of a given part or the distance between two given parts, for example the length of the eyes: 2 cm, or the distance from the bottom of the nose to the mouth: 2 cm. A plurality of the first features may be calculated from the coordinates of the first feature points.
A determining module 204, configured to determine, from a plurality of the first features, the first feature that matches a preset feature type as a first standard feature.
The first standard feature is a feature specified in advance. The embodiment of the present invention selects the face width along the line through the two eyes as the first standard feature. The feature types are preset, for example eye length, eye width, nose length, nose width, mouth length, and the like.
And the processing module 205 is configured to perform normalization processing on salient features in the plurality of first features according to the first standard features, to obtain a plurality of first feature ratio values, where the salient features are features with large color value discrimination.
Wherein the first feature ratio refers to the ratio result of the salient feature to the first standard feature.
Specifically, reference may be made to the processing method adopted in the model training, which is not described herein.
And the judging module 206 is configured to judge the plurality of first feature ratios by using a pre-trained face value judging model, so as to obtain a first face value judging result of the face image to be judged.
The face value judging result refers to a face value grade obtained after the face value judging model is used for judging the first feature ratio of the face image to be judged.
An output module 207, configured to output the first color value determination result.
In the embodiment of the invention, after the face value judgment result of the face image to be judged is obtained, namely the first face value judgment result is obtained, the first face value judgment result can be output to an interface/page interacted with a user.
As an alternative embodiment, the determining module 206 includes:
the obtaining submodule is used for obtaining a first feature type of the first feature ratio for each first feature ratio;
the judging sub-module is used for judging whether the plurality of first feature ratio values belong to the feature ratio value range matched with the first feature type in the color value grade or not by using a pre-trained color value judging model aiming at each preset color value grade;
the determining submodule is used for determining that the face value grade is the to-be-determined face value grade if a plurality of first feature ratio values belong to a feature ratio range matched with the first feature type in the face value grade;
and the determination submodule is further used for determining that the grade of the to-be-determined grade is a first grade judgment result of the to-be-judged face image if the grade of the to-be-determined grade is one grade.
In this optional embodiment, when all the first feature ratios of the face image to be judged fall within their respective feature ratio ranges in the same color value grade, that color value grade may be determined as the color value grade of the face image to be judged. For example, when the color value of a face image is judged using feature X and feature Y, the judgment result is color value grade A only if both of the following hold: the feature ratio of feature X falls within the feature ratio range of feature X in color value grade A, and the feature ratio of feature Y falls within the feature ratio range of feature Y in color value grade A. It is therefore necessary to check, for each first feature ratio of the face image to be judged, whether all the first feature ratios fall within their corresponding feature ratio ranges in the same color value grade. For each first feature ratio, its feature type, i.e. the first feature type, is obtained first; then, for each color value grade, the pre-trained color value judging model is used to judge whether the plurality of first feature ratios fall within the feature ratio ranges matched with their first feature types in that color value grade. If exactly one color value grade meets the requirement, that color value grade to be determined is taken as the first color value judging result of the face image to be judged.
As an optional implementation manner, the determining submodule is further configured to, if the number of the to-be-determined color value levels is multiple, rank the multiple to-be-determined color value levels from high to low according to the color value levels, and obtain a color value level ranking queue;
the determining submodule is further used for determining any to-be-determined face value grade at the middle position in the face value grade sorting queue as a first face value judging result of the face image to be judged.
If there are a plurality of color value grades to be determined, i.e. a plurality of candidate first color value judging results, the plurality of color value grades to be determined are sorted from high to low according to the color value grade to obtain a color value grade sorting queue, and a color value grade at the middle position of the sorting queue is determined as the first color value judging result. If there is only one color value grade at the middle position, it is determined as the first color value judging result; if there are two color value grades at the middle position, one of them is taken as the first color value judging result according to a preset rule. For example, in the embodiment of the present invention the color value is divided into five grades A, B, C, D and E from high to low; if the color value grades to be determined preliminarily obtained by the model are the four grades A, B, C and D, the first color value judging result is determined to be grade B according to the preset rule.
As an optional implementation manner, the determining submodule is further configured to determine that the plurality of first feature ratios belong to a preset color value grade if none of the plurality of first feature ratios belong to a feature ratio range matched with the first feature type in the color value grade;
and the determination submodule is further used for determining the preset color value grade as a first color value judgment result of the face image to be judged.
In this optional embodiment, if the plurality of first feature ratios do not fall within the feature ratio ranges matched with their first feature types in any color value grade, a preset color value grade may be determined as the first color value judging result. For example, in the embodiment of the present invention the color value is divided into five grades A, B, C, D and E from high to low; if the judgment result of the face image does not satisfy any of the grades A to E, it is judged as grade C.
As an optional implementation manner, the obtaining module 201 is further configured to obtain a plurality of face sample images that need to be trained;
the extracting module 202 is further configured to extract, for each of the face sample images, a plurality of second feature points in the face sample image;
The calculating module 203 is further configured to calculate a plurality of second features according to coordinates of the plurality of second feature points, where the second features include lengths of each part of the face and distances between two parts in the face sample image;
the determining module 204 is further configured to determine, from the plurality of second features, the second feature that matches the preset feature type as a second standard feature;
the processing module 205 is further configured to perform normalization processing on the plurality of second features according to the second standard feature, to obtain a plurality of second feature ratio values;
the color value determination device may further include:
the selection module is used for selecting the obvious features with large color value discrimination degree from the plurality of second features according to the distribution condition of the plurality of second feature ratio values;
and the construction module is used for constructing a color value judgment model according to the salient features in the plurality of second features.
The face sample images are face images prepared in advance; each face sample image carries color value grade information, and the color value grade is labeled in advance according to popular aesthetic judgment.
The second feature points are points marked on the external contour of the face and the edges of the organs, and a pre-trained face dotting technology can be used for acquiring a plurality of second feature points and coordinates thereof in the face sample image.
The second features include the length of each part of the face in the face sample image and the distance between two given parts, for example the length of the eyes: 2 cm, or the distance from the bottom of the nose to the mouth: 2 cm.
Wherein the second feature ratio refers to the ratio of the second feature to the second standard feature.
In this alternative implementation, feature points are first extracted from the prepared face sample images, and the length of each part and the distance between two given parts in each face sample image are calculated from the coordinates of the feature points. These lengths and/or distances are then normalized, i.e. each length and/or distance is converted into a ratio to the standard feature (the feature ratio). Through this normalization, every face is scaled to a representation in which the standard feature equals 1 unit, so that different faces can be compared with each other and the features of the same face can be compared with one another. In addition, since the differences between face measurements are small, the data should be kept to at least six decimal places. For example, in the embodiment of the present invention, the face width along the line through the two eyes is selected as the standard feature; if the face width in a face sample image is 10 cm and the length of the eyes is 2 cm, the feature ratio obtained by normalizing the eye-length feature is 0.200000.
After the feature ratios of the sample face images are obtained, the distribution of each feature ratio across the different color value grades is examined to find the feature types whose ratios best discriminate between grades. In the embodiment of the present invention, box plots of each feature ratio across the different color value grades are generated from the feature ratios and color value grades of the sample face images, and from these box plots several salient features whose distributions differ markedly between grades are determined: the length of the upper court, the length of the middle court, the distance between the eyes, the length of the eyes, the width of the nose, the width of the eyes, and so on. These salient features are then trained on to obtain the color value judging model. Each box plot consists of the rectangular boxes generated from the feature ratio within each color value grade.
Fig. 2 is an exemplary diagram of a face image of the present disclosure. As shown in fig. 2, 71 points may be extracted from the face image using a face dotting technique, and the position coordinates of each feature point are recorded at the same time. The distances and lengths of the "three courts and five eyes" (a traditional division of the face into three vertical sections and five eye-widths) can then be calculated; for example, the length of "five eyes 1" is the distance from point 13 to point 17 in the face image shown in fig. 2. In the same way, all the distances that strongly influence subjective judgments of the color value are calculated, such as the length of the eyes, the width of the nose, the width of the mouth, the length of the upper court, the length of the middle court, the length of the lower court, and so on.
Among these, 9 salient features can be selected, such as the vertical distance from point 6 to the line through points 51-52 (i.e., the length of the lower court), the vertical distance from the line through points 26-39 to the line through points 51-52 (i.e., the length of the middle court), the distance from point 30 to point 17 (i.e., the distance between the eyes), the distance from point 10 to point 2 (i.e., the face width), the distance from point 11 to point 1 (i.e., the face width), the vertical distance from the line through points 58-62 to the line through points 51-52 (i.e., the distance from the mouth to the nose), the distance from point 50 to point 53 (i.e., the width of the nose), the distance from point 15 to point 19 (i.e., the width of the eyes), and the distance from point 30 to point 34 (i.e., the length of the eyes).
As an optional implementation manner, the construction module constructs the color value determination model according to the salient features in the plurality of second features specifically:
learning a salient feature of the plurality of second features;
determining a plurality of color value grades corresponding to the salient features in the plurality of second features and a feature ratio range corresponding to each color value grade;
judging whether the characteristic ratio ranges of different color value grades accord with extreme value consistency;
and if the characteristic ratio ranges of the different color value grades accord with the extreme value consistency, generating a color value judging model according to the obvious characteristics in the plurality of second characteristics, the plurality of color value grades and the characteristic ratio range corresponding to each color value grade.
The number and names of the color value grades are preset. For example, the color value may be divided into 5 grades, denoted A, B, C, D and E from the highest color value to the lowest; it may equally be divided into more or fewer grades. The characters A, B, C, D and E are merely preset identifiers for the different color value grades, and other characters may be used instead, which is not specifically limited in the embodiment of the present invention.
In the embodiment of the present invention, when the salient features are trained, the feature ratio range corresponding to each salient feature in each color value grade is determined from the maximum and minimum values of the box plot of that salient feature in that grade. After the feature ratio ranges of a salient feature for the different color value grades have been determined, it must be checked whether these ranges satisfy extreme value consistency. For example, suppose the feature ratio ranges of a salient feature for the five color value grades are [a1, b1], [a2, b2], [a3, b3], [a4, b4] and [a5, b5], and the color value grade is monotonically increasing in this salient feature, i.e. the higher the color value grade, the larger both the maximum and the minimum of the corresponding feature ratio. If the ranges satisfy a1 <= a2 <= a3 <= a4 <= a5 and b1 <= b2 <= b3 <= b4 <= b5, the feature ratio ranges of the different color value grades are determined to satisfy extreme value consistency, and the color value judging model is generated according to the salient features among the second features, the plurality of color value grades and the feature ratio range corresponding to each color value grade.
Optionally, if the feature ratio ranges of the different color value grades do not conform to extreme value consistency, the offending feature ratio range needs to be adjusted. Continuing the above example, in which the feature ratio ranges of the salient feature corresponding to the five color value grades are [a1, b1], [a2, b2], [a3, b3], [a4, b4] and [a5, b5] and the color value grade is monotonically increasing on the salient feature, if a certain feature ratio range does not satisfy a1<=a2<=a3<=a4<=a5 and b1<=b2<=b3<=b4<=b5, the value that breaks the ordering is changed to the corresponding value of the next grade. For example, if a1>a2<=a3<=a4<=a5, the value of a1 is changed to the value of a2 so that a1<=a2<=a3<=a4<=a5 holds.
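A correspondingly hedged sketch of this adjustment step might look like the following, again assuming the ranges are ordered from the lowest to the highest color value grade:

```python
# Hypothetical sketch: enforcing extreme value consistency by replacing a
# violating endpoint with the corresponding endpoint of the next grade, as in
# the a1 > a2 example above. Ranges are ordered from lowest to highest grade.
def enforce_extreme_value_consistency(ranges):
    fixed = [list(r) for r in ranges]
    for i in range(len(fixed) - 1):
        if fixed[i][0] > fixed[i + 1][0]:
            fixed[i][0] = fixed[i + 1][0]   # e.g. change a1 to a2
        if fixed[i][1] > fixed[i + 1][1]:
            fixed[i][1] = fixed[i + 1][1]   # same rule for the upper endpoints
    return [tuple(r) for r in fixed]

print(enforce_extreme_value_consistency([(0.34, 0.38), (0.33, 0.36), (0.36, 0.41)]))
# -> [(0.33, 0.36), (0.33, 0.36), (0.36, 0.41)]
```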
As an optional implementation manner, the color value judging model corresponds to a plurality of color value grades, the plurality of color value grades are divided according to the level of the color value, each color value grade covers all the feature types, and in different color value grades the feature ratio ranges corresponding to the same feature type are different.
In the color value judging model, each feature type has a corresponding feature ratio range (a value range of the feature ratio), and each feature type has a matched feature ratio range in each of the different color value grades.
The feature types include, for example, eye length, eye width, nose length, nose width, mouth length, and the like.
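For ease of understanding only, the per-grade structure described above might be represented as follows; the grade labels, feature type names and numeric ranges are purely illustrative assumptions and are not taken from any trained model.

```python
# Hypothetical example of the structure implied above: each color value grade
# contains every feature type, with a grade-specific feature ratio range.
# Only two feature types are shown here for brevity.
color_value_model = {
    "A": {"eye_length": (0.44, 0.50), "nose_width": (0.60, 0.66)},
    "B": {"eye_length": (0.40, 0.45), "nose_width": (0.56, 0.62)},
    "C": {"eye_length": (0.36, 0.41), "nose_width": (0.52, 0.58)},
}
```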
In the color value determination apparatus described in fig. 3, a face image to be judged, which needs to be subjected to color value judgment, may be acquired; a plurality of first feature points in the face image to be judged are extracted by using a face dotting technology; a plurality of first features are calculated according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between certain two parts; the first feature matched with a preset feature type is determined as a first standard feature from the plurality of first features; the salient features in the plurality of first features are standardized according to the first standard feature to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination; the plurality of first feature ratio values are judged by using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged; and the first color value judging result is output. In this way, the salient features in the face image are obtained, their data are standardized, and they are input into the color value judging model for color value judgment, so that a color value judging result is finally obtained. Throughout the judging process, the object handled by the color value judging model is the salient features, that is, the features with large color value discrimination; because the salient features are judged, the obtained first color value judging result can well represent the color value of the face image to be judged, and the accuracy of color value judgment can thus be improved.
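For readability, a hypothetical sketch of the standardization and judging steps follows; the model structure (grade to feature type to ratio range, as sketched earlier), the grade ordering, the default grade and the tie-breaking rule are all assumptions drawn from the description above, not a definitive implementation.

```python
# Hypothetical sketch: standardizing salient features by the first standard
# feature and judging the resulting ratios against a model of the assumed form
# {grade: {feature_type: (low, high)}}, e.g. the color_value_model sketched above.
def standardize(salient_features, first_standard_feature):
    """Divide each salient feature value by the first standard feature."""
    return {name: value / first_standard_feature
            for name, value in salient_features.items()}

def decide_color_value_grade(feature_ratios, model, default_grade="C"):
    candidates = []
    for grade, ranges in model.items():
        # a grade matches when every known ratio falls inside its matched range
        if all(ranges[name][0] <= value <= ranges[name][1]
               for name, value in feature_ratios.items() if name in ranges):
            candidates.append(grade)
    if not candidates:
        return default_grade            # no grade matched: preset grade
    candidates.sort()                   # "A" (highest) .. "E" (lowest)
    return candidates[len(candidates) // 2]  # several matched: middle grade
```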
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing the color value determining method. The electronic device 3 comprises a memory 31, at least one processor 32, a computer program 33 stored in the memory 31 and executable on the at least one processor 32, and at least one communication bus 34.
It will be appreciated by those skilled in the art that the schematic diagram shown in fig. 4 is merely an example of the electronic device 3 and does not limit the electronic device 3; the electronic device 3 may include more or fewer components than illustrated, may combine certain components, or may have different components. For example, the electronic device 3 may further include input/output devices, network access devices, and the like.
The at least one processor 32 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 32 may be a microprocessor or any conventional processor. The processor 32 is the control center of the electronic device 3 and connects the various parts of the entire electronic device 3 by using various interfaces and lines.
The memory 31 may be used to store the computer program 33 and/or modules/units, and the processor 32 implements various functions of the electronic device 3 by running or executing the computer program and/or modules/units stored in the memory 31 and invoking data stored in the memory 31. The memory 31 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device 3 (such as audio data) and the like. In addition, the memory 31 may include a nonvolatile memory such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another nonvolatile solid-state storage device.
In connection with fig. 1, the memory 31 in the electronic device 3 stores a plurality of instructions to implement a color value determination method, and the plurality of instructions are executable by the processor 32 to implement:
acquiring a face image to be judged, which needs to be subjected to color value judgment;
extracting a plurality of first characteristic points in the face image to be judged by using a face dotting technology;
calculating a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between certain two parts;
determining the first feature matched with a preset feature type as a first standard feature from a plurality of first features;
according to the first standard features, carrying out standardization processing on the salient features in the first features to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination;
judging a plurality of first feature ratio values by using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged;
and outputting the first color value judging result.
For the specific implementation of the above instructions by the processor 32, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated herein.
In the electronic device 3 described in fig. 4, a face image to be judged, which needs to be subjected to color value judgment, may be acquired; a plurality of first feature points in the face image to be judged are extracted by using a face dotting technology; a plurality of first features are calculated according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between certain two parts; the first feature matched with a preset feature type is determined as a first standard feature from the plurality of first features; the salient features in the plurality of first features are standardized according to the first standard feature to obtain a plurality of first feature ratio values, wherein the salient features are features with large color value discrimination; the plurality of first feature ratio values are judged by using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged; and the first color value judging result is output. In this way, the salient features in the face image are obtained, their data are standardized, and they are input into the color value judging model for color value judgment, so that a color value judging result is finally obtained. Throughout the judging process, the object handled by the color value judging model is the salient features, that is, the features with large color value discrimination; because the salient features are judged, the obtained first color value judging result can well represent the color value of the face image to be judged, and the accuracy of color value judgment can thus be improved.
The modules/units integrated in the electronic device 3 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, and the computer program may be stored in a computer readable storage medium; when the computer program is executed by a processor, the steps of each of the method embodiments described above can be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claims concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or devices recited in the system claims may also be implemented by one unit or device through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. A color value determination method, the method comprising:
acquiring a face image to be judged, which needs to be subjected to color value judgment;
extracting a plurality of first characteristic points in the face image to be judged by using a face dotting technology;
calculating a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between certain two parts;
determining the first feature matched with a preset feature type as a first standard feature from a plurality of first features;
according to the first standard feature, performing standardization processing on salient features in the plurality of first features to obtain a plurality of first feature ratio values, wherein the generation of the salient features comprises: generating a box plot of each feature ratio in different color value grades according to the feature ratios and the color value grades of sample face images, and determining the salient features according to the degree of discrimination of each feature ratio between the different color value grades in the box plot;
judging a plurality of first feature ratio values by using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged, which comprises the following steps: for each first feature ratio value, obtaining a first feature type of the first feature ratio value; for each preset color value grade, judging, by using the pre-trained color value judging model, whether the plurality of first feature ratio values belong to the feature ratio ranges matched with their first feature types in the color value grade; if the plurality of first feature ratio values belong to the feature ratio ranges matched with their first feature types in the color value grade, determining that the color value grade is a to-be-determined color value grade; and if there is one to-be-determined color value grade, determining the to-be-determined color value grade as the first color value judging result of the face image to be judged; wherein the color value judging model corresponds to a plurality of color value grades, and the color value judging model comprises a plurality of feature types and a feature ratio range corresponding to each feature type;
and outputting the first color value judging result.
2. The method according to claim 1, wherein the method further comprises:
if the number of the to-be-determined color value grades is multiple, sorting the multiple to-be-determined color value grades in descending order of color value grade to obtain a color value grade sorting queue;
and determining any to-be-determined color value grade at the middle position in the color value grade sorting queue as the first color value judging result of the face image to be judged.
3. The method according to claim 1, wherein the method further comprises:
if all the first feature ratios do not belong to the feature ratio range matched with the first feature type in the color value grade, determining that the first feature ratios belong to a preset color value grade;
and determining the preset color value grade as a first color value judging result of the face image to be judged.
4. A method according to any one of claims 1 to 3, further comprising:
acquiring a plurality of face sample images to be trained;
extracting a plurality of second feature points in the face sample image for each face sample image;
calculating a plurality of second features according to the coordinates of the plurality of second feature points, wherein the second features comprise the length of each part of the face and the distance between certain two parts in the face sample image;
determining the second feature matched with the preset feature type from the plurality of second features as a second standard feature;
according to the second standard features, carrying out standardization processing on the plurality of second features to obtain a plurality of second feature ratio values;
selecting salient features with large color value discrimination from the plurality of second features according to the distribution of the plurality of second feature ratio values;
and constructing a color value judging model according to the salient features in the second features.
5. The method of claim 4, wherein the constructing a color value judging model according to the salient features in the plurality of second features comprises:
training the salient features in the plurality of second features;
determining a plurality of color value grades corresponding to the salient features in the plurality of second features and a feature ratio range corresponding to each color value grade;
judging whether the feature ratio ranges of the different color value grades conform to extreme value consistency;
and if the feature ratio ranges of the different color value grades conform to extreme value consistency, generating the color value judging model according to the salient features in the plurality of second features, the plurality of color value grades, and the feature ratio range corresponding to each color value grade.
6. The method according to claim 5, wherein the color value judging model corresponds to a plurality of color value grades, the plurality of color value grades are divided according to the level of the color value, each color value grade covers all the feature types, and in different color value grades the feature ratio ranges corresponding to the same feature type are different.
7. A color value determination device for implementing the color value determination method according to any one of claims 1 to 6, characterized in that the color value determination device comprises:
the acquisition module is used for acquiring the face image to be judged, which needs to be subjected to color value judgment;
the extraction module is used for extracting a plurality of first characteristic points in the face image to be judged by using a face dotting technology;
the computing module is used for computing a plurality of first features according to the coordinates of the plurality of first feature points, wherein the first features comprise the length of each part of the face in the face image to be judged and the distance between certain two parts;
the determining module is used for determining the first characteristic matched with a preset characteristic type as a first standard characteristic from a plurality of first characteristics;
the processing module is configured to perform standardization processing on salient features in the plurality of first features according to the first standard feature, to obtain a plurality of first feature ratio values, wherein the generation of the salient features comprises: generating a box plot of each feature ratio in different color value grades according to the feature ratios and the color value grades of sample face images, and determining the salient features according to the degree of discrimination of each feature ratio between the different color value grades in the box plot;
the judging module is used for judging the plurality of first feature ratio values by using a pre-trained color value judging model to obtain a first color value judging result of the face image to be judged;
and the output module is used for outputting the first color value judging result.
8. An electronic device comprising a processor and a memory, the processor being configured to execute a computer program stored in the memory to implement the color value determination method of any one of claims 1 to 6.
9. A computer readable storage medium storing at least one instruction which when executed by a processor implements the color value determination method of any one of claims 1 to 6.
CN201910901612.0A 2019-09-23 2019-09-23 Color value judging method and device, electronic equipment and storage medium Active CN110874567B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910901612.0A CN110874567B (en) 2019-09-23 2019-09-23 Color value judging method and device, electronic equipment and storage medium
PCT/CN2020/093341 WO2021057063A1 (en) 2019-09-23 2020-05-29 Facial attractiveness determining method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910901612.0A CN110874567B (en) 2019-09-23 2019-09-23 Color value judging method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110874567A CN110874567A (en) 2020-03-10
CN110874567B true CN110874567B (en) 2024-01-09

Family

ID=69718056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910901612.0A Active CN110874567B (en) 2019-09-23 2019-09-23 Color value judging method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110874567B (en)
WO (1) WO2021057063A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874567B (en) * 2019-09-23 2024-01-09 平安科技(深圳)有限公司 Color value judging method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169408A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of face value decision method and device
CN108629336A (en) * 2018-06-05 2018-10-09 北京千搜科技有限公司 Face value calculating method based on human face characteristic point identification
WO2019061203A1 (en) * 2017-09-28 2019-04-04 深圳传音通讯有限公司 Method for acquiring change in facial attractiveness score, and terminal
CN109657539A (en) * 2018-11-05 2019-04-19 深圳前海达闼云端智能科技有限公司 Face value evaluation method and device, readable storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604377A (en) * 2009-07-10 2009-12-16 华南理工大学 A kind of facial beauty classification method that adopts computing machine to carry out woman image
US9626597B2 (en) * 2013-05-09 2017-04-18 Tencent Technology (Shenzhen) Company Limited Systems and methods for facial age identification
CN108764334A (en) * 2018-05-28 2018-11-06 北京达佳互联信息技术有限公司 Facial image face value judgment method, device, computer equipment and storage medium
CN110874567B (en) * 2019-09-23 2024-01-09 平安科技(深圳)有限公司 Color value judging method and device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169408A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of face value decision method and device
WO2019061203A1 (en) * 2017-09-28 2019-04-04 深圳传音通讯有限公司 Method for acquiring change in facial attractiveness score, and terminal
CN108629336A (en) * 2018-06-05 2018-10-09 北京千搜科技有限公司 Face value calculating method based on human face characteristic point identification
CN109657539A (en) * 2018-11-05 2019-04-19 深圳前海达闼云端智能科技有限公司 Face value evaluation method and device, readable storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于深度卷积神经网络的颜值计算研究";陈良仁;《中国优秀硕士学位论文全文数据库信息科技辑》(第4期);第31页-48页 *

Also Published As

Publication number Publication date
CN110874567A (en) 2020-03-10
WO2021057063A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
CN106875422B (en) Face tracking method and device
CN108197532A (en) The method, apparatus and computer installation of recognition of face
CN109766925B (en) Feature fusion method and device, electronic equipment and storage medium
WO2021031825A1 (en) Network fraud identification method and device, computer device, and storage medium
CN112365876B (en) Method, device and equipment for training speech synthesis model and storage medium
CN109616102B (en) Acoustic model training method and device and storage medium
US10210424B2 (en) Method and system for preprocessing images
US20140012532A1 (en) System, method, and computer program product for simultaneously determining settings for a plurality of parameter variations
CN111784699B (en) Method and device for carrying out target segmentation on three-dimensional point cloud data and terminal equipment
CN113689436B (en) Image semantic segmentation method, device, equipment and storage medium
CN112163637B (en) Image classification model training method and device based on unbalanced data
CN110969172A (en) Text classification method and related equipment
CN110489659A (en) Data matching method and device
WO2021057062A1 (en) Method and apparatus for optimizing attractiveness judgment model, electronic device, and storage medium
CN110874567B (en) Color value judging method and device, electronic equipment and storage medium
CN109214671A Personnel's grouping method, device, electronic device and computer readable storage medium
CN109697083B (en) Fixed-point acceleration method and device for data, electronic equipment and storage medium
CN111460910A (en) Face type classification method and device, terminal equipment and storage medium
CN110876072B (en) Batch registered user identification method, storage medium, electronic device and system
CN109616103B (en) Acoustic model training method and device and storage medium
CN115545088B (en) Model construction method, classification method, device and electronic equipment
WO2021169356A1 (en) Voice file repairing method and apparatus, computer device, and storage medium
CN112258450B (en) Object scoring method and device
CN110688371B (en) Data adjustment method, device, electronic equipment and storage medium
CN111667045A (en) Multi-channel neural network model training method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40017502

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant