WO2021057063A1 - Face value judgment method and apparatus, electronic device, and storage medium - Google Patents

Face value judgment method and apparatus, electronic device, and storage medium

Info

Publication number
WO2021057063A1
WO2021057063A1 (PCT/CN2020/093341; CN2020093341W)
Authority
WO
WIPO (PCT)
Prior art keywords
feature, face, face value, features, ratios
Prior art date
Application number
PCT/CN2020/093341
Other languages
English (en)
French (fr)
Inventor
孙汀娟
黄竹梅
周雅君
赵星
李恒
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021057063A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a face value (facial attractiveness) judgment method, apparatus, electronic device, and storage medium.
  • In the prior art, face value judgment is mainly based on how closely the facial features and their proportions approach the "golden ratio": the closer a face is to the "golden ratio", the higher its face value score.
  • However, different people have different aesthetic standards, and as times change, people's aesthetic preferences also change; judging face value by the "golden ratio" alone is therefore not very accurate.
  • The inventors realized that how to judge a user's face value more accurately is a technical problem that urgently needs to be solved.
  • the first aspect of the present application provides a method for judging face value, and the method includes:
  • according to the first standard feature, performing standardization processing on the salient features among the plurality of first features to obtain a plurality of first feature ratios, where a salient feature is a feature with a high degree of distinction in face value;
  • a second aspect of the present application provides a device for judging appearance, the device includes:
  • an acquisition module, used to acquire a face image to be judged whose face value needs to be determined;
  • an extraction module, used to extract multiple first feature points from the face image to be judged using a face landmark (dotting) technique;
  • a calculation module, configured to calculate multiple first features according to the coordinates of the multiple first feature points, where the first features include the lengths of the parts of the face in the face image to be judged and the distances between certain pairs of parts;
  • a determining module, configured to determine, from the plurality of first features, the first feature that matches a preset feature type as the first standard feature;
  • a processing module, configured to standardize the salient features among the multiple first features according to the first standard feature to obtain multiple first feature ratios, where a salient feature is a feature with a high degree of distinction in face value;
  • a judging module, configured to judge the plurality of first feature ratios using a pre-trained face value judgment model to obtain a first face value judgment result for the face image to be judged;
  • an output module, used to output the first face value judgment result.
  • a third aspect of the present application provides an electronic device that includes a processor and a memory, and the processor is configured to implement the face value judgment method when executing computer-readable instructions stored in the memory.
  • the fourth aspect of the present application provides one or more readable storage media storing computer readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors implement the described face value judgment method.
  • The object of the face value judgment model is the salient features, where a salient feature is a feature with a high degree of distinction in face value.
  • Because only the salient features are judged, the resulting first face value judgment result better represents the face value of the face image to be judged, so the accuracy of face value judgment can be improved.
  • FIG. 1 is a flowchart of a preferred embodiment of a method for judging face value disclosed in the present application.
  • Fig. 2 is an example diagram of a face image disclosed in the present application.
  • Fig. 3 is a functional block diagram of a preferred embodiment of a device for judging a face value disclosed in the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device implementing a preferred embodiment of the method for judging face value according to this application.
  • the face value determination method of the embodiment of the present application is applied to an electronic device, and can also be applied to a hardware environment composed of an electronic device and a server connected to the electronic device via a network, and is executed by the server and the electronic device together.
  • Networks include but are not limited to: wide area network, metropolitan area network or local area network.
  • the server may refer to a computer system that can provide services to other devices (such as electronic devices) in a network. For example, a personal computer that provides a File Transfer Protocol (FTP) service to the outside can also be called a server.
  • in a narrower sense, a server refers to certain high-performance computers that provide services to the outside through a network. Compared with ordinary personal computers, they have higher requirements for stability, security, and performance; their hardware, such as the CPU, chipset, memory, disk system, and network interfaces, therefore differs from that of ordinary personal computers.
  • the electronic device includes a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and so on.
  • the electronic equipment may also include network equipment and/or user equipment.
  • the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the user equipment includes, but is not limited to, any electronic product that can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, and a personal digital device.
  • the network where the user equipment and the network equipment are located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network VPN, and the like.
  • FIG. 1 is a flowchart of a preferred embodiment of a method for judging a face value disclosed in the present application. Among them, according to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted.
  • the electronic device acquires a face image to be judged that needs to be judged on the face value.
  • the face image to be determined refers to a front face image of a human face.
  • the method further includes:
  • a face value judgment model is constructed.
  • the face sample image refers to a pre-prepared face image
  • the face sample image carries face value grade information
  • the face sample image has a face value grade determined in advance based on popular aesthetics.
  • a second feature point refers to a point marked on the outer contour of the face or on the edge of a facial organ; a pre-trained face landmark (dotting) technique can be used to obtain the plurality of second feature points in the face sample image and their coordinates.
  • the second features include the lengths of the parts of the face in the face sample image and the distances between certain pairs of parts, for example, the length of the eyes: 2 cm; the distance from the bottom of the nose to the mouth: 2 cm; and so on.
  • a second feature ratio refers to the ratio of a second feature to the second standard feature.
  • Specifically, feature points are first extracted from the prepared face sample image; the length of each part in the face sample image and the distances between certain pairs of parts are calculated according to the coordinates of the feature points; these lengths and/or distances are then standardized.
  • The standardization processing converts all lengths and/or distances into ratios to the standard feature (i.e., feature ratios). Through this processing, every face is scaled so that its standard feature equals 1 unit: different faces can then be compared with each other, and the features of the same face can also be compared with each other. In addition, because the differences between face measurements are small, the data precision is required to be at least 6 digits after the decimal point.
  • the width of the face at the straight line position of the two eyes is selected as the standard feature.
  • for example, with a standard feature (face width) of 10 cm and an eye length of 2 cm, the feature ratio obtained after normalizing the eye-length feature is 0.200000.
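A minimal sketch of this standardization step in Python; the feature names and the 10 cm face width are illustrative assumptions, not values fixed by the application:

```python
def standardize(features, standard_key):
    """Divide every feature by the standard feature, keeping 6 decimal places."""
    standard = features[standard_key]
    return {name: round(value / standard, 6) for name, value in features.items()}

# Hypothetical measurements (in cm) for one face
features = {
    "face_width_at_eyes": 10.0,  # chosen as the standard feature (its ratio becomes 1.0)
    "eye_length": 2.0,
    "nose_to_mouth": 2.0,
}
ratios = standardize(features, "face_width_at_eyes")
# eye_length normalizes to 0.200000, matching the example in the text
```

After this step, ratios from different faces are directly comparable because every face is scaled so its standard feature equals 1 unit.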
  • In the embodiment of the application, after the feature ratios of the sample face images are obtained, the distribution of these feature ratios across different face value grades is observed, and the feature types whose feature ratios best discriminate between the grades are identified.
  • Specifically, the embodiment of the application generates a box plot of each feature ratio at each face value grade, and determines from the box plots the features whose ratios differ most across grades.
  • Several salient features are selected in this way, such as the length of the lower third of the face, the length of the middle third, the distance between the eyes, the length of the eyes, the width of the nose, and the width of the eyes; these salient features are then used to train the face value judgment model.
  • A box plot is composed of the rectangular boxes generated from the feature ratios at each face value grade.
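The per-grade box-plot comparison can be approximated by computing summary statistics of each feature ratio grouped by grade; the grades and ratio values below are synthetic examples, not data from the application:

```python
import statistics

def box_stats(values):
    """Return the box-plot summary (min, Q1, median, Q3, max) of a list of ratios."""
    qs = statistics.quantiles(sorted(values), n=4)
    return {"min": min(values), "q1": qs[0], "median": qs[1], "q3": qs[2], "max": max(values)}

# Hypothetical eye-length ratios of sample images, grouped by pre-labeled grade
ratios_by_grade = {
    "A": [0.19, 0.20, 0.21, 0.22],
    "C": [0.15, 0.16, 0.17, 0.18],
}
stats = {grade: box_stats(v) for grade, v in ratios_by_grade.items()}
# Boxes that barely overlap across grades suggest the feature discriminates face value well
```

A feature whose boxes overlap heavily across grades would be rejected as non-salient.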
  • FIG. 2 is an example diagram of a face image disclosed in this application.
  • the face landmark (dotting) technique can be used to extract 71 points from the face image shown in Figure 2.
  • The position coordinates of each feature point can also be recorded, and the distances and lengths of the "three sections and five eyes" (三庭五眼) proportions can be further calculated; for example, the length of "five eyes 1" is the distance from point 13 to point 17 in the face image shown in FIG. 2.
  • All distances that subjective judgment considers to have a great influence on face value are calculated, such as the length of the eyes, the width of the eyes, the width of the nose, the width of the mouth, and the lengths of the upper, middle, and lower thirds of the face.
  • For example, 9 salient features can be selected: the vertical distance from point 6 to the line through points 51-52 (the length of the lower third), the vertical distance from the line through points 26-39 to the line through points 51-52 (the length of the middle third), the length from point 30 to point 17 (the distance between the eyes), the length from point 10 to point 2 (the face width), the length from point 11 to point 1 (the face width), the vertical distance from the line through points 58-62 to the line through points 51-52 (the distance from the mouth to the nose), the length from point 50 to point 53 (the width of the nose), the length from point 15 to point 19 (the width of the eyes), and the length from point 30 to point 34 (the length of the eyes).
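Assuming each landmark is an (x, y) coordinate, the two kinds of measurement above, point-to-point length and perpendicular distance to a line, might be computed as follows; the coordinates are made up for illustration:

```python
import math

def dist(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def vertical_dist_to_line(p, a, b):
    """Perpendicular distance from point p to the line through points a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

# Hypothetical coordinates for a few of the 71 landmarks
points = {30: (40.0, 50.0), 34: (60.0, 50.0), 6: (50.0, 120.0),
          51: (30.0, 90.0), 52: (70.0, 90.0)}
eye_length = dist(points[30], points[34])                               # points 30-34
lower_third = vertical_dist_to_line(points[6], points[51], points[52])  # point 6 to line 51-52
```

The remaining features listed above follow the same two patterns with different point indices.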
  • the constructing a face value judgment model according to the salient features among the plurality of second features includes:
  • the face value judgment model is generated according to the salient features among the plurality of second features, the multiple face value grades, and the feature ratio range corresponding to each face value grade.
  • The face value grades are divided in advance into multiple types, and the types and number of grades are predefined.
  • For example, the face value grades can be divided into five grades; ordered from high to low face value, these are A, B, C, D, and E.
  • The grades may also be divided into more or fewer levels from high to low face value.
  • The characters A, B, C, D, and E are merely predefined identifiers for different face value grades; other characters may also be used to identify the grades, and the embodiment of the present application does not specifically limit this.
  • When training on a salient feature, the feature ratio range of that salient feature at each face value grade needs to be determined according to the maximum and minimum values of the box plot of the salient feature at that grade.
  • From the feature ratio ranges of every salient feature at every face value grade, the face value judgment model is generated.
  • If the feature ratio ranges of different face value grades do not satisfy extreme-value consistency, the feature ratio ranges need to be adjusted.
  • The face value judgment model corresponds to multiple face value grades, divided according to face value from high to low. Each face value grade covers all the feature types, and the feature ratio range corresponding to a feature type differs between grades.
  • That is, each feature type has its corresponding feature ratio range (the value range of the feature ratio), and this range is different at different face value grades.
  • Feature types are, for example, eye length, eye width, nose length, nose width, mouth length, and so on.
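One plausible way to represent such a trained model is a mapping from grade to per-feature-type ratio ranges; the grade labels, feature names, and bounds below are hypothetical, stand-ins for values that would come from the box plots:

```python
# For each grade, the (min, max) feature ratio range of every salient feature type
face_value_model = {
    "A": {"eye_length": (0.19, 0.23), "nose_width": (0.24, 0.28)},
    "B": {"eye_length": (0.17, 0.21), "nose_width": (0.22, 0.26)},
}

def in_range(ratio, bounds):
    """Check whether a feature ratio falls inside a grade's range for that feature type."""
    low, high = bounds
    return low <= ratio <= high
```

A grade matches an image only when every salient feature ratio passes this check.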
  • the electronic device extracts a plurality of first feature points from the face image to be judged using the face landmark (dotting) technique.
  • A first feature point refers to a point marked on the outer contour of the face or on the edge of a facial organ; the face landmark technique can be used to obtain the plurality of first feature points in the face image to be judged and their coordinates.
  • the electronic device calculates multiple first features according to the coordinates of the multiple first feature points, where the first features include the lengths of the parts of the face in the face image to be judged and the distances between certain pairs of parts.
  • A first feature refers to the length of a certain part or the distance between two parts, for example, the length of the eyes: 2 cm; the distance from the bottom of the nose to the mouth: 2 cm; and so on.
  • a plurality of the first features may be calculated based on the coordinates of the first feature points.
  • the electronic device determines, from the plurality of first features, the first feature that matches the preset feature type as the first standard feature.
  • the first standard feature refers to a certain feature specified in advance.
  • the face width at the straight line position of the two eyes is selected as the first standard feature.
  • preset feature types such as eye length, eye width, nose length, nose width, mouth length, and so on.
  • the electronic device performs standardization processing on the salient features among the plurality of first features according to the first standard feature to obtain a plurality of first feature ratios, where a salient feature is a feature with a high degree of distinction in face value.
  • a first feature ratio refers to the ratio of a salient feature to the first standard feature.
  • the electronic device uses a pre-trained face value judgment model to judge a plurality of the first feature ratios, and obtains a first face value judgment result of the face image to be judged.
  • the face value judgment result refers to the face value grade obtained after judging the first feature ratio of the face image to be judged using the face value judgment model.
  • Using a pre-trained face value judgment model to judge the plurality of first feature ratios and obtain the first face value judgment result of the face image to be judged includes:
  • for each preset face value grade, using the pre-trained face value judgment model to determine whether each of the first feature ratios falls within the feature ratio range at that grade matching its feature type (the first feature type), and if all of them do, marking the grade as a pending face value grade;
  • if there is exactly one pending face value grade, determining that this pending grade is the first face value judgment result of the face image to be judged.
  • For example, suppose feature X and feature Y are selected for judgment. If the judgment result is face value grade A, then both conditions hold: the feature ratio of feature X falls within the feature ratio range of feature X at grade A, and the feature ratio of feature Y falls within the feature ratio range of feature Y at grade A.
  • In other words, for each first feature ratio of the face image, it is determined whether all the first feature ratios fall within their corresponding feature ratio ranges at the same face value grade.
  • For each first feature ratio, its feature type (the first feature type) is first obtained; then, at each face value grade, the pre-trained face value judgment model is used to determine whether the plurality of first feature ratios fall within the ranges matching the first feature types at that grade. If only one face value grade meets the requirement, that pending grade is determined to be the first face value judgment result of the face image to be judged.
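The matching procedure above can be sketched as follows, assuming the hypothetical range-table representation of the model (all names and bounds are illustrative):

```python
def pending_grades(model, ratios):
    """Return every grade whose ranges contain all of the image's feature ratios."""
    matches = []
    for grade, ranges in model.items():
        if all(low <= ratios[ftype] <= high
               for ftype, (low, high) in ranges.items() if ftype in ratios):
            matches.append(grade)
    return matches

model = {
    "A": {"eye_length": (0.19, 0.23), "nose_width": (0.24, 0.28)},
    "B": {"eye_length": (0.15, 0.18), "nose_width": (0.20, 0.23)},
}
result = pending_grades(model, {"eye_length": 0.20, "nose_width": 0.25})
# With these ranges, only grade A contains both ratios
```

When `pending_grades` returns exactly one grade, that grade is the first face value judgment result.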
  • the method further includes:
  • if there are multiple pending face value grades, sorting them from high to low face value to obtain a sorted queue of face value grades, and determining the pending grade in the middle of the queue as the first face value judgment result. If exactly one pending grade occupies the middle position, that grade is the result; if two pending grades share the middle position, one of them is selected as the first face value judgment result according to a preset rule.
  • For example, the face value grades are divided from high to low into five grades A, B, C, D, and E. If the pending grades obtained from the model are the four grades A, B, C, and D, the first face value judgment result is determined to be grade B according to a preset rule.
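The middle-grade selection can be sketched as below; treating "prefer the higher grade" as the preset tie-breaking rule is an assumption that is merely consistent with the A, B, C, D to B example, since the application does not spell the rule out:

```python
GRADE_ORDER = ["A", "B", "C", "D", "E"]  # face value from high to low

def resolve_pending(pending, prefer_higher=True):
    """Pick the middle grade of the sorted pending grades; on a two-way tie,
    a preset rule (here: take the higher grade) breaks it."""
    ordered = sorted(pending, key=GRADE_ORDER.index)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    return ordered[n // 2 - 1] if prefer_higher else ordered[n // 2]

# The example in the text: pending grades A, B, C, D resolve to B
```

An odd number of pending grades has a unique middle element, so the tie-break only matters for even counts.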
  • the method further includes:
  • if no face value grade matches, a preset face value grade is determined as the first face value judgment result of the face image to be judged.
  • That is, when no grade meets the requirement, the preset face value grade may be determined as the first face value judgment result.
  • For example, the face value grades are divided from high to low into five grades A, B, C, D, and E; if the judgment of the face image does not match any of them, the result is determined as grade C.
  • the electronic device outputs the first face value judgment result.
  • Specifically, the first face value judgment result may be output to a user interface or page that interacts with the user.
  • In the method described in the above flowchart, the face image to be judged is acquired; a face landmark (dotting) technique is used to extract multiple first feature points from the face image to be judged; multiple first features are calculated from the coordinates of the first feature points, where the first features include the lengths of the parts of the face in the face image to be judged and the distances between certain pairs of parts;
  • the first feature that matches the preset feature type is determined as the first standard feature; the salient features among the plurality of first features are standardized according to the first standard feature to obtain a plurality of first feature ratios, where a salient feature is a feature with a high degree of distinction in face value; the pre-trained face value judgment model is used to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged; and the first face value judgment result is output.
  • The object of the face value judgment model is the salient features, where a salient feature is a feature with a high degree of distinction in face value.
  • Because only the salient features are judged, the resulting first face value judgment result better represents the face value of the face image to be judged, so the accuracy of face value judgment can be improved.
  • FIG. 3 is a functional block diagram of a preferred embodiment of a device for judging appearance disclosed in the present application.
  • the device for judging appearance runs in an electronic device.
  • the face value judging device may include multiple functional modules composed of program code segments.
  • the program code of each program segment in the face value judging device can be stored in a memory and executed by at least one processor to execute part or all of the steps in the face value judging method described in FIG. 1.
  • the face value judging device can be divided into multiple functional modules according to the functions it performs.
  • the functional modules may include: an acquisition module 201, an extraction module 202, a calculation module 203, a determining module 204, a processing module 205, a judging module 206, and an output module 207.
  • the module referred to in this application refers to a series of computer-readable instruction segments that can be executed by at least one processor and can complete fixed functions, and are stored in a memory. In some embodiments, the functions of each module will be detailed in subsequent embodiments.
  • the acquiring module 201 is used to acquire the face image to be determined that needs to be judged on the face value;
  • the face image to be determined refers to a front face image of a human face.
  • the extraction module 202 is configured to extract multiple first feature points from the face image to be judged using the face landmark (dotting) technique;
  • A first feature point refers to a point marked on the outer contour of the face or on the edge of a facial organ; the face landmark technique can be used to obtain the plurality of first feature points in the face image to be judged and their coordinates.
  • the calculation module 203 is configured to calculate multiple first features according to the coordinates of the multiple first feature points, where the first features include the lengths of the parts of the face in the face image to be judged and the distances between certain pairs of parts;
  • A first feature refers to the length of a certain part or the distance between two parts, for example, the length of the eyes: 2 cm; the distance from the bottom of the nose to the mouth: 2 cm; and so on.
  • a plurality of the first features may be calculated based on the coordinates of the first feature points.
  • the determining module 204 is configured to determine the first feature matching the preset feature type as the first standard feature from among the plurality of first features;
  • the first standard feature refers to a certain feature specified in advance.
  • the face width at the straight line position of the two eyes is selected as the first standard feature.
  • preset feature types such as eye length, eye width, nose length, nose width, mouth length, and so on.
  • the processing module 205 is configured to perform standardization processing on the salient features among the plurality of first features according to the first standard feature to obtain a plurality of first feature ratios, where a salient feature is a feature with a high degree of distinction in face value;
  • a first feature ratio refers to the ratio of a salient feature to the first standard feature.
  • the judging module 206 is configured to use a pre-trained face value judgment model to judge a plurality of the first feature ratios, and obtain the first face value judgment result of the face image to be judged;
  • the face value judgment result refers to the face value grade obtained after judging the first feature ratio of the face image to be judged using the face value judgment model.
  • the output module 207 is used to output the first face value judgment result.
  • Specifically, the first face value judgment result may be output to a user interface or page that interacts with the user.
  • the determination module 206 includes:
  • an obtaining sub-module, configured to obtain, for each first feature ratio, the first feature type of that first feature ratio;
  • a judging sub-module, configured to use the pre-trained face value judgment model to determine, for each preset face value grade, whether the plurality of first feature ratios fall within the feature ratio ranges at that grade matching the first feature types;
  • a determining sub-module, configured to determine the face value grade as a pending face value grade if the multiple first feature ratios all fall within the feature ratio ranges matching the first feature types at that grade;
  • the determining sub-module is further configured to determine, if there is exactly one pending face value grade, that the pending grade is the first face value judgment result of the face image to be judged.
  • For example, suppose feature X and feature Y are selected for judgment. If the judgment result is face value grade A, then both conditions hold: the feature ratio of feature X falls within the feature ratio range of feature X at grade A, and the feature ratio of feature Y falls within the feature ratio range of feature Y at grade A.
  • In other words, for each first feature ratio of the face image, it is determined whether all the first feature ratios fall within their corresponding feature ratio ranges at the same face value grade.
  • For each first feature ratio, its feature type (the first feature type) is first obtained; then, at each face value grade, the pre-trained face value judgment model is used to determine whether the plurality of first feature ratios fall within the ranges matching the first feature types at that grade. If only one face value grade meets the requirement, that pending grade is determined to be the first face value judgment result of the face image to be judged.
  • the determining sub-module is further configured to, if there are multiple pending face value grades, sort them from high to low face value to obtain a sorted queue of face value grades;
  • the determining sub-module is further configured to determine the pending face value grade in the middle of the sorted queue as the first face value judgment result of the face image to be judged.
  • If exactly one pending grade occupies the middle position, that grade is determined to be the first face value judgment result; if two pending grades share the middle position, one of them is selected as the first face value judgment result according to a preset rule.
  • For example, the face value grades are divided from high to low into five grades A, B, C, D, and E. If the pending grades obtained from the model are the four grades A, B, C, and D, the first face value judgment result is determined to be grade B according to a preset rule.
  • the determining sub-module is further configured to determine, if at no face value grade do all of the plurality of first feature ratios fall within the feature ratio ranges matching the first feature types, that the plurality of first feature ratios belong to a preset face value grade;
  • the determining sub-module is further configured to determine the preset face value grade as the first face value judgment result of the face image to be judged.
  • That is, when no grade meets the requirement, the preset face value grade may be determined as the first face value judgment result.
  • For example, the face value grades are divided from high to low into five grades A, B, C, D, and E; if the judgment of the face image does not match any of them, the result is determined as grade C.
  • the acquiring module 201 is further configured to acquire multiple face sample images that need to be trained;
  • the extraction module 202 is further configured to extract multiple second feature points in the face sample image for each face sample image;
  • the calculation module 203 is further configured to calculate multiple second features according to the coordinates of the multiple second feature points, where the second features include the length of each part of the face in the face sample image and the distance between two given parts;
  • the determining module 204 is further configured to determine, from the plurality of second features, the second feature matching the preset feature type as a second standard feature;
  • the processing module 205 is further configured to perform standardization processing on the multiple second features according to the second standard feature to obtain multiple second feature ratios;
  • the face value judging device may further include:
  • the selection module is configured to select, from the plurality of second features, salient features with a large degree of discrimination according to the distribution of the plurality of second feature ratios;
  • the construction module is used to construct a face value judgment model according to the salient features among the plurality of second features.
  • the face sample images are pre-prepared face images; each face sample image carries face value grade information, the face value grade having been determined in advance based on popular aesthetics.
  • the second feature points are points marked on the outer contour of the face and on the edges of the facial organs; a pre-trained face dotting technique can be used to obtain the plurality of second feature points and their coordinates in the face sample image.
  • the second feature includes the length of each part of the face in the face sample image and the distance between certain two parts, for example, the length of the eyes: 2cm, the distance from the bottom of the nose to the mouth: 2cm, and so on.
  • a second feature ratio is the ratio of a second feature to the second standard feature.
  • first, feature points are extracted from the prepared face sample images; the length of each part of the face and the distance between given pairs of parts in each face sample image are calculated from the feature-point coordinates; these lengths and/or distances are then standardized. Standardization converts every length and/or distance into its ratio to the standard feature (i.e., a feature ratio). Through this processing, every face is scaled so that the standard feature equals one unit: different faces can be compared with each other, and features of the same face can also be compared with each other. In addition, because the differences between face data are small, a data precision of at least six decimal places is required.
  • the width of the face along the straight line through the two eyes is selected as the standard feature. For example, if the face width is 10 cm and the eye length is 2 cm, the feature ratio obtained after standardizing the eye-length feature is 0.200000.
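The standardization step can be sketched as follows (the function and key names are our own illustration):

```python
# Standardization as described above: every raw length/distance is divided by
# the standard feature (the face width along the line through both eyes),
# giving unit-free feature ratios kept to six decimal places.

def standardize(features_cm, standard_key="face_width"):
    """Convert raw lengths (e.g. in cm) into ratios to the standard feature."""
    standard = features_cm[standard_key]
    return {name: round(value / standard, 6) for name, value in features_cm.items()}

ratios = standardize({"face_width": 10.0, "eye_length": 2.0, "nose_to_mouth": 2.0})
# a 2 cm eye length against a 10 cm face width yields the ratio 0.200000
```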
  • in the embodiment of the application, after the feature ratios of the sample face images are obtained, the distribution of these feature ratios across different face value grades is observed, and the feature types corresponding to the feature ratios that discriminate most strongly between grades are identified.
  • the embodiment of the application generates box plots of the various feature ratios at different face value grades, and determines from the box plots several salient features with relatively large discrimination between grades: lower-court length, middle-court length, the distance between the eyes, eye length, nose width, eye width, and so on. These salient features are then trained to obtain the face value judgment model. The box plots consist of rectangular boxes generated from the feature ratios and face value grades.
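One hedged way to operationalize this selection (the scoring heuristic and all names below are our own, not the patent's): score each feature by how far apart its per-grade medians sit relative to its overall spread, and keep the top-scoring features as salient.

```python
# Salient-feature selection from per-grade ratio distributions: a feature whose
# per-grade distributions are well separated discriminates face value grades
# well. The spread score here is an illustrative heuristic.
from statistics import median

def grade_spread_score(ratios_by_grade):
    """Spread of per-grade medians relative to the overall range (0..1)."""
    medians = [median(vals) for vals in ratios_by_grade.values()]
    lo = min(min(vals) for vals in ratios_by_grade.values())
    hi = max(max(vals) for vals in ratios_by_grade.values())
    return (max(medians) - min(medians)) / (hi - lo) if hi > lo else 0.0

def select_salient(features, top_k=9):
    """features: {name: {grade: [ratios...]}} -> names of the top_k salient features."""
    return sorted(features, key=lambda f: grade_spread_score(features[f]), reverse=True)[:top_k]
```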
  • FIG. 2 is an example diagram of a face image disclosed in this application.
  • the face dotting technology can be used to extract 71 points from the face image shown in FIG. 2; at the same time, the position coordinates of each feature point are recorded, and the distances or lengths of the "three courts and five eyes" are further calculated. For example, the length of "five eyes 1" is the distance from point 13 to point 17 in the face image shown in FIG. 2. In the same way, all the distances that subjectively have a great influence on appearance are calculated, such as eye length, eye width, nose width, mouth width, upper-court length, middle-court length, lower-court length, and so on.
  • 9 salient features can be selected, for example: the perpendicular distance from point 6 to the line through points 51-52 (lower-court length), the perpendicular distance between the line through points 26-39 and the line through points 51-52 (middle-court length), the distance from point 30 to point 17 (distance between the eyes), the distance from point 10 to point 2 (face width), the distance from point 11 to point 1 (face width), the perpendicular distance from the line through points 58-62 to the line through points 51-52 (distance from the mouth to the nose), the distance from point 50 to point 53 (nose width), the distance from point 15 to point 19 (eye width), and the distance from point 30 to point 34 (eye length).
  • the method for the construction module to construct the face value judgment model according to the salient features of the plurality of second features is specifically as follows:
  • the face value judgment model is generated according to the salient features among the plurality of second features, the multiple face value grades, and the feature ratio range corresponding to each face value grade.
  • the face value grades are pre-divided into multiple types, and the types and number of face value grades are predefined. For example, the face value grades can be divided into five grades: from high to low, A, B, C, D and E. The grades may also be divided into more or fewer levels from high to low; the characters A, B, C, D and E are merely predefined identifiers for different face value grades, and other characters may also be used to identify different face value grades, which is not specifically limited in the embodiment of the present application.
  • when training a salient feature, the feature ratio range of that salient feature at each face value grade must be determined from the maximum and minimum values of the box plot of the salient feature at that grade. After the ranges are determined, it must be checked whether the feature ratio ranges of the different face value grades satisfy extreme-value consistency; if they do not, the feature ratio ranges need to be modified. The face value judgment model is then generated.
  • the face value judgment model corresponds to multiple face value grades, which are divided according to the level of face value; each face value grade includes all the feature types, and for each feature type the corresponding feature value range differs between grades.
  • in the face value judgment model, each feature type has its corresponding feature ratio range (the value range of the feature ratio), and each feature type has a matching feature ratio range in every face value grade.
  • feature types include, for example, eye length, eye width, nose length, nose width, mouth length, and so on.
  • the face image to be judged, for which face value judgment is required, can be acquired; face dotting technology is used to extract multiple first feature points from the face image to be judged; multiple first features are calculated from the coordinates of the multiple first feature points, where the first features include the length of each part of the face in the face image to be judged and the distance between two given parts; among the multiple first features, the first feature matching the preset feature type is determined as the first standard feature; the salient features among the multiple first features are standardized according to the first standard feature to obtain multiple first feature ratios, where a salient feature is a feature with a large degree of face value discrimination; a pre-trained face value judgment model is used to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged; and the first face value judgment result is output. Throughout the face value judgment process, the object of the face value judgment model is the salient features, which are features with a large degree of face value discrimination; judging the salient features yields a first face value judgment result that better represents the face value of the face image to be judged, thereby improving the accuracy of face value judgment.
  • FIG. 4 is a schematic structural diagram of an electronic device that implements a preferred embodiment of a method for judging a face value according to the present application.
  • the electronic device 3 includes a memory 31, at least one processor 32, computer readable instructions 33 stored in the memory 31 and executable on the at least one processor 32, and at least one communication bus 34.
  • FIG. 4 is only an example of the electronic device 3 and does not constitute a limitation on the electronic device 3. It may include more or fewer components than shown in the figure, combine certain components, or use different components; for example, the electronic device 3 may also include input/output devices, network access devices, and so on.
  • the electronic device 3 includes, but is not limited to, any electronic product that can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice-control device, for example, a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game console, an interactive network television (Internet Protocol Television, IPTV), a smart wearable device, and the like.
  • the network where the electronic device 3 is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), etc.
  • the at least one processor 32 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the processor 32 may be a microprocessor, or any conventional processor; the processor 32 is the control center of the electronic device 3 and connects the various parts of the entire electronic device 3 through various interfaces and lines.
  • the memory 31 may be used to store the computer-readable instructions 33 and/or modules/units, and the processor 32 runs or executes the computer-readable instructions and/or modules/units stored in the memory 31, and The data stored in the memory 31 is called to realize various functions of the electronic device 3.
  • the memory 31 may mainly include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the storage data area may store data created according to the use of the electronic device 3 (such as audio data, a phone book, etc.), and the like.
  • the memory 31 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the present application provides an electronic device 3; the memory 31 in the electronic device 3 stores multiple instructions to implement a face value judgment method, and the processor 32 may execute the multiple instructions to achieve:
  • according to the first standard feature, the salient features among the multiple first features are standardized to obtain multiple first feature ratios, where a salient feature is a feature with a large degree of face value discrimination;
  • the using a pre-trained face value judgment model to judge a plurality of the first feature ratios, and obtaining the first face value judgment result of the face image to be judged includes:
  • for each preset face value grade, a pre-trained face value judgment model is used to judge whether the multiple first feature ratios all fall within the feature ratio ranges matching the first feature types at that face value grade;
  • if there is one pending face value grade, the pending face value grade is determined as the first face value judgment result of the face image to be judged.
  • the processor 32 can execute the multiple instructions to achieve:
  • the processor 32 can execute the multiple instructions to achieve:
  • the preset face value grade is determined as the first face value judgment result of the face image to be judged.
  • the processor 32 can execute the multiple instructions to achieve:
  • a face value judgment model is constructed.
  • the constructing a face value judgment model according to the salient features of the plurality of second features includes:
  • the face value judgment model is generated according to the salient features among the plurality of second features, the multiple face value grades, and the feature ratio range corresponding to each face value grade.
  • the face value judgment model corresponds to multiple face value grades, which are divided according to the level of face value; each face value grade includes all the feature types, and for each feature type the corresponding feature value range differs between grades.
  • the face image to be judged, for which face value judgment is required, can be acquired; face dotting technology is used to extract multiple first feature points from the face image to be judged; multiple first features are calculated from the coordinates of the multiple first feature points, where the first features include the length of each part of the face in the face image to be judged and the distance between two given parts; among the multiple first features, the first feature matching the preset feature type is determined as the first standard feature; the salient features among the multiple first features are standardized according to the first standard feature to obtain multiple first feature ratios, where a salient feature is a feature with a large degree of face value discrimination; a pre-trained face value judgment model is used to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged; and the first face value judgment result is output. Throughout the face value judgment process, the object of the face value judgment model is the salient features, which are features with a large degree of face value discrimination; judging the salient features yields a first face value judgment result that better represents the face value of the face image to be judged, thereby improving the accuracy of face value judgment.
  • the integrated module/unit of the electronic device 3 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • this application implements all or part of the processes in the above method embodiments, which may also be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a computer-readable storage medium. That is, this application provides one or more computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the above method embodiments.
  • the readable storage media include non-volatile readable storage media and volatile readable storage media; the readable storage medium includes computer-readable instructions comprising computer-readable instruction code, and the computer-readable instruction code may be in source code form, object code form, an executable file, some intermediate form, or the like.
  • the computer-readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
  • the content contained in the computer-readable medium may be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
  • the present application provides one or more readable storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
  • according to the first standard feature, the salient features among the multiple first features are standardized to obtain multiple first feature ratios, where a salient feature is a feature with a large degree of face value discrimination;
  • the using a pre-trained face value judgment model to judge a plurality of the first feature ratios, and obtaining the first face value judgment result of the face image to be judged includes:
  • for each preset face value grade, a pre-trained face value judgment model is used to judge whether the multiple first feature ratios all fall within the feature ratio ranges matching the first feature types at that face value grade;
  • if there is one pending face value grade, the pending face value grade is determined as the first face value judgment result of the face image to be judged.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following steps:
  • the preset face value grade is determined as the first face value judgment result of the face image to be judged.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following steps:
  • a face value judgment model is constructed.
  • the constructing a face value judgment model according to the salient features of the plurality of second features includes:
  • the face value judgment model is generated according to the salient features among the plurality of second features, the multiple face value grades, and the feature ratio range corresponding to each face value grade.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.


Abstract

A face value judgment method, apparatus, electronic device and storage medium. The method includes: the electronic device acquires a face image to be judged for which face value judgment is required (S11); the electronic device uses face dotting technology to extract multiple first feature points from the face image to be judged (S12); the electronic device calculates multiple first features according to the coordinates of the multiple first feature points; from the multiple first features, the electronic device determines the first feature matching a preset feature type as the first standard feature (S14); according to the first standard feature, the electronic device standardizes the salient features among the multiple first features to obtain multiple first feature ratios, where the salient features are features with a large degree of face value discrimination (S15); the electronic device uses a pre-trained face value judgment model to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged (S16); the electronic device outputs the first face value judgment result (S17).

Description

Face value judgment method, apparatus, electronic device and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on September 23, 2019, with application number 201910901612.0 and invention title "Face value judgment method, apparatus, electronic device and storage medium", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the technical field of artificial intelligence, and in particular to a face value judgment method, apparatus, electronic device and storage medium.
Background
"Face value" (yanzhi) is an Internet term that has become popular in recent years; news about face value can now be seen on major Internet sites almost every day. In its simplest interpretation, face value is a person's looks: a judgment of how good or poor the facial features are. Face value also has measurement standards and can be measured and compared; common expressions include "low face value", "high face value", "carrying the face value" and "off-the-chart face value". Among them, "high face value" and "carrying the face value" mean good-looking, while "low face value" means not good-looking.
Technical problem
At present, face value judgment methods mainly judge according to how close the positions and proportions of the facial features are to the "golden ratio": the closer a face is to the golden ratio, the higher its face value score. However, different people have different aesthetic standards, and as times develop, people's aesthetic tastes change as well; judging face value according to the golden ratio is not very accurate.
Therefore, the inventors realized that how to judge a user's face value more accurately is a technical problem to be solved urgently.
Technical solution
In view of the above, it is necessary to provide a face value judgment method, apparatus, electronic device and storage medium that can improve the accuracy of face value judgment.
A first aspect of this application provides a face value judgment method, the method including:
acquiring a face image to be judged for which face value judgment is required;
using face dotting technology to extract multiple first feature points from the face image to be judged;
calculating multiple first features according to the coordinates of the multiple first feature points, where the first features include the length of each part of the face in the face image to be judged and the distance between two given parts;
from the multiple first features, determining the first feature matching a preset feature type as a first standard feature;
according to the first standard feature, standardizing the salient features among the multiple first features to obtain multiple first feature ratios, where the salient features are features with a large degree of face value discrimination;
using a pre-trained face value judgment model to judge the multiple first feature ratios to obtain a first face value judgment result of the face image to be judged;
outputting the first face value judgment result.
A second aspect of this application provides a face value judgment apparatus, the apparatus including:
an acquisition module configured to acquire a face image to be judged for which face value judgment is required;
an extraction module configured to use face dotting technology to extract multiple first feature points from the face image to be judged;
a calculation module configured to calculate multiple first features according to the coordinates of the multiple first feature points, where the first features include the length of each part of the face in the face image to be judged and the distance between two given parts;
a determination module configured to determine, from the multiple first features, the first feature matching a preset feature type as a first standard feature;
a processing module configured to standardize the salient features among the multiple first features according to the first standard feature to obtain multiple first feature ratios, where the salient features are features with a large degree of face value discrimination;
a judgment module configured to use a pre-trained face value judgment model to judge the multiple first feature ratios to obtain a first face value judgment result of the face image to be judged;
an output module configured to output the first face value judgment result.
A third aspect of this application provides an electronic device, the electronic device including a processor and a memory, the processor being configured to implement the face value judgment method when executing computer-readable instructions stored in the memory.
A fourth aspect of this application provides one or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the face value judgment method.
Beneficial effects
Throughout the face value judgment process of this application, the object of the face value judgment model is the salient features, which are features with a large degree of face value discrimination. Judging the salient features yields a first face value judgment result that better represents the face value of the face image to be judged, thereby improving the accuracy of face value judgment.
The details of one or more embodiments of this application are set forth in the drawings and description below; other features and advantages of this application will become apparent from the description, the drawings and the claims.
Description of drawings
To explain the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of this application, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flowchart of a preferred embodiment of a face value judgment method disclosed in this application.
FIG. 2 is an example diagram of a face image disclosed in this application.
FIG. 3 is a functional module diagram of a preferred embodiment of a face value judgment apparatus disclosed in this application.
FIG. 4 is a schematic structural diagram of an electronic device implementing a preferred embodiment of the face value judgment method of this application.
Embodiments of the invention
To understand the above objects, features and advantages of this application more clearly, this application is described in detail below with reference to the drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments may be combined with each other.
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of this application. The terms used in the specification of this application are only for the purpose of describing specific embodiments and are not intended to limit this application.
To make the above objects, features and advantages of this application more obvious and understandable, this application is further described in detail below with reference to the drawings and specific implementations.
The face value judgment method of the embodiments of this application is applied in an electronic device; it may also be applied in a hardware environment composed of an electronic device and a server connected to the electronic device through a network, and executed jointly by the server and the electronic device. The network includes, but is not limited to, a wide area network, a metropolitan area network or a local area network.
A server may refer to a computer system that can provide services to other devices (such as electronic devices) in the network. A personal computer can also be called a server if it can externally provide File Transfer Protocol (FTP) services. In the narrow sense, a server refers to certain high-performance computers that can provide services externally through the network; compared with ordinary personal computers, they have higher requirements in stability, security, performance and so on, so hardware such as the CPU, chipset, memory, disk system and network differs from that of an ordinary personal computer.
The electronic device includes a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, etc. The electronic device may also include a network device and/or user equipment. The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a super virtual computer composed of a group of loosely coupled computers. The user equipment includes, but is not limited to, any electronic product that can interact with the user through a keyboard, a mouse, a remote control, a touch panel or a voice-control device, for example, a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game console, an interactive network television (IPTV), a smart wearable device, etc. The network where the user equipment and the network device are located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), etc.
Referring to FIG. 1, FIG. 1 is a flowchart of a preferred embodiment of a face value judgment method disclosed in this application. The order of the steps in the flowchart may be changed, and some steps may be omitted, according to different requirements.
S11: The electronic device acquires a face image to be judged for which face value judgment is required.
The face image to be judged refers to a frontal face image.
As an optional implementation, before step S11, the method further includes:
acquiring multiple face sample images to be trained;
for each face sample image, extracting multiple second feature points from the face sample image;
calculating multiple second features according to the coordinates of the multiple second feature points, where the second features include the length of each part of the face in the face sample image and the distance between two given parts;
from the multiple second features, determining the second feature matching the preset feature type as a second standard feature;
standardizing the multiple second features according to the second standard feature to obtain multiple second feature ratios;
selecting, from the multiple second features, salient features with a large degree of face value discrimination according to the distribution of the multiple second feature ratios;
constructing a face value judgment model according to the salient features among the multiple second features.
The face sample images are face images prepared in advance; each face sample image carries face value grade information, the face value grade having been determined in advance according to popular aesthetics.
The second feature points are points marked on the outer contour of the face and on the edges of the facial organs; a pre-trained face dotting technique may be used to obtain the multiple second feature points and their coordinates in the face sample image.
The second features include the length of each part of the face in the face sample image and the distance between two given parts; for example, eye length: 2 cm, distance from the bottom of the nose to the mouth: 2 cm, and so on.
A second feature ratio is the ratio of a second feature to the second standard feature.
In this optional implementation, feature points are first extracted from the prepared face sample images; the length of each part and the distance between given pairs of parts in each face sample image are calculated from the feature-point coordinates; these lengths and/or distances are then standardized. Standardization converts every length and/or distance into its ratio to the standard feature (i.e., a feature ratio). Through this processing, every face is scaled so that the standard feature equals one unit: different faces can be compared with each other, and the features of the same face can also be compared with each other. In addition, because the differences between face data are small, a data precision of at least six decimal places is required. For example, in the embodiment of this application the face width along the straight line through the two eyes is selected as the standard feature; in one face sample image, if the face width is 10 cm and the eye length is 2 cm, the feature ratio obtained after standardizing the eye-length feature is 0.200000.
After the feature ratios of the sample face images are obtained, the distribution of these feature ratios across different face value grades is observed, and the feature types corresponding to the feature ratios with relatively large discrimination between grades are identified. The embodiment of this application generates box plots of the various feature ratios at different face value grades from the sample feature ratios and grades, and determines from the box plots several salient features with relatively large discrimination between grades: lower-court length, middle-court length, the distance between the eyes, eye length, nose width, eye width, etc. These salient features are then trained to obtain the face value judgment model. The box plots consist of rectangular boxes generated from the feature ratios and face value grades.
Referring also to FIG. 2, FIG. 2 is an example diagram of a face image disclosed in this application. As shown in FIG. 2, face dotting technology can be used to extract 71 points from the face image shown in FIG. 2; at the same time, the position coordinates of each feature point are recorded, and the distances or lengths of the "three courts and five eyes" are further calculated. For example, the length of "five eyes 1" is the distance from point 13 to point 17 in the face image shown in FIG. 2. In the same way, all the distances that subjectively have a great influence on appearance are calculated, such as eye length, eye width, nose width, mouth width, upper-court length, middle-court length, lower-court length, and so on.
Among them, 9 salient features can be selected, for example: the perpendicular distance from point 6 to the line through points 51-52 (lower-court length), the perpendicular distance between the line through points 26-39 and the line through points 51-52 (middle-court length), the distance from point 30 to point 17 (distance between the eyes), the distance from point 10 to point 2 (face width), the distance from point 11 to point 1 (face width), the perpendicular distance from the line through points 58-62 to the line through points 51-52 (distance from the mouth to the nose), the distance from point 50 to point 53 (nose width), the distance from point 15 to point 19 (eye width), and the distance from point 30 to point 34 (eye length).
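The two kinds of geometric quantities used above, point-to-point distances and point-to-line perpendicular distances, can be computed from landmark coordinates as sketched below (landmark indices follow the 71-point scheme of FIG. 2; the helper names and the sample coordinates are our own illustration, not values from a real image):

```python
# Geometric features from landmark coordinates: Euclidean distance between two
# landmarks, and perpendicular distance from a landmark to the line through
# two other landmarks (e.g. lower-court length from point 6 to line 51-52).
import math

def length(p, q):
    """Euclidean distance between two landmark points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

# Illustrative coordinates only:
pts = {6: (5.0, 0.0), 30: (2.0, 6.0), 34: (4.0, 6.0), 51: (4.0, 2.0), 52: (6.0, 2.0)}
eye_length = length(pts[30], pts[34])                  # distance point 30 - point 34
lower_court = point_to_line(pts[6], pts[51], pts[52])  # point 6 to line 51-52
```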
Specifically, constructing the face value judgment model according to the salient features among the multiple second features includes:
learning the salient features among the multiple second features;
determining multiple face value grades corresponding to the salient features among the multiple second features and the feature ratio range corresponding to each face value grade;
judging whether the feature ratio ranges of different face value grades satisfy extreme-value consistency;
if the feature ratio ranges of different face value grades satisfy extreme-value consistency, generating the face value judgment model according to the salient features among the multiple second features, the multiple face value grades, and the feature ratio range corresponding to each face value grade.
The face value grades are pre-divided into multiple types, and the types and number of grades are predefined. For example, the face value grades can be divided into five grades: from high to low, A, B, C, D and E. The grades may also be divided into more or fewer levels from high to low; the characters A, B, C, D and E are merely predefined identifiers for different face value grades, and other characters may also be used to identify different grades, which is not specifically limited in the embodiment of this application.
In the embodiment of this application, when training a salient feature, the feature ratio range of the salient feature at each face value grade must be determined from the maximum and minimum values of the box plot of that feature at that grade. After the feature ratio ranges of the salient feature at the different grades are determined, it must be checked whether they satisfy extreme-value consistency. For example, suppose a salient feature has feature ratio ranges [a1,b1], [a2,b2], [a3,b3], [a4,b4], [a5,b5] at the five grades, and the face value grade is monotonically increasing in this feature, i.e., the higher the grade, the larger the maximum and minimum of the corresponding feature ratio. If the ranges satisfy a1<=a2<=a3<=a4<=a5 and b1<=b2<=b3<=b4<=b5, the feature ratio ranges of the different grades satisfy extreme-value consistency. The face value judgment model is then generated according to the salient features among the multiple second features, the multiple face value grades and the feature ratio range corresponding to each grade.
Optionally, if the feature ratio ranges of different face value grades do not satisfy extreme-value consistency, the ranges need to be modified. In the example above, with the salient feature's ranges [a1,b1], [a2,b2], [a3,b3], [a4,b4], [a5,b5] at the five grades and the grade monotonically increasing in the feature, if a range violates a1<=a2<=a3<=a4<=a5, b1<=b2<=b3<=b4<=b5, it must be changed to the value of the next grade's range. For example, if a1>a2<=a3<=a4<=a5, the value of a1 is changed to the value of a2, so that a1<=a2<=a3<=a4<=a5 holds.
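This check-and-repair rule can be sketched as follows, under the assumption that the ranges are passed in the grade order in which the feature is monotonically increasing (function names are our own). A backward pass lowers any violating bound to the next range's value, so repairs cascade as in the a1 > a2 example above:

```python
# Extreme-value consistency: for ranges [(a1,b1), ..., (a5,b5)] ordered so the
# feature is monotonically increasing across grades, both bounds must be
# non-decreasing; a violating bound is lowered to the next grade's value.

def is_consistent(ranges):
    """True when both range endpoints are non-decreasing across grades."""
    return all(ranges[i][0] <= ranges[i + 1][0] and ranges[i][1] <= ranges[i + 1][1]
               for i in range(len(ranges) - 1))

def enforce_consistency(ranges):
    """Return a repaired copy of the per-grade feature ratio ranges."""
    fixed = [list(r) for r in ranges]
    for j in (0, 1):                              # lower bounds, then upper bounds
        for i in range(len(fixed) - 2, -1, -1):   # backward pass so fixes cascade
            fixed[i][j] = min(fixed[i][j], fixed[i + 1][j])
    return [tuple(r) for r in fixed]
```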
The face value judgment model corresponds to multiple face value grades, which are divided according to the level of face value; each face value grade includes all the feature types, and for each feature type the corresponding feature value range differs between grades.
In the face value judgment model, each feature type has its corresponding feature ratio range (the value range of the feature ratio), and each feature type has a matching feature ratio range in every face value grade.
Feature types include, for example, eye length, eye width, nose length, nose width, mouth length, and so on.
S12: The electronic device uses face dotting technology to extract multiple first feature points from the face image to be judged.
The first feature points are points marked on the outer contour of the face and on the edges of the facial organs; face dotting technology can be used to obtain the multiple first feature points and their coordinates in the face image.
S13: The electronic device calculates multiple first features according to the coordinates of the multiple first feature points, where the first features include the length of each part of the face in the face image to be judged and the distance between two given parts.
A first feature is the length of a part or the distance between two parts, for example, eye length: 2 cm, distance from the bottom of the nose to the mouth: 2 cm, etc. The multiple first features can be calculated from the coordinates of the first feature points.
S14: From the multiple first features, the electronic device determines the first feature matching a preset feature type as the first standard feature.
The first standard feature is a predefined feature. The embodiment of this application selects the face width along the straight line through the two eyes as the first standard feature. Preset feature types include, for example, eye length, eye width, nose length, nose width, mouth length, etc.
S15: According to the first standard feature, the electronic device standardizes the salient features among the multiple first features to obtain multiple first feature ratios, where the salient features are features with a large degree of face value discrimination.
A first feature ratio is the ratio of a salient feature to the first standard feature.
Specifically, reference may be made to the processing method used in the above model training, which is not repeated here.
S16: The electronic device uses the pre-trained face value judgment model to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged.
The face value judgment result is the face value grade obtained after the first feature ratios of the face image to be judged are judged using the face value judgment model.
Specifically, using the pre-trained face value judgment model to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged includes:
for each first feature ratio, acquiring the first feature type of the first feature ratio;
for each preset face value grade, using the pre-trained face value judgment model to judge whether the multiple first feature ratios all fall within the feature ratio ranges matching the first feature types at that grade;
if the multiple first feature ratios all fall within the feature ratio ranges matching the first feature types at that grade, determining that grade as a pending face value grade;
if there is one pending face value grade, determining the pending face value grade as the first face value judgment result of the face image to be judged.
In this optional implementation, a face value grade can be determined as the grade of the face image to be judged only when all the first feature ratios of the image fall within their respective feature ratio ranges at that same grade. For example, when judging the face value of a face image using features X and Y, a judgment result of grade A requires that the feature ratio of X falls within X's feature ratio range at grade A, and the feature ratio of Y falls within Y's feature ratio range at grade A. Therefore, every first feature ratio of the face image to be judged must be judged, to decide whether all the first feature ratios fall within the corresponding feature ratio ranges of the same grade. For each first feature ratio, its feature type (the first feature type) is first acquired; then, for each face value grade, the pre-trained face value judgment model judges whether the multiple first feature ratios all fall within the feature ratio ranges matching the first feature types at that grade. If exactly one grade satisfies the above requirement, that pending face value grade is determined as the first face value judgment result of the face image to be judged.
As an optional implementation, the method further includes:
if there are multiple pending face value grades, sorting the multiple pending grades by face value grade from high to low to obtain a face value grade sorting queue;
determining any pending grade in the middle position of the sorting queue as the first face value judgment result of the face image to be judged.
If there are multiple pending face value grades, i.e., multiple candidate first face value judgment results, the pending grades are sorted by face value grade from high to low to obtain the face value grade sorting queue, and any pending grade in the middle position of the queue is determined as the first face value judgment result. If only one pending grade occupies the middle position, that pending grade is determined as the first face value judgment result; if two pending grades occupy the middle position, one of them is taken as the first face value judgment result according to a preset rule. For example, in the embodiment of this application the grades are divided from high to low into A, B, C, D and E; if the pending grades preliminarily obtained by the model are the four grades A, B, C and D, the first face value judgment result is determined to be grade B according to the preset rule.
As an optional implementation, the method further includes:
if none of the multiple first feature ratios falls within the feature ratio ranges matching the first feature types at any face value grade, determining that the multiple first feature ratios belong to a preset face value grade;
determining the preset face value grade as the first face value judgment result of the face image to be judged.
In this optional implementation, if the multiple first feature ratios do not all fall within the matching feature ratio ranges of any face value grade, a preset face value grade may be determined as the first face value judgment result. For example, in the embodiment of this application the grades are divided from high to low into A, B, C, D and E; when the judgment result of a face image satisfies none of the five grades, it is judged to be grade C.
S17: The electronic device outputs the first face value judgment result.
In the embodiment of this application, after the face value judgment result of the face image to be judged, i.e., the first face value judgment result, is obtained, it may be output to an interface/page that interacts with the user.
In the method flow described in FIG. 1, a face image to be judged for which face value judgment is required can be acquired; face dotting technology is used to extract multiple first feature points from the face image to be judged; multiple first features are calculated from the coordinates of the multiple first feature points, the first features including the length of each part of the face in the image and the distance between two given parts; among the multiple first features, the first feature matching the preset feature type is determined as the first standard feature; according to the first standard feature, the salient features among the multiple first features are standardized to obtain multiple first feature ratios, the salient features being features with a large degree of face value discrimination; the pre-trained face value judgment model judges the multiple first feature ratios to obtain the first face value judgment result of the image, and the result is output. It can be seen that the salient features of the face image can be obtained, their data standardized and input into the face value judgment model, face value judgment performed on the salient features, and the face value judgment result finally obtained. Throughout the face value judgment process, the object of the face value judgment model is the salient features, which are features with a large degree of face value discrimination; judging the salient features yields a first face value judgment result that better represents the face value of the face image to be judged, thereby improving the accuracy of face value judgment.
The above are only specific implementations of this application, but the scope of protection of this application is not limited thereto; those of ordinary skill in the art may make improvements without departing from the inventive concept of this application, and such improvements all fall within the scope of protection of this application.
Referring to FIG. 3, FIG. 3 is a functional module diagram of a preferred embodiment of a face value judgment apparatus disclosed in this application.
In some embodiments, the face value judgment apparatus runs in an electronic device. The apparatus may include multiple functional modules composed of program code segments. The program code of each segment in the apparatus may be stored in a memory and executed by at least one processor to perform some or all of the steps of the face value judgment method described in FIG. 1.
In this embodiment, the face value judgment apparatus may be divided into multiple functional modules according to the functions it performs. The functional modules may include: an acquisition module 201, an extraction module 202, a calculation module 203, a determination module 204, a processing module 205, a judgment module 206 and an output module 207. A module in this application refers to a series of computer-readable instruction segments, stored in the memory, that can be executed by at least one processor and can perform a fixed function. In some embodiments, the functions of the modules are detailed in subsequent embodiments.
The acquisition module 201 is configured to acquire a face image to be judged for which face value judgment is required.
The face image to be judged refers to a frontal face image.
The extraction module 202 is configured to use face dotting technology to extract multiple first feature points from the face image to be judged.
The first feature points are points marked on the outer contour of the face and on the edges of the facial organs; face dotting technology can be used to obtain the multiple first feature points and their coordinates.
The calculation module 203 is configured to calculate multiple first features according to the coordinates of the multiple first feature points, where the first features include the length of each part of the face in the face image to be judged and the distance between two given parts.
A first feature is the length of a part or the distance between two parts, for example, eye length: 2 cm, distance from the bottom of the nose to the mouth: 2 cm, etc. The multiple first features can be calculated from the coordinates of the first feature points.
The determination module 204 is configured to determine, from the multiple first features, the first feature matching a preset feature type as the first standard feature.
The first standard feature is a predefined feature. The embodiment of this application selects the face width along the straight line through the two eyes as the first standard feature. Preset feature types include, for example, eye length, eye width, nose length, nose width, mouth length, etc.
The processing module 205 is configured to standardize the salient features among the multiple first features according to the first standard feature to obtain multiple first feature ratios, where the salient features are features with a large degree of face value discrimination.
A first feature ratio is the ratio of a salient feature to the first standard feature.
Specifically, reference may be made to the processing method used in the above model training, which is not repeated here.
The judgment module 206 is configured to use the pre-trained face value judgment model to judge the multiple first feature ratios to obtain the first face value judgment result of the face image to be judged.
The face value judgment result is the face value grade obtained after the first feature ratios of the face image to be judged are judged using the face value judgment model.
The output module 207 is configured to output the first face value judgment result.
In the embodiment of this application, after the face value judgment result of the face image to be judged, i.e., the first face value judgment result, is obtained, it may be output to an interface/page that interacts with the user.
作为一种可选的实施方式,所述判定模块206包括:
获取子模块,用于针对每个所述第一特征比值,获取所述第一特征比值的第一特征类型;
判断子模块,用于针对每个预设的颜值档次,使用预先训练好的颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围;
确定子模块,用于若多个所述第一特征比值均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定所述颜值档次为待定档颜值档次;
所述确定子模块,还用于若所述待定档颜值档次为一个,确定所述待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
在该可选的实施方式中，当所述待判定人脸图像的所有所述第一特征比值都属于同一颜值档次中各自对应的特征比值范围时，才可以判定该颜值档次为所述待判定人脸图像的颜值档次。比如，对一张人脸图像进行颜值判定时，选择了特征X和特征Y来进行判定，如果判定结果为颜值档次A，那么应满足：特征X的特征比值属于特征X在颜值档次A中的特征比值范围；特征Y的特征比值属于特征Y在颜值档次A中的特征比值范围。因此，需要对所述待判定人脸图像的每个所述第一特征比值都进行判定，判断是否所有的所述第一特征比值都属于同一个颜值档次中各自对应的特征比值范围。针对每个所述第一特征比值，先获取其特征类型，即所述第一特征类型，然后在每一个颜值档次中，使用预先训练好的所述颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围；如果满足上述要求的颜值档次只有一个，则确定该待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
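上述逐档匹配的判定逻辑可以用如下 Python 片段示意（其中的特征类型 X、Y 以及各档次的特征比值范围均为假设值；并假设每个颜值档次都包含所有特征类型对应的范围）：

```python
def match_grade(ratios, grade_ranges):
    """逐档判断所有特征比值是否均落入该档次对应的范围。

    ratios: {特征类型: 特征比值}
    grade_ranges: {颜值档次: {特征类型: (下限, 上限)}}
    返回所有特征比值均匹配的待定档颜值档次列表。
    """
    candidates = []
    for grade, ranges in grade_ranges.items():
        if all(ranges[t][0] <= v <= ranges[t][1] for t, v in ratios.items()):
            candidates.append(grade)
    return candidates

# 假设的两个档次及其特征比值范围
grade_ranges = {
    "A": {"X": (0.30, 0.40), "Y": (0.10, 0.20)},
    "B": {"X": (0.25, 0.35), "Y": (0.08, 0.15)},
}
print(match_grade({"X": 0.32, "Y": 0.12}, grade_ranges))  # ['A', 'B']
```

当返回的待定档颜值档次只有一个时，即可直接将其作为第一颜值判定结果。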
作为一种可选的实施方式,所述确定子模块,还用于若所述待定档颜值档次为多个,将多个所述待定档颜值档次按颜值档次从高到低排序,获得颜值档次排序队列;
所述确定子模块,还用于将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
如果所述待定档颜值档次有多个，将多个所述待定档颜值档次按颜值档次从高到低排序，获得颜值档次排序队列；将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为第一颜值判定结果。其中，如果处于中间位置的只有一个所述待定档颜值档次，则确定该待定档颜值档次为所述第一颜值判定结果；如果处于中间位置的有2个所述待定档颜值档次，则按照预先设置的规则取其中一个作为所述第一颜值判定结果。比如：本申请实施例中颜值档次按高到低分为A、B、C、D、E五个档次，若模型初步得出的所述待定档颜值档次为A、B、C、D这4个颜值档次，则根据预设的规则确定所述第一颜值判定结果为颜值档次B。
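取中间档次的规则可以示意如下（此处假设当中间位置有两个档次时，预设规则取较高的一档，与文中 ABCD 取 B 的例子一致；档次顺序 "ABCDE" 为假设的从高到低排序）：

```python
def pick_middle(candidates, order="ABCDE"):
    """将多个待定档颜值档次按 order 从高到低排序，取中间位置的档次。

    当中间位置有两个档次时，(len - 1) // 2 取的是偏高的那一档，
    这里假设这就是预先设置的规则。
    """
    ranked = sorted(candidates, key=order.index)
    return ranked[(len(ranked) - 1) // 2]

print(pick_middle(["C", "A", "B", "D"]))  # B
```

例如待定档为 A、B、C、D 四档时返回 B，与上文示例一致。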
作为一种可选的实施方式,所述确定子模块,还用于若多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定多个所述第一特征比值属于预设颜值档次;
所述确定子模块,还用于将所述预设颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
在该可选的实施方式中，如果多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围，可以将预设的颜值档次确定为所述第一颜值判定结果。比如，本申请实施例中，颜值档次按高到低分为A、B、C、D、E五个档次，若对人脸图像的判定结果不满足ABCDE任何一档的时候，则判定为C档。
作为一种可选的实施方式,所述获取模块201,还用于获取需要进行训练的多个人脸样本图像;
所述提取模块202,还用于针对每个所述人脸样本图像,提取所述人脸样本图像中的多个第二特征点;
所述计算模块203,还用于根据所述多个第二特征点的坐标,计算多个第二特征,其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离;
所述确定模块204,还用于从所述多个第二特征中,将与所述预设特征类型匹配的所述第二特征确定为第二标准特征;
所述处理模块205,还用于根据所述第二标准特征,对所述多个第二特征进行标准化处理,获得多个第二特征比值;
所述颜值判定装置还可以包括:
选择模块,用于根据所述多个第二特征比值的分布情况,从所述多个第二特征中选择颜值区分度大的显著特征;
构建模块,用于根据所述多个第二特征中的显著特征,构建颜值判定模型。
其中,所述人脸样本图像是指预先准备好的人脸图像,所述人脸样本图像携带有颜值档次信息,所述人脸样本图像是预先根据大众审美确定了颜值档次的。
其中，所述第二特征点是指标记在脸的外部轮廓和器官的边缘的点，可以使用人脸打点技术来获取所述人脸样本图像中多个所述第二特征点及其坐标。
其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离,比如,眼睛的长度:2cm,鼻子底部到嘴巴的距离:2cm,等。
其中,所述第二特征比值是指所述第二特征与所述第二标准特征的比值结果。
在该可选的实施方式中，先对准备好的人脸样本图像提取特征点，根据特征点的坐标计算所述人脸样本图像中的各个部位的长度以及某两个部位的距离，然后对这些长度和/或距离进行标准化处理。所述标准化处理是将所有的长度和/或距离换算为与所述标准特征的比值结果（即特征比值）。通过标准化处理，可以将所有人脸缩放到以标准特征为1个单位的比例，不同人脸之间可进行相互比较，同一张人脸各个特征之间也可相互比较；另外，由于不同人脸数据之间的差异较小，因此需要数据精度至少为小数点后6位。比如，本申请实施例中，选取两眼所在直线位置的脸宽作为标准特征，在一张人脸样本图像中，若所述脸宽为10cm，眼睛的长度为2cm，对眼睛长度这个特征进行标准化处理后获得的特征比值为0.200000。
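上述标准化处理可以示意如下（其中以"脸宽"作为标准特征仅为沿用文中的选择，特征名称与数值均为演示用的假设值；结果按文中的精度要求保留小数点后 6 位，此处以字符串形式体现该精度）：

```python
def normalize(features, standard_key="脸宽"):
    """以标准特征为基准，将其余各特征换算为特征比值。

    features: {特征名称: 长度或距离}，standard_key 指定标准特征。
    返回 {特征名称: 比值字符串}，保留小数点后 6 位。
    """
    standard = features[standard_key]
    return {
        k: f"{v / standard:.6f}"
        for k, v in features.items()
        if k != standard_key
    }

# 假设的测量值，单位：cm
features = {"脸宽": 10.0, "眼睛长度": 2.0}
print(normalize(features))  # {'眼睛长度': '0.200000'}
```

这样所有人脸都被缩放到以标准特征为 1 个单位的比例，便于相互比较。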
在获取样本人脸图像的特征比值后,观察这些特征比值在不同颜值中的分布情况,再从中找出不同颜值区分度比较大的特征比值对应的特征类型。本申请实施例根据样本人脸图像的特征比值和颜值档次,生成各种特征比值在不同颜值档次的箱型图,并从箱型图中确定了在不同档次颜值区分度比较大的几种显著特征:下庭长度、中庭长度、两眼之间的距离、眼睛的长度、鼻子宽度、眼睛的宽度等。然后对这些显著特征进行训练,获得颜值判定模型。其中,箱型图由特征比值以及颜值档次生成的矩形框构成。
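统计各特征比值在不同颜值档次中的分布（对应箱型图的最小值/最大值）可以示意如下，样本数据为假设值；实际实现中也可以借助绘图库直接生成箱型图观察区分度，此处仅汇总统计量：

```python
from collections import defaultdict

def boxplot_stats(samples):
    """按特征类型与颜值档次汇总特征比值的最小值/最大值。

    samples: [(颜值档次, {特征类型: 特征比值}), ...]
    返回 {特征类型: {档次: (最小值, 最大值)}}，
    可据此观察某特征在不同档次间的区分度。
    """
    groups = defaultdict(list)
    for grade, ratios in samples:
        for t, v in ratios.items():
            groups[(t, grade)].append(v)
    stats = defaultdict(dict)
    for (t, grade), vs in groups.items():
        stats[t][grade] = (min(vs), max(vs))
    return dict(stats)

# 假设的样本特征比值
samples = [
    ("A", {"眼睛长度": 0.33}),
    ("A", {"眼睛长度": 0.35}),
    ("B", {"眼睛长度": 0.30}),
]
print(boxplot_stats(samples))
```

若某特征在各档次间的取值范围明显分离，即可将其选为显著特征。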
请一并参见图2,图2是本申请公开的一种人脸图像的示例图。如图2所示,可以使用人脸打点技术,从图2所示的人脸图像中提取71个点,同时,还可以记录各个特征点的位置坐标。并进一步计算“三庭五眼”的距离或长度,如“五眼1”的长度为图2所示的人脸图像中点13到点17的长度。同理计算所有主观判断对颜值影响大的距离,如眼睛长度,眼睛宽度,鼻子宽度,嘴巴宽度,上庭长度,中庭长度,下庭长度等。
其中，可以选择出9个显著特征，比如点6到点51-52直线的垂直长度（即下庭长度）、点26-39直线到点51-52直线的垂直长度（即中庭长度）、点30到点17的长度（即两眼的距离）、点10到点2的长度（即脸宽）、点11到点1的长度（即脸宽）、点58-62直线到点51-52直线的垂直长度（即嘴巴到鼻子的距离）、点50到点53的长度（即鼻子的宽度）、点15到点19的长度（即眼睛的宽度）、点30到点34的长度（即眼睛的长度）。
作为一种可选的实施方式,所述构建模块根据所述多个第二特征中的显著特征,构建颜值判定模型的方式具体为:
对所述多个第二特征中的显著特征进行学习;
确定所述多个第二特征中的显著特征对应的多个颜值档次以及每个所述颜值档次对应的特征比值范围;
判断不同颜值档次的特征比值范围是否符合极值一致性;
若不同颜值档次的特征比值范围符合极值一致性,根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围,生成颜值判定模型。
其中，所述颜值档次被预先分为多种，所述颜值档次的种类和数量都是预先规定的。比如，可以按照颜值从高到低将颜值档次分为A、B、C、D、E五个档次，也可以按照颜值从高到低划分为更多或更少的档次。A、B、C、D、E这些字符只是预先规定用来标识不同的颜值档次，也可以使用其它的字符来标识不同的颜值档次，本申请实施例对此不做具体的限定。
本申请实施例中，在对所述显著特征进行训练时，需要根据所述显著特征在不同颜值档次的箱型图中的最大值以及最小值来确定所述显著特征在不同颜值档次对应的特征比值范围。在确定了所述显著特征在不同颜值档次对应的特征比值范围后，需要判断所述特征比值范围是否符合极值一致性。比如，一个显著特征在五个颜值档次上对应的特征比值范围依次为[a1,b1]、[a2,b2]、[a3,b3]、[a4,b4]、[a5,b5]，且颜值档次在这个显著特征上是单调递增的，即颜值档次越高，该显著特征对应的特征比值的最小值与最大值越大，则当特征比值范围满足a1<=a2<=a3<=a4<=a5且b1<=b2<=b3<=b4<=b5时，可以确定不同颜值档次的特征比值范围符合极值一致性。然后根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围，生成颜值判定模型。
可选地，若不同颜值档次的特征比值范围不符合极值一致性，需要变更特征比值范围。比如上面例子中，显著特征在五个颜值档次对应的特征比值范围为[a1,b1]、[a2,b2]、[a3,b3]、[a4,b4]、[a5,b5]，颜值档次在这个显著特征上是单调递增的，若特征比值范围不满足a1<=a2<=a3<=a4<=a5且b1<=b2<=b3<=b4<=b5，则需要将违反单调性的特征比值变更为下一档次的特征比值。比如：a1>a2<=a3<=a4<=a5时，需要将a1的值变更为a2的值，使得a1<=a2<=a3<=a4<=a5成立。
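极值一致性的检查与变更规则可以示意如下（特征比值范围按颜值档次从低到高排列，数值为假设值；按文中规则，违反单调性的值被变更为相邻下一档次的值）：

```python
def enforce_consistency(ranges):
    """使特征比值范围的上下限按档次单调不减（极值一致性）。

    ranges: 按颜值档次从低到高排列的 (下限, 上限) 列表。
    从高档次向低档次回扫，若某档下限/上限大于下一档，
    则将其变更为下一档的值，使单调性成立。
    """
    a = [r[0] for r in ranges]
    b = [r[1] for r in ranges]
    for i in range(len(a) - 2, -1, -1):
        if a[i] > a[i + 1]:
            a[i] = a[i + 1]
        if b[i] > b[i + 1]:
            b[i] = b[i + 1]
    return list(zip(a, b))

# a1 > a2 违反单调性，按规则将 a1 变更为 a2 的值
print(enforce_consistency([(0.32, 0.40), (0.30, 0.42), (0.33, 0.45)]))
# [(0.3, 0.4), (0.3, 0.42), (0.33, 0.45)]
```

变更后各档次的特征比值范围即满足极值一致性，可用于生成颜值判定模型。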
作为一种可选的实施方式,所述颜值判定模型对应的颜值档次包括多个,多个所述颜值档次按照颜值高低进行划分,每个所述颜值档次包括所有的特征类型,每种所述特征类型在不同所述颜值档次中,所述特征类型对应的特征取值范围不同。
其中,在所述颜值判定模型中,每种所述特征类型都有其对应的特征比值范围(特征比值的取值范围),每种所述特征类型在不同的颜值档次中都有与之匹配的特征比值范围。
其中,特征类型比如眼睛长度、眼睛宽度、鼻子长度、鼻子宽度、嘴巴长度等等。
在图3所描述的颜值判定装置中，可以获取需要进行颜值判定的待判定人脸图像；使用人脸打点技术，提取所述待判定人脸图像中的多个第一特征点；根据所述多个第一特征点的坐标，计算多个第一特征，其中，所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离；从多个所述第一特征中，将与预设特征类型匹配的所述第一特征确定为第一标准特征；根据所述第一标准特征，对多个所述第一特征中的显著特征进行标准化处理，获得多个第一特征比值，其中，所述显著特征为颜值区分度大的特征；使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定，获得所述待判定人脸图像的第一颜值判定结果；输出所述第一颜值判定结果。可见，可以通过获取人脸图像中的显著特征，并将显著特征的数据标准化，输入颜值判定模型中，对所述显著特征进行颜值判定，最后得出颜值判定结果，整个颜值判定的过程中，颜值判定模型针对的对象是显著特征，而显著特征为颜值区分度大的特征，对显著特征进行判定，获得的第一颜值判定结果，更能够代表所述待判定人脸图像的颜值高低，从而能够提高颜值判定的准确性。
如图4所示,图4是本申请实现颜值判定方法的较佳实施例的电子设备的结构示意图。所述电子设备3包括存储器31、至少一个处理器32、存储在所述存储器31中并可在所述至少一个处理器32上运行的计算机可读指令33及至少一条通讯总线34。
本领域技术人员可以理解,图4所示的示意图仅仅是所述电子设备3的示例,并不构成对所述电子设备3的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述电子设备3还可以包括输入输出设备、网络接入设备等。
所述电子设备3还包括但不限于任何一种可与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互的电子产品,例如,个人计算机、平板电脑、智能手机、个人数字助理(Personal Digital Assistant,PDA)、游戏机、交互式网络电视(Internet Protocol Television,IPTV)、智能式穿戴式设备等。所述电子设备3所处的网络包括但不限于互联网、广域网、城域网、局域网、虚拟专用网络(Virtual Private Network,VPN)等。
所述至少一个处理器32可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。该处理器32可以是微处理器或者该处理器32也可以是任何常规的处理器等,所述处理器32是所述电子设备3的控制中心,利用各种接口和线路连接整个电子设备3的各个部分。
所述存储器31可用于存储所述计算机可读指令33和/或模块/单元,所述处理器32通过运行或执行存储在所述存储器31内的计算机可读指令和/或模块/单元,以及调用存储在存储器31内的数据,实现所述电子设备3的各种功能。所述存储器31可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备3的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器31可以包括高速随机存取存储器,还可以包括非易失性存储器,例如硬盘、内存、插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)、至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
在一实施例中,结合图1,本申请提供了一种电子设备3,所述电子设备3中的所述存储器31存储多个指令以实现一种颜值判定方法,所述处理器32可执行所述多个指令从而实现:
获取需要进行颜值判定的待判定人脸图像;
使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;
根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;
从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;
根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;
使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;
输出所述第一颜值判定结果。
在一种可选的实施方式中,所述使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果包括:
针对每个所述第一特征比值,获取所述第一特征比值的第一特征类型;
针对每个预设的颜值档次,使用预先训练好的颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围;
若多个所述第一特征比值均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定所述颜值档次为待定档颜值档次;
若所述待定档颜值档次为一个,确定所述待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
在一种可选的实施方式中,所述处理器32可执行所述多个指令从而实现:
若所述待定档颜值档次为多个,将多个所述待定档颜值档次按颜值档次从高到低排序,获得颜值档次排序队列;
将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
在一种可选的实施方式中,所述处理器32可执行所述多个指令从而实现:
若多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定多个所述第一特征比值属于预设颜值档次;
将所述预设颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
在一种可选的实施方式中,所述处理器32可执行所述多个指令从而实现:
获取需要进行训练的多个人脸样本图像;
针对每个所述人脸样本图像,提取所述人脸样本图像中的多个第二特征点;
根据所述多个第二特征点的坐标,计算多个第二特征,其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离;
从所述多个第二特征中,将与所述预设特征类型匹配的所述第二特征确定为第二标准特征;
根据所述第二标准特征,对所述多个第二特征进行标准化处理,获得多个第二特征比值;
根据所述多个第二特征比值的分布情况,从所述多个第二特征中选择颜值区分度大的显著特征;
根据所述多个第二特征中的显著特征,构建颜值判定模型。
在一种可选的实施方式中,所述根据所述多个第二特征中的显著特征,构建颜值判定模型包括:
对所述多个第二特征中的显著特征进行学习;
确定所述多个第二特征中的显著特征对应的多个颜值档次以及每个所述颜值档次对应的特征比值范围;
判断不同颜值档次的特征比值范围是否符合极值一致性;
若不同颜值档次的特征比值范围符合极值一致性,根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围,生成颜值判定模型。
在一种可选的实施方式中,所述颜值判定模型对应的颜值档次包括多个,多个所述颜值档次按照颜值高低进行划分,每个所述颜值档次包括所有的特征类型,每种所述特征类型在不同所述颜值档次中,所述特征类型对应的特征取值范围不同。
具体地,所述处理器32对上述指令的具体实现方法可参考图1对应实施例中相关步骤的描述,在此不赘述。
在图4所描述的电子设备3中,可以获取需要进行颜值判定的待判定人脸图像;使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;输出所述第一颜值判定结果。可见,可以通过获取人脸图像中的显著特征,并将显著特征的数据标准化,输入颜值判定模型中,对所述显著特征进行颜值判定,最后得出颜值判定结果,整个颜值判定的过程中,颜值判定模型针对的对象是显著特征,而显著特征为颜值区分度大的特征,对显著特征进行判定,获得的第一颜值判定结果,更能够代表所述待判定人脸图像的颜值高低,从而能够提高颜值判定的准确性。
所述电子设备3集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读存储介质中。基于这样的理解，本申请实现上述实施例方法中的全部或部分流程，也可以通过计算机可读指令来指令相关的硬件来完成，所述的计算机可读指令可存储于一计算机可读存储介质中，也即本申请提供了一个或多个存储有计算机可读指令的计算机可读存储介质，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器实现上述各个方法实施例的步骤。其中，所述可读存储介质包括非易失性可读存储介质和易失性可读存储介质，所述计算机可读指令包括计算机可读指令代码，所述计算机可读指令代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括：能够携带所述计算机可读指令代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器（ROM，Read-Only Memory）、随机存取存储器（RAM，Random Access Memory）、电载波信号、电信信号以及软件分发介质等。需要说明的是，所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减，例如在某些司法管辖区，根据立法和专利实践，计算机可读介质不包括电载波信号和电信信号。
在一实施例中,本申请提供了一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
获取需要进行颜值判定的待判定人脸图像;
使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;
根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;
从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;
根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;
使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;
输出所述第一颜值判定结果。
在一种可选的实施方式中,所述使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果包括:
针对每个所述第一特征比值,获取所述第一特征比值的第一特征类型;
针对每个预设的颜值档次,使用预先训练好的颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围;
若多个所述第一特征比值均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定所述颜值档次为待定档颜值档次;
若所述待定档颜值档次为一个,确定所述待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
在一种可选的实施方式中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
若所述待定档颜值档次为多个,将多个所述待定档颜值档次按颜值档次从高到低排序,获得颜值档次排序队列;
将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
在一种可选的实施方式中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
若多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定多个所述第一特征比值属于预设颜值档次;
将所述预设颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
在一种可选的实施方式中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
获取需要进行训练的多个人脸样本图像;
针对每个所述人脸样本图像,提取所述人脸样本图像中的多个第二特征点;
根据所述多个第二特征点的坐标,计算多个第二特征,其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离;
从所述多个第二特征中,将与所述预设特征类型匹配的所述第二特征确定为第二标准特征;
根据所述第二标准特征,对所述多个第二特征进行标准化处理,获得多个第二特征比值;
根据所述多个第二特征比值的分布情况,从所述多个第二特征中选择颜值区分度大的显著特征;
根据所述多个第二特征中的显著特征,构建颜值判定模型。
在一种可选的实施方式中,所述根据所述多个第二特征中的显著特征,构建颜值判定模型包括:
对所述多个第二特征中的显著特征进行学习;
确定所述多个第二特征中的显著特征对应的多个颜值档次以及每个所述颜值档次对应的特征比值范围;
判断不同颜值档次的特征比值范围是否符合极值一致性;
若不同颜值档次的特征比值范围符合极值一致性,根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围,生成颜值判定模型。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的，作为模块显示的部件可以是或者也可以不是物理单元，即可以位于一个地方，或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
对于本领域技术人员而言，显然本申请不限于上述示范性实施例的细节，而且在不背离本申请的精神或基本特征的情况下，能够以其他的具体形式实现本申请。因此，无论从哪一点来看，均应将实施例看作是示范性的，而且是非限制性的，本申请的范围由所附权利要求而不是上述说明限定，因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附图标记视为限制所涉及的权利要求。此外，显然"包括"一词不排除其他单元或步骤，单数不排除复数。系统权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第一、第二等词语用来表示名称，而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种颜值判定方法,其中,所述方法包括:
    获取需要进行颜值判定的待判定人脸图像;
    使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;
    根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;
    从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;
    根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;
    使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;
    输出所述第一颜值判定结果。
  2. 根据权利要求1所述的方法,其中,所述使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果包括:
    针对每个所述第一特征比值,获取所述第一特征比值的第一特征类型;
    针对每个预设的颜值档次,使用预先训练好的颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围;
    若多个所述第一特征比值均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定所述颜值档次为待定档颜值档次;
    若所述待定档颜值档次为一个,确定所述待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
  3. 根据权利要求2所述的方法,其中,所述方法还包括:
    若所述待定档颜值档次为多个,将多个所述待定档颜值档次按颜值档次从高到低排序,获得颜值档次排序队列;
    将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
  4. 根据权利要求2所述的方法,其中,所述方法还包括:
    若多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定多个所述第一特征比值属于预设颜值档次;
    将所述预设颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
  5. 根据权利要求1至4中任一项所述的方法,其中,所述方法还包括:
    获取需要进行训练的多个人脸样本图像;
    针对每个所述人脸样本图像,提取所述人脸样本图像中的多个第二特征点;
    根据所述多个第二特征点的坐标,计算多个第二特征,其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离;
    从所述多个第二特征中,将与所述预设特征类型匹配的所述第二特征确定为第二标准特征;
    根据所述第二标准特征,对所述多个第二特征进行标准化处理,获得多个第二特征比值;
    根据所述多个第二特征比值的分布情况,从所述多个第二特征中选择颜值区分度大的显著特征;
    根据所述多个第二特征中的显著特征,构建颜值判定模型。
  6. 根据权利要求5所述的方法，其中，所述根据所述多个第二特征中的显著特征，构建颜值判定模型包括：
    对所述多个第二特征中的显著特征进行学习;
    确定所述多个第二特征中的显著特征对应的多个颜值档次以及每个所述颜值档次对应的特征比值范围;
    判断不同颜值档次的特征比值范围是否符合极值一致性;
    若不同颜值档次的特征比值范围符合极值一致性,根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围,生成颜值判定模型。
  7. 根据权利要求6所述的方法,其中,所述颜值判定模型对应的颜值档次包括多个,多个所述颜值档次按照颜值高低进行划分,每个所述颜值档次包括所有的特征类型,每种所述特征类型在不同所述颜值档次中,所述特征类型对应的特征取值范围不同。
  8. 一种颜值判定装置,其中,所述颜值判定装置包括:
    获取模块,用于获取需要进行颜值判定的待判定人脸图像;
    提取模块,用于使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;
    计算模块,用于根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;
    确定模块,用于从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;
    处理模块,用于根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;
    判定模块,用于使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;
    输出模块,用于输出所述第一颜值判定结果。
  9. 一种电子设备,其中,所述电子设备包括处理器,所述处理器用于执行存储器中存储的计算机可读指令以实现如下步骤:
    获取需要进行颜值判定的待判定人脸图像;
    使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;
    根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;
    从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;
    根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;
    使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;
    输出所述第一颜值判定结果。
  10. 根据权利要求9所述的电子设备,其中,所述使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果包括:
    针对每个所述第一特征比值,获取所述第一特征比值的第一特征类型;
    针对每个预设的颜值档次,使用预先训练好的颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围;
    若多个所述第一特征比值均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定所述颜值档次为待定档颜值档次;
    若所述待定档颜值档次为一个,确定所述待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
  11. 根据权利要求10所述的电子设备,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    若所述待定档颜值档次为多个,将多个所述待定档颜值档次按颜值档次从高到低排序,获得颜值档次排序队列;
    将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
  12. 根据权利要求10所述的电子设备,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    若多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定多个所述第一特征比值属于预设颜值档次;
    将所述预设颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
  13. 根据权利要求9至12中任一项所述的电子设备,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    获取需要进行训练的多个人脸样本图像;
    针对每个所述人脸样本图像,提取所述人脸样本图像中的多个第二特征点;
    根据所述多个第二特征点的坐标,计算多个第二特征,其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离;
    从所述多个第二特征中,将与所述预设特征类型匹配的所述第二特征确定为第二标准特征;
    根据所述第二标准特征,对所述多个第二特征进行标准化处理,获得多个第二特征比值;
    根据所述多个第二特征比值的分布情况,从所述多个第二特征中选择颜值区分度大的显著特征;
    根据所述多个第二特征中的显著特征,构建颜值判定模型。
  14. 根据权利要求13所述的电子设备,其中,所述根据所述多个第二特征中的显著特征,构建颜值判定模型包括:
    对所述多个第二特征中的显著特征进行学习;
    确定所述多个第二特征中的显著特征对应的多个颜值档次以及每个所述颜值档次对应的特征比值范围;
    判断不同颜值档次的特征比值范围是否符合极值一致性;
    若不同颜值档次的特征比值范围符合极值一致性,根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围,生成颜值判定模型。
  15. 一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
    获取需要进行颜值判定的待判定人脸图像;
    使用人脸打点技术,提取所述待判定人脸图像中的多个第一特征点;
    根据所述多个第一特征点的坐标,计算多个第一特征,其中,所述第一特征包括所述待判定人脸图像中人脸的各个部位的长度以及某两个部位的距离;
    从多个所述第一特征中,将与预设特征类型匹配的所述第一特征确定为第一标准特征;
    根据所述第一标准特征,对多个所述第一特征中的显著特征进行标准化处理,获得多个第一特征比值,其中,所述显著特征为颜值区分度大的特征;
    使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果;
    输出所述第一颜值判定结果。
  16. 根据权利要求15所述的可读存储介质,其中,所述使用预先训练好的颜值判定模型对多个所述第一特征比值进行判定,获得所述待判定人脸图像的第一颜值判定结果包括:
    针对每个所述第一特征比值,获取所述第一特征比值的第一特征类型;
    针对每个预设的颜值档次,使用预先训练好的颜值判定模型判断多个所述第一特征比值是否均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围;
    若多个所述第一特征比值均属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定所述颜值档次为待定档颜值档次;
    若所述待定档颜值档次为一个,确定所述待定档颜值档次为所述待判定人脸图像的第一颜值判定结果。
  17. 根据权利要求16所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    若所述待定档颜值档次为多个,将多个所述待定档颜值档次按颜值档次从高到低排序,获得颜值档次排序队列;
    将所述颜值档次排序队列中处于中间位置的任一所述待定档颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
  18. 根据权利要求16所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    若多个所述第一特征比值均不属于所述颜值档次中与所述第一特征类型匹配的特征比值范围,确定多个所述第一特征比值属于预设颜值档次;
    将所述预设颜值档次确定为所述待判定人脸图像的第一颜值判定结果。
  19. 根据权利要求15至18中任一项所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    获取需要进行训练的多个人脸样本图像;
    针对每个所述人脸样本图像,提取所述人脸样本图像中的多个第二特征点;
    根据所述多个第二特征点的坐标,计算多个第二特征,其中,所述第二特征包括所述人脸样本图像中人脸的各个部位的长度以及某两个部位的距离;
    从所述多个第二特征中,将与所述预设特征类型匹配的所述第二特征确定为第二标准特征;
    根据所述第二标准特征,对所述多个第二特征进行标准化处理,获得多个第二特征比值;
    根据所述多个第二特征比值的分布情况,从所述多个第二特征中选择颜值区分度大的显著特征;
    根据所述多个第二特征中的显著特征,构建颜值判定模型。
  20. 根据权利要求19所述的可读存储介质,其中,所述根据所述多个第二特征中的显著特征,构建颜值判定模型包括:
    对所述多个第二特征中的显著特征进行学习;
    确定所述多个第二特征中的显著特征对应的多个颜值档次以及每个所述颜值档次对应的特征比值范围;
    判断不同颜值档次的特征比值范围是否符合极值一致性;
    若不同颜值档次的特征比值范围符合极值一致性,根据所述多个第二特征中的显著特征、多个颜值档次以及每个所述颜值档次对应的特征比值范围,生成颜值判定模型。
PCT/CN2020/093341 2019-09-23 2020-05-29 颜值判定方法、装置、电子设备及存储介质 WO2021057063A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910901612.0A CN110874567B (zh) 2019-09-23 2019-09-23 颜值判定方法、装置、电子设备及存储介质
CN201910901612.0 2019-09-23

Publications (1)

Publication Number Publication Date
WO2021057063A1 true WO2021057063A1 (zh) 2021-04-01




Also Published As

Publication number Publication date
CN110874567B (zh) 2024-01-09
CN110874567A (zh) 2020-03-10

