WO2024075109A1 - Attractiveness determination system and method

Publication number: WO2024075109A1
Authority: WIPO (PCT)
Application number: PCT/IL2023/050944
Prior art keywords: image, person, attractiveness, image features, static
Other languages: French (fr)
Inventors: Uri Lipowezky, Irit Peleg, Ido Peleg, Amir Uziel
Original assignee: Facetrom Limited
Application filed by Facetrom Limited
Publication: WO2024075109A1 (en)

Classifications

    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology; G06N 3/08 Learning methods
    • G06V 40/40 Spoof detection, e.g. liveness detection; G06V 40/45 Detection of the body part being alive
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06T 7/40 Analysis of texture (image analysis); G06T 7/70 Determining position or orientation of objects or cameras (image analysis)

Definitions

  • the present invention relates to the field of attractiveness determination systems and methods.
  • Attractiveness between human beings is the capacity to arouse interest in another person. Determining whether, and to what extent, one person is attractive in the eyes of another can be utilized for dating, advertising, medical treatment, matching, placement, human-resource needs and other applications that are based on attractiveness.
  • a system for determining attractiveness between a first person and a second person comprising processing circuitry configured to: (A) obtain: (a) at least one first image, representative of the first person; (b) at least one second image, representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person; and (B) calculate a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
  • the at least one first image is one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
  • the at least one second image is one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
  • one or more of the first image features are based on anatomical structure of a face appearing in the at least one first image.
  • one or more of the second image features are based on anatomical structure of a face appearing in the at least one second image.
  • the determination of the first image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one first image, and calculating the first image features, based on the landmarks, (b) analyzing the at least one first image using at least one machine learning model to determine the first image features, or (c) analysis of questionnaires answered by the first person.
  • the determination of the second image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one second image, and calculating the second image features, based on the landmarks, (b) analyzing the at least one second image using at least one machine learning model to determine the second image features, or (c) analysis of questionnaires answered by the second person.
  • the at least one first image includes at least part of a body of the first person and wherein at least one of the first image features is determined based on the at least part of the body.
  • the at least one second image includes at least part of a body of the second person and wherein at least one of the second image features is determined based on the at least part of the body.
  • the at least one first image includes at least part of garments worn by the first person and wherein at least one of the first image features is determined based on the at least part of the garments.
  • the at least one second image includes at least part of garments worn by the second person and wherein at least one of the second image features is determined based on the at least part of the garments.
  • the at least one first image includes at least part of a background behind the first person and wherein at least one of the first image features is determined based on the at least part of the background.
  • the at least one second image includes at least part of a background behind the second person and wherein at least one of the second image features is determined based on the at least part of the background.
  • the at least one first image includes at least part of a palm of the first person and wherein at least one of the first image features is determined based on the at least part of the palm.
  • the at least one second image includes at least part of a palm of the second person and wherein at least one of the second image features is determined based on the at least part of the palm.
  • the at least one first image is captured from one or more of: (a) a video recording of the first person, (b) a static two-dimensional image of the first person, and (c) a static three-dimensional image of the first person.
  • the at least one second image is captured from one or more of: (a) a video recording of the second person, (b) a static two-dimensional image of the second person, and (c) a static three-dimensional image of the second person.
  • the at least one static three-dimensional facial image of the first person is generated from one or more of: (a) at least one hologram of the first person, (b) one or more static two-dimensional images of the first person, and (c) one or more static three-dimensional images of the first person.
  • the at least one static three-dimensional facial image of the second person is generated from one or more of: (a) at least one hologram of the second person, (b) one or more static two-dimensional images of the second person, and (c) one or more static three-dimensional images of the second person.
  • the obtaining includes meta-data about the first person and wherein at least one of the first image features is determined based on the meta-data.
  • the obtaining includes meta-data about the second person and wherein at least one of the second image features is determined based on the meta-data.
  • the first mathematical manipulation is a difference between the first image features and the second image features.
  • the second mathematical manipulation is a difference between the first image features and the second image features.
  • the first mathematical manipulation is a directional distance between the first image features and the second image features.
  • the second mathematical manipulation is a directional distance between the first image features and the second image features.
  • the first machine learning model can be based on one or more neural network techniques.
  • the second machine learning model can be based on one or more neural network techniques.
  • the first person is a male and the second person is a female.
  • the reciprocal attractiveness score calculation is based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
  • the reciprocal attractiveness score calculation is based also on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features.
  • the reciprocal attractiveness score calculation is based also on weights, wherein the objective beauty score of the female gets preference over the first image features and the second image features.
  • the first person is a female and the second person is a male.
  • the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
  • the first person is a male and the second person is a male.
  • the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
  • the first person is a female and the second person is a female.
  • the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
  • the first machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
  • the second machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
  • the processing circuitry is further configured to, after obtaining the at least one first image of the first person, pre-process the at least one first image to determine a facial quality score of a face appearing in the at least one first image and wherein the calculating is performed only when the facial quality score is above a first threshold.
  • the processing circuitry is further configured to, after obtaining the at least one second image of the second person, pre-process the at least one second image to determine a facial quality score of a face appearing in the at least one second image and wherein the calculating is performed only when the facial quality score is above a second threshold.
  • each given attractiveness detection model of the at least one first attractiveness detection model is trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on the at least one sub-system direct attractiveness score.
  • each given attractiveness detection model of the at least one second attractiveness detection model is trained to determine a sub-system reverse attractiveness score and wherein the reverse attractiveness score is determined based on the at least one sub-system reverse attractiveness score.
  • a method for determining attractiveness between a first person and a second person comprising: (A) obtaining, by a processing circuitry: (a) at least one first image, representative of the first person; (b) at least one second image, representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person; and (B) calculating, by the processing circuitry, a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
  • the at least one first image is one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
  • the at least one second image is one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
  • one or more of the first image features are based on anatomical structure of a face appearing in the at least one first image.
  • one or more of the second image features are based on anatomical structure of a face appearing in the at least one second image.
  • the determination of the first image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one first image, and calculating the first image features, based on the landmarks, (b) analyzing the at least one first image using at least one machine learning model to determine the first image features, or (c) analysis of questionnaires answered by the first person.
  • the determination of the second image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one second image, and calculating the second image features, based on the landmarks, (b) analyzing the at least one second image using at least one machine learning model to determine the second image features, or (c) analysis of questionnaires answered by the second person.
  • the at least one first image includes at least part of a body of the first person and wherein at least one of the first image features is determined based on the at least part of the body.
  • the at least one second image includes at least part of a body of the second person and wherein at least one of the second image features is determined based on the at least part of the body.
  • the at least one first image includes at least part of garments worn by the first person and wherein at least one of the first image features is determined based on the at least part of the garments.
  • the at least one second image includes at least part of garments worn by the second person and wherein at least one of the second image features is determined based on the at least part of the garments.
  • the at least one first image includes at least part of a background behind the first person and wherein at least one of the first image features is determined based on the at least part of the background.
  • the at least one second image includes at least part of a background behind the second person and wherein at least one of the second image features is determined based on the at least part of the background.
  • the at least one first image includes at least part of a palm of the first person and wherein at least one of the first image features is determined based on the at least part of the palm.
  • the at least one second image includes at least part of a palm of the second person and wherein at least one of the second image features is determined based on the at least part of the palm.
  • the at least one first image is captured from one or more of: (a) a video recording of the first person, (b) a static two-dimensional image of the first person, and (c) a static three-dimensional image of the first person.
  • the at least one second image is captured from one or more of: (a) a video recording of the second person, (b) a static two-dimensional image of the second person, and (c) a static three-dimensional image of the second person.
  • the at least one static three-dimensional facial image of the first person is generated from one or more of: (a) at least one hologram of the first person, (b) one or more static two-dimensional images of the first person, and (c) one or more static three-dimensional images of the first person.
  • the at least one static three-dimensional facial image of the second person is generated from one or more of: (a) at least one hologram of the second person, (b) one or more static two-dimensional images of the second person, and (c) one or more static three-dimensional images of the second person.
  • the obtaining includes meta-data about the first person and wherein at least one of the first image features is determined based on the meta-data.
  • the obtaining includes meta-data about the second person and wherein at least one of the second image features is determined based on the meta-data.
  • the first mathematical manipulation is a difference between the first image features and the second image features.
  • the second mathematical manipulation is a difference between the first image features and the second image features.
  • the first mathematical manipulation is a directional distance between the first image features and the second image features.
  • the second mathematical manipulation is a directional distance between the first image features and the second image features.
  • the first machine learning model can be based on one or more neural network techniques.
  • the second machine learning model can be based on one or more neural network techniques.
  • the first person is a male and the second person is a female.
  • the reciprocal attractiveness score calculation is based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
  • the reciprocal attractiveness score calculation is based also on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features. In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein the objective beauty score of the female gets preference over the first image features and the second image features.
  • the first person is a female and the second person is a male.
  • the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
  • the first person is a male and the second person is a male.
  • the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
  • the first person is a female and the second person is a female.
  • the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
  • the first machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
  • the second machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
  • the method further comprising, after obtaining the at least one first image of the first person, pre-processing, by the processing circuitry, the at least one first image to determine a facial quality score of a face appearing in the at least one first image and wherein the calculating is performed only when the facial quality score is above a first threshold.
  • each given attractiveness detection model of the at least one first attractiveness detection model is trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on the at least one sub-system direct attractiveness score.
  • each given attractiveness detection model of the at least one second attractiveness detection model is trained to determine a sub-system reverse attractiveness score and wherein the reverse attractiveness score is determined based on the at least one sub-system reverse attractiveness score.
  • a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code executable by at least one processing circuitry of a computer to perform a method comprising: (A) obtaining, by a processing circuitry: (a) at least one first image, representative of the first person; (b) at least one second image, representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person; and (B) calculating a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
  • Fig. 1 is a schematic illustration of an example of a first image of a first person and a second image of a second person, used for determining attractiveness between them, in accordance with the presently disclosed subject matter;
  • Fig. 2 is a block diagram schematically illustrating one example of an attractiveness determination system, in accordance with the presently disclosed subject matter;
  • Fig. 3 is a flowchart illustrating an example of a sequence of operations carried out by an attractiveness determination system, in accordance with the presently disclosed subject matter;
  • Fig. 4A is an exemplary results graph for male attractiveness determination with Receiver Operating Characteristic (ROC) curves for linear regression, boost and boost3 algorithms; and
  • Fig. 4B is an exemplary results graph for female attractiveness determination with ROC curves for linear regression, boost and boost3 algorithms.
  • the terms "computer", "processing circuitry" and the like should be expansively construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, a personal desktop/laptop computer, a server, a computing system, a communication device, a smartphone, a tablet computer, a smart television, a processor (e.g. a digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), a group of multiple physical machines sharing performance of various tasks, virtual servers co-residing on a single physical machine, any other electronic computing device, and/or any combination thereof.
  • the term "non-transitory" is used herein to exclude transitory, propagating signals, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application.
  • the phrases "for example", "such as", "for instance" and variants thereof describe non-limiting embodiments of the presently disclosed subject matter.
  • Reference in the specification to “one case”, “some cases”, “other cases” or variants thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter.
  • the appearance of the phrase “one case”, “some cases”, “other cases” or variants thereof does not necessarily refer to the same embodiment(s).
  • Figs. 1 and 2 illustrate a general schematic of the system architecture in accordance with an embodiment of the presently disclosed subject matter.
  • Each module in Figs. 1 and 2 can be made up of any combination of software, hardware and/or firmware that performs the functions as defined and explained herein.
  • the modules in Figs. 1 and 2 may be centralized in one location or dispersed over more than one location.
  • the system may comprise fewer, more, and/or different modules than those shown in Figs. 1 and 2.
  • Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that once executed by a computer result in the execution of the method.
  • Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that may be executed by the system.
  • Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer readable medium and should be applied mutatis mutandis to a method that may be executed by a computer that reads the instructions stored in the non-transitory computer readable medium.
  • The Metaverse is an example of such a transfer of elements of our being from the real world to a virtual on-line world. The need for the described solution will only become more acute.
  • FIG. 1 showing a schematic illustration of an example of a first image of a first person and a second image of a second person, used for determining attractiveness, in accordance with the presently disclosed subject matter.
  • Image A 110-a is an image of a first person
  • Image B 110-b is an image of a second person.
  • the images can incorporate facial images that can be used to determine the facial features of each individual.
  • Image A 110-a includes a facial image of the first person
  • Image B 110-b includes a facial image of the second person.
  • Facial structures are utilized to determine attractiveness between individuals.
  • Facial structure is unique. Each individual's personality is manifested in his or her own facial structure (and features). Human faces have evolved to signal individual identity in human interaction. Facial structure (and features) can be utilized to determine attractiveness. The structure of the face reflects the genetic characteristics and history of the individual. Facial structure exposes the individual's health, parental suitability, level of aggressiveness, family history, and more.
  • Attractiveness is subjective. As each individual has an exclusive biological status, each individual has his or her own unique taste and match "requirements". Facial structure can be utilized to identify this high level of match and to predict attractiveness between two persons.
  • An attractiveness determination system can utilize human images (e.g., Image A 110-a, Image B 110-b) to determine attractiveness and/or a match between the first person and the second person and vice versa.
  • the images can undergo a pre-processing stage, where the attractiveness determination system identifies that an image of a single human face is included in the image.
  • the pre-processing can be done, for example, by utilizing a visual object detection image processing method that can identify a bounding rectangle around a face within an image.
  • an image analysis method can be employed to identify two eyes within the bounding rectangle.
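  • By way of a non-limiting illustration, the following Python sketch performs this pre-processing check using OpenCV's stock Haar cascades; the source does not specify which detectors the system actually uses, so the cascade choice and parameters here are assumptions.
```python
import cv2

# Stock OpenCV detectors standing in for the unspecified face/eye detectors.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def contains_single_face_with_eyes(image_path: str) -> bool:
    """Return True if exactly one face with two detectable eyes is found."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False  # reject images with zero or multiple faces
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    return len(eyes) >= 2  # both eyes must be visible inside the bounding box
```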
  • the images do not necessarily include faces.
  • the attractiveness determination system can utilize other parts of the images to determine a match.
  • the attractiveness determination system can match between the backgrounds of the images, between the travel landscapes of the images, the selfie angle of the images, clothing of the persons, their accessories, etc.
  • the attractiveness determination system may utilize images of at least part of a body of the depicted persons (such as: the palm of the first and second persons) to determine attractiveness, for example, by determining a finger feature (based on an index-to-ring finger-length ratio) for each of the images. Garments depicted within the image can also be utilized by the system to determine attractiveness.
  • the pre-processing stage can include determination of a Facial Quality Score (FQS) for at least some of the images.
  • FQS Facial Quality Score
  • the attractiveness determination system can analyze the images and determine the FQS for each of these images.
  • the facial quality score can be based on properties of the image and on properties of the facial image to which they relate. These properties can include, for example, the size of the facial image within the image, the sharpness of the facial image, the number of faces that appear in the image, etc.
  • An image with an FQS that is below a threshold is discarded by the attractiveness determination system and is not used to determine attractiveness.
  • the need for this pre-processing stage arises from the fact that the manner in which the facial image was captured can have a large effect on the attractiveness score, e.g., on the accuracy of the prediction of the attractiveness determination system. For example, the higher the image resolution, the more details are visible in the face, and the prediction accuracy will most likely improve. In another example, the more frontal the head position of the person in the image (i.e., the closer to zero degrees), the more parts of the face are visible and the more accurate the prediction will be.
  • the quality of the face within the image is more important than the quality of the (raw) image itself.
  • the image can be of high resolution; however, if the area of the face within the image is very small (for example, only 1% of the image for a person captured from a remote distance), then the overall resolution of the face will be low.
  • the attractiveness determination system can be provided with multiple images of the same user (for example, when the user changes his/her selfie image for social purposes).
  • the additional images can be identical to images previously provided for that user, or they can be new images, for example, images taken ad hoc.
  • the attractiveness determination system can utilize all the multiple images of the same user that are available at a data repository used by the system to store the image provided by the user in the current transaction and other images provided by the same user in prior transactions. Since all the images relate to a specific person, they are all utilized to enhance the prediction score - each image according to the quality of the image and its underlying face.
  • a non-limiting exemplary algorithm for performing the FQS pre-processing for a single image can be: elimination (rejection) of the image (e.g., the image will not be used by the system for prediction) when one or more of the following is true: (a) the image has no face, (b) the image has no eyes, (c) the face in the image is cut - the chin and/or ears and/or forehead are missing, or (d) the face in the image is distorted - for example: fisheye (barrel-type) or pincushion (pillow-type) distortion of the face, resulting from an image taken in very close proximity to the camera.
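  • A minimal sketch of these elimination rules, assuming the upstream face-analysis step has already produced the boolean flags used below (the flag names are hypothetical):
```python
def should_reject(image_info: dict) -> bool:
    """Apply rules (a)-(d) above: reject the image when any of them holds."""
    return (not image_info["has_face"]            # (a) the image has no face
            or not image_info["has_eyes"]         # (b) the image has no eyes
            or image_info["face_is_cut"]          # (c) chin/ears/forehead missing
            or image_info["face_is_distorted"])   # (d) barrel/pincushion distortion
```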
  • the parameters that affect the FQS are one or more of: (a) head pose angles - the pitch, yaw and roll angles of the face; the smaller the angles, the higher the face quality score. The angles are estimated by a computer vision algorithm according to measurements over the face (specifically, nose and eye positions and distances); (b) emotion type and level in the face - there are several types of emotion expressed by a human, such as sadness, joy, laughter, anger, etc., and per type of emotion there is a level of emotion (for example, a light smile or strong anger); (c) face visibility - the face visibility can change according to many different natural parameters, such as mustache, beard, hair, light level, shadow, etc.; (d) accessories - human-made objects that may affect the visibility of the face, such as eyeglasses, sunglasses, hat, scarf, mask, earrings, tattoo, pipe, cigarette, etc.; and (e) resolution - the number of pixels (in two dimensions) of the face (e.g., of the bounding box of the face) and the interpupillary distance - the distance, in pixels, between the centers of the eyes.
  • FQS can be calculated as a function of one or more of these parameters.
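  • The source does not give the functional form, so the following is only one plausible sketch of such a function: each parameter is mapped to a [0, 1] term and the terms are multiplied, so a single very poor parameter drags the whole score down. The constants (45 degrees, 60 pixels) are illustrative assumptions.
```python
import math

def facial_quality_score(pitch_deg: float, yaw_deg: float, roll_deg: float,
                         emotion_level: float, visibility: float,
                         occlusion: float, ipd_pixels: float) -> float:
    """Illustrative FQS in [0, 1] combining parameters (a)-(e) above."""
    # (a) head pose: smaller angles -> higher score
    pose = math.exp(-(abs(pitch_deg) + abs(yaw_deg) + abs(roll_deg)) / 45.0)
    # (b) emotion: a strong expression lowers the score
    emotion = 1.0 - min(max(emotion_level, 0.0), 1.0)
    # (c) face visibility and (d) accessories/occlusion, each assumed in [0, 1]
    visible = visibility * (1.0 - occlusion)
    # (e) resolution: ~60 px between the eye centers treated as sufficient
    resolution = min(ipd_pixels / 60.0, 1.0)
    return pose * emotion * visible * resolution
```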
  • the pre-processing stage can include one or more image manipulations on at least one of the images. These manipulations can include: emotion negation, image capturing angle adjustments, facial image size corrections, in-plane facial image rotation, out-of-plane facial image rotation (frontalization) and more.
  • the attractiveness determination system will use the manipulated images to determine attractiveness.
  • the attractiveness determination system can work in a batch mode - where multiple pairs of images are provided to the system and the system determines the attractiveness between the persons depicted in each pair.
  • one or more video feeds are provided to the attractiveness determination system, and the system extracts the images from the video. For example, by capturing an image from the video feed, or by analyzing the video feed to identify one or more persons and extracting their facial images from the video.
  • the attractiveness determination system can determine, for at least some of the images, one or more landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.) within each of the images.
  • Each landmark is a predetermined point within the facial image.
  • the landmarks can be anatomical landmarks that are based on the anatomical structure of the face appearing within the image. These landmarks are unambiguously identified in every image and are placed in positions that ensure a reasonable degree of correspondence between the landmarks' locations across the images.
  • the attractiveness determination system analyzes the facial image to identify the location within the image of the facial organs (such as: nose, ears, mouth, eyes, chin, etc.).
  • the system can determine the location of the landmarks in relation to these identified organs.
  • a non-limiting example is depicted in Fig. 1.
  • the system identifies the location of the nose in Image A 110-a and in Image B 110-b.
  • the system then utilizes these locations to determine the location of landmarks that are related to the nose.
  • the system can determine the location of additional landmarks that are associated with additional facial organs: such as the eyes, the mouth, the chin, etc.
  • the attractiveness determination system utilizes the landmarks to determine one or more features for each of the images.
  • the features can be calculated based on the location of the landmarks within each image.
  • the first feature can be calculated as the distance between an eye and the nose with respect to inter-pupil distance (IPD).
  • IPD inter-pupil distance
  • This first feature can be calculated for image A 110-a based on the location of landmark AA 120-aa and the location of landmark AN 120-an.
  • the same first feature can be calculated for image B 110-b based on the location of landmark BA 120-ba and the location of landmark BN 120-bn.
  • the first feature will have a first value for image A 110-a and a second value for image B 110-b.
  • the system can calculate additional features for each given image of the images, based on the locations of the corresponding landmarks within the given image.
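  • For instance, the eye-to-nose feature described above can be computed as follows (a sketch; the landmark dictionary keys are illustrative):
```python
import numpy as np

def eye_nose_over_ipd(landmarks: dict) -> float:
    """Eye-to-nose distance normalized by the inter-pupil distance (IPD)."""
    left_eye = np.asarray(landmarks["left_eye"], dtype=float)
    right_eye = np.asarray(landmarks["right_eye"], dtype=float)
    nose = np.asarray(landmarks["nose_tip"], dtype=float)
    ipd = np.linalg.norm(left_eye - right_eye)            # inter-pupil distance
    return float(np.linalg.norm(left_eye - nose) / ipd)   # scale-invariant value
```
Applied to the landmarks of image A 110-a and of image B 110-b, the same function yields the first value and the second value of the first feature, respectively.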
  • the attractiveness determination system can also utilize facial images to determine features that are not landmark related. One example is the system determining the emotions of the persons in the images and calculating features based on these emotions. For example: the system can determine that a person is smiling in an image and can calculate a happiness-level feature for that person. Other parts of the images can be utilized by the system to determine features. In some cases, these other parts of the images can be used to determine additional features, in addition to features determined based on the facial images. For example, the amount of light in the background (such as: night, day, neon, white balance, etc.) of the image can be used to calculate features. In another example, scenery in the background of the images can be used to calculate a feature related to travel locations of the depicted persons or to the time of day.
  • Features can also be calculated based on other skin and/or body parts within the images, indicative of, for example: race, age, height, Body Mass Index (BMI), etc. - for example, by calculating a finger feature (based on an index-to-ring finger-length ratio) for each of the images.
  • Features can also be calculated based on garments or other clothing features (such as: glasses, tattoo, earrings, etc.) that appear within the images.
  • a season feature based on the type of clothes the persons in the images are wearing
  • the system can utilize one or more machine learning models (such as: supervised learning models, unsupervised learning models, deep learning models, etc.) to determine the features of at least some of the images.
  • the system can calculate features for the depicted persons from supplementary sources that can accompany the images, for example: from meta-data obtained with the images, from answers to questionnaires provided by the persons depicted in the images, from sensors sensing the persons (heart rate, sweat, eye blinking rate, etc.), etc.
  • the features can be calculated based on knowledge associated with the domains of anthropology, neurobiology, physiology, neuropsychology, evolutionary biology (morphology, dysmorphology), chemistry and others.
  • At least some of the images of the first and second persons can include other types of imagery (including, non-visual imagery), such as: Functional Magnetic Resonance Imaging or functional MRI (fMRI), spectral imaging in different wavelengths, facial topography, Cloud of Points (COP) from 3D facial scanning, etc.
  • fMRI Functional Magnetic Resonance Imaging
  • COP Cloud of Points
  • the features can be utilized by the attractiveness determination system to train a machine learning model that can be used to predict the attractiveness between the first person and the second person, as will be further described hereafter in reference to Fig. 3.
  • FIG. 2 is a block diagram schematically illustrating one example of the attractiveness determination system 200, in accordance with the presently disclosed subject matter.
  • the attractiveness determination system 200 can comprise a network interface 206 (e.g., a network card, a WiFi client, a Li-Fi client, a 3G/4G/5G client, satellite communications or any other component).
  • system 200 can receive and/or send, through network interface 206, a plurality of images (for example: image A 110-a, image B 110-b, etc.), data about landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.), data about features, one or more machine learning models, training data-sets used to train machine learning models, attractiveness scores, etc.
  • System 200 can further comprise or be otherwise associated with a data repository 204 (e.g., a database, a storage system, a memory including Read Only Memory - ROM, Random Access Memory - RAM, or any other type of memory, etc.) configured to store data.
  • Examples of data that can be stored in the data repository 204 include: a plurality of images (for example: image A 110-a, image B 110-b, etc.), data about landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.), data about features, one or more machine learning models, training data-sets used to train the machine learning models, etc.
  • Data repository 204 can be further configured to enable retrieval and/or update and/or deletion of the stored data. It is to be noted that in some cases, data repository 204 can be distributed, while system 200 has access to the information stored thereon, e.g., via a wired or wireless network to which system 200 is able to connect (utilizing its network interface 206).
  • System 200 further comprises processing circuitry 202.
  • Processing circuitry 202 can be one or more processing units (e.g., central processing units), microprocessors, microcontrollers (e.g., microcontroller units (MCUs)), cloud servers, graphical processing units (GPUs), or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units, which are adapted to independently or cooperatively process data for controlling relevant system 200 resources and for enabling operations related to system 200's resources.
  • the processing circuitry 202 comprises an attractiveness scoring module 208, configured to perform an attractiveness scoring process, as further detailed herein, inter alia with reference to Fig. 3.
  • system 200 can operate as a standalone system without the need for network interface 206 and/or data repository 204. Adding one or both of these elements to system 200 is optional and not mandatory, as system 200 can operate according to its intended use either way.
  • FIG. 3 there is shown a flowchart illustrating an example of a sequence of operations carried out by an attractiveness determination system, in accordance with the presently disclosed subject matter.
  • the attractiveness determination system 200 can be configured to perform an attractiveness scoring process 300, e.g., using the attractiveness scoring module 208.
  • the attractiveness determination system 200 can use a training data-set of multiple pairs of images (such as the pair: image A 110-a and image B 110-b). Each image pair is associated with one or more attractiveness labels.
  • each label can be a binary label - does the person depicted in the first image of the pair find the person depicted in the second image of the pair attractive or not. In some cases, the labels can be non-binary attractiveness scores.
  • the system 200 can extract features from the multiple pairs of images. The features and the labels are used to train one or more machine learning models (such as: supervised learning models, unsupervised learning models, deep learning models, etc.).
  • the training of the machine learning models can be based on a mathematical manipulation between the features extracted from the first image of the pair and the features extracted from the second image of the pair.
  • the mathematical manipulation can be a difference between the first image features of the first image and the second image features of the second image.
  • the mathematical manipulation can be a subtraction of the second image features from the first image features.
  • the mathematical manipulation can be a directional distance between the first image features and the second image features.
  • the first feature calculated for the first image can be the distance between the eye and the nose depicted in the first image
  • the second feature calculated for the second image can be the distance between the eye and the nose depicted in the second image.
  • the training of the machine learning model for this pair is done based on the second image features and the difference between the features of the two persons.
  • the difference is the feature value calculated for a male in the first image minus the feature value calculated for a female in the second image.
  • Training the machine learning model based on pairs has the advantage of exactly calculating the attractiveness match between any pair of persons appearing in two images, and of optimally using gradient descent to quickly reach the minimum point of the loss function of the training algorithm, resulting in a machine learning model that is better than a model trained on the values of the features themselves. If the values of the features themselves were used for training, a less optimal minimum would have been reached.
  • the features of the subject person express the objective attractiveness of the subject and the difference between male and female features expresses a measure of the inter-personal biological matching.
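  • A minimal training sketch under these assumptions (binary labels, a boosting classifier standing in for the "boost" option; array shapes and names are illustrative):
```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_direct_model(first_X: np.ndarray, second_X: np.ndarray,
                       labels: np.ndarray) -> GradientBoostingClassifier:
    """first_X/second_X: (n_pairs, n_features) per-person feature rows.
    labels: 1 if person 1 of the pair finds person 2 attractive, else 0.
    Model input per the text: the second person's own features concatenated
    with the difference (first-person features minus second-person features)."""
    X = np.hstack([second_X, first_X - second_X])
    model = GradientBoostingClassifier()
    model.fit(X, labels)
    return model
```
The reverse model is trained symmetrically, on the first person's features concatenated with the difference taken in the opposite direction.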
  • Fig. 4A is an exemplary results graph for male attractiveness determination with ROC curves for linear regression, boost and boost3 machine learning algorithms used by system 200 to determine attractiveness of females by males in an exemplary experiment.
  • the results of the exemplary experiment show that the machine learning models trained in the experiment can predict attractiveness of females by males.
  • Fig. 4B is an exemplary results graph for female attractiveness determination with ROC curves for logistic regression, boost and boost3 machine learning algorithms used by system 200 to determine attractiveness of males by females in an exemplary experiment.
  • the results of the exemplary experiment show that the machine learning models trained in the experiment can predict attractiveness of males by females.
  • attractiveness determination system 200 obtains: (a) at least one first image (e.g., Image A 110-a), representative of the first person; (b) at least one second image (e.g., Image B 110-b), representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person.
  • the first and second machine learning models can be based on one or more algorithms, such as: logistic regression algorithms, boosting algorithms, evolution algorithms, support-vector machine algorithms, decision trees algorithms, random forest algorithms, etc.
  • the first and second machine learning models can be based on one or more deep learning and/or neural network techniques, for example: Convolutional Neural Networks (CNN), encoders-decoders, Deep Stacking Networks (DSN), backpropagation networks, etc.
  • system 200 obtains a first image of a given male and a second image of a given female. This pair of images is unlabeled and there is no prior knowledge of the attractiveness between the given male and the given female.
  • system 200 also obtains the first machine learning model.
  • the first machine learning model can receive features calculated based on at least one image of a female, together with features calculated by a mathematical manipulation on the features calculated based on at least one image of a male and the features calculated based on at least one image of a female (for example: a subtraction of the value of each female feature from the value of the corresponding male feature), and has been trained to determine a direct attractiveness score, indicative of a level of attractiveness of the female by the male.
  • the first machine learning model can be used by system 200 to predict the direct attractiveness score of the given female by the given male.
  • system 200 also obtains a second machine learning model.
  • the second machine learning model can receive features calculated based on an image of a male, together with features calculated by a mathematical manipulation on the features calculated based on at least one image of a male and the features calculated based on at least one image of a female (for example: a subtraction of the value of each male feature from the value of the corresponding female feature), and has been trained to determine a reverse attractiveness score, indicative of a level of attractiveness of the male by the female.
  • the second machine learning model can be used by system 200 to predict the reverse attractiveness score of the given male by the given female.
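  • Continuing the sketch above, prediction for the unlabeled pair could look as follows (the model objects and feature vectors follow the earlier sketch; all names are illustrative):
```python
import numpy as np

def predict_pair_scores(direct_model, reverse_model,
                        male_feats: np.ndarray, female_feats: np.ndarray):
    """direct: attractiveness of the given female by the given male;
    reverse: attractiveness of the given male by the given female."""
    direct_in = np.hstack([female_feats, male_feats - female_feats]).reshape(1, -1)
    reverse_in = np.hstack([male_feats, female_feats - male_feats]).reshape(1, -1)
    direct = float(direct_model.predict_proba(direct_in)[0, 1])
    reverse = float(reverse_model.predict_proba(reverse_in)[0, 1])
    return direct, reverse
```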
  • the first person is a male and the second person is a female.
  • the reciprocal attractiveness score calculation can be based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
  • the reciprocal attractiveness score calculation can be also based on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features.
  • Another option for the weights is where the objective beauty score of the female gets preference over the first image features and the second image features.
  • the weights can be intertwined into the first and/or second machine learning models in such a way that preference is given to specific features.
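  • One simple way to realize such weights is a weighted combination of the two one-way scores, shown as a sketch only (the source gives no weight values; 0.4/0.6 are purely illustrative):
```python
def reciprocal_score(direct: float, reverse: float,
                     w_direct: float = 0.4, w_reverse: float = 0.6) -> float:
    """Weighted combination of the one-way scores. Choosing
    w_reverse > w_direct gives male attractiveness by the female
    preference over female attractiveness by the male."""
    return w_direct * direct + w_reverse * reverse
```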
  • the first person is a female and the second person is a male.
  • the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
  • the first person is a male and the second person is a male.
  • the first machine learning model and the second machine learning model are also trained for identifying same-sex attractiveness, in addition to the features and distance between features.
  • the first person is a female and the second person is a female.
  • the first machine learning model and the second machine learning model are also trained for identifying same-sex attractiveness, in addition to the features and distance between features.
  • the first machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
  • Each given machine learning model of at least one first machine learning model can be trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on at least one subsystem direct attractiveness score.
  • An example of a biological sub-system score can be a skeleton sub-system score, which includes features that are related to the skeletal properties of the imaged person. Additional examples of sub-system scores are: a health/immune sub-system score, a hormone sub-system score, etc. Combining these sub-system scores can produce an overall attractiveness score.
  • system 200 determines, based on the at least one first image, one or more first image features (block 304).
  • the first image features can be determined by system 200 by analyzing one or more images containing the first person, a video of the first person, a three-dimensional model of the first person, etc.
  • the at least one first image in these cases can be one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
  • System 200 can then determine, based on the at least one second image, one or more second image features (block 306).
  • the second image features can be determined by system 200 by analyzing one or more images containing the second person, a video of the second person, a three-dimensional model of the second person, etc.
  • the at least one second image in these cases can be one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
  • system 200 obtains two or more images representing a person (the first person and/or the second person). These two or more images can be a series of images taken over time or can be extracted from a video (for example: from a video file, from a live video feed, etc.) representing the person (the first person and/or the second person).
  • the two or more images can be manipulated mathematically to generate a three-dimensional model of the person and specifically a three-dimensional model of the person's face.
  • the three-dimensional model of the person, and specifically the three-dimensional model of the person's face can be generated directly from a video representing the person (for example, a video where the person appears in one or more of its frames).
  • the three-dimensional model can be generated, for example, by identifying organs such as the mouth, the nose or the eyes of the person in the two or more images, and using their locations to create the three-dimensional model.
  • the three-dimensional model of the person, and specifically the three-dimensional model of the person's face, can be utilized by system 200 to determine the landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.) in the three-dimensional space and calculate the person's features. These features are more accurate than features calculated based on landmarks from a static two-dimensional image.
  • a wrinkle in the face of the first person can be modeled in the three-dimensional model, and system 200 can analyze the three-dimensional shape of the wrinkle, including its depth, as part of the landmark determination.
  • the same methods of extracting features from three-dimensional models can be used by system 200, or by an external system, to train the first machine learning model and the second machine learning model based on training data in which features are at least in part extracted from three-dimensional models of the first person and/or the second person.
  • three-dimensional models readily support the creation of synthetic training data by using machine learning methods, such as a Generative Adversarial Network (GAN), to generate synthetic images from a base three-dimensional model by adding one or more variations to the base three-dimensional model, thereby creating a series of synthetic variants of the base three-dimensional model that can be used for training the machine learning models.
  • system 200 calculates: (a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model; (b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and (c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score (block 308).
  • the reciprocal attractiveness score can be calculated as an average of the direct attractiveness score and the reverse attractiveness score, or as a weighted score, such that attractiveness of a male by a female is given priority (a minimal sketch of this calculation follows this list).
  • the system can be implemented, at least partly, as a suitably programmed computer.
  • the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method.
  • the presently disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.
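The score combination described in the bullets above can be expressed compactly in code. The following is a minimal sketch only: the function and parameter names, the scikit-learn-style predict_proba() interface, the weight values, and the direction of the feature subtraction are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np

def reciprocal_attractiveness(male_features, female_features,
                              first_model, second_model,
                              w_direct=0.4, w_reverse=0.6):
    """Illustrative sketch of the direct/reverse/reciprocal calculation.

    first_model receives the female (second image) features plus the
    male-minus-female feature differences; second_model receives the male
    (first image) features plus the same differences. Both are assumed to
    be scikit-learn-style binary classifiers. Setting w_reverse > w_direct
    gives preference to male attractiveness by the female, one of the
    weighting options described above.
    """
    # The mathematical manipulation (a subtraction; direction is an assumption).
    diff = male_features - female_features

    # Direct score: attractiveness of the second person by the first person.
    direct = first_model.predict_proba(
        np.concatenate([female_features, diff])[None, :])[0, 1]

    # Reverse score: attractiveness of the first person by the second person.
    reverse = second_model.predict_proba(
        np.concatenate([male_features, diff])[None, :])[0, 1]

    # Reciprocal score: weighted combination (equal weights give a plain average).
    return w_direct * direct + w_reverse * reverse
```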

Abstract

A system for determining attractiveness between a first person and a second person, the system comprising processing circuitry configured to: obtain: at least one first image, representative of the first person; at least one second image, representative of the second person; at least one first machine learning model, capable of receiving: second image features of the at least one second image, and a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and at least one second machine learning model, capable of receiving: the first image features, and a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score.

Description

ATTRACTIVENESS DETERMINATION SYSTEM AND METHOD
TECHNICAL FIELD
The present invention relates to the field of attractiveness determination system and method.
BACKGROUND
Attractiveness between human beings is the capacity to arouse interest in another person. Determining if and how much one person is attractive in the eyes of another can be utilized for dating, advertising, medical treatment, matching, placement, human-resource needs and other applications that are based on attractiveness.
Current solutions rely on generic facial attractiveness models and/or on subjective information from questionnaires provided by the individuals that are matched, for determining attractiveness between a first person and a second person. As attractiveness is a subjective and unique feeling, these current solutions provide limited capabilities to determine attractiveness in an accurate, fast, online and personal manner. These current solutions do not take into account the potential advantage of utilizing features extracted from images of the first and the second persons to determine relative attractiveness between them, even without any additional information.
Thus, there is a need for a novel attractiveness determination system and method.
GENERAL DESCRIPTION
In accordance with a first aspect of the presently disclosed subject matter, there is provided a system for determining attractiveness between a first person and a second person, the system comprising processing circuitry configured to: (A) obtain: (a) at least one first image, representative of the first person; (b) at least one second image, representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person; (B) determine, based on the at least one first image, one or more first image features; (C) determine, based on the at least one second image, one or more second image features; and (D) calculate: (a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model; (b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and (c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
In some cases, the at least one first image is one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
In some cases, the at least one second image is one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
In some cases, one or more of the first image features are based on anatomical structure of a face appearing in the at least one first image.
In some cases, one or more of the second image features are based on anatomical structure of a face appearing in the at least one second image.
In some cases, the determination of the first image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one first image, and calculating the first image features, based on the landmarks, (b) analyzing the at least one first image using at least one machine learning model to determine the first image features, or (c) analysis of questionnaires answered by the first person.
In some cases, the determination of the second image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one second image, and calculating the second image features, based on the landmarks, (b) analyzing the at least one second image using at least one machine learning model to determine the second image features, or (c) analysis of questionnaires answered by the second person.
In some cases, the at least one first image includes at least part of a body of the first person and wherein at least one of the first image features is determined based on the at least part of the body.
In some cases, the at least one second image includes at least part of a body of the second person and wherein at least one of the second image features is determined based on the at least part of the body.
In some cases, the at least one first image includes at least part of garments worn by the first person and wherein at least one of the first image features is determined based on the at least part of the garments.
In some cases, the at least one second image includes at least part of garments worn by the second person and wherein at least one of the second image features is determined based on the at least part of the garments.
In some cases, the at least one first image includes at least part of a background behind the first person and wherein at least one of the first image features is determined based on the at least part of the background.
In some cases, the at least one second image includes at least part of a background behind the second person and wherein at least one of the second image features is determined based on the at least part of the background.
In some cases, the at least one first image includes at least part of a palm of the first person and wherein at least one of the first image features is determined based on the at least part of the palm.
In some cases, the at least one second image includes at least part of a palm of the second person and wherein at least one of the second image features is determined based on the at least part of the palm.
In some cases, the at least one first image is captured from one or more of: (a) a video recording of the first person, (b) a static two-dimensional image of the first person, and (c) a static three-dimensional image of the first person.
In some cases, the at least one second image is captured from one or more of: (a) a video recording of the second person, (b) a static two-dimensional image of the second person, and (c) a static three-dimensional image of the second person.
In some cases, the at least one static three-dimensional facial image of the first person is generated from one or more of: (a) at least one hologram of the first person, (b) one or more static two-dimensional images of the first person, and (c) one or more static three-dimensional images of the first person.
In some cases, the at least one static three-dimensional facial image of the second person is generated from one or more of: (a) at least one hologram of the second person, (b) one or more static two-dimensional images of the second person, and (c) one or more static three-dimensional images of the second person.
In some cases, the obtaining includes meta-data about the first person and wherein at least one of the first image features is determined based on the meta-data.
In some cases, the obtaining includes meta-data about the second person and wherein at least one of the second image features is determined based on the meta-data.
In some cases, the first mathematical manipulation is a difference between the first image features and the second image features.
In some cases, the second mathematical manipulation is a difference between the first image features and the second image features.
In some cases, the first mathematical manipulation is a directional distance between the first image features and the second image features.
In some cases, the second mathematical manipulation is a directional distance between the first image features and the second image features.
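By way of a non-limiting illustration, the two manipulations mentioned above can be implemented over feature vectors as follows. The reading of "directional distance" in this Python sketch is an assumption introduced for illustration, not a definition fixed by the disclosure.

```python
import numpy as np

def difference_manipulation(first_features: np.ndarray,
                            second_features: np.ndarray) -> np.ndarray:
    """Element-wise difference between first and second image features."""
    return first_features - second_features

def directional_distance(first_features: np.ndarray,
                         second_features: np.ndarray) -> np.ndarray:
    """Signed, scale-normalized distance per feature; one plausible reading
    of a 'directional distance' (an assumption for illustration only)."""
    scale = np.maximum(np.abs(first_features) + np.abs(second_features), 1e-9)
    return (first_features - second_features) / scale
```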
In some cases, the first machine learning model can be based on one or more neural network techniques.
In some cases, the second machine learning model can be based on one or more neural network techniques.
In some cases, the first person is a male and the second person is a female.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein the objective beauty score of the female gets preference over the first image features and the second image features.
In some cases, the first person is a female and the second person is a male.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
In some cases, the first person is a male and the second person is a male.
In some cases, the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
In some cases, the first person is a female and the second person is a female.
In some cases, the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
In some cases, the first machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
In some cases, the second machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
In some cases, the processing circuitry is further configured to, after obtaining the at least one first image of the first person, pre-process the at least one first image to determine a facial quality score of a face appearing in the at least one first image and wherein the calculating is performed only when the facial quality score is above a first threshold.
In some cases, the processing circuitry is further configured to, after obtaining the at least one second image of the second person, pre-process the at least one second image to determine a facial quality score of a face appearing in the at least one second image and wherein the calculating is performed only when the facial quality score is above a second threshold.
In some cases, each given attractiveness detection model of the at least one first attractiveness detection model is trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on the at least one sub-system direct attractiveness score.
In some cases, each given attractiveness detection model of the at least one second attractiveness detection model is trained to determine a sub-system reverse attractiveness score and wherein the reverse attractiveness score is determined based on the at least one sub-system reverse attractiveness score.
In accordance with a second aspect of the presently disclosed subject matter, there is provided a method for determining attractiveness between a first person and a second person, the method comprising: (A) obtaining, by a processing circuitry: (a) at least one first image, representative of the first person; (b) at least one second image, representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person; (B) determining, by the processing circuitry, based on the at least one first image, one or more first image features; (C) determining, by the processing circuitry, based on the at least one second image, one or more second image features; and (D) calculating, by the processing circuitry: (a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model; (b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and (c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
In some cases, the at least one first image is one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
In some cases, the at least one second image is one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
In some cases, one or more of the first image features are based on anatomical structure of a face appearing in the at least one first image.
In some cases, one or more of the second image features are based on anatomical structure of a face appearing in the at least one second image.
In some cases, the determination of the first image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one first image, and calculating the first image features, based on the landmarks, (b) analyzing the at least one first image using at least one machine learning model to determine the first image features, or (c) analysis of questionnaires answered by the first person.
In some cases, the determination of the second image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one second image, and calculating the second image features, based on the landmarks, (b) analyzing the at least one second image using at least one machine learning model to determine the second image features, or (c) analysis of questionnaires answered by the second person.
In some cases, the at least one first image includes at least part of a body of the first person and wherein at least one of the first image features is determined based on the at least part of the body.
In some cases, the at least one second image includes at least part of a body of the second person and wherein at least one of the second image features is determined based on the at least part of the body.
In some cases, the at least one first image includes at least part of garments worn by the first person and wherein at least one of the first image features is determined based on the at least part of the garments.
In some cases, the at least one second image includes at least part of garments worn by the second person and wherein at least one of the second image features is determined based on the at least part of the garments.
In some cases, the at least one first image includes at least part of a background behind the first person and wherein at least one of the first image features is determined based on the at least part of the background.
In some cases, the at least one second image includes at least part of a background behind the second person and wherein at least one of the second image features is determined based on the at least part of the background.
In some cases, the at least one first image includes at least part of a palm of the first person and wherein at least one of the first image features is determined based on the at least part of the palm.
In some cases, the at least one second image includes at least part of a palm of the second person and wherein at least one of the second image features is determined based on the at least part of the palm.
In some cases, the at least one first image is captured from one or more of: (a) a video recording of the first person, (b) a static two-dimensional image of the first person, and (c) a static three-dimensional image of the first person.
In some cases, the at least one second image is captured from one or more of: (a) a video recording of the second person, (b) a static two-dimensional image of the second person, and (c) a static three-dimensional image of the second person.
In some cases, the at least one static three-dimensional facial image of the first person is generated from one or more of: (a) at least one hologram of the first person, (b) one or more static two-dimensional images of the first person, and (c) one or more static three-dimensional images of the first person.
In some cases, the at least one static three-dimensional facial image of the second person is generated from one or more of: (a) at least one hologram of the second person, (b) one or more static two-dimensional images of the second person, and (c) one or more static three-dimensional images of the second person.
In some cases, the obtaining includes meta-data about the first person and wherein at least one of the first image features is determined based on the meta-data.
In some cases, the obtaining includes meta-data about the second person and wherein at least one of the second image features is determined based on the meta-data.
In some cases, the first mathematical manipulation is a difference between the first image features and the second image features.
In some cases, the second mathematical manipulation is a difference between the first image features and the second image features.
In some cases, the first mathematical manipulation is a directional distance between the first image features and the second image features.
In some cases, the second mathematical manipulation is a directional distance between the first image features and the second image features.
In some cases, the first machine learning model can be based on one or more neural network techniques.
In some cases, the second machine learning model can be based on one or more neural network techniques.
In some cases, the first person is a male and the second person is a female.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein the objective beauty score of the female gets preference over the first image features and the second image features.
In some cases, the first person is a female and the second person is a male.
In some cases, the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
In some cases, the first person is a male and the second person is a male.
In some cases, the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
In some cases, the first person is a female and the second person is a female.
In some cases, the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
In some cases, the first machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
In some cases, the second machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
In some cases, the method further comprises, after obtaining the at least one first image of the first person, pre-processing, by the processing circuitry, the at least one first image to determine a facial quality score of a face appearing in the at least one first image and wherein the calculating is performed only when the facial quality score is above a first threshold.
In some cases, the method further comprises, after obtaining the at least one second image of the second person, pre-processing, by the processing circuitry, the at least one second image to determine a facial quality score of a face appearing in the at least one second image and wherein the calculating is performed only when the facial quality score is above a second threshold.
In some cases, each given attractiveness detection model of the at least one first attractiveness detection model is trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on the at least one sub-system direct attractiveness score.
In some cases, each given attractiveness detection model of the at least one second attractiveness detection model is trained to determine a sub-system reverse attractiveness score and wherein the reverse attractiveness score is determined based on the at least one sub-system reverse attractiveness score.
In accordance with a third aspect of the presently disclosed subject matter, there is provided a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code, executable by at least one processing circuitry of a computer to perform a method comprising: (A) obtaining, by a processing circuitry: (a) at least one first image, representative of the first person; (b) at least one second image, representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person; (B) determining, by the processing circuitry, based on the at least one first image, one or more first image features; (C) determining, by the processing circuitry, based on the at least one second image, one or more second image features; and (D) calculating, by the processing circuitry: (a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model; (b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and (c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to understand the presently disclosed subject matter and to see how it may be carried out in practice, the subject matter will now be described, by way of non-limiting examples only, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic illustration of an example of a first image of a first person and a second image of a second person, used for determining attractiveness between them, in accordance with the presently disclosed subject matter;
Fig. 2 is a block diagram schematically illustrating one example of an attractiveness determination system, in accordance with the presently disclosed subject matter;
Fig. 3 is a flowchart illustrating an example of a sequence of operations carried out by an attractiveness determination system, in accordance with the presently disclosed subject matter;
Fig. 4A is an exemplary results graph for male attractiveness determination with Receiver Operating Characteristic (ROC) curves for linear regression, boost and boost3 algorithms; and
Fig. 4B is an exemplary results graph for female attractiveness determination with ROC curves for linear regression, boost and boost3 algorithms.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the presently disclosed subject matter. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well- known methods, procedures, and components have not been described in detail so as not to obscure the presently disclosed subject matter.
In the drawings and descriptions set forth, identical reference numerals indicate those components that are common to different embodiments or configurations.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "obtaining", "identifying", "matching", "calculating", "generating", "determining" or the like, include action and/or processes of a computer that manipulate and/or transform data into other data, said data represented as physical quantities, e.g., such as electronic quantities, and/or said data representing the physical objects. The terms “computer”, “processor”, “processing resource”, “processing circuitry”, and “controller” should be expansively construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, a personal desktop/laptop computer, a server, a computing system, a communication device, a smartphone, a tablet computer, a smart television, a processor (e.g. digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), a group of multiple physical machines sharing performance of various tasks, virtual servers co-residing on a single physical machine, any other electronic computing device, and/or any combination thereof.
The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general-purpose computer specially configured for the desired purpose by a computer program stored in a non- transitory computer readable storage medium. The term "non-transitory" is used herein to exclude transitory, propagating signals, but to otherwise include any volatile or nonvolatile computer memory technology suitable to the application.
As used herein, the phrase "for example," "such as", "for instance" and variants thereof describe non-limiting embodiments of the presently disclosed subject matter. Reference in the specification to "one case", "some cases", "other cases" or variants thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter. Thus, the appearance of the phrase "one case", "some cases", "other cases" or variants thereof does not necessarily refer to the same embodiment(s).
It is appreciated that, unless specifically stated otherwise, certain features of the presently disclosed subject matter, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the presently disclosed subject matter, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
In embodiments of the presently disclosed subject matter, fewer, more and/or different stages than those shown in Fig. 3 may be executed. In embodiments of the presently disclosed subject matter one or more stages illustrated in Fig. 3 may be executed in a different order and/or one or more groups of stages may be executed simultaneously.
Figs. 1 and 2 illustrate a general schematic of the system architecture in accordance with an embodiment of the presently disclosed subject matter. Each module in Figs. 1 and 2 can be made up of any combination of software, hardware and/or firmware that performs the functions as defined and explained herein. The modules in Figs. 1 and 2 may be centralized in one location or dispersed over more than one location. In other embodiments of the presently disclosed subject matter, the system may comprise fewer, more, and/or different modules than those shown in Figs. 1 and 2.
Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that once executed by a computer result in the execution of the method.
Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that may be executed by the system.
Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer readable medium and should be applied mutatis mutandis to a method that may be executed by a computer that reads the instructions stored in the non-transitory computer readable medium.
Matching between people is a challenging task. One possibility is utilizing findings from the domain of psychology. Evolutionary psychology research has found a correlation between a person's physical properties and his or her choices and preferences of partners. Physiology suggests that biological properties of an individual have major influence on that individual's survival and procreation probabilities. This is why physiological properties can be utilized as a measure of attractiveness between people. Other domains can offer additional correlations between personal properties and attractiveness measures, for example: a correlation between the level of a certain neurotransmitter in a given person and the level of attractiveness at which that person sees another person.
It is to be noted that matching is predicted to be an even more challenging task as more and more aspects of our lives are experienced on-line. The Meta-verse is an example of such a transfer of elements of our being from the real world to a virtual on-line world. The need for the described solution will only become more acute.
Bearing this in mind, attention is drawn to Fig. 1, showing a schematic illustration of an example of a first image of a first person and a second image of a second person, used for determining attractiveness, in accordance with the presently disclosed subject matter.
As shown in the schematic illustration, a pair of images (e.g., Image A 110-a, Image B 110-b) is presented. Image A 110-a is an image of a first person and Image B 110-b is an image of a second person. The images can incorporate facial images that can be used to determine the facial features of each individual. Image A 110-a includes a facial image of the first person, and Image B 110-b includes a facial image of the second person. Facial structures are utilized to determine attractiveness between individuals.
It is notable that face structure (and features) is unique. Each individual's personality is manifested in his or her own face structure (and features). Human faces have evolved to signal individual identity in human interaction. Facial structure (and features) can be utilized to determine attractiveness. The structure of the face reflects the genetic characteristics and history of that individual. Facial structure exposes the individual's health, parental suitability, level of aggressiveness, family history, and more.
Evolution has created differences in the morphology of males' and females' facial structures. These differences have evolved to attract partners. Visual facial elements hint at the hormonal state, the health, the fertility and the gene pool of the preferred partner. It turns out that the perception of beauty is universal and is independent of geography, religion, race, culture or political views.
Attractiveness, on the other hand, is subjective. As each individual has an exclusive biological status, each individual has his or her own unique taste and match “requirements”. Facial structure can be utilized to identify this high level of match and to predict attractiveness between two persons.
There is a correlation between the visually identifiable facial structure of an individual and his or her biological systems. The immune system, the hormonal system, and other biological systems are expressed especially in the facial structure. Symmetrical facial structure, for example, can imply better capabilities in coping with environmental challenges, such as resistance to viruses. Symmetry has also been identified as one of many indicators for attractiveness. A symmetrical face structure is indicative of the immune system of potential mates and as such it affects attractiveness. People prefer average symmetrical faces and not perfectly symmetrical faces. Another example relates to estrogens, a group of hormones that play an important role in the normal sexual and reproductive development in women. Estrogen level affects the facial appearance of women. In males, for example, testosterone levels can change the facial appearance and are associated with facial width and facial bone structure. Hormones, in general, indicate the gender, levels of body energy available, and general health of a person.
When an individual views an image of an attractive person, his/her brain's orbitofrontal cortex (OFC) responds. The mid-anterior OFC tracks subjective pleasure in response to the attractive facial image. The brain will then release oxytocin and dopamine to enhance the reward of viewing a facial image of a (potentially) attractive partner. These hormones affect the behavior of the individual as they reward pleasurable stimuli.
An attractiveness determination system (also interchangeably referred to herein as "system") can utilize human images (e.g., Image A 110-a, Image B 110-b) to determine attractiveness and/or a match between the first person and the second person and vice versa. The images can undergo a pre-processing stage, where the attractiveness determination system verifies that an image of a single human face is included in the image. The pre-processing can be done, for example, by utilizing a visual object detection image processing method that can identify a bounding rectangle around a face within an image. In addition, an image analysis method can be employed to identify two eyes within the bounding rectangle.
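As a non-limiting illustration, this pre-processing step can be sketched with standard OpenCV Haar cascades. The disclosure does not mandate this particular detector, so the following is only one plausible implementation, and the function name is introduced here for illustration.

```python
import cv2

# Standard OpenCV Haar cascades; one common way to implement the described
# pre-processing, not necessarily the one used by the disclosed system.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def contains_single_face_with_eyes(image_path: str) -> bool:
    """Return True if exactly one face, with two detectable eyes, is found."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False
    # Search for eyes only inside the detected face's bounding rectangle.
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    return len(eyes) >= 2
```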
It is to be noted that the images (e.g., Image A 110-a, Image B 110-b) do not necessarily include faces. The attractiveness determination system can utilize other parts of the images to determine a match. For example, the attractiveness determination system can match between the backgrounds of the images, between the travel landscapes of the images, the selfie angle of the images, clothing of the persons, their accessories, etc.
Another possibility is for the attractiveness determination system to utilize images of at least part of a body of the depicted persons (such as: the palm of the first and second persons) to determine attractiveness, for example, by determining a finger feature (based on an index-to-ring finger ratio) for each of the images. Garments depicted within the image can also be utilized by the system to determine attractiveness.
The pre-processing stage can include determination of a Facial Quality Score (FQS) for at least some of the images. In this phase, the attractiveness determination system can analyze the images and determine the FQS for each of these images. The facial quality score can be based on properties of the image and on properties of the facial image to which they relate. These properties can include, for example, the size of the facial image within the image, the sharpness of the facial image, the number of faces that appear in the image, etc. An image with an FQS that is below a threshold is discarded by the attractiveness determination system and is not used to determine attractiveness.
The importance of this pre-processing stage arises from the fact that the manner in which the facial image was captured can have a large effect on the attractiveness score, e.g., on the accuracy of the prediction of the attractiveness determination system. For example, the larger the image resolution, the more details are seen in the face and the more the prediction accuracy is likely to improve. In another example, the more frontal the head position of the person in the image (i.e., closer to zero degrees), the more parts of the face are visible and the more accurate the prediction will be.
It is to be noted that the quality of the face within the image is more important than the quality of the (raw) image itself. The image can be of high resolution; however, if the area of the face within the image is very small (for example, only 1% for a person captured in the image from a remote distance), then the overall resolution of the face will be low.
In some cases, the attractiveness determination system can be provided with multiple images of the same user (for example, when the user changes his/her selfie image for social purposes). An additional image can be identical to an image previously provided for that user, or it can be a new image, for example, an image taken ad-hoc. The attractiveness determination system can utilize all the multiple images of the same user available at a data repository used by the system to store the image provided by the user in the current transaction and other images provided by the same user in prior transactions. Since all the images relate to a specific person, they can all be utilized to enhance the prediction score, each image according to the quality of the image and its underlying face, as sketched below.
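A minimal sketch of such a quality-weighted combination, assuming a per-image attractiveness score and FQS are already available; the weighting scheme itself (FQS-proportional weights) is an illustrative assumption, not the disclosed formula.

```python
def combined_score(per_image_scores, per_image_fqs):
    """Combine attractiveness scores predicted from several images of the
    same user, weighting each score by its facial quality score (FQS)."""
    total_weight = sum(per_image_fqs)
    if total_weight == 0:
        raise ValueError("no usable images")
    return sum(s * q for s, q in zip(per_image_scores, per_image_fqs)) / total_weight
```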
A non-limiting exemplary algorithm for performing the FQS pre-processing for a single image can be: elimination (rejection) of the image (e.g., the image will not be used by the system for prediction) when one or more of the following is true: (a) the image has no face, (b) the image has no eyes, (c) the face in the image is cut - chin and/or ears and/or forehead are missing, and (d) the face in the image is distorted - for example: fisheye (barrel-type) or pillow-type distortion of the face, resulting from an image taken in very close proximity to the camera.
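The elimination rules above can be expressed as a simple predicate over the output of an (assumed) upstream image-analysis step; the dictionary keys below are hypothetical names introduced only for this sketch.

```python
def reject_image(analysis: dict) -> bool:
    """Apply the elimination rules to an upstream analysis result.
    All keys are hypothetical placeholders for that analysis output."""
    return (
        not analysis.get("has_face", False)
        or not analysis.get("has_eyes", False)
        or analysis.get("face_is_cut", False)        # missing chin/ears/forehead
        or analysis.get("face_is_distorted", False)  # fisheye/pillow distortion
    )
```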
The parameters that affect FQS are one or more of: (a) head pose angles - pitch, pose/yaw and roll angles of the face. The smaller the angles, the larger the face quality score. The angles are estimated by a computer vision algorithm according to measurements over the face (specifically, nose and eye positions and distances), (b) emotion type and level in the face - there are several types of emotion expressed by a human, such as sadness, joy, laughter, anger, etc. Per type of emotion, there is a level of emotion (for example, a light smile or strong anger). The smaller the level of emotion, the larger the face quality score is likely to be, (c) face visibility - the face visibility can change according to many different natural parameters, such as mustache, beard, hair, light level, shadow, etc. The more visible the face, the more accurate the attractiveness scores are likely to be, (d) accessories - human-made objects that may affect the visibility of the face, such as eyeglasses, sunglasses, a hat, a scarf, a mask, earrings, a tattoo, a pipe, a cigarette, etc. The more visible the face, the more accurate the attractiveness prediction is likely to be, and (e) resolution - the number of pixels (in two dimensions) of the face (e.g., of the bounding box of the face) and the interpupillary distance, i.e., the distance, in pixels, between the centers of the eyes. FQS can be calculated as a function of one or more of these parameters, as sketched below.
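The disclosure only states that FQS is some function of these parameters, so the following is a hedged sketch: the functional form, normalizations and constants are illustrative assumptions chosen to respect the monotonic relations described above (smaller angles, smaller emotion level, better visibility, fewer occluding accessories and higher resolution all increase the score).

```python
import math

def facial_quality_score(pitch, yaw, roll, emotion_level, visibility,
                         accessory_penalty, interpupillary_px):
    """One possible FQS in [0, 1].

    pitch/yaw/roll are in degrees; emotion_level, visibility and
    accessory_penalty are assumed pre-normalized to [0, 1];
    interpupillary_px is the eye-centre distance in pixels.
    All constants are illustrative assumptions only.
    """
    pose_term = math.exp(-(abs(pitch) + abs(yaw) + abs(roll)) / 90.0)  # smaller angles -> larger score
    emotion_term = 1.0 - emotion_level                   # more neutral -> larger score
    resolution_term = min(interpupillary_px / 60.0, 1.0)  # saturates at ~60 px between eye centers
    return pose_term * emotion_term * visibility * (1.0 - accessory_penalty) * resolution_term
```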
The pre-processing stage can include one or more image manipulations on at least one of the images. These manipulations can include: emotion negation, image capturing angle adjustments, facial image size corrections, in-plane facial image rotation, out-of-plane facial image rotation (frontalization) and more. The attractiveness determination system will use the manipulated images to determine attractiveness.
The attractiveness determination system can work in a batch mode - where multiple pairs of images are provided to the system and the system determines the attractiveness between the persons depicted in each pair. In some cases, one or more video feeds are provided to the attractiveness determination system, and the system extracts the images from the video. For example, by capturing an image from the video feed, or by analyzing the video feed to identify one or more persons and extracting their facial images from the video.
The attractiveness determination system can determine, for at least some of the images, one or more landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.) within each of the images. Each landmark is a predetermined point within the facial image. The landmarks can be anatomical landmarks that are based on the anatomical structure of the face appearing within the image. These landmarks are unambiguously identified in every image and are placed in positions that ensure a reasonable degree of correspondence between the landmarks' locations across the images. The attractiveness determination system analyzes the facial image to identify the location within the image of the facial organs (such as: nose, ears, mouth, eyes, chin, etc.). The system can determine the location of the landmarks in relation to these identified organs. A non-limiting example is depicted in Fig. 1. In this example, the system identifies the location of the nose in Image A 110-a and in Image B 110-b. The system then utilizes these locations to determine the location of landmarks that are related to the nose. In our example, landmark AA 120-aa in Image A 110-a and landmark BA 120-ba in Image B 110-b. In a similar fashion, the system can determine the location of additional landmarks that are associated with additional facial organs, such as the eyes, the mouth, the chin, etc.
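One common way to obtain such facial landmarks is a pre-trained landmark predictor. The sketch below uses dlib's standard 68-point predictor as a stand-in for whatever landmark-determination method the system actually employs; the choice of dlib and the model file are assumptions for illustration.

```python
import dlib

# Standard dlib face detector and 68-point landmark predictor; a stand-in
# for whatever landmark-determination method the system actually employs.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(image):
    """Return a list of (x, y) landmark coordinates for the first detected
    face, or an empty list if no face is found. `image` is a grayscale or
    RGB image array."""
    faces = detector(image)
    if not faces:
        return []
    shape = predictor(image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```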
The attractiveness determination system utilizes the landmarks to determine one or more features for each of the images. The features can be calculated based on the location of the landmarks within each image. For example, the first feature can be calculated as the distance between an eye and the nose with respect to inter-pupil distance (IPD). This first feature can be calculated for image A 110-a based on the location of landmark AA 120-aa and the location of landmark AN 120-an. The same first feature can be calculated for image B 110-b based on the location of landmark BA 120-ba and the location of landmark BN 120-bn. The first feature will have a first value for image A 110-a and a second value for image B 110-b. The system can calculate additional features for each given image of the images, based on the locations of the corresponding landmarks within the given image.
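The eye-to-nose feature described above can be sketched as follows, with landmarks given as (x, y) pixel coordinates; the normalization by inter-pupil distance is taken from the text, while the function names are introduced here for illustration.

```python
import numpy as np

def ipd(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Inter-pupil distance in pixels, from the two eye-center landmarks."""
    return float(np.linalg.norm(left_eye - right_eye))

def eye_to_nose_feature(eye: np.ndarray, nose: np.ndarray,
                        left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Eye-to-nose distance normalized by IPD, making the feature
    scale-invariant across images of different resolutions."""
    return float(np.linalg.norm(eye - nose)) / ipd(left_eye, right_eye)
```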
The attractiveness determination system can also utilize facial images to determine features that are not landmark related. Another example is the system determining the emotions of the persons in the images and calculating features based on these emotions. For example: the system can determine that a person is smiling in the image and can calculate a feature of happiness level for that person. Other parts of the images can be utilized by the system to determine features. In some cases, these other parts of the images can be used to determine additional features, in addition to features determined based on the facial images. For example, the amount of light in the background (such as: night, day, neon, white balance, etc.) of the image can be used to calculate features. In another example, scenery in the background of the images can be used to calculate a feature related to travel locations of the depicted persons or time of day. Features can also be calculated based on other skin and/or body parts within the images, such as: race, age, height, Body Mass Index (BMI), etc. For example, by calculating a finger feature (based on an index-to-ring finger ratio) for each of the images. Features can also be calculated based on garments or other clothing features (such as: glasses, tattoo, earrings, etc.) that appear within the images. For example, by calculating a season feature (based on the type of clothes the persons in the images are wearing) for each of the images. In some cases, the system can utilize one or more machine learning models (such as: supervised learning models, unsupervised learning models, deep learning models, etc.) to determine the features of at least some of the images. In addition, the system can calculate features for the depicted persons from supplementary sources that can accompany the images, for example: from meta-data obtained with the images, from answers to questionnaires provided by the persons depicted in the images, from sensors sensing the persons (heart rate, sweat, eye blinking rate, etc.), etc. The features can be calculated based on knowledge associated with the domains of anthropology, neurobiology, physiology, neuropsychology, evolutionary biology (morphology, dysmorphology), chemistry and others. At least some of the images of the first and second persons can include other types of imagery (including non-visual imagery), such as: Functional Magnetic Resonance Imaging or functional MRI (fMRI), spectral imaging in different wavelengths, facial topography, Cloud of Points (COP) from 3D facial scanning, etc.
The features can be utilized by the attractiveness determination system to train a machine learning model that can be used to predict the attractiveness between the first person and the second person, as will be further described hereafter in reference to Fig. 3.
Attention is now drawn to a description of the components of the attractiveness determination system 200. Fig. 2 is a block diagram schematically illustrating one example of the attractiveness determination system 200, in accordance with the presently disclosed subject matter.
In accordance with the presently disclosed subject matter, the attractiveness determination system 200 (also interchangeably referred to herein as "system 200") can comprise a network interface 206. The network interface 206 (e.g., a network card, a WiFi client, a Li-Fi client, a 3G/4G/5G client, a satellite communications component or any other component) enables system 200 to communicate over a network with external systems and handles inbound and outbound communications from such systems. For example, system 200 can receive and/or send, through network interface 206, a plurality of images (for example: image A 110-a, image B 110-b, etc.), data about landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.), data about features, one or more machine learning models, training data-sets used to train machine learning models, attractiveness scores, etc.
System 200 can further comprise or be otherwise associated with a data repository 204 (e.g., a database, a storage system, a memory including Read Only Memory - ROM, Random Access Memory - RAM, or any other type of memory, etc.) configured to store data. Some examples of data that can be stored in the data repository 204 include: a plurality of images (for example: image A 110-a, image B 110-b, etc.), data about landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.), data about features, one or more machine learning models, training data-sets used to train the machine learning models, etc. Data repository 204 can be further configured to enable retrieval and/or update and/or deletion of the stored data. It is to be noted that in some cases, data repository 204 can be distributed, while system 200 has access to the information stored thereon, e.g., via a wired or wireless network to which system 200 is able to connect (utilizing its network interface 206).
System 200 further comprises processing circuitry 202. Processing circuitry 202 can be one or more processing units (e.g., central processing units), microprocessors, microcontrollers (e.g., microcontroller units (MCUs)), cloud servers, graphics processing units (GPUs), or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units, which are adapted to independently or cooperatively process data for controlling relevant resources of system 200 and for enabling operations related to those resources.
The processing circuitry 202 comprises an attractiveness scoring module 208, configured to perform an attractiveness scoring process, as further detailed herein, inter alia with reference to Fig. 3.
It should be noted that system 200 can operate as a standalone system without the network interface 206 and/or the data repository 204. Adding one or both of these elements to system 200 is optional, as system 200 can operate according to its intended use either way.
Turning to Fig. 3 there is shown a flowchart illustrating an example of a sequence of operations carried out by an attractiveness determination system, in accordance with the presently disclosed subject matter.
Accordingly, the attractiveness determination system 200 can be configured to perform an attractiveness scoring process 300, e.g., using the attractiveness scoring module 208. The attractiveness determination system 200 can use a training data-set of multiple pairs of images (such as the pair: image A 110-a and image B 110-b). Each image pair is associated with one or more attractiveness labels. A label can be binary, indicating whether the person depicted in the first image of the pair finds the person depicted in the second image of the pair attractive or not. In some cases, the labels can be non-binary attractiveness scores. The system 200 can extract features from the multiple pairs of images. The features and the labels are used to train one or more machine learning models (such as: supervised learning models, unsupervised learning models, deep learning models, etc.). The training of the machine learning models can be based on a mathematical manipulation between the features extracted from the first image of the pair and the features extracted from the second image of the pair. The mathematical manipulation can be a difference between the first image features of the first image and the second image features of the second image, i.e., a subtraction of the second image features from the first image features. In some cases, the mathematical manipulation can be a directional distance between the first image features and the second image features. Continuing the example above, the first feature calculated for the first image can be the distance between the eye and the nose depicted in the first image, and the corresponding feature calculated for the second image can be the distance between the eye and the nose depicted in the second image. In this example, the training of the machine learning model for this pair is based on the second image features and on the difference between the features of the two persons: the feature value calculated for a male in the first image minus the feature value calculated for a female in the second image. It is to be noted that the reverse arrangement, a female in the first image and a male in the second image, is equally possible. Training the machine learning model on pairs has the advantage of calculating the attractiveness match between any pair of persons appearing in two images directly, and of using gradient descent to quickly reach the minimum of the training algorithm's loss function, resulting in a machine learning model that performs better than a model trained on the raw feature values themselves; training on the raw feature values would reach a less optimal minimum. The features of the subject person express the objective attractiveness of the subject, while the difference between male and female features expresses a measure of the inter-personal biological matching, as illustrated in the sketch below.
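The following is a minimal, non-limiting training sketch of this pairwise scheme for the first (direct) model, assuming per-image feature vectors have already been extracted. Logistic regression is one option among the algorithm families mentioned herein, and the random arrays merely stand in for real extracted features and labels.

```python
# Illustrative sketch: train a direct-attractiveness model on image pairs.
# Each row combines the second image's features with the difference
# (first image features minus second image features).
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_pair_inputs(first_feats: np.ndarray, second_feats: np.ndarray) -> np.ndarray:
    return np.hstack([second_feats, first_feats - second_feats])

rng = np.random.default_rng(0)
first_feats = rng.normal(size=(200, 8))         # placeholder male features
second_feats = rng.normal(size=(200, 8))        # placeholder female features
labels = rng.integers(0, 2, size=200)           # placeholder binary labels

model_direct = LogisticRegression(max_iter=1000)
model_direct.fit(build_pair_inputs(first_feats, second_feats), labels)
```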
Fig. 4A is an exemplary results graph for male attractiveness determination, showing ROC curves for the linear regression, boost and boost3 machine learning algorithms used by system 200 to determine attractiveness of females by males in an exemplary experiment. The results of the exemplary experiment show that the machine learning models trained in the experiment can predict attractiveness of females by males. Fig. 4B is an exemplary results graph for female attractiveness determination, showing ROC curves for the logistic regression, boost and boost3 machine learning algorithms used by system 200 to determine attractiveness of males by females in an exemplary experiment. The results of the exemplary experiment show that the machine learning models trained in the experiment can predict attractiveness of males by females.
The trained machine learning models are obtained by the attractiveness determination system 200 and are utilized to calculate attractiveness scores for unlabeled pairs of images. For this purpose, attractiveness determination system 200 obtains: (a) at least one first image (e.g., Image A 110-a), representative of the first person; (b) at least one second image (e.g., Image B 110-b), representative of the second person; (c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and (d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person (block 302). The first and second machine learning models can be based on one or more algorithms, such as: logistic regression algorithms, boosting algorithms, evolutionary algorithms, support-vector machine algorithms, decision tree algorithms, random forest algorithms, etc. In some cases, the first and second machine learning models can be based on one or more deep learning and/or neural network techniques, for example: Convolutional Neural Networks (CNN), encoder-decoders, Deep Stacking Networks (DSN), backpropagation networks, etc. In a non-limiting example, system 200 obtains a first image of a given male and a second image of a given female. This pair of images is unlabeled and there is no prior knowledge of the attractiveness between the given male and the given female. In this example, system 200 also obtains the first machine learning model. The first machine learning model receives features calculated based on at least one image of the female and features calculated based on a mathematical manipulation of the features calculated for the male and the features calculated for the female (for example: a subtraction of the value of each female feature from the value of the corresponding male feature), and has been trained to determine a direct attractiveness score, indicative of a level of attractiveness of the female by the male. The first machine learning model can be used by system 200 to predict the direct attractiveness score of the given female by the given male. In this example, system 200 also obtains a second machine learning model. The second machine learning model receives features calculated based on at least one image of the male and features calculated based on a mathematical manipulation of the features calculated for the male and the features calculated for the female (for example: a subtraction of the value of each male feature from the value of the corresponding female feature), and has been trained to determine a reverse attractiveness score, indicative of a level of attractiveness of the male by the female.
The second machine learning model can be used by system 200 to predict the reverse attractiveness score of the given male by the given female.
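A non-limiting sketch of this inference step follows, assuming direct and reverse models trained with the pairwise inputs shown earlier and feature vectors for the unlabeled pair. The function names are illustrative.

```python
# Illustrative sketch: score an unlabeled male/female pair with both models.
import numpy as np

def direct_score(model_direct, male_feats: np.ndarray, female_feats: np.ndarray) -> float:
    """Attractiveness of the female by the male: the model receives the
    female's features plus the male-minus-female difference."""
    x = np.hstack([female_feats, male_feats - female_feats]).reshape(1, -1)
    return float(model_direct.predict_proba(x)[0, 1])

def reverse_score(model_reverse, male_feats: np.ndarray, female_feats: np.ndarray) -> float:
    """Attractiveness of the male by the female: the model receives the
    male's features plus the female-minus-male difference."""
    x = np.hstack([male_feats, female_feats - male_feats]).reshape(1, -1)
    return float(model_reverse.predict_proba(x)[0, 1])
```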
In some cases, the first person is a male and the second person is a female. In these cases, the reciprocal attractiveness score calculation can be based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male. The reciprocal attractiveness score calculation can be also based on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features. Another option for the weights is where the objective beauty score of the female gets preference over the first image features and the second image features. In these examples, the weights can be intertwined into the first and/or second machine learning models in such a way that preference is given to specific features.
In some cases, the first person is a female and the second person is a male. In these cases, the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
In other cases, the first person is a male and the second person is a male. In these cases, the first machine learning model and the second machine learning model are also trained for identifying same-sex attractiveness, in addition to the features and distance between features.
In other cases, the first person is a female and the second person is a female. In these cases, the first machine learning model and the second machine learning model are also trained for identifying same-sex attractiveness, in addition to the features and distance between features.
It is to be noted that the first machine learning model is trained utilizing training data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair. Each given machine learning model of the at least one first machine learning model can be trained to determine a subsystem direct attractiveness score, in which case the direct attractiveness score is determined based on the at least one subsystem direct attractiveness score. An example of a biological subsystem score is a skeleton subsystem score, which includes features related to the skeletal properties of the imaged person. Additional examples of subsystem scores are: a health/immune subsystem score, a hormone subsystem score, etc. Combining these subsystem scores can produce an overall attractiveness score, as in the sketch below.
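A non-limiting sketch of one way to combine subsystem scores follows. The subsystem names and weights are assumptions made for the example; the disclosure does not fix a particular combination rule.

```python
# Illustrative sketch: weighted average of per-subsystem attractiveness scores.
def overall_score(subsystem_scores: dict, weights: dict) -> float:
    total_weight = sum(weights[name] for name in subsystem_scores)
    return sum(score * weights[name]
               for name, score in subsystem_scores.items()) / total_weight

scores = {"skeleton": 0.72, "health_immune": 0.64, "hormone": 0.58}
weights = {"skeleton": 0.5, "health_immune": 0.3, "hormone": 0.2}
print(overall_score(scores, weights))   # 0.668
```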
Once the images and the machine learning models are obtained, system 200 determines, based on the at least one first image, one or more first image features (block 304). The first image features can be determined by system 200 by analyzing one or more images containing the first person, a video of the first person, a three-dimensional model of the first person, etc. The at least one first image in these cases can be one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
System 200 can then determine, based on the at least one second image, one or more second image features (block 306). The second image features can be determined by system 200 by analyzing one or more images containing the second person, a video of the second person, a three-dimensional model of the second person, etc. The at least one second image in these cases can be one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
It is to be noted that in some cases, system 200 obtains two or more images representing a person (the first person and/or the second person). These two or more images can be a series of images taken over time, or can be extracted from a video (for example: from a video file, from a live video feed, etc.) representing the person. The two or more images can be manipulated mathematically to generate a three-dimensional model of the person, and specifically a three-dimensional model of the person's face. In some cases, the three-dimensional model of the person, and specifically of the person's face, can be generated directly from a video representing the person (for example, a video where the person appears in one or more of its frames). The three-dimensional model can be generated, for example, by identifying facial organs such as the mouth, the nose or the eyes of the person in the two or more images, and using their locations to create the three-dimensional model. The three-dimensional model of the person, and specifically of the person's face, can be utilized by system 200 to determine the landmarks (for example: landmark AA 120-aa, landmark AB 120-ab, landmark AC 120-ac, ..., landmark AN 120-an, and landmark BA 120-ba, landmark BB 120-bb, landmark BC 120-bc, ..., landmark BN 120-bn, etc.) in three-dimensional space and to calculate the persons' features. These features are more accurate than features calculated based on landmarks from a static two-dimensional image. For example, a wrinkle in the face of the first person is modeled in the three-dimensional model, and the landmarks determined by system 200 capture the three-dimensional shape of the wrinkle, including its depth, as part of the landmark determination. The same methods of extracting features from three-dimensional models can be used by system 200, or by an external system, to train the first machine learning model and the second machine learning model based on training data that, at least in part, extracts features from three-dimensional models of the first person and/or the second person. It is to be noted that three-dimensional models readily support the creation of synthetic training data using machine learning methods, such as a Generative Adversarial Network (GAN), to generate synthetic images from a base three-dimensional model by adding one or more variations to it, thereby creating a series of synthetic variants of the base three-dimensional model that can be used for training the machine learning models.
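As a non-limiting illustration, the same eye-to-nose feature can be computed on three-dimensional landmark coordinates taken from such a reconstructed model, so that depth contributes to the feature value. The coordinate values and names below are assumptions for the example.

```python
# Illustrative sketch: the IPD-normalized feature in three dimensions.
import numpy as np

def eye_nose_feature_3d(landmarks3d: dict) -> float:
    left = np.asarray(landmarks3d["left_pupil"], dtype=float)
    right = np.asarray(landmarks3d["right_pupil"], dtype=float)
    nose = np.asarray(landmarks3d["nose_tip"], dtype=float)
    ipd = np.linalg.norm(left - right)                 # 3D inter-pupil distance
    return float(np.linalg.norm(left - nose) / ipd)    # depth (z) now contributes

landmarks3d = {"left_pupil": (120.0, 140.0, 32.0),
               "right_pupil": (180.0, 141.0, 31.5),
               "nose_tip": (150.0, 190.0, 45.0)}
feature_3d = eye_nose_feature_3d(landmarks3d)
```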
Following the image features determination, system 200 calculates: (a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model; (b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and (c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score (block 308). For example, the reciprocal attractiveness score can be calculated as an average of the direct attractiveness score and the reverse attractiveness score, or as a weighted score, such that the attractiveness of the male by the female is given priority. It is to be noted, with reference to Fig. 3, that some of the blocks can be integrated into a consolidated block or can be broken down into several blocks, and/or other blocks may be added. It is to be further noted that some of the blocks are optional. It should also be noted that whilst the flow diagram is described with reference to the system elements that realize them, this is by no means binding, and the blocks can be performed by elements other than those described herein.
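A non-limiting sketch of the final combination of block 308 follows. The weight value is an assumption for the example; a weight of 0.5 reduces to a plain average, and which score receives the larger weight depends on which of the two persons is the male and which is the female.

```python
# Illustrative sketch: reciprocal score as a weighted combination of the
# direct and reverse attractiveness scores.
def reciprocal_score(direct: float, reverse: float, w_reverse: float = 0.6) -> float:
    return (1.0 - w_reverse) * direct + w_reverse * reverse
```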
It is to be understood that the presently disclosed subject matter is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The presently disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter.
It will also be understood that the system according to the presently disclosed subject matter can be implemented, at least partly, as a suitably programmed computer. Likewise, the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method. The presently disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.


CLAIMS:
1. A system for determining attractiveness between a first person and a second person, the system comprising processing circuitry configured to:
(A) obtain:
(a) at least one first image, representative of the first person;
(b) at least one second image, representative of the second person;
(c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and
(d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person;
(B) determine, based on the at least one first image, one or more first image features;
(C) determine, based on the at least one second image, one or more second image features; and
(D) calculate:
(a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model;
(b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and
(c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
2. The system of claim 1, wherein the at least one first image is one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
3. The system of claim 1, wherein the at least one second image is one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
4. The system of claim 1, wherein one or more of the first image features are based on anatomical structure of a face appearing in the at least one first image.
5. The system of claim 1, wherein one or more of the second image features are based on anatomical structure of a face appearing in the at least one second image.
6. The system of claim 1, wherein the determination of the first image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one first image, and calculating the first image features, based on the landmarks, (b) analyzing the at least one first image using at least one machine learning model to determine the first image features, or (c) analysis of questionnaires answered by the first person.
7. The system of claim 1, wherein the determination of the second image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one second image, and calculating the second image features, based on the landmarks, (b) analyzing the at least one second image using at least one machine learning model to determine the second image features, or (c) analysis of questionnaires answered by the second person.
8. The system of claim 1, wherein the at least one first image includes at least part of a body of the first person and wherein at least one of the first image features is determined based on the at least part of the body.
9. The system of claim 1, wherein the at least one second image includes at least part of a body of the second person and wherein at least one of the second image features is determined based on the at least part of the body.
10. The system of claim 1, wherein the at least one first image includes at least part of garments worn by the first person and wherein at least one of the first image features is determined based on the at least part of the garments.
11. The system of claim 1, wherein the at least one second image includes at least part of garments worn by the second person and wherein at least one of the second image features is determined based on the at least part of the garments.
12. The system of claim 1, wherein the at least one first image includes at least part of a background behind the first person and wherein at least one of the first image features is determined based on the at least part of the background.
13. The system of claim 1, wherein the at least one second image includes at least part of a background behind the second person and wherein at least one of the second image features is determined based on the at least part of the background.
14. The system of claim 1, wherein the at least one first image includes at least part of a palm of the first person and wherein at least one of the first image features is determined based on the at least part of the palm.
15. The system of claim 1, wherein the at least one second image includes at least part of a palm of the second person and wherein at least one of the second image features is determined based on the at least part of the palm.
16. The system of claim 1, wherein the at least one first image is captured from one or more of: (a) a video recording of the first person, (b) a static two-dimensional image of the first person, and (c) a static three-dimensional image of the first person.
17. The system of claim 1, wherein the at least one second image is captured from one or more of: (a) a video recording of the second person, (b) a static two-dimensional image of the second person, and (c) a static three-dimensional image of the second person.
18. The system of claim 2, wherein the at least one static three-dimensional facial image of the first person is generated from one or more of: (a) at least one hologram of the first person, (b) one or more static two-dimensional images of the first person, and (c) one or more static three-dimensional images of the first person.
19. The system of claim 3, wherein the at least one static three-dimensional facial image of the second person is generated from one or more of: (a) at least one hologram of the second person, (b) one or more static two-dimensional images of the second person, and (c) one or more static three-dimensional images of the second person.
20. The system of claim 1, wherein the obtaining includes meta-data about the first person and wherein at least one of the first image features is determined based on the meta-data.
21. The system of claim 1, wherein the obtaining includes meta-data about the second person and wherein at least one of the second image features is determined based on the meta-data.
22. The system of claim 1, wherein the first mathematical manipulation is a difference between the first image features and the second image features.
23. The system of claim 1, wherein the second mathematical manipulation is a difference between the first image features and the second image features.
24. The system of claim 1, wherein the first mathematical manipulation is a directional distance between the first image features and the second image features.
25. The system of claim 1, wherein the second mathematical manipulation is a directional distance between the first image features and the second image features.
26. The system of claim 1, wherein the first machine learning model can be based on one or more neural network techniques.
27. The system of claim 1, wherein the second machine learning model can be based on one or more neural network techniques.
28. The system of claim 1, wherein the first person is a male and the second person is a female.
29. The system of claim 28, wherein the reciprocal attractiveness score calculation is based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
30. The system of claim 28, wherein the reciprocal attractiveness score calculation is based also on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features.
31. The system of claim 28 wherein the reciprocal attractiveness score calculation is based also on weights, wherein the objective beauty score of the female gets preference over the first image features and the second image features.
32. The system of claim 1, wherein the first person is a female and the second person is a male.
33. The system of claim 32, wherein the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
34. The system of claim 1, wherein the first person is a male and the second person is a male.
35. The system of claim 34, wherein the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
36. The system of claim 1, wherein the first person is a female and the second person is a female.
37. The system of claim 36, wherein the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
38. The system of claim 1, wherein the first machine learning model is trained utilizing training-data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
39. The system of claim 1, wherein the second machine learning model is trained utilizing training-data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
40. The system of claim 1, wherein the processing circuitry is further configured to, after obtaining the at least one first image of the first person, pre-process the at least one first image to determine a facial quality score of a face appearing in the at least one first image and wherein the calculating is performed only when the facial quality score is above a first threshold.
41. The system of claim 1, wherein the processing circuitry is further configured to, after obtaining the at least one second image of the second person, pre-process the at least one second image to determine a facial quality score of a face appearing in the at least one second image and wherein the calculating is performed only when the facial quality score is above a second threshold.
42. The system of claim 1, wherein each given machine learning model of the at least one first machine learning model is trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on the at least one sub-system direct attractiveness score.
43. The system of claim 1, wherein each given machine learning model of the at least one second machine learning model is trained to determine a sub-system reverse attractiveness score and wherein the reverse attractiveness score is determined based on the at least one sub-system reverse attractiveness score.
44. A method for determining attractiveness between a first person and a second person, the method comprising:
(A) obtaining, by a processing circuitry:
(a) at least one first image, representative of the first person;
(b) at least one second image, representative of the second person;
(c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and
(d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person;
(B) determining, by the processing circuitry, based on the at least one first image, one or more first image features;
(C) determining, by the processing circuitry, based on the at least one second image, one or more second image features; and
(D) calculating, by the processing circuitry:
(a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model;
(b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and
(c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
45. The method of claim 44, wherein the at least one first image is one or more of: (a) at least one static two-dimensional facial image of the first person, (b) at least one static three-dimensional facial image of the first person, (c) at least one static two-dimensional facial model of the first person, (d) at least one static three-dimensional facial model of the first person, (e) at least one two-dimensional static image of the first person, (f) at least one static three-dimensional image of the first person, (g) at least one moving image of the first person, (h) at least one analog video clip of the first person, and (i) at least one digital video clip of the first person.
46. The method of claim 44, wherein the at least one second image is one or more of: (a) at least one static two-dimensional facial image of the second person, (b) at least one static three-dimensional facial image of the second person, (c) at least one static two-dimensional facial model of the second person, (d) at least one static three-dimensional facial model of the second person, (e) at least one two-dimensional static image of the second person, (f) at least one static three-dimensional image of the second person, (g) at least one moving image of the second person, (h) at least one analog video clip of the second person, and (i) at least one digital video clip of the second person.
47. The method of claim 44, wherein one or more of the first image features are based on anatomical structure of a face appearing in the at least one first image.
48. The method of claim 44, wherein one or more of the second image features are based on anatomical structure of a face appearing in the at least one second image.
49. The method of claim 44, wherein the determination of the first image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one first image, and calculating the first image features, based on the landmarks, (b) analyzing the at least one first image using at least one machine learning model to determine the first image features, or (c) analysis of questionnaires answered by the first person.
50. The method of claim 44, wherein the determination of the second image features is performed utilizing one or more of the following methods: (a) determining multiple landmarks on a face appearing in the at least one second image, and calculating the second image features, based on the landmarks, (b) analyzing the at least one second image using at least one machine learning model to determine the second image features, or (c) analysis of questionnaires answered by the second person.
51. The method of claim 44, wherein the at least one first image includes at least part of a body of the first person and wherein at least one of the first image features is determined based on the at least part of the body.
52. The method of claim 44, wherein the at least one second image includes at least part of a body of the second person and wherein at least one of the second image features is determined based on the at least part of the body.
53. The method of claim 44, wherein the at least one first image includes at least part of garments worn by the first person and wherein at least one of the first image features is determined based on the at least part of the garments.
54. The method of claim 44, wherein the at least one second image includes at least part of garments worn by the second person and wherein at least one of the second image features is determined based on the at least part of the garments.
55. The method of claim 44, wherein the at least one first image includes at least part of a background behind the first person and wherein at least one of the first image features is determined based on the at least part of the background.
56. The method of claim 44, wherein the at least one second image includes at least part of a background behind the second person and wherein at least one of the second image features is determined based on the at least part of the background.
57. The method of claim 44, wherein the at least one first image includes at least part of a palm of the first person and wherein at least one of the first image features is determined based on the at least part of the palm.
58. The method of claim 44, wherein the at least one second image includes at least part of a palm of the second person and wherein at least one of the second image features is determined based on the at least part of the palm.
59. The method of claim 44, wherein the at least one first image is captured from one or more of: (a) a video recording of the first person, (b) a static two-dimensional image of the first person, and (c) a static three-dimensional image of the first person.
60. The method of claim 44, wherein the at least one second image is captured from one or more of: (a) a video recording of the second person, (b) a static two-dimensional image of the second person, and (c) a static three-dimensional image of the second person.
61. The method of claim 45, wherein the at least one static three-dimensional facial image of the first person is generated from one or more of: (a) at least one hologram of the first person, (b) one or more static two-dimensional images of the first person, and (c) one or more static three-dimensional images of the first person.
62. The method of claim 46, wherein the at least one static three-dimensional facial image of the second person is generated from one or more of: (a) at least one hologram of the second person, (b) one or more static two-dimensional images of the second person, and (c) one or more static three-dimensional images of the second person.
63. The method of claim 44, wherein the obtaining includes meta-data about the first person and wherein at least one of the first image features is determined based on the meta-data.
64. The method of claim 44, wherein the obtaining includes meta-data about the second person and wherein at least one of the second image features is determined based on the meta-data.
65. The method of claim 44, wherein the first mathematical manipulation is a difference between the first image features and the second image features.
66. The method of claim 44, wherein the second mathematical manipulation is a difference between the first image features and the second image features.
67. The method of claim 44, wherein the first mathematical manipulation is a directional distance between the first image features and the second image features.
68. The method of claim 44, wherein the second mathematical manipulation is a directional distance between the first image features and the second image features.
69. The method of claim 44, wherein the first machine learning model can be based on one or more neural network techniques.
70. The method of claim 44, wherein the second machine learning model can be based on one or more neural network techniques.
71. The method of claim 44, wherein the first person is a male and the second person is a female.
72. The method of claim 71, wherein the reciprocal attractiveness score calculation is based also on weights, wherein male attractiveness by the female gets preference over female attractiveness by the male.
73. The method of claim 71, wherein the reciprocal attractiveness score calculation is based also on weights, wherein similarity between the male and the female gets preference over the first image features and the second image features.
74. The method of claim 71, wherein the reciprocal attractiveness score calculation is based also on weights, wherein the objective beauty score of the female gets preference over the first image features and the second image features.
75. The method of claim 44, wherein the first person is a female and the second person is a male.
76. The method of claim 75, wherein the reciprocal attractiveness score calculation is based also on weights, wherein female attractiveness by the male gets preference over male attractiveness by the female.
77. The method of claim 44, wherein the first person is a male and the second person is a male.
78. The method of claim 77, wherein the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
79. The method of claim 44, wherein the first person is a female and the second person is a female.
80. The method of claim 79, wherein the first machine learning model and the second machine learning model are trained for identifying same-sex attractiveness.
81. The method of claim 44, wherein the first machine learning model is trained utilizing training-data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
82. The method of claim 44, wherein the second machine learning model is trained utilizing training-data comprising a plurality of pairs of images, wherein at least some given pairs of the pairs of images are associated with a label indicative of the attractiveness between a third person imaged in a first image of a given pair and a fourth person imaged in a second image of the given pair.
83. The method of claim 44 further comprising, after obtaining the at least one first image of the first person, pre-processing, by the processing circuitry, the at least one first image to determine a facial quality score of a face appearing in the at least one first image and wherein the calculating is performed only when the facial quality score is above a first threshold.
84. The method of claim 44 further comprising, after obtaining the at least one second image of the second person, pre-processing, by the processing circuitry, the at least one second image to determine a facial quality score of a face appearing in the at least one second image and wherein the calculating is performed only when the facial quality score is above a second threshold.
85. The method of claim 44 wherein each given machine learning model of the at least one first machine learning model is trained to determine a sub-system direct attractiveness score and wherein the direct attractiveness score is determined based on the at least one sub-system direct attractiveness score.
86. The method of claim 44 wherein each given machine learning model of the at least one second machine learning model is trained to determine a sub-system reverse attractiveness score and wherein the reverse attractiveness score is determined based on the at least one sub-system reverse attractiveness score.
87. A non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code, executable by at least one processing circuitry of a computer to perform a method comprising:
(A) obtaining, by a processing circuitry: (a) at least one first image, representative of the first person;
(b) at least one second image, representative of the second person;
(c) at least one first machine learning model, capable of receiving: (i) second image features of the at least one second image, and (ii) a first mathematical manipulation based on first image features of the at least one first image and the second image features, and determining a direct attractiveness score, the direct attractiveness score being indicative of a level of attractiveness of the second person by the first person; and
(d) at least one second machine learning model, capable of receiving: (i) the first image features, and (ii) a second mathematical manipulation based on the first image features and the second image features, and determining a reverse attractiveness score, the reverse attractiveness score being indicative of a level of attractiveness of the first person by the second person;
(B) determining, by the processing circuitry, based on the at least one first image, one or more first image features;
(C) determining, by the processing circuitry, based on the at least one second image, one or more second image features; and
(D) calculating, by the processing circuitry:
(a) a direct attractiveness score by utilizing the first image features, the second image features and the first machine learning model;
(b) a reverse attractiveness score by utilizing the first image features, the second image features and the second machine learning model; and
(c) a reciprocal attractiveness score based on the direct attractiveness score and the reverse attractiveness score.
PCT/IL2023/050944 2022-10-05 2023-09-04 Attractiveness determination system and method WO2024075109A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263378515P 2022-10-05 2022-10-05
US63/378,515 2022-10-05

Publications (1)

Publication Number Publication Date
WO2024075109A1 true WO2024075109A1 (en) 2024-04-11

Family

ID=90607725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2023/050944 WO2024075109A1 (en) 2022-10-05 2023-09-04 Attractiveness determination system and method

Country Status (1)

Country Link
WO (1) WO2024075109A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019136354A1 (en) * 2018-01-05 2019-07-11 L'oreal Machine-implemented facial health and beauty assistant
US10997703B1 (en) * 2018-04-24 2021-05-04 Igor Khalatian Methods and systems for automated attractiveness prediction
WO2021243640A1 (en) * 2020-06-04 2021-12-09 The Procter & Gamble Company Oral care based digital imaging systems and methods for determining perceived attractiveness of facial image portion
US20220079510A1 (en) * 2020-09-11 2022-03-17 University Of Iowa Research Foundation Methods And Apparatus For Machine Learning To Analyze Musculo-Skeletal Rehabilitation From Images
US20220102010A1 (en) * 2020-09-25 2022-03-31 Koninklijke Philips N.V. Systems and methods for modelling a human subject

Similar Documents

Publication Publication Date Title
CN108701216B (en) Face recognition method and device and intelligent terminal
US10667697B2 (en) Identification of posture-related syncope using head-mounted sensors
Zhang et al. Facial expression analysis under partial occlusion: A survey
Pampouchidou et al. Depression assessment by fusing high and low level features from audio, video, and text
Corneanu et al. Survey on rgb, 3d, thermal, and multimodal approaches for facial expression recognition: History, trends, and affect-related applications
JP7229174B2 (en) Person identification system and method
Dantcheva et al. What else does your biometric data reveal? A survey on soft biometrics
Laurentini et al. Computer analysis of face beauty: A survey
Fu et al. Learning race from face: A survey
CN105005777B (en) Audio and video recommendation method and system based on human face
Rafique et al. Age and gender prediction using deep convolutional neural networks
Yadav et al. Bacteria foraging fusion for face recognition across age progression
Shu et al. Age progression: Current technologies and applications
CN111627117A (en) Method and device for adjusting special effect of portrait display, electronic equipment and storage medium
CN111902821A (en) Detecting motion to prevent recognition
KR20150064977A (en) Video analysis and visualization system based on face information
Štěpánek et al. Evaluation of facial attractiveness for purposes of plastic surgery using machine-learning methods and image analysis
Dantcheva et al. Expression recognition for severely demented patients in music reminiscence-therapy
Dadiz et al. Detecting depression in videos using uniformed local binary pattern on facial features
WO2024075109A1 (en) Attractiveness determination system and method
CN116129473A (en) Identity-guide-based combined learning clothing changing pedestrian re-identification method and system
Dinculescu et al. Novel approach to face expression analysis in determining emotional valence and intensity with benefit for human space flight studies
Chinchanikar Facial expression recognition using deep learning: A review
Ramu et al. A GoogleNet architecture based Facial emotions recognition using EEG data for future applications
Stathopoulou Visual affect recognition