US20210183124A1 - Method and system to provide a computer-modified visualization of the desired face of a person - Google Patents

Method and system to provide a computer-modified visualization of the desired face of a person Download PDF

Info

Publication number
US20210183124A1
US20210183124A1 (application US17/117,805)
Authority
US
United States
Prior art keywords
face
person
visual
data set
modifications
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/117,805
Inventor
Heike BENDITTE-KLEPETKO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quantiface GmbH
Original Assignee
Quantiface GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quantiface GmbH filed Critical Quantiface GmbH
Priority to US17/331,121 (patent US11227424B2)
Publication of US20210183124A1
Assigned to QuantiFace GmbH (assignment of assignors interest; see document for details). Assignors: BENDITTE-KLEPETKO, Heike
Status: Abandoned

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/167 Personality evaluation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06K9/00281
    • G06K9/6256
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10148 Varying focus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present invention relates to a method and system to provide a computer-modified visualization of a desired face of a person who considers undergoing a minimally invasive and/or invasive cosmetic and/or medical treatment to improve the person's appearance.
  • the face is one of the main areas of the body relevant for this appearance.
  • a person interested in such a change typically makes an appointment with a beautician, dermatologist, physician, specialist certified to do facial modifications, or plastic surgeon to get information about possible treatments.
  • the specialist mentioned above inspects the different regions of the face and based on their personal knowledge and skills proposes medical and/or cosmetic treatments that could provide the desired change of the appearance.
  • the problem of this way of working is that it is difficult for the person to understand the possible visual effects and/or effects on the first impression (i.e., how the face is perceived by others) of the treatments the specialist proposes.
  • LIAO YANBING ET AL: “Deep Rank Learning for Facial Attractiveness”, 2017 4th IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), IEEE, 26 Nov. 2017 (2017-11-26), pages 565-570, XP033475316 discloses a Deep Convolutional Neural Network and artificial intelligence for fully automatic facial beauty assessment and ranking.
  • MESSER U ET AL: “Predicting Social Perception from Faces: A Deep Learning Approach”, ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, N.Y. 14853, 29 Jun. 2019 (2019-06-29), XP081385771 discloses a Deep Convolutional Neural Network and artificial intelligence to predict human perceivers' impressions of the characteristics “warmth” and “competence” based on a visual representation of a face of a person.
  • the artificial intelligence extracts features or face property data from regions of face images and was trained with a 10K Adults Face database where human raters rated the characteristics “warmth” and “competence” of these faces to generate face characteristics. Heat maps were used to identify regions of faces relevant for human perceivers.
  • WO 2015/017687A2 discloses a method and system that enables a user to take a picture of their face with a device and send it to a computer server.
  • the server uses facial recognition software to identify anatomical features of the face, and the user selects the anatomical area of his face he wants to improve.
  • the server determines the level of “severity” of a defect in the selected anatomical area. After that, the person has to select a medical or cosmetic treatment from a list of possible treatments and the server determines the likely treatment outcome based on data of clinical studies for this selected medical or cosmetic treatment.
  • the server generates a modified picture of the user's face with the likely outcome of the selected medical or cosmetic treatment for the selected anatomical area, which picture is displayed at the user's device next to the original picture taken.
  • the disclosed method has the disadvantage that a user is left alone both with the selection of facial regions and with the selection of possible medical or cosmetic treatments, which might lead to an overall less attractive appearance of the user after one or more treatments. There is a need for technical means to solve these technical selection problems.
  • This inventive method, system and/or computer program uses a completely different concept and technique to enable a user to make an informed decision on how to change the facial appearance, expression and/or morphology.
  • the user may be for example, a beautician, dermatologist, physician, specialist certified to do facial modifications or plastic surgeon, as well as a person interested in a change of their own appearance.
  • the invention is based on the finding that when aiming to change a person's appearance it is only a secondary goal to e.g. selectively reduce frown lines or to increase the volume of the lips, since persons categorize the appearance of others in the course of forming a first impression in a more complex way and as a whole.
  • Characteristics attributed to a person when making a first impression are, for example, attractiveness, healthiness, youthfulness, tiredness, sadness, friendliness, dominance, competence, likability or trustworthiness just to name some of these.
  • the new concept and technique enables a user to select at least one characteristic she/he wants to change.
  • a change in a characteristic attributed by others during forming the first impression can be achieved in either direction, i.e. by increasing a characteristic which is perceived as positive or by decreasing a characteristic which is perceived as negative. For example, a person might wish to appear less tired. In another example, a person might wish to appear more friendly.
  • a new data set of visuals of faces is compiled. This includes images and videos of people's faces as well as computer-generated images and 3D models of artificial faces.
  • this data set is normalized via transformations for face alignment, cropping and resizing. Facial landmark detection of face properties is performed first, followed by more detailed analyses such as skin texture analysis, in order to properly align the face data.
  • the data set of visuals is generated and improved with human assessments and/or modifications of such faces to train a deep learning based application and/or artificial intelligence and/or software. The quality of this application is further refined in an iterative process using data generated by previous versions of itself. Reference is made to above cited scientific articles as state of the art documents that describe the structure and model architecture of such an artificial intelligence.
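The landmark-based alignment step mentioned above can be illustrated with a minimal sketch: given the two detected eye landmarks, compute the rotation and scale that would map them onto canonical positions in a normalized crop. The canonical coordinates and crop size below are illustrative assumptions, not values from the patent.

```python
import math

# Illustrative canonical eye positions in a 256x256 normalized crop
# (assumed values, not taken from the patent).
CANON_LEFT_EYE = (96.0, 112.0)
CANON_RIGHT_EYE = (160.0, 112.0)

def alignment_params(left_eye, right_eye):
    """Return (angle_deg, scale) that would map the detected eye landmarks
    onto the canonical positions; the core of landmark-based face alignment."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle_deg = math.degrees(math.atan2(dy, dx))   # head roll to undo
    canon_dist = CANON_RIGHT_EYE[0] - CANON_LEFT_EYE[0]
    scale = canon_dist / math.hypot(dx, dy)        # resize factor
    return angle_deg, scale
```

For eyes detected at (100, 120) and (164, 120) this yields a rotation of 0 degrees and a scale of 1.0; a real pipeline would then apply the resulting affine transform to the image before cropping and resizing.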
  • a new technique is used to generate a data set of modifications that comprise complex information about all modifications of the face needed to change the user's characteristics in the desired direction.
  • the modifications might affect the facial appearance, expression and/or morphology.
  • Such modifications of the face may comprise, for example, height of the eyebrows, fullness of the cheeks, width and height of the chin, volume of the lips, volume of the zygoma, depth of the marionette lines, straightness of the jaw line, depth of the glabellar lines, periorbital hollowness and skin tightness.
  • these changes can be obtained by minimally invasive and/or invasive cosmetic and/or medical treatments, as has been proven by clinical studies or treatments performed in the past. These treatments include application of dermal fillers, Botulinum toxin, threads, implants, surgery, laser treatments, autologous fat transplantation, and skin resurfacing treatments amongst others.
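As a sketch, such a data set of modifications can be represented as structured records naming a facial feature, a direction of change, and a magnitude, which can then be mapped to candidate treatments. The feature names, amounts, and treatment mapping below are illustrative assumptions, not the patent's actual data.

```python
# Hypothetical encoding of a "data set of modifications": each entry names a
# facial feature, the direction of change, and a normalized magnitude.
modifications = [
    {"feature": "eyebrow_height", "direction": "lower",    "amount": 0.30},
    {"feature": "cheek_fullness", "direction": "decrease", "amount": 0.20},
    {"feature": "chin_width",     "direction": "decrease", "amount": 0.25},
]

# Illustrative mapping from facial features to treatments (example values
# only, not the table of FIG. 12).
TREATMENTS = {
    "eyebrow_height": "botulinum toxin",
    "cheek_fullness": "dermal filler reduction / surgery",
    "chin_width":     "surgery or filler contouring",
}

def proposed_treatments(mods):
    """Deduplicated, sorted list of treatments covering the requested changes."""
    return sorted({TREATMENTS[m["feature"]] for m in mods if m["feature"] in TREATMENTS})
```

A proposal for the necessary treatments, as described above, then falls out of the mapping: `proposed_treatments(modifications)` lists one candidate treatment per requested feature change.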
  • visual data of the face of the person e.g. photos or videos
  • the data set of visuals of faces is transferred to a server.
  • the additional input of age and/or gender and/or ethnicity data of the person is possible as well.
  • a deep learning based application/artificial intelligence processed by the server is used to modify the visual data of the user's face according to the user's selected change of one or more characteristics.
  • the artificial intelligence can also be used to optionally describe and/or rate the characteristics based on the original visual data of the person's face.
  • the computer-modified visualization of the desired face can be shown on a display next to the original visual data of the user to show the possible change.
  • This change may be obtained by using one or a combination of different invasive and/or minimally invasive cosmetic and/or medical treatments that in sum modify the user's face towards the desired change of the selected characteristic(s).
  • a proposal for the necessary treatments to achieve the desired face can be given.
  • This new method provides the major advantage that it visualizes the effect of changes of the facial appearance, expression and/or morphology on the characteristics attributed to a person when making a first impression.
  • the user has the option to choose at least one specific characteristic he/she wishes to change (e.g. reduce or improve). Subsequently, the needed changes of the facial appearance, expression and/or morphology, which are necessary to reach the desired effect (i.e. the desired face), are visualized.
  • One of the requirements for applying this principle is the ability to process large data sets in a novel, fast and efficient way by using an artificial intelligence.
  • the invention solves the technical problem of the prior art of processing data in order to analyse, visualize and predict a person's face and its perception by others according to characteristics attributed to the person when making a first impression, by a combination of the steps listed above.
  • FIG. 1 shows a system to display a computer-modified visualization of a desired face of a person.
  • FIG. 3 shows in which regions the face of the person is divided for further analyses.
  • FIG. 4 shows a description and rating of the characteristics based on the original visual data of the person's face.
  • FIG. 5 shows how characteristics may be selected by the user.
  • FIG. 6 shows the face of the person with a data set of modifications overlaid.
  • FIG. 7 shows a comparison of the original picture of the face of the person with the computer-modified visualization of the desired face of the person.
  • FIG. 8 shows a recommendation which treatments to use.
  • FIG. 9 shows a line drawing of a face with regions of the face marked to be treated to increase the characteristic “dominant” attributed to a person when making a first impression.
  • FIG. 10 shows a picture of a face with regions of the face marked to be treated to increase the characteristic “dominant” attributed to the person when making a first impression.
  • FIG. 11 shows a line drawing of a face with regions of the face marked to be treated to increase the characteristic “competence” attributed to a person when making a first impression.
  • FIG. 12 shows a table with examples of invasive and/or minimally invasive cosmetic and medical treatments to achieve changes of desired characteristics of a person's face by actions in particular regions of the face of the person.
  • FIG. 13 shows a visual of a face of a person before and after the performance of the recommended treatments.
  • FIG. 1 shows a system 1 to display a computer-modified visualization or visual of a desired face of a person 2 with a mobile device 3 .
  • the mobile device 3 runs software, in particular an App, for person 2, who considers undergoing an invasive and/or minimally invasive cosmetic and/or medical treatment, or for a specialist performing such treatments who would like to take a data-driven decision on which treatments to choose to obtain the desired changes of the face.
  • a camera of the mobile device 3 is used to obtain the visual of the face of person 2 as standardized visual data 4 shown in FIG. 2 .
  • Visual data 4 may represent a photo or a film of the face of person 2 .
  • Standardization of the visual data 4 may be split into instructions for person 2 and the photographer on how to take a standardized photo or film, and into post-processing of the photo or film taken.
  • the instructions for person 2 and the photographer may include one or more of the following steps: ask person 2 to take off e.g. earrings or a nose ring; ask person 2 not to smile; ask person 2 to make a neutral facial expression; ask person 2 to keep head hair out of his/her face; ask person 2 to look straight into the camera; ensure good general lighting conditions; use a neutral background.
  • the post-processing of the photo or film may include one or more of the following steps: cut out the background behind the face from the visual data 4; cut out the ears of the person's face to reduce the visual influence of e.g. earrings; cut out clothes and other wardrobe that might interfere with the face; cut out the head hair of the person 2.
  • System 1 comprises a remote server 5 connected via a broadband network 6 or other remote connection technology with the mobile device 3 .
  • the server 5 processes a deep learning based application 7 and as such forms an artificial intelligence that analyses visual data 4 of the face of person 2 to rate one or more characteristics attributed to a person 2 when making a first impression.
  • face characteristics or traits may for example be attractiveness, healthiness, youthfulness, tiredness, sadness, friendliness, dominance, competence, likability or trustworthiness.
  • the deep learning based application 7 is a computer program comprising instructions which, when the program is executed by remote server 5, cause remote server 5 to carry out the following steps to provide a computer-modified visualization 13 of a desired face of person 2.
  • a representative number of such visuals of faces stored as visual data in database 8 are shown on a display to a representative number of humans to manually rate these visuals of faces about their characteristics.
  • the humans may rate them with scores (e.g. from 0 to 7) for different characteristics or traits.
  • These human ratings are stored in database 8 linked to the visual data of the faces and provide the basis for the deep learning based application 7 to rate characteristics attributed to a person 2 when making a first impression.
  • In a second step, further face property data of these visuals of faces are extracted by conventional computer vision algorithms, for example landmark detection, wrinkle detection, skin texture analysis, and analysis of facial proportions.
  • These face property data of visuals of faces are used together with the data set generated and stored in database 8 in the first step for training of the deep learning based application 7 to enable the artificial intelligence to provide an automated rating of the characteristics of the visuals of faces.
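The training described above ends with a model that maps face property data to characteristic scores. The patent uses a deep learning based application for this; as a rough stand-in only, the sketch below predicts a score by averaging the human ratings of the most similar faces in the training set (a k-nearest-neighbour toy, not the actual network).

```python
import math

def predict_score(train, query, k=2):
    """Toy stand-in for the trained rating model.

    train: list of (property_vector, human_score) pairs from database 8;
    query: property vector of a new face. Predicts the characteristic score
    as the mean human rating of the k nearest training faces."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return sum(score for _, score in nearest) / k
```

With ratings on the 0 to 7 scale mentioned above, the predicted score for a new face stays on that same scale, since it is an average of human ratings.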
  • any visual of a face may be provided to the deep learning based application 7 , which will, based on the data set stored in database 8 , provide an automated rating of the characteristics of the visuals of the face.
  • FIG. 4 shows the result of such a description and automated rating of the characteristics or traits of a person based on the visuals of the person's face displayed on mobile device 3 .
  • Server 5 furthermore comprises a database 9 with data generated in a third step based on clinical studies, case studies or other publicly available information, which data comprise information about visual modifications of a face achievable by invasive and/or minimally invasive cosmetic and medical treatments.
  • This database 9 for instance comprises information on the effectiveness of a treatment, e.g. improving a wrinkle score from 3.1 to 1.5 within 3 weeks.
  • FIG. 12 shows a table, which includes the information of database 9 with examples of invasive and/or minimally invasive cosmetic and medical treatments to achieve actions or improvements in particular regions of the face of person 2 .
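A minimal sketch of how database 9 could store such effectiveness data follows. The field names and the second entry are illustrative assumptions; only the 3.1 to 1.5 wrinkle-score example comes from the text above.

```python
# Hypothetical excerpt of database 9: per treatment, the measurable change
# reported by clinical studies (field names and values are illustrative).
EFFECTIVENESS = {
    "dermal filler (marionette lines)":  {"score_before": 3.1, "score_after": 1.5, "weeks": 3},
    "botulinum toxin (glabellar lines)": {"score_before": 2.8, "score_after": 1.9, "weeks": 2},
}

def improvement(treatment):
    """Absolute wrinkle-score improvement achieved by a treatment."""
    e = EFFECTIVENESS[treatment]
    return round(e["score_before"] - e["score_after"], 2)
```

A lookup such as `improvement("dermal filler (marionette lines)")` gives the score change the artificial intelligence could weigh when assembling the data set of modifications.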
  • system 1 is ready to be used to provide computer-modified visuals of a face of a person as described in the following steps of the method.
  • In a fourth step, the camera of mobile device 3 is used to obtain the standardized visual data 4 of the face of person 2 as described above and shown in FIG. 2.
  • these visual data 4 are sent to server 5, and deep learning based application 7 processes an automated rating of the characteristics or traits of person 2 and provides the rating shown in FIG. 4 to support person 2 in deciding which characteristic or trait he/she might want to change.
  • Alternatively, person 2 may make his/her decision which characteristic of his/her face to improve without the automated rating shown in FIG. 4.
  • FIG. 11 shows a line drawing of a face with regions of the face marked to be treated to increase the characteristic “competent” attributed to a person when making a first impression.
  • the table of FIG. 12 shows the actions needed to increase the characteristic “competent” attributed to person 2 when making a first impression: make the chin less wide and the cheeks less full and lower the eyebrows.
  • FIG. 12 furthermore shows the cosmetic and/or medical treatments, which can be performed by a beautician, dermatologist, physician, specialist certified to do facial modifications, or plastic surgeon to realize these actions.
  • This data set of modifications 12 technically describes what modifications are needed to modify visual data 4 of the face of person 2 to show the possible result of one or more invasive and/or minimally invasive cosmetic and/or medical treatments to improve the characteristic “competent” of person 2 in a computer-modified visual 13 of the face of a person 2 as shown in FIG. 6 .
  • FIG. 6 shows the face of person 2 with an overlay of arrows that indicate which regions of the face need to be treated to achieve the desired result of improved “competence”. For instance, the eyebrow position needs to be lifted and the volume of the jawline needs to be increased.
  • the arrows shown are only symbolic as data set of modifications 12 may comprise further information about the face and processing of the visual data 4 needed.
  • the visual data 4 of the face of the person 2 are modified based on the data set of modifications 12 and a computer-modified visual 13 of the face of the person 2 with the modification of the face achievable by the at least one proposed cosmetic and/or medical treatment is generated.
  • the data set of modifications 12 may include information to soft focus the area of lower eyelid and zygoma in the visual data 4 of the face of person 2 .
  • Deep learning based application 7 or other conventional image processing methods like image warping therefore processes the photo or film of person 2 to provide the computer-modified visual 13 of the desired face of person 2 .
  • the artificial intelligence is used to automatically identify an area with wrinkles in the visual of the person's face based on technologies known to a person skilled in the art. Such areas may for instance be the area of the lower eyelid and zygoma.
  • the artificial intelligence may then be used to automatically soft focus these identified areas in case for instance the characteristic “attractiveness” of person 2 should be improved and the data set of modifications therefore includes such information to modify the visual of the face of the person to generate the computer-modified visual of the desired face of the person.
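The soft-focus modification described above can be sketched as blending each pixel in the marked region with its local neighbourhood mean, which visually smooths wrinkles. A real implementation would apply a proper Gaussian blur to the photograph; the grid-of-grayscale representation here is only for illustration.

```python
# Minimal soft-focus sketch over a 2D list of grayscale values.
def soft_focus(img, region, strength=0.5):
    """Blend each pixel in `region` (a set of (row, col) tuples) with the
    mean of its 3x3 neighbourhood; `strength` controls the blend."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r, c in region:
        neigh = [img[rr][cc]
                 for rr in range(max(0, r - 1), min(h, r + 2))
                 for cc in range(max(0, c - 1), min(w, c + 2))]
        local_mean = sum(neigh) / len(neigh)
        out[r][c] = (1 - strength) * img[r][c] + strength * local_mean
    return out
```

Pixels outside the marked region are left untouched, mirroring how only the areas identified in the data set of modifications are processed.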
  • the artificial intelligence may also add wrinkles to the visual of, for instance, a young person's face who wants to improve the characteristic “competence”, in areas where older people typically have wrinkles.
  • the computer-modified visual 13 of the desired face of the person 2 is displayed with mobile device 3 as shown in FIG. 7 .
  • the first preferred mode is to use a toggle mode to alternately show the standardized visual data 4 taken of the face of the person 2 and the computer-modified visual 13 of the desired face of the person 2.
  • Person 2 just has to touch the display of mobile device 3 in a button area to toggle between the two visuals as fast as person 2 wants to see them to better see the differences and modifications.
  • the second preferred mode is to use a marking mode to mark the areas of the face of the person 2 modified by the data set of modifications 12 as an overlay to the displayed computer-modified visual 13 of the desired face of the person 2 as shown in FIG. 10 . Marking may be done e.g. with lines or broken lines overlaid over the computer-modified visual 13 of the desired face of the person 2 . Both preferred modes enable person 2 to easily see those areas of the face that would need to be treated with invasive and/or minimally invasive cosmetic and/or medical treatments.
  • FIG. 13 shows the face of another person 2 before and after the recommended treatment was performed.
  • the left photo shows person 2, who was interested in changing the appearance of his face by increasing the characteristics “dominance” and “competence”.
  • the inventive method and system provided and displayed a computer-modified visual 13 of the desired face similar to the right photo and provided a recommendation to use the treatment of lipofilling in particular identified regions of the face. After the recommended treatment was performed, the right photo of FIG. 13 was taken and it turned out that the computer-modified visual 13 of the desired face was nearly identical to the actual photo taken after the treatment. This technical solution helps substantially to make informed decisions about cosmetic and/or medical treatments.
  • It is furthermore possible to display all invasive and/or minimally invasive cosmetic and/or medical treatments stored in database 9, to select some of these treatments upfront, and to send this selection together with the visual data 4 of person 2 and the characteristic input by person 2 to change his/her facial appearance to server 5.
  • the artificial intelligence of server 5 then only uses those selected invasive and/or minimally invasive cosmetic and/or medical treatments during its search for the best achievable match for the data set of modifications 12 in the database 9 of visual modifications achievable by the selected treatments. This enables person 2 to decide upfront which of the invasive and/or minimally invasive cosmetic and/or medical treatments are acceptable for changing his/her facial appearance and helps to streamline the processing of server 5.
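The upfront filtering described above can be sketched as constraining the search space to accepted treatments before picking the best match. The treatment names and effect scores below are invented for illustration and are not the contents of database 9.

```python
# Illustrative candidate treatments with a single scalar "effect" score
# standing in for how well each one achieves the desired modification.
CANDIDATES = [
    {"treatment": "surgery",         "effect": 0.9},
    {"treatment": "dermal filler",   "effect": 0.6},
    {"treatment": "botulinum toxin", "effect": 0.5},
]

def best_match(accepted):
    """Best candidate among the treatments the person accepted upfront,
    or None if nothing acceptable remains."""
    allowed = [c for c in CANDIDATES if c["treatment"] in accepted]
    return max(allowed, key=lambda c: c["effect"])["treatment"] if allowed else None
```

If the person rules out surgery, the search simply never considers it: `best_match({"dermal filler", "botulinum toxin"})` picks the strongest remaining option.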
  • Deep learning based application 7 is optionally built to evaluate the age period, ethnicity and gender of person 2 from the visual data 4. This helps to reduce the data needed to be input when using the App.
  • System 1 furthermore makes it possible to show those invasive and/or minimally invasive cosmetic and/or medical treatments 16 on the display of mobile device 3 that have been selected by the artificial intelligence to achieve the desired face, as shown in FIG. 8.
  • Person 2 may decide to use a filter to select only some of the shown invasive and/or minimally invasive cosmetic and/or medical treatments, if for instance person 2 is not willing to undergo a surgical intervention.
  • This selection of person 2 is sent to server 5 , which calculates the necessary data set of modifications 12 , which are achievable with the reduced number of invasive and/or minimally invasive cosmetic and/or medical treatments to achieve the desired changes in the characteristic (e.g. attractiveness).
  • the person that uses the App is not necessarily the person that wants to change his/her appearance, but may be a person that wants to enable an informed decision, for example a beautician, dermatologist, physician, specialist certified to do facial modifications or plastic surgeon.
  • the computer program, realized as an App on a mobile phone, may be programmed to ask the user questions such as the following: gender, age, profession, level of education, sexual orientation, religion and political orientation. These questions may be asked in the fifth step of the above-explained method about the user in one embodiment, and about the target group the user is interested in, in a second embodiment. This information may be used in the sixth step of the above-explained method, when generating the data set of modifications.
  • the result of an analysis of the information about the user and/or the target group the user is interested in, for which the user wants to be recognized as e.g. more “dominant”, may be used as further input on how the characteristics of the user need to be modified. This has the advantage that the modifications closely fit the personal needs and wishes of the user.

Abstract

A data set of visuals of faces and extracted face property data are generated and linked to face characteristics data provided by a representative set of humans who rate the visuals of these faces with respect to their face characteristics. Further face property data of these visuals of faces are extracted and, together with the generated data set, used to train an artificial intelligence. The artificial intelligence is used to analyse a visual of the person's face and to generate a data set of modifications based on one or more selected desired characteristics and on modifications achievable by at least one cosmetic and/or medical treatment. The visual of the face of the person is modified based on the data set of modifications, and the computer-modified visual of the desired face of the person, with the modification of the face achievable by the at least one proposed cosmetic and/or medical treatment, is generated and displayed.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims the benefit of European Patent Application Serial No. 19215134.8, filed 11 Dec. 2019, which application is incorporated herein by reference. To the extent appropriate, a claim of priority is made to the above-disclosed application.
  • BACKGROUND
  • The present invention relates to a method and system to provide a computer-modified visualization of a desired face of a person who considers undergoing a minimally invasive and/or invasive cosmetic and/or medical treatment to improve the person's appearance. There is a general wish to optimize one's own appearance, and the face is one of the main areas of the body relevant for this appearance. Many different treatments are known to change the facial appearance, expression and/or morphology, e.g. reduction of wrinkles in the skin of the face or modification of the cheekbones. A person interested in such a change typically makes an appointment with a beautician, dermatologist, physician, specialist certified to do facial modifications, or plastic surgeon to get information about possible treatments. In a first step, the specialist inspects the different regions of the face and, based on personal knowledge and skills, proposes medical and/or cosmetic treatments that could provide the desired change of the appearance. The problem with this way of working is that it is difficult for the person to understand the possible visual effects and/or effects on the first impression (i.e., how the face is perceived by others) of the treatments the specialist proposes.
  • LIAO YANBING ET AL: "Deep Rank Learning for Facial Attractiveness", 2017 4TH IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), IEEE, 26 Nov. 2017 (2017-11-26), pages 565-570, XP033475316 discloses a deep convolutional neural network and artificial intelligence for fully automatic facial beauty assessment and ranking. A "HOTorNOT" database of 1,885 female face images collected from a popular social/dating website was used to train the artificial intelligence to predict the facial attractiveness of female faces and rank them.
  • MESSER U ET AL: "Predicting Social Perception from Faces: A Deep Learning Approach", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, N.Y. 14853, 29 Jun. 2019 (2019-06-29), XP081385771 discloses a deep convolutional neural network and artificial intelligence to predict a human perceiver's impression of the characteristics "warmth" and "competence" based on a visual representation of a person's face. The artificial intelligence extracts features or face property data from regions of face images and was trained with a 10K Adults Face database in which human raters rated the characteristics "warmth" and "competence" of these faces to generate face characteristics. Heat maps were used to identify the regions of faces relevant for human perceivers.
  • WO 2015/017687A2 discloses a method and system that enables a user to take a picture of their face with a device and send it to a computer server. The server uses facial recognition software to identify anatomical features of the face, and the user selects the anatomical area of his face he wants to improve. In a next step, the server determines the level of "severity" of a defect in the selected anatomical area. After that, the person has to select a medical or cosmetic treatment from a list of possible treatments, and the server determines the likely treatment outcome based on data from clinical studies for this selected medical or cosmetic treatment. Finally, the server generates a modified picture of the user's face with the likely outcome of the selected medical or cosmetic treatment for the selected anatomical area, which picture is displayed on the user's device next to the original picture taken. The disclosed method has the disadvantage that the user is left alone both with the selection of facial regions and with the selection of possible medical or cosmetic treatments, which might lead to an overall less attractive appearance of the user after one or more treatments. There is a need for technical means to solve these technical selection problems.
  • SUMMARY
  • These problems are solved with a method that comprises the following steps:
    • Generate a data set of visuals of faces and extracted face property data thereof linked to face characteristics data provided by a representative set of humans who rate the visuals of these faces with respect to their face characteristics, and store the data set in a database;
    • Extract further face property data of these visuals of faces and use these extracted face property data together with the generated data set for training of an artificial intelligence to enable the artificial intelligence to provide an automated rating of the characteristics of the visuals of faces;
    • Generate a data set of visual modifications of a face achievable by cosmetic and/or medical treatments and store the data set in a database;
    • Take a standardized visual of the face of the person;
    • Input at least one desired characteristic of the face of the person to be changed;
    • Use the artificial intelligence to analyse the visual of the person's face and to generate a data set of modifications based on the selected desired characteristic(s) and modifications achievable by at least one cosmetic and/or medical treatment;
    • Modify the visual of the face of the person based on the data set of modifications and generate the computer-modified visual of the desired face of the person with the modification of the face achievable by at least one cosmetic and/or medical treatment;
    • Display the computer-modified visual of the desired face of the person.
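The sequence of steps above can be pictured, in drastically simplified form, as follows. This is a minimal sketch, not the patent's implementation: all function names, field names, the toy trait model and the treatment entries are illustrative assumptions.

```python
# Minimal sketch of the claimed pipeline: rate first-impression
# characteristics from extracted face property data, then derive a
# data set of modifications from the desired characteristic and the
# achievable treatments. All names and values are hypothetical.

def rate_characteristics(face_properties, model):
    """Automated rating (e.g. scores 0-7) of first-impression traits."""
    return {trait: fn(face_properties) for trait, fn in model.items()}

def generate_modifications(desired_trait, achievable):
    """Select achievable treatment actions that move the desired
    characteristic in the wished direction ('data set of modifications')."""
    return [a for a in achievable if a["trait"] == desired_trait]

# Toy stand-ins for the trained artificial intelligence and for the
# database of treatment-achievable modifications.
model = {
    "competence": lambda p: 7 - p["chin_width"],  # narrower chin rates higher
    "dominance": lambda p: p["jaw_volume"],
}
achievable = [
    {"trait": "competence", "action": "reduce chin width", "treatment": "dermal filler"},
    {"trait": "dominance", "action": "augment jawline", "treatment": "dermal filler"},
]

ratings = rate_characteristics({"chin_width": 3, "jaw_volume": 5}, model)
plan = generate_modifications("competence", achievable)
```

The same two-stage shape (rate, then plan) recurs in the detailed description: the rating corresponds to the trained artificial intelligence, the planning to the best match against the treatment database.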
  • This inventive method, system and/or computer program uses a completely different concept and technique to enable a user to make an informed decision on how to change the facial appearance, expression and/or morphology. The user may be, for example, a beautician, dermatologist, physician, specialist certified to do facial modifications or plastic surgeon, as well as a person interested in a change of their own appearance.
  • The invention is based on the finding that when aiming to change a person's appearance, it is only a secondary goal to e.g. selectively reduce frown lines or increase the volume of the lips, since persons categorize the appearance of others in the course of forming a first impression in a more complex way and as a whole. Characteristics attributed to a person when making a first impression are, for example, attractiveness, healthiness, youthfulness, tiredness, sadness, friendliness, dominance, competence, likability or trustworthiness, to name just some of these. The new concept and technique enables a user to select at least one characteristic she/he wants to change. A change in a characteristic attributed by others when forming the first impression can be achieved in either direction, i.e. by increasing a characteristic that is perceived as positive or by decreasing a characteristic that is perceived as negative. For example, a person might wish to appear less tired. In another example, a person might wish to appear more friendly. In a further example, a person might wish to appear more attractive.
  • In a first step of the inventive method, a new data set of visuals of faces is compiled. This includes images and videos of people's faces as well as computer-generated images and 3D models of artificial faces. In a pre-processing step for the deep learning algorithm, this data set is normalized via transformations for face alignment, cropping and resizing. Facial landmark detection of face properties is performed first, followed by more detailed analyses such as skin texture analysis, in order to properly align the face data. The data set of visuals is generated and improved with human assessments and/or modifications of such faces to train a deep learning based application and/or artificial intelligence and/or software. The quality of this application is further refined in an iterative process using data generated by previous versions of itself. Reference is made to the above-cited scientific articles as state of the art documents that describe the structure and model architecture of such an artificial intelligence.
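As an illustration of the face-alignment part of this pre-processing, a common first step is to rotate the image so that the line through the two eyes becomes horizontal. A minimal sketch, assuming 2D eye landmark coordinates are already available (the patent does not prescribe a concrete alignment algorithm):

```python
import math

def alignment_angle(left_eye, right_eye):
    """Rotation angle (degrees) that would bring the eye line horizontal,
    as used in typical face-alignment pre-processing."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# A face whose eyes are level needs no rotation before cropping/resizing.
angle = alignment_angle((30, 40), (70, 40))
```

A face image would then be rotated by the negative of this angle before cropping and resizing, so that all faces in the data set share a common orientation.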
  • According to deep learning principles, a new technique is used to generate a data set of modifications that comprise complex information about all modifications of the face needed to change the user's characteristics in the desired direction. The modifications might affect the facial appearance, expression and/or morphology. Such modifications of the face may comprise, for example, height of the eyebrows, fullness of the cheeks, width and height of the chin, volume of the lips, volume of the zygoma, depth of the marionette lines, straightness of the jaw line, depth of the glabellar lines, periorbital hollowness and skin tightness. Technically, these changes can be obtained by minimally invasive and/or invasive cosmetic and/or medical treatments, as has been proven by clinical studies or treatments performed in the past. These treatments include application of dermal fillers, Botulinum toxin, threads, implants, surgery, laser treatments, autologous fat transplantation, and skin resurfacing treatments amongst others.
  • In a further step, visual data of the face of the person (e.g. photos or videos) are obtained. Together with a selection of at least one characteristic to be changed, these visual data are transferred to a server. The additional input of age and/or gender and/or ethnicity data of the person is possible as well.
  • In a final step of the method, a deep learning based application/artificial intelligence processed by the server is used to modify the visual data of the user's face according to the user's selected change of one or more characteristics. The artificial intelligence can also be used to optionally describe and/or rate the characteristics based on the original visual data of the person's face.
  • The computer-modified visualization of the desired face can be shown on a display next to the original visual data of the user to show the possible change. This change may be obtained by using one or a combination of different invasive and/or minimally invasive cosmetic and/or medical treatments that in sum modify the user's face towards the desired change of the selected characteristic(s). Optionally, a proposal for the necessary treatments to achieve the desired face can be given.
  • This new method provides the major advantage that it visualizes the effect of changes of the facial appearance, expression and/or morphology on the characteristics attributed to a person when making a first impression. The user has the option to choose at least one specific characteristic he/she wishes to change (e.g. reduce or improve). Subsequently, the needed changes of the facial appearance, expression and/or morphology, which are necessary to reach the desired effect (i.e. the desired face), are visualized.
  • Methods according to the state of the art visualize only changes of isolated regions of the face. In the context of characteristics attributed to a person when making a first impression, the face of the user might thus be changed in an undesired way. The main aim of a businessman or politician might be to improve his appearance towards being perceived as a competent person while at the same time he wishes to appear younger. However, the selected changes in the facial characteristics to look younger might result in a less competent appearance, contradicting his professional needs.
  • The inventors found that taking into consideration one or more characteristics attributed to a person when making a first impression is essential for optimizing the choice and/or selection of individualized different invasive and/or minimally invasive cosmetic and/or medical treatments in order to reach the desired modification of the face. One of the requirements for applying this principle is the ability to process large data sets in a novel, innovative, fast and efficient way via using an artificial intelligence.
  • Therefore, the invention solves the technical problem of prior art to process data in order to analyse, visualize and predict a person's face and its perception by others according to characteristics attributed to the person when making a first impression, by a combination of steps listed above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and further advantageous embodiments of the invention will be explained based on the following description and the accompanying drawings.
  • FIG. 1 shows a system to display a computer-modified visualization of a desired face of a person.
  • FIG. 2 shows a mobile device of the system with a picture of the face of the person.
  • FIG. 3 shows in which regions the face of the person is divided for further analyses.
  • FIG. 4 shows a description and rating of the characteristics based on the original visual data of the person's face.
  • FIG. 5 shows how characteristics may be selected by the user.
  • FIG. 6 shows the face of the person with a data set of modifications overlaid.
  • FIG. 7 shows a comparison of the original picture of the face of the person with the computer-modified visualization of the desired face of the person.
  • FIG. 8 shows a recommendation which treatments to use.
  • FIG. 9 shows a line drawing of a face with regions of the face marked to be treated to increase the characteristic “dominant” attributed to a person when making a first impression.
  • FIG. 10 shows a picture of a face with regions of the face marked to be treated to increase the characteristic “dominant” attributed to the person when making a first impression.
  • FIG. 11 shows a line drawing of a face with regions of the face marked to be treated to increase the characteristic “competence” attributed to a person when making a first impression.
  • FIG. 12 shows a table with examples of invasive and/or minimally invasive cosmetic and medical treatments to achieve changes of desired characteristics of a person's face by actions in particular regions of the face of the person.
  • FIG. 13 shows a visual of a face of a person before and after the performance of the recommended treatments.
  • DETAILED DESCRIPTION
  • FIG. 1 shows a system 1 to display a computer-modified visualization or visual of a desired face of a person 2 with a mobile device 3. The mobile device 3 processes software, in particular an App, for person 2, who considers undergoing an invasive and/or minimally invasive cosmetic and/or medical treatment, or for a specialist performing such treatments who would like to take a data-driven decision on which treatments to choose to obtain the desired changes of the face. A camera of the mobile device 3 is used to obtain the visual of the face of person 2 as standardized visual data 4 shown in FIG. 2. Visual data 4 may represent a photo or a film of the face of person 2. Standardization of the visual data 4 may be split into instructions for person 2 and the photographer on what to do for taking a standardized photo or film, and into a post-processing of the photo or film taken. The instructions for person 2 and the photographer may include one or more of the following steps: ask person 2 to take off e.g. earrings or a nose ring; ask person 2 not to smile; ask person 2 to make a neutral facial expression; ask person 2 to keep head hair out of his/her face; ask person 2 to look straight into the camera; ensure good general lighting conditions; use a neutral background. The post-processing of the photo or film may include one or more of the following steps: cut out the background behind the face from the visual data 4; cut out the ears of the person's face to reduce the visual influence of e.g. earrings; cut out clothes and other wardrobe that might interfere with the face; cut out the head hair of person 2.
  • System 1 comprises a remote server 5 connected via a broadband network 6 or other remote connection technology with the mobile device 3. The server 5 processes a deep learning based application 7 and as such forms an artificial intelligence that analyses visual data 4 of the face of person 2 to rate one or more characteristics attributed to person 2 when making a first impression. Such face characteristics or traits may for example be attractiveness, healthiness, youthfulness, tiredness, sadness, friendliness, dominance, competence, likability or trustworthiness. The deep learning based application 7 is a computer program comprising instructions which, when the program is executed by remote server 5, cause remote server 5 to carry out the following steps to provide a computer-modified visualization 13 of a desired face of person 2.
  • To enable the deep learning based application 7 to rate face characteristics, the following steps are processed. In a first step, a data set of visuals of faces and extracted face property data thereof linked to face characteristics data is generated. To extract face properties, conventional computer vision algorithms such as landmark detection divide the face of person 2 into regions like the chin and the jawline, as shown in FIG. 3, and automatically extract characteristics and their locations in the face. Such face property data may for instance include the distance between the eyes, the distance between the eyes and the mouth, and other distances measured to describe a face. These face property data are stored together with the visual data of these faces by the deep learning based application 7 in a database 8 of the server 5. A representative number of such visuals of faces stored as visual data in database 8 are shown on a display to a representative number of humans to manually rate these visuals of faces with respect to their characteristics. The humans may rate them with scores (e.g. from 0 to 7) for different characteristics or traits. These human ratings are stored in database 8 linked to the visual data of the faces and provide the base information for the deep learning based application 7 to rate characteristics attributed to a person 2 when making a first impression.
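The distance-based face property data mentioned above can be computed directly from landmark coordinates. A minimal sketch; the landmark names and the returned keys are illustrative assumptions, not a schema taken from the patent:

```python
import math

def _dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def face_properties(landmarks):
    """Derive simple face property data (inter-eye distance and
    eye-to-mouth distance) from 2D landmark coordinates."""
    eye_l, eye_r, mouth = landmarks["eye_l"], landmarks["eye_r"], landmarks["mouth"]
    eye_center = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    return {
        "inter_eye": _dist(eye_l, eye_r),
        "eye_mouth": _dist(eye_center, mouth),
    }

props = face_properties({"eye_l": (30, 40), "eye_r": (70, 40), "mouth": (50, 80)})
```

Records like `props`, stored alongside the human trait ratings, form the kind of linked data set that database 8 is described as holding.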
  • In a second step, further face property data of these visuals of faces are extracted by conventional computer vision algorithms, for example landmark detection, wrinkle detection, skin texture analysis and analysis of facial proportions. These face property data are used together with the data set generated and stored in database 8 in the first step for training of the deep learning based application 7, to enable the artificial intelligence to provide an automated rating of the characteristics of the visuals of faces. Reference is made to the scientific articles listed above that describe the structure and model architecture of such an artificial intelligence. As a result, any visual of a face may be provided to the deep learning based application 7, which will, based on the data set stored in database 8, provide an automated rating of the characteristics of the visual of the face. FIG. 4 shows the result of such a description and automated rating of the characteristics or traits of a person based on the visuals of the person's face displayed on mobile device 3.
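The training step links extracted face property data to the stored human ratings. As a drastically simplified stand-in for the deep learning model, the sketch below fits a one-feature linear predictor of a mean human rating by gradient descent; the data, the single feature and the linear model form are all invented for illustration:

```python
# Toy stand-in for training on database 8: learn to predict the mean
# human rating of a trait from a single extracted face property.

def fit_linear(xs, ys, lr=0.05, steps=2000):
    """Least-squares fit of y ~ w*x + b via gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical data: wrinkle score (extracted property) vs. mean
# "youthfulness" rating on the 0-7 scale mentioned above.
wrinkle_scores = [0.0, 1.0, 2.0, 3.0]
youth_ratings = [7.0, 6.0, 5.0, 4.0]
w, b = fit_linear(wrinkle_scores, youth_ratings)
```

The real system replaces this single feature and linear map with a deep convolutional network over many face property inputs, as in the cited Liao et al. and Messer et al. articles, but the supervision signal (human ratings) is the same.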
  • Server 5 furthermore comprises a database 9 with data generated in a third step based on clinical studies, case studies or other publicly available information, which data comprise information about visual modifications of a face achievable by invasive and/or minimally invasive cosmetic and medical treatments. This database 9 for instance comprises information on the effectiveness of a treatment improving a wrinkle score from 3.1 to 1.5 within 3 weeks. Reference is made to prior art WO 2015/017687, which discloses parameter-based clinical trial summaries, for example the investigator's rating of glabellar line severity at maximum frown and the subject's global assessment of change in appearance of glabellar lines after a medical treatment with Botox over a time period of 120 days post-injection. This prior art document discloses a database that stores data on how to improve one particular area of a face with one particular medical treatment. FIG. 12 as one further example shows a table, which includes the information of database 9 with examples of invasive and/or minimally invasive cosmetic and medical treatments to achieve actions or improvements in particular regions of the face of person 2.
  • After databases 8 and 9 have been set up with the three steps described above, system 1 is ready to be used to provide computer-modified visuals of a face of a person as described in the following steps of the method.
  • In a fourth step, the camera of mobile device 3 is used to obtain the standardized visual data 4 of the face of person 2 as described above and shown in FIG. 2. In a preferred optional embodiment, these visual data 4 are sent to server 5, and deep learning based application 7 processes an automated rating of the characteristics or traits of person 2 and provides the rating shown in FIG. 4 to support person 2 in deciding which characteristic or trait he/she might want to change. In another, less preferred embodiment, person 2 makes his/her decision which characteristic of his/her face to improve without the automated rating shown in FIG. 4.
  • In a fifth step, person 2 inputs at least one characteristic of the face to be changed based on the personal interest of person 2. The characteristic(s) input by person 2 is stored and transmitted in the form of face characteristics data. Person 2 may decide to change his/her facial appearance, expression and/or morphology to improve the characteristic "competent" with input means 10 of mobile device 3, as shown in FIG. 5. This characteristic selected by person 2 is transmitted via broadband network 6 to server 5. In another embodiment of the invention, person 2 may use another way to input the at least one characteristic of the face he/she is interested in changing with mobile phone 3. This may be done by the selection of the App used by person 2, as there may be one App to improve the characteristic "competent" and another App to improve the characteristic "dominant".
  • In a sixth step of the method, the artificial intelligence of server 5 analyses the visual of the person's face and generates a data set of modifications 12 based on the selected desired characteristic(s) and modifications achievable by at least one cosmetic and/or medical treatment stored in database 9. This means that either only one or more cosmetic treatments, or only one or more medical treatments, or any combination of one or more cosmetic and medical treatments of database 9 may be used to modify the user's face towards the desired change of the selected desired characteristic(s). Furthermore, references to a cosmetic treatment are meant to cover any invasive and/or minimally invasive cosmetic treatment. To achieve this technical step, the deep learning based application 7, based on database 8, evaluates which modifications are needed to improve the characteristic "competent" of person 2 and matches these needed modifications with the possible modifications stored in database 9. FIG. 11 shows a line drawing of a face with regions of the face marked to be treated to increase the characteristic "competent" attributed to a person when making a first impression. In line with that, the table of FIG. 12 shows the actions needed to increase the characteristic "competent" attributed to person 2 when making a first impression: make the chin less wide, make the cheeks less full and lower the eyebrows. FIG. 12 furthermore shows the cosmetic and/or medical treatments which can be performed by a beautician, dermatologist, physician, specialist certified to do facial modifications, or plastic surgeon to realize these actions.
  • The result of a best match of modifications needed and modifications possible is then stored in the data set of modifications 12. This data set of modifications 12 technically describes what modifications are needed to modify visual data 4 of the face of person 2 to show the possible result of one or more invasive and/or minimally invasive cosmetic and/or medical treatments to improve the characteristic "competent" of person 2 in a computer-modified visual 13 of the face of person 2, as shown in FIG. 6. FIG. 6 shows the face of person 2 with an overlay of arrows that indicate which regions of the face need to be treated to achieve the desired result of improved "competence". For instance, the eyebrow position needs to be lifted and the volume of the jawline needs to be increased. The arrows shown are only symbolic, as the data set of modifications 12 may comprise further information about the face and the processing of the visual data 4 needed.
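The best-match step can be pictured as intersecting the modifications needed for the selected trait with the modifications achievable per the treatment database. A minimal sketch; the region/action names, the dictionary layout and the treatment entries are assumptions for illustration, not clinical content from the patent:

```python
# Match needed modifications against treatments stored in database 9.
# All entries are illustrative, not clinical recommendations.

NEEDED = [("chin", "reduce_width"), ("cheeks", "reduce_volume"), ("eyebrows", "lower")]

DATABASE_9 = {
    ("chin", "reduce_width"): "dermal filler contouring",
    ("eyebrows", "lower"): "Botulinum toxin",
    ("lips", "add_volume"): "dermal filler",
}

def best_match(needed, achievable):
    """Keep only the needed modifications for which a treatment exists;
    the result plays the role of the data set of modifications 12."""
    return [{"region": r, "action": a, "treatment": achievable[(r, a)]}
            for r, a in needed if (r, a) in achievable]

modifications_12 = best_match(NEEDED, DATABASE_9)
```

In this toy run the cheek modification drops out because no matching treatment is stored, mirroring how the real system is constrained to modifications actually achievable by treatments in database 9.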
  • In a seventh step of the method, the visual data 4 of the face of person 2 are modified based on the data set of modifications 12, and a computer-modified visual 13 of the face of person 2 with the modification of the face achievable by the at least one proposed cosmetic and/or medical treatment is generated. If, for instance, the artificial intelligence of server 5 concluded that the eye bags of person 2 need to be tightened to improve the characteristic "attractiveness" of person 2, then the data set of modifications 12 may include information to soft focus the area of the lower eyelid and zygoma in the visual data 4 of the face of person 2. Deep learning based application 7, or other conventional image processing methods like image warping, then processes the photo or film of person 2 to provide the computer-modified visual 13 of the desired face of person 2. This seventh step may be processed by server 5 due to its large processing power, but could be processed by mobile device 3 as well. In this embodiment, server 5 therefore comprises picture data modification means 14 to modify the visual data 4 of the face of person 2 according to the data set of modifications 12, which modified visual data 4 are transmitted to mobile device 3 and displayed as computer-modified visual 13 of the desired face of person 2 with mobile device 3.
  • In a preferred embodiment of the invention, the artificial intelligence is used to automatically identify an area with wrinkles in the visual of the person's face based on technologies known to a person skilled in the art. Such areas may for instance be the area of the lower eyelid and zygoma. The artificial intelligence may then be used to automatically soft focus these identified areas in case, for instance, the characteristic "attractiveness" of person 2 should be improved; the data set of modifications then includes such information to modify the visual of the face of the person to generate the computer-modified visual of the desired face of the person. In a further preferred embodiment, the artificial intelligence may also add wrinkles to the visual of, for instance, a young person's face, who wants to improve the characteristic "competence", in areas where older people typically have wrinkles.
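The soft-focus operation described above amounts to smoothing pixel values only inside the identified region. The sketch below applies a 3x3 box blur to a masked rectangle of a grayscale grid; a real implementation would use an image-processing library, and the tiny grid here is purely illustrative:

```python
# Soft focus (box blur) restricted to an identified face region,
# e.g. the lower-eyelid area; pure-Python grayscale grid for illustration.

def soft_focus(img, top, left, bottom, right):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # pixels outside the region stay intact
    for y in range(top, bottom):
        for x in range(left, right):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)   # 3x3 neighbourhood mean
    return out

# A single bright "wrinkle" pixel is softened inside the masked region.
img = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
blurred = soft_focus(img, 1, 1, 2, 2)
```

Restricting the blur to the identified rectangle is what distinguishes a targeted soft focus (only the wrinkle area changes) from a global blur of the whole visual.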
  • In an eighth step of the method, the computer-modified visual 13 of the desired face of person 2 is displayed with mobile device 3, as shown in FIG. 7. There are two preferred modes to display the visual data 4 of the face and the computer-modified visual 13 of the desired face of person 2 to enable him/her to see the differences more easily. The first preferred mode is a toggle mode that alternately shows the taken standardized visual data 4 of the face of person 2 and the computer-modified visual 13 of the desired face of person 2. Person 2 just has to touch the display of mobile device 3 in a button area to toggle between the two visuals as quickly as desired to better see the differences and modifications. The second preferred mode is a marking mode that marks the areas of the face of person 2 modified by the data set of modifications 12 as an overlay on the displayed computer-modified visual 13 of the desired face of person 2, as shown in FIG. 10. Marking may be done e.g. with lines or broken lines overlaid over the computer-modified visual 13 of the desired face of person 2. Both preferred modes enable person 2 to easily see those areas of the face that would need to be treated with invasive and/or minimally invasive cosmetic and/or medical treatments.
  • FIG. 13 shows the face of another person 2 before and after the recommended treatment was performed. The left photo shows person 2, who was interested in changing the appearance of his face by increasing the characteristics "dominance" and "competence". The inventive method and system provided and displayed a computer-modified visual 13 of the desired face similar to the right photo and provided a recommendation to use the treatment of lipofilling in particular identified regions of the face. After the recommended treatment was performed, the right photo of FIG. 13 was taken, and it turned out that the computer-modified visual 13 of the desired face was nearly identical to the actual photo taken after the treatment. This technical solution helps substantially to make informed decisions about cosmetic and/or medical treatments.
  • It is furthermore possible to display all invasive and/or minimally invasive cosmetic and/or medical treatments stored in database 9 and to select some of these treatments upfront, to send them together with the visual data 4 of person 2 and the characteristic input by person 2 to change his/her facial appearance to server 5. In this case, the artificial intelligence of server 5 only uses those selected invasive and/or minimally invasive cosmetic and/or medical treatments during its search in database 9 for a best match achievable for the data set of modifications 12. This enables person 2 to decide upfront which of the invasive and/or minimally invasive cosmetic and/or medical treatments are acceptable to be used to change his/her facial appearance and helps to streamline the processing of server 5.
  • In case the seventh step of the above-described method is processed by mobile device 3 and not on server 5, mobile device 3 would comprise visual data modification means 14 to modify the visual data 4 of the face of person 2 with the data set of modifications 12 received from server 5 by altering certain image vectors. The computer-modified visual 13 of the desired face of person 2 is then displayed with the device 3. In another embodiment of the invention, image processing by the visual data modification means might be split between server 5 and mobile device 3.
  • Deep learning based application 7 is optionally built to evaluate the age period, ethnicity and gender of person 2 from visual data 4. This helps to reduce the data that needs to be input when using the App.
  • System 1 furthermore makes it possible to show those invasive and/or minimally invasive cosmetic and/or medical treatments 16 on the display of mobile device 3 that have been selected by the artificial intelligence to achieve the desired face, as shown in FIG. 8. Person 2 may decide to use a filter to select only some of the shown invasive and/or minimally invasive cosmetic and/or medical treatments, if for instance person 2 is not willing to undergo a surgical intervention. This selection of person 2 is sent to server 5, which calculates the necessary data set of modifications 12 achievable with the reduced number of invasive and/or minimally invasive cosmetic and/or medical treatments to achieve the desired changes in the characteristic (e.g. attractiveness).
  • It is furthermore possible to input more than one characteristic with input means of mobile device 3 as shown in FIG. 5. This provides the advantage that person 2 is free to improve two or more characteristics based on his/her personal interest.
  • It is furthermore possible that the person who uses the App is not the person who wants to change his/her appearance, but a person who wants to enable an informed decision, such as a beautician, dermatologist, physician, specialist certified to perform facial modifications, or plastic surgeon.
  • In a further embodiment of the invention, the computer program realized as an App on a mobile phone may be programmed to ask the user questions about the following: gender, age, profession, level of education, sexual orientation, religion and political orientation. These questions may be asked in the fifth step of the above-explained method, about the user in one embodiment and, in a second embodiment, about the target group the user is interested in. This information may be used in the sixth step of the above-explained method, when generating the data set of modifications. The result of an analysis of the information about the user and/or the target group for which the user wants to be recognized as, for example, more "dominant" may be used as further input on how the characteristics of the user need to be modified. This has the advantage that the modifications closely fit the personal needs and wishes of the user.
  • In a further embodiment of the invention, an artificial intelligence system is used to provide an automated rating of the characteristic of the visuals of faces. Such an artificial intelligence system may include software to detect landmarks in the face of a person, or any other conventional algorithm with the ability to do so.
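The landmark-based rating mentioned in the last bullet can be sketched in a few lines of Python. This is a minimal illustration only: the landmark layout, the `landmark_features` helper and the `CharacteristicRater` stand-in for the trained artificial intelligence are assumptions made for the example, not part of the application.

```python
import math
from itertools import combinations

def landmark_features(landmarks):
    """Turn detected facial landmarks (a list of (x, y) points) into a
    feature vector of pairwise distances, normalised by the distance
    between the first two landmarks (assumed to be the eyes) so that
    the rating is invariant to image scale."""
    iod = math.dist(landmarks[0], landmarks[1])  # inter-ocular distance
    return [math.dist(a, b) / iod for a, b in combinations(landmarks, 2)]

class CharacteristicRater:
    """Stand-in for the trained artificial intelligence: maps landmark
    features to a score for one characteristic (e.g. 'dominant')."""
    def __init__(self, weights, bias=0.0):
        self.weights, self.bias = weights, bias

    def rate(self, landmarks):
        feats = landmark_features(landmarks)
        return sum(w * f for w, f in zip(self.weights, feats)) + self.bias

# Toy example: 4 landmarks (two eyes, nose tip, chin) -> 6 distances.
face = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (0.5, 1.6)]
rater = CharacteristicRater(weights=[0.1] * 6)
score = rater.rate(face)
```

In practice the rating model would be trained on the human-rated data set described above rather than using hand-set weights; the sketch only shows the landmark-to-score data flow.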

Claims (9)

1. A method to provide a computer-modified visual of a desired face of a person, wherein the method comprises the following steps:
generating a data set of visuals of faces and extracted face property data thereof linked to face characteristics data provided by a representative set of humans that rate the visuals of these faces with respect to their face characteristics, and storing the data set in a database;
extracting further face property data of these visuals of faces and using these extracted face property data together with the generated data set for training of an artificial intelligence to enable the artificial intelligence to provide an automated rating of the characteristics of the visuals of faces;
generating a data set of visual modifications of a face achievable by cosmetic and/or medical treatments and storing the data set in a database;
taking a standardized visual of the face of the person;
inputting at least one desired characteristic of the face of the person to be changed;
using the artificial intelligence to analyse the visual of the person's face and to generate a data set of modifications based on the selected desired characteristic(s) and modifications achievable by at least one cosmetic and/or medical treatment;
modifying the visual of the face of the person based on the data set of modifications and generating the computer-modified visual of the desired face of the person with the modification of the face achievable by the at least one proposed cosmetic and/or medical treatment; and
displaying the computer-modified visual of the desired face of the person.
2. The method according to claim 1, wherein the method comprises the following further steps:
using a toggle mode to alternatively show the taken standardized visual of the face of the person and the computer-modified visual of the desired face of the person, or
using a marking mode to mark the areas of the face of the person modified by the data set of modifications as an overlay to the displayed computer-modified visual of the desired face of the person.
3. The method according to claim 1, wherein the method comprises the following further steps:
displaying all cosmetic and/or medical treatments stored in the data set;
selecting only one or more of the cosmetic and/or medical treatments displayed; and
using only selected cosmetic and/or medical treatments for the generation of the data set of modifications.
4. The method according to claim 1, wherein the method comprises the following further steps:
displaying all characteristics stored in the database; and
selecting two or more of the characteristics displayed as desired characteristics to be used by the artificial intelligence to generate the data set of modifications.
5. The method according to claim 1, wherein the method comprises the following further step:
displaying those cosmetic and/or medical treatments which were selected by the artificial intelligence to be used to generate the data set of modifications.
6. The method according to claim 1, wherein the method comprises the following further steps:
using the artificial intelligence to analyse the visual of the person's face to provide an automated rating of the characteristics of the visual of the person's face; and
displaying the characteristics of the visual of the person's face.
7. The method according to claim 1, wherein the method comprises the following further step:
using the artificial intelligence to automatically identify an area with wrinkles in the visual of the person's face and, in case the data set of modifications includes such information, to automatically soft-focus this identified area when modifying the visual of the face of the person to generate the computer-modified visual of the desired face of the person.
8. A system to display a computer-modified visual of a desired face of a person with a mobile device, which mobile device comprises a display and a camera and input means to input a characteristic, which mobile device is connected to a remote server which generates a data set of modifications, wherein
the remote server comprises a deep learning based application to process the steps of the method according to claim 1 to generate a database with data sets of visuals of faces and extracted face property data thereof linked to face characteristics and to generate a database with a data set of visual modifications of a face achievable by cosmetic and/or medical treatments and the deep learning based application is built to generate the data set of modifications for at least one region of the face of the person to achieve the desired change in the characteristic of the face based on the generated databases; and
the remote server or the mobile device comprises visual data modification means to modify the visual data of the face of the person with the data set of modifications achievable by the at least one cosmetic and/or medical treatment, which modified visual data of the face are displayed with the mobile device.
9. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the following steps to provide a computer-modified visualization of a desired face of a person:
generating a data set of visuals of faces and extracted face property data thereof linked to face characteristics data provided by a representative set of humans that rate the visuals of these faces with respect to their face characteristics, and storing the data set in a database;
extracting further face property data of these visuals of faces and using these extracted face property data together with the generated data set for training of an artificial intelligence to enable the artificial intelligence to provide an automated rating of the characteristics of the visuals of faces;
generating a data set of visual modifications of a face achievable by cosmetic and/or medical treatments and storing the data set in a database;
receiving a standardized visual of the face of the person from a mobile device;
receiving at least one desired characteristic of the face of the person to be changed, input by the person with the mobile device;
using the artificial intelligence to analyse the visual of the person's face and to generate a data set of modifications based on the selected desired characteristic(s) and modifications achievable by at least one cosmetic and/or medical treatment;
modifying the visual of the face of the person based on the data set of modifications and generating the computer-modified visual of the face of the person with the modification of the face achievable by the at least one proposed cosmetic and/or medical treatment; and
transmitting the computer-modified visual of the desired face of the person to the mobile device to display the computer-modified visual.
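Read together, claims 1 and 9 describe one pipeline: build a database of visual modifications achievable by treatments, let the artificial intelligence select those that move the face toward the desired characteristic, then apply the resulting data set of modifications to the visual. The sketch below illustrates that flow on a simplified parametric face; every name (`Treatment`, `generate_modifications`, `apply_modifications`) and the placeholder treatment entries are invented for the example and do not appear in the application.

```python
from dataclasses import dataclass

@dataclass
class Treatment:
    name: str      # hypothetical database entry
    effect: dict   # achievable change per facial parameter

# Data set of visual modifications achievable by treatments
# (third "generating" step of claim 1); placeholder entries.
TREATMENTS = [
    Treatment("dermal filler", {"cheek_volume": 0.2}),
    Treatment("skin peeling", {"skin_smoothness": 0.3}),
]

# A desired characteristic, expressed here as target parameter changes.
CHARACTERISTIC_TARGETS = {
    "attractive": {"cheek_volume": 0.2, "skin_smoothness": 0.3},
}

def generate_modifications(desired, allowed=TREATMENTS):
    """Stand-in for the artificial intelligence: select the treatments
    whose achievable effects contribute to the desired characteristic;
    their combined effects form the data set of modifications."""
    targets = CHARACTERISTIC_TARGETS[desired]
    chosen = [t for t in allowed if any(k in targets for k in t.effect)]
    dataset = {}
    for t in chosen:
        for k, v in t.effect.items():
            dataset[k] = dataset.get(k, 0.0) + v
    return chosen, dataset

def apply_modifications(visual, dataset):
    """Modify the (here: parametric) visual of the face with the data
    set of modifications to obtain the computer-modified visual."""
    return {k: visual.get(k, 0.0) + dataset.get(k, 0.0)
            for k in set(visual) | set(dataset)}

visual = {"cheek_volume": 0.5, "skin_smoothness": 0.4}
chosen, dataset = generate_modifications("attractive")
modified = apply_modifications(visual, dataset)
```

In the claimed system the selection step runs on the remote server (claim 8), while applying the data set of modifications may run on either the server or the mobile device; the sketch keeps both in one process for readability.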
US17/117,805 2019-12-11 2020-12-10 Method and system to provide a computer-modified visualization of the desired face of a person Abandoned US20210183124A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/331,121 US11227424B2 (en) 2019-12-11 2021-05-26 Method and system to provide a computer-modified visualization of the desired face of a person

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP19215134 2019-12-11
EP19215134.8 2019-12-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/331,121 Continuation US11227424B2 (en) 2019-12-11 2021-05-26 Method and system to provide a computer-modified visualization of the desired face of a person

Publications (1)

Publication Number Publication Date
US20210183124A1 true US20210183124A1 (en) 2021-06-17

Family

ID=68886753

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/117,805 Abandoned US20210183124A1 (en) 2019-12-11 2020-12-10 Method and system to provide a computer-modified visualization of the desired face of a person
US17/331,121 Active US11227424B2 (en) 2019-12-11 2021-05-26 Method and system to provide a computer-modified visualization of the desired face of a person

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/331,121 Active US11227424B2 (en) 2019-12-11 2021-05-26 Method and system to provide a computer-modified visualization of the desired face of a person

Country Status (5)

Country Link
US (2) US20210183124A1 (en)
EP (1) EP4072403A1 (en)
CN (1) CN115209789A (en)
DE (1) DE212020000466U1 (en)
WO (1) WO2021115798A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11341619B2 (en) 2019-12-11 2022-05-24 QuantiFace GmbH Method to provide a video with a computer-modified visual of a desired face of a person

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10482333B1 (en) 2017-01-04 2019-11-19 Affectiva, Inc. Mental state analysis using blink rate within vehicles
US8917914B2 (en) 2011-04-05 2014-12-23 Alcorn State University Face recognition system and method using face pattern words and face pattern bytes
US9520072B2 (en) 2011-09-21 2016-12-13 University Of South Florida Systems and methods for projecting images onto an object
EP2915101A4 (en) 2012-11-02 2017-01-11 Itzhak Wilf Method and system for predicting personality traits, capabilities and suggested interactions from images of a person
WO2015017687A2 (en) 2013-07-31 2015-02-05 Cosmesys Inc. Systems and methods for producing predictive images
CN106415594B (en) 2014-06-16 2020-01-10 北京市商汤科技开发有限公司 Method and system for face verification
CN105844202A (en) 2015-01-12 2016-08-10 芋头科技(杭州)有限公司 Image recognition system and method
EP3335195A2 (en) 2015-08-14 2018-06-20 Metail Limited Methods of generating personalized 3d head models or 3d body models
CN108664782B (en) 2017-03-28 2023-09-12 三星电子株式会社 Face verification method and device
US10438415B2 (en) 2017-04-07 2019-10-08 Unveil, LLC Systems and methods for mixed reality medical training
US10977674B2 (en) 2017-04-28 2021-04-13 Qualtrics, Llc Conducting digital surveys that collect and convert biometric data into survey respondent characteristics
CN108960020A (en) 2017-05-27 2018-12-07 富士通株式会社 Information processing method and information processing equipment
TW201907334A (en) * 2017-07-03 2019-02-16 華碩電腦股份有限公司 Electronic apparatus, image processing method and non-transitory computer-readable recording medium
US10825564B1 (en) * 2017-12-11 2020-11-03 State Farm Mutual Automobile Insurance Company Biometric characteristic application using audio/video analysis
US10997703B1 (en) * 2018-04-24 2021-05-04 Igor Khalatian Methods and systems for automated attractiveness prediction
US11151362B2 (en) * 2018-08-30 2021-10-19 FaceValue B.V. System and method for first impression analysis and face morphing by adjusting facial landmarks using faces scored for plural perceptive traits
US11250245B2 (en) 2019-09-20 2022-02-15 The Trustees Of Princeont University Data-driven, photorealistic social face-trait encoding, prediction, and manipulation using deep neural networks
US10764535B1 (en) 2019-10-14 2020-09-01 Facebook, Inc. Facial tracking during video calls using remote control input

Also Published As

Publication number Publication date
EP4072403A1 (en) 2022-10-19
CN115209789A (en) 2022-10-18
DE212020000466U1 (en) 2021-09-09
US20210279933A1 (en) 2021-09-09
WO2021115798A1 (en) 2021-06-17
US11227424B2 (en) 2022-01-18


Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

AS Assignment

Owner name: QUANTIFACE GMBH, AUSTRIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BENDITTE-KLEPETKO, HEIKE;REEL/FRAME:056668/0100

Effective date: 20201221

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION