WO2017177259A1 - System and method for processing photographic images - Google Patents

System and method for processing photographic images

Info

Publication number
WO2017177259A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
image
face
features
facial
Prior art date
Application number
PCT/AU2017/000087
Other languages
English (en)
Inventor
Steven Moss
Original Assignee
Phi Technologies Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2016901367A external-priority patent/AU2016901367A0/en
Application filed by Phi Technologies Pty Ltd filed Critical Phi Technologies Pty Ltd
Publication of WO2017177259A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2178 Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present invention relates generally to an image processing system and method, and in particular, to a system and method for processing a photographic image of a user to provide beautification of the image in accordance with predetermined beauty preferences.
  • Self-taken photographs or portraits, or "selfies", are a particularly common form of photograph.
  • Many individuals take selfies to capture themselves experiencing a moment either alone or with others, which can simply be posted onto social media platforms for sharing with the individual's contacts and the general public.
  • This phenomenon has become so popular that there are dedicated extension devices provided for use with smart phones to facilitate such photographs, generally referred to as a "selfie stick”.
  • a method of processing an image comprising:
  • the image is a digital photograph captured by a digital camera.
  • the image is a moving image, such as a video.
  • the step of analysing the image may comprise receiving the digital photograph and scanning the digital photograph to identify and analyse predetermined features of the user's face.
  • the predetermined features of the user's face may include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.
  • the predetermined features of the user's face are measured by placing mapping points on said multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.
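As a rough illustration of the mapping step above, the sketch below places a few named mapping points and derives distances and angles between them; the landmark names, coordinates and measurements are invented for illustration and are not taken from the patent.

```python
import math

# Hypothetical landmark map: point name -> (x, y) pixel coordinates,
# as might be produced by any facial-landmark detector.
landmarks = {
    "left_eye": (120.0, 140.0),
    "right_eye": (200.0, 140.0),
    "nose_tip": (160.0, 190.0),
    "chin": (160.0, 260.0),
}

def distance(a, b):
    """Euclidean distance between two mapping points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def angle_at(vertex, p1, p2):
    """Angle (in degrees) at `vertex` formed by the rays to p1 and p2."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    deg = abs(math.degrees(a1 - a2))
    return min(deg, 360.0 - deg)

# A simple 2D "model" of the face is then just the set of such measurements.
inter_eye = distance(landmarks["left_eye"], landmarks["right_eye"])
nose_angle = angle_at(landmarks["nose_tip"], landmarks["left_eye"], landmarks["right_eye"])
```

A full implementation would repeat these measurements over the 72-101 mapping points mentioned later in the specification.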
  • a numerical code may be created based on landmark features of the user's face.
  • the numerical code may be searchable on a remote database of stored 2D or 3D models of user faces to identify the user in the digital photograph.
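A minimal sketch of how such a searchable numerical code might be derived, assuming scale-invariant ratios of landmark distances as its basis; the measurement names, the quantisation step and the hashing scheme are all illustrative assumptions, not the patent's method.

```python
import hashlib

def face_code(measurements):
    """Build a coarse numerical code from named landmark distances.

    Ratios against a reference distance make the code invariant to image
    scale; rounding to one decimal quantises away small measurement noise
    so repeat photographs of the same face map to the same code.
    """
    ref = measurements["inter_eye"]            # scale reference
    ratios = tuple(
        round(measurements[k] / ref, 1)
        for k in sorted(measurements) if k != "inter_eye"
    )
    # A short hex digest stands in for the searchable database key.
    return hashlib.sha256(repr(ratios).encode()).hexdigest()[:16]

db = {}  # code -> user id, standing in for the remote database
code = face_code({"inter_eye": 80.0, "nose_chin": 70.0, "face_width": 150.0})
db[code] = "user-42"

# The same face, photographed at twice the scale, yields the same code.
same = face_code({"inter_eye": 160.0, "nose_chin": 140.0, "face_width": 300.0})
```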
  • the step of comparing the generated data against model data representative of features of the user's ideal face characteristics may comprise comparing the generated 2D or 3D model of the user's face against a stored 2D or 3D model representative of the user's ideal facial characteristics.
  • the stored 2D or 3D model representative of the user's ideal face characteristics may be generated by the user's previous actions in identifying preferred altered images of their facial characteristics.
  • the stored 2D or 3D model representative of the user's ideal face characteristics may be constantly updated based on feedback from the user regarding preferred altered images.
  • the step of determining differences between the generated data and the model data may comprise identifying the presence of optical distortion within the image.
  • the differences determined between the generated data and the model data may comprise differences in the facial dimensions and proportions of any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour; against the stored 2D or 3D model representative of the user's ideal face characteristics and determining differences between the relative dimensions and proportions of the features.
  • the step of determining of the differences between the generated data and the model data may further comprise determining differences between the face tone, clarity, colour and texture, including colour of lips, teeth and eyes.
  • a plurality of altered images may be supplied to the user, each altered image being altered from the original image a different percentage.
  • the plurality of altered images may comprise images that are altered to remove 100% of the differences, 75% of the differences, 50% of the differences, 25% of the differences and 0% of the differences.
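The graduated alterations can be understood as linear interpolation between the original measurements and the ideal-model measurements. The sketch below is a simplified stand-in for the image-warping step, operating on measurements only; the feature names and values are illustrative.

```python
def blend(original, ideal, pct):
    """Move each measurement `pct` percent of the way from its original
    value toward the ideal value; 0 leaves the measurements unchanged,
    100 removes all of the determined differences."""
    t = pct / 100.0
    return {k: original[k] + t * (ideal[k] - original[k]) for k in original}

original = {"inter_eye": 78.0, "lip_height": 14.0}
ideal = {"inter_eye": 82.0, "lip_height": 16.0}

# The five variants described above: 0%, 25%, 50%, 75% and 100%.
variants = {pct: blend(original, ideal, pct) for pct in (0, 25, 50, 75, 100)}
```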
  • the plurality of altered images may also include mirror images of the original image and the altered image.
  • the user is able to select each of the displayed altered images according to their preferences.
  • the most preferred altered image may be stored by the user for retention.
  • the relative dimensions and proportions of the features of the most preferred altered image may also be used to update the model data representative of features of the user's ideal face characteristics.
  • an image processing apparatus comprising:
  • an image capturing unit for capturing an image containing a user's face
  • a processor for:
  • the image capturing unit comprises a digital camera and the captured image is a digital photographic image.
  • the image capturing unit comprises a video camera and the captured image is a moving image.
  • the processor may be a computer processor provided on the apparatus. The processor may be configured to receive the captured image and may comprise software for scanning the image to identify and analyse predetermined features of the user's face.
  • the predetermined features of the user's face that are identified and analysed may include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.
  • the predetermined features of the user's face may be measured by placing mapping points on multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.
  • a numerical code may be created based on landmark features of the user's face.
  • the numerical code may be searchable by the controller on a remote database of stored 2D or 3D models of user faces to identify the user in the captured image.
  • the controller may compare the generated data against model data representative of features of the user's ideal face characteristics by comparing the generated 2D or 3D model of the user's face against a stored 2D or 3D model representative of the user's ideal face characteristics.
  • the stored 2D or 3D model representative of the user's ideal facial characteristics may be generated by the user's previous actions in identifying preferred altered images of their face.
  • the stored 2D or 3D model representative of the user's ideal face characteristics may be constantly updated based on feedback from the user regarding preferred altered images.
  • the processor may determine differences between said generated data and said model data including identifying the presence of optical distortion within the image.
  • the processor may compare the facial dimensions and proportions of any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour; against the stored 2D or 3D model representative of the user's ideal face characteristics and determining differences between the relative dimensions and proportions of the features.
  • the processor may further determine differences between the face tone, clarity, colour and texture, including lips, teeth and eyes, of the image and the model image.
  • the processor may alter the image by adjusting the dimensions and proportions of the user's face in the image to remove the differences between the relative dimensions and proportions of the features and by adjusting the face tone, clarity, colour and texture, including lips, teeth and eyes of the user's face in the image, to substantially accord with the model data.
  • the processor may display a plurality of altered images to the user by way of a user interface, each altered image may be altered from the original image a different percentage.
  • the plurality of altered images may comprise images that are altered to remove 100% of the differences, 75% of the differences; 50% of the differences; 25% of the differences and 0% of the differences.
  • the processor may also display mirror images of the original image and the altered images to the user.
  • the user interface may be configured to enable the user to select each of the displayed altered images according to their preferences and rate their preferred image for attractiveness.
  • the most preferred altered image may be stored in the processor for retention.
  • the relative dimensions and proportions of the features of the most preferred altered image may also be used by the processor to update the stored model data representative of features of the user's ideal face characteristics.
  • a method for processing an image of a human face to facilitate improved beautification of said face comprising:
  • the step of taking a digital image may comprise taking both a full frontal image of the user and a lateral image of the face of the user.
  • the step of analysing the at least partial image of the user's face may comprise applying a facial recognition and mapping algorithm to identify major anthropometric features associated with a user's face.
  • the major anthropometric features may comprise any one or more of facial shape including cheeks and chin; forehead height; eyebrow shape; eye size and inter-eye distance; nose shape and lip shape, length and height.
  • the step of digitally modifying the major anthropometric features associated with a user's face may comprise applying any one or more of Face Shaping; Skin Smoothing & Removal of Imperfections; Face Contouring & Highlighting - Make-up; Wrinkle Reduction; Crow's Feet Removal; Nose Sharpening & Shaping; Wider Eyes; Eye Enlargement; Eye Brightening; Red Eye Reduction; Teeth Whitening; Pouting Lips; Soft Chin; and Chin Lifting.
  • the step of digitally modifying the major anthropometric features may comprise a further step of applying multiple image options to the user for review.
  • the multiple image options may be generated by applying a combination of different modifications to the image for selection by the user.
  • the user may select the most preferred of the multiple options and the selection parameters may be stored for application the next time an image is taken.
  • the present invention comprises a process for performing alterations to a photographic image of a face to facilitate beautification of the facial image, comprising;
  • the step of establishing a database of facial images comprises collecting digital photographs from individuals across a variety of age and ethnic groups.
  • the step of analysing each facial image and generating data relating to predetermined features of the face may comprise receiving each digital photograph and scanning the digital photograph to identify and analyse predetermined features of the user's face.
  • the predetermined features of the user's face may include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.
  • the predetermined features of the user's face may be measured by placing mapping points on said multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.
  • the step of identifying consistencies between those predetermined features of the face across different levels of perceived attractiveness comprises comparing each of the images and establishing a range of consistencies between features deemed as attractive.
  • the step of defining the dimension and range of measurements of the predetermined features which were perceived as being most attractive may comprise statistically analysing the measurements of the predetermined features to determine ranges of dimensions for each of the identified facial features.
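One plausible reading of this statistical step, using a single illustrative feature ratio and a one-standard-deviation band as the "attractive range"; both the sample values and the choice of band are assumptions, not the patent's method.

```python
import statistics

# Hypothetical inter-eye-to-face-width ratios measured from images that
# raters marked as attractive (invented sample data).
attractive_ratios = [0.46, 0.47, 0.45, 0.48, 0.46, 0.47]

mean = statistics.mean(attractive_ratios)
sd = statistics.stdev(attractive_ratios)

# One workable definition of the "attractive range" for this feature:
# the band within one standard deviation of the mean.
low, high = mean - sd, mean + sd
```

The same calculation would be repeated per predetermined feature and per demographic group in the database.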
  • a method of establishing a database of facial features representing attractiveness comprising:
  • the step of registering with the platform may also require the user to rate various alterations of their self-image in accordance with attractiveness.
  • the images may be rated based on whether they are liked or disliked.
  • the third party images may include facial images from beauty and fashion sources.
  • a method of recording facial features of an individual in an image comprising:
  • the step of directing the individual to capture the image of their face comprises providing visual and audio cues to guide the individual to position the camera at a predetermined position for taking the image.
  • the predetermined position of the camera may include a position to facilitate a front repose, lateral repose and a 90° in/out plane rotation of the individual's face.
  • the step of mapping a plurality of predetermined points on the captured image may comprise placing a minimum of 72-101 mapping points around the individual's facial shape including cheeks and chin; forehead; eyebrow; eyes; nose; and lips.
  • the step of extracting dimensions and measurements of said predetermined points may comprise establishing facial shape of the individual's facial shape including cheeks and chin; the individual's forehead height; the individual's eyebrow shape; the individual's eye size and inter-eye distance and pupil position; the individual's nose shape; and the individual's lip dimensions including length and height.
  • the other relevant data associated with the individual may include the individual's skin clarity, texture and colour.
  • the step of generating a model of the individual's head and face may comprise generating a 2D or 3D model of the individual's face replicating the extracted dimensions and measurements of the predetermined points.
  • determining a presence of any optical distortion in the image based on any differences between the compared extracted dimensions and measurements of the predetermined features and the dimensions and measurements of those features on the predetermined model of the user's face being at or above a predetermined level.
  • the step of mapping a plurality of predetermined features on the image comprises placing a minimum of 72-101 mapping points around the user's cheeks and chin; forehead; eyebrow; eyes; nose; and lips and extracting dimensions and measurements of those predetermined features to establish facial shape of the user's cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; and the user's lip dimensions including length and height.
  • An optical distortion level may be identified by establishing a difference in the dimensions and measurements of the predetermined features in the image and the dimensions and measurements of those features on the predetermined model of the user's face.
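A minimal sketch of this distortion test, flagging distortion when any measured feature deviates from the stored model of the same face by more than a relative threshold; the threshold value, feature names and measurements are illustrative assumptions.

```python
def optical_distortion(image_measurements, model_measurements, threshold=0.1):
    """Compare measured features against the stored model of the same face.

    Returns (distorted?, worst-deviating feature); distortion is flagged
    when the largest relative deviation meets the threshold.
    """
    worst_key, worst_dev = None, 0.0
    for key, model_value in model_measurements.items():
        dev = abs(image_measurements[key] - model_value) / model_value
        if dev > worst_dev:
            worst_key, worst_dev = key, dev
    return worst_dev >= threshold, worst_key

model = {"nose_width": 30.0, "inter_eye": 80.0}
selfie = {"nose_width": 36.0, "inter_eye": 81.0}   # close-up exaggerates the nose
distorted, feature = optical_distortion(selfie, model)
```

This captures the characteristic wide-angle "selfie distortion", in which features nearest the lens (typically the nose) are enlarged relative to the stored model.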
  • the image is presented to the user via an electronic interface.
  • the step of monitoring the plurality of predetermined facial features of the user may comprise using a camera on the electronic interface to monitor the user.
  • the emotion of the user may be determined by comparing the extracted dimensions and measurements of the predetermined facial features of the user against a predetermined model of facial features associated with an emotion.
  • the emotions may be determined between any one of happiness, sadness, anger, fear, surprise, disgust and neutral.
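A simple nearest-model classifier along these lines might look like the following; the feature set and model values are invented for illustration, and only three of the seven listed emotions are modelled.

```python
def classify_emotion(features, emotion_models):
    """Pick the emotion whose stored feature model is closest (by sum of
    squared differences) to the measured facial features."""
    def dist(model):
        return sum((features[k] - model[k]) ** 2 for k in model)
    return min(emotion_models, key=lambda e: dist(emotion_models[e]))

# Hypothetical stored models: mouth-corner lift and eye openness, normalised.
emotion_models = {
    "happiness": {"mouth_lift": 0.8, "eye_open": 0.6},
    "surprise":  {"mouth_lift": 0.2, "eye_open": 1.0},
    "neutral":   {"mouth_lift": 0.0, "eye_open": 0.5},
}

measured = {"mouth_lift": 0.7, "eye_open": 0.55}
emotion = classify_emotion(measured, emotion_models)
```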
  • Fig. 1 is a diagram of a system for use in the manipulation, display and storage of photographic images of individuals in real-time;
  • Fig. 2 is a flow chart depicting a method for users to create an account with the system of Fig. 1 in accordance with a first embodiment;
  • Fig. 3 is a flow chart depicting a method for initialising a beautification algorithm for a user in accordance with a first embodiment;
  • Fig. 4 is a screen shot depicting a step of the method of Fig. 3;
  • Fig. 5 is a screen shot depicting a step of the method of Fig. 3;
  • Fig. 6 is a screen shot depicting a step of the method of Fig. 3;
  • Fig. 7 is a flow chart depicting an embodiment by which a facial database is created and applied in the facial imaging software of the present invention;
  • Fig. 8 is a flow chart depicting an embodiment of how the system and method of the present invention is employed to apply beautification to an image taken by a camera of the present invention;
  • Fig. 9 is a diagrammatical depiction of 2D or 3D models of a face and facial features which can be generated for each user in accordance with the present invention;
  • Fig. 10 is a flow chart depicting a manner in which a beautification algorithm can be applied for a specific user; and
  • Fig. 11 is a flow chart depicting a manner in which image correction can be applied to a user's image based on the beautification algorithm of Fig. 10.
  • The system and method of the present invention will be described below in relation to its application for use in the manipulation, display and storage of photographic images of individuals in real-time. It will be appreciated that the system and method of the present invention may also be applicable for use with videos and other imaging technologies where a user's face is present either alone or with other individuals.
  • the system 10 will be referred to as a photographic image processing platform.
  • the system 10 generally includes a network 14 that facilitates communication between a host service 11 and one or more remote services 16.
  • the system 10 also facilitates communication of the host service 11 and remote services 16 with one or more third party servers 17, as will be discussed in more detail below.
  • the host service 11 is depicted as comprising one or more host servers 12 that communicate with the network 14 via wired or wireless communication, as will be appreciated by those skilled in the art.
  • the one or more host servers 12 are configured to store a variety of information collected by each of the remote services 16 as well as to exchange data with third party servers 17 via the network 14.
  • the host servers 12 are also able to house multiple databases necessary for the operation of the methods and systems of the present invention and for the storage of information collected from the individual users of the remote services 16.
  • the servers 12 may comprise any of a number of servers known to those skilled in the art and are intended to be operably connected to the network 14 so as to operably link to the plurality of remote services 16.
  • the servers 12 typically include a central processing unit or CPU that includes one or more microprocessors and memory operably connected to the CPU.
  • the memory can include any combination of random access memory (RAM), a storage medium such as a magnetic hard disk drive(s) and the like.
  • the distributed computing network 14 is the internet or a dedicated mobile or cellular network in combination with the internet, such as a GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), EDGE (Enhanced Data rates for GSM Evolution), LTE (Long Term Evolution), HSDPA/HSPA (High-Speed Downlink Packet Access/High-Speed Packet Access), EV-DO (Evolution-Data Optimized) or WCDMA (Wideband Code Division Multiple Access) network.
  • Other types of networks such as an intranet, an extranet, a virtual private network (VPN) and non-TCP/IP based networks are also envisaged.
  • the remote services 16 are configured for use by users who are registered with the host service 11.
  • the remote service 16 is typically in the form of a smart phone, tablet or similar portable computing device that is configured with a dedicated software application and camera technology to enable a user to take photographs and review the resultant modified images for transmission to other users 16, third party servers 17 and/or the host service 11, in real time. The manner in which this is achieved will be discussed in more detail below.
  • the remote service 16 may also be configured such that it is able to communicate with the host service 11 via a mobile web browser, thereby obviating the need for the remote service 16 to download software for this purpose.
  • the third party servers 17 may include existing social media platforms that facilitate the download, display, sharing and storage of photographs taken and modified by the users, such as Facebook®, SNAPCHAT®, and the like.
  • the host service 11 is able to communicate with the third party servers 17 via the network 14 to obtain specific information about the photographs/images posted on the third party servers in accordance with the system and method of the present invention.
  • the memory of the servers 12 may be used for storing an operating system, databases, software applications and the like for execution on the CPU.
  • the database stores data relating to each registered user of the system 10, as well as information relating to settings and preferences identified by the user, and other users, over time.
  • each user is connected to the network 14 by way of their remote service 16.
  • the remote service 16 stores one or more programs that include executable code to facilitate operation of a software application or "app", which is configured to provide an interface between the remote service 16 and the host service 11, as well as to control operation of the camera device present on the remote service 16.
  • Such an arrangement enables communication therebetween, as well as between other remote services 16, depending upon the type of user and the overall requirements of the system.
  • the functionality of the remote service 16 is provided by the type of software application that is installed in the local non-volatile storage of the remote service 16 and which is executed by the internal processor of the remote service 16.
  • the software application may be downloaded to the remote service 16 via the network 14 from the host service 11.
  • the software application may be purchased or otherwise downloaded through a software application provider, such as iTunes®, Google® Play and the like, for storage on the remote service 16.
  • the remote service 16 may provide a means for a user to collect and transfer information to the host server 12 via the network 14 automatically, by transmitting data collected by the remote service 16, captured in a form that can be readily transmitted between the remote service 16 and the host service 11, each time the user takes a photograph.
  • In order for a user to obtain authorisation to use the system 10 of the present invention, the user is required to register with the host service 11 in accordance with the method 20 as set out in Fig. 2.
  • a user, via their remote service 16, downloads a software application for use of the present system and method.
  • the user may download the software application directly from the host service 11 by way of a guest user interface, or may obtain the software application from an on-line software application store, such as Google® Play, or iTunes®.
  • the user may be charged a small fee to obtain the software application or the software application may be supplied to the user free of charge, with the user being charged based on their use of the software application, such as the number of images processed by the application or by way of an equivalent assessment of use.
  • the user may be charged a yearly membership fee upon activation of the software application or may be provided with an initial service having limited functionality which can be upgraded upon purchase of the full version of the software application.
  • access may be free. Irrespective of the manner in which the user is permitted access to the software application of the present system, upon access to the host service 11, the software application is downloaded into the user's remote service 16.
  • the login details include the preferred means by which the user accesses the application, which will be stored with the host service to generate a profile that is to be stored in the memory of the host servers 12 for each user.
  • the user may be able to log in to use the software application by two preferred methods. Firstly, the user may log in by way of their social media site of preference, such as FACEBOOK®, such that their social media account will be linked to their account generated with the host service 11 of the present application.
  • the other preferred login method will be via a conventional login name and login password which can be generated by the user.
  • the login name will be the user's preferred email address and they will be required to generate a password to complete the connection. Should the user forget their password, facilities will be provided for retrieving forgotten passwords or generating new passwords.
  • At step 23, the user is required to create their profile by entering important personal details to assist the application in generating a beautification algorithm specific to the user.
  • the software application will typically direct the user to a dedicated screen to enable the user to enter their details such as name, address, contact details, gender and date of birth.
  • a message may appear that explains that the application is optimised for certain ages or ethnicities and for some age groups the level of beautification applied by the present invention may vary.
  • the user may be required to enter other details such as their height, ethnicity or race, and any other details considered relevant to assist the software application to effectively and accurately generate a beautification algorithm to beautify the user's image.
  • the user may also register their social networks with the host service 11, such that the host service is able to access data from the user's social network pages.
  • once the software application is initiated, the user is able to register with the host service 11 through their Facebook® profile.
  • At step 24, the user will be asked to set their preferences for using the application.
  • a preferred embodiment of this step will be described in more detail below, but generally involves the user, under instruction from the application, taking a series of photographs of their face and head in various predetermined positions to enable the application to perform facial mapping on the user. This will then be used by the application to generate the beautification algorithm that creates a number of beautified images of the user, for the user to review. The user will then select the most preferable of the images. These algorithm settings will then be saved for the user, as the user's beautification algorithm.
  • At step 25, the user completes the registration process and creates their account with the host service 11, whereby the user's preferences will be stored in their account for ongoing reference and analysis.
  • the method 30 for setting the specific beautification algorithm containing the user preferences and for calibrating the user's application for use is depicted in Fig. 3.
  • At step 31, the user is prompted to take a photograph of their face and head in a frontal position (Repose & Smiling). This is achieved by the software application directing the user to a screen depicting the device's camera, which will be set to front-facing mode.
  • the software application will include instructions on how the user should take the photograph of their face front-on. This will include instructions via audio or visual cues to indicate that their face is in the correct position. Once the correct facial position or alignment has been achieved, a photo will be taken automatically. An example screen shot depicting this step is shown in Fig. 4.
  • At step 32, the application will then use the photograph to recognise and place a minimum of 72-101 markers or mapping points on the user's face and record measurements of facial features. This is the facial mapping process.
  • step 33 the application will apply beautification level adjustment to the photograph taken in step 31 and presents a variety of frontal view beautified images to the user for review.
  • This may include providing eight versions of the same frontal image, with four of those images presented in "Camera View Mode" and the other four presented in "Mirror Flip Mode" for review.
  • Three images in each of the two sets will have a different percentage of beautification applied to the user's face by the beautification algorithm. The remaining two images will have no beautification applied: one will be kept as the original image and the other as the mirror of the original image. The user will be presented with these images in a random order, as depicted in Fig. 5.
  • the user is able to manually adjust the beautification modification percentage applied to the image in accordance with their personal preferences.
  • the user will be able to adjust this percentage by manually increasing or decreasing an adjustment bar.
  • This setting may be applied to all images or only a selection of images. Changes to the face will be made in real time as the user moves the adjustment bar. This will include adjustments to the geometry of the head and face as well as to the facial characteristics.
  • Such functionality allows the user to adjust the beautification level based on what they prefer and consider ideal, as well as on how they perceive themselves as being more or less attractive. Modification changes will take effect in real time as the adjustment bar is increased or decreased.
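The real-time slider adjustment described above can be modelled, in a minimal sketch, as linear interpolation between the original landmark positions and fully beautified target positions. The function name and landmark representation are illustrative assumptions, not part of the described system:

```python
def apply_beautification_level(original, target, percent):
    """Linearly interpolate each landmark toward its beautified target.

    percent=0 returns the original geometry; percent=100 the full target.
    A real implementation would also warp the image pixels; this sketch
    covers only the geometry adjustment driven by the slider."""
    t = percent / 100.0
    return [
        (ox + t * (tx - ox), oy + t * (ty - oy))
        for (ox, oy), (tx, ty) in zip(original, target)
    ]

# Two illustrative landmarks, moved halfway toward their targets.
orig = [(0.0, 0.0), (10.0, 10.0)]
tgt = [(2.0, 0.0), (10.0, 14.0)]
half = apply_beautification_level(orig, tgt, 50)
```

Because the interpolation is cheap, it can be recomputed on every slider movement, consistent with the real-time behaviour described.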
  • At step 34, the user is required to rate the images in order of preference.
  • Each of the images will be displayed in groups of four in a 2x2 grid. Each image can be viewed individually in larger dimensions by selecting the image and clicking an expand button. The user is then able to select their preferences. In cases where some or all of the images are not liked by the user, the user will be able to return to the previous screen to take new images.
  • The user will be required to rate their images by giving each image a number from '1' to '8', where '1' would be an image they like most and '8' an image they like least.
  • Each of the rated images is saved into the host server database together with the modification data reflecting the preferences selected by the user.
  • the modification data may include the features modified and the level of modification applied to those features. This information will then be collated by the host server and processed by the host server's Machine Learning Engine to analyse the data and improve the beautification adjustment system over time.
  • The most preferred image, i.e. the image given a rating of '1', is the only image saved onto the user's device camera roll.
  • The data derived from the user simply making these adjustments and selecting their preferences is critical: it helps in understanding the perception of beauty and contributes to improving or fine-tuning the Facial Profile of an Ideal Beautiful Face (FPIBF) and, ultimately, to defining universal beauty.
  • a 2D or 3D model of the user's head and face is created based on these preferences, as depicted in Fig. 9.
  • This 2D or 3D model contains the relevant dimensions and proportions of the seven most important facial features, as discussed below, which are then recorded in the Host Service database against each registered user's profile.
  • the user is then redirected back to step 31 to repeat the process with photographs of the head and face of the user in a lateral or side position (Repose and Smiling), as depicted in Fig. 6.
  • the account is created and the application becomes calibrated and ready for use.
  • The user's device camera will be set to front-facing mode by default and all photos and videos taken via this screen will apply the beautification algorithm preferred by the user instantly and in real time (i.e. through camera pre-image capture) once the subject's face is recognised by the camera.
  • Beautification will be applied only to the user's face and not to anyone else found in the photo, and the beautification algorithm will be applied whether or not the user clicks on the capture button. As discussed above, this may occur in real time at pre-image capture, or following image capture when the image is stored. Multiple photos can be taken without needing to review or confirm the image.
  • the user remains on the camera screen once they have taken a photo (as a beautified image).
  • The photo will be saved in the host server's library. A copy of the photo will also be saved in the Album of the device camera roll. Multiple versions of the same photo will not be saved (i.e. the eight beautified versions); only the one that represents the preferred image as selected by the user will be saved.
  • the system and method as described above provides a simple means for a user to download the software application and to simply customise and use the system.
  • This then provides a system whereby every photograph taken after the initial registration and installation process has an automatic beautification algorithm applied to it in accordance with the user's face type and preferences. This may occur in real time and instantly, as soon as the camera recognises or tracks the user's face, or may occur after the image is taken and stored.
  • Because the system is able to analyse each user's facial features to a significant degree as part of the facial image assessment of the software application of the present invention, it is able to collect the data and compare the collected data with stored data for recognition purposes.
  • the present system is able to recognise the user within a photograph even if the user is part of a group photograph.
  • facial recognition is possible due to the ability of the software application to identify major anthropometric points or features of the user's face, irrespective of the user's age, skin colour or sex. This can also be achieved if the user is wearing make-up or a hat or, in the case of males, the user has grown a beard.
  • the software application of the present system will only function to manipulate the facial image of the user to which the phone belongs, and not apply any manipulation to the other members of the group. If the group photo is shared with someone else who is a registered user of the present invention and is downloaded into that user's mobile phone, the software application will be able to identify and edit the image of that user, even though that user did not take the photograph.
  • the present invention is able to link and record the facial feature dimensions of the subject in its database, so it can be referenced to a photo database within the host service, for face recognition purposes. This will allow the subject's face and facial features to be recognised in a group photo or video, even if there are many individuals present.
  • In some circumstances, it may not be possible for the face recognition technology to recognise the user's face. In such circumstances, the user will be prompted to tag their face within the image. Once tagged, the system of the present invention will then be able to apply a beautification algorithm to the user's face in accordance with their preferences. The image preference selections made by the user for tagged images will not, however, be recorded in the database.
  • Because the host service of the present invention is able to employ methods that build a large database of an extensive variety of facial images with diversified attractiveness levels, with each of those facial images also rated for aesthetic appeal by a diverse range of men and women of various ages and races, it is able to provide a means from which the host service machine learning engine can be trained. This means is built from an analysis of data about the consistencies between characteristics of the facial features found in images that are perceived as attractive and unattractive.
  • the host service machine learning engine can analyse the following data: • Generated Images
  • the machine learning engine will measure seven facial features of individuals in images that have been found as attractive and non-attractive, as will be discussed in more detail below.
  • the system of the present invention is able to analyse those preferred images selected to determine any correlation between those images and in doing so, establish rules governing attractiveness based on the dimensions & proportions of the facial features consistently present in those images.
  • each user will also be requested to rate their preferred image selected for attractiveness.
  • The attractiveness ratings will be governed by a star rating system between 1-5. In such a rating system, five stars are awarded to an image that is considered very attractive, with one star indicating an image that is not attractive. This rating system will be used for:
  • The host service will then be able to analyse the ratings applied to images and find the correlation between those images to identify any rules that govern beauty.
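As a minimal, hedged sketch of how the 1-5 star ratings could be aggregated before any correlation analysis, one option is a validated simple mean per image. The use of a mean is an assumption for illustration; the description does not fix a particular statistic:

```python
def mean_star_rating(ratings):
    """Aggregate 1-5 star ratings for one image into a single score.

    Ratings outside the 1-5 range described in the specification are
    rejected rather than silently clamped."""
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("star ratings must be between 1 and 5")
    return sum(ratings) / len(ratings)
```

Per-image scores of this kind could then be correlated with the measured facial feature dimensions of the rated faces.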
  • the process forms part of the normal functionality of the present system and provides a simple, cost effective and automated system for generating beauty data.
  • The host service 11 does not require volunteers or paid staff to rate images for this purpose.
  • the large database of individuals of various sexes, ages and races will be rating a large and varied database of facial images with diversified attractiveness levels.
  • Such extensive and varied data captured by the host service 11 enables a deep insight into beauty and enables the identification of key facial features, consistently present in images that have been perceived as attractive, that define beauty based on a large pool of people's opinions.
  • The host service 11 is able to determine the presence of such emotions when the user is reviewing images, tracking the user's face for emotions in order to gauge the true emotional sentiment of the user towards the image when rating for preference or attractiveness. This is achieved by using a camera to view the user's facial features in real time as they are reviewing the images. This assists in ensuring the accuracy of the data collected and in gauging whether or not the user truly feels positive towards the image they have selected.
  • the system is able to detect, in real time, the micro expressions displayed by the user.
  • the user's lips, mouth, eyebrows, eyelids, jaw, chin, forehead, pupils and overall facial expression of the face tell a story.
  • The present invention is able to detect the true emotional sentiment of the user towards the image. For example, if the micro-expressions displayed when viewing an image correspond to happiness when it is selected, then the selection the user has made would be considered true.
  • The methods adopted by the host service 11 will provide the ability to display multiple images of the same user.
  • the user who is rating an image for preference and attractiveness may be rating multiple images of the same individual at various facial expressions and angles. This means that with deep machine learning the system of the present invention is able to conduct a more reliable analysis and identify the hidden clues or facial feature dimensions and proportions consistently present in images that are found attractive and unattractive.
  • the Host Service 11 also comprises a social media platform in the form of a software application whereby users can access images saved on the host server database and rate those images in terms of attractiveness, via their smart phone or similar device.
  • the host service is able to also utilise this source of facial image rating to create a large facial database for analysis and for increasing the effectiveness of the facial imaging processing technique.
  • This system 40 is depicted in the flow chart of Fig. 7. Through the Host Service 11 providing a social media service or attractiveness rating software application, the number of images available for analysis and comparison purposes can be significantly increased, which enables the imaging processing engine used by the Host Service to be optimised and changed as face types change over time.
  • This system 40 is able to take head/face analysis information from the individual user registration process 41 (as described above in relation to method 30) based on the user's own profile pictures.
  • facial images are captured as part of the user profile set up process. As this process forms part of the normal functionality of the system, it will be a pre-requisite to using the present invention. Capturing these images and saving them onto the facial database will be easy, cost effective and automated. Users will be willing to provide these images as it is a necessity in order to effectively register with the host service to beautify their photos.
  • Any photographic posts 43, such as selfies or other photographs generated by a registered user to the social media site hosted by the Host Service 11, can also be captured and analysed by the Host Service system for this purpose. These images will typically be captured and uploaded to the Host Server database by the user. This process also forms part of the normal functionality of the Host Service. Although it will not be a requirement for users to upload their "selfies", it is envisaged that users will upload their "selfies" onto the Host Service social media platform, as users of such platforms enjoy having their photos "liked" by other users.
  • any third party posts 42 made to the social media site can also be utilised by the Host Service to generate a large facial database of images. These images will be obtained from third party companies and uploaded onto the Host Service Database. Obtaining images from third party companies such as modelling agencies, beauty and fashion magazines and beauty pageant organisers who already have a database of images of beautiful individuals, would normally be a difficult task. However as the social media site of the present invention will provide an incentive and a service that would create a mass benefit and value to third parties, obtaining such images will become much easier.
  • The Host Service 11 is able to record the images on a facial database stored on the one or more servers 12, for further analysis.
  • At step 45, measurements of each of the photographs captured and stored on the Facial Database in step 44 are able to be performed to generate data for use by the facial imaging software of the present invention.
  • These measurements may comprise various head measurements as well as measurements associated with seven facial features as will be discussed below. These measurements can be mapped and the various dimensions and proportions of skin colour, tone and texture of the individual faces can be extracted from the photographs, analysed and recorded in the facial database.
  • the data captured and recorded is then able to be used as inputs into the beautification algorithms employed by the facial imaging software for use in ensuring that the facial imaging software is continually learning and updating as beauty preferences change.
  • The present invention will be able to develop a large, quality database of facial images, facial feature dimensions and characteristics that will be extensive in terms of: the number of images accessible; variability in facial attractiveness; variability in ages and ethnic groups; facial angles and expressions; the environments the images have been captured in; and the number of images of individuals already classified as attractive.
  • Such a facial database of facial dimensions and facial characteristics will provide a quality, unique and valuable database of facial images not available previously.
  • Such a database will contain a large number of images of individuals of various sexes, ages and races with diversified attractiveness levels that will form the basis through which the self-learning facial imaging software will function.
  • The head/face analysis engine that forms part of the facial imaging processing technology of the present invention is able to accurately and precisely detect, track, trace and map the head and face by placing a minimum of 72-101 mapping points or markers on the individual's facial features, irrespective of whether the image is a still or moving picture.
  • the beautification engine of the present system and method is able to analyse the following seven main facial features:
  • Facial shape including cheeks and chin
  • the ability to track, map, extract, analyse and record the various features of a user's face is the key to identifying the level of attractiveness of a specific user prior to making any comparison as to how the user's specific facial geometry compares against an ideal beautiful facial profile.
  • accurate facial recognition and facial feature detection is important not only for beautification purposes but also to help identify the distortion degree, if any, in the relative positions and dimensions of the facial features in a captured image due to optics.
  • this is achieved by way of the device providing audio and visual cues to the user to guide and instruct them to position the camera at the correct distance from the user's head and to position their head at the correct angles with required facial expressions. Once there is proper and correct alignment, the image of the user will be automatically captured.
  • At step 51, at pre-image capture and in real time, a minimum of 72-101 mapping points is placed around the seven facial features and the periphery of the head to enable the facial and head dimensions to be measured and extracted. At this time, data associated with the face colour, skin tone and texture of the user is also extracted and analysed in this pre-image capture step.
  • At step 52, this data is extracted from the camera and analysed, and a search of the Host Service database is conducted to match the registered user facial profile stored for that user in step 53.
  • This step 53 is achieved by the recognition that every face has numerous distinguishable landmarks that make up facial features. These include features such as: distance between the eyes; width of the nose; depth of the eye sockets; shape of the cheekbones; and length of the jaw line. These features are unique and act as an identifier, much like a fingerprint.
  • A numerical code is created representing the face in the database. This code is then used for facial recognition purposes and is saved on the Host Service database for searching.
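As a hedged sketch of the numerical code and database search described above, facial measurements can be flattened into an ordered tuple and matched against stored profiles by nearest-neighbour distance. The key ordering, tolerance value and matching rule are illustrative assumptions, not the claimed method:

```python
import math

def face_code(measurements):
    """Flatten named facial measurements into an ordered numerical code.

    Sorting by key gives a stable, reproducible ordering; the real
    system's encoding is not specified and this is an assumption."""
    return tuple(measurements[k] for k in sorted(measurements))

def match_profile(code, database, tolerance=2.0):
    """Return the registered user whose stored code is nearest to the
    probe code, provided it lies within tolerance; otherwise None."""
    best_user, best_dist = None, float("inf")
    for user, stored in database.items():
        d = math.dist(code, stored)
        if d < best_dist:
            best_user, best_dist = user, d
    return best_user if best_dist <= tolerance else None

# Illustrative stored profiles: (interocular, nose width, mouth width).
database = {"alice": (40.0, 12.0, 24.0), "bob": (36.0, 14.0, 22.0)}
```

A probe measured from a new image would be encoded the same way and searched against every stored code; falling outside the tolerance corresponds to the "face not recognised, prompt to tag" path described elsewhere.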
  • At step 54, upon finding a facial match with the registered user facial profile, the dimensions extracted from the user's facial image are compared against the user's stored 2D or 3D model data.
  • At step 55, the facial data is fed to and analysed by the Host Service machine learning engine to compare the data against an ideal beautiful face profile and determine those features that differ from the range considered beautiful.
  • At step 56, the Host Service facial beautification engine applies a beautification algorithm to correct the image in accordance with the perceived differences between the measured features and those features considered beautiful. To take into consideration any effects of optical distortion, the user's facial angle, the distance of the face to the camera lens and the image light, shadows, colour tones, etc. are also analysed and measured.
  • The facial dimensions and proportions of each of the seven facial features in the specific image captured will be compared with the User's Facial Profile Data containing the specific user's facial geometry and skin characteristics captured at registration. Any differences in the relative proportions of each of the facial features (facial geometry), including the face colour and texture, are identified to enable the degree of optical distortion in facial features and characteristics to be identified and measured.
  • The method undertaken by the Host Service machine learning engine in step 55 to compare the user data against an ideal beautiful face profile is depicted in Fig. 10.
  • An underlying principle of the present invention is that facial beauty is measurable and that the key method in discovering universal beauty is to identify the consistencies in facial feature dimensions and soft biometrics found to be inherent in what are universally considered to be attractive faces.
  • The machine learning methods of the present invention provide a means for discovering a succinct set of rules that identify attractiveness and conclusively define universal facial beauty in two processes: ongoing analysis of data and images; and discovery of the consistencies and rules needed to build an ideal beautiful face.
  • the Host Service machine learning engine is able to access the stored database data and images collected from registered members. This information may be sourced directly from individual member profiles or the social media site(s) hosted by the Host Service, where the consistencies in the facial features are identified in images which have been assessed as attractive or not attractive.
  • the Host Service machine learning engine also accesses existing scientific research information. In this regard, there have been extensive studies conducted to discover universal beauty. Although these studies have not been conclusive, the studies have identified a variety of features and mathematical applications for assessing the beauty of an image. Such research is able to provide us with details on dimensions and proportions of the seven facial features identified in images consistently inherent in individual's faces that were found as attractive and unattractive.
  • the present Host Service machine learning engine can then compare these measurements to those of a registered user, find the differences, set new parameters for the facial features to correspond with the facial profile of an Ideal Beautiful Face (FPIBF) as further described below in relation to step 65, and then apply these new parameters to beautify the user's image.
  • The Host Service machine learning engine can also access other scientific research, such as the Marquardt Mask, which is a male and female mask developed by researcher Stephen Marquardt.
  • This research considers that by using such masks as a template and comparing the major anthropometric points of a face against the mask, the attractiveness level of an individual in the image can be measured. The closer the facial dimensions and proportions fit to the mask, the more attractive the individual will be perceived. So in order to beautify an image of a registered user of the present invention, that person's facial proportions and dimensions can be morphed to fit the mask as closely as possible. A variety of other studies have also been conducted in analysing facial beauty. In step 60, each of these methods can be accessed to generate as much data as possible to identify facial beauty. These models may include: The Golden Ratio; Vertical Thirds and Horizontal Fifths; Neoclassical Canons; Averageness; and Facial Symmetry.
  • the Host Service machine learning engine is able to constantly filter through the ever-expanding database of information and analyse and measure features that are considered both unattractive and attractive.
  • the Host Service machine learning engine can simply obtain this information as the user's selected profile image has been considered as attractive and those images rejected by the user during the registration process have been rated as less attractive.
  • The Host Service machine learning engine is able to simply analyse and measure all features considered to be attractive and not attractive from the camera images taken, the profile images stored, the pictures posted by third parties and the selfies posted by the individual members, to generate a constantly updating database of measurements and features.
  • At step 62, this information is analysed further to identify and discover a succinct set of consistencies inherent in the dimensions of the seven facial features and the skin characteristics of those faces that have been found attractive and unattractive.
  • This step includes taking measurements of the geometry of the face and head in the image, including: facial shape, including cheeks and chin; forehead height; eyebrow shape, including length, width and height; eye size, inter-eye distance and pupil position; nose shape, including length and width; and lips and teeth, including length and height.
  • ear length to interocular distance; ear length to nose width; mid-eye distance to interocular distance; mid-eye distance to nose width; mouth width to interocular distance; lips-chin distance to interocular distance; lips-chin distance to nose width; interocular distance to eye fissure width; interocular distance to lip height; nose width to eye fissure width; nose width to lip height; eye fissure width to nose-mouth distance; lip height to nose-mouth distance; length of face to width of face; nose-chin distance to lip-chin distance; and nose width to nose-mouth distance.
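A few of the listed proportions can be computed directly from raw distance measurements, as in the following sketch. The dictionary keys and the input measurement names are assumptions chosen for illustration; the full system would compute every ratio in the list above:

```python
def facial_ratios(m):
    """Compute a subset of the listed facial proportions from raw
    measurements (all in the same units, e.g. pixels)."""
    return {
        "mouth_width/interocular": m["mouth_width"] / m["interocular"],
        "nose_width/lip_height": m["nose_width"] / m["lip_height"],
        "face_length/face_width": m["face_length"] / m["face_width"],
    }

# Illustrative measurements for one face.
m = {
    "mouth_width": 24.0, "interocular": 40.0,
    "nose_width": 12.0, "lip_height": 6.0,
    "face_length": 120.0, "face_width": 80.0,
}
ratios = facial_ratios(m)
```

Because ratios are dimensionless, they allow faces captured at different scales and distances to be compared on equal terms.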
  • the Host Service machine learning engine will use a variety of existing software applications to perform this function. Extracting the geometry of the face will be based on the positions of the face region and the pupils and eyes. The face is mapped by placing 72-101 landmarks (mapping points) around the periphery of the head and facial features. The landmarks are extracted, facial measurements are calculated and finally the geometry of the face is generated.
  • The Facial Characteristics in the image, including facial colour, tone, clarity and texture, eye colour, lip colour and teeth colour, are extracted.
  • a BLBP software model and PCANet software model may be employed.
  • At step 63, the measurements taken are statistically analysed to discover rules and/or relationships within the data.
  • step 64 these rules or relationships can be used to define the dimensions or proportions in the seven facial features and skin colour, tone and texture that are required to be present in an ideal beautiful face in accordance with the collected data. This can be determined across all environments and camera angles.
  • step 65 using the data of consistencies identified and the results of the experiments derived from the conventional methods of facial beauty analysis, the Host Service machine learning engine will build a set of rules that will represent a Facial Profile of an Ideal Beautiful Face (FPIBF).
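One simple, hedged way to picture the rule-building of steps 63-65 is to derive, for each facial ratio, an acceptable range from the statistics of faces rated attractive. The mean ± one standard deviation rule below is an assumption for illustration only; the actual machine learning engine may discover far richer rules:

```python
def build_fpibf(attractive_ratios, k=1.0):
    """Derive an ideal (low, mean, high) range for each ratio from
    faces rated attractive, as a stand-in for the FPIBF rule set.

    attractive_ratios: list of dicts, one per attractive face, all
    sharing the same ratio keys."""
    keys = attractive_ratios[0].keys()
    fpibf = {}
    for key in keys:
        values = [r[key] for r in attractive_ratios]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        std = var ** 0.5
        fpibf[key] = (mean - k * std, mean, mean + k * std)
    return fpibf

# Two illustrative attractive faces sharing one measured ratio.
fpibf = build_fpibf([{"face_length/face_width": 1.0},
                     {"face_length/face_width": 3.0}])
```

A user's measured ratio falling outside the derived range would mark that feature as a candidate for morphing toward the mean.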
  • this data will form the mathematical basis from which a beautification algorithm is automatically developed and then used by the Host Service facial beautification engine to apply correction to the image in accordance with perceived differences between measured features and those features considered beautiful.
  • The Host Service Beautification Engine applies the morphing (beautification) of the image, in real time, in order to enhance the facial appeal of the user's face in the source image.
  • The manner in which this is done will be described below in relation to Fig. 11. It is generally achieved by subtly morphing the image to a level at which it will be perceived as attractive (irrespective of the viewer), whilst maintaining a natural appearance that is as close as possible to the original source image.
  • At step 70, the dimensions and proportions of each of the seven facial features in the image taken are compared generally with the facial features of the FPIBF obtained from the Host Service machine learning engine to determine the level of morphing, or degree of changes, to be performed on the user's facial geometry and skin characteristics to fit closely with the FPIBF.
  • At step 71, the beautification algorithm is applied to the facial dimensions and proportions of the user's face in the image; the distances between the variety of facial feature locations are extracted and the differences in the relative positions and dimensions of each of the facial features (facial geometry), including in the face colour and texture, between the image and the FPIBF are identified.
  • New target landmark/mapping points are identified, and the degree of adjustments to be made to the image is calculated and set. The new target is set based on the FPIBF, so it fits as closely as possible to it, while still maintaining a natural appearance and an unmistakable similarity to the original user's source image.
  • the morphing could be applied by adjusting the dimensions of the seven facial features against the FPIBF (step 72) or by adjusting the user's skin tone, colour and texture against the FPIBF (step 73) or a combination of both.
  • The percentage by which each of the facial feature dimensions and distances is morphed is dependent on the differences identified between the user's facial dimensions and proportions in the specific image captured and the FPIBF. The larger the difference, the larger the percentage change that may be performed.
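The proportionality described above, where a larger deviation from the FPIBF permits a larger change, can be sketched as follows. The cap on the morph fraction, included here so that the result stays recognisably the user, is an illustrative assumption rather than a specified parameter:

```python
def morph_fraction(user_value, ideal_value, max_fraction=0.5):
    """Scale the morph applied to a feature by its relative deviation
    from the FPIBF value, capped so the face stays recognisable."""
    deviation = abs(user_value - ideal_value) / ideal_value
    return min(deviation, max_fraction)

def morph_value(user_value, ideal_value):
    """Move the feature partway toward the ideal by the computed fraction."""
    f = morph_fraction(user_value, ideal_value)
    return user_value + f * (ideal_value - user_value)
```

A feature already near its ideal value is thus barely touched, while a strongly deviating feature is morphed more aggressively, up to the cap.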
  • the Facial Characteristics in the image are also able to be morphed to better fit the FPIBF.
  • the facial skin region is also able to be identified.
  • the facial texture, tone and colour will be improved by using multilevel median filtering to: remove imperfections in the skin such as scars and acne; improve skin colour, texture; and whiten teeth.
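Median filtering removes isolated outliers, such as a scar or blemish pixel, while preserving edges better than simple averaging. A minimal one-dimensional sketch over a strip of pixel values is shown below; the multilevel filtering referred to above would, under this reading, repeat such passes at increasing window sizes over the two-dimensional skin region:

```python
def median_filter(values, radius=1):
    """One pass of 1-D median filtering over a strip of pixel values.

    Each output sample is the median of the window centred on it,
    truncated at the ends of the strip."""
    out = []
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = sorted(values[lo:hi])
        out.append(window[len(window) // 2])
    return out
```

Applied to a strip containing one bright blemish pixel, the filter restores the surrounding skin value while leaving uniform regions unchanged.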
  • Optical rectification may be done simultaneously with facial beautification/morphing, or it may be performed separately.
  • step 74 a number of beautified images with varying beautification percentage levels are presented to the user via their camera after the image has been captured.
  • At step 75, the user is able to manually rank the images from most favourite to least favourite.
  • The data derived from the user simply selecting their preferences is critical: it helps in understanding the perception of beauty and contributes to improving or fine-tuning the Facial Profile of an Ideal Beautiful Face (FPIBF) and, ultimately, to defining universal beauty.
  • step 76 the images are then saved in the user's profile with the Host Server and their preference data is saved and fed to the Host Server machine learning engine for further analysis.
  • the user's selection forms part of the ongoing fine tuning of the Facial Profile of an Ideal Beautiful Face (FPIBF).
  • The predetermined beautification algorithm specific to the user's preferences is updated and set, and future images captured are beautified according to this saved algorithm. The data derived from the user simply selecting their preferences (favourite image) is critical: it helps in better understanding the perception of beauty and contributes to improving or fine-tuning the Facial Profile of an Ideal Beautiful Face (FPIBF) and, ultimately, to defining universal beauty. All data is fed to the Machine Learning Engine and analysed. The user's selection forms part of the ongoing fine-tuning of the FPIBF. If the current selection is inconsistent with, or varies from, the last selection, the predetermined beautification algorithm specific to the user's preferences is updated and set, and future images captured are beautified accordingly.
  • Optical distortion of the user's facial features and skin colour and texture may occur in captured images depending on factors such as: the distance of the user's face to the camera lens; the angle of the user's face at which the image was captured; the facial expressions; and the environment under which the image was captured. Some or all of these situations may have a substantial optical distortion effect on the relative positions and dimensions of the user's facial features and skin colour and texture, including creating an illusion that the ears are of different sizes, the nose appearing bulbous, the head or chin appearing pointed, and the skin colour and texture appearing uneven and coarse.
  • the present invention seeks to address this during the pre-image capture step, as referred to in Fig. 8.
  • At step 51, during pre-image capture, the user's facial features and the dimensions of the head are tracked instantly and in real time, and mapping points are placed around the facial features of interest, as previously discussed.
  • the measurements of the facial features are extracted and matched to the User Facial Profile Data captured at registration. Once matched, that specific user's face is recognised.
  • If no match can be made, the technology will be unable to recognise the subject's face and, as a result, mapping and beautifying the subject's face will not be possible.
  • The circumstances under which this may happen are: the resolution of the image is poor or the image is out of focus; poor lighting or excessive shadowing prevents the key facial markers from being automatically identified; the user is not present in the photo; there are no individuals present in the photo; or the subject's face cannot otherwise be detected. In such situations, the subject will be prompted to tag their face in the image. Once tagged, the beautification will be applied to the tagged face.
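The recognition step — matching extracted feature measurements against the stored User Facial Profile Data, with a fall-back prompt to tag the face manually — might be sketched like this. The feature names, the tolerance value, and the return strings are hypothetical, not the patent's actual data model:

```python
def match_face(measured, profile, tolerance=0.05):
    """Compare measured facial-feature ratios against the stored
    User Facial Profile Data; return True only when every feature is
    present and within the (illustrative) relative tolerance band."""
    for feature, ideal in profile.items():
        value = measured.get(feature)
        if value is None or abs(value - ideal) / ideal > tolerance:
            return False
    return True


def recognise_or_prompt(measured, profile):
    """Recognise the face, or fall back to manual tagging when
    recognition fails (poor focus, occlusion, face absent, etc.)."""
    if match_face(measured, profile):
        return "recognised"
    return "prompt user to tag their face"
```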
  • the details captured about the user's face in step 51 are compared with the User's Facial Profile Data to identify the degree of optical distortion.
  • The present invention then compares the dimensions and proportions of each of the seven facial features to identify the level of distortion, if any, in the relative positions and dimensions of the facial features, including changes in face colour and texture.
  • Once the distortion level is ascertained, it is fed to the Host Service Beautification Engine, which determines the level of change or morphing of the user's face that needs to be performed to correct the optical distortion.
  • The Host Service Beautification Engine is thus able to understand the applicable distortion level and take steps to correct the distortion back to the original, or to improved (beautified), proportions and dimensions.
  • Any optical distortion of the image is automatically corrected by morphing the dimensions and proportions of the facial features in the image and by applying various optical filters to the image using multi-level median filtering.
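One plausible reading of "multi-level median filtering" is applying median filters at several window sizes and combining the results, which smooths skin texture while largely preserving strong edges. The following sketch (pure NumPy, with arbitrary window sizes) illustrates that reading; it is an assumption, not the filter specified in the patent:

```python
import numpy as np


def median_filter2d(image, size):
    """Simple 2-D median filter with edge replication."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out


def multi_level_median(image, sizes=(3, 5, 7)):
    """Apply median filters at several window sizes and average the
    results — one interpretation of 'multi-level' median filtering."""
    levels = [median_filter2d(np.asarray(image, dtype=float), s) for s in sizes]
    return np.mean(levels, axis=0)
```

A single bright speckle (e.g. a skin blemish pixel) is removed at every level, while a flat region passes through unchanged.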
  • The Beautification Engine of the present invention is able to calculate the degree of adjustment to make and, automatically, instantly and in real time, apply an algorithm to beautify the subject's face. This is done initially using user feedback and scientific research as its basis; as further data on the definition of beauty is gathered, the engine will learn from this data to further develop, improve or fine tune the beautification algorithm. Beautification can be performed on males and females of most ages and races, in repose and smiling facial expressions, and at various facial angles.
  • the software application is able to learn, refine, and automatically develop its own improved algorithms that will ultimately produce more accurate beautified images of users.
  • This is achieved by recording which of the set of provided manipulated photographs the user prefers and storing those preferences.
  • The aim of the present invention is to automatically provide pictures that have been beautified to a level that satisfies the user, without requiring the user to make any further selections.
  • The system may continue to provide multiple preference options to the user until the user's preferences have been learned by the software system. This may occur when the preferences are consistent and the system can track the level of consistency as being above a predetermined target. At such a time, providing multiple options for the user to select from is no longer necessary, and the system may function as a conventional camera with the beautification engine operating in the background.
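The consistency-tracking logic described above — keep offering multiple options until selections become consistent beyond a predetermined target, then switch to conventional-camera mode, and reset on a manual override — can be sketched as follows. The window size and target threshold are illustrative assumptions:

```python
class ConsistencyTracker:
    """Decides when the user's preferences count as 'learned': once the
    fraction of recent selections that match the previous selection
    exceeds a target over a full window, stop presenting options."""

    def __init__(self, target=0.8, window=10):
        self.target = target
        self.window = window
        self.history = []   # 1 = consistent with previous pick, 0 = changed
        self.last = None

    def record(self, choice):
        if self.last is not None:
            self.history.append(1 if choice == self.last else 0)
            self.history = self.history[-self.window:]
        self.last = choice

    def learned(self):
        if len(self.history) < self.window:
            return False
        return sum(self.history) / len(self.history) >= self.target

    def reset(self):
        """Called when the user manually overrides the beautification,
        restarting the learning process."""
        self.history.clear()
        self.last = None
```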
  • the beautification process applied automatically by the software system may no longer be preferred by the user.
  • the user may be able to manually override the beautification process to alter the image manipulation.
  • In that case, the learning process will be reset and the user will be provided with multiple options to choose from until the required level of consistency is once again detected.
  • The data associated with the preference is saved and stored in the host service 11 or in the device 16, to be repeated and applied at the next opportunity.
  • A survey may be provided by the host service 11 for completion by one or more of the registered users.
  • Such a survey may provide users with multiple images and request each user to rate the images by selecting their preference, in return for credits or a similar form of incentive.
  • The host service 11 may provide a social network service to facilitate interaction between members.
  • A user (User 1) may provide a rating of a photograph and will have the option to meet the person (User 2) whose photo they have just rated.
  • During the image selection process, every time User 1 selects an image from the various options presented to them, they will also be given the option to click a "like" button if they want to meet that individual, User 2.
  • If User 1 clicks "like", the other individual, User 2, will receive a notification that User 1 liked User 2's photo and would like to meet them.
  • User 2 must then also rate User 1's photo in order to qualify to meet. If User 1 and User 2 like each other's photos, they will be introduced to each other. Introductions will only be possible through the host service if photos are rated and both users like each other's photos. Each user will also be able to see how many people liked their picture.
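The mutual-rating rule above amounts to a simple predicate: an introduction happens only when both users have rated and liked each other's photos. A hypothetical sketch, where the `likes` mapping and function names are assumptions rather than the host service's actual data model:

```python
def should_introduce(likes):
    """Return a predicate implementing the mutual-like rule.
    `likes` maps (rater, ratee) -> True if the rater liked the photo;
    an absent pair means the rater has not rated that photo."""
    def mutual(a, b):
        return likes.get((a, b), False) and likes.get((b, a), False)
    return mutual


def like_count(likes, user):
    """Number of people who liked this user's picture — the count
    each user is able to see."""
    return sum(1 for (_, ratee), liked in likes.items()
               if ratee == user and liked)
```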
  • The software application associated with the present invention will analyse all aspects of the photo, including the lighting, contrast, colour, shadows, facial angles and the proximity of the face to the camera lens.
  • The software application will take these aspects into consideration during the photo manipulation step, and will learn from the correlation with the beautified images the user selects of themselves under the various conditions, circumstances and environments to define:
  • The system of the present invention may comprise an embodiment that enables users to save their photographs and videos to a storage library that may be hosted by the host service 11. The user can then further manipulate photographs within the library, with each manipulation being recorded by the system to facilitate further machine learning about the user's preferences.
  • The device and system may be able to collect micro expressions from the user. This includes real-time analysis of the user's facial expressions during selection of their photograph options. These micro facial expressions can be used to ascertain the emotions generated by the user after viewing their image, which can be used to more actively gauge the user's preferences. Similarly, when users are required to rate photographs of other users, their micro expressions can provide an indication of whether they like or dislike a photo.
  • The software application may then automatically render the individual's face, as supplied in the photograph of step 1, with makeup in various recommended styles and display it on the screen as a still image. The user will then have the option to select the look they most prefer. Once a selection is made, the person's face will be rendered with makeup and displayed on the screen in real time. They will be able to move their face whilst still having their face rendered with makeup in real time.
  • the software application will then display two images of the user in real time next to each other on the one screen.
  • One image will be with the makeup applied and the other will be an image of the user with no makeup.
  • The goal is for both real-time images to eventually look the same; that is, the screen displays what the user looks like now and what the expected result will be once they have finished physically applying the makeup, all in real time.
  • The software application will then provide step-by-step instructions and guidance on how to physically apply the makeup to achieve the beautified, simulated version of themselves displayed on the screen.
  • The instructions will be in visual and audio formats and given in real time, instructing the user as to what makeup to use, including colours and types, and where and how to apply it. Visually, directions will be displayed on the screen with arrows, highlights and the like.
  • The software application will detect any mistakes made while the user applies the makeup. Once a mistake is detected, a notification will be triggered and displayed on the screen, specifying the mistake and suggesting options to correct it.
  • On completion, the simulated version of the image (the target) displayed at the beginning of the makeup application process and the real-time image of the user displayed on the screen should look the same.
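Checking that the live image converges toward the simulated target could be done by comparing the two images region by region, which is also one way the mistake detection described above might work. A hypothetical sketch, in which the region boxes and colour-difference threshold are assumptions:

```python
import numpy as np


def detect_makeup_mistakes(target, live, regions, threshold=20.0):
    """Compare the live camera frame against the simulated target image
    region by region; return the names of regions whose mean absolute
    colour difference exceeds an (illustrative) threshold."""
    mistakes = []
    for name, (y0, y1, x0, x1) in regions.items():
        diff = np.abs(target[y0:y1, x0:x1].astype(float) -
                      live[y0:y1, x0:x1].astype(float))
        if diff.mean() > threshold:
            mistakes.append(name)
    return mistakes
```

In this sketch, a region flagged as a mistake would trigger the on-screen notification, and an empty result would indicate the live image matches the target.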
  • The system and method of the present invention function to collect and identify the major anthropometric features associated with a user's face, so as to apply a digital alteration to the image of the user's face as the photograph is taken, in real time, without the need for multiple interactions by the individual to manipulate their image.
  • By coupling the camera with a software approach that is able to identify major anthropometric points on the individual's face as the photograph is taken, and to compare these points against an ideal human face, it is possible for the software application to identify those features which diverge from the ideal location or position and to digitally alter and correct such divergences in forming the original photograph.
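The correction step just described — moving detected anthropometric points toward their positions on an ideal face — can be illustrated with a simple linear blend of landmark coordinates. The function name, the dictionary representation of landmarks, and the `strength` parameter are illustrative assumptions, not the patent's method:

```python
def morph_toward_ideal(points, ideal_points, strength=0.5):
    """Move each detected anthropometric point part-way toward its
    position on the ideal face: strength=0 leaves the face unchanged,
    strength=1 matches the ideal exactly (a plain linear blend)."""
    return {
        name: (x + strength * (ideal_points[name][0] - x),
               y + strength * (ideal_points[name][1] - y))
        for name, (x, y) in points.items()
    }
```

A warping stage (not shown) would then deform the image so that each feature lands on its blended coordinate.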

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Processing (AREA)

Abstract

The present invention concerns a method of processing an image, comprising: capturing an image containing a user's face; analysing the image and generating data associated with predetermined features of the user's face; comparing the generated data with template data representative of characteristics of the user's ideal facial features to determine differences between said generated data and said template data in at least one of a plurality of facial features; modifying the image by varying at least one of said plurality of predetermined features of the user's face to minimise said differences; displaying said modified image to the user; and storing said modified image.
PCT/AU2017/000087 2016-04-12 2017-04-12 System and method for processing photographic images WO2017177259A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2016901367 2016-04-12
AU2016901367A AU2016901367A0 (en) 2016-04-12 System and Method for Processing Photographic Images

Publications (1)

Publication Number Publication Date
WO2017177259A1 true WO2017177259A1 (fr) 2017-10-19

Family

ID=60041274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2017/000087 WO2017177259A1 (fr) 2016-04-12 2017-04-12 System and method for processing photographic images

Country Status (1)

Country Link
WO (1) WO2017177259A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108008815A (zh) * 2017-11-30 2018-05-08 西安科锐盛创新科技有限公司 Human-computer interaction method based on eye state recognition technology
CN108765264A (zh) * 2018-05-21 2018-11-06 深圳市梦网科技发展有限公司 Image beautification method, apparatus, device and storage medium
WO2019085792A1 (fr) * 2017-10-31 2019-05-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, readable storage medium and electronic device
CN110263635A (zh) * 2019-05-14 2019-09-20 中国人民解放军火箭军工程大学 Marker detection and recognition method based on structured forests and PCANet
WO2020076356A1 (fr) * 2018-10-08 2020-04-16 Google Llc Systems and methods for providing feedback for artificial intelligence-based image capture devices
CN111275650A (zh) * 2020-02-25 2020-06-12 北京字节跳动网络技术有限公司 Beauty processing method and apparatus
CN111429439A (zh) * 2020-03-31 2020-07-17 北京新氧科技有限公司 Aesthetic feature evaluation method, apparatus and terminal
US10825150B2 (en) 2017-10-31 2020-11-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium
US10853929B2 (en) 2018-07-27 2020-12-01 Rekha Vasanthakumar Method and a system for providing feedback on improvising the selfies in an original image in real time
US11341619B2 (en) 2019-12-11 2022-05-24 QuantiFace GmbH Method to provide a video with a computer-modified visual of a desired face of a person
CN115018698A (zh) * 2022-08-08 2022-09-06 深圳市联志光电科技有限公司 Image processing method and system for human-computer interaction
WO2022197429A1 (fr) * 2021-03-15 2022-09-22 Tencent America LLC Methods and systems for extracting color from a facial image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060132506A1 (en) * 1997-03-06 2006-06-22 Ryuichi Utsugi Method of modifying facial images, makeup simulation method, makeup method, makeup support apparatus and foundation transfer film
US20120299945A1 (en) * 2006-05-05 2012-11-29 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US20150199558A1 (en) * 2014-01-10 2015-07-16 Pixtr Ltd. Systems and methods for automatically modifying a picture or a video containing a face

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060132506A1 (en) * 1997-03-06 2006-06-22 Ryuichi Utsugi Method of modifying facial images, makeup simulation method, makeup method, makeup support apparatus and foundation transfer film
US20120299945A1 (en) * 2006-05-05 2012-11-29 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US20150199558A1 (en) * 2014-01-10 2015-07-16 Pixtr Ltd. Systems and methods for automatically modifying a picture or a video containing a face

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10825150B2 (en) 2017-10-31 2020-11-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium
WO2019085792A1 (fr) * 2017-10-31 2019-05-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, readable storage medium and electronic device
CN108008815B (zh) * 2017-11-30 2021-05-25 永目堂股份有限公司 Human-computer interaction method based on eye state recognition technology
CN108008815A (zh) * 2017-11-30 2018-05-08 西安科锐盛创新科技有限公司 Human-computer interaction method based on eye state recognition technology
CN108765264A (zh) * 2018-05-21 2018-11-06 深圳市梦网科技发展有限公司 Image beautification method, apparatus, device and storage medium
CN108765264B (zh) * 2018-05-21 2022-05-20 深圳市梦网科技发展有限公司 Image beautification method, apparatus, device and storage medium
US10853929B2 (en) 2018-07-27 2020-12-01 Rekha Vasanthakumar Method and a system for providing feedback on improvising the selfies in an original image in real time
CN112585940A (zh) * 2018-10-08 2021-03-30 谷歌有限责任公司 Systems and methods for providing feedback for artificial intelligence-based image capture devices
US11995530B2 (en) 2018-10-08 2024-05-28 Google Llc Systems and methods for providing feedback for artificial intelligence-based image capture devices
US11403509B2 (en) 2018-10-08 2022-08-02 Google Llc Systems and methods for providing feedback for artificial intelligence-based image capture devices
WO2020076356A1 (fr) * 2018-10-08 2020-04-16 Google Llc Systems and methods for providing feedback for artificial intelligence-based image capture devices
CN110263635A (zh) * 2019-05-14 2019-09-20 中国人民解放军火箭军工程大学 基于结构森林和PCANet的标志物检测与识别方法
US11341619B2 (en) 2019-12-11 2022-05-24 QuantiFace GmbH Method to provide a video with a computer-modified visual of a desired face of a person
CN111275650A (zh) * 2020-02-25 2020-06-12 北京字节跳动网络技术有限公司 Beauty processing method and apparatus
EP4113430A4 (fr) * 2020-02-25 2023-08-09 Beijing Bytedance Network Technology Co., Ltd. Beauty processing method and device
US11769286B2 (en) 2020-02-25 2023-09-26 Beijing Bytedance Network Technology Co., Ltd. Beauty processing method, electronic device, and computer-readable storage medium
CN111275650B (zh) * 2020-02-25 2023-10-17 抖音视界有限公司 Beauty processing method and apparatus
CN111429439A (zh) * 2020-03-31 2020-07-17 北京新氧科技有限公司 Aesthetic feature evaluation method, apparatus and terminal
WO2022197429A1 (fr) * 2021-03-15 2022-09-22 Tencent America LLC Methods and systems for extracting color from a facial image
CN115018698A (zh) * 2022-08-08 2022-09-06 深圳市联志光电科技有限公司 Image processing method and system for human-computer interaction

Similar Documents

Publication Publication Date Title
WO2017177259A1 (fr) System and method for processing photographic images
JP7075085B2 (ja) 全身測定値抽出のためのシステムおよび方法
US9760935B2 (en) Method, system and computer program product for generating recommendations for products and treatments
US11321385B2 (en) Visualization of image themes based on image content
US8265351B2 (en) Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US8660319B2 (en) Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US8620038B2 (en) Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US10607372B2 (en) Cosmetic information providing system, cosmetic information providing apparatus, cosmetic information providing method, and program
EP3338217B1 (fr) Détection et masquage de caractéristique dans des images sur la base de distributions de couleurs
US10049477B1 (en) Computer-assisted text and visual styling for images
CN109310196B (zh) 化妆辅助装置以及化妆辅助方法
US8379999B2 (en) Methods, circuits, devices, apparatuses and systems for providing image composition rules, analysis and improvement
Zhang et al. Computer models for facial beauty analysis
EP3182362A1 (fr) Procédé et système d'évaluation de condition physique entre porteurs de lunettes et lunettes ainsi portées
Jiang et al. Photohelper: portrait photographing guidance via deep feature retrieval and fusion
WO2020211347A1 (fr) Procédé et appareil de modification d'image par reconnaissance faciale, et dispositif informatique
AU2015263079A1 (en) ID information for identifying an animal
CA3050456C (fr) Systemes et methodes de modelisation faciale et de recherche de correspondance
De Pessemier et al. Enhancing recommender systems for TV by face recognition
Day et al. Physical and perceptual accuracy of upright and inverted face drawings
WO2016030620A1 (fr) Systeme de recommandation, procede, programme informatique et support correspondant
KR101734212B1 (ko) 표정 연습 시스템
US9373021B2 (en) Method, apparatus and system for outputting a group of images
CN115130493A (zh) 基于图像识别的面部形变推荐方法、装置、设备和介质
CN113408452A (zh) 表情重定向训练方法、装置、电子设备和可读存储介质

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

WPC Withdrawal of priority claims after completion of the technical preparations for international publication

Ref document number: 2016901367

Country of ref document: AU

Date of ref document: 20181009

Free format text: WITHDRAWN AFTER TECHNICAL PREPARATION FINISHED

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17781626

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17781626

Country of ref document: EP

Kind code of ref document: A1