EP3707678A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
EP3707678A1
Authority
EP
European Patent Office
Prior art keywords
image
parameter
face
image processing
contextual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19775892.3A
Other languages
German (de)
English (en)
Other versions
EP3707678A4 (fr)
Inventor
Albert SAÀ-GARRIGA
Karthikeyan SARAVANAN
Alessandro VANDINI
Antoine LARRECHE
Daniel ANSORREGUI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2019/003449 (WO2019190142A1)
Publication of EP3707678A1
Publication of EP3707678A4
Legal status: Withdrawn


Classifications

    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 7/11 Region-based segmentation
    • G06V 10/30 Noise filtering
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • the disclosure relates to methods and devices for processing an image. More particularly, the disclosure relates to methods of detecting and manipulating a face in an image and devices for performing the methods.
  • an image processing method includes detecting a face present in an image, obtaining at least one feature from the detected face as at least one facial parameter, obtaining at least one context related to the image as at least one contextual parameter, determining a manipulation point for manipulating the detected face, based on the obtained at least one facial parameter and the obtained at least one contextual parameter, and manipulating the image based on the determined manipulation point.
  • by manipulating a face on an image using context information, the user can perform an appropriate manipulation in accordance with the beauty concept of each culture, increase the effect of an advertisement, and protect privacy.
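  • purely as an illustration, the method summarized above may be sketched as follows; every function name and value in this sketch is a hypothetical placeholder, not part of the disclosure:

```python
# Hypothetical sketch of the disclosed pipeline (see FIG. 2); all names
# and values here are placeholders invented for illustration.

def detect_face(image):
    # Stand-in for any known face detection algorithm.
    return {"bbox": (40, 40, 120, 120)}

def obtain_facial_parameters(face):
    # Features obtained from the detected face image.
    return {"face_shape": "oval", "skin_tone": 0.6, "illumination": 0.8}

def obtain_contextual_parameters(metadata, user_profile):
    # Information from outside the face image: capture context, user, device.
    return {"location": metadata.get("location"),
            "nationality": user_profile.get("nationality")}

def determine_manipulation_point(facial, contextual):
    # Combine both parameter sets to decide what to change in the image.
    if contextual.get("location") == "bar":
        # Example from the description: illumination may be distorted in a bar.
        facial = {k: v for k, v in facial.items() if k != "illumination"}
    return {"target": "face_shape", "reference": facial.get("face_shape")}

def manipulate(image, manipulation_point):
    # Apply the change (model blending, replacement, or an image filter).
    return image

def process_image(image, metadata, user_profile):
    face = detect_face(image)
    facial = obtain_facial_parameters(face)
    contextual = obtain_contextual_parameters(metadata, user_profile)
    point = determine_manipulation_point(facial, contextual)
    return manipulate(image, point)
```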
  • FIG. 1 is a configuration diagram of an image processing device according to an embodiment of the disclosure
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the disclosure
  • FIG. 3 is a diagram illustrating a method of manipulating an image, according to an embodiment of the disclosure.
  • FIG. 4 is a diagram of an example of a face model used to manipulate an image, according to an embodiment of the disclosure.
  • FIG. 5 is a diagram of an example of determining facial parameters to be applied based on contextual parameters, according to an embodiment of the disclosure
  • FIG. 6 is a flowchart of an example of determining facial parameters to be applied based on contextual parameters, according to an embodiment of the disclosure
  • FIG. 7 is a diagram of an example of applying contextual parameters and then applying facial parameters based on results thereof, according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of an example of applying contextual parameters and then applying facial parameters based on results thereof, according to an embodiment of the disclosure
  • FIG. 9 is a structural diagram of a device for processing an image, according to an embodiment of the disclosure.
  • FIG. 10 is a flowchart of a method, performed by a clustering unit, of selecting a manipulation point using a machine learning algorithm, according to an embodiment of the disclosure
  • FIG. 11 is another flowchart of an image processing method according to an embodiment of the disclosure.
  • FIG. 12 is another flowchart of an image processing method according to an embodiment of the disclosure.
  • FIG. 13 is a diagram illustrating an example of differently enhancing a face on an image according to a user by applying contextual parameters in a beauty application, according to an embodiment of the disclosure
  • FIG. 14 is a diagram of an example of manipulating a face of an advertising model to be similar to that of a target consumer, according to an embodiment of the disclosure.
  • FIG. 15 is a diagram of an example of manipulating a face on an image by applying contextual parameters to protect privacy, according to an embodiment of the disclosure.
  • an image processing method includes detecting a face present in an image, obtaining at least one feature from the detected face as at least one facial parameter, obtaining at least one context related to the image as at least one contextual parameter, determining a manipulation point for manipulating the detected face, based on the obtained at least one facial parameter and the obtained at least one contextual parameter, and manipulating the image based on the determined manipulation point.
  • the determining of the manipulation point for manipulating the detected face may include selecting at least one parameter to be used to determine the manipulation point from among the at least one facial parameter, based on at least one of the obtained at least one contextual parameter.
  • the determining of the manipulation point for manipulating the detected face may include selecting, from among the at least one facial parameter, at least one parameter to be excluded from or corrected in a process of determining the manipulation point, based on at least one of the at least one contextual parameter.
  • the determining of the manipulation point may include, when the obtained at least one contextual parameter is a plurality of contextual parameters, generating a plurality of clusters by combining the contextual parameters according to various combination methods, selecting one of the generated plurality of clusters, and determining the manipulation point corresponding to the selected cluster.
  • One of the plurality of clusters may be selected using a machine learning algorithm with the obtained at least one contextual parameter as an input value.
  • the determining of the manipulation point may include selecting, from a plurality of face models, one face model to be combined with the detected face.
  • the manipulating of the image may include replacing at least a portion of the detected face with a corresponding portion of the selected face model.
  • the determining of the manipulation point may include selecting one of a plurality of image filters based on the obtained at least one facial parameter and the obtained at least one contextual parameter, and wherein the manipulating of the image includes applying the selected image filter to the image.
  • the at least one contextual parameter may include at least one of person identification information for identifying at least one person appearing on the image, a profile of the identified at least one person, a profile of a user manipulating the image, a relationship between the user manipulating the image and the identified at least one person, a location where the image was captured, a time when the image was captured, weather of an image capture time, information about a device used to capture the image, an image manipulation history of the user manipulating the image and the identified at least one person, or evaluation information of the image.
  • the selecting of the one face model to be combined with the detected face may include presenting a plurality of face models extracted based on the obtained at least one facial parameter and the obtained at least one contextual parameter to a user, and receiving a selection of one of the plurality of presented face models from the user.
  • an image processing device includes at least one processor configured to detect a face present in an image, obtain at least one feature from the detected face as at least one facial parameter, obtain at least one context related to the image as at least one contextual parameter, determine a manipulation point for manipulating the detected face based on the obtained at least one facial parameter and the obtained at least one contextual parameter, and manipulate the image based on the determined manipulation point, and a display configured to display the manipulated image.
  • the at least one processor may be further configured to select, from among the at least one facial parameter, at least one parameter to be used to determine the manipulation point, based on at least one of the at least one contextual parameter.
  • the at least one processor may be further configured to select, from among the at least one facial parameter, at least one parameter to be excluded from or corrected in a process of determining the manipulation point, based on at least one of the at least one contextual parameter.
  • the at least one processor may be further configured to, when the obtained at least one contextual parameter is a plurality of contextual parameters, generate a plurality of clusters by combining the obtained at least one contextual parameter according to various combination methods, select one of the generated plurality of clusters, and determine the manipulation point corresponding to the selected cluster.
  • the at least one processor may be further configured to determine the manipulation point by selecting, from a plurality of face models, one face model to be combined with the detected face.
  • the at least one processor may be further configured to manipulate the image by replacing at least a portion of the detected face with a corresponding portion of the selected face model.
  • the at least one processor may be further configured to determine the manipulation point by selecting one of a plurality of image filters based on the obtained at least one facial parameter and the obtained at least one contextual parameter, and manipulate the image by applying the selected image filter to the image.
  • a non-transitory computer-readable recording medium having recorded thereon a computer program for executing the method is provided.
  • the expression "at least one of a, b or c" indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof.
  • FIG. 1 is a configuration diagram of an image processing device 100 according to an embodiment of the disclosure.
  • the image processing device 100 may include a processor 130 and a display 150.
  • the processor 130 may detect a face of an object existing on an image.
  • the processor 130 may include a plurality of processors.
  • the processor 130 may detect a face of each person and sequentially perform image manipulations of the disclosure on the detected face.
  • the processor 130 may also obtain at least one feature obtained from a detected face image as at least one facial parameter.
  • a feature that may be obtained from the face image may include a type of a face, a size of the face, shapes of ears, eyes, mouth, and nose, a facial expression of a person, an emotion of a person, albedo of light with respect to a part of the face, intensity of illumination, a direction of illumination, etc.
  • a facial parameter may refer to information categorized by combining in various ways the above features which may be obtained from the face image that is an object of image manipulation.
  • the facial parameter may be obtained in a variety of ways.
  • the facial parameter may be obtained by applying a facial parameterization algorithm to an image.
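  • as an illustration only, the following sketch shows one way such a facial parameterization step might look; the use of OpenCV's Haar cascade and the derived feature names are assumptions, since the disclosure does not name a specific algorithm:

```python
# Illustrative facial parameterization sketch; the feature set below is a
# small, assumed subset of the features listed in the description.
import cv2

def obtain_facial_parameters(image_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    params = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        params.append({
            "face_size": (w, h),
            "aspect_ratio": w / h,                        # rough face-shape cue
            "illumination_intensity": float(roi.mean()),  # crude lighting proxy
        })
    return params
```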
  • the processor 130 may also obtain at least one context related to the image as at least one contextual parameter.
  • the context related to the image may include person identification information for identifying at least one person appearing on the image, a profile of the identified person, a user profile including nationality, age, race, sex, family relationship, friendship, etc. of a user of the image processing device 100, a relationship between the identified person and the user, a location where the image was captured, a time when the image was captured, weather of an image capture time, information about a device that captured the image, an image manipulation history of the user, an aesthetic preference of the user, evaluation information of the image, etc.
  • the context related to the image may be extracted from information about an image part other than the face image that is the object of manipulation, information about the user, information about the device, etc.
  • the contextual parameter may refer to information categorized by combining various contexts in various ways.
  • the contextual parameter may include metadata related to an input image and information generated by analyzing the input image.
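  • for illustration, contextual parameters of the metadata kind might be gathered as in the following sketch; the use of Pillow's EXIF reader and the chosen tag names are assumptions:

```python
# Illustrative extraction of contextual parameters from image metadata
# plus a user profile; tag choices are assumptions for the sketch.
from PIL import Image, ExifTags

def obtain_contextual_parameters(image_path, user_profile):
    exif = Image.open(image_path).getexif()
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()}
    return {
        "capture_time": named.get("DateTime"),   # time when the image was captured
        "device": named.get("Model"),            # device used to capture the image
        "nationality": user_profile.get("nationality"),
    }
```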
  • the processor 130 may also determine a manipulation point for manipulating the detected face from the image based on the obtained facial parameter and the obtained contextual parameter and manipulate the image based on the determined manipulation point. Determining the manipulation point and manipulating the image based on it will be described later in more detail.
  • the display 150 may output the image in which face manipulation is completed.
  • the display 150 may include a panel, a hologram device, or a projector.
  • the processor 130 and the display 150 are represented as separate configuration units, but they may be combined and implemented in the same configuration unit.
  • although the processor 130 and the display 150 are represented as configuration units positioned inside the image processing device 100 in the embodiment of the disclosure, the devices performing their respective functions need not be physically adjacent, and thus the processor 130 and the display 150 may be distributed according to an embodiment of the disclosure.
  • since the image processing device 100 is not limited to a physical device, some functions of the image processing device 100 may be implemented in software rather than hardware.
  • the image processing device 100 may further include a memory, a capturer, a communication interface, etc.
  • Each of the elements described herein may include one or more components, and a name of each element may change according to a type of a device.
  • the device may include at least one of the elements described herein, and may omit some elements or further include additional elements. Also, some of the elements of the device according to an embodiment of the disclosure may be combined into one entity such that the entity performs the functions of those elements in the same manner as before they were combined.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the disclosure.
  • the image processing device 100 may detect a face of an object existing on an image.
  • the image processing device 100 may use various types of face detection algorithms already known to detect the face of the object existing on the image.
  • the image processing device 100 may perform operations S230 to S270 on a face of each person.
  • the image processing device 100 may obtain at least one feature obtained from a detected face image as at least one facial parameter and obtain at least one context related to the image as at least one contextual parameter.
  • a facial parameter may refer to information obtained from the face image that is an object of image manipulation.
  • a contextual parameter may refer to information obtained from a part of the image other than the face image that is the object of manipulation or information obtained from an outside of the image such as information about a user, information about a capturing device, etc.
  • A facial parameter may include a type of a face, a size of the face, shapes of ears, eyes, mouth, and nose, a facial expression of a person, an emotion of a person, albedo of light with respect to a part of the face, intensity of illumination, a direction of illumination, etc.
  • the contextual parameter may include person identification information for identifying at least one person appearing on the image, a profile of the identified person, a user profile including nationality, age, race, sex, family relationship, friendship, etc. of a user of the image processing device 100, a relationship between the identified person and the user, a location where the image was captured, a time when the image was captured, weather of an image capture time, information about a device that captured the image, an image manipulation history of the user, an aesthetic preference of the user, evaluation information of the image, etc.
  • the image processing device 100 may determine a manipulation point for manipulating the detected face based on the obtained facial parameter and the obtained contextual parameter.
  • the manipulation point may refer to a part of an original image that is to be changed.
  • the image processing device 100 may determine, based on context obtained from the image, at least one of face image features, such as a face shape of the person, a tone of the skin, or the intensity of illumination, etc. as the manipulation point.
  • the image processing device 100 may automatically apply to the camera the setting most used when capturing in a similar situation.
  • the image processing device 100 may determine the most suitable manipulation point for the detected face image based on statistical data of image manipulation used in an image obtained by capturing a person having a similar skin tone in a similar situation.
  • the image processing device 100 may determine the manipulation point by selecting one face model to be combined with the detected face from among a plurality of face models.
  • the plurality of face models may refer to various types of face models stored in the image processing device 100.
  • the image processing device 100 may present the plurality of face models extracted based on the obtained facial parameter and contextual parameter to the user and receive a selection of one of the presented face models from the user to determine the face model selected by the user as the manipulation point.
  • the image processing device 100 may manipulate the image based on the manipulation point.
  • the image processing device 100 may replace all or at least a portion of the detected face with a corresponding portion of the selected face model.
  • FIG. 3 is a diagram illustrating a method of manipulating an image according to an embodiment of the disclosure.
  • the image processing device 100 may obtain from an image 300 at least one facial parameter 310 including various face features and at least one contextual parameter 320 including various contexts related to an image 300.
  • the image processing device 100 may apply the facial parameter 310 and the contextual parameter 320 to a plurality of stored face models 330 to select one face model 340.
  • the selected face model 340 may be a model most similar to a face feature on the image 300 selected from the various face models 330 according to the facial parameter 310 or may be a model selected from the various face models 330 according to the contextual parameter 320.
  • the image 300 may be combined with the selected face model 340 and changed to an output image 350.
  • the image processing device 100 may combine the selected face model 340 with a face on the original image 300 by blending the selected face model 340 with the face on the original image 300 or may combine the selected face model 340 with the face on the original image 300 by replacing at least a portion of the original image 300 with a corresponding portion of the face model 340.
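  • a minimal sketch of such a combination step, assuming the image and the rendered face model are numpy arrays of matching size, follows; the blending weight and function names are illustrative only:

```python
# Hypothetical combination step for FIG. 3: alpha-blend a rendered face
# model into the detected face region of the original image.
import numpy as np

def blend_face_model(image, model_render, bbox, alpha=0.5):
    """Blend model_render over the face at bbox = (x, y, w, h).

    model_render is assumed to be already rendered at size (h, w).
    """
    x, y, w, h = bbox
    region = image[y:y + h, x:x + w].astype(np.float32)
    out = image.copy()
    out[y:y + h, x:x + w] = (
        alpha * model_render.astype(np.float32) + (1 - alpha) * region
    ).astype(image.dtype)
    return out

# alpha=1.0 reduces to full replacement of the region by the face model,
# the other combination method mentioned above.
```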
  • FIG. 4 is a diagram of an example of a face model used to manipulate an image according to an embodiment of the disclosure.
  • the face model 340 may be a parameterized model.
  • that the face model 340 is a parameterized model may mean that the face model 340 is generated as a set of various parameters that determine the appearance of a face.
  • the face model 340 may include geometry information (a) defining a shape of the face, albedo information (b) defining how incident light is reflected at different parts of the face, illumination information (c) defining how illumination is applied during capturing, pose information (d) about rotation and zooming, facial expression information (e), etc.
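  • a hypothetical rendering of such a parameterized model as a data structure is sketched below; the field types and dimensions are assumptions, not part of the disclosure:

```python
# Sketch of the parameterized face model of FIG. 4. Field names mirror the
# information (a)-(e) listed above; concrete types are assumptions.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FaceModel:
    geometry: np.ndarray           # (a) mesh vertices defining the face shape
    albedo: np.ndarray             # (b) reflectance at different parts of the face
    illumination: np.ndarray       # (c) lighting coefficients at capture time
    pose: tuple = (0.0, 0.0, 0.0)  # (d) rotation angles
    zoom: float = 1.0              # (d) zoom factor
    expression: np.ndarray = field(default_factory=lambda: np.zeros(16))  # (e)
```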
  • a method of manipulating the image according to an embodiment of the disclosure is not limited to using the parameterized face model, and various image manipulation methods such as the embodiment described below with respect to FIGS. 11 and 12 may be used.
  • an image manipulation method capable of obtaining a more suitable result may be determined.
  • FIG. 5 is a diagram of an example of determining facial parameters to be applied based on contextual parameters, according to an embodiment of the disclosure.
  • FIG. 6 is a flowchart of an example of determining facial parameters to be applied based on contextual parameters, according to an embodiment of the disclosure.
  • the image processing device 100 may determine a manipulation point for manipulating a face image based on the facial parameters and the contextual parameters.
  • the facial parameters and the contextual parameters may be applied at the same time, and one of the facial parameters and the contextual parameters may be applied first, and the other one may be applied later.
  • the image processing device 100 may select at least one parameter to be excluded or corrected in determining of the manipulation point among the facial parameters, based on at least one of the contextual parameters.
  • the image processing device 100 may predict that, among the facial parameters obtained from the face image, the illumination information may be distorted by strong illumination contrast, based on a contextual parameter indicating that the location where the image was captured is a bar. In this case, the image processing device 100 may exclude some of the facial parameters, that is, the information about illumination, from the determining of the manipulation point, based on the contextual parameter describing the capturing location of the image.
  • the image processing device 100 may exclude or correct a specific facial parameter 570 from features 530 of the face image obtained from an image 500 based on a contextual parameter 550 and then apply the adjusted facial parameter 570 to selecting of a face model.
  • the image processing device 100 may detect a face present on an image.
  • the image processing device 100 may apply a facial parameterization algorithm to the detected face to obtain facial parameters.
  • the image processing device 100 may optimize the facial parameters using contextual parameters obtained from an original image. Optimization of the facial parameters at this stage may mean eliminating or correcting facial parameters that are likely to be distorted and adjusting the facial parameters to be applied to the selecting of the face model.
  • the image processing device 100 may apply the optimized facial parameters to the face model to select one face model.
  • the image processing device 100 may manipulate the image by combining the selected face model with the detected face on the original image.
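  • the flow of FIGS. 5 and 6 might be sketched as follows; the exclusion rule and the distance-based model selection are illustrative assumptions built around the bar example above:

```python
# Sketch of the FIG. 5/6 flow: a contextual parameter prunes facial
# parameters that are likely distorted, and the remainder drive model
# selection. The "bar implies distorted illumination" rule mirrors the
# example above; everything else is invented for illustration.

LIKELY_DISTORTED = {"bar": {"illumination_intensity", "illumination_direction"}}

def optimize_facial_parameters(facial_params, contextual_params):
    excluded = LIKELY_DISTORTED.get(contextual_params.get("location"), set())
    return {k: v for k, v in facial_params.items() if k not in excluded}

def select_face_model(face_models, facial_params):
    # Pick the stored model (a dict of numeric parameters) closest to the face.
    def distance(model):
        shared = set(model) & set(facial_params)
        return sum(abs(model[k] - facial_params[k]) for k in shared)
    return min(face_models, key=distance)
```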
  • FIG. 7 is a diagram of an example of applying contextual parameters and then applying facial parameters based on results thereof, according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of an example of applying contextual parameters and then applying facial parameters based on results thereof, according to an embodiment of the disclosure.
  • the image processing device 100 may determine a manipulation point for manipulating a face image based on the facial parameters and the contextual parameters.
  • the facial parameters and the contextual parameters may be applied at the same time, and one of the facial parameters and the contextual parameters may be applied first, and the other one may be applied later.
  • the image processing device 100 may first apply the contextual parameters and then select at least one parameter to be used to determine the manipulation point among the facial parameters, based on at least one of the contextual parameters.
  • the image processing device 100 may use statistical information about the tendency of users of a specific nationality to manipulate images to predict information about face features preferred by the users of the nationality.
  • the image processing device 100 may select facial parameters for the face features preferred by the users of the nationality to manipulate images, based on a contextual parameter that is a user's nationality, and apply only the selected parameters to selection of a face model.
  • the image processing device 100 may select facial parameters for face features that are primarily manipulated by users according to a time or a location at which images were captured, and apply only the selected parameters to the selection of the face model.
  • the image processing device 100 may first apply a contextual parameter 730 to an image 700 to select some of face models and select a facial parameter 770 to be applied to the selection of the face model from features 750 of the face image obtained from the image 700 according to the contextual parameter 730.
  • the image processing device 100 may detect a face present on an image.
  • the image processing device 100 may obtain the contextual parameters related to the image.
  • the image processing device 100 may apply the obtained contextual parameters to select some of a plurality of face models.
  • the image processing device 100 may apply a facial parameterization algorithm to the detected face to obtain the facial parameters.
  • the image processing device 100 may optimize the facial parameters using at least one of the contextual parameters. Optimization of the facial parameters at this stage may mean selecting the facial parameters for face features that are highly likely to be manipulated given at least one contextual parameter.
  • the image processing device 100 may apply the optimized facial parameters to the face models to select one face model.
  • the image processing device 100 may manipulate the image by combining the selected face model with the detected face on an original image.
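  • the contextual-first flow of FIGS. 7 and 8 might be sketched as follows; the preference table standing in for the statistical manipulation-tendency data is invented for illustration:

```python
# Sketch of the FIG. 7/8 flow: contextual parameters are applied first,
# both to narrow the candidate face models and to choose which facial
# parameters are passed on to model selection. All table contents and
# model fields are hypothetical.

PREFERRED_FEATURES = {"A": ["face_shape"], "B": ["mouth_size"]}

def preselect_face_models(face_models, contextual_params):
    nationality = contextual_params.get("nationality")
    return [m for m in face_models
            if nationality in m.get("target_groups", [])]

def select_facial_parameters(facial_params, contextual_params):
    keep = PREFERRED_FEATURES.get(contextual_params.get("nationality"), [])
    return {k: facial_params[k] for k in keep if k in facial_params}
```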
  • FIG. 9 is a structural diagram of a device for processing an image according to an embodiment of the disclosure.
  • the image processing device 100 may include a processor 130, a display 150 and a clustering unit 950.
  • the processor 130 may include a face detector 910, a parameterization unit 920, and an image manipulator 930 therein.
  • the processor 130 and the display 150 according to the embodiment illustrated in FIG. 9 may perform all the functions described in FIG. 1, except for a function performed by the clustering unit 950.
  • the face detector 910 may detect a face of a person from an input image 940.
  • the face detector 910 may use one of various face detection algorithms to detect a face of one or more persons present in the input image 940.
  • the parameterization unit 920 may obtain contextual parameters based on context information related to the image and obtain facial parameters based on features of the image of the detected face.
  • the parameterization unit 920 may transmit the obtained contextual parameters and facial parameters to the clustering unit 950 and receive a manipulation point from the clustering unit 950.
  • the clustering unit 950 may apply a machine learning algorithm to the contextual parameters and facial parameters received from the parameterization unit 920 to identify a manipulation point related to a specific cluster and transmit the identified manipulation point to the parameterization unit 920.
  • when a plurality of contextual parameters are obtained, a cluster may refer to a set of contextual parameters generated by combining the obtained contextual parameters according to various combination methods.
  • the cluster may refer to global data commonality for each contextual parameter.
  • a set of contextual parameters for a specific location may indicate a commonality for images captured at the location.
  • the clustering unit 950 may select one of a plurality of clusters based on the contextual parameters and the facial parameters and determine a manipulation point corresponding to the selected cluster.
  • the clustering unit 950 is described in more detail below with respect to FIG. 10.
  • the clustering unit 950 is not present within the image processing device 100, but may be present in an external server.
  • the image processing device 100 may include a communicator (including a transmitter and a receiver) to transmit data to, and receive data from, the external server.
  • the image manipulator 930 may manipulate the input image 940 based on the determined manipulation point to generate an output image 960.
  • although the face detector 910, the parameterization unit 920, the image manipulator 930, and the clustering unit 950 are represented as configuration units positioned inside the image processing device 100 in the embodiment of the disclosure, the devices performing their respective functions need not be physically adjacent, and thus these units may be distributed according to an embodiment of the disclosure.
  • since the image processing device 100 is not limited to a physical device, some of the functions of the image processing device 100 may be implemented in software rather than hardware.
  • FIG. 10 is a flowchart of a method performed by a clustering unit of selecting a manipulation point using a machine learning algorithm according to an embodiment of the disclosure.
  • the clustering unit 950 may receive an input of facial parameters and contextual parameters.
  • the clustering unit 950 may input the received facial parameters and contextual parameters into the machine learning algorithm to identify clusters corresponding to the received facial parameters and contextual parameters.
  • the machine learning algorithm may be trained to identify a specific cluster to which a current image belongs based on at least one of facial parameters or contextual parameters.
  • the clustering unit 950 may use a neural network, clustering algorithm, or other suitable methods to identify the clusters corresponding to the received facial parameters and contextual parameters.
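  • one hypothetical realization of this clustering step, using k-means from scikit-learn over numeric parameter vectors, is sketched below; the disclosure does not prescribe this particular algorithm:

```python
# Hypothetical clustering step for the clustering unit 950: k-means over
# concatenated facial and contextual parameter vectors. A neural network
# or any other clustering method could stand in here, as noted above.
import numpy as np
from sklearn.cluster import KMeans

def fit_clusters(parameter_vectors, n_clusters=8):
    # parameter_vectors: (n_images, n_features) array of numeric
    # facial + contextual parameters collected from past images.
    return KMeans(n_clusters=n_clusters, n_init=10).fit(parameter_vectors)

def identify_cluster(model, facial_vec, contextual_vec):
    x = np.concatenate([facial_vec, contextual_vec]).reshape(1, -1)
    return int(model.predict(x)[0])  # index of the matching cluster
```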
  • the clustering unit 950 may select face models corresponding to the identified clusters.
  • the current operation may not be performed by the clustering unit 950 but may instead be performed on a recipient side that receives the transmitted cluster.
  • the clustering unit 950 may output the identified cluster as a result, and the face model corresponding to the identified cluster may be selected on the recipient side that receives it.
  • the face model is used to determine the manipulation point, but other methods such as an image filter may also be used.
  • the clustering unit 950 may transmit the selected face models as an output.
  • the clustering unit 950 may transmit the selected face model and then update the face model according to the received facial parameters and contextual parameters. In an embodiment of the disclosure, the clustering unit 950 may store the updated face model and use the face model for processing of a next image.
  • the clustering unit 950 may develop itself by continuously updating the stored face model and improving the cluster.
  • the order of the update job may be changed.
  • the update job may be performed in operations previous to the current operation or may be performed between other operations.
  • FIG. 11 is another flowchart of an image processing method according to an embodiment of the disclosure.
  • the image processing device 100 may, in some embodiments, not use a face model to determine a manipulation point.
  • the image processing device 100 may detect a face present on an image in operation S1110.
  • the image processing device 100 may obtain facial parameters and contextual parameters in operation S1120. A method of obtaining the facial parameters and the contextual parameters is described above with respect to FIGS. 1 and 2.
  • the image processing device 100 may retrieve reference facial parameters corresponding to the obtained contextual parameters in operation S1130.
  • the image processing device 100 may determine a manipulation point by retrieving the reference facial parameters corresponding to the obtained contextual parameters.
  • the image processing device 100 may determine a color filter capable of representing a facial albedo similar to a reference albedo as the manipulation point.
  • the image processing device 100 may change a face type on the image by determining the face type capable of representing a geometry model similar to a reference geometry model as the manipulation point.
  • the image processing device 100 may manipulate a face image based on facial parameters similar to the retrieved reference facial parameters in operation S1140.
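  • the reference-parameter lookup of FIG. 11 might be sketched as follows; the table keys and values are invented for illustration:

```python
# Sketch of the FIG. 11 variant: contextual parameters index a table of
# reference facial parameters, and the manipulation is driven by the gap
# between the detected face's parameters and the reference. All table
# contents are hypothetical.

REFERENCE_PARAMS = {
    ("KR", "indoor"): {"skin_albedo": 0.72},
    ("GB", "outdoor"): {"skin_albedo": 0.65},
}

def retrieve_reference(contextual_params):
    key = (contextual_params.get("nationality"), contextual_params.get("scene"))
    return REFERENCE_PARAMS.get(key, {})

def albedo_shift(facial_params, reference):
    # Positive values brighten, negative values darken, e.g. via a color filter.
    return {k: ref - facial_params.get(k, ref)
            for k, ref in reference.items()}
```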
  • FIG. 12 is another flowchart of an image processing method according to an embodiment of the disclosure.
  • the image processing device 100 may use an image filter instead of a face model to determine a manipulation point.
  • the image processing device 100 may detect a face present on an image in operation S1210.
  • the image processing device 100 may obtain facial parameters and contextual parameters in operation S1220.
  • a method of obtaining the facial parameters and the contextual parameters is described above with respect to FIGS. 1 and 2.
  • the image processing device 100 may automatically select an image filter according to the obtained facial parameters and contextual parameters in operation S1230.
  • the image processing device 100 may select one image filter according to the obtained facial parameters and contextual parameters from among a plurality of stored image filters.
  • the image processing device 100 may apply the selected image filter to the image in operation S1240.
  • the image processing method according to the embodiment of the disclosure of FIG. 12 may be used to automatically set a camera effect that matches context information at the time of capturing when a user takes a picture.
  • a specific user may use his or her own image filter, optimized for that user.
  • the image processing device 100 may select a de-noising camera filter according to the facial parameters and the contextual parameters.
  • the image processing device 100 may reuse the de-noising filter that was previously applied to a similar face in a previously processed image.
  • the image processing device 100 may automatically apply the same camera settings as camera settings that were previously used at the same capturing location when capturing the image.
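  • an illustrative sketch of such automatic filter selection follows; the selection rules are assumptions, though cv2.fastNlMeansDenoisingColored is an existing OpenCV de-noising call:

```python
# Sketch of the FIG. 12 variant: choosing one of several stored image
# filters from the facial and contextual parameters. Filter choices and
# thresholds are hypothetical.
import cv2

def select_image_filter(facial_params, contextual_params):
    if contextual_params.get("low_light"):
        # Stronger de-noising for dim capture conditions.
        return lambda img: cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
    if facial_params.get("illumination_intensity", 255) < 80:
        # Mild de-noising when the face region itself is dark.
        return lambda img: cv2.fastNlMeansDenoisingColored(img, None, 6, 6, 7, 21)
    return lambda img: img  # no filtering needed

def apply_selected_filter(image, facial_params, contextual_params):
    return select_image_filter(facial_params, contextual_params)(image)
```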
  • FIG. 13 is a diagram illustrating an example of differently enhancing a face on an image according to a user by applying contextual parameters in a beauty application according to an embodiment of the disclosure.
  • an image processing method may be applied to a portrait beauty application and used to automatically enhance a face of a person.
  • since beauty is subjective and depends on personal taste and cultural background, beauty may mean different things to different people.
  • people of a culture A may regard a person with a thin, narrow face as beautiful, whereas people of a culture B may regard a person with a big mouth as beautiful.
  • when a person of the culture A is the user, the image processing device 100 may determine the face type as the manipulation point and manipulate the face type to be thin (1330), and when a person of the culture B is the user, it may determine the size of the mouth as the manipulation point and manipulate the mouth to be larger (1350).
  • the image processing device 100 may perform an appropriate manipulation in accordance with a beauty concept of each user by using context information such as information about a nationality and location of the user, thereby enhancing a face image.
  • FIG. 14 is a diagram of an example of manipulating a face of an advertising model to be similar to that of a target consumer according to an embodiment of the disclosure.
  • the image processing device 100 may manipulate a face 1410 of an actor used in a commercial advertisement into an appearance similar to that of a viewer or the target consumer. This is to increase the effect of the advertisement by exploiting the tendency of humans to have good feelings toward people whose appearance is similar to their own.
  • the image processing device 100 may enhance concentration of the target consumer for the advertisement by manipulating the face 1410 of the actor to be similar to an average face of the target consumer (1420 and 1430).
  • the image processing device 100 may enhance the concentration of a user by manipulating a game character in a video game to have a face similar to the user's, utilizing context information such as the appearance of the user.
  • FIG. 15 is a diagram of an example of manipulating a face on an image by applying contextual parameters to protect privacy according to an embodiment of the disclosure.
  • an image processing method may be utilized to protect privacy by applying the contextual parameters and manipulating the face, instead of blurring a face on a photograph.
  • the image processing device 100 may protect the privacy of a person on an original image by changing a face 1510 on the image to any other face 1520 or a desired face 1520 instead of blurring the face 1510.
  • blurring the face 1510 on the image may make the entire photo look unnatural, which may lower the value of the photo.
  • when the face 1510 on the image is not blurred but is manipulated according to the context information, the phenomenon that the photo becomes unnatural or that all eyes are concentrated on a blurred part may be prevented.
  • An embodiment of the disclosure may be implemented by storing computer-readable codes in a non-transitory computer-readable storage medium.
  • the non-transitory computer-readable storage medium is any data storage device that stores data which may be thereafter read by a computer system.
  • the computer-readable codes are configured to perform operations of implementing an image processing method according to the embodiment of the disclosure when the computer-readable codes are read from the non-transitory computer-readable storage medium and executed by a processor.
  • the computer-readable codes may be implemented in a variety of programming languages. Functional programs, codes, and code segments for implementing the embodiment of the disclosure may be easily programmed by those skilled in the art to which the embodiment of the disclosure belongs.
  • Examples of the non-transitory computer-readable storage medium include read-only memory (ROM), random access memory (RAM), compact disc read-only memory (CD-ROM), magnetic tape, floppy disks, and optical data storage devices.
  • the non-transitory computer-readable storage medium may also be distributed over network-coupled computer systems so that the computer-readable codes are stored and executed in a distributed fashion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image processing method is provided. The image processing method includes detecting a face of an object present in an image, obtaining at least one feature from the detected face as at least one facial parameter, obtaining at least one context related to the image as at least one contextual parameter, determining a manipulation point for manipulating the detected face based on the obtained at least one facial parameter and the obtained at least one contextual parameter, and manipulating the image based on the determined manipulation point.
EP19775892.3A 2018-03-29 2019-03-25 Image processing method and device Withdrawn EP3707678A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB1805270.4A GB2572435B (en) 2018-03-29 2018-03-29 Manipulating a face in an image
KR1020190016357A KR20190114739A (ko) 2018-03-29 2019-02-12 Image processing method and device
PCT/KR2019/003449 WO2019190142A1 (fr) 2018-03-29 2019-03-25 Image processing method and device

Publications (2)

Publication Number Publication Date
EP3707678A1 (fr) 2020-09-16
EP3707678A4 EP3707678A4 (fr) 2020-12-23

Family

ID=62142414

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19775892.3A Withdrawn EP3707678A4 (fr) 2018-03-29 2019-03-25 Procédé et dispositif de traitement d'image

Country Status (3)

Country Link
EP (1) EP3707678A4 (fr)
KR (1) KR20190114739A (fr)
GB (1) GB2572435B (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275650B (zh) 2020-02-25 2023-10-17 抖音视界有限公司 Beauty processing method and apparatus
WO2021262187A1 * 2020-06-26 2021-12-30 Hewlett-Packard Development Company, L.P. Document image relighting
KR20220015019A * 2020-07-30 2022-02-08 Samsung Electronics Co., Ltd. Electronic device and method for converting an image using data masking

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039222B2 (en) * 2003-02-28 2006-05-02 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US8896725B2 (en) * 2007-06-21 2014-11-25 Fotonation Limited Image capture device with contemporaneous reference image capture mechanism
TW200614094A (en) * 2004-10-18 2006-05-01 Reallusion Inc System and method for processing comic character
JP4760349B2 (ja) * 2005-12-07 2011-08-31 Sony Corp Image processing device, image processing method, and program
US8098904B2 (en) * 2008-03-31 2012-01-17 Google Inc. Automatic face detection and identity masking in images, and applications thereof
US20120257072A1 (en) * 2011-04-06 2012-10-11 Apple Inc. Systems, methods, and computer-readable media for manipulating images using metadata
JP2012244525A (ja) * 2011-05-23 2012-12-10 Sony Corp Information processing device, information processing method, and computer program
US8811686B2 (en) * 2011-08-19 2014-08-19 Adobe Systems Incorporated Methods and apparatus for automated portrait retouching using facial feature localization
US9298741B1 (en) * 2014-06-26 2016-03-29 Amazon Technologies, Inc. Context-specific electronic media processing
US9830727B2 (en) * 2015-07-30 2017-11-28 Google Inc. Personalizing image capture

Also Published As

Publication number Publication date
GB2572435A (en) 2019-10-02
GB2572435B (en) 2022-10-05
EP3707678A4 (fr) 2020-12-23
KR20190114739A (ko) 2019-10-10
GB201805270D0 (en) 2018-05-16

Similar Documents

Publication Publication Date Title
  • WO2020159232A1 Method, apparatus, electronic device and computer-readable storage medium for searching an image
  • WO2021251689A1 Electronic device and method for controlling an electronic device
  • WO2020105948A1 Image processing apparatus and control method thereof
  • WO2018117704A1 Electronic apparatus and operation method thereof
  • WO2021132927A1 Computing device and method of classifying category of data
  • WO2019164374A1 Electronic device and method for managing a customized object based on an avatar
  • WO2019190142A1 Image processing method and device
  • WO2019093819A1 Electronic device and operation method thereof
  • EP3707678A1 Image processing method and device
  • WO2020235852A1 Device for automatically capturing a photo or video of a specific moment, and operation method thereof
  • WO2020130747A1 Image processing apparatus and method for style transformation
  • WO2017131348A1 Electronic apparatus and control method thereof
  • EP3539056A1 Electronic apparatus and operation method thereof
  • WO2020180134A1 Image correction system and image correction method thereof
  • WO2015137666A1 Object recognition apparatus and control method thereof
  • EP3756145A1 Electronic apparatus and control method thereof
  • WO2021150033A1 Electronic device and method for controlling an electronic device
  • WO2021025509A1 Apparatus and method for displaying graphic elements according to an object
  • WO2021006482A1 Image generating apparatus and method
  • WO2021132798A1 Method and apparatus for data anonymization
  • WO2023018084A1 Method and system for automatically capturing and processing an image of a user
  • WO2020036468A1 Method for applying a bokeh effect to an image, and recording medium
  • WO2020060012A1 Computer-implemented platform for providing content to an augmented reality device, and associated method
  • WO2022191474A1 Electronic device for improving image quality, and method for improving image quality using same
  • WO2022108001A1 Method for controlling an electronic device by recognizing motion at the edge of a camera's field of view (FOV), and associated electronic device

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200610

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20201123

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 5/00 20060101ALI20201117BHEP

Ipc: G06T 11/00 20060101ALI20201117BHEP

Ipc: G06K 9/00 20060101ALI20201117BHEP

Ipc: G06T 7/11 20170101ALI20201117BHEP

Ipc: G06T 5/20 20060101ALI20201117BHEP

Ipc: G06K 9/40 20060101ALI20201117BHEP

Ipc: G06T 11/60 20060101AFI20201117BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20210622