CN107705245A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN107705245A
Authority
CN
China
Prior art keywords
target
parameter
makeups
illumination
face
Prior art date
Legal status: Pending
Application number
CN201710954364.7A
Other languages
Chinese (zh)
Inventor
吴珂
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710954364.7A
Publication of CN107705245A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and device. The method includes: obtaining a target illumination intensity; obtaining a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter indicating at least one of target cosmetics and a target makeup method; and processing a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image. This technical solution ensures that the target image can show the effect the user would achieve, under the target illumination intensity, by using at least one of the cosmetics suited to that illumination intensity (the target cosmetics) and the makeup method suited to it (the target makeup method), thereby improving the user experience.

Description

Image processing method and device
Technical field
The present disclosure relates to the field of image processing, and in particular to an image processing method and device.
Background
With the development of science and technology, people have begun to process images frequently with image processing techniques in daily life. For example, to let a user see the effect of wearing cosmetics, a photo of the user can be obtained and processed so that the processed photo shows the user's face as it would look with cosmetics applied.
Summary of the invention
To overcome the problems in the related art, embodiments of the present disclosure provide an image processing method and device. The technical solution is as follows.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, including:
obtaining a target illumination intensity;
obtaining a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter indicating at least one of target cosmetics and a target makeup method; and
processing a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image.
By obtaining the target illumination intensity and obtaining the target makeup parameter corresponding to it, where the target cosmetics and target makeup method indicated by the parameter are the cosmetics and makeup method suited to use under the target illumination intensity, and by processing the to-be-processed image that includes a face according to the target makeup parameter to obtain the target image, it can be ensured that the target image shows the effect the user would achieve, under the target illumination intensity, by making up with at least one of the target cosmetics and the target makeup method. The user can thus intuitively learn at least one of the cosmetics and the makeup method to use under the target illumination intensity, which improves the user experience.
In one embodiment, the image processing method provided by the embodiments of the present disclosure further includes:
performing face feature recognition on the to-be-processed image, and obtaining a target face feature parameter of the face in the to-be-processed image according to the face feature recognition result;
wherein obtaining the target makeup parameter corresponding to the target illumination intensity includes:
obtaining a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter.
In one embodiment, the method further includes:
obtaining a makeup parameter set, the makeup parameter set including at least one makeup parameter;
wherein obtaining the target makeup parameter corresponding to the target illumination intensity includes:
determining, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
In one embodiment, the method further includes:
obtaining target environment information, and determining a target environment according to the target environment information;
wherein obtaining the target illumination intensity includes:
obtaining a target illumination intensity corresponding to the target environment.
In one embodiment, obtaining the target environment information includes:
obtaining movement trajectory information of the user, and obtaining the target environment information according to the movement trajectory information.
In one embodiment, the method further includes:
performing face recognition on the target image, and determining a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face; and
replacing the non-face region in the target image with an environment background corresponding to the target environment.
According to a second aspect of the embodiments of the present disclosure, an image processing device is provided, including:
a light intensity acquisition module, configured to obtain a target illumination intensity;
a makeup parameter acquisition module, configured to obtain a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter indicating at least one of target cosmetics and a target makeup method; and
an image processing module, configured to process a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image.
In one embodiment, the device further includes:
a face feature recognition module, configured to perform face feature recognition on the to-be-processed image and obtain a target face feature parameter of the face in the to-be-processed image according to the face feature recognition result;
wherein the makeup parameter acquisition module includes:
a first makeup parameter acquisition submodule, configured to obtain a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter.
In one embodiment, the device further includes:
a makeup parameter set acquisition module, configured to obtain a makeup parameter set including at least one makeup parameter;
wherein the makeup parameter acquisition module includes:
a second makeup parameter acquisition submodule, configured to determine, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
In one embodiment, the device further includes:
an environment information acquisition module, configured to obtain target environment information and determine a target environment according to the target environment information;
wherein the light intensity acquisition module includes:
a light intensity acquisition submodule, configured to obtain a target illumination intensity corresponding to the target environment.
In one embodiment, the environment information acquisition module includes:
an environment information acquisition submodule, configured to obtain movement trajectory information of the user and obtain the target environment information according to the movement trajectory information.
In one embodiment, the device further includes:
a face recognition module, configured to perform face recognition on the target image and determine a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face; and
a background replacement module, configured to replace the non-face region in the target image with an environment background corresponding to the target environment.
According to a third aspect of the embodiments of the present disclosure, an image processing device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtain a target illumination intensity;
obtain a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter indicating at least one of target cosmetics and a target makeup method; and
process a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer instructions are stored; when executed by a processor, the instructions implement the steps of any of the methods provided by the first aspect of the embodiments of the present disclosure.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1a is a first schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 1b is a second schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 1c is a third schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 1d is a fourth schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 1e is a fifth schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 1f is a sixth schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 2 is a schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 3 is a schematic flowchart of an image processing method according to an exemplary embodiment;
Fig. 4a is a first schematic structural diagram of an image processing device according to an exemplary embodiment;
Fig. 4b is a second schematic structural diagram of an image processing device according to an exemplary embodiment;
Fig. 4c is a third schematic structural diagram of an image processing device according to an exemplary embodiment;
Fig. 4d is a fourth schematic structural diagram of an image processing device according to an exemplary embodiment;
Fig. 4e is a fifth schematic structural diagram of an image processing device according to an exemplary embodiment;
Fig. 4f is a sixth schematic structural diagram of an image processing device according to an exemplary embodiment;
Fig. 5 is a block diagram of a device according to an exemplary embodiment;
Fig. 6 is a block diagram of a device according to an exemplary embodiment;
Fig. 7 is a block diagram of a device according to an exemplary embodiment.
Detailed description
Exemplary embodiments will be described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
With the rapid development of science and technology and the continuous improvement of living standards, people have in recent years begun to process images frequently with image processing techniques in daily life. For example, to let a user see the effect of wearing target cosmetics, a photo of the user can be obtained, face recognition can be performed on it, the positions of the user's eyes, nose or mouth can be determined from the recognition result, and the photo can then be processed so that the processed photo shows the effect of the target cosmetics applied at the user's eyes, nose or mouth.
Although such a scheme lets the user see, by viewing the processed photo, the effect of using the target cosmetics, illumination intensity often differs greatly between environments; for example, indoor illumination at night differs greatly from outdoor illumination in the daytime. The effect of cosmetics also differs under different illumination intensities, so the cosmetics suited to different environments differ as well: lighter-coloured cosmetics suit the daytime outdoors, while deeper-coloured cosmetics suit indoor use at night. The processed photo in the above scheme cannot show the makeup effect the user would achieve, under a specified illumination intensity, by using the cosmetics suited to that illumination intensity, so the user cannot intuitively learn how to apply makeup under that illumination intensity, which harms the user experience.
To solve the above problem, in the technical solution provided by the embodiments of the present disclosure, a target illumination intensity is obtained, and a target makeup parameter corresponding to the target illumination intensity is obtained, the target makeup parameter indicating at least one of target cosmetics and a target makeup method, where the target cosmetics and target makeup method indicated by the parameter are the cosmetics and makeup method suited to use under the target illumination intensity. By processing a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image, it can be ensured that the target image shows the effect the user would achieve, under the target illumination intensity, by making up with at least one of the target cosmetics and the target makeup method, so that the user can intuitively learn at least one of the cosmetics and the makeup method to use under the target illumination intensity, thereby improving the user experience.
Embodiments of the present disclosure provide an image processing method that can be applied to an electronic device, where the electronic device may be a terminal or a server. The terminal may be a mobile phone, a tablet computer or a smart wearable device; the server may be a device offering computing services provided by an image processing service operator, or a device provided by a network operator through which an image processing service operator offers computing services. As shown in Fig. 1a, the method includes the following steps 101 to 103:
In step 101, a target illumination intensity is obtained.
Exemplarily, when the embodiments of the present disclosure are applied to a terminal, the target illumination intensity may be obtained as input from the user through a human-computer interaction device on the terminal, such as a keyboard, a touch screen or a microphone; it may be detected by an illumination intensity sensor on the terminal, with the target illumination intensity obtained from the detection result; or it may be obtained from another device or system, such as a smart camera. When the embodiments of the present disclosure are applied to a server, the server may obtain the target illumination intensity from a terminal, or from another device or system.
In step 102, a target makeup parameter corresponding to the target illumination intensity is obtained.
The target makeup parameter indicates at least one of target cosmetics and a target makeup method.
Exemplarily, when the target makeup parameter indicates target cosmetics, obtaining the target makeup parameter corresponding to the target illumination intensity may involve obtaining a cosmetics database indicating the correspondence between at least one cosmetics identifier and at least one illumination intensity, the at least one illumination intensity including the target illumination intensity; the target cosmetics identifier corresponding to the target illumination intensity can then be determined among the at least one cosmetics identifier according to the database, and the target cosmetics determined from that identifier. Likewise, when the target makeup parameter indicates a target makeup method, obtaining it may involve obtaining a makeup method database indicating the correspondence between at least one makeup method identifier and at least one illumination intensity, the at least one illumination intensity including the target illumination intensity; the target makeup method identifier corresponding to the target illumination intensity can be determined from the database, and the target makeup method determined from that identifier.
It should be noted that the target makeup parameter corresponding to the target illumination intensity may indicate only one kind of cosmetics or one makeup method, or it may indicate multiple kinds of cosmetics and multiple makeup methods. A lookup of this kind is sketched below.
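The following minimal Python sketch illustrates such a lookup, assuming an in-memory table that buckets illumination intensity in lux; the bucket boundaries and all identifiers are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of the step-102 lookup; the "database" buckets illumination
# intensity (lux) and maps each bucket to hypothetical identifiers.
import bisect

_LUX_UPPER_BOUNDS = [50, 500, 10_000]  # night indoor, lit indoor, daytime outdoor
_COSMETICS_IDS = ["deep_tone_set", "neutral_set", "light_tone_set"]
_MAKEUP_METHOD_IDS = ["evening_smoky", "daily", "natural"]

def get_target_makeup_parameter(target_lux: float) -> dict:
    """Return the makeup parameter matching the target illumination intensity."""
    i = min(bisect.bisect_left(_LUX_UPPER_BOUNDS, target_lux),
            len(_LUX_UPPER_BOUNDS) - 1)
    return {"target_cosmetics_id": _COSMETICS_IDS[i],
            "target_makeup_method_id": _MAKEUP_METHOD_IDS[i]}

print(get_target_makeup_parameter(30.0))  # night indoor: deep-tone cosmetics
```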
In step 103, the to-be-processed image that includes a face is processed according to the target makeup parameter to obtain a target image.
Exemplarily, processing the to-be-processed image that includes a face according to the target makeup parameter may involve obtaining a target algorithm corresponding to the target makeup parameter and processing the to-be-processed image with that algorithm, as the sketch below illustrates.
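The disclosure does not specify the target algorithm; the following sketch assumes, purely for illustration, that it reduces to blending a lipstick colour taken from the makeup parameter into a known lip region of the image.

```python
# A sketch of step 103 under stated assumptions: alpha-blend a lipstick colour
# into a given lip region of the to-be-processed image. The region coordinates
# and blend strength stand in for the unspecified target algorithm.
import numpy as np

def apply_makeup(image: np.ndarray, lip_box: tuple, lip_rgb: tuple,
                 strength: float = 0.4) -> np.ndarray:
    """Blend lip_rgb into the lip box (x0, y0, x1, y1) of an HxWx3 uint8 image."""
    out = image.astype(np.float32)
    x0, y0, x1, y1 = lip_box
    out[y0:y1, x0:x1] = ((1.0 - strength) * out[y0:y1, x0:x1]
                         + strength * np.asarray(lip_rgb, dtype=np.float32))
    return out.clip(0, 255).astype(np.uint8)
```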
In the technical solution provided by the embodiments of the present disclosure, by obtaining the target illumination intensity and the corresponding target makeup parameter, which indicates at least one of target cosmetics and a target makeup method suited to use under the target illumination intensity, and by processing the to-be-processed image that includes a face according to the target makeup parameter to obtain the target image, it can be ensured that the target image shows the makeup effect the user would achieve, under the target illumination intensity, by using at least one of the target cosmetics and the target makeup method. The user can thus intuitively learn at least one of the cosmetics and the makeup method to use under the target illumination intensity, which improves the user experience.
In one embodiment, as shown in Fig. 1b, the image processing method provided by the embodiments of the present disclosure further includes the following step 104:
In step 104, face feature recognition is performed on the to-be-processed image, and a target face feature parameter of the face in the to-be-processed image is obtained according to the face feature recognition result.
Exemplarily, the target face feature parameter indicates image features of the face in the to-be-processed image. It may include a colour feature parameter, a shape feature parameter and a spatial-relation feature parameter of the face in the to-be-processed image. The colour feature parameter may be the colour value of a designated region of the face, such as the colour of the lips or of the forehead skin, where the colour value may be an RGB colour channel value or a YUV colour coding value. Taking RGB colour channel values as an example, the target face feature parameter may be RGB(252, 224, 203): an RGB colour channel value records the colour data of the three channels red, green and blue, different colours are obtained by adjusting the data of the three channels, and a specific RGB value represents one specific colour. The shape feature parameter may indicate the overall contour shape of the face, or the contour shape of a designated region of the face, such as the eyes, nose or mouth. The spatial-relation feature parameter may indicate the spatial relationship between at least two designated regions of the face, such as the distance between the eyes and the mouth, or between the eyes and the nose.
Exemplarily, a face feature parameter indicating the contour shape of the face may take values including: standard face, elongated face, round face, square face and inverted-triangle face. The obtained target face feature parameter may, for instance, indicate that the contour shape of the face in the to-be-processed image is a round face. A possible record layout for such a parameter is sketched below.
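One possible way to organise the target face feature parameter as a record is sketched here; the field names and example values are assumptions for illustration, not part of the disclosure.

```python
# A record layout for the target face feature parameter described above,
# with one field per feature family (colour, shape, spatial relation).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FaceFeatureParameter:
    lip_rgb: Tuple[int, int, int]       # colour feature, e.g. RGB(252, 224, 203)
    forehead_rgb: Tuple[int, int, int]  # colour feature of the forehead skin
    face_contour: str                   # shape feature: "standard", "elongated",
                                        # "round", "square" or "inverted_triangle"
    eye_mouth_distance_px: float        # spatial-relation feature, in pixels

example = FaceFeatureParameter(lip_rgb=(252, 224, 203),
                               forehead_rgb=(240, 220, 205),
                               face_contour="round",
                               eye_mouth_distance_px=118.0)
```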
In step 102, obtaining the target makeup parameter corresponding to the target illumination intensity can be implemented by step 1021:
In step 1021, a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter is obtained.
Exemplarily, obtaining this parameter may involve obtaining a makeup parameter database indicating the correspondence between at least one makeup parameter identifier, at least one illumination intensity and at least one face feature parameter, the at least one illumination intensity including the target illumination intensity; the target makeup parameter corresponding to the target illumination intensity and the target face feature parameter can then be determined among the at least one makeup parameter according to the database. It should be noted that this target makeup parameter may indicate one or more kinds of cosmetics, may indicate one or more makeup methods, and may also indicate a designated makeup method paired with designated cosmetics.
For example, when the target illumination intensity corresponds to indoors at night: if the target face feature parameter indicates that the face contour in the to-be-processed image is a standard face, the makeup method indicated by the target makeup parameter may be to apply dark foundation to both cheeks, use a lipstick suited to standard lips, and lightly apply an oval of blush, or standard blush, to both cheeks; if the face contour is an elongated face, the indicated makeup method may be to apply dark foundation to the forehead and chin, draw the eyebrows straight for two-thirds of their length with the brow peak centred, draw oval eye makeup in an everyday smoky style, and sweep blush horizontally towards the ears for contouring; if the face contour is a round face, the indicated makeup method may be to deepen the foundation tone on both cheeks, add white foundation at the centre of the chin and forehead, centre the brow peaks while keeping the distance between the eyebrows below a distance threshold, and use a light-coloured lipstick such as a jelly shade. The two-key lookup is sketched below.
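A minimal sketch of the two-key lookup in step 1021 follows, assuming the makeup parameter database is a mapping keyed by (illumination bucket, face contour); the entries paraphrase the night-indoor examples above, and all keys and identifiers are invented.

```python
# A sketch of the step-1021 lookup keyed by both illumination intensity and
# face feature; keys and parameter identifiers are illustrative only.
from typing import Optional

_MAKEUP_DB = {
    ("night_indoor", "standard"):  "dark_foundation_standard_lip_oval_blush",
    ("night_indoor", "elongated"): "dark_foundation_forehead_chin_smoky_eye",
    ("night_indoor", "round"):     "cheek_contour_white_centre_jelly_lip",
}

def get_makeup_parameter(illumination_bucket: str,
                         face_contour: str) -> Optional[str]:
    """Return the makeup parameter identifier, or None if no entry matches."""
    return _MAKEUP_DB.get((illumination_bucket, face_contour))
```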
By performing face feature recognition on the to-be-processed image, obtaining the target face feature parameter of the face according to the recognition result, and obtaining the target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter, it can be ensured that the target cosmetics and target makeup method indicated by the target makeup parameter are not only suited to use under the target illumination intensity but also matched to the face in the to-be-processed image, so that the face in the target image obtained by processing the to-be-processed image according to the target makeup parameter looks better. The user can thus intuitively learn at least one of the cosmetics and the makeup method, matched to his or her facial features, to use under the target illumination intensity, which improves the user experience.
In one embodiment, as shown in Fig. 1c, the image processing method provided by the embodiments of the present disclosure further includes step 105:
In step 105, a makeup parameter set is obtained.
The makeup parameter set includes at least one makeup parameter.
Exemplarily, when the embodiments of the present disclosure are applied to a terminal, the makeup parameter set may be obtained as input from the user through a human-computer interaction device on the terminal, such as a keyboard, a touch screen or a microphone, or it may be obtained from another device or system. When the embodiments are applied to a server, the server may obtain the makeup parameter set from a terminal, or from another device or system.
It should be noted that the makeup parameters in the makeup parameter set can be understood as makeup parameters selected in advance by the user, or as makeup parameters preset according to empirical values.
In step 102, obtaining the target makeup parameter corresponding to the target illumination intensity can be implemented by step 1022:
In step 1022, the target makeup parameter corresponding to the target illumination intensity is determined from the at least one makeup parameter.
Exemplarily, this determination may involve obtaining a makeup parameter database indicating the correspondence between at least one makeup parameter and at least one illumination intensity, the at least one illumination intensity including the target illumination intensity; determining, according to the database, the makeup parameter corresponding to the target illumination intensity; and, when the makeup parameter set includes that makeup parameter, determining it to be the target makeup parameter, as sketched below.
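A minimal sketch of step 1022 under stated assumptions: the database entries matching the target illumination intensity are intersected with the user-supplied makeup parameter set, and any match becomes the target makeup parameter. The set representation is an assumption.

```python
# A sketch of the step-1022 selection: keep only parameters that appear both
# in the user's makeup parameter set and in the database rows for the lux.
from typing import Optional, Set

def pick_target_parameter(user_set: Set[str],
                          db_match_for_lux: Set[str]) -> Optional[str]:
    """Return a makeup parameter present in both sets, or None."""
    candidates = user_set & db_match_for_lux
    return next(iter(candidates), None)
```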
By obtaining the makeup parameter set and determining the target makeup parameter corresponding to the target illumination intensity from the at least one makeup parameter, the value range of the target makeup parameter can be controlled and the user is enabled to define that range, which gives the user more room for choice during image processing and improves the user experience.
In one embodiment, as shown in Fig. 1d, the image processing method provided by the embodiments of the present disclosure further includes step 106:
In step 106, target environment information is obtained, and a target environment is determined according to the target environment information.
The target environment information indicates the target environment.
Exemplarily, when the embodiments of the present disclosure are applied to a terminal, the target environment information may be obtained as input from the user through a human-computer interaction device on the terminal, such as a keyboard, a touch screen or a microphone; it may be read from environment information pre-stored on the terminal; or it may be obtained from another device or system, such as a smart camera. When the embodiments are applied to a server, the server may obtain the target environment information from a terminal, or from another device or system.
It should be noted that the target environment can be understood as an environment the user often visits, or an environment the user needs to go to, such as a concert venue, a concert hall or a meeting room.
In step 101, obtaining the target illumination intensity can be implemented by step 1011:
In step 1011, a target illumination intensity corresponding to the target environment is obtained.
Exemplarily, this may involve obtaining an ambient light intensity database indicating the correspondence between at least one environment label and at least one illumination intensity, the at least one environment label including the label of the target environment; the target environment label indicating the target environment can be determined among the at least one environment label according to the database, and the illumination intensity corresponding to that label in the database determined to be the target illumination intensity, as sketched below.
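A minimal sketch of step 1011 follows, assuming an ambient-light database that maps environment labels to representative illumination intensities in lux; the labels and lux values are illustrative.

```python
# A sketch of the step-1011 lookup from environment label to illumination
# intensity; the table contents are hypothetical.
_ENV_LUX = {"concert_hall": 80.0, "meeting_room": 400.0,
            "outdoor_daytime": 20_000.0}

def get_target_illumination(environment_label: str,
                            default_lux: float = 300.0) -> float:
    """Return the illumination intensity recorded for the target environment."""
    return _ENV_LUX.get(environment_label, default_lux)
```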
By obtaining the target environment information, determining the target environment according to it, and obtaining the target illumination intensity corresponding to the target environment, it can be ensured that the target illumination intensity matches the designated environment, and hence that the target image can show the makeup effect the user would achieve in the target environment by using at least one of the cosmetics and the makeup method suited to that environment. The user can thus intuitively learn at least one of the cosmetics and the makeup method to use in the target environment, which improves the user experience.
In one embodiment, as shown in Fig. 1e, step 106, obtaining the target environment information and determining the target environment according to it, can be implemented by step 1061:
In step 1061, movement trajectory information of the user is obtained, the target environment information is obtained according to the movement trajectory information, and the target environment is determined according to the target environment information.
The movement trajectory information of the user can indicate the user's movement trajectory within a scheduled period, from which the places the user visited within that period can be determined. For example, the movement trajectory information may include the user's position coordinates for every minute of the scheduled period.
Exemplarily, when the embodiments of the present disclosure are applied to a terminal, the movement trajectory information may be obtained through a positioning module on the terminal, such as a GPS module or an inertial navigation module; it may be read from movement trajectory information pre-stored on the terminal; or it may be obtained from another device or system, such as a smart watch or a smart band. When the embodiments are applied to a server, the server may obtain the target environment information from a terminal, or from another device or system.
Obtaining the target environment information according to the movement trajectory information may involve determining, from the trajectory, the places the user visited and then obtaining the target environment information corresponding to those places. The places the user visited may be determined by querying a place database with the user's per-minute position coordinates for the scheduled period; the target environment information corresponding to those places may then be obtained by querying a place-and-environment-information database. One way to realise this is sketched below.
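The following sketch of step 1061 rests on assumptions: per-minute trajectory samples are matched to the nearest known place, and the most frequently visited place yields the target environment. The place table and the nearest-coordinate rule are invented for illustration.

```python
# A sketch mapping a movement trajectory to a target environment label.
import math
from collections import Counter
from typing import List, Tuple

_PLACES = {"concert_hall": (39.904, 116.407), "meeting_room": (39.915, 116.404)}

def infer_target_environment(track: List[Tuple[float, float]]) -> str:
    """track: (lat, lon) samples, e.g. one per minute over the scheduled period."""
    if not track:
        raise ValueError("empty trajectory")
    def nearest(pt):
        return min(_PLACES, key=lambda name: math.dist(pt, _PLACES[name]))
    visits = Counter(nearest(p) for p in track)
    return visits.most_common(1)[0][0]
```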
By obtaining the user's movement trajectory information and obtaining the target environment information from it, it can be ensured that the target environment information indicates an environment the user has visited or often visits, and hence that the target image can show the makeup effect the user would achieve in such an environment by using at least one of the suitable cosmetics and the suitable makeup method, thereby improving the user experience.
In one embodiment, as shown in Fig. 1f, the image processing method provided by the embodiments of the present disclosure further includes steps 107 and 108:
In step 107, face recognition is performed on the target image, and a non-face region in the target image is determined according to the face recognition result.
The non-face region is a region that does not include a face.
Exemplarily, face recognition may be performed on the target image with a face detection algorithm, such as the AdaBoost algorithm, and the face image region and the non-face image region in the target image may be determined according to the recognition result, where the non-face image region, i.e. the non-face region, is the region that does not include a face. A sketch of this step follows.
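As one concrete realisation, OpenCV's Haar-cascade detector (an AdaBoost-based face detector, consistent with the algorithm named above) can separate face regions from the non-face region; the mask-based formulation is an assumption for illustration.

```python
# A sketch of step 107: build a boolean mask that is True inside detected
# face rectangles; everything outside the mask is the non-face region.
import cv2
import numpy as np

def face_mask(target_image_bgr: np.ndarray) -> np.ndarray:
    """Return a boolean HxW mask that is True inside detected face rectangles."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(gray.shape, dtype=bool)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        mask[y:y + h, x:x + w] = True
    return mask  # the non-face region is simply ~mask
```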
In step 108, the non-face region in the target image is replaced with an environment background corresponding to the target environment.
Exemplarily, when the embodiments of the present disclosure are applied to a terminal, the environment background corresponding to the target environment may be pre-stored on the terminal, or obtained from a server or from another device or system, such as a smart camera in the target environment. When the embodiments are applied to a server, the environment background may be pre-stored on the server, or obtained by the server from another device or system.
For example, an environment background database and multiple environment backgrounds are stored on the terminal in advance, the database including at least one environment background identifier and at least one environment label. By determining the target environment label corresponding to the target environment, querying the database for the target environment background identifier corresponding to that label, and reading on the terminal the target environment background indicated by that identifier, the non-face region in the target image can be replaced with the target environment background, as sketched below.
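A minimal sketch of step 108 under assumptions: pixels outside the face mask are overwritten with the environment background retrieved for the target environment, and the background is assumed to match the image's shape.

```python
# A sketch of the step-108 replacement using the mask from the previous sketch.
import numpy as np

def replace_background(target_image: np.ndarray, mask: np.ndarray,
                       env_background: np.ndarray) -> np.ndarray:
    """mask is True in face regions; env_background has the same HxWx3 shape."""
    out = target_image.copy()
    out[~mask] = env_background[~mask]
    return out
```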
By performing face recognition on the target image, determining the non-face region according to the face recognition result, and replacing the non-face region with the environment background corresponding to the target environment, it can be ensured that the target image shows an environment the user has visited or often visits, so that the user can intuitively see whether the environment background clashes with the makeup effect obtained with the cosmetics or makeup method suited to that environment, thereby improving the user experience.
The implementation process is described in detail below through embodiments.
Fig. 2 is a schematic flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 2, the method includes the following steps:
In step 201, movement trajectory information of the user is obtained, and target environment information is obtained according to the movement trajectory information.
In step 202, a target environment is determined according to the target environment information.
In step 203, a target illumination intensity corresponding to the target environment is obtained.
In step 204, face feature recognition is performed on the to-be-processed image, and a target face feature parameter of the face in the to-be-processed image is obtained according to the face feature recognition result.
In step 205, a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter is obtained.
The target makeup parameter indicates at least one of target cosmetics and a target makeup method.
In step 206, the to-be-processed image that includes a face is processed according to the target makeup parameter to obtain a target image.
Fig. 3 is a schematic flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 3, the method includes the following steps:
In step 301, movement trajectory information of the user is obtained, and target environment information is obtained according to the movement trajectory information.
In step 302, a target environment is determined according to the target environment information.
In step 303, a target illumination intensity corresponding to the target environment is obtained.
In step 304, a makeup parameter set is obtained.
In step 305, face feature recognition is performed on the to-be-processed image, and a target face feature parameter of the face in the to-be-processed image is obtained according to the face feature recognition result.
In step 306, a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter is determined from the at least one makeup parameter.
The target makeup parameter indicates at least one of target cosmetics and a target makeup method.
In step 307, the to-be-processed image that includes a face is processed according to the target makeup parameter to obtain a target image.
In step 308, face recognition is performed on the target image, and a non-face region in the target image is determined according to the face recognition result.
The non-face region is a region that does not include a face.
In step 309, the non-face region in the target image is replaced with an environment background corresponding to the target environment. The whole flow is sketched end to end below.
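To show how the steps compose, here is an end-to-end sketch of the Fig. 3 flow; every helper is a trivial stub standing in for the techniques detailed earlier, and all names and return values are illustrative only.

```python
# An orchestration sketch of steps 301-309; each stub marks its step number.
def infer_environment(track):
    return "concert_hall"                      # steps 301-302

def lookup_illumination(env):
    return 80.0                                # step 303

def recognise_face_feature(image):
    return "round"                             # step 305

def pick_makeup_parameter(user_set, lux, feature):
    return next(iter(user_set), None)          # steps 304, 306

def apply_makeup(image, parameter):
    return image                               # step 307

def split_face_mask(image):
    return None                                # step 308

def replace_non_face(image, mask, env):
    return image                               # step 309

def process(pending_image, track, user_makeup_set):
    env = infer_environment(track)
    lux = lookup_illumination(env)
    feature = recognise_face_feature(pending_image)
    parameter = pick_makeup_parameter(user_makeup_set, lux, feature)
    target = apply_makeup(pending_image, parameter)
    mask = split_face_mask(target)
    return replace_non_face(target, mask, env)
```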
By obtaining the target illumination intensity and the corresponding target makeup parameter, which indicates at least one of target cosmetics and a target makeup method suited to use under the target illumination intensity, and by processing the to-be-processed image that includes a face according to the target makeup parameter to obtain the target image, it can be ensured that the target image shows the effect the user would achieve, under the target illumination intensity, by making up with at least one of the target cosmetics and the target makeup method. The user can thus intuitively learn at least one of the cosmetics and the makeup method to use under the target illumination intensity, which improves the user experience.
The following are device embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure.
Fig. 4a is a block diagram of an image processing device 40 according to an exemplary embodiment. The image processing device 40 may be a terminal or part of a terminal, or a server or part of a server, and may be implemented as some or all of an electronic device through software, hardware or a combination of both. As shown in Fig. 4a, the image processing device 40 includes:
a light intensity acquisition module 401, configured to obtain a target illumination intensity;
a makeup parameter acquisition module 402, configured to obtain a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter indicating at least one of target cosmetics and a target makeup method; and
an image processing module 403, configured to process a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image.
As shown in Fig. 4b, in one embodiment, the image processing device 40 further includes:
a face feature recognition module 404, configured to perform face feature recognition on the to-be-processed image and obtain a target face feature parameter of the face in the to-be-processed image according to the face feature recognition result;
wherein the makeup parameter acquisition module 402 includes:
a first makeup parameter acquisition submodule 4021, configured to obtain a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter.
As shown in Fig. 4c, in one embodiment, the image processing device 40 further includes:
a makeup parameter set acquisition module 405, configured to obtain a makeup parameter set including at least one makeup parameter;
wherein the makeup parameter acquisition module 402 includes:
a second makeup parameter acquisition submodule 4022, configured to determine, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
As shown in Fig. 4d, in one embodiment, the image processing device 40 further includes:
an environment information acquisition module 406, configured to obtain target environment information and determine a target environment according to the target environment information;
wherein the light intensity acquisition module 401 includes:
a light intensity acquisition submodule 4011, configured to obtain a target illumination intensity corresponding to the target environment.
As shown in Fig. 4e, in one embodiment, the environment information acquisition module 406 includes:
an environment information acquisition submodule 4061, configured to obtain movement trajectory information of the user and obtain the target environment information according to the movement trajectory information.
As shown in Fig. 4f, in one embodiment, the image processing device 40 further includes:
a face recognition module 407, configured to perform face recognition on the target image and determine a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face; and
a background replacement module 408, configured to replace the non-face region in the target image with an environment background corresponding to the target environment.
Embodiments of the present disclosure provide an image processing device. By obtaining the target illumination intensity and the corresponding target makeup parameter, which indicates at least one of target cosmetics and a target makeup method suited to use under the target illumination intensity, and by processing the to-be-processed image that includes a face according to the target makeup parameter to obtain the target image, the device can ensure that the target image shows the makeup effect the user would achieve, under the target illumination intensity, by using at least one of the target cosmetics and the target makeup method, so that the user can intuitively learn at least one of the cosmetics and the makeup method to use under the target illumination intensity, thereby improving the user experience.
Fig. 5 is a block diagram of an image processing device 50 according to an exemplary embodiment. The image processing device 50 may be a terminal or part of a terminal, or a server or part of a server, and includes:
a processor 501; and
a memory 502 for storing instructions executable by the processor 501;
wherein the processor 501 is configured to:
obtain a target illumination intensity;
obtain a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter indicating at least one of target cosmetics and a target makeup method; and
process a to-be-processed image that includes a face according to the target makeup parameter to obtain a target image.
In one embodiment, the processor 501 may be further configured to:
perform face feature recognition on the to-be-processed image, and obtain a target face feature parameter of the face in the to-be-processed image according to the face feature recognition result;
wherein obtaining the target makeup parameter corresponding to the target illumination intensity includes:
obtaining a target makeup parameter corresponding to both the target illumination intensity and the target face feature parameter.
In one embodiment, the processor 501 may be further configured to:
obtain a makeup parameter set, the makeup parameter set including at least one makeup parameter;
wherein obtaining the target makeup parameter corresponding to the target illumination intensity includes:
determining, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
In one embodiment, the processor 501 may be further configured to:
obtain target environment information, and determine a target environment according to the target environment information;
wherein obtaining the target illumination intensity includes:
obtaining a target illumination intensity corresponding to the target environment.
In one embodiment, the processor 501 may be further configured to:
obtain movement trajectory information of the user, and obtain the target environment information according to the movement trajectory information.
In one embodiment, the processor 501 may be further configured to:
perform face recognition on the target image, and determine a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face; and
replace the non-face region in the target image with an environment background corresponding to the target environment.
Embodiments of the present disclosure thus provide an image processing device which, by obtaining the target illumination intensity and the corresponding target makeup parameter, indicating at least one of target cosmetics and a target makeup method suited to use under the target illumination intensity, and by processing the to-be-processed image that includes a face according to the target makeup parameter to obtain the target image, ensures that the target image shows the effect the user would achieve, under the target illumination intensity, by making up with at least one of the target cosmetics and the target makeup method, so that the user can intuitively learn at least one of the cosmetics and the makeup method to use under the target illumination intensity, thereby improving the user experience.
Fig. 6 is a block diagram of a device 600 for image processing according to an exemplary embodiment. The device 600 may be a terminal, for example a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment or a personal digital assistant.
The device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614 and a communication component 616.
The processing component 602 generally controls the overall operation of the device 600, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 602 may include one or more processors 620 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 602 may include one or more modules to facilitate interaction between itself and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation on the device 600. Examples of such data include instructions for any application or method operated on the device 600, contact data, phonebook data, messages, pictures, video and so on. The memory 604 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 606 provides power for the various components of the device 600, and may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and the user. In some embodiments the screen may include a liquid crystal display (LCD) and a touch panel (TP); if the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the panel; the touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with it. In some embodiments the multimedia component 608 includes a front camera and/or a rear camera, which can receive external multimedia data when the device 600 is in an operating mode such as a shooting mode or a video mode. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
Audio-frequency assembly 610 is configured as output and/or input audio signal.For example, audio-frequency assembly 610 includes a Mike Wind (MIC), when device 600 is in operator scheme, during such as call model, logging mode and speech recognition mode, microphone by with It is set to reception external audio signal.The audio signal received can be further stored in memory 604 or via communication set Part 616 is sent.In certain embodiments, audio-frequency assembly 610 also includes a loudspeaker, for exports audio signal.
I/O interfaces 612 provide interface between processing component 602 and peripheral interface module, and above-mentioned peripheral interface module can To be keyboard, click wheel, button etc..These buttons may include but be not limited to:Home button, volume button, start button and lock Determine button.
Sensor cluster 614 includes one or more sensors, and the state for providing various aspects for device 600 is commented Estimate.For example, sensor cluster 614 can detect opening/closed mode of device 600, and the relative positioning of component, for example, it is described Component is the display and keypad of device 600, and sensor cluster 614 can be with 600 1 components of detection means 600 or device Position change, the existence or non-existence that user contacts with device 600, the orientation of device 600 or acceleration/deceleration and device 600 Temperature change.Sensor cluster 614 can include proximity transducer, be configured to detect in no any physical contact The presence of neighbouring object.Sensor cluster 614 can also include optical sensor, such as CMOS or ccd image sensor, for into As being used in application.In certain embodiments, the sensor cluster 614 can also include acceleration transducer, gyro sensors Device, Magnetic Sensor, pressure sensor or temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In exemplary embodiments, the device 600 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, executable by the processor 620 of the device 600 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium is also provided, having stored therein instructions that, when executed by the processor of the device 600, cause the device 600 to perform the above image processing method, the method including:
obtaining a target illumination intensity;
obtaining a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter being at least used to indicate at least one of a target cosmetic and a target makeup manner; and
processing a to-be-processed image including a face according to the target makeup parameter to obtain a target image. One possible realization of these three steps is sketched below.
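The three recited steps amount to a small lookup-and-render loop. The sketch below is only one possible realization: the lux thresholds, the MAKEUP_PARAMS table, and the apply_makeup renderer are illustrative assumptions, since the disclosure does not fix how parameters are stored or how rendering is performed.

```python
# Minimal sketch of the claimed pipeline; bucket thresholds, the parameter
# table, and the renderer are illustrative assumptions, not from the patent.
import cv2  # OpenCV, used here only to load the to-be-processed image

# Hypothetical mapping from illumination buckets to makeup parameters.
MAKEUP_PARAMS = {
    "dim":    {"cosmetic": "matte_foundation",   "manner": "light_blush"},
    "indoor": {"cosmetic": "natural_foundation", "manner": "standard"},
    "bright": {"cosmetic": "spf_foundation",     "manner": "strong_contour"},
}

def bucket_illumination(lux: float) -> str:
    """Quantize a sensor reading (lux) into a coarse illumination bucket."""
    if lux < 100:
        return "dim"
    if lux < 5000:
        return "indoor"
    return "bright"

def apply_makeup(image, params):
    # Placeholder for the rendering step, which the disclosure leaves open.
    return image

def process_image(path: str, lux: float):
    params = MAKEUP_PARAMS[bucket_illumination(lux)]  # step 2: look up parameter
    image = cv2.imread(path)                          # to-be-processed image with a face
    return apply_makeup(image, params)                # step 3: obtain target image
```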
In one embodiment, the method further includes:
performing facial feature recognition on the to-be-processed image, and obtaining a target facial feature parameter of the face in the to-be-processed image according to the facial feature recognition result;
where obtaining the target makeup parameter corresponding to the target illumination intensity includes:
obtaining the target makeup parameter corresponding to both the target illumination intensity and the target facial feature parameter. An illustrative joint lookup is sketched below.
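One plausible reading of this embodiment keys the parameter table jointly on the illumination bucket and a coarse facial feature category (face shape is used here purely as an example); the categories and table contents below are assumptions, not values from the disclosure.

```python
# Joint lookup keyed by illumination bucket and facial feature category; both
# the feature categories and the table contents are illustrative assumptions.
MAKEUP_PARAMS_BY_FACE = {
    ("dim", "round"):    {"cosmetic": "matte_foundation", "manner": "slim_contour"},
    ("dim", "oval"):     {"cosmetic": "matte_foundation", "manner": "light_blush"},
    ("bright", "round"): {"cosmetic": "spf_foundation",   "manner": "slim_contour"},
    ("bright", "oval"):  {"cosmetic": "spf_foundation",   "manner": "standard"},
}

def lookup_params(illumination_bucket: str, face_shape: str):
    """Return the makeup parameter matching both keys, or None if absent."""
    return MAKEUP_PARAMS_BY_FACE.get((illumination_bucket, face_shape))
```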
In one embodiment, the method further includes:
obtaining a makeup parameter set, the makeup parameter set including at least one makeup parameter;
where obtaining the target makeup parameter corresponding to the target illumination intensity includes:
determining, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity. An illustrative selection rule is sketched below.
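One way to "determine" the target parameter from the set is a nearest-neighbour rule over the illumination each parameter was designed for. The rule and the per-entry lux field are assumptions; the disclosure leaves the selection criterion open.

```python
# Each makeup parameter in the set carries the illumination (lux) it was
# designed for; the nearest-neighbour selection rule is an assumption.
def select_target_param(makeup_params: list[dict], target_lux: float) -> dict:
    return min(makeup_params, key=lambda p: abs(p["lux"] - target_lux))

param_set = [
    {"lux": 50,    "cosmetic": "matte_foundation",   "manner": "light_blush"},
    {"lux": 800,   "cosmetic": "natural_foundation", "manner": "standard"},
    {"lux": 20000, "cosmetic": "spf_foundation",     "manner": "strong_contour"},
]
target = select_target_param(param_set, target_lux=650)  # -> the 800-lux entry
```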
In one embodiment, the method further includes:
obtaining target environment information, and determining a target environment according to the target environment information;
where obtaining the target illumination intensity includes:
obtaining the target illumination intensity corresponding to the target environment. An illustrative environment-to-illumination mapping is sketched below.
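A static table mapping recognized environments to a representative illumination intensity is one straightforward reading of this embodiment; the environment labels and lux values below are rough illustrative magnitudes, not figures from the disclosure.

```python
# Hypothetical environment-to-illumination table; the lux values are rough
# real-world magnitudes chosen for illustration only.
ENV_ILLUMINATION = {
    "office":        500.0,    # typical indoor office lighting
    "restaurant":    150.0,    # dim ambient lighting
    "outdoor_day":   20000.0,  # overcast-to-sunny daylight
    "outdoor_night": 10.0,     # street lighting
}

def illumination_for(environment: str) -> float:
    """Return the representative lux for an environment, defaulting to indoor."""
    return ENV_ILLUMINATION.get(environment, 500.0)
```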
In one embodiment, obtaining the target environment information includes:
obtaining motion track information of the user, and obtaining the target environment information according to the motion track information. One possible inference from a motion track is sketched below.
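This embodiment can be read as: take the most recent point of the user's motion track and treat the nearest known venue as the target environment. The venue registry, the distance threshold, and the equirectangular distance approximation are all illustrative assumptions.

```python
import math

# Hypothetical venue registry: (latitude, longitude) -> environment label.
VENUES = {
    (39.9087, 116.3975): "office",
    (39.9151, 116.4039): "restaurant",
}

def infer_environment(track: list[tuple[float, float]], max_km: float = 0.5):
    """Map the last point of a motion track to the nearest known venue."""
    lat, lon = track[-1]
    best, best_km = None, max_km
    for (vlat, vlon), env in VENUES.items():
        # Equirectangular approximation; adequate at sub-kilometre scales.
        dx = math.radians(vlon - lon) * math.cos(math.radians(lat))
        dy = math.radians(vlat - lat)
        km = 6371.0 * math.hypot(dx, dy)
        if km < best_km:
            best, best_km = env, km
    return best  # None if no registered venue is close enough
```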
In one embodiment, the method further includes:
performing face recognition on the target image, and determining a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face;
replacing the non-face region in the target image with an environment background corresponding to the target environment. An illustrative replacement routine is sketched below.
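The background replacement can be sketched with a stock face detector: detect face rectangles, treat everything outside them as the non-face region, and composite the environment background there. The Haar-cascade detector and the rectangle mask are stand-ins; the disclosure does not prescribe a detector, and a production system would use finer segmentation.

```python
import cv2
import numpy as np

def replace_background(target_image, env_background):
    """Keep detected face regions; fill the rest with the environment background."""
    # Stock OpenCV frontal-face Haar cascade, used as an illustrative detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    mask = np.zeros(target_image.shape[:2], dtype=bool)
    for (x, y, w, h) in faces:
        mask[y:y + h, x:x + w] = True  # mark face rectangles as "keep"

    background = cv2.resize(env_background,
                            (target_image.shape[1], target_image.shape[0]))
    result = background.copy()
    result[mask] = target_image[mask]  # paste face pixels over the background
    return result
```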
Fig. 7 is a block diagram of a device 700 for image processing according to an exemplary embodiment. For example, the device 700 may be provided as a server. The device 700 includes a processing component 722, which further includes one or more processors, and memory resources represented by a memory 732 for storing instructions executable by the processing component 722, such as application programs. The application programs stored in the memory 732 may include one or more modules, each corresponding to a set of instructions. Furthermore, the processing component 722 is configured to execute the instructions to perform the above method.
The device 700 may also include a power supply component 726 configured to perform power management of the device 700, a wired or wireless network interface 750 configured to connect the device 700 to a network, and an input/output (I/O) interface 758. The device 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer-readable storage medium is also provided, having stored therein instructions that, when executed by the processor of the device 700, cause the device 700 to perform the above image processing method, the method including:
obtaining a target illumination intensity;
obtaining a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter being at least used to indicate at least one of a target cosmetic and a target makeup manner; and
processing a to-be-processed image including a face according to the target makeup parameter to obtain a target image.
In one embodiment, the method further includes:
performing facial feature recognition on the to-be-processed image, and obtaining a target facial feature parameter of the face in the to-be-processed image according to the facial feature recognition result;
where obtaining the target makeup parameter corresponding to the target illumination intensity includes:
obtaining the target makeup parameter corresponding to both the target illumination intensity and the target facial feature parameter.
In one embodiment, the method further includes:
obtaining a makeup parameter set, the makeup parameter set including at least one makeup parameter;
where obtaining the target makeup parameter corresponding to the target illumination intensity includes:
determining, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
In one embodiment, the method further includes:
obtaining target environment information, and determining a target environment according to the target environment information;
where obtaining the target illumination intensity includes:
obtaining the target illumination intensity corresponding to the target environment.
In one embodiment, obtaining the target environment information includes:
obtaining motion track information of the user, and obtaining the target environment information according to the motion track information.
In one embodiment, the method further includes:
performing face recognition on the target image, and determining a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face;
replacing the non-face region in the target image with an environment background corresponding to the target environment.
Other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or conventional technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise construction described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (14)

  1. An image processing method, characterized in that the method comprises:
    obtaining a target illumination intensity;
    obtaining a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter being at least used to indicate at least one of a target cosmetic and a target makeup manner; and
    processing a to-be-processed image including a face according to the target makeup parameter to obtain a target image.
  2. The image processing method according to claim 1, characterized in that the method further comprises:
    performing facial feature recognition on the to-be-processed image, and obtaining a target facial feature parameter of the face in the to-be-processed image according to the facial feature recognition result;
    wherein obtaining the target makeup parameter corresponding to the target illumination intensity comprises:
    obtaining the target makeup parameter corresponding to the target illumination intensity and the target facial feature parameter.
  3. The image processing method according to claim 1, characterized in that the method further comprises:
    obtaining a makeup parameter set, the makeup parameter set including at least one makeup parameter;
    wherein obtaining the target makeup parameter corresponding to the target illumination intensity comprises:
    determining, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
  4. The image processing method according to claim 1, characterized in that the method further comprises:
    obtaining target environment information, and determining a target environment according to the target environment information;
    wherein obtaining the target illumination intensity comprises:
    obtaining the target illumination intensity corresponding to the target environment.
  5. The image processing method according to claim 4, characterized in that obtaining the target environment information comprises:
    obtaining motion track information of the user, and obtaining the target environment information according to the motion track information.
  6. The image processing method according to claim 4, characterized in that the method further comprises:
    performing face recognition on the target image, and determining a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face; and
    replacing the non-face region in the target image with an environment background corresponding to the target environment.
  7. An image processing apparatus, characterized in that the apparatus comprises:
    a light intensity obtaining module, configured to obtain a target illumination intensity;
    a makeup parameter obtaining module, configured to obtain a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter being at least used to indicate at least one of a target cosmetic and a target makeup manner; and
    an image processing module, configured to process a to-be-processed image including a face according to the target makeup parameter to obtain a target image.
  8. The image processing apparatus according to claim 7, characterized in that the apparatus further comprises:
    a facial feature recognition module, configured to perform facial feature recognition on the to-be-processed image and obtain a target facial feature parameter of the face in the to-be-processed image according to the facial feature recognition result;
    wherein the makeup parameter obtaining module comprises:
    a first makeup parameter obtaining submodule, configured to obtain the target makeup parameter corresponding to the target illumination intensity and the target facial feature parameter.
  9. The image processing apparatus according to claim 7, characterized in that the apparatus further comprises:
    a makeup parameter set obtaining module, configured to obtain a makeup parameter set, the makeup parameter set including at least one makeup parameter;
    wherein the makeup parameter obtaining module comprises:
    a second makeup parameter obtaining submodule, configured to determine, from the at least one makeup parameter, the target makeup parameter corresponding to the target illumination intensity.
  10. The image processing apparatus according to claim 7, characterized in that the apparatus further comprises:
    an environment information obtaining module, configured to obtain target environment information and determine a target environment according to the target environment information;
    wherein the light intensity obtaining module comprises:
    a light intensity obtaining submodule, configured to obtain the target illumination intensity corresponding to the target environment.
  11. The image processing apparatus according to claim 10, characterized in that the environment information obtaining module comprises:
    an environment information obtaining submodule, configured to obtain motion track information of the user and obtain the target environment information according to the motion track information.
  12. The image processing apparatus according to claim 10, characterized in that the apparatus further comprises:
    a face recognition module, configured to perform face recognition on the target image and determine a non-face region in the target image according to the face recognition result, the non-face region being a region that does not include a face; and
    a background replacement module, configured to replace the non-face region in the target image with an environment background corresponding to the target environment.
  13. An image processing apparatus, characterized in that the apparatus comprises:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    obtain a target illumination intensity;
    obtain a target makeup parameter corresponding to the target illumination intensity, the target makeup parameter being at least used to indicate at least one of a target cosmetic and a target makeup manner; and
    process a to-be-processed image including a face according to the target makeup parameter to obtain a target image.
  14. A computer-readable storage medium having stored thereon computer instructions, characterized in that the instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 6.
CN201710954364.7A 2017-10-13 2017-10-13 Image processing method and device Pending CN107705245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710954364.7A CN107705245A (en) 2017-10-13 2017-10-13 Image processing method and device

Publications (1)

Publication Number Publication Date
CN107705245A 2018-02-16

Family

ID=61184894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710954364.7A Pending CN107705245A (en) 2017-10-13 2017-10-13 Image processing method and device

Country Status (1)

Country Link
CN (1) CN107705245A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279925A (en) * 2011-08-25 2011-12-14 三峡大学 Chain processing face recognition method and system
CN104424483A (en) * 2013-08-21 2015-03-18 中移电子商务有限公司 Face image illumination preprocessing method, face image illumination preprocessing device and terminal
CN105210110A (en) * 2013-02-01 2015-12-30 松下知识产权经营株式会社 Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program
CN105431852A (en) * 2014-03-14 2016-03-23 三星电子株式会社 Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium
CN105469407A (en) * 2015-11-30 2016-04-06 华南理工大学 Facial image layer decomposition method based on improved guide filter
CN105678232A (en) * 2015-12-30 2016-06-15 中通服公众信息产业股份有限公司 Face image feature extraction and comparison method based on deep learning
CN105787981A (en) * 2016-02-25 2016-07-20 上海斐讯数据通信技术有限公司 Method and system for assisting in makeup through mobile terminal

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965849A (en) * 2018-07-27 2018-12-07 北京小米移动软件有限公司 Image processing method, apparatus and system
CN108965849B (en) * 2018-07-27 2021-05-04 北京小米移动软件有限公司 Image processing method, device and system
CN110929146A (en) * 2019-10-23 2020-03-27 阿里巴巴集团控股有限公司 Data processing method, device, equipment and storage medium
CN110929146B (en) * 2019-10-23 2024-04-02 阿里巴巴集团控股有限公司 Data processing method, device, equipment and storage medium
CN111932332A (en) * 2020-06-04 2020-11-13 北京旷视科技有限公司 Virtual makeup trial method, device, electronic equipment and computer readable medium
CN111932332B (en) * 2020-06-04 2023-04-21 北京旷视科技有限公司 Virtual makeup testing method, virtual makeup testing device, electronic equipment and computer readable medium
CN111797775A (en) * 2020-07-07 2020-10-20 云知声智能科技股份有限公司 Recommendation method and device for image design and intelligent mirror
CN112712479A (en) * 2020-12-24 2021-04-27 厦门美图之家科技有限公司 Dressing method, system, mobile terminal and storage medium
CN112712479B (en) * 2020-12-24 2024-07-30 厦门美图之家科技有限公司 Dressing processing method, system, mobile terminal and storage medium
CN115239575A (en) * 2022-06-06 2022-10-25 荣耀终端有限公司 Beautifying method and device
CN115239575B (en) * 2022-06-06 2023-10-27 荣耀终端有限公司 Beautifying method and device

Similar Documents

Publication Publication Date Title
CN110929651B (en) Image processing method, image processing device, electronic equipment and storage medium
CN107705245A (en) Image processing method and device
KR102661019B1 (en) Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof
CN106156730B (en) A kind of synthetic method and device of facial image
CN105512605B (en) Face image processing process and device
US20160357578A1 (en) Method and device for providing makeup mirror
CN105469356B (en) Face image processing process and device
CN107302662A (en) A kind of method, device and mobile terminal taken pictures
CN105357425B (en) Image capturing method and device
CN107622472A (en) Face dressing moving method and device
CN108876732A (en) Face U.S. face method and device
CN107368810A (en) Method for detecting human face and device
WO2016183047A1 (en) Systems and methods of updating user identifiers in an image-sharing environment
CN113569614A (en) Virtual image generation method, device, equipment and storage medium
CN105095917B (en) Image processing method, device and terminal
CN107240143A (en) Bag generation method of expressing one's feelings and device
CN108132983A (en) The recommendation method and device of clothing matching, readable storage medium storing program for executing, electronic equipment
CN107369142A (en) Image processing method and device
CN109523461A (en) Method, apparatus, terminal and the storage medium of displaying target image
CN107529699A (en) Control method of electronic device and device
WO2019184679A1 (en) Method and device for implementing game, storage medium, and electronic apparatus
CN109117819B (en) Target object identification method and device, storage medium and wearable device
CN109451235B (en) Image processing method and mobile terminal
CN112114653A (en) Terminal device control method, device, equipment and storage medium
CN111373409A (en) Method and terminal for acquiring color value change

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination