CN109461118A - A kind of image processing method and device - Google Patents
- Publication number
- CN109461118A CN109461118A CN201811339208.0A CN201811339208A CN109461118A CN 109461118 A CN109461118 A CN 109461118A CN 201811339208 A CN201811339208 A CN 201811339208A CN 109461118 A CN109461118 A CN 109461118A
- Authority
- CN
- China
- Prior art keywords
- picture
- target site
- personage
- processed
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/04—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The present invention provides an image processing method and device. A to-be-processed person picture is obtained, and a first picture of each target site is extracted from it. For any target site, the first picture of the target site is used as the input of the preset identification model corresponding to the target site, and the identifier of the target site output by the preset identification model is obtained; based on the identifier, a second picture that displays the target site in a special way is obtained from a preset picture library. The skin color, hair color and pupil color of the to-be-processed person picture are determined, and a person picture displayed with a special image is obtained based on the second picture of each target site together with the skin color, hair color and pupil color. The to-be-processed person picture is thereby converted automatically into a person picture displayed with a special image, so that a user without an art foundation can still obtain a special image meeting the user's aesthetic requirements, while the complexity of the conversion is reduced.
Description
Technical field
The invention belongs to the field of image processing technologies, and more particularly relates to an image processing method and device.
Background technique
At present, for pictures such as selfies, an APP (Application) with a picture-retouching function installed on a terminal can perform special treatments on a person in a picture, such as color toning, adding special effects and face slimming; for a cartoon figure, the facial features of the cartoon figure can likewise be modified through an APP with a picture-retouching function. However, existing APPs with a picture-retouching function cannot convert the face in a person picture into a face with a special image; for example, they cannot automatically synthesize the face in a person picture into a face with a cartoon image.
Summary of the invention
In view of this, the purpose of the present invention is to provide an image processing method and device for converting a to-be-processed person picture into a person picture displayed with a special image. The technical solution is as follows:
The present invention provides an image processing method, comprising:
obtaining a to-be-processed person picture;

extracting a first picture of each target site from the to-be-processed person picture, each target site comprising at least one of a nose, a mouth, eyes, ears, eyebrows and hair;

for any target site among the target sites: using the first picture of the target site as the input of the preset identification model corresponding to the target site, and obtaining the identifier of the target site output by the preset identification model, wherein the preset identification model corresponding to the target site is trained by taking the first picture of the target site extracted from a historical person picture as input and the identifier labeled for the target site as output;

for any target site among the target sites: obtaining a second picture of the target site from a preset picture library based on the identifier of the target site, wherein the second picture of the target site displays the target site with a special image;

determining the skin color, hair color and pupil color of the to-be-processed person picture;

obtaining a person picture displayed with a special image based on the second picture of each target site and the skin color, hair color and pupil color of the to-be-processed person picture.
Preferably, determining the skin color of the to-be-processed person picture comprises:

determining a skin region in the to-be-processed person picture, the skin region being a region meeting a preset color condition in the face and/or the region below the face of the to-be-processed person picture; and

filtering out the interference of environmental factors with the skin color in the skin region to obtain the skin color of the to-be-processed person picture.
Preferably, the preset identification model outputs the number of the target site in the preset picture library, the number of the target site in the preset picture library is determined as the identifier of the target site, and the same number corresponds to at least one second picture in the preset picture library, the similarity between the pictures in the at least one second picture being within a preset similarity range;

for any target site among the target sites, obtaining the second picture of the target site from the preset picture library based on the identifier of the target site comprises: determining, in the preset picture library, each second picture corresponding to the number of the target site in the preset picture library, and choosing one picture from the determined second pictures.
Preferably, obtaining the person picture displayed with a special image based on the second picture of each target site and the skin color, hair color and pupil color of the to-be-processed person picture comprises:

determining the position and corresponding direction of the second picture of each target site;

splicing the second pictures of the target sites based on their positions and corresponding directions to obtain the person picture displayed with a special image; and

setting the skin color, hair color and pupil color of the person picture displayed with a special image to the skin color, hair color and pupil color of the to-be-processed person picture.
Preferably, the method further comprises:

obtaining feedback information of a user on the person picture displayed with a special image, the feedback information at least indicating the user's satisfaction with the face picture displayed with a special image; and

correcting the preset identification model again based on the feedback information, so as to modify the correspondence between the input of the preset identification model and the identifier of the target site output by the preset identification model.
The present invention also provides a picture processing device, comprising:

an obtaining module, configured to obtain a to-be-processed person picture;

an extraction module, configured to extract a first picture of each target site from the to-be-processed person picture, each target site comprising at least one of a nose, a mouth, eyes, ears, eyebrows and hair;

an identification module, configured to, for any target site among the target sites, use the first picture of the target site as the input of the preset identification model corresponding to the target site and obtain the identifier of the target site output by the model, wherein the preset identification model corresponding to the target site is trained by taking the first picture of the target site extracted from a historical person picture as input and the identifier labeled for the target site as output;

a second picture determining module, configured to, for any target site among the target sites, obtain a second picture of the target site from a preset picture library based on the identifier of the target site, wherein the second picture of the target site displays the target site with a special image;

a color determining module, configured to determine the skin color, hair color and pupil color of the to-be-processed person picture; and

a synthesis module, configured to obtain a person picture displayed with a special image based on the second picture of each target site and the skin color, hair color and pupil color of the to-be-processed person picture.
Preferably, the color determining module is specifically configured to determine a skin region in the to-be-processed person picture and filter out the interference of environmental factors with the skin color in the skin region to obtain the skin color of the to-be-processed person picture, the skin region being a region meeting a preset color condition in the face and/or the region below the face of the to-be-processed person picture.
Preferably, the identification module is configured to output the number of the target site in the preset picture library and determine the number as the identifier of the target site, the same number corresponding to at least one second picture in the preset picture library, the similarity between the pictures in the at least one second picture being within a preset similarity range;

the second picture determining module is configured to determine, in the preset picture library, each second picture corresponding to the number of the target site in the preset picture library, and choose one picture from the determined second pictures.
Preferably, the synthesis module comprises a determining unit, a splicing unit and a synthesis unit,

the determining unit being configured to determine the position and corresponding direction of the second picture of each target site;

the splicing unit being configured to splice the second pictures of the target sites based on their positions and corresponding directions to obtain the person picture displayed with a special image; and

the synthesis unit being configured to set the skin color, hair color and pupil color of the person picture displayed with a special image to the skin color, hair color and pupil color of the to-be-processed person picture.
Preferably, the device further comprises a correction module, the correction module comprising an acquiring unit and an amending unit,

the acquiring unit being configured to obtain feedback information of a user on the person picture displayed with a special image, the feedback information at least indicating the user's satisfaction with the face picture displayed with a special image; and

the amending unit being configured to correct the preset identification model again based on the feedback information, so as to modify the correspondence between the input of the preset identification model and the identifier of the target site output by the model.
It can be seen from the above technical solution that, after a to-be-processed person picture is obtained, the first picture of each target site is extracted from it. For any target site, the first picture of the target site is used as the input of the preset identification model corresponding to the target site, the identifier of the target site output by the model is obtained, and a second picture of the target site is obtained from the preset picture library based on the identifier, the second picture displaying the target site with a special image. After the skin color, hair color and pupil color of the to-be-processed person picture are determined, a person picture displayed with a special image is obtained based on the second picture of each target site and those colors. The to-be-processed person picture is thereby converted automatically into a person picture displayed with a special image. Since the conversion chooses the second pictures from the preset picture library automatically, a user without an art foundation can still obtain a special image meeting the user's aesthetic requirements, and the user does not need to choose second pictures of interest from the preset picture library manually, so that the complexity of the conversion is reduced.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present invention;

Fig. 2 is a schematic diagram of preset identification model training provided in an embodiment of the present invention;

Fig. 3 is a schematic diagram of an image processing method provided in an embodiment of the present invention;

Fig. 4 is a flowchart of another image processing method provided in an embodiment of the present invention;

Fig. 5 is a structural schematic diagram of a picture processing device provided in an embodiment of the present invention;

Fig. 6 is a structural schematic diagram of the synthesis module in the picture processing device provided in an embodiment of the present invention;

Fig. 7 is a structural schematic diagram of another picture processing device provided in an embodiment of the present invention;

Fig. 8 is a structural schematic diagram of the correction module in the picture processing device provided in an embodiment of the present invention.
Specific embodiment
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Referring to Fig. 1, which shows a flowchart of an image processing method provided in an embodiment of the present invention, the method is used to convert a to-be-processed person picture automatically into a person picture displayed with a special image. Specifically, the image processing method shown in Fig. 1 may comprise the following steps:
S101: obtain a to-be-processed person picture. This can be understood as follows: any picture containing a person, such as a user's selfie or a picture containing both a person and scenery, can be regarded as a to-be-processed person picture. The to-be-processed person picture may be obtained in at least one of the following ways: a picture uploaded by the user, a picture received from another device, and a picture downloaded from another device.
S102: extract the first picture of each target site from the to-be-processed person picture, each target site comprising at least one of a nose, a mouth, eyes, ears, eyebrows and hair. That is, one or more target sites are extracted from the to-be-processed person picture. The to-be-processed person picture may contain all target sites or only some of them, so the first pictures extracted depend on which target sites the to-be-processed person picture contains.

For example, if the to-be-processed person picture contains all target sites, the first pictures extracted from it are: a first picture of the nose, a first picture of the mouth, a first picture of the eyes, a first picture of the ears, a first picture of the eyebrows and a first picture of the hair. If the to-be-processed person picture contains only some target sites, for instance only the eyes and eyebrows, then only a first picture of the eyebrows and a first picture of the eyes are extracted.
In this embodiment, one feasible way of extracting the first picture of each target site from the to-be-processed person picture is: perform face recognition on the to-be-processed person picture to identify at least one of the nose, mouth, eyes, ears, eyebrows and hair, crop the region where each identified site is located, and separate the target site from the to-be-processed person picture, thereby obtaining the first picture of any of these target sites. For example, several key points of each target site are obtained through face recognition; for any target site, the region where the target site is located is determined from its key points, and the determined region is then extracted from the to-be-processed person picture to obtain the first picture of the target site.
Another feasible way of extracting the first picture of each target site is: train a picture extraction model in advance, input the to-be-processed person picture into the picture extraction model, and obtain the first picture of each target site output by the model. In practical applications, multiple picture extraction models may of course also be trained in advance, each corresponding to one target site, i.e., one picture extraction model is used to extract the first picture of one target site. The process of training a picture extraction model in advance may be as follows: determine a training set and a test set from a preset set of person pictures by cross-validation; for any person picture in the training set, label each target site in the person picture and perform model training based on the labeled target sites to obtain a picture extraction model, so that the key points of each target site are obtained through the model and the first picture of each target site is output based on those key points; then test the obtained picture extraction model with each person picture in the test set. If the test shows that the picture extraction model can extract the first picture of each target site, training ends; otherwise the person pictures in the training set are labeled and trained again.

One further point needs to be explained: a picture extraction model may be directed at at least one person. That is, a picture extraction model may apply to only one person, being trained on that person's pictures, or it may apply to multiple people, being trained on the pictures of multiple people.
S103: for any target site among the target sites, use the first picture of the target site as the input of the preset identification model corresponding to the target site, and obtain the identifier of the target site output by the model, wherein the preset identification model corresponding to the target site is trained by taking the first picture of the target site extracted from a historical person picture as input and the identifier labeled for the target site as output. That is, a preset identification model corresponding to each target site is obtained through training on historical face pictures, and can subsequently be applied to identifier recognition for the to-be-processed person picture.

In this embodiment, the preset identification models are obtained as follows: obtain a set of historical face pictures and divide it into a training set and a test set; obtain the first picture of a target site in any historical person picture in the training set through face recognition or the above picture extraction model, label the identifier of the target site, and perform model training with the first picture of the target site and the labeled identifier to obtain the corresponding preset identification model. Each preset identification model obtained by training corresponds to one target site; for example, the nose has its own preset identification model, and the first picture of the nose will be input into the preset identification model corresponding to the nose. The preset identification model is then tested with each historical person picture in the test set to determine whether it meets the requirements; if not, it can be optimized through repeated training and testing. The requirements can be set depending on the actual application, which is not further illustrated in this embodiment.
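The essential behavior of a preset identification model — mapping a first picture to an identifier — can be illustrated with a toy stand-in. The document's models are trained classifiers; the nearest-centroid rule, the two-dimensional features and all names below are illustrative assumptions, not the claimed implementation.

```python
# Toy stand-in for a preset identification model: map a first picture's
# feature vector to the identifier whose labeled training centroid is nearest.
# A real model would be a trained neural-network classifier; this only
# illustrates the input -> identifier mapping.

def train_identification_model(labeled_features):
    """labeled_features: {identifier: [feature vectors]} -> centroid per id."""
    model = {}
    for ident, feats in labeled_features.items():
        dims = len(feats[0])
        model[ident] = [sum(f[d] for f in feats) / len(feats) for d in range(dims)]
    return model

def predict(model, feature):
    """Return the identifier of the nearest centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(feature, centroid))
    return min(model, key=lambda ident: dist(model[ident]))

# Hypothetical labeled hair features (e.g., length / curliness scores).
hair_model = train_identification_model({
    "00": [[0.9, 0.1], [0.8, 0.2]],   # long straight hair
    "01": [[0.2, 0.9], [0.3, 0.8]],   # short curly hair
})
identifier = predict(hair_model, [0.85, 0.15])  # nearest to the "00" centroid
```

The identifier returned here plays the role of the folder number in the preset picture library, as described in S104 below.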
In the actual labeling process, each target site needs to be classified; the finer the classification of a target site, the more closely the special image of the face in the person picture displayed with a special image fits the appearance of the user. Taking a woman's hair as an example, it can be divided into, but is not limited to, long hair, medium-length hair, bob and super-short bob, and long hair can in turn be divided into large-wave curls, perm curls and straight hair. During classification, each of these types then needs its own identifier, so that pictures of the type under the same identifier have consistent or close features while the features of the types under different identifiers are clearly distinguishable. Classifying in this way keeps the similarity between the first pictures within the same folder of the data picture library within a preset similarity range (e.g., the first pictures in the same folder have consistent or close features), while the higher the degree of isolation between the first pictures of different folders, the better; the identifiers the preset identification model produces for target sites under different folders then differ. Still taking a woman's hair as an example, the hairstyles of all first pictures in folder 00 are as consistent as possible, while the hairstyles in the first pictures in folder 01 differ as much as possible from those in folder 00; the identifiers obtained when the preset identification model recognizes hair from these two folders then differ, so that the second pictures also differ, realizing second pictures that display different special images for different target sites.

Here 00 and 01 can be regarded as identifiers of the hair target site in the data picture library. The similarity of all first pictures with the same identifier is within the preset similarity range, while the similarity of first pictures with different identifiers should fall outside it. Meanwhile, the number of first pictures with the same identifier is greater than or equal to 1, i.e., each identifier contains at least one first picture. Taking folders as an example, each folder contains at least one first picture, the similarity between the first pictures in the same folder is within the preset similarity range, and the similarity between first pictures in different folders is not.
In this embodiment, picture classification approaches such as AlexNet, VGGNet16, GoogleNet and SimpleNet may be used to classify the target sites. The classification accuracies obtained with these approaches are shown in Table 1 below:

Table 1: classification accuracy of the picture classification approaches
Mode | Memory size (M) | T1/T5 accuracy rate |
AlexNet | 60 | 57.2/80.3 |
VGGNet16 | 138 | 70.5 |
GoogleNet | 8 | 68.7 |
WideResNet | 11.7 | 69.6/89.07 |
SimpleNet | 5.4 | 67.17/87.44 |
One point needs to be explained here: since the image processing method provided in this embodiment runs on a terminal, and the memory of a terminal is small, a picture classification approach with a small memory requirement should be used; from the classification accuracies shown in Table 1, SimpleNet is therefore preferred. Moreover, SimpleNet is simple to run and can operate offline, so that the person picture displayed with a special image can be obtained offline through the image processing method, which improves recognition speed and user experience.
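The trade-off behind preferring SimpleNet can be reproduced directly from the Table 1 figures. The 16 MB budget and the smallest-footprint selection rule are illustrative assumptions standing in for the terminal's actual memory constraint:

```python
# Reproducing the Table 1 trade-off: on a memory-limited terminal, among the
# approaches that fit the budget, choose the one with the smallest footprint.
# The 16 MB budget is an illustrative assumption.

TABLE_1 = {  # approach: (memory in MB, top-1 accuracy from Table 1)
    "AlexNet": (60, 57.2),
    "VGGNet16": (138, 70.5),
    "GoogleNet": (8, 68.7),
    "WideResNet": (11.7, 69.6),
    "SimpleNet": (5.4, 67.17),
}

def choose_approach(table, memory_budget_mb):
    """Among approaches fitting the budget, pick the smallest-memory one."""
    fitting = {k: v for k, v in table.items() if v[0] <= memory_budget_mb}
    return min(fitting, key=lambda k: fitting[k][0])

chosen = choose_approach(TABLE_1, memory_budget_mb=16)
```

Under this rule SimpleNet wins on memory while giving up only a few accuracy points relative to GoogleNet, which matches the preference stated above.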
The preset identification model is illustrated below in conjunction with Fig. 2, taking the eyes as an example. A picture is chosen from the eye-labeled data set, and the picture of the eyes in the chosen picture (i.e., the first picture of the eyes) is output through picture preprocessing (e.g., face recognition or the above picture extraction model). The picture of the eyes and the corresponding identifier in the eye-labeled data set (e.g., the identifier of the eyes) are input into the model training code, and an eye model is output. In this embodiment, a preset identification model can be trained for each of the above target sites, and the to-be-processed person picture is cut finely to obtain the first picture of each target site, which is then recognized by its respective preset identification model, reducing the influence of other target sites and making recognition more accurate.
S104: for any target site among the target sites, obtain the second picture of the target site from the preset picture library based on the identifier of the target site, wherein the second picture of the target site displays the target site with a special image. In this embodiment, the second pictures in the preset picture library can be designed by personnel with an art foundation according to the real image of the target site, so that the target site displayed by the second picture has a special image, such as at least one of a cartoon image, an ancient-costume image and a soldier image.

Taking the number of the target site in the preset picture library as its identifier, one feasible way of obtaining the second picture of the target site from the preset picture library is: determine, in the preset picture library, each second picture corresponding to the number of the target site in the preset picture library, and choose one picture from the determined second pictures. Specifically, the same number in the preset picture library corresponds to one folder, under which there can be one or more second pictures; a folder is obtained according to the number of each target site in the preset picture library, and one second picture is then determined from this folder as the second picture of the target site. For example, a picture can be chosen at random or according to picture priority, where the priority of a picture can depend on the number of times it has been chosen, which is not further illustrated in this embodiment. For the nose, based on the number of the nose in the preset picture library, a second picture displaying the nose with a special image, such as a cartoon image, is obtained from the preset picture library.
In this embodiment, the similarity between the second pictures in the same folder is within a preset similarity range, which guarantees that the second pictures in the same folder resemble one another while differing from the second pictures in other folders, so that the second picture obtained from one number can be distinguished from the second pictures obtained from other numbers.
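The folder lookup and picture choice described above can be sketched as follows. The in-memory `library` dict standing in for one folder per number, and the reading of "priority" as the number of times a picture has been selected, are illustrative assumptions; the patent fixes neither.

```python
import random

def choose_second_picture(library, number, by_priority=False, seed=None):
    """library: number -> list of (picture_name, times_selected) pairs,
    mimicking one folder per number in the default picture library.
    Returns one second picture, chosen at random or by priority."""
    candidates = library[number]
    if by_priority:
        # highest selection count wins
        return max(candidates, key=lambda p: p[1])[0]
    return random.Random(seed).choice(candidates)[0]
```

For example, with two candidate pictures under number 7, priority-based choice returns the more frequently selected one, while random choice returns either.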
S105: Determine the skin color of the person picture to be processed, the hair color of the person picture to be processed and the pupil color of the person picture to be processed.
In this embodiment, one feasible way to determine the skin color of the person picture to be processed is: determine the skin region in the person picture to be processed, where the skin region is the part of the face and/or the region below the face that satisfies a preset color condition; then filter out the interference of environmental factors with the skin color in the skin region to obtain the skin color of the person picture to be processed.
For example, the skin region in the person picture to be processed can be determined as follows: locate the face region and the region below the face in the person picture to be processed, narrow the RGB (Red-Green-Blue) color-space range of these two regions, and take the region that satisfies the preset color condition as the skin region. The preset color condition can be, but is not limited to, the following formula:
R > 95 And G > 40 And B > 20 And R > B And Max(R, G, B) − Min(R, G, B) > 15 And Abs(R − G) > 15
where R, G and B denote the red, green and blue values respectively, each ranging from 0 to 255; And denotes the logical AND relation; Max(R, G, B) and Min(R, G, B) take the maximum and minimum of R, G and B; and Abs(R − G) takes the absolute value of R − G.
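The preset color condition above can be written directly as a per-pixel predicate; this is a minimal sketch of the stated formula and nothing more.

```python
def is_skin_pixel(r, g, b):
    """Preset color condition from the text:
    R > 95, G > 40, B > 20, R > B,
    max(R,G,B) - min(R,G,B) > 15, and |R - G| > 15."""
    return (r > 95 and g > 40 and b > 20 and r > b
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15)
```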
One way to filter out the interference of environmental factors with the skin color in the skin region and obtain the skin color of the person picture to be processed is: take a weighted sum of the colors of all pixels in the skin region to obtain the mean skin color, and then eliminate the environmental interference factors to obtain the skin region color.
For example, the environmental interference can be eliminated as follows: convert the mean skin color from its RGB representation to an HSB (Hue-Saturation-Brightness) representation and adjust the resulting HSB value, specifically the S component (saturation) and the B component (brightness). The adjustment works as follows: if the B component falls in a first preset range, increase the B component and the S component by a first preset increment; otherwise, if the B component falls in a second preset range, increase them by a second preset increment; otherwise, increase them by a third preset increment. Note that after the increments are applied, the final B component and S component must not exceed 1. The first preset range, the second preset range and the preset increments can be set as needed. For example: if the B component is less than 0.5, increase the B component by 0.5 and the S component by 0.3; otherwise, if the B component is less than 0.7, increase the B component by 0.15 and the S component by 0.15; otherwise, increase the B component by 0.25 and the S component by 0.25, again ensuring that the final B component and S component do not exceed 1. Other settings of the preset ranges and increments are not elaborated in this embodiment.
One feasible way to determine the hair color of the person picture to be processed is: extract the first picture of the hair from the person picture to be processed and process the pixels in it to obtain the hair color, for example by taking a weighted sum of the colors of all pixels in the first picture of the hair.
One feasible way to determine the pupil color of the person picture to be processed is: obtain the pupil region and derive the pupil color from it, i.e., take a weighted sum of all pixels in the obtained pupil region. The pupil region picture can be extracted either from the person picture to be processed or from the first picture of the eyes.
In this embodiment, a feasible way to determine the pupil region is, but is not limited to: recognize the key points of the pupil within the eyes in the person picture to be processed and determine the pupil region from these key points; for example, the region formed by these key points can be regarded as the pupil region.
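One way the key-point step could look in code, approximating the region the key points form by their bounding box — an assumption, since the text leaves the region construction open. The dict-of-pixels image representation is likewise illustrative.

```python
def pupil_color(image, keypoints):
    """image: dict mapping (x, y) -> (R, G, B); keypoints: pupil landmarks.
    Takes the mean color over the bounding box of the key points as the
    pupil color (bounding box used as a stand-in for the key-point region)."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    region = [image[(x, y)]
              for x in range(min(xs), max(xs) + 1)
              for y in range(min(ys), max(ys) + 1)
              if (x, y) in image]
    n = len(region)
    return tuple(round(sum(p[c] for p in region) / n) for c in range(3))
```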
S106: Based on the second picture of each target site, the skin color of the person picture to be processed, the hair color of the person picture to be processed and the pupil color of the person picture to be processed, obtain a person picture shown with the special image. This can be understood as: synthesize the obtained second picture of each target site with the skin color of the person picture to be processed into a complete person picture in which at least one target site among the nose, mouth, eyes, ears, eyebrows and hair is shown with the special image; which target sites are shown with the special image depends on which target sites had their first pictures extracted.
In this embodiment, one feasible way to obtain the person picture shown with the special image is: determine the position and the corresponding direction of the second picture of each target site; splice the second pictures of the target sites based on those positions and directions to obtain the person picture shown with the special image; and set the skin color of that person picture to the skin color of the person picture to be processed.
The position and corresponding direction of the second picture of each target site can be determined as follows: use the position and direction of each target site in the person picture to be processed to determine the position and direction of its second picture in the face. Taking the nose as an example, the position of the second picture of the nose shown with the special image can be determined from the position of the nose in the person picture to be processed, e.g., from the distances of the left and right wings of the nose to the left and right cheeks, the distance of the nose to the chin, and the distance of the top of the bridge of the nose (i.e., where the bridge of the nose begins) to the forehead; these determine the position of the second picture of the nose in the face. The direction of the second picture of the nose shown with the special image depends on whether the person picture to be processed shows the face frontally or in profile: for a frontal face, the direction of the second picture is frontal; for a left profile, the direction of the second picture is to the left.
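The distance-based positioning can be sketched as a rectangle computation. All parameter names and the coordinate convention (origin at the top-left of the face region) are illustrative assumptions; the text only states that these distances determine the position.

```python
def place_second_picture(face_w, face_h,
                         left_gap, right_gap, chin_gap, forehead_gap):
    """Derive the rectangle where a nose second picture would be placed,
    from the distances measured in the person picture to be processed:
    left/right wing of the nose to the cheek edges, nose to chin, and
    top of the bridge to the forehead. Returns (x, y, width, height)."""
    x0, x1 = left_gap, face_w - right_gap
    y0, y1 = forehead_gap, face_h - chin_gap
    return (x0, y0, x1 - x0, y1 - y0)
```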
After the position and corresponding direction of the second picture of each target site are obtained, each target site is placed based on its position and corresponding direction, and once all target sites are placed, they are spliced to obtain the person picture shown with the special image. The skin color obtained in step S105 is then set as the skin color of this person picture, the obtained hair color is set as its hair color, and the obtained pupil color is set as its pupil color. That is, the second pictures of the target sites shown with the special image are combined with the skin color, the hair color and the pupil color of the person picture to be processed, yielding a person picture shown with the special image that is more faithful to the original.
It can be seen from the above technical solution that, once the person picture to be processed is obtained, the first picture of each target site is extracted from it. For any target site, the first picture of the target site is used as the input of the default identification model corresponding to that target site, the identifier of the target site output by that model is obtained, and based on that identifier a second picture of the target site, which shows the target site with a special image, is obtained from the default picture library. After the skin color of the person picture to be processed is determined, a person picture shown with the special image is obtained based on the second pictures of the target sites and the skin color of the person picture to be processed. The person picture to be processed is thus converted automatically into a person picture shown with a special image. Because this conversion chooses the second pictures from the default picture library automatically, a user without any art background can still obtain a special image that meets the user's aesthetic requirements, and the user does not need to pick the second pictures of interest from the default picture library manually, which reduces the complexity of the conversion.
The picture processing method provided by this embodiment is illustrated below with reference to Fig. 3, taking a user selfie as the person picture to be processed. In Fig. 3, the person picture to be processed is used as input, and picture preprocessing (e.g., the above face recognition or picture extraction model) yields the first pictures of multiple target sites. Although Fig. 3 only shows the eye picture and the nose picture, in practice the first pictures of the eyes, nose, mouth, eyebrows, ears and hair can all be obtained. These first pictures are then input into the corresponding default identification models (in Fig. 3, the eye picture is input into the eye model), each model outputs the corresponding identifier (the eye number shown in Fig. 3, i.e., the number of the eyes in the default picture library), and based on each identifier a second picture showing the target site with a cartoon image is selected from the default picture library (the eye part shown in Fig. 3, i.e., the second picture). Finally, the second pictures showing these target sites with the cartoon image are spliced to obtain the person picture shown with the special image, whose skin color is set to the skin color of the person picture to be processed so that the skin color remains consistent.
Referring to Fig. 4, an embodiment of the present invention provides the flow chart of another picture processing method, which corrects the default identification models through user feedback in order to improve user satisfaction. The method may include the following steps:
S201: Obtain the person picture to be processed.
S202: Extract the first picture of each target site from the person picture to be processed, where the target sites include at least one of the nose, mouth, eyes, ears, eyebrows and hair.
S203: Use the first picture of the target site as the input of the default identification model corresponding to the target site, and obtain the identifier of the target site output by that model.
S204: For any target site among the target sites, obtain a second picture of the target site from the default picture library based on the identifier of the target site, where the second picture shows the target site with a special image.
S205: Determine the skin color of the person picture to be processed, the hair color of the person picture to be processed and the pupil color of the person picture to be processed.
S206: Based on the second picture of each target site, the skin color of the person picture to be processed, the hair color of the person picture to be processed and the pupil color of the person picture to be processed, obtain the person picture shown with the special image.
In this embodiment, the execution process and principle of S201 to S206 are the same as those of S101 to S106 above and are not repeated here.
S207: Obtain user feedback information and correct the default identification models based on it. Specifically: obtain the user's feedback information on the person picture shown with the special image, where the feedback information at least indicates the user's satisfaction with that picture; based on the feedback information, correct the default identification models again so as to modify the correspondence between the input of each default identification model and the identifier of the target site it outputs.
One feasible way to obtain the user's feedback information on the person picture shown with the special image is: after displaying that person picture, display a form for surveying satisfaction, which may include items such as satisfaction degree and modification mode. The satisfaction degree indicates whether the person picture shown with the special image meets the user's requirements for the special image, while the modification mode indicates which sites of that picture need to be modified; the user can further choose the desired second picture from the default picture library. This information can be carried in the feedback information. After the feedback information is obtained, the correspondence between the input of each default identification model and the identifier of the target site it outputs is modified accordingly.
For example, the correspondence for the site to be modified in the default identification model is revised based on the site carried in the feedback information, or it is revised to the number, in the default picture library, of the second picture specified by the user in the feedback information, so that the second picture output subsequently is the one the user wants, improving user satisfaction. Here the site to be modified is whichever of the nose, mouth, eyes, ears, eyebrows and hair the user has decided needs modification.
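A minimal sketch of the correction step, assuming the correspondence is kept as a site-to-number mapping and the feedback carries the site to modify and, optionally, the user-chosen number; the patent fixes none of these structures.

```python
def apply_feedback(id_map, site, feedback):
    """id_map: target site -> number in the default picture library.
    If the feedback names this site and specifies a second picture the
    user chose, point the correspondence at that number so subsequent
    outputs match the user's preference."""
    if feedback.get("site") == site and "chosen_number" in feedback:
        id_map[site] = feedback["chosen_number"]
    return id_map
```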
It can be seen from the above technical solution that automatically correcting the default identification models through feedback information modifies the correspondence between the input of each default identification model and the identifier of the target site it outputs, so that the accuracy of the default identification models keeps rising and increasingly matches user expectations. The person picture shown with the special image thus stays close to the real face, which improves user satisfaction.
For the method embodiments described above, for simplicity of description, each is stated as a series of action combinations. Those skilled in the art should understand, however, that the present invention is not limited by the described sequence of acts, because according to the present invention certain steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
Corresponding to the above method embodiments, an embodiment of the present invention provides a picture processing device whose structural schematic diagram is shown in Fig. 5. The device may include: an obtaining module 11, an extraction module 12, an identification module 13, a second picture determining module 14, a color determining module 15 and a synthesis module 16.
The obtaining module 11 is used to obtain the person picture to be processed. In this embodiment, the obtaining module 11 may obtain the person picture to be processed as at least one of: a picture uploaded by the user, a picture received from another device, and a picture downloaded from another device.
The extraction module 12 is used to extract the first picture of each target site from the person picture to be processed, where the target sites include at least one of the nose, mouth, eyes, ears, eyebrows and hair. That is, one or more target sites are extracted from the person picture to be processed; the person picture to be processed may contain all target sites or only some of them, so the first pictures extracted depend on which target sites the person picture to be processed contains.
In this embodiment, one feasible way for the extraction module 12 to extract the first picture of each target site from the person picture to be processed is: perform face recognition on the person picture to be processed, identify at least one of the nose, mouth, eyes, ears, eyebrows and hair, cut out the region occupied by each identified site, and separate the target site from the person picture to be processed, thereby obtaining the first picture of that target site. For example, multiple key points of each target site are obtained through face recognition; for any target site, the region of the target site is determined from its key points, and that region is then extracted from the person picture to be processed to obtain the first picture of the target site.
Another feasible way for the extraction module 12 to extract the first picture of each target site is: train a picture extraction model in advance, input the person picture to be processed into it, and obtain the first picture of each target site output by the model. In practice, multiple picture extraction models can also be trained in advance, one per target site, i.e., each picture extraction model extracts the first picture of one target site. The picture extraction model can be trained in advance as follows: determine a training set and a test set from a default set of person pictures through cross-validation; for any person picture in the training set, label each target site in it and perform model training based on the labeled target sites to obtain a picture extraction model, so that the model obtains the key points of each target site and outputs the first picture of each target site based on those key points; then test the obtained picture extraction model with the person pictures in the test set. If the test shows that the model can extract the first picture of each target site, training ends; otherwise, the person pictures in the training set are relabeled and trained again.
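The train/test division at the start of the procedure above could be sketched as a simple hold-out split; the text says only "cross-validation" without details, so the ratio, the seed, and the use of a single split are assumptions.

```python
import random

def split_train_test(pictures, test_ratio=0.2, seed=0):
    """Split the default set of person pictures into a training set and a
    test set, as done before training and testing the picture extraction
    model. Shuffles a copy so the input list is left untouched."""
    rng = random.Random(seed)
    shuffled = pictures[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]
```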
One further point: the picture extraction model used by the extraction module 12 can target one or more people. That is, a picture extraction model may apply to only one person, being trained on that person's pictures, or it may apply to multiple people, being trained on pictures of multiple people.
The identification module 13 is used, for any target site among the target sites, to take the first picture of the target site as the input of the default identification model corresponding to that target site and obtain the identifier of the target site output by the model, where the default identification model corresponding to the target site is trained with the first pictures of the target site extracted from historical person pictures as input and the labeled identifier of the target site as output.
In this embodiment, the identification module 13 obtains a default identification model as follows: obtain a set of historical person pictures of faces and divide it into a training set and a test set; for any historical person picture in the training set, obtain the first picture of a target site through face recognition or the above picture extraction model, label the identifier of that target site, and perform model training with the first picture and the labeled identifier to obtain the corresponding default identification model. Each default identification model obtained through training corresponds to one target site; that is, every target site has its own default identification model. For example, the nose target site has a corresponding default identification model, and the first picture of the nose is input into it. The default identification model is then tested with the historical person pictures in the test set to determine whether it meets the requirements; if not, it can be optimized through repeated training and testing. The required settings depend on the practical application and are not elaborated in this embodiment.
In the actual labeling process, each target site needs to be classified, and the finer the classification of a target site, the better the special image of the face in the person picture shown with the special image fits the user's appearance. Taking women's hair as an example, it can be divided into, but is not limited to, long hair, medium hair, short hair and very short hair, and long hair can be further divided into big-wave curls, small curls and straight hair. In the classification process, each of these types needs its own identifier, so that the type corresponding to the same identifier has consistent or close features, while the features of the types corresponding to different identifiers are clearly distinct. Such a classification keeps the similarity between the first pictures in the same folder of the data picture library within the preset similarity range (e.g., the first pictures in the same folder have consistent or close features), while the separation between the first pictures of different folders should be as high as possible. Continuing with women's hair: the hair styles in all first pictures in folder 00 should be as consistent as possible, while the hair styles in the first pictures in folder 01 should differ as much as possible from those in folder 00.
Here 00 and 01 can be regarded as identifiers of the hair target site in the data picture library. The similarity of all first pictures with the same identifier lies within the preset similarity range, while the similarity of first pictures with different identifiers should fall below that range. Meanwhile, the number of first pictures with the same identifier is greater than or equal to 1, i.e., each identifier covers at least one first picture. Taking folders as an example, each folder contains at least one first picture, the similarity between the first pictures in the same folder lies within the preset similarity range, and the similarity between first pictures in different folders does not. In this embodiment, the identification module 13 uses the SimpleNet picture classification approach for classifying target sites; the reason for choosing SimpleNet has been explained in a picture processing method above and is not repeated here.
The second picture determining module 14 is used, for any target site among the target sites, to obtain a second picture of the target site from the default picture library based on the identifier of the target site, where the second picture shows the target site with a special image.
In this embodiment, the second pictures in the default picture library can be designed by personnel with an art background according to the real appearance of each target site, so that the target site shown by a second picture has a special image, such as at least one of a cartoon image, an ancient-costume image, a soldier image and the like.
The identifier of a target site is its number in the default picture library. One feasible way for the second picture determining module 14 to obtain the second picture of the target site from the default picture library is: determine each second picture corresponding to the number of the target site in the default picture library, and choose one picture from the determined second pictures. Specifically, each number in the default picture library corresponds to one folder, and a folder can hold one or more second pictures. A folder is located according to the number of the target site in the default picture library, and one second picture is then chosen from that folder as the second picture of the target site, for example at random or according to picture priority, where the priority of a picture can depend on the number of times it has been selected; this embodiment does not elaborate further. Taking the nose as an example, based on the number of the nose in the default picture library, a second picture showing the nose with a special image, such as a cartoon image, is obtained from the default picture library.
In this embodiment, the similarity between the second pictures in the same folder is within a preset similarity range, which guarantees that the second pictures in the same folder resemble one another while differing from the second pictures in other folders, so that the second picture obtained from one number can be distinguished from the second pictures obtained from other numbers.
The color determining module 15 is used to determine the skin color of the person picture to be processed, the hair color of the person picture to be processed and the pupil color of the person picture to be processed. In this embodiment, the color determining module 15 can determine the skin color, hair color and pupil color in, but not limited to, the following ways:
One feasible way to determine the skin color of the person picture to be processed is: determine the skin region in the person picture to be processed, where the skin region is the part of the face and/or the region below the face that satisfies the preset color condition; then filter out the interference of environmental factors with the skin color in the skin region to obtain the skin color of the person picture to be processed.
One feasible way to determine the hair color of the person picture to be processed is: extract the first picture of the hair from the person picture to be processed and process the pixels in it to obtain the hair color, for example by taking a weighted sum of the colors of all pixels in the first picture of the hair.
One feasible way to determine the pupil color of the person picture to be processed is: obtain the pupil region and derive the pupil color from it, i.e., take a weighted sum of all pixels in the obtained pupil region, where the pupil region picture may be extracted either from the person picture to be processed or from the first picture of the eyes.
For the above ways of determining the skin color, hair color and pupil color, please refer to the relevant description in the method embodiment; they are not elaborated in this embodiment.
The synthesis module 16 is used to obtain the person picture shown with the special image based on the second picture of each target site, the skin color of the person picture to be processed, the hair color of the person picture to be processed and the pupil color of the person picture to be processed.
In this embodiment, one structure of the synthesis module 16 is shown in Fig. 6. The synthesis module 16 may include: a determination unit 161, a concatenation unit 162 and a synthesis unit 163, where the determination unit 161 is used to determine the position and corresponding direction of the second picture of each target site; the concatenation unit 162 is used to splice the second pictures of the target sites based on those positions and directions to obtain the person picture shown with the special image; and the synthesis unit 163 is used to set the skin color of the person picture shown with the special image to the skin color of the person picture to be processed.
For example, the process by which determination unit 161 determines the position and corresponding direction of the second picture of each target site may be: use the position and direction of each target site in the person picture to be processed to determine the position and direction of its second picture in the face. Taking the nose as an example, the position of the nose's second picture shown with the special image can be determined from the position of the nose in the person picture to be processed — for instance, from the distances of the left and right nostrils to the left and right cheeks, the distance from the top of the nose bridge (where the bridge begins) to the forehead, and the distance from the nose tip to the chin; these distances determine the position of the nose's second picture in the face. The direction of the nose's second picture shown with the special image depends on whether the face in the person picture to be processed is frontal or in profile: when the face is frontal, the direction of the nose's second picture is frontal; when the face shows the left side, the direction of the nose's second picture is to the left.
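A minimal sketch of this placement logic follows. The landmark names and the proportional-placement rule are assumptions for illustration — the embodiment only names the distances involved, not a formula:

```python
def place_nose(landmarks, canvas_size):
    """Sketch: map the nose's position in the source picture onto the
    assembled face, preserving the left/right cheek-distance ratio and
    the bridge-top-to-forehead / tip-to-chin proportions.

    landmarks: dict of (x, y) points; the key names are hypothetical.
    canvas_size: (width, height) of the face being assembled.
    Returns the (x, y) anchor of the nose second picture and its
    direction ('front', 'left' or 'right').
    """
    w, h = canvas_size
    # Horizontal anchor: keep the ratio of nostril-to-cheek distances.
    left_gap = landmarks["nose_left"][0] - landmarks["cheek_left"][0]
    right_gap = landmarks["cheek_right"][0] - landmarks["nose_right"][0]
    x = w * left_gap / (left_gap + right_gap)
    # Vertical anchor: keep bridge-top/forehead vs. tip/chin proportions.
    top_gap = landmarks["bridge_top"][1] - landmarks["forehead"][1]
    bottom_gap = landmarks["chin"][1] - landmarks["nose_tip"][1]
    y = h * top_gap / (top_gap + bottom_gap)
    # A frontal face yields a frontal nose; a side face, a side-facing one.
    return (x, y), landmarks.get("pose", "front")
```

A symmetric set of landmarks would anchor the nose at the canvas center, which matches the intuition behind the proportional rule.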
After concatenation unit 162 obtains the position and corresponding direction of the second picture of each target site, it places each target site according to that position and direction, and splices the placed target sites to obtain a person picture shown with the special image. Synthesis unit 163 then sets the obtained skin color to the skin color of the person picture to be processed, the obtained hair color to the hair color of the person picture to be processed, and the obtained pupil color to the pupil color of the person picture to be processed. That is, the second pictures of the target sites shown with the special image are combined with the skin color, hair color and pupil color of the person picture to be processed, yielding a person picture shown with the special image that has better authenticity.
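The splice-then-recolor step might be sketched as below. Representing parts as rectangular arrays pasted onto a skin-colored canvas is an assumption made to keep the example self-contained; a fuller implementation would also recolor the hair and pupil regions of the assembled picture:

```python
import numpy as np

def synthesize(parts, skin_color, canvas_size=(64, 64)):
    """Sketch of the splicing and color-transfer step: fill a canvas
    with the skin color taken from the person picture to be processed,
    then paste each target site's second picture at its position.

    parts: list of (rgb_array, (row, col)) pairs, where rgb_array is
           an (h, w, 3) uint8 image of one styled target site.
    """
    h, w = canvas_size
    canvas = np.empty((h, w, 3), dtype=np.uint8)
    canvas[:] = skin_color                    # skin color of the source person
    for part, (r, c) in parts:
        ph, pw = part.shape[:2]
        canvas[r:r + ph, c:c + pw] = part     # splice the part in place
    return canvas
```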
It can be seen from the above technical solution that, when a person picture to be processed is obtained, the first picture of each target site is extracted from it; the first picture of any target site is taken as the input of a default identification model, and the mark of that target site output by the model is obtained; for any target site, the second picture of the target site — which shows the target site with the special image — is obtained from the default picture library based on its mark; and when the skin color of the person picture to be processed has been determined, a person picture shown with the special image is obtained based on the second pictures of the target sites and that skin color. The person picture to be processed is thus automatically converted into a person picture shown with the special image. Because this conversion chooses second pictures from the default picture library automatically, a user without any art foundation can still obtain a special image that meets his or her aesthetic requirement, and the user need not manually pick the second pictures of interest from the default picture library, which reduces the complexity of the conversion.
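The automatic conversion flow summarized here can be sketched end to end. Every function and parameter name below is a hypothetical stand-in for the embodiment's modules, not an API the patent defines:

```python
def convert(person_picture, identify, picture_library,
            extract_parts, pick_colors, synthesize):
    """End-to-end sketch: extract first pictures, identify each target
    site to get its mark, fetch the styled second picture from the
    library, then synthesize with the source picture's colors."""
    second_pictures = {}
    for site, first_picture in extract_parts(person_picture).items():
        mark = identify(site, first_picture)      # default identification model
        second_pictures[site] = picture_library[site][mark]
    colors = pick_colors(person_picture)          # skin / hair / pupil colors
    return synthesize(second_pictures, colors)
```

The user never touches the library lookup: the model's mark selects the second picture, which is exactly why no art background is needed.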
The embodiment of the present invention also provides another picture processing device, whose structural schematic diagram is shown in Fig. 7. It may include: obtaining module 21, extraction module 22, identification module 23, second picture determining module 24, color determination module 25, synthesis module 26 and correction module 27.
Obtaining module 21 is configured to obtain the person picture to be processed.
Extraction module 22 is configured to extract the first picture of each target site from the person picture to be processed, each target site comprising at least one of the nose, mouth, eyes, ears, eyebrows and hair.
Identification module 23 is configured to, for any target site: take the first picture of the target site as the input of the default identification model corresponding to that target site, and obtain the mark of the target site output by that model.
Second picture determining module 24 is configured to, for any target site: obtain the second picture of the target site from the default picture library based on the mark of the target site, the second picture showing the target site with the special image.
Color determination module 25 is configured to determine the skin color, hair color and pupil color of the person picture to be processed.
Synthesis module 26 is configured to obtain a person picture shown with the special image, based on the second picture of each target site and the skin color, hair color and pupil color of the person picture to be processed.
In this embodiment, the execution processes and principles of obtaining module 21, extraction module 22, identification module 23, second picture determining module 24, color determination module 25 and synthesis module 26 are the same as those of obtaining module 11, extraction module 12, identification module 13, second picture determining module 14, color determination module 15 and synthesis module 16 above, and are not described again here.
Correction module 27 is configured to obtain user feedback information and correct the default identification model based on it. Correction module 27 includes acquiring unit 271 and amending unit 272; for the specific structure refer to Fig. 8. Acquiring unit 271 is configured to obtain the user's feedback information on the person picture shown with the special image; the feedback information at least indicates the user's satisfaction with the face picture shown with the special image. Amending unit 272 is configured to correct the default identification model again based on the feedback information, so as to modify the correspondence between the input of the default identification model and the mark of the target site it outputs.
A feasible way for acquiring unit 271 to obtain the user's feedback information on the person picture shown with the special image is: after the person picture shown with the special image is displayed, display a form for surveying satisfaction. The form may include satisfaction, modification mode and so on, where the satisfaction indicates whether the displayed person picture meets the user's requirement for the special image, and the modification mode indicates which sites of the displayed person picture need modification; the user may further choose the desired second picture from the default picture library. This information can be carried in the feedback information. After the feedback information is obtained, the correspondence between the input of the default identification model and the mark of the target site it outputs is then modified accordingly.
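The feedback information the form collects might be represented as below. All field names are assumptions for illustration; the embodiment only states what the information must convey:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Feedback:
    """Feedback information gathered by the satisfaction form."""
    satisfied: bool                                # does the result meet the user's requirement
    modify_parts: Dict[str, Optional[int]] = field(default_factory=dict)
    # target site name -> number of the second picture the user chose in
    # the default picture library, or None if the user only flagged the
    # site as needing modification without choosing a replacement.

fb = Feedback(satisfied=False, modify_parts={"nose": 7, "eyebrow": None})
```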
A feasible way for amending unit 272 to correct the default identification model again is: modify, in the default identification model, the correspondence for the modified site carried in the feedback information, or change that correspondence to the number, in the default picture library, of the second picture specified by the user in the feedback information, so that the second picture output subsequently is the one the user wants, improving user satisfaction. The modified site is whichever of the nose, mouth, eyes, ears, eyebrows and hair the user has determined needs modification.
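Reduced to its simplest form, the correction can be sketched as an override of the input-to-mark correspondence. Representing that correspondence as a plain lookup table is an assumption — the embodiment corrects a trained model, which this sketch abstracts away:

```python
def amend(correspondence, feedback_parts):
    """Sketch of the amending unit: override the correspondence between
    a target site's input and its output mark, using the second-picture
    numbers the user specified in the feedback information.

    correspondence: dict mapping site name -> current mark (number in
                    the default picture library).
    feedback_parts: dict mapping site name -> user-specified number,
                    or None when the user only flagged the site.
    """
    corrected = dict(correspondence)
    for site, number in feedback_parts.items():
        if number is not None:        # user picked a concrete second picture
            corrected[site] = number
    return corrected
```

Sites flagged without a chosen replacement keep their old mark here; a real correction pass would instead retrain or re-rank candidates for them.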
It can be seen from the above technical solution that automatic correction via the feedback information modifies the correspondence between the input of the default identification model and the mark of the target site it outputs, so that the model's accuracy becomes higher and higher and increasingly matches user expectation. The person picture shown with the special image thus comes closer to the real face, which improves user satisfaction.
It should be noted that all the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to each other. Since the device embodiments are basically similar to the method embodiments, they are described relatively simply; for the relevant parts, refer to the description of the method embodiments.
Finally, it should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes it.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An image processing method, characterized in that the method comprises:
obtaining a person picture to be processed;
extracting a first picture of each target site from the person picture to be processed, each target site comprising at least one of a nose, a mouth, eyes, ears, eyebrows and hair;
for any target site: taking the first picture of the target site as the input of a default identification model corresponding to the target site, and obtaining a mark of the target site output by the default identification model corresponding to the target site, wherein the default identification model corresponding to the target site is trained with first pictures of the target site extracted from historical person pictures as input and the marks labeled for the target site as output;
for any target site: obtaining, based on the mark of the target site, a second picture of the target site from a default picture library, wherein the second picture of the target site shows the target site with a special image;
determining a skin color, a hair color and a pupil color of the person picture to be processed;
obtaining a person picture shown with the special image, based on the second picture of each target site and the skin color, hair color and pupil color of the person picture to be processed.
2. The method according to claim 1, characterized in that determining the skin color of the person picture to be processed comprises:
determining a skin area in the person picture to be processed, the skin area being the face of the person picture to be processed and/or a region below the face that meets a preset color condition;
filtering out the interference of environmental factors with the skin color in the skin area, to obtain the skin color of the person picture to be processed.
3. The method according to claim 1, characterized in that the default identification model outputs a number of the target site in the default picture library, the number of the target site in the default picture library being determined as the mark of the target site, the same number corresponding to at least one second picture included in the default picture library, and the similarity between the pictures in the at least one second picture lying within a preset similarity range;
for any target site, obtaining the second picture of the target site from the default picture library based on the mark of the target site comprises: determining, in the default picture library, each second picture corresponding to the number of the target site, and choosing one picture from the determined second pictures.
4. The method according to claim 1, characterized in that obtaining the person picture shown with the special image, based on the second picture of each target site and the skin color, hair color and pupil color of the person picture to be processed, comprises:
determining a position and a corresponding direction of the second picture of each target site;
splicing the second pictures of the target sites based on those positions and corresponding directions, to obtain the person picture shown with the special image;
setting the skin color, hair color and pupil color of the person picture shown with the special image to the skin color, hair color and pupil color of the person picture to be processed.
5. The method according to claim 1, characterized in that the method further comprises:
obtaining feedback information of a user on the person picture shown with the special image, the feedback information at least indicating the user's satisfaction with the face picture shown with the special image;
correcting the default identification model again based on the feedback information, so as to modify the correspondence between the input of the default identification model and the mark of the target site output by the default identification model.
6. A picture processing device, characterized in that the device comprises:
an obtaining module, configured to obtain a person picture to be processed;
an extraction module, configured to extract a first picture of each target site from the person picture to be processed, each target site comprising at least one of a nose, a mouth, eyes, ears, eyebrows and hair;
an identification module, configured to, for any target site: take the first picture of the target site as the input of a default identification model corresponding to the target site, and obtain a mark of the target site output by the default identification model corresponding to the target site, wherein the default identification model corresponding to the target site is trained with first pictures of the target site extracted from historical person pictures as input and the marks labeled for the target site as output;
a second picture determining module, configured to, for any target site: obtain, based on the mark of the target site, a second picture of the target site from a default picture library, wherein the second picture of the target site shows the target site with a special image;
a color determination module, configured to determine a skin color, a hair color and a pupil color of the person picture to be processed;
a synthesis module, configured to obtain a person picture shown with the special image, based on the second picture of each target site and the skin color, hair color and pupil color of the person picture to be processed.
7. The device according to claim 6, characterized in that the color determination module is specifically configured to determine a skin area in the person picture to be processed and filter out the interference of environmental factors with the skin color in the skin area, to obtain the skin color of the person picture to be processed, the skin area being the face of the person picture to be processed and/or a region below the face that meets a preset color condition.
8. The device according to claim 6, characterized in that the identification module is configured to output a number of the target site in the default picture library, the number of the target site in the default picture library being determined as the mark of the target site, the same number corresponding to at least one second picture included in the default picture library, and the similarity between the pictures in the at least one second picture lying within a preset similarity range;
the second picture determining module is configured to determine, in the default picture library, each second picture corresponding to the number of the target site, and to choose one picture from the determined second pictures.
9. The device according to claim 6, characterized in that the synthesis module comprises a determination unit, a concatenation unit and a synthesis unit;
the determination unit is configured to determine a position and a corresponding direction of the second picture of each target site;
the concatenation unit is configured to splice the second pictures of the target sites based on those positions and corresponding directions, to obtain the person picture shown with the special image;
the synthesis unit is configured to set the skin color, hair color and pupil color of the person picture shown with the special image to the skin color, hair color and pupil color of the person picture to be processed.
10. The device according to claim 6, characterized in that the device further comprises a correction module, the correction module comprising an acquiring unit and an amending unit;
the acquiring unit is configured to obtain feedback information of a user on the person picture shown with the special image, the feedback information at least indicating the user's satisfaction with the face picture shown with the special image;
the amending unit is configured to correct the default identification model again based on the feedback information, so as to modify the correspondence between the input of the default identification model and the mark of the target site output by the default identification model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811339208.0A CN109461118A (en) | 2018-11-12 | 2018-11-12 | A kind of image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811339208.0A CN109461118A (en) | 2018-11-12 | 2018-11-12 | A kind of image processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109461118A true CN109461118A (en) | 2019-03-12 |
Family
ID=65610073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811339208.0A Pending CN109461118A (en) | 2018-11-12 | 2018-11-12 | A kind of image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109461118A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110989912A (en) * | 2019-10-12 | 2020-04-10 | 北京字节跳动网络技术有限公司 | Entertainment file generation method, device, medium and electronic equipment |
CN111832369A (en) * | 2019-04-23 | 2020-10-27 | 中国移动通信有限公司研究院 | Image identification method and device and electronic equipment |
CN112199529A (en) * | 2020-10-12 | 2021-01-08 | 北京自如信息科技有限公司 | Picture processing method and device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101527049A (en) * | 2009-03-31 | 2009-09-09 | 西安交通大学 | Generating method of multiple-style face cartoon based on sample learning |
CN102096934A (en) * | 2011-01-27 | 2011-06-15 | 电子科技大学 | Human face cartoon generating method based on machine learning |
CN102542586A (en) * | 2011-12-26 | 2012-07-04 | 暨南大学 | Personalized cartoon portrait generating system based on mobile terminal and method |
CN102682420A (en) * | 2012-03-31 | 2012-09-19 | 北京百舜华年文化传播有限公司 | Method and device for converting real character image to cartoon-style image |
US20180268595A1 (en) * | 2017-03-20 | 2018-09-20 | Google Llc | Generating cartoon images from photos |
CN108596091A (en) * | 2018-04-24 | 2018-09-28 | 杭州数为科技有限公司 | Figure image cartooning restoring method, system and medium |
CN108717719A (en) * | 2018-05-23 | 2018-10-30 | 腾讯科技(深圳)有限公司 | Generation method, device and the computer storage media of cartoon human face image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103914699B (en) | A kind of method of the image enhaucament of the automatic lip gloss based on color space | |
CN109461118A (en) | A kind of image processing method and device | |
KR101887216B1 (en) | Image Reorganization Server and Method | |
CN104636759B (en) | A kind of method and picture filter information recommendation system for obtaining picture and recommending filter information | |
CN110414394A (en) | A kind of face blocks face image method and the model for face occlusion detection | |
CN102043965A (en) | Information processing apparatus, information processing method, and program | |
CN103945104B (en) | Information processing method and electronic equipment | |
CN106056064A (en) | Face recognition method and face recognition device | |
CN108765268A (en) | A kind of auxiliary cosmetic method, device and smart mirror | |
CN109685713A (en) | Makeup analog control method, device, computer equipment and storage medium | |
CN103810490A (en) | Method and device for confirming attribute of face image | |
CN106485222A (en) | A kind of method for detecting human face being layered based on the colour of skin | |
KR101967814B1 (en) | Apparatus and method for matching personal color | |
CN105469072A (en) | Method and system for evaluating matching degree of glasses wearer and the worn glasses | |
CN110956071B (en) | Eye key point labeling and detection model training method and device | |
CN109754444A (en) | Image rendering methods and device | |
KR20180130778A (en) | Cosmetic recommendation method, and recording medium storing program for executing the same, and recording medium storing program for executing the same, and cosmetic recommendation system | |
CN109584153A (en) | Modify the methods, devices and systems of eye | |
CN103679767A (en) | Image generation apparatus and image generation method | |
CN103957396A (en) | Image processing method and device used when tongue diagnosis is conducted with intelligent device and equipment | |
CN109598210A (en) | A kind of image processing method and device | |
CN108088567A (en) | Human body infrared thermal imagery processing method and system | |
CN113609944A (en) | Silent in-vivo detection method | |
EP3628187A1 (en) | Method for simulating the rendering of a make-up product on a body area | |
CN106599185B (en) | HSV-based image similarity identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20220523 Address after: 100083 2-1101, 11 / F, 28 Chengfu Road, Haidian District, Beijing Applicant after: BEIJING IHANDY MOBILE INTERNET TECHNOLOGY Co.,Ltd. Address before: 20th floor, block a, Qiaofu commercial building, 300 Lockhart Road, Wanchai Applicant before: TAIPU INTELLIGENT Co.,Ltd. |
|
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190312 |