CN109191410A - A kind of facial image fusion method, device and storage medium - Google Patents
- Publication number
- CN109191410A (application CN201810886318.2A)
- Authority
- CN
- China
- Prior art keywords
- pixel value
- facial image
- pixel
- fused
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The embodiments of the invention disclose a facial image fusion method, apparatus and storage medium. In the method, the pixel value distribution information of the facial skin pixels in a facial image to be fused and in a material facial image is obtained, and a pixel value adjustment parameter is then determined for each color channel from this distribution information. The facial image to be fused is adjusted using these pixel value adjustment parameters and is then fused with the material facial image to generate a target image. In this way, the skin tone of the target image is more uniform and color differences are reduced, so that the target image is more natural and realistic.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a facial image fusion method, apparatus and storage medium.
Background technique
At present, face-beautification features of photo and image-processing applications, such as stickers, hairstyle changes and face swapping, are widely popular with users. Face swapping, that is, facial image fusion, mainly fuses a user photo with a material photo so that the resulting image has both the facial features of the user photo and the character of the figure in the material photo (for example a military-uniform figure, a child portrait or a period-costume figure), thereby meeting users' entertainment needs and increasing the appeal of the application. In facial image fusion, a skin tone adjustment is usually applied to the user photo so that the face region of the user photo blends more naturally into the face region of the material image, giving the user a more natural and more convincing effect.
In the course of research into and practice of the prior art, the inventors of the present invention found that skin tone is usually adjusted by means of a filter. In this approach, however, different user photos fused with the same material photo are all adjusted with the same filter; for example, photos of users with darker and with lighter skin use an identical filter, so the adjustment cannot be tailored to different skin tones. As a result, the fused photo tends to show uneven color, the face region appears abrupt, and the realism of the photo is reduced.
Summary of the invention
The embodiments of the present invention provide a facial image fusion method, apparatus and storage medium, which make the color of the fused target image more uniform and reduce the jarring appearance of the face region, so that the target image as a whole is more natural and realistic.
An embodiment of the present invention provides a facial image fusion method, comprising:
obtaining a facial image to be fused and a material facial image;
obtaining first pixel value distribution information of the facial skin pixels of the facial image to be fused in each color channel of a predetermined color space, and obtaining second pixel value distribution information of the facial skin pixels of the material facial image in each color channel of the predetermined color space;
determining a first pixel value adjustment parameter of each color channel according to the first pixel value distribution information, and determining a second pixel value adjustment parameter of each color channel according to the second pixel value distribution information;
adjusting the pixel value of the corresponding color channel of each image pixel in the facial image to be fused according to the first pixel value adjustment parameter and the second pixel value adjustment parameter of each color channel; and
fusing the adjusted facial image to be fused with the material facial image to generate a target image.
An embodiment of the present invention also provides a facial image fusion apparatus, comprising:
a first obtaining module, configured to obtain a facial image to be fused and a material facial image;
a second obtaining module, configured to obtain first pixel value distribution information of the facial skin pixels of the facial image to be fused in each color channel of a predetermined color space, and to obtain second pixel value distribution information of the facial skin pixels of the material facial image in each color channel of the predetermined color space;
a determining module, configured to determine a first pixel value adjustment parameter of each color channel according to the first pixel value distribution information, and to determine a second pixel value adjustment parameter of each color channel according to the second pixel value distribution information;
an adjusting module, configured to adjust the pixel value of the corresponding color channel of each image pixel in the facial image to be fused according to the first pixel value adjustment parameter and the second pixel value adjustment parameter of each color channel; and
a generating module, configured to fuse the adjusted facial image to be fused with the material facial image to generate a target image.
Wherein the predetermined color space is the RGB color space, and the color channels comprise an R channel, a G channel and a B channel; the second obtaining module is specifically configured to:
determine the facial skin pixels in the facial image to be fused;
obtain the R-channel, G-channel and B-channel pixel values of each facial skin pixel in the facial image to be fused; and
obtain, according to the R-channel, G-channel and B-channel pixel values of each facial skin pixel in the facial image to be fused, the histogram of each color channel of the facial skin pixels in the facial image to be fused, thereby obtaining the first pixel value distribution information.
Wherein the second obtaining module is specifically configured to:
perform face detection on the facial image to be fused to obtain the facial feature information of the facial image to be fused;
determine the facial skin region in the facial image to be fused according to the facial feature information and a preset mask image; and
perform skin color detection on the facial skin region to determine the facial skin pixels in the facial image to be fused.
Wherein the first obtaining module is specifically configured to:
obtain an original facial image; and
preprocess the original facial image to obtain the facial image to be fused, the preprocessing comprising cropping, face rectification, skin tone adjustment and deformation of the original facial image.
In the facial image fusion method of the present invention, the pixel value distribution information of the facial skin pixels in each color channel is obtained for both the facial image to be fused and the material facial image, first and second pixel value adjustment parameters are determined for each color channel according to this distribution information, the pixel value of the corresponding color channel of each image pixel in the facial image to be fused is adjusted according to the first and second pixel value adjustment parameters, and the adjusted facial image to be fused is fused with the material facial image to generate the target image. Because the facial image to be fused is adjusted using the pixel value distribution information of the facial skin pixels of both images, a different skin tone adjustment can be made for each different facial image to be fused, the color of the target image becomes more uniform, and the jarring appearance of the face region is reduced, so that the target image as a whole is more natural and realistic.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is an overall framework diagram of the facial image fusion method provided by an embodiment of the present invention;
Fig. 1b is a schematic diagram of a scenario of the facial image fusion method provided by an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of the facial image fusion method provided by an embodiment of the present invention;
Fig. 3 is a schematic flow diagram of preprocessing the original facial image in the facial image fusion method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of facial feature points in the facial image fusion method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the histograms of each color channel of the facial skin pixels obtained using mask images in the facial image fusion method provided by an embodiment of the present invention;
Fig. 6 is a graph of the mapping relationship between pixel values before and after adjustment in the facial image fusion method provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the fusion result of the adjusted facial image to be fused and the material facial image in the facial image fusion method provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the facial image fusion apparatus provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The embodiments of the present invention provide a facial image fusion method, apparatus and storage medium.
The facial image fusion apparatus may be integrated in a terminal that has a storage unit and a microprocessor with computing capability, such as a tablet PC (Personal Computer) or a mobile phone. Fig. 1a is an overall framework diagram of the facial image fusion method provided by an embodiment of the present invention, in which the facial image fusion apparatus is integrated in a tablet PC. The apparatus is mainly used to: obtain a facial image to be fused and a material facial image; obtain the pixel value distribution information of the facial skin pixels in each color channel of the predetermined color space for the facial image to be fused and for the material facial image, respectively the first pixel value distribution information and the second pixel value distribution information; determine the first pixel value adjustment parameter of each color channel according to the first pixel value distribution information, and the second pixel value adjustment parameter of each color channel according to the second pixel value distribution information; adjust the pixel value of the corresponding color channel of each image pixel in the facial image to be fused according to the first and second pixel value adjustment parameters of each color channel; and fuse the adjusted facial image to be fused with the material facial image to generate the target image.
Referring to Fig. 1b, which is a schematic diagram of an application scenario of the facial image fusion method provided by an embodiment of the present invention: the method is suitable, for example, for various photo-taking or image-processing scenarios, and can be applied in various social APPs (Applications), image-processing APPs and the like to provide a face-swapping function. For example, an image-processing APP may provide a material library for storing various types of material facial images, such as child portraits, military-uniform portraits and period-costume portraits, with possibly several material facial images of each type. When the user wants to swap faces, the user may, for example, tap a "face swap" button and select a material facial image, such as a military-uniform portrait, from a pop-up interface; the user then takes a selfie with the camera or selects a portrait photo from the local album as the facial image to be fused. Face fusion is performed on the material facial image selected by the user and the facial image to be fused, producing a target image that has both the facial features of the user and the character of the material facial image. By using the facial image fusion method of the embodiments of the present invention during face fusion, the user's facial skin tone can be brought closer to the color of the material facial image, so that the color of the target image is more uniform, the jarring appearance of the face region is reduced, and the target image as a whole is more natural and realistic.
These will be described in detail below.
Embodiment One
This embodiment is described from the perspective of the facial image fusion apparatus, which may be integrated in a terminal such as a mobile phone or a tablet computer.
Referring to Fig. 2, the facial image fusion method of this embodiment is mainly used to fuse the face in the facial image to be fused into the face of the material facial image, and may specifically include the following flow:
S201. Obtain a facial image to be fused and a material facial image.
The facial image to be fused may, for example, come from a user selfie or be an image containing a face selected by the user from an album; it may be a real facial image, or an animated image, a hand-drawn image, etc. The material facial image may be, for example, a military-uniform portrait or an ID photo. The facial image to be fused and the material facial image may be determined according to the user's selection, or the material facial image may be a default one; for example, when the face-swap function of an image-processing APP is opened, a material facial image is selected by default, and the user takes a selfie or selects the facial image to be fused.
In this embodiment, obtaining the facial image to be fused may specifically include: obtaining an original facial image; and preprocessing the original facial image to obtain the facial image to be fused, the preprocessing comprising cropping, face rectification, skin tone adjustment and deformation of the original facial image.
With reference to Fig. 3, the original facial image 31 may, for example, be a user selfie, and the facial image to be fused is the image obtained after the selfie is preprocessed; the dashed box in the figure marks the preprocessing. The skin tone adjustment in the preprocessing may, for example, brighten or darken the skin tone or beautify the face, and the deformation may, for example, enlarge or reduce the image. The material facial image 32 may likewise be preprocessed by cropping, face rectification, deformation and the like; after deformation, the facial image to be fused and the material facial image are substantially the same size. Preprocessing the images helps improve the face fusion effect.
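As one concrete illustration of the deformation step, the image to be fused can be rescaled so that it matches the material image in size. The following is a minimal sketch assuming a nearest-neighbor resize in pure NumPy; the patent only requires that the two images end up substantially the same size and does not prescribe an interpolation method or these helper names.

```python
import numpy as np

def match_size(image: np.ndarray, target_h: int, target_w: int) -> np.ndarray:
    """Nearest-neighbor resize so the image to be fused matches the material image.

    A sketch of the 'deformation' preprocessing only; real applications would
    typically use a library resampler with better interpolation.
    """
    h, w = image.shape[:2]
    rows = np.arange(target_h) * h // target_h   # source row for each target row
    cols = np.arange(target_w) * w // target_w   # source column for each target column
    return image[rows][:, cols]
```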
In some embodiments, the facial image to be fused may directly be the original facial image.
S202. Obtain first pixel value distribution information of the facial skin pixels of the facial image to be fused in each color channel of a predetermined color space, and obtain second pixel value distribution information of the facial skin pixels of the material facial image in each color channel of the predetermined color space.
The predetermined color space may be selected according to actual needs; it may be, for example, the RGB color space, the HSV color space or the YUV color space. Different color spaces have different color channels: the RGB color space has an R (red) channel, a G (green) channel and a B (blue) channel; the HSV color space has an H (hue) channel, an S (saturation) channel and a V (value) channel; and the YUV color space has a Y (luminance) channel and U and V (chrominance) channels.
Taking the RGB color space as the predetermined color space, the pixel value distribution information of the color channels comprises the pixel value distribution information of the R channel, of the G channel and of the B channel. Obtaining the first pixel value distribution information may specifically include the following steps:
(11) Determine the facial skin pixels in the facial image to be fused.
For example, face detection may be performed on the facial image to be fused to obtain the facial feature information of the facial image to be fused. The facial feature information may be feature point information of the facial features, i.e. position data of the facial features; the feature points may be the points on the contours of the facial features shown in Fig. 4. The facial features may include the face shape, eyebrows, eyes, nose and mouth.
Then, the facial skin region in the facial image to be fused is determined according to the facial feature information and a preset mask image. In this embodiment, the facial skin region is determined by mask filtering. In image processing, a mask is mainly a selected image, figure or object used to occlude all or part of the image to be processed, in order to control the region or the process of image processing. As shown in Fig. 5, a1 is the facial image to be fused and a2 is the mask image. The mask image a2 of this embodiment is a binary image used to occlude the part of the face below the eyes excluding the nose and mouth, so that the mask image extracts the region of the face below the eyes while filtering out the nose and mouth, yielding the facial skin region a3. As shown in Fig. 5, the facial skin region a3 obtained is the region between the eyes and the chin, excluding the nose and mouth.
Before the mask image is used to filter out the parts of the facial image to be fused other than the facial skin region, the mask image is adjusted according to the facial feature information of the facial image to be fused. For example, the face-shape size and position of the mask image are adjusted according to the face-shape feature points of the facial image to be fused so that the two coincide or substantially coincide, and the positions and sizes of the nose and mouth in the mask image are adjusted according to the nose and mouth feature points of the facial image to be fused so that the nose sizes and positions of the two images substantially coincide and the mouth sizes and positions also substantially coincide. The face shape, nose and mouth of the mask image are thereby aligned with the face shape, nose and mouth of the facial image to be fused, so a more accurate facial skin region can be obtained.
Of course, in other embodiments the facial skin region may also be the region from the forehead to the chin, with the mask image filtering out the eyebrows, eyes, nose and mouth so that these parts do not interfere with the skin pixel detection.
After the facial skin region is determined, skin color detection is performed on it, mainly by analyzing each pixel in the facial skin region to judge whether it is a skin pixel. Since human skin has a characteristic color that can be distinguished from the background fairly clearly, and skin color occupies only a certain range among the primaries of the color space, skin pixels can be extracted by analyzing whether the value of a pixel falls within this range. There are various skin color detection methods, such as skin color detection based on an RGB color model, skin detection based on an elliptical skin model, or screening by the H range in the HSV color space; the method may be selected according to actual needs.
In this embodiment, the facial skin region is first determined with the mask image to remove the interference of the eyes, nose, mouth and other parts, which improves the accuracy of skin pixel detection and reduces misjudgments, while also reducing the amount of computation and improving the efficiency of skin detection.
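The mask-plus-detection pipeline above can be sketched as follows, assuming the RGB color space. The threshold rule below is one widely used RGB skin-color heuristic, not thresholds given by the patent, which leaves the specific detection method open:

```python
import numpy as np

def skin_pixels_rgb(face_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Boolean map of the facial skin pixels inside the masked facial region.

    face_image: H x W x 3 uint8 RGB image; mask: H x W boolean map, True where
    the mask image keeps the pixel. The thresholds are a common RGB skin rule
    chosen for illustration, not taken from the patent.
    """
    r = face_image[..., 0].astype(np.int16)
    g = face_image[..., 1].astype(np.int16)
    b = face_image[..., 2].astype(np.int16)
    rule = (
        (r > 95) & (g > 40) & (b > 20)
        & (np.maximum(np.maximum(r, g), b) - np.minimum(np.minimum(r, g), b) > 15)
        & (np.abs(r - g) > 15) & (r > g) & (r > b)
    )
    return rule & mask.astype(bool)
```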
(12) Obtain the R-channel, G-channel and B-channel pixel values of each facial skin pixel in the facial image to be fused.
After the facial skin pixels of the facial skin region are determined, the pixel value of each color channel of each facial skin pixel is obtained; the pixel value is the grayscale value of the channel, with a value range of 0 to 255.
(13) Obtain, according to the R-channel, G-channel and B-channel pixel values of each facial skin pixel in the facial image to be fused, the histogram of each color channel of the facial skin pixels in the facial image to be fused, thereby obtaining the first pixel value distribution information.
Each facial skin pixel consists of an R primary component, a G primary component and a B primary component, and the pixel value of each component is the pixel value of the corresponding channel. In this embodiment, the facial skin pixels can be counted by RGB histogram statistics: for each channel, the number of facial skin pixels having each pixel value is counted, giving the pixel value distribution of the facial skin pixels, which is represented as a histogram. As shown in Fig. 5, a4 shows the R-channel, G-channel and B-channel histograms of the facial skin pixels of the facial skin region in the facial image to be fused. The horizontal axis of a histogram represents the pixel values in the range 0 to 255, and the vertical axis represents the number of facial skin pixels having each pixel value in the corresponding color channel. For example, in the R-channel histogram, a horizontal-axis value of 125 with a vertical-axis value of 300 means that there are 300 facial skin pixels whose R-channel pixel value is 125. When the number of facial skin pixels for some pixel value in a channel is zero, the channel does not contain that pixel value. The histograms of the R, G and B channels therefore determine the pixel value distributions of the R, G and B channels.
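Step (13) can be sketched as follows in NumPy, assuming an RGB image and a boolean skin-pixel map as produced by the detection described above; the function name and return format are illustrative, not the patent's:

```python
import numpy as np

def skin_histograms(image: np.ndarray, skin_mask: np.ndarray) -> dict:
    """Per-channel 256-bin histograms of the facial skin pixels (step (13)).

    image: H x W x 3 uint8 RGB image; skin_mask: H x W boolean map of the
    facial skin pixels.
    """
    hists = {}
    for i, name in enumerate(("R", "G", "B")):
        values = image[..., i][skin_mask]                  # skin pixels only
        hists[name] = np.bincount(values, minlength=256)   # count per value 0..255
    return hists
```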
The second pixel value distribution information of the facial skin pixels of the material facial image in each color channel may likewise be the R-channel, G-channel and B-channel histograms of the facial skin pixels of the material facial image. The second pixel value distribution information may be preconfigured: when facial image fusion is performed, the configuration information of the material facial image selected by the user, which includes the second pixel value distribution information, is read directly from a stored database according to the selected material facial image. Facial skin pixel detection, histogram statistics and other computations then do not need to be performed on the material facial image during fusion, which improves fusion efficiency and reduces the time the user waits for the fusion result. For example, when a material facial image is stored into the material library (before step S201), the second pixel value distribution information of the facial skin pixels in each color channel is obtained and stored, so that when the user later selects the material facial image for face fusion, the second pixel value distribution information need only be read directly.
Alternatively, in other embodiments, the second pixel value distribution information may be obtained during face fusion (after step S201) through facial skin pixel detection, histogram statistics and so on. The second pixel value distribution information is obtained in a similar way to the first pixel value distribution information. As shown in Fig. 5, b1, b2, b3 and b4 respectively show the material facial image, its mask image, the facial skin region of the material facial image, and the R-channel, G-channel and B-channel histograms of the facial skin pixels of the material facial image. For example, the facial skin pixels in the material facial image may first be determined, then the R-channel, G-channel and B-channel pixel values of each facial skin pixel obtained, and the histogram of each color channel derived from the pixel values of each channel, thereby obtaining the second pixel value distribution information. The facial skin pixels of the material facial image are obtained in a similar way to those of the facial image to be fused: the facial feature information of the material facial image is determined first, the facial skin region of the material facial image is then determined according to the facial feature information and a mask image, and skin color detection is performed on the facial skin region to obtain the facial skin pixels.
S203. Determine the first pixel value adjustment parameter of each color channel according to the first pixel value distribution information, and determine the second pixel value adjustment parameter of each color channel according to the second pixel value distribution information.
Specifically, the n1-th percentile and the n2-th percentile of the pixel values of each color channel of the facial skin pixels in the facial image to be fused may be obtained according to the first pixel value distribution information, thereby obtaining the first pixel value adjustment parameters; and the n3-th percentile and the n4-th percentile of the pixel values of each color channel of the facial skin pixels in the material facial image may be obtained according to the second pixel value distribution information, thereby obtaining the second pixel value adjustment parameters.
In this embodiment, the first pixel value adjustment parameter of any color channel comprises the n1-th and n2-th percentiles of the pixel values of the corresponding channel of the facial skin pixels in the facial image to be fused, and the second pixel value adjustment parameter of any color channel comprises the n3-th and n4-th percentiles of the pixel values of the corresponding channel of the facial skin pixels in the material facial image. The pixel value distribution information is the histogram, and the pixel values of each color channel range over 0 to 255. When the vertical-axis count for some pixel value in the histogram of a color channel is zero, that pixel value is not included among the pixel values contained in the channel; otherwise, when the count is not zero, the pixel value is included. The histogram of each color channel thus determines which pixel values each color channel of the facial skin pixels contains.
For the pixel values contained in any color channel, all the pixel values are arranged in ascending order and divided into 100 equal parts. The n-th percentile of the pixel values of the channel is the pixel value at the n% position of this division; its meaning is that the facial skin pixels whose pixel value in the channel is less than or equal to the n-th percentile account for n% of the total number of facial skin pixels, i.e. among all facial skin pixels, n% have a pixel value in the channel not exceeding the n-th percentile. The n-th percentile may also be written as n%, with n taking values in the interval [1, 100].
Therefore, the n1-th and n2-th percentiles of the R-channel pixel values of the facial skin pixels in the facial image to be fused are, respectively, the pixel values at the n1% position and the n2% position after all the pixel values contained in the R channel are sorted in ascending order and divided into 100 equal parts. The n1-th and n2-th percentiles of the G-channel pixel values of the facial skin pixels in the facial image to be fused are obtained in the same way from all the pixel values contained in the G channel, and so on for the n1-th and n2-th percentiles of the B-channel pixel values, as well as for the n3-th and n4-th percentiles of the R-channel, G-channel and B-channel pixel values of the facial skin pixels in the material facial image, which are not repeated one by one here.
Here, n1 is less than n2, n3 is less than n4, and n1, n2, n3 and n4 all take values in the interval [1, 100].

In addition, n1 and n3 may both be less than 50, and n2 and n4 may both be greater than or equal to 50. Further, n1=n3 and n2=n4 may be set; for example, n1 and n3 may take the value 20, and n2 and n4 may take the value 90. Of course, n1, n2, n3 and n4 may also take other values: n1 and n3 may be 30, 35 or 45, and n2 and n4 may be 55, 65 or 80, chosen according to actual needs. In some other embodiments, n1 and n3 need not be equal (for example n1 may be 10, 30 or 40, while n3 may be 15, 25 or 38), and n2 and n4 need not be equal either (for example n2 may be 50, 60 or 70, while n4 may be 58, 75 or 95).
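As a concrete illustration, the percentile described above can be read directly off the sorted channel values of the face skin pixels. The sketch below is not from the patent itself; it is one straightforward reading of the "sort, divide into 100 parts, take the value at the n% position" definition, with the example values n1 = 20 and n2 = 90 mentioned above, and `skin_r` standing in for hypothetical R-channel values of the face skin pixels.

```python
import numpy as np

def channel_percentiles(channel_values, n_low, n_high):
    """Sort the channel values in ascending order and return the
    pixel values at the n_low% and n_high% positions."""
    ordered = np.sort(np.asarray(channel_values, dtype=np.float64))

    def at(n):
        # index of the value at the n% position among 100 equal parts
        idx = int(np.ceil(n / 100.0 * len(ordered))) - 1
        return ordered[max(idx, 0)]

    return at(n_low), at(n_high)

# Hypothetical R-channel values of ten face skin pixels
skin_r = [120, 135, 140, 150, 155, 160, 170, 180, 190, 200]
d1, d2 = channel_percentiles(skin_r, 20, 90)  # n1 = 20, n2 = 90
```

With these ten values, the 20th percentile is the second smallest value (135) and the 90th percentile is the ninth smallest (190).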
S204: according to the first pixel value adjusting parameter and the second pixel value adjusting parameter of each color channel, adjust the pixel value of the corresponding color channel of each image pixel in the facial image to be fused.

Specifically, the pixel value of each color channel of each image pixel in the facial image to be fused is first obtained, where the image pixels are the pixels constituting the facial image to be fused, including the face skin pixels. For the pixel value x of any color channel of an image pixel, the adjustment may be performed according to the following formulas:

when x < D1, F(x) = x*D3/D1;  (1)

when x >= D1, F(x) = D3 + (x-D1)*(D4-D3)/(D2-D1);  (2)

where F(x) is the adjusted pixel value x, D1 is the n1-th percentile, D2 is the n2-th percentile, D3 is the n3-th percentile, and D4 is the n4-th percentile. As shown in Fig. 6, the mapping between x and F(x) is the curve illustrated there. For example, for the R-channel pixel value of an image pixel, formula (1) is used when the pixel value is less than D1 and formula (2) is used when the pixel value is greater than or equal to D1, and so on for the other channels.
In some other embodiments, formulas (1) and (2) above may also be applied only to the face skin pixels in the facial image to be fused.
S205: fuse the adjusted facial image to be fused with the material facial image to generate a target image.

After each image pixel of the facial image to be fused has been adjusted, the adjusted facial image to be fused is fused with the material facial image, that is, the face in the facial image to be fused is fused into the face region of the material facial image. As shown in Fig. 7, a target image 73 is obtained after the facial image to be fused 71 and the material facial image 72 are fused.
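The patent does not spell out which compositing operator step S205 uses, so the sketch below uses a simple per-pixel alpha blend over a face-region mask purely as an illustrative assumption; the function and variable names are not from the patent.

```python
import numpy as np

def fuse(adjusted_face, material_img, mask):
    """Blend the adjusted face into the material image's face region.
    `mask` is an HxW float array in [0, 1], 1 inside the face region.
    A per-pixel alpha blend is one plausible choice; the patent does
    not fix the blending operator."""
    a = mask[..., None]  # broadcast the mask over the color channels
    return (a * adjusted_face + (1.0 - a) * material_img).astype(np.uint8)

face = np.full((2, 2, 3), 200, dtype=np.float64)      # adjusted face patch
material = np.full((2, 2, 3), 100, dtype=np.float64)  # material image patch
mask = np.array([[1.0, 0.5], [0.0, 1.0]])             # soft face-region mask
target = fuse(face, material, mask)
```

Soft mask edges (values between 0 and 1) give a gradual transition at the face boundary, which is consistent with the stated goal of avoiding visible color seams.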
In this embodiment, the pixel value distribution information of the face skin pixels in the facial image to be fused and in the material facial image is obtained, the adjusting parameters of each color channel are then determined from this distribution information, and the facial image to be fused is adjusted using these parameters. Different adjustments can therefore be made for different facial images to be fused, so that the face skin color of the adjusted facial image to be fused is closer to that of the material facial image. As a result, the skin color of the fused target image is more even, color differences are reduced, and the target image looks more natural.
Embodiment Two

This embodiment provides a facial image fusion device, which may be integrated in a terminal such as a mobile phone.

Referring to Fig. 8, the facial image fusion device of this embodiment includes a first acquisition module 801, a second acquisition module 802, a determining module 803, an adjustment module 804 and a generation module 805.
The first acquisition module 801 is configured to obtain a facial image to be fused and a material facial image. The facial image to be fused may, for example, come from a selfie taken by the user or an image containing a face selected by the user from an album. The first acquisition module 801 may specifically be configured to obtain an original facial image and pre-process it to obtain the facial image to be fused, where the pre-processing includes cropping, face correction, skin color adjustment and deformation of the original facial image. The original facial image is, for example, a selfie taken by the user, and the facial image to be fused is the image obtained after pre-processing that selfie.
The second acquisition module 802 is configured to obtain first pixel value distribution information of the face skin pixels in the facial image to be fused in each color channel of a predetermined color space, and to obtain second pixel value distribution information of the face skin pixels in the material facial image in each color channel of the predetermined color space. The predetermined color space may be selected according to actual needs; it may be, for example, the RGB color space, the HSV color space or the YUV color space.

In this embodiment, the pixel value distribution information of each color channel is the histogram of that color channel. Taking the RGB color space as the predetermined color space as an example, the second acquisition module 802 is specifically configured to determine the face skin pixels in the facial image to be fused; then obtain the pixel values of the R channel, the G channel and the B channel of each face skin pixel; and thereafter obtain the histogram of each color channel of the face skin pixels from these pixel values, thereby obtaining the first pixel value distribution information. The face skin pixels may be determined, for example, by first performing face detection on the facial image to be fused to obtain facial-feature information of the facial image to be fused; then determining the face skin color region in the facial image to be fused according to the facial-feature information and a preset mask image; and finally performing skin color detection on that region to determine the face skin pixels in the facial image to be fused.

The second pixel value distribution information is obtained in a manner similar to the first pixel value distribution information; reference may be made to the acquisition process of the first pixel value distribution information, which is not repeated here.
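The per-channel histogram statistics gathered by the second acquisition module can be sketched as follows, assuming 8-bit RGB images and a boolean mask marking the face skin pixels; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def skin_histograms(image, skin_mask):
    """Return a 256-bin histogram per RGB channel over the skin pixels.
    `image` is an HxWx3 uint8 array; `skin_mask` is an HxW boolean
    array marking the face skin pixels."""
    skin = image[skin_mask]  # N x 3 array of skin pixel values
    return [np.bincount(skin[:, c], minlength=256) for c in range(3)]

# Tiny example: two skin pixels in a 2x2 image
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (10, 20, 30)
img[0, 1] = (10, 25, 30)
mask = np.array([[True, True], [False, False]])
hist_r, hist_g, hist_b = skin_histograms(img, mask)
```

Each histogram counts how many skin pixels take each channel value, which is exactly the information needed to read off the percentiles used as adjusting parameters.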
The determining module 803 is configured to determine the first pixel value adjusting parameter of each color channel according to the first pixel value distribution information, and to determine the second pixel value adjusting parameter of each color channel according to the second pixel value distribution information. Specifically, the n1-th and n2-th percentiles of the pixel values of each color channel of the face skin pixels in the facial image to be fused may be obtained from the first pixel value distribution information, thereby obtaining the first pixel value adjusting parameter; and the n3-th and n4-th percentiles of the pixel values of each color channel of the face skin pixels in the material facial image may be obtained from the second pixel value distribution information, thereby obtaining the second pixel value adjusting parameter.

Here, n1 is less than n2, n3 is less than n4, and n1, n2, n3 and n4 all take values in the interval [1, 100]. In addition, n1 and n3 may both be less than 50, and n2 and n4 may both be greater than or equal to 50. Further, n1=n3 and n2=n4 may be set.
The adjustment module 804 is configured to adjust, according to the first pixel value adjusting parameter and the second pixel value adjusting parameter of each color channel, the pixel value of the corresponding color channel of each image pixel in the facial image to be fused. Specifically, the pixel value of each color channel of each image pixel in the facial image to be fused is first obtained, where the image pixels are the pixels constituting the facial image to be fused, including the face skin pixels. For the pixel value x of any color channel of an image pixel, the adjustment may be performed according to the following formulas:

when x < D1, F(x) = x*D3/D1;  (1)

when x >= D1, F(x) = D3 + (x-D1)*(D4-D3)/(D2-D1);  (2)

where F(x) is the adjusted pixel value x, D1 is the n1-th percentile, D2 is the n2-th percentile, D3 is the n3-th percentile, and D4 is the n4-th percentile. For example, for the R-channel pixel value of an image pixel, formula (1) is used when the pixel value is less than D1 and formula (2) is used when the pixel value is greater than or equal to D1, and so on for the other channels.
The generation module 805 is configured to fuse the adjusted facial image to be fused with the material facial image to generate a target image.

In this embodiment, the pixel value distribution information of the face skin pixels in the facial image to be fused and in the material facial image is obtained, the adjusting parameters of each color channel are then determined from this distribution information, and the facial image to be fused is adjusted using these parameters. Different adjustments can therefore be made for different facial images to be fused, so that the face skin color of the adjusted facial image to be fused is closer to that of the material facial image. As a result, the skin color of the fused target image is more even, color differences are reduced, and the target image looks more natural.
Embodiment Three

Correspondingly, an embodiment of the present invention further provides a terminal. As shown in Fig. 9, the terminal may include components such as a radio frequency (RF) circuit 901, a memory 902 including one or more computer-readable storage media, an input unit 903, a display unit 904, a sensor 905, an audio circuit 906, a Wireless Fidelity (WiFi) module 907, a processor 908 including one or more processing cores, and a power supply 909. Those skilled in the art will appreciate that the terminal structure shown in Fig. 9 does not limit the terminal, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement. In detail:
The RF circuit 901 may be used to receive and send signals during messaging or a call; in particular, after receiving downlink information from a base station, it passes the information to one or more processors 908 for processing, and it sends uplink data to the base station. The RF circuit 901 typically includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 901 may communicate with networks and other devices by wireless communication, which may use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, the Short Messaging Service (SMS), and so on.
The memory 902 may be used to store software programs and modules, and the processor 908 performs various functional applications and data processing by running the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application required by at least one function (such as a sound-playing function or an image-playing function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data, a phone book, etc.). In addition, the memory 902 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 902 may also include a memory controller to provide the processor 908 and the input unit 903 with access to the memory 902.
The input unit 903 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 903 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or touch pad, collects the user's touch operations on or near it (such as operations performed with a finger, a stylus or any other suitable object or accessory on or near the touch-sensitive surface) and drives the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 908, and can receive and execute commands sent by the processor 908. The touch-sensitive surface may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface, the input unit 903 may further include other input devices, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 904 may be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video and any combination thereof. The display unit 904 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near it, the touch-sensitive surface transmits the operation to the processor 908 to determine the type of the touch event, and the processor 908 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in Fig. 9 the touch-sensitive surface and the display panel implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement both functions.
The terminal may further include at least one sensor 905, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor may detect the magnitude of acceleration in various directions (generally along three axes), may detect the magnitude and direction of gravity when stationary, and may be used for applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games and magnetometer attitude calibration), vibration-recognition functions (such as a pedometer and tapping), and the like. The terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described here.
The audio circuit 906, a loudspeaker and a microphone may provide an audio interface between the user and the terminal. The audio circuit 906 may transmit the electric signal converted from the received audio data to the loudspeaker, which converts it into a sound signal for output; conversely, the microphone converts the collected sound signal into an electric signal, which is received by the audio circuit 906 and converted into audio data. After the audio data is output to the processor 908 for processing, it is sent, for example, to another terminal through the RF circuit 901, or output to the memory 902 for further processing. The audio circuit 906 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 907, the terminal may help the user send and receive e-mail, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Fig. 9 shows the WiFi module 907, it can be understood that it is not an essential component of the terminal and may be omitted as needed within the scope that does not change the essence of the invention.
The processor 908 is the control center of the terminal. It connects the various parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 902 and calling the data stored in the memory 902, thereby monitoring the mobile phone as a whole. Optionally, the processor 908 may include one or more processing cores; preferably, the processor 908 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 908.
The terminal further includes the power supply 909 (such as a battery) that powers the various components. Preferably, the power supply may be logically connected to the processor 908 through a power management system, so that charging, discharging and power-consumption management are implemented through the power management system. The power supply 909 may also include any component such as one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may further include a camera, a Bluetooth module and the like, which are not described here. Specifically, in this embodiment, the processor 908 in the terminal loads the executable files corresponding to the processes of one or more applications into the memory 902 according to the following instructions, and runs the applications stored in the memory 902 to implement various functions:

obtaining a facial image to be fused and a material facial image; then obtaining first pixel value distribution information of the face skin pixels in the facial image to be fused in each color channel of a predetermined color space, and obtaining second pixel value distribution information of the face skin pixels in the material facial image in each color channel of the predetermined color space; determining the first pixel value adjusting parameter of each color channel according to the first pixel value distribution information, and determining the second pixel value adjusting parameter of each color channel according to the second pixel value distribution information; adjusting, according to the first pixel value adjusting parameter and the second pixel value adjusting parameter of each color channel, the pixel value of the corresponding color channel of each image pixel in the facial image to be fused; and thereafter fusing the adjusted facial image to be fused with the material facial image to generate a target image.
The n1-th and n2-th percentiles of the pixel values of each color channel of the face skin pixels in the facial image to be fused may be obtained according to the first pixel value distribution information, thereby obtaining the first pixel value adjusting parameter; and the n3-th and n4-th percentiles of the pixel values of each color channel of the face skin pixels in the material facial image may be obtained according to the second pixel value distribution information, thereby obtaining the second pixel value adjusting parameter.

The pixel value of each color channel of each image pixel in the facial image to be fused is obtained, and the pixel value x of any color channel of an image pixel may be adjusted according to the following formulas:

when x < D1, F(x) = x*D3/D1;

when x >= D1, F(x) = D3 + (x-D1)*(D4-D3)/(D2-D1);

where F(x) is the adjusted pixel value x, D1 is the n1-th percentile, D2 is the n2-th percentile, D3 is the n3-th percentile, and D4 is the n4-th percentile.

Here, n1=n3 and n2=n4, and n1 and n3 are both less than 50 while n2 and n4 are both greater than or equal to 50.
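Putting the recapped steps together, a whole-image adjustment under the setting n1=n3 and n2=n4 could look like the minimal sketch below. Note two assumptions: the function and variable names are illustrative, and `np.percentile` uses linear interpolation by default, which can differ slightly from the sort-and-take-the-n%-position definition given in the embodiments.

```python
import numpy as np

def match_skin(img, skin_mask, material_img, material_mask,
               n_low=20, n_high=90):
    """Adjust each channel of `img` so that the (n_low, n_high)
    percentiles of its skin pixels map onto those of the material
    image's skin pixels, using formulas (1) and (2). Assumes the
    lower source percentile d1 is nonzero."""
    out = img.astype(np.float64)
    for c in range(3):
        src = img[skin_mask][:, c].astype(np.float64)
        mat = material_img[material_mask][:, c].astype(np.float64)
        d1, d2 = np.percentile(src, [n_low, n_high])
        d3, d4 = np.percentile(mat, [n_low, n_high])
        x = out[..., c]          # view into `out`, modified in place
        low = x < d1
        x[low] = x[low] * d3 / d1                                # formula (1)
        x[~low] = d3 + (x[~low] - d1) * (d4 - d3) / (d2 - d1)    # formula (2)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy data: a 4x4 gray ramp treated entirely as skin
img = np.tile((np.arange(16, dtype=np.uint8) * 10 + 50).reshape(4, 4, 1),
              (1, 1, 3))
mask = np.ones((4, 4), dtype=bool)
same = match_skin(img, mask, img, mask)  # identical material -> identity map
```

When the material's skin distribution equals the source's, d1=d3 and d2=d4, so both formulas reduce to F(x)=x and the image is unchanged, which is a useful sanity check on an implementation.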
For the specific implementation of each of the above operations, reference may be made to the preceding embodiments, which is not repeated here.

In the embodiment of the present invention, the pixel value distribution information of the face skin pixels in the facial image to be fused and in the material facial image is obtained, the adjusting parameters of each color channel are then determined from this distribution information, and the facial image to be fused is adjusted using these parameters. Different adjustments can therefore be made for different facial images to be fused, so that the face skin color of the adjusted facial image to be fused is closer to that of the material facial image. As a result, the skin color of the fused target image is more even, color differences are reduced, and the target image looks more natural.
Embodiment Four

Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be completed by instructions, or by instructions controlling related hardware, and that the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.

To this end, an embodiment of the present invention provides a storage medium storing a plurality of instructions that can be loaded by a processor to execute the steps of any of the facial image fusion methods provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
obtaining a facial image to be fused and a material facial image; then obtaining first pixel value distribution information of the face skin pixels in the facial image to be fused in each color channel of a predetermined color space, and obtaining second pixel value distribution information of the face skin pixels in the material facial image in each color channel of the predetermined color space; determining the first pixel value adjusting parameter of each color channel according to the first pixel value distribution information, and determining the second pixel value adjusting parameter of each color channel according to the second pixel value distribution information; adjusting, according to the first pixel value adjusting parameter and the second pixel value adjusting parameter of each color channel, the pixel value of the corresponding color channel of each image pixel in the facial image to be fused; and thereafter fusing the adjusted facial image to be fused with the material facial image to generate a target image.

The n1-th and n2-th percentiles of the pixel values of each color channel of the face skin pixels in the facial image to be fused may be obtained according to the first pixel value distribution information, thereby obtaining the first pixel value adjusting parameter; and the n3-th and n4-th percentiles of the pixel values of each color channel of the face skin pixels in the material facial image may be obtained according to the second pixel value distribution information, thereby obtaining the second pixel value adjusting parameter.

The pixel value of each color channel of each image pixel in the facial image to be fused is obtained, and the pixel value x of any color channel of an image pixel may be adjusted according to the following formulas:

when x < D1, F(x) = x*D3/D1;

when x >= D1, F(x) = D3 + (x-D1)*(D4-D3)/(D2-D1);

where F(x) is the adjusted pixel value x, D1 is the n1-th percentile, D2 is the n2-th percentile, D3 is the n3-th percentile, and D4 is the n4-th percentile.

Here, n1=n3 and n2=n4, and n1 and n3 are both less than 50 while n2 and n4 are both greater than or equal to 50.

For the specific implementation of each of the above operations, reference may be made to the preceding embodiments, which is not repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Since the instructions stored in the storage medium can execute the steps of any of the facial image fusion methods provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any of those methods, the details of which are given in the preceding embodiments and are not repeated here.
The facial image fusion method, device and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principle and implementation of the present invention, and the description of the above embodiments is merely intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (15)
1. A facial image fusion method, characterized by comprising:
obtaining a facial image to be fused and a material facial image;
obtaining first pixel value distribution information of face skin pixels in the facial image to be fused in each color channel of a predetermined color space, and obtaining second pixel value distribution information of face skin pixels in the material facial image in each color channel of the predetermined color space;
determining a first pixel value adjusting parameter of each color channel according to the first pixel value distribution information, and determining a second pixel value adjusting parameter of each color channel according to the second pixel value distribution information;
adjusting, according to the first pixel value adjusting parameter and the second pixel value adjusting parameter of each color channel, the pixel value of the corresponding color channel of each image pixel in the facial image to be fused; and
fusing the adjusted facial image to be fused with the material facial image to generate a target image.
2. The method according to claim 1, characterized in that the determining a first pixel value adjusting parameter of each color channel according to the first pixel value distribution information comprises:
obtaining, according to the first pixel value distribution information, the n1-th percentile and the n2-th percentile of the pixel values of each color channel of the face skin pixels in the facial image to be fused, thereby obtaining the first pixel value adjusting parameter, wherein n1 is less than n2.
3. The method according to claim 2, characterized in that the determining a second pixel value adjusting parameter of each color channel according to the second pixel value distribution information comprises:
obtaining, according to the second pixel value distribution information, the n3-th percentile and the n4-th percentile of the pixel values of each color channel of the face skin pixels in the material facial image, thereby obtaining the second pixel value adjusting parameter, wherein n3 is less than n4.
4. The method according to claim 3, characterized in that the adjusting, according to the first pixel value adjusting parameter and the second pixel value adjusting parameter of each color channel, the pixel value of the corresponding color channel of each image pixel in the facial image to be fused comprises:
obtaining the pixel value of each color channel of each image pixel in the facial image to be fused; and
adjusting the pixel value x of any color channel of the image pixel according to the following formulas:
when x < D1, F(x) = x*D3/D1;
when x >= D1, F(x) = D3 + (x-D1)*(D4-D3)/(D2-D1);
wherein F(x) is the adjusted pixel value x, D1 is the n1-th percentile, D2 is the n2-th percentile, D3 is the n3-th percentile, and D4 is the n4-th percentile.
5. The method according to claim 3, characterized in that n1=n3 and n2=n4.
6. The method according to claim 3, characterized in that n1 and n3 are both less than 50, and n2 and n4 are both greater than or equal to 50.
7. The method according to claim 1, characterized in that the predetermined color space is the RGB color space, and each color channel includes an R channel, a G channel and a B channel; and
the obtaining first pixel value distribution information of the face skin pixels in the facial image to be fused in each color channel of the predetermined color space comprises:
determining the face skin pixels in the facial image to be fused;
obtaining the pixel values of the R channel, the G channel and the B channel of each face skin pixel in the facial image to be fused; and
obtaining, according to the pixel values of the R channel, the G channel and the B channel of each face skin pixel in the facial image to be fused, the histogram of each color channel of the face skin pixels in the facial image to be fused, thereby obtaining the first pixel value distribution information.
8. The method according to claim 7, characterized in that the determining the face skin pixels in the facial image to be fused comprises:
performing face detection on the facial image to be fused to obtain facial-feature information of the facial image to be fused;
determining a face skin color region in the facial image to be fused according to the facial-feature information and a preset mask image; and
performing skin color detection on the face skin color region to determine the face skin pixels in the facial image to be fused.
9. The method according to claim 1, characterized in that the obtaining a facial image to be fused and a material facial image comprises:
obtaining an original facial image; and
pre-processing the original facial image to obtain the facial image to be fused, wherein the pre-processing includes cropping, face correction, skin color adjustment and deformation of the original facial image.
10. A facial image fusion device, comprising:
a first obtaining module, configured to obtain a facial image to be fused and a material facial image;
a second obtaining module, configured to obtain first pixel value distribution information of the face skin color pixels in the facial image to be fused for each color channel of a predetermined color space, and to obtain second pixel value distribution information of the face skin color pixels in the material facial image for each color channel of the predetermined color space;
a determining module, configured to determine a first pixel value adjustment parameter for each color channel according to the first pixel value distribution information, and to determine a second pixel value adjustment parameter for each color channel according to the second pixel value distribution information;
an adjusting module, configured to adjust the pixel value of the corresponding color channel of each image pixel in the facial image to be fused according to the first and second pixel value adjustment parameters of each color channel; and
a generating module, configured to fuse the adjusted facial image to be fused with the material facial image to generate a target image.
11. The device according to claim 10, wherein the determining module is specifically configured to:
according to the first pixel value distribution information, obtain the n1-th percentile and the n2-th percentile of the pixel values of each color channel of the face skin color pixels in the facial image to be fused, and thereby obtain the first pixel value adjustment parameter, where n1 is less than n2.
12. The device according to claim 11, wherein the determining module is specifically configured to:
according to the second pixel value distribution information, obtain the n3-th percentile and the n4-th percentile of the pixel values of each color channel of the face skin color pixels in the material facial image, and thereby obtain the second pixel value adjustment parameter, where n3 is less than n4.
13. The device according to claim 12, wherein the adjusting module is specifically configured to:
obtain the pixel value of each color channel of each image pixel in the facial image to be fused; and
for the pixel value x of any color channel of the image pixel, adjust it according to the following formulas:
when x < D1: F(x) = x × D3 / D1;
when x ≥ D1: F(x) = D3 + (x − D1) × (D4 − D3) / (D2 − D1);
where F(x) is the adjusted pixel value, D1 is the n1-th percentile, D2 is the n2-th percentile, D3 is the n3-th percentile, and D4 is the n4-th percentile.
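The piecewise-linear mapping of claim 13 can be transcribed directly. The function below assumes the four percentiles D1–D4 have already been computed from the two distributions (the function and parameter names are mine, not the patent's); it stretches [0, D1] onto [0, D3] and [D1, D2] onto [D3, D4], so the skin-tone percentiles of the image to be fused line up with those of the material image:

```python
# Direct transcription of claim 13's per-channel pixel adjustment.
# d1, d2: n1-th and n2-th percentiles of the image to be fused;
# d3, d4: n3-th and n4-th percentiles of the material image.

def adjust(x, d1, d2, d3, d4):
    """Map pixel value x so that d1 -> d3 and d2 -> d4, linearly in between."""
    if x < d1:
        return x * d3 / d1
    return d3 + (x - d1) * (d4 - d3) / (d2 - d1)

# With D1=50, D2=200, D3=60, D4=210, the breakpoints map exactly:
lo = adjust(50, 50, 200, 60, 210)   # 60.0
hi = adjust(200, 50, 200, 60, 210)  # 210.0
```

Note that values above D2 are extrapolated by the same second segment, which is consistent with the claim's "x ≥ D1" branch.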
14. The device according to claim 12, wherein n1 = n3, n2 = n4, n1 and n3 are each less than 50, and n2 and n4 are each greater than or equal to 50.
15. A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded by a processor to perform the steps of the facial image fusion method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810886318.2A CN109191410B (en) | 2018-08-06 | 2018-08-06 | Face image fusion method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109191410A true CN109191410A (en) | 2019-01-11 |
CN109191410B CN109191410B (en) | 2022-12-13 |
Family
ID=64920362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810886318.2A Active CN109191410B (en) | 2018-08-06 | 2018-08-06 | Face image fusion method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109191410B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8811740B1 (en) * | 2012-10-19 | 2014-08-19 | Google Inc. | Automatic color correction |
CN106156730A (en) * | 2016-06-30 | 2016-11-23 | 腾讯科技(深圳)有限公司 | The synthetic method of a kind of facial image and device |
CN106447604A (en) * | 2016-09-30 | 2017-02-22 | 北京奇虎科技有限公司 | Method and device for transforming facial frames in videos |
US20170139572A1 (en) * | 2015-11-17 | 2017-05-18 | Adobe Systems Incorporated | Image Color and Tone Style Transfer |
CN106791365A (en) * | 2016-11-25 | 2017-05-31 | 努比亚技术有限公司 | Facial image preview processing method and processing device |
CN107045714A (en) * | 2017-05-11 | 2017-08-15 | 杭州知聊信息技术有限公司 | A kind of beautifying faces algorithm for live video communication |
CN107146199A (en) * | 2017-05-02 | 2017-09-08 | 厦门美图之家科技有限公司 | A kind of fusion method of facial image, device and computing device |
CN107302662A (en) * | 2017-07-06 | 2017-10-27 | 维沃移动通信有限公司 | A kind of method, device and mobile terminal taken pictures |
CN108090477A (en) * | 2018-01-23 | 2018-05-29 | 北京易智能科技有限公司 | A kind of face identification method and device based on Multi-spectral image fusion |
Non-Patent Citations (3)
Title |
---|
VICTOR-EMIL NEAGOE ET AL.: "Face detection in color images using fusion of the chrominance and luminance channel decisions", 2010 8th International Conference on Communications *
ZHANG YANYAN ET AL.: "Robust face recognition based on local phase quantization and multi-channel color", Journal of Henan University of Urban Construction *
YANG SHIQIANG ET AL.: "Hand skin color modeling and region detection based on Gaussian model", Journal of Image and Graphics *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110035320A (en) * | 2019-03-19 | 2019-07-19 | 星河视效文化传播(北京)有限公司 | The advertisement load rendering method and device of video |
CN110232730A (en) * | 2019-06-03 | 2019-09-13 | 深圳市三维人工智能科技有限公司 | A kind of three-dimensional face model textures fusion method and computer-processing equipment |
CN110232730B (en) * | 2019-06-03 | 2024-01-19 | 深圳市三维人工智能科技有限公司 | Three-dimensional face model mapping fusion method and computer processing equipment |
CN110211030A (en) * | 2019-06-04 | 2019-09-06 | 北京字节跳动网络技术有限公司 | Image generating method and device |
CN110211030B (en) * | 2019-06-04 | 2023-10-17 | 北京字节跳动网络技术有限公司 | Image generation method and device |
CN110348496A (en) * | 2019-06-27 | 2019-10-18 | 广州久邦世纪科技有限公司 | A kind of method and system of facial image fusion |
CN110348496B (en) * | 2019-06-27 | 2023-11-14 | 广州久邦世纪科技有限公司 | Face image fusion method and system |
CN110838084B (en) * | 2019-09-24 | 2023-10-17 | 咪咕文化科技有限公司 | Method and device for transferring style of image, electronic equipment and storage medium |
CN110838084A (en) * | 2019-09-24 | 2020-02-25 | 咪咕文化科技有限公司 | Image style transfer method and device, electronic equipment and storage medium |
CN112581413B (en) * | 2019-09-29 | 2022-10-11 | 天津工业大学 | Self-adaptive nonlinear weighted human face image fusion method |
CN112581413A (en) * | 2019-09-29 | 2021-03-30 | 天津工业大学 | Self-adaptive nonlinear weighted human face image fusion method |
CN110782419A (en) * | 2019-10-18 | 2020-02-11 | 杭州趣维科技有限公司 | Three-dimensional face fusion method and system based on graphics processor |
CN110782419B (en) * | 2019-10-18 | 2022-06-21 | 杭州小影创新科技股份有限公司 | Three-dimensional face fusion method and system based on graphics processor |
CN110929617A (en) * | 2019-11-14 | 2020-03-27 | 北京神州绿盟信息安全科技股份有限公司 | Face-changing composite video detection method and device, electronic equipment and storage medium |
CN111063008A (en) * | 2019-12-23 | 2020-04-24 | 北京达佳互联信息技术有限公司 | Image processing method, device, equipment and storage medium |
CN113052783A (en) * | 2019-12-27 | 2021-06-29 | 杭州深绘智能科技有限公司 | Face image fusion method based on face key points |
CN111047511A (en) * | 2019-12-31 | 2020-04-21 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111275648A (en) * | 2020-01-21 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Face image processing method, device and equipment and computer readable storage medium |
CN111275648B (en) * | 2020-01-21 | 2024-02-09 | 腾讯科技(深圳)有限公司 | Face image processing method, device, equipment and computer readable storage medium |
CN111627076A (en) * | 2020-04-28 | 2020-09-04 | 广州华多网络科技有限公司 | Face changing method and device and electronic equipment |
CN111627076B (en) * | 2020-04-28 | 2023-09-19 | 广州方硅信息技术有限公司 | Face changing method and device and electronic equipment |
CN111754396A (en) * | 2020-07-27 | 2020-10-09 | 腾讯科技(深圳)有限公司 | Face image processing method and device, computer equipment and storage medium |
WO2022022154A1 (en) * | 2020-07-27 | 2022-02-03 | 腾讯科技(深圳)有限公司 | Facial image processing method and apparatus, and device and storage medium |
CN111754396B (en) * | 2020-07-27 | 2024-01-09 | 腾讯科技(深圳)有限公司 | Face image processing method, device, computer equipment and storage medium |
CN112102153B (en) * | 2020-08-20 | 2023-08-01 | 北京百度网讯科技有限公司 | Image cartoon processing method and device, electronic equipment and storage medium |
CN112102153A (en) * | 2020-08-20 | 2020-12-18 | 北京百度网讯科技有限公司 | Cartoon processing method and device for image, electronic equipment and storage medium |
WO2022179215A1 (en) * | 2021-02-23 | 2022-09-01 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN113395569B (en) * | 2021-05-29 | 2022-12-09 | 北京优幕科技有限责任公司 | Video generation method and device |
CN113395569A (en) * | 2021-05-29 | 2021-09-14 | 北京优幕科技有限责任公司 | Video generation method and device |
CN115348709A (en) * | 2022-10-18 | 2022-11-15 | 良业科技集团股份有限公司 | Smart cloud service lighting display method and system suitable for text travel |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109191410A (en) | A kind of facial image fusion method, device and storage medium | |
US11443462B2 (en) | Method and apparatus for generating cartoon face image, and computer storage medium | |
US20220284844A1 (en) | Dark mode display interface processing method, electronic device, and storage medium | |
CN109816663B (en) | Image processing method, device and equipment | |
CN108307125B (en) | Image acquisition method, device and storage medium | |
CN107851422A (en) | Display control method in electronic equipment and electronic equipment | |
CN108600647A (en) | Shooting preview method, mobile terminal and storage medium | |
CN108701365A (en) | Luminous point recognition methods, device and system | |
CN106204423A (en) | A kind of picture-adjusting method based on augmented reality, device and terminal | |
CN110443769A (en) | Image processing method, image processing apparatus and terminal device | |
CN108900780A (en) | A kind of screen light compensation method, mobile terminal and storage medium | |
CN109213407A (en) | A kind of screenshot method and terminal device | |
CN109144361A (en) | A kind of image processing method and terminal device | |
CN108875594A (en) | A kind of processing method of facial image, device and storage medium | |
US20230245441A9 (en) | Image detection method and apparatus, and electronic device | |
CN109525783A (en) | A kind of exposure image pickup method, terminal and computer readable storage medium | |
CN109151428A (en) | automatic white balance processing method, device and computer storage medium | |
CN108259746A (en) | A kind of image color detection method and mobile terminal | |
CN110035270A (en) | A kind of 3D rendering display methods, terminal and computer readable storage medium | |
CN109348137A (en) | Mobile terminal camera control method, device, mobile terminal and storage medium | |
CN108427938A (en) | Image processing method, device, storage medium and electronic equipment | |
CN109639981A (en) | A kind of image capturing method and mobile terminal | |
EP4099162A1 (en) | Method and apparatus for configuring theme color of terminal device, and terminal device | |
CN109859115A (en) | A kind of image processing method, terminal and computer readable storage medium | |
CN111901519B (en) | Screen light supplement method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |