CN107256555B - Image processing method, device and storage medium - Google Patents


Info

Publication number
CN107256555B
CN107256555B (application CN201710387281.4A)
Authority
CN
China
Prior art keywords
image
color
hair
target
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710387281.4A
Other languages
Chinese (zh)
Other versions
CN107256555A (en)
Inventor
吴磊
蒋兴华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd filed Critical Tencent Technology Shanghai Co Ltd
Priority to CN201710387281.4A priority Critical patent/CN107256555B/en
Publication of CN107256555A publication Critical patent/CN107256555A/en
Application granted granted Critical
Publication of CN107256555B publication Critical patent/CN107256555B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Abstract

Embodiments of the invention disclose an image processing method, an image processing apparatus, and a storage medium. An image to be processed and a target hair color selected by a user are obtained; the hair region of the image to be processed is determined, and a target color image is generated according to the target hair color. Trajectory data of the hair strands in the hair region is then acquired, and the target color image is deformed based on that trajectory data. The image portion of the hair region and the pixels at corresponding positions of the deformed target color image then undergo screen (color-filter) blending to obtain a first target image with the target hair color. In this way, the hair color in an image can be changed to a dark, light, or gradient color, so that target hair colors covering dark, light, and gradient palettes can be offered for the user to select, meeting diverse user needs.

Description

Image processing method, device and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
With the continuous development of electronic technology, mobile terminals (such as smartphones and tablet computers) have become increasingly powerful. Beyond the basic communication and messaging functions, they offer rich entertainment functions such as photography, and captured photos can be further processed, for example by changing lighting effects, whitening skin, or changing hairstyle and hair color, to meet user needs.
At present, many image processing applications are available for mobile devices. After installing such an application on a mobile terminal, a user can take a picture or select a photo and use the application to change the hairstyle, hair color, and so on, achieving a beautifying effect. In existing color-changing techniques, a color layer is generally generated from the target color selected by the user and then simply superimposed on the hair color of the user in the photo to realize the change.
In the course of research and practice on the prior art, the inventors of the present invention found that because the hair color is changed by simple color superposition, mainly dark dyeing effects can be achieved, while light dyeing effects are difficult to realize. The selectable target hair colors are therefore few, and the diverse needs of users cannot be met.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, and a storage medium that can change the hair color in an image to a dark, light, or gradient color, thereby offering target hair colors covering dark, light, and gradient palettes for the user to select and meeting diverse user needs.
The embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring an image to be processed and a target hair color selected by a user;
determining a hair region of the image to be processed, and generating a target color image according to the target hair color;
acquiring trajectory data of the hair strands in the hair region;
performing deformation processing on the target color image based on the trajectory data of the hair strands; and
performing screen (color-filter) blending on the image portion of the hair region and the pixels at corresponding positions of the deformed target color image to obtain a first target image with the target hair color.
Accordingly, an embodiment of the present invention further provides an image processing apparatus, including:
a first acquisition module, configured to acquire an image to be processed and a target hair color selected by a user;
a determining module, configured to determine a hair region of the image to be processed;
a generating module, configured to generate a target color image according to the target hair color;
a second acquisition module, configured to acquire trajectory data of the hair strands in the hair region;
a deformation processing module, configured to deform the target color image based on the trajectory data of the hair strands; and
a first processing module, configured to perform screen (color-filter) blending on the image portion of the hair region and the pixels at corresponding positions of the deformed target color image to obtain a first target image with the target hair color.
Optionally, the deformation processing module is specifically configured to:
determine a center line of the target color image; and
acquire the trajectory data of the center line to obtain the trajectory data of the line at the predetermined position.
Optionally, the image processing apparatus further comprises a scaling processing module;
the scaling processing module is configured to determine a circumscribed rectangle of the hair region and to scale the target color image based on the circumscribed rectangle so that the size of the target color image matches the size of the circumscribed rectangle; and
the deformation processing module is configured to deform the scaled target color image based on the trajectory data of the hair strands.
Optionally, the first processing module is specifically configured to:
acquire the position and the red, green, and blue primary color component values of a first pixel to be processed in the image portion of the hair region;
acquire the red, green, and blue primary color component values of the second pixel to be processed at the corresponding position in the deformed target color image; and
perform screen (color-filter) blending on the values of the same primary color component of the two pixels, taking each component value of the first pixel to be processed as the base color value and the corresponding component value of the second pixel to be processed as the blend color value.
Optionally, the third processing module is specifically configured to:
acquire luminance information of the image portion of the hair region to generate a luminance channel image of that portion;
perform histogram equalization on the luminance channel image to obtain a second weight image;
determine a second weight coefficient from the second weight image; and
perform alpha blending on the image portion of the hair region and the second target image based on the second weight coefficient to obtain a third target image with the target hair color.
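As an illustrative sketch only, not part of the patent text, the histogram equalization and alpha blending described above can be expressed in NumPy. The function names `hist_equalize` and `alpha_blend`, the test image, and the convention that alpha = 1 keeps the second image are assumptions made for the example.

```python
import numpy as np

def hist_equalize(lum):
    """Histogram-equalise an 8-bit luminance image (the 'second weight image')."""
    hist = np.bincount(lum.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    # Normalise the cumulative distribution to [0, 1] so it can act as a weight map.
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return cdf[lum]

def alpha_blend(src, dst, alpha):
    """Per-pixel alpha blend: alpha = 0 keeps src, alpha = 1 keeps dst."""
    return (1.0 - alpha) * src + alpha * dst

# Four distinct luminance levels equalise to evenly spaced weights.
lum = np.array([[0, 64], [128, 255]], dtype=np.uint8)
weights = hist_equalize(lum)
```

Brighter hair pixels thus receive larger weights, which is one plausible way a luminance-derived weight image could steer the blend toward the dyed color in highlights.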
Correspondingly, an embodiment of the present invention further provides a storage medium storing a plurality of instructions suitable for being loaded by a processor to perform the steps of the image processing method.
In the image processing method of the embodiments of the invention, the image to be processed and the target hair color are determined from an image processing request. The hair region of the image to be processed is determined, and a target color image is generated according to the target hair color. Trajectory data of the hair strands in the image is then acquired, the target color image is deformed based on that trajectory data, and the image portion of the hair region and the pixels at corresponding positions of the deformed target color image undergo screen blending to obtain a first target image with the target hair color. Because the deformation gives the target color image a texture corresponding to the hair strands, the color that each strand should take is determined by the target color image and is not constrained by the original hair color in the image to be processed. The method can therefore change the hair color in the image to a dark, light, or gradient color, providing target hair colors covering dark, light, and gradient palettes for the user to select and meeting diverse user needs.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1a is a scene diagram of an image processing method according to an embodiment of the present invention;
FIG. 1b is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 1c is a flowchart of the deformation processing performed on the target color image in the image processing method according to an embodiment of the present invention;
FIG. 2a is a flowchart of the steps after the first target image is obtained in the image processing method according to an embodiment of the present invention;
FIG. 2b is a flowchart of the alpha blending performed on the image portion of the hair region and the pixels at corresponding positions of the first target image in the image processing method according to an embodiment of the present invention;
FIG. 2c is a flowchart of the alpha blending performed on the image portion of the hair region and the pixels at corresponding positions of the second target image in the image processing method according to an embodiment of the present invention;
FIG. 3a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 3b is another schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image processing method, an image processing device and a storage medium.
The image processing device can be specifically integrated in a terminal such as a smart phone, a tablet computer, a notebook computer or a personal computer.
For example, referring to FIG. 1a, the image processing apparatus may provide an image processing interface in which a user can open a picture and process it with the editing functions provided, such as adding text, beautifying the face, or changing the hair color. In the hair color interface, a plurality of target hair colors are offered for selection, for example dark, light, and gradient colors, and the user can select one as needed to change the hair color in the picture to that target hair color.
Specifically, the image processing apparatus may acquire an image to be processed and a target hair color selected by the user, for example yellow, brown, or red. It then determines the hair region of the image, generates a target color image from the target hair color, and acquires trajectory data of the hair strands in the image, so that the target color image can be deformed based on that trajectory data. Finally, it performs screen (color-filter) blending on the image portion of the hair region and the pixels at corresponding positions of the deformed target color image to obtain a first target image with the target hair color.
The target color image may be deformed using a moving-least-squares image deformation algorithm: the trajectory data of each hair strand is acquired in turn, along with the trajectory data of a line at a predetermined position in the target color image; the positions of the pixels in the target color image are then adjusted, strand by strand, according to these two sets of trajectory data to obtain a plurality of strand color images; and the pixels at corresponding positions of the strand color images are screen-blended to obtain the deformed target color image.
Deforming the target color image based on the strand trajectories gives it a texture corresponding to the hair strands; the textured target color image is then screen-blended with the hair region of the image to be processed.
Each step is described in detail below.
Referring to FIG. 1b, this embodiment is described from the perspective of an image processing apparatus, which may be integrated in a terminal such as a smartphone, tablet computer, or personal computer.
As shown in fig. 1b, the specific flow of the image processing method may be as follows:
101. Acquire an image to be processed and a target hair color selected by a user.
For example, in the image processing interface, the user can edit a picture by adding text, beautifying the face, changing the hair color, and so on. In the hair color editing interface, a plurality of target hair colors, for example dark, light, and gradient colors, are offered, and the user can select one as needed to change the hair color in the picture to that target hair color. The image to be processed and the target hair color are obtained from the picture the user opens in the interface and the target hair color the user selects.
102. Determine the hair region of the image to be processed, and generate a target color image according to the target hair color.
The hair region of the image to be processed is the region occupied by hair pixels, so determining it mainly means identifying those pixels: hair detection can be performed on the image according to feature information of hair pixels, and the hair region is determined from the detected pixels. Hair detection can be done in various ways, such as hair color detection based on color-space statistics, detection based on a probability model or a Gaussian model, or detection based on neural network machine learning.
The target hair color is the color selected by the user, such as red, brown, or light yellow. The target color image is an image displaying that color and is determined by the user's selection; for example, if the user selects red, the target color image is an image displaying red.
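As a hedged illustration of the simplest of the detection approaches listed above, the sketch below marks near-black pixels as hair with a plain color threshold; the threshold value and the function name are assumptions for the example and are not taken from the patent.

```python
import numpy as np

def detect_dark_hair(rgb, max_val=60):
    """Naive color-statistics detector: mark pixels whose R, G, and B values
    are all below `max_val` as hair. A stand-in for the color-space,
    model-based, or learned detectors the text mentions."""
    return (rgb < max_val).all(axis=-1)

# Toy image: a dark "hair" row on a bright background.
img = np.full((3, 3, 3), 200, dtype=np.uint8)
img[0, :] = 20
mask = detect_dark_hair(img)   # boolean hair mask
```

A real detector would of course be far more robust (hue statistics, a trained model), but the output in every case is a per-pixel hair mask like `mask` above.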
103. Acquire trajectory data of the hair strands in the hair region.
For example, the trajectories of the strands in the hair region may be determined first, and the trajectory data then extracted from them. The trajectories can be obtained in various ways, for example by detecting the direction of the strands. Strand direction may be detected with a Gabor filter, which, as those skilled in the art understand, is a linear filter used for edge detection whose frequency and orientation tuning resembles that of the human visual system; it is commonly used for texture representation and description and can extract features at different scales and orientations in the frequency domain. A Gabor filter can therefore be applied to the hair as a whole to determine the strand directions and hence the strand trajectories. The trajectory data of a strand may be, for example, the position information of a number of pixel points along its trajectory; once a trajectory is determined, pixel points can be sampled along it to obtain the trajectory data.
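The Gabor-based direction estimate can be sketched in NumPy as follows. The kernel parameters, the eight-angle filter bank, and the brute-force correlation are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def gabor_kernel(theta, ksize=15, sigma=3.0, lambd=6.0):
    """Real part of a Gabor kernel oriented at `theta` radians (parameters assumed)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into the filter frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lambd)

def dominant_orientation(patch, n_angles=8):
    """Angle (radians) of the Gabor filter with the strongest response on `patch`."""
    angles = [k * np.pi / n_angles for k in range(n_angles)]
    scores = []
    for theta in angles:
        k = gabor_kernel(theta)
        kh, kw = k.shape
        h, w = patch.shape
        score = 0.0
        for i in range(h - kh + 1):               # brute-force valid correlation
            for j in range(w - kw + 1):
                score += abs(np.sum(patch[i:i + kh, j:j + kw] * k))
        scores.append(score)
    return angles[int(np.argmax(scores))]

# Synthetic patch whose "strands" run along the y axis (intensity varies along x).
stripes = np.tile(np.cos(2 * np.pi * np.arange(32) / 6.0), (32, 1))
```

In practice a library convolution (e.g. an FFT-based one) would replace the Python loops; the point here is only that the maximally responding filter angle gives the local strand direction.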
104. Deform the target color image based on the trajectory data of the hair strands.
In this embodiment, the target color image is deformed based on the strand trajectory data so that it acquires lines corresponding to the strand trajectories in the image to be processed.
There are various deformation methods, such as a moving-least-squares image deformation algorithm, an algorithm based on two-dimensional interpolation, or an algorithm based on contour polygons. The deformation of the target color image by the moving-least-squares algorithm is described in detail below.
Before the target color image is deformed based on the strand trajectory data, the method further comprises: determining a circumscribed rectangle of the hair region of the image to be processed, and scaling the target color image based on that rectangle so that its size matches the size of the rectangle.
Scaling makes the size of the target color image substantially the same as the size of the hair region. To achieve this, the circumscribed rectangle of the hair region is computed and its size taken as the size of the hair region; the target color image is then scaled so that its length and width equal those of the rectangle. With the two sizes substantially the same, the pixels of the target color image correspond one-to-one with the pixel positions in the hair region.
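A minimal NumPy sketch of the bounding-rectangle and scaling step, under the assumption of a boolean hair mask and nearest-neighbour resizing (the patent does not specify an interpolation method):

```python
import numpy as np

def bounding_rect(mask):
    """Tight circumscribed rectangle (top, left, height, width) of a binary hair mask."""
    ys, xs = np.nonzero(mask)
    top, left = ys.min(), xs.min()
    return top, left, ys.max() - top + 1, xs.max() - left + 1

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize so the color image matches the rectangle."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

# Toy mask: hair pixels occupy rows 2..9 and columns 3..6 of a 12x12 image.
mask = np.zeros((12, 12), dtype=bool)
mask[2:10, 3:7] = True
top, left, rh, rw = bounding_rect(mask)             # (2, 3, 8, 4)
color = np.full((16, 16, 3), 200, dtype=np.uint8)   # flat target-color image
scaled = resize_nearest(color, rh, rw)              # now 8 x 4 x 3
```

After this step `scaled` and the hair-region crop have identical dimensions, giving the one-to-one pixel correspondence the text relies on.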
Specifically, deforming the target color image based on the strand trajectory data means deforming the scaled target color image based on that data. As shown in FIG. 1c, the deformation may comprise the following sub-steps:
1041. Acquire the trajectory data of each hair strand.
A strand trajectory can be understood as a line segment consisting of a number of pixel points, and its trajectory data is the position information of those points in the image to be processed. Specifically, a coordinate system is established over the image region corresponding to the circumscribed rectangle of the hair region and used as the reference coordinate system; the trajectory data of a strand is then the coordinate information of the pixel points on its trajectory in that system. Each strand trajectory can be represented by a point array whose elements are the positions of pixel points on the trajectory.
For example, if m strand trajectories are detected in the image to be processed, there are m sets of trajectory data. Each is a point array dstlist-n = (p1, p2, ..., pn) of length n (n >= 1), where each element pi is the coordinate information of one pixel point on the trajectory, i.e., pi = di(xi, yi), with di denoting the i-th pixel on the trajectory and xi, yi its x-axis and y-axis coordinates in the reference coordinate system.
The elements of the point array dstlist-n = (p1, p2, ..., pn) may be the coordinates of all the pixel points on the strand trajectory, or of a subset of them, for example one pixel point selected at every predetermined interval along the trajectory.
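For illustration, sampling every few pixels of a strand trajectory into a point array might look like this; the step size and the synthetic strand are assumptions for the example.

```python
import numpy as np

def sample_trajectory(points, step=3):
    """Keep every `step`-th pixel point of a strand trajectory as its point array."""
    return points[::step]

# Hypothetical strand: one (x, y) coordinate per pixel along the trajectory.
strand = np.array([(x, 2 * x) for x in range(10)])
dstlist = sample_trajectory(strand, step=3)   # keeps indices 0, 3, 6, 9
```

Whether all pixels or a sampled subset are kept, the result is the point array dstlist-n used as deformation control points in the next sub-steps.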
1042. Acquire the trajectory data of the line at the predetermined position in the scaled target color image.
The line at the predetermined position may be any line running along the y-axis direction of the scaled target color image, for example its center line, or a line to the left or right of the center line. Taking the center line as an example, step 1042 specifically comprises: determining the center line of the scaled target color image, whose length direction is the y-axis direction, and then acquiring its trajectory data to obtain the trajectory data of the line at the predetermined position.
The trajectory data of the center line is the position information of the pixel points along the center line in the target color image and can be represented by a point array srclist = (s1, s2, ..., sn), whose length is the same as that of the point array dstlist-n = (p1, p2, ..., pn), namely n.
1043. Adjust the positions of the pixels in the scaled target color image according to the trajectory data of each strand and the trajectory data of the line at the predetermined position in turn, obtaining a plurality of strand color images.
Adjusting the pixel positions of the scaled target color image according to the trajectory data of a single strand and that of the line at the predetermined position yields the strand color image corresponding to that strand. The pixel positions are adjusted with a point-based moving-least-squares image deformation algorithm.
In such an algorithm, the pixel positions are changed according to a deformation function, producing the deformation effect. In this embodiment, the trajectory data of the center line of the target color image is taken as the set of original control point positions and the trajectory data of a single strand as the set of deformed control point positions. By the principle of the moving-least-squares deformation algorithm, a deformation function can be determined from these two sets: its argument is the coordinate of any pixel of the target color image, and its value is the coordinate of that pixel after deformation. Applying the function to the scaled target color image, i.e. adjusting its pixel positions, yields the strand color image corresponding to that single strand; repeating this for every strand yields the plurality of strand color images.
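A heavily simplified, translation-only variant of the moving-least-squares idea can be sketched as follows. The full algorithm solves an affine or rigid fit per pixel, so this is only an illustration of the weighted-control-point principle; the inverse mapping and the weight exponent are choices made for the example.

```python
import numpy as np

def mls_translate_warp(img, src_pts, dst_pts, alpha=2.0):
    """Translation-only moving-least-squares-style warp (illustrative sketch).

    Control-point displacements (src -> dst) are averaged per output pixel
    with inverse-distance weights w_i = 1 / |p - src_i|^(2 * alpha), and each
    output pixel samples the inversely displaced source position.
    """
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    src = np.asarray(src_pts, dtype=float)          # original control points
    disp = np.asarray(dst_pts, dtype=float) - src   # their displacements
    for y in range(h):
        for x in range(w):
            d2 = np.sum((src - np.array([x, y], dtype=float)) ** 2, axis=1)
            if np.any(d2 == 0):                     # exactly on a control point
                v = disp[np.argmin(d2)]
            else:
                wgt = 1.0 / d2 ** alpha
                v = (wgt[:, None] * disp).sum(axis=0) / wgt.sum()
            # Inverse mapping: fetch the source pixel that would land here.
            sx, sy = int(np.round(x - v[0])), int(np.round(y - v[1]))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
# One control point moved two pixels right: the whole image translates by (2, 0).
warped = mls_translate_warp(img, [(2, 2)], [(4, 2)])
```

With the center-line points as `src_pts` and a strand's point array as `dst_pts`, this kind of warp bends the flat color image along the strand, which is the effect step 1043 describes.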
1044. Screen-blend the pixels at corresponding positions of the plurality of strand color images to obtain the deformed target color image.
Screen (color-filter) blending of two images means applying the screen calculation to the same primary color components of their pixels. Each pixel has red (R), green (G), and blue (B) primary color components, each with a value from 0 to 255 (normalized to the range 0 to 1 for the calculation below). With one image as the upper layer and the other as the lower layer (i.e., the background), the screen blending formula is:
P0 = 1 - (1 - P1) * (1 - P2)    (1)
where P1 is the base color value of the blend, i.e., the (normalized) value of a primary color component of the pixel at a given position in the upper-layer image; P2 is the blend color value, i.e., the value of the corresponding component of the pixel at the corresponding position in the lower-layer image; and P0 is the value of that component of the pixel at the corresponding position in the new image produced by the blend. For example, if P1 is the red component of the pixel at point Q(x1, y1) in the upper-layer image and P2 is the red component of the pixel at the position corresponding to Q in the lower-layer image, then P0 is the red component of the pixel at Q in the new, blended image.
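Equation (1) can be expressed directly in code. Note that pixel values are normalised to [0, 1] before the blend and rescaled afterwards; the helper names are assumptions for the example.

```python
import numpy as np

def screen_blend(base, blend):
    """Equation (1): P0 = 1 - (1 - P1) * (1 - P2), on values normalised to [0, 1]."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

def screen_blend_u8(p1, p2):
    """The same blend for 8-bit pixel values in 0..255."""
    out = screen_blend(np.asarray(p1) / 255.0, np.asarray(p2) / 255.0)
    return np.round(out * 255.0).astype(np.uint8)
```

Two properties follow directly from the formula: blending with 0 leaves the other value unchanged, and blending with 1 (pure white) always yields 1, which is why screen blending can only lighten, never darken, a pixel.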
In this embodiment, when m strand trajectories are detected in the image to be processed, there are likewise m strand color images, denoted imgH1, imgH2, imgH3, ..., imgHm. They can be screen-blended in various ways, for example as follows: formula (1) is applied to the pixels at corresponding positions of two strand color images, and the resulting new image is then blended with the next strand color image, and so on until the last strand color image has been blended.
Specifically, imgH1 and imgH2 are screen-blended using formula (1), with imgH1 as the upper layer and imgH2 as the lower layer; for each primary color component of the pixels at the same position, the blending formulas are:
P0r=1-(1-P1r)*(1-P2r)
P0g=1-(1-P1g)*(1-P2g)
P0b=1-(1-P1b)*(1-P2b)
where P1r, P1g, P1b are the red, green, and blue primary color component values of a pixel at any position of imgH1; P2r, P2g, P2b are the component values of the pixel at the corresponding position of imgH2; and P0r, P0g, P0b are the component values at the corresponding position of the new, blended strand color image.
Applying these formulas in turn to the component values of the pixels at corresponding positions of imgH1 and imgH2 yields a new, blended strand color image. That image (as the upper layer) is then blended with the next strand color image imgH3 (as the lower layer) using formula (1), and so on until the last strand color image imgHm has been processed. The final result is the deformed target color image, i.e., the image obtained by screen-blending the plurality of strand color images.
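The left-to-right folding of the m strand color images might be sketched as follows, using three synthetic single-channel images; the names and values are assumptions for the example.

```python
from functools import reduce
import numpy as np

def screen(a, b):
    """Screen blend on float images normalised to [0, 1] (formula (1) per pixel)."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def fold_strand_images(strand_imgs):
    """Screen-blend imgH1..imgHm pairwise left to right, as the text describes."""
    return reduce(screen, strand_imgs)

# Three hypothetical 2x2 single-channel strand color images.
imgs = [np.full((2, 2), v) for v in (0.2, 0.3, 0.5)]
result = fold_strand_images(imgs)
```

Since the screen operation is associative and commutative, the folding order does not change the result; each additional strand image can only brighten the accumulated image, which is how the strand textures accumulate into one deformed target color image.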
105. And carrying out color filtering and mixing processing on the image part of the hair area and the pixels at the corresponding positions of the deformed target color image to obtain a first target image with the target color.
For example, the method specifically comprises the following steps: acquiring the position of a first to-be-processed pixel of the image portion of the hair region and the pixel values of its red, green and blue primary color components; then acquiring the pixel values of the red, green and blue primary color components of a second to-be-processed pixel corresponding to the position of the first to-be-processed pixel in the deformed target color image; and performing color filter blending on the pixel values of the same primary color component of the two pixels, taking the pixel value of each primary color component of the first to-be-processed pixel as the base color value and the pixel value of each primary color component of the second to-be-processed pixel as the blend color value.
The first pixel to be processed is any one pixel of the image part of the hair area, and the second pixel to be processed is a pixel corresponding to the first pixel to be processed in the deformed target color image.
For convenience of description, hereinafter the deformed target color image is denoted by imgL, and the image portion of the hair region in the image to be processed is denoted by imgM. Formula (1) above is again used to perform color filter blending on the pixel values of the primary color components of the pixels at corresponding positions of imgM and imgL, where imgM is the upper layer image and imgL is the lower layer image; the specific calculation process may refer to the color filter blending of imgH1 and imgH2 described above and is not repeated here. Formula (1) is used to sequentially perform color filter blending on the pixel values of the same primary color component of each pixel at corresponding positions in imgM and imgL, and each calculation result is taken as the new pixel value of that primary color component of the pixel at the corresponding position. A first target image with the target color is thereby obtained, the hair color in the image to be processed is modified into the target color selected by the user, and the hair dyeing effect is realized.
In this embodiment, a target color image is generated according to the target hair color selected by the user, and the target color image is deformed using the trajectory data of the hair in the image to be processed, so that a texture corresponding to the hair is obtained on the target color image. The color to which each single hair is to be changed is thus determined by the target color image, so that the change of hair color is not restricted by the original hair color in the image to be processed, and the hair color in the image can be changed to a dark color, a light color or a gradient color. Target hair colors including dark colors, light colors and gradient colors can therefore be provided for the user to select, meeting diversified user requirements.
Referring to fig. 2a, in another embodiment of the image processing method according to the present invention, after obtaining the first target image with the target color, for example, the following process is further included:
201. and carrying out alpha blending processing on the image part of the hair area and the pixels at the corresponding positions of the first target image to obtain a second target image with target hair color.
In particular, as shown in fig. 2b, the following sub-steps are included:
2011. and generating a hair area image according to the hair area of the image to be processed.
Specifically, a hair region image is generated from the pixels in the hair region. The hair region image is an image in which only the image portion of the hair region is displayed, and the background outside the image portion of the hair region is entirely black. In this embodiment, the hair region image is a single-channel grayscale image, and its generation process may be as follows: extract the pixel information of the hair region of the image to be processed, where the pixel information is the pixel values of the red, green and blue primary color components of each pixel; generate a three-channel image according to the extracted pixel information; and convert the three-channel image into a single-channel grayscale image, thereby obtaining the single-channel grayscale hair region image.
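The steps above can be sketched in NumPy as follows. This is a minimal sketch assuming a boolean hair mask is available; the patent does not specify which RGB-to-gray conversion is used, so the common Rec.601 luma weights are assumed here.

```python
import numpy as np

def hair_region_gray(image_rgb, hair_mask):
    """Build the hair region image: keep RGB pixels inside the hair mask,
    set everything outside it to black, then collapse the three-channel
    result to a single gray channel."""
    three_channel = np.where(hair_mask[..., None], image_rgb, 0.0)
    # Rec.601 luma weights -- an assumption, since the patent only says
    # "convert the three-channel image into a single-channel gray image".
    weights = np.array([0.299, 0.587, 0.114])
    return three_channel @ weights
```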
2012. And performing soft light mixing processing on the pixels at the corresponding positions of the hair region image and the deformed target color image to obtain a first weight image.
The soft light mixing processing of the pixels of the two images refers to a process of performing soft light mixing calculation on the same primary color component of the pixels of the two images, wherein one of the two images is an upper layer image, the other one is a lower layer image, and the soft light mixing calculation formula is as follows:
U0=U2-(1-2*U1)*U2*(1-U2), if U1≤0.5
U0=U2+(2*U1-1)*(sqrt(U2)-U2), if U1>0.5 (2)
(with pixel values normalized to the range [0, 1])
wherein sqrt(U2) is the square root of U2, U1 represents the pixel value of any primary color component of a pixel at any position in the upper layer image, U2 represents the pixel value of the corresponding primary color component of the pixel at the corresponding position in the lower layer image, and U0 represents the pixel value of the corresponding primary color component of the pixel at the corresponding position in the new image obtained by soft light blending of the two images.
In this embodiment, the soft light blending calculation is performed on the pixels at the corresponding positions of the hair region image and the deformed target color image imgL using formula (2). The hair region image is represented by imgF, where imgF is the upper layer image and imgL is the lower layer image. Since imgF is a single-channel grayscale image, it has only one primary color component, whose pixel value is also called the grayscale value; therefore, for each primary color component of the pixels at the same position of imgF and imgL, the soft light blending calculation formulas are as follows:
U0r=U2r-(1-2*U1x)*U2r*(1-U2r), if U1x≤0.5; U0r=U2r+(2*U1x-1)*(sqrt(U2r)-U2r), if U1x>0.5
U0g=U2g-(1-2*U1x)*U2g*(1-U2g), if U1x≤0.5; U0g=U2g+(2*U1x-1)*(sqrt(U2g)-U2g), if U1x>0.5
U0b=U2b-(1-2*U1x)*U2b*(1-U2b), if U1x≤0.5; U0b=U2b+(2*U1x-1)*(sqrt(U2b)-U2b), if U1x>0.5
Wherein, U1x is the pixel value of the primary color component of a pixel at any position of imgF; U2r, U2g, and U2b are the pixel values of the red, green and blue primary color components of the pixel at the corresponding position of imgL, respectively; and U0r, U0g, and U0b are the pixel values of the red, green and blue primary color components at the corresponding position of the first weight image obtained by soft light blending, respectively.
The soft light mixing calculation is sequentially carried out on the pixel values of the three primary color components of the pixel at each corresponding position in imgF and imgL by using the formula, so that a first weight image after soft light mixing processing can be obtained.
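A minimal NumPy sketch of this soft light blending, assuming pixel values normalized to [0, 1] and the standard two-branch soft light formula consistent with the description above (the function names are illustrative):

```python
import numpy as np

def soft_light(upper, lower):
    """Soft light blend of formula (2) for values in [0, 1]. The upper
    layer value U1 selects between a darkening branch (U1 <= 0.5) and a
    lightening branch (U1 > 0.5) that involves sqrt of the lower layer."""
    darker  = lower - (1.0 - 2.0 * upper) * lower * (1.0 - lower)
    lighter = lower + (2.0 * upper - 1.0) * (np.sqrt(lower) - lower)
    return np.where(upper <= 0.5, darker, lighter)

def first_weight_image(imgF, imgL):
    """Soft-light the single-channel gray hair region image imgF (upper
    layer) over each RGB channel of imgL (lower layer)."""
    return soft_light(imgF[..., None], imgL)
```

A useful sanity check on the formula: an upper layer at mid gray (0.5) leaves the lower layer unchanged, pure black squares the lower value, and pure white takes its square root.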
2013. A first weight coefficient is determined from the first weight image.
2014. And carrying out alpha blending processing on the image part of the hair area and the pixel at the corresponding position of the first target image based on the first weight coefficient to obtain a second target image with target hair color.
Alpha blending processing of pixels of two images refers to a process of performing alpha blending calculation on the same primary color component of the pixels of the two images, one of the two images is an upper layer image, and the other is a lower layer image, and an alpha blending calculation formula is as follows:
D0=A*D1+(1-A)*D2 (3)
wherein D1 represents the pixel value of any primary color component of the pixel at any position in the upper layer image, D2 represents the pixel value of the corresponding primary color component of the pixel at the corresponding position in the lower layer image, and D0 represents the pixel value of the corresponding primary color component of the pixel at the corresponding position in the second target image obtained after the alpha blending of the two images. A is the first weight coefficient, whose value is the pixel value of the corresponding primary color component of the pixel at the corresponding position in the first weight image.
In this embodiment, the alpha blending calculation is performed on the image portion imgM of the hair region and the pixel at the corresponding position of the first target image by using the formula (3). Wherein imgM is the lower layer image and the first target image is the upper layer image, and therefore, for each primary color component of the pixel at the same position of imgM and the first target image, the alpha blending calculation formula is as follows:
D0r=Ar*D1r+(1-Ar)*D2r
D0g=Ag*D1g+(1-Ag)*D2g
D0b=Ab*D1b+(1-Ab)*D2b
Wherein, D1r, D1g, and D1b are pixel values of red, green, and blue three-primary-color components of a pixel at any position of imgM, D2r, D2g, and D2b are pixel values of red, green, and blue three-primary-color components of a pixel at a corresponding position of the first target image, Ar, Ag, and Ab are first weight coefficients and take values of the pixel values of the red, green, and blue three-primary-color components of the pixel at the corresponding position in the first weight image, and D0r, D0g, and D0b are pixel values of the red, green, and blue three-primary-color components of a pixel at the corresponding position of the second target image obtained by alpha blending processing of the imgM and the first target image.
The above formula is used to sequentially perform alpha blending calculation on the imgM and the pixel values of the three primary color components of the pixel at each corresponding position in the first target image, so as to obtain a second target image after alpha blending.
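The per-pixel alpha blending of formula (3) is a one-liner in NumPy; this sketch assumes normalized [0, 1] values, with the weight image supplying a separate coefficient for each pixel and channel.

```python
import numpy as np

def alpha_blend(upper, lower, weight):
    """Per-pixel alpha blend of formula (3): D0 = A*D1 + (1-A)*D2.
    `weight` (A) comes from the weight image, with values in [0, 1];
    weight 1 keeps the upper layer, weight 0 keeps the lower layer."""
    return weight * upper + (1.0 - weight) * lower
```

For the second target image, the first target image is the upper layer, imgM is the lower layer, and the first weight image supplies the coefficients: `alpha_blend(first_target, imgM, first_weight_image)` (argument names here are illustrative).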
By performing further soft light blending and alpha blending on the image portion of the hair region, the hair edge transition can be optimized, making the image transition more natural.
202. And carrying out alpha blending processing on the image part of the hair area and the pixels at the corresponding positions of the second target image to obtain a third target image with target hair color.
In particular, as shown in fig. 2c, the following sub-steps may be included:
2021. acquiring brightness information of an image part of the hair area in the image to be processed to generate a brightness channel image of the image part of the hair area.
The method comprises the steps of obtaining pixel values of red, green and blue three-primary-color components of each pixel of an image part of a hair area, obtaining a hue value H, a saturation value S and a brightness value V of each pixel on an HSV color space according to the pixel values of the three-primary-color components of each pixel, and then extracting the brightness value V of each pixel to generate a brightness channel image of the image part of the hair area.
2022. And carrying out histogram equalization processing on the brightness channel image to obtain a second weight image.
The histogram equalization processing mainly refers to adjusting the contrast of an image according to histogram information of the image, for example, counting the number of pixels of each gray level of a luminance channel image, widening the gray level with a large number of pixels, and compressing the gray level with a small number of pixels, thereby improving the contrast and gray tone variation. In this embodiment, the degrees of widening and compressing the gray levels of the luminance channel image may be determined according to the pixel distribution of the image portion of the hair region, so as to perform histogram equalization on the luminance channel image of the image portion of the hair region, thereby improving the contrast of the luminance channel image.
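The widening/compressing of gray levels described above is what classic histogram equalization does via the cumulative histogram; a sketch for an 8-bit single-channel image (this is the textbook CDF remapping, one standard realization of the processing the patent describes):

```python
import numpy as np

def equalize_hist(gray_u8):
    """Histogram equalization of an 8-bit grayscale image: remap each gray
    level through the normalized cumulative histogram, which spreads apart
    heavily-populated levels and compresses sparse ones, improving
    contrast. Assumes the image is not a single constant value."""
    hist = np.bincount(gray_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map the occupied range of the CDF onto the full [0, 255] range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0).astype(np.uint8)
    return lut[gray_u8]
```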
2023. A second weight coefficient is determined from the second weighted image.
2024. And carrying out alpha mixing processing on the image part of the hair area and the second target image based on the second weight coefficient to obtain a third target image with target hair color.
In this step, when the alpha blending calculation is performed by using the formula (3), a is a second weight coefficient, and a value of the second weight coefficient is a pixel value of a primary color component of a pixel of the second weight image. Wherein imgM is the lower layer image and the second target image is the upper layer image, and therefore, for each primary color component of the pixel at the same position of imgM and the second target image, the alpha blending calculation formula is as follows:
D0r=Ar*D1r+(1-Ar)*D2r
D0g=Ag*D1g+(1-Ag)*D2g
D0b=Ab*D1b+(1-Ab)*D2b
Wherein, D1r, D1g, and D1b are pixel values of red, green, and blue three-primary-color components of a pixel at any position of imgM, D2r, D2g, and D2b are pixel values of red, green, and blue three-primary-color components of a pixel at a corresponding position of the second target image, Ar, Ag, and Ab are second weight coefficients that take values of the pixel values of the red, green, and blue three-primary-color components of the pixel at the corresponding position in the second weight image, and D0r, D0g, and D0b are pixel values of the red, green, and blue three-primary-color components of a pixel at the corresponding position of the third target image obtained by alpha blending processing of the imgM and the second target image.
203. And outputting a third target image with the target color.
The third target image with the target color is displayed to the user as the final target image.
In this embodiment, the second weighted image after histogram equalization is used to determine the weighting coefficient for alpha blending, so that the image portion of the hair region in the image to be processed and the second target image are subjected to alpha blending based on the weighting coefficient, and the contrast of the third target image can be improved, so that the target image after color development change has a better display effect.
Of course, in other embodiments, the image portion of the hair region and the first target image may be subjected to alpha blending processing based on the second weight coefficient determined by the second weight image, and the blended image may be output to the user as a final target image.
In order to better implement the above method, an embodiment of the present invention further provides an image processing apparatus, which is abbreviated as an image processing apparatus, and the image processing apparatus may be integrated in a terminal device such as a mobile phone, for example. As shown in fig. 3a, the image processing apparatus includes: a first acquisition module 301, a determination module 302, a generation module 303, a second acquisition module 304, a deformation processing module 305, and a first processing module 306.
The first obtaining module 301 is configured to obtain an image to be processed and a target color selected by a user.
For example, according to a picture opened by a user in an image processing interface and a selected target color, a to-be-processed image and a target color selected by the user are obtained.
The determination module 302 is used to determine the hair region of the image to be processed.
The hair detection can be carried out on the image to be processed according to the characteristic information of the hair pixels, and the hair area of the image to be processed is determined according to the detected hair pixels. The hair detection can be performed in various ways, such as hair color detection based on color space statistics, hair color detection based on a probability model, a gaussian model, or hair color detection based on neural network machine learning, and the like.
The generating module 303 is configured to generate a target color image according to the target color.
The target color image is an image showing a target color selected by the user, and is determined according to the target color selected by the user.
The second obtaining module 304 is configured to obtain trajectory data of the hair in the hair region.
The trajectory of the hair may be obtained in various ways, for example by determining the trajectory of the hair by detecting the direction of the hair, which may be detected, for example, by a Gabor filter. After the trajectory of the hairline is determined, the position information of a plurality of pixel points on the hairline trajectory can be selected along the hairline trajectory to obtain the trajectory data of the hairline.
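One way to realize the Gabor-filter direction detection mentioned above is to correlate a patch with a bank of oriented Gabor kernels and keep the orientation with the strongest response. The sketch below uses a real (cosine) Gabor kernel built in pure NumPy; all parameter values (kernel size, sigma, wavelength, aspect ratio) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gabor_kernel(theta, ksize=15, sigma=3.0, lambd=8.0, gamma=0.5):
    """Real Gabor kernel oriented at angle theta: a Gaussian envelope
    modulating a cosine carrier of wavelength lambd along the rotated
    x axis."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lambd)

def dominant_orientation(patch, n_angles=8):
    """Estimate the local hair direction in a square patch as the angle
    whose Gabor kernel gives the strongest absolute response."""
    angles = [k * np.pi / n_angles for k in range(n_angles)]
    responses = [abs(np.sum(patch * gabor_kernel(a, ksize=patch.shape[0])))
                 for a in angles]
    return angles[int(np.argmax(responses))]
```

In practice one would evaluate this over a sliding window and then trace pixel chains along the per-pixel orientation field to obtain the hair trajectory point lists.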
The deformation processing module 305 is configured to perform deformation processing on the target color image based on trajectory data of the hair.
The step of deforming the target color image is to adjust the positions of the pixels of the target color image so as to achieve the purpose of image deformation. The deformation processing method for the target color image may be various, such as a moving least squares image deformation algorithm, an image deformation algorithm based on two-dimensional interpolation, or an image deformation algorithm based on a contour polygon, and so on.
Optionally, the image processing apparatus may further include a scaling processing module 307.
The scaling processing module 307 is configured to perform scaling processing on the target color image before the deformation processing module 305 performs deformation processing on the target color image. For example, the scaling module 307 is configured to determine a circumscribed rectangle of the shape of the hair region according to the hair region of the image to be processed, and scale the target color image based on the circumscribed rectangle so that the size of the target color image is consistent with the size of the circumscribed rectangle.
The scaling processing of the target color image based on the circumscribed rectangle specifically means adjusting the length and width of the target color image so that they are respectively equal to the length and width of the circumscribed rectangle, thereby making the size of the target color image substantially the same as the size of the hair region.
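Computing the circumscribed rectangle of the hair region reduces to finding the extents of the mask; a minimal NumPy sketch (the return convention is an illustrative choice):

```python
import numpy as np

def circumscribed_rect(hair_mask):
    """Axis-aligned circumscribed rectangle of a boolean hair region mask:
    returns (top, left, height, width) of the smallest box containing all
    True pixels."""
    rows = np.nonzero(hair_mask.any(axis=1))[0]
    cols = np.nonzero(hair_mask.any(axis=0))[0]
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    return top, left, bottom - top + 1, right - left + 1
```

The target color image would then be resized to exactly (height, width), e.g. with any standard image resampling routine.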
Taking the moving least squares image deformation algorithm as an example, the deformation processing module 305 is specifically configured to perform deformation processing on the scaled target color image based on hair trajectory data. For example, the deformation processing module 305 is configured to obtain the trajectory data of each hair, where the trajectory data of a single hair is a point array dstlist-n(p1, p2, …, pn), each element pn in the array being the coordinate information of a pixel point on the hair trajectory. The module then obtains the trajectory data of a line at a predetermined position in the scaled target color image; the line at the predetermined position may be, for example, the centerline of the scaled target color image, and its trajectory data refers to the position information of the pixel points on the centerline in the target color image, which may be recorded, for example, as a point array srclist(s1, s2, …, sn). The array length of srclist(s1, s2, …, sn) is the same as that of dstlist-n(p1, p2, …, pn), namely n.
Then, the deformation processing module 305 is configured to adjust the positions of the pixels in the scaled target color image according to the trajectory data of each hair and the trajectory data of the predetermined position line in sequence, so as to obtain a plurality of hair color images.
The trajectory data of the centerline of the target color image is set as the set of original control point positions, and the trajectory data of a single hair is set as the set of deformed control point positions. According to the principle of the point-based moving least squares image deformation algorithm, a deformation function is determined from the set of original control points and the set of deformed control point positions, and the scaled target color image is deformed according to this function, i.e., the positions of the pixels in the target color image are adjusted. The variable of the deformation function is the coordinate of any pixel point of the target color image, and its value is the coordinate of the deformed pixel point. This yields a target color image with adjusted pixel positions, namely the hair color image corresponding to a single hair; by analogy, a hair color image corresponding to each hair is obtained, thereby obtaining the plurality of hair color images.
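The deformation function can be sketched with the affine variant of moving least squares (the patent does not specify which MLS variant is used, so this is an assumption; note also that exactly collinear control points, such as a perfectly straight centerline, make the affine system singular and would need regularization in a real implementation):

```python
import numpy as np

def mls_affine_warp(v, src_pts, dst_pts, alpha=1.0, eps=1e-8):
    """Moving-least-squares affine deformation of a single point v, driven
    by control points src_pts (original positions) -> dst_pts (deformed
    positions). Points are row vectors (x, y)."""
    v = np.asarray(v, dtype=float)
    p = np.asarray(src_pts, dtype=float)   # (n, 2) original control points
    q = np.asarray(dst_pts, dtype=float)   # (n, 2) deformed control points
    # Inverse-distance weights: nearby control points dominate the fit.
    d2 = np.sum((p - v) ** 2, axis=1)
    w = 1.0 / np.maximum(d2, eps) ** alpha
    p_star = w @ p / w.sum()               # weighted centroids
    q_star = w @ q / w.sum()
    p_hat = p - p_star
    q_hat = q - q_star
    # Weighted least-squares affine matrix M, then f(v) = (v - p*) M + q*.
    A = (p_hat * w[:, None]).T @ p_hat     # 2x2 normal matrix
    B = (p_hat * w[:, None]).T @ q_hat
    M = np.linalg.solve(A, B)
    return (v - p_star) @ M + q_star
```

Applying this function to every pixel coordinate of the scaled target color image (with srclist as `src_pts` and one hair's dstlist as `dst_pts`) produces the hair color image for that hair.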
Then, the deformation processing module 305 is configured to perform color filtering and blending processing on the pixels at the corresponding positions of the multiple hair color images to obtain a deformed target color image.
For example, for two images obtained by color filter blending, one of the images is set as an upper layer image and the other image is set as a lower layer image (i.e., a background image), the calculation formula of color filter blending is as follows:
P0=1-(1-P1)*(1-P2)
wherein, P1 represents the base color value of the color filter blending algorithm, which is the pixel value of any base color component of the pixel at any position in the upper layer image, P2 represents the blending color value of the color filter blending algorithm, which is the pixel value of the corresponding base color component of the pixel at the corresponding position in the lower layer image (i.e. the background image), and P0 represents the pixel value of the corresponding base color component of the pixel at the corresponding position of the new image obtained after the color filter blending processing.
In this embodiment, when the trajectories of m hairs are detected in the image to be processed, the number of hair color images is also m. The m hair color images are denoted imgH1, imgH2, imgH3, …, imgHm, and the color filter blending of the m hair color images may be performed in various ways, for example, as follows:
Firstly, imgH1 and imgH2 are subjected to color filter blending using the above calculation formula, where imgH1 is the upper layer image and imgH2 is the lower layer image, and the pixel values of the three primary color components of the pixels at each corresponding position in imgH1 and imgH2 are sequentially blended, so that a new hair color image is obtained. Then the pixel values of the three primary color components of the pixels at corresponding positions of the new hair color image and the next hair color image imgH3 are blended using the same formula, where the new hair color image is the upper layer image and imgH3 is the lower layer image, and so on until the last hair color image imgHm has been processed. The final calculation result is the deformed target color image, namely, the image obtained by color filter blending of the plurality of hair color images.
The first processing module 306 is configured to perform color filtering and blending processing on the image portion of the hair region and the pixels at the corresponding positions of the deformed target color image to obtain a first target image with a target color.
For example, the first processing module 306 may be specifically configured to acquire the position of a first to-be-processed pixel of the image portion of the hair region and the pixel values of its red, green and blue primary color components; then acquire the pixel values of the red, green and blue primary color components of a second to-be-processed pixel corresponding to the position of the first to-be-processed pixel in the deformed target color image; and perform color filter blending on the pixel values of the same primary color component of the two pixels, taking the pixel value of each primary color component of the first to-be-processed pixel as the base color value and the pixel value of each primary color component of the second to-be-processed pixel as the blend color value.
The deformed target color image is represented by imgL, and the image portion of the hair region in the image to be processed is represented by imgM. The above color filter blending formula can be used to blend the pixel values of the primary color components of the pixels at corresponding positions of imgM and imgL, where imgM is the upper layer image and imgL is the lower layer image; the specific calculation process may refer to the color filter blending of imgH1 and imgH2 described above and is not repeated here. The color filter blending formula is used to sequentially blend the pixel values of the same primary color component of the pixels at each corresponding position in imgM and imgL, and each calculation result is taken as the new pixel value of that primary color component of the pixel at the corresponding position. A first target image with the target color is thereby obtained, the hair color in the image to be processed is modified into the target color selected by the user, and the hair dyeing effect is realized.
In this embodiment, a target color image is generated according to the target hair color selected by the user, and the target color image is deformed using the trajectory data of the hair in the image to be processed, so that a texture corresponding to the hair is obtained on the target color image. The color to which each single hair is to be changed is thus determined by the target color image, so that the change of hair color is not restricted by the original hair color in the image to be processed, and the hair color in the image can be changed to a dark color, a light color or a gradient color. Target hair colors including dark colors, light colors and gradient colors can therefore be provided for the user to select, meeting diversified user requirements.
Optionally, as shown in fig. 3b, the image processing apparatus provided in the embodiment of the present invention may further include a second processing module 308, a third processing module 309, and an output module 310.
The second processing module 308 is configured to, after the first processing module 306 obtains the first target image with the target color, perform alpha blending processing on the image portion of the hair region and the pixels at the corresponding positions of the first target image to obtain a second target image with the target color.
For example, the second processing module 308 is specifically configured to generate a hair region image according to a hair region of the image to be processed, perform soft light blending processing on the hair region image and pixels at corresponding positions of the deformed target color image to obtain a first weight image, determine a first weight coefficient according to the first weight image, and perform alpha blending processing on an image portion of the hair region and pixels at corresponding positions of the first target image based on the first weight coefficient to obtain a second target image with the target color.
The hair region image is an image in which only an image portion of the hair region is displayed, and the background color of the hair region image other than the image portion of the hair region is black.
In this embodiment, when performing the soft light blending calculation on the pixels at corresponding positions of the hair region image and the deformed target color image imgL, the hair region image is represented by imgF, where imgF is the upper layer image and imgL is the lower layer image. Since imgF is a single-channel grayscale image, it has only one primary color component, whose pixel value is also called the grayscale value; therefore, for each primary color component of the pixels at the same position of imgF and imgL, the soft light blending calculation formulas are as follows:
U0r=U2r-(1-2*U1x)*U2r*(1-U2r), if U1x≤0.5; U0r=U2r+(2*U1x-1)*(sqrt(U2r)-U2r), if U1x>0.5
U0g=U2g-(1-2*U1x)*U2g*(1-U2g), if U1x≤0.5; U0g=U2g+(2*U1x-1)*(sqrt(U2g)-U2g), if U1x>0.5
U0b=U2b-(1-2*U1x)*U2b*(1-U2b), if U1x≤0.5; U0b=U2b+(2*U1x-1)*(sqrt(U2b)-U2b), if U1x>0.5
Wherein, U1x is the pixel value of the primary color component of a pixel at any position of imgF; U2r, U2g, and U2b are the pixel values of the red, green and blue primary color components of the pixel at the corresponding position of imgL, respectively; and U0r, U0g, and U0b are the pixel values of the red, green and blue primary color components at the corresponding position of the first weight image obtained by soft light blending, respectively.
The soft light mixing calculation is sequentially carried out on the pixel values of the three primary color components of the pixel at each corresponding position in imgF and imgL by using the formula, so that a first weight image after soft light mixing processing can be obtained.
When alpha blending calculation is performed on the image portion imgM of the hair region and the pixel at the corresponding position of the first target image, imgM is a lower layer image, and the first target image is an upper layer image, so that an alpha blending calculation formula is as follows for each primary color component of the pixel at the same position of the imgM and the first target image:
D0r=Ar*D1r+(1-Ar)*D2r
D0g=Ag*D1g+(1-Ag)*D2g
D0b=Ab*D1b+(1-Ab)*D2b
Wherein, D1r, D1g, and D1b are pixel values of red, green, and blue three-primary-color components of a pixel at any position of imgM, D2r, D2g, and D2b are pixel values of red, green, and blue three-primary-color components of a pixel at a corresponding position of the first target image, Ar, Ag, and Ab are first weight coefficients and take values of the pixel values of the red, green, and blue three-primary-color components of the pixel at the corresponding position in the first weight image, and D0r, D0g, and D0b are pixel values of the red, green, and blue three-primary-color components of a pixel at the corresponding position of the second target image obtained by alpha blending processing of the imgM and the first target image.
The above formula is used to sequentially perform alpha blending calculation on the imgM and the pixel values of the three primary color components of the pixel at each corresponding position in the first target image, so as to obtain a second target image after alpha blending.
By performing further soft light blending and alpha blending on the image portion of the hair region, the hair edge transition can be optimized, making the image transition more natural.
The third processing module 309 is configured to perform alpha blending processing on the image portion of the hair region and the pixel at the corresponding position of the second target image to obtain a third target image with a target color.
For example, the third processing module 309 may be specifically configured to obtain luminance information of an image portion of a hair region in the image to be processed to generate a luminance channel image of the image portion of the hair region, then perform histogram equalization on the luminance channel image to obtain a second weight image, so as to determine a second weight coefficient according to the second weight image, thereby performing alpha blending processing on the image portion of the hair region and the second target image based on the second weight coefficient to obtain a third target image with the target hair color.
When alpha blending is performed on the image portion imgM of the hair region and the pixels at the corresponding positions of the second target image, imgM is the lower-layer image and the second target image is the upper-layer image, so for each primary color component of the pixels at the same position of imgM and the second target image the alpha blending formula is:
D0r=Ar*D1r+(1-Ar)*D2r
D0g=Ag*D1g+(1-Ag)*D2g
D0b=Ab*D1b+(1-Ab)*D2b
Here, D1r, D1g, and D1b are the pixel values of the red, green, and blue primary-color components of a pixel at any position of imgM; D2r, D2g, and D2b are the pixel values of the red, green, and blue primary-color components of the pixel at the corresponding position of the second target image; Ar, Ag, and Ab are the second weight coefficients, taking the pixel values of the red, green, and blue primary-color components of the pixel at the corresponding position in the second weight image; and D0r, D0g, and D0b are the pixel values of the red, green, and blue primary-color components of the pixel at the corresponding position of the third target image obtained by alpha blending imgM with the second target image.
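As an illustrative sketch (not the patent's own implementation), the per-channel alpha blend defined by the three formulas above can be written with NumPy, treating the weight image A as a per-pixel, per-channel coefficient; the function name `alpha_blend` and the normalized [0, 1] weight range are assumptions for this example:

```python
import numpy as np

def alpha_blend(lower, upper, weight):
    """Per-channel alpha blend: D0 = A*D1 + (1 - A)*D2,
    where D1 is the lower-layer image (imgM in the text), D2 the
    upper-layer target image, and A the per-pixel weight taken from
    a weight image, normalized to [0, 1]."""
    lower = np.asarray(lower, dtype=np.float32)
    upper = np.asarray(upper, dtype=np.float32)
    weight = np.asarray(weight, dtype=np.float32)
    return weight * lower + (1.0 - weight) * upper
```

The same function covers both blending stages described above: the second target image uses the first weight image, and the third target image uses the second weight image.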
The output module 310 is configured to output the third target image with the target hair color, for example, by displaying it.
Because the second weight image obtained by histogram equalization is used to determine the weight coefficient of the alpha blending processing, alpha blending the image portion of the hair region in the image to be processed with the second target image based on this weight coefficient improves the contrast of the third target image, so that the target image after the hair color change has a better display effect.
Accordingly, an embodiment of the present invention further provides a terminal, as shown in fig. 4, the terminal may include Radio Frequency (RF) circuits 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, and a power supply 409. Those skilled in the art will appreciate that the terminal configuration shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
The RF circuit 401 may be used for receiving and transmitting signals during message transmission or a call; in particular, it receives downlink information from a base station and passes it to the one or more processors 408 for processing, and likewise transmits uplink data to the base station. In general, the RF circuit 401 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 401 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and the like.
The memory 402 may be used to store software programs and modules, and the processor 408 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal. Further, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In a particular embodiment, the input unit 403 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position of a user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 408, and can receive and execute commands from the processor 408. In addition, the touch-sensitive surface may be implemented using resistive, capacitive, infrared, or surface-acoustic-wave technologies. Besides the touch-sensitive surface, the input unit 403 may include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 404 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 404 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 408 to determine the type of touch event, and then the processor 408 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 4 the touch-sensitive surface and the display panel are shown as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The terminal may also include at least one sensor 405, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
The audio circuit 406, a speaker, and a microphone may provide an audio interface between the user and the terminal. On one hand, the audio circuit 406 may transmit the electrical signal converted from received audio data to the speaker, where it is converted into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 406 receives and converts into audio data. After being processed by the processor 408, the audio data may be transmitted via the RF circuit 401 to, for example, another terminal, or output to the memory 402 for further processing. The audio circuit 406 may also include an earphone jack so that a peripheral headset can communicate with the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 407, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 4 shows the WiFi module 407, it is understood that it is not an essential part of the terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 408 is the control center of the terminal: it connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the terminal as a whole. Optionally, the processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor, which mainly handles the operating system, user interface, applications, and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 408.
The terminal also includes a power supply 409 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 408 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 409 may also include one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which will not be described herein. Specifically, in this embodiment, the processor 408 in the terminal loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 408 runs the application programs stored in the memory 402, thereby implementing various functions:
The functions comprise: obtaining an image to be processed and a target hair color selected by a user; determining a hair region of the image to be processed; generating a target color image according to the target hair color; obtaining trajectory data of the hair strands in the hair region; deforming the target color image based on the trajectory data of the hair strands; and performing color filtering and mixing processing on the image portion of the hair region and the pixels at the corresponding positions of the deformed target color image to obtain a first target image with the target hair color.
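The "color filtering and mixing" named above is a translation of a Chinese blend-mode term that commonly corresponds to the screen blend mode. Under that assumption, a minimal NumPy sketch of the per-channel operation (the function name and 8-bit value range are illustrative, not from the patent) is:

```python
import numpy as np

def screen_blend(base, blend):
    """Screen ("color filter") blend of two 8-bit channels or images:
    D = 255 - (255 - base) * (255 - blend) / 255.
    The result is never darker than either input, which brightens
    the hair region toward the deformed target color image."""
    base = np.asarray(base, dtype=np.float32)
    blend = np.asarray(blend, dtype=np.float32)
    return 255.0 - (255.0 - base) * (255.0 - blend) / 255.0
```

Applied per primary-color component, with the hair-region pixel as the base color value and the deformed target-color pixel as the blend value, this yields the first target image described above.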
For example, the target color image may be deformed using a least squares image deformation algorithm.
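The patent does not detail the deformation algorithm; the standard affine moving-least-squares (MLS) formulation of Schaefer et al. is one common realization of least-squares image deformation. The sketch below warps a single point given source and target control points (for example, points sampled along a hair-strand trajectory); all names and parameters are illustrative assumptions:

```python
import numpy as np

def mls_affine_warp(v, p, q, alpha=1.0, eps=1e-8):
    """Affine moving-least-squares warp of one point v.
    p: (n, 2) source control points (must not be collinear),
    q: (n, 2) target control points. Returns the warped position:
    f(v) = (v - p*) M + q*, with M solving the weighted LSQ system."""
    v = np.asarray(v, dtype=np.float64)
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    # Inverse-distance weights; eps avoids division by zero at control points
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)
    p_star = (w[:, None] * p).sum(axis=0) / w.sum()   # weighted centroids
    q_star = (w[:, None] * q).sum(axis=0) / w.sum()
    p_hat = p - p_star
    q_hat = q - q_star
    # A = sum_i w_i p_hat_i^T p_hat_i, B = sum_i w_i p_hat_i^T q_hat_i
    A = (w[:, None, None] * p_hat[:, :, None] * p_hat[:, None, :]).sum(axis=0)
    B = (w[:, None, None] * p_hat[:, :, None] * q_hat[:, None, :]).sum(axis=0)
    M = np.linalg.solve(A, B)
    return (v - p_star) @ M + q_star
```

When the target points equal the source points the warp is the identity, and a rigid shift of all control points shifts every warped point by the same amount, which is the behavior one wants when bending the target color image along a hair trajectory.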
After the first target image is obtained, alpha blending processing may be further performed on the image portion of the hair region and the pixels at the corresponding positions of the first target image to obtain a second target image with a target hair color, so that a hair edge transition of the second target image may be optimized. For example, the soft light blending process may be performed on the pixels at the corresponding positions of the hair region image and the deformed target color image to obtain a first weight image, and then the alpha blending process may be performed based on a first weight coefficient determined by the first weight image.
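As a hedged illustration of the first step above, one widely used soft-light formula (the patent does not say which variant it uses) can produce the first weight image from the hair-region image and the deformed target color image, with all values normalized to [0, 1]:

```python
import numpy as np

def soft_light(base, blend):
    """One common soft-light blend formula (inputs in [0, 1]);
    the patent does not specify which soft-light variant is used.
    Dark blend values darken the base, bright ones lighten it."""
    base = np.clip(np.asarray(base, dtype=np.float64), 0.0, 1.0)
    blend = np.clip(np.asarray(blend, dtype=np.float64), 0.0, 1.0)
    low = 2.0 * base * blend + base * base * (1.0 - 2.0 * blend)
    high = 2.0 * base * (1.0 - blend) + np.sqrt(base) * (2.0 * blend - 1.0)
    return np.where(blend <= 0.5, low, high)
```

The resulting image then serves as the first weight image, whose per-pixel values act as the first weight coefficients in the subsequent alpha blend.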
Further, after the second target image is obtained, alpha blending processing may be performed on the image portion of the hair region and the pixels at the corresponding positions of the second target image to obtain a third target image with the target hair color. For example, histogram equalization processing may be performed on the luminance channel image of the image portion of the hair region to obtain a second weight image, so that alpha blending processing is performed based on a second weight coefficient determined by the second weight image.
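The second weight image can be obtained by ordinary histogram equalization of the 8-bit luminance channel; a minimal CDF-based sketch (function name illustrative, equivalent in spirit to OpenCV's equalizeHist) is:

```python
import numpy as np

def equalize_hist(gray):
    """Histogram-equalize an 8-bit luminance image via its cumulative
    distribution function; the result serves as a per-pixel weight
    image for the subsequent alpha blend. Assumes the image is not
    a single constant value (otherwise the denominator is zero)."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each grey level through the normalized CDF to spread contrast
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```

Stretching the luminance histogram this way is what raises the contrast of the third target image, as the description notes below.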
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, in the embodiment of the present invention a target color image is generated according to the target hair color selected by the user, and the target color image is deformed using the trajectory data of the hair in the image to be processed, so that texture corresponding to the hair is obtained on the target color image. The color to be applied to each individual hair strand is thus determined by the target color image, so the change of hair color is not constrained by the original hair color in the image to be processed, and the hair in the image can be changed to a dark color, a light color, or a gradient color. Target hair colors covering dark, light, and gradient color schemes can therefore be offered for the user to select, meeting diversified user requirements.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by associated hardware controlled by the instructions; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present invention further provide a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the image processing methods provided by the embodiments of the present invention. For example, the instructions may include the steps of:
The steps comprise: obtaining an image to be processed and a target hair color selected by a user; determining a hair region of the image to be processed; generating a target color image according to the target hair color; obtaining trajectory data of the hair strands in the hair region; deforming the target color image based on the trajectory data of the hair strands; and performing color filtering and mixing processing on the image portion of the hair region and the pixels at the corresponding positions of the deformed target color image to obtain a first target image with the target hair color.
For example, the target color image may be deformed using a least squares image deformation algorithm.
After the first target image is obtained, alpha blending processing may be further performed on the image portion of the hair region and the pixels at the corresponding positions of the first target image to obtain a second target image with a target hair color, so that a hair edge transition of the second target image may be optimized. For example, the soft light blending process may be performed on the pixels at the corresponding positions of the hair region image and the deformed target color image to obtain a first weight image, and then the alpha blending process may be performed based on a first weight coefficient determined by the first weight image.
Further, after the second target image is obtained, alpha blending processing may be performed on the image portion of the hair region and the pixels at the corresponding positions of the second target image to obtain a third target image with the target hair color. For example, histogram equalization processing may be performed on the luminance channel image of the image portion of the hair region to obtain a second weight image, so that alpha blending processing is performed based on a second weight coefficient determined by the second weight image.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image processing method provided in the embodiment of the present invention, the beneficial effects that can be achieved by any image processing method provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image processing method, apparatus, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (15)

1. An image processing method, comprising:
acquiring an image to be processed selected by a user and a target color;
determining a hair area of the image to be processed, and generating a target color image according to the target color;
acquiring trajectory data of the hair strands in the hair area;
performing deformation processing on the target color image based on the trajectory data of the hair strands;
and carrying out color filtering and mixing processing on the image part of the hair area and the pixels at the corresponding positions of the deformed target color image to obtain a first target image with the target color.
2. The image processing method according to claim 1, wherein the deforming the target color image based on the trajectory data of the hair includes:
acquiring track data of each hair;
acquiring track data of lines on a preset position in a target color image;
adjusting the positions of pixels in the target color image according to the trajectory data of each hair and the trajectory data of the lines at the preset positions in sequence to obtain a plurality of hair color images;
and carrying out color filtering and mixing processing on pixels at corresponding positions of the plurality of hair color images to obtain a deformed target color image.
3. The image processing method according to claim 2, wherein the acquiring trajectory data of the line at the predetermined position in the target color image comprises:
determining a center line of the target color image;
and acquiring the track data of the central line to obtain the track data of the line at the preset position.
4. The image processing method according to claim 1, wherein after performing color filtering mixing processing on the image portion of the hair region and the pixels at the corresponding positions of the deformed target color image, the method further comprises:
and carrying out alpha blending processing on the image part of the hair area and the pixels at the corresponding positions of the first target image to obtain a second target image with target hair color.
5. The image processing method according to claim 4, wherein said alpha blending the image portion of the hair region and the pixel at the corresponding position of the first target image comprises:
generating a hair region image according to a hair region of the image to be processed;
performing soft light mixing processing on the pixels at the corresponding positions of the hair region image and the deformed target color image to obtain a first weight image;
determining a first weight coefficient according to the first weight image;
and carrying out alpha blending processing on the image part of the hair area and the pixel at the corresponding position of the first target image based on the first weight coefficient to obtain a second target image with target hair color.
6. The image processing method according to claim 4, wherein after performing alpha blending processing on the image portion of the hair region and the pixels at the corresponding positions of the first target image, the method further comprises:
and carrying out alpha blending processing on the image part of the hair area and the pixels at the corresponding positions of the second target image to obtain a third target image with target hair color.
7. The image processing method according to claim 6, wherein said alpha blending the image portion of the hair region and the pixels at the corresponding positions of the second target image comprises:
acquiring brightness information of the image part of the hair area to generate a brightness channel image of the image part of the hair area;
performing histogram equalization processing on the brightness channel image to obtain a second weight image;
determining a second weight coefficient according to the second weight image;
and carrying out alpha mixing processing on the image part of the hair area and the second target image based on the second weight coefficient to obtain a third target image with target hair color.
8. The image processing method according to claim 1, wherein before the deforming the target color image based on the trajectory data of the hair, further comprising:
determining a circumscribed rectangle of the shape of the hair region according to the hair region;
scaling the target color image based on the circumscribed rectangle so that the size of the target color image is consistent with the size of the circumscribed rectangle;
the deformation processing of the target color image based on the trajectory data of the hair strands comprises: performing deformation processing on the target color image after the scaling processing based on the trajectory data of the hair.
9. The image processing method according to any one of claims 1 to 8, wherein performing color filtering blending processing on the image portion of the hair region and the pixels at the corresponding positions of the deformed target color image includes:
acquiring the position of a first pixel to be processed of the image part of the hair area and pixel values of red, green and blue three primary color components;
acquiring pixel values of red, green and blue three-primary-color components of a second pixel to be processed corresponding to the first pixel to be processed in the deformed target color image;
and performing color filtering and mixing processing on the pixel values of the same primary color component of the first pixel to be processed and the second pixel to be processed by taking the pixel value of each primary color component of the first pixel to be processed as a primary color value and the pixel value of each primary color component of the second pixel to be processed as a mixed color value.
10. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring the to-be-processed image and the target color selected by the user;
the determining module is used for determining a hair area of the image to be processed;
the generating module is used for generating a target color image according to the target color;
the second acquisition module is used for acquiring the trajectory data of the hair strands in the hair area;
the deformation processing module is used for carrying out deformation processing on the target color image based on the trajectory data of the hair strands;
and the first processing module is used for carrying out color filtering and mixing processing on the image part of the hair area and the pixels at the corresponding positions of the deformed target color image to obtain a first target image with the target hair color.
11. The image processing apparatus according to claim 10, wherein the deformation processing module is specifically configured to:
acquiring track data of each hair;
acquiring track data of lines on a preset position in a target color image;
adjusting the positions of pixels in the target color image according to the trajectory data of each hair and the trajectory data of the lines at the preset positions in sequence to obtain a plurality of hair color images;
and carrying out color filtering and mixing processing on pixels at corresponding positions of the plurality of hair color images to obtain a deformed target color image.
12. The image processing apparatus according to claim 10, further comprising:
and the second processing module is used for carrying out alpha mixing processing on the image part of the hair area and the pixels at the corresponding positions of the first target image to obtain a second target image with the target hair color.
13. The image processing apparatus according to claim 12, wherein the second processing module is specifically configured to:
generating a hair region image according to a hair region of the image to be processed;
performing soft light mixing processing on the pixels at the corresponding positions of the hair region image and the deformed target color image to obtain a first weight image;
determining a first weight coefficient according to the first weight image;
and carrying out alpha blending processing on the image part of the hair area and the pixel at the corresponding position of the first target image based on the first weight coefficient to obtain a second target image with target hair color.
14. The image processing apparatus according to claim 12, further comprising:
and the third processing module is used for carrying out alpha mixing processing on the image part of the hair area and the pixels at the corresponding positions of the second target image to obtain a third target image with the target hair color.
15. A storage medium for storing a plurality of instructions adapted to be loaded by a processor for performing the steps of the image processing method according to any one of claims 1 to 9.
CN201710387281.4A 2017-05-25 2017-05-25 Image processing method, device and storage medium Active CN107256555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710387281.4A CN107256555B (en) 2017-05-25 2017-05-25 Image processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710387281.4A CN107256555B (en) 2017-05-25 2017-05-25 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN107256555A CN107256555A (en) 2017-10-17
CN107256555B true CN107256555B (en) 2021-11-02

Family

ID=60027745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710387281.4A Active CN107256555B (en) 2017-05-25 2017-05-25 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN107256555B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198127A (en) * 2017-11-27 2018-06-22 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN108036746B (en) * 2017-12-26 2019-08-06 太原理工大学 A kind of Gabor transformation realization carbon fibre composite surface texture analysis method based on Spectrum Method
CN109994037B (en) * 2017-12-29 2021-12-14 杭州海康威视系统技术有限公司 Method, device, terminal and storage medium for generating electronic map
CN108492247A (en) * 2018-03-23 2018-09-04 成都品果科技有限公司 A kind of eye make-up chart pasting method based on distortion of the mesh
CN108629819B (en) * 2018-05-15 2019-09-13 北京字节跳动网络技术有限公司 Image hair dyeing treating method and apparatus
CN111724449A (en) * 2019-03-19 2020-09-29 北京京东尚科信息技术有限公司 Image processing method, device and equipment
CN110009708B (en) * 2019-04-10 2020-08-28 上海大学 Color development transformation method, system and terminal based on image color segmentation
CN110930296B (en) * 2019-11-20 2023-08-08 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN110969631B (en) * 2019-11-25 2023-04-11 杭州小影创新科技股份有限公司 Method and system for dyeing hair by refined photos
CN111047669B (en) * 2019-12-11 2023-08-22 广州品唯软件有限公司 Color taking method and device
CN111127591B (en) * 2019-12-24 2023-08-08 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium
CN111260581B (en) * 2020-01-17 2023-09-26 北京达佳互联信息技术有限公司 Image processing method, device and storage medium
CN111586444B (en) * 2020-06-05 2022-03-15 广州繁星互娱信息科技有限公司 Video processing method and device, electronic equipment and storage medium
CN112465882A (en) * 2020-11-17 2021-03-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112634155B (en) * 2020-12-22 2024-03-22 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN112862807A (en) * 2021-03-08 2021-05-28 网易(杭州)网络有限公司 Data processing method and device based on hair image
CN113421278B (en) * 2021-06-22 2023-08-15 咪咕互动娱乐有限公司 Range detection method, device, equipment and storage medium based on edge detection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4872056A (en) * 1987-02-05 1989-10-03 Video Graphic Styling, Inc. Method for displaying selected hairstyles in video form
CN103955962A (en) * 2014-04-21 2014-07-30 华为软件技术有限公司 Device and method for virtualizing human hair growth
CN104484664A (en) * 2014-12-31 2015-04-01 小米科技有限责任公司 Human face image processing method and device
CN105405163A (en) * 2015-12-28 2016-03-16 北京航空航天大学 Vivid static-state hair modeling method based on multiple direction fields
CN105654420A (en) * 2015-12-21 2016-06-08 小米科技有限责任公司 Face image processing method and device
CN106203399A (en) * 2016-07-27 2016-12-07 厦门美图之家科技有限公司 A kind of image processing method, device and calculating equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060282288A1 (en) * 2003-12-31 2006-12-14 Klinger Advanced Aesthetics, Inc. Methods of providing a patient with aesthetic improvement procedures


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image fusion by color texture extraction; Marcelino Anguiano-Morales et al.; Optik; 2014-01-31; full text *
Research on adaptive beautification and rendering of face images; Liang Lingyu; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2014-11-15; full text *

Also Published As

Publication number Publication date
CN107256555A (en) 2017-10-17

Similar Documents

Publication Publication Date Title
CN107256555B (en) Image processing method, device and storage medium
US11443462B2 (en) Method and apparatus for generating cartoon face image, and computer storage medium
CN109191410B (en) Face image fusion method and device and storage medium
CN107817939B (en) Image processing method and mobile terminal
US9779527B2 (en) Method, terminal device and storage medium for processing image
CN108846274B (en) Security verification method, device and terminal
EP3035283A1 (en) Image processing method and apparatus, and terminal device
CN108551519B (en) Information processing method, device, storage medium and system
CN110458921B (en) Image processing method, device, terminal and storage medium
CN107895352A (en) Image processing method and mobile terminal
CN107644396B (en) Lip color adjusting method and device
CN108280813A (en) Image processing method, terminal and computer readable storage medium
CN110689479B (en) Face makeup method, device, equipment and medium
CN107845057A (en) Photographing preview method and mobile terminal
CN109727212B (en) Image processing method and mobile terminal
CN111182236A (en) Image synthesis method and device, storage medium and terminal equipment
CN109471579A (en) Terminal screen arrangement information method of adjustment, device, mobile terminal and storage medium
CN109104578B (en) Image processing method and mobile terminal
CN113018856A (en) Image processing method, image processing device, electronic equipment and storage medium
CN105574834B (en) Image processing method and device
CN109639981B (en) Image shooting method and mobile terminal
CN112053416B (en) Image processing method, device, storage medium and computer equipment
CN113888447A (en) Image processing method, terminal and storage medium
CN113645476A (en) Picture processing method and device, electronic equipment and storage medium
CN110766606B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant