CN110009708B - Hair color transformation method, system and terminal based on image color segmentation - Google Patents

Hair color transformation method, system and terminal based on image color segmentation Download PDF

Info

Publication number
CN110009708B
Authority
CN
China
Prior art keywords
image
background
color
hair color
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910283493.7A
Other languages
Chinese (zh)
Other versions
CN110009708A (en)
Inventor
Ma Ran (马然)
Liu Yun (刘云)
An Ping (安平)
Yang Mengya (杨梦雅)
You Zhixiang (尤志翔)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201910283493.7A priority Critical patent/CN110009708B/en
Publication of CN110009708A publication Critical patent/CN110009708A/en
Application granted granted Critical
Publication of CN110009708B publication Critical patent/CN110009708B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a hair color transformation method based on image color segmentation, which comprises the following steps: S1, acquiring an image and preprocessing it; S2, applying background filtering to the preprocessed image obtained in S1; S3, performing face recognition and eye recognition on the person in the image obtained in S2, segmenting the hair region, and masking the eye regions; S4, sampling a plurality of hair-color pixels from the original image, averaging them to replace the hair-color value, and applying the hair color transformation to the eye-masked image obtained in S3 based on the averaged value; S5, applying background restoration to the color-transformed image obtained in S4. A hair color transformation system and a terminal based on image color segmentation are also provided. The technique is simple to operate, effectively eliminates interference, and realizes the hair color transformation quickly and accurately.

Description

Hair color transformation method, system and terminal based on image color segmentation
Technical Field
The invention relates to the technical field of computer image processing, and in particular to a hair color transformation method, system and terminal based on image color segmentation; it discloses an image processing method based on image segmentation and color transformation.
Background
With the continuous improvement of computer performance and the continuous development of image acquisition equipment, images have penetrated people's daily life, production and consumption, scientific research, and other fields. The detection, presentation, analysis and use of hair characteristics have likewise received wide attention and study. Hair is an important feature of human appearance, and hair analysis has at least two potential areas of application: person recognition and facial image indexing. However, because the appearance and properties of hair change so easily, hair is widely regarded as an unstable feature for face recognition. Hair has also long been an important research topic in computer graphics and animation. In daily life, people like to use photo-editing software when taking pictures, and the hair color transformation function in such software is very attractive.
Research on hair has nevertheless progressed very slowly: there is almost no prior art on hair color transformation, the techniques and methods for segmenting the hair region are immature, and errors occur when segmenting edge regions.
Disclosure of Invention
The invention provides a hair color transformation method, system and terminal based on image color segmentation which, starting from a person's original hair color (for example black), convert the hair into a desired color during image processing by preprocessing the picture and eliminating interference factors.
The invention is realized by the following technical scheme.
According to one aspect of the present invention, there is provided a hair color transformation method based on image color segmentation, comprising:
S1, acquiring an image and preprocessing it;
S2, applying background filtering to the preprocessed image obtained in S1;
S3, performing face recognition and eye recognition on the person in the image obtained in S2, segmenting the hair region, and masking the eye regions;
S4, sampling a plurality of hair-color pixels from the original image, averaging them to replace the hair-color value, and applying the hair color transformation to the eye-masked image obtained in S3 based on the averaged value;
S5, applying background restoration to the color-transformed image obtained in S4.
Preferably, S1 includes:
adjusting the contrast and brightness of the picture with the following formula:
g(x,y) = a × f(x,y) + b
where f(x,y) is the value of channel c of the pixel in row x, column y of the source image; g(x,y) is the value of channel c of the pixel in row x, column y of the target image; the parameter a is the gain (contrast) factor, a > 0; the parameter b is a bias that adjusts the brightness.
Preferably, 0.0 ≤ a ≤ 3.0.
Preferably, S2 includes:
separating the foreground and background of the image based on the rate of change of pixel values: a Gaussian mixture model represents the characteristics of each pixel in the image; after each new frame is obtained the mixture model is updated, and each pixel of the current image is matched against it; if the match succeeds the pixel is judged a background point, otherwise a foreground point;
the Gaussian mixture model contains K Gaussian components; for a component representing the background, it is assumed that the brightness of each pixel of a background W in the image follows a Gaussian distribution, i.e. the brightness of each pixel (x, y) of W satisfies W(x,y) ~ N(u, d), with density
p(x) = (1/√(2πd)) × exp(-(x - u)²/(2d))
where p(x) is the probability density function of a one-dimensional Gaussian distribution, and u and d are the mean and variance parameters stored in each pixel attribute of the background Gaussian component;
for a given image Q, when
|Q(x,y) - W(x,y)| < T
where T is a constant threshold, the pixel (x, y) is a background point; otherwise it is a foreground point;
the background is then updated with each frame:
W_t(x,y) = p × W_{t-1}(x,y) + (1 - p) × Q_t(x,y)
where W_t(x,y) is the background pixel parameter at time t; p is a constant in the range 0-1 reflecting the background update rate (the larger p, the slower the background updates); Q_t(x,y) is the pixel parameter of the image at time t;
after the background is obtained, the image loop is entered; the person marked in the static image is treated as foreground, and the background is deleted to obtain the foreground of the image.
Preferably, the value of K is 3-5.
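As an illustrative sketch of the background test and update formulas above (not code from the patent itself; the grayscale input and the values p = 0.95 and T = 30 are assumptions), one frame step could be written as:

```python
import numpy as np

def background_step(W_prev, Q, p=0.95, T=30.0):
    """One frame of the background model described above.

    W_prev : current background estimate (float grayscale array)
    Q      : new frame, same shape as W_prev
    p      : update rate in [0, 1]; larger p means slower background update
    T      : constant threshold of the test |Q(x,y) - W(x,y)| < T
    """
    foreground = np.abs(Q - W_prev) >= T   # where the test fails -> foreground point
    # W_t(x,y) = p * W_{t-1}(x,y) + (1 - p) * Q_t(x,y)
    W_new = p * W_prev + (1 - p) * Q
    return W_new, foreground
```

Pixels where the returned mask is True would then be kept as foreground, and the rest deleted (painted white).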
Preferably, in S3, face recognition and human eye recognition are used: a cascade of boosted classifiers detects the face region in the image, and a trained eye detector locates the eyes within the face region. The classifier operates as follows:
computing, for each classifier, the feature values of the training samples and sorting them;
computing the sum t1 of the feature values of all samples belonging to faces and the sum t0 of the feature values of all samples belonging to non-faces;
computing, for each sample i, the sum s1_i of the feature values of all face samples before the i-th sample and the sum s0_i of the feature values of the non-face samples before it;
computing r = min(s1_i + (t0 - s0_i), s0_i + (t1 - s1_i));
the minimum r value obtained is the threshold. With the threshold, a weak classifier is constructed using a decision tree:
h(x, f, p, θ) = 1 if p×f(x) < p×θ, and 0 otherwise
where x is a sub-image window, f is a feature, p controls the direction of the inequality so that the inequality is always "<", and θ is the threshold;
increasing the weights of misclassified samples, discarding correctly classified samples, and adding new samples with weight 1/N, where N is the total number of samples, then performing a new round of weak-classifier training;
after T rounds, T weak classifiers have been trained; the T weak classifiers are weighted by their classification error rates and summed to form a strong classifier:
H(x) = 1 if Σ_{t=1..T} α_t×h_t(x) ≥ (1/2) Σ_{t=1..T} α_t, and 0 otherwise
α_t = log(1/β_t)
where α_t is the weight of the t-th weak classifier and β_t is the error rate of the t-th weak classifier;
finally, after the eye positions are located, the eyes are masked and hidden.
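The threshold-selection rule r = min(s1_i + (t0 - s0_i), s0_i + (t1 - s1_i)) can be sketched as follows; this is an illustration rather than the patent's code, it follows the classical Viola-Jones convention of accumulating sample weights, and the toy data at the end is made up:

```python
import numpy as np

def train_stump(values, labels, weights):
    """Pick the threshold of one feature by the sorted partial-sum rule above.
    labels: 1 = face, 0 = non-face."""
    order = np.argsort(values)
    v, y, w = values[order], labels[order], weights[order]
    t1 = np.sum(w[y == 1])           # total for face samples
    t0 = np.sum(w[y == 0])           # total for non-face samples
    s1 = np.cumsum(w * (y == 1))     # face total up to sample i
    s0 = np.cumsum(w * (y == 0))     # non-face total up to sample i
    err_above = s1 + (t0 - s0)       # error if "face" is predicted above theta
    err_below = s0 + (t1 - s1)       # error if "face" is predicted below theta
    r = np.minimum(err_above, err_below)
    i = int(np.argmin(r))
    p = -1 if err_above[i] < err_below[i] else 1   # polarity of "<" in h(x,f,p,theta)
    return float(v[i]), p, float(r[i])             # threshold, polarity, weighted error

# toy usage with uniform weights
vals = np.array([0.2, 0.5, 0.9, 1.4, 1.7])
labs = np.array([0, 0, 1, 1, 1])
print(train_stump(vals, labs, np.full(5, 0.2)))    # -> (0.5, -1, 0.0)
```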
Preferably, S4 includes:
first, extracting several pixel values of the hair color in the original image, storing them in a text file, and averaging them to obtain the mean RGB value of the hair pixels; then importing the mean value as the source hair-color value of the transformation stage, and converting the image from the RGB model to the YCrCb model:
Y = 0.257×R + 0.504×G + 0.098×B + 16
Cb = -0.148×R - 0.291×G + 0.439×B + 128
Cr = 0.439×R - 0.368×G - 0.071×B + 128;
after the model conversion, transforming the hair color of the image: the color transformation stage uses flood filling, setting upper and lower bounds within which hair-color pixels are treated as connected, and filling the pixels within those bounds with a different color to obtain the transformed image; the image is then converted from the YCrCb model back to the RGB model to obtain the result image after the hair color change.
Preferably, S5 includes:
taking the image after the color change, making the white background left by background deletion transparent, and then compositing it over the original background to obtain the final image with the background restored.
According to a second aspect of the present invention, there is provided a hair color transformation system based on image color segmentation, comprising:
an image acquisition module, which acquires an image and preprocesses it;
a background processing module, which applies background filtering to the preprocessed image;
a person recognition module, which performs face recognition and human eye recognition on the person in the background-filtered image, segments the hair region, and masks the eye regions;
a hair color transformation module, which samples a plurality of hair-color pixels from the original image, averages them to replace the hair-color value, and applies the hair color transformation to the eye-masked image based on the averaged value;
and a background restoration module, which, after the hair color transformation is complete, restores the image background to the original background to obtain the complete hair-color-transformed image.
According to a third aspect of the present invention, there is provided a terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor can perform any of the methods described above.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a simple and convenient hair color transformation method, system and terminal based on image color segmentation. Background filtering and related measures eliminate interference from background factors, and preprocessing the picture improves image quality and the effect of the color change. The method can also perform hair color transformation on real-time video captured by a camera, giving good real-time and on-site performance. The algorithm runs fast, is accurate and robust, and detects different hairstyles well.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the hair color transformation method;
FIG. 2 is a flow chart of the hair-color extraction and transformation process;
FIG. 3 is a flow chart of background restoration;
FIG. 4 shows picture beautification effects, where (a) is an under-lit result, (b) a normally beautified result, and (c) an over-exposed result;
FIG. 5 illustrates background filtering, where (a) is the original picture, (b) the foreground segmentation result, and (c) the background filtering result;
FIG. 6 illustrates background restoration, where (a) is the original picture, (b) the foreground segmentation result, (c) the background filtering result, and (d) the background restoration result;
FIG. 7 illustrates face recognition and eye recognition;
FIG. 8 compares images before and after the hair color transformation, where (a) is the original picture and (b) the transformed picture;
FIG. 9 shows a complete hair color transformation.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any way. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Fig. 1 is a flowchart of the hair color transformation method based on image color segmentation according to an embodiment of the present invention. First, a new image is acquired and preprocessed; changing factors such as its brightness and contrast improves the image and thus the effect of the hair color transformation. Second, the picture is background-filtered, turning a multicolored background white so that stray background colors do not affect the transformation. Third, based on human eye recognition and face recognition, the black of the person's eyes is masked, so that black eye pixels are not confused with black hair pixels. Fourth, hair pixels are extracted from the processed picture, the pixel values of several points are averaged to obtain the hair pixel value, and the hair color is transformed. Finally, the background is restored to obtain the complete picture after the hair color transformation.
The steps of the hair color transformation method based on image color segmentation provided by the embodiment of the present invention are detailed below with reference to the accompanying drawings:
step 1: collecting images (including pictures and videos), preprocessing the images, beautifying the image effect, improving the brightness of the images and the like, as shown in (a) - (c) of fig. 4.
First, a stored image is loaded, or a picture or video captured in real time is obtained from a camera; the loaded image is then preprocessed, adjusting its brightness, contrast, sharpening and saturation so as to optimize the image and enhance its colors.
Image contrast and brightness are generally adjusted with the following formula:
g(x,y) = a × f(x,y) + b
where f(x,y) is the value of channel c of the pixel in row x, column y of the source image; g(x,y) is the value of channel c of the pixel in row x, column y of the target image; the parameter a (a > 0, generally between 0.0 and 3.0) is the gain factor; the parameter b, commonly called the bias, adjusts the brightness.
This formula yields the processed pixel value; to keep the value within range, it is clamped to 0-255 (a and b can be tuned by observing the result). In addition, contrast and brightness slider bars above the image allow the effect to be adjusted in real time. The optimized result is shown in FIG. 4.
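As a minimal sketch of this preprocessing step (not the patent's own code; the file names and the values a = 1.3, b = 20 are illustrative assumptions), the formula with the 0-255 clamp could be written with OpenCV and NumPy as:

```python
import cv2
import numpy as np

def adjust_contrast_brightness(src, a=1.3, b=20.0):
    """Apply g(x,y) = a * f(x,y) + b per channel, clamped to 0-255."""
    g = a * src.astype(np.float64) + b      # float avoids uint8 wrap-around
    return np.clip(g, 0, 255).astype(np.uint8)

img = cv2.imread("portrait.jpg")            # assumed input picture
out = adjust_contrast_brightness(img, a=1.3, b=20.0)
cv2.imwrite("portrait_pre.jpg", out)
```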
Step 2: after the preprocessed image is obtained, apply background filtering to it.
To remove background interference from the hair color transformation, the image (picture or video) is background-filtered before the transformation. Gaussian-model background removal is a common approach; the criterion for separating foreground from background is the rate of change of pixel values, so slowly changing pixels are learned as background and quickly changing pixels as foreground. The Gaussian mixture model uses K (typically 3 to 5) Gaussian components to represent the characteristics of each pixel in the image; the mixture model is updated after each new frame, and each pixel of the current image is matched against it. If the match succeeds, the pixel is judged a background point; otherwise, a foreground point.
The single-Gaussian background model assumes that, for a background in an image, the brightness distribution of each pixel follows a Gaussian distribution, i.e. for a background W in the image, the brightness of each pixel (x, y) satisfies W(x,y) ~ N(u, d), with density
p(x) = (1/√(2πd)) × exp(-(x - u)²/(2d))
Thus each pixel attribute of the Gaussian background model includes two parameters: the mean u and the variance d.
For a given image Q, when
|Q(x,y) - W(x,y)| < T
where T is a constant threshold, the pixel (x, y) is a background point; otherwise it is a foreground point.
The background is then updated with each frame:
W_t(x,y) = p × W_{t-1}(x,y) + (1 - p) × Q_t(x,y)
where W_t(x,y) is the background pixel parameter at time t; p is a constant in the range 0-1 reflecting the background update rate (the larger p, the slower the background updates); Q_t(x,y) is the pixel parameter of the image at time t.
After the background is obtained, the image loop is entered. The person marked in the static image is treated as foreground, and the background is then deleted to obtain the desired foreground of the image, see (a) to (c) in FIG. 5.
Step 3: perform face recognition and eye recognition on the person in the image, segment the hair region, and mask the eye regions.
referring to fig. 7, a face recognition and eye recognition method is employed to detect a face region in an image based on an algorithm of a cascade enhanced classifier and train an eye detector to position eyes within the face region;
the classifier operates specifically as follows:
(1) calculating the characteristic values of the training samples for each classifier, and sequencing the characteristic values;
(2) calculating the sum of the eigenvalues of all samples belonging to a face t1 and the sum of the eigenvalues of all samples belonging to a non-face t 0;
(3) s1 calculating the sum of the feature values of all samples belonging to the face before the ith sampleiAnd the sum of the eigenvalues of the samples belonging to non-human faces s0i
(4) Calculating r ═ min ((s1+ (t 0)i-s0i)),(s0+(t1i-s1i)));
The minimum r value obtained by calculation is the threshold value. With the threshold, we construct a simple weak classifier with a decision tree as follows:
Figure BDA0002022478050000081
wherein, x is a sub-image window, f is a feature, p is used for controlling the directions of unequal signs, so that the unequal signs are all < ", and theta is a threshold value;
(5) increasing the weight of the misclassified samples, abandoning the correctly classified samples, and adding new samples, wherein the weight of the new samples is 1/N, and N is the total number of the samples to carry out a new round of training of the weak classifier;
(6) training T weak classifiers after T rounds; the T weak classifiers are weighted and summed according to their classification error rates to form a strong classifier, as follows:
Figure BDA0002022478050000082
Figure BDA0002022478050000083
α thereintIs the weight of the t-th weak classifier, βtIs the error rate of the t-th weak classifier;
after the positions of the eyes of the people are positioned, the eyes are shielded and hidden, namely the eyes are shielded, so that the black parts of the eyes and eyebrows are prevented from interfering the color change of the hair. The face recognition and the eye recognition and hiding greatly help the hair color positioning.
Step 4: extract the hair-color pixels from the original image, take a plurality of pixel points, average them to replace the hair-color value, and transform the hair color of the image based on that value.
For the hair color transformation, the method adopts a color transformation based on pixel values: the pixel values of the hair color in the picture are extracted, the average RGB value of the hair pixels is obtained, and the hair color is transformed based on that RGB value, as shown in FIG. 2.
First, the pixel values of the hair color in the picture are extracted and stored in a text file, and the points are averaged to obtain the mean RGB value of the hair pixels; during extraction, not individual points but several pixel points on the hair are sampled. The mean value is then imported as the source hair-color value of the program's transformation stage, and the image is converted from the RGB model to the YCrCb model (in the YCrCb color space, Y stands for luminance and Cr and Cb for chrominance):
Y = 0.257×R + 0.504×G + 0.098×B + 16
Cr = 0.439×R - 0.368×G - 0.071×B + 128
Cb = -0.148×R - 0.291×G + 0.439×B + 128
After the model conversion, the hair color of the image is transformed. The color transformation stage uses flood filling: upper and lower bounds are set within which hair-color pixels are treated as connected; taking black as an example, bounds around the black pixels are set, and the pixels within them are filled with a different color (such as red) to change the hair color. After the transformed image is obtained, it is converted back to the RGB model, giving the image with the transformed color.
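A sketch of this step (an illustration, not the patent's code: the sampled hair coordinates, the target color and the flood-fill tolerances are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("portrait_eyes_masked.jpg")   # assumed eye-masked input

# sample several pixel points on the hair and average them
hair_points = [(40, 60), (55, 62), (70, 58), (85, 65)]   # (x, y), assumed
mean_bgr = np.mean([img[y, x] for (x, y) in hair_points], axis=0)
# the sampled point closest to the mean serves as the flood-fill seed
seed = min(hair_points,
           key=lambda p: np.linalg.norm(img[p[1], p[0]] - mean_bgr))

ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
new_color = (90, 180, 120)                     # illustrative target in (Y, Cr, Cb)
# fill connected pixels whose YCrCb values lie within the lower/upper bounds
cv2.floodFill(ycrcb, mask, seed, new_color,
              loDiff=(20, 10, 10), upDiff=(20, 10, 10),
              flags=4 | cv2.FLOODFILL_FIXED_RANGE)

result = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("portrait_recolored.jpg", result)
```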
Step 5: apply background restoration to the hair-color-transformed image to obtain the final complete result of the transformation. Background restoration is shown in (a) to (d) of FIG. 6.
The hair-color-transformed image, as shown in (a) and (b) of FIG. 8, has been background-filtered, so its background is plain white; the white background is made transparent and the image is then composited over the original background, giving the final image with the restored background, as shown in FIG. 9.
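A minimal sketch of this compositing step (the near-white threshold of 240 and the file names are assumptions; the two images are assumed to have the same size):

```python
import cv2
import numpy as np

fg = cv2.imread("portrait_recolored.jpg")      # transformed, white background
bg = cv2.imread("portrait.jpg")                # original picture with background

# treat near-white pixels as the transparent background left by filtering
white = np.all(fg > 240, axis=2)
restored = np.where(white[..., None], bg, fg)  # composite over the original
cv2.imwrite("portrait_final.jpg", restored)
```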
An embodiment of the present invention also provides a hair color transformation system based on image color segmentation, comprising:
an image acquisition module, which acquires an image and preprocesses it;
a background processing module, which applies background filtering to the preprocessed image;
a person recognition module, which performs face recognition and human eye recognition on the person in the background-filtered image, segments the hair region, and masks the eye regions;
a hair color transformation module, which samples a plurality of hair-color pixels from the original image, averages them to replace the hair-color value, and applies the hair color transformation to the eye-masked image based on the averaged value;
and a background restoration module, which restores the white background of the transformed image to the original background to obtain the complete hair-color-transformed image.
Based on the hair color transformation method and system based on image color segmentation provided by the above embodiments, an embodiment of the present invention further provides a terminal, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor can perform any of the methods provided by the above embodiments of the present invention.
The hair color transformation method, system and terminal based on image color segmentation provided by the embodiments of the present invention convert the hair into the required color, starting from the person's original hair color (such as black), on the basis of beautifying and color-adjusting the picture. First, the acquired image or video is input, the contrast and brightness of the original image are adjusted, and background filtering reduces interference from background pixels with the same properties as hair; then face recognition and eye recognition further identify the hair region and eliminate the influence of facial parts such as eyes and eyebrows; finally, hair pixels are extracted from the segmented hair region and their color is transformed. The technique is simple to operate, effectively eliminates interference, and realizes the hair color transformation quickly and accurately.
It should be noted that the steps of the method provided by the present invention can be implemented with the corresponding modules of the system; those skilled in the art can implement the flow of the method by referring to the technical solution of the system, i.e. the embodiments of the system can be understood as preferred examples of implementing the method, which are not repeated here.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (9)

1. A hair color transformation method based on image color segmentation, characterized by comprising the following steps:
S1, acquiring an image and preprocessing it;
S2, applying background filtering to the preprocessed image obtained in S1;
S3, performing face recognition and eye recognition on the person in the image obtained in S2, segmenting the hair region, and masking the eye regions;
S4, sampling a plurality of hair-color pixels from the original image, averaging them to replace the hair-color value, and applying the hair color transformation to the eye-masked image obtained in S3 based on the averaged value;
S5, applying background restoration to the color-transformed image obtained in S4;
wherein S4 comprises:
first, extracting several pixel values of the hair color in the original image, storing them in a text file, and averaging them to obtain the mean RGB value of the hair pixels; then importing the mean value as the source hair-color value of the transformation stage, and converting the image from the RGB model to the YCrCb model:
Y = 0.257×R + 0.504×G + 0.098×B + 16
Cb = -0.148×R - 0.291×G + 0.439×B + 128
Cr = 0.439×R - 0.368×G - 0.071×B + 128;
after the model conversion, transforming the hair color of the image: the color transformation stage uses flood filling, setting upper and lower bounds within which hair-color pixels are treated as connected, and filling the pixels within those bounds with a different color to obtain the transformed image; the image is then converted from the YCrCb model back to the RGB model to obtain the result image after the hair color change.
2. The hair color transformation method based on image color segmentation according to claim 1, wherein S1 comprises:
adjusting the contrast and brightness of the picture with the following formula:
g(x,y) = a × f(x,y) + b
where f(x,y) is the value of channel c of the pixel in row x, column y of the source image; g(x,y) is the value of channel c of the pixel in row x, column y of the target image; the parameter a is the gain factor, a > 0; the parameter b is a bias that adjusts the brightness.
3. The hair color transformation method based on image color segmentation according to claim 2, wherein 0.0 ≤ a ≤ 3.0.
4. The hair color transformation method based on image color segmentation according to claim 1, wherein S2 comprises:
separating the foreground and background of the image based on the rate of change of pixel values: a Gaussian mixture model represents the characteristics of each pixel in the image; after each new frame is obtained the mixture model is updated, and each pixel of the current image is matched against it; if the match succeeds the pixel is judged a background point, otherwise a foreground point;
the Gaussian mixture model contains K Gaussian components; for a component representing the background, it is assumed that the brightness of each pixel of a background W in the image follows a Gaussian distribution, i.e. the brightness of each pixel (x, y) of W satisfies W(x,y) ~ N(u, d), with density
p(x) = (1/√(2πd)) × exp(-(x - u)²/(2d))
where p(x) is the probability density function of a one-dimensional Gaussian distribution, and u and d are the mean and variance parameters stored in each pixel attribute of the background Gaussian component;
for a given image Q, when
|Q(x,y) - W(x,y)| < T
where T is a constant threshold, the pixel (x, y) is a background point; otherwise it is a foreground point;
the background is then updated with each frame:
W_t(x,y) = p × W_{t-1}(x,y) + (1 - p) × Q_t(x,y)
where W_t(x,y) is the background pixel parameter at time t; p is a constant in the range 0-1 reflecting the background update rate, the larger p, the slower the background update; Q_t(x,y) is the pixel parameter of the image at time t;
after the background is obtained, the image loop is entered; the person marked in the static image is treated as foreground, and the background is deleted to obtain the foreground of the image.
5. The hair color transformation method based on image color segmentation according to claim 4, wherein the value of K is 3 to 5.
6. The hair color transformation method based on image color segmentation according to claim 1, wherein in S3, face recognition and human eye recognition are used: a cascade of boosted classifiers detects the face region in the image, and a trained eye detector locates the eyes within the face region, comprising the following steps:
computing, for each classifier, the feature values of the training samples and sorting them;
computing the sum t1 of the feature values of all samples belonging to faces and the sum t0 of the feature values of all samples belonging to non-faces;
computing, for each sample i, the sum s1_i of the feature values of all face samples before the i-th sample and the sum s0_i of the feature values of the non-face samples before it;
computing r = min(s1_i + (t0 - s0_i), s0_i + (t1 - s1_i));
the minimum r value obtained is the threshold; based on the threshold, a weak classifier is formed with a decision tree:
h(x, f, p, θ) = 1 if p×f(x) < p×θ, and 0 otherwise
where x is a sub-image window, f is a feature, p controls the direction of the inequality so that the inequality is always "<", and θ is the threshold;
increasing the weights of misclassified samples, discarding correctly classified samples, and adding new samples with weight 1/N, where N is the total number of samples, then performing a new round of weak-classifier training;
after T rounds, T weak classifiers have been trained; the T weak classifiers are weighted by their respective classification error rates and summed to form a strong classifier:
H(x) = 1 if Σ_{t=1..T} α_t×h_t(x) ≥ (1/2) Σ_{t=1..T} α_t, and 0 otherwise
α_t = log(1/β_t)
where α_t is the weight of the t-th weak classifier and β_t is the error rate of the t-th weak classifier;
finally, after the eye positions are located, the eyes are masked and hidden.
7. The hair color transformation method based on image color segmentation according to claim 1, wherein S5 comprises:
taking the image after the color change, making the white background left by background deletion transparent, and then compositing it over the original background to obtain the final image with the background restored.
8. A hair color transformation system based on image color segmentation, characterized by comprising:
an image acquisition module, which acquires an image and preprocesses it;
a background processing module, which applies background filtering to the preprocessed image;
a person recognition module, which performs face recognition and human eye recognition on the person in the background-filtered image, segments the hair region, and masks the eye regions;
a hair color transformation module, which samples a plurality of hair-color pixels from the original image, averages them to replace the hair-color value, and applies the hair color transformation to the eye-masked image based on the averaged value;
and a background restoration module, which, after the hair color transformation is complete, restores the image background to the original background to obtain the complete hair-color-transformed image;
wherein the hair color transformation module:
first extracts several pixel values of the hair color in the original image, stores them in a text file, and averages them to obtain the mean RGB value of the hair pixels; then imports the mean value as the source hair-color value of the transformation stage, and converts the image from the RGB model to the YCrCb model:
Y = 0.257×R + 0.504×G + 0.098×B + 16
Cb = -0.148×R - 0.291×G + 0.439×B + 128
Cr = 0.439×R - 0.368×G - 0.071×B + 128;
after the model conversion, the hair color of the image is transformed: the color transformation stage uses flood filling, setting upper and lower bounds within which hair-color pixels are treated as connected, and filling the pixels within those bounds with a different color to obtain the transformed image; the image is then converted from the YCrCb model back to the RGB model to obtain the result image after the hair color change.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when executed by the processor, is operable to perform the method of any one of claims 1 to 7.
CN201910283493.7A 2019-04-10 2019-04-10 Hair color transformation method, system and terminal based on image color segmentation Active CN110009708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910283493.7A CN110009708B (en) 2019-04-10 2019-04-10 Color development transformation method, system and terminal based on image color segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910283493.7A CN110009708B (en) 2019-04-10 2019-04-10 Color development transformation method, system and terminal based on image color segmentation

Publications (2)

Publication Number Publication Date
CN110009708A CN110009708A (en) 2019-07-12
CN110009708B true CN110009708B (en) 2020-08-28

Family

ID=67170664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910283493.7A Active CN110009708B (en) 2019-04-10 2019-04-10 Color development transformation method, system and terminal based on image color segmentation

Country Status (1)

Country Link
CN (1) CN110009708B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465882A (en) * 2020-11-17 2021-03-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114240788B (en) * 2021-12-21 2023-09-08 西南石油大学 Complex scene-oriented robustness and adaptive background restoration method
CN115587930B (en) * 2022-12-12 2023-04-18 成都索贝数码科技股份有限公司 Image color style migration method, device and medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003024136A (en) * 2001-07-17 2003-01-28 Shiseido Co Ltd Beauty treatment based on balance among skin color, hair color, and make-up color
US6775411B2 (en) * 2002-10-18 2004-08-10 Alan D. Sloan Apparatus and method for image recognition
JP2011221812A (en) * 2010-04-09 2011-11-04 Sony Corp Information processing device, method and program
CN102103690A (en) * 2011-03-09 2011-06-22 南京邮电大学 Method for automatically portioning hair area
CN106203399B (en) * 2016-07-27 2019-06-04 厦门美图之家科技有限公司 A kind of image processing method, device and calculate equipment
CN106780303A (en) * 2016-12-02 2017-05-31 上海大学 A kind of image split-joint method based on local registration
CN107256555B (en) * 2017-05-25 2021-11-02 腾讯科技(上海)有限公司 Image processing method, device and storage medium
JP2019028731A (en) * 2017-07-31 2019-02-21 富士ゼロックス株式会社 Information processing device and program
CN212846789U (en) * 2017-08-01 2021-03-30 苹果公司 Electronic device
CN108305146A (en) * 2018-01-30 2018-07-20 杨太立 A kind of hair style recommendation method and system based on image recognition
CN108268859A (en) * 2018-02-08 2018-07-10 南京邮电大学 A kind of facial expression recognizing method based on deep learning
CN108830874A (en) * 2018-04-19 2018-11-16 麦克奥迪(厦门)医疗诊断系统有限公司 A kind of number pathology full slice Image blank region automatic division method

Also Published As

Publication number Publication date
CN110009708A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
Rahmad et al. Comparison of Viola-Jones Haar Cascade classifier and histogram of oriented gradients (HOG) for face detection
US8831379B2 (en) Cartoon personalization
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
EP2685419B1 (en) Image processing device, image processing method, and computer-readable medium
CN108268859A (en) A kind of facial expression recognizing method based on deep learning
CN110009708B (en) Color development transformation method, system and terminal based on image color segmentation
CN111667400B (en) Human face contour feature stylization generation method based on unsupervised learning
CN111310718A (en) High-accuracy detection and comparison method for face-shielding image
CN1750017A (en) Red eye moving method based on human face detection
KR20090111939A (en) Method and apparatus for separating foreground and background from image, Method and apparatus for substituting separated background
CN111489330B (en) Weak and small target detection method based on multi-source information fusion
Obrador et al. Towards category-based aesthetic models of photographs
CN105046202A (en) Adaptive face identification illumination processing method
Liu et al. Single image haze removal via depth-based contrast stretching transform
Yusuf et al. Human face detection using skin color segmentation and watershed algorithm
Arsic et al. Improved lip detection algorithm based on region segmentation and edge detection
CN117437691A (en) Real-time multi-person abnormal behavior identification method and system based on lightweight network
CN104966271B (en) Image de-noising method based on biological vision receptive field mechanism
Marius et al. Face detection using color thresholding and eigenimage template matching
Borah et al. ANN based human facial expression recognition in color images
Wu et al. Real-time 2D hands detection and tracking for sign language recognition
Prinosil et al. Automatic hair color de-identification
Bhandari et al. Image aesthetic assessment using deep learning for automated classification of images into appealing or not-appealing
Rahman et al. An automatic face detection and gender identification from color images using logistic regression
Yuan et al. Full convolutional color constancy with adding pooling

Legal Events

Date Code Title Description
PB01 Publication
CB03 Change of inventor or designer information
Inventor after: Ma Ran; Liu Yun; An Ping; Yang Mengya; You Zhixiang
Inventor before: An Ping; Liu Yun; Ma Ran; Yang Mengya; You Zhixiang
SE01 Entry into force of request for substantive examination
GR01 Patent grant