CN106375316B - Video image processing method and device - Google Patents

Video image processing method and device

Info

Publication number
CN106375316B
CN106375316B (application CN201610798032.XA)
Authority
CN
China
Prior art keywords
image
skin
colour
video image
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610798032.XA
Other languages
Chinese (zh)
Other versions
CN106375316A (en)
Inventor
杜凌霄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Netstar Information Technology Co., Ltd.
Original Assignee
All Kinds Of Fruits Garden Guangzhou Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by All Kinds Of Fruits Garden Guangzhou Network Technology Co Ltd
Priority to CN201610798032.XA priority Critical patent/CN106375316B/en
Publication of CN106375316A publication Critical patent/CN106375316A/en
Application granted granted Critical
Publication of CN106375316B publication Critical patent/CN106375316B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • G06T5/70
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80: Responding to QoS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Abstract

Embodiments of the invention disclose a video image processing method and device. The method includes: acquiring a video image, the video image belonging to a luminance-chrominance (YUV) video stream; determining the blue chrominance component U and the red chrominance component V of the video image, and determining the skin tone of faces in the video image from U and V; performing skin smoothing on the video image according to its luminance component Y and the skin tone; performing hue adjustment on the smoothed image using the skin tone; and outputting the hue-adjusted image. Exploiting the temporal continuity of video, the method optimizes faces in the video image and can also adjust the quality of the background, thereby improving video image quality.

Description

Video image processing method and device
Technical field
The present invention relates to the field of computer technology, and in particular to a video image processing method and device.
Background technique
During a real-time video call or live video broadcast, video images must be captured in real time to form a video stream, which after capture is sent over the network to the playback end. In this process the video images may contain various flaws, mainly of the following three kinds:
One, flaws of the face itself, such as dark spots, which detract from a person's overall appearance.
Two, the person's skin may not look healthy.
Three, the effect of environmental factors such as lighting: when the ambient light is too dark, the whole image is dim and the face cannot be seen clearly.
Given these problems affecting video image quality, it is desirable to optimize the video image: for example, remove facial flaws and beautify the skin tone so the person looks more natural, and brighten dark video images to raise overall quality so the video looks clearer and more natural.
Current video image processing, especially in the real-time communication field, still suffers from blurred backgrounds, coarse-looking faces, and poor quality under bad lighting.
Summary of the invention
Embodiments of the invention provide a video image processing method and device, applied to the optimization of video images, in particular in the real-time communications field, thereby improving video image quality.
In a first aspect, an embodiment of the invention provides a video image processing method, comprising:
acquiring a video image, the video image belonging to a luminance-chrominance (YUV) video stream;
determining the blue chrominance component U and the red chrominance component V of the video image, and determining the skin tone of faces in the video image from U and V;
performing skin smoothing on the video image according to its luminance component Y and the skin tone, performing hue adjustment on the smoothed image using the skin tone, and outputting the hue-adjusted image.
In an optional implementation, determining the skin tone of faces in the video image from U and V comprises:
determining a first skin-tone parameter and a second skin-tone parameter of faces in the video image from U and V; the first skin-tone parameter is used for skin smoothing, and the second skin-tone parameter is used for hue adjustment.
In an optional implementation, the skin smoothing according to the luminance component Y and the skin tone, the hue adjustment of the smoothed image using the skin tone, and the output of the hue-adjusted image comprise:
edge-preserving filtering the luminance component Y of the video image to obtain a first image; superimposing the first image and the video image according to the first skin-tone parameter to obtain a second image; performing hue adjustment on the second image to obtain a third image; mixing the third image and the second image according to the second skin-tone parameter to obtain a fourth image; and outputting the fourth image.
In an optional implementation, determining the blue chrominance component U and the red chrominance component V of the video image and determining the skin tone of faces from U and V comprises:
setting the value ranges of U and V for the skin-tone region as u ∈ [u_min, u_max], v ∈ [v_min, v_max], where u_min and u_max are the minimum and maximum of U, and v_min and v_max are the minimum and maximum of V. The skin-tone detection result for a pixel (u, v) of the video image is:
V(u,v) = P_u · P_v
v_stepleft = v_min / scale;
v_stepright = (255 − v_max) / scale;
u_stepleft = u_min / scale;
u_stepright = (255 − u_max) / scale;
where scale is an order quantization parameter, and V(u,v) is the first skin-tone parameter.
The (u, v) value of each pixel of the video image is evaluated by the following formula:
where μ_u and μ_v are the centres of the skin tone in u and v, σ_u and σ_v are the variances of the u and v components of the skin tone, and ρ is the cross-correlation coefficient of the u and v of the skin tone. The Mahalanobis distance from pixel (u, v) to the centre of the ellipse is computed, and the second skin-tone parameter of pixel (u, v) is:
In an optional implementation, the edge-preserving filtering, superposition, hue adjustment, mixing, and output described above comprise:
denoting the lookup table corresponding to the luminance component Y as:
applying edge-preserving filtering to the luminance component Y of the video image to obtain Y_filtered; then obtaining a difference image Y_dif = Y_filtered − Y + 128; then obtaining the smoothed image by linear-light superposition, Y_smooth = Y + 2·Y_dif − 256.
The smoothed image Y_smooth and the luminance component Y of the video image are mixed according to the first skin-tone parameter and the lookup table for Y as Y_res = (1 − a)·Y + a·Y_smooth, where a = V(u,v)·Y_i.
The image formed by Y_res together with U and V is converted to RGB format; each of the three components R, G, and B is curve-stretched; the stretched image is then converted back to YUV format to obtain the hue-adjusted image I_toneAdjusted. The hue-adjusted image and the second image are linearly mixed by the following formula to obtain the fourth image:
where b = P(u,v)/255; in I_toneAdjusted, the luminance, red chrominance, and blue chrominance components are Y_toneAdjusted, U_toneAdjusted, and V_toneAdjusted respectively.
The fourth image is output.
In a second aspect, an embodiment of the invention further provides a video image processing device, comprising:
an image acquisition unit, configured to acquire a video image, the video image belonging to a luminance-chrominance (YUV) video stream;
a skin-tone determination unit, configured to determine the blue chrominance component U and the red chrominance component V of the video image, and to determine the skin tone of faces in the video image from U and V;
an image adjustment unit, configured to perform skin smoothing on the video image according to its luminance component Y and the skin tone, and to perform hue adjustment on the smoothed image using the skin tone;
an image output unit, configured to output the hue-adjusted image.
In an optional implementation, the skin-tone determination unit is configured to determine a first skin-tone parameter and a second skin-tone parameter of faces in the video image from U and V; the first skin-tone parameter is used for skin smoothing, and the second skin-tone parameter is used for hue adjustment.
In an optional implementation, the image adjustment unit is configured to apply edge-preserving filtering to the luminance component Y of the video image to obtain a first image, superimpose the first image and the video image according to the first skin-tone parameter to obtain a second image, perform hue adjustment on the second image to obtain a third image, and mix the third image and the second image according to the second skin-tone parameter to obtain a fourth image;
the image output unit is configured to output the fourth image.
In an optional implementation, the skin-tone determination unit is configured to set the value ranges of U and V for the skin-tone region as u ∈ [u_min, u_max], v ∈ [v_min, v_max], where u_min and u_max are the minimum and maximum of U, and v_min and v_max are the minimum and maximum of V. The skin-tone detection result for a pixel (u, v) of the video image is:
V(u,v) = P_u · P_v
v_stepleft = v_min / scale;
v_stepright = (255 − v_max) / scale;
u_stepleft = u_min / scale;
u_stepright = (255 − u_max) / scale;
where scale is an order quantization parameter, and V(u,v) is the first skin-tone parameter.
The (u, v) value of each pixel of the video image is evaluated by the following formula:
where μ_u and μ_v are the centres of the skin tone in u and v, σ_u and σ_v are the variances of the u and v components of the skin tone, and ρ is the cross-correlation coefficient of the u and v of the skin tone. The Mahalanobis distance from pixel (u, v) to the centre of the ellipse is computed, and the second skin-tone parameter of pixel (u, v) is:
In an optional implementation, the image adjustment unit is configured to denote the lookup table corresponding to the luminance component Y as:
apply edge-preserving filtering to the luminance component Y of the video image to obtain Y_filtered; then obtain a difference image Y_dif = Y_filtered − Y + 128; then obtain the smoothed image by linear-light superposition, Y_smooth = Y + 2·Y_dif − 256.
The smoothed image Y_smooth and the luminance component Y of the video image are mixed according to the first skin-tone parameter and the lookup table for Y as Y_res = (1 − a)·Y + a·Y_smooth, where a = V(u,v)·Y_i.
The image formed by Y_res together with U and V is converted to RGB format; each of the three components R, G, and B is curve-stretched; the stretched image is then converted back to YUV format to obtain the hue-adjusted image I_toneAdjusted. The hue-adjusted image and the second image are linearly mixed by the following formula to obtain the fourth image:
where b = P(u,v)/255; in I_toneAdjusted, the luminance, red chrominance, and blue chrominance components are Y_toneAdjusted, U_toneAdjusted, and V_toneAdjusted respectively.
In a third aspect, an embodiment of the invention further provides an electronic device, comprising an input/output device, a memory, and a processor, where the memory stores an executable program, and the processor executes the executable program to implement the method flows of the embodiments of the invention.
As can be seen from the above technical solutions, the embodiments of the invention have the following advantage: exploiting the temporal continuity of video, faces in the video image are optimized and the quality of the background can also be adjusted, thereby improving video image quality.
Detailed description of the invention
To describe the technical solutions of the embodiments of the invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method according to an embodiment of the invention;
Fig. 2 is a flowchart of a method according to an embodiment of the invention;
Fig. 3 is a schematic structural diagram of a device according to an embodiment of the invention;
Fig. 4 is a schematic structural diagram of a device according to an embodiment of the invention;
Fig. 5 is a schematic structural diagram of a server according to an embodiment of the invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the drawings. The described embodiments are clearly only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments of the invention, fall within the scope of protection of the invention.
An embodiment of the invention provides a video image processing method, as shown in Fig. 1, comprising:
101: acquiring a video image, the video image belonging to a luminance-chrominance (YUV) video stream;
YUV is a colour encoding method: it is the colour space used by the PAL (Phase Alternating Line) and SECAM (Séquentiel Couleur à Mémoire) analogue colour television systems. In a modern colour television system, a three-tube colour camera or a colour CCD (charge-coupled device) camera captures the scene; the captured colour image signal is colour-separated and separately amplified and corrected to obtain RGB (red, green, blue), from which a matrix conversion circuit derives the luminance signal Y and two colour-difference signals, B−Y (i.e. U) and R−Y (i.e. V). Finally, the transmitter encodes the luminance and the two colour-difference components separately and sends them on the same channel. This representation of colour is the so-called YUV colour space. The key property of the YUV colour space is that the luminance signal Y and the chrominance signals U, V are separated.
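The RGB/YUV relationship described above can be sketched in a few lines. The patent itself does not specify conversion coefficients; the sketch below uses the common BT.601 full-range (JPEG-style) coefficients purely for illustration, with U and V biased by +128 as in the 8-bit convention used throughout the patent.

```python
def rgb_to_yuv(r, g, b):
    """Map 8-bit R, G, B to (Y, U, V); U and V are biased by +128.
    BT.601 full-range coefficients, assumed here for illustration."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse mapping; in practice results are clamped to [0, 255]."""
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344 * (u - 128.0) - 0.714 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    return r, g, b
```

A neutral grey maps to U = V = 128, which is why the difference image in step S40 below is centred on 128.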
102: determining the blue chrominance component U and the red chrominance component V of the video image, and determining the skin tone of faces in the video image from U and V;
The skin tone is a parameter describing the depth of the facial skin colour, and by purpose it falls into two classes: a skin-tone parameter used to smooth the facial image, and a skin-tone parameter used to adjust the tone of the whole video image.
103: performing skin smoothing on the video image according to its luminance component Y and the skin tone, performing hue adjustment on the smoothed image using the skin tone, and outputting the hue-adjusted image.
This embodiment exploits the temporal continuity of video to optimize faces in the video image, and can also adjust the quality of the background, thereby improving video image quality.
Building on the preceding embodiment, determining the skin tone of faces in the video image from U and V comprises:
determining a first skin-tone parameter and a second skin-tone parameter of faces in the video image from U and V; the first skin-tone parameter is used for skin smoothing, and the second skin-tone parameter is used for hue adjustment.
Further, the skin smoothing according to the luminance component Y and the skin tone, the hue adjustment of the smoothed image using the skin tone, and the output of the hue-adjusted image comprise:
edge-preserving filtering the luminance component Y of the video image to obtain a first image; superimposing the first image and the video image according to the first skin-tone parameter to obtain a second image; performing hue adjustment on the second image to obtain a third image; mixing the third image and the second image according to the second skin-tone parameter to obtain a fourth image; and outputting the fourth image.
More specifically, determining the blue chrominance component U and the red chrominance component V of the video image and determining the skin tone of faces from U and V comprises:
setting the value ranges of U and V for the skin-tone region as u ∈ [u_min, u_max], v ∈ [v_min, v_max], where u_min and u_max are the minimum and maximum of U, and v_min and v_max are the minimum and maximum of V. The skin-tone detection result for a pixel (u, v) of the video image is:
V(u,v) = P_u · P_v
v_stepleft = v_min / scale;
v_stepright = (255 − v_max) / scale;
u_stepleft = u_min / scale;
u_stepright = (255 − u_max) / scale;
where scale is an order quantization parameter, and V(u,v) is the first skin-tone parameter.
The (u, v) value of each pixel of the video image is evaluated by the following formula:
where μ_u and μ_v are the centres of the skin tone in u and v, σ_u and σ_v are the variances of the u and v components of the skin tone, and ρ is the cross-correlation coefficient of the u and v of the skin tone. The Mahalanobis distance from pixel (u, v) to the centre of the ellipse is computed, and the second skin-tone parameter of pixel (u, v) is:
More specifically, the edge-preserving filtering of the luminance component Y to obtain the first image, the superposition of the first image and the video image according to the first skin-tone parameter to obtain the second image, the hue adjustment of the second image to obtain the third image, the mixing of the third image and the second image according to the second skin-tone parameter to obtain the fourth image, and the output of the fourth image comprise:
denoting the lookup table corresponding to the luminance component Y as:
applying edge-preserving filtering to the luminance component Y of the video image to obtain Y_filtered; then obtaining a difference image Y_dif = Y_filtered − Y + 128; then obtaining the smoothed image by linear-light superposition, Y_smooth = Y + 2·Y_dif − 256.
The smoothed image Y_smooth and the luminance component Y of the video image are mixed according to the first skin-tone parameter and the lookup table for Y as Y_res = (1 − a)·Y + a·Y_smooth, where a = V(u,v)·Y_i.
The image formed by Y_res together with U and V is converted to RGB format; each of the three components R, G, and B is curve-stretched; the stretched image is then converted back to YUV format to obtain the hue-adjusted image I_toneAdjusted. The hue-adjusted image and the second image are linearly mixed by the following formula to obtain the fourth image:
where b = P(u,v)/255; in I_toneAdjusted, the luminance, red chrominance, and blue chrominance components are Y_toneAdjusted, U_toneAdjusted, and V_toneAdjusted respectively.
The fourth image is output.
The following embodiment takes live video broadcast as an example. During a live broadcast the video images propagate through the communication network in the form of a video stream. This scheme can be implemented on the server hosting the service platform, or on the terminal device of the sender or receiver of the video stream; the embodiments of the invention place no unique restriction on this. The details are as follows:
First, two kinds of skin-tone detection are performed on the UV components of the YUV video stream, yielding a skin-detection result used for smoothing and a skin-detection result used for tone adjustment.
Then, edge-preserving filtering is applied to the Y component, and the filtered image is linearly superimposed with the original image to achieve the smoothing effect; the smoothed image and the original image are then blended according to the skin-detection result for smoothing to obtain the final smoothed image.
Finally, the final smoothed image is tone-adjusted according to the skin-detection result for tone adjustment, and the beautified image is output.
The specific flow, as shown in Fig. 2, comprises:
S10: set the UV thresholds of the skin-tone region as u ∈ [u_min, u_max], v ∈ [v_min, v_max]; the thresholds are obtained in advance by experiment. This embodiment constructs a 256×256 skin-tone likelihood image, where the x direction of the image represents the u component and the y direction represents the v component. The pixel value at a point (u, v) of the image is computed by the following formula:
V(u,v) = P_u · P_v
where V(u,v) is the skin-detection result used for smoothing, and exp is the exponential function with the natural constant e as its base (the exponential curve);
v_stepleft = v_min / scale;
v_stepright = (255 − v_max) / scale;
u_stepleft = u_min / scale;
u_stepright = (255 − u_max) / scale;
scale is an order quantization index, obtained by experiment.
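The explicit expressions for P_u and P_v appear only as images in the source. A plausible reading, consistent with the step sizes defined above, is a trapezoidal membership: full likelihood inside [u_min, u_max], and a linear ramp quantized into `scale` levels outside. The function names and the exact ramp shape below are assumptions for illustration, not the patent's exact formula.

```python
def channel_likelihood(x, lo, hi, scale):
    """Hypothetical per-channel likelihood: 1.0 inside [lo, hi]; outside,
    a quantized linear ramp with `scale` levels of step lo/scale on the
    left and (255 - hi)/scale on the right."""
    if lo <= x <= hi:
        return 1.0
    if x < lo:
        step = lo / scale          # u_stepleft / v_stepleft
        return int(x / step) / scale if step > 0 else 0.0
    step = (255 - hi) / scale      # u_stepright / v_stepright
    return int((255 - x) / step) / scale if step > 0 else 0.0

def skin_likelihood(u, v, u_range, v_range, scale):
    """V(u,v) = P_u * P_v, the first skin-tone parameter (smoothing weight)."""
    pu = channel_likelihood(u, u_range[0], u_range[1], scale)
    pv = channel_likelihood(v, v_range[0], v_range[1], scale)
    return pu * pv
```

Evaluating this once per (u, v) pair fills the 256×256 likelihood image mentioned in S10, so at runtime each pixel costs only a table lookup.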
S20: for Y components from 0 to 255, we also construct a lookup table of size 256:
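The actual 256-entry table is likewise shown only as an image in the source. The stand-in below assumes a smoothstep ramp that down-weights very dark pixels, so that the later weight a = V(u,v)·Y_i applies less smoothing in shadows; the ramp endpoint is an assumed value.

```python
def build_y_table(ramp_end=80):
    """Hypothetical 256-entry brightness table Y_i: smoothstep ramp from
    0 at Y = 0 up to 1 at Y = ramp_end, then flat at 1.0."""
    table = []
    for y in range(256):
        if y >= ramp_end:
            table.append(1.0)
        else:
            t = y / ramp_end
            table.append(t * t * (3.0 - 2.0 * t))  # smoothstep on [0, 1]
    return table
```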
S30: the statistical distribution of the skin tone in the UV plane can be fitted with an ellipse:
where μ_u and μ_v are the centres of the skin tone in u and v; σ_u and σ_v are the variances of the u and v components of the skin tone; ρ is the cross-correlation coefficient of the u and v of the skin tone. These numbers can be obtained from the data of a skin-tone database.
For the input video stream, the (u, v) value of any pixel can be used in the above formula to compute its Mahalanobis distance to the ellipse centre; we express the likelihood that the point is skin with the following formula:
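The ellipse expression itself is an image in the source. Built from the μ, σ, and ρ defined above, a standard bivariate Gaussian of the Mahalanobis distance would look as follows; this is a hedged sketch, not necessarily the patent's exact expression.

```python
import math

def skin_probability(u, v, mu_u, mu_v, sig_u, sig_v, rho):
    """Assumed form of the second skin-tone parameter P(u,v): a Gaussian
    of the squared Mahalanobis distance from (u, v) to the ellipse centre
    (mu_u, mu_v) with per-channel spreads sig_u, sig_v and correlation rho."""
    du = (u - mu_u) / sig_u
    dv = (v - mu_v) / sig_v
    d2 = (du * du - 2.0 * rho * du * dv + dv * dv) / (1.0 - rho * rho)
    return math.exp(-0.5 * d2)
```

The value is 1 at the ellipse centre and decays smoothly with distance, matching the "colour-of-skin possibility" role the text assigns to P(u,v).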
S40: edge-preserving filtering is applied to the Y component of the input image to obtain the image Y_filtered (the first image); a difference image is then obtained by:
Y_dif = Y_filtered − Y + 128;
Linear-light superposition then yields the smoothed image (the second image):
Y_smooth = Y + 2·Y_dif − 256;
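The two S40 formulas can be sketched on a single luma row. The patent does not name the edge-preserving filter (a bilateral or guided filter would be typical); a plain box blur stands in for it here purely so the sketch is self-contained. Note that substituting Y_dif into the second formula gives Y_smooth = 2·Y_filtered − Y algebraically.

```python
def box_blur(row, radius=1):
    """Crude stand-in for the unnamed edge-preserving filter."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def smooth_luma(row):
    """S40: difference image then linear-light superposition."""
    filtered = box_blur(row)
    diff = [f - y + 128 for f, y in zip(filtered, row)]   # Y_dif
    return [y + 2 * d - 256 for y, d in zip(row, diff)]   # Y_smooth
```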
S50: the smoothed image and the Y component of the original image are mixed according to the skin-detection result of S10 and the brightness judgement of S20 (the third image):
Y_res = (1 − a)·Y + a·Y_smooth
where a = V(u,v)·Y_i.
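Step S50 reduces to a per-pixel linear interpolation; a minimal sketch, taking the two weights computed in S10 and S20 as inputs:

```python
def blend_luma(y, y_smooth, v_uv, y_table_val):
    """S50: Y_res = (1 - a) * Y + a * Y_smooth with a = V(u,v) * Y_i,
    so smoothing is applied only where both the chroma-based skin
    likelihood and the brightness weight are high."""
    a = v_uv * y_table_val
    return (1.0 - a) * y + a * y_smooth
```

Because a is the product of two weights in [0, 1], non-skin pixels (V(u,v) ≈ 0) keep their original luma, which is how the background escapes smoothing.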
S60: the image composed of Y_res and the original U, V components is transformed to RGB space; each of the three components is then curve-stretched; the stretched image is transformed back to YUV space to obtain the image I_toneAdjusted.
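The stretch curve of S60 is not specified in the source. As an assumed placeholder, a per-channel gamma lift (gamma < 1 brightens dark values, addressing the dim-image problem from the background section) illustrates the idea:

```python
def stretch_channel(c, gamma=0.8):
    """Hypothetical S60 tone curve: clamp an 8-bit channel value to
    [0, 255] and apply a gamma lift; gamma < 1 brightens dark images.
    The patent's actual curve is not given."""
    return 255.0 * (max(0.0, min(255.0, c)) / 255.0) ** gamma
```

Applied to R, G, and B separately after the RGB conversion, this leaves black and white fixed while lifting the midtones.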
S70: the hue-adjusted image and the original image are linearly mixed (the fourth image):
b = P(u,v)·Y_i
S80: the beautified image is output.
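The S70 mix can be sketched per pixel. The source states the weight both as b = P(u,v)·Y_i (here) and as b = P(u,v)/255 (in the summary of the invention); the normalized form is assumed below, with P(u,v) taken on a [0, 255] scale, as an illustration only.

```python
def final_mix(orig, toned, p_uv):
    """Assumed S70: linear mix of the tone-adjusted pixel with the
    smoothed pixel, weighted by the ellipse skin probability P(u,v)
    (here assumed stored on a 0..255 scale). `orig` and `toned` are
    (Y, U, V) triples for one pixel."""
    b = p_uv / 255.0
    return tuple((1.0 - b) * o + b * t for o, t in zip(orig, toned))
```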
The method proposed by the embodiments of the invention can beautify faces while keeping the video frames continuous, and at the same time adjusts over-dark images.
An embodiment of the invention further provides a video image processing device, as shown in Fig. 3, comprising:
an image acquisition unit 301, configured to acquire a video image, the video image belonging to a luminance-chrominance (YUV) video stream;
a skin-tone determination unit 302, configured to determine the blue chrominance component U and the red chrominance component V of the video image, and to determine the skin tone of faces in the video image from U and V;
an image adjustment unit 303, configured to perform skin smoothing on the video image according to its luminance component Y and the skin tone, and to perform hue adjustment on the smoothed image using the skin tone;
an image output unit 304, configured to output the hue-adjusted image.
Optionally, the skin-tone determination unit 302 is configured to determine a first skin-tone parameter and a second skin-tone parameter of faces in the video image from U and V; the first skin-tone parameter is used for skin smoothing, and the second skin-tone parameter is used for hue adjustment.
Optionally, the image adjustment unit 303 is configured to apply edge-preserving filtering to the luminance component Y of the video image to obtain a first image, superimpose the first image and the video image according to the first skin-tone parameter to obtain a second image, perform hue adjustment on the second image to obtain a third image, and mix the third image and the second image according to the second skin-tone parameter to obtain a fourth image;
the image output unit 304 is configured to output the fourth image.
Optionally, the skin-tone determination unit 302 is configured to set the value ranges of U and V for the skin-tone region as u ∈ [u_min, u_max], v ∈ [v_min, v_max], where u_min and u_max are the minimum and maximum of U, and v_min and v_max are the minimum and maximum of V. The skin-tone detection result for a pixel (u, v) of the video image is:
V(u,v) = P_u · P_v
v_stepleft = v_min / scale;
v_stepright = (255 − v_max) / scale;
u_stepleft = u_min / scale;
u_stepright = (255 − u_max) / scale;
where scale is an order quantization parameter, and V(u,v) is the first skin-tone parameter.
The (u, v) value of each pixel of the video image is evaluated by the following formula:
where μ_u and μ_v are the centres of the skin tone in u and v, σ_u and σ_v are the variances of the u and v components of the skin tone, and ρ is the cross-correlation coefficient of the u and v of the skin tone. The Mahalanobis distance from pixel (u, v) to the centre of the ellipse is computed, and the second skin-tone parameter of pixel (u, v) is:
Optionally, above-mentioned image control unit 303, for remembering the corresponding dimensional table of luminance component Y are as follows:
Guarantor side is carried out to the luminance component Y of above-mentioned video image to filter to obtain Yfiltered, then obtained using following formula Error image: Ydif=Yfiltered-Y+128;Then linear optical superposition is carried out using following formula obtain mill skin image, Ysmooth= Y+2*Ydif-256;
Mill skin image YsmoothWith the luminance component Y of above-mentioned video image according to above-mentioned first colour of skin parameter and above-mentioned bright The corresponding dimensional table of degree component Y is mixed using following formula, Yres=(1-a) * Y+a*Ysmooth, wherein a=V(u,v)*Yi
The image formed by Y_res together with the above-mentioned U and V components is converted to RGB format; a curve stretch is then applied to each of the three components R, G and B; the stretched image is then converted back to YUV format to obtain the tone-adjusted image I_toneAdjusted. The tone-adjusted image and the above-mentioned second image are linearly blended using the following formula to obtain the above-mentioned fourth image:
where b = P(u,v)/255; in the above-mentioned I_toneAdjusted, the luminance component is Y_toneAdjusted, and the blue-chrominance and red-chrominance components are respectively U_toneAdjusted and V_toneAdjusted.
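The hue-adjustment and blending step can be sketched like this; the BT.601 full-range conversion matrices and the gamma-style stretch curve are illustrative assumptions, since the patent specifies only the pipeline order (YUV to RGB, per-channel curve stretch, back to YUV) and the blend weight b = P(u,v)/255.

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """BT.601 full-range YUV -> RGB (assumed; the patent names no matrix)."""
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    return np.clip(np.stack([r, g, b]), 0, 255)

def rgb_to_yuv(rgb):
    """Inverse BT.601 full-range conversion."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, u, v

def tone_adjust_and_blend(y_res, u, v, second_y, p_map):
    """p_map: second skin-colour parameter P(u,v) per pixel, in [0, 255]."""
    rgb = yuv_to_rgb(y_res, u, v)
    stretched = 255.0 * (rgb / 255.0) ** 0.9       # assumed curve stretch
    y_tone, u_tone, v_tone = rgb_to_yuv(stretched)  # I_toneAdjusted components
    b = p_map / 255.0                               # blend weight
    y_out = (1.0 - b) * second_y + b * y_tone       # fourth-image luma
    return y_out, u_tone, v_tone
```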
The embodiment of the invention also provides an electronic device. As shown in Fig. 4, the electronic device includes: an input/output device 401, a memory 402 and a processor 403; the memory 402 is used for storing an executable program, and the above-mentioned processor 403 executes the above-mentioned executable program to implement the method flow in the embodiment of the present invention.
The embodiment of the invention also provides a server. Fig. 5 is a schematic diagram of a server structure provided by an embodiment of the present invention. The server 500 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 522 (for example, one or more processors), memory 532, and one or more storage media 530 (such as one or more mass storage devices) storing application programs 542 or data 544. The memory 532 and the storage media 530 may provide transient or persistent storage. The program stored in a storage medium 530 may include one or more modules (not shown in the figure), each of which may include a series of instruction operations on the server. Further, the central processing unit 522 may be configured to communicate with the storage medium 530 and to execute, on the server 500, the series of instruction operations in the storage medium 530.
The server 500 may also include one or more power supplies 526, one or more wired or wireless network interfaces 550, one or more input/output interfaces 558, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the video image processing apparatus in the above embodiments may be based on the server structure shown in Fig. 5.
It is worth noting that the units included in the above video image processing apparatus embodiment are divided according to functional logic only; the division is not limited thereto, as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for the convenience of distinguishing them from one another, and are not intended to limit the protection scope of the present invention.
In addition, those of ordinary skill in the art will appreciate that all or part of the steps in the above method embodiments may be completed by a program instructing the relevant hardware; the corresponding program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the embodiments of the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A method of video image processing, characterized by comprising:
obtaining a video image, the video image belonging to an image in a luminance-chrominance (YUV) video stream;
determining a blue-chrominance component U and a red-chrominance component V in the video image, and determining the skin colour of a face in the video image according to the U and V; wherein determining the skin colour of the face in the video image according to the U and V comprises: determining, according to the U and V, a first skin-colour parameter and a second skin-colour parameter of the face in the video image; the first skin-colour parameter being a skin-colour parameter for skin-smoothing processing, and the second skin-colour parameter being a skin-colour parameter for hue adjustment;
performing edge-preserving filtering on the luminance component Y of the video image to obtain a first image; superimposing the first image and the video image according to the first skin-colour parameter to obtain a second image; performing hue adjustment on the second image to obtain a third image; mixing the third image and the second image according to the second skin-colour parameter to obtain a fourth image, and outputting the fourth image.
2. The method according to claim 1, characterized in that determining the blue-chrominance component U and the red-chrominance component V in the video image, and determining the skin colour of the face in the video image according to the U and V, comprises:
setting the value ranges of U and V for the skin-colour region as u ∈ [u_min, u_max] and v ∈ [v_min, v_max], where u_min and u_max are respectively the minimum and maximum of U, and v_min and v_max are respectively the minimum and maximum of V; for a pixel (u, v) in the video image, the skin-colour detection result is:
V(u,v) = P_u * P_v
v_step_left = v_min / scale
v_step_right = (255 - v_max) / scale
u_step_left = u_min / scale
u_step_right = (255 - u_max) / scale
the scale being the order-quantization parameter, and the V(u,v) being the first skin-colour parameter;
the value of pixel (u, v) of the video image being obtained through the following formula:
where μ_u and μ_v are respectively the skin-colour centres of the u and v components; σ_u and σ_v are respectively the variances of the u and v components of the skin colour; and ρ is the cross-correlation coefficient of u and v for the skin colour; the Mahalanobis distance from pixel (u, v) to the ellipse centre is computed, and the second skin-colour parameter of pixel (u, v) is:
3. The method according to claim 2, characterized in that performing edge-preserving filtering on the luminance component Y of the video image to obtain the first image, superimposing the first image and the video image according to the first skin-colour parameter to obtain the second image, performing hue adjustment on the second image to obtain the third image, mixing the third image and the second image according to the second skin-colour parameter to obtain the fourth image, and outputting the fourth image comprises:
denoting the lookup table corresponding to the luminance component Y as:
performing edge-preserving filtering on the luminance component Y of the video image to obtain Y_filtered, then obtaining a difference image using the following formula: Y_dif = Y_filtered - Y + 128; then performing linear-light superposition using the following formula to obtain the skin-smoothed image: Y_smooth = Y + 2*Y_dif - 256;
mixing the skin-smoothed image Y_smooth and the luminance component Y of the video image, according to the first skin-colour parameter and the lookup table corresponding to the luminance component Y, using the following formula: Y_res = (1 - a)*Y + a*Y_smooth, where a = V(u,v)*Y_i;
converting the image formed by Y_res together with the U and V components to RGB format, then applying a curve stretch to each of the three components R, G and B, then converting the stretched image back to YUV format to obtain the tone-adjusted image I_toneAdjusted; linearly blending the tone-adjusted image and the second image using the following formula to obtain the fourth image:
where b = P(u,v)/255; in the I_toneAdjusted, the luminance component is Y_toneAdjusted, and the blue-chrominance and red-chrominance components are respectively U_toneAdjusted and V_toneAdjusted;
outputting the fourth image.
4. A video image processing apparatus, characterized by comprising:
an image acquisition unit, configured to obtain a video image, the video image belonging to an image in a luminance-chrominance (YUV) video stream;
a skin-colour determination unit, configured to determine a blue-chrominance component U and a red-chrominance component V in the video image, and to determine, according to the U and V, a first skin-colour parameter and a second skin-colour parameter of a face in the video image; the first skin-colour parameter being a skin-colour parameter for skin-smoothing processing, and the second skin-colour parameter being a skin-colour parameter for hue adjustment;
an image control unit, configured to perform edge-preserving filtering on the luminance component Y of the video image to obtain a first image, superimpose the first image and the video image according to the first skin-colour parameter to obtain a second image, perform hue adjustment on the second image to obtain a third image, and mix the third image and the second image according to the second skin-colour parameter to obtain a fourth image;
an image output unit, configured to output the fourth image.
5. The video image processing apparatus according to claim 4, characterized in that:
the skin-colour determination unit is configured to set the value ranges of U and V for the skin-colour region as u ∈ [u_min, u_max] and v ∈ [v_min, v_max], where u_min and u_max are respectively the minimum and maximum of U, and v_min and v_max are respectively the minimum and maximum of V; for a pixel (u, v) in the video image, the skin-colour detection result is:
V(u,v) = P_u * P_v
v_step_left = v_min / scale
v_step_right = (255 - v_max) / scale
u_step_left = u_min / scale
u_step_right = (255 - u_max) / scale
the scale being the order-quantization parameter, and the V(u,v) being the first skin-colour parameter;
the value of pixel (u, v) of the video image being obtained through the following formula:
where μ_u and μ_v are respectively the skin-colour centres of the u and v components; σ_u and σ_v are respectively the variances of the u and v components of the skin colour; and ρ is the cross-correlation coefficient of u and v for the skin colour; the Mahalanobis distance from pixel (u, v) to the ellipse centre is computed, and the second skin-colour parameter of pixel (u, v) is:
6. The video image processing apparatus according to claim 5, characterized in that:
the image control unit is configured to denote the lookup table corresponding to the luminance component Y as:
perform edge-preserving filtering on the luminance component Y of the video image to obtain Y_filtered, then obtain a difference image using the following formula: Y_dif = Y_filtered - Y + 128; then perform linear-light superposition using the following formula to obtain the skin-smoothed image: Y_smooth = Y + 2*Y_dif - 256;
mix the skin-smoothed image Y_smooth and the luminance component Y of the video image, according to the first skin-colour parameter and the lookup table corresponding to the luminance component Y, using the following formula: Y_res = (1 - a)*Y + a*Y_smooth, where a = V(u,v)*Y_i;
convert the image formed by Y_res together with the U and V components to RGB format, then apply a curve stretch to each of the three components R, G and B, then convert the stretched image back to YUV format to obtain the tone-adjusted image I_toneAdjusted; linearly blend the tone-adjusted image and the second image using the following formula to obtain the fourth image:
where b = P(u,v)/255; in the I_toneAdjusted, the luminance component is Y_toneAdjusted, and the blue-chrominance and red-chrominance components are respectively U_toneAdjusted and V_toneAdjusted.
CN201610798032.XA 2016-08-31 2016-08-31 A kind of method of video image processing and equipment Active CN106375316B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610798032.XA CN106375316B (en) 2016-08-31 2016-08-31 A kind of method of video image processing and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610798032.XA CN106375316B (en) 2016-08-31 2016-08-31 A kind of method of video image processing and equipment

Publications (2)

Publication Number Publication Date
CN106375316A CN106375316A (en) 2017-02-01
CN106375316B true CN106375316B (en) 2019-10-29

Family

ID=57899265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610798032.XA Active CN106375316B (en) 2016-08-31 2016-08-31 A kind of method of video image processing and equipment

Country Status (1)

Country Link
CN (1) CN106375316B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180415B (en) * 2017-03-30 2020-08-14 北京奇艺世纪科技有限公司 Skin beautifying processing method and device in image
CN108230331A (en) * 2017-09-30 2018-06-29 深圳市商汤科技有限公司 Image processing method and device, electronic equipment, computer storage media
CN109274983A (en) * 2018-12-06 2019-01-25 广州酷狗计算机科技有限公司 The method and apparatus being broadcast live
CN111160267A (en) * 2019-12-27 2020-05-15 深圳创维-Rgb电子有限公司 Image processing method, terminal and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1496132A (en) * 2002-08-23 2004-05-12 三星电子株式会社 Method for adjusting colour saturation using saturation control
CN101882315A (en) * 2009-05-04 2010-11-10 青岛海信数字多媒体技术国家重点实验室有限公司 Method for detecting skin color areas
CN103745193A (en) * 2013-12-17 2014-04-23 小米科技有限责任公司 Skin color detection method and skin color detection device
CN105787888A (en) * 2014-12-23 2016-07-20 联芯科技有限公司 Human face image beautifying method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7844076B2 (en) * 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US8154612B2 (en) * 2005-08-18 2012-04-10 Qualcomm Incorporated Systems, methods, and apparatus for image processing, for color classification, and for skin color detection


Also Published As

Publication number Publication date
CN106375316A (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN107038680B (en) Self-adaptive illumination beautifying method and system
CN106375316B (en) A kind of method of video image processing and equipment
KR101450423B1 (en) Skin tone and feature detection for video conferencing compression
DE102019106252A1 (en) Method and system for light source estimation for image processing
WO2016110188A1 (en) Method and electronic device for aesthetic enhancements of face in real-time video
CN108932696B (en) Signal lamp halo suppression method and device
CN105279487A (en) Beauty tool screening method and system
CN106097261B (en) Image processing method, device, storage medium and terminal device
CN107895350B (en) HDR image generation method based on self-adaptive double gamma transformation
US20100182461A1 (en) Image-signal processing device and image signal processing program
CN107396079B (en) White balance adjustment method and device
CN110493532A (en) A kind of image processing method and system
CN109040720B (en) A kind of method and device generating RGB image
CN105913376A (en) Method and device for quick photo beautifying
CN113132696A (en) Image tone mapping method, device, electronic equipment and storage medium
CN110807735A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
CN109587466B (en) Method and apparatus for color shading correction
CN106815803A (en) The processing method and processing device of picture
CN110175967B (en) Image defogging processing method, system, computer device and storage medium
CN106550227A (en) A kind of image saturation method of adjustment and device
CN110192388A (en) Image processing apparatus, digital camera, image processing program and recording medium
CN109636739A (en) The treatment of details method and device of image saturation enhancing
CN107454318A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN106408617A (en) Interactive single image material acquiring system based on YUV color space and method
CN110335257A (en) A kind of image color detection method and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191106

Address after: 510000 X1301-E6803 (Cluster Address) (JM) No. 106 Fengze East Road, Nansha District, Guangzhou, Guangdong Province

Patentee after: Guangzhou Netstar Information Technology Co., Ltd.

Address before: 511442, Guangdong Province, Guangzhou, Panyu District Town, Huambo business district, Wanda Plaza, block B1, 28 floor

Patentee before: All kinds of fruits garden, Guangzhou network technology company limited
