Method and system for improving video image clarity
Technical field
The present invention relates to the technical field of video image processing, and in particular to a method and system for improving video image clarity.
Background technology
In the field of video playback, improving picture quality has long been a universal goal, but existing technology cannot both process video in real time and enhance image detail in a way that matches the visual characteristics of the human eye. Video image enhancement technology has therefore become increasingly important to the development of the video playback field.
Traditional image enhancement methods fall into two broad classes. The first performs arithmetic operations directly on the gray-scale pixel values that make up the image, for example nonlinear gray-scale transformation or histogram equalization. The second first transforms the image into the frequency domain, performs spectrum analysis there, and then obtains the processed pixel values through the inverse frequency-domain transform.
Traditional image enhancement methods, however, generally rely on a single algorithm, such as raising brightness, increasing contrast, or boosting chroma. They enhance the image to a certain degree, but do not achieve a fully satisfactory effect.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the present invention is how to improve the clarity of a video image.
(2) Technical solution
To solve the above technical problem, the invention provides a method for improving video image clarity, the method comprising the following steps:
S1: sampling the current video image to extract the average pixel value of the pixels in the current video image;
S2: traversing each pixel in the current video image and applying texture enhancement processing to a preset number of pixels surrounding each pixel, so as to obtain the texture-enhanced pixel value of each pixel;
S3: performing color enhancement processing using the average pixel value and the texture-enhanced pixel value of each pixel, so as to obtain the color-enhanced pixel value of each pixel, and outputting the color-enhanced current video image.
Preferably, step S1 specifically comprises the following steps:
S11: evenly dividing the current video image into n rows by pixel count, where n is an integer greater than zero;
S12: taking m sample points from each odd-numbered row of the n rows and m+1 or m-1 sample points from each even-numbered row, where m is an integer greater than zero;
S13: computing the average pixel value of the sample points selected in step S12, and taking it as the average pixel value of the pixels in the current video image.
Preferably, step S1 specifically comprises the following steps:
S11: evenly dividing the current video image into n columns by pixel count, where n is an integer greater than zero;
S12: taking m sample points from each odd-numbered column of the n columns and m+1 or m-1 sample points from each even-numbered column, where m is an integer greater than zero;
S13: computing the average pixel value of the sample points selected in step S12, and taking it as the average pixel value of the pixels in the current video image.
Preferably, step S2 specifically comprises the following steps:
S21: forming, with the current pixel as its center, a square of M × M pixels from the pixels surrounding the current pixel, where M is a positive odd number greater than 1;
S22: building an M × M Gaussian matrix in which the element at the center has the largest value and element values decrease with distance from the center;
S23: using the elements of the Gaussian matrix as the weights of the M × M pixels and computing the weighted sum of their pixel values;
S24: computing the sum of the differences between the pixel values of the M × M pixels and the pixel value of the current pixel;
S25: computing the sum of the absolute differences between the pixel values of the M × M pixels and the pixel value of the current pixel;
S26: obtaining the texture-enhanced pixel value of the current pixel from the weighted pixel-value sum, the difference sum, the absolute-difference sum, and the pixel value of the current pixel;
S27: judging whether every pixel in the current video image has been traversed; if so, executing step S3, otherwise replacing the current pixel with another unselected pixel and returning to step S21.
Preferably, in step S26 the texture-enhanced pixel value of the current pixel is computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the texture-enhanced pixel value of the current pixel; clamp(a, b, c) is a range-limiting function in which a is the value to be limited, b the minimum and c the maximum; Y_P is the Y component of the pixel value of the current pixel; sum is the weighted pixel-value sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt takes the square root; and abs takes the absolute value.
Preferably, in step S26 the texture-enhanced pixel value of the current pixel is computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the texture-enhanced pixel value of the current pixel; clamp(a, b, c) is a range-limiting function in which a is the value to be limited, b the minimum and c the maximum; Y_P is the Y component of the pixel value of the current pixel; iHD is a strength parameter factor with a value range of 0 to 10; sum is the weighted pixel-value sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt takes the square root; and abs takes the absolute value.
Preferably, step S3 performs the color enhancement processing by a color enhancement formula in which Y_C is the Y component of the color-enhanced pixel value of the current pixel, U_C is the U component of the color-enhanced pixel value of the current pixel, V_C is the V component of the color-enhanced pixel value of the current pixel, Y_V is the Y component of the texture-enhanced pixel value of the current pixel, U_P is the U component of the pixel value of the current pixel, V_P is the V component of the pixel value of the current pixel, fMid is the average pixel value, and float denotes conversion to floating point.
Preferably, step S3 performs the color enhancement processing by the following formulas:
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
where Y_C is the Y component of the color-enhanced pixel value of the current pixel, U_C is the U component of the color-enhanced pixel value of the current pixel, V_C is the V component of the color-enhanced pixel value of the current pixel, iCS is a strength parameter factor with a value range of 0 to 10, Y_V is the Y component of the texture-enhanced pixel value of the current pixel, U_P is the U component of the pixel value of the current pixel, V_P is the V component of the pixel value of the current pixel, fMid is the average pixel value, and float denotes conversion to floating point.
Preferably, the method further comprises, before step S1:
S0: comparing the width and height of the current video image with the width and height of the target video image to judge whether enlargement is needed; if so, enlarging the current video image and then executing step S1, otherwise executing step S1 directly.
The invention further discloses a system for improving video image clarity, the system comprising:
an average-value extraction module for sampling the current video image to extract the average pixel value of the pixels in the current video image;
a texture enhancement module for traversing each pixel in the current video image and applying texture enhancement processing to a preset number of pixels surrounding each pixel, so as to obtain the texture-enhanced pixel value of each pixel;
a color enhancement module for performing color enhancement processing using the average pixel value and the texture-enhanced pixel value of each pixel, so as to obtain the color-enhanced pixel value of each pixel, and outputting the color-enhanced current video image.
(3) Beneficial effects
By processing the texture and the color of the video image separately, the present invention improves the clarity of the video image.
Brief description of the drawings
Fig. 1 is a flow chart of the method for improving video image clarity according to one embodiment of the present invention;
Fig. 2 compares the texture effect before and after optimization by the method shown in Fig. 1;
Fig. 3 compares the color effect before and after optimization by the method shown in Fig. 1;
Fig. 4 compares the effect after optimization by the method shown in Fig. 1 with the effect after traditional image enhancement processing.
Embodiment
The specific embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples illustrate the present invention but do not limit its scope.
Fig. 1 is a flow chart of the method for improving video image clarity according to one embodiment of the present invention. With reference to Fig. 1, the method comprises the following steps:
S1: sampling the current video image to extract the average pixel value of the pixels in the current video image;
S2: traversing each pixel in the current video image and applying texture enhancement processing to a preset number of pixels surrounding each pixel, so as to obtain the texture-enhanced pixel value of each pixel (with reference to Fig. 2);
S3: performing color enhancement processing using the average pixel value and the texture-enhanced pixel value of each pixel, so as to obtain the color-enhanced pixel value of each pixel (with reference to Fig. 3), and outputting the color-enhanced current video image.
To improve the efficiency of extracting the average pixel value while keeping it as accurate as possible, step S1 preferably comprises the following steps:
S11: evenly dividing the current video image into n rows by pixel count, where n is an integer greater than zero;
S12: taking m sample points from each odd-numbered row of the n rows and m+1 or m-1 sample points from each even-numbered row, where m is an integer greater than zero;
S13: computing the average pixel value of the sample points selected in step S12, and taking it as the average pixel value of the pixels in the current video image.
Alternatively, step S1 may comprise the following steps:
S11: evenly dividing the current video image into n columns by pixel count, where n is an integer greater than zero;
S12: taking m sample points from each odd-numbered column of the n columns and m+1 or m-1 sample points from each even-numbered column, where m is an integer greater than zero;
S13: computing the average pixel value of the sample points selected in step S12, and taking it as the average pixel value of the pixels in the current video image.
Preferably, step S2 specifically comprises the following steps:
S21: forming, with the current pixel as its center, a square of M × M pixels from the pixels surrounding the current pixel, where M is a positive odd number greater than 1;
S22: building an M × M Gaussian matrix in which the element at the center has the largest value and element values decrease with distance from the center;
S23: using the elements of the Gaussian matrix as the weights of the M × M pixels and computing the weighted sum of their pixel values;
S24: computing the sum of the differences between the pixel values of the M × M pixels and the pixel value of the current pixel;
S25: computing the sum of the absolute differences between the pixel values of the M × M pixels and the pixel value of the current pixel;
S26: obtaining the texture-enhanced pixel value of the current pixel from the weighted pixel-value sum, the difference sum, the absolute-difference sum, and the pixel value of the current pixel;
S27: judging whether every pixel in the current video image has been traversed; if so, executing step S3, otherwise replacing the current pixel with another unselected pixel and returning to step S21.
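As one illustration of step S22, the Gaussian matrix can be generated from a distance-based Gaussian and normalized so that its weights sum to 1. The sketch below is a minimal Python version; the function name and the `sigma` parameter are assumptions for illustration only (the steps above require only that the center element be largest and that values fall off with distance).

```python
import math

def gaussian_kernel(M, sigma=1.0):
    """Build an M x M Gaussian weight matrix: the center element is the
    largest, weights decrease with distance from the center, and the
    matrix is normalized so that all weights sum to 1."""
    c = M // 2
    k = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2.0 * sigma ** 2))
          for j in range(M)] for i in range(M)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```

With M = 3 and sigma = 1.0 this reproduces, to within rounding, the 3 × 3 matrix of weights 0.0754 / 0.1230 / 0.2063 used in Embodiment 1 below.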
Preferably, in step S26 the texture-enhanced pixel value of the current pixel is computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the texture-enhanced pixel value of the current pixel; clamp(a, b, c) is a range-limiting function (when a lies between b and c it returns a; when a is greater than c it returns c; when a is less than b it returns b), with a the value to be limited, b the minimum and c the maximum; Y_P is the Y component of the pixel value of the current pixel; sum is the weighted pixel-value sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt takes the square root; and abs takes the absolute value.
To let the user adjust the degree of texture enhancement, a strength parameter factor iHD is introduced. This parameter can be set by the user and its value range is 0 to 10: when set to 0, texture enhancement is switched off automatically; when set to 10, the texture enhancement degree is at its maximum. In step S26 the texture-enhanced pixel value of the current pixel is then computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the texture-enhanced pixel value of the current pixel; Y_P is the Y component of the pixel value of the current pixel; iHD is the strength parameter factor with a value range of 0 to 10; sum is the weighted pixel-value sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt takes the square root; and abs takes the absolute value.
Preferably, step S3 performs the color enhancement processing by a color enhancement formula in which Y_C is the Y component of the color-enhanced pixel value of the current pixel, U_C is the U component of the color-enhanced pixel value of the current pixel, V_C is the V component of the color-enhanced pixel value of the current pixel, Y_V is the Y component of the texture-enhanced pixel value of the current pixel, U_P is the U component of the pixel value of the current pixel, V_P is the V component of the pixel value of the current pixel, fMid is the average pixel value, and float denotes conversion to floating point.
To let the user adjust the degree of color enhancement, a strength parameter factor iCS is introduced. This parameter can be set by the user and its value range is 0 to 10: when set to 0, color enhancement is switched off automatically; when set to 10, the color enhancement degree is at its maximum. Preferably, step S3 then performs the color enhancement processing by the following formulas:
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
where Y_C is the Y component of the color-enhanced pixel value of the current pixel, U_C is the U component of the color-enhanced pixel value of the current pixel, V_C is the V component of the color-enhanced pixel value of the current pixel, iCS is the strength parameter factor with a value range of 0 to 10, Y_V is the Y component of the texture-enhanced pixel value of the current pixel, U_P is the U component of the pixel value of the current pixel, V_P is the V component of the pixel value of the current pixel, fMid is the average pixel value, and float denotes conversion to floating point.
Preferably, the method further comprises, before step S1:
S0: comparing the width and height of the current video image with the width and height of the target video image to judge whether enlargement is needed; if so, enlarging the current video image and then executing step S1, otherwise executing step S1 directly.
The method of this embodiment differs from traditional image enhancement methods in the following respects.
First, traditional image enhancement methods generally rely on a single algorithm, such as raising brightness, increasing contrast, or boosting chroma. With reference to Fig. 4, although they enhance the image to a certain degree, the result is not fully satisfactory, whereas the method of this embodiment enhances both texture and color, and the display effect improves markedly.
Second, in algorithm adaptability. Traditional image enhancement methods generally process a single static picture and achieve their effect only in specific situations and specific environments. During playback, however, the video data refreshes continuously to guarantee fluent viewing, and every refresh produces brand-new image data. This demands an enhancement algorithm with good real-time dynamic adaptability; traditional enhancement does not meet this requirement, while the method of this embodiment can be used in many situations and has good adaptability.
Third, in processing speed. Video display must consider not only the enhancement effect but also the huge volume of video data: playback fluency must be guaranteed while achieving a good result, which places very high demands on the efficiency of the enhancement algorithm. Traditional image enhancement methods, designed for static pictures, focus on the enhancement effect and pay little attention to processing efficiency. The method of this embodiment improves the display effect while also guaranteeing processing efficiency, so the playback fluency of the video is unaffected.
Finally, in generality. Most traditional image enhancement methods operate on ordinary pictures (for example BMP bitmaps) and their algorithms only need to support the RGB32 format, whereas video data comes in many formats, such as RGB32, RGB24, RGB555, YV12, NV12, and YUY2. Adapting to this diversity of video data formats requires a highly general enhancement algorithm; the method of this embodiment can be applied to the various video data formats and has good generality.
Embodiment 1
The method of the present invention is described below through a specific embodiment, which does not limit the protection scope of the present invention.
First, during video display the width and height of a given video file are themselves fixed. However, because the playback window changes size in real time (for example between full-screen and windowed states), the width and height of the video the viewer actually sees change dynamically.
The larger the width and height of the image, the more image data it holds; the more information there is, the better the enhancement effect. Therefore, before enhancement processing it is preferable to compare the original width and height of the video (the width and height of the current video image, denoted width_src and height_src) with the width and height finally seen by the viewer (the width and height of the target video image, denoted width_dst and height_dst) to judge whether enlargement is needed.
When width_dst < width_src and height_dst < height_src, the original video is already large enough and no enlargement is needed.
When width_dst > width_src or height_dst > height_src, the video seen by the viewer is larger than the original video, so enlargement is needed.
To guarantee high efficiency without affecting the result, the present invention enlarges by intervals. In this embodiment the amplification range is divided into 6 intervals:
When 0 < width_dst/width_src <= 1.20: enlarge by 1.0×;
When 1.20 < width_dst/width_src <= 1.45: enlarge by 1.3×;
When 1.45 < width_dst/width_src <= 1.80: enlarge by 1.6×;
When 1.80 < width_dst/width_src <= 2.50: enlarge by 2.0×;
When 2.50 < width_dst/width_src <= 3.50: enlarge by 3.0×;
When 3.50 < width_dst/width_src <= 100.0: enlarge by 4.0×.
After the intervals are fixed, the data within each interval follow a more specific alignment rule, which makes it easier to optimize the efficiency of the interpolation algorithm.
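The enlargement test and the six-interval mapping above can be sketched as a small Python function; the name `scale_factor` is illustrative.

```python
def scale_factor(width_src, height_src, width_dst, height_dst):
    """Return the amplification factor for the current frame: 1.0 when no
    enlargement is needed, otherwise one of the six fixed interval values."""
    if width_dst <= width_src and height_dst <= height_src:
        return 1.0                      # source already large enough
    r = width_dst / width_src           # display-to-source width ratio
    if r <= 1.20:
        return 1.0
    if r <= 1.45:
        return 1.3
    if r <= 1.80:
        return 1.6
    if r <= 2.50:
        return 2.0
    if r <= 3.50:
        return 3.0
    return 4.0
```

For example, scaling a 1280×720 source to a 1920×1080 window gives a ratio of 1.5 and therefore a 1.6× amplification.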
Second, because the picture content changes continuously while the video plays, different video content requires a different corresponding parameter value. This value cannot be static; it must be computed dynamically from the content as the picture changes.
To collect a point set evenly over the image, this embodiment divides the image into 10 rows along its height height_dst, one row every height_dst/10 pixels. Within each row the collected pixels must also be evenly spaced: for the odd rows 1, 3, 5, 7 and 9, the pixel values of 11 evenly spaced points are collected, and for the even rows 2, 4, 6, 8 and 10, the pixel values of 10 points. In this way the values of 105 pixels are collected evenly over the whole image; these 105 values are summed and divided by 105 to compute in real time a parameter value suited to the current video picture, denoted fMid.
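The 105-point sampling of fMid can be sketched as follows, assuming the image is available as a row-major 2D array of Y values; `compute_fmid` is an illustrative name.

```python
def compute_fmid(image):
    """Sample 105 points -- 11 evenly spaced points on each of the 5 odd
    rows and 10 on each of the 5 even rows of a 10-row grid -- and return
    their mean as the per-frame parameter fMid."""
    height, width = len(image), len(image[0])
    total = 0.0
    for i in range(10):                       # 10 evenly spaced rows
        row = image[min(i * height // 10, height - 1)]
        k = 11 if i % 2 == 0 else 10          # rows 1,3,5,7,9 take 11 points
        for j in range(k):                    # k evenly spaced columns
            total += row[min(j * width // k, width - 1)]
    return total / 105.0
```

On a uniform frame this returns the frame value itself, which is a quick sanity check that exactly 105 samples are averaged.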
Third, to make the outlines of objects clearer, the contours of the objects in the picture are traced again so that the objects stand out clearly against the background.
All pixels in the video image are traversed and texture enhancement processing is applied to each pixel. The specific steps are:
1. With the current pixel as its center, a square of 3 × 3 pixels is formed from the pixels surrounding the current pixel; let this square be:
Y00 Y01 Y02
Y10 Y11 Y12
Y20 Y21 Y22
where the pixel value of the current pixel is Y11.
2. A 3 × 3 Gaussian matrix is built, with the structure:
0.0754 0.1230 0.0754
0.1230 0.2063 0.1230
0.0754 0.1230 0.0754
This matrix assigns a different weight to each position according to its distance from the center point. The Y11 position, the pixel's own position, receives the largest weight, 0.2063; the other points receive 0.1230 or 0.0754 depending on their distance from Y11. The nearer the distance, the larger the weight and the greater the influence; the farther the distance, the smaller the weight and the smaller the influence.
3. Each of the 9 pixel values of the 3 × 3 square is multiplied by its corresponding weight in the Gaussian matrix above (for example Y00*0.0754, Y01*0.1230, and so on), and all the products are added. The formula is:
sum = Y00*0.0754 + Y01*0.1230 + Y02*0.0754
    + Y10*0.1230 + Y11*0.2063 + Y12*0.1230
    + Y20*0.0754 + Y21*0.1230 + Y22*0.0754
where sum is the weighted pixel-value sum.
4. To make the outlines of objects clearer, the differences between the pixel itself and its adjacent points must be computed: each of the 9 sampled points is subtracted from Y11 and the results are added, which yields the difference sum. The formula is:
sumdiff = (Y11-Y00) + (Y11-Y01) + (Y11-Y02)
        + (Y11-Y10) + (Y11-Y11) + (Y11-Y12)
        + (Y11-Y20) + (Y11-Y21) + (Y11-Y22)
where sumdiff is the difference sum, used as a parameter of the subsequent algorithm.
5. The difference sum computed in step 4 is signed and may be positive or negative, so the sum of the absolute differences is also needed. The formula is:
sumdiff_abs = abs(Y11-Y00) + abs(Y11-Y01) + abs(Y11-Y02)
            + abs(Y11-Y10) + abs(Y11-Y11) + abs(Y11-Y12)
            + abs(Y11-Y20) + abs(Y11-Y21) + abs(Y11-Y22)
where abs() takes the absolute value and sumdiff_abs is the absolute-difference sum.
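Steps 3 to 5 above can be sketched together as one small Python function over a 3 × 3 block of Y values; the weights are the Gaussian matrix of step 2, and `neighborhood_sums` is an illustrative name.

```python
# Row-major weights of the 3 x 3 Gaussian matrix from step 2.
GAUSS_3X3 = [0.0754, 0.1230, 0.0754,
             0.1230, 0.2063, 0.1230,
             0.0754, 0.1230, 0.0754]

def neighborhood_sums(block):
    """block: 9 Y values in row-major order; block[4] is the current
    pixel Y11.  Returns (sum, sumdiff, sumdiff_abs) as defined above."""
    center = block[4]
    weighted = sum(w * y for w, y in zip(GAUSS_3X3, block))
    diff = sum(center - y for y in block)           # signed difference sum
    diff_abs = sum(abs(center - y) for y in block)  # absolute difference sum
    return weighted, diff, diff_abs
```

Note that sumdiff equals 9*Y11 minus the plain sum of the block, so it vanishes on flat regions, while sumdiff_abs vanishes only when the block is exactly flat.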
6. The texture-enhanced pixel value of the current pixel is obtained from the weighted pixel-value sum, the difference sum, the absolute-difference sum, and the pixel value of the current pixel. The computing formula is:
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where iHD is the strength parameter factor, which is adjustable with a value range of 0 to 10: set to 0 it switches the operation off, set to 10 the operation strength is at its maximum, and from 0 to 10 a larger value means stronger enhancement; Y_V is the Y component of the texture-enhanced pixel value of the current pixel and Y_P is the Y component of the pixel value of the current pixel.
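The step-6 formula can be written directly in Python. This is a minimal sketch assuming the parenthesization reconstructed above (the gate term is clamped to [0, 1] before it scales the sharpening amount); the function names are illustrative.

```python
import math

def clamp(a, b, c):
    """Range-limiting function: a limited to the interval [b, c]."""
    return max(b, min(a, c))

def texture_enhance(y_p, s, sumdiff, sumdiff_abs, iHD=5):
    """Step 6: sharpen the Y component y_p of the current pixel.
    s is the Gaussian-weighted sum; iHD in [0, 10] controls the strength
    (0 switches the enhancement off)."""
    # sumdiff_abs >= |sumdiff| always holds; max() only guards rounding.
    gate = clamp(math.sqrt(max(sumdiff_abs - abs(sumdiff), 0.0) / 5.0),
                 0.0, 1.0)
    return clamp(y_p + (y_p - s) * math.sqrt(iHD) * gate, 0.0, 255.0)
```

On a flat region the gate is 0 and the pixel passes through unchanged; on a strong edge the gate saturates at 1 and the result is clamped to the valid range 0 to 255.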
7. Judge whether every pixel in the current video image has been traversed; if so, proceed to the next step, otherwise replace the current pixel with another unselected pixel and return to step 1.
Finally, color enhancement processing is performed using the average pixel value and the texture-enhanced pixel value of each pixel.
For the computation of the Y component, the result fMid of the color sampling above is used. The processing formulas are:
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
where Y_C is the Y component of the color-enhanced pixel value of the current pixel, U_C is the U component of the color-enhanced pixel value of the current pixel, and V_C is the V component of the color-enhanced pixel value of the current pixel; iCS is the strength parameter factor, which is adjustable with a value range of 0 to 10: set to 0 it switches the operation off, set to 10 the operation strength is at its maximum, and from 0 to 10 a larger value means stronger enhancement; Y_V is the Y component of the texture-enhanced pixel value of the current pixel, U_P is the U component of the pixel value of the current pixel, V_P is the V component of the pixel value of the current pixel, fMid is the average pixel value, and float denotes conversion to floating point.
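The three color enhancement formulas can be sketched as one Python function; `color_enhance` is an illustrative name, and the constants 22, 32, and 128 are taken directly from the formulas above.

```python
def clamp(a, b, c):
    """Range-limiting function: a limited to the interval [b, c]."""
    return max(b, min(a, c))

def color_enhance(y_v, u_p, v_p, f_mid, iCS=5):
    """Stretch Y away from the frame average f_mid and U/V away from the
    neutral chroma value 128; iCS in [0, 10] (0 switches enhancement off)."""
    y_c = clamp(y_v + (y_v - f_mid) * iCS / 22.0, 0.0, 255.0)
    u_c = clamp(u_p + (u_p - 128.0) * iCS / 32.0, 0.0, 255.0)
    v_c = clamp(v_p + (v_p - 128.0) * iCS / 32.0, 0.0, 255.0)
    return y_c, u_c, v_c
```

With iCS = 0 the pixel passes through unchanged, and neutral chroma (U = V = 128) is never altered, so gray regions keep their hue.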
The invention also discloses a system for improving video image clarity, the system comprising:
an average-value extraction module for sampling the current video image to extract the average pixel value of the pixels in the current video image;
a texture enhancement module for traversing each pixel in the current video image and applying texture enhancement processing to a preset number of pixels surrounding each pixel, so as to obtain the texture-enhanced pixel value of each pixel;
a color enhancement module for performing color enhancement processing using the average pixel value and the texture-enhanced pixel value of each pixel, so as to obtain the color-enhanced pixel value of each pixel, and outputting the color-enhanced current video image.
The above embodiments are only intended to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention; therefore all equivalent technical solutions also belong to the scope of the present invention, and the patent protection scope of the present invention shall be defined by the claims.