CN102811353B - Method and system for improving video image definition - Google Patents

Method and system for improving video image definition

Info

Publication number
CN102811353B
CN102811353B (application CN201210196854.2A)
Authority
CN
China
Prior art keywords: pixel, value, current, pixel value, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210196854.2A
Other languages
Chinese (zh)
Other versions
CN102811353A (en)
Inventor
孙冰晶
刘江
黄森堂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Storm Group Co., Ltd.
Original Assignee
BEIJING BAOFENG TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING BAOFENG TECHNOLOGY Co Ltd
Priority to CN201210196854.2A
Publication of CN102811353A
Application granted
Publication of CN102811353B
Legal status: Active


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method and a system for improving video image definition, and relates to the technical field of video image processing. The method comprises the following steps of: S1, sampling a current video image to extract an average pixel value of the pixel points in the current video image; S2, traversing each pixel point in the current video image, and performing texture enhancement processing on a preset number of other pixel points on the periphery of each pixel point to obtain a pixel value of each pixel point which is subjected to the texture enhancement processing; and S3, performing color enhancement processing on the average pixel value and the pixel value of each pixel point which is subjected to the texture enhancement to obtain the pixel value of each pixel point which is subjected to the color enhancement, and outputting the current video image which is subjected to the color enhancement. By the method and the system, the texture and the color of the video image are processed respectively, so that the definition of the video image is improved.

Description

Method and system for improving video image definition
Technical field
The present invention relates to the technical field of video image processing, and in particular to a method and system for improving video image definition.
Background technology
In the current video playback field, improving video picture quality has always been a pursued goal, but existing techniques cannot both process video in real time and enhance detail in a way that better matches the visual characteristics of the human eye. Video image enhancement technology is therefore becoming increasingly important to the development of the video playback field.
Traditional image enhancement methods fall into two broad classes: one performs calculations directly on the pixel grey values that make up the image, such as non-linear grey-scale transformations and equalization processing; the other first transforms the image into the frequency domain, performs spectrum-analysis calculations there, and then obtains the processed pixel values through the inverse frequency-domain transform.
However, traditional image enhancement methods generally use a single, simple algorithm, such as brightening, increasing contrast, or boosting chroma. They enhance the image to some degree but do not achieve a truly satisfactory effect.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the present invention is how to improve the definition of a video image.
(2) Technical solution
To solve the above technical problem, the present invention provides a method for improving video image definition, comprising the following steps:
S1: sample the current video image to extract the average pixel value of the pixels in the current video image;
S2: traverse each pixel in the current video image and apply texture enhancement using a preset number of other pixels surrounding each pixel, to obtain each pixel's texture-enhanced pixel value;
S3: apply color enhancement using the average pixel value and each pixel's texture-enhanced pixel value, to obtain each pixel's color-enhanced pixel value, and output the color-enhanced current video image.
Preferably, step S1 comprises the following steps:
S11: divide the current video image evenly into n rows by pixel count, where n is an integer greater than zero;
S12: take m sample points from each odd row and m+1 or m-1 sample points from each even row, where m is an integer greater than zero;
S13: compute the average pixel value of the sample points selected in step S12 and use it as the average pixel value of the pixels in the current video image.
Alternatively, step S1 comprises the following steps:
S11: divide the current video image evenly into n columns by pixel count, where n is an integer greater than zero;
S12: take m sample points from each odd column and m+1 or m-1 sample points from each even column, where m is an integer greater than zero;
S13: compute the average pixel value of the sample points selected in step S12 and use it as the average pixel value of the pixels in the current video image.
Preferably, step S2 comprises the following steps:
S21: take the current pixel as the center and form a square of M × M pixels from the current pixel and the other pixels surrounding it, where M is a positive odd number;
S22: build an M × M Gaussian matrix in which the element at the center has the largest value and element values decrease with distance from the center;
S23: use the elements of the Gaussian matrix as the weights of the M × M pixels and compute the weighted sum of the M × M pixel values;
S24: compute the sum of the differences between the current pixel's value and the values of the M × M pixels;
S25: compute the sum of the absolute differences between the current pixel's value and the values of the M × M pixels;
S26: obtain the current pixel's texture-enhanced value from the weighted sum, the difference sum, the absolute-difference sum, and the current pixel's value;
S27: if every pixel in the current video image has been traversed, perform step S3; otherwise replace the current pixel with another unselected pixel and return to step S21.
Preferably, in step S26, the current pixel's texture-enhanced value is computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the current pixel's texture-enhanced value; clamp(a, b, c) is a range-limiting function in which a is the value to be limited, b the minimum, and c the maximum; Y_P is the Y component of the current pixel's value; sum is the pixel-value weighted sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt is the square root; and abs is the absolute value.
Preferably, in step S26, the current pixel's texture-enhanced value is computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the current pixel's texture-enhanced value; clamp(a, b, c) is a range-limiting function in which a is the value to be limited, b the minimum, and c the maximum; Y_P is the Y component of the current pixel's value; iHD is the intensity parameter factor, with a range of 0 to 10; sum is the pixel-value weighted sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt is the square root; and abs is the absolute value.
Preferably, step S3 performs the color enhancement by the following formulas:
Y_C = clamp((float)(Y_V + (Y_V - fMid) / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) / 32), 0.0, 255.0)
where Y_C, U_C, and V_C are the Y, U, and V components of the current pixel's color-enhanced value; Y_V is the Y component of the current pixel's texture-enhanced value; U_P and V_P are the U and V components of the current pixel's value; fMid is the average pixel value; and float denotes conversion to floating point.
Preferably, step S3 performs the color enhancement by the following formulas:
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
where Y_C, U_C, and V_C are the Y, U, and V components of the current pixel's color-enhanced value; iCS is the intensity parameter factor, with a range of 0 to 10; Y_V is the Y component of the current pixel's texture-enhanced value; U_P and V_P are the U and V components of the current pixel's value; fMid is the average pixel value; and float denotes conversion to floating point.
Preferably, the method further comprises, before step S1:
S0: compare the width and height of the current video image with the width and height of the target video image to judge whether magnification is needed; if so, magnify the current video image and then perform step S1; otherwise perform step S1 directly.
The invention also discloses a system for improving video image definition, the system comprising:
an average-value extraction module, for sampling the current video image to extract the average pixel value of the pixels in the current video image;
a texture enhancement module, for traversing each pixel in the current video image and applying texture enhancement using a preset number of other pixels surrounding each pixel, to obtain each pixel's texture-enhanced pixel value;
a color enhancement module, for applying color enhancement using the average pixel value and each pixel's texture-enhanced pixel value, to obtain each pixel's color-enhanced pixel value, and outputting the color-enhanced current video image.
(3) Beneficial effects
By processing the texture and the color of the video image separately, the present invention improves the definition of the video image.
Brief description of the drawings
Fig. 1 is a flowchart of the method for improving video image definition according to one embodiment of the present invention;
Fig. 2 compares the texture effect before and after optimization by the method shown in Fig. 1;
Fig. 3 compares the color effect before and after optimization by the method shown in Fig. 1;
Fig. 4 compares the effect of the method shown in Fig. 1 with the effect of traditional image enhancement.
Detailed description of the embodiments
The specific embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples illustrate the present invention but do not limit its scope.
Fig. 1 is a flowchart of the method for improving video image definition according to one embodiment of the present invention. With reference to Fig. 1, the method comprises the following steps:
S1: sample the current video image to extract the average pixel value of the pixels in the current video image;
S2: traverse each pixel in the current video image and apply texture enhancement using a preset number of other pixels surrounding each pixel, to obtain each pixel's texture-enhanced pixel value (the effect is shown in Fig. 2);
S3: apply color enhancement using the average pixel value and each pixel's texture-enhanced pixel value, to obtain each pixel's color-enhanced pixel value (the effect is shown in Fig. 3), and output the color-enhanced current video image.
To make extraction of the average pixel value efficient while keeping it as accurate as possible, step S1 preferably comprises the following steps:
S11: divide the current video image evenly into n rows by pixel count, where n is an integer greater than zero;
S12: take m sample points from each odd row and m+1 or m-1 sample points from each even row, where m is an integer greater than zero;
S13: compute the average pixel value of the sample points selected in step S12 and use it as the average pixel value of the pixels in the current video image.
Alternatively, step S1 may comprise the following steps:
S11: divide the current video image evenly into n columns by pixel count, where n is an integer greater than zero;
S12: take m sample points from each odd column and m+1 or m-1 sample points from each even column, where m is an integer greater than zero;
S13: compute the average pixel value of the sample points selected in step S12 and use it as the average pixel value of the pixels in the current video image.
Preferably, step S2 comprises the following steps:
S21: take the current pixel as the center and form a square of M × M pixels from the current pixel and the other pixels surrounding it, where M is a positive odd number;
S22: build an M × M Gaussian matrix in which the element at the center has the largest value and element values decrease with distance from the center;
S23: use the elements of the Gaussian matrix as the weights of the M × M pixels and compute the weighted sum of the M × M pixel values;
S24: compute the sum of the differences between the current pixel's value and the values of the M × M pixels;
S25: compute the sum of the absolute differences between the current pixel's value and the values of the M × M pixels;
S26: obtain the current pixel's texture-enhanced value from the weighted sum, the difference sum, the absolute-difference sum, and the current pixel's value;
S27: if every pixel in the current video image has been traversed, perform step S3; otherwise replace the current pixel with another unselected pixel and return to step S21.
Preferably, in step S26, the current pixel's texture-enhanced value is computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the current pixel's texture-enhanced value; clamp(a, b, c) is a range-limiting function (when a lies between b and c it returns a; when a is greater than c it returns c; when a is less than b it returns b), with a the value to be limited, b the minimum, and c the maximum; Y_P is the Y component of the current pixel's value; sum is the pixel-value weighted sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt is the square root; and abs is the absolute value.
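As a minimal illustration (not code from the patent), the range-limiting function defined above can be sketched in Python:

```python
def clamp(a, b, c):
    """Range-limiting function: return a if b <= a <= c,
    b if a is below the range, and c if a is above it."""
    return max(b, min(a, c))
```

For example, clamp(300.0, 0.0, 255.0) returns 255.0, which is how the formulas keep the Y, U, and V components inside the 8-bit range.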
To let the user adjust the degree of texture enhancement, an intensity parameter factor iHD is introduced. This parameter is set by the user and ranges from 0 to 10: when set to 0, texture enhancement is switched off; when set to 10, the degree of texture enhancement is greatest. In step S26, the current pixel's texture-enhanced value is then computed by the following formula:
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where Y_V is the Y component of the current pixel's texture-enhanced value; Y_P is the Y component of the current pixel's value; iHD is the intensity parameter factor, with a range of 0 to 10; sum is the pixel-value weighted sum; sumdiff is the difference sum; sumdiff_abs is the absolute-difference sum; float denotes conversion to floating point; sqrt is the square root; and abs is the absolute value.
Preferably, step S3 performs the color enhancement by the following formulas:
Y_C = clamp((float)(Y_V + (Y_V - fMid) / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) / 32), 0.0, 255.0)
where Y_C, U_C, and V_C are the Y, U, and V components of the current pixel's color-enhanced value; Y_V is the Y component of the current pixel's texture-enhanced value; U_P and V_P are the U and V components of the current pixel's value; fMid is the average pixel value; and float denotes conversion to floating point.
To let the user adjust the degree of color enhancement, an intensity parameter factor iCS is introduced. This parameter is set by the user and ranges from 0 to 10: when set to 0, color enhancement is switched off; when set to 10, the degree of color enhancement is greatest. Preferably, step S3 then performs the color enhancement by the following formulas:
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
where Y_C, U_C, and V_C are the Y, U, and V components of the current pixel's color-enhanced value; iCS is the intensity parameter factor, with a range of 0 to 10; Y_V is the Y component of the current pixel's texture-enhanced value; U_P and V_P are the U and V components of the current pixel's value; fMid is the average pixel value; and float denotes conversion to floating point.
Preferably, the method further comprises, before step S1:
S0: compare the width and height of the current video image with the width and height of the target video image to judge whether magnification is needed; if so, magnify the current video image and then perform step S1; otherwise perform step S1 directly.
The method of this embodiment differs from traditional image enhancement methods as follows:
First, traditional image enhancement methods generally use a single algorithm, such as brightening, increasing contrast, or boosting chroma. With reference to Fig. 4, they enhance the image to some degree but do not reach a satisfactory effect, whereas the method of this embodiment enhances both texture and color, and the display effect improves markedly.
Second, regarding algorithm adaptability: traditional image enhancement methods generally process a single static picture and achieve their effect only in specific scenarios and environments. During playback, however, video data is refreshed continuously to keep viewing smooth, and each refresh produces entirely new image data. An enhancement algorithm must therefore adapt dynamically in real time; traditional enhancement does not meet this requirement, while the method of this embodiment works in many scenarios and adapts well.
Third, regarding processing speed: video playback must consider not only the enhancement effect but also the huge volume of video data, and playback fluency must be guaranteed while the enhancement takes effect, which places very high demands on the efficiency of the enhancement algorithm. Traditional image enhancement methods, designed for static pictures, emphasize the effect and pay little attention to processing efficiency. The method of this embodiment improves the display effect while also guaranteeing processing efficiency, so video playback fluency is unaffected.
Finally, regarding generality: most traditional image enhancement methods operate on ordinary pictures (e.g. BMP bitmaps) and only need to support the RGB32 format, whereas video data comes in many formats, such as RGB32, RGB24, RGB555, YV12, NV12, and YUY2. Handling this diversity requires a highly general enhancement algorithm; the method of this embodiment applies to all of these video data formats and has good generality.
Embodiment 1
The method of the present invention is described below through a specific embodiment, which does not limit the protection scope of the present invention.
First, during playback the width and height of a given video file are themselves fixed, but the width and height seen by the viewer change dynamically with the real-time size of the playback window (e.g. full-screen versus windowed).
The larger the image, the more image data it carries, and the more information there is, the better the enhancement effect. Therefore, before enhancement, the original video width and height (i.e. the width and height of the current video image, denoted width_src and height_src) are preferably compared with the width and height finally seen by the viewer (i.e. the width and height of the target video image, denoted width_dst and height_dst) to judge whether magnification is needed.
When width_dst < width_src and height_dst < height_src, the original video is already larger than the display size, and no magnification is needed.
When width_dst > width_src or height_dst > height_src, the video seen by the viewer is larger than the original, and magnification is needed.
To guarantee high efficiency without affecting the result, the present invention magnifies by intervals. In this embodiment the magnification range is divided into 6 intervals:
when 0 < width_dst/width_src <= 1.20: magnify 1.0×;
when 1.20 < width_dst/width_src <= 1.45: magnify 1.3×;
when 1.45 < width_dst/width_src <= 1.80: magnify 1.6×;
when 1.80 < width_dst/width_src <= 2.50: magnify 2.0×;
when 2.50 < width_dst/width_src <= 3.50: magnify 3.0×;
when 3.50 < width_dst/width_src <= 100.0: magnify 4.0×.
After the intervals are fixed, the data within each interval follow a specific alignment rule, which makes it easier to optimize the efficiency of the interpolation algorithm.
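The six-interval lookup can be sketched as below. This is an illustrative reading of the table above, not code from the patent:

```python
def scale_factor(width_dst, width_src):
    """Map the width ratio width_dst/width_src to one of the six
    fixed magnification factors listed above."""
    ratio = width_dst / width_src
    for upper, factor in [(1.20, 1.0), (1.45, 1.3), (1.80, 1.6),
                          (2.50, 2.0), (3.50, 3.0), (100.0, 4.0)]:
        if ratio <= upper:
            return factor
    return 4.0  # ratios above 100 are treated as the largest interval
```

For example, upscaling a 1280-pixel-wide source to a 1920-pixel window gives a ratio of 1.5, which falls in the third interval and selects the 1.6× factor.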
Second, because the picture content changes constantly during playback, different video content needs a different parameter value. This value cannot be static; it must be computed dynamically from the content as the picture changes.
To collect sample points evenly across the image, this embodiment divides the image height height_dst into 10 rows spaced height_dst/10 pixels apart. The pixels sampled within each row must also be evenly spaced: rows 1, 3, 5, 7, and 9 each contribute 11 evenly spaced pixel values, and rows 2, 4, 6, 8, and 10 each contribute 10, so 105 pixel values are collected evenly over the whole image. These 105 values are summed and divided by 105, yielding in real time a parameter value suited to the current picture; the result is denoted fMid.
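The 105-point sampling can be sketched as follows. The patent does not specify the exact sample coordinates, so the evenly spaced positions used here are an assumption:

```python
def sample_fmid(image):
    """image: 2-D list of Y values (rows of pixels).
    Sample 11 points on rows 1, 3, 5, 7, 9 and 10 points on rows
    2, 4, 6, 8, 10 of a 10-row grid (105 points total) and return
    their mean as fMid."""
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for r in range(10):
        y = min(r * h // 10, h - 1)        # top of each of the 10 bands
        points = 11 if r % 2 == 0 else 10  # odd rows (1-based) take 11
        for p in range(points):
            x = min(p * w // points, w - 1)  # assumed even spacing
            total += image[y][x]
            count += 1
    return total / count  # count is always 105
```

On a uniform image the result simply equals the constant pixel value, since all 105 samples are identical.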
Third, to make object outlines clearer, the contours of objects in the picture are traced again so that objects stand out clearly against the background.
Each pixel in the video image is traversed, and texture enhancement is applied to each pixel. The concrete steps are:
1. Take the current pixel as the center and form a square of 3 × 3 pixels from it and its surrounding pixels. Let the square be:
Y00 Y01 Y02
Y10 Y11 Y12
Y20 Y21 Y22
where the current pixel's value is Y11.
2. Build a 3 × 3 Gaussian matrix with the following structure:
0.0754 0.1230 0.0754
0.1230 0.2063 0.1230
0.0754 0.1230 0.0754
As this matrix shows, each position is given a different weight according to its distance from the center point. Position Y11 is the current pixel's own position, and its weight, 0.2063, is the largest of all; the other points receive 0.1230 or 0.0754 depending on their distance from Y11. The nearer a point is, the larger its weight and the greater its influence; the farther away it is, the smaller its weight and the smaller its influence.
3. Multiply each of the 9 pixel values in the 3 × 3 square by its corresponding weight in the Gaussian matrix (e.g. Y00*0.0754, Y01*0.1230, and so on) and add all the products. The formula is as follows:
sum=Y00*0.0754+Y01*0.1230+Y02*0.0754
+Y10*0.1230+Y11*0.2063+Y12*0.1230
+Y20*0.0754+Y21*0.1230+Y22*0.0754。
where sum is the pixel-value weighted sum.
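Steps 1–3 can be sketched as below; the weights are exactly the 3 × 3 Gaussian matrix given above:

```python
GAUSS = [[0.0754, 0.1230, 0.0754],
         [0.1230, 0.2063, 0.1230],
         [0.0754, 0.1230, 0.0754]]

def weighted_sum(block):
    """block: 3x3 list of Y values, block[1][1] being the current pixel.
    Returns sum = Y00*0.0754 + Y01*0.1230 + ... + Y22*0.0754."""
    return sum(block[i][j] * GAUSS[i][j]
               for i in range(3) for j in range(3))
```

Note that the nine weights add up to 0.9999 rather than exactly 1, so on a perfectly flat block the weighted sum comes out very slightly below the input value.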
4. To make object outlines clearer, the difference between the pixel and its neighboring points must be computed: subtract each of the 9 sampled values from Y11, then add the results to obtain the difference sum. The formula is as follows:
sumdiff=Y11-Y00+Y11-Y01+Y11-Y02
+Y11-Y10+Y11-Y11+Y11-Y12
+Y11-Y20+Y11-Y21+Y11-Y22。
where sumdiff is the difference sum, used as a parameter of the subsequent algorithm.
5. The difference sum computed in step 4 is signed and may be positive or negative, so the sum of the absolute differences is also needed. The formula is as follows:
sumdiff_abs=abs(Y11-Y00)+abs(Y11-Y01)+abs(Y11-Y02)
+abs(Y11-Y10)+abs(Y11-Y11)+abs(Y11-Y12)
+abs(Y11-Y20)+abs(Y11-Y21)+abs(Y11-Y22)。
where abs() takes the absolute value and sumdiff_abs is the absolute-difference sum.
6. Obtain the current pixel's texture-enhanced value from the pixel-value weighted sum, the difference sum, the absolute-difference sum, and the current pixel's value. The computation formula is:
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
where iHD is the intensity parameter factor, adjustable in the range 0 to 10: when set to 0 the operation is switched off, when set to 10 the operation strength is greatest, and between 0 and 10 a larger value means stronger enhancement; Y_V is the Y component of the current pixel's texture-enhanced value; and Y_P is the Y component of the current pixel's value.
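Steps 3–6 can be combined into one sketch. This is an illustrative reading of the formulas above, with the Gaussian weights and clamp repeated so the example is self-contained:

```python
import math

GAUSS = [[0.0754, 0.1230, 0.0754],
         [0.1230, 0.2063, 0.1230],
         [0.0754, 0.1230, 0.0754]]

def clamp(a, b, c):
    return max(b, min(a, c))

def texture_enhance(block, iHD=10):
    """block: 3x3 Y values with the current pixel Y11 at block[1][1];
    iHD: intensity parameter factor in [0, 10]."""
    y11 = block[1][1]
    s = sum(block[i][j] * GAUSS[i][j] for i in range(3) for j in range(3))
    sumdiff = sum(y11 - block[i][j] for i in range(3) for j in range(3))
    sumdiff_abs = sum(abs(y11 - block[i][j]) for i in range(3) for j in range(3))
    # sumdiff_abs >= |sumdiff| always holds, so the sqrt argument is >= 0
    edge = clamp(math.sqrt((sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)
    return clamp(float(y11) + (y11 - s) * math.sqrt(iHD) * edge, 0.0, 255.0)
```

On a perfectly flat block both difference sums are zero, so the edge factor is zero and the pixel is returned unchanged (apart from clamping); the enhancement only acts where the neighborhood varies.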
7. If every pixel in the current video image has been traversed, proceed to the next step; otherwise replace the current pixel with another unselected pixel and return to step 1.
Finally, color enhancement is applied using the average pixel value and each pixel's texture-enhanced value.
For the Y component, the value fMid obtained from the sampling above is used. The operation formulas are:
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
where Y_C, U_C, and V_C are the Y, U, and V components of the current pixel's color-enhanced value; iCS is the intensity parameter factor, adjustable in the range 0 to 10: when set to 0 the operation is switched off, when set to 10 the operation strength is greatest, and between 0 and 10 a larger value means stronger enhancement; Y_V is the Y component of the current pixel's texture-enhanced value; U_P and V_P are the U and V components of the current pixel's value; fMid is the average pixel value; and float denotes conversion to floating point.
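The three color-enhancement formulas can be sketched as below (clamp is repeated so the example is self-contained; the component values are assumed to already lie in [0, 255]):

```python
def clamp(a, b, c):
    return max(b, min(a, c))

def color_enhance(y_v, u_p, v_p, fMid, iCS=10):
    """Apply the Y/U/V color-enhancement formulas above.
    y_v: texture-enhanced Y; u_p, v_p: original U and V;
    fMid: the sampled average pixel value; iCS: intensity factor in [0, 10]."""
    y_c = clamp(float(y_v + (y_v - fMid) * iCS / 22), 0.0, 255.0)
    u_c = clamp(float(u_p + (u_p - 128) * iCS / 32), 0.0, 255.0)
    v_c = clamp(float(v_p + (v_p - 128) * iCS / 32), 0.0, 255.0)
    return y_c, u_c, v_c
```

The formulas push each component away from its midpoint (fMid for luma, 128 for chroma), so components above the midpoint are increased and components below it are decreased; with iCS = 0 all three pass through unchanged.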
The invention also discloses a system for improving video image clarity, said system comprising:
a mean-value extraction module, for sampling the current video image to extract the average pixel value of the pixels in the current video image;
a texture enhancement module, for traversing each pixel in the current video image and performing texture enhancement processing on a preset number of other pixels surrounding each pixel, to obtain the texture-enhanced pixel value of each pixel;
a color enhancement module, for performing color enhancement processing on said average pixel value and the texture-enhanced pixel value of each pixel, to obtain the color-enhanced pixel value of each pixel, and outputting the color-enhanced current video image.
The above embodiments serve only to illustrate the present invention and do not limit it. Those of ordinary skill in the relevant technical field may make various changes and modifications without departing from the spirit and scope of the present invention; therefore all equivalent technical solutions also fall within the scope of the present invention, and the scope of patent protection of the present invention shall be defined by the claims.

Claims (8)

1. A method for improving video image clarity, characterized in that said method comprises the following steps:
S1: sampling the current video image to extract the average pixel value of the pixels in the current video image;
S2: traversing each pixel in the current video image and performing texture enhancement processing on a preset number of other pixels surrounding each pixel, to obtain the texture-enhanced pixel value of each pixel;
S3: performing color enhancement processing on said average pixel value and the texture-enhanced pixel value of each pixel, to obtain the color-enhanced pixel value of each pixel, and outputting the color-enhanced current video image;
Wherein, step S2 specifically comprises the following steps:
S21: taking the current pixel as the center, forming a square of M × M pixels from the other pixels surrounding said current pixel, said M being a positive odd number;
S22: constructing an M × M Gaussian matrix, in which the element at the center of said Gaussian matrix has the largest value and elements farther from the center of said Gaussian matrix have smaller values;
S23: using the elements of said Gaussian matrix as the weights of said M × M pixels, and calculating the weighted sum of the pixel values of said M × M pixels;
S24: calculating the sum of the differences between the pixel values of said M × M pixels and the pixel value of the current pixel;
S25: calculating the sum of the absolute differences between the pixel values of said M × M pixels and the pixel value of the current pixel;
S26: obtaining the texture-enhanced pixel value of the current pixel from said weighted sum of pixel values, said sum of differences, said sum of absolute differences, and the pixel value of the current pixel;
S27: determining whether every pixel in the current video image has been traversed; if so, performing step S3; otherwise replacing the current pixel with another unselected pixel and returning to step S21;
Wherein, step S3 performs color enhancement processing by the following formulas,
Y_C = clamp((float)(Y_V + (Y_V - fMid) / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) / 32), 0.0, 255.0)
Wherein, Y_C, U_C and V_C are the Y, U and V components of the pixel value of the current pixel after color enhancement; Y_V is the Y component of the pixel value of the current pixel after texture enhancement; U_P and V_P are the U and V components of the pixel value of the current pixel; fMid is said average pixel value; float denotes floating-point conversion; and clamp(a, b, c) is a range-limiting function: when the value of a is between b and c it returns the value of a; when the value of a is greater than c it returns the value of c; when the value of a is less than b it returns the value of b; a is the value to be limited, b is the minimum, and c is the maximum.
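Steps S21 through S25 can be sketched in Python as follows (a non-authoritative illustration under my own naming; the claim does not fix the Gaussian spread, so sigma is an assumed parameter, and the sketch assumes the M × M window stays inside the image):

```python
import math

def gaussian_matrix(M, sigma=1.0):
    """S22: M x M matrix whose center element is largest and whose values
    fall off with distance from the center (normalized to sum to 1)."""
    c = M // 2
    g = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
          for j in range(M)] for i in range(M)]
    s = sum(sum(row) for row in g)
    return [[v / s for v in row] for row in g]

def window_statistics(y, row, col, M, sigma=1.0):
    """S21, S23-S25: for the M x M square centered on (row, col), return the
    Gaussian-weighted sum of pixel values, the sum of signed differences from
    the center pixel, and the sum of absolute differences."""
    g = gaussian_matrix(M, sigma)
    c = M // 2
    weighted = sumdiff = sumdiff_abs = 0.0
    for i in range(M):
        for j in range(M):
            p = y[row - c + i][col - c + j]  # window pixel (assumed in-bounds)
            weighted += g[i][j] * p
            d = p - y[row][col]
            sumdiff += d
            sumdiff_abs += abs(d)
    return weighted, sumdiff, sumdiff_abs
```

On a flat region the weighted sum equals the pixel value and both difference sums are zero, so the later texture formula leaves such pixels untouched.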
2. The method of claim 1, characterized in that step S1 specifically comprises the following steps:
S11: evenly dividing the current video image into n rows according to the number of pixels, said n being an integer greater than zero;
S12: taking m sample points from each odd-numbered row of the n rows and m+1 or m-1 sample points from each even-numbered row, said m being an integer greater than zero;
S13: calculating the average pixel value of the sample points selected in step S12, and using the average pixel value of the sample points as the average pixel value of the pixels in the current video image.
3. The method of claim 1, characterized in that step S1 specifically comprises the following steps:
S11: evenly dividing the current video image into n columns according to the number of pixels, said n being an integer greater than zero;
S12: taking m sample points from each odd-numbered column of the n columns and m+1 or m-1 sample points from each even-numbered column, said m being an integer greater than zero;
S13: calculating the average pixel value of the sample points selected in step S12, and using the average pixel value of the sample points as the average pixel value of the pixels in the current video image.
4. The method of claim 1, characterized in that, in step S26, the texture-enhanced pixel value of the current pixel is calculated by the following formula,
Y_V = clamp((float)(Y_P + (Y_P - sum) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
Wherein, Y_V is the Y component of the pixel value of the current pixel after texture enhancement, Y_P is the Y component of the pixel value of the current pixel, sum is said weighted sum of pixel values, sumdiff is said sum of differences, sumdiff_abs is said sum of absolute differences, float denotes floating-point conversion, sqrt denotes taking the square root, and abs denotes taking the absolute value.
5. The method of claim 1, characterized in that, in step S26, the texture-enhanced pixel value of the current pixel is calculated by the following formula,
Y_V = clamp((float)(Y_P + (Y_P - sum) * sqrt((float)iHD) * clamp(sqrt((float)(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)), 0.0, 255.0)
Wherein, Y_V is the Y component of the pixel value of the current pixel after texture enhancement, Y_P is the Y component of the pixel value of the current pixel, iHD is the intensity parameter factor with a value range of 0~10, sum is said weighted sum of pixel values, sumdiff is said sum of differences, sumdiff_abs is said sum of absolute differences, float denotes floating-point conversion, sqrt denotes taking the square root, and abs denotes taking the absolute value.
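A non-authoritative Python sketch of this texture-enhancement formula (names are my own; I read the inner clamp as limiting sqrt((sumdiff_abs - |sumdiff|) / 5.0) to the range [0, 1], which the original punctuation leaves slightly ambiguous):

```python
import math

def clamp(a, b, c):
    """Range limiter: returns a if b <= a <= c, else the nearest bound."""
    return max(b, min(a, c))

def texture_enhance_y(y_p, weighted_sum, sumdiff, sumdiff_abs, iHD):
    """Texture enhancement of the Y component.
    y_p: Y of the current pixel; weighted_sum: Gaussian-weighted sum of the
    M x M window; sumdiff / sumdiff_abs: signed / absolute difference sums;
    iHD: intensity factor, 0-10 (0 switches the operation off)."""
    # Texture measure: large when the window differences have mixed signs
    # (detail), zero on flat regions and uniform gradients.
    edge = clamp(math.sqrt(float(sumdiff_abs - abs(sumdiff)) / 5.0), 0.0, 1.0)
    return clamp(float(y_p + (y_p - weighted_sum) * math.sqrt(float(iHD)) * edge),
                 0.0, 255.0)

# On a flat region sumdiff_abs == |sumdiff|, so the pixel is left unchanged.
print(texture_enhance_y(100, 90, -10, 10, 10))  # -> 100.0
```

This is an unsharp-masking shape: the pixel is pushed away from its Gaussian-smoothed value, but only in proportion to the local texture measure, which keeps smooth areas free of amplified noise.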
6. The method of claim 4 or 5, characterized in that step S3 performs color enhancement processing by the following formulas,
Y_C = clamp((float)(Y_V + (Y_V - fMid) * iCS / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) * iCS / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) * iCS / 32), 0.0, 255.0)
Wherein, Y_C, U_C and V_C are the Y, U and V components of the pixel value of the current pixel after color enhancement, iCS is the intensity parameter factor with a value range of 0~10, Y_V is the Y component of the pixel value of the current pixel after texture enhancement, U_P and V_P are the U and V components of the pixel value of the current pixel, fMid is said average pixel value, and float denotes floating-point conversion.
7. The method of claim 1, characterized in that it further comprises, before step S1, the following step:
S0: comparing the width and height of said current video image with the width and height of the target video image to determine whether enlargement is needed; if so, enlarging said current video image and then performing step S1; otherwise performing step S1 directly.
8. A system for improving video image clarity, characterized in that said system comprises:
a mean-value extraction module, for sampling the current video image to extract the average pixel value of the pixels in the current video image;
a texture enhancement module, for traversing each pixel in the current video image and performing texture enhancement processing on a preset number of other pixels surrounding each pixel, to obtain the texture-enhanced pixel value of each pixel;
a color enhancement module, for performing color enhancement processing on said average pixel value and the texture-enhanced pixel value of each pixel, to obtain the color-enhanced pixel value of each pixel, and outputting the color-enhanced current video image;
Wherein, said texture enhancement module performs texture enhancement processing by carrying out the following steps:
S21: taking the current pixel as the center, forming a square of M × M pixels from the other pixels surrounding said current pixel, said M being a positive odd number;
S22: constructing an M × M Gaussian matrix, in which the element at the center of said Gaussian matrix has the largest value and elements farther from the center of said Gaussian matrix have smaller values;
S23: using the elements of said Gaussian matrix as the weights of said M × M pixels, and calculating the weighted sum of the pixel values of said M × M pixels;
S24: calculating the sum of the differences between the pixel values of said M × M pixels and the pixel value of the current pixel;
S25: calculating the sum of the absolute differences between the pixel values of said M × M pixels and the pixel value of the current pixel;
S26: obtaining the texture-enhanced pixel value of the current pixel from said weighted sum of pixel values, said sum of differences, said sum of absolute differences, and the pixel value of the current pixel;
S27: determining whether every pixel in the current video image has been traversed; if not, replacing the current pixel with another unselected pixel and returning to step S21;
Wherein, said color enhancement module performs color enhancement processing by the following formulas,
Y_C = clamp((float)(Y_V + (Y_V - fMid) / 22), 0.0, 255.0)
U_C = clamp((float)(U_P + (U_P - 128) / 32), 0.0, 255.0)
V_C = clamp((float)(V_P + (V_P - 128) / 32), 0.0, 255.0)
Wherein, Y_C, U_C and V_C are the Y, U and V components of the pixel value of the current pixel after color enhancement; Y_V is the Y component of the pixel value of the current pixel after texture enhancement; U_P and V_P are the U and V components of the pixel value of the current pixel; fMid is said average pixel value; float denotes floating-point conversion; and clamp(a, b, c) is a range-limiting function: when the value of a is between b and c it returns the value of a; when the value of a is greater than c it returns the value of c; when the value of a is less than b it returns the value of b; a is the value to be limited, b is the minimum, and c is the maximum.
CN201210196854.2A 2012-06-14 2012-06-14 Method and system for improving video image definition Active CN102811353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210196854.2A CN102811353B (en) 2012-06-14 2012-06-14 Method and system for improving video image definition


Publications (2)

Publication Number Publication Date
CN102811353A CN102811353A (en) 2012-12-05
CN102811353B true CN102811353B (en) 2014-09-24

Family

ID=47234916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210196854.2A Active CN102811353B (en) 2012-06-14 2012-06-14 Method and system for improving video image definition

Country Status (1)

Country Link
CN (1) CN102811353B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104093069B (en) * 2014-06-13 2018-11-06 北京奇艺世纪科技有限公司 A kind of video broadcasting method and player device
CN105447830B (en) * 2015-11-27 2018-05-25 合一网络技术(北京)有限公司 Dynamic video image clarity intensifying method and device
CN105469367B (en) * 2015-11-27 2018-03-02 合一网络技术(北京)有限公司 Dynamic video image definition intensifying method and device
CN106550261A (en) * 2016-12-12 2017-03-29 暴风集团股份有限公司 Lift the exhibiting method and system of video image clarity
KR20210112042A (en) * 2020-03-04 2021-09-14 에스케이하이닉스 주식회사 Image sensing device and operating method of the same
CN116567247A (en) * 2022-01-27 2023-08-08 腾讯科技(深圳)有限公司 Video encoding method, real-time communication method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1545327A (en) * 2003-11-10 2004-11-10 System and method for reinforcing video image quality
CN101854536A (en) * 2009-04-01 2010-10-06 深圳市融创天下科技发展有限公司 Method for improving image visual effect for video encoding and decoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005130046A (en) * 2003-10-21 2005-05-19 Olympus Corp Image pickup device
GB2476027A (en) * 2009-09-16 2011-06-15 Sharp Kk Display privacy image processing method to emphasise features of a secondary image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP Laid-Open Patent Publication 2005-130046 A, 2005.05.19



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP01 Change in the name or title of a patent holder

Address after: 100191, Beijing, Xueyuan Road, Haidian District No. 51, the first 13 towers of science and Technology Building

Patentee after: Storm group Limited by Share Ltd

Address before: 100191, Beijing, Xueyuan Road, Haidian District No. 51, the first 13 towers of science and Technology Building

Patentee before: Beijing Baofeng Technology Co., Ltd.

PP01 Preservation of patent right

Effective date of registration: 20191018

Granted publication date: 20140924

PD01 Discharge of preservation of patent

Date of cancellation: 20211018

Granted publication date: 20140924

PP01 Preservation of patent right

Effective date of registration: 20211018

Granted publication date: 20140924