CN104954770A - Image processing device and method thereof - Google Patents

Image processing device and method thereof

Info

Publication number
CN104954770A
CN104954770A
Authority
CN
China
Prior art keywords
signal
value
image
correction coefficient
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410125628.4A
Other languages
Chinese (zh)
Inventor
蔡婉清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Novatek Microelectronics Corp
Original Assignee
Novatek Microelectronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novatek Microelectronics Corp filed Critical Novatek Microelectronics Corp
Priority to CN201410125628.4A
Publication of CN104954770A
Legal status: Pending


Abstract

The invention provides an image processing device and a method thereof. The device comprises an effective-bit detector and a compensator. The effective-bit detector detects the number of effective bits in the bit depth of an image input signal and correspondingly outputs a correction coefficient. The compensator is coupled to the effective-bit detector to receive the correction coefficient and performs bit-number compensation on the image input signal according to the correction coefficient, thereby outputting a corresponding image output signal.

Description

Image processing apparatus and method thereof
Technical field
The invention relates to an image processing apparatus, and more particularly to an image processing apparatus and a method thereof.
Background
With the rapid progress of technology, high-definition displays have become increasingly common, allowing viewers to see more image detail. For example, a display with a High Definition Multimedia Interface (HDMI) can show pictures with a resolution of 1920x1080, and the now-popular 4K displays (4K resolution) reach 3840x2160 or 4096x2160 pixels. However, many of today's image input/playback devices, such as Digital Versatile Disc (DVD) players, personal computers (PCs), and set-top boxes (STBs), mostly provide image quality at resolutions such as 720x480 or 1920x1080, which differs from the resolution the aforementioned displays can provide. Moreover, the bit depth of the image signal provided by the image input/playback device (e.g., its color depth) is often different from the bit depth of the display.
Taking a DVD player as an example, the bit depth of the image signal output by the DVD player may be, for example, 6, 8, or 10 bits, while the bit depth of the image signal displayed/output by the display (e.g., a television) connected to the DVD player may be, for example, 8, 10, or 12 bits. When the bit depth of the image signal fed to the display (e.g., 6 bits) is smaller than the nominal bit depth of the display (e.g., 10 bits), there is a 4-bit mismatch between the effective bits of the input image signal and the nominal bit depth of the display. A so-called "false contour" phenomenon then often appears in the gradient regions of the image frame (e.g., at image edges), making those regions look coarse and uneven rather than smooth, which greatly degrades the viewer's perception of the displayed picture.
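As a minimal numerical illustration of this mismatch (not part of the original disclosure), the following Python sketch pads a 6-bit ramp into a 10-bit range by bit replication; the output advances in coarse steps of roughly 16 codes, and it is this staircase that shows up as false contours in smooth gradients.

    # Hypothetical illustration: expanding a 6-bit ramp to 10 bits leaves only
    # 64 distinct output codes, so smooth gradients become visible steps.
    def pad_6bit_to_10bit(v6):
        # Bit replication: shift left by 4 and refill the low bits with the MSBs.
        return (v6 << 4) | (v6 >> 2)

    ramp6 = list(range(64))                                # a smooth 6-bit gradient
    ramp10 = [pad_6bit_to_10bit(v) for v in ramp6]
    steps = [b - a for a, b in zip(ramp10, ramp10[1:])]
    print(len(set(ramp10)), "distinct codes out of 1024")  # 64
    print("step between neighbouring codes:", set(steps))  # {16, 17}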
Summary of the invention
The invention provides an image processing apparatus and a method thereof, which detect the number of effective bits (valid bits) in the bit depth of an image input signal and perform bit depth compensation on the image input signal, thereby effectively improving the display quality of the image frame shown on a display.
An image processing apparatus of the invention includes an effective-bit detector and a compensator. The effective-bit detector detects the number of effective bits in the bit depth of an image input signal and correspondingly outputs a correction coefficient. The compensator is coupled to the effective-bit detector to receive the correction coefficient and performs bit-number compensation on the image input signal according to the correction coefficient, thereby outputting a corresponding image output signal.
An image processing method of the invention, adapted for an image processing apparatus, includes: detecting the number of effective bits in the bit depth of an image input signal and correspondingly generating a correction coefficient; and performing bit-number compensation on the image input signal according to the correction coefficient to generate a corresponding image output signal.
In an embodiment of the invention, the effective-bit detector includes a signal statistics unit, an auto-correlation unit, and a quantization detector. The signal statistics unit collects statistics of the luminance values of the image input signal and outputs a luminance statistic. The auto-correlation unit is coupled to the signal statistics unit and converts the luminance statistic into an auto-correlation curve. The quantization detector is coupled to the auto-correlation unit, calculates the correction coefficient according to the auto-correlation curve, and outputs the correction coefficient to the compensator.
In an embodiment of the invention, the auto-correlation unit converts the luminance statistic into the auto-correlation curve according to a correlation function.
In an embodiment of the invention, the quantization detector finds a peak of the auto-correlation curve, high-pass filters the auto-correlation curve to obtain a filter curve, and calculates the correction coefficient according to the auto-correlation value of the auto-correlation curve and the filter value of the filter curve at the peak.
In an embodiment of the invention, the quantization detector converts the auto-correlation value of the auto-correlation curve at the peak into a first temporary parameter, converts the filter value of the filter curve at the peak into a second temporary parameter, and calculates the correction coefficient according to the first temporary parameter and the second temporary parameter.
In an embodiment of the invention, the quantization detector multiplies the first temporary parameter by the second temporary parameter to obtain the correction coefficient.
In an embodiment of the invention, the effective-bit detector includes a signal statistics unit, an auto-correlation unit, a quantization detector, and an image computing unit (graphic meter). The signal statistics unit collects statistics of the luminance values of the image input signal and outputs a luminance statistic. The auto-correlation unit is coupled to the signal statistics unit and converts the luminance statistic into an auto-correlation curve. The quantization detector is coupled to the auto-correlation unit and calculates an initial correction coefficient according to the auto-correlation curve. The image computing unit is coupled to the quantization detector to receive the initial correction coefficient, performs edge detection on the pixels in an image frame of the image input signal, and calculates the correction coefficient according to the initial correction coefficient and the result of the edge detection of the pixels.
In an embodiment of the invention, the quantization detector finds a peak of the auto-correlation curve, high-pass filters the auto-correlation curve to obtain a filter curve, converts the auto-correlation value of the auto-correlation curve at the peak into a first temporary parameter, converts the filter value of the filter curve at the peak into a second temporary parameter, and calculates the initial correction coefficient according to the first temporary parameter and the second temporary parameter.
In an embodiment of the invention, the edge detection includes: calculating, for a current pixel among the pixels, the sum of a first neighboring pixel group in a first direction as a first neighboring-pixel sum; calculating the sum of a second neighboring pixel group of the current pixel in a second direction as a second neighboring-pixel sum, the first direction and the second direction differing by 180 degrees; calculating the difference between the first neighboring-pixel sum and the second neighboring-pixel sum as a first edge value of the current pixel; counting a first correction gain value of the pixels according to the relation between the first edge values of the pixels and the initial correction coefficient; calculating the sum of a third neighboring pixel group of the current pixel in a third direction as a third neighboring-pixel sum; calculating the sum of a fourth neighboring pixel group in a fourth direction as a fourth neighboring-pixel sum, the third direction and the fourth direction differing by 180 degrees; calculating the difference between the third neighboring-pixel sum and the fourth neighboring-pixel sum as a second edge value of the current pixel; counting a second correction gain value of the pixels according to the relation between the second edge values of the pixels and the initial correction coefficient; and taking the first correction gain value and the second correction gain value as the result of the edge detection.
In an embodiment of the invention, calculating the correction coefficient includes multiplying the initial correction coefficient by the first correction gain value and the second correction gain value to obtain the correction coefficient.
In an embodiment of the invention, the compensator includes a first false contour reduction device and a second false contour reduction device. The first false contour reduction device receives the image input signal and performs a first false contour reduction operation on the image input signal according to the correction coefficient to output a first image correction signal. The second false contour reduction device is coupled to the first false contour reduction device, receives the first image correction signal, and performs a second false contour reduction operation on the first image correction signal according to the correction coefficient to output the image output signal.
In an embodiment of the invention, the first false contour reduction device includes a horizontal filtering unit, a dithering unit, a horizontal boundary detection unit, and a blending unit. The horizontal filtering unit judges whether the difference between a current pixel of the image input signal and its neighboring pixels in the horizontal direction is greater than the correction coefficient, and correspondingly outputs a filtered signal according to the judgment. The dithering unit is coupled to the horizontal filtering unit, receives the filtered signal, performs a dithering operation on it, and outputs a dithered signal. The horizontal boundary detection unit receives the image input signal and a chrominance signal, detects a horizontal boundary from them, and determines a horizontal effective value accordingly. The blending unit is coupled to the dithering unit and the horizontal boundary detection unit, performs a weighting operation on the image input signal and the dithered signal, and outputs the first image correction signal, wherein the blending unit determines the weights of the image input signal and the dithered signal according to the horizontal effective value.
In an embodiment of the invention, the horizontal boundary detection unit calculates a horizontal boundary level according to the chrominance signal and the image input signal, and compares the horizontal boundary level with a plurality of horizontal boundary thresholds to quantize the horizontal boundary level and obtain the horizontal effective value.
In an embodiment of the invention, the image input signal includes a luminance signal and the chrominance signal includes a red chrominance signal and a blue chrominance signal. The horizontal boundary detection unit selects the maximum among the horizontal gradient value of the luminance signal, the horizontal gradient value of the red chrominance signal, and the horizontal gradient value of the blue chrominance signal as the horizontal boundary level.
In an embodiment of the invention, the second false contour reduction device includes a vertical filtering unit, a dithering unit, a vertical boundary detection unit, and a blending unit. The vertical filtering unit judges whether the difference between a current pixel of the first image correction signal and its neighboring pixels in the vertical direction is greater than the correction coefficient, and correspondingly outputs a filtered signal according to the judgment. The dithering unit is coupled to the vertical filtering unit, receives the filtered signal, performs a dithering operation on it, and outputs a dithered signal. The vertical boundary detection unit receives the first image correction signal and the chrominance signal, detects a vertical boundary from them, and determines a vertical effective value accordingly. The blending unit is coupled to the dithering unit and the vertical boundary detection unit, performs a weighting operation on the first image correction signal and the dithered signal, and outputs the image output signal, wherein the blending unit determines the weights of the first image correction signal and the dithered signal according to the vertical effective value.
In an embodiment of the invention, the image processing apparatus further includes a buffer unit for buffering the image input signal so that the image input signal is synchronized with the correction coefficient, and for feeding the buffered image input signal to the compensator.
Based on the above, in the image processing apparatus and method proposed by the invention, the effective-bit detector in the image processing apparatus detects the number of effective bits in the bit depth of the image input signal and processes the image input signal to obtain a correction coefficient, which is output to the compensator. The compensator then performs bit-number compensation on the insufficient bit depth of the image input signal according to this correction coefficient, thereby effectively improving the display quality of the image frame shown on the display and avoiding the false contour phenomenon.
In order to make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block schematic diagram of an image processing apparatus according to an embodiment of the invention;
Fig. 2 is a block schematic diagram of the inside of the effective-bit detector and the compensator according to an embodiment of the invention;
Fig. 3 is a luminance histogram output by the signal statistics unit according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the auto-correlation curve output by the auto-correlation unit according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the auto-correlation intensity curve output by the quantization detector according to an embodiment of the invention;
Fig. 6a and Fig. 6b are schematic diagrams of lookup tables of the quantization detector according to an embodiment of the invention;
Fig. 7 is a schematic diagram of pixels in an image frame of the image input signal according to an embodiment of the invention;
Fig. 8a and Fig. 8b are schematic diagrams of pixel arrangements according to an embodiment of the invention;
Fig. 9a and Fig. 9b are schematic diagrams of lookup tables of the image computing unit according to an embodiment of the invention;
Fig. 10 is a block schematic diagram of the inside of the first false contour reduction device according to an embodiment of the invention;
Fig. 11 is a schematic diagram of a lookup table of the horizontal boundary detection unit according to an embodiment of the invention;
Fig. 12 is a block schematic diagram of the inside of the second false contour reduction device of Fig. 2 according to an embodiment of the invention;
Fig. 13 is a schematic diagram of a lookup table of the vertical boundary detection unit according to an embodiment of the invention;
Fig. 14 is a circuit block schematic diagram of the inside of the effective-bit detector and the compensator according to another embodiment of the invention;
Fig. 15 is a flow chart of an image processing method according to an embodiment of the invention;
Fig. 16 is a schematic flow chart illustrating step S100 of Fig. 15 according to an embodiment of the invention;
Fig. 17 is a schematic flow chart illustrating step S130 of Fig. 16 according to an embodiment of the invention;
Fig. 18 is a schematic flow chart illustrating step S136 of Fig. 17 according to an embodiment of the invention;
Fig. 19 is a schematic flow chart illustrating step S100 of Fig. 15 according to another embodiment of the invention;
Fig. 20 is a flow chart illustrating step S1930 of Fig. 19 according to an embodiment of the invention;
Fig. 21 is a schematic flow chart illustrating step S1940 of Fig. 19 according to an embodiment of the invention;
Fig. 22 is a schematic flow chart illustrating step S1944 of Fig. 21 according to an embodiment of the invention;
Fig. 23 is a schematic flow chart illustrating step S200 of Fig. 15 according to an embodiment of the invention;
Fig. 24 is a schematic flow chart illustrating step S210 of Fig. 23 according to an embodiment of the invention;
Fig. 25 is a schematic flow chart illustrating step S216 of Fig. 24 according to an embodiment of the invention;
Fig. 26 is a schematic flow chart illustrating step S220 of Fig. 23 according to an embodiment of the invention.
Description of reference numerals:
CbCr_in: chrominance signal;
hlpf_coef: horizontal effective value;
Q: initial correction coefficient;
Q_final: correction coefficient;
Q_gain1: first correction gain value;
Q_gain2: second correction gain value;
vlpf_coef: vertical effective value;
Y_in: image input signal;
Y_lpf_out: filtered signal;
Y_lpf_out': dithered signal;
Y_out: image output signal;
Y_out': first image correction signal;
100: image processing apparatus;
110: effective-bit detector;
112: signal statistics unit;
114: auto-correlation unit;
116: quantization detector;
118: image computing unit;
120: compensator;
122: first false contour reduction device;
122_2: horizontal filtering unit;
122_4, 124_4: dithering unit;
122_6: horizontal boundary detection unit;
122_8, 124_8: blending unit;
124: second false contour reduction device;
124_2: vertical filtering unit;
124_6: vertical boundary detection unit;
130: buffer;
400: auto-correlation curve;
500: filter curve;
S100, S200: step;
S110 ~ S130: step;
S132 ~ S136: step;
S136_1 ~ S136_3: step;
S210, S220: step;
S212 ~ S218: step;
S216_1, S216_2: step;
S222 ~ S228: step;
S1930 ~ S1950, S1932 ~ S1938, S1941 ~ S1948, S1944_1 ~ S1944_3: step.
Embodiment
Reference will now be made in detail to exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like elements/components.
Fig. 1 is a block schematic diagram of an image processing apparatus according to an embodiment of the invention. Referring to Fig. 1, the image processing apparatus 100 of this embodiment includes an effective-bit detector 110 and a compensator 120, although the invention is not limited thereto. The effective-bit detector 110 detects the number of effective bits (valid bits) in the bit depth of an image input signal Y_in and correspondingly outputs a correction coefficient Q_final. The compensator 120 is coupled to the effective-bit detector 110 to receive the correction coefficient Q_final and performs bit-number compensation on the image input signal Y_in according to the correction coefficient Q_final, thereby outputting a corresponding image output signal Y_out.
In this embodiment, the image processing apparatus 100 may be applied, for example, between an image input device (not shown, e.g., a DVD player) and a display (not shown, e.g., a television), although the invention is not limited thereto. The image processing apparatus 100 performs bit depth compensation on the image input signal Y_in provided by the image input device and outputs an image output signal Y_out that matches the nominal bit depth of the display. The image processing apparatus 100 can therefore reduce the "false contour" phenomenon.
Fig. 2 is a block schematic diagram of the inside of the effective-bit detector and the compensator according to an embodiment of the invention. The embodiment of Fig. 2 can be deduced from the related description of Fig. 1. Referring to Fig. 2, the effective-bit detector 110 of this embodiment includes a signal statistics unit 112, an auto-correlation unit 114, and a quantization detector 116, although the invention is not limited thereto. The signal statistics unit 112 receives the image input signal Y_in, collects statistics of its luminance values, and outputs a luminance statistic. The luminance statistic may be recorded and represented in any way. For example, in some embodiments the luminance statistic may include a luminance histogram (luma histogram) as shown in Fig. 3. Fig. 3 is a luminance histogram output by the signal statistics unit according to an embodiment of the invention, in which the horizontal axis t is the luminance value in the luminance histogram and the vertical axis X_t is the number of pixels in an image frame that have luminance value t. More specifically, the signal statistics unit 112 counts the number of pixels for each luminance value (i.e., gray-level value) in the image input signal Y_in, thereby obtaining the luminance histogram.
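As one possible model of the statistics described above (the function and variable names are illustrative, not taken from the patent), the signal statistics unit can be thought of as building a per-frame luma histogram:

    # Minimal sketch of the signal statistics unit: count, for every luma code t,
    # the number of pixels X_t in the current frame (8-bit example frame).
    def luma_histogram(frame, bit_depth=8):
        hist = [0] * (1 << bit_depth)
        for row in frame:
            for y in row:
                hist[y] += 1
        return hist

    frame = [[16, 16, 17, 18], [18, 18, 19, 20]]   # toy 2x4 frame of luma values
    X = luma_histogram(frame)
    print(X[16], X[18])                            # 2 3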
Returning to Fig. 2, the auto-correlation unit 114 in the effective-bit detector 110 is coupled to the signal statistics unit 112 and converts the luminance statistic shown in Fig. 3 into an auto-correlation curve 400, as shown in Fig. 4. Fig. 4 is a schematic diagram of the auto-correlation curve output by the auto-correlation unit according to an embodiment of the invention. In Fig. 4, the horizontal axis τ is the luminance span in the luminance histogram, and the vertical axis R(τ) is the correlation value between two luminance values separated by the luminance span τ.
In one embodiment, the auto-correlation unit 114 may convert the luminance statistic output by the signal statistics unit 112 into the auto-correlation curve 400 according to a correlation function. The luminance statistic may include a luminance histogram, and the correlation function may be as follows (but is not limited thereto):
R(τ) = Σ_t (X_t · X_{t+τ}) / Σ_t (X_t²)
where t is a luminance value in the luminance histogram, X_t is the number of pixels having luminance t in the luminance histogram, and X_{t+τ} is the number of pixels having luminance t+τ in the luminance histogram.
In another embodiment, the correlation function is as follows:
R(τ) = Σ_t [(X_t − μ) · (X_{t+τ} − μ)] / Σ_t (X_t² − μ²)
where t is a luminance value in the luminance histogram, X_t is the number of pixels having luminance t, X_{t+τ} is the number of pixels having luminance t+τ, and μ is the average of all X_t in the luminance histogram. However, the correlation function of this embodiment does not limit the implementation of the auto-correlation unit 114.
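A minimal Python sketch of the auto-correlation unit, implementing the first of the two correlation functions above (the helper names and the toy histogram are assumptions for illustration):

    # Sketch of the auto-correlation unit: convert a luma histogram X_t into the
    # curve R(tau) = sum_t(X_t * X_{t+tau}) / sum_t(X_t^2).
    def autocorrelation_curve(hist, max_lag=64):
        denom = sum(x * x for x in hist)
        curve = []
        for tau in range(1, max_lag + 1):
            num = sum(hist[t] * hist[t + tau] for t in range(len(hist) - tau))
            curve.append(num / denom if denom else 0.0)
        return curve                                   # curve[tau - 1] == R(tau)

    # A histogram occupying only every 4th luma code (as if 2 bits were padded)
    hist = [100 if t % 4 == 0 else 0 for t in range(256)]
    R = autocorrelation_curve(hist)
    print(R[3], R[1])   # R(4) is large, R(2) is zero: the peak reveals the quantization step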
Returning again to Fig. 2, the quantization detector 116 is coupled to the auto-correlation unit 114 and calculates an initial correction coefficient Q according to the auto-correlation curve 400 output by the auto-correlation unit 114. For example, the quantization detector 116 finds the peaks of the auto-correlation curve 400 along the vertical axis (e.g., the peak values R0 and R1 at the positions 1 and Q1 in Fig. 4) and high-pass filters the auto-correlation curve 400 to obtain a filter curve 500, as shown in Fig. 5. Fig. 5 is a schematic diagram of the auto-correlation intensity curve output by the quantization detector according to an embodiment of the invention. In Fig. 5, the horizontal axis τ is the luminance span in the luminance histogram, and the vertical axis R(τ) is the correlation value between two luminance values separated by the luminance span τ. The curve 400 shown in Fig. 5 is a local portion of the curve 400 shown in Fig. 4. The quantization detector 116 can calculate the initial correction coefficient Q according to the auto-correlation value R1 of the auto-correlation curve 400 and the filter value K1 of the filter curve 500 at the peak Q1. The following example of calculating the initial correction coefficient Q is for reference and should not be taken as a limitation.
For example, the quantization detector 116 may convert the auto-correlation value R1 of the auto-correlation curve 400 at the peak Q1 into a first temporary parameter Q_tmp1, and convert the filter value K1 of the filter curve 500 at the peak Q1 into a second temporary parameter Q_tmp2. After obtaining the first temporary parameter Q_tmp1 and the second temporary parameter Q_tmp2, the quantization detector 116 can calculate the initial correction coefficient Q according to the first temporary parameter Q_tmp1 and the second temporary parameter Q_tmp2.
Fig. 6a is a schematic diagram of a lookup table of the quantization detector according to an embodiment of the invention. In Fig. 6a, the horizontal axis represents the auto-correlation value of the auto-correlation curve 400 and the vertical axis represents the first temporary parameter Q_tmp1. In this embodiment, the quantization detector 116 can convert the auto-correlation value (e.g., R0, R1) of the auto-correlation curve 400 at a peak (e.g., 1, Q1) into the first temporary parameter Q_tmp1 according to the conversion relation shown in Fig. 6a. More specifically, the quantization detector 116 may take the auto-correlation value R0 at luminance span τ = 1 as a reference value and normalize the auto-correlation value R1 of the auto-correlation curve 400 at the peak Q1 to obtain a normalized value (e.g., R1/R0; the auto-correlation values at other positions can be deduced by analogy). It can then look up this normalized value and convert it into the first temporary parameter Q_tmp1, as shown in Fig. 6a. However, the above computation on the auto-correlation value is not a limitation.
Fig. 6b is a schematic diagram of a lookup table of the quantization detector according to an embodiment of the invention. In Fig. 6b, the horizontal axis represents the filter value of the filter curve 500 and the vertical axis represents the second temporary parameter Q_tmp2. Referring to Fig. 6b, the quantization detector 116 can also look up the filter value (e.g., K1) of the filter curve 500 at the peak (e.g., Q1) and convert it into the second temporary parameter Q_tmp2 according to the conversion relation shown in Fig. 6b.
After obtaining the first temporary parameter Q_tmp1 and the second temporary parameter Q_tmp2, the quantization detector 116 can calculate the initial correction coefficient Q from them. In one embodiment, the quantization detector 116 multiplies the first temporary parameter Q_tmp1 by the second temporary parameter Q_tmp2 to obtain the initial correction coefficient Q, e.g., Q = Q_tmp1*Q_tmp2. However, in other embodiments the way of computing the initial correction coefficient Q is not limited thereto.
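The following sketch outlines the quantization detector as described above. Only the structure follows the text (find the peak, normalize it against R(1), high-pass filter the curve, look up two temporary parameters, multiply them); the lookup tables and the simple high-pass kernel are invented placeholders, not values from the patent:

    # Sketch of the quantization detector: Q = Q_tmp1 * Q_tmp2, where both temporary
    # parameters come from (placeholder) lookup tables applied at the curve's peak.
    def lookup(value, table):
        # table: list of (threshold, output); returns output of the last threshold passed
        out = 0.0
        for th, val in table:
            if value >= th:
                out = val
        return out

    def initial_correction(R):                                   # R[i] == R(tau = i + 1)
        peak = max(range(1, len(R) - 1), key=lambda i: R[i])     # peak location, excluding tau = 1
        r_norm = R[peak] / R[0] if R[0] else 0.0                 # R1 normalized against R0 = R(1)
        hp = [R[i] - 0.5 * (R[i - 1] + R[i + 1])                 # simple high-pass filter of the curve
              for i in range(1, len(R) - 1)]
        k1 = hp[peak - 1]                                        # filter value K1 at the peak
        q_tmp1 = lookup(r_norm, [(0.5, 1.0), (0.8, 2.0), (0.95, 4.0)])   # placeholder table (Fig. 6a)
        q_tmp2 = lookup(k1,     [(0.1, 1.0), (0.3, 2.0), (0.6, 4.0)])    # placeholder table (Fig. 6b)
        return q_tmp1 * q_tmp2, peak + 1                         # initial coefficient Q and the step Q1

    R = [1.0, 0.1, 0.05, 0.9, 0.08, 0.04, 0.85]    # toy auto-correlation curve with a peak at tau = 4
    print(initial_correction(R))                   # (8.0, 4)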
Referring back to Fig. 2, in one embodiment the effective-bit detector 110 may further include an image computing unit 118 (graphic meter), as shown in Fig. 2. The image computing unit 118 is coupled to the quantization detector 116 to receive the initial correction coefficient Q, performs edge detection on the pixels in an image frame of the image input signal Y_in (described in more detail below), and calculates the correction coefficient Q_final according to the initial correction coefficient Q and the result of the edge detection of the pixels. In this way it can further distinguish whether the image input signal is a natural image or an artificial image (graphic image), so as to avoid misjudging false contours. In this embodiment, the image computing unit 118 may be disposed in the effective-bit detector 110 or built into the quantization detector 116, although the invention is not limited thereto. The above edge detection is described in more detail below with reference to Fig. 7, Fig. 8a, Fig. 8b, Fig. 9a, and Fig. 9b.
Fig. 7 is a schematic diagram of pixels in an image frame of the image input signal according to an embodiment of the invention. The image input signal Y_in includes the luminance values Y_{1,1}, Y_{1,2}, ..., Y_{1,hcnt}, ..., Y_{2,1}, Y_{2,2}, ..., Y_{vcnt,1}, Y_{vcnt,2}, ..., Y_{vcnt,hcnt} of the pixels in the current image frame, the pixels being ordered from left to right and from top to bottom as shown in Fig. 7, although the invention is not limited thereto.
Fig. 8a and Fig. 8b are schematic diagrams of pixel arrangements according to an embodiment of the invention. The image computing unit 118 may scan the pixels in the image frame of the image input signal Y_in one by one and perform edge detection in the manner shown in Fig. 8a and/or Fig. 8b while scanning the image frame. More specifically, the edge detection operates as follows. First, the image computing unit 118 scans the luminance values Y_{1,1} to Y_{vcnt,hcnt} of the pixels in the image frame of the image input signal Y_in one by one. Assume here that the luminance value of the currently scanned pixel is Y_c.
Referring to Fig. 8a, the image computing unit 118 calculates the sum of the first neighboring pixel group Y_{c-n}, Y_{c-n+1}, ..., Y_{c-1} of the current pixel Y_c in a first direction as a first neighboring-pixel sum. In this embodiment, the first direction is the row direction, although this is not a limitation. The image computing unit 118 then calculates the sum of the second neighboring pixel group Y_{c+1}, ..., Y_{c+n-1}, Y_{c+n} of the current pixel Y_c in a second direction as a second neighboring-pixel sum, the first direction and the second direction differing by 180 degrees. The image computing unit 118 can calculate the difference between the first neighboring-pixel sum and the second neighboring-pixel sum as the first edge value of the current pixel Y_c.
Referring to Fig. 8b, the image computing unit 118 calculates the sum of the third neighboring pixel group Y_{c-n}, Y_{c-n+1}, ..., Y_{c-1} of the current pixel Y_c in a third direction as a third neighboring-pixel sum. In this embodiment, the third direction is the column direction, although this is not a limitation. The image computing unit 118 then calculates the sum of the fourth neighboring pixel group Y_{c+1}, ..., Y_{c+n-1}, Y_{c+n} of the current pixel Y_c in a fourth direction as a fourth neighboring-pixel sum, the third direction and the fourth direction differing by 180 degrees. The image computing unit 118 can calculate the difference between the third neighboring-pixel sum and the fourth neighboring-pixel sum as the second edge value of the current pixel Y_c.
Taking Fig. 7 as an example, assume that the luminance value of the currently scanned pixel is Y_{x,y}, where 1 ≤ x ≤ vcnt, 1 ≤ y ≤ hcnt, and vcnt and hcnt are integers. By analogy with the description of Fig. 8a and Fig. 8b above, and assuming the neighboring-pixel distance n is 4, the first neighboring-pixel sum of the current pixel Y_{x,y} in the image frame of Fig. 7 is the sum of its four neighboring pixels on one side in the row direction and the second neighboring-pixel sum is the sum of its four neighboring pixels on the opposite side. The image computing unit 118 can calculate the difference between the first neighboring-pixel sum and the second neighboring-pixel sum as the first edge value Yhdiff_{x,y} of the current pixel Y_{x,y}. Likewise, the third neighboring-pixel sum of the current pixel Y_{x,y} is the sum of its four neighboring pixels on one side in the column direction and the fourth neighboring-pixel sum is the sum of its four neighboring pixels on the opposite side, and the image computing unit 118 can calculate the difference between the third neighboring-pixel sum and the fourth neighboring-pixel sum as the second edge value Yvdiff_{x,y} of the current pixel Y_{x,y}.
Then, the image computing unit 118 can count a first correction gain value Q_gain1 of the pixels according to the relation between the first edge values of all pixels in the image frame (e.g., the first edge value Yhdiff_{x,y} of pixel Y_{x,y}) and the initial correction coefficient Q. The following example of calculating the first correction gain value Q_gain1 is for reference and should not be taken as a limitation. More specifically, the image computing unit 118 counts the first correction gain value Q_gain1 of the pixels as follows. First, the image computing unit 118 counts, among the pixels of the image frame, the number of pixels that are located in the same row and whose first edge value is greater than a first threshold N and less than k times the initial correction coefficient Q, as the horizontal edge pixel count of that row, where k is a real number (e.g., 4 or another value). For example, the image computing unit 118 can count the horizontal edge pixel count contour_h_cnt_i of the i-th row of the image frame shown in Fig. 7; the counting of the i-th row's horizontal edge pixel count contour_h_cnt_i (the remaining rows can be deduced by analogy) proceeds as sketched below.
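Since the pseudocode listing for this statistic is not reproduced in this text, the following Python sketch reconstructs it under stated assumptions; the values of n, N, and k, the use of the absolute edge value, and the helper names are illustrative:

    # Reconstructed sketch of the i-th row statistic contour_h_cnt_i: count pixels
    # whose first edge value Yhdiff lies between the threshold N and k * Q.
    def first_edge_value(row, y, n):
        # sum of the n neighbours on the left minus the sum of the n neighbours on the right
        left = sum(row[y - n:y])
        right = sum(row[y + 1:y + 1 + n])
        return left - right

    def contour_h_cnt(row, Q, n=4, N=2, k=4):
        cnt = 0
        for y in range(n, len(row) - n):
            yhdiff = abs(first_edge_value(row, y, n))   # absolute value is an assumption
            if N < yhdiff < k * Q:
                cnt += 1
        return cnt

    row = [16, 16, 16, 16, 16, 20, 20, 20, 20, 20, 20, 20]   # a soft step of 4 codes
    print(contour_h_cnt(row, Q=8))                            # 4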
Then, the image computing unit 118 counts, among the rows of the image frame, the number of rows whose horizontal edge pixel count differs from the horizontal edge pixel count of the adjacent row by less than a second threshold th_h, as a horizontal edge line count Graphic_h_level. For example, the image computing unit 118 can examine the horizontal edge pixel counts contour_h_cnt_1 to contour_h_cnt_vcnt of rows 1 to vcnt of the image frame shown in Fig. 7 and accumulate them to obtain the horizontal edge line count Graphic_h_level of the image frame; the accumulation of Graphic_h_level proceeds as sketched below.
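Likewise, a hedged reconstruction of the row-level accumulation that yields Graphic_h_level (th_h and the example counts are placeholders):

    # Reconstructed sketch: Graphic_h_level counts rows whose horizontal edge pixel
    # count differs from that of the previous row by less than th_h.
    def graphic_h_level(row_counts, th_h=2):
        level = 0
        for prev, cur in zip(row_counts, row_counts[1:]):
            if abs(cur - prev) < th_h:
                level += 1
        return level

    contour_h_cnt_per_row = [12, 12, 13, 40, 41, 41]   # example per-row counts from the previous step
    print(graphic_h_level(contour_h_cnt_per_row))       # 4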
Finally, the image computing unit 118 can look up the horizontal edge line count Graphic_h_level to convert it into the first correction gain value Q_gain1, as shown in Fig. 9a. Fig. 9a is a schematic diagram of a lookup table of the image computing unit according to an embodiment of the invention. In Fig. 9a, the horizontal axis represents the horizontal edge line count Graphic_h_level and the vertical axis represents the first correction gain value Q_gain1. The image computing unit 118 converts the horizontal edge line count Graphic_h_level into the first correction gain value Q_gain1 by looking it up according to the conversion relation shown in Fig. 9a.
Similarly, the image computing unit 118 can count a second correction gain value Q_gain2 of the pixels according to the relation between the second edge values of all pixels in the image frame (e.g., the second edge value Yvdiff_{x,y} of pixel Y_{x,y}) and the initial correction coefficient Q. The following example of calculating the second correction gain value Q_gain2 is for reference and should not be taken as a limitation. First, the image computing unit 118 counts, among the pixels of the image frame, the number of pixels that are located in the same row and whose second edge value is greater than the first threshold N and less than k times the initial correction coefficient Q, as the vertical edge pixel count of that row, where k is a real number (e.g., 4 or another value). For example, the image computing unit 118 can count the vertical edge pixel count contour_v_cnt_i of the i-th row of the image frame shown in Fig. 7; the counting of contour_v_cnt_i (the remaining rows can be deduced by analogy) mirrors the horizontal case sketched above, with the second edge value Yvdiff used in place of the first edge value Yhdiff.
The image computing unit 118 then counts, among the rows of the image frame, the number of rows whose vertical edge pixel count differs from the vertical edge pixel count of the adjacent row by less than the second threshold th_h, as a vertical edge line count Graphic_v_level. For example, the image computing unit 118 can examine the vertical edge pixel counts contour_v_cnt_1 to contour_v_cnt_vcnt of rows 1 to vcnt of the image frame shown in Fig. 7 and accumulate them to obtain the vertical edge line count Graphic_v_level; the accumulation of Graphic_v_level is analogous to that of Graphic_h_level above.
The image computing unit 118 can then look up the vertical edge line count Graphic_v_level to convert it into the second correction gain value Q_gain2, as shown in Fig. 9b. Fig. 9b is a schematic diagram of a lookup table of the image computing unit according to an embodiment of the invention. In Fig. 9b, the horizontal axis represents the vertical edge line count Graphic_v_level and the vertical axis represents the second correction gain value Q_gain2. The image computing unit 118 converts the vertical edge line count Graphic_v_level into the second correction gain value Q_gain2 by looking it up according to the conversion relation shown in Fig. 9b. The way the image computing unit 118 calculates the second correction gain value Q_gain2 is similar to the way it calculates the first correction gain value Q_gain1; the difference is that the direction in which the image computing unit 118 performs edge detection on the pixels in the image frame of the image input signal Y_in is the vertical direction, i.e., the column direction.
After obtaining the first correction gain value Q_gain1 and the second correction gain value Q_gain2, the image computing unit 118 can take the first correction gain value Q_gain1 and the second correction gain value Q_gain2 as the result of the edge detection. In one embodiment, calculating the correction coefficient Q_final includes multiplying the initial correction coefficient Q by the first correction gain value Q_gain1 and the second correction gain value Q_gain2 to obtain the correction coefficient Q_final, e.g., Q_final = Q*Q_gain1*Q_gain2. However, the way of computing the correction coefficient Q_final is not limited thereto.
On the other hand, returning to Fig. 2, in this embodiment the compensator 120 includes a first false contour reduction device 122 and a second false contour reduction device 124. The first false contour reduction device 122 receives the image input signal Y_in and performs a first false contour reduction operation on the image input signal Y_in according to the correction coefficient Q_final to output a first image correction signal Y_out'. The second false contour reduction device 124 is coupled to the first false contour reduction device 122, receives the first image correction signal Y_out', and performs a second false contour reduction operation on the first image correction signal Y_out' according to the correction coefficient Q_final to output the image output signal Y_out. The cascading order of the first false contour reduction device 122 and the second false contour reduction device 124 is not limited to that shown in Fig. 2. For example, in other embodiments, the input of the second false contour reduction device 124 may receive the image input signal Y_in and the chrominance signal CbCr_in, the output of the second false contour reduction device 124 may output the first image correction signal to the input of the first false contour reduction device 122, and the output of the first false contour reduction device 122 may output the image output signal Y_out. For the embodiment shown in Fig. 2, the false contour reduction operation is described in more detail in Fig. 10, taking the first false contour reduction device 122 as an example.
Fig. 10 is a block schematic diagram of the inside of the first false contour reduction device according to an embodiment of the invention. In this embodiment, the image input signal Y_in includes a luminance signal. The first false contour reduction device 122 includes a horizontal filtering unit 122_2, a dithering unit 122_4, a horizontal boundary detection unit 122_6, and a blending unit 122_8, although the invention is not limited thereto. The horizontal filtering unit 122_2 receives the image input signal Y_in, judges whether the difference between the current pixel (e.g., the current pixel Y_c shown in Fig. 8a) and its neighboring pixels in the horizontal direction (e.g., neighboring pixel Y_{c+i}, i being an integer) is greater than the correction coefficient Q_final, and correspondingly outputs a filtered signal Y_lpf_out according to the judgment.
For example, in some embodiments the horizontal filtering unit 122_2 may include an edge preserved processor and a low pass filter (not shown). A first input and a second input of the edge preserved processor receive the correction coefficient Q_final and the image input signal Y_in, respectively. The output of the edge preserved processor is coupled to the input of the low pass filter, and the output of the low pass filter outputs the filtered signal Y_lpf_out to the input of the dithering unit 122_4. The low pass filter may be any type of low-pass filter circuit, such as a conventional low pass filter. The edge preserved processor judges whether the difference between the current pixel Y_c of the image input signal Y_in and the horizontal neighboring pixel Y_{c+i} is greater than the correction coefficient Q_final, decides accordingly whether to adjust the luminance of the horizontal neighboring pixel Y_{c+i} of the current pixel Y_c, and outputs the adjusted luminance signal `Y to the low pass filter. More specifically, when the difference between the current pixel Y_c of the image input signal Y_in and the horizontal neighboring pixel Y_{c+i} is greater than the correction coefficient Q_final, the edge preserved processor changes the horizontal neighboring pixel Y_{c+i} to the pixel value of the current pixel Y_c; otherwise, the edge preserved processor does not change the pixel value of the horizontal neighboring pixel Y_{c+i}. The operation of the edge preserved processor, with reference to Fig. 8a, is sketched below.
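The edge-preserving adjustment just described can be sketched as follows; the clamp-to-current-pixel rule follows the text, while the box-filter stand-in for the low pass filter and all names are assumptions:

    # Reconstructed sketch of the edge preserved processor: neighbours that differ
    # from the current pixel by more than Q_final are replaced by the current pixel,
    # so that real edges are not blurred by the following low-pass filter.
    def edge_preserve(line, c, n, q_final):
        window = line[c - n:c + n + 1]               # Y_{c-n} .. Y_{c+n}
        center = line[c]
        return [center if abs(v - center) > q_final else v for v in window]

    def low_pass(window):
        # simple (2n+1)-tap box filter as a stand-in for the patent's low pass filter
        return sum(window) / len(window)

    line = [100, 101, 102, 103, 104, 200, 200, 200, 200]   # smooth ramp, then a hard edge
    print(low_pass(edge_preserve(line, c=4, n=4, q_final=8)))   # ~102.9: the edge is not dragged in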
The edge preserved processor then outputs the adjusted luminance signal `Y to the low pass filter. For example, the edge preserved processor may output the adjusted luminance values `Y_{c-n} to `Y_{c+n} of the horizontal neighboring pixels around the current pixel Y_c to a (2n+1)-tap low pass filter. The (2n+1)-tap low pass filter filters these adjusted luminance values `Y_{c-n} to `Y_{c+n} and outputs the filtered signal Y_lpf_out to the dithering unit 122_4 of the next stage.
The dithering unit 122_4 is coupled to the horizontal filtering unit 122_2 to receive the filtered signal Y_lpf_out, performs a dithering operation on it, and outputs a dithered signal Y_lpf_out'. Dithering is an image-processing technique that exploits the fact that human vision averages the colors of a small area: in a palette with a limited number of colors, colors not on the palette are approximated by diffusion, so the dithering operation effectively increases the perceived color depth and makes the image look better. The dithering unit 122_4 may be any type of dither circuit, such as a conventional dither circuit.
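The patent leaves the dither circuit unspecified ("any type of dither circuit"); purely as an illustration, one conventional choice is an ordered (Bayer) dither that adds a position-dependent offset before truncating back to the target bit depth:

    # Illustrative ordered dither (one of many possible dither circuits): add a small
    # position-dependent threshold before truncating away the lowest bits.
    BAYER_2X2 = [[0, 2], [3, 1]]                     # classic 2x2 Bayer pattern

    def dither_pixel(value, x, y, drop_bits=2):
        threshold = BAYER_2X2[y % 2][x % 2]          # 0..3 for a 2-bit truncation
        return (int(value) + threshold) >> drop_bits

    row = [p + 0.5 for p in range(8)]                # fractional values after filtering
    print([dither_pixel(v, x, 0) for x, v in enumerate(row)])   # [0, 0, 0, 1, 1, 1, 1, 2]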
Meanwhile, the horizontal boundary detection unit 122_6 in the first false contour reduction device 122 receives the image input signal Y_in and the chrominance signal CbCr_in, detects a horizontal boundary level H_edge_level from them, and determines a horizontal effective value hlpf_coef accordingly. In more detail, the horizontal boundary detection unit 122_6 can calculate the horizontal gradient of the luminance Y, the horizontal gradient of the chrominance Cb, and the horizontal gradient of the chrominance Cr of the current pixel Y_c, and then select the maximum of the three as the horizontal boundary level H_edge_level. Please also refer to Fig. 11, which is a schematic diagram of a lookup table of the horizontal boundary detection unit according to an embodiment of the invention. In this embodiment, the horizontal boundary detection unit 122_6 compares the horizontal boundary level H_edge_level with a plurality of horizontal boundary thresholds (e.g., h_edge_th0, h_edge_th1, h_edge_th2, h_edge_th3) to quantize the horizontal boundary level H_edge_level and obtain the horizontal effective value hlpf_coef (e.g., Coef0, Coef1, Coef2, Coef3), as shown in Fig. 11. The decision of the horizontal effective value hlpf_coef, with reference to Fig. 11, is sketched below.
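A hedged reconstruction of the hlpf_coef decision: the horizontal boundary level is compared against the four thresholds of Fig. 11 and mapped to one of the coefficients. The concrete threshold and coefficient values, and the assumption that stronger edges receive a smaller filtering weight, are placeholders, not values from the patent:

    # Reconstructed sketch of the hlpf_coef decision: compare H_edge_level against
    # the horizontal boundary thresholds and pick the matching coefficient.
    def horizontal_effective_value(y_grad, cb_grad, cr_grad,
                                   thresholds=(8, 16, 32, 64),         # h_edge_th0..3 (placeholders)
                                   coefs=(1.0, 0.75, 0.5, 0.25, 0.0)): # Coef0..Coef3 plus "no filtering"
        h_edge_level = max(y_grad, cb_grad, cr_grad)
        for th, coef in zip(thresholds, coefs):
            if h_edge_level < th:
                return coef
        return coefs[-1]                              # strongest edge: keep the input untouched

    print(horizontal_effective_value(y_grad=5, cb_grad=12, cr_grad=3))   # 0.75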
In this embodiment, the image input signal Y_in includes the luminance signal (Y), and the chrominance signal CbCr_in includes a red chrominance signal (Cr) and a blue chrominance signal (Cb). Here the horizontal gradient value (horizontal gradient) is computed for each component, and the horizontal boundary detection unit 122_6 selects the maximum among the horizontal gradient value of the luminance signal Y, the horizontal gradient value of the red chrominance signal Cr, and the horizontal gradient value of the blue chrominance signal Cb as the horizontal boundary level H_edge_level.
Finally, returning to Fig. 10, the blending unit 122_8 is coupled to the dithering unit 122_4 and the horizontal boundary detection unit 122_6, performs a weighting operation on the image input signal Y_in and the dithered signal Y_lpf_out', and outputs the first image correction signal Y_out'. In this embodiment, the blending unit 122_8 determines the weights of the image input signal Y_in and the dithered signal Y_lpf_out' according to the horizontal effective value hlpf_coef. For example, in some embodiments the blending unit 122_8 may compute Y_out' = hlpf_coef*Y_lpf_out' + (1-hlpf_coef)*Y_in to obtain the first image correction signal Y_out'.
Likewise, in this embodiment the internal components and operation of the second false contour reduction device 124 are similar to those of the first false contour reduction device 122. The main difference between the first false contour reduction device 122 and the second false contour reduction device 124 is that the second false contour reduction device 124 operates in the vertical direction, so its description can be deduced from the related description of Fig. 10. For example, Fig. 12 is a block schematic diagram of the inside of the second false contour reduction device of Fig. 2 according to an embodiment of the invention. In this embodiment, the second false contour reduction device 124 includes a vertical filtering unit 124_2, a dithering unit 124_4, a vertical boundary detection unit 124_6, and a blending unit 124_8, although the invention is not limited thereto. The vertical filtering unit 124_2 receives the first image correction signal Y_out', judges whether the difference between the current pixel (e.g., the current pixel Y_c shown in Fig. 8b) and its neighboring pixels in the vertical direction (e.g., neighboring pixel Y_{c+i} shown in Fig. 8b, i being an integer) is greater than the correction coefficient Q_final, and correspondingly outputs a filtered signal to the dithering unit 124_4 according to the judgment. The dithering unit 124_4 shown in Fig. 12 can be deduced from the related description of the dithering unit 122_4 shown in Fig. 10 and is not repeated here.
In some embodiments, the vertical filtering unit 124_2 may include an edge preserved processor and a low pass filter. A first input and a second input of the edge preserved processor receive the correction coefficient Q_final and the first image correction signal Y_out', respectively. The output of the edge preserved processor is coupled to the input of the low pass filter, and the output of the low pass filter outputs the filtered signal to the input of the dithering unit 124_4. The low pass filter may be any type of low-pass filter circuit, such as a conventional low pass filter. The edge preserved processor judges whether the difference between the current pixel Y_c of the first image correction signal Y_out' and the vertical neighboring pixel Y_{c+i} is greater than the correction coefficient Q_final, decides accordingly whether to adjust the luminance of the vertical neighboring pixel Y_{c+i} of the current pixel Y_c, and outputs the adjusted luminance signal `Y to the low pass filter. More specifically, when the difference between the current pixel Y_c and the vertical neighboring pixel Y_{c+i} is greater than the correction coefficient Q_final, the edge preserved processor changes the vertical neighboring pixel Y_{c+i} to the pixel value of the current pixel Y_c; otherwise, the edge preserved processor does not change the pixel value of the vertical neighboring pixel Y_{c+i}. The operation of this edge preserved processor, with reference to Fig. 8b, mirrors the horizontal case sketched above.
Then, the edge preserved processor in the vertical filtering unit 124_2 outputs the adjusted luminance signal `Y to the low pass filter. For example, the edge preserved processor may output the adjusted luminance values `Y_{c-n} to `Y_{c+n} of the vertical neighboring pixels around the current pixel Y_c to a (2n+1)-tap low pass filter, which filters these adjusted luminance values `Y_{c-n} to `Y_{c+n} and outputs the filtered signal to the dithering unit 124_4. The dithering unit 124_4 performs a dithering operation on the filtered signal and outputs a dithered signal to the blending unit 124_8.
Meanwhile, the vertical boundary detection unit 124_6 in the second false contour reduction device 124 receives the first image correction signal Y_out' and the chrominance signal CbCr_in, detects a vertical boundary level V_edge_level from them, and determines a vertical effective value vlpf_coef accordingly. For example, the vertical boundary detection unit 124_6 can calculate the vertical gradient of the luminance Y, the vertical gradient of the chrominance Cb, and the vertical gradient of the chrominance Cr of the current pixel Y_c, and then select the maximum of the three as the vertical boundary level V_edge_level. Fig. 13 is a schematic diagram of a lookup table of the vertical boundary detection unit according to an embodiment of the invention. In this embodiment, the vertical boundary detection unit 124_6 compares the vertical boundary level V_edge_level with a plurality of vertical boundary thresholds (e.g., v_edge_th0, v_edge_th1, v_edge_th2, v_edge_th3) to quantize the vertical boundary level V_edge_level and obtain the vertical effective value vlpf_coef (e.g., Coef0, Coef1, Coef2, Coef3), as shown in Fig. 13. The decision of the vertical effective value vlpf_coef, with reference to Fig. 13, mirrors the horizontal case sketched above.
In the embodiment shown in Fig. 12, the first image correction signal Y_out' includes the luminance signal (Y), and the chrominance signal CbCr_in includes a red chrominance signal (Cr) and a blue chrominance signal (Cb). Here the vertical gradient value is computed for each component, and the vertical boundary detection unit 124_6 selects the maximum among the vertical gradient value of the luminance signal Y, the vertical gradient value of the red chrominance signal Cr, and the vertical gradient value of the blue chrominance signal Cb as the vertical boundary level V_edge_level.
Finally, returning to Fig. 12, the blending unit 124_8 is coupled to the dithering unit 124_4 and the vertical boundary detection unit 124_6, performs a weighting operation on the first image correction signal Y_out' and the dithered signal output by the dithering unit 124_4, and outputs the image output signal Y_out. In this embodiment, the blending unit 124_8 determines the weights of the first image correction signal Y_out' and of the dithered signal output by the dithering unit 124_4 according to the vertical effective value vlpf_coef output by the vertical boundary detection unit 124_6. The blending unit 124_8 shown in Fig. 12 can be deduced from the related description of the blending unit 122_8 shown in Fig. 10 and is not repeated here.
Fig. 14 is a circuit block schematic diagram of the inside of the effective-bit detector and the compensator according to another embodiment of the invention. The embodiment of Fig. 14 can be deduced from the related description of Fig. 2. Referring to Fig. 14, the effective-bit detector 110 of this embodiment includes a signal statistics unit 112, an auto-correlation unit 114, and a quantization detector 116, although the invention is not limited thereto. The difference between this embodiment and Fig. 2 is that, when there is no need to further distinguish whether the image input signal is a natural image or an artificial image, the effective-bit detector 110 shown in Fig. 14 may omit the image computing unit 118, and the initial correction coefficient Q calculated by the quantization detector 116 is used directly as the correction coefficient Q_final and sent to the compensator 120. The remaining components can refer to the related description of Fig. 2 and are not repeated here.
In addition, it should be noted that the correction coefficient Q_final output by the significant bit detector 110 may lag the image input signal Y_in by one picture (frame). Therefore, in the embodiment shown in Figure 14, the image processing apparatus 100 may further comprise a buffer 130. The output of the buffer 130 is coupled to the input of the compensator 120 in order to buffer the image input signal Y_in and the chrominance signal CbCr_in, so that the buffered image input signal Y_in1 (and the buffered chrominance signal CbCr_in1) are synchronized with the correction coefficient Q_final, and the buffered image input signal Y_in1 and chrominance signal CbCr_in1 are input to the compensator 120; the invention, however, is not limited thereto.
As for the correction method of the image processing apparatus 100 of the embodiments of the invention, for clearer illustration the following description is given in conjunction with the elements of the image processing apparatus 100 shown in Fig. 1, Fig. 2 (or Figure 14) and Figure 10 above, so as to explain the detailed flow of the correction method of the image processing apparatus 100 according to different embodiments of the invention.
Figure 15 is a flow chart of an image processing method according to an embodiment of the invention. Referring to Fig. 1 and Figure 15, first, the significant bit detector 110 detects the effective number of bits in the bit depth of the image input signal Y_in, and correspondingly produces the correction coefficient Q_final for the compensator 120 (step S100). Then, the compensator 120 performs bit number compensation on the image input signal Y_in according to the correction coefficient Q_final, thereby outputting the corresponding image output signal Y_out (step S200).
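As an intuition for step S100, the following simplified sketch estimates the effective bit number by checking how many least-significant bits are never used in a frame. This is only a stand-in for the auto-correlation based detector of the embodiments (which also works when the padded low bits are not all zero); the nominal depth of 10 bits is an assumed example.

import numpy as np

def effective_bits(y_in, nominal_depth=10):
    # OR all samples together; low bits that stay zero in every sample
    # are unused and therefore carry no image information.
    combined = int(np.bitwise_or.reduce(np.asarray(y_in, dtype=np.uint16).ravel()))
    if combined == 0:
        return 0
    trailing_zeros = (combined & -combined).bit_length() - 1
    return nominal_depth - trailing_zeros

# A 6-bit source stored in a 10-bit container (values shifted left by 4 bits):
frame = np.array([[16, 32, 48], [64, 80, 96]], dtype=np.uint16)
print(effective_bits(frame))   # -> 6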
Figure 16 is a flow diagram illustrating step S100 of Figure 15 according to an embodiment of the invention. Step S100 of the present embodiment comprises sub-steps S110 to S130. Referring to Figure 14, Fig. 3 to Fig. 5 and Figure 16, the signal statistics unit 112 gathers statistics of the luminance values of the image input signal Y_in and outputs a luminance statistic (step S110). Then, the auto-correlation unit 114 converts the luminance statistic into the auto-correlation curve 400 (step S120). The quantization detector 116 calculates the initial correction coefficient Q according to the auto-correlation curve 400, and sends the initial correction coefficient Q as the correction coefficient Q_final to the compensator 120 (step S130).
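A minimal sketch of sub-steps S110 and S120, assuming the luminance statistic is a 256-bin histogram of an 8-bit-container signal and the correlation function is a plain autocorrelation of that histogram (the patent fixes neither choice). A coarsely quantized input produces a periodic histogram whose autocorrelation peaks at the quantization step.

import numpy as np

def luminance_autocorrelation(y_in, bins=256, max_lag=64):
    # Step S110: statistics of the luminance values (here a histogram).
    hist, _ = np.histogram(np.asarray(y_in).ravel(), bins=bins, range=(0, bins))
    hist = hist.astype(float) - hist.mean()
    # Step S120: convert the statistic into an auto-correlation curve over lags 1..max_lag.
    curve = np.array([np.dot(hist[:bins - lag], hist[lag:]) for lag in range(1, max_lag + 1)])
    return curve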
Figure 17 is a flow diagram illustrating step S130 of Figure 16 according to an embodiment of the invention. Step S130 of the present embodiment comprises sub-steps S132 to S136. Referring to Fig. 4, Figure 14 and Figure 17, the quantization detector 116 finds the peak (for example Q1) of the auto-correlation curve 400 in step S132. Then, in step S134, the quantization detector 116 performs high-pass filtering on the auto-correlation curve 400 to obtain the filter curve 500 (please refer to the related description of Fig. 5). The quantization detector 116 calculates the initial correction coefficient Q according to the auto-correlation value R1 of the auto-correlation curve 400 at the peak Q1 and the filter value K1 of the filter curve 500 at the peak Q1, and sends the initial correction coefficient Q as the correction coefficient Q_final to the compensator 120 (step S136).
Figure 18 is a flow diagram illustrating step S136 of Figure 17 according to an embodiment of the invention. Step S136 of the present embodiment comprises sub-steps S136_1 to S136_3. In step S136_1 of the present embodiment, the quantization detector 116 converts the auto-correlation value R1 of the auto-correlation curve 400 at the peak Q1 into a first temporary parameter Q_tmp1 (please refer to the related description of Fig. 6a). Then, in step S136_2, the quantization detector 116 converts the filter value K1 of the filter curve 500 at the peak Q1 into a second temporary parameter Q_tmp2 (please refer to the related description of Fig. 6b). The quantization detector 116 shown in Figure 14 calculates the initial correction coefficient Q according to the first temporary parameter Q_tmp1 and the second temporary parameter Q_tmp2, and sends the initial correction coefficient Q as the correction coefficient Q_final to the compensator 120 (step S136_3).
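A sketch of sub-steps S132 to S136_3, carrying on from the curve above. The conversion curves of Fig. 6a and Fig. 6b are not reproduced here, so the two mappings below are placeholder normalizations, and the high-pass kernel is likewise an assumption; only the overall structure (peak search, filtering, two temporary parameters, their product) follows the text.

import numpy as np

def initial_correction_coefficient(curve):
    # Step S132: peak of the auto-correlation curve (Q1 = peak lag).
    q1 = int(np.argmax(curve))
    r1 = curve[q1]                                   # auto-correlation value R1 at the peak
    # Step S134: high-pass filter the curve to obtain the filter curve.
    kernel = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])
    filtered = np.convolve(curve, kernel, mode="same")
    k1 = filtered[q1]                                # filter value K1 at the peak
    # Steps S136_1 / S136_2: map R1 and K1 to temporary parameters (placeholder mappings).
    q_tmp1 = np.clip(r1 / (np.abs(curve).max() + 1e-9), 0.0, 1.0)
    q_tmp2 = np.clip(k1 / (np.abs(filtered).max() + 1e-9), 0.0, 1.0)
    # Step S136_3: the initial correction coefficient Q is the product of the two parameters.
    return q_tmp1 * q_tmp2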
Figure 19 is a flow diagram illustrating step S100 of Figure 15 according to another embodiment of the invention. Steps S110, S120 and S1930 shown in Figure 19 can be understood with reference to the related description of steps S110, S120 and S130 shown in Figure 16. Referring to Fig. 2 and Figure 19, in the present embodiment the significant bit detector 110 further comprises the image computing unit 118. The quantization detector 116 calculates the initial correction coefficient Q in step S1930 and sends the initial correction coefficient Q to the image computing unit 118. The image computing unit 118 in the significant bit detector 110 performs edge detection on each of the multiple pixels in a picture frame of the image input signal Y_in (step S1940), and calculates the correction coefficient Q_final according to the initial correction coefficient Q and the edge detection results of these pixels; that is, the initial correction coefficient Q is multiplied by the first correction gain value Q_gain1 and the second correction gain value Q_gain2 to obtain the correction coefficient Q_final (step S1950).
Figure 20 is a flow chart illustrating step S1930 of Figure 19 according to an embodiment of the invention. Step S1930 of the present embodiment comprises sub-steps S1932 to S1938. Steps S1932 and S1934 shown in Figure 20 can be understood with reference to the related description of steps S132 and S134 shown in Figure 17. Steps S1936 and S1938 shown in Figure 20 can be understood with reference to the related description of steps S136_1, S136_2 and S136_3 shown in Figure 18. Accordingly, the quantization detector 116 shown in Fig. 2 calculates the initial correction coefficient Q according to the first temporary parameter Q_tmp1 and the second temporary parameter Q_tmp2 in step S1938, and sends the initial correction coefficient Q to the image computing unit 118.
Figure 21 is a flow diagram illustrating step S1940 of Figure 19 according to an embodiment of the invention. Step S1940 of the present embodiment comprises sub-steps S1941 to S1948. Referring to Fig. 2 and Figure 21, in step S1941 of the present embodiment the image computing unit 118 calculates, for the current pixel Yc among those pixels, the sum of a first adjacent pixel group Y(c-n), Y(c-n+1), ..., Y(c-1) in a first direction (for example the row direction or horizontal direction; see the related descriptions of Fig. 7 and Fig. 8a) as a first adjacent pixel sum. Then, in step S1942, the image computing unit 118 calculates the sum of a second adjacent pixel group Y(c+1), ..., Y(c+n-1), Y(c+n) of the current pixel Yc in a second direction as a second adjacent pixel sum, wherein the first direction and the second direction differ by 180 degrees. Then, in step S1943, the image computing unit 118 calculates the difference between the first adjacent pixel sum and the second adjacent pixel sum as a first edge value of the current pixel Yc. Taking Fig. 7 as an example, the image computing unit 118 can calculate the first edge value Yhdiff(x,y) of the current pixel Y(x,y) in step S1943. In step S1944, the image computing unit 118 counts the first correction gain value Q_gain1 of those pixels according to the relation between the first edge values of those pixels and the initial correction coefficient Q.
Similarly, referring to Fig. 8b, in step S1945 the image computing unit 118 calculates the sum of a third adjacent pixel group Y(c-n), Y(c-n+1), ..., Y(c-1) of the current pixel Yc in a third direction (for example the column direction or vertical direction) as a third adjacent pixel sum. In step S1946, the image computing unit 118 also calculates the sum of a fourth adjacent pixel group Y(c+1), ..., Y(c+n-1), Y(c+n) of the current pixel Yc in a fourth direction as a fourth adjacent pixel sum, wherein the third direction and the fourth direction differ by 180 degrees. Then, in step S1947, the image computing unit 118 calculates the difference between the third adjacent pixel sum and the fourth adjacent pixel sum as a second edge value of the current pixel Yc. Taking Fig. 7 as an example, the image computing unit 118 can calculate the second edge value Yvdiff(x,y) of the current pixel Y(x,y) in step S1947. In step S1948, the image computing unit 118 counts the second correction gain value Q_gain2 of those pixels according to the relation between the second edge values of those pixels and the initial correction coefficient Q.
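A compact sketch of sub-steps S1941 to S1947 for a single pixel, assuming n neighbours on each side and absolute differences (the sign convention and the border handling are not specified in the text):

import numpy as np

def edge_values(frame, x, y, n=2):
    frame = np.asarray(frame, dtype=int)
    row, col = frame[y, :], frame[:, x]
    # First/second adjacent pixel sums (left and right of the current pixel).
    left, right = row[max(0, x - n):x].sum(), row[x + 1:x + 1 + n].sum()
    # Third/fourth adjacent pixel sums (above and below the current pixel).
    up, down = col[max(0, y - n):y].sum(), col[y + 1:y + 1 + n].sum()
    yhdiff = abs(left - right)    # first edge value Yhdiff(x, y)
    yvdiff = abs(up - down)       # second edge value Yvdiff(x, y)
    return yhdiff, yvdiff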
Figure 22 is a flow diagram illustrating step S1944 of Figure 21 according to an embodiment of the invention. Step S1944 of the present embodiment comprises sub-steps S1944_1 to S1944_3. In step S1944_1, the image computing unit 118 counts, among those pixels located in the same row, the number of pixels whose first edge value Yhdiff(x,y) is greater than a first threshold N and whose first edge value is less than k times the initial correction coefficient Q, as the horizontal edge pixel count of that row, where k is a real number (for example 4 or another value). For example, the image computing unit 118 can count, in the picture frame shown in Fig. 7, the number of pixels in the i-th row satisfying the condition "(first edge value Yhdiff(i,j) > N) and (first edge value Yhdiff(i,j) < k*Q)" as the horizontal edge pixel count contour_h_cnt(i) of the i-th row. Then, in step S1944_2, the image computing unit 118 counts, among the rows of the picture frame, the number of rows whose horizontal edge pixel count differs from that of the adjacent row by less than a second threshold th_h, as a horizontal edge line number value Graphic_h_level. For example, the image computing unit 118 can count, from the 1st row to the vcnt-th row of the picture frame shown in Fig. 7, the number of rows satisfying the condition "|contour_h_cnt(i) - contour_h_cnt(i+1)| < th_h" as the horizontal edge line number value Graphic_h_level of the picture frame. In step S1944_3, the image computing unit 118 performs a table look-up (for example with reference to the related description of Fig. 9a) according to the horizontal edge line number value Graphic_h_level, so as to convert the horizontal edge line number value Graphic_h_level into the first correction gain value Q_gain1.
Similarly, in step S1948 shown in Figure 21, the image computing unit 118 counts, among the pixels of the picture frame located in the same row, the number of pixels whose second edge value is greater than the first threshold N and whose second edge value is less than k times the initial correction coefficient Q, as the vertical edge pixel count of that row. For example, the image computing unit 118 can count, in the picture frame shown in Fig. 7, the number of pixels in the i-th row satisfying the condition "(second edge value Yvdiff(i,j) > N) and (second edge value Yvdiff(i,j) < k*Q)" as the vertical edge pixel count contour_v_cnt(i) of the i-th row. In step S1948, the image computing unit 118 further counts, among the rows of the picture frame, the number of rows whose vertical edge pixel count differs from that of the adjacent row by less than the second threshold th_h, as a vertical edge line number value Graphic_v_level. For example, the image computing unit 118 can count, from the 1st row to the vcnt-th row of the picture frame shown in Fig. 7, the number of rows satisfying the condition "|contour_v_cnt(i) - contour_v_cnt(i+1)| < th_h" as the vertical edge line number value Graphic_v_level of the picture frame. In step S1948, the image computing unit 118 then performs a table look-up according to the vertical edge line number value Graphic_v_level, so as to convert the vertical edge line number value Graphic_v_level into the second correction gain value Q_gain2.
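A sketch of the counting in steps S1944_1 to S1944_3 (and, fed with the vertical edge values, of step S1948). The thresholds N, k and th_h and the final look-up table correspond to Fig. 9a/9b, whose actual entries are not reproduced here; the table passed in below is therefore an assumed example.

import numpy as np

def edge_gain(edge_map, Q, N=2, k=4, th_h=3,
              lut=((0, 1.0), (5, 0.8), (20, 0.5), (80, 0.2))):
    edge_map = np.asarray(edge_map)
    # Per-row count of pixels whose edge value lies between N and k*Q
    # (contour_h_cnt / contour_v_cnt).
    counts = np.array([np.count_nonzero((row > N) & (row < k * Q)) for row in edge_map])
    # Number of adjacent row pairs with nearly equal counts
    # (Graphic_h_level / Graphic_v_level).
    level = int(np.count_nonzero(np.abs(np.diff(counts)) < th_h))
    # Convert the level into a correction gain value through the look-up table.
    gain = lut[0][1]
    for threshold, value in lut:
        if level >= threshold:
            gain = value
    return gain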
Figure 23 is a flow diagram illustrating step S200 of Figure 15 according to an embodiment of the invention. Step S200 of the present embodiment comprises sub-steps S210 to S220. Referring to Fig. 2 and Figure 23, the first false contour reduction device 122 in the compensator 120 performs a first false contour reduction operation on the image input signal Y_in according to the correction coefficient Q_final, so as to output the first image correction signal Y_out' (step S210). Then, the second false contour reduction device 124 in the compensator 120 performs a second false contour reduction operation on the first image correction signal Y_out' according to the correction coefficient Q_final, so as to output the image output signal Y_out (step S220).
Figure 24 is a flow diagram illustrating step S210 of Figure 23 according to an embodiment of the invention. Step S210 of the present embodiment comprises sub-steps S212 to S218. Referring to Fig. 2, Fig. 8a, Figure 10 and Figure 24, the horizontal filtering unit 122_2 judges whether the difference between the current pixel Yc of the image input signal Y_in and a horizontally adjacent pixel Yc+i is greater than the correction coefficient Q_final, and correspondingly outputs the filtered signal Y_lpf_out according to the judgment result (step S212). Then, the dither unit 122_4 performs a dither operation on the filtered signal to produce the dithered signal Y_lpf_out' (step S214). The horizontal boundary detecting unit 122_6 detects a horizontal boundary according to the image input signal Y_in and the chrominance signal CbCr_in, and determines the horizontal effective value hlpf_coef accordingly (step S216). The mixing unit 122_8 performs a weighting operation on the image input signal Y_in and the dithered signal Y_lpf_out' according to the horizontal effective value hlpf_coef, thereby producing the first image correction signal Y_out' (step S218).
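A row-wise sketch of sub-steps S212 to S218 under several assumptions the text leaves open: the filter is taken to be a mean over horizontal neighbours that differ from the current pixel by at most Q_final, the dither is a small uniform random offset, and the blend gives the weight hlpf_coef to the original pixel, so that strong boundaries keep the input while smooth regions take the dithered, filtered value.

import numpy as np

def first_false_contour_reduction(y_row, q_final, hlpf_coef, n=2, seed=0):
    rng = np.random.default_rng(seed)
    y_row = np.asarray(y_row, dtype=float)
    out = np.empty_like(y_row)
    for x in range(y_row.size):
        window = y_row[max(0, x - n):x + n + 1]
        # Step S212: only neighbours within Q_final of the current pixel are averaged.
        near = window[np.abs(window - y_row[x]) <= q_final]
        filtered = near.mean()                        # Y_lpf_out
        # Step S214: dither the filtered value.
        dithered = filtered + rng.uniform(-0.5, 0.5)  # Y_lpf_out'
        # Step S218: blend input and dithered signal using the horizontal effective value.
        out[x] = hlpf_coef * y_row[x] + (1.0 - hlpf_coef) * dithered
    return out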
Figure 25 is a flow diagram illustrating step S216 of Figure 24 according to an embodiment of the invention. Step S216 of the present embodiment comprises sub-steps S216_1 to S216_2. Referring to Fig. 2, Figure 10, Figure 11 and Figure 25, the horizontal boundary detecting unit 122_6 calculates a horizontal boundary level H_edge_level according to the chrominance signal CbCr_in and the image input signal Y_in (step S216_1). The horizontal boundary detecting unit 122_6 then compares the horizontal boundary level with a plurality of horizontal boundary thresholds (for example h_edge_th0, h_edge_th1, h_edge_th2 and h_edge_th3) so as to quantize the horizontal boundary level into the horizontal effective value hlpf_coef (step S216_2).
Similarly, Figure 26 is a flow diagram illustrating step S220 of Figure 23 according to an embodiment of the invention. Step S220 of the present embodiment comprises sub-steps S222 to S228. Referring to Fig. 2, Fig. 8b, Figure 12 and Figure 26, the vertical filtering unit 124_2 judges whether the difference between the current pixel Yc of the first image correction signal Y_out' and a vertically adjacent pixel Yc+i is greater than the correction coefficient Q_final, and correspondingly outputs a filtered signal to the dither unit 124_4 according to the judgment result (step S222). Then, the dither unit 124_4 performs a dither operation on the filtered signal to produce a dithered signal (step S224). The vertical boundary detecting unit 124_6 detects a vertical boundary according to the first image correction signal Y_out' and the chrominance signal CbCr_in, and determines the vertical effective value vlpf_coef accordingly (step S226). The mixing unit 124_8 performs a weighting operation on the first image correction signal Y_out' and the dithered signal output by the dither unit 124_4 according to the vertical effective value vlpf_coef, thereby producing the image output signal Y_out (step S228).
In summary, the embodiments of the invention propose an image processing apparatus and method in which the significant bit detector 110 of the image processing apparatus 100 detects the effective number of bits in the bit depth of the image input signal Y_in and outputs the resulting correction coefficient Q_final to the compensator 120 for processing the image input signal Y_in. The compensator 120 performs bit number compensation for the insufficient bit depth of the image input signal Y_in according to the correction coefficient Q_final, thereby effectively improving the display quality of the displayed image frame and avoiding the occurrence of the false contour phenomenon.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.

Claims (31)

1. An image processing apparatus, characterized by comprising:
a significant bit detector for detecting an effective number of bits in a bit depth of an image input signal and correspondingly outputting a correction coefficient; and
a compensator, coupled to the significant bit detector to receive the correction coefficient, for performing bit number compensation on the image input signal according to the correction coefficient so as to output a corresponding image output signal.
2. The image processing apparatus according to claim 1, characterized in that the significant bit detector comprises:
a signal statistics unit for gathering statistics of luminance values of the image input signal and outputting a luminance statistic;
an auto-correlation unit, coupled to the signal statistics unit, for converting the luminance statistic into an auto-correlation curve; and
a quantization detector, coupled to the auto-correlation unit, for calculating the correction coefficient according to the auto-correlation curve and outputting the correction coefficient to the compensator.
3. The image processing apparatus according to claim 2, characterized in that the auto-correlation unit converts the luminance statistic into the auto-correlation curve according to a correlation function.
4. The image processing apparatus according to claim 2, characterized in that the quantization detector
finds a peak of the auto-correlation curve,
performs high-pass filtering on the auto-correlation curve to obtain a filter curve, and
calculates the correction coefficient according to the auto-correlation value of the auto-correlation curve at the peak and the filter value of the filter curve at the peak.
5. The image processing apparatus according to claim 4, characterized in that the quantization detector
converts the auto-correlation value of the auto-correlation curve at the peak into a first temporary parameter,
converts the filter value of the filter curve at the peak into a second temporary parameter, and
calculates the correction coefficient according to the first temporary parameter and the second temporary parameter.
6. The image processing apparatus according to claim 5, characterized in that the quantization detector
multiplies the first temporary parameter by the second temporary parameter to obtain the correction coefficient.
7. The image processing apparatus according to claim 1, characterized in that the significant bit detector comprises:
a signal statistics unit for gathering statistics of luminance values of the image input signal and outputting a luminance statistic;
an auto-correlation unit, coupled to the signal statistics unit, for converting the luminance statistic into an auto-correlation curve;
a quantization detector, coupled to the auto-correlation unit, for calculating an initial correction coefficient according to the auto-correlation curve; and
an image computing unit, coupled to the quantization detector to receive the initial correction coefficient, for performing edge detection on a plurality of pixels in a picture frame of the image input signal and calculating the correction coefficient according to the initial correction coefficient and the result of the edge detection of the pixels.
8. The image processing apparatus according to claim 7, characterized in that the quantization detector
finds a peak of the auto-correlation curve,
performs high-pass filtering on the auto-correlation curve to obtain a filter curve,
converts the auto-correlation value of the auto-correlation curve at the peak into a first temporary parameter,
converts the filter value of the filter curve at the peak into a second temporary parameter, and
calculates the initial correction coefficient according to the first temporary parameter and the second temporary parameter.
9. The image processing apparatus according to claim 7, characterized in that the edge detection comprises:
calculating the sum of a first adjacent pixel group of a current pixel of the pixels in a first direction as a first adjacent pixel sum;
calculating the sum of a second adjacent pixel group of the current pixel in a second direction as a second adjacent pixel sum, wherein the first direction and the second direction differ by 180 degrees;
calculating the difference between the first adjacent pixel sum and the second adjacent pixel sum as a first edge value of the current pixel;
counting a first correction gain value of the pixels according to the relation between the first edge values of the pixels and the initial correction coefficient;
calculating the sum of a third adjacent pixel group of the current pixel in a third direction as a third adjacent pixel sum;
calculating the sum of a fourth adjacent pixel group of the current pixel in a fourth direction as a fourth adjacent pixel sum, wherein the third direction and the fourth direction differ by 180 degrees;
calculating the difference between the third adjacent pixel sum and the fourth adjacent pixel sum as a second edge value of the current pixel;
counting a second correction gain value of the pixels according to the relation between the second edge values of the pixels and the initial correction coefficient; and
using the first correction gain value and the second correction gain value as the result of the edge detection.
10. The image processing apparatus according to claim 9, characterized in that calculating the correction coefficient comprises:
multiplying the initial correction coefficient by the first correction gain value and the second correction gain value to obtain the correction coefficient.
11. The image processing apparatus according to claim 1, characterized in that the compensator comprises:
a first false contour reduction device for receiving the image input signal and performing a first false contour reduction operation on the image input signal according to the correction coefficient so as to output a first image correction signal; and
a second false contour reduction device, coupled to the first false contour reduction device, for receiving the first image correction signal and performing a second false contour reduction operation on the first image correction signal according to the correction coefficient so as to output the image output signal.
12. The image processing apparatus according to claim 11, characterized in that the first false contour reduction device comprises:
a horizontal filtering unit for judging whether the difference between a current pixel of the image input signal and a horizontally adjacent pixel is greater than the correction coefficient, and correspondingly outputting a filtered signal according to the judgment result;
a dither unit, coupled to the horizontal filtering unit, for receiving the filtered signal and performing a dither operation on it to output a dithered signal;
a horizontal boundary detecting unit for receiving the image input signal and a chrominance signal, detecting a horizontal boundary according to them, and determining a horizontal effective value accordingly; and
a mixing unit, coupled to the dither unit and the horizontal boundary detecting unit, for performing a weighting operation on the image input signal and the dithered signal so as to output the first image correction signal, wherein the mixing unit determines the weights of the image input signal and the dithered signal according to the horizontal effective value.
13. The image processing apparatus according to claim 12, characterized in that the horizontal boundary detecting unit
calculates a horizontal boundary level according to the chrominance signal and the image input signal, and
compares the horizontal boundary level with a plurality of horizontal boundary thresholds so as to quantize the horizontal boundary level into the horizontal effective value.
14. The image processing apparatus according to claim 13, characterized in that the image input signal comprises a luminance signal, the chrominance signal comprises a red chrominance signal and a blue chrominance signal, and the horizontal boundary detecting unit selects the maximum among the horizontal gradient value of the luminance signal, the horizontal gradient value of the red chrominance signal and the horizontal gradient value of the blue chrominance signal as the horizontal boundary level.
15. The image processing apparatus according to claim 11, characterized in that the second false contour reduction device comprises:
a vertical filtering unit for judging whether the difference between a current pixel of the first image correction signal and a vertically adjacent pixel is greater than the correction coefficient, and correspondingly outputting a filtered signal according to the judgment result;
a dither unit, coupled to the vertical filtering unit, for receiving the filtered signal and performing a dither operation on it to output a dithered signal;
a vertical boundary detecting unit for receiving the first image correction signal and a chrominance signal, detecting a vertical boundary according to them, and determining a vertical effective value accordingly; and
a mixing unit, coupled to the dither unit and the vertical boundary detecting unit, for performing a weighting operation on the first image correction signal and the dithered signal so as to output the image output signal, wherein the mixing unit determines the weights of the first image correction signal and the dithered signal according to the vertical effective value.
16. The image processing apparatus according to claim 1, further comprising:
a buffer unit for buffering the image input signal so that the image input signal is synchronized with the correction coefficient, and inputting the buffered image input signal to the compensator.
17. An image processing method, applicable to an image processing apparatus, characterized by comprising:
detecting an effective number of bits in a bit depth of an image input signal, and correspondingly producing a correction coefficient; and
performing bit number compensation on the image input signal according to the correction coefficient, so as to produce a corresponding image output signal.
18. The image processing method according to claim 17, characterized in that the step of detecting the effective number of bits in the bit depth of the image input signal and outputting the correction coefficient comprises:
gathering statistics of luminance values of the image input signal and outputting a luminance statistic;
converting the luminance statistic into an auto-correlation curve; and
calculating the correction coefficient according to the auto-correlation curve.
19. The image processing method according to claim 18, characterized in that the step of converting the luminance statistic into the auto-correlation curve comprises:
converting the luminance statistic into the auto-correlation curve according to a correlation function.
20. The image processing method according to claim 18, characterized in that the step of calculating the correction coefficient comprises:
finding a peak of the auto-correlation curve;
performing high-pass filtering on the auto-correlation curve to obtain a filter curve; and
calculating the correction coefficient according to the auto-correlation value of the auto-correlation curve at the peak and the filter value of the filter curve at the peak.
21. The image processing method according to claim 20, characterized in that the step of calculating the correction coefficient according to the auto-correlation value and the filter value comprises:
converting the auto-correlation value of the auto-correlation curve at the peak into a first temporary parameter;
converting the filter value of the filter curve at the peak into a second temporary parameter; and
calculating the correction coefficient according to the first temporary parameter and the second temporary parameter.
22. The image processing method according to claim 21, characterized in that the step of calculating the correction coefficient according to the first temporary parameter and the second temporary parameter comprises:
multiplying the first temporary parameter by the second temporary parameter to obtain the correction coefficient.
23. The image processing method according to claim 17, characterized in that the step of detecting the effective number of bits in the bit depth of the image input signal and outputting the correction coefficient comprises:
gathering statistics of luminance values of the image input signal and outputting a luminance statistic;
converting the luminance statistic into an auto-correlation curve;
calculating an initial correction coefficient according to the auto-correlation curve; and
performing edge detection on a plurality of pixels in a picture frame of the image input signal, and calculating the correction coefficient according to the initial correction coefficient and the result of the edge detection of the pixels.
24. The image processing method according to claim 23, characterized in that the step of calculating the initial correction coefficient comprises:
finding a peak of the auto-correlation curve;
performing high-pass filtering on the auto-correlation curve to obtain a filter curve;
converting the auto-correlation value of the auto-correlation curve at the peak into a first temporary parameter;
converting the filter value of the filter curve at the peak into a second temporary parameter; and
calculating the initial correction coefficient according to the first temporary parameter and the second temporary parameter.
25. The image processing method according to claim 24, characterized in that the edge detection comprises:
calculating the sum of a first adjacent pixel group of a current pixel of the pixels in a first direction as a first adjacent pixel sum;
calculating the sum of a second adjacent pixel group of the current pixel in a second direction as a second adjacent pixel sum, wherein the first direction and the second direction differ by 180 degrees;
calculating the difference between the first adjacent pixel sum and the second adjacent pixel sum as a first edge value of the current pixel;
counting a first correction gain value of the pixels according to the relation between the first edge values of the pixels and the initial correction coefficient;
calculating the sum of a third adjacent pixel group of the current pixel in a third direction as a third adjacent pixel sum;
calculating the sum of a fourth adjacent pixel group of the current pixel in a fourth direction as a fourth adjacent pixel sum, wherein the third direction and the fourth direction differ by 180 degrees;
calculating the difference between the third adjacent pixel sum and the fourth adjacent pixel sum as a second edge value of the current pixel;
counting a second correction gain value of the pixels according to the relation between the second edge values of the pixels and the initial correction coefficient; and
using the first correction gain value and the second correction gain value as the result of the edge detection.
26. The image processing method according to claim 25, characterized in that calculating the correction coefficient comprises:
multiplying the initial correction coefficient by the first correction gain value and the second correction gain value to obtain the correction coefficient.
27. The image processing method according to claim 17, characterized in that the step of producing the corresponding image output signal comprises:
performing a first false contour reduction operation on the image input signal according to the correction coefficient so as to output a first image correction signal; and
performing a second false contour reduction operation on the first image correction signal according to the correction coefficient so as to output the image output signal.
28. The image processing method according to claim 27, characterized in that the first false contour reduction operation comprises:
judging whether the difference between a current pixel of the image input signal and a horizontally adjacent pixel is greater than the correction coefficient, and correspondingly outputting a filtered signal according to the judgment result;
performing a dither operation on the filtered signal to produce a dithered signal;
detecting a horizontal boundary according to the image input signal and a chrominance signal, and determining a horizontal effective value accordingly; and
performing a weighting operation on the image input signal and the dithered signal so as to produce the first image correction signal, wherein the weights of the image input signal and the dithered signal are determined according to the horizontal effective value.
29. The image processing method according to claim 28, characterized in that the step of determining the horizontal effective value comprises:
calculating a horizontal boundary level according to the chrominance signal and the image input signal; and
comparing the horizontal boundary level with a plurality of horizontal boundary thresholds so as to quantize the horizontal boundary level into the horizontal effective value.
30. The image processing method according to claim 29, characterized in that the image input signal comprises a luminance signal, the chrominance signal comprises a red chrominance signal and a blue chrominance signal, and the step of calculating the horizontal boundary level comprises:
selecting the maximum among the horizontal gradient value of the luminance signal, the horizontal gradient value of the red chrominance signal and the horizontal gradient value of the blue chrominance signal as the horizontal boundary level.
31. The image processing method according to claim 27, characterized in that the second false contour reduction operation comprises:
judging whether the difference between a current pixel of the first image correction signal and a vertically adjacent pixel is greater than the correction coefficient, and correspondingly outputting a filtered signal according to the judgment result;
performing a dither operation on the filtered signal to output a dithered signal;
detecting a vertical boundary according to the first image correction signal and a chrominance signal, and determining a vertical effective value accordingly; and
performing a weighting operation on the first image correction signal and the dithered signal so as to produce the image output signal, wherein the weights of the first image correction signal and the dithered signal are determined according to the vertical effective value.
CN201410125628.4A 2014-03-31 2014-03-31 Image processing device and method thereof Pending CN104954770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410125628.4A CN104954770A (en) 2014-03-31 2014-03-31 Image processing device and method thereof


Publications (1)

Publication Number Publication Date
CN104954770A true CN104954770A (en) 2015-09-30

Family

ID=54169069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410125628.4A Pending CN104954770A (en) 2014-03-31 2014-03-31 Image processing device and method thereof

Country Status (1)

Country Link
CN (1) CN104954770A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1759445A (en) * 2003-03-17 2006-04-12 Lg电子株式会社 Apparatus and method for processing image data in an interactive media player
CN1997165A (en) * 2005-12-31 2007-07-11 财团法人工业技术研究院 Method and device for digital image adaptative color adjustment of display
CN101583970A (en) * 2007-01-19 2009-11-18 汤姆森许可贸易公司 Reducing contours in digital images
US20100125648A1 (en) * 2008-11-17 2010-05-20 Xrfiles, Inc. System and method for the serving of extended bit depth high resolution images
CN102216953A (en) * 2008-12-01 2011-10-12 马维尔国际贸易有限公司 Bit resolution enhancement
CN101448075B (en) * 2007-10-15 2012-07-04 英特尔公司 Converting video and image signal bit depths
CN102903088A (en) * 2011-07-28 2013-01-30 索尼公司 Image processing apparatus and method
CN103024300A (en) * 2012-12-25 2013-04-03 华为技术有限公司 Device and method for high dynamic range image display
CN103069809A (en) * 2010-08-25 2013-04-24 杜比实验室特许公司 Extending image dynamic range
CN103327323A (en) * 2012-03-14 2013-09-25 杜比实验室特许公司 Efficient tone-mapping of high-bit-depth video to low-bit-depth display
CN101860661B (en) * 2009-04-08 2013-09-25 佳能株式会社 Image display apparatus, image display method, and recording medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020001149A1 (en) * 2018-06-29 2020-01-02 京东方科技集团股份有限公司 Method and apparatus for extracting edge of object in depth image, and computer readable storage medium
US11379988B2 (en) * 2018-06-29 2022-07-05 Boe Technology Group Co., Ltd. Method and apparatus for extracting edge of object in depth image and computer readable storage medium
CN109325918A (en) * 2018-07-26 2019-02-12 京东方科技集团股份有限公司 Image processing method and device and computer storage medium
US11257187B2 (en) 2018-07-26 2022-02-22 Boe Technology Group Co., Ltd. Image processing method, image processing device and computer storage medium
CN110191340A (en) * 2019-06-03 2019-08-30 Oppo广东移动通信有限公司 Video frame processing method, device, equipment and storage medium
CN110191340B (en) * 2019-06-03 2021-05-14 Oppo广东移动通信有限公司 Video frame processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US6842536B2 (en) Image processing apparatus, image processing method and computer program product for correcting image obtained by shooting subject
US8131111B2 (en) Device of processing dead pixel
CN101523889B (en) Contour correcting method, image processing device and display device
CN101689356B (en) Image processing device, display, image processing method
US20040233217A1 (en) Adaptive pixel-based blending method and system
CN102402918B (en) Method for improving picture quality and liquid crystal display (LCD)
CN102216953A (en) Bit resolution enhancement
CN1655228A (en) Reducing burn-in associated with mismatched video image/display aspect ratios
US20120093430A1 (en) Image processing method and device
US8270750B2 (en) Image processor, display device, image processing method, and program
CN110290370B (en) Image processing method and device
US5570461A (en) Image processing using information of one frame in binarizing a succeeding frame
CN104954770A (en) Image processing device and method thereof
CN101123081A (en) Brightness signal processing method
US20070279531A1 (en) TV receiver and TV receiving method
US7889279B2 (en) Method and apparatus for suppressing cross-coloration in a video display device
Someya et al. The suppression of noise on a dithering image in LCD overdrive
CN113068011B (en) Image sensor, image processing method and system
CN1167401A (en) Word frame image automatic testing circuit in TV set
CN101252642A (en) Television set imaging method
CN101221659A (en) Dynamic contrast extension circuit and method
US8300150B2 (en) Image processing apparatus and method
CN102413271B (en) Image processing method and device for eliminating false contour
JPH07322179A (en) Video display processing method for electronic display and its device
TW201419260A (en) Method for enhancing contrast of color image displayed on display system and image processing system utilizing the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150930