CN102045532B - Image processing method and image processing unit

Info

Publication number
CN102045532B
CN 200910208041 CN200910208041A CN102045532B
Authority
CN
China
Prior art keywords
image
regions
gray scale
gray level
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200910208041
Other languages
Chinese (zh)
Other versions
CN102045532A (en)
Inventor
梁仁宽
廖振宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MediaTek Inc
MStar Semiconductor Inc Taiwan
Original Assignee
MStar Software R&D Shenzhen Ltd
MStar Semiconductor Inc Taiwan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MStar Software R&D Shenzhen Ltd, MStar Semiconductor Inc Taiwan filed Critical MStar Software R&D Shenzhen Ltd
Priority to CN 200910208041 priority Critical patent/CN102045532B/en
Publication of CN102045532A publication Critical patent/CN102045532A/en
Application granted granted Critical
Publication of CN102045532B publication Critical patent/CN102045532B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides an image processing method and an image processing unit. The method comprises the following steps: providing a video signal stream; judging whether the video signal stream comprises at least one image that has been converted from low definition to high definition; and, if so, performing image quality enhancement processing on the video signal stream.

Description

Image processing method and image processing apparatus
Technical field
The present invention relates to an image processing method and an image processing apparatus, and in particular to a method and an apparatus capable of enhancing image quality.
Background art
In recent years, as the related fabrication techniques have matured and manufacturing costs have dropped, display devices of various sizes have become increasingly common. How to improve the quality of a display device and make its display characteristics better match what the viewer expects is an issue to which designers pay close attention.
Generally speaking, the maximum image definition a display device can present (that is, its number of pixels in the vertical and horizontal directions) is fixed. However, the original definitions of the video signals provided by the various signal sources connected to the display device (for example a DVD player, a cable television line, or a terrestrial television antenna) may all differ. To match the definition specification of the display device, the devices providing these video signals may first adjust the size (that is, the definition) of their output images to fit the screen specification of the display device.
As is well known to those skilled in the art, expanding a low-definition image into a high-definition image requires increasing the number of pixels in the image. For example, to convert an image whose original definition is 800*600 pixels into an image whose definition is 1200*900 pixels, the image processing apparatus must add 400 pixels to each row of the image and 300 pixels to each column of the image. The gray levels of these newly added pixels are mostly determined by interpolation.
An image that has been enlarged by interpolation usually exhibits gentle gray-level variation. To the viewer, such a picture sometimes appears flat or even blurry. When the difference in definition before and after the conversion is too large, the viewer often clearly perceives that the image quality is unsatisfactory. However, an existing display device has no way to determine from the signal it receives whether the images have undergone such a definition conversion, and therefore cannot avoid the above-mentioned image quality problem. The sketch below illustrates why interpolation smooths gray-level transitions.
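The following is a minimal illustrative sketch (not part of the patent text) of why interpolation produces gentle gray-level variation: a 1-D row of gray levels is upscaled by simple linear interpolation, and the largest neighbor-to-neighbor gap shrinks roughly by the upscaling factor. The function names and example values are assumptions made for illustration only.

def linear_upscale(row, factor):
    # Insert (factor - 1) interpolated samples between each pair of neighbors.
    out = []
    for a, b in zip(row, row[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(row[-1])
    return out

def max_neighbor_gap(row):
    # Largest absolute gray-level difference between adjacent pixels.
    return max(abs(b - a) for a, b in zip(row, row[1:]))

original = [10, 200, 30, 180]            # sharp gray-level transitions
upscaled = linear_upscale(original, 3)   # row enlarged to roughly 3x its width

print(max_neighbor_gap(original))        # 190
print(max_neighbor_gap(upscaled))        # about 63, roughly a third of the original gap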
Summary of the invention
To address the above problems, the present invention provides a method and an apparatus for judging whether an image has undergone a low-to-high definition conversion, and further proposes a method and an apparatus for improving the image quality of such a video signal.
According to one aspect of the present invention, an image processing apparatus is provided for processing an image comprising a plurality of regions. The image processing apparatus comprises a judging unit and an adjustment unit. The judging unit determines the gray-level variation degree corresponding to each of the regions and judges, according to these gray-level variation degrees, whether the image is a converted image that has been converted from low definition to high definition. If the image is such a converted image, the adjustment unit performs image quality enhancement processing on the image.
According to another aspect of the present invention, an image processing method is provided, comprising: step (a), providing an image comprising a plurality of regions; step (b), determining the gray-level variation degree corresponding to each of the regions; and step (c), deciding, according to these gray-level variation degrees, whether the image is a converted image that has been converted from low definition to high definition.
According to yet another aspect of the present invention, an image processing apparatus is provided, comprising a judging unit and an adjustment unit. The judging unit judges whether a video stream comprises a converted image that has been converted from low definition to high definition. If the judging unit determines that the video stream comprises such a converted image, the adjustment unit performs image quality enhancement processing on the video stream.
According to a further aspect of the present invention, an image processing method is provided, comprising: step (a), providing a video stream; step (b), judging whether the video stream comprises a converted image that has been converted from low definition to high definition; and step (c), if the judgment result of step (b) is affirmative, performing image quality enhancement processing on the video stream.
Brief description of the drawings
The advantages and spirit of the present invention can be further understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of an image processing method according to an embodiment of the present invention.
Fig. 2 is a flow chart of a method for determining the gray-level variation degree of a target region.
Fig. 3(A) is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 3(B) is an implementation example of the judging unit according to the present invention.
Fig. 4(A) and Fig. 4(B) are flow charts of an image processing method according to another embodiment of the present invention.
Fig. 5 is a block diagram of an image processing apparatus according to another embodiment of the present invention.
Embodiments
Please refer to Fig. 1, which is a flow chart of an image processing method according to an embodiment of the present invention. The method first executes step S11 to provide an image comprising a plurality of regions. For example, an image whose definition is 1200*900 pixels can be divided into a plurality of small regions each containing 3*1 pixels. This division is virtual; it does not mean that physical dividing lines already exist in the image, or that dividing lines must be added to the image.
Next, step S12 determines the gray-level variation degree corresponding to each of the regions. Step S13 decides, according to these gray-level variation degrees, whether the image is a converted image that has been converted from low definition to high definition. As discussed above, an image enlarged by definition conversion usually exhibits gentler gray-level variation. Therefore, if the image received in step S11 is a converted image, the gray-level variation degrees found for most regions in step S12 should not be very high. Based on this principle, step S13 can judge whether the image has previously been enlarged by definition conversion.
After step S13, the image processing method according to the present invention may further comprise a step S14 of enhancing image quality. More specifically, if step S13 determines that the image provided in step S11 is a converted image, the method can perform image quality enhancement processing on the image. Moreover, the image may belong to a video stream that comprises many images. If step S13 judges that the image is a converted image, the video stream to which the image belongs may also have undergone a low-to-high definition conversion. Therefore, the image processing method according to the present invention can perform image quality enhancement processing on the video stream comprising the image. In practice, the image quality enhancement processing may be, but is not limited to, sharpening; a sketch of one common sharpening filter follows.
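As a hedged illustration only: the patent states that the enhancement may be sharpening but does not prescribe a particular filter. The sketch below uses a common 3*3 sharpening kernel on a 2-D list of gray levels; the kernel and the clamping range 0-255 are assumptions, not the claimed implementation.

def sharpen(image):
    # Apply a 3x3 sharpening kernel; border pixels are left unchanged.
    h, w = len(image), len(image[0])
    kernel = [[ 0, -1,  0],
              [-1,  5, -1],
              [ 0, -1,  0]]
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    acc += kernel[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = min(255, max(0, acc))
    return out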
Conversely, if step S13 determines that the image provided in step S11 is not a converted image, the method does not perform image quality enhancement processing on the image or on the video stream to which it belongs.
Suppose a certain region of the image comprises a plurality of pixels, each having a gray-level value. Step S12 can compute a maximum gray-level gap of the region (that is, the result of subtracting the minimum gray-level value from the maximum gray-level value in the region) from these gray-level values, and use it as the gray-level variation degree of the region. Step S13 can then first compute the sum of the gray-level variation degrees of all regions and compare this sum with a sum threshold. If the sum is lower than the sum threshold, the overall gray-level variation of the image is low, so the image may have undergone a definition conversion. A sketch of this computation follows.
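A minimal sketch of this per-region computation (not part of the patent text), assuming one row of gray levels divided into regions of three pixels; the sum threshold is a placeholder value chosen for illustration.

def region_variation(region):
    # Maximum gray-level gap of one region: maximum value minus minimum value.
    return max(region) - min(region)

def looks_upscaled(row, sum_threshold, region_size=3):
    # Split the row into regions, sum their variation degrees, and compare
    # the sum with the threshold; a low sum suggests a converted image.
    regions = [row[i:i + region_size]
               for i in range(0, len(row) - region_size + 1, region_size)]
    total = sum(region_variation(r) for r in regions)
    return total < sum_threshold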
In one embodiment, each region of the image comprises three pixels arranged in sequence. The region currently being processed in step S12 is called the target region, and the target region comprises, in sequence, a first pixel, a second pixel and a third pixel. The first pixel has a first gray-level value P1, the second pixel has a second gray-level value P2, and the third pixel has a third gray-level value P3. minmax(P1, P2, P3) denotes the maximum gray-level gap obtained by subtracting the minimum of the three gray-level values from their maximum; med(P1, P2, P3) denotes the middle gray-level value of the three; and abs[P2 - med(P1, P2, P3)] denotes the absolute difference between P2 and the middle gray-level value.
Fig. 2 shows a method of determining the gray-level variation degree of the target region from the maximum gray-level gap minmax(P1, P2, P3) and the middle gray-level value med(P1, P2, P3). First, step S201 computes the maximum gray-level gap minmax(P1, P2, P3). Next, step S202 compares the maximum gray-level gap minmax(P1, P2, P3) with a first threshold T1. If the maximum gray-level gap minmax(P1, P2, P3) is greater than the first threshold T1, the range formed by the three pixels has a certain degree of gray-level variation, and the method executes step S204 to set a first assessment value to A1. If the maximum gray-level gap minmax(P1, P2, P3) is less than the first threshold T1, the method executes step S203 to set the first assessment value to A0, where A1 is greater than A0. In other words, the higher the maximum gray-level gap minmax(P1, P2, P3), the larger the corresponding first assessment value.
Step S205 computes the absolute difference abs[P2 - med(P1, P2, P3)]. If the result equals zero, P2 equals the middle gray-level value med(P1, P2, P3). That is, although the range formed by the three pixels has a certain degree of gray-level variation (per the judgment of step S202), the three values P1, P2 and P3 are arranged monotonically, either from low to high or from high to low. It can thus be seen that when P2 equals the middle gray-level value med(P1, P2, P3), no low-high-low or high-low-high gray-level variation occurs within this range. Conversely, the higher the absolute difference abs[P2 - med(P1, P2, P3)], the more pronounced the low-high-low or high-low-high gray-level variation in this region.
Step S206 compares the absolute difference abs[P2 - med(P1, P2, P3)] with a second threshold T2. If the absolute difference abs[P2 - med(P1, P2, P3)] is less than the second threshold T2, the method executes step S207 to set a second assessment value to B0. If the absolute difference abs[P2 - med(P1, P2, P3)] is greater than the second threshold T2, the method executes step S208 to compare the absolute difference abs[P2 - med(P1, P2, P3)] with a third threshold T3, where the third threshold T3 is higher than the second threshold T2.
If the absolute difference abs[P2 - med(P1, P2, P3)] is greater than the third threshold T3, the second assessment value is set to B3 in step S209. If the absolute difference abs[P2 - med(P1, P2, P3)] is less than the third threshold T3, the method continues with step S210, comparing the absolute difference abs[P2 - med(P1, P2, P3)] with a fourth threshold T4, where the fourth threshold T4 is between the second threshold T2 and the third threshold T3.
If the absolute difference abs[P2 - med(P1, P2, P3)] is greater than the fourth threshold T4, the second assessment value is set to B2 in step S211. If the absolute difference abs[P2 - med(P1, P2, P3)] is less than the fourth threshold T4, the method executes step S212 to set the second assessment value to B1.
B0, B1, B2 and B3 are four values that increase in sequence; for example, they may be set to 0, 1, 4 and 32 respectively. As can be seen from Fig. 2, the higher the absolute difference abs[P2 - med(P1, P2, P3)], the larger the corresponding second assessment value. The second assessment value can be regarded as a weighted value corresponding to the absolute difference abs[P2 - med(P1, P2, P3)]. The sketch below walks through this per-region assessment.
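The per-region assessment of Fig. 2 can be sketched as follows (not part of the patent text). The threshold values T1 to T4 and the assessment values A0/A1 and B0 to B3 are placeholders; only their ordering (A1 > A0, B0 < B1 < B2 < B3, and T2 < T4 < T3) follows the description.

T1, T2, T3, T4 = 16, 8, 64, 24
A0, A1 = 0, 1
B0, B1, B2, B3 = 0, 1, 4, 32

def assess_region(p1, p2, p3):
    # Return (first assessment value, second assessment value) for one region.
    minmax = max(p1, p2, p3) - min(p1, p2, p3)   # step S201
    first = A1 if minmax > T1 else A0            # steps S202 to S204
    med = sorted((p1, p2, p3))[1]                # middle gray-level value
    diff = abs(p2 - med)                         # step S205
    if diff < T2:                                # steps S206 to S207
        second = B0
    elif diff > T3:                              # steps S208 to S209
        second = B3
    elif diff > T4:                              # steps S210 to S211
        second = B2
    else:                                        # step S212
        second = B1
    return first, second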
The image processing method according to the present invention can assess the gray-level variation degree of the target region using both the maximum gray-level gap minmax(P1, P2, P3) and the absolute difference abs[P2 - med(P1, P2, P3)], or it can select only one of these two values as the basis for judging the gray-level variation degree of the target region.
The image provided in step S11 comprises a plurality of regions. In step S12, the image processing method according to the present invention can execute the steps shown in Fig. 2 for each region of the image, to find the first assessment value and the second assessment value corresponding to the gray-level variation degree of each region. In step S13, the method can accumulate the first assessment values and the second assessment values of all regions and use the totals as the basis for assessing whether the whole image is a converted image. If both accumulated totals indicate that the overall gray-level variation of the image is insufficient, the image processing method according to the present invention can decide that the image has previously been enlarged by definition conversion. In one embodiment, the image processing method of the present invention may make this judgment from only part of the image, for example from 1/4, 1/2 or 2/3 of the image content. A sketch of the whole-image decision follows.
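A minimal sketch of the whole-image decision (not part of the patent text), reusing assess_region from the sketch above; the two sum thresholds are assumed placeholder values.

SUM_T1, SUM_T2 = 200, 400

def is_converted_image(regions):
    # regions: iterable of (p1, p2, p3) gray-level triples covering the image.
    total_first = 0
    total_second = 0
    for p1, p2, p3 in regions:
        first, second = assess_region(p1, p2, p3)
        total_first += first
        total_second += second
    # Low totals mean the image as a whole varies too gently, which suggests
    # it was enlarged from a lower definition by interpolation.
    return total_first < SUM_T1 and total_second < SUM_T2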
Please refer to Fig. 3(A), which is a block diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus mainly comprises a judging unit 31 and an adjustment unit 32. The judging unit 31 receives an image comprising a plurality of regions, determines the gray-level variation degree corresponding to each of the regions, and judges, according to these gray-level variation degrees, whether the image is a converted image that has been converted from low definition to high definition. If the judging unit 31 judges that the image is a converted image, the adjustment unit 32 performs image quality enhancement processing on the image or on the video stream to which it belongs. In practice, the image quality enhancement processing may be, but is not limited to, sharpening.
Fig. 3(B) illustrates an implementation example of the judging unit 31. In this example, the judging unit 31 comprises a first calculating circuit 31A, a first decision circuit 31B, a first comparing circuit 31C, a second calculating circuit 31D, a second decision circuit 31E, an adder circuit 31F, and a second comparing circuit 31G.
The first calculating circuit 31A executes step S201 in Fig. 2; that is, it computes the maximum gray-level gap minmax(P1, P2, P3) of the target region from the three gray-level values in the target region. The first decision circuit 31B executes steps S202 to S204; that is, it determines a first indicator of the gray-level variation degree of the target region, namely the first assessment value, according to minmax(P1, P2, P3).
The first comparing circuit 31C compares the first gray-level value, the second gray-level value and the third gray-level value and selects the middle value among them; in other words, the first comparing circuit 31C is responsible for computing med(P1, P2, P3). The second calculating circuit 31D computes abs[P2 - med(P1, P2, P3)]. The second decision circuit 31E executes steps S206 to S212, determining a second indicator of the gray-level variation degree of the target region, namely the second assessment value, according to abs[P2 - med(P1, P2, P3)].
The adder circuit 31F computes the sum of the first assessment values and the sum of the second assessment values over all regions. The second comparing circuit 31G compares the first assessment value sum with a first sum threshold, and compares the second assessment value sum with a second sum threshold. If both sums are lower than their respective thresholds, the judging unit 31 judges that the image is a converted image.
The functional block diagram shown in Fig. 3(B) is a preferred embodiment of the judging unit. In practice, the judging unit 31 may comprise only the first calculating circuit 31A, the first decision circuit 31B, the adder circuit 31F and the second comparing circuit 31G, assessing whether the image is a converted image from the sum of the first assessment values alone. Alternatively, the judging unit 31 may comprise only the first comparing circuit 31C, the second calculating circuit 31D, the second decision circuit 31E, the adder circuit 31F and the second comparing circuit 31G, assessing whether the image is a converted image from the sum of the second assessment values alone.
Please refer to Fig. 4(A), which is a flow chart of an image processing method according to another embodiment of the present invention. The method first executes step S41 to provide a video stream. Next, step S42 judges whether the video stream comprises M converted images that have been converted from low definition to high definition, where M is a positive integer. If the judgment result of step S42 is affirmative, the method executes step S43 to perform image quality enhancement processing (for example sharpening) on the video stream. Conversely, if the judgment result of step S42 is negative, the method re-executes step S42, continuing to detect whether the video stream comprises converted images.
When M equals 1, the method executes step S43 as soon as one converted image is detected in the video stream. When M is greater than 1, step S42 must detect more than one converted image in the video stream before the method executes step S43. In practice, step S42 has two different possibilities: the first is to judge whether the video stream comprises M consecutive converted images; the second is to judge whether the video stream comprises M converted images, regardless of whether these M converted images are consecutive in the video stream.
Fig. 4(B) is an extended example of the image processing method shown in Fig. 4(A). In this example, if the judgment result of step S42 is affirmative, then in addition to step S43 the method also executes step S44, judging whether the video stream comprises P non-converted images after the M converted images, where P is also a positive integer. If the judgment result of step S44 is affirmative, either the previous judgment was erroneous or the following section of the video stream no longer consists of converted images. Therefore, if the judgment result of step S44 is affirmative, the method executes step S45 to stop the image quality enhancement processing. Conversely, if the judgment result of step S44 is negative, the method re-executes step S44, continuing to detect whether the video stream comprises non-converted images.
When P equals 1, the method executes step S45 as soon as one non-converted image is detected in the video stream. When P is greater than 1, step S44 must detect more than one non-converted image in the video stream before the method executes step S45. In practice, step S44 likewise has two different possibilities: the first is to judge whether the video stream comprises P consecutive non-converted images; the second is to judge whether the video stream comprises P non-converted images, regardless of whether these P non-converted images are consecutive in the video stream. A sketch of this switching logic follows.
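The Fig. 4(A)/4(B) flow can be sketched as a per-frame state machine (not part of the patent text), using the consecutive variant of steps S42 and S44. M, P and the frame representation are assumptions; is_converted_image is the whole-image test sketched above.

def process_stream(frames, M=3, P=3):
    # Yield (frame, enhancing) pairs: turn enhancement on after M consecutive
    # converted frames (step S43) and off after P consecutive non-converted
    # frames (step S45). Each frame is given as its region triples.
    enhancing = False
    converted_run = 0
    plain_run = 0
    for frame in frames:
        if is_converted_image(frame):
            converted_run += 1
            plain_run = 0
        else:
            plain_run += 1
            converted_run = 0
        if not enhancing and converted_run >= M:
            enhancing = True
        elif enhancing and plain_run >= P:
            enhancing = False
        yield frame, enhancing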
In practice, step S42 and step S44 can use the process steps illustrated in Fig. 1 and Fig. 2 to judge whether each image in the video stream is a converted image that has undergone definition conversion.
Please refer to Fig. 5, which is a block diagram of an image processing apparatus according to another embodiment of the present invention. The image processing apparatus mainly comprises a judging unit 51 and an adjustment unit 52. The judging unit 51 judges whether a video stream comprises M converted images that have been converted from low definition to high definition. If the judging unit 51 determines that the video stream comprises the M converted images, the adjustment unit 52 performs image quality enhancement processing (for example sharpening) on the video stream.
Moreover, after judging that the video stream comprises the M converted images, the judging unit 51 can continue to judge whether the video stream comprises P non-converted images following the M converted images. If the judging unit 51 judges that the video stream comprises the P non-converted images, the adjustment unit 52 can stop the image quality enhancement processing.
As in the previous embodiment, the M converted images may be consecutive or non-consecutive in the video stream, and the P non-converted images may likewise be consecutive or non-consecutive in the video stream. In addition, the judging unit 51 can also use the circuits of Fig. 3(B) to judge whether each image in the video stream is a converted image that has undergone definition conversion.
As described above, the present invention proposes a method and an apparatus for judging whether an image has undergone a low-to-high definition conversion, and further proposes a method and an apparatus for improving the image quality of such a video stream, thereby solving the problem of poor image quality after an image has undergone definition conversion processing.
The above detailed description of the preferred embodiments is intended to describe the features and spirit of the present invention more clearly, and is not intended to limit the scope of the present invention to the preferred embodiments disclosed above. On the contrary, the intention is to cover various modifications and equivalent arrangements falling within the scope of the claims of the present invention.

Claims (26)

1. An image processing method, comprising the following steps:
(a) providing a video stream, the video stream comprising an image, the image comprising a plurality of regions;
(b1) calculating a gray-level gap of each of the plurality of regions according to the gray-level values of a plurality of pixels in the plurality of regions, as the gray-level variation degree corresponding to each of the regions;
(b2) calculating a sum of the gray-level variation degrees of the regions, comparing the sum with a sum threshold, and judging whether the video stream comprises a converted image that has been converted from low definition to high definition; and
(c) if the judgment result of step (b2) is affirmative, performing image quality enhancement processing on the video stream.
2. The image processing method according to claim 1, wherein step (b2) judges whether the video stream comprises consecutive converted images.
3. The image processing method according to claim 1, wherein the image quality enhancement processing is sharpening.
4. The image processing method according to claim 1, further comprising, after step (c), the following steps:
(d) judging whether the video stream comprises a non-converted image; and
(e) if the judgment result of step (d) is affirmative, stopping the image quality enhancement processing.
5. The image processing method according to claim 1, wherein, in step (b1), a maximum gray-level gap of each of the plurality of regions is calculated according to the gray-level values of the plurality of pixels in the plurality of regions, and the gray-level variation degree corresponding to each of the regions is determined according to the maximum gray-level gaps.
6. The image processing method according to claim 1, wherein, in step (b1), a middle gray-level value of each of the plurality of regions is calculated according to the gray-level values of the plurality of pixels in the plurality of regions, and the gray-level variation degree corresponding to each of the regions is determined according to the middle gray-level values.
7. The image processing method according to claim 4, wherein step (d) judges whether the video stream comprises consecutive non-converted images.
8. An image processing apparatus, comprising:
a judging unit for judging whether a video stream comprises a converted image that has been converted from low definition to high definition, wherein the video stream comprises an image and the image comprises a plurality of regions, the judging unit comprising:
a calculating circuit for calculating a gray-level gap of each of the plurality of regions according to the gray-level values of a plurality of pixels in the plurality of regions;
a decision circuit for determining the gray-level variation degree corresponding to each of the regions according to the gray-level gaps;
an adder circuit for calculating a sum of the gray-level variation degrees of the regions; and
a comparing circuit for comparing the sum with a sum threshold to judge whether the image is the converted image; and
an adjustment unit which, if the judging unit determines that the video stream comprises the converted image, performs image quality enhancement processing on the video stream.
9. The image processing apparatus according to claim 8, wherein the judging unit judges whether the video stream comprises consecutive converted images.
10. The image processing apparatus according to claim 8, wherein the image quality enhancement processing is sharpening.
11. The image processing apparatus according to claim 8, wherein the judging unit judges whether the video stream comprises a non-converted image; if the judging unit judges that the video stream comprises the non-converted image, the adjustment unit stops the image quality enhancement processing.
12. The image processing apparatus according to claim 8, wherein the calculating circuit calculates a maximum gray-level gap of each of the plurality of regions according to the gray-level values of the plurality of pixels in the plurality of regions, and the decision circuit determines the gray-level variation degree corresponding to each of the regions according to the maximum gray-level gaps.
13. The image processing apparatus according to claim 8, wherein the calculating circuit calculates a middle gray-level value of each of the plurality of regions according to the gray-level values of the plurality of pixels in the plurality of regions, and the decision circuit determines the gray-level variation degree corresponding to each of the regions according to the middle gray-level values.
14. The image processing apparatus according to claim 11, wherein the judging unit judges whether the video stream comprises consecutive non-converted images.
15. An image processing method, comprising the following steps:
(a) providing an image, the image comprising a plurality of regions;
(b) calculating a gray-level gap of each of the plurality of regions according to the gray-level values of a plurality of pixels in the plurality of regions, as the gray-level variation degree corresponding to each of the regions; and
(c) calculating a sum of the gray-level variation degrees of the regions, comparing the sum with a sum threshold, and judging whether the image is a converted image that has been converted from low definition to high definition.
16. The image processing method according to claim 15, further comprising the following step:
if the image is the converted image, performing image quality enhancement processing on the image.
17. The image processing method according to claim 15, further comprising the following step:
if the image is the converted image, performing image quality enhancement processing on a video stream comprising the image.
18. The image processing method according to claim 15, wherein a target region among the regions comprises a plurality of pixels each having a gray-level value, and step (b) comprises:
calculating a maximum gray-level gap of the target region according to the gray-level values; and
determining the gray-level variation degree of the target region according to the maximum gray-level gap.
19. The image processing method according to claim 15, wherein a target region among the regions comprises a first pixel, a second pixel and a third pixel arranged in sequence, the first pixel having a first gray-level value, the second pixel having a second gray-level value, and the third pixel having a third gray-level value, and step (b) comprises:
selecting a middle gray-level value from among the first gray-level value, the second gray-level value and the third gray-level value;
calculating a difference value between the second gray-level value and the middle gray-level value; and
determining the gray-level variation degree of the target region according to the difference value.
20. The image processing method according to claim 19, wherein the gray-level variation degree equals a weighted value corresponding to the difference value.
21. The image processing method according to claim 15, wherein step (c) comprises:
if the sum is lower than the sum threshold, judging that the image is the converted image.
22. An image processing apparatus for processing an image comprising a plurality of regions, the image processing apparatus comprising:
a judging unit for judging whether the image is a converted image that has been converted from low definition to high definition, the judging unit comprising:
a calculating circuit for calculating a gray-level gap of each of the plurality of regions according to the gray-level values of a plurality of pixels in the plurality of regions;
a decision circuit for determining the gray-level variation degree corresponding to each of the regions according to the gray-level gaps;
an adder circuit for calculating a sum of the gray-level variation degrees of the regions; and
a comparing circuit for comparing the sum with a sum threshold to judge whether the image is the converted image; and
an adjustment unit which, if the image is the converted image, performs image quality enhancement processing on the image.
23. The image processing apparatus according to claim 22, wherein a target region among the regions comprises a plurality of pixels each having a gray-level value, the calculating circuit calculates a maximum gray-level gap of the target region according to the gray-level values, and
the decision circuit determines the gray-level variation degree of the target region according to the maximum gray-level gap.
24. The image processing apparatus according to claim 22, wherein a target region among the regions comprises a first pixel, a second pixel and a third pixel arranged in sequence, the first pixel having a first gray-level value, the second pixel having a second gray-level value, and the third pixel having a third gray-level value;
the judging unit further comprising:
another comparing circuit for selecting a middle gray-level value from among the first gray-level value, the second gray-level value and the third gray-level value;
another calculating circuit for calculating a difference value between the second gray-level value and the middle gray-level value; and
another decision circuit for determining the gray-level variation degree of the target region according to the difference value.
25. The image processing apparatus according to claim 24, wherein the gray-level variation degree equals a weighted value corresponding to the difference value.
26. The image processing apparatus according to claim 22, wherein,
if the sum is lower than the sum threshold, the judging unit judges that the image is the converted image.
CN 200910208041 2009-10-13 2009-10-13 Image processing method and image processing unit Expired - Fee Related CN102045532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910208041 CN102045532B (en) 2009-10-13 2009-10-13 Image processing method and image processing unit

Publications (2)

Publication Number Publication Date
CN102045532A CN102045532A (en) 2011-05-04
CN102045532B true CN102045532B (en) 2013-07-24

Family

ID=43911247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910208041 Expired - Fee Related CN102045532B (en) 2009-10-13 2009-10-13 Image processing method and image processing unit

Country Status (1)

Country Link
CN (1) CN102045532B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005026814A (en) * 2003-06-30 2005-01-27 Sharp Corp Video display apparatus
JP2005065195A (en) * 2003-08-20 2005-03-10 Toshiba Corp Broadcast receiver, digital broadcast receiver, broadcast reception method, and digital broadcast reception method
CN101137034A (en) * 2006-08-29 2008-03-05 索尼株式会社 Image determination by frequency domain processing
CN101365092A (en) * 2007-08-07 2009-02-11 索尼株式会社 Image determining device, image determining method, and program

Also Published As

Publication number Publication date
CN102045532A (en) 2011-05-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201030

Address after: No. 1, Xingzhu Road, Hsinchu Science Park, Taiwan, China

Patentee after: MEDIATEK Inc.

Address before: 518057, Guangdong, Shenzhen hi tech Zone, South District, science and technology, South ten road, Shenzhen Institute of Aerospace Science and technology innovation, C block, building 4

Patentee before: Mstar Semiconductor,Inc.

Patentee before: MEDIATEK Inc.

Effective date of registration: 20201030

Address after: 4 / F, block C, Shenzhen Aerospace Science and Technology Innovation Research Institute, science and technology south 10th Road, South District, Shenzhen high tech Zone, Guangdong Province

Patentee after: Mstar Semiconductor,Inc.

Patentee after: MEDIATEK Inc.

Address before: 518057, Guangdong, Shenzhen hi tech Zone, South District, science and technology, South ten road, Shenzhen Institute of Aerospace Science and technology innovation, C block, building 4

Patentee before: Mstar Semiconductor,Inc.

Patentee before: MSTAR SEMICONDUCTOR Inc.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130724