CN101751894B - Image processing device and image display system - Google Patents

Image processing device and image display system

Info

Publication number
CN101751894B
CN101751894B · CN200910262430A
Authority
CN
China
Prior art keywords
image
correction
processing apparatus
pixel value
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200910262430
Other languages
Chinese (zh)
Other versions
CN101751894A (en)
Inventor
荒岛谦治
谷野友哉
西亨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2008322300A (JP5176936B2)
Application filed by Sony Corp
Publication of CN101751894A
Application granted
Publication of CN101751894B

Landscapes

  • Liquid Crystal Display Device Control (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The present invention provides an image processing device and an image display system that improve moving-picture image quality by suppressing motion blur in a hold-type display device while keeping cost low. The image processing device processes image data supplied from outside and outputs the image data to a hold-type display device. The image processing device includes a correction processing section that performs a correction process to correct the pixel value of each pixel in the image data by performing, according to the magnitude of a motion vector in the image data, a spatial LPF (low-pass filter) process on the image data of a frame to be displayed on the display device, the spatial LPF process making the slope of a changing edge portion in the image data more gentle.

Description

Image processing apparatus and image display system
Technical field
The present invention relates to an image processing apparatus that processes image data supplied from outside and outputs the image data to a hold-type display device, and to an image display system including the image processing apparatus.
Background art
In recent years, displays such as LCDs (liquid crystal displays) have become widespread as replacements for CRTs (cathode-ray tubes), and techniques for displaying moving images on LCDs have been attracting attention.
Unlike an impulse-type display device such as a CRT, when a moving image is displayed on a hold-type display device such as an LCD, all of the pixels constituting the screen keep displaying one of the frames or fields (hereinafter simply called "frames") constituting the moving image from the time the frame is designated for display until the next frame is designated. As a result, in the hold-type display device, because of the so-called eye-trace integration effect (an afterglow characteristic exhibited by the human retina when following a moving image), problems of motion blur arise in a moving object, such as a blurred leading edge, a trailing rear edge, and a delayed perceived position. In particular, in an LCD this motion blur is likely to occur because the response speed of the liquid crystal is slow.
To address this problem, an overdrive technique exists as one technique for suppressing motion blur by improving the response characteristic of an LCD. In the overdrive technique, to improve the response to a step input to the LCD, a voltage higher than the target voltage corresponding to the indicated luminance value is applied in the first frame in which the input signal changes, thereby accelerating the luminance transition. By adopting the overdrive technique, the liquid crystal response speed is increased in the intermediate gray-scale (gradation) region, and an effect of suppressing motion blur can be obtained. Furthermore, a technique has been proposed that suppresses motion blur more effectively by changing the waveform of the applied voltage according to the motion vector in each frame (for example, see Japanese Unexamined Patent Application Publication No. 2005-43864).
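As a minimal sketch of the overdrive idea only, assuming 8-bit gray levels and a simple proportional boost rule in place of a real panel's measured look-up table:

```python
import numpy as np

def overdrive(prev_frame: np.ndarray, target_frame: np.ndarray, boost: float = 0.5) -> np.ndarray:
    """Drive past the target gray level in proportion to the gray-level change,
    then clip to the drivable range. The boost constant is illustrative only."""
    prev = prev_frame.astype(np.float32)
    target = target_frame.astype(np.float32)
    driven = target + boost * (target - prev)   # overshoot the requested transition
    # Clipping models the limited applicable drive range: near black or white the
    # boost saturates, which is the limitation discussed in the next section.
    return np.clip(driven, 0.0, 255.0)
```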
Summary of the invention
However, in the overdrive technique, there is a limit on the range of voltages that can be applied to the liquid crystal. Therefore, for example, when the target voltage is close to the limit of the voltage range, as in white display or black display (that is, when the gray scale changes in a high-gray-scale region or a low-gray-scale region), there is a problem that a voltage high enough to increase the liquid crystal response speed may not be applied, and the effect of suppressing motion blur may not be exhibited sufficiently.
In addition, in an LCD driven in the VA mode, the characteristics of the liquid crystal differ between the rise and the fall, and the alignment of the liquid crystal molecules takes time to change when rising from level 0 (black, for example). When only the overdrive technique is used, there is a problem that, considering the response characteristic of the liquid crystal, the luminance transition to the indicated luminance value may not be achieved within one frame.
On the other hand, a double-rate drive technique has been developed in recent years. In the double-rate drive technique, to reduce the eye-trace integration effect, the display frame is divided time-divisionally into a plurality of subframes, and the plurality of subframes are used to increase the display frequency of the moving image. An interpolated image between frames is calculated based on the motion vector of the input image and displayed on the LCD.
However, since the drive frequency of the display driver driving the display device increases as the display frequency increases, problems such as charge shortage, increases in the numbers of ICs, connectors, and terminals, an increase in substrate area, heat generation, and an increase in EMI (electromagnetic interference) arise, which may be a cause of increased cost.
In view of the foregoing, it is desirable to provide an image processing apparatus and an image display system capable of improving the image quality of a moving image by suppressing motion blur in a hold-type display device while suppressing an increase in cost.
According to an embodiment of the invention, there is provided an image processing apparatus that processes image data supplied from outside and outputs the image data to a hold-type display device. The image processing apparatus includes a correction processing section that performs a correction process to correct the pixel value of each pixel in the image data by performing, according to the magnitude of a motion vector in the image data, a spatial LPF (low-pass filter) process on the image data of the frame to be displayed on the display device, the spatial LPF process making the slope of a changing edge portion in the image data more gentle.
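As a rough, non-authoritative illustration of this correction, a one-dimensional spatial low-pass filter can be applied along the motion direction with a tap count that follows the motion vector magnitude, so that a moving edge becomes a ramp roughly as wide as the per-frame motion; the taps-equal-motion rule below is an assumption, not the patent's specified filter.

```python
import numpy as np

def motion_adaptive_lpf(row: np.ndarray, motion_px: int) -> np.ndarray:
    """Blur a scan line with a box LPF whose width follows the motion amount,
    so an edge that travels `motion_px` pixels per frame gets a gentler slope."""
    taps = max(int(abs(motion_px)), 1)
    if taps == 1:
        return row.astype(np.float32)          # no motion: leave the line untouched
    kernel = np.ones(taps, dtype=np.float32) / taps
    padded = np.pad(row.astype(np.float32), (taps // 2, taps - 1 - taps // 2), mode='edge')
    return np.convolve(padded, kernel, mode='valid')
```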
An image display system according to an embodiment of the invention includes: an image processing apparatus that processes image data supplied from outside; and a hold-type display device that displays an image based on the processed image data output from the image processing apparatus.
In the image processing apparatus and the image display system according to the embodiments of the invention, the pixel value of each pixel in the image data is corrected by a spatial LPF (low-pass filter) process performed on the image data of the frame to be displayed, according to the magnitude of the motion vector in the image data. Thus, in the hold-type display device, by utilizing an effect substantially equivalent to increasing the frame rate through interpolation in the spatial direction, motion blur in a moving object caused by the eye-trace integration effect (hold-type blur such as a blurred leading edge, a trailing rear edge, and a delayed perceived position) is suppressed. In addition, unlike the existing double-rate drive technique (interpolation in the time direction), the device itself does not need to be modified, so the problem of increased cost does not arise. Furthermore, unlike the existing overdrive technique, motion blur is sufficiently suppressed even for gray-scale changes in regions other than the intermediate gray-scale region.
In the image processing apparatus and the image display system according to the embodiments of the invention, the correction processing section preferably performs the correction process by performing, according to the magnitude of the motion vector, a spatial HPF (high-pass filter) process in addition to the LPF process on the image data of the frame to be displayed, the HPF process providing an overshoot region and an undershoot region in the vicinity of the two ends of the changing edge portion in the image data. With this configuration, the combination of the overshoot region and the undershoot region provided by the HPF process improves the liquid crystal response. This suppresses motion blur such as edge blur, edge trailing caused by a luminance change to an intermediate gray level with a delayed response, and a slow falling response from an intermediate gray level. Therefore, motion blur in the hold-type display device is suppressed more effectively, and the image quality of the moving image is further improved.
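A minimal sketch of such an HPF term, assuming it is realised as a negative second difference added to the low-passed line from the previous sketch; this places an overshoot lobe at the top end of the softened edge and an undershoot lobe at its bottom end. The filter shape and gain are assumptions, not the patent's filter coefficients.

```python
import numpy as np

def add_over_undershoot(softened: np.ndarray, hpf_gain: float = 1.0) -> np.ndarray:
    """Add a spatial high-pass term so that an overshoot appears at the top end of
    the ramped edge and an undershoot at its bottom end (illustrative gain only)."""
    line = softened.astype(np.float32)
    lap = np.zeros_like(line)
    lap[1:-1] = line[2:] - 2.0 * line[1:-1] + line[:-2]   # discrete second difference
    return line - hpf_gain * lap                          # lobes only where the slope changes

# e.g. corrected = add_over_undershoot(motion_adaptive_lpf(row, motion_px))
```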
In the image processing apparatus and the image display system according to the embodiments of the invention, the pixel value of each pixel in the image data is corrected by performing a spatial LPF (low-pass filter) process on the image data of the frame to be displayed, according to the magnitude of the motion vector in the image data. Thus, by utilizing an effect substantially equivalent to increasing the frame rate through interpolation in the spatial direction, the eye-trace integration effect is reduced and motion blur is suppressed. In addition, unlike the existing techniques, the problem of increased cost can be avoided, and motion blur is sufficiently suppressed for gray-scale changes in regions other than the intermediate gray-scale region. Accordingly, in the hold-type display device, the image quality of the moving image can be improved by suppressing motion blur while suppressing an increase in cost.
Other and further objects, features, and advantages of the invention will appear more fully from the following description.
Description of drawings
Fig. 1 is an explanatory diagram illustrating an example of the response waveform of liquid crystal in the case where a pulse signal is input to a typical VA-mode liquid crystal.
Each of Figs. 2 to 5 is an explanatory diagram for explaining an example of the relation between the eye-trace integration effect and motion blur in a hold-type display device.
Fig. 6 is an explanatory diagram schematically illustrating an example of an image processing method in an image processing apparatus according to a first embodiment of the invention.
Each of Figs. 7A to 7D is an explanatory diagram illustrating an example of an operation waveform in the case where a step waveform is input to a hold-type display device.
Each of Figs. 8A to 8C is an explanatory diagram illustrating an example of a signal input to or output from the image processing apparatus of the first embodiment.
Fig. 9 is an explanatory diagram illustrating the change, along the spatial direction, of the amount of light integrated on the retina of a user viewing a hold-type display device displaying an image based on an output signal output from the image processing apparatus of the first embodiment.
Fig. 10 is a block diagram illustrating the functional configuration of the image processing apparatus of the first embodiment.
Fig. 11 is a block diagram illustrating the functional configuration of the display device according to the first embodiment.
Fig. 12 is a block diagram illustrating the functional configuration of the correction processing section according to the first embodiment.
Fig. 13 is an explanatory diagram for explaining the function of the high-frequency detection section according to the first embodiment.
Each of Figs. 14 and 15 is an explanatory diagram illustrating a setting example of filter characteristics set by the filter setting section according to the first embodiment.
Fig. 16 is a block diagram illustrating the hardware configuration of the image processing apparatus according to the first embodiment.
Fig. 17 is a flowchart illustrating the processing flow in the image processing method according to the first embodiment.
Fig. 18 is a flowchart illustrating a specific example of the correction process according to the first embodiment.
Fig. 19 is a block diagram illustrating the configuration of an image processing apparatus according to a second embodiment of the invention.
Fig. 20 is a block diagram illustrating a configuration example of the moving-image-blur improvement processing section shown in Fig. 19.
Each of Figs. 21A to 21C is a waveform diagram for explaining an outline of the correction process performed by the correction processing section shown in Fig. 20.
Fig. 22 is a view illustrating an example of the relation between the amount of travel and the number of taps in the LPF and the HPF.
Fig. 23 is a block diagram illustrating a configuration example of the signal characteristic detection section shown in Fig. 20.
Figs. 24A and 24B are schematic waveform diagrams for explaining the MAX value and MIN value in a search range (processing range) and the weights of the MAX value and MIN value.
Fig. 25 is a characteristic diagram illustrating an example of the relation between a high-frequency signal value and a weight.
Each of Figs. 26A to 26C is a schematic waveform diagram illustrating the relation between the MAX position and the MIN position and the rise and fall of an image signal.
Fig. 27 is a block diagram illustrating a configuration example of the correction processing section shown in Fig. 20.
Fig. 28 is a schematic waveform diagram for explaining an outline of the processing performed by the edge replacement processing sections shown in Fig. 27.
Each of Figs. 29A to 29C is a schematic waveform diagram for explaining details of the processing performed by the first edge replacement processing section shown in Fig. 27.
Each of Figs. 30A and 30B is a schematic waveform diagram for explaining details of the processing performed by the first edge replacement processing section shown in Fig. 27.
Each of Figs. 31A to 31C is a schematic waveform diagram for explaining details of the processing performed by the second edge replacement processing section shown in Fig. 27.
Each of Figs. 32A and 32B is a schematic waveform diagram for explaining details of the processing performed by the second edge replacement processing section shown in Fig. 27.
Fig. 33 is a block diagram illustrating a configuration example of the LPF processing section shown in Fig. 27.
Fig. 34 is a block diagram illustrating a configuration example of the HPF processing section shown in Fig. 27.
Each of Figs. 35A and 35B is a schematic waveform diagram illustrating an example of the filtering process performed by the LPF processing section shown in Fig. 27.
Each of Figs. 36A and 36B is a schematic waveform diagram illustrating another example of the filtering process performed by the LPF processing section shown in Fig. 27.
Each of Figs. 37A and 37B is a schematic waveform diagram illustrating an example of the filtering process performed by the LPF processing section and the HPF processing section shown in Fig. 27.
Fig. 38 is a schematic waveform diagram illustrating an example of the filtering process in the HPF processing section shown in Fig. 27.
Fig. 39A is a characteristic diagram illustrating an example of the relation between an image-pickup blur amount and an LPF gain, and Fig. 39B is a characteristic diagram illustrating an example of the relation between a high-frequency signal value and a high-frequency gain.
Fig. 40A is a characteristic diagram illustrating an example of the relation between a low-frequency signal value and a low-frequency gain, Fig. 40B is a characteristic diagram illustrating an example of the relation between the MAX/MIN difference and an HPF amplitude gain, and Fig. 40C is a characteristic diagram illustrating an example of the relation between the MIN value and a correction value Δ.
Fig. 41 is a characteristic diagram illustrating an example of the relation between the motion vector amount and the amount-of-travel gain.
Fig. 42 is a view for explaining moving-image blur in the case of an ideal hold-type display.
Fig. 43 is a view for explaining moving-image blur in the case of a typical liquid crystal response.
Fig. 44 is a view for explaining moving-image blur in the case of frame interpolation.
Fig. 45 is a view for explaining moving-image blur in the case where the LPF process of the second embodiment is applied.
Fig. 46 is a schematic waveform diagram illustrating an example of the rising characteristic of liquid crystal in the case where the LPF process of the second embodiment is applied.
Fig. 47 is a view for explaining moving-image blur in the case of a typical device.
Fig. 48 is a view for explaining moving-image blur in the case where the HPF process of the second embodiment is applied.
Figs. 49A to 49D are schematic waveform diagrams illustrating examples of the liquid crystal response characteristic in the case where the LPF process and the HPF process of the second embodiment are applied.
Fig. 50 is a block diagram illustrating the configuration of a moving-image-blur improvement processing section according to a modification of the second embodiment.
Fig. 51 is a block diagram illustrating the configuration of a moving-image-blur improvement processing section according to another modification of the second embodiment.
Each of Figs. 52A and 52B is a block diagram illustrating the configuration of an image processing section according to still another modification of the second embodiment.
Each of Figs. 53A and 53B is a block diagram illustrating the configuration of an image processing apparatus according to still another modification of the second embodiment.
Fig. 54 is a block diagram illustrating the configuration of an image processing apparatus according to still another modification of the second embodiment.
Each of Figs. 55A and 55B is a block diagram illustrating the configuration of an image processing apparatus according to still another modification of the second embodiment.
Each of Figs. 56A and 56B is a block diagram illustrating the configuration of an image processing apparatus according to still another modification of the second embodiment.
Figs. 57A and 57B are schematic waveform diagrams for explaining application examples of the filtering process to a single-pixel structure and a sub-pixel structure.
Embodiment
Embodiments of the invention will be described in detail below with reference to the accompanying drawings. The description will be given in the following order. In the embodiments and drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
1. First embodiment (an example of image processing using an LPF (low-pass filter))
2. Second embodiment (an example of another image processing using an LPF and an HPF (high-pass filter))
3. Modifications
1. First embodiment
Measures for improving motion blur
Before describing a preferred embodiment of the invention (the first embodiment), the process by which the inventors arrived at the image processing apparatus according to the embodiment of the invention, as a measure for improving motion blur in a hold-type display device such as a liquid crystal display device, will be described.
As described above, in a hold-type display device, motion blur such as a blurred leading edge, a trailing rear edge, and a delayed perceived position appears in a moving object. In the existing art, such motion blur has been considered to be caused by the slow response speed of display elements such as liquid crystal. Therefore, the overdrive technique has been adopted as a measure for improving motion blur in the hold-type display device. By adopting the overdrive technique, the response speed of display elements such as liquid crystal can be improved.
On the other hand, the appearance of motion blur in the hold-type display device is not caused only by the slow response speed of display elements such as liquid crystal. The eye-trace integration effect, that is, the afterglow characteristic exhibited by the human retina when following a moving image, is also one of the main causes of motion blur. Therefore, with the typical overdrive technique alone, which considers only the slow response speed of display elements such as liquid crystal, motion blur in the hold-type display device cannot be suppressed sufficiently.
In this regard, according to the image processing apparatus described in Japanese Unexamined Patent Application Publication No. 2005-43864, filed with the Japan Patent Office by the assignee of the present application, motion blur in the hold-type display device can be suppressed by considering not only the liquid crystal response speed but also the eye-trace integration effect when the overdrive technique is adopted.
For a gray-scale change in the intermediate gray-scale region, the overdrive technique can exhibit its effect of increasing the response speed of the display element. However, when the target voltage is close to the limit of the applicable voltage range, as in white display or black display, a sufficiently high voltage cannot be applied to the display element, so the overdrive technique cannot fully exhibit its effect of increasing the response speed of the display element.
Moreover, in a liquid crystal display device adopting the VA-mode drive method, the alignment of the liquid crystal molecules takes time to change when rising from level 0 (black, for example). Therefore, when only the overdrive technique is adopted, there are cases where one frame is not enough for the response.
Here, with reference to Fig. 1, the response characteristic of liquid crystal will be described using an example in which a pulse signal is input to a typical VA-mode liquid crystal. Fig. 1 is an explanatory diagram illustrating an example of the response waveform of the liquid crystal in the case where a pulse signal is input to a typical VA-mode liquid crystal. In Fig. 1, the vertical axis indicates the gray scale of the liquid crystal, and the horizontal axis indicates time. Fig. 1 shows the response waveform L of the liquid crystal with a solid line. The response waveform L of the liquid crystal is generated when the pulse signal P, which has a waveform lasting one frame period and is shown by a broken line, is input to the typical VA-mode liquid crystal.
As shown in Fig. 1, in the case of the VA-mode liquid crystal, the response characteristics in the rise and the fall differ from each other. In the rise, the response occurs along the VT curve, so there is a delay from the signal input to the response. In the fall, on the other hand, the response does not occur along the VT curve, so although there is a delay, it is not so large. In particular, as shown in the region U surrounded by the broken line in Fig. 1, the delay in response time for a rise starting from a low gray level (for example, level 0) is large. It can also be seen that, in the rise, the response time differs considerably depending on the gray level at the time of the signal input.
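To make the rise/fall asymmetry concrete, the response can be pictured, purely as an assumption, as a first-order transition with a slower time constant for rises (and slower still when rising from level 0) than for falls; the time constants below are hypothetical, not measured values.

```python
import numpy as np

def lc_step(start: float, target: float, frames: float = 1.0, steps: int = 60) -> np.ndarray:
    """Toy first-order model of a VA liquid-crystal gray-level transition.
    Time constants are expressed in frame periods and are illustrative only."""
    if target > start:
        tau = 0.8 if start == 0 else 0.4     # slow rise, slowest when starting from black
    else:
        tau = 0.2                            # comparatively fast fall
    t = np.linspace(0.0, frames, steps)
    return target + (start - target) * np.exp(-t / tau)
```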
Accordingly, the inventors also studied the relation between the eye-trace integration effect and motion blur in the hold-type display device. As a result, the inventors found that motion blur in the hold-type display device can be suppressed effectively by utilizing the response-time difference that depends on the gray level and controlling the application of the drive voltage according to the response time of the display element such as liquid crystal, and thus arrived at the invention of the present application.
Eye-trace integration effect
Hereinafter, with reference to Figs. 2 to 5, the relation between the eye-trace integration effect and motion blur in the hold-type display device, studied by the inventors, will be described. Each of Figs. 2 to 5 is an explanatory diagram for explaining an example of the relation between the eye-trace integration effect and motion blur in the hold-type display device.
In the following description, a liquid crystal display device is used as an example of the hold-type display device. The description is based on the assumption that each of the plurality of pixels constituting a frame or a field (hereinafter simply called a "frame" for convenience of description) corresponds to one display element (here, a liquid crystal element) of the screen constituting the liquid crystal display device.
As a condition for the image to be handled, it is assumed that an image with a step change moves at a constant speed over a background of a single solid color. Under these conditions, the luminance on the eye trace can be calculated using a periodic function when eye-trace integration is considered, so only one frame needs to be considered in the eye-trace integration. To simplify the calculation, the luminance change at the boundary (edge portion) of the image in this example is assumed to be vertical.
Whether the improvement of motion blur in the hold-type display device reaches a target quality can be determined based on whether a result equal or superior to the eye-trace integration in an LCD driven at 120 Hz, in which a typical 60 Hz drive is driven at double rate, is obtained. Determination items for this target quality include the steepness of the boundary (leading edge and rear edge) in the eye-trace integration and the delay in reaching the half-value point of luminance (half of the maximum luminance).
Here, Figs. 2 to 5 show a case where an image with a step change travels from left to right at a speed of 4 pixels/frame on the display screen of the liquid crystal display device. The top part of each of Figs. 2 to 5 shows the waveform of the input image signal input to the liquid crystal display device. The middle part of each of Figs. 2 to 5 shows the change over time of the output level (luminance) of the liquid crystal when an image based on the input image signal in the top part is displayed on the liquid crystal display device. The bottom part of each of Figs. 2 to 5 shows the amount of light captured on the retina of a user (person) viewing the image displayed on the liquid crystal display device (that is, the result of eye-trace integration).
In the middle part of Figs. 2 to 5, the position in the horizontal direction indicates the position (spatial direction) of the pixels constituting each frame, and the change in the vertically downward direction indicates the passage of time. In the middle part of Figs. 2 to 5, one pixel corresponds to one liquid crystal element, and the output level of each liquid crystal element is shown in gray scale. Reference numerals 0F, 1F, and so on each indicate the number of a frame.
In the bottom part of Figs. 2 to 5, at time "tb" in the middle part of Figs. 2 to 5, the position in the horizontal direction indicates the position on the retina of the user's eye (spatial direction), and the vertically upward direction indicates the amount of light captured at that position on the retina. That is, regions S1, S2, S3, and S4, obtained by integrating the amount of light at each position on the user's retina, represent the results of eye-trace integration. More specifically, in the middle part of Figs. 2 to 5, the arrows tilted toward the lower right represent the motion of the user's eye. At each moment between time "ta" and time "tb", the light at the predetermined level output from the liquid crystal at the position from which the tilted arrow starts enters the user's retina, and the incident light at each moment is accumulated continuously on the retina. As a result, at time "tb", light with the integrated amount (the integrated value of the incident light levels) is captured on the user's retina.
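The eye-trace integration just described can be simulated, as a sketch under the same assumptions (uniform eye motion between times "ta" and "tb", one frame considered), by summing the display output along the slanted eye path through a space-time luminance map; the sampling scheme below is illustrative only.

```python
import numpy as np

def eye_trace_integration(output_level: np.ndarray, speed_px_per_frame: float) -> np.ndarray:
    """Simulate the amount of light accumulated on the retina over one frame.
    output_level[t, x] is the display luminance at sub-frame time step t and pixel x;
    the eye moves smoothly by speed_px_per_frame pixels per frame, so each retinal
    position collects light along a slanted path through the space-time map."""
    n_t, n_x = output_level.shape
    retina = np.zeros(n_x, dtype=np.float32)
    xs = np.arange(n_x, dtype=np.float32)
    for t in range(n_t):
        shift = speed_px_per_frame * t / n_t               # eye offset at this sub-frame moment
        seen = np.interp(xs + shift, xs, output_level[t])  # luminance seen by each retinal point
        retina += seen
    return retina / n_t                                    # time average over the frame (regions S1-S4)
```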
Based on each of Figs. 2 to 5, the relation between the eye-trace integration effect and motion blur in the hold-type display device studied by the inventors will be described below.
Fig. 2 shows the relation between the eye-trace integration effect and motion blur at time "tb" in the case where the input image signal having the waveform shown in the top part of the figure (the input image signal corresponding to frame 1F in the figure) is input to a display device using an ideal hold element, that is, a hold-type display element (for example, liquid crystal) whose response time is 0.
As shown in Fig. 2, in the display device using the ideal hold element, the response time to the step input is 0. Therefore, the output level of the liquid crystal instantaneously reaches the luminance corresponding to the input image signal (the target luminance), and the liquid crystal response is very fast. However, since the eye-trace integration effect occurs even with the ideal hold element, motion blur of 4 pixels, equivalent to the travel speed of the step-change input image, is produced.
Fig. 3 shows the relation between the eye-trace integration effect and motion blur at time "tb" in the case where the input image signal having the waveform shown in the top part of the figure (the input image signal corresponding to frame 1F in the figure) is input to a typical liquid crystal display device (LCD).
As shown in Fig. 3, in the typical LCD, the response speed to the step input is slow, and a response time of approximately one frame is needed before the target luminance is reached. Moreover, since the LCD performs hold-type driving, the eye-trace integration effect occurs. Thus, when the step input is performed in the typical LCD, the eye-trace integration effect is added to the response time based on the liquid crystal response speed. Therefore, for example, motion blur of 8 pixels, corresponding to twice the travel speed of the step-change input image, is produced.
Fig. 4 shows the relation between the eye-trace integration effect and motion blur at time "tb" in the case where the input image signal having the waveform shown in the top part of the figure (the input image signal corresponding to frame 1F in the figure) is input to an LCD performing double-rate drive (doubling the moving-image display frequency), where the LCD displays an interpolated image based on the motion vector in a subfield formed by evenly dividing one frame.
As shown in Fig. 4, even in the LCD performing double-rate drive, the response speed of the liquid crystal itself does not change compared with the typical LCD. On the other hand, in the LCD performing double-rate drive, one frame is divided into two subfields, and an interpolated image is displayed in each subfield. Thus, the hold time for one input image signal is halved, and the eye-trace integration effect is reduced accordingly. As a result, the motion blur is reduced to, for example, 5 pixels as a whole. As described above, whether the improvement of motion blur in the hold-type display device reaches the target quality can be determined based on whether the motion blur is equal to or less than the 5-pixel motion blur of the LCD performing double-rate drive.
Fig. 5 shows the relation between the eye-trace integration effect and motion blur at time "tb" in the case where the input image signal having the waveform shown in the top part of the figure (the input image signal corresponding to frame 1F in the figure) is input to an image processing apparatus to which an embodiment of the invention is applied.
In the image processing apparatus to which the embodiment of the invention is applied, response time information indicates the time from when the drive voltage for displaying an image with the target luminance is applied to the hold-type display device until the image with the luminance corresponding to the drive voltage is displayed on the display device. The response time information is stored in correspondence with the amount of luminance change. Based on the response time information and the motion vector of the input image, the luminance of each pixel constituting the frame to be displayed is corrected, pixel by pixel, in the frame (0F in this example) preceding the frame to be displayed (1F in this example), that is, at time "ta" in this example. For example, this correction is performed so that each pixel in the frame (1F) to be displayed has the target luminance. In the example shown in Fig. 5, for the pixels (4 pixels from the right) that are first displayed in frame 1F, which is the frame to be displayed, the voltage applied to the liquid crystal corresponding to each pixel is adjusted in frame 0F, and the output level of the liquid crystal is adjusted for each pixel (see the step-like portion of the liquid crystal output level in frame 0F). Thus, in the frame (1F) to be displayed, each pixel has the target luminance.
In this way, in the frame (0F) preceding the frame (1F) to be displayed, a voltage suited to each pixel is applied in advance (the pixel value is corrected) to the liquid crystal corresponding to each pixel, taking the liquid crystal response time into account, so that each pixel constituting the frame to be displayed reaches the target luminance. Thus, the eye-trace integration effect is reduced significantly. As a result, as shown in Fig. 5, the motion blur is reduced to, for example, 2 pixels as a whole, and a motion-blur suppression effect more effective than that of the LCD performing double-rate drive can be seen. In the embodiment of the invention, the pixel value is corrected for each pixel. Therefore, the motion-blur suppression effect of the correction process becomes more effective as higher-quality pixels such as those of high-resolution displays are realized, as the response-time difference depending on the gray-scale change becomes larger, as in VA-mode liquid crystal, and as the travel speed (motion vector amount) of the moving object becomes higher.
Accordingly, when an image processed by the image processing apparatus to which the present embodiment of the invention is applied is displayed on the hold-type display device, a motion-blur suppression effect more effective than that of the LCD performing double-rate drive can be obtained. In the LCD performing double-rate drive, the frame rate is increased by dividing a frame into a plurality of subfields and inserting an interpolated image into the input image; the hold time is thereby shortened and motion blur is suppressed. In the image processing apparatus to which the embodiment of the invention is applied, on the other hand, interpolation is performed along the spatial direction rather than the time direction based on the motion vector, and the interpolation result is converted from a spatial change into a temporal change based on the response time information. Thus, an effect of substantially increasing the frame rate is used. As a result, in the hold-type display device, the moving-image response characteristic is improved and motion blur can be suppressed.
Outline of the image processing method according to the first embodiment of the invention
Hereinafter, with reference to Fig. 6, an outline of an example of the image processing method in the image processing apparatus according to the first embodiment will be described. Fig. 6 is an explanatory diagram schematically illustrating an example of the image processing method in the image processing apparatus according to the first embodiment of the invention.
As shown in Fig. 6, when input image data is input to the image processing apparatus 100, the image processing apparatus 100 compares the input image data corresponding to the frame to be displayed with the image data of the frame preceding the frame to be displayed, which is stored in the memory 5-1 of the image processing apparatus 100, and detects the motion vector of the input image (S11). The detected motion vector is used in the next step (S13), in which an interpolated image is generated. The detected motion vector is also used in the subsequent correction process and overdrive process, and may be stored in the memory 5-1 if necessary.
Next, based on the motion vector detected in step S11, the image processing apparatus generates an interpolated image to be inserted between the frame to be displayed and the frame preceding it (S13). By generating the interpolated image, the moving-image display frequency is doubled (in a typical LCD, the moving-image display frequency is increased from 60 Hz to 120 Hz). The generated interpolated image is used in the subsequent correction processing step (S15) and may be stored in the memory 5-1. In the embodiment of the invention, the interpolated-image generation step (S13) is not always necessary. By performing the correction process (S15) described next, a sufficient moving-image-blur suppression effect can be obtained in the hold-type display device without increasing the moving-image display frequency (frame rate).
Next, after a predetermined time has elapsed, based on the motion vector detected in step S11 and the response time information stored in a look-up table (LUT) 5-2, the image processing apparatus generates control information for displaying the interpolated image generated in step S13, so that an image having the target luminance is displayed in the frame to be displayed. The image processing apparatus then combines the interpolation information with the input image data and generates corrected image data in which the pixel values are corrected (S15). The generated corrected image data is used in the subsequent overdrive process (S17). The correction processing step (S15) is performed in the frame preceding the frame to be displayed. When step S13 is not performed (no interpolated image is generated), the interpolated image is not used in step S15; instead, corrected pixel values for displaying the image having the target luminance in the frame to be displayed are calculated directly based on the motion vector detected in step S11 and the response time information stored in the look-up table (LUT) 5-2, and the corrected image data is generated based on the calculated corrected pixel values.
Next, using the input image data stored in the memory 5-1 and the corrected image data generated in step S15, the image processing apparatus performs the overdrive process on the corrected image data corresponding to the frame to be displayed (S17). As a result, display image data to be displayed on the hold-type display device is generated.
Next, with reference to Figs. 7A to 7D, operation waveforms in the case where a step waveform is input to the hold-type display device will be described. Each of Figs. 7A to 7D is an explanatory diagram illustrating an example of an operation waveform in the case where a step waveform is input to the hold-type display device. In Figs. 7A to 7D, the vertical direction indicates the luminance of each pixel constituting the frame, and the horizontal direction indicates the position (spatial direction) of each pixel constituting the frame. The regions divided by broken lines in Figs. 7A to 7D are called units, each composed of a plurality of pixels (4 pixels in this example).
Fig. 7A shows the waveform of a step signal input to a typical LCD. As shown in Fig. 7A, the input step signal has an edge portion at the right edge of the N-th unit. The height of the edge indicates the target luminance in the frame to be displayed.
Fig. 7B shows the operation waveform in the case where the step signal is input to an LCD adopting the overdrive method. As shown in Fig. 7B, in the overdrive method, for example, in the first frame in which the input changes, a voltage higher than the target voltage for displaying the image with the target luminance on the display device is applied, and the luminance transition is accelerated. Thus, at the position of the N-th unit, the luminance becomes higher than the target luminance. In the typical overdrive method, however, the motion of an object moving in the frame, that is, the motion vector, is not detected, and a uniform voltage is applied regardless of the motion vector. Therefore, in the portion having the luminance higher than the target luminance, the luminance is uniform over the whole N-th unit (the luminance is uniform in each pixel included in the N-th unit).
Fig. 7C shows the operation waveform in the case where the step signal is input to an LCD adopting the method, described in Japanese Unexamined Patent Application Publication No. 2005-43864, of applying a voltage based on the motion vector when performing overdrive. As shown in Fig. 7C, in this method, when a voltage higher than the target voltage is applied, the motion vector of the input image is detected, and the voltage to be applied is adjusted for each pixel based on the detected motion vector. Thus, the motion-blur suppression effect in the hold-type display device is improved compared with the typical overdrive method.
However, as described above, since there is a limit on the range of voltages that can be applied to the liquid crystal, when the target voltage for black display, white display, or the like is close to the limit of the voltage range (that is, when the gray scale changes in a high-gray-scale region or a low-gray-scale region), there is a problem that a voltage high enough to increase the liquid crystal response speed may not be applied, and the motion-blur suppression effect may not be exhibited sufficiently. Therefore, in the embodiment of the invention, the correction process described in step S15 of Fig. 6 is performed.
Fig. 7D shows an example of the operation waveform in the case where the step signal is input to an image processing apparatus to which the image processing method according to the first embodiment of the invention is applied. As shown in Fig. 7D, in the method according to the first embodiment of the invention, in the frame preceding the frame to be displayed, the luminance value of each pixel constituting the frame to be displayed is corrected pixel by pixel based on the response time information and the motion vector of the input image. For example, this correction is performed so that each pixel in the frame to be displayed has the target luminance. As a result, corresponding to the liquid crystal response speed, the luminance at the edge portion of the step signal does not drop vertically and sharply from high luminance to low luminance, but decreases gradually from high luminance to low luminance, for example in a step-like shape. Fig. 7D shows the operation waveform in the case where the overdrive method considering the motion vector is adopted in addition to the image processing method according to the first embodiment of the invention. In the embodiment of the invention, however, the overdrive method may be adopted as necessary; it is not always necessary.
Next, with reference to Figs. 8A to 8C and 9, the operation of the correction process in the image processing apparatus to which the embodiment of the invention is applied will be described using the waveform of an input signal input to the image processing apparatus and the output signals output from the image processing apparatus. Fig. 8A is an explanatory diagram illustrating an example of the input signal input to the image processing apparatus to which the embodiment of the invention is applied. Each of Figs. 8B and 8C is an explanatory diagram illustrating an example of an output signal output from the image processing apparatus to which the embodiment of the invention is applied. Fig. 9 is an explanatory diagram illustrating the change, along the spatial direction, of the amount of light integrated on the retina of a user viewing the hold-type display device displaying an image based on the output signal output from the image processing apparatus to which the embodiment of the invention is applied.
In Figs. 8A to 8C, the position in the horizontal direction indicates the position (spatial direction) of each pixel constituting the frame, and the vertical direction indicates the luminance level output from the display device. The regions divided by broken lines in Figs. 8A to 8C each represent one pixel constituting the frame. The following description is based on the assumption that the input signal input to the image processing apparatus is a signal with a step waveform, and that the input image based on the signal with the step waveform has a motion vector of 4 dot/v.
The signal with the step waveform having the edge portion shown in Fig. 8A is input to the image processing apparatus. As described above, the step signal travels from left to right in the figure at a speed of 4 dot/v. Before the step signal is input, black display is performed on the display device, and with the input of the step signal, the black display changes to white display.
In the image processing apparatus to which the embodiment of the invention is applied, as shown in Fig. 8B, in particular to smooth the rise of the hold element (liquid crystal or the like), a voltage is applied in advance, ahead of the rising portion of the input step signal, according to the response characteristic of the liquid crystal, so that the luminance level decreases gradually (correction process). This processing is particularly important for a rise from black display. At this time, the range of pixels to which the voltage is applied in advance is determined based on the motion amount; for example, in this example, the voltage is applied in advance to a range of 4 pixels, which matches the motion amount (4 dot/v). In applying the voltage in advance, the voltage applied to each pixel can be determined for each pixel. For example, as shown in Fig. 8B, the voltage may be applied so that the luminance level decreases gradually in a step-like manner. Alternatively, the voltage may be applied so that the luminance level decreases gradually in a linear manner rather than in a step-like manner. Decreasing the luminance level linearly is preferable because the rise becomes smoother.
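A minimal sketch of the pre-application just described, assuming a rising step edge in a scan line with known horizontal motion: the pixels the edge will sweep over in the next frame are pre-driven with a linearly decreasing level (the step-like variant would quantise this ramp). The ramp rule is illustrative, not the patent's exact drive values.

```python
import numpy as np

def pre_apply_ramp(prev_row: np.ndarray, edge_pos: int, target_level: float,
                   motion_px: int) -> np.ndarray:
    """Pre-drive the `motion_px` pixels ahead of a rising edge in the previous frame
    so their luminance decreases linearly from the target level toward the background,
    smoothing the rise of the hold element (e.g. liquid crystal rising from black)."""
    out = prev_row.astype(np.float32).copy()
    for k in range(motion_px):
        frac = 1.0 - (k + 1) / (motion_px + 1)           # 1 -> 0 across the pre-driven range
        x = edge_pos + k
        if 0 <= x < out.size:
            out[x] = max(out[x], frac * target_level)    # only raise, never darken existing pixels
    return out
```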
Fig. 8C shows the operation waveform in the case where the overdrive technique described in Japanese Unexamined Patent Application Publication No. 2005-43864 is applied to image data on which the correction process of the image processing apparatus according to the embodiment of the invention has been performed. In this case, as shown in Fig. 8C, overdrive is added so that a signal with a chevron-shaped (mountain-shaped) waveform is output. Since a voltage higher than the target voltage is applied by the overdrive, the voltage applied in advance for the correction process is also higher. Therefore, the luminance level is higher overall than in the case of Fig. 8B (the case where only the correction process according to the first embodiment of the invention is performed).
When the display operation is performed based on Figs. 8A to 8C as described above, the amount of light accumulated on the user's retina changes along the spatial direction as shown in Fig. 9. That is, when neither the overdrive nor the correction process according to the first embodiment of the invention is performed, the luminance level integrated on the user's retina does not reach the luminance level of the input step signal, as shown by the two-dot chain curve, and a large delay appears in the display. Therefore, motion blur occurs in the hold-type display device. When only the overdrive is performed, the difference between the luminance level integrated on the user's retina and the luminance level of the input step signal is smaller, and the display delay is reduced slightly; however, since the delay still remains, the motion-blur suppression effect is insufficient. On the other hand, when both the correction process according to the first embodiment of the invention and the overdrive are performed, the luminance level integrated on the user's retina reaches the luminance level of the input step signal, as shown by the solid curve, and the luminance level does not change sharply but decreases gradually. As a result, the eye-trace integration effect is sufficiently suppressed, and the motion-blur suppression effect becomes effective in the hold-type display device.
Configuration of the image display system according to the first embodiment of the invention
Next, with reference to Figs. 10 and 11, the functional configuration of the image display system 10 according to the first embodiment of the invention, a system capable of realizing the functions described above, will be described in detail. Fig. 10 is a block diagram illustrating the functional configuration of the image processing apparatus 100 constituting the image display system 10 according to the first embodiment of the invention. Fig. 11 is a block diagram illustrating the functional configuration of the display device 200 constituting the image display system 10 according to the first embodiment of the invention.
As shown in Figs. 10 and 11, the image display system 10 according to the first embodiment of the invention includes the image processing apparatus 100 and the hold-type display device 200. The image processing apparatus 100 processes input image data input from outside and outputs display image data, and the hold-type display device 200 actually displays an image based on the display image data input from the image processing apparatus 100. Here, the term "system" means a unit in which a plurality of devices (functions) are logically assembled, and it does not matter whether the devices (functions) of the respective configurations are in the same housing. Therefore, for example, there is a case where, as in a television receiver, the image processing apparatus 100 and the display device 200 constituting the image display system 10 are arranged in a single device, and a case where only the display device 200 is handled as a separate housing. Hereinafter, the functional configurations of the image processing apparatus 100 and the display device 200 constituting the image display system 10 will be described in detail.
Configuration of the image processing apparatus 100
As shown in Fig. 10, the image processing apparatus 100 according to the first embodiment of the invention includes an input image data storage section 110, a motion vector detection section 120, a response time information storage section 130, a correction processing section 140, and an output section 160.
The input image data storage section 110 stores the input image data, corresponding to each of a plurality of successive frames, that is input to the image processing apparatus 100 from outside. More specifically, for example, when input image data for displaying the image of the frame to be displayed is input to the image processing apparatus 100, the input image data is stored in the input image data storage section 110. Furthermore, when input image data for displaying the image of the next frame to be displayed is input to the image processing apparatus 100, the input image data of the frame preceding that next frame is stored and kept as it is, and is used for motion vector detection in the motion vector detection section 120. The input image data stored in the input image data storage section 110 may be deleted from the temporally oldest data as necessary.
For example, when the input image data of the frame to be displayed is input to the motion vector detection section 120, the motion vector detection section 120 extracts the input image data of the frame preceding the frame to be displayed from the input image data storage section 110. The motion vector detection section 120 compares the input image data of the frame to be displayed with the input image data of the preceding frame. The motion vector detection section 120 focuses on an object moving in the displayed image and detects the motion vector of the input image data of the frame to be displayed based on the direction and distance of the motion of the object. As in the first embodiment, the motion vector detection section 120 may be a component in the image processing apparatus 100, or may be, for example, a component in a device outside the image processing apparatus 100, such as an MPEG decoder or an IP converter. In the latter case, the motion vector of the input image data is detected separately in the device outside the image processing apparatus 100 and is input to the image processing apparatus 100.
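For illustration only, the comparison performed by the motion vector detection section 120 can be pictured as a simple block-matching search between the previous and current frames; the block size, search range, and SAD criterion below are common assumptions rather than the patent's specified detection method.

```python
import numpy as np

def block_motion_vector(prev_frame: np.ndarray, cur_frame: np.ndarray,
                        y: int, x: int, block: int = 8, search: int = 8) -> tuple[int, int]:
    """Return (dy, dx) minimising the sum of absolute differences between the block at
    (y, x) in the current frame and candidate blocks in the previous frame."""
    ref = cur_frame[y:y + block, x:x + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    h, w = prev_frame.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                cand = prev_frame[yy:yy + block, xx:xx + block].astype(np.int32)
                sad = np.abs(ref - cand).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
    return best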
The response time information storage section 130 stores time information from when a drive voltage is applied to the display device 200 until the display device 200 displays the gray level corresponding to the drive voltage, that is, response time information indicating the response time of the hold-type display element, in correspondence with the amount of gray-scale change in the display device 200. As a form in which the response time information is stored in the response time information storage section 130, for example, the amount of gray-scale change and the response time of the display element corresponding to that amount may be stored in the form of a look-up table (LUT). Alternatively, a function expressing the relation between the amount of gray-scale change and the response time of the display element may be obtained in advance and stored in the response time information storage section 130. In that case, the input image data of the frame to be displayed is compared with the input image data of the preceding frame, the amount of gray-scale change is calculated for each pixel, and the calculated amount of gray-scale change is converted into response time information using the function stored in the response time information storage section 130. Such a function can be realized using hardware such as a RAM or ROM.
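The stored response time information can be pictured, as an assumption about the data layout only, as a two-dimensional look-up table indexed by the start and target gray levels, with interpolation between stored grid points; the class below is a sketch, not the section 130 implementation.

```python
import numpy as np

class ResponseTimeLUT:
    """Look-up table mapping (start gray level, target gray level) to a response time
    in frames. The grid and values are placeholders, not measured panel data."""
    def __init__(self, levels: np.ndarray, times: np.ndarray):
        self.levels = levels          # e.g. np.array([0, 64, 128, 192, 255])
        self.times = times            # times[i, j]: response time from levels[i] to levels[j]

    def lookup(self, start: float, target: float) -> float:
        i = np.interp(start, self.levels, np.arange(len(self.levels)))
        j = np.interp(target, self.levels, np.arange(len(self.levels)))
        i0, j0 = int(i), int(j)
        i1 = min(i0 + 1, len(self.levels) - 1)
        j1 = min(j0 + 1, len(self.levels) - 1)
        fi, fj = i - i0, j - j0
        top = (1 - fj) * self.times[i0, j0] + fj * self.times[i0, j1]
        bot = (1 - fj) * self.times[i1, j0] + fj * self.times[i1, j1]
        return (1 - fi) * top + fi * bot                 # bilinear interpolation on the grid
```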
For each pixel constituting the frame, correction processing section 140 corrects the pixel value of the input image data one frame before the frame is displayed, based on the input image data, the motion vector, and the response time information; the input image data is extracted from input image data storage section 110, the motion vector is detected by motion vector detection section 120, and the response time information is extracted from response time information storage section 130. As a result of the correction, display image data is generated, and the generated display image data is output to output section 160.
Here, correction processing section 140 may include, for example, an interpolation image generating section (not shown), a display timing information generating section (not shown), and an image combining section (not shown). The interpolation image generating section generates, based on the input image data and the motion vector, an interpolation image to be inserted between frames. The display timing information generating section generates, based on the response time information, display timing information indicating the timing at which the interpolation image is to be displayed after a predetermined time has elapsed. The image combining section combines the generated display timing information with the input image data. With such a configuration, the interpolation image generating section generates, based on the motion vector, an interpolation image along the spatial direction rather than along the time direction. By exploiting the fact that the response time of the display element differs depending on the magnitude of the grayscale change, the display timing information generating section converts the interpolation image into display timing information, so that the change along the spatial direction can be converted into a change along the time direction. Thus, by combining the display timing information with the input image data, an effect similar to that obtained when an interpolation image along the time direction is generated can be obtained; that is, the frame rate is substantially increased by using the interpolation image along the spatial direction, which is easily generated based on the motion vector.
Instead of the above configuration, a configuration may be adopted in which the pixel value is corrected directly by using a spatial filter such as a moving-average filter, without generating an interpolation image. The functional configuration for the latter case is described more specifically below.
The display image data is input from correction processing section 140 to output section 160. Output section 160 outputs the input display image data to display device 200.
Configuration of correction processing section 140
Here, referring to FIG. 12, the functional configuration of the above-described correction processing section 140 is described in further detail. FIG. 12 is a block diagram illustrating the functional configuration of correction processing section 140 according to the first embodiment.
As shown in FIG. 12, correction processing section 140 includes a correction range setting section 141, a maximum/minimum detection section 142, an edge detection section 143, a high-frequency detection section 144, an outside replacement section 145, a filter setting section 146, a filter processing section 147, a gain adjustment section 148, a selection section 149, and a combining section 150.
Correction range setting section 141 sets, based on the motion vector input from motion vector detection section 120, the correction range within the input image data in which pixel values are to be corrected. Specifically, correction range setting section 141 detects a region with motion in the input image data (a portion corresponding to a moving object), and sets the pixels located in that region as the correction range. Information on the set correction range and on the input motion vector is sent to maximum/minimum detection section 142, edge detection section 143, high-frequency detection section 144, and filter setting section 146.
Maximum/minimum detection section 142 detects the maximum value and the minimum value of the input image data (input signal) within the correction range, based on the correction range information sent from correction range setting section 141. Information on the detected maximum and minimum values of the input signal is sent to edge detection section 143 and outside replacement section 145.
Edge detection section 143 detects an edge portion in the input image data (input signal) based on the correction range information and the motion vector information sent from correction range setting section 141 and the information on the maximum and minimum values of the input signal sent from maximum/minimum detection section 142. Edge detection section 143 detects not only the position of the edge (the changing edge portion) but also the edge direction in the changing edge portion (that is, whether the grayscale changes from low to high or from high to low). From the edge direction, it can be determined whether the response of the display element is a rise or a fall. The detected information on the changing edge portion and the edge direction is sent to selection section 149.
High-frequency detection section 144 detects, based on the correction range information sent from correction range setting section 141, a high-frequency signal having a spatial frequency in the input image data within the correction range. Here, the term "high-frequency signal" means a signal whose half wavelength (1/2 wavelength) is smaller than the correction range, as shown in FIG. 13. That is, high-frequency detection section 144 detects, as a high-frequency signal, a signal having a wavelength smaller than twice the correction range. This is because, in the case of a high-frequency signal, both a rising region and a falling region exist within the correction range, so that appropriate processing might not be performed. The detected high-frequency signal is output to gain adjustment section 148 and is used for gain adjustment after the processing in filter processing section 147.
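A minimal sketch of one way such a check could be written. The criterion below is an assumption suggested by the text: the data in the correction range is treated as "high frequency" when it contains both a rising and a falling change, which corresponds to its half wavelength being shorter than the correction range.

```python
def is_high_frequency(correction_range_pixels: list[float]) -> bool:
    pairs = list(zip(correction_range_pixels, correction_range_pixels[1:]))
    rising = any(b > a for a, b in pairs)
    falling = any(b < a for a, b in pairs)
    return rising and falling               # both a rising and a falling region present

print(is_high_frequency([0, 128, 255, 128, 0]))   # True: rise and fall inside the range
print(is_high_frequency([0, 64, 128, 192, 255]))  # False: a single rising edge
```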
Based on the information on the maximum and minimum values of the input signal sent from maximum/minimum detection section 142, outside replacement section 145 performs outside replacement on the input image data (input signal) by using the maximum value and the minimum value. The input image data (input signal) after the replacement is sent to filter processing section 147.
Based on the input image data, the correction range and motion vector information sent from correction range setting section 141, and the response time information extracted from response time information storage section 130, filter setting section 146 sets the characteristics of a spatial filter for correcting the pixel values in the input image data so that, when display device 200 displays the frame to be displayed, an image with the grayscale set based on the input image data is displayed. Naturally, the filter characteristics are applied only to pixels located within the correction range. As the spatial filter according to the first embodiment, for example, a low-pass filter (LPF) such as a moving-average filter can be used. The filter characteristics according to the first embodiment include, for example, the region to be filtered and the tap number of the filter. Such filter characteristics can be realized by appropriately setting the filter coefficients of the filter matrix. Information on the filter characteristics set in this way is sent to filter processing section 147.
Here, referring to FIGS. 14 and 15, examples of setting the filter characteristics are described. FIGS. 14 and 15 are explanatory diagrams illustrating examples of filter characteristics set by filter setting section 146 according to the first embodiment.
FIG. 14 shows a setting example in which filter characteristics that differ from each other are set for the rise and the fall of the display element (liquid crystal or the like). In this example, the filter is applied only to the rising region of the edge. In FIG. 14, four kinds of step signals traveling from left to right in the figure are illustrated as input signals; the four step signals differ from one another in maximum value (maximum luminance), minimum value (minimum luminance), and edge height (the difference between the maximum value and the minimum value). In FIG. 14, the numeric values "255" and "0" represent the luminance of each pixel.
As shown in FIG. 14, although the correction amount of the pixel value differs for each pixel depending on the magnitude of the grayscale change (the difference between the maximum and minimum luminance values), filter characteristics can be set in which the filter is applied only to the rising region of the edge. Specifically, although not illustrated in the figure, the following filter characteristics can be set, for example: filter setting section 146 obtains the information on the edge direction detected by edge detection section 143, determines from the direction of the grayscale change in the changing edge portion whether the region is a rising region or a falling region, and applies the filter characteristics only when the region is determined to be a rising region.
Next, FIG. 15 shows an example in which the tap number of the spatial filter is set according to the motion amount (motion vector) of the input image data. In this example, the tap number of the filter changes in proportion to the motion amount. In FIG. 15, four kinds of step signals traveling from left to right in the figure are illustrated as input signals; the four step signals differ from one another in travel amount (motion amount). From the left of the figure, they are a step signal of a still image (travel amount 0 dot/v), a step signal with a travel amount of 2 dot/v, a step signal with a travel amount of 4 dot/v, and a step signal with a travel amount of 6 dot/v. In FIG. 15, the numeric values "255" and "0" represent the luminance of each pixel.
In the example shown in FIG. 15, filter setting section 146 sets filter characteristics in which the tap number is set equal to the motion amount (number of pixels) of the input image data (for example, when the travel amount is 2 dot/v, the tap number is 2). In this way, when the motion amount of the input image signal is large (because the traveling speed is fast), the tap number of the filter increases, and the correction process can be performed with finer and more accurate pixel values. Therefore, according to image processing apparatus 100 of the first embodiment, motion blur in hold-type display device 200 can be suppressed more effectively when the motion amount of the input image data is large.
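The following sketch combines the tap-number rule of FIG. 15 with a moving-average low-pass filter, which the text names as one admissible spatial filter. The one-dimensional row model, the window placement, and the handling of the row borders are simplifying assumptions of the sketch.

```python
# Moving-average LPF whose tap number equals the motion amount in pixels (FIG. 15).
def lpf_row(row: list[float], motion_px: int) -> list[float]:
    taps = max(1, motion_px)              # tap number = motion amount (in pixels)
    if taps == 1:
        return row[:]                     # still image: no correction
    out = []
    for i in range(len(row)):
        start = max(0, i - (taps - 1) // 2)
        window = row[start:start + taps]  # `taps` consecutive pixels around pixel i
        out.append(sum(window) / len(window))
    return out

# Usage: a step edge travelling at 4 dot/v is averaged over 4 pixels, so its slope
# becomes more gentle, which is the intended effect of the spatial LPF.
print(lpf_row([0, 0, 0, 255, 255, 255], motion_px=4))
```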
In the frame period preceding the frame to be displayed on display device 200, filter processing section 147 applies, to the input image data that has undergone the outside replacement and is sent from outside replacement section 145, a filter having the filter characteristics set by filter setting section 146. Thereby, the pixel values of the pixels located within the correction range are corrected. The input image data whose pixel values have been corrected is sent to gain adjustment section 148. Filter processing section 147 according to the first embodiment applies the filter to the input image data after the outside replacement; however, the filter does not always have to be applied to the data after the outside replacement, and may instead be applied to the input image data itself.
To prevent errors in high-frequency regions, gain adjustment section 148 performs gain adjustment on the corrected input image data sent from filter processing section 147, based on the high-frequency signal sent from high-frequency detection section 144. The input image data after the gain adjustment is sent to selection section 149.
Input to selection section 149 are the detection result of edge detection section 143, for example the information on the changing edge portion and the edge direction sent from edge detection section 143, the input image data whose pixel values have been corrected, sent from filter processing section 147, the input image data itself whose pixel values have not been corrected, extracted from input image data storage section 110, and so on. According to the input information on the changing edge portion and the edge direction, selection section 149 selects either the input image data whose pixel values have been corrected by filter processing section 147 or the input image data whose pixel values have not been corrected by filter processing section 147. Further, only when selection section 149 selects the input image data whose pixel values have been corrected (on which the filter processing has been performed) does it output that corrected input image data to combining section 150. More specifically, for example, when it is determined based on the edge direction that the changing edge portion is in a rising region from low grayscale to high grayscale, selection section 149 selects the input image data whose pixel values have been corrected. On the other hand, when it is determined based on the edge direction that the changing edge portion is in a falling region from high grayscale to low grayscale, selection section 149 selects the input image data whose pixel values have not been corrected. By performing such processing, the filter can be applied only to rising regions, as described with reference to FIG. 14.
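A hedged sketch of the selection rule just described: the filtered value is kept only for a rising edge, and the original value is kept otherwise. The string labels and argument types are assumptions of the sketch.

```python
def select_output(edge_direction: str, filtered_px: float, original_px: float) -> float:
    if edge_direction == "rising":        # changing edge portion from low to high grayscale
        return filtered_px                # use the pixel value corrected by the filter
    return original_px                    # falling edge (or no edge): keep the original value
```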
In the first embodiment, selection section 149 is arranged downstream of filter processing section 147. The input image data filtered by filter processing section 147 and the input image data itself input from outside are both input to selection section 149, and selection section 149 adopts the method of selecting between the filtered input image data input via filter processing section 147 and the input image data input from outside. However, the arrangement is not limited to this method. For example, selection section 149 may determine beforehand, prior to the filtering by filter processing section 147, whether the filter processing is to be performed. When selection section 149 determines that the filter processing is to be performed (for example, when it determines that the changing edge portion is in a rising region), filter processing section 147 may then perform the filter processing.
When the filtered input image data is input from selection section 149 to combining section 150, combining section 150 combines the input image data itself input from outside (on which the filter processing has not been performed) with the filtered input image data, and outputs the combined input image data to output section 160. On the other hand, when no filtered input image data is input from selection section 149 to combining section 150, combining section 150 outputs the unfiltered input image data itself input from outside to output section 160.
Configuration example of display device 200
In the above, the functional configuration of image processing apparatus 100 has been described in detail. Next, referring to FIG. 11, the configuration of display device 200 is described. As shown in FIG. 11, display device 200 is a hold-type display device and includes an image display section 210, a source driver 220, a gate driver 230, and a display control section 240.
Image display section 210 displays an image corresponding to the display image data input from image processing apparatus 100. For example, image display section 210 is a dot-matrix display having an m × n arrangement. Concrete examples of image display section 210 include an active-matrix OLED (Organic Light Emitting Diode) display using a-Si (amorphous silicon) TFTs and an LCD.
Source driver 220 and gate driver 230 are drive means for driving image display section 210 having the m × n arrangement. Source driver 220 supplies data signals to data lines 221, and gate driver 230 supplies selection signals (address signals) to scanning lines 231.
Display control section 240 controls the driving of image display section 210 (the driving of source driver 220 and gate driver 230) based on the display image data input from image processing apparatus 100 to display control section 240. More specifically, at the necessary timing, display control section 240 outputs control signals to be supplied to the respective driver circuits (source driver 220 and gate driver 230), based on the display image data (video signal) obtained from image processing apparatus 100.
In the above, examples of the functions of display device 200 and image processing apparatus 100 according to the first embodiment have been described. Each of the above components may be configured using general-purpose parts and circuits, or may be configured using hardware dedicated to the function of each component. Alternatively, a CPU or the like may perform all the functions of the components. Thus, when the first embodiment is implemented, the configuration may be changed as appropriate according to the technical level at the time.
Hardware configuration of image processing apparatus 100
Next, referring to FIG. 16, the hardware configuration of image processing apparatus 100 according to the first embodiment is described. FIG. 16 is a block diagram illustrating the hardware configuration of the image processing apparatus according to the first embodiment.
Image processing apparatus 100 mainly includes a CPU (central processing unit) 901, a ROM (read-only memory) 903, a RAM (random access memory) 905, a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
According to various programs stored in ROM 903, RAM 905, storage device 919, or a removable recording medium 927, CPU 901 functions as an arithmetic processing device and a control device, and controls all or part of the operation of image processing apparatus 100. ROM 903 stores programs used by CPU 901, calculation parameters, and the like. RAM 905 temporarily stores the programs used in the operation of CPU 901, parameters that change as appropriate during that operation, and the like. CPU 901, ROM 903, RAM 905, and the like are connected to one another by host bus 907, which is configured of an internal bus such as a CPU bus.
Host bus 907 is connected via bridge 909 to external bus 911 such as a PCI (Peripheral Component Interconnect/Interface) bus.
Input device 915 is, for example, an operation means operated by the user, such as a mouse, a keyboard, a touch panel, buttons, switches, and levers. Input device 915 may be, for example, a remote control means (a so-called remote controller) using infrared rays or other radio waves, or may be an externally connected device 929 such as a cellular phone or a PDA supporting the operation of image processing apparatus 100. Furthermore, input device 915 is configured of, for example, an input control circuit that generates an input signal based on the information input by the user using the above operation means and outputs the input signal to CPU 901. By operating input device 915, the user of image processing apparatus 100 can input various data to, and give processing instructions to, image processing apparatus 100.
Output device 917 is configured of a device capable of notifying the user of acquired information visually or audibly, for example a display device such as a CRT display device, a liquid crystal display device, a plasma display device, an EL display device, or a lamp, an audio output device such as a speaker or headphones, a printer, a cellular phone, or a facsimile machine. Specifically, the display device displays various kinds of information, such as image data, in the form of text or images, while the audio output device converts audio data or the like into sound.
Storage device 919 is a device for storing data, configured as an example of the storage section of image processing apparatus 100 according to the first embodiment, and includes, for example, a magnetic storage device such as an HDD (hard disk drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device. Storage device 919 stores the programs executed by CPU 901, various data, image signal data acquired from outside, and the like.
Drive 921 is a reader/writer for a recording medium, and is built into or externally attached to the image signal processing apparatus. Drive 921 reads information recorded on a removable recording medium 927 mounted thereon, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, and outputs the information to RAM 905. Drive 921 can also write information to the removable recording medium 927 mounted thereon, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory. The removable recording medium 927 is, for example, a DVD medium, an HD-DVD medium, a Blu-ray medium, a CompactFlash (registered trademark) (CF) card, a Memory Stick, or an SD memory card (Secure Digital memory card). The removable recording medium 927 may also be, for example, an IC card (integrated circuit card) equipped with a contactless IC chip, or an electronic device.
Connection port 923 is a port for directly connecting a device to image processing apparatus 100, such as a USB (Universal Serial Bus) port, an IEEE 1394 port such as i.Link, a SCSI (Small Computer System Interface) port, an RS-232C port, or an optical audio terminal. By connecting externally connected device 929 to connection port 923, image processing apparatus 100 directly obtains image signal data from externally connected device 929 and provides image signal data to externally connected device 929.
Communication device 925 is, for example, a communication interface configured of a communication device for connecting to communication network 10. Communication device 925 is, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth, or WUSB (Wireless USB), a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication. By using communication device 925, image signals and the like can be transmitted to and received from the Internet, other communication devices, and display device 200. Communication network 10 connected to communication device 925 is configured of a network connected by wire or wirelessly, and may be, for example, the Internet, a home LAN, infrared communication, or satellite communication.
With the above configuration, image processing apparatus 100 obtains information on the input image signal from various information sources, such as externally connected device 929 connected to connection port 923 or communication network 10, and can transmit the image signal to display device 200.
The hardware configuration of display device 200 according to the first embodiment is substantially similar to that of image processing apparatus 100, and description thereof is therefore omitted.
In the above, an example of the hardware configuration capable of realizing the functions of image processing apparatus 100 and display device 200 according to the first embodiment has been described. Each of the above components may be configured using general-purpose members, or may be configured using hardware dedicated to the function of each component. Thus, when the first embodiment is implemented, the hardware configuration may be changed as appropriate according to the technical level at the time.
Processing flow in the image processing method according to the first embodiment of the invention
In the above, the configurations of image processing apparatus 100 and display device 200 according to the first embodiment have been described in detail. Next, referring to FIG. 17, the method of processing an image using image processing apparatus 100 having such a configuration according to the first embodiment is described in detail. FIG. 17 is a flowchart illustrating the processing flow in the image processing method according to the first embodiment.
In the image processing method according to the first embodiment, the input image data input to image processing apparatus 100 from outside is processed, thereby generating the display image data to be output to hold-type display device 200.
Specifically, as shown in FIG. 17, when input image data is input to image processing apparatus 100 from outside, the input image data is stored in input image data storage section 110 (S101) and is simultaneously input to motion vector detection section 120.
When the input image data of the frame to be displayed is input to motion vector detection section 120, motion vector detection section 120 extracts, for example, the input image data of the frame preceding the frame to be displayed from input image data storage section 110. Motion vector detection section 120 compares the input image data of the frame to be displayed with the input image data of the preceding frame, focuses on an object moving in the displayed image, and detects the motion vector of the input image data of the frame to be displayed based on the direction and distance of the motion of that object (S103). The detected motion vector is sent to correction processing section 140 and the like.
Next, when the input image data of the frame to be displayed is input to correction processing section 140 from outside, correction processing section 140 extracts, from response time information storage section 130, the response time information corresponding to the magnitude of grayscale change of each pixel in the frame to be displayed (S105). Based on the input image data input from outside, the motion vector input from motion vector detection section 120, and the response time information extracted from response time information storage section 130, correction processing section 140 performs, for each pixel constituting the frame, a correction process to correct the pixel value in the input image data one frame before the frame is displayed (S107). As a result of the correction process, display image data is generated, and correction processing section 140 outputs the generated display image data to output section 160 (S109).
When the display image data is input to output section 160 from correction processing section 140, output section 160 outputs the input display image data to display device 200 (S111).
Here, referring to FIG. 18, a concrete example of the correction processing step (S107) according to the first embodiment is described. FIG. 18 is a flowchart illustrating the concrete example of the correction process according to the first embodiment.
As shown in FIG. 18, when input image data is input to correction processing section 140 from outside (S201), first, based on the motion vector input from motion vector detection section 120, correction range setting section 141 sets the correction range in which the pixel values in the input image data are to be corrected (S203). Specifically, correction range setting section 141 detects a region with motion in the input image data (a portion corresponding to a moving object), and sets the pixels located in that region as the correction range. Correction range setting section 141 then sends information on the set correction range and on the input motion vector to maximum/minimum detection section 142, edge detection section 143, high-frequency detection section 144, filter setting section 146, and so on.
Next, based on the correction range information sent from correction range setting section 141, maximum/minimum detection section 142 detects the maximum value and the minimum value of the input image data (input signal) within the correction range (S205). Maximum/minimum detection section 142 then sends information on the detected maximum and minimum values of the input signal to edge detection section 143, outside replacement section 145, and so on.
Next, based on the correction range and motion vector information sent from correction range setting section 141 and the information on the maximum and minimum values of the input signal sent from maximum/minimum detection section 142, edge detection section 143 detects an edge region in the input image data (input signal) (S207). At this time, edge detection section 143 detects not only the position where the edge exists (the changing edge portion) but also the edge direction in the changing edge portion (that is, whether the grayscale changes from low to high or from high to low). Edge detection section 143 then sends the detected information on the changing edge portion and the edge direction to selection section 149.
Next, based on the correction range information sent from correction range setting section 141, high-frequency detection section 144 detects a high-frequency signal having a spatial frequency in the input image data within the correction range (S209). Here, the term "high-frequency signal" means a signal whose half wavelength (1/2 wavelength) is smaller than the correction range; that is, high-frequency detection section 144 detects, as a high-frequency signal, a signal having a wavelength smaller than twice the correction range. This is because, in the case of a high-frequency signal, both a rising region and a falling region exist within the correction range, so that appropriate processing might not be performed. High-frequency detection section 144 outputs the detected high-frequency signal to gain adjustment section 148, and the output high-frequency signal is used for gain adjustment after the processing in filter processing section 147.
Next, based on the information on the maximum and minimum values of the input signal sent from maximum/minimum detection section 142, outside replacement section 145 performs outside replacement on the input image data (input signal) by using the maximum value and the minimum value (S211). Outside replacement section 145 sends the replaced input image data (input signal) to filter processing section 147.
Next, when the input image data of the frame to be displayed is input to filter setting section 146 from outside and the correction range and motion vector information is sent from correction range setting section 141 to filter setting section 146, filter setting section 146 extracts, from response time information storage section 130, the response time information corresponding to the magnitude of grayscale change of each pixel in the frame to be displayed (S213).
Based on the input image data, the correction range information, the motion vector, and the response time information, filter setting section 146 sets the characteristics of a spatial filter for correcting the pixel values in the input image data so that, when display device 200 displays the frame to be displayed, an image with the grayscale set based on the input image data is displayed (S215). As the spatial filter according to the first embodiment, for example, a low-pass filter (LPF) such as a moving-average filter can be used. The filter characteristics include, for example, the region to be filtered and the tap number of the filter, and can be realized by appropriately setting the filter coefficients of the filter matrix. Filter setting section 146 sends information on the filter characteristics set in this way to filter processing section 147.
Next, in the frame period preceding the frame to be displayed on display device 200, filter processing section 147 applies, to the input image data that has undergone the outside replacement and is sent from outside replacement section 145, a filter having the filter characteristics set by filter setting section 146. Thereby, the pixel values of the pixels located within the correction range are corrected (S217). Filter processing section 147 then sends the input image data whose pixel values have been corrected to gain adjustment section 148. Filter processing section 147 according to the first embodiment applies the filter to the input image data after the outside replacement; however, the filter does not always have to be applied to the data after the outside replacement, and may instead be applied to the input image data itself.
To prevent errors in high-frequency regions, gain adjustment section 148 performs gain adjustment on the corrected input image data sent from filter processing section 147, based on the high-frequency signal sent from high-frequency detection section 144 (S219). Gain adjustment section 148 sends the gain-adjusted input image data to selection section 149.
When the detection result of edge detection section 143, the input image data whose pixel values have been corrected, sent from filter processing section 147, the input image data whose pixel values have not been corrected, extracted from input image data storage section 110, and the like are input to selection section 149, selection section 149 selects, according to the input information on the changing edge portion and the edge direction, either the input image data whose pixel values have been corrected by filter processing section 147 or the input image data whose pixel values have not been corrected by filter processing section 147. Specifically, for example, selection section 149 determines, based on the edge direction, whether the changing edge portion is in a rising region from low grayscale to high grayscale or in a falling region from high grayscale to low grayscale (S221).
When it is determined as a result that the changing edge portion in the input image data is in a rising region, selection section 149 selects the input image data whose pixel values have been corrected (S223). Selection section 149 then outputs the input image data whose pixel values have been corrected (on which the filter processing has been performed) to combining section 150 (S225).
On the other hand, when it is determined in step S221 that the changing edge portion in the input image data is in a falling region, selection section 149 selects the input image data whose pixel values have not been corrected (S227).
Finally, when the filtered input image data is input to combining section 150 from selection section 149, combining section 150 combines the input image data itself input from outside (on which the filter processing has not been performed) with the filtered input image data (S229), and outputs the combined input image data to output section 160 (S231). On the other hand, when no filtered input image data is input to combining section 150 from selection section 149, combining section 150 outputs the unfiltered input image data itself input from outside to output section 160 (S233).
In the first embodiment, the selection process by selection section 149 is performed after the filter processing by filter processing section 147, and selection section 149 selects between the filtered input image data and the input image data input from outside. However, this is not a limitation. For example, selection section 149 may determine beforehand, prior to the filter processing by filter processing section 147, whether the filter processing is to be performed. When selection section 149 determines that the filter processing is to be performed (for example, when it determines that the changing edge portion is in a rising region), filter processing section 147 may then perform the filter processing.
Second embodiment
Next, a second embodiment of the present invention is described. Components substantially the same as those in the above-described first embodiment are denoted by the same reference numerals, and description thereof is omitted as appropriate.
Configuration of the image processing apparatus as a whole
FIG. 19 shows a block configuration of an image processing apparatus (image processing apparatus 300) according to the second embodiment of the invention. Image processing apparatus 300 includes a high frame rate conversion section 31, a motion blur characteristic detection section 32, and a motion blur improvement processing section 33. The configuration of the display device of the image display system according to the second embodiment is similar to that of display device 200 of image display system 10 according to the first embodiment shown in FIG. 11, and description thereof is therefore omitted.
High frame rate conversion section 31 performs high frame rate conversion processing in frame units on input image data input from outside (for example, a moving image signal such as a television broadcast signal), and generates and outputs converted image data (an image signal). Specifically, high frame rate conversion section 31 performs the high frame rate conversion processing on input image data having a first frame rate, and outputs the resulting converted image data, which has a second frame rate higher than the first frame rate, to motion blur characteristic detection section 32 and motion blur improvement processing section 33. The high frame rate conversion processing is processing performed when the first frame rate at input is lower than the second frame rate at output (display), and is conversion processing that converts the first frame rate into the second, higher frame rate by creating and inserting new frames between the frames constituting the moving image at input.
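As a minimal sketch of what doubling the frame rate could look like, the following inserts one new frame between every pair of input frames; plain averaging stands in for whatever interpolation high frame rate conversion section 31 actually performs and is an assumption of the sketch.

```python
# Frames are modeled as flat lists of pixel values.
def double_frame_rate(frames: list[list[float]]) -> list[list[float]]:
    out = []
    for cur, nxt in zip(frames, frames[1:]):
        out.append(cur)
        out.append([(a + b) / 2 for a, b in zip(cur, nxt)])   # inserted new frame
    out.append(frames[-1])
    return out

# Usage: two input frames at the first frame rate become three frames at the second rate.
print(double_frame_rate([[0, 0, 255], [0, 255, 255]]))
```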
The first frame rate is the frame rate of the moving image at the time when the moving image is input to high frame rate conversion section 31, and may therefore be any frame rate. Here, however, the first frame rate is, for example, the frame rate at which the moving image was captured by an imaging device not shown in the drawings, that is, the image pickup frame rate. Hereinafter, when it is not necessary to distinguish a moving image from the moving image data corresponding to it, both are simply referred to as a moving image. Similarly, when it is not necessary to distinguish a frame from the frame data corresponding to it, both are simply referred to as a frame.
Motion blur characteristic detection section 32 detects, for each frame constituting the converted image data (image signal) provided from high frame rate conversion section 31, information indicating the characteristics of the moving image (moving image characteristic information). The detected motion blur characteristic information is provided to motion blur improvement processing section 33. As the motion blur characteristic information, for example, a motion vector can be used.
Hereinafter, the magnitude of the motion vector is referred to as the traveling speed (travel amount), and the direction of the motion vector is referred to as the traveling direction. The traveling direction may be any direction in the two-dimensional plane, and even when the traveling direction is an arbitrary direction in the two-dimensional plane, image processing apparatus 300 can perform the various kinds of processing described below in the same manner. For convenience of description, however, the traveling direction is assumed to be the horizontal direction. The number of pieces of motion blur characteristic information detected in a frame is not particularly limited. For example, only one piece of motion blur characteristic information may be detected for one frame, or one piece may be detected separately for each pixel constituting the frame. Alternatively, the frame may be divided into several blocks, and one piece of motion blur characteristic information may be detected individually for each of the divided blocks.
Based on the value corresponding to the frame to be processed in the motion blur characteristic information detected by motion blur characteristic detection section 32, motion blur improvement processing section 33 corrects, for each frame constituting the converted image data (image signal) provided from high frame rate conversion section 31, each pixel value constituting the frame to be processed according to the characteristics of the display panel (image display section 210). Specifically, motion blur improvement processing section 33 corrects each pixel value of the frame to be processed according to the characteristics of the motion blur in the frame to be processed (the value of the motion blur characteristic information) and the characteristics of image display section 210, so that motion blur is suppressed when display device 200 performs display. The display image data generated by such correction processing is output to display device 200.
Configuration example of the motion blur improvement processing section
FIG. 20 shows a block configuration of motion blur improvement processing section 33. Motion blur improvement processing section 33 includes a signal characteristic detection section 331, a correction processing section 332, a correction amount adjustment section 333, and an addition section 334.
Signal characteristic detection section 331 detects, from the image signal (the converted image data; the same applies hereinafter where appropriate) and by using the motion blur characteristic information (for example, the motion vector; the same applies hereinafter), predetermined signal characteristic information to be used in the correction processing in correction processing section 332. Examples of such signal characteristic information include MAX/MIN information, spatial high-frequency information, spatial low-frequency information, and edge direction information. Here, MAX/MIN information means information on the minimum luminance value (MIN value, minimum pixel value) and its pixel position and the maximum luminance value (MAX value, maximum pixel value) and its pixel position within a predetermined correction range (a range corresponding to the search range described below). The edge direction information indicates whether the changing edge portion to be corrected in the image signal is in the rising direction from low grayscale to high grayscale or in the falling direction from high grayscale to low grayscale. Signal characteristic detection section 331 will be described in detail below (FIGS. 23 to 26C).
Correction processing section 332 corrects the pixel value in the image signal for each pixel by performing, on the image signal, a spatial LPF (low-pass filter) process and a spatial HPF (high-pass filter) process described below, using the signal characteristic information detected in signal characteristic detection section 331 and the motion blur characteristic information detected in motion blur characteristic detection section 32. The image signal after such processing (the correction signal) is output to correction amount adjustment section 333. In some cases, correction processing section 332 may perform the correction processing without using the signal characteristic information.
The above-described LPF process and HPF process are performed, for example, as shown in FIGS. 21A to 21C.
For example, as indicated by the arrow in FIG. 21A, the LPF process is a filter process that, within the correction range corresponding to the travel amount, makes the slope of the changing edge portion in the image signal more gentle. The LPF process is also a motion-adaptive filter process, for example a filter process using a second-order differential. As described below, the LPF process is an asymmetric process according to the edge direction (rising direction or falling direction).
For example, as indicated by the arrows in FIG. 21B, the HPF process is a filter process that, within the correction range corresponding to the travel amount, provides projection regions near both ends (the top and the bottom) of the changing edge portion in the image signal. Specifically, near the top of the changing edge portion (on the high grayscale side), a projection region toward the high grayscale direction, called an overshoot region, is provided, and near the bottom of the changing edge portion (on the low grayscale side), a projection region toward the low grayscale direction, called an undershoot region, is provided.
When such an LPF process and such an HPF process are combined (both processes are performed), for example, a filter process as indicated by the arrow in FIG. 21C results. The filter coefficients described below are set so that the changing edge portion after the combined LPF and HPF processes has a straight waveform.
Here, as shown in FIGS. 21A to 21C, in the filter processes described above, the correction range is set according to the travel amount, and, as shown in FIG. 22, the tap number of the filter changes (increases) according to the travel amount. This is because, when the travel amount changes, the waveform effective for the filter process also changes. Specifically, according to FIG. 22, when the travel amount is an even number, for example, the tap number of the LPF process is (travel amount − 1), and the tap number of the HPF process is (travel amount + 1). In this way, the tap number of each filter process is always set to an odd number regardless of the value of the travel amount. This is because, in a filter process with an even tap number, the waveform is not bilaterally symmetric at the two ends of the edge.
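A small sketch of the tap-number rule read off FIG. 22 for an even travel amount; how odd travel amounts are handled is not stated here, so the odd branch below (simply keeping both tap counts odd) is an assumption.

```python
# LPF taps = travel - 1 and HPF taps = travel + 1 for an even travel amount (FIG. 22).
def filter_taps(travel_amount: int) -> tuple[int, int]:
    if travel_amount % 2 == 0:
        lpf_taps, hpf_taps = travel_amount - 1, travel_amount + 1
    else:
        lpf_taps, hpf_taps = travel_amount, travel_amount + 2   # assumed handling of odd values
    return max(1, lpf_taps), max(1, hpf_taps)

print(filter_taps(6))   # -> (5, 7): both odd, so the waveform stays bilaterally symmetric
```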
Correction processing section 332 will be described in detail below (FIGS. 27 to 40).
Correction amount adjustment section 333 performs gain adjustment of the filter processing (correction processing) by using the motion blur characteristic information, to prevent steps at boundaries caused by the switching of the filter tap number during the correction processing in correction processing section 332. Correction amount adjustment section 333 will be described in detail below (FIG. 41).
Addition section 334 adds the original image signal input to motion blur improvement processing section 33 and the image signal that has undergone the correction processing and the correction amount adjustment (the correction signal after the correction amount adjustment) output from correction amount adjustment section 333, and generates and outputs the display image data.
Configuration example of the signal characteristic detection section
Next, referring to FIGS. 23 to 26C, signal characteristic detection section 331 is described in detail. FIG. 23 shows a block configuration example of signal characteristic detection section 331. Signal characteristic detection section 331 includes a MAX/MIN detection section 331A, a spatial high-frequency detection section 331B, a spatial low-frequency detection section 331C, and an edge direction detection section 331D.
MAX/MIN detection section 331A detects the above-described MAX/MIN information from the image signal by using the motion blur characteristic information. The detected MAX/MIN information is provided to spatial high-frequency detection section 331B, spatial low-frequency detection section 331C, and edge direction detection section 331D, and is also output to correction processing section 332 as one piece of signal characteristic information.
Specifically, as shown in FIG. 24A, MAX/MIN detection section 331A detects the MIN value and its pixel position and the MAX value and its pixel position within a search range whose size is twice the travel amount (a range of {(tap number − 1) × 2 + 1} pixels). At this time, for example, as shown in FIG. 24B, when detecting the MIN value and its pixel position, the detection processing is performed after each pixel value in the search range is weighted in the positive direction according to the distance from a predetermined pixel of interest to that pixel. When detecting the MAX value and its pixel position, the detection processing is performed after each pixel value is weighted in the negative direction according to the distance from the pixel of interest to that pixel. The detection processing is performed in this way to prevent erroneous detection caused by noise or the like when the MAX/MIN information is detected; by performing the weighting so that, among the pixel values in the search range, those at positions close to the position of the pixel of interest are preferentially selected, noise immunity can be improved.
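A sketch of the weighted MIN/MAX search described above: candidate pixels farther from the pixel of interest are penalized (positively for the MIN search, negatively for the MAX search) so that, under noise, positions near the pixel of interest win. The linear per-pixel weight step w is an assumed parameter.

```python
# `search` is the pixel values of the search range; `center` is the index of the pixel of interest.
def weighted_min_max(search: list[float], center: int, w: float = 1.0):
    idx = range(len(search))
    min_i = min(idx, key=lambda i: search[i] + w * abs(i - center))   # positive-direction weighting
    max_i = max(idx, key=lambda i: search[i] - w * abs(i - center))   # negative-direction weighting
    return (search[min_i], min_i), (search[max_i], max_i)

# Usage: returns (MIN value, MIN position), (MAX value, MAX position) in the search range.
print(weighted_min_max([10, 12, 200, 11, 250, 240], center=2))
```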
For example, as shown in FIG. 25, one or both of the weight in the positive direction and the weight in the negative direction are adjusted to increase as the amount of high-frequency signal in the search range increases (as the spatial frequency increases). Specifically, here, when the value of the high-frequency signal amount is 0 or more and less than H11, the weight is a constant value; when the value is H11 or more and less than H12, the weight increases linearly; and when the value is H12 or more, the weight is again a constant value. This is because erroneous determination of the edge direction in edge direction detection section 331D, described below, tends to occur in regions with high spatial frequency (high-frequency regions), and increasing the weight in such high-frequency regions reduces erroneous determination in high-frequency regions.
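The piecewise weight curve of FIG. 25 could be expressed as in the sketch below; the threshold values H11 and H12 and the two weight levels are placeholders, since the embodiment does not give numerical values.

```python
def distance_weight(hf_amount: float, h11: float = 16.0, h12: float = 64.0,
                    w_low: float = 1.0, w_high: float = 4.0) -> float:
    if hf_amount < h11:
        return w_low                                                      # constant below H11
    if hf_amount < h12:
        return w_low + (w_high - w_low) * (hf_amount - h11) / (h12 - h11)  # linear between H11 and H12
    return w_high                                                         # constant at or above H12
```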
Based on the MAX/MIN information and the motion blur characteristic information, spatial high-frequency detection section 331B detects the spatial high-frequency information (high-frequency signal amount) of the image signal within the search range, and outputs the spatial high-frequency information as one piece of signal characteristic information.
Based on the MAX/MIN information and the motion blur characteristic information, spatial low-frequency detection section 331C detects the spatial low-frequency information (low-frequency signal amount) of the image signal within the search range, and outputs the spatial low-frequency information as one piece of signal characteristic information.
Based on the MAX/MIN information and the motion blur characteristic information, edge direction detection section 331D obtains the changing edge portion and the edge direction in the image signal. This is because the correction processing performed by correction processing section 332 differs depending on whether the edge direction is the rising direction or the falling direction. That is, as described in detail below, correction processing section 332 determines whether to perform the LPF process according to the obtained edge direction, and determines the filter coefficients for the HPF process.
Specifically, edge direction detection section 331D obtains the edge direction, for example, in the manner shown in FIGS. 26A to 26C. That is, for example, as shown in FIG. 26A, when the traveling direction of the motion vector is the left-to-right direction in the figure and the pixel position of the MAX value (MAX position) is on the right side of the pixel position of the MIN value (MIN position), edge direction detection section 331D determines that the edge direction is the falling direction. For the same traveling direction, as shown in FIG. 26B, when the MIN position is on the right side of the MAX position, edge direction detection section 331D determines that the edge direction is the rising direction. As shown in FIG. 26C, when the MIN position and the MAX position are located at the same position, no changing edge portion exists in the search range, so edge direction detection section 331D determines that the correction processing by correction processing section 332 described below is not to be performed.
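A compact sketch of the decision of FIGS. 26A to 26C for a left-to-right traveling direction, with pixel positions assumed to increase to the right.

```python
def edge_direction(min_pos: int, max_pos: int) -> str:
    if max_pos == min_pos:
        return "none"       # no changing edge portion in the search range: no correction
    if max_pos > min_pos:
        return "falling"    # MAX position to the right of the MIN position (FIG. 26A)
    return "rising"         # MIN position to the right of the MAX position (FIG. 26B)
```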
Configuration example of the correction processing section
Next, referring to FIGS. 27 to 40C, correction processing section 332 is described in detail. FIG. 27 shows a block configuration example of correction processing section 332. As blocks for the LPF process shown in FIG. 21A, correction processing section 332 includes a first edge replacement processing section 332A, an LPF processing section 332C, and an LPF gain control section 332E. As blocks for the HPF process shown in FIG. 21B, correction processing section 332 also includes a second edge replacement processing section 332B, an HPF processing section 332D, and an HPF gain control section 332F. That is, correction processing section 332 has a structure in which the blocks for the LPF process and the blocks for the HPF process are arranged in parallel. In addition, correction processing section 332 includes an addition section 332G.
First edge replacement processing section 332A performs first edge replacement processing, described below, on the image signal by using the signal characteristic information and the motion blur characteristic information, thereby generating and outputting a first replacement signal for the correction processing in LPF processing section 332C (corresponding to the LPF process described above). Second edge replacement processing section 332B performs second edge replacement processing, described below, on the image signal by using the signal characteristic information and the motion blur characteristic information, thereby generating and outputting a second replacement signal for the correction processing in HPF processing section 332D (corresponding to the HPF process described above). These replacement processes are performed to increase the slew rate for natural images. Specifically, when the filter processing is performed on an image that contains image pickup blur and therefore has indistinct edges, the replacement processing prevents the slope of the changing edge portion from being relaxed more than intended, and reduces the inversion effect in images having edges with black borders and the like. Thus, for example, for a telop image or an animation in which no image pickup blur exists, the edge replacement processing does not always have to be performed.
Specifically, first edge replacement processing section 332A and second edge replacement processing section 332B perform the edge replacement processing, for example, as shown in parts (A) to (C) of FIG. 28.
First edge replacement processing section 332A performs the first edge replacement processing (MAX/MIN replacement (outside replacement)) by using the motion blur characteristic information and, as the signal characteristic information, the MAX value and MAX position and the MIN value and MIN position (MAX/MIN information). Specifically, for example, as shown in parts (A) and (B) of FIG. 28, in the part of the search range outside the pixel region between the MIN position and the MAX position, first edge replacement processing section 332A replaces the pixel values beyond the MIN position with the MIN value (MIN replacement) and replaces the pixel values beyond the MAX position with the MAX value (MAX replacement). By performing the LPF process after such first edge replacement processing, phase shift of the changing edge portion with respect to the original image signal can be suppressed.
Processing section 332B is replaced by use motion image blurring characteristic information, as MAX value and MAX position and MIN value and MIN position (the MAX/MIN information) of characteristics of signals information in second edge, and (MAX/MIN replaces (outside replacement) and 3 replacements to carry out the replacement processing of second edge.Especially, be similar to above-mentioned first edge and replace and handle, for example, as the part (A) of Figure 28 to shown in the part (C), second edge is replaced processing section 332B and outside the pixel region between MIN position and the MAX position, is carried out MAX/MIN and replace (the outside replacement) in the hunting zone.In addition, processing section 332B is replaced with the pixel value near pixel value (near the pixel of the mid point between MIN value and the MAX value) the replacement pixel zone of the pixel the mid point between MIN value, MAX value and MIN position and the MAX position in second edge, thereby with the pixel value in the pixel value replacement pixel zone of three points (3 replacements).Not by utilizing MIN value and MAX value to carry out 2 replacements, but by carrying out 3 such replacements, can utilize second edge to replace the phase shifts that suppresses to change the marginal portion effectively.
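The following sketch illustrates the two replacement processes for a one-dimensional search range. The function names are illustrative, and the choice of the positional midpoint pixel as the third value is an assumption for simplicity; it is not the patented implementation itself.

```python
def outer_replacement(window, min_pos, max_pos, min_val, max_val):
    """MAX/MIN (outer) replacement used by both edge replacement sections:
    pixels outside the MIN..MAX pixel region are clamped to MIN or MAX,
    whichever extreme lies on their side."""
    out = list(window)
    for i in range(len(out)):
        if min_pos < max_pos:
            if i < min_pos:
                out[i] = min_val
            elif i > max_pos:
                out[i] = max_val
        else:
            if i < max_pos:
                out[i] = max_val
            elif i > min_pos:
                out[i] = min_val
    return out


def three_point_replacement(window, min_pos, max_pos, min_val, max_val):
    """Second edge replacement: outer replacement plus flattening of the
    region between MIN and MAX to the value of the pixel nearest the
    midpoint, so the edge is described by only three values."""
    out = outer_replacement(window, min_pos, max_pos, min_val, max_val)
    lo, hi = sorted((min_pos, max_pos))
    mid = (lo + hi) // 2          # pixel nearest the positional midpoint
    mid_val = window[mid]
    for i in range(lo + 1, hi):
        out[i] = mid_val
    return out
```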
More specifically, the first edge replacement processing section 332A performs the first edge replacement process as shown, for example, in Figures 29A to 30B. In these figures, the vertical axis represents the luminance level and the horizontal axis represents the pixel position. Figure 29A shows a case in which both the MAX position and the MIN position are outside the search range, Figure 29B shows a case in which only one of them (here, the MAX position) is outside the search range, and Figure 29C shows a case in which both are inside the search range. Figures 30A and 30B show cases in which the search range does not lie within the pixel region between the MIN position and the MAX position.
The second edge replacement processing section 332B performs the second edge replacement process as shown, for example, in Figures 31A to 32B. Figure 31A shows a case in which both the MAX position and the MIN position are outside the search range, Figure 31B shows a case in which only one of them (here, the MAX position) is outside the search range, and Figure 31C shows a case in which both are inside the search range. Figures 32A and 32B show cases in which the search range does not lie within the pixel region between the MIN position and the MAX position. That is, as shown in Figures 31A to 31C, when the midpoint lies within the pixel region between the MAX position and the MIN position, the 3-point replacement described above is performed. On the other hand, as shown in Figures 32A and 32B, when the midpoint does not lie within that pixel region, only the MAX/MIN replacement is performed, as in the first edge replacement process. In this case, the correction amount of the HPF processing described later is zero.
The LPF processing section 332C performs the LPF processing described above on the first replacement signal output from the first edge replacement processing section 332A by using the moving-image blur characteristic information, and generates and outputs an LPF-processed signal. For example, as shown in Figure 33, the LPF processing section 332C includes a fixed filter coefficient holding section 332C1 and a moving-average filter section 332C2. The fixed filter coefficient holding section 332C1 holds the fixed filter coefficients used in the LPF processing, and the moving-average filter section 332C2 actually performs the LPF processing by using the fixed filter coefficients and the moving-image blur characteristic information. Although a moving-average filter is used here as an example of the filter that performs the LPF processing, other low-pass filters may be used.
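A minimal sketch of such a moving-average LPF whose tap number follows the travel amount is shown below; the exact coefficient handling and boundary treatment in the patent may differ, and the function name is illustrative.

```python
def moving_average_lpf(signal, travel_amount):
    """Spatial LPF: a moving average over (travel_amount + 1) taps, which
    relaxes the slope of a step edge over the width of the travel amount.
    (In practice the tap number may additionally be kept odd, as noted in
    the correction amount adjustment section.)"""
    taps = travel_amount + 1
    if taps <= 1:
        return list(signal)
    half = taps // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```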
As described above, whether the correction process using the LPF processing is performed is determined according to the edge direction. Specifically, when the edge direction is the ascending direction from a low gray level to a high gray level, the correction process using the LPF processing is performed. On the other hand, when the edge direction is the descending direction from a high gray level to a low gray level, the correction process using the LPF processing is not performed.
The HPF processing section 332D performs the HPF processing described above on the second replacement signal output from the second edge replacement processing section 332B by using the moving-image blur characteristic information and the signal characteristic information, and generates and outputs an HPF-processed signal. For example, as shown in Figure 34, the HPF processing section 332D includes a variable filter coefficient calculation section 332D1 and a high-pass filter section 332D2. The variable filter coefficient calculation section 332D1 calculates variable filter coefficients according to the moving-image blur characteristic information and the signal characteristic information, and the high-pass filter section 332D2 actually performs the HPF processing by using the variable filter coefficients and the moving-image blur characteristic information.
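As a simplified illustration of the kind of spatial HPF correction described here, the sketch below adds an unsharp-mask-style high-pass term that produces overshoot and undershoot regions at the two ends of a changing edge. The three-tap kernel and the gain handling are assumptions for illustration and are not the coefficient scheme defined by formulas (1) to (5) below.

```python
def hpf_correction(signal, gain=0.5):
    """Spatial HPF: a 3-tap high-pass (negative Laplacian) scaled by `gain`.
    Added to the original signal, it produces an overshoot above the edge
    and an undershoot below it, compensating slow panel response."""
    out = [0.0] * len(signal)
    for i in range(1, len(signal) - 1):
        highpass = 2 * signal[i] - signal[i - 1] - signal[i + 1]
        out[i] = gain * highpass
    return out          # correction term to be added to the original signal
```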
Of these filter processing sections, the LPF processing section 332C performs the LPF processing as shown, for example, in Figures 35A and 35B. Figure 35A shows the correction pattern obtained when the LPF processing (travel amount = 6) is applied to a rising step edge that changes from the black level to the white level, and Figure 35B shows the moving-picture response curve of the moving-image blur in that case. These figures show that, when a slope of the travel-amount width (the correction pattern of the LPF processing) is formed for a step edge from the black level to the white level, an LPF processing based on a simple average (a linearly connected correction pattern) is sufficient. A simulation of a liquid crystal panel using the VA (vertical alignment) method also indicates that such a correction pattern is preferable.
However, for a rising edge from a gray level to the white level, the simple LPF processing described above does not provide a sufficient effect, and the phase shifts in the traveling direction compared with the case of a rising edge from the black level. As a countermeasure, when a slope of the travel-amount width (the correction pattern of the LPF processing) is formed for a step edge from a gray level to the white level, a correction pattern that dips toward the black-level side is preferable, as shown for example in Figures 36A and 36B. Figure 36A shows the correction pattern (travel amount = 6) desired for a rising step edge from a gray level to the white level, and Figure 36B shows the moving-picture response curve of the moving-image blur in that case.
The HPF processing section 332D therefore performs the HPF processing in combination with the LPF processing described above, as shown for example in Figures 37A and 37B. Figure 37A shows the correction pattern (travel amount = 6) obtained by combining the LPF processing and the HPF processing, and Figure 37B shows the moving-picture response curve of the moving-image blur in that case. As the basis of the correction pattern, the HPF processing is performed by using fixed initial coefficients. However, since it is preferable to maintain a correction pattern in which the LPF processing of the travel-amount width is applied on the side close to the white level, the filter coefficient of the HPF processing on the side close to the LPF processing is set by a calculation formula (described later) depending on the gray level of the rising edge. In Figures 37A and 37B, the average of the slowly responding liquid crystal pixel and the pixel on the side close to the HPF processing acts in a direction approaching the step response (the response obtained when the liquid crystal response time is zero and the response is step-shaped).
In this way, the filter coefficients of the HPF processing are set in the HPF processing section 332D so that an optimum correction pattern is obtained by the combination of the HPF processing performed by the HPF processing section 332D and the LPF processing performed by the LPF processing section 332C.
Specifically, the filter coefficients of the HPF processing are set so that the correction pattern for a rising edge is, for example, as shown in parts (A) to (E) of Figure 38:
- For a rise from the black level to the white level, the filter coefficient of the HPF processing is suppressed so that the LPF processing of the travel-amount width is carried out (see part (B) of Figure 38).
- For a rise from a low gray level, the filter coefficient of the HPF processing is suppressed so that the downward projection in the undershoot region (pre-shoot region) is not cut off (see part (C) of Figure 38).
- For a rise from a gray level to the white level, the filter coefficient of the HPF processing is suppressed so that the projection in the undershoot region (pre-shoot region) does not become too large (see part (D) of Figure 38).
- For a rise from a low gray level, the filter coefficient of the HPF processing is suppressed so that the projection in the undershoot region does not become too large (see part (E) of Figure 38).
On the other hand, the filter coefficients of the HPF processing are set so that the correction pattern for a falling edge is, for example, as shown in parts (F) to (J) of Figure 38:
- In the gray levels, the filter coefficient of the HPF processing is suppressed so that the projection in the overshoot region (pre-overshoot region) does not become too large (see part (H) of Figure 38).
- For a fall to a low gray level, the filter coefficient of the HPF processing is suppressed so that the projection in the overshoot region does not become too large (see parts (H) and (I) of Figure 38).
- For a fall to the black level, the filter coefficient of the HPF processing is set to zero and the HPF processing is not performed (see parts (G) and (J) of Figure 38).
In this way, the filter coefficients of the HPF processing are adjusted so that they change depending on the edge direction. Specifically, when the edge direction is the ascending direction, the filter coefficients are adjusted toward the LPF processing side (vertically asymmetric), and when the edge direction is the descending direction, the filter coefficients are adjusted so that the projection in the overshoot region and the projection in the undershoot region (the projections at the two ends of the changing edge portion) have the same size (vertically symmetric). More specifically, when the edge direction is the ascending direction, the filter coefficients of the HPF processing are adjusted so that the projection in the undershoot region is larger than the projection in the overshoot region. This is because the effect of the HPF processing tends to weaken for a rise from a gray level to the white level, so the effect is increased by adjusting the filter coefficients in this way.
Preferably, the filter coefficient corresponding to the size of the projection in the overshoot region and the filter coefficient corresponding to the size of the projection in the undershoot region are adjusted so that the pixel values of the corrected image signal (correction signal) stay within the dynamic range of the signal. This is because, when such projections at the two ends of the changing edge portion touch the limits of the dynamic range (the black level and the white level), the waveform of the correction pattern is distorted and the effect of the HPF processing is weakened.
In addition, when the sizes of the projections in the overshoot region and the undershoot region differ from each other, it is preferable to adjust the filter coefficients of the HPF processing while maintaining the ratio of the projection sizes. This is because, for a small-amplitude change between low gray levels, the correction amount given by the initial coefficients of the HPF processing would otherwise be too large.
Moreover, it is preferable to change the filter coefficients of the HPF processing according to the spatial frequency within the search range (correction range). This is because the HPF processing may fail, for example, at low spatial frequencies such as a ramp signal or in regions of high spatial frequency, and such failures should be avoided.
It is also preferable to change the filter coefficients of the HPF processing according to the size of the edge amplitude, which corresponds to the difference between the MIN value and the MAX value within the search range (correction search range). This is because, if the filter coefficients were not changed, the correction amount would be too large at a changing edge portion with a large amplitude.
The filter coefficients of the HPF processing described above are obtained, for example, by using the following formulas (1) to (5). Formulas (1) to (3) are the calculation formulas for the case in which the edge direction is the ascending direction, and formulas (4) and (5) are those for the case in which the edge direction is the descending direction. When such filter coefficients are calculated, first the tap coefficients at the two ends of the (travel amount + 1) taps are obtained, and then the tap coefficients between them are obtained. The coefficients at the two ends of the taps are "am" and "ap" (for a rising edge traveling to the right, the coefficient at the left end is defined as "am" and the coefficient at the right end as "ap"), and when the correction pattern obtained with the initial coefficients depending on the edge amplitude reaches a predetermined threshold, "am" and "ap" are clipped (see Figure 38).
<Rising>
a_m1 = MIN / (MIN − MAX)    ... (1)
a_m2 = (Th2 − MAX) / (MIN − MAX)    ... (2)
Th2: fixed value
a_m = MAX(a_m1, a_m2, a_m_in_u_gain)
a_p1 = ((Th1 − MAX) · MAX) / ((MIN − MAX) · W)    ... (3)
Th1 = W − W / (V + Δ)    (when (W − MIN) · a_m_in_u_gain + MIN ≤ 0)
Th1 = W − (W − MIN) · (1 − a_m_in_u_gain) / (V + Δ)    (when (W − MIN) · a_m_in_u_gain + MIN > 0)
a_p = MAX(a_p1, a_p_in_u_gain)
a_m_in_u_gain = a_m_in_u · in_gain
a_p_in_u_gain = a_p_in_u · in_gain
a_m_in_u, a_p_in_u: rising initial coefficients
in_gain: function of MAX − MIN
Δ: function of MIN
V: travel amount (tap number − 1)
<Falling>
a_m1 = MIN / (MIN − MAX)    ... (4)
a_m = MAX(a_m1, a_m_in_d_gain)
a_p1 = MIN / (MIN − MAX)    ... (5)
a_p = MAX(a_p1, a_p_in_d_gain)
a_m_in_d_gain = a_m_in_d · in_gain
a_p_in_d_gain = a_p_in_d · in_gain
a_m_in_d, a_p_in_d: falling initial coefficients
Formulas (1) to (5) are now described in detail with reference to Figure 38. First, since there are two initial coefficients for the rising case and two for the falling case, there are four initial coefficients in total. Basically, the HPF is configured with these initial coefficients. For the various calculations applied to the initial coefficients, the MIN value and the MAX value detected in the pixel region (search range) spanning twice the travel amount around the center of the processing target are used.
As shown in parts (A) to (E) of Figure 38, there are three thresholds in total: the black level, a threshold Th1 that depends on the MIN value, and a threshold Th2, which is a fixed value corresponding to a luminance level higher than the white level. For a changing edge portion along the ascending direction, in order to keep the correction signal with increased correction amount within the dynamic range, formula (1) is established as a conditional formula for clipping, at the black level, the pixel indicated by the dashed circle in part (D) of Figure 38. When the MAX value is large, the pixel is clipped by formula (2). The resulting coefficient appears as "am" (corresponding to the projection on the lower side of the changing edge portion, that is, in the undershoot region). Formula (3) clips, with respect to the threshold Th1, the pixel indicated by the dashed circle in part (E) of Figure 38, thereby bringing the HPF processing closer to the LPF processing side. Here, the threshold Th1 consists of a part that changes automatically according to the MIN value and the travel amount (tap number − 1), and an adjustment part Δ (the corrected value described below). When the image display section 210 is a liquid crystal panel of the VA method, experiments show that, for a single-pixel structure, Th1 is optimum when the adjustment part Δ is zero, whereas for a sub-pixel structure the HPF processing is effective when Th1 is smaller than the automatically set value. The adjustment part Δ is defined as a function of the MIN value multiplied by a gain according to the tap number. When the MAX value is small, the initial coefficient is too large for the correction amount, so the correction amount is adjusted automatically according to the MAX value. The resulting coefficient appears as "ap" (corresponding to the projection on the upper side of the changing edge portion, that is, in the overshoot region).
On the other hand, a changing edge portion along the descending direction is adjusted only with the black-level threshold. Formula (4) is the conditional formula for clipping, at the black level, the pixel indicated by the dashed circle on the lower side of part (H) of Figure 38. In addition, when this pixel is clipped at the black level by formula (5), the pixel in the dashed circle on the upper side of part (H) of Figure 38 works together with it, and the adjustment amount is reduced.
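For illustration only, the following sketch shows how the end-tap coefficients could be computed from formulas (1) to (5) as reconstructed above. The reconstruction of the fractions and conditions, the use of W for the white level, and the variable names are assumptions drawn from the surrounding description, not a verified implementation.

```python
def rising_coefficients(MIN, MAX, W, V, delta, a_m_in_u, a_p_in_u, in_gain, Th2):
    """End-tap HPF coefficients for an ascending edge (formulas (1)-(3)).
    Assumes MIN != MAX (an edge exists); max() implements the clipping."""
    a_m_in_u_gain = a_m_in_u * in_gain
    a_p_in_u_gain = a_p_in_u * in_gain

    a_m1 = MIN / (MIN - MAX)                       # formula (1): black-level clip
    a_m2 = (Th2 - MAX) / (MIN - MAX)               # formula (2): clip for large MAX
    a_m = max(a_m1, a_m2, a_m_in_u_gain)

    if (W - MIN) * a_m_in_u_gain + MIN <= 0:
        Th1 = W - W / (V + delta)
    else:
        Th1 = W - (W - MIN) * (1 - a_m_in_u_gain) / (V + delta)
    a_p1 = (Th1 - MAX) * MAX / ((MIN - MAX) * W)   # formula (3): Th1 clip
    a_p = max(a_p1, a_p_in_u_gain)
    return a_m, a_p


def falling_coefficients(MIN, MAX, a_m_in_d, a_p_in_d, in_gain):
    """End-tap HPF coefficients for a descending edge (formulas (4)-(5))."""
    a_m1 = MIN / (MIN - MAX)                       # formula (4)
    a_p1 = MIN / (MIN - MAX)                       # formula (5)
    a_m = max(a_m1, a_m_in_d * in_gain)
    a_p = max(a_p1, a_p_in_d * in_gain)
    return a_m, a_p
```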
Returning to Figure 27, the LPF gain control section 332E performs gain control on the LPF signal output from the LPF processing section 332C, based on the value of the image-pickup blur amount in the signal characteristic information (not shown in Figure 23) and the value of the spatial high-frequency information (high-frequency signal amount).
Specifically, as shown for example in Figure 39A, the magnitude of the filter gain in the LPF processing (LPF gain) changes according to the image-pickup blur amount. More specifically, when the image-pickup blur amount is 0 or more and less than d11, the LPF gain is zero; when it is d11 or more and less than d12, the LPF gain increases linearly; and when it is d12 or more, the LPF gain becomes a constant value. This is because, when the LPF processing is applied to an image signal that contains image-pickup blur, the slope of the changing edge portion may become as gentle as, or gentler than, the slope expected from the motion vector, so for an image signal with a large image-pickup blur amount there are cases where it is desirable to reduce the gain of the LPF processing.
As shown for example in Figure 39B, the magnitude of the filter gain in the LPF processing (high-frequency gain) changes according to the spatial high-frequency information (high-frequency signal amount) in the search range (correction range) described above. More specifically, when the high-frequency signal amount is 0 or more and less than H21, the high-frequency gain is a constant value; when it is H21 or more and less than H22, the high-frequency gain decreases linearly to zero; and when it is H22 or more, the high-frequency gain remains at zero. This is because, when the search range contains the high-frequency signals of a plurality of changing edge portions, determination errors are likely to occur in the edge direction determination by the edge direction detection section 331D, so the gain of the LPF processing is reduced as the high-frequency signal amount increases.
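Each of these gain curves is a piecewise-linear function of a single measured quantity with two breakpoints. The helper below is a generic sketch of such a curve; the breakpoint and gain values used in the example are placeholders, not values from the patent.

```python
def piecewise_linear_gain(x, x0, x1, g0, g1):
    """Gain curve that equals g0 for x < x0, g1 for x >= x1, and is linearly
    interpolated in between -- the shape used for the LPF gain, the
    high-frequency gain, and the other gain curves described here."""
    if x < x0:
        return g0
    if x >= x1:
        return g1
    return g0 + (g1 - g0) * (x - x0) / (x1 - x0)

# Illustrative use: a high-frequency gain that falls from 1.0 to 0.0
# between thresholds H21 and H22 (placeholder values).
H21, H22 = 10.0, 30.0
hf_gain = piecewise_linear_gain(22.0, H21, H22, 1.0, 0.0)
```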
Next, the HPF gain control section 332F performs gain control on the HPF signal output from the HPF processing section 332D, based on the value of the spatial high-frequency information (high-frequency signal amount) and the value of the spatial low-frequency information (low-frequency signal amount) in the signal characteristic information.
Specifically, first, similarly to the case shown in Figure 39B, the magnitude of the filter gain in the HPF processing (high-frequency gain) changes according to the spatial high-frequency information (high-frequency signal amount) in the search range (correction range). This is because, when the search range contains the high-frequency signals of a plurality of changing edge portions, determination errors are likely to occur in the edge direction determination by the edge direction detection section 331D and the HPF processing is likely to fail, so the gain of the HPF processing is reduced as the high-frequency signal amount increases.
As shown for example in Figure 40A, the magnitude of the filter gain in the HPF processing (low-frequency gain) changes according to the spatial low-frequency information (low-frequency signal amount) in the search range (correction range). Here, when the low-frequency signal amount is 0 or more and less than L11, the low-frequency gain is a constant value; when it is L11 or more and less than L12, the low-frequency gain increases linearly; and when it is L12 or more, the low-frequency gain becomes a constant value again. This is because, when the HPF processing is applied to a low-frequency signal such as a ramp signal, a large correction amount would be applied to a slight slope and the HPF processing could fail.
As shown for example in Figure 40B, the magnitude of the filter gain in the HPF processing (HPF amplitude gain) changes according to the edge amplitude, which corresponds to the difference between the MIN value and the MAX value (MAX/MIN difference). Here, when the MAX/MIN difference is 0 or more and less than M11, the HPF amplitude gain increases linearly from 0; when it is M11 or more and less than M12, it is a constant value; when it is M12 or more and less than M13, it decreases linearly; and when it is M13 or more, it becomes a constant value again. This prevents the correction amount from becoming excessive at a changing edge portion with a large edge amplitude.
As shown for example in Figure 40C, the magnitude of the corrected value Δ described above changes according to the magnitude of the MIN value. Here, when the MIN value is 0 or more and less than M21, the corrected value Δ increases linearly; when it is M21 or more and less than M22, it is a constant value; when it is M22 or more and less than M23, it decreases linearly to a negative value; and when it is M23 or more, it remains at that constant (negative) value.
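A minimal sketch of how the gain-controlled HPF signal could be formed from these curves, reusing the piecewise-linear helper above, is shown below; combining the individual gains by simple multiplication is an assumption for illustration.

```python
def gain_controlled_hpf(hpf_signal, hf_amount, lf_amount, maxmin_diff,
                        hf_gain_curve, lf_gain_curve, amp_gain_curve):
    """Scale the HPF correction by the high-frequency, low-frequency, and
    amplitude gains; each *_curve is a piecewise-linear function as above."""
    g = (hf_gain_curve(hf_amount)
         * lf_gain_curve(lf_amount)
         * amp_gain_curve(maxmin_diff))
    return [g * v for v in hpf_signal]
```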
Next, the addition section 332G generates and outputs the correction signal by adding the gain-controlled LPF signal output from the LPF gain control section 332E and the gain-controlled HPF signal output from the HPF gain control section 332F.
Structure example of the correction amount adjustment section
Next, the correction amount adjustment section 333 will be described in detail with reference to Figure 41. Figure 41 shows an example of the adjustment process in the correction amount adjustment section 333.
Figure 41 shows that the magnitude of the filter gain used in the LPF processing and the HPF processing (travel-amount gain) changes according to the absolute value of the motion vector (motion amount and travel amount). Because switching the tap number (odd values only) produces a step at each boundary, the gain is changed according to the absolute value of the motion vector so that no large change in the correction amount occurs at the tap-switching timing.
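As an illustration, the sketch below derives an odd tap number from the motion-vector magnitude; how the travel-amount gain smooths the step at each switching boundary is panel-dependent and is not reproduced here.

```python
def odd_tap_count(motion_abs):
    """Tap number follows the motion-vector magnitude but is kept odd,
    as described above (tap number = travel amount + 1, rounded to odd)."""
    taps = int(round(motion_abs)) + 1
    return taps if taps % 2 == 1 else taps + 1
```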
Operation of the image processing section 300
Next, the operation of the image processing section 300 according to the second embodiment will be described. Since the operation of the display device in the image display system according to the second embodiment is similar to that of the display device 200 in the image display system according to the first embodiment, its description is omitted.
As shown in Figure 19, in the image processing section 300, the input image data supplied from outside for each frame undergoes the high frame-rate conversion process in the high frame-rate conversion section 31, and converted image data (an image signal) is thereby generated. Next, the moving-image blur characteristic detection section 32 detects the moving-image blur characteristic information from the converted image data and outputs it to the moving-image blur improvement processing section 33. In the moving-image blur improvement processing section 33, the correction process (moving-image blur improvement process) is performed on the converted image data (image signal) by using the moving-image blur characteristic information, whereby a display image is generated and output to the display device 200.
At this time, as shown in Figure 20, in the moving-image blur improvement processing section 33 the signal characteristic information is first detected in the signal characteristic detection section 331. In the correction processing section 332, the correction process is performed on the image signal by using the signal characteristic information and the moving-image blur characteristic information. Then, in the correction amount adjustment section 333, the correction signal after the correction process undergoes the correction amount adjustment process. In the addition section 334, the adjusted correction signal output from the correction amount adjustment section 333 is added to the original image signal, thereby generating the display image data.
At this time, in the correction processing section 332, the LPF processing and the HPF processing shown for example in Figures 21A to 21C are performed on the image signal, thereby generating the correction signal.
Specifically, as shown in Figure 27, in the LPF processing the first edge replacement processing section 332A first performs the first edge replacement process described above on the image signal, thereby generating the first replacement signal. Then, in the LPF processing section 332C, the LPF processing is performed on this first replacement signal, thereby generating the LPF signal, and in the LPF gain control section 332E, the LPF gain control described above is applied to this LPF signal. On the other hand, in the HPF processing, the second edge replacement processing section 332B first performs the second edge replacement process described above on the image signal, thereby generating the second replacement signal. Then, in the HPF processing section 332D, the HPF processing is performed on this second replacement signal, thereby generating the HPF signal, and in the HPF gain control section 332F, the HPF gain control described above is applied to this HPF signal. Finally, in the addition section 332G, the gain-controlled LPF signal output from the LPF gain control section 332E and the gain-controlled HPF signal output from the HPF gain control section 332F are added together, thereby generating the correction signal.
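Putting the pieces together, the following sketch mirrors the signal flow of Figure 27 using the illustrative helpers introduced above (edge replacement, moving-average LPF, HPF correction, and gain control). It is a simplified one-dimensional model, not the circuit described in the patent; in particular, representing the LPF branch by the difference between the filtered and original signals (so the result can be added to the original image) is an assumption.

```python
def correction_signal(window, min_pos, max_pos, min_val, max_val,
                      travel_amount, lpf_gain, hpf_gain):
    """One search-range worth of correction: LPF branch + HPF branch + add."""
    # LPF branch: first edge replacement, then moving-average LPF
    first = outer_replacement(window, min_pos, max_pos, min_val, max_val)
    lpf = moving_average_lpf(first, travel_amount)
    lpf_corr = [lpf_gain * (l - w) for l, w in zip(lpf, window)]

    # HPF branch: second edge replacement, then high-pass correction
    second = three_point_replacement(window, min_pos, max_pos, min_val, max_val)
    hpf_corr = [hpf_gain * h for h in hpf_correction(second)]

    # Addition section: combine the two gain-controlled branches
    return [a + b for a, b in zip(lpf_corr, hpf_corr)]
```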
Operation and effects of the LPF processing
Here, the operation and effects of the LPF processing performed by the LPF processing section 332C will be described with reference to Figures 42 to 46, in comparison with comparative examples. Figure 42 shows the case of an ideal hold type, Figure 43 shows the case of a typical liquid crystal response, Figure 44 shows the case in which the frame rate is doubled by inserting interpolation frames, and Figure 45 shows the moving-image blur in the case where the LPF processing according to the second embodiment (LPF processing of the travel-amount width) is used. In these figures, part (A) shows the response characteristic on the display screen, part (B) shows the MPRC (motion picture response curve), and part (C) shows the timing of the liquid crystal response (along the tracking visual axis).
In the case of the ideal hold type shown in parts (A) to (C) of Figure 42, the response time to a step input is zero. The output level of the liquid crystal therefore reaches the luminance corresponding to the input image signal (the target luminance) instantaneously, and the liquid crystal response is very fast. However, since the eye-trace integration effect also occurs in an ideal hold-type element, a moving-image blur of as many pixels as the travel amount of the step-changing input image is produced.
On the other hand, in the case of the typical liquid crystal response shown in Figure 43, the blur caused by the liquid crystal response is added to the hold blur of one frame's travel amount, so the moving-image blur spreads over a range of twice the travel amount of one frame. That is, in a typical liquid crystal display device, the response speed to a step input is low, so a response time of one frame is needed to reach the target luminance. In addition, since hold-type driving is performed in the liquid crystal display device, the eye-trace integration effect occurs. Therefore, when a step input is applied to a typical liquid crystal display device, the response time based on the liquid crystal response speed is added to the eye-trace integration effect, so a moving-image blur of, for example, twice as many pixels as the travel amount of the step-changing input image is produced.
In contrast, when interpolation frames are inserted, as shown in parts (A) to (C) of Figure 44, the travel amount per frame is halved by the inserted frames, so the eye-trace integration effect is reduced and the hold blur amount is halved. Furthermore, for gray-level transitions where the overdrive processing is effective, the liquid crystal response time is also halved, so the overall moving-image blur amount becomes half in that case. In practice, however, for transitions from a dark gray level to a level close to white and from a bright gray level to a level close to the black level, the liquid crystal response cannot be improved sufficiently because the overdrive amount is insufficient, and the moving-image blur amount cannot be improved to half.
In the case where the LPF processing according to the second embodiment shown in Figures 45 and 46 is used, the spatial LPF processing (a filter process within the correction range that relaxes the slope of the changing edge portion in the image signal) is performed on the image signal in the frame to be displayed according to the magnitude of the motion vector in the image signal. The pixel value in the image signal is thereby corrected for each pixel.
Thus, in the hold-type display device 200, the effect of a substantial frame-rate improvement obtained by interpolation along the spatial direction suppresses the motion blur in a moving object caused by the eye-trace integration effect (hold blur such as a blurred leading edge, a trailing rear edge, and a delayed perceived position). In addition, unlike the double-rate driving technique of the related art (interpolation along the time direction) shown in parts (A) and (C) of Figure 44, the device itself does not need to be modified, so the problem of increased cost does not arise. Furthermore, unlike the overdrive technique of the related art, motion blur is sufficiently suppressed even for gray-level changes in regions other than the middle gray levels.
However, unlike the case where the frame rate is actually increased, no improvement of the liquid crystal response itself can be expected from this LPF processing, so the liquid crystal response curve appears as a moving-image blur pattern (see parts (B) and (C) of Figure 45 and parts (B) and (C) of Figure 47). Therefore, in addition to this LPF processing, the correction process is performed more satisfactorily by also using the HPF processing described below.
Operation and effects of the HPF processing
Next, the operation and effects of the HPF processing performed by the HPF processing section 332D will be described with reference to Figures 47 and 48, in comparison with a comparative example. Figure 47 shows the moving-image blur in the case of typical driving, and Figure 48 shows the moving-image blur in the case where the HPF processing according to the second embodiment (HPF processing of the travel-amount width) is used. In these figures, part (A) shows the response characteristic on the display screen, part (B) shows the MPRC (motion picture response curve), and part (C) shows the timing of the liquid crystal response (along the tracking visual axis).
In the case of the typical driving shown in Figure 47, the response speed of a typical liquid crystal display device to a step input is low. Thus, as indicated by reference numeral P0 in part (C) of Figure 47, a response time of one frame is needed to reach the target luminance.
On the other hand, when the HPF processing according to the second embodiment is performed, as shown in Figures 48A to 48C, the spatial HPF processing (a filter process within the correction range that provides projection regions near the two ends, upper and lower, of the changing edge portion in the image signal) is performed on the image signal in the frame to be displayed according to the magnitude of the motion vector in the image signal, and the pixel value in the image signal is thereby corrected for each pixel.
Thus, the liquid crystal response is improved by the combination of the two projection regions (the overshoot region and the undershoot region; for example, the combinations of reference numerals P1L and P1H and of P2L and P2H in Figure 48C). Therefore, in the hold-type display device 200, motion blur such as a blurred leading edge and a trailing rear edge caused by a delayed response in luminance changes to and from the middle gray levels is suppressed. In addition, unlike the double-rate driving technique of the related art (interpolation along the time direction) shown in parts (A) and (B) of Figure 44, the device itself does not need to be modified, so the problem of increased cost does not arise. Furthermore, unlike the overdrive technique of the related art, motion blur is sufficiently suppressed even for gray-level changes in regions other than the middle gray levels.
Figures 49A to 49D show timing waveforms of examples of the liquid crystal response characteristic when the LPF processing and the HPF processing according to the second embodiment are used. Figures 49A and 49B correspond to the liquid crystal response for a rise from gray level 0 (black level) to gray level 255 (white level): Figure 49A shows the case where only the overdrive (OD) processing of the related art is used, and Figure 49B shows the case where the LPF processing according to the second embodiment is used in addition to the OD processing. Figures 49C and 49D correspond to the liquid crystal response for a rise from gray level 0 (black level) to gray level 96 (middle level): Figure 49C shows the case where only the OD processing is used, and Figure 49D shows the case where the LPF processing and the HPF processing according to the second embodiment are used in addition to the OD processing.
Figures 49A and 49B show that, for a rise from gray level 0 (black level) to gray level 255 (white level), performing the LPF processing according to the second embodiment reduces the PBET (perceived blur edge time) from 9.8 ms to 7.8 ms and improves the liquid crystal response characteristic. Figures 49C and 49D show that, for a rise from gray level 0 (black level) to gray level 96 (middle level), performing the LPF processing and the HPF processing according to the second embodiment reduces the PBET from 9.3 ms to 6 ms and improves the liquid crystal response characteristic even more.
As described above, in the second embodiment, the spatial LPF processing according to the magnitude of the motion vector of the image signal is performed on the image signal in the frame to be displayed, so that the pixel value in the image signal is corrected for each pixel. Thus, by the effect of a substantial frame-rate improvement obtained by interpolation along the spatial direction, the eye-trace integration effect is reduced and motion blur can be suppressed. In addition, unlike the techniques of the related art, the problem of increased cost can be avoided, and motion blur in gray-level changes in regions other than the middle-gray region can be sufficiently suppressed. Therefore, motion blur in the hold-type display device 200 can be suppressed and the picture quality of moving images can be improved, while an increase in cost is suppressed.
Furthermore, in the frame to be displayed, the spatial HPF processing according to the magnitude of the motion vector of the image signal is also performed on the image signal in addition to the LPF processing described above, so that the pixel value in the image signal is corrected for each pixel. The liquid crystal response is therefore improved by the combination of the overshoot region and the undershoot region, and motion blur can be suppressed. Accordingly, motion blur in the hold-type display device 200 can be suppressed more effectively, and the picture quality of moving images can be improved further.
In addition, as described above, since no cost is added to the display device 200, a display device 200 with a relatively low cost can be realized.
Moreover, as described above, there is an effect of improving motion blur in gray-level changes in regions other than the middle-gray region. In particular, in a display with a slow response speed, the difference in delay time caused by gray-level changes is larger, so the improvement effect is large.
In addition, the pixel value is corrected for each pixel. Therefore, since higher-quality pixels comparable to high-definition display are realized, the motion blur suppression by the correction process is more effective in cases such as VA-type liquid crystal, in which the difference in response time depending on the gray-level change is large and the moving speed of a moving object (motion amount) is high.
3. Modifications
Although the present invention has been described above with reference to the first embodiment and the second embodiment, the invention is not limited to these embodiments, and various modifications may be made.
For example, like the moving-image blur improvement processing section 33-1 shown in Figure 50, the moving-image blur improvement processing section 33 of the second embodiment may be provided with a pre-processing section 335 and a post-processing section 336 in the stages before and after the correction processing section 332 and the correction amount adjustment section 333. The pre-processing section 335 performs processing such as removing high-frequency components of the image signal before the correction process, and the post-processing section 336 performs processing such as removing high-frequency components of the signal after the correction amount adjustment. With such a structure, side effects caused by the correction process can be removed.
In addition, in the first embodiment and the second embodiment, for convenience of description, the traveling direction (the direction of the motion vector) has been assumed to be the horizontal direction, and the pixels immediately adjacent to the pixel of interest in the horizontal direction have been used in the various processes described above, such as the filter processes applied to the pixel of interest and the correction process. However, the invention is not limited to this case; the traveling direction may be any direction in the two-dimensional plane. Even when the traveling direction is an arbitrary direction in the two-dimensional plane (for example, the vertical direction), the moving-image blur improvement processing section can perform the various processes described above in the same manner. However, when processing is performed for a traveling direction that is the vertical direction (or an oblique direction, that is, a combination of processing along the vertical direction and processing along the horizontal direction), the moving-image blur improvement processing section 33-2 shown in Figure 51, for example, may be used instead of the moving-image blur improvement processing section described in the second embodiment. In the moving-image blur improvement processing section 33-2, a line memory 337 is provided in the stage before the correction processing section 332 and the signal characteristic detection section 331 in order to realize the processing along the vertical direction.
In the correction processing section 332 described in the second embodiment, both the LPF processing and the HPF processing have been performed on the image signal as the filter processes. However, the invention is not limited to such a case. For example, like the correction processing section 332-1 shown in Figure 52A, only the correction process using the LPF processing may be performed on the image signal. Alternatively, like the correction processing section 332-2 shown in Figure 52B, only the correction process using the HPF processing may be performed on the image signal.
Instead of the image processing apparatus 300 described in the second embodiment, the image processing apparatuses 300-1 and 300-2 shown in Figures 53A and 53B may be used. Specifically, in the image processing apparatus 300-1 shown in Figure 53A, the frame-rate conversion in the high frame-rate conversion section 31 is performed after the moving-image blur is improved in the moving-image blur improvement processing section 33. With such a structure, the various processes can be performed at a relatively low frame rate. In the image processing apparatus 300-2 shown in Figure 53B, the high frame-rate conversion section 31 and the moving-image blur characteristic detection section 32 are arranged in parallel. With this structure, the overall delay amount can be reduced.
For example, like the image processing apparatus 300-3 shown in Figure 54, the high frame-rate conversion section 31 may be omitted and a display device that performs display at an ordinary frame rate may be used. That is, the image processing apparatus may be configured with only the combination of the moving-image blur characteristic detection section 32 and the moving-image blur improvement processing section 33. In this case, moving-image blur can be suppressed while the cost is reduced further.
For example, like the image processing apparatus 300-4 shown in Figure 55A and the image processing apparatus 300-5 shown in Figure 55B, another functional block may be provided in the image processing apparatus 300 instead of (or in addition to) the high frame-rate conversion section 31. Specifically, in the image processing apparatus 300-4 shown in Figure 55A, an MPEG (Moving Picture Experts Group) decoding section 34 may be provided instead of the high frame-rate conversion section 31, and parameter information output from the MPEG decoding section 34 may be supplied to the moving-image blur characteristic detection section 32. In the image processing apparatus 300-5 shown in Figure 55B, an IP (interlace/progressive) conversion section 35 is provided instead of the high frame-rate conversion section 31, and parameter information output from the IP conversion section 35 is supplied to the moving-image blur characteristic detection section 32. With such structures, parameter information such as motion vectors is transmitted from the MPEG decoding section 34 or the IP conversion section 35, so the overall circuit scale can be reduced.
For example, like the image processing apparatus 300-6 shown in Figure 56A and the image processing apparatus 300-7 shown in Figure 56B, an image-pickup blur suppression processing section 36 for suppressing the image-pickup blur contained in the image signal may be provided instead of (or in addition to) the high frame-rate conversion section 31. Specifically, in the image processing apparatus 300-6 shown in Figure 56A, the image-pickup blur suppression processing section 36 and the moving-image blur improvement processing section 33 are arranged in series. With this structure, since the moving-image blur improvement process receives an image signal in which the image-pickup blur has already been suppressed by the image-pickup blur suppression processing section 36, the tap number of the filter corresponding to the magnitude of the motion vector can be reduced. In the image processing apparatus 300-7 shown in Figure 56B, the image-pickup blur suppression processing section 36 and the moving-image blur improvement processing section 33 are arranged in parallel. With this structure, the two sections perform their processes simultaneously, a delay circuit and the like can be omitted, and the overall circuit scale can be reduced.
In addition, in the first embodiment and the second embodiment described above, the moving-image blur characteristic detection section 32 that detects the motion vector and the like in the image signal is provided in the image processing apparatus. However, the motion vector and the like may be detected outside the image processing apparatus and supplied to the image processing apparatus.
In the embodiments of the present invention, because the hold effect is reduced by utilizing the response characteristic of the liquid crystal, the filter settings and their effects depend on the panel. As an example, the case of a sub-pixel structure will be described. When a correction pattern similar to that used for a panel with a single-pixel structure is input to a panel with a sub-pixel structure, a gamma conversion is performed on the correction pattern for each sub-pixel in the display control section 240 (timing controller), so the correction pattern for each sub-pixel is shifted from the optimum value. Simulation results show that, of the A pixel and the B pixel constituting the sub-pixel, the effect remains in the B pixel, and it is considered desirable to improve the correction pattern of the A pixel. Therefore, as shown for example in Figures 57A and 57B, it is desirable to set the threshold Th1 in formulas (1) to (5) described above so as to be reduced. That is, when each pixel of the display device 200 has a sub-pixel structure and the changing edge portion to be corrected has an ascending edge direction, it is desirable to adjust the magnitude of the gain in the HPF processing so as to bring it closer to the LPF processing side. With an adjustment in this direction, a high improvement effect is obtained for a rising edge from a middle gray level. In particular, when telop characters are scrolled, the width of the moving-image blur of vertical lines increases if this adjustment is not made, whereas the increase can be suppressed when the adjustment is made. In this way, the picture quality of moving images can be improved even in a liquid crystal display device having a sub-pixel structure.
In addition, in the high frame-rate conversion process performed in the first embodiment and the second embodiment, the combination of the first frame rate of the input image signal and the second frame rate of the output image signal is not particularly limited and may be any combination. Specifically, for example, 60 (or 30) Hz may be adopted as the first frame rate of the input image signal and 120 Hz as the second frame rate of the output image signal, or 60 (or 30) Hz as the first frame rate and 240 Hz as the second frame rate. For example, 50 Hz, corresponding to the PAL (phase alternating line) system, may be adopted as the first frame rate and 100 Hz or 200 Hz as the second frame rate. For example, 48 Hz, corresponding to the film television system, may be adopted as the first frame rate, and a predetermined frequency higher than 48 Hz as the second frame rate. By performing the high frame-rate conversion process described in the first embodiment and the second embodiment on an input image signal from an existing television system or the like, existing content can be displayed with high quality.
In addition, in the first embodiment and the second embodiment, for convenience of description, the image signal has been assumed to be the Y (luminance information) of the YUV format, and the signals used in the various processes described above, such as the filter processes and the correction process applied to the pixel of interest, have also been luminance signals. However, image signals of different formats may be used. For example, the RGB format or the UV (color difference information) of the YUV format may be used. When UV is used, the picture quality of color-difference changes can be improved appropriately by adjusting the gain at the filter output.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2008-322300 filed in the Japan Patent Office on December 18, 2008, the entire content of which is hereby incorporated by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (19)

1. An image processing apparatus which processes image data provided from outside and outputs the image data to a hold-type display device, the image processing apparatus comprising:
a correction processing section performing a correction process to correct a pixel value in the image data for each pixel by performing a spatial low-pass filter (LPF) process on the image data in a frame to be displayed in the display device according to a magnitude of a motion vector in the image data, the LPF process allowing a slope of a changing edge portion in the image data to become gentler; and
a signal characteristic detection section detecting, from the image data, predetermined signal characteristic information to be used in the correction process, based on the magnitude of the motion vector,
wherein the signal characteristic detection section detects the changing edge portion in the image data based on the magnitude of the motion vector, and detects, as the predetermined signal characteristic information, a minimum pixel value and a maximum pixel value in a predetermined correction region of the image data and positions of pixels having the minimum and maximum pixel values.
2. The image processing apparatus according to claim 1, wherein
the signal characteristic detection section determines a change direction of the changing edge portion in the correction region based on the minimum and maximum pixel values and the positions of the pixels having the minimum and maximum pixel values, and
the correction processing section determines whether to perform the correction process based on the change direction determined by the signal characteristic detection section.
3. The image processing apparatus according to claim 2, wherein
the correction processing section determines to perform the correction process when the change direction is an ascending direction from a low gray level to a high gray level, and
the correction processing section determines not to perform the correction process when the change direction is a descending direction from a high gray level to a low gray level.
4. The image processing apparatus according to claim 1, wherein the correction region is determined based on the magnitude of the motion vector.
5. image processing apparatus according to claim 1, wherein, described correction-processing portion comprises first edge replacement part, described first edge is replaced part and is used the described minimum pixel value that detected by described characteristics of signals test section or max pixel value to replace pixel value in the perimeter, described perimeter has the outside of the pixel region between the locations of pixels of minimum and max pixel value in the correcting area of described view data, and
Described correction-processing portion is carried out the correction processing by using the LPF processing for the view data that stands by the replacement that described first edge replacement part is carried out is handled.
6. image processing apparatus according to claim 1, wherein,
Described characteristics of signals test section was carried out weighting and is handled before detecting described minimum pixel value and having the locations of pixels of described minimum pixel value, with according to from correcting area interested intended pixel to the distance of each pixel, positive coefficient is weighted on each pixel value, and
Described characteristics of signals test section was carried out weighting and is handled before detecting described max pixel value and having the locations of pixels of described max pixel value, negative coefficient is weighted on each pixel value to the distance of each pixel according to interested intended pixel from correcting area.
7. The image processing apparatus according to claim 6, wherein
the positive coefficient, the negative coefficient, or both are adjusted in the weighting process so as to increase as the spatial frequency in the correction area increases.
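For instance, the coefficient could be scaled by a crude activity measure standing in for spatial frequency; the mean-absolute-difference estimate and the scaling constants below are assumptions:

```python
import numpy as np

def frequency_scaled_coeff(region, base=0.5):
    """Grow the weighting coefficient with the local activity (a rough
    stand-in for spatial frequency) of the 8-bit correction area."""
    activity = float(np.mean(np.abs(np.diff(region)))) if len(region) > 1 else 0.0
    return base * (1.0 + activity / 255.0)
```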
8. The image processing apparatus according to claim 1, wherein
the number of taps of the filter used for the LPF process changes according to the magnitude of the motion vector.
9. The image processing apparatus according to claim 8, wherein
the number of taps of the filter used for the LPF process is set to an odd number regardless of the value of the motion vector.
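An odd, symmetric tap count keeps the filter centred on the pixel being corrected, so the edge is softened without being shifted by half a pixel. A sketch of one possible mapping (the rounding rule is an assumption):

```python
def lpf_tap_count(mv_magnitude):
    """More motion means a wider low-pass filter, always forced to an
    odd number of taps so the filter stays centred."""
    taps = max(1, int(round(abs(mv_magnitude))))
    return taps if taps % 2 == 1 else taps + 1
```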
10. The image processing apparatus according to claim 1, wherein
the magnitude of the gain of the filter used for the LPF process changes according to the spatial frequency in the correction area.
11. The image processing apparatus according to claim 1, wherein
the magnitude of the gain of the filter used for the LPF process changes according to the magnitude of the motion vector.
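If the filter gain is interpreted as the strength with which the low-passed result replaces the original (an assumption; the claims do not define the gain this way), claims 10 and 11 amount to choosing that strength from the spatial frequency in the correction area or from the motion-vector magnitude:

```python
import numpy as np

def apply_lpf_with_gain(row, kernel, gain):
    """Blend the low-passed row with the original; 'gain' in [0, 1] could
    be derived from the spatial frequency in the correction area or from
    the motion-vector magnitude. The kernel is assumed to have an odd
    number of taps (see claim 9)."""
    row = np.asarray(row, dtype=float)
    padded = np.pad(row, len(kernel) // 2, mode="edge")
    filtered = np.convolve(padded, kernel, mode="valid")
    return (1.0 - gain) * row + gain * filtered
```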
12. The image processing apparatus according to claim 1, wherein
the correction processing section performs the correction process by using response time information together with the motion vector, the response time information associating the magnitude of the response time in the display device with the amount of gray scale change.
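For example, the response time information could be held as a gray-to-gray lookup table and combined with the motion vector into a correction strength; the table values, the coarse low/high bucketing and the scaling below are all invented for illustration:

```python
# Hypothetical gray-to-gray response times (ms); the values and the
# bucketing are made up for this sketch, not taken from the patent.
RESPONSE_MS = {("low", "high"): 12.0, ("high", "low"): 8.0,
               ("low", "low"): 4.0, ("high", "high"): 4.0}

def correction_strength(start_gray, target_gray, mv_magnitude, frame_ms=16.7):
    """Scale the correction with the motion-vector magnitude and with how
    slowly the panel responds for this particular gray scale change."""
    bucket = lambda g: "high" if g >= 128 else "low"
    slowness = RESPONSE_MS[(bucket(start_gray), bucket(target_gray))] / frame_ms
    return min(1.0, 0.1 * abs(mv_magnitude) * (1.0 + slowness))
```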
13. The image processing apparatus according to claim 1, further comprising:
a motion vector detection section that detects the motion vector in the image data.
14. The image processing apparatus according to claim 1, wherein
the correction processing section performs the correction process by applying, according to the magnitude of the motion vector, a spatial high-pass filter (HPF) process together with the LPF process to the image data of the frame to be displayed, the HPF process providing an overshoot region and an undershoot region near the two ends of the changing edge portion in the image data.
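To illustrate what the HPF contribution looks like, the sketch below adds a Laplacian-style high-pass term to a rising edge, producing an undershoot on its low side and an overshoot on its high side; the kernel and the gain are assumptions, not the claimed filter:

```python
import numpy as np

def hpf_overshoot(row, gain=0.5):
    """Add a high-pass detail term so the edge undershoots just before
    it and overshoots just after it, then clip to the 8-bit range."""
    padded = np.pad(np.asarray(row, dtype=float), 1, mode="edge")
    detail = -padded[:-2] + 2.0 * padded[1:-1] - padded[2:]   # 3-tap Laplacian
    return np.clip(row + gain * detail, 0, 255)

row = np.array([100, 100, 100, 180, 180, 180], dtype=float)
print(hpf_overshoot(row))   # dips to 60 before the edge and overshoots to 220 after it
```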
15. The image processing apparatus according to claim 14,
wherein the correction processing section includes a second edge replacement section that replaces pixel values in an outer region with the minimum pixel value or the maximum pixel value detected by the signal characteristic detection section, the outer region lying outside the pixel region between the positions of the pixels having the minimum and maximum pixel values in the correction area of the image data, and that replaces the pixel values in the pixel region between the positions of the pixels having the minimum and maximum pixel values in the correction area with pixel values consisting of three values, namely the minimum pixel value, the maximum pixel value, and the pixel value of the intermediate pixel located at the middle position between the positions of the pixels having the minimum and maximum pixel values, and
the correction processing section performs the correction process by applying the HPF process to the image data that has undergone the replacement process performed by the second edge replacement section.
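A sketch of one reading of this second replacement for a rising edge (min_pos < max_pos is assumed, as is the way the span is split around the midpoint); the `hpf_overshoot()` sketch above would then be applied to the returned row:

```python
import numpy as np

def second_edge_replacement(row, min_val, min_pos, max_val, max_pos):
    """Clamp the outer region to min/max and collapse the span between
    them to just three levels: min, the midpoint pixel's value, max."""
    out = np.asarray(row, dtype=float).copy()
    mid_pos = (min_pos + max_pos) // 2
    mid_val = float(out[mid_pos])
    out[:min_pos] = min_val                  # outer region, low side
    out[max_pos + 1:] = max_val              # outer region, high side
    out[min_pos:mid_pos] = min_val           # inner region: three values only
    out[mid_pos] = mid_val
    out[mid_pos + 1:max_pos + 1] = max_val
    return out
```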
16. The image processing apparatus according to claim 14, wherein
each pixel in the display device has a sub-pixel structure, and
when the change direction of the changing edge portion to be corrected is a rising direction from low gray scale to high gray scale, the correction processing section adjusts the gain in the HPF process toward the gain in the LPF process so that the gain in the HPF process is reduced.
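A one-line sketch of such an adjustment (the linear blend and its `blend` knob are assumptions made for illustration):

```python
def adjusted_hpf_gain(hpf_gain, lpf_gain, rising_edge, blend=0.5):
    """On rising edges of a sub-pixel panel, pull the HPF gain part of
    the way toward the (smaller) LPF gain so the overshoot is tamed."""
    return hpf_gain + blend * (lpf_gain - hpf_gain) if rising_edge else hpf_gain
```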
17. The image processing apparatus according to claim 14, wherein
the correction processing section performs the HPF process by using the predetermined signal characteristic information in the image data together with the motion vector.
18. An image display system comprising:
an image processing apparatus that processes image data provided from outside; and
a hold-type display device that performs image display based on the processed image data output from the image processing apparatus,
wherein the image processing apparatus includes
a correction processing section that performs a correction process by applying a spatial low-pass filter (LPF) process, according to the magnitude of a motion vector in the image data, to the image data of a frame to be displayed on the display device, so as to correct the pixel value of each pixel in the image data, the LPF process allowing the slope of a changing edge portion in the image data to become more gentle; and
a signal characteristic detection section that detects, from the image data and based on the magnitude of the motion vector, predetermined signal characteristic information to be used in the correction process,
wherein the signal characteristic detection section detects the changing edge portion in the image data based on the magnitude of the motion vector, and detects, as the predetermined signal characteristic information, the minimum and maximum pixel values within a predetermined correction area of the image data together with the positions of the pixels having those minimum and maximum pixel values.
19. The image display system according to claim 18, wherein
the correction processing section performs the correction process by applying, according to the magnitude of the motion vector, a spatial high-pass filter (HPF) process together with the LPF process to the image data of the frame to be displayed, the HPF process providing an overshoot region and an undershoot region near the two ends of the changing edge portion in the image data.
CN 200910262430 2008-12-18 2009-12-18 Image processing device and image display system Expired - Fee Related CN101751894B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008322300A JP5176936B2 (en) 2007-12-18 2008-12-18 Image processing apparatus and image display system
JP322300/08 2008-12-18

Publications (2)

Publication Number Publication Date
CN101751894A CN101751894A (en) 2010-06-23
CN101751894B true CN101751894B (en) 2013-07-03

Family

ID=42479204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910262430 Expired - Fee Related CN101751894B (en) 2008-12-18 2009-12-18 Image processing device and image display system

Country Status (1)

Country Link
CN (1) CN101751894B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5929538B2 (en) * 2012-06-18 2016-06-08 セイコーエプソン株式会社 Display control circuit, display control method, electro-optical device, and electronic apparatus
US10304396B2 (en) * 2016-10-28 2019-05-28 Himax Display, Inc. Image processing method for alleviating tailing phenomenon and related imaging processing circuit and display apparatus
CN107784641B (en) * 2017-11-21 2021-08-20 天地伟业技术有限公司 HPF-based image sharpening algorithm
CN111727471B (en) * 2018-02-23 2022-06-03 索尼半导体解决方案公司 Display device, driving method of display device, and electronic apparatus
CN109584774B (en) * 2018-12-29 2022-10-11 厦门天马微电子有限公司 Edge processing method of display panel and display panel

Also Published As

Publication number Publication date
CN101751894A (en) 2010-06-23

Similar Documents

Publication Publication Date Title
JP5024634B2 (en) Image processing apparatus and image display system
US7705816B2 (en) Generating corrected gray-scale data to improve display quality
CN101543043B (en) Image display device, video signal processing device, and video signal processing method
US8624936B2 (en) Display panel control device, liquid crystal display device, electronic appliance, display device driving method, and control program
JP4800381B2 (en) Liquid crystal display device and driving method thereof, television receiver, liquid crystal display program, computer-readable recording medium recording liquid crystal display program, and driving circuit
CN101563725B (en) Display control device, display control method
US10810952B2 (en) Display device and method
CN100517455C (en) Apparatus and method for driving liquid crystal display device
JP5005757B2 (en) Image display device
CN101751894B (en) Image processing device and image display system
US20120169686A1 (en) Timing controller, display apparatus including the same, and method of driving the same
US20080246784A1 (en) Display device
CN101496088B (en) Image processing device and image processing method
CN102034418B (en) Image processing apparatus and image processing method
KR100935404B1 (en) Display device
CN101751893B (en) Image processing device and image display system
JP2005268912A (en) Image processor for frame interpolation and display having the same
CN101189652B (en) Display device
JP3763773B2 (en) Subtitle character display method and display system
JP2007225945A (en) Display apparatus
JP2008191300A (en) Video image processor and display device
KR20070081656A (en) Display apparatus and method for controlling thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
    Granted publication date: 20130703
    Termination date: 20151218
EXPY Termination of patent right or utility model