WO2006098194A1 - Method, device and program for controlling a display device, recording medium, and display device using them - Google Patents

Method, device and program for controlling a display device, recording medium, and display device using them

Info

Publication number
WO2006098194A1
Authority
WO
WIPO (PCT)
Prior art keywords
video data
luminance
pixel
period
display
Prior art date
Application number
PCT/JP2006/304433
Other languages
English (en)
Japanese (ja)
Inventor
Makoto Shiomi
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha filed Critical Sharp Kabushiki Kaisha
Priority to US11/886,226 (granted as US7956876B2)
Publication of WO2006098194A1


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source
    • G09G3/36 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source using liquid crystals
    • G09G3/3611 Control of matrices with row and column drivers
    • G09G3/3648 Control of matrices with row and column drivers using an active matrix
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/0252 Improving the response speed
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/0261 Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/028 Improving the quality of display appearance by changing the viewing angle properties, e.g. widening the viewing angle, adapting the viewing angle to the view direction
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/0285 Improving the quality of display appearance using tables for spatial correction of display data
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/16 Determination of a pixel data signal depending on the signal applied in the previous frame

Definitions

  • Display device driving method, display device driving device, program and recording medium therefor, and display device including the same
  • The present invention relates to a display device driving method capable of improving the image quality and brightness when displaying moving images, a driving device for such a display device, a program and recording medium therefor, and a display device including the driving device.
  • A method is also used in which the drive signal is modulated so as to emphasize the gradation transition from the previous frame to the current frame.
  • Patent Document 1: Japanese Patent Laid-Open No. 4-302289 (publication date: October 26, 1992)
  • Patent Document 2: Japanese Patent Laid-Open No. 5-68221 (publication date: March 19, 1993)
  • Patent Document 3: Japanese Patent Laid-Open No. 2001-281625 (publication date: October 10, 2001)
  • Patent Document 4: Japanese Patent Laid-Open No. 2002-23707 (publication date: January 25, 2002)
  • Patent Document 5: Japanese Patent Laid-Open No. 2003-22061 (publication date: January 24, 2003)
  • Patent Document 6: Japanese Patent No. 2650479 (issue date: September 3, 1997)
  • Non-Patent Document 1: New Edition Color Science Handbook, 2nd edition (University of Tokyo Press; publication date: June 10, 1998)
  • The present invention has been made in view of the above problems, and its object is to provide a display device that is brighter, has a wider viewing angle, suppresses deterioration in image quality due to over-emphasis of gradation transitions, and has improved image quality when displaying moving images.
  • In order to time-division drive a pixel, the display device driving method according to the present invention includes a generation process, repeated each time input video data for the pixel is input, in which a predetermined plural number of output video data for the pixel are generated in each input cycle.
  • The method also includes a correction process with prediction, performed before or after each generation process, which corrects correction target data (either the input video data or each output video data), drives the pixel according to the corrected correction target data during the corresponding drive period, and predicts the luminance reached by the pixel at the end of that drive period.
  • A dark display process is performed when the input video data indicates a luminance lower than a predetermined threshold: at least one of the plural output video data is set to a value indicating a luminance within a predetermined range for dark display, and at least one of the remaining output video data is increased or decreased to control the time-integral value of the pixel luminance over the period driven by the plural output video data. A bright display process is performed when the input video data indicates a luminance higher than the threshold: at least one of the plural output video data is set to a value indicating a luminance within a predetermined range for bright display, and the remainder is controlled likewise.
  • Each correction process with prediction corrects the correction target data according to the prediction result, among the prediction results so far, indicating the luminance reached by the pixel at the first time point of the drive period of the correction target data, and predicts the luminance at the end of the drive period of the current correction target data based on at least that prediction result and the current correction target data, selected from the prediction results so far and the correction target data input so far.
  • In this method, at least one of the plural output video data is set to a value indicating a luminance within the predetermined range for bright display, and at least one of the remaining output video data is increased or decreased to control the time-integral value of the pixel luminance over the period driven by the plural output video data. Therefore, in most cases, the pixel luminance in periods other than the period driven according to the output video data indicating the brightness for bright display (the bright display period) can be set lower than in the bright display period.
  • As long as the difference from the luminance in the bright display period is above a certain level, the image quality when displaying a moving image can thereby be improved.
  • In a liquid crystal display element, when the pixel luminance is close to the maximum or minimum, the viewing angle over which the luminance stays within an allowable range is wider than when the luminance is at an intermediate level. This is because, near the maximum or minimum luminance, the alignment state of the liquid crystal molecules is simple owing to the demand for contrast, making it easy to correct, and because viewing-angle performance is selectively assured especially near the minimum luminance. Therefore, if time-division driving is not performed, the viewing angle over which halftones can be suitably displayed becomes narrow, and problems such as whitening may occur when the screen is observed from an oblique direction.
  • In the case of dark display, one of the output video data is set to a value indicating the luminance for dark display, so during the dark display period the viewing angle over which the pixel luminance is maintained within the allowable range can be enlarged. As a result, compared with a configuration without time-division driving, the occurrence of defects such as whitening can be prevented and the viewing angle increased.
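The dark-display and bright-display generation processes described above can be sketched numerically. The following is a minimal illustration, not the patent's actual circuit: luminance is normalized to 0..1, the input cycle is split into two equal subframes, and the threshold value 0.5 is an assumed stand-in for the patent's predetermined threshold.

```python
def split_into_subframes(level, threshold=0.5):
    """Split one input luminance level (0..1) into two equal-length
    subframe levels whose average reproduces the input.

    At or below the threshold (dark display), the second subframe is
    held at the dark value 0 and the first carries all the luminance;
    above it (bright display), the first subframe is held at the
    bright value 1 and the second is increased or decreased to keep
    the time-integral of the pixel luminance equal to the input level.
    """
    if level <= threshold:
        return (2.0 * level, 0.0)    # dark display process
    return (1.0, 2.0 * level - 1.0)  # bright display process
```

For any input level the mean of the two subframes equals the input, while one subframe stays pinned in the dark or bright range, which is what enlarges the viewing angle and improves moving-image quality in the text above.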
  • Furthermore, the correction target data is corrected according to the prediction result, among the previous prediction results, indicating the luminance reached by the pixel at the beginning of the drive period of the correction target data. Therefore, the response speed of the pixels can be improved, and the types of display devices that can be driven by this driving method can be increased.
  • When pixels are time-division driven, they are required to respond faster than when they are not.
  • If the response speed of the pixel is sufficient, the pixel reaches the luminance indicated by the correction target data by the last time point of its drive period even if the data is output as it is, without correction or reference to the prediction result. If the response speed is insufficient, however, it is difficult for the pixel luminance to reach the indicated value by the last time point simply by outputting the correction target data as it is. Consequently, without correction, the types of display devices that can be driven by a time-division driving method would be more limited than when time-division driving is not performed.
  • In the above method, however, the correction target data is corrected according to the prediction result. If, for example, the response speed is expected to be insufficient, processing according to the prediction result becomes possible, such as emphasizing the gradation transition, and the response speed of the pixel can be improved.
  • In addition, the luminance at the last time point is predicted based on at least the prediction result indicating the luminance at the first time point and the current correction target data, selected from the prediction results so far and the correction target data input so far. It can therefore be predicted with higher accuracy than in a configuration that simply assumes the luminance indicated by the current correction target data has been reached. As a result, the image quality during display can be improved.
  • As a result, in the case of dark display at least one of the plural output video data is set to the luminance for dark display, and in the case of bright display at least one is set to the luminance for bright display, so the viewing angle of the display device can be enlarged and the image quality at the time of moving image display improved. Moreover, since the prediction is performed as described above, it can be carried out with higher accuracy, so that deterioration in image quality due to excessive emphasis of gradation transitions can be prevented while the viewing angle is increased and moving-image quality improved.
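The correction-with-prediction idea can be illustrated with a toy first-order pixel model. The `response` and `gain` constants below are invented for illustration and do not come from the patent; a real drive device would use measured response characteristics, typically held in look-up tables.

```python
def predict_reached(start, applied, response=0.6):
    """First-order model of a slow pixel: within one drive period it
    moves only a fraction `response` of the way to the applied level."""
    return start + response * (applied - start)

def correct(target, predicted_start, gain=1.5):
    """Emphasize the gradation transition from the predicted starting
    luminance toward the target, clamped to the valid range 0..1."""
    value = predicted_start + gain * (target - predicted_start)
    return min(max(value, 0.0), 1.0)

def drive(targets, corrected=True):
    """Drive a pixel through a sequence of target levels, carrying the
    predicted reached luminance from one drive period to the next,
    and return the luminance reached after the last period."""
    state = 0.0
    for target in targets:
        applied = correct(target, state) if corrected else target
        state = predict_reached(state, applied)
    return state
```

With these example constants, after two periods targeting 0.8 the corrected drive reaches 0.78 while the uncorrected drive reaches only about 0.67, showing how correction according to the prediction result improves the effective response speed.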
  • A display device driving device according to the present invention includes generation means for generating, in order to time-division drive a pixel, a predetermined plural number of output video data for the pixel in each input cycle, each time input video data for the pixel is input.
  • It also includes correction means, arranged before or after the generation means, which corrects correction target data (either the input video data or each output video data) so that the pixel is driven according to the corrected correction target data, and which predicts the luminance reached by the pixel at the end of the drive period of the correction target data.
  • When the input video data indicates a luminance lower than a predetermined threshold, the generation means sets at least one of the plural output video data to a value indicating a luminance within a predetermined range for dark display and increases or decreases at least one of the remaining output video data to control the time-integral value of the pixel luminance over the period driven by the plural output video data; when the input video data indicates a luminance higher than the threshold, it likewise sets at least one of the output video data to a value indicating a luminance within a predetermined range for bright display.
  • The correction means corrects the correction target data according to the prediction result indicating the luminance reached by the pixel at the first time point of its drive period, and predicts the luminance at the last time point of the drive period based on at least that prediction result and the current correction target data, selected from the prediction results so far and the correction target data input so far.
  • According to this driving device, in most cases a period in which the pixel luminance is lower than in the other periods is provided at least once in each input cycle, so the image quality when the display device displays a moving image can be improved. Further, in the case of bright display, as the luminance indicated by the input video data increases, the pixel luminance in periods other than the bright display period also increases, so a display device capable of brighter display can be realized.
  • In addition, since the correction target data can be corrected according to the prediction result, among the prediction results so far, indicating the luminance reached by the pixel at the first point of its drive period, the response speed of the pixels can be improved, and the types of display devices that can be driven by the drive device can be increased.
  • Furthermore, since the luminance at the last time point is predicted based on at least the prediction result indicating the luminance at the first time point and the current correction target data, selected from the prediction results so far and the correction target data input so far, it can be predicted with higher accuracy. Therefore, even though repeated gradation transitions with alternately increasing and decreasing gradations occur frequently when characteristics such as image quality, brightness, and viewing angle during moving image display are improved, deterioration in image quality due to over-emphasis of gradation transitions can be prevented and the image quality when displaying moving images improved.
  • In addition to the above configuration, the correction target data may be the input video data, with the correction means arranged before the generation means; as the luminance reached by the pixel at the end of the drive period of the correction target data, the correction means may predict the luminance reached at the end of the period in which the pixel is driven by the plural output video data generated by the generation means according to the corrected input video data.
  • A circuit for prediction can be realized, for example, as a circuit that stores in advance, in a storage means, a value indicating the prediction result for each value that can be input, and reads out from the storage means the prediction result corresponding to the actually input value.
  • Here, once the corrected input video data is determined, each output video data corresponding to it is determined; and once the output video data and the luminance of the pixel at the first time point of the period driven by the plural output video data are determined, the luminance at the last time point is determined.
  • In this configuration, the correction means predicts the luminance at the last time point only once per input cycle, yet it can predict, without problem, the luminance at the last time point of the drive period of the current input video data based on at least the prediction result indicating the luminance reached by the pixel at the beginning of that drive period (the drive period of the correction target data) and the current input video data. As a result, the required operation speed of the correction means can be kept low.
  • Alternatively, the correction means may be arranged after the generation means and correct each output video data as the correction target data. In this configuration, since each output video data is corrected by the correction means, more accurate correction processing is possible and the response speed of the pixels can be further improved.
  • Further, the correction means may include a correction unit that corrects the plural output video data generated for each input cycle and, for each of the divided periods obtained by dividing the input cycle into the predetermined number, outputs corrected output video data corresponding to that divided period; and a prediction result storage unit that stores, among the prediction results, the prediction result relating to the last divided period.
  • When the correction target data corresponds to the first divided period, the correction unit corrects it based on the prediction result read from the prediction result storage unit. When it corresponds to the second or subsequent divided periods, the correction unit predicts the luminance at the first time point based on the output video data corresponding to the divided periods before the correction target data and the prediction result stored in the prediction result storage unit, and corrects the correction target data according to that prediction. Based on the output video data corresponding to the last divided period, the output video data corresponding to the preceding divided periods, and the prediction result stored in the prediction result storage unit, the correction unit predicts the luminance of the pixel at the end of the drive period of the output video data corresponding to the last divided period and stores this prediction result in the prediction result storage unit.
  • In this way, the correction target data can be corrected without storing in the prediction result storage unit, for every divided period, the result of predicting the luminance reached by the pixel at the end of the preceding divided period. Compared with a configuration in which the prediction results for all divided periods are stored, the amount of prediction result data stored in the prediction result storage unit per input cycle can be reduced.
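A sketch of this storage scheme, reusing a toy first-order response model (the `gain` and `response` constants are assumptions for illustration, not values from the patent): only the prediction for the end of the last divided period is written back per pixel and per input cycle, while predictions for intermediate divided periods are recomputed on the fly.

```python
class SubframeCorrector:
    """Per-pixel correction that stores only ONE prediction result per
    pixel: the luminance predicted at the end of the last divided
    period of the previous input cycle."""

    def __init__(self, n_pixels, gain=1.5, response=0.6):
        self.store = [0.0] * n_pixels  # prediction result storage unit
        self.gain = gain
        self.response = response

    def process(self, pixel, subframe_values):
        state = self.store[pixel]  # read once per input cycle
        corrected = []
        for value in subframe_values:
            # emphasize the transition from the predicted start luminance
            applied = min(max(state + self.gain * (value - state), 0.0), 1.0)
            corrected.append(applied)
            # running prediction of the luminance reached at the end of
            # this divided period (first-order response model)
            state += self.response * (applied - state)
        self.store[pixel] = state  # write back only the final prediction
        return corrected
```

Intermediate predictions never touch the store, so the data written per input cycle is one value per pixel regardless of how many divided periods there are.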
  • In addition, the display device may include a plurality of pixels; the generation means generates, for each input cycle and according to the input video data for each pixel, the predetermined plural number of output video data for that pixel; the correction means corrects each output video data for each pixel; and the prediction result corresponding to each pixel is stored in the prediction result storage unit.
  • The generation means may generate the plural output video data for each pixel the predetermined plural number of times per input cycle, and the correction means may read the prediction result for each pixel the predetermined plural number of times per input cycle and, based on these prediction results and the respective output video data, perform the prediction of the pixel luminance at the last time point and the storage of the prediction result plural times per input cycle, while thinning out the write processing of the prediction results.
  • In this configuration, the plural output video data generated for each input cycle are generated the predetermined plural number of times, and the prediction result is read the predetermined plural number of times per input cycle. Based on these prediction results and each output video data, the luminance of the pixel at the last time point can be predicted plural times and the prediction results stored. Note that when there are plural pixels, the read processing and the generation processing are performed for each pixel.
  • Among the prediction processing and the prediction-result storage processing that can each be performed plural times per input cycle, at least one prediction-result write is thinned out. As a result, the time interval at which the prediction result of each pixel is stored in the prediction result storage unit can be lengthened, and the response speed required of the prediction result storage unit can be reduced compared with a configuration without thinning.
  • Further, the generation means may increase or decrease specific output video data, i.e., a specific one of the remaining output video data, to control the time-integral value of the pixel luminance over the period driven by the plural output video data, and may set the output video data other than the specific output video data to a value indicating a luminance within the predetermined range for dark display or a value indicating a luminance within the predetermined range for bright display.
  • In this configuration, since the output video data other than the specific output video data are set to values within one of these two ranges, they are never set to values lying between the two ranges. Accordingly, the occurrence of problems such as whitening can be prevented and the viewing angle can be expanded.
  • Further, when the period in which the pixel is driven according to the plural output video data is taken as a unit period, divided into plural divided periods corresponding to the respective output video data, the generation means may proceed as follows: in the region where the luminance indicated by the input video data is lowest, the output video data corresponding to the divided period closest to the temporal center of the unit period is selected as the specific output video data; as the luminance indicated by the input video data increases and the specific output video data reaches the predetermined range for bright display, that output video data is fixed at a value within this range, and among the remaining divided periods the output video data corresponding to the divided period closest to the temporal center of the unit period is newly selected as the specific output video data.
  • In this way, the temporal center of gravity of the pixel luminance within the unit period is kept near the temporal center of the unit period.
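The order in which divided periods successively become the specific output video data can be sketched as follows; the tie-break toward the earlier period is an assumption made for illustration, not something stated in the text.

```python
def specific_data_order(n_subframes):
    """Order in which divided periods are chosen as the 'specific
    output video data' as the input luminance rises: the period
    nearest to the temporal centre of the unit period comes first
    (ties broken toward the earlier period).
    """
    centre = (n_subframes - 1) / 2.0
    return sorted(range(n_subframes),
                  key=lambda i: (abs(i - centre), i))
```

Filling subframes outward from the temporal center in this order keeps the center of gravity of the pixel luminance near the middle of the unit period, as described above.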
  • In addition, the timing for switching which of the plural output video data is increased or decreased may be set closer to the timing that equally divides the range of luminance the pixel can express than to the timing that equally divides its range of gray levels.
  • In this configuration, since the output video data that carries the controlling luminance is switched at points determined appropriately from the time-integral value of the pixel luminance over the period driven by the plural output video data, the amount of whitening perceived by a viewer can be further reduced and the viewing angle further expanded, compared with switching at equal gray-level divisions.
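Assuming a display gamma of 2.2 (a value chosen purely for illustration; the text does not state one), the difference between dividing the gray-level range equally and dividing the expressible luminance range equally can be computed:

```python
GAMMA = 2.2  # assumed display gamma, for illustration only

def to_luminance(gray):
    """Relative luminance produced by a normalized gray level (0..1)."""
    return gray ** GAMMA

def luminance_equal_switch_point():
    """Gray level at which half of the maximum luminance is reached,
    i.e. the switch timing that equally divides the luminance range."""
    return 0.5 ** (1.0 / GAMMA)
```

The gray-level-equal switch point would be 0.5, whereas half of the maximum luminance is only reached around gray level 0.73; placing the switching timing near the latter, as described above, ties the subframe switch to the pixel's time-integrated luminance rather than to gray-level codes.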
  • The drive device of the display device may be realized by hardware, or by causing a computer to execute a program. Specifically, a program according to the present invention causes a computer to operate as each of the means provided in the drive device, and a recording medium according to the present invention has the program recorded thereon.
  • When the program is executed by a computer, the computer operates as the drive device of the display device. Therefore, as with the drive device itself, it is possible to realize a drive device that can provide a display device that is brighter, has a wider viewing angle, suppresses deterioration in image quality due to over-emphasis of gradation transitions, and has improved image quality when displaying moving images.
  • A display device according to the present invention includes any one of the above drive devices and a display unit including pixels driven by the drive device.
  • The display device may include video receiving means for receiving a television broadcast and inputting a video signal indicating the broadcast video to the drive device; in this case the display unit may be a liquid crystal display panel, and the display device may operate as a liquid crystal television receiver.
  • Alternatively, the display unit may be a liquid crystal display panel whose drive device receives a video signal from the outside, with the display device operating as a liquid crystal monitor that displays the video indicated by the signal.
  • Since a display device having any of these configurations includes the drive device described above, it can, like the drive device itself, be brighter, have a wider viewing angle, suppress deterioration in image quality due to over-emphasis of gradation transitions, and offer improved image quality when displaying moving images.
  • A drive device that drives as described above suppresses deterioration in image quality due to over-emphasis of gradation transitions while being brighter and having a wider viewing angle, and also improves image quality when displaying moving images. It can therefore be used widely and suitably as a drive device for various display devices such as liquid crystal television receivers and liquid crystal monitors.
  • FIG. 1 is a block diagram, showing an embodiment of the present invention, of the main configuration of a signal processing circuit provided in an image display device.
  • FIG. 2 is a block diagram showing a main configuration of the image display device.
  • FIG. 3 (a) is a block diagram showing a main configuration of a television receiver provided with the image display device.
  • FIG. 3 (b) is a block diagram showing a main configuration of a liquid crystal monitor device provided with the image display device.
  • FIG. 4 is a circuit diagram illustrating a configuration example of a pixel provided in the image display device.
  • FIG. 5 is a graph showing the difference in luminance when the pixel is viewed from the front and obliquely, when it is driven without time division.
  • FIG. 7 shows a comparative example, and is a block diagram showing a configuration in which a γ correction circuit is provided before the modulation processing unit in the signal processing circuit.
  • FIG. 8 is a block diagram illustrating a main configuration of a modulation processing unit provided in the signal processing circuit according to the embodiment.
  • FIG. 10 is an explanatory diagram showing the video signal input to the frame memory shown in FIG. 1 and the video signals output from the frame memory to the front LUT and to the rear LUT in the case of 3:1 division.
  • FIG. 11 is an explanatory diagram showing the ON timing of the scanning signal lines related to the front display signal and the rear display signal when the frame is divided into 3: 1 in the present embodiment.
  • FIG. 12 is a graph showing the relationship between planned luminance and actual luminance when a frame is divided 3:1 in this embodiment.
  • FIG. 13 (a) is an explanatory diagram showing a method of inverting the polarity of the interelectrode voltage at the frame period.
  • FIG. 13 (b) is an explanatory diagram showing another method of inverting the polarity of the interelectrode voltage at the frame period.
  • FIG. 14 (a) is an explanatory diagram illustrating an example of fluctuations in the voltage applied to the liquid crystal in one frame for explaining the response speed of the liquid crystal.
  • FIG. 14 (b) is an explanatory diagram illustrating a change in the voltage between electrodes according to the response speed of the liquid crystal, in order to explain the response speed of the liquid crystal.
  • FIG. 14 (c) is an explanatory diagram showing the voltage between the electrodes when the response speed of the liquid crystal is low, for explaining the response speed of the liquid crystal.
  • FIG. 15 is a graph showing the display luminance (the relation between planned luminance and actual luminance) when subframe display is performed using a liquid crystal with a slow response speed.
  • FIG. 16 (a) is a graph showing the luminance displayed by the front subframe and the rear subframe when the display luminance is 3/4 and 1/4 of Lmax.
  • FIG. 16 (b) is a graph showing the transition state of the liquid crystal voltage when the polarity of the voltage applied to the liquid crystal (liquid crystal voltage) is changed in the subframe period.
  • FIG. 17 (a) is an explanatory diagram showing a method of inverting the polarity of the interelectrode voltage at the frame period.
  • FIG. 17 (b) is an explanatory diagram showing another method of inverting the polarity of the interelectrode voltage at the frame period.
  • FIG. 18 (a) is an explanatory diagram showing an example of the polarities of the four subpixels and the liquid crystal voltage of each subpixel in the liquid crystal panel.
  • FIG. 18 (b) is an explanatory diagram showing a case where the polarity of the liquid crystal voltage of each sub-pixel in FIG. 18 (a) is reversed.
  • FIG. 18 (c) is an explanatory diagram showing a case where the polarity of the liquid crystal voltage of each sub-pixel in FIG. 18 (b) is reversed.
  • FIG. 18 (d) is an explanatory diagram showing a case where the polarity of the liquid crystal voltage of each sub-pixel in FIG. 18 (c) is reversed.
  • FIG. 20 is a graph showing the transition of the liquid crystal voltage when the frame is divided into three and the voltage polarity is inverted for each frame.
  • FIG. 21 is a graph showing the transition of the liquid crystal voltage when the frame is divided into three and the voltage polarity is inverted for each subframe.
  • FIG. 23 is a block diagram illustrating a main configuration of a signal processing circuit, illustrating another embodiment of the present invention.
  • FIG. 24 is a block diagram illustrating a configuration example of a modulation processing unit provided in the signal processing circuit, and illustrating a configuration of a main part of the modulation processing unit.
  • FIG. 25 is a timing chart showing the operation of the signal processing circuit.
  • FIG. 26 is a block diagram showing another configuration example of the modulation processing unit provided in the signal processing circuit, and showing a main configuration of the modulation processing unit.
  • FIG. 27 is a timing chart showing the operation of the signal processing circuit.
  • The image display device is a display device in which image quality deterioration due to excessive enhancement of gradation transitions, which appears as brightening at wide viewing angles, is suppressed, and in which the image quality when displaying moving images is improved.
  • it can be suitably used as an image display device of a television receiver.
  • Television broadcasts received by the television receiver include terrestrial television broadcasts, broadcasts using artificial satellites such as BS (Broadcasting Satellite) digital broadcasts and CS (Communication Satellite) digital broadcasts, and cable television broadcasts.
  • the panel 11 of the image display device (display device) 1 includes, for example, subpixels that can display R, G, and B colors, and controls the luminance of each subpixel.
  • As a panel capable of color display, the panel includes, for example, a pixel array (display unit) 2 having sub-pixels SPIX (1,1) to SPIX (n,m) arranged in a matrix as shown in FIG. 2, a data signal line driving circuit 3 for driving the data signal lines SL1 to SLn of the pixel array 2, and a scanning signal line driving circuit 4 for driving the scanning signal lines GL1 to GLm of the pixel array 2.
  • The image display device 1 further includes a control circuit 12 that supplies control signals to both drive circuits 3 and 4, and a signal processing circuit 21 that generates, based on the video signal DAT input from the video signal source VS, a video signal DAT2 to be supplied to the control circuit 12. These circuits operate on power supplied from the power supply circuit 13.
  • One pixel PIX is composed of three sub-pixels SPIX adjacent in the direction along the scanning signal lines GL1 to GLm. Note that the sub-pixels SPIX (1,1) to SPIX (n,m) according to the present embodiment correspond to the pixels described in the claims.
  • The video signal source VS may be any device as long as it can generate the video signal DAT. As an example, when the device including the image display device 1 is a television receiver, the video signal source VS is a tuner (image receiving means) that receives a television broadcast and generates a video signal indicating the video transmitted by the television broadcast.
  • The video signal source VS as a tuner selects a channel of the broadcast signal and transmits the television video signal of the selected channel to the signal processing circuit 21, and the signal processing circuit 21 generates the signal-processed video signal DAT2 based on this television video signal.
  • Other examples of the video signal source VS include a personal computer.
  • When the device including the image display device 1 is the television receiver 100a, the television receiver 100a includes the video signal source VS and the image display device 1, as shown in FIG. 3 (a).
  • A television broadcast signal is input to the video signal source VS.
  • The video signal source VS includes a tuner unit TS that selects a channel from the television broadcast signal and outputs the television video signal of the selected channel as the video signal DAT.
  • the liquid crystal monitor device 100b receives a video monitor signal from, for example, a personal computer as shown in FIG. 3 (b).
  • a monitor signal processing unit 101 that outputs a video signal to the liquid crystal panel 11 is provided.
  • the monitor signal processing unit 101 may be the signal processing circuit 21 or the control circuit 12 itself, or may be a circuit provided in the preceding stage or the subsequent stage.
  • The pixel array 2 includes a plurality (n in this case) of data signal lines SL1 to SLn and a plurality (m in this case) of scanning signal lines GL1 to GLm crossing them. Where i is an arbitrary integer from 1 to n and j is an arbitrary integer from 1 to m, a sub-pixel SPIX (i,j) is provided for each combination of the data signal line SLi and the scanning signal line GLj.
  • Each sub-pixel SPIX (i,j) is arranged in the portion surrounded by two adjacent data signal lines SL (i-1) and SLi and two adjacent scanning signal lines GL (j-1) and GLj.
  • The sub-pixel SPIX may be any display element; as an example, the image display device 1 is a liquid crystal display.
  • As shown in FIG. 4, the sub-pixel SPIX (i,j) includes a field effect transistor SW (i,j) having a gate connected to the scanning signal line GLj and a source connected to the data signal line SLi, and a pixel capacitor Cp (i,j) having one electrode connected to the drain of the field effect transistor SW (i,j).
  • The other end of the pixel capacitor Cp (i,j) is connected to a common electrode line common to all the sub-pixels SPIX.
  • the pixel capacitor Cp (i, j) includes a liquid crystal capacitor CL (i, j) and an auxiliary capacitor Cs (i, j) that is added as necessary.
  • When the scanning signal line GLj is selected, the field effect transistor SW (i,j) becomes conductive, and the voltage applied to the data signal line SLi is applied to the pixel capacitor Cp (i,j).
  • While the field effect transistor SW (i,j) is cut off, the pixel capacitor Cp (i,j) continues to hold the voltage held at the time of cutoff.
  • the transmittance or reflectance of the liquid crystal varies depending on the voltage applied to the liquid crystal capacitor CL (i, j).
  • When the scanning signal line GLj is selected and a voltage corresponding to the video data for the sub-pixel SPIX (i,j) is applied to the data signal line SLi, the display state of the sub-pixel SPIX (i,j) can be changed according to the video data.
  • The liquid crystal display device adopts, as its liquid crystal cell, a vertical alignment mode liquid crystal cell, that is, a liquid crystal cell in which, when no voltage is applied, the liquid crystal molecules are aligned substantially perpendicular to the substrate, and the liquid crystal molecules tilt from the vertical alignment state according to the voltage applied to the liquid crystal capacitor CL (i,j) of the sub-pixel SPIX (i,j). The liquid crystal cell is used in normally black mode (a mode in which black is displayed when no voltage is applied).
  • The scanning signal line driving circuit 4 shown in FIG. 2 outputs a signal indicating whether or not the period is a selection period, such as a voltage signal, to each of the scanning signal lines GL1 to GLm. Further, the scanning signal line driving circuit 4 changes the scanning signal line GLj to which the signal indicating the selection period is output, based on timing signals such as a clock signal GCK and a start pulse signal GSP supplied from the control circuit 12. Thus, the scanning signal lines GL1 to GLm are sequentially selected at a predetermined timing.
  • The data signal line driving circuit 3 extracts the video data for each sub-pixel SPIX, which is input in a time-division manner, by sampling the video signal at a predetermined timing. Further, the data signal line driving circuit 3 outputs, to the sub-pixels SPIX (1,j) to SPIX (n,j) corresponding to the scanning signal line GLj selected by the scanning signal line driving circuit 4, output signals corresponding to the video data via the respective data signal lines SL1 to SLn.
  • The data signal line driving circuit 3 determines the sampling timing and the output timing of the output signals based on timing signals such as a clock signal SCK and a start pulse signal SSP input from the control circuit 12.
  • While the scanning signal line GLj corresponding to them is selected, each of the sub-pixels SPIX (1,j) to SPIX (n,j) adjusts its brightness or transmittance when emitting light according to the output signal on its corresponding data signal line SL1 to SLn, thereby determining its own luminance.
  • the scanning signal line driving circuit 4 sequentially selects the scanning signal lines GLl to GLm.
  • Thereby, the sub-pixels SPIX (1,1) to SPIX (n,m) that make up all the pixels of the pixel array 2 can each be set to the luminance (gradation) indicated by the video data for that sub-pixel, and the display on the pixel array 2 can be updated.
  • The video data D for each sub-pixel SPIX may be the gradation level itself, or may be a parameter for calculating the gradation level, as long as the gradation level of the sub-pixel SPIX can be specified. In the following description, as an example, the case where the video data D is the gradation level of the sub-pixel SPIX will be described.
  • The video signal DAT supplied from the video signal source VS to the signal processing circuit 21 may be an analog signal or a digital signal, as will be described later. Also, it may be transmitted in frame units (whole-screen units), or one frame may be divided into a plurality of fields and the signal may be transmitted in field units. In the following, the case where the video signal DAT is transmitted in frame units will be described.
  • When the video signal source VS according to the present embodiment transmits the video signal DAT to the signal processing circuit 21 of the image display device 1 via the video signal line VL, it transmits the video data in a time-division manner: after all the video data for a certain frame has been transmitted, the video data for the next frame is transmitted.
  • Each frame is composed of a plurality of horizontal lines.
  • The video data for each horizontal line is likewise transmitted in a time-division manner: after the video data for a certain horizontal line has been transmitted, the video data for the next horizontal line is transmitted.
  • Further, when transmitting the video data for one horizontal line, the video signal source VS drives the video signal line VL in a time-division manner.
  • The video data for the individual sub-pixels are transmitted sequentially in a predetermined order.
  • As long as the video data D for each sub-pixel can be identified, the video data D itself may be transmitted for each sub-pixel individually, or data that has undergone some data processing may be transmitted and restored to the original video data D by the signal processing circuit 21.
  • Alternatively, video data indicating the color of each pixel (for example, data expressed in RGB) may be transmitted.
  • In that case, the signal processing circuit 21 generates the video data D for each sub-pixel based on the video data of each pixel.
  • the transmission frequency (dot clock) of the video data of each pixel is 65 [MHz].
  • The signal processing circuit 21 performs a process of emphasizing gradation transition, a process of dividing into subframes, and a γ conversion process on the video signal DAT transmitted via the video signal line VL, and outputs the resulting video signal DAT2.
  • The video signal DAT2 is composed of the processed video data for each sub-pixel; the video data for each sub-pixel in a certain frame is given as a combination of the video data for that sub-pixel in each subframe.
  • Each piece of video data constituting the video signal DAT2 is also transmitted in a time-division manner.
  • The signal processing circuit 21 transmits all the video data for a certain frame and then transmits the video data for the next frame.
  • The video data for each frame is thus transmitted in a time-division manner.
  • Each frame includes a plurality of subframes.
  • The signal processing circuit 21 likewise transmits the video data for each subframe in a time-division manner, for example by transmitting all the video data for a certain subframe and then transmitting the video data for the next subframe.
  • The video data for a subframe is composed of video data for a plurality of horizontal lines.
  • The video data for a horizontal line is composed of video data for the individual sub-pixels.
  • The signal processing circuit 21 transmits the video data for each horizontal line in a time-division manner, transmitting the video data for a certain horizontal line and then transmitting the video data for the next horizontal line, and transmits the video data for the individual sub-pixels sequentially, for example in a predetermined order.
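The nested time-division order described above (frames, then subframes, then horizontal lines, then sub-pixels) can be sketched as follows. This is an illustrative model only, not the patent's circuitry; the function name `emission_order` and all sizes are arbitrary assumptions.

```python
# Illustrative sketch of the nested time-division transmission order:
# all data of one frame precedes the next frame; within a frame,
# subframes, then horizontal lines, then sub-pixels, in order.

def emission_order(num_frames=2, subframes_per_frame=2, lines=3, subpixels=4):
    """Yield (frame, subframe, line, subpixel) tuples in transmission order."""
    for k in range(num_frames):
        for s in range(subframes_per_frame):
            for j in range(lines):
                for i in range(subpixels):
                    yield (k, s, j, i)

order = list(emission_order())
# The first half of the sequence belongs entirely to frame 0.
```

The innermost index varies fastest, matching the description that sub-pixel data within a horizontal line is sent sequentially while the video signal line is driven in a time-division manner.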
  • The gradation transition emphasis process may instead be performed later, but in the following, the case where the subframe division process and the γ conversion process are performed after the gradation transition is emphasized will be described.
  • The signal processing circuit 21 includes a modulation processing unit (correction means) 31 that performs, on the video signal DAT, correction for emphasizing the gradation transition in each sub-pixel SPIX and outputs the corrected video signal DATo, and a subframe processing unit 32 that performs the division into subframes and the γ conversion process based on the video signal DATo and outputs the corrected video signal DAT2.
  • The image display device 1 according to the present embodiment includes R, G, and B sub-pixels for color display, and the modulation processing unit 31 and the subframe processing unit 32 include circuits for R, G, and B, respectively. Since these circuits have the same configuration except for the input video data D (i,j,k), only one of the circuits will be described below with reference to FIG. 1.
  • The modulation processing unit 31 is described in detail below.
  • It corrects each piece of the video data (in this case, the video data D (i,j,k)) for each sub-pixel indicated by the input video signal, as will be described later, and outputs the video signal DATo consisting of the corrected video data (in this case, the video data Do (i,j,k)).
  • Note that FIG. 1, as well as FIG. 7, FIG. 8, FIG. 23, FIG. 24, and FIG. 26 described later, illustrate only the video data relating to a specific sub-pixel SPIX (i,j); the symbols (i,j) indicating the location are therefore omitted, as in the video data Do (k).
  • The subframe processing unit 32 divides one frame period into a plurality of subframes and, based on the video data Do (i,j,k) of a certain frame FR (k), generates the video data for each subframe of the frame FR (k).
  • In the following, one frame FR (k) is divided into two subframes, and for each frame (for example, FR (k)), the subframe processing unit 32 outputs the video data So1 (i,j,k) and So2 (i,j,k) corresponding to the respective subframes based on the video data Do (i,j,k).
  • The temporally previous subframe is SFR1 (k), and the temporally subsequent subframe is SFR2 (k).
  • The case where the signal processing circuit 21 transmits the video data for the subframe SFR2 (k) after transmitting the video data for the subframe SFR1 (k) will be described.
  • The subframe SFR1 (k) corresponds to the video data So1 (i,j,k).
  • The subframe SFR2 (k) corresponds to the video data So2 (i,j,k).
  • The voltage corresponding to the video data D (i,j,k) is applied to the sub-pixel SPIX (i,j).
  • The time at which it is applied can be set in various ways.
  • The video data D (i,j,k) of a certain frame FR (k) is subjected to the gradation transition emphasis process, the frame division process, and the γ correction process (yielding the corrected data So1 (i,j,k) and So2 (i,j,k)), and the voltages (V1 (i,j,k) and V2 (i,j,k)) corresponding to the corrected data are associated with the same frame FR (k).
  • The period corresponding to these data and voltages is referred to as the frame FR (k).
  • These data, voltages, and the frame are referred to with the same frame number (for example, k).
  • More specifically, the period corresponding to these data and voltages is the period from when the video data D (i,j,k) of a certain frame FR (k) is input for the sub-pixel SPIX (i,j) until the video data D (i,j,k+1) of the next frame FR (k+1) is input; or the period from when the first of the corrected data So1 (i,j,k) and So2 (i,j,k) (So1 (i,j,k) in this example) is output until the first of the corrected data So1 (i,j,k+1) and So2 (i,j,k+1) obtained by performing the above processing on the next video data D (i,j,k+1) (So1 (i,j,k+1) in this example) is output; or the corresponding period defined with reference to the application of the voltage V1 (i,j,k) according to the video data So1 (i,j,k).
  • In the following, each subframe and the video data or voltage corresponding to that subframe are, in some cases, referred to collectively as, for example, a subframe SFR (x), omitting the number at the end. In this case, certain subframes SFR1 (k) and SFR2 (k) become subframes SFR (x) and SFR (x+1).
  • The subframe processing unit 32 includes a frame memory 41 that stores the video data D for each sub-pixel SPIX for one frame, a look-up table (LUT) 42 that stores the correspondence between the video data and the video data So1 in the first subframe, an LUT 43 that stores the correspondence between the video data and the video data So2 in the second subframe, and a control circuit 44 that controls them.
  • The LUTs 42 and 43 correspond to the storage means described in the claims, and the control circuit 44 corresponds to the generation means.
  • The control circuit 44 writes, once per frame, the video data D (1,1,k) to D (n,m,k) for the sub-pixels SPIX (1,1) to SPIX (n,m) in the frame (for example, FR (k)) into the frame memory 41, and reads each of the video data D (1,1,k) to D (n,m,k) from the frame memory 41 as many times per frame as there are subframes (in this case, twice).
  • The LUT 42 stores, in association with each of the values that the read video data D (1,1,k) to D (n,m,k) can take, a value indicating the video data So1 to be output when that value is read.
  • Similarly, the LUT 43 stores, in association with each of the possible values, a value indicating the video data So2 to be output when that value is read.
  • The control circuit 44 refers to the LUT 42 and outputs the video data So1 (i,j,k) corresponding to the read video data D (i,j,k), and refers to the LUT 43 and outputs the video data So2 (i,j,k) corresponding to the read video data D (i,j,k).
  • The value stored in each of the LUTs 42 and 43 may be, for example, a difference from the possible value, as long as each piece of video data So1 or So2 can be specified. In the following, the case where the value itself of the video data So1 or So2 is stored, and the control circuit 44 outputs the value read from each of the LUTs 42 and 43 as the video data So1 or So2, will be described.
  • The values stored in the LUTs 42 and 43 are set as follows, where g is the possible value and P1 and P2 are the values stored in the LUTs 42 and 43, respectively. Note that the video data So1 of the subframe SFR1 (k) may instead be set so as to indicate the higher luminance; in the following, however, the case where the video data So2 of the subframe SFR2 (k) is set so as to indicate a luminance higher than that of the video data So1 will be described.
  • In the low luminance region, the value P1 is set to a value within the range determined for dark display, and the value P2 is set according to the value P1 and the value g.
  • The dark display range consists of gradations at or below a gradation predetermined for dark display; when the gradation predetermined for dark display indicates the minimum luminance, it is the gradation (black) indicating the minimum luminance.
  • In the high luminance region, the value P2 is set to a value within the range defined for bright display, and the value P1 is set according to the value P2 and the value g.
  • The range for bright display consists of gradations at or above a gradation predetermined for bright display; when the gradation predetermined for bright display indicates the maximum luminance, it is the gradation (white) indicating the maximum luminance.
  • The gradation predetermined for the bright display is set to a value that can suppress the amount of whitening, described later, to a desired amount or less.
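A minimal numeric sketch of the splitting rule above: one subframe is held at a fixed dark (or bright) level while the other carries the luminance, so that the time-averaged luminance over the two subframes matches the input gradation. This assumes an 8-bit gradation scale, two equal-length subframes, and a display γ of 2.2; the function names, the γ value, and the crossover condition are illustrative assumptions, not values taken from the patent.

```python
GAMMA = 2.2   # assumed display gamma
GMAX = 255    # assumed 8-bit gradation scale

def to_lum(g):
    """Normalized luminance of gray level g under the assumed gamma."""
    return (g / GMAX) ** GAMMA

def to_gray(lum):
    """Gray level whose luminance is lum (inverse of to_lum)."""
    return round(GMAX * lum ** (1.0 / GAMMA))

def split_gray(g):
    """Split gray level g into (P1, P2) for two equal-length subframes so
    the time-averaged luminance matches g.  In the low region P1 stays at
    minimum (dark display) and P2 carries the luminance; in the high
    region P2 saturates at maximum (bright display) and P1 takes over."""
    target = to_lum(g)            # luminance averaged over the frame
    if target <= 0.5:             # dark subframe suffices
        return 0, to_gray(2 * target)
    return to_gray(2 * target - 1), GMAX
```

Because P1 is pinned to black over the whole low-luminance region, the first subframe is dark there, which is what brings the emission close to impulse-type display.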
  • When the video data D (i,j,k) for the sub-pixel SPIX (i,j) indicates a gradation at or below the above threshold value, that is, in the low luminance region, the luminance of the sub-pixel SPIX (i,j) in the frame FR (k) is controlled mainly by the magnitude of the value P2. Therefore, the display state of the sub-pixel SPIX (i,j) can be set to the dark display state at least during the subframe SFR1 (k) in the frame FR (k).
  • As a result, the light emission of the sub-pixel SPIX (i,j) in the frame FR (k) can be brought close to impulse-type light emission such as that of a CRT, and the image quality when displaying a moving image on the pixel array 2 can be improved.
  • On the other hand, when the video data D (i,j,k) for the sub-pixel SPIX (i,j) in a certain frame FR (k) indicates a gradation higher than the threshold value, that is, in the high luminance region, the luminance of the sub-pixel SPIX (i,j) in the frame FR (k) is controlled mainly by the magnitude of the value P1. Therefore, compared with a configuration in which the luminance is allocated approximately equally to both subframes SFR1 (k) and SFR2 (k), the difference between the luminance of the sub-pixel SPIX (i,j) in the subframe SFR1 (k) and that in the subframe SFR2 (k) can be set large.
  • In the high luminance region, the video data So2 (i,j,k) for the subframe SFR2 (k) takes a value within the range specified for bright display, and the video data So1 (i,j,k) increases as the luminance indicated by the video data D (i,j,k) increases. Therefore, the luminance of the sub-pixel SPIX (i,j) in the frame FR (k) can be increased compared with a configuration in which a period for dark display is always provided even when white display is instructed.
  • As a result, the light emission of the sub-pixel SPIX (i,j) is brought closer to the impulse type described above while the maximum luminance of the sub-pixel SPIX (i,j) is not greatly reduced despite the improved image quality during moving image display; therefore, a brighter image display device 1 can be realized.
  • When the panel is viewed obliquely rather than from the front (viewing angle of 0 degrees), the grayscale γ characteristic changes and the halftone luminance becomes brighter.
  • This whitening (luminance floating) phenomenon occurs.
  • Even in an IPS mode liquid crystal display panel, the gradation characteristic changes as the viewing angle increases, to a degree depending on the design of the optical characteristics of the optical film and the like.
  • When the video data D (i,j,k) indicates either a gradation in the high luminance region or a gradation in the low luminance region, one of the video data So1 (i,j,k) and So2 (i,j,k) takes a value within the range defined for bright display or a value within the range defined for dark display, and the luminance of the sub-pixel SPIX (i,j) in the frame FR (k) is controlled mainly by the magnitude of the other.
  • The amount of whitening (deviation from the assumed luminance) is largest for intermediate gradations, and is relatively small when the luminance is sufficiently low or sufficiently high.
  • When the γ characteristic of the panel differs from the intended γ characteristic, it is necessary to perform γ correction processing after the video signal DAT is input and before the corresponding voltage is applied to the panel 11. Even if the two γ characteristics are the same, if an image is to be displayed with a γ characteristic different from the original according to a user's instruction or the like, γ correction is likewise necessary after the video signal DAT is input and before the corresponding voltage is applied to the panel 11.
  • When γ correction is performed by changing the signal input to the panel 11, a γ correction circuit 133 is required instead of a circuit that controls the reference voltage.
  • As a result, the circuit scale may increase.
  • The γ correction circuit 133 refers to an LUT 133a that stores the output value after γ correction corresponding to each input value, and generates the output data after γ correction.
  • In contrast, in the present embodiment, the LUTs 42 and 43 serve both as the LUTs 142 and 143 for divided driving and as the LUT 133a for γ conversion; the two functions are shared.
  • Therefore, the circuit scale can be reduced by the amount of the LUT 133a for γ conversion, and the circuit scale required for the signal processing circuit 21 can be greatly reduced.
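The sharing described above can be sketched numerically: instead of a γ conversion lookup followed by two division lookups, the γ re-mapping is folded into the two subframe tables at build time, so each subframe needs only one lookup per gray level. The `gamma_lut` and `split` functions below are illustrative stand-ins for the patent's tables, and the γ values are assumptions.

```python
# Sketch: fold gamma conversion into the two subframe-division LUTs.
GMAX = 255

def gamma_lut(g, gamma_in=2.2, gamma_out=2.4):
    """Re-map a gray level from one gamma characteristic to another
    (toy stand-in for the LUT 133a of the comparative example)."""
    return round(GMAX * (g / GMAX) ** (gamma_in / gamma_out))

def split(g):
    """Toy division rule: the dark subframe stays at 0 until the input
    exceeds half scale, then the bright subframe saturates."""
    return (0, min(2 * g, GMAX)) if g <= GMAX // 2 else (2 * g - GMAX, GMAX)

# Shared tables: division and gamma conversion in one lookup each.
LUT1 = [split(gamma_lut(g))[0] for g in range(GMAX + 1)]
LUT2 = [split(gamma_lut(g))[1] for g in range(GMAX + 1)]
```

At run time only `LUT1[g]` and `LUT2[g]` are consulted, which is why the separate γ conversion table can be eliminated; rebuilding the tables with different `gamma_out` values also models switching the corrected γ value per user instruction.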
  • The LUTs 42 and 43 are provided for each color of the sub-pixels SPIX (i,j) (in this example, R, G, and B), so that different video data So1 and So2 can be output for each color, and more appropriate values can be output than when the same LUT is shared between different colors.
  • In a liquid crystal panel, the birefringence changes according to the display wavelength, and the panel therefore has a different γ characteristic for each color.
  • When the gradation is expressed by the time-integrated luminance of the response by time-division driving, as in the present embodiment, it is desirable to perform independent γ correction processing for each color, which is particularly effective.
  • Furthermore, the LUTs 42 and 43 are provided for each selectable γ value, and when the control circuit 44 receives an instruction to change the γ value, for example by a user operation, it selects the LUTs 42 and 43 that match the instruction from the plurality of LUTs 42 and 43 and refers to them. Thereby, the subframe processing unit 32 can switch the γ value to which correction is applied.
  • The subframe processing unit 32 may change the time ratio of the subframes SFR1 and SFR2 in response to an instruction to change the γ value.
  • In this case, the subframe processing unit 32 also instructs the modulation processing unit 31 to change the time ratio of the subframes SFR1 and SFR2 used in the modulation processing unit 31.
  • The time ratio of the subframes SFR1 and SFR2 can thus be changed in accordance with the instruction to change the γ value; therefore, as will be described in detail later, correction to any instructed γ value is possible.
  • The modulation processing unit 31 performs prediction-type gradation transition emphasis processing, and includes a frame memory (predicted value storage means) 51 that stores the predicted value E (i,j,k) of each sub-pixel SPIX (i,j) until the next frame FR (k+1), a correction processing unit 52 that corrects each piece of video data D (i,j,k) of the current frame FR (k) based on the predicted value E (i,j,k-1) of the previous frame FR (k-1) stored in the frame memory 51 and outputs the corrected value as the video data Do (i,j,k), and a prediction processing unit 53.
  • The prediction processing unit 53 refers to the video data D (i,j,k) for each sub-pixel SPIX (i,j) of the current frame FR (k) and the predicted value E (i,j,k-1) relating to the sub-pixel SPIX (i,j) stored in the frame memory 51.
  • The predicted value E (i,j,k) of the current frame FR (k) is a value indicating the gradation corresponding to the luminance that the sub-pixel SPIX (i,j), driven by the corrected video data Do (i,j,k), is predicted to reach at the start of the next frame FR (k+1), that is, when driving by the video data Do (i,j,k+1) of the next frame FR (k+1) starts. The prediction processing unit 53 predicts this value E (i,j,k) based on the predicted value E (i,j,k-1) and the video data D (i,j,k) in the current frame FR (k).
  • The corrected video data Do (i,j,k) is subjected to frame division and γ correction processing to generate two pieces of video data per frame, So1 (i,j,k) and So2 (i,j,k), and the corresponding voltages V1 (i,j,k) and V2 (i,j,k) are applied to the sub-pixel SPIX (i,j).
  • When the predicted value E (i,j,k-1) of the previous frame FR (k-1) and the video data D (i,j,k) of the current frame FR (k) are specified, both pieces of video data So1 (i,j,k) and So2 (i,j,k), as well as the two voltages V1 (i,j,k) and V2 (i,j,k), are also specified.
  • Although the predicted value E (i,j,k-1) is a predicted value for the previous frame FR (k-1), it can be restated with reference to the current frame FR (k): E (i,j,k-1) is a value indicating the gradation corresponding to the luminance that the sub-pixel SPIX (i,j) is predicted to reach at the start of the current frame FR (k), that is, a value indicating the display state of the sub-pixel SPIX (i,j) at that start time.
  • When the sub-pixel SPIX (i,j) is a liquid crystal display element, the value also indicates the alignment state of the liquid crystal molecules of the sub-pixel SPIX (i,j).
  • If the prediction method used by the prediction processing unit 53 is accurate, the predicted value E (i,j,k-1) of the previous frame FR (k-1) is predicted accurately.
  • The prediction processing unit 53 can then also accurately predict the predicted value E (i,j,k) based on the predicted value E (i,j,k-1) of the previous frame FR (k-1) and the video data D (i,j,k) in the current frame FR (k).
  • The correction processing unit 52 can correct the video data D (i,j,k) so as to emphasize the gradation transition from the gradation indicated by the predicted value E (i,j,k-1) to the video data D (i,j,k), based on the predicted value E (i,j,k-1) of the previous frame FR (k-1), that is, the value indicating the display state of the sub-pixel SPIX (i,j) at the start of the current frame FR (k), and the video data D (i,j,k) of the current frame FR (k).
  • Although both processing units 52 and 53 may be realized by an LUT alone, in the present embodiment the processing units 52 and 53 are realized by combining LUT reference processing with interpolation processing.
  • the correction processing unit 52 includes an LUT 61.
  • The LUT 61 stores, in association with each of the possible combinations of the video data D (i, j, k) and the predicted value E (i, j, k-1), a value indicating the video data Do to be output when that combination is input. The stored value may be any value from which the video data Do can be specified; in the following, the case where the video data Do itself is stored will be explained.
  • Although values corresponding to all possible combinations may be stored in the LUT 61, in the present embodiment the LUT 61 stores corresponding values only for some predetermined combinations in order to reduce the storage capacity.
  • When a combination is input, the calculation unit 62 provided in the correction processing unit 52 reads from the LUT 61 the values corresponding to a plurality of combinations close to the input combination, interpolates these values by a predetermined calculation, and thereby calculates the value corresponding to the input combination.
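The sparse-LUT-plus-interpolation scheme can be sketched as follows. The grid spacing, the table contents (a simple transition-emphasis rule), and bilinear interpolation as the "predetermined calculation" are all assumptions for illustration; the patent does not disclose the actual table values.

```python
STEP = 32
GRID = list(range(0, 257, STEP))   # stored gradations: 0, 32, ..., 256

def make_demo_lut():
    """Hypothetical sparse LUT 61: values stored only for predetermined
    combinations of (video data D, predicted value E).  The emphasis rule
    D + 0.5*(D - E) is a placeholder, NOT the patent's actual table."""
    return {(d, e): max(0.0, min(255.0, d + 0.5 * (d - e)))
            for d in GRID for e in GRID}

def lookup(lut, d, e):
    """Bilinear interpolation between the four nearest stored combinations,
    in the manner of the calculation unit 62."""
    i0 = min(int(d) // STEP, len(GRID) - 2)
    j0 = min(int(e) // STEP, len(GRID) - 2)
    d0, e0 = GRID[i0], GRID[j0]
    fd, fe = (d - d0) / STEP, (e - e0) / STEP
    return ((1 - fd) * (1 - fe) * lut[(d0, e0)]
            + fd * (1 - fe) * lut[(d0 + STEP, e0)]
            + (1 - fd) * fe * lut[(d0, e0 + STEP)]
            + fd * fe * lut[(d0 + STEP, e0 + STEP)])
```

Storing only every 32nd gradation reduces the table from 256x256 entries to 9x9 while the interpolation recovers intermediate combinations, which is the storage/accuracy trade-off the passage describes.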
  • Similarly, the LUT 71 stores, in association with each of some combinations that the video data D (i, j, k) and the predicted value E (i, j, k-1) can take, a value indicating the value to be output when that combination is input.
  • As described above, the LUT 71 here also stores the value to be output (in this case, the predicted value E (i, j, k)) itself.
  • The combinations for which values are stored in the LUT 71 are likewise limited to some predetermined combinations, and the calculation unit 72 provided in the prediction processing unit 53 calculates the value corresponding to the input combination by interpolation calculation referring to the LUT 71.
  • Note that the frame memory 51 may store not the predicted value E (i, j, k-1) but the video data D (i, j, k-1) itself of the previous frame FR (k-1). In either case, the correction processing unit 52 refers to the predicted value E (i, j, k-1) of the previous frame FR (k-1), that is, the value indicating the display state of the sub-pixel SPIX (i, j) at the start of the current frame FR (k), and corrects the video data D (i, j, k) of the current frame FR (k) with reference to that predicted value.
  • If the signal processing circuit 21 emphasized the gradation transition on the assumption that the luminance indicated by the video data So (i, j, x-1) of the previous subframe SFR (x-1) has been reached at the start of the current subframe SFR (x), the gradation transition could be overemphasized or insufficiently emphasized.
  • In this embodiment, by applying the voltages V1 (i, j, k) and V2 (i, j, k) corresponding to the video data So1 (i, j, k) and So2 (i, j, k) to the sub-pixel SPIX (i, j), the light emission state of the sub-pixel SPIX (i, j) is brought close to impulse-type light emission. Consequently, the luminance that the sub-pixel SPIX (i, j) should take increases or decreases for each subframe, so that a gradation transition in which the luminance increases (rise gradation transition) and a gradation transition in which the luminance decreases (decay gradation transition) are repeated frequently.
  • In contrast, in this embodiment, the predicted value E (i, j, k) is referred to, so that prediction is performed with higher accuracy than in the case considered above, and improper gradation transition intensities can be prevented in spite of the frequent repetition of rise and decay.
  • Examples of a prediction method with higher accuracy than the case considered above include: a method of predicting by referring to a plurality of input video data; a method of predicting by referring to a plurality of prediction results obtained so far; and a method of predicting by referring to a plurality of video data including the video data input so far and at least the current video data.
  • Next, the process in which the subframe processing unit 32 divides the frame into subframes (the generation processing of the video data So1 and So2) will be described in more detail, taking as an example a case where the pixel array 2 is a TFT active matrix liquid crystal panel in VA mode and each sub-pixel SPIX can display 8-bit gradation.
  • In the following, the video data So1 and So2 are referred to as the preceding display signal and the subsequent display signal, respectively.
  • In the normal hold display, the luminance gradation (signal gradation) of the signal (video signal DAT2) applied to the liquid crystal panel ranges from 0 to 255, and the display luminance T is given by T = (L/Lmax)^γ × Tmax ... (1), where L is the signal gradation (frame gradation) when displaying an image in one frame (when displaying an image with normal hold display), Lmax is the maximum luminance gradation (255), T is the display luminance, Tmax is the maximum display luminance, and γ is the correction value (usually 2.2).
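Equation (1) can be sketched numerically as follows; Tmax is normalized to 1.0 here purely for illustration, with γ = 2.2 as stated.

```python
GAMMA = 2.2    # correction value gamma from equation (1)
L_MAX = 255    # maximum luminance gradation Lmax

def display_luminance(L, t_max=1.0, gamma=GAMMA):
    """Display luminance T per equation (1): T = (L / Lmax)**gamma * Tmax."""
    return (L / L_MAX) ** gamma * t_max
```

Because γ > 1, the halftone gradations map to luminances well below the linear midpoint, which is exactly the region where the viewing-angle deviation discussed below is largest.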
  • the display brightness T output from the liquid crystal panel is as shown in FIG. 5 described above.
  • In FIG. 5, the horizontal axis indicates the "luminance that should be output (scheduled luminance; the value corresponding to the signal gradation, equivalent to the above display luminance T)", and the vertical axis indicates the "luminance actually output (actual luminance)".
  • When the viewing angle is large, the actual luminance becomes brighter than the scheduled luminance at halftones due to the change in the gradation γ characteristics.
  • The control circuit 44 is designed to divide one frame evenly into two subframes and to display luminance up to half of the maximum with one subframe.
  • When performing low-luminance display, the control circuit 44 sets the preceding subframe to the minimum luminance (black) and performs tone expression by adjusting only the display luminance of the subsequent subframe (tone expression is performed using only the subsequent subframe).
  • the integrated luminance in one frame is “(minimum luminance + luminance of subsequent subframe) / 2”.
  • When performing high-luminance display, the control circuit 44 sets the subsequent subframe to the maximum luminance (white) and performs tone expression by adjusting the display luminance of the preceding subframe. In this case, the integrated luminance in one frame is "(luminance of the preceding subframe + maximum luminance)/2".
  • the signal gradation setting is performed by the control circuit 44 shown in FIG.
  • The control circuit 44 preliminarily calculates the frame gradation corresponding to the above-described threshold luminance (Tmax/2) using the above-described equation (1).
  • control circuit 44 obtains the frame gradation L based on the video signal output from the frame memory 41.
  • control circuit 44 sets the luminance gradation (F) of the preceding display signal to the minimum (0) by the preceding LUT 42.
  • When the frame gradation L is equal to or lower than the threshold gradation, the control circuit 44 determines the luminance gradation (R) of the subsequent display signal based on the equation (1) as R = 0.5^(-1/γ) × L ... (3).
  • When the frame gradation L exceeds the threshold gradation, the control circuit 44 sets the luminance gradation R of the subsequent display signal to the maximum (255) and determines the luminance gradation F of the preceding subframe based on the equation (1).
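The two branches above can be sketched numerically. The low-luminance branch follows equation (3); the high-luminance expression for F is our inversion of equation (1) under the stated 1:1 split (an assumption consistent with the text, not a quoted formula). γ = 2.2 and Tmax normalized to 1.

```python
GAMMA = 2.2
L_MAX = 255

def split_1_1(L, gamma=GAMMA):
    """Return (F, R) luminance gradations for the preceding and subsequent
    subframes under an equal (1:1) split with threshold luminance Tmax/2."""
    Lt = 0.5 ** (1 / gamma) * L_MAX            # frame gradation of Tmax/2
    if L <= Lt:
        # Low range, equation (3): black front, adjusted rear subframe.
        return 0.0, 0.5 ** (-1 / gamma) * L
    # High range: white rear; choose F so that (T_F + Tmax)/2 = T_L.
    F = L_MAX * (2 * (L / L_MAX) ** gamma - 1) ** (1 / gamma)
    return F, float(L_MAX)

def integrated_luminance(F, R, gamma=GAMMA):
    # Time-averaged (integrated) luminance over the two equal subframes.
    return ((F / L_MAX) ** gamma + (R / L_MAX) ** gamma) / 2
```

For any frame gradation L, the integrated luminance of the two subframes reproduces the normal-hold luminance of equation (1), which is the invariant the circuit maintains.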
  • The control circuit 44 transmits the video signal DAT2 after the signal processing to the control circuit 12 shown in FIG. 2, thereby causing the data signal line driving circuit 3 to accumulate, with a double clock, the preceding display signals of the sub-pixels SPIX of the first scanning signal line GL1.
  • Thereafter, the control circuit 44 causes the scanning signal line drive circuit 4 to turn on (select) the first scanning signal line GL1 via the control circuit 12, and the preceding display signal is written to the sub-pixels SPIX of the scanning signal line GL1.
  • the control circuit 44 similarly turns on the second to m-th scanning signal lines GL2 to GLm with the double clock while changing the previous display signal accumulated in the data signal line driving circuit 3.
  • As a result, the preceding display signal can be written to all the sub-pixels SPIX in a half period of one frame (1/2 frame period).
  • In the remaining 1/2 frame period, the control circuit 44 performs the same operation and writes the subsequent display signal to the sub-pixels SPIX of all the scanning signal lines GL1 to GLm.
  • Thus, the preceding display signal and the subsequent display signal are each written to each sub-pixel SPIX for an equal time (1/2 frame period).
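The double-clock write order over one frame can be sketched as a simple schedule; the line count and labels here are hypothetical and only illustrate the two-pass structure (all lines receive the preceding signal, then all lines receive the subsequent signal).

```python
def write_schedule(m_lines):
    """Selection order of the scanning signal lines GL1..GLm over one frame:
    every line is written with the preceding display signal in the first
    1/2 frame period, then with the subsequent display signal in the
    remaining 1/2 frame period."""
    first_half = [("preceding", g) for g in range(1, m_lines + 1)]
    second_half = [("subsequent", g) for g in range(1, m_lines + 1)]
    return first_half + second_half
```

Because each pass covers all m lines in half a frame, the line clock must run at double the normal-hold rate, which is why the text speaks of a "double clock".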
  • FIG. 6 described above is a graph showing the results (broken line and solid line) of the subframe display in which the preceding display signal and the subsequent display signal are output in the front and rear subframes, together with the results of the normal hold display (dash-dotted line and solid line).
  • In the normal hold display, the deviation between the actual luminance at a large viewing angle and the scheduled luminance is minimal (0) at the minimum or maximum display luminance, and is largest at halftones (near the threshold luminance). The image display device 1 according to the present configuration example therefore performs subframe display in which one frame is divided into subframes.
  • the previous subframe is displayed in black and the display is performed using only the rear subframe within the range where the integrated luminance in one frame is not changed.
  • In high-luminance display, the subsequent subframe is displayed in white within a range in which the integrated luminance in one frame is not changed, and display is performed by adjusting the luminance of only the preceding subframe. For this reason, in this case as well, the shift of the subsequent subframe is minimized, so that the total shift of both subframes can be reduced to approximately half as shown by the broken line in FIG.
  • As described above, the image display device 1 according to the present configuration example can reduce the overall shift to about half compared with the configuration in which the normal hold display is performed (the configuration in which an image is displayed in one frame without using subframes).
  • In the above, the period of the preceding subframe and that of the subsequent subframe are assumed to be equal, because luminance up to half of the maximum value is displayed in one subframe. However, these subframe periods may be set to different values.
  • The white-floating phenomenon, which is a problem in the image display device 1 according to the present configuration example, is a phenomenon in which an image appears bright and whitish when the viewing angle is large, owing to characteristics such as those shown in FIG. 5.
  • an image captured by a camera is usually a signal based on luminance.
  • The image is converted into a display signal using the γ shown in equation (1) (that is, the luminance signal is raised to the power of 1/γ so that gradations are assigned by dividing the luminance equally).
  • an image displayed by the image display device 1 such as a liquid crystal panel has a display luminance represented by the expression (1).
  • the human visual sense perceives an image not as luminance but as brightness.
  • The lightness (lightness index) M is expressed by the following equations (5) and (6) (see Non-Patent Document 1): M = 116 × (y/yn)^(1/3) - 16 ... (5), yn = 100 ... (6), where y is the Y value of the tristimulus values in the XYZ color system of an arbitrary color and yn is the Y value of the standard diffuse reflection surface.
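Equations (5) and (6) correspond to the familiar CIE 1976 lightness form; the sketch below assumes that standard formulation, including the usual linear branch for very dark colors (the branch and its 0.008856 threshold are from the CIE definition, not quoted from this text).

```python
def lightness(y, yn=100.0):
    """Lightness index M per the CIE 1976 form assumed for equations (5)
    and (6): M = 116*(y/yn)**(1/3) - 16 for y/yn above ~0.008856, with the
    standard linear branch below that threshold."""
    r = y / yn
    if r > 0.008856:
        return 116.0 * r ** (1.0 / 3.0) - 16.0
    return 903.3 * r
```

The cube-root compression is why a luminance error at halftones reads as a large lightness (perceived brightness) error, motivating the lightness-based frame division discussed next.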
  • FIG. 9 is a graph showing the luminance graph shown in FIG. 5 converted to lightness.
  • This graph shows the "lightness that should be output (scheduled lightness; the value corresponding to the signal gradation, equivalent to the above lightness M)" on the horizontal axis, and the "lightness actually output (actual lightness)" on the vertical axis.
  • The two lightness values mentioned above are equal on the front of the liquid crystal panel (viewing angle 0°).
  • Therefore, it is preferable to set the frame division ratio according to lightness, not luminance, in order to further suppress the white-floating phenomenon in accordance with human visual sense.
  • The deviation of the actual lightness from the scheduled lightness is largest at half of the maximum value of the scheduled lightness, as in the case of luminance. Therefore, rather than dividing the frame so that luminance up to half of the maximum is displayed in one subframe, the frame is divided so that lightness up to half of the maximum is displayed in one subframe, which makes it possible to improve the misalignment (that is, white floating) perceived by humans.
  • γ in this equation is about 2.5. As a result, the subframe that is used for display when the luminance is low (the subframe that is maintained at the maximum luminance when the luminance is high) is set to a shorter period.
  • When performing low-luminance display in which luminance up to 1/4 of the maximum (threshold luminance; Tmax/4) is output in one frame, the control circuit 44 sets the preceding subframe to the minimum luminance (black) and adjusts the display luminance only in the subsequent subframe to express gradation (uses only the subsequent subframe to express gradation).
  • In this case, the integrated luminance in one frame is "(minimum luminance + luminance of the subsequent subframe)/4".
  • When performing high-luminance display, the control circuit 44 sets the subsequent subframe to the maximum luminance (white) and adjusts the luminance of the preceding subframe to express gradation. In this case, the integrated luminance in one frame is "(luminance of the preceding subframe + maximum luminance)/4".
  • the signal gradation (and the output operation described later) is set so as to satisfy the above conditions (a) and (b).
  • The control circuit 44 preliminarily calculates the frame gradation corresponding to the above-described threshold luminance (Tmax/4) using the above-described equation (1).
  • the control circuit 44 obtains the frame gradation L based on the video signal output from the frame memory 41 when displaying an image.
  • control circuit 44 sets the luminance gradation (F) of the previous stage display signal to the minimum (0) using the previous stage LUT 42.
  • control circuit 44 determines the luminance gradation (R) of the subsequent display signal based on the equation (1).
  • the control circuit 44 sets the luminance gradation R of the subsequent display signal to the maximum (255).
  • control circuit 44 determines the luminance gradation F of the previous subframe based on the equation (1).
  • In the description so far, the preceding display signal and the subsequent display signal are each written to the sub-pixel SPIX for an equal time (1/2 frame period).
  • The division ratio can be changed by changing the write start timing of the subsequent display signal (the ON timing of the scanning signal lines GL... related to the subsequent display signal).
  • (a) in FIG. 10 shows the video signal input to the frame memory 41, (b) in FIG. 10 shows the video signal output from the frame memory 41 to the preceding LUT 42 in the case of 3:1 division, and (c) in FIG. 10 is an explanatory diagram showing the video signal output to the subsequent LUT 43.
  • FIG. 11 is an explanatory diagram showing the ON timing of the scanning signal lines GL... for the preceding display signal and the subsequent display signal when the frame is divided 3:1.
  • control circuit 44 writes the preceding display signal of the first frame to the sub-pixels SPIX of each scanning signal line GL ... with a normal clock.
  • the time integral value (integral sum) of the display luminance in these two subframes becomes the integral luminance in one frame.
  • The data stored in the frame memory 41 is output to the data signal line driving circuit 3 in accordance with the ON timing of the scanning signal lines GL....
  • FIG. 12 is a graph showing the relationship between the scheduled lightness and the actual lightness when the frame is divided 3:1. In this configuration, the frame can be divided at the point where the deviation between the scheduled lightness and the actual lightness is largest; therefore, compared with the result shown in FIG. 9, the difference between the scheduled lightness and the actual lightness at a viewing angle of 60 degrees is very small.
  • In low-luminance display, the preceding subframe is displayed in black within a range in which the integrated luminance in one frame is not changed, and display is performed using only the subsequent subframe.
  • the total deviation in both subframes can be reduced to about half as shown by the broken line in FIG.
  • the display is performed by adjusting the luminance of only the previous subframe, with the subsequent subframe being displayed in white within a range in which the integrated luminance in one frame is not changed.
  • As described above, the image display device 1 according to this configuration example can reduce the lightness shift to about half compared with the configuration in which the normal hold display is performed.
  • From the display start time, display may be performed with a double clock by using a dummy subsequent display signal; specifically, the preceding display signal and a subsequent display signal of signal gradation 0 may be output alternately.
  • When luminance up to 1/(n+1) of the maximum luminance (threshold luminance; Tmax/(n+1)) is output in one frame (in the case of low luminance), the control circuit 44 sets the preceding subframe to the minimum luminance (black) and adjusts the display luminance only in the subsequent subframe to express gradation (uses only the subsequent subframe to express gradation).
  • the integrated luminance in one frame is “(minimum luminance + luminance of subsequent subframe) / (n + 1)”.
  • In the case of high luminance, the control circuit 44 sets the subsequent subframe to the maximum luminance (white) and adjusts the luminance of the preceding subframe to express gradation.
  • the integrated luminance in one frame is “(luminance of the previous subframe + maximum luminance) / (n + 1)”.
  • the signal gradation (and the output operation described later) is set so as to satisfy the above conditions (a) and (b).
  • The control circuit 44 preliminarily calculates, using the above equation (1), the frame gradation corresponding to the above threshold luminance (Tmax/(n+1)).
  • the control circuit 44 obtains the frame gradation L based on the video signal output from the frame memory 41 when displaying an image.
  • control circuit 44 sets the luminance gradation (F) of the previous stage display signal to the minimum (0) using the previous stage LUT 42.
  • When the frame gradation L is equal to or lower than the threshold gradation, the control circuit 44 determines the luminance gradation (R) of the subsequent display signal based on the equation (1) as R = (n + 1)^(1/γ) × L ... (11), and sets it by using the subsequent LUT 43.
  • the control circuit 44 sets the luminance gradation R of the subsequent display signal to the maximum (255).
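The generalized n:1 setting can be sketched as follows. The low-range branch follows the reconstructed equation (11); the high-range branch is our derivation by inverting equation (1) under the n:1 time weighting (an assumption consistent with the text, not a quoted formula).

```python
GAMMA = 2.2
L_MAX = 255

def split_n_1(L, n, gamma=GAMMA):
    """(F, R) gradations for an n:1 (preceding:subsequent) frame division
    with threshold luminance Tmax/(n+1).

    Low range: black front subframe and R = (n+1)**(1/gamma) * L (eq. (11)).
    High range: white rear subframe, with F chosen so that the time-weighted
    integrated luminance (n*T_F + Tmax)/(n+1) equals the target of eq. (1).
    """
    Lt = (1.0 / (n + 1)) ** (1.0 / gamma) * L_MAX   # threshold gradation
    if L <= Lt:
        return 0.0, (n + 1) ** (1.0 / gamma) * L
    F = L_MAX * (((n + 1) * (L / L_MAX) ** gamma - 1.0) / n) ** (1.0 / gamma)
    return F, float(L_MAX)
```

Setting n = 1 recovers the equal-split case described earlier, while n = 3 corresponds to the 3:1 division of FIG. 11.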
  • As for the display signal output operation, as in the case where the frame is divided 3:1, it is sufficient to design the circuit so that after the n/(n+1) frame period of the first frame, the preceding display signal and the subsequent display signal are output alternately with a double clock.
  • Even when n is 2 or more, it is preferable to alternately output the preceding display signal and the subsequent display signal as described above, because the ratio of the preceding subframe to the subsequent subframe can then be set to n:1 while the required clock frequency is merely doubled.
  • Incidentally, the liquid crystal panel is preferably driven by alternating current, because AC drive can change the charge polarity of the sub-pixel SPIX (the direction of the voltage between the pixel electrodes sandwiching the liquid crystal (interelectrode voltage)) for each frame.
  • One method is to apply a voltage of the same polarity for one frame.
  • The other method is to invert the interelectrode voltage between the two subframes in one frame, and to drive the subsequent subframe and the preceding subframe of the next frame with the same polarity.
  • Figure 13 (a) shows the relationship between the voltage polarity (polarity of the interelectrode voltage) and the frame period when the former method is used.
  • Figure 13 (b) shows the relationship between voltage polarity and frame period when the latter method is used.
  • Either of the two methods described above may be used, as long as flicker and burn-in are prevented.
  • However, a configuration in which the polarity is the same for one frame is more preferable. More specifically, dividing into subframes reduces the charging time of the TFTs, so even if the charging time is within the design range, it is undeniable that the margin for charging decreases compared with a configuration that does not divide into subframes. Therefore, in mass production, there is a risk of luminance variations due to insufficient charging caused by variations in panel and TFT performance.
  • In the former method, the latter-half subframe that is the main display of luminance corresponds to the second writing of the same polarity, so the voltage change in that subframe can be reduced. As a result, the required charge amount can be reduced, and display defects due to insufficient charging can be prevented.
  • one liquid crystal state corresponds to a certain luminance gradation in the TFT liquid crystal panel. Therefore, the response characteristics of the liquid crystal do not depend on the luminance gradation of the display signal.
  • the interelectrode voltage changes as shown by the solid line X in Fig. 14 (b) according to the response speed (response characteristics) of the liquid crystal.
  • In this case, the display luminance of the preceding subframe is not minimized and the display luminance of the subsequent subframe is not maximized. As a result, the relationship between the scheduled lightness and the actual lightness is as shown in FIG. In other words, even when subframe display is performed, it is not possible to perform display with the luminances (minimum luminance/maximum luminance) at which the difference (shift) between the scheduled luminance and the actual luminance at a large viewing angle is small.
  • Therefore, the response speed of the liquid crystal in the liquid crystal panel is preferably designed to satisfy the following (c) and (d).
  • the control circuit 44 is preferably designed so that the response speed of the liquid crystal can be monitored.
  • When the response speed is insufficient, the control circuit 44 interrupts the subframe display and is set to drive by the normal hold display.
  • The same display can be obtained even if the order of the subframes is exchanged (even if the subsequent subframe is displayed in black in the case of low luminance and gradation is expressed using only the preceding subframe).
  • An actual panel has some luminance even in the case of black display (gradation 0), and the response speed of the liquid crystal is finite; therefore, it is preferable to take these factors into account when setting the signal gradation.
  • an actual image is displayed on the liquid crystal panel, the relationship between the signal gradation and the display brightness is measured, and an LUT (output table) that satisfies Equation (1) is determined based on the actual measurement result. preferable.
  • a shown in Expression (6a) is assumed to be in the range of 2.2 to 3. This range is not strictly derived, but is a range that is considered to be almost appropriate for human visual sense.
  • The input signal gradation is converted into the luminance gradation of the display signal. Such a data signal line driving circuit 3 outputs, in each subframe, the voltage signal used in the normal hold display as it is according to the input signal gradation, even when performing the subframe display.
  • the data signal line driving circuit 3 is preferably designed to output a voltage signal converted into divided luminances.
  • the data signal line driving circuit 3 is set so as to finely adjust the voltage (interelectrode voltage) applied to the liquid crystal according to the signal gradation.
  • In the above description, the liquid crystal panel is a VA panel.
  • the present invention is not limited to this, and even if a liquid crystal panel of a mode other than the VA mode is used, the white-out phenomenon can be suppressed by the sub-frame display of the image display device 1 according to this configuration example.
  • In other words, it is possible to suppress the white-floating phenomenon for a liquid crystal panel of a mode in which the scheduled luminance (scheduled lightness) and the actual luminance (actual lightness) shift when the viewing angle is increased (a mode in which the viewing angle characteristics of the gradation gamma change).
  • the sub-frame display of the image display device 1 according to the present configuration example is effective for a liquid crystal panel having such a characteristic that the display luminance increases when the viewing angle is increased.
  • The liquid crystal panel in the image display device 1 according to this configuration example may be NB (Normally Black) or NW (Normally White).
  • Further, instead of the liquid crystal panel, another display panel (for example, an organic EL panel or a plasma display panel) may be used.
  • However, the present invention is not limited to this; the image display device 1 according to the present configuration example may be designed to divide the frame in the ratio 1:n or n:1 (n is a natural number of 1 or more).
  • the signal gradation of the display signal (the front display signal and the rear display signal) is set using the above-described equation (10).
  • the threshold luminance gradation Lt is a frame gradation of this luminance.
  • In an actual panel, Lt may be a little more complicated, and the threshold luminance Tt may not be expressed by a simple equation; therefore, it may be difficult to express Lt in terms of Lmax.
  • Lt obtained using Equation (10) is an ideal value, and is preferably used as a guideline.
  • the above description is a model of display luminance in the present embodiment.
  • In the above, expressions such as "Tmax/2", "maximum luminance", and "minimum luminance" are used, but in practice there may be some variation due to, for example, a special gamma preferred by the user. That is, when the display luminance is equal to or less than a certain threshold luminance, if the luminance of one subframe is sufficiently darker than the luminance of the other subframe, the effect of improving the moving image display and the viewing angle in this embodiment is exhibited.
  • FIG. 16 (a) is a graph showing the luminance displayed by the preceding subframe and the subsequent subframe when the display luminance is 3/4 and 1/4 of that corresponding to Lmax.
  • the voltage value applied to the liquid crystal (voltage value applied between pixel electrodes; absolute value) differs between subframes.
  • In the image display device 1, it is preferable to invert the polarity of the liquid crystal voltage at the frame period.
  • One method is to apply a voltage of the same polarity for one frame.
  • the other method is to reverse the liquid crystal voltage between two subframes in one frame, and to make the subsequent subframe and the previous subframe of the next frame have the same polarity. It is.
  • FIG. 17 (a) is a graph showing the relationship between the voltage polarity (polarity of the liquid crystal voltage), the frame period, and the liquid crystal voltage when the former method is used.
  • Fig. 17 (b) is a similar graph when the latter method is used.
  • FIGS. 18A to 18D are explanatory diagrams showing the polarities of the four subpixels SPIX and the liquid crystal voltage of each subpixel SPIX in the liquid crystal panel.
  • the polarity of the liquid crystal voltage of each sub-pixel SPIX changes as shown in the order of FIG. 18A to FIG. 18D for each frame period.
  • the sum of the liquid crystal voltages applied to all the sub-pixels SPIX of the liquid crystal panel is preferably 0 V.
  • Such control can be realized, for example, by changing the voltage polarity between adjacent sub-pixels SPIX as shown in FIGS. 18 (a) to 18 (d).
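The adjacent-sub-pixel polarity control can be sketched as a checkerboard (dot-inversion) pattern; the exact pattern of FIGS. 18 (a) to 18 (d) is assumed here, not quoted.

```python
def polarity(i, j, frame):
    """Polarity (+1 or -1) of sub-pixel SPIX(i, j) in a given frame under a
    checkerboard (dot-inversion) pattern: adjacent sub-pixels carry opposite
    polarities, and every sub-pixel inverts its polarity each frame period."""
    return 1 if (i + j + frame) % 2 == 0 else -1
```

With this pattern, the sum of liquid crystal voltages over any even-sized region is zero in every frame, which is the "total 0 V" condition the passage describes.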
  • 3: 1 to 7: 1 is given as a preferable ratio (frame division ratio) between the previous subframe period and the subsequent subframe period, but the present invention is not limited to this.
  • the split ratio may be set to 1: 1 or 2: 1.
  • the liquid crystal panel it takes time according to the response speed of the liquid crystal before the liquid crystal voltage (voltage applied to the liquid crystal; voltage between electrodes) is set to a value corresponding to the display signal. Therefore, if any of the subframe periods is too short, the voltage of the liquid crystal may not be raised to a value corresponding to the display signal within this period.
  • Further, the division ratio is not limited to an integer ratio; it may be n:1 (n is a real number of 1 or more (more preferably, a real number greater than 1)). For example, by setting this division ratio to 1.5:1, the viewing angle characteristics can be improved compared with the case of 1:1. In addition, it is easier to use a liquid crystal material with a slow response speed than in the case of 2:1.
  • In this case as well, when displaying low-luminance images up to the threshold luminance (Tmax/(n+1)), it is preferable to display the preceding subframe in black and to use only the subsequent subframe for display.
  • Note that n:1 and 1:n are the same in terms of the viewing angle improvement effect. When n is a real number of 1 or more, it is effective to control the luminance gradation using the above equations (10) to (12).
  • the sub-frame display of the image display device 1 is a display performed by dividing the frame into two sub-frames.
  • the present invention is not limited to this, and the image display device 1 may be designed to perform subframe display in which a frame is divided into three or more subframes.
  • FIG. 19 is a graph showing the result (broken line and solid line) of displaying the frame divided into three equal subframes by the image display device 1 according to the present configuration example, together with the result of the normal hold display (similar to that shown in FIG. 5). As shown in this graph, when the number of subframes is increased to three, the actual luminance can be brought very close to the scheduled luminance. Therefore, it is clear that the viewing angle characteristics of the image display device 1 according to this configuration example can be made even better.
  • The position of the subframe for adjusting the luminance is desirably set so that the temporal center-of-gravity position of the luminance of the sub-pixel in the frame period is close to the temporal center position of the frame period.
  • FIG. 20 is a graph showing the transition of the liquid crystal voltage when the frame is divided into three and the voltage polarity is inverted for each frame. In this case, the total liquid crystal voltage in two frames can be 0 V.
  • FIG. 21 is a graph showing the transition of the liquid crystal voltage when the frame is similarly divided into three and the voltage polarity is inverted for each subframe. In this case as well, the total liquid crystal voltage in two frames can be set to 0 V.
  • When the frame is divided into s subframes, it is preferable that the S-th (S: 1 to s) subframes of adjacent frames are applied with liquid crystal voltages of different polarities. This allows the total liquid crystal voltage in two frames to be 0 V.
  • Even when the polarity cannot be inverted in this manner, it is preferable to invert the polarity of the liquid crystal voltage so that the total liquid crystal voltage in two frames (or a larger number of frames) is 0 V.
  • Here, s is an integer greater than or equal to 2.
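The per-subframe polarity sequence under frame-by-frame inversion can be sketched as follows (an illustrative sketch of the FIG. 20 style of inversion; the subframe count and frame span are parameters, not values fixed by the text).

```python
def subframe_polarities(s, frames=2):
    """Polarity sequence when each frame is divided into s subframes and the
    polarity is inverted frame by frame, so that the S-th subframes of
    adjacent frames carry opposite polarities."""
    return [1 if f % 2 == 0 else -1 for f in range(frames) for _ in range(s)]
```

Summing the sequence over two frames gives zero, which is the condition that the total liquid crystal voltage in two frames be 0 V.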
  • In the above, the case has been described where gradation is expressed by adjusting the luminance of one subframe while the other subframes are displayed in white (maximum luminance) or black (minimum luminance).
  • viewing angle characteristics can be improved by displaying at least one subframe in white (maximum luminance) or black (minimum luminance).
  • The luminance of the subframe whose luminance is not adjusted may be set to "the maximum, or a value greater than the second predetermined value" instead of the maximum luminance, and to "the minimum, or a value smaller than the first predetermined value" instead of the minimum luminance.
  • the brightness deviation can be made sufficiently small. Therefore, the viewing angle characteristics of the image display device 1 according to this configuration example can be improved.
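The division into one adjusted subframe and one saturated (white or black) subframe can be sketched as follows; this is an illustrative model assuming linear luminance in [0, 1] and two subframes of equal duration, not a formula from the patent itself.

```python
def split_luminance(L):
    """Split a frame-average target luminance L (0..1) across two equal
    subframes so that at most one subframe takes an intermediate level;
    the other is saturated at black (0.0) or white (1.0)."""
    if not 0.0 <= L <= 1.0:
        raise ValueError("L must be in [0, 1]")
    if L <= 0.5:
        sf1, sf2 = 2 * L, 0.0        # dark side: second subframe black
    else:
        sf1, sf2 = 1.0, 2 * L - 1    # bright side: first subframe white
    assert abs((sf1 + sf2) / 2 - L) < 1e-9   # time average preserved
    return sf1, sf2
```

Because one of the two subframes always sits at a saturated level, the intermediate (viewing-angle-sensitive) gradations are confined to a single subframe, which is the mechanism the text credits for the improved viewing angle.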
  • FIG. 22 is a graph showing the relationship (viewing angle gradation characteristic, actually measured) between the signal gradation (%: the luminance gradation of the display signal) output to the panel 11 and the actual luminance gradation (%) corresponding to each signal gradation in the sub-frame whose luminance is not adjusted.
  • the actual luminance gradation means "the luminance (actual luminance) output from the liquid crystal panel of the panel 11 in accordance with each signal gradation, converted into a luminance gradation using the above equation (1)".
  • the display quality of the image display device 1 according to this configuration example can be kept sufficiently high (the above-mentioned brightness deviation can be made sufficiently small).
  • the range of signal gradations for which the deviation does not exceed 10% of the maximum value is 80 to 100% and 0 to 0.02% of the maximum signal gradation, and this range does not change even when the viewing angle changes. [0249] Therefore, it is preferable to set the second predetermined value described above to 80% of the maximum luminance, and to set the first predetermined value to 0.02% of the maximum luminance.
  • the viewing angle characteristics of the liquid crystal panel can be improved by making a slight difference in the display state of each subframe.
  • the signal processing circuit 21a includes a modulation processing unit 31a and a subframe processing unit 32a that perform substantially the same operations as the modulation processing unit 31 and the subframe processing unit 32 shown in FIG. The subframe processing unit 32a, however, is provided in the stage preceding the modulation processing unit 31a; instead of the corrected video data Do(i,j,k), it performs frame division and γ correction processing on each uncorrected video data D(i,j,k), and outputs the video data S1(i,j,k) and S2(i,j,k) for the subframes SFR1(k) and SFR2(k) corresponding to the video data D(i,j,k).
  • the modulation processing unit 31a corrects, instead of the uncorrected video data D(i,j,k), each of the video data S1(i,j,k) and S2(i,j,k) so as to emphasize the gradation transition, and outputs the corrected video data as the video data S1o(i,j,k) and S2o(i,j,k) constituting the video signal DAT2. Note that the video data S1o(i,j,k) and S2o(i,j,k) are also transmitted time-divisionally, similarly to the video data So1(i,j,k) and So2(i,j,k).
  • correction processing and prediction processing by the modulation processing unit 31a are also performed in units of subframes.
  • the modulation processing unit 31a corrects the video data So(i,j,x) of the current subframe SFR(x) based on the predicted value E(i,j,x−1) of the previous subframe SFR(x−1) read from a frame memory (not shown) and the video data So(i,j,x) for the sub-pixel SPIX(i,j) in the current subframe SFR(x).
  • based on the predicted value E(i,j,x−1) and the video data So(i,j,x), the modulation processing unit 31a predicts a value indicating the gradation corresponding to the luminance that the sub-pixel SPIX(i,j) is predicted to reach at the start of the subframe SFR(x+1), and stores the predicted value E(i,j,x) in the frame memory.
  • the modulation processing unit 31b includes members 51a to 53a for generating the video data S1o(i,j,k) and members 51b to 53b for generating the video data S2o(i,j,k).
  • These members 51a to 53a and 51b to 53b are configured in substantially the same manner as the members 51 to 53 shown in FIG. 8. However, each of the members 51a to 53b is configured to operate at twice the speed of FIG. 8, and the values stored in the LUTs (not shown in FIG. 24) provided for each of them also differ from the case of FIG. 8.
  • the correction processing unit 52a and the prediction processing unit 53a receive, instead of each video data D(i,j,k) of the current frame FR(k), each video data S1(i,j,k) from the subframe processing unit 32a, and the correction processing unit 52a outputs the corrected video data as the video data S1o(i,j,k). Similarly, the correction processing unit 52b and the prediction processing unit 53b receive, instead of each video data D(i,j,k) of the current frame FR(k), each video data S2(i,j,k) from the subframe processing unit 32a, and the correction processing unit 52b outputs the corrected video data as the video data S2o(i,j,k).
  • the prediction processing unit 53a outputs the predicted value E1(i,j,k) not to the frame memory 51a referred to by the correction processing unit 52a but to the frame memory 51b referred to by the correction processing unit 52b, and the prediction processing unit 53b outputs the predicted value E2(i,j,k) to the frame memory 51a.
  • the predicted value E1(i,j,k) is a value indicating the gradation corresponding to the luminance that the sub-pixel SPIX(i,j) is predicted to reach at the start of the next subframe SFR2(k) when driven by the video data S1o(i,j,k) output from the correction processing unit 52a; the prediction processing unit 53a predicts the predicted value E1(i,j,k) based on the video data S1(i,j,k) of the current frame FR(k) and the predicted value E2(i,j,k−1) of the previous frame FR(k−1) read from the frame memory 51a. Similarly, the predicted value E2(i,j,k) is a value obtained when the sub-pixel SPIX(i,j) is driven by the video data S2o(i,j,k) output from the correction processing unit 52b.
  • it indicates the gradation corresponding to the luminance that the sub-pixel SPIX(i,j) is predicted to reach at the start of the next subframe SFR1(k+1); the prediction processing unit 53b predicts the predicted value E2(i,j,k) based on the video data S2(i,j,k) in the current frame FR(k) and the predicted value E1(i,j,k) read from the frame memory 51b.
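The two-stage correct/predict chain described above can be sketched as follows. The functions `correct` and `predict` stand in for the LUT-based units 52a/52b and 53a/53b; the simple first-order overdrive and response model used here is an illustrative assumption, not the patent's actual LUT contents.

```python
def correct(target, start):
    """Gradation-transition emphasis (overdrive-style), as a toy model."""
    return target + 0.5 * (target - start)

def predict(target, start, reach=0.8):
    """Luminance assumed to be reached by the end of a subframe."""
    return start + reach * (target - start)

def process_frame(s1, s2, e2_prev):
    """One frame of the pipeline: E2(k-1) feeds S1o/E1, E1 feeds S2o/E2."""
    s1o = correct(s1, e2_prev)   # unit 52a, reads frame memory 51a
    e1 = predict(s1, e2_prev)    # unit 53a, writes frame memory 51b
    s2o = correct(s2, e1)        # unit 52b, reads frame memory 51b
    e2 = predict(s2, e1)         # unit 53b, stored for the next frame
    return s1o, s2o, e2
```

A steady input leaves the pipeline unchanged, e.g. `process_frame(1.0, 1.0, 1.0)` returns `(1.0, 1.0, 1.0)`, while a step input produces emphasized drive values, which is the intended behavior of subframe-unit gradation transition emphasis.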
  • the control circuit 44 outputs, during the first reading, the video data S1(1,1,k) to S1(n,m,k) for the subframe SFR1(k) with reference to the LUT 42.
  • during the second reading, the video data S2(1,1,k) to S2(n,m,k) for the subframe SFR2(k) is output with reference to the LUT 43 (period t12 to t13). Note that the time difference between the time t1 at which the signal processing circuit 21a receives the first video data D(1,1,k) and the time t11 at which it outputs the video data S1(1,1,k) for the subframe SFR1(k) corresponding to the video data D(1,1,k) can be increased or decreased by providing a buffer memory; the figure shows the case where the time difference is half a frame (one subframe).
  • the frame memory 51a of the modulation processing unit 31b stores the predicted values E2(1,1,k−1) to E2(n,m,k−1) updated with reference to the video data S2(1,1,k−1) to S2(n,m,k−1) for the subframe SFR2(k−1) of the previous frame FR(k−1); the correction processing unit 52a corrects the video data S1(1,1,k) to S1(n,m,k) output from the control circuit 44 with reference to these predicted values E2(1,1,k−1) to E2(n,m,k−1), and outputs the corrected video data as S1o(1,1,k) to S1o(n,m,k).
  • the prediction processing unit 53a generates the predicted values E1(1,1,k) to E1(n,m,k) based on the video data S1(1,1,k) to S1(n,m,k) and the predicted values E2(1,1,k−1) to E2(n,m,k−1), and stores them in the frame memory 51b.
  • the correction processing unit 52b refers to the predicted values E1(1,1,k) to E1(n,m,k) and corrects the video data S2(1,1,k) to S2(n,m,k) output from the control circuit 44.
  • the prediction processing unit 53b generates the predicted values E2(1,1,k) to E2(n,m,k) based on the video data S2(1,1,k) to S2(n,m,k) and the predicted values E1(1,1,k) to E1(n,m,k), and stores them in the frame memory 51a.
  • the signal processing circuit 21a performs the correction processing (gradation transition emphasis processing) and the prediction processing in units of subframes. Therefore, compared with the configuration of the first embodiment, in which these processes are performed in units of frames, more accurate prediction processing is possible, and the gradation transition can be emphasized more accurately. As a result, it is possible to improve the image quality at the time of moving image display while further suppressing the deterioration in image quality caused by inappropriate gradation transition emphasis.
  • the members constituting the signal processing circuit 21a according to the present embodiment are often integrated in one integrated circuit chip in order to increase the speed.
  • since the frame memories 41, 51a, and 51b require a storage capacity considerably larger than what can easily be integrated, they are often externally attached to the integrated circuit chip. In this case, the data transmission path between the integrated circuit chip and the frame memories 41, 51a, and 51b is external, so it is difficult to increase the transmission speed compared with transmission within the integrated circuit chip. If the number of signal lines is increased in order to raise the transmission speed, the number of pins on the integrated circuit chip increases and the dimensions of the integrated circuit chip grow significantly. Further, since the modulation processing unit 31b shown in FIG. 24 is driven at double speed, the frame memories 41, 51a, and 51b must operate at high speed and require a large capacity.
  • each video data D(1,1,k) to D(n,m,k) is written into the frame memory 41 once per frame, and the frame memory 41 outputs each video data D(1,1,k) to D(n,m,k) twice per frame. Therefore, if a signal line for transmitting data is shared between reading and writing, as in a general memory, the frame memory 41 is required to respond to access at three times or more the frequency f at which each video data D(i,j,k) is transmitted in the video signal DAT.
  • the access speed required at the time of reading/writing is expressed as a ratio, with the access speed required for reading (or writing) at the above frequency f taken as 1; the ratio is shown after the letter (r/w) indicating read/write, as in "r:2" for twice the reference read speed.
  • each of the predicted values E2(1,1,k) to E2(n,m,k) and each of the predicted values E1(1,1,k) to E1(n,m,k) is read and written once per frame; however, in the configuration of FIG. 24, as shown in FIG. 25, the period for reading from the frame memory 51a (for example, t11 to t12) and the period for reading from the frame memory 51b (for example, t12 to t13) are provided separately, and each period is half a frame. Similarly, the period for writing to each of the frame memories 51a and 51b is also half a frame. Therefore, both frame memories 51a and 51b require an access speed of four times the frequency f.
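The access-speed counts above reduce to simple arithmetic, sketched here with f normalized to 1 for illustration:

```python
# f: frequency at which each video data D(i,j,k) arrives in the video
# signal DAT (normalized to 1 for this illustration).
f = 1.0

# Frame memory 41: written once and read twice per frame on a shared bus.
fm41 = (1 + 2) * f

# Frame memories 51a/51b (FIG. 24): one read and one write per frame,
# but each confined to a half-frame window, doubling the burst rate.
fm51 = (1 + 1) * f * 2

assert fm41 == 3.0   # "three times or more the frequency f"
assert fm51 == 4.0   # "four times the frequency f"
```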
  • when the modulation processing unit 31b shown in FIG. 24 is used, the access speed required of each of the frame memories 41, 51a, and 51b increases, and the manufacturing cost of the signal processing circuit 21a rises; if the number of signal lines is increased instead, the size of the integrated circuit chip and the number of pins may increase. On the other hand, in the signal processing circuit 21c according to another configuration example of the present embodiment, as shown in FIG. 27, the subframe processing unit 32c outputs, twice per frame, the video data S1(1,1,k) to S1(n,m,k) and the video data S2(1,1,k) to S2(n,m,k).
  • whereas the control circuit 44 of the subframe processing unit 32a shown in FIG. 23 reads the video data from the frame memory 41 while outputting the video data S1(1,1,k) to S1(n,m,k), the control circuit 44c of the subframe processing unit 32c according to this configuration example generates both of the video data S1(i,j,k) and S2(i,j,k) based on the same value, that is, the video data D(i,j,k). Specifically, every time the control circuit 44c reads one video data D(i,j,k) from the frame memory 41, it generates both video data S1(i,j,k) and S2(i,j,k) from that video data D(i,j,k), so that the amount of data transmission between the frame memory 41 and the control circuit 44c can be prevented from increasing.
  • although the amount of data transmission between the subframe processing unit 32c and the modulation processing unit 31c is greater than in the configuration of FIG. 24, this data transmission is performed within the integrated circuit chip and can therefore be carried out without problem.
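The read-once, generate-both behavior of the control circuit 44c can be sketched as a generator; the list-based LUTs here are hypothetical stand-ins for LUT 42 and LUT 43, indexed by gradation value.

```python
def control_circuit_44c(frame_memory_reader, lut1, lut2):
    """Sketch of control circuit 44c: each video data D read once from
    the frame memory 41 yields both subframe values S1 and S2, so the
    memory-to-circuit read traffic does not double."""
    for d in frame_memory_reader:
        yield lut1[d], lut2[d]   # LUT 42 -> S1, LUT 43 -> S2
```

Compared with reading the frame memory twice per frame (once per subframe LUT, as control circuit 44 does), the extra traffic here appears only on the on-chip path toward the modulation processing unit.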
  • instead of the frame memories 51a and 51b, each of which stores the predicted values E1 and E2 for one subframe, the modulation processing unit 31c is provided with a frame memory (predicted value storage means) 54 that stores only the predicted value E2, for two subframes, and can output the predicted values E2(1,1,k−1) to E2(n,m,k−1) twice per frame.
  • the modulation processing unit 31c according to the present configuration example is provided with members 52c, 52d, 53c, and 53d that are substantially the same as the members 52a, 52b, 53a, and 53b of FIG. 24. In this configuration example, the members 52c, 52d, 53c, and 53d correspond to the correcting means described in the claims.
  • the predicted values E2(1,1,k−1) to E2(n,m,k−1) are supplied to the correction processing unit 52c and the prediction processing unit 53c not from a dedicated frame memory but from the frame memory 54.
  • the predicted values E1(1,1,k) to E1(n,m,k) are given to the correction processing unit 52d and the prediction processing unit 53d not from a frame memory but directly from the prediction processing unit 53c.
  • the predicted values E2(1,1,k−1) to E2(n,m,k−1) and the video data S1(1,1,k) to S1(n,m,k) are output twice per frame, and based on these, the prediction processing unit 53c, as shown in FIG. 27, generates and outputs the predicted values E1(1,1,k) to E1(n,m,k) twice per frame.
  • although the number of predicted values E1 output per frame differs, the prediction processing itself and the circuit configuration of the prediction processing unit 53c are the same as those of the prediction processing unit 53a shown in FIG. 24.
  • although the predicted values E2(1,1,k−1) to E2(n,m,k−1) and the video data S1(1,1,k) to S1(n,m,k) are output twice per frame, the correction processing unit 52c generates and outputs the corrected video data S1o(1,1,k) to S1o(n,m,k) based on the first of these (period t21 to t22). Further, the correction processing unit 52d generates and outputs the corrected video data S2o(1,1,k) to S2o(n,m,k) based on the first of the predicted values E1(1,1,k) to E1(n,m,k) and the video data S2(1,1,k) to S2(n,m,k), which are output twice per frame.
  • since the video data S2(1,1,k) to S2(n,m,k) and the predicted values E1(1,1,k) to E1(n,m,k) are output twice per frame, the predicted values E2(1,1,k) to E2(n,m,k) could be generated twice per frame.
  • the prediction processing unit 53d, however, thins out half of the processing for generating and outputting the predicted values E2(1,1,k) to E2(n,m,k), and generates and outputs the predicted values E2(1,1,k) to E2(n,m,k) once per frame.
  • although the timing for generating and outputting the predicted value E2 in each frame differs, the prediction processing itself is the same as in the prediction processing unit 53b shown in FIG. 24; the circuit configuration is also the same as that of the prediction processing unit 53b, except that a circuit for determining the thinning timing and thinning out the generation and output processing is added.
  • here, the case where the time ratio of the two subframes SFR1 and SFR2 is 1:1 and the prediction processing unit 53d thins out every other one of the above generation and output processes will be described. Specifically, during the period in which the video data S2(i,j,k) and the predicted values E1(i,j,k) are output for the first time (period t21 to t22), the prediction processing unit 53d generates the predicted values E2(i,j,k) based on the odd-numbered (or even-numbered) video data S2(i,j,k) and predicted values E1(i,j,k); during the period in which they are output for the second time (period t22 to t23), the prediction processing unit 53d generates the predicted values E2(i,j,k) based on the remaining ones.
  • as a result, the prediction processing unit 53d can output all the predicted values E2(1,1,k) to E2(n,m,k) once per frame, and the time interval for outputting each predicted value E2(i,j,k) is twice as long as in the configuration of FIG. 24.
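The odd/even thinning schedule can be sketched as follows; the split into "odd pixels in pass 1, even pixels in pass 2" is one concrete choice consistent with the description, not the only possible one.

```python
def thinned_e2_updates(n_pixels):
    """Sketch of the thinning by prediction unit 53d: of the two E2
    passes per frame, pass 1 handles the odd-numbered pixels and pass 2
    the even-numbered ones, so each E2 value is still produced exactly
    once per frame."""
    pass1 = [i for i in range(n_pixels) if i % 2 == 1]   # period t21-t22
    pass2 = [i for i in range(n_pixels) if i % 2 == 0]   # period t22-t23
    # Every pixel is covered exactly once across the two passes.
    assert sorted(pass1 + pass2) == list(range(n_pixels))
    return pass1, pass2
```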
  • the access speed required of the frame memory 54 can thus be reduced to 3/4 of that in the configuration of FIG. 24.
  • the dot clock of each video data D(i,j,k) is about 65 [MHz], so the frame memories 51a and 51b in FIG. 24 need to respond to access at approximately 260 [MHz], whereas the frame memory 54 according to this configuration example, like the frame memory 41, only needs to respond to access at three times the dot clock, that is, about 195 [MHz].
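The cited figures follow directly from the multipliers established earlier:

```python
# Access-speed comparison for the ~65 MHz dot clock cited in the text.
dot_clock = 65  # MHz

fig24 = 4 * dot_clock   # frame memories 51a/51b (FIG. 24): ~260 MHz
fig26 = 3 * dot_clock   # frame memory 54 (like frame memory 41): ~195 MHz

assert fig24 == 260 and fig26 == 195
assert fig26 / fig24 == 0.75   # the 3/4 reduction noted above
```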
  • furthermore, in this configuration example, rather than requiring the entire storage area (for two subframes) of the frame memory 54 to respond at the above access speed, the frame memory 54 is configured by two frame memories 54a and 54b, and the access speed required of one of them is even lower.
  • the frame memory 54 includes two frame memories 54a and 54b, each capable of storing the predicted value E2 for one subframe.
  • the frame memory 54a is a frame memory into which each predicted value E2(i,j,k) is written by the prediction processing unit 53d; the predicted values E2(1,1,k−1) to E2(n,m,k−1) for one subframe written in the previous frame FR(k−1) are overwritten by the predicted values E2(1,1,k) to E2(n,m,k) of the current frame FR(k), but before being overwritten, the predicted values E2(1,1,k−1) to E2(n,m,k−1) can be transferred to the frame memory 54b.
  • the frame memory 54a only needs to read and write the predicted value E2 for one subframe once each within one frame period, so it suffices for it to respond to access at the same frequency as the frequency f.
  • the frame memory 54b receives the predicted values E2(1,1,k−1) to E2(n,m,k−1) from the frame memory 54a and can output these predicted values E2(1,1,k−1) to E2(n,m,k−1) twice per frame. In the frame memory 54b, the predicted value E2 for one subframe needs to be written once and read twice within one frame period, so it needs to respond to access at a frequency three times the frequency f.
  • by transferring the predicted values E2 stored in the frame memory 54a by the prediction processing unit 53d to the frame memory 54b, which outputs the predicted values E2 to the correction processing unit 52c and the prediction processing unit 53c, the area that is read twice per frame is limited to the frame memory 54b, which has a storage capacity for one subframe.
  • FIG. 27 illustrates the case where the transfer from the frame memory 54a to the frame memory 54b is shifted by one subframe in order to reduce the storage capacity required for the buffer.
  • compared with a configuration in which the entire storage area of the frame memory 54 must respond to access at three times the frequency f, the storage area that must respond to access at that frequency can be reduced, and the frame memory 54 can be provided more inexpensively and easily.
  • although the case where both the generation processing and the output processing of the predicted value E2 by the prediction processing unit 53d are thinned out has been described as an example, only the output processing may be thinned out; in that case, since the predicted values E1(1,1,k) to E1(n,m,k) are generated twice per frame period, the predicted values E2(1,1,k) to E2(n,m,k) can also be generated twice per frame period.
  • as described above, the modulation processing unit corrects each of the plurality of video data S1(i,j,k) and S2(i,j,k) generated for each frame period, and outputs the corrected video data S1o(i,j,k) and S2o(i,j,k) corresponding to each subframe for each of the divided subframes SFR1(k) and SFR2(k). It includes the correction processing units 52c and 52d, and a frame memory 54 for storing the predicted value E2(i,j,k) of the luminance that the sub-pixel SPIX(i,j) reaches when driven according to the corrected video data S2o(i,j,k) corresponding to the last subframe SFR2(k).
  • when the video data S1(i,j,k) or S2(i,j,k) to be corrected corresponds to the first subframe SFR1(k) (in the case of video data S1(i,j,k)), the correction processing unit 52c corrects the video data S1(i,j,k) so as to emphasize the gradation transition from the luminance indicated by the predicted value E2(i,j,k−1) read from the frame memory 54 to the luminance indicated by the video data S1(i,j,k).
  • when the video data S1(i,j,k) or S2(i,j,k) to be corrected corresponds to the second or a subsequent subframe (in the case of video data S2(i,j,k)), the correction processing unit 52d and the prediction processing unit 53c provided in the modulation processing unit predict, based on the video data S2(i,j,k) and the video data S1(i,j,k) corresponding to the previous subframe SFR1(k), the luminance of the sub-pixel SPIX(i,j) at the first time point of the subframe SFR2(k), and correct the video data S2(i,j,k) so as to emphasize the gradation transition from the predicted luminance (the luminance indicated by E1(i,j,k)) to the luminance indicated by the video data S2(i,j,k).
  • when the video data S1(i,j,k) or S2(i,j,k) to be corrected corresponds to the last subframe SFR2(k) (in the case of video data S2(i,j,k)), the prediction processing units 53c and 53d provided in the modulation processing unit predict, based on the video data S2(i,j,k), the video data S1(i,j,k) corresponding to the previous subframe SFR1(k), and the predicted value E2(i,j,k−1) stored in the frame memory 54, the luminance of the sub-pixel SPIX(i,j) at the last time point of the subframe SFR2(k) corresponding to the video data S2(i,j,k) to be corrected, and store the predicted value E2(i,j,k) indicating the prediction result in the frame memory 54.
  • as a result, the video data S1(i,j,k) and S2(i,j,k) can be corrected without storing in a frame memory, each time, the results E1(i,j,k) of predicting the luminance reached by the sub-pixel SPIX(i,j) at the end of the subframe SFR2(k−1) or SFR1(k) preceding the subframes SFR1(k) and SFR2(k) corresponding to the video data S1(i,j,k) and S2(i,j,k).
  • compared with the configuration in which the prediction result of each subframe is stored in a frame memory (51a, 51b) each time, the amount of predicted value data stored in the frame memory per frame period can be reduced. Since the amount of data can be reduced, the access speed required of the frame memory can be lowered with only a smaller-scale circuit, even if, for example, a buffer is provided for that purpose.
  • furthermore, the prediction processing unit 53d thins out half of the processing for generating and outputting the predicted values E2(1,1,k) to E2(n,m,k), and generates and outputs the predicted values E2(1,1,k) to E2(n,m,k) once per frame.
  • in each of the above embodiments, the case where one pixel has a sub-pixel SPIX for each color and color display is possible has been described; however, the present invention is not limited to this, and the same effect can be obtained even with a display device that displays in monochrome.
  • in each of the above embodiments, the case where the control circuit (44, 44c) refers to the same LUT (42, 43) regardless of the surrounding conditions of the image display device 1 that cause changes in the temporal luminance change of a pixel (sub-pixel), such as a temperature change, has been described; however, the present invention is not limited to this.
  • a plurality of LUTs corresponding to the surrounding conditions may be provided in advance, together with a sensor for detecting the surrounding conditions of the image display device 1, and the LUT referred to by the control circuit when generating the video data for each subframe may be switched according to the detection result of the sensor. In this configuration, since the video data for each subframe can be changed according to the surrounding conditions, display quality can be maintained even if the surrounding conditions change.
  • the response characteristics and gradation luminance characteristics of a liquid crystal panel change depending on the environmental temperature (the temperature of the environment in which the panel 11 is placed). For this reason, even if the input video signal DAT is the same, the optimum video data for each subframe also changes according to the environmental temperature.
  • therefore, when the panel 11 is a liquid crystal panel, if LUTs (42, 43) suited to different temperature ranges are provided, a sensor for measuring the environmental temperature is provided, and the control circuit (44, 44c) switches the LUT it refers to according to the sensor's measurement of the environmental temperature, then even for the same video signal DAT, the signal processing unit (21 to 21d) including the control circuit can generate a more appropriate video signal DAT2 and transmit it to the liquid crystal panel. Therefore, it is possible to display an image with more appropriate luminance over the entire assumed temperature range (for example, a range of 0 °C to 65 °C).
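The temperature-based LUT switching can be sketched as follows. The temperature bands and LUT names are hypothetical; the patent only states that several LUTs are prepared and that a sensor selects among them.

```python
# Hypothetical temperature bands (degC) mapping to subframe-generation LUTs.
LUTS = {
    (0, 20): "lut_low_temp",
    (20, 45): "lut_mid_temp",
    (45, 65): "lut_high_temp",
}

def select_lut(temp_c):
    """Pick the subframe-generation LUT for the measured panel temperature,
    as the control circuit (44, 44c) would on each sensor reading."""
    for (lo, hi), lut in LUTS.items():
        if lo <= temp_c < hi:
            return lut
    raise ValueError("temperature outside the assumed 0-65 degC range")
```

Each band would hold LUT contents tuned to the liquid crystal's response at that temperature, so the same input signal DAT yields subframe data appropriate to the current conditions.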
  • in the above description, unlike the LUTs 142 and 143 for γ conversion shown in FIG. 7, the configuration in which the LUTs 42 and 43 store γ-converted values indicating the video data of each subframe, thereby sharing the LUT 133a for γ correction, has been explained; however, the present invention is not limited to this. The same LUTs 142 and 143 and γ correction circuit 133 as in FIG. 7 may be provided, and if γ correction is unnecessary, the γ correction circuit 133 may be omitted.
  • in each of the above embodiments, the case where the subframe processing unit (32, 32c) divides one frame into two subframes has mainly been described; however, the present invention is not limited to this.
  • when one frame is divided into three or more subframes, the subframe processing unit sets, for dark display, at least one of the video data (S1o, S2o; S1, S2) for each subframe to a value indicating luminance within a predetermined range for dark display and increases or decreases at least one of the remaining video data for each subframe; for bright display, it sets at least one of them to a value indicating luminance within a predetermined range for bright display and increases or decreases at least one of the remaining video data for each subframe, thereby controlling the time integral value of the luminance of the pixel in one frame period.
  • in the case of dark display, one of the output video data is set to a value indicating the luminance for dark display, so that the viewing angle over which the luminance of the pixel is maintained within an allowable range during the dark display period can be expanded.
  • in the case of bright display, one of the output video data is set to a value indicating the luminance for bright display, so that the viewing angle over which the pixel luminance is maintained within the allowable range can likewise be expanded.
  • the generation means may generate, for each input period, a predetermined plural number of output video data to each pixel according to the input video data to that pixel, and the correction means corrects each output video data to each pixel.
  • the prediction result corresponding to each pixel is stored in the prediction result storage means; for each pixel, the generation means generates the predetermined plural number of output video data for each input period, and the correction means reads the prediction result for that pixel a predetermined plural number of times for each input period. Thereby, based on these prediction results and each output video data, at least one of the process of predicting the luminance of the pixel at the last time point and the process of storing the prediction result can be performed a plural number of times for each input period. Note that the number of pixels is plural, and the reading process and the generation process are performed corresponding to each pixel.
  • among the prediction processes and prediction-result storage processes that can be performed a plural number of times for each input period, at least one prediction-result writing process is thinned out. Thereby, compared with a configuration without thinning, the time interval for storing the prediction result of each pixel in the prediction result storage means can be lengthened, and the response speed required of the prediction result storage means can be reduced.
  • further, in the above configuration, it is desirable that the video data for the remaining subframes, other than a specific one, be set to a value indicating luminance within a predetermined range for dark display or a value indicating luminance within a predetermined range for bright display, and that the specific video data be increased or decreased to control the time integral value of the luminance of the pixel in one frame period.
  • the video data other than the specific video data are set to values indicating luminance within the predetermined range for dark display or within the predetermined range for bright display, so that the video data of a plurality of subframes are prevented from being set to values not included in either range. As a result, the occurrence of defects such as white floating can be prevented and the viewing angle can be expanded.
  • it is preferable to set the video data for each subframe so that the temporal center of gravity of the luminance of the pixel in one frame period is close to the temporal center position of the one frame period.
  • in the region where the luminance indicated by the input video data is lowest, the subframe processing units (32, 32c) set, as the specific video data, the video data corresponding to the subframe closest to the temporal center position of the frame period among the subframes constituting one frame period, and increase or decrease the value of that video data to control the time integral value of the luminance of the pixel in one frame period; the video data of the other subframes are set to values within the predetermined range for dark display.
  • the video data corresponding to the subframe closest to the temporal center position of the frame period is set as the specific video data, and the value of the video data is increased or decreased. Controls the time integral value of the brightness of the pixel in the frame period. The selection of the subframe corresponding to the specific video data is repeated every time the specific video data enters the predetermined range for the bright display.
  • In this way, the temporal center of gravity of the pixel's luminance in one frame period is kept close to the temporal center of that frame period.
  • It is preferable that the signal processing units (21 to 21f) set the time ratios of the subframe periods so that the switching timing of the subframe corresponding to the specific video data is closer to a timing that equally divides the range of luminance the pixel can represent than to a timing that equally divides the frame period.
  • Each member constituting the signal processing circuit is not limited to the implementation described above; it may be realized by hardware or by software.
  • For example, the signal processing circuit may be realized as a device driver used when a computer connected to the image display device 1 drives the image display device 1.
  • Alternatively, the signal processing circuit may be realized as a conversion board built into or externally attached to the image display device 1, with the operation of the circuit realizing the signal processing circuit changeable by rewriting a program such as firmware. In that case, by distributing a recording medium on which the software is recorded, or by transmitting the software via a communication path, the software can be delivered to the hardware, and by executing the software the hardware can be made to operate as the signal processing circuit of each of the above embodiments.
  • In this way, the signal processing circuit according to each of the above embodiments can be realized simply by causing the hardware to execute the program.
  • More specifically, computing means such as a CPU, or hardware capable of executing the functions described above, executes program code stored in a storage device such as a ROM or RAM and controls peripheral circuits such as input/output circuits (not shown), whereby the signal processing circuit according to each of the above embodiments can be realized.
  • The computing means can also be realized by combining hardware that performs part of the processing with computing means that executes program code for controlling that hardware and for performing the remaining processing. Likewise, even those members described above as hardware can be realized by such a combination of hardware performing part of the processing and computing means executing program code for controlling that hardware and for performing the remaining processing.
  • The computing means may be a single unit, or a plurality of computing means connected via a bus inside the apparatus or via various communication paths may jointly execute the program code.
  • Each transmission medium constituting the communication path propagates a signal sequence representing the program, whereby the program is transmitted via the communication path.
  • For transmission, the transmitting device may superimpose the signal sequence on a carrier wave by modulating the carrier with the signal sequence representing the program. In this case, the receiving device restores the signal sequence by demodulating the carrier wave.
  • Alternatively, the transmitting device may divide the signal sequence, as a digital data sequence, into packets and transmit them. In this case, the receiving device concatenates the received packets and restores the signal sequence.
  • The transmitting device may also multiplex the signal sequence with other signal sequences and transmit the result, using a method such as time division, frequency division, or code division.
  • In that case, the receiving device extracts and restores the individual signal sequences from the multiplexed signal sequence. In either case, the same effect is obtained as long as the program can be transmitted via the communication path.
  • The recording medium used to distribute the program is preferably removable, but whether the recording medium remains removable after distribution does not matter.
  • The recording medium may have any rewritability (writability), volatility, recording method, and shape, as long as it can store a program.
  • Examples of recording media include tapes such as magnetic tapes and cassette tapes; magnetic disks such as floppy (registered trademark) disks and hard disks; and discs such as CD-ROMs, magneto-optical discs (MO), mini discs (MD), and digital versatile discs (DVD).
  • The recording medium may also be a card such as an IC card or an optical card; a semiconductor memory such as a mask ROM, EPROM, EEPROM, or flash ROM; or a memory formed inside computing means such as a CPU.
  • The program code may be code that instructs the computing means in every procedure of each process. Alternatively, if a basic program (for example, an operating system or a library) that can execute part or all of the processes by being called according to a predetermined procedure already exists, all or part of those procedures may be replaced with code or pointers that instruct the computing means to call that basic program.
  • The format in which the program is stored on the recording medium may be a storage format that the computing means can access and execute directly, for example a state in which the program is placed in real memory.
  • Alternatively, it may be the storage format after installation on a local recording medium that the computing means can always access (for example, real memory or a hard disk) before being placed in real memory, or the storage format before installation from a network or a transportable recording medium onto a local recording medium.
  • The program is not limited to object code after compilation; it may be stored as source code, or as intermediate code generated during interpretation or compilation. In any case, regardless of the format in which the program is stored on the recording medium, the same effect can be obtained as long as it can be converted into a format executable by the computing means through processes such as decompression of compressed information, decoding of encoded information, interpretation, compilation, linking, loading into real memory, or a combination of these.
  • By driving as described above, the present invention makes display brighter, widens the viewing angle, suppresses deterioration in image quality due to over-emphasis of gradation transitions, and improves image quality when displaying moving images. It can therefore be used widely and suitably as a driving device for various display devices such as liquid crystal television receivers and liquid crystal monitors.
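The subframe allocation described in the points above — holding every subframe except one "specific" subframe in the dark- or bright-display range, and modulating only that one, chosen as close to the temporal center of the frame as possible — can be sketched as follows. This is an illustrative model only, not the patent's circuit: the equal subframe weights, the 0.0–1.0 normalized luminance scale, and the function names are assumptions.

```python
def center_first_order(n_sub):
    """Subframe indices ordered by distance from the temporal center of the
    frame, so bright subframes cluster around the middle (keeping the temporal
    center of gravity of the luminance near the frame's temporal center)."""
    mid = (n_sub - 1) / 2
    return sorted(range(n_sub), key=lambda i: abs(i - mid))

def allocate_subframes(level, n_sub):
    """Split a target luminance level (0.0..1.0) across n_sub equal subframes
    so that at most one subframe -- the 'specific' one -- takes an intermediate
    value; all others sit at full dark (0.0) or full bright (1.0)."""
    values = [0.0] * n_sub
    remaining = level * n_sub  # total luminance budget over the frame
    for idx in center_first_order(n_sub):
        v = min(1.0, remaining)
        values[idx] = v        # saturates (1.0) or becomes the specific value
        remaining -= v
        if remaining <= 0:
            break
    return values

# A mid-gray level lights the two central subframes of four fully:
print(allocate_subframes(0.5, 4))   # → [0.0, 1.0, 1.0, 0.0]
# A darker level modulates only one central subframe:
print(allocate_subframes(0.2, 4))   # → [0.0, 0.8, 0.0, 0.0]
```

The temporal integral over the frame, `sum(values) / n_sub`, always equals the requested level, and no more than one subframe carries an intermediate value — the property the text credits with preventing white floating and widening the viewing angle.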

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The present invention concerns a device comprising a subframe processing unit (32) which, to obtain dark display from subpixels, sets the video data (So1) of one subframe (SFR1) to a value within a dark-display range and increases or decreases the value of the video data (So2) of another subframe (SFR2) to control the subpixel luminance. To obtain bright display from subpixels, the unit (32) sets the video data (So2) to a value within a bright-display range and increases or decreases the value of the video data (So1) to control the subpixel luminance. A modulation unit (31) corrects the video data (D) of each frame (FR), supplies the corrected data (Do) to the subframe processing unit (32), predicts the luminance reached by the subpixels at the end of the frame (FR), and stores that predicted luminance for use in correction and prediction in the next frame. The present invention thus provides a display device that offers brighter display and a wider viewing angle, eliminates image-quality degradation due to over-emphasis of gray-scale transitions, and delivers better image quality when displaying moving pictures.
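The modulation unit (31) of the abstract can be approximated in a few lines: it corrects each frame's data by emphasizing the transition measured from the previously *predicted* luminance (rather than the previous input), then stores a new prediction of the luminance the slow subpixel will actually reach by frame end. This is a hedged, minimal sketch; the first-order response model, the `gain` and `response` parameters, and the class name are assumptions, not the patent's actual prediction circuit.

```python
class ModulationUnit:
    """Toy model of frame-by-frame correction with a stored luminance
    prediction. All quantities are normalized luminances in 0.0..1.0."""

    def __init__(self, gain=1.5, response=0.8):
        self.gain = gain          # strength of gradation-transition emphasis (assumed)
        self.response = response  # fraction of the gap a pixel closes per frame (assumed)
        self.predicted = 0.0      # luminance predicted at the end of the previous frame

    def correct(self, target):
        # Emphasize the transition from the predicted level, so corrections
        # do not compound into over-emphasis on later frames.
        corrected = self.predicted + self.gain * (target - self.predicted)
        corrected = min(1.0, max(0.0, corrected))
        # Predict the luminance the subpixel reaches by the end of this frame,
        # and keep it for correcting and predicting the next frame.
        self.predicted += self.response * (corrected - self.predicted)
        return corrected
```

Driven toward a step from black to mid-gray, such a unit overshoots on the first frame (0.75 for a 0.5 target with the assumed parameters) and then backs off as the stored prediction catches up.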
PCT/JP2006/304433 2005-03-15 2006-03-08 Procede, dispositif et programme de commande de dispositif d'affichage, support d'enregistrement et dispositif d'affichage les utilisant WO2006098194A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/886,226 US7956876B2 (en) 2005-03-15 2006-03-08 Drive method of display device, drive unit of display device, program of the drive unit and storage medium thereof, and display device including the drive unit

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-073902 2005-03-15
JP2005073902 2005-03-15

Publications (1)

Publication Number Publication Date
WO2006098194A1 true WO2006098194A1 (fr) 2006-09-21

Family

ID=36991542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/304433 WO2006098194A1 (fr) 2005-03-15 2006-03-08 Procede, dispositif et programme de commande de dispositif d'affichage, support d'enregistrement et dispositif d'affichage les utilisant

Country Status (2)

Country Link
US (1) US7956876B2 (fr)
WO (1) WO2006098194A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8253678B2 (en) 2005-03-15 2012-08-28 Sharp Kabushiki Kaisha Drive unit and display device for setting a subframe period
JPWO2012035768A1 (ja) * 2010-09-14 2014-01-20 学校法人幾徳学園 情報表示装置

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006098246A1 (fr) * 2005-03-15 2006-09-21 Sharp Kabushiki Kaisha Procédé d’excitation de dispositif d’affichage à cristaux liquides, dispositif d’excitation de dispositif d’affichage à cristaux liquides, programme de celui-ci, support d’enregistrement, and dispositif d’affichage à cristaux liquides
US20090122207A1 (en) * 2005-03-18 2009-05-14 Akihiko Inoue Image Display Apparatus, Image Display Monitor, and Television Receiver
US20080136752A1 (en) * 2005-03-18 2008-06-12 Sharp Kabushiki Kaisha Image Display Apparatus, Image Display Monitor and Television Receiver
US8659746B2 (en) 2009-03-04 2014-02-25 Nikon Corporation Movable body apparatus, exposure apparatus and device manufacturing method
WO2011004538A1 (fr) * 2009-07-10 2011-01-13 シャープ株式会社 Circuit d’excitation de cristaux liquides et dispositif d’affichage à cristaux liquides
JP2011102876A (ja) * 2009-11-10 2011-05-26 Hitachi Displays Ltd 液晶表示装置
KR101094304B1 (ko) * 2010-02-23 2011-12-19 삼성모바일디스플레이주식회사 표시 장치 및 그의 영상 처리 방법
US9202406B2 (en) * 2010-04-02 2015-12-01 Sharp Kabushiki Kaisha Liquid crystal display, display method, program, and recording medium
TWI427612B (zh) * 2010-12-29 2014-02-21 Au Optronics Corp 用於驅動顯示面板之畫素的方法
TWI739510B (zh) 2014-03-28 2021-09-11 日商尼康股份有限公司 曝光裝置、平板顯示器之製造方法及元件製造方法
KR20170026705A (ko) * 2015-08-26 2017-03-09 삼성디스플레이 주식회사 표시 장치 및 그 구동 방법
CN108139688A (zh) 2015-09-30 2018-06-08 株式会社尼康 曝光装置、平面显示器的制造方法、组件制造方法、及曝光方法
WO2017057583A1 (fr) 2015-09-30 2017-04-06 株式会社ニコン Dispositif d'exposition, procédé de fabrication de dispositif d'affichage à écran plat, procédé de fabrication de dispositif, et procédé d'exposition
JP6855009B2 (ja) 2015-09-30 2021-04-07 株式会社ニコン 露光装置及び露光方法、並びにフラットパネルディスプレイ製造方法
KR20180059812A (ko) 2015-09-30 2018-06-05 가부시키가이샤 니콘 노광 장치 및 노광 방법, 그리고 플랫 패널 디스플레이 제조 방법
KR20180059814A (ko) 2015-09-30 2018-06-05 가부시키가이샤 니콘 노광 장치, 플랫 패널 디스플레이의 제조 방법, 및 디바이스 제조 방법
WO2017057569A1 (fr) 2015-09-30 2017-04-06 株式会社ニコン Dispositif d'exposition, procédé d'exposition, et procédé de fabrication de dispositif d'affichage à écran plat
JP6819887B2 (ja) 2015-09-30 2021-01-27 株式会社ニコン 露光装置及び露光方法、並びにフラットパネルディスプレイ製造方法
US10242649B2 (en) * 2016-09-23 2019-03-26 Apple Inc. Reduced footprint pixel response correction systems and methods
KR102370367B1 (ko) * 2017-07-17 2022-03-07 삼성디스플레이 주식회사 표시 장치 및 이의 구동 방법
CN112017609B (zh) * 2020-09-03 2021-07-23 Tcl华星光电技术有限公司 显示面板的控制方法、显示面板以及显示装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002236472A (ja) * 2001-02-08 2002-08-23 Semiconductor Energy Lab Co Ltd 液晶表示装置およびその駆動方法
JP2003058120A (ja) * 2001-08-09 2003-02-28 Sharp Corp 表示装置およびその駆動方法
WO2003098588A1 (fr) * 2002-05-17 2003-11-27 Sharp Kabushiki Kaisha Dispositif d'affichage a cristaux liquides
JP2004240317A (ja) * 2003-02-07 2004-08-26 Sanyo Electric Co Ltd 表示方法、表示装置およびそれに利用可能なデータ書込回路
JP2005173387A (ja) * 2003-12-12 2005-06-30 Nec Corp 画像処理方法、表示装置の駆動方法及び表示装置

Family Cites Families (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2650479B2 (ja) 1989-09-05 1997-09-03 松下電器産業株式会社 液晶制御回路および液晶パネルの駆動方法
JP2761128B2 (ja) * 1990-10-31 1998-06-04 富士通株式会社 液晶表示装置
JP3295437B2 (ja) 1991-03-29 2002-06-24 日本放送協会 表示装置
JPH0568221A (ja) 1991-09-05 1993-03-19 Toshiba Corp 液晶表示装置の駆動方法
US5488389A (en) * 1991-09-25 1996-01-30 Sharp Kabushiki Kaisha Display device
US5390293A (en) * 1992-08-19 1995-02-14 Hitachi, Ltd. Information processing equipment capable of multicolor display
JP3240218B2 (ja) 1992-08-19 2001-12-17 株式会社日立製作所 多色表示可能な情報処理装置
JPH0683295A (ja) 1992-09-03 1994-03-25 Hitachi Ltd マルチメディア表示システム
JPH07294881A (ja) 1994-04-20 1995-11-10 Kodo Eizo Gijutsu Kenkyusho:Kk 液晶表示装置
JPH08114784A (ja) 1994-08-25 1996-05-07 Toshiba Corp 液晶表示装置
US5874933A (en) * 1994-08-25 1999-02-23 Kabushiki Kaisha Toshiba Multi-gradation liquid crystal display apparatus with dual display definition modes
JP3305129B2 (ja) 1994-09-02 2002-07-22 キヤノン株式会社 表示装置
US5818419A (en) * 1995-10-31 1998-10-06 Fujitsu Limited Display device and method for driving the same
JPH10161600A (ja) 1996-11-29 1998-06-19 Hitachi Ltd 液晶表示制御装置
JP3703247B2 (ja) * 1997-03-31 2005-10-05 三菱電機株式会社 プラズマディスプレイ装置及びプラズマディスプレイ駆動方法
EP1331626B1 (fr) * 1997-07-24 2009-12-16 Panasonic Corporation Dispositif d'affichage d'images et dispositif d'évaluation d'images
JP3425083B2 (ja) 1997-07-24 2003-07-07 松下電器産業株式会社 画像表示装置及び画像評価装置
DE69800055T2 (de) * 1998-04-17 2000-08-03 Barco Nv Videosignalumsetzung zur Steuerung einer Flüssigkristallanzeige
AUPP340998A0 (en) 1998-05-07 1998-05-28 Canon Kabushiki Kaisha A method of halftoning an image on a video display having limited characteristics
JPH11352923A (ja) 1998-06-05 1999-12-24 Canon Inc 画像表示方法及び装置
JP2000187469A (ja) 1998-12-24 2000-07-04 Fuji Film Microdevices Co Ltd 画像表示システム
EP1022714A3 (fr) 1999-01-18 2001-05-09 Pioneer Corporation Méthode de commande pour un panneau d'affichage à plasma
JP3678401B2 (ja) 1999-08-20 2005-08-03 パイオニア株式会社 プラズマディスプレイパネルの駆動方法
JP2001296841A (ja) 1999-04-28 2001-10-26 Matsushita Electric Ind Co Ltd 表示装置
JP3556150B2 (ja) * 1999-06-15 2004-08-18 シャープ株式会社 液晶表示方法および液晶表示装置
JP4519251B2 (ja) * 1999-10-13 2010-08-04 シャープ株式会社 液晶表示装置およびその制御方法
JP2001215916A (ja) * 2000-02-03 2001-08-10 Kawasaki Steel Corp 画像処理装置及び液晶表示装置
JP4240743B2 (ja) * 2000-03-29 2009-03-18 ソニー株式会社 液晶表示装置及びその駆動方法
JP2001350453A (ja) 2000-06-08 2001-12-21 Hitachi Ltd 画像表示方法および画像表示装置
JP3769463B2 (ja) * 2000-07-06 2006-04-26 株式会社日立製作所 表示装置、表示装置を備えた画像再生装置及びその駆動方法
US7106350B2 (en) * 2000-07-07 2006-09-12 Kabushiki Kaisha Toshiba Display method for liquid crystal display device
JP4655341B2 (ja) * 2000-07-10 2011-03-23 日本電気株式会社 表示装置
JP3647364B2 (ja) * 2000-07-21 2005-05-11 Necエレクトロニクス株式会社 クロック制御方法及び回路
JP2002091400A (ja) 2000-09-19 2002-03-27 Matsushita Electric Ind Co Ltd 液晶表示装置
JP2002108294A (ja) * 2000-09-28 2002-04-10 Advanced Display Inc 液晶表示装置
JP2002131721A (ja) 2000-10-26 2002-05-09 Mitsubishi Electric Corp 液晶表示装置
EP1227460A3 (fr) * 2001-01-22 2008-03-26 Toshiba Matsushita Display Technology Co., Ltd. Dispositif d'affichage et méthode de commande de celui-ci
JP2002229547A (ja) * 2001-02-07 2002-08-16 Hitachi Ltd 画像表示システム及び画像情報伝送方法
JP3660610B2 (ja) * 2001-07-10 2005-06-15 株式会社東芝 画像表示方法
JP2003114648A (ja) 2001-09-28 2003-04-18 Internatl Business Mach Corp <Ibm> 液晶表示装置、コンピュータ装置及びそのlcdパネルの駆動制御方法
JP2003177719A (ja) 2001-12-10 2003-06-27 Matsushita Electric Ind Co Ltd 画像表示装置
JP3999081B2 (ja) 2002-01-30 2007-10-31 シャープ株式会社 液晶表示装置
JP2003222790A (ja) 2002-01-31 2003-08-08 Minolta Co Ltd カメラ
JP2003262846A (ja) 2002-03-07 2003-09-19 Mitsubishi Electric Corp 表示装置
JP4342200B2 (ja) 2002-06-06 2009-10-14 シャープ株式会社 液晶表示装置
JP4248306B2 (ja) 2002-06-17 2009-04-02 シャープ株式会社 液晶表示装置
KR100908655B1 (ko) * 2002-11-27 2009-07-21 엘지디스플레이 주식회사 데이터 공급시간의 변조방법과 이를 이용한액정표시장치의 구동방법 및 장치
JP4436622B2 (ja) * 2002-12-19 2010-03-24 シャープ株式会社 液晶表示装置
JP2004258139A (ja) 2003-02-24 2004-09-16 Sharp Corp 液晶表示装置
JP4413515B2 (ja) 2003-03-31 2010-02-10 シャープ株式会社 画像処理方法及びそれを用いた液晶表示装置
KR100836986B1 (ko) * 2003-03-31 2008-06-10 샤프 가부시키가이샤 화상 처리 방법 및 그것을 이용한 액정 표시 장치
JP4457572B2 (ja) 2003-04-03 2010-04-28 セイコーエプソン株式会社 画像表示装置とその階調表現方法、投射型表示装置
JP4719429B2 (ja) * 2003-06-27 2011-07-06 株式会社 日立ディスプレイズ 表示装置の駆動方法及び表示装置
US20040266643A1 (en) * 2003-06-27 2004-12-30 The Procter & Gamble Company Fabric article treatment composition for use in a lipophilic fluid system
JP4341839B2 (ja) * 2003-11-17 2009-10-14 シャープ株式会社 画像表示装置、電子機器、液晶テレビジョン装置、液晶モニタ装置、画像表示方法、表示制御プログラムおよび記録媒体
JP4197322B2 (ja) 2004-01-21 2008-12-17 シャープ株式会社 表示装置,液晶モニター,液晶テレビジョン受像機および表示方法
US8112383B2 (en) * 2004-02-10 2012-02-07 Microsoft Corporation Systems and methods for a database engine in-process data provider
US20050253793A1 (en) * 2004-05-11 2005-11-17 Liang-Chen Chien Driving method for a liquid crystal display
US7903064B2 (en) 2004-09-17 2011-03-08 Sharp Kabushiki Kaisha Method and apparatus for correcting the output signal for a blanking period
KR20060065956A (ko) 2004-12-11 2006-06-15 삼성전자주식회사 액정 표시 장치 및 표시 장치의 구동 장치
US8253678B2 (en) 2005-03-15 2012-08-28 Sharp Kabushiki Kaisha Drive unit and display device for setting a subframe period
WO2006098246A1 (fr) 2005-03-15 2006-09-21 Sharp Kabushiki Kaisha Procédé d’excitation de dispositif d’affichage à cristaux liquides, dispositif d’excitation de dispositif d’affichage à cristaux liquides, programme de celui-ci, support d’enregistrement, and dispositif d’affichage à cristaux liquides
US20090122207A1 (en) 2005-03-18 2009-05-14 Akihiko Inoue Image Display Apparatus, Image Display Monitor, and Television Receiver
US20080136752A1 (en) 2005-03-18 2008-06-12 Sharp Kabushiki Kaisha Image Display Apparatus, Image Display Monitor and Television Receiver
JP4497067B2 (ja) * 2005-03-23 2010-07-07 セイコーエプソン株式会社 電気光学装置、電気光学装置用駆動回路および電気光学装置用駆動方法
JP4722942B2 (ja) 2005-11-25 2011-07-13 シャープ株式会社 画像表示方法、画像表示装置、画像表示モニター、および、テレビジョン受像機

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002236472A (ja) * 2001-02-08 2002-08-23 Semiconductor Energy Lab Co Ltd 液晶表示装置およびその駆動方法
JP2003058120A (ja) * 2001-08-09 2003-02-28 Sharp Corp 表示装置およびその駆動方法
WO2003098588A1 (fr) * 2002-05-17 2003-11-27 Sharp Kabushiki Kaisha Dispositif d'affichage a cristaux liquides
JP2004240317A (ja) * 2003-02-07 2004-08-26 Sanyo Electric Co Ltd 表示方法、表示装置およびそれに利用可能なデータ書込回路
JP2005173387A (ja) * 2003-12-12 2005-06-30 Nec Corp 画像処理方法、表示装置の駆動方法及び表示装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8253678B2 (en) 2005-03-15 2012-08-28 Sharp Kabushiki Kaisha Drive unit and display device for setting a subframe period
JPWO2012035768A1 (ja) * 2010-09-14 2014-01-20 学校法人幾徳学園 情報表示装置

Also Published As

Publication number Publication date
US20080129762A1 (en) 2008-06-05
US7956876B2 (en) 2011-06-07

Similar Documents

Publication Publication Date Title
WO2006098194A1 (fr) Procede, dispositif et programme de commande de dispositif d&#39;affichage, support d&#39;enregistrement et dispositif d&#39;affichage les utilisant
JP4567052B2 (ja) 表示装置,液晶モニター,液晶テレビジョン受像機および表示方法
US8253678B2 (en) Drive unit and display device for setting a subframe period
JP4197322B2 (ja) 表示装置,液晶モニター,液晶テレビジョン受像機および表示方法
JP5031553B2 (ja) 表示装置、液晶モニター、液晶テレビジョン受像機および表示方法
US8624936B2 (en) Display panel control device, liquid crystal display device, electronic appliance, display device driving method, and control program
US7903064B2 (en) Method and apparatus for correcting the output signal for a blanking period
US7382383B2 (en) Driving device of image display device, program and storage medium thereof, image display device, and television receiver
JP5220268B2 (ja) 表示装置
WO2006098246A1 (fr) Procédé d’excitation de dispositif d’affichage à cristaux liquides, dispositif d’excitation de dispositif d’affichage à cristaux liquides, programme de celui-ci, support d’enregistrement, and dispositif d’affichage à cristaux liquides
US8063897B2 (en) Display device
JP5110788B2 (ja) 表示装置
JP2007538268A (ja) 液晶表示装置及びその駆動方法、並びに液晶表示装置を備えた液晶テレビ及び液晶モニタ
WO2006025506A1 (fr) Procédé de contrôle de l&#39;affichage, dispositif de commande du dispositif d&#39;affichage, dispositif d&#39;affichage, programme et support d&#39;enregistrement
US20080246784A1 (en) Display device
JP4731971B2 (ja) 表示装置の駆動装置および表示装置
JP2007333770A (ja) 電気光学装置、電気光学装置用駆動回路、及び電気光学装置の駆動方法、並びに電子機器
CN113808550B (zh) 可应用于在显示模块中进行亮度增强的设备
JP2006292973A (ja) 表示装置の駆動装置、および、それを備える表示装置
KR20070062835A (ko) 액정 표시 장치의 데이터 처리 방법 및 장치
KR20100076605A (ko) 액정표시장치 및 그 구동방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 11886226

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 06728752

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP

WWP Wipo information: published in national office

Ref document number: 11886226

Country of ref document: US