WO2010073561A1 - Image processing device and image display device


Info

Publication number
WO2010073561A1
Authority
WO
WIPO (PCT)
Prior art keywords
subfield
image
light emission
motion vector
frame image
Prior art date
Application number
PCT/JP2009/006985
Other languages
English (en)
Japanese (ja)
Inventor
木内真也
森光広
Original Assignee
パナソニック株式会社 (Panasonic Corporation)
Priority date
Filing date
Publication date
Application filed by パナソニック株式会社 (Panasonic Corporation)
Publication of WO2010073561A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2007Display of intermediate tones
    • G09G3/2018Display of intermediate tones by time modulation using two or more time intervals
    • G09G3/2022Display of intermediate tones by time modulation using two or more time intervals using sub-frames
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/22Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources
    • G09G3/28Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using luminous gas-discharge panels, e.g. plasma panels
    • G09G3/288Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using luminous gas-discharge panels, e.g. plasma panels using AC panels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0112Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards corresponding to a cinematograph film standard
    • H04N7/0115Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards corresponding to a cinematograph film standard with details on the detection of a particular field or frame pattern in the incoming video signal, e.g. 3:2 pull-down pattern
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0261Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0266Reduction of sub-frame artefacts
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0613The adjustment depending on the type of the information to be displayed
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/10Special adaptations of display systems for operation with variable images
    • G09G2320/106Determination of movement vectors or equivalent parameters within the image

Definitions

  • The present invention relates to a video processing apparatus that divides one field or one frame into a plurality of subfields and processes an input image so as to perform gradation display by combining light-emitting subfields that emit light and non-light-emitting subfields that do not, and to a video display device using such an apparatus.
  • A plasma display device has the advantage that it can be made thin with a large screen. The AC-type plasma display panel used in such a device is formed by combining a front plate, a glass substrate on which a plurality of scan electrodes and sustain electrodes are arranged, with a back plate carrying a plurality of data electrodes, such that the scan and sustain electrodes are orthogonal to the data electrodes and the discharge cells form a matrix. An image is displayed by selectively causing plasma discharge in the discharge cells.
  • For gradation display, one field is divided in the time direction into a plurality of screens having different luminance weights (hereinafter referred to as subfields (SF)), and one field image, that is, one frame image, is displayed by controlling the light emission and non-emission of the discharge cells in each subfield.
  • Patent Document 1 discloses an image display device that detects, among a plurality of fields included in a moving image, a motion vector whose start point is a pixel in one field and whose end point is a pixel in another field, converts the moving image into light emission data of each subfield, and reconstructs the light emission data of the subfields by processing using the motion vector.
  • Specifically, a motion vector whose end point is the pixel to be reconstructed in the other field is selected from the detected motion vectors, and a position vector is calculated by multiplying the motion vector by a predetermined function. The moving image is then converted into light emission data of each subfield, and the emission data of each subfield are rearranged according to the motion vector.
  • the rearrangement method will be specifically described below.
  • FIG. 8 is a schematic diagram showing an example of the transition state of a display screen.
  • FIG. 9 is a schematic diagram for explaining the light emission data of each subfield, before rearrangement, when the display screen shown in FIG. 8 is displayed.
  • FIG. 10 is a schematic diagram for explaining the light emission data of each subfield, after rearrangement, when the display screen shown in FIG. 8 is displayed.
  • In FIG. 8, an N-2 frame image D1, an N-1 frame image D2, and an N frame image D3 are displayed sequentially as continuous frame images; the entire screen is displayed in black (for example, luminance level 0) as the background, and a white-circle moving object OJ (for example, luminance level 255) moves from the left to the right of the display screen as the foreground.
  • A conventional image display device converts the moving image into light emission data of each subfield; as shown in FIG. 9, the emission data of each subfield of each pixel are created for each frame as follows.
  • In the N-2 frame, the light emission data of all subfields SF1 to SF5 of pixel P-10, which corresponds to the moving object OJ, are set to the emitting state (hatched subfields in the figure), while the emission data of subfields SF1 to SF5 of the other pixels are set to the non-emitting state (white subfields, partly omitted in the figure).
  • In the N-1 frame, the emission data of all subfields SF1 to SF5 of pixel P-5, corresponding to the moving object OJ, are in the emitting state, and the emission data of subfields SF1 to SF5 of the other pixels are in the non-emitting state.
  • Similarly, in the N frame, the emission data of all subfields SF1 to SF5 of pixel P-0, corresponding to the moving object OJ, are in the emitting state, and the emission data of subfields SF1 to SF5 of the other pixels are in the non-emitting state.
  • The conventional image display device then rearranges the emission data of each subfield according to the motion vector; as shown in FIG. 10, the rearranged emission data of each subfield of each pixel are generated for each frame as follows.
  • In the N-1 frame, the light emission data (emitting state) of the first subfield SF1 of pixel P-5 is moved 4 pixels to the left: the emission data of SF1 of pixel P-9 is changed from the non-emitting to the emitting state (hatched subfield in the figure), and the emission data of SF1 of pixel P-5 is changed from the emitting to the non-emitting state (broken-line white subfield in the figure).
  • Likewise, the emission data (emitting state) of the second subfield SF2 of pixel P-5 is moved 3 pixels to the left (SF2 of pixel P-8 becomes emitting, SF2 of pixel P-5 non-emitting), SF3 is moved 2 pixels (SF3 of pixel P-7 becomes emitting, SF3 of pixel P-5 non-emitting), and SF4 is moved 1 pixel (SF4 of pixel P-6 becomes emitting, SF4 of pixel P-5 non-emitting). The emission data of the fifth subfield SF5 of pixel P-5 is not changed.
  • Similarly, in the N frame, the emission data (emitting state) of the first to fourth subfields SF1 to SF4 of pixel P-0 are moved 4 to 1 pixels to the left: SF1 of pixel P-4, SF2 of pixel P-3, SF3 of pixel P-2, and SF4 of pixel P-1 are changed from the non-emitting to the emitting state, the emission data of SF1 to SF4 of pixel P-0 are changed from the emitting to the non-emitting state, and the emission data of the fifth subfield SF5 of pixel P-0 is not changed.
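The per-subfield shifts in the walkthrough above (4, 3, 2, 1, 0 pixels for SF1 to SF5 under a 5-pixel motion vector) follow one rule: a temporally earlier subfield is displaced further backward along the motion. The sketch below is illustrative only, not the patent's implementation; it assumes a 1-D pixel row, evenly spaced subfields, and a hypothetical `dict` representation mapping pixel x-coordinates to sets of lit subfields.

```python
def rearrange_subfields(emission, vector, n_sf):
    """Spatially rearrange per-subfield emission data along a motion vector.

    emission: {x: set of lit subfield indices (1-based)} for one frame
    vector:   motion in pixels per frame (positive = rightward motion)
    n_sf:     number of subfields per frame

    Subfield i is shifted back along the motion by vector*(n_sf - i)/n_sf
    pixels, so SF1 moves the most and the last subfield stays in place.
    """
    out = {}
    for x, lit in emission.items():
        for sf in lit:
            shift = round(vector * (n_sf - sf) / n_sf)
            out.setdefault(x - shift, set()).add(sf)
    return out
```

With the 5-subfield, 5-pixel example above, a pixel whose five subfields are all lit spreads into five neighbouring pixels, each lighting one subfield, producing the diagonal pattern of FIG. 10.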
  • Video materials such as DVD and Blu-ray discs store cinema images at 24 frame images per second. Such a cinema image may be displayed by the 2-2 pull-down method, which displays the same frame image twice, or, in the NTSC system with a frequency of 60 Hz, by the 2-3 pull-down method, in which the same frame image is first displayed twice and the next frame image is displayed three times.
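The two cadences amount to a simple frame-repetition schedule. The helper below is a hedged sketch (the function name and interface are hypothetical, not from the patent), assuming a 24 fps film sequence expanded according to a cadence tuple:

```python
def pulldown(film_frames, cadence):
    """Expand a 24 fps film sequence into displayed frames.

    cadence (2, 2): every frame shown twice (2-2 pull-down).
    cadence (2, 3): frames alternately shown twice and three times
                    (2-3 pull-down, as used for 60 Hz NTSC).
    """
    shown = []
    for i, frame in enumerate(film_frames):
        shown.extend([frame] * cadence[i % len(cadence)])
    return shown
```

For example, `pulldown(list('ABCD'), (2, 3))` yields A A B B B C C D D D: four film frames become ten displayed frames, matching the 24-to-60 rate conversion.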
  • FIG. 11 is a schematic diagram for explaining the light emission state of each subfield when the conventional subfield rearrangement process is applied to a cinema image.
  • Here, the N-3 frame image (not shown) and the N-2 frame image are the same frame image, and the N-1 frame image and the N frame image are the same frame image. A horizontal movement amount of 5 pixels is detected as the motion vector V1 between the N-2 frame image and the N-1 frame image, while the movement amount between the N-1 frame image and the N frame image is 0 and no motion vector is detected.
  • In the N-1 frame, the emission data (emitting state) of the first to fourth subfields SF1 to SF4 of pixel P-5 are moved 4 to 1 pixels to the left: SF1 of pixel P-9, SF2 of pixel P-8, SF3 of pixel P-7, and SF4 of pixel P-6 are changed from the non-emitting to the emitting state, the emission data of SF1 to SF4 of pixel P-5 are changed from the emitting to the non-emitting state, and the emission data of the fifth subfield SF5 is not changed.
  • However, since no motion vector is detected between the N-1 frame image and the N frame image, the emission data of the first to fifth subfields SF1 to SF5 of pixel P-5 in the N frame are not rearranged. Consequently, when the viewer watches the display transition from the N-2 frame to the N-1 frame, the line of sight moves smoothly along the direction of arrow AR, but in the transition from the N-1 frame to the N frame it does not move along the arrow AR. When the viewer watches such an image, the motion of the cinema image therefore remains partly unnatural, and this judder cannot be suppressed.
  • An object of the present invention is to provide a video processing device and a video display device capable of suppressing judder in cinema video and similar content.
  • A video processing apparatus according to the present invention divides one field or one frame into a plurality of subfields and processes an input image so as to perform gradation display by combining light-emitting subfields that emit light and non-light-emitting subfields that do not. It comprises: a subfield conversion unit that converts the input image into light emission data of each subfield; a detection unit that detects that the same frame image is output continuously in the input image over a period of at least N fields or N frames (N being 2 or more); a motion vector detection unit that detects a motion vector using two or more temporally successive input images; and a regeneration unit that, when the detection unit detects that the input image is such a continuous image, treats the two or more identical frame images spanning at least two field periods as one processing-target frame image and spatially rearranges the light emission data of each subfield of the processing-target frame image, as converted by the subfield conversion unit, according to the motion vector detected by the motion vector detection unit, thereby generating rearranged light emission data for each subfield of the processing-target frame image.
  • With this configuration, even for an identical frame image for which no motion vector is detected, the light emission data of each subfield of the processing-target frame image can be spatially rearranged according to the already detected motion vector, so judder in cinema video and the like can be suppressed.
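The core idea here — reusing the most recently detected motion vector for a repeated frame instead of the zero vector measured between identical images — can be sketched as follows. This is a simplification of the regeneration unit's behaviour for illustration only; the function name and data shapes are hypothetical.

```python
def vectors_for_rearrangement(detected, is_repeat):
    """Choose the vector used to rearrange each frame's subfields.

    detected:  per-frame detected motion vectors (0 between identical frames)
    is_repeat: True where the frame repeats its predecessor

    For a repeated frame, the previously detected (held) vector is
    reused, so its subfields are still shifted along the motion.
    """
    held = 0
    used = []
    for v, repeat in zip(detected, is_repeat):
        if not repeat:
            held = v  # fresh vector between distinct film frames
        used.append(held)
    return used
```

For a 2-2 pull-down sequence with detected vectors [14, 0, 14, 0], the rearrangement uses [14, 14, 14, 14], so the second copy of each film frame is shifted as well instead of being left in place.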
  • FIG. 3 is a schematic diagram illustrating an example of rearranged light emission data in which the subfields of a cinema image in the 2-2 pull-down method are rearranged by the video display device illustrated in FIG. 1, and a further schematic diagram explains an example of the light emission centroid value of each subfield.
  • A schematic diagram illustrates an example of rearranged light emission data obtained by rearranging the light emission data of each subfield of the moving image data illustrated in FIG. 2 using first and second corrected light emission centroid values.
  • Further schematic diagrams explain a method of rearranging the subfields of a cinema image by the 2-3 pull-down method and another such rearrangement method, show an example of the transition state of a display screen, and explain the light emission data of each subfield before rearrangement when the display screen shown in FIG. 8 is displayed.
  • FIG. 10 is a schematic diagram for explaining the light emission data of each subfield after rearrangement when the display screen shown in FIG. 8 is displayed, and a final schematic diagram explains the light emission state of each subfield when the conventional subfield rearrangement process is applied to a cinema image.
  • In the following embodiment, a plasma display device is described as an example of the video display device. However, the video display device to which the present invention is applied is not limited to this example; the present invention is equally applicable to any other video display device that performs gradation display by dividing one field or one frame into a plurality of subfields. Likewise, a cinema image is described as an example of a continuous image in which the same frame image is output repeatedly, but the present invention is equally applicable to other cases in which the same frame image is output repeatedly, such as television commercial video or animation video.
  • In this description, the term “subfield” also covers “subfield period”, and “subfield light emission” also covers “pixel light emission in the subfield period”. The light emission period of a subfield means the sustain period in which light emission visible to the viewer is produced by sustain discharge, and does not include the initialization period and the writing period, in which no light visible to the viewer is emitted. The non-emission period immediately before a subfield means a period in which no light visible to the viewer is emitted, and includes the initialization period, the writing period, and any sustain period in which sustain discharge is not performed.
  • FIG. 1 is a block diagram showing a configuration of a video display device according to an embodiment of the present invention.
  • The video display apparatus shown in FIG. 1 includes an input unit 1, a subfield conversion unit 2, a motion vector detection unit 3, a subfield regeneration unit 4, a cinema information detection unit 5, and an image display unit 6. The subfield conversion unit 2, motion vector detection unit 3, subfield regeneration unit 4, and cinema information detection unit 5 constitute a video processing apparatus that divides one field or one frame into a plurality of subfields and processes the input image so as to perform gradation display by combining light-emitting and non-light-emitting subfields.
  • the input unit 1 includes, for example, a tuner for TV broadcasting, an image input terminal, a network connection terminal, and the like, and moving image data is input to the input unit 1.
  • the input unit 1 performs a known conversion process or the like on the input moving image data, and outputs the frame image data after the conversion process to the subfield conversion unit 2, the motion vector detection unit 3, and the cinema information detection unit 5.
  • the sub-field conversion unit 2 sequentially converts 1-frame image data, that is, 1-field image data into light-emission data of each sub-field, and outputs it to the sub-field regeneration unit 4.
  • One field is composed of K subfields (where K is an integer equal to or greater than 2). Each subfield is assigned a predetermined weight corresponding to luminance, and its light emission period is set according to this weighting. For example, when 7 subfields with binary weighting are used, the weights of the first to seventh subfields are 1, 2, 4, 8, 16, 32, and 64, respectively.
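With binary weights 1, 2, 4, ..., 64, choosing which subfields emit for a given input level is just a binary decomposition of that level. A minimal sketch (the helper name is hypothetical, not from the patent):

```python
def emission_pattern(level, n_sf=7):
    """Per-subfield on/off emission data for a gray level (0..2**n_sf - 1).

    Subfield i (1-based) has weight 2**(i - 1); the weights of the lit
    subfields sum to the requested level.
    """
    if not 0 <= level < 2 ** n_sf:
        raise ValueError("level out of range for this subfield count")
    return [bool(level >> (i - 1) & 1) for i in range(1, n_sf + 1)]
```

For instance, `emission_pattern(100)` lights SF3, SF6, and SF7, since 4 + 32 + 64 = 100.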
  • The motion vector detection unit 3 receives two temporally consecutive frames of image data, for example the image data of frame N-1 and of frame N (where N is an integer), detects the amount of motion between these frames to obtain a motion vector for each pixel of frame N, and outputs the vectors to the subfield regeneration unit 4. A known motion vector detection method is used, for example a detection method based on matching processing for each block.
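One common form of per-block matching is full-search matching by sum of absolute differences (SAD). The sketch below is a generic illustration of that technique, not the patent's specific detector; frames are assumed to be 2-D lists of pixel values, and all names are hypothetical.

```python
def block_motion(prev, cur, x, y, bs=8, search=4):
    """Full-search block matching by sum of absolute differences (SAD).

    prev, cur: 2-D lists of pixel values (equal-length rows).
    Returns (dx, dy) such that the bs-by-bs block at (x, y) in `cur`
    best matches the block at (x + dx, y + dy) in `prev`.
    """
    h, w = len(prev), len(prev[0])
    best_sad, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > h or xx + bs > w:
                continue  # candidate window falls outside the frame
            sad = sum(
                abs(prev[yy + j][xx + i] - cur[y + j][x + i])
                for j in range(bs) for i in range(bs)
            )
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best
```

A real detector would evaluate every block of the frame and typically refine the result, but the per-block search above is the core of the matching process.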
  • The cinema information detection unit 5 receives the frame image data, detects that it is a cinema image using a known cinema video detection method, and outputs, for each frame image, cinema information (an example of continuous image information) specifying the pull-down type of the cinema image and the position of the frame image within that pull-down pattern to the subfield regeneration unit 4.
  • For example, when the cinema information detection unit 5 detects that the frame image data is a cinema image by the 2-2 pull-down method, it outputs cinema information number A1 for the first of the two identical images and cinema information number A2 for the second.
  • When the frame image data is a cinema image by the 2-3 pull-down method, in which in the 60 Hz NTSC format the same frame image is first displayed twice and the next frame image is displayed three times, cinema information numbers B1 and B2 are output for the two identical images, and cinema information numbers B1, B2, and B3 are output for the first, second, and third of the next three identical images.
  • the continuous image information is not particularly limited to the above example, and various changes can be made.
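The A1/A2 and B1/B2/B3 numbering above amounts to labelling each displayed frame with its position inside the pull-down cadence. A hedged sketch of such a labeller (the function and the exact label scheme are illustrative, mirroring the text rather than quoting the patent):

```python
def cinema_labels(n, cadence):
    """Cinema-information numbers for n consecutive displayed frames.

    cadence (2, 2): labels cycle A1, A2 (2-2 pull-down).
    cadence (2, 3): labels cycle B1, B2, B1, B2, B3 (2-3 pull-down).
    """
    prefix = "A" if cadence == (2, 2) else "B"
    # One label per displayed frame in a full cadence cycle.
    cycle = [f"{prefix}{i + 1}" for reps in cadence for i in range(reps)]
    return [cycle[i % len(cycle)] for i in range(n)]
```

For example, `cinema_labels(4, (2, 2))` gives A1, A2, A1, A2, and `cinema_labels(5, (2, 3))` gives B1, B2, B1, B2, B3.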
  • The subfield regeneration unit 4 holds the motion vector detected by the motion vector detection unit 3 for one field period or two field periods, according to the cinema information detected by the cinema information detection unit 5, treats the two or three frames of image data determined from that cinema information to be the same frame image as one processing-target frame image, and spatially rearranges the light emission data of each subfield of the processing-target frame image according to the held motion vector.
  • Specifically, the subfield regeneration unit 4 identifies the subfields that emit light among the subfields of each pixel of the processing-target frame image and, so that temporally earlier subfields are moved further in accordance with the subfield arrangement order, changes the emission data of the corresponding subfield of the pixel located spatially behind by the number of pixels corresponding to the motion vector to the emitting state, and changes the emission data of that subfield of the pixel before the move to the non-emitting state. In this way, rearranged light emission data of each subfield are generated for each pixel of each processing-target frame image and output to the image display unit 6.
  • The subfield rearrangement method is not limited to this example; any method may be used as long as each subfield is moved spatially by an amount corresponding to the motion vector such that temporally earlier subfields move further in accordance with the subfield arrangement order.
  • the image display unit 6 includes a plasma display panel, a panel drive circuit, and the like, and displays a moving image by controlling lighting or extinguishing of each subfield of each pixel of the plasma display panel based on rearranged light emission data.
  • a process for creating rearranged light emission data by the video display device configured as described above will be specifically described.
  • a process for creating rearranged light emission data for a cinema image by the 2-2 pull-down method will be described.
  • In this case, the cinema information detection unit 5 repeatedly outputs the cinema information numbers A1 and A2 in alternation, and the motion vector detection unit 3 alternately outputs inter-frame motion data indicating that no motion vector is detected (for example, 0) between the image data with cinema information number A1 and the image data with cinema information number A2, and a pixel movement amount (for example, a number of pixels) representing the amount of motion between the image data with cinema information number A2 and the image data with cinema information number A1.
  • The subfield regeneration unit 4 holds the motion vector “14” detected between the image data with cinema information number A2 and the image data with cinema information number A1 for one field period, treats the image data with cinema information number A1 and the image data with cinema information number A2 as one processing-target frame image, and spatially rearranges, according to the held motion vector, the light emission data of the 14 subfields consisting of the 7 subfields of the image data with cinema information number A1 and the 7 subfields of the image data with cinema information number A2.
  • FIG. 2 is a schematic diagram showing an example of moving image data.
  • In FIG. 2, the entire display screen DP is displayed in black (minimum luminance level) as the background, and one white line WL (maximum luminance level), one pixel wide and extending as one column in the vertical direction, moves from the left to the right of the display screen DP as the foreground. The case where this moving image data is input to the input unit 1 is described below as an example.
  • FIG. 3 is a schematic diagram showing an example of rearranged light emission data obtained by rearranging sub-fields of a cinema image by the 2-2 pull-down method by the video display device shown in FIG.
  • In the N-4 frame image data (image data with cinema information number A2), the white line WL shown in FIG. 2 is located at pixel P0 as its spatial position (horizontal position x) on the display screen DP; in the next N-3 frame image data (cinema information number A1) and N-2 frame image data (cinema information number A2), the line WL is located at pixel P14; and in the next N-1 frame image data (cinema information number A1) and N frame image data (cinema information number A2), it is located at pixel P28.
  • A motion vector V1 is detected between the N-4 frame image data (cinema information number A2) and the N-3 frame image data (cinema information number A1); no motion vector is detected between the N-3 frame image data (A1) and the N-2 frame image data (A2); a motion vector V2 is detected between the N-2 frame image data (A2) and the N-1 frame image data (A1); and no motion vector is detected between the N-1 frame image data (A1) and the N frame image data (A2).
  • In this case, the subfield regeneration unit 4 holds the motion vector V1 for one field period and moves the emission data (emitting state) of the first subfield SF1 of pixel P14 of the N-3 frame image data (image data with cinema information number A1) 13 pixels to the left, changing the emission data of SF1 of pixel P1 from the non-emitting to the emitting state (hatched subfield in the figure) and the emission data of SF1 of pixel P14 from the emitting to the non-emitting state (broken-line white subfield in the figure).
  • Similarly, the emission data (emitting state) of the second to seventh subfields SF2 to SF7 of pixel P14 of the N-3 frame image data are moved 12 to 7 pixels to the left, changing SF2 to SF7 of pixels P2 to P7 from the non-emitting to the emitting state and SF2 to SF7 of pixel P14 from the emitting to the non-emitting state.
  • the subfield regeneration unit 4 moves the light emission data (light emission state) of the first subfield SF1 of the pixel P14 of the N-2 frame image data (cinema information number A2 image data) to the left by 6 pixels.
  • the light emission data of the first subfield SF1 of the pixel P8 is changed from the non-light emission state to the light emission state
  • the light emission data of the first subfield SF1 of the pixel P14 is changed from the light emission state to the non-light emission state.
  • similarly, the light emission data (light emission state) of the second to sixth subfields SF2 to SF6 of the pixel P14 of the N-2 frame image data is moved to the left by 5 to 1 pixels; the light emission data of the second to sixth subfields SF2 to SF6 of the pixels P9 to P13 is changed from the non-light emission state to the light emission state, the light emission data of the second to sixth subfields SF2 to SF6 of the pixel P14 is changed from the light emission state to the non-light emission state, and the light emission data of the seventh subfield SF7 is not changed.
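The per-subfield shifts just described (SF1 to SF7 of the pixel P14 moved left by 13 to 7 pixels for the N-3 frame, and SF1 to SF6 moved left by 6 to 1 pixels with SF7 unchanged for the N-2 frame) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the helper name `rearrange_left` and the dictionary representation of subfield states are our assumptions.

```python
def rearrange_left(lit_pixel, shifts):
    """Move the lit pixel of each subfield left by its shift amount.

    lit_pixel: pixel index whose subfields are currently in the light emission state.
    shifts: {subfield_number: left shift in pixels} (0 means unchanged).
    Returns {subfield_number: pixel index that ends up in the light emission state}.
    """
    return {sf: lit_pixel - d for sf, d in shifts.items()}

# N-3 frame: SF1..SF7 of pixel P14 shifted left by 13..7 pixels -> pixels P1..P7
n3 = rearrange_left(14, {sf: 14 - sf for sf in range(1, 8)})
# N-2 frame: SF1..SF6 shifted left by 6..1 pixels -> pixels P8..P13, SF7 unchanged
n2 = rearrange_left(14, {**{sf: 7 - sf for sf in range(1, 7)}, 7: 0})
```

Applied to the example above, `n3` maps SF1 through SF7 onto pixels P1 through P7, and `n2` maps SF1 through SF6 onto pixels P8 through P13 while SF7 stays at P14.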
  • the N-1 frame image data (image data of cinema information number A1) and the N frame image data (image data of cinema information number A2) are set as one processing target frame image, and the light emission data of the seven subfields of the pixel P28 of the N-1 frame image data and the seven subfields of the pixel P28 of the N frame image data are spatially rearranged together.
  • the maximum light emission subfield of the regenerated light emission data (for example, the seventh subfield SF7 of the pixel P7 of the N-3 frame image data) is positioned approximately at the center between the maximum light emission subfields of the preceding and following frames (for example, the seventh subfield SF7 of the pixel P0 of the N-4 frame image data and the seventh subfield SF7 of the pixel P14 of the N-2 frame image data).
  • the light emission centroid value is a value (0 to 1) obtained by normalizing the light emission position of each subfield within one frame. When the motion vector is V [pixel/frame] and the light emission centroid value is G, the movement amount D [pixel] of each subfield is given by
  • D = V × G
  • thus, the amount of movement of each subfield corresponding to the motion vector can be calculated using the light emission centroid value of each subfield.
  • FIG. 4 is a schematic diagram for explaining an example of the light emission centroid value of each subfield.
  • the emission centroid value SG1 of the first subfield SF1 is 0.80, the emission centroid value SG2 of the second subfield SF2 is 0.72 (= (25-7)/25), the emission centroid value SG3 of the third subfield SF3 is 0.56 (= (25-11)/25), the emission centroid value SG4 of the fourth subfield SF4 is 0.32 (= (25-17)/25), and the emission centroid value SG5 of the fifth subfield SF5 is 0.
  • the motion vector MV between the N-1 frame and the N frame is 25 (pixel / frame) in the x direction (horizontal direction of the display screen), and 0 (pixel / frame) in the y direction (vertical direction of the display screen).
  • using D = V × G,
  • the movement amounts (x, y) [pixels] of the first to fifth subfields SF1 to SF5 are (20, 0), (18, 0), (14, 0), (8, 0), and (0, 0), respectively. These values are not the numerical values on the horizontal axis in FIG. 4, but the numbers of pixels from the emission centroid value SG5 of the fifth subfield SF5 to the emission centroid values SG1 to SG5 of the first to fifth subfields SF1 to SF5.
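The movement amounts quoted above follow directly from D = V × G; a minimal numeric check is sketched below (the values SG1 = 0.80 and SG5 = 0 are inferred from the stated movement amounts, and the rounding step is our assumption for obtaining whole pixels):

```python
# Emission centroid values SG1..SG5 of subfields SF1..SF5 (from FIG. 4)
SG = {1: 0.80, 2: 0.72, 3: 0.56, 4: 0.32, 5: 0.0}
V = (25, 0)  # motion vector MV between frame N-1 and frame N [pixel/frame]

# D = V x G, applied per component and rounded to whole pixels
moves = {sf: (round(V[0] * g), round(V[1] * g)) for sf, g in SG.items()}
# moves == {1: (20, 0), 2: (18, 0), 3: (14, 0), 4: (8, 0), 5: (0, 0)}
```

The computed amounts match the (20, 0) through (0, 0) values given in the text.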
  • the movement amount of each subfield corresponding to the motion vector is determined using the corrected light emission centroid value obtained by correcting the light emission centroid value of each subfield.
  • since the image data of cinema information number A1 and the image data of cinema information number A2 are set as one processing target frame image and one motion vector is detected for the processing target frame image, the subfield regeneration unit 4 stores in advance the first corrected emission centroid value of each subfield for the image data of cinema information number A1, obtained by normalizing the emission centroid value of each subfield of the image data of cinema information number A1 into the range 1/2 to 1.0, and the second corrected emission centroid value of each subfield for the image data of cinema information number A2, obtained by normalizing the emission centroid value of each subfield of the image data of cinema information number A2 into the range 0.0 to 1/2.
  • FIG. 5 is a schematic diagram showing an example of rearranged light emission data obtained by rearranging the light emission data of each subfield for the moving image data shown in FIG. 2 using the first and second corrected light emission centroid values.
  • one white line WL is positioned at the pixel P14 as a spatial position (position in the horizontal direction x) on the display screen DP in the N-2 frame image data (image data of cinema information number A2) and the N-1 frame image data (image data of cinema information number A1).
  • the subfield regeneration unit 4 generates the rearranged light emission data shown in FIG. 5 as follows.
  • the subfield regeneration unit 4 holds the motion vector between the N-2 frame image data and the N-1 frame image data for one field period, multiplies the first corrected emission centroid value Ga of each subfield by the motion vector V for the N-1 frame image data (image data of cinema information number A1), and, based on the multiplication result, moves the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P14 leftward.
  • the light emission data of the first subfield SF1 of the pixel P1 is changed from the non-light emission state to the light emission state; similarly, the light emission data of the second to seventh subfields SF2 to SF7 of the pixels P2 to P7 is changed from the non-light emission state to the light emission state, while the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P14 is changed from the light emission state to the non-light emission state.
  • the subfield regeneration unit 4 multiplies the second corrected emission centroid value Gb of each subfield by the motion vector V for the N frame image data (image data of cinema information number A2) and, based on the multiplication result, moves the light emission data (light emission state) of the first to sixth subfields SF1 to SF6 of the pixel P14 rightward by 6 to 1 pixels; the light emission data of the first to sixth subfields SF1 to SF6 of the destination pixels is changed from the non-light emission state to the light emission state, the light emission data of the first to sixth subfields SF1 to SF6 of the pixel P14 is changed from the light emission state to the non-light emission state, and the light emission data of the seventh subfield SF7 is not changed.
  • as described above, two frame images are set as one processing target frame image, and the rearranged light emission data of each subfield of the processing target frame image is generated by spatially rearranging the light emission data of each subfield of the processing target frame image according to the detected motion vector; therefore, even for the same frame image in which no motion vector is detected, the light emission data of each subfield is spatially rearranged according to the motion vector, the viewer's line-of-sight direction moves along the direction of the motion vector for all frames, and judder of cinema video produced by the pull-down method can be suppressed.
  • FIG. 6 is a schematic diagram for explaining a method of rearranging sub-fields of a cinema image by the 2-3 pull-down method.
  • assume that a subfield is allocated on the pixel P2 for each of the first frame image data FC1 and the second frame image data FC2 of the first two identical images, and that a subfield is allocated on the pixel P3 for each of the first frame image data FB1, the second frame image data FB2, and the third frame image data FB3 of the next three identical images.
  • the frame image data (image data of cinema information number C1) FC1 and the frame image data (image data of cinema information number C2) FC2 are set as the first processing target frame image.
  • since one motion vector V1 is detected for the first processing target frame image, the subfield regeneration unit 4 stores in advance the first corrected emission centroid value of each subfield for the image data of cinema information number C1, obtained by normalizing the emission centroid value of each subfield of the image data of cinema information number C1 into the range 1/2 to 1.0, and the second corrected emission centroid value of each subfield for the image data of cinema information number C2, obtained by normalizing into the range 0.0 to 1/2.
  • when the first corrected emission centroid value of cinema information number C1 is Gc1, the second corrected emission centroid value of cinema information number C2 is Gc2, and the emission centroid value is G, Gc1 = G/2 + 1/2 and Gc2 = G/2.
  • the frame image data (image data of cinema information number B1) FB1, the frame image data (image data of cinema information number B2) FB2, and the frame image data (image data of cinema information number B3) FB3 are set as the second processing target frame image. Since one motion vector V2 is detected for the processing target frame image, the subfield regeneration unit 4 stores in advance the first corrected emission centroid value of each subfield for the image data of cinema information number B1, obtained by normalizing the emission centroid value of each subfield of the image data of cinema information number B1 into the range 2/3 to 1.0; the second corrected emission centroid value of each subfield for the image data of cinema information number B2, obtained by normalizing into the range 1/3 to 2/3; and the third corrected emission centroid value of each subfield for the image data of cinema information number B3, obtained by normalizing into the range 0.0 to 1/3.
  • when the first corrected emission centroid value of cinema information number B1 is Gb1, the second corrected emission centroid value of cinema information number B2 is Gb2, the third corrected emission centroid value of cinema information number B3 is Gb3, and the emission centroid value is G:
  • Gb1 = G/3 + 2/3
  • Gb2 = G/3 + 1/3
  • Gb3 = G/3
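The corrected centroids above follow one pattern: the centroid of the k-th frame in a group of S identical frames is compressed into the slot [(S-1-k)/S, (S-k)/S]. A hedged sketch of that mapping (the function name `corrected_centroid` and the zero-based frame index are our assumptions, not the patent's notation):

```python
def corrected_centroid(g, index, group_size):
    """Map an emission centroid g in [0, 1] into the time slot of the
    index-th frame (0 = temporally first) of a group of identical frames."""
    return g / group_size + (group_size - 1 - index) / group_size

# Three-frame group (B1, B2, B3): Gb1 = G/3 + 2/3, Gb2 = G/3 + 1/3, Gb3 = G/3
gb1 = corrected_centroid(0.6, 0, 3)
gb2 = corrected_centroid(0.6, 1, 3)
gb3 = corrected_centroid(0.6, 2, 3)
# Two-frame group (C1, C2): Gc1 = G/2 + 1/2, Gc2 = G/2
gc1 = corrected_centroid(0.6, 0, 2)
```

With `group_size=2` this reproduces the C1/C2 normalization (1/2 to 1.0 and 0.0 to 1/2), and with `group_size=3` it reproduces Gb1 through Gb3.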
  • the subfield regeneration unit 4 determines the amount of movement of each subfield according to the motion vector using the corrected light emission center-of-gravity value for the 2-3 pull-down method described above.
  • the subfield regeneration unit 4 holds the motion vector V1 between the first frame image data FB3 and the frame image data FC1 for one field period, and, using the motion vector V1 and the first corrected emission centroid value Gc1, moves the light emission data (light emission state) of the first to seventh subfields SF1 to SF7 of the pixel P2 of the frame image data FC1 onto the line L1 (for example, to the right by 13 to 7 pixels).
  • the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L1 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P2 is changed from the light emission state to the non-light emission state.
  • similarly, the subfield regeneration unit 4 uses the motion vector V1 and the second corrected light emission centroid value Gc2 to move the light emission data (light emission state) of the first to sixth subfields SF1 to SF6 of the pixel P2 of the frame image data FC2 onto the line L1; the light emission data of the first to sixth subfields SF1 to SF6 of each pixel on the line L1 is changed from the non-light emission state to the light emission state, the light emission data of the first to sixth subfields SF1 to SF6 of the pixel P2 is changed from the light emission state to the non-light emission state, and the light emission data of the seventh subfield SF7 of the pixel P2 is not changed.
  • the subfield regeneration unit 4 holds the motion vector V2 between the frame image data FC2 and the frame image data FB1 for two field periods, and, using the motion vector V2 and the first corrected emission centroid value Gb1, moves the light emission data (light emission state) of the first to seventh subfields SF1 to SF7 of the pixel P3 of the frame image data FB1 onto the line L2 (for example, to the right by 20 to 14 pixels).
  • the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L2 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P3 is changed from the light emission state to the non-light emission state.
  • the subfield regeneration unit 4 uses the motion vector V2 and the second corrected light emission centroid value Gb2 to move the light emission data (light emission state) of the first to seventh subfields SF1 to SF7 of the pixel P3 of the frame image data FB2 onto the line L2 (for example, to the right by 13 to 7 pixels); the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L2 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P3 is changed from the light emission state to the non-light emission state.
  • the subfield regeneration unit 4 uses the motion vector V2 and the third corrected light emission centroid value Gb3 to move the light emission data (light emission state) of the first to sixth subfields SF1 to SF6 of the pixel P3 of the frame image data FB3 onto the line L2 (for example, to the right by 6 to 1 pixels); the light emission data of the first to sixth subfields SF1 to SF6 of each pixel on the line L2 is changed from the non-light emission state to the light emission state, the light emission data of the first to sixth subfields SF1 to SF6 of the pixel P3 is changed from the light emission state to the non-light emission state, and the light emission data of the seventh subfield SF7 of the pixel P3 is not changed.
  • as described above, the light emission data of each subfield of the first processing target frame image is rearranged according to the motion vector V1 of the first processing target frame image in which two identical frame images are continuous, and the light emission data of each subfield of the second processing target frame image is rearranged according to the motion vector V2 of the second processing target frame image in which three identical frame images are continuous; therefore, the light emission data of each subfield can be rearranged so as to accurately reflect the motion vectors V1 and V2, and judder of 2-3 pull-down video can be more reliably suppressed.
  • FIG. 7 is a schematic diagram for explaining another rearrangement method of sub-fields of a cinema image by the 2-3 pull-down method.
  • the frame image data (image data of cinema information number C1) FC1 and the frame image data (image data of cinema information number C2) FC2 are set as the first processing target frame image; the motion vector V1 is used for the frame image data FC1 and FC2, and the motion vector V2 is used for the frame image data FB1, FB2, and FB3.
  • the subfield regeneration unit 4 may store in advance: the first corrected emission centroid value of each subfield for the image data of cinema information number C1, obtained by normalizing the emission centroid value of each subfield of the image data of cinema information number C1 into the range 0.6 to 1.0; the second corrected emission centroid value of each subfield for the image data of cinema information number C2, obtained by normalizing into the range 0.2 to 0.6; the first corrected emission centroid value of each subfield for the image data of cinema information number B1, obtained by normalizing into the range 0.8 to 1.2; the second corrected emission centroid value of each subfield for the image data of cinema information number B2, obtained by normalizing into the range 0.4 to 0.8; and the third corrected emission centroid value of each subfield for the image data of cinema information number B3, obtained by normalizing into the range 0.0 to 0.4.
  • when the first corrected emission centroid value of cinema information number C1 is Gc1, the second corrected emission centroid value of cinema information number C2 is Gc2, the first corrected emission centroid value of cinema information number B1 is Gb1, the second corrected emission centroid value of cinema information number B2 is Gb2, the third corrected emission centroid value of cinema information number B3 is Gb3, and the emission centroid value is G:
  • Gc1 = G × 0.4 + 0.6
  • Gc2 = G × 0.4 + 0.2
  • Gb1 = G × 0.4 + 0.8
  • Gb2 = G × 0.4 + 0.4
  • Gb3 = G × 0.4
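The five formulas above read as overlapping windows of width 0.4 whose offsets step down by 0.2 through the 2-3 cadence B1, C1, B2, C2, B3. A sketch under that reading follows (the offset table is transcribed from the formulas; the function name `alt_corrected_centroid` is our assumption):

```python
SCALE = 0.4
# Offsets taken from the formulas Gc1, Gc2, Gb1, Gb2, Gb3 above,
# ordered here by the 2-3 cadence position of each frame
OFFSET = {"B1": 0.8, "C1": 0.6, "B2": 0.4, "C2": 0.2, "B3": 0.0}

def alt_corrected_centroid(g, frame):
    """Alternative corrected centroid: G * 0.4 + per-frame offset."""
    return g * SCALE + OFFSET[frame]
```

Note that the windows overlap (for example, B1 spans 0.8 to 1.2 while C1 spans 0.6 to 1.0), which is what lets all light emission subfields of both processing target frame images line up on the single line L0 described below.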
  • the subfield regeneration unit 4 may determine the amount of movement of each subfield according to the motion vector as follows using another corrected light emission centroid value for the 2-3 pulldown method.
  • the subfield regeneration unit 4 holds the motion vector V1 between the first frame image data FB3 and the frame image data FC1 for one field period, and, using the motion vector V1 and the first corrected emission centroid value Gc1, moves the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P2 of the frame image data FC1 onto the line L0; the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L0 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P2 is changed from the light emission state to the non-light emission state.
  • similarly, the subfield regeneration unit 4 uses the motion vector V1 and the second corrected light emission centroid value Gc2 to move the light emission data (light emission state) of the first to seventh subfields SF1 to SF7 of the pixel P2 of the frame image data FC2 onto the line L0; the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L0 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P2 is changed from the light emission state to the non-light emission state.
  • the subfield regeneration unit 4 holds the motion vector V2 between the frame image data FC2 and the frame image data FB1 for two field periods, and, using the motion vector V2 and the first corrected emission centroid value Gb1, moves the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P3 of the frame image data FB1 onto the line L0; the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L0 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P3 is changed from the light emission state to the non-light emission state.
  • the subfield regeneration unit 4 uses the motion vector V2 and the second corrected light emission centroid value Gb2 to move the light emission data (light emission state) of the first to seventh subfields SF1 to SF7 of the pixel P3 of the frame image data FB2 onto the line L0; the light emission data of the first to seventh subfields SF1 to SF7 of each pixel on the line L0 is changed from the non-light emission state to the light emission state, and the light emission data of the first to seventh subfields SF1 to SF7 of the pixel P3 is changed from the light emission state to the non-light emission state.
  • the subfield regeneration unit 4 uses the motion vector V2 and the third corrected light emission centroid value Gb3 to move the light emission data (light emission state) of the first to sixth subfields SF1 to SF6 of the pixel P3 of the frame image data FB3 onto the line L0; the light emission data of the first to sixth subfields SF1 to SF6 of each pixel on the line L0 is changed from the non-light emission state to the light emission state, the light emission data of the first to sixth subfields SF1 to SF6 of the pixel P3 is changed from the light emission state to the non-light emission state, and the light emission data of the seventh subfield SF7 of the pixel P3 is not changed.
  • each light emission subfield of the first and second processing target frame images is arranged on one line L0.
  • the rearranged light emission data of each subfield can be generated so that the viewer's line of sight moves smoothly over the entire first and second processing target frame images, and judder can be suppressed as a whole.
  • the example of the single-crest driving method, in which the luminance distribution formed by all the light emission of the plurality of subfields forms one crest, has been described; however, the present invention can be similarly applied to a two-crest driving method in which the luminance distribution formed by all the light emission of the plurality of subfields forms two or more crests, and by determining the corrected light emission centroid value in advance for each crest, the subfields can be rearranged as described above.
  • the pull-down method to which the present invention is applied is not particularly limited to the above example, and the present invention can be similarly applied to other pull-down methods.
  • the light emission subfield may be moved to a plurality of subfields of one pixel depending on the magnitude of the motion vector.
  • for ease of explanation, the luminance of the pixel has been described as an example; it is clear that the above effect can be obtained by applying the above processing to each color.
  • the subfield regeneration unit 4 may also set the motion vector to 0 and stop the rearrangement of the light emission data of each subfield of the processing target frame image.
  • the subfield regeneration unit 4 changes the motion vector so that the motion vector is temporally shortened, and changes the motion vector to the changed motion vector. Accordingly, the rearranged emission data of each subfield of the processing target frame image may be generated by spatially rearranging the emission data of each subfield of the processing target frame image.
  • the present invention is summarized as follows. The video processing apparatus according to the present invention divides one field or one frame into a plurality of subfields and performs gradation display of an input image by combining light emitting subfields that emit light and non-light emitting subfields that do not emit light, the apparatus comprising: a subfield conversion unit that converts the input image into light emission data of each subfield; a detection unit that detects that the same frame images are continuously output in a period of at least N fields or N frames (N is 2 or more); a motion vector detection unit that detects a motion vector using two or more temporally successive input images; and a regeneration unit that, when the detection unit detects that the input image is a continuous image, sets two or more frame images of at least two field periods as one processing target frame image and spatially rearranges the light emission data of each subfield of the processing target frame image converted by the subfield conversion unit according to the motion vector detected by the motion vector detection unit, thereby generating rearranged light emission data of each subfield of the processing target frame image.
  • with this video processing apparatus, when the input image is converted into light emission data of each subfield and it is detected that the input image is a continuous image, two or more frame images of at least two field periods are processed as one processing target frame image, and the rearranged light emission data of each subfield of the processing target frame image is generated by spatially rearranging the light emission data of each subfield of the processing target frame image according to the detected motion vector; therefore, even for the same frame image in which no motion vector is detected, the light emission data of each subfield can be spatially rearranged according to the already detected motion vector, and judder such as cinema video judder can be suppressed.
  • another video processing apparatus according to the present invention divides one field or one frame into a plurality of subfields and performs gradation display of an input image by combining light emitting subfields that emit light and non-light emitting subfields that do not emit light, the apparatus comprising: a subfield conversion unit that converts the input image into light emission data of each subfield; a detection unit that detects that the same frame images are continuously output in a period of at least N fields or N frames (N is 2 or more); a motion vector detection unit that detects a motion vector using two or more temporally successive input images; and a regeneration unit that, when the detection unit detects that the input image is a continuous image, generates rearranged light emission data of each subfield of the processing target frame image by spatially rearranging the light emission data of each subfield of the processing target frame image converted by the subfield conversion unit according to the motion vector detected by the motion vector detection unit.
  • the maximum light emission subfield of the regenerated light emission data is positioned approximately at the center between the maximum light emission subfields of the preceding and following frames.
  • with this video processing apparatus, when the input image is converted into the light emission data of each subfield, the input image is detected to be a continuous image, and the motion vector is detected, the maximum light emission subfield of the regenerated light emission data lies approximately at the center between the maximum light emission subfields of the preceding and following frames; therefore, even for the same frame image in which no motion vector is detected, the light emission data of each subfield can be spatially rearranged according to the already detected motion vector, and judder such as cinema video judder can be suppressed.
  • preferably, when the detection unit detects that the input image is a continuous image, continuous image information for specifying the pull-down type of the continuous image and the order of the images within the pull-down type is generated for each input image, and the regenerating unit generates the rearranged light emission data of each subfield of the processing target frame image by spatially rearranging the light emission data of each subfield of the processing target frame image according to the continuous image information and the motion vector.
  • in this case, since the rearranged light emission data of each subfield of the processing target frame image is generated according to the continuous image information and the motion vector, the light emission data of the subfields can be rearranged in a manner suited to the pull-down type, and judder such as cinema video judder can be more reliably suppressed.
  • the regenerating unit sets a plurality of frame images determined to be the same frame image from the continuous image information as the processing target frame image, and emits light of each subfield of the processing target frame image according to the motion vector. It is preferable to generate rearranged light emission data of each subfield of the processing target frame image by spatially rearranging the data.
  • in this case, since the light emission data of each subfield of the processing target frame image determined to be the same frame image is spatially rearranged according to the motion vector, the light emission data of each subfield can be rearranged so as to accurately reflect the already detected motion vector, and judder such as cinema video judder can be more reliably suppressed.
  • the regeneration unit holds the motion vector for at least one field period, and spatially rearranges the emission data of each subfield of the processing target frame image according to the held motion vector, It is preferable to generate rearranged light emission data for each subfield of the processing target frame image.
  • in this case, since the motion vector is held for at least one field period and the light emission data of each subfield of the processing target frame image is spatially rearranged according to the held motion vector, even for the same frame image in which no motion vector is detected, the light emission data of each subfield can be reliably rearranged according to the already detected motion vector, and judder such as cinema video judder can be more reliably suppressed.
  • preferably, when the regenerating unit determines from the continuous image information that the input image is a 2-2 pull-down video, the regenerating unit generates the rearranged light emission data of each subfield of the processing target frame image by spatially rearranging the light emission data of each subfield of the processing target frame image according to the held motion vector.
  • preferably, when the re-generation unit determines from the continuous image information that the input image is a 2-3 pull-down video, the re-generation unit sets the two frame images determined from the continuous image information to be the same frame image as a first processing target frame image, spatially rearranges the light emission data of each subfield of the first processing target frame image according to the first motion vector held for the first processing target frame image, sets the three frame images determined from the continuous image information to be the same frame image as a second processing target frame image, and spatially rearranges the light emission data of each subfield of the second processing target frame image according to the second motion vector held for the second processing target frame image, thereby generating rearranged light emission data of each subfield of the first and second processing target frame images.
  • in this case, since the light emission data of each subfield of the first processing target frame image is rearranged according to the first motion vector of the first processing target frame image in which two identical frame images are continuous, and the light emission data of each subfield of the second processing target frame image is rearranged according to the second motion vector of the second processing target frame image in which three identical frame images are continuous, the light emission data of each subfield can be rearranged so as to accurately reflect the first and second motion vectors, and judder of 2-3 pull-down video can be more reliably suppressed.
  • alternatively, when the re-generation unit determines from the continuous image information that the input image is a 2-3 pull-down video, the re-generation unit may set the two frame images determined from the continuous image information to be the same frame image as a first processing target frame image and the three frame images so determined as a second processing target frame image, treat the first and second processing target frame images together as a combined processing target frame image, spatially rearrange the light emission data of each subfield of the combined processing target frame image during the first two frames according to the first motion vector held for the first processing target frame image, and spatially rearrange the light emission data of each subfield of the combined processing target frame image during the next three frames according to the second motion vector held for the second processing target frame image.
  • In this case, the light emission data of each subfield of the combined processing target frame image is rearranged using both the first motion vector of the first processing target frame image, in which two identical frame images are continuous, and the second motion vector of the second processing target frame image, in which three identical frame images are continuous. Rearranged light emission data of each subfield along which the viewer's line of sight moves smoothly over the entire first and second processing target frame images can thus be generated, and judder in 2-3 pull-down video can be suppressed as a whole.
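As a rough illustration of the scheme above, the choice of motion vector for each displayed frame of a combined 2-3 pull-down group can be sketched as follows. This is a hypothetical helper, not the patent's implementation; `v1` and `v2` stand for the first and second motion vectors held for the two processing target frame images:

```python
def vector_for_frame(i, v1, v2):
    """Return the motion vector applied to the i-th displayed frame (0-4)
    of a combined 2-3 pull-down group: the first two displayed frames use
    the first processing target's vector v1, the next three use v2."""
    if not 0 <= i < 5:
        raise ValueError("a 2-3 pull-down group spans 5 displayed frames")
    return v1 if i < 2 else v2
```

Applying the vectors per displayed frame in this way is what lets one rearrangement schedule cover the whole five-frame group at once.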
  • Preferably, the regeneration unit generates the rearranged light emission data of each subfield of the processing target frame image by spatially rearranging the light emission data of each subfield according to the motion vector only when the detection unit detects that the input image is a continuous image and the input image is determined, from the motion vector detected by the motion vector detection unit, to be a full-screen scroll image.
  • In this case, because the light emission data of each subfield of the processing target frame image is spatially rearranged according to the motion vector only when the continuous image is determined to be a full-screen scroll image, judder in cinema video and the like can be more reliably suppressed while preventing the image from breaking up.
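One simple way to judge "full-screen scroll" from per-block motion vectors — hedged as an assumption, since the patent does not specify the test — is to require that all blocks share (nearly) the same non-zero vector:

```python
def is_full_screen_scroll(block_vectors, tol=1.0):
    """Judge whether per-block motion vectors describe a full-screen
    scroll: every block moves with (nearly) the same non-zero vector.
    block_vectors: iterable of (vx, vy) pairs, one per block.
    tol: allowed per-component deviation from the mean vector (assumed)."""
    vecs = list(block_vectors)
    if not vecs:
        return False
    mean_x = sum(v[0] for v in vecs) / len(vecs)
    mean_y = sum(v[1] for v in vecs) / len(vecs)
    if mean_x == 0 and mean_y == 0:
        return False  # a static image is not a scroll
    # every block must lie within tol of the mean vector
    return all(abs(v[0] - mean_x) <= tol and abs(v[1] - mean_y) <= tol
               for v in vecs)
```

Rearranging only when this test passes avoids shifting independently moving objects by a single global vector, which is one plausible reading of "preventing the image from breaking up".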
  • Another video processing apparatus of the present invention divides one field or one frame into a plurality of subfields and processes an input image for gradation display by combining light-emitting subfields that emit light and non-light-emitting subfields that do not emit light. The apparatus includes a subfield conversion unit that converts the input image into light emission data of each subfield; a detection unit that detects that the same frame images are continuously output over a period of at least N fields or N frames (N being 2 or more); a motion vector detection unit that detects a motion vector using two or more temporally continuous input images; and a regeneration unit that generates rearranged light emission data of each subfield of a processing target frame image by spatially rearranging the light emission data of each subfield according to the motion vector detected by the motion vector detection unit. Depending on the detection result of the detection unit, the regeneration unit may stop the rearrangement of the light emission data of each subfield of the processing target frame image.
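The subfield conversion step mentioned throughout can be illustrated, in the simplest binary-weighted case, as extracting the bit pattern of the gray level. This is a simplified sketch; practical panels typically use redundant, non-binary subfield weightings to reduce dynamic false contour:

```python
def to_subfields(level, weights=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Convert a gray level into per-subfield emission data
    (True = the subfield emits light) for the given weights."""
    if not 0 <= level <= sum(weights):
        raise ValueError("gray level out of range")
    out = []
    for w in reversed(weights):      # assign the heaviest subfield first
        if level >= w:
            out.append(True)
            level -= w
        else:
            out.append(False)
    return list(reversed(out))       # return in lightest-first order
```

For example, gray level 5 lights the subfields of weight 1 and 4; the sum of the lit weights always reproduces the input level.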
  • Still another video processing apparatus of the present invention divides one field or one frame into a plurality of subfields and processes an input image for gradation display by combining light-emitting subfields that emit light and non-light-emitting subfields that do not emit light. The apparatus includes a subfield conversion unit that converts the input image into light emission data of each subfield; a detection unit that detects that the same frame images are continuously output over a period of at least N fields or N frames (N being 2 or more); and a motion vector detection unit that detects a motion vector using two or more temporally continuous input images. The detected motion vector is changed so as to become sequentially shorter in time, and the rearranged light emission data of each subfield of the processing target frame image may be generated by spatially rearranging the light emission data of each subfield of the processing target frame image according to the changed motion vector.
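One plausible reading of "sequentially shorter in time" is that, for a frame repeated N times, the applied vector is scaled down step by step so that each successive repeat is shifted by a smaller amount. The linear schedule below is an assumption for illustration; the patent does not fix the exact schedule:

```python
def scaled_vectors(v, n):
    """For a frame repeated n times, return one motion vector per repeat,
    shrinking linearly so successive repeats use a sequentially shorter
    vector (assumed linear schedule).
    v: (vx, vy) vector detected for the frame; n: number of repeats."""
    vx, vy = v
    return [(vx * (n - k) / n, vy * (n - k) / n) for k in range(n)]
```

For example, with a detected vector of (6, 0) and three repeats, the repeats would be shifted by (6, 0), (4, 0) and (2, 0) respectively.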
  • The video display device of the present invention includes any of the video processing devices described above and a display unit that displays video using the rearranged light emission data output from the video processing device.
  • In these configurations, the light emission data of each subfield of the processing target frame image is spatially rearranged according to the detected motion vector to generate the rearranged light emission data of each subfield. The light emission data of each subfield can therefore be spatially rearranged according to an already detected motion vector even for a repeated identical frame image for which no new motion vector is detected, and judder in cinema video and the like can be suppressed.
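The core rearrangement can be sketched for a single pixel row and a purely horizontal vector: each subfield's lit pixels are shifted along the vector in proportion to that subfield's temporal position within the frame period, so an eye tracking the motion integrates the intended gray level. This is a simplified model with evenly spaced subfields, not the patent's exact timing:

```python
def rearrange_subfield_row(row, vx, sf_index, n_subfields):
    """Spatially rearrange one subfield's emission data (a row of booleans)
    along a horizontal motion vector vx: the sf_index-th of n_subfields
    subfields is shifted by the fraction of vx corresponding to its
    (assumed evenly spaced) position in the frame period."""
    shift = round(vx * sf_index / n_subfields)
    out = [False] * len(row)
    for x, lit in enumerate(row):
        nx = x + shift
        if 0 <= nx < len(row):  # pixels shifted off-screen are dropped
            out[nx] = lit
    return out
```

Later subfields are displaced farther, which is what places each subfield's light on the trajectory the viewer's gaze follows.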
  • Because the video processing apparatus of the present invention can suppress cinema video judder, it is useful as a video processing apparatus that divides one field or one frame into a plurality of subfields and processes an input image for gradation display by combining light-emitting subfields that emit light and non-light-emitting subfields that do not emit light.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Plasma & Fusion (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

According to the invention, a video processing device comprises: a subfield converter (2) that converts input images into light emission data for each subfield; a cinema data detector (5) that detects that the input images are cinema images in which the same video images are output consecutively over at least two field periods; a motion vector detector (3) that detects motion vectors using two chronologically consecutive input images; and a subfield regenerator (4) that, when the input images are detected to be cinema images, takes at least two video images spanning at least two field periods as one video image to be processed, spatially rearranges the light emission data for each subfield of the video image to be processed according to the detected motion vector, and thereby generates rearranged light emission data for each subfield of the video image to be processed.
PCT/JP2009/006985 2008-12-25 2009-12-17 Dispositif de traitement d'images et dispositif d'affichage d'images WO2010073561A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008330166 2008-12-25
JP2008-330166 2008-12-25

Publications (1)

Publication Number Publication Date
WO2010073561A1 (fr) 2010-07-01

Family

ID=42287212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/006985 WO2010073561A1 (fr) 2008-12-25 2009-12-17 Dispositif de traitement d'images et dispositif d'affichage d'images

Country Status (1)

Country Link
WO (1) WO2010073561A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09138666A (ja) * 1995-11-10 1997-05-27 Fujitsu General Ltd 表示装置の動画補正方法及び動画補正装置
JPH10161587A (ja) * 1996-11-29 1998-06-19 Fujitsu General Ltd 表示装置の動画補正方法及び動画補正回路
JP2000259146A (ja) * 1999-03-09 2000-09-22 Hitachi Ltd 画像表示装置
JP2008072300A (ja) * 2006-09-13 2008-03-27 Sharp Corp 映像表示装置
JP2008078859A (ja) * 2006-09-20 2008-04-03 Sharp Corp 画像表示装置及び方法、画像処理装置及び方法
JP2008256986A (ja) * 2007-04-05 2008-10-23 Hitachi Ltd 画像処理方法及びこれを用いた画像表示装置
JP2008261984A (ja) * 2007-04-11 2008-10-30 Hitachi Ltd 画像処理方法及びこれを用いた画像表示装置


Similar Documents

Publication Publication Date Title
KR100926506B1 (ko) Motion-compensated upconversion for plasma displays
JP3758294B2 (ja) Moving image correction method and moving image correction circuit for display device
JP2000036969A (ja) Stereoscopic image display method and device
WO2000043979A1 (fr) Apparatus and method for gray-scale display using subfields
JP3711378B2 (ja) Halftone display method and halftone display device
JP2009145664A (ja) Plasma display device
JP2000112428A (ja) Stereoscopic image display method and device
WO2011086877A1 (fr) Video processing device and video display device
JP4867170B2 (ja) Image display method
JPH09258688A (ja) Display device
WO2010073562A1 (fr) Image processing apparatus and image display apparatus
CN1691104B (zh) Plasma display device and driving method thereof
WO2010073561A1 (fr) Image processing device and image display device
US20050068268A1 (en) Method and apparatus of driving a plasma display panel
JP2000148084A (ja) Driving method of plasma display
JPH09138666A (ja) Moving image correction method and moving image correction device for display device
WO2010073560A1 (fr) Video processing apparatus and video display apparatus
JP2008299272A (ja) Image display device and method
WO2010089956A1 (fr) Image processing apparatus and image display apparatus
JP2009008738A (ja) Display device and display method
JP2005250449A (ja) Driving method of image display device
JP2006267929A (ja) Image display method and image display device
JP2002091368A (ja) Gradation display method and display device
JPH1124625A (ja) Plasma display device and driving method thereof
JP2005504346A (ja) Method for displaying moving images on a display device while correcting large-area flicker and power consumption peaks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09834373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 09834373

Country of ref document: EP

Kind code of ref document: A1