WO2008075657A1 - Display Control Device, Display Control Method, and Program - Google Patents
Display Control Device, Display Control Method, and Program
- Publication number
- WO2008075657A1 (PCT/JP2007/074259)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- image data
- display
- processing
- Prior art date
Classifications
- G09G1/002—Intensity circuits
- G09G1/04—Deflection circuits; constructional details not otherwise provided for
- G09G3/20—Control arrangements for visual indicators other than cathode-ray tubes, presenting an assembly of characters by combining individual elements arranged in a matrix
- G09G3/2007—Display of intermediate tones
- G09G3/2022—Display of intermediate tones by time modulation using sub-frames
- G09G3/2051—Display of intermediate tones using dithering with a spatial dither pattern
- G09G3/2059—Display of intermediate tones using error diffusion
- G09G3/2062—Display of intermediate tones using error diffusion in time
- G09G3/36—Matrix indicators controlled by light from an independent source, using liquid crystals
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—characterised by the way in which colour is displayed
- G09G5/363—Graphics controllers
- G09G2300/0443—Pixel structures with several sub-pixels for the same colour in a pixel
- G09G2300/0452—Details of colour pixel setup, e.g. a pixel composed of a red, a blue and two green components
- G09G2320/08—Arrangements for setting, manually or automatically, display parameters of the display terminal
- G09G2320/10—Special adaptations of display systems for operation with variable images
- G09G2320/103—Detection of image changes, e.g. an index representative of the image change
- G09G2320/106—Determination of movement vectors or equivalent parameters within the image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0435—Change or adaptation of the frame rate of the video stream
- G09G2340/10—Mixing of images
- G09G2360/06—Use of more than one graphics processor to process data before displaying to one or more screens
- G09G2360/16—Calculation or use of calculated indices related to luminance levels in display data
- G09G2360/18—Use of a frame buffer in a display terminal, inclusive of the display panel
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
- H04N7/0127—Standards conversion by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
- H04N7/0145—Standards conversion involving class-adaptive interpolation, i.e. using a class determined for a pixel from characteristics of the neighbouring pixels
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/004—for digital television systems
- H04N17/04—for receivers
- H04N21/4314—Generation of visual interfaces for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
- H04N21/44—Processing of video elementary streams
- H04N21/440263—Reformatting by altering the spatial resolution, e.g. for displaying on a connected PDA
- H04N21/440281—Reformatting by altering the temporal resolution, e.g. by frame skipping
Definitions
- Display control apparatus, display control method, and program
- The present invention relates to a display control device, a display control method, and a program, and more particularly to a display control device, display control method, and program that make it possible, for example on the broadcast side of a television broadcast, to check the image that will be displayed on the receiving side.
- Patent Document 1 Japanese Patent Laid-Open No. 2001-136548
- In recent years, images have come to be viewed on high-performance display devices, that is, display devices having larger screens than, for example, the display device used for checking.
- A display control apparatus according to the present invention is a display control apparatus that controls display of an image, and includes signal processing means for performing predetermined signal processing on input image data. An image corresponding to the input image data is displayed in one display area of the screen of a display device whose screen has more pixels than the input image data, and an image corresponding to the processed image data obtained by the predetermined signal processing is displayed in another display area of the screen.
- A display control method or program according to the present invention is a display control method for controlling display of an image, or a program for causing a computer to execute display control processing. It includes performing predetermined signal processing on input image data, displaying an image corresponding to the input image data in one display area of the screen of a display device whose screen has more pixels than the input image data, and displaying an image corresponding to the processed image data obtained by the predetermined signal processing in another display area of the screen.
- In the present invention, predetermined signal processing is performed on the input image data. An image corresponding to the input image data is displayed in one display area of the screen of a display device whose screen has more pixels than the input image data, and an image corresponding to the processed image data obtained by the predetermined signal processing is displayed in another display area of the screen.
- The program may be transmitted through a transmission medium, or recorded on and provided via a recording medium.
- According to the present invention, an image can be displayed, and by checking that display, the image that will be shown on the receiving side, for example, can be checked.
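The arrangement described above, in which the raw input image and the signal-processed image share one large screen, can be sketched as follows. This is an illustrative sketch only: the function names, the side-by-side layout, and the use of NumPy arrays as frame buffers are assumptions for the example, not details taken from the patent.

```python
import numpy as np

def compose_monitor_frame(input_image, signal_process, screen_shape):
    """Sketch of the claimed display control: show the input image in
    display area #1 and its processed version in display area #2 of a
    single screen that has more pixels than the input image data."""
    processed = signal_process(input_image)
    screen = np.zeros(screen_shape, dtype=input_image.dtype)
    h, w = input_image.shape
    # display area #1 (top-left): the input image as received
    screen[:h, :w] = input_image
    # display area #2 (to the right): the image after signal processing
    ph, pw = processed.shape
    screen[:ph, w:w + pw] = processed
    return screen
```

Because both areas live in one frame buffer, an operator can compare the unprocessed and processed images side by side on the same panel, which is the checking use case the patent describes.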
- FIG. 1 is a block diagram showing a configuration example of an embodiment of a monitor system to which the present invention is applied.
- FIG. 2 is a diagram showing a configuration example of a screen of display device 2.
- FIG. 3 is a flowchart for explaining processing of the monitor system.
- FIG. 4 is a block diagram showing a first configuration example of the signal processing unit 12.
- FIG. 5 is a diagram showing a display example of the display device 2.
- FIG. 6 is a diagram showing a display example of an image of mH × mV pixels.
- FIG. 7 is a block diagram showing a second configuration example of the signal processing unit 12.
- FIG. 8 is a diagram showing a display example of the display device 2.
- FIG. 9 is a block diagram showing a third configuration example of the signal processing unit 12.
- FIG. 10 is a diagram showing a display example of the display device 2.
- FIG. 11 is a block diagram showing a fourth configuration example of the signal processing unit 12.
- FIG. 12 is a diagram showing a display example of the display device 2.
- FIG. 13 is a block diagram showing a fifth configuration example of the signal processing unit 12.
- FIG. 14 is a diagram showing a display example of the display device 2.
- FIG. 15 is a block diagram showing a sixth configuration example of the signal processing unit 12.
- FIG. 16 is a diagram showing a display example of display device 2.
- FIG. 17 is a diagram illustrating a pseudo inch image generation process.
- FIG. 18 is a diagram illustrating a pseudo inch image generation process.
- FIG. 19 is a diagram illustrating a pseudo inch image generation process.
- FIG. 20 is a flowchart for explaining the processing of the display control apparatus 1 when displaying an image corresponding to pseudo n-inch image data in the display area # 1.
- FIG. 21 is a block diagram showing a seventh configuration example of the signal processing unit 12.
- FIG. 22 is a diagram showing a display example of the display device 2.
- FIG. 23 is a block diagram illustrating an eighth configuration example of the signal processing unit 12.
- FIG. 24 is a diagram showing a display example of the display device 2.
- FIG. 25 is a block diagram illustrating a configuration example of the image conversion apparatus 101 that performs image conversion processing using class classification adaptive processing.
- FIG. 26 is a flowchart for explaining image conversion processing by the image conversion apparatus 101.
- FIG. 27 is a block diagram illustrating a configuration example of a learning device 121 that learns tap coefficients.
- FIG. 28 is a block diagram illustrating a configuration example of a learning unit 136 of the learning device 121.
- FIG. 29 is a diagram for explaining various image conversion processes.
- FIG. 30 is a flowchart for explaining learning processing by the learning device 121.
- FIG. 31 is a block diagram illustrating a configuration example of an image conversion device 151 that performs image conversion processing using class classification adaptation processing.
- FIG. 32 is a block diagram illustrating a configuration example of the coefficient output unit 155 of the image conversion apparatus 151.
- FIG. 33 is a block diagram illustrating a configuration example of a learning device 171 that learns coefficient seed data.
- FIG. 34 is a block diagram illustrating a configuration example of a learning unit 176 of the learning device 171.
- FIG. 35 is a flowchart for explaining the learning process by the learning device 171.
- FIG. 36 is a block diagram illustrating a configuration example of an embodiment of a computer to which the present invention has been applied.
- FIG. 37 is a block diagram showing a configuration of an example of a conventional FPD display device.
- FIG. 38 is a block diagram illustrating a configuration example of an embodiment of an image signal processing device included in an FPD display device.
- FIG. 39 is a block diagram illustrating a configuration example of a CRT display device.
- FIG. 40 is a flowchart for explaining processing of the image signal processing device.
- FIG. 41 is a block diagram illustrating a configuration example of a VM processing unit 10034.
- FIG. 42 is a diagram showing examples of VM coefficients.
- FIG. 43 is a diagram illustrating a method for obtaining a VM coefficient.
- FIG. 44 is a diagram showing the relationship between beam current and spot size.
- FIG. 45 is a diagram showing a color identification mechanism.
- FIG. 46 is a diagram showing electron beam spots.
- FIG. 47 is a diagram showing electron beam spots.
- FIG. 49 is a diagram showing an electron beam intensity distribution approximated by a two-dimensional normal distribution.
- FIG. 50 is a diagram showing the intensity distribution of the electron beam passing through the slit of the aperture grille.
- FIG. 51 is a diagram showing the intensity distribution of an electron beam and the intensity distribution of the portion of the electron beam that passes through the slits of the aperture grille.
- FIG. 52 is a diagram showing the intensity distribution of an electron beam and the intensity distribution of the portion of the electron beam that passes through the slits of the shadow mask.
- FIG. 53 is a diagram showing the intensity distribution of an electron beam and the intensity distribution of the portion of the electron beam that passes through the slits of the shadow mask.
- FIG. 54 is a diagram illustrating the integration for obtaining the intensity of the electron beam passing through the slit.
- FIG. 55 is a diagram showing a state where an electron beam is incident on an aperture grille as a color selection mechanism.
- FIG. 56 is a diagram showing a pixel and an electron beam intensity distribution.
- FIG. 57 is a diagram illustrating a configuration example of a circuit for obtaining an EB influence component.
- FIG. 58 is a block diagram showing a configuration example of an EB processing unit 10220.
- FIG. 59 is a block diagram showing another configuration example of the EB processing unit 10220.
- FIG. 60 is a block diagram illustrating a configuration example of a portion that performs color temperature compensation processing of a CRT γ processing unit 10035.
- FIG. 61 is a block diagram showing another configuration example of the VM processing unit 10034.
- FIG. 62 is a block diagram illustrating a configuration example of a luminance correction unit 10310.
- FIG. 63 is a diagram for explaining luminance correction processing.
- FIG. 64 is a block diagram showing another configuration example of the luminance correction unit 10310.
- FIG. 65 is a flowchart for explaining a learning process for obtaining a tap coefficient as a VM coefficient.
- FIG. 66 is a flowchart for describing learning processing for obtaining a class prediction coefficient.
- FIG. 67 is a block diagram illustrating a configuration example of an embodiment of a computer.
- FIG. 69 is a block diagram illustrating a configuration example of the motion detection unit 20100.
- FIG. 70 is a diagram for explaining motion detection.
- FIG. 71 is a diagram for explaining motion detection.
- FIG. 72 is a block diagram showing a configuration example of a subfield developing unit 20200.
- FIG. 73 is a diagram illustrating a configuration example of a subfield.
- FIG. 74 is a diagram illustrating a configuration example of a subfield.
- FIG. 75 is a block diagram illustrating a configuration example of a light amount integrating unit 20300.
- FIG. 76 is a diagram for explaining the generation of a pseudo contour.
- FIG. 77 is a diagram showing a light amount integration region.
- FIG. 78 is a diagram showing a light amount integration region.
- A block diagram showing a configuration example of a second embodiment of an image processing apparatus that uses a first display device to reproduce the state in which an image is displayed on a second display device having characteristics different from those of the first display device.
- FIG. 80 is a block diagram illustrating a configuration example of a gradation conversion unit 20400.
- FIG. 81 is a diagram for explaining the operation of the dither conversion circuit 20404.
- A block diagram showing a configuration example of a third embodiment of an image processing apparatus that uses a first display device to reproduce the state in which an image is displayed on a second display device having characteristics different from those of the first display device.
- FIG. 84 is a block diagram illustrating a configuration example of a visual correction unit 20500.
- FIG. 85 is a diagram for explaining the operation of the dither correction circuit 20501.
- FIG. 86 is a diagram for explaining the operation of the diffusion error correction circuit 20502.
- FIG. 88 is a flowchart illustrating a motion detection process.
- FIG. 89 is a flowchart for describing processing for expanding an image into subfields.
- FIG. 90 is a flowchart for explaining the process of integrating the amount of light.
- A flowchart explaining the operation of the second embodiment of the image processing apparatus that uses a first display device to reproduce the state in which an image is displayed on a second display device having characteristics different from those of the first display device.
- FIG. 92 is a flowchart for explaining a process of converting gradation.
- FIG. 95 is a flowchart for describing visual correction processing.
- FIG. 96 is a diagram showing a display model.
- FIG. 97 is a diagram showing pixels of a display model.
- FIG. 98 is a diagram showing a light amount integration region in the display model.
- FIG. 99 is a diagram showing a cross-sectional area.
- FIG. 100 is a diagram showing a cross-sectional area that moves in the display model as time T elapses.
- FIG. 101 is a diagram showing a cross-sectional area that moves in the display model as time T elapses.
- FIG. 102 is a flowchart for explaining light amount integration processing.
- FIG. 103 is a block diagram showing another configuration example of the light amount integrating unit 20300.
- FIG. 104 is a diagram showing a light amount integrated value table.
- FIG. 105 is a flowchart for explaining light amount integration processing.
- FIG. 106 is a block diagram illustrating a configuration example of an embodiment of a computer.
- FIG. 107 is a block diagram illustrating a configuration example of an embodiment of an image signal processing device that reproduces the appearance of a PDP on a display other than the PDP.
- FIG. 108 is a diagram for explaining stripe arrangement reproduction processing.
- FIG. 109 is a block diagram illustrating a configuration example of an image processing unit 30001 that performs stripe array reproduction processing.
- FIG. 110 is a flowchart illustrating a stripe arrangement reproduction process.
- FIG. 111 is a diagram for explaining a color shift that occurs in an image displayed on a PDP.
- FIG. 112 is a diagram illustrating coefficients that are multiplied with an image signal in color shift addition processing.
- FIG. 113 is a block diagram illustrating a configuration example of an image processing unit 30001 that performs color misregistration addition processing.
- FIG. 114 is a flowchart illustrating color misregistration addition processing.
- FIG. 115 is a diagram for explaining inter-pixel pitch reproduction processing.
- FIG. 116 is a diagram illustrating a configuration example of an image processing unit 30001 that performs inter-pixel pitch reproduction processing.
- A flowchart for explaining inter-pixel pitch reproduction processing.
- FIG. 119 is a block diagram illustrating a configuration example of an image processing unit 30001 that performs spatial dither addition processing.
- FIG. 120 is a diagram showing a look-up table stored in a spatial dither pattern ROM30043.
- FIG. 122 is a block diagram illustrating a configuration example of an image processing unit 30001 that performs time dither addition processing.
- FIG. 123 is a flowchart illustrating time dither addition processing.
- FIG. 124 is a block diagram showing a configuration example of an image processing unit 30001 that performs all of color misregistration addition processing, spatial dither addition processing, time dither addition processing, inter-pixel pitch reproduction processing, and stripe arrangement reproduction processing.
- FIG. 125 is a flowchart illustrating processing of an image processing unit 30001.
- FIG. 126 is a block diagram illustrating a configuration example of an embodiment of a computer.
- ABL processing unit, 10034 VM processing unit, 10035 CRT γ processing unit, 10036 full screen brightness average level detection unit, 10037 peak detection differential control value detection unit, 10038 ABL control unit,
- 10039 VM control unit, 10040 display color temperature compensation control unit, 10051 brightness adjustment contrast adjustment unit, 10052 high image quality processing unit, 10053 gain adjustment unit, 10054 γ correction unit, 10055 video amplifier, 10056 CRT, 10057 FBT, 10058 beam current detection unit, 10059 ABL control unit, 10060 image signal differentiation circuit, 10061 VM drive circuit, 10101 bus, 10102 CPU, 10103 ROM, 10104 RAM, 10105 hard disk, 10106 output unit, 10107 input unit, 10108 communication unit, 10109 drive, 10110 I/O interface, 10111 removable recording medium, 10210 brightness correction unit, 10211 VM coefficient generation unit, 10212 calculation unit, 10220 EB processing unit, 10241 EB coefficient generation unit, 10242A to 10242D, 10242F to 10242I calculation unit, 10250 EB function section, 10251 to 10259 delay section, 10260 EB coefficient generation section, 10261 product-sum operation section, 10271, 10272 selector, 102
- Figure 1 is a block diagram showing a configuration example of an embodiment of a monitor system to which the present invention is applied (here, a system means a logical collection of multiple devices, regardless of whether the devices of each configuration are in the same casing).
- the monitor system includes a display control device 1, a display device 2, and a remote commander 3, and is used for checking image quality and the like, for example, in a broadcasting station that performs television broadcasting.
- To the monitor system, image data output by a camera that captures an image, image data output by an editing device that edits material, image data output by a decoder that decodes encoded data encoded by an MPEG (Moving Picture Experts Group) method or the like, and image data of a moving image of a pre-broadcast program at another broadcasting station or the like are supplied as input image data.
- An image corresponding to the image data of the pre-broadcast program as the input image data is displayed as it would appear on a receiving-side display device (a display device other than the display device 2) in a home or the like.
- An evaluator or the like who checks (evaluates) image quality looks at that display, and can thereby check with what image quality the image corresponding to the input image data will be displayed on the receiving-side display device.
- The display control device 1 includes an image conversion unit 11, a signal processing unit 12, a display control unit 13, and a control unit 14.
- The image conversion unit 11 converts the input image data into check image data, which is the target for checking what kind of image will be displayed on the receiving-side display device, performs image conversion processing that converts the number of pixels of the check image data as necessary, and supplies the result to the signal processing unit 12 and the display control unit 13.
- The signal processing unit 12 is composed of three units: a first signal processing unit 12, a second signal processing unit 12, and a third signal processing unit 12. Each performs signal processing on the check image data from the image conversion unit 11 and supplies the processed image data obtained by the signal processing to the display control unit 13.
- The first signal processing unit 12 performs signal processing according to control from the control unit 14 on the check image data from the image conversion unit 11, and supplies the processed image data obtained by the signal processing to the display control unit 13.
- Similarly, the second signal processing unit 12 and the third signal processing unit 12 perform signal processing according to control from the control unit 14 on the check image data from the image conversion unit 11, and supply the processed image data obtained by their signal processing to the display control unit 13.
- Under the control of the control unit 14, the display control unit 13 displays an image corresponding to the check image data supplied from the image conversion unit 11 in a partial display area of the screen of the display device 2. Further, under the control of the control unit 14, the display control unit 13 displays the images corresponding to the processed image data from the first, second, and third signal processing units 12 in other partial display areas of the screen of the display device 2.
- The display control unit 13 controls the position and size of the images displayed on the display device 2 according to parameters supplied from the control unit 14.
- Hereinafter, the processed image data that the first, second, and third signal processing units 12 supply to the display control unit 13 are also referred to as first processed image data, second processed image data, and third processed image data, respectively.
- The control unit 14 receives an operation signal transmitted from the remote commander 3 or an operation unit (not shown) provided in the display control device 1, and in response to the operation signal controls the first signal processing unit 12, the second signal processing unit 12, the third signal processing unit 12, and the display control unit 13.
- The control unit 14 also supplies necessary information to the first signal processing unit 12, the second signal processing unit 12, and the third signal processing unit 12.
- The display device 2 is a device that displays an image on, for example, an LCD (Liquid Crystal Display), and has a screen with more pixels than the check image data that the image conversion unit 11 supplies to the signal processing unit 12 and the display control unit 13. Under the control of the display control unit 13, the display device 2 displays the image corresponding to the check image data in a partial display area of the screen, and displays the images corresponding to the first processed image data, the second processed image data, and the third processed image data in other partial display areas of the screen.
- The remote commander 3 is operated, for example, by an evaluator or the like who checks the image quality and the like of the check image data, and thus of the image corresponding to the input image data, as displayed on the receiving-side display device, and transmits an operation signal corresponding to the operation to the control unit 14 by infrared rays or the like.
- FIG. 2 shows a configuration example of the screen of the display device 2.
- Of the four display areas # 0 to # 3, the image corresponding to the check image data is displayed in the upper-left display area # 0, the image corresponding to the first processed image data in the upper-right display area # 1, the image corresponding to the second processed image data in the lower-left display area # 2, and the image corresponding to the third processed image data in the lower-right display area # 3.
- The screen of the display device 2 consists of 2H × 2V monitor pixels (2H pixels horizontally × 2V pixels vertically).
- An image with a 16:9 aspect ratio, such as an HDTV (High-Definition Television) image, can be displayed in each display area #i.
- the screen of display device 2 is divided into four display areas # 0 to # 3.
- Each of the four display areas # 0 to # 3 is treated as one virtual screen, and a (single) image is displayed in each of the display areas # 0 to # 3.
- For example, when the display area #i is composed of 1920 × 1080 monitor pixels, the display device 2 consists of [2 × 1920] × [2 × 1080] monitor pixels and can display images of higher definition than HDTV images.
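The quadrant layout described above can be sketched as follows. This is a minimal hypothetical illustration, not taken from the patent: it assumes each display area has H × V = 1920 × 1080 monitor pixels and maps the area index # 0 to # 3 to its pixel rectangle on the 2H × 2V screen.

```python
# Hypothetical sketch: pixel rectangles of display areas #0..#3 on the
# 2H x 2V screen of display device 2, assuming H = 1920 and V = 1080.

H, V = 1920, 1080  # monitor pixels of one display area (same as HDTV)

def display_area_rect(i):
    """Return (x, y, width, height) of display area #i.

    Areas are quadrants: #0 upper-left, #1 upper-right,
    #2 lower-left, #3 lower-right.
    """
    col = i % 2   # 0 = left column, 1 = right column
    row = i // 2  # 0 = upper row, 1 = lower row
    return (col * H, row * V, H, V)

# The whole screen is [2 x 1920] x [2 x 1080] monitor pixels.
screen_w, screen_h = 2 * H, 2 * V
```

With these assumptions, `display_area_rect(3)` gives `(1920, 1080, 1920, 1080)`, the lower-right quadrant.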
- In step S11, the image conversion unit 11 takes the input image data as check image data and determines whether the check image data is composed of the same number of pixels as the monitor pixels constituting the display area # 0, that is, whether the check image data is composed of H × V pixels.
- If it is determined in step S11 that the check image data is composed of the same H × V pixels as the monitor pixels of the display area # 0, the process skips step S12 and proceeds to step S13.
- If it is determined in step S11 that the check image data is composed of a number of pixels different from the H × V monitor pixels of the display area # 0, the process proceeds to step S12.
- the image conversion unit 11 1 performs image conversion processing for converting the number of pixels of the check image data into the same HXV pixel as the number of monitor pixels constituting the display area # 0.
- the check image data after the image conversion process is supplied to the signal processing unit 12 and the display control unit 13, and the process proceeds to step S13.
- In step S13, the first signal processing unit 12, the second signal processing unit 12, and the third signal processing unit 12 constituting the signal processing unit 12 each perform signal processing according to control from the control unit 14 on the check image data from the image conversion unit 11, supply the resulting first processed image data, second processed image data, and third processed image data to the display control unit 13, and the process proceeds to step S14.
- In step S14, the display control unit 13 displays an image corresponding to the check image data from the image conversion unit 11 in the display area # 0 of the display device 2 under the control of the control unit 14.
- Also in step S14, under the control of the control unit 14, the display control unit 13 displays the image corresponding to the first processed image data from the first signal processing unit 12 in the display area # 1, the image corresponding to the second processed image data from the second signal processing unit 12 in the display area # 2, and the image corresponding to the third processed image data from the third signal processing unit 12 in the display area # 3.
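The flow of steps S11 to S14 can be sketched in code. This is a hypothetical stand-in, not the patent's implementation: images are modeled as simple (width, height, label) tuples, and `convert_pixel_count` and the signal processors are placeholders for the actual image conversion and signal processing.

```python
# Hypothetical sketch of steps S11-S14 of the monitor-system flowchart.
# Images are (width, height, label) tuples; all processing is a stand-in.

H, V = 1920, 1080  # monitor pixels of display area #0

def convert_pixel_count(image, size):
    """Stand-in for the image conversion processing of step S12."""
    w, h = size
    return (w, h, image[2] + "+resized")

def monitor_system_process(check_image, signal_processors):
    # Step S11: does the check image already have H x V pixels?
    if (check_image[0], check_image[1]) != (H, V):
        # Step S12: convert the pixel count to H x V.
        check_image = convert_pixel_count(check_image, (H, V))
    # Step S13: each signal processing unit produces processed image data
    # from the same check image data.
    processed = [sp(check_image) for sp in signal_processors]
    # Step S14: area #0 shows the check image, areas #1..#3 show the
    # processed images.
    return {0: check_image, **{i: img for i, img in enumerate(processed, 1)}}
```

Calling it with three identity processors returns a mapping from display area index to the image shown there.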
- As described above, the image corresponding to the check image data is displayed in the display area # 0, and the display area # 1 shows the image corresponding to the first processed image data obtained by performing predetermined signal processing on the check image data, that is, the image that would be displayed if the image corresponding to the check image data were displayed on a certain type of receiving-side display device.
- Similarly, the display area # 2 shows the image corresponding to the second processed image data obtained by performing predetermined signal processing on the check image data, that is, the image that would be displayed if the image corresponding to the check image data were displayed on another type of receiving-side display device, and the display area # 3 shows the image corresponding to the third processed image data, that is, the image that would be displayed if the image corresponding to the check image data were displayed on still another type of receiving-side display device.
- Therefore, the image quality of the program image data, for example its S/N (Signal-to-Noise Ratio), can be checked from the image displayed in the display area # 0. Furthermore, how the image displayed in the display area # 0 will appear on the various receiving-side display devices can be checked from the images displayed in the display areas # 1 to # 3.
- As described above, the display device 2 has a screen with more monitor pixels than the H × V pixels of the check image data, as shown in FIG. 2. The image corresponding to the check image data is displayed in the display area # 0, which is part of the screen, and the images corresponding to the processed image data obtained by performing predetermined signal processing on the check image data, that is, the images that would be displayed if the image corresponding to the check image data were displayed on receiving-side display devices, can be displayed in the display areas # 1, # 2, and # 3, which are the other display areas of the screen.
- Therefore, before the image corresponding to the check image data is broadcast as a program and received by a receiving-side display device, it is possible to check both the image corresponding to the check image data and its display state on the receiving-side display device, and in particular the deterioration state of the image (deteriorated image) that would be displayed on the receiving-side display device.
- Moreover, since the images are physically displayed on the single screen of the display device 2, unlike the case where the image corresponding to the check image data and the images corresponding to the processed image data are displayed on different display devices, there is no need to consider differences in the various characteristics of the display devices.
- FIG. 4 illustrates a first configuration example of the signal processing unit 12 in FIG.
- In FIG. 4, the first signal processing unit 12, the second signal processing unit 12, and the third signal processing unit 12 each include an image conversion unit 31.
- The image conversion unit 31 performs, on the check image data from the image conversion unit 11 and according to enlargement ratio information supplied from the control unit 14, signal processing corresponding to the image enlargement processing performed by receiving-side display devices.
- That is, some receiving-side display devices have an enlargement function that enlarges the image of a program from a broadcasting station, and the image conversion unit 31 performs signal processing equivalent to the image enlargement processing performed by such a display device.
- Specifically, according to the enlargement ratio information supplied from the control unit 14, the image conversion unit 31 performs image conversion processing that converts the check image data from the image conversion unit 11 into m-fold enlarged image data, and supplies the m-fold enlarged image data obtained by the image conversion processing to the display control unit 13 (FIG. 1) as processed image data.
- Similarly, according to the enlargement ratio information supplied from the control unit 14, the image conversion unit 31 performs image conversion processing that converts the check image data from the image conversion unit 11 into m'-fold enlarged image data, and supplies the m'-fold enlarged image data obtained by the image conversion processing to the display control unit 13 as processed image data.
- The image conversion unit 31 also performs, according to the enlargement ratio information supplied from the control unit 14, image conversion processing that converts the check image data from the image conversion unit 11 into m''-fold enlarged image data, and supplies the m''-fold enlarged image data obtained by the image conversion processing to the display control unit 13 as processed image data.
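The m-fold enlargement described above can be sketched in its simplest form. The patent does not specify the interpolation method, so this hypothetical sketch uses plain pixel replication: each of the H × V pixels is copied m times horizontally and vertically, yielding mH × mV pixels.

```python
# Hypothetical sketch of the m-fold enlargement: each pixel of the
# check image is replicated m times horizontally and vertically, so an
# H x V image becomes an mH x mV image. The actual image conversion
# processing may use more elaborate interpolation; replication is only
# the simplest illustration.

def enlarge(image, m):
    """image: list of rows of pixel values; returns the m-fold enlargement."""
    out = []
    for row in image:
        wide = [p for p in row for _ in range(m)]  # m copies per pixel
        out.extend([wide[:] for _ in range(m)])    # m copies per row
    return out
```

For example, enlarging a 2 × 2 image with m = 2 produces a 4 × 4 image in which every source pixel occupies a 2 × 2 block.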
- FIG. 5 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG.
- an image corresponding to the check image data (hereinafter also referred to as a check image as appropriate) is displayed in the display area # 0.
- The image corresponding to the m-fold enlarged image data is displayed in the display area # 1, the image corresponding to the m'-fold enlarged image data in the display area # 2, and the image corresponding to the m''-fold enlarged image data in the display area # 3.
- Therefore, for a receiving-side display device having the enlargement function, the display state (image quality after enlargement, etc.) when the image of the program from the broadcasting station is enlarged and displayed by the enlargement function can be checked.
- The enlargement ratios m, m', and m'' can be specified, for example, by operating the remote commander 3 (FIG. 1).
- In the conversion of the check image data into m-fold enlarged image data, the numbers of pixels of the check image data in the horizontal and vertical directions are each multiplied by m. Since the check image data is composed of H × V pixels, the same number of pixels as the display area #i composed of H × V monitor pixels, the m-fold enlarged image data is composed of mH × mV pixels.
- FIG. 6 shows a display example of an image of mH ⁇ mV pixels corresponding to m-fold enlarged image data.
- Of the mH × mV-pixel image corresponding to the m-fold enlarged image data, the portion corresponding to an H × V-pixel area is displayed in the display area # 1. The region of the check image corresponding to the H × V-pixel region displayed in the display area # 1 (indicated by hatching in FIG. 6) is called the display range area. The display range area can be specified, for example, by operating the remote commander 3, and according to that specification the display control unit 13 displays the corresponding part of the mH × mV-pixel image of the m-fold enlarged image data in the display area # 1.
- The display range area in the check image can be displayed, for example, superimposed on the check image in the display area # 0 where the check image is displayed.
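Selecting the displayed portion can be sketched as a crop of the enlarged image. This hypothetical sketch assumes the display range area is given by the top-left position (x, y) of the corresponding region of the check image; the H × V window shown in display area # 1 then starts at (m·x, m·y) in the mH × mV enlarged image.

```python
# Hypothetical sketch: cropping the H x V window of the mH x mV enlarged
# image that is shown in display area #1. (x, y) is the top-left corner
# of the display range area in check-image coordinates; the assumed
# coordinate convention is an illustration, not the patent's definition.

def displayed_window(enlarged, x, y, m, H, V):
    """Crop the H x V window corresponding to check-image position (x, y)."""
    top, left = m * y, m * x
    return [row[left:left + H] for row in enlarged[top:top + V]]
```

With small numbers: for a 4 × 4 enlargement (m = 2) of a 2 × 2 check image and H = V = 2, the window at (x, y) = (1, 1) is the lower-right 2 × 2 block.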
- FIG. 7 illustrates a second configuration example of the signal processing unit 12 in FIG.
- To the simulation processing unit 41, type information indicating the type of display device to be simulated is supplied from the control unit 14.
- According to the type information supplied from the control unit 14, the simulation processing unit 41 performs, on the check image data from the image conversion unit 11, signal processing that generates, as processed image data, image data for displaying in the display area #i of the display device 2 an image corresponding to the image that would be displayed on another display device having display characteristics different from those of the display device 2 when the check image is displayed on that other display device.
- Here, as receiving-side display devices having display characteristics different from those of the display device 2, which is constituted by an LCD, there are, for example, CRT (Cathode Ray Tube) displays, PDPs (Plasma Display Panels), organic EL (Electro Luminescence) displays, and FEDs (Field Emission Displays).
- In the future, display devices having new types of display panels may also be developed.
- The simulation processing unit 41 performs signal processing that generates, as processed image data, image data for displaying in the display area #i of the display device 2 an image corresponding to the check image as it would be displayed on a receiving-side display device having display characteristics different from those of the display device 2.
- Here, the image data for displaying on the LCD display device 2 an image corresponding to the check image as displayed on a receiving-side display device having an organic EL display is called pseudo organic EL image data, and the signal processing that generates pseudo organic EL image data from the check image data is called organic EL simulation processing.
- Similarly, the image data for displaying on the LCD display device 2 an image corresponding to the check image as displayed on a receiving-side display device having a PDP is called pseudo PDP image data, and the signal processing that generates pseudo PDP image data from the check image data is called PDP simulation processing.
- Likewise, the image data for displaying on the LCD display device 2 an image corresponding to the check image as displayed on a receiving-side display device having a CRT is called pseudo CRT image data, and the signal processing that generates pseudo CRT image data from the check image data is called CRT simulation processing.
- the simulation processing unit 41_1, in accordance with the type information supplied from the control unit 14, performs, for example, organic EL simulation processing to generate pseudo organic EL image data from the check image data from the image conversion unit 11, and supplies the pseudo organic EL image data obtained by the organic EL simulation processing to the display control unit 13 (FIG. 1) as processed image data.
- likewise, the simulation processing unit 41_2, in accordance with the type information supplied from the control unit 14, performs, for example, PDP simulation processing to generate pseudo PDP image data from the check image data from the image conversion unit 11, and supplies the pseudo PDP image data obtained by the PDP simulation processing to the display control unit 13 as processed image data.
- the simulation processing unit 41_3 also performs, in accordance with the type information supplied from the control unit 14, CRT simulation processing to generate, for example, pseudo CRT image data from the check image data from the image conversion unit 11, and supplies the pseudo CRT image data obtained by the CRT simulation processing to the display control unit 13 as processed image data.
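- the selection among the simulation processes by type information can be sketched as follows. This is an illustrative sketch only: the per-device transforms (function names and scale factors) are hypothetical stand-ins, not the actual organic EL / PDP / CRT emulation described in this application.

```python
# Hypothetical sketch of dispatching simulation processing by type information.
# The per-device transforms are illustrative placeholders only.

def organic_el_simulation(image):
    # placeholder transform standing in for organic EL emulation
    return [[min(255, round(p * 1.05)) for p in row] for row in image]

def pdp_simulation(image):
    # placeholder transform standing in for PDP emulation
    return [[round(p * 0.95) for p in row] for row in image]

def crt_simulation(image):
    # placeholder transform standing in for CRT emulation
    return [[round(p * 0.90) for p in row] for row in image]

SIMULATIONS = {
    "organic_el": organic_el_simulation,
    "pdp": pdp_simulation,
    "crt": crt_simulation,
}

def simulate(check_image, type_info):
    # type_info plays the role of the type information from the control unit 14
    return SIMULATIONS[type_info](check_image)
```

Each simulation processing unit 41_i would then correspond to one call of `simulate` with its own type information.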
- FIG. 8 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG.
- in FIG. 8, an image corresponding to the pseudo organic EL image data is displayed in display area #1, an image corresponding to the pseudo PDP image data in display area #2, and an image corresponding to the pseudo CRT image data in display area #3.
- therefore, the user can check with what image quality an image of a program from the broadcasting station is displayed on each of a display device having an LCD, a display device having an organic EL display panel, a display device having a PDP, and a display device having a CRT.
- which signal processing is performed, that is, for a receiving-side display device having which display characteristics the image data displayed on the LCD display device 2 is generated, is determined by the type information supplied from the control unit 14 to the simulation processing units 41_1 to 41_3. What type information is supplied from the control unit 14 can be designated by operating the remote commander 3 (FIG. 1), for example.
- control unit 14 supplies other parameters necessary for performing signal processing to the simulation processing unit 41.
- FIG. 9 shows a third configuration example of the signal processing unit 12 of FIG.
- the first signal processing unit 12_1 of the signal processing unit 12 comprises the image conversion unit 31 and the simulation processing unit 41_1.
- the image conversion unit 31 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14, thereby converting the check image data from the image conversion unit 11 into m-fold enlarged image data, which it supplies to the simulation processing unit 41_1.
- the simulation processing unit 41_1, in accordance with the type information supplied from the control unit 14, generates, for example, pseudo organic EL image data from the m-fold enlarged image data and supplies it to the display control unit 13 (FIG. 1) as processed image data.
- the image conversion unit 31 is supplied with check image data from the image conversion unit 11 and
- the enlargement rate information is supplied from the control unit 14.
- by performing image conversion processing according to the enlargement ratio information supplied from the control unit 14, the image conversion unit 31 converts the check image data from the image conversion unit 11 into m'-fold enlarged image data and supplies it to the simulation processing unit 41_2.
- the simulation processing unit 41_2, in accordance with the type information supplied from the control unit 14, generates, for example, pseudo PDP image data from the m'-fold enlarged image data from the image conversion unit 31 and supplies it to the display control unit 13 as processed image data.
- the image conversion unit 31 is supplied with the check image data from the image conversion unit 11 and
- the enlargement rate information is supplied from the control unit 14.
- the image conversion unit 31 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14, thereby converting the check image data from the image conversion unit 11 into m''-fold enlarged image data, which it supplies to the simulation processing unit 41_3. The simulation processing unit 41_3, in accordance with the type information supplied from the control unit 14, generates, for example, pseudo CRT image data from the m''-fold enlarged image data and supplies it to the display control unit 13 as processed image data.
- FIG. 10 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG.
- a check image is displayed in display area # 0.
- in display area #1, an image corresponding to the pseudo organic EL image data generated from the m-fold enlarged image data is displayed; in display area #2, an image corresponding to the pseudo PDP image data generated from the m'-fold enlarged image data; and in display area #3, an image corresponding to the pseudo CRT image data generated from the m''-fold enlarged image data.
- therefore, the user can check the display state (the image quality and the like of an enlarged image) when an image of a program from the broadcasting station is displayed enlarged on each of a display device having an organic EL display panel, a display device having a PDP, and a display device having a CRT.
- FIG. 11 shows a fourth configuration example of the signal processing unit 12 in FIG.
- in FIG. 11, the first signal processing unit 12_1 of the signal processing unit 12 comprises the image conversion unit 31, the second signal processing unit 12_2 comprises an image conversion unit 51, and the third signal processing unit 12_3 comprises the image conversion units 31 and 52.
- the image conversion unit 31 of the first signal processing unit 12_1 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14, thereby converting the check image data from the image conversion unit 11 into m-fold enlarged image data, which it supplies to the display control unit 13 (FIG. 1) as processed image data.
- Check image data is supplied from the image conversion unit 11 to the image conversion unit 51, and playback speed information indicating the playback speed of slow playback is supplied from the control unit 14.
- according to the playback speed information supplied from the control unit 14, the image conversion unit 51 performs image conversion processing that converts the check image data from the image conversion unit 11 into q-times-speed slow playback image data, that is, image data whose display appears as slow playback at a playback speed q of less than 1 (q < 1), and supplies the q-times-speed slow playback image data obtained by the image conversion processing to the display control unit 13 (FIG. 1) as processed image data.
- for example, if the display rate of the display device 2 (the rate at which the display is updated) and the frame rate of the check image are both 30 Hz, and the playback speed represented by the playback speed information is 1/2, the image conversion unit 51 performs image conversion processing that converts the 30 Hz check image data into q-times-speed slow playback image data that is 60 Hz image data with twice the frame rate. Since this 60 Hz image data is displayed at the 30 Hz display rate of the display device 2, the image appears as if it were played back slowly at 1/2 speed.
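- the 1/2-speed conversion in the example above amounts to doubling the number of frames. A minimal sketch, assuming plain frame repetition (an actual slow playback conversion would typically interpolate new frames instead):

```python
def to_slow_playback(frames, q):
    # q is the slow playback speed (q < 1). Each frame is repeated
    # round(1/q) times, so 30 Hz data becomes 60 Hz data for q = 1/2.
    # Displayed at the original 30 Hz display rate, motion then
    # appears at speed q.
    repeat = round(1 / q)
    out = []
    for frame in frames:
        out.extend([frame] * repeat)
    return out
```

For example, `to_slow_playback(["f0", "f1"], 0.5)` yields `["f0", "f0", "f1", "f1"]`.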
- the image conversion unit 31 of the third signal processing unit 12_3 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14, thereby converting the check image data from the image conversion unit 11 into m''-fold enlarged image data, which it supplies to the image conversion unit 52.
- the image conversion unit 52 is supplied with the m''-fold enlarged image data from the image conversion unit 31, and is also supplied with playback speed information from the control unit 14. According to the playback speed information supplied from the control unit 14, the image conversion unit 52 performs image conversion processing that converts the m''-fold enlarged image data from the image conversion unit 31 into q''-times-speed slow playback image data whose display appears as slow playback at a playback speed q'' of less than 1, and supplies the q''-times-speed slow playback image data obtained by the image conversion processing to the display control unit 13 as processed image data.
- FIG. 12 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG.
- since the image corresponding to the m-fold enlarged image data displayed in display area #1 has a higher spatial resolution than the check image displayed in display area #0, image degradation that is not conspicuous in the check image displayed in display area #0 can be checked.
- likewise, since the image corresponding to the q-times-speed slow playback image data displayed in display area #2 has a higher temporal resolution than the check image displayed in display area #0, image degradation in the temporal direction can be checked.
- in each of the image conversion units 51 and 52, the slow playback speed at which the check image data is converted into image data that appears to be played back slowly is determined by the playback speed information supplied from the control unit 14 to each of the image conversion units 51 and 52. What playback speed information is supplied from the control unit 14 to each of the image conversion units 51 and 52 can be designated by operating the remote commander 3 (FIG. 1), for example.
- FIG. 13 shows a fifth configuration example of the signal processing unit 12 in FIG.
- in FIG. 13, the first signal processing unit 12_1 of the signal processing unit 12 comprises an enhancement processing unit 61, the second signal processing unit 12_2 comprises an adaptive gamma processing unit 62, and the third signal processing unit 12_3 comprises a high frame rate processing unit 63.
- the enhancement processing unit 61 is supplied with the check image data from the image conversion unit 11 (FIG. 1) and with the signal processing parameters from the control unit 14 (FIG. 1).
- the enhancement processing unit 61 performs, on the check image data from the image conversion unit 11, signal processing corresponding to processing that a receiving-side display device applies to image data when displaying an image corresponding to that image data.
- that is, among receiving-side display devices there are devices that perform enhancement processing when displaying an image, and the enhancement processing unit 61 performs enhancement processing as signal processing similar to that performed by such a receiving-side display device. Specifically, by filtering the check image data from the image conversion unit 11, the enhancement processing unit 61 performs enhancement processing that emphasizes, for example, edge portions of the check image data, and supplies the check image data after the enhancement processing to the display control unit 13 (FIG. 1) as processed image data.
- the degree of enhancement of the check image data by the enhancement processing in the enhancement processing unit 61 is determined according to the enhancement processing parameters included in the signal processing parameters supplied from the control unit 14.
- the parameters for the enhancement processing can be specified, for example, by operating the remote commander 3 (Fig. 1).
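- edge-emphasizing filtering of this kind can be sketched as follows; a one-dimensional Laplacian sharpening filter is assumed here as one common form of enhancement processing, not as the specific filter of this application, and the `strength` parameter stands in for the enhancement processing parameter:

```python
def enhance_row(row, strength):
    # Emphasize edges by adding a scaled second difference (a 1-D
    # Laplacian) back onto each pixel. Pixels at the ends are clamped
    # to their neighbours. strength plays the role of the enhancement
    # parameter supplied from the control unit.
    out = []
    for i, p in enumerate(row):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, len(row) - 1)]
        laplacian = 2 * p - left - right
        out.append(p + strength * laplacian)
    return out
```

Flat regions are unchanged, while a step edge is overshot on both sides, which is what makes edges appear emphasized.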
- the adaptive gamma processing unit 62 is supplied with check image data from the image conversion unit 11, and is also supplied with signal processing parameters from the control unit 14.
- the adaptive gamma processing unit 62 performs, on the check image data from the image conversion unit 11, signal processing corresponding to processing that a receiving-side display device applies to image data when displaying an image corresponding to that image data.
- ideally, a display device would perform gamma correction that absorbs the characteristics of the display element adopted by each manufacturer, so that the appearance of an image would not differ depending on the manufacturer. In practice, however, each manufacturer applies its own gamma correction processing, which gives images an appearance unique to that manufacturer according to the image to be displayed and the characteristics of the display element; in this case, the appearance of an image differs depending on the manufacturer of the display device.
- the adaptive gamma processing unit 62 therefore performs adaptive gamma correction, which is adaptive gamma correction processing, so that an image corresponding to the image displayed on each manufacturer's display device can be displayed (reproduced) on the LCD display device 2. That is, the adaptive gamma processing unit 62 performs adaptive gamma correction processing on the check image data from the image conversion unit 11 so as to obtain image data for displaying, on the LCD display device 2, an image corresponding to the check image displayed on a receiving-side display device that applies manufacturer-specific gamma correction processing, and supplies the check image data after the adaptive gamma correction processing to the display control unit 13 as processed image data.
- what characteristic of adaptive gamma correction processing is performed in the adaptive gamma processing unit 62 is determined according to the parameters for adaptive gamma correction processing included in the signal processing parameters supplied from the control unit 14.
- the parameter for the adaptive gamma correction process can be designated by operating the remote commander 3, for example.
- as the adaptive gamma correction processing, for example, the gamma correction processing described in JP-A-08-023460, JP-A-2002-354290, JP-A-2005-229245, and the like can be adopted.
- Japanese Patent Application Laid-Open No. 08-023460 describes, as gamma correction processing that performs the optimum gamma correction according to the luminance level of an image signal when displaying an image signal with large variations in APL (Average Picture Level) on a device such as an LCD or PDP on which luminance contrast is difficult to obtain, dividing the luminance level of the image signal into a plurality of sections, counting the frequency in each section, classifying the resulting frequency distribution by frequency level, and using the result as a selection signal for the gamma correction characteristic, thereby performing dynamic gamma correction adapted to the image signal.
- Japanese Patent Application Laid-Open No. 2002-354290 describes, as gamma correction processing that improves gradation reproducibility by constantly changing the operating point of the gamma correction so that the gamma correction always works, obtaining an operating point adapted to the APL from the APL and the initial value of the operating point, and applying gamma correction to the luminance signal on the white side of the operating point.
- Japanese Patent Laid-Open No. 2005-229245 describes, as a method for performing gradation expansion control adapted to an image signal while suppressing color saturation, detecting the maximum value of each of the R, G, and B colors of the image signal, detecting the maximum of the values obtained by multiplying each maximum value by a weighting coefficient, comparing that maximum with the maximum luminance level of the image signal, and setting the larger value as the maximum luminance level of the image signal.
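- the idea of selecting a gamma characteristic from the luminance distribution, as in the first of the cited publications, can be illustrated roughly as follows. The thresholds and gamma values are invented for illustration and are not taken from the cited publications:

```python
def average_picture_level(pixels):
    # APL: mean luminance of the image signal (a 0-255 scale is assumed)
    return sum(pixels) / len(pixels)

def select_gamma(pixels):
    # Choose a gamma correction characteristic from the luminance
    # distribution; thresholds and gamma values are illustrative only.
    apl = average_picture_level(pixels)
    if apl < 85:
        return 0.8    # lift dark scenes
    if apl < 170:
        return 1.0    # leave mid-level scenes unchanged
    return 1.2        # compress bright scenes

def apply_gamma(pixels, gamma):
    # Apply the selected gamma curve to the luminance values.
    return [round(255 * (p / 255) ** gamma) for p in pixels]
```

A dynamic scheme of this kind recomputes the APL per frame (or per scene) so that the correction adapts to the image signal.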
- the high frame rate processing unit 63 is supplied with the check image data from the image conversion unit 11 and the signal processing parameters from the control unit 14.
- the high frame rate processing unit 63 performs, on the check image data from the image conversion unit 11, signal processing corresponding to processing that a receiving-side display device applies to image data when displaying an image corresponding to that image data.
- that is, among receiving-side display devices there are devices that convert an image of a program from the broadcasting station into an image with a high frame rate, such as twice the original frame rate, and display it at a display rate corresponding to that high frame rate; the high frame rate processing unit 63 performs high frame rate processing as signal processing similar to that performed by such a receiving-side display device.
- specifically, according to the signal processing parameters supplied from the control unit 14, the high frame rate processing unit 63 performs high frame rate processing, such as double-speed processing that generates image data whose frame rate is twice that of the original check image data by interpolating frames between the frames of the check image data from the image conversion unit 11, and supplies the check image data after the high frame rate processing to the display control unit 13 as processed image data.
- how many times the frame rate of the check image data is multiplied by the high frame rate processing in the high frame rate processing unit 63 is determined according to the parameters for high frame rate processing included in the signal processing parameters supplied from the control unit 14. The parameters for high frame rate processing can be designated by operating the remote commander 3 (FIG. 1), for example.
- if the display rate of the display device 2 and the frame rate of the check image are both 30 Hz, and the image data obtained by the high frame rate processing of the high frame rate processing unit 63 has, for example, a frame rate of 60 Hz, twice the frame rate of the check image, then displaying the 60 Hz image data on the display device 2 at the 30 Hz display rate makes the image appear as if it were played back slowly at 1/2 speed.
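- double-speed processing by frame interpolation can be sketched as follows. A plain pixel-wise blend of adjacent frames is assumed for brevity; actual double-speed processing may use motion-compensated interpolation instead:

```python
def double_speed_frames(frames):
    # Insert an interpolated frame (the pixel-wise average of each
    # adjacent pair) between consecutive frames, roughly doubling the
    # frame rate: n input frames become 2n - 1 output frames.
    # frames: list of frames, each a flat list of pixel values.
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    out.append(frames[-1])
    return out
```

For example, two frames `[0]` and `[10]` become `[0]`, `[5.0]`, `[10]`.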
- the display device 2 can display images at a high display rate higher than 30Hz, for example, 60Hz, 120Hz, 240Hz, etc. in addition to 30Hz. It is assumed that the display control unit 13 (FIG. 1) can control the display device 2 to display an image at a high display rate in addition to 30 Hz.
- if the frame rate of the image data obtained by the high frame rate processing of the high frame rate processing unit 63 (hereinafter referred to as high frame rate image data as appropriate) is, for example, 60 Hz, twice the frame rate of the check image, the display control unit 13 controls the display device 2 to display the image corresponding to the high frame rate image data at a 60 Hz display rate, the same as the frame rate of the high frame rate image data.
- as a result, the image corresponding to the high frame rate image data is displayed at a display rate equal to the frame rate of the high frame rate image data.
- that is, an image corresponding to high frame rate image data with a frame rate of, for example, 60 Hz, obtained by the high frame rate processing of the high frame rate processing unit 63 constituting the third signal processing unit 12_3, is displayed in display area #3; but if the frame rate of the check image displayed in display area #0 (outside display area #3) is 30 Hz and the display rate of the display device 2 is set to 60 Hz, the same as the frame rate of the high frame rate image data, the check image displayed in display area #0 appears as if it were played back at double speed.
- therefore, the display rate of the display device 2 is controlled by the control unit 14 in conjunction with how many times the frame rate of the check image data is multiplied by the high frame rate processing of the high frame rate processing unit 63.
- FIG. 14 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG. 13.
- in FIG. 14, a check image is displayed in display area #0, an image corresponding to the check image data after the enhancement processing is displayed in display area #1, an image corresponding to the check image data after the adaptive gamma correction processing is displayed in display area #2, and an image corresponding to the check image data after the high frame rate processing is displayed in display area #3.
- therefore, for a receiving-side display device having a function of performing enhancement processing on an image, the user can check the image quality of the image corresponding to the image data after the enhancement processing. Likewise, for a display device that displays images after performing manufacturer-specific gamma correction processing, the user can check the image quality of the image corresponding to the image data after that gamma correction processing.
- FIG. 15 shows a sixth configuration example of the signal processing unit 12 in FIG.
- in FIG. 15, the first signal processing unit 12_1 comprises a pseudo inch image generation unit 71_1, the second signal processing unit 12_2 comprises a pseudo inch image generation unit 71_2, and the third signal processing unit 12_3 comprises a pseudo inch image generation unit 71_3.
- according to the inch number information supplied from the control unit 14, the pseudo inch image generation unit 71_i performs, on the check image data from the image conversion unit 11, signal processing to generate, as processed image data, image data for displaying in display area #i of the display device 2 an image corresponding to the check image displayed on a receiving-side display device having a certain number of inches.
- that is, the pseudo inch image generation unit 71_1 performs signal processing to generate, as processed image data, image data for displaying in display area #1 of the display device 2 an image corresponding to the check image displayed on an n-inch receiving-side display device. Similarly, the pseudo inch image generation units 71_2 and 71_3 perform signal processing to generate, as processed image data, image data for displaying in display areas #2 and #3 of the display device 2 images corresponding to the check images displayed on n'-inch and n''-inch receiving-side display devices, respectively.
- the image data that causes the display area #i of the display device 2 to display an image corresponding to the check image displayed on the display device on the receiving side having a certain number of inches is also referred to as pseudo-inch image data.
- the signal processing for generating pseudo inch image data from the check image data is also referred to as pseudo inch image generation processing.
- that is, the pseudo inch image generation unit 71_1 performs, in accordance with the inch number information supplied from the control unit 14, pseudo inch image generation processing to generate n-inch pseudo inch image data from the check image data from the image conversion unit 11, and supplies the resulting n-inch pseudo inch image data to the display control unit 13 (FIG. 1) as processed image data.
- similarly, in accordance with the inch number information supplied from the control unit 14, the pseudo inch image generation units 71_2 and 71_3 perform pseudo inch image generation processing to generate n'-inch pseudo inch image data and n''-inch pseudo inch image data, respectively, from the check image data from the image conversion unit 11, and supply the resulting n'-inch and n''-inch pseudo inch image data to the display control unit 13 as processed image data.
- the pseudo inch image data is generated by performing a process of increasing or decreasing the number of pixels of the check image data.
- as the processing for increasing the number of pixels of image data, for example, processing for interpolating pixels, or image conversion processing for converting image data into image data having a larger number of pixels, can be employed.
- as the processing for reducing the number of pixels of image data, for example, processing for thinning out pixels, or averaging processing that uses the average value of a plurality of pixels as the pixel value of one pixel, can be employed.
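- the two directions of pixel-count change can be sketched as follows, using pixel replication for the increase (as in the 3×3 example of FIG. 18 below) and 2×2 averaging for the reduction (as in FIG. 19 below); both are minimal sketches, not the specific conversions of this application:

```python
def enlarge_pixels(image, k):
    # Increase the pixel count by replicating each pixel into a
    # k x k block: an H x V image becomes kH x kV.
    out = []
    for row in image:
        wide = [p for p in row for _ in range(k)]
        for _ in range(k):
            out.append(list(wide))
    return out

def reduce_pixels_2x(image):
    # Decrease the pixel count by averaging each 2 x 2 block into
    # one pixel: an H x V image becomes H/2 x V/2.
    out = []
    for r in range(0, len(image) - 1, 2):
        row = []
        for c in range(0, len(image[0]) - 1, 2):
            block = (image[r][c] + image[r][c + 1]
                     + image[r + 1][c] + image[r + 1][c + 1])
            row.append(block / 4)
        out.append(row)
    return out
```

`enlarge_pixels(img, 3)` realizes the 1-pixel-to-3×3 expansion, and `reduce_pixels_2x(img)` the 2×2-to-1 averaging.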
- FIG. 16 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG. 15.
- a check image is displayed in display area # 0.
- in FIG. 16, an image corresponding to the n-inch pseudo inch image data is displayed in display area #1, an image corresponding to the n'-inch pseudo inch image data in display area #2, and an image corresponding to the n''-inch pseudo inch image data in display area #3.
- now, suppose that the display area #i is composed of H×V monitor pixels, and that the check image data is also composed of H×V pixels, the same number of pixels as the display area #i.
- FIG. 17 shows a state in which the check image data of H×V pixels is displayed in the display area #i of H×V monitor pixels. In this case, the H×V-pixel check image data is displayed as it is in the H×V-monitor-pixel display area #i, so that an image equivalent to the check image displayed on an N-inch display device is displayed.
- this N inch is also referred to as a reference inch.
- FIG. 18 shows a state in which pseudo inch image data obtained by increasing the number of pixels of the check image data by the pseudo inch image generation processing is displayed in the display area #i of H×V monitor pixels.
- in FIG. 18, pseudo inch image data of 3H×3V pixels is generated by pseudo inch image generation processing that increases one pixel of the H×V-pixel check image data to 3×3 pixels, and the pseudo inch image data is displayed in the display area #i of H×V monitor pixels. In this case, each pixel of the original H×V-pixel check image data is equivalently displayed on 3×3 monitor pixels of the display area #i, so that the display area #i displays an image corresponding to 3×N-inch pseudo inch image data, that is, an image corresponding to the check image displayed on a 3×N-inch display device.
- however, the display area #i of H×V monitor pixels cannot display the entire image corresponding to the 3H×3V-pixel pseudo inch image data, which has more pixels than the display area; therefore, only a part of the image corresponding to the 3H×3V-pixel pseudo inch image data is displayed in the display area #i. Which part of the image is displayed in the display area #i can be designated by operating the remote commander 3, for example, and a part of the image corresponding to the 3H×3V-pixel pseudo inch image data is displayed in the display area #i according to the designation.
- FIG. 19 shows a state in which pseudo inch image data obtained by reducing the number of pixels of the check image data by the pseudo inch image generation processing is displayed in the display area #i of H×V monitor pixels.
- in FIG. 19, pseudo inch image data of H/2×V/2 pixels is generated by pseudo inch image generation processing that reduces 2×2 pixels of the H×V-pixel check image data to one pixel, and the pseudo inch image data is displayed in the display area #i of H×V monitor pixels.
- in this case, the image corresponding to the H/2×V/2-pixel pseudo inch image data is displayed in an area of H/2×V/2 monitor pixels within the H×V-monitor-pixel display area #i. Which H/2×V/2-monitor-pixel area of the display area #i displays the image can be designated by operating the remote commander 3, for example, and the display control unit 13 displays the image corresponding to the H/2×V/2-pixel pseudo inch image data in the display area #i according to the designation.
- in step S31, the control unit 14 determines whether or not the remote commander 3 has been operated so as to change (designate) the number of inches n.
- if it is determined in step S31 that the remote commander 3 has not been operated so as to change the number of inches n, the process returns to step S31.
- if it is determined in step S31 that the remote commander 3 has been operated so as to change the number of inches n, that is, if an operation signal corresponding to an operation of the remote commander 3 to change the number of inches n has been received by the control unit 14, the process proceeds to step S32. The control unit 14 recognizes the changed number of inches n from the operation signal from the remote commander 3 and, from that number of inches n and the reference inch N, determines the pixel number change rate n/N at which the pseudo inch image generation unit 71_1 changes the number of pixels of the check image data. The control unit 14 then supplies inch number information including the pixel number change rate n/N to the pseudo inch image generation unit 71_1, and the process proceeds from step S32 to step S33.
- in step S33, the pseudo inch image generation unit 71_1 performs, in accordance with the inch number information from the control unit 14, pseudo inch image generation processing that changes (increases or decreases) the numbers of horizontal and vertical pixels of the check image data from the image conversion unit 11 by the pixel number change rate n/N, thereby generating n-inch pseudo inch image data, which it supplies to the display control unit 13, and the process proceeds to step S34.
- in step S34, the control unit 14 determines whether or not the number of inches n is equal to or less than the reference inch N.
- if it is determined in step S34 that the number of inches n is equal to or less than the reference inch N, that is, if the entire image corresponding to the n-inch pseudo inch image data can be displayed in the display area #1, the process proceeds to step S35, where the display control unit 13 extracts the whole of the n-inch pseudo inch image data from the pseudo inch image generation unit 71_1 as display image data, and the process proceeds to step S37. In step S37, the display control unit 13 displays the image corresponding to the display image data in the display area #1, and the process returns to step S31. In this case, the entire image corresponding to the n-inch pseudo inch image data is displayed in the display area #1.
- on the other hand, if it is determined in step S34 that the number of inches n is not equal to or less than the reference inch N, that is, if the entire image corresponding to the n-inch pseudo inch image data cannot be displayed in the display area #1, the process proceeds to step S36, where the display control unit 13 extracts, from the pseudo inch image data from the pseudo inch image generation unit 71_1, a portion of H×V pixels that can be displayed in the display area #1 as display image data, and the process proceeds to step S37.
- in step S37, the display control unit 13 displays the image corresponding to the display image data in the display area #1, and the process returns to step S31. In this case, the image corresponding to the H×V pixels extracted in step S36 is displayed in the display area #1.
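- one pass of steps S32 to S37 can be sketched as follows. Nearest-neighbour scaling is assumed for the pixel number change, and the extracted H×V portion is taken from the top-left corner for simplicity (the remote commander could designate another portion); both choices are illustrative assumptions:

```python
def display_image_for_inches(check, n, N):
    # Scale the H x V check image by the pixel number change rate n/N
    # (nearest-neighbour, for simplicity), then either return the whole
    # result (steps S34, S35: n <= N) or extract an H x V portion
    # (step S36: top-left corner assumed here).
    V, H = len(check), len(check[0])          # rows, columns
    new_h = max(1, round(V * n / N))
    new_w = max(1, round(H * n / N))
    scaled = [[check[y * V // new_h][x * H // new_w]
               for x in range(new_w)] for y in range(new_h)]
    if n <= N:
        return scaled                          # whole image fits
    return [row[:H] for row in scaled[:V]]     # H x V extraction
```

With n = N the check image is returned unchanged; with n = 2N only the top-left quarter of the enlarged image fits in the display area.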
- FIG. 21 shows a seventh configuration example of the signal processing unit 12 in FIG.
- in FIG. 21, each of the first to third signal processing units 12_1 to 12_3 comprises the image conversion unit 31 and a pseudo inch image generation unit 71.
- The image conversion unit 31 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14, thereby converting the check image data from the image conversion unit 11 into m-fold enlarged image data, which is supplied to the pseudo-inch image generation unit 71.
- The pseudo-inch image generation unit 71 performs pseudo-inch image generation processing according to the inch number information supplied from the control unit 14, thereby generating n-inch pseudo-inch image data from the m-fold enlarged image data from the image conversion unit 31, and supplies it to the display control unit 13 (FIG. 1) as processed image data.
- Similarly, the image conversion unit 31 of the second signal processing unit 12 is supplied with the check image data from the image conversion unit 11 and with the enlargement ratio information from the control unit 14.
- The image conversion unit 31 converts the check image data from the image conversion unit 11 into m'-fold enlarged image data by performing image conversion processing according to the enlargement ratio information supplied from the control unit 14, and supplies it to the pseudo-inch image generation unit 71.
- The pseudo-inch image generation unit 71 generates, in accordance with the inch number information supplied from the control unit 14, n'-inch pseudo-inch image data from the m'-fold enlarged image data from the image conversion unit 31, and supplies it to the display control unit 13 as processed image data.
- Likewise, the image conversion unit 31 of the third signal processing unit 12 is supplied with the check image data from the image conversion unit 11 and with the enlargement ratio information from the control unit 14.
- The image conversion unit 31 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14, thereby converting the check image data from the image conversion unit 11 into m"-fold enlarged image data, which is supplied to the pseudo-inch image generation unit 71.
- The pseudo-inch image generation unit 71 generates, in accordance with the inch number information supplied from the control unit 14, n"-inch pseudo-inch image data from the m"-fold enlarged image data from the image conversion unit 31, and supplies it to the display control unit 13 as processed image data.
- FIG. 22 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG.
- In display area #1, an image corresponding to the n-inch pseudo-inch image data magnified m times is displayed; in display area #2, an image corresponding to the n'-inch pseudo-inch image data magnified m' times is displayed; and in display area #3, an image corresponding to the n"-inch pseudo-inch image data magnified m" times is displayed.
- FIG. 23 shows an eighth configuration example of the signal processing unit 12 in FIG.
- In FIG. 23, the second signal processing unit 12 includes an image conversion unit 31, a simulation processing unit 41, and a pseudo-inch image generation unit 71, and the third signal processing unit 12 likewise includes an image conversion unit 31, a simulation processing unit 41, and a pseudo-inch image generation unit 71.
- The image conversion unit 31 performs image conversion processing according to the enlargement ratio information supplied from the control unit 14 (FIG. 1), thereby converting the check image data from the image conversion unit 11 (FIG. 1) into m-fold enlarged image data, which is supplied to the pseudo-inch image generation unit 71.
- The pseudo-inch image generation unit 71 performs pseudo-inch image generation processing according to the inch number information supplied from the control unit 14, thereby generating, from the m-fold enlarged image data from the image conversion unit 31, n-inch pseudo-inch image data, where n is any value within the range of, for example, 20 to 103 inches, and supplies it to the display control unit 13 (FIG. 1) as processed image data.
- The image conversion unit 31 converts the check image data from the image conversion unit 11 into m'-fold enlarged image data by performing image conversion processing according to the enlargement ratio information supplied from the control unit 14, and supplies it to the simulation processing unit 41.
- The simulation processing unit 41 performs, for example, PDP simulation processing according to the type information supplied from the control unit 14, thereby generating pseudo-PDP image data from the m'-fold enlarged image data from the image conversion unit 31, and supplies it to the pseudo-inch image generation unit 71.
- The pseudo-inch image generation unit 71 performs pseudo-inch image generation processing according to the inch number information supplied from the control unit 14, thereby generating n'-inch pseudo-inch image data, where n' is any value within the range of, for example, 20 to 103 inches, and supplies it to the display control unit 13 as processed image data.
- The image conversion unit 31 converts the check image data from the image conversion unit 11 into m"-fold enlarged image data by performing image conversion processing according to the enlargement ratio information supplied from the control unit 14, and supplies it to the simulation processing unit 41.
- The simulation processing unit 41 performs, for example, CRT simulation processing according to the type information supplied from the control unit 14, thereby generating pseudo-CRT image data from the m"-fold enlarged image data from the image conversion unit 31, and supplies it to the pseudo-inch image generation unit 71.
- The pseudo-inch image generation unit 71 performs pseudo-inch image generation processing according to the inch number information supplied from the control unit 14, thereby generating, from the pseudo-CRT image data from the simulation processing unit 41, n"-inch pseudo-inch image data, where n" is any value within the range of, for example, 20 to 40 inches, and supplies it to the display control unit 13 as processed image data.
- FIG. 24 shows a display example of the display device 2 when the signal processing unit 12 is configured as shown in FIG.
- A check image of the reference inch number N is displayed in display area #0.
- In display area #1, an image corresponding to the n-inch pseudo-inch image data magnified m times is displayed; in display area #2, an image corresponding to the n'-inch pseudo-inch image data magnified m' times, which is equivalent to an image displayed by a PDP, is displayed; and in display area #3, an image corresponding to the n"-inch pseudo-inch image data magnified m" times, which is equivalent to an image displayed by a CRT, is displayed.
- The above-described image conversion processing is, for example, processing for converting image data into image data having a larger number of pixels, image data having a higher frame rate, or the like, that is, processing for converting first image data into second image data. Such image conversion processing can be performed using, for example, class classification adaptive processing.
- When the first image data is image data with low spatial resolution and the second image data is image data with high spatial resolution, the image conversion processing improves the spatial resolution and can therefore be called spatial resolution creation (improvement) processing.
- When the first image data is image data having a predetermined number of pixels (size) and the second image data is image data in which the number of pixels of the first image data is increased or decreased, the image conversion processing can be regarded as resizing processing that changes the number of pixels of the image (performs resizing (enlargement or reduction) of the image).
- Further, the image conversion processing can be performed as time resolution creation (improvement) processing that improves the temporal resolution (frame rate).
- In spatial resolution creation processing, the first image data is image data with low spatial resolution, and the second image data may be image data having the same number of pixels as the first image data or image data having more pixels than the first image data. In the latter case, the spatial resolution creation processing is a process that improves the spatial resolution and is also a resizing process that increases the image size (number of pixels).
- FIG. 25 shows a configuration example of the image conversion apparatus 101 that performs image conversion processing using class classification adaptation processing.
- the image data supplied thereto is supplied to the tap selection units 112 and 113 as the first image data.
- the pixel-of-interest selecting unit 111 sequentially sets pixels constituting the second image data as the pixel of interest, and supplies information representing the pixel of interest to a necessary block.
- The tap selection unit 112 selects, as prediction taps, some of the pixels (pixel values) constituting the first image data that are used to predict the target pixel (its pixel value). Specifically, the tap selection unit 112 selects, as prediction taps, a plurality of pixels of the first image data that are spatially or temporally close to the spatio-temporal position of the target pixel.
- The tap selection unit 113 selects, as class taps, some of the pixels constituting the first image data that are used to classify the target pixel into one of several classes. That is, the tap selection unit 113 selects class taps in the same manner as the tap selection unit 112 selects prediction taps.
- The prediction taps and the class taps may have the same tap structure or different tap structures.
- the prediction tap obtained by the tap selection unit 112 is supplied to the prediction calculation unit 116, and the class tap obtained by the tap selection unit 113 is supplied to the class classification unit 114.
- the class classification unit 114 class-categorizes the pixel of interest based on the class tap from the tap selection unit 113, and supplies the class code corresponding to the resulting class to the coefficient output unit 115.
- As a method of performing class classification, for example, ADRC (Adaptive Dynamic Range Coding) can be adopted.
- In the method using ADRC, the pixel values of the pixels constituting the class tap are subjected to ADRC processing, and the class of the target pixel is determined according to the resulting ADRC code.
- In K-bit ADRC, for example, the maximum value MAX and the minimum value MIN of the pixel values of the pixels constituting the class tap are detected, DR = MAX - MIN is set as the local dynamic range, and based on this dynamic range DR, the pixel value of each pixel constituting the class tap is requantized to K bits. That is, the minimum value MIN is subtracted from the pixel value of each pixel constituting the class tap, and the subtracted value is divided (requantized) by DR/2^K. Then, a bit string obtained by arranging the resulting K-bit pixel values of the pixels constituting the class tap in a predetermined order is output as the ADRC code.
- For example, when 1-bit ADRC processing is performed on a class tap, the pixel value of each pixel constituting the class tap is divided by the average value of the maximum value MAX and the minimum value MIN (with the fractional part dropped), whereby the pixel value of each pixel is made 1 bit (binarized). Then, a bit string in which these 1-bit pixel values are arranged in a predetermined order is output as the ADRC code.
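As an illustration of the 1-bit ADRC classification described above, the following sketch binarizes the class-tap pixels against the mean of their local MAX and MIN and packs the bits into a class code. This is an assumed minimal implementation of the scheme, not the patented one:

```python
def adrc_1bit_class(class_tap):
    """Illustrative 1-bit ADRC: each pixel of the class tap becomes one bit
    (1 if at or above the mean of the local MAX and MIN, else 0), and the
    bits, taken in a fixed predetermined order, form the class code."""
    mx, mn = max(class_tap), min(class_tap)
    threshold = (mx + mn) / 2
    code = 0
    for p in class_tap:            # predetermined pixel order
        code = (code << 1) | (1 if p >= threshold else 0)
    return code
```

With N tap pixels this yields one of 2^N class codes, which is the compression of the class-tap information amount that the text describes.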
- the class classification unit 114 can also output, for example, the level distribution pattern of the pixel values of the pixels constituting the class tap as it is as the class code.
- Note that when a class tap is composed of the pixel values of N pixels and K bits are assigned to the pixel value of each pixel, the number of possible class codes is (2^N)^K, an enormous number that is exponential in the number of bits K of the pixel values.
- Therefore, in the class classification unit 114, it is preferable to perform class classification after compressing the information amount of the class tap by the above-described ADRC processing, vector quantization, or the like.
- The coefficient output unit 115 stores tap coefficients for each class obtained by learning described later and, from among the stored tap coefficients, outputs the tap coefficient stored at the address corresponding to the class code supplied from the class classification unit 114 (the tap coefficient of the class represented by that class code). This tap coefficient is supplied to the prediction calculation unit 116.
- Here, the tap coefficient corresponds to a coefficient that is multiplied with input data at a so-called tap of a digital filter.
- The prediction calculation unit 116 acquires the prediction taps output from the tap selection unit 112 and the tap coefficients output from the coefficient output unit 115, and performs, using the prediction taps and the tap coefficients, a predetermined calculation for obtaining a predicted value of the true value of the target pixel. The prediction calculation unit 116 thereby calculates and outputs the pixel value (predicted value) of the target pixel, that is, the pixel value of a pixel constituting the second image data.
- step S111 the pixel-of-interest selecting unit 111 selects one of the pixels constituting the second image data for the first image data input to the image conversion device 101, which has not yet been set as the pixel of interest. Is selected as a target pixel, and the process proceeds to step S112.
- the pixel-of-interest selecting unit 111 selects, for example, pixels that are not regarded as the pixel of interest in the raster scan order among the pixels constituting the second image data.
- step S112 the tap selection units 112 and 113 respectively select the prediction tap and the class tap for the target pixel from the first image data supplied thereto.
- the prediction tap is supplied from the tap selection unit 112 to the prediction calculation unit 116, and the class tap is supplied from the tap selection unit 113 to the class classification unit 114.
- The class classification unit 114 receives the class tap for the target pixel from the tap selection unit 113 and, in step S113, classifies the target pixel based on that class tap. Further, the class classification unit 114 outputs the class code representing the class of the target pixel obtained as a result of the class classification to the coefficient output unit 115, and the process proceeds to step S114.
- step S114 the coefficient output unit 115 acquires and outputs the tap coefficient stored at the address corresponding to the class code supplied from the class classification unit 114. Further, in step S114, the prediction calculation unit 116 obtains the tap coefficient output by the coefficient output unit 115, and proceeds to step S115.
- In step S115, the prediction calculation unit 116 performs a predetermined prediction calculation using the prediction taps output from the tap selection unit 112 and the tap coefficients acquired from the coefficient output unit 115. The prediction calculation unit 116 thereby obtains and outputs the pixel value of the target pixel, and the process proceeds to step S116.
- In step S116, the target pixel selection unit 111 determines whether there is a pixel of the second image data that has not yet been set as the target pixel. If it is determined in step S116 that such a pixel remains, the process returns to step S111, and the same processing is repeated thereafter.
- If it is determined in step S116 that no pixel of the second image data remains that has not yet been set as the target pixel, the processing ends.
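The conversion loop of steps S111 through S116 can be sketched as follows, with the tap selection, class classification, and coefficient lookup abstracted as caller-supplied functions. All of the callables and their signatures are assumed interfaces for illustration, not part of the patent text:

```python
import numpy as np

def convert_image(first_pixels, select_pred_tap, select_class_tap,
                  classify, coeffs_per_class, out_positions):
    """Sketch of steps S111-S116: for every pixel position of the second
    image data, select taps from the first image data, classify, fetch the
    class's tap coefficients, and predict with the linear first-order
    expression y = sum_n w_n * x_n."""
    second = {}
    for pos in out_positions:                                # S111
        x = np.asarray(select_pred_tap(first_pixels, pos))   # S112
        c = classify(select_class_tap(first_pixels, pos))    # S113
        w = np.asarray(coeffs_per_class[c])                  # S114
        second[pos] = float(w @ x)                           # S115
    return second                                            # S116 done
```

For example, with a single-pixel tap, a single class, and coefficient 2.0, each output pixel is simply twice the corresponding input pixel.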
- Next, the prediction calculation in the prediction calculation unit 116 and the learning of the tap coefficients used for it will be described.
- Consider a case where high-quality image data (high-quality image data) is used as the second image data, and low-quality image data (low-quality image data) obtained by filtering the high-quality image data with an LPF (Low Pass Filter) to lower its spatial resolution is used as the first image data.
- A prediction tap is selected from the low-quality image data, and the pixel value of a high-quality pixel is obtained (predicted) using the prediction tap and the tap coefficients. The pixel value y of the high-quality pixel is obtained by, for example, the following linear first-order expression: y = w_1 x_1 + w_2 x_2 + ... + w_N x_N (1)
- In expression (1), x_n represents the pixel value of the n-th pixel of the low-quality image data constituting the prediction tap for the high-quality pixel y, and w_n represents the n-th tap coefficient that is multiplied by the n-th low-quality pixel. In expression (1), the prediction tap is composed of N low-quality pixels x_1, x_2, ..., x_N.
- The pixel value y of the high-quality pixel can also be obtained by a higher-order expression of second or higher order rather than the linear first-order expression shown in expression (1).
- When the true value of the pixel value of the high-quality pixel of the k-th sample is denoted y_k and the predicted value of that true value obtained by expression (1) is denoted y_k', the prediction error e_k is expressed by the following equation (2): e_k = y_k - y_k'.
- In equation (3), obtained by replacing y_k' in equation (2) according to expression (1), x_{n,k} represents the n-th low-quality pixel constituting the prediction tap for the high-quality pixel of the k-th sample.
- The tap coefficient w_n that makes the prediction error e_k of equation (3) zero is optimal for predicting the high-quality pixel, but it is generally difficult to obtain such a tap coefficient for all high-quality pixels. Therefore, adopting, for example, the least squares method as the criterion for the optimal tap coefficient, the optimal tap coefficient w_n minimizes the sum E of squared errors expressed by the following equation (4): E = e_1^2 + e_2^2 + ... + e_K^2. Here, K is the number of samples (learning samples) of sets of the high-quality pixel y_k and the low-quality pixels x_{1,k}, x_{2,k}, ..., x_{N,k} constituting the prediction tap for the high-quality pixel y_k.
- Equation (7) can be expressed by the normal equation shown in equation (8).
- The normal equation of equation (8) can be solved for the tap coefficients w_n by using, for example, a sweep-out method (Gauss-Jordan elimination method).
- By setting up and solving the normal equation of equation (8) for each class, the optimal tap coefficient (here, the tap coefficient that minimizes the sum E of squared errors) w_n can be obtained for each class.
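The per-class least-squares solution can be sketched numerically as follows: the left-side matrix of equation (8) is the sum over the class's learning samples of the products x_{i,k} x_{j,k}, and the right-side vector is the sum of x_{i,k} y_k. `numpy.linalg.solve` is used here in place of the sweep-out (Gauss-Jordan elimination) method mentioned in the text; both solve the same linear system:

```python
import numpy as np

def tap_coefficients_for_class(pred_taps, teacher_values):
    """Sketch of solving the normal equation (8) for one class: accumulate
    A = sum_k x_k x_k^T and b = sum_k x_k y_k over the learning samples of
    the class, then solve A w = b for the tap coefficients w_n."""
    X = np.asarray(pred_taps, dtype=float)       # shape (K, N): K samples
    y = np.asarray(teacher_values, dtype=float)  # shape (K,): teacher pixels
    A = X.T @ X                                  # left-side matrix of Eq. (8)
    b = X.T @ y                                  # right-side vector of Eq. (8)
    return np.linalg.solve(A, b)                 # tap coefficients w_n
```

For instance, if every teacher pixel is exactly 2 x_1 + 3 x_2 of its prediction tap, the solver recovers the coefficients (2, 3).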
- FIG. 27 shows a configuration example of the learning device 121 that performs learning to obtain the tap coefficient w_n by setting up and solving the normal equation of equation (8).
- The learning image storage unit 131 stores learning image data used for learning the tap coefficient w_n.
- As the learning image data, for example, high-resolution, high-quality image data can be used.
- the teacher data generation unit 132 reads learning image data from the learning image storage unit 131. Further, the teacher data generation unit 132 generates, from the learning image data, teacher data that serves as a tap coefficient learning teacher (true value), that is, a pixel value of a mapping destination as a prediction calculation according to Equation (1). And supplied to the teacher data storage unit 133.
- the teacher data generation unit 132 supplies high-quality image data as learning image data to the teacher data storage unit 133 as teacher data as it is.
- the teacher data storage unit 133 stores high-quality image data as teacher data supplied from the teacher data generation unit 132.
- The student data generation unit 134 reads the learning image data from the learning image storage unit 131. Further, the student data generation unit 134 generates, from the learning image data, student data serving as the student of the tap coefficient learning, that is, pixel values to be converted by the mapping as the prediction calculation according to expression (1), and supplies them to the student data storage unit 135. Here, the student data generation unit 134 generates low-quality image data by, for example, filtering the high-quality image data as the learning image data with an LPF to reduce its resolution, and supplies the low-quality image data to the student data storage unit 135 as student data.
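The student-data generation just described (low-pass filtering the teacher image into a degraded student image) might be sketched as follows. The box kernel and its size are assumptions for the sketch, since the text only specifies that an LPF is applied:

```python
import numpy as np

def make_student_data(teacher_image, k=3):
    """Illustrative student-data generation: a simple k x k box low-pass
    filter degrades the teacher (high-quality) image into low-quality
    student data for tap coefficient learning."""
    v, h = teacher_image.shape
    pad = k // 2
    padded = np.pad(teacher_image.astype(float), pad, mode='edge')
    out = np.empty((v, h), dtype=float)
    for i in range(v):
        for j in range(h):
            out[i, j] = padded[i:i + k, j:j + k].mean()  # local average
    return out
```

Each (teacher pixel, student taps) pair drawn from such image pairs becomes one learning sample for the normal equation of equation (8).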
- the student data storage unit 135 stores the student data supplied from the student data generation unit 134.
- The learning unit 136 sequentially sets the pixels constituting the high-quality image data as the teacher data stored in the teacher data storage unit 133 as the target pixel and, for the target pixel, selects as prediction taps, from among the low-quality pixels constituting the low-quality image data as the student data stored in the student data storage unit 135, low-quality pixels having the same tap structure as those selected by the tap selection unit 112 in FIG. 25.
- Further, the learning unit 136 uses each pixel constituting the teacher data and the prediction tap selected when that pixel was the target pixel to set up and solve the normal equation of equation (8) for each class, thereby obtaining the tap coefficients for each class.
- FIG. 28 shows a configuration example of the learning unit 136 of FIG.
- the pixel-of-interest selecting unit 141 sequentially selects pixels constituting the teacher data stored in the teacher data storage unit 133 as the pixel of interest, and uses the information representing the pixel of interest as a necessary block. Supply.
- the tap selection unit 142 selects the pixel of interest from the low image quality pixels constituting the low image quality image data as the student data stored in the student data storage unit 135, by the tap selection unit 112 in FIG. By selecting the same pixel, a prediction tap having the same tap structure as that obtained by the tap selection unit 112 is obtained and supplied to the adding unit 145.
- the tap selection unit 143 selects the pixel of interest from the low-quality pixels constituting the low-quality image data as the student data stored in the student data storage unit 135, by the tap selection unit 113 in FIG. As a result, a class tap having the same tap structure as that obtained by the tap selection unit 113 is obtained and supplied to the class classification unit 144.
- The class classification unit 144 performs, based on the class tap output from the tap selection unit 143, the same class classification as the class classification unit 114 in FIG. 25, and outputs the class code corresponding to the resulting class to the addition unit 145.
- The addition unit 145 reads the teacher data set as the target pixel from the teacher data storage unit 133, and performs addition on the target pixel and the student data (pixels) constituting the prediction tap supplied from the tap selection unit 142, for each class code supplied from the class classification unit 144.
- That is, the addition unit 145 is supplied with the teacher data y_k stored in the teacher data storage unit 133, the prediction tap x_{n,k} output from the tap selection unit 142, and the class code output from the class classification unit 144.
- For each class corresponding to the class code supplied from the class classification unit 144, the addition unit 145 uses the prediction taps (student data) x_{n,k} to perform the computation corresponding to the multiplications (x_{n,k} x_{n',k}) of student data and the summation (Σ) in the matrix on the left side of equation (8).
- Also, for each class corresponding to the class code supplied from the class classification unit 144, the addition unit 145 uses the prediction taps (student data) x_{n,k} and the teacher data y_k to perform the computation corresponding to the multiplications (x_{n,k} y_k) and the summation (Σ) in the vector on the right side of equation (8).
- That is, the addition unit 145 stores the component (Σ x_{n,k} x_{n',k}) of the matrix on the left side and the component (Σ x_{n,k} y_k) of the vector on the right side of equation (8) obtained previously for the teacher data set as the target pixel, and adds the corresponding components computed for the new target pixel to them.
- The addition unit 145 performs the above-described addition using all the teacher data stored in the teacher data storage unit 133 (FIG. 27) as the target pixel, thereby setting up the normal equation shown in equation (8) for each class, and then supplies the normal equations to the tap coefficient calculation unit 146.
- The tap coefficient calculation unit 146 solves the normal equation for each class supplied from the addition unit 145, thereby obtaining and outputting the optimal tap coefficient w_n for each class.
- The coefficient output unit 115 in the image conversion device 101 in FIG. 25 stores the tap coefficients w_n for each class obtained as described above.
- Depending on the way of selecting the image data serving as the student data corresponding to the first image data and as the teacher data corresponding to the second image data, tap coefficients that perform various image conversion processes can be obtained, as described above.
- That is, by performing tap coefficient learning with high-quality image data used as the teacher data corresponding to the second image data and low-quality image data (SD (Standard Definition) image data) in which the spatial resolution of the high-quality image data is degraded used as the student data corresponding to the first image data, as shown first in FIG., tap coefficients can be obtained that perform image conversion processing as spatial resolution creation processing, which converts the first image data into the second image data with improved spatial resolution.
- In this case, the first image data (student data) may have the same number of pixels as, or a smaller number of pixels than, the second image data (teacher data).
- Also, as the tap coefficients, it is possible to obtain tap coefficients that perform image conversion processing as noise removal processing, which converts first image data that is low-S/N image data into second image data that is high-S/N image data obtained by removing (reducing) the noise contained in the first image data.
- Further, as the tap coefficients, it is possible to obtain tap coefficients that perform image conversion processing as resizing processing (processing for changing the number of pixels), which converts first image data that is part or all of certain image data into second image data that is enlarged image data obtained by enlarging the first image data.
- The tap coefficients for performing the resizing processing can be obtained by tap coefficient learning with high-quality image data as the teacher data and, as the student data, low-quality image data obtained by degrading the spatial resolution of the high-quality image data by thinning out its pixels.
- Also, by performing tap coefficient learning with high-frame-rate image data as the teacher data and image data obtained by thinning out frames of that high-frame-rate image data as the student data, it is possible to obtain tap coefficients that perform image conversion processing as time resolution creation processing, which converts first image data having a predetermined frame rate into second image data having a higher frame rate.
- In step S121, the teacher data generation unit 132 and the student data generation unit 134 generate, from the learning image data stored in the learning image storage unit 131, teacher data corresponding to the second image data obtained by the image conversion processing and student data corresponding to the first image data to be subjected to the image conversion processing, and supply them to the teacher data storage unit 133 and the student data storage unit 135, respectively.
- In step S122, the target pixel selection unit 141 selects, from the teacher data stored in the teacher data storage unit 133, one that has not yet been set as the target pixel, as the target pixel, and the process proceeds to step S123.
- In step S123, the tap selection unit 142 selects, for the target pixel, pixels serving as student data to be prediction taps from the student data stored in the student data storage unit 135, and supplies them to the addition unit 145. The tap selection unit 143 also selects, for the target pixel, student data to be class taps from the student data stored in the student data storage unit 135, and supplies them to the class classification unit 144.
- In step S124, the class classification unit 144 classifies the target pixel based on the class tap for the target pixel, outputs the class code corresponding to the resulting class to the addition unit 145, and the process proceeds to step S125.
- In step S125, the addition unit 145 reads the target pixel from the teacher data storage unit 133 and performs the addition of equation (8) on the target pixel and the student data constituting the prediction tap selected for the target pixel supplied from the tap selection unit 142, for each class code supplied from the class classification unit 144, and the process proceeds to step S126.
- In step S126, the target pixel selection unit 141 determines whether teacher data that has not yet been set as the target pixel is stored in the teacher data storage unit 133. If it is determined in step S126 that such teacher data is still stored in the teacher data storage unit 133, the process returns to step S122, and the same processing is repeated thereafter.
- If it is determined in step S126 that no teacher data that has not yet been set as the target pixel is stored in the teacher data storage unit 133, the addition unit 145 supplies the matrix on the left side and the vector on the right side of equation (8) for each class, obtained by the processing of steps S122 through S126, to the tap coefficient calculation unit 146, and the process proceeds to step S127.
- In step S127, the tap coefficient calculation unit 146 solves the normal equation for each class composed of the matrix on the left side and the vector on the right side of equation (8) for each class supplied from the addition unit 145, thereby obtaining and outputting the tap coefficients w_n for each class, and the processing ends.
- Note that for a class for which the required number of normal equations for obtaining the tap coefficients cannot be obtained, the tap coefficient calculation unit 146 outputs, for example, default tap coefficients.
- FIG. 31 shows a configuration example of an image conversion apparatus 151 that is another image conversion apparatus that performs image conversion processing using class classification adaptation processing.
- The image conversion device 151 is configured in the same manner as the image conversion device 101 of FIG. 25, except that a coefficient output unit 155 is provided in place of the coefficient output unit 115.
- The coefficient output unit 155 is supplied with the class (class code) from the class classification unit 114, and is also supplied with a parameter z input from the outside in response to, for example, a user operation.
- the coefficient output unit 155 generates a tap coefficient for each class corresponding to the parameter z as described later, and predicts the tap coefficient of the class from the class classification unit 114 out of the tap coefficients for each class. The result is output to the calculation unit 116.
- FIG. 32 shows a configuration example of the coefficient output unit 155 of FIG.
- The coefficient generation unit 161 generates tap coefficients for each class based on the coefficient seed data stored in the coefficient seed memory 162 and the parameter z stored in the parameter memory 163, and stores them in the coefficient memory 164 in an overwriting manner.
- Coefficient seed memory 162 stores coefficient seed data for each class obtained by learning of coefficient seed data described later.
- Here, the coefficient seed data is data serving, so to speak, as the seed from which the tap coefficients are generated.
- the parameter memory 163 stores the parameter z input from the outside in accordance with a user operation or the like in an overwritten manner.
- The coefficient memory 164 stores the tap coefficients for each class supplied from the coefficient generation unit 161 (the tap coefficients for each class corresponding to the parameter z). Then, the coefficient memory 164 reads the tap coefficient of the class represented by the class code supplied from the class classification unit 114 (FIG. 31) and outputs it to the prediction calculation unit 116 (FIG. 31).
- In the image conversion device 151 (FIG. 31), when the parameter z is input to the coefficient output unit 155 from the outside, the parameter z is stored in the parameter memory 163 of the coefficient output unit 155 (FIG. 32) in an overwriting manner.
- When the parameter z is stored in the parameter memory 163, the coefficient generation unit 161 reads the coefficient seed data for each class from the coefficient seed memory 162 and the parameter z from the parameter memory 163, and obtains the tap coefficients for each class based on the coefficient seed data and the parameter z. Then, the coefficient generation unit 161 supplies the tap coefficients for each class to the coefficient memory 164, where they are stored in an overwriting manner.
- The image conversion device 151 performs the same processing as the processing according to the flowchart of FIG. 26 performed by the image conversion device 101 of FIG. 25, except that the coefficient output unit 155, provided in place of the coefficient output unit 115 that stores and outputs tap coefficients, generates and outputs the tap coefficients corresponding to the parameter z.
- Consider a case where high-quality image data (high-quality image data) is used as the second image data, low-quality image data (low-quality image data) obtained by reducing the spatial resolution of the high-quality image data is used as the first image data, a prediction tap is selected from the low-quality image data, and the pixel value of a high-quality pixel is obtained (predicted) using the prediction tap and the tap coefficients by, for example, the linear first-order prediction calculation of expression (1).
- Here, the pixel value y of the high-quality pixel can also be obtained by a higher-order expression of second or higher order rather than the linear first-order expression shown in expression (1).
- In the coefficient output unit 155 in FIG. 32, the coefficient generation unit 161 generates the tap coefficient w_n from the coefficient seed data stored in the coefficient seed memory 162 and the parameter z stored in the parameter memory 163.
- In equation (11), the tap coefficient w_n is given by a linear first-order equation of the coefficient seed data β_{m,n} and a variable t_m.
- In equation (13), x_{n,k} represents the n-th low-quality pixel constituting the prediction tap for the high-quality pixel of the k-th sample.
- Coefficient seed data β_{m,n} that makes the prediction error e_k of equation (14) equal to 0 is optimal for predicting high-quality pixels, but it is generally difficult to obtain such coefficient seed data for all high-quality pixels. Therefore, as a criterion indicating that the coefficient seed data β_{m,n} is optimal, for example, the least-squares method is adopted, and the optimal coefficient seed data β_{m,n} can be obtained by minimizing the sum E of square errors represented by the following equation. In equation (15), K represents the number of samples (learning samples) of sets of a high-quality pixel y_k and the low-quality pixels x_{1,k}, x_{2,k}, ..., x_{N,k} constituting the prediction tap for that high-quality pixel y_k.
- Equation (17) can be expressed by the normal equation shown in equation (20) using X_{i,p,j,q} and Y_{i,p}. Here, the matrix on the left side of equation (20) has components X_{i,p,j,q} = Σ_k x_{i,k} t_p x_{j,k} t_q, and the vector on the right side has components Y_{i,p} = Σ_k x_{i,k} t_p y_k, where the summation Σ_k runs over the K learning samples.
- The normal equation of equation (20) can be solved for the coefficient seed data β_{m,n} by using, for example, a sweep-out method (Gauss-Jordan elimination).
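- Solving the per-class normal equation can be sketched with standard linear algebra. This is a sketch, not the patent's implementation: NumPy's solver stands in for the sweep-out (Gauss-Jordan) method, and the optional ridge term is an added safeguard for near-singular systems (e.g. classes with too few learning samples).

```python
import numpy as np

def solve_seed(A, b, ridge=0.0):
    """Solve A beta = b for the coefficient seed data of one class.
    A: (N*M, N*M) accumulated left-side matrix of equation (20),
    b: (N*M,)  accumulated right-side vector of equation (20)."""
    n = A.shape[0]
    return np.linalg.solve(A + ridge * np.eye(n), b)
```

Any exact linear solver gives the same result as the sweep-out method here; the choice only matters for numerical robustness and speed.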
- In the image conversion device 151 of FIG. 31, the coefficient generation unit 161 generates the tap coefficient w_n for each class from the coefficient seed data β_{m,n} stored in the coefficient seed memory 162 and the parameter z stored in the parameter memory 163. Then, equation (1) is calculated using the tap coefficient w_n and the low-quality pixels (pixels of the first image data) x_n that constitute the prediction tap for the target pixel, whereby the pixel value of the target pixel as a high-quality pixel is obtained (predicted).
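- The two-step computation just described, generating w_n from β_{m,n} and z and then evaluating the linear prediction of equation (1), can be sketched as follows. Names are illustrative, and the exponent convention for the variable t_m (z to the power m versus m-1) is an assumption, since the equations are garbled in this text.

```python
import numpy as np

def generate_tap_coeffs(beta, z):
    """beta: (M, N) coefficient seed data for one class.
    Equation (9) form: w_n = sum_m beta[m, n] * t_m, with t_m = z**m
    (power convention assumed). Returns the N tap coefficients."""
    M, N = beta.shape
    t = np.array([z**m for m in range(M)])
    return t @ beta

def predict_pixel(w, prediction_tap):
    """Equation (1): pixel value y = sum_n w_n * x_n."""
    return float(np.dot(w, prediction_tap))
```

Because w_n is a polynomial in z, a single set of seed data yields tap coefficients for any value of the parameter z without storing a coefficient table per z.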
- FIG. 33 shows a configuration example of a learning device 171 that performs learning to obtain the coefficient seed data β_{m,n} for each class by solving the normal equation of equation (20) for each class.
- The learning device 171 is configured in the same manner as the learning device 121 described above, except that it includes a student data generation unit 174 and a learning unit 176 in place of the student data generation unit 134 and the learning unit 136, respectively, and is newly provided with a parameter generation unit 181.
- The student data generation unit 174 generates student data from the learning image data and supplies it to the student data storage unit 135 to be stored, similarly to the student data generation unit 134 described above.
- That is, the student data generation unit 174 generates, as student data, low-quality image data by filtering the high-quality image data as the learning image data with, for example, an LPF (low-pass filter) having a cutoff frequency corresponding to the parameter z supplied thereto.
- That is, the student data generation unit 174 generates Z+1 types of low-quality image data as student data, having different spatial resolutions, for the high-quality image data as the learning image data.
- Here, it is assumed that the larger the value of the parameter z, the higher the cutoff frequency of the LPF used to filter the high-quality image data and generate the low-quality image data as student data. Therefore, the low-quality image data corresponding to a larger parameter z has a higher spatial resolution.
- In the present embodiment, it is assumed that the student data generation unit 174 generates low-quality image data in which the spatial resolution of the high-quality image data is reduced, in both the horizontal and vertical directions, by an amount corresponding to the parameter z.
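- Student data generation per parameter z can be sketched as follows. This is a hedged illustration: a separable Gaussian blur stands in for the patent's LPF, and the mapping from z to blur strength is a hypothetical choice, chosen only so that a larger z gives a milder blur (higher effective cutoff).

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # normalized 1-D Gaussian kernel
    radius = radius or max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def make_student(teacher, z, z_max):
    """Lower the spatial resolution by an amount tied to z:
    larger z -> higher cutoff -> milder blur (assumed mapping)."""
    if z >= z_max:
        return teacher.astype(float)          # z = Z: no degradation
    sigma = 0.5 + (z_max - z)                 # hypothetical z -> sigma mapping
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    t = np.pad(teacher.astype(float), pad, mode="edge")
    # separable filtering: rows first, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, t)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)
```

Calling this for every z in 0, 1, ..., Z yields the Z+1 student-data frames per teacher frame described above.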
- The learning unit 176 obtains and outputs the coefficient seed data for each class by using the teacher data stored in the teacher data storage unit 133, the student data stored in the student data storage unit 135, and the parameter z supplied from the parameter generation unit 181.
- FIG. 34 shows a configuration example of the learning unit 176 in FIG.
- portions corresponding to those in the learning unit 136 in FIG. 28 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- The tap selection unit 192 selects, from the low-quality pixels constituting the low-quality image data as the student data stored in the student data storage unit 135, a prediction tap for the target pixel having the same tap structure as that selected by the tap selection unit 112 of FIG. 31, and supplies it to the addition unit 195.
- Similarly, the tap selection unit 193 selects, from the low-quality pixels constituting the low-quality image data as the student data stored in the student data storage unit 135, a class tap for the target pixel having the same tap structure as that selected by the tap selection unit 113 of FIG. 31, and supplies it to the class classification unit 144.
- However, the parameter z generated by the parameter generation unit 181 of FIG. 33 is supplied to the tap selection units 192 and 193, and the tap selection units 192 and 193 select the prediction tap and the class tap from the student data generated corresponding to the parameter z supplied from the parameter generation unit 181 (here, the low-quality image data as student data generated using the LPF having the cutoff frequency corresponding to the parameter z).
- The addition unit 195 reads out the target pixel from the teacher data storage unit 133 of FIG. 33 and performs, for each class supplied from the class classification unit 144, addition on the target pixel, the student data constituting the prediction tap for the target pixel supplied from the tap selection unit 192, and the parameter z used when that student data was generated.
- That is, the addition unit 195 is supplied with the teacher data y_k stored in the teacher data storage unit 133 as the target pixel and with the prediction tap x_{i,k} (x_{j,k}) for the target pixel output from the tap selection unit 192, and the parameter z used when the student data constituting the prediction tap for the target pixel was generated is supplied from the parameter generation unit 181.
- Then, for each class corresponding to the class code supplied from the class classification unit 144, the addition unit 195 uses the prediction tap (student data) x_{i,k} (x_{j,k}) and the parameter z to perform the multiplications (x_{i,k} t_p x_{j,k} t_q) and the summation (Σ) for obtaining the component X_{i,p,j,q} of the matrix on the left side of equation (20). Furthermore, again for each class corresponding to the class code supplied from the class classification unit 144, the addition unit 195 uses the prediction tap (student data) x_{i,k}, the teacher data y_k, and the parameter z to perform the multiplications (x_{i,k} t_p y_k) and the summation (Σ) for obtaining the component Y_{i,p} of the vector on the right side of equation (20).
- That is, the addition unit 195 stores in its built-in memory (not shown) the component X_{i,p,j,q} of the matrix on the left side and the component Y_{i,p} of the vector on the right side of equation (20) obtained for the teacher data previously set as the target pixel, and adds to those components the corresponding components newly calculated for the teacher data newly set as the target pixel (performs the addition represented by the summation Σ in equation (20)).
- Then, when the addition unit 195 has performed the above addition using all the teacher data stored in the teacher data storage unit 133 as the target pixel and for the parameter z of all the values 0, 1, ..., Z, thereby establishing the normal equation shown in equation (20) for each class, it supplies the normal equation to the coefficient seed calculation unit 196.
- The coefficient seed calculation unit 196 obtains and outputs the coefficient seed data β_{m,n} for each class by solving the normal equation for each class supplied from the addition unit 195.
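- The per-class accumulation performed by the addition unit can be sketched as follows. This is a minimal illustration with hypothetical names; the key point is that each sample (x, y, z) contributes an outer product to the left-side matrix and a scaled vector to the right-side vector of equation (20), accumulated per class.

```python
import numpy as np

class NormalEqAccumulator:
    """Accumulate, per class, the left-side matrix and right-side vector of
    equation (20). N = prediction-tap length, M = seed terms per coefficient."""
    def __init__(self, N, M):
        self.N, self.M = N, M
        self.A = {}   # class -> (N*M, N*M) left-side matrix
        self.b = {}   # class -> (N*M,)    right-side vector

    def add(self, cls, x, y, z):
        # v stacks x_i * t_p for all (i, p); t_p = z**p (power convention assumed)
        t = np.array([z**p for p in range(self.M)])
        v = np.outer(x, t).ravel()
        if cls not in self.A:
            self.A[cls] = np.zeros((len(v), len(v)))
            self.b[cls] = np.zeros(len(v))
        self.A[cls] += np.outer(v, v)   # accumulates sum_k x_i t_p x_j t_q
        self.b[cls] += v * y            # accumulates sum_k x_i t_p y_k
```

After all target pixels and all values of z have been added, solving A beta = b per class yields that class's coefficient seed data.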
- In step S131, the teacher data generation unit 132 and the student data generation unit 174 generate and output teacher data and student data, respectively, from the learning image data stored in the learning image storage unit 131. That is, the teacher data generation unit 132 outputs the learning image data as it is as teacher data, for example. Further, the student data generation unit 174 is supplied with the parameter z of Z+1 values generated by the parameter generation unit 181, and, for example, by filtering the learning image data with LPFs having cutoff frequencies corresponding to the Z+1 values (0, 1, ..., Z) of the parameter z from the parameter generation unit 181, generates and outputs Z+1 frames of student data for each frame of the teacher data (learning image data).
- The teacher data output from the teacher data generation unit 132 is supplied to and stored in the teacher data storage unit 133, and the student data output from the student data generation unit 174 is supplied to and stored in the student data storage unit 135.
- Thereafter, in step S132, the parameter generation unit 181 sets the parameter z to, for example, 0 as an initial value, supplies it to the tap selection units 192 and 193 and the addition unit 195 of the learning unit 176 (FIG. 34), and the process proceeds to step S133.
- In step S133, the target pixel selection unit 141 selects, as the target pixel, one of the teacher data stored in the teacher data storage unit 133 that has not yet been set as the target pixel, and the process proceeds to step S134.
- In step S134, the tap selection unit 192 selects a prediction tap for the target pixel from the student data for the parameter z output by the parameter generation unit 181 and stored in the student data storage unit 135 (the student data generated by filtering the learning image data corresponding to the teacher data serving as the target pixel with the LPF having the cutoff frequency corresponding to the parameter z), and supplies it to the addition unit 195.
- Further, the tap selection unit 193 also selects a class tap for the target pixel from the student data for the parameter z output by the parameter generation unit 181 and stored in the student data storage unit 135, and supplies it to the class classification unit 144.
- In step S135, the class classification unit 144 classifies the target pixel based on the class tap for the target pixel, outputs the class of the target pixel obtained as a result to the addition unit 195, and the process proceeds to step S136.
- In step S136, the addition unit 195 reads the target pixel from the teacher data storage unit 133 and, using the target pixel, the prediction tap supplied from the tap selection unit 192, and the parameter z output by the parameter generation unit 181, calculates the component x_{i,k} t_p x_{j,k} t_q of the matrix on the left side and the component x_{i,k} t_p y_k of the vector on the right side of equation (20). Furthermore, the addition unit 195 adds the matrix component and the vector component thus obtained to those, among the matrix components and vector components already obtained, that correspond to the class of the target pixel from the class classification unit 144, and the process proceeds to step S137.
- In step S137, the parameter generation unit 181 determines whether or not the parameter z output by itself is equal to Z, the maximum value that it can take. If it is determined in step S137 that the parameter z output by the parameter generation unit 181 is not equal to the maximum value Z (is less than the maximum value Z), the process proceeds to step S138, where the parameter generation unit 181 adds 1 to the parameter z and outputs the added value as a new parameter z to the tap selection units 192 and 193 and the addition unit 195 of the learning unit 176 (FIG. 34). Thereafter, the process returns to step S134, and the same processing is repeated.
- On the other hand, if it is determined in step S137 that the parameter z is equal to the maximum value Z, the process proceeds to step S139, where the target pixel selection unit 141 determines whether or not teacher data that has not yet been set as the target pixel is still stored in the teacher data storage unit 133.
- If it is determined in step S139 that teacher data that has not yet been set as the target pixel is still stored in the teacher data storage unit 133, the process returns to step S132, and the same processing is repeated thereafter.
- On the other hand, if it is determined in step S139 that teacher data that has not yet been set as the target pixel is not stored in the teacher data storage unit 133, the addition unit 195 supplies the matrix on the left side and the vector on the right side of equation (20) for each class, obtained by the processing so far, to the coefficient seed calculation unit 196, and the process proceeds to step S140.
- In step S140, the coefficient seed calculation unit 196 solves the normal equation for each class, composed of the matrix on the left side and the vector on the right side of equation (20) for each class supplied from the addition unit 195, thereby obtaining and outputting the coefficient seed data β_{m,n} for each class, and the processing ends. Note that for a class for which the required number of normal equations cannot be obtained, the coefficient seed calculation unit 196 outputs, for example, default coefficient seed data.
- In the coefficient seed data learning, as in the tap coefficient learning described above, coefficient seed data for performing various kinds of image conversion processing can be obtained depending on how the image data used as the student data corresponding to the first image data and as the teacher data corresponding to the second image data is selected.
- That is, in the case described above, the coefficient seed data is learned by using the learning image data as it is as the teacher data corresponding to the second image data and using low-quality image data in which the spatial resolution of the learning image data is degraded as the student data corresponding to the first image data. Therefore, it is possible to obtain coefficient seed data for performing image conversion processing as spatial resolution creation processing that converts the first image data into second image data with improved spatial resolution.
- the horizontal resolution and the vertical resolution of the image data can be improved to a resolution corresponding to the parameter z.
- Further, for example, by learning the coefficient seed data using high-quality image data as teacher data and using, as student data, image data obtained by superimposing noise at a level corresponding to the parameter z on the high-quality image data as the teacher data, it is possible to obtain coefficient seed data for performing image conversion processing as noise removal processing that converts the first image data into second image data from which the noise contained therein has been removed (reduced).
- In this case, the image conversion device 151 of FIG. 31 can obtain image data with an S/N corresponding to the parameter z.
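- Generating the noisy student data for this noise-removal learning can be sketched as follows. The mapping from z to noise level is a hypothetical choice (here a larger z means less noise, so z plays the same "quality" role as in the resolution case), and Gaussian noise stands in for whatever noise model is actually superimposed.

```python
import numpy as np

def make_noisy_student(teacher, z, z_max, rng=None, max_sigma=25.0):
    """Superimpose noise at a level tied to the parameter z.
    z = z_max -> no noise; z = 0 -> strongest noise (assumed mapping)."""
    rng = rng or np.random.default_rng(0)
    sigma = max_sigma * (z_max - z) / z_max
    return teacher.astype(float) + rng.normal(0.0, sigma, teacher.shape)
```

Learning seed data over all z then yields tap coefficients whose denoising strength can be dialed continuously with z at conversion time.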
- Further, for example, by learning the coefficient seed data using certain image data as teacher data and using, as student data, image data obtained by thinning out the number of pixels of the image data as the teacher data at a rate corresponding to the parameter z, or by using image data of a predetermined size as student data and using, as teacher data, image data obtained by thinning out the pixels of the image data as the student data at a thinning rate corresponding to the parameter z, it is possible to obtain coefficient seed data for performing image conversion processing as resizing processing that converts the first image data into second image data whose size is enlarged or reduced.
- In this case, the image conversion device 151 of FIG. 31 can obtain image data resized to a size (number of pixels) corresponding to the parameter z.
- In the case described above, the tap coefficient w_n is defined as β_{1,n} z^0 + β_{2,n} z^1 + ... + β_{M,n} z^{M-1}, as shown in equation (9), and a tap coefficient w_n that improves the spatial resolution in both the horizontal and vertical directions corresponding to the parameter z is obtained. However, it is also possible to obtain a tap coefficient w_n that independently improves the horizontal resolution and the vertical resolution corresponding to independent parameters z_x and z_y. That is, the tap coefficient w_n is defined by, for example, a cubic expression in z_x and z_y such as β_{1,n} z_x^0 z_y^0 + β_{2,n} z_x^1 z_y^0 + β_{3,n} z_x^2 z_y^0 + ..., instead of equation (9), and the variable t_m defined by equation (10) is redefined by the corresponding monomials in z_x and z_y. In this case as well, the tap coefficient w_n can finally be expressed by equation (11). Therefore, by performing learning in the learning device 171 of FIG. 33 using, as student data, image data in which the horizontal resolution and the vertical resolution of the teacher data are degraded corresponding to the parameters z_x and z_y, and thereby obtaining the coefficient seed data β_{m,n}, it is possible to obtain a tap coefficient w_n that independently improves the horizontal resolution and the vertical resolution corresponding to the independent parameters z_x and z_y. Further, similarly, it is also possible to obtain a tap coefficient w_n that independently improves the horizontal resolution, the vertical resolution, and the temporal resolution corresponding to independent parameters z_x, z_y, and z_t.
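- The two-parameter coefficient generation can be sketched as follows. Since the exact monomial ordering of the cubic expression is garbled in this text, the term set below (all monomials z_x^p z_y^q with p + q up to a given degree) is an assumption, and the names are illustrative.

```python
import numpy as np
from itertools import product

def tap_coeffs_2d(beta, zx, zy, degree=3):
    """beta: (M, N) seed data, one row per monomial zx^p * zy^q with
    p + q <= degree (an assumed ordering). Returns the N tap coefficients
    w_n = sum_m beta[m, n] * t_m, with t_m the monomial values."""
    monomials = [zx**p * zy**q
                 for p, q in product(range(degree + 1), repeat=2)
                 if p + q <= degree]
    t = np.array(monomials)
    assert len(t) == beta.shape[0], "beta rows must match monomial count"
    return t @ beta
```

Whatever the exact term set, the tap coefficient remains linear in the seed data, so the same normal-equation learning of equation (20) applies with the enlarged variable vector t.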
- Also, in the learning device 171 of FIG. 33, by performing learning using, as student data, image data in which the horizontal resolution and the vertical resolution of the teacher data are degraded corresponding to one parameter z_x and noise is added to the teacher data corresponding to another parameter z_y, and thereby obtaining the coefficient seed data β_{m,n}, it is possible to obtain a tap coefficient w_n that improves the horizontal resolution and the vertical resolution corresponding to the parameter z_x and performs noise removal corresponding to the parameter z_y.
- Further, in the learning device 171 of FIG. 33, the coefficient seed data can be learned by using, as teacher data, image data whose numbers of pixels in the horizontal and vertical directions are each a multiple m of those of the check image data, and by using, as student data, image data having the same number of pixels as the check image data.
- In the case where the image conversion unit 31 is configured by the image conversion device 151 of FIG. 31, the coefficient seed data obtained in this way is used in the image conversion device 151 (FIG. 31) as the image conversion unit 31. Then, by giving the enlargement factor m as the parameter z to the image conversion device 151 as the image conversion unit 31, the image conversion device 151 as the image conversion unit 31 can perform, by the class classification adaptive processing, image conversion processing that converts the check image data into m-times enlarged image data having m times the number of pixels.
- The series of processing described above can be performed by hardware or by software. When the series of processing is performed by software, a program constituting the software is installed in a microcomputer, a general-purpose computer, or the like.
- FIG. 36 shows an example of the configuration of an embodiment of a computer on which a program for executing the series of processes described above is installed.
- the program can be recorded in advance in a hard disk 205 or ROM 203 as a recording medium built in the computer.
- Alternatively, the program can be stored (recorded) in a removable recording medium 211 such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a magnetic disk, or a semiconductor memory.
- Such a removable recording medium 211 can be provided as so-called packaged software.
- In addition to being installed on the computer from the removable recording medium 211 as described above, the program can be transferred to the computer wirelessly from a download site via an artificial satellite for digital satellite broadcasting, or transferred to the computer by wire via a network such as a LAN (Local Area Network) or the Internet; the computer can receive the program transferred in this way by the communication unit 208 and install it in the built-in hard disk 205.
- The computer has a CPU (Central Processing Unit) 202 built in. An input/output interface 210 is connected to the CPU 202 via the bus 201, and when a command is input by the user operating an input unit 207 composed of a keyboard, a mouse, a microphone, and the like via the input/output interface 210, the CPU 202 executes a program stored in the ROM (Read Only Memory) 203 accordingly.
- Alternatively, the CPU 202 loads into the RAM (Random Access Memory) 204 and executes a program stored in the hard disk 205, a program transferred from a satellite or a network, received by the communication unit 208, and installed in the hard disk 205, or a program read from the removable recording medium 211 mounted in the drive 209 and installed in the hard disk 205.
- the CPU 202 performs processing according to the above-described flow chart or processing performed by the above-described configuration of the block diagram.
- Then, the CPU 202 outputs the processing result as necessary, for example, from an output unit 206 composed of an LCD (Liquid Crystal Display), a speaker, and the like via the input/output interface 210, transmits it from the communication unit 208, or records it on the hard disk 205.
- In the case described above, three images are displayed on the display device 2 simultaneously with the check image, but the number of images displayed simultaneously with the check image may be 1, 2, or 4 or more.
- the screen of display device 2 is divided into four display areas # 0 to # 3 which are equally divided horizontally and vertically, and images are displayed in each of the display areas # 0 to # 3.
- However, the screen of the display device 2 can also be divided into another number of display areas, such as 2, 8, or 16, and an image can be displayed in each display area.
- Further, the arrangement of the display areas is not limited to a matrix arrangement, and a display area can be arranged at an arbitrary position on the screen of the display device 2.
- Further, in the case described above, an LCD is adopted as the display device 2, but as the display device 2, for example, a CRT, a PDP, an organic EL display, a projector (a front projector that irradiates light from the front of the screen or a rear projector that irradiates light from the back of the screen), an FED, or the like can also be adopted.
- Further, in the case described above, the signal processing unit 12 performs signal processing for generating post-processing image data for displaying, on the display device 2 that is an LCD, images equivalent to images displayed on an organic EL display, a PDP, and a CRT, and those images are displayed on the display device 2; however, the signal processing unit 12 can also perform signal processing for generating post-processing image data for displaying, on the display device 2 that is an LCD, an image equivalent to an image displayed on an FED, a front projector, a rear projector, or the like, and display that image on the display device 2.
- [Embodiment in which an FPD display device performs FPD (Flat Panel Display) signal processing including ABL (Automatic Beam Current Limiter) processing, VM (Velocity Modulation) processing, and CRT (Cathode Ray Tube) γ processing, whereby the FPD display device performs natural display equivalent to that of a CRT display device]
- FIG. 37 shows a configuration example of a conventional display device having an FPD (FPD display device) such as an LCD (Liquid Crystal Display).
- The brightness adjustment/contrast adjustment unit 10011 adjusts the brightness of the input image signal by applying an offset to it and adjusts the contrast of the image signal by adjusting its gain, and supplies the result to the high image quality processing unit 10012.
- The high image quality processing unit 10012 performs high image quality processing represented by DRC (Digital Reality Creation). That is, the high image quality processing unit 10012 is a processing block for obtaining a high-quality image, and performs image signal processing including pixel number conversion on the image signal from the brightness adjustment/contrast adjustment unit 10011 and supplies the result to the γ correction unit 10013.
- Here, DRC is described as class classification adaptive processing in, for example, JP-A-2005-236634 and JP-A-2002-223167.
- The γ correction unit 10013 is a processing block for performing γ correction processing that adjusts the signal level of dark portions by signal processing, in addition to the γ characteristic of the phosphor itself (the light emitting part of the CRT), because dark portions are difficult to see on a CRT display device.
- Note that the LCD panel also incorporates a processing circuit that matches the photoelectric conversion characteristics (transmission characteristics) of the liquid crystal to the γ characteristics of a CRT, so the conventional FPD display device performs the same γ correction processing as a CRT display device.
- The γ correction unit 10013 performs γ correction processing on the image signal from the high image quality processing unit 10012 and supplies the resulting image signal to, for example, an LCD (not shown) as the FPD. As a result, an image is displayed on the LCD.
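- The γ correction itself is a simple power-law pre-distortion of the normalized signal. A minimal sketch follows; γ = 2.2 is a typical CRT-like exponent used here purely as an illustrative assumption, not a value stated in this document.

```python
import numpy as np

def gamma_correct(signal, gamma=2.2):
    """Pre-distort a linear signal in [0, 1] with the inverse of the
    display's power-law response, so the displayed brightness tracks
    the input (the display then applies s**gamma)."""
    s = np.clip(np.asarray(signal, dtype=float), 0.0, 1.0)
    return s ** (1.0 / gamma)
```

Dark-portion level adjustment, as performed by the γ correction unit, would add a further nonlinear segment near black on top of this basic curve.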
- As described above, in the conventional FPD display device, after the brightness and contrast adjustment processing is performed, the image signal is input directly to the FPD through the high image quality processing and the γ correction processing.
- Therefore, the brightness of the displayed image is proportional to the input through the γ characteristic, but if the displayed image is too bright compared with that of a CRT display device, it ends up being an image that feels glaring.
- The fact that an image displayed on an FPD display device is brighter than that on a CRT display device is due to the fact that only the image signal processing system, which performs processing on the existing image signal, was modified for the FPD and installed in the FPD display device; in a CRT display device, not only the image signal processing system but also the characteristic response of the drive system itself and the display system, that is, the structure of the overall system, contribute through comprehensive signal processing.
- Fig. 38 shows a configuration example of an embodiment of an image signal processing device included in an FPD display device capable of performing natural display equivalent to a CRT display device.
- The image signal processing device in FIG. 38 processes the image signal so that, when the image signal is displayed on a display device of a display method other than CRT, that is, here, for example, an FPD display device having an FPD such as an LCD, it appears like an image displayed on a CRT display device.
- First, the CRT display device that the image signal processing device of FIG. 38 emulates, that is, the CRT display device displaying the image that the image signal processing device of FIG. 38 reproduces, will be described.
- FIG. 39 shows a configuration example of the CRT display device.
- In FIG. 39, the brightness adjustment/contrast adjustment unit 10051 and the high image quality processing unit 10052 perform the same processing as the brightness adjustment/contrast adjustment unit 10011 and the high image quality processing unit 10012 of FIG. 37, respectively, and the processed image signal is supplied to the gain adjustment unit 10053 and the image signal differentiation circuit 10060.
- The gain adjustment unit (limiter) 10053 limits the signal level of the image signal from the high image quality processing unit 10052 according to an ABL control signal from the ABL control unit 10059 described later, and supplies the result to the γ correction unit 10054. That is, the gain adjustment unit 10053 adjusts the gain of the image signal from the high image quality processing unit 10052, rather than directly limiting the electron beam current amount of the CRT 10056 described later.
- The γ correction unit 10054 performs, on the image signal from the gain adjustment unit 10053, the same γ correction processing as the γ correction unit 10013 of FIG. 37, and supplies the resulting image signal to the video amplifier 10055.
- The video amplifier 10055 amplifies the image signal from the γ correction unit 10054 and supplies it to the CRT 10056 as a CRT drive image signal.
- The FBT (Flyback Transformer) 10057 generates the horizontal deflection drive current for performing horizontal scanning of the electron beam and the anode voltage of the CRT (cathode ray tube) 10056 in the CRT display device, and its output is supplied to the beam current detection unit 10058.
- the beam current detection unit 10058 detects the amount of electron beam current necessary for ABL control from the output of the FBT 10057 and supplies it to the CRT 10056 and the ABL control unit 10059.
- the ABL control unit 10059 measures the current value of the electron beam from the beam current detection unit 10058 and outputs an ABL control signal for controlling the signal level of the image signal to the gain adjustment unit 10053 .
- the image signal differentiating circuit 10060 differentiates the image signal from the image quality enhancement processing unit 10052 and supplies the differential value of the image signal obtained as a result to the VM drive circuit 10061.
- The VM (Velocity Modulation) drive circuit 10061 performs VM processing that changes the display brightness, even for the same image signal, by partially changing the deflection (horizontal deflection) speed of the electron beam in the CRT display device.
- VM processing is implemented using a dedicated VM coil (not shown) and the VM drive circuit 10061, separately from the main horizontal deflection circuit (consisting of the deflection yoke DY, the FBT 10057, a horizontal drive circuit (not shown), and the like). That is, the VM drive circuit 10061 generates a VM coil drive signal for driving the VM coil based on the differential value of the image signal from the image signal differentiation circuit 10060, and supplies the VM coil drive signal to the CRT 10056.
- the CRT 10056 includes an electron gun EG, a deflection yoke DY, and the like.
- In the CRT 10056, the electron gun EG emits an electron beam according to the output of the beam current detection unit 10058 and the CRT drive image signal from the video amplifier 10055, and the electron beam is deflected (scanned) in the horizontal and vertical directions in accordance with the magnetic field generated by the deflection yoke DY, which is a coil, and collides with the phosphor screen of the CRT 10056, whereby an image is displayed.
- Further, in the CRT 10056, the VM coil is driven in accordance with the VM coil drive signal from the VM drive circuit 10061, whereby the deflection speed of the electron beam is partially changed and, for example, the edges of the image displayed on the CRT 10056 are emphasized.
- As described above, in the CRT display device, the VM processing that partially changes the deflection speed of the electron beam and the ABL control signal of the ABL processing affect the quality of the image displayed on the CRT 10056.
- Returning to the image signal processing device of FIG. 38, by converting the image signal in the processing order shown in FIG. 38, the image signal processing device adapts to the FPD driving method and can perform display similar to that of a CRT display device.
- In FIG. 38, the brightness adjustment/contrast adjustment unit 10031 and the high image quality processing unit 10032 perform the same processing as the brightness adjustment/contrast adjustment unit 10011 and the high image quality processing unit 10012 of FIG. 37, respectively, and the processed image signal is supplied to the ABL processing unit 10033, the full screen brightness average level detection unit 10036, and the peak detection differential control value detection unit 10037.
- When the image has a certain level of brightness (luminance and area), the ABL processing unit 10033 performs ABL emulation processing that limits the level of the image signal from the high image quality processing unit 10032 according to control from the ABL control unit 10038.
- Here, the ABL emulation processing in FIG. 38 is processing for emulating the ABL processing in FIG. 39. The ABL processing performed in the CRT display device is processing that limits the current so that the electron beam (current) does not become excessive in the CRT when the brightness (luminance and area) exceeds a certain level.
- In FIG. 38, the ABL processing unit 10033 performs emulation of this ABL processing. That is, the ABL processing unit 10033 performs the processing (ABL emulation processing) of reducing the actual display brightness when an attempt is made to display a bright image over a large area, which a CRT achieves by limiting the electron beam current, as nonlinear arithmetic processing that limits the signal level of the image signal.
- The full screen brightness average level detection unit 10036 detects the brightness and average level of the screen based on the image signal from the high image quality processing unit 10032, and supplies them to the peak detection differential control value detection unit 10037 and the ABL control unit 10038.
- The ABL control unit 10038 detects the brightness of the screen and its area based on the detection results of the screen brightness and average level from the full screen brightness average level detection unit 10036, thereby generating a control signal for limiting the screen brightness, and supplies it to the ABL processing unit 10033.
- The ABL processing unit 10033 realizes (emulates) the ABL processing by performing the above-described nonlinear operation on the image signal from the high image quality processing unit 10032 based on the control signal from the ABL control unit 10038.
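- One possible form of such a nonlinear limiting operation is sketched below: measure the mean luminance of the frame and apply a soft gain reduction once it exceeds a threshold, so bright, large-area content is dimmed as a CRT's beam-current limiter would dim it. The threshold and knee strength are hypothetical tuning values, not taken from the patent.

```python
import numpy as np

def abl_emulate(frame, threshold=0.6, strength=0.8):
    """Soft-limit overall brightness, mimicking ABL on a normalized frame.
    Below `threshold` mean luminance the signal passes unchanged; above it,
    a gain < 1 is applied that shrinks as the mean grows."""
    frame = np.clip(np.asarray(frame, dtype=float), 0.0, 1.0)
    mean = frame.mean()
    if mean <= threshold:
        return frame
    gain = threshold / (threshold + strength * (mean - threshold))
    return frame * gain
```

Driving `threshold` and `strength` from the detected screen brightness and area corresponds to the control signal supplied by the ABL control unit 10038.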
- the image signal subjected to ABL processing in the ABL processing unit 10033 is supplied to the VM processing unit 10034.
- The VM processing unit 10034 is a processing block for performing, on the image signal, processing equivalent to the VM processing in the CRT display device of FIG. 39, and performs emulation of the VM processing performed in that CRT display device.
- The peak detection differential control value detection unit 10037 detects, from the image signal from the high image quality processing unit 10032, a partial peak signal of the image signal and an edge signal obtained by differentiating the image signal, and supplies them to the VM control unit 10039 together with the brightness and average level of the screen from the full screen brightness average level detection unit 10036.
- Based on the partial peak signal of the image signal, the edge signal obtained by differentiating the image signal, the brightness of the screen, and the like from the peak detection differential control value detection unit 10037, the VM control unit 10039 generates a VM control signal, corresponding to the VM coil drive signal in the CRT display device, for partially changing the level of the image signal, and supplies it to the VM processing unit 10034.
- the VM processing unit 10034 performs processing that partially changes the level of the image signal from the ABL processing unit 10033 in accordance with the VM control signal generated by the VM control unit 10039, that is, partial correction of the image signal and processing such as edge enhancement and peak enhancement of the image signal.
- in the CRT display device, the VM processing does not apply a correction to the image signal itself; instead, the deflection yoke changes the deflection speed (time) of the horizontal deflection peculiar to the CRT 10056, and the luminance is changed as a result.
- the VM processing unit 10034 calculates a correction value corresponding to the luminance change caused by the VM processing performed in the CRT display device, and performs a calculation process that corrects the image signal using the correction value, thereby emulating the VM processing performed in the CRT display device.
- the CRT gamma processing unit 10035 performs processing that adjusts the level of each color signal (component signal) in order to carry out the processing that a conventional LCD panel performed internally in its processing circuit (conversion circuit) to obtain the same gamma characteristics as a CRT, together with gamma correction processing and color temperature compensation processing.
- in this embodiment, the CRT gamma processing unit 10035 in Fig. 38 corrects not only the CRT characteristics on the same LCD screen but also the electro-optical conversion characteristics necessary for expressing multiple display characteristics such as those of a PDP or an LED display, and performs the processing necessary for matching the input voltage-transmittance characteristics of the LCD to the brightness characteristics of the CRT.
- in a system in which the LCD display screen is divided into a plurality of display areas (for example, display areas #0 to #3 in FIG. 2) and each display area presents an image of the same quality as an image that would be displayed on display devices having a plurality of different display characteristics (for example, the monitor system in FIG. 1), the display color temperature compensation control unit 10040 generates a control signal for displaying with the color temperature that would be used for display on a CRT, performing control that adjusts the balance of each color signal (component signal), and supplies it to the CRT γ processing unit 10035.
- the CRT gamma processing unit 10035 also performs processing for adjusting the balance of the color signals of the image signal from the VM processing unit 10034 in accordance with the control signal from the display color temperature compensation control unit 10040.
- for this purpose, the display color temperature compensation controller 10040 shown in Fig. 38 is required.
- in accordance with the control signal from the display color temperature compensation control unit 10040, the CRT γ processing unit 10035 performs the processing that was performed inside a flat panel such as an LCD by the processing circuit that converts each panel's gradation characteristics to be equivalent to those of a CRT, absorbing the differences in characteristics between display panels.
- the CRT γ processing unit 10035 performs the above processing on the image signal from the VM processing unit 10034, and then supplies the processed image signal to an LCD (not shown) as an FPD for display.
- because the processing performed in the CRT display device is not simply replaced with image signal processing, but the processing order is also taken into account (the VM processing unit 10034 operates after the ABL processing unit 10033, and the CRT γ processing unit 10035 after the VM processing unit 10034), the display on the LCD can be made to approach the image quality of an image displayed on a CRT display device more faithfully. Therefore, according to the image signal processing apparatus of FIG. 38, an image can be output to the LCD with display characteristics equivalent to those of a CRT.
- with the image signal processing device of Fig. 38, it is also possible to emulate display characteristics arising from differences between individual CRTs, and to switch between the resulting differences in color and texture on the same LCD. For example, by comparing the color difference between an EBU phosphor and a general phosphor on the same screen, color adjustment and image quality adjustment at the time of transmission can be performed easily.
- an image can thus be displayed with the "preferred image quality" in the original sense.
- furthermore, by changing the processing range within the display screen, images as they would appear on display devices having different characteristics (for example, CRTs with different phosphors, or an LCD and a CRT) can be displayed side by side, making it easy to use the images for comparison and adjustment.
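The split-screen comparison described above can be sketched as routing each display area through a different emulation function. The quadrant layout and the per-area emulators below are illustrative assumptions (the patent's areas #0 to #3 need not be quadrants):

```python
import numpy as np

def split_screen_compare(frame, emulators):
    """Divide the screen into four display areas (#0..#3, here quadrants,
    an illustrative layout) and run a different display-characteristic
    emulation in each, so results can be compared side by side."""
    frame = np.asarray(frame, dtype=np.float64)
    h, w = frame.shape[:2]
    out = frame.copy()
    regions = [(slice(0, h // 2), slice(0, w // 2)),   # area #0: top-left
               (slice(0, h // 2), slice(w // 2, w)),   # area #1: top-right
               (slice(h // 2, h), slice(0, w // 2)),   # area #2: bottom-left
               (slice(h // 2, h), slice(w // 2, w))]   # area #3: bottom-right
    for region, emulate in zip(regions, emulators):
        out[region] = emulate(frame[region])
    return out
```

For instance, passing four gamma curves emulating different panels lets the same source image be compared under four display characteristics at once.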
- in step S10011, the brightness adjustment contrast adjustment unit 10031 adjusts the brightness and then the contrast of the image signal supplied thereto, supplies the result to the image quality enhancement processing unit 10032, and the process proceeds to step S10012.
- in step S10012, the image quality improvement processing unit 10032 performs image signal processing, including pixel number conversion, on the image signal from the brightness adjustment contrast adjustment unit 10031, supplies the processed image signal to the ABL processing unit 10033, the full screen brightness average level detection unit 10036, and the peak detection differential control value detection unit 10037, and the process proceeds to step S10013.
- the full screen brightness average level detection unit 10036 detects the screen brightness and average level based on the image signal from the image quality enhancement processing unit 10032, and supplies them to the peak detection differential control value detection unit 10037 and the ABL control unit 10038.
- the ABL control unit 10038 generates a control signal for limiting the screen brightness based on the screen brightness and average level detection results from the full screen brightness average level detection unit 10036, and the ABL processing unit Supply to 10033.
- the peak detection differential control value detection unit 10037 obtains a partial peak signal of the image signal and an edge signal obtained by differentiation of the image signal from the image signal from the image quality enhancement processing unit 10032. Then, together with the screen brightness and the average level from the full screen brightness average level detection unit 10036, it is supplied to the VM control unit 10039.
- the VM control unit 10039 generates, based on the partial peak signal of the image signal from the peak detection differential control value detection unit 10037, the edge signal obtained by differentiating the image signal, the brightness of the screen, and the like, a VM control signal corresponding to the VM coil drive signal in the CRT display device, and supplies it to the VM processing unit 10034.
- in step S10013, the ABL processing unit 10033 applies a process that emulates ABL processing to the image signal from the image quality enhancement processing unit 10032.
- that is, the ABL processing unit 10033 performs ABL emulation processing, such as limiting the level of the image signal from the image quality enhancement processing unit 10032 in accordance with the control from the ABL control unit 10038, and supplies the resulting image signal to the VM processing unit 10034.
- thereafter, the process proceeds from step S10013 to step S10014, and the VM processing unit 10034 applies a process that emulates the VM processing to the image signal from the ABL processing unit 10033.
- the VM processing unit 10034 emulates VM processing such as correcting the luminance of the image signal from the ABL processing unit 10033 in accordance with the VM control signal supplied from the VM control unit 10039 in step S10014.
- the image signal obtained as a result is supplied to the CRT γ processing unit 10035, and the process proceeds to step S10015.
- in step S10015, the CRT γ processing unit 10035 performs γ correction processing on the image signal from the VM processing unit 10034 and, in accordance with the control signal from the display color temperature compensation control unit 10040, performs color temperature compensation processing that adjusts the balance of each color signal of the image signal from the VM processing unit 10034. Then, the CRT γ processing unit 10035 supplies the image signal obtained as a result of the color temperature compensation processing to an LCD (not shown) as an FPD for display.
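The order of steps S10011 to S10015 can be summarized as a fixed processing chain. Each stage below is a placeholder callable, since the actual processing of each unit is described in the text; only the ordering is taken from the steps above:

```python
def crt_emulation_pipeline(image, stages):
    """Apply the emulation stages in the order of steps S10011-S10015:
    brightness/contrast -> image-quality enhancement -> ABL emulation ->
    VM emulation -> CRT gamma / color-temperature compensation.
    Each stage is a callable image -> image; implementations are stand-ins."""
    for stage in stages:
        image = stage(image)
    return image
```

The chain makes the ordering constraint explicit: ABL emulation must precede VM emulation, which must precede the CRT γ processing, matching the CRT display device's own processing order.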
- FIG. 41 is a block diagram showing a configuration example of the VM processing unit 10034 of FIG.
- the VM processing unit 10034 includes a luminance correction unit 10210 and an EB processing unit 10220.
- the luminance correction unit 10210 performs, on the image signal supplied from the ABL processing unit 10033 (Fig. 38), luminance correction processing that corrects the effect on the luminance of the change in the deflection speed of the horizontal deflection of the electron beam of the CRT display device, and supplies the resulting image signal to the EB processing unit 10220.
- the luminance correction unit 10210 includes a VM coefficient generation unit 10211 and a calculation unit 10212.
- the VM coefficient generator 10211 is supplied with a VM control signal from the VM controller 10039 (Fig. 38).
- the VM coefficient generation unit 10211 generates a VM coefficient in accordance with the VM control signal from the VM control unit 10039, and supplies the VM coefficient to the calculation unit 10212.
- the calculation unit 10212 is supplied with the VM coefficient from the VM coefficient generation unit 10211 and with the image signal from the ABL processing unit 10033 (Fig. 38).
- by multiplying the image signal from the ABL processing unit 10033 (Fig. 38) by the VM coefficient from the VM coefficient generation unit 10211, the calculation unit 10212 corrects, in the image signal, the effect of the change in the deflection speed of the horizontal deflection of the electron beam of the CRT display device, and supplies the corrected image signal to the EB processing unit 10220.
- the EB processing unit 10220 applies, to the image signal from the luminance correction unit 10210 (the image signal processed by the ABL processing unit 10033 and further processed by the luminance correction unit 10210), a process that emulates how the electron beam of the CRT display device spreads and collides with the phosphor of the CRT display device (EB (Electron Beam) emulation processing), and supplies the result to the CRT γ processing unit 10035 (Fig. 38).
- the VM emulation processing performed by the VM processing unit 10034 includes the luminance correction processing performed by the luminance correction unit 10210 and the EB emulation processing performed by the EB processing unit 10220.
- FIG. 42 shows an example of the VM coefficient generated by the VM coefficient generation unit 10211 of FIG.
- the VM coefficient is a coefficient that is multiplied by the pixel values (luminance) of the pixels subject to luminance correction, namely the pixel of interest (here, the pixel whose luminance is corrected by the VM processing) and multiple pixels aligned horizontally around it, based on the VM coil drive signal that changes the deflection speed of the horizontal deflection in the CRT display device.
- among the pixels whose luminance is to be corrected, the VM coefficient multiplied by the pixel value of the pixel of interest is set to a value of 1 or more, and the VM coefficients multiplied by the other pixels are set to values of 1 or less, so that the overall gain in the calculation unit 10212 becomes 1.
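As a sketch of this multiplication, assuming a 3-tap horizontal kernel (the actual tap count and values depend on the emulated CRT's specifications), the luminance correction amounts to a horizontal convolution whose taps sum to 1, with the tap on the pixel of interest greater than 1:

```python
import numpy as np

def apply_vm_coefficients(scanline, vm_coeffs):
    """Correct luminance along one scanline by multiplying the pixel of
    interest and its horizontal neighbours by the VM coefficients.
    The taps must sum to 1 so the gain in the calculation unit is 1."""
    vm_coeffs = np.asarray(vm_coeffs, dtype=np.float64)
    assert np.isclose(vm_coeffs.sum(), 1.0), "VM coefficients must have unit gain"
    # 'same' keeps the scanline length, with the centre tap on the pixel of interest.
    return np.convolve(scanline, vm_coeffs, mode="same")

# Illustrative 3-tap kernel: centre tap > 1, neighbouring taps < 1 (here
# negative, giving edge emphasis), summing to 1. Actual values would be
# derived from the emulated CRT's VM coil drive signal.
example_coeffs = [-0.15, 1.3, -0.15]
```

Because the taps sum to 1, flat regions of the image pass through unchanged while transitions are emphasised, mirroring the velocity-modulation effect.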
- FIG. 43 shows a method for obtaining the VM coefficient generated by the VM coefficient generation unit 10211 of FIG. That is, A in FIG. 43 shows the waveform of the voltage (deflection voltage) applied to the deflection yoke DY (FIG. 39) of the CRT display device.
- as shown in A of Fig. 43, a deflection voltage that changes with a constant slope over time t, repeating with the period of the horizontal scanning, is applied to the deflection yoke DY (Fig. 39).
- B in Fig. 43 shows a VM coil drive signal generated by the VM drive circuit 10061 (Fig. 39) of the CRT display device.
- the VM coil in the deflection yoke DY (Fig. 39) is driven by the VM coil drive signal of B in Fig. 43, and the magnetic field generated by the VM coil partially changes the deflection speed of the electron beam, as shown in C of Fig. 43.
- C in FIG. 43 shows the time change of the horizontal position of the electron beam when the VM coil generates a magnetic field by the VM coil drive signal in B of FIG.
- D in Fig. 43 shows the derivative of the subtraction value obtained by subtracting the time change of the horizontal position of the electron beam in C of Fig. 43 from the time change of the horizontal position of the electron beam under the deflection voltage of A in Fig. 43.
- VM coefficient generating section 10211 (Fig. 41) generates a value corresponding to the differential value of D in Fig. 43 as the VM coefficient.
- the specific values of the VM coefficients, the range of pixels to be multiplied by the VM coefficients (how many pixels in the horizontal direction around the pixel of interest are multiplied by the VM coefficients), and how they depend on the pixel value (level) of the pixel of interest are determined according to the specifications of the CRT display device whose display the image signal processing device of FIG. 38 emulates.
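The construction in Fig. 43 (subtract the VM-modulated beam position from the nominal constant-slope sweep, differentiate, and take the VM coefficients as values corresponding to that derivative) can be sketched as follows. The position waveforms are synthetic stand-ins, and the unit-gain normalisation follows the description of the calculation unit 10212 above; the exact mapping from derivative to coefficients is an assumption:

```python
import numpy as np

def vm_coefficients_from_positions(nominal_pos, modulated_pos):
    """Follow Fig. 43: subtract the VM-modulated horizontal beam position (C)
    from the nominal constant-slope position (A), differentiate (D), and use
    a value corresponding to that derivative as the VM coefficients,
    normalised so the taps sum to 1 (unit gain in the calculation unit)."""
    diff = np.subtract(nominal_pos, modulated_pos)  # D's input: A minus C
    deriv = np.gradient(diff)                       # differentiate over time
    coeffs = deriv - deriv.mean()                   # zero-sum modulation term
    coeffs[len(coeffs) // 2] += 1.0                 # unit tap on the pixel of interest
    return coeffs                                   # taps now sum to 1
```

With a beam that briefly lags the nominal sweep at the centre of the window, the resulting kernel boosts one side and attenuates the other around a unit centre tap, while keeping overall gain at 1.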
- Fig. 44 shows the relationship between the current (beam current) applied to the electron gun that emits the electron beam and the diameter (spot size) of the spot that the electron beam forms on the display screen of the CRT in accordance with the beam current.
- FIG. 44 shows the relationship between the beam current and the spot size for two types of CRTs.
- the spot size increases as the beam current increases; in other words, when the luminance is high, the spot size is also large.
- the display screen of the CRT is coated with red, green, and blue phosphors (fluorescent substances), and a color selection mechanism provided with openings through which the electron beams pass is placed over the display screen so that the red, green, and blue phosphors are irradiated by the electron beams for red, green, and blue, respectively.
- FIG. 45 shows the color selection mechanism.
- A in FIG. 45 shows a shadow mask, which is one of the color selection mechanisms.
- the shadow mask is provided with a hole as a circular opening, and an electron beam passing through the hole is irradiated to the phosphor.
- in A of FIG. 45, a circle without a pattern indicates a hole for irradiating the red phosphor with an electron beam, a circle with diagonal lines indicates a hole for irradiating the green phosphor with an electron beam, and a black circle indicates a hole for irradiating the blue phosphor with an electron beam.
- B in FIG. 45 shows an aperture grille, which is another of the color selection mechanisms.
- the aperture grill is provided with a slit as an opening extending in the vertical direction, and an electron beam passing through the slit is irradiated to the phosphor.
- in B of FIG. 45, a rectangle without a pattern indicates a slit for irradiating the red phosphor with an electron beam, a hatched rectangle indicates a slit for irradiating the green phosphor with an electron beam, and a black rectangle indicates a slit for irradiating the blue phosphor with an electron beam.
- FIG. 46 schematically shows the electron beam spot formed on the color selection mechanism when the luminance is medium, and FIG. 47 schematically shows the electron beam spot formed on the color selection mechanism when the luminance is high.
- A in FIG. 46 and A in FIG. 47 show the electron beam spot formed on the shadow mask when the color selection mechanism is a shadow mask, and B in FIG. 46 and B in FIG. 47 show the electron beam spot formed on the aperture grille when the color selection mechanism is an aperture grille.
- FIG. 48 is a cross-sectional view showing a state in which an electron beam is irradiated when an aperture grill is employed as a color selection mechanism.
- A in FIG. 48 shows the state in which the electron beam is irradiated when the beam current is a first current value.
- FIG. 48B shows a state in which the electron beam is irradiated when the beam current is a second current value larger than the first current value.
- assuming that the pixel corresponding to the green phosphor is the pixel of interest, when the beam current is the first current value, the spot size of the electron beam falls within the range between adjacent slits as shown in A of FIG. 48, so only the phosphor corresponding to the pixel of interest is irradiated and the other phosphors are blocked from being irradiated. When the beam current is the second current value, however, the spot size does not fall within the range between adjacent slits as shown in B of FIG. 48, so not only the phosphor corresponding to the pixel of interest but also other phosphors are irradiated.
- that is, when the spot size of the electron beam is large enough to cover other slits in addition to the slit of the phosphor corresponding to the pixel of interest, the electron beam passes through those other slits and also irradiates phosphors other than the phosphor corresponding to the pixel of interest.
- whether the electron beam passes through slits other than the slit of the phosphor corresponding to the pixel of interest is determined by the relationship between the spot size of the electron beam, which depends on the beam current, and the slit width of the slits of the aperture grille.
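Approximating the beam cross-section by a two-dimensional Gaussian, as in Fig. 49, the fraction of beam intensity passing through each vertical slit of the aperture grille reduces to a one-dimensional integral in the horizontal direction, which can be evaluated with the error function. The spot sigma, slit pitch, and slit width below are illustrative values, not figures from the patent:

```python
import math

def slit_pass_fraction(spot_sigma, slit_center, slit_width):
    """Fraction of a Gaussian beam (standard deviation spot_sigma, centred
    at x = 0) that passes through a vertical slit spanning
    [slit_center - slit_width/2, slit_center + slit_width/2].
    For an aperture grille the slits run vertically, so only the
    horizontal (x) profile of the beam matters."""
    a = (slit_center - slit_width / 2) / (spot_sigma * math.sqrt(2))
    b = (slit_center + slit_width / 2) / (spot_sigma * math.sqrt(2))
    return 0.5 * (math.erf(b) - math.erf(a))

# Small spot (low beam current): intensity stays in the pixel-of-interest
# slit; a larger spot (high beam current) leaks into the adjacent slits.
pitch, width = 1.0, 0.3
own_small  = slit_pass_fraction(0.1, 0.0, width)    # centre slit, small spot
leak_small = slit_pass_fraction(0.1, pitch, width)  # adjacent slit, small spot
own_big    = slit_pass_fraction(0.6, 0.0, width)    # centre slit, large spot
leak_big   = slit_pass_fraction(0.6, pitch, width)  # adjacent slit, large spot
```

This reproduces the behaviour of Figs. 50 and 51: with a small spot the adjacent-slit leakage is negligible, while a large spot sends a noticeable fraction of intensity through the left and right slits.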
- FIG. 49 shows the intensity distribution of the electron beam approximated by a two-dimensional normal distribution (Gaussian distribution).
- FIG. 50 shows the distribution of the intensity of the electron beam passing through the slit of the aperture grille in the electron beam of FIG.
- A in FIG. 50 shows the intensity distribution of the electron beam passing through the slit of the phosphor corresponding to the pixel of interest and of the electron beams passing through the slits adjacent to that slit on the left and right.
- B in FIG. 50 shows the distribution of the intensity of the electron beam passing through the phosphor slit corresponding to the target pixel in the distribution of the intensity of the electron beam shown in A of FIG. C in FIG. 50 shows the intensity distribution of the electron beam passing through the left slit and the right slit.
- Fig. 51 shows the intensity distribution of an electron beam of higher intensity than that of Fig. 49, together with the distribution of the intensity of the portion of that electron beam that passes through the slits of the aperture grille.
- A in FIG. 51 shows the intensity distribution of an electron beam of higher intensity than in the case of FIG. 49.
- the electron beam of A in FIG. 51 has a larger spot size (the range over which the intensity extends) than the electron beam of FIG. 49.
- B in Fig. 51 shows the distribution of the intensity of the portion of the electron beam of A in Fig. 51 that passes through the slits of the aperture grille.
- in B of FIG. 51, compared with the case of FIG. 50, the intensity of the electron beam passing through the left slit and the right slit is larger, so the effect on the display of the pixel corresponding to the phosphor of the left slit and the pixel corresponding to the phosphor of the right slit is greater.
- C in FIG. 51 shows, of the electron beam intensity distribution shown in B of FIG. 51, the distribution of the intensity passing through the slit of the phosphor corresponding to the pixel of interest, and D in FIG. 51 shows the distribution of the intensity passing through the left slit and the right slit.
- FIG. 52 shows the electron beam intensity distribution shown in FIG. 49 and the intensity distribution of the portion of the electron beam passing through the holes of the shadow mask.
- A in FIG. 52 shows the same electron beam intensity distribution as in FIG. 49.
- B in Fig. 52 shows the intensity distribution of the electron beam passing through the hole of the shadow mask among the electron beams in A in Fig. 52.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2007335486A AU2007335486B2 (en) | 2006-12-18 | 2007-12-18 | Display control apparatus, display control method, and program |
CN2007800466052A CN101563725B (zh) | 2006-12-18 | 2007-12-18 | 显示控制设备和显示控制方法 |
US12/517,558 US20100026722A1 (en) | 2006-12-18 | 2007-12-18 | Display control apparatus display control method, and program |
IN2592CHN2009 IN2009CN02592A (ja) | 2006-12-18 | 2007-12-18 | |
BRPI0720516-3A BRPI0720516A2 (pt) | 2006-12-18 | 2007-12-18 | Aparelho e método de controle de exibição para controlar a exibição de uma imagem, e, programa para fazer com que um computador execute um processo de controle de exibição |
EP07850747A EP2101313A4 (en) | 2006-12-18 | 2007-12-18 | DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006340080 | 2006-12-18 | ||
JP2006-340080 | 2006-12-18 | ||
JP2007-288456 | 2007-11-06 | ||
JP2007288456A JP2008178075A (ja) | 2006-12-18 | 2007-11-06 | 表示制御装置、表示制御方法、及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008075657A1 true WO2008075657A1 (ja) | 2008-06-26 |
Family
ID=39536287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/074259 WO2008075657A1 (ja) | 2006-12-18 | 2007-12-18 | 表示制御装置、表示制御方法、及びプログラム |
Country Status (11)
Country | Link |
---|---|
US (1) | US20100026722A1 (ja) |
EP (1) | EP2101313A4 (ja) |
JP (1) | JP2008178075A (ja) |
KR (1) | KR20090090346A (ja) |
CN (1) | CN101563725B (ja) |
AU (1) | AU2007335486B2 (ja) |
BR (1) | BRPI0720516A2 (ja) |
IN (1) | IN2009CN02592A (ja) |
RU (1) | RU2450366C2 (ja) |
TW (1) | TWI385636B (ja) |
WO (1) | WO2008075657A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010185988A (ja) * | 2009-02-11 | 2010-08-26 | Nanao Corp | 表示ムラの再現方法、画像表示システム、表示装置、コンピュータプログラム及び記録媒体 |
Families Citing this family (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7738712B2 (en) * | 2006-01-04 | 2010-06-15 | Aten International Co., Ltd. | Mixing 2-D gradient-difference and interpolation/decimation method and device for scaling a digital image |
JP4835949B2 (ja) * | 2007-12-21 | 2011-12-14 | ソニー株式会社 | 画像処理装置および方法、学習装置および方法、プログラム、並びに記録媒体 |
TWI410119B (zh) * | 2008-11-18 | 2013-09-21 | Innolux Corp | 應用3d對照表以進行四面體內插之色彩管理系統及其方法 |
US8071943B2 (en) * | 2009-02-04 | 2011-12-06 | Advantest Corp. | Mask inspection apparatus and image creation method |
US20120223881A1 (en) * | 2009-11-11 | 2012-09-06 | Sharp Kabushiki Kaisha | Display device, display control circuit, and display control method |
KR102031848B1 (ko) * | 2010-01-20 | 2019-10-14 | 가부시키가이샤 한도오따이 에네루기 켄큐쇼 | 전자 기기 및 전자 시스템 |
US8654250B2 (en) * | 2010-03-30 | 2014-02-18 | Sony Corporation | Deriving visual rhythm from video signals |
US20130016138A1 (en) * | 2010-04-09 | 2013-01-17 | Sharp Kabushiki Kaisha | Display panel driving method, display device driving circuit, and display device |
CN102860026A (zh) * | 2010-04-23 | 2013-01-02 | Nec显示器解决方案株式会社 | 显示装置、显示系统、显示方法以及程序 |
US9053562B1 (en) | 2010-06-24 | 2015-06-09 | Gregory S. Rabin | Two dimensional to three dimensional moving image converter |
JP5398667B2 (ja) * | 2010-08-23 | 2014-01-29 | 株式会社東芝 | 画像処理装置 |
KR20120058763A (ko) * | 2010-11-30 | 2012-06-08 | 삼성전자주식회사 | 영상 장치에서 영상 데이터를 송신하기 위한 장치 및 방법 |
US8963800B2 (en) * | 2011-02-10 | 2015-02-24 | Sharp Kabushiki Kaisha | Multi-display device and image display device |
US20130066452A1 (en) * | 2011-09-08 | 2013-03-14 | Yoshiyuki Kobayashi | Information processing device, estimator generating method and program |
KR20140063774A (ko) | 2011-09-09 | 2014-05-27 | 파나몰프, 인코포레이티드 | 이미지 처리 시스템 및 방법 |
US9013502B2 (en) * | 2011-12-29 | 2015-04-21 | Tektronix, Inc. | Method of viewing virtual display outputs |
JP2015518350A (ja) * | 2012-04-24 | 2015-06-25 | ヴィド スケール インコーポレイテッド | Mpeg/3gpp−dashにおける滑らかなストリーム切り換えのための方法および装置 |
BR112015001555A2 (pt) | 2012-07-26 | 2017-07-04 | Olive Medical Corp | vídeo contínuo em ambiente com deficiência de luz |
KR102127100B1 (ko) | 2012-07-26 | 2020-06-29 | 디퍼이 신테스 프로덕츠, 인코포레이티드 | 광 부족 환경에서 ycbcr 펄싱된 조명 수법 |
US20140028726A1 (en) * | 2012-07-30 | 2014-01-30 | Nvidia Corporation | Wireless data transfer based spanning, extending and/or cloning of display data across a plurality of computing devices |
US9143823B2 (en) | 2012-10-01 | 2015-09-22 | Google Inc. | Providing suggestions for optimizing videos to video owners |
EP2967294B1 (en) | 2013-03-15 | 2020-07-29 | DePuy Synthes Products, Inc. | Super resolution and color motion artifact correction in a pulsed color imaging system |
WO2014145249A1 (en) | 2013-03-15 | 2014-09-18 | Olive Medical Corporation | Controlling the integral light energy of a laser pulse |
EP2967301B1 (en) | 2013-03-15 | 2021-11-03 | DePuy Synthes Products, Inc. | Scope sensing in a light controlled environment |
US9531992B2 (en) * | 2013-03-26 | 2016-12-27 | Sharp Kabushiki Kaisha | Display apparatus, portable terminal, television receiver, display method, program, and recording medium |
KR102025184B1 (ko) * | 2013-07-31 | 2019-09-25 | 엘지디스플레이 주식회사 | 데이터 변환 장치 및 이를 이용한 디스플레이 장치 |
JP2015088828A (ja) * | 2013-10-29 | 2015-05-07 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
US9349160B1 (en) * | 2013-12-20 | 2016-05-24 | Google Inc. | Method, apparatus and system for enhancing a display of video data |
KR102185249B1 (ko) * | 2014-01-20 | 2020-12-02 | 삼성디스플레이 주식회사 | 표시 장치 및 그 구동 방법 |
US9930349B2 (en) * | 2014-02-20 | 2018-03-27 | Konica Minolta Laboratory U.S.A., Inc. | Image processing to retain small color/gray differences |
KR20150100998A (ko) * | 2014-02-24 | 2015-09-03 | 삼성디스플레이 주식회사 | 영상처리장치 및 영상처리방법 |
JP6573960B2 (ja) | 2014-03-21 | 2019-09-11 | デピュイ・シンセス・プロダクツ・インコーポレイテッド | イメージングセンサ用のカードエッジコネクタ |
KR102391860B1 (ko) * | 2014-05-09 | 2022-04-29 | 소니그룹주식회사 | 정보 처리 시스템 및 정보 처리 방법 |
JP6078038B2 (ja) * | 2014-10-31 | 2017-02-08 | 株式会社Pfu | 画像処理装置、画像処理方法、および、プログラム |
US9401107B2 (en) * | 2014-12-31 | 2016-07-26 | Shenzhen China Star Optoelectronics Technology Co., Ltd. | Image data processing method and device thereof |
WO2016110943A1 (ja) * | 2015-01-06 | 2016-07-14 | 日立マクセル株式会社 | 映像表示装置、映像表示方法、及び映像表示システム |
JP6597041B2 (ja) * | 2015-08-18 | 2019-10-30 | 富士ゼロックス株式会社 | サーバー装置及び情報処理システム |
CN105072430B (zh) * | 2015-08-19 | 2017-10-03 | 海信集团有限公司 | 一种调整投影图像的方法和设备 |
CN105611213A (zh) * | 2016-01-04 | 2016-05-25 | 京东方科技集团股份有限公司 | 一种图像处理方法、播放方法及相关的装置和系统 |
KR102468329B1 (ko) * | 2016-01-22 | 2022-11-18 | 삼성디스플레이 주식회사 | 액정 표시 장치 및 이의 구동 방법 |
US10448912B2 (en) * | 2016-04-06 | 2019-10-22 | Canon Medical Systems Corporation | Image processing apparatus |
KR102208872B1 (ko) * | 2016-08-26 | 2021-01-28 | 삼성전자주식회사 | 디스플레이 장치 및 그 구동 방법 |
CN106205460B (zh) * | 2016-09-29 | 2018-11-23 | 京东方科技集团股份有限公司 | 显示装置的驱动方法、时序控制器和显示装置 |
US10395584B2 (en) * | 2016-11-22 | 2019-08-27 | Planar Systems, Inc. | Intensity scaled dithering pulse width modulation |
CN110476427A (zh) * | 2017-03-24 | 2019-11-19 | 索尼公司 | 编码装置和编码方法以及解码装置和解码方法 |
US10269279B2 (en) * | 2017-03-24 | 2019-04-23 | Misapplied Sciences, Inc. | Display system and method for delivering multi-view content |
KR102390476B1 (ko) * | 2017-08-03 | 2022-04-25 | 엘지디스플레이 주식회사 | 유기발광 표시장치 및 유기발광 표시장치의 데이터 처리방법 |
KR102442449B1 (ko) * | 2017-09-01 | 2022-09-14 | 삼성전자주식회사 | 영상 처리 장치, 영상 처리 방법 및 컴퓨터 판독가능 기록 매체 |
CN109493809B (zh) * | 2017-09-12 | 2021-01-01 | 纬创资通(中山)有限公司 | 显示装置以及背光驱动方法 |
JP2019090858A (ja) * | 2017-11-10 | 2019-06-13 | キヤノン株式会社 | 表示装置、表示制御装置及び表示制御方法 |
JP7058800B2 (ja) * | 2019-04-12 | 2022-04-22 | 三菱電機株式会社 | 表示制御装置、表示制御方法、及び表示制御プログラム |
KR20210014260A (ko) * | 2019-07-29 | 2021-02-09 | 삼성디스플레이 주식회사 | 영상 보정부를 포함하는 표시장치 |
CN110572595B (zh) * | 2019-08-28 | 2022-08-30 | 深圳Tcl数字技术有限公司 | 激光电视的调整方法、激光电视及可读存储介质 |
CN110674433B (zh) * | 2019-09-25 | 2022-05-06 | 博锐尚格科技股份有限公司 | 一种图表显示方法、存储介质及电子设备 |
US11357087B2 (en) * | 2020-07-02 | 2022-06-07 | Solomon Systech (Shenzhen) Limited | Method for driving a passive matrix LED display |
CN114205658A (zh) * | 2020-08-27 | 2022-03-18 | 西安诺瓦星云科技股份有限公司 | 图像显示方法、装置、系统以及计算机可读存储介质 |
US11508273B2 (en) * | 2020-11-12 | 2022-11-22 | Synaptics Incorporated | Built-in test of a display driver |
CN112817548B (zh) * | 2021-01-28 | 2022-08-12 | 浙江大华技术股份有限公司 | 电子设备、显示控制方法、显示方法、装置和存储介质 |
CN112985616B (zh) * | 2021-05-06 | 2021-10-22 | 北京泽声科技有限公司 | 一种具有多种配置方案的人体红外线感应信号处理系统 |
CN113314085B (zh) * | 2021-06-15 | 2022-09-27 | 武汉华星光电技术有限公司 | 显示面板的显示方法及显示装置 |
US20230338841A1 (en) * | 2022-04-26 | 2023-10-26 | Sony Interactive Entertainment Inc. | Foveated enhancement of non-xr games within a hmd system |
CN116684687B (zh) * | 2023-08-01 | 2023-10-24 | 蓝舰信息科技南京有限公司 | 基于数字孪生技术的增强可视化教学方法 |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2549671B1 (fr) * | 1983-07-22 | 1987-05-22 | Thomson Csf | Device for displaying a large-size television picture, and television receiver comprising such a device |
US5012333A (en) * | 1989-01-05 | 1991-04-30 | Eastman Kodak Company | Interactive dynamic range adjustment system for printing digital images |
JP3152396B2 (ja) * | 1990-09-04 | 2001-04-03 | Toshiba Corp | Medical image display device |
DE4418782C2 (de) * | 1993-05-21 | 1997-01-09 | Mitsubishi Electric Corp | System and method for adjusting a color image |
US5515488A (en) * | 1994-08-30 | 1996-05-07 | Xerox Corporation | Method and apparatus for concurrent graphical visualization of a database search and its search history |
US5982953A (en) * | 1994-09-02 | 1999-11-09 | Konica Corporation | Image displaying apparatus of a processed image from temporally sequential images |
US5940089A (en) * | 1995-11-13 | 1999-08-17 | Ati Technologies | Method and apparatus for displaying multiple windows on a display monitor |
JP3344197B2 (ja) * | 1996-03-08 | 2002-11-11 | Hitachi Ltd | Video signal processing device and display device using the same |
US6525734B2 (en) * | 1996-09-17 | 2003-02-25 | Fujitsu Limited | Display control apparatus, display control method and computer program product |
JP3586351B2 (ja) * | 1997-03-21 | 2004-11-10 | International Business Machines Corp | Window display device and method, and recording medium storing a window display control program |
US6005636A (en) * | 1997-03-27 | 1999-12-21 | Sharp Laboratories Of America, Inc. | System for setting user-adjustable image processing parameters in a video system |
US6809776B1 (en) * | 1997-04-23 | 2004-10-26 | Thomson Licensing S.A. | Control of video level by region and content of information displayed |
EP0891075A3 (en) * | 1997-06-09 | 2002-03-06 | Seiko Epson Corporation | An image processing apparatus and method, and an image evaluation device and method |
EP0893916B1 (en) * | 1997-07-24 | 2004-04-07 | Matsushita Electric Industrial Co., Ltd. | Image display apparatus and image evaluation apparatus |
JP3582382B2 (ja) * | 1998-11-13 | 2004-10-27 | Hitachi Ltd | Display control device for a multi-display apparatus, display device, and multi-display apparatus |
JP2000338941A (ja) * | 1999-05-27 | 2000-12-08 | Seiko Epson Corp | Projection display device |
JP4114279B2 (ja) * | 1999-06-25 | 2008-07-09 | Konica Minolta Business Technologies Inc | Image processing device |
JP2001202053A (ja) * | 1999-11-09 | 2001-07-27 | Matsushita Electric Ind Co Ltd | Display device and portable information terminal |
JP3526019B2 (ja) * | 1999-11-30 | 2004-05-10 | International Business Machines Corp | Image display system, image display device, and image display method |
JP4920834B2 (ja) * | 2000-06-26 | 2012-04-18 | Canon Inc | Image display device and driving method of image display device |
US6985637B1 (en) * | 2000-11-10 | 2006-01-10 | Eastman Kodak Company | Method and apparatus of enhancing a digital image using multiple selected digital images |
JP2002354367A (ja) * | 2001-05-25 | 2002-12-06 | Canon Inc | Multi-screen display device, multi-screen display method, recording medium, and program |
JP3927995B2 (ja) * | 2001-12-27 | 2007-06-13 | Sony Corp | Image display control device, image display control method, and imaging device |
JP2003319933A (ja) * | 2002-05-01 | 2003-11-11 | Fuji Photo Film Co Ltd | Image display system |
JP4032355B2 (ja) * | 2003-03-27 | 2008-01-16 | Casio Computer Co Ltd | Display processing device, display control method, and display processing program |
JP4369151B2 (ja) * | 2003-03-31 | 2009-11-18 | Seiko Epson Corp | Image processing device, image processing method, and program used therefor |
US7034776B1 (en) * | 2003-04-08 | 2006-04-25 | Microsoft Corporation | Video division detection methods and systems |
NO20031586L (no) * | 2003-04-08 | 2004-10-11 | Favourite Systems As | Vindussystem for datainnretning |
US7777691B1 (en) * | 2004-03-05 | 2010-08-17 | Rockwell Collins, Inc. | System and method for driving multiple tiled displays from a single digital video source |
WO2005088602A1 (ja) * | 2004-03-10 | 2005-09-22 | Matsushita Electric Industrial Co., Ltd. | Image transmission system and image transmission method |
JP4281593B2 (ja) * | 2004-03-24 | 2009-06-17 | Seiko Epson Corp | Projector control |
US7487118B2 (en) * | 2005-05-06 | 2009-02-03 | Crutchfield Corporation | System and method of image display simulation |
US7882442B2 (en) * | 2007-01-05 | 2011-02-01 | Eastman Kodak Company | Multi-frame display system with perspective based image arrangement |
TW200915217A (en) * | 2007-09-20 | 2009-04-01 | Awind Inc | Method for detecting and replaying an image region of computer picture |
- 2007
- 2007-11-06 JP JP2007288456A patent/JP2008178075A/ja active Pending
- 2007-12-04 TW TW096146188A patent/TWI385636B/zh not_active IP Right Cessation
- 2007-12-18 KR KR1020097012590A patent/KR20090090346A/ko not_active Application Discontinuation
- 2007-12-18 BR BRPI0720516-3A patent/BRPI0720516A2/pt not_active IP Right Cessation
- 2007-12-18 IN IN2592CHN2009 patent/IN2009CN02592A/en unknown
- 2007-12-18 US US12/517,558 patent/US20100026722A1/en not_active Abandoned
- 2007-12-18 EP EP07850747A patent/EP2101313A4/en not_active Withdrawn
- 2007-12-18 RU RU2009123156/07A patent/RU2450366C2/ru not_active IP Right Cessation
- 2007-12-18 WO PCT/JP2007/074259 patent/WO2008075657A1/ja active Application Filing
- 2007-12-18 CN CN2007800466052A patent/CN101563725B/zh not_active Expired - Fee Related
- 2007-12-18 AU AU2007335486A patent/AU2007335486B2/en not_active Ceased
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0584706B2 (ja) | 1985-01-19 | 1993-12-02 | Sony Corp | |
JPS61167280A (ja) | 1985-01-19 | 1986-07-28 | Sony Corp | Control circuit for a velocity modulation circuit in a television receiver |
JP3271101B2 (ja) | 1993-09-21 | 2002-04-02 | Sony Corp | Digital image signal processing apparatus and processing method |
JPH0795591A (ja) | 1993-09-21 | 1995-04-07 | Sony Corp | Digital image signal processing apparatus |
JPH0823460A (ja) | 1994-07-11 | 1996-01-23 | Fujitsu General Ltd | Dynamic gamma correction circuit |
JPH0876741A (ja) * | 1994-09-02 | 1996-03-22 | Konica Corp | Image display device |
JPH08163582A (ja) | 1994-11-30 | 1996-06-21 | Sony Corp | Color temperature setting device for a color cathode-ray tube |
JPH11231827A (ja) * | 1997-07-24 | 1999-08-27 | Matsushita Electric Ind Co Ltd | Image display apparatus and image evaluation apparatus |
JP2000039864A (ja) | 1998-07-24 | 2000-02-08 | Matsushita Electric Ind Co Ltd | Moving image display method and moving image display device |
WO2000010324A1 (fr) | 1998-08-14 | 2000-02-24 | Sony Corporation | Scanning velocity modulation circuit for an image display device |
JP2000310987A (ja) * | 1999-04-28 | 2000-11-07 | Mitsubishi Electric Corp | Image display device |
JP2001136548A (ja) | 1999-11-08 | 2001-05-18 | Ddi Corp | Monitor device for objective evaluation of images |
JP2002223167A (ja) | 2001-01-25 | 2002-08-09 | Sony Corp | Data processing device, data processing method, program, and recording medium |
JP2002232905A (ja) | 2001-01-30 | 2002-08-16 | Sony Corp | Chromaticity conversion device and method, display device and method, recording medium, and program |
JP2002354290A (ja) | 2001-05-28 | 2002-12-06 | Nec Viewtechnology Ltd | Gamma correction circuit |
JP2004039300A (ja) | 2002-06-28 | 2004-02-05 | Sony Corp | Electron gun for a cathode-ray tube, and cathode-ray tube |
JP2004138783A (ja) | 2002-10-17 | 2004-05-13 | Matsushita Electric Ind Co Ltd | Image display device |
JP2005039817A (ja) | 2003-07-15 | 2005-02-10 | Samsung Electronics Co Ltd | Image quality improvement apparatus and method |
JP2005229245A (ja) | 2004-02-12 | 2005-08-25 | Matsushita Electric Ind Co Ltd | Video signal processing device |
JP2005236634A (ja) | 2004-02-19 | 2005-09-02 | Sony Corp | Image processing device, image processing method, program, and recording medium |
Non-Patent Citations (1)
Title |
---|
See also references of EP2101313A4 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010185988A (ja) * | 2009-02-11 | 2010-08-26 | Nanao Corp | Display unevenness reproduction method, image display system, display device, computer program, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
RU2450366C2 (ru) | 2012-05-10 |
EP2101313A1 (en) | 2009-09-16 |
TWI385636B (zh) | 2013-02-11 |
IN2009CN02592A (ja) | 2015-08-07 |
JP2008178075A (ja) | 2008-07-31 |
CN101563725A (zh) | 2009-10-21 |
TW200844975A (en) | 2008-11-16 |
US20100026722A1 (en) | 2010-02-04 |
CN101563725B (zh) | 2013-01-23 |
BRPI0720516A2 (pt) | 2013-12-31 |
KR20090090346A (ko) | 2009-08-25 |
AU2007335486B2 (en) | 2012-12-20 |
EP2101313A4 (en) | 2010-12-29 |
RU2009123156A (ru) | 2010-12-27 |
AU2007335486A1 (en) | 2008-06-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2008075657A1 (ja) | Display control device, display control method, and program | |
US6456302B2 (en) | Image display apparatus and image evaluation apparatus | |
EP0947976B1 (en) | Motion pixel distortion reduction for a digital display device using pulse number equalization | |
JP4064268B2 (ja) | Display device and display method using a subfield method | |
JP2004133467A (ja) | Method and apparatus for reducing false contours in a pulse-number-modulation digital display panel | |
CN102054424B (zh) | Image processing device and image processing method | |
US8363071B2 (en) | Image processing device, image processing method, and program | |
KR100799893B1 (ko) | Method and unit for displaying an image in sub-fields | |
AU2007335487B2 (en) | Image signal processing device, image signal processing method, and program | |
KR100472483B1 (ko) | Method for removing false contours, and apparatus suitable therefor | |
WO2008075660A1 (ja) | Image signal processing device | |
JP2000352954A (ja) | Method and device for processing video images for display on a display device | |
JP2002123211A (ja) | Method and device for processing video images | |
JP2002333858A (ja) | Image display device and image reproduction method | |
JP5110358B2 (ja) | Image signal processing device, image signal processing method, and program | |
JP2004514176A (ja) | Video picture processing method and device | |
JP3593799B2 (ja) | Error diffusion circuit for a multi-screen display device | |
JP2001042819A (ja) | Gradation display method and gradation display device | |
JP2008209427A (ja) | Image signal processing device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200780046605.2 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07850747 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2592/CHENP/2009 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007335486 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12517558 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2007335486 Country of ref document: AU Date of ref document: 20071218 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007850747 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2009123156 Country of ref document: RU Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020097012590 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: PI0720516 Country of ref document: BR Kind code of ref document: A2 Effective date: 20090618 |