WO2016157671A1 - Information processing device, information processing method, program, and image display device
- Publication number: WO2016157671A1 (application PCT/JP2016/000113)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
- H04N9/317—Convergence or focusing systems
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
Definitions
- the present technology relates to an image display device such as a projector, an information processing device that controls the image display device, an information processing method, and a program.
- image display devices such as projectors have been widely used.
- a light modulation element such as a liquid crystal element
- the modulated light is projected onto a screen or the like to display an image.
- a reflective liquid crystal display element, a transmissive liquid crystal element, a DMD (Digital Micromirror Device), or the like is used.
- in Patent Document 1, a technique is disclosed for reducing deterioration of an image due to a decrease in imaging performance of the projection optical system, thereby generating a projected image close to the input image information.
- an inverse filter process that compensates for the degradation of the projected image is performed using, for example, an inverse filter for the MTF (Modulation Transfer Function) reduction of the projection lens.
- the image information of the pixel area that cannot be expressed is returned to the image information of the original image or changed to a limit value that can be expressed.
- as a result, a high-quality image can be projected (see, for example, paragraphs [0026], [0031], and [0035] of the specification of Patent Document 1).
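The MTF-based compensation described above can be illustrated with a minimal frequency-domain sketch. This is not the implementation of Patent Document 1; the regularization constant `eps` and the assumption that the MTF is sampled on the FFT frequency grid are ours:

```python
import numpy as np

def inverse_filter_mtf(image, mtf, eps=1e-3):
    """Boost each spatial frequency by the inverse of the lens MTF
    attenuation. `image` is a 2-D grayscale array; `mtf` holds the
    MTF values (in (0, 1]) on the same grid as np.fft.fft2's output.
    `eps` guards against division by near-zero MTF values."""
    spectrum = np.fft.fft2(image)
    compensated = spectrum / np.maximum(mtf, eps)
    return np.real(np.fft.ifft2(compensated))
```

With a flat MTF of 1 the filter is the identity; real lenses attenuate high frequencies, so those frequencies get boosted before projection.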
- when an image is projected by an image display device such as a projector, it is important to prevent the occurrence of blurring due to the performance of the projection optical system or the like.
- the projected image may be blurred.
- an object of the present technology is to provide an information processing apparatus, an information processing method, a program, and an image display apparatus that can project a high-quality image.
- an information processing apparatus includes a projection instruction unit and an output unit.
- the projection instruction unit instructs the projection of a calibration image on which one or more representative pixels are displayed.
- the output unit outputs a GUI (Graphical User Interface) for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image.
- the degree of blur of each representative pixel in the projected calibration image is input by the user via the GUI.
- a high-quality image can be projected.
- the information processing apparatus may further include a correction unit that corrects a projected image based on the degree of blur of each of the one or more representative pixels input via the GUI. Thereby, a high quality image can be projected.
- the information processing apparatus may further include a calculation unit that calculates a PSF (Point Spread Function) for each pixel of the projected image based on the degree of blur of each of the one or more representative pixels input via the GUI. By performing inverse filtering on the input image using the calculated PSF, a high-quality image can be projected.
- the output unit may output a GUI capable of creating a shape representing the degree of blur. This makes it possible to project an image that does not look broken to the user.
- the calculation unit may calculate a PSF for the representative pixel based on the input shape representing the degree of blur and the size of the spread of light from the representative pixel due to the blur. As a result, it is possible to project a high-quality image that does not look broken to the user.
- the output unit may output a frame image indicating a size of spread of light from the representative pixel due to the blur so that a shape representing the degree of blur can be created in the frame image.
- the output unit may output a reference image indicating the representative pixel in a state where no blur occurs so that the shape of the reference image can be changed. Accordingly, it is possible to automatically calculate the size of the spread of light due to the blur based on the size of the reference image, and to omit the operation of inputting the size.
- the output unit may output a GUI for inputting a size of light spread from the representative pixel due to the blur. Thereby, a simple operation for the user is realized.
- the information processing apparatus may further include a storage unit that stores a spot diagram of a projection apparatus that projects the calibration image.
- the calculation unit may calculate the size of the spread of light from the representative pixel due to the blur based on the stored spot diagram. As a result, the PSF can be easily calculated.
- the output unit may output a plurality of candidate shape images serving as shape candidates representing the degree of blur. As a result, the user can easily input the degree of blur.
- the output unit may output the plurality of candidate shape images so that each shape can be changed. As a result, the accuracy of PSF calculation can be improved.
- the projection instruction unit may instruct the projection of an image corrected based on the PSF for each pixel calculated by the calculation unit.
- the user can input the degree of blur while confirming the projected image.
- An information processing method is an information processing method executed by a computer and includes instructing projection of a calibration image on which one or more representative pixels are displayed.
- a GUI for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image is output.
- a program causes a computer to execute the following steps. Instructing projection of a calibration image on which one or more representative pixels are displayed. Outputting a GUI for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image.
- An image display device includes an input unit, an image projection unit, a projection instruction unit, an output unit, and a correction unit.
- Image information is input to the input unit.
- the image projection unit can generate and project an image based on the image information.
- the projection instruction unit projects a calibration image in which one or more representative pixels are displayed on the image projection unit.
- the output unit outputs a GUI for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image.
- the correction unit corrects the input image information based on the degree of blur of each of the one or more representative pixels input via the GUI.
- FIG. 1 is a schematic diagram illustrating a configuration example of an image display system according to a first embodiment. FIG. 2 is a schematic diagram showing an example of the internal configuration of a projector. FIG. 3 is a schematic block diagram showing a configuration example of a PC. FIG. 4 is a block diagram showing an outline of PSF calculation according to the embodiment.
- FIG. 1 is a schematic diagram illustrating a configuration example of an image display system according to the first embodiment of the present technology.
- the image display system 500 includes a projector 100 and a PC (Personal Computer) 200 that operates as an information processing apparatus according to the present technology.
- the projector 100 and the PC 200 are connected to each other, and the operation of the projector 100 can be controlled by operating the PC 200.
- the projector 100 is used, for example, as a projector for presentation or digital cinema.
- the present technology can also be applied to projectors used for other purposes or image display devices other than projectors.
- the projector 100 includes an input interface 101 provided with, for example, an HDMI (registered trademark) (High-Definition Multimedia Interface) terminal, a WiFi module, and the like.
- the PC 200 is connected to the input interface 101 via a wired or wireless connection. Further, image information to be projected is input to the input interface 101 from an image supply source (not shown). Note that the PC 200 may be an image supply source.
- FIG. 2 is a schematic diagram showing an example of the internal configuration of the projector 100.
- the projector 100 includes a light source unit 110, a light modulation unit 120, a projection unit 130, and a display control unit 140.
- the light source unit 110 typically generates white light and outputs the white light to the light modulation unit 120.
- a solid-state light source such as an LED (Light Emitting Diode) or an LD (Laser Diode), a mercury lamp, a xenon lamp, or the like is disposed.
- the light modulation unit 120 modulates the light from the light source unit 110 based on the image information input to the input interface 101 to generate an image 1 (see FIG. 1).
- the light modulation unit 120 includes, for example, an integrator element, a polarization conversion element, a splitting optical system that divides the white light into the three RGB color lights, three light modulation elements that modulate the respective color lights, and a combining optical system that combines the modulated color lights. The specific configurations of these members and optical systems are not limited.
- the projection unit 130 has a plurality of lenses, and projects the image 1 generated by the light modulation unit 120 onto a projection surface 5 (see FIG. 1) such as a screen.
- the configuration of the projection unit 130 is not limited, and any configuration may be adopted as appropriate.
- an image projection unit is realized by the light source unit 110, the light modulation unit 120, and the projection unit 130.
- the display control unit 140 controls the operation of each mechanism in the image display device 100.
- the display control unit 140 executes various processes on the image information input from the input interface 101.
- the display control unit 140 can correct the input image information.
- the configuration of the display control unit 140 is not limited, and arbitrary hardware and software may be used as appropriate.
- FIG. 3 is a schematic block diagram illustrating a configuration example of the PC 200.
- the PC 200 includes a CPU (Central Processing Unit) 201, a ROM (Read Only Memory) 202, a RAM (Random Access Memory) 203, an input / output interface 205, and a bus 204 that connects these components to each other.
- a display unit 206, an operation unit 207, a storage unit 208, a communication unit 209, a drive unit 210, an I / F (interface) unit 212, and the like are connected to the input / output interface 205.
- the display unit 206 is a display device using, for example, liquid crystal, EL (Electro-Luminescence), or the like.
- the operation unit 207 is, for example, a keyboard, a pointing device, or other operation devices.
- the touch panel can be integrated with the display unit 206.
- the storage unit 208 is a non-volatile storage device, such as an HDD (Hard Disk Drive), flash memory, or other solid-state memory.
- the drive unit 210 is a device that can drive a removable recording medium 211 such as an optical recording medium.
- the communication unit 209 is a communication device that can be connected to a LAN (Local Area Network), a WAN (Wide Area Network), or the like to communicate with other devices.
- the communication unit 209 may communicate using either wired or wireless communication.
- the I/F unit 212 is an interface for connecting other devices and various cables, such as a USB (Universal Serial Bus) terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) terminal, or a network terminal.
- Information processing by the PC 200 is realized by the cooperation of the software stored in the storage unit 208 or the ROM 202 and the hardware resources of the PC 200. Specifically, it is realized by the CPU 201 loading a program constituting the software stored in the storage unit 208 or the like into the RAM 203 and executing it.
- a projection instruction unit, an output unit, a correction unit, and a calculation unit are realized by the CPU 201 executing a predetermined program.
- Dedicated hardware may be used to implement these blocks.
- the program is installed in the PC 200 via the recording medium 211, for example. Alternatively, installation may be performed via a global network or the like. Note that the information processing apparatus according to the present technology is not limited to the PC 200 described above, and various computers may be used.
- the projected image 1 is deteriorated due to the performance of optical elements such as lenses and mirrors arranged in the light modulation unit 120 and the projection unit 130 and the shape of the projection surface 5 on which the image 1 is displayed.
- in the present embodiment, a PSF (Point Spread Function) is calculated for each pixel of the projected image 1, and an inverse filter operation is executed using the calculated PSF.
- FIG. 4 is a block diagram showing an outline of PSF calculation according to the present embodiment.
- the CPU 201 functioning as a projection instruction unit instructs the projector 100 to project the calibration image 10 (step 101).
- the calibration image 15 is projected by the projector 100 that has received the instruction (step 102).
- the projected calibration image 15 is an image including deterioration such as the above-described blur.
- note that different reference numerals are used to distinguish the input image information (calibration image 10) from the projected image (calibration image 15).
- the calibration image 15 projected on the projection surface 5 is visually observed by the user 6, and the out-of-focus blur occurring in the calibration image 15 is reproduced.
- the reproduction of the out-of-focus blur corresponds to the input of the degree of blur (step 103).
- the out-of-focus blur is reproduced for each representative point (representative pixel) 16 in the calibration image 15.
- the PSF for each representative point 16 is calculated by the CPU 201 functioning as a calculation unit based on the reproduced out-of-focus blur (step 104). Based on the PSFs for the representative points 16, the PSFs of pixels other than the representative points 16 are calculated by interpolation (step 105). As a result, a PSF is output for each pixel of the projection image 1 projected by the projector 100 (step 106).
- because the user 6 visually observes the blur in the projected calibration image 15 and inputs its degree, the blur as actually perceived by the user 6 can be captured with high accuracy, and a corrected image 1 that causes no sense of incongruity can be projected.
- in contrast, when the projected calibration image 15 is captured by a camera and the PSF is calculated automatically, or when the PSF is calculated automatically based on the design values of the projector 100, a portion that clearly looks strange to the user 6, that is, a portion where the image is broken, may occur.
- with a camera, the estimation accuracy of the PSF is lowered by aberration or noise of the camera lens. When the PSF is calculated using design values, it is difficult to correct blur caused by characteristics on the projection surface 5 side, such as the shape and material of the screen at the time of projection.
- FIG. 5 is a flowchart showing a detailed example of the PSF calculation.
- the calibration image 10 is input to the projector 100 and projected (step 201).
- FIG. 6 is a schematic diagram illustrating a configuration example of the input calibration image 10.
- FIG. 7 is a schematic diagram showing the calibration image 15 projected by the projector 100.
- the calibration image 10 is an image in which a single pixel (one dot) is displayed at predetermined intervals in the vertical and horizontal directions.
- a plurality of single pixels arranged in this manner become one or more representative pixels 11 according to the present embodiment.
- the number of representative pixels 11 and the size of the interval in the vertical and horizontal directions are not limited.
- the representative pixel 11 is a pixel for which a PSF representing the spread of light for each pixel is calculated.
- the pixel value set to the representative pixel 11 of the calibration image 10 is not limited.
- typically, the maximum pixel value defined for the bit depth (255 for 8 bits, 1023 for 10 bits) is set.
- other pixel values may be set.
- the calibration image 10 including one or more representative pixels 11 is also referred to as a one-dot pattern image.
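A one-dot pattern image of this kind could be generated as in the following sketch; the spacing and pixel value are free parameters, as the description notes, and the function name and dot placement are ours:

```python
import numpy as np

def one_dot_pattern(height, width, spacing, value=255):
    """Calibration image with single lit pixels (representative pixels)
    at regular vertical and horizontal intervals on a black background."""
    img = np.zeros((height, width), dtype=np.uint8)
    # Light one pixel per spacing x spacing cell, offset to the cell center.
    img[spacing // 2::spacing, spacing // 2::spacing] = value
    return img
```

For example, `one_dot_pattern(100, 100, 20)` yields a 5 x 5 grid of 25 lit pixels.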
- the projected calibration image 15 is blurred.
- the representative pixels 16d in the four corner areas D1-D4 of the calibration image 15 are displayed blurred. That is, the areas D1-D4 become defocus areas (hereinafter referred to as the defocus areas D1-D4, using the same reference numerals).
- the focus is adjusted based on at least a part of the area by autofocus, manual operation, or the like. Therefore, the representative pixel 16 is displayed with almost no blur for the predetermined area.
- the center area is the focused area F in focus. Therefore, the representative pixel 16f in the focus area F is a pixel that is projected with almost no blur.
- the positions of the focus area F and the defocus area D are not limited and depend on the shape of the projection surface 5 and the like.
- the user 6 selects the representative pixel 16 for PSF calculation (step 202).
- the PC 200 displays, for example, a text image for explaining the selection operation on the display unit 206 or projects a pointer or the like for selecting the representative pixel 16 on the projector 100.
- the user 6 selects the representative pixel 16 by operating the operation unit 207 of the PC 200.
- the representative pixel 16d in the upper right defocus area D1 is selected.
- a GUI for reproducing the degree of blur is displayed on the display unit 206 by the CPU 201 functioning as an output unit (step 203). That is, a GUI for inputting the degree of blur of each of the one or more representative pixels 16 in the projected calibration image 15 is output.
- FIG. 8 is a schematic diagram showing a configuration example of a GUI for reproducing the degree of blur.
- a GUI 20 capable of creating a shape representing the degree of blur (hereinafter referred to as a blur shape) is displayed as the blur degree reproduction GUI.
- FIG. 8A shows the GUI 20 a before the shape is created by the user 6.
- FIG. 8B shows the GUI 20b after the shape is created by the user.
- the GUI 20a illustrated in FIG. 8A includes a frame image 21 having a predetermined size and a reference image 22, arranged in the frame image 21, that serves as a reference for creating the shape. The frame image 21 is displayed as an image inside which a blur shape can be created.
- the reference image 22 is arranged at the center in the frame image 21 and is displayed with, for example, the maximum pixel value.
- the user 6 reproduces the shape of the representative pixel 16 d by changing the shape of the reference image 22.
- the representative pixel 16d in the defocus area D1 in FIG. 7 is displayed in an elliptical shape whose major axis extends in the diagonal direction to the right. This shape corresponds to the shape of the light spreading from the representative pixel 16d, and it is created as the blurred shape 25 as shown in FIG. 8B (step 204).
- the pixel values in the shape 25 can be partially changed.
- the luminance of light decreases from the center of the representative pixel 16d toward the edge.
- the portion having a low pixel value is shown in gray.
- the color of the frame image 21 and the reference image 22 is not limited.
- for example, by displaying the frame image 21 and the reference image 22 in the color for which the PSF is to be calculated, high-precision correction can be performed for each color; using a single color instead can simplify the process.
- the technique for reproducing the blurred shape 25 on the UI is not limited, and an arbitrary technique may be used.
- the blur shape 25 can be created by using a well-known drawing technique or the like.
- text information or the like for explaining or prompting the user 6 for each operation of creating the blurred shape 25 may be displayed as appropriate. Hereinafter, description of such text information display may be omitted.
- a PSF for the representative pixel 16d is calculated based on the blurred shape 25 (step 205).
- 9 and 10 are diagrams for explaining an example of calculating the PSF based on the created blur shape 25.
- the area in the frame image 21 is divided by the size S1 of the representative pixel 16d in a state where no blur occurs.
- the size S1 is the size of light spread when no blur is generated in the representative pixel 16f.
- the representative pixel 16d in a state where no blur occurs may be simply referred to as a representative pixel 16d without blur.
- the size S1 can be calculated based on the size of the spread of light from the representative pixel 16d due to blur, that is, the blur size S2.
- the area in the frame image 21 is divided into nine equal parts so as to correspond to three pixels each in the vertical and horizontal directions.
- the size of the representative pixel area 30 is the size S1 of the representative pixel 16d without blur.
- the central representative pixel region 30a is a display region for the representative pixel 16d without blur. That is, the blur is generated around the representative pixel region 30a.
- the pixel values P of the 64 pixels in each representative pixel region 30 are summed, and a sum value S is calculated for each representative pixel region 30.
- the normalized value N is calculated for each representative pixel region 30 by normalizing the total value S. This normalized value N is output as the PSF for the representative pixel 16d. That is, the PSF for the representative pixel 16d is calculated based on the blur shape 25 shown in FIG. 8B and the blur size S2.
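The sum-and-normalize step above can be sketched as follows. We assume, for illustration, a square shape image whose side is a multiple of the grid size; the 3 x 3 grid of 8 x 8-pixel regions matches the example in the description:

```python
import numpy as np

def psf_from_blur_shape(shape_img, grid=3):
    """Reduce a drawn blur shape to a PSF: split the frame into a
    grid x grid array of representative pixel regions, sum the pixel
    values P in each region (total value S), then normalize the sums
    so they add to 1 (normalized values N)."""
    n = shape_img.shape[0] // grid
    # Reshape so each (region_row, region_col) block can be summed at once.
    sums = shape_img.reshape(grid, n, grid, n).sum(axis=(1, 3)).astype(float)
    return sums / sums.sum()
```

A shape confined to the central region yields a PSF with all weight in the center cell, i.e. a pixel with no blur.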
- the blur size S2 may be input by the user 6 via the operation unit 207.
- the user 6 visually compares the representative pixel 16 f in the focus area F of the projected calibration image 15 with the representative pixel 16 d in the defocus area D.
- the size of the spread of light from the representative pixel 16d is grasped with reference to the size of the representative pixel 16f.
- the size is input to the PC 200 as the blur size S2.
- the GUI for inputting the blur size S2 may be arbitrarily set.
- the spot diagram of the projector 100 may be stored in the storage unit 208 or the like, and the blur size S2 may be automatically calculated based on the spot diagram.
- the spot diagram is a plot of points where light rays intersect the evaluation surface, and is information that enables evaluation of image features (for example, flare appearance).
- the spot shape at each position on the projection surface 5 of the image projected from the projector 100 can be acquired by the spot diagram. Based on the ratio between the spot size of the focused point in the spot diagram and the spot size of the blur reproduction target point, the blur size S2 of the blur reproduction target point can be calculated.
- the spot size is, for example, an RMS value (based on the sum of squares of the differences between the center-of-gravity position and each point position; unit: mm). Of course, it is not limited to this.
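One possible reading of the ratio-based calculation above is sketched below; the exact RMS definition is our assumption (a common one), and the function names are ours:

```python
import numpy as np

def rms_spot_size(points):
    """Root-mean-square distance of each ray intersection point from
    the spot centroid (one common RMS spot-size definition)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())

def blur_size_from_spots(focus_spot, target_spot, reference_size):
    """Blur size S2 of the blur reproduction target point: scale the
    in-focus reference size by the ratio of the target point's spot
    size to the focused point's spot size."""
    return reference_size * rms_spot_size(target_spot) / rms_spot_size(focus_spot)
```

A target spot twice as spread out as the focused spot thus gives a blur size twice the reference size.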
- in these methods, the blur size S2 is acquired, by input from the user 6 or by use of the spot diagram, before the blur shape 25 is created.
- the frame image 21 illustrated in FIGS. 8A and 8B may be displayed as an image indicating the blur size S2.
- the reference image 22 may be displayed as an image showing the representative pixel 16d without blur.
- alternatively, a one-dot image 20c may be displayed in which the area in the frame image 21 is divided into representative pixel regions 30, each having a size corresponding to one dot of the representative pixel 16d without blur.
- the user 6 can create the blurred shape 25 with high operability and high accuracy.
- the reference image 22 may be presented to the user 6 as the representative pixel 16d without blur. Then, the user 6 may be requested to create the blurred shape 25 by deforming the reference image 22. For example, the user 6 considers the reference image 22 as the representative pixel 16f in the focus area F, and creates the blurred shape 25 so as to have the shape of the representative pixel 16d to be PSF calculated.
- the PC 200 divides the size of the created blur shape 25 (for example, the size of a rectangular area whose half-diagonal is the line from the center to the farthest pixel) by the size of the reference image 22 presented first. In this way, the blur size S2 can be calculated.
- any method may be used as a method for calculating the blur size S2.
- an image indicating the size S1 of the representative pixel 16d without blur may be drawn at the center of the created blur shape 25.
- the blur size S2 may be calculated by feeding back the result of the correction executed by the PSF once estimated.
- any GUI may be appropriately used as a GUI for creating the blur shape 25 or inputting the blur size S2.
- a GUI 20d having a predetermined pixel group 31 as a unit for shape creation may be displayed. By selecting a pixel value for each pixel group 31, the luminance of light from the projection surface 5 can be reproduced.
- it is then determined whether or not to reproduce the degree of blur of other representative pixels 16 (step 206). For example, if there is a representative pixel 16 for which the degree of blur has not yet been reproduced, the process returns from step 206 to step 202. If the input of the degree of blur has been completed for all the representative pixels 16 (No in step 206), the process proceeds to step 207. Of course, the determination in step 206 may be executed in accordance with an operation by the user 6.
- in step 207, after the PSF has been calculated for each representative pixel 16, it is determined whether or not to perform PSF interpolation. If the determination result is Yes, the PSF is interpolated (step 208), and a PSF map, i.e. a PSF for each pixel of the projected image 1, is calculated (step 209). If the determination in step 207 is No, the PSF map is calculated without interpolating the PSF (step 209).
- FIG. 13 is a diagram for explaining a processing example of PSF interpolation.
- the PSF of each of the representative points 1-4, which are representative pixels 16, is calculated.
- the PSF of the pixel (interpolation target point 35) in the region surrounded by these representative points 1-4 is calculated by interpolation.
- the PSF of the representative point whose coordinate position is closest to the interpolation target point 35 (here, representative point 1) may be set as the PSF of the interpolation target point 35. This makes it possible to interpolate the PSF easily.
- the PSFs of the representative points 1-4 may be mixed according to the coordinate values.
- the PSFs of the representative points 1 and 2 are mixed based on the coordinate values in the horizontal direction to calculate the horizontal interpolation PSF1.
- the PSFs of the representative points 3 and 4 are mixed based on the horizontal coordinate values to calculate the horizontal interpolation PSF2.
- the horizontal interpolation PSF1 and PSF2 are mixed based on the coordinate values in the vertical direction to calculate the vertical interpolation PSF.
- the vertical interpolation PSF becomes the PSF of the interpolation target point 35.
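The two-stage mixing described above is, in effect, bilinear interpolation of the four representative PSFs. A minimal sketch (the array layout, function name, and parameter names are assumptions, not taken from the disclosure):

```python
import numpy as np

def interpolate_psf(psf1, psf2, psf3, psf4, tx: float, ty: float) -> np.ndarray:
    """Bilinearly mix four representative-point PSFs.

    psf1/psf2 are the upper-left/upper-right representative points,
    psf3/psf4 the lower-left/lower-right ones (all same-shaped 2-D arrays).
    tx, ty in [0, 1] give the interpolation target point's fractional
    horizontal/vertical position inside the rectangle they span.
    """
    # Horizontal interpolation PSF1: mix points 1 and 2 by the x coordinate.
    h1 = (1.0 - tx) * psf1 + tx * psf2
    # Horizontal interpolation PSF2: mix points 3 and 4 the same way.
    h2 = (1.0 - tx) * psf3 + tx * psf4
    # Vertical interpolation: mix the two horizontal results by y.
    return (1.0 - ty) * h1 + ty * h2
```

At tx = ty = 0 this degenerates to the nearest-corner PSF, which matches the simpler nearest-representative assignment also mentioned as an option.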
- the PSF interpolation method is not limited, and other methods may be used.
- the PSF map calculated by the PC 200 is output to the projector 100 and stored in a memory or the like.
- the display control unit 140 of the projector 100 functions as a correction unit, and the input image information is corrected based on the stored PSF map. As a result, a high quality image can be projected.
- the image information may be input to the PC 200 and the image information may be corrected by the CPU 201 functioning as a correction unit.
- the corrected image information is output to the projector 100 and projected. Even in such processing, high-quality image projection is realized.
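As an illustration of such a correction, the following sketch builds a regularized (Wiener-style) inverse filter from a PSF in the frequency domain. The regularization term `eps` and all names are our assumptions: the embodiment only specifies that an inverse filter operation based on the PSF (map) is performed, not how it is realized.

```python
import numpy as np

def precorrect(image: np.ndarray, psf: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """Pre-correct an image with an inverse filter built from a PSF.

    Uses a Wiener-style regularized inverse; eps avoids division by
    near-zero values at frequencies the PSF suppresses. The projector
    then blurs the pre-corrected image back toward the original.
    """
    H = np.fft.fft2(psf, s=image.shape)          # PSF transfer function
    inv = np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * inv))
    return np.clip(out, 0.0, 1.0)                # keep displayable range
```

In a per-pixel PSF map the filter varies spatially, so in practice such a correction would be applied blockwise or with a spatially varying kernel rather than with a single global FFT.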
- FIG. 14 is a block diagram showing an example of PSF calculation according to the present embodiment.
- the one-dot pattern calibration image 15 is projected.
- as a GUI for inputting the degree of blur of each of the one or more representative pixels 16 in the calibration image 15, a plurality of candidate images that are candidates for the blur shape are displayed.
- FIG. 15 is a schematic diagram illustrating a configuration example of a plurality of candidate images.
- candidate images 40 as shown in FIGS. 15A to 15C are stored in the database 50 and are read out as appropriate. There is no limitation on which shapes are prepared as the candidate images 40. For example, based on the spot diagram, shapes having a high probability of matching the blurred shape 25 are created as the plurality of candidate images 40. Of course, images having the same outer shape but different blur sizes may also be prepared as candidate images 40.
- the user 6 selects the candidate image 40 having the closest shape (including the blur size) from among the plurality of candidate images 40 while visually observing the representative pixel 16 that is the target of PSF calculation in the calibration image 15 (step 303).
- a PSF for the representative pixel 16 is calculated based on the selected candidate image 40 (step 304), and a PSF map is output by the same processing as in the first embodiment (steps 305 and 306).
- a plurality of candidate images 40 are prepared in advance and the user 6 selects them, thereby simplifying the PSF calculation process and shortening the processing time.
- the user 6 can easily input the degree of blur.
- the shape of the selected candidate image 40 may be further changeable. As a result, the accuracy of the degree of blur input can be improved, and the PSF calculation accuracy is also improved.
- a method for changing the shape of the candidate image 40 is not limited, and any technique may be used.
- the blur intensity corresponds to the correction intensity at the representative pixel 16. That is, when the blur intensity is strong, the correction intensity in the inverse filter calculation is also strong. When the blur intensity is weak, the correction intensity is also weak.
- FIG. 16 is a diagram illustrating an example of a GUI for changing the blur intensity (correction intensity) in the representative pixel.
- a diagram of a Gaussian shape (hereinafter referred to as a Gaussian shape diagram) is displayed.
- the intensity of the blur can be adjusted by changing the magnitude of σ of the Gaussian shape.
- the size of x0 and y0 may be changeable.
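This adjustment can be illustrated as follows; the symbols σ, x0 and y0 follow the Gaussian shape diagram, while the discrete grid, the odd `size` parameter, and the unit-sum normalization are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(size: int, sigma: float, x0: float = 0.0, y0: float = 0.0) -> np.ndarray:
    """Gaussian-shaped PSF whose blur (correction) intensity is set by sigma.

    size:   odd side length of the square kernel, in pixels.
    x0, y0: shift of the peak, mirroring the adjustable center of the
            Gaussian shape diagram.
    The array is normalized to unit sum so the PSF conserves energy.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(((x - x0) ** 2) + ((y - y0) ** 2)) / (2.0 * sigma ** 2))
    return g / g.sum()
```

A larger σ spreads the same energy over more pixels, which corresponds to a stronger blur and hence a stronger correction intensity in the subsequent inverse filter calculation.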
- a Gaussian shape diagram 45 shown in FIG. 16 is included in the GUI for inputting the degree of blur according to the present technology.
- FIG. 17 is a block diagram illustrating a calculation example of PSF according to another embodiment.
- a plurality of PSFs are stored in the database 55 in advance.
- the PC selects a predetermined PSF from the plurality of PSFs, and instructs the projector to project the calibration image 60 corrected based on the PSF (step 401).
- the user 6 selects a PSF to be actually used while visually confirming the representative pixel 61 that is the target of PSF calculation in the projected calibration image 60. For example, by operating the operation unit of the PC, the PSF in the database 55 is switched, and the PSF that yields the most accurate correction is selected. This embodiment, too, makes it possible to project a high-quality image.
- the calibration image 60 corrected based on the PSF selected by the PC can be said to be a preview image.
- the PSF is information corresponding to the degree of blur of each representative pixel 61. Therefore, the preview image is included in the GUI for inputting the degree of blur of each of the one or more representative pixels.
- the intensity of the PSF determined through the preview image may be adjustable by the user 6. In this case, for example, a Gaussian shape diagram 45 shown in FIG. 16 or the like may be displayed.
- an image that is not a one-dot pattern may be used as the calibration image.
- the projection target image may be used as it is, and the PSF may be calculated based on the preview image.
- even in that case, the present technology, which calculates the PSF based on the user's visual observation, is effective.
- the correction results of the representative points may be mixed to calculate the correction result of the interpolation target point.
- the representative pixels that are PSF calculation targets are selected one by one.
- a representative pixel as a representative may be selected from a plurality of representative pixels in a defocus area within a predetermined range.
- the PSF calculated for the representative representative pixel may be set as the PSF of another representative pixel in the defocus area. That is, a common PSF may be set for each local area. As a result, the processing can be simplified and the processing time can be shortened.
- the GUI for inputting the degree of blur is displayed on the display unit 206 of the PC 200.
- a GUI for inputting the degree of blur is displayed on the projection surface 5, and the creation of the blur shape 25 or the like may be performed on the projection surface 5.
- although the image projected on the projection surface 5 is itself blurred, the blurred shape 25 and the like are created using a relatively large area, which suppresses the influence of the blur. That is, the PSF for each representative pixel can be calculated with sufficient accuracy.
- Parameters other than PSF may be used as a method for correcting image information based on the degree of blur input via the GUI. That is, the method for correcting an image based on the input degree of blur is not limited, and an arbitrary method may be used.
- the display control unit 140 of the projector 100 illustrated in FIG. 2 may function as an output unit and a calculation unit according to the present technology.
- the projector 100 can operate as an image display device according to the present technology.
- the projector 100 also functions as an information processing apparatus according to the present technology.
- the present technology may also adopt the following configurations.
- a projection instruction unit for instructing projection of a calibration image on which one or more representative pixels are displayed;
- An information processing apparatus comprising: an output unit that outputs a GUI (Graphical User Interface) for inputting a degree of blur of each of the one or more representative pixels in the projected calibration image.
- a correction unit that corrects a projected image based on a degree of blur of each of the one or more representative pixels input via the GUI.
- An information processing apparatus comprising: a calculation unit that calculates a PSF (Point Spread Function) for each pixel of a projected image based on a degree of blur of each of the one or more representative pixels input via the GUI.
- the output unit outputs a GUI capable of creating a shape representing the degree of blur.
- the information processing apparatus according to (4), wherein the calculation unit calculates the PSF for the representative pixel based on the input shape representing the degree of blur and the size of the spread of light from the representative pixel due to blur.
- the output unit outputs a frame image indicating the size of the spread of light from the representative pixel due to the blur so that the shape representing the degree of blur can be created within the frame image.
- the output unit outputs a reference image indicating the representative pixel in a state where no blur occurs so that the shape of the reference image can be changed.
- the output unit outputs a GUI for inputting a size of light spread from the representative pixel due to the blur.
- (9) The information processing apparatus according to any one of (5) to (8), further comprising a storage unit for storing a spot diagram of a projection device that projects the calibration image, wherein the calculation unit calculates the size of the spread of light from the representative pixel due to the blur based on the stored spot diagram.
- (10) The information processing apparatus according to any one of (4) to (9), wherein the output unit outputs a plurality of candidate shape images serving as candidates for the shape representing the degree of blur.
- (11) The information processing apparatus according to (10), wherein the output unit outputs the plurality of candidate shape images so that each shape can be changed.
- (12) The information processing apparatus according to any one of (3) to (11), wherein the projection instruction unit instructs projection of an image corrected based on the PSF for each pixel calculated by the calculation unit.
- S1 representative pixel size in a state where no blur occurs
- S2 ... blur size
- 1 ... projected image
- 10 ... calibration image (image information)
- 11 ... representative pixel of the calibration image (image information)
- 15 ... calibration image (projected image)
- 16 ... representative pixel of the calibration image (projected image)
- 20 ... GUI capable of creating a blur shape
- 21 ... frame image
- 22 ... reference image
- 25 ... blur shape
- 40 ... candidate image
- 60 ... calibration image (after correction)
- 61 ... representative pixel of the calibration image (after correction)
- 100 ... projector
- 200 ... PC
- 500 ... image display system
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Controls And Circuits For Display Device (AREA)
- Projection Apparatus (AREA)
- Image Processing (AREA)
- Transforming Electric Information Into Light Information (AREA)
Abstract
Description
The projection instruction unit instructs projection of a calibration image on which one or more representative pixels are displayed.
The output unit outputs a GUI (Graphical User Interface) for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image.
This makes it possible to project a high-quality image.
By executing inverse filter processing on the input image using the calculated PSF, it becomes possible to project a high-quality image.
This makes it possible to project an image that shows no visible breakdown to the user.
This enables projection of a high-quality image that shows no visible breakdown to the user.
This makes it possible to omit the operation of inputting the size of the spread of light due to blur, so that the PSF can be calculated easily.
This makes it possible to automatically calculate the size of the spread of light due to blur with reference to the size of the reference image, so that the operation of inputting that size can be omitted.
This realizes simple operation for the user.
This makes it possible to calculate the PSF easily.
This allows the user to input the degree of blur easily.
This makes it possible to improve the accuracy of the PSF calculation.
This allows the user to input the degree of blur while checking the projected image.
A GUI for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image is output.
A step of instructing projection of a calibration image on which one or more representative pixels are displayed.
A step of outputting a GUI for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image.
Image information is input to the input unit.
The image projection unit is capable of generating and projecting an image based on the image information.
The projection instruction unit causes the image projection unit to project a calibration image on which one or more representative pixels are displayed.
The output unit outputs a GUI for inputting the degree of blur of each of the one or more representative pixels in the projected calibration image.
The correction unit corrects the input image information based on the degree of blur of each of the one or more representative pixels input via the GUI.
[Image display system]
FIG. 1 is a schematic diagram showing a configuration example of an image display system according to a first embodiment of the present technology. The image display system 500 includes a projector 100 and a PC (Personal Computer) 200 that operates as an information processing apparatus according to the present technology. The projector 100 and the PC 200 are connected to each other, and the operation of the projector 100 can be controlled by operating the PC 200.
For example, depending on the performance of optical elements such as lenses and mirrors arranged in the light modulation unit 120 and the projection unit 130, the shape of the projection surface 5 on which the image 1 is displayed, and other factors, deterioration such as blur may occur in the projected image 1. In the present embodiment, in order to correct such deterioration, the PC 200 calculates a PSF (Point Spread Function) for each pixel of the projected image 1, and an inverse filter operation is then executed using the calculated PSF.
An information processing apparatus according to a second embodiment of the present technology will now be described. In the following description, parts whose configuration and operation are similar to those of the PC 200 described in the above embodiment will be omitted or simplified.
The present technology is not limited to the embodiments described above, and various other embodiments can be realized.
- (1) An information processing apparatus comprising: a projection instruction unit that instructs projection of a calibration image on which one or more representative pixels are displayed; and an output unit that outputs a GUI (Graphical User Interface) for inputting a degree of blur of each of the one or more representative pixels in the projected calibration image.
- (2) The information processing apparatus according to (1), further comprising a correction unit that corrects a projected image based on the degree of blur of each of the one or more representative pixels input via the GUI.
- (3) The information processing apparatus according to (1) or (2), further comprising a calculation unit that calculates a PSF (Point Spread Function) for each pixel of a projected image based on the degree of blur of each of the one or more representative pixels input via the GUI.
- (4) The information processing apparatus according to any one of (1) to (3), wherein the output unit outputs a GUI capable of creating a shape representing the degree of blur.
- (5) The information processing apparatus according to (4), wherein the calculation unit calculates the PSF for the representative pixel based on the input shape representing the degree of blur and the size of the spread of light from the representative pixel due to blur.
- (6) The information processing apparatus according to (5), wherein the output unit outputs a frame image indicating the size of the spread of light from the representative pixel due to the blur so that the shape representing the degree of blur can be created within the frame image.
- (7) The information processing apparatus according to (5) or (6), wherein the output unit outputs a reference image indicating the representative pixel in a state where no blur occurs so that the shape of the reference image can be changed.
- (8) The information processing apparatus according to any one of (5) to (7), wherein the output unit outputs a GUI for inputting the size of the spread of light from the representative pixel due to the blur.
- (9) The information processing apparatus according to any one of (5) to (8), further comprising a storage unit that stores a spot diagram of a projection device that projects the calibration image, wherein the calculation unit calculates the size of the spread of light from the representative pixel due to the blur based on the stored spot diagram.
- (10) The information processing apparatus according to any one of (4) to (9), wherein the output unit outputs a plurality of candidate shape images serving as candidates for the shape representing the degree of blur.
- (11) The information processing apparatus according to (10), wherein the output unit outputs the plurality of candidate shape images so that each shape can be changed.
- (12) The information processing apparatus according to any one of (3) to (11), wherein the projection instruction unit instructs projection of an image corrected based on the PSF for each pixel calculated by the calculation unit.
S2 … blur size
1 … projected image
10 … calibration image (image information)
11 … representative pixel of the calibration image (image information)
15 … calibration image (projected image)
16 … representative pixel of the calibration image (projected image)
20 … GUI capable of creating a blur shape
21 … frame image
22 … reference image
25 … blur shape
40 … candidate image
60 … calibration image (after correction)
61 … representative pixel of the calibration image (after correction)
100 … projector
200 … PC
500 … image display system
Claims (15)
- An information processing apparatus comprising: a projection instruction unit that instructs projection of a calibration image on which one or more representative pixels are displayed; and an output unit that outputs a GUI (Graphical User Interface) for inputting a degree of blur of each of the one or more representative pixels in the projected calibration image.
- The information processing apparatus according to claim 1, further comprising a correction unit that corrects a projected image based on the degree of blur of each of the one or more representative pixels input via the GUI.
- The information processing apparatus according to claim 1, further comprising a calculation unit that calculates a PSF (Point Spread Function) for each pixel of a projected image based on the degree of blur of each of the one or more representative pixels input via the GUI.
- The information processing apparatus according to claim 1, wherein the output unit outputs a GUI capable of creating a shape representing the degree of blur.
- The information processing apparatus according to claim 4, wherein the calculation unit calculates the PSF for the representative pixel based on the input shape representing the degree of blur and the size of the spread of light from the representative pixel due to blur.
- The information processing apparatus according to claim 5, wherein the output unit outputs a frame image indicating the size of the spread of light from the representative pixel due to the blur so that the shape representing the degree of blur can be created within the frame image.
- The information processing apparatus according to claim 5, wherein the output unit outputs a reference image indicating the representative pixel in a state where no blur occurs so that the shape of the reference image can be changed.
- The information processing apparatus according to claim 5, wherein the output unit outputs a GUI for inputting the size of the spread of light from the representative pixel due to the blur.
- The information processing apparatus according to claim 5, further comprising a storage unit that stores a spot diagram of a projection device that projects the calibration image, wherein the calculation unit calculates the size of the spread of light from the representative pixel due to the blur based on the stored spot diagram.
- The information processing apparatus according to claim 4, wherein the output unit outputs a plurality of candidate shape images serving as candidates for the shape representing the degree of blur.
- The information processing apparatus according to claim 10, wherein the output unit outputs the plurality of candidate shape images so that each shape can be changed.
- The information processing apparatus according to claim 3, wherein the projection instruction unit instructs projection of an image corrected based on the PSF for each pixel calculated by the calculation unit.
- An information processing method in which a computer executes: instructing projection of a calibration image on which one or more representative pixels are displayed; and outputting a GUI (Graphical User Interface) for inputting a degree of blur of each of the one or more representative pixels in the projected calibration image.
- A program that causes a computer to execute: a step of instructing projection of a calibration image on which one or more representative pixels are displayed; and a step of outputting a GUI (Graphical User Interface) for inputting a degree of blur of each of the one or more representative pixels in the projected calibration image.
- An image display apparatus comprising: an input unit to which image information is input; an image projection unit capable of generating and projecting an image based on the image information; a projection instruction unit that causes the image projection unit to project a calibration image on which one or more representative pixels are displayed; an output unit that outputs a GUI (Graphical User Interface) for inputting a degree of blur of each of the one or more representative pixels in the projected calibration image; and a correction unit that corrects the input image information based on the degree of blur of each of the one or more representative pixels input via the GUI.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/558,324 US11069038B2 (en) | 2015-03-27 | 2016-01-12 | Information processing apparatus, information processing method, and image display apparatus |
JP2017509189A JP6794983B2 (ja) | 2015-03-27 | 2016-01-12 | 情報処理装置、情報処理方法、プログラム、及び画像表示装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-065963 | 2015-03-27 | ||
JP2015065963 | 2015-03-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016157671A1 true WO2016157671A1 (ja) | 2016-10-06 |
Family
ID=57004075
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/000113 WO2016157671A1 (ja) | 2015-03-27 | 2016-01-12 | 情報処理装置、情報処理方法、プログラム、及び画像表示装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11069038B2 (ja) |
JP (1) | JP6794983B2 (ja) |
WO (1) | WO2016157671A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018155269A1 (ja) * | 2017-02-27 | 2018-08-30 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
WO2019167455A1 (ja) * | 2018-03-02 | 2019-09-06 | ソニー株式会社 | 情報処理装置、情報処理装置の演算方法、プログラム |
WO2019187511A1 (ja) * | 2018-03-29 | 2019-10-03 | ソニー株式会社 | 信号処理装置、情報処理方法、プログラム |
WO2019230109A1 (ja) * | 2018-05-28 | 2019-12-05 | ソニー株式会社 | 画像処理装置、画像処理方法 |
WO2019230108A1 (ja) * | 2018-05-28 | 2019-12-05 | ソニー株式会社 | 画像処理装置、画像処理方法 |
WO2020184173A1 (ja) * | 2019-03-11 | 2020-09-17 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006174184A (ja) * | 2004-12-16 | 2006-06-29 | Olympus Corp | プロジェクタ |
JP2009042838A (ja) * | 2007-08-06 | 2009-02-26 | Ricoh Co Ltd | 画像投影方法および画像投影装置 |
JP2009524849A (ja) * | 2006-01-24 | 2009-07-02 | ザ トラスティーズ オブ コロンビア ユニヴァーシティ イン ザ シティ オブ ニューヨーク | シーン画像および奥行き形状を取り込んで補償画像を生成するためのシステム、方法、および媒体 |
JP2013516827A (ja) * | 2010-01-05 | 2013-05-13 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 画像投影装置及び方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005017136A (ja) * | 2003-06-26 | 2005-01-20 | Mitsubishi Electric Corp | 光学系評価装置 |
US20050190987A1 (en) * | 2004-02-27 | 2005-09-01 | Conceptual Assets, Inc. | Shaped blurring of images for improved localization of point energy radiators |
US20070286514A1 (en) * | 2006-06-08 | 2007-12-13 | Michael Scott Brown | Minimizing image blur in an image projected onto a display surface by a projector |
JP5116288B2 (ja) | 2006-11-16 | 2013-01-09 | 株式会社リコー | 画像投影装置及び画像投影方法 |
US20080117231A1 (en) * | 2006-11-19 | 2008-05-22 | Tom Kimpe | Display assemblies and computer programs and methods for defect compensation |
US8885941B2 (en) * | 2011-09-16 | 2014-11-11 | Adobe Systems Incorporated | System and method for estimating spatially varying defocus blur in a digital image |
JP2014220720A (ja) * | 2013-05-09 | 2014-11-20 | 株式会社東芝 | 電子機器、情報処理方法及びプログラム |
US9659351B2 (en) * | 2014-03-12 | 2017-05-23 | Purdue Research Foundation | Displaying personalized imagery for improving visual acuity |
US9712720B2 (en) * | 2014-06-02 | 2017-07-18 | Intel Corporation | Image refocusing for camera arrays |
WO2016075744A1 (ja) * | 2014-11-10 | 2016-05-19 | 日立マクセル株式会社 | プロジェクタ及び映像表示方法 |
US9684950B2 (en) * | 2014-12-18 | 2017-06-20 | Qualcomm Incorporated | Vision correction through graphics processing |
-
2016
- 2016-01-12 US US15/558,324 patent/US11069038B2/en active Active
- 2016-01-12 JP JP2017509189A patent/JP6794983B2/ja active Active
- 2016-01-12 WO PCT/JP2016/000113 patent/WO2016157671A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006174184A (ja) * | 2004-12-16 | 2006-06-29 | Olympus Corp | プロジェクタ |
JP2009524849A (ja) * | 2006-01-24 | 2009-07-02 | ザ トラスティーズ オブ コロンビア ユニヴァーシティ イン ザ シティ オブ ニューヨーク | シーン画像および奥行き形状を取り込んで補償画像を生成するためのシステム、方法、および媒体 |
JP2009042838A (ja) * | 2007-08-06 | 2009-02-26 | Ricoh Co Ltd | 画像投影方法および画像投影装置 |
JP2013516827A (ja) * | 2010-01-05 | 2013-05-13 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 画像投影装置及び方法 |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110313176A (zh) * | 2017-02-27 | 2019-10-08 | 索尼公司 | 图像处理装置、方法及程序 |
WO2018155269A1 (ja) * | 2017-02-27 | 2018-08-30 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
US11218675B2 (en) | 2018-03-02 | 2022-01-04 | Sony Corporation | Information processing apparatus, computation method of information processing apparatus, and program |
WO2019167455A1 (ja) * | 2018-03-02 | 2019-09-06 | ソニー株式会社 | 情報処理装置、情報処理装置の演算方法、プログラム |
WO2019187511A1 (ja) * | 2018-03-29 | 2019-10-03 | ソニー株式会社 | 信号処理装置、情報処理方法、プログラム |
JPWO2019187511A1 (ja) * | 2018-03-29 | 2021-04-15 | ソニー株式会社 | 信号処理装置、情報処理方法、プログラム |
WO2019230109A1 (ja) * | 2018-05-28 | 2019-12-05 | ソニー株式会社 | 画像処理装置、画像処理方法 |
WO2019230108A1 (ja) * | 2018-05-28 | 2019-12-05 | ソニー株式会社 | 画像処理装置、画像処理方法 |
JPWO2019230109A1 (ja) * | 2018-05-28 | 2021-07-08 | ソニーグループ株式会社 | 画像処理装置、画像処理方法 |
US11394941B2 (en) | 2018-05-28 | 2022-07-19 | Sony Corporation | Image processing device and image processing method |
US11575862B2 (en) | 2018-05-28 | 2023-02-07 | Sony Corporation | Image processing device and image processing method |
JP7287390B2 (ja) | 2018-05-28 | 2023-06-06 | ソニーグループ株式会社 | 画像処理装置、画像処理方法 |
WO2020184173A1 (ja) * | 2019-03-11 | 2020-09-17 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム |
US11431949B2 (en) | 2019-03-11 | 2022-08-30 | Sony Group Corporation | Image processing apparatus, image processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
US11069038B2 (en) | 2021-07-20 |
JP6794983B2 (ja) | 2020-12-02 |
JPWO2016157671A1 (ja) | 2018-01-18 |
US20180082406A1 (en) | 2018-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6794983B2 (ja) | 情報処理装置、情報処理方法、プログラム、及び画像表示装置 | |
JP4742898B2 (ja) | 画像処理方法、画像処理プログラム、記録媒体、及びプロジェクタ | |
JP6155717B2 (ja) | 画像処理装置、プロジェクター及び画像処理方法 | |
JP6469130B2 (ja) | プロジェクタ及び映像表示方法 | |
US10593300B2 (en) | Display device and method of controlling the same | |
US10218948B2 (en) | Image displaying system, controlling method of image displaying system, and storage medium | |
JP2006014356A (ja) | 画像投影システム | |
WO2016157670A1 (ja) | 画像表示装置、画像表示方法、情報処理装置、情報処理方法、及びプログラム | |
JP2019078786A (ja) | 画像投射システム、プロジェクター、及び画像投射システムの制御方法 | |
JP2017147634A (ja) | 投影装置、投影方法及び投影システム | |
JP2010041172A (ja) | 画像処理装置、画像表示装置、画像処理方法、画像表示方法及びプログラム | |
JP2005354680A (ja) | 画像投影システム | |
WO2018025474A1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
US10536676B2 (en) | Projection apparatus, control method, and storage medium | |
US10657622B2 (en) | Controlling projected image frame rate in response to determined projection surface curvature | |
JP2019062462A (ja) | 表示装置およびその制御装置、ならびにそれらの制御方法 | |
JP6685264B2 (ja) | 投写装置およびその制御方法、投写システム | |
KR20120095132A (ko) | 디지털 줌 시스템을 위한 영상 처리 장치 및 방법 | |
JP2014142466A (ja) | 投影装置及びその制御方法、プログラム、並びに記憶媒体 | |
JP6064699B2 (ja) | 画像処理装置、プロジェクター及び画像処理方法 | |
JP2016139036A (ja) | 表示装置 | |
JP2017161666A (ja) | 投射型画像表示装置 | |
JP2023036183A (ja) | プロジェクター、及びプロジェクターの制御方法 | |
JP2009237051A (ja) | プロジェクションシステムの光量調整方法、プロジェクションシステム、及びプロジェクタ | |
JP2015118275A (ja) | 画像処理装置、撮像装置、画像処理方法、プログラム、記憶媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16771568 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017509189 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15558324 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16771568 Country of ref document: EP Kind code of ref document: A1 |