WO2011135857A1 - Image Conversion Apparatus - Google Patents
Image Conversion Apparatus
- Publication number
- WO2011135857A1 (PCT/JP2011/002472)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image data
- eye
- eye image
- conversion
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/189—Recording image signals; Reproducing recorded image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client; Characteristics thereof
- H04N21/42646—Internal components of the client; Characteristics thereof for reading from or writing on a non-volatile solid state storage medium, e.g. DVD, CD-ROM
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/432—Content retrieval operation from a local storage medium, e.g. hard-disk
- H04N21/4325—Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
Definitions
- the present invention relates to an image conversion apparatus that converts a two-dimensional image (2D image) into a three-dimensional stereoscopic image (3D image).
- a playback device that plays back a 3D image reads, for example, a left-eye image signal and a right-eye image signal from a disc, and alternately outputs the read left-eye image signal and right-eye image signal to a display.
- when the display is used in combination with glasses with a liquid crystal shutter as disclosed in Patent Document 1, the display alternately displays, at a predetermined cycle, the left-eye image indicated by the left-eye image signal and the right-eye image indicated by the right-eye image signal input from the playback device.
- the display also controls the glasses with the liquid crystal shutter so that the left-eye shutter of the glasses opens while the left-eye image is displayed and the right-eye shutter opens while the right-eye image is displayed.
- an image conversion apparatus that converts an existing 2D image into a 3D image has been proposed. Thereby, the user can view an existing 2D image as a 3D image.
- the 3D image provided by the conventional image conversion apparatus is an image that the user recognizes as if, for example, the entire image protrudes from the display surface of the display device toward the user.
- as a result, due to human visual characteristics, the display surface of the display device is perceived as small.
- An object of the present invention is to provide an image conversion apparatus capable of generating a 3D image from which a user can visually recognize a sufficient stereoscopic effect from a 2D image.
- An image conversion apparatus according to the present invention converts non-stereoscopic image data into stereoscopic image data composed of left-eye image data and right-eye image data, and includes an input unit that inputs a non-stereoscopic image.
- when the stereoscopic image composed of the left-eye image and the right-eye image is displayed, a predetermined portion in the horizontal direction of the displayed stereoscopic image exists at the position farthest from the user in the direction perpendicular to the display surface of the display device.
- the apparatus generates the left-eye image data and the right-eye image data so that the user visually recognizes portions other than the predetermined portion as being closer to the user toward the left and right ends of the stereoscopic image.
- Another image conversion apparatus according to the present invention likewise converts non-stereoscopic image data into stereoscopic image data composed of left-eye image data and right-eye image data, and includes an input unit that inputs a non-stereoscopic image.
- when the stereoscopic image composed of the left-eye image and the right-eye image is displayed, the entire displayed stereoscopic image exists farther from the user than the display surface in the direction perpendicular to the display surface of the display device.
- the apparatus generates the left-eye image data and the right-eye image data so that the user visually recognizes a predetermined horizontal portion of the display area of the display device as being at the closest position, and portions other than the predetermined portion as being farther from the user toward the left and right ends of the stereoscopic image.
- A further image conversion apparatus according to the present invention processes stereoscopic image data, and includes an input unit that inputs the stereoscopic image data.
- it also includes a conversion unit that generates and outputs left-eye image data and right-eye image data by assigning different movement amounts to the left-eye image and the right-eye image of the stereoscopic image.
- when the differences between the movement amounts given to the left-eye image data and the right-eye image data generated from the same stereoscopic image data are compared, the conversion unit generates the left-eye image data and the right-eye image data so that the difference in movement amount at a first pixel position of the stereoscopic image differs from the difference in movement amount at a second pixel position different from the first pixel position.
- according to the present invention, when a stereoscopic image composed of a left-eye image and a right-eye image is displayed on a display device capable of displaying a stereoscopic image, the left-eye image data and the right-eye image data are generated so that the user visually recognizes a predetermined horizontal portion of the displayed stereoscopic image as existing at the position farthest from the user in the direction perpendicular to the display surface of the display device, and portions other than the predetermined portion as being closer to the user toward the left and right ends of the stereoscopic image.
- therefore, from a 2D image, a stereoscopic image (3D image) can be generated in which, in accordance with human visual characteristics, the user can feel sufficient depth and spread and the display surface of the display device is perceived as large.
- alternatively, when a stereoscopic image composed of a left-eye image and a right-eye image is displayed on a display device capable of displaying a stereoscopic image, the left-eye image data and the right-eye image data are generated so that the user visually recognizes the entire displayed stereoscopic image as existing farther from the user than the display surface, with a predetermined horizontal portion of the display area of the display device at the closest position and portions other than that portion farther from the user toward the left and right ends of the 3D image.
- in this case as well, a stereoscopic image (3D image) that makes the display surface of the display device feel large can be generated.
- furthermore, when the differences between the movement amounts given to the left-eye image data and the right-eye image data generated from the same stereoscopic image data are compared, the left-eye image data and the right-eye image data are generated so that the difference in movement amount at the first pixel position of the stereoscopic image differs from the difference in movement amount at the second pixel position different from the first pixel position.
- FIG. 1 is a configuration diagram of a playback apparatus according to the first embodiment.
- A configuration diagram of a signal processing unit according to the first embodiment.
- A diagram for explaining the parallax amounts of the left-eye image and the right-eye image in the first embodiment.
- A diagram of the left-eye image and the right-eye image produced in the first embodiment.
- Embodiment 1: 1. Configuration; 1.1 Three-Dimensional Stereoscopic Image Reproduction/Display System
- FIG. 1 shows the configuration of a three-dimensional stereoscopic image reproduction/display system.
- This three-dimensional stereoscopic image playback / display system includes a playback device 101, a display device 102, and 3D glasses 103.
- the playback device 101 plays back a 3D stereoscopic image signal from the data recorded on the disc and outputs it to the display device 102.
- the display device 102 displays a 3D image. Specifically, the display device 102 alternately displays an image for the left eye (hereinafter referred to as “L image”) and an image for the right eye (hereinafter referred to as “R image”).
- the display device 102 sends an image synchronization signal to the 3D glasses 103 wirelessly, for example by infrared.
- the 3D glasses 103 include a liquid crystal shutter in each of the left-eye lens unit and the right-eye lens unit, and alternately open and close the left and right liquid crystal shutters based on an image synchronization signal from the display device 102. Specifically, when the display device 102 displays an L image, the left-eye liquid crystal shutter is opened and the right-eye liquid crystal shutter is closed. When the display device 102 displays an R image, the right-eye liquid crystal shutter is opened and the left-eye liquid crystal shutter is closed. With such a configuration, only the L image reaches the left eye of the user wearing the 3D glasses 103 and only the R image reaches the right eye, thereby allowing the user to visually recognize the 3D image.
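The alternation described above can be sketched as a simple frame-sequential loop. This is a hypothetical illustration of the control flow only; `show` and `set_shutters` are assumed placeholder callbacks, not APIs from the patent.

```python
# Hypothetical sketch of frame-sequential 3D display with shutter glasses.
# show() / set_shutters() are placeholder stubs, not real device APIs.

def play_frame_sequential(l_images, r_images, show, set_shutters):
    """Alternate L and R images; open only the matching eye's shutter."""
    for l_img, r_img in zip(l_images, r_images):
        set_shutters(left_open=True, right_open=False)   # sync: L frame next
        show(l_img)                                      # only the left eye sees this
        set_shutters(left_open=False, right_open=True)   # sync: R frame next
        show(r_img)                                      # only the right eye sees this

# Minimal usage with recording stubs:
events = []
play_frame_sequential(
    ["L0", "L1"], ["R0", "R1"],
    show=lambda img: events.append(img),
    set_shutters=lambda left_open, right_open: events.append(
        ("shutter", left_open, right_open)),
)
```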
- the playback device 101 can also convert a 2D image into a 3D image and output it. Details of the conversion of a 2D image into a 3D image will be described later.
- FIG. 2 shows the configuration of the playback device 101.
- the playback apparatus 101 includes a disk playback unit 202, a signal processing unit 203, a memory 204, a remote control reception unit 205, an output unit 206, and a program storage memory 207.
- the remote control receiving unit 205 receives a playback start / stop instruction from the user, a 3D image pop-out amount correction instruction, an instruction to convert a 2D image into a 3D image, and the like.
- the pop-out amount of the 3D image is the amount by which the user viewing the 3D image perceives the 3D image as protruding from the display surface toward the user in the direction perpendicular to the display surface of the display device 102.
- the 3D image pop-out amount correction instruction includes a pop-out amount instruction for the entire 3D image and a pop-out amount instruction for a part of the 3D image. With this instruction, the pop-out amount can be changed for each portion of the 3D image.
- the disc playback unit 202 plays back the disc 201 in which data such as images (video) such as 2D images and 3D images, audio (audio), graphics (characters, menu images, etc.), etc. are recorded. Specifically, the disc playback unit 202 reads these data and outputs a data stream.
- the signal processing unit 203 decodes data such as images, audio, and graphics included in the data stream output from the disc playback unit 202 and temporarily stores the data in the memory 204. Further, the signal processing unit 203 generates a device main body GUI from data stored in the program storage memory 207 as necessary, and temporarily stores the GUI in the memory 204. The data such as images, audio, graphics, and the device main body GUI stored in the memory 204 is subjected to predetermined processing by the signal processing unit 203, the pop-out amount is adjusted, and the result is output from the output unit 206 in a 3D format.
- when 2D content is input, the signal processing unit 203 can convert it into 3D content composed of 3D images and output it. Details of this conversion processing will be described later.
- FIG. 3 shows the configuration of the signal processing unit 203.
- the signal processing unit 203 includes a stream separation unit 301, an audio decoder 302, a video decoder 303, a graphics decoder 304, a CPU 305, and a video signal processing unit 306.
- when the CPU 305 receives a playback start instruction from the user via the remote control receiving unit 205, the CPU 305 causes the disc playback unit 202 to play the disc 201.
- the stream separation unit 301 separates the image (video), audio, graphics, additional data including ID data, and the like included in the data stream that the disc playback unit 202 outputs from the disc 201.
- the audio decoder 302 decodes the audio data read from the disc 201 and transfers it to the memory 204.
- the video decoder 303 decodes the video data read from the disk 201 and transfers it to the memory 204.
- the graphics decoder 304 decodes the graphics data read from the disk 201 and transfers it to the memory 204.
- the CPU 305 reads the GUI data of the device main body from the program storage memory 207, generates the GUI, and transfers it to the memory 204.
- the video signal processing unit 306 generates an L image and an R image using the various types of data according to the determination by the CPU 305, and outputs the L image and the R image in a 3D image format.
- next, the operation in which the signal processing unit 203 converts a 2D image of 2D content into a 3D image and outputs it will be described.
- a stream including video data is input to the stream separation unit 301.
- the stream separation unit 301 outputs 2D image video data to the video decoder 303.
- the video decoder 303 decodes the video data of the 2D image and transfers it to the memory 204.
- the video signal output from the video decoder 303 is a 2D video signal.
- the memory 204 records the video signal.
- when the remote control receiving unit 205 receives an instruction to convert a 2D image into a 3D image, the CPU 305 instructs the memory 204 and the video signal processing unit 306 to convert the 2D image into a 3D image and output it. To generate a 3D image, the memory 204 then outputs the video signal indicating the same 2D image two frames at a time, one frame for generating the L image and one for generating the R image of the 3D image. Based on the instruction from the CPU 305, the video signal processing unit 306 performs different processing on the two frames of image signals indicating the same 2D image output from the memory 204, generates image signals indicating the L image and the R image constituting the 3D image, and outputs the generated image signals to the output unit 206.
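The two-frames-at-a-time output from the memory can be sketched as a generator. This is a hypothetical illustration of the data flow only, not code from the patent; the `"L"`/`"R"` tags are assumed labels.

```python
# Sketch (assumed behavior) of the 2D-to-3D path: the memory emits each
# decoded 2D frame twice, once for L-image generation, once for R-image
# generation, so the downstream processing can differ per eye.

def duplicate_for_3d(frames):
    """Yield (frame, target) pairs: each 2D frame twice, L copy then R copy."""
    for frame in frames:
        yield frame, "L"   # first copy: processed into the left-eye image
        yield frame, "R"   # second copy: processed into the right-eye image

# Usage with two dummy frames:
out = list(duplicate_for_3d(["f0", "f1"]))
```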
- FIG. 4 is a diagram showing the timing at which a video signal is input from the video decoder 303 to the memory 204 and the timing at which the video signal is output from the memory 204.
- FIG. 4A shows the case where the image indicated by the input video signal is a 3D image.
- FIG. 4B shows the case where the image indicated by the input video signal is a 2D image that is converted into a 3D image and output.
- the horizontal direction in FIGS. 4A and 4B shows the passage of time.
- hereinafter, an image indicated by a video signal input to the memory 204 is simply referred to as an "image input to the memory 204" or a "memory input image", and an image indicated by a video signal output from the memory 204 is simply referred to as an "image output from the memory 204" or a "memory output image".
- similarly, the graphics indicated by a graphics signal input to the memory 204 are simply referred to as "graphics input to the memory 204" or "memory input graphics".
- the graphics indicated by a graphics signal output from the memory 204 are simply referred to as "graphics output from the memory 204" or "memory output graphics".
- in the case of a 3D image, the L image and the R image constituting the 3D image are input alternately, and after a certain time has elapsed from the input, they are output alternately to the video signal processing unit 306.
- the video signal processing unit 306 can change the 3D effect by performing processing on the input L image and R image.
- in the case of a 2D image, the same 2D image is output twice, once as an L image generation image and once as an R image generation image, and input to the video signal processing unit 306.
- the video signal processing unit 306 generates an L image and an R image constituting a 3D image by performing different image processing on the L image generation image and the R image generation image.
- FIG. 5 shows an example of processing performed by the video signal processing unit 306 on the input 2D image when a 2D image is input to the video signal processing unit 306.
- FIG. 5A is a diagram showing the relationship between the horizontal pixel position of the 2D image input to the video signal processing unit 306 and the horizontal enlargement ratio (input-output horizontal enlargement ratio) applied to the input image.
- FIG. 5B shows the relationship between the horizontal pixel position of the 2D image input to the video signal processing unit 306 and the horizontal pixel position of the 3D image (L image and R image) output from the video signal processing unit 306.
- FIG. 5C is a diagram illustrating the relationship between the horizontal pixel position of the 3D image (L image and R image) and the amount of parallax between the L image and the R image.
- the input-output horizontal enlargement ratio for generating the L image is set so as to increase with a constant slope as the value of the input horizontal pixel position increases.
- specifically, the horizontal enlargement ratio for the L image is set to 0.948 at the left end in the horizontal direction (the position of the virtual 0th pixel adjacent to the left of the 1st pixel; hereinafter referred to as the "0th pixel"), 1.0 at the middle 960th pixel, and 1.052 at the 1920th pixel at the right end in the horizontal direction, increasing with a constant slope. With this setting, the average enlargement ratio from the 0th pixel to the 1920th pixel is 1.0.
- the horizontal enlargement ratio for the R image is set so as to decrease with a constant inclination as the input horizontal pixel position increases, contrary to the horizontal enlargement ratio for the L image.
- specifically, the horizontal enlargement ratio for the R image is set to 1.052 at the 0th pixel, 1.0 at the 960th pixel in the middle in the horizontal direction, and 0.948 at the 1920th pixel at the right end in the horizontal direction, decreasing with a constant slope.
- the average enlargement ratio from the 0th pixel to the 1920th pixel is 1.0.
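The two ratio ramps above can be written directly as linear functions. This is a sketch: the endpoint values and the 1920-pixel width are from the text, and the linear interpolation form is the stated "constant slope".

```python
# The two horizontal enlargement-ratio ramps described in the text,
# modeled as linear functions of the input horizontal pixel position.

WIDTH = 1920  # horizontal resolution of the input image

def ratio_l(x):
    """L-image horizontal enlargement ratio: 0.948 at pixel 0,
    rising with constant slope to 1.052 at pixel 1920."""
    return 0.948 + (1.052 - 0.948) * x / WIDTH

def ratio_r(x):
    """R-image ratio: the mirrored ramp, 1.052 falling to 0.948."""
    return 1.052 - (1.052 - 0.948) * x / WIDTH

# Both ramps are linear and symmetric about the 960th pixel, so the
# average ratio across the whole width is 1.0 and the image width is kept.
```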
- by applying these enlargement ratios, the position of each pixel of the input image is converted (moved) in the output image to the position indicated by the output horizontal pixel position in FIG. 5B.
- as described for FIG. 5A, the horizontal enlargement ratio for the L image is set to 0.948 at the 0th pixel at the left end in the horizontal direction and 1.052 at the 1920th pixel at the right end, increasing with a constant slope. Therefore, as shown in FIG. 5B, the value indicating the output horizontal pixel position is smaller than the value indicating the corresponding input horizontal pixel position. For example, when the input horizontal pixel position is 200, the output horizontal pixel position is 191; when the input horizontal pixel position is 960, the output horizontal pixel position is 935. That is, the output horizontal pixel position is shifted to the left with respect to the input horizontal pixel position.
- the amount of this leftward deviation is zero at the 0th pixel at the left end of the input image. Between the left end and the horizontal center of the input image (the 960th pixel, where the horizontal enlargement ratio is 1), the horizontal enlargement ratio is smaller than 1, so the deviation grows toward the center and becomes maximum at the horizontal center of the input image. Between this center position and the right end (the 1920th pixel) of the input image, the horizontal enlargement ratio is larger than 1, so the deviation shrinks toward the right end and becomes zero at the right end (the 1920th pixel) of the input image.
- the horizontal enlargement ratio for the R image is set opposite to that for the L image, as described with reference to FIG. 5A: 1.052 at the 0th pixel at the left end in the horizontal direction and 0.948 at the 1920th pixel at the right end, decreasing with a constant slope. Therefore, as shown in FIG. 5B, the value indicating the output horizontal pixel position is larger than the value indicating the corresponding input horizontal pixel position. For example, when the input horizontal pixel position is 200, the output horizontal pixel position is 209, and when the input horizontal pixel position is 960, the output horizontal pixel position is 985.
- that is, the output horizontal pixel position is shifted to the right with respect to the input horizontal pixel position.
- the amount of this rightward deviation is zero at the 0th pixel at the left end of the input image. Between the left end and the horizontal center of the input image (the 960th pixel, where the horizontal enlargement ratio is 1), the horizontal enlargement ratio is larger than 1, so the deviation grows toward the center and becomes maximum at the horizontal center of the input image. Between this position and the right end (the 1920th pixel) of the input image, the horizontal enlargement ratio is smaller than 1, so the deviation shrinks toward the right end and becomes zero at the right end (the 1920th pixel) of the input image.
- FIG. 5C shows the difference between the output horizontal pixel position of the L image and the output horizontal pixel position of the R image, that is, the amount of parallax.
- the horizontal axis indicates the input horizontal pixel position
- the vertical axis indicates the amount of parallax.
- in the 3D image generated from the L image and the R image having this varying amount of parallax, the central portion in the horizontal direction is recognized as existing at the position farthest from the user in the direction perpendicular to the display surface of the display device 102.
- hereinafter, the direction away from the user is referred to as "backward" or "far", and the opposite direction as "front", as appropriate.
- the amount of parallax between the L image and the R image in this 3D image is 0 at both ends in the horizontal direction (the 0th pixel and the 1920th pixel) and maximum at the center in the horizontal direction. That is, the input 2D image is converted into an L image and an R image that produce a curved 3D image recognized by the user such that the center in the horizontal direction is located behind both ends in the horizontal direction.
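Since the ratio varies linearly, the output position of input pixel x can be modeled as the running integral of the ratio from 0 to x. This integral reading of FIG. 5B is an interpretation (the patent only states sampled values), but it reproduces every example position in the text.

```python
# Output pixel positions modeled as the integral of the linearly varying
# enlargement ratio, plus the resulting parallax (R position minus L position).

WIDTH = 1920
SWING = 1.052 - 0.948  # total change of the ratio across the width

def out_pos_l(x):
    """L-image output position: integral of 0.948 + SWING*t/WIDTH over [0, x]."""
    return 0.948 * x + 0.5 * SWING * x * x / WIDTH

def out_pos_r(x):
    """R-image output position: integral of the mirrored ramp over [0, x]."""
    return 1.052 * x - 0.5 * SWING * x * x / WIDTH

def parallax(x):
    """Parallax amount: R output position minus L output position."""
    return out_pos_r(x) - out_pos_l(x)

# Matches the text: input 200 -> L 191 / R 209; input 960 -> L 935 / R 985.
# Parallax is 0 at pixels 0 and 1920 and peaks (about 50 pixels) at 960.
```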
- FIG. 6 is a diagram illustrating a state in which an input 2D image is converted into a 3D image based on the characteristics illustrated in FIG. 5.
- FIG. 6A shows an example of a horizontal 1920 pixel 2D image input to the video signal processing unit 306.
- FIG. 6B shows the L image and FIG. 6C shows the R image obtained when processing based on the characteristics shown in FIG. 5 is performed.
- for example, the 200th pixel in the input 2D image is moved to the 191st pixel in the L image and to the 209th pixel in the R image.
- as a result, a parallax of 18 pixels is generated between the R image and the L image.
- the 960th pixel located near the center in the horizontal direction in the input 2D image is moved to the 935th pixel in the L image and to the 985th pixel in the R image.
- as a result, a parallax of 50 pixels is generated between the R image and the L image.
- the amount of parallax near the center in the horizontal direction is larger than the amount of parallax near both ends in the horizontal direction.
- therefore, the 3D image that the user visually recognizes through the 3D glasses 103 is a curved image in which the depth-direction positions at both ends in the horizontal direction substantially coincide with the display surface of the display device 102, and the central portion in the horizontal direction is recognized by the user as curving behind both end portions.
- in this way, the amount of pixel movement is changed according to the input horizontal pixel position.
- that is, the L image data and the R image data are generated so that the user visually recognizes the horizontal central portion of the displayed 3D image as existing at the position farthest from the user in the direction perpendicular to the display surface of the display device 102, and the portions other than the central portion as being closer to the user toward the left and right ends of the stereoscopic image. Note that the amount of parallax may be changed stepwise instead of continuously.
- the CPU 305 may adjust the pop-out amount of the 3D image generated by the video signal processing unit 306 in response to the pop-out amount correction instruction and the 2D-to-3D conversion instruction received by the remote control receiving unit 205.
- when the pop-out amount is adjusted, the characteristics shown in FIG. 5B change. Specifically, when the pop-out amount is adjusted so that the image is recognized by the user as existing nearer than in the case of 3D display based on the characteristics shown in FIG. 5B, the conversion curve of the L image translates upward in the vertical axis direction and the conversion curve of the R image translates downward. In this case, the characteristic of FIG. 5C translates downward along the vertical axis.
- conversely, when the pop-out amount is adjusted so that the image is recognized as existing farther back, the conversion curve of the L image translates downward in the vertical axis direction and the conversion curve of the R image translates upward. In this case, the graph of FIG. 5C translates upward along the vertical axis.
- as a method of adjusting the amount of protrusion of a part of the image, there is a method of adjusting the horizontal enlargement ratio while maintaining the average value of the horizontal enlargement ratio in FIG. 5A. For example, when the swing of the horizontal enlargement ratio is increased, the absolute value of the slope of the straight lines R and L in FIG. 5A increases. In this case, the difference between the output horizontal pixel positions of the R and L curves in FIG. 5B increases, and the maximum value of the parallax amount indicated by the curve in FIG. 5C increases. The 3D image then appears deeper than before the adjustment. Conversely, to make it shallower, the absolute value of the slope of the straight lines R and L in FIG. 5A is decreased. In this case, the difference between the output horizontal pixel positions of the R and L curves in FIG. 5B decreases.
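The slope-based depth adjustment above can be sketched by scaling the ramp slope with a single gain while keeping the average ratio at 1.0. The gain knob is a hypothetical parameterization, not something named in the patent; the parallax formula follows from the same integral model as before.

```python
# Depth adjustment sketch: the parallax curve for symmetric L/R ratio ramps
# of slope +/-slope around 1.0 is slope * (WIDTH*x - x*x) in the integral
# model, so scaling the slope scales the whole depth curve.

WIDTH = 1920
BASE_SLOPE = (1.052 - 0.948) / WIDTH  # ramp slope in the base setting

def parallax_curve(x, slope):
    """Parallax at pixel x for ramps of slope +/-`slope` around 1.0."""
    return slope * (WIDTH * x - x * x)

def max_parallax(slope):
    """Maximum parallax, reached at the horizontal centre of the image."""
    return parallax_curve(WIDTH / 2, slope)

# The base setting gives roughly 50 pixels of parallax at the centre;
# a steeper ramp deepens the curved image, a shallower ramp flattens it.
deeper = max_parallax(1.5 * BASE_SLOPE)
flatter = max_parallax(0.5 * BASE_SLOPE)
```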
- the pop-out amount can be adjusted according to the instruction received by remote control receiving unit 205.
- as described above, the playback apparatus 101 includes the stream separation unit 301 that inputs a 2D image, and the video signal processing unit 306 that generates and outputs L image data and R image data based on the 2D image data input from the stream separation unit 301.
- the video signal processing unit 306 generates the L image data and the R image data so that the user recognizes the central portion in the horizontal direction of the displayed 3D image as existing at the position farthest from the user in the direction perpendicular to the display surface of the display device 102, and the portions other than the central portion as being closer to the user toward the left and right ends of the stereoscopic image.
- furthermore, the video signal processing unit 306 generates the L image data and the R image data so that the user recognizes the entire 3D image displayed on the display device 102 as existing at a position farther from the user than the display surface in the direction perpendicular to the display surface of the display device 102.
- With the stereoscopic image realized by such a configuration, human visual characteristics make the user feel depth and breadth more strongly.
- In the above, the configuration makes the user perceive the substantially central portion in the horizontal direction as lying at the farthest position; however, a portion other than the central portion may instead be made to appear at the farthest position. In this case, the same effect can be obtained.
- The position at which the user perceives the 3D image in the direction perpendicular to the display surface of the display device 102 may be made adjustable with a remote controller.
- In this case, a signal from the remote controller is received by the remote control receiving unit 205 and processed by the signal processing unit 203. With such a configuration, a 3D image matching the user's preference can be generated.
- Embodiment 2 In Embodiment 1, the L image data and the R image data are generated so that the user perceives the horizontal center portion of the displayed 3D image as lying farthest from the user (deepest), and perceives the portions other than the center portion as lying closer to the user (in front) toward the left and right ends.
- In Embodiment 2, by contrast, the image is displayed so that the user perceives the entire displayed 3D image as lying farther than the display surface of the display device 102 as viewed from the user, with the horizontal center portion of the display area of the display device 102 lying nearest and the image lying farther from the user toward the left and right ends of the center portion.
- the configuration of the playback apparatus 101 is the same as that of the first embodiment.
- the configuration of the second embodiment will be described in detail.
- FIG. 7 shows an example of processing performed by the video signal processing unit 306 on the input 2D image when a 2D image is input to the video signal processing unit 306.
- FIG. 7A is a diagram showing the relationship between the horizontal pixel position of the 2D image input to the video signal processing unit 306 and the horizontal enlargement ratio (input-output horizontal enlargement ratio) applied to the input image.
- FIG. 7B shows the relationship between the horizontal pixel position of the 2D image input to the video signal processing unit 306 and the horizontal pixel position of the 3D image (L image and R image) output from the video signal processing unit 306.
- FIG. 7C is a diagram illustrating the relationship between the horizontal pixel position of the 3D image (L image and R image) and the output gain.
- FIG. 7D is a diagram illustrating the relationship between the horizontal pixel position of the 3D image (L image and R image) and the amount of parallax between the L image and the R image.
- The region where the horizontal enlargement ratio is changed in FIG. 7A is limited to the region near the center in the horizontal direction of the input image; within this region, the horizontal enlargement ratio for L image generation decreases from 1.026 to 0.974, while the horizontal enlargement ratio for R image generation increases from 0.974 to 1.026.
- By this processing, the input 2D image is converted into a 3D image in which the user perceives the entire displayed image as lying behind the display surface of the display device 102, with a predetermined horizontal portion of the display area of the display device 102 lying foremost and the image receding stepwise or continuously from the predetermined portion toward the left and right ends.
- Furthermore, based on the horizontal enlargement ratio shown in FIG. 7, the L image generated from the 2D image is shifted to the left and the R image is shifted to the right.
- Specifically, the first pixel of the input 2D image is converted to the -19th pixel position in the L image, and the 1920th pixel is output as the 1900th pixel.
- the first pixel of the input 2D image is output as the 21st pixel in the R image, and the 1920th pixel is output as the 1940th pixel.
- However, the video signal processing unit 306 outputs only the first to 1920th pixels in the final output. Therefore, part of the L image is missing at the left end of the display surface (screen) of the display device 102, and part of the R image is missing at the right end of the screen.
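The shift-and-crop behaviour described above (a constant ±20-pixel shift at the screen edges, with out-of-range pixels lost) can be sketched as follows; `remap_row` is a hypothetical helper, not code from the source:

```python
def remap_row(row, shift):
    """Shift one row of pixels horizontally; positions falling outside
    the display width are clipped, so their content is lost (left at 0)."""
    width = len(row)
    out = [0] * width
    for x, value in enumerate(row):
        nx = x + shift
        if 0 <= nx < width:
            out[nx] = value
    return out

row = list(range(1, 1921))      # a 1920-pixel input row, values 1..1920
left = remap_row(row, -20)      # L image: shifted 20 px to the left
right = remap_row(row, +20)     # R image: shifted 20 px to the right
# The first 20 px of the L image and the last 20 px of the R image are
# lost at the screen edges, as described above.
```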
- the amplitude is corrected when the L image and the R image are output.
- The horizontal axis of FIG. 7(c) indicates the horizontal pixel position of the 3D image (output image), and the vertical axis indicates the gain applied to the amplitude of the output image.
- This gain is set to 1 in the intermediate portion (50th pixel to 1870th pixel) excluding the vicinity of both ends in the horizontal direction, is set to 0 at both ends, and changes between the intermediate portion and both ends with a predetermined inclination.
- the number of pixels from both ends to the middle part is set to a value larger than the maximum parallax amount.
- As shown in FIG. 7(d), the parallax amount between the L image and the R image is a constant value at both horizontal ends and decreases below this value toward the middle. Specifically, the parallax amount is set so that the image appears recessed behind the display surface at both horizontal ends and, inward of both ends, appears to protrude toward the viewer relative to the ends.
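A minimal sketch of the gain profile just described: 0 at the screen edges, 1 between roughly the 50th and 1870th pixels, with linear ramps in between. The function name and the exact ramp width are illustrative assumptions:

```python
def edge_gain(x, width=1920, ramp=49):
    """Output-amplitude gain at 1-indexed horizontal pixel x: 0 at the
    screen edges, 1 in the middle portion, linear ramps near the ends.
    The ramp width is wider than the maximum parallax amount, as the
    text above requires."""
    d = min(x - 1, width - x)   # distance to the nearer screen edge
    return min(1.0, d / ramp)
```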
- FIG. 8 is a diagram illustrating how the input image is converted into a 3D image based on the characteristics illustrated in FIG. 7.
- FIG. 8A shows an example of a horizontal 1920 pixel 2D image input to the video signal processing unit 306.
- the input 2D image has 1920 pixels in the horizontal direction.
- FIG. 8B shows the L image, and FIG. 8C the R image, obtained when processing based on the characteristics shown in FIG. 7 is applied.
- the 200th pixel in the input 2D image is moved to the 180th pixel in the L image and the 220th pixel in the R image.
- a parallax of 40 pixels occurs between the R image and the L image.
- The 960th pixel, located near the horizontal center of the input 2D image, is moved to the 946th pixel in the L image and to the 974th pixel in the R image, so a parallax of 28 pixels is generated.
- Near the horizontal center, the absolute value of the parallax amount is smaller than at both horizontal ends. Therefore, as shown in FIG. 8(d), the user viewing through the 3D glasses 103 perceives a 3D image in which the entire displayed image lies behind the display surface of the display device 102, with the horizontal center portion of the display area lying foremost and the image receding stepwise or continuously from the center toward the left and right ends.
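The FIG. 7(a)/(b) relationship used in this embodiment — the output-position curve whose local slope is the local enlargement ratio — can be sketched as a running sum of per-pixel ratios; `output_positions` is a hypothetical helper under that assumption:

```python
def output_positions(scales, start):
    """Integrate per-pixel horizontal enlargement ratios into output
    positions: scales[i] is the ratio applied between input pixels
    i and i+1, and `start` is the output position of the first input
    pixel. The slope of the resulting curve is the local ratio."""
    pos = [float(start)]
    for s in scales[:-1]:
        pos.append(pos[-1] + s)
    return pos

# With a uniform ratio of 1.0 and the first pixel placed at -19, the
# mapping reproduces the constant-shift behaviour of the L image at
# the screen edges: input pixel 1 -> -19, input pixel 1920 -> 1900.
l_pos = output_positions([1.0] * 1920, -19)
```

Where the ratio deviates from 1 near the center (1.026 falling to 0.974 for the L image), the accumulated positions drift away from the constant shift there, which is what produces the smaller central parallax described above.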
- As described above, the playback apparatus 101 includes the stream separation unit 301 to which an image stream is input, and the video signal processing unit 306 that generates and outputs L image data and R image data based on the 2D image data supplied from the stream separation unit 301.
- When a 3D image composed of the L image and the R image is displayed on the display device 102 capable of displaying 3D images, the video signal processing unit 306 generates the L image data and the R image data so that the user perceives the entire displayed 3D image as lying farther than the display surface of the display device 102 in the direction perpendicular to that surface, with the horizontal center portion of the display area lying nearest and the image lying farther from the user toward the left and right ends.
- Thereby, owing to human visual characteristics, the user can feel sufficient depth and breadth together with a sense of pop-out at the center portion, and 3D images that make the display surface of the display device 102 feel large can be generated.
- Furthermore, the video signal processing unit 306 reduces the image amplitude at the ends of the L image and the R image. This reduces the sense of incongruity caused by the loss of part of the L image or the R image at both horizontal ends of the 3D image. This technical idea, and the related technical ideas of the other embodiments described later, are also applicable to Embodiment 1.
- Embodiment 3 In Embodiment 3, a 2D image is converted into a 3D image based on the same characteristics as in Embodiment 1, and graphics data converted to 3D based on the same characteristics as in Embodiment 1 is superimposed and displayed.
- the configuration of the playback apparatus 101 is the same as that of the first embodiment.
- FIG. 9 shows the timing at which images and graphics are input to the memory 204 and the timing at which images and graphics are output from the memory 204.
- FIG. 9 shows a case where an input image is a 2D image and is converted into a 3D image and output. The horizontal direction in FIG. 9 shows the passage of time.
- the memory input image indicates an image input to the memory 204.
- the memory output image indicates an image output from the memory 204.
- the memory input graphics indicates graphics data such as caption data input to the memory 204.
- Memory output graphics indicates output graphics data output from the memory 204.
- That is, the same 2D image and the same graphics data are each output twice, once as an image and graphics data for L image generation and once as an image and graphics data for R image generation, and are input to the video signal processing unit 306.
- the video signal processing unit 306 generates the L image and the R image constituting the 3D image by making the processing content for the L image generation image different from the processing content for the R image generation image.
- The difference from Embodiments 1 and 2 is that the video signal processing unit 306 processes not only the video signal but also the graphics signal. Furthermore, since the video signal and the graphics signal can be processed independently, the positional relationship between the generated 3D image and the graphics can be varied.
- the video signal processing unit 306 performs the same signal processing as in the first embodiment on 3D images and graphics.
- FIG. 10 shows how images and graphics are converted by this processing. Note that the same signal processing as in the first embodiment may be performed after the image and the graphics are combined.
- FIG. 10A shows an example of an image obtained by combining graphics with a horizontal 1920-pixel 2D image input to the video signal processing unit 306.
- FIG. 10B shows the L image, and FIG. 10C the R image, obtained when the image and graphics are combined after the processing of FIG. 5 is applied to this image.
- For example, the 200th pixel in the input 2D image is moved to the 191st pixel in the L image and to the 209th pixel in the R image, so a parallax of 18 pixels occurs between the R image and the L image.
- Similarly, the 960th pixel, located near the center in the horizontal direction of the input 2D image, is moved to the 935th pixel in the L image and to the 985th pixel in the R image, so a parallax of 50 pixels is generated between the R image and the L image.
- Near the horizontal center, the parallax amount is larger than near both horizontal ends. Therefore, as shown in FIG. 10(d), the user viewing through the 3D glasses 103 perceives an image whose horizontal ends lie substantially at the display surface of the display device 102 and whose horizontal center portion recedes in a curved shape extending from both ends.
- FIG. 11 shows an example of processing performed by the video signal processing unit 306 on the input graphics data when the graphics data is input to the video signal processing unit 306.
- FIG. 11A is a diagram showing the relationship between the horizontal pixel position of graphics input to the video signal processing unit 306 and the horizontal enlargement ratio (input-output horizontal enlargement ratio) for the input graphics.
- FIG. 11B is a diagram showing the relationship between the horizontal pixel position of the 2D graphics input to the video signal processing unit 306 and the horizontal pixel positions of the 3D graphics (L image and R image) output from the video signal processing unit 306.
- FIG. 11C is a diagram showing the relationship between the horizontal pixel position of the 3D graphics (L image and R image) and the parallax amount between the L image and the R image. Note that the characteristics shown in FIGS. 11A, 11B, and 11C are the same as those shown in FIGS. 5A, 5B, and 5C.
- For example, as shown in FIG. 11, the 300th pixel constituting the left end of the input graphics is moved to the 287th pixel in the L image and to the 313th pixel in the R image, so a parallax of 26 pixels is generated between the R image and the L image.
- Similarly, the 1620th pixel constituting the right end of the input graphics is moved to the 1607th pixel in the L image and to the 1633rd pixel in the R image, so a 26-pixel parallax likewise arises between the R image and the L image.
- Since the input graphics are processed with the same characteristics as the input 2D image, they appear to be pasted onto the curved converted image.
- Embodiment 4 In Embodiment 4, a 2D image is converted into a 3D image based on the same characteristics as in Embodiment 1, but the superimposed graphics data is converted to 3D in a manner that does not curve it as in Embodiment 3.
- the configuration of the playback apparatus 101 is the same as that of the first embodiment.
- Here, a case will be described in which the processing of graphics data by the video signal processing unit 306 in Embodiment 3 is changed from that based on the characteristics shown in FIG. 10 to that based on the characteristics shown in FIG. 12. The processing of the 2D image in the video signal processing unit 306 is still performed according to the characteristics shown in FIG. 5.
- FIG. 12A is a diagram showing the relationship between the horizontal pixel position of the graphics input to the video signal processing unit 306 and the horizontal enlargement ratio (input-output horizontal enlargement ratio) with respect to the input graphics.
- FIG. 12B illustrates a horizontal pixel position of 2D graphics input to the video signal processing unit 306 and a horizontal pixel position of 3D graphics (L image and R image) output from the video signal processing unit 306. It is a figure which shows a relationship.
- FIG. 12C is a diagram illustrating the relationship between the horizontal pixel position of 3D graphics (L image and R image) and the amount of parallax between the L image and the R image.
- As shown in FIG. 12A, the horizontal enlargement ratio is fixed at 1 for both the L image and the R image. As shown in FIG. 12B, for the L image the output horizontal pixel position is 10 pixels to the left of the input pixel position, and for the R image it is 10 pixels to the right. As a result, as shown in FIG. 12C, the parallax amount is a constant 20 pixels regardless of the horizontal pixel position.
- FIG. 13 is a diagram illustrating a state in which the input 2D image and graphics are converted into a 3D image based on the characteristics illustrated in FIGS. 5 and 12.
- FIG. 13A shows an example of an image obtained by synthesizing graphics with a horizontal 1920-pixel 2D image input to the video signal processing unit 306, and is the same as FIG. 10A.
- FIG. 13B shows the L image, and FIG. 13C the R image, obtained when the image is processed based on the characteristics shown in FIG. 5 and the graphics are processed based on the characteristics shown in FIG. 12.
- the 200th pixel in the input 2D image is moved to the 191st pixel in the L image and the 209th pixel in the R image.
- Accordingly, the 3D image that the user perceives through the 3D glasses 103 is recognized as an image whose horizontal ends lie substantially at the display surface of the display device 102 and whose horizontal center portion recedes in a curved shape behind both horizontal ends.
- On the other hand, the 300th pixel constituting the left end of the input graphics is moved to the 290th pixel in the L image and to the 310th pixel in the R image, so a 20-pixel parallax occurs between the R image and the L image.
- Similarly, the 1620th pixel constituting the right end of the input graphics is moved to the 1610th pixel in the L image and to the 1630th pixel in the R image, so the same 20-pixel parallax occurs between the R image and the L image.
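The flat graphics conversion of this embodiment, a constant ±10-pixel shift yielding a uniform 20-pixel parallax, can be sketched as follows (`graphics_lr` and the parameter names are illustrative):

```python
def graphics_lr(x, offset=10):
    """Constant horizontal offset for superimposed graphics: the L copy
    is shifted `offset` pixels left and the R copy `offset` pixels
    right, giving a uniform parallax of 2 * offset at every horizontal
    position, so the graphics appear on a flat plane in front of the
    curved image."""
    return x - offset, x + offset
```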
- As a result, the 3D image that the user perceives through the 3D glasses 103 is, as shown in FIG. 13(d), an image in which planar graphics appear to rise out of the curved image.
- As described above, with the playback device 101 of Embodiment 4, not only 2D images but also graphics data can be displayed in 3D as in Embodiment 3, so a 3D effect can be obtained for graphics data as well.
- Furthermore, the conversion is performed so that the offset amount applied to the L image data differs from that applied to the graphics data combined with the L image data, and so that the offset amount applied to the R image data differs from that applied to the graphics data combined with the R image data. Thereby, 3D effects independent of one another can be obtained for the graphics data and for the L and R image data.
- That is, Embodiment 4 obtains the same image effects as Embodiments 1 and 2 and, in addition, the effect of raising planar graphics above the curved image. Raising the graphics makes them easier to recognize.
- Embodiments 1 to 4 have been described as examples of the present invention; however, the present invention is not limited to them. Other embodiments of the present invention are described together below. The invention is also applicable to suitably modified embodiments.
- Embodiments 1 to 4 the case where the present invention is applied to a 2D image has been described.
- the present invention may be applied to a 3D image.
- the parallax amount in the 3D image can be adjusted to adjust the 3D effect such as the pop-out amount.
- In Embodiments 1 to 4, the horizontal center portion of the generated 3D image is perceived by the user as protruding foremost or as lying deepest. However, the foremost or deepest position need not be the horizontal center; it may be an arbitrary position to its left or right. For example, when a person or the like is present in the source 2D image, the position of the person may be detected and the image converted so that that position is perceived as protruding foremost.
- the horizontal enlargement ratio is changed only based on the horizontal pixel position, but the horizontal enlargement ratio may be changed in consideration of the vertical pixel position.
- the change rate of the horizontal enlargement rate at the upper part of the input image may be set to be large, and the change rate of the horizontal enlargement rate may be reduced as it goes downward.
- In this way, the user perceives the lower part of the image as relatively nearer than the upper part.
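As a sketch of this idea, a hypothetical per-row weight for the enlargement-ratio change might look like the following; the linear profile and the name `row_gain` are assumptions, not from the source:

```python
def row_gain(y, height=1080):
    """Hypothetical vertical weighting of the enlargement-ratio change:
    1.0 at the top row (strongest change) decreasing linearly to 0.0 at
    the bottom row, so the lower part of the image is perceived as
    relatively nearer than the upper part."""
    return 1.0 - y / (height - 1)
```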
- the horizontal enlargement ratio may be changed based on the state of the image. For example, in a dark scene where the human field of view becomes narrow, the parallax amount is set to be small, and in a bright scene, the parallax amount is set to be large. For example, the luminance (average value) of the entire image is obtained, and the amount of parallax is determined based on the luminance.
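A minimal sketch of this scene-adaptive idea, scaling the parallax by the average luminance of the frame; the function name and the lower/upper bounds are illustrative assumptions, not values from the source:

```python
def parallax_gain(avg_luma, lo=0.3, hi=1.0):
    """Scale factor for the parallax amount based on the average
    luminance of the whole frame, normalized to [0, 1]: dark scenes,
    where the human field of view narrows, get a smaller parallax and
    bright scenes a larger one."""
    return lo + (hi - lo) * avg_luma
```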
- In Embodiment 2, the output amplitude at both horizontal ends (left and right) of the 3D image is reduced; however, the output amplitude (gain) at both vertical ends (top and bottom) may also be reduced in addition to the horizontal ends. This reduces the uncomfortable impression of an image with parallax being cut off by the parallax-free TV frame.
- In the above embodiments, the image amplitude was reduced by changing the output gain of the image in order to lessen the uncomfortable feeling at the edges of the screen. Instead, the composition ratio (α value) with graphics (an OSD screen) may be used: where the gain in FIG. 7(c) is 1, the composition ratio is set to OSD 0% and video 100%; where the gain is 0, it is set to OSD 100% and video 0%; and elsewhere the composition ratio is changed continuously, thereby reducing the image amplitude.
- Since the luminance level of the OSD screen can be varied, setting the luminance of the OSD to, for example, the average luminance of the screen makes the fade at both horizontal ends appear light rather than black.
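The OSD cross-fade alternative described above can be sketched as follows; the names and the 50-pixel ramp are illustrative assumptions:

```python
def blend_with_osd(video, osd, x, width=1920, ramp=50):
    """Alternative to lowering the video gain: cross-fade the video
    into an OSD plane near the screen edges. alpha is the OSD share of
    the composite: 100% OSD at the very edge, 0% OSD (100% video)
    inside the ramp. Setting `osd` to the average luminance of the
    frame makes the edges fade to a light tone instead of black."""
    d = min(x - 1, width - x)           # distance to nearer edge (x is 1-indexed)
    alpha = max(0.0, 1.0 - d / ramp)    # OSD share of the composite
    return alpha * osd + (1.0 - alpha) * video
```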
- the region in which the output amplitude of the horizontal end portions (and the vertical end portions) in the 3D image is reduced may be variable according to the parallax information of the image input to the video signal processing unit 306.
- Alternatively, the region in the 3D image in which the output amplitude at both horizontal ends (and both vertical ends) is reduced may be varied according to the amount of parallax added or removed by the processing applied to the image input to the video signal processing unit 306.
- In Embodiments 1 and 2, as in Embodiment 3, the 2D image and the graphics input to the video signal processing unit 306 may be subjected to different processing and then combined. As a result, for example, the graphics can always be displayed at a fixed depth with respect to the image.
- image processing may be combined with audio processing.
- For example, as the horizontal center of the image recedes, the sound field may be converted so that it is formed toward the back. This can further enhance the effect of the image conversion.
- In Embodiment 3, the image data and the graphics data are combined after being subjected to different processing. Instead, the graphics data may first be given only the processing that constitutes the difference between the image processing and the graphics processing, then combined with the image data, and the composite image may then be given the common horizontal processing.
- the display device 102 alternately displays the left-eye image and the right-eye image, and the left and right shutters of the 3D glasses 103 are alternately switched in synchronization with the switching.
- Alternatively, the following configuration may be used: the display device 102 displays the left-eye image and the right-eye image on odd lines and even lines respectively, and polarizing films that differ between the odd and even lines are attached to the display unit.
- In this case, the 3D glasses 103 do not use liquid crystal shutters; instead, polarizing filters with different orientations are attached to the left-eye and right-eye lenses, and these filters separate the left-eye image from the right-eye image.
- the display device may be configured to alternately display the left-eye image and the right-eye image pixel by pixel in the horizontal direction, and alternately stick a polarizing film having a different polarization plane to the display unit pixel by pixel.
- any configuration may be used as long as the image data for the left eye and the right eye can reach the left eye and the right eye of the user.
- In Embodiments 1 to 4, the playback apparatus 101 plays back the data on the disk 201 with the disk playback unit 202; however, the source 2D image may be a stream input from a broadcasting station or via a network, or data recorded on a recording medium such as a Blu-ray disc, a DVD, a memory card, or a USB memory.
- In Embodiments 1 to 4, the conversion of moving images has been described as an example, but the present invention is not limited to this and can also be applied to still images such as JPEG images.
- In Embodiments 1 to 4, the conversion of the 2D image into the 3D image is performed by the signal processing unit 203 of the playback device 101; alternatively, the display device 102 may be provided with means having the same conversion function, and the conversion may be performed on the display device 102 side.
- the present invention can be applied to an image conversion apparatus that converts a 2D image into a 3D image.
- The present invention is particularly applicable to 3D-image-compatible devices such as 3D Blu-ray disc players and recorders, 3D DVD players and recorders, 3D broadcast receivers, 3D TVs, 3D image display terminals, 3D mobile phone terminals, 3D car navigation systems, 3D digital still cameras, 3D digital movie cameras, 3D network players, 3D-compatible computers, and 3D-compatible game players.
Abstract
Description
1. Configuration
1.1. 3D Stereoscopic Image Playback and Display System
FIG. 1 shows the configuration of a 3D stereoscopic image playback and display system. This system comprises a playback device 101, a display device 102, and 3D glasses 103. The playback device 101 reproduces a 3D stereoscopic image signal from data recorded on a disc and outputs it to the display device 102. The display device 102 displays the 3D image. Specifically, the display device 102 alternately displays an image for the left eye (hereinafter, "L image") and an image for the right eye (hereinafter, "R image"). The display device 102 sends an image synchronization signal to the 3D glasses 103 wirelessly, for example by infrared. The 3D glasses 103 have liquid crystal shutters in the left-eye and right-eye lens portions, and open and close the left and right shutters alternately based on the image synchronization signal from the display device 102. Specifically, when the display device 102 displays an L image, the left-eye shutter opens and the right-eye shutter closes; when it displays an R image, the right-eye shutter opens and the left-eye shutter closes. With this configuration, only the L image reaches the left eye of a user wearing the 3D glasses 103 and only the R image reaches the right eye, allowing the user to perceive a 3D image.
FIG. 2 shows the configuration of the playback device 101. The playback device 101 has a disc playback unit 202, a signal processing unit 203, a memory 204, a remote control receiving unit 205, an output unit 206, and a program storage memory 207. The remote control receiving unit 205 accepts user instructions such as playback start and stop, correction of the pop-out amount of the 3D image, and conversion of a 2D image into a 3D image. Here, the pop-out amount of a 3D image is a quantity (which may be positive or negative) indicating the degree to which the user perceives the 3D image as protruding from the display surface of the display device 102 toward the user in the direction perpendicular to that surface. The pop-out correction instruction includes an instruction for the pop-out amount of the entire 3D image and an instruction for the pop-out amount of a part of it; the pop-out amount can also be changed for each part of the 3D image. The disc playback unit 202 plays back a disc 201 on which data such as images (video) including 2D and 3D images, audio, and graphics (characters, menu images, and the like) are recorded. Specifically, the disc playback unit 202 reads these data and outputs a data stream. The signal processing unit 203 decodes the image, audio, graphics, and other data contained in the data stream output from the disc playback unit 202 and temporarily stores them in the memory 204. Furthermore, the signal processing unit 203 generates a device GUI stored in the memory 207 as needed and temporarily stores it in the memory 204. The image, audio, graphics, device GUI, and other data stored in the memory 204 are given predetermined processing in the signal processing unit 203, have their pop-out amount adjusted, and are output from the output unit 206 in a 3D format.
FIG. 3 shows the configuration of the signal processing unit 203. The signal processing unit 203 has a stream separation unit 301, an audio decoder 302, a video decoder 303, a graphics decoder 304, a CPU 305, and a video signal processing unit 306.
Here, the operation in which the signal processing unit 203 converts the 2D image of 2D content into a 3D image and outputs it, when the content recorded on the disc 201 is 2D content composed of 2D images, will be described. A stream containing video data is input to the stream separation unit 301. The stream separation unit 301 outputs the 2D video data to the video decoder 303. The video decoder 303 decodes the 2D video data and transfers it to the memory 204. The video signal output from the video decoder 303 is a 2D video signal. The memory 204 records that video signal.
In this embodiment, the playback device 101 includes the stream separation unit 301 to which an image is input, and the video signal processing unit 306 that generates and outputs L image data and R image data based on the 2D image data input from the stream separation unit 301. When a 3D image composed of the L image and the R image is displayed on the display device 102 capable of displaying 3D images, the video signal processing unit 306 generates the L image data and the R image data so that the user perceives the horizontal center portion of the displayed 3D image as lying farthest from the user in the direction perpendicular to the display surface of the display device 102, and perceives the portions other than the center portion as lying closer to the user toward the left and right ends of the stereoscopic image.
In Embodiment 1, the L image data and the R image data were generated so that the user perceives the horizontal center portion of the displayed 3D image as lying farthest from the user (deepest), and perceives the portions other than the center portion as lying closer to the user (in front) toward the left and right ends. In Embodiment 2, by contrast, the image is displayed so that the user perceives the entire displayed 3D image as lying farther than the display surface of the display device 102 as viewed from the user, with the horizontal center portion of the display area of the display device 102 lying nearest and the image lying farther from the user toward the left and right ends of the center portion. The configuration of the playback device 101 is the same as in Embodiment 1. The configuration of Embodiment 2 is described in detail below.
In Embodiment 3, a 2D image is converted into a 3D image based on the same characteristics as in Embodiment 1, and graphics data converted to 3D based on the same characteristics as in Embodiment 1 is superimposed and displayed. The configuration of the playback device 101 is the same as in Embodiment 1.
In Embodiment 4, a 2D image is converted into a 3D image based on the same characteristics as in Embodiment 1, but the superimposed graphics data is converted to 3D in a manner that does not curve it as in Embodiment 3. The configuration of the playback device 101 is the same as in Embodiment 1.
In Embodiment 4, a case is described in which the processing of graphics data by the video signal processing unit 306 of Embodiment 3 is changed from that based on the characteristics shown in FIG. 10 to that based on the characteristics shown in FIG. 12. The processing of the 2D image in the video signal processing unit 306 is performed according to the characteristics shown in FIG. 5.
Embodiments 1 to 4 have been described as examples of embodiments of the present invention. However, the present invention is not limited to these. Other embodiments of the present invention are described together below. The present invention is not limited to these either, and is also applicable to suitably modified embodiments.
102 Display device
103 3D glasses
201 Disc
202 Disc playback unit
203 Signal processing unit
204 Memory
205 Remote control receiving unit
206 Output unit
207 Program storage memory
301 Stream separation unit
302 Audio decoder
303 Video decoder
304 Graphics decoder
305 CPU
306 Video signal processing unit
Claims (20)
- An image conversion device that converts non-stereoscopic image data into stereoscopic image data composed of left-eye image data and right-eye image data, the device comprising:
an input unit to which a non-stereoscopic image is input; and
a conversion unit that generates and outputs the left-eye image data and the right-eye image data based on the non-stereoscopic image data input from the input unit,
wherein, when a stereoscopic image composed of the left-eye image and the right-eye image is displayed on a display device capable of displaying stereoscopic images, the conversion unit generates the left-eye image data and the right-eye image data so that the user perceives a predetermined horizontal portion of the displayed stereoscopic image as lying farthest from the user in the direction perpendicular to the display surface of the display device, and perceives the portions other than the predetermined portion as lying closer to the user toward the left and right ends of the stereoscopic image. - The image conversion device according to claim 1, wherein, when the stereoscopic image composed of the left-eye image and the right-eye image is displayed on a display device capable of displaying stereoscopic images, the conversion unit generates the left-eye image data and the right-eye image data so that the user recognizes the entire displayed stereoscopic image as lying farther, as viewed from the user, than the display surface of the display device in the direction perpendicular to that surface.
- The image conversion device according to claim 1, wherein the predetermined portion is a substantially central portion in the horizontal direction. - The image conversion device according to claim 1, wherein the conversion unit receives two copies of the same non-stereoscopic image data and generates the left-eye image data and the right-eye image data by giving different amounts of movement, for the left-eye image and for the right-eye image, to the pixels constituting the non-stereoscopic image data.
- The image conversion device according to claim 1, wherein the conversion unit reduces the image amplitude at the ends of the left-eye image and the right-eye image.
- The image conversion device according to claim 1, wherein the conversion unit varies the region in which the image amplitude at the ends is reduced, according to the parallax amount.
- The image conversion device according to claim 1, further comprising a receiving unit that accepts an instruction for adjusting the display position of the stereoscopic image in the direction perpendicular to the display surface of the display device.
- The image conversion device according to claim 1, wherein the conversion unit performs conversion so that an offset amount differs between the left-eye image data and graphics data to be combined with the left-eye image data, and so that an offset amount differs between the right-eye image data and graphics data to be combined with the right-eye image data.
- An image conversion device that converts non-stereoscopic image data into stereoscopic image data composed of left-eye image data and right-eye image data, the device comprising:
an input unit to which a non-stereoscopic image is input; and
a conversion unit that generates and outputs the left-eye image data and the right-eye image data based on the non-stereoscopic image data input from the input unit,
wherein, when a stereoscopic image composed of the left-eye image and the right-eye image is displayed on a display device capable of displaying stereoscopic images, the conversion unit generates the left-eye image data and the right-eye image data so that the user perceives the entire displayed stereoscopic image as lying farther than the display device, as viewed from the user, in the direction perpendicular to the display surface of the display device, with a predetermined horizontal portion of the display area of the display device lying nearest and the portions other than the predetermined portion lying farther from the user toward the left and right ends of the stereoscopic image. - The image conversion device according to claim 9, wherein, when the stereoscopic image composed of the left-eye image and the right-eye image is displayed on a display device capable of displaying stereoscopic images, the conversion unit generates the left-eye image data and the right-eye image data so that the user recognizes the portions from the predetermined portion to the left and right ends as lying at a constant position in the direction perpendicular to the display surface of the display device.
- The image conversion device according to claim 9, wherein the predetermined portion is a substantially central portion in the horizontal direction. - The image conversion device according to claim 9, wherein the conversion unit receives two copies of the same non-stereoscopic image data and generates the left-eye image data and the right-eye image data by giving different amounts of movement, for the left-eye image and for the right-eye image, to the pixels constituting the non-stereoscopic image data.
- The image conversion device according to claim 9, wherein the conversion unit reduces the image amplitude at the ends of the left-eye image and the right-eye image.
- The image conversion device according to claim 9, wherein the conversion unit varies the region in which the image amplitude at the ends is reduced, according to the parallax amount.
- The image conversion device according to claim 9, further comprising a receiving unit that accepts an instruction for adjusting the display position of the stereoscopic image in the direction perpendicular to the display surface of the display device.
- The image conversion device according to claim 9, wherein the conversion unit performs conversion so that an offset amount differs between the left-eye image data and graphics data to be combined with the left-eye image data, and performs conversion so that an offset amount differs between the right-eye image data and graphics data to be combined with the right-eye image data.
- An image conversion device that processes stereoscopic image data, the device comprising:
an input unit to which a non-stereoscopic image is input; and
a conversion unit that generates and outputs left-eye image data and right-eye image data by giving different amounts of movement to the left-eye image and the right-eye image of the stereoscopic image based on the stereoscopic image data input from the input unit,
wherein the conversion unit generates the left-eye image data and the right-eye image data so that, when the differences between the amounts of movement given to the left-eye image data and the right-eye image data generated from the same stereoscopic image data are compared, the difference given at a first pixel position of the stereoscopic image differs from the difference given at a second pixel position different from the first pixel position. - The image conversion device according to claim 17, further comprising a receiving unit that accepts an instruction for adjusting the amount of movement.
- The image conversion device according to claim 17, wherein conversion is performed so that the amount of movement given to the left-eye image data differs from the amount of movement given to graphics data to be combined with the left-eye image data, and so that the amount of movement given to the right-eye image data differs from the amount of movement given to graphics data to be combined with the right-eye image data.
- The image conversion device according to claim 17, wherein the image amplitude at the ends of the left-eye image data and the right-eye image data is reduced.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/643,802 US20130038611A1 (en) | 2010-04-28 | 2011-04-27 | Image conversion device |
CN2011800211191A CN102860020A (zh) | 2010-04-28 | 2011-04-27 | 图像转换装置 |
JP2012512672A JPWO2011135857A1 (ja) | 2010-04-28 | 2011-04-27 | 画像変換装置 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-103327 | 2010-04-28 | ||
JP2010103327 | 2010-04-28 | ||
JP2010229433 | 2010-10-12 | ||
JP2010-229433 | 2010-10-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011135857A1 (ja) | 2011-11-03 |
Family
ID=44861179
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/002472 WO2011135857A1 (ja) | 2011-04-27 | Image conversion device |
Country Status (4)
Country | Link |
---|---|
US (1) | US20130038611A1 (ja) |
JP (1) | JPWO2011135857A1 (ja) |
CN (1) | CN102860020A (ja) |
WO (1) | WO2011135857A1 (ja) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9967546B2 (en) | 2013-10-29 | 2018-05-08 | Vefxi Corporation | Method and apparatus for converting 2D-images and videos to 3D for consumer, commercial and professional applications |
KR102192986B1 (ko) | 2014-05-23 | 2020-12-18 | Samsung Electronics Co., Ltd. | Image display device and image display method |
US10158847B2 (en) | 2014-06-19 | 2018-12-18 | Vefxi Corporation | Real—time stereo 3D and autostereoscopic 3D video and image editing |
KR20160067518A (ko) * | 2014-12-04 | 2016-06-14 | Samsung Electronics Co., Ltd. | Image generation method and device |
CN104486611A (zh) * | 2014-12-29 | 2015-04-01 | 北京极维客科技有限公司 | Image conversion method and device |
US10284837B2 (en) | 2015-11-13 | 2019-05-07 | Vefxi Corporation | 3D system including lens modeling |
US10277877B2 (en) | 2015-11-13 | 2019-04-30 | Vefxi Corporation | 3D system including a neural network |
US10148933B2 (en) | 2015-11-13 | 2018-12-04 | Vefxi Corporation | 3D system including rendering with shifted compensation |
US10122987B2 (en) | 2015-11-13 | 2018-11-06 | Vefxi Corporation | 3D system including additional 2D to 3D conversion |
US10121280B2 (en) | 2015-11-13 | 2018-11-06 | Vefxi Corporation | 3D system including rendering with three dimensional transformation |
US10277879B2 (en) | 2015-11-13 | 2019-04-30 | Vefxi Corporation | 3D system including rendering with eye displacement |
US10148932B2 (en) | 2015-11-13 | 2018-12-04 | Vefxi Corporation | 3D system including object separation |
US10277880B2 (en) | 2015-11-13 | 2019-04-30 | Vefxi Corporation | 3D system including rendering with variable displacement |
US20170140571A1 (en) * | 2015-11-13 | 2017-05-18 | Craig Peterson | 3d system including rendering with curved display |
US10242448B2 (en) | 2015-11-13 | 2019-03-26 | Vefxi Corporation | 3D system including queue management |
US10225542B2 (en) | 2015-11-13 | 2019-03-05 | Vefxi Corporation | 3D system including rendering with angular compensation |
WO2017142712A1 (en) | 2016-02-18 | 2017-08-24 | Craig Peterson | 3d system including a marker mode |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61144192A (ja) * | 1984-12-17 | 1986-07-01 | Nippon Hoso Kyokai <Nhk> | Stereoscopic television image display device |
JPH0863615A (ja) * | 1994-08-26 | 1996-03-08 | Sanyo Electric Co Ltd | Method for converting a two-dimensional image into a three-dimensional image |
JPH09185712A (ja) * | 1995-12-28 | 1997-07-15 | Kazunari Era | Method for creating three-dimensional image data |
JP2005151534A (ja) * | 2003-09-24 | 2005-06-09 | Victor Co Of Japan Ltd | Pseudo-stereoscopic image creation device, pseudo-stereoscopic image creation method, and pseudo-stereoscopic image display system |
JP2010510600A (ja) * | 2006-11-21 | 2010-04-02 | Koninklijke Philips Electronics N.V. | Generation of a depth map for an image |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5675376A (en) * | 1995-12-21 | 1997-10-07 | Lucent Technologies Inc. | Method for achieving eye-to-eye contact in a video-conferencing system |
JPH11110180A (ja) * | 1997-10-03 | 1999-04-23 | Sanyo Electric Co Ltd | Method and device for converting a two-dimensional image into a three-dimensional image |
JPH11187426A (ja) * | 1997-12-18 | 1999-07-09 | Victor Co Of Japan Ltd | Stereoscopic video device and method |
JP3666232B2 (ja) * | 1998-03-24 | 2005-06-29 | Fuji Electric Systems Co., Ltd. | Protective device for electric vehicles |
CN2520082Y (zh) * | 2001-12-05 | 2002-11-06 | 中国科技开发院威海分院 | External stereoscopic video converter |
JP3857988B2 (ja) * | 2002-03-27 | 2006-12-13 | Sanyo Electric Co., Ltd. | Stereoscopic image processing method and device |
JP3990271B2 (ja) * | 2002-12-18 | 2007-10-10 | Nippon Telegraph and Telephone Corporation | Simple stereo image input device, method, program, and recording medium |
JP2005073049A (ja) * | 2003-08-26 | 2005-03-17 | Sharp Corp | Stereoscopic video playback device and stereoscopic video playback method |
US7262767B2 (en) * | 2004-09-21 | 2007-08-28 | Victor Company Of Japan, Limited | Pseudo 3D image creation device, pseudo 3D image creation method, and pseudo 3D image display system |
JP5132690B2 (ja) * | 2007-03-16 | 2013-01-30 | Thomson Licensing | System and method for combining text with three-dimensional content |
CN101282492B (zh) * | 2008-05-23 | 2010-07-21 | Tsinghua University | Depth adjustment method for three-dimensional image display |
- 2011-04-27 WO PCT/JP2011/002472 patent/WO2011135857A1/ja active Application Filing
- 2011-04-27 US US13/643,802 patent/US20130038611A1/en not_active Abandoned
- 2011-04-27 CN CN2011800211191A patent/CN102860020A/zh active Pending
- 2011-04-27 JP JP2012512672A patent/JPWO2011135857A1/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2011135857A1 (ja) | 2013-07-18 |
CN102860020A (zh) | 2013-01-02 |
US20130038611A1 (en) | 2013-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011135857A1 (ja) | Image conversion device | |
US9124870B2 (en) | Three-dimensional video apparatus and method providing on screen display applied thereto | |
US20100045779A1 (en) | Three-dimensional video apparatus and method of providing on screen display applied thereto | |
EP2315450A2 (en) | 2D/3D display apparatus and 2D/3D image display method therein | |
JP5502436B2 (ja) | Video signal processing device | |
WO2010092823A1 (ja) | Display control device | |
WO2011016240A1 (ja) | Video playback device | |
KR20110116525A (ko) | Image display device providing a 3D object, system therefor, and operation control method thereof | |
US20120075291A1 (en) | Display apparatus and method for processing image applied to the same | |
JP2012044308A (ja) | 3D image output device and 3D image display device | |
WO2011118215A1 (ja) | Video processing device | |
JP5505637B2 (ja) | Stereoscopic display device and display method of stereoscopic display device | |
JP2011172172A (ja) | Stereoscopic video processing device and method, and program | |
US9407897B2 (en) | Video processing apparatus and video processing method | |
JP4806082B2 (ja) | Electronic device and image output method | |
US20110134226A1 (en) | 3d image display apparatus and method for determining 3d image thereof | |
JP5066244B2 (ja) | Video playback device and video playback method | |
JP2012186652A (ja) | Electronic device, image processing method, and image processing program | |
JP5025768B2 (ja) | Electronic device and image processing method | |
WO2011065191A1 (ja) | Display control device, display control method, display control program, computer-readable recording medium, recording/playback device including the display control device, audio output device, and recording/playback device including the audio output device | |
JP5058316B2 (ja) | Electronic device, image processing method, and image processing program | |
WO2011114739A1 (ja) | Playback device | |
JP5166567B2 (ja) | Electronic device, display control method for video data, and program | |
WO2012014489A1 (ja) | Video signal processing device and video signal processing method | |
TWI555400B (zh) | Subtitle control method and element applied to a display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180021119.1 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11774643 Country of ref document: EP Kind code of ref document: A1 |
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2012512672 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 13643802 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 11774643 Country of ref document: EP Kind code of ref document: A1 |