WO2012073823A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program

Info

Publication number
WO2012073823A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
region
reference area
eye
Prior art date
Application number
PCT/JP2011/077186
Other languages
English (en)
Japanese (ja)
Inventor
岳彦 指田
Original Assignee
コニカミノルタホールディングス株式会社 (Konica Minolta Holdings, Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by コニカミノルタホールディングス株式会社 (Konica Minolta Holdings, Inc.)
Publication of WO2012073823A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 — Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 — Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 — Processing image signals
    • H04N 13/161 — Encoding, multiplexing or demultiplexing different image signal components

Definitions

  • the present invention relates to an image processing device, an image processing method, and a program.
  • 3D televisions that use videos that can be viewed stereoscopically (also referred to as 3D videos or stereoscopic videos) are in the spotlight.
  • In 3D television, two images obtained by viewing the same object from different viewpoints are used to generate an image that can be viewed stereoscopically (also referred to as a 3D image or a stereoscopic image).
  • The position of the pixel indicating the same part of the object is shifted between the image for the left eye and the image for the right eye, and the focus adjustment function of the human eye is exploited to give the user a sense of depth.
  • the amount of displacement of the pixel position that captures the same part of the object between the image for the left eye and the image for the right eye is also referred to as “parallax”.
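The parallax defined above is simply the horizontal displacement between the pixel positions that capture the same part of an object in the two images. A minimal sketch (the coordinates are hypothetical, not taken from the patent):

```python
def parallax(x_left: int, x_right: int) -> int:
    """Parallax: horizontal shift between the pixel positions that
    capture the same object part in the left- and right-eye images."""
    return x_left - x_right

# A point imaged at x=120 in the left-eye image and at x=112 in the
# right-eye image has a parallax of 8 pixels.
print(parallax(120, 112))  # 8
```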
  • This 3D image technology has been adopted in various video fields.
  • For example, an endoscopic device has been proposed that enables stereoscopic viewing of an image over a wide field of view by adjusting the parallax detected from a stereo image to fall within the human fusion range (for example, Patent Document 1).
  • A stereoscopic video processing apparatus has also been proposed that displays a stereoscopic reference image when a stereoscopic video is displayed and the sense of depth is adjusted (for example, Patent Document 2).
  • When the set parallax is somewhat small, it may be difficult for the user to obtain a sense of depth depending on the size of the screen on which the image is displayed. That is, even for the same object, it may appear different to the user when viewed as a 3D image than when the object is actually viewed.
  • Against this state of 3D image technology, a technique has been proposed in which a frame image is displayed around a 3D image in order to change the screen, add interest, or facilitate stereoscopic viewing (for example, Patent Document 3). With this technique, the frame image to be used can be selected from a plurality of prepared frame images.
  • Patent Document 1: JP-A-8-313825; Patent Document 2: Japanese Patent Laid-Open No. 11-155155; Patent Document 3: International Publication No. 2003/093023
  • Even with the technique disclosed in Patent Document 3, however, the user may not be able to obtain a sufficient sense of depth in the 3D image.
  • the present invention has been made in view of the above problems, and an object of the present invention is to provide a technique for improving a sense of depth that can be obtained by a user viewing a 3D image.
  • An image processing apparatus according to a first aspect includes: a first acquisition unit that acquires first stereoscopic image information relating to a first image and a second image having a relationship in which the positions of pixels indicating the same part of an object are shifted in one direction; a determination unit that determines a reference value of the shift amount according to the shift amount of the positions of pixels indicating the same part of the object between the first image and the second image; a second acquisition unit that acquires information relating to a first reference area image and a second reference area image having a relationship in which the positions of pixels indicating the same portion of a display object are shifted in the one direction by an amount corresponding to the reference value; and a combining unit that generates second stereoscopic image information by combining the first image, the second image, the first reference area image, and the second reference area image.
  • An image processing apparatus according to a second aspect is the image processing apparatus according to the first aspect, wherein the second stereoscopic image information includes information in at least one of the following formats: a format in which the first image, the second image, the first reference area image, and the second reference area image can be displayed simultaneously on one screen in a superimposed manner; and a format in which one or more of those images and the remaining one or more images can be displayed in time sequence on one screen.
  • An image processing device according to a third aspect is the image processing device according to the first or second aspect, wherein the determination unit determines the reference value in accordance with a distribution of the shift amounts of the positions of pixels indicating the same part of the object between the first image and the second image.
  • An image processing device according to a fourth aspect is the image processing device according to any one of the first to third aspects, further including a detection unit that detects, according to a preset detection rule, a region of interest in the first image and the second image that is predicted to attract the user's eyes, wherein the determination unit determines the reference value in accordance with the shift amount of the positions of pixels indicating the same part of the object included in the region of interest between the first image and the second image.
  • An image processing device according to a fifth aspect is the image processing device according to any one of the first to fourth aspects, wherein the combining unit arranges the first reference region image in one or more of a plurality of regions including a region around the first image and a region corresponding to the surrounding region of an image different from the first image, and arranges the second reference region image in one or more of a plurality of regions including a region around the second image and a region corresponding to the surrounding region of an image different from the second image.
  • An image processing apparatus according to a sixth aspect is the image processing apparatus according to any one of the first to fourth aspects, wherein the combining unit arranges the first reference region image in one or more of a plurality of regions including an internal region of the first image and a region corresponding to the internal region of an image different from the first image, and arranges the second reference region image in one or more of a plurality of regions including an internal region of the second image and a region corresponding to the internal region of an image different from the second image.
  • An image processing device according to a seventh aspect is the image processing device according to any one of the first to fourth aspects, further including: a designation unit that designates a first reference region in which the combining unit arranges the first reference region image and a second reference region in which the combining unit arranges the second reference region image; a reception unit that receives a signal according to an operation of the operation unit by a user; and a setting unit that, in accordance with the signal, sets the designation unit to any one of a plurality of modes including a first mode and a second mode. When set to the first mode, the designation unit designates the first reference region in one or more of a plurality of regions including a region around the first image and a region corresponding to the surrounding region of an image different from the first image, and designates the second reference region in one or more of a plurality of regions including a region around the second image and a region corresponding to the surrounding region of an image different from the second image. When set to the second mode, the designation unit designates the first reference region in one or more of a plurality of regions including an internal region of the first image and a region corresponding to the internal region of an image different from the first image, and designates the second reference region in one or more of a plurality of regions including an internal region of the second image and a region corresponding to the internal region of an image different from the second image.
  • An image processing device according to an eighth aspect includes: an acquisition unit that acquires first stereoscopic image information relating to a first image and a second image having a relationship in which the positions of pixels indicating the same part of an object are shifted in one direction; a designation unit that, according to one or more of the first image and the second image, designates a first reference region in one or more of the regions including an internal region of the first image, a region around the first image, and regions included in an image different from the first image, and designates a second reference region in one or more of the regions including an internal region of the second image, a region around the second image, and regions included in an image different from the second image; a second acquisition unit that acquires information relating to a first reference area image and a second reference area image having a relationship in which the positions of pixels indicating the same part of a display object are shifted in the one direction; and a combining unit that generates second stereoscopic image information by arranging the first reference area image in the first reference region and the second reference area image in the second reference region, thereby combining the first image, the second image, the first reference area image, and the second reference area image.
  • An image processing apparatus according to a ninth aspect is the image processing apparatus according to the eighth aspect, wherein the second stereoscopic image information includes information in at least one of the following formats: a format in which the first image, the second image, the first reference region image, and the second reference region image can be displayed simultaneously on one screen in a superimposed manner; and a format in which one or more of those images and the remaining one or more images can be displayed in time sequence on one screen.
  • An image processing apparatus according to a tenth aspect is the image processing apparatus according to the eighth or ninth aspect, wherein the designation unit designates the first reference area and the second reference area in accordance with a distribution of the shift amounts of the positions of pixels indicating the same part of the object between the first image and the second image.
  • An image processing apparatus according to an eleventh aspect is the image processing apparatus according to any one of the eighth to tenth aspects, further including a detection unit that detects, according to a preset detection rule, a region of interest in the first image and the second image that is predicted to attract the user's eyes, wherein the designation unit designates the first reference area and the second reference area according to the shift amount of the positions of pixels indicating the same part of the object included in the region of interest between the first image and the second image.
  • An image processing apparatus according to a twelfth aspect is the image processing apparatus according to any one of the eighth to eleventh aspects, wherein the designation unit designates the first reference area and the second reference area so that their positions change according to one or more of the first image and the second image.
  • An image processing device according to a thirteenth aspect is the image processing device according to any one of the eighth to twelfth aspects, wherein the designation unit designates the first reference area and the second reference area so that their sizes change according to one or more of the first image and the second image.
  • An image processing device according to a fourteenth aspect is the image processing device according to any one of the eighth to thirteenth aspects, wherein the designation unit designates the first reference area in one or more of a plurality of areas including a region around the first image and a region corresponding to the surrounding region of an image different from the first image, and designates the second reference area in one or more of a plurality of areas including a region around the second image and a region corresponding to the surrounding region of an image different from the second image.
  • An image processing device according to a fifteenth aspect is the image processing device according to any one of the eighth to thirteenth aspects, wherein the designation unit designates the first reference area in one or more of a plurality of areas including an internal region of the first image and a region corresponding to the internal region of an image different from the first image, and designates the second reference area in one or more of a plurality of areas including an internal region of the second image and a region corresponding to the internal region of an image different from the second image.
  • An image processing device according to a sixteenth aspect is the image processing device according to any one of the eighth to thirteenth aspects, further including: a reception unit that receives a signal according to an operation of the operation unit by a user; and a setting unit that, in accordance with the signal, sets the designation unit to any one of a plurality of modes including a first mode and a second mode. When the designation unit is set to the first mode, the first reference area is designated in one or more of a plurality of areas including a region around the first image and a region corresponding to the surrounding region of an image different from the first image, and the second reference area is designated in one or more of a plurality of areas including a region around the second image and a region corresponding to the surrounding region of an image different from the second image. When the designation unit is set to the second mode, the first reference area is designated in one or more of a plurality of areas including an internal region of the first image and a region corresponding to the internal region of an image different from the first image, and the second reference area is designated in one or more of a plurality of areas including an internal region of the second image and a region corresponding to the internal region of an image different from the second image.
  • An image processing method according to a seventeenth aspect includes the steps of: (a) acquiring first stereoscopic image information relating to a first image and a second image having a relationship in which the positions of pixels indicating the same part of an object are shifted in one direction; (b) determining a reference value of the shift amount according to the shift amount of the positions of pixels indicating the same part of the object between the first image and the second image; (c) acquiring information relating to a first reference area image and a second reference area image having a relationship in which the positions of pixels indicating the same portion of a display object are shifted in the one direction by an amount corresponding to the reference value; and (d) generating second stereoscopic image information by combining the first image, the second image, the first reference area image, and the second reference area image.
  • An image processing method according to an eighteenth aspect includes the steps of: (e) acquiring first stereoscopic image information relating to a first image and a second image having a relationship in which the positions of pixels indicating the same part of an object are shifted in one direction; (f) in accordance with one or more of the first image and the second image, designating a first reference region in one or more of the regions including an internal region of the first image, a region around the first image, and regions included in an image different from the first image, and designating a second reference region in one or more of the regions including an internal region of the second image, a region around the second image, and regions included in an image different from the second image; (g) acquiring information relating to a first reference area image and a second reference area image having a relationship in which the positions of pixels indicating the same part of a display object are shifted in the one direction; and (h) generating second stereoscopic image information by arranging the first reference area image in the first reference region and the second reference area image in the second reference region, thereby combining the first image, the second image, the first reference area image, and the second reference area image.
  • A program according to a nineteenth aspect is executed by a control unit included in an information processing apparatus, thereby causing the information processing apparatus to function as the image processing apparatus according to any one of the first to sixteenth aspects.
  • the image processing apparatus can improve the sense of depth that can be obtained by a user viewing a 3D image.
  • the image processing apparatus can further improve the sense of depth obtained by the user viewing the 3D image according to the state of the 3D image.
  • the image processing apparatus can further improve the sense of depth that can be obtained by the user who is viewing the 3D image in the region of interest to the user.
  • The image processing apparatus can improve the sense of depth that can be obtained by a user viewing a 3D image while detracting as little as possible from the original target of attention.
  • According to the image processing apparatus according to any of the seventh and sixteenth aspects, suppression of visual discomfort and ensuring of the visibility of 3D images can be appropriately selected in accordance with the user's intention.
  • According to the image processing apparatus according to any of the twelfth and thirteenth aspects, a display mode suitable for viewing a 3D image can be realized.
  • the same effect as that of the image processing apparatus according to the first aspect can be realized.
  • FIG. 1 is a diagram for explaining an overview of processing according to the first embodiment.
  • FIG. 2 is a diagram for explaining an overview of processing according to the first embodiment.
  • FIG. 3 is a diagram showing a schematic configuration of the information processing system according to the first and second embodiments.
  • FIG. 4 is a diagram illustrating a functional configuration of the image processing apparatus according to the first and second embodiments.
  • FIG. 5 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 6 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 7 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 8 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 9 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 10 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 11 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 12 is a flowchart showing the operation of the image processing apparatus.
  • FIG. 13 is a diagram for explaining an overview of processing according to the second embodiment.
  • FIG. 14 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • FIG. 15 is a diagram illustrating an example of a stereoscopic image to which a reference area image is added.
  • When a stereoscopically viewable image (also referred to as a 3D image) is displayed, for example, a left-eye image GL and a right-eye image GR having a relationship in which the positions of pixels indicating the same portion of the display object are shifted in one direction are prepared.
  • The one direction matches the direction in which the human left eye and right eye are separated from each other, and is set, for example, to the horizontal direction of the image.
  • The left-eye image GL and the right-eye image GR may be obtained by imaging using a stereo camera.
  • the stereo camera has two cameras corresponding to a human left eye and a right eye.
  • FIG. 1 shows a situation in which the left-eye image GL contains three areas indicating objects (also referred to as object areas) O1L to O3L, and the right-eye image GR contains three object areas O1R to O3R.
  • The object regions O1L and O1R are regions indicating the same person, the object regions O2L and O2R are regions indicating the same conical object, and the object regions O3L and O3R are regions indicating the same hexagonal column object.
  • The amount of deviation (also referred to as parallax) between the position that the object region O1L occupies in the left-eye image GL and the position that the object region O1R occupies in the right-eye image GR can be considered somewhat small; the same applies to the parallax between the object regions O2L and O2R and between the object regions O3L and O3R.
  • Between the left-eye image GL and the right-eye image GR, a reference value of the shift amount is determined in accordance with the amount of deviation between the positions of pixels indicating the same portion of the object.
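The patent does not fix a concrete rule for deriving the reference value from the per-pixel deviation amounts; one plausible choice, consistent with the later claims that mention a distribution of shift amounts, is to take an upper quantile of the measured parallaxes. The function name, the quantile, and the sample values below are illustrative assumptions:

```python
def reference_shift(disparities, quantile=0.9):
    """Pick a reference shift value from the distribution of per-pixel
    parallaxes between the left- and right-eye images. Here an upper
    quantile is used (assumed rule), so that a reference region drawn
    with this shift appears nearer than most of the scene."""
    ordered = sorted(disparities)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Parallaxes measured for matched pixels of the object regions:
d = [2, 2, 3, 3, 3, 4, 5, 5, 6, 8]
print(reference_shift(d))  # 8
```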
  • Then, a left-eye region image (also referred to as a left-eye reference region image) R1L and a right-eye region image (also referred to as a right-eye reference region image) R1R are generated, having a relationship in which the positions of pixels indicating the same portion of the display object are shifted in the one direction in accordance with the reference value of the shift amount.
  • An image (also referred to as a left-eye synthesized image) GSL obtained by combining the left-eye image GL with the left-eye reference region image R1L, and an image (also referred to as a right-eye synthesized image) GSR obtained by combining the right-eye image GR with the right-eye reference region image R1R, are generated.
  • An example of the left-eye synthesized image GSL in which the left-eye reference region image R1L is synthesized is shown on the left side of FIG. 2.
  • An example of the right-eye synthesized image GSR, in which the right-eye reference region image R1R is synthesized in a region OAR (corresponding to a right-eye reference area described later) around the area TAR (also referred to as the right-eye 3D image area) corresponding to the right-eye image GR, is shown on the right side of FIG. 2.
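The compositing described above can be sketched as follows: each synthesized image pastes the 3D image area onto a larger canvas and draws the reference region at a horizontal position that differs between the two composites by the reference value. The function, the grid sizes, and the single-bar stand-in for the reference region image are illustrative assumptions, not taken from the patent:

```python
def compose(image, canvas_w, canvas_h, img_x, img_y, bar_x):
    """Paste `image` onto a blank canvas and draw a one-pixel-wide
    vertical bar (a stand-in for the reference region image) at
    column bar_x."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            canvas[img_y + r][img_x + c] = v
    for r in range(canvas_h):
        canvas[r][bar_x] = 9  # reference region marker
    return canvas

shift = 4                          # reference value of the shift amount
img = [[1] * 4 for _ in range(3)]  # dummy 4x3 "3D image area"
gs_l = compose(img, 20, 5, 6, 1, bar_x=16)          # left-eye composite
gs_r = compose(img, 20, 5, 6, 1, bar_x=16 - shift)  # right-eye composite
# The bar's column differs by `shift` between the two composites, so it
# carries a parallax equal to the reference value.
```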
  • Display of 3D images can be realized by a mode in which images are sequentially displayed in a short time.
  • When the left-eye image GL and the right-eye image GR are both included in each frame of an interlaced video, for example, a mode may be adopted in which the left-eye synthesized image GSL is displayed as the image of the first field and the right-eye synthesized image GSR is displayed as the image of the second field.
  • The left-eye reference region image R1L and the right-eye reference region image R1R may be images of a field different from that of the left-eye image GL and the right-eye image GR, or may be images of a different frame.
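The field-sequential display mode described above amounts to alternating the two synthesized images in time. A minimal sketch of such a schedule; the function and the field labels are illustrative assumptions:

```python
def display_sequence(gs_l, gs_r, n_frames):
    """Alternate the left-eye and right-eye synthesized images,
    e.g. as the first and second fields of an interlaced video."""
    seq = []
    for _ in range(n_frames):
        seq.append(("field1", gs_l))  # shown to the left eye
        seq.append(("field2", gs_r))  # shown to the right eye
    return seq

frames = display_sequence("GS_L", "GS_R", 2)
print([f[0] for f in frames])  # ['field1', 'field2', 'field1', 'field2']
```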
  • the left eye reference area image R1 L and the right eye reference area image R1 R can give the user a sense of depth corresponding to the reference value of the deviation amount.
  • By comparison with that sense of depth, the sense of distance from the user to the objects, which the left-eye 3D image area TAL and the right-eye 3D image area TAR give the user, is easily enhanced. As a result, the sense of depth that can be obtained by the user viewing the 3D image can be improved.
  • FIG. 3 is a diagram illustrating a schematic configuration of the information processing system 1 according to the embodiment.
  • the information processing system 1 includes a stereo camera 2, an information processing device 4, and a line-of-sight detection sensor 5.
  • the information processing device 4 is connected to the stereo camera 2 and the line-of-sight detection sensor 5 so as to be able to transmit and receive data.
  • the stereo camera 2 has a camera 21 and a camera 22.
  • Each of the cameras 21 and 22 is an imaging device having a function of a digital camera having an imaging element such as a CCD.
  • In each of the cameras 21 and 22, light from a subject is received, and information indicating a distribution relating to the luminance of the subject is acquired as image data by photoelectric conversion.
  • the camera 21 and the camera 22 are arranged, for example, separated by a predetermined distance in the horizontal direction.
  • the predetermined distance corresponds to, for example, a distance between an average human left eye and right eye.
  • a stereo image is acquired by photographing with the camera 21 and the camera 22 at substantially the same timing.
  • The stereo image is an image that includes a set of an image for the left eye (also referred to as a left-eye image) and an image for the right eye (also referred to as a right-eye image) and can be displayed stereoscopically.
  • N sets (N is an integer of 2 or more) of stereo images may be acquired by performing continuous shooting a plurality of times with the camera 21 and the camera 22 at predetermined timings.
  • the N sets of stereo images correspond to N frames included in a moving image that can be stereoscopically viewed.
  • the line-of-sight detection sensor 5 detects a portion of the screen of the display unit 42 included in the information processing device 4 that is noticed by the user (also referred to as a portion of interest).
  • the display unit 42 and the line-of-sight detection sensor 5 are fixed to each other with a predetermined arrangement relationship.
  • The line-of-sight detection sensor 5 obtains an image of the user by photographing, detects the direction of the user's line of sight by analyzing the image, and thereby detects the portion of interest of the user on the screen of the display unit 42.
  • The analysis of the image can be realized, for example, by detecting the orientation of the face using pattern matching and identifying the white part and the dark part of both eyes using color differences.
  • information relating to one or more stereo images obtained by the stereo camera 2 can be transmitted to the information processing device 4 via the communication line 3a.
  • information related to the target portion obtained by the line-of-sight detection sensor 5 can be transmitted to the information processing apparatus 4 via the communication line 3b.
  • the communication lines 3a and 3b may be wired lines or wireless lines.
  • The information processing apparatus 4 has the functions of, for example, a personal computer.
  • the information processing apparatus 4 includes an operation unit 41, a display unit 42, an interface (I / F) unit 43, a storage unit 44, an input / output unit 45, and a control unit 46.
  • the operation unit 41 includes, for example, a mouse and a keyboard.
  • the display unit 42 includes, for example, a liquid crystal display.
  • the I / F unit 43 receives information from the stereo camera 2 and the line-of-sight detection sensor 5.
  • the storage unit 44 includes, for example, a hard disk and stores each image obtained by the stereo camera 2. Further, the storage unit 44 stores a program PG1 and the like for realizing various operations in the information processing apparatus 4.
  • the input / output unit 45 includes, for example, a disk drive, can receive the storage medium 9 such as an optical disk, and can exchange data with the control unit 46.
  • the control unit 46 includes a CPU 46a that functions as a processor and a memory 46b that can temporarily store information, and comprehensively controls each unit of the information processing apparatus 4.
  • various functions, various information processing, and the like are realized by reading and executing the program PG1 in the storage unit 44. Data temporarily generated in this information processing is appropriately stored in the memory 46b.
  • the information processing device 4 functions as an image processing device that generates a stereoscopically viewable image (3D image), and further, as an image display system that displays the 3D image on the display unit 42. Also work.
  • the control unit 46 can store the program stored in the storage medium 9 in the storage unit 44 or the like via the input / output unit 45.
  • FIG. 4 is a block diagram illustrating a functional configuration of the image processing apparatus realized by the control unit 46.
  • the image processing apparatus includes an image acquisition unit 461, a region of interest detection unit 462, a reference deviation amount determination unit 463, a reference region image acquisition unit 464, a signal reception unit 465, a mode setting unit 466, a reference region specification unit 467, and an image composition unit. 468.
  • The left-eye image GL and the right-eye image GR have a relationship in which the positions of pixels indicating the same portion of the object are shifted in one direction (here, the horizontal direction).
  • The attention area detection unit 462 detects, according to a preset detection rule, a region of interest in the left-eye image GL and the right-eye image GR that is predicted to attract the user's eyes.
  • The reference deviation amount determination unit 463 determines a reference value of the deviation amount in accordance with the amount of deviation between the positions of pixels indicating the same portion of the object between the left-eye image GL and the right-eye image GR.
  • the reference area image acquisition unit 464 acquires information (also referred to as reference stereoscopic image information) related to the left eye reference area image R1 L and the right eye reference area image R1 R.
  • The left-eye reference area image R1L and the right-eye reference area image R1R have a relationship in which the positions of pixels indicating the same portion of the display object are shifted in one direction (here, the horizontal direction) by an amount corresponding to the reference value determined by the reference deviation amount determination unit 463.
  • the signal reception unit 465 receives a signal corresponding to the operation of the operation unit 41 by the user.
  • the mode setting unit 466 sets the reference region designating unit 467 to any one of a plurality of modes including the first mode and the second mode in accordance with the signal received by the signal receiving unit 465. Further, the mode setting unit 466 sets the attention area detection unit 462 to a mode for detecting the attention area or a mode for not detecting the attention area in accordance with the signal received by the signal reception section 465.
  • The reference area designation unit 467 designates a left-eye reference area OAL in which the left-eye reference region image R1L is combined, and a right-eye reference area OAR in which the right-eye reference region image R1R is combined.
• The image combining unit 468 generates stereoscopically viewable information (also referred to as second stereoscopic image information) by combining the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R.
• The left-eye reference area image R1 L may be arranged in the left-eye reference area OA L designated by the reference area designation unit 467, or may be arranged in a predetermined area.
• The right-eye reference area image R1 R may be arranged in the right-eye reference area OA R designated by the reference area designation unit 467, or may be arranged in a predetermined area.
• The predetermined area may be, for example, an area around the left-eye image G L and the right-eye image G R, or may be an area corresponding to such a surrounding area in an image separate from the left-eye image G L and the right-eye image G R.
  • Another image may be, for example, another field image in an interlaced moving image or another frame image in a moving image.
• The second stereoscopic image information generated in this way can be visually output on the display unit 42 under the control of the control unit 46.
• Based on the second stereoscopic image information, the display unit 42 can superimpose and simultaneously display the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R.
• Alternatively, one or more of the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R and the remaining one or more images may be displayed in time sequence.
• The sense of distance from the user to the object that the left-eye 3D image area TA L and the right-eye 3D image area TA R give the user can be enhanced by comparison with the sense of distance from the user to the display object that the left-eye reference area image R1 L and the right-eye reference area image R1 R give the user.
• <(1-3-1) Attention Area Detection Method> As a method of detecting the attention area in the attention area detection unit 462, for example, one or more of the following detection methods (A1) to (A5) can be adopted.
• (A2) Targeting one or more of the left-eye image G L and the right-eye image G R obtained from the image acquisition unit 461, an image area showing a specific type of object such as a person is detected as the attention area by template matching or the like.
• (A3) When the first stereoscopic image information indicates a moving image including a plurality of stereo images, a motion vector targeting the plurality of stereo images is analyzed, and an area showing an object whose motion exceeds a certain threshold is detected as the attention area.
• (A4) Targeting one or more of the left-eye image G L and the right-eye image G R obtained from the image acquisition unit 461, an area in which at least one of a specific color and a specific texture different from the surroundings is detected is detected as the attention area.
• Examples of the specific color include skin color, which is a characteristic color of humans.
• Examples of the specific texture include the color arrangement of the parts included in the human head (eyes, hair, eyebrows, mouth, etc.).
• (A5) The attention area is detected from one or more of the left-eye image G L and the right-eye image G R obtained from the image acquisition unit 461.
• For example, one or more image areas showing a specific type of object such as a person are detected by template matching or the like targeting one or more of the left-eye image G L and the right-eye image G R, and an area corresponding to a target portion of the one or more image areas may be detected as the attention area.
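Method (A4) above can be sketched in a few lines. The following is a minimal illustration, not from the patent: it marks pixels that fall inside a rough skin-tone RGB band (the "specific color") and returns the bounding box of the matching pixels as the attention area. The function name, the color band, and the bounding-box simplification are all assumptions made for the sketch.

```python
import numpy as np

def detect_attention_area(image_rgb, lo=(90, 40, 20), hi=(255, 190, 150)):
    """Detect an attention area as the bounding box of pixels whose color
    falls inside a specific color band (cf. method (A4)); the band here is
    a rough skin-tone range chosen purely for illustration.

    image_rgb: (H, W, 3) uint8 array.
    Returns (top, bottom, left, right) or None when no pixel matches."""
    lo = np.asarray(lo)
    hi = np.asarray(hi)
    mask = np.all((image_rgb >= lo) & (image_rgb <= hi), axis=-1)
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1], cols[0], cols[-1]
```

A real detector would also use template matching or texture cues as the text describes; this sketch covers only the color criterion.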
• As a specific method for the determination method (B1), for example, a method is conceivable in which, for all combinations of a reference pixel and a corresponding pixel indicating the same portion of the object between the left-eye image G L and the right-eye image G R, an average value of the shift amounts of their positions (for example, X and Y addresses) is calculated, and this average value is determined as the reference value.
• The combinations of reference pixels and corresponding pixels between the left-eye image G L and the right-eye image G R can be detected by searching for corresponding points using various methods such as the phase-only correlation (POC) method.
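The phase-only correlation (POC) method mentioned above can be sketched as follows: the inverse FFT of the normalized cross-power spectrum of two images peaks at their relative displacement. This minimal version recovers only integer, circular shifts; real POC implementations add windowing and sub-pixel peak fitting. The function name is illustrative.

```python
import numpy as np

def poc_displacement(a, b):
    """Estimate the integer displacement (dy, dx) such that b is a shifted
    (circularly) by (dy, dx), via phase-only correlation: the inverse FFT
    of the normalized cross-power spectrum peaks at the displacement."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    spec = Fa * np.conj(Fb)
    spec /= np.abs(spec) + 1e-12      # keep phase only
    corr = np.fft.ifft2(spec).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy  # map peak index to signed shift
    dx = dx - w if dx > w // 2 else dx
    return -dy, -dx
```

For stereo pairs the vertical component is typically near zero and the horizontal component is the disparity used for the shift amount.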
• Alternatively, a representative value of the shift amounts of the positions between the reference pixels and the corresponding pixels for all the pixels included in the attention area may be employed.
• As this representative value, for example, at least one of an average value, a maximum value, a minimum value, a mode value, and a median value can be adopted.
• Alternatively, the shift amount of the position between the reference pixel and the corresponding pixel at the pixel at the center of gravity of the attention area may be adopted as the reference value of the shift amount.
  • the reference value of the shift amount may be shifted by a predetermined amount from the shift amount of the position between the reference pixel and the corresponding pixel in the attention area.
• The predetermined amount may be specified by the user via the operation unit 41, for example, or may be determined based on the amount of positional shift between the reference pixels and the corresponding pixels in an area other than the attention area, including the background (also referred to as a non-attention area).
  • a representative value for the amount of positional deviation between the reference pixel and the corresponding pixel in the non-attention area may be employed. As this representative value, for example, at least one of an average value, a maximum value, a minimum value, a mode value, and a median value can be adopted.
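The representative-value choices listed above (average, maximum, minimum, mode, median) can be captured in one small helper. This sketch assumes the per-pixel shift amounts for the area have already been collected; the function name is hypothetical.

```python
import numpy as np

def reference_shift_value(shifts, stat="mean"):
    """Collapse the per-pixel positional shift amounts of an area into a
    single reference value using one of the representative statistics the
    text mentions (average, maximum, minimum, mode, median)."""
    d = np.asarray(shifts, dtype=float)
    if stat == "mean":
        return float(d.mean())
    if stat == "max":
        return float(d.max())
    if stat == "min":
        return float(d.min())
    if stat == "median":
        return float(np.median(d))
    if stat == "mode":
        vals, counts = np.unique(d, return_counts=True)
        return float(vals[counts.argmax()])
    raise ValueError("unknown statistic: " + stat)
```

The same helper can serve both the attention area and the non-attention area, whose representative shifts the text compares.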
• The user's sense of depth for the object displayed in the left-eye 3D image area TA L and the right-eye 3D image area TA R is more easily enhanced by the presence of the display object displayed as the left-eye reference area image R1 L and the right-eye reference area image R1 R.
  • the acquisition of the reference stereoscopic image information in the reference region image acquisition unit 464 can be realized, for example, by sequentially performing the following steps (C1) and (C2).
• (C1) An image pattern stored in advance in the storage unit 44 or the like is read out.
• As this image pattern, for example, an image pattern showing a specific pattern in which relatively large dots are randomly arranged, an image pattern including an information display column of digital broadcasting (for example, a data column or a time column), or an image pattern including device operation buttons or the like can be employed.
• (C2) Using the image pattern read in step (C1) as one of the base images (for example, the left-eye reference area image R1 L), the other image (for example, the right-eye reference area image R1 R) is generated by shifting the position of each pixel of that image in one direction by an amount corresponding to the reference value determined by the reference deviation amount determination unit 463.
• In this way, the left-eye reference area image R1 L and the right-eye reference area image R1 R are acquired.
• Alternatively, a set of image patterns corresponding to each of a plurality of shift amounts may be stored in advance in the storage unit 44 or the like, and the left-eye reference area image R1 L and the right-eye reference area image R1 R may be acquired by reading out from the storage unit 44 the set of image patterns whose shift amount corresponds to the reference value determined by the reference shift amount determination unit 463.
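Steps (C1) and (C2) can be sketched as follows, assuming grayscale patterns held as NumPy arrays: a random-dot pattern (cf. the pattern of FIG. 5) stands in for the stored image pattern, and the right-eye reference area image is derived by shifting the left-eye one horizontally by the reference value. The circular wrap-around is a simplification (a real implementation would crop or pad), and both function names are illustrative.

```python
import numpy as np

def random_dot_pattern(h, w, n_dots=40, dot=3, seed=0):
    """Pattern of relatively large dots at random positions (cf. FIG. 5)."""
    rng = np.random.default_rng(seed)
    img = np.zeros((h, w), dtype=np.uint8)
    ys = rng.integers(0, h - dot, n_dots)
    xs = rng.integers(0, w - dot, n_dots)
    for y, x in zip(ys, xs):
        img[y:y + dot, x:x + dot] = 255
    return img

def make_reference_pair(pattern, reference_value):
    """Step (C2): take a stored image pattern as the left-eye reference
    area image R1_L and derive the right-eye one R1_R by shifting every
    pixel horizontally by the reference value (wrap-around used here)."""
    left = np.asarray(pattern)
    right = np.roll(left, -int(round(reference_value)), axis=1)
    return left, right
```

The alternative of step (C2) in the text, pre-storing pattern pairs per shift amount, would replace the `np.roll` call with a lookup keyed by the reference value.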
  • FIGS. 5 to 7 are diagrams illustrating the left eye reference region image R1 L.
• In each figure, a left-eye composite image GS L is exemplified in which the left-eye reference area image R1 L is arranged in the left-eye reference area OA L around the left-eye 3D image area TA L corresponding to the left-eye image G L.
  • FIG. 5 schematically shows an image pattern showing a specific pattern in which relatively large dots are randomly arranged.
  • FIG. 6 schematically shows an image pattern including an information display column including a digital broadcast data column Pa1 and a time column Ca1.
  • FIG. 7 schematically shows an image pattern including the operation button group Ba1 and the time column Ta1.
• The method of designating the left-eye reference area OA L and the right-eye reference area OA R in the reference area designation unit 467 differs depending on the mode of the reference area designation unit 467 set by the mode setting unit 466. For example, a case where the left-eye reference area OA L and the right-eye reference area OA R are predetermined areas, and a case where the left-eye reference area OA L and the right-eye reference area OA R are designated in accordance with one or more of the left-eye image G L and the right-eye image G R, may be considered.
• In the former case, a predetermined left-eye reference area OA L and a predetermined right-eye reference area OA R may be designated, or the left-eye reference area OA L and the right-eye reference area OA R may be designated in accordance with the operation of the operation unit 41 by the user.
• The left-eye reference area OA L can be designated in one or more of a plurality of areas including, for example, the area surrounding the left-eye image G L and an area corresponding to that surrounding area in an image separate from the left-eye image G L.
• Similarly, the right-eye reference area OA R can be designated in one or more of a plurality of areas including, for example, the area surrounding the right-eye image G R and an area corresponding to that surrounding area in an image separate from the right-eye image G R.
• For example, the left-eye reference area OA L can be designated so as to include the area surrounding the left-eye 3D image area TA L corresponding to the left-eye image G L.
  • the left eye reference area OA L may be designated so as to include a specific area of the area surrounding the left eye image GL .
• For example, the left and right areas sandwiching the left-eye 3D image area TA L may be designated as the left-eye reference area OA L.
• An area below the left-eye 3D image area TA L may also be designated as the left-eye reference area OA L.
• An area of non-uniform width surrounding the left-eye 3D image area TA L may also be designated as the left-eye reference area OA L.
• In the latter case, the left-eye reference area OA L and the right-eye reference area OA R can be designated as follows.
• The left-eye reference area OA L can be designated in one or more of a plurality of areas including the area surrounding the left-eye image G L and an area corresponding to that surrounding area in an image separate from the left-eye image G L.
• The right-eye reference area OA R can be designated in one or more of a plurality of areas including the area surrounding the right-eye image G R and an area corresponding to that surrounding area in an image separate from the right-eye image G R.
• The left-eye reference area OA L may also be superimposed on one or more of the area near the outer edge of the left-eye image G L and an area corresponding to that area near the outer edge in an image separate from the left-eye image G L.
• The right-eye reference area OA R may also be superimposed on one or more of the area near the outer edge of the right-eye image G R and an area corresponding to that area near the outer edge in an image separate from the right-eye image G R.
• In accordance with such designation methods, the left-eye reference area OA L and the right-eye reference area OA R are designated.
• As the attention area, the one detected by the attention area detection unit 462 may be employed.
• For example, a mode is conceivable in which the left-eye reference area OA L and the right-eye reference area OA R are designated if the difference between the maximum value and the minimum value of the positional shift amounts between the reference pixels indicating the same portion of the object and the corresponding pixels is larger than a first threshold value and smaller than a second threshold value. On the other hand, if this difference is equal to or smaller than the first threshold value or equal to or larger than the second threshold value, a mode in which the left-eye reference area OA L and the right-eye reference area OA R are not designated can be considered. Further, for example, the smaller the difference between the maximum value and the minimum value of the positional shift amounts becomes, the more at least one of the number and the size of the left-eye reference area OA L and the right-eye reference area OA R may be increased.
• Similarly, a mode is conceivable in which the left-eye reference area OA L and the right-eye reference area OA R are designated if the difference between the maximum value and the minimum value of the positional shift amounts between the reference pixels indicating the same portion of the object in the attention area and the corresponding pixels is larger than the first threshold value and smaller than the second threshold value. On the other hand, if this difference is equal to or smaller than the first threshold value or equal to or larger than the second threshold value, a mode in which the left-eye reference area OA L and the right-eye reference area OA R are not designated can be considered. Further, for example, the smaller the difference between the maximum value and the minimum value of the positional shift amounts in the attention area becomes, the more at least one of the number and the size of the left-eye reference area OA L and the right-eye reference area OA R may be increased.
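The threshold logic above can be condensed into two small helpers. The first implements the stated condition (designate the reference areas only when the spread of shift amounts lies strictly between the two thresholds); the second shows one possible way to grow the reference area as the spread shrinks — the text only says the number or size may be increased, so the linear rule, the names, and the default values are all assumptions.

```python
def designate_reference_areas(shift_max, shift_min, first_threshold, second_threshold):
    """Decide whether to designate the reference areas OA_L / OA_R: only
    when the spread of positional shift amounts (max - min) lies strictly
    between the first and second thresholds, per the mode described above."""
    spread = shift_max - shift_min
    return first_threshold < spread < second_threshold

def reference_area_size(spread, base_size=32, second_threshold=40):
    """Illustrative scaling: shrink the reference area as the spread of
    shift amounts grows (equivalently, enlarge it as the spread shrinks).
    The linear mapping is an assumption for the sketch."""
    spread = min(max(spread, 0), second_threshold)
    return int(base_size * (1.0 - spread / second_threshold))
```

The same helpers apply whether the spread is measured over the whole image or only over the attention area.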
• As a specific method for designation method (D4), for example, a mode is conceivable in which the left-eye reference area OA L and the right-eye reference area OA R are designated at positions corresponding to the attention area detected by the attention area detection unit 462 from one or more of the left-eye image G L and the right-eye image G R.
• For example, when the object area Ob1 serving as the attention area moves in a predetermined direction, the positions of the left-eye reference area OA L and the right-eye reference area OA R are changed accordingly. Further, when the size of the attention area changes, the sizes of the left-eye reference area OA L and the right-eye reference area OA R are changed. Further, when the number of attention areas changes, the number of left-eye reference areas OA L and right-eye reference areas OA R is changed.
• In the image combining unit 468, the second stereoscopic image information is generated by arranging the left-eye reference area image R1 L in the left-eye reference area OA L designated by the reference area designation unit 467, and the right-eye reference area image R1 R in the right-eye reference area OA R designated by the reference area designation unit 467.
• In the image combining unit 468, the left-eye reference area image R1 L can be arranged in one or more of a plurality of areas including the area surrounding the left-eye image G L and an area corresponding to that surrounding area in an image separate from the left-eye image G L.
• Similarly, the right-eye reference area image R1 R may be arranged in one or more of a plurality of areas including the area surrounding the right-eye image G R and an area corresponding to that surrounding area in an image separate from the right-eye image G R.
• As shown in FIGS. 5 to 11, a left-eye composite image GS L in which the left-eye image G L and the left-eye reference area image R1 L are combined is generated.
• A right-eye composite image GS R in which the right-eye image G R and the right-eye reference area image R1 R are combined can also be generated as an image of the same form as the left-eye composite image GS L.
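Composing a reference area image into the area surrounding the main image, as in GS L above, can be sketched like this for 2-D grayscale arrays: the pattern is tiled over a slightly larger canvas and the main image is pasted into the center. The border width and the function name are illustrative choices.

```python
import numpy as np

def compose_with_reference_border(image, pattern, border=8):
    """Build a composite like GS_L: the main (grayscale) image sits in the
    3D image area and the reference area image, tiled from a pattern,
    fills the surrounding reference area."""
    h, w = image.shape
    H, W = h + 2 * border, w + 2 * border
    # ceil division so the tiled pattern covers the whole canvas
    reps = (-(-H // pattern.shape[0]), -(-W // pattern.shape[1]))
    canvas = np.tile(pattern, reps)[:H, :W].copy()
    canvas[border:border + h, border:border + w] = image
    return canvas
```

Running it once per eye, with the left-eye and right-eye reference area images, yields the GS L / GS R pair.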
  • the second stereoscopic image information generated in this way may be information in a format including at least one of the first format and the second format.
• The first format includes a format in which the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R can be displayed superimposed simultaneously on one screen.
  • the second format includes one or more images among the left-eye image G L , the right-eye image G R , the left-eye reference region image R1 L , and the right-eye reference region image R1 R on one screen. It includes a format in which one or more remaining images can be displayed in time sequence.
• FIG. 12 is a flowchart showing an operation flow of the image processing apparatus according to the first embodiment. This operation flow is realized by the control unit 46 reading and executing the program PG1 in the storage unit 44. For example, when execution of image processing relating to a 3D image in the information processing apparatus 4 is requested in accordance with an operation of the operation unit 41 by the user, this operation flow is started.
• In step S1, the first stereoscopic image information is acquired by the image acquisition unit 461.
  • step S2 the mode setting unit 466 determines whether or not the attention area detection unit 462 is set to a mode for detecting the attention area. If the mode for detecting the attention area is set, the process proceeds to step S3. If the mode for detecting the attention area is not set, the process proceeds to step S4.
• In step S3, the attention area detection unit 462 detects the attention area, targeting at least one of the left-eye image G L and the right-eye image G R.
• In step S4, the reference shift amount determination unit 463 determines the reference value of the shift amount in accordance with the amount of positional shift of the pixels indicating the same portion of the object between the left-eye image G L and the right-eye image G R.
  • step S5 the reference region image acquisition unit 464 acquires reference stereoscopic image information related to the left-eye reference region image R1 L and the right-eye reference region image R1 R.
• The left-eye reference area image R1 L and the right-eye reference area image R1 R have a relationship in which the positions of the pixels indicating the same portion of the display object are shifted in one direction (here, the horizontal direction) by an amount corresponding to the reference value determined by the reference deviation amount determination unit 463.
• In step S6, the mode setting unit 466 determines whether or not the signal reception unit 465 has received a signal regarding the mode setting related to the designation of the left-eye reference area OA L and the right-eye reference area OA R. If a signal related to mode setting has been received, the process proceeds to step S7; if not, the process proceeds to step S8.
• In step S7, the mode setting unit 466 sets the reference area designation unit 467 to one of a plurality of modes including the first mode and the second mode in accordance with the signal received by the signal reception unit 465. It is assumed that the reference area designation unit 467 is initially set to a predetermined mode before the mode is set in step S7.
• In step S8, the reference area designation unit 467 designates the left-eye reference area OA L in which the left-eye reference area image R1 L is to be combined, and the right-eye reference area OA R in which the right-eye reference area image R1 R is to be combined.
• In step S9, the image combining unit 468 combines the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R to generate the second stereoscopic image information, and this operation flow ends.
• Note that second stereoscopic image information corresponding to one still image may be generated based on one stereo image, or second stereoscopic image information corresponding to a moving image including a plurality of frame images may be generated based on a plurality of stereo images.
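The S1-S9 flow can be tied together in a short end-to-end sketch. Here a precomputed disparity map stands in for the correspondence search of step S4, the reference area image pair is built by shifting a stored pattern (step S5), and each eye's image is composited with its reference area image in a surrounding border (steps S8-S9). Everything here — the names, the border placement, the mean statistic — is an illustrative assumption, not the patent's implementation.

```python
import numpy as np

def generate_second_stereo_info(left, right, disparity_map, pattern, border=4):
    """End-to-end sketch of the S1-S9 flow for 2-D grayscale arrays:
    determine a reference shift value, derive the reference area image
    pair, and composite each eye's image with its reference area image."""
    # S4: reference value as a representative of the per-pixel shifts
    ref = int(round(float(np.mean(disparity_map))))
    # S5/(C1)-(C2): pattern as the left-eye reference area image,
    # horizontally shifted copy as the right-eye one (wrap-around).
    ref_left = pattern
    ref_right = np.roll(pattern, -ref, axis=1)

    def compose(img, pat):
        h, w = img.shape
        H, W = h + 2 * border, w + 2 * border
        reps = (-(-H // pat.shape[0]), -(-W // pat.shape[1]))  # ceil division
        canvas = np.tile(pat, reps)[:H, :W].copy()
        canvas[border:border + h, border:border + w] = img
        return canvas

    # S8-S9: one composite per eye forms the second stereoscopic image info
    return compose(left, ref_left), compose(right, ref_right)
```

A full implementation would add the mode handling of steps S2 and S6-S7 and the attention-area detection of step S3 around this core.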
• As described above, the left-eye reference area image R1 L and the right-eye reference area image R1 R give a sense of depth corresponding to the reference value of the shift amount.
• As a result, the sense of distance from the user to the object that the left-eye 3D image area TA L and the right-eye 3D image area TA R give the user is easily enhanced. Consequently, the sense of depth that can be obtained by the user viewing the 3D image can be improved.
• The display of 3D images can be realized in a mode in which the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R are displayed as superimposed images.
• The display of 3D images can also be realized in a mode in which one or more of the left-eye image G L, the right-eye image G R, the left-eye reference area image R1 L, and the right-eye reference area image R1 R and the remaining one or more images are displayed in time sequence. As described above, even when a 3D image is displayed in various display modes, the sense of depth that can be obtained by a user viewing the 3D image can be improved.
• Also, the reference value of the shift amount is determined in accordance with the amount of positional shift between the images; as a result, the sense of depth that can be obtained by the user viewing the 3D image can be further improved.
• Moreover, when the reference value of the shift amount is determined based on the attention area, the sense of depth that can be obtained by the user viewing the 3D image can be further improved in the area that interests the user.
• The left-eye reference area image R1 L can be arranged in one or more of a plurality of areas including the area surrounding the left-eye image G L and an area corresponding to that surrounding area in an image separate from the left-eye image G L.
• The right-eye reference area image R1 R can be arranged in one or more of a plurality of areas including the area surrounding the right-eye image G R and an area corresponding to that surrounding area in an image separate from the right-eye image G R.
• The left-eye reference area OA L and the right-eye reference area OA R are designated so that at least one of their position and size changes. Thereby, a display mode suitable for viewing a 3D image can be realized.
  • FIG. 13 is a diagram for explaining an overview of processing according to the second embodiment.
• In the first embodiment, the left-eye reference area image R1 L is arranged in one or more of a plurality of areas including the area surrounding the left-eye image G L and an area corresponding to that surrounding area in an image separate from the left-eye image G L, and the right-eye reference area image R1 R is arranged in one or more of a plurality of areas including the area surrounding the right-eye image G R and an area corresponding to that surrounding area in an image separate from the right-eye image G R.
• In the second embodiment, by contrast, the left-eye reference area image R1 L is arranged so as to be superimposed on an area inside the left-eye image G L (also referred to as an internal area), and the right-eye reference area image R1 R may be arranged so as to be superimposed on the internal area of the right-eye image G R. Also, the left-eye reference area image R1 L may be arranged in an area corresponding to the internal area of the left-eye image G L in an image separate from the left-eye image G L, and the right-eye reference area image R1 R may be arranged in an area corresponding to the internal area of the right-eye image G R in an image separate from the right-eye image G R.
• An example of the left-eye composite image GS L is shown in which the left-eye reference area image R1 L is combined in the left-eye reference area OA L in the internal area of the left-eye 3D image area TA L corresponding to the left-eye image G L shown in FIG. 1.
• An example of the right-eye composite image GS R is shown in which the right-eye reference area image R1 R is combined in the right-eye reference area OA R in the internal area of the right-eye 3D image area TA R corresponding to the right-eye image G R shown in FIG. 1.
  • the process according to the second embodiment can be realized in the same information processing system 1 (FIG. 3) as that of the first embodiment. That is, the function as the image processing apparatus according to the second embodiment can be realized in the control unit 46. Further, the processing according to the second embodiment can be realized by the functional configuration shown in FIG. 4, similarly to the processing according to the first embodiment. However, in the process according to the second embodiment, the reference stereoscopic image information acquired by the reference area image acquisition unit 464 is different from the process according to the first embodiment, and is specified by the reference area specifying unit 467. The left eye reference area OA L and the right eye reference area OA R are different. Other processes in the second embodiment are the same as the processes according to the first embodiment.
  • the left eye reference area image R1 L and the right eye reference area image R1 R indicated by the reference stereoscopic image information may be, for example, specific markers.
• A specific marker has a unique feature such that the user can readily distinguish it from the objects originally contained in the left-eye image G L and the right-eye image G R.
  • Intrinsic features can be realized, for example, by shape, color, texture, and the like.
• The specific marker may be, for example, a marker formed by CG (computer graphics) or the like.
  • examples of the specific marker include various simple shapes such as a stick, a triangle, and an arrow, and various objects such as a vase and a butterfly.
• This makes it less likely that the user confuses the specific marker with an object originally contained in the left-eye image G L and the right-eye image G R.
• The method of designating the left-eye reference area OA L and the right-eye reference area OA R in the reference area designation unit 467 differs depending on the mode of the reference area designation unit 467 set by the mode setting unit 466. For example, a case where the left-eye reference area OA L and the right-eye reference area OA R are predetermined areas, and a case where the left-eye reference area OA L and the right-eye reference area OA R are designated in accordance with one or more of the left-eye image G L and the right-eye image G R, may be considered.
• In the former case, a predetermined left-eye reference area OA L and a predetermined right-eye reference area OA R may be designated, or the left-eye reference area OA L and the right-eye reference area OA R may be designated in accordance with the operation of the operation unit 41 by the user.
• The left-eye reference area OA L is designated in one or more of a plurality of areas including, for example, the internal area of the left-eye image G L and an area corresponding to that internal area in an image separate from the left-eye image G L.
• The right-eye reference area OA R is designated in one or more of a plurality of areas including, for example, the internal area of the right-eye image G R and an area corresponding to that internal area in an image separate from the right-eye image G R.
• In the image combining unit 468, the left-eye reference area image R1 L is arranged in one or more of a plurality of areas including the internal area of the left-eye image G L and an area corresponding to that internal area in an image separate from the left-eye image G L.
• The right-eye reference area image R1 R is arranged in one or more of a plurality of areas including the internal area of the right-eye image G R and an area corresponding to that internal area in an image separate from the right-eye image G R.
• In the latter case, the left-eye reference area OA L and the right-eye reference area OA R can be designated as follows.
• The left-eye reference area OA L can be designated in one or more of a plurality of areas including the internal area of the left-eye image G L and an area corresponding to that internal area in an image separate from the left-eye image G L.
• The right-eye reference area OA R can be designated in one or more of a plurality of areas including the internal area of the right-eye image G R and an area corresponding to that internal area in an image separate from the right-eye image G R.
  • the above-described specification methods (D1) to (D4) may be considered.
• As a specific designation method, for example, a mode in which the left-eye reference area OA L and the right-eye reference area OA R are designated in the vicinity of the attention area is conceivable.
  • the vicinity of the attention area for example, the lower left, upper left, lower right, upper right, upper, lower, left, and right positions of the attention area can be considered. Further, as the vicinity of the attention area, a position surrounding the attention area may be considered.
• For example, a mode in which the left-eye reference area OA L and the right-eye reference area OA R are designated at the lower left of the object area Ob1 that is the attention area can be considered.
• A ring-shaped left-eye reference area OA L and right-eye reference area OA R surrounding the object area Ob1 that is the attention area can also be considered.
• Further, a mode in which a left-eye reference area OA L and a right-eye reference area OA R adjacent to and surrounding the object area Ob1 that is the attention area are designated is also conceivable.
• If the size of the attention area changes, the sizes of the left-eye reference area OA L and the right-eye reference area OA R may be changed. If the number of attention areas changes, the number of left-eye reference areas OA L and right-eye reference areas OA R may be changed.
  • Non-attention areas include areas other than the attention area detected by the attention area detection unit 462.
• Examples of the non-attention area include, within the left-eye image G L and the right-eye image G R, an area in the vicinity of an edge, an area showing little object motion as obtained from the analysis result of motion vectors, and an area whose color and texture are inconspicuous. This makes it unlikely that the designated areas disturb the display of the area that the user is paying attention to. As a result, it is possible to achieve both the suppression of visual discomfort and the improvement of the sense of depth that can be obtained by the user viewing the 3D image.
  • The left-eye reference region image R1_L is arranged in one or more of a plurality of regions, including an internal region of the left-eye image G_L and the region, in an image other than the left-eye image G_L, that corresponds to that internal region.
  • The right-eye reference region image R1_R is arranged in one or more of a plurality of regions, including an internal region of the right-eye image G_R and the region, in an image other than the right-eye image G_R, that corresponds to that internal region.
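Arranging the reference region images into the left-eye and right-eye images can be sketched as pasting the same patch into both images, with the right-eye copy offset horizontally by the reference deviation so the pair exhibits the reference parallax. This is a minimal sketch with invented names (`paste`, `composite_pair`, images as 2-D lists), not the patent's implementation:

```python
# Hypothetical sketch: composite a reference patch into a left/right image
# pair so that the two copies are offset horizontally by d pixels.

def paste(img, patch, top, left):
    out = [row[:] for row in img]           # copy, do not mutate the input
    for i, prow in enumerate(patch):
        for j, p in enumerate(prow):
            out[top + i][left + j] = p
    return out

def composite_pair(img_l, img_r, patch, top, left, d):
    """Place patch at (top, left) in the left image and at (top, left - d)
    in the right image, giving the pair a horizontal deviation of d."""
    return paste(img_l, patch, top, left), paste(img_r, patch, top, left - d)

L = [[0] * 8 for _ in range(4)]
R = [[0] * 8 for _ in range(4)]
out_l, out_r = composite_pair(L, R, [[7, 7]], 1, 4, 2)
print(out_l[1])   # [0, 0, 0, 0, 7, 7, 0, 0]
print(out_r[1])   # [0, 0, 7, 7, 0, 0, 0, 0]
```

The sign convention for `d` (right copy shifted left) is an arbitrary choice for this sketch.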
  • The method for designating the left-eye reference area OA_L and the right-eye reference area OA_R according to the first embodiment and the method according to the second embodiment may be executed selectively: when the reference area designating unit 467 is set to the first mode, the designation method according to the first embodiment is executed, and when it is set to the second mode, the designation method according to the second embodiment is executed.
  • Both methods may also be performed simultaneously.
  • The reference value of the shift amount is determined according to the amount of positional deviation, between the left-eye image G_L and the right-eye image G_R, of the pixels showing the same portion of the object; however, the invention is not limited to this. For example, the reference value of the deviation amount may be a fixed value, and the left-eye reference area OA_L and the right-eye reference area OA_R may be designated based on one or more of the left-eye image G_L and the right-eye image G_R.
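Determining a deviation amount between the left-eye and right-eye images is, in essence, a disparity estimate. The sketch below illustrates one common way to do it (minimizing the sum of absolute differences over candidate shifts); the 1-D scanline simplification and all names are assumptions for illustration, not the patent's stated method:

```python
# Hypothetical sketch: estimate the horizontal deviation of pixels showing
# the same portion of the object between a left-eye and a right-eye scanline
# by minimizing the mean absolute difference over candidate shifts.

def estimate_shift(row_l, row_r, max_shift):
    n = len(row_l)
    best_d, best_sad = 0, float("inf")
    for d in range(-max_shift, max_shift + 1):
        sad, cnt = 0, 0
        for x in range(n):
            if 0 <= x + d < n:              # compare only overlapping pixels
                sad += abs(row_l[x] - row_r[x + d])
                cnt += 1
        if cnt and sad / cnt < best_sad:
            best_sad, best_d = sad / cnt, d
    return best_d

left  = [0, 0, 9, 9, 0, 0, 0, 0]
right = [0, 0, 0, 0, 9, 9, 0, 0]   # same feature, displaced by 2 pixels
print(estimate_shift(left, right, 3))   # 2
```

The estimated value would then serve as the reference value of the shift amount; as the text notes, a fixed value could be used instead.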

Abstract

The object of the present invention is to improve the sense of depth obtained by a user viewing a 3-D image. To achieve this, by way of example, first stereoscopic image information is acquired concerning a first image and a second image that have a relationship in which the positions of pixels indicating the same part of an object deviate in a given direction. Further, a reference value of the deviation amount is determined according to the deviation amount of the positions of the pixels indicating the same part of the object between the first image and the second image. In addition, information is acquired concerning a first reference area image and a second reference area image that have a relationship in which the positions of pixels indicating the same part of an object to be displayed deviate in the given direction by an amount corresponding to the reference value. Second stereoscopic image information is then generated by compositing the first image, the second image, the first reference area image, and the second reference area image.
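The abstract's four steps (acquire the image pair, determine the reference deviation, build a reference-area pair with exactly that deviation, composite everything) can be tied together in one small sketch. All helper names are invented, and tiny 1-D "images" stand in for real frames; this is a sketch under those simplifying assumptions, not the claimed implementation:

```python
# Hypothetical end-to-end sketch of the abstract's steps on 1-D "images".

def reference_deviation(img1, img2, max_shift=3):
    """Step 2: reference value = deviation minimizing the total absolute
    difference between the two images over candidate shifts."""
    def cost(d):
        return sum(abs(img1[x] - img2[x + d])
                   for x in range(len(img1)) if 0 <= x + d < len(img1))
    return min(range(-max_shift, max_shift + 1), key=cost)

def make_stereo_pair(img1, img2, patch, pos):
    """Steps 3-4: place the reference patch in each image so its two copies
    deviate by exactly the reference value, and return the composited pair."""
    d = reference_deviation(img1, img2)
    out1, out2 = img1[:], img2[:]
    for j, p in enumerate(patch):
        out1[pos + j] = p
        out2[pos - d + j] = p   # second copy offset by the reference value
    return out1, out2, d

o1, o2, d = make_stereo_pair([0, 0, 9, 9, 0, 0, 0, 0],
                             [0, 0, 0, 0, 9, 9, 0, 0], [5], 6)
print(d)   # 2
```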
PCT/JP2011/077186 2010-12-03 2011-11-25 Image processing device, image processing method, and program WO2012073823A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-270158 2010-12-03
JP2010270158 2010-12-03

Publications (1)

Publication Number Publication Date
WO2012073823A1 true WO2012073823A1 (fr) 2012-06-07

Family

ID=46171761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/077186 WO2012073823A1 (fr) 2010-12-03 2011-11-25 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2012073823A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11155155A (ja) * 1997-11-19 1999-06-08 Toshiba Corp Stereoscopic video processing device
WO2010092823A1 (fr) * 2009-02-13 2010-08-19 Panasonic Corporation Display control device
WO2010122775A1 (fr) * 2009-04-21 2010-10-28 Panasonic Corporation Video processing apparatus and method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11845710; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 11845710; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)