WO2012105427A1 - Image processing device, associated program, and image processing method - Google Patents

Image processing device, associated program, and image processing method

Info

Publication number
WO2012105427A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
stereoscopic
index
eye
time
Prior art date
Application number
PCT/JP2012/051744
Other languages
English (en)
Japanese (ja)
Inventor
聡志 麻生 (Satoshi Aso)
岳彦 指田 (Takehiko Sashida)
Original Assignee
コニカミノルタホールディングス株式会社 (Konica Minolta Holdings, Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by コニカミノルタホールディングス株式会社 (Konica Minolta Holdings, Inc.)
Publication of WO2012105427A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity

Definitions

  • the present invention relates to an image processing technique for generating a stereoscopic image.
  • 3D televisions that use stereoscopic video (also called 3D video or stereo video) are in the spotlight.
  • In a 3D television, two images obtained by viewing the same object from different viewpoints are used to generate an image that can be viewed stereoscopically (also referred to as a 3D image or a stereoscopic image).
  • In such an image, the position of the pixels indicating the same part of the object is shifted between the image for the left eye and the image for the right eye, and the focus adjustment function of the human eyes is exploited to give the observer a sense of depth.
  • The amount of displacement of the pixel positions that capture the same part of the object between the image for the left eye and the image for the right eye is also referred to as "parallax".
  • This 3D image technology has been adopted in various video fields.
  • For example, an endoscope apparatus has been proposed that enables stereoscopic viewing of an image over a wide field of view by adjusting the parallax detected from a stereo image so that it falls within the human fusional range (for example, Patent Document 1).
  • There has also been proposed a stereoscopic video processing apparatus that displays a stereoscopic reference image when a stereoscopic video is displayed and the sense of depth is adjusted (for example, Patent Document 2).
  • However, when the set parallax is somewhat small, it may be difficult for the observer to obtain a sense of depth depending on the size of the screen on which the image is displayed. That is, even for the same object, the object may appear different to the observer when viewed as a 3D image than when viewed directly.
  • Against this state of 3D image technology, a technique has been proposed in which a frame image is displayed around a 3D image in order to vary the screen, add interest, or facilitate stereoscopic viewing (for example, Patent Document 3). With this technique, the frame image to be used can be selected from a plurality of prepared frame images.
  • Patent Document 1: JP-A-8-313825; Patent Document 2: JP-A-11-155155; Patent Document 3: International Publication No. WO 2003/093023
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a technique capable of improving the sense of depth perceived by an observer who observes a stereoscopic image.
  • In order to solve the above problems, an image processing apparatus according to a first aspect includes a stereoscopic index image generation unit that generates a time-series stereoscopic index image having a parallax such that, under the same image display conditions, the stereoscopic index image is observed farther than the time-series first stereoscopic image of the observation object, and a combining unit that generates a time-series second stereoscopic image by combining the first stereoscopic image and the stereoscopic index image in time-series order.
  • The stereoscopic index image generation unit generates the stereoscopic index image so that the perspective difference corresponding to the difference between the representative value of the parallax of the first stereoscopic image and the parallax of the stereoscopic index image increases in time series.
  • An image processing apparatus according to a second aspect is the image processing apparatus according to the first aspect, further including a detection unit that detects each attention area predicted to attract the observer's eyes in the first stereoscopic image according to a preset detection rule, and the stereoscopic index image generation unit determines the representative value based on the parallax of the attention area.
  • An image processing apparatus according to a third aspect is the image processing apparatus according to the first or second aspect, wherein the stereoscopic index image generation unit changes the perspective difference according to the time-series change rate of the representative value.
  • An image processing apparatus according to a fourth aspect is the image processing apparatus according to the second aspect, wherein the stereoscopic index image generation unit changes the perspective difference according to a pixel value of the attention area.
  • An image processing apparatus according to a fifth aspect is the image processing apparatus according to the first or second aspect, further including an observer information acquisition unit that acquires information related to the type of observer assumed for the second stereoscopic image, and the stereoscopic index image generation unit changes the perspective difference according to the type of observer.
  • An image processing apparatus according to a sixth aspect is the image processing apparatus according to the first or second aspect, further including an environment information acquisition unit that acquires information related to the observation environment assumed for the second stereoscopic image, and the stereoscopic index image generation unit changes the perspective difference according to the observation environment.
  • An image processing apparatus according to a seventh aspect is the image processing apparatus according to any one of the first to sixth aspects, wherein the stereoscopic index image generation unit changes the perspective difference while decreasing the size of the index object appearing in the stereoscopic index image in time series.
  • An image processing apparatus according to an eighth aspect is the image processing apparatus according to any one of the second to seventh aspects, wherein the stereoscopic index image generation unit changes the position of the stereoscopic index image in time series, following the time-series position change of the attention area in the image space of the first stereoscopic image.
  • An image processing apparatus according to a ninth aspect is the image processing apparatus according to any one of the first to seventh aspects, wherein the stereoscopic index image generation unit generates the stereoscopic index image so that an image of a predetermined index object in the stereoscopic index image moves away toward a predetermined vanishing point in time series.
  • An image processing apparatus according to a tenth aspect includes a stereoscopic index image generation unit that generates a time-series stereoscopic index image having a parallax such that it is observed farther than the time-series first stereoscopic image of the observation object under the same image display conditions, and the generation unit generates the stereoscopic index image so that the perspective difference corresponding to the difference between the representative value of the parallax of the first stereoscopic image and the parallax of the stereoscopic index image increases in time series.
  • A program according to an eleventh aspect causes an image processing apparatus to function as the image processing apparatus according to any one of the first to tenth aspects by being executed by a computer mounted on the image processing apparatus.
  • An image processing method according to a twelfth aspect includes a stereoscopic index image generation step of generating a stereoscopic index image having a parallax such that it is observed farther than the first stereoscopic image of the observation object under the same image display conditions, and the stereoscopic index image is generated in each repetition cycle so that, in the time-series set of second stereoscopic images, the perspective difference corresponding to the difference between the representative value of the parallax of the first stereoscopic image and the parallax of the stereoscopic index image increases in time series.
  • According to any of the above aspects, the second stereoscopic image is an image in which the observation object and the stereoscopic index image are observed together, and the sense of depth perceived by an observer who observes the second stereoscopic image can be improved by rendering in which the perspective difference increases in time series, that is, in which the perspective is observed to expand dynamically.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of an image processing system according to an embodiment.
  • FIG. 2 is a diagram illustrating an example of a functional configuration of the image processing apparatus according to the embodiment.
  • FIG. 3 is a diagram for explaining the parallax.
  • FIG. 4 is a diagram illustrating an example of the first stereoscopic image.
  • FIG. 5 is a diagram illustrating an example of the first stereoscopic image.
  • FIG. 6 is a diagram illustrating an example of the stereoscopic index image.
  • FIG. 7 is a diagram illustrating an example of the stereoscopic index image.
  • FIG. 8 is a diagram illustrating an example of a time-series second stereoscopic image.
  • FIG. 9 is a diagram illustrating an example of a time-series second stereoscopic image.
  • FIG. 10 is a diagram for explaining an example of a method for generating a parallax of a stereoscopic index image.
  • FIG. 11 is a diagram illustrating an example of a time-series change in the position of the stereoscopic index image.
  • FIG. 12 is a diagram illustrating an example of a time-series change in the position of the stereoscopic index image.
  • FIG. 13 is a diagram for explaining an example of a method for generating a stereoscopic index image.
  • FIG. 14 is a diagram illustrating an example of a stereoscopic index image.
  • FIG. 15 is a diagram illustrating an example of a stereoscopic index image.
  • FIG. 16 is a flowchart showing the operation of the image processing apparatus.
  • FIG. 17 is a flowchart showing the operation of the image processing apparatus.
  • FIG. 18 is a flowchart showing the operation of the image processing apparatus.
  • In the following figures, the rightward direction of each image is the +X direction, and the downward direction of each image is the +Y direction; two orthogonal XY axes are attached accordingly. In FIG. 3, three orthogonal XYZ axes are attached.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of an image processing system 100A according to an embodiment.
  • the image processing system 100A mainly includes a stereo camera 300, a line-of-sight detection sensor 47, and an image processing apparatus 200A.
  • the stereo camera 300 mainly includes a first camera 61 and a second camera 62.
  • the first camera 61 and the second camera 62 are mainly configured with a photographing optical system and a control processing circuit (not shown), respectively.
  • the first camera 61 and the second camera 62 are provided with a predetermined baseline length in the horizontal direction, and the light information from the observation object incident on the photographing optical system is synchronized by a control processing circuit or the like. For example, a digital image having a predetermined size such as a 3456 ⁇ 2592 pixel size, which constitutes a stereo image of the observation object, is acquired.
  • the stereo image includes a set of a left-eye image (also referred to as a left-eye image) and a right-eye image (also referred to as a right-eye image), and is an image that can be displayed stereoscopically, that is, a stereoscopic image. (3D image).
  • The stereo camera 300 photographs the observation object successively in time-series order while synchronizing the first camera 61 and the second camera 62 in accordance with control by the image processing apparatus 200A, so that a plurality of first left-eye images 5L and a plurality of first right-eye images 5R can be acquired. That is, the stereo camera 300 acquires the time-series first left-eye images 5L and the time-series first right-eye images 5R, respectively.
  • the stereo camera 300 can acquire one first left-eye image 5L and one first right-eye image 5R in accordance with control by the image processing apparatus 200A.
  • Various operations of the stereo camera 300 are controlled based on control signals supplied from the image processing apparatus 200A via the input/output unit 41 and the communication line DL1.
  • the first left-eye image 5L and the first right-eye image 5R may be color images or monochrome images.
  • the generated one or more first left-eye images 5L and one or more first right-eye images 5R are supplied to the input / output unit 41 of the image processing apparatus 200A via the communication line DL1.
  • the line-of-sight detection sensor 47 detects the line-of-sight information of the observer who is observing the screen of the display unit 43 included in the image processing apparatus 200A.
  • the display unit 43 and the line-of-sight detection sensor 47 are fixed to each other with a predetermined arrangement relationship.
  • Specifically, the line-of-sight detection sensor 47 photographs the observer and, by analyzing the obtained image, detects the direction of the observer's line of sight as line-of-sight information.
  • the analysis of the image can be realized, for example, by detecting the orientation of the face using pattern matching, and identifying the white-eye portion and the black-eye portion in both eyes using a color difference.
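As an illustration of the analysis described above, the following is a rough sketch in Python using OpenCV's stock Haar cascades. The pipeline (frontal-face detection standing in for face-orientation pattern matching, and an intensity threshold separating the dark iris/pupil from the white of the eye) is an assumption for illustration, not the patent's implementation; `estimate_gaze` and all threshold values are hypothetical.

```python
import cv2

# Stock OpenCV Haar cascades shipped with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_gaze(frame):
    """Return a rough horizontal gaze offset in [-0.5, 0.5], or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            eye = face[ey:ey + eh, ex:ex + ew]
            # Dark pixels approximate the iris/pupil ("black-eye" portion);
            # the remainder is the "white-eye" portion.
            _, dark = cv2.threshold(eye, 60, 255, cv2.THRESH_BINARY_INV)
            m = cv2.moments(dark)
            if m["m00"] > 0:
                cx = m["m10"] / m["m00"]     # pupil centroid X in the eye box
                return (cx / ew) - 0.5       # <0: looking left, >0: right
    return None
```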
  • the line-of-sight information obtained by the line-of-sight detection sensor 47 is supplied to the detection unit 13 (FIG. 2) of the image processing apparatus 200A via the communication line DL2.
  • the detection unit 13 performs detection processing for detecting a portion (also referred to as a region of interest) that is observed by the observer from the stereoscopic image displayed on the screen of the display unit 43 using the supplied line-of-sight information.
  • the communication lines DL1 and DL2 may be wired lines or wireless lines.
  • The image processing apparatus 200A mainly includes a CPU 11A, an input/output unit 41, an operation unit 42, a display unit 43, a ROM 44, a RAM 45, and a storage device 46, and is realized by executing a program on, for example, a general-purpose computer.
  • The input/output unit 41 includes, for example, an input/output interface such as a USB interface or a Bluetooth (registered trademark) interface, and an interface such as a multimedia drive or a network adapter for connecting to a LAN or the Internet, and exchanges data with external devices. Specifically, the input/output unit 41 supplies the various control signals with which the CPU 11A controls the stereo camera 300 and the line-of-sight detection sensor 47 via the communication lines DL1 and DL2. Further, the input/output unit 41 supplies the first left-eye image 5L and the first right-eye image 5R captured by the stereo camera 300 and the line-of-sight information acquired by the line-of-sight detection sensor 47 to the functional units of the image processing apparatus 200A.
  • The input/output unit 41 can also supply the first left-eye image 5L and the first right-eye image 5R to the functional units of the image processing apparatus 200A by accepting a storage medium, such as an optical disk, in which the first left-eye image 5L and the first right-eye image 5R are stored in advance.
  • The first left-eye image 5L and the first right-eye image 5R may also be images generated by computer graphics (CG).
  • The operation unit 42 includes, for example, a keyboard or a mouse. When the operator operates the operation unit 42, various control parameters of the image processing apparatus 200A and various operation modes of the image processing apparatus 200A are set.
  • Each functional unit of the image processing apparatus 200A is configured to perform processing according to the operation mode set from the operation unit 42.
  • The display unit 43 is configured by, for example, a liquid crystal display screen for three-dimensional display corresponding to a 3D display system such as a parallax barrier system. The display unit 43 also includes an image processing unit (not shown) that converts a stereoscopic image into the image format corresponding to the three-dimensional display method of the display unit 43, and displays the stereoscopic image, converted as needed by the image processing unit, on the display screen. As the three-dimensional display method of the display unit 43, for example, a method may also be adopted in which the left-eye image and the right-eye image are alternately switched at high speed on the display unit 43 and observed through shutter glasses whose left-eye and right-eye shutters open and close in synchronization with the switching.
  • The display unit 43 can also display the first left-eye image 5L and the first right-eye image 5R supplied from the stereo camera 300, as well as various setting information related to the image processing apparatus 200A, as two-dimensional images or character information that the observer can visually recognize.
  • ROM (Read Only Memory) 44 is a read-only memory and stores a program PG1 for operating the CPU 11A.
  • a readable / writable nonvolatile memory (for example, a flash memory) may be used instead of the ROM 44.
  • A RAM (Random Access Memory) 45 is a readable/writable volatile memory that functions as an image storage unit for temporarily storing various images acquired by the image processing apparatus 200A and various stereoscopic images generated by the image processing apparatus 200A, and as a work memory in which the CPU 11A temporarily stores processing information.
  • the storage device 46 is configured by, for example, a readable / writable nonvolatile memory such as a flash memory, a hard disk device, or the like, and permanently records information such as various control parameters and various operation modes of the image processing device 200A.
  • The storage device 46 also permanently stores the detection program PG2 executed by the detection unit 13 (FIG. 2) described later, and the original index image 6S (FIGS. 2 and 10) used by the stereoscopic index image generation unit 18 (FIG. 2) to generate the left-eye index image 6L (FIG. 2) and the right-eye index image 6R (FIG. 2).
  • A CPU (Central Processing Unit) 11A is a control processing device that supervises and controls each functional unit of the image processing apparatus 200A, and executes control and processing according to the program PG1 and the like stored in the ROM 44.
  • The CPU 11A includes a stereoscopic image acquisition unit 12, a detection unit 13, a determination unit 14, an observer information acquisition unit 15a, an environment information acquisition unit 15b, a parallax generation unit 16, a position specifying unit 17a, a size specifying unit 17b, a stereoscopic index image generation unit 18, and a combination unit 19 (FIG. 2).
  • Using these functional units, the CPU 11A generates, from the first left-eye image 5L and the first right-eye image 5R representing the observation object in time series, that is, from the time-series first stereoscopic image G5 (FIG. 2), the time-series second left-eye image 7L (FIG. 2) and second right-eye image 7R (FIG. 2), that is, the time-series second stereoscopic image G7 (FIG. 2), each frame of which includes the image of the observation object and the image of an index object for making the stereoscopic image of the observation object stand out.
  • Specifically, the CPU 11A generates a time-series stereoscopic image of the index object (also referred to as a "stereoscopic index image") G6 (FIG. 2) having a parallax such that it is observed farther than the time-series first stereoscopic image G5 of the observation object under the same image display conditions.
  • The time-series stereoscopic index image G6 is generated so that the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6, which corresponds to the difference between the representative value of the parallax of the first stereoscopic image G5 and the parallax of the stereoscopic index image G6, increases in time series.
  • The CPU 11A then combines the first stereoscopic image G5 and the generated stereoscopic index image G6 in time-series order, thereby generating the time-series second stereoscopic image G7, that is, the time-series second left-eye image 7L and second right-eye image 7R.
  • In the second stereoscopic image G7, the perspective difference between the first stereoscopic image G5 of the observation object and the stereoscopic index image G6 of the index object increases in time series. That is, the second stereoscopic image G7 is rendered so that the perspective expands dynamically, whereby the sense of depth perceived by an observer who observes the second stereoscopic image G7 can be improved.
  • the CPU 11A controls the operations of the stereo camera 300 and the line-of-sight detection sensor 47, and controls the display unit 43 to display various images, calculation results, various control information, and the like on the display unit 43.
  • The CPU 11A, the input/output unit 41, the operation unit 42, the display unit 43, the ROM 44, the RAM 45, the storage device 46, and the like are electrically connected via a signal line 49. Therefore, the CPU 11A can, for example, control the stereo camera 300 via the input/output unit 41, acquire image information from the stereo camera 300, acquire line-of-sight information from the line-of-sight detection sensor 47, and perform display on the display unit 43, each at a predetermined timing.
  • The functional units such as the stereoscopic index image generation unit 18 and the combination unit 19 are realized by the CPU 11A executing a predetermined program; each of these functional units may also be realized by, for example, a dedicated hardware circuit.
  • The right-eye image and the left-eye image constituting a stereoscopic image have a relationship in which the positions of the pixels corresponding to the same part of the object are shifted from each other in the left-right direction (the X-axis direction in FIG. 3).
  • The amount of displacement of the positions of the pixels capturing the same part of the object between the left-eye image and the right-eye image, more specifically, the result of subtracting the position (X coordinate) of the corresponding pixel in the image space of the right-eye image from the position (X coordinate) of the target pixel in the image space of the left-eye image, as in equation (1), is referred to as "parallax":

    d = x_L − x_R ... (1)

  • Here, x_L is the X coordinate of the target pixel in the left-eye image, and x_R is the X coordinate of the corresponding pixel in the right-eye image.
  • If the parallax value is positive, the same part of the object corresponding to the target pixel in the left-eye image and the corresponding pixel in the right-eye image is observed nearer than the display screen. If the parallax value is zero, the same part is observed at the same position as the display screen. If the parallax value is negative, the same part is observed farther than the display screen.
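The following minimal sketch restates equation (1) and the sign convention above in Python; the function names are illustrative, not from the patent.

```python
def parallax(x_left: float, x_right: float) -> float:
    """Equation (1): d = x_L - x_R, the horizontal pixel displacement."""
    return x_left - x_right

def perceived_depth(d: float) -> str:
    """Sign convention from the text: positive = near, zero = on screen,
    negative = far."""
    if d > 0:
        return "nearer than the display screen"
    if d == 0:
        return "on the display screen"
    return "farther than the display screen"

print(parallax(120.0, 112.0), perceived_depth(8.0))   # 8.0 -> nearer
```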
  • FIG. 3 is a diagram for explaining an example of parallax between the left-eye image 23L and the right-eye image 23R constituting the stereo image.
  • The left-eye image 23L is an example of the first left-eye image 5L (FIGS. 1 and 2) of a subject photographed by the first camera 61, and the right-eye image 23R is an example of the first right-eye image 5R (FIGS. 1 and 2) of the subject photographed by the second camera 62, which is provided at a predetermined baseline length from the first camera 61 in the horizontal direction (the +X direction in FIG. 3).
  • In FIG. 3, the left-eye image 23L and the right-eye image 23R are arranged in the horizontal direction (the X-axis direction in FIG. 3) so that the Y coordinates of the upper ends (and lower ends) of both images are equal to each other.
  • In the left-eye image 23L and the right-eye image 23R, foreground subject images 66a and 66b of the same near-side subject located in the +Z direction with respect to the stereo camera 300 are captured, respectively, and far-side subject images 67a and 67b of the same far-side subject, which is farther from the stereo camera 300 in the +Z direction than the near-side subject, are also captured.
  • The pixel 68a on the foreground subject image 66a and the pixel 68b on the foreground subject image 66b are pixels corresponding to the same point on the near-side subject, and the pixel 69a on the far-side subject image 67a and the pixel 69b on the far-side subject image 67b are pixels corresponding to the same point on the far-side subject.
  • the parallax 9a is a parallax between the pixel 68a and the pixel 68b
  • the parallax 9b is a parallax between the pixel 69a and the pixel 69b.
  • Both the parallax 9a and the parallax 9b have positive values, and the parallax 9a corresponding to the near-side subject is larger than the parallax 9b corresponding to the far-side subject. Therefore, when the left-eye image 23L and the right-eye image 23R are displayed as a stereoscopic image on the display screen of the display unit 43 in this state, the point on the near-side subject corresponding to the parallax 9a is observed nearer than the point on the far-side subject corresponding to the parallax 9b, and both of these two points are observed nearer than the display screen.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration of the image processing apparatus 200A according to the embodiment.
  • The stereoscopic image acquisition unit 12 illustrated in FIG. 2 acquires, via the input/output unit 41, image information on a plurality of first left-eye images 5L (FIG. 2), each including an image representing the observation object in time series, and on a plurality of first right-eye images 5R (FIG. 2), each including an image representing the observation object in time series and having a parallax with the corresponding one of the plurality of first left-eye images 5L.
  • The acquired time-series first left-eye images 5L and first right-eye images 5R are supplied to the detection unit 13, the determination unit 14, and the combination unit 19.
  • FIGS. 4 and 5 are diagrams illustrating first stereoscopic images G5a and G5b, respectively, as examples of the first stereoscopic image G5 (FIG. 2) acquired by the stereoscopic image acquisition unit 12 (FIG. 2) of the image processing apparatus 200A.
  • The first left-eye image 5L1 and the first right-eye image 5R1 constituting the first stereoscopic image G5a are images of the observation object photographed synchronously at a time t1, and the first left-eye image 5L2 and the first right-eye image 5R2 constituting the first stereoscopic image G5b (FIG. 5) are images photographed synchronously at a time t2 at which a predetermined time has elapsed from the time t1. That is, the first stereoscopic images G5a and G5b constitute the time-series first stereoscopic image G5.
  • The first left-eye images 5L1 and 5L2 constitute a plurality of first left-eye images each including an image representing the observation object in time series, that is, the time-series first left-eye images 5L. Similarly, the first right-eye images 5R1 and 5R2 constitute a plurality of first right-eye images each including an image representing the observation object in time series, that is, the time-series first right-eye images 5R.
  • The observation object images 1L1 and 1L2 are the images of the observation object in the first left-eye images 5L1 and 5L2, respectively, and the observation object images 1R1 and 1R2 are the images of the observation object in the first right-eye images 5R1 and 5R2, respectively.
  • the observation object images 1L1 and 1R1 have the same size in the image space, and the observation object images 1L2 and 1R2 have the same size in the image space.
  • the size of the observation object image 1L1 is larger than the size of the observation object image 1L2.
  • the position of the observation object image 1R1 in the first right-eye image 5R1 is shifted to the left ( ⁇ X side) with respect to the position of the observation object image 1L1 in the first left-eye image 5L1.
  • the position of the observation object image 1R2 in the first right-eye image 5R2 is also shifted to the left ( ⁇ X side) with respect to the position of the observation object image 1L2 in the first left-eye image 5L2.
  • Accordingly, the first left-eye image 5L1 and the first right-eye image 5R1 have a positive parallax with each other, and the first left-eye image 5L2 and the first right-eye image 5R2 also have a positive parallax with each other.
  • In the first right-eye image 5R1, the position corresponding to the outer edge of the observation object image 1L1 in the first left-eye image 5L1 is indicated by a broken line for ease of comparison; similarly, in the first right-eye image 5R2, the position corresponding to the outer edge of the observation object image 1L2 in the first left-eye image 5L2 is indicated by a broken line.
  • The detection unit 13 illustrated in FIG. 2 performs detection processing for detecting each attention area that is predicted to attract the observer's eyes in the first stereoscopic image G5 according to a preset detection rule.
  • the detection unit 13 performs the detection process by executing the detection program PG2 (FIG. 2) stored in the storage device 46 in advance.
  • the detection program PG2 is a program that implements a method for detecting each of the attention areas that are predicted to attract the viewer's eyes in the first stereoscopic image G5 according to a preset detection rule.
  • As the detection program PG2, for example, a program is adopted that realizes processing for detecting the attention area based on the observer's line-of-sight information supplied from the line-of-sight detection sensor 47 and on the mutual arrangement relationship between the line-of-sight detection sensor 47 and the display unit 43.
  • As the detection program PG2, a program that realizes processing for acquiring color information of each image constituting the first stereoscopic image G5 and detecting an area whose color differs from its surroundings as the attention area, a program that realizes processing for detecting an area brighter than its surroundings as the attention area, or a program that realizes processing for detecting a person area in the first stereoscopic image G5 using a person detection algorithm and setting that area as the attention area may also be adopted without impairing the usefulness of the present invention.
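As one concrete reading of the brightness-based detection rule above, the following sketch marks pixels brighter than their surroundings and returns their bounding box as a rectangular attention area; the `mean + k * std` threshold and all names are illustrative assumptions.

```python
import numpy as np

def detect_bright_attention_area(gray: np.ndarray, k: float = 1.5):
    """Return (x0, y0, x1, y1) of the bright region, or None if none found."""
    mean, std = gray.mean(), gray.std()
    mask = gray > mean + k * std          # "brighter than the surroundings"
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()
```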
  • the attention areas 38a and 38b shown in FIG. 4 are examples of attention areas in the first left-eye images 5L1 and 5L2, respectively, and the attention areas 39a and 39b are attention in the first right-eye images 5R1 and 5R2, respectively. It is an example of a field.
  • In FIGS. 4 and 5, the attention areas 38a, 38b, 39a, and 39b are rectangular areas circumscribing the outer edges of the observation object images 1L1, 1L2, 1R1, and 1R2; the areas of the observation object images themselves may also be detected as the attention areas.
  • When an attention area is detected, the detection unit 13 generates area information 31 (FIG. 2) related to the detected attention area. Each piece of the generated area information 31 is supplied to the determination unit 14.
  • The determination unit 14 illustrated in FIG. 2 determines a representative parallax 32, which is a representative value of parallax, for each combination in which the plurality of time-series first left-eye images 5L and the plurality of time-series first right-eye images 5R correspond to each other in time-series order.
  • An example of a specific procedure in which the determination unit 14 determines the representative parallax 32 will be described below, taking as an example the case where the two sets of first stereoscopic images illustrated in FIGS. 4 and 5 are acquired.
  • the determining unit 14 determines the representative parallax 32 by sequentially executing the following steps (a-1) to (a-4).
  • (a-1) First, the determination unit 14 specifies the attention area 38a in the first left-eye image 5L1 and the attention area 39a in the first right-eye image 5R1 based on the area information 31 corresponding to the first left-eye image 5L1 and the first right-eye image 5R1, respectively.
  • (a-2) Next, the determination unit 14 performs matching processing using a correlation calculation method between the corresponding attention areas, that is, between the attention areas 38a and 39a, thereby specifying each corresponding pixel in the attention area 39a that corresponds to each target pixel in the attention area 38a.
  • (a-3) Next, the determination unit 14 acquires, as each parallax, the subtraction result obtained by subtracting the X coordinate of each corresponding pixel from the X coordinate of each target pixel.
  • As the correlation calculation method used for the matching processing, for example, an NCC (Normalized Cross Correlation) method, an SAD (Sum of Absolute Differences) method, or a POC (Phase Only Correlation) method is employed.
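The following compact sketch shows the matching of step (a-2) and the parallax extraction of step (a-3) using the NCC method: for each target pixel a small window in the left image is compared against windows shifted along X in the right image, and the best-scoring shift is that pixel's parallax. Window size, search range, and function names are assumptions made for illustration, and bounds checking is minimal.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross correlation between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else -1.0

def pixel_parallax(left, right, x, y, win=5, search=32):
    """Parallax d = x_L - x_R of the target pixel (x, y) in the left image."""
    h = win // 2
    tpl = left[y - h:y + h + 1, x - h:x + h + 1]
    best, best_d = -2.0, 0
    for d in range(search):            # near objects shift right-image content
        xr = x - d                     # candidate corresponding X in the right
        if xr - h < 0:
            break
        score = ncc(tpl, right[y - h:y + h + 1, xr - h:xr + h + 1])
        if score > best:
            best, best_d = score, d
    return best_d
```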
  • (a-4) Next, based on each acquired parallax, the determination unit 14 determines the representative parallax 32 (FIG. 2) between the first left-eye image 5L1 and the first right-eye image 5R1.
  • As the representative parallax 32, for example, one of the average value, the maximum value, the minimum value, the mode, and the median of the parallaxes between the attention areas 38a and 39a may be employed.
  • Alternatively, the representative parallax 32 may be determined based on the parallax of a part of the attention area, such as the position of the center of gravity of the attention area or each pixel on the contour of the image in the attention area.
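A small sketch of step (a-4), assuming the per-pixel parallaxes of the attention area are already available; which statistic is used is a configuration choice, as the text notes.

```python
import numpy as np
from statistics import median, mode

def representative_parallax(parallaxes, how="median"):
    """Collapse per-pixel parallaxes into the representative parallax 32."""
    d = np.asarray(parallaxes, dtype=float)
    return {
        "average": d.mean(),
        "max": d.max(),
        "min": d.min(),
        "median": median(d.tolist()),
        "mode": mode(d.astype(int).tolist()),
    }[how]
```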
  • The determination unit 14 similarly performs steps (a-1) to (a-4) for the first left-eye image 5L2 and the first right-eye image 5R2, thereby determining the representative parallax 32 between the first left-eye image 5L2 and the first right-eye image 5R2.
  • the determined representative parallax 32 is supplied to the parallax generation unit 16.
  • In steps (a-1) to (a-4), the attention area is specified, and the representative parallax 32 is determined based on the image information of the specified attention area.
  • Alternatively, the determination unit 14 may determine the representative parallax 32 between the first left-eye image 5L1 and the first right-eye image 5R1 and the representative parallax 32 between the first left-eye image 5L2 and the first right-eye image 5R2 by performing processing steps corresponding to steps (a-2) to (a-4) over the entire areas of the corresponding first left-eye and first right-eye images, without impairing the usefulness of the present invention.
  • The observer information acquisition unit 15a illustrated in FIG. 2 analyzes the image of the observer photographed by the line-of-sight detection sensor 47 using a technique such as template matching, for example, and acquires type information 33, which is information related to the type of observer assumed for the second stereoscopic image G7.
  • Alternatively, the observer information acquisition unit 15a may acquire the type information 33 related to the type of observer, stored in advance in the storage device 46, via the operation unit 42 according to the set operation mode.
  • As the type information 33, for example, information on the age of the observer, information on a disease of the observer, and the like are employed.
  • the acquired type information 33 is supplied to the parallax generation unit 16.
  • The environment information acquisition unit 15b acquires environment information 34, which is information related to the observation environment assumed for the second stereoscopic image G7, for example, from the statistical distribution of the pixel values in an image of the observation environment captured by the line-of-sight detection sensor 47, or via the operation unit 42 from information stored in advance.
  • As the environment information 34, for example, an index value on a three-level scale from the dark side to the bright side of the observation environment is adopted.
  • the acquired environment information 34 is supplied to the parallax generation unit 16.
  • The parallax generation unit 16 illustrated in FIG. 2 generates a plurality of index parallaxes 35, which are the parallaxes respectively corresponding to the plurality of time-series stereoscopic index images G6 generated by the stereoscopic index image generation unit 18. In this generation processing, the parallax generation unit 16 generates the plurality of index parallaxes 35 so that the subtraction results, each obtained by subtracting from one of the index parallaxes 35 the corresponding one of the plurality of representative parallaxes 32 determined by the determination unit 14 for the time-series first stereoscopic images G5, decrease sequentially in time-series order.
  • In other words, the parallax generation unit 16 generates the plurality of index parallaxes 35 so that, under the same image display conditions, each of the time-series stereoscopic index images G6 corresponding to the time-series first stereoscopic images G5 is observed farther than the first stereoscopic images G5, and so that the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6, which corresponds to the difference between the representative parallax 32 of the first stereoscopic image G5 and the index parallax 35 of the stereoscopic index image G6, increases in time series.
  • FIG. 10 is a diagram for explaining an example of a method of generating the parallaxes of the time-series stereoscopic index images G6 (shown in FIGS. 6 and 7, respectively) generated by the parallax generation unit 16 corresponding to the first stereoscopic images G5a and G5b shown in FIGS. 4 and 5, respectively.
  • The line segment 81a indicates the temporal change in the representative parallax 32 corresponding to the time-series first stereoscopic images G5a and G5b acquired at times t1 and t2, respectively.
  • A point p1 is obtained by plotting the value dm1 of the representative parallax 32 of the first stereoscopic image G5a at time t1.
  • A point p2 is obtained by plotting the value dm2 of the representative parallax 32 of the first stereoscopic image G5b at time t2.
  • The line segment 81b indicates the temporal change in the index parallax 35 of the stereoscopic index images G6a (FIG. 6) and G6b (FIG. 7) corresponding to the first stereoscopic images G5a and G5b, respectively.
  • A point p3 is obtained by plotting the value di1 of the index parallax 35 of the stereoscopic index image G6a at time t1.
  • A point p4 is obtained by plotting the value di2 of the index parallax 35 of the stereoscopic index image G6b at time t2.
  • The line segment 81c indicates the temporal change in the subtraction result (difference) obtained by subtracting the representative parallax 32 from the index parallax 35.
  • A point p5 is obtained by plotting, at time t1, the value su1 of the subtraction result obtained by subtracting the representative parallax 32 of the first stereoscopic image G5a from the index parallax 35 of the stereoscopic index image G6a.
  • A point p6 is obtained by plotting, at time t2, the subtraction result su2 obtained by subtracting the representative parallax 32 of the first stereoscopic image G5b from the index parallax 35 of the stereoscopic index image G6b.
  • In the example of FIG. 10, the parallax generation unit 16 first sets, in time-series order, the subtraction-result values su1 and su2 obtained by subtracting the representative parallax 32 of the first stereoscopic image G5 from the index parallax 35 of the stereoscopic index image G6.
  • The setting is performed by choosing negative values su1 and su2 that satisfy the relationship of equation (2); if the value su1 has already been set, only the value su2 is set:

    su2 < su1 < 0 ... (2)
  • Next, the parallax generation unit 16 calculates the values di1 and di2 of the index parallaxes 35 of the stereoscopic index images G6a and G6b from the set values su1 and su2 and the values dm1 and dm2 of the representative parallax 32 supplied from the determination unit 14, according to equations (3) and (4), thereby generating the index parallaxes 35 of the stereoscopic index images G6a and G6b, respectively:

    di1 = dm1 + su1 ... (3)
    di2 = dm2 + su2 ... (4)
  • When the first stereoscopic image G5 is not time-series and consists of a single first left-eye image 5L and first right-eye image 5R, that is, when only the value dm1 of the values dm1 and dm2 has been acquired, the parallax generation unit 16 sets the value dm2 equal to the value dm1 and then generates the index parallaxes 35 of the stereoscopic index images G6a and G6b, respectively.
  • the plurality of generated index parallaxes 35 are supplied to the stereoscopic index image generation unit 18.
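The following minimal sketch puts the reconstructed equations (2) to (4) together: the differences su_t are negative and decrease in time-series order, and each index parallax is the representative parallax plus the current difference. The linear schedule for su_t is an assumption for illustration.

```python
def index_parallaxes(dm, su_start=-4.0, step=-2.0):
    """dm: time-series representative parallaxes [dm1, dm2, ...] in pixels."""
    out = []
    su = su_start
    for dm_t in dm:
        out.append(dm_t + su)     # equations (3), (4): di_t = dm_t + su_t
        su += step                # equation (2): su decreases over time
    return out

print(index_parallaxes([10.0, 9.0]))   # [6.0, 3.0] -> perspective widens
```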
  • The manner of decrease from the value su1 to the value su2, that is, the manner of the time-series decrease in the subtraction result (difference) obtained by subtracting the representative parallax 32 from the index parallax 35, is set according to the preset operation mode of the parallax generation unit 16, for example, according to at least one of the value of the representative parallax 32, the time-series change rate of the representative parallax 32, the type information 33, the environment information 34, and the pixel value information of the first stereoscopic image G5.
  • In other words, the rate of increase of the time-series perspective difference between the image of the observation object and the image of the index object can be set according to at least one of the value of the representative parallax 32, the time-series change rate of the representative parallax 32, the type information 33, the environment information 34, and the pixel value information of the first stereoscopic image G5.
  • When the manner of the time-series decrease in the subtraction result (difference) obtained by subtracting the representative parallax 32 from the index parallax 35 is set according to the value of the representative parallax 32, the parallax generation unit 16 sets the rate of decrease from the value su1 to the value su2 smaller, for example, as the value of the representative parallax 32 at time t1 is larger, that is, as the distance at which the first stereoscopic image G5a is observed from a predetermined reference origin is shorter.
  • This setting is based on the fact that when the observed distance of the first stereoscopic image G5 is short, the sense of depth perceived by an observer of the second stereoscopic image G7 is improved even if the time-series increase rate of the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6 is smaller than when the distance is long.
  • When the manner of the time-series decrease in the subtraction result (difference) obtained by subtracting the representative parallax 32 from the index parallax 35 is set according to the time-series change rate of the representative parallax 32, the parallax generation unit 16 sets the rate of decrease from the value su1 to the value su2 smaller, for example, as the time-series change rate of the representative parallax 32 is larger, that is, as the change rate of the distance at which the first stereoscopic image G5 is observed from the predetermined reference origin is larger.
  • This setting is based on the fact that when the change rate of the observed distance of the first stereoscopic image G5 is large, the sense of depth perceived by an observer of the second stereoscopic image G7 is improved even if the time-series increase rate of the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6 is smaller than when the change rate is small.
  • When the manner of decrease is set according to the type information 33, the parallax generation unit 16 sets the rate of decrease from the value su1 to the value su2 smaller when the type information 33 corresponds to an infant or an elderly person than when it corresponds to a younger group in their teens or twenties. This setting is based on the fact that, for an infant or an elderly observer, poor physical condition and eye fatigue are more likely to occur due to visual changes than for the younger generation.
  • When the manner of the time-series decrease in the subtraction result (difference) obtained by subtracting the representative parallax 32 from the index parallax 35 is set according to the environment information 34 about the observation environment, the parallax generation unit 16 estimates the illuminance of the observation environment based on the environment information 34 and, when the illuminance is high, sets the rate of decrease from the value su1 to the value su2 smaller than when the illuminance is low. This setting is based on the fact that when the illuminance of the observation environment is high, the sense of depth perceived by an observer of the second stereoscopic image G7 is improved even if the time-series increase rate of the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6 is smaller than when the illuminance is low.
  • When the manner of the time-series decrease in the subtraction result (difference) obtained by subtracting the representative parallax 32 from the index parallax 35 is set according to the pixel value information of at least one of the entire first stereoscopic image G5 and the attention area, the rate of decrease from the value su1 to the value su2 is set smaller when the pixel value is large than when the pixel value is small.
  • This setting is based on the fact that when the pixel value of the first stereoscopic image G5 is large, the sense of depth perceived by an observer of the second stereoscopic image G7 is improved even if the time-series increase rate of the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6 is smaller than when the pixel value is small.
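A hedged sketch of the decrease-rate rules above: each condition that the text says should shrink the decrease rate (a near object, a quickly changing distance, an infant or elderly observer, a bright environment, bright pixels) multiplies a base rate by a damping factor. All factor values and thresholds are illustrative assumptions, not values from the patent.

```python
def decrease_rate(base, dm, dm_change_rate, observer, illuminance, pixel_mean):
    """Return the rate of decrease from su1 to su2 under the rules above."""
    rate = base
    if dm > 20:                      # large representative parallax: near object
        rate *= 0.5
    if abs(dm_change_rate) > 5:      # observed distance changing quickly
        rate *= 0.5
    if observer in ("infant", "elderly"):
        rate *= 0.5                  # reduce discomfort from visual change
    if illuminance == "bright":
        rate *= 0.7
    if pixel_mean > 128:             # bright first stereoscopic image
        rate *= 0.7
    return rate
```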
  • In the example of FIG. 10, the index parallaxes 35 are generated based on information about the first stereoscopic image G5 at two times; generating the index parallaxes 35 in the same manner based on information about the first stereoscopic image G5 at three or more times does not impair the usefulness of the present invention. If the index parallaxes 35 are generated based on information about the first stereoscopic image G5 at three or more times, the time-series fluctuation of the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6 can be stabilized compared with generation based on information at two times, and the discomfort felt by an observer of the second stereoscopic image G7 can be further reduced.
  • When the generation processing that generates the index parallaxes 35 by the method illustrated in FIG. 10 based on information about the first stereoscopic image G5 at two times is repeated, a weighted average of the reduction rate of the difference between the index parallax 35 and the representative parallax 32 in the previous generation processing and the reduction rate in the current generation processing may be adopted as the final reduction rate of the current generation processing, without impairing the usefulness of the present invention.
  • The position specifying unit 17a illustrated in FIG. 2 generates representative position information 36, which is information about the position of the stereoscopic index image G6 at each time.
  • The position of the stereoscopic index image G6 is also referred to as the "representative position" because it is a position relating to each of the left-eye index image 6L and the right-eye index image 6R constituting the stereoscopic index image G6.
  • the generated representative position information 36 is supplied to the stereoscopic index image generation unit 18.
  • The stereoscopic index image generation unit 18 determines the positions, in the image space, of the left-eye index image 6L and the right-eye index image 6R constituting the stereoscopic index image G6 based on the representative position information 36, the index parallax 35, and the like, and generates the left-eye index image 6L and the right-eye index image 6R.
  • The position specifying unit 17a can use any of a plurality of methods for generating the representative position information 36 of the stereoscopic index image G6, according to the set operation mode.
  • Representative position generation mode 1: When the representative position generation mode 1 is set, the position specifying unit 17a generates the representative position information 36, which is information about the representative position of the stereoscopic index image G6 at each time, so that the position of the stereoscopic index image G6 changes in time series, following the time-series position change of the attention area in the image space of the first stereoscopic image G5.
  • FIG. 11 is a diagram illustrating an example of a time-series change in the position of the three-dimensional index image G6 in the representative position generation mode 1.
  • the first left-eye image 5L3 at time t1 (FIG. 10) and the first left-eye image 5L4 at time t2 (FIG. 10) are displayed superimposed on the same image space.
  • the first left-eye image 5L3 includes a left-eye observation object image 1L3 related to the observation object
  • the first left-eye image 5L4 includes a left-eye observation object image 1L4 related to the observation object.
  • The areas of the left-eye observation object images 1L3 and 1L4 in the first left-eye images 5L3 and 5L4 are detected as attention areas by the detection unit 13.
  • During the elapse of time from time t1 to time t2, the position of the left-eye observation object image moves from the position of the observation object image 1L3 along the arrow Y1 to the position of the observation object image 1L4.
  • the left-eye index image 6L3 related to the three-dimensional index image G6 at time t1 is displayed in a manner arranged in the first left-eye image 5L3.
  • the left-eye index image 6L4 related to the stereoscopic index image G6 at the time t2 is displayed in a manner arranged in the first left-eye image 5L4.
  • the left-eye index images 6L3 and 6L4 are respectively generated by the three-dimensional index image generation unit 18 based on the representative position information 36 corresponding to the times t1 and t2 generated by the position specifying unit 17a.
  • During the elapse of time from time t1 to time t2, the position of the left-eye index image relating to the stereoscopic index image G6 moves from the position of the left-eye index image 6L3 along the arrow Y2 to the position of the left-eye index image 6L4, following the time-series position change of the observation object image, that is, of the attention area.
  • When the stereoscopic index image G6 is generated based on the representative position information 36 generated by the position specifying unit 17a set to the representative position generation mode 1, the relative positional relationship between the attention area of the first stereoscopic image G5 and the stereoscopic index image G6 does not vary with time, so that the sense of depth perceived by an observer of the second stereoscopic image G7 can be improved.
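A minimal sketch of representative position generation mode 1, assuming the index image keeps a fixed offset from the centroid of the attention area so that it follows the region's motion (arrow Y1 to arrow Y2 in FIG. 11); the offset value and names are illustrative assumptions.

```python
def index_position_mode1(roi_center, offset=(40, -30)):
    """roi_center: (x, y) of the attention area at the current time."""
    return (roi_center[0] + offset[0], roi_center[1] + offset[1])

# The index image position tracks the attention area from time t1 to t2.
positions = [index_position_mode1(c) for c in [(100, 80), (120, 90)]]
```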
  • Representative position generation mode 2: When the representative position generation mode 2 is set, the position specifying unit 17a generates the representative position information 36, which is information about the representative position of the stereoscopic index image G6 at each time, so that the image of the predetermined index object in the stereoscopic index image G6 moves toward a predetermined vanishing point (FOE) in time series.
  • the vanishing point is a fixed point determined by the relative movement direction of the index object with respect to the camera, and is determined by setting the movement direction of the index object in advance.
  • FIG. 12 is a diagram illustrating an example of a time-series change in the position of the three-dimensional index image G6 in the representative position generation mode 2.
  • In FIG. 12, the left-eye index image 6L5 relating to the stereoscopic index image G6 at time t1 (FIG. 10) is displayed arranged in the image space of the first left-eye image 5L5 relating to the first stereoscopic image G5 at time t1, and the left-eye index image 6L6 relating to the stereoscopic index image G6 at time t2 (FIG. 10) is displayed arranged in the image space of the first left-eye image 5L6 relating to the first stereoscopic image G5 at time t2.
  • the left-eye index images 6L5 and 6L6 are respectively generated by the three-dimensional index image generating unit 18 based on the representative position information 36 corresponding to the times t1 and t2 generated by the position specifying unit 17a.
  • During the elapse of time from time t1 to time t2, the position of the left-eye index image moves along the two-dimensional movement vectors 91a and 91b toward the vanishing point 104, from the position of the left-eye index image 6L5 to the position of the left-eye index image 6L6.
  • When the stereoscopic index image G6 is generated based on the representative position information 36 generated by the position specifying unit 17a set to the representative position generation mode 2, the stereoscopic index image G6 at each time is generated so that the image of the predetermined index object moves away toward the predetermined vanishing point in time series, so that the sense of depth perceived by an observer of the second stereoscopic image G7 can be improved.
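A minimal sketch of representative position generation mode 2: at each time step the index image position moves a fixed fraction of the remaining distance toward the vanishing point, tracing movement vectors like 91a and 91b in FIG. 12; the step fraction is an assumption.

```python
def index_position_mode2(pos, foe, step=0.25):
    """pos, foe: (x, y); return the next position along the line to the FOE."""
    return (pos[0] + step * (foe[0] - pos[0]),
            pos[1] + step * (foe[1] - pos[1]))

p = (50.0, 200.0)
for _ in range(2):                   # positions 6L5 -> 6L6 -> ...
    p = index_position_mode2(p, foe=(320.0, 120.0))
```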
In FIG. 12, the size of the left-eye index image is reduced with the passage of time according to the size information 37 generated by the size designating unit 17b described later; however, even if the left-eye index image is moved from the position of the left-eye index image 6L5 toward the vanishing point 104 to the position of the left-eye index image 6L6 without its size being changed, the usefulness of the present invention is not impaired. In the above example, the image of the predetermined index object in the stereoscopic index image G6 is moved in time series, but even if this movement is not performed, that is, even if the position of the index object image is temporally fixed, the usefulness of the present invention is not impaired.
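A minimal sketch of a mode-2 position update, assuming the vanishing point and a per-frame step fraction are given: interpolating the representative position toward a fixed FOE reproduces the receding motion described above. The names and the linear-interpolation policy are illustrative assumptions.

    import numpy as np

    def representative_position_mode2(pos, vanishing_point, step=0.05):
        # Mode 2: move the representative position a fixed fraction of the
        # remaining distance toward the vanishing point (FOE) each frame.
        return pos + step * (vanishing_point - pos)

    foe = np.array([320.0, 200.0])   # vanishing point 104 (assumed coordinates)
    pos = np.array([80.0, 400.0])    # initial representative position (assumed)
    for t in range(60):              # the movement vector at each frame points at the FOE
        pos = representative_position_mode2(pos, foe)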
Size designating unit 17b
The size designating unit 17b shown in FIG. 2 generates the size information 37, that is, the information on the size of the stereoscopic index image G6 in the image space at each time, such that the size of the index object appearing in the stereoscopic index image G6 decreases in time series. The generated size information 37 is supplied to the stereoscopic index image generation unit 18. Based on the size information 37, the index parallax 35, and the like, the stereoscopic index image generation unit 18 sets the sizes in the image space of the left-eye index image 6L and the right-eye index image 6R constituting the stereoscopic index image G6, and generates the left-eye index image 6L and the right-eye index image 6R.
In this way, the stereoscopic index image generation unit 18 can change the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6 while the size of the index object appearing in the stereoscopic index image G6 decreases in time series. An observer of the second stereoscopic image G7 then perceives not only the sense of depth due to the perspective difference but also the sense of depth due to the shrinking of the index object, so the sense of depth felt by the observer can be improved.
Note that even if the size designating unit 17b does not change the sizes in the image space of the left-eye index image 6L and the right-eye index image 6R constituting the stereoscopic index image G6 in time series, that is, even if the size of the index object image is temporally fixed, the usefulness of the present invention is not impaired.
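The monotonically shrinking size schedule can be sketched as follows; the geometric decay rate is an arbitrary illustrative choice, and any monotonically decreasing schedule would match the description above.

    def index_size(initial_size, t, shrink=0.97):
        # The index object appears smaller frame by frame, reinforcing
        # the impression that it recedes from the observation object.
        return initial_size * (shrink ** t)

    sizes = [index_size(64.0, t) for t in range(5)]  # 64.0, 62.1, 60.2, ...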
Stereoscopic index image generation unit 18
Based on the index parallax 35 (FIG. 2), the representative position information 36 (FIG. 2), the size information 37 (FIG. 2), and the original index image 6S (FIG. 2), the stereoscopic index image generation unit 18 shown in FIG. 2 generates the time-series stereoscopic index image G6 (FIG. 2) of the index object, whose parallax is such that, under the same image display conditions, the index object is observed farther away than the time-series first stereoscopic image G5 (FIG. 2) of the observation object. The time-series stereoscopic index image G6 is generated so that the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6, corresponding to the difference between the representative parallax of the first stereoscopic image G5 and the parallax of the stereoscopic index image G6, increases in time series. The generated time-series stereoscopic index image G6 is supplied to the combination unit 19, which combines the time-series plurality of stereoscopic index images G6 and the time-series plurality of first stereoscopic images G5 in time-series order, thereby generating the time-series second stereoscopic image G7, that is, the time-series second left-eye image 7L and second right-eye image 7R.
FIGS. 6 and 7 are diagrams showing stereoscopic index images G6a and G6b, respectively, as an example of the stereoscopic index image G6 (FIG. 2) generated by the stereoscopic index image generation unit 18. The left-eye index image 6L1 and the right-eye index image 6R1 constituting the stereoscopic index image G6a are images in which the index object at a certain time t1 is expressed in synchronization, and the left-eye index image 6L2 and the right-eye index image 6R2 constituting the stereoscopic index image G6b are images in which the index object is expressed in synchronization at time t2, when a predetermined time has elapsed from time t1. That is, the stereoscopic index images G6a and G6b constitute a time-series stereoscopic index image G6 (FIG. 2). The times t1 and t2 related to the index object are the same as the times t1 and t2 related to the first stereoscopic image G5a (FIG. 4) and the first stereoscopic image G5b (FIG. 5), respectively; likewise, the times t1 and t2 related to the second stereoscopic image G7a (FIG. 8) and the second stereoscopic image G7b (FIG. 9), described later, are the same as the times t1 and t2 related to the first stereoscopic image G5a (FIG. 4) and the first stereoscopic image G5b (FIG. 5), respectively.
In FIGS. 6 and 7, the left-eye index images 6L1 and 6L2 are shown arranged in the image spaces of the first left-eye images 5L1 (FIG. 4) and 5L2 (FIG. 5), respectively, and the right-eye index images 6R1 and 6R2 are shown arranged in the image spaces of the first right-eye images 5R1 (FIG. 4) and 5R2 (FIG. 5), respectively. The left-eye index images 6L1 and 6L2 constitute a plurality of left-eye index images 6L each including an image expressing the index object in time series, that is, the time-series left-eye index image 6L (FIG. 2); similarly, the right-eye index images 6R1 and 6R2 constitute a plurality of right-eye index images 6R each including an image expressing the index object in time series, that is, the time-series right-eye index image 6R. The left-eye index image 6L1 and the right-eye index image 6R1 have the same size in the image space, as do the left-eye index image 6L2 and the right-eye index image 6R2, while the size of the left-eye index image 6L1 is larger than the size of the left-eye index image 6L2.
The position of the right-eye index image 6R1 in the image space of the first right-eye image 5R1 is shifted to the right (+X side) with respect to the position of the left-eye index image 6L1 in the image space of the first left-eye image 5L1, and the position of the right-eye index image 6R2 in the image space of the first right-eye image 5R2 is likewise shifted to the right (+X side) with respect to the position of the left-eye index image 6L2 in the image space of the first left-eye image 5L2. The left-eye index image 6L1 and the right-eye index image 6R1 therefore have a negative parallax with respect to each other, as do the left-eye index image 6L2 and the right-eye index image 6R2. Note that, so that this comparison is easy, the position corresponding to the outer edge of the left-eye index image 6L1, that is, of the index object image in the image space of the first left-eye image 5L1, is indicated by a broken line, and the position corresponding to the outer edge of the left-eye index image 6L2, that is, of the index object image in the first left-eye image 5L2, is likewise indicated by a broken line.
With this configuration, the time-series stereoscopic index image G6 related to the index object shown in FIGS. 6 and 7 is observed farther away than the time-series first stereoscopic image G5 related to the observation object shown in FIGS. 4 and 5. Also, the time-series stereoscopic index image G6 is generated so that the perspective difference between the first stereoscopic image G5 and the stereoscopic index image G6, corresponding to the difference between the parallax representative value of the first stereoscopic image G5 and the parallax of the stereoscopic index image G6, increases in time series.
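The parallax sign convention implied by FIGS. 6 and 7 can be stated compactly. The following formulation is an assumption consistent with the description (a right-eye position shifted toward +X gives a negative, far-side parallax); it is not an equation quoted from the embodiment.

    % Assumed convention: parallax d is the left-eye X position minus the right-eye X position.
    \[ d = x_L - x_R, \qquad x_R > x_L \;\Rightarrow\; d < 0 \quad \text{(observed farther than the display screen)} \]
    % The perspective difference then increases in time series:
    \[ \Delta(t) = d_{\mathrm{rep}}(t) - d_{\mathrm{index}}(t), \qquad \Delta(t_2) > \Delta(t_1) \ \text{for}\ t_2 > t_1 \]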
FIG. 13 is a diagram for explaining an example of a method for generating the stereoscopic index image G6 (FIG. 2). The stereoscopic index image generation unit 18 acquires from the storage device 46 the original index image 6S (FIG. 2), which is an image of the index object designated in advance by a designation operation on the operation unit 42, and generates the stereoscopic index image G6 from the original index image. In FIG. 13, the original index image 6S, the intermediate index image 6C, the left-eye index image 6L, and the right-eye index image 6R are shown arranged in an image space corresponding to the first left-eye image 5L and the first right-eye image 5R, which are arranged so that their image spaces coincide with each other.
The left-eye index image 6L and the right-eye index image 6R related to the stereoscopic index image G6 may be, for example, specific markers. A specific marker has a unique characteristic so that the user can easily distinguish it from objects originally included in the left-eye image GL and the right-eye image GR; such a unique characteristic can be realized, for example, by shape, color, texture, and the like. The specific marker may also be a marker formed by CG or the like. In addition to the star shape shown in FIG. 13, various simple shapes such as a rod (rectangle), a triangle, and an arrow, and various objects such as a vase and a butterfly, are conceivable as the specific marker. This makes it difficult for the observer to confuse objects originally included in the first left-eye image 5L and the first right-eye image 5R with the specific marker. Further, if the specific marker has a specific shape, a specific color, a specific texture, and the like but is translucent, it is less likely to disturb the image display based on the left-eye image GL and the right-eye image GR.
The original index image 6S is stored in the storage device 46 as image information in a form in which, for example, pixel coordinates in the image space and pixel values are associated with each pixel corresponding to each part of the marker. The original index image 6S is acquired by the stereoscopic index image generation unit 18 and used to generate the left-eye index image 6L and the right-eye index image 6R that constitute the stereoscopic index image G6.
The intermediate index image 6C represents the original index image 6S virtually arranged at the position designated based on the representative position information 36 designated by the position designating unit 17a; more specifically, the original index image 6S is arranged so that its spatial center-of-gravity position matches the position designated by the representative position information 36. The left-eye index image 6L is shown at the position reached by moving the intermediate index image 6C in the -X direction along the path 111 parallel to the X axis, and the right-eye index image 6R is shown at the position reached by moving the intermediate index image 6C in the +X direction along the path 112 parallel to the X axis. The stereoscopic index image generation unit 18 obtains the X coordinates of the positions (the center-of-gravity positions or the like) of the left-eye index image 6L and the right-eye index image 6R using expressions (5) and (6), respectively; for the Y coordinates of these positions, the Y coordinate of the position designated by the representative position information 36 is adopted as it is.
The stereoscopic index image generation unit 18 then generates the left-eye index image 6L and the right-eye index image 6R by translating the original index image 6S along the paths 113 and 114 to the obtained positions of the left-eye index image 6L and the right-eye index image 6R, respectively, so that the index parallax 35 is produced between the left-eye index image 6L and the right-eye index image 6R. Note that even if the position of the intermediate index image 6C is made to coincide with the position of the left-eye index image 6L and the position of the right-eye index image 6R is designated by allotting the whole index parallax 35 to the right-eye index image 6R, the usefulness of the present invention is not impaired.
Also, in the above example, the left-eye index image 6L and the right-eye index image 6R are generated by moving the original index image 6S directly to the positions of the left-eye index image 6L and the right-eye index image 6R; however, even if the left-eye index image 6L and the right-eye index image 6R are generated by first moving the original index image 6S to the position of the intermediate index image 6C and then moving it to the positions of the left-eye index image 6L and the right-eye index image 6R, the usefulness of the present invention is not impaired.
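Expressions (5) and (6) are given earlier in the document and are not reproduced here; the sketch below therefore assumes the common symmetric split, in which half of the index parallax is applied to each eye, with the parallax convention d = xL - xR so that a negative index parallax places the right-eye copy on the +X side as in FIG. 13. All function names are illustrative.

    import numpy as np

    def place_marker(frame, marker, center_xy):
        # Paste the marker bitmap so its center of gravity lies at center_xy.
        out = frame.copy()
        mh, mw = marker.shape[:2]
        x0 = int(round(center_xy[0] - mw / 2))
        y0 = int(round(center_xy[1] - mh / 2))
        out[y0:y0 + mh, x0:x0 + mw] = marker   # border clipping omitted for brevity
        return out

    def generate_index_pair(left_frame, right_frame, marker, rep_pos_xy, index_parallax):
        # Assumed symmetric split (cf. expressions (5) and (6)): the left-eye
        # marker is shifted by +d/2 in X and the right-eye marker by -d/2, so
        # that x_left - x_right equals the index parallax d (< 0: far side).
        x, y = rep_pos_xy
        left = place_marker(left_frame, marker, (x + index_parallax / 2, y))
        right = place_marker(right_frame, marker, (x - index_parallax / 2, y))
        return left, right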
FIGS. 14 and 15 are diagrams each illustrating an example of the stereoscopic index image. In FIG. 14, a left-eye index image 6La composed of two rectangular markers is shown arranged in the second left-eye image 7La. The second left-eye image 7La has an image region surrounded by the outer edge 107 and includes the first left-eye image 5L, which has an image region surrounded by the outer edge 106; the left-eye index image 6La is included in the region 108 between the outer edge 106 of the first left-eye image 5L and the outer edge 107 of the second left-eye image 7La. The left-eye index image 6La is composed of a rectangular image pattern, which is acquired by the stereoscopic index image generation unit 18 reading out an image pattern stored in advance in the storage device 46 or the like. As this image pattern, for example, an image pattern showing a specific pattern in which relatively large dots are randomly arranged may be employed.
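Such a random-dot pattern could be produced, for example, as in the following sketch; the dot count, dot radius, and pattern dimensions are arbitrary illustration values.

    import numpy as np

    def random_dot_pattern(h, w, n_dots=40, radius=6, seed=0):
        # Binary pattern with relatively large dots at random positions.
        rng = np.random.default_rng(seed)
        yy, xx = np.mgrid[0:h, 0:w]
        pattern = np.zeros((h, w), dtype=np.uint8)
        for cy, cx in zip(rng.integers(0, h, n_dots), rng.integers(0, w, n_dots)):
            pattern[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 255
        return pattern

    marker = random_dot_pattern(48, 320)   # a bar-shaped random-dot marker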
In the example of FIG. 14, the operation mode of the position designating unit 17a is set to the representative position generation mode 1. That is, based on the detection result of the detection unit 13, which detects the observation object image 1L as the attention area, the position designating unit 17a generates the representative position information 36 so that the position of the left-eye index image 6La moves following the time-series movement of the observation object image 1L.
In FIG. 15, a left-eye index image 6Lb composed of a frame-shaped marker is shown arranged on the second left-eye image 7Lb. The difference between the second left-eye image 7Lb and the second left-eye image 7La lies in the difference in shape between the left-eye index image 6Lb and the left-eye index image 6La and in the presence or absence of movement of the representative position. That is, the position designating unit 17a keeps the representative position of the left-eye index image 6Lb shown in FIG. 15 at its initial representative position (such as an intermediate position between the left-eye index image and the right-eye index image).
When the stereoscopic index image G6 (FIG. 2) related to the left-eye index image 6La (FIG. 14) or the left-eye index image 6Lb (FIG. 15) and the first stereoscopic image G5 (FIG. 2) related to the first left-eye image 5L are combined into the second stereoscopic image G7 (FIG. 2) by the combination unit 19, the image relating to the region 108 (FIGS. 14 and 15) is usually also combined. However, even if the stereoscopic index image G6 related to the left-eye index image 6La or the left-eye index image 6Lb is combined onto the first stereoscopic image G5 related to the first left-eye image 5L without the region 108 being combined, the usefulness of the present invention is not impaired.
Combination unit 19
The combination unit 19 illustrated in FIG. 2 combines the first stereoscopic image G5 supplied from the stereoscopic image acquisition unit 12 and the stereoscopic index image G6 supplied from the stereoscopic index image generation unit 18 to generate the time-series second stereoscopic image G7, that is, the time-series second left-eye image 7L and second right-eye image 7R.
FIGS. 8 and 9 are diagrams showing second stereoscopic images G7a and G7b, respectively, as an example of the time-series second stereoscopic image G7 (FIG. 2) generated by the combination unit 19. The second left-eye image 7L1 and the second right-eye image 7R1 constituting the second stereoscopic image G7a are images in which the observation object and the index object at a certain time t1 are expressed in synchronization, and the second left-eye image 7L2 and the second right-eye image 7R2 constituting the second stereoscopic image G7b are images in which the observation object and the index object are expressed in synchronization at time t2, when a predetermined time has elapsed from time t1. That is, the second stereoscopic images G7a and G7b constitute a time-series second stereoscopic image G7 (FIG. 2). The second stereoscopic image G7a (FIG. 8) is generated by the combination unit 19 combining the first stereoscopic image G5a (FIG. 4) and the stereoscopic index image G6a (FIG. 6), and the second stereoscopic image G7b (FIG. 9) is generated by the combination unit 19 combining the first stereoscopic image G5b (FIG. 5) and the stereoscopic index image G6b (FIG. 7).
In the time-series second stereoscopic image G7 constituted by the second stereoscopic images G7a and G7b, the time-series stereoscopic index image related to the index object is observed farther away than the time-series first stereoscopic image related to the observation object indicated by the observation object images 1L1 and 1L2 and the observation object images 1R1 and 1R2. Further, the perspective difference between the time-series stereoscopic index image related to the index object and the time-series first stereoscopic image related to the observation object is observed to increase in time series; that is, the index object is observed moving away from the observation object in time series, and the perspective between the observation object and the index object is observed spreading dynamically (in time series). Because such stereoscopic rendering is performed, the sense of depth felt by an observer of the second stereoscopic image G7 can be improved.
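One way the per-frame combination could be realized is straightforward alpha compositing of the index marker onto each of the left-eye and right-eye images; the translucency mentioned above maps naturally to an alpha value below 1. The alpha value and the function names are illustrative assumptions, not details taken from the embodiment.

    import numpy as np

    def combine(first_image, index_image, index_mask, alpha=0.6):
        # Composite a (possibly translucent) index image over a first image.
        # index_mask is 1.0 where the marker is present and 0.0 elsewhere.
        m = (alpha * index_mask)[..., None]   # broadcast over the color channels
        return ((1.0 - m) * first_image + m * index_image).astype(first_image.dtype)

    # Per frame: combine left with left and right with right, e.g.
    # second_left  = combine(first_left_5L,  index_left_6L,  mask_left)
    # second_right = combine(first_right_5R, index_right_6R, mask_right)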
<Operation flow of image processing apparatus>
FIGS. 16 to 18 are flowcharts showing an operation flow S100A as an example of the operation flow of the image processing apparatus 200A according to the embodiment. The operation flow S100A shows an example of the operation flow when the generation process of the second stereoscopic image G7 is started from a situation in which the first stereoscopic image G5, the stereoscopic index image G6, and the second stereoscopic image G7 have not yet been generated. The operation flow S100A is realized by the CPU 11A reading and executing the program PG1 in the ROM 44 and the detection program PG2 in the storage device 46; for example, execution of image processing relating to a stereoscopic image in the image processing apparatus 200A is requested in accordance with an operation of the operation unit 42 by the user, and the operation flow S100A is started.
In step S110 of FIG. 16, the stereoscopic image acquisition unit 12 acquires the first stereoscopic image G5 related to the observation object. In step S120, the determination unit 14 determines the representative parallax 32 of the first stereoscopic image G5; note that when the attention area of the first stereoscopic image G5 is detected by the detection unit 13, the representative parallax 32 is determined for the attention area. In step S130, the parallax generation unit 16, the position designating unit 17a, and the size designating unit 17b acquire the index parallax 35, the representative position information 36, and the size information 37 related to the stereoscopic image of the index object (the stereoscopic index image G6). In step S140, based on the index parallax 35, the representative position information 36, the size information 37, and the original index image 6S, the stereoscopic index image generation unit 18 generates the stereoscopic index image G6 related to the index object. Since no stereoscopic index image G6 exists yet at the start of step S140, the parallax obtained by subtracting a predetermined parallax stored in the storage device 46 or the like from the representative parallax of the first stereoscopic image G5 is used as the index parallax 35; similarly, a predetermined initial position of the stereoscopic index image G6 is adopted as the representative position information 36. In step S150, the combination unit 19 generates the second stereoscopic image G7 by combining the first stereoscopic image G5 and the stereoscopic index image G6, and the generated second stereoscopic image G7 is displayed on the display unit 43.
In step S160, the stereoscopic image acquisition unit 12 acquires a new first stereoscopic image G5 related to the observation object. In step S170, the determination unit 14 determines the representative parallax 32 of the first stereoscopic image G5; as before, when the attention area of the first stereoscopic image G5 is detected by the detection unit 13, the representative parallax 32 is determined for the attention area. In step S180, the parallax generation unit 16 acquires a new parallax (index parallax 35) related to the stereoscopic index image G6 such that the subtraction result obtained by subtracting the representative parallax of the first stereoscopic image G5 from the parallax of the stereoscopic index image G6 (the index parallax 35) decreases in time series, so that the perspective difference between the two increases.
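A sketch of such an index-parallax update, assuming a constant per-frame decrement; the decrement size, and any clamping needed for comfortable viewing, would in practice depend on the display conditions and are illustrative here.

    def update_index_parallax(rep_parallax, prev_gap, decrement=0.5):
        # prev_gap is (index parallax - representative parallax) from the previous
        # frame; decreasing it each frame makes the perspective difference grow,
        # so the index object is observed receding from the observation object.
        gap = prev_gap - decrement
        return rep_parallax + gap, gap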
In step S190, the size designating unit 17b determines whether or not its operation mode is an operation mode in which the size of the index object image in the image space is changed. If, as a result of the determination in step S190, the operation mode for changing the size of the index object image in the image space is set, the size designating unit 17b changes the size information 37 in step S200 so that the size decreases in time series, and the process proceeds to step S210; if that operation mode is not set, the size designating unit 17b does not change the size information 37, and the process proceeds to step S210.
In step S210, the position designating unit 17a determines whether or not its operation mode is an operation mode in which the representative position of the index object image in the image space is changed. If, as a result of the determination in step S210, an operation mode for changing the representative position of the index object image in the image space is set, the position designating unit 17a changes, in step S220, the representative position information 36 relating to the representative position of the index object image in the image space in accordance with the operation mode, and the process proceeds to step S230; if such an operation mode is not set, the position designating unit 17a does not change the representative position information 36, and the process proceeds to step S230.
In step S230, based on the index parallax 35, the representative position information 36, the size information 37, and the original index image 6S, the stereoscopic index image generation unit 18 generates a new stereoscopic index image G6 related to the index object. The generated new stereoscopic index image G6 is a stereoscopic image having a parallax such that, under the same image display conditions, it is observed farther away than the new first stereoscopic image G5 (step S160) of the observation object. In step S240, the combination unit 19 generates a new second stereoscopic image G7 by combining the new first stereoscopic image G5 acquired in step S160 and the new stereoscopic index image G6 generated in step S230, and the generated second stereoscopic image G7 is displayed on the display unit 43. In this second stereoscopic image G7, the index object is observed as if it were farther away from the observation object than in the second stereoscopic image G7 generated in step S150.
In step S250, the stereoscopic image acquisition unit 12 determines whether acquisition of all the first stereoscopic images G5 related to the observation object has been completed. If it is determined in step S250 that acquisition of all the first stereoscopic images G5 related to the observation object has not been completed, the process returns to step S160; if it has been completed, the processing related to the operation flow S100A ends. By the processing described above, the perspective between the observation object and the index object is observed spreading dynamically (in time series), and because such stereoscopic rendering is performed, the sense of depth felt by an observer of the second stereoscopic image G7 can be improved.
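Purely as an illustrative summary built on the helper sketches above (all names assumed), steps S160 to S250 reduce to the following per-frame loop.

    def operation_flow_s100a(frames, marker, initial_gap=-2.0):
        # One iteration per new first stereoscopic image; 'frames' yields
        # (left5, right5, rep_parallax, rep_pos) per time step (S160-S170).
        gap = initial_gap                  # index parallax minus representative parallax
        outputs = []
        for left5, right5, rep_parallax, rep_pos in frames:
            index_parallax, gap = update_index_parallax(rep_parallax, gap)   # S180
            # S190-S220: the size and representative-position updates are omitted
            left7, right7 = generate_index_pair(left5, right5, marker,
                                                rep_pos, index_parallax)     # S230-S240
            outputs.append((left7, right7))    # display of the second stereoscopic image
        return outputs                         # S250: ends when all frames are processed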
Note that, in the above operation flow, each time a new first stereoscopic image G5 is acquired, the image processing apparatus 200A generates a second stereoscopic image G7 and displays it on the display unit 43, thereby performing the process of displaying the time-series second stereoscopic image G7 on the display unit 43. However, even if the image processing apparatus 200A acquires time-series first stereoscopic images G5 recorded in advance, generates the second stereoscopic images G7 for the individual first stereoscopic images G5 of the acquired time series collectively so as to generate the time-series second stereoscopic image G7, and then performs a process of displaying the generated time-series second stereoscopic image G7 on the display unit 43 as a moving stereoscopic image, the usefulness of the present invention is not impaired.
Image processing system
200A Image processing apparatus
300 Stereo camera
5L First left-eye image
5R First right-eye image
6L Left-eye index image
6R Right-eye index image
6S Original index image
7L Second left-eye image
7R Second right-eye image
31 Area information
32 Representative parallax
33 Type information
34 Environment information
35 Index parallax
36 Representative position information
37 Size information
G5 First stereoscopic image
G6 Stereoscopic index image
G7 Second stereoscopic image
PG2 Detection program


Abstract

The invention improves the sense of depth perceived by an observer viewing a 3D image. To achieve this, an image processing device comprises: a stereoscopic-index-image generator configured to create a time-series stereoscopic index image having a disparity such that, under identical image display conditions, the stereoscopic index image is observed at a greater distance than a time-series first stereoscopic image of an object under observation; and a combination unit for combining, in time-series order, the first stereoscopic image and the stereoscopic index image to create a time-series second stereoscopic image. The stereoscopic-index-image generator creates the stereoscopic index image such that there is a time-series increase in the perspective difference corresponding to the difference between a representative value of the disparity of the first stereoscopic image and the disparity of the stereoscopic index image.
PCT/JP2012/051744 2011-01-31 2012-01-27 Image processing device, associated program, and image processing method WO2012105427A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-017845 2011-01-31
JP2011017845 2011-01-31

Publications (1)

Publication Number Publication Date
WO2012105427A1 (fr)

Family

ID=46602643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/051744 WO2012105427A1 (fr) 2011-01-31 2012-01-27 Image processing device, associated program, and image processing method

Country Status (1)

Country Link
WO (1) WO2012105427A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05274421A (ja) * 1992-03-30 1993-10-22 Toshiba Corp Cursor control device
JP2002176661A (ja) * 2000-12-08 2002-06-21 Kawasaki Heavy Ind Ltd Image display device
WO2010150554A1 (fr) * 2009-06-26 2010-12-29 Panasonic Corporation Stereoscopic image display device



Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12742659; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 12742659; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)