WO2011096136A1 - Simulated image generating device and simulated image generating method - Google Patents

Simulated image generating device and simulated image generating method

Info

Publication number
WO2011096136A1
WO2011096136A1 (PCT/JP2010/072529, JP2010072529W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
pseudo
background
area
Prior art date
Application number
PCT/JP2010/072529
Other languages
French (fr)
Japanese (ja)
Inventor
Osamu Toyama
Takuya Kawano
Original Assignee
Konica Minolta Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Konica Minolta Holdings, Inc.
Publication of WO2011096136A1 publication Critical patent/WO2011096136A1/en


Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • G03B35/12Stereoscopic photography by simultaneous recording involving recording of different viewpoint images in different colours on a colour film
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/25Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • The present invention relates to a pseudo-image generation apparatus and method that use an image of a subject photographed from one viewpoint to generate a pseudo image simulating the subject as photographed from a different, virtual viewpoint.
  • Pseudo-image generation devices, which simulate the image that would be obtained if a subject were photographed from a virtual viewpoint different from the viewpoint actually used, without performing actual photography from that virtual viewpoint, are beginning to be used for purposes such as generating groups of images that can be viewed stereoscopically.
  • In one conventional technique, the depth of the subject is estimated from the composition of a single captured image (the reference image), and the pseudo image is generated from the reference image by obtaining, from the estimated depth information, the correspondence between each coordinate on the reference image and each coordinate on the pseudo image.
  • However, for a region of the pseudo image whose corresponding portion is not captured in the reference image (an occlusion region), an appropriate pixel value cannot be obtained from the correspondence.
  • In Patent Document 1, the pixel values of the occlusion area are set using texture statistics of each area.
  • However, because the apparatus of Patent Document 1 derives the correspondence between the reference image and the pseudo image from estimated depth, an accurate correspondence cannot be obtained. The range of the occlusion area is therefore inaccurate, and an observer viewing the generated pseudo image experiences a sense of unnaturalness.
  • Moreover, the occlusion area normally includes information on both the subject and its background.
  • Because conventional techniques do not aim at improving image quality in this respect, occlusion-area images that are likely to cause the observer discomfort are generated frequently.
  • The present invention has been made to solve these problems, and its object is to provide a technique for specifying the range of an occlusion area more accurately and generating a pseudo image causing less discomfort.
  • According to a first aspect, the pseudo image generation device includes: a first acquisition unit that acquires a reference image in which each subject is photographed from a first viewpoint; a second acquisition unit that acquires distance information based on actual measurement for at least each point of a subject of interest among the subjects; identification means for identifying the subject of interest and each background subject photographed in a first background image, the background portion of a first foreground image that is the image of the subject of interest in the reference image; correspondence acquisition means for acquiring, based on the distance information, the correspondence between the reference image and a pseudo image of each subject corresponding to photography from a virtual viewpoint different from the first viewpoint; first generation means for generating, based on the correspondence for at least the first foreground image and on the reference image, a pseudo image including a second foreground image (the image of the subject of interest corresponding to photography from the virtual viewpoint) and a second background image (the image of each background subject corresponding to photography from the virtual viewpoint); first specifying means for specifying the occlusion area of the pseudo image that includes neither the second foreground image nor the second background image; and second generation means for generating the image of the occlusion area based on the respective information on the subject of interest and on each background subject.
  • According to a second aspect, the device of the first aspect further includes second specifying means for specifying, within the occlusion area, a first region corresponding to the subject of interest and a second region corresponding to the background subjects. The second generation means generates the image of the first region based on information on the subject of interest, and the image of the second region based on information on each background subject.
  • According to a third aspect, the device of the second aspect further includes third acquisition means for acquiring shape information representing the entire three-dimensional shape of the subject of interest, and the second specifying means specifies the first region based on the shape information.
  • According to a fourth aspect, in the device of the second aspect, the second generation means generates the image of the first region based on a boundary region between the second foreground image and the first region, and generates the image of the second region based on a boundary region between the second background image and the second region.
  • According to a fifth aspect, in the device of the fourth aspect, the second generation means generates the images of the first region and the second region so that pixel values change gradually from the boundary region on the second-region side of the first region to the boundary region on the first-region side of the second region.
  • According to a sixth aspect, in the device of the first aspect, the second generation means (a) generates the image of a first boundary region adjoining the second foreground image in the occlusion area based on the boundary region adjoining the occlusion area in the second foreground image, and generates the image of a second boundary region adjoining the second background image in the occlusion area based on the boundary region adjoining the occlusion area in the second background image; and (b) generates the image of the occlusion area so that its pixel values change gradually from the first boundary region to the second boundary region.
  • According to a seventh aspect, the pseudo image generation device includes: a first acquisition unit that acquires a plurality of time-series images in which each subject is photographed in time sequence, one of which serves as the reference image; second acquisition means for acquiring distance information based on actual measurement for at least each point of the subject of interest in the state in which the reference image was acquired; identification means for identifying the subject of interest and each background subject photographed in the first background image, the background portion of the first foreground image that is the image of the subject of interest in the reference image; correspondence acquisition means for acquiring, based on the distance information, the correspondence between the reference image and a pseudo image of each subject corresponding to photography from a virtual viewpoint different from the first viewpoint from which the reference image was photographed; and first generation means for generating, based on the correspondence for at least the first foreground image and on the reference image, a pseudo image including the second foreground image (the image of the subject of interest corresponding to photography from the virtual viewpoint) and the second background image (the image of each background subject corresponding to photography from the virtual viewpoint).
  • According to an eighth aspect, in the device of the first aspect, the second generation means performs smoothing processing on the generated image of the occlusion area.
  • According to a ninth aspect, the pseudo image generation device includes: a first acquisition unit that acquires a reference image in which each subject is photographed from a first viewpoint; a second acquisition unit that acquires distance information based on actual measurement for at least each point of the subject of interest; a third acquisition unit that acquires shape information representing the entire three-dimensional shape of the subject of interest; and generation means that, for the occlusion region (the region of the pseudo image of each subject corresponding to photography from a virtual viewpoint different from the first viewpoint in which the corresponding portion of the reference image is not photographed), specifies a first region corresponding to the subject of interest and a second region corresponding to each background subject photographed in the background portion of the image of the subject of interest, based on the reference image, the distance information, and the shape information, and generates the pseudo image by generating the image of the first region based on information on the subject of interest and the image of the second region based on information on each background subject.
  • According to a tenth aspect, in the device of the ninth aspect, the generation means includes: (a) identification means for identifying the subject of interest and each background subject photographed in the first background image, the background portion of the first foreground image that is the image of the subject of interest in the reference image; (b) correspondence acquisition means for acquiring the correspondence between the reference image and the pseudo image based on the distance information; (c) first generation means for generating, based on the correspondence for at least the first foreground image and on the reference image, a pseudo image including the second foreground image (the image of the subject of interest corresponding to photography from the virtual viewpoint) and the second background image (the image of each background subject corresponding to photography from the virtual viewpoint); and (d) second generation means that specifies the first region based on the distance information and the shape information, specifies the second region as the area of the pseudo image including none of the second foreground image, the first region, and the second background image, and generates the image of the first region based on information on the subject of interest and the image of the second region based on information on each background subject.
  • According to an eleventh aspect, the pseudo image generation method includes: a step of acquiring a reference image in which each subject is photographed from a first viewpoint; a step of acquiring distance information based on actual measurement for at least each point of the subject of interest among the subjects; and a step of identifying the subject of interest and each background subject photographed in a first background image, the background portion of a first foreground image that is the image of the subject of interest in the reference image.
  • According to a twelfth aspect, the pseudo image generation method includes: a step of acquiring a plurality of time-series images in which each subject is photographed in time sequence, one of which serves as the reference image; a step of acquiring distance information based on actual measurement for at least each point of the subject of interest in the state in which the reference image was acquired; a step of identifying the subject of interest and each background subject photographed in the first background image, the background portion of the first foreground image that is the image of the subject of interest in the reference image; a step of acquiring, based on the distance information, the correspondence between the reference image and a pseudo image of each subject corresponding to photography from a virtual viewpoint different from the first viewpoint from which the reference image was photographed; and a step of generating, based on the correspondence for at least the first foreground image and on the reference image, a second foreground image (the image of the subject of interest corresponding to photography from the virtual viewpoint) and an image of each background subject corresponding to photography from the virtual viewpoint.
  • According to a thirteenth aspect, the pseudo image generation method includes: a step of acquiring a reference image in which each subject is photographed from a first viewpoint; a step of acquiring distance information based on actual measurement for at least each point of the subject of interest among the subjects; and a step of specifying, based on the reference image, the distance information, and the shape information, the first region corresponding to the subject of interest and the second region corresponding to each background subject photographed in the background portion of the image of the subject of interest, within the occlusion area in which the corresponding portion of the reference image is not photographed, and of generating the image of the first region based on information on the subject of interest.
  • With the pseudo image generation device according to any of the first to tenth aspects, or the pseudo image generation method according to any of the eleventh to thirteenth aspects, the range of the occlusion area on the pseudo image can be specified more accurately because it is based on measured distance information for the subject, and because the image of the specified occlusion area is generated based on the subject of interest and each background subject, a pseudo image causing less discomfort can be generated.
  • FIG. 1 is a block diagram illustrating an example of a main configuration of a pseudo image generation system 100A according to the embodiment.
  • the pseudo image generation system 100A mainly includes a stereo camera 300 and a pseudo image generation device 200A.
  • the stereo camera 300 mainly includes a base camera 31 and a reference camera 32.
  • The base camera 31 and the reference camera 32 each mainly comprise an imaging optical system and a control processing circuit (not shown).
  • The base camera 31 and the reference camera 32 are separated by a predetermined baseline length; light from the subject incident on each photographing optical system is processed synchronously by the control processing circuits and the like to generate a standard image 1A and a reference image 1R, which are digital images of a predetermined size such as VGA.
  • the generated standard image 1A and reference image 1R are supplied to the input / output unit 41 of the pseudo image generating apparatus 200A via the data line DL.
  • Various operations of the stereo camera 300 are controlled based on control signals supplied from the pseudo image generation device 200A via the input / output unit 41 and the data line DL.
  • The stereo camera 300 can also generate a plurality of standard images 1A and reference images 1R by continuously photographing the subject in time sequence while synchronizing the base camera 31 and the reference camera 32.
  • the standard image 1A and the reference image 1R may be color images or monochrome images.
  • In the present embodiment, the stereo camera 300 is employed. However, instead of the reference camera 32 of the stereo camera 300, a light projecting device that projects detection light for shape measurement, such as laser light, onto the subject may be provided, so that the base camera 31 and the light projecting device constitute an active-ranging three-dimensional measuring machine used in place of the stereo camera 300. In either configuration, the image of the subject and the image used for measuring distance information can be shared, which reduces the processing cost of associating the image with the distance information when the correspondence 56 (see FIG. 2) is acquired by the correspondence acquisition unit 15 described later.
  • Even if the coordinate measuring machine measures the distance information 52 (FIG. 2) of the subject from an image taken from a predetermined viewpoint different from that of the reference image 1A, the reference image 1A and the distance information 52 can be associated through matching between that image and the reference image 1A, so the usefulness of the present invention is not impaired.
  • The pseudo image generation device 200A mainly includes a CPU 11A, an input / output unit 41, an operation unit 42, a display unit 43, a ROM 44, a RAM 45, and a storage device 46, and is realized by, for example, a general-purpose computer or a dedicated hardware device.
  • The input / output unit 41 is configured by an input / output interface such as a USB interface; it inputs image information and the like supplied from the stereo camera 300 to the pseudo image generation device 200A, and outputs various control signals from the pseudo image generation device 200A to the stereo camera 300.
  • The operation unit 42 includes, for example, a keyboard and a mouse. Through the operation unit 42, various control parameters and operation modes of the pseudo image generation device 200A are set.
  • The display unit 43 includes, for example, a liquid crystal display, and displays various image information, such as the reference image 1A supplied from the stereo camera 300 and the pseudo image 4A (FIG. 2) generated by the pseudo image generation device 200A, as well as various device-related information and a control GUI (Graphical User Interface).
  • The ROM (Read Only Memory) 44 is a read-only memory that stores the program for operating the CPU 11A. A readable/writable nonvolatile memory (for example, a flash memory) may be used instead of the ROM 44.
  • a RAM (Random Access Memory) 45 is a readable and writable volatile memory that stores various images acquired by the first acquisition unit 12, pseudo images generated by the generation unit 21A, and processing information of the CPU 11A. Functions as a temporary work memory.
  • the storage device 46 is composed of, for example, a readable / writable nonvolatile memory such as a flash memory, a hard disk device, or the like, and permanently records various information such as setting information for the pseudo image generation device 200A.
  • the storage device 46 is provided with a parameter storage unit 47 and a shape data storage unit 48.
  • The parameter storage unit 47 stores various parameters such as a three-dimensional parameter 51 (FIG. 2), an imaging parameter 54 (FIG. 2), and coordinate system information 55 (FIG. 2).
  • The shape data storage unit 48 stores model group shape data 61 (FIG. 2) representing the entire three-dimensional shapes of various subjects; as described later, it is referred to by the third acquisition unit 14 and used in the process of acquiring the shape information 62 (FIG. 2) for the subject of interest.
  • the CPU (Central Processing Unit) 11A is a control processing device that controls each functional unit of the pseudo image generation device 200A, and executes control and processing according to a program stored in the ROM 44.
  • the CPU 11A also functions as the first acquisition unit 12, the second acquisition unit 13, the third acquisition unit 14, the correspondence acquisition unit 15, and the generation unit 21A, as will be described later.
  • Using the reference image 1A of the subject photographed from the first viewpoint, the CPU 11A generates the pseudo image 4A (FIG. 2) of the subject corresponding to photography from a virtual viewpoint different from the first viewpoint.
  • the generation unit 21A is configured by functional units such as a first specification unit 22, a second specification unit 23, a first generation unit 24, a second generation unit 25, and an identification unit 26.
  • The CPU 11A, the input / output unit 41, the operation unit 42, the display unit 43, the ROM 44, the RAM 45, the storage device 46, and the like are electrically connected via a signal line 49. Therefore, the CPU 11A can, for example, control the stereo camera 300 via the input / output unit 41, acquire image information from the stereo camera 300, and display it on the display unit 43 at predetermined timings.
  • The first acquisition unit 12, the second acquisition unit 13, the third acquisition unit 14, the correspondence acquisition unit 15, the generation unit 21A, and the functional units constituting the generation unit 21A (the first specifying unit 22, the second specifying unit 23, the first generation unit 24, the second generation unit 25, and the identification unit 26) are realized by the CPU 11A executing predetermined programs.
  • Each of these functional units may be realized by a dedicated hardware circuit, for example.
  • In this way, the pseudo image generation device 200A acquires the standard image 1A and the reference image 1R captured by the stereo camera 300 and processes them to generate, based on the standard image 1A, a pseudo image corresponding to photography from a virtual viewpoint different from the first viewpoint from which the standard image 1A was photographed, that is, a pseudo image corresponding to an image of the subject photographed from that virtual viewpoint.
  • FIG. 2 is a block diagram illustrating an example of a main functional configuration of the pseudo image generation apparatus 200A according to the embodiment.
  • FIG. 19 is a diagram illustrating an example of an operation flow of the pseudo image generation apparatus 200A according to the embodiment.
  • The operator positions and adjusts the stereo camera 300 so that the subject of interest, for which a pseudo image corresponding to photography from a virtual viewpoint is to be created, can be photographed by both the base camera 31 and the reference camera 32 of the stereo camera 300.
  • The position of the base camera 31 in this state is the first viewpoint; more precisely, for example, the principal point of the photographing optical system of the base camera 31 is the first viewpoint.
  • When the operator performs a shooting operation, a control signal corresponding to the button operation is supplied to the CPU 11A, which supplies a control signal causing the stereo camera 300 to perform a shooting operation. The stereo camera 300, having received the control signal, photographs each subject in the shooting field of view with the base camera 31 and the reference camera 32 to generate the standard image 1A and the reference image 1R, and supplies them to the pseudo image generation device 200A.
  • The first acquisition unit 12 acquires, via the input / output unit 41, the standard image 1A, obtained by photographing each subject from the first viewpoint, and the reference image 1R (step S10 in FIG. 19).
  • FIG. 3 is a diagram illustrating an example of the reference image 1A.
  • The first foreground image 1a, which is an image of a person facing the front, is captured in the reference image 1A.
  • The background portion of the first foreground image 1a is the first background image 2a, in which the wall behind the person is photographed.
  • The acquired standard image 1A is supplied to the second acquisition unit 13, the correspondence acquisition unit 15, the first generation unit 24, and the identification unit 26, while the acquired reference image 1R is supplied to the second acquisition unit 13.
  • the first acquisition unit 12 may acquire the reference image 1A and the reference image 1R that have been captured in advance and stored in the recording medium via the input / output unit 41.
  • Having acquired the three-dimensional parameter 51, the second acquisition unit 13 performs a matching process between the standard image 1A and the reference image 1R to obtain, for each pixel of the standard image 1A, the parallax with respect to the reference image 1R.
  • The second acquisition unit 13 then converts the parallax for each pixel based on the principle of triangulation using the three-dimensional parameter 51, thereby generating the distance information 52, a set of three-dimensional coordinate values for each point of each subject corresponding to each pixel of the standard image 1A.
  • As the coordinate system for the distance information 52, a camera coordinate system depending on the position and orientation of the stereo camera 300 is employed; for example, an XYZ orthogonal coordinate system whose origin is the principal point of the base camera and whose Z axis lies along the optical axis of the base camera is used.
  • In this way, the second acquisition unit 13 acquires distance information based on actual measurement for at least each point of the subject of interest among the subjects (step S20 in FIG. 19), as sketched below.
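  • As an illustrative aid (not part of the patent text), the parallax-to-coordinate conversion above might be sketched as follows, assuming a rectified stereo pair with focal length f (pixels), baseline b (meters), and principal point (cx, cy); the function and variable names are hypothetical:

```python
import numpy as np

def disparity_to_points(disparity, f, b, cx, cy):
    """Convert a per-pixel disparity map (pixels) into 3D coordinates in the
    base camera's coordinate system (origin at the principal point, Z axis
    along the optical axis), analogous to the distance information 52."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                    # no distance where matching failed
    Z = np.full((h, w), np.nan)
    Z[valid] = f * b / disparity[valid]      # depth by triangulation
    X = (u - cx) * Z / f                     # back-project through the pinhole model
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)      # (h, w, 3) array of XYZ values
```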
  • FIG. 5 is a diagram illustrating an example of the distance information 52 displayed as the distance image 5A.
  • the distance image 5A shown in FIG. 5 is an image in which the Z-axis coordinates in the distance information 52 corresponding to each pixel of the reference image 1A are used as the pixel value of each pixel.
  • The unit of the pixel values is meters.
  • The dotted line in the distance image 5A is a supplementary indication that overlays the outline of the first foreground image 1a on the distance image 5A, to make the relationship between the pixel values of the distance image 5A and the first foreground image 1a in the reference image 1A easier to understand.
  • For part or all of each background subject photographed in the background portion of the image of the subject of interest, the distance information 52 may not be acquirable, owing to limitations such as the measurement range of the distance measuring device (for example, the stereo camera) and the reflectance of the background subject.
  • Likewise, even if the base camera 31 and the reference camera 32 have the same angle of view, the image of the end region of the standard image 1A is not photographed in the reference image 1R because of the parallax between the two cameras, so no distance information 52 is generated for that end region.
  • the distance information 52 acquired by the second acquisition unit 13 is supplied to the third acquisition unit 14, the correspondence relationship acquisition unit 15, the second specification unit 23, and the identification unit 26.
  • For example, suppose the reference image is constituted by images of a short-distance person, a medium-distance partition, and a long-distance building. In this case, occlusion areas arise not only for the person but also for the partition and the building; even then, if the method of the present invention is applied with the partition as the subject of interest, the range of the occlusion area related to the partition and the building behind it can be specified and its image generated.
  • When the third acquisition unit 14 receives the distance information 52 from the second acquisition unit 13, it identifies, from the model group shape data 61 stored in advance in the shape data storage unit 48 and expressing the entire three-dimensional shapes of various subjects, the shape data closest to the shape represented by the distance information 52, and acquires the identified shape data as shape information 62 representing the entire three-dimensional shape of the subject of interest (step S30 in FIG. 19).
  • Various methods can be employed to identify the shape data closest to the distance information 52 for the subject of interest. For example, the method disclosed in Japanese Patent Laid-Open No. 2001-143072, which performs the identification by comparing the distance image 5A for the distance information 52 with a distance image for each item of the model group shape data 61, can be employed.
  • the model group shape data 61 stored in the shape data storage unit 48 is preferably as close as possible to the actual entire circumference shape data of the subject of interest.
  • Alternatively, if the shape information 62 for the entire circumference of the subject of interest is set in the pseudo image generation device 200A in advance, the shape information 62 may be acquired directly, without searching the model group shape data 61 for the corresponding shape.
  • the shape information 62 acquired by the third acquisition unit 14 is supplied to the second specifying unit 23.
  • When the identification unit 26 receives the reference image 1A and the distance information 52 from the first acquisition unit 12 and the second acquisition unit 13, respectively, it identifies the subject of interest and each background subject photographed in the first background image 2a, the background portion of the first foreground image 1a (the image of the subject of interest in the reference image 1A), and generates identification information 53 as the identification result (step S40 in FIG. 19).
  • Methods for identifying the subject of interest and each background subject include identification based on image information and identification based on distance information.
  • For example, a portion where the difference in distance information between corresponding pixels exceeds a predetermined range may be taken as the boundary between the subject of interest and a background subject; alternatively, a portion with unevenness exceeding a predetermined reference may be identified as the subject of interest, and a portion with unevenness at or below the reference as a background subject (see the sketch below).
  • By using the distance information, the subject of interest and the background subjects can be appropriately identified even when the boundary between their images is unclear (for example, because their patterns and colors are similar) and identification based only on image information is impossible.
  • Conversely, even if the subject of interest and the background subjects are identified only by image processing based on image information, the foreground and background partial images can be accurately identified in many cases, so the usefulness of the present invention is not impaired.
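  • A minimal sketch of distance-based identification as described above, assuming a depth map Z taken from the distance information 52 and an illustrative 0.5 m discontinuity threshold (both assumptions, not values from the patent):

```python
import numpy as np

def identify_by_distance(Z, depth_step=0.5):
    """Split pixels into subject-of-interest vs. background using measured
    distance: depth jumps larger than depth_step (meters) are treated as
    the boundary between subject and background (cf. identification 53)."""
    # mark boundary pixels where the distance changes abruptly
    jump_x = np.abs(np.diff(Z, axis=1, prepend=Z[:, :1])) > depth_step
    jump_y = np.abs(np.diff(Z, axis=0, prepend=Z[:1, :])) > depth_step
    boundary = jump_x | jump_y
    # a simple foreground test: nearer than the mean depth of the scene
    mid = np.nanmean(Z)
    foreground = (Z < mid) & ~np.isnan(Z)
    return foreground, boundary
```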
  • the identification information 53 generated by the identification unit 26 is supplied to the first generation unit 24 and the second generation unit 25.
  • A pseudo image 2A generated by the first generation unit 24, described later, is also supplied to the identification unit 26.
  • The first generation unit 24 has an operation mode in which it extracts, using the identification information 53, only the first foreground image 1a corresponding to the subject of interest from the reference image 1A, and generates from it the second foreground image 3a (FIG. 4), a pseudo image of the subject of interest corresponding to photography from the virtual viewpoint.
  • Operating in this mode, the first generation unit 24 can generate the pseudo image 2A at a lower processing cost than when generating the pseudo image 2A for the entire reference image 1A.
  • the identification unit 26 can also identify the subject of interest and the background subject based on the pseudo image 2A generated by the first generation unit 24.
  • The correspondence acquisition unit 15, supplied with the reference image 1A, the distance information 52, the imaging parameter 54, and the coordinate system information 55, acquires, based on the distance information 52, the correspondence 56 between the reference image 1A and the pseudo image 2A of each subject corresponding to photography from a virtual viewpoint different from the first viewpoint (step S50 in FIG. 19).
  • the correspondence relationship 56 is a correspondence relationship between each coordinate on the image of the reference image 1A and each coordinate on the image of the pseudo image 2A.
  • the shooting parameter 54 and the coordinate system information 55 are stored in the parameter storage unit 47.
  • The shooting parameter 54 comprises imaging parameters, such as the focal length, number of pixels, and pixel size, for each of the base camera 31, a virtual camera at the virtual viewpoint, and the distance measuring device that measures the distance information 52 (the stereo camera 300 in the present embodiment).
  • the coordinate system information 55 is information representing the relationship between the position and orientation of the reference camera 31, the virtual camera, and the distance measuring device.
  • Using these parameters, the distance information 52 corresponding to each pixel of the reference image 1A can be obtained even if the position and orientation of the base camera 31 and the distance measuring device differ, and the correspondence obtained when the three-dimensional shape represented by the distance information 52 is perspective-projected onto the pseudo image 2A can also be obtained.
  • In this way, each pixel of the reference image 1A (that is, each coordinate on the reference image 1A) is associated with each pixel of the pseudo image 2A (that is, each coordinate on the pseudo image 2A).
  • If one camera both captures the reference image 1A and captures an image used for ranging, as in a stereo camera, then each pixel of the reference image 1A can be associated with the distance information 52, and hence the correspondence 56 can be obtained, even if the relationship between the position and orientation of the base camera 31 and the distance measuring device is unknown among the shooting parameters 54 and the coordinate system information 55. One way such a correspondence can be realized is sketched after the next item.
  • the correspondence 56 acquired by the correspondence acquisition unit 15 is supplied to the first generation unit 24 and used to generate the pseudo image 2A.
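  • As an illustration of how a correspondence like 56 could be realized, sketched under the assumption of a virtual camera with intrinsics K_v and pose (R, t) relative to the measured coordinate system (all names illustrative), the measured 3D points are perspective-projected and pixel values forward-mapped; unfilled pixels then approximate the occlusion region. Depth ordering (z-buffering) is omitted for brevity:

```python
import numpy as np

def warp_to_virtual_view(image, points, K_v, R, t):
    """Forward-map each reference pixel to the virtual view.
    points: (h, w, 3) XYZ per pixel (measured distance info);
    K_v: 3x3 intrinsics of the virtual camera; R, t: its pose.
    Returns the pseudo image and a mask of unfilled pixels,
    which approximates the occlusion region."""
    h, w = image.shape[:2]
    pseudo = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    pts = points.reshape(-1, 3)
    ok = ~np.isnan(pts[:, 2])                   # pixels with measured distance
    cam = R @ pts[ok].T + t.reshape(3, 1)       # into the virtual camera frame
    uvw = K_v @ cam                             # perspective projection
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    src = np.flatnonzero(ok)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    pseudo[v[inside], u[inside]] = image.reshape(h * w, -1)[src[inside]].squeeze()
    filled[v[inside], u[inside]] = True
    return pseudo, ~filled                      # ~filled: no foreground or background mapped
```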
  • For some areas, such as the peripheral portion of the first foreground image 1a corresponding to the subject of interest, the distance information 52 may not be measurable, owing to occlusion caused by the parallax of the distance measuring device that measures by the principle of triangulation, or to the reduced amount of light reaching the measurement optical system when the subject surface is steeply inclined with respect to its optical axis.
  • However, the ratio of pixels for which the distance information 52 is not obtained to the total number of pixels of the first foreground image 1a is usually quite low.
  • Because the distance information 52 is based on actual measurement, the correspondence 56 can be obtained with higher accuracy than when estimated distance information is used. Therefore, even if the distance information 52 is not obtained for every pixel of the first foreground image 1a in the reference image 1A, the usefulness of the present invention is not impaired.
  • Pixels of the first foreground image 1a for which the distance information 52 has not been acquired, and from which the second foreground image 3a therefore cannot be formed, can easily be distinguished from pixels for which it has been acquired. Consequently, the situation in which pixel values corresponding to the subject of interest are set in the second region 7a corresponding to the background subject can easily be avoided.
  • FIG. 4 is a diagram illustrating an example of the pseudo image 2A.
  • Based on the correspondence 56 for at least the first foreground image 1a corresponding to the subject of interest and on the reference image 1A, the first generation unit 24 generates the pseudo image 2A, which includes the second foreground image 3a (the image of the subject of interest corresponding to photography from the virtual viewpoint) and the second background image 4a (the image of each background subject corresponding to photography from the virtual viewpoint) (step S60 in FIG. 19).
  • Depending on the operation mode set from the operation unit 42 or the like, the first generation unit 24 can use the identification information 53 to generate the second foreground image 3a (FIG. 4) of the pseudo image 2A from the first foreground image 1a (FIG. 3) of the reference image 1A according to the correspondence 56, while using the first background image 2a of the reference image 1A as-is as the second background image 4a of the pseudo image 2A, without applying the correspondence 56.
  • The pseudo image 2A shown in FIG. 4 is the pseudo image obtained when the operation mode that uses the first background image 2a as-is as the second background image 4a is selected.
  • the occlusion area 5a in FIG. 4 is an area where neither the second foreground image 3a nor the second background image 4a exists in the pseudo image 2A.
  • In this operation mode, the image of the background subject does not strictly correspond to the positional relationship for the virtual viewpoint. However, because the parallax between the reference image 1A and the pseudo image 2A for a distant background subject is smaller than that for the subject of interest, the observer feels little discomfort with the pseudo image 2A as long as the parallax for the subject of interest, on which the observer focuses, has the value corresponding to the virtual viewpoint.
  • the generated pseudo image 2A is supplied to the first specifying unit 22 and the second generating unit 25, and is also supplied to the identifying unit 26 as described in the explanation section of the identifying unit 26.
  • the first generation unit 24 may adopt the shape information 62 instead of the “depth estimation model” and acquire the pseudo image 2A by applying the method of Patent Document 1.
  • Information on the occlusion area 5a specified by the first specifying unit 22 is supplied to the second specifying unit 23 and the second generation unit 25.
  • The information on the specified occlusion area 5a may be generated, for example, as coordinate information of each pixel included in the occlusion area 5a or of each pixel on its boundary, or as an image such as the pseudo image 2A shown in FIG. 4.
  • The second specifying unit 23 is supplied with the pseudo image 2A, the occlusion area 5a, the imaging parameter 54, the coordinate system information 55, the distance information 52, and the shape information 62 from the first generation unit 24, the first specifying unit 22, the parameter storage unit 47, the second acquisition unit 13, and the third acquisition unit 14, respectively.
  • the second specifying unit 23 specifies the first region 6a related to the subject of interest in the occlusion region 5a and the second region 7a related to the background subject (step S80 in FIG. 19).
  • FIG. 6 is a diagram showing an example of the pseudo image 6A related to the shape information 62 of the entire circumference of the subject.
  • The shape region 6aA shown in FIG. 6 is the region occupied on the pseudo image for the virtual viewpoint by the three-dimensional shape represented by the shape information 62, when that shape is placed at the same position and in the same posture as the subject of interest.
  • Position and posture information of the three-dimensional shape represented by the distance information 52 can also be acquired. Since the three-dimensional shapes represented by the distance information 52 and by the shape information 62 belong to the same subject of interest, the shape represented by the shape information 62 can be given the same position and posture in three-dimensional space as the shape represented by the distance information 52.
  • The correspondence between the three-dimensional shape represented by the shape information 62 and its image formed on the pseudo image for the virtual viewpoint by perspective projection is obtained from the shooting parameter 54 and the coordinate system information 55.
  • In this way, the second specifying unit 23 specifies the shape region 6aA on the pseudo image for the virtual viewpoint and, for example, generates the pseudo image 6A expressing the shape region 6aA by assigning a predetermined pixel value only to that region.
  • As the method for the second specifying unit 23 to generate the pseudo image 6A from the shape information 62, the method disclosed in Japanese Patent Laid-Open No. 10-293862 may be employed.
  • FIG. 7 is a diagram showing an example of the pseudo image 3A in which the first area 6a and the second area 7a are set in the occlusion area 5a.
  • Based on the generated shape region 6aA and the information on the occlusion area 5a supplied from the first specifying unit 22, the second specifying unit 23 specifies the first region 6a, the occlusion region corresponding to the subject of interest, within the occlusion area 5a, and specifies, for example, the area of the occlusion area 5a not included in the first region 6a as the second region 7a, the occlusion region related to the background subjects.
  • That is, the second specifying unit 23 specifies the first region 6a corresponding to the subject of interest in the occlusion area 5a based on the shape information 62, and further specifies the second region 7a corresponding to each background subject.
  • Information relating to the identified first region 6a and second region 7a is supplied to the second generation unit 25.
  • Note that even if the first region 6a and the second region 7a are set by dividing the occlusion area 5a at a predetermined ratio, for example an area ratio or a horizontal or vertical pixel-count ratio of 1:3 based on statistical data, the range of the occlusion region corresponding to the subject of interest (the first region 6a) and the range corresponding to the background subjects (the second region 7a) can usually be specified well enough that the observer feels no discomfort, so the usefulness of the present invention is not impaired.
  • The same holds even when the first region 6a and the second region 7a are set at a predetermined ratio not based on statistical data.
  • The information on the specified first region 6a and second region 7a may be generated, for example, as coordinate information of each pixel included in these regions or of each pixel on their boundary, or as an image such as the pseudo image 3A shown in FIG. 7.
  • Although the method above uses the information of the occlusion area 5a, the second specifying unit 23 can also specify the first region 6a and the second region 7a without using the information of the occlusion area 5a, according to the operation mode set from the operation unit 42.
  • In that case, the second specifying unit 23 first specifies the shape region 6aA by, for example, the method described above, and specifies as the first region 6a the area of the shape region 6aA that does not include the second foreground image 3a of the pseudo image 2A supplied from the first generation unit 24.
  • Next, the second specifying unit 23 specifies as the second region 7a the area of the pseudo image 2A that includes none of the second foreground image 3a, the first region 6a, and the second background image 4a. The first region 6a and the second region 7a are thus specified without using the information of the occlusion area 5a.
  • That is, the second specifying unit 23 also functions as specifying means that specifies the first region 6a based on the distance information 52 and the shape information 62, and specifies the second region 7a as the area of the pseudo image 2A including none of the second foreground image 3a, the first region 6a, and the second background image 4a. Both specification modes are sketched below.
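  • Expressed as boolean-mask operations, the two specification modes described above might look like this sketch (all mask names are illustrative, not from the patent):

```python
import numpy as np

def split_occlusion(occlusion, shape_region):
    """Split the occlusion area (cf. 5a) into the first region (cf. 6a,
    subject of interest) and the second region (cf. 7a, background),
    using the projected whole-shape silhouette (cf. 6aA)."""
    first = occlusion & shape_region             # hole pixels inside the silhouette
    second = occlusion & ~shape_region           # remaining hole pixels
    return first, second

def split_without_occlusion_mask(shape_region, foreground, background):
    """Alternative mode: derive both regions without the occlusion mask."""
    first = shape_region & ~foreground           # silhouette not covered by 2nd foreground
    second = ~(foreground | first | background)  # covered by nothing else
    return first, second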
  • The second generation unit 25 is supplied with the pseudo image 2A, the occlusion area 5a, the first region 6a, the second region 7a, and the identification information 53 from the first generation unit 24, the first specifying unit 22, the second specifying unit 23, and the identification unit 26.
  • the second generation unit 25 generates the pseudo image 4A (FIG. 2) by generating images of the first region 6a and the second region 7a from these pieces of information according to the operation mode input from the operation unit 42. (Step S100 in FIG. 19).
  • Depending on the operation mode input from the operation unit 42, the second generation unit 25 can also generate the pseudo image 4B (FIG. 2) by generating the image of the occlusion area 5a without using the information specifying the first region 6a and the second region 7a.
  • the occlusion area normally includes information on both the subject of interest and each background subject.
  • The second generation unit 25 therefore generates the image of the occlusion area 5a, or the images of the first region 6a corresponding to the subject of interest and the second region 7a corresponding to the background subjects, based on the respective information on the subject of interest and on each background subject.
  • As a method of generating the occlusion area 5a, or the first region 6a and the second region 7a, based on the information on the subject of interest and on each background subject, for example, a method of generating them based on image information relating to the subject of interest and to each background subject in an image such as the reference image 1A or the pseudo image 2A is employed.
  • Alternatively, if colors and patterns are set in advance for the subject of interest and the background subjects, the second generation unit 25 generates the image of the occlusion area 5a, or of the first region 6a and the second region 7a, based on the set colors and patterns.
  • In this way, the second generation unit 25 generates the image of the occlusion area 5a, or of the first region 6a and the second region 7a, based on the image information and characteristics of the subject of interest and the background subjects.
  • Specifically, a method is adopted in which an area to be used for generating the image of each occlusion region is determined, and the image of each occlusion region is generated based on the image of that area.
  • FIG. 14 is a diagram illustrating an example of a technique for generating an image of the occlusion area 5b based on the partial area 8g provided in the second foreground image 3b.
  • FIG. 15 is a diagram showing an example of a technique for generating an image of the occlusion area 5b based on the partial area 8h provided in the second background image 4b.
  • In the example of FIG. 14, the image of the occlusion area 5b is generated by copying the texture of the partial area 9a, of 3 × 3 pixels for example, provided in the partial area 8g, to the partial area 9b.
  • The image of the partial area 9a may be copied not only to the partial area 9b but also to other partial areas in the occlusion area 5b.
  • In the example of FIG. 15, the image of the occlusion area 5b is generated by copying the texture of the partial area 9c to the partial area 9b in the same manner as in FIG. 14.
  • Alternatively, a method may be employed that generates the image of an occlusion area using the mode pixel value of a histogram of pixel values taken over a predetermined area in the non-occlusion area, or the average pixel value of such an area; both variants are sketched below.
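  • The texture-copy and statistic-based fills described above can be sketched as follows for a single-channel image (an assumption for brevity; the patch location and size are illustrative):

```python
import numpy as np

def fill_by_texture_copy(image, occlusion, patch_origin, size=3):
    """Fill the occlusion region by tiling the texture of a small sample
    patch (cf. the 3 x 3 partial area 9a of FIG. 14) taken from a
    non-occluded area. occlusion is a boolean mask."""
    r, c = patch_origin
    patch = image[r:r + size, c:c + size]
    ys, xs = np.nonzero(occlusion)
    image[ys, xs] = patch[ys % size, xs % size]   # repeat the sample texture
    return image

def fill_by_statistic(image, occlusion, sample_region):
    """Variant using a single statistic of a non-occluded sample area:
    here the mean; a histogram mode could be used instead, as the text notes."""
    image[occlusion] = image[sample_region].mean()
    return image
```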
  • Further, by switching the operation mode with the operation unit 42, the second generation unit 25 can generate the image of the occlusion area 5a, or of the first region 6a and the second region 7a, based on a boundary region adjoining the occlusion area 5a in the images of the subject of interest and the background subjects.
  • The term "boundary region" as used in the present invention is described below.
  • The occlusion regions according to the present embodiment, such as the occlusion area 5a, the first region 6a, and the second region 7a, arise when the boundary portion between the image of the subject of interest and the image of the background subject separates on the image as the pseudo image is generated.
  • As the boundary region for a three-dimensional subject such as a person, for example, a method is adopted in which a normal defined from the distance information 52 is obtained for each pixel in the region of interest, and each pixel whose normal differs in angle from the normal of a pixel at the boundary of the region by no more than a predetermined angular range is determined to belong to the boundary region, a partial region of that region.
  • For example, 45 degrees is adopted as the predetermined angular range; the smaller the angular range, the closer the maximum extent of the boundary region is to the boundary of the region of interest.
  • When the extent of the boundary region is set based on an angular range of normals as described above, the normal for a target pixel is obtained from the distance information 52: for example, a plane is defined from the three-dimensional coordinate values, given by the distance information 52, of the target pixel and of the two pixels adjacent to it in the horizontal and vertical directions, and the normal of that plane is taken as the normal for the target pixel, as sketched below.
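  • A sketch of this normal computation and the angular test, assuming the (h, w, 3) array of measured 3D points from the distance information 52 and an interior pixel (r, c) (names illustrative):

```python
import numpy as np

def pixel_normal(points, r, c):
    """Normal for pixel (r, c) from the measured 3D points: a plane is
    defined by the pixel and its horizontal and vertical neighbours, and
    the plane's unit normal is returned. Assumes (r, c) is not on the
    last row or column."""
    p = points[r, c]
    ex = points[r, c + 1] - p                 # step to the horizontal neighbour
    ey = points[r + 1, c] - p                 # step to the vertical neighbour
    n = np.cross(ex, ey)
    return n / np.linalg.norm(n)

def within_angle(n1, n2, max_deg=45.0):
    """True if two unit normals differ by no more than the predetermined
    angular range (45 degrees in the example above)."""
    cosang = np.clip(np.dot(n1, n2), -1.0, 1.0)
    return np.degrees(np.arccos(cosang)) <= max_deg
```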
  • The setting of a boundary region based on the normal angle described above may also be adopted for the boundary region of the background portion.
  • For the boundary region of the second background image 4a, a method may also be employed that determines it as a partial region of the area extending from the boundary with the occlusion area 5a toward the inside of the second background image 4a, by a number of pixels set as a predetermined ratio of the horizontal or vertical pixel count of the second background image 4a; for example, 1/5 is adopted as the predetermined ratio. The maximum extent of the boundary region may thus be determined by a number of pixels.
  • That is, the "boundary region" in the present application is a partial region of an area whose maximum extent is set from the boundary between two regions on the image toward the inside of one of them, based on a predetermined condition defining a range of a geometric characteristic of the subject, such as the normal angular range described above, or a predetermined mathematical condition defining the extent of the region, such as its pixel count or size.
  • The "boundary region" is not limited to a partial region in contact with the boundary.
  • The second generation unit 25 is configured to be able to carry out several types of occlusion-region image generation using boundary regions, and these functions are switched by input from the operation unit 42.
  • 8 to 13, 16, and 17 are diagrams illustrating an example of a technique for generating an image of an occlusion area using a boundary area.
The image of the first area 6a is generated based on the boundary area in the second foreground image 3a that is set at the boundary 8a with the first area 6a, and the image of the second area 7a is generated based on the boundary area in the second background image 4a that is set at the boundary 8b with the second area 7a.
FIG. 10 shows an example in which an image of the occlusion area 5b is generated based on the partial area 8c, which is a boundary area in the second foreground image 3b.
FIG. 11 shows an example in which an image of the occlusion area 5b is generated based on the partial area 8d, a boundary area in the second foreground image 3b that is in contact with the boundary between the second foreground image 3b and the occlusion area 5b.
The partial area 8e is a boundary area that is not in contact with the boundary between the occlusion area 5b and the second background image 4b, while the partial area 8f is a boundary area that is in contact with that boundary.
In another method, the second generation unit 25 first generates an image of a first boundary region (not shown), set near the boundary with the second foreground image 3b within the occlusion region 5b, based on the partial region group 9d, which is the boundary region with the occlusion region 5b in the second foreground image 3b, and generates an image of a second boundary region, set near the boundary with the second background image 4b within the occlusion region 5b, based on the partial region group 9e, which is the boundary region with the occlusion region 5b in the second background image 4b.
The second generation unit 25 then generates the image of the occlusion area 5b so that the pixel values of the occlusion area 5b change gradually from the first boundary region to the second boundary region.
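A minimal sketch of such a gradual fill, assuming a colour image as an H x W x 3 array, the occlusion area as a boolean mask, and a horizontal shift direction (all names hypothetical):

    import numpy as np

    def fill_occlusion_gradient(image, occlusion_mask):
        # For every horizontal run of occlusion pixels, interpolate linearly
        # between the pixel just outside one end of the run (first boundary
        # region side) and the pixel just outside the other end (second
        # boundary region side).
        out = image.astype(np.float64).copy()
        for r in range(image.shape[0]):
            idx = np.flatnonzero(occlusion_mask[r])
            if idx.size == 0:
                continue
            runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
            for run in runs:
                a, b = run[0] - 1, run[-1] + 1  # bounding non-occluded pixels
                if a < 0 or b >= image.shape[1]:
                    continue
                t = np.linspace(0.0, 1.0, run.size + 2)[1:-1]
                out[r, run] = (1.0 - t)[:, None] * out[r, a] + t[:, None] * out[r, b]
        return out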
The arrow 12a indicates the shift direction used when the second foreground image 3b is generated based on the correspondence 56; the image of the occlusion area 5b is generated along this shift direction.
The image of the occlusion area 5b may also be generated so that the pixel values change gradually from the boundary area of the first area 6b to the boundary area of the second area 7b; in this way as well, a pseudo image with little discomfort can be generated.
In this case, the second generation unit 25 generates the image of the occlusion region 5b so that the pixel values of the partial region 10a, which extends from the partial region group 9f, the boundary region on the second region 7b side within the first region 6b, to the partial region group 9g, the boundary region on the first region 6b side within the second region 7b, change gradually.
The arrow 12b indicates the shift direction used when the second foreground image 3b is generated based on the correspondence 56; the images of the first region 6b and the second region 7b are generated so that they change gradually along this shift direction. The shift direction can also be set by the operator from the operation unit 42.
In FIGS. 16 and 17, each of the partial region groups 9d to 9g is shown as an example of partial areas set discretely within the boundary region that contains it.
  • the second generation unit 25 generates an image of each occlusion area such as the occlusion area 5a or the first area 6a and the second area 7a by the method described above.
  • the second generation unit 25 determines whether each occlusion area is in accordance with the set operation mode. For example, a smoothing process such as by using a smoothing filter such as a 3 ⁇ 3 pixel Gaussian filter may be performed on the above image.
  • a smoothing process such as by using a smoothing filter such as a 3 ⁇ 3 pixel Gaussian filter may be performed on the above image.
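A minimal sketch of such a smoothing pass, applying a 3 × 3 Gaussian kernel and keeping the smoothed values only inside the occlusion area (names hypothetical; the mask is a boolean array):

    import numpy as np

    GAUSS_3x3 = np.array([[1, 2, 1],
                          [2, 4, 2],
                          [1, 2, 1]], dtype=np.float64) / 16.0

    def smooth_occlusion(image, occlusion_mask):
        # Convolve with the 3x3 Gaussian kernel (via shifted copies) and
        # write the result back only where the occlusion mask is set.
        img = image.astype(np.float64)
        smoothed = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                smoothed += GAUSS_3x3[dy + 1, dx + 1] * np.roll(img, (dy, dx), axis=(0, 1))
        out = img.copy()
        out[occlusion_mask] = smoothed[occlusion_mask]
        return out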
The second generation unit 25 then displays the pseudo image, in which the image of each occlusion area has been generated, on the display unit 43 (step S100 in FIG. 19), and ends the pseudo image generation process.
As described above, the occlusion area 5a on the pseudo image can be specified more accurately based on the distance information 52 of each subject obtained by actual measurement, and since the image of the specified occlusion area 5a is generated based on the subject of interest and each background subject, a pseudo image with little discomfort can be generated.
Further, the first area 6a corresponding to the subject of interest and the second area 7a corresponding to each background subject are specified within the occlusion area 5a specified on the pseudo image, the image of the first area 6a is generated based on information about the subject of interest, and the image of the second area 7a is generated based on information about each background subject; the images of the first area 6a and the second area 7a can therefore be made more similar to the actual image corresponding to the pseudo image, so that a pseudo image with less discomfort can be generated.
Further, since the shape information 62 expressing the full-perimeter three-dimensional shape of the subject of interest is acquired and the first region 6a is specified based on the shape information 62, the first region 6a, which is the occlusion region corresponding to the subject of interest, can be specified more accurately, and a pseudo image with less discomfort can be generated.
Further, since the image of the first area 6a is generated based on the boundary area with the first area 6a in the second foreground image 3a, and the image of the second area 7a is generated based on the boundary area with the second area 7a in the second background image 4a, the images of the first area 6a and the second area 7a can be made to resemble the actual image corresponding to the pseudo image more closely, and a pseudo image with less discomfort can be generated.
An image of an occlusion area may also be generated based on reference images photographed in time sequence. A method for generating an image of an occlusion area based on time-series images is described below.
FIG. 18 is a diagram illustrating an example of such a method.
The reference images 1B to 1E shown in FIG. 18 are a series of time-series images of the subject of interest photographed in time sequence, displayed in shooting order along the time axis t1.
The first foreground images 1b to 1e are the images of the subject of interest in the reference images 1B to 1E, respectively; the subject of interest moves relative to the camera.
The partial areas 11b to 11e are set at the same position and with the same range in the reference images 1B to 1E, respectively; the positions need not be exactly the same and may be slightly shifted.
The whole of the partial area 11b and almost the whole of the partial area 11c lie in the background portions of the first foreground images 1b and 1c, respectively, whereas the partial areas 11d and 11e lie entirely within the first foreground images 1d and 1e, respectively.
An occlusion region in a pseudo image that is generated from one reference image and corresponds to photographing from a viewpoint different from that of the reference image contains information on both the subject and its background; because the subject of interest moves relative to the camera, a background portion hidden by the subject in one time-series image may be visible in another, and the other time-series images can therefore be used to generate the image of the occlusion region.
Note that the partial areas 11b to 11e are not limited to parts of the reference images 1B to 1E; each of them may be, for example, the whole of the corresponding reference image.
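As an illustrative sketch of this idea (hypothetical names; the time-series images are assumed to be aligned with the pseudo image, as with the fixed partial areas 11b to 11e above, and per-frame foreground masks for the subject of interest are assumed given):

    import numpy as np

    def fill_from_time_series(pseudo, occlusion_mask, frames, foreground_masks):
        # For each occlusion pixel, copy the value from the first time-series
        # image in which that position is not covered by the subject of
        # interest, i.e. in which the background is actually visible.
        out = pseudo.copy()
        remaining = occlusion_mask.copy()
        for frame, fg in zip(frames, foreground_masks):
            usable = remaining & ~fg
            out[usable] = frame[usable]
            remaining &= ~usable
            if not remaining.any():
                break
        return out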
The pseudo image generation system according to this modification includes the stereo camera 300 of the pseudo image generation system 100A according to the embodiment, and a pseudo image generation apparatus configured similarly to the pseudo image generation apparatus 200A according to the embodiment.
The stereo camera 300 has a continuous shooting function for photographing a subject continuously in time sequence. Using this continuous shooting function, the stereo camera 300 according to the modification generates a plurality of standard images and a plurality of reference images of each subject, and supplies them to the pseudo image generation apparatus according to the modification.
Except for a first acquisition unit and a second generation unit, which correspond to the first acquisition unit 12 and the second generation unit 25 of the pseudo image generation apparatus 200A according to the embodiment, each functional unit of this pseudo image generation apparatus is the same as that of the pseudo image generation apparatus 200A according to the embodiment.
The first acquisition unit according to the modification acquires the plurality of standard images and the plurality of reference images, which are time-series images of each subject photographed in time sequence by the stereo camera 300.
The first acquisition unit supplies one standard image among the plurality of acquired standard images to the second acquisition unit 13, the correspondence acquisition unit 15, the first generation unit 24, and the identification unit 26, and supplies the plurality of acquired standard images to the second generation unit 25.
The first acquisition unit also supplies to the second acquisition unit 13, among the plurality of acquired reference images, the one reference image photographed at the same time as the one standard image.
Note that the reference image supplied to the second acquisition unit 13 may be any reference image in which each subject appears in the same state as in the one standard image; it is not limited to the reference image photographed at the same time.
The second acquisition unit 13 supplied with the reference image acquires distance information based on actual measurement for at least each point of the subject of interest in the state in which the one standard image was acquired.
The second generation unit according to the modification sets, in each of a predetermined number of the standard images supplied from the first acquisition unit, an area corresponding to the occlusion region of the pseudo image supplied from the first generation unit 24, generates an image to be used for generating the image of the occlusion area by applying the method described with reference to FIG. 18 to those areas, and generates the image of the occlusion area using the generated image.
In this way, the second generation unit generates the image of the occlusion area in the pseudo image based on the plurality of standard images, which are a plurality of time-series images.
Since an image corresponding to the occlusion area is searched for by image recognition processing over the plurality of time-series images and used to generate the image of the occlusion area, the image of the occlusion area is generated based on actually photographed images of the subjects, so an image of the occlusion area closer to the real object can be generated.
One of a plurality of time-series images may differ significantly from the other time-series images photographed before and after it.
In such a case, an abnormal image affected by noise or the like can more easily be extracted based on the continuity of the motion of the subject over the plurality of time-series images.
The movement of a subject differs depending on, for example, whether the subject is a person or a car; if the type of subject is known in advance, using these subject characteristics to predict the movement of the subject over the plurality of time-series images also makes it easier to extract an abnormal image affected by noise or the like.
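A minimal sketch of an extraction test based on motion continuity (hypothetical names; the subject's image-plane position per frame is assumed to have been obtained already, for example as the centroid of its region):

    import numpy as np

    def abnormal_frames(centroids, tol=5.0):
        # centroids: N x 2 array of the subject position in each time-series
        # image. A frame is flagged when its position deviates by more than
        # tol pixels from the position predicted by assuming the motion
        # continues smoothly between the neighbouring frames.
        flags = np.zeros(len(centroids), dtype=bool)
        for i in range(1, len(centroids) - 1):
            predicted = (centroids[i - 1] + centroids[i + 1]) / 2.0
            if np.linalg.norm(centroids[i] - predicted) > tol:
                flags[i] = True
        return flags

A known subject type could refine the prediction step, for example by substituting a motion model appropriate to a person or a car.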
Further, the proportion of cases in which the image of the occlusion area becomes an image containing information on both the subject of interest and the background subjects is high, so the proportion of cases in which a pseudo image with little discomfort can be generated can be increased.
Even if the range of each occlusion region is specified using not only the correspondence based on the distance information 52 between the standard image 1A and the pseudo image, but also a correspondence based on the distance information 52 between the pseudo image and the reference image 1R photographed in synchronization with the standard image 1A, and the image of each occlusion region is generated using the standard image 1A and the reference image 1R, the usefulness of the present invention is not impaired.
In this case, the range of each occlusion region can be specified more accurately and more narrowly, and since the image of each occlusion region can be generated using more appropriate information about each subject in the standard image 1A and the reference image 1R, a pseudo image with less discomfort can be generated.
The stereo camera may also be constituted by three or more cameras. For example, sets of stereo images or series of time-series images of the subject may be photographed from positions in a direction substantially perpendicular to the baseline direction, and a pseudo image may be generated using the various methods described above.
In this case, the occlusion areas on the subject can be reduced, the occlusion areas on the pseudo image can be specified as narrower and more accurate ranges, and information about each subject can be acquired more accurately based on the images photographed from multiple directions, so a pseudo image with even less discomfort can be generated.

Abstract

Disclosed is a technology capable of generating, from images of photographed subjects, simulated images corresponding to photography from virtual viewpoints while reducing the sense of discomfort they cause. A simulated image generating device comprises: a first acquisition means for acquiring reference images in which the subjects are photographed from a first viewpoint; a second acquisition means for acquiring distance information based on actual measurement for at least each point of a subject that is being focused upon; a distinguishing means for distinguishing between the subject that is being focused upon and each background subject; a correspondence acquisition means for acquiring correspondences between the reference images and the simulated images; a first generating means for generating, on the basis of the correspondences and the reference images, the simulated images containing second foreground images of the subject that is being focused upon and second background images of each background subject; a first identifying means for identifying occlusion regions within the simulated images; and a second generating means for generating images of the occlusion regions on the basis of the respective information of the subject that is being focused upon and of each background subject.

Description

Pseudo image generation apparatus and pseudo image generation method
The present invention relates to a pseudo image generation apparatus and method that use an image of a subject photographed from one viewpoint to generate a pseudo image corresponding to photographing the subject from a virtual viewpoint different from that viewpoint.
In recent years, pseudo image generation apparatuses, which generate in a simulated manner the image that would be obtained if a subject were photographed from a virtual viewpoint different from the viewpoint from which it was actually photographed, without actually photographing from the virtual viewpoint, have begun to be used for purposes such as generating groups of stereoscopically viewable images.
In the pseudo image generation apparatus of Patent Document 1, the depth of the subject (a depth estimation model) is estimated from the screen composition of a single captured image (the reference image), and the pseudo image is generated from the reference image by obtaining, based on the obtained depth information, the correspondence between each coordinate on the reference image and each coordinate on the pseudo image.
Here, for a region of the subject that is visible from the virtual viewpoint but not from the viewpoint from which the reference image was photographed, the correspondence between the reference image and the pseudo image cannot be obtained; for the region of the pseudo image corresponding to such a region (the occlusion region), an appropriate pixel value therefore cannot be determined from the correspondence.
For this reason, in the pseudo image generation apparatus of Patent Document 1, after the screen area is divided by a region integration method, the pixel values of the occlusion region are set using statistics of the texture within each region.
JP 2005-151534 A
However, since the pseudo image generation apparatus of Patent Document 1 obtains the correspondence between the reference image and the pseudo image based on an estimated depth, an appropriate correspondence cannot be obtained. The range of the occlusion region is therefore also inaccurate, and there is a problem that an observer who views the generated pseudo image feels a sense of discomfort.
In addition, an occlusion region usually contains information on both the subject and its background, but the pseudo image generation apparatus of Patent Document 1 makes no attempt to improve image quality by exploiting the fact that the occlusion region contains information on the subject and its background; there is therefore also a problem that images of occlusion regions in which an observer readily feels discomfort are generated frequently.
The present invention has been made to solve these problems, and an object of the present invention is to provide a technique for specifying the range of an occlusion region more accurately and generating a pseudo image with less discomfort.
In order to solve the above problem, a pseudo image generation apparatus according to a first aspect comprises: first acquisition means for acquiring a reference image in which each subject is photographed from a first viewpoint; second acquisition means for acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement; identification means for identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image; correspondence acquisition means for acquiring, based on the distance information, a correspondence between the reference image and a pseudo image of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint; first generation means for generating, based on the reference image and the correspondence for at least the first foreground image of the reference image, the pseudo image containing a second foreground image, which is the image of the subject of interest corresponding to photographing from the virtual viewpoint, and a second background image, which is the image of each background subject corresponding to photographing from the virtual viewpoint; first specifying means for specifying an occlusion region of the pseudo image that contains neither the second foreground image nor the second background image; and second generation means for generating an image of the occlusion region based on information about the subject of interest and about each background subject.
A pseudo image generation apparatus according to a second aspect is the pseudo image generation apparatus according to the first aspect, further comprising second specifying means for specifying, within the occlusion region, a first region corresponding to the subject of interest and a second region corresponding to each background subject. The second generation means generates an image of the first region based on information about the subject of interest, and generates an image of the second region based on information about each background subject.
A pseudo image generation apparatus according to a third aspect is the pseudo image generation apparatus according to the second aspect, further comprising third acquisition means for acquiring shape information representing the full-perimeter three-dimensional shape of the subject of interest. The second specifying means specifies the first region based on the shape information.
A pseudo image generation apparatus according to a fourth aspect is the pseudo image generation apparatus according to the second aspect, wherein the second generation means generates the image of the first region based on a boundary region of the second foreground image with the first region, and generates the image of the second region based on a boundary region of the second background image with the second region.
A pseudo image generation apparatus according to a fifth aspect is the pseudo image generation apparatus according to the fourth aspect, wherein the second generation means generates the images of the first region and the second region so that the pixel values of the region extending from the boundary region on the second region side of the first region to the boundary region on the first region side of the second region change gradually.
A pseudo image generation apparatus according to a sixth aspect is the pseudo image generation apparatus according to the first aspect, wherein the second generation means (a) generates an image of a first boundary region of the occlusion region with the second foreground image based on a boundary region of the second foreground image with the occlusion region, and generates an image of a second boundary region of the occlusion region with the second background image based on a boundary region of the second background image with the occlusion region, and (b) generates the image of the occlusion region so that the pixel values of the occlusion region change gradually from the first boundary region to the second boundary region.
A pseudo image generation apparatus according to a seventh aspect comprises: first acquisition means for acquiring a plurality of time-series images in which each subject is photographed in time sequence; second acquisition means for acquiring, taking one of the plurality of time-series images as a reference image, distance information based on actual measurement for at least each point of a subject of interest among the subjects in the state in which the reference image was acquired; identification means for identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image; correspondence acquisition means for acquiring, based on the distance information, a correspondence between the reference image and a pseudo image of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint from which the reference image was photographed; first generation means for generating, based on the reference image and the correspondence for at least the first foreground image of the reference image, the pseudo image containing a second foreground image, which is the image of the subject of interest corresponding to photographing from the virtual viewpoint, and a second background image, which is the image of each background subject corresponding to photographing from the virtual viewpoint; first specifying means for specifying an occlusion region of the pseudo image that contains neither the second foreground image nor the second background image; and second generation means for generating an image of the occlusion region based on the plurality of time-series images.
A pseudo image generation apparatus according to an eighth aspect is the pseudo image generation apparatus according to the first aspect, wherein the second generation means performs a smoothing process on the generated image of the occlusion region.
A pseudo image generation apparatus according to a ninth aspect comprises: first acquisition means for acquiring a reference image in which each subject is photographed from a first viewpoint; second acquisition means for acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement; third acquisition means for acquiring shape information representing the full-perimeter three-dimensional shape of the subject of interest; and generation means for generating a pseudo image of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint, by specifying, based on the reference image, the distance information, and the shape information, an occlusion region, which is a region of the pseudo image whose corresponding portion is not photographed in the reference image and which comprises a first region corresponding to the subject of interest and a second region corresponding to each background subject photographed in the background portion of the image of the subject of interest, generating an image of the first region based on information about the subject of interest, and generating an image of the second region based on information about each background subject.
A pseudo image generation apparatus according to a tenth aspect is the pseudo image generation apparatus according to the ninth aspect, wherein the generation means comprises: (a) identification means for identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image; (b) correspondence acquisition means for acquiring a correspondence between the reference image and the pseudo image based on the distance information; (c) first generation means for generating, based on the reference image and the correspondence for at least the first foreground image of the reference image, the pseudo image containing a second foreground image, which is the image of the subject of interest corresponding to photographing from the virtual viewpoint, and a second background image, which is the image of each background subject corresponding to photographing from the virtual viewpoint; (d) specifying means for specifying the first region based on the distance information and the shape information, and specifying the second region as a region of the pseudo image that contains none of the second foreground image, the first region, and the second background image; and (e) second generation means for generating the image of the first region based on information about the subject of interest, and generating the image of the second region based on information about each background subject.
A pseudo image generation method according to an eleventh aspect comprises: a step of acquiring a reference image in which each subject is photographed from a first viewpoint; a step of acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement; a step of identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image; a step of acquiring, based on the distance information, a correspondence between the reference image and a pseudo image of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint; a step of generating, based on the reference image and the correspondence for at least the first foreground image of the reference image, the pseudo image containing a second foreground image, which is the image of the subject of interest corresponding to photographing from the virtual viewpoint, and a second background image, which is the image of each background subject corresponding to photographing from the virtual viewpoint; a step of specifying an occlusion region of the pseudo image that contains neither the second foreground image nor the second background image; and a step of generating an image of the occlusion region based on information about the subject of interest and about each background subject.
A pseudo image generation method according to a twelfth aspect comprises: a step of acquiring a plurality of time-series images in which each subject is photographed in time sequence; a step of acquiring, taking one of the plurality of time-series images as a reference image, distance information based on actual measurement for at least each point of a subject of interest among the subjects in the state in which the reference image was acquired; a step of identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image; a step of acquiring, based on the distance information, a correspondence between the reference image and a pseudo image of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint from which the reference image was photographed; a step of generating, based on the reference image and the correspondence for at least the first foreground image of the reference image, the pseudo image containing a second foreground image, which is the image of the subject of interest corresponding to photographing from the virtual viewpoint, and a second background image, which is the image of each background subject corresponding to photographing from the virtual viewpoint; a step of specifying an occlusion region of the pseudo image that contains neither the second foreground image nor the second background image; and a step of generating an image of the occlusion region based on the plurality of time-series images.
A pseudo image generation method according to a thirteenth aspect comprises: a step of acquiring a reference image in which each subject is photographed from a first viewpoint; a step of acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement; a step of acquiring shape information representing the full-perimeter three-dimensional shape of the subject of interest; and a step of generating a pseudo image of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint, by specifying, based on the reference image, the distance information, and the shape information, an occlusion region, which is a region of the pseudo image whose corresponding portion is not photographed in the reference image and which comprises a first region corresponding to the subject of interest and a second region corresponding to each background subject photographed in the background portion of the image of the subject of interest, generating an image of the first region based on information about the subject of interest, and generating an image of the second region based on information about each background subject.
With the pseudo image generation apparatus according to any of the first to tenth aspects, or the pseudo image generation method according to any of the eleventh to thirteenth aspects, the range of the occlusion region on the pseudo image can be specified more accurately based on the distance information of the subjects obtained by actual measurement, and since the image of the specified occlusion region is generated based on the subject of interest and each background subject, a pseudo image with little discomfort can be generated.
FIG. 1 is a block diagram showing an example of the main configuration of a pseudo image generation system according to the embodiment.
FIG. 2 is a block diagram showing an example of the main functional configuration of a pseudo image generation apparatus according to the embodiment.
FIG. 3 is a diagram showing an example of a standard image.
FIG. 4 is a diagram showing an example of a pseudo image.
FIG. 5 is a diagram showing an example of distance information.
FIG. 6 is a diagram showing an example of a pseudo image based on full-perimeter shape information of a subject.
FIG. 7 is a diagram showing an example of a pseudo image in which a first region and a second region are set in an occlusion region.
FIGS. 8 to 17 are diagrams each showing an example of a technique for generating an image of an occlusion region.
FIG. 18 is a diagram showing an example of a technique for generating an image of an occlusion region based on time-series image information.
FIG. 19 is a diagram showing an example of the operation flow of the pseudo image generation apparatus according to the embodiment.
<About the embodiment:>
<◎ About the pseudo image generation system 100A:>
FIG. 1 is a block diagram showing an example of the main configuration of the pseudo image generation system 100A according to the embodiment.
As shown in FIG. 1, the pseudo image generation system 100A mainly includes a stereo camera 300 and a pseudo image generation apparatus 200A.
◎ About the stereo camera 300:
As shown in FIG. 1, the stereo camera 300 mainly includes a base camera 31 and a reference camera 32. The base camera 31 and the reference camera 32 each mainly comprise a photographing optical system and a control processing circuit (not shown).
The base camera 31 and the reference camera 32 are provided a predetermined baseline length apart, and by processing the information on light rays from the subject entering their photographing optical systems in synchronization with the control processing circuits and the like, they generate a standard image 1A and a reference image 1R, which constitute a stereo image of the subject and are digital images of a predetermined size such as VGA.
The generated standard image 1A and reference image 1R are supplied to the input/output unit 41 of the pseudo image generation apparatus 200A via a data line DL. The various operations of the stereo camera 300 are controlled based on control signals supplied from the pseudo image generation apparatus 200A via the input/output unit 41 and the data line DL.
Note that the stereo camera 300 can also generate a plurality of standard images 1A and a plurality of reference images 1R by photographing the subject continuously in time sequence while keeping the base camera 31 and the reference camera 32 synchronized. The standard image 1A and the reference image 1R may be color images or monochrome images.
Although the stereo camera 300 is employed in the pseudo image generation system 100A, a light projecting device that projects various detection light for shape measurement, such as laser light, onto the subject may be employed in place of the reference camera 32 of the stereo camera 300; in that case the base camera 31 and the light projecting device constitute an active-ranging three-dimensional measuring machine, and this three-dimensional measuring machine may be employed instead of the stereo camera 300.
With the stereo camera 300, or with such a three-dimensional measuring machine, the image of the subject and the image used for measuring the distance information can be shared, so that the processing cost of associating the image with the distance information can be reduced when the correspondence 56 (FIG. 2) is acquired by the correspondence acquisition unit 15 described later.
Even if the three-dimensional measuring machine adopts a configuration in which the distance information 52 (FIG. 2) about the subject is measured based on an image photographed from a predetermined viewpoint different from that of the standard image 1A, the standard image 1A and the distance information 52 can still be associated with each other through matching between that image and the standard image 1A, so the usefulness of the present invention is not impaired.
◎ About the configuration of the pseudo image generation apparatus 200A:
As shown in FIG. 1, the pseudo image generation apparatus 200A mainly includes a CPU 11A, an input/output unit 41, an operation unit 42, a display unit 43, a ROM 44, a RAM 45, and a storage device 46, and is realized by, for example, a general-purpose computer or a dedicated hardware device.
The input/output unit 41 is constituted by an input/output interface such as a USB interface, and performs input of the image information and the like supplied from the stereo camera 300 to the pseudo image generation apparatus 200A, and output of various control signals and the like from the pseudo image generation apparatus 200A to the stereo camera 300.
The operation unit 42 is constituted by, for example, a keyboard or a mouse; by operating the operation unit 42, the operator sets various control parameters and various operation modes of the pseudo image generation apparatus 200A.
The display unit 43 is constituted by, for example, a liquid crystal display, and displays various image information, such as the standard image 1A supplied from the stereo camera 300 and the pseudo image 4A (FIG. 2) generated by the pseudo image generation apparatus 200A, as well as various information about the apparatus and a control GUI (Graphical User Interface).
The ROM (Read Only Memory) 44 is a read-only memory and stores programs and the like for operating the CPU 11A. A readable and writable nonvolatile memory (for example, a flash memory) may be used instead of the ROM 44.
The RAM (Random Access Memory) 45 is a readable and writable volatile memory, and functions as an image store for the various images acquired by the first acquisition unit 12 and the pseudo images generated by the generation unit 21A, as a work memory that temporarily stores processing information of the CPU 11A, and so on.
The storage device 46 is constituted by, for example, a readable and writable nonvolatile memory such as a flash memory, a hard disk device, or the like, and permanently records various information such as setting information for the pseudo image generation apparatus 200A.
The storage device 46 is also provided with a parameter storage unit 47 and a shape data storage unit 48; the parameter storage unit 47 stores various parameters such as three-dimensionalization parameters 51 (FIG. 2), photographing parameters 54 (FIG. 2), and coordinate system information 55 (FIG. 2), described later.
The shape data storage unit 48 stores, as described later, model group shape data 61 (FIG. 2) representing the overall three-dimensional shape of each of various subjects; the model group shape data 61 is referred to by the third acquisition unit 14 and used in the acquisition of the shape information 62 (FIG. 2) about the subject of interest.
The CPU (Central Processing Unit) 11A is a control processing device that performs overall control of each functional unit of the pseudo image generation apparatus 200A, and executes control and processing according to the programs stored in the ROM 44.
As described later, the CPU 11A also functions as the first acquisition unit 12, the second acquisition unit 13, the third acquisition unit 14, the correspondence acquisition unit 15, and the generation unit 21A.
With these functional units, the CPU 11A generates, from the standard image 1A of the subjects photographed from the first viewpoint, the pseudo image 4A (FIG. 2) of the subjects corresponding to photographing from a virtual viewpoint different from the first viewpoint.
The generation unit 21A is in turn constituted by the functional units of a first specifying unit 22, a second specifying unit 23, a first generation unit 24, a second generation unit 25, and an identification unit 26.
The CPU 11A, the input/output unit 41, the operation unit 42, the display unit 43, the ROM 44, the RAM 45, the storage device 46, and so on are electrically connected to one another via a signal line 49. The CPU 11A can therefore, at predetermined timing, control the stereo camera 300 via the input/output unit 41, acquire image information from the stereo camera 300, display information on the display unit 43, and so on.
In the configuration example shown in FIG. 1, the functional units of the first acquisition unit 12, the second acquisition unit 13, the third acquisition unit 14, the correspondence acquisition unit 15, and the generation unit 21A, as well as the first specifying unit 22, the second specifying unit 23, the first generation unit 24, the second generation unit 25, and the identification unit 26 constituting the generation unit 21A, are realized by the CPU 11A executing predetermined programs; however, each of these functional units may instead be realized by, for example, a dedicated hardware circuit.
As described above, in the pseudo image generation system 100A, the pseudo image generation apparatus 200A acquires the standard image 1A and the reference image 1R photographed by the stereo camera 300, and by processing the standard image 1A and the reference image 1R generates, based on the standard image 1A, a pseudo image corresponding to photographing from a virtual viewpoint different from the first viewpoint from which the standard image 1A was photographed, that is, a pseudo image equivalent to an image of the subjects photographed from a virtual viewpoint different from the first viewpoint.
In a pseudo image, occlusion regions arise in the course of its generation, in which appropriate pixel values corresponding to photographing from the virtual viewpoint have not been set; the pseudo image generation apparatus 200A generates the pseudo images 4A and 4B (FIG. 2) by generating appropriate images for the occlusion regions that arise in the pseudo image generation process.
In the present application, the region on the pseudo image (the occlusion region of the pseudo image) corresponding to a region of the subject that can be photographed from the virtual viewpoint but not from the viewpoint from which the standard image was photographed (the occlusion region of the subject) is referred to as the “occlusion region”.
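For illustration, the occlusion region of a pseudo image can be characterized operationally as the set of pseudo-image pixels that no pixel of the standard image maps to under the correspondence. A minimal sketch (hypothetical names; the correspondence is assumed to yield, for each identified subject, an array of destination coordinates in the pseudo image):

    import numpy as np

    def occlusion_mask(h, w, warped_coords_list):
        # Mark every pseudo-image pixel covered by some shifted source
        # pixel; the complement is the occlusion region, which contains
        # neither the second foreground image nor the second background image.
        covered = np.zeros((h, w), dtype=bool)
        for coords in warped_coords_list:  # (N, 2) integer (row, col) arrays
            r, c = coords[:, 0], coords[:, 1]
            ok = (r >= 0) & (r < h) & (c >= 0) & (c < w)
            covered[r[ok], c[ok]] = True
        return ~covered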
<◎ About the operation of the pseudo image generation apparatus 200A:>
FIG. 2 is a block diagram showing an example of the main functional configuration of the pseudo image generation apparatus 200A according to the embodiment.
FIG. 19 is a diagram showing an example of the operation flow of the pseudo image generation apparatus 200A according to the embodiment.
The operation of each functional unit of the pseudo image generation apparatus 200A shown in FIG. 2 is described in detail below, with reference to the operation flow of FIG. 19 as appropriate.
First, the operator adjusts the position and orientation of the stereo camera 300 so that the subject of interest, for which a pseudo image corresponding to photographing from a virtual viewpoint is to be created, can be photographed by both the base camera 31 and the reference camera 32 of the stereo camera 300. The position of the base camera 31 of the stereo camera 300 in this state is the first viewpoint; more specifically, for example, the position of the principal point of the photographing optical system of the base camera 31 is the first viewpoint.
When the installation of the stereo camera 300 is completed and the operator clicks a button of the control GUI displayed on the display unit 43 with the mouse of the operation unit 42 to instruct the start of the pseudo image generation operation, a control signal corresponding to the button operation is supplied to the CPU 11A.
When this control signal is supplied, the CPU 11A supplies the stereo camera 300 with a control signal that causes it to perform a photographing operation.
The stereo camera 300 supplied with this control signal performs a photographing operation using the base camera 31 and the reference camera 32, generates the standard image 1A and the reference image 1R of each subject within the photographing field of view, and supplies them to the pseudo image generation apparatus 200A.
○ Operation of the first acquisition unit 12:
Next, the first acquisition unit 12 acquires, via the input/output unit 41, the standard image 1A and the reference image 1R in which each subject is photographed from the first viewpoint (step S10 in FIG. 19).
FIG. 3 is a diagram showing an example of the standard image 1A. As shown in FIG. 3, a first foreground image 1a, the image of a person facing the front, is photographed in the standard image 1A. The background portion of the first foreground image 1a is the first background image 2a, in which the wall behind the person is photographed.
As shown in FIG. 2, the acquired standard image 1A is supplied to the second acquisition unit 13, the correspondence acquisition unit 15, the first generation unit 24, and the identification unit 26. The acquired reference image 1R is supplied to the second acquisition unit 13.
Note that the first acquisition unit 12 may instead acquire, via the input/output unit 41, a standard image 1A and a reference image 1R that were photographed in advance and stored on a recording medium.
○ Operation of the second acquisition unit 13:
When the base image 1A and the reference image 1R are supplied to the second acquisition unit 13, the second acquisition unit 13 acquires the three-dimensionalization parameters 51, such as the baseline length and focal length information, from the parameter storage unit 47.
Having acquired the three-dimensionalization parameters 51, the second acquisition unit 13 performs a matching process between the base image 1A and the reference image 1R to obtain, for each pixel of the base image 1A, its disparity with respect to the reference image 1R.
Next, the second acquisition unit 13 converts the disparity of each pixel of the base image 1A by the principle of triangulation using the three-dimensionalization parameters 51, thereby generating the distance information 52, which is a set of three-dimensional coordinate values of the points on each subject corresponding to the pixels of the base image 1A.
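As an illustration only, the following is a minimal Python sketch of this triangulation step, assuming a rectified, parallel stereo pair (so that depth Z = f·B/d); the function and variable names are hypothetical, and the three-dimensionalization parameters 51 are reduced here to a focal length, a baseline, and a principal point.

```python
import numpy as np

def disparity_to_points(disparity, focal_length_px, baseline_m, cx, cy):
    """Convert a disparity map into per-pixel 3-D points (camera coordinates).

    Assumes a rectified, parallel stereo pair: Z = f * B / d.
    Pixels with non-positive disparity are marked invalid (NaN).
    """
    h, w = disparity.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(disparity > 0,
                     focal_length_px * baseline_m / disparity, np.nan)
    x = (us - cx) * z / focal_length_px   # X along image columns
    y = (vs - cy) * z / focal_length_px   # Y along image rows
    return np.dstack([x, y, z])           # (h, w, 3) array of XYZ values
```

The Z channel of the returned array corresponds to the pixel values of a distance image such as the distance image 5A described next.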
As the coordinate system of the distance information 52, for example, a camera coordinate system that depends on the position and posture of the stereo camera 300 is employed. As the camera coordinate system of a stereo camera, for example, an XYZ orthogonal coordinate system whose origin is the principal point of the base camera and whose Z axis runs along the optical axis of the base camera is adopted.
Here, since at least the subject of interest among the subjects is captured by both the base camera 31 and the reference camera 32, the second acquisition unit 13 acquires the distance information 52 based on actual measurement for at least each point on the subject of interest (step S20 in FIG. 19).
FIG. 5 is a diagram illustrating an example of the distance information 52 displayed as the distance image 5A. The distance image 5A shown in FIG. 5 is an image in which the Z-axis coordinate of the distance information 52 corresponding to each pixel of the base image 1A is used as that pixel's value. In the distance image 5A of FIG. 5, the unit of the pixel values is meters.
The dotted line in the distance image 5A is the outline of the first foreground image 1a, shown on the distance image 5A as a visual aid so that the relationship between the pixel values of the distance image 5A and the first foreground image 1a in the base image 1A is easy to grasp.
Here, for the images of the background subjects captured in the background portion of the image of the subject of interest, the distance information 52 may not be obtainable for part or all of those images because of, for example, the limited measurement range of a distance measuring device such as a stereo camera and the reflectance of the background subjects.
Further, for example, when the base camera 31 and the reference camera 32 have the same shooting angle of view, the image in the end region of the base image 1A is not captured in the reference image 1R because of the parallax between the two cameras, so no distance information 52 is generated for that end region.
However, even in this case, a pseudo image that rarely gives the observer a sense of incongruity can be generated, as described later, so the usefulness of the present invention is not impaired.
Note that it is also possible to ensure that the entire base image 1A is contained in some region of the reference image 1R, for example by employing a reference camera 32 whose shooting angle of view is larger than that of the base camera 31.
As shown in FIG. 2, the distance information 52 acquired by the second acquisition unit 13 is supplied to the third acquisition unit 14, the correspondence acquisition unit 15, the second specifying unit 23, and the identification unit 26.
Note that when the base image consists of, for example, images of a person at short range, a partition at middle range, and a building at long range, an occlusion region arises not only between the person and the partition but also between the partition and the building. Even in this case, by applying the technique of the present invention with the partition as the subject of interest, the range of the occlusion region between the partition and the building behind it can be specified and its image can be generated.
○ Operation of the third acquisition unit 14:
When the third acquisition unit 14 receives the distance information 52 from the second acquisition unit 13, it identifies, from the model group shape data 61 stored in advance in the shape data storage unit 48 and expressing the full-perimeter three-dimensional shapes of various subjects, the shape data closest to the shape information expressed by the distance information 52, and acquires the identified shape data as the shape information 62 expressing the full-perimeter three-dimensional shape of the subject of interest (step S30 in FIG. 19).
Various techniques may be employed to identify, among various shape data, the shape data closest to the distance information 52 of the subject of interest; for example, the technique of Japanese Patent Application Laid-Open No. 2001-143072, which performs the identification by comparing the distance image 5A of the distance information 52 with a distance image of each item of the model group shape data 61, can be adopted.
The closer the model group shape data 61 stored in the shape data storage unit 48 is to the actual full-perimeter shape data of the subject of interest, the better. Even if only standard (average) shape data for the subject of interest, such as standard shape data for a "boy", is stored as the model group shape data 61, however, the range of the occlusion region corresponding to the subject of interest can usually be specified to an extent that does not give the observer a sense of incongruity, so the usefulness of the present invention is not impaired.
In addition, when it is known in advance what the subject is, the full-perimeter shape information 62 of the subject may be set in the pseudo image generation apparatus 200A beforehand, so that the shape information 62 of the subject is acquired without searching the model group shape data 61 for the corresponding shape information 62.
As shown in FIG. 2, the shape information 62 acquired by the third acquisition unit 14 is supplied to the second specifying unit 23.
○ Operation of the identification unit 26:
When the identification unit 26 receives the base image 1A and the distance information 52 from the first acquisition unit 12 and the second acquisition unit 13, respectively, it discriminates between the subject of interest and the background subjects captured in the first background image 2a, the background portion of the first foreground image 1a that is the image of the subject of interest in the base image 1A, and generates the identification information 53 as the result of this discrimination (step S40 in FIG. 19).
Techniques for discriminating between the subject of interest and the background subjects include techniques based on image information and techniques based on distance information.
When discriminating based on distance information, for example, a portion of the base image 1A where the difference in the distance information corresponding to neighboring pixels exceeds a predetermined range may be taken as the boundary between the subject of interest and a background subject; alternatively, a portion whose unevenness exceeds a predetermined reference may be identified as the subject of interest and a portion whose unevenness is at or below the predetermined reference value as a background subject.
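A minimal sketch of such a depth-based discrimination, assuming the distance information is available as a depth map with NaN at unmeasured pixels; the 0.5 m jump threshold is illustrative, not a value from the embodiment.

```python
import numpy as np

def segment_by_depth(depth_m, depth_jump_m=0.5):
    """Label foreground (subject of interest) vs. background from a depth map.

    A simple illustration of the depth-difference criterion: the nearest
    measured depth is treated as belonging to the subject of interest, and
    pixels farther than that depth plus `depth_jump_m` as background.
    """
    valid = ~np.isnan(depth_m)
    nearest = np.nanmin(depth_m)
    foreground = valid & (depth_m <= nearest + depth_jump_m)
    background = valid & ~foreground
    return foreground, background  # boolean masks
```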
If discrimination is based on distance information, the subject and the background subjects can be discriminated appropriately even when the boundary between the subject image and the background image is unclear, for example because the subject and the background subjects are similar in pattern and color, and they could not be discriminated from image information alone.
Note that even if the subject of interest and the background subjects are discriminated only by image processing based on image information, in many cases the partial image of interest and the background partial image can be discriminated accurately, so the usefulness of the present invention is not impaired.
As shown in FIG. 2, the identification information 53 generated by the identification unit 26 is supplied to the first generation unit 24 and the second generation unit 25.
Note that, as shown in FIG. 2, the pseudo image 2A generated by the first generation unit 24, described later, is also supplied to the identification unit 26.
Here, as described later, the first generation unit 24 has an operation mode in which it uses the identification information 53 to extract, from the base image 1A, only the first foreground image 1a corresponding to the subject of interest, and generates from the first foreground image 1a the second foreground image 3a (FIG. 4), a pseudo image of the subject of interest corresponding to shooting from the virtual viewpoint.
By operating in this mode, the first generation unit 24 can generate the pseudo image 2A at a lower processing cost than when a pseudo image 2A is computed for the entire base image 1A.
When the first generation unit 24 does not execute this operation mode, only the second generation unit 25, of the first generation unit 24 and the second generation unit 25, uses the identification information 53; in this case, the identification unit 26 can also discriminate between the subject of interest and the background subjects based on the pseudo image 2A generated by the first generation unit 24.
○ Operation of the correspondence acquisition unit 15:
As shown in FIG. 2, the correspondence acquisition unit 15, supplied with the base image 1A, the distance information 52, the shooting parameters 54, and the coordinate system information 55, acquires, based on the distance information 52, the correspondence 56 between the base image 1A and the pseudo image 2A of each subject corresponding to shooting from a virtual viewpoint different from the first viewpoint (step S50 in FIG. 19).
More specifically, the correspondence 56 is the set of correspondences between the coordinates on the base image 1A and the coordinates on the pseudo image 2A.
The shooting parameters 54 and the coordinate system information 55 are stored in the parameter storage unit 47. The shooting parameters 54 are parameters such as the focal length, the number of pixels, and the pixel size for each of the base camera 31, a virtual camera placed at the virtual viewpoint distinct from the base camera 31, and the distance measuring device that measures the distance information 52 (the stereo camera 300 in this embodiment).
The coordinate system information 55 is information expressing the mutual positional and postural relationships among the base camera 31, the virtual camera, and the distance measuring device.
If the shooting parameters 54 and the coordinate system information 55 are known, the distance information 52 corresponding to each pixel of the base image 1A can be obtained even when the base camera 31 and the distance measuring device differ in position and posture, and the correspondence obtained when the three-dimensional shape expressed by the distance information 52 is perspective-projected onto the pseudo image 2A can also be obtained.
Accordingly, the correspondence 56 between each pixel of the base image 1A, that is, each coordinate on the base image 1A, and each pixel of the pseudo image 2A, that is, each coordinate on the pseudo image 2A, can be obtained via the distance information 52.
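A minimal sketch of obtaining such a correspondence via the distance information, assuming pinhole intrinsics for the virtual camera and a known rigid transform between the two viewpoints (the role played by the shooting parameters 54 and the coordinate system information 55); all names are hypothetical.

```python
import numpy as np

def correspondence_via_depth(points_xyz, K_virtual, R, t):
    """Map base-image pixels to virtual-view pixels through 3-D points.

    points_xyz : (h, w, 3) per-pixel 3-D points in base-camera coordinates
                 (e.g. from the hypothetical disparity_to_points above);
                 NaN where unknown.
    K_virtual  : 3x3 intrinsic matrix of the virtual camera.
    R, t       : rotation (3x3) and translation (3,) from base-camera
                 coordinates to virtual-camera coordinates.
    Returns an (h, w, 2) array of virtual-image coordinates (u, v),
    NaN where the distance information was unavailable.
    """
    h, w, _ = points_xyz.shape
    pts = points_xyz.reshape(-1, 3) @ R.T + t   # into virtual camera frame
    proj = pts @ K_virtual.T                    # perspective projection
    uv = proj[:, :2] / proj[:, 2:3]             # divide by depth
    return uv.reshape(h, w, 2)
```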
Note that when, as in a stereo camera, a single camera both captures the base image 1A and captures an image used for distance measurement, each pixel of the base image 1A can be associated with the distance information 52 even if, among the shooting parameters 54 and the coordinate system information 55, the positional and postural relationship between the base camera 31 and the distance measuring device is unknown, so the correspondence 56 can still be obtained.
As shown in FIG. 2, the correspondence 56 acquired by the correspondence acquisition unit 15 is supplied to the first generation unit 24 and used to generate the pseudo image 2A.
Here, when the subject of interest is not planar, the distance information 52 may not be measurable for some regions, such as the peripheral portion of the first foreground image 1a corresponding to the subject of interest, because of occlusion of the subject caused by the parallax of the distance measuring device that measures the distance information 52 by the principle of triangulation, and because of the decrease in the amount of light entering the measurement optical system from the subject caused by the tilt of the subject surface with respect to the optical axis of the measurement optical system.
Even in this case, however, the ratio of the number of pixels for which the distance information 52 is not obtained to the total number of pixels of the first foreground image 1a is usually quite low.
Accordingly, based on the distance information of the pixels for which accurate distance information 52 was obtained by actual measurement, the distance information of the pixels for which it was not obtained can be estimated, and even for those pixels the correspondence 56 can be acquired with higher accuracy than when no measured distance information 52 is used at all. Thus, even when the distance information 52 in the distance image 5A is not obtained for every pixel of the first foreground image 1a in the base image 1A, the usefulness of the present invention is not impaired.
Alternatively, if the correspondence 56 is simply not computed for pixels of the base image 1A for which the distance information 52 was not obtained, the second foreground image 3a cannot be formed from those pixels, but they can easily be distinguished from the pixels of the first foreground image 1a for which the distance information 52 was acquired.
Accordingly, the situation in which pixels of the first foreground image 1a for which the correspondence 56 was not acquired are erroneously treated as part of the second background image 4a, so that pixel values corresponding to the subject of interest are set in the second region 7a corresponding to the background subjects, can easily be avoided.
Thus, even if the correspondence 56 is not acquired for the pixels of the first foreground image 1a for which the distance information 52 was not acquired, the usefulness of the present invention is not impaired.
Needless to say, the correspondence 56 is appropriately acquired even when the base image 1A and the pseudo image 2A differ in the number of pixels.
○ Operation of the first generation unit 24:
FIG. 4 is a diagram illustrating an example of the pseudo image 2A. When the first generation unit 24 is supplied with the base image 1A, the correspondence 56, and the identification information 53 from the first acquisition unit 12, the correspondence acquisition unit 15, and the identification unit 26, respectively, it generates, as shown in FIG. 4, based on the base image 1A and on the correspondence 56 of at least the first foreground image 1a corresponding to the subject of interest, the pseudo image 2A containing the second foreground image 3a, the image of the subject of interest corresponding to shooting from the virtual viewpoint, and the second background image 4a, the image of the background subjects corresponding to shooting from the virtual viewpoint (step S60 in FIG. 19).
More specifically, even when the correspondence 56 for the first background image 2a has been acquired, the first generation unit 24 can, according to the operation mode set from the operation unit 42 or the like, refer to the identification information 53, generate only the second foreground image 3a (FIG. 4) of the pseudo image 2A from the first foreground image 1a (FIG. 3) of the base image 1A according to the correspondence 56, and adopt the first background image 2a as-is as the second background image 4a of the pseudo image 2A without using its correspondence 56.
Even when the correspondence 56 for the background is not obtained, the resulting pseudo image 2A is, in effect, the same as when this operation mode is applied.
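A minimal sketch of this operation mode, building on the hypothetical helpers above: the foreground is forward-warped into the virtual view according to the correspondence, the background is copied as-is, and pixels assigned to neither become the occlusion region. This is an illustration under those assumptions, not the embodiment verbatim.

```python
import numpy as np

def render_pseudo_image(base_img, fg_mask, uv_virtual):
    """Warp only the foreground into the virtual view; keep the background.

    base_img   : (h, w, 3) base image.
    fg_mask    : (h, w) boolean mask of the first foreground image.
    uv_virtual : (h, w, 2) per-pixel virtual-view coordinates
                 (e.g. from the hypothetical correspondence_via_depth).
    Returns the pseudo image and the occlusion mask (pixels that end up
    with neither warped foreground nor copied background).
    """
    h, w, _ = base_img.shape
    pseudo = base_img.copy()              # background copied unchanged
    pseudo[fg_mask] = 0                   # clear the foreground's old position
    filled = ~fg_mask                     # background pixels count as filled
    ys, xs = np.nonzero(fg_mask)
    for y, x in zip(ys, xs):              # forward-warp each foreground pixel
        u, v = uv_virtual[y, x]
        if np.isfinite(u) and np.isfinite(v):
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < h and 0 <= ui < w:
                pseudo[vi, ui] = base_img[y, x]
                filled[vi, ui] = True
    return pseudo, ~filled                # ~filled: the unassigned (occlusion) pixels
```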
Note that the pseudo image 2A shown in FIG. 4 is the pseudo image obtained when the operation mode that uses the first background image 2a as-is as the second background image 4a is selected. The occlusion region 5a in FIG. 4 is the region of the pseudo image 2A in which neither the second foreground image 3a nor the second background image 4a exists.
When the first background image 2a of the background subjects is used unchanged as the second background image 4a of the pseudo image 2A, the image of the background subjects does not, strictly speaking, match the positional relationship corresponding to the virtual viewpoint.
However, since the parallax between the base image 1A and the pseudo image 2A for distant background subjects is small compared with that for the subject of interest, as long as the parallax of the subject of interest on which the observer focuses takes the value corresponding to the virtual viewpoint, the observer's sense of incongruity toward the pseudo image 2A is small.
Thus, in this operation mode a pseudo image with little sense of incongruity can be generated at a lower processing cost than when the correspondences of both the subject-of-interest portion and the background-subject portion are used, so even if the second foreground image 3a of the pseudo image 2A is computed only for the subject of interest, for example by using the identification information 53, the usefulness of the present invention is not impaired.
The generated pseudo image 2A is supplied to the first specifying unit 22 and the second generation unit 25 and, as already noted in the description of the identification unit 26, is also supplied to the identification unit 26.
Note that the first generation unit 24 may, for example, adopt the shape information 62 in place of the "depth estimation model" and acquire the pseudo image 2A by applying the technique of Patent Document 1.
○ Operation of the first specifying unit 22:
When the first specifying unit 22 is supplied with the pseudo image 2A from the first generation unit 24, it specifies, as the occlusion region 5a, the region of the pseudo image 2A that contains neither the second foreground image 3a corresponding to the subject of interest nor the second background image 4a corresponding to the background subjects (step S70 in FIG. 19).
As shown in FIG. 2, the information on the specified occlusion region 5a is supplied to the second specifying unit 23 and the second generation unit 25.
Note that the information on the specified occlusion region 5a may be generated, for example, as the coordinate information of each pixel contained in the occlusion region 5a or of each pixel on its boundary, or as an image such as the pseudo image 2A shown in FIG. 4.
○ Operation of the second specifying unit 23:
As shown in FIG. 2, the second specifying unit 23 is supplied with the pseudo image 2A, the occlusion region 5a, the shooting parameters 54 and the coordinate system information 55, the distance information 52, and the shape information 62 from the first generation unit 24, the first specifying unit 22, the parameter storage unit 47, the second acquisition unit 13, and the third acquisition unit 14, respectively.
Based on these pieces of information, the second specifying unit 23 specifies, within the occlusion region 5a, the first region 6a related to the subject of interest and the second region 7a related to the background subjects (step S80 in FIG. 19).
FIG. 6 is a diagram showing an example of the pseudo image 6A related to the full-perimeter shape information 62 of the subject.
The shape region 6aA shown in FIG. 6 is the region occupied, on the pseudo image related to the virtual viewpoint, by the three-dimensional shape expressed by the shape information 62 when that shape is placed at the same position and posture as the subject of interest.
When the third acquisition unit 14 acquires the shape information 62, the posture information of the three-dimensional shape expressed by the distance information 52 can also be acquired.
Here, since the three-dimensional shape expressed by the distance information 52 and the three-dimensional shape expressed by the shape information 62 are shapes of the same subject of interest, the shape expressed by the shape information 62 can be given the same position and posture in three-dimensional space as the shape expressed by the distance information 52.
Therefore, the correspondence between the three-dimensional shape expressed by the shape information 62 and the image of that shape formed on the pseudo image related to the virtual viewpoint by perspective projection can be obtained using the shooting parameters 54 and the coordinate system information 55.
Based on this correspondence, the second specifying unit 23 specifies the shape region 6aA on the pseudo image related to the virtual viewpoint and generates the pseudo image 6A expressing the shape region 6aA, for example by assigning a predetermined pixel value only to the shape region 6aA.
As the technique by which the second specifying unit 23 generates the pseudo image 6A from the shape information 62, the technique disclosed in Japanese Patent Laid-Open No. 10-293862 may be employed.
FIG. 7 is a diagram showing an example of the pseudo image 3A in which the first region 6a and the second region 7a are set in the occlusion region 5a.
Based on the generated shape region 6aA and the information on the occlusion region 5a supplied from the first specifying unit 22, the second specifying unit 23 specifies, within the occlusion region 5a, the first region 6a, the occlusion region corresponding to the subject of interest, and specifies, for example, the portion of the occlusion region 5a that does not contain the first region 6a as the second region 7a, the occlusion region related to the background subjects.
That is, the second specifying unit 23 specifies, based on the shape information 62, the first region 6a corresponding to the subject of interest within the occlusion region 5a, and further specifies the second region 7a corresponding to the background subjects.
The information on the specified first region 6a and second region 7a is supplied to the second generation unit 25.
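A minimal sketch of this split, assuming boolean masks for the occlusion region 5a and for the projected shape region 6aA are available as NumPy arrays.

```python
def split_occlusion(occlusion, shape_region):
    """Split the occlusion mask into subject-related and background-related parts.

    occlusion    : (h, w) boolean mask of the occlusion region 5a.
    shape_region : (h, w) boolean mask of the projected full-perimeter
                   shape region 6aA.
    The first region is the part of the occlusion covered by the projected
    shape; the remainder of the occlusion is the second region.
    """
    region1 = occlusion & shape_region    # occlusion belonging to the subject
    region2 = occlusion & ~shape_region   # occlusion belonging to the background
    return region1, region2
```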
Note that even if the first region 6a and the second region 7a are instead set so that, for example, the ratio of their areas, or of their pixel counts in the horizontal or vertical direction, takes a predetermined value such as 1:3 based on statistical data, the first region 6a, the range of the occlusion region corresponding to the subject of interest, and the second region 7a, the range of the occlusion region corresponding to the background subjects, can usually be specified to an extent that does not give the observer a sense of incongruity, so the usefulness of the present invention is not impaired.
Further, even if the first region 6a and the second region 7a within the occlusion region 5a are set based on a predetermined ratio not derived from statistical data, they can be specified so that the observer feels less incongruity than when no first region 6a corresponding to the subject of interest is set in the occlusion region 5a at all, so the usefulness of the present invention is not impaired.
Note that the information on the specified first region 6a and second region 7a may be generated, for example, as the coordinate information of each pixel contained in these regions or of each pixel on their boundaries, or as an image such as the pseudo image 3A shown in FIG. 7.
Although the technique described above for specifying the first region 6a and the second region 7a uses the information on the occlusion region 5a, the second specifying unit 23 can also specify the first region 6a and the second region 7a without using the information on the occlusion region 5a, according to the operation mode set from the operation unit 42.
Specifically, the second specifying unit 23 first specifies the shape region 6aA, for example by the technique described above, and specifies, as the first region 6a, the portion of the shape region 6aA that does not contain the second foreground image 3a of the pseudo image 2A supplied from the first generation unit 24.
Next, the second specifying unit 23 specifies, as the second region 7a, the region of the pseudo image 2A that contains none of the second foreground image 3a, the first region 6a, and the second background image 4a; in this way, the first region 6a and the second region 7a are specified without using the information on the occlusion region 5a.
That is, the second specifying unit 23 also functions as specifying means that specifies the first region 6a based on the distance information 52 and the shape information 62, and specifies the second region 7a as the region of the pseudo image 2A that contains none of the second foreground image 3a, the first region 6a, and the second background image 4a.
○ Operation of the second generation unit 25:
As shown in FIG. 2, the second generation unit 25 is supplied with the pseudo image 2A, the occlusion region 5a, the first region 6a and the second region 7a, and the identification information 53 from the first generation unit 24, the first specifying unit 22, the second specifying unit 23, and the identification unit 26, respectively.
According to the operation mode input from the operation unit 42, the second generation unit 25 generates the pseudo image 4A (FIG. 2) from these pieces of information by generating the images of the first region 6a and the second region 7a (step S100 in FIG. 19).
Also according to the operation mode input from the operation unit 42, the second generation unit 25 can instead generate the pseudo image 4B (FIG. 2) by generating the image of the occlusion region 5a without using the information specifying the first region 6a and the second region 7a.
In the occlusion region 5a of the pseudo image 2A supplied to the second generation unit 25, appropriate pixel values have not yet been set. To make it less likely that the observer feels a sense of incongruity toward the pseudo image, an appropriate image must be generated for the occlusion region 5a.
Here, the occlusion region usually contains information on both the subject of interest and the background subjects.
The second generation unit 25 therefore generates the image of the occlusion region 5a, or the images of the first region 6a corresponding to the subject of interest and the second region 7a corresponding to the background subjects within it, based on the respective information of the subject of interest and of each background subject.
As a technique for generating the occlusion region 5a, or the first region 6a and the second region 7a, based on the respective information of the subject of interest and the background subjects, a technique that generates them from the image information of the subject of interest and the background subjects in an image such as the base image 1A or the pseudo image 2A may be adopted, for example.
In a situation where it is known in advance what the subject of interest and the background subjects are, the colors, patterns, and so on of the subject of interest and of the background subjects may instead be set from the operation unit 42 without relying on image information; when an operation signal instructing image generation processing for each occlusion region using the set colors, patterns, and so on is input from the operation unit 42, the second generation unit 25 generates the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the set colors, patterns, and so on.
As described above, the second generation unit 25 generates the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the image information and characteristics of the subject of interest and the background subjects.
That is, the second generation unit 25 generates the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the respective information of the subject of interest and the background subjects.
Note that when the images of the occlusion region 5a, the first region 6a, and the second region 7a are generated based on the image information of the subject of interest and the background subjects, the image of each occlusion region may be generated based on either the base image 1A or the pseudo image 2A.
Specifically, for example, a technique may be adopted in which a region to be used for generating the image of each occlusion region is determined, and the image of each occlusion region is generated based on the image of that region.
FIG. 14 is a diagram illustrating an example of a technique for generating the image of the occlusion region 5b based on the partial region 8g provided inside the second foreground image 3b.
FIG. 15 is a diagram illustrating an example of a technique for generating the image of the occlusion region 5b based on the partial region 8h provided in the second background image 4b.
In FIG. 14, the image of the occlusion region 5b is generated by copying the texture of the partial region 9a, of, for example, 3×3 pixels, provided within the partial region 8g, into the partial region 9b.
Note that the image of the partial region 9a may be copied not only into the partial region 9b but also into other partial regions within the occlusion region 5b.
In FIG. 15, as in FIG. 14, the image of the occlusion region 5b is generated by copying the texture of the partial region 9c into the partial region 9b.
Besides the examples shown in FIGS. 14 and 15, a technique may be adopted that generates the image of an occlusion region using, for example, the pixel value with the highest frequency (the mode) in a histogram of the pixel values of a predetermined region within the non-occlusion region, or the average of the pixel values of that predetermined region.
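Minimal sketches of the two fill strategies just described (tiling a small texture patch, and filling with the mode of sampled pixel values), assuming NumPy arrays; the names and patch sizes are illustrative.

```python
import numpy as np

def fill_occlusion_by_tiling(image, occlusion, patch):
    """Fill an occlusion mask by tiling a small texture patch into it.

    image     : (h, w, 3) pseudo image with unfilled occlusion pixels.
    occlusion : (h, w) boolean mask of pixels to fill.
    patch     : (ph, pw, 3) texture taken from a non-occluded partial region
                (e.g. a 3x3 block inside the foreground or background).
    """
    ph, pw, _ = patch.shape
    ys, xs = np.nonzero(occlusion)
    for y, x in zip(ys, xs):
        image[y, x] = patch[y % ph, x % pw]   # repeat the patch texture
    return image

def fill_occlusion_by_mode(image, occlusion, sample_region):
    """Fill an occlusion mask with the most frequent color of a sample region."""
    samples = image[sample_region].reshape(-1, 3)
    colors, counts = np.unique(samples, axis=0, return_counts=True)
    image[occlusion] = colors[np.argmax(counts)]  # mode of the sampled colors
    return image
```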
Further, by switching the operation mode using the operation unit 42, the second generation unit 25 can generate the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the boundary regions with the occlusion region 5a in the images related to the subject of interest and the background subjects.
・ About boundary regions:
The term "boundary region" as used in the present invention is explained below.
The occlusion regions according to this embodiment, such as the occlusion region 5a, the first region 6a, and the second region 7a, are regions that arise when the boundary portion between the image of the subject of interest and the image of the background subjects separates on the image as the pseudo image is generated.
Therefore, in order to generate a pseudo image that rarely gives the observer a sense of incongruity, it is desirable to generate the image of each occlusion region based on the portion of the adjoining non-occlusion region that lies near its boundary with that occlusion region.
For example, for a three-dimensional subject such as a person, the boundary region may be determined as follows: a normal, defined based on the distance information 52 or the like, is obtained for each pixel in the region of interest, and the boundary region is taken as a partial region of the region formed by the pixels whose normals differ in angle from the normals of the pixels on the boundary of the region of interest by no more than a predetermined angle range.
As the predetermined angle range, for example, 45 degrees is adopted. The smaller the predetermined angle range, the closer the maximum extent of the resulting boundary region is to the boundary of the region of interest, which is desirable.
Since the subject of interest is in many cases three-dimensional and often close to the camera, it is desirable to set the extent of its boundary region based on an angle range of normals as described above.
Note that the normal for a pixel of interest is obtained, for example, by acquiring, based on the distance information 52, the three-dimensional coordinate values of the pixel of interest and of the two pixels adjacent to it in the horizontal and vertical directions, defining a plane from those three-dimensional coordinate values, and adopting the normal of that plane as the normal for the pixel of interest.
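A minimal sketch of this per-pixel normal computation, assuming per-pixel 3-D coordinates such as those produced by the hypothetical disparity_to_points above, and that the neighboring pixels are within the image bounds.

```python
import numpy as np

def pixel_normal(points_xyz, y, x):
    """Normal at pixel (y, x) from three neighboring 3-D points.

    points_xyz : (h, w, 3) per-pixel 3-D coordinates (camera frame).
    The plane is defined by the pixel of interest and its horizontal and
    vertical neighbors; the unit normal of that plane is returned.
    """
    p = points_xyz[y, x]
    ph = points_xyz[y, x + 1]        # horizontal neighbor
    pv = points_xyz[y + 1, x]        # vertical neighbor
    n = np.cross(ph - p, pv - p)     # normal of the plane through the 3 points
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n
```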
The setting of a boundary region based on the normal angle described above may also be adopted for the boundary region of the background portion.
However, since the background portion is usually far from the subject of interest, the distance information 52 for the background portion often cannot be obtained, and the background portion is often planar. The boundary region of the background portion may therefore be determined, for example, as a partial region of the region extending from the boundary of the second background image 4a with the occlusion region 5a toward the interior of the second background image 4a, over a number of pixels equal to a predetermined proportion of the number of horizontal or vertical pixels of the second background image 4a. As the predetermined proportion, for example, 1/5 is adopted.
For the subject of interest as well, the maximum extent of the boundary region may be determined by a pixel count as described above.
That is, the "boundary region" in the present application is a partial region of a region whose maximum extent is set from the boundary between two regions on the image toward the interior of one of the two regions, based on a predetermined condition defining the range of a predetermined geometric characteristic of the subject, such as the predetermined normal angle range described above, or on a predetermined mathematical condition defining the extent of a region, such as its pixel count or size.
Accordingly, the "boundary region" is not limited to a partial region in contact with the boundary.
The second generation unit 25 is configured to be able to carry out several kinds of occlusion-region image generation methods using boundary regions, and these functions are switched by input from the operation unit 42.
FIGS. 8 to 13, 16, and 17 are diagrams each illustrating an example of a technique for generating the image of an occlusion region using a boundary region.
In FIG. 8, the image of the first region 6a is generated based on the boundary region set at the boundary 8a between the second foreground image 3a and the first region 6a; in FIG. 9, the image of the second region 7a is generated based on the boundary region set at the boundary 8b of the second background image 4a with the second region 7a.
FIG. 10 shows an example in which the image of the occlusion region 5b is generated based on the partial region 8c, a boundary region within the second foreground image 3b.
FIG. 11 shows an example in which the image of the occlusion region 5b is generated based on the partial region 8d, a boundary region within the second foreground image 3b that is in contact with the boundary between the second foreground image 3b and the occlusion region 5b.
FIGS. 12 and 13 show examples in which the image of the occlusion region 5b is generated based on the partial regions 8e and 8f, boundary regions provided around the occlusion region 5b, which is in contact with the second foreground image 3b.
In these examples, the partial region 8e is a boundary region not in contact with the boundary between the occlusion region 5b and the second background image 4b, while the partial region 8f is a boundary region in contact with that boundary.
In the examples shown in FIGS. 8 to 13, when the image of the occlusion region is generated based on each boundary region, the image of the occlusion region 5b is generated by obtaining the mode or average of the pixel values of the boundary region and setting it in the occlusion region 5b, as described above, or by copying the image of a partial region within the boundary region into the occlusion region 5b, as described with reference to FIGS. 14 and 15.
In the example shown in FIG. 16, the second generation unit 25 first generates the image of a first boundary region (not shown), set near the boundary of the occlusion region 5b with the second foreground image 3b, based on the partial region group 9d, the boundary region of the second foreground image 3b with the occlusion region 5b, and generates the image of a second boundary region (not shown), set near the boundary of the occlusion region 5b with the second background image 4b, based on the partial region group 9e, the boundary region of the second background image 4b with the occlusion region 5b.
Next, the second generation unit 25 generates the image of the occlusion region 5b so that its pixel values change gradually from the first boundary region to the second boundary region.
Here, the arrow 12a indicates the shift direction along which the second foreground image 3b was generated based on the correspondence 56, and the image of the occlusion region 5b is generated along this shift direction.
According to the technique of FIG. 16, the image of the occlusion region 5b is generated so that its pixel values change gradually from the first region 6b across to the boundary region of the second region 7b, so a pseudo image with even less sense of incongruity can be generated.
In the example shown in FIG. 17, the second generation unit 25 generates the image of the occlusion region 5a so that the pixel values of the partial region 10a change gradually from the partial region group 9f, the boundary region of the first region 6b on the second region 7b side, across to the partial region group 9g, the boundary region of the second region 7b on the first region 6b side.
Here, the arrow 12b indicates the shift direction along which the second foreground image 3b was generated based on the correspondence 56, and the images of the first region 6b and the second region 7b are generated so that the pixel values change gradually along this shift direction. The shift direction can also be set by the operator from the operation unit 42.
According to the technique of FIG. 17, the change in pixel values at the boundary portion between the first region 6b and the second region 7b becomes smooth, so a pseudo image with even less sense of incongruity can be generated.
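A minimal sketch of such a gradual fill, assuming for simplicity that the shift direction is horizontal, so that each run of occlusion pixels in a row can be interpolated between its left and right non-occluded neighbors; this is an illustration under that assumption, not the embodiment verbatim.

```python
import numpy as np

def blend_occlusion_rows(image, occlusion):
    """Fill occlusion pixels row by row with a linear color gradient.

    Each horizontal run of occlusion pixels is filled by interpolating
    between the non-occluded colors immediately to its left and right,
    so pixel values change gradually across the run.
    """
    h, w, _ = image.shape
    out = image.astype(np.float64)          # working copy in float
    for y in range(h):
        x = 0
        while x < w:
            if not occlusion[y, x]:
                x += 1
                continue
            start = x
            while x < w and occlusion[y, x]:
                x += 1
            left = out[y, start - 1] if start > 0 else None
            right = out[y, x] if x < w else None
            if left is None and right is None:
                continue                    # whole row occluded; nothing to sample
            if left is None:
                left = right
            if right is None:
                right = left
            n = x - start + 1
            for i in range(start, x):       # linear interpolation across the run
                t = (i - start + 1) / n
                out[y, i] = (1 - t) * left + t * right
    return out.astype(image.dtype)
```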
Note that, to make the correspondence with the shift direction easy to follow, each of the partial region groups 9d to 9g in FIGS. 16 and 17 is shown as an example of partial regions set discretely within the boundary region that contains that group.
By the techniques described above, the second generation unit 25 generates the image of each occlusion region, such as the occlusion region 5a, or the first region 6a and the second region 7a.
Even if the image at this stage were adopted as the final image of each occlusion region, the usefulness of the present invention would not be impaired; however, depending on the set operation mode, the second generation unit 25 can apply smoothing processing to the image of each occlusion region, for example by using a smoothing filter such as a 3×3-pixel Gaussian filter.
Smoothing the image of each occlusion region smooths the changes in pixel values, so the observer's sense of incongruity toward the pseudo image can be reduced further.
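A minimal sketch of such smoothing restricted to the occlusion region, using an explicit normalized 3×3 Gaussian kernel; the names are illustrative.

```python
import numpy as np

# Normalized 3x3 Gaussian kernel, a common discrete approximation.
GAUSS_3X3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=np.float64) / 16.0

def smooth_occlusion(image, occlusion):
    """Apply a 3x3 Gaussian filter, replacing only occlusion-region pixels.

    image     : (h, w, 3) pseudo image after occlusion filling.
    occlusion : (h, w) boolean mask of the filled occlusion region.
    """
    h, w, _ = image.shape
    padded = np.pad(image.astype(np.float64),
                    ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(image, dtype=np.float64)
    for dy in range(3):                 # accumulate the weighted neighborhood
        for dx in range(3):
            blurred += GAUSS_3X3[dy, dx] * padded[dy:dy + h, dx:dx + w]
    out = image.astype(np.float64)
    out[occlusion] = blurred[occlusion]  # keep non-occlusion pixels untouched
    return out.astype(image.dtype)
```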
The second generation unit 25 displays the pseudo image in which the image of each occlusion region has been generated on the display unit 43 (step S100 in FIG. 19), and the pseudo image generation processing ends.
As described above, according to the pseudo image generation apparatus 200A, the occlusion region 5a on the pseudo image can be specified more accurately based on the distance information 52 of each subject obtained by actual measurement, and the image of the specified occlusion region 5a is generated based on the subject of interest and the background subjects, so a pseudo image with little sense of incongruity can be generated.
Further, according to the pseudo image generation apparatus 200A, the first region 6a corresponding to the subject of interest and the second region 7a corresponding to the background subjects are specified within the occlusion region 5a specified on the pseudo image, the image of the first region 6a is generated based on the information of the subject of interest, and the image of the second region 7a is generated based on the information of the background subjects; the images of the first region 6a and the second region 7a can therefore be made to resemble more closely the actual image corresponding to the pseudo image, so a pseudo image with even less sense of incongruity can be generated.
Further, according to the pseudo image generation apparatus 200A, the shape information 62 expressing the full-perimeter three-dimensional shape of the subject of interest is acquired and the first region 6a is specified based on the shape information 62, so the first region 6a, the occlusion region corresponding to the subject of interest, can be specified more accurately, and a pseudo image with even less sense of incongruity can be generated.
Further, according to the pseudo image generation apparatus 200A, the image of the first region 6a is generated based on the boundary region of the second foreground image 3a with the first region 6a, and the image of the second region 7a is generated based on the boundary region of the second background image 4a with the second region 7a; the images of the first region 6a and the second region 7a can thereby be made to resemble more closely the actual image corresponding to the pseudo image, so a pseudo image with even less sense of incongruity can be generated.
<Modifications:>
While an embodiment of the present invention has been described above, the present invention is not limited to the above embodiment, and various modifications are possible.
For example, an image of the occlusion region may be generated based on base images captured sequentially in time. A method for generating an image of the occlusion region based on time-series images is described below.
FIG. 18 is a diagram illustrating one example of a method for generating an image of the occlusion region based on time-series images.
The base images 1B to 1E shown in FIG. 18 are a series of time-series images of the subject of interest captured sequentially in time. The base images 1B to 1E are arranged in shooting order along the time axis t1.
The first foreground images 1b to 1e are the images of the subject of interest in the base images 1B to 1E, respectively, and the subject of interest is moving relative to the camera.
The partial regions 11b to 11e are set at the same position and with the same extent in the respective base images 1B to 1E. The positions need not be exactly the same and may be shifted slightly.
In this case, as shown in FIG. 18, the whole of the partial region 11b and almost the whole of the partial region 11c lie in the background portions of the first foreground images 1b and 1c, respectively, while the whole of the partial regions 11d and 11e lies inside the first foreground images 1d and 1e, respectively.
Therefore, by computing the mode or the mean of the pixel values for each group of pixels having the same coordinates in the partial regions 11b to 11e, a new image can be generated in which the pixel values of the subject of interest and of the background subject are blended.
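A minimal sketch of this per-pixel combination is shown below; it assumes the co-located patches have already been cut out and stacked into a single array, and the helper name blend_time_series is hypothetical. The per-pixel mode relies on scipy.stats.mode with the keepdims argument, which requires a reasonably recent SciPy.

```python
import numpy as np
from scipy import stats

def blend_time_series(patches, use_mode=False):
    """Combine co-located partial regions (e.g. 11b-11e) pixel by pixel.

    patches : (T, H, W) array of grayscale patches cut from the base
              images at the same position and with the same extent
    Returns an (H, W) patch in which foreground and background values mix.
    """
    if use_mode:
        # Mode of the T samples at each pixel (sensible for uint8 data).
        return stats.mode(patches, axis=0, keepdims=False).mode
    return patches.mean(axis=0)

# Example with four 8x8 patches:
patches = np.random.randint(0, 256, size=(4, 8, 8), dtype=np.uint8)
mixed = blend_time_series(patches)  # per-pixel mean over the four frames
```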
As already described, the occlusion region in a pseudo image generated from one base image, corresponding to shooting from a viewpoint different from that of the base image, contains information about both the subject and its background, so by generating the image of the occlusion region based on this new image, a pseudo image that causes little discomfort can be generated.
Note that the partial regions 11b to 11e are not limited to parts of the base images 1B to 1E, respectively; for example, each may be the whole of the corresponding base image 1B to 1E.
A pseudo image generation system according to a modification is described below.
The pseudo image generation system according to the modification includes the stereo camera 300 of the pseudo image generation system 100A according to the embodiment, together with a pseudo image generation device having the same configuration as the pseudo image generation device 200A according to the embodiment.
As already described, the stereo camera 300 has a continuous shooting function for photographing a subject continuously and sequentially in time.
The stereo camera 300 according to the modification uses this continuous shooting function to generate a plurality of base images and a plurality of reference images of each subject, and supplies the generated images to the pseudo image generation device according to the modification.
The pseudo image generation device according to the modification has the same functional units as the pseudo image generation device 200A according to the embodiment, except for a first acquisition unit and a second generation unit corresponding, respectively, to the first acquisition unit 12 and the second generation unit 25 of the pseudo image generation device 200A.
Here, of the functional units of the pseudo image generation device according to the modification, only the first acquisition unit and the second generation unit are described; description of the other functional units is omitted.
○ First acquisition unit according to the modification:
The first acquisition unit according to the modification acquires a plurality of base images and a plurality of reference images, which are time-series images of each subject captured sequentially in time by the stereo camera 300.
Next, the first acquisition unit supplies one of the acquired base images to the second acquisition unit 13, the correspondence acquisition unit 15, the first generation unit 24, and the identification unit 26, and supplies all of the acquired base images to the second generation unit 25.
Further, the first acquisition unit supplies to the second acquisition unit 13, from among the acquired reference images, the reference image captured at the same time as that base image.
Note that the reference image supplied to the second acquisition unit 13 need only be one in which each subject is captured in the same state as in that base image, so it is not limited to a reference image captured at the same time as that base image.
That is, the second acquisition unit 13 supplied with the reference image acquires distance information based on actual measurement for at least each point of the subject of interest, among the subjects, in the state in which that base image was acquired.
○ Second generation unit according to the modification:
For each of the base images supplied from the first acquisition unit, the second generation unit according to the modification sets a predetermined region, for example a region corresponding to the occlusion region of the pseudo image supplied from the first generation unit 24, applies to that region, for example, the method described with reference to FIG. 18 to generate an image to be used for generating the image of the occlusion region, and then generates the image of the occlusion region using the generated image.
That is, the second generation unit according to the modification generates the image of the occlusion region in the pseudo image based on a plurality of base images, which are a plurality of time-series images.
Note that when a plurality of time-series images are captured while the angle of the subject with respect to the first viewpoint is changing, and a pseudo image is generated based on one of those images, an image corresponding to the occlusion region of the pseudo image may already have been captured in another of the time-series images.
In that case, if the image corresponding to the occlusion region is found by, for example, image recognition processing over the plurality of time-series images and used to generate the image of the occlusion region, the occlusion region image can be generated from an actually captured image of the subject, so an occlusion region image closer to the real object can be generated.
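One conventional way to carry out such a search is normalized template matching over the remaining frames, using an image patch taken around the occlusion region as the query. The sketch below uses OpenCV's matchTemplate; the function name, threshold value, and choice of query patch are illustrative assumptions, not the embodiment's prescribed procedure.

```python
import cv2

def find_occlusion_patch(frames, template, threshold=0.9):
    """Search other time-series base images for a patch matching 'template'.

    frames   : list of HxW uint8 base images (excluding the one from
               which the pseudo image was generated)
    template : hxw uint8 query patch around the occlusion region
    Returns (frame_index, top_left_xy, score) of the best match, or None.
    """
    best = None
    for i, frame in enumerate(frames):
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(scores)  # location of the best score
        if score >= threshold and (best is None or score > best[2]):
            best = (i, loc, score)
    return best
```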
Also, owing to the influence of noise or the like, one of the time-series images may differ greatly from the other time-series images captured before and after it.
In such a case, if the pseudo image is generated based on the images other than that one image, and the image of the occlusion region in the pseudo image is generated based on each of those other images, the occlusion region can be specified more accurately, and an occlusion region image can be generated that makes the pseudo image even less likely to cause discomfort.
Note that when, for example, it is known in advance what the subject is, extracting an abnormal image affected by noise or the like becomes easier, based on, for example, the continuity of the subject's motion across the time-series images.
The motion of a subject also differs depending on, for example, whether the subject is a person or an automobile; when it is known in advance what the subject is, using the subject's characteristics as well in predicting its motion across the time-series images makes it still easier to extract an abnormal image affected by noise or the like.
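As a rough illustration of such a continuity check, the sketch below flags a frame as suspect when both frame-to-frame differences around it are unusually large relative to a robust estimate of the typical change; the statistic and the threshold factor are assumptions, since the embodiment does not prescribe a specific test.

```python
import numpy as np

def flag_abnormal_frames(frames, k=3.0):
    """Flag frames that break the motion continuity of a time series.

    frames : (T, H, W) array of time-series images
    k      : threshold in robust standard deviations (assumed value)
    Returns a length-T boolean array (True = suspected noise-affected).
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med)) + 1e-6  # robust spread estimate
    bad_step = diffs > med + k * 1.4826 * mad    # abnormally large changes
    flags = np.zeros(len(frames), dtype=bool)
    for t in range(1, len(frames) - 1):
        # A frame is suspect if the transitions into and out of it are both abnormal.
        flags[t] = bad_step[t - 1] and bad_step[t]
    return flags
```

A subject-specific motion model, as the text suggests, would replace this generic difference statistic with a prediction of where the subject should appear in each frame.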
According to the pseudo image generation device of the modification described above, the extent of the occlusion region in a pseudo image generated from one of the time-series base images, corresponding to shooting from a viewpoint different from that of the base image, can be specified more accurately based on the distance information of each subject captured in that base image, and the image of the specified occlusion region is generated based on the plurality of time-series images. The generated occlusion region image is therefore more likely to be an image containing information about both the subject of interest and its background subjects, which increases the likelihood of generating a pseudo image that causes little discomfort.
Further, in the pseudo image generation system 100A, which captures one pair consisting of a base image 1A and a reference image 1R, and in the modification described above, which captures a series of time-series base images 1A and reference images 1R, the usefulness of the present invention is not impaired even if the extent of each occlusion region is specified using not only the correspondence, based on the distance information 52, between the base image 1A and the pseudo image, but also the correspondence, based on the distance information 52, between the pseudo image and the reference image 1R captured in synchronization with the base image 1A, and the image of each occlusion region is generated using the base image 1A and the reference image 1R.
If the reference image 1R is used in this way, the extent of each occlusion region can be specified more accurately and more narrowly, and the image of each occlusion region can be generated using, for example, whichever of the base image 1A and the reference image 1R carries the more appropriate information about each subject, so a pseudo image that causes even less discomfort can be generated.
Note that the same approach can be adopted when the stereo camera is composed of three or more cameras.
Alternatively, by photographing the subject while moving the stereo camera relative to it, or by providing a plurality of stereo cameras, a set of stereo images or a series of time-series images of the subject may be captured from positions in a direction substantially orthogonal to the baseline direction, and a pseudo image may be generated using the various methods described above.
That is, capturing images of the subject from positions in a direction substantially orthogonal to the baseline direction can usually reduce the occlusion region on the subject compared with, for example, capturing images from positions in a direction substantially parallel to the baseline direction; the occlusion region on the pseudo image can therefore be specified as a narrower and more accurate range, information about the subjects can be obtained more precisely from the images of each subject captured from multiple directions, and a pseudo image that causes even less discomfort can be generated.
100A, 100B Pseudo image generation system
200A, 200B Pseudo image generation device
300 Stereo camera
1A, 1B, 1C, 1D, 1E Base image
1R Reference image
2A, 2B, 3A, 3B, 4A, 4B Pseudo image
5A Distance image
6A Pseudo image
31 Base camera
32 Reference camera
49 Signal line
51 Three-dimensionalization parameters
52 Distance information
53 Identification information
54 Shooting parameters
55 Coordinate system information
56 Correspondence
61 Model group shape data
62 Shape information
1a, 1b, 1c, 1d, 1e First foreground image
2a First background image
3a, 3b Second foreground image
4a, 4b Second background image
5a, 5b Occlusion region
6a, 6b First region
6aA Shape region
7a, 7b Second region
8a, 8b Boundary
8c, 8d, 8e, 8f, 8g, 8h Partial region
9a, 9b, 9c, 9d, 9e, 9f, 9g Partial region
10a Partial region
11b, 11c, 11d, 11e Partial region
t1 Time axis
DL Data line

Claims (13)

  1.  A pseudo image generation device comprising:
     a first acquisition means for acquiring a base image in which subjects are captured from a first viewpoint;
     a second acquisition means for acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement;
     an identification means for identifying the subject of interest and each background subject captured in a first background image, the first background image being the background portion, in the base image, of a first foreground image that is the image of the subject of interest;
     a correspondence acquisition means for acquiring, based on the distance information, a correspondence between the base image and a pseudo image of the subjects corresponding to shooting from a virtual viewpoint different from the first viewpoint;
     a first generation means for generating, based on the base image and on the correspondence for at least the first foreground image in the base image, the pseudo image containing a second foreground image that is the image of the subject of interest corresponding to shooting from the virtual viewpoint and a second background image that is the image of each background subject corresponding to shooting from the virtual viewpoint;
     a first specification means for specifying an occlusion region of the pseudo image included in neither the second foreground image nor the second background image; and
     a second generation means for generating an image of the occlusion region based on information about the subject of interest and information about each background subject.
  2.  The pseudo image generation device according to claim 1, further comprising
     a second specification means for specifying, within the occlusion region, a first region corresponding to the subject of interest and a second region corresponding to each background subject,
     wherein the second generation means generates an image of the first region based on the information about the subject of interest and generates an image of the second region based on the information about each background subject.
  3.  The pseudo image generation device according to claim 2, further comprising
     a third acquisition means for acquiring shape information expressing the full-circumference three-dimensional shape of the subject of interest,
     wherein the second specification means specifies the first region based on the shape information.
  4.  The pseudo image generation device according to claim 2, wherein
     the second generation means generates the image of the first region based on a boundary region of the second foreground image with the first region, and generates the image of the second region based on a boundary region of the second background image with the second region.
  5.  The pseudo image generation device according to claim 4, wherein
     the second generation means generates the images of the first region and the second region such that pixel values change gradually over the region extending from the boundary region of the first region on the second region side to the boundary region of the second region on the first region side.
  6.  The pseudo image generation device according to claim 1, wherein the second generation means
     (a) generates an image of a first boundary region of the occlusion region with the second foreground image based on a boundary region of the second foreground image with the occlusion region, and generates an image of a second boundary region of the occlusion region with the second background image based on a boundary region of the second background image with the occlusion region, and
     (b) generates the image of the occlusion region such that the pixel values of the occlusion region change gradually from the first boundary region to the second boundary region.
  7.  A pseudo image generation device comprising:
     a first acquisition means for acquiring a plurality of time-series images in which subjects are captured sequentially in time;
     a second acquisition means for acquiring, with one of the plurality of time-series images taken as a base image, distance information based on actual measurement for at least each point of a subject of interest among the subjects in the state in which the base image was acquired;
     an identification means for identifying the subject of interest and each background subject captured in a first background image, the first background image being the background portion, in the base image, of a first foreground image that is the image of the subject of interest;
     a correspondence acquisition means for acquiring, based on the distance information, a correspondence between the base image and a pseudo image of the subjects corresponding to shooting from a virtual viewpoint different from the first viewpoint from which the base image was captured;
     a first generation means for generating, based on the base image and on the correspondence for at least the first foreground image in the base image, the pseudo image containing a second foreground image that is the image of the subject of interest corresponding to shooting from the virtual viewpoint and a second background image that is the image of each background subject corresponding to shooting from the virtual viewpoint;
     a first specification means for specifying an occlusion region of the pseudo image included in neither the second foreground image nor the second background image; and
     a second generation means for generating an image of the occlusion region based on the plurality of time-series images.
  8.  The pseudo image generation device according to claim 1, wherein
     the second generation means applies a smoothing process to the generated image of the occlusion region.
  9.  A pseudo image generation device comprising:
     a first acquisition means for acquiring a base image in which subjects are captured from a first viewpoint;
     a second acquisition means for acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement;
     a third acquisition means for acquiring shape information expressing the full-circumference three-dimensional shape of the subject of interest; and
     a generation means for specifying, based on the base image, the distance information, and the shape information, a first region corresponding to the subject of interest and a second region corresponding to each background subject captured in the background portion of the image of the subject of interest, both regions lying within an occlusion region of a pseudo image of the subjects corresponding to shooting from a virtual viewpoint different from the first viewpoint, the occlusion region being a region whose corresponding portion is not captured in the base image, and for generating the pseudo image by generating an image of the first region based on information about the subject of interest and generating an image of the second region based on information about each background subject.
  10.  The pseudo image generation device according to claim 9, wherein the generation means comprises:
     (a) an identification means for identifying the subject of interest and each background subject captured in a first background image, the first background image being the background portion, in the base image, of a first foreground image that is the image of the subject of interest;
     (b) a correspondence acquisition means for acquiring a correspondence between the base image and the pseudo image based on the distance information;
     (c) a first generation means for generating, based on the base image and on the correspondence for at least the first foreground image in the base image, the pseudo image containing a second foreground image that is the image of the subject of interest corresponding to shooting from the virtual viewpoint and a second background image that is the image of each background subject corresponding to shooting from the virtual viewpoint;
     (d) a specification means for specifying the first region based on the distance information and the shape information, and for specifying the second region as a region of the pseudo image included in none of the second foreground image, the first region, and the second background image; and
     (e) a second generation means for generating the image of the first region based on the information about the subject of interest and generating the image of the second region based on the information about each background subject.
  11.  A pseudo image generation method comprising the steps of:
     acquiring a base image in which subjects are captured from a first viewpoint;
     acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement;
     identifying the subject of interest and each background subject captured in a first background image, the first background image being the background portion, in the base image, of a first foreground image that is the image of the subject of interest;
     acquiring, based on the distance information, a correspondence between the base image and a pseudo image of the subjects corresponding to shooting from a virtual viewpoint different from the first viewpoint;
     generating, based on the base image and on the correspondence for at least the first foreground image in the base image, the pseudo image containing a second foreground image that is the image of the subject of interest corresponding to shooting from the virtual viewpoint and a second background image that is the image of each background subject corresponding to shooting from the virtual viewpoint;
     specifying an occlusion region of the pseudo image included in neither the second foreground image nor the second background image; and
     generating an image of the occlusion region based on information about the subject of interest and information about each background subject.
  12.  A pseudo image generation method comprising the steps of:
     acquiring a plurality of time-series images in which subjects are captured sequentially in time;
     acquiring, with one of the plurality of time-series images taken as a base image, distance information based on actual measurement for at least each point of a subject of interest among the subjects in the state in which the base image was acquired;
     identifying the subject of interest and each background subject captured in a first background image, the first background image being the background portion, in the base image, of a first foreground image that is the image of the subject of interest;
     acquiring, based on the distance information, a correspondence between the base image and a pseudo image of the subjects corresponding to shooting from a virtual viewpoint different from the first viewpoint from which the base image was captured;
     generating, based on the base image and on the correspondence for at least the first foreground image in the base image, the pseudo image containing a second foreground image that is the image of the subject of interest corresponding to shooting from the virtual viewpoint and a second background image that is the image of each background subject corresponding to shooting from the virtual viewpoint;
     specifying an occlusion region of the pseudo image included in neither the second foreground image nor the second background image; and
     generating an image of the occlusion region based on the plurality of time-series images.
  13.  A pseudo image generation method comprising the steps of:
     acquiring a base image in which subjects are captured from a first viewpoint;
     acquiring, for at least each point of a subject of interest among the subjects, distance information based on actual measurement;
     acquiring shape information expressing the full-circumference three-dimensional shape of the subject of interest; and
     specifying, based on the base image, the distance information, and the shape information, a first region corresponding to the subject of interest and a second region corresponding to each background subject captured in the background portion of the image of the subject of interest, both regions lying within an occlusion region of a pseudo image of the subjects corresponding to shooting from a virtual viewpoint different from the first viewpoint, the occlusion region being a region whose corresponding portion is not captured in the base image, and generating the pseudo image by generating an image of the first region based on information about the subject of interest and generating an image of the second region based on information about each background subject.
PCT/JP2010/072529 2010-02-02 2010-12-15 Simulated image generating device and simulated image generating method WO2011096136A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010021136 2010-02-02
JP2010-021136 2010-02-02

Publications (1)

Publication Number Publication Date
WO2011096136A1 (en) 2011-08-11

Family

ID=44355158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/072529 WO2011096136A1 (en) 2010-02-02 2010-12-15 Simulated image generating device and simulated image generating method

Country Status (1)

Country Link
WO (1) WO2011096136A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07282259A (en) * 1994-04-13 1995-10-27 Matsushita Electric Ind Co Ltd Parallax arithmetic unit and image composite device
JP2003526829A (en) * 1998-08-28 2003-09-09 サーノフ コーポレイション Image processing method and apparatus
JP2009211335A (en) * 2008-03-04 2009-09-17 Nippon Telegr & Teleph Corp <Ntt> Virtual viewpoint image generation method, virtual viewpoint image generation apparatus, virtual viewpoint image generation program, and recording medium from which same recorded program can be read by computer

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013042379A (en) * 2011-08-17 2013-02-28 Ricoh Co Ltd Imaging apparatus
WO2017087653A3 (en) * 2015-11-19 2017-06-29 Kla-Tencor Corporation Generating simulated images from design information
US9965901B2 (en) 2015-11-19 2018-05-08 KLA—Tencor Corp. Generating simulated images from design information
WO2017094536A1 (en) * 2015-12-01 2017-06-08 ソニー株式会社 Image-processing device and image-processing method
US10846916B2 (en) 2015-12-01 2020-11-24 Sony Corporation Image processing apparatus and image processing method
WO2017205537A1 (en) * 2016-05-25 2017-11-30 Kla-Tencor Corporation Generating simulated images from input images for semiconductor applications
US10395356B2 (en) 2016-05-25 2019-08-27 Kla-Tencor Corp. Generating simulated images from input images for semiconductor applications
WO2021149509A1 (en) * 2020-01-23 2021-07-29 ソニーグループ株式会社 Imaging device, imaging method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 10845267; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 10845267; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: JP