WO2011096136A1 - Device and method for generating simulated images

Device and method for generating simulated images

Info

Publication number
WO2011096136A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
pseudo
background
area
Prior art date
Application number
PCT/JP2010/072529
Other languages
English (en)
Japanese (ja)
Inventor
Osamu Toyama
Takuya Kawano
Original Assignee
Konica Minolta Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Konica Minolta Holdings, Inc.
Publication of WO2011096136A1

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording
    • G03B35/12Stereoscopic photography by simultaneous recording involving recording of different viewpoint images in different colours on a colour film
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/25Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • The present invention relates to a pseudo-image generation apparatus and method that use an image of a subject photographed from one viewpoint to generate a pseudo image simulating the image that would be obtained if the subject were photographed from a different, virtual viewpoint.
  • Pseudo-image generation devices, which simulate the image that would be obtained by photographing a subject from a virtual viewpoint different from the actual shooting viewpoint without actually photographing from that viewpoint, are beginning to be used for purposes such as generating groups of images that can be viewed stereoscopically.
  • In one known technique (Patent Document 1), the depth of the subject is estimated from the screen composition of a single captured image (the reference image), and the pseudo image is generated from the reference image by obtaining, from the estimated depth information, the correspondence between each coordinate on the reference image and each coordinate on the pseudo image.
  • In this technique, for a portion of the scene not captured in the reference image, an appropriate pixel value cannot be obtained through the correspondence, and the pseudo image cannot be generated for that region (the occlusion region).
  • In Patent Document 1, the pixel values of the occlusion region are set using statistical quantities of the texture in each surrounding region.
  • However, because the pseudo-image generation apparatus of Patent Document 1 derives the correspondence between the reference image and the pseudo image from estimated depth, it cannot obtain an accurate correspondence. The range of the occlusion region is therefore inaccurate, and there is a problem that an observer viewing the generated pseudo image experiences a sense of discomfort.
  • Moreover, the occlusion region normally contains information on both the subject and its background. If generation of the occlusion-region image does not take this into account and does not aim at improving image quality, there is a problem that images of the occlusion region likely to cause the observer discomfort are generated more frequently.
  • The present invention has been made to solve these problems, and its object is to provide a technique for specifying the range of an occlusion region more accurately and generating a pseudo image causing less discomfort.
  • According to a first aspect, a pseudo image generation device includes: a first acquisition unit that acquires a reference image in which each subject is captured from a first viewpoint; a second acquisition unit that acquires distance information based on actual measurement for at least each point of the subject of interest among the subjects; identification means for identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image; correspondence acquisition means for acquiring, based on the distance information, the correspondence between the reference image and a pseudo image of each subject corresponding to shooting from a virtual viewpoint different from the first viewpoint; first generation means for generating, based on the correspondence for at least the first foreground image of the reference image and on the reference image, a pseudo image that includes a second foreground image, which is the image of the subject of interest corresponding to shooting from the virtual viewpoint, and a second background image, which is the image of each background subject corresponding to shooting from the virtual viewpoint; first specifying means for specifying an occlusion region of the pseudo image that includes neither the second foreground image nor the second background image; and second generation means for generating an image of the occlusion region based on the respective information on the subject of interest and each background subject.
  • According to a second aspect, the pseudo image generation device according to the first aspect further includes second specifying means for specifying, within the occlusion region, a first region corresponding to the subject of interest and a second region corresponding to the background subjects, and the second generation means generates the image of the first region based on information on the subject of interest and the image of the second region based on information on each background subject.
  • According to a third aspect, the pseudo image generation device according to the second aspect further includes third acquisition means for acquiring shape information representing the entire three-dimensional shape of the subject of interest, and the second specifying means specifies the first region based on the shape information.
  • According to a fourth aspect, in the pseudo image generation device according to the second aspect, the second generation means generates the image of the first region based on a boundary region of the second foreground image adjoining the first region, and generates the image of the second region based on a boundary region of the second background image adjoining the second region.
  • According to a fifth aspect, in the pseudo image generation device according to the fourth aspect, the second generation means generates the images of the first region and the second region so that pixel values change gradually from the boundary region on the second-region side of the first region across to the boundary region on the first-region side of the second region.
  • According to a sixth aspect, in the pseudo image generation device according to the first aspect, the second generation means (a) generates an image of a first boundary region of the occlusion region adjoining the second foreground image based on the boundary region of the second foreground image adjoining the occlusion region, and generates an image of a second boundary region of the occlusion region adjoining the second background image based on the boundary region of the second background image adjoining the occlusion region, and (b) generates the image of the occlusion region so that its pixel values change gradually from the first boundary region to the second boundary region.
  • According to a seventh aspect, a pseudo image generation device includes: first acquisition means for acquiring a plurality of time-series images in which each subject is photographed in time sequence; second acquisition means for acquiring, with one of the plurality of time-series images as a reference image, distance information based on actual measurement for at least each point of the subject of interest in the state in which the reference image was acquired; identification means for identifying the subject of interest and each background subject photographed in the first background image, which is the background portion of the first foreground image, the image of the subject of interest in the reference image; correspondence acquisition means for acquiring, based on the distance information, the correspondence between the reference image and a pseudo image of each subject corresponding to shooting from a virtual viewpoint different from the first viewpoint from which the reference image was photographed; and first generation means for generating, based on the correspondence for at least the first foreground image of the reference image and on the reference image, a pseudo image including the second foreground image, which is the image of the subject of interest corresponding to shooting from the virtual viewpoint, and the image of each background subject corresponding to shooting from the virtual viewpoint.
  • According to an eighth aspect, in the pseudo image generation device according to the first aspect, the second generation means performs smoothing processing on the generated image of the occlusion region.
  • According to a ninth aspect, a pseudo image generation device includes: a first acquisition unit that acquires a reference image in which each subject is captured from a first viewpoint; a second acquisition unit that acquires distance information based on actual measurement for at least each point of the subject of interest; a third acquisition unit that acquires shape information representing the entire three-dimensional shape of the subject of interest; and generation means that specifies, within the occlusion region, which is the region of the pseudo image of each subject corresponding to shooting from a virtual viewpoint different from the first viewpoint in which no corresponding portion of the reference image is photographed, a first region corresponding to the subject of interest and a second region corresponding to each background subject photographed in the background portion of the image of the subject of interest, based on the reference image, the distance information, and the shape information, and that generates the pseudo image by generating the image of the first region based on information on the subject of interest and the image of the second region based on information on each background subject.
  • According to a tenth aspect, in the pseudo image generation device according to the ninth aspect, the generation means includes: (a) identification means for identifying the subject of interest and each background subject photographed in the first background image, which is the background portion of the first foreground image, the image of the subject of interest in the reference image; (b) correspondence acquisition means for acquiring the correspondence between the reference image and the pseudo image based on the distance information; (c) first generation means for generating, based on the correspondence for at least the first foreground image and on the reference image, a pseudo image including the second foreground image, which is the image of the subject of interest corresponding to shooting from the virtual viewpoint, and the second background image, which is the image of each background subject corresponding to shooting from the virtual viewpoint; (d) specifying means for specifying the first region based on the distance information and the shape information, and for specifying the second region as the region of the pseudo image that includes none of the second foreground image, the first region, and the second background image; and (e) second generation means for generating the image of the first region based on information on the subject of interest and the image of the second region based on information on each background subject.
  • According to an eleventh aspect, a pseudo image generation method includes: a step of acquiring a reference image in which each subject is photographed from a first viewpoint; a step of acquiring distance information based on actual measurement for at least each point of the subject of interest among the subjects; and a step of identifying the subject of interest and each background subject photographed in a first background image, which is the background portion of a first foreground image, the image of the subject of interest in the reference image.
  • According to a twelfth aspect, a pseudo image generation method includes: a step of acquiring a plurality of time-series images in which each subject is photographed in time sequence; a step of acquiring, with one of the plurality of time-series images as a reference image, distance information based on actual measurement for at least each point of the subject of interest in the state in which the reference image was acquired; a step of identifying the subject of interest and each background subject photographed in the first background image, which is the background portion of the first foreground image, the image of the subject of interest in the reference image; a step of acquiring, based on the distance information, the correspondence between the reference image and a pseudo image of each subject corresponding to shooting from a virtual viewpoint different from the first viewpoint from which the reference image was photographed; and a step of generating, based on the correspondence for at least the first foreground image and on the reference image, a pseudo image including a second foreground image, which is the image of the subject of interest corresponding to shooting from the virtual viewpoint, and the image of each background subject corresponding to shooting from the virtual viewpoint.
  • According to a thirteenth aspect, a pseudo image generation method includes a step of acquiring a reference image in which each subject is photographed from a first viewpoint, a step of acquiring distance information based on actual measurement for at least each point of the subject of interest among the subjects, and a step of acquiring shape information representing the entire three-dimensional shape of the subject of interest; within the occlusion region, which is the region in which no corresponding portion of the reference image is photographed, a first region corresponding to the subject of interest and a second region corresponding to each background subject photographed in the background portion of the image of the subject of interest are specified based on the reference image, the distance information, and the shape information, and the image of the first region is generated based on information on the subject of interest.
  • With the pseudo image generation device according to any of the first to tenth aspects or the pseudo image generation method according to any of the eleventh to thirteenth aspects, the range of the occlusion region on the pseudo image can be specified more accurately because it is based on distance information of the subject obtained by actual measurement, and since the image of the specified occlusion region is generated based on the information on the subject of interest and each background subject, a pseudo image causing less discomfort can be generated.
  • FIG. 1 is a block diagram illustrating an example of a main configuration of a pseudo image generation system 100A according to the embodiment.
  • the pseudo image generation system 100A mainly includes a stereo camera 300 and a pseudo image generation device 200A.
  • the stereo camera 300 mainly includes a base camera 31 and a reference camera 32.
  • The base camera 31 and the reference camera 32 each mainly comprise an imaging optical system and a control processing circuit (not shown).
  • The base camera 31 and the reference camera 32 are separated by a predetermined baseline length; light-ray information from the subject entering each imaging optical system is processed synchronously by the control processing circuits and the like, so that a base image 1A and a reference image 1R, which are digital images of a predetermined size such as VGA, are generated for the subject.
  • The generated base image 1A and reference image 1R are supplied to the input/output unit 41 of the pseudo image generation device 200A via the data line DL.
  • The various operations of the stereo camera 300 are controlled by control signals supplied from the pseudo image generation device 200A via the input/output unit 41 and the data line DL.
  • The stereo camera 300 can also generate a plurality of base images 1A and reference images 1R by photographing the subject continuously in time sequence while keeping the base camera 31 and the reference camera 32 synchronized.
  • The base image 1A and the reference image 1R may be color images or monochrome images.
  • In this embodiment the stereo camera 300 is employed; however, instead of the reference camera 32 of the stereo camera 300, a light projecting device that projects various detection lights for shape measurement, such as laser light, onto the subject may be used.
  • In that case, the base camera 31 and the light projecting device may constitute an active-ranging three-dimensional measuring machine, and this measuring machine may be used in place of the stereo camera 300.
  • With such a configuration, the image of the subject and the image used for measuring the distance information can be shared, so that in the acquisition of the correspondence 56 (see FIG. 2) performed by the correspondence acquisition unit 15 described later, the processing cost of associating the image with the distance information can be reduced.
  • Even if the three-dimensional measuring machine measures the distance information 52 (FIG. 2) about the subject based on an image taken from a predetermined viewpoint different from that of the base image 1A, the base image 1A and the distance information 52 can still be associated through matching between that image and the base image 1A, so the usefulness of the present invention is not impaired.
  • The pseudo image generation device 200A mainly includes a CPU 11A, an input/output unit 41, an operation unit 42, a display unit 43, a ROM 44, a RAM 45, and a storage device 46, and is realized by, for example, a computer or a dedicated hardware device.
  • The input/output unit 41 is configured by an input/output interface such as a USB interface; it inputs image information and the like supplied from the stereo camera 300 to the pseudo image generation device 200A, and outputs various control signals from the pseudo image generation device 200A to the stereo camera 300.
  • The operation unit 42 includes, for example, a keyboard and a mouse; through it, various control parameters and various operation modes of the pseudo image generation device 200A are set.
  • The display unit 43 includes, for example, a liquid crystal display, and displays various image information such as the base image 1A supplied from the stereo camera 300 and the pseudo image 4A (FIG. 2) generated by the pseudo image generation device 200A, as well as various information related to the device and a control GUI (Graphical User Interface).
  • The ROM (Read Only Memory) 44 is a read-only memory that stores programs for operating the CPU 11A; a readable and writable nonvolatile memory (for example, a flash memory) may be used instead.
  • The RAM (Random Access Memory) 45 is a readable and writable volatile memory that temporarily stores various images acquired by the first acquisition unit 12, pseudo images generated by the generation unit 21A, and processing information of the CPU 11A, functioning as a work memory.
  • the storage device 46 is composed of, for example, a readable / writable nonvolatile memory such as a flash memory, a hard disk device, or the like, and permanently records various information such as setting information for the pseudo image generation device 200A.
  • the storage device 46 is provided with a parameter storage unit 47 and a shape data storage unit 48.
  • The parameter storage unit 47 stores various parameters such as a three-dimensional parameter 51 (FIG. 2), imaging parameters 54 (FIG. 2), and coordinate system information 55 (FIG. 2).
  • The shape data storage unit 48 stores model group shape data 61 (FIG. 2) representing the entire three-dimensional shape of each of various subjects; as described later, it is referred to by the third acquisition unit 14 and used in the process of acquiring the shape information 62 (FIG. 2) about the subject of interest.
  • the CPU (Central Processing Unit) 11A is a control processing device that controls each functional unit of the pseudo image generation device 200A, and executes control and processing according to a program stored in the ROM 44.
  • the CPU 11A also functions as the first acquisition unit 12, the second acquisition unit 13, the third acquisition unit 14, the correspondence acquisition unit 15, and the generation unit 21A, as will be described later.
  • That is, the CPU 11A generates, from the base image 1A of the subject photographed from the first viewpoint, the pseudo image 4A (FIG. 2) of the subject corresponding to photographing from a virtual viewpoint different from the first viewpoint.
  • The generation unit 21A is configured by functional units such as a first specifying unit 22, a second specifying unit 23, a first generation unit 24, a second generation unit 25, and an identification unit 26.
  • The CPU 11A, the input/output unit 41, the operation unit 42, the display unit 43, the ROM 44, the RAM 45, the storage device 46, and so on are electrically connected to one another via a signal line 49; the CPU 11A can therefore, for example, control the stereo camera 300 via the input/output unit 41, acquire image information from the stereo camera 300, and drive the display unit 43 at predetermined timings.
  • The first acquisition unit 12, the second acquisition unit 13, the third acquisition unit 14, the correspondence acquisition unit 15, and the generation unit 21A, as well as the functional units of the generation unit 21A, namely the first specifying unit 22, the second specifying unit 23, the first generation unit 24, the second generation unit 25, and the identification unit 26, are realized by the CPU 11A executing predetermined programs.
  • Each of these functional units may be realized by a dedicated hardware circuit, for example.
  • In the pseudo image generation system 100A, the pseudo image generation device 200A acquires the base image 1A and the reference image 1R captured by the stereo camera 300 and, by processing them, generates a pseudo image corresponding to shooting from a virtual viewpoint different from the first viewpoint from which the base image 1A was shot, that is, a pseudo image corresponding to an image of the subject shot from that virtual viewpoint, based on the base image 1A.
  • FIG. 2 is a block diagram illustrating an example of a main functional configuration of the pseudo image generation apparatus 200A according to the embodiment.
  • FIG. 19 is a diagram illustrating an example of an operation flow of the pseudo image generation apparatus 200A according to the embodiment.
  • The operator positions and orients the stereo camera 300 so that the subject of interest, for which a pseudo image corresponding to shooting from a virtual viewpoint is to be created, can be photographed by both the base camera 31 and the reference camera 32 of the stereo camera 300.
  • The position of the base camera 31 in this state is the first viewpoint; more precisely, for example, the principal point position of the imaging optical system of the base camera 31 is the first viewpoint.
  • When the operator performs a shooting operation, a control signal corresponding to the button operation is supplied to the CPU 11A, and the CPU 11A supplies a control signal that causes the stereo camera 300 to perform a shooting operation.
  • The stereo camera 300, having received the control signal, performs a shooting operation with the base camera 31 and the reference camera 32, generates the base image 1A and the reference image 1R of each subject in the shooting field of view, and supplies them to the pseudo image generation device 200A.
  • The first acquisition unit 12 acquires, via the input/output unit 41, the base image 1A and the reference image 1R obtained by photographing each subject from the first viewpoint (step S10 in FIG. 19).
  • FIG. 3 is a diagram illustrating an example of the base image 1A.
  • In the base image 1A, a first foreground image 1a, which is an image of a person facing the front, is captured; the background portion of the first foreground image 1a is a first background image 2a, in which the wall behind the person is photographed.
  • The acquired base image 1A is supplied to the second acquisition unit 13, the correspondence acquisition unit 15, the first generation unit 24, and the identification unit 26, and the acquired reference image 1R is supplied to the second acquisition unit 13.
  • The first acquisition unit 12 may instead acquire, via the input/output unit 41, a base image 1A and a reference image 1R that were captured in advance and stored on a recording medium.
  • The second acquisition unit 13, having acquired the three-dimensional parameter 51, performs matching between the base image 1A and the reference image 1R to obtain, for each pixel of the base image 1A, the parallax with respect to the reference image 1R.
  • The second acquisition unit 13 then converts the parallax for each pixel of the base image 1A based on the principle of triangulation using the three-dimensional parameter 51, thereby generating distance information 52, a set of three-dimensional coordinate values for the points of each subject corresponding to the pixels of the base image 1A.
  • As the coordinate system of the distance information 52, a camera coordinate system depending on the position and orientation of the stereo camera 300 is employed; for example, an XYZ orthogonal coordinate system whose origin is the principal point of the base camera and whose Z axis lies along the optical axis of the base camera.
  • In this way, the second acquisition unit 13 acquires distance information based on actual measurement for at least each point of the subject of interest among the subjects (step S20 in FIG. 19).
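As a rough illustration of this step, the sketch below converts a per-pixel disparity map from a rectified stereo pair into 3D points in the base camera's coordinate system using the triangulation relation Z = f·B/d. The function name, the rectified-pair assumption, and the parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def disparity_to_points(disparity, focal_px, baseline_m, cx, cy):
    """Convert a disparity map (pixels) from a rectified stereo pair into
    3D points in the base camera's coordinate system (meters), i.e. a
    structure analogous to the distance information 52.

    Triangulation: Z = f * B / d; X and Y follow from the pinhole model.
    Pixels with non-positive disparity get NaN (distance not measured).
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(disparity > 0, focal_px * baseline_m / disparity, np.nan)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```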
  • FIG. 5 is a diagram illustrating an example of the distance information 52 displayed as the distance image 5A.
  • The distance image 5A shown in FIG. 5 is an image in which the Z-axis coordinate of the distance information 52 corresponding to each pixel of the base image 1A is used as the pixel value of that pixel; the unit of the pixel values is meters.
  • The dotted line in the distance image 5A is a supplementary indication displaying the outline of the first foreground image 1a on the distance image 5A, in order to show the relationship between the pixel values of the distance image 5A and the first foreground image 1a of the base image 1A in an easy-to-understand manner.
  • For each background subject photographed in the background portion of the image of the subject of interest, part or all of the distance information 52 may not be acquirable, being limited by the measurement range of the distance measuring device such as the stereo camera and by the reflectance of the background subject.
  • Also, even if the base camera 31 and the reference camera 32 have the same angle of view, an end region of the base image 1A is not photographed in the reference image 1R because of the parallax between the two cameras, so no distance information 52 is generated for that end region.
  • the distance information 52 acquired by the second acquisition unit 13 is supplied to the third acquisition unit 14, the correspondence relationship acquisition unit 15, the second specification unit 23, and the identification unit 26.
  • Even when the base image is composed of, for example, images of a person at short distance, a partition at medium distance, and a building at long distance, so that occlusion regions arise for the partition and the building as well, applying the method of the present invention with the partition as the subject of interest makes it possible to specify the ranges of the occlusion regions relating to the partition and to the building behind it, and to generate their images.
  • When the third acquisition unit 14 receives the distance information 52 from the second acquisition unit 13, it identifies, from the model group shape data 61 expressing the entire three-dimensional shapes of various subjects and stored in advance in the shape data storage unit 48, the shape data closest to the shape represented by the distance information 52, and acquires the identified shape data as shape information 62 representing the entire three-dimensional shape of the subject of interest (step S30 in FIG. 19).
  • Various methods can be employed for identifying, from the various shape data, the shape data closest to the distance information 52 for the subject of interest; for example, the method disclosed in Japanese Patent Laid-Open No. 2001-143072, which performs the identification by comparing the distance image 5A for the distance information 52 with a distance image for each entry of the model group shape data 61, can be employed.
  • The model group shape data 61 stored in the shape data storage unit 48 is preferably as close as possible to the actual full-circumference shape data of the subject of interest.
  • The shape information 62 for the entire circumference of the subject may also be set directly in the pseudo image generation device 200A, in which case the shape information 62 for the subject is acquired without searching the model group shape data 61 for the corresponding shape.
  • The shape information 62 acquired by the third acquisition unit 14 is supplied to the second specifying unit 23.
  • When the identification unit 26 receives the base image 1A and the distance information 52 from the first acquisition unit 12 and the second acquisition unit 13, respectively, it identifies the subject of interest and each background subject photographed in the first background image 2a, which is the background portion of the first foreground image 1a, the image of the subject of interest in the base image 1A, and generates identification information 53 as the identification result (step S40 in FIG. 19).
  • Methods of identifying the subject of interest and each background subject include identification based on image information and identification based on distance information.
  • When identifying based on the distance information 52, for example, a portion where the difference in distance between adjacent pixels exceeds a predetermined range may be taken as the boundary between the subject of interest and a background subject, or a portion with unevenness exceeding a predetermined reference may be treated as the subject of interest and a portion with unevenness at or below the reference as a background subject.
  • With identification based on distance information, the subject of interest and the background subjects can be identified appropriately even when the boundary between their images is unclear because their patterns and colors are similar and identification based on image information alone would fail.
  • Even when the subject of interest and the background subjects are identified only by image processing based on image information, the foreground and background partial images can in many cases be identified accurately, so the usefulness of the present invention is not impaired.
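The sketch below gives a minimal, illustrative form of the distance-based identification described above, assuming the distance information is available as a Z-value image with NaN for unmeasured pixels; the threshold value, the median-based labeling, and all names are assumptions for illustration.

```python
import numpy as np

def split_foreground_background(z_image, depth_step=0.5):
    """Crude subject/background identification from a distance image.

    A jump in Z between adjacent pixels larger than depth_step (meters)
    marks a subject/background boundary; pixels nearer than the median
    scene depth are labeled as the subject of interest. A fuller
    implementation would region-grow from the detected boundaries.
    """
    boundary_h = np.abs(np.diff(z_image, axis=1)) > depth_step  # vertical edges
    boundary_v = np.abs(np.diff(z_image, axis=0)) > depth_step  # horizontal edges
    foreground = z_image < np.nanmedian(z_image)
    return foreground, boundary_h, boundary_v
```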
  • The identification information 53 generated by the identification unit 26 is supplied to the first generation unit 24 and the second generation unit 25; the pseudo image 2A generated by the first generation unit 24, described later, is also supplied to the identification unit 26.
  • The first generation unit 24 has an operation mode in which it uses the identification information 53 to extract only the first foreground image 1a corresponding to the subject of interest from the base image 1A, and generates from the first foreground image 1a the second foreground image 3a (FIG. 4), the pseudo image of the subject of interest corresponding to shooting from the virtual viewpoint.
  • Operating in this mode, the first generation unit 24 can generate the pseudo image 2A at a lower processing cost than when computing the pseudo image 2A for the entire base image 1A.
  • the identification unit 26 can also identify the subject of interest and the background subject based on the pseudo image 2A generated by the first generation unit 24.
  • The correspondence acquisition unit 15, supplied with the base image 1A, the distance information 52, the imaging parameters 54, and the coordinate system information 55, acquires, based on the distance information 52, the correspondence 56 between the base image 1A and the pseudo image 2A of each subject corresponding to shooting from a virtual viewpoint different from the first viewpoint (step S50 in FIG. 19).
  • The correspondence 56 is the correspondence between each coordinate on the base image 1A and each coordinate on the pseudo image 2A.
  • The imaging parameters 54 and the coordinate system information 55 are stored in the parameter storage unit 47; the imaging parameters 54 are parameters such as the focal length, the number of pixels, and the pixel size for each of the base camera 31, a virtual camera placed at the virtual viewpoint, and the distance measuring device that measures the distance information 52 (the stereo camera 300 in this embodiment).
  • The coordinate system information 55 is information representing the relative positions and orientations of the base camera 31, the virtual camera, and the distance measuring device.
  • Using the imaging parameters 54 and the coordinate system information 55, the distance information 52 corresponding to each pixel of the base image 1A can be obtained even if the position and orientation of the base camera 31 differ from those of the distance measuring device, and the correspondence obtained when the three-dimensional shape represented by the distance information 52 is perspective-projected onto the pseudo image 2A, that is, the correspondence between each pixel of the base image 1A (each coordinate on the base image 1A) and each pixel of the pseudo image 2A (each coordinate on the pseudo image 2A), can also be obtained.
  • If a single camera both captures the base image 1A and captures the image used for ranging, as with a stereo camera, each pixel of the base image 1A can be associated with the distance information 52 even when the relationship between the position and orientation of the base camera 31 and those of the distance measuring device is unknown among the imaging parameters 54 and the coordinate system information 55, so the correspondence 56 can still be obtained.
  • the correspondence 56 acquired by the correspondence acquisition unit 15 is supplied to the first generation unit 24 and used to generate the pseudo image 2A.
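Under simplified assumptions (a pinhole virtual camera and a known rotation R and translation t from the base camera to the virtual viewpoint), a correspondence like 56 can be derived by perspective-projecting the measured 3D points onto the virtual image plane, as sketched below; the function and parameter names are illustrative.

```python
import numpy as np

def project_to_virtual_view(points, R, t, focal_px, cx, cy):
    """Map measured 3D points (base-camera coordinates, shape (h, w, 3))
    to pixel coordinates in a virtual camera, giving a per-pixel
    correspondence between the base image and the pseudo image.
    """
    h, w, _ = points.shape
    p = points.reshape(-1, 3) @ R.T + t        # into virtual-camera coordinates
    with np.errstate(divide="ignore", invalid="ignore"):
        u = focal_px * p[:, 0] / p[:, 2] + cx  # pinhole perspective projection
        v = focal_px * p[:, 1] / p[:, 2] + cy
    # NaN entries mean no distance was measured for that pixel.
    return u.reshape(h, w), v.reshape(h, w)
```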
  • For part of the subject, such as the periphery of the first foreground image 1a corresponding to the subject of interest, the distance information 52 may not be measurable, because of occlusion due to the parallax of the distance measuring device that measures the distance information 52 on the principle of triangulation, or because the amount of light entering the measurement optical system from the subject decreases as the subject surface tilts relative to the optical axis of the measurement optical system.
  • However, the ratio of pixels for which the distance information 52 is not obtained to the total number of pixels of the first foreground image 1a is usually quite low.
  • Moreover, the correspondence 56 based on actually measured distance information 52 can be obtained with higher accuracy than a correspondence based on estimated distance information; therefore, even if the distance information 52 is not obtained for every pixel of the first foreground image 1a in the base image 1A, the usefulness of the present invention is not impaired.
  • For pixels of the first foreground image 1a for which the distance information 52 has not been acquired, no pixel of the second foreground image 3a is formed, and such pixels can easily be distinguished from the pixels of the first foreground image 1a for which the distance information 52 has been acquired; consequently, the situation in which pixel values of the subject of interest are set in the second region 7a corresponding to the background subjects can easily be avoided.
  • FIG. 4 is a diagram illustrating an example of the pseudo image 2A.
  • The first generation unit 24 generates, based on the correspondence 56 for at least the first foreground image 1a corresponding to the subject of interest of the base image 1A and on the base image 1A, a pseudo image 2A including the second foreground image 3a, which is the image of the subject of interest corresponding to shooting from the virtual viewpoint, and the second background image 4a, which is the image of each background subject corresponding to shooting from the virtual viewpoint (step S60 in FIG. 19).
  • Depending on the operation mode set from the operation unit 42 or the like, the first generation unit 24 can use the identification information 53 to generate the second foreground image 3a (FIG. 4) of the pseudo image 2A from the first foreground image 1a (FIG. 3) of the base image 1A according to the correspondence 56, while using the first background image 2a of the base image 1A as-is as the second background image 4a of the pseudo image 2A, without applying the correspondence 56.
  • The pseudo image 2A shown in FIG. 4 is the pseudo image obtained when this operation mode is selected.
  • The occlusion region 5a in FIG. 4 is the region of the pseudo image 2A in which neither the second foreground image 3a nor the second background image 4a exists.
  • When the first background image 2a is used as-is, the image of the background subjects does not strictly have the positional relationship corresponding to the virtual viewpoint.
  • However, since the parallax between the base image 1A and the pseudo image 2A is smaller for a distant background subject than for the subject of interest, the observer feels little discomfort with the pseudo image 2A as long as the parallax for the subject of interest, on which the observer focuses, has the value corresponding to the virtual viewpoint.
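As an illustration of how the occlusion region 5a arises, the sketch below forward-warps the foreground pixels with the correspondence and marks as occlusion the pixels covered by neither the warped foreground nor the kept background. It assumes a color image of shape (h, w, 3), boolean masks, and the correspondence maps from the earlier sketch; all names are illustrative.

```python
import numpy as np

def warp_and_find_occlusion(base_img, fg_mask, u_map, v_map):
    """Warp foreground pixels of the base image to the virtual view,
    keep the background as-is, and return the pseudo image together with
    the occlusion mask (pixels receiving neither foreground nor background).
    """
    h, w = fg_mask.shape
    pseudo = np.where(fg_mask[..., None], 0, base_img)  # background kept as-is
    covered = ~fg_mask                                   # background coverage
    ys, xs = np.nonzero(fg_mask & np.isfinite(u_map))
    ui = np.clip(np.round(u_map[ys, xs]).astype(int), 0, w - 1)
    vi = np.clip(np.round(v_map[ys, xs]).astype(int), 0, h - 1)
    pseudo[vi, ui] = base_img[ys, xs]                    # second foreground image
    covered[vi, ui] = True
    return pseudo, ~covered                              # pseudo image, occlusion mask
```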
  • The generated pseudo image 2A is supplied to the first specifying unit 22 and the second generation unit 25, and, as described in the explanation of the identification unit 26, is also supplied to the identification unit 26.
  • The first generation unit 24 may also adopt the shape information 62 instead of the "depth estimation model" and acquire the pseudo image 2A by applying the method of Patent Document 1.
  • The first specifying unit 22 specifies the occlusion region 5a, the region of the pseudo image 2A that contains neither the second foreground image 3a nor the second background image 4a (step S70 in FIG. 19), and information on the specified occlusion region 5a is supplied to the second specifying unit 23 and the second generation unit 25.
  • The information on the specified occlusion region 5a may be generated, for example, as coordinate information of each pixel included in the occlusion region 5a or of each pixel on its boundary, or as an image such as the pseudo image 2A shown in FIG. 4.
  • The second specifying unit 23 is supplied with the pseudo image 2A, the occlusion region 5a, the imaging parameters 54, the coordinate system information 55, the distance information 52, and the shape information 62 from the first generation unit 24, the first specifying unit 22, the parameter storage unit 47, the second acquisition unit 13, and the third acquisition unit 14, respectively.
  • The second specifying unit 23 specifies the first region 6a, relating to the subject of interest, and the second region 7a, relating to the background subjects, within the occlusion region 5a (step S80 in FIG. 19).
  • FIG. 6 is a diagram showing an example of the pseudo image 6A related to the shape information 62 of the entire circumference of the subject.
  • The shape region 6aA shown in FIG. 6 is the region that the three-dimensional shape represented by the shape information 62 occupies on the pseudo image for the virtual viewpoint when that shape is placed at the same position and posture as the subject of interest.
  • The position and posture of the three-dimensional shape represented by the distance information 52 can also be acquired, and since the shapes represented by the distance information 52 and by the shape information 62 belong to the same subject of interest, the shape represented by the shape information 62 can be given the same position and posture in three-dimensional space as the shape represented by the distance information 52.
  • The correspondence between the three-dimensional shape represented by the shape information 62 and the image of that shape formed by perspective projection on the pseudo image for the virtual viewpoint is obtained from the imaging parameters 54 and the coordinate system information 55.
  • The second specifying unit 23 thus specifies the shape region 6aA on the pseudo image for the virtual viewpoint and generates, for example, a pseudo image 6A expressing the shape region 6aA by assigning a predetermined pixel value only to that region.
  • As the method by which the second specifying unit 23 generates the pseudo image 6A from the shape information 62, the method disclosed in Japanese Patent Laid-Open No. 10-293862 may be employed.
  • FIG. 7 is a diagram showing an example of the pseudo image 3A in which the first area 6a and the second area 7a are set in the occlusion area 5a.
  • Based on the generated shape region 6aA and the information on the occlusion region 5a supplied from the first specifying unit 22, the second specifying unit 23 specifies, within the occlusion region 5a, the first region 6a, the occlusion region corresponding to the subject of interest, and specifies as the second region 7a, the occlusion region relating to the background subjects, for example the part of the occlusion region 5a not included in the first region 6a.
  • In this way, the second specifying unit 23 specifies, based on the shape information 62, the first region 6a corresponding to the subject of interest within the occlusion region 5a, and further specifies the second region 7a corresponding to each background subject; information on the specified first region 6a and second region 7a is supplied to the second generation unit 25.
  • Even if the first region 6a and the second region 7a are instead set within the occlusion region 5a according to a predetermined ratio derived from statistical data, such as an area ratio or a horizontal or vertical pixel-count ratio of 1:3 between the first region 6a and the second region 7a, the range of the occlusion region corresponding to the subject of interest (the first region 6a) and the range corresponding to the background subjects (the second region 7a) can usually still be specified well enough that the observer feels no discomfort, so the usefulness of the present invention is not impaired.
  • The same holds even when the first region 6a and the second region 7a within the occlusion region 5a are set according to a predetermined ratio not based on statistical data.
  • The information on the specified first region 6a and second region 7a may be generated, for example, as coordinate information of each pixel included in these regions or of each pixel on their boundaries, or as an image such as the pseudo image 3A shown in FIG. 7.
  • The processing described above uses the information of the occlusion region 5a; however, depending on the operation mode set from the operation unit 42, the second specifying unit 23 can also specify the first region 6a and the second region 7a without using the information of the occlusion region 5a.
  • In that case, the second specifying unit 23 first specifies the shape region 6aA by, for example, the method described above, and specifies as the first region 6a the part of the shape region 6aA not covered by the second foreground image 3a of the pseudo image 2A supplied from the first generation unit 24.
  • Next, the second specifying unit 23 specifies as the second region 7a the region of the pseudo image 2A that includes none of the second foreground image 3a, the first region 6a, and the second background image 4a; the first region 6a and the second region 7a are thus specified without using the information of the occlusion region 5a.
  • In other words, the second specifying unit 23 also functions as specifying means that specifies the first region 6a based on the distance information 52 and the shape information 62, and specifies the second region 7a as the region of the pseudo image 2A that includes none of the second foreground image 3a, the first region 6a, and the second background image 4a.
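A compact sketch of this splitting rule, assuming boolean masks for the warped foreground (second foreground image), the projected full-shape silhouette (shape region 6aA), and the background coverage are already available; the mask names are illustrative.

```python
import numpy as np

def split_occlusion(shape_region, fg_mask, bg_mask):
    """Split the occlusion area into the part belonging to the subject of
    interest (first region 6a analogue) and the part belonging to the
    background (second region 7a analogue)."""
    first_region = shape_region & ~fg_mask               # silhouette minus visible foreground
    second_region = ~(fg_mask | first_region | bg_mask)  # everything still uncovered
    return first_region, second_region
```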
  • The second generation unit 25 is supplied with the pseudo image 2A, the occlusion region 5a, the first region 6a, the second region 7a, and the identification information 53 from the first generation unit 24, the first specifying unit 22, the second specifying unit 23, and the identification unit 26.
  • According to the operation mode input from the operation unit 42, the second generation unit 25 generates the pseudo image 4A (FIG. 2) by generating the images of the first region 6a and the second region 7a from this information (step S100 in FIG. 19).
  • Depending on the operation mode input from the operation unit 42, the second generation unit 25 can also generate a pseudo image 4B (FIG. 2) by generating the image of the occlusion region 5a without using the information specifying the first region 6a and the second region 7a.
  • As described above, the occlusion region normally includes information on both the subject of interest and the background subjects.
  • The second generation unit 25 therefore generates the image of the occlusion region 5a, or the images of the first region 6a corresponding to the subject of interest and the second region 7a corresponding to the background subjects, based on the respective information on the subject of interest and each background subject.
  • As the method of generating the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the information on the subject of interest and each background subject, for example, a method of generating based on image information on the subject of interest and each background subject in an image such as the base image 1A or the pseudo image 2A is employed.
  • When the colors, patterns, and the like of the subject of interest and the background subjects are set in advance from the operation unit 42, the second generation unit 25 generates the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the set colors and patterns; alternatively, it generates these images based on the image information and characteristics of the subject of interest and the background subjects.
  • In either case, the second generation unit 25 generates the image of the occlusion region 5a, or of the first region 6a and the second region 7a, based on the respective information on the subject of interest and the background subjects.
  • As one such method, a method is adopted in which a region to be used for generating the image of each occlusion region is determined and the image of each occlusion region is generated based on the image of that region.
  • FIG. 14 is a diagram illustrating an example of a technique for generating the image of the occlusion region 5b based on the partial region 8g provided in the second foreground image 3b.
  • FIG. 15 is a diagram illustrating an example of a technique for generating the image of the occlusion region 5b based on the partial region 8h provided in the second background image 4b.
  • In the technique of FIG. 14, the image of the occlusion region 5b is generated by copying the texture of a partial region 9a of, for example, 3 × 3 pixels provided within the partial region 8g to a partial region 9b; the image of the partial region 9a may be copied not only to the partial region 9b but also to other partial regions within the occlusion region 5b.
  • In the technique of FIG. 15, the image of the occlusion region 5b is likewise generated by copying the texture of a partial region 9c to the partial region 9b.
  • Alternatively, a method may be employed that generates the image of an occlusion region using the pixel value of highest frequency (the mode) in a histogram of pixel values taken over a predetermined area in the non-occlusion region, or using the average pixel value of such an area.
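The sketch below shows both fill strategies mentioned here, texture-patch copying and mode filling, in simplified form; the patch size, the grayscale assumption, and all names are illustrative.

```python
import numpy as np

def fill_by_patch_copy(img, occlusion, src_yx, dst_yx, size=3):
    """Copy a size x size texture patch from a non-occluded source location
    into the occlusion area (cf. partial regions 9a -> 9b)."""
    sy, sx = src_yx
    dy, dx = dst_yx
    img[dy:dy + size, dx:dx + size] = img[sy:sy + size, sx:sx + size]
    occlusion[dy:dy + size, dx:dx + size] = False
    return img, occlusion

def fill_by_mode(img, occlusion, sample_region):
    """Fill all occluded pixels with the most frequent pixel value (mode)
    of a predetermined non-occluded sample region (grayscale image)."""
    values = img[sample_region].astype(int).ravel()
    mode = np.bincount(values).argmax()
    img[occlusion] = mode
    return img
```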
  • By switching the operation mode with the operation unit 42, the second generation unit 25 can also generate the image of the occlusion region 5a, or the images of the first region 6a and the second region 7a, based on boundary regions adjoining the occlusion region 5a in the images of the subject of interest and the background subjects.
  • The "boundary region" in the present invention is described below.
  • The occlusion regions according to the present embodiment, such as the occlusion region 5a, the first region 6a, and the second region 7a, are regions that arise when, in generating the pseudo image, the boundary portion between the image of the subject of interest and the image of a background subject separates on the image.
  • As a method of obtaining the boundary region for a three-dimensional subject such as a person, for example, a normal defined on the basis of the distance information 52 is obtained for each pixel of the target region, and each pixel whose normal differs in angle by no more than a predetermined angular range from the normals of the pixels at the boundary of the target region is determined to belong to the boundary region, a partial region of that target region.
  • For example, 45 degrees is adopted as the predetermined angular range; the smaller the angular range, the closer the maximum extent of the set boundary region is to the boundary of the target region.
  • In setting the boundary-region range based on the normal angle described above, the normal for a pixel of interest is obtained from the distance information 52 as follows: a plane is defined from the three-dimensional coordinate values, given by the distance information 52, of the pixel of interest and of two pixels adjacent to it in the horizontal and vertical directions, and the normal of that plane is acquired as the normal for the pixel of interest.
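A minimal sketch of this per-pixel normal computation from the 3D point map (the output of the triangulation step earlier); the cross-product construction and the names are illustrative, and boundary pixels and unmeasured points are not handled.

```python
import numpy as np

def pixel_normal(points, y, x):
    """Normal at pixel (y, x): plane through the 3D points of the pixel
    and its right and lower neighbors, normal via the cross product."""
    p0 = points[y, x]
    ex = points[y, x + 1] - p0   # edge to horizontal neighbor
    ey = points[y + 1, x] - p0   # edge to vertical neighbor
    n = np.cross(ex, ey)
    return n / np.linalg.norm(n)

def within_angle(n1, n2, max_deg=45.0):
    """True if two unit normals differ by at most max_deg degrees,
    the criterion used to grow the boundary region."""
    cos = np.clip(np.dot(n1, n2), -1.0, 1.0)
    return np.degrees(np.arccos(cos)) <= max_deg
```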
  • The setting of the boundary region based on the normal angle described above may also be adopted for the boundary region of the background portion.
  • Alternatively, the boundary region of the second background image 4a may be determined as a partial region of the area extending from the boundary with the occlusion region 5a into the second background image 4a by a number of pixels set as a predetermined ratio of the horizontal or vertical pixel count of the second background image 4a; for example, 1/5 is adopted as the predetermined ratio.
  • The maximum extent of the boundary region may thus be determined by a pixel count.
  • In short, the "boundary region" in the present application is a partial region of an area whose maximum extent is set from the boundary between two regions on the image toward the inside of one of the two regions, based on a predetermined condition defining a range of a geometric characteristic of the subject, such as the normal angle range described above, or on a predetermined mathematical condition defining a region range, such as a pixel count or size.
  • The "boundary region" is not limited to a partial region in contact with the boundary.
  • The second generation unit 25 is configured to be able to carry out several types of occlusion-region image generation using the boundary region, and these functions are switched by input from the operation unit 42.
  • 8 to 13, 16, and 17 are diagrams illustrating an example of a technique for generating an image of an occlusion area using a boundary area.
  • The image of the first area 6a is generated based on the boundary region set along the boundary 8a between the second foreground image 3a and the first area 6a, and the image of the second area 7a is generated based on the boundary region set in the second background image 4a along the boundary 8b with the second area 7a.
  • FIG. 10 shows an example in which an image of the occlusion area 5b is generated based on the partial area 8c, which is a boundary region in the second foreground image 3b.
  • FIG. 11 shows an example in which an image of the occlusion area 5b is generated based on the partial area 8d, a boundary region in the second foreground image 3b that is in contact with the boundary between the second foreground image 3b and the occlusion area 5b.
  • The partial area 8e is a boundary region that is not in contact with the boundary between the occlusion area 5b and the second background image 4b, whereas the partial area 8f is a boundary region that is in contact with that boundary.
  • The second generation unit 25 first generates an image of a first boundary region (not shown), set in the occlusion area 5b near the boundary with the second foreground image 3b, based on the partial region group 9d, which is a boundary region of the second foreground image 3b adjoining the occlusion area 5b; it likewise generates an image of a second boundary region, set in the occlusion area 5b near the boundary with the second background image 4b, based on the partial region group 9e, which is a boundary region of the second background image 4b adjoining the occlusion area 5b.
  • The second generation unit 25 then generates the image of the occlusion area 5b so that its pixel values change gradually from the first boundary region to the second boundary region.
  • The arrow 12a indicates the shift direction used when the second foreground image 3b is generated based on the correspondence 56, and the image of the occlusion area 5b is generated along this shift direction.
  • Alternatively, the image of the occlusion area 5b is generated so that its pixel values change gradually from the boundary region of the first area 6b to that of the second area 7b.
  • In either case, a pseudo image with little discomfort can be generated.
  • The second generation unit 25 generates the image of the occlusion area so that the pixel values of the partial region 10a change gradually from the partial region group 9f, the boundary region on the second area 7b side within the first area 6b, to the partial region group 9g, the boundary region on the first area 6b side within the second area 7b.
  • The arrow 12b indicates the shift direction used when the second foreground image 3b is generated based on the correspondence 56; the images of the first area 6b and the second area 7b are generated so that they change gradually along this shift direction.
  • The shift direction can also be set by the operator from the operation unit 42; the sketch below assumes a purely horizontal direction.
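  • The gradual change described above can be illustrated with the following sketch (assumed names, not the patent's implementation); it fills each row of the occlusion area by blending linearly between the mean colours of the two boundary regions:

```python
# Fill the occlusion area row by row, blending linearly from the mean
# colour of one boundary region to that of the other along the x axis.
import numpy as np

def fill_gradually(image, occ_mask, left_region, right_region):
    c0 = image[left_region].mean(axis=0)    # colour at the foreground side
    c1 = image[right_region].mean(axis=0)   # colour at the background side
    out = image.copy()
    for y in range(image.shape[0]):
        xs = np.flatnonzero(occ_mask[y])
        if xs.size < 2:
            continue
        t = (xs - xs[0]) / (xs[-1] - xs[0])  # 0 at one boundary, 1 at the other
        out[y, xs] = (1 - t)[:, None] * c0 + t[:, None] * c1
    return out
```

  • An operator-selected shift direction would correspond to choosing the axis along which the blend parameter `t` runs in this sketch.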
  • Each of the partial region groups 9d to 9g in FIGS. 16 and 17 is shown as an example of partial regions set discretely within the boundary region that contains it.
  • The second generation unit 25 generates an image of each occlusion area, such as the occlusion area 5a or the first area 6a and the second area 7a, by the methods described above.
  • Depending on the set operation mode, the second generation unit 25 may also apply a smoothing process to the generated image of each occlusion area, for example using a smoothing filter such as a 3 × 3 pixel Gaussian filter; a sketch follows.
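  • As an illustration of this optional smoothing step (the image and mask names are assumptions), the following sketch convolves with a 3 × 3 Gaussian kernel but overwrites only the occlusion-area pixels:

```python
# Smooth only the generated occlusion-area pixels with a 3 x 3 Gaussian.
import numpy as np

def smooth_occlusion(image, occ_mask):
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    padded = np.pad(image.astype(float), ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(3):                   # 3 x 3 convolution by shifting
        for dx in range(3):
            blurred += k[dy, dx] * padded[dy:dy + image.shape[0],
                                          dx:dx + image.shape[1]]
    out = image.astype(float)
    out[occ_mask] = blurred[occ_mask]     # leave all other pixels untouched
    return out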
  • The second generation unit 25 displays the pseudo image, in which the image of each occlusion area has been generated, on the display unit 43 (step S100 in FIG. 19), and ends the pseudo image generation process.
  • As described above, the occlusion area 5a on the pseudo image can be specified more accurately based on the distance information 52 of each subject obtained by actual measurement, and since the image of the specified occlusion area 5a is generated based on the subject of interest and each background subject, a pseudo image with little discomfort can be generated.
  • Furthermore, the first area 6a corresponding to the subject of interest and the second area 7a corresponding to each background subject are specified within the occlusion area 5a identified on the pseudo image; since the image of the first area 6a is generated based on information about the subject of interest and the image of the second area 7a is generated based on information about each background subject, the images of the first area 6a and the second area 7a can be made more similar to the actual image corresponding to the pseudo image, so that a pseudo image with less discomfort can be generated.
  • In addition, since the shape information 62 expressing the entire three-dimensional shape of the subject of interest is acquired and the first area 6a is specified based on it, the first area 6a, that is, the occlusion region corresponding to the subject of interest, can be specified more accurately, and a pseudo image with less discomfort can be generated.
  • Since the image of the first area 6a is generated based on the boundary region of the second foreground image 3a adjoining the first area 6a, and the image of the second area 7a is generated based on the boundary region of the second background image 4a adjoining the second area 7a, the images of the first area 6a and the second area 7a can more closely resemble the actual image corresponding to the pseudo image, making it possible to generate a pseudo image with less discomfort.
  • An image of the occlusion area may also be generated based on reference images taken in time sequence.
  • A method for generating an image of an occlusion area based on time-series images is described below.
  • FIG. 18 is a diagram illustrating an example of such a method.
  • The reference images 1B to 1E shown in FIG. 18 are a series of time-series images of the subject of interest photographed in time sequence.
  • The reference images 1B to 1E are displayed in shooting order along the time axis t1.
  • The first foreground images 1b to 1e are the images of the subject of interest in the reference images 1B to 1E, respectively; the subject of interest moves relative to the camera.
  • The partial areas 11b to 11e are set at the same position and extent in each of the reference images 1B to 1E; the positions need not be exactly the same and may be slightly shifted.
  • The whole of the partial area 11b and almost the whole of the partial area 11c lie in the background portions outside the first foreground images 1b and 1c, respectively, whereas the partial areas 11d and 11e lie within the first foreground images 1d and 1e, respectively.
  • Consequently, for an occlusion region in a pseudo image generated from one reference image as if photographed from a viewpoint different from that reference image, information on the subject and its background can be obtained from the other time-series images.
  • The partial areas 11b to 11e are not limited to parts of the reference images 1B to 1E; each may, for example, cover the whole of the corresponding reference image.
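  • The idea of FIG. 18 can be sketched as follows (an assumption-laden illustration, not the patent's algorithm): each occlusion pixel is taken from the earliest aligned frame in which that position shows the background rather than the subject of interest:

```python
# Fill occlusion pixels from time-series frames in which the background
# is visible at those positions. `frames` are aligned reference images;
# `foreground_masks` are per-frame masks of the subject of interest.
import numpy as np

def fill_from_time_series(occ_mask, frames, foreground_masks):
    h, w = occ_mask.shape
    filled = np.zeros((h, w, 3), dtype=frames[0].dtype)
    found = np.zeros((h, w), dtype=bool)
    for img, fg in zip(frames, foreground_masks):
        usable = occ_mask & ~fg & ~found   # background visible, not yet filled
        filled[usable] = img[usable]
        found |= usable
    return filled, found                   # `found` marks recovered pixels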
  • The pseudo image generation system according to this modification includes the stereo camera 300 of the pseudo image generation system 100A according to the embodiment, and a pseudo image generation apparatus having substantially the same configuration as the pseudo image generation apparatus 200A according to the embodiment.
  • The stereo camera 300 has a continuous shooting function for continuously photographing a subject in time sequence.
  • Using this continuous shooting function, the stereo camera 300 according to the modification generates a plurality of standard images and a plurality of reference images of each subject, and supplies the generated images to the pseudo image generation device according to the modification.
  • Except for its first acquisition unit and second generation unit, which correspond to the first acquisition unit 12 and the second generation unit 25 of the pseudo image generation device 200A according to the embodiment, each functional unit of this pseudo image generation device is the same as in the pseudo image generation apparatus 200A.
  • The first acquisition unit according to the modification acquires, for each subject photographed in time sequence by the stereo camera 300, a plurality of standard images and a plurality of reference images, which form time-series images.
  • The first acquisition unit supplies one standard image among the plurality of acquired standard images to the second acquisition unit 13, the correspondence relationship acquisition unit 15, the first generation unit 24, and the identification unit 26, and supplies the acquired plurality of reference images to the second generation unit 25.
  • The first acquisition unit also supplies to the second acquisition unit 13 the one reference image, among the plurality of acquired reference images, that was taken at the same time as the one standard image.
  • The reference image supplied to the second acquisition unit 13 may be any reference image in which each subject appears in the same state as in the one standard image; it is not limited to the reference image taken at the same time.
  • Supplied with the reference image, the second acquisition unit 13 acquires distance information based on actual measurement for at least each point of the subject of interest in the state in which the one standard image was acquired.
  • The second generation unit according to the modification sets, in each of the plurality of reference images supplied from the first acquisition unit, a predetermined region such as a region corresponding to the occlusion region of the pseudo image supplied from the first generation unit 24, generates an image to be used for the occlusion region by applying the method described with reference to FIG. 18 to those regions, and generates the image of the occlusion region using the generated image.
  • That is, the second generation unit generates the image of the occlusion area in the pseudo image based on a plurality of reference images forming a time series.
  • Since an image corresponding to the occlusion area is found by an image recognition process over the plurality of time-series images and used to generate the image of the occlusion area, the image of the occlusion area is based on actually photographed subject images, so that an image of the occlusion area closer to the real object can be generated.
  • One of the time-series images may differ significantly from the images taken before and after it.
  • In such a case, an abnormal image affected by noise or the like can easily be extracted based on the continuity of the subject's motion over the time-series images.
  • The motion of the subject differs depending on, for example, whether the subject is a person or a car; if the type of subject is known in advance, using these characteristics to predict the subject's motion over the time-series images makes it still easier to extract an abnormal image affected by noise or the like, as in the sketch below.
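  • A minimal sketch of such an extraction follows (the linear motion model, the pixel threshold, and the non-empty masks are assumptions): a frame whose subject centroid deviates too far from the midpoint of its neighbours' centroids is flagged as abnormal:

```python
# Flag frames that break the continuity of the subject's motion.
import numpy as np

def abnormal_frames(foreground_masks, max_jump=10.0):
    def centroid(m):                      # assumes each mask is non-empty
        ys, xs = np.nonzero(m)
        return np.array([ys.mean(), xs.mean()])
    cs = [centroid(m) for m in foreground_masks]
    bad = []
    for i in range(1, len(cs) - 1):
        expected = (cs[i - 1] + cs[i + 1]) / 2           # linear motion model
        if np.linalg.norm(cs[i] - expected) > max_jump:  # deviation in pixels
            bad.append(i)
    return bad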
  • As a result, the proportion of occlusion-area images that contain information on both the subject of interest and the background subject is high, and the rate at which a pseudo image with little discomfort can be generated is increased.
  • Even if the range of each occlusion region is specified using not only the correspondence based on the distance information 52 between the standard image 1A and the pseudo image, but also the correspondence based on the distance information 52 between the pseudo image and the reference image 1R photographed in synchronization with the standard image 1A, and the image of each occlusion region is generated using the standard image 1A and the reference image 1R, the usefulness of the present invention is not impaired.
  • In that case, the range of each occlusion region can be specified more accurately and narrowly, and more appropriate information on each subject in the standard image 1A and the reference image 1R can be used to generate the image of each occlusion region, so that a pseudo image with less discomfort can be generated.
  • The stereo camera may also be constituted by three or more cameras; for example, a set of stereo images or a series of time-series images of the subjects may be taken from positions arranged in a direction substantially perpendicular to the baseline direction, and a pseudo image may be generated using the various methods described above.
  • In that case, the occlusion area on the subject can be reduced, the occlusion area on the pseudo image can be specified as a narrower and more accurate range, information on each subject can be acquired more accurately based on the images photographed from multiple directions, and a pseudo image with less discomfort can be generated.

Abstract

The invention provides a technology capable of generating pseudo images, corresponding to photographs of subjects taken from virtual viewpoints, with reduced discomfort. A pseudo image generation device according to the invention comprises: first acquisition means for acquiring standard images in which the subjects are photographed from a first viewpoint; second acquisition means for acquiring distance information based at least on actual measurement of the subject of interest; discrimination means for distinguishing the subject of interest from each background subject; correspondence acquisition means for acquiring the correspondence between the standard images and the pseudo images; first generation means for generating the pseudo images based on the correspondence and the standard images, the pseudo images including second foreground images of the subject of interest and second background images of each background subject; identification means for identifying occlusion regions within the pseudo images; and second generation means for generating images of the occlusion regions based on the respective information on the subject of interest and on each background subject.
PCT/JP2010/072529 2010-02-02 2010-12-15 Device and method for generating simulated images WO2011096136A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010021136 2010-02-02
JP2010-021136 2010-02-02

Publications (1)

Publication Number Publication Date
WO2011096136A1 true WO2011096136A1 (fr) 2011-08-11

Family

ID=44355158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/072529 2010-02-02 2010-12-15 Device and method for generating simulated images WO2011096136A1 (fr)

Country Status (1)

Country Link
WO (1) WO2011096136A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07282259A (ja) * 1994-04-13 1995-10-27 Matsushita Electric Ind Co Ltd Parallax calculation device and image synthesis device
JP2003526829A (ja) * 1998-08-28 2003-09-09 Sarnoff Corporation Image processing method and apparatus
JP2009211335A (ja) * 2008-03-04 2009-09-17 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Virtual viewpoint image generation method, virtual viewpoint image generation device, virtual viewpoint image generation program, and computer-readable recording medium recording the program

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013042379A (ja) * 2011-08-17 2013-02-28 Ricoh Co Ltd Imaging device
WO2017087653A3 (fr) * 2015-11-19 2017-06-29 Kla-Tencor Corporation Generating simulated images from design information
US9965901B2 (en) 2015-11-19 2018-05-08 KLA-Tencor Corp. Generating simulated images from design information
WO2017094536A1 (fr) * 2015-12-01 2017-06-08 Sony Corporation Image processing device and method
US10846916B2 (en) 2015-12-01 2020-11-24 Sony Corporation Image processing apparatus and image processing method
WO2017205537A1 (fr) * 2016-05-25 2017-11-30 Kla-Tencor Corporation Generating simulated images from input images for semiconductor applications
US10395356B2 (en) 2016-05-25 2019-08-27 Kla-Tencor Corp. Generating simulated images from input images for semiconductor applications
WO2021149509A1 (fr) * 2020-01-23 2021-07-29 Sony Group Corporation Imaging device, imaging method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10845267; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10845267; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)