WO2015093129A1 - Information processing apparatus, information processing method, and program
- Publication number
- WO2015093129A1 (PCT/JP2014/076618)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- information processing
- unit
- processing apparatus
- background
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- This disclosure relates to an information processing apparatus, an information processing method, and a program.
- In recent years, a technology called augmented reality (AR), which superimposes additional information on the real space and presents it to a user, has attracted attention.
- Information presented to the user in AR technology is visualized using various forms of virtual objects such as text, icons or animations.
- The virtual object is typically arranged in the AR space according to the position of an associated real object, and operations such as movement, collision, and deformation can also be performed on it in the AR space.
- For example, Patent Document 1 discloses a technique for deforming a virtual object, or for moving a virtual object while performing collision determination against terrain represented by another virtual object.
- However, the technique of Patent Document 1 merely expresses changes in virtual objects, such as deformation of a virtual object and interactions between virtual objects.
- According to the present disclosure, there is provided an information processing apparatus including an identification unit that identifies an object included in a real space by distinguishing it from a background, based on three-dimensional data of the real space, in order to generate a virtual object image in which the state of the object is changed.
- According to the present disclosure, there is also provided an information processing method including distinguishing and identifying, by a processor, an object included in a real space from a background, based on three-dimensional data of the real space, in order to generate a virtual object image in which the state of the object is changed.
- Further, according to the present disclosure, there is provided a program for causing a computer to function as an identification unit that identifies an object included in a real space by distinguishing it from a background, based on three-dimensional data of the real space, in order to generate a virtual object image in which the state of the object is changed.
- FIG. 5 is a diagram for describing an overview of an AR display process according to an embodiment of the present disclosure.
- FIG. 2 is a block diagram illustrating an example of the configuration of the smartphone according to the present embodiment. FIG. 3 is a diagram for describing the identification process of a target object. FIG. 4 is an explanatory diagram for describing the generation ...
- FIG. 1 is a diagram for describing an overview of an AR display process according to an embodiment of the present disclosure.
- As illustrated in FIG. 1, the user holds the smartphone 1 in his hand, looks at the display unit 7, and points the imaging unit arranged on the back side of the display unit 7 toward a desk.
- On the desk, a telephone 10, a gum tape 11, a helmet 12, a beverage can 13, and a spray can 14 are placed.
- The smartphone 1 according to the present embodiment can display these real-space objects placed on the desk in a changed state, as illustrated in FIG. 1.
- For example, the smartphone 1 displays a state in which a ball 120, which is a virtual object, is launched from the smartphone 1, collides with objects on the desk, and acts on the real space. Specifically, the smartphone 1 superimposes AR images 111 and 112, showing the bounced gum tape 11 and the destroyed helmet 12, on a captured image obtained by capturing the real space in real time. Further, in the areas where the gum tape 11 and the helmet 12 were originally located, the smartphone 1 superimposes AR images 121 and 122 showing the background on the portions newly exposed by the movement and destruction. However, the ball 120 does not collide with the telephone 10, the beverage can 13, or the spray can 14, even though they are real objects.
- Accordingly, the smartphone 1 displays the telephone 10, the beverage can 13, and the spray can 14 as they are in the through image.
- In the following description, an object that is subject to a state change such as being bounced off or destroyed is also referred to as a target object.
- An object other than the target object, such as a desk or a wall, is also referred to as a background object.
- A captured image obtained by capturing the real space in real time is also referred to as a through image.
- First, the smartphone 1 acquires three-dimensional data of the real space.
- The three-dimensional data consists of position information on the vertices of objects in the real space, the line segments connecting those vertices, and the surfaces surrounded by the line segments, and represents the three-dimensional shape (surface) of the real space.
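- Such three-dimensional data can be pictured as a simple mesh structure. The following is a minimal sketch in Python; the class name, field layout, and the triangle-based surface representation are illustrative assumptions, not the format used by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class Mesh3D:
    """Illustrative container for the real-space surface: vertex positions,
    line segments (edges) connecting them, and surfaces bounded by them."""
    vertices: np.ndarray                                              # (N, 3) vertex positions in real space
    edges: List[Tuple[int, int]] = field(default_factory=list)        # index pairs connecting vertices
    faces: List[Tuple[int, int, int]] = field(default_factory=list)   # triangles delimiting surfaces

    def face_normal(self, f: int) -> np.ndarray:
        """Unit normal of triangle f (useful later, e.g. when deciding whether a
        surface faces the imaging unit before recording luminance values)."""
        a, b, c = (self.vertices[i] for i in self.faces[f])
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)
```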
- Based on the three-dimensional data, the smartphone 1 can display an AR image (object image) at an appropriate position and express interactions such as a collision between a virtual object and a real object.
- As a preliminary preparation, the smartphone 1 collects images that serve as the source of the AR image related to the background object and generates a texture (surface image) of the surface of the background object. Specifically, the smartphone 1 first collects and buffers captured images in which the real-space region corresponding to the background-object portion of the surface of the three-dimensional data is captured. For example, in the example illustrated in FIG. 1, the background on the back side of the gum tape 11 is hidden and is not captured in an image taken only from the front of the gum tape 11. For this reason, the smartphone 1 buffers a captured image taken from the back side of the gum tape 11 in addition to the captured image taken from its front. The smartphone 1 then generates a virtual texture (background texture) of the surface of the background object from the buffered captured images.
- The smartphone 1 also generates a texture that serves as the source of the AR image related to the target object. Specifically, the smartphone 1 first generates a virtual object (change object) in which the state of the target object is changed and calculates its current position. For example, regarding the example illustrated in FIG. 1, the smartphone 1 generates three-dimensional data of the fragments 112 of the helmet 12. The smartphone 1 then generates a texture (change texture) to be displayed on the surface of the change object.
- Then, the smartphone 1 displays the AR image while dynamically masking the through image in accordance with state changes of the object, such as destruction or movement. Specifically, the smartphone 1 first calculates the state change of the change object generated as the preliminary preparation, and dynamically determines the current position of the change object in the through image. For example, in the example shown in FIG. 1, the current position of the bounced gum tape 111 and the current positions of the fragments 112 of the helmet 12 are calculated. Next, the smartphone 1 generates a drawing mask in which the area of the through image that is to be displayed differently from the real space is made transparent. For example, for the gum tape 11 shown in FIG. 1, a drawing mask is generated in which the background newly exposed when the gum tape 11 is bounced off and the area where the bounced gum tape 11 is displayed are transparent. The smartphone 1 then displays the corresponding textures in the transparent regions of the drawing mask.
- the smartphone 1 displays the background texture generated in advance on the surface of the background object using the drawing mask. For example, in the example shown in FIG. 1, the region 121 corresponding to the background texture is displayed in the background region that is newly exposed when the gum tape 11 is bounced off. In this way, the smartphone 1 can display a natural background in a region that is newly exposed due to a change in the state of the target object.
- the smartphone 1 displays the changed texture generated in advance on the surface of the changed object.
- Note that, for the portion of the surface of the change object that was originally exposed, the smartphone 1 pastes the corresponding region of the through image, and pastes the change texture to the portion that is newly exposed.
- For example, the smartphone 1 displays the bounced gum tape 111 with the image of the gum tape's outer periphery in the through image pasted on its outer peripheral portion.
- Further, the smartphone 1 displays the change texture generated in advance on the fracture cross sections of the fragments 112. In this way, the smartphone 1 can express the target object whose state has changed more naturally.
- the information processing apparatus may be an HMD (Head Mounted Display), a digital camera, a digital video camera, a tablet terminal, a mobile phone terminal, or the like.
- For example, the HMD may display an AR image superimposed on a through image captured by a camera, or may display an AR image on a display unit formed in a transparent or translucent through state.
- FIG. 2 is a block diagram illustrating an example of the configuration of the smartphone 1 according to the present embodiment.
- the smartphone 1 includes an imaging unit 2, a posture information acquisition unit 3, a three-dimensional data acquisition unit 4, a control unit 5, a display control unit 6, and a display unit 7.
- The imaging unit 2 includes a lens system composed of an imaging lens, a diaphragm, a zoom lens, and a focus lens, a drive system that causes the lens system to perform focus and zoom operations, and a solid-state imaging device array that photoelectrically converts the imaging light obtained by the lens system to generate an imaging signal.
- the solid-state imaging device array may be realized by, for example, a CCD (Charge Coupled Device) sensor array or a CMOS (Complementary Metal Oxide Semiconductor) sensor array.
- The imaging unit 2 may be a monocular camera or a compound-eye (stereo) camera. A captured image captured by the monocular or compound-eye camera may be used for texture generation by the image generation unit 53 described later, or for dynamic generation of three-dimensional data by the three-dimensional data acquisition unit 4.
- the imaging unit 2 has a function of imaging a real space and acquiring an image used for generating a texture by a change object generating unit 52 described later.
- the imaging unit 2 has a function of acquiring a through image obtained by imaging a real space in real time.
- the imaging unit 2 outputs the captured image to the control unit 5 and the display control unit 6.
- the posture information acquisition unit 3 has a function of acquiring posture information indicating the position and angle (posture) of the smartphone 1.
- the posture information acquisition unit 3 acquires posture information of the imaging unit 2.
- One such technique is SLAM (Simultaneous Localization And Mapping), which can simultaneously estimate the position and posture of a camera and the positions of feature points appearing in the camera image. The basic principle of SLAM using a monocular camera is described in Andrew J. Davison, "Real-Time Simultaneous Localization and Mapping with a Single Camera", Proceedings of the 9th IEEE International Conference on Computer Vision, Volume 2, 2003, pp. 1403-1410.
- SLAM that visually estimates the position using camera images is also referred to in particular as VSLAM (visual SLAM).
- When SLAM is used, for example, the posture information acquisition unit 3 matches the environment map against the three-dimensional positions of feature points belonging to an object, and can thereby align the polygon information constituting the shape of that object with the real object with high accuracy.
- the posture information acquisition unit 3 acquires the posture information of the imaging unit 2 based on the result of this alignment.
- Alternatively, the posture information acquisition unit 3 may acquire the posture information of the imaging unit 2 by a posture estimation technique using markers, or by techniques such as DTAM (Dense Tracking and Mapping in Real-Time) or Kinect Fusion.
- the posture information acquisition unit 3 may acquire posture information based on information detected by an acceleration sensor, an angular velocity (gyro) sensor, or a geomagnetic sensor.
- the posture information acquisition unit 3 outputs the acquired posture information to the control unit 5 and the display control unit 6.
- the three-dimensional data acquisition unit 4 has a function of acquiring three-dimensional data in real space.
- The three-dimensional data can be created by, for example, a monocular or compound-eye imaging sensor, or by a shape sensor using infrared rays.
- The three-dimensional data acquisition unit 4 may generate the three-dimensional data itself using the imaging unit 2 or an infrared shape sensor (not shown) together with the posture information acquisition unit 3, or may acquire three-dimensional data generated in advance by another terminal from the outside.
- the three-dimensional data is realized as CAD (Computer Assisted Drafting) data, for example.
- the three-dimensional data acquisition unit 4 outputs the acquired three-dimensional data to the control unit 5 and the display control unit 6.
- The control unit 5 functions as an arithmetic processing device and a control device, and controls the overall operation of the smartphone 1 according to various programs.
- the control unit 5 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor, for example.
- the control unit 5 may include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
- the control unit 5 functions as an identification unit 51, a change object generation unit 52, and an image generation unit 53.
- The identification unit 51 has a function of identifying the target object included in the real space by distinguishing it from the background object, based on the three-dimensional data of the real space acquired by the three-dimensional data acquisition unit 4. For example, on the assumption that the three-dimensional data represents an artifact (for example, an indoor structure), the identification unit 51 identifies the floor (dominant plane) and the wall surfaces from the three-dimensional data. The identification unit 51 then identifies, as a target object, any three-dimensional shape other than the floor and wall surfaces that protrudes above the floor surface and has a volume equal to or greater than a threshold value.
- For a missing portion of the target object, for example a portion that was originally in contact with the floor or a wall, the identification unit 51 interpolates the defect by extending the vertices around the missing part, connecting them with line segments, or filling in the surface.
- Alternatively, the identification unit 51 may interpolate the defect using an existing algorithm. Since this ensures the manifoldness of the target object, its volume can be calculated, and the identification unit 51 can identify the target object appropriately.
- the target object identified by the identifying unit 51 is a change object generation target by the change object generation unit 52 described later.
- Among the three-dimensional shapes indicated by the three-dimensional data, the identification unit 51 identifies everything other than the target objects as background objects.
- The background objects identified by the identification unit 51 as distinct from the target objects are excluded from change-object generation by the change object generation unit 52 described later.
- the identification unit 51 may interpolate the loss of the background object by the same method as that for the target object.
- the target object identifying process by the identifying unit 51 will be described with reference to FIG.
- FIG. 3 is a diagram for explaining target object identification processing.
- the identification unit 51 performs plane extraction from the three-dimensional data 200 and identifies the plane 201 having the largest area as the floor surface.
- the identification unit 51 may extract the floor surface 201 based on the direction of gravity detected by the acceleration sensor of the terminal that acquired the three-dimensional data 200.
- When the three-dimensional data 200 is generated by the three-dimensional data acquisition unit 4 itself, the floor surface 201 may be extracted based on the direction of gravity detected by an acceleration sensor at the time of generation. Using the direction of gravity improves the accuracy of floor extraction.
- Note that the identification unit 51 may extract the floor surface using a relaxed gradient threshold (Gradient Threshold).
- Next, the identification unit 51 identifies the wall surfaces 202 from among the shapes standing on the floor surface. For example, the identification unit 51 defines an axis-aligned bounding box and identifies the shape groups perpendicular to the floor surface at the edges of the three-dimensional shape indicated by the three-dimensional data as the wall surfaces 202.
- In this way, the identification unit 51 identifies the target objects 210 separately from the other, background objects 220.
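- As a rough illustration of this identification step, the sketch below classifies connected shapes relative to an already extracted floor plane: a shape that protrudes above the floor with at least a threshold volume and is not a wall becomes a target object, everything else is background. The component representation, the field names, and the threshold value are assumptions made for the example only.

```python
def identify_objects(components, floor_height, volume_threshold=1e-3):
    """components: list of dicts with 'volume' (m^3), 'base_height' (m) and 'is_wall' (bool),
    each describing one connected shape extracted from the three-dimensional data.
    Returns (target_objects, background_objects)."""
    targets, background = [], []
    for comp in components:
        protrudes = comp["base_height"] >= floor_height - 1e-2   # sits on or above the floor
        large_enough = comp["volume"] >= volume_threshold
        if protrudes and large_enough and not comp["is_wall"]:
            targets.append(comp)        # candidate for change-object generation
        else:
            background.append(comp)     # excluded from change-object generation
    return targets, background
```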
- the change object generation unit 52 has a function of generating a change object in which the state of the target object is changed for the target object identified by the identification unit 51. Specifically, the change object generation unit 52 generates data including position information of the vertices of the change object, line segments connecting the vertices, and a surface surrounded by the line segments.
- the change object may be, for example, a fragment in which the target object is destroyed, or may be a target object on which irregularities are formed.
- For example, the change object generation unit 52 generates a change object representing the fragments after destruction by applying an algorithm such as Voronoi Fracture or Voronoi Shatter to the target object.
- For example, the change object generation unit 52 generates the change object 230 representing the fragments by dividing the target object 210 identified by the identification unit 51 into a plurality of pieces along cross sections formed by the destruction. At this time, the change object generation unit 52 sets flag information indicating whether or not each surface of a fragment was a surface of the target object before the destruction. This flag information is referred to when the fragments are drawn by the display control unit 6 described later.
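- A sketch of this flagging step is shown below. The voronoi_fracture() callable stands in for a Voronoi Fracture / Voronoi Shatter implementation and is purely hypothetical; the point illustrated is recording, per fragment face, whether it was part of the original surface (later drawn from the through image) or a new fracture cross-section (later drawn with the change texture).

```python
def generate_change_object(target_mesh, seed_points, voronoi_fracture):
    """target_mesh: mesh of the identified target object, carrying a set
    target_mesh.original_face_ids of its pre-destruction surface faces.
    voronoi_fracture: hypothetical helper returning a list of fragment meshes,
    each with face_ids referring back to the source mesh where applicable."""
    fragments = voronoi_fracture(target_mesh, seed_points)
    for frag in fragments:
        # Flag per face: True -> was a visible surface before destruction,
        # False -> newly created fracture cross-section.
        frag.was_original_surface = [
            fid in target_mesh.original_face_ids for fid in frag.face_ids
        ]
    return fragments
```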
- the image generation unit 53 has a function of generating a texture that is the basis of an AR image in advance as preparation for AR display. Specifically, the image generation unit 53 generates a virtual object image of the background object identified by the identification unit 51, that is, a texture (background texture) to be displayed on the surface of the background object that is newly exposed due to a state change.
- The image generation unit 53 also generates a virtual object image of the target object whose state has been changed, that is, a texture (change texture) to be displayed on the surface of the change object generated by the change object generation unit 52.
- the image generation unit 53 generates a texture based on a captured image obtained by capturing the real space by the imaging unit 2.
- The image generation unit 53 generates the background texture (first surface image) to be displayed on the surface of the background object by synthesizing the exposed background portions in one or more captured images of the real space.
- Specifically, the image generation unit 53 collects images corresponding to the background object from one or more captured images, using as the minimum unit a region delimited by the line segments connecting vertices in the three-dimensional data (hereinafter also referred to as a polygon), and synthesizes them into a single background texture.
- FIG. 4 is an explanatory diagram for explaining background texture generation processing.
- FIG. 4 shows the correspondence between a captured image of the real space and the three-dimensional data. In FIG. 4, the surfaces of the target objects are shown as polygons surrounded by solid lines, and the surfaces of the background objects are shown as polygons surrounded by broken lines.
- the image generation unit 53 generates a background texture by collecting and synthesizing images corresponding to the polygons of the background object indicated by broken lines in FIG. 4 from one or more captured images.
- The image generation unit 53 buffers a plurality of captured images captured by the imaging unit 2 and synthesizes the background texture from one or more of the buffered images. During buffering, the image generation unit 53 preferentially buffers captured images with high independence, where high independence means that there is little overlap in the portions where the background is hidden by the target object. As a result, the image generation unit 53 can generate a background texture covering more polygons from fewer images, which reduces the number of seams at synthesis and realizes a more natural AR display. In addition, the image generation unit 53 preferentially buffers the most recently captured images.
- Because the background texture is generated from more recently captured images, the difference in imaging time between the buffer images that are the source of the AR image and the through image is reduced, and a more natural AR display is realized.
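- The buffering policy can be sketched as a simple greedy selection over the background vertices each frame makes visible, as below. The frame representation and the fixed buffer size are assumptions for illustration; the disclosure only specifies the preference for independent and recent captures.

```python
def select_buffer(frames, max_frames=8):
    """frames: list of (timestamp, visible_vertex_ids) for candidate captures.
    Keeps frames that reveal background vertices not yet covered by the buffer,
    visiting the most recent captures first so that recency breaks ties."""
    chosen, covered = [], set()
    for timestamp, visible in sorted(frames, key=lambda f: f[0], reverse=True):
        newly_covered = set(visible) - covered
        if newly_covered and len(chosen) < max_frames:
            chosen.append((timestamp, visible))   # high independence: adds uncovered vertices
            covered |= set(visible)
    return chosen
```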
- the image generation unit 53 is described as generating a background texture using a plurality of buffer images, but the technology according to the present disclosure is not limited to this.
- the image generation unit 53 may generate a change texture to be described later using a plurality of buffer images.
- the image generation unit 53 may generate an arbitrary texture related to the real object using a plurality of buffer images.
- the image generation unit 53 determines the independence for each vertex. Specifically, the image generation unit 53 performs buffering so that the vertices of as many background objects as possible are visible. For example, the image generation unit 53 determines the invisible vertices and the visible vertices of the background object, and performs buffering so that all the vertices are visible vertices in any captured image. Since the visibility is determined for each vertex, the number of determinations is reduced and the amount of calculation is reduced as compared to the case of determining for each pixel.
- the visible vertex is a vertex where a corresponding position in the captured image is exposed among the vertices of the three-dimensional data.
- An invisible vertex is a vertex whose corresponding position in a captured image is hidden among the vertices of three-dimensional data.
- For example, the vertex 310 on the front of the helmet 12 is a visible vertex, while the hidden vertex 320 on the back side of the helmet 12 is an invisible vertex.
- the image generation unit 53 may adopt the determination of visibility for each pixel.
- The image generation unit 53 uses the posture information acquired by the posture information acquisition unit 3 and the three-dimensional data to calculate from which position and at which angle in the real space the imaging unit 2 captured the image, determines the visibility of each vertex in the captured image accordingly, and decides whether or not to buffer the image.
- FIG. 5 is an explanatory diagram for describing the background texture generation processing. As illustrated in FIG. 5, the image generation unit 53 dynamically calculates the vertex positions of the target object using the posture information and the three-dimensional data, and generates a mask 410 in which the target-object region of the captured image 400 is transparent. The image generation unit 53 then determines the vertices included in the transparent region of the mask 410 as invisible vertices and the other vertices as visible vertices.
- Note that the image generation unit 53 may determine vertex visibility using a mask 420 obtained by two-dimensionally dilating the transparent region of the mask 410, instead of the mask 410 itself. In this case, since a wider range of vertices is determined as invisible, the posture estimation error of the posture information acquisition unit 3 is absorbed, and the erroneous determination of an invisible vertex as a visible vertex is avoided.
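- The per-vertex visibility test can be sketched as follows: each background vertex is projected into the captured image using the estimated camera pose, and vertices falling inside the (dilated) target-object mask are treated as invisible. The pinhole projection and the OpenCV-based dilation are illustrative assumptions.

```python
import numpy as np
import cv2

def visible_vertex_ids(vertices, K, R, t, target_mask, dilate_px=10):
    """vertices: (N, 3) world-space vertex positions of the background object.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    target_mask: uint8 image, nonzero where the target object appears."""
    dilated = cv2.dilate(target_mask, np.ones((dilate_px, dilate_px), np.uint8))
    h, w = dilated.shape
    cam = (R @ vertices.T + t.reshape(3, 1)).T        # (N, 3) camera coordinates
    visible = []
    for i, (x, y, z) in enumerate(cam):
        if z <= 0:
            continue                                   # behind the camera
        u, v, _ = K @ np.array([x / z, y / z, 1.0])    # pixel coordinates
        if 0 <= u < w and 0 <= v < h and dilated[int(v), int(u)] == 0:
            visible.append(i)                          # not hidden by the (dilated) target region
    return visible
```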
- the image generation unit 53 generates one background texture by synthesizing one or more buffer images as described above.
- the image generation unit 53 preferentially combines captured images with a large visible area. More specifically, the image generation unit 53 synthesizes the buffer image so that the region synthesized from the captured image having a large visible area occupies more regions in the background texture. As a result, a background texture is generated from a smaller number of captured images, and a natural AR display with a more unified feeling is realized.
- the image generation unit 53 may determine that the visible area is large when the number of visible vertices is large, and may determine that the visible area is small when the number of visible vertices is small.
- The image generation unit 53 may interpolate missing portions of the background texture using an image interpolation algorithm such as inpainting.
- In the following, a region synthesized from the buffer images is referred to as a visible background texture, and a region interpolated by an image interpolation algorithm such as inpainting is also referred to as an invisible background texture.
- the smartphone 1 may provide a UI (User Interface) for buffering captured images with high independence.
- For example, the smartphone 1 displays a UI that guides the imaging posture of the imaging unit 2 so that the entire region corresponding to the background of the three-dimensional surface represented by the three-dimensional data is exposed in at least one of the buffered captured images.
- the smartphone 1 guides the user so that all the vertices of the background of the three-dimensional data become visible vertices in any of the buffered captured images.
- FIG. 6 is a diagram illustrating an example of a UI for generating the background texture. The UI 500 shown in FIG. 6 includes a display 512 indicating that the back surface of the helmet 12 and the surface in contact with the floor are not yet included in the buffered captured images.
- the UI 500 has a display 520 that guides the user to take an image from the back side of the helmet 12.
- the smartphone 1 may advance the buffering while giving the user a game feeling by displaying the display 530 indicating the collection rate. Such UI display is performed by the display control unit 6 described later.
- During buffering or texture generation, the image generation unit 53 records the luminance values in the vicinity of the positions corresponding to those vertices of the three-dimensional data that belong to the background exposed in the buffered captured images.
- the display control unit 6 to be described later compares the luminance value so as to match the through image and corrects the luminance value of the texture, so that a more natural AR display is realized.
- the image generation unit 53 may record a plurality of luminance values associated with vertices as a luminance value distribution.
- In the present embodiment, the image generation unit 53 records pixel values only in the vicinity of the positions where the background texture will be displayed, that is, in the vicinity of the target object. This reduces the number of luminance comparisons that the display control unit 6 described later performs to achieve consistency, and improves the accuracy of luminance correction. Of course, when more computing power is available, pixel values may be recorded and luminance values compared for all visible vertices. Note that the image generation unit 53 may determine whether to record a luminance value based on the normal vector at the position corresponding to a vertex in the captured image.
- For example, by recording the luminance value only when the normal vector faces the imaging unit 2, the image generation unit 53 can avoid recording luminance values at positions that may act as disturbances, such as surfaces facing sideways. It is assumed that the image generation unit 53 records pixel values of vertices in a way that eliminates the influence of outliers.
- the image generation unit 53 generates a change texture to be displayed on the surface of the change object generated by the change object generation unit 52.
- As textures displayed for the change object, there are, for example, two types: the texture of an originally invisible portion, such as a fragment cross-section, and the texture of a portion that was originally exposed and visible.
- The image generation unit 53 generates the former as the change texture (second surface image); for the latter, the display control unit 6 described later displays the corresponding part of the through image.
- the image generation unit 53 generates the change texture by estimating the change texture based on the portion corresponding to the exposed surface of the target object in the captured image. For example, the image generation unit 53 may determine a single color based on the average value of the pixel values of the exposed portions in the captured image of the target object, as the generation of the change texture. The polygon in the invisible region of the change object is filled with this single color by the display control unit 6 described later.
- Alternatively, the image generation unit 53 may generate the change texture by an image interpolation algorithm such as inpainting. In the example shown in FIG. 4, the surface of the helmet 12 is visible and the inside of the helmet 12 is invisible. Therefore, when the AR image 112 showing the destroyed helmet 12 in FIG. 1 is displayed, the image generation unit 53 determines, for example, a single color by averaging the pixel values of the surface of the helmet 12 and uses it as the change texture for the fracture cross-sections of the fragments, which were previously invisible.
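- A minimal sketch of this single-color estimation, assuming the exposed surface of the target object is given as a boolean mask over a captured image:

```python
import numpy as np

def estimate_change_color(image, exposed_mask):
    """image: (H, W, 3) array; exposed_mask: boolean (H, W), True on the exposed
    surface of the target object. Returns the average colour used to fill the
    invisible polygons (e.g. fracture cross-sections) of the change object."""
    pixels = image[exposed_mask]
    return pixels.mean(axis=0) if pixels.size else np.zeros(3)
```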
- the background texture and the change texture are each generated based on the images captured by the imaging unit 2, but the technology according to the present disclosure is not limited thereto.
- the image generation unit 53 may generate a background texture and a change texture based on a captured image captured in advance by an external imaging device.
- the imaging unit that captures the captured image that is the source of the background texture and the change texture is the same as the imaging unit that captures the through image on which the AR image is superimposed.
- the background texture and the change texture may be generated without being based on the captured image.
- the image generation unit 53 may generate a texture filled with an arbitrary single color, or may generate a texture that expresses a sense of perspective based on a sensing result by a depth sensor (not shown).
- The display control unit 6 has a function of controlling the display unit 7 so as to perform AR display using the textures generated by the image generation unit 53 and the through image. Specifically, the display control unit 6 first dynamically determines the vertex positions of the background object and the change object in the through image, using the posture information acquired by the posture information acquisition unit 3 and the three-dimensional data acquired by the three-dimensional data acquisition unit 4. Then, the display control unit 6 superimposes the AR image on the through image while dynamically masking the through image according to the state change of the change object. The control by the display control unit 6 is roughly divided into calculation of the state of the change object, generation of a dynamic drawing mask, display of the background texture, and display of textures on the change object.
- the display control unit 6 calculates the state of the change object. Specifically, first, the display control unit 6 calculates the movement of the change object. Then, the display control unit 6 dynamically determines the vertex positions of the background object and the change object in the through image using the posture information and the three-dimensional data. For example, when the state change is destruction, the display control unit 6 physically calculates the position and orientation of the fragments and determines the vertex position of the background object and the vertex position of each fragment in the through image.
- Next, as the state of the target object changes, the display control unit 6 dynamically generates a drawing mask in which the area of the through image to be displayed differently from the real space (the AR display area) is made transparent. For example, in the example shown in FIG. 1, since the helmet 12 is moved to the back by the impact of the ball 120, a drawing mask is generated in which the background newly exposed by the movement and the areas where the fragments of the helmet 12 are located are transparent. Note that the display control unit 6 may two-dimensionally dilate the transparent area or apply a Gaussian blur to the drawing mask. As a result, the seam between the original background and the texture drawn using the drawing mask is displayed more naturally.
- FIG. 7 is a diagram for explaining a dynamic drawing mask generation process that accompanies a change in the state of the target object.
- Reference numerals 610 and 620 in FIG. 7 indicate images displayed on the display unit 7, and reference numerals 612 and 622 indicate drawing masks dynamically generated by the display control unit 6.
- Before the destruction, the display control unit 6 displays the through image as it is without performing AR display, and therefore generates a drawing mask having no transparent region, as indicated by reference numeral 612.
- After the destruction, the display control unit 6 generates a drawing mask in which the areas where the AR image is displayed are transparent, as indicated by reference numeral 622.
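- The mask generation and softening described here could look like the following sketch, where the regions to be drawn differently from the real space are rasterised and then dilated and blurred; the OpenCV calls and parameter values are illustrative assumptions.

```python
import numpy as np
import cv2

def make_drawing_mask(shape_hw, regions, dilate_px=6, blur_sigma=3.0):
    """regions: list of polygons (Nx2 pixel coordinates) covering the AR display
    areas (newly exposed background plus current fragment positions)."""
    mask = np.zeros(shape_hw, np.uint8)
    for poly in regions:
        cv2.fillPoly(mask, [np.asarray(poly, np.int32)], 255)
    mask = cv2.dilate(mask, np.ones((dilate_px, dilate_px), np.uint8))
    mask = cv2.GaussianBlur(mask, (0, 0), blur_sigma)   # soft edges hide the seams
    return mask.astype(np.float32) / 255.0              # 1.0 = draw the AR texture here
```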
- The display control unit 6 displays an AR image in which the background texture is pasted on the polygons of the background object that were hidden by the target object and are newly exposed when the state of the target object changes. Specifically, the display control unit 6 dynamically determines the vertex positions of the background object in the through image using the posture information and the three-dimensional data, and superimposes on the through image an AR image in which the corresponding region of the background texture is pasted on the newly exposed polygons. At this time, the display control unit 6 may correct the luminance values and pixel values of the background texture.
- For example, the display control unit 6 corrects the luminance values of the background texture so that they approach those of the through image, based on a comparison between the luminance value distribution recorded during buffering and the luminance value distribution at the corresponding positions in the through image. As a result, the background texture is more consistent with the through image, and a more natural AR display is realized.
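- One simple way to realise such a correction is a gain/offset fit between the recorded luminance samples and the corresponding through-image samples, as sketched below; the linear model is an assumption, since the disclosure only states that the texture luminance is corrected to match the through image.

```python
import numpy as np

def correct_luminance(texture_y, recorded_y, through_y):
    """texture_y: luminance channel of the background texture;
    recorded_y, through_y: luminance samples near the same background vertices,
    taken at buffering time and from the current through image respectively."""
    gain = (np.std(through_y) + 1e-6) / (np.std(recorded_y) + 1e-6)
    offset = np.mean(through_y) - gain * np.mean(recorded_y)
    return np.clip(gain * texture_y + offset, 0, 255)
```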
- the background texture display processing will be specifically described below with reference to FIGS.
- FIGS. 8 to 10 are explanatory diagrams for describing the background texture display processing. As an example, FIGS. 8 to 10 show display examples in which all the target objects are deleted from the through image.
- FIG. 8 shows an example in which the background texture of the visible region is displayed on the background hidden by the target objects.
- FIG. 9 shows an example in which the background texture of the invisible region is displayed in addition to the background texture of the visible region on the background hidden by the target object.
- FIG. 10 shows a display example in which the display of the three-dimensional data is deleted from FIG.
- the display control unit 6 displays the texture by pasting the texture on the surface of the changed object using the dynamically determined vertex position of the changed object.
- As described above, there are two types of textures displayed for the change object: the texture of the originally invisible portion and the texture of the originally visible portion.
- the display control unit 6 displays an AR image in which the change texture generated by the image generation unit 53 is pasted on a polygon that is newly exposed due to a state change among the change objects.
- the display control unit 6 pastes a texture filled with a single color determined by the image generation unit 53 to a polygon corresponding to a fractured cross-section among polygons on each surface of a fragment.
- the display control unit 6 displays an AR image in which the image of the target object exposed in the through image is pasted to the corresponding polygon of the change object.
- For example, the display control unit 6 pastes the image of the surface of the helmet 12 exposed in the through image onto the polygons corresponding to the surface of the helmet 12 among the polygons of each fragment.
- the display control unit 6 may determine whether to display the changed texture or a part of the through image with reference to the flag information set by the changed object generation unit 52.
- Further, the display control unit 6 performs luminance correction according to the difference between the original position of the target object and the dynamically determined position of the change object. For example, the display control unit 6 estimates the light source position based on the luminance distribution of the target object in the through image, calculates how light from the estimated light source strikes the change object, and corrects and displays the luminance accordingly. In this way, the display control unit 6 can express natural lighting that follows state changes of the target object, such as destruction or movement.
- FIG. 11 is a diagram illustrating an example of a brightness correction process for a change texture. As shown in FIG. 11, the broken piece 710 of the telephone 10 has changed in position and angle due to the destruction.
- the display control unit 6 expresses a shadow by correcting the luminance value of the receiver outer portion 712 of the fragments 710.
- the display control unit 6 can express a more natural destruction of the target object by generating a shadow from the moment of destruction.
- the light source position may be in a predetermined position in advance.
- The display unit 7 synthesizes and displays the through image captured by the imaging unit 2 and the AR image generated by the image generation unit 53, based on control by the display control unit 6.
- the display unit 7 is realized by, for example, an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
- Alternatively, the display unit 7 may be formed in a transparent or translucent through state and display an AR image superimposed on the real space seen through the display unit 7.
- the display unit 7 displays a UI for buffering a highly independent captured image described with reference to FIG. 6 based on control by the display control unit 6.
- FIG. 12 is a flowchart illustrating an example of the flow of the change object generation process executed in the smartphone 1 according to the present embodiment.
- the identification unit 51 extracts the floor surface. Specifically, the identification unit 51 performs plane extraction from the three-dimensional data acquired by the three-dimensional data acquisition unit 4, and identifies the plane with the largest area as the floor surface. At this time, the identification unit 51 may identify a shape group perpendicular to the floor surface at the end of the three-dimensional shape indicated by the three-dimensional data as the wall surface 202.
- Next, the identification unit 51 separates the target objects from the background objects. Specifically, the identification unit 51 identifies, as a target object, any three-dimensional shape other than the floor and wall surfaces that protrudes above the floor surface and has a volume equal to or greater than a threshold value. Among the three-dimensional shapes indicated by the three-dimensional data, the identification unit 51 identifies everything other than the target objects as background objects.
- Next, the identification unit 51 interpolates missing portions of the target objects. Specifically, for a missing portion of a target object that was originally in contact with the floor or a wall, the identification unit 51 interpolates the defect by extending the surrounding vertices, connecting them with line segments, or filling in the surface. The identification unit 51 similarly interpolates missing portions of the background objects.
- In step S108, the change object generation unit 52 generates a change object in which the state of the target object is changed. Specifically, the change object generation unit 52 generates a change object representing the fragments after destruction by applying an algorithm such as Voronoi Fracture or Voronoi Shatter to the target object.
- FIG. 13 is a flowchart illustrating an example of a flow of buffering processing executed in the smartphone 1 according to the present embodiment.
- In step S202, the imaging unit 2 images the real space and outputs the captured image to the image generation unit 53.
- Next, the image generation unit 53 estimates the position and orientation of the imaging unit 2. Specifically, using the posture information acquired by the posture information acquisition unit 3 when the imaging unit 2 captured the image in step S202 and the three-dimensional data acquired by the three-dimensional data acquisition unit 4, the image generation unit 53 estimates from which position and at which angle in the real space the buffering-candidate image was captured.
- In step S206, the image generation unit 53 determines whether or not to buffer the captured image captured by the imaging unit 2 in step S202. Specifically, the image generation unit 53 first determines the visibility of the vertices of the background object in the captured image based on the position and angle of the imaging unit 2 in the real space estimated in step S204. The image generation unit 53 then decides whether to buffer the image so that captured images that are highly independent and most recently captured are buffered preferentially. Through this buffering decision, the background texture is generated from a smaller number of more recently captured buffer images, and a more natural AR display is realized. Note that the image generation unit 53 may update the contents of the buffer by replacing a buffered image when a newly captured image is more independent than that buffered image.
- When it is determined to buffer the image, the image generation unit 53 buffers the captured image in step S208. On the other hand, when it is determined not to buffer the image, the process returns to step S202.
- In step S210, the image generation unit 53 records, in association with the vertices, the distribution of luminance values in the vicinity of the positions corresponding to those vertices of the three-dimensional data that belong to the background exposed in the buffered captured image.
- the luminance value distribution recorded at this time is referred to by the display control unit 6, and the luminance value of the background texture is corrected in the AR display. Thereby, more natural AR display is realized.
- the smartphone 1 preferentially buffers captured images captured more recently with higher independence by repeating the processing described above.
- FIG. 14 is a flowchart illustrating an example of the flow of background texture generation processing executed in the smartphone 1 according to the present embodiment.
- In step S302, the image generation unit 53 sorts the buffer images in ascending order of visible area, that is, from the smallest visible area (the fewest visible vertices) first.
- In step S306, described later, the image generation unit 53 synthesizes the buffer images in this sorted order; as a result, buffer images with a large visible area are reflected preferentially in the result.
- In step S304, the image generation unit 53 corrects the pixel values of each buffer image with reference to the buffer image having the largest visible area. This avoids generating an unnatural background texture in which pixel values differ greatly across seams.
- In step S306, the image generation unit 53 generates the background texture by synthesizing the buffer images in the order sorted in step S302, that is, from the smallest visible area to the largest. Specifically, the image generation unit 53 draws the buffer images starting with the smallest visible area and draws the buffer image with the largest visible area last. Since the overlapping portions of buffer images drawn earlier are overwritten by buffer images drawn later, a buffer image with a smaller visible area ends up occupying a narrower drawing area. In this way, the image generation unit 53 synthesizes the buffer images so that the regions taken from captured images with a large visible area occupy more of the background texture.
- In step S308, the image generation unit 53 interpolates the missing portions of the generated background texture. Specifically, the image generation unit 53 uses an image interpolation algorithm such as inpainting to interpolate regions that are invisible in all of the buffer images, such as the floor surface with which the target object was in contact.
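- Steps S302 to S308 can be pictured with the following sketch, in which buffer images are composited from the smallest visible area to the largest and the remaining holes are inpainted. The warp_to_texture() helper, which reprojects a buffer image onto the background polygons of the texture, is hypothetical, and the pixel-value correction of step S304 is omitted for brevity.

```python
import numpy as np
import cv2

def synthesize_background_texture(buffers, warp_to_texture, tex_shape=(1024, 1024, 3)):
    """buffers: list of dicts with 'image', 'pose' and 'visible_area'.
    warp_to_texture(image, pose) -> (patch, valid): maps a buffer image into
    texture space, returning the warped pixels and a uint8 validity mask."""
    ordered = sorted(buffers, key=lambda b: b["visible_area"])     # smallest visible area first
    texture = np.zeros(tex_shape, np.uint8)
    filled = np.zeros(tex_shape[:2], np.uint8)
    for buf in ordered:                                            # largest drawn last, overwrites seams
        patch, valid = warp_to_texture(buf["image"], buf["pose"])
        texture[valid > 0] = patch[valid > 0]
        filled[valid > 0] = 1
    holes = (filled == 0).astype(np.uint8)                         # regions invisible in every buffer
    return cv2.inpaint(texture, holes, 3, cv2.INPAINT_TELEA)
```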
- the smartphone 1 generates one background texture by the processing described above.
- FIG. 15 is a flowchart illustrating an example of a flow of change texture generation processing executed in the smartphone 1 according to the present embodiment.
- the image generation unit 53 determines a single color to be used as a change texture by the display control unit 6 based on the visible portion of the target object.
- the image generation unit 53 may generate a change texture by an image interpolation algorithm such as Inpainting.
- The smartphone 1 generates the change texture by the process described above. The processes described so far, from [3-1. Change object generation process] through [3-4. Change texture generation process], are assumed to be executed in advance as preliminary preparation, or immediately before the process described in [3-5. AR display process]. Thereby, the difference in imaging time between the images from which the various textures are generated and the through image is reduced, and a more natural AR display is realized.
- FIG. 16 is a flowchart illustrating an example of the flow of the AR display process executed in the smartphone 1 according to the present embodiment.
- FIG. 16 will be specifically described on the assumption that an AR display indicating the destruction of the target object shown in FIG. 1 is performed.
- In step S502, the display control unit 6 acquires a through image captured in real time by the imaging unit 2 and posture information acquired in real time by the posture information acquisition unit 3.
- In step S504, the display control unit 6 calculates the positions and orientations of the fragments. Specifically, the display control unit 6 physically simulates the motion of the fragments and calculates the current position of each vertex of the background object and of the fragments in the through image, using the posture information and three-dimensional data acquired in step S502.
- In step S506, the display control unit 6 dynamically generates a drawing mask according to the state of destruction. Specifically, the display control unit 6 uses the vertex positions of the fragments determined in step S504 to generate a drawing mask in which the regions where the fragments are displayed and the regions where the background texture is displayed are transparent. The display control unit 6 dilates the transparent region two-dimensionally or applies a Gaussian blur to the drawing mask so that the seam between the original background and the texture drawn using the drawing mask is displayed more naturally.
- In step S508, the display control unit 6 calculates parameters for luminance correction. Specifically, the display control unit 6 determines parameters for correcting the luminance values of the background texture so that they approach those of the through image, based on a comparison between the luminance value distribution recorded during buffering and the luminance value distribution at the corresponding positions in the through image.
- In step S510, the display control unit 6 draws the through image captured in real time by the imaging unit 2.
- The AR image is superimposed on the through image by subsequently drawing the change texture, the background texture, and the like over it.
- In step S512, the display control unit 6 fills the depth buffer with all of the change objects and background objects.
- In step S514, the display control unit 6 draws the background texture based on the drawing mask generated in step S506. Specifically, the display control unit 6 refers to the depth buffer and draws the background object by pasting the corresponding region of the background texture onto the polygons included in the transparent region of the drawing mask.
- In step S516, the display control unit 6 draws the fragments.
- Specifically, the display control unit 6 refers to the depth buffer and fills the polygons newly exposed by the state change, such as the fracture cross-sections of the fragments, with the single color determined by the image generation unit 53. Further, the display control unit 6 draws the image of the target object's surface exposed in the through image onto the corresponding polygons of the fragments.
- In step S528, the display control unit 6 performs various post-processing. For example, the display control unit 6 draws other virtual objects, estimates the light source position, and draws shadows.
- the smartphone 1 superimposes and displays the AR image representing the destruction of the target object existing in the real space on the through image by the processing described above.
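- As a final illustration of how the drawn layers combine, the sketch below blends the through image with the AR rendering using the drawing mask. This is a simplified compositing step under the assumption that the AR layers (background texture, fragments, and so on) have already been rendered into a single image; it is not the renderer described above.

```python
import numpy as np

def compose_frame(through_image, ar_layer, drawing_mask):
    """through_image, ar_layer: (H, W, 3) float arrays in [0, 1];
    drawing_mask: (H, W) float array in [0, 1], where 1 means the AR rendering
    should replace the through image at that pixel."""
    m = drawing_mask[..., None]                 # broadcast the mask over colour channels
    return (1.0 - m) * through_image + m * ar_layer
```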
- In the above, an example has been described in which the posture information acquisition unit 3, the three-dimensional data acquisition unit 4, the control unit 5, the display control unit 6, and the display unit 7 are formed in a single apparatus, the smartphone 1.
- the present technology is not limited to such an example.
- the imaging unit 2 may be included in the external device, and the smartphone 1 may perform the above-described AR display based on the captured image acquired from the external device.
- For example, a server on the cloud may include the three-dimensional data acquisition unit 4, the control unit 5, and the display control unit 6, while a client device connected to the server via a network includes the imaging unit 2, the posture information acquisition unit 3, and the display unit 7.
- In that case, the client device may transmit the captured image and the posture information to the server and display the AR image according to the various calculations and controls performed by the server.
- the smartphone 1 may express enlargement / reduction or movement of the target object.
- the technology according to the present disclosure can be applied to, for example, a redesign simulator that virtually moves a desk or a chair.
- a series of control processing by each device described in this specification may be realized using any of software, hardware, and a combination of software and hardware.
- the program constituting the software is stored in advance in a storage medium (non-transitory medium) provided inside or outside each device.
- Each program is read into a RAM at the time of execution, for example, and executed by a processor such as a CPU.
- An identification unit for identifying an object included in the real space by distinguishing it from the background in order to generate a virtual object image in which the state of the object is changed, based on the three-dimensional data of the real space;
- An information processing apparatus comprising: (2) The information processing apparatus includes: The information processing apparatus according to (1), further including an image generation unit configured to generate the object image of the object identified by the identification unit.
- the information processing apparatus includes: A change object generation unit that generates a virtual change object in which the state of the object is changed; A display control unit for controlling the display unit to display the object image generated by the image generation unit on the surface of the change object;
- the information processing apparatus according to (2) further including: (4)
- the image generation unit is configured to estimate a surface of the object hidden in the captured image based on a portion corresponding to the exposed surface of the object in the captured image obtained by capturing the real space. Generate an image, The information processing apparatus according to (3), wherein the display control unit displays the object image in which the second surface image is pasted in a region newly exposed by the change in the change object.
- the display control unit displays the object image obtained by pasting an image of a target object exposed in a through image obtained by capturing the real space in real time in a corresponding area of the change object, (3) Or the information processing apparatus as described in (4).
- the display control unit estimates a light source position in the real space, corrects the luminance of the object image in accordance with the estimated light source position, and displays the corrected image.
- (3) to (5) The information processing apparatus described in 1.
- the display control unit displays the object image with the first surface image pasted on a part of the background hidden by the object that is newly exposed when the state of the object changes.
- the information processing apparatus according to any one of (6) to (6).
- the information processing unit according to (7), wherein the image generation unit generates the first surface image by combining the exposed backgrounds in one or more captured images in which the real space is captured. apparatus.
- the image generation unit preferentially buffers the captured image captured in the most recent captured image captured in the real space with less overlap of the portion where the background is hidden by the object, and
- the information processing apparatus according to (8), which is used for generating the first surface image.
- the said image generation part determines whether the said captured image is buffered based on the attitude
- Information processing device
- the display control unit is configured such that the entire region of the portion corresponding to the background of the three-dimensional surface represented by the three-dimensional data is exposed in at least one of the one or more buffered captured images.
- the information processing apparatus according to (9) or (10), wherein display for guiding an imaging posture of an imaging unit that captures a captured image used for image generation by the image generation unit is performed.
- the display control unit includes a brightness value in the vicinity of a position corresponding to a vertex of the three-dimensional data, which is the background exposed in the buffered captured image, and a through image obtained by capturing the real space in real time.
- (13) The information processing apparatus according to any one of (3) to (12), further including an imaging unit configured to capture the captured images used for image generation by the image generation unit, wherein the display control unit composites and displays the image generated by the image generation unit with a captured image captured in real time by the imaging unit.
- (14) The information processing apparatus according to any one of (3) to (13), wherein the background identified separately by the identification unit is excluded from targets of change object generation by the change object generation unit.
- (15) The information processing apparatus according to any one of (1) to (14), wherein the identification unit extracts a floor surface from the three-dimensional data, identifies a portion protruding above the extracted floor surface as the object, and identifies a portion other than the object as the background.
- (16) The information processing apparatus according to (15), wherein the identification unit extracts the floor surface based on a direction of gravity.
- (17) The information processing apparatus according to any one of (1) to (16), wherein the change of the state includes destruction of the object.
- (18) An information processing method including: identifying, by a processor, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
- (19) A program for causing a computer to function as: an identification unit configured to identify, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
1. Overview
2. Configuration example of smartphone
3. Operation processing
3-1. Change object generation processing
3-2. Buffering processing
3-3. Background texture generation processing
3-4. Change texture generation processing
3-5. AR display processing
4. Conclusion
First, an overview of the AR display processing according to an embodiment of the present disclosure will be described with reference to FIG. 1.
FIG. 2 is a block diagram illustrating an example of the configuration of the smartphone 1 according to the present embodiment. As illustrated in FIG. 2, the smartphone 1 includes an imaging unit 2, a posture information acquisition unit 3, a three-dimensional data acquisition unit 4, a control unit 5, a display control unit 6, and a display unit 7.
The imaging unit 2 includes a lens system composed of an imaging lens, an aperture, a zoom lens, a focus lens, and the like; a drive system that causes the lens system to perform focus and zoom operations; and a solid-state image sensor array that photoelectrically converts the imaging light obtained by the lens system to generate an imaging signal. The solid-state image sensor array may be realized by, for example, a CCD (Charge Coupled Device) sensor array or a CMOS (Complementary Metal Oxide Semiconductor) sensor array. The imaging unit 2 may be a monocular camera or a compound-eye (stereo) camera. A captured image taken by the monocular or stereo camera may be used for texture generation by the image generation unit 53 described later, or for dynamic generation of three-dimensional data by the three-dimensional data acquisition unit 4.
The posture information acquisition unit 3 has a function of acquiring posture information indicating the position and angle (posture) of the smartphone 1. In particular, the posture information acquisition unit 3 acquires the posture information of the imaging unit 2. In AR technology, in order to present truly useful information to the user, it is important for the computer to accurately grasp the situation of the real space. For this reason, technologies aimed at grasping the situation of the real space, which form the basis of AR technology, are being developed. One such technology is SLAM (Simultaneous Localization And Mapping), which can simultaneously estimate the position and posture of a camera and the positions of feature points appearing in the camera image. The basic principle of SLAM using a monocular camera is described in "Andrew J. Davison, 'Real-Time Simultaneous Localization and Mapping with a Single Camera', Proceedings of the 9th IEEE International Conference on Computer Vision Volume 2, 2003, pp. 1403-1410". SLAM that visually estimates position from camera images is also referred to as VSLAM (visual SLAM). In SLAM, the position and posture of the camera are estimated using an environment map and camera images. When SLAM is used, for example, the posture information acquisition unit 3 can align the polygon information constituting the shape of an object with the real object with high accuracy by matching the environment map against the three-dimensional positions of feature points belonging to the object, and acquires the posture information of the imaging unit 2 from the result of this alignment. Alternatively, the posture information acquisition unit 3 may acquire the posture information of the imaging unit 2 by marker-based posture estimation, or by techniques such as DTAM (Dense Tracking and Mapping in Real-Time) or Kinect Fusion. The posture information acquisition unit 3 may also acquire posture information based on information detected by an acceleration sensor, an angular velocity (gyro) sensor, or a geomagnetic sensor. The posture information acquisition unit 3 outputs the acquired posture information to the control unit 5 and the display control unit 6.
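The paragraph above relies on the idea that, once the pose of the imaging unit 2 is known, 3D geometry can be aligned with what the camera sees. As a rough, non-authoritative illustration (not the SLAM/VSLAM machinery itself), the Python sketch below projects a world point into pixel coordinates from an assumed pose (rotation R, translation t) and intrinsic matrix K; all names and values are our own.

```python
import numpy as np

def project_point(point_world, R, t, K):
    """Project a 3D world point into pixel coordinates with a camera pose.

    R (3x3) and t (3,) map world coordinates into the camera frame;
    K is the 3x3 intrinsic matrix. Returns (u, v) pixel coordinates.
    """
    p_cam = R @ point_world + t      # world -> camera frame
    uvw = K @ p_cam                  # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]          # perspective divide

# Example: a point 2 m straight ahead lands on the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_point(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3), K))
```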
The three-dimensional data acquisition unit 4 has a function of acquiring three-dimensional data of the real space. The three-dimensional data can be created by, for example, a monocular imaging sensor, a compound-eye imaging sensor, or a shape sensor using infrared light. The three-dimensional data acquisition unit 4 may generate the three-dimensional data using the imaging unit 2 or a shape sensor using infrared light (not shown) together with the posture information acquisition unit 3, or may acquire from outside three-dimensional data generated in advance by another terminal. The three-dimensional data is realized, for example, as CAD (Computer Assisted Drafting) data. The three-dimensional data acquisition unit 4 outputs the acquired three-dimensional data to the control unit 5 and the display control unit 6.
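The three-dimensional data is described only abstractly here (vertex positions, line segments connecting them, and the surfaces they enclose), so the following is merely a hypothetical container for such data written by us for illustration; it does not correspond to any actual CAD or sensor format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TriangleMesh:
    """Minimal stand-in for the three-dimensional data: vertex positions
    plus triangular faces given as index triples into the vertex array."""
    vertices: np.ndarray   # shape (N, 3), float positions in metres
    faces: np.ndarray      # shape (M, 3), integer indices into vertices

    def face_centers(self) -> np.ndarray:
        """Centroid of each triangular face."""
        return self.vertices[self.faces].mean(axis=1)

# A 1 m x 1 m patch of floor (y = 0) represented by two triangles.
floor = TriangleMesh(
    vertices=np.array([[0, 0, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1]], float),
    faces=np.array([[0, 1, 2], [0, 2, 3]]),
)
print(floor.face_centers())
```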
The control unit 5 functions as an arithmetic processing device and a control device, and controls the overall operation in the smartphone 1 in accordance with various programs. The control unit 5 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor. The control unit 5 may include a ROM (Read Only Memory) that stores programs, operation parameters, and the like to be used, and a RAM (Random Access Memory) that temporarily stores parameters and the like that change as appropriate. As illustrated in FIG. 2, the control unit 5 functions as an identification unit 51, a change object generation unit 52, and an image generation unit 53.
The identification unit 51 has a function of identifying a target object included in the real space while distinguishing it from the background object, based on the three-dimensional data of the real space acquired by the three-dimensional data acquisition unit 4. For example, on the assumption that the three-dimensional data represents an artificial structure (for example, the structure of a room), the identification unit 51 identifies a floor surface (dominant plane) and wall surfaces from the three-dimensional data. The identification unit 51 then identifies, as a target object, a three-dimensional shape other than the floor and walls that protrudes above the floor surface and has a volume equal to or larger than a threshold. However, portions of an identified target object that were originally in contact with the floor or a wall are not registered as surfaces in the three-dimensional data and can therefore become holes. For this reason, the identification unit 51 interpolates a hole by extending vertices around the hole, connecting vertices around the hole with line segments, or supplementing the surface. Alternatively, the identification unit 51 may interpolate holes using an existing algorithm. This guarantees the manifoldness of the target object and makes volume calculation possible, so the identification unit 51 can identify the target object appropriately. The target object identified by the identification unit 51 becomes a target of change object generation by the change object generation unit 52 described later. The identification unit 51 also identifies, as the background object, the part of the three-dimensional shape represented by the three-dimensional data other than the target object. The background object identified separately from the target object by the identification unit 51 is excluded from the targets of change object generation by the change object generation unit 52 described later. The identification unit 51 may interpolate holes in the background object using the same technique as for the target object. The identification processing of the target object by the identification unit 51 will be described below with reference to FIG. 3.
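The identification steps above (dominant-plane extraction, keeping protrusions above the floor with sufficient volume) could be prototyped roughly along the following lines. This is a simplified sketch under our own assumptions (gravity along -y, a point-sampled surface, voxel-based clustering); it is not the procedure of the identification unit 51 itself, and every parameter value is arbitrary.

```python
import numpy as np
from scipy import ndimage

def identify_objects(points, voxel=0.05, min_volume=1e-3):
    """Toy version of the identification step (assumes gravity along -y).

    1. Take the most populated height bin as the floor (dominant plane).
    2. Voxelize everything protruding above the floor.
    3. Keep connected voxel clusters whose volume exceeds a threshold as
       target objects; everything else is treated as background.
    Returns one boolean mask over `points` per identified object.
    """
    heights = points[:, 1]
    hist, edges = np.histogram(heights, bins=50)
    floor_h = edges[np.argmax(hist)]                 # floor height estimate

    above_mask = heights > floor_h + voxel           # geometry above the floor
    above = points[above_mask]
    if len(above) == 0:
        return []

    idx = np.floor((above - above.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True

    labels, n = ndimage.label(grid)                  # connected voxel clusters
    objects = []
    for lab in range(1, n + 1):
        if (labels == lab).sum() * voxel ** 3 >= min_volume:
            member = labels[tuple(idx.T)] == lab
            mask = np.zeros(len(points), dtype=bool)
            mask[above_mask] = member
            objects.append(mask)
    return objects

# Example: a box-shaped cluster of points sitting on a flat floor.
rng = np.random.default_rng(0)
floor_pts = rng.uniform([0, 0, 0], [2, 0.01, 2], size=(2000, 3))
box_pts = rng.uniform([0.8, 0.0, 0.8], [1.2, 0.4, 1.2], size=(500, 3))
# The box is expected to come out as a single identified object.
print(len(identify_objects(np.vstack([floor_pts, box_pts]))))
```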
The change object generation unit 52 has a function of generating, for the target object identified by the identification unit 51, a change object in which the state of the target object has been changed. Specifically, the change object generation unit 52 generates data consisting of positional information of the vertices of the change object, line segments connecting the vertices, and surfaces enclosed by the line segments. The change object may be, for example, fragments into which the target object has been broken, or the target object with irregularities formed on its surface. For example, the change object generation unit 52 generates a change object representing post-destruction fragments by applying an algorithm such as Voronoi Fracture or Voronoi Shatter to the target object. The change object generation processing by the change object generation unit 52 will be described below with reference again to FIG. 3.
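Voronoi Fracture and Voronoi Shatter are named above only as examples of applicable algorithms; the sketch below illustrates the underlying idea in toy form by partitioning surface samples among randomly scattered seed points, with each nearest-seed cell standing in for one fragment. It is our own simplification, not the algorithm used by the change object generation unit 52.

```python
import numpy as np

def voronoi_fracture_labels(samples, n_fragments=8, rng=None):
    """Assign every surface sample to its nearest randomly placed seed.

    Each label corresponds to one Voronoi cell, i.e. one fragment of the
    fractured object. Returns (labels, seeds)."""
    rng = np.random.default_rng(rng)
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    seeds = rng.uniform(lo, hi, size=(n_fragments, 3))
    # distance from every sample to every seed; the nearest seed wins
    d = np.linalg.norm(samples[:, None, :] - seeds[None, :, :], axis=2)
    return d.argmin(axis=1), seeds

# Example: split a random blob of surface samples into 8 fragments.
samples = np.random.default_rng(0).normal(size=(1000, 3))
labels, seeds = voronoi_fracture_labels(samples, n_fragments=8, rng=1)
print(np.bincount(labels))   # number of samples per fragment
```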
The image generation unit 53 has a function of generating in advance, as preparation for AR display, the textures on which AR images are based. Specifically, the image generation unit 53 generates a virtual object image of the background object identified by the identification unit 51, that is, a texture (background texture) to be displayed on the surface of the background object that is newly exposed by the state change. The image generation unit 53 also generates a virtual object image in which the state of the target object identified by the identification unit 51 has been changed, that is, a texture (change texture) to be displayed on the surface of the change object generated by the change object generation unit 52. The image generation unit 53 generates the textures based on, for example, captured images of the real space taken by the imaging unit 2.
The image generation unit 53 generates the background texture (first surface image) to be displayed on the surface of the background object by compositing the background portions exposed in one or more captured images of the real space. Specifically, the image generation unit 53 generates a single background texture by collecting and compositing images corresponding to the background object from one or more captured images, using as the minimum unit the regions delimited by the line segments connecting vertices in the three-dimensional data (hereinafter also referred to as polygons). FIG. 4 is an explanatory diagram for describing the background texture generation processing, and shows the correspondence between a captured image of the real space and the three-dimensional data. In FIG. 4, the surfaces of the target objects are shown as polygons enclosed by solid lines, and the surface of the background object is shown as polygons enclosed by broken lines. The image generation unit 53 generates the background texture by collecting and compositing, from one or more captured images, the images corresponding to the respective polygons of the background object indicated by the broken lines in FIG. 4.
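One way to picture the per-polygon compositing described above is to project each background polygon into every buffered frame and take pixels from a frame that fully covers it. The sketch below shows only that selection step, ignores occlusion of the background by the target object, and assumes per-frame pose data (R, t, K); it is illustrative, not the actual procedure of the image generation unit 53.

```python
import numpy as np

def project(points, R, t, K):
    """World -> pixel projection for an (N, 3) array of points."""
    cam = points @ R.T + t
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3], cam[:, 2]

def pick_source_image(polygon, frames, width, height):
    """Pick the first buffered frame in which all vertices of one background
    polygon project inside the image and lie in front of the camera.

    `frames` is a list of dicts with keys R, t, K (an assumed layout).
    Returns (frame index, pixel coordinates) or None if no frame covers it."""
    for i, f in enumerate(frames):
        uv, depth = project(polygon, f["R"], f["t"], f["K"])
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < height) & (depth > 0)
        if inside.all():
            return i, uv
    return None

# Example: one frame looking down +z sees a small triangle two metres ahead.
frame = {"R": np.eye(3), "t": np.zeros(3),
         "K": np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])}
tri = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0], [0.0, 0.1, 2.0]])
print(pick_source_image(tri, [frame], width=640, height=480))
```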
The image generation unit 53 generates the change texture to be displayed on the surface of the change object generated by the change object generation unit 52. Two kinds of textures are displayed on the change object: a texture for portions that were invisible, such as the cross sections of fragments, and a texture for portions that were originally exposed and visible. The image generation unit 53 generates the former as the change texture (second surface image). For the latter, the corresponding portion of the through image is displayed by the display control unit 6 described later.
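The change texture for invisible portions such as fracture cross sections is later described as a texture filled with a single colour determined by the image generation unit 53. How that colour is chosen is not fixed here, so the sketch below simply takes the per-channel median of the object's exposed pixels as one plausible, assumed choice.

```python
import numpy as np

def crack_surface_color(image, object_mask):
    """Pick one fill colour for fracture cross-sections: the per-channel
    median of the pixels covered by the object's exposed surface.

    image: (H, W, 3) uint8, object_mask: (H, W) bool."""
    pixels = image[object_mask]
    return np.median(pixels, axis=0).astype(np.uint8)

# Example: the top half of a tiny image is treated as the exposed object.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = [200, 180, 150]
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True
print(crack_surface_color(img, mask))   # -> [200 180 150]
```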
The display control unit 6 has a function of controlling the display unit 7 to perform AR display using the textures generated by the image generation unit 53 or the through image. Specifically, the display control unit 6 first dynamically determines the vertex positions of the background object and the change object in the through image, using the posture information acquired by the posture information acquisition unit 3 and the three-dimensional data acquired by the three-dimensional data acquisition unit 4. The display control unit 6 then superimposes the AR image on the through image while dynamically masking the through image in accordance with the state change of the change object. The control performed by the display control unit 6 is roughly divided into computation of the state of the change object, generation of a dynamic drawing mask, display of the background texture, and display of textures on the change object.
The display control unit 6 computes the state of the change object. Specifically, the display control unit 6 first computes the motion of the change object, and then dynamically determines the vertex positions of the background object and the change object in the through image using the posture information and the three-dimensional data. For example, when the state change is destruction, the display control unit 6 physically simulates the positions and orientations of the fragments and determines the vertex positions of the background object and of each fragment in the through image.
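As a minimal stand-in for the physical simulation of fragment motion (the solver itself is not specified here), the sketch below advances fragment centres with explicit Euler integration, gravity, and a crude bounce against the floor plane; the time step and restitution value are arbitrary assumptions of ours.

```python
import numpy as np

def step_fragments(positions, velocities, dt=1/60, floor_y=0.0, restitution=0.3):
    """One explicit-Euler step for fragment centres: apply gravity, integrate,
    then bounce anything that fell below the floor plane back up."""
    g = np.array([0.0, -9.81, 0.0])
    velocities = velocities + g * dt
    positions = positions + velocities * dt
    below = positions[:, 1] < floor_y
    positions[below, 1] = floor_y
    velocities[below, 1] *= -restitution
    return positions, velocities

# Example: drop one fragment from 1 m and simulate two seconds at 60 fps.
pos = np.array([[0.0, 1.0, 0.0]])
vel = np.zeros((1, 3))
for _ in range(120):
    pos, vel = step_fragments(pos, vel)
print(pos[0, 1] >= 0.0)   # the fragment never sinks below the floor
```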
In the through image, the display control unit 6 dynamically generates a drawing mask that is transparent in the regions where AR display different from the real space is to be performed as the state of the target object changes. For example, in the example shown in FIG. 1, the helmet 12 has been moved toward the back by the impact of the ball 120 hitting it, so a drawing mask is generated that is transparent in the region where the background newly exposed by the movement and the fragments of the destroyed helmet 12 are located. The display control unit 6 may also two-dimensionally enlarge the transparent region or apply a Gaussian blur to the drawing mask. This makes the seam between the original background and the texture drawn through the drawing mask appear more natural. FIG. 7 is a diagram for describing the dynamic drawing mask generation processing accompanying the state change of the target object. Reference numerals 610 and 620 in FIG. 7 denote images displayed by the display unit 7, and reference numerals 612 and 622 denote drawing masks dynamically generated by the display control unit 6. When no state change has occurred in the target object, as indicated by reference numeral 610 in FIG. 7, the display control unit 6 displays the through image as it is without AR display, and therefore generates a drawing mask with no transparent region, as indicated by reference numeral 612. On the other hand, when the state of the target object has changed, as indicated by reference numeral 620 in FIG. 7, the display control unit 6 generates a drawing mask that is transparent in the region where the AR image is displayed, as indicated by reference numeral 622.
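The region enlargement and Gaussian blur applied to the drawing mask can be pictured with the following sketch, which turns a hard boolean mask into a soft alpha matte using SciPy; the dilation amount and blur radius are assumptions, not values from this document.

```python
import numpy as np
from scipy import ndimage

def soften_mask(mask, grow_px=4, sigma=2.0):
    """Dilate the transparent region slightly, then blur its edge so the AR
    texture blends into the surrounding through image without a hard seam."""
    grown = ndimage.binary_dilation(mask, iterations=grow_px)
    return ndimage.gaussian_filter(grown.astype(float), sigma=sigma)

# Example: a hard rectangular mask becomes a smooth 0..1 alpha matte.
hard = np.zeros((64, 64), dtype=bool)
hard[20:40, 20:40] = True          # region where the AR image will be drawn
alpha = soften_mask(hard)
print(alpha.min(), alpha.max())    # values now ramp smoothly between 0 and 1
```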
The display control unit 6 displays an AR image in which the background texture is pasted on those polygons of the background object hidden by the target object that become newly exposed when the state of the target object changes. Specifically, while dynamically determining the vertex positions of the background object in the through image using the posture information and the three-dimensional data, the display control unit 6 superimposes on the through image an AR image in which the corresponding regions of the background texture are pasted on the newly exposed polygons of the background object. At this time, the display control unit 6 may correct the luminance values or pixel values of the background texture. For example, based on the result of comparing the distribution of luminance values recorded at the time of buffering with the distribution of luminance values at the corresponding positions in the through image, the display control unit 6 corrects the luminance values of the background texture so that the two come closer to each other. As a result, the background texture becomes more consistent with the through image, and a more natural AR display is realized. The background texture display processing will be described in detail below with reference to FIGS. 8 to 10.
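A minimal sketch of the luminance correction idea, assuming that luminance samples near the same 3D vertices are available from both the buffered capture and the current through image; scaling by a ratio of means is our own simplification of the distribution comparison described above.

```python
import numpy as np

def correct_brightness(texture, buffered_samples, live_samples):
    """Scale the background texture so the brightness recorded when it was
    captured matches what the live through image shows now.

    buffered_samples / live_samples: luminance values sampled near the same
    3D vertices in the buffered image and in the current through image."""
    ratio = (np.mean(live_samples) + 1e-6) / (np.mean(buffered_samples) + 1e-6)
    return np.clip(texture.astype(float) * ratio, 0, 255).astype(np.uint8)

# Example: the scene is now darker than at capture time, so the texture dims.
texture = np.full((8, 8, 3), 120, dtype=np.uint8)
print(correct_brightness(texture,
                         buffered_samples=np.array([100.0]),
                         live_samples=np.array([50.0]))[0, 0])
```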
The display control unit 6 pastes and displays textures on the surfaces of the change object using the dynamically determined vertex positions of the change object. As described above, two kinds of textures are displayed on the change object: a texture for portions that were invisible and a texture for portions that were visible. For the former, the display control unit 6 displays an AR image in which the change texture generated by the image generation unit 53 is pasted on the polygons of the change object that are newly exposed by the state change. For example, in the example shown in FIG. 1, the display control unit 6 pastes a texture filled with the single color determined by the image generation unit 53 on the polygons corresponding to the fracture cross sections among the polygons of each fragment surface. For the latter, the display control unit 6 displays an AR image in which the image of the target object exposed in the through image is pasted on the corresponding polygons of the change object. For example, in the example shown in FIG. 1, the display control unit 6 pastes the image of the surface of the helmet 12 exposed in the through image on the polygons corresponding to the surface of the helmet 12 among the polygons of each fragment surface. The display control unit 6 may refer to flag information set by the change object generation unit 52 to determine whether to display the change texture or a part of the through image.
Based on the control by the display control unit 6, the display unit 7 composites and displays the through image captured by the imaging unit 2 and the AR image generated by the image generation unit 53. The display unit 7 is realized by, for example, an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode) display. When the information processing apparatus according to the present embodiment is realized as an HMD, the display unit 7 may be formed in a transparent or semi-transparent through state, and the AR image may be displayed over the real space seen through the display unit 7 in the through state. In addition, based on the control by the display control unit 6, the display unit 7 displays the UI, described with reference to FIG. 6, for buffering captured images with high independence.
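Compositing the through image with the AR layer under the soft drawing mask amounts to per-pixel alpha blending, sketched below; the array shapes and the uint8 convention are assumptions made for this illustration, not requirements of the display unit 7.

```python
import numpy as np

def composite(through_image, ar_image, alpha_mask):
    """Blend the rendered AR layer over the live through image: alpha 1 shows
    the AR texture, alpha 0 keeps the camera pixel."""
    a = alpha_mask[..., None]        # (H, W, 1) so it broadcasts over channels
    out = a * ar_image.astype(float) + (1 - a) * through_image.astype(float)
    return out.astype(np.uint8)

# Example: AR pixels appear only inside the masked rectangle.
alpha = np.zeros((64, 64))
alpha[20:40, 20:40] = 1.0
frame = composite(np.full((64, 64, 3), 90, np.uint8),
                  np.full((64, 64, 3), 200, np.uint8), alpha)
print(frame[30, 30], frame[0, 0])    # AR pixel inside, camera pixel outside
```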
In the following, as an example, the operation processing in a case where the smartphone 1 displays an AR image in which the target object is destroyed will be described.
FIG. 12 is a flowchart showing an example of the flow of the change object generation processing executed in the smartphone 1 according to the present embodiment.
FIG. 13 is a flowchart showing an example of the flow of the buffering processing executed in the smartphone 1 according to the present embodiment.
FIG. 14 is a flowchart showing an example of the flow of the background texture generation processing executed in the smartphone 1 according to the present embodiment.
FIG. 15 is a flowchart showing an example of the flow of the change texture generation processing executed in the smartphone 1 according to the present embodiment.
FIG. 16 is a flowchart showing an example of the flow of the AR display processing executed in the smartphone 1 according to the present embodiment. FIG. 16 will be described specifically on the assumption of the example, shown in FIG. 1, of performing AR display representing the destruction of the target object.
So far, an embodiment of the technology according to the present disclosure has been described in detail with reference to FIGS. 1 to 16. According to the embodiment described above, an expression in which an object existing in the real space changes is provided, and the user can be given a stronger sense that the real world has been augmented by AR technology. For example, the smartphone 1 according to the present embodiment can realize a natural destruction expression of an object existing in the real world.
(1)
An information processing apparatus including:
an identification unit configured to identify, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
(2)
The information processing apparatus according to (1), further including an image generation unit configured to generate the object image of the object identified by the identification unit.
(3)
The information processing apparatus according to (2), further including:
a change object generation unit configured to generate a virtual change object in which the state of the object is changed; and
a display control unit configured to control a display unit to display, on a surface of the change object, the object image generated by the image generation unit.
(4)
The information processing apparatus according to (3), wherein
the image generation unit generates a second surface image in which a surface of the object hidden in a captured image of the real space is estimated based on a portion of the captured image corresponding to an exposed surface of the object, and
the display control unit displays the object image in which the second surface image is pasted on a region of the change object newly exposed by the change.
(5)
The information processing apparatus according to (3) or (4), wherein the display control unit displays the object image in which an image of the target object exposed in a through image obtained by capturing the real space in real time is pasted on a corresponding region of the change object.
(6)
The information processing apparatus according to any one of (3) to (5), wherein the display control unit estimates a light source position in the real space and corrects and displays the luminance of the object image in accordance with the estimated light source position.
(7)
The information processing apparatus according to any one of (3) to (6), wherein the display control unit displays the object image in which a first surface image is pasted on a portion of the background hidden by the object that becomes newly exposed when the state of the object changes.
(8)
The information processing apparatus according to (7), wherein the image generation unit generates the first surface image by compositing the background exposed in one or more captured images of the real space.
(9)
The information processing apparatus according to (8), wherein the image generation unit preferentially buffers, among the captured images of the real space, a captured image that has little overlap of the portion where the background is hidden by the object and that was captured most recently, and uses the buffered image for generating the first surface image.
(10)
The information processing apparatus according to (9), wherein the image generation unit determines whether to buffer a captured image based on the three-dimensional data and on posture information indicating the position and angle of the imaging unit that captured the image.
(11)
The information processing apparatus according to (9) or (10), wherein the display control unit performs display for guiding the imaging posture of an imaging unit that captures the captured images used for image generation by the image generation unit, such that the entire region of the portion corresponding to the background of the solid surface represented by the three-dimensional data is exposed in at least one of the one or more buffered captured images.
(12)
The information processing apparatus according to any one of (9) to (11), wherein the display control unit corrects the luminance values of the first surface image based on a result of comparing luminance values of the exposed background near positions corresponding to vertices of the three-dimensional data in the buffered captured images with luminance values at the corresponding positions in a through image obtained by capturing the real space in real time.
(13)
The information processing apparatus according to any one of (3) to (12), further including:
an imaging unit configured to capture the captured images used for image generation by the image generation unit,
wherein the display control unit composites and displays the image generated by the image generation unit with a captured image captured in real time by the imaging unit.
(14)
The information processing apparatus according to any one of (3) to (13), wherein the background identified separately by the identification unit is excluded from targets of change object generation by the change object generation unit.
(15)
The information processing apparatus according to any one of (1) to (14), wherein the identification unit extracts a floor surface from the three-dimensional data, identifies a portion protruding above the extracted floor surface as the object, and identifies a portion other than the object as the background.
(16)
The information processing apparatus according to (15), wherein the identification unit extracts the floor surface based on a direction of gravity.
(17)
The information processing apparatus according to any one of (1) to (16), wherein the change of the state includes destruction of the object.
(18)
An information processing method including:
identifying, by a processor, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
(19)
A program for causing a computer to function as:
an identification unit configured to identify, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
2 Imaging unit
3 Posture information acquisition unit
4 Three-dimensional data acquisition unit
5 Control unit
51 Identification unit
52 Change object generation unit
53 Image generation unit
6 Display control unit
7 Display unit
10 Telephone
11 Packing tape
12 Helmet
13 Beverage can
14 Spray can
Claims (19)
- An information processing apparatus comprising: an identification unit configured to identify, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
- The information processing apparatus according to claim 1, further comprising an image generation unit configured to generate the object image of the object identified by the identification unit.
- The information processing apparatus according to claim 2, further comprising: a change object generation unit configured to generate a virtual change object in which the state of the object is changed; and a display control unit configured to control a display unit to display, on a surface of the change object, the object image generated by the image generation unit.
- The information processing apparatus according to claim 3, wherein the image generation unit generates a second surface image in which a surface of the object hidden in a captured image of the real space is estimated based on a portion of the captured image corresponding to an exposed surface of the object, and the display control unit displays the object image in which the second surface image is pasted on a region of the change object newly exposed by the change.
- The information processing apparatus according to claim 3, wherein the display control unit displays the object image in which an image of the target object exposed in a through image obtained by capturing the real space in real time is pasted on a corresponding region of the change object.
- The information processing apparatus according to claim 3, wherein the display control unit estimates a light source position in the real space and corrects and displays the luminance of the object image in accordance with the estimated light source position.
- The information processing apparatus according to claim 3, wherein the display control unit displays the object image in which a first surface image is pasted on a portion of the background hidden by the object that becomes newly exposed when the state of the object changes.
- The information processing apparatus according to claim 7, wherein the image generation unit generates the first surface image by compositing the background exposed in one or more captured images of the real space.
- The information processing apparatus according to claim 8, wherein the image generation unit preferentially buffers, among the captured images of the real space, a captured image that has little overlap of the portion where the background is hidden by the object and that was captured most recently, and uses the buffered image for generating the first surface image.
- The information processing apparatus according to claim 9, wherein the image generation unit determines whether to buffer a captured image based on the three-dimensional data and on posture information indicating the position and angle of the imaging unit that captured the image.
- The information processing apparatus according to claim 9, wherein the display control unit performs display for guiding the imaging posture of an imaging unit that captures the captured images used for image generation by the image generation unit, such that the entire region of the portion corresponding to the background of the solid surface represented by the three-dimensional data is exposed in at least one of the one or more buffered captured images.
- The information processing apparatus according to claim 9, wherein the display control unit corrects the luminance values of the first surface image based on a result of comparing luminance values of the exposed background near positions corresponding to vertices of the three-dimensional data in the buffered captured images with luminance values at the corresponding positions in a through image obtained by capturing the real space in real time.
- The information processing apparatus according to claim 3, further comprising an imaging unit configured to capture the captured images used for image generation by the image generation unit, wherein the display control unit composites and displays the image generated by the image generation unit with a captured image captured in real time by the imaging unit.
- The information processing apparatus according to claim 3, wherein the background identified separately by the identification unit is excluded from targets of change object generation by the change object generation unit.
- The information processing apparatus according to claim 1, wherein the identification unit extracts a floor surface from the three-dimensional data, identifies a portion protruding above the extracted floor surface as the object, and identifies a portion other than the object as the background.
- The information processing apparatus according to claim 15, wherein the identification unit extracts the floor surface based on a direction of gravity.
- The information processing apparatus according to claim 1, wherein the change of the state includes destruction of the object.
- An information processing method comprising: identifying, by a processor, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
- A program for causing a computer to function as: an identification unit configured to identify, based on three-dimensional data of a real space, an object included in the real space while distinguishing the object from a background, in order to generate a virtual object image in which a state of the object is changed.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015553406A JP6332281B2 (ja) | 2013-12-17 | 2014-10-03 | 情報処理装置、情報処理方法およびプログラム |
CN201480067824.9A CN105814611B (zh) | 2013-12-17 | 2014-10-03 | 信息处理设备和方法以及非易失性计算机可读存储介质 |
EP14871364.7A EP3086292B1 (en) | 2013-12-17 | 2014-10-03 | Information processing device, information processing method, and program |
US15/102,299 US10452892B2 (en) | 2013-12-17 | 2014-10-03 | Controlling image processing device to display data based on state of object in real space |
US16/578,818 US11462028B2 (en) | 2013-12-17 | 2019-09-23 | Information processing device and information processing method to generate a virtual object image based on change in state of object in real space |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013260107 | 2013-12-17 | ||
JP2013-260107 | 2013-12-17 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/102,299 A-371-Of-International US10452892B2 (en) | 2013-12-17 | 2014-10-03 | Controlling image processing device to display data based on state of object in real space |
US16/578,818 Continuation US11462028B2 (en) | 2013-12-17 | 2019-09-23 | Information processing device and information processing method to generate a virtual object image based on change in state of object in real space |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015093129A1 true WO2015093129A1 (ja) | 2015-06-25 |
Family
ID=53402486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/076618 WO2015093129A1 (ja) | 2013-12-17 | 2014-10-03 | 情報処理装置、情報処理方法およびプログラム |
Country Status (5)
Country | Link |
---|---|
US (2) | US10452892B2 (ja) |
EP (1) | EP3086292B1 (ja) |
JP (1) | JP6332281B2 (ja) |
CN (3) | CN105814611B (ja) |
WO (1) | WO2015093129A1 (ja) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106125938A (zh) * | 2016-07-01 | 2016-11-16 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
JP6113337B1 (ja) * | 2016-06-17 | 2017-04-12 | 株式会社コロプラ | 表示制御方法および当該表示制御方法をコンピュータに実行させるためのプログラム |
JP2019185776A (ja) * | 2018-04-06 | 2019-10-24 | コリア ユニバーシティ リサーチ アンド ビジネス ファウンデーションKorea University Research And Business Foundation | 室内空間の3次元地図生成方法及び装置 |
JP2020503611A (ja) * | 2016-12-26 | 2020-01-30 | インターデジタル シーイー パテント ホールディングス | 複合現実において動的仮想コンテンツを生成するデバイスおよび方法 |
US10650595B2 (en) | 2015-07-09 | 2020-05-12 | Nokia Technologies Oy | Mediated reality |
JP2020522804A (ja) * | 2017-06-01 | 2020-07-30 | シグニファイ ホールディング ビー ヴィSignify Holding B.V. | 仮想オブジェクトをレンダリングするためのシステム及び方法 |
JP2021125209A (ja) * | 2020-02-07 | 2021-08-30 | 株式会社ドワンゴ | 視聴端末、視聴方法、視聴システム及びプログラム |
WO2023048018A1 (ja) * | 2021-09-27 | 2023-03-30 | 株式会社Jvcケンウッド | 表示装置、表示装置の制御方法およびプログラム |
JP7535281B1 (ja) | 2024-03-29 | 2024-08-16 | 株式会社深谷歩事務所 | 消火訓練mrプログラムおよび消火訓練mrシステム |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105814611B (zh) * | 2013-12-17 | 2020-08-18 | 索尼公司 | 信息处理设备和方法以及非易失性计算机可读存储介质 |
JP6857980B2 (ja) * | 2016-08-02 | 2021-04-14 | キヤノン株式会社 | 情報処理装置、情報処理装置の制御方法およびプログラム |
US10242503B2 (en) | 2017-01-09 | 2019-03-26 | Snap Inc. | Surface aware lens |
JP6782433B2 (ja) * | 2017-03-22 | 2020-11-11 | パナソニックIpマネジメント株式会社 | 画像認識装置 |
CN107170046B (zh) * | 2017-03-30 | 2020-10-30 | 努比亚技术有限公司 | 一种增强现实装置及增强现实画面显示方法 |
JP7046506B2 (ja) * | 2017-06-12 | 2022-04-04 | キヤノン株式会社 | 情報処理装置、情報処理方法及びプログラム |
CN110998674B (zh) * | 2017-08-09 | 2023-11-24 | 索尼公司 | 信息处理装置、信息处理方法和程序 |
CN109427083B (zh) * | 2017-08-17 | 2022-02-01 | 腾讯科技(深圳)有限公司 | 三维虚拟形象的显示方法、装置、终端及存储介质 |
US10380803B1 (en) * | 2018-03-26 | 2019-08-13 | Verizon Patent And Licensing Inc. | Methods and systems for virtualizing a target object within a mixed reality presentation |
US11122237B2 (en) * | 2018-06-05 | 2021-09-14 | Axon Enterprise, Inc. | Systems and methods for redaction of screens |
KR102126561B1 (ko) * | 2018-07-23 | 2020-06-24 | 주식회사 쓰리아이 | 적응적 삼차원 공간 생성방법 및 그 시스템 |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
CN111200827B (zh) * | 2018-11-19 | 2023-03-21 | 华硕电脑股份有限公司 | 网络系统、无线网络延伸器以及网络供应端 |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
CN113167742B (zh) * | 2018-11-29 | 2024-02-27 | 富士胶片株式会社 | 混凝土构造物的点检辅助装置、点检辅助方法及记录介质 |
US11301966B2 (en) * | 2018-12-10 | 2022-04-12 | Apple Inc. | Per-pixel filter |
CN113330484A (zh) | 2018-12-20 | 2021-08-31 | 斯纳普公司 | 虚拟表面修改 |
JP7341736B2 (ja) * | 2019-06-06 | 2023-09-11 | キヤノン株式会社 | 情報処理装置、情報処理方法及びプログラム |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11232646B2 (en) | 2019-09-06 | 2022-01-25 | Snap Inc. | Context-based virtual object rendering |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
US20230127539A1 (en) * | 2020-04-21 | 2023-04-27 | Sony Group Corporation | Information processing apparatus, information processing method, and information processing program |
KR20220003376A (ko) | 2020-07-01 | 2022-01-10 | 삼성전자주식회사 | 이미지 처리 방법 및 장치 |
DE102023119371A1 (de) | 2023-02-14 | 2024-08-14 | Dr. Ing. H.C. F. Porsche Aktiengesellschaft | Verfahren, System und Computerprogrammprodukt zur Verbesserung von simulierten Darstellungen von realen Umgebungen |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004503307A (ja) * | 2000-06-15 | 2004-02-05 | インテル・コーポレーション | 可動遠隔操縦式ビデオゲームシステム |
JP2009076060A (ja) * | 2007-08-29 | 2009-04-09 | Casio Comput Co Ltd | 画像合成装置および画像合成処理プログラム |
JP2012141822A (ja) | 2010-12-29 | 2012-07-26 | Nintendo Co Ltd | 情報処理プログラム、情報処理システム、情報処理装置および情報処理方法 |
WO2013027628A1 (ja) * | 2011-08-24 | 2013-02-28 | ソニー株式会社 | 情報処理装置、情報処理方法及びプログラム |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1045129C (zh) * | 1993-03-29 | 1999-09-15 | 松下电器产业株式会社 | 个人识别装置 |
US8189864B2 (en) | 2007-08-29 | 2012-05-29 | Casio Computer Co., Ltd. | Composite image generating apparatus, composite image generating method, and storage medium |
KR20110002025A (ko) * | 2008-03-10 | 2011-01-06 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | 디지털 이미지를 수정하기 위한 방법 및 장치 |
JP5156571B2 (ja) * | 2008-10-10 | 2013-03-06 | キヤノン株式会社 | 画像処理装置、画像処理方法 |
CN102177719B (zh) * | 2009-01-06 | 2013-08-28 | 松下电器产业株式会社 | 摄像装置朝向检测装置和具备该装置的移动体 |
JP5275880B2 (ja) * | 2009-04-03 | 2013-08-28 | 株式会社トプコン | 光画像計測装置 |
JP2011095797A (ja) * | 2009-10-27 | 2011-05-12 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
US8947455B2 (en) * | 2010-02-22 | 2015-02-03 | Nike, Inc. | Augmented reality design system |
US20120249797A1 (en) * | 2010-02-28 | 2012-10-04 | Osterhout Group, Inc. | Head-worn adaptive display |
US20110234631A1 (en) * | 2010-03-25 | 2011-09-29 | Bizmodeline Co., Ltd. | Augmented reality systems |
CA2797302C (en) * | 2010-04-28 | 2019-01-15 | Ryerson University | System and methods for intraoperative guidance feedback |
JP5652097B2 (ja) * | 2010-10-01 | 2015-01-14 | ソニー株式会社 | 画像処理装置、プログラム及び画像処理方法 |
US20160187654A1 (en) * | 2011-02-28 | 2016-06-30 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a light transmissive wedge shaped illumination system |
JP2012181688A (ja) * | 2011-03-01 | 2012-09-20 | Sony Corp | 情報処理装置、情報処理方法、情報処理システムおよびプログラム |
KR20130053466A (ko) * | 2011-11-14 | 2013-05-24 | 한국전자통신연구원 | 인터랙티브 증강공간 제공을 위한 콘텐츠 재생 장치 및 방법 |
TWI544447B (zh) * | 2011-11-29 | 2016-08-01 | 財團法人資訊工業策進會 | 擴增實境的方法及系統 |
US8963805B2 (en) * | 2012-01-27 | 2015-02-24 | Microsoft Corporation | Executable virtual objects associated with real objects |
US9734633B2 (en) * | 2012-01-27 | 2017-08-15 | Microsoft Technology Licensing, Llc | Virtual environment generating system |
JP2013225245A (ja) * | 2012-04-23 | 2013-10-31 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
JP6040564B2 (ja) * | 2012-05-08 | 2016-12-07 | ソニー株式会社 | 画像処理装置、投影制御方法及びプログラム |
JP6064376B2 (ja) * | 2012-06-06 | 2017-01-25 | ソニー株式会社 | 情報処理装置、コンピュータプログラムおよび端末装置 |
US8922557B2 (en) * | 2012-06-29 | 2014-12-30 | Embarcadero Technologies, Inc. | Creating a three dimensional user interface |
US9443414B2 (en) * | 2012-08-07 | 2016-09-13 | Microsoft Technology Licensing, Llc | Object tracking |
JP2014071499A (ja) * | 2012-09-27 | 2014-04-21 | Kyocera Corp | 表示装置および制御方法 |
US20140125698A1 (en) * | 2012-11-05 | 2014-05-08 | Stephen Latta | Mixed-reality arena |
WO2014101955A1 (en) * | 2012-12-28 | 2014-07-03 | Metaio Gmbh | Method of and system for projecting digital information on a real object in a real environment |
US9535496B2 (en) * | 2013-03-15 | 2017-01-03 | Daqri, Llc | Visual gestures |
US9129430B2 (en) * | 2013-06-25 | 2015-09-08 | Microsoft Technology Licensing, Llc | Indicating out-of-view augmented reality images |
US9207771B2 (en) * | 2013-07-08 | 2015-12-08 | Augmenta Oy | Gesture based user interface |
US20150091891A1 (en) * | 2013-09-30 | 2015-04-02 | Dumedia, Inc. | System and method for non-holographic teleportation |
KR101873127B1 (ko) * | 2013-09-30 | 2018-06-29 | 피씨엠에스 홀딩스, 인크. | 증강 현실 디스플레이 및/또는 사용자 인터페이스를 제공하기 위한 방법, 장치, 시스템, 디바이스, 및 컴퓨터 프로그램 제품 |
US9256072B2 (en) * | 2013-10-02 | 2016-02-09 | Philip Scott Lyren | Wearable electronic glasses that detect movement of a real object copies movement of a virtual object |
US9747307B2 (en) * | 2013-11-18 | 2017-08-29 | Scott Kier | Systems and methods for immersive backgrounds |
CN105814611B (zh) * | 2013-12-17 | 2020-08-18 | 索尼公司 | 信息处理设备和方法以及非易失性计算机可读存储介质 |
-
2014
- 2014-10-03 CN CN201480067824.9A patent/CN105814611B/zh not_active Expired - Fee Related
- 2014-10-03 WO PCT/JP2014/076618 patent/WO2015093129A1/ja active Application Filing
- 2014-10-03 JP JP2015553406A patent/JP6332281B2/ja active Active
- 2014-10-03 US US15/102,299 patent/US10452892B2/en active Active
- 2014-10-03 CN CN202010717885.2A patent/CN111986328A/zh active Pending
- 2014-10-03 CN CN202010728196.1A patent/CN111985344A/zh active Pending
- 2014-10-03 EP EP14871364.7A patent/EP3086292B1/en not_active Not-in-force
-
2019
- 2019-09-23 US US16/578,818 patent/US11462028B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004503307A (ja) * | 2000-06-15 | 2004-02-05 | インテル・コーポレーション | 可動遠隔操縦式ビデオゲームシステム |
JP2009076060A (ja) * | 2007-08-29 | 2009-04-09 | Casio Comput Co Ltd | 画像合成装置および画像合成処理プログラム |
JP2012141822A (ja) | 2010-12-29 | 2012-07-26 | Nintendo Co Ltd | 情報処理プログラム、情報処理システム、情報処理装置および情報処理方法 |
WO2013027628A1 (ja) * | 2011-08-24 | 2013-02-28 | ソニー株式会社 | 情報処理装置、情報処理方法及びプログラム |
Non-Patent Citations (2)
Title |
---|
ANDREW J. DAVISON: "Real-Time Simultaneous Localization and Mapping with a Single Camera", PROCEEDINGS OF THE 9TH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, vol. 2, 2003, pages 1403 - 1410 |
See also references of EP3086292A4 |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10650595B2 (en) | 2015-07-09 | 2020-05-12 | Nokia Technologies Oy | Mediated reality |
JP6113337B1 (ja) * | 2016-06-17 | 2017-04-12 | 株式会社コロプラ | 表示制御方法および当該表示制御方法をコンピュータに実行させるためのプログラム |
JP2017224244A (ja) * | 2016-06-17 | 2017-12-21 | 株式会社コロプラ | 表示制御方法および当該表示制御方法をコンピュータに実行させるためのプログラム |
CN106125938B (zh) * | 2016-07-01 | 2021-10-22 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
CN106125938A (zh) * | 2016-07-01 | 2016-11-16 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
US11580706B2 (en) | 2016-12-26 | 2023-02-14 | Interdigital Ce Patent Holdings, Sas | Device and method for generating dynamic virtual contents in mixed reality |
JP2020503611A (ja) * | 2016-12-26 | 2020-01-30 | インターデジタル シーイー パテント ホールディングス | 複合現実において動的仮想コンテンツを生成するデバイスおよび方法 |
JP7179024B2 (ja) | 2017-06-01 | 2022-11-28 | シグニファイ ホールディング ビー ヴィ | 仮想オブジェクトをレンダリングするためのシステム及び方法 |
JP2020522804A (ja) * | 2017-06-01 | 2020-07-30 | シグニファイ ホールディング ビー ヴィSignify Holding B.V. | 仮想オブジェクトをレンダリングするためのシステム及び方法 |
JP2019185776A (ja) * | 2018-04-06 | 2019-10-24 | コリア ユニバーシティ リサーチ アンド ビジネス ファウンデーションKorea University Research And Business Foundation | 室内空間の3次元地図生成方法及び装置 |
JP7475022B2 (ja) | 2018-04-06 | 2024-04-26 | コリア ユニバーシティ リサーチ アンド ビジネス ファウンデーション | 室内空間の3次元地図生成方法及び装置 |
JP2021125209A (ja) * | 2020-02-07 | 2021-08-30 | 株式会社ドワンゴ | 視聴端末、視聴方法、視聴システム及びプログラム |
WO2023048018A1 (ja) * | 2021-09-27 | 2023-03-30 | 株式会社Jvcケンウッド | 表示装置、表示装置の制御方法およびプログラム |
JP7535281B1 (ja) | 2024-03-29 | 2024-08-16 | 株式会社深谷歩事務所 | 消火訓練mrプログラムおよび消火訓練mrシステム |
Also Published As
Publication number | Publication date |
---|---|
EP3086292A4 (en) | 2017-08-02 |
CN105814611B (zh) | 2020-08-18 |
JPWO2015093129A1 (ja) | 2017-03-16 |
EP3086292B1 (en) | 2018-12-26 |
US11462028B2 (en) | 2022-10-04 |
EP3086292A1 (en) | 2016-10-26 |
CN105814611A (zh) | 2016-07-27 |
US20200019755A1 (en) | 2020-01-16 |
US10452892B2 (en) | 2019-10-22 |
CN111985344A (zh) | 2020-11-24 |
JP6332281B2 (ja) | 2018-05-30 |
CN111986328A (zh) | 2020-11-24 |
US20170017830A1 (en) | 2017-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6332281B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
CN109561296B (zh) | 图像处理装置、图像处理方法、图像处理系统和存储介质 | |
JP6747504B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
JP5818773B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
JP2015114905A (ja) | 情報処理装置、情報処理方法およびプログラム | |
CN106896925A (zh) | 一种虚拟现实与真实场景融合的装置 | |
KR20150082379A (ko) | 단안 시각 slam 을 위한 고속 초기화 | |
US9361731B2 (en) | Method and apparatus for displaying video on 3D map | |
US10863210B2 (en) | Client-server communication for live filtering in a camera view | |
JP2022523478A (ja) | マルチビュー視覚データからの損傷検出 | |
WO2010038693A1 (ja) | 情報処理装置、情報処理方法、プログラム及び情報記憶媒体 | |
US12010288B2 (en) | Information processing device, information processing method, and program | |
CN111801725A (zh) | 图像显示控制装置及图像显示控制用程序 | |
JP2016071645A (ja) | オブジェクト3次元モデル復元方法、装置およびプログラム | |
CN107016730A (zh) | 一种虚拟现实与真实场景融合的装置 | |
US11275434B2 (en) | Information processing apparatus, information processing method, and storage medium | |
CN106981100A (zh) | 一种虚拟现实与真实场景融合的装置 | |
WO2019230169A1 (ja) | 表示制御装置、プログラムおよび表示制御方法 | |
KR101915578B1 (ko) | 시점 기반 오브젝트 피킹 시스템 및 그 방법 | |
US20230127539A1 (en) | Information processing apparatus, information processing method, and information processing program | |
JP6392739B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
Yuan et al. | 18.2: Depth sensing and augmented reality technologies for mobile 3D platforms | |
US20240257440A1 (en) | Information processing apparatus, information processing method, and program | |
JP7261121B2 (ja) | 情報端末装置及びプログラム | |
JP2017102784A (ja) | 画像処理装置、画像処理方法及び画像処理プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14871364 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015553406 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2014871364 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014871364 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15102299 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |