JP2000276613A - Device and method for processing information

Info

Publication number: JP2000276613A
Application number: JP8521899A
Authority: JP (Japan)
Legal status: Withdrawn
Prior art keywords: image, viewpoint, position, unit, information processing
Other languages: Japanese (ja)
Inventors: Takayuki Ashigahara, Hideto Takeuchi, Teruyuki Ushiro
Original assignee: Sony Corp

Classifications

    • G02B27/017: Head-up displays; Head mounted (under G PHYSICS / G02 OPTICS / G02B Optical elements, systems, or apparatus / G02B27/01 Head-up displays)
    • G02B2027/014: Head-up displays characterised by optical features, comprising information/image processing systems
    • G02B2027/0187: Display position adjusting means not related to the information to be displayed, slaved to motion of at least a part of the body of the user, e.g. head, eye

Abstract

(57) [Summary] [PROBLEMS] To provide an image, fusing a real space and a virtual space, that does not feel unnatural. [SOLUTION] A three-dimensional position/direction calculation unit 3 calculates the position of an optical see-through display unit 1 in three-dimensional space, and a viewpoint position/direction calculation unit 5 calculates the user's viewpoint. A virtual object coordinate conversion unit 8 performs coordinate conversion of a virtual image based on the position of the optical see-through display unit 1 and the user's viewpoint. The virtual image after the coordinate conversion is displayed on the optical see-through display unit 1 together with the image of the real space.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an information processing apparatus and an information processing method, and more particularly to an information processing apparatus and an information processing method capable of providing an image, fusing a real space and a virtual space, that does not feel unnatural.

[0002]

2. Description of the Related Art Technology for fusing a real space and a virtual space is called mixed reality (MR). Among MR technologies, the technology of superimposing and displaying information of a virtual space on the real space is known as augmented reality (AR).

As methods of realizing AR there are, for example, so-called optical see-through, in which a transmissive HMD (Head Mounted Display) is used to superimpose and display an image of a virtual object on the real-world scene seen through the display, and so-called video see-through, in which an image of a real object captured by a video camera or the like and an image of a virtual object are superimposed and displayed. These are described in detail in, for example, "Kyohide Sato et al., 'Method of Positioning the Real World and Virtual Space', Image Recognition and Understanding Symposium, July 1998".

[0004]

By the way, when AR is realized using an HMD, since the HMD is mounted on the user's head, the display follows the user's movement and the positional relationship between the display and the user's viewpoint hardly changes. That is, in an HMD the positional relationship between the display and the user's viewpoint is almost fixed, so an image in which an image of a real object (real image) and an image of a virtual object (virtual image) are superimposed can be provided without a sense of incongruity.

[0005] However, since the HMD is used while mounted on the head, it restrains the user's head and feels cumbersome. Accordingly, the applicant of the present application has proposed, in, for example, Japanese Patent Application Laid-Open No. 10-51711, a video see-through type portable display as an apparatus that provides an image fusing a real space and a virtual space without restraining the user's head.

[0006] The portable display has a flat plate shape, and is used, for example, while being held by a user. For this reason, the position and direction of the user's viewpoint with respect to the portable display change depending on the user's posture, usage conditions, and the like.

[0007] On the other hand, this portable display displays a real image captured by a video camera mounted on it. Therefore, only when the user looks at the screen from on the optical axis of the video camera (directly behind it) does the user see the real world as it would appear from the user's own viewpoint; in other cases, the user sees an image unrelated to his or her viewpoint. That is, the user sees a real image different from the real image seen from his or her own viewpoint, and feels a sense of incongruity.

[0008] As described above, the incongruity of the real image caused by not taking the user's viewpoint into account can be resolved, for example, by making the portable display an optical see-through type instead of a video see-through type. However, this cannot eliminate the incongruity caused by the virtual image. That is, when the user's viewpoint is not considered, the virtual image is displayed assuming a viewpoint at a fixed position with respect to the portable display; as a result, if the user's actual viewpoint differs from the assumed viewpoint, the virtual image seen by the user appears unnatural.

Therefore, when the user's viewpoint is not considered, an image in which the real image and the virtual image are superimposed (a composite image) is also unnatural.

The applicant of the present application has also proposed, in, for example, Japanese Patent Application Laid-Open No. 49290/49, a method of creating and displaying a bird's-eye view image from an arbitrary viewpoint. However, this method does not take the user's viewpoint into account, and it is difficult to use it to superimpose a real image and a virtual image and provide a composite image without a sense of incongruity.

The present invention has been made in view of such a situation, and it is an object of the present invention to provide an image that fuses a real space and a virtual space without a sense of incongruity.

[0012]

An information processing apparatus according to the present invention comprises: display position calculating means for calculating a position of display means in a three-dimensional space; viewpoint calculating means for calculating a viewpoint of a user; and virtual image conversion means for converting a virtual image based on the position of the display means and the viewpoint of the user.

The information processing method according to the present invention includes: a display position calculating step of calculating a position of display means in a three-dimensional space; a viewpoint calculating step of calculating a viewpoint of a user; and a virtual image conversion step of converting a virtual image based on the position of the display means and the viewpoint of the user.

In the information processing apparatus and the information processing method having the above configurations, the position of the display means in the three-dimensional space is calculated, and the viewpoint of the user is calculated. Then, the virtual image is converted based on the position of the display means and the viewpoint of the user.

[0015]

FIG. 1 shows the structure (functional structure) of a first embodiment of a portable display device to which the present invention is applied.

The optical see-through type display unit 1 is, for example, an optical see-through type LCD such as a transparent LCD (Liquid Crystal Display); it displays a virtual image supplied from the rendering unit 9, and the real space can be seen through its display screen. Accordingly, the optical see-through display unit 1 displays a composite image in which the real image seen through the display screen and the virtual image from the rendering unit 9 are superimposed. The optical see-through display unit 1 is housed in a flat-plate-shaped housing approximately the size of a so-called notebook personal computer, and each block described later is also housed in this housing, so the portable display device is easy to carry. The portable display device is used while the user holds it with one hand or both hands, or while it is fixed by a stand or the like.

The three-dimensional position/direction sensor 2 is fixed to the optical see-through type display unit 1, and supplies to the three-dimensional position/direction calculation unit 3 an output for calculating the position of the optical see-through type display unit 1 with respect to a predetermined reference plane and its attitude, such as its inclination. The three-dimensional position/direction calculation unit 3 calculates, based on the output of the three-dimensional position/direction sensor 2, the position of the optical see-through display unit 1 with respect to the predetermined reference plane and its attitude, such as its inclination, and supplies them to the virtual object coordinate conversion unit 8.

Here, as a method of calculating the position and orientation of the optical see-through display unit 1, there is, for example, a method of detecting a magnetic field using a source coil composed of orthogonal coils and a position sensor likewise composed of orthogonal coils. That is, by installing the source coil at a position serving as the reference plane (for example, the position of a real object viewed through the optical see-through display unit 1) and using a position sensor composed of orthogonal coils as the three-dimensional position/direction sensor 2, the position and orientation of the optical see-through display unit 1 can be calculated in the three-dimensional position/direction calculation unit 3. This method of calculating the position and orientation of the optical see-through display unit 1 is disclosed in detail in, for example, the above-mentioned Japanese Patent Application Laid-Open No. 10-51711. As an apparatus that calculates the position and orientation of an object using orthogonal coils, there is, for example, 3SPACE (trademark) of Polhemus.

The calculation of the position and orientation of the optical see-through display unit 1 can also be performed using, for example, ultrasonic waves. Devices that calculate the position and orientation of an object using ultrasonic waves include, for example, CrystalEYES (trademark) of StereoGraphics.

The viewpoint position/direction sensor 4 is fixed to, for example, the optical see-through type display unit 1, and supplies to the viewpoint position/direction calculation unit 5 an output for calculating the user's viewpoint (the direction of the viewpoint, or its position (coordinates) in three-dimensional space) with respect to a predetermined reference plane. The viewpoint position/direction calculation unit 5 calculates the user's viewpoint with respect to the predetermined reference plane based on the output of the viewpoint position/direction sensor 4, and supplies it to the virtual object coordinate conversion unit 8.

Here, the user's viewpoint can be calculated, for example, by having the user wear glasses fitted with a source coil composed of orthogonal coils and using a position sensor composed of orthogonal coils as the viewpoint position/direction sensor 4, in the same manner as in the case of the three-dimensional position/direction calculation unit 3.

Alternatively, the user's viewpoint can be calculated by using a CCD (Charge Coupled Device) video camera as the viewpoint position/direction sensor 4 and having the viewpoint position/direction calculation unit 5 perform image recognition on the image output by the CCD video camera. Specifically, for example, by performing image recognition as described in "Yagi et al., 'Construction of a Common Platform for Image Information Processing', Technical Report Vol. 98, No. 26, ISSN 0919-6072, 98-CVM-110-9, 1998.3.19", the region of the user's face can be cut out from the captured image and the eye positions obtained from it by stereo processing, whereby the viewpoint can be calculated. Alternatively, the viewpoint can be calculated by having the user wear a predetermined marker on the face and obtaining the position of the marker by image processing.
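As a purely illustrative sketch of the face-detection step described above (the Haar-cascade detector, the camera index, and the function name are assumptions of this sketch; the patent does not name a specific detector), the user's face can be located in a camera image roughly as follows. The detected face region could then be handed to a stereo routine, such as the one sketched after the triangulation discussion below, to obtain the three-dimensional position of the viewpoint.

```python
# Illustrative sketch only: locate the user's face as a first step toward
# estimating the viewpoint. The Haar cascade and camera index are assumptions.
import cv2

def detect_face_center(frame):
    """Return the (x, y) pixel center of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return (x + w / 2.0, y + h / 2.0)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)      # camera watching the user (assumed index)
    ok, frame = cap.read()
    if ok:
        print("face center (pixels):", detect_face_center(frame))
    cap.release()
```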

The virtual object data storage unit 6 stores data on the shape and texture of the virtual object. The virtual object generation unit 7 generates a virtual object in a predetermined three-dimensional space based on the data stored in the virtual object data storage unit 6, and supplies it to the virtual object coordinate conversion unit 8. The virtual object coordinate conversion unit 8 performs coordinate conversion of the virtual object by geometric calculation, based on the position and orientation of the optical see-through display unit 1 from the three-dimensional position/direction calculation unit 3 and the user's viewpoint from the viewpoint position/direction calculation unit 5, and supplies the coordinate-converted virtual object to the rendering unit 9. The rendering unit 9 performs rendering based on the data of the coordinate-converted virtual object from the virtual object coordinate conversion unit 8, and supplies the three-dimensional CG (Computer Graphics) obtained as the rendering result of the virtual object, that is, the virtual image, to the optical see-through display unit 1.

Here, as a method of rendering a three-dimensional CG as a virtual image, for example, image-based rendering, geometry-based rendering, or the like can be adopted.

Next, FIG. 2 shows an example of the electrical configuration of the portable display device of FIG. 1.

The reference camera 11 and the detection camera 12 are, for example, CCD video cameras, and constitute the viewpoint position/direction sensor 4 in the embodiment of FIG. 1. The reference camera 11 and the detection camera 12 capture the user from different viewpoint directions, and supply the resulting images of the user to A/D (Analog/Digital) converters 13 and 14, respectively. The A/D converter 13 or 14 converts the image from the reference camera 11 or the detection camera 12, respectively, into digital image data and supplies it to the frame memory 15 or 16. The frame memory 15 or 16 stores the image data from the A/D converter 13 or 14, respectively.

The CPU (Central Processing Unit) 17 controls the blocks of FIG. 2 that constitute the portable display device, and also performs the processing of the blocks shown in FIG. 1. That is, the CPU 17 calculates the position and orientation of the optical see-through display unit 1 based on the output of the three-dimensional position/direction sensor 2 supplied via the RS-232C/RS-422 controller 20. Further, the CPU 17 calculates the position or direction of the user's viewpoint based on the images stored in the frame memories 15 and 16, as described above. The CPU 17 also performs the generation of the virtual object by the virtual object generation unit 7 of FIG. 1, the coordinate conversion by the virtual object coordinate conversion unit 8, the rendering by the rendering unit 9, and so on.

A ROM (Read Only Memory) 18 stores an IPL (Initial Program Loading) program and the like. A RAM (Random Access Memory) 19 stores programs for the CPU 17 to perform the above-described processing and data necessary for the operation of the CPU 17. The RS-232C/RS-422 controller 20 performs serial communication conforming to the RS-232C or RS-422 standard with the three-dimensional position/direction sensor 2, and supplies the output of the three-dimensional position/direction sensor 2 to the CPU 17 via a bus.

The LCD controller 21, under the control of the CPU 17 and using a VRAM (Video RAM) 22, controls the display of the optical see-through type display unit 1, which is an LCD. The VRAM 22 temporarily stores the image data to be displayed by the optical see-through display unit 1. That is, the image data to be displayed is written into the VRAM 22 via the LCD controller 21, and the LCD controller 21 supplies the image data stored in the VRAM 22 to the optical see-through display unit 1, whereby the image is displayed.

The storage controller 23 controls access to a magnetic disk 24 such as an HD (Hard Disk) or FD (Floppy Disk), a magneto-optical disk 25 such as a MiniDisc (trademark), an optical disk 26 such as a CD-ROM (Compact Disc ROM), and a nonvolatile memory 27 such as a ROM or flash memory. The magnetic disk 24, the magneto-optical disk 25, the optical disk 26, and the nonvolatile memory 27 store the shape and texture data of the virtual object, and the CPU 17 reads out the data via the storage controller 23. The magnetic disk 24 and the like also store programs for the CPU 17 to perform the above-described processing.

The communication controller 28 controls wireless communication using radio waves or infrared rays and wired communication using Ethernet (trademark) or the like. The shape and texture data of the virtual object, programs for the CPU 17 to perform various processes, and the like can be obtained from an external device by communicating via the communication controller 28.

In the embodiment shown in FIG. 2, the viewpoint position/direction sensor 4 is composed of two CCD video cameras, the reference camera 11 and the detection camera 12, and the CPU 17 calculates the position of the user's viewpoint in three-dimensional space by performing stereo processing using the images captured by the reference camera 11 and the detection camera 12.

Here, the stereo processing will be described.

Stereo processing associates pixels between a plurality of images obtained by photographing the same object with cameras from two or more directions (different lines of sight), and from the parallax between the corresponding pixels determines the distance from the cameras to the object and the shape of the object.

That is, when an object is photographed by the reference camera 11 and the detection camera 12, an image including a projection of the object (the reference camera image) is obtained from the reference camera 11, and an image including a projection of the object (the detected camera image) is obtained from the detection camera 12. Now, as shown in FIG. 3, if a certain point P on the object appears in both the reference camera image and the detected camera image, then from the position at which the point P appears in the reference camera image and the position at which it appears in the detected camera image, that is, from the corresponding points, the parallax between the reference camera 11 and the detection camera 12 can be obtained, and the position of the point P in three-dimensional space (its three-dimensional position) can be obtained using the principle of triangulation.
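As an illustration of this triangulation step (a sketch assuming that the two cameras have already been calibrated so that their 3x4 projection matrices are known; the patent does not prescribe a particular calibration method or solver), the three-dimensional position of the point P can be recovered from a pair of corresponding image points by a linear least-squares solve:

```python
# Illustrative sketch: recover the 3D position of point P from corresponding
# points na (reference camera) and nb (detection camera) by a DLT solve.
# P_ref and P_det are assumed to come from a prior camera calibration.
import numpy as np

def triangulate(P_ref, P_det, na, nb):
    """P_ref, P_det: 3x4 projection matrices; na, nb: (u, v) image points."""
    u1, v1 = na
    u2, v2 = nb
    A = np.stack([
        u1 * P_ref[2] - P_ref[0],
        v1 * P_ref[2] - P_ref[1],
        u2 * P_det[2] - P_det[0],
        v2 * P_det[2] - P_det[1],
    ])
    _, _, vt = np.linalg.svd(A)          # smallest singular vector solves A X = 0
    X = vt[-1]
    return X[:3] / X[3]                  # homogeneous -> Euclidean 3D point
```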

Therefore, in the stereo processing, first, it is necessary to detect a corresponding point. As a detection method, for example, there is an area-based matching method using an epipolar line.

That is, as shown in FIG. 3, in the reference camera 11, the point P on the object is projected onto the point na at which the straight line L connecting the point P and the optical center (lens center) O1 of the reference camera 11 intersects the imaging surface S1 of the reference camera 11.

In the detection camera 12, the point P on the object is projected onto the point nb at which the straight line connecting the point P and the optical center (lens center) O2 of the detection camera 12 intersects the imaging surface S2 of the detection camera 12.

In this case, the straight line L is projected onto the imaging surface S2, on which the detected camera image is formed, as the straight line L2 along which the plane passing through the optical centers O1 and O2 and the point na (or the point P) intersects the imaging surface S2. Since the point P lies on the straight line L, the point nb onto which the point P is projected on the imaging surface S2 lies on the straight line L2 onto which the straight line L is projected; this straight line L2 is called an epipolar line. That is, since the corresponding point nb of the point na must lie on the epipolar line L2, the search for the corresponding point nb may be performed on the epipolar line L2.

Here, an epipolar line can be considered for each pixel constituting the reference camera image formed on the imaging surface S1, and if the positional relationship between the reference camera 11 and the detection camera 12 is known, the epipolar line for each pixel can be obtained in advance.

The detection of the corresponding point nb from the point on the epipolar line L2 can be performed, for example, by the following area-based matching.

That is, in area-based matching, as shown in FIG. 4(A), a small rectangular block (for example, a block of 5 × 5 pixels horizontally and vertically) centered on a point na of the reference camera image (for example, at the intersection of its diagonals) is extracted from the reference camera image as a reference block, and, as shown in FIG. 4(B), detection blocks of the same size as the reference block, each centered on a certain point on the epipolar line L2 projected onto the detected camera image, are extracted from the detected camera image.

Here, in the embodiment of FIG. 4(B), six points nb1 to nb6 are provided on the epipolar line L2 as the centers of detection blocks. The six points nb1 to nb6 are the points obtained by projecting, onto the imaging surface S2 of the detection camera 12, the points on the straight line L in the three-dimensional space shown in FIG. 3 whose distances from the reference point are, for example, 1 m, 2 m, 3 m, 4 m, 5 m, and 6 m; they therefore correspond to distances of 1 m, 2 m, 3 m, 4 m, 5 m, and 6 m from the reference point, respectively.

In area-based matching, the detection blocks centered on the points nb1 to nb6 provided on the epipolar line L2 are extracted from the detected camera image, and the correlation between each detection block and the reference block is calculated using a predetermined evaluation function. Then the point nb at the center of the detection block having the highest correlation with the reference block centered on the point na is obtained as the corresponding point of the point na.

Here, as the evaluation function for evaluating the correlation between the reference block and the detection block, for example, the sum of the absolute values of the differences between the pixel values of the pixels constituting the reference block and the pixel values of the corresponding pixels constituting the detection block, the sum of the squares of those differences, or the normalized cross-correlation can be used.

Now, assuming that the sum of the absolute values of the differences between pixel values is used as the evaluation function, the correlation between a given point (x, y) on the reference camera image (the pixel at coordinates (x, y)) and a certain point (x', y') on the detected camera image is evaluated by, for example, the evaluation value (error value) e(x, y) given by the following equation.

[0047]

(Equation 1)

e(x, y) = Σ |YA(x + i, y + j) − YB(x' + i, y' + j)|, summed over (i, j) ∈ W   (1)

In equation (1), e(x, y) is the evaluation value (error value) indicating the correlation between the pixel (x, y) on the reference camera image and the pixel (x', y') on the detected camera image. YA(x + i, y + j) represents the pixel value, for example the luminance, of the pixel at the point (x + i, y + j) on the reference camera image, and YB(x' + i, y' + j) represents the luminance of the pixel at the point (x' + i, y' + j) on the detected camera image. W represents the reference block and the detection block, and (i, j) ∈ W means that the point (x + i, y + j) or the point (x' + i, y' + j) is a point (pixel) within the reference block or the detection block, respectively.
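For illustration only (the block size and the list of candidate points on the epipolar line are assumptions of this sketch, not values fixed by the patent), equation (1) and the search along the epipolar line can be written roughly as follows:

```python
# Illustrative sketch of equation (1) and the epipolar-line search.
# Images are numpy arrays indexed [row (y), column (x)]; blocks are assumed
# to lie fully inside both images.
import numpy as np

def sad(reference_img, detected_img, x, y, xp, yp, half=2):
    """e(x, y) of equation (1) for a (2*half+1) x (2*half+1) block."""
    ref_block = reference_img[y - half:y + half + 1, x - half:x + half + 1]
    det_block = detected_img[yp - half:yp + half + 1, xp - half:xp + half + 1]
    return int(np.abs(ref_block.astype(np.int32) - det_block.astype(np.int32)).sum())

def best_match(reference_img, detected_img, x, y, candidates):
    """Among candidate points (x', y') on the epipolar line, return the one whose
    detection block has the smallest evaluation value, plus all the values."""
    scores = [sad(reference_img, detected_img, x, y, xp, yp) for xp, yp in candidates]
    return candidates[int(np.argmin(scores))], scores
```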

The evaluation value e(x, y) of equation (1) becomes smaller as the correlation between the pixel (x, y) on the reference camera image and the pixel (x', y') on the detected camera image becomes higher; therefore, the pixel (x', y') that minimizes the evaluation value e(x, y) is obtained as the corresponding point of the pixel (x, y) on the reference camera image.

When an evaluation value that becomes smaller as the correlation becomes higher, as in equation (1), is used, evaluation values (values of the evaluation function) such as those shown in FIG. 5 are obtained for the points nb1 to nb6 on the epipolar line L2. Here, the horizontal axis of FIG. 5 shows the distance numbers assigned in advance to the points in three-dimensional space corresponding to the points nb1 to nb6 on the epipolar line L2, that is, numbers corresponding to their distances from the reference point.

When an evaluation curve such as that shown in FIG. 5 is obtained, the point on the epipolar line L2 corresponding to distance number 3, which gives the smallest evaluation value (the highest correlation), is detected as the corresponding point of the point na. It is also possible to perform interpolation using the evaluation values near the minimum among those obtained for the points corresponding to distance numbers 1 to 6 (indicated by black circles in FIG. 5), obtain the point at which the evaluation value becomes still smaller (the point corresponding to 3.3 m, indicated by a cross in FIG. 5), and detect that point as the final corresponding point.
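One common way to carry out such interpolation, shown here only as a sketch under the assumption that the candidate distances are evenly spaced (the patent does not fix the interpolation method), is to fit a parabola through the evaluation values around the discrete minimum and take its vertex:

```python
# Illustrative sketch: refine the minimum of the evaluation curve by fitting
# a parabola through the three values around the discrete minimum.
import numpy as np

def refine_minimum(distances, scores):
    """distances: candidate distances (e.g. [1, 2, 3, 4, 5, 6] metres);
    scores: evaluation value e for each candidate. Returns a refined distance."""
    k = int(np.argmin(scores))
    if k == 0 or k == len(scores) - 1:
        return distances[k]                    # no neighbours to interpolate with
    e0, e1, e2 = scores[k - 1], scores[k], scores[k + 1]
    denom = e0 - 2.0 * e1 + e2
    if denom == 0:
        return distances[k]
    offset = 0.5 * (e0 - e2) / denom           # vertex of the fitted parabola
    step = distances[k + 1] - distances[k]
    return distances[k] + offset * step        # e.g. 3.3 m instead of 3 m
```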

The setting of the points nb1 to nb6, which are obtained by projecting points on the straight line L connecting the point na on the imaging surface S1 of the reference camera 11 and the optical center O1 onto the imaging surface S2 of the detection camera 12, can be performed, for example, at the time of calibration of the reference camera 11 and the detection camera 12 (the calibration method is not particularly limited). If such setting is performed for each epipolar line existing for each pixel constituting the imaging surface S1 of the reference camera 11, and a distance-number/distance table associating each point set on an epipolar line (hereinafter referred to as a set point, as appropriate) with the distance number corresponding to its distance from the reference point is created in advance, then by detecting the set point that serves as the corresponding point and converting its distance number with reference to the distance-number/distance table, the distance from the reference point (an estimate of the distance to the point on the object) can be obtained immediately. That is, the distance can be obtained directly from the corresponding point. It is, however, also possible to detect the corresponding point, calculate the parallax, and then calculate the distance from the parallax.

If the object imaged by the reference camera 11 and the detection camera 12 is, for example, a human face, the direction of the face as seen from the reference camera 11 and from the detection camera 12 may differ. In this case, even if the correlation between the reference block and the detection block around corresponding points is calculated as it is, the correlation may not be high, and the correct corresponding point may not be obtained. It is therefore desirable to calculate the correlation between the reference block and the detection block after, for example, projecting the detection block so that it appears as seen from the reference camera 11.

In the embodiment of FIG. 2, one reference camera 11 and one detection camera 12 are used, but a plurality of detection cameras can also be used. In this case, evaluation values are obtained by the multi-baseline stereo method and distances are obtained based on them; that is, a distance image is obtained by so-called multi-view stereo processing.

Here, the multi-baseline stereo method uses one reference camera image and a plurality of detected camera images, obtains for each of the plurality of detected camera images an evaluation value of its correlation with the reference camera image, adds the evaluation values for the same distance number, and uses the sum as the final evaluation value, thereby obtaining the distance with high accuracy. Details are described in Masatoshi Okutomi and Takeo Kanade, "Stereo Matching Using Multiple Baseline Lengths", IEICE Transactions D-II, Vol. J75-D-II, No. 8, pp. 1317-1327, August 1992. The multi-baseline stereo method is particularly effective when, for example, the reference camera image and the detected camera images contain a repeating pattern.
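The accumulation step of this method can be sketched as follows (the data layout, one table of evaluation values per detection camera indexed by distance number, is an assumption of this illustration):

```python
# Illustrative sketch of the multi-baseline accumulation: add the evaluation
# values obtained against every detection camera for each distance number and
# pick the distance number with the smallest sum.
import numpy as np

def multi_baseline_distance_number(eval_tables):
    """eval_tables: one 1-D array per detection camera, holding the evaluation
    value (e.g. the SAD of equation (1)) for each distance number."""
    summed = np.sum(np.stack(eval_tables, axis=0), axis=0)   # per distance number
    return int(np.argmin(summed))                            # smallest sum wins
```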

Further, in the embodiment shown in FIG. 2, the viewpoint position/direction sensor 4 is composed of two CCD video cameras, the reference camera 11 and the detection camera 12, but the viewpoint position/direction sensor 4 can also be composed of a single CCD video camera. When a plurality of CCD video cameras are used, however, the position of the user's viewpoint in three-dimensional space can be obtained by stereo processing, whereas with a single CCD video camera it is difficult to obtain such a three-dimensional position (distance). That is, when one CCD video camera is used, the direction of the viewpoint (for example, the direction of the viewpoint with respect to the optical see-through display unit 1) can be obtained, but it is difficult to obtain its position in three-dimensional space.

Next, the operation of the portable display device of FIG. 1 will be described with reference to the flowchart of FIG. 6.

First, in step S1, the position and orientation of the optical see-through display unit 1 are calculated, and the viewpoint of the user is calculated. That is, the position and orientation of the optical see-through display unit 1 are calculated by the three-dimensional position and direction calculation unit 3 based on the output of the three-dimensional position and direction sensor 2, and supplied to the virtual object coordinate conversion unit 8. At the same time, the viewpoint position / direction calculator 5 calculates (the position or direction of) the viewpoint of the user based on the output of the viewpoint position / direction sensor 4 and supplies the calculated viewpoint to the virtual object coordinate converter 8.

Meanwhile, the virtual object generation unit 7 generates a virtual object in a predetermined three-dimensional space based on the data stored in the virtual object data storage unit 6, and supplies it to the virtual object coordinate conversion unit 8. In step S2, the virtual object coordinate conversion unit 8 performs coordinate conversion of the virtual object in the predetermined three-dimensional space based on the position and orientation of the optical see-through display unit 1 and the viewpoint of the user, and supplies the converted data of the virtual object to the rendering unit 9. In step S3, the rendering unit 9 renders the virtual object in accordance with the data from the virtual object coordinate conversion unit 8 and supplies the result to the optical see-through display unit 1. In step S4, the optical see-through display unit 1 displays the image of the virtual object (the virtual image) from the rendering unit 9. The process then returns to step S1, and the same processing is repeated thereafter.

As a result, as shown in FIGS. 7 and 8, the optical see-through type display unit 1 displays a natural image in which the real image and the virtual image are superimposed without a sense of incongruity.

That is, assume, for example, that the user is viewing a real table through the display screen of the optical see-through display unit 1 from directly in front of it, as shown in FIG. 7(A). At this time, the rendering unit 9 renders the virtual image of a virtual object, so that the optical see-through display unit 1 displays an image in which a triangular-pyramid-shaped virtual object appears to be placed on the real table, as shown in FIG. 7(B).

In this case, as shown in FIG. 7(C), when the user moves to the left from in front of the display screen of the optical see-through display unit 1 and the viewpoint thereby shifts to the left, the real table seen through the optical see-through display unit 1 moves to the left on the display screen, and accordingly the virtual object displayed on the optical see-through display unit 1 also moves to the left. As a result, as shown in FIG. 7(D), the user sees the table and the virtual object on it move to the left as the viewpoint moves to the left, and thus sees a natural image.

Further, as shown in FIG. 7(E), when the user moves to the right from in front of the display screen of the optical see-through display unit 1 and the viewpoint thereby shifts to the right, the real table seen through the optical see-through display unit 1 moves to the right on the display screen, and accordingly the virtual object displayed on the optical see-through display unit 1 also moves to the right. As a result, as shown in FIG. 7(F), the user sees the table and the virtual object on it move to the right as the viewpoint moves to the right, and again sees an image without a sense of incongruity.

When the user is viewing the real table and the virtual object from a certain position in front of the display screen of the optical see-through display unit 1 as shown in FIG. 7(A), and the user then moves away from the optical see-through display unit 1 so that the viewpoint moves to a more distant position as shown in FIG. 8(A), the area that the real table seen through the optical see-through display unit 1 occupies on the display screen becomes larger, and accordingly the virtual object displayed on the optical see-through display unit 1 is also enlarged. As a result, as shown in FIG. 8(B), as the viewpoint moves farther away, the table and the virtual object on it appear larger on the display screen of the optical see-through display unit 1, and the user sees an image without a sense of incongruity.

Conversely, when the user approaches the optical see-through display unit 1 as shown in FIG. 8(C) and the viewpoint comes closer, the area that the real table seen through the optical see-through display unit 1 occupies on the display screen becomes smaller, and accordingly the virtual object displayed on the optical see-through display unit 1 is also reduced. As a result, as shown in FIG. 8(D), as the viewpoint approaches the optical see-through display unit 1, the table and the virtual object on it appear smaller on the display screen of the optical see-through display unit 1, and the user again sees an image without a sense of incongruity.

As shown in FIG. 7, if only changes in the direction of the viewpoint need to be handled, it is basically sufficient for the viewpoint position/direction calculation unit 5 to calculate the direction of the viewpoint. However, as shown in FIG. 8, in order to handle changes in the distance to the viewpoint, the viewpoint position/direction calculation unit 5 needs to calculate the three-dimensional position of the viewpoint (for example, the position in three-dimensional space of the midpoint between the user's left and right eyes).

Next, the coordinate conversion in the virtual object coordinate conversion unit 8 will be described. In the portable display device of FIG. 1, the virtual object coordinate conversion unit 8 performs coordinate conversion of the virtual object based on the position and orientation of the optical see-through display unit 1 and the viewpoint of the user, and the virtual object after the conversion is displayed on the optical see-through display unit 1, so that a virtual image without a sense of incongruity is provided to the user.

Now, as a local coordinate system for handling coordinates in three-dimensional space, a coordinate system (XL, YL, ZL) is defined, as shown for example in FIG. 9, with its origin at the upper-left point of the display screen of the optical see-through display unit 1, its x-axis running from left to right of the display screen, its y-axis from top to bottom of the display screen, and its z-axis from front to back. Further, as a display screen coordinate system for handling coordinates on the two-dimensional plane used for display on the display screen of the optical see-through display unit 1, a coordinate system (u, v) is defined with its origin at the upper-left point of the display screen, its x-axis running from left to right of the display screen, and its y-axis from top to bottom.

In this case, the coordinates of the user's viewpoint in the local coordinate system are denoted A(VXL, VYL, VZL), and the coordinates of the virtual object in the local coordinate system are denoted B(OXL, OYL, OZL). Further, the coordinates of the virtual object in a predetermined three-dimensional coordinate system (the three-dimensional coordinate system in which the virtual object generated by the virtual object generation unit 7 is defined, hereinafter referred to as the reference coordinate system) are denoted (OX, OY, OZ).

When the optical see-through type display unit 1 moves, the local coordinate axes XL, YL, and ZL move with it, and the relationship between the local coordinate system and the reference coordinate system can be obtained from the position and orientation of the optical see-through display unit 1 calculated by the three-dimensional position/direction calculation unit 3. If the relationship between the local coordinate system and the reference coordinate system is known, a point (OX, OY, OZ) in the reference coordinate system can be converted into the point B(OXL, OYL, OZL) in the local coordinate system by the scaling, rotation, and translation of the affine transformation shown in the following equation.

[0070]

(Equation 2)

OXL = m11·OX + m12·OY + m13·OZ + TX
OYL = m21·OX + m22·OY + m23·OZ + TY
OZL = m31·OX + m32·OY + m33·OZ + TZ   (2)

In equation (2), m11 to m33 are the factors for scaling and rotation, and TX, TY, and TZ are the factors for translation.
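As a small illustrative sketch only (how the matrix of m11 to m33 and the translation TX, TY, TZ are derived from the sensor output is not shown here; they are simply assumed to be available), equation (2) can be applied to a point as follows:

```python
# Illustrative sketch of equation (2): reference coordinates -> local coordinates.
import numpy as np

def reference_to_local(point_ref, M, T):
    """point_ref: (OX, OY, OZ); M: 3x3 scaling/rotation matrix; T: (TX, TY, TZ)."""
    return M @ np.asarray(point_ref, dtype=float) + np.asarray(T, dtype=float)

# Example values (made up): a 90-degree rotation about the z-axis plus a shift.
M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([0.1, 0.0, 0.5])
print(reference_to_local((1.0, 0.0, 0.0), M, T))   # approximately [0.1, 1.0, 0.5]
```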

When the coordinates B(OXL, OYL, OZL) of the virtual object in the local coordinate system have been obtained, the point B as seen from the user's viewpoint, that is, the coordinates (u, v) of the virtual object in the display screen coordinate system, can be obtained as shown in FIG. 10.

FIG. 10 shows FIG. 9 as viewed from the YL-axis direction (that is, looking along the YL axis).

In FIG. 10, let C be the intersection of the perpendicular dropped from the viewpoint position A to the ZL axis and the perpendicular dropped from the position B of the virtual object to the XL axis, and let D be the intersection of that perpendicular from B with the XL axis. Further, let E be the intersection of the straight line AB with the XL axis, that is, the point at which the point B appears on the display image when the virtual object is viewed from the user's viewpoint.

In this case, the z coordinate of the point E in the local coordinate system is 0, and its x and y coordinates are the x and y coordinates of the virtual object in the display screen coordinate system. That is, the coordinates (u, v) of the virtual object in the display screen coordinate system are equal to the x and y coordinates of the point E in the local coordinate system, and can be obtained from the viewpoint position A(VXL, VYL, VZL) and the position B(OXL, OYL, OZL) of the virtual object.

Now, in FIG. 10, since triangles ABC and EBD are similar, the ratio of the line segments DE and CA is equal to the ratio of the line segments BD and BC, and therefore the following equation holds.

(u − OXL) / (VXL − OXL) = OZL / (OZL − VZL)   (3)

Solving equation (3) for u gives the following equation.

u = OZL (VXL − OXL) / (OZL − VZL) + OXL   (4)

When FIG. 9 is viewed from the XL-axis direction, v is obtained in the same way as u, giving the following equation.

v = OZL (VYL − OYL) / (OZL − VZL) + OYL   (5)

According to equations (4) and (5), the coordinates (u, v) in the display screen coordinate system of the virtual object as seen from the user's viewpoint can be obtained.
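For illustration only, equations (4) and (5) can be written as a small function; the sample viewpoint and object coordinates below are made-up values, not taken from the patent.

```python
# Illustrative sketch of equations (4) and (5): project the virtual object at
# B(OXL, OYL, OZL) through the viewpoint A(VXL, VYL, VZL) onto the display
# plane ZL = 0 to obtain the display screen coordinates (u, v).
def project_to_display(viewpoint, obj):
    """viewpoint: (VXL, VYL, VZL); obj: (OXL, OYL, OZL); returns (u, v)."""
    VXL, VYL, VZL = viewpoint
    OXL, OYL, OZL = obj
    denom = OZL - VZL
    if denom == 0:
        raise ValueError("viewpoint and object lie in the same ZL plane")
    u = OZL * (VXL - OXL) / denom + OXL   # equation (4)
    v = OZL * (VYL - OYL) / denom + OYL   # equation (5)
    return u, v

# Example (made-up values): viewpoint 0.4 m in front of the screen,
# object 0.6 m behind it.
print(project_to_display((0.2, 0.15, -0.4), (0.3, 0.2, 0.6)))   # ~ (0.24, 0.17)
```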

Next, FIG. 11 shows the configuration of a second embodiment of a portable display device to which the present invention is applied. In the figure, portions corresponding to those in FIG. 1 are denoted by the same reference numerals, and their description is omitted below. That is, the portable display device of FIG. 11 includes a video see-through display unit 31 instead of the optical see-through display unit 1, and further includes an image input unit 32 and an input image coordinate conversion unit 33; otherwise, its configuration is basically the same as that of FIG. 1.

The video see-through type display unit 31 is composed of, for example, an LCD, and displays the composite image, supplied from the rendering unit 9, in which a real image and a virtual image are superimposed. The video see-through display unit 31 is housed in a flat casing like the optical see-through display unit 1, but the real space cannot be seen through its display screen.

The image input unit 32 is composed of, for example, a CCD video camera or the like, and is fixed on the rear surface of the video see-through display unit 31 (the surface opposite to the display screen) so that the direction perpendicular to that rear surface is its imaging direction. The image input unit 32 images real objects in the real space on the rear side of the video see-through display unit 31, and supplies the real image obtained as a result to the input image coordinate conversion unit 33.

The input image coordinate conversion unit 33 is thus supplied with the real image from the image input unit 32, with the position and orientation of the video see-through display unit 31 from the three-dimensional position/direction calculation unit 3, and with the user's viewpoint from the viewpoint position/direction calculation unit 5. The input image coordinate conversion unit 33 then converts the coordinates of the real image from the image input unit 32 by geometric calculation, based on the position and orientation of the video see-through display unit 31 from the three-dimensional position/direction calculation unit 3 and on the user's viewpoint from the viewpoint position/direction calculation unit 5 and the like, and supplies the result to the rendering unit 9.

Therefore, whereas in the portable display device of FIG. 1 the virtual image is superimposed and displayed on the real space seen through the optical see-through display unit 1, in the portable display device of FIG. 11 the virtual image is displayed superimposed on the real image of the real space captured by the image input unit 32.

The electrical configuration of the portable display device of FIG. 11 is the same as the electrical configuration of the portable display device of FIG. 1 shown in FIG. 2, except that a CCD video camera serving as the image input unit 32 is added and the CPU 17 also performs the processing of the input image coordinate conversion unit 33, and therefore its illustration and description are omitted.

Next, the operation of the portable display device of FIG. 11 will be described with reference to the flowchart of FIG.

First, in step S11, the position and orientation of the video see-through display unit 31 are calculated, and the viewpoint of the user is calculated. That is, the three-dimensional position/direction calculation unit 3 calculates the position and orientation of the video see-through display unit 31 based on the output of the three-dimensional position/direction sensor 2, and supplies them to the virtual object coordinate conversion unit 8 and the input image coordinate conversion unit 33. At the same time, the viewpoint position/direction calculation unit 5 calculates (the position or direction of) the user's viewpoint based on the output of the viewpoint position/direction sensor 4, and supplies it to the virtual object coordinate conversion unit 8 and the input image coordinate conversion unit 33.

Meanwhile, the virtual object generation unit 7 generates a virtual object in a predetermined three-dimensional space based on the data stored in the virtual object data storage unit 6, and supplies it to the virtual object coordinate conversion unit 8. At the same time, the image input unit 32 images the real space in the direction of the rear surface of the video see-through display unit 31, and the resulting real image is supplied to the input image coordinate conversion unit 33.

Then, in step S12, the virtual object coordinate conversion unit 8 performs coordinate conversion of the virtual object in the predetermined three-dimensional space based on the position and orientation of the video see-through display unit 31 and the viewpoint of the user, and supplies the converted data of the virtual object to the rendering unit 9. Also in step S12, the input image coordinate conversion unit 33 performs coordinate conversion of the real image from the image input unit 32, based on the position and orientation of the video see-through display unit 31, the viewpoint of the user, and the position and orientation of the image input unit 32 with respect to the predetermined reference plane, and supplies the converted real image data to the rendering unit 9.

That is, in the portable display device using the video see-through display unit 31, unlike the portable display device using the optical see-through display unit 1, the real space cannot be seen through the video see-through display unit 31, so the real space captured by the image input unit 32 is displayed on the video see-through display unit 31. If the real image output by the image input unit 32 were displayed on the video see-through display unit 31 as it is, however, the real space displayed on the video see-through display unit 31 could differ from the real space as seen from the user's viewpoint, giving the user a sense of incongruity. Therefore, the input image coordinate conversion unit 33 coordinate-converts the real image from the image input unit 32, based on the position and orientation of the video see-through display unit 31 and of the image input unit 32 as well as the user's viewpoint, into an image that does not appear unnatural when seen from the user's viewpoint.

When the rendering unit 9 receives the data of the virtual object from the virtual object coordinate conversion unit 8 and the real image from the input image coordinate conversion unit 33, it renders the virtual object (draws the virtual image) in step S13. The rendering unit 9 further combines the virtual image with the real image from the input image coordinate conversion unit 33, and supplies the resulting composite image to the video see-through display unit 31. In step S14, the video see-through display unit 31 displays the composite image from the rendering unit 9; the process then returns to step S11, and the same processing is repeated thereafter.

As described above, in the portable display device of FIG. 11, the virtual object coordinate conversion unit 8 performs coordinate conversion of the virtual object based on the position and orientation of the video see-through display unit 31 and the viewpoint of the user, while the input image coordinate conversion unit 33 performs coordinate conversion of the real image based on the position and orientation of the video see-through display unit 31, the viewpoint of the user, and the position and orientation of the image input unit 32; the two are then combined, and the composite image is displayed on the video see-through display unit 31. Therefore, a composite image without a sense of incongruity can be provided to the user.

In the embodiment shown in FIG. 11, the image input unit 32 is fixed to the video see-through display unit 31 as described above. Therefore, the position and orientation of the image input unit 32 can be obtained from the position and orientation of the video see-through display unit 31, and the input image coordinate conversion unit 33 obtains the position and orientation of the image input unit 32 in this way.

By the way, if the range of the real space imaged by the image input unit 32 is small (if the imaging range of the image input unit 32 is only about the same as the range that can be displayed on the video see-through display unit 31), the coordinate conversion in the input image coordinate conversion unit 33 may leave the real image not covering the entire display screen of the video see-through display unit 31, which would give the user a sense of incongruity. The image input unit 32 is therefore made capable of imaging a reasonably wide range of the real space, so that it outputs a real image covering a wider range than the real image that can be displayed by the video see-through display unit 31. In this case, the coordinate conversion by the input image coordinate conversion unit 33 can be prevented from leaving part of the display screen of the video see-through display unit 31 without a real image.

As a method of imaging a wide range of the real space, there is, for example, a method using a fisheye lens or the like. Specifically, for example, by using an omnidirectional image sensor using a hexagonal pyramidal mirror, a real image covering a wide range of the real space can be obtained. A method of creating wide-range images with an omnidirectional image sensor using a hexagonal pyramidal mirror is described in detail in, for example, "Kawanishi et al., Image Recognition and Understanding Symposium, July 1998".

Next, the coordinate conversion of the real image performed by the input image coordinate conversion unit 33 will be described with reference to FIG. 13.

As in the case of FIGS. 9 and 10 described above, a local coordinate system and a display screen coordinate system, each having its origin at the upper-left point of the display screen of the video see-through display unit 31, are considered. In addition, as a camera coordinate system for the image input unit 32, a coordinate system (XR, YR, ZR) is defined whose origin is the optical center and whose three mutually orthogonal axes are its x-, y-, and z-axes.

Here, in order to simplify the explanation, it is assumed that the local coordinate system and the camera coordinate system are related by a pure translation, that is, that the XR, YR, and ZR axes of the camera coordinate system are parallel to the corresponding axes of the local coordinate system.

In this case, let the coordinates of a point F on an object in the real space be (Xd, Yd, Zd) in the local coordinate system and (Xc, Yc, Zc) in the camera coordinate system. The relationship between the two is expressed by the following equation.

[0099]

(Equation 3)

(Xc, Yc, Zc) = (Xd + Tx, Yd + Ty, Zd + Tz)   (6)

In equation (6), Tx, Ty, and Tz represent the translation between the local coordinate system and the camera coordinate system. Since the image input unit 32 is fixed to the video see-through display unit 31, Tx, Ty, and Tz can be obtained in advance. Further, as described later, even when the image input unit 32 is not fixed, Tx, Ty, and Tz can be obtained by detecting the position and orientation of the image input unit 32 and using that position and orientation together with the position and orientation of the video see-through display unit 31.

On the other hand, for the imaging plane of the image input unit 32, consider a screen coordinate system whose origin is the intersection of the imaging plane with the optical axis and whose x- and y-axes are parallel to the XR and YR axes, respectively, and suppose that the point F is projected onto the point (Uc, Vc) in this screen coordinate system. In this case, the relationship between the point F (Xc, Yc, Zc) in the camera coordinate system and the point (Uc, Vc) in the screen coordinate system is expressed by the following equation.
[0101]

(Equation 4)

Uc = fc · Xc / Zc,  Vc = fc · Yc / Zc   (7)

In equation (7), fc represents the focal length of the optical system of the image input unit 32. When the real image output by the image input unit 32 is displayed as it is, the point F is displayed at the point (Uc, Vc).

Further, let the user's viewpoint in the local coordinate system be (Xe, Ye, Ze), and suppose that, as seen from the viewpoint (Xe, Ye, Ze), the point F is projected onto the point (Ud, Vd) in the display screen coordinate system. In this case, the relationship among the point F (Xd, Yd, Zd) in the local coordinate system, the viewpoint (Xe, Ye, Ze), and the point (Ud, Vd) in the display screen coordinate system is expressed by the following equation.
[0103]

(Equation 5)

Ud = Zd (Xe − Xd) / (Zd − Ze) + Xd,  Vd = Zd (Ye − Yd) / (Zd − Ze) + Yd   (8)

Accordingly, from equations (6) to (8), by performing the coordinate conversion of the real image according to the following equation, the input image coordinate conversion unit 33 can obtain the point (Ud, Vd) in the display screen coordinate system at which the point F appears when viewed from the user's viewpoint.

[0104]

(Equation 6)   (9)

Equation (9) is obtained by combining equations (6) to (8) so as to express the display screen coordinates (Ud, Vd) in terms of the screen coordinates (Uc, Vc) of the captured real image. Here, the position (Xd, Yd, Zd) of the point F of the object in the local coordinate system is assumed to be known in advance. However, even if the position (Xd, Yd, Zd) of the point F is not known, a conversion formula for performing the coordinate conversion of the real image can be obtained by assuming that the object is a plane.
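For illustration only, one way to chain equations (6) to (8) for a single pixel of the captured real image is sketched below; the sign convention for (Tx, Ty, Tz), the source of the depth Zc (for example, the planar-object assumption just mentioned), and the function name are assumptions of this sketch rather than the patent's exact equation (9).

```python
# Illustrative sketch: map a pixel (Uc, Vc) of the captured real image to the
# display screen coordinates (Ud, Vd) seen from the user's viewpoint.
def captured_to_display(Uc, Vc, Zc, fc, T, viewpoint):
    """Zc: assumed depth of the object point in camera coordinates;
    fc: focal length; T: (Tx, Ty, Tz); viewpoint: (Xe, Ye, Ze)."""
    # Equation (7) inverted: camera-coordinate position of the object point.
    Xc = Uc * Zc / fc
    Yc = Vc * Zc / fc
    # Equation (6): camera coordinates -> local coordinates (pure translation).
    Tx, Ty, Tz = T
    Xd, Yd, Zd = Xc - Tx, Yc - Ty, Zc - Tz
    # Equation (8): project through the viewpoint onto the display plane z = 0.
    Xe, Ye, Ze = viewpoint
    denom = Zd - Ze
    Ud = Zd * (Xe - Xd) / denom + Xd
    Vd = Zd * (Ye - Yd) / denom + Yd
    return Ud, Vd
```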

Also, here, the local coordinate system and the camera coordinate system have been assumed to be related by a pure translation. If rotation is added to the relationship between the local coordinate system and the camera coordinate system, a conversion formula for performing the coordinate conversion of the real image can be obtained in the same way as above by replacing equation (6) with an affine transformation of the form shown in equation (2).

Next, FIG. 14 shows the configuration of a third embodiment of the portable display device to which the present invention is applied. In the figure, parts corresponding to those in FIG. 11 are denoted by the same reference numerals, and their description is omitted below. That is, the portable display device of FIG. 14 is newly provided with an input unit position/direction calculation unit 41 as compared with the case of FIG. 11, and its output is supplied to the input image coordinate conversion unit 33. Further, in the portable display device of FIG. 14, the image input unit 32 is attached to the video see-through display unit 31, but it is movable rather than fixed.

That is, as shown in FIG. 15, the image input unit 32 can, for example, be moved within a predetermined range on the back surface of the video see-through display unit 31, and its imaging direction can be changed (rotated). In the embodiment shown in FIG. 14, the image input unit 32 is moved manually.

Therefore, in the portable display device shown in FIG. 14, since the image input unit 32 is not fixed as it is in the case of FIG. 11, the position and orientation of the image input unit 32 cannot be determined from the position and orientation of the video see-through display unit 31.

Thus, in the embodiment of FIG. 14, an input unit position/direction calculation unit 41 is attached to the image input unit 32. The input unit position/direction calculation unit 41 calculates the three-dimensional position and orientation of the image input unit 32 with respect to a predetermined reference plane and supplies them to the input image coordinate conversion unit 33.
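For the movable configuration, the role of the input unit position/direction calculation unit 41 can be pictured with a short sketch: once both the display and the camera are located in a common reference frame, the translation (Tx, Ty, Tz) used in the conversion can be recomputed on the fly. The rotation-matrix representation and the function name below are assumptions for illustration only.

    import numpy as np

    def camera_translation_in_local_frame(display_pos, display_rot, camera_pos):
        """Express the camera position in the display-local coordinate system.

        display_pos, camera_pos: positions measured in the common reference frame.
        display_rot: 3x3 matrix whose columns are the display-local axes written
        in the reference frame (local -> reference). The result plays the role
        of (Tx, Ty, Tz) in the conversion described above.
        """
        offset = np.asarray(camera_pos, float) - np.asarray(display_pos, float)
        return np.asarray(display_rot, float).T @ offset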

In the embodiment shown in FIG. 11, the image input unit 32 is fixed to the video see-through display unit 31, so if the direction of the user's viewpoint differs greatly from the imaging direction of the image input unit 32, it may be difficult to obtain a real image that causes no discomfort when viewed from the user's viewpoint. When the image input unit 32 is movable, however, such a problem can be solved by the user moving the image input unit 32 so that its imaging direction matches the direction of the user's viewpoint.

Next, FIG. 16 shows the configuration of a fourth embodiment of the portable display device to which the present invention is applied. In the figure, the parts corresponding to those in FIG. 14 are given the same reference numerals, and their description is omitted below. That is, in the portable display device of FIG. 16, the movable image input unit 32 and the input unit position/direction calculation unit 41 attached to it are not attached to the video see-through display unit 31 but are installed at a position separated from it.

In the portable display device configured in this way, if, for example, there is an obstacle between the video see-through display unit 31 and the subject to be imaged, the real space can be imaged free of the obstacle by installing the image input unit 32 at a position that avoids it. Specifically, for example, by installing the image input unit 32 in the room next to the room where the portable display device is located, the portable display device can provide a composite image of a real image of that adjacent room and a virtual image. The outputs of the image input unit 32 and the input unit position/direction calculation unit 41 can be supplied to the input image coordinate conversion unit 33 by wire or wirelessly.

Next, FIG. 17 shows the configuration of a fifth embodiment of a portable display device to which the present invention is applied. In the figure, the parts corresponding to those in FIG. 14 are given the same reference numerals, and their description is omitted below. That is, the portable display device of FIG. 17 is basically configured in the same manner as that of FIG. 14, except that the input image coordinate conversion unit 33 and the input unit position/direction calculation unit 41 are removed and a control unit 51 is newly provided.

Therefore, in the embodiment of FIG. 17 as well, the image input unit 32 is of the movable type, but it is controlled (attitude-controlled) by the control unit 51 instead of being moved manually.

That is, the control unit 51 is supplied with the user's viewpoint output from the viewpoint position/direction calculation unit 5, and the control unit 51 moves the image input unit 32 so that its imaging direction matches the direction of the user's viewpoint. Therefore, in this case, the real image output from the image input unit 32 matches what is seen when the real space is viewed from the user's viewpoint, so no coordinate transformation of the real image is needed and the input image coordinate conversion unit 33 becomes unnecessary.
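The text does not say how the control unit 51 derives its drive commands. One plausible way, assuming the viewpoint direction is available as a unit vector in display-local coordinates, is to convert it into pan and tilt angles for the camera mount; the function below is a hypothetical sketch, not part of the embodiment.

    import math

    def viewpoint_to_pan_tilt(view_dir):
        """Convert a viewing-direction unit vector (x, y, z), with z pointing
        away from the display's back surface, into pan/tilt angles in degrees."""
        x, y, z = view_dir
        pan = math.degrees(math.atan2(x, z))                   # rotation about the vertical axis
        tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # elevation above the horizontal
        return pan, tilt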

FIG. 18 shows the configuration of a portable display device according to a sixth embodiment of the present invention. In the figure, the parts corresponding to those in FIG. 11 are given the same reference numerals, and their description is omitted below. That is, the portable display device of FIG. 18 is configured in the same manner as that of FIG. 11, except that the input image coordinate conversion unit 33 is removed, a distance calculation unit 61 and an input image synthesis unit 62 are newly provided, and a plurality of N (N being 2 or more) image input units 321 to 32N are provided instead of the single image input unit 32.

The image input units 321 to 32N are attached, for example, to the back surface of the video see-through display unit 31 as shown in FIG. 19, so that the direction of the back surface is imaged from different viewpoint positions (at positions suitable for stereo processing). They output real images of the real space on the back-surface side of the video see-through display unit 31 captured from these different viewpoint positions, and the plurality of real images are supplied to the distance calculation unit 61 and the input image synthesis unit 62.

The distance calculation unit 61 obtains the three-dimensional position of the object in the real space by performing the above-mentioned stereo processing on the N real images from the image input units 321 to 32N, and supplies it to the rendering unit 9 and the input image synthesis unit 62. In this case, the rendering unit 9 renders the virtual image in consideration of the distance to the object in the real space, so it is possible to obtain a composite image that takes the front-back (occlusion) relationship between the virtual object and the real object into account.
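The stereo processing itself is the triangulation described earlier for the reference and detection cameras; for an ideal rectified two-camera pair it reduces to the familiar relation below. The parameter names are assumptions for illustration, not taken from the embodiment.

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Depth by triangulation for a rectified stereo pair: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px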

On the other hand, as described above, the input image synthesizing unit 62 is supplied with N actual images from the image input units 321 to 32N, and is supplied with the positions of the objects in the real space from the distance calculating unit 61. In addition, the position and orientation of the video see-through display unit 31 are supplied from the three-dimensional position / direction calculation unit 3, and the viewpoint of the user is supplied from the viewpoint position / direction calculation unit 5.

The input image synthesis unit 62 synthesizes (blends) the N real images, that is, the real images obtained when the object is viewed from N directions, based on the position of the object in the real space, the position and orientation of the video see-through display unit 31, and the viewpoint of the user, thereby generating a real image as seen when the real space is viewed from the user's viewpoint, and supplies it to the rendering unit 9.

Therefore, in this case, it is possible to obtain a real image that causes no sense of discomfort when seen from the user's viewpoint. Also, because the real image seen from the user's viewpoint is generated from N real images viewed from N directions, the so-called occlusion problem can be solved, in contrast to the case of the embodiment of FIG. 11, where the real image seen from the user's viewpoint is generated from a single real image viewed from one direction.

In the embodiment shown in FIG. 18, the N real images are obtained by the N image input units 321 to 32N, but it is also possible to obtain the N real images by moving one or more image input units. In this case, however, the positions and postures to which the image input unit is moved need to be set in advance or measured.

Further, the generation of a real image as seen when the real space is viewed from the user's viewpoint using the N real images can be performed, for example, as described with reference to FIG. 13. In this case, there are N screen coordinate systems, so the point F in FIG. 13 is projected onto the N screen coordinate systems, yielding N projection points of the object. The N projection points (Uc, Vc) are then blended to obtain the point (Ud, Vd) in the final display screen coordinate system.
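Under the same assumptions as in the sketches above (translation-only camera poses, pinhole cameras of focal length fc, display plane Zd = 0), the projection-and-blend step for a single scene point might look as follows. The uniform averaging and the image-centre screen origin are illustrative choices, not details from the embodiment.

    import numpy as np

    def synthesize_display_pixel(point_local, eye, camera_translations, images, fc):
        """Project one scene point into the N camera images, blend the sampled
        colours, and report the display pixel at which the point is seen from
        the user's viewpoint (assumes the eye and the point are at different depths)."""
        xd, yd, zd = point_local
        xe, ye, ze = eye
        # Display pixel from the user's viewpoint (assumed form of eq. (8)).
        ud = (ze * xd - zd * xe) / (ze - zd)
        vd = (ze * yd - zd * ye) / (ze - zd)
        samples = []
        for (tx, ty, tz), img in zip(camera_translations, images):
            # Project into this camera (assumed forms of eqs. (6) and (7)).
            xc, yc, zc = xd + tx, yd + ty, zd + tz
            if zc <= 0:
                continue  # the point lies behind this camera
            uc, vc = fc * xc / zc, fc * yc / zc
            h, w = img.shape[:2]
            col, row = int(round(uc + w / 2)), int(round(vc + h / 2))
            if 0 <= row < h and 0 <= col < w:
                samples.append(np.asarray(img[row, col], float))
        colour = np.mean(samples, axis=0) if samples else None
        return (ud, vd), colour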

Next, FIG. 20 shows the configuration of a portable display device according to a seventh embodiment of the present invention. In the figure, the parts corresponding to those in FIG. 11 are given the same reference numerals, and their description is omitted below. That is, the portable display device of FIG. 20 is basically configured in the same manner as that of FIG. 11, except that a distance calculation unit 61 and a distance sensor 71 are newly provided.

The distance sensor 71 is, for example, an active sensor such as a range finder, and outputs a signal for measuring the distance to an object by irradiating it with a laser beam and applying the principle of triangulation. The output of the distance sensor 71 is supplied to the distance calculation unit 61, which calculates the distance to the object (the three-dimensional position of the object) and supplies it to the rendering unit 9 and the input image coordinate conversion unit 33.

The input image coordinate conversion unit 33 performs the coordinate transformation of the real image from the image input unit 32, as described with reference to FIG. 13, based on the three-dimensional position of the object supplied from the distance calculation unit 61. The rendering unit 9, on the other hand, renders the virtual image in consideration of the distance to the object in the real space.

Therefore, also in this case, as in the case of FIG. 18, it is possible to obtain a composite image that takes the front-back (occlusion) relationship between the virtual object and the real object into account.
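The front-back relationship mentioned here amounts to a per-pixel depth comparison between the real scene and the rendered virtual object. One standard way to realize it, shown below as an assumption rather than a detail of the patent, is a simple depth test at composition time.

    def composite_pixel(real_rgb, real_depth, virtual_rgb, virtual_depth):
        """Pick the closer of the real and virtual contributions for one pixel."""
        if virtual_rgb is None or real_depth < virtual_depth:
            return real_rgb      # the real object is nearer (or nothing virtual here)
        return virtual_rgb       # the virtual object occludes the real one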

As described above, the viewpoint of the user is calculated, and the virtual image and the real image are converted in consideration of the viewpoint of the user, so it is possible to provide an image that fuses the real space and the virtual space without a sense of incongruity. As a result, when the portable display device is provided with an operation unit for operating a virtual object, as disclosed, for example, in Japanese Patent Application Laid-Open No. 10-51711, work on the virtual object can be performed without a sense of incongruity.

The above description has dealt with the cases where the present invention is applied to a portable display device using the optical see-through display unit 1 and to a portable display device using the video see-through display unit 31. In the portable display device using the video see-through display unit 31, because the captured real image must be processed, a latency corresponding to the processing time arises before the real image is displayed, resulting in a delay of roughly several frames. On the other hand, such a delay does not occur in the portable display device using the optical see-through display unit 1.

In this embodiment, the display device is of a portable type. However, the present invention is applicable not only to a portable display device but also to a stationary display device used while installed on a desk or the like.

Further, a shadow of the virtual object can be added to its virtual image, for example as disclosed in Japanese Patent Application Laid-Open No. Hei 10-51711.

Further, when a plurality of portable display devices are used, it is possible, as disclosed in Japanese Patent Application Laid-Open No. Hei 10-51711, to display a common virtual object on the plurality of portable display devices and to reflect an operation performed on the virtual object from one portable display device on all the portable display devices.

[0133]

As described above, according to the information processing apparatus and the information processing method of the present invention, the position of the display means in the three-dimensional space is calculated, the viewpoint of the user is calculated, and the virtual image is converted based on the position of the display means and the viewpoint of the user. It is therefore possible to provide an image that causes no discomfort when seen from the user's viewpoint.

[Brief description of the drawings]

FIG. 1 is a block diagram illustrating a configuration example of a first embodiment of a portable display device to which the present invention has been applied.

FIG. 2 is a block diagram illustrating an example of a hardware configuration of the portable display device of FIG.

FIG. 3 is a diagram for explaining an epipolar line.

FIG. 4 is a diagram showing a reference camera image and a detected camera image.

FIG. 5 is a diagram showing a transition of an evaluation value.

FIG. 6 is a flowchart for explaining the operation of the portable display device of FIG. 1;

FIG. 7 is a diagram illustrating a relationship between a position of a user's viewpoint and an image displayed at each position.

FIG. 8 is a diagram illustrating a relationship between a position of a user's viewpoint and an image displayed at each position.

FIG. 9 is a diagram for describing a local coordinate system and a display screen coordinate system.

FIG. 10 is a diagram for explaining coordinate conversion by a virtual object coordinate conversion unit 8 in FIG. 1;

FIG. 11 is a block diagram illustrating a configuration example of a second embodiment of a portable display device to which the present invention has been applied.

FIG. 12 is a flowchart for explaining the operation of the portable display device of FIG. 11;

FIG. 13 is a diagram for explaining coordinate conversion by an input image coordinate conversion unit 33 of FIG. 11;

FIG. 14 is a block diagram illustrating a configuration example of a third embodiment of a portable display device to which the present invention has been applied.

FIG. 15 is a diagram showing that the image input unit 32 is movable.

FIG. 16 is a block diagram illustrating a configuration example of a fourth embodiment of a portable display device to which the present invention has been applied.

FIG. 17 is a block diagram illustrating a configuration example of a fifth embodiment of a portable display device to which the present invention has been applied.

FIG. 18 is a block diagram illustrating a configuration example of a sixth embodiment of a portable display device to which the present invention has been applied.

FIG. 19 is a diagram illustrating a state in which a plurality of real images are captured by the image input units 321 to 32N.

FIG. 20 is a block diagram illustrating a configuration example of a seventh embodiment of a portable display device to which the present invention has been applied.

[Explanation of symbols]

Reference Signs List: 1 optical see-through display unit, 2 3D position/direction sensor, 3 3D position/direction calculation unit, 4 viewpoint position/direction sensor, 5 viewpoint position/direction calculation unit, 6 virtual object data storage unit, 7 virtual object generation unit, 8 virtual object coordinate conversion unit, 9 rendering unit, 11 reference camera, 12 detection camera, 13 and 14 A/D converters, 15 and 16 frame memories, 17 CPU, 18 ROM, 19 RAM, 20 RS-232C/RS-422 controller, 21 LCD controller, 22 VRAM, 23 storage controller, 24 magnetic disk, 25 magneto-optical disk, 26 optical disk, 27 non-volatile memory, 28 communication controller, 31 video see-through display unit, 32 and 321 to 32N image input units, 33 input image coordinate conversion unit, 41 input unit position/direction calculation unit, 51 control unit, 61 distance calculation unit, 62 input image synthesis unit, 71 distance sensor


Claims (20)

[Claims]
1. An information processing apparatus for performing information processing for superimposing and displaying a real image, which is an image of an actual object, and a virtual image, which is an image of a virtual object, the apparatus comprising: display means for displaying an image; display position calculation means for calculating a position of the display means in a three-dimensional space; viewpoint calculation means for calculating a viewpoint of a user; virtual image conversion means for converting the virtual image based on the position of the display means and the viewpoint of the user; and drawing means for performing drawing for displaying the converted virtual image on the display means.
2. The information processing apparatus according to claim 1, wherein the display means is an optical see-through type display device and displays the virtual image superimposed on a real image seen through the display device.
3. The information processing apparatus according to claim 1, further comprising imaging means for imaging the actual object and outputting the real image, wherein the display means superimposes and displays the real image output by the imaging means and the virtual image.
4. The information processing apparatus according to claim 3, further comprising real image conversion means for converting the real image output by the imaging means based on the position of the display means and the viewpoint of the user, wherein the display means superimposes and displays the real image after the conversion and the virtual image.
5. The information processing apparatus according to claim 3, wherein said imaging means is fixed at a predetermined position.
6. The information processing apparatus according to claim 5, wherein said imaging means is fixed to said display means.
7. The information processing apparatus according to claim 3, wherein the imaging unit is installed at a position distant from the display unit.
8. The information processing apparatus according to claim 3, wherein the imaging unit outputs a real image in a range wider than the real image that can be displayed by the display unit.
9. The information processing apparatus according to claim 3, wherein the real image output by the imaging unit is converted based on a position of the display unit, a viewpoint of a user, and a position of the imaging unit. .
10. The information processing apparatus according to claim 3, wherein said imaging means is movable.
11. The information processing apparatus according to claim 9 or 10, further comprising: imaging position calculation means for calculating a position of the imaging means in a three-dimensional space; and real image conversion means for converting the real image output by the imaging means based on the position of the display means, the viewpoint of the user, and the position of the imaging means.
12. The information processing apparatus according to claim 3, further comprising control means for controlling the image pickup means based on the viewpoint of the user.
13. The information processing apparatus according to claim 3, comprising a plurality of the imaging means, and further comprising generation means for generating the real image to be displayed on the display means from the outputs of the plurality of imaging means.
14. The information processing apparatus according to claim 3, further comprising distance calculation means for calculating a distance to the actual object, wherein the virtual image conversion means converts the virtual image based on the position of the display means, the viewpoint of the user, and the distance to the actual object.
15. The information processing apparatus according to claim 14, wherein the distance calculation unit calculates a distance to the actual object based on the plurality of real images.
16. The information processing apparatus according to claim 1, wherein the viewpoint calculation means calculates the viewpoint of the user by imaging the user and performing image recognition on the image obtained as a result.
17. The information processing apparatus according to claim 1, wherein the viewpoint calculation means calculates the viewpoint of the user by imaging the user and performing stereo processing using the images obtained as a result.
18. The information processing apparatus according to claim 1, wherein the viewpoint calculation unit calculates a direction of a user's viewpoint or a position of the viewpoint in a three-dimensional space.
19. The information processing device according to claim 1, wherein the information processing device is a portable device.
20. An information processing method for an information processing apparatus that performs information processing for superimposing and displaying a real image, which is an image of an actual object, and a virtual image, which is an image of a virtual object, the information processing apparatus including display means for displaying an image, the method comprising: a display position calculation step of calculating a position of the display means in a three-dimensional space; a viewpoint calculation step of calculating a viewpoint of a user; a virtual image conversion step of converting the virtual image based on the position of the display means and the viewpoint of the user; and a drawing step of performing drawing for displaying the converted virtual image on the display means.
JP8521899A 1999-03-29 1999-03-29 Device and method for processing information Withdrawn JP2000276613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP8521899A JP2000276613A (en) 1999-03-29 1999-03-29 Device and method for processing information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP8521899A JP2000276613A (en) 1999-03-29 1999-03-29 Device and method for processing information

Publications (1)

Publication Number Publication Date
JP2000276613A true JP2000276613A (en) 2000-10-06

Family

ID=13852444

Family Applications (1)

Application Number Title Priority Date Filing Date
JP8521899A Withdrawn JP2000276613A (en) 1999-03-29 1999-03-29 Device and method for processing information

Country Status (1)

Country Link
JP (1) JP2000276613A (en)

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002073955A1 (en) * 2001-03-13 2002-09-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, studio apparatus, storage medium, and program
WO2003083526A1 (en) * 2002-03-28 2003-10-09 Linus Ab A method and a device for co-ordinated displaying of a direct image and an electro-optical image
EP1398601A2 (en) * 2002-09-13 2004-03-17 Canon Kabushiki Kaisha Head up display for navigation purposes in a vehicle
EP1511300A1 (en) * 2002-06-03 2005-03-02 Sony Corporation Image processing device and method, program, program recording medium, data structure, and data recording medium
JP2005323925A (en) * 2004-05-17 2005-11-24 Ge Medical Systems Global Technology Co Llc Ultrasonic imaging device
JP2007529960A (en) * 2004-03-18 2007-10-25 ソニー エレクトロニクス インク 3D information acquisition and display system for personal electronic devices
WO2009038149A1 (en) * 2007-09-20 2009-03-26 Nec Corporation Video image providing system and video image providing method
DE102008020771A1 (en) * 2008-04-21 2009-07-09 Carl Zeiss 3D Metrology Services Gmbh Deviation determining method, involves locating viewers at viewing position of device and screen, such that viewers view each position of exemplars corresponding to measured coordinates of actual condition
DE102008020772A1 (en) * 2008-04-21 2009-10-22 Carl Zeiss 3D Metrology Services Gmbh Presentation of results of a measurement of workpieces
JP2010510572A (en) * 2006-11-20 2010-04-02 トムソン ライセンシングThomson Licensing Method and system for light modeling
JP2010122879A (en) * 2008-11-19 2010-06-03 Sony Ericsson Mobile Communications Ab Terminal device, display control method, and display control program
JP2010541053A (en) * 2007-09-25 2010-12-24 メタイオ ゲゼルシャフト ミット ベシュレンクテル ハフツング Method and apparatus for rendering a virtual object in a real environment
JP2011508557A (en) * 2007-12-26 2011-03-10 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Image processor for overlaying graphics objects
KR20110136012A (en) * 2010-06-14 2011-12-21 주식회사 비즈모델라인 Augmented reality device to track eyesight direction and position
JP2012003328A (en) * 2010-06-14 2012-01-05 Hal Laboratory Inc Three-dimensional image display program, three-dimensional image display apparatus, three-dimensional image display system, and three-dimensional image display method
JP2012013514A (en) * 2010-06-30 2012-01-19 Canon Inc Information processor, three dimensional position calculation method and program
JP2012033043A (en) * 2010-07-30 2012-02-16 Toshiba Corp Information display device and information display method
JP2012115414A (en) * 2010-11-30 2012-06-21 Nintendo Co Ltd Game device, method of providing game, game program, and game system
JP2012175324A (en) * 2011-02-21 2012-09-10 Tatsumi Denshi Kogyo Kk Automatic photograph creation system, automatic photograph creation apparatus, server device and terminal device
WO2012124250A1 (en) * 2011-03-15 2012-09-20 パナソニック株式会社 Object control device, object control method, object control program, and integrated circuit
JP2012190230A (en) * 2011-03-10 2012-10-04 Nec Casio Mobile Communications Ltd Display device, display method, and program
JP2013015796A (en) * 2011-07-06 2013-01-24 Sony Corp Display control apparatus, display control method, and computer program
KR101309176B1 (en) * 2006-01-18 2013-09-23 삼성전자주식회사 Apparatus and method for augmented reality
WO2013145147A1 (en) * 2012-03-28 2013-10-03 パイオニア株式会社 Head mounted display and display method
WO2013148222A1 (en) 2012-03-28 2013-10-03 Microsoft Corporation Augmented reality light guide display
JP2013542462A (en) * 2010-09-21 2013-11-21 マイクロソフト コーポレーション Opaque filter for transmissive head mounted display
WO2014037972A1 (en) * 2012-09-05 2014-03-13 Necカシオモバイルコミュニケーションズ株式会社 Display device, display method, and program
WO2014091824A1 (en) * 2012-12-10 2014-06-19 ソニー株式会社 Display control device, display control method and program
JP2014515130A (en) * 2011-03-10 2014-06-26 マイクロソフト コーポレーション Theme-based expansion of photorealistic views
CN104204994A (en) * 2012-04-26 2014-12-10 英特尔公司 Augmented reality computing device, apparatus and system
WO2015125709A1 (en) * 2014-02-20 2015-08-27 株式会社ソニー・コンピュータエンタテインメント Information processing device and information processing method
JP2015207219A (en) * 2014-04-22 2015-11-19 富士通株式会社 Display device, position specification program, and position specification method
JP5961892B1 (en) * 2015-04-03 2016-08-03 株式会社ハコスコ Display terminal and information recording medium
WO2017121361A1 (en) * 2016-01-14 2017-07-20 深圳前海达闼云端智能科技有限公司 Three-dimensional stereo display processing method and apparatus for curved two-dimensional screen
US9717981B2 (en) 2012-04-05 2017-08-01 Microsoft Technology Licensing, Llc Augmented reality and physical games
US9807381B2 (en) 2012-03-14 2017-10-31 Microsoft Technology Licensing, Llc Imaging structure emitter calibration
US10018844B2 (en) 2015-02-09 2018-07-10 Microsoft Technology Licensing, Llc Wearable image display system
US10192358B2 (en) 2012-12-20 2019-01-29 Microsoft Technology Licensing, Llc Auto-stereoscopic augmented reality display
US10191515B2 (en) 2012-03-28 2019-01-29 Microsoft Technology Licensing, Llc Mobile device light guide display
US10317677B2 (en) 2015-02-09 2019-06-11 Microsoft Technology Licensing, Llc Display system
US10502876B2 (en) 2012-05-22 2019-12-10 Microsoft Technology Licensing, Llc Waveguide optics focus elements
US10579206B2 (en) 2016-12-14 2020-03-03 Samsung Electronics Co., Ltd. Display apparatus and method for controlling the display apparatus

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002073955A1 (en) * 2001-03-13 2002-09-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, studio apparatus, storage medium, and program
WO2003083526A1 (en) * 2002-03-28 2003-10-09 Linus Ab A method and a device for co-ordinated displaying of a direct image and an electro-optical image
EP1511300A4 (en) * 2002-06-03 2006-01-04 Sony Corp Image processing device and method, program, program recording medium, data structure, and data recording medium
EP1511300A1 (en) * 2002-06-03 2005-03-02 Sony Corporation Image processing device and method, program, program recording medium, data structure, and data recording medium
US7831086B2 (en) 2002-06-03 2010-11-09 Sony Corporation Image processing device and method, program, program recording medium, data structure, and data recording medium
US6956503B2 (en) 2002-09-13 2005-10-18 Canon Kabushiki Kaisha Image display apparatus, image display method, measurement apparatus, measurement method, information processing method, information processing apparatus, and identification method
EP1398601A2 (en) * 2002-09-13 2004-03-17 Canon Kabushiki Kaisha Head up display for navigation purposes in a vehicle
CN1296680C (en) * 2002-09-13 2007-01-24 佳能株式会社 Image display and method, measuring device and method, recognition method
US7423553B2 (en) 2002-09-13 2008-09-09 Canon Kabushiki Kaisha Image display apparatus, image display method, measurement apparatus, measurement method, information processing method, information processing apparatus, and identification method
EP1398601A3 (en) * 2002-09-13 2014-05-07 Canon Kabushiki Kaisha Head up display for navigation purposes in a vehicle
JP2007529960A (en) * 2004-03-18 2007-10-25 ソニー エレクトロニクス インク 3D information acquisition and display system for personal electronic devices
JP2005323925A (en) * 2004-05-17 2005-11-24 Ge Medical Systems Global Technology Co Llc Ultrasonic imaging device
JP4615893B2 (en) * 2004-05-17 2011-01-19 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Ultrasonic imaging device
KR101309176B1 (en) * 2006-01-18 2013-09-23 삼성전자주식회사 Apparatus and method for augmented reality
JP2010510572A (en) * 2006-11-20 2010-04-02 トムソン ライセンシングThomson Licensing Method and system for light modeling
US8466917B2 (en) 2006-11-20 2013-06-18 Thomson Licensing Method and system for modeling light
WO2009038149A1 (en) * 2007-09-20 2009-03-26 Nec Corporation Video image providing system and video image providing method
US8531514B2 (en) 2007-09-20 2013-09-10 Nec Corporation Image providing system and image providing method
JP5564946B2 (en) * 2007-09-20 2014-08-06 日本電気株式会社 Video providing system and video providing method
US9390560B2 (en) 2007-09-25 2016-07-12 Metaio Gmbh Method and device for illustrating a virtual object in a real environment
JP2010541053A (en) * 2007-09-25 2010-12-24 メタイオ ゲゼルシャフト ミット ベシュレンクテル ハフツング Method and apparatus for rendering a virtual object in a real environment
JP2011508557A (en) * 2007-12-26 2011-03-10 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Image processor for overlaying graphics objects
US8638984B2 (en) 2008-04-21 2014-01-28 Carl Zeiss Industrielle Messtechnik Gmbh Display of results of a measurement of workpieces as a function of the detection of the gesture of a user
DE102008020772A1 (en) * 2008-04-21 2009-10-22 Carl Zeiss 3D Metrology Services Gmbh Presentation of results of a measurement of workpieces
DE102008020771A1 (en) * 2008-04-21 2009-07-09 Carl Zeiss 3D Metrology Services Gmbh Deviation determining method, involves locating viewers at viewing position of device and screen, such that viewers view each position of exemplars corresponding to measured coordinates of actual condition
JP2010122879A (en) * 2008-11-19 2010-06-03 Sony Ericsson Mobile Communications Ab Terminal device, display control method, and display control program
KR101691564B1 (en) * 2010-06-14 2016-12-30 주식회사 비즈모델라인 Method for Providing Augmented Reality by using Tracking Eyesight
JP2012003328A (en) * 2010-06-14 2012-01-05 Hal Laboratory Inc Three-dimensional image display program, three-dimensional image display apparatus, three-dimensional image display system, and three-dimensional image display method
KR20110136012A (en) * 2010-06-14 2011-12-21 주식회사 비즈모델라인 Augmented reality device to track eyesight direction and position
JP2012013514A (en) * 2010-06-30 2012-01-19 Canon Inc Information processor, three dimensional position calculation method and program
JP2012033043A (en) * 2010-07-30 2012-02-16 Toshiba Corp Information display device and information display method
US9286730B2 (en) 2010-09-21 2016-03-15 Microsoft Technology Licensing, Llc Opacity filter for display device
US10665033B2 (en) 2010-09-21 2020-05-26 Telefonaktiebolaget Lm Ericsson (Publ) Opacity filter for display device
JP2013542462A (en) * 2010-09-21 2013-11-21 マイクロソフト コーポレーション Opaque filter for transmissive head mounted display
KR20130139878A (en) * 2010-09-21 2013-12-23 마이크로소프트 코포레이션 Opacity filter for see-through head mounted display
US10573086B2 (en) 2010-09-21 2020-02-25 Telefonaktiebolaget Lm Ericsson (Publ) Opacity filter for display device
KR101895085B1 (en) * 2010-09-21 2018-10-04 텔레폰악티에볼라겟엘엠에릭슨(펍) Opacity filter for see-through head mounted display
US10388076B2 (en) 2010-09-21 2019-08-20 Telefonaktiebolaget Lm Ericsson (Publ) Opacity filter for display device
US9911236B2 (en) 2010-09-21 2018-03-06 Telefonaktiebolaget L M Ericsson (Publ) Opacity filter for display device
JP2012115414A (en) * 2010-11-30 2012-06-21 Nintendo Co Ltd Game device, method of providing game, game program, and game system
JP2012175324A (en) * 2011-02-21 2012-09-10 Tatsumi Denshi Kogyo Kk Automatic photograph creation system, automatic photograph creation apparatus, server device and terminal device
JP2014515130A (en) * 2011-03-10 2014-06-26 マイクロソフト コーポレーション Theme-based expansion of photorealistic views
JP2012190230A (en) * 2011-03-10 2012-10-04 Nec Casio Mobile Communications Ltd Display device, display method, and program
WO2012124250A1 (en) * 2011-03-15 2012-09-20 パナソニック株式会社 Object control device, object control method, object control program, and integrated circuit
JPWO2012124250A1 (en) * 2011-03-15 2014-07-17 パナソニック株式会社 Object control apparatus, object control method, object control program, and integrated circuit
JP2013015796A (en) * 2011-07-06 2013-01-24 Sony Corp Display control apparatus, display control method, and computer program
US10074346B2 (en) 2011-07-06 2018-09-11 Sony Corporation Display control apparatus and method to control a transparent display
US9807381B2 (en) 2012-03-14 2017-10-31 Microsoft Technology Licensing, Llc Imaging structure emitter calibration
US9558590B2 (en) 2012-03-28 2017-01-31 Microsoft Technology Licensing, Llc Augmented reality light guide display
JP2018081706A (en) * 2012-03-28 2018-05-24 マイクロソフト テクノロジー ライセンシング,エルエルシー Augmented reality light guide display
JP2015523583A (en) * 2012-03-28 2015-08-13 マイクロソフト コーポレーション Augmented reality light guide display
EP2831867A4 (en) * 2012-03-28 2015-04-01 Microsoft Corp Augmented reality light guide display
EP2831867A1 (en) * 2012-03-28 2015-02-04 Microsoft Corporation Augmented reality light guide display
US10191515B2 (en) 2012-03-28 2019-01-29 Microsoft Technology Licensing, Llc Mobile device light guide display
WO2013145147A1 (en) * 2012-03-28 2013-10-03 パイオニア株式会社 Head mounted display and display method
US10388073B2 (en) 2012-03-28 2019-08-20 Microsoft Technology Licensing, Llc Augmented reality light guide display
CN104205193A (en) * 2012-03-28 2014-12-10 微软公司 Augmented reality light guide display
WO2013148222A1 (en) 2012-03-28 2013-10-03 Microsoft Corporation Augmented reality light guide display
US9717981B2 (en) 2012-04-05 2017-08-01 Microsoft Technology Licensing, Llc Augmented reality and physical games
US10478717B2 (en) 2012-04-05 2019-11-19 Microsoft Technology Licensing, Llc Augmented reality and physical games
CN104204994B (en) * 2012-04-26 2018-06-05 英特尔公司 Augmented reality computing device, equipment and system
CN104204994A (en) * 2012-04-26 2014-12-10 英特尔公司 Augmented reality computing device, apparatus and system
US10502876B2 (en) 2012-05-22 2019-12-10 Microsoft Technology Licensing, Llc Waveguide optics focus elements
CN104603717A (en) * 2012-09-05 2015-05-06 Nec卡西欧移动通信株式会社 Display device, display method, and program
WO2014037972A1 (en) * 2012-09-05 2014-03-13 Necカシオモバイルコミュニケーションズ株式会社 Display device, display method, and program
JPWO2014091824A1 (en) * 2012-12-10 2017-01-05 ソニー株式会社 Display control apparatus, display control method, and program
US9613461B2 (en) 2012-12-10 2017-04-04 Sony Corporation Display control apparatus, display control method, and program
US10181221B2 (en) 2012-12-10 2019-01-15 Sony Corporation Display control apparatus, display control method, and program
WO2014091824A1 (en) * 2012-12-10 2014-06-19 ソニー株式会社 Display control device, display control method and program
US10192358B2 (en) 2012-12-20 2019-01-29 Microsoft Technology Licensing, Llc Auto-stereoscopic augmented reality display
US10192360B2 (en) 2014-02-20 2019-01-29 Sony Interactive Entertainment Inc. Information processing apparatus and information processing method
WO2015125709A1 (en) * 2014-02-20 2015-08-27 株式会社ソニー・コンピュータエンタテインメント Information processing device and information processing method
JP2015156131A (en) * 2014-02-20 2015-08-27 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
JP2015207219A (en) * 2014-04-22 2015-11-19 富士通株式会社 Display device, position specification program, and position specification method
US10317677B2 (en) 2015-02-09 2019-06-11 Microsoft Technology Licensing, Llc Display system
US10018844B2 (en) 2015-02-09 2018-07-10 Microsoft Technology Licensing, Llc Wearable image display system
JP5961892B1 (en) * 2015-04-03 2016-08-03 株式会社ハコスコ Display terminal and information recording medium
WO2016157523A1 (en) * 2015-04-03 2016-10-06 株式会社SR laboratories Display terminal and information recording medium
WO2017121361A1 (en) * 2016-01-14 2017-07-20 深圳前海达闼云端智能科技有限公司 Three-dimensional stereo display processing method and apparatus for curved two-dimensional screen
US10579206B2 (en) 2016-12-14 2020-03-03 Samsung Electronics Co., Ltd. Display apparatus and method for controlling the display apparatus

Similar Documents

Publication Publication Date Title
KR101613387B1 (en) Image capturing apparatus, image capture system, image processing method, information processing apparatus, and computer-readable storage medium
US9083859B2 (en) System and method for determining geo-location(s) in images
Huang et al. 6-DOF VR videos with a single 360-camera
EP2884460B1 (en) Image capturing apparatus, monitoring system, image processing apparatus, image capturing method, and non-transitory computer readable recording medium
CN103180893B (en) For providing the method and system of three-dimensional user interface
US10198865B2 (en) HMD calibration with direct geometric modeling
US8965741B2 (en) Context aware surface scanning and reconstruction
US20160117864A1 (en) Recalibration of a flexible mixed reality device
WO2015068656A1 (en) Image-generating device and method
US8953036B2 (en) Information processing apparatus, information processing method, program, and information processing system
JP4136859B2 (en) Position and orientation measurement method
US7733404B2 (en) Fast imaging system calibration
JP4108609B2 (en) How to calibrate a projector with a camera
JP5430565B2 (en) Electronic mirror device
JP5491235B2 (en) Camera calibration device
KR101270893B1 (en) Image processing device, image processing method, program thereof, recording medium containing the program, and imaging device
US7095424B2 (en) Image display apparatus and method, and storage medium
US6677939B2 (en) Stereoscopic image processing apparatus and method, stereoscopic vision parameter setting apparatus and method and computer program storage medium information processing method and apparatus
US20160042572A1 (en) Interactive three dimensional displays on handheld devices
US20200051269A1 (en) Hybrid depth sensing pipeline
Gluckman et al. Rectified catadioptric stereo sensors
Szeliski et al. Creating full view panoramic image mosaics and environment maps
US6570566B1 (en) Image processing apparatus, image processing method, and program providing medium
US6891518B2 (en) Augmented reality visualization device
US8581961B2 (en) Stereoscopic panoramic video capture system using surface identification and distance registration technique

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20060606