JP2008217119A - System, image processor and image processing method - Google Patents

System, image processor and image processing method

Info

Publication number
JP2008217119A
JP2008217119A
Authority
JP
Japan
Prior art keywords
image
position
image processing
virtual space
means
Prior art date
Legal status
Withdrawn
Application number
JP2007050199A
Other languages
Japanese (ja)
Inventor
Yasuo Katano
康生 片野
Original Assignee
Canon Inc
キヤノン株式会社
Priority date
Filing date
Publication date
Application filed by Canon Inc (キヤノン株式会社)
Priority to JP2007050199A
Publication of JP2008217119A
Application status is Withdrawn

Abstract

PROBLEM TO BE SOLVED: To provide a technique that enables another user who observes a virtual space to observe an operation performed by a user operating an object in that virtual space.
SOLUTION: Cone region information indicating a cone region that has the position of the observer's viewpoint as its vertex and extends in the direction indicated by the viewpoint orientation is generated (S404) and transmitted to another computer (S405). The other computer specifies, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information (S455). The transparency of that whole or part is then controlled, and a virtual space image in which maximum transparency is set for everything else is generated (S456 to S458).
[Selected drawing] FIG. 4

Description

  The present invention relates to a technique for superimposing a virtual space image on a real space and presenting it to a user.

Description of the Related Art
  Conventionally, in the design and manufacturing field, design and prototyping work on computer systems has been performed in order to improve design accuracy and to speed up development.

  As a procedure, a basic design is first created on a 3D CAD system. Based on the 3D CAD design data, an actual prototype is then created, for example by rapid prototyping methods or as a simple prototype (mockup) using aluminum or foamed resin materials. A design review is performed on this real model (prototype) to verify the operability, maintainability, and ease of assembly of the product designed on the 3D CAD system, in an attempt to improve design accuracy. However, many issues have been pointed out regarding design reviews that require actually creating real models such as mockups, including the following problems.

・It takes time to create a mockup, so the review cannot reflect the latest design information.
・Frequent updates are impossible because of the cost of prototyping.
To address these issues, various attempts have been made. As one such attempt, virtual mock-up review (DMR), in which all verification work that would be performed on a real model is instead performed in a virtual space, is actively carried out.

  Reducing the number of prototypes lowers the share of prototyping in the development cost of the product, and makes it possible to develop products without degrading product quality even in situations where prototypes cannot be produced repeatedly, such as one-off development products or small-lot production products. In addition, shortening the trial period makes it possible to carry out design work that keeps pace with a short product development cycle. To that end, it is required that all ergonomics verification operations that can be performed on an actual mock-up, such as assembly procedures, maintenance, and wiring operations, be performed in a DMR in the virtual space.

  By performing DMR in the virtual space, the following advantages can be cited.

・A design review can always be performed with the latest data.
・Since the prototype exists only in the virtual space, there is no cost for creating prototypes.
・Design reviews that cannot be performed with an actual prototype become possible: switching the display of parts, switching among multiple design alternatives for only a portion of the model, superimposing virtual data calculated with analysis methods, and easily moving or deleting parts because they have no weight.
In this way, there are systems that use Virtual Reality (VR) technology to let the user experience a full-scale DMR by immersing the user in the virtual space. Moreover, patent document 1 discloses that a plurality of participants share a virtual space.
JP 2005-049996 A

  In an ergonomics verification system using real prototypes, especially when a plurality of people participate, it is desirable that participant B observe the ergonomics verification work performed by participant A and confirm whether the work is being performed in an appropriate posture and state. In practice, however, there is a problem that participant B's line of sight is blocked by an object, for example in an operation of reaching into a gap between a plurality of objects.

  To solve this problem, a DMR system in the virtual space makes it possible to observe the operation of participant A hidden behind an object by switching arbitrary objects between display and non-display or by changing their transparency. However, there is a risk that important parts involved in the verification work, or even the operator's region of interest, will be hidden or made transparent. For this reason, it is difficult to simultaneously maintain the visualization of the data necessary for verifying the workability of participant A and the view of participant B, which has greatly impeded carrying out ergonomics verification work comfortably.

  The present invention has been made in view of the above problems, and its object is to provide a technique that allows a user who observes the virtual space to observe an operation performed by another user operating an object in that virtual space.

  In order to achieve the object of the present invention, for example, the system of the present invention comprises the following arrangement.

That is, a system in which an image processing apparatus, comprising acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the acquisition means, and output means for outputting the virtual space image to the head-mounted display device, is connected to a network for each observer, wherein
the first image processing apparatus in the system comprises:
first means for generating cone region information indicating a cone region that has, as its vertex, the position acquired by the acquisition means of the first image processing apparatus and extends in the direction indicated by the orientation acquired by that acquisition means; and
second means for transmitting the cone region information via the network to a second image processing apparatus different from the first image processing apparatus, and
the second image processing apparatus comprises:
third means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
fourth means for controlling the generating means of the second image processing apparatus so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.

  In order to achieve the object of the present invention, for example, the system of the present invention comprises the following arrangement.

That is, a system in which an image processing apparatus, comprising first acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, second acquisition means for acquiring the position of the observer's hand, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the first acquisition means, and output means for outputting the virtual space image to the head-mounted display device, is connected to a network for each observer, wherein
the first image processing apparatus in the system comprises:
first means for generating cone region information indicating a cone region that has, as its vertex, the position acquired by the first acquisition means of the first image processing apparatus and extends from that position toward the position acquired by the second acquisition means of the first image processing apparatus; and
second means for transmitting the cone region information via the network to a second image processing apparatus different from the first image processing apparatus, and
the second image processing apparatus comprises:
third means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
fourth means for controlling the generating means of the second image processing apparatus so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.

  In order to achieve the object of the present invention, for example, an image processing apparatus of the present invention comprises the following arrangement.

That is, an image processing apparatus, in a system in which an image processing apparatus comprising acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the acquisition means, and output means for outputting the virtual space image to the head-mounted display device is connected to a network for each observer, the apparatus comprising:
receiving means for receiving, from another image processing apparatus via the network, cone region information indicating a cone region that has, as its vertex, the position acquired by the acquisition means of the other image processing apparatus and extends in the direction indicated by the orientation acquired by that acquisition means;
specifying means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
control means for controlling the generating means so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.

  In order to achieve the object of the present invention, for example, an image processing apparatus of the present invention comprises the following arrangement.

That is, an image processing apparatus, in a system in which an image processing apparatus comprising first acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, second acquisition means for acquiring the position of the observer's hand, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the first acquisition means, and output means for outputting the virtual space image to the head-mounted display device is connected to a network for each observer, the apparatus comprising:
receiving means for receiving, from another image processing apparatus via the network, cone region information indicating a cone region that has, as its vertex, the position acquired by the first acquisition means of the other image processing apparatus and extends from that position toward the position acquired by the second acquisition means of the other image processing apparatus;
specifying means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
control means for controlling the generating means so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.

  In order to achieve the object of the present invention, for example, an image processing method of the present invention comprises the following arrangement.

That is, an image processing method performed by an image processing apparatus in a system in which an image processing apparatus comprising acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the acquisition means, and output means for outputting the virtual space image to the head-mounted display device is connected to a network for each observer, the method comprising:
a receiving step of receiving, from another image processing apparatus via the network, cone region information indicating a cone region that has, as its vertex, the position acquired by the acquisition means of the other image processing apparatus and extends in the direction indicated by the orientation acquired by that acquisition means;
a specifying step of specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
a control step of controlling the generating means so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.

  In order to achieve the object of the present invention, for example, an image processing method of the present invention comprises the following arrangement.

That is, an image processing method performed by an image processing apparatus in a system in which an image processing apparatus comprising first acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, second acquisition means for acquiring the position of the observer's hand, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the first acquisition means, and output means for outputting the virtual space image to the head-mounted display device is connected to a network for each observer, the method comprising:
a receiving step of receiving, from another image processing apparatus via the network, cone region information indicating a cone region that has, as its vertex, the position acquired by the first acquisition means of the other image processing apparatus and extends from that position toward the position acquired by the second acquisition means of the other image processing apparatus;
a specifying step of specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
a control step of controlling the generating means so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.

  According to the configuration of the present invention, an object in the virtual space being observed by one user can be observed by another user who observes the virtual space, without being shielded by other virtual objects.

  Hereinafter, the present invention will be described in detail according to preferred embodiments with reference to the accompanying drawings.

[First Embodiment]
FIG. 1 is a diagram illustrating an appearance of a mixed reality presentation system for providing an observer (user) with a mixed reality space in which a virtual space is superimposed on a real space.

  In the figure, reference numeral 200 denotes a transmitter that generates a magnetic field. Reference numeral 100 denotes a head-mounted display device (hereinafter, HMD: Head Mounted Display) that is mounted on the observer's head and presents, in front of the observer's eyes, the result of combining the real space and the virtual space. The HMD 100 includes cameras 102R and 102L, display devices 101R and 101L, and a magnetic receiver 201.

  The cameras 102R and 102L capture moving images of the real space as seen from the positions of the right and left eyes of the observer wearing the HMD 100 on the head, and the captured image of each frame is output to the computer 400 at the subsequent stage. Hereinafter, in descriptions common to the cameras 102R and 102L, they may be collectively referred to as the "camera 102".

  The display devices 101R and 101L are mounted on the HMD 100 so that they are positioned in front of the right eye and the left eye, respectively, when the observer wears the HMD 100 on the head, and they display images based on the image signals output from the computer 400. Accordingly, the images generated by the computer 400 are presented in front of the observer's right and left eyes. Hereinafter, in descriptions common to the display devices 101R and 101L, they may be collectively referred to as the "display device 101".

  The magnetic receiver 201 detects changes in the magnetic field generated by the transmitter 200 and outputs a detection result signal to the position/orientation measurement device 205 at the subsequent stage. This signal indicates the change in the magnetic field detected according to the position and orientation of the magnetic receiver 201 in the coordinate system (sensor coordinate system) whose origin is the position of the transmitter 200 and whose x, y, and z axes are three mutually orthogonal axes.

  Based on this signal, the position / orientation measurement apparatus 205 obtains the position / orientation of the magnetic receiver 201 in the sensor coordinate system, and outputs data indicating the obtained position / orientation to the computer 400 at the subsequent stage.

  The magnetic receiver 202 can be held by a viewer and changed in position and posture. The magnetic receiver 202 is the same as the magnetic receiver 201 described above, and outputs a signal indicating a change in the magnetic field detected according to its own position and orientation to the position and orientation measurement device 205.

  Based on this signal, the position / orientation measuring apparatus 205 obtains the position / orientation of the magnetic receiver 202 in the sensor coordinate system, and outputs data indicating the obtained position / orientation to the computer 400 at the subsequent stage.

  The computer 400 is an image processing apparatus that generates the image signals to be output to the display devices 101R and 101L of the HMD 100, receives data from the position/orientation measurement device 205, manages these processes, and so on. This computer is typically a PC (personal computer), a WS (workstation), or the like. FIG. 6 is a diagram illustrating the hardware configuration of the computer 400.

  Reference numeral 1001 denotes a CPU. The CPU 1001 controls the entire computer 400 using computer programs and data stored in the RAM 1002 and the ROM 1003, and controls data communication with external devices connected to the I/Fs (interfaces) 1007 and 1008. The CPU 1001 also executes each of the processes, described later, performed by the computer 400.

  Reference numeral 1002 denotes a RAM, which has an area for temporarily storing computer programs and data loaded from the external storage device 1005 and an area for temporarily storing data received from the outside via the I/Fs 1007 and 1008. The RAM 1002 also has a work area used when the CPU 1001 executes various processes. That is, the RAM 1002 can provide various areas as appropriate.

  A ROM 1003 stores a boot program, setting data of the computer 400, and the like.

  An operation unit 1004 includes a keyboard, a mouse, a joystick, and the like, and various instructions can be input to the CPU 1001 when operated by an operator of the computer 400.

  An external storage device 1005 functions as a large-capacity information storage device, typified by a hard disk drive. It stores an OS (operating system) as well as the computer programs and data for causing the CPU 1001 to execute the processes, described below, performed by the computer 400. The computer programs and data stored in the external storage device 1005 are loaded into the RAM 1002 as appropriate under the control of the CPU 1001. Data described as known information in the following description (or data obviously necessary for the processing described below) is also stored in the external storage device 1005 and is likewise loaded into the RAM 1002 as appropriate under the control of the CPU 1001 as necessary.

  A display unit 1006 includes a CRT, a liquid crystal screen, and the like, and displays a processing result by the CPU 1001 using images, characters, and the like.

  Reference numeral 1007 denotes an I / F, to which the position / orientation measuring apparatus 205, the camera 102, the display apparatus 101, and the like are connected. Accordingly, the computer 400 can perform data communication with the position / orientation measurement apparatus 205, the camera 102, the display apparatus 101, and the like via the I / F 1007.

  Reference numeral 1008 denotes an I / F for connecting the computer 400 to a network described later. Therefore, the computer 400 can perform data communication with each device connected to the network via the I / F 1008.

  Reference numeral 1009 denotes a bus connecting the above-described units.

  The computer 400 having the above configuration captures the images of the real space obtained from the cameras 102R and 102L, and arranges one or more virtual objects in the virtual space by the processes described below. Then, based on the position and orientation obtained from the magnetic receiver 201, the viewpoint of the observer (in this embodiment, the position of the camera 102) is obtained, and the images (virtual space images) of the virtual space containing the arranged virtual objects as seen from the cameras 102R and 102L are generated. The generated images are superimposed on the previously captured real space images, and the superimposed images are output to the display devices 101R and 101L. As a result, an image of the mixed reality space corresponding to the position and orientation of each eye is displayed in front of the right and left eyes of the observer wearing the HMD 100 on the head.

  FIG. 2 is a diagram showing a functional configuration of the computer 400. In the present embodiment, description will be made assuming that each unit shown in the figure is configured by software. The software execution process is performed by the CPU 1001. However, some or all of the units shown in FIG. 2 may be configured by hardware.

  Reference numerals 401R and 401L denote video capture units that capture images input from the cameras 102R and 102L as digital signals, respectively.

  Reference numeral 404 denotes a position / orientation information input unit that captures data output from the position / orientation measurement apparatus 205. That is, this data is data indicating the position and orientation of the magnetic receiver 201 in the sensor coordinate system, and data indicating the position and orientation of the magnetic receiver 202 in the sensor coordinate system.

  Reference numeral 406 denotes 3DCG drawing data, which is data for generating an image of a virtual object. The 3DCG drawing data includes a virtual object arrangement position, data indicating the geometric shape and color of the virtual object, texture data, and the like.

  Reference numeral 405 denotes a position / orientation calculation unit. The position / orientation calculation unit 405 obtains the position / orientation of the cameras 102R and 102L in the sensor coordinate system using data indicating the position / orientation in the sensor coordinate system of the magnetic receiver 201 input from the position / orientation information input unit 404. Such a process is a general process.

  For example, the position and orientation relationship between the magnetic receiver 201 and the camera 102R and that between the magnetic receiver 201 and the camera 102L are measured in advance as bias 1 and bias 2, respectively. The data of biases 1 and 2 are registered, for example, in the external storage device 1005 so that the position/orientation calculation unit 405 can use them. When the position/orientation calculation unit 405 receives data indicating the position and orientation of the magnetic receiver 201 in the sensor coordinate system, it can obtain the position and orientation of the camera 102R in the sensor coordinate system by adding the bias 1 data to it. Similarly, it can obtain the position and orientation of the camera 102L in the sensor coordinate system by adding the bias 2 data.
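
  As an illustrative sketch of this bias addition (the 4x4 homogeneous-transform representation, helper names, and numerical values below are assumptions for illustration, not taken from the embodiment), the camera pose can be obtained by composing the measured receiver pose with the pre-measured bias:

```python
import numpy as np

def make_pose(rotation_3x3, translation_3):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation_3x3
    pose[:3, 3] = translation_3
    return pose

# Bias 1: fixed transform from the magnetic receiver 201 to the camera 102R,
# measured in advance and stored in the external storage device (example values).
bias_receiver_to_camera_r = make_pose(np.eye(3), np.array([0.03, 0.0, 0.05]))

def camera_pose_from_receiver(receiver_pose_in_sensor, bias_receiver_to_camera):
    """Compose the measured receiver pose with the pre-measured bias to obtain
    the camera pose in the sensor coordinate system ("adding" the bias)."""
    return receiver_pose_in_sensor @ bias_receiver_to_camera

# Example: receiver 201 measured at (1, 1.5, 0.2) with identity orientation.
receiver_pose = make_pose(np.eye(3), np.array([1.0, 1.5, 0.2]))
camera_r_pose = camera_pose_from_receiver(receiver_pose, bias_receiver_to_camera_r)
print(camera_r_pose[:3, 3])  # position of camera 102R in the sensor coordinate system
```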

  In this manner, the position / orientation calculation unit 405 can obtain the position / orientation of the cameras 102R and 102L in the sensor coordinate system as the viewpoint position / orientation.

  Reference numeral 407 denotes a CG rendering unit. First, one or more virtual objects are arranged in the virtual space according to the processing described later. Then, an image of a virtual space (an image of a virtual space including one or more arranged virtual objects) that is visible according to the position and orientation of the cameras 102R and 102L is generated. Note that processing for generating an image of a virtual space that can be viewed from a viewpoint having a predetermined position and orientation is a well-known technique, and thus detailed description thereof will be omitted. Hereinafter, the cameras 102R and 102L may be collectively referred to as “viewpoint”.

  Reference numerals 402R and 402L denote video composition units. The video composition unit 402R superimposes the virtual space image generated by the CG rendering unit 407 according to the position and orientation of the camera 102R on the real space image input from the video capture unit 401R, and outputs the superimposed image to the video generation unit 403R. Similarly, the video composition unit 402L superimposes the virtual space image generated by the CG rendering unit 407 according to the position and orientation of the camera 102L on the real space image input from the video capture unit 401L, and outputs it to the video generation unit 403L. In this way, a mixed reality space image seen according to the position and orientation of the camera 102R and one seen according to the position and orientation of the camera 102L can be generated.

  The video generation units 403R and 403L convert the mixed reality space images output from the video synthesis units 402R and 402L into analog signals, and output the analog signals to the display devices 101R and 101L, respectively. Thereby, an image of the mixed reality space corresponding to each eye is displayed in front of the right eye and the left eye of the observer wearing the HMD 100 on the head.
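
  The superposition performed by the video composition units can be viewed as an alpha-over composite of the rendered virtual space image onto the captured real space image. The following sketch assumes RGBA virtual frames and RGB real frames held as NumPy arrays, which is only one possible representation and not the actual implementation:

```python
import numpy as np

def compose_mixed_reality(real_rgb, virtual_rgba):
    """Superimpose a rendered virtual-space image (RGBA) on a real-space image (RGB).
    Where the virtual image is transparent (alpha = 0) the real image shows through."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    virtual_rgb = virtual_rgba[..., :3].astype(np.float32)
    out = virtual_rgb * alpha + real_rgb.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)

# Example with dummy 480x640 frames.
real = np.full((480, 640, 3), 128, dtype=np.uint8)    # captured real-space frame
virtual = np.zeros((480, 640, 4), dtype=np.uint8)     # rendered virtual-space frame
virtual[100:200, 100:200] = (255, 0, 0, 255)          # an opaque red virtual object
mixed = compose_mixed_reality(real, virtual)          # mixed reality space image
```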

  In this embodiment, the mixed reality presentation system shown in FIG. 1 is prepared for the number of observers, and a network system in which the computer 400 in each system is connected to a network such as a LAN or the Internet is used. Thereby, a plurality of observers can experience the same mixed reality space, and the other observer can observe the work content of one observer.

  Various configurations have been proposed for sharing and experiencing the same mixed reality space among a plurality of observers, and the present embodiment is not limited to a specific system configuration. In the following, as an example, a configuration is used in which the system shown in FIG. 1 is prepared for each observer and the computer 400 in each system is connected to a network such as a LAN or the Internet.

  FIG. 3 is a diagram showing a situation where the observer 501 and the observer 502 share and observe the mixed reality space in which the virtual objects 511 and 512 are arranged. As described above, the mixed reality presentation system is provided for each of the viewers (501, 502) so that the viewers 501 and 502 can experience the same mixed reality space. The computer 400 in each mixed reality presentation system will be described as being connected to a network.

  The observer 501 holds the magnetic receiver 202 in the hand 501b, and when the arm 501a or the hand 501b is moved, the position and orientation of the hand 501b are measured by the magnetic receiver 202. Since the computer 400 can thus obtain the position and orientation of the hand 501b in the mixed reality space, when an operation such as screwing is performed with the hand 501b, the result of screwing the virtual object 511 or the virtual object 512 can also be reflected, for example.

  Here, a case is described in which the observer 501 puts the hand 501b into the space between the virtual object 511 and the virtual object 512 in order to verify whether that space is large enough for the hand 501b to enter and work in. Such verification is also presented to the observer 502.

  To perform this verification, the observer 501 puts the hand 501b between the virtual object 511 and the virtual object 512 as shown in FIG. 3; at this time, the line of sight 570 of the observer 501 is directed at the area around the hand 501b. The direction of the line of sight 570 corresponds to the viewpoint orientation calculated by the position/orientation calculation unit 405. Reference numerals 571a and 572b denote the bounds of the field of view of the observer 501. The observer 501 can therefore see the space between the virtual object 511 and the virtual object 512. However, even if the observer 502 tries to observe that space, it is shielded by the virtual object 512 and cannot be observed.

  This makes it impossible to present the space around the hand 501b to the observer 502. Therefore, in the present embodiment, everything other than the space that the observer 501 is presumably gazing at (the gaze space) is presented to the observer 502 in a display form that allows the observer 502 to see inside the gaze space. Hereinafter, the processing performed, in order to realize such presentation, by the mixed reality presentation system provided for the observer 501 (hereinafter, mixed reality presentation system 1) and the mixed reality presentation system provided for the observer 502 (hereinafter, mixed reality presentation system 2) will be described.

  Although the computer 400 in the mixed reality presentation system 1 and the computer 400 in the mixed reality presentation system 2 have the same configuration, they are referred to as the computer 400-1 and the computer 400-2 (the other image processing apparatus), respectively, in order to distinguish them in the explanation.

  FIG. 4 is a flowchart of processing performed by the computer 400-1 and the computer 400-2.

  Note that computer programs and data for causing the CPU 1001 of the computer 400-1 to execute the processing performed by the computer 400-1 (steps S401 to S409) are stored in the external storage device 1005 of the computer 400-1. The computer program and data are loaded into the RAM 1002 of the computer 400-1 as appropriate under the control of the CPU 1001 of the computer 400-1. Then, when the CPU 1001 of the computer 400-1 executes processing using the loaded computer program and data, the computer 400-1 executes processing of steps S401 to S409.

  In addition, computer programs and data for causing the CPU 1001 of the computer 400-2 to execute processing (steps S451 to S461) performed by the computer 400-2 are stored in the external storage device 1005 of the computer 400-2. The computer program and data are loaded into the RAM 1002 of the computer 400-2 as appropriate under the control of the CPU 1001 of the computer 400-2. When the CPU 1001 of the computer 400-2 executes processing using the loaded computer program and data, the computer 400-2 executes processing of steps S451 to S461.

  First, in step S401, an image for one frame sent from the camera 102 in the mixed reality presentation system 1 is acquired in the RAM 1002 via the I / F 1007 as a real space image.

  In step S402, data sent from the position/orientation measurement device 205 in the mixed reality presentation system 1 is acquired into the RAM 1002 via the I/F 1007.

  Next, in step S403, based on the data acquired in step S402, the position/orientation calculation unit 405 performs the processing described above to obtain the position and orientation of the camera 102 (the position and orientation of the viewpoint of the observer using the mixed reality presentation system 1).

  In step S404, cone region information is generated that indicates a cone region whose vertex is the viewpoint position obtained in step S403 and which extends from that vertex in the line-of-sight direction based on the viewpoint orientation obtained in step S403. The cone region information is described here.

  FIG. 7 is a diagram illustrating a cone region. In the figure, reference numeral 701 indicates the position of the viewpoint obtained in step S403. As the position of the viewpoint, both the right eye and the left eye of the observer can be considered, but only one eye will be described here for the sake of simplicity.

  Reference numeral 702 denotes the line-of-sight vector (visual axis) defined by the orientation component of the viewpoint obtained in step S403, and corresponds to the line of sight 570 shown in FIG. 3. Reference numeral 750 denotes the cone region. In the drawing a circular cone is shown as the cone region, but the cone region is not limited to a circular cone. As described above, the cone region 750 has its vertex at the viewpoint position 701 and extends along the line-of-sight vector 702. The length of the cone region in the direction of the line-of-sight vector 702 is assumed to be sufficiently long.

  The angle indicated by reference numeral 760 may be determined according to the angle of view of the camera 102. In the present embodiment, the angle indicated by 760 is matched to the angle of view of the camera 102.

  Therefore, when cone region information indicating such a cone region is generated, it includes information indicating the position of the viewpoint, information indicating the angle of view of the camera 102, and information indicating the line-of-sight vector. Note that the cone region information may contain any information as long as it can define the cone region.
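
  As a concrete (hypothetical) representation of the cone region information exchanged in steps S404/S405 and S454, the items listed above could be packed into a small structure such as the following; the field names and the JSON serialization are assumptions for illustration, not the actual data format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ConeRegionInfo:
    apex: tuple            # viewpoint position (vertex of the cone), sensor coordinates
    axis: tuple            # unit line-of-sight vector defined by the viewpoint orientation
    half_angle_deg: float  # half the opening angle; here matched to the camera's angle of view

    def to_bytes(self) -> bytes:
        """Serialize for transmission to the other computer over the network."""
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def from_bytes(data: bytes) -> "ConeRegionInfo":
        d = json.loads(data.decode("utf-8"))
        return ConeRegionInfo(tuple(d["apex"]), tuple(d["axis"]), d["half_angle_deg"])

# Example corresponding to step S404: apex at the viewpoint, axis along the visual axis.
info = ConeRegionInfo(apex=(1.0, 1.5, 0.2), axis=(0.0, 0.0, -1.0), half_angle_deg=22.5)
payload = info.to_bytes()                      # transmitted in step S405
restored = ConeRegionInfo.from_bytes(payload)  # received in step S454
```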

  Returning to FIG. 4, in step S405 the cone region information generated in this way is transmitted to the computer 400-2 via the I/F 1008. The processing performed on the computer 400-2 side will be described later.

  Next, in step S406, first, each virtual object is arranged in the virtual space with a predetermined position and orientation or a position and orientation determined by various operations on the virtual space. Then, an image that is seen when the virtual space is viewed from the viewpoint having the position and orientation determined in step S403 is generated as a virtual space image.

  In step S407, the virtual space image generated in step S406 is combined with the real space image acquired in the RAM 1002 in step S401 to generate a mixed reality space image (synthesized image). Various processes have conventionally been proposed for generating a mixed reality space image. In this embodiment, an image of a mixed reality space may be generated by any process.

  In step S408, the composite image data generated in this way is transmitted to the display device 101 in the mixed reality presentation system 1 via the I / F 1007. As a result, the composite image generated in step S407 is presented in front of the observer (observer A) using the mixed reality presentation system 1.

  Next, in step S409, it is determined whether an instruction to end the process is input via the operation unit 1004, and whether a condition for ending the process is satisfied. As a result of the determination, if an instruction to end the process is input via the operation unit 1004 or a condition for ending the process is satisfied, the process ends. On the other hand, if the instruction to end the process is not input via the operation unit 1004 and the conditions for ending the process are not satisfied, the process returns to step S401 to perform the subsequent processes for the next frame. repeat.

  Next, processing performed by the computer 400-2 in the mixed reality presentation system 2 will be described.

  First, in step S451, an image for one frame sent from the camera 102 in the mixed reality presentation system 2 is acquired as a real space image in the RAM 1002 via the I / F 1007.

  In step S452, data sent from the position / orientation measurement apparatus 205 in the mixed reality presentation system 2 is acquired in the RAM 1002 via the I / F 1007.

  Next, in step S453, based on the data acquired in step S452, the position/orientation calculation unit 405 performs the processing described above to obtain the position and orientation of the camera 102 (the position and orientation of the viewpoint of the observer using the mixed reality presentation system 2, i.e. observer B).

  In step S454, the cone area information transmitted in step S405 is acquired in the RAM 1002 via the I / F 1008.

  Next, in step S455, each virtual object is first arranged in the virtual space with a predetermined position and orientation, or with a position and orientation determined by various operations on the virtual space. The configuration of the virtual space is common to all observers. Next, when the cone region indicated by the cone region information received in step S454 is set in the virtual space, the virtual objects (in whole or in part) included in the cone region are specified.

  Various units of specification are conceivable when identifying the virtual objects included in the cone region. For example, for a virtual object composed of a plurality of parts, such as a car or a desk, if the assembly can be regarded as one object, the specification may be made in units of objects or in units of parts. If the virtual object is composed of polygons, it may be specified in units of polygons.

  In addition, since processing for specifying objects, parts, polygons, and the like included in the cone region is well known, description thereof will be omitted.
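
  One simple way to decide whether a polygon (or a whole object) falls inside the cone region is to test its vertices against the cone's apex, axis, and half angle. The sketch below is one possible approach under those assumptions, not necessarily the method used by the actual system:

```python
import numpy as np

def point_in_cone(point, apex, axis, half_angle_deg):
    """Return True if `point` lies inside the (sufficiently long) cone whose vertex is
    `apex`, whose axis is the unit vector `axis`, and whose half angle is given."""
    v = np.asarray(point, dtype=float) - np.asarray(apex, dtype=float)
    dist = np.linalg.norm(v)
    if dist == 0.0:
        return True
    cos_angle = np.dot(v / dist, np.asarray(axis, dtype=float))
    return cos_angle >= np.cos(np.radians(half_angle_deg))

def polygon_in_cone(vertices, apex, axis, half_angle_deg):
    """Treat a polygon as included if any of its vertices is inside the cone."""
    return any(point_in_cone(p, apex, axis, half_angle_deg) for p in vertices)

# Example: a triangle in front of the viewpoint, cone looking down the -z axis.
triangle = [(0.1, 0.0, -2.0), (0.0, 0.1, -2.0), (-0.1, 0.0, -2.0)]
print(polygon_in_cone(triangle, apex=(0, 0, 0), axis=(0, 0, -1), half_angle_deg=22.5))  # True
```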

  Next, in step S456, the alpha value is controlled so that the virtual objects existing outside the cone region (the virtual objects not specified in step S455), that is, everything other than the specified whole or part of each virtual object, is drawn transparently, and the maximum transparency is set for it. Note that complete transparency is not required, as long as the rendering is see-through. Since techniques for setting an arbitrary transparency for a part or the whole of a virtual object are well known, a description of such techniques is omitted.

  Next, in step S457, within the region specified in step S455, alpha values are set so that the transparency is lower the closer a portion is to the visual axis of the cone region indicated by the cone region information, and higher the farther it is from that axis. For example, when the virtual objects are composed of polygons, among the polygons in the region specified in step S455, the transparency of polygons close to the visual axis is lowered and that of polygons far from the visual axis is raised. The degree to which the transparency is raised or lowered according to the distance from the visual axis is not particularly limited; for example, each observer may set it on his or her own computer.
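
  As a sketch of the transparency control in steps S456 and S457, one could, for example, assign maximum transparency to polygons outside the cone and let the transparency inside the cone grow with the angular distance from the visual axis. The linear ramp below is an illustrative choice, since the degree of change is left unspecified above:

```python
import numpy as np

def polygon_alpha(centroid, apex, axis, half_angle_deg):
    """Return an opacity value in [0, 1] for a polygon (1 = opaque, 0 = fully transparent).
    Outside the cone the maximum transparency is set (alpha 0); inside, the opacity
    falls off linearly from the visual axis toward the cone surface."""
    v = np.asarray(centroid, float) - np.asarray(apex, float)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return 1.0
    cos_angle = np.clip(np.dot(v / norm, np.asarray(axis, float)), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    if angle > half_angle_deg:
        return 0.0                       # step S456: outside the cone -> maximum transparency
    return 1.0 - angle / half_angle_deg  # step S457: closer to the visual axis -> less transparent

# Example: a polygon on the axis is opaque, one near the cone edge is nearly transparent.
print(polygon_alpha((0.0, 0.0, -2.0), (0, 0, 0), (0, 0, -1), 22.5))  # 1.0
print(polygon_alpha((0.8, 0.0, -2.0), (0, 0, 0), (0, 0, -1), 22.5))  # close to 0
```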

  Next, in step S458, an image of the virtual space in which the virtual objects with the alpha values set in steps S456 and S457 are arranged, as seen from the viewpoint with the position and orientation obtained in step S453, is generated as a virtual space image.

  In step S459, the virtual space image generated in step S458 is combined with the real space image acquired in the RAM 1002 in step S451, thereby generating an image (composite image) of the mixed reality space. Various processes have conventionally been proposed for generating a mixed reality space image. In this embodiment, an image of a mixed reality space may be generated by any process.

  In step S460, the composite image data generated in this way is transmitted to the display device 101 in the mixed reality presentation system 2 via the I / F 1007. As a result, the composite image generated in step S459 is presented in front of the eyes of the viewer B.

  In such a composite image, the virtual objects outside the cone region that extends from the viewpoint position of observer A in the direction of observer A's line of sight are transparent. The observer B can therefore observe the inside of the area that observer A is presumably gazing at (the virtual objects in the cone region, observer A's hand, and so on). Furthermore, within the cone region the transparency is lower the closer to observer A's visual axis, so the area that observer A is watching most closely can be presented explicitly to observer B.

  Next, in step S461, it is determined whether an instruction to end the process is input via the operation unit 1004, and whether a condition for ending the process is satisfied. As a result of the determination, if an instruction to end the process is input via the operation unit 1004 or a condition for ending the process is satisfied, the process ends. On the other hand, if an instruction to end the process is not input via the operation unit 1004 and the condition for ending the process is not satisfied, the process returns to step S451 to perform the subsequent processes for the next frame. repeat.

[Second Embodiment]
In the first embodiment, the cone area has a vertex at the position of the viewpoint obtained in step S403, and extends from the vertex in the line-of-sight direction based on the attitude of the viewpoint obtained in step S403. However, there are other conceivable settings for the cone area.

  In the present embodiment, the position of the viewpoint obtained in step S403 is set as a vertex, and the cone region is set in a direction extending from the vertex to the position of the hand 501b. Since the magnetic receiver 202 is held by the hand 501b, the position of the hand 501b can be measured.

  FIG. 5 is a diagram explaining how the cone region is set, in the situation shown in FIG. 3, by a method different from that of the first embodiment. In the figure, 599 indicates a straight line passing through the position of the viewpoint and the position of the hand 501b.

  Note that the present embodiment differs from the first embodiment only in the cone region indicated by the cone region information, and the other processes may be performed in the same manner as in the first embodiment. In addition, where the first embodiment used the visual axis, the present embodiment uses the vector along the straight line passing through the position of the viewpoint and the position of the hand 501b (straight line 599).
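
  The cone's axis in this embodiment is therefore the normalized vector from the viewpoint position to the measured hand position rather than the gaze vector. A small sketch follows (positions as plain 3-tuples; the function name and example values are illustrative):

```python
import numpy as np

def cone_axis_toward_hand(viewpoint_pos, hand_pos):
    """Axis of the cone region in the second embodiment: the unit vector along the
    straight line from the viewpoint position to the measured hand position."""
    v = np.asarray(hand_pos, dtype=float) - np.asarray(viewpoint_pos, dtype=float)
    n = np.linalg.norm(v)
    if n == 0.0:
        raise ValueError("viewpoint and hand positions coincide; axis is undefined")
    return v / n

# Example: viewpoint at eye height, hand reaching forward and downward.
axis = cone_axis_toward_hand((0.0, 1.6, 0.0), (0.2, 1.1, -0.6))
print(axis)  # used in place of the gaze vector when building the cone region information
```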

  In the first and second embodiments, a video see-through HMD is used as the HMD 100, but an optical see-through HMD may be used instead. In that case, an appropriate value need only be set for what is used as the angle of view in the above embodiments.

  In the first and second embodiments, a single apparatus is described as the side that receives the cone region information. However, even if there are a plurality of apparatuses that receive the cone region information, the processing performed on each receiving side is the same. As described above, how the transparency is changed for the virtual space within the cone region may be set on each receiving side.

  In the first and second embodiments, a magnetic sensor is used as the sensor. However, any sensor may be used as long as it enables position and orientation measurement, and a well-known technique for measuring the position and orientation from captured images may also be used.

[Other Embodiments]
Needless to say, the object of the present invention can be achieved as follows. That is, a recording medium (or storage medium) that records a program code of software that implements the functions of the above-described embodiments is supplied to a system or apparatus. Needless to say, such a storage medium is a computer-readable storage medium. Then, the computer (or CPU or MPU) of the system or apparatus reads and executes the program code stored in the recording medium. In this case, the program code itself read from the recording medium realizes the functions of the above-described embodiment, and the recording medium on which the program code is recorded constitutes the present invention.

  Further, it goes without saying that the present invention also includes the case where an operating system (OS) or the like running on the computer performs part or all of the actual processing based on the instructions of the program code read by the computer, and the functions of the above-described embodiments are realized by that processing.

  Furthermore, it also goes without saying that the present invention includes the case where the program code read from the recording medium is written into a memory provided in a function expansion card inserted into the computer or a function expansion unit connected to the computer, and thereafter a CPU included in the function expansion card or function expansion unit performs part or all of the actual processing based on the instructions of the program code, so that the functions of the above-described embodiments are realized by that processing.

  When the present invention is applied to the recording medium, program code corresponding to the flowchart described above is stored in the recording medium.

FIG. 1 is a diagram showing the appearance of a mixed reality presentation system for providing an observer (user) with a mixed reality space in which a virtual space is superimposed on a real space.
FIG. 2 is a diagram illustrating the functional configuration of the computer 400.
FIG. 3 is a diagram showing how the observer 501 and the observer 502 observe the mixed reality space in which the virtual object 511 and the virtual object 512 are arranged.
FIG. 4 is a flowchart of the processing performed by the computer 400-1 and the computer 400-2.
FIG. 5 is a diagram explaining how the cone region is set by a method different from that of the first embodiment in the situation shown in FIG. 3.
FIG. 6 is a diagram illustrating the hardware configuration of the computer 400.
FIG. 7 is a diagram showing a cone region.

Claims (12)

  1. A system in which an image processing apparatus, comprising acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the acquisition means, and output means for outputting the virtual space image to the head-mounted display device, is connected to a network for each observer, wherein
    a first image processing apparatus in the system comprises:
    first means for generating cone region information indicating a cone region that has, as its vertex, the position acquired by the acquisition means of the first image processing apparatus and extends in the direction indicated by the orientation acquired by that acquisition means; and
    second means for transmitting the cone region information via the network to a second image processing apparatus different from the first image processing apparatus, and
    the second image processing apparatus comprises:
    third means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
    fourth means for controlling the generating means of the second image processing apparatus so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.
  2.   The system according to claim 1, wherein the cone region information includes information indicating the position of the viewpoint, information indicating the attitude of the viewpoint, and information indicating an angle of view from the viewpoint. .
  3.   The system according to claim 2, wherein the fourth means refers to the information indicating the orientation included in the cone region information and, within the whole or part, decreases the transparency of regions closer to the visual axis based on the referenced information and increases the transparency of regions farther from it.
  4. A system in which an image processing apparatus, comprising first acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, second acquisition means for acquiring the position of the observer's hand, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the first acquisition means, and output means for outputting the virtual space image to the head-mounted display device, is connected to a network for each observer, wherein
    a first image processing apparatus in the system comprises:
    first means for generating cone region information indicating a cone region that has, as its vertex, the position acquired by the first acquisition means of the first image processing apparatus and extends from that position toward the position acquired by the second acquisition means of the first image processing apparatus; and
    second means for transmitting the cone region information via the network to a second image processing apparatus different from the first image processing apparatus, and
    the second image processing apparatus comprises:
    third means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
    fourth means for controlling the generating means of the second image processing apparatus so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.
  5. The system according to claim 4, wherein the cone region information includes information indicating the position of the viewpoint, information indicating the orientation of a straight line passing through the position of the viewpoint and the position of the hand, and information indicating the angle of view from the viewpoint.
  6.   The system according to claim 5, wherein the fourth means refers to the information indicating the orientation of the straight line included in the cone region information and, within the whole or part, decreases the transparency of regions closer to the axis based on the referenced information and increases the transparency of regions farther from it.
  7. An image processing apparatus, in a system in which an image processing apparatus comprising acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the acquisition means, and output means for outputting the virtual space image to the head-mounted display device is connected to a network for each observer, the image processing apparatus comprising:
    receiving means for receiving, from another image processing apparatus via the network, cone region information indicating a cone region that has, as its vertex, the position acquired by the acquisition means of the other image processing apparatus and extends in the direction indicated by the orientation acquired by that acquisition means;
    specifying means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
    control means for controlling the generating means so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.
  8. An image processing apparatus, in a system in which an image processing apparatus comprising first acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device, second acquisition means for acquiring the position of the observer's hand, generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged as seen from a viewpoint based on the position and orientation acquired by the first acquisition means, and output means for outputting the virtual space image to the head-mounted display device is connected to a network for each observer, the image processing apparatus comprising:
    receiving means for receiving, from another image processing apparatus via the network, cone region information indicating a cone region that has, as its vertex, the position acquired by the first acquisition means of the other image processing apparatus and extends from that position toward the position acquired by the second acquisition means of the other image processing apparatus;
    specifying means for specifying, among the virtual objects in the virtual space, the whole or a part of each virtual object included in the region indicated by the cone region information; and
    control means for controlling the generating means so as to control the transparency of the whole or part and to generate a virtual space image in which maximum transparency is set for everything other than the whole or part.
  9.   An image processing method performed by an image processing apparatus in a system in which an image processing apparatus is connected to a network for each observer, each image processing apparatus comprising: acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device; generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged, as viewed from a viewpoint based on the position and orientation acquired by the acquisition means; and output means for outputting the virtual space image to the head-mounted display device, the method comprising:
    a receiving step of receiving, from another image processing apparatus via the network, cone region information indicating a cone region whose vertex is the position acquired by the acquisition means of the other image processing apparatus and which extends in the direction indicated by the orientation acquired by that acquisition means;
    a specifying step of specifying, among the virtual objects in the virtual space, the whole or a part of a virtual object included in the region indicated by the cone region information; and
    a control step of controlling the transparency of the whole or the part, and of controlling the generating means so as to generate a virtual space image in which maximum transparency is set for everything other than the whole or the part.
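  Putting the receiving, specifying, and control steps of the method claim together, one possible per-frame flow on the receiving apparatus is sketched below, reusing the helper functions above. The network, scene, and renderer objects and their methods are hypothetical stand-ins for whatever communication and rendering layers an implementation provides; the patent does not prescribe them.

    def process_frame(network, scene, renderer):
        # One possible per-frame flow on the receiving apparatus (illustrative only).
        cone = network.receive_cone_region()                            # receiving step
        selected = specify_parts_in_cone(scene.virtual_objects, cone)   # specifying step

        for obj_id, vertices in scene.virtual_objects.items():          # control step
            if obj_id not in selected:
                renderer.set_transparency(obj_id, 1.0)                  # maximum transparency elsewhere
                continue
            mask = selected[obj_id]
            alphas = [transparency_for_point(p, cone["apex"], cone["dir"], cone["half_angle"])
                      if inside else 1.0
                      for p, inside in zip(vertices, mask)]
            renderer.set_vertex_transparencies(obj_id, alphas)

        renderer.render_from(scene.local_viewpoint)                     # generating and output steps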
  10.   An image processing method performed by an image processing apparatus in a system in which an image processing apparatus is connected to a network for each observer, each image processing apparatus comprising: first acquisition means for acquiring the position and orientation of the viewpoint of an observer wearing a head-mounted display device; second acquisition means for acquiring the position and orientation of the observer's hand; generating means for generating, as a virtual space image, an image of a virtual space in which one or more virtual objects are arranged, as viewed from a viewpoint based on the position and orientation acquired by the first acquisition means; and output means for outputting the virtual space image to the head-mounted display device, the method comprising:
    a receiving step of receiving, from another image processing apparatus via the network, cone region information indicating a cone region whose vertex is the position acquired by the first acquisition means of the other image processing apparatus and which extends from that position toward the position acquired by the second acquisition means of the other image processing apparatus;
    a specifying step of specifying, among the virtual objects in the virtual space, the whole or a part of a virtual object included in the region indicated by the cone region information; and
    a control step of controlling the transparency of the whole or the part, and of controlling the generating means so as to generate a virtual space image in which maximum transparency is set for everything other than the whole or the part.
  11.   A computer program for causing a computer to execute the image processing method according to claim 9.
  12.   A computer-readable storage medium storing the computer program according to claim 11.
JP2007050199A 2007-02-28 2007-02-28 System, image processor and image processing method Withdrawn JP2008217119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007050199A JP2008217119A (en) 2007-02-28 2007-02-28 System, image processor and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007050199A JP2008217119A (en) 2007-02-28 2007-02-28 System, image processor and image processing method

Publications (1)

Publication Number Publication Date
JP2008217119A true JP2008217119A (en) 2008-09-18

Family

ID=39837130

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007050199A Withdrawn JP2008217119A (en) 2007-02-28 2007-02-28 System, image processor and image processing method

Country Status (1)

Country Link
JP (1) JP2008217119A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014526157A (en) * 2011-06-23 2014-10-02 マイクロソフト コーポレーション Classification of the total field of view of the head mounted display
JP2017078891A (en) * 2015-10-19 2017-04-27 株式会社コロプラ Image generation device, image generation method, and image generation program
WO2017069075A1 (en) * 2015-10-19 2017-04-27 株式会社コロプラ Image generation device, image generation method, and non-temporary recording medium in which image generation program is stored
JP6112689B1 (en) * 2016-02-17 2017-04-12 株式会社菊池製作所 Superimposed image display system
JP2017146758A (en) * 2016-02-17 2017-08-24 株式会社菊池製作所 Overlapping image display system

Similar Documents

Publication Publication Date Title
CA2820950C (en) Optimized focal area for augmented reality displays
US8957948B2 (en) Geometric calibration of head-worn multi-camera eye tracking system
JP4679661B1 (en) Information presenting apparatus, information presenting method, and program
CN102591449B (en) The fusion of the low latency of virtual content and real content
US6753828B2 (en) System and method for calibrating a stereo optical see-through head-mounted display system for augmented reality
US8704882B2 (en) Simulated head mounted display system and method
US20130335405A1 (en) Virtual object generation within a virtual environment
US8872853B2 (en) Virtual light in augmented reality
US8055061B2 (en) Method and apparatus for generating three-dimensional model information
US20020105484A1 (en) System and method for calibrating a monocular optical see-through head-mounted display system for augmented reality
KR20130108643A (en) Systems and methods for a gaze and gesture interface
JP4401728B2 (en) Mixed reality space image generation method and mixed reality system
US9607419B2 (en) Method of fitting virtual item using human body model and system for providing fitting service of virtual item
JP5791131B2 (en) Interactive reality extension for natural interactions
US10133073B2 (en) Image generation apparatus and image generation method
US20130326364A1 (en) Position relative hologram interactions
JP3944019B2 (en) Information processing apparatus and method
US20080266386A1 (en) System
JP2013061937A (en) Combined stereo camera and stereo display interaction
JP2005038008A (en) Image processing method, image processor
JP2006127158A (en) Image processing method and image processor
WO2014097271A1 (en) Adaptive projector
KR20160012139A (en) Hologram anchoring and dynamic positioning
JP2004062756A (en) Information-presenting device and information-processing method
US7952594B2 (en) Information processing method, information processing apparatus, and image sensing apparatus

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20100511