WO2021177132A1 - Information processing device, information processing system, information processing method, and program - Google Patents

Info

Publication number
WO2021177132A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
information processing
correction
processing device
real
Prior art date
Application number
PCT/JP2021/007064
Other languages
French (fr)
Japanese (ja)
Inventor
富士夫 荒井
秀憲 青木
智彦 後藤
遼 深澤
京二郎 永野
春香 藤澤
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2021177132A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics

Definitions

  • This technology relates to a technique for correcting the position of a virtual object that is AR-displayed in common among a plurality of AR (Augmented Reality) devices.
  • Conventionally, a technique for AR-displaying a common virtual object at the same position among a plurality of AR devices has been known (see, for example, Patent Document 1 below).
  • In such a technique, the self-position of the AR device is estimated by comparing the feature point cloud extracted from the image information captured by the AR device with the feature point group included in the map information.
  • The map information used for self-position estimation may either be created in advance or created at the same time as self-position estimation, without prior creation.
  • The method of creating map information at the same time as self-position estimation is generally called SLAM (Simultaneous Localization and Mapping). A sketch of the matching-based estimation follows below.
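  • As an illustrative aid (not part of the patent text), the following is a minimal sketch of this matching-based self-position estimation, assuming an OpenCV-style feature pipeline; all function and variable names are assumptions.

```python
# Hedged sketch: estimate the device pose in the global (map) coordinate
# system by matching image features against the map's feature point group.
import numpy as np
import cv2  # OpenCV; the concrete pipeline is an assumption, not the patent's

def estimate_self_position(image, map_points_3d, map_descriptors, camera_matrix):
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Extract a feature point cloud from the captured image.
    keypoints, descriptors = orb.detectAndCompute(image, None)
    # Compare it with the feature point group included in the map information.
    matches = matcher.match(descriptors, map_descriptors)
    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in matches])
    # Solve for the pose that best explains the 2D-3D correspondences.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts_3d, pts_2d, camera_matrix, None)
    return (rvec, tvec) if ok else None
```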
  • the estimated self-position may deviate from the actual position in the real space, resulting in an error.
  • Such an error causes a misalignment of a common virtual object displayed in AR in a plurality of AR devices.
  • The purpose of this technology is to provide a technique capable of accurately AR-displaying a common virtual object at the same position among a plurality of AR devices.
  • In order to achieve the above object, the information processing device according to the present technology includes a control unit.
  • The control unit estimates a self-position in a global coordinate system corresponding to the real space.
  • The control unit acquires, from another device sharing the global coordinate system, coordinate information of the position in the global coordinate system of a first virtual object that is set for a real object in the real space and can be AR-displayed.
  • Based on the self-position and the coordinate information, the control unit sets the position of the first virtual object in the global coordinate system.
  • Based on image information, the control unit calculates the position of the real object in the global coordinate system.
  • Based on the positional relationship between the first virtual object and the real object, the control unit corrects the position of a second virtual object that is AR-displayed in common with the other device.
  • The control unit may correct the position of the second virtual object based on the difference between the position of the first virtual object and the position of the real object.
  • The control unit may set, based on the image information, the position of the first virtual object with respect to the real object in the global coordinate system, and correct the position of the second virtual object based on the difference between the position of the first virtual object based on the coordinate information and the position of the first virtual object based on the image information.
  • The control unit may calculate a correction value based on the difference and correct the position of the second virtual object by the correction value.
  • The control unit may correct the position of the second virtual object by moving the position of the second virtual object according to the correction value.
  • The control unit may correct the position of the second virtual object by rotating the second virtual object according to the correction value.
  • The control unit may change the degree of correction by the correction value.
  • The control unit may change the degree of correction by the correction value according to the distance between the information processing device and the real object at the time the correction value is calculated.
  • The control unit may change the degree of correction by the correction value according to the distance between the other device and the real object at the time the other device sets the position of the first virtual object with respect to the real object.
  • The other device may set the position of the first virtual object so that the first virtual object can be AR-displayed overlapping the real object.
  • The other device may set the position of the first virtual object so that the first virtual object can be AR-displayed in the vicinity of the real object.
  • The other device may AR-display the first virtual object.
  • The control unit may AR-display the first virtual object.
  • The other device may select, from among a plurality of real objects existing in the real space, the real object for which the first virtual object is to be positioned, based on image information acquired by the other device.
  • The other device may select a real object satisfying a predetermined condition as the real object for which the first virtual object is positioned.
  • The predetermined condition may be that the real object has a specific shape.
  • The predetermined condition may be that the real object has a three-dimensional shape that is substantially uniquely specified regardless of the direction in which the real object is viewed.
  • In order to achieve the above object, the information processing system according to the present technology includes an information processing device and another device.
  • The information processing device has a control unit.
  • The control unit estimates a self-position in a global coordinate system corresponding to the real space.
  • The control unit acquires, from the other device sharing the global coordinate system, coordinate information of the position in the global coordinate system of a first virtual object that is set for a real object in the real space and can be AR-displayed.
  • Based on the self-position and the coordinate information, the control unit sets the position of the first virtual object in the global coordinate system.
  • Based on image information, the control unit calculates the position of the real object in the global coordinate system.
  • Based on the positional relationship between the first virtual object and the real object, the control unit corrects the position of a second virtual object that is AR-displayed in common with the other device.
  • The information processing method according to the present technology includes: estimating a self-position in a global coordinate system corresponding to the real space; acquiring, from another device sharing the global coordinate system, coordinate information of the position in the global coordinate system of a first virtual object that is set for a real object in the real space and can be AR-displayed; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
  • The program according to the present technology causes a computer to execute a process of: estimating a self-position in a global coordinate system corresponding to the real space; acquiring, from another device sharing the global coordinate system, coordinate information of the position in the global coordinate system of a first virtual object that is set for a real object in the real space and can be AR-displayed; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
  • Brief description of the drawings: FIG. 1: the information processing system according to the first embodiment of the present technology; FIG. 2: a perspective view of the HMD in the information processing system; FIG. 3: a block diagram of the internal configuration of the HMD; FIGS. 4 and 5: timing charts of the processing of the information processing system; FIG. 6: the state when the user wearing the first HMD and the user wearing the second HMD are looking at the correction real object; FIG. 7: an example of image information when the correction real object is imaged by the imaging unit of the first HMD in the example of FIG. 6; FIG. 8: the feature point cloud extracted from the image information of FIG. 7; FIG. 9: an example in which the coordinate position of the correction virtual object is set with respect to the correction real object; FIG. 10: the state when the position of the correction virtual object is set (AR-displayed) by the second HMD; FIG. 11: the state when the correction real object is imaged by the imaging unit of the second HMD; FIG. 12: the feature point cloud extracted from the image information of FIG. 11; FIG. 13: the difference between the position and posture of the correction virtual object and the position and posture of the correction real object.
  • FIG. 1 is a diagram showing an information processing system 100 according to a first embodiment of the present technology.
  • The information processing system 100 includes a plurality of HMDs (Head Mounted Displays) 10 and a server device 20.
  • The HMD 10 is used by being worn on the user's head.
  • In the present embodiment, the number of HMDs 10 is two, but the number is not particularly limited as long as it is two or more.
  • FIG. 2 is a perspective view showing the HMD 10.
  • FIG. 3 is a block diagram showing an internal configuration of the HMD.
  • The HMD 10 includes an HMD main body 11, a control unit 1, a storage unit 2, a display unit 3, an imaging unit 4, an inertial sensor 5, an operation unit 6, and a communication unit 7.
  • the HMD main body 11 is attached to the user's head and used.
  • The HMD main body 11 has a front portion 12, a right temple portion 13 provided on the right side of the front portion 12, a left temple portion 14 provided on the left side of the front portion 12, and a glass portion 15 provided on the lower side of the front portion 12.
  • The display unit 3 is a light-transmissive (optical see-through) display and includes, for example, an OLED (Organic Light Emitting Diode) light source and a light guide plate.
  • the display unit 3 can AR-display the virtual object according to the control of the control unit 1.
  • The display unit 3 may adopt various forms, such as a configuration using a half mirror or a retinal projection display.
  • the light source of the display unit 3 may be provided on the front unit 12, the right temple unit 13, the left temple unit 14, or the like.
  • AR display means that the virtual object is displayed so as to be perceived, from the user's point of view, as if it were a real object existing in the real space.
  • the display unit 3 may be a video see-through display. In this case, an image in which a virtual object is superimposed on the image captured by the imaging unit 4 is displayed on the display unit 3.
  • The imaging unit 4 is, for example, a camera, and includes an image pickup element such as a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor, and an optical system such as an imaging lens.
  • The imaging unit 4 is provided facing outward on the outer surface of the front portion 12, images real objects existing in the direction of the user's line of sight, and outputs the image information (and depth information) obtained by imaging to the control unit 1.
  • Two imaging units 4 are provided on the front portion 12 at a predetermined interval in the lateral direction.
  • the location and number of the imaging units 4 can be changed as appropriate.
  • an infrared sensor may be used instead of the camera, or a combination of the camera and the infrared sensor may be used.
  • The inertial sensor 5 includes a 3-axis acceleration sensor that detects acceleration in the three axial directions and an angular velocity sensor that detects angular velocity around the three axes.
  • The inertial sensor 5 outputs the detected acceleration in the three axial directions and angular velocity around the three axes to the control unit 1 as inertial information.
  • the detection axes of the inertial sensor 5 are three axes, but the detection axes may be one axis or two axes. Further, in the present embodiment, two types of sensors are used as the inertial sensor 5, but one type or three or more types of sensors may be used as the inertial sensor 5. Other examples of the inertial sensor 5 include a speed sensor, an angle sensor, and the like.
  • The operation unit 6 is, for example, of a press type, a contact type, or another type, and detects operations by the user and outputs them to the control unit 1.
  • In the present embodiment, the operation unit 6 is provided on the front side of the left temple portion 14, but it may be provided at any position as long as the position is easy for the user to operate.
  • The communication unit 7 is capable of wireless or wired communication with the other HMD 10 and the server device 20.
  • the control unit 1 executes various operations based on various programs stored in the storage unit 2 and comprehensively controls each unit of the HMD 10.
  • the control unit 1 includes a CPU (Central Processing Unit) 16, a VPU (Vision Processing Unit) 17, and a GPU (Graphics Processing Unit) 18.
  • the VPU 17 executes a process related to self-position estimation, a process of analyzing image information acquired by the imaging unit 4, and the like.
  • Self-position estimation includes relocalization and motion tracking.
  • Relocalization is a technique for estimating the current self-position and posture in the global coordinate system based on the image captured by the imaging unit 4 and the map information, immediately after the HMD 10 is powered on or at a predetermined timing thereafter.
  • Motion tracking is a technique for calculating the amount of change in self-position and posture for each minute time interval based on image information (and inertial information), and estimating the current self-position and posture in the global coordinate system by sequentially adding up these amounts of change.
  • In relocalization, the VPU 17 first performs image processing on the image information acquired by the imaging unit 4 and extracts a feature point cloud from it. The VPU 17 then estimates its self-position and posture in the global coordinate system by comparing the extracted feature point cloud with the feature point cloud (or mesh information in which the feature points are connected) included in the map information.
  • Relocalization is executed immediately after the power is turned on, or when self-position estimation based on motion tracking fails. Alternatively, the process of comparing the feature point cloud from the image information of the imaging unit 4 with the feature point cloud included in the map information may be executed constantly, and relocalization may be executed whenever matching of these feature point groups succeeds.
  • In motion tracking, the VPU 17 first performs image processing on the image information acquired by the imaging unit 4 and extracts a feature point cloud from it. The VPU 17 then calculates the amount of change between the previous self-position and posture and the current self-position and posture by comparing the feature point group of the previous image information with the feature point group of the current image information. The VPU 17 estimates the current self-position and posture in the global coordinate system by adding this amount of change to the previous self-position and posture.
  • Immediately after relocalization, the base self-position and posture to which the amount of change is added is the self-position and posture estimated by relocalization.
  • In motion tracking, inertial information from the inertial sensor 5 may be used instead of the image information, or both the image information and the inertial information may be used.
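  • To make the "sequentially adding the amount of change" step concrete, here is a minimal sketch assuming poses are represented as 4x4 homogeneous matrices (an illustrative choice; the patent does not specify a representation).

```python
import numpy as np

def update_pose(prev_pose: np.ndarray, delta_pose: np.ndarray) -> np.ndarray:
    """Compose the per-interval change of pose (estimated by comparing the
    previous and current feature point clouds) onto the previous pose.
    Immediately after relocalization, prev_pose is the relocalized pose."""
    return prev_pose @ delta_pose
```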
  • The CPU 16 executes a process of setting the positions and display contents of virtual objects, a delay compensation process, and the like. In addition, the CPU 16 executes various processes related to the application executed in common by the HMDs 10.
  • The GPU 18 executes a process of controlling the AR display of virtual objects by the display unit 3. Specifically, the GPU 18 calculates the image of the virtual object to be AR-displayed according to the user's viewpoint position and outputs it to the display unit 3.
  • the display of the display unit 3 is controlled so that the common virtual object is AR-displayed at the same position in the global coordinate system in the plurality of HMD10s.
  • In the present technology, two types of virtual objects are used. The first type is the correction virtual object 31 (first virtual object), and the second type is a normal virtual object (second virtual object).
  • A normal virtual object is a common virtual object that is AR-displayed at the same position in the global coordinate system when a common application is executed in the plurality of HMDs 10.
  • For example, when the common application is a game, a virtual object such as ammunition in the game corresponds to this normal virtual object.
  • The correction virtual object 31 is a virtual object used to correct the AR display position so that normal virtual objects are displayed at the same position and posture in the plurality of HMDs 10.
  • The correction virtual object 31 may be AR-displayed, or its position in the global coordinate system may simply be set without AR display. In the first embodiment, the case where the correction virtual object 31 is AR-displayed will be described.
  • the storage unit 2 includes various programs required for processing of the control unit 1, a non-volatile memory for storing various data, and a volatile memory used as a work area of the control unit 1.
  • the various programs may be read from a portable recording medium such as an optical disk or a semiconductor memory, or may be downloaded from the server device 20 on the network.
  • the storage unit 2 stores the map information commonly used in each HMD 10.
  • Map information is three-dimensional information corresponding to the global coordinate system, and is information used for self-position estimation (relocalization).
  • This map information includes information on a feature point cloud (or mesh information in which the feature point clouds are connected) corresponding to each real object in the real space.
  • Each feature point included in this feature point cloud is associated with position information in the global coordinate system.
  • The map information is generated, for example, based on image information acquired in advance by an operator with a camera or the like in the real space where the application is to be executed in each HMD 10.
  • By using common map information for self-position estimation in each HMD 10, the position and orientation of each HMD 10 are represented in a common global coordinate system. A possible layout of this map information is sketched below.
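  • The following in-memory layout for the shared map information is an illustrative assumption; the field names are not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MapPoint:
    position: np.ndarray    # (x, y, z) in the global coordinate system
    descriptor: np.ndarray  # feature descriptor used for matching

@dataclass
class MapInformation:
    points: list[MapPoint]  # feature point cloud (or connected mesh) of the real space
```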
  • the server device 20 is configured to be able to communicate with each HMD 10.
  • the server device 20 includes a control unit, a storage unit, a communication unit, and the like (each portion is not shown).
  • the control unit executes various operations based on various programs stored in the storage unit, and controls each part of the server device in an integrated manner.
  • The storage unit includes various programs required for processing of the control unit, a non-volatile memory for storing various data, and a volatile memory used as a work area of the control unit.
  • Various programs may be read from a portable recording medium such as an optical disk or a semiconductor memory.
  • the communication unit is configured to be able to communicate with each HMD.
  • In the following description, the two HMDs 10 are referred to as a first HMD 10a (another device) and a second HMD 10b (information processing device), respectively, for convenience.
  • the server device 20 transmits the map information created in advance to the first HMD10a and the second HMD10b, respectively.
  • The control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b each receive the map information and store it in the storage unit 2, whereby the map information is shared by the first HMD 10a and the second HMD 10b and the global coordinate system is shared.
  • After receiving the map information, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b each estimate their self-position and orientation in the global coordinate system based on a comparison between the feature point group in the image information acquired by their own imaging unit 4 and the feature point group in the map information. That is, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b execute self-position estimation based on relocalization when the map information is received.
  • Thereafter, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b compare the feature point group of the previous image information with the feature point group of the current image information, and calculate the amount of change between the previous self-position and posture and the current self-position and posture.
  • The control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b estimate the current self-position and posture in the global coordinate system by adding this amount of change to the previous self-position and posture (immediately after relocalization, the self-position and posture estimated by relocalization). That is, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b execute self-position estimation based on motion tracking after the self-position estimation based on relocalization.
  • Thereafter, self-position estimation based on motion tracking is continuously executed. While this motion-tracking-based self-position estimation is being continued, self-position estimation based on relocalization may be executed at a predetermined timing.
  • The timing at which self-position estimation based on relocalization is executed is, for example, when self-position estimation based on motion tracking fails, or, in a configuration where the comparison is always executed, when matching between the feature point cloud of the image information and the feature point cloud of the map information succeeds.
  • In this way, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b each estimate their own position and posture in the common global coordinate system.
  • the control unit 1 (VPU17) of the first HMD10a and the second HMD10b transmits the self-position and the posture to the server device 20 each time the self-position and the posture are estimated. Further, the server device 20 transmits the self-position and the posture received from one HMD10 of the first HMD10a and the second HMD10b to the other HMD10. Thereby, the first HMD10a and the second HMD10b can recognize the position and orientation of the other HMD10 in the global coordinate system.
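  • The pose-sharing flow described above might look like the following sketch; the transport and message format are assumptions, not from the patent.

```python
# Each HMD reports its estimated pose to the server; the server relays the
# pose to every other HMD sharing the global coordinate system.
def on_pose_estimated(hmd_id, pose, server_connection):
    server_connection.send({"type": "pose", "from": hmd_id, "pose": pose})

def on_server_receive(message, hmd_connections):
    for other_id, connection in hmd_connections.items():
        if other_id != message["from"]:
            connection.send(message)
```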
  • In order to AR-display a common virtual object accurately at the same position, the first HMD 10a and the second HMD 10b need to accurately estimate their respective self-positions in the global coordinate system.
  • However, in general, the first HMD 10a and the second HMD 10b cannot estimate their own positions completely accurately. That is, the estimated self-position and posture may deviate from the actual position and posture in the real space, resulting in an error. Such an error causes misalignment of the common (normal) virtual object that is AR-displayed at the same position in the global coordinate system in the first HMD 10a and the second HMD 10b.
  • The position of a common (normal) virtual object is represented as three-dimensional coordinate information in the global coordinate system. If one of the first HMD 10a and the second HMD 10b AR-displays a common virtual object at the position (x, y, z) in the global coordinate system, the other HMD 10 also AR-displays the common virtual object at the position (x, y, z) in the global coordinate system.
  • the subsequent processing is executed in order to correct the misalignment of such a common (normal) virtual object.
  • Based on the image information from the imaging unit 4, the control unit 1 (VPU 17) of the first HMD 10a determines whether or not a correction real object 30 (see FIGS. 6 and 7, etc.) exists within a certain field of view and distance ahead in the direction of the user's line of sight. That is, the control unit 1 (VPU 17) of the first HMD 10a executes a process of selecting (finding) the correction real object 30 from among the plurality of real objects existing in the real space, based on the image information.
  • the correction real object 30 is a real object for which the correction virtual object 31 is positioned (displayed in AR).
  • The correction real object 30 is a real object that satisfies a predetermined condition among the plurality of real objects existing in the real space; the first HMD 10a and the second HMD 10b select a real object satisfying this predetermined condition as the correction real object 30.
  • In the present embodiment, the condition for being selected as the correction real object 30 is that the real object has a specific shape.
  • Typically, this condition is that the real object has a three-dimensional shape that is substantially uniquely identified regardless of the direction from which the real object is viewed.
  • For example, the correction real object 30 has a regular shape such as a sphere, a rectangular parallelepiped (including a cube), a cylinder, a polygonal prism, a cone, or a polygonal pyramid (a simple selection sketch follows below).
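  • A minimal sketch of such a selection step, assuming a prior shape-recognition stage produces candidate objects with a shape label and a position; the thresholds and attribute names are assumptions.

```python
import numpy as np

REGULAR_SHAPES = {"sphere", "cuboid", "cylinder", "prism", "cone", "pyramid"}
MAX_DISTANCE = 3.0    # assumed stand-in for "a certain distance" (meters)
MAX_ANGLE_DEG = 30.0  # assumed stand-in for "a certain field of view"

def select_correction_objects(candidates, self_position, gaze_direction):
    """candidates: objects with hypothetical .shape and .position attributes."""
    gaze = np.asarray(gaze_direction, dtype=float)
    gaze /= np.linalg.norm(gaze)
    selected = []
    for obj in candidates:
        to_obj = np.asarray(obj.position, dtype=float) - np.asarray(self_position, dtype=float)
        dist = np.linalg.norm(to_obj)
        if dist == 0.0 or dist > MAX_DISTANCE:
            continue
        cos_angle = np.clip(np.dot(to_obj / dist, gaze), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) <= MAX_ANGLE_DEG and obj.shape in REGULAR_SHAPES:
            selected.append(obj)
    return selected
```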
  • FIG. 6 is a diagram showing a state when a user wearing the first HMD10a and a user wearing the second HMD10b are looking at the actual object 30 for correction.
  • the shape of the correction real object 30 is a cube.
  • FIG. 6 shows an example in which the user wearing the first HMD 10a is looking at the correction real object 30 from an oblique left direction, and the user wearing the second HMD 10b is looking at it from an oblique right direction.
  • FIG. 7 is a diagram showing an example of image information when the correction real object 30 is imaged by the image capturing unit 4 of the first HMD 10a in the example shown in FIG.
  • FIG. 8 is a diagram showing information of a feature point cloud extracted from the image information shown in FIG. 7.
  • The control unit 1 (VPU 17) of the first HMD 10a executes the process of selecting (finding) the correction real object 30 based on, for example, the information of the feature point cloud shown in FIG. 8 extracted from the image information shown in FIG. 7.
  • The number of correction real objects 30 selected is not limited to one. That is, each time the first HMD 10a detects a new correction real object 30 as the user wearing the first HMD 10a moves or looks around, the new correction real object 30 is sequentially added.
  • Next, based on the information of the feature point cloud of the correction real object 30 (see FIG. 8), the control unit 1 (CPU 16) of the first HMD 10a sets, in its own global coordinate system, the coordinate position (AR display position) of the correction virtual object 31 with respect to the correction real object 30.
  • The coordinate position of the correction virtual object 31 set at this time includes information such as the position of the center of gravity of the correction virtual object 31, the positions of the vertices (corners) of each part (face) constituting the correction virtual object 31, and the orientation of the correction virtual object 31. A possible data layout is sketched below.
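  • A possible container for this coordinate information, with illustrative (assumed) field names:

```python
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class CorrectionObjectCoordinates:
    center_of_gravity: Vec3               # position in the global coordinate system
    face_vertices: dict[str, list[Vec3]]  # e.g. {"front": [a, b, c, d], "top": [...]}
    orientation: Vec3                     # e.g. Euler angles; representation assumed
```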
  • FIG. 9 is a diagram showing an example when the coordinate position of the correction virtual object 31 is set (when AR is displayed) with respect to the correction real object 30.
  • After setting the coordinate position of the correction virtual object 31, the control unit 1 (GPU 18) of the first HMD 10a causes the display unit 3 to AR-display the correction virtual object 31 at that coordinate position.
  • the correction virtual object 31 does not necessarily have to be AR-displayed in the first HMD 10a, and its coordinate position may only be set.
  • In the present embodiment, the shape and size of the correction virtual object 31 are not determined in advance, but are determined based on the shape and size of the correction real object 30. Specifically, the shape and size of the correction virtual object 31 are related to the correction real object 30 and are made the same as the shape (cube) and size of the correction real object 30.
  • Therefore, in the present embodiment, the position of the correction virtual object 31 is set so that the correction virtual object 31 is AR-displayed overlapping the correction real object 30.
  • The shape and size of the correction virtual object 31 may also be determined in advance (for example, when the correction virtual object 31 is a character). Further, the position of the correction virtual object 31 may be set in the vicinity of the correction real object 30 (for example, when the correction virtual object 31 of a character is placed on the correction real object 30).
  • After setting the position of the correction virtual object 31 or AR-displaying the correction virtual object 31, the control unit 1 (CPU 16) of the first HMD 10a transmits the coordinate information of the position of the correction virtual object 31 to the server device 20.
  • In the present embodiment, the control unit 1 (CPU 16) of the first HMD 10a transmits to the server device 20, as the coordinate information of the correction virtual object 31, the coordinate information of the corners (a) to (g) on the three visible surfaces.
  • When the server device 20 receives the coordinate information of the correction virtual object 31 from the first HMD 10a, it transmits this coordinate information to the second HMD 10b.
  • In the present embodiment, the coordinate information of the correction virtual object 31 is transmitted from the first HMD 10a to the second HMD 10b via the server device 20, but this coordinate information may also be transmitted directly from the first HMD 10a to the second HMD 10b.
  • When the control unit 1 (CPU 16) of the second HMD 10b receives the coordinate information of the correction virtual object 31, it determines, based on its self-position and orientation and the acquired coordinate information of the correction virtual object 31 in its own global coordinate system, whether or not the correction real object 30 selected by the first HMD 10a exists within a certain field of view and distance ahead in the direction of the user's line of sight.
  • That is, the correction real object 30 is determined to exist in the user's field of view when the second HMD 10b approaches within a certain distance of the correction real object 30 selected by the first HMD 10a and the user's line of sight faces the direction of the correction real object 30.
  • In order to correct the position of the common (normal) virtual object, the user wearing the second HMD 10b needs to approach the correction real object 30 selected by the first HMD 10a and look in its direction. On the other hand, since the user wearing the first HMD 10a basically moves around while looking about, correction real objects 30 are added one after another according to that user's movement and become scattered over a plurality of locations. Therefore, each time a correction real object 30 is newly selected and their number increases, the possibility increases that the user wearing the second HMD 10b approaches and faces one of the correction real objects 30 selected by the first HMD 10a.
  • When this is the case, the control unit 1 of the second HMD 10b executes the following processing. That is, the control unit 1 (CPU 16) of the second HMD 10b sets the position of the correction virtual object 31 in its own global coordinate system based on its self-position and posture and the acquired coordinate information of the correction virtual object 31.
  • FIG. 10 is a diagram showing a state when the position of the correction virtual object 31 is set (AR displayed) by the second HMD 10b based on the coordinate information.
  • control unit 1 (GPU18) of the second HMD 10b causes the display unit 3 to AR-display the correction virtual object 31 at the set position.
  • the control unit 1 of the second HMD 10b only sets the position of the correction virtual object 31, and does not have to actually display the correction virtual object 31 in AR.
  • Specifically, the control unit 1 (CPU 16) of the second HMD 10b receives the coordinate information of the corners (a) to (g) on the front, top, and left surfaces described above: front surface [{x_a, y_a, z_a}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_d, y_d, z_d}], top surface [{x_e, y_e, z_e}, {x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_a, y_a, z_a}], and left surface [{x_e, y_e, z_e}, {x_a, y_a, z_a}, {x_d, y_d, z_d}, {x_g, y_g, z_g}].
  • Here, as described above, the user wearing the first HMD 10a is looking at the correction real object 30 from an oblique left direction.
  • On the other hand, when the user wearing the second HMD 10b approaches and looks at the correction real object 30, that user is looking at it from an oblique right direction.
  • Therefore, from the first HMD 10a side, the front, top, and left surfaces of the correction virtual object 31 can be seen, whereas from the second HMD 10b side, three surfaces can be seen: the front, top, and right surfaces.
  • Among the three surfaces visible from its side (front, top, and right), the control unit 1 (CPU 16) of the second HMD 10b has the coordinate information of the surfaces other than the right side surface, that is, of the front and top surfaces ([{x_a, y_a, z_a}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_d, y_d, z_d}], [{x_e, y_e, z_e}, {x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_a, y_a, z_a}]). The control unit 1 (CPU 16) of the second HMD 10b therefore sets these coordinates at the position in its own global coordinate system and calculates how the front and top surfaces look when viewed from its own position and posture.
  • On the other hand, since the control unit 1 (CPU 16) of the second HMD 10b does not have the coordinate information of the right side surface ([{x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_h, y_h, z_h}]), this coordinate information needs to be predicted from the coordinate information of the other surfaces (specifically, the coordinates of the corner (h) need to be predicted). The control unit 1 (CPU 16) of the second HMD 10b then sets the predicted coordinates at the position in its own global coordinate system and obtains how the right side surface looks when viewed from its own position and posture.
  • In this way, even for the portion of the correction virtual object 31 corresponding to the portion of the correction real object 30 that could not be seen from the first HMD 10a when the correction real object 30 was selected, the control unit 1 (CPU 16) of the second HMD 10b can accurately predict the coordinates of that portion.
  • The prediction of the coordinates of the portion of the correction virtual object 31 corresponding to the invisible portion of the correction real object 30 may be executed on the first HMD 10a side instead of the second HMD 10b side.
  • In this case, the predicted coordinate information is transmitted (via the server device 20) to the second HMD 10b together with the coordinate information corresponding to the visible portions. A sketch of one way to predict the remaining corner follows below.
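  • One way the unseen corner (h) of the cubic object could be predicted is parallelogram completion on the right side face; this method is an assumption, since the text does not spell out the prediction.

```python
import numpy as np

def predict_fourth_corner(f, b, c):
    """For a planar face traversed in the order f -> b -> c -> h,
    the fourth corner of the parallelogram is h = f + c - b."""
    return np.asarray(f, dtype=float) + np.asarray(c, dtype=float) - np.asarray(b, dtype=float)

# Illustrative unit-cube values: f=(1,1,0), b=(1,1,1), c=(1,0,1) -> h=(1,0,0).
```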
  • Here, the relative positional relationship between the correction virtual object 31 and the correction real object 30 in the first HMD 10a (see FIG. 9) and the relative positional relationship between the correction virtual object 31 and the correction real object 30 in the second HMD 10b (see FIG. 10) should be the same. Therefore, if the self-positions and postures in the first HMD 10a and the second HMD 10b are accurate, the position of the correction virtual object 31 in the second HMD 10b should be set so as to completely overlap the correction real object 30.
  • On the other hand, if the estimated self-position and posture contain an error, the relative positional relationship between the correction virtual object 31 and the correction real object 30 differs from that in the first HMD 10a, and the position of the correction virtual object 31 is set at a position deviated from the correction real object 30.
  • In the present embodiment, this relationship is utilized: in the second HMD 10b, the AR display position of the common (normal) virtual object is corrected based on the positional relationship between the correction virtual object 31 and the correction real object 30.
  • After setting the coordinate position of the correction virtual object 31 or AR-displaying the correction virtual object 31, the control unit 1 (VPU 17) of the second HMD 10b acquires image information from the imaging unit 4 and extracts a feature point cloud from the image information.
  • FIG. 11 is a diagram showing a state when the correction real object 30 is imaged by the imaging unit 4 of the second HMD 10b.
  • FIG. 12 is a diagram showing information of a feature point cloud extracted from the image information shown in FIG.
  • Based on the coordinate information acquired from the first HMD 10a, the control unit 1 (CPU 16) of the second HMD 10b determines which feature point group among the feature point clouds included in the image information corresponds to the correction real object 30.
  • The control unit 1 (CPU 16) of the second HMD 10b then calculates the position and orientation of the correction real object 30 in its own global coordinate system based on the information of the feature point cloud corresponding to the correction real object 30.
  • Next, the control unit 1 (CPU 16) of the second HMD 10b obtains the difference between the position and posture of the correction virtual object 31 set based on the coordinate information from the first HMD 10a and the position and posture of the correction real object 30.
  • FIG. 13 is a diagram showing the difference between the position and orientation of the correction virtual object 31 and the position and orientation of the correction real object 30.
  • When the control unit 1 (CPU 16) of the second HMD 10b obtains the difference, it stores the difference as a correction value in the storage unit 2. Then, when AR-displaying a (normal) virtual object common to the first HMD 10a, the control unit 1 (CPU 16) corrects the AR display position of the common virtual object using this difference as a correction value. When the correction virtual object 31 is AR-displayed, the correction virtual object 31 may also be corrected by the correction value.
  • As a method of obtaining the correction value, there is, for example, a method of obtaining a movement amount and a rotation angle.
  • The movement amount is calculated, for example, from the difference between the center of gravity (x_G, y_G, z_G) of the correction virtual object 31 and the center of gravity (x'_G, y'_G, z'_G) of the correction real object 30.
  • The center of gravity (x_G, y_G, z_G) of the correction virtual object 31 is calculated, for example, from the positions of the corners (a) to (h) of the correction virtual object 31. Similarly, the center of gravity (x'_G, y'_G, z'_G) of the correction real object 30 is calculated from the positions of the corners (a') to (h') of the correction real object 30.
  • The calculated movement amount and rotation angle are used as correction values, and corrections represented by the formulas below are executed for the coordinate points of the common (normal) virtual object, where P(x, y, z) denotes the coordinates before correction and P'(x', y', z') the coordinates after correction.
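  • The formulas themselves are rendered only as images in the original publication. A plausible reconstruction, assuming the rotation R (from the rotation-angle correction value) acts about the center of gravity G of the correction virtual object 31 and t is the movement amount between the two centers of gravity:

```latex
% Hedged reconstruction of the missing correction formulas (assumed form).
P' = R\,(P - G) + G + t,
\qquad
t = \bigl(x'_G - x_G,\; y'_G - y_G,\; z'_G - z_G\bigr)
```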
  • As a result, the common (normal) virtual object is displayed at the same position and orientation in the first HMD 10a and the second HMD 10b.
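  • The following sketch (not from the patent text) computes such a correction value from corresponding corners and applies it to a coordinate point; recovering the rotation with a Kabsch-style solve is an assumption.

```python
import numpy as np

def compute_correction(virtual_corners: np.ndarray, real_corners: np.ndarray):
    """Both inputs are Nx3 arrays: corners (a)-(h) paired with (a')-(h')."""
    g_v = virtual_corners.mean(axis=0)  # center of gravity (x_G, y_G, z_G)
    g_r = real_corners.mean(axis=0)     # center of gravity (x'_G, y'_G, z'_G)
    # Rotation that best aligns the centered virtual corners to the real ones.
    H = (virtual_corners - g_v).T @ (real_corners - g_r)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = g_r - g_v                       # movement amount
    return R, g_v, t

def correct_point(p, R, g, t):
    """Apply the correction to a coordinate point P of a common virtual object."""
    return R @ (np.asarray(p, dtype=float) - g) + g + t
```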
  • The correction value may be transmitted from the second HMD 10b to the server device 20, and the server device 20 may use this correction value to correct the position and orientation of the common (normal) virtual object. Further, the calculation of the correction value may be performed by the server device 20 or the first HMD 10a instead of the second HMD 10b (for example, when the processing performance of the second HMD 10b is low). In this case, the information necessary for calculating the correction value (information on the position and posture of the correction virtual object 31 in the second HMD 10b and information on the position and posture of the correction real object 30) is transmitted from the second HMD 10b to the server device 20 or the first HMD 10a.
  • After the correction value is calculated, when the user wearing the second HMD 10b again approaches a correction real object 30 selected by the first HMD 10a and looks in its direction, a new correction value is calculated and the correction value is updated.
  • the current correction value may be reset to 0.
  • The correction value calculated when the distance between the correction real object 30 and the second HMD 10b is long may be less reliable than the correction value calculated when that distance is short. This is because the recognition of the position and shape of the correction real object 30 by the second HMD 10b may become inaccurate as the distance between the second HMD 10b and the correction real object 30 increases.
  • Therefore, the degree of correction by the correction value for the common (normal) virtual object may be changed according to the distance between the second HMD 10b and the correction real object 30 at the time the correction value is calculated. In this case, the shorter this distance, the higher the degree of correction by the correction value is made.
  • For example, suppose that a common (normal) virtual object is AR-displayed at some distance from the second HMD 10b.
  • When the distance between the second HMD 10b at the time the correction value was calculated and the correction real object 30 is within a certain distance, the correction value is multiplied by 1.
  • When that distance exceeds the certain distance, the correction value is multiplied by values such as 0.9, 0.8, and so on as the distance increases (a sketch of such a weighting follows below).
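  • A sketch of such a distance-dependent weighting; the threshold and the step at which the multiplier drops are assumptions.

```python
def correction_weight(distance: float, threshold: float = 2.0, step: float = 1.0) -> float:
    """Multiplier applied to the correction value: 1.0 within the threshold,
    then 0.9, 0.8, ... as the distance at calculation time grows."""
    if distance <= threshold:
        return 1.0
    steps_beyond = int((distance - threshold) / step) + 1
    return max(0.0, 1.0 - 0.1 * steps_beyond)

# e.g. applied_correction = correction_weight(d) * correction_value
```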
  • The same applies when the distance between the correction real object 30 and the first HMD 10a is long. That is, the correction value calculated when the distance between the correction real object 30 and the first HMD 10a was long may be less reliable than the correction value calculated when that distance was short.
  • Therefore, the degree of correction by the correction value for the common (normal) virtual object may also be varied according to the distance between the first HMD 10a and the correction real object 30 at the time the coordinate position of the correction virtual object 31 was set by the first HMD 10a. In this case, the shorter that distance, the higher the degree of correction by the correction value is made.
  • When the distance between the first HMD 10a and the correction real object 30 at the time the coordinate position of the correction virtual object 31 was set by the first HMD 10a is within a certain distance, the correction value is multiplied by 1.
  • When that distance exceeds the certain distance, the correction value is multiplied by values such as 0.9, 0.8, and so on as the distance increases.
  • In the above description, the case where the correction value is calculated each time a correction real object 30 selected by the first HMD 10a is detected by the second HMD 10b has been described.
  • However, when the correction real objects 30 selected by the first HMD 10a are densely located, the correction real objects 30 are frequently detected by the second HMD 10b and the correction value is frequently calculated.
  • In this case, the processing load on the second HMD 10b may increase. Therefore, for example, the second HMD 10b may calculate a new correction value only when a correction real object 30 is detected after a certain period of time has elapsed since the previous calculation of the correction value.
  • In the above description, the HMD 10 on the transmitting side that transmits the coordinate information of the correction virtual object 31 and the HMD 10 on the receiving side that receives the coordinate information are predetermined.
  • However, the transmitting-side HMD 10 and the receiving-side HMD 10 may not be determined in advance.
  • In this case, for example, both the first HMD 10a and the second HMD 10b execute the process of selecting (finding) the correction real object 30, and the HMD 10 that first finds the correction real object 30 transmits the coordinate information of the correction virtual object 31.
  • When the other HMD 10 approaches the correction real object 30 selected by the one HMD 10 and looks in its direction, it calculates a correction value based on the positional relationship between the correction virtual object 31 and the correction real object 30, and corrects the AR display position of the common (normal) virtual object by using the correction value.
  • the positions of the common (normal) virtual objects are corrected based on the positional relationship between the correction virtual object 31 and the correction real object 30.
  • the position of a common (normal) virtual object can be corrected by a relatively simple method.
  • the position of the common (normal) virtual object is corrected based on the difference between the position of the correction virtual object 31 and the position of the correction real object 30.
  • the AR display position of the common virtual object can be appropriately corrected.
  • Further, the correction value is calculated based on the difference between the position of the correction virtual object 31 and the position of the correction real object 30, and the position of the common (normal) virtual object is corrected according to the correction value. The correction by the correction value moves and rotates the common (normal) virtual object. As a result, the AR display position of the common (normal) virtual object can be corrected more appropriately.
  • Further, the degree of correction by the correction value may be changed according to the distance between the second HMD 10b at the time the correction value is calculated and the correction real object 30. Thereby, the degree of correction by the correction value can be appropriately changed.
  • Further, the degree of correction by the correction value may be varied according to the distance between the first HMD 10a at the time the coordinate position of the correction virtual object 31 was set by the first HMD 10a and the correction real object 30. Thereby, the degree of correction by the correction value can be appropriately changed.
  • Further, the first HMD 10a selects (finds) the correction real object 30 from among the plurality of real objects existing in the real space, based on the image information acquired by the first HMD 10a.
  • Thereby, a real object existing in the real space can be selected as the correction real object 30, so it is not necessary to specially install a marker or the like in the real space, which saves time and effort.
  • Further, the condition for being selected as the correction real object 30 is that the real object has a specific shape.
  • Typically, this condition is that the real object has a three-dimensional shape that is substantially uniquely specified regardless of the direction in which the real object is viewed.
  • In the present embodiment, a method is used in which the correction according to the correction value is performed individually for each common (normal) virtual object.
  • On the other hand, a method of correcting the self-position and posture themselves by the correction value may also be used.
  • In this case, the correction is applied to all common (normal) virtual objects.
  • However, as the distance between the second HMD 10b and the correction real object 30, or between the first HMD 10a and the correction real object 30, becomes long, the correction using the correction value can also cause unintended misalignment of common (normal) virtual objects. Therefore, instead of correcting the self-position and posture themselves by the correction value, a method may be used in which each common (normal) virtual object is corrected individually by the correction value, performing the correction only for common virtual objects for which the necessity of being shared among the HMDs 10 is relatively high.
  • Second Embodiment: Next, a second embodiment of the present technology will be described.
  • In the second embodiment, another method of obtaining the positional relationship between the correction virtual object 31 and the correction real object 30 in the second HMD 10b will be described.
  • After setting the position of the correction virtual object 31 based on the coordinate information acquired from the first HMD 10a (see FIG. 10) or AR-displaying the correction virtual object 31, the control unit 1 (CPU 16) of the second HMD 10b executes the following processing.
  • First, the control unit 1 (VPU 17) of the second HMD 10b acquires image information from the imaging unit 4 and extracts a feature point cloud from the image information. Then, based on the coordinate information acquired from the first HMD 10a, the control unit 1 (CPU 16) of the second HMD 10b determines which feature point group among the feature point clouds included in the image information corresponds to the correction real object 30.
  • Next, based on the information of the feature point group corresponding to the correction real object 30, the control unit 1 (CPU 16) of the second HMD 10b sets, in its own global coordinate system, the coordinate position (AR display position) of a correction virtual object 31 with respect to the correction real object 30.
  • FIG. 14 is a diagram showing an example in which the coordinate position of the correction virtual object 31 is set (AR-displayed) with respect to the correction real object 30 based on image information acquired in the second HMD 10b.
  • In the second embodiment, two types of correction virtual objects 31 are handled in the second HMD 10b. The first type is the correction virtual object 31a whose coordinate position is set based on the coordinate information acquired from the first HMD 10a (see FIG. 10).
  • The second type is the correction virtual object 31b whose coordinate position is set based on the image information acquired in the second HMD 10b (see FIG. 14).
  • Hereinafter, the first type is referred to as the correction virtual object 31a based on coordinate information, and the second type is referred to as the correction virtual object 31b based on image information.
  • When setting the coordinate position of the correction virtual object 31b based on the image information, the control unit 1 (CPU 16) of the second HMD 10b uses the same conditions as those used when the first HMD 10a set the coordinate position of the correction virtual object 31 with respect to the correction real object 30 based on the image information of the first HMD 10a. Using these same conditions, the control unit 1 (CPU 16) of the second HMD 10b sets the coordinate position (AR display position) of the correction virtual object 31b based on the image information with respect to the correction real object 30, based on the image information of the second HMD 10b.
  • That is, when the correction virtual object 31b based on the image information is AR-displayed, its coordinate position is set so that the correction virtual object 31b is AR-displayed overlapping the correction real object 30 at the same position.
  • Further, the shape and size of the correction virtual object 31b based on the image information are determined based on the image information used when setting its coordinate position.
  • Then, the control unit 1 (GPU 18) of the second HMD 10b causes the display unit 3 to AR-display the correction virtual object 31b based on the image information at the position corresponding to the set coordinate position.
  • the correction virtual object 31b based on the image information does not necessarily have to be AR-displayed in the second HMD10b, and its coordinate position may only be set.
  • Next, the control unit 1 (CPU 16) of the second HMD 10b obtains the difference between the position and orientation of the correction virtual object 31a based on the coordinate information and the position and orientation of the correction virtual object 31b based on the image information.
  • FIG. 15 is a diagram showing the difference between the position and orientation of the correction virtual object 31a based on the coordinate information and the position and orientation of the correction virtual object 31b based on the image information.
  • When the control unit 1 (CPU 16) of the second HMD 10b obtains the difference, it stores the difference as a correction value in the storage unit 2. Then, when AR-displaying a (normal) virtual object common to the first HMD 10a, the control unit 1 (CPU 16) corrects the AR display position of the common virtual object using this difference as a correction value.
  • the same method as in the above-described first embodiment can be used. That is, in the above-described first embodiment, the wording of the "correction virtual object 31" in the place where the calculation method of the correction value and the correction method using the correction value are explained is "the correction virtual based on the coordinate information". It may be read as "object 31a”, and the wording of "correction real object 30" may be read as "correction virtual object 31b based on image information”. Further, "angle (a') to (h')” may be read as “angle (a") to (h ")”.
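As an illustration of the correction just described, the following is a minimal Python sketch (not the reference implementation of the embodiment) of how the pose difference between the correction virtual object 31a based on coordinate information and the correction virtual object 31b based on image information could be computed as a correction value and then applied to a common virtual object. It assumes poses are given as NumPy position vectors and 3x3 rotation matrices; all names are illustrative.

    import numpy as np

    def correction_from_poses(pos_a, rot_a, pos_b, rot_b):
        # Rigid transform taking the pose based on coordinate information (a)
        # to the pose based on image information (b).
        rot_corr = rot_b @ rot_a.T            # rotation part of the correction value
        pos_corr = pos_b - rot_corr @ pos_a   # translation part of the correction value
        return pos_corr, rot_corr

    def apply_correction(obj_pos, obj_rot, pos_corr, rot_corr):
        # Moves and rotates a common (normal) virtual object by the correction
        # value before it is AR-displayed.
        return rot_corr @ obj_pos + pos_corr, rot_corr @ obj_rot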
In each of the above embodiments, the HMD 10 has been taken as an example of the AR device (information processing device). However, the AR device is not limited to the HMD 10. Other examples of the AR device include wearable devices other than the HMD 10, such as wristband (watch), ring, and pendant types, as well as mobile phones (including smartphones), tablet PCs, portable game machines, portable music players, and the like. The AR device may be any device as long as it can AR-display a virtual object (and can be worn or held by the user and moved together with the user).
The present technology can also have the following configurations.
(1) An information processing device including a control unit that: estimates a self-position in a global coordinate system corresponding to a real space; acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculates the position of the real object in the global coordinate system based on image information; and corrects, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
(2) The information processing device according to (1) above, in which the control unit corrects the position of the second virtual object based on the difference between the position of the first virtual object and the position of the real object.
(3) The information processing device according to any one of the above configurations, in which the control unit sets the position of the first virtual object with respect to the real object in the global coordinate system based on the image information, and corrects the position of the second virtual object based on the difference between the position of the first virtual object based on the coordinate information and the position of the first virtual object based on the image information.
(4) The information processing device according to any one of the above configurations, in which the control unit calculates a correction value based on the difference and corrects the position of the second virtual object based on the correction value.
(5) The information processing device according to any one of the above configurations, in which the control unit corrects the position of the second virtual object by moving the position of the second virtual object according to the correction value.
(6) The information processing device according to any one of the above configurations, in which the control unit corrects the position of the second virtual object by rotating the second virtual object according to the correction value.
(7) The information processing device according to any one of the above configurations, in which the control unit changes the degree of correction by the correction value.
(8) The information processing device according to any one of the above configurations, in which the control unit changes the degree of correction by the correction value according to the distance between the information processing device and the real object at the time the correction value is calculated.
(9) The information processing device according to (7) or (8) above, in which the control unit changes the degree of correction by the correction value according to the distance between the other device and the real object at the time the other device sets the position of the first virtual object with respect to the real object.
(10) The information processing device according to any one of the above configurations, in which the other device sets the position of the first virtual object so that the first virtual object can be AR-displayed overlapping the real object.
(11) The information processing device according to any one of the above configurations, in which the other device sets the position of the first virtual object so that the first virtual object can be AR-displayed in the vicinity of the real object.
(12) The information processing device according to any one of (1) to (11) above, in which the other device AR-displays the first virtual object.
(13) The information processing device according to any one of the above configurations, in which the control unit AR-displays the first virtual object.
(14) The information processing device according to any one of (1) to (13) above, in which the other device selects, based on image information acquired by the other device, the real object on which the first virtual object is to be located from among a plurality of real objects existing in the real space.
(15) The information processing device according to any one of the above configurations, in which the other device selects a real object satisfying a predetermined condition as the real object on which the first virtual object is to be located.
(16) The information processing device according to (15) above, in which the predetermined condition is that the real object has a specific shape.
(17) The information processing device according to any one of the above configurations, in which the predetermined condition is that the real object has a three-dimensional shape that is substantially uniquely identified regardless of the direction from which the real object is viewed.
(18) An information processing system including: an information processing device having a control unit that estimates a self-position in a global coordinate system corresponding to a real space, acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space, sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information, calculates the position of the real object in the global coordinate system based on image information, and corrects, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device; and the other device.
(19) An information processing method including: estimating a self-position in a global coordinate system corresponding to a real space; acquiring coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
(20) A program that causes a computer to execute processing of: estimating a self-position in a global coordinate system corresponding to a real space; acquiring coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
1 ... Control unit, 10 ... HMD, 20 ... Server device, 30 ... Correction real object, 31 ... Correction virtual object, 100 ... Information processing system

Abstract

An information processing device according to the present technology includes a control unit. The control unit estimates its own position in a global coordinate system corresponding to a real space; acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; sets the position of the first virtual object in the global coordinate system on the basis of the own position and the coordinate information; calculates the position of the real object in the global coordinate system on the basis of image information; and corrects the position of a second virtual object, which is AR-displayed in common with the other device, on the basis of the positional relationship between the first virtual object and the real object.

Description

Information processing device, information processing system, information processing method, and program
With either of the two methods of creating map information (creating it in advance, or creating it simultaneously with self-position estimation), the estimated self-position may deviate from the actual position in the real space, resulting in an error. Such an error causes misalignment of a common virtual object that is AR-displayed on a plurality of AR devices.
Patent Document 1: Japanese Unexamined Patent Publication No. 2012-168646
There is a need for a technology capable of accurately AR-displaying a common virtual object at the same position among a plurality of AR devices.
In view of the above circumstances, an object of the present technology is to provide a technology capable of accurately AR-displaying a common virtual object at the same position among a plurality of AR devices.
The information processing device according to the present technology includes a control unit.
The control unit estimates a self-position in a global coordinate system corresponding to a real space; acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculates the position of the real object in the global coordinate system based on image information; and corrects, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
This makes it possible to accurately AR-display a common virtual object at the same position among a plurality of AR devices.
In the above information processing device, the control unit may correct the position of the second virtual object based on the difference between the position of the first virtual object and the position of the real object.
In the above information processing device, the control unit may set the position of the first virtual object with respect to the real object in the global coordinate system based on the image information, and correct the position of the second virtual object based on the difference between the position of the first virtual object based on the coordinate information and the position of the first virtual object based on the image information.
In the above information processing device, the control unit may calculate a correction value based on the difference and correct the position of the second virtual object using the correction value.
In the above information processing device, the control unit may correct the position of the second virtual object by moving the position of the second virtual object according to the correction value.
In the above information processing device, the control unit may correct the position of the second virtual object by rotating the second virtual object according to the correction value.
In the above information processing device, the control unit may change the degree of correction by the correction value.
In the above information processing device, the control unit may change the degree of correction by the correction value according to the distance between the information processing device and the real object at the time the correction value was calculated.
In the above information processing device, the control unit may change the degree of correction by the correction value according to the distance between the other device and the real object at the time the other device set the position of the first virtual object with respect to the real object.
In the above information processing device, the other device may set the position of the first virtual object so that the first virtual object can be AR-displayed overlapping the real object.
In the above information processing device, the other device may set the position of the first virtual object so that the first virtual object can be AR-displayed in the vicinity of the real object.
In the above information processing device, the other device may AR-display the first virtual object.
In the above information processing device, the control unit may AR-display the first virtual object.
In the above information processing device, the other device may select, based on image information acquired by the other device, the real object on which the first virtual object is to be located from among a plurality of real objects existing in the real space.
In the above information processing device, the other device may select a real object satisfying a predetermined condition as the real object on which the first virtual object is to be located.
In the above information processing device, the predetermined condition may be that the real object has a specific shape.
In the above information processing device, the predetermined condition may be that the real object has a three-dimensional shape that is substantially uniquely identified regardless of the direction from which the real object is viewed.
The information processing system according to the present technology includes an information processing device and another device.
The information processing device has a control unit.
The control unit estimates a self-position in a global coordinate system corresponding to a real space; acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that the other device sharing the global coordinate system has set for a real object in the real space; sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculates the position of the real object in the global coordinate system based on image information; and corrects, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
The information processing method according to the present technology includes: estimating a self-position in a global coordinate system corresponding to a real space; acquiring coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
The program according to the present technology causes a computer to execute processing of: estimating a self-position in a global coordinate system corresponding to a real space; acquiring coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
FIG. 1 is a diagram showing an information processing system according to the first embodiment of the present technology.
FIG. 2 is a perspective view showing an HMD in the information processing system.
FIG. 3 is a block diagram showing the internal configuration of the HMD.
FIG. 4 is a timing chart showing the processing of the information processing system.
FIG. 5 is a timing chart showing the processing of the information processing system.
FIG. 6 is a diagram showing a user wearing the first HMD and a user wearing the second HMD looking at the correction real object.
FIG. 7 is a diagram showing an example of image information when the correction real object is imaged by the imaging unit of the first HMD in the example shown in FIG. 6.
FIG. 8 is a diagram showing the feature point group extracted from the image information shown in FIG. 7.
FIG. 9 is a diagram showing an example in which the coordinate position of the correction virtual object is set with respect to the correction real object.
FIG. 10 is a diagram showing the state when the position of the correction virtual object is set (AR-displayed) by the second HMD.
FIG. 11 is a diagram showing the state when the correction real object is imaged by the imaging unit of the second HMD.
FIG. 12 is a diagram showing the feature point group extracted from the image information shown in FIG. 11.
FIG. 13 is a diagram showing the difference between the position and orientation of the correction virtual object and the position and orientation of the correction real object.
FIG. 14 is a diagram showing an example in which the coordinate position of the correction virtual object is set (AR-displayed) with respect to the correction real object.
FIG. 15 is a diagram showing the difference between the position and orientation of the correction virtual object based on coordinate information and the position and orientation of the correction virtual object based on image information.
Hereinafter, embodiments of the present technology will be described with reference to the drawings.
≪First Embodiment≫
<Overall system configuration and configuration of each part>
FIG. 1 is a diagram showing an information processing system 100 according to the first embodiment of the present technology. As shown in FIG. 1, the information processing system 100 includes a plurality of HMDs (Head Mounted Displays) 10 and a server device 20.
In the first embodiment, an HMD 10 worn on the user's head will be described as an example of an AR device (information processing device) capable of AR-displaying virtual objects. In the example shown in FIG. 1, the number of HMDs 10 is two, but the number of HMDs 10 is not particularly limited as long as it is two or more.
"HMD 10"
FIG. 2 is a perspective view showing the HMD 10. FIG. 3 is a block diagram showing the internal configuration of the HMD.
As shown in these figures, the HMD 10 includes an HMD main body 11, a control unit 1, a storage unit 2, a display unit 3, an imaging unit 4, an inertial sensor 5, an operation unit 6, and a communication unit 7.
The HMD main body 11 is worn on the user's head. The HMD main body 11 has a front portion 12, a right temple portion 13 provided on the right side of the front portion 12, a left temple portion 14 provided on the left side of the front portion 12, and a glass portion 15 attached below the front portion 12.
At least a part of the display unit 3 is provided in the glass portion 15. The display unit 3 is a light-transmissive display (optical see-through display) and includes, for example, an OLED (Organic Light Emitting Diode) as a light source and a light guide plate. The display unit 3 can AR-display a virtual object under the control of the control unit 1. The display unit 3 may take various forms, such as a configuration using a half mirror or a retinal projection display. The light source of the display unit 3 may be provided in the front portion 12, the right temple portion 13, the left temple portion 14, or the like.
AR display means displaying a virtual object so that, from the user's point of view, it is perceived as if it were a real object existing in the real space. The display unit 3 may also be a video see-through display. In this case, an image in which the virtual object is superimposed on the image captured by the imaging unit 4 is displayed on the display unit 3.
The imaging unit 4 is, for example, a camera, and includes an image sensor such as a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor and an optical system such as an imaging lens. The imaging unit 4 is provided facing outward on the outer surface of the front portion 12, images real objects ahead in the user's line-of-sight direction, and outputs the image information (depth information) obtained by the imaging to the control unit 1.
Two imaging units 4 are provided in the front portion 12 at a predetermined interval in the lateral direction. The location and number of the imaging units 4 can be changed as appropriate. As the imaging unit 4, an infrared sensor may be used instead of a camera, or a combination of a camera and an infrared sensor may be used.
The inertial sensor 5 includes a three-axis acceleration sensor that detects acceleration in three axial directions and an angular velocity sensor that detects angular velocities around three axes. The inertial sensor 5 outputs the detected three-axis acceleration and three-axis angular velocity to the control unit 1 as inertial information.
In the present embodiment, the inertial sensor 5 has three detection axes, but the number of detection axes may be one or two. Further, in the present embodiment, two types of sensors are used as the inertial sensor 5, but one type, or three or more types, of sensors may be used. Other examples of the inertial sensor 5 include a speed sensor and an angle sensor.
The operation unit 6 is, for example, a push-type, contact-type, or other type of operation unit; it detects operations by the user and outputs them to the control unit 1. In the example shown in FIG. 2, the operation unit 6 is provided on the front side of the left temple portion 14, but the operation unit 6 may be provided at any position that is easy for the user to operate.
The communication unit 7 can communicate wirelessly or by wire with the other HMD 10 and the server device 20.
The control unit 1 executes various calculations based on various programs stored in the storage unit 2 and comprehensively controls each unit of the HMD 10. The control unit 1 includes a CPU (Central Processing Unit) 16, a VPU (Vision Processing Unit) 17, and a GPU (Graphics Processing Unit) 18.
The VPU 17 executes processing related to self-position estimation, processing for analyzing the image information acquired by the imaging unit 4, and the like. Self-position estimation includes relocalization and motion tracking.
Relocalization is a technique for estimating the current self-position and orientation in the global coordinate system, based on the images captured by the imaging unit 4 and the map information, immediately after the HMD 10 is powered on or at predetermined timings thereafter.
Motion tracking is a technique for estimating the current self-position and orientation in the global coordinate system by calculating, at short intervals, the amount of change (movement) in the self-position and orientation based on image information (and inertial information) and sequentially adding up these changes.
In relocalization, the VPU 17 first processes the image information acquired by the imaging unit 4 and extracts a feature point group from it. The VPU 17 then estimates its own position and orientation in the global coordinate system by comparing the extracted feature point group with the feature point group included in the map information (or the mesh information obtained by connecting the feature points).
Relocalization is executed immediately after power-on, when self-position estimation based on motion tracking fails, and so on. Alternatively, the process of comparing the feature point group from the image information of the imaging unit 4 with the feature point group included in the map information may be executed constantly, and relocalization may be executed when these feature point groups are successfully matched.
In motion tracking, the VPU 17 first processes the image information acquired by the imaging unit 4 and extracts a feature point group from it. The VPU 17 then compares the feature point group of the previous image information with the feature point group of the current image information to calculate the amount of change between the previous self-position and orientation and the current self-position and orientation. The VPU 17 estimates the current self-position and orientation in the global coordinate system by adding this amount of change to the previous self-position and orientation.
Note that immediately after relocalization is executed, the base self-position and orientation to which the amount of change is added are the self-position and orientation estimated by the relocalization.
The description here covers the case where the image information from the imaging unit 4 is used for motion tracking, but inertial information from the inertial sensor 5 may be used instead of the image information, or both image information and inertial information may be used.
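The following is a deliberately simplified sketch of the two estimation modes described above. It tracks position only (orientation would be handled analogously), and the map lookup and frame-to-frame delta are assumed to come from the feature matching described in the text; it is an illustration, not the actual firmware of the HMD 10.

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float

    class SelfPoseEstimator:
        def __init__(self, map_poses):
            # map_poses: hypothetical mapping from a matched map region to the
            # absolute pose recorded in the shared map information
            self.map_poses = map_poses
            self.pose = None

        def relocalize(self, matched_region_id):
            # Absolute fix: adopt the pose associated with the matched feature points.
            if matched_region_id in self.map_poses:
                self.pose = self.map_poses[matched_region_id]
            return self.pose

        def track_motion(self, dx, dy, dz):
            # Relative update: add the change estimated from consecutive
            # feature point groups (or from inertial information).
            if self.pose is not None:
                self.pose = Pose(self.pose.x + dx, self.pose.y + dy, self.pose.z + dz)
            return self.pose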
The CPU 16 executes processing for setting the positions and display content of virtual objects, delay compensation processing, and the like. The CPU 16 also executes various kinds of processing related to the application that is executed in common on each HMD.
The GPU 18 executes processing for controlling the AR display of virtual objects by the display unit 3. Specifically, the GPU 18 calculates the image of the virtual object to be AR-displayed according to the user's viewpoint position and outputs it to the display unit 3.
In the present embodiment, the display of the display unit 3 is controlled so that a common virtual object is AR-displayed at the same position in the global coordinate system on the plurality of HMDs 10.
As an example, consider a case where a survival game application, in which a plurality of users shoot at each other with simulated guns, is executed on the plurality of HMDs 10. In this case, when a user performs the action of firing a simulated gun, a bullet virtual object is AR-displayed at the same position in the global coordinate system on each HMD 10.
In the present embodiment, there are two types of virtual objects. The first type is the correction virtual object 31 (first virtual object), and the second type is the normal virtual object (second virtual object).
The normal virtual object is a common virtual object that is AR-displayed at the same position in the global coordinate system while a common application is being executed on the plurality of HMDs 10. In the survival game example, the bullet virtual object corresponds to the normal virtual object.
The correction virtual object 31 is a virtual object used to correct the AR display position so that the normal virtual object is displayed at the same position and orientation on the plurality of HMDs 10. The correction virtual object 31 may be AR-displayed, or its position may simply be set in the global coordinate system without it being AR-displayed; in the first embodiment, the case where the correction virtual object 31 is AR-displayed will be described.
Note that, in the description of each embodiment, the term "virtual object" by itself is used as a general term for the normal virtual object and the correction virtual object 31.
The storage unit 2 includes a nonvolatile memory that stores various programs and various data required for the processing of the control unit 1, and a volatile memory used as a work area of the control unit 1. The various programs may be read from a portable recording medium such as an optical disc or a semiconductor memory, or may be downloaded from the server device 20 on a network.
In the present embodiment, the storage unit 2 stores, in particular, the map information used in common by each HMD 10. Map information can either be created in advance or created simultaneously with self-position estimation without being created in advance (SLAM). Either of these two methods may be used, but in the description of the present embodiment, the case where the map information is created in advance will be described.
The map information is three-dimensional information corresponding to the global coordinate system and is used for self-position estimation (relocalization). The map information includes information on the feature point groups corresponding to each real object in the real space (or mesh information obtained by connecting the feature points). Each feature point included in these feature point groups is associated with position information in the global coordinate system.
The map information is generated in advance by an operator, for example based on image information acquired with a camera or the like in the real space where the application is to be executed on each HMD 10.
In the present embodiment, common map information is used for self-position estimation on each HMD 10, so that the position and orientation of each HMD 10 are expressed in a common global coordinate system.
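As a rough illustration of the map information described above, the following sketch shows one possible in-memory representation, with each feature point tied to a position in the global coordinate system. The field names are assumptions, not the format actually used by the embodiment.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class MapFeaturePoint:
        descriptor: bytes                      # appearance signature used for matching
        position: Tuple[float, float, float]   # location in the global coordinate system

    @dataclass
    class MapInfo:
        points: List[MapFeaturePoint]          # feature point group covering the real space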
"Server device 20"
The server device 20 is configured to be able to communicate with each HMD 10. The server device 20 includes a control unit, a storage unit, a communication unit, and the like (not shown).
The control unit executes various calculations based on various programs stored in the storage unit and comprehensively controls each unit of the server device.
The storage unit includes a nonvolatile memory that stores various programs and various data required for the processing of the control unit, and a volatile memory used as a work area of the control unit. The various programs may be read from a portable recording medium such as an optical disc or a semiconductor memory.
The communication unit is configured to be able to communicate with each HMD.
<Operation explanation>
Next, the processing of the information processing system 100 will be described. FIGS. 4 and 5 are timing charts showing the processing of the information processing system 100.
In the description of the processing, the two HMDs 10 are referred to as the first HMD 10a (other device) and the second HMD 10b (information processing device) for convenience.
As shown in FIG. 4, the server device 20 first transmits the map information created in advance to each of the first HMD 10a and the second HMD 10b. The control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b each receive the map information and store it in the storage unit 2; the map information is thereby shared by the first HMD 10a and the second HMD 10b, and the global coordinate system is shared.
After the map information is received, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b each estimate their self-position and orientation in the global coordinate system based on a comparison between the feature point group in the image information acquired by their own imaging unit 4 and the feature point group in the map information. That is, when the map information is received, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b execute self-position estimation based on relocalization.
Thereafter, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b compare the feature point group of the previous image information with the feature point group of the current image information to calculate the amount of change between the previous self-position and orientation and the current self-position and orientation. The control units 1 (VPU 17) estimate the current self-position and orientation in the global coordinate system by adding this amount of change to the previous self-position and orientation (immediately after relocalization, the self-position and orientation estimated by the relocalization). That is, after the self-position estimation based on relocalization, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b execute self-position estimation based on motion tracking.
Self-position estimation based on motion tracking is then executed continuously. While this motion-tracking-based self-position estimation continues, self-position estimation based on relocalization may be executed at predetermined timings.
Relocalization-based self-position estimation is executed, for example, when motion-tracking-based self-position estimation fails, or, in a configuration where matching against the map is always attempted, when matching between the feature point group of the image information and the feature point group of the map information succeeds.
In this way, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b each estimate their self-position and orientation in the common global coordinate system.
Each time the self-position and orientation are estimated, the control units 1 (VPU 17) of the first HMD 10a and the second HMD 10b transmit them to the server device 20. The server device 20 transmits the self-position and orientation received from one of the first HMD 10a and the second HMD 10b to the other HMD 10. As a result, the first HMD 10a and the second HMD 10b can each recognize the position and orientation of the other HMD 10 in the global coordinate system.
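The relay performed by the server device 20 can be pictured as in the following sketch; the message format and method names are illustrative assumptions, not part of the specification.

    class PoseRelayServer:
        def __init__(self):
            self.connections = {}   # hmd_id -> connection object with a send() method

        def register(self, hmd_id, connection):
            self.connections[hmd_id] = connection

        def on_pose_received(self, sender_id, pose):
            # Forward the sender's estimated self-position and orientation to
            # every other HMD sharing the global coordinate system.
            for hmd_id, conn in self.connections.items():
                if hmd_id != sender_id:
                    conn.send({"type": "peer_pose", "hmd": sender_id, "pose": pose})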
Here, in order to AR-display a common (normal) virtual object at exactly the same position on the first HMD 10a and the second HMD 10b, each of the first HMD 10a and the second HMD 10b needs to perform self-position estimation accurately in the global coordinate system.
However, the first HMD 10a and the second HMD 10b cannot perform self-position estimation with perfect accuracy. That is, the estimated self-position and orientation may deviate from the actual position and orientation in the real space, producing an error. Such an error causes misalignment of the common (normal) virtual object that should be AR-displayed at the same position in the global coordinate system on the first HMD 10a and the second HMD 10b.
Examples of causes of misalignment of a common (normal) virtual object include the following:
- Misrecognition of the self-position due to the detection of places where similar feature point patterns exist at multiple locations in the real space (for example, multiple windows of the same shape, or a wall with a repeating pattern)
- Misrecognition of the self-position due to dirt on the lens of the imaging unit 4
- Relative positional displacement between the imaging unit 4 and other parts (in particular, the display unit 3)
- Loss or drift of the self-position because the feature points corresponding to the map information cannot be detected continuously (for example, when an area for which no map information was created in advance is detected, or when the feature points at the time the map information was created have changed from the current feature points due to shadows, dynamic objects, and the like)
The position of the common (normal) virtual object is expressed as three-dimensional coordinate information in the global coordinate system. If one of the first HMD 10a and the second HMD 10b AR-displays the common virtual object at the position (x, y, z) in the global coordinate system, the other HMD 10 likewise AR-displays the common virtual object at the position (x, y, z) in the global coordinate system.
Suppose that, in this case, an error arises on one of the HMDs 10 between the estimated self-position and orientation and the actual position and orientation. Then the position and orientation, as seen by the user, of the common (normal) virtual object AR-displayed on one HMD 10 will differ from the position and orientation, as seen by the user, of the common virtual object AR-displayed on the other HMD 10.
In the survival game example, a situation can arise in which, from the viewpoint of the user wearing one HMD 10, the bullet virtual object appears to hit the user wearing the other HMD 10, whereas from the viewpoint of the user wearing the other HMD 10, the bullet virtual object does not appear to hit them.
Therefore, in the present embodiment, the following processing is executed to correct such misalignment of the common (normal) virtual object.
After self-position estimation starts, the control unit 1 (VPU 17) of the first HMD 10a determines, based on the image information from the imaging unit 4, whether a correction real object 30 (see FIGS. 6 and 7, etc.) exists within a certain field of view and distance ahead in the direction of the user's line of sight. That is, the control unit 1 (VPU 17) of the first HMD 10a executes processing for selecting (finding) the correction real object 30 from among the plurality of real objects existing in the real space, based on the image information.
The correction real object 30 is the real object on which the correction virtual object 31 is to be located (AR-displayed). The correction real object 30 is a real object that satisfies a predetermined condition among the plurality of real objects existing in the real space, and the first HMD 10a and the second HMD 10b select a real object satisfying the predetermined condition as the correction real object 30.
In the present embodiment, the condition for being selected as the correction real object 30 is that the real object has a specific shape. Typically, this condition is that the real object has a three-dimensional shape that is substantially uniquely identified regardless of the direction from which it is viewed. For example, the correction real object 30 has a regular shape such as a sphere, a rectangular parallelepiped (including a cube), a cylinder, a polygonal prism, a cone, or a polygonal pyramid.
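The selection rule described above can be summarized as a simple predicate, as in the following sketch. It assumes a shape classifier (not shown) has already labeled each detected real object, and the label set is illustrative.

    # Shape classes whose appearance substantially uniquely identifies the
    # object regardless of the viewing direction.
    UNIQUE_SHAPE_CLASSES = {
        "sphere", "cuboid", "cube", "cylinder",
        "polygonal_prism", "cone", "polygonal_pyramid",
    }

    def is_correction_candidate(shape_class: str) -> bool:
        # A detected real object qualifies as a correction real object 30
        # when its recognized shape satisfies the predetermined condition.
        return shape_class in UNIQUE_SHAPE_CLASSES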
FIG. 6 is a diagram showing a user wearing the first HMD 10a and a user wearing the second HMD 10b looking at the correction real object 30. In the example shown in FIG. 6, the shape of the correction real object 30 is a cube.
In the example shown in FIG. 6, the user wearing the first HMD 10a is looking at the correction real object 30 diagonally from the left, and the user wearing the second HMD 10b is looking at the correction real object 30 diagonally from the right.
FIG. 7 is a diagram showing an example of the image information when the correction real object 30 is imaged by the imaging unit 4 of the first HMD 10a in the example shown in FIG. 6. FIG. 8 is a diagram showing the feature point group extracted from the image information shown in FIG. 7.
The control unit 1 (VPU 17) of the first HMD 10a executes the processing of selecting (finding) the correction real object 30 based on, for example, the feature point group information shown in FIG. 8, which is extracted from image information such as that shown in FIG. 7.
The number of correction real objects 30 selected is not necessarily one. That is, each time a correction real object 30 is detected by the first HMD 10a, through the movement of the user wearing the first HMD 10a or the user looking around, a new correction real object 30 is added in turn.
After selecting the correction real object 30, the control unit 1 (CPU 16) of the first HMD 10a sets the coordinate position (AR display position) of the correction virtual object 31 with respect to the correction real object 30 in its own global coordinate system, based on the feature point group information of the correction real object 30 (see FIG. 8).
The coordinate position of the correction virtual object 31 set at this time includes information such as the position of the center of gravity of the correction virtual object 31, the positions of the vertices (corners) of each part (face) constituting the correction virtual object 31, and the orientation of the correction virtual object 31.
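A minimal sketch of the information listed above, as one possible data structure for the coordinate position of the correction virtual object 31; the field names and the use of Euler angles for orientation are assumptions.

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class CorrectionObjectPose:
        centroid: Vec3        # position of the center of gravity
        corners: List[Vec3]   # vertices (corners) of each face of the object
        orientation: Vec3     # orientation, e.g. as Euler angles (assumption)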
FIG. 9 is a diagram showing an example in which the coordinate position of the correction virtual object 31 is set (AR-displayed) with respect to the correction real object 30.
After setting the coordinate position of the correction virtual object 31, the control unit 1 (GPU 18) of the first HMD 10a causes the display unit 3 to AR-display the correction virtual object 31 at that coordinate position. Note that the correction virtual object 31 does not necessarily have to be AR-displayed on the first HMD 10a; its coordinate position may simply be set.
In the present embodiment, the shape and size of the correction virtual object 31 are not determined in advance but are determined based on the shape and size of the correction real object 30. Specifically, in the present embodiment, the shape and size of the correction virtual object 31 are related to the correction real object 30 and are the same as the shape (cube) and size of the correction real object 30.
Further, the position of the correction virtual object 31 is set so that, when it is AR-displayed, the correction virtual object 31 is AR-displayed overlapping the correction real object 30 at the same position.
Note that the shape and size of the correction virtual object 31 may instead be determined in advance (for example, when the correction virtual object 31 is a character). Further, the position of the correction virtual object 31 may be set in the vicinity of the correction real object 30 (for example, when a character correction virtual object 31 stands on the correction real object 30).
After setting the position of the correction virtual object 31, or after AR-displaying the correction virtual object 31, the control unit 1 (CPU 16) of the first HMD 10a transmits the coordinate information of the position of the correction virtual object 31 to the server device 20.
 For example, as shown in FIG. 9, in the case of the cubic correction virtual object 31, the control unit 1 (CPU 16) of the first HMD 10a transmits, as the coordinate information of the correction virtual object 31, the coordinate information of the corners (a) to (g) on the three visible faces to the server device 20:
 Front: [{x_a, y_a, z_a}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_d, y_d, z_d}]
 Top: [{x_e, y_e, z_e}, {x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_a, y_a, z_a}]
 Left: [{x_e, y_e, z_e}, {x_a, y_a, z_a}, {x_d, y_d, z_d}, {x_g, y_g, z_g}]
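 As an illustration only, the transmitted face lists might be represented as sketched below. The corner coordinates (a unit cube), the JSON serialization, and all names are assumptions for the sketch, not part of the patent (Python):

```python
import json

# Corner labels follow the description above: (a)-(d) on the front face,
# (e) and (f) added by the top face, (g) by the left face. Coordinates are
# placeholder values for a unit cube in the sender's global coordinate system.
corners = {
    "a": (0.0, 1.0, 0.0),  # front-top-left
    "b": (1.0, 1.0, 0.0),  # front-top-right
    "c": (1.0, 0.0, 0.0),  # front-bottom-right
    "d": (0.0, 0.0, 0.0),  # front-bottom-left
    "e": (0.0, 1.0, 1.0),  # back-top-left
    "f": (1.0, 1.0, 1.0),  # back-top-right
    "g": (0.0, 0.0, 1.0),  # back-bottom-left
}

# Each face is transmitted as an ordered list of its four corner coordinates,
# mirroring the Front/Top/Left lists above.
payload = {
    "front": [corners[k] for k in ("a", "b", "c", "d")],
    "top":   [corners[k] for k in ("e", "f", "b", "a")],
    "left":  [corners[k] for k in ("e", "a", "d", "g")],
}

print(json.dumps(payload))  # serialized form sent to the server device
```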
 Upon receiving the coordinate information of the correction virtual object 31 from the first HMD 10a, the server device 20 transmits this coordinate information to the second HMD 10b. In this example the coordinate information is transmitted from the first HMD 10a to the second HMD 10b via the server device 20, but it may instead be transmitted directly from the first HMD 10a to the second HMD 10b.
 Upon receiving the coordinate information of the correction virtual object 31, the control unit 1 (CPU 16) of the second HMD 10b determines, in its own global coordinate system, based on its own position and posture together with the acquired coordinate information of the correction virtual object 31, whether the correction real object 30 selected by the first HMD 10a exists within a certain field of view and distance ahead in the user's line-of-sight direction.
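 As an illustration of this check, a minimal sketch follows (Python with NumPy). The distance and field-of-view thresholds and all names are illustrative assumptions, not values from the patent:

```python
import numpy as np

def object_in_view(self_pos, gaze_dir, obj_pos,
                   max_distance=5.0, half_fov_deg=30.0):
    # True if obj_pos is no farther than max_distance from self_pos and
    # lies within half_fov_deg of the gaze direction.
    to_obj = np.asarray(obj_pos, dtype=float) - np.asarray(self_pos, dtype=float)
    dist = float(np.linalg.norm(to_obj))
    if dist > max_distance:
        return False
    if dist == 0.0:
        return True  # standing at the object counts as seeing it
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    cos_angle = float(np.dot(to_obj / dist, gaze))
    return cos_angle >= float(np.cos(np.radians(half_fov_deg)))

# Example: object 2 m ahead, gaze straight along +Z.
print(object_in_view((0, 0, 0), (0, 0, 1), (0, 0, 2)))  # True
```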
 Assume here that the user wearing the first HMD 10a and the second HMD 10b were at distant locations at the time the correction real object 30 was selected by the first HMD 10a. In this case, the second HMD 10b determines that the correction real object 30 is within the user's field of view when the second HMD 10b has approached to within a certain distance of the correction real object 30 selected by the first HMD 10a and the user's line of sight is directed toward that correction real object 30.
 Note that, in order to correct the position of the shared (normal) virtual object, the user wearing the second HMD 10b needs to approach a correction real object 30 selected by the first HMD 10a and look in its direction. Meanwhile, since the user wearing the first HMD 10a basically moves while looking around, correction real objects 30 come to be scattered over multiple locations, their number increasing with the movement of the first HMD 10a's user. Accordingly, the probability that the user wearing the second HMD 10b approaches and faces a correction real object 30 selected by the first HMD 10a increases each time a new correction real object 30 is selected.
 When a correction real object 30 selected by the first HMD 10a is within a certain distance of the user wearing the second HMD 10b and exists within the user's field of view, the control unit 1 (CPU 16) of the second HMD 10b executes the following processing. That is, the control unit 1 (CPU 16) of the second HMD 10b sets the position of the correction virtual object 31 in its own global coordinate system based on its own position and posture and the acquired coordinate information of the correction virtual object 31.
 FIG. 10 is a diagram showing the state when the position of the correction virtual object 31 has been set (AR-displayed) by the second HMD 10b based on the coordinate information.
 Next, the control unit 1 (GPU 18) of the second HMD 10b causes the display unit 3 to AR-display the correction virtual object 31 at the set position. Note that the control unit 1 of the second HMD 10b may only set the position of the correction virtual object 31 without actually AR-displaying it.
 The processing performed when the control unit 1 (CPU 16) of the second HMD 10b sets the position of the correction virtual object 31 will now be described concretely with an example.
 The control unit 1 (CPU 16) of the second HMD 10b has acquired, as the coordinate information of the cubic virtual object, the coordinate information of the corners (a) to (g) on the front, top, and left faces described above ([{x_a, y_a, z_a}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_d, y_d, z_d}], [{x_e, y_e, z_e}, {x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_a, y_a, z_a}], and [{x_e, y_e, z_e}, {x_a, y_a, z_a}, {x_d, y_d, z_d}, {x_g, y_g, z_g}]). From this information, the control unit 1 (CPU 16) of the second HMD 10b determines how the cubic correction virtual object 31 appears from its current self-position and posture.
 As shown in FIG. 6, when the correction real object 30 was selected by the first HMD 10a, the user wearing the first HMD 10a was viewing the correction real object 30 from a diagonally left direction. On the other hand, when the user wearing the second HMD 10b approaches and views the correction real object 30, the user views it from a diagonally right direction.
 Therefore, the front, top, and left faces of the correction virtual object 31 are visible from the first HMD 10a side, whereas the front, top, and right faces are visible from the second HMD 10b side.
 Of the three visible faces (front, top, and right), the control unit 1 (CPU 16) of the second HMD 10b has the coordinate information for the faces other than the right face, namely the front and top faces ([{x_a, y_a, z_a}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_d, y_d, z_d}] and [{x_e, y_e, z_e}, {x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_a, y_a, z_a}]). Accordingly, it sets those coordinates at that position in its own global coordinate system and calculates how the front and top faces appear from its own position and posture.
 On the other hand, since the control unit 1 (CPU 16) of the second HMD 10b does not have the coordinate information of the right face, it predicts the coordinate information of the right face ([{x_f, y_f, z_f}, {x_b, y_b, z_b}, {x_c, y_c, z_c}, {x_h, y_h, z_h}]) from the coordinate information of the other faces (in the example of FIG. 10, the coordinates of corner (h) in particular must be predicted). The control unit 1 (CPU 16) of the second HMD 10b then sets the predicted coordinates in its own global coordinate system and determines how the right face appears from its own position and posture.
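 For a box-shaped object such as the cube here, the missing corner can be predicted by vector arithmetic on the received corners. A minimal sketch under that assumption follows (corner labels follow the face lists above; function and variable names are illustrative):

```python
import numpy as np

# For a rectangular box, the hidden back-bottom-right corner satisfies
# h = c + (f - b): take the front-bottom-right corner (c) and shift it by
# the front-to-back edge vector (f - b).
def predict_corner_h(b, c, f):
    b, c, f = (np.asarray(p, dtype=float) for p in (b, c, f))
    return c + (f - b)

# With the unit-cube placeholder values used earlier:
h = predict_corner_h(b=(1, 1, 0), c=(1, 0, 0), f=(1, 1, 1))
print(h)  # -> [1. 0. 1.], the back-bottom-right corner
# The predicted right face is then [f, b, c, h], matching the list above.
```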
 As described above, in the present embodiment a real object having a regular shape whose three-dimensional shape can be uniquely identified from any viewing direction is selected as the correction real object 30. Therefore, even for the portion of the correction virtual object 31 corresponding to a part of the correction real object 30 that was not visible from the first HMD 10a when the correction real object 30 was selected, the control unit 1 (CPU 16) of the second HMD 10b can accurately predict the coordinates of that portion.
 Note that the prediction of the coordinates of the virtual object portion corresponding to the invisible part of the correction real object 30 may be executed on the first HMD 10a side instead of the second HMD 10b. In this case, the predicted coordinate information is transmitted (via the server) to the second HMD 10b together with the coordinate information corresponding to the visible parts.
 Here, if the self-positions and postures of the first HMD 10a and the second HMD 10b are accurate, the relative positional relationship between the correction virtual object 31 and the correction real object 30 in the first HMD 10a (see FIG. 9) and that in the second HMD 10b (see FIG. 10) should be the same. Accordingly, if the self-positions and postures of the first HMD 10a and the second HMD 10b are accurate, the position of the correction virtual object 31 should be set in the second HMD 10b so as to overlap the correction real object 30 exactly.
 On the other hand, if the self-position and posture of at least one of the first HMD 10a and the second HMD 10b are inaccurate, the relative positional relationship between the correction virtual object 31 and the correction real object 30 in the first HMD 10a (see FIG. 9) differs from that in the second HMD 10b (see FIG. 10). In this case, as shown in FIG. 10, the position of the correction virtual object 31 is set at a position shifted from the correction real object 30.
 The present technology exploits this relationship: in the second HMD 10b, the AR display position of the common (normal) virtual object is corrected based on the positional relationship between the correction virtual object 31 and the correction real object 30.
 After setting the coordinate position of the correction virtual object 31, or after AR-displaying the correction virtual object 31, the control unit 1 (VPU 17) of the second HMD 10b acquires image information from the imaging unit 4 and extracts a feature point cloud from the image information.
 FIG. 11 is a diagram showing the state when the correction real object 30 is imaged by the imaging unit 4 of the second HMD 10b. FIG. 12 is a diagram showing the feature point cloud extracted from the image information shown in FIG. 11.
 Next, based on the coordinate information acquired from the first HMD 10a, the control unit 1 (CPU 16) of the second HMD 10b determines which of the feature point clouds included in the image information corresponds to the correction real object 30. The control unit 1 (CPU 16) of the second HMD 10b then calculates the position and posture of the correction real object 30 in its own global coordinate system based on the feature point cloud corresponding to the correction real object 30.
 Having calculated the position and posture of the correction real object 30, the control unit 1 (CPU 16) of the second HMD 10b obtains the difference between the position and posture of the correction virtual object 31 set based on the coordinate information from the first HMD 10a and the position and posture of the correction real object 30.
 FIG. 13 is a diagram showing the difference between the position and posture of the correction virtual object 31 and the position and posture of the correction real object 30.
 Having obtained the difference, the control unit 1 (CPU 16) of the second HMD 10b stores this difference in the storage unit 2 as a correction value. Then, when AR-displaying a (normal) virtual object shared with the first HMD 10a, the control unit 1 (CPU 16) of the second HMD 10b uses this difference as a correction value to correct the AR display position of the shared virtual object. When the correction virtual object 31 is AR-displayed, the correction by the correction value may also be applied to the correction virtual object 31 itself.
 As a method of obtaining the correction value, there is, for example, a method of obtaining a translation amount and rotation angles. The translation amount is calculated from, for example, the difference between the center-of-gravity position (x_G, y_G, z_G) of the correction virtual object 31 and the center-of-gravity position (x'_G, y'_G, z'_G) of the correction real object 30.
 The center-of-gravity position (x_G, y_G, z_G) of the correction virtual object 31 is calculated from, for example, the positions of the corners (a) to (h) of the correction virtual object 31. Similarly, the center-of-gravity position (x'_G, y'_G, z'_G) of the correction real object 30 is calculated from the positions of its corners (a') to (h').
 In this case, the translation amount (correction value) (T_x, T_y, T_z) is calculated as (T_x, T_y, T_z) = (x_G - x'_G, y_G - y'_G, z_G - z'_G).
 The rotation angles (correction values) (θ_x, θ_y, θ_z) are calculated as cos θ = A · A' / (|A| |A'|), using a vector A connecting two specific corners among the corners (a) to (h) of the correction virtual object 31 and the vector A' connecting the corresponding two corners among the corners (a') to (h') of the correction real object 30. This formula is used to calculate the rotation angles around the X axis, the Y axis, and the Z axis, respectively.
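 As a concrete illustration, the centroid difference and the angle formula above might be computed as sketched below. This is a hedged sketch: the text does not specify which two corners define the vectors A and A', nor how the per-axis angles are extracted from the 3D vectors, so pairing corners (a) and (b) and projecting onto the plane perpendicular to each axis are assumptions here, and all names are illustrative (Python with NumPy):

```python
import numpy as np

def correction_value(virtual_corners, real_corners):
    """Translation and per-axis rotation angles between the virtual object
    corners (a)-(h) and the real object corners (a')-(h'); both (8, 3)."""
    V = np.asarray(virtual_corners, dtype=float)
    R = np.asarray(real_corners, dtype=float)

    # Translation: difference of the centers of gravity,
    # (Tx, Ty, Tz) = (xG - x'G, yG - y'G, zG - z'G).
    T = V.mean(axis=0) - R.mean(axis=0)

    # Rotation: cos(theta) = A.A' / (|A||A'|) between corresponding corner
    # vectors, evaluated per axis by projecting onto the plane normal to it.
    A = V[1] - V[0]    # e.g. corner (a) to corner (b)
    Ap = R[1] - R[0]   # corresponding corners (a') to (b')
    angles = []
    for axis in range(3):
        keep = [i for i in range(3) if i != axis]
        a2, b2 = A[keep], Ap[keep]
        denom = np.linalg.norm(a2) * np.linalg.norm(b2)
        cos_t = np.clip(np.dot(a2, b2) / denom, -1.0, 1.0) if denom else 1.0
        angles.append(float(np.arccos(cos_t)))
    return T, np.array(angles)  # (Tx, Ty, Tz), (theta_x, theta_y, theta_z)
```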
 Then, the calculated translation amount and rotation angles are used as correction values, and the corrections expressed by the following equations are applied to the coordinate points of the common (normal) virtual object. In each of the following equations, the coordinates before correction are P(x, y, z) and the coordinates after correction are P'(x', y', z').
 [Translation]

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} T_x \\ T_y \\ T_z \end{pmatrix}$$

 [Rotation around the X axis]

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

 [Rotation around the Y axis]

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

 [Rotation around the Z axis]

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$
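 The four update steps above can be sketched in code. A minimal sketch follows (Python with NumPy), assuming the standard rotation matrices reconstructed above and an X-then-Y-then-Z application order about the origin; the composition order is not specified in the text, and all names are illustrative:

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def apply_correction(P, T, angles):
    """Translate a coordinate point P of the shared virtual object by T,
    then apply the X-, Y-, and Z-axis rotations in turn."""
    tx, ty, tz = angles
    P = np.asarray(P, dtype=float) + np.asarray(T, dtype=float)
    return rot_z(tz) @ rot_y(ty) @ rot_x(tx) @ P

# Example: shift a point by 10 cm in x, no rotation.
print(apply_correction((1, 0, 0), (0.1, 0, 0), (0, 0, 0)))  # -> [1.1 0. 0.]
```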
 By this correction, the common (normal) virtual object is displayed at the same position and posture in the first HMD 10a and the second HMD 10b.
 Note that the correction value may be transmitted from the second HMD 10b to the server device 20, and the server device 20 may use this correction value to correct the position and posture of the common (normal) virtual object. The calculation of the correction value may also be performed by the server device 20 or the first HMD 10a instead of the second HMD 10b (for example, when the processing performance of the second HMD 10b is low). In this case, the information necessary for calculating the correction value (the position and posture of the correction virtual object 31 in the second HMD 10b, and the position and posture of the correction real object 30) is transmitted from the second HMD 10b to the server device 20 or the first HMD 10a.
 After the correction value has been calculated, when the user wearing the second HMD 10b again approaches a correction real object 30 selected by the first HMD 10a and looks in its direction, a new correction value is calculated and the correction value is updated. Note that when self-position estimation based on relocalization is executed by either the first HMD 10a or the second HMD 10b, the current correction value may be reset to zero.
 Here, a correction value calculated when the distance between the correction real object 30 and the second HMD 10b is large may be less reliable than a correction value calculated when the distance between the correction real object 30 and the second HMD 10b is small. This is because, as the distance between the second HMD 10b and the correction real object 30 increases, the recognition of the position and shape of the correction real object 30 by the second HMD 10b may become inaccurate.
 For this reason, the degree of correction applied to the common (normal) virtual object by the correction value may be varied according to the distance between the second HMD 10b at the time the correction value was calculated and the correction real object 30. In this case, the smaller the distance between the second HMD 10b at the time the correction value was calculated and the correction real object 30, the higher the degree of correction by the correction value.
 As an example, suppose that the common (normal) virtual object is AR-displayed at a certain fixed distance from the second HMD 10b. In this case, when the distance between the second HMD 10b at the time the correction value was calculated and the correction real object 30 is equal to or less than a certain distance, the correction value is multiplied by 1. On the other hand, when that distance exceeds the certain distance, the correction value is multiplied by values such as 0.9, 0.8, and so on as the distance increases.
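 One way to realize such a distance-dependent weight is sketched below; the threshold, the per-meter step, and the names are illustrative placeholders, since the text gives only the example multipliers 1, 0.9, 0.8:

```python
def correction_weight(distance, near_threshold=2.0, falloff=0.1):
    """Weight multiplied into the correction value: 1.0 at or below the
    near threshold, then decreasing (0.9, 0.8, ...) per meter beyond it."""
    if distance <= near_threshold:
        return 1.0
    steps = int(distance - near_threshold) + 1
    return max(0.0, 1.0 - falloff * steps)

print(correction_weight(1.5))  # -> 1.0
print(correction_weight(2.5))  # -> 0.9
print(correction_weight(3.5))  # -> 0.8
```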
 The same applies when the distance between the correction real object 30 and the first HMD 10a was large at the time the first HMD 10a set the coordinate position of the correction virtual object 31 relative to the correction real object 30. That is, a correction value calculated when the distance between the correction real object 30 and the first HMD 10a is large may be less reliable than a correction value calculated when that distance is small.
 This is because, as the distance between the first HMD 10a and the correction real object 30 increases, the recognition of the position and shape of the correction real object 30 by the first HMD 10a becomes inaccurate, and the coordinate information of the correction virtual object 31 transmitted from the first HMD 10a may therefore be inaccurate.
 Accordingly, the degree of correction applied to the common (normal) virtual object by the correction value may be varied according to the distance between the first HMD 10a at the time it set the coordinate position of the correction virtual object 31 and the correction real object 30. In this case, the smaller that distance, the higher the degree of correction by the correction value.
 For example, suppose that the common (normal) virtual object is AR-displayed at a certain fixed distance from the second HMD 10b. In this case, when the distance between the first HMD 10a at the time it set the coordinate position of the correction virtual object 31 and the correction real object 30 is equal to or less than a certain distance, the correction value is multiplied by 1. On the other hand, when that distance exceeds the certain distance, the correction value is multiplied by values such as 0.9, 0.8, and so on as the distance increases.
 In the description here, a correction value is calculated each time a correction real object 30 selected by the first HMD 10a is detected by the second HMD 10b. However, when the correction real objects 30 selected by the first HMD 10a are densely clustered, the second HMD 10b will detect correction real objects 30 frequently and correction values will be calculated frequently, which may increase the processing load on the second HMD 10b. Therefore, for example, the second HMD 10b may calculate a new correction value only when a correction real object 30 is detected after a certain period has elapsed since the previous correction value calculation.
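 A minimal sketch of such rate limiting follows (the interval value and all names are illustrative assumptions):

```python
import time

class CorrectionScheduler:
    """Allow recomputing the correction value only if at least min_interval
    seconds have passed since the previous computation."""
    def __init__(self, min_interval=10.0):
        self.min_interval = min_interval
        self._last = None

    def should_recompute(self, now=None):
        now = time.monotonic() if now is None else now
        if self._last is None or now - self._last >= self.min_interval:
            self._last = now
            return True
        return False

scheduler = CorrectionScheduler()
print(scheduler.should_recompute())  # True on first detection
print(scheduler.should_recompute())  # False until the interval elapses
```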
 In the example here, the transmitting-side HMD 10 that transmits the coordinate information of the correction virtual object 31 and the receiving-side HMD 10 that receives the coordinate information are determined in advance. However, the transmitting-side HMD 10 and the receiving-side HMD 10 need not be determined in advance.
 In this case, for example, both the first HMD 10a and the second HMD 10b execute the process of selecting (finding) a correction real object 30, and whichever HMD 10 finds a correction real object 30 first transmits the coordinate information of the correction virtual object 31. The other HMD 10 calculates a correction value based on the positional relationship between the correction virtual object 31 and the correction real object 30 when it approaches the correction real object 30 selected by the first HMD 10 and looks in its direction. The other HMD 10 then uses that correction value to correct the AR display position of the common (normal) virtual object.
 <Operation and Effects>
 As described above, in the first embodiment, the position of the common (normal) virtual object is corrected based on the positional relationship between the correction virtual object 31 and the correction real object 30.
 This makes it possible to eliminate the apparent misalignment of the common (normal) virtual object between the first HMD 10a and the second HMD 10b, and to AR-display the common (normal) virtual object accurately at the same position. Moreover, in the present embodiment, the position of the common (normal) virtual object can be corrected by a comparatively simple method.
 In the first embodiment, the position of the common (normal) virtual object is corrected based on the difference between the position of the correction virtual object 31 and the position of the correction real object 30. This makes it possible to appropriately correct the AR display position of the common virtual object.
 Further, in the first embodiment, a correction value is calculated based on the difference between the position of the correction virtual object 31 and the position of the correction real object 30, and the position of the common (normal) virtual object is corrected by the correction value. The common (normal) virtual object is translated and rotated by the correction value. This makes it possible to correct the AR display position of the common (normal) virtual object even more appropriately.
 Further, as described above, in the first embodiment the degree of correction by the correction value may be varied according to the distance between the second HMD 10b at the time the correction value was calculated and the correction real object 30. This makes it possible to vary the degree of correction by the correction value appropriately.
 Likewise, as described above, the degree of correction by the correction value may be varied according to the distance between the first HMD 10a at the time it set the coordinate position of the correction virtual object 31 and the correction real object 30. This also makes it possible to vary the degree of correction by the correction value appropriately.
 In the first embodiment, the first HMD 10a selects (finds) the correction real object 30 from among a plurality of real objects existing in the real space, based on the image information acquired by the first HMD 10a. Since a real object already present on site in the real space can thus be selected as the correction real object 30, there is no need to specially place a marker or the like in the real space, which saves time and effort.
 Also, in the first embodiment, the condition for selection as the correction real object 30 is that the real object has a specific shape. In particular, the condition is that the real object has a three-dimensional shape that is substantially uniquely identifiable from whatever direction the real object is viewed. This makes it possible to accurately predict the coordinates even of the portion of the correction virtual object 31 corresponding to a part of the correction real object 30 that was not visible from the first HMD 10a when the correction real object 30 was selected.
 Here, the first embodiment uses a method in which the correction value is applied individually to each common (normal) virtual object. Alternatively, a method could be used in which the correction value corrects, for example, the self-position and posture themselves.
 In that case, however, the correction would be applied to all common (normal) virtual objects. If the distance between the second HMD 10b and the correction real object 30, or between the first HMD 10a and the correction real object 30, is large, the correction using the correction value could then cause unintended positional shifts in the common (normal) virtual objects. For this reason, instead of correcting the self-position and posture themselves with the correction value, a method may be used in which the correction value is applied individually to each common (normal) virtual object, and only to those common virtual objects for which sharing between the HMDs 10 is relatively important.
 ≪Second Embodiment≫
 Next, a second embodiment of the present technology will be described. The second embodiment describes another method by which the second HMD 10b obtains the positional relationship between the correction virtual object 31 and the correction real object 30.
 In the second embodiment, the control unit 1 (CPU 16) of the second HMD 10b executes the following processing after setting the position of the correction virtual object 31 based on the coordinate information acquired from the first HMD 10a (see FIG. 10), or after AR-displaying the correction virtual object 31.
 First, the control unit 1 (VPU 17) of the second HMD 10b acquires image information from the imaging unit 4 and extracts a feature point cloud from the image information. Then, based on the coordinate information acquired from the first HMD 10a, the control unit 1 (CPU 16) of the second HMD 10b determines which of the feature point clouds included in the image information corresponds to the correction real object 30.
 Next, based on the feature point cloud corresponding to the correction real object 30, the control unit 1 (CPU 16) of the second HMD 10b sets the coordinate position (AR display position) of the correction virtual object 31 relative to the correction real object 30 in its own global coordinate system.
 FIG. 14 is a diagram showing an example of the state in which the coordinate position of the correction virtual object 31 has been set relative to the correction real object 30 (when it is AR-displayed).
 In the second embodiment, two kinds of correction virtual objects 31 exist in the second HMD 10b. The first kind is the correction virtual object 31a whose coordinate position is set based on the coordinate information acquired from the first HMD 10a (see FIG. 10). The second kind is the correction virtual object 31b whose coordinate position is set based on the image information acquired by the second HMD 10b (see FIG. 14). In the following description, the first kind of correction virtual object 31 is called the coordinate-information-based correction virtual object 31a, and the second kind is called the image-information-based correction virtual object 31b.
 When setting the coordinate position of the image-information-based correction virtual object 31b, the control unit 1 (CPU 16) of the second HMD 10b uses the same conditions that the first HMD 10a used when it set the coordinate position of the correction virtual object 31 relative to the correction real object 30 based on the image information of the first HMD 10a. Using these same conditions, the control unit 1 (CPU 16) of the second HMD 10b sets the coordinate position (AR display position) of the image-information-based correction virtual object 31b relative to the correction real object 30, based on the image information of the second HMD 10b.
 For example, the control unit 1 (CPU 16) of the second HMD 10b sets the coordinate position of the image-information-based correction virtual object 31b so that, when it is AR-displayed, the image-information-based correction virtual object 31b overlaps the correction real object 30 at the same position. When the shape and size of the correction virtual object 31 are not determined in advance, as with the cubic correction virtual object 31, the shape and size of the image-information-based correction virtual object 31b are determined based on the image information when its coordinate position is set.
 Next, the control unit 1 (GPU 18) of the second HMD 10b AR-displays the image-information-based correction virtual object 31b on the display unit 3 at the position corresponding to the set coordinate position. Note that the image-information-based correction virtual object 31b does not necessarily have to be AR-displayed on the second HMD 10b; it is sufficient that its coordinate position is set.
 After setting the coordinate position of the image-information-based correction virtual object 31b, or after AR-displaying it, the control unit 1 (CPU 16) of the second HMD 10b obtains the difference between the position and posture of the coordinate-information-based correction virtual object 31a and the position and posture of the image-information-based correction virtual object 31b.
 FIG. 15 is a diagram showing the difference between the position and posture of the coordinate-information-based correction virtual object 31a and the position and posture of the image-information-based correction virtual object 31b.
 Having obtained the difference, the control unit 1 (CPU 16) of the second HMD 10b stores this difference in the storage unit 2 as a correction value. Then, when AR-displaying a (normal) virtual object shared with the first HMD 10a, the control unit 1 (CPU 16) of the second HMD 10b uses this difference as a correction value to correct the AR display position of the shared virtual object.
 The specific method of calculating the correction value and the specific correction method using the correction value can be the same as in the first embodiment described above. That is, in the passages of the first embodiment describing the calculation of the correction value and the correction using it, the term "correction virtual object 31" should be read as "coordinate-information-based correction virtual object 31a", the term "correction real object 30" should be read as "image-information-based correction virtual object 31b", and "corners (a') to (h')" should be read as "corners (a'') to (h'')".
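 Under the read-replacements above, the hypothetical correction_value() sketch from the first embodiment can be reused unchanged, with the two virtual objects supplying both corner sets (placeholder arrays below; illustrative only):

```python
import numpy as np

# correction_value() is the sketch defined earlier; here the
# coordinate-information-based object 31a plays the role of the
# "correction virtual object 31", and the image-information-based
# object 31b plays the role of the "correction real object 30".
corners_31a = np.zeros((8, 3))          # placeholder: set from received coordinates
corners_31b = np.zeros((8, 3)) + 0.05   # placeholder: set from own image information
T, angles = correction_value(corners_31a, corners_31b)
print(T)  # translation component of the correction value
```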
 As in the first embodiment, the second embodiment also makes it possible to eliminate the apparent misalignment of the common (normal) virtual object between the first HMD 10a and the second HMD 10b, and to AR-display the common (normal) virtual object accurately at the same position.
 ≪Various Modifications≫
 In the above description, the HMD 10 has been described as an example of the AR device (information processing device). However, the AR device is not limited to the HMD 10. Other examples of the AR device include wearable devices other than the HMD 10, such as wristband-type (wristwatch-type), ring-type, and pendant-type devices. Further examples include mobile phones (including smartphones), tablet PCs, portable game machines, and portable music players. Typically, the AR device may be any device capable of AR-displaying a virtual object (and of being worn or held by the user and moving together with the user).
The present technology can also have the following configurations.
(1) An information processing device comprising a control unit that: estimates a self-position in a global coordinate system corresponding to a real space; acquires coordinate information of the position of an AR-displayable first virtual object that another device sharing the global coordinate system has set, in the global coordinate system, relative to a real object in the real space; sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculates the position of the real object in the global coordinate system based on image information; and corrects, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
(2) The information processing device according to (1) above, in which the control unit corrects the position of the second virtual object based on the difference between the position of the first virtual object and the position of the real object.
(3) The information processing device according to (1) above, in which the control unit sets the position of the first virtual object relative to the real object in the global coordinate system based on the image information, and corrects the position of the second virtual object based on the difference between the position of the first virtual object based on the coordinate information and the position of the first virtual object based on the image information.
(4) The information processing device according to (2) or (3) above, in which the control unit calculates a correction value based on the difference and corrects the position of the second virtual object by the correction value.
(5) The information processing device according to (4) above, in which the control unit corrects the position of the second virtual object by translating the second virtual object by the correction value.
(6) The information processing device according to (4) or (5) above, in which the control unit corrects the position of the second virtual object by rotating the second virtual object by the correction value.
(7) The information processing device according to any one of (4) to (6) above, in which the control unit varies the degree of correction by the correction value.
(8) The information processing device according to (7) above, in which the control unit varies the degree of correction by the correction value according to the distance between the information processing device at the time the correction value was calculated and the real object.
(9) The information processing device according to (7) or (8) above, in which the control unit varies the degree of correction by the correction value according to the distance between the other device at the time the other device set the position of the first virtual object relative to the real object and the real object.
(10) The information processing device according to any one of (1) to (9) above, in which the other device sets the position of the first virtual object so that the first virtual object can be AR-displayed overlapping the real object.
(11) The information processing device according to any one of (1) to (9) above, in which the other device sets the position of the first virtual object so that the first virtual object can be AR-displayed in the vicinity of the real object.
(12) The information processing device according to any one of (1) to (11) above, in which the other device AR-displays the first virtual object.
(13) The information processing device according to any one of (1) to (12) above, in which the control unit AR-displays the first virtual object.
(14) The information processing device according to any one of (1) to (13) above, in which the other device selects, based on image information acquired by the other device, the real object relative to which the first virtual object is to be positioned, from among a plurality of real objects existing in the real space.
(15) The information processing device according to (14) above, in which the other device selects a real object satisfying a predetermined condition as the real object relative to which the first virtual object is to be positioned.
(16) The information processing device according to (15) above, in which the predetermined condition is that the real object has a specific shape.
(17) The information processing device according to (16) above, in which the predetermined condition is that the real object has a three-dimensional shape that is substantially uniquely identifiable from whatever direction the real object is viewed.
(18) An information processing system comprising: an information processing device having a control unit that estimates a self-position in a global coordinate system corresponding to a real space, acquires coordinate information of the position of an AR-displayable first virtual object that another device sharing the global coordinate system has set, in the global coordinate system, relative to a real object in the real space, sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information, calculates the position of the real object in the global coordinate system based on image information, and corrects, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device; and the other device.
(19) An information processing method comprising: estimating a self-position in a global coordinate system corresponding to a real space; acquiring coordinate information of the position of an AR-displayable first virtual object that another device sharing the global coordinate system has set, in the global coordinate system, relative to a real object in the real space; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
(20) A program that causes a computer to execute processing comprising: estimating a self-position in a global coordinate system corresponding to a real space; acquiring coordinate information of the position of an AR-displayable first virtual object that another device sharing the global coordinate system has set, in the global coordinate system, relative to a real object in the real space; setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information; calculating the position of the real object in the global coordinate system based on image information; and correcting, based on the positional relationship between the first virtual object and the real object, the position of a second virtual object that is AR-displayed in common with the other device.
1 ... Control unit
10 ... HMD
20 ... Server device
30 ... Correction real object
31 ... Correction virtual object
100 ... Information processing system

Claims (20)

1. An information processing device comprising a control unit that:
   estimates a self-position in a global coordinate system corresponding to a real space;
   acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space;
   sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information;
   calculates the position of the real object in the global coordinate system based on image information; and
   corrects the position of a second virtual object that is AR-displayed in common with the other device, based on the positional relationship between the first virtual object and the real object.
2. The information processing device according to claim 1, wherein
   the control unit corrects the position of the second virtual object based on the difference between the position of the first virtual object and the position of the real object.
3. The information processing device according to claim 1, wherein
   the control unit sets a position of the first virtual object with respect to the real object in the global coordinate system based on the image information, and corrects the position of the second virtual object based on the difference between the position of the first virtual object based on the coordinate information and the position of the first virtual object based on the image information.
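Claim 3 obtains the discrepancy a different way: the first virtual object is placed twice, once from the coordinate information shared by the other device and once from this device's own image observation of the real object, and the two placements are differenced. A minimal sketch under the same assumptions as the earlier snippet; both arguments are hypothetical 3-vectors in the global coordinate system.

```python
import numpy as np

def correction_from_two_placements(pos_from_coords: np.ndarray,
                                   pos_from_image: np.ndarray) -> np.ndarray:
    """Difference between the first virtual object's position derived from the
    shared coordinate information and the position derived from this device's
    own image observation (claim 3)."""
    return pos_from_image - pos_from_coords
```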
4. The information processing device according to claim 2, wherein
   the control unit calculates a correction value based on the difference and corrects the position of the second virtual object using the correction value.
5. The information processing device according to claim 4, wherein
   the control unit corrects the position of the second virtual object by translating the second virtual object according to the correction value.
6. The information processing device according to claim 4, wherein
   the control unit corrects the position of the second virtual object by rotating the second virtual object according to the correction value.
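Claims 4 to 6 recite a correction value that can both translate (claim 5) and rotate (claim 6) the second virtual object. The claims do not fix how the correction value is represented; the sketch below assumes, purely for illustration, an offset vector plus a yaw rotation about the vertical axis through a pivot point such as the correction real object's position.

```python
import numpy as np

def apply_correction(position: np.ndarray,
                     offset: np.ndarray,
                     yaw_rad: float,
                     pivot: np.ndarray) -> np.ndarray:
    """Apply a correction value as a rigid motion: rotate by yaw_rad about the
    vertical (Y) axis through pivot, then translate by offset."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    return rot_y @ (position - pivot) + pivot + offset

# Example: rotate the shared object 2 degrees about the correction real
# object's (hypothetical) position, then shift it 5 cm along x.
pos = apply_correction(np.array([0.5, 1.2, 3.0]),
                       np.array([0.05, 0.0, 0.0]),
                       np.deg2rad(2.0),
                       np.array([1.0, 0.0, 2.0]))
print(pos)
```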
7. The information processing device according to claim 4, wherein
   the control unit varies the degree of correction by the correction value.
8. The information processing device according to claim 7, wherein
   the control unit varies the degree of correction by the correction value according to the distance between the information processing device and the real object at the time the correction value was calculated.
9. The information processing device according to claim 4, wherein
   the control unit varies the degree of correction by the correction value according to the distance between the other device and the real object at the time the other device set the position of the first virtual object with respect to the real object.
10. The information processing device according to claim 1, wherein
    the other device sets the position of the first virtual object such that the first virtual object can be AR-displayed superimposed on the real object.
11. The information processing device according to claim 1, wherein
    the other device sets the position of the first virtual object such that the first virtual object can be AR-displayed in the vicinity of the real object.
12. The information processing device according to claim 1, wherein
    the other device AR-displays the first virtual object.
13. The information processing device according to claim 1, wherein
    the control unit AR-displays the first virtual object.
14. The information processing device according to claim 1, wherein
    the other device selects, based on image information acquired by the other device, the real object at which the first virtual object is to be located from among a plurality of real objects existing in the real space.
15. The information processing device according to claim 14, wherein
    the other device selects a real object satisfying a predetermined condition as the real object at which the first virtual object is to be located.
16. The information processing device according to claim 15, wherein
    the predetermined condition is that the real object has a specific shape.
17. The information processing device according to claim 16, wherein
    the predetermined condition is that the real object has a three-dimensional shape that is identified substantially uniquely regardless of the direction from which the real object is viewed.
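Claims 15 to 17 constrain which real object may serve as the correction anchor: it must have a specific shape whose pose is pinned down from any viewing direction, so rotationally symmetric objects such as balls or plain cylinders make poor anchors. The predicate below is purely illustrative; the detection fields and the shape catalogue are hypothetical, since the publication does not specify how shape uniqueness is evaluated.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    shape_label: str     # classifier output for the detected real object
    symmetry_order: int  # 1 = asymmetric; N = looks identical from N directions

KNOWN_SHAPES = {"toy_robot", "mug_with_handle"}  # hypothetical catalogue

def is_valid_correction_object(obj: DetectedObject) -> bool:
    """Claims 15-17: accept only real objects of a specific, known shape whose
    appearance determines their pose regardless of viewing direction."""
    return obj.shape_label in KNOWN_SHAPES and obj.symmetry_order == 1

print(is_valid_correction_object(DetectedObject("toy_robot", 1)))     # True
print(is_valid_correction_object(DetectedObject("plain_ball", 360)))  # False
```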
18. An information processing system comprising:
    an information processing device having a control unit that estimates a self-position in a global coordinate system corresponding to a real space, acquires coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space, sets the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information, calculates the position of the real object in the global coordinate system based on image information, and corrects the position of a second virtual object that is AR-displayed in common with the other device based on the positional relationship between the first virtual object and the real object; and
    the other device.
19. An information processing method comprising:
    estimating a self-position in a global coordinate system corresponding to a real space;
    acquiring coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space;
    setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information;
    calculating the position of the real object in the global coordinate system based on image information; and
    correcting the position of a second virtual object that is AR-displayed in common with the other device, based on the positional relationship between the first virtual object and the real object.
20. A program that causes a computer to execute processing comprising:
    estimating a self-position in a global coordinate system corresponding to a real space;
    acquiring coordinate information of the position, in the global coordinate system, of an AR-displayable first virtual object that another device sharing the global coordinate system has set for a real object in the real space;
    setting the position of the first virtual object in the global coordinate system based on the self-position and the coordinate information;
    calculating the position of the real object in the global coordinate system based on image information; and
    correcting the position of a second virtual object that is AR-displayed in common with the other device, based on the positional relationship between the first virtual object and the real object.
PCT/JP2021/007064 2020-03-06 2021-02-25 Information processing device, information processing system, information processing method, and program WO2021177132A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-039143 2020-03-06
JP2020039143 2020-03-06

Publications (1)

Publication Number Publication Date
WO2021177132A1 (en)

Family

ID=77614267

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/007064 WO2021177132A1 (en) 2020-03-06 2021-02-25 Information processing device, information processing system, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2021177132A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014016986A1 (en) * 2012-07-27 2014-01-30 Necソフト株式会社 Three-dimensional environment sharing system, and three-dimensional environment sharing method
JP2016021096A (en) * 2014-07-11 2016-02-04 Kddi株式会社 Image processing device, image processing method, and program

Similar Documents

Publication Publication Date Title
TWI722280B (en) Controller tracking for multiple degrees of freedom
US9892563B2 (en) System and method for generating a mixed reality environment
CN110047104B (en) Object detection and tracking method, head-mounted display device, and storage medium
EP3469458B1 (en) Six dof mixed reality input by fusing inertial handheld controller with hand tracking
JP6860488B2 (en) Mixed reality system
US10249090B2 (en) Robust optical disambiguation and tracking of two or more hand-held controllers with passive optical and inertial tracking
CN110140099B (en) System and method for tracking controller
US20160292924A1 (en) System and method for augmented reality and virtual reality applications
US11127380B2 (en) Content stabilization for head-mounted displays
US20150097719A1 (en) System and method for active reference positioning in an augmented reality environment
US11190904B2 (en) Relative spatial localization of mobile devices
WO2016041088A1 (en) System and method for tracking wearable peripherals in augmented reality and virtual reality applications
KR20180075191A (en) Method and electronic device for controlling unmanned aerial vehicle
KR20150093831A (en) Direct interaction system for mixed reality environments
WO2015048890A1 (en) System and method for augmented reality and virtual reality applications
JP2021060627A (en) Information processing apparatus, information processing method, and program
CN110969706B (en) Augmented reality device, image processing method, system and storage medium thereof
CN111947650A (en) Fusion positioning system and method based on optical tracking and inertial tracking
US11944897B2 (en) Device including plurality of markers
WO2021177132A1 (en) Information processing device, information processing system, information processing method, and program
CN111489376B (en) Method, device, terminal equipment and storage medium for tracking interaction equipment
TWI814624B (en) Landmark identification and marking system for a panoramic image and method thereof
US20230120092A1 (en) Information processing device and information processing method
US20230062045A1 (en) Display control device, display control method, and recording medium
WO2023157499A1 (en) Information processing device and device position estimation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21763518

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21763518

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP