WO2022044900A1 - Information processing device, information processing method, and recording medium - Google Patents

Information processing device, information processing method, and recording medium

Info

Publication number
WO2022044900A1
Authority
WO
WIPO (PCT)
Prior art keywords
real object
unit
real
detection range
information processing
Prior art date
Application number
PCT/JP2021/030110
Other languages
French (fr)
Japanese (ja)
Inventor
俊逸 小原
誠 ダニエル 徳永
春香 藤澤
一 若林
優生 武田
Original Assignee
ソニーグループ株式会社
Priority date
Filing date
Publication date
Application filed by ソニーグループ株式会社
Publication of WO2022044900A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a recording medium, and more particularly to an information processing device, an information processing method, and a recording medium that enable more suitable presentation of a virtual object corresponding to a real object.
  • In AR (Augmented Reality) technology, virtual objects of various modes such as text, icons, and animations are superimposed on an object in real space (hereinafter referred to as a real object) and presented to a user.
  • For example, Patent Document 1 discloses, as a configuration using AR technology, an information processing device that suppresses the positional shift of a virtual object that occurs between the time when the virtual object is drawn and the time when it is presented to the user when the position or posture of the user's viewpoint changes.
  • The present disclosure has been made in view of such a situation, and makes it possible to present a virtual object corresponding to a real object more suitably.
  • The information processing apparatus of the present disclosure is an information processing apparatus including a detection range setting unit that sets, based on object position information of a real object in a three-dimensional coordinate system corresponding to the real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and an object detection unit that detects the real object in the detection range.
  • The information processing method of the present disclosure is an information processing method in which an information processing apparatus sets, based on object position information of a real object in a three-dimensional coordinate system corresponding to the real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and detects the real object in the detection range.
  • The recording medium of the present disclosure is a computer-readable recording medium on which is recorded a program for causing a computer to execute processing of setting, based on object position information of a real object in a three-dimensional coordinate system corresponding to the real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and detecting the real object in the detection range.
  • In the present disclosure, based on object position information of a real object in a three-dimensional coordinate system corresponding to the real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object is set, and the real object is detected in the detection range.
  • Issues of conventional technology: For example, consider a case where information is superimposed on the head of a player shown on an AR terminal owned by a spectator in a stadium for athletics, soccer, or the like. In this case, when superimposing a virtual object on a player who moves at a position several hundred meters or more away from the spectator, the AR terminal needs to grasp the position of the player, for example by identifying the player with its camera and measuring the distance to the player with a depth sensor. However, it is not easy to realize this with sensors that can be mounted on a mobile terminal, which is required to be compact and have low power consumption.
  • On the other hand, the distribution server can deliver the player's position information to the spectator's AR terminal by wireless communication. However, there is a possibility that the position of the virtual object may be displaced due to the transmission delay between the sensor and the distribution server and between the distribution server and the AR terminal.
  • FIG. 1 is a diagram showing an example of a network configuration to which the technique according to the present disclosure is applied.
  • FIG. 1 shows a real object position estimation system 10, a distribution server 20, and an AR terminal 30.
  • The real object position estimation system 10 and the distribution server 20, and the distribution server 20 and the AR terminal 30, each communicate with each other wirelessly.
  • The real object position estimation system 10 senses the position of a real object, such as a player RO1 or a formula car RO2, in a three-dimensional coordinate system corresponding to a real space such as a track-and-field or soccer stadium or a circuit such as an F1 circuit.
  • The real object position estimation system 10 aggregates, in the distribution server 20, information such as the position of the real object and the sensing time obtained by the sensing.
  • Based on the information aggregated from the real object position estimation system 10, the distribution server 20 sequentially delivers, to the AR terminal 30, a real object identifier unique to the real object, object position information indicating the position of the real object, the sensing time corresponding to the object position information, and additional information regarding the real object.
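  • As a concrete illustration, the information delivered for each real object could be represented as a record like the one sketched below. This is not a definition from the disclosure; the field names and types are assumptions chosen only to make the delivered data explicit.
```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names and types are assumptions, not taken
# from the disclosure. One such record is delivered repeatedly by the
# distribution server 20 to the AR terminal 30.
@dataclass
class ObjectUpdate:
    real_object_id: str      # identifier unique to the real object (e.g. a player)
    position_xyz: tuple      # object position in the shared three-dimensional coordinate system [m]
    sensing_time: float      # sensing time, measured with millisecond accuracy [s]
    additional_info: dict = field(default_factory=dict)  # e.g. name, affiliation, uniform number
```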
  • The AR terminal 30 is configured as a mobile terminal such as a smartphone owned by a spectator (user U) at the above-mentioned stadium or circuit, an HMD (Head Mounted Display), AR goggles, or the like.
  • the AR terminal 30 superimposes a virtual object on the player RO1 in the stadium and the formula car RO2 in the circuit.
  • The AR terminal 30 shares the three-dimensional coordinate system and the time axis with the real object position estimation system 10, and recognizes the position of the terminal itself (self-position) in the three-dimensional coordinate system in real time.
  • The AR terminal 30 acquires the position of the real object at the current time by using the various information delivered from the distribution server 20 and its own self-position, thereby compensating for the transmission delay between the real object position estimation system 10 and the AR terminal 30 and realizing presentation of the virtual object without positional deviation.
  • The AR terminal 30 receives, from the distribution server 20, the sensing time t-1 and the object position information representing the position P(t-1) of the real object in the three-dimensional coordinate system at time t-1.
  • Based on its self-position in the three-dimensional coordinate system, the AR terminal 30 sets an imaging angle of view including a plane that passes through the position P(t-1) and faces the front of the camera of the AR terminal 30, and captures the captured image CI.
  • In the captured image CI, the AR terminal 30 sets a detection range DR(t) in which the real object at time t, which is the current time, can be detected.
  • By detecting the real object in the detection range DR(t), the AR terminal 30 converts the position p(t) of the real object on the captured image CI into the position P(t) in the three-dimensional coordinate system.
  • Then, the AR terminal 30 superimposes a virtual object VO corresponding to the additional information from the distribution server 20 at a position that corresponds to the position P(t) of the real object at the current time t in the three-dimensional coordinate system and does not overlap the real object (for example, a position above the real object in the direction of gravity).
  • FIG. 3 is a block diagram showing a configuration example of a real object position estimation system 10 and a distribution server 20 that employ a GPS (Global Positioning System) method as a tracking method for a real object.
  • the real object position estimation system 10 is configured as, for example, a wearable device worn by each athlete as a real object.
  • the real object position estimation system 10 of FIG. 3 includes a GPS sensor 51, a coordinate conversion unit 52, a time measurement unit 53, and a transmission unit 54.
  • the GPS sensor 51 receives GPS position information from GPS satellites and supplies it to the coordinate conversion unit 52.
  • the GPS position information represents a position (latitude, longitude, altitude) in the GPS coordinate system.
  • Note that positioning by BLE (Bluetooth (registered trademark) Low Energy), UWB (Ultra Wide Band), or the like may be used instead, or these positioning techniques may be used in combination.
  • The coordinate conversion unit 52 converts the GPS position information from the GPS sensor 51 into position information in the three-dimensional coordinate system, and supplies the resulting sensor position information indicating the position of the GPS sensor 51 in the three-dimensional coordinate system to the transmission unit 54.
  • In the three-dimensional coordinate system, it is assumed that one of the axes is aligned with the direction of gravity or that the direction of gravity is otherwise determined.
  • The coordinate transformation is performed using predetermined transformation logic or a transformation matrix, as sketched below.
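  • A minimal sketch of one possible transformation logic is shown below, assuming the GPS position (latitude, longitude, altitude) is converted into a local east-north-up frame whose up axis is aligned with gravity. The reference origin and the small-area approximation are illustrative assumptions, not the method fixed by the disclosure.
```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def gps_to_local_enu(lat_deg, lon_deg, alt_m,
                     origin_lat_deg, origin_lon_deg, origin_alt_m):
    """Convert a GPS position to a local east-north-up frame around a reference
    origin (e.g. the center of the stadium). Equirectangular approximation,
    adequate over a stadium-sized area; illustrative only."""
    lat0 = math.radians(origin_lat_deg)
    d_lat = math.radians(lat_deg - origin_lat_deg)
    d_lon = math.radians(lon_deg - origin_lon_deg)
    east = EARTH_RADIUS_M * math.cos(lat0) * d_lon
    north = EARTH_RADIUS_M * d_lat
    up = alt_m - origin_alt_m  # up axis aligned with (opposite to) the direction of gravity
    return east, north, up
```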
  • The time measuring unit 53 measures time with millisecond or finer accuracy, acquires the time (sensing time) at which the GPS sensor 51 received the GPS position information, and supplies it to the transmission unit 54.
  • the transmission unit 54 transmits the sensor position information from the coordinate conversion unit 52 and the sensing time from the time measurement unit 53 to the distribution server 20 together with the identifier (sensor identifier) unique to the GPS sensor 51.
  • the distribution server 20 of FIG. 3 includes a receiving unit 61, a real object identifier conversion unit 62, an additional information acquisition unit 63, and a transmitting unit 64.
  • the receiving unit 61 receives the sensor identifier, the sensor position information, and the sensing time transmitted from the real object position estimation system 10.
  • the sensor identifier is supplied to the real object identifier conversion unit 62, and the sensor position information and the sensing time are supplied to the transmission unit 64.
  • The real object identifier conversion unit 62 converts the sensor identifier from the receiving unit 61 into an identifier (real object identifier) unique to the real object on which the real object position estimation system 10 is mounted, and supplies it to the additional information acquisition unit 63 and the transmission unit 64.
  • the conversion from the sensor identifier to the real object identifier is performed, for example, based on a correspondence table showing which player is wearing which sensor device (real object position estimation system 10).
  • The additional information acquisition unit 63 acquires additional information corresponding to the real object identifier from the real object identifier conversion unit 62, that is, information to be added to the real object as a virtual object, and supplies it to the transmission unit 64.
  • The additional information acquired by the additional information acquisition unit 63 may be, for example, fixed information about the real object such as a player's name, affiliation, or uniform number, or may be information about the real object that changes in real time, such as a player's ranking or score, obtained from another system.
  • The transmission unit 64 transmits the sensor position information and sensing time from the receiving unit 61, the real object identifier from the real object identifier conversion unit 62, and the additional information from the additional information acquisition unit 63 to the AR terminal 30. Since the position of the GPS sensor 51 represented by the sensor position information is equal to the position of the real object on which the real object position estimation system 10 is mounted, the sensor position information is transmitted to the AR terminal 30 as the object position information representing the position of the real object.
  • FIG. 4 is a flowchart illustrating the operation of the real object position estimation system 10 of FIG.
  • In step S11, the GPS sensor 51 receives GPS position information from GPS satellites.
  • In step S12, the time measuring unit 53 acquires the time at which the GPS sensor 51 received the GPS position information as the sensing time.
  • In step S13, the coordinate conversion unit 52 converts the GPS position information received by the GPS sensor 51 into position information (sensor position information) in the three-dimensional coordinate system.
  • In step S14, the transmission unit 54 transmits the sensor identifier of the GPS sensor 51, the sensor position information, and the sensing time to the distribution server 20.
  • FIG. 5 is a flowchart illustrating the operation of the distribution server 20 of FIG.
  • In step S21, the receiving unit 61 receives the sensor identifier, the sensor position information, and the sensing time transmitted from the real object position estimation system 10.
  • In step S22, the real object identifier conversion unit 62 converts the sensor identifier received by the receiving unit 61 into a real object identifier.
  • In step S23, the additional information acquisition unit 63 acquires additional information corresponding to the converted real object identifier.
  • In step S24, the transmission unit 64 transmits the real object identifier, the object position information (sensor position information), the sensing time, and the additional information to the AR terminal 30.
  • In this way, the position of the real object is acquired based on the GPS position information.
  • FIG. 6 is a block diagram showing a configuration example of a real object position estimation system 10 and a distribution server 20 that employ an inside-out method as a tracking method for a real object.
  • In the inside-out method, the position of an object is measured using a sensor mounted on the object itself. Therefore, in the example of FIG. 6, the real object position estimation system 10 is configured as, for example, a wearable device worn by each athlete as a real object.
  • the real object position estimation system 10 of FIG. 6 includes a sensor unit 71, a self-position estimation unit 72, a time measurement unit 53, and a transmission unit 54. Since the time measuring unit 53 and the transmitting unit 54 have the same configuration as shown in FIG. 3, the description thereof will be omitted.
  • the sensor unit 71 is composed of a stereo camera, a depth sensor, and the like, senses the environment around the real object, and supplies the sensing result to the self-position estimation unit 72.
  • The self-position estimation unit 72 estimates the position of the sensor unit 71 in the three-dimensional coordinate system based on the sensing result from the sensor unit 71, and supplies sensor position information indicating the position of the sensor unit 71 to the transmission unit 54.
  • An IMU (Inertial Measurement Unit) may also be provided so that the position of the sensor unit 71 in the three-dimensional coordinate system can be estimated based on the sensing result of the sensor unit 71 and the angular velocity and acceleration detected by the IMU.
  • FIG. 7 is a flowchart illustrating the operation of the real object position estimation system 10 of FIG.
  • In step S31, the sensor unit 71 senses the environment around the real object.
  • In step S32, the time measuring unit 53 acquires the time at which the sensing by the sensor unit 71 was performed as the sensing time.
  • In step S33, the self-position estimation unit 72 estimates the position of the sensor unit 71 in the three-dimensional coordinate system based on the sensing result from the sensor unit 71.
  • In step S34, the transmission unit 54 transmits the sensor identifier of the sensor unit 71, the sensor position information representing the estimated position of the sensor unit 71, and the sensing time to the distribution server 20.
  • In this way, the position of the real object is estimated by the real object position estimation system 10 mounted on the real object.
  • FIG. 8 is a block diagram showing a configuration example of a real object position estimation system 10 and a distribution server 20 that employ an inside-out method as a tracking method for a real object. Also in the example of FIG. 8, the real object position estimation system 10 is configured as, for example, a wearable device worn by each athlete as a real object.
  • The real object position estimation system 10 and distribution server 20 of FIG. 8 differ from those of FIG. 6 in that the self-position estimation unit 72 is provided in the distribution server 20 instead of in the real object position estimation system 10.
  • In this case, the sensor unit 71 supplies the sensing result to the transmission unit 54, and the transmission unit 54 transmits the sensing result as it is to the distribution server 20 instead of the sensor position information.
  • In the distribution server 20, the self-position estimation unit 72 estimates the position of the sensor unit 71 in the three-dimensional coordinate system based on the sensing result from the real object position estimation system 10, and supplies the sensor position information indicating the position of the sensor unit 71 to the transmission unit 64.
  • FIG. 9 is a flowchart illustrating the operation of the real object position estimation system 10 of FIG.
  • In step S41, the sensor unit 71 senses the environment around the real object.
  • In step S42, the time measuring unit 53 acquires the time at which the sensing by the sensor unit 71 was performed as the sensing time.
  • In step S43, the transmission unit 54 transmits the sensor identifier of the sensor unit 71, the sensing result, and the sensing time to the distribution server 20.
  • FIG. 10 is a flowchart illustrating the operation of the distribution server 20 of FIG.
  • In step S51, the receiving unit 61 receives the sensor identifier, the sensing result, and the sensing time transmitted from the real object position estimation system 10.
  • In step S52, the real object identifier conversion unit 62 converts the sensor identifier received by the receiving unit 61 into a real object identifier.
  • In step S53, the self-position estimation unit 72 estimates, based on the sensing result from the real object position estimation system 10, the position in the three-dimensional coordinate system of the sensor unit 71, that is, of the real object on which the real object position estimation system 10 is mounted.
  • In step S54, the additional information acquisition unit 63 acquires additional information corresponding to the converted real object identifier.
  • In step S55, the transmission unit 64 transmits the real object identifier, the object position information, the sensing time, and the additional information to the AR terminal 30.
  • In this way, the position of the real object is estimated by the distribution server 20 instead of by the real object position estimation system 10 mounted on the real object.
  • FIG. 11 is a block diagram showing a configuration example of a real object position estimation system 10 and a distribution server 20 that employ an outside-in method as a tracking method for a real object.
  • In the outside-in method, the position of an object is measured using a sensor installed outside the object.
  • One example is a method of attaching a marker to the object and observing it with an external camera. Therefore, in the example of FIG. 11, the real object position estimation system 10 is arranged around the real object and is configured as, for example, a plurality of high-precision sensor devices installed so as to surround the stadium.
  • the real object position estimation system 10 of FIG. 11 includes a sensor unit 71, a real object position estimation unit 81, a real object identification unit 82, a time measurement unit 53, and a transmission unit 83. Since the sensor unit 71 has the same configuration as shown in FIG. 6 and the time measuring unit 53 has the same configuration as shown in FIG. 3, the description thereof will be omitted.
  • The real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the sensor unit 71, and supplies object position information indicating the position of the real object to the transmission unit 83.
  • For example, if the position and orientation of the sensor unit 71 (for example, a depth sensor) in the three-dimensional coordinate system are known, the position of the real object can be estimated based on the depth information from the depth sensor.
  • When the sensor unit 71 is composed of a camera, the position of the real object may be estimated by image recognition on the captured image captured by the camera.
  • The object position information may represent the position of a single point, such as the center of gravity of the real object or the center of a bounding box in which the real object is recognized.
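  • For instance, when the sensor unit 71 is a depth sensor whose pose is known, the conversion of a point measured in the sensor frame into the three-dimensional (world) coordinate system can be sketched as follows. The rotation-and-translation representation of the pose and the example values are assumptions made for illustration.
```python
import numpy as np

def sensor_point_to_world(point_in_sensor, R_ws, t_ws):
    """point_in_sensor: (3,) point measured by the sensor (e.g. from depth data).
    R_ws (3x3) and t_ws (3,) describe the sensor's known pose and map
    sensor-frame coordinates into the world (three-dimensional) coordinate system."""
    return R_ws @ np.asarray(point_in_sensor, dtype=float) + np.asarray(t_ws, dtype=float)

# Example: a real object ranged 30 m straight ahead of a sensor installed at
# the stands, 10 m above the ground (illustrative values).
R_identity = np.eye(3)
t_sensor = np.array([0.0, -50.0, 10.0])
print(sensor_point_to_world([0.0, 30.0, 0.0], R_identity, t_sensor))
```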
  • the real object identification unit 82 identifies the real object based on the sensing result from the sensor unit 71, and supplies the real object identifier unique to the identified real object to the transmission unit 83.
  • For example, each player is identified by recognizing the player's face or uniform number in the captured image obtained as the sensing result. Athletes as real objects may also be equipped with markers or infrared lamps necessary for identification.
  • the transmission unit 83 transmits the object position information from the real object position estimation unit 81, the real object identifier from the real object identification unit 82, and the sensing time from the time measurement unit 53 to the distribution server 20.
  • the distribution server 20 of FIG. 11 includes a receiving unit 61, an additional information acquisition unit 63, and a transmitting unit 64. Since the receiving unit 61, the additional information acquisition unit 63, and the transmitting unit 64 have the same configuration as shown in FIG. 3, the description thereof will be omitted.
  • FIG. 12 is a flowchart illustrating the operation of the real object position estimation system 10 of FIG.
  • In step S61, the sensor unit 71 senses the environment in which the real object exists.
  • In step S62, the time measuring unit 53 acquires the time at which the sensing by the sensor unit 71 was performed as the sensing time.
  • In step S63, the real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system from the known position of the sensor unit 71 (sensor position) and the sensing result from the sensor unit 71.
  • In step S64, the real object identification unit 82 identifies the real object from the sensing result from the sensor unit 71.
  • In step S65, the transmission unit 83 transmits the real object identifier, the object position information, and the sensing time to the distribution server 20.
  • FIG. 13 is a flowchart illustrating the operation of the distribution server 20 of FIG.
  • In step S71, the receiving unit 61 receives the real object identifier, the object position information, and the sensing time transmitted from the real object position estimation system 10.
  • In step S72, the additional information acquisition unit 63 acquires additional information corresponding to the real object identifier received by the receiving unit 61.
  • In step S73, the transmission unit 64 transmits the real object identifier, the object position information, the sensing time, and the additional information to the AR terminal 30.
  • In this way, the position of the real object is acquired by the plurality of real object position estimation systems 10 installed outside the real object.
  • FIG. 14 is a block diagram showing a configuration example of a real object position estimation system 10 and a distribution server 20 that employ an outside-in method as a tracking method for a real object. Also in the example of FIG. 14, the real object position estimation system 10 is configured as, for example, a plurality of high-precision sensor devices installed so as to surround the stadium.
  • The example of FIG. 14 differs from the real object position estimation system 10 and distribution server 20 of FIG. 11 in that the real object position estimation unit 81 and the real object identification unit 82 are provided in the distribution server 20 instead of in the real object position estimation system 10.
  • the distribution server 20 of FIG. 14 is further provided with a sensor position acquisition unit 91.
  • In this case, the sensor unit 71 supplies the sensing result to the transmission unit 83, and the transmission unit 83 transmits the sensor identifier and the sensing result as they are to the distribution server 20 instead of the real object identifier and the object position information.
  • The sensor position acquisition unit 91 acquires the position and orientation of the sensor unit 71 in the three-dimensional coordinate system based on the sensor identifier from the real object position estimation system 10, and supplies them to the real object position estimation unit 81.
  • The position and orientation of the sensor unit 71 are acquired, for example, based on a correspondence table showing the correspondence between the sensor identifier and the position and orientation of each sensor unit 71 measured in advance.
  • The real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the real object position estimation system 10 and the position and orientation of the sensor unit 71 in the three-dimensional coordinate system from the sensor position acquisition unit 91, and supplies the object position information to the transmission unit 64.
  • The real object identification unit 82 identifies the real object based on the sensing result from the real object position estimation system 10, and supplies the real object identifier unique to the identified real object to the additional information acquisition unit 63 and the transmission unit 64.
  • FIG. 15 is a flowchart illustrating the operation of the distribution server 20 of FIG.
  • In step S81, the receiving unit 61 receives the sensor identifier, the sensing result, and the sensing time transmitted from the real object position estimation system 10.
  • In step S82, the sensor position acquisition unit 91 acquires the position (sensor position) of the sensor unit 71 in the three-dimensional coordinate system based on the sensor identifier received by the receiving unit 61.
  • In step S83, the real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system from the sensor position acquired by the sensor position acquisition unit 91 and the sensing result received by the receiving unit 61.
  • In step S84, the real object identification unit 82 identifies the real object from the sensing result received by the receiving unit 61.
  • In step S85, the additional information acquisition unit 63 acquires additional information corresponding to the real object identifier unique to the identified real object.
  • In step S86, the transmission unit 64 transmits the real object identifier, the object position information, the sensing time, and the additional information to the AR terminal 30.
  • In this way, the real object is identified and the position of the real object is estimated by the distribution server 20 instead of by the real object position estimation system 10 arranged around the real object.
  • FIG. 16 is a block diagram showing a configuration example of a real object position estimation system 10 and a distribution server 20 that employ an outside-in method as a tracking method for a real object.
  • the real object position estimation system 10 is configured as a moving body (for example, a drone flying over the stadium) that moves around the real object.
  • The real object position estimation system 10 of FIG. 16 differs from the real object position estimation system 10 of FIG. 11 in that a self-position estimation unit 72 and a control unit 101 are further provided.
  • the distribution server 20 of FIG. 16 is configured in the same manner as the distribution server 20 of FIG.
  • The self-position estimation unit 72 estimates the position of the sensor unit 71 (real object position estimation system 10) in the three-dimensional coordinate system based on the sensing result from the sensor unit 71, and supplies it to the real object position estimation unit 81 and the control unit 101.
  • The real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the sensor unit 71 and the position of the sensor unit 71 in the three-dimensional coordinate system from the self-position estimation unit 72, and supplies the object position information to the transmission unit 83.
  • The control unit 101 controls actuators (not shown) so that the real object position estimation system 10 as a drone stays at or moves to a predetermined position, based on the position of the sensor unit 71 in the three-dimensional coordinate system from the self-position estimation unit 72.
  • FIG. 17 is a flowchart illustrating the operation of the real object position estimation system 10 of FIG.
  • In step S91, the sensor unit 71 senses the environment in which the real object exists and over which the real object position estimation system 10 as a drone is flying.
  • In step S92, the time measuring unit 53 acquires the time at which the sensing by the sensor unit 71 was performed as the sensing time.
  • In step S93, the self-position estimation unit 72 estimates the position (sensor position) of the sensor unit 71 (real object position estimation system 10) in the three-dimensional coordinate system based on the sensing result from the sensor unit 71.
  • In step S94, the control unit 101 controls the actuators for flying the real object position estimation system 10 based on the sensor position estimated by the self-position estimation unit 72.
  • In step S95, the real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system from the sensor position estimated by the self-position estimation unit 72 and the sensing result from the sensor unit 71.
  • In step S96, the real object identification unit 82 identifies the real object from the sensing result from the sensor unit 71.
  • In step S97, the transmission unit 83 transmits the real object identifier, the object position information, and the sensing time to the distribution server 20.
  • In this way, the position of the real object is acquired by the real object position estimation system 10 flying around the real object.
  • FIG. 18 is a block diagram showing a configuration example of the AR terminal 30 according to the first embodiment.
  • In the first embodiment, the AR terminal 30 acquires the position of the real object at the current time based only on the information distributed from the distribution server 20, and arranges the corresponding virtual object.
  • The AR terminal 30 of FIG. 18 includes a receiving unit 111, a time measurement unit 112, a sensor unit 113, a self-position estimation unit 114, a movement range prediction unit 115, a detection range setting unit 116, an object detection unit 117, a virtual object arrangement unit 118, a drawing unit 119, and a display unit 120.
  • the receiving unit 111 receives the real object identifier, the object position information, the sensing time, and the additional information distributed from the distribution server 20.
  • The object position information representing the position of the real object is therefore not information acquired by the AR terminal 30 itself, but information acquired by wireless communication and based on the sensing result of an external sensor.
  • the object position information and the sensing time are associated with the real object identifier and supplied to the movement range prediction unit 115, and the additional information is associated with the real object identifier and supplied to the virtual object arrangement unit 118.
  • The time measurement unit 112 acquires the current time by measuring time with millisecond or finer accuracy, and supplies it to the movement range prediction unit 115.
  • the sensor unit 113 is composed of a stereo camera, a depth sensor, or the like, senses the environment around the AR terminal 30, and supplies the sensing result to the self-position estimation unit 114 and the detection range setting unit 116.
  • The self-position estimation unit 114 estimates the position of the sensor unit 113 (AR terminal 30) in the three-dimensional coordinate system based on the sensing result from the sensor unit 113, and supplies self-position information indicating the position of the AR terminal 30 to the detection range setting unit 116 and the drawing unit 119.
  • an IMU may be provided so that the position of the sensor unit 113 in the three-dimensional coordinate system can be estimated based on the sensing result by the sensor unit 113 and the angular velocity and acceleration detected by the IMU.
  • The movement range prediction unit 115 predicts the movement range in which the real object can move in the three-dimensional coordinate system based on the object position information from the receiving unit 111. Specifically, the movement range prediction unit 115 predicts the movement range of the real object from the sensing time corresponding to the object position information from the receiving unit 111 to the current time from the time measurement unit 112. When there are a plurality of real objects in the three-dimensional coordinate system, the movement range is predicted for each real object based on the real object identifier unique to each real object.
  • The movement range of the real object is predicted by estimating the predicted position of the real object at the current time from the moving speed and moving direction of the real object based on the object position information.
  • Specifically, the velocity vector (traveling direction and speed) of a real object is estimated based on the difference between the last two pieces of object position information received for that real object.
  • The predicted position of the real object at the current time is then estimated using the estimated velocity vector, with the time from the received sensing time to the current time as the travel time.
  • The movement range is the range between the position represented by the latest object position information and the predicted position estimated from it.
  • the movement range of the real object may be predicted using the context of the real object.
  • the context here is the maximum speed at which the real object can move, or the plane or direction in which the real object can move.
  • When the real object is an athlete, the maximum speed is, for example, 12.5 m/s, based on the world record for sprinting in track and field.
  • When the real object is a formula car, the maximum speed is, for example, 378 km/h, the record speed of an F1 car.
  • Further, the real object can be assumed to move on a plane perpendicular to the direction of gravity. In a track-and-field stadium or on a circuit such as an F1 circuit, the traveling direction of the real object is uniquely determined by the position of the real object on the course.
  • The movement range predicted as described above may be given a margin, for example 1.2 times, and may also take into account the positioning error or distance measurement error of the GPS sensor 51 or the sensor unit 71.
  • For example, the movement range is predicted so as to include a positioning error of about several meters.
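  • The prediction described above can be sketched as follows: a velocity vector is estimated from the last two received positions, the position is extrapolated to the current time, the speed is clamped by the context, and the result is padded with a margin and a positioning error. The margin factor, maximum speed, and error figures below are example values, not requirements of the disclosure.
```python
import numpy as np

def predict_movement_range(p_prev, t_prev, p_last, t_last, t_now,
                           max_speed=12.5, margin=1.2, positioning_error=3.0):
    """Return (predicted_position, radius): the real object is expected between
    the last reported position and predicted_position, within the given radius."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_last = np.asarray(p_last, dtype=float)
    dt = max(t_last - t_prev, 1e-3)
    velocity = (p_last - p_prev) / dt        # velocity vector from the last two samples
    speed = np.linalg.norm(velocity)
    if speed > max_speed:                    # context: the real object cannot exceed max_speed
        velocity *= max_speed / speed
        speed = max_speed
    elapsed = max(t_now - t_last, 0.0)       # travel time from the sensing time to the current time
    predicted = p_last + velocity * elapsed  # predicted position at the current time
    radius = (margin - 1.0) * speed * elapsed + positioning_error
    return predicted, radius
```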
  • The detection range setting unit 116 sets, based on the object position information of the real object and the self-position information from the self-position estimation unit 114, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object.
  • the detection range is set as a range in which a real object can be detected in the captured image captured at the imaging angle of view at the self-position.
  • Specifically, the detection range setting unit 116 uses the self-position information from the self-position estimation unit 114 to set an imaging angle of view that includes the movement range of the real object in the three-dimensional coordinate system predicted by the movement range prediction unit 115.
  • The detection range setting unit 116 then sets, as the detection range, a range in the image captured by the sensor unit 113 at the set angle of view (an angle of view including the real object) that includes the region corresponding to the movement range predicted by the movement range prediction unit 115 based on the object position information.
  • That is, the detection range setting unit 116 sets, in the captured image CI captured at the imaging angle of view including the movement range from time t-1 to time t based on the self-position, a detection range DR(t) in which the real object at the current time t can be detected.
  • More specifically, as shown in FIG. 20, a bounding box BB is set in the three-dimensional coordinate system for the shape of the person at time t-1 and at time t.
  • The bounding box BB is set so that the height direction of the person is parallel to the direction of gravity and the single point into which the object position is aggregated is its center of gravity.
  • The height of the person here may be a value based on a world record, such as 2.5 m, or may be included in the additional information from the distribution server 20.
  • Then, the eight vertices of the bounding box BB for the real object at time t-1 and the eight vertices of the bounding box BB for the real object at time t in the world coordinate system are projected onto the uv coordinate system.
  • Specifically, a point P in the world coordinate system (the three-dimensional coordinate system, that is, the Xw-Yw-Zw coordinate system) is projected onto a point p in the image coordinate system (uv coordinate system) of the captured image CI via the camera coordinate system (Xc-Yc-Zc coordinate system).
  • Then, a rectangular area bounded by the minimum value of u, the maximum value of u, the minimum value of v, and the maximum value of v of the projected vertices in the uv coordinate system is set as the detection range.
  • The detection range described here is an example, and its shape is not limited to a rectangle; it may be, for example, a circle.
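  • The projection described above can be sketched as follows, assuming a pinhole camera model with an intrinsic matrix K and a pose (R_wc, t_wc) that maps world coordinates into camera coordinates; these inputs and the axis-aligned rectangle are illustrative choices rather than requirements of the disclosure.
```python
import numpy as np

def project_points(points_world, K, R_wc, t_wc):
    """Project (N, 3) world-coordinate points onto the uv image coordinate system."""
    pts_cam = (R_wc @ np.asarray(points_world, dtype=float).T).T + t_wc  # world -> camera frame
    uvw = (K @ pts_cam.T).T                                              # camera -> image plane
    return uvw[:, :2] / uvw[:, 2:3]                                      # perspective divide -> (u, v)

def detection_range(bbox_vertices_t_minus_1, bbox_vertices_t, K, R_wc, t_wc):
    """Rectangle DR(t) enclosing the projected vertices of both bounding boxes."""
    uv = project_points(np.vstack([bbox_vertices_t_minus_1, bbox_vertices_t]), K, R_wc, t_wc)
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return u_min, v_min, u_max, v_max
```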
  • the object detection unit 117 detects a real object within the detection range set in the captured image, and converts the position of the detected real object on the captured image into a position in the three-dimensional coordinate system.
  • The object detection unit 117 acquires, as the position of the real object on the captured image, the center of gravity of the rectangular detection frame in which the real object is detected within the detection range. If the real objects are limited to humans, only humans are detected. When a plurality of people are detected, the position closest to the position on the captured image corresponding to the predicted position at the current time is adopted. Semantic segmentation, which estimates the subject from the attributes of each pixel of the captured image, may be used to detect the real object.
  • Then, the object detection unit 117 converts the position of the detected real object on the captured image (its position in the uv coordinate system) into a position in the three-dimensional coordinate system.
  • The position in the three-dimensional coordinate system corresponding to the Xc-Yc coordinates of the camera coordinate system, excluding the depth direction, is obtained by back projection in the above-mentioned camera model.
  • For the position in the depth direction of the camera coordinate system, the predicted position at the current time t estimated by the movement range prediction unit 115 is applied. Although the resulting position in the depth direction is not exact, the difference from the actual position is considered to be small, and since humans are insensitive to differences in position in the depth direction, this does not cause a sense of discomfort.
  • In this way, the object detection unit 117 acquires the position of the real object at the current time in the three-dimensional coordinate system.
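  • The conversion back into the three-dimensional coordinate system can be sketched as follows: the detected pixel fixes the direction through the camera, while the depth is taken from the predicted position at the current time. K, R_wc, and t_wc are the same assumed camera model as in the projection sketch above.
```python
import numpy as np

def backproject_with_predicted_depth(u, v, predicted_world, K, R_wc, t_wc):
    """Back-project pixel (u, v) into the world coordinate system, taking the
    depth (Zc) of the predicted position for the depth direction."""
    predicted_cam = R_wc @ np.asarray(predicted_world, dtype=float) + t_wc
    depth = predicted_cam[2]                        # Zc of the predicted position at the current time t
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction through the pixel (Zc = 1)
    point_cam = ray * depth                         # scale the ray to the predicted depth
    return R_wc.T @ (point_cam - t_wc)              # camera frame -> world (three-dimensional) coordinates
```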
  • The virtual object arrangement unit 118 arranges the additional information from the receiving unit 111 as a virtual object in the three-dimensional coordinate system so as to correspond to the position of the real object at the current time acquired by the object detection unit 117.
  • the virtual object arranging unit 118 arranges the virtual object at a position that does not overlap with the real object, for example, at a position several tens of centimeters above the real object in the direction of gravity.
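  • The placement rule above amounts to a fixed offset along the gravity axis, as in the following sketch; the 0.3 m offset and the choice of +Z as the upward direction are illustrative assumptions.
```python
import numpy as np

def place_virtual_object(real_object_position, offset_m=0.3,
                         up_axis=np.array([0.0, 0.0, 1.0])):
    """Place the virtual object slightly above the real object so that the two do not overlap."""
    return np.asarray(real_object_position, dtype=float) + offset_m * up_axis
```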
  • the drawing unit 119 renders a virtual object arranged on the three-dimensional coordinate system by the virtual object arrangement unit 118 based on the self-position information from the self-position estimation unit 114.
  • the display unit 120 is composed of a display or the like, and displays a virtual object rendered by the drawing unit 119.
  • In step S111, the sensor unit 113 senses the environment around the AR terminal 30 (for example, the stadium or circuit where the user U is watching).
  • In step S112, the time measurement unit 112 acquires the time at which the sensing by the sensor unit 113 was performed as the current time.
  • In step S113, the self-position estimation unit 114 estimates the self-position (the position of the AR terminal 30) in the three-dimensional coordinate system based on the sensing result from the sensor unit 113.
  • In step S114, the AR terminal 30 executes a three-dimensional position acquisition process for acquiring the position of the real object at the current time in the three-dimensional coordinate system.
  • Details of the three-dimensional position acquisition process will be described later.
  • In step S115, the virtual object arrangement unit 118 arranges the virtual object in the three-dimensional coordinate system so as to correspond to the position of the real object at the current time acquired by the three-dimensional position acquisition process.
  • In step S116, the drawing unit 119 renders the virtual object arranged in the three-dimensional coordinate system by the virtual object arrangement unit 118 based on the self-position information representing the self-position estimated by the self-position estimation unit 114.
  • In step S117, the display unit 120 displays the virtual object rendered by the drawing unit 119.
  • FIG. 23 is a flowchart illustrating details of the three-dimensional position acquisition process executed in step S114 of FIG. 22.
  • In step S121, the receiving unit 111 receives the real object identifier, the object position information, the sensing time, and the additional information distributed from the distribution server 20.
  • In step S122, the movement range prediction unit 115 predicts, for all the received real object identifiers, the movement range of the corresponding real object in the three-dimensional coordinate system based on the corresponding object position information.
  • In step S123, the detection range setting unit 116 sets, for all the real object identifiers, the detection range corresponding to the predicted movement range in the captured image captured at the self-position, using the self-position information representing the self-position estimated by the self-position estimation unit 114.
  • In step S124, the object detection unit 117 detects the corresponding real object in each of the detection ranges set for all the real object identifiers.
  • In step S125, the object detection unit 117 converts the positions of all the detected real objects on the captured image into positions in the three-dimensional coordinate system. As a result, the position of each real object at the current time in the three-dimensional coordinate system is acquired.
  • As described above, a detection range that is smaller than the imaging angle of view and corresponds to the position of the real object is set, so that the real object can be detected with less processing.
  • Further, as the position of the real object, the position of the real object at the current time detected on the captured image is adopted instead of the position distributed from the distribution server 20, so that the transmission delay from the real object position estimation system 10 to the AR terminal 30 can be compensated for. As a result, it is possible to eliminate the positional deviation of the virtual object while reducing the processing load on the AR terminal 30, and to more suitably present the virtual object corresponding to the real object.
  • FIG. 24 is a block diagram showing a configuration example of the AR terminal 30 according to the second embodiment.
  • In the second embodiment, the AR terminal 30 acquires the position of the real object at the current time by using both the information distributed from the distribution server 20 and the distance measurement result from the terminal itself to the real object, and arranges the corresponding virtual object.
  • the AR terminal 30 of FIG. 24 includes a real object position estimation unit 131, a real object identification unit 132, and a real object position selection unit 133, in addition to the reception unit 111 to the display unit 120 of the AR terminal 30 of FIG.
  • The real object position estimation unit 131 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the sensor unit 113 and the self-position information from the self-position estimation unit 114, and supplies the estimation result to the real object position selection unit 133.
  • The sensing result from the sensor unit 113 here is, for example, the distance to the real object measured by a depth sensor.
  • the real object identification unit 132 identifies the real object based on the sensing result from the sensor unit 113, and supplies the real object identifier unique to the identified real object to the real object position selection unit 133.
  • When the position of the real object can be estimated by the real object position estimation unit 131, the real object position selection unit 133 selects the estimation result as the position of the real object and supplies it to the virtual object arrangement unit 118 in association with the real object identifier from the real object identification unit 132.
  • When the position of the real object cannot be estimated by the real object position estimation unit 131, the position of the real object acquired by the object detection unit 117 is selected as the position of the real object and supplied to the virtual object arrangement unit 118.
  • the operation of the AR terminal 30 of FIG. 24 is basically the same as the operation of the AR terminal 30 of FIG. 18 described with reference to the flowchart of FIG. 22 except for step S114.
  • FIG. 25 is a flowchart illustrating details of the three-dimensional position acquisition process executed in step S114 of FIG. 22 by the AR terminal 30 of FIG. 24.
  • In step S131, the real object identification unit 132 identifies the real object from the sensing result from the sensor unit 113.
  • In step S132, the real object position estimation unit 131 estimates the position of the real object in the three-dimensional coordinate system from the self-position (the position of the AR terminal 30) estimated by the self-position estimation unit 114 and the sensing result from the sensor unit 113.
  • In step S133, the AR terminal 30 of FIG. 24 acquires the positions of all the real objects at the current time in the three-dimensional coordinate system by executing the process described with reference to the flowchart of FIG. 23.
  • In step S134, the real object position selection unit 133 determines, for all the real object identifiers from the real object identification unit 132, whether the position of the real object could be estimated by the real object position estimation unit 131. If it is determined that the position of the real object could be estimated, the process proceeds to step S135.
  • In step S135, the real object position selection unit 133 replaces, for all the real object identifiers, the position of the real object acquired in step S133 with the position of the real object estimated in step S132.
  • Note that the position of the real object acquired in step S133 may be replaced with the position estimated in step S132 only for the real objects whose positions are determined to be estimable.
  • On the other hand, if it is determined that the position of the real object could not be estimated, step S135 is skipped and the positions of the real objects acquired in step S133 are adopted as the positions of all the real objects.
  • In this case, the position of the real object acquired in step S133 may be adopted only for the real objects whose positions are determined not to be estimable.
  • As described above, when the position of the real object can be estimated within the ranging range of the depth sensor or the like provided in the AR terminal 30, such as when the real object exists at a close distance from the AR terminal 30, the estimated position of the real object is adopted.
  • In this case, it is not necessary to compensate for the transmission delay between the real object position estimation system 10 and the AR terminal 30, and it is possible to more suitably present a virtual object corresponding to the real object.
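  • The selection rule of the second embodiment can be sketched as follows: for each real object identifier, the position estimated by the terminal's own sensor is preferred when it is available, and otherwise the position obtained through the detection-range processing is used. The dictionary-based interface is an assumption made only for illustration.
```python
def select_real_object_positions(local_estimates, detected_positions):
    """Both arguments map a real object identifier to a 3D position; local_estimates
    has no entry (or None) for objects the terminal's own sensor could not range."""
    selected = {}
    for obj_id, detected in detected_positions.items():
        local = local_estimates.get(obj_id)
        selected[obj_id] = local if local is not None else detected
    return selected
```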
  • The technology according to the present disclosure may be applied to a configuration in which a virtual object is superimposed on a player in a track-and-field or soccer stadium or on a formula car on a circuit such as an F1 circuit.
  • The technology according to the present disclosure may also be applied to a configuration in which a virtual object is superimposed on a vehicle arranged by a user in, for example, a vehicle dispatch application through which a taxi, a chauffeured car, or the like can be arranged.
  • the technique according to the present disclosure may be applied to a configuration in which a virtual object is superimposed on a player around the user in an AR shooting game that can be played from a first-person viewpoint such as FPS (First Person Shooter).
  • the series of processes described above can be executed by hardware or software.
  • When the series of processes is executed by software, the programs constituting the software are installed on a computer.
  • the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 26 is a block diagram showing a configuration example of the hardware of a computer that executes the above-mentioned series of processes by a program.
  • In the computer, the CPU 501, the ROM (Read Only Memory) 502, and the RAM (Random Access Memory) 503 are connected to one another by a bus 504.
  • An input / output interface 505 is further connected to the bus 504.
  • An input unit 506, an output unit 507, a storage unit 508, a communication unit 509, and a drive 510 are connected to the input / output interface 505.
  • the input unit 506 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 507 includes a display, a speaker, and the like.
  • the storage unit 508 includes a hard disk, a non-volatile memory, and the like.
  • the communication unit 509 includes a network interface and the like.
  • the drive 510 drives a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 501 loads the program stored in the storage unit 508 into the RAM 503 via the input/output interface 505 and the bus 504 and executes it, whereby the above-mentioned series of processes is performed.
  • The program executed by the computer (CPU 501) can be provided by being recorded on the removable medium 511 as packaged media or the like, for example.
  • the program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • The program can be installed in the storage unit 508 via the input/output interface 505 by mounting the removable medium 511 in the drive 510. The program can also be received by the communication unit 509 via a wired or wireless transmission medium and installed in the storage unit 508. In addition, the program can be installed in the ROM 502 or the storage unit 508 in advance.
  • The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
  • The present disclosure may also have the following configurations.
  • (1) An information processing device including: a detection range setting unit that sets, based on object position information of a real object in a three-dimensional coordinate system corresponding to the real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object; and an object detection unit that detects the real object in the detection range.
  • The information processing device according to (1), wherein the detection range setting unit sets the detection range in which the real object can be detected in the captured image captured at the imaging angle of view, and the object detection unit converts the position of the real object detected in the detection range on the captured image into a position in the three-dimensional coordinate system.
  • The information processing device further including a movement range prediction unit that predicts a movement range in which the real object can move in the three-dimensional coordinate system based on the object position information, wherein the detection range setting unit sets the detection range including a region corresponding to the movement range in the captured image.
  • the detection range setting unit sets the minimum region in which the real object is predicted to be captured in the captured image based on the movement range in the detection range.
  • the information processing device according to (4) or (5), wherein the movement range prediction unit predicts the movement range from the sensing time at which the real object is sensed by an external sensor to the current time, the detection range setting unit sets the detection range at the current time corresponding to the movement range, and the object detection unit acquires the position of the real object at the current time in the three-dimensional coordinate system.
  • the information processing device according to (6), wherein the movement range prediction unit predicts the movement range by estimating the predicted position of the real object at the current time from the moving speed and moving direction of the real object based on the object position information.
  • the information processing apparatus according to (7), wherein the detection range setting unit sets the detection range at the current time by projecting the position of the real object at the sensing time in the three-dimensional coordinate system and the predicted position of the real object at the current time onto the image coordinate system of the captured image.
  • the context includes at least one of the maximum velocity, plane, and direction in which the real object can move.
  • the information processing apparatus according to any one of (3) to (10), wherein the movement range prediction unit predicts the movement range for each real object based on the real object identifier unique to the real object.
  • the information processing apparatus according to any one of (1) to (11), wherein the object position information is information acquired by wireless communication and based on a sensing result by an external sensor.
  • the information processing apparatus wherein the sensor is configured as at least one of a first sensor mounted on the real object, a second sensor arranged around the real object, and a third sensor mounted on a moving body that moves around the real object.
  • the information processing apparatus according to any one of (1) to (13), further including a virtual object placement unit that arranges a virtual object corresponding to the position of the real object at the current time in the three-dimensional coordinate system detected in the detection range.
  • the information processing device according to any one of (14) to (17), wherein the virtual object placement unit arranges the virtual object based on the estimated position of the real object.
  • (19) An information processing method in which an information processing device sets, based on object position information of a real object in a three-dimensional coordinate system corresponding to a real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and detects the real object in the detection range.
  • a computer-readable recording medium on which a program is recorded, the program causing a computer to execute processing of setting, based on object position information of a real object in a three-dimensional coordinate system corresponding to a real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and detecting the real object in the detection range.
  • 10 real object position estimation system, 20 distribution server, 30 AR terminal, 111 receiving unit, 112 time measurement unit, 113 sensor unit, 114 self-position estimation unit, 115 movement range prediction unit, 116 detection range setting unit, 117 object detection unit, 118 virtual object placement unit, 119 drawing unit, 120 display unit, 131 real object position estimation unit, 132 real object identification unit, 133 real object position selection unit

Abstract

The present disclosure relates to an information processing device, an information processing method, and a recording medium that make it possible to more appropriately present a virtual object corresponding to a real object. A detection range setting unit sets, on the basis of object location information of a real object and self-location information in a three-dimensional coordinate system corresponding to a real space, a detection range which is smaller than an imaging angle of view at the self-location and corresponds to the location of the real object. An object detection unit detects the real object in the detection range. The present disclosure is applicable, for example, to an AR terminal that superimposes information onto the real space.

Description

Information processing device, information processing method, and recording medium
 The present disclosure relates to an information processing device, an information processing method, and a recording medium, and more particularly to an information processing device, an information processing method, and a recording medium that enable more suitable presentation of a virtual object corresponding to a real object.
 In recent years, AR (Augmented Reality) technology has been known in which virtual objects of various modes, such as text, icons, and animations, are superimposed on an object in real space (hereinafter referred to as a real object) and presented to a user.
 Patent Document 1 discloses, as a configuration using AR technology, an information processing device that suppresses the positional shift of a virtual object that occurs between the time the virtual object is drawn and the time it is presented to the user when the position or posture of the user's viewpoint changes.
Japanese Unexamined Patent Publication No. 2020-3898
 In an AR terminal that superimposes a virtual object on a real object moving at a relatively distant position, there is a risk that the position of the virtual object will shift.
 The present disclosure has been made in view of such a situation, and makes it possible to more suitably present a virtual object corresponding to a real object.
 The information processing device of the present disclosure is an information processing device including: a detection range setting unit that sets, based on object position information of a real object in a three-dimensional coordinate system corresponding to a real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object; and an object detection unit that detects the real object in the detection range.
 The information processing method of the present disclosure is an information processing method in which an information processing device sets, based on object position information of a real object in a three-dimensional coordinate system corresponding to a real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and detects the real object in the detection range.
 The recording medium of the present disclosure is a computer-readable recording medium on which a program is recorded, the program causing a computer to execute processing of setting, based on object position information of a real object in a three-dimensional coordinate system corresponding to a real space and self-position information, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and detecting the real object in the detection range.
 In the present disclosure, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object is set based on object position information of a real object in a three-dimensional coordinate system corresponding to a real space and self-position information, and the real object is detected in the detection range.
FIG. 1 is a diagram showing an example of a network configuration to which the technology according to the present disclosure is applied.
FIG. 2 is a diagram explaining the presentation of a virtual object.
FIG. 3 is a block diagram showing a configuration example of the real object position estimation system and the distribution server.
FIG. 4 is a flowchart explaining the operation of the real object position estimation system.
FIG. 5 is a flowchart explaining the operation of the distribution server.
FIG. 6 is a block diagram showing a configuration example of the real object position estimation system and the distribution server.
FIG. 7 is a flowchart explaining the operation of the real object position estimation system.
FIG. 8 is a block diagram showing a configuration example of the real object position estimation system and the distribution server.
FIG. 9 is a flowchart explaining the operation of the real object position estimation system.
FIG. 10 is a flowchart explaining the operation of the distribution server.
FIG. 11 is a block diagram showing a configuration example of the real object position estimation system and the distribution server.
FIG. 12 is a flowchart explaining the operation of the real object position estimation system.
FIG. 13 is a flowchart explaining the operation of the distribution server.
FIG. 14 is a block diagram showing a configuration example of the real object position estimation system and the distribution server.
FIG. 15 is a flowchart explaining the operation of the distribution server.
FIG. 16 is a block diagram showing a configuration example of the real object position estimation system and the distribution server.
FIG. 17 is a flowchart explaining the operation of the real object position estimation system.
FIG. 18 is a block diagram showing a configuration example of an AR terminal according to the first embodiment.
FIG. 19 is a diagram explaining the setting of a detection range.
FIG. 20 is a diagram showing an example of a bounding box set in the three-dimensional coordinate system.
FIG. 21 is a diagram explaining the projection of the position of a real object onto an image plane.
FIG. 22 is a flowchart explaining the operation of the AR terminal.
FIG. 23 is a flowchart explaining the details of the three-dimensional position acquisition process.
FIG. 24 is a block diagram showing a configuration example of an AR terminal according to the second embodiment.
FIG. 25 is a flowchart explaining the details of the three-dimensional position acquisition process.
FIG. 26 is a block diagram showing a configuration example of a computer.
 Hereinafter, modes for carrying out the present disclosure (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
 1. Issues of the conventional technology
 2. Overview of the technology according to the present disclosure
 3. Configuration and operation of the real object position estimation system and the distribution server
  3-1. GPS method
  3-2. Inside-out method 1
  3-3. Inside-out method 2
  3-4. Outside-in method 1
  3-5. Outside-in method 2
  3-6. Outside-in method 3
 4. Configuration and operation of the AR terminal
  4-1. First embodiment
  4-2. Second embodiment
 5. Computer configuration example
<1. Issues of the conventional technology>
 For example, consider a case where, in a stadium for athletics, soccer, or the like, information is superimposed above the heads of players shown on an AR terminal owned by a spectator. In this case, when superimposing a virtual object on a player moving at a position several hundred meters or more away from the spectator, the AR terminal needs to grasp the position of the player, for example by identifying the player with the camera of the AR terminal or by measuring the distance to the player with a depth sensor. However, it is not easy to achieve this with sensors that can be mounted on a mobile terminal, which is required to be compact and have low power consumption.
 On the other hand, by attaching a sensor capable of recognizing its own position to each player, or by having a large number of high-precision sensors installed in the stadium recognize the positions of the players, a distribution server can deliver the players' position information to the spectators' AR terminals by wireless communication. However, transmission delays between the sensors and the distribution server and between the distribution server and the AR terminal may cause the position of the virtual object to shift.
 Therefore, the technology according to the present disclosure eliminates the positional shift of the virtual object caused by the above-described transmission delay, based on the position of the player (real object) detected in the captured image captured by the spectator's AR terminal.
<2. Overview of the technology according to the present disclosure>
 FIG. 1 is a diagram showing an example of a network configuration to which the technology according to the present disclosure is applied.
 FIG. 1 shows a real object position estimation system 10, a distribution server 20, and an AR terminal 30. The real object position estimation system 10 and the distribution server 20, and the distribution server 20 and the AR terminal 30, each perform wireless communication with each other.
 The real object position estimation system 10 senses the position of a real object, such as a player RO1 or a formula car RO2, in a three-dimensional coordinate system corresponding to a real space such as an athletics or soccer stadium or a circuit such as F1. The real object position estimation system 10 aggregates information obtained by the sensing, such as the position of the real object and the sensing time, in the distribution server 20.
 Based on the information aggregated by the real object position estimation system 10, the distribution server 20 sequentially distributes, to the AR terminal 30, a real object identifier unique to the real object, object position information representing the position of the real object, the sensing time corresponding to the object position information, and additional information about the real object.
 The AR terminal 30 is configured as a mobile terminal such as a smartphone, an HMD (Head Mounted Display), AR goggles, or the like owned by a spectator (user U) of the above-mentioned stadium or circuit. The AR terminal 30 superimposes a virtual object on the player RO1 in the stadium or the formula car RO2 on the circuit. The AR terminal 30 shares the same three-dimensional coordinate system and time axis as the real object position estimation system 10, and recognizes the position of the terminal itself (self-position) in that three-dimensional coordinate system in real time. By acquiring the position of the real object at the current time using the various pieces of information distributed from the distribution server 20 and the self-position, the AR terminal 30 compensates for the transmission delay between the real object position estimation system 10 and the AR terminal 30 and realizes presentation of a virtual object without positional shift.
 The presentation of a virtual object by the AR terminal 30 will be described with reference to FIG. 2.
(1) The AR terminal 30 receives, from the distribution server 20, the sensing time t-1 and object position information representing the position P(t-1) of the real object in the three-dimensional coordinate system at time t-1.
(2) Based on its self-position in the three-dimensional coordinate system, the AR terminal 30 sets an imaging angle of view including a plane that passes through the position P(t-1) and faces the front of the camera of the AR terminal 30, and captures a captured image CI.
(3) In the captured image CI, the AR terminal 30 sets a detection range DR(t) in which the real object can be detected at the current time t.
(4) By detecting the real object in the detection range DR(t), the AR terminal 30 converts the position p(t) of the real object on the captured image CI into the position P(t) in the three-dimensional coordinate system.
(5) In the captured image CI, the AR terminal 30 superimposes a virtual object VO corresponding to the additional information from the distribution server 20 at a position that corresponds to the position P(t) of the real object at the current time t in the three-dimensional coordinate system and does not overlap the real object (for example, a position above the real object with respect to the direction of gravity).
 As described above, by superimposing the virtual object VO based on the position of the real object detected in the captured image CI captured by the AR terminal 30, it is possible to eliminate the positional shift of the virtual object VO caused by the transmission delay between the real object position estimation system 10 and the AR terminal 30.
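 As a rough illustration only, the following sketch restates steps (1) to (5) above in Python. Every helper on the hypothetical `server` and `terminal` objects (receive, capture_image, set_detection_range, detect_in_range, image_to_world, overlay_label) is an assumed placeholder, not an interface defined in this disclosure.

```python
def present_virtual_object(terminal, server):
    # (1) Receive the sensing time t-1 and the object position P(t-1).
    msg = server.receive()            # e.g. {"time": ..., "position": ..., "info": ...}

    # (2) Capture an image whose angle of view contains P(t-1),
    #     using the terminal's self-position in the shared coordinate system.
    image, camera_pose = terminal.capture_image(msg["position"])

    # (3) Set a detection range DR(t), smaller than the full angle of view,
    #     where the real object may appear at the current time t.
    dr = terminal.set_detection_range(msg["position"], msg["time"], camera_pose)

    # (4) Detect the real object only inside DR(t), then convert its image
    #     position p(t) into the 3D position P(t).
    p_image = terminal.detect_in_range(image, dr)
    P_now = terminal.image_to_world(p_image, camera_pose)

    # (5) Superimpose the virtual object (additional information) at a position
    #     above P(t) so that it does not overlap the real object.
    terminal.overlay_label(msg["info"], P_now)
```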
 In the following, the configuration and operation of each part of the network configuration in FIG. 1 will be described.
<3. Configuration and operation of the real object position estimation system and the distribution server>
 As the configuration of the real object position estimation system 10 and the distribution server 20, the following six methods can be adopted.
(3-1. GPS method)
 FIG. 3 is a block diagram showing a configuration example of the real object position estimation system 10 and the distribution server 20 that adopt a GPS (Global Positioning System) method as the tracking method for real objects. In the example of FIG. 3, the real object position estimation system 10 is configured, for example, as a wearable device worn by each player as a real object.
 The real object position estimation system 10 of FIG. 3 includes a GPS sensor 51, a coordinate conversion unit 52, a time measurement unit 53, and a transmission unit 54.
 The GPS sensor 51 receives GPS position information from GPS satellites and supplies it to the coordinate conversion unit 52. The GPS position information represents a position (latitude, longitude, altitude) in the GPS coordinate system. Instead of the GPS sensor 51, positioning by BLE (Bluetooth (registered trademark) Low Energy), UWB (Ultra Wide Band), or the like may be used, or these positioning techniques may be used in combination.
 The coordinate conversion unit 52 converts the GPS position information from the GPS sensor 51 into position information in the three-dimensional coordinate system, and supplies sensor position information representing the position of the GPS sensor 51 in the three-dimensional coordinate system to the transmission unit 54. In the three-dimensional coordinate system, one of the axes is aligned with the direction of gravity, or the direction of gravity is otherwise determined. The coordinate conversion is performed using a predetermined conversion logic or conversion matrix.
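 As a minimal sketch of such a predetermined conversion, the example below assumes the GPS fix has already been expressed in a local metric frame (for example ENU around a surveyed reference point) and that a fixed 4x4 homogeneous matrix maps that frame to the shared three-dimensional coordinate system; the matrix values and frame choice are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

# Hypothetical fixed transform from a local ENU frame (metres) to the venue's
# shared 3D coordinate system. The rotation and translation below are placeholders.
ENU_TO_VENUE = np.array([
    [0.0, -1.0, 0.0, 12.5],
    [1.0,  0.0, 0.0, -3.0],
    [0.0,  0.0, 1.0,  0.0],   # the vertical axis stays aligned with gravity
    [0.0,  0.0, 0.0,  1.0],
])

def enu_to_venue(p_enu):
    """Convert a 3D point in the local ENU frame into the shared coordinate system."""
    p = np.append(np.asarray(p_enu, dtype=float), 1.0)  # homogeneous coordinates
    return (ENU_TO_VENUE @ p)[:3]

# Example: a GPS fix already converted to ENU, 10 m east, 4 m north, 1.2 m up.
print(enu_to_venue([10.0, 4.0, 1.2]))
```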
 The time measurement unit 53 measures time with an accuracy of milliseconds or better, thereby acquires the time at which the GPS sensor 51 received the GPS position information (sensing time), and supplies it to the transmission unit 54.
 The transmission unit 54 transmits the identifier unique to the GPS sensor 51 (sensor identifier), the sensor position information from the coordinate conversion unit 52, and the sensing time from the time measurement unit 53 to the distribution server 20.
 The distribution server 20 of FIG. 3 includes a receiving unit 61, a real object identifier conversion unit 62, an additional information acquisition unit 63, and a transmission unit 64.
 The receiving unit 61 receives the sensor identifier, the sensor position information, and the sensing time transmitted from the real object position estimation system 10. The sensor identifier is supplied to the real object identifier conversion unit 62, and the sensor position information and the sensing time are supplied to the transmission unit 64.
 The real object identifier conversion unit 62 converts the sensor identifier from the receiving unit 61 into an identifier unique to the real object wearing the real object position estimation system 10 (real object identifier), and supplies it to the additional information acquisition unit 63 and the transmission unit 64. The conversion from the sensor identifier to the real object identifier is performed, for example, based on a correspondence table indicating which player wears which sensor device (real object position estimation system 10).
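 A minimal sketch of such a correspondence table lookup is shown below; the identifiers are hypothetical examples, and in practice the table would be registered before the event.

```python
# Hypothetical correspondence table: which player (real object) wears which sensor.
SENSOR_TO_REAL_OBJECT = {
    "sensor-001": "athlete-07",
    "sensor-002": "athlete-23",
}

def to_real_object_identifier(sensor_identifier: str) -> str:
    """Convert a sensor identifier into the identifier of the real object wearing it."""
    return SENSOR_TO_REAL_OBJECT[sensor_identifier]

print(to_real_object_identifier("sensor-001"))  # -> "athlete-07"
```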
 The additional information acquisition unit 63 acquires additional information corresponding to the real object identifier from the real object identifier conversion unit 62, that is, information to be added to the real object as a virtual object, and supplies it to the transmission unit 64. The additional information acquired by the additional information acquisition unit 63 may be, for example, fixed information about the real object, such as a player's name, affiliation, or uniform number, or information about the real object that changes in real time and is acquired from another system or the like, such as a player's ranking or score.
 The transmission unit 64 transmits the sensor position information and sensing time from the receiving unit 61, the real object identifier from the real object identifier conversion unit 62, and the additional information from the additional information acquisition unit 63 to the AR terminal 30. Since the position of the GPS sensor 51 represented by the sensor position information is equal to the position of the real object wearing the real object position estimation system 10, the sensor position information is transmitted to the AR terminal 30 as object position information representing the position of the real object.
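 For illustration, one distributed record could be modeled as follows; the field names and types are assumptions, since the disclosure only specifies the four kinds of information carried (identifier, position, sensing time, and additional information).

```python
from dataclasses import dataclass

@dataclass
class DistributionRecord:
    real_object_id: str        # identifier unique to the real object
    position: tuple            # object position (x, y, z) in the shared 3D coordinate system
    sensing_time: float        # time at which the position was sensed, on the shared time axis
    additional_info: dict      # e.g. name, affiliation, uniform number, ranking, score

record = DistributionRecord(
    real_object_id="athlete-07",
    position=(52.3, 18.1, 0.0),
    sensing_time=1693218000.125,
    additional_info={"number": 7, "ranking": 2},
)
```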
 Next, the operations of the real object position estimation system 10 and the distribution server 20 of FIG. 3 will be described.
 FIG. 4 is a flowchart explaining the operation of the real object position estimation system 10 of FIG. 3.
 In step S11, the GPS sensor 51 receives GPS position information from GPS satellites.
 In step S12, the time measurement unit 53 acquires the time at which the GPS sensor 51 received the GPS position information as the sensing time.
 In step S13, the coordinate conversion unit 52 converts the GPS position information received by the GPS sensor 51 into position information in the three-dimensional coordinate system (sensor position information).
 In step S14, the transmission unit 54 transmits the sensor identifier of the GPS sensor 51, the sensor position information, and the sensing time to the distribution server 20.
 FIG. 5 is a flowchart explaining the operation of the distribution server 20 of FIG. 3.
 In step S21, the receiving unit 61 receives the sensor identifier, the sensor position information, and the sensing time transmitted from the real object position estimation system 10.
 In step S22, the real object identifier conversion unit 62 converts the sensor identifier received by the receiving unit 61 into a real object identifier.
 In step S23, the additional information acquisition unit 63 acquires additional information corresponding to the converted real object identifier.
 In step S24, the transmission unit 64 transmits the real object identifier, the object position information (sensor position information), the sensing time, and the additional information to the AR terminal 30.
 As described above, in the configuration of FIG. 3, the position of the real object is acquired based on GPS position information.
(3-2. Inside-out method 1)
 FIG. 6 is a block diagram showing a configuration example of the real object position estimation system 10 and the distribution server 20 that adopt an inside-out method as the tracking method for real objects. In the inside-out method, the position of an object to be measured is measured using a sensor mounted on the object itself. Therefore, in the example of FIG. 6, the real object position estimation system 10 is configured, for example, as a wearable device worn by each player as a real object.
 The real object position estimation system 10 of FIG. 6 includes a sensor unit 71, a self-position estimation unit 72, a time measurement unit 53, and a transmission unit 54. Since the time measurement unit 53 and the transmission unit 54 are the same as the configuration shown in FIG. 3, their description is omitted.
 The sensor unit 71 is composed of a stereo camera, a depth sensor, and the like, senses the environment around the real object, and supplies the sensing result to the self-position estimation unit 72.
 The self-position estimation unit 72 estimates the position of the sensor unit 71 in the three-dimensional coordinate system based on the sensing result from the sensor unit 71, and supplies sensor position information representing the position of the sensor unit 71 to the transmission unit 54. An IMU (Inertial Measurement Unit) may be provided in addition to the sensor unit 71, and the position of the sensor unit 71 in the three-dimensional coordinate system may be estimated based on the sensing result by the sensor unit 71 and the angular velocity and acceleration detected by the IMU.
 Since the distribution server 20 of FIG. 6 is the same as the configuration shown in FIG. 3, its description is omitted.
 Next, the operations of the real object position estimation system 10 and the distribution server 20 of FIG. 6 will be described.
 FIG. 7 is a flowchart explaining the operation of the real object position estimation system 10 of FIG. 6.
 In step S31, the sensor unit 71 senses the environment around the real object.
 In step S32, the time measurement unit 53 acquires the time at which the sensor unit 71 performed the sensing as the sensing time.
 In step S33, the self-position estimation unit 72 estimates the position of the sensor unit 71 in the three-dimensional coordinate system based on the sensing result of the sensing by the sensor unit 71.
 In step S34, the transmission unit 54 transmits the sensor identifier of the sensor unit 71, sensor position information representing the estimated position of the sensor unit 71, and the sensing time to the distribution server 20.
 Since the operation of the distribution server 20 of FIG. 6 is the same as the operation of the distribution server 20 of FIG. 3 described with reference to FIG. 5, its description is omitted.
 As described above, in the configuration of FIG. 6, the position of the real object is estimated by the real object position estimation system 10 worn by the real object.
(3-3. Inside-out method 2)
 FIG. 8 is a block diagram showing a configuration example of the real object position estimation system 10 and the distribution server 20 that adopt an inside-out method as the tracking method for real objects. Therefore, in the example of FIG. 8 as well, the real object position estimation system 10 is configured, for example, as a wearable device worn by each player as a real object.
 The real object position estimation system 10 and the distribution server 20 of FIG. 8 differ from the real object position estimation system 10 and the distribution server 20 of FIG. 6 in that the self-position estimation unit 72 is provided in the distribution server 20 instead of the real object position estimation system 10.
 That is, in the real object position estimation system 10 of FIG. 8, the sensor unit 71 supplies the sensing result to the transmission unit 54, and the transmission unit 54 transmits the sensing result as it is to the distribution server 20 instead of the sensor position information.
 In the distribution server 20 of FIG. 8, the self-position estimation unit 72 estimates the position of the sensor unit 71 in the three-dimensional coordinate system based on the sensing result from the real object position estimation system 10, and supplies sensor position information representing the position of the sensor unit 71 to the transmission unit 64.
 Next, the operations of the real object position estimation system 10 and the distribution server 20 of FIG. 8 will be described.
 FIG. 9 is a flowchart explaining the operation of the real object position estimation system 10 of FIG. 8.
 In step S41, the sensor unit 71 senses the environment around the real object.
 In step S42, the time measurement unit 53 acquires the time at which the sensor unit 71 performed the sensing as the sensing time.
 In step S43, the transmission unit 54 transmits the sensor identifier of the sensor unit 71, the sensing result, and the sensing time to the distribution server 20.
 FIG. 10 is a flowchart explaining the operation of the distribution server 20 of FIG. 8.
 In step S51, the receiving unit 61 receives the sensor identifier, the sensing result, and the sensing time transmitted from the real object position estimation system 10.
 In step S52, the real object identifier conversion unit 62 converts the sensor identifier received by the receiving unit 61 into a real object identifier.
 In step S53, the self-position estimation unit 72 estimates, based on the sensing result from the real object position estimation system 10, the position of the sensor unit 71 in the three-dimensional coordinate system, that is, the position of the real object wearing the real object position estimation system 10.
 In step S54, the additional information acquisition unit 63 acquires additional information corresponding to the converted real object identifier.
 In step S55, the transmission unit 64 transmits the real object identifier, the object position information, the sensing time, and the additional information to the AR terminal 30.
 As described above, in the configuration of FIG. 8, the position of the real object is estimated by the distribution server 20, not by the real object position estimation system 10 worn by the real object.
(3-4. Outside-in method 1)
 FIG. 11 is a block diagram showing a configuration example of the real object position estimation system 10 and the distribution server 20 that adopt an outside-in method as the tracking method for real objects. In the outside-in method, the position of the object to be measured is measured using sensors installed externally, for example by attaching a marker to the object and observing it with external cameras. Therefore, in the example of FIG. 11, the real object position estimation system 10 is arranged around the real objects and is configured, for example, as a plurality of high-precision sensor devices installed so as to surround the stadium.
 The real object position estimation system 10 of FIG. 11 includes a sensor unit 71, a real object position estimation unit 81, a real object identification unit 82, a time measurement unit 53, and a transmission unit 83. Since the sensor unit 71 is the same as the configuration shown in FIG. 6 and the time measurement unit 53 is the same as the configuration shown in FIG. 3, their description is omitted.
 The real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the sensor unit 71, and supplies object position information representing the position of the real object to the transmission unit 83. Here, since the position and orientation of the sensor unit 71 (for example, a depth sensor) in the three-dimensional coordinate system are known, the position of the real object can be estimated based on the depth information from the depth sensor. When the sensor unit 71 is configured with a camera, the position of the real object may be estimated by image recognition on the captured image captured by the camera. The object position information may represent the position of a single point, such as the center of gravity of the real object or the center of a bounding box in which the real object is recognized.
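 The back-projection implied here can be illustrated with a minimal sketch that assumes a pinhole depth camera with known intrinsics and a known camera-to-world pose; all numeric values below are placeholders, not parameters from this disclosure.

```python
import numpy as np

# Hypothetical intrinsics of the depth sensor and its pose in the shared 3D coordinate system.
FX, FY, CX, CY = 1000.0, 1000.0, 640.0, 360.0
CAM_TO_WORLD = np.eye(4)   # placeholder: identity pose

def depth_pixel_to_world(u, v, depth_m):
    """Back-project pixel (u, v) with measured depth into the 3D coordinate system."""
    x = (u - CX) / FX * depth_m
    y = (v - CY) / FY * depth_m
    p_cam = np.array([x, y, depth_m, 1.0])     # homogeneous point in the camera frame
    return (CAM_TO_WORLD @ p_cam)[:3]

# Example: the centre of a bounding box around a player, measured at 85 m.
print(depth_pixel_to_world(700, 400, 85.0))
```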
 The real object identification unit 82 identifies the real object based on the sensing result from the sensor unit 71, and supplies a real object identifier unique to the identified real object to the transmission unit 83. Here, each player is identified by recognizing the player's face or uniform number in the captured image serving as the sensing result. The players as real objects may each be equipped with a marker or an infrared lamp necessary for identification.
 The transmission unit 83 transmits the object position information from the real object position estimation unit 81, the real object identifier from the real object identification unit 82, and the sensing time from the time measurement unit 53 to the distribution server 20.
 The distribution server 20 of FIG. 11 includes a receiving unit 61, an additional information acquisition unit 63, and a transmission unit 64. Since the receiving unit 61, the additional information acquisition unit 63, and the transmission unit 64 are the same as the configuration shown in FIG. 3, their description is omitted.
 Next, the operations of the real object position estimation system 10 and the distribution server 20 of FIG. 11 will be described.
 FIG. 12 is a flowchart explaining the operation of the real object position estimation system 10 of FIG. 11.
 In step S61, the sensor unit 71 senses the environment in which the real object exists.
 In step S62, the time measurement unit 53 acquires the time at which the sensor unit 71 performed the sensing as the sensing time.
 In step S63, the real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system from the known position of the sensor unit 71 (sensor position) and the sensing result from the sensor unit 71.
 In step S64, the real object identification unit 82 identifies the real object from the sensing result from the sensor unit 71.
 In step S65, the transmission unit 83 transmits the real object identifier, the object position information, and the sensing time to the distribution server 20.
 FIG. 13 is a flowchart explaining the operation of the distribution server 20 of FIG. 11.
 In step S71, the receiving unit 61 receives the real object identifier, the object position information, and the sensing time transmitted from the real object position estimation system 10.
 In step S72, the additional information acquisition unit 63 acquires additional information corresponding to the real object identifier received by the receiving unit 61.
 In step S73, the transmission unit 64 transmits the real object identifier, the object position information, the sensing time, and the additional information to the AR terminal 30.
 As described above, in the configuration of FIG. 11, the position of the real object is acquired by the plurality of real object position estimation systems 10 installed outside the real object.
(3-5. Outside-in method 2)
 FIG. 14 is a block diagram showing a configuration example of the real object position estimation system 10 and the distribution server 20 that adopt an outside-in method as the tracking method for real objects. Therefore, in the example of FIG. 14 as well, the real object position estimation system 10 is configured, for example, as a plurality of high-precision sensor devices installed so as to surround the stadium.
 The real object position estimation system 10 and the distribution server 20 of FIG. 14 differ from the real object position estimation system 10 and the distribution server 20 of FIG. 11 in that the real object position estimation unit 81 and the real object identification unit 82 are provided in the distribution server 20 instead of the real object position estimation system 10. The distribution server 20 of FIG. 14 is further provided with a sensor position acquisition unit 91.
 That is, in the real object position estimation system 10 of FIG. 14, the sensor unit 71 supplies the sensing result to the transmission unit 83, and the transmission unit 83 transmits the sensor identifier and the sensing result as they are to the distribution server 20 instead of the real object identifier and the object position information.
 In the distribution server 20 of FIG. 14, the sensor position acquisition unit 91 acquires the position and orientation of the sensor unit 71 in the three-dimensional coordinate system based on the sensor identifier from the real object position estimation system 10, and supplies them to the real object position estimation unit 81. The position and orientation of the sensor unit 71 are acquired, for example, based on a correspondence table indicating the correspondence between the sensor identifier and the position and orientation of each sensor unit 71 measured in advance.
 The real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the real object position estimation system 10 and the position and orientation of the sensor unit 71 in the three-dimensional coordinate system from the sensor position acquisition unit 91, and supplies the object position information to the transmission unit 64.
 The real object identification unit 82 identifies the real object based on the sensing result from the real object position estimation system 10, and supplies a real object identifier unique to the identified real object to the additional information acquisition unit 63 and the transmission unit 64.
 Next, the operations of the real object position estimation system 10 and the distribution server 20 of FIG. 14 will be described.
 Since the operation of the real object position estimation system 10 of FIG. 14 is the same as the operation of the real object position estimation system 10 of FIG. 8 described with reference to FIG. 9, its description is omitted.
 FIG. 15 is a flowchart explaining the operation of the distribution server 20 of FIG. 14.
 In step S81, the receiving unit 61 receives the sensor identifier, the sensing result, and the sensing time transmitted from the real object position estimation system 10.
 In step S82, the sensor position acquisition unit 91 acquires the position of the sensor unit 71 in the three-dimensional coordinate system (sensor position) based on the sensor identifier received by the receiving unit 61.
 In step S83, the real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system from the sensor position acquired by the sensor position acquisition unit 91 and the sensing result received by the receiving unit 61.
 In step S84, the real object identification unit 82 identifies the real object from the sensing result received by the receiving unit 61.
 In step S85, the additional information acquisition unit 63 acquires additional information corresponding to the real object identifier unique to the identified real object.
 In step S86, the transmission unit 64 transmits the real object identifier, the object position information, the sensing time, and the additional information to the AR terminal 30.
 As described above, in the configuration of FIG. 14, the real object is identified and the position of the real object is estimated by the distribution server 20, not by the real object position estimation system 10.
(3-6. Outside-in method 3)
 FIG. 16 is a block diagram showing a configuration example of the real object position estimation system 10 and the distribution server 20 that adopt an outside-in method as the tracking method for real objects. However, in the example of FIG. 16, the real object position estimation system 10 is configured as a moving body that moves around the real object (for example, a drone flying over the stadium).
 The real object position estimation system 10 of FIG. 16 differs from the real object position estimation system 10 of FIG. 11 in that a self-position estimation unit 72 and a control unit 101 are further provided. The distribution server 20 of FIG. 16 is configured in the same manner as the distribution server 20 of FIG. 11.
 In the real object position estimation system 10 of FIG. 16, the self-position estimation unit 72 estimates the position of the sensor unit 71 (the real object position estimation system 10) in the three-dimensional coordinate system based on the sensing result from the sensor unit 71, and supplies it to the real object position estimation unit 81 and the control unit 101.
 The real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the sensor unit 71 and the position of the sensor unit 71 in the three-dimensional coordinate system from the self-position estimation unit 72, and supplies the object position information to the transmission unit 83.
 The control unit 101 controls actuators (not shown) based on the position of the sensor unit 71 in the three-dimensional coordinate system from the self-position estimation unit 72 so that the real object position estimation system 10 as a drone stays at or moves to a predetermined position.
 Next, the operations of the real object position estimation system 10 and the distribution server 20 of FIG. 16 will be described.
 FIG. 17 is a flowchart explaining the operation of the real object position estimation system 10 of FIG. 16.
 In step S91, the sensor unit 71 senses the environment in which the real object exists and in which the real object position estimation system 10 as a drone is flying.
 In step S92, the time measurement unit 53 acquires the time at which the sensor unit 71 performed the sensing as the sensing time.
 In step S93, the self-position estimation unit 72 estimates the position of the sensor unit 71 (the real object position estimation system 10) in the three-dimensional coordinate system (sensor position) based on the sensing result of the sensing by the sensor unit 71.
 In step S94, the control unit 101 controls the actuators for flying the real object position estimation system 10 based on the sensor position estimated by the self-position estimation unit 72.
 In step S95, the real object position estimation unit 81 estimates the position of the real object in the three-dimensional coordinate system from the sensor position estimated by the self-position estimation unit 72 and the sensing result from the sensor unit 71.
 In step S96, the real object identification unit 82 identifies the real object from the sensing result from the sensor unit 71.
 In step S97, the transmission unit 83 transmits the real object identifier, the object position information, and the sensing time to the distribution server 20.
 Since the operation of the distribution server 20 of FIG. 16 is the same as the operation of the distribution server 20 of FIG. 11 described with reference to FIG. 13, its description is omitted.
 As described above, in the configuration of FIG. 16, the position of the real object is acquired by the real object position estimation system 10 flying around the real object.
<4. Configuration and operation of the AR terminal>
 The configuration of the AR terminal 30 will be described.
(4-1. First embodiment)
 FIG. 18 is a block diagram showing a configuration example of the AR terminal 30 according to the first embodiment. In the example of FIG. 18, the AR terminal 30 acquires the position of the real object at the current time based only on the information distributed from the distribution server 20, and arranges the corresponding virtual object.
 図18のAR端末30は、受信部111、時刻計測部112、センサ部113、自己位置推定部114、移動範囲予測部115、検出範囲設定部116、物体検出部117、仮想物体配置部118、描画部119、および表示部120を備える。 The AR terminal 30 of FIG. 18 includes a receiving unit 111, a time measuring unit 112, a sensor unit 113, a self-position estimation unit 114, a moving range prediction unit 115, a detection range setting unit 116, an object detection unit 117, and a virtual object arrangement unit 118. It includes a drawing unit 119 and a display unit 120.
 受信部111は、配信サーバ20から配信される実物体識別子、物体位置情報、センシング時刻、および付加情報を受信する。実物体の位置を表す物体位置情報は、AR端末30で取得される情報ではなく、無線通信により取得される、外部のセンサによるセンシング結果に基づいた情報である。物体位置情報とセンシング時刻は、実物体識別子と対応付けられて、移動範囲予測部115に供給され、付加情報は、実物体識別子と対応付けられて、仮想物体配置部118に供給される。 The receiving unit 111 receives the real object identifier, the object position information, the sensing time, and the additional information distributed from the distribution server 20. The object position information representing the position of the real object is not the information acquired by the AR terminal 30, but the information based on the sensing result by the external sensor acquired by wireless communication. The object position information and the sensing time are associated with the real object identifier and supplied to the movement range prediction unit 115, and the additional information is associated with the real object identifier and supplied to the virtual object arrangement unit 118.
 時刻計測部112は、ミリ秒以上の精度で時刻を計測することで、現在の時刻(現時刻)を取得し、移動範囲予測部115に供給する。 The time measurement unit 112 acquires the current time (current time) by measuring the time with an accuracy of milliseconds or more, and supplies it to the movement range prediction unit 115.
 センサ部113は、ステレオカメラやデプスセンサなどで構成され、AR端末30の周囲の環境をセンシングし、そのセンシング結果を、自己位置推定部114と検出範囲設定部116に供給する。 The sensor unit 113 is composed of a stereo camera, a depth sensor, or the like, senses the environment around the AR terminal 30, and supplies the sensing result to the self-position estimation unit 114 and the detection range setting unit 116.
The self-position estimation unit 114 estimates the position of the sensor unit 113 (AR terminal 30) in the three-dimensional coordinate system based on the sensing result from the sensor unit 113, and supplies self-position information representing the position of the AR terminal 30 to the detection range setting unit 116 and the drawing unit 119. In addition to the sensor unit 113, an IMU may be provided so that the position of the sensor unit 113 in the three-dimensional coordinate system is estimated based on the sensing result from the sensor unit 113 and the angular velocity and acceleration detected by the IMU.
 移動範囲予測部115は、受信部111からの物体位置情報に基づいて、三次元座標系において実物体が移動し得る移動範囲を予測する。具体的には、移動範囲予測部115は、物体位置情報に対応する受信部111からのセンシング時刻から、時刻計測部112からの現時刻までの、実物体の移動範囲を予測する。三次元座標系において複数の実物体が存在する場合、それぞれの実物体に固有の実物体識別子に基づいて、実物体毎に移動範囲が予測される。 The movement range prediction unit 115 predicts the movement range in which the real object can move in the three-dimensional coordinate system based on the object position information from the reception unit 111. Specifically, the movement range prediction unit 115 predicts the movement range of the actual object from the sensing time from the reception unit 111 corresponding to the object position information to the current time from the time measurement unit 112. When there are a plurality of real objects in the three-dimensional coordinate system, the movement range is predicted for each real object based on the real object identifier unique to each real object.
Here, the movement range of the real object is predicted by estimating the predicted position of the real object at the current time from the moving speed and moving direction derived from the object position information. For example, the velocity vector (traveling direction and speed) of a real object is estimated from the difference between the last two pieces of object position information received for that object. Using the estimated velocity vector and the elapsed time from the received sensing time to the current time, the predicted position of the real object at the current time is estimated. The movement range spans from the position represented by the latest object position information to the estimated predicted position.
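The following Python sketch illustrates this velocity-vector prediction under stated assumptions; the function name predict_position and the representation of positions as NumPy arrays are illustrative and do not appear in the patent text.

import numpy as np

def predict_position(p_prev, t_prev, p_last, t_last, t_now):
    """Estimate the object's position at t_now from its last two timestamped samples."""
    if t_last <= t_prev:
        return np.asarray(p_last)  # no usable velocity; reuse the latest sample
    velocity = (np.asarray(p_last) - np.asarray(p_prev)) / (t_last - t_prev)  # m/s in world coordinates
    return np.asarray(p_last) + velocity * (t_now - t_last)

# The predicted movement range then spans from p_last to the returned position.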
Further, the movement range of the real object may be predicted using the context of the real object. The context here is, for example, the maximum speed at which the real object can move, or the plane or directions in which it can move. When the real object is a person, the maximum speed is set, for example, to 12.5 m/s based on the world record for sprinting. When the real object is a car, the maximum speed is set, for example, to 378 km/h, the world record of an F1 car. In addition, the real object is assumed to move on a plane perpendicular to the direction of gravity. Furthermore, in a track-and-field stadium or a circuit such as F1, the traveling direction of the real object is uniquely determined by its position on the course. In ball games such as soccer, however, where it is difficult to estimate the traveling direction of a real object (player), all directions on the plane perpendicular to the direction of gravity are included in the movement range. With such a context as a condition, the movement range can be limited according to the real object.
Furthermore, the movement range predicted as described above may be given a margin of, for example, 1.2 times, and may include the positioning error or ranging error of the GPS sensor 51 or the sensor unit 71. For example, when the GPS method is adopted as the tracking method for a real object, the movement range is predicted including a positioning error of about several meters.
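A minimal sketch of how the context and margin described above could bound the predicted range; the maximum-speed table, the margin factor, and the function name are assumptions introduced here for illustration only.

MAX_SPEED_M_PER_S = {"person": 12.5, "car": 378.0 / 3.6}  # example context values from the text

def movement_radius(estimated_speed, t_last, t_now, kind, margin=1.2, positioning_error=0.0):
    """Upper bound (in meters) on how far the object may have moved since t_last."""
    speed = min(estimated_speed, MAX_SPEED_M_PER_S.get(kind, estimated_speed))  # clamp by context
    return speed * (t_now - t_last) * margin + positioning_error  # e.g. a few meters for GPS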
The detection range setting unit 116 sets, based on the object position information of the real object and the self-position information from the self-position estimation unit 114, a detection range that is smaller than the imaging angle of view at the self-position and that corresponds to the position of the real object. The detection range is set as a range in which the real object can be detected in the captured image captured at the imaging angle of view at the self-position. Specifically, the detection range setting unit 116 uses the self-position information from the self-position estimation unit 114 to set an imaging angle of view that includes the movement range of the real object in the three-dimensional coordinate system predicted by the movement range prediction unit 115. The detection range setting unit 116 then sets, as the detection range, the range of the captured image captured by the sensor unit 113 at the set imaging angle of view (an angle of view including the real object) that contains the region corresponding to the movement range predicted by the movement range prediction unit 115 based on the object position information.
For example, assume that the movement range from the sensing time t-1 corresponding to the object position information to the current time t has been predicted by the movement range prediction unit 115. In this case, as shown in FIG. 19, the detection range setting unit 116 sets, based on the self-position, a detection range DR(t) in which the real object can be detected at the current time t, within the captured image CI captured at an imaging angle of view that includes the movement range from time t-1 to time t.
More specifically, based on the predicted movement range, the smallest rectangular region of the captured image CI in which the shape of a person is predicted to be captured between time t-1 and time t is determined. First, as shown in FIG. 20, a bounding box BB is set on the three-dimensional coordinate system for the shape of the person at time t-1 and at time t. When the position of the real object in the three-dimensional coordinate system is aggregated into a single point, the bounding box BB is set so that the person's height is parallel to the direction of gravity and the aggregated point is its center of gravity. For the person's height, a world-record value such as 2.5 m may be used, or the height may be included in the additional information from the distribution server 20.
Next, based on the camera model shown in FIG. 21, the eight vertices of the bounding box BB for the real object at time t-1 and the eight vertices of the bounding box BB for the real object at time t, both on the world coordinate system, are projected onto the u-v coordinate system. In FIG. 21, a point P on the world coordinate system (three-dimensional coordinate system) (Xw-Yw-Zw coordinate system) is projected, via the camera coordinate system (Xc-Yc-Zc coordinate system), onto a point p on the image coordinate system (u-v coordinate system) of the captured image CI. Then, the rectangular region bounded by the minimum and maximum values of u and the minimum and maximum values of v in the u-v coordinate system is set as the detection range. The detection range described here is an example, and its shape is not limited to a rectangle; it may be, for example, a circle.
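A sketch of this detection-range computation with a pinhole camera model is shown below; K (3x3 intrinsics) and (R, t_cam) (world-to-camera rotation and translation) come from the self-position estimate, and the helper names are assumptions introduced here for illustration.

import numpy as np

def project_points(points_world, K, R, t_cam):
    """Project world-coordinate points (N x 3) to pixel (u, v) coordinates (N x 2)."""
    pts_cam = R @ points_world.T + t_cam.reshape(3, 1)  # world -> camera coordinates
    uv = (K @ pts_cam)[:2] / pts_cam[2]                 # perspective division
    return uv.T

def detection_range(bbox_prev, bbox_now, K, R, t_cam):
    """Smallest axis-aligned rectangle enclosing both projected bounding boxes (8 corners each)."""
    corners = np.vstack([bbox_prev, bbox_now])          # 16 x 3 corner points
    uv = project_points(corners, K, R, t_cam)
    (u_min, v_min), (u_max, v_max) = uv.min(axis=0), uv.max(axis=0)
    return u_min, v_min, u_max, v_max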
 物体検出部117は、撮像画像において設定された検出範囲において実物体を検出し、検出された実物体の撮像画像上の位置を三次元座標系における位置に変換する。 The object detection unit 117 detects a real object within the detection range set in the captured image, and converts the position of the detected real object on the captured image into a position in the three-dimensional coordinate system.
 まず、物体検出部117は、検出範囲において実物体を検出した矩形の検出枠の重心を、実物体の撮像画像上の位置として取得する。実物体が人に限定されている場合には、人だけが検出される。また、複数の人が検出された場合、現時刻での予測位置に対応する撮像画像上の位置に最も近い位置が採用されるようにする。実物体の検出には、撮像画像の各画素の属性に基づいて被写体を推定するセマンティックセグメンテーションが用いられてもよい。 First, the object detection unit 117 acquires the center of gravity of the rectangular detection frame in which the real object is detected in the detection range as the position on the captured image of the real object. If the real object is limited to humans, only humans will be detected. Further, when a plurality of people are detected, the position closest to the position on the captured image corresponding to the predicted position at the current time is adopted. Semantic segmentation, which estimates the subject based on the attributes of each pixel of the captured image, may be used to detect the real object.
Next, the object detection unit 117 converts the position of the detected real object on the captured image (its position in the u-v coordinate system) into a position in the three-dimensional coordinate system. Here, the position in the three-dimensional coordinate system corresponding to the Xc-Yc coordinates, excluding the depth direction of the camera coordinate system, is obtained by inverse projection in the camera model described above. For the position in the three-dimensional coordinate system corresponding to the depth direction of the camera coordinate system, the predicted position at the current time t estimated by the movement range prediction unit 115 is applied. This is because, although this depth-direction position is not exact, its difference from the actual position is considered small, and because humans are insensitive to differences in depth-direction position, so the superimposition of the virtual object described later is unlikely to cause a sense of discomfort.
 このようにして、物体検出部117は、三次元座標系における現時刻での実物体の位置を取得する。 In this way, the object detection unit 117 acquires the position of the real object at the current time in the three-dimensional coordinate system.
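The detection and back-projection just described might look like the following sketch; detect_people() stands in for an arbitrary person detector restricted to the detection range and is a hypothetical placeholder, as are the other function and parameter names.

import numpy as np

def pixel_to_world(u, v, depth_cam, K, R, t_cam):
    """Invert the pinhole projection for a pixel at a known camera-space depth (Zc)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in camera coordinates
    p_cam = ray * depth_cam                         # Xc, Yc from the pixel; Zc from the prediction
    return R.T @ (p_cam - t_cam)                    # camera -> world coordinates

def locate_in_range(image, det_range, predicted_uv, predicted_depth, K, R, t_cam):
    boxes = detect_people(image, det_range)         # hypothetical detector, detection range only
    if not boxes:
        return None
    centers = [((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0) for b in boxes]
    u, v = min(centers, key=lambda c: np.hypot(c[0] - predicted_uv[0], c[1] - predicted_uv[1]))
    return pixel_to_world(u, v, predicted_depth, K, R, t_cam)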
The virtual object arranging unit 118 arranges the additional information from the receiving unit 111 as a virtual object on the three-dimensional coordinate system, in correspondence with the position of the real object at the current time in the three-dimensional coordinate system acquired by the object detection unit 117. The virtual object arranging unit 118 places the virtual object at a position that does not overlap the real object, for example, several tens of centimeters above the real object with respect to the direction of gravity.
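A minimal sketch of this placement rule follows; the 0.5 m offset and the choice of +Z as the gravity-up axis are assumptions made only for the example.

import numpy as np

def place_label(object_pos_world, up=np.array([0.0, 0.0, 1.0]), offset_m=0.5):
    """Return a position a few tens of centimeters above the object along the up axis."""
    return np.asarray(object_pos_world) + up * offset_m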
 描画部119は、自己位置推定部114からの自己位置情報に基づいて、仮想物体配置部118により三次元座標系上に配置された仮想物体をレンダリングする。 The drawing unit 119 renders a virtual object arranged on the three-dimensional coordinate system by the virtual object arrangement unit 118 based on the self-position information from the self-position estimation unit 114.
 表示部120は、ディスプレイなどで構成され、描画部119によりレンダリングされた仮想物体を表示する。 The display unit 120 is composed of a display or the like, and displays a virtual object rendered by the drawing unit 119.
(Operation of AR terminal)
The operation of the AR terminal 30 will be described with reference to the flowchart of FIG. 22.
 ステップS111において、センサ部113は、AR端末30の周囲の環境(例えば、ユーザUが観戦している競技場やサーキット)をセンシングする。 In step S111, the sensor unit 113 senses the environment around the AR terminal 30 (for example, the stadium or circuit in which the user U is watching).
 ステップS112において、時刻計測部112は、センサ部113がセンシングした時刻を現時刻として取得する。 In step S112, the time measuring unit 112 acquires the time sensed by the sensor unit 113 as the current time.
 ステップS113において、自己位置推定部114は、センサ部113によるセンシングのセンシング結果に基づいて、三次元座標系における自己位置(AR端末30の位置)を推定する。 In step S113, the self-position estimation unit 114 estimates the self-position (position of the AR terminal 30) in the three-dimensional coordinate system based on the sensing result of the sensing by the sensor unit 113.
 ステップS114において、AR端末30は、三次元座標系における現時刻での実物体の位置を取得する三次元位置取得処理を実行する。三次元位置取得処理の詳細については後述する。 In step S114, the AR terminal 30 executes a three-dimensional position acquisition process for acquiring the position of the real object at the current time in the three-dimensional coordinate system. The details of the three-dimensional position acquisition process will be described later.
In step S115, the virtual object arranging unit 118 arranges the virtual object on the three-dimensional coordinate system in correspondence with the position of the real object at the current time in the three-dimensional coordinate system acquired by the three-dimensional position acquisition process.
 ステップS116において、描画部119は、自己位置推定部114により推定された自己位置を表す自己位置情報に基づいて、仮想物体配置部118により三次元座標系上に配置された仮想物体をレンダリングする。 In step S116, the drawing unit 119 renders a virtual object arranged on the three-dimensional coordinate system by the virtual object arrangement unit 118 based on the self-position information representing the self-position estimated by the self-position estimation unit 114.
 ステップS117において、表示部120は、描画部119によりレンダリングされた仮想物体を表示する。 In step S117, the display unit 120 displays the virtual object rendered by the drawing unit 119.
 図23は、図22のステップS114において実行される三次元位置取得処理の詳細について説明するフローチャートである。 FIG. 23 is a flowchart illustrating details of the three-dimensional position acquisition process executed in step S114 of FIG. 22.
 ステップS121において、受信部111は、配信サーバ20から配信される実物体識別子、物体位置情報、センシング時刻、および付加情報を受信する。 In step S121, the receiving unit 111 receives the real object identifier, the object position information, the sensing time, and the additional information distributed from the distribution server 20.
 ステップS122において、移動範囲予測部115は、受信された全ての実物体識別子について、対応する物体位置情報に基づいて、三次元座標系における実物体の移動範囲を予測する。 In step S122, the movement range prediction unit 115 predicts the movement range of the real object in the three-dimensional coordinate system based on the corresponding object position information for all the received real object identifiers.
In step S123, the detection range setting unit 116 uses the self-position information representing the self-position estimated by the self-position estimation unit 114 to set, in the captured image captured at the self-position, the detection ranges corresponding to the movement ranges predicted for all the real object identifiers.
 ステップS124において、物体検出部117は、全ての実物体識別子について設定された検出範囲それぞれにおいて、対応する実物体を検出する。 In step S124, the object detection unit 117 detects the corresponding real object in each of the detection ranges set for all the real object identifiers.
 ステップS125において、物体検出部117は、検出された全ての実物体の撮像画像上の位置を三次元座標系における位置に変換する。これにより、三次元座標系における現時刻での実物体の位置が取得される。 In step S125, the object detection unit 117 converts the positions of all the detected real objects on the captured image into the positions in the three-dimensional coordinate system. As a result, the position of the real object at the current time in the three-dimensional coordinate system is acquired.
According to the above processing, the AR terminal 30 sets a detection range that is smaller than the imaging angle of view and that corresponds to the position of the real object, so the real object can be detected with less processing. Further, as the position of the detected real object, the position of the real object at the current time detected on the captured image is adopted instead of the position distributed from the distribution server 20, so the transmission delay from the real object position estimation system 10 to the AR terminal 30 can be compensated for. As a result, the positional deviation of the virtual object can be eliminated while reducing the processing load of the AR terminal 30, and the virtual object corresponding to the real object can be presented more suitably.
(4-2. Second Embodiment)
FIG. 24 is a block diagram showing a configuration example of the AR terminal 30 according to the second embodiment. In the example of FIG. 24, the AR terminal 30 acquires the position of the real object at the current time by using both the information distributed from the distribution server 20 and the result of ranging from the terminal itself to the real object, and arranges the corresponding virtual object.
 図24のAR端末30は、図18のAR端末30が備える受信部111乃至表示部120に加え、実物体位置推定部131、実物体識別部132、および実物体位置選択部133を備える。 The AR terminal 30 of FIG. 24 includes a real object position estimation unit 131, a real object identification unit 132, and a real object position selection unit 133, in addition to the reception unit 111 to the display unit 120 of the AR terminal 30 of FIG.
The real object position estimation unit 131 estimates the position of the real object in the three-dimensional coordinate system based on the sensing result from the sensor unit 113 and the self-position information from the self-position estimation unit 114, and supplies the estimation result to the real object position selection unit 133. The sensing result from the sensor unit 113 is, for example, the distance to the real object measured by a depth sensor.
 実物体識別部132は、センサ部113からのセンシング結果に基づいて、実物体を識別し、識別された実物体に固有の実物体識別子を、実物体位置選択部133に供給する。 The real object identification unit 132 identifies the real object based on the sensing result from the sensor unit 113, and supplies the real object identifier unique to the identified real object to the real object position selection unit 133.
When the real object position estimation unit 131 is able to estimate the position of the real object, the real object position selection unit 133 selects that estimation result as the position of the real object and supplies it to the virtual object arrangement unit 118 in association with the real object identifier from the real object identification unit 132. When the real object position estimation unit 131 is unable to estimate the position of the real object, the real object position selection unit 133 selects the position of the real object acquired by the object detection unit 117 and supplies it to the virtual object arrangement unit 118.
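As a sketch, this selection rule could be expressed as follows; the function name and the use of None to signal that on-device ranging failed are assumptions for illustration.

def select_position(local_estimate, detected_position):
    """Prefer the position estimated from the terminal's own ranging; otherwise fall back."""
    return local_estimate if local_estimate is not None else detected_position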
(Operation of AR terminal)
The operation of the AR terminal 30 of FIG. 24 is basically the same as the operation of the AR terminal 30 of FIG. 18 described with reference to the flowchart of FIG. 22, except for step S114.
 図25は、図24のAR端末30によって、図22のステップS114において実行される三次元位置取得処理の詳細について説明するフローチャートである。 FIG. 25 is a flowchart illustrating details of the three-dimensional position acquisition process executed in step S114 of FIG. 22 by the AR terminal 30 of FIG. 24.
 ステップS131において、実物体識別部132は、センサ部113からのセンシング結果から、実物体を識別する。 In step S131, the real object identification unit 132 identifies the real object from the sensing result from the sensor unit 113.
In step S132, the real object position estimation unit 131 estimates the position of the real object in the three-dimensional coordinate system from the self-position (the position of the AR terminal 30) estimated by the self-position estimation unit 114 and the sensing result from the sensor unit 113.
 ステップS133において、図24のAR端末30は、図23のフローチャートを参照して説明した処理を実行することで、三次元座標系における現時刻での全ての実物体の位置を取得する。 In step S133, the AR terminal 30 of FIG. 24 acquires the positions of all real objects at the current time in the three-dimensional coordinate system by executing the process described with reference to the flowchart of FIG. 23.
 ステップS134において、実物体位置選択部133は、実物体識別部132からの全ての実物体識別子について、実物体位置推定部131により実物体の位置が推定可能であったか否かを判定する。実物体の位置が推定可能であったと判定された場合、ステップS135に進む。 In step S134, the real object position selection unit 133 determines whether or not the position of the real object can be estimated by the real object position estimation unit 131 for all the real object identifiers from the real object identification unit 132. If it is determined that the position of the real object can be estimated, the process proceeds to step S135.
 ステップS135において、実物体位置選択部133は、全ての実物体識別子について、ステップS133において取得された実物体の位置を、ステップS132において推定された実物体の位置に置き換える。ここでは、その位置が推定可能と判定された実物体のみについて、ステップS133において取得された実物体の位置が、ステップS132において推定された実物体の位置に置き換えられてもよい。 In step S135, the real object position selection unit 133 replaces the position of the real object acquired in step S133 with the position of the real object estimated in step S132 for all the real object identifiers. Here, the position of the real object acquired in step S133 may be replaced with the position of the real object estimated in step S132 only for the real object whose position is determined to be estimable.
 一方、ステップS134において実物体の位置が推定可能であったと判定されなかった場合、ステップS135はスキップされ、全ての実物体の位置として、ステップS133において取得された実物体の位置が採用される。ここでは、その位置が推定可能と判定されなかった実物体のみについて、ステップS133において取得された実物体の位置が、採用されてもよい。 On the other hand, if it is not determined in step S134 that the position of the real object could be estimated, step S135 is skipped and the positions of the real objects acquired in step S133 are adopted as the positions of all the real objects. Here, the position of the real object acquired in step S133 may be adopted only for the real object whose position is not determined to be estimable.
According to the above processing, when the position of the real object can be estimated within the ranging range of the depth sensor or the like provided in the AR terminal 30, for example when the real object is at a close distance from the AR terminal 30, the estimated position of the real object is adopted. In this case, there is no need to compensate for the transmission delay between the real object position estimation system 10 and the AR terminal 30, and the virtual object corresponding to the real object can be presented more suitably.
 以上においては、本開示に係る技術を、陸上やサッカーなどの競技場における選手や、F1などのサーキットにおけるフォーミュラカーに対して仮想物体を重畳する構成に適用した例について説明した。これらに限らず、本開示に係る技術を、例えば、タクシーやハイヤーなどを手配可能な配車アプリケーションにおいて、ユーザが手配した車両に対して仮想物体を重畳する構成に適用してもよい。また、本開示に係る技術を、FPS(First Person Shooter)のような一人称視点でプレイ可能なARシューティングゲームにおいて、ユーザの周囲のプレーヤに対して仮想物体を重畳する構成に適用してもよい。 In the above, an example in which the technology according to the present disclosure is applied to a configuration in which a virtual object is superimposed on a player in a stadium such as athletics or soccer or a formula car in a circuit such as F1 has been described. Not limited to these, the technology according to the present disclosure may be applied to a configuration in which a virtual object is superimposed on a vehicle arranged by a user in, for example, a vehicle allocation application in which a taxi, a hire, or the like can be arranged. Further, the technique according to the present disclosure may be applied to a configuration in which a virtual object is superimposed on a player around the user in an AR shooting game that can be played from a first-person viewpoint such as FPS (First Person Shooter).
<5. Computer configuration example>
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, the programs constituting the software are installed on a computer. Here, the computer includes a computer built into dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 図26は、上述した一連の処理をプログラムにより実行するコンピュータのハードウェアの構成例を示すブロック図である。 FIG. 26 is a block diagram showing a configuration example of computer hardware that executes the above-mentioned series of processes programmatically.
 コンピュータにおいて、CPU501,ROM(Read Only Memory)502,RAM(Random Access Memory)503は、バス504により相互に接続されている。 In the computer, the CPU 501, the ROM (ReadOnlyMemory) 502, and the RAM (RandomAccessMemory) 503 are connected to each other by the bus 504.
 バス504には、さらに、入出力インタフェース505が接続されている。入出力インタフェース505には、入力部506、出力部507、記憶部508、通信部509、およびドライブ510が接続されている。 An input / output interface 505 is further connected to the bus 504. An input unit 506, an output unit 507, a storage unit 508, a communication unit 509, and a drive 510 are connected to the input / output interface 505.
 入力部506は、キーボード、マウス、マイクロフォンなどよりなる。出力部507は、ディスプレイ、スピーカなどよりなる。記憶部508は、ハードディスクや不揮発性のメモリなどよりなる。通信部509は、ネットワークインタフェースなどよりなる。ドライブ510は、磁気ディスク、光ディスク、光磁気ディスク、または半導体メモリなどのリムーバブルメディア511を駆動する。 The input unit 506 includes a keyboard, a mouse, a microphone, and the like. The output unit 507 includes a display, a speaker, and the like. The storage unit 508 includes a hard disk, a non-volatile memory, and the like. The communication unit 509 includes a network interface and the like. The drive 510 drives a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the series of processes described above is performed by the CPU 501 loading, for example, a program stored in the storage unit 508 into the RAM 503 via the input/output interface 505 and the bus 504 and executing it.
 コンピュータ(CPU501)が実行するプログラムは、例えば、パッケージメディア等としてのリムーバブルメディア511に記録して提供することができる。また、プログラムは、ローカルエリアネットワーク、インターネット、デジタル衛星放送といった、有線または無線の伝送媒体を介して提供することができる。 The program executed by the computer (CPU 501) can be recorded and provided on the removable media 511 as a package media or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
 コンピュータでは、プログラムは、リムーバブルメディア511をドライブ510に装着することにより、入出力インタフェース505を介して、記憶部508にインストールすることができる。また、プログラムは、有線または無線の伝送媒体を介して、通信部509で受信し、記憶部508にインストールすることができる。その他、プログラムは、ROM502や記憶部508に、あらかじめインストールしておくことができる。 In the computer, the program can be installed in the storage unit 508 via the input / output interface 505 by mounting the removable media 511 in the drive 510. Further, the program can be received by the communication unit 509 and installed in the storage unit 508 via a wired or wireless transmission medium. In addition, the program can be installed in the ROM 502 or the storage unit 508 in advance.
 なお、コンピュータが実行するプログラムは、本明細書で説明する順序に沿って時系列に処理が行われるプログラムであっても良いし、並列に、あるいは呼び出しが行われたとき等の必要なタイミングで処理が行われるプログラムであっても良い。 The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in the present specification, in parallel, or at a necessary timing such as when a call is made. It may be a program in which processing is performed.
 本開示の実施の形態は、上述した実施の形態に限定されるものではなく、本開示の要旨を逸脱しない範囲において種々の変更が可能である。 The embodiments of the present disclosure are not limited to the embodiments described above, and various changes can be made without departing from the gist of the present disclosure.
 本明細書に記載された効果はあくまで例示であって限定されるものではなく、他の効果があってもよい。 The effects described in this specification are merely examples and are not limited, and other effects may be used.
 さらに、本開示は以下のような構成をとることができる。
(1)
 実空間に対応する三次元座標系における実物体の物体位置情報と自己位置情報に基づいて、自己位置での撮像画角よりも小さく且つ前記実物体の位置に対応する検出範囲を設定する検出範囲設定部と、
 前記検出範囲において前記実物体を検出する物体検出部と
 を備える情報処理装置。
(2)
 前記検出範囲設定部は、前記撮像画角で撮像された撮像画像において、前記実物体を検出し得る前記検出範囲を設定し、
 前記物体検出部は、前記検出範囲において検出された前記実物体の前記撮像画像上の位置を、前記三次元座標系における位置に変換する
 (1)に記載の情報処理装置。
(3)
 前記物体位置情報に基づいて、前記三次元座標系において前記実物体が移動し得る移動範囲を予測する移動範囲予測部をさらに備え、
 前記検出範囲設定部は、前記自己位置情報を用いて、前記移動範囲を含む前記撮像画角で撮像された前記撮像画像において前記検出範囲を設定する
 (2)に記載の情報処理装置。
(4)
 前記検出範囲設定部は、前記撮像画像において、前記移動範囲に対応する領域が含まれる前記検出範囲を設定する
 (3)に記載の情報処理装置。
(5)
 前記検出範囲設定部は、前記移動範囲に基づいて前記撮像画像において前記実物体が撮像されると予測される最小の前記領域を、前記検出範囲に設定する
 (4)に記載の情報処理装置。
(6)
 前記移動範囲予測部は、外部のセンサにより前記実物体がセンシングされたセンシング時刻から現時刻までの間の前記移動範囲を予測し、
 前記検出範囲設定部は、前記移動範囲に対応する現時刻での前記検出範囲を設定し、
 前記物体検出部は、前記三次元座標系における現時刻での前記実物体の位置を取得する
 (4)または(5)に記載の情報処理装置。
(7)
 前記移動範囲予測部は、前記物体位置情報に基づいた前記実物体の移動速度と移動方向から、現時刻での前記実物体の予測位置を推定することで、前記移動範囲を予測する
 (6)に記載の情報処理装置。
(8)
 前記検出範囲設定部は、前記三次元座標系における前記センシング時刻での前記実物体の位置と、現時刻での前記実物体の前記予測位置を、前記撮像画像の画像座標系に射影することで、現時刻での前記検出範囲を設定する
 (7)に記載の情報処理装置。
(9)
 前記移動範囲予測部は、前記実物体のコンテキストを用いて、前記移動範囲を予測する
 (3)乃至(8)のいずれかに記載の情報処理装置。
(10)
 前記コンテキストは、前記実物体が移動可能な最大速度、平面、および方向の少なくともいずれかを含む
 (9)に記載の情報処理装置。
(11)
 前記移動範囲予測部は、複数の前記実物体が存在する場合、前記実物体に固有の実物体識別子に基づいて、前記実物体毎に前記移動範囲を予測する
 (3)乃至(10)のいずれかに記載の情報処理装置。
(12)
 前記物体位置情報は、無線通信により取得される、外部のセンサによるセンシング結果に基づいた情報である
 (1)乃至(11)のいずれかに記載の情報処理装置。
(13)
 前記センサは、前記実物体に装着される第1のセンサ、前記実物体の周囲に配置される第2のセンサ、および、前記実物体の周辺を移動する移動体に搭載される第3のセンサの少なくとも1つとして構成される
 (12)に記載の情報処理装置。
(14)
 前記検出範囲において検出された、前記三次元座標系における現時刻での前記実物体の位置に対応して、仮想物体を配置する仮想物体配置部をさらに備える
 (1)乃至(13)のいずれかに記載の情報処理装置。
(15)
 前記仮想物体配置部は、前記実物体に重ならない位置に、前記仮想物体を配置する
 (14)に記載の情報処理装置。
(16)
 前記仮想物体配置部は、前記物体位置情報とともに取得される、前記実物体に関する付加情報を、前記仮想物体として配置する
 (14)または(15)に記載の情報処理装置。
(17)
 前記自己位置情報に基づいて、前記仮想物体をレンダリングする描画部と、
 レンダリングされた前記仮想物体を表示する表示部をさらに備える
 (14)乃至(16)のいずれかに記載の情報処理装置。
(18)
 前記自己位置情報と前記実物体までの距離に基づいて、前記三次元座標系における前記実物体の位置を推定する実物体位置推定部をさらに備え、
 前記仮想物体配置部は、前記実物体位置推定部が前記実物体の位置を推定可能な場合、推定された前記実物体の位置に基づいて、前記仮想物体を配置する
 (14)乃至(17)のいずれかに記載の情報処理装置。
(19)
 情報処理装置が、
 実空間に対応する三次元座標系における実物体の物体位置情報と自己位置情報に基づいて、自己位置での撮像画角よりも小さく且つ前記実物体の位置に対応する検出範囲を設定し、
 前記検出範囲において前記実物体を検出する
 情報処理方法。
(20)
 実空間に対応する三次元座標系における実物体の物体位置情報と自己位置情報に基づいて、自己位置での撮像画角よりも小さく且つ前記実物体の位置に対応する検出範囲を設定し、
 前記検出範囲において前記実物体を検出する
 処理を実行させるためのプログラムを記録した、コンピュータが読み取り可能な記録媒体。
Further, the present disclosure may have the following structure.
(1)
A detection range setting unit that sets, based on object position information and self-position information of a real object in a three-dimensional coordinate system corresponding to a real space, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and
An information processing device including an object detection unit that detects the real object in the detection range.
(2)
The detection range setting unit sets the detection range capable of detecting the real object in the captured image captured at the imaging angle of view.
The information processing apparatus according to (1), wherein the object detection unit converts a position of the real object detected in the detection range on the captured image into a position in the three-dimensional coordinate system.
(3)
Further, a movement range prediction unit for predicting a movement range in which the real object can move in the three-dimensional coordinate system based on the object position information is provided.
The information processing apparatus according to (2), wherein the detection range setting unit sets the detection range in the captured image captured at the imaging angle of view including the moving range by using the self-position information.
(4)
The information processing apparatus according to (3), wherein the detection range setting unit sets the detection range including a region corresponding to the movement range in the captured image.
(5)
The information processing apparatus according to (4), wherein the detection range setting unit sets, as the detection range, the minimum region in which the real object is predicted to be captured in the captured image based on the movement range.
(6)
The movement range prediction unit predicts the movement range from the sensing time when the real object is sensed by an external sensor to the current time.
The detection range setting unit sets the detection range at the current time corresponding to the movement range, and
The information processing device according to (4) or (5), wherein the object detection unit acquires the position of the real object at the current time in the three-dimensional coordinate system.
(7)
The information processing apparatus according to (6), wherein the movement range prediction unit predicts the movement range by estimating the predicted position of the real object at the current time from the movement speed and movement direction of the real object based on the object position information.
(8)
The information processing apparatus according to (7), wherein the detection range setting unit sets the detection range at the current time by projecting the position of the real object at the sensing time in the three-dimensional coordinate system and the predicted position of the real object at the current time onto the image coordinate system of the captured image.
(9)
The information processing apparatus according to any one of (3) to (8), wherein the movement range prediction unit predicts the movement range using the context of the real object.
(10)
The information processing apparatus according to (9), wherein the context includes at least one of the maximum velocity, plane, and direction in which the real object can move.
(11)
The information processing apparatus according to any one of (3) to (10), wherein, when a plurality of the real objects exist, the movement range prediction unit predicts the movement range for each real object based on the real object identifier unique to the real object.
(12)
The information processing apparatus according to any one of (1) to (11), wherein the object position information is information acquired by wireless communication and based on a sensing result by an external sensor.
(13)
The information processing apparatus according to (12), wherein the sensor is configured as at least one of a first sensor mounted on the real object, a second sensor arranged around the real object, and a third sensor mounted on a moving body that moves around the real object.
(14)
The information processing apparatus according to any one of (1) to (13), further including a virtual object arranging unit that arranges a virtual object corresponding to the position of the real object at the current time in the three-dimensional coordinate system detected in the detection range.
(15)
The information processing apparatus according to (14), wherein the virtual object arranging unit arranges the virtual object at a position that does not overlap with the real object.
(16)
The information processing apparatus according to (14) or (15), wherein the virtual object arranging unit arranges additional information about the real object acquired together with the object position information as the virtual object.
(17)
A drawing unit that renders the virtual object based on the self-position information,
The information processing apparatus according to any one of (14) to (16), further comprising a display unit for displaying the rendered virtual object.
(18)
Further provided with a real object position estimation unit that estimates the position of the real object in the three-dimensional coordinate system based on the self-position information and the distance to the real object.
The information processing apparatus according to any one of (14) to (17), wherein, when the real object position estimation unit can estimate the position of the real object, the virtual object placement unit arranges the virtual object based on the estimated position of the real object.
(19)
Information processing equipment
Based on the object position information and self-position information of the real object in the three-dimensional coordinate system corresponding to the real space, a detection range smaller than the imaging angle of view at the self-position and corresponding to the position of the real object is set.
An information processing method for detecting the real object in the detection range.
(20)
Based on the object position information and self-position information of the real object in the three-dimensional coordinate system corresponding to the real space, a detection range smaller than the imaging angle of view at the self-position and corresponding to the position of the real object is set.
A computer-readable recording medium on which a program for executing a process for detecting the real object in the detection range is recorded.
 10 実物体位置推定システム, 20 配信サーバ, 30 AR端末, 111 受信部, 112 時刻計測部, 113 センサ部, 114 自己位置推定部, 115 移動範囲予測部, 116 検出範囲設定部, 117 物体検出部, 118 仮想物体配置部, 119 描画部, 120 表示部, 131 実物体位置推定部, 132 実物体識別部, 133 実物体位置選択部 10 Real object position estimation system, 20 distribution server, 30 AR terminal, 111 receiver, 112 time measurement unit, 113 sensor unit, 114 self-position estimation unit, 115 movement range prediction unit, 116 detection range setting unit, 117 object detection unit , 118 virtual object placement unit, 119 drawing unit, 120 display unit, 131 real object position estimation unit, 132 real object identification unit, 133 real object position selection unit

Claims (20)

  1.  実空間に対応する三次元座標系における実物体の物体位置情報と自己位置情報に基づいて、自己位置での撮像画角よりも小さく且つ前記実物体の位置に対応する検出範囲を設定する検出範囲設定部と、
     前記検出範囲において前記実物体を検出する物体検出部と
     を備える情報処理装置。
A detection range setting unit that sets, based on object position information and self-position information of a real object in a three-dimensional coordinate system corresponding to a real space, a detection range that is smaller than the imaging angle of view at the self-position and corresponds to the position of the real object, and
    An information processing device including an object detection unit that detects the real object in the detection range.
  2.  前記検出範囲設定部は、前記撮像画角で撮像された撮像画像において、前記実物体を検出し得る前記検出範囲を設定し、
     前記物体検出部は、前記検出範囲において検出された前記実物体の前記撮像画像上の位置を、前記三次元座標系における位置に変換する
     請求項1に記載の情報処理装置。
    The detection range setting unit sets the detection range capable of detecting the real object in the captured image captured at the imaging angle of view.
    The information processing apparatus according to claim 1, wherein the object detection unit converts a position of the real object detected in the detection range on the captured image into a position in the three-dimensional coordinate system.
  3.  前記物体位置情報に基づいて、前記三次元座標系において前記実物体が移動し得る移動範囲を予測する移動範囲予測部をさらに備え、
     前記検出範囲設定部は、前記自己位置情報を用いて、前記移動範囲を含む前記撮像画角で撮像された前記撮像画像において前記検出範囲を設定する
     請求項2に記載の情報処理装置。
    Further, a movement range prediction unit for predicting a movement range in which the real object can move in the three-dimensional coordinate system based on the object position information is provided.
    The information processing apparatus according to claim 2, wherein the detection range setting unit uses the self-position information to set the detection range in the captured image captured at the imaging angle of view including the moving range.
  4.  前記検出範囲設定部は、前記撮像画像において、前記移動範囲に対応する領域が含まれる前記検出範囲を設定する
     請求項3に記載の情報処理装置。
    The information processing apparatus according to claim 3, wherein the detection range setting unit sets the detection range including a region corresponding to the movement range in the captured image.
  5.  前記検出範囲設定部は、前記移動範囲に基づいて前記撮像画像において前記実物体が撮像されると予測される最小の前記領域を、前記検出範囲に設定する
     請求項4に記載の情報処理装置。
    The information processing apparatus according to claim 4, wherein the detection range setting unit sets the minimum region in which the real object is predicted to be captured in the captured image based on the movement range in the detection range.
  6.  前記移動範囲予測部は、外部のセンサにより前記実物体がセンシングされたセンシング時刻から現時刻までの間の前記移動範囲を予測し、
     前記検出範囲設定部は、前記移動範囲に対応する現時刻での前記検出範囲を設定し、
     前記物体検出部は、前記三次元座標系における現時刻での前記実物体の位置を取得する
     請求項4に記載の情報処理装置。
    The movement range prediction unit predicts the movement range from the sensing time when the real object is sensed by an external sensor to the current time.
The detection range setting unit sets the detection range at the current time corresponding to the movement range, and
    The information processing device according to claim 4, wherein the object detection unit acquires the position of the real object at the current time in the three-dimensional coordinate system.
  7.  前記移動範囲予測部は、前記物体位置情報に基づいた前記実物体の移動速度と移動方向から、現時刻での前記実物体の予測位置を推定することで、前記移動範囲を予測する
     請求項6に記載の情報処理装置。
The information processing apparatus according to claim 6, wherein the movement range prediction unit predicts the movement range by estimating the predicted position of the real object at the current time from the movement speed and movement direction of the real object based on the object position information.
  8.  前記検出範囲設定部は、前記三次元座標系における前記センシング時刻での前記実物体の位置と、現時刻での前記実物体の前記予測位置を、前記撮像画像の画像座標系に射影することで、現時刻での前記検出範囲を設定する
     請求項7に記載の情報処理装置。
The information processing apparatus according to claim 7, wherein the detection range setting unit sets the detection range at the current time by projecting the position of the real object at the sensing time in the three-dimensional coordinate system and the predicted position of the real object at the current time onto the image coordinate system of the captured image.
  9.  前記移動範囲予測部は、前記実物体のコンテキストを用いて、前記移動範囲を予測する
     請求項3に記載の情報処理装置。
    The information processing device according to claim 3, wherein the movement range prediction unit predicts the movement range by using the context of the real object.
  10.  前記コンテキストは、前記実物体が移動可能な最大速度、平面、および方向の少なくともいずれかを含む
     請求項9に記載の情報処理装置。
    The information processing apparatus according to claim 9, wherein the context includes at least one of a maximum speed, a plane, and a direction in which the real object can move.
  11.  前記移動範囲予測部は、複数の前記実物体が存在する場合、前記実物体に固有の実物体識別子に基づいて、前記実物体毎に前記移動範囲を予測する
     請求項3に記載の情報処理装置。
The information processing device according to claim 3, wherein the movement range prediction unit predicts the movement range for each real object based on the real object identifier unique to the real object when a plurality of the real objects exist.
  12.  前記物体位置情報は、無線通信により取得される、外部のセンサによるセンシング結果に基づいた情報である
     請求項1に記載の情報処理装置。
    The information processing device according to claim 1, wherein the object position information is information based on a sensing result by an external sensor acquired by wireless communication.
  13.  前記センサは、前記実物体に装着される第1のセンサ、前記実物体の周囲に配置される第2のセンサ、および、前記実物体の周辺を移動する移動体に搭載される第3のセンサの少なくとも1つとして構成される
     請求項12に記載の情報処理装置。
The information processing apparatus according to claim 12, wherein the sensor is configured as at least one of a first sensor mounted on the real object, a second sensor arranged around the real object, and a third sensor mounted on a moving body that moves around the real object.
  14.  前記検出範囲において検出された、前記三次元座標系における現時刻での前記実物体の位置に対応して、仮想物体を配置する仮想物体配置部をさらに備える
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, further comprising a virtual object arranging unit for arranging a virtual object corresponding to the position of the real object at the current time in the three-dimensional coordinate system detected in the detection range.
  15.  前記仮想物体配置部は、前記実物体に重ならない位置に、前記仮想物体を配置する
     請求項14に記載の情報処理装置。
    The information processing device according to claim 14, wherein the virtual object arranging unit arranges the virtual object at a position that does not overlap with the real object.
  16.  前記仮想物体配置部は、前記物体位置情報とともに取得される、前記実物体に関する付加情報を、前記仮想物体として配置する
     請求項14に記載の情報処理装置。
    The information processing device according to claim 14, wherein the virtual object arranging unit arranges additional information about the real object acquired together with the object position information as the virtual object.
  17.  前記自己位置情報に基づいて、前記仮想物体をレンダリングする描画部と、
     レンダリングされた前記仮想物体を表示する表示部をさらに備える
     請求項14に記載の情報処理装置。
    A drawing unit that renders the virtual object based on the self-position information,
    The information processing apparatus according to claim 14, further comprising a display unit for displaying the rendered virtual object.
  18.  前記自己位置情報と前記実物体までの距離に基づいて、前記三次元座標系における前記実物体の位置を推定する実物体位置推定部をさらに備え、
     前記仮想物体配置部は、前記実物体位置推定部が前記実物体の位置を推定可能な場合、推定された前記実物体の位置に基づいて、前記仮想物体を配置する
     請求項14に記載の情報処理装置。
    Further provided with a real object position estimation unit that estimates the position of the real object in the three-dimensional coordinate system based on the self-position information and the distance to the real object.
The information processing apparatus according to claim 14, wherein the virtual object arranging unit arranges the virtual object based on the estimated position of the real object when the real object position estimation unit can estimate the position of the real object.
  19.  情報処理装置が、
     実空間に対応する三次元座標系における実物体の物体位置情報と自己位置情報に基づいて、自己位置での撮像画角よりも小さく且つ前記実物体の位置に対応する検出範囲を設定し、
     前記検出範囲において前記実物体を検出する
     情報処理方法。
    Information processing equipment
    Based on the object position information and self-position information of the real object in the three-dimensional coordinate system corresponding to the real space, a detection range smaller than the imaging angle of view at the self-position and corresponding to the position of the real object is set.
    An information processing method for detecting the real object in the detection range.
  20.  実空間に対応する三次元座標系における実物体の物体位置情報と自己位置情報に基づいて、自己位置での撮像画角よりも小さく且つ前記実物体の位置に対応する検出範囲を設定し、
     前記検出範囲において前記実物体を検出する
     処理を実行させるためのプログラムを記録した、コンピュータが読み取り可能な記録媒体。
    Based on the object position information and self-position information of the real object in the three-dimensional coordinate system corresponding to the real space, a detection range smaller than the imaging angle of view at the self-position and corresponding to the position of the real object is set.
    A computer-readable recording medium on which a program for executing a process for detecting the real object in the detection range is recorded.
PCT/JP2021/030110 2020-08-31 2021-08-18 Information processing device, information processing method, and recording medium WO2022044900A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020145344 2020-08-31
JP2020-145344 2020-08-31

Publications (1)

Publication Number Publication Date
WO2022044900A1 true WO2022044900A1 (en) 2022-03-03

Family

ID=80354261

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/030110 WO2022044900A1 (en) 2020-08-31 2021-08-18 Information processing device, information processing method, and recording medium

Country Status (1)

Country Link
WO (1) WO2022044900A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017131071A1 (en) * 2016-01-28 2017-08-03 日本電信電話株式会社 Virtual environment construction device, video presentation device, model learning device, optimum depth determination device, method therefor, and program
JP2019036346A (en) * 2017-08-14 2019-03-07 キヤノン株式会社 Image processing apparatus, image processing method, and program
WO2019219423A1 (en) * 2018-05-18 2019-11-21 Valeo Comfort And Driving Assistance Shared environment for vehicle occupant and remote user



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21861328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21861328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP