WO2015093130A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2015093130A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
posture
information
display
marker
Prior art date
Application number
PCT/JP2014/076620
Other languages
French (fr)
Japanese (ja)
Inventor
Yuya Hanai (花井 裕也)
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation (ソニー株式会社)
Publication of WO2015093130A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • This disclosure relates to an information processing apparatus, an information processing method, and a program.
  • As information processing apparatuses such as smartphones, PCs (Personal Computers), and game machines have become widespread and more capable, various UIs (User Interfaces) have been developed to enable more intuitive operation. One such UI is gesture input, realized by the user wearing or holding and moving an operation device.
  • To realize gesture input with an operation device, the processing device that accepts the gesture input needs a technique for estimating the position and angle of the operation device operated by the user.
  • For example, in the field of game controllers, Patent Document 1 below discloses a technique that enables gesture input according to the controller position by estimating that position from feature points provided on the controller and tracked by a statically installed camera.
  • According to the present disclosure, there is provided an information processing apparatus including: an estimation unit that estimates, based on a captured image captured by an imaging unit that images a real space, first posture information indicating the position and posture of an operation device having the imaging unit in an environment map representing the positions of objects existing in the real space; and a communication unit that transmits the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and posture of the operation device.
  • According to the present disclosure, there is also provided an information processing method executed by a processor of an information processing apparatus, the method including: estimating, based on a captured image captured by an imaging unit that images a real space, first posture information indicating the position and posture of an operation device having the imaging unit in an environment map representing the positions of objects existing in the real space; and transmitting the estimated first posture information to a processing device that performs processing according to the position and posture of the operation device.
  • Further, according to the present disclosure, there is provided a program for causing a computer to function as: an estimation unit that estimates, based on a captured image captured by an imaging unit that images a real space, first posture information indicating the position and posture of an operation device having the imaging unit in an environment map representing the positions of objects existing in the real space; and a communication unit that transmits the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and posture of the operation device.
  • FIG. 1 is an explanatory diagram for explaining an overview of a display system according to an embodiment.
  • the user holds a smartphone 1 as a display device and a smartphone 2 as an operation device.
  • the display system includes a display device 1 and an operation device 2, and the display device 1 performs a display 100 according to the position of the operation device 2.
  • Here, a technology called augmented reality (AR), which superimposes additional information on the real space and presents it to the user, will be described.
  • Information presented to the user in AR technology is visualized using various forms of virtual objects such as text, icons or animations.
  • the virtual object is arranged in the AR space according to the position of the associated real object.
  • the virtual object can perform an operation such as movement, collision, or deformation in the AR space.
  • the display device 1 arranges a virtual object in the AR space according to the position of the controller device 2 and displays an AR image representing an action such as movement, collision, or deformation.
  • the user is holding the display device 1 with his right hand and looking at the display unit 17, and the camera disposed on the back side of the display unit 17 is pointed at the desk.
  • The display device 1 superimposes an AR image, showing the virtual-object ball 8 being ejected from a position corresponding to the operation device 2, on a through image of the desk captured in real time by the camera.
  • the display device 1 can inject the ball 8 from a position corresponding to the operation device 2 by estimating the position of the operation device 2 in the through image.
  • Various methods are conceivable for the display device 1 to estimate the position of the operation device 2.
  • a comparative example in which the operation device 2 is imaged by a camera provided in the display device 1 and a relative position between the display device 1 and the operation device 2 is estimated by image recognition can be considered.
  • the controller device 2 when the controller device 2 is out of the angle of view of the camera, the controller device 2 is not captured in the captured image, so it is difficult to estimate the relative position. That is, in order for the display device 1 to succeed in estimating the relative position, there is a constraint that the operation device 2 exists within the imaging range of the camera provided in the display device 1.
  • The operation device 2 (information processing device) according to the present embodiment was created in view of the above circumstances.
  • the operation device 2 according to the present embodiment enables processing according to the position of the operation device 2 by the display device 1 without providing a restriction on the positional relationship between the display device 1 and the operation device 2.
  • the controller device 2 estimates the position and orientation (angle) of the controller device 2 in the world coordinate system, and transmits the estimation result to the display device 1.
  • the world coordinate system is a coordinate system set in the real space.
  • the display device 1 also estimates the position and orientation of the display device 1 in the same world coordinate system. Then, the display device 1 estimates the relative position between the display device 1 and the operation device 2 by integrating the estimation result received from the operation device 2 and its own estimation result.
  • Reference numeral 9 shown in FIG. 1 schematically shows the position in the through image of the operating device 2 estimated by the display device 1 by such integration processing.
  • the display device 1 superimposes an AR image indicating the ball 8 at a position corresponding to the position indicated by reference numeral 9.
  • the superimposed display of the AR image at the position corresponding to the controller device 2 in the through image is realized. Since the display device 1 does not need to track the operation device 2 by image recognition, the relative position can be estimated even when the operation device 2 is out of the angle of view of the camera. That is, the positional relationship between the display device 1 and the operation device 2 is not limited.
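  • To make the integration step concrete, the following is a minimal sketch, not taken from the patent text, assuming the two devices exchange their poses as 4x4 homogeneous transforms (each T_world_X maps points from device X's frame into world coordinates) and using NumPy. It shows how the display device could combine its own world-coordinate pose with the pose received from the operation device to obtain the relative position, and how that position maps to a point in the through image such as reference numeral 9.

```python
import numpy as np

def relative_pose(T_world_display: np.ndarray, T_world_operation: np.ndarray) -> np.ndarray:
    """Pose of the operation device expressed in the display device's camera frame."""
    # Both poses live in the same world coordinate system (shared environment map),
    # so the relative pose is inv(world<-display) composed with world<-operation.
    return np.linalg.inv(T_world_display) @ T_world_operation

def project_to_image(T_display_operation: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pixel position of the operation device in the through image (pinhole model)."""
    p_cam = T_display_operation[:3, 3]   # operation device origin in the display camera frame
    uv = K @ p_cam                       # K: assumed 3x3 intrinsic matrix of the display camera
    return uv[:2] / uv[2]                # where to anchor the AR image (cf. reference numeral 9)
```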
  • The display device 1 performs calculation of the position and orientation of virtual objects, display processing of AR images, recognition of gesture input according to the movement of the operation device 2, calculation of output based on the recognition result, and the like.
  • In the comparative example, the display device has to perform image recognition to estimate the position of the controller device, so the processing load is large.
  • In contrast, the display device 1 according to the present embodiment does not need to perform image recognition, so its processing load is reduced.
  • the display device 1 may be an HMD (Head Mounted Display), a digital camera, a digital video camera, a tablet terminal, a mobile phone terminal, or the like.
  • The HMD may superimpose an AR image on a through image captured by a camera, or may display an AR image on a display unit formed in a transparent or translucent through state.
  • FIG. 1 illustrates an example in which the operation device 2 according to an embodiment of the present disclosure is realized as a smartphone, but the technology according to the present disclosure is not limited thereto.
  • the controller device 2 may be a digital camera, a digital video camera, a tablet terminal, a mobile phone terminal, or the like.
  • the operation device 2 may be an operation-dedicated device having a rod shape, a spherical shape, or any other shape.
  • The processing device may perform processing for expressing an arbitrary interaction between the controller device 2 and the real world, such as vibration, sound reproduction, or light emission according to the position of the controller device 2.
  • FIG. 2 is a block diagram illustrating a configuration example of the display system according to the first embodiment.
  • the display system according to the present embodiment includes a display device 1-1 and an operation device 2-1, and shares the same environment map 3.
  • the controller device 2-1 includes an imaging unit 21, a sensor 22, a posture estimation unit 23, a posture information transmission unit 24, a user operation acquisition unit 25, and a control signal transmission unit 26.
  • The imaging unit 21 includes a lens system composed of an imaging lens, a diaphragm, a zoom lens, and a focus lens, a drive system that causes the lens system to perform focus and zoom operations, and a solid-state imaging device array that photoelectrically converts the imaging light obtained by the lens system to generate an imaging signal.
  • the solid-state imaging device array may be realized by, for example, a CCD (Charge Coupled Device) sensor array or a CMOS (Complementary Metal Oxide Semiconductor) sensor array.
  • the imaging unit 21 according to the present embodiment images a real space and outputs a captured image to the posture estimation unit 23.
  • the sensor 22 has a function of detecting the attitude of the controller device 2-1.
  • the sensor 22 is realized by an acceleration sensor, an angular velocity (gyro) sensor, or a geomagnetic sensor.
  • the sensor 22 outputs a sensing result indicating the detected gravity direction, inclination, or angle to the posture estimation unit 23.
  • the environment map 3 is information that represents a three-dimensional position of an object existing in the real space. More specifically, the environment map 3 is a set of data representing the position and orientation of one or more objects existing in the real space.
  • the environment map can include, for example, an object name corresponding to the object, a three-dimensional position of a feature point corresponding to the object, and polygon information constituting the shape of the object.
  • the environment map can be constructed, for example, by obtaining the three-dimensional position of each feature point from the position of the feature point on the imaging surface.
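  • As an illustration only (the field names below are assumptions, not definitions from the disclosure), the environment map described above could be represented by a data structure like the following, holding per-object names, 3D feature-point positions, and polygon information in one shared world coordinate system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class MapObject:
    name: str                              # e.g. "desk" (hypothetical value)
    feature_points: List[Point3D]          # 3D feature-point positions in world coordinates
    polygons: List[Tuple[int, int, int]]   # triangles indexing into feature_points, describing the shape

@dataclass
class EnvironmentMap:
    objects: List[MapObject] = field(default_factory=list)

    def all_feature_points(self) -> List[Point3D]:
        # Both the display device and the operation device match image features against this
        # shared point set, so their pose estimates share one world coordinate system.
        return [p for obj in self.objects for p in obj.feature_points]
```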
  • In FIG. 2, the environment map 3 is depicted as belonging to neither the display device 1-1 nor the operation device 2-1, but as accessible from both the display device 1-1 and the operation device 2-1. This indicates that the display device 1-1 and the operation device 2-1 share the same environment map 3. As a result, the display device 1-1 and the operation device 2-1 each estimate their position and angle in the same world coordinate system, which makes possible the posture information integration processing by the display device 1-1 described later.
  • the environment map 3 may be included in the display device 1-1, and the operation device 2-1 may access the environment map 3 through communication with the display device 1-1.
  • Alternatively, the environment map 3 may be included in the operation device 2-1, or in another information processing apparatus such as a home server connected to the display device 1-1 and the operation device 2-1, or a server on the cloud.
  • As long as the display device 1-1 and the operation device 2-1 can estimate positions and angles in the same world coordinate system, what is shared need not be the environment map 3.
  • the display device 1-1 and the operation device 2-1 may have a common so-called planar marker recognition dictionary as a starting point capable of defining a common coordinate system.
  • the posture estimation unit 23 has a function as an estimation unit that estimates posture information (first posture information) indicating the position and angle (posture) of the controller device 2-1 in the environment map 3 (world coordinate system). Specifically, the posture estimation unit 23 estimates the position and posture of the controller device 2-1 by estimating the position and posture of the imaging unit 21.
  • One technique for such estimation is SLAM (Simultaneous Localization And Mapping), which estimates the pose of the camera while simultaneously building a map of the surrounding environment.
  • the posture estimation unit 23 uses the SLAM technology to estimate the position and posture of the controller device 2-1 based on the matching result between the feature points in the environment map 3 and the feature points reflected in the captured image captured by the imaging unit 21.
  • Alternatively, the posture estimation unit 23 may estimate the position and posture of the imaging unit 21 in the environment map 3 by a marker-based posture estimation technique, or by a technique such as DTAM (Dense Tracking and Mapping in Real-Time) or KinectFusion.
  • the posture estimation unit 23 may correct the estimation result of the position and posture by the SLAM technique using the various sensing results output from the sensor 22.
  • the posture estimation unit 23 outputs the estimated posture information to the posture information transmission unit 24.
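  • The following is a hedged sketch of this estimation step: match feature points from the shared environment map against 2D features detected in the captured image, then solve a perspective-n-point problem for the camera pose. The use of OpenCV here is an assumption made for illustration; the disclosure does not prescribe a particular library.

```python
import numpy as np
import cv2

def estimate_device_pose(map_points_3d: np.ndarray, image_points_2d: np.ndarray, K: np.ndarray):
    """map_points_3d: Nx3 world coordinates of matched environment-map feature points.
       image_points_2d: Nx2 pixel coordinates of the same features in the captured image.
       K: 3x3 camera intrinsic matrix. Returns the device pose in the world frame, or None."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        map_points_3d.astype(np.float32), image_points_2d.astype(np.float32), K, None)
    if not ok:
        return None                       # too few identifiable feature points in view
    R, _ = cv2.Rodrigues(rvec)
    T_cam_world = np.eye(4)               # transform that maps world points into the camera frame
    T_cam_world[:3, :3], T_cam_world[:3, 3] = R, tvec.ravel()
    return np.linalg.inv(T_cam_world)     # pose of the imaging unit (device) in the world frame
```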
  • the posture information transmission unit 24 has a function of transmitting the posture information estimated by the posture estimation unit 23 to the display device 1-1. Thereby, the display device 1-1 can perform various processes according to the position and posture of the operation device 2-1.
  • the attitude information transmission unit 24 is a communication module for transmitting and receiving data to and from the display device 1-1 by wire / wireless.
  • The attitude information transmission unit 24 communicates wirelessly with the display device 1-1 either directly or via a network access point, using a method such as wireless LAN (Local Area Network), Wi-Fi (Wireless Fidelity, registered trademark), infrared communication, or Bluetooth (registered trademark).
  • the posture estimation unit 23 and the posture information transmission unit 24 are included in the controller device 2-1, but the technology according to the present disclosure is not limited to this.
  • the posture estimation unit 23 and the posture information transmission unit 24 may be included in other information processing devices connected to the display device 1-1 and the operation device 2-1.
  • the other information processing apparatus estimates the posture information of the controller device 2-1 based on the captured image and the sensing result received from the controller device 2-1, and the estimated posture information is displayed on the display device 1-1. You may send it.
  • the posture estimation unit 23 may be included in the display device 1-1.
  • the user operation acquisition unit 25 has a function of acquiring a user operation as control information for controlling the display device 1-1 according to the user operation.
  • the user operation acquisition unit 25 is realized by a button, a keyboard, a mouse, a trackball, a touch pad, a touch panel, or the like.
  • the user operation acquisition unit 25 outputs control information indicating the acquired user operation to the control signal transmission unit 26.
  • The control signal transmission unit 26 has a function of transmitting the control information output by the user operation acquisition unit 25 to the display device 1-1. Thereby, the display device 1-1 can perform processing according to the control information.
  • the control signal transmission unit 26 is a communication module for transmitting and receiving data to and from the display device 1-1 by wire / wireless.
  • The control signal transmission unit 26 communicates wirelessly with the display device 1-1 either directly or via a network access point, using wireless LAN, Wi-Fi (registered trademark), infrared communication, Bluetooth (registered trademark), or the like.
  • The operation device 2-1 may also have a control unit that functions as an arithmetic processing device and a control device and that controls the overall operation within the operation device 2-1 according to various programs.
  • The control unit is realized by, for example, a CPU (Central Processing Unit) or a microprocessor.
  • the control unit may include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
  • the display device 1-1 includes an imaging unit 11, a sensor 12, a posture estimation unit 13, a posture information reception unit 14, a control information reception unit 15, a control unit 16, and a display unit 17.
  • The imaging unit 11 includes a lens system composed of an imaging lens, a diaphragm, a zoom lens, and a focus lens, a drive system that causes the lens system to perform focus and zoom operations, and a solid-state imaging device array that photoelectrically converts the imaging light obtained by the lens system to generate an imaging signal.
  • the solid-state image sensor array may be realized by, for example, a CCD sensor array or a CMOS sensor array.
  • the imaging unit 11 according to the present embodiment images a real space and outputs a captured image to the posture estimation unit 13.
  • the sensor 12 has a function of detecting the attitude of the display device 1-1.
  • the sensor 12 is realized by an acceleration sensor, an angular velocity (gyro) sensor, or a geomagnetic sensor.
  • the sensor 12 outputs a sensing result indicating the detected gravity direction, inclination, or angle to the posture estimation unit 13.
  • The posture estimation unit 13 has a function as an estimation unit that estimates posture information (second posture information) indicating the position and angle (posture) of the display device 1-1 in the environment map 3 (world coordinate system). Specifically, the posture estimation unit 13 estimates the position and posture of the display device 1-1 by estimating the position and posture of the imaging unit 11. Similar to the posture estimation unit 23 described above, the posture estimation unit 13 uses SLAM technology to estimate the position and posture of the display device 1-1 based on the matching result between the feature points in the environment map 3 and the feature points appearing in the captured image captured by the imaging unit 11.
  • the posture estimation unit 13 may estimate the position and posture of the imaging unit 11 in the environment map 3 by a posture estimation technology using a marker, or a technology such as DTAM or Kinect Fusion.
  • the posture estimation unit 13 may correct the posture estimation result by the SLAM technology using various sensing results output by the sensor 12.
  • the posture estimation unit 13 outputs the estimated posture information to the control unit 16.
  • the posture estimation unit 13 is included in the display device 1-1, but the technology according to the present disclosure is not limited to this.
  • the posture estimation unit 13 may be included in another information processing apparatus connected to the display device 1-1 and the operation device 2-1.
  • In this case, the other information processing device may estimate the posture information of the display device 1-1 based on the captured image and sensing result received from the display device 1-1, and transmit the estimated posture information to the display device 1-1.
  • Alternatively, the posture estimation unit 13 may be included in the operation device 2-1, estimate the posture information of the display device 1-1 based on the captured image and sensing result received from the display device 1-1, and return the estimated posture information to the display device 1-1.
  • the posture information receiving unit 14 has a function of receiving the posture information of the controller device 2-1 transmitted by the posture information transmitting unit 24.
  • the posture information receiving unit 14 is a communication module for transmitting / receiving data to / from the controller device 2-1 by wire / wireless.
  • The attitude information receiving unit 14 communicates wirelessly with the controller device 2-1 either directly or via a network access point, using a method such as wireless LAN, Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark).
  • the posture information receiving unit 14 outputs the received posture information to the control unit 16.
  • The control information receiving unit 15 has a function of receiving the control information transmitted by the control signal transmission unit 26.
  • the control information receiving unit 15 is a communication module for transmitting / receiving data to / from the controller device 2-1 by wire / wireless.
  • The control information receiving unit 15 communicates wirelessly with the controller device 2-1 either directly or via a network access point, using wireless LAN, Wi-Fi (registered trademark), infrared communication, Bluetooth (registered trademark), or the like.
  • the control information receiving unit 15 outputs the received control information to the control unit 16.
  • The control unit 16 functions as an arithmetic processing device and a control device, and controls the overall operation in the display device 1-1 according to various programs.
  • the control unit 16 is realized by, for example, a CPU or a microprocessor.
  • the control unit 16 may include a ROM that stores programs to be used, calculation parameters, and the like, and a RAM that temporarily stores parameters that change as appropriate.
  • The control unit 16 integrates the posture information of the display device 1-1 output from the posture estimation unit 13 and the posture information of the operation device 2-1 received by the posture information receiving unit 14, and calculates the relative position between the display device 1-1 and the operation device 2-1. Since both sets of posture information are position information in the world coordinate system defined by the same environment map 3, the relative position can be calculated by integrating them.
  • Here, the relative position means the position of the operating device 2-1 in the coordinate system of the display device 1-1.
  • The relative position may also be regarded as the position of the operating device 2-1 in the through image captured by the display device 1-1. This relative position is used in determining how to draw the virtual object.
  • display control of the virtual object by the control unit 16 will be described.
  • the control unit 16 acquires the position of the operation device 2-1 in the world coordinate system from the posture information of the operation device 2-1 received by the posture information reception unit. Next, the control unit 16 determines the position of the virtual object in the world coordinate system. Next, the control unit 16 determines the positions of the operation device 2-1 and the virtual object in the coordinate system of the display device 1-1. Then, the control unit 16 determines a virtual object drawing method according to the positional relationship between the controller device 2-1 and the virtual object in the coordinate system of the display device 1-1. As a drawing method, for example, partial occlusion is expressed according to the context of the operation device 2-1 and the virtual object as viewed from the display device 1-1.
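  • As a minimal sketch of the occlusion decision described above (assuming 4x4 homogeneous transforms and a simple depth comparison, neither of which the disclosure spells out), the control unit could compare the depths of the operation device and the virtual object in the display device's camera frame and render the farther one first:

```python
import numpy as np

def draw_order(T_display_operation: np.ndarray, p_world_virtual: np.ndarray,
               T_world_display: np.ndarray) -> list:
    """Return the rendering order so the nearer element partially occludes the farther one."""
    # Bring the virtual object's world position into the display device's camera frame.
    p_display_virtual = np.linalg.inv(T_world_display) @ np.append(p_world_virtual, 1.0)
    depth_operation = T_display_operation[2, 3]   # z of the operation device in the camera frame
    depth_virtual = p_display_virtual[2]          # z of the virtual object in the camera frame
    # Draw far-to-near: the element drawn last ends up on top in the through image.
    return ["virtual object", "operation device"] if depth_virtual > depth_operation \
        else ["operation device", "virtual object"]
```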
  • Because the control unit 16 can calculate the relative position, such context-dependent expression is possible. Since the display device 1 does not need to track the operation device 2 by image recognition, the relative position can be estimated even when the operation device 2 is out of the angle of view of the camera. Further, since the control unit 16 does not need to perform posture estimation of the operation device 2-1 by the image recognition used in the technique described in Patent Document 1, the processing load on the display device 1 side is reduced.
  • The control unit 16 also performs various processes according to the control information received by the control information receiving unit 15. Specifically, the control unit 16 controls the display unit 17 so as to display a visual effect corresponding to the user operation indicated by the control information, in addition to reflecting the calculated relative position between the display device 1-1 and the operation device 2-1. For example, when a predetermined user operation is acquired by the user operation acquisition unit 25, the control unit 16 displays an AR image showing the ball 8 illustrated in FIG. 1 being ejected from the operation device 2-1.
  • the control unit 16 may display a virtual operation assisting device as an example of a virtual object reflecting the relative position and the control information.
  • the virtual operation assisting device is a virtual object indicating an action point defined by a relative position with respect to the operation device 2-1.
  • With the virtual operation assisting device, the action point of the operation device 2-1 can be placed at a position away from the main body of the operation device 2-1, and the operation content can be expressed visually in a way that is easy for the user to understand.
  • a display example of the virtual operation assistance device will be described with reference to FIG.
  • FIG. 3 is a diagram showing an example of AR display by the display device 1-1 according to the first embodiment.
  • FIG. 3 shows an example in which the display device 1-1 displays the AR image 8-1 showing the virtual object of the light bulb in front of the operation device 2-1 in the through image.
  • the light bulb 8-1 moves along with the operating device 2-1, as if it were integrated with the operating device 2-1. This is realized by the control unit 16 estimating the position of the operation device 2-1 in the through image and displaying the AR image 8-1 fixedly in front of the operation device 2-1.
  • the user can use the light bulb 8-1 as a part of the operation device 2-1.
  • the display device 1-1 causes the light bulb 8-1 to function as an action point, and displays various AR images with the light bulb 8-1 as a reference point superimposed on the through image.
  • the control unit 16 may switch the color of the light bulb 8-1.
  • the control unit 16 may display the neon sign 8-2 at the three-dimensional position through which the light bulb 8-1 has passed.
  • the control unit 16 may move the bulb 8-1 away from or close to the operation device 2-1.
  • Note that in FIG. 3, the display of reference numeral 9 of FIG. 1, which indicates the result of the posture estimation of the controller device 2-1 by the display device 1-1, is omitted.
  • In addition to the light bulb 8-1 shown in FIG. 3, various virtual objects are conceivable as the virtual operation assisting device. For example, scissors, a cutter, a pen, a ruler, or the like can serve as virtual operation assisting devices and can act on virtual objects and on real objects in the real space. These multiple types of virtual operation assisting devices may be switched, for example, by a swipe operation on the user operation acquisition unit 25 formed as a touch panel. Further, for example, the user may wear the display device 1-1 formed as an HMD, hold the operation device 2-1 with both hands, and operate the virtual assisting device with both hands.
  • The control unit 16 may also express various visual effects other than AR image display, such as displaying text or images in accordance with the position of the operation device 2-1.
  • The control unit 16 may further control the display device 1-1 or the controller device 2-1 so as to perform processing that expresses an arbitrary interaction between the controller device 2-1 and the real world, such as vibration, sound reproduction, or light emission according to the position of the controller device 2-1.
  • the attitude information receiving unit 14, the control information receiving unit 15, and the control unit 16 are included in the display device 1-1.
  • the technology according to the present disclosure is not limited to this.
  • the posture information receiving unit 14, the control information receiving unit 15, and the control unit 16 may be included in another information processing apparatus connected to the display device 1-1 and the operation device 2-1.
  • In this case, the other information processing device may generate display control information indicating the content of the AR image based on the posture information and control information received from the display device 1-1 and the operation device 2-1, and may transmit the display control information to the display device 1-1 to control display by the display unit 17.
  • the posture information receiving unit 14, the control information receiving unit 15, and the control unit 16 may be included in the controller device 2-1.
  • Based on control by the control unit 16, the display unit 17 displays a through image captured in real time by the imaging unit 11, superimposes various AR images on it, and displays (reproduces) stored image data (still image data / moving image data).
  • the display unit 17 is realized by, for example, an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • When the processing apparatus according to the present embodiment is realized as an HMD, the display unit 17 may be formed in a transparent or translucent through state and may display an AR image over the real space seen through the display unit 17 in the through state.
  • FIG. 4 is a sequence diagram illustrating an example of the flow of the AR display process executed in the display system according to the first embodiment.
  • the display device 1-1 and the operation device 2-1 are involved in the sequence shown in FIG.
  • The sequence shown in FIG. 4 is an example in which each process is executed by either the display device 1-1 or the operation device 2-1, but each process may be executed by either device, or may be executed centrally by another information processing apparatus such as a server.
  • First, in step S102-1, the display device 1-1 performs posture estimation.
  • Specifically, the posture estimation unit 13 uses SLAM technology to estimate the position and posture of the display device 1-1 based on the matching result between the feature points in the environment map 3 and the feature points appearing in the captured image captured by the imaging unit 11.
  • the posture estimation unit 13 may correct the posture estimation result by the SLAM technique using various sensing results output by the sensor 12.
  • In step S102-2, the controller device 2-1 also performs posture estimation.
  • the posture estimation unit 23 estimates the position and posture of the controller device 2-1 using the same environment map 3 as the posture estimation unit 13.
  • Next, in step S104, the controller device 2-1 transmits the estimated posture information to the display device 1-1.
  • the posture information transmission unit 24 transmits the posture information estimated in step S102-2 by the posture estimation unit 23 to the display device 1-1.
  • Next, in step S106, the display device 1-1 integrates the posture information of the display device 1-1 and the posture information of the operation device 2-1.
  • Specifically, the control unit 16 integrates the posture information of the display device 1-1 output by the posture estimation unit 13 and the posture information of the operation device 2-1 received by the posture information receiving unit 14, and calculates the relative position between the display device 1-1 and the operation device 2-1.
  • Next, in step S108, the controller device 2-1 acquires a user operation.
  • the user operation acquisition unit 25 acquires a user operation such as a touch on the touch panel or a button press by the user as control information for controlling the display device 1-1 according to the user operation.
  • Next, in step S110, the controller device 2-1 transmits the acquired control information to the display device 1-1.
  • the control signal transmission unit 26 transmits control information indicating the user operation acquired in step S108 to the display device 1-1.
  • In step S112, the display device 1-1 calculates visual information.
  • Specifically, the control unit 16 calculates visual information corresponding to the user operation indicated by the control information, based on the relative position between the display device 1-1 and the operation device 2-1 calculated in step S106.
  • More specifically, the control unit 16 calculates the position of the operating device 2-1 in the through image captured by the imaging unit 11, and generates display control information for displaying, at a position corresponding to the operating device 2-1, an AR image showing a virtual object that reflects the control information.
  • Finally, in step S114, the display device 1-1 performs AR display. Specifically, the display unit 17 performs display based on the display control information generated by the control unit 16 in step S112.
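  • The whole sequence can be restated, purely as a hypothetical summary (the function names below are placeholders, not APIs defined in the disclosure), as one cycle of the AR display processing of FIG. 4:

```python
def ar_display_cycle(display, operation):
    pose_display = display.estimate_pose()                        # S102-1: SLAM on the display device
    pose_operation = operation.estimate_pose()                    # S102-2: SLAM on the operation device
    operation.send_pose(display, pose_operation)                  # S104: transmit posture information
    relative = display.integrate(pose_display, pose_operation)    # S106: relative position
    control = operation.get_user_operation()                      # S108: e.g. touch or button press
    operation.send_control(display, control)                      # S110: transmit control information
    visuals = display.compute_visuals(relative, control)          # S112: AR content at the estimated position
    display.render(visuals)                                       # S114: superimpose on the through image
```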
  • <Second Embodiment> This embodiment avoids the situation where posture estimation of the controller device 2 becomes difficult, by determining the approach between the controller device 2 and a real object and notifying the user.
  • Various configurations of the present embodiment are conceivable. As an example, a configuration example of a display system according to the present embodiment will be described with reference to FIGS.
  • FIG. 5 is a block diagram illustrating a configuration example 1 of the display system according to the second embodiment.
  • the display system according to the present embodiment includes a display device 1-2 and an operation device 2-2.
  • The three-dimensional data 4 is data composed of position information of the vertices of objects in the real space, the line segments connecting those vertices, and the surfaces enclosed by the line segments, and is information expressing the three-dimensional shape (surface) of the real space.
  • the position information included in the three-dimensional data 4 is information indicating a position in the world coordinate system in the environment map 3.
  • the three-dimensional data 4 may be dynamically generated using the captured image captured by the imaging unit 21 and the posture information estimated by the posture estimation unit 23, or may be generated in advance.
  • the three-dimensional data is realized as CAD (Computer Assisted Drafting) data, for example.
  • In FIG. 5, the three-dimensional data 4 is depicted as belonging to neither the display device 1-2 nor the operation device 2-2, and as being accessible from the operation device 2-2.
  • the three-dimensional data 4 may be included in any device.
  • For example, the three-dimensional data 4 may be included in the display device 1-2 or the operation device 2-2, or may be included in another information processing apparatus such as a home server connected to the operation device 2-2 via a network or a server on the cloud.
  • the approach determination unit 27 has a function of determining whether or not the distance between the imaging surface of the imaging unit 21 and the real object is equal to or less than a threshold value.
  • As described above, the posture estimation unit 23 uses SLAM technology to estimate the position and posture of the controller device 2-2 based on the matching result between the feature points in the environment map 3 and the feature points appearing in the captured image captured by the imaging unit 21.
  • For the posture estimation unit 23 to succeed in estimating posture information, at least one feature point must be included in the imaging range, and the feature points must appear in the captured image clearly enough to be identified. Therefore, the approach determination unit 27 determines whether the imaging surface of the imaging unit 21 and a real object are excessively close.
  • When they are excessively close, the posture estimation unit 23 may fail to satisfy the conditions for successfully estimating posture information, so a notification is made by the approach notification unit 28 described later.
  • the approach determining unit 27 may perform notification by the approach notifying unit 28 when no feature point is included in the imaging range, such as when the imaging surface of the imaging unit 21 faces the sky.
  • The approach determination unit 27 calculates the distance between the imaging surface of the imaging unit 21 and the real object based on the attitude information of the controller device 2-2 estimated by the attitude estimation unit 23 and the three-dimensional data 4. Specifically, the approach determination unit 27 takes the correspondence between the three-dimensional shape indicated by the three-dimensional data 4 and the posture information, and converts the position and angle of the controller device 2-2 in the world coordinate system indicated by the posture information into the position and angle of the controller device 2-2 with respect to the three-dimensional shape. Then, the approach determination unit 27 calculates the distance between the surface of the real object indicated by the three-dimensional shape and the imaging surface of the imaging unit 21 included in the controller device 2-2.
  • Alternatively, the approach determination unit 27 may simply generate a mesh by applying Delaunay triangulation to the SLAM feature point information, regard this mesh as a pseudo three-dimensional shape, and perform the approach determination against it, as sketched below. In this case, the approach determination unit 27 can perform the approach determination without using the three-dimensional data 4.
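  • The sketch below illustrates that simplified approach determination under stated assumptions (SciPy for the Delaunay triangulation, distances in meters, and the nearest triangle centroid as a stand-in for the exact point-to-surface distance); none of these choices are specified by the disclosure.

```python
import numpy as np
from scipy.spatial import Delaunay

def imaging_surface_too_close(camera_position: np.ndarray, feature_points: np.ndarray,
                              threshold: float = 0.1) -> bool:
    """camera_position: (3,) world position of the imaging surface.
       feature_points: Nx3 SLAM feature points in world coordinates.
       Returns True when the approach notification unit should notify the user."""
    # Mesh the feature points (triangulated over their horizontal coordinates) and treat
    # the mesh as a pseudo three-dimensional shape of the nearby real objects.
    mesh = Delaunay(feature_points[:, :2])
    triangle_centroids = feature_points[mesh.simplices].mean(axis=1)   # one 3D point per triangle
    nearest = np.linalg.norm(triangle_centroids - camera_position, axis=1).min()
    return nearest <= threshold
```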
  • the approach determination unit 27 is included in the operation device 2-2, but the technology according to the present disclosure is not limited to this.
  • the approach determination unit 27 may be included in another information processing apparatus connected to the controller device 2-2.
  • In this case, the other information processing device may calculate the distance between the imaging surface of the imaging unit 21 and the real object based on the three-dimensional data 4 and the posture information received from the operation device 2-2, make the approach determination, and control the notification by the approach notification unit 28.
  • the approach determination unit 27 may be included in the display device 1-2.
  • the approach notification unit 28 has a function as a notification unit that performs notification indicating that the distance between the imaging surface of the imaging unit 21 and the real object is equal to or less than a threshold based on the determination result by the approach determination unit 27.
  • the approach notification unit 28 is configured by a display unit, a vibration motor, a speaker, an LED, or the like, and may perform notification by screen display, vibration, sound reproduction, light emission, or the like.
  • the notification by the approach notification unit 28 prompts the user to move the operating device 2-2 away from the real object, and can prevent a situation in which it is difficult to estimate the posture of the operating device 2-2.
  • The control unit 16 may also use the virtual operation assisting device to avoid a situation in which posture estimation by the operation device 2-2 becomes difficult.
  • the control unit 16 displays the virtual operation assistance device at a position where the distance between the imaging surface of the imaging unit 21 and the real object can be equal to or less than a threshold value.
  • the control unit 16 can prompt the user to prevent the virtual operation assisting device from colliding with the real object, and can avoid a situation where it is difficult to estimate the posture of the operation device 2-2.
  • the display system may perform notification by warning display, vibration, sound reproduction, light emission, or the like.
  • a display example of such a virtual operation assistance device will be described with reference to FIG.
  • FIG. 6 is a diagram showing an AR display example by the display device 1-2 according to the second embodiment.
  • FIG. 6 shows an example in which the display device 1-2 displays the virtual operation assisting device 8-3. Similar to the light bulb 8-1 described above with reference to FIG. 3, the virtual operation assisting device 8-3 moves together with the operation device 2-2 as if it were integrated with the operation device 2-2. With such a virtual operation assisting device 8-3, the control unit 16 can keep the distance between the imaging surface of the imaging unit 21 and the real object from becoming equal to or smaller than the threshold, that is, it can avoid in advance the situation in which posture estimation of the operating device 2-2 becomes difficult.
  • The control unit 16 also allows the virtual operation assisting device to move to an arbitrary position in accordance with a user operation, thereby enabling an action at a position close to the real object.
  • For example, even at a place where posture estimation by the operation device 2-2 is difficult because the distance to the real object is too short, the control unit 16 can bring the virtual operation assisting device close instead of the main body of the operation device 2-2.
  • For example, the control unit 16 moves the virtual operation assisting device up, down, left, or right, or toward or away from the operation device 2-2, in accordance with a swipe operation on the user operation acquisition unit 25 formed as a touch panel. Thereby, the action point is guided to a position corresponding to the user operation.
  • As a result, the control unit 16 can express an interaction at a position close to a real object without the approach notification unit 28 having to notify the user.
  • In addition, the posture estimation unit 23 may improve the robustness of posture estimation by SLAM technology by selecting a captured image whose imaging direction makes posture estimation easy and estimating the posture information of the controller device 2-2 from it.
  • the imaging unit 21 may be formed as two wide-angle cameras provided on the back and front surfaces of the casing of the controller device 2-2, or a 360-degree omnidirectional camera.
  • the posture estimation unit 23 can perform posture estimation by selectively using a region where feature points are easily recognized from the captured image.
  • Alternatively, the imaging unit 21 may be provided on, for example, a pan head that moves with six degrees of freedom, and the posture estimation unit 23 may control the pan head so that the imaging surface is continuously directed in the direction in which posture estimation can be performed most stably.
  • the depth information acquisition unit 29 has a function of acquiring depth information in the imaging direction of the imaging unit 21.
  • the depth information acquisition unit 29 may be realized by a stereo camera or an infrared camera.
  • the depth information acquisition unit 29 outputs the acquired depth information to the approach determination unit 27.
  • the posture information transmission unit 241 has the same function as the posture information transmission unit 24 described above with reference to FIG.
  • the posture information transmission unit 241 according to the present embodiment transmits the posture information of the operation device 2-5 estimated by the posture estimation unit 23 to the display device 1-5.
  • In the above description, the position correction unit 143 is included in the display device 1-5 and the marker generation unit 244 is included in the operation device 2-5, but the technology according to the present disclosure is not limited thereto.
  • For example, the position correction unit 143 and the marker generation unit 244 may be included in another information processing apparatus connected to the display device 1-5 and the operation device 2-5. In this case, the other information processing apparatus may perform marker recognition and relative position calculation to control display on the display unit 17, and may generate a correction matrix and control marker display by the marker display unit 245.
  • the position correction unit 143 and the marker generation unit 244 may be included in either the display device 1-5 or the operation device 2-5.
  • the display device 1 and the operation device 2 have many of the same or similar functions.
  • posture estimation unit 13 and posture estimation unit 23 realize the same or similar functions.
  • posture information transmission unit 141 and posture information transmission unit 241, posture information reception unit 142, and posture information reception unit 242 realize the same or similar functions.
  • Since the display device 1 and the operation device 2 are programmatically interchangeable, development efficiency when developing the display system is good.

Abstract

[Problem] To propose an information processing device, an information processing method, and a program that make it possible for a processing device to perform processing in accordance with the position of a manipulation device without putting restrictions on the positional relationship between the processing device and the manipulation device. [Solution] An information processing device that has the following: an inference unit that, on the basis of a taken image taken by an imaging unit that images real space, uses inference to generate first orientation information indicating the position and orientation of a manipulation device containing the imaging unit in an environment map that represents the position of an object in real space; and a communication unit that transmits the first orientation information generated by the inference unit to a processing device that performs processing in accordance with the position and orientation of the manipulation device.

Description

Information processing apparatus, information processing method, and program
The present disclosure relates to an information processing apparatus, an information processing method, and a program.
As information processing apparatuses such as smartphones, PCs (Personal Computers), and game machines have become widespread and more capable, various UIs (User Interfaces) have been developed to enable more intuitive operation of such apparatuses. One such UI is gesture input, realized by the user wearing or holding and moving an operation device. To realize gesture input with an operation device, the processing device that accepts the gesture input needs a technique for estimating the position and angle of the operation device operated by the user.
For example, in the field of game controllers, Patent Document 1 below discloses a technique that enables gesture input according to the controller position by estimating that position from feature points provided on the controller and tracked by a statically installed camera.
JP 2012-135642 A
However, with the technique described in Patent Document 1, when the operation device moves out of the angle of view of the installed camera, the feature points can no longer be tracked and it becomes difficult to estimate the position of the operation device. That is, there is a constraint on the positional relationship between the processing device (camera) and the operation device. The present disclosure therefore proposes a new and improved information processing apparatus, information processing method, and program that enable processing according to the position of the operation device by the processing device without constraining the positional relationship between the processing device and the operation device.
According to the present disclosure, there is provided an information processing apparatus including: an estimation unit that estimates, based on a captured image captured by an imaging unit that images a real space, first posture information indicating the position and posture of an operation device having the imaging unit in an environment map representing the positions of objects existing in the real space; and a communication unit that transmits the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and posture of the operation device.
According to the present disclosure, there is also provided an information processing method executed by a processor of an information processing apparatus, the method including: estimating, based on a captured image captured by an imaging unit that images a real space, first posture information indicating the position and posture of an operation device having the imaging unit in an environment map representing the positions of objects existing in the real space; and transmitting the estimated first posture information to a processing device that performs processing according to the position and posture of the operation device.
Further, according to the present disclosure, there is provided a program for causing a computer to function as: an estimation unit that estimates, based on a captured image captured by an imaging unit that images a real space, first posture information indicating the position and posture of an operation device having the imaging unit in an environment map representing the positions of objects existing in the real space; and a communication unit that transmits the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and posture of the operation device.
As described above, according to the present disclosure, processing according to the position of the operation device by the processing device becomes possible without constraining the positional relationship between the processing device and the operation device.
Note that the above effects are not necessarily limiting; together with or in place of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be achieved.
Brief description of the drawings:
FIG. 1 is an explanatory diagram for explaining an overview of a display system according to an embodiment.
FIG. 2 is a block diagram showing a configuration example of the display system according to the first embodiment.
FIG. 3 is a diagram showing an example of AR display by the display device according to the first embodiment.
FIG. 4 is a sequence diagram showing an example of the flow of AR display processing executed in the display system according to the first embodiment.
FIG. 5 is a block diagram showing configuration example 1 of the display system according to the second embodiment.
FIG. 6 is a diagram showing an example of AR display by the display device according to the second embodiment.
FIG. 7 is a block diagram showing configuration example 2 of the display system according to the second embodiment.
FIG. 8 is a block diagram showing configuration example 3 of the display system according to the second embodiment.
FIG. 9 is an explanatory diagram for explaining an overview of a display system according to the third embodiment.
FIG. 10 is a block diagram showing a configuration example of the display system according to the third embodiment.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
The description will be given in the following order.
1. Overview
2. First embodiment
 2-1. Configuration example of operation device
 2-2. Configuration example of display device
 2-3. Operation processing
3. Second embodiment
 3-1. Configuration example 1
 3-2. Configuration example 2
 3-3. Configuration example 3
4. Third embodiment
 4-1. Overview
 4-2. Configuration example
5. Summary
<1. Overview>
First, an overview of a display system according to an embodiment of the present disclosure will be described with reference to FIG. 1.
FIG. 1 is an explanatory diagram for explaining an overview of a display system according to an embodiment. As shown in FIG. 1, the user holds a smartphone 1 as a display device and a smartphone 2 as an operation device. The display system includes the display device 1 and the operation device 2, and the display device 1 performs display 100 according to the position of the operation device 2.
Here, a technology called augmented reality (AR), which superimposes additional information on the real space and presents it to the user, will be described. Information presented to the user by AR technology is visualized using virtual objects in various forms, such as text, icons, or animations. A virtual object is arranged in the AR space according to the position of the real object with which it is associated. A virtual object can also perform actions such as moving, colliding, or deforming in the AR space. The display device 1 according to the present embodiment arranges a virtual object in the AR space according to the position of the operation device 2 and displays an AR image expressing an action such as movement, collision, or deformation.
As shown in FIG. 1, the user holds the display device 1 in the right hand and looks at the display unit 17, and the camera arranged on the rear side of the display unit 17 is pointed at a desk. The display device 1 superimposes, on a through image of the desk captured in real time by the camera, an AR image showing a ball 8, which is a virtual object, being launched from a position corresponding to the operation device 2. By estimating the position of the operation device 2 in the through image, the display device 1 can launch the ball 8 from the position corresponding to the operation device 2.
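As an illustration of how a virtual object anchored at a world-coordinate position can be drawn at the matching pixel of the through image, the following minimal sketch projects a 3D point into the camera image using the camera pose and intrinsics. The pinhole model, variable names, and numeric values are assumptions for illustration only, not part of the disclosed system.

```python
import numpy as np

def project_point(p_world, R_wc, t_wc, K):
    """Project a 3D world point into the image of a camera whose pose
    (rotation R_wc and translation t_wc, mapping world to camera coordinates)
    and intrinsic matrix K are known."""
    p_cam = R_wc @ np.asarray(p_world) + t_wc   # world -> camera coordinates
    if p_cam[2] <= 0:
        return None                             # behind the camera, nothing to draw
    uv = K @ (p_cam / p_cam[2])                 # pinhole projection
    return uv[:2]                               # pixel coordinates (u, v)

# Hypothetical values: camera at the world origin looking along +Z,
# virtual ball 0.5 m in front of it.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_point([0.0, 0.0, 0.5], np.eye(3), np.zeros(3), K))
```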
Various methods are conceivable for the display device 1 to estimate the position of the operation device 2. As one example, a comparative example can be considered in which the operation device 2 is imaged by a camera provided in the display device 1 and the relative position between the display device 1 and the operation device 2 is estimated by image recognition. In this comparative example, however, when the operation device 2 moves out of the camera's angle of view, it no longer appears in the captured image, and estimating the relative position becomes difficult. In other words, for the display device 1 to succeed in estimating the relative position, there is a constraint that the operation device 2 must be within the imaging range of the camera provided in the display device 1.
In view of the above circumstances, the operation device 2 (information processing device) according to the present embodiment has been created. The operation device 2 according to the present embodiment enables the display device 1 to perform processing according to the position of the operation device 2 without imposing constraints on the positional relationship between the display device 1 and the operation device 2.
Specifically, the operation device 2 according to the present embodiment estimates its own position and posture (angle) in a world coordinate system and transmits the estimation result to the display device 1. The world coordinate system is a coordinate system set in the real space. Similarly, the display device 1 estimates its own position and posture in the same world coordinate system. The display device 1 then estimates the relative position between the display device 1 and the operation device 2 by integrating the estimation result received from the operation device 2 with its own estimation result. Reference numeral 9 in FIG. 1 schematically indicates the position of the operation device 2 in the through image that the display device 1 has estimated by this integration. The display device 1 superimposes the AR image showing the ball 8 at a position corresponding to the position indicated by reference numeral 9. As a result, superimposed display of the AR image at the position corresponding to the operation device 2 in the through image is achieved. Since the display device 1 does not need to track the operation device 2 by image recognition, it can estimate the relative position even when the operation device 2 is outside the camera's angle of view. In other words, there is no constraint on the positional relationship between the display device 1 and the operation device 2.
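A minimal sketch of this integration step, assuming each pose is represented as a 4x4 rigid transform from device coordinates to the shared world coordinates (the representation is an assumption for illustration): the controller pose expressed in the display device's coordinate system is obtained by composing the inverse of the display pose with the controller pose.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 rigid transform (device coordinates -> world coordinates)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def controller_in_display_frame(T_world_display, T_world_controller):
    """Integrate the two world-frame poses: the controller pose expressed in
    the display device's own coordinate system."""
    return np.linalg.inv(T_world_display) @ T_world_controller

# Hypothetical poses: display device at the world origin, operation device 0.3 m to its right.
T_display = pose_matrix(np.eye(3), np.zeros(3))
T_controller = pose_matrix(np.eye(3), np.array([0.3, 0.0, 0.0]))
T_relative = controller_in_display_frame(T_display, T_controller)
print(T_relative[:3, 3])   # controller position as seen from the display device
```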
In general, the display device 1 performs calculation of the positions and postures of virtual objects, AR image display processing, recognition of gesture input according to the movement of the operation device 2, calculation of output based on the recognition result, and so on. On top of this, according to the comparative example described above, the display device would also have to perform image recognition to estimate the position of the operation device, and the processing load would become considerable. In contrast, according to the present embodiment, the display device 1 does not need to perform image recognition, so the processing load is reduced.
Although FIG. 1 shows an example in which the display device 1 according to an embodiment of the present disclosure is realized as a smartphone, the technology according to the present disclosure is not limited to this. For example, the display device 1 may be an HMD (Head Mounted Display), a digital camera, a digital video camera, a tablet terminal, a mobile phone terminal, or the like. When the display device 1 is realized as an HMD, the HMD may display an AR image superimposed on a through image captured by a camera, or may display an AR image on a display unit formed in a transparent or translucent through state.
Similarly, although FIG. 1 shows an example in which the operation device 2 according to an embodiment of the present disclosure is realized as a smartphone, the technology according to the present disclosure is not limited to this. For example, the operation device 2 may be a digital camera, a digital video camera, a tablet terminal, a mobile phone terminal, or the like. The operation device 2 may also be a dedicated operation device having a rod shape, a spherical shape, or any other shape.

In addition, although FIG. 1 shows an example in which the processing device according to an embodiment of the present disclosure is realized as a display device, the technology according to the present disclosure is not limited to this. For example, besides displaying an AR image, the processing device may perform processing that expresses an arbitrary interaction between the operation device 2 and the real world, such as vibration, sound reproduction, or light emission according to the position of the operation device 2.
The overview of the display system according to the present embodiment has been described above. Each embodiment will be described in detail below.
<2. First Embodiment>
First, the configuration of the display system according to the present embodiment will be described with reference to FIG. 2. FIG. 2 is a block diagram showing a configuration example of the display system according to the first embodiment. As shown in FIG. 2, the display system according to the present embodiment includes a display device 1-1 and an operation device 2-1, which share the same environment map 3.

[2-1. Configuration example of the operation device]
As shown in FIG. 2, the operation device 2-1 includes an imaging unit 21, a sensor 22, a posture estimation unit 23, a posture information transmission unit 24, a user operation acquisition unit 25, and a control signal transmission unit 26.
(1) Imaging unit 21
The imaging unit 21 includes a lens system composed of an imaging lens, a diaphragm, a zoom lens, a focus lens, and the like, a drive system that causes the lens system to perform focus and zoom operations, and a solid-state image sensor array that photoelectrically converts the imaging light obtained through the lens system to generate an imaging signal. The solid-state image sensor array may be realized by, for example, a CCD (Charge Coupled Device) sensor array or a CMOS (Complementary Metal Oxide Semiconductor) sensor array. The imaging unit 21 according to the present embodiment images the real space and outputs the captured image to the posture estimation unit 23.

(2) Sensor 22
The sensor 22 has a function of detecting the posture of the operation device 2-1. For example, the sensor 22 is realized by an acceleration sensor, an angular velocity (gyro) sensor, or a geomagnetic sensor. The sensor 22 outputs a sensing result indicating the detected gravity direction, inclination, or angle to the posture estimation unit 23.
(3-1) Environment map 3
The environment map 3 is information representing the three-dimensional positions of objects existing in the real space. More specifically, the environment map 3 is a set of data expressing the positions and postures of one or more objects existing in the real space. The environment map may include, for example, an object name corresponding to each object, the three-dimensional positions of feature points corresponding to the object, and polygon information constituting the shape of the object. The environment map may be constructed, for example, by obtaining the three-dimensional position of each feature point from the position of the feature point on the imaging plane.

In FIG. 2, the environment map 3 is represented as belonging to neither the display device 1-1 nor the operation device 2-1, while being accessible from both. This indicates that the display device 1-1 and the operation device 2-1 share the same environment map 3. As a result, the display device 1-1 and the operation device 2-1 each estimate a position and angle in the same world coordinate system, which makes possible the integration of posture information by the display device 1-1 described later. Note that the environment map 3 may be included in the display device 1-1, with the operation device 2-1 accessing the environment map 3 through communication with the display device 1-1. Similarly, the environment map 3 may be included in the operation device 2-1, or may be included in another information processing device such as a home server connected to the display device 1-1 and the operation device 2-1 over a network, or a server on the cloud.

As long as the display device 1-1 and the operation device 2-1 can estimate positions and angles in the same world coordinate system, what they share does not have to be the environment map 3. For example, the display device 1-1 and the operation device 2-1 may instead share a recognition dictionary of so-called planar markers as a starting point from which a common coordinate system can be defined.
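The following is a minimal sketch of what such a shared environment map might look like as a data structure, assuming a simple list of named objects with feature-point positions and polygon indices; the field names and values are illustrative assumptions, not the format used by the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MapObject:
    name: str                                            # object name, e.g. "desk"
    feature_points: List[Tuple[float, float, float]]     # 3D feature-point positions in world coordinates
    polygons: List[Tuple[int, int, int]] = field(default_factory=list)  # triangles indexing feature_points

@dataclass
class EnvironmentMap:
    objects: List[MapObject] = field(default_factory=list)

# Hypothetical shared map containing a single desk surface
env_map = EnvironmentMap(objects=[
    MapObject(
        name="desk",
        feature_points=[(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (1.2, 0.6, 0.0), (0.0, 0.6, 0.0)],
        polygons=[(0, 1, 2), (0, 2, 3)],
    ),
])
```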
(3-2) Posture estimation unit 23
The posture estimation unit 23 functions as an estimation unit that estimates posture information (first posture information) indicating the position and angle (posture) of the operation device 2-1 in the environment map 3 (world coordinate system). More specifically, the posture estimation unit 23 estimates the position and posture of the operation device 2-1 by estimating the position and posture of the imaging unit 21. One technique for position estimation in a world coordinate system is SLAM (Simultaneous Localization And Mapping), which can simultaneously estimate the position and posture of a camera and the positions of feature points appearing in the camera image. The basic principle of SLAM using a monocular camera is described in Andrew J. Davison, "Real-Time Simultaneous Localization and Mapping with a Single Camera", Proceedings of the 9th IEEE International Conference on Computer Vision Volume 2, 2003, pp. 1403-1410. SLAM that visually estimates a position using camera images is also referred to in particular as VSLAM (visual SLAM). Using SLAM, the posture estimation unit 23 estimates the position and posture of the operation device 2-1 from the result of matching the feature points in the environment map 3 against the feature points appearing in the image captured by the imaging unit 21. Alternatively, the posture estimation unit 23 may estimate the position and posture of the imaging unit 21 in the environment map 3 by a posture estimation technique using markers, or by techniques such as DTAM (Dense Tracking and Mapping in Real-Time) or Kinect Fusion. The posture estimation unit 23 may correct the position and posture estimated by SLAM using the various sensing results output from the sensor 22. The posture estimation unit 23 outputs the estimated posture information to the posture information transmission unit 24.
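As a rough illustration of matching-based pose estimation (not the SLAM pipeline itself), the sketch below recovers a camera pose from correspondences between 3D map feature points and their 2D detections in the captured image using OpenCV's PnP solver; the choice of solver and the assumption that correspondences are already available are illustrative simplifications.

```python
import numpy as np
import cv2

def estimate_camera_pose(map_points_3d, image_points_2d, K):
    """Estimate the camera rotation and translation (world -> camera) from matched
    3D map feature points and their 2D detections in the captured image.
    Requires at least four correspondences."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(map_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)    # rotation vector -> 3x3 rotation matrix
    return R, tvec.ravel()
```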
(4) Posture information transmission unit 24
The posture information transmission unit 24 has a function of transmitting the posture information estimated by the posture estimation unit 23 to the display device 1-1. This allows the display device 1-1 to perform various kinds of processing according to the position and posture of the operation device 2-1. The posture information transmission unit 24 is a communication module for transmitting and receiving data to and from the display device 1-1 by wire or wirelessly. For example, the posture information transmission unit 24 communicates wirelessly with the display device 1-1 directly or via a network access point, using a scheme such as wireless LAN (Local Area Network), Wi-Fi (Wireless Fidelity, registered trademark), infrared communication, or Bluetooth (registered trademark).

Although the posture estimation unit 23 and the posture information transmission unit 24 are included in the operation device 2-1 in FIG. 2, the technology according to the present disclosure is not limited to this. For example, the posture estimation unit 23 and the posture information transmission unit 24 may be included in another information processing device connected to the display device 1-1 and the operation device 2-1. In that case, the other information processing device may estimate the posture information of the operation device 2-1 based on the captured image and sensing results received from the operation device 2-1, and transmit the estimated posture information to the display device 1-1. Similarly, the posture estimation unit 23 may be included in the display device 1-1.
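A minimal sketch of how the estimated posture information might be serialized and sent, assuming a JSON message over UDP; the message fields, transport, and address are illustrative assumptions and are not specified by the disclosure.

```python
import json
import socket
import time

def send_pose(sock, addr, position, quaternion):
    """Serialize one world-frame pose sample and send it to the display device."""
    message = {
        "timestamp": time.time(),
        "position": list(position),       # [x, y, z] in world coordinates
        "orientation": list(quaternion),  # [x, y, z, w]
    }
    sock.sendto(json.dumps(message).encode("utf-8"), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Hypothetical address of the display device on the local network
send_pose(sock, ("192.168.0.10", 50000), (0.3, 0.0, 0.1), (0.0, 0.0, 0.0, 1.0))
```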
(5) User operation acquisition unit 25
The user operation acquisition unit 25 has a function of acquiring user operations as control information for controlling the display device 1-1 according to the user operations. For example, the user operation acquisition unit 25 is realized by a button, a keyboard, a mouse, a trackball, a touch pad, a touch panel, or the like. The user operation acquisition unit 25 outputs control information indicating the acquired user operation to the control signal transmission unit 26.

(6) Control signal transmission unit 26
The control signal transmission unit 26 has a function of transmitting the control information output by the user operation acquisition unit 25 to the display device 1-1. This allows the display device 1-1 to perform processing according to the control information. The control signal transmission unit 26 is a communication module for transmitting and receiving data to and from the display device 1-1 by wire or wirelessly. For example, the control signal transmission unit 26 communicates wirelessly with the display device 1-1 directly or via a network access point, using a scheme such as wireless LAN, Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark).
(7) Supplement
Although not explicitly shown in FIG. 1, the operation device 2-1 may include a control unit that functions as an arithmetic processing device and a control device and controls the overall operation of the operation device 2-1 according to various programs. The control unit is realized by, for example, a CPU (Central Processing Unit) or a microprocessor. The control unit may include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
[2-2. Configuration example of the display device]
As shown in FIG. 2, the display device 1-1 includes an imaging unit 11, a sensor 12, a posture estimation unit 13, a posture information reception unit 14, a control information reception unit 15, a control unit 16, and a display unit 17.

(1) Imaging unit 11
The imaging unit 11 includes a lens system composed of an imaging lens, a diaphragm, a zoom lens, a focus lens, and the like, a drive system that causes the lens system to perform focus and zoom operations, and a solid-state image sensor array that photoelectrically converts the imaging light obtained through the lens system to generate an imaging signal. The solid-state image sensor array may be realized by, for example, a CCD sensor array or a CMOS sensor array. The imaging unit 11 according to the present embodiment images the real space and outputs the captured image to the posture estimation unit 13.

(2) Sensor 12
The sensor 12 has a function of detecting the posture of the display device 1-1. For example, the sensor 12 is realized by an acceleration sensor, an angular velocity (gyro) sensor, or a geomagnetic sensor. The sensor 12 outputs a sensing result indicating the detected gravity direction, inclination, or angle to the posture estimation unit 13.
(3) Posture estimation unit 13
The posture estimation unit 13 functions as an estimation unit that estimates posture information (second posture information) indicating the position and angle (posture) of the display device 1-1 in the environment map 3 (world coordinate system). More specifically, the posture estimation unit 13 estimates the position and posture of the display device 1-1 by estimating the position and posture of the imaging unit 11. Like the posture estimation unit 23 described above, the posture estimation unit 13 uses SLAM to estimate the position and posture of the display device 1-1 from the result of matching the feature points in the environment map 3 against the feature points appearing in the image captured by the imaging unit 11. Alternatively, the posture estimation unit 13 may estimate the position and posture of the imaging unit 11 in the environment map 3 by a posture estimation technique using markers, or by techniques such as DTAM or Kinect Fusion. The posture estimation unit 13 may correct the posture estimation result obtained by SLAM using the various sensing results output from the sensor 12. The posture estimation unit 13 outputs the estimated posture information to the control unit 16.

Although the posture estimation unit 13 is included in the display device 1-1 in FIG. 2, the technology according to the present disclosure is not limited to this. For example, the posture estimation unit 13 may be included in another information processing device connected to the display device 1-1 and the operation device 2-1. In that case, the other information processing device may estimate the posture information of the display device 1-1 based on the captured image and sensing results received from the display device 1-1 and transmit the estimated posture information to the display device 1-1. Similarly, the posture estimation unit 13 may be included in the operation device 2-1, which may estimate the posture information of the display device 1-1 based on the captured image and sensing results received from the display device 1-1 and return the estimated posture information to the display device 1-1.
(4) Posture information reception unit 14
The posture information reception unit 14 has a function of receiving the posture information of the operation device 2-1 transmitted by the posture information transmission unit 24. The posture information reception unit 14 is a communication module for transmitting and receiving data to and from the operation device 2-1 by wire or wirelessly. For example, the posture information reception unit 14 communicates wirelessly with the operation device 2-1 directly or via a network access point, using a scheme such as wireless LAN, Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark). The posture information reception unit 14 outputs the received posture information to the control unit 16.

(5) Control information reception unit 15
The control information reception unit 15 has a function of receiving the control information transmitted by the control signal transmission unit 26. The control information reception unit 15 is a communication module for transmitting and receiving data to and from the operation device 2-1 by wire or wirelessly. For example, the control information reception unit 15 communicates wirelessly with the operation device 2-1 directly or via a network access point, using a scheme such as wireless LAN, Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark). The control information reception unit 15 outputs the received control information to the control unit 16.
(6) Control unit 16
The control unit 16 functions as an arithmetic processing device and a control device, and controls the overall operation of the display device 1-1 according to various programs. The control unit 16 is realized by, for example, a CPU or a microprocessor. The control unit 16 may include a ROM that stores programs to be used, calculation parameters, and the like, and a RAM that temporarily stores parameters that change as appropriate.
The control unit 16 according to the present embodiment integrates the posture information of the display device 1-1 output by the posture estimation unit 13 and the posture information of the operation device 2-1 received by the posture information reception unit 14, and calculates the relative position between the display device 1-1 and the operation device 2-1. Since both sets of posture information are position information in the world coordinate system defined by the same environment map 3, integrating them makes it possible to calculate the relative position. Here, the relative position means the position of the operation device 2-1 in the coordinate system of the display device 1-1. The relative position may also be regarded as the position of the operation device 2-1 in the through image captured by the display device 1-1. This relative position is used when deciding how to draw virtual objects. Display control of virtual objects by the control unit 16 proceeds as follows. First, the control unit 16 obtains the position of the operation device 2-1 in the world coordinate system from the posture information of the operation device 2-1 received by the posture information reception unit 14. Next, the control unit 16 determines the position of the virtual object in the world coordinate system. The control unit 16 then determines the positions of the operation device 2-1 and the virtual object in the coordinate system of the display device 1-1. Finally, the control unit 16 decides how to draw the virtual object according to the positional relationship between the operation device 2-1 and the virtual object in the coordinate system of the display device 1-1. One example of such a drawing method is expressing partial occlusion according to the front-to-back relationship between the operation device 2-1 and the virtual object as seen from the display device 1-1. Because the control unit 16 can calculate the relative position, it can render such depth-dependent expressions. Since the display device 1-1 does not need to track the operation device 2-1 by image recognition, the relative position can be estimated even when the operation device 2-1 is outside the camera's angle of view. Furthermore, since the control unit 16 does not need to perform the image-recognition-based posture estimation of the operation device that is performed in the technique described in Patent Document 1, the processing load on the display device 1-1 side is reduced.
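The steps above can be illustrated with a small sketch, again assuming 4x4 rigid transforms: it moves the operation device and a virtual object into the display device's coordinate system and compares their depths along the camera axis (assumed +Z) to decide which is drawn in front. The depth comparison is a simplified stand-in for the per-pixel occlusion handling a real renderer would perform.

```python
import numpy as np

def to_display_frame(T_world_display, p_world):
    """Transform a world-coordinate point into the display device's coordinate system."""
    p = np.append(np.asarray(p_world, dtype=float), 1.0)
    return (np.linalg.inv(T_world_display) @ p)[:3]

def occlusion_order(T_world_display, controller_pos_world, object_pos_world):
    """Compare depths along the display camera's viewing axis (assumed +Z) to decide
    which of the operation device and the virtual object is drawn in front."""
    depth_controller = to_display_frame(T_world_display, controller_pos_world)[2]
    depth_object = to_display_frame(T_world_display, object_pos_world)[2]
    if depth_object < depth_controller:
        return "virtual object in front of operation device"
    return "operation device in front of virtual object"
```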
The control unit 16 also performs various kinds of processing according to the control information received by the control information reception unit 15. Specifically, the control unit 16 controls the display unit 17 so as to display a visual effect corresponding to the user operation indicated by the control information, in addition to reflecting the calculated relative position between the display device 1-1 and the operation device 2-1. For example, when a predetermined user operation is acquired by the user operation acquisition unit 25, the control unit 16 displays an AR image showing the ball 8 of FIG. 1 being launched from the operation device 2-1.

The control unit 16 may display a virtual operation assisting device as one example of a virtual object that reflects the relative position and the control information. A virtual operation assisting device is a virtual object indicating an action point defined by a relative position with respect to the operation device 2-1. The virtual operation assisting device makes it possible to place the action point of the operation device 2-1 at a position away from the main body of the operation device 2-1, and to present the operation to the user in a visually intuitive way. A display example of the virtual operation assisting device is described below with reference to FIG. 3.
FIG. 3 is a diagram showing an example of AR display by the display device 1-1 according to the first embodiment. FIG. 3 shows an example in which the display device 1-1 displays, in front of the operation device 2-1 in the through image, an AR image 8-1 showing a virtual object in the form of a light bulb. The light bulb 8-1 moves together with the operation device 2-1 as if it were integrated with it. This is realized by the control unit 16 estimating the position of the operation device 2-1 in the through image and displaying the AR image 8-1 at a fixed position in front of the operation device 2-1. The user can use the light bulb 8-1 as a part of the operation device 2-1. Specifically, the display device 1-1 makes the light bulb 8-1 function as an action point, and superimposes on the through image various AR images that use the light bulb 8-1 as a reference point. For example, when a horizontal swipe operation is acquired by the user operation acquisition unit 25, which is configured integrally with the display unit as a touch panel, the control unit 16 may switch the color of the light bulb 8-1. When a tap operation is detected by the user operation acquisition unit 25, the control unit 16 may display a neon sign 8-2 at the three-dimensional positions through which the light bulb 8-1 has passed. When a vertical swipe operation is acquired by the user operation acquisition unit 25, the control unit 16 may move the light bulb 8-1 farther from or closer to the operation device 2-1. Note that in FIG. 3, the display indicating the result of the posture estimation of the operation device 2-1 by the display device 1-1 (reference numeral 9 in FIG. 1) is omitted.
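A minimal sketch of how an action point such as the light bulb could be anchored a fixed distance in front of the operation device, given the controller's world-frame pose; the offset value and the axis convention are assumptions for illustration.

```python
import numpy as np

def action_point_world(T_world_controller, forward_offset=0.15):
    """World-frame position of an action point (e.g. the light bulb) placed a fixed
    distance in front of the operation device, assuming its local +Z axis points forward."""
    local_point = np.array([0.0, 0.0, forward_offset, 1.0])
    return (T_world_controller @ local_point)[:3]
```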
Besides the light bulb 8-1 shown in FIG. 3, various virtual objects are conceivable as virtual operation assisting devices. For example, scissors, a cutter, a pen, or a ruler may serve as a virtual operation assisting device and act on virtual objects or real objects in the real space. These multiple types of virtual operation assisting devices may be switched, for example, by a swipe operation on the user operation acquisition unit 25 formed as a touch panel. As another example, the user may wear the display device 1-1 formed as an HMD, hold an operation device 2-1 in each hand, and operate virtual assisting devices with both hands.

Besides displaying AR images, the control unit 16 may express various visual effects such as displaying text or images according to the position of the operation device 2-1. The control unit 16 may also control the display device 1-1 or the operation device 2-1 so as to perform processing that expresses an arbitrary interaction between the operation device 2-1 and the real world, such as vibration, sound reproduction, or light emission according to the position of the operation device 2-1.

Although the posture information reception unit 14, the control information reception unit 15, and the control unit 16 are included in the display device 1-1 in FIG. 2, the technology according to the present disclosure is not limited to this. For example, these units may be included in another information processing device connected to the display device 1-1 and the operation device 2-1. In that case, the other information processing device may generate display control information indicating the content of the AR image based on the posture information and control information received from the display device 1-1 and the operation device 2-1, and transmit the display control information to the display device 1-1 to control display by the display unit 17. Similarly, the posture information reception unit 14, the control information reception unit 15, and the control unit 16 may be included in the operation device 2-1.
(7) Display unit 17
Based on control by the control unit 16, the display unit 17 displays the through image captured in real time by the imaging unit 11, superimposes various AR images on it, and displays (plays back) stored image data (still image data / moving image data). The display unit 17 is realized by, for example, an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode). When the processing device according to the present embodiment is realized as an HMD, the display unit 17 may be formed in a transparent or translucent through state and display AR images in the real space seen through the display unit 17.

The configuration example of the display system according to the present embodiment has been described above. Next, the operation processing of the display system according to the present embodiment will be described with reference to FIG. 4.
[2-3. Operation processing]
FIG. 4 is a sequence diagram showing an example of the flow of AR display processing executed in the display system according to the first embodiment. The display device 1-1 and the operation device 2-1 are involved in the sequence shown in FIG. 4. The sequence in FIG. 4 shows an example in which each process is executed by either the display device 1-1 or the operation device 2-1, but each process may be executed by either device, or may be executed centrally by another information processing device such as a server.
As shown in FIG. 4, first, in step S102-1, the display device 1-1 performs posture estimation. Specifically, the posture estimation unit 13 uses SLAM to estimate the position and posture of the display device 1-1 from the result of matching the feature points in the environment map 3 against the feature points appearing in the image captured by the imaging unit 11. At this time, the posture estimation unit 13 may correct the posture estimation result obtained by SLAM using the various sensing results output from the sensor 12. Similarly, in step S102-2, the operation device 2-1 performs posture estimation. In particular, the posture estimation unit 23 estimates the position and posture of the operation device 2-1 using the same environment map 3 as the posture estimation unit 13.

Next, in step S104, the operation device 2-1 transmits the estimated posture information to the display device 1-1. Specifically, the posture information transmission unit 24 transmits the posture information estimated by the posture estimation unit 23 in step S102-2 to the display device 1-1.

Next, in step S106, the display device 1-1 integrates the posture information of the display device 1-1 and the posture information of the operation device 2-1. Specifically, the control unit 16 integrates the posture information of the display device 1-1 output by the posture estimation unit 13 and the posture information of the operation device 2-1 received by the posture information reception unit 14, and calculates the relative position between the display device 1-1 and the operation device 2-1.

Meanwhile, in step S108, the operation device 2-1 acquires a user operation. Specifically, the user operation acquisition unit 25 acquires a user operation, such as a touch on the touch panel or a button press, as control information for controlling the display device 1-1 according to the user operation.

Next, in step S110, the operation device 2-1 transmits the acquired control information to the display device 1-1. Specifically, the control signal transmission unit 26 transmits the control information indicating the user operation acquired in step S108 to the display device 1-1.

Next, in step S112, the display device 1-1 calculates visual information. Specifically, the control unit 16 calculates visual information corresponding to the user operation indicated by the control information, in addition to the relative position between the display device 1-1 and the operation device 2-1 calculated in step S106. More concretely, as described with reference to FIGS. 1 and 3, the control unit 16 calculates the position of the operation device 2-1 in the through image captured by the imaging unit 11 and generates display control information for displaying, at the position corresponding to the operation device 2-1, an AR image showing a virtual object that reflects the control information.

Then, in step S114, the display device 1-1 performs AR display. Specifically, the display unit 17 performs display based on the display control information generated by the control unit 16 in step S112.

The operation processing of the display system according to the present embodiment has been described above.
<3. Second Embodiment>
The present embodiment avoids situations in which posture estimation of the operation device 2 becomes difficult, by determining whether the operation device 2 is approaching a real object and notifying the user. Various configurations of the present embodiment are conceivable. As examples, configuration examples of the display system according to the present embodiment will be described with reference to FIGS. 5 to 8.
[3-1. Configuration example 1]
FIG. 5 is a block diagram showing configuration example 1 of the display system according to the second embodiment. As shown in FIG. 5, the display system according to the present embodiment includes a display device 1-2 and an operation device 2-2 and, in addition to the configuration of the display system according to the first embodiment described with reference to FIG. 2, includes three-dimensional data 4, an approach determination unit 27, and an approach notification unit 28.

(1) Three-dimensional data 4
The three-dimensional data 4 consists of the position information of the vertices of objects in the real space, the line segments connecting the vertices, and the surfaces enclosed by those line segments, and is information expressing the three-dimensional shape (surface) of the real space. The position information included in the three-dimensional data 4 indicates positions in the world coordinate system of the environment map 3. The three-dimensional data 4 may be generated dynamically using the image captured by the imaging unit 21 and the posture information estimated by the posture estimation unit 23, or may be generated in advance. The three-dimensional data is realized, for example, as CAD (Computer Assisted Drafting) data.
In FIG. 5, the three-dimensional data 4 is represented as belonging to neither the display device 1-2 nor the operation device 2-2 while being accessible from the operation device 2-2. This indicates that the three-dimensional data 4 may be included in any device. For example, the three-dimensional data 4 may be included in the display device 1-2 or the operation device 2-2, or may be included in another information processing device such as a home server connected to the operation device 2-2 over a network, or a server on the cloud.
(2) Approach determination unit 27
The approach determination unit 27 has a function of determining whether or not the distance between the imaging surface of the imaging unit 21 and a real object is equal to or less than a threshold. As described above, the posture estimation unit 23 uses SLAM to estimate the position and posture of the operation device 2-2 from the result of matching the feature points in the environment map 3 against the feature points appearing in the image captured by the imaging unit 21. For the posture estimation unit 23 to succeed in estimating the posture information, at least some feature points must be included in the imaging range, and those feature points must appear in the captured image clearly enough to be identified. The approach determination unit 27 therefore determines whether the imaging surface of the imaging unit 21 is excessively close to a real object. When the approach determination unit 27 determines that the distance between the imaging surface of the imaging unit 21 and a real object is equal to or less than the threshold, the conditions under which the posture estimation unit 23 can successfully estimate posture information may no longer be satisfied, so a notification is issued by the approach notification unit 28 described later. The approach determination unit 27 may also cause the approach notification unit 28 to issue a notification when no feature points are included in the imaging range, for example when the imaging surface of the imaging unit 21 is pointed at the sky.

The approach determination unit 27 calculates the distance between the imaging surface of the imaging unit 21 and a real object based on the posture information of the operation device 2-2 estimated by the posture estimation unit 23 and the three-dimensional data 4. Specifically, by relating the three-dimensional shape represented by the three-dimensional data 4 to the posture information, the approach determination unit 27 converts the position and angle of the operation device 2-2 in the world coordinate system indicated by the posture information into a position and angle relative to the three-dimensional shape. The approach determination unit 27 then calculates the distance between the surface of the real object represented by the three-dimensional shape and the imaging surface of the imaging unit 21 of the operation device 2-2. Alternatively, the approach determination unit 27 may generate a simple mesh by applying Delaunay triangulation to the SLAM feature point information and treat this mesh as a pseudo three-dimensional shape for the approach determination. In that case, the approach determination unit 27 can perform the approach determination without using the three-dimensional data 4.
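The following is a minimal sketch of this approach determination under simplifying assumptions: the real-object surface is approximated by the SLAM feature points themselves rather than a full mesh, the imaging surface is approximated by the camera center, and the check reduces to the minimum distance to any mapped point; the threshold value is an illustrative assumption.

```python
import numpy as np

def too_close(camera_position_world, feature_points_world, threshold=0.10):
    """Return True when the camera center is within `threshold` metres of the nearest
    mapped feature point, i.e. the imaging surface is excessively close to a real surface."""
    points = np.asarray(feature_points_world, dtype=float)
    if points.size == 0:
        return True   # no features in range is also a situation where estimation may fail
    distances = np.linalg.norm(points - np.asarray(camera_position_world, dtype=float), axis=1)
    return float(distances.min()) <= threshold
```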
Although the approach determination unit 27 is included in the operation device 2-2 in FIG. 5, the technology according to the present disclosure is not limited to this. For example, the approach determination unit 27 may be included in another information processing device connected to the operation device 2-2. In that case, the other information processing device may calculate the distance between the imaging surface of the imaging unit 21 and a real object based on the three-dimensional data 4 and the posture information received from the operation device 2-2, perform the approach determination, and control the notification by the approach notification unit 28. Similarly, the approach determination unit 27 may be included in the display device 1-2.
(3) Approach notification unit 28
The approach notification unit 28 functions as a notification unit that, based on the determination result of the approach determination unit 27, issues a notification indicating that the distance between the imaging surface of the imaging unit 21 and a real object is equal to or less than the threshold. For example, the approach notification unit 28 is configured by a display unit, a vibration motor, a speaker, an LED, or the like, and may notify the user by on-screen display, vibration, sound reproduction, light emission, or the like. By issuing this notification, the approach notification unit 28 prompts the user to move the operation device 2-2 away from the real object, so that situations in which posture estimation of the operation device 2-2 becomes difficult can be avoided in advance.
(4) Supplement 1
The control unit 16 may also use the virtual operation assisting device to help avoid situations in which posture estimation by the operation device 2-2 becomes difficult. For example, the control unit 16 displays the virtual operation assisting device at a position in the through image where the distance between the imaging surface of the imaging unit 21 and a real object could become equal to or less than the threshold. With this display, the control unit 16 prompts the user to keep the virtual operation assisting device from colliding with the real object, so that situations in which posture estimation of the operation device 2-2 becomes difficult can be avoided in advance. Of course, if they do collide, the display system may notify the user with a warning display, vibration, sound reproduction, light emission, or the like. A display example of such a virtual operation assisting device is described below with reference to FIG. 6.

FIG. 6 is a diagram showing an example of AR display by the display device 1-2 according to the second embodiment. FIG. 6 shows an example in which the display device 1-2 displays, in front of the operation device 2-2 in the through image, a virtual operation assisting device 8-3 for helping avoid situations in which posture estimation by the operation device 2-2 becomes difficult. Like the light bulb 8-1 described above with reference to FIG. 3, the virtual operation assisting device 8-3 moves together with the operation device 2-2 as if it were integrated with it. With such a virtual operation assisting device 8-3, the control unit 16 can prevent the distance between the imaging surface of the imaging unit 21 and a real object from falling to or below the threshold, that is, avoid in advance situations in which posture estimation of the operation device 2-2 becomes difficult.

Furthermore, by moving this virtual operation assisting device to an arbitrary position in response to user operations, the control unit 16 enables the user to act on positions close to real objects. Specifically, even in a place where the distance to a real object is so small that posture estimation by the operation device 2-2 would become difficult, the control unit 16 can bring the virtual operation assisting device close to the object instead of the main body of the operation device 2-2. For example, the control unit 16 guides the action point to a position corresponding to the user operation by moving the virtual operation assisting device up, down, left, or right, or away from or toward the operation device 2-2, in response to a swipe operation on the user operation acquisition unit 25 formed as a touch panel. This allows the control unit 16 to express interactions at positions close to a real object, for example on the real object itself or at positions adjacent to it.
(5) Supplement 2
Various methods are conceivable for avoiding a situation in which it becomes difficult for the operating device 2 to estimate its own posture. As one example, the posture estimation unit 23 may improve the robustness of posture estimation by the SLAM technology by selecting a captured image from an imaging direction in which posture estimation is easy and estimating the posture information of the operating device 2-2 from it. For example, the imaging unit 21 may be formed as two wide-angle cameras provided on the back and front surfaces of the housing of the operating device 2-2, or as a 360-degree omnidirectional camera. In this case, the posture estimation unit 23 can perform posture estimation by selectively using a region of the captured image in which feature points are easy to recognize. Alternatively, the imaging unit 21 may be mounted on, for example, a camera platform that moves with six degrees of freedom, and the posture estimation unit 23 may control the platform so that the imaging surface keeps facing the direction in which posture estimation can be performed most stably.
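As a minimal sketch of the "select the easiest imaging direction" idea, the snippet below uses OpenCV ORB corner counts as a stand-in for "a region in which feature points are easy to recognize." The detector choice, the feature count threshold, and the function name are assumptions and do not appear in the specification.

    import cv2  # OpenCV, used here only for feature detection

    def best_view_for_tracking(images, min_features=80):
        """Pick the camera image (e.g. front vs. back wide-angle camera, or a crop
        of an omnidirectional image) with the most detected corners, as a proxy for
        the imaging direction in which posture estimation is easiest.
        Returns (index, keypoints) or (None, []) if no view is usable."""
        detector = cv2.ORB_create(nfeatures=500)
        best_idx, best_kps = None, []
        for i, img in enumerate(images):
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            kps = detector.detect(gray, None)
            if len(kps) > len(best_kps):
                best_idx, best_kps = i, kps
        if best_idx is None or len(best_kps) < min_features:
            return None, []
        return best_idx, best_kps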
[3-2. Configuration example 2]
FIG. 7 is a block diagram showing configuration example 2 of the display system according to the second embodiment. As shown in FIG. 7, the display system according to this configuration example includes a display device 1-3 and an operating device 2-3, and has a depth information acquisition unit 29 in place of the three-dimensional data 4 described with reference to FIG. 5.
(1) Depth information acquisition unit 29
The depth information acquisition unit 29 has a function of acquiring depth information in the imaging direction of the imaging unit 21. For example, the depth information acquisition unit 29 may be realized by a stereo camera, an infrared camera, or the like. The depth information acquisition unit 29 outputs the acquired depth information to the approach determination unit 27.
The approach determination unit 27 uses the depth information output by the depth information acquisition unit 29 to determine whether the distance between the imaging surface of the imaging unit 21 and a real object is equal to or less than the threshold value. When the approach determination unit 27 determines that this distance is equal to or less than the threshold value, the approach notification unit 28 issues a notification to that effect. This notification prompts the user to move the operating device 2-3 away from the real object, so that a situation in which posture estimation of the operating device 2-3 becomes difficult can be avoided in advance.
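A minimal sketch of this depth-based approach determination is shown below, assuming the depth information arrives as a per-pixel depth map in metres; the threshold value, the validity cutoff, and the function name are assumptions.

    import numpy as np

    def too_close(depth_map_m, threshold_m=0.15, valid_min_m=0.05):
        """Approach determination from a depth image (e.g. stereo or IR camera).
        depth_map_m: 2D array of distances in metres along the imaging direction.
        Returns True when any valid depth sample is at or below the threshold,
        in which case the approach notification unit would warn the user."""
        valid = depth_map_m > valid_min_m          # drop invalid / zero readings
        return bool(valid.any() and depth_map_m[valid].min() <= threshold_m)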
[3-3. Configuration example 3]
FIG. 8 is a block diagram showing configuration example 3 of the display system according to the second embodiment. As shown in FIG. 8, the display system according to this configuration example includes a display device 1-4 and an operating device 2-4, and has an approach determination unit 18 and an approach notification unit 19 in place of the approach determination unit 27 and the approach notification unit 28 described with reference to FIG. 5.
(1) Approach determination unit 18
The approach determination unit 18 has the same function as the approach determination unit 27. Specifically, the approach determination unit 18 calculates the distance between the imaging surface of the imaging unit 21 and a real object based on the three-dimensional data 4 and the posture information of the operating device 2-4 received by the posture information receiving unit 14. The approach determination unit 18 then determines whether this distance is equal to or less than the threshold value, and outputs the determination result to the approach notification unit 19.
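Below is an illustrative sketch, not the specification's implementation, of how such a distance could be computed from the received posture information and the three-dimensional data 4, here assumed to be a point cloud; the pose convention (world from device, third rotation column as viewing axis), the field-of-view cosine, and the threshold are assumptions.

    import numpy as np

    def imaging_surface_distance(camera_pos, camera_rot, points_world, fov_cos=0.5):
        """Distance from the imaging surface of the imaging unit 21 to the nearest
        real-object point lying roughly in front of the camera.
        camera_pos: (3,) position from the received posture information.
        camera_rot: (3,3) rotation; the third column is taken as the viewing axis.
        points_world: (N,3) points of the three-dimensional data 4."""
        view_dir = camera_rot[:, 2]
        rel = points_world - camera_pos
        dist = np.linalg.norm(rel, axis=1)
        ahead = (rel @ view_dir) / np.maximum(dist, 1e-9) >= fov_cos
        return float(dist[ahead].min()) if ahead.any() else np.inf

    def is_approaching(camera_pos, camera_rot, points_world, threshold_m=0.15):
        return imaging_surface_distance(camera_pos, camera_rot, points_world) <= threshold_m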
(2) Approach notification unit 19
The approach notification unit 19 has the same function as the approach notification unit 28. In particular, the approach notification unit 19 enables notification on the display device 1-4 side. For example, the approach notification unit 19 can call the user's attention more strongly by causing the display unit 17 to display a warning.
The configuration examples of the display system according to the present embodiment have been described above.
<4. Third Embodiment>
In the present embodiment, the display device 1-5 corrects the posture estimation error of SLAM based on the recognition result of a marker displayed on the operating device 2-5. First, an overview of the display system according to the present embodiment will be described with reference to FIG. 9.
[4-1. Overview]
FIG. 9 is an explanatory diagram for describing an overview of the display system according to the third embodiment. FIG. 9 shows a through image of the operating device 2-5, displayed on the display unit 17 of the display device 1-5 according to the present embodiment. As shown in FIG. 9, the operating device 2-5 according to the present embodiment displays a marker 5 consisting of a white rectangle while displaying black over the entire area of its display. Based on the recognition result of this marker 5, the display device 1-5 according to the present embodiment corrects the estimation error in the relative position obtained by integrating the posture information of the display device 1-5 and the operating device 2-5, an error which originates in the posture estimation error of SLAM.
As shown in FIG. 9, the marker 5 directly faces the display unit 17 of the display device 1-5. Specifically, each side of the square forming the marker 5 appears parallel to a side of the screen. The operating device 2-5 displays the marker 5 so that it directly faces the screen, that is, so that it directly faces the imaging unit 11 of the display device 1-5. As a result, the shape of the marker 5 in the through image is known (fixed), so the load of recognizing the marker 5 on the display device 1-5 is reduced. Note that although the marker 5 is a square in FIG. 9 as an example, it may take any shape such as a polygon, a circle, or an ellipse.
The overview of the display system according to the present embodiment has been described above. Next, a configuration example of the display system according to the present embodiment will be described with reference to FIG. 10.
[4-2. Configuration example]
FIG. 10 is a block diagram showing a configuration example of the display system according to the third embodiment. As shown in FIG. 10, the display system according to the present embodiment includes a display device 1-5 and an operating device 2-5, and has, in addition to the functions of the display system according to the first embodiment described with reference to FIG. 2, a function for notifying posture information in both directions and a function for displaying the marker so that it directly faces the display device. The function for notifying posture information in both directions is realized by the posture information transmitting unit 141, the posture information receiving unit 142, the posture information transmitting unit 241, and the posture information receiving unit 242. The function for displaying a directly facing marker is realized by the position correction unit 143, the correction matrix transmitting unit 144, the correction matrix receiving unit 243, the marker generation unit 244, and the marker display unit 245.
(1) Posture information transmitting unit 141
The posture information transmitting unit 141 has the same function as the posture information transmitting unit 24 described above with reference to FIG. 2. The posture information transmitting unit 141 according to the present embodiment transmits the posture information of the display device 1-5 estimated by the posture estimation unit 13 to the operating device 2-5.
(2) Posture information receiving unit 142
The posture information receiving unit 142 has the same function as the posture information receiving unit 14 described above with reference to FIG. 2. The posture information receiving unit 142 according to the present embodiment receives the posture information of the operating device 2-5 transmitted by the posture information transmitting unit 241 described later, and outputs it to the position correction unit 143.
(3) Posture information transmitting unit 241
The posture information transmitting unit 241 has the same function as the posture information transmitting unit 24 described above with reference to FIG. 2. The posture information transmitting unit 241 according to the present embodiment transmits the posture information of the operating device 2-5 estimated by the posture estimation unit 23 to the display device 1-5.
(4) Posture information receiving unit 242
The posture information receiving unit 242 has the same function as the posture information receiving unit 14 described above with reference to FIG. 2. The posture information receiving unit 242 according to the present embodiment receives the posture information of the display device 1-5 transmitted by the posture information transmitting unit 141, and outputs it to the marker generation unit 244.
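To make the bidirectional exchange concrete, the sketch below shows one possible message carrying posture information between the two devices. The field names, the use of a quaternion, and the JSON serialization are assumptions for illustration; the specification only requires that each device's position and posture in the shared environment map be communicated.

    from dataclasses import dataclass, asdict
    import json
    import time

    @dataclass
    class PoseMessage:
        """Posture information exchanged in both directions between the display
        device 1-5 and the operating device 2-5 (assumed fields: position,
        orientation quaternion, timestamp, all in the environment-map frame)."""
        device_id: str
        position: tuple       # (x, y, z) in the environment map
        orientation: tuple    # (qx, qy, qz, qw)
        timestamp: float

    def encode(msg: PoseMessage) -> bytes:
        return json.dumps(asdict(msg)).encode("utf-8")

    def decode(payload: bytes) -> PoseMessage:
        d = json.loads(payload.decode("utf-8"))
        d["position"] = tuple(d["position"])
        d["orientation"] = tuple(d["orientation"])
        return PoseMessage(**d)

    # usage: each device encodes its own estimated pose and sends it over the
    # wireless link; the receiver feeds it to the position correction unit 143
    # or the marker generation unit 244.
    msg = PoseMessage("display-1-5", (0.0, 1.2, 0.4), (0.0, 0.0, 0.0, 1.0), time.time())
    assert decode(encode(msg)) == msg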
(5) Position correction unit 143
The position correction unit 143 has a function of recognizing the marker in the through image captured by the imaging unit 11 and calculating the relative position between the display device 1-5 and the operating device 2-5 using the recognition result. Specifically, the position correction unit 143 first integrates the posture information of the display device 1-5 output by the posture estimation unit 13 and the posture information of the operating device 2-5 received by the posture information receiving unit 142, and calculates the relative position between the display device 1-5 and the operating device 2-5. The calculation here is the same as the processing in the control unit 16 described above with reference to FIG. 2. In synchronization with this, the imaging unit 11 outputs a through image in which the marker displayed by the marker display unit 245 described later is captured. The position correction unit 143 then estimates the position of the operating device 2-5 in the through image by recognizing the marker in the through image. Here, even if a posture estimation error of SLAM is included, the position of the operating device 2-5 in the through image is known from the relative position calculated from the posture information; that is, the position of the marker in the through image is known. The position correction unit 143 therefore only has to recognize the marker within a limited region of the through image. Even if identification of the marker in that region fails, the position correction unit 143 can identify the marker efficiently by searching its surrounding region. The processing load is therefore reduced compared with recognizing the marker in the entire through image. The position correction unit 143 corrects the calculation result of the relative position obtained by integrating the posture information, using the position estimation result based on the marker.
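The following sketch illustrates the two preparatory steps described above: integrating the two poses into a relative pose, and deriving the limited image region in which the marker is searched first. It is a simplified example only; the 4x4 pose convention (world from device), the intrinsics matrix K, and the window size are assumptions.

    import numpy as np

    def relative_pose(T_display, T_controller):
        """Pose of the operating device in the display device's frame, obtained by
        integrating the two poses estimated against the same environment map.
        Each pose is a 4x4 homogeneous matrix mapping device coordinates to world."""
        return np.linalg.inv(T_display) @ T_controller

    def predicted_marker_pixel(T_rel, K):
        """Project the operating device's origin into the display's through image
        using camera intrinsics K (3x3); this gives the centre of the search window."""
        p_cam = T_rel[:3, 3]
        uvw = K @ p_cam
        return uvw[:2] / uvw[2]

    def search_window(center_uv, half_size=60, image_shape=(720, 1280)):
        """Limited region of the through image in which the marker is searched first;
        if identification fails there, the surrounding region would be searched next."""
        h, w = image_shape
        u, v = center_uv
        u0, v0 = max(0, int(u) - half_size), max(0, int(v) - half_size)
        u1, v1 = min(w, int(u) + half_size), min(h, int(v) + half_size)
        return (u0, v0, u1, v1)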
The position correction unit 143 recognizes the marker using, for example, Haar-like features. A Haar-like feature is a scalar obtained as the difference between the average brightnesses of rectangular regions. A white region inside a black region, like the marker 5 shown in FIG. 9, has a clear brightness difference from the black region and is therefore easy to recognize with Haar-like features. Since the position correction unit 143 only has to recognize the marker, the processing load is reduced compared with recognizing the operating device 2-5 itself.
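As a minimal illustration of such a rectangle-difference feature, the sketch below computes the difference between the mean brightness of an inner rectangle and that of a larger enclosing rectangle via an integral image; this is an assumption-level example (bounds checks and the concrete rectangle layout are omitted), not the detector used in the specification.

    import numpy as np

    def integral_image(gray):
        """Summed-area table so that any rectangle sum costs four lookups."""
        return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

    def rect_mean(ii, top, left, h, w):
        a = ii[top + h - 1, left + w - 1]
        b = ii[top - 1, left + w - 1] if top > 0 else 0.0
        c = ii[top + h - 1, left - 1] if left > 0 else 0.0
        d = ii[top - 1, left - 1] if top > 0 and left > 0 else 0.0
        return (a - b - c + d) / (h * w)

    def haar_like_response(gray, top, left, h, w, border):
        """Difference between the mean brightness of an inner rectangle and that of
        a larger enclosing rectangle; large for a white marker on a black screen."""
        ii = integral_image(gray)
        inner = rect_mean(ii, top, left, h, w)
        outer = rect_mean(ii, top - border, left - border, h + 2 * border, w + 2 * border)
        return inner - outer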
The control unit 16 according to the present embodiment performs various kinds of processing in accordance with the relative position between the display device 1-5 and the operating device 2-5 calculated and corrected by the position correction unit 143. For example, the control unit 16 estimates the position of the operating device 2-5 in the through image from the corrected relative position, and displays a virtual object at a position corresponding to the estimated position. Since the calculation result of the relative position is corrected based on the recognition result of the marker, the error between the estimated position and the actual position is smaller than without the correction, and a more natural AR display is realized.
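One simple way to express this correction, shown purely as a sketch, is to blend the integrated relative position with the marker-derived one; the gain parameter and function names are assumptions and the specification does not prescribe this particular blending rule.

    import numpy as np

    def correct_relative_position(p_rel_integrated, p_rel_from_marker, gain=1.0):
        """Blend the relative position obtained by integrating the two SLAM poses
        with the position estimated from the recognized marker. gain=1.0 simply
        replaces the integrated estimate; a smaller gain applies a partial correction."""
        p_int = np.asarray(p_rel_integrated, dtype=float)
        p_mrk = np.asarray(p_rel_from_marker, dtype=float)
        return p_int + gain * (p_mrk - p_int)

    # usage: the corrected relative position is then projected into the through
    # image (as in the earlier projection sketch) to anchor the virtual object.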
When the rotational component of the SLAM posture estimation error is large, the marker is no longer displayed directly facing the imaging unit 11 of the display device 1-5, and the load of marker recognition on the position correction unit 143 can become large. The position correction unit 143 therefore generates a correction matrix for correcting the display control information of the marker (its shape, angle, and position) so as to reduce the rotational error. Specifically, the position correction unit 143 generates a correction matrix that corrects the difference between the marker recognized in the through image and the marker that would be recognized if it were directly facing the imaging unit. The position correction unit 143 outputs the generated correction matrix to the correction matrix transmitting unit 144. Assuming that the offset of the SLAM posture estimation error is constant, the correction matrix also takes a constant value. The position correction unit 143 may therefore apply a low-pass filter or the like to the correction matrix. In this case, the correction width becomes stable, so stable marker display by the marker display unit 245 described later is realized.
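A hedged sketch of one way to realize such a correction matrix and its smoothing is given below: the observed marker corners are mapped to the corners expected for a directly facing marker by a homography, and the result is low-pass filtered by exponential smoothing. The use of a homography, OpenCV's findHomography, and the smoothing factor are assumptions; the specification does not fix the form of the correction matrix.

    import numpy as np
    import cv2

    def correction_homography(observed_corners, expected_corners):
        """Homography mapping the four marker corners actually recognized in the
        through image to the corners that would be seen if the marker were directly
        facing the camera; this plays the role of the correction matrix here."""
        H, _ = cv2.findHomography(np.float32(observed_corners), np.float32(expected_corners))
        return H

    def low_pass(prev_H, new_H, alpha=0.1):
        """Exponential smoothing of the correction matrix, on the assumption that
        the SLAM error offset is roughly constant, so the marker display stays stable."""
        if prev_H is None:
            return new_H
        return (1.0 - alpha) * prev_H + alpha * new_H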
(6) Correction matrix transmitting unit 144
The correction matrix transmitting unit 144 has a function of transmitting information indicating the correction matrix output by the position correction unit 143 to the operating device 2-5. The correction matrix transmitting unit 144 is a communication module for transmitting and receiving data to and from the operating device 2-5 by wire or wirelessly. For example, the correction matrix transmitting unit 144 communicates wirelessly with the operating device 2-5 directly or via a network access point, using a scheme such as wireless LAN, Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark).
(7) Correction matrix receiving unit 243
The correction matrix receiving unit 243 has a function of receiving the information indicating the correction matrix transmitted by the correction matrix transmitting unit 144. The correction matrix receiving unit 243 is a communication module for transmitting and receiving data to and from the display device 1-5 by wire or wirelessly. For example, the correction matrix receiving unit 243 communicates wirelessly with the display device 1-5 directly or via a network access point, using a scheme such as wireless LAN, Wi-Fi (registered trademark), infrared communication, or Bluetooth (registered trademark). The correction matrix receiving unit 243 outputs the received correction matrix to the marker generation unit 244.
(8) Marker generation unit 244
The marker generation unit 244 functions as a marker control unit that generates the marker used by the position correction unit 143 for calculating the relative position between the display device 1-5 and the operating device 2-5, and controls the marker display unit 245 to display the generated marker. Specifically, the marker generation unit 244 first integrates the posture information of the operating device 2-5 output by the posture estimation unit 23 and the posture information of the display device 1-5 received by the posture information receiving unit 242, and calculates the relative position between the display device 1-5 and the operating device 2-5. The calculation here is the same as the processing in the control unit 16 described above with reference to FIG. 2. Next, based on the calculation result of the relative position, the marker generation unit 244 generates display control information that defines the shape, angle, and position of a marker directly facing the display device 1-5. The marker generation unit 244 then outputs the generated display control information to the marker display unit 245 and controls the marker display unit 245 to display a marker directly facing the imaging unit 11 of the display device 1-5. Since the marker thereby directly faces the imaging unit 11 of the display device 1-5, recognition of the marker by the position correction unit 143 becomes easier and the recognition load is reduced. Note that since the marker is meaningful only when it is captured by the imaging unit 11 of the display device 1-5, the marker generation unit 244 may stop displaying the marker when the operating device 2-5 is outside the angle of view of the imaging unit 11. The marker generation unit 244 may also reduce the marker generation load by limiting the correction, for example by correcting only the position when the SLAM posture estimation error is small.
The marker generation unit 244 corrects the shape, position, and angle of the marker so that it directly faces the imaging unit 11 of the display device 1-5 in accordance with the SLAM posture estimation error. Specifically, the marker generation unit 244 controls the marker display unit 245 to display a marker directly facing the imaging unit 11 of the display device 1-5 based additionally on the result of marker recognition by the position correction unit 143 of the display device 1-5. Concretely, the marker generation unit 244 corrects the marker display control information generated from the calculation result of the relative position, using the correction matrix received by the correction matrix receiving unit 243. As a result, the marker faces the imaging unit 11 of the display device 1-5 even more directly, so the marker recognition load on the position correction unit 143 is further reduced.
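The sketch below is a heavily simplified, assumption-level illustration of two of the decisions just described: whether the operating device is inside the display camera's angle of view at all (otherwise marker display can be stopped), and how a marker orientation facing the display camera could be derived from the relative pose. The pose conventions, the intrinsics K, and the handling of degenerate geometry (for example when the direction to the display device is vertical) are all assumptions, not the specification's display control information.

    import numpy as np

    def controller_in_view(T_display, T_controller, K, image_shape=(720, 1280)):
        """True when the operating device projects inside the display device's image,
        i.e. when displaying the marker is useful at all."""
        T_rel = np.linalg.inv(T_display) @ T_controller
        p = T_rel[:3, 3]
        if p[2] <= 0:                      # behind the display camera
            return False
        uvw = K @ p
        u, v = uvw[:2] / uvw[2]
        h, w = image_shape
        return 0 <= u < w and 0 <= v < h

    def marker_pose_facing_display(T_controller, T_display):
        """Rotation for the marker plane so that its normal points at the display
        device's camera (a simplified stand-in for the display control information)."""
        d = T_display[:3, 3] - T_controller[:3, 3]
        z = d / np.linalg.norm(d)                  # marker normal towards the camera
        x = np.cross([0.0, 1.0, 0.0], z)
        x = x / np.linalg.norm(x)                  # assumes d is not vertical
        y = np.cross(z, x)
        return np.column_stack([x, y, z])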
Various markers can be generated by the marker generation unit 244. For example, as shown in FIG. 9, the marker generation unit 244 may generate, as the marker, a rectangle of a single color with a clear brightness difference from the surrounding region, which is easy to recognize with Haar-like features. Alternatively, the marker generation unit 244 may generate a planar marker with a low detection cost, such as CyberCode. In that case, the marker can be identified even when identification using Haar-like features is difficult.
(9) Marker display unit 245
The marker display unit 245 has a function of displaying the marker generated by the marker generation unit 244. The marker display unit 245 is realized by, for example, an LCD or an OLED.
(10) Supplement
In FIG. 10, the position correction unit 143 is included in the display device 1-5 and the marker generation unit 244 is included in the operating device 2-5, but the technology according to the present disclosure is not limited to this. For example, the position correction unit 143 and the marker generation unit 244 may be included in another information processing apparatus connected to the display device 1-5 and the operating device 2-5. In that case, the other information processing apparatus may perform the marker recognition and the calculation of the relative position to control the display on the display unit 17, and may generate the correction matrix to control the display of the marker by the marker display unit 245. Similarly, the position correction unit 143 and the marker generation unit 244 may both be included in either the display device 1-5 or the operating device 2-5.
The configuration example of the display system according to the present embodiment has been described above.
<5. Summary>
The embodiments of the technology according to the present disclosure have been described in detail above with reference to FIGS. 1 to 10. According to the embodiments described above, the processing device can perform processing according to the position of the operating device without placing any constraint on the positional relationship between the processing device and the operating device. Specifically, the display system according to these embodiments estimates the postures of the display device 1 and the operating device 2 using the same environment map 3, and estimates their relative position by subsequently integrating the two estimates. Therefore, the display system can estimate the relative position even when the operating device 2 is outside the angle of view of the imaging unit 11 of the display device 1. In other words, the display system can estimate the relative position without placing any constraint on the positional relationship between the display device 1 and the operating device 2, and can perform various kinds of processing according to the estimation result.
Furthermore, the display device 1 does not need to track the operating device 2 itself by image recognition for position estimation, so the load of the relative position estimation processing is reduced. In addition, the display device 1 can estimate the relative position more accurately by recognizing the marker displayed on the operating device 2. Moreover, the operating device 2 can display the marker so that it directly faces the display device 1 by calculating the relative position and, further, by using the correction matrix, so the load of marker recognition on the display device 1 is reduced.
The display device 1 can also display a virtual operation assisting device and express an interaction with the real world. Since the operating device 2 performs posture estimation by SLAM, posture estimation becomes difficult when the imaging surface of the imaging unit 21 and a real object come excessively close to each other. In this regard, by displaying the virtual operation assisting device, the display device 1 can avoid in advance a situation in which posture estimation of the operating device 2 becomes difficult, and can also express an interaction with a real object at a position close to the real object.
The display device 1 and the operating device 2 also share many identical or similar functions. For example, the posture estimation unit 13 and the posture estimation unit 23, the posture information transmitting unit 141 and the posture information transmitting unit 241, and the posture information receiving unit 142 and the posture information receiving unit 242 realize identical or similar functions. The display device 1 and the operating device 2 are therefore interchangeable in terms of software, which improves development efficiency when developing the display system.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.
For example, as noted at various points in the description of the above embodiments, each component of the display device 1 and the operating device 2 may be included in any of the display device 1, the operating device 2, or another information processing apparatus. For example, the display device 1 may have the imaging unit 11, the sensor 12, and the display unit 17, the operating device 2 may have the imaging unit 21, the sensor 22, and the user operation acquisition unit 25, and another information processing apparatus such as a server may have the remaining components. In this case, the server may estimate the posture information of the display device 1 and the operating device 2 by the SLAM technology based on the captured images and the sensing results, and may integrate them to estimate the relative position. The display device 1 may then display the AR image under control of the server, and the operating device 2 may display the marker under control of the server.
The first to third embodiments of the present disclosure can also be combined as appropriate.
Note that the series of control processes performed by each device described in this specification may be realized using software, hardware, or a combination of software and hardware. A program constituting the software is stored in advance in, for example, a storage medium (non-transitory media) provided inside or outside each device. Each program is, for example, read into a RAM at execution time and executed by a processor such as a CPU.
The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure may exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
The following configurations also belong to the technical scope of the present disclosure.
(1)
An information processing apparatus including:
an estimation unit configured to estimate, based on a captured image captured by an imaging unit that images a real space, first posture information indicating a position and a posture of an operating device having the imaging unit in an environment map expressing positions of objects existing in the real space; and
a communication unit configured to transmit the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and the posture of the operating device.
(2)
The information processing apparatus according to (1), wherein the first posture information is used to calculate a relative position between the operating device and the processing device by integration with second posture information indicating a position and a posture of the processing device in the environment map.
(3)
The information processing apparatus according to (2), further including a marker control unit configured to control a display unit of the operating device to display a marker that is recognized by the processing device and used for the calculation of the relative position.
(4)
The information processing apparatus according to (3), wherein the marker control unit controls the display unit to display the marker directly facing an imaging unit of the processing device, based on a calculation result of the relative position obtained by integrating the first posture information and the second posture information.
(5)
The information processing apparatus according to (4), wherein the marker control unit controls the display unit to display the marker directly facing the imaging unit of the processing device, based additionally on a recognition result of the marker by the processing device.
(6)
The information processing apparatus according to any one of (2) to (5), wherein the first posture information is used by the processing device to display a virtual object at a position corresponding to the relative position.
(7)
The information processing apparatus according to any one of (1) to (6), further including a notification unit configured to issue a notification indicating that a distance between an imaging surface of the imaging unit and a real object is equal to or less than a threshold value.
(8)
The information processing apparatus according to any one of (1) to (7), wherein the estimation unit estimates the first posture information by selecting a captured image from an imaging direction in which posture estimation is easy.
(9)
An information processing method executed by a processor of an information processing apparatus, the method including:
estimating, based on a captured image captured by an imaging unit that images a real space, first posture information indicating a position and a posture of an operating device having the imaging unit in an environment map expressing positions of objects existing in the real space; and
transmitting the estimated first posture information to a processing device that performs processing according to the position and the posture of the operating device.
(10)
A program for causing a computer to function as:
an estimation unit configured to estimate, based on a captured image captured by an imaging unit that images a real space, first posture information indicating a position and a posture of an operating device having the imaging unit in an environment map expressing positions of objects existing in the real space; and
a communication unit configured to transmit the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and the posture of the operating device.
DESCRIPTION OF SYMBOLS
1  display device
11  imaging unit
12  sensor
13  posture estimation unit
14  posture information receiving unit
141 posture information transmitting unit
142 posture information receiving unit
143 position correction unit
144 correction matrix transmitting unit
15  control information receiving unit
16  control unit
17  display unit
18  approach determination unit
19  approach notification unit
2  operating device
21  imaging unit
22  sensor
23  posture estimation unit
24  posture information transmitting unit
241 posture information transmitting unit
242 posture information receiving unit
243 correction matrix receiving unit
244 marker generation unit
245 marker display unit
25  user operation acquisition unit
26  control signal transmitting unit
27  approach determination unit
28  approach notification unit
29  depth information acquisition unit
3  environment map
4  three-dimensional data
5  marker

Claims (10)

1.  An information processing apparatus comprising:
    an estimation unit configured to estimate, based on a captured image captured by an imaging unit that images a real space, first posture information indicating a position and a posture of an operating device having the imaging unit in an environment map expressing positions of objects existing in the real space; and
    a communication unit configured to transmit the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and the posture of the operating device.
2.  The information processing apparatus according to claim 1, wherein the first posture information is used to calculate a relative position between the operating device and the processing device by integration with second posture information indicating a position and a posture of the processing device in the environment map.
3.  The information processing apparatus according to claim 2, further comprising a marker control unit configured to control a display unit of the operating device to display a marker that is recognized by the processing device and used for the calculation of the relative position.
4.  The information processing apparatus according to claim 3, wherein the marker control unit controls the display unit to display the marker directly facing an imaging unit of the processing device, based on a calculation result of the relative position obtained by integrating the first posture information and the second posture information.
5.  The information processing apparatus according to claim 4, wherein the marker control unit controls the display unit to display the marker directly facing the imaging unit of the processing device, based additionally on a recognition result of the marker by the processing device.
6.  The information processing apparatus according to claim 2, wherein the first posture information is used by the processing device to display a virtual object at a position corresponding to the relative position.
7.  The information processing apparatus according to claim 1, further comprising a notification unit configured to issue a notification indicating that a distance between an imaging surface of the imaging unit and a real object is equal to or less than a threshold value.
8.  The information processing apparatus according to claim 1, wherein the estimation unit estimates the first posture information by selecting a captured image from an imaging direction in which posture estimation is easy.
9.  An information processing method executed by a processor of an information processing apparatus, the method comprising:
    estimating, based on a captured image captured by an imaging unit that images a real space, first posture information indicating a position and a posture of an operating device having the imaging unit in an environment map expressing positions of objects existing in the real space; and
    transmitting the estimated first posture information to a processing device that performs processing according to the position and the posture of the operating device.
10.  A program for causing a computer to function as:
    an estimation unit configured to estimate, based on a captured image captured by an imaging unit that images a real space, first posture information indicating a position and a posture of an operating device having the imaging unit in an environment map expressing positions of objects existing in the real space; and
    a communication unit configured to transmit the first posture information estimated by the estimation unit to a processing device that performs processing according to the position and the posture of the operating device.
PCT/JP2014/076620 2013-12-17 2014-10-03 Information processing device, information processing method, and program WO2015093130A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013260108A JP2015118442A (en) 2013-12-17 2013-12-17 Information processor, information processing method, and program
JP2013-260108 2013-12-17

Publications (1)

Publication Number Publication Date
WO2015093130A1 true WO2015093130A1 (en) 2015-06-25

Family

ID=53402487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/076620 WO2015093130A1 (en) 2013-12-17 2014-10-03 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2015118442A (en)
WO (1) WO2015093130A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6565465B2 (en) * 2015-08-12 2019-08-28 セイコーエプソン株式会社 Image display device, computer program, and image display system
JP2017134630A (en) * 2016-01-28 2017-08-03 セイコーエプソン株式会社 Display device, control method of display device, and program
JP6938158B2 (en) * 2016-03-10 2021-09-22 キヤノン株式会社 Information processing equipment, information processing methods, and programs
JP6932907B2 (en) * 2016-09-23 2021-09-08 カシオ計算機株式会社 Information processing equipment, information processing system, information processing method and program
CA3087738A1 (en) * 2018-02-06 2019-08-15 Magic Leap, Inc. Systems and methods for augmented reality

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012155654A (en) * 2011-01-28 2012-08-16 Sony Corp Information processing device, notification method, and program
JP2013059573A (en) * 2011-09-14 2013-04-04 Namco Bandai Games Inc Program, information memory medium and game apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111386511A (en) * 2017-10-23 2020-07-07 皇家飞利浦有限公司 Augmented reality service instruction library based on self-expansion
US11861898B2 (en) 2017-10-23 2024-01-02 Koninklijke Philips N.V. Self-expanding augmented reality-based service instructions library
EP3974950A4 (en) * 2019-07-19 2022-08-10 Huawei Technologies Co., Ltd. Interactive method and apparatus in virtual reality scene
US11798234B2 (en) 2019-07-19 2023-10-24 Huawei Technologies Co., Ltd. Interaction method in virtual reality scenario and apparatus

Also Published As

Publication number Publication date
JP2015118442A (en) 2015-06-25

Similar Documents

Publication Publication Date Title
US10936874B1 (en) Controller gestures in virtual, augmented, and mixed reality (xR) applications
WO2015093130A1 (en) Information processing device, information processing method, and program
US10852847B2 (en) Controller tracking for multiple degrees of freedom
US9639988B2 (en) Information processing apparatus and computer program product for processing a virtual object
US9256986B2 (en) Automated guidance when taking a photograph, using virtual objects overlaid on an image
US9824497B2 (en) Information processing apparatus, information processing system, and information processing method
EP2984541B1 (en) Near-plane segmentation using pulsed light source
US11625841B2 (en) Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
JP3926837B2 (en) Display control method and apparatus, program, and portable device
US20160049011A1 (en) Display control device, display control method, and program
JP2021522564A (en) Systems and methods for detecting human gaze and gestures in an unconstrained environment
EP3654146A1 (en) Anchoring virtual images to real world surfaces in augmented reality systems
US10169880B2 (en) Information processing apparatus, information processing method, and program
US11869156B2 (en) Augmented reality eyewear with speech bubbles and translation
CN109992111B (en) Augmented reality extension method and electronic device
US20200341284A1 (en) Information processing apparatus, information processing method, and recording medium
US11436818B2 (en) Interactive method and interactive system
WO2017163648A1 (en) Head-mounted device
CN114115544B (en) Man-machine interaction method, three-dimensional display device and storage medium
WO2019106862A1 (en) Operation guiding system
US20230215098A1 (en) Method and system for creating and storing map target

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14871238

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14871238

Country of ref document: EP

Kind code of ref document: A1