WO2022065120A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2022065120A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual object
information
hand
unit
Prior art date
Application number
PCT/JP2021/033620
Other languages
French (fr)
Japanese (ja)
Inventor
京二郎 永野
淳 木村
真 城間
毅 石川
大輔 田島
純輝 井上
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2022065120A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Description

  • This technology relates to information processing devices, information processing methods, and programs, and in particular to an information processing device, an information processing method, and a program designed to improve operability in the space provided by xR (x-Reality).
  • Japanese Unexamined Patent Publication No. 2014-09296; Japanese Unexamined Patent Publication No. 2019-008798; Japanese Patent Application Laid-Open No. 2003-067107; Japanese Patent No. 5871345
  • In the space provided by xR, the operability of virtual objects may differ from that of the real world, and there is room for improvement in terms of operability.
  • This technology was made in view of such a situation, and it is intended to improve the operability in the space provided by xR (x-Reality).
  • An information processing method of the present technology is a method in which the processing unit of an information processing apparatus controls, based on the attribute of a virtual object to be operated or the operation status of the virtual object, the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object.
  • In the information processing device, the information processing method, and the program of the present technology, the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object is controlled based on the attribute of the virtual object to be operated or the operation status of the virtual object.
  • FIG. 1 is a block diagram showing a configuration example of a first embodiment of an information processing apparatus to which the present technology is applied.
  • the information processing device 11 in FIG. 1 uses an HMD (Head Mounted Display) and a controller to provide a user with a space / world generated by xR (x-Reality).
  • xR is a technology that includes VR (Virtual Reality), AR (Augmented Reality), MR (Mixed Reality), SR (Substitution Reality), and the like.
  • VR is a technology that provides users with a space and world that is different from reality.
  • AR is a technology that provides users with information that does not actually exist, in the space that actually exists (hereinafter referred to as the real space).
  • MR is a technology that provides users with a world that fuses the real space and virtual space of the present and the past.
  • SR is a technology that projects past images in real space to give the illusion that past events are actually happening.
  • the space provided (generated) by xR is called the xR space.
  • The xR space contains at least one of a real object existing in the real space and a virtual object. In the following, when there is no need to distinguish between a real object and a virtual object, it is simply called an object.
  • the HMD is a device equipped with a display that mainly presents images to the user.
  • In this description, a display held on the head, regardless of how it is mounted, is referred to as an HMD.
  • Instead of an HMD, a stationary display such as that of a PC (personal computer), or the display of a mobile terminal such as a notebook PC, a smartphone, or a tablet may be used.
  • the controller is a device for detecting the user's operation, and is not limited to a specific type of controller.
  • Examples of the controller include a controller that the user holds in the hand and a controller worn on the user's hand (hand controller).
  • the information processing device 11 has a sensor unit 21, a control unit 22, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26.
  • the sensor unit 21 includes various sensors such as a camera 31, a gyro sensor 32, an acceleration sensor 33, a direction sensor 34, and a ToF (Time-of-Flight) camera 35.
  • Various sensors of the sensor unit 21 are arranged in any one of a device such as an HMD and a controller worn by the user, a device such as a controller held by the user, an environment in which the user exists, and a user's body.
  • the type of sensor included in the sensor unit 21 is not limited to the type shown in FIG.
  • the sensor unit 21 supplies the sensor information detected by various sensors to the control unit 22.
  • When the sensor unit 21 is distinguished for each device or portion where the sensors are arranged, there are a plurality of sensor units 21, but they are not distinguished in the present embodiment. When there are a plurality of sensor units 21, the types of sensors possessed by each sensor unit 21 may differ.
  • the camera 31 shoots a subject in the shooting range and supplies the image of the subject as sensor information to the control unit 22.
  • the sensor unit 21 arranged in the HMD 73 may include, for example, an outward-facing camera and an inward-facing camera as the camera 31.
  • The outward-facing camera captures the view in front of the HMD.
  • An inward-looking camera placed on the HMD captures the user's eyes.
  • the gyro sensor 32 detects the angular velocity around the three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 22.
  • the gyro sensor 32 is arranged in, for example, an HMD or a controller.
  • the acceleration sensor 33 detects acceleration in the three orthogonal axes directions, and supplies the detected acceleration to the control unit 22 as sensor information.
  • the acceleration sensor 33 is arranged in the HMD or the controller in combination with the gyro sensor 32, for example.
  • the azimuth sensor 34 detects the direction of the geomagnetism as the azimuth, and supplies the detected azimuth as sensor information to the control unit 22.
  • the azimuth sensor 34 is arranged in the HMD or the controller in combination with the gyro sensor 32 and the acceleration sensor 33, for example.
  • the ToF camera 35 detects the distance to the subject (object) and supplies a distance image having the detected distance as a pixel value to the control unit 22 as sensor information.
  • the control unit 22 controls the output to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25 based on the sensor information from the sensor unit 21.
  • the control unit 22 includes a sensor information acquisition unit 41, a position / attitude acquisition unit 42, an object information acquisition unit 43, an intention identification unit 44, a feedback determination unit 45, an output control unit 46, and the like.
  • the sensor information acquisition unit 41 acquires sensor information detected by various sensors of the sensor unit 21.
  • the position / attitude acquisition unit 42 acquires the position and attitude of the HMD and the controller as position / attitude information based on the sensor information acquired by the sensor information acquisition unit 41.
  • the position / posture acquisition unit 42 acquires not only the position and posture of the controller itself, but also the position and posture of the user's hand or finger when the controller attached to the user's hand is used. The positions and postures of the user's hands and fingers can also be acquired based on the image taken by the camera 31.
  • the object information acquisition unit 43 acquires the object information of the object existing in the xR space.
  • the object information may be information that is pre-tagged to the object or may be dynamically extracted from the object in xR space presented to the user.
  • the object information includes intention determination information and feedback information in addition to information regarding the shape, position, and posture of the object in the xR space.
  • the intention determination information is information regarding a determination standard used for determining the presence or absence of an intention in the intention identification unit 44, which will be described later.
  • the feedback information is information about the content of the user's visual, auditory, and tactile feedback to contact with the object.
  • The intention identification unit 44 identifies (determines) the presence or absence of the user's intention with respect to the contact between the user's hand and an object, based on the position / attitude information of the controller (including information regarding the position and posture of the user's hand or fingers) acquired by the position / attitude acquisition unit 42 and the object information acquired by the object information acquisition unit 43.
  • The feedback determination unit 45 generates the content of feedback to the user's visual, auditory, and tactile senses with respect to the contact between the user's hand and the object, based on the determination result of the intention identification unit 44 and the object information acquired by the object information acquisition unit 43.
  • The output control unit 46 generates an image (video), a sound, and a tactile sensation for presenting to the user the xR space generated by a predetermined program such as an application, and supplies the corresponding output signals to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively.
  • the output control unit 46 controls the visual, auditory, and tactile information presented to the user.
  • the output control unit 46 reflects the feedback of the content generated by the feedback determination unit 45 when generating the image, sound, and tactile sensation to be presented to the user.
  • the video display unit 23 is a display that displays an image (video) of the output signal supplied from the output control unit 46.
  • the sound presentation unit 24 is a speaker that outputs the sound of the output signal supplied from the output control unit 46.
  • the tactile presentation unit 25 is a presentation unit that presents the tactile sensation of the output signal supplied from the output control unit 46.
  • For example, an oscillator is used as the tactile presentation unit 25, and the tactile sensation is presented to the user by vibration.
  • the tactile presentation unit 25 is not limited to the oscillator, and may be an arbitrary type vibration generator such as an eccentric motor or a linear vibrator.
  • the tactile presentation unit 25 is not limited to the case where vibration is presented as a tactile sensation, and may be a presentation unit that presents an arbitrary tactile sensation such as pressure.
  • FIG. 2 is a block diagram showing a first configuration example of the hardware in which the information processing apparatus 11 of FIG. 1 is constructed.
  • the hardware 61 of FIG. 2 has a display side device 71, a sensor side device 72, an HMD 73, a speaker 74, a controller 75, and a sensor 76.
  • the display side device 71 has a CPU (Central Processing Unit) 81, a memory 82, an input / output I / F (interface) 83, and a communication device 84. These components are connected by a bus so that data can be exchanged with each other.
  • the CPU 81 executes a series of instructions included in the program stored in the memory 82.
  • The memory 82 includes RAM (Random Access Memory) and storage.
  • the storage includes non-volatile memory such as ROM (Read Only Memory) and a hard disk device.
  • the memory 82 stores programs and data used in the processing of the CPU 81.
  • the input / output I / F 83 inputs / outputs signals to / from the HMD 73, the speaker 74, and the controller 75.
  • The communication device 84 controls communication with the communication device 94 of the sensor side device 72.
  • The sensor side device 72 has a CPU 91, a memory 92, an input / output I / F 93, and a communication device 94. The memory 92 includes RAM and storage, and the storage includes non-volatile memory such as ROM and a hard disk drive. The memory 92 stores programs and data used in the processing of the CPU 91.
  • the input / output I / F 93 inputs / outputs a signal to / from the sensor 76.
  • the communication device 94 controls communication between the display side device 71 and the communication device 84.
  • The HMD 73 is a device having the video display unit 23 of FIG. 1, and is attached to the user's head to present an image (video) of the xR space to the user.
  • the HMD 73 may have a sound presentation unit 24, a camera 31, a gyro sensor 32, an acceleration sensor 33, an orientation sensor 34, and a ToF camera 35 in the sensor unit 21.
  • the speaker 74 is a device provided with the sound presenting unit 24 of FIG. 1, and presents sound to the user.
  • the speaker 74 is arranged, for example, in an environment where a user exists.
  • the controller 75 is a device having the tactile presentation unit 25 of FIG.
  • the controller 75 is attached to or gripped by the user's hand.
  • the controller 75 may have a gyro sensor 32, an acceleration sensor 33, and a direction sensor 34 in the sensor unit 21.
  • the controller 75 may have a control button.
  • the sensor 76 is a device having various sensors in the sensor unit 21 of FIG. 1, and is arranged in an environment where a user exists.
  • FIG. 3 is a block diagram showing a second configuration example of the hardware in which the information processing apparatus 11 of FIG. 1 is constructed.
  • the same parts as those in FIG. 2 are designated by the same reference numerals, and the description thereof will be omitted.
  • the hardware 61 of FIG. 3 has an HMD 73, a speaker 74, a controller 75, a sensor 76, and a display / sensor device 111. Therefore, the hardware 61 of FIG. 3 is common to the case of FIG. 2 in that it has an HMD 73, a speaker 74, a controller 75, and a sensor 76. However, the hardware 61 of FIG. 3 is different from the case of FIG. 2 in that the display / sensor device 111 is provided in place of the display side device 71 and the sensor side device 72 of FIG.
  • the display / sensor device 111 in FIG. 3 is a device in which the display side device 71 and the sensor side device 72 in FIG. 2 are integrated. That is, the display / sensor device 111 of FIG. 3 corresponds to the display side device 71 when the processing of the sensor side device 72 is performed by the display side device 71 in FIG. 2.
  • The display / sensor device 111 of FIG. 3 is common to the display side device 71 of FIG. 2 in that it has a CPU 121, a memory 122, and an input / output I / F 123. However, it differs from the display side device 71 of FIG. 2 in that it does not have the communication device 84 and in that the sensor 76 is connected to the input / output I / F 123.
  • FIG. 4 is an external view illustrating the HMD 73.
  • The HMD 73 in FIG. 4 is a glasses-type HMD (AR glass) compatible with AR or MR.
  • the HMD 73 is fixed to the head of the user 131.
  • the HMD 73 has an image display unit 23A for the right eye and an image display unit 23B for the left eye corresponding to the image display unit 23 of FIG.
  • the video display units 23A and 23B are arranged in front of the user 131.
  • the video display units 23A and 23B are, for example, transmissive displays.
  • Virtual objects (AR virtual objects) such as text, figures, or objects having a three-dimensional structure are displayed on the video display units 23A and 23B.
  • the virtual object is superimposed and displayed on the landscape in the real space.
  • Outward facing cameras 132A and 132B corresponding to the camera 31 of the sensor unit 21 of FIG. 1 are provided on the right and left edges of the front surface of the HMD 73.
  • The outward-facing cameras 132A and 132B photograph the front direction of the HMD 73. That is, the outward-facing cameras 132A and 132B capture the real space in the directions visually recognized by the user's right eye and left eye. For example, by using the outward-facing cameras 132A and 132B as a stereo camera, the shape, position, and posture of an object (real object) existing in the real space can be recognized.
  • The HMD 73 can be made into a video see-through HMD by superimposing the real-space image taken by the outward-facing cameras 132A and 132B and the virtual object and displaying them on the video display units 23A and 23B.
  • Inward facing cameras 132C and 132D corresponding to the camera 31 of the sensor unit 21 in FIG. 1 are provided on the right and left edges on the back side of the HMD 73.
  • the inward cameras 132C and 132D capture the user's right and left eyes.
  • the image taken by the inward-facing cameras 132C and 132D can recognize the position of the user's eyeball, the position of the pupil, the direction of the line of sight, and the like.
  • the HMD 73 may have the speaker 74 (or earphone speaker) or microphone shown in FIG.
  • the HMD 73 is not limited to the case of FIG. 4, and may be an HMD having an appropriate form corresponding to the type of xR space provided by xR (type of xR).
  • FIG. 5 is a diagram illustrating the arrangement of the sensor and the oscillator in the hand controller.
  • The IMUs (Inertial Measurement Units) 151A to 151C and the oscillators 153A to 153C in FIG. 5 are held by, for example, a holding body (not shown) of a controller (hand controller); when the holding body is attached to the user's hand 141, they are placed at various locations on the hand 141.
  • the hand controller corresponds to the controller 75 in FIG.
  • the IMU 151A and the oscillator 153A are arranged on the instep of the hand 141.
  • the IMU 151B and the oscillator 153B are placed on the thumb.
  • the IMU 151C and the oscillator 153C are arranged on the index finger.
  • the IMUs 151A to 151C are, for example, sensor units in which the gyro sensor 32, the acceleration sensor 33, and the directional sensor 34 of the sensor unit 21 of FIG. 1 are integrally packaged.
  • The position and posture of the hand 141, the thumb, and the index finger can be recognized based on the sensor information of the IMUs 151A to 151C.
  • the oscillators 153A to 153C present a tactile sensation to the user's hands and fingers for contact with the object.
  • The IMUs and oscillators corresponding to the IMUs 151B and 151C and the oscillators 153B and 153C are not limited to the thumb and the index finger, and may be arranged on any one or more of the five fingers.
  • FIG. 6 is a flowchart illustrating the processing procedure of the information processing apparatus 11.
  • In step S13, it is assumed that the user moves his or her hand and touches an object. The process proceeds from step S13 to step S14.
  • In step S14, the intention identification unit 44 acquires the object information of the object in contact with the user's hand from the object information acquisition unit 43 of FIG. 1.
  • the intention identification unit 44 determines whether or not the user intends the contact between the user's hand and the object based on the object information (intention determination information). That is, the intention identification unit 44 determines whether or not there is an intention for the contact between the hand and the object.
  • In step S15, the feedback determination unit 45 generates feedback to the user regarding the contact between the user's hand and the object.
  • Feedback on the contact between the user's hand and the object represents the presentation to the user of contact with the object.
  • the presentation to the user is made to any one or more of the user's visual sense, auditory sense, and tactile sense.
  • Generating feedback means generating feedback content for the user's visual, auditory, and tactile sensations.
  • The output control unit 46 presents to the user the image (video), sound, and tactile sensation reflecting the feedback content generated by the feedback determination unit 45, via the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively.
  • the process returns from step S15 to step S11 and is repeated from step S11.
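  • As an illustrative sketch only (not part of the original disclosure), the following Python code outlines how the per-frame processing of FIG. 6 might be organized, using hypothetical interfaces standing in for the sensor unit 21, the position / attitude acquisition unit 42, the intention identification unit 44, the feedback determination unit 45, and the output control unit 46; the contents of steps S11 and S12 are assumed here to be sensor and pose acquisition.

```python
# Minimal sketch of the loop of FIG. 6 (steps S11-S15). All interfaces are
# hypothetical stand-ins for the units of FIG. 1, not an actual implementation.

def process_frame(sensor_unit, pose_acquirer, object_store,
                  intention_identifier, feedback_determiner, output_controller):
    # S11/S12 (assumed): acquire sensor information and the hand/finger poses.
    sensor_info = sensor_unit.read()
    hand_pose = pose_acquirer.hand_pose(sensor_info)

    # S13: check whether the user's hand touches an object in the xR space.
    touched = object_store.find_contact(hand_pose)
    if touched is None:
        return  # nothing touched; the loop repeats from S11

    # S14: decide whether the contact is intentional, using the object's
    # intention determination information (the judgment criteria below).
    intended = intention_identifier.has_intention(hand_pose, touched)

    # S15: generate feedback content (normal feedback if intended, none or
    # awareness-only feedback otherwise) and reflect it in the output signals
    # for the video display, sound presentation, and tactile presentation units.
    feedback = feedback_determiner.generate(touched, intended)
    output_controller.present(feedback)
```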
  • The intention identification unit 44 of the information processing apparatus 11 of FIG. 1 determines the presence or absence of the user's intention with respect to contact with a virtual object (intention presence / absence determination) when the user moves his or her hand and touches the virtual object (or when the virtual object moves and touches the user's hand). At this time, the intention identification unit 44 makes the determination based on the intention determination information acquired by the object information acquisition unit 43.
  • the intention determination information includes information that serves as a determination criterion described later in the intention presence / absence determination. The judgment criteria may be set for each virtual object that the user's hand touches, or may be common regardless of the virtual object.
  • the feedback determination unit 45 of the information processing apparatus 11 generates feedback to the user when it is determined by the intention presence / absence determination that there is an intention. The feedback in this case is normal feedback.
  • The feedback determination unit 45 does not give feedback to the user when it is determined by the intention presence / absence determination that there is no intention; in this case, the feedback determination unit 45 does not generate feedback. Alternatively, instead of generating no feedback, the feedback determination unit 45 may generate feedback corresponding to the case where it is determined that there is no intention (feedback corresponding to an unintentional contact), based on the feedback information acquired by the object information acquisition unit 43. The mode of feedback corresponding to an unintentional contact will be described later.
  • When the intention identification unit 44 determines that there is an intention based on any one of the determination criteria, it may determine that there is an intention as the comprehensive determination result, or it may determine the comprehensive determination result according to which of the plurality of determination criteria yielded a determination of intention or of no intention. For example, assume that priorities are preset for the multiple criteria. Even if the intention identification unit 44 determines that there is an intention based on a determination criterion of a given priority, it may determine that there is no intention as the comprehensive determination result if a criterion with a higher priority yields a determination of no intention.
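  • As a minimal sketch (not taken from the disclosure) of one way such a priority-based comprehensive determination could be combined in code, assuming that each criterion returns True (intention), False (no intention), or None (not applicable), and that the context keys and threshold values are illustrative only:

```python
# Sketch of a comprehensive intention determination from several criteria,
# ordered from highest to lowest priority; the highest-priority applicable
# criterion decides the comprehensive result.

from typing import Callable, Optional, Sequence

Criterion = Callable[[dict], Optional[bool]]

def comprehensive_intention(context: dict,
                            criteria_by_priority: Sequence[Criterion],
                            default: bool = True) -> bool:
    for criterion in criteria_by_priority:   # highest priority first
        verdict = criterion(context)
        if verdict is not None:              # first applicable criterion wins
            return verdict
    return default                           # no criterion applied

# Toy criteria with hypothetical context keys and thresholds.
def first_criterion(ctx: dict) -> Optional[bool]:
    return bool(ctx["in_field_of_view"])     # outside the view -> no intention

def second_criterion(ctx: dict) -> Optional[bool]:
    speed = ctx.get("hand_speed_m_per_s")
    return None if speed is None else speed < 0.5

print(comprehensive_intention({"in_field_of_view": False, "hand_speed_m_per_s": 0.2},
                              [first_criterion, second_criterion]))   # -> False
```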
  • The first to ninth determination criteria used in the intention presence / absence determination will be described in order below.
  • The outlines of the first to ninth determination criteria are as follows.
  • First criterion: whether or not the contact is within the field of view (angle of view)
  • Second criterion: whether or not the hand is moving at high speed
  • Third criterion: whether or not the object is touched from the back side of the hand
  • Fourth criterion: whether or not another object is held in the hand
  • Fifth criterion: whether or not the line of sight or head ray hits the contacted object
  • Sixth criterion: position of the contacted object with respect to the user
  • Seventh criterion: relative distance between the head and the hand
  • Eighth criterion: degree of opening between the index finger and the thumb
  • Ninth criterion: relationship with the object being viewed
  • FIG. 7 is a diagram illustrating the first determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • the HMD 73 displays an image (virtual object) of the field of view range (angle of view range) 171 for the user 161.
  • The visual field range 171 may be regarded not as the range of the entire image displayed by the HMD 73 to the user 161, but as a range limited to the central visual field or the like (a range of a predetermined viewing angle) in consideration of general human visual field characteristics.
  • the virtual object 181 exists outside the field of view 171 and the virtual object 182 exists inside the field of view 171.
  • FIG. 7 shows a state in which the user 161 moves the hand 162 from outside the field of view 171 to the virtual object 182 within the field of view 171.
  • The user 161 does not visually recognize virtual objects outside the visual field range 171. Therefore, while the user 161 is moving the hand 162 from outside the visual field range 171 to inside the visual field range 171, the hand 162 may unintentionally come into contact with the virtual object 181 outside the visual field range 171.
  • the intention identification unit 44 determines that there is no intention when the hand 162 of the user 161 comes into contact with the virtual object 181 existing outside the visual field range 171.
  • the feedback determination unit 45 does not generate feedback for the contact (collision) between the hand 162 and the virtual object 181.
  • The mode in which the feedback determination unit 45 generates feedback corresponding to an unintentional contact will be described later (hereinafter, the same applies).
  • The intention identification unit 44 determines that there is an intention when the hand 162 of the user 161 comes into contact with the virtual object 182 existing in the visual field range 171.
  • The feedback determination unit 45 generates the content of normal feedback for the contact (collision) between the hand 162 and the virtual object 182.
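  • The following is a hedged sketch (not from the disclosure) of the first determination criterion, modeling the visual field range 171 as a cone of a predetermined half-angle around the forward direction of the HMD 73; the vector representation and the 30-degree value are assumptions for illustration.

```python
# Sketch of the first criterion: contact with a virtual object outside the
# field of view is judged unintentional. The field of view is approximated by
# a cone around the HMD forward direction (half-angle is an assumed value).

import math

def within_field_of_view(object_pos, head_pos, head_forward, half_angle_deg=30.0):
    to_obj = [o - h for o, h in zip(object_pos, head_pos)]
    norm_obj = math.sqrt(sum(c * c for c in to_obj)) or 1e-9
    norm_fwd = math.sqrt(sum(c * c for c in head_forward)) or 1e-9
    cos_angle = sum(a * b for a, b in zip(to_obj, head_forward)) / (norm_obj * norm_fwd)
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle_deg <= half_angle_deg

print(within_field_of_view((0.2, 0.0, 1.5), (0, 0, 0), (0, 0, 1)))   # in view -> True
print(within_field_of_view((0.0, 0.0, -2.0), (0, 0, 0), (0, 0, 1)))  # behind  -> False
```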
  • FIG. 8 is a diagram illustrating a second determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 181 and a virtual object 182 exist in the space.
  • FIG. 8 shows a situation in which the hand 162 touches the virtual object 181 while the user 161 is moving the hand 162 from a distant position in order to touch the virtual object 182.
  • In such a case, the user 161 moves the hand 162 at a somewhat high speed until immediately before touching the virtual object 182, and slows the hand 162 down immediately before touching it. Therefore, when the hand 162 touches a virtual object while moving at a speed at or above a predetermined threshold, it is considered that the user 161 does not intend to touch that virtual object.
  • The intention identification unit 44 therefore determines that there is no intention when the hand 162 touches a virtual object while moving at a speed at or above the threshold, and determines that there is an intention when the movement speed of the hand 162 is less than the threshold. In FIG. 8, for example, when the hand 162 touches the virtual object 182, it is determined that there is an intention. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as the comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
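  • A minimal sketch (illustrative only) of the second determination criterion follows; the 0.5 m/s threshold is an assumed value, not one given in the disclosure.

```python
# Sketch of the second criterion: if the hand is still moving at or above a
# threshold speed at the moment of contact, the contact is judged unintentional.

def intention_by_speed(hand_speed_m_per_s: float,
                       threshold_m_per_s: float = 0.5) -> bool:
    """True (intention) only if the hand has slowed down below the threshold."""
    return hand_speed_m_per_s < threshold_m_per_s

print(intention_by_speed(1.2))  # fast pass-through           -> False (no intention)
print(intention_by_speed(0.1))  # slowed just before touching -> True (intention)
```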
  • FIG. 9 is a diagram illustrating a third determination criterion.
  • the user 161 wears the HMD 73 on the head and visually recognizes the space provided by xR.
  • a virtual object 183 exists in the space.
  • Situation 173 shows a situation where the user 161 touches the virtual object 183 from the back side of the hand 162 when the hand 162 is moved.
  • Situation 174 shows a situation where the user 161 touches the virtual object 183 from the flat side of the hand 162 when the hand 162 is moved.
  • In the case of situation 173, in which the virtual object 183 is touched from the back side of the hand 162, the intention identification unit 44 determines that there is no intention.
  • In that case, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 183. Conversely, in the case of situation 174, in which the virtual object 183 is touched from the palm side of the hand 162, it is determined that there is an intention, and normal feedback for the contact is generated.
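  • As one possible sketch (an assumption, not the method specified in the disclosure) of the third determination criterion, the side of the hand that reaches the object first can be estimated by comparing the palm normal with the hand's movement direction:

```python
# Sketch of the third criterion: if the back of the hand leads the motion when
# the contact occurs, the contact is judged unintentional. Here the palm normal
# (pointing out of the palm) is compared with the hand velocity direction.

def touched_with_palm_side(palm_normal, hand_velocity) -> bool:
    dot = sum(n * v for n, v in zip(palm_normal, hand_velocity))
    return dot > 0.0  # palm faces the direction of motion -> palm-side contact

print(touched_with_palm_side((0, 0, 1), (0, 0, 0.4)))   # palm-first contact -> True
print(touched_with_palm_side((0, 0, 1), (0, 0, -0.4)))  # back-first contact -> False
```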
  • The intention identification unit 44 determines that there is no intention when the hand 162 of the user 161 comes into contact with the virtual object 185 while the hand 162 is holding the virtual object 184.
  • the feedback determination unit 45 does not generate feedback for the contact (collision) between the hand 162 and the virtual object 185.
  • The intention identification unit 44 determines that there is an intention when the hand 162 of the user 161 comes into contact with the virtual object 185 while not holding the virtual object 184.
  • When the intention identification unit 44 determines that there is an intention as the comprehensive determination result in consideration of this determination result, the feedback determination unit 45 generates normal feedback for the contact (collision) between the hand 162 and the virtual object 185.
  • FIG. 11 is a diagram illustrating a fifth determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 186 and a virtual object 187 exist in the space.
  • The virtual object 187 is hit by the line of sight of the user 161 or by the head ray indicating the front direction of the HMD 73 (that is, the virtual object 187 exists in the line-of-sight direction or the head ray direction).
  • FIG. 11 shows a state in which the hand 162 comes into contact with the virtual object 186 while the user 161 moves the hand 162 toward the virtual object 187 in the line-of-sight direction or the head ray direction. In this case, since the user 161 is not looking at the virtual object 186, it is considered that the user 161 does not intend to touch the virtual object 186.
  • In this case, the intention identification unit 44 determines that there is no intention for the contact between the hand 162 and the virtual object 186.
  • The feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 186.
  • On the other hand, when the hand 162 of the user 161 comes into contact with the virtual object 187 hit by the line of sight or the head ray, the intention identification unit 44 determines that there is an intention. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as the comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object 187.
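  • The following sketch (illustrative, with an assumed spherical object bound) shows one way the fifth determination criterion could test whether the line of sight or head ray hits the contacted virtual object:

```python
# Sketch of the fifth criterion: the contact is judged intentional only if the
# gaze ray (or the head ray of the HMD) passes through the contacted object,
# approximated here as a bounding sphere.

import math

def ray_hits_object(ray_origin, ray_dir, obj_center, obj_radius) -> bool:
    oc = [c - o for c, o in zip(obj_center, ray_origin)]
    d_len = math.sqrt(sum(c * c for c in ray_dir)) or 1e-9
    d = [c / d_len for c in ray_dir]                  # normalized ray direction
    t = sum(a * b for a, b in zip(oc, d))             # projection of center on the ray
    if t < 0:
        return False                                  # object is behind the user
    closest_sq = sum(c * c for c in oc) - t * t       # squared ray-to-center distance
    return closest_sq <= obj_radius * obj_radius

print(ray_hits_object((0, 0, 0), (0, 0, 1), (0.05, 0, 1.5), 0.1))  # gazed at -> True
print(ray_hits_object((0, 0, 0), (0, 0, 1), (1.0, 0, 0.3), 0.1))   # off-axis -> False
```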
  • FIG. 12 is a diagram illustrating the sixth determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 188 and a virtual object 189 exist in the space.
  • The virtual object 188 exists in the area (right side area 190R) on the right side of the area (front area 190C) in front of the user 161.
  • The virtual object 189 exists in the front area 190C with respect to the user 161. Here, it is assumed that the space in front of the user 161 is divided into three regions in the left-right direction by boundary surfaces along the front direction. Of the three regions, the region having a predetermined width (about shoulder width) in the left-right direction about the central axis of the user 161 is defined as the front area 190C.
  • The region on the right side of the front area 190C is the right side area 190R, and the region on the left side is the left side area 190L.
  • FIG. 12 shows a situation in which the right hand 162R touches the virtual object 188 in the right side area 190R while the user 161 moves the right hand 162R from the right side area 190R toward the virtual object 189 in the front area 190C.
  • When the user 161 moves the right hand 162R, the target virtual object is likely to exist in front of the body (front area 190C) or on the left side (left side area 190L). That is, when the user 161 is moving the right hand 162R within the right side area 190R, there is a high possibility that the right hand 162R is being moved toward a virtual object existing in the front area 190C or the left side area 190L. Similarly, when the user 161 is moving the left hand 162L within the left side area 190L, there is a high possibility that the left hand 162L is being moved toward a virtual object existing in the front area 190C or the right side area 190R.
  • Therefore, when the right hand 162R of the user 161 comes into contact with a virtual object existing in the right side area 190R (for example, the virtual object 188), or when the left hand 162L comes into contact with a virtual object existing in the left side area 190L, the intention identification unit 44 determines that the contact is unintentional. In consideration of this determination result, when the intention identification unit 44 determines that there is no intention as the comprehensive determination result, the feedback determination unit 45 does not generate feedback for the contact between the right hand 162R or the left hand 162L and the virtual object.
  • When the right hand 162R of the user 161 comes into contact with a virtual object (for example, the virtual object 189) existing in the front area 190C or the left side area 190L, or when the left hand 162L of the user 161 comes into contact with a virtual object (for example, the virtual object 189) existing in the front area 190C or the right side area 190R, the intention identification unit 44 determines that there is an intention.
  • the feedback determination unit 45 generates normal feedback for the contact between the right hand 162R or the left hand 162L and the virtual object.
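  • A minimal sketch (illustrative only) of the sixth determination criterion follows; the lateral offset is assumed to be measured in the user's body frame, and the front-area half-width of 0.25 m is a stand-in for "about shoulder width".

```python
# Sketch of the sixth criterion: contact with an object in the region on the
# same side as the moving hand (right hand in the right side area 190R, left
# hand in the left side area 190L) is judged unintentional.

def region_of(lateral_offset_m: float, half_width_m: float = 0.25) -> str:
    if lateral_offset_m > half_width_m:
        return "right"
    if lateral_offset_m < -half_width_m:
        return "left"
    return "front"

def intention_by_region(hand: str, object_lateral_offset_m: float) -> bool:
    region = region_of(object_lateral_offset_m)
    return not ((hand == "right" and region == "right") or
                (hand == "left" and region == "left"))

print(intention_by_region("right", 0.6))  # right hand, right-side object -> False
print(intention_by_region("right", 0.0))  # right hand, object in front   -> True
```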
  • FIG. 13 is a diagram illustrating a seventh determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 191 and a virtual object 192 exist in the space.
  • The virtual object 191 exists at a distance equal to or less than the short-distance boundary 201 from the head of the user 161 (for example, the eyes; hereinafter the same applies). The short-distance boundary 201 represents a predetermined distance from the head of the user 161, or the position at that distance.
  • the virtual object 192 exists at a distance farther (larger) than the short-distance boundary 201 and closer (smaller) than the long-distance boundary 202 with respect to the head of the user 161.
  • The long-distance boundary 202 represents a predetermined distance from the head of the user 161, or the position at that distance.
  • the distance of the long-distance boundary 202 is farther (larger) than that of the short-distance boundary 201.
  • The distance of the long-distance boundary 202 is set to a distance beyond which it is estimated that the user would not touch a virtual object, for reasons similar to those for the short-distance boundary 201.
  • The distances of the short-distance boundary 201 and the long-distance boundary 202 do not have to be measured from the head of the user 161, and may be, for example, measured from the central axis of the body of the user 161.
  • FIG. 13 shows a state in which the hand 162 passes through the virtual object 191 while the user 161 is moving the hand 162 from a distance of the short distance boundary 201 or less to the virtual object 192.
  • Since the virtual object 191 exists at a distance equal to or less than the short-distance boundary 201, it is considered that the user 161 does not intend to touch the virtual object 191.
  • Since the virtual object 192 exists at a distance farther than the short-distance boundary 201 and closer than the long-distance boundary 202, it is considered that the user 161 intends to touch the virtual object 192.
  • When the hand 162 of the user 161 and a virtual object come into contact with each other, the intention identification unit 44 determines that there is no intention if the distance from the head of the user 161 to the contact position is equal to or less than the short-distance boundary 201, or equal to or greater than the long-distance boundary 202.
  • The distance from the head of the user 161 to the contact position may be the distance from the head to the hand 162 when the hand 162 and the virtual object are in contact, or the distance from the head to the virtual object.
  • the contact between the hand 162 and the virtual object 191 is determined by the intention identification unit 44 to be unintentional.
  • the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
  • When the hand 162 of the user 161 and a virtual object come into contact with each other, the intention identification unit 44 determines that there is an intention if the distance from the head of the user 161 to the contact position is farther than the short-distance boundary 201 and closer than the long-distance boundary 202. In FIG. 13, the contact between the hand 162 and the virtual object 192 is determined by the intention identification unit 44 to be intentional. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as the comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
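  • A minimal sketch (illustrative only) of the seventh determination criterion follows; the boundary values stand in for the short-distance boundary 201 and the long-distance boundary 202 and are not values given in the disclosure.

```python
# Sketch of the seventh criterion: a contact is judged intentional only when
# the contact position lies between the short-distance boundary 201 and the
# long-distance boundary 202 measured from the user's head.

def intention_by_distance(distance_to_head_m: float,
                          near_boundary_m: float = 0.15,
                          far_boundary_m: float = 0.8) -> bool:
    return near_boundary_m < distance_to_head_m < far_boundary_m

print(intention_by_distance(0.05))  # almost touching the face    -> False
print(intention_by_distance(0.45))  # comfortable reaching range  -> True
print(intention_by_distance(1.2))   # beyond the far boundary     -> False
```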
  • FIG. 14 is a diagram illustrating the eighth determination criterion.
  • the virtual object 221 exists in a space provided by xR to a user (referred to as user 161) who wears an HMD 73 (not shown) on his head.
  • Situation 211 shows a case where the distance between the thumb and the index finger is smaller than the size of the virtual object 221 when the hand 162 of the user 161 comes into contact with the virtual object 221.
  • Situation 212 shows a case where the distance between the thumb and the index finger is larger than the size of the virtual object 221 when the hand 162 of the user 161 comes into contact with the virtual object 221.
  • When the user 161 intends to grasp the virtual object 221, the distance between the thumb and the index finger becomes larger than the size of the virtual object due to the pre-shaping operation performed before grasping.
  • Here, the size of the virtual object refers to the minimum distance between the thumb and the index finger at which the virtual object can be grasped.
  • When the hand 162 of the user 161 comes into contact with the virtual object 221 and the distance between the thumb and the index finger of the hand 162 is smaller than the size of the virtual object 221, the intention identification unit 44 determines that the contact is unintentional. When the intention identification unit 44 determines that there is no intention as the comprehensive determination result in consideration of this determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 221.
  • When the hand 162 of the user 161 comes into contact with the virtual object 221 and the distance between the thumb and the index finger of the hand 162 is larger than the size of the virtual object 221, the intention identification unit 44 determines that there is an intention. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as the comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object 221.
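  • The following sketch (illustrative only; requires Python 3.8+ for math.dist) shows one way the eighth determination criterion could compare the thumb-index opening with the graspable size of the virtual object:

```python
# Sketch of the eighth criterion: if the thumb-index opening at the moment of
# contact is larger than the graspable size of the virtual object, the hand is
# pre-shaped to grasp it and the contact is judged intentional.

import math

def intention_by_preshaping(thumb_tip, index_tip, object_grasp_size_m: float) -> bool:
    opening = math.dist(thumb_tip, index_tip)  # distance between the fingertips
    return opening > object_grasp_size_m

# Object graspable with a 4 cm opening; fingers opened to 7 cm vs. 2 cm.
print(intention_by_preshaping((0.00, 0.0, 0.0), (0.07, 0.0, 0.0), 0.04))  # True
print(intention_by_preshaping((0.00, 0.0, 0.0), (0.02, 0.0, 0.0), 0.04))  # False
```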
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • In the space, there exist a television receiver 231, which is a virtual object; a remote controller 232, which is a virtual object related to the television receiver 231; and a virtual object 233, which is not related to the television receiver 231.
  • The line-of-sight direction of the user 161 or the head ray direction of the HMD 73 is directed toward the television receiver 231.
  • FIG. 15 shows a state in which the user 161 touches the virtual object 233 when moving the hand 162 to take the remote controller 232 while watching the monitor of the television receiver 231.
  • Since the user 161 is watching the television receiver 231, the object to be grasped is highly likely to be the remote controller 232, which is related to the television receiver 231. In this way, when the user 161 moves the hand 162, it is considered that the user 161 intends contact between the hand 162 and a virtual object that is highly related to the virtual object the user has been viewing at least immediately before.
  • The intention identification unit 44 acquires the degree of relevance between the contacted virtual object and the virtual object viewed by the user 161.
  • The virtual object viewed by the user 161 means a virtual object currently viewed by the user 161, or a virtual object viewed by the user 161 until immediately before the user 161 moves the hand 162. Whether or not an object is a virtual object viewed by the user 161 can be determined from the line-of-sight direction of the user 161 or the head ray direction of the HMD 73.
  • The degree of relevance between virtual objects is preset. For example, it may be set in two levels (high relevance or low relevance), or it may be set as a numerical value that increases as the relevance becomes higher.
  • When the virtual object touched by the hand 162 of the user 161 has a low relevance to the virtual object viewed by the user 161, the intention identification unit 44 determines that there is no intention. For example, when the degree of relevance is set as a continuous or stepwise numerical value, the relevance is determined to be low when it is smaller than a predetermined threshold. In FIG. 15, the intention identification unit 44 determines that there is no intention for the contact between the hand 162 of the user 161 and the virtual object 233. When the intention identification unit 44 determines that there is no intention as the comprehensive determination result in consideration of this determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
  • When the intention identification unit 44 determines that the virtual object touched by the hand 162 of the user 161 is highly related to the virtual object viewed by the user 161, it determines that there is an intention. For example, when the degree of relevance is set as a continuous or stepwise numerical value, the relevance is determined to be high when it is equal to or greater than a predetermined threshold. In FIG. 15, the intention identification unit 44 determines that there is an intention for the contact between the hand 162 of the user 161 and the remote controller 232. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as the comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the remote controller 232.
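  • A minimal sketch (illustrative only) of the ninth determination criterion follows; the relevance table, identifiers, and threshold are hypothetical values, not values given in the disclosure.

```python
# Sketch of the ninth criterion: the contact is judged intentional when the
# touched virtual object is sufficiently related to the virtual object the
# user has been viewing (here, the television receiver 231).

RELEVANCE = {
    ("tv_receiver_231", "remote_controller_232"): 0.9,  # remote of the viewed TV
    ("tv_receiver_231", "virtual_object_233"): 0.1,     # unrelated object
}

def intention_by_relevance(viewed_id: str, touched_id: str,
                           threshold: float = 0.5) -> bool:
    return RELEVANCE.get((viewed_id, touched_id), 0.0) >= threshold

print(intention_by_relevance("tv_receiver_231", "remote_controller_232"))  # True
print(intention_by_relevance("tv_receiver_231", "virtual_object_233"))     # False
```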
  • As described above, when the intention identification unit 44 determines that there is no intention, the feedback determination unit 45 does not generate feedback. However, when the intention identification unit 44 determines that there is no intention, the feedback determination unit 45 may instead generate feedback corresponding to the unintentional contact.
  • As visual feedback corresponding to an unintentional contact, the feedback determination unit 45 changes the brightness of the contacted virtual object. For example, when the HMD 73 is an optical see-through type AR glass, the feedback determination unit 45 generates feedback that lowers the brightness of the contacted virtual object. When the HMD 73 is a video see-through type AR glass or a smartphone-type AR glass using a smartphone, the feedback determination unit 45 generates feedback that raises the brightness of the contacted virtual object. The content of the generated feedback is reflected in the image generated by the output control unit 46, and the image is supplied to the video display unit 23. As a result, visual feedback (feedback corresponding to an unintentional contact) for the unintentional contact between the user's hand and the virtual object is executed.
  • FIG. 16 is a diagram illustrating feedback corresponding to an unintentional contact.
  • the state 241 in FIG. 16 represents a virtual object 251 that is visually presented to the user before the user's hand touches it.
  • the state 242 represents the virtual object 251 presented to the user's vision in a predetermined period after the user's hand touches the virtual object 251.
  • the brightness of the virtual object 251 changes from the state 241 to the state 242 when the user's hand touches it.
  • the user can recognize that he / she has touched the virtual object 251 even if he / she does not intend to touch the virtual object 251.
  • there is no effect such as the movement of the virtual object 251 due to contact with the hand.
  • The predetermined period during which the state 242 is maintained before the brightness returns to the original level (state 243) may be a fixed length of time, or it may be the time during which the user's hand is in contact with the virtual object 251.
  • the brightness of the virtual object in the state 242 may be changed according to the speed at which the user's hand passes through the virtual object 251. For example, when the user's hand passes through the virtual object 251 faster than a predetermined speed, the brightness of the virtual object 251 in the state 242 is reduced by about 80% as compared with the state 241. When the user's hand passes through the virtual object 251 slower than a predetermined speed, the brightness of the virtual object 251 in the state 242 is reduced by about 40% as compared with the state 241.
  • the unintentional feedback to the visual sense is not limited to the case of changing the brightness of the contacted virtual object, but may be the case of changing an arbitrary element (color, etc.) of the display form of the contacted virtual object.
  • When the intention identification unit 44 determines that there is no intention as the comprehensive determination result when the hand and the virtual object come into contact with each other, the feedback determination unit 45 may generate feedback that presents a sound at the moment of contact.
  • As a method of presenting the sound, stereophonic sound (3D audio) may be used so that the user can recognize the contact position from the sound.
  • Similarly, the feedback determination unit 45 may generate tactile feedback that presents vibration to the user's hand at the moment when the hand and the virtual object come into contact with each other.
  • The purpose of the feedback corresponding to an unintentional contact is to make the user aware of the existence of the virtual object. Therefore, as such feedback, the feedback determination unit 45 generates, for example, feedback that presents vibration at a lower frequency and for a shorter period than normal feedback when the hand and the virtual object come into contact with each other.
  • The feedback determination unit 45 may also change the tactile sensation according to the speed at which the user's hand passes through the virtual object. For example, when the speed at which the user's hand passes through the virtual object is faster than a predetermined speed, the feedback determination unit 45 generates feedback that presents low-frequency vibration for a short period (first period). When the speed at which the user's hand passes through the virtual object is slower than the predetermined speed, the feedback determination unit 45 may generate feedback that presents low-frequency vibration for a longer period (a second period longer than the first period).
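  • The following sketch (illustrative only; all numeric values are stand-ins, not values specified in the disclosure) shows one way the awareness-only tactile feedback described above could be parameterized by the pass-through speed of the hand:

```python
# Sketch of the feedback corresponding to an unintentional contact: a lower
# frequency and shorter vibration than normal feedback, with the duration
# switched between a first (short) and second (longer) period by the speed at
# which the hand passes through the virtual object.

from dataclasses import dataclass

@dataclass
class Vibration:
    frequency_hz: float
    duration_s: float
    amplitude: float  # 0.0 to 1.0, relative drive level of the oscillator

def unintentional_vibration(pass_speed_m_per_s: float,
                            speed_threshold_m_per_s: float = 0.5) -> Vibration:
    duration = 0.05 if pass_speed_m_per_s > speed_threshold_m_per_s else 0.15
    return Vibration(frequency_hz=60.0, duration_s=duration, amplitude=0.3)

print(unintentional_vibration(1.0))  # quick pass -> short low-frequency pulse
print(unintentional_vibration(0.2))  # slow pass  -> longer low-frequency pulse
```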
  • By not generating normal feedback for an unintentional contact, it is possible to reduce the influence of the contact on the object (for example, the object being moved by the contact). That is, there is a limit to the field of view that the user can see with the HMD 73, and there can be many objects in the xR space. Therefore, the user may unintentionally touch another object while moving his or her hand toward the object he or she wants to touch. At that time, if an object that the user does not intend to touch flies away or makes a sound, the user is confused.
  • In the first embodiment of the information processing apparatus, since normal feedback is executed only when the intended object is touched, no feedback that confuses the user is provided, and it becomes easy for the user to perform operations in the xR space. In the first embodiment, it is also possible to give some feedback (feedback corresponding to an unintentional contact) to the user when the user's hand unintentionally touches an object. This makes it possible to inform the user of the presence of an object that the user cannot see.
  • FIG. 17 is a block diagram showing a configuration example of a second embodiment of the information processing apparatus to which the present technology is applied.
  • the information processing device 301 in FIG. 17 provides the user with the space / world generated by AR by using the HMD and the controller.
  • The second embodiment can be applied to an information processing apparatus that provides the user with a space / world generated by xR, similarly to the information processing apparatus 11 of FIG. 1 in the first embodiment.
  • Hereinafter, the space provided (generated) by AR will be referred to as the AR space.
  • In the AR space, real objects and virtual objects coexist in the real space.
  • In the following, when there is no need to distinguish between a real object existing in the real space and a virtual object, it is simply called an object.
  • the sensor unit 321 has an inward camera 331, an outward camera 332, a microphone 333, a gyro sensor 334, an acceleration sensor 335, and an azimuth sensor 336.
  • the inward-facing camera 331 corresponds to the inward-facing cameras 132C and 132D in the HMD73 of FIG.
  • the inward camera 331 photographs the user's eye and supplies the captured image as sensor information to the control unit 322.
  • the gyro sensor 334 detects the angular velocity around the three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 322.
  • the gyro sensor 334 detects the angular velocity of the AR glass system 311 (HMD73 as hardware).
  • The acceleration sensor 335 detects the acceleration in the three orthogonal axis directions, and supplies the detected acceleration to the control unit 322 as sensor information.
  • The acceleration sensor 335 detects the acceleration of the AR glass system 311 (HMD 73 as hardware).
  • the azimuth sensor 336 detects the direction of the geomagnetism as the azimuth, and supplies the detected azimuth as sensor information to the control unit 322.
  • The azimuth sensor 336 detects the orientation of the AR glass system 311 (HMD 73 as hardware).
  • Based on the sensor information from the sensor unit 321 and the sensor information (described later) from the hand controller 312, the control unit 322 controls the output to the display unit 324 (corresponding to the video display unit 23 in FIG. 1), the speaker 325 (one form of the sound presentation unit 24 in FIG. 1), and the vibration presentation unit 354 in the hand controller 312 (corresponding to the tactile presentation unit 25 in FIG. 1).
  • the hand position detection unit 341 detects the position and posture of the user's hand or finger based on the sensor information from the hand controller 312 (for example, corresponding to the hand controller shown in FIG. 5).
  • the positions and postures of the user's hands and fingers are detected based on the sensor signals from the gyro sensor 351, the acceleration sensor 352, and the orientation sensor 353, which will be described later, mounted on the hand controller 312.
  • the position and posture of the user's hand or finger may be detected based on the image taken by the outward camera 332.
  • For example, the hand position detection unit 341 can detect the positions and postures of the hand and fingers by detecting the shape (posture) of the hand controller 312, or a marker provided on the hand controller 312, based on the image from the outward camera 332.
  • The hand position detection unit 341 may detect the positions and postures (shapes) of the hand (hand controller 312) and the fingers more accurately by integrating the positions and postures detected from the image of the outward camera 332 with those detected from the sensor signals of the gyro sensor 351, the acceleration sensor 352, and the orientation sensor 353 mounted on the hand controller 312, which will be described later.
  • the object detection unit 342 detects object information (object information) representing the position and shape of the object in the AR space.
  • The object detection unit 342 detects the position and shape of a real object based on the image from the outward camera 332 or the distance image (depth information) obtained by a ToF camera (not shown, corresponding to the ToF camera 35 in FIG. 1).
  • The position and shape of the real object may be known in advance, or may be detected based on design information of the environment (building, etc.) in which the user exists or on information acquired from another system.
  • the application execution unit 343 generates an AR space to be provided to the user by executing a program of a predetermined application.
  • In the approach / contact detection, the application execution unit 343 detects whether or not the user's hand and any object are approaching or contacting each other (approach / contact), based on the position and posture of the user's hand detected by the hand position detection unit 341 and the position and shape of the object detected by the object detection unit 342. The application execution unit 343 also detects the object that the hand has approached or come into contact with.
  • In the feedback generation process, the application execution unit 343 generates feedback (content) to the user's visual, auditory, and tactile senses when the user's hand approaches or touches an object.
  • The output control unit 344 generates an image (video), sound, and tactile sensation for presenting to the user the AR space generated by the application execution unit 343, and generates output signals to be output by the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively.
  • the output control unit 344 supplies the generated output signal to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312. As a result, the output control unit 344 controls the visual, auditory, and tactile information presented to the user.
  • the speaker 325 is a sound presentation unit that outputs the sound of the output signal supplied from the output control unit 344.
  • the storage unit 326 stores programs and data executed by the control unit 322.
  • the gyro sensor 351 detects the angular velocity around the three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 322 of the AR glass system 311.
  • the gyro sensor 351 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and detects the angular velocity of the hand and the finger.
  • the acceleration sensor 352 detects the acceleration in the orthogonal three-axis directions, and supplies the detected acceleration as sensor information to the control unit 322 of the AR glass system 311.
  • the acceleration sensor 352 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and detects the acceleration of the hand and the finger.
  • the azimuth sensor 353 detects the direction of the geomagnetism as the azimuth, and supplies the detected azimuth as sensor information to the control unit 322 of the AR glass system 311.
  • the orientation sensor 353 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and detects the orientation of the hand and the finger.
  • The vibration presentation unit 354 generates vibration according to the output signal supplied from the control unit 322 (output control unit 344) of the AR glass system 311.
  • the vibration presenting unit 354 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and gives vibration to the hand and the finger.
  • FIG. 18 is a flowchart illustrating the processing procedure of the information processing apparatus 301.
  • In step S31, the application execution unit 343 of FIG. 17 determines, by approach/contact detection, whether or not the user's hand is approaching or contacting an object (whether or not it is in the approaching/contacting state).
  • If it is determined in step S31 that the hand is not in the approaching/contacting state, step S31 is repeated.
  • If it is determined in step S31 that the hand is in the approaching/contacting state, the process proceeds from step S31 to step S32.
  • In step S32, the application execution unit 343 determines, by visual recognition detection, whether or not the user can see the object that the hand is approaching or contacting (whether it is in the visible state or the non-visible state).
  • If it is determined in step S32 that the object is in the visible state, the process proceeds from step S32 to step S33.
  • In step S33, the application execution unit 343 generates normal (conventional) feedback to the user regarding the approach/contact between the hand and the object by the feedback generation process.
  • the ordinary feedback has the same meaning as the ordinary feedback described in the first embodiment of the information processing apparatus.
  • the application execution unit 343 generates feedback that presents a realistic tactile sensation (vibration) to the user's hand according to the movement of the object and the hand.
  • the output control unit 344 generates a tactile sensation (vibration) that reflects the feedback of the content generated by the application execution unit 343.
  • the output control unit 344 supplies the generated tactile sensation (vibration) to the vibration presentation unit 354 of the hand controller 312 as an output signal.
  • In this way, normal feedback to the user's tactile sensation, corresponding to the visible state, is executed for the approach/contact between the user's hand and the object.
  • the process returns from step S33 to step S31 and is repeated from step S31.
  • If it is determined in step S32 that the object is in the non-visible state, the process proceeds from step S32 to step S34.
  • In step S34, the application execution unit 343 detects the object that the user's hand is approaching or contacting. The process proceeds from step S34 to step S35.
  • The object that the hand is approaching or contacting may instead be detected at the same time that the approach/contact itself is detected in step S31.
  • In step S35, the application execution unit 343 generates, by the feedback generation process, feedback to the user's visual, auditory, and tactile senses according to the properties of the object detected in step S34.
  • The output control unit 344 generates an image (video), sound, and tactile sensation that reflect the feedback content generated by the application execution unit 343, and supplies them as output signals to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312.
  • the process returns from step S35 to step S31 and is repeated from step S31.
  • The application execution unit 343 of the information processing apparatus 301 in FIG. 17 detects, by the approach/contact process, that the user's hand and an object are close to each other or in contact with each other. In this approach/contact process, the application execution unit 343 acquires the position and posture of the user's hand detected by the finger position detection unit 341 and the position and shape (object information) of each object in the AR space detected by the object detection unit 342.
  • Based on the acquired information, the application execution unit 343 determines whether or not the distance between the user's hand and each object (the distance between the closest parts of the hand region and the object region) is equal to or less than a predetermined threshold value. When there is an object whose distance from the user's hand is equal to or less than the threshold value (hereinafter referred to as a target object), it is detected that the user's hand and the target object are close to or in contact with each other.
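  • For illustration only, the following sketch shows one way the distance-threshold test above could be implemented, assuming the hand region and each object region are approximated by sampled surface points; the function names and the threshold value (APPROACH_THRESHOLD_M) are assumptions and not part of the embodiment.

```python
import numpy as np

# Assumed threshold for treating the hand and an object as approaching/contacting.
APPROACH_THRESHOLD_M = 0.03  # 3 cm (illustrative value)

def closest_distance(hand_points: np.ndarray, object_points: np.ndarray) -> float:
    """Distance between the closest parts of the hand region and the object region."""
    diffs = hand_points[:, None, :] - object_points[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def detect_target_objects(hand_points: np.ndarray, objects: dict) -> list:
    """Return the IDs of objects whose distance from the hand is at or below the threshold."""
    return [obj_id for obj_id, pts in objects.items()
            if closest_distance(hand_points, pts) <= APPROACH_THRESHOLD_M]
```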
  • The state in which the user's hand and the object are merely close to each other may be regarded as a state in which they are in contact, so that proximity is included in contact.
  • In the following, the feedback when the user's hand and an object come into contact with each other is described, but approach may be handled in the same manner as contact.
  • Visual recognition detection: when the application execution unit 343 detects that the user's hand and an object are close to or in contact with each other, it detects by visual recognition detection whether or not the user is looking at the approaching/contacting target object (whether it is in the visible state or the non-visible state). In this visual recognition detection, the application execution unit 343 determines whether the state is visible or non-visible by using the first to fifth detection conditions described later.
  • The detection is not limited to using all of the first to fifth detection conditions; any one or more of the first to fifth detection conditions may be used for visual recognition detection.
  • When the application execution unit 343 performs visual recognition detection using a plurality of detection conditions and any one of them indicates the non-visible state, the comprehensive (final) detection result is set to the non-visible state. Alternatively, the comprehensive detection result may be set to the visible state when any one of the detection conditions indicates the visible state, or the comprehensive detection result may be determined according to the type of detection condition that indicated the visible or non-visible state. For example, priorities may be preset for the plurality of detection conditions.
  • The first to fifth detection conditions are as follows; each is shown as a condition for detecting the non-visible state.
  • -First detection condition The target object does not exist within the user's field of view (conditions such as closed eyes and blindness of the user are also applicable).
  • Second detection condition The target object does not exist within the field of view of the AR glass system 311 (HMD73).
  • -Third detection condition The target object is in the user's peripheral visual field.
  • -Fourth detection condition The target object has never been visually recognized in the user's central visual field.
  • -Fifth detection condition The target object is shielded by another object (the target object exists in a box, hot water, smoke, etc.).
  • the positions and postures of the user's hand and the target object in the AR space are detected by the finger position detection unit 341 and the object detection unit 342 as information that the application execution unit 343 appropriately refers to when performing visual detection.
  • The orientation of the AR glass system 311 (head ray direction) is detected based on the sensor information from the gyro sensor 334, the acceleration sensor 335, and the azimuth sensor 336.
  • the user's line-of-sight direction is detected by the image of the user's eyes from the inward camera 331.
  • In the visual recognition detection using the first detection condition, the application execution unit 343 specifies the user's visual field range in the AR space based on the user's line-of-sight direction.
  • The application execution unit 343 detects the non-visible state when the target object does not exist within the specified visual field range of the user.
  • When the target object exists within that visual field range, the application execution unit 343 detects the visible state.
  • The application execution unit 343 may also determine the non-visible state when the user has his or her eyes closed; whether the user's eyes are closed is detected from the image from the inward camera 331.
  • The application execution unit 343 may also determine the non-visible state when the user is blind, even if the target object exists within the user's visual field range. Information on whether or not the user is visually impaired can be obtained from another system or the like. When the user is visually impaired, the application execution unit 343 may acquire the visual field range that the user can stably see and determine the non-visible state when the target object does not exist within that range.
  • In the visual recognition detection using the second detection condition, the application execution unit 343 specifies the visual field range of the AR glass system 311 (HMD 73) in the AR space based on the orientation (head ray direction) of the AR glass system 311, and detects the non-visible state when the target object does not exist within that visual field range.
  • When the target object exists within that visual field range, the application execution unit 343 detects the visible state.
  • In the visual recognition detection using the fourth detection condition, the application execution unit 343 records the history of the range of the user's central visual field for a predetermined time. When the position of the target object is not included in the recorded history of the central visual field range, the application execution unit 343 considers that the target object has never been visually recognized in the user's central visual field and detects the non-visible state. When the position of the target object has been included in the central visual field range even once (that is, when the target object has been within the central visual field during the predetermined time from the present into the past), the application execution unit 343 detects the visible state.
  • In the visual recognition detection using the fifth detection condition, the application execution unit 343 detects whether another object exists between the target object and the user's head.
  • the existence of other objects can be grasped based on the design information of the environment (building, etc.) in which the user exists and the information from other systems.
  • the application execution unit 343 detects that the target object is invisible when another object that shields the target object exists between the target object and the user's head. For example, when the target object is in a box, hot water, or smoke, the application execution unit 343 detects that it is invisible.
  • the application execution unit 343 detects that it is in the visual state when there is no other object that shields the target object between the target object and the user's head.
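  • As a minimal sketch, the comprehensive detection result could be combined from the individual detection conditions as shown below (non-visible if any enabled condition indicates non-visible), assuming each condition is wrapped as a boolean predicate; the predicate names in the commented usage are hypothetical placeholders, not part of the embodiment.

```python
from typing import Callable, List

def is_non_visible(conditions: List[Callable[[], bool]]) -> bool:
    """Comprehensive detection result: non-visible if any enabled condition indicates it."""
    return any(condition() for condition in conditions)

# Hypothetical usage (the condition checks below are placeholders, not real APIs):
# conditions = [
#     lambda: not target_in_user_field_of_view(),     # first detection condition
#     lambda: not target_in_hmd_field_of_view(),      # second detection condition
#     lambda: target_only_in_peripheral_vision(),     # third detection condition
#     lambda: never_seen_in_central_vision(history),  # fourth detection condition
#     lambda: target_occluded_by_other_object(),      # fifth detection condition
# ]
# non_visible = is_non_visible(conditions)
```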
  • After performing visual recognition detection using any one or more of the first to fifth detection conditions above, the application execution unit 343 generates, by the feedback generation process, feedback to the user regarding the approach/contact between the user's hand and the target object.
  • In the feedback generation process, the application execution unit 343 generates normal feedback when the visible state is detected by visual recognition detection.
  • the application execution unit 343 presents a realistic tactile sensation (vibration) to the user's hand according to the movement between the target object and the hand.
  • When generating the image, sound, and tactile sensation for presenting the AR space to the user, the output control unit 344 generates a tactile sensation (vibration) that reflects the feedback content generated by the application execution unit 343.
  • the output control unit 344 supplies the generated image, sound, and tactile sensation as output signals to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively.
  • In this way, normal feedback to the user's tactile sensation, corresponding to the visible state, is executed for the approach/contact between the hand and the target object.
  • the usual feedback is not limited to this.
  • FIG. 19 is a diagram illustrating a first embodiment of feedback corresponding to a non-visual state.
  • The target object 381 and the target object 382 each represent a target object that the user's hand is approaching or contacting and that the user cannot see.
  • the target object 381 is smaller than the target object 382.
  • When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the size of the target object from the object information (property of the target object) detected by the object detection unit 342. The application execution unit 343 generates feedback that presents the user with a sound or vibration (tactile sensation) having a frequency corresponding to the acquired size of the target object. For example, the smaller the target object, the higher the frequency of the generated sound or vibration (tactile sensation). In FIG. 19, the target object 381 in the situation 371 is smaller than the target object 382 in the situation 372, so the application execution unit 343 makes the frequency of the feedback sound or vibration for the approach/contact between the hand and the target object 381 higher than that for the approach/contact between the hand and the target object 382. Both sound and vibration may be presented to the user.
  • the application execution unit 343 may generate a sound or vibration feedback having an amplitude according to the size of the target object. For example, the application execution unit 343 generates feedback of sound or vibration having a smaller amplitude as the size of the target object is smaller.
  • the user can be made to recognize the size of the target object in the invisible state by the volume of sound or the magnitude of vibration.
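  • A minimal sketch of the size-to-feedback mapping described above, assuming the target object's size is a characteristic length in meters and the feedback is a short sine burst; the sample rate, frequency range, and amplitude mapping are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 1000                   # Hz, assumed rate for the vibration/sound channel
BASE_FREQ, MAX_FREQ = 50.0, 400.0    # Hz, assumed feedback frequency range

def size_to_feedback(size_m: float, duration_s: float = 0.3) -> np.ndarray:
    """Smaller objects -> higher frequency and smaller amplitude."""
    size_m = float(np.clip(size_m, 0.01, 1.0))
    # Linear interpolation: size 0.01 m -> MAX_FREQ, size 1.0 m -> BASE_FREQ.
    ratio = (1.0 - size_m) / (1.0 - 0.01)
    freq = BASE_FREQ + (MAX_FREQ - BASE_FREQ) * ratio
    amplitude = 0.2 + 0.8 * (1.0 - ratio)   # smaller object -> smaller amplitude
    t = np.arange(0, duration_s, 1.0 / SAMPLE_RATE)
    return amplitude * np.sin(2 * np.pi * freq * t)
```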
  • FIG. 20 is a diagram illustrating a second modification of the first embodiment of feedback corresponding to the invisible state.
  • the user 411 wears the AR glass 401 representing the AR glass system 311 of FIG. 17 as hardware on the head, and visually recognizes the image 402 of the AR space.
  • the target object 421 and the target object 422 are target objects that have approached or touched the hand 412 of the user 411, respectively, and represent the target objects that the user cannot see.
  • the target object 421 is smaller than the target object 422.
  • the application execution unit 343 generates feedback that presents the image of the figure enlarged or reduced at a magnification according to the size of the target object to the user. For example, the application execution unit 343 generates feedback of an image representing a circular figure having a smaller magnification as the size of the target object is smaller.
  • The target object 421 of the situation 391 is smaller than the target object 422 of the situation 392. Therefore, the application execution unit 343 makes the enlargement magnification of the circular figure 403, which is superimposed on the image 402 in the AR space and presented as feedback for the approach/contact between the hand and the target object 421, smaller than the magnification of the circular figure 403 in the feedback for the approach/contact between the hand and the target object 422.
  • the figure presented by feedback may have a shape other than a circular shape.
  • the size of the target object can be recognized by the feedback to the user's visual, auditory, or tactile sense.
  • For the user, such information would be redundant if the target object were visible; but when the target object is not visible, it is useful information, even if it falls short of the amount of information obtained by visually observing the target object.
  • Such useful information can be presented to the user by feedback to the user's visual, auditory, or tactile sensations.
  • the second embodiment is an embodiment for generating feedback for presenting the number of target objects in the invisible state to the user.
  • the waveforms 441 to 443 in FIG. 21 represent the waveforms of sound or vibration (tactile sensation) presented by feedback.
  • The application execution unit 343 acquires the number of target objects in the non-visible state from the object information (property of the target object) detected by the object detection unit 342.
  • the number of target objects means the number of target objects that the user's hands approach and touch at the same time.
  • the application execution unit 343 generates feedback that presents the user with the sound or vibration (tactile sensation) of the waveform of the peak number according to the number of acquired target objects.
  • the presentation time of sound or vibration by feedback is determined to be a fixed time.
  • The application execution unit 343 sets the frequency of the sound or vibration so that the number of peaks of the sound or vibration waveform within the presentation time corresponds to the number of target objects. For example, the smaller the number of target objects, the fewer the peaks of the generated sound or vibration waveform (the lower its frequency).
  • The number of peaks of the sound or vibration waveform presented by feedback increases in the order of waveform 441, waveform 442, and waveform 443, which presents to the user that the number of target objects increases in that order.
  • The application execution unit 343 may also generate feedback that presents a rough (approximate) number of target objects.
  • FIG. 22 is a diagram illustrating feedback for presenting a rough number of target objects.
  • In this case, the application execution unit 343 sets the frequency of the sound or vibration presented to the user, as shown in the waveform of FIG. 22, to a fixed frequency that is easy for humans to perceive.
  • The application execution unit 343 sets the presentation time of the sound or vibration at that fixed frequency to a length corresponding to the number of target objects.
  • That is, the application execution unit 343 generates feedback of a sound or vibration having a constant frequency and a presentation time corresponding to the number of target objects.
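  • A minimal sketch of both variants described above, assuming the feedback is a sine waveform: a fixed presentation time with one peak per target object, and a fixed easily perceptible frequency whose presentation time grows with the (rough) number of objects; all numeric constants are assumptions.

```python
import numpy as np

SAMPLE_RATE = 1000            # Hz
PRESENTATION_TIME_S = 0.5     # fixed presentation time for the peak-count variant
PERCEPTIBLE_FREQ_HZ = 150.0   # fixed, easy-to-perceive frequency for the rough-number variant

def peaks_waveform(num_objects: int) -> np.ndarray:
    """One waveform peak per target object within a fixed presentation time."""
    freq = num_objects / PRESENTATION_TIME_S      # peaks per second
    t = np.arange(0, PRESENTATION_TIME_S, 1.0 / SAMPLE_RATE)
    return np.sin(2 * np.pi * freq * t)

def rough_number_waveform(num_objects: int, seconds_per_object: float = 0.2) -> np.ndarray:
    """Fixed frequency; the presentation time grows with the (rough) number of objects."""
    duration = num_objects * seconds_per_object
    t = np.arange(0, duration, 1.0 / SAMPLE_RATE)
    return np.sin(2 * np.pi * PERCEPTIBLE_FREQ_HZ * t)
```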
  • the number of target objects in the invisible state can be recognized by the feedback to the user's auditory sense or tactile sense.
  • For the user, such information would be redundant if the target object were visible; but when the target object is not visible, it is useful information, even if it falls short of the amount of information obtained by visually observing the target object.
  • Such useful information can be presented to the user by feedback to the user's auditory or tactile sensation.
  • the third embodiment is an embodiment for generating feedback for presenting the color (including brightness, pattern, etc.) of the target object in the invisible state to the user.
  • When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the color of the target object in the non-visible state from the object information (property of the target object) detected by the object detection unit 342.
  • the application execution unit 343 generates feedback that presents the sound or vibration (tactile sensation) of the frequency corresponding to the acquired brightness of the target object to the user. For example, the application execution unit 343 generates feedback of high frequency sound or vibration as the brightness of the target object increases.
  • In this way, the level of brightness of the target object in the non-visible state is presented to the user by the frequency of the feedback sound or vibration.
  • The application execution unit 343 may also acquire the complexity of the pattern (its spatial frequency) of the target object in the non-visible state from the object information (property of the target object) detected by the object detection unit 342.
  • The application execution unit 343 then generates feedback that presents the user with a sound or vibration (tactile sensation) having a frequency corresponding to the spatial frequency of the acquired pattern of the target object. For example, the higher the spatial frequency of the pattern of the target object, the higher the frequency of the generated sound or vibration. As a result, the more complicated the pattern of the target object, the higher the frequency of the sound or vibration presented to the user; the more monotonous the pattern, the lower the frequency presented.
  • In this way, the complexity of the pattern of the target object in the non-visible state is presented to the user by the frequency of the feedback sound or vibration. That is, information about the color of the target object can be recognized through feedback to the user's auditory or tactile sense. For the user, such information would be redundant if the target object were visible; but when the target object is not visible, it is useful information, even if it falls short of the amount of information obtained by visually observing the target object. Such useful information can be presented to the user by feedback to the user's auditory or tactile sensation.
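  • A minimal sketch of the color-related mappings described above, assuming brightness is normalized to [0, 1] and pattern complexity is given as a spatial frequency; the frequency ranges and the linear mappings are assumptions.

```python
def brightness_to_frequency(brightness: float,
                            min_freq: float = 80.0,
                            max_freq: float = 400.0) -> float:
    """Brightness in [0, 1]; higher brightness -> higher feedback frequency."""
    brightness = max(0.0, min(1.0, brightness))
    return min_freq + (max_freq - min_freq) * brightness

def pattern_complexity_to_frequency(spatial_freq_cycles_per_m: float,
                                    max_spatial_freq: float = 200.0,
                                    min_freq: float = 80.0,
                                    max_freq: float = 400.0) -> float:
    """Higher spatial frequency (more complex pattern) -> higher feedback frequency."""
    ratio = max(0.0, min(1.0, spatial_freq_cycles_per_m / max_spatial_freq))
    return min_freq + (max_freq - min_freq) * ratio
```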
  • the fourth embodiment is an embodiment for generating feedback for presenting the type of the target object in the invisible state to the user.
  • When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the type of the target object in the non-visible state from the object information (property of the target object) detected by the object detection unit 342.
  • the application execution unit 343 generates feedback that presents the user with the sound or vibration (tactile sensation) of the number of times according to the type of the acquired target object.
  • the number of sounds or vibrations may be the number of peaks of the waveform, or the number of intermittent sounds or vibrations, as in the second embodiment.
  • the number of times according to the type of the target object means the number of times of sound or vibration associated with the type of the target object in advance.
  • FIG. 23 is a diagram illustrating a fourth embodiment of feedback corresponding to the invisible state.
  • the target object example 461 in FIG. 23 represents the case where the target object 471 is a pistol.
  • the target object example 462 represents a case where the target object 472 is a knife.
  • the circle mark 481 and the circle mark 482 at the top of the target object 471 and the target object 472 represent the number of sounds or vibrations associated with the types of the target object 471 and the target object 472, respectively. Since the number of circle marks 481 is one for the target object 471, the number of times of the associated sound or vibration is one. Since the number of circle marks 482 is two for the target object 472, the number of associated sounds or vibrations is two. In this way, the number of sounds or vibrations is associated with the type of target object in advance.
  • The user may memorize the number of sounds or vibrations associated with each type of target object.
  • the application execution unit 343 may present images of the circle mark 481 and the circle mark 482 as shown in FIG. 23 in the vicinity of the target object.
  • In this way, the type of the target object in the non-visible state is presented to the user by the number of feedback sounds or vibrations. That is, the type of the target object can be recognized through feedback to the user's auditory or tactile sense. For the user, such information would be redundant if the target object were visible; but when the target object is not visible, it is useful information, even if it falls short of the amount of information obtained by visually observing the target object. Such useful information can be presented to the user by feedback to the user's auditory or tactile sensation.
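  • A minimal sketch of presenting the object type as a number of discrete sound/vibration pulses; the type-to-count table mirrors the pistol/knife example of FIG. 23, while the other constants are assumptions.

```python
import numpy as np

SAMPLE_RATE = 1000
PULSE_FREQ_HZ = 150.0
PULSE_DURATION_S = 0.1
PULSE_GAP_S = 0.1

# Number of pulses pre-associated with each object type (cf. circle marks in FIG. 23).
TYPE_TO_PULSE_COUNT = {
    "pistol": 1,   # one circle mark
    "knife": 2,    # two circle marks
}

def type_feedback_waveform(object_type: str) -> np.ndarray:
    """Intermittent pulses; the pulse count is pre-associated with the object type."""
    count = TYPE_TO_PULSE_COUNT.get(object_type, 1)
    t = np.arange(0, PULSE_DURATION_S, 1.0 / SAMPLE_RATE)
    pulse = np.sin(2 * np.pi * PULSE_FREQ_HZ * t)
    gap = np.zeros(int(PULSE_GAP_S * SAMPLE_RATE))
    parts = []
    for _ in range(count):
        parts.extend([pulse, gap])
    return np.concatenate(parts)
```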
  • a fifth embodiment is an embodiment that generates feedback for presenting the orientation of the target object in the invisible state.
  • 24 and 25 are diagrams illustrating a fifth embodiment of feedback corresponding to the invisible state.
  • In one situation, the target object 501 is arranged sideways with respect to the palm of the hand 412. That is, the axis direction of the target object 501 lies along the palm of the user's hand 412 and is perpendicular to the axis direction of the fingers.
  • In the other situation, the target object 501 is arranged vertically with respect to the palm of the hand 412. That is, the axis direction of the target object 501 lies along the palm of the user's hand 412 and is parallel to the axis direction of the fingers.
  • the application execution unit 343 changes the position of the oscillator that presents vibration when the user moves the hand 412 among the oscillators of the vibration presentation unit 354 of the hand controller 312 according to the orientation of the detected target object.
  • the oscillators are arranged, for example, at the fingertips of the thumb, the fingertips of the index finger, and the back of the hand as shown in FIG. However, the oscillators may be arranged in more parts.
  • For example, when the user moves the hand 412 forward in a situation where the target object 501 is arranged sideways with respect to the palm of the user's hand 412, as in the situation 491 of FIG. 24, the application execution unit 343 changes the position of the oscillator that presents vibration from the oscillators arranged on the fingertip side of the hand 412 to the oscillators arranged on the base side of the fingers.
  • In other situations, the application execution unit 343 may cause only the oscillators arranged on some of the fingers to present vibration.
  • the change in the position of the oscillator that presents vibration is not limited to the above case. It suffices if the difference in the orientation of the target object in the invisible state and the difference in the movement of the hand can be recognized by the difference in the position of the oscillator that presents the vibration.
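  • A minimal sketch of selecting which oscillators present vibration depending on the orientation of the non-visible target object and the hand movement, loosely following the FIG. 24/25 description; the oscillator names and the selection rule are illustrative assumptions.

```python
from enum import Enum
from typing import List

class Orientation(Enum):
    SIDEWAYS_TO_PALM = "sideways"       # object axis perpendicular to the finger axis
    PARALLEL_TO_FINGERS = "parallel"    # object axis parallel to the finger axis

# Oscillator placement assumed from the FIG. 5 description (thumb tip, index tip, back of hand).
OSCILLATORS = ["thumb_tip", "index_tip", "back_of_hand"]

def oscillators_to_drive(orientation: Orientation, hand_moving_forward: bool) -> List[str]:
    """Return the oscillators that should present vibration for the given situation."""
    if orientation is Orientation.SIDEWAYS_TO_PALM and hand_moving_forward:
        # Shift the vibration from the fingertip side toward the base of the fingers.
        return ["index_tip", "thumb_tip", "back_of_hand"]
    if orientation is Orientation.PARALLEL_TO_FINGERS:
        # Present vibration on only some of the fingers.
        return ["index_tip"]
    return list(OSCILLATORS)
```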
  • a sixth embodiment is an embodiment that generates feedback for showing that the target object in the invisible state required by the user is in the vicinity of the user's hand. For example, a user may want to pick up the necessary tools while keeping an eye on something. In such a case, the user moves his / her hand to the approximate position where the target tool is placed without looking at the target tool. At this time, in the sixth embodiment, when the hand moves near the target tool, the user is presented with the feedback that the target tool is near the hand.
  • FIG. 26 is a diagram illustrating a sixth embodiment of feedback corresponding to the invisible state.
  • the user 411 wears the AR glass 401 representing the AR glass system 311 of FIG. 17 as hardware on the head, and visually recognizes the image of the engine 511 in the AR space.
  • the user 411 is in a state of gazing at the attention point with respect to the engine 511 with the finger of the left hand 412L.
  • While maintaining that state, the user 411 moves the right hand 412R to the place where unused yellow sticky notes (virtual objects) are placed, picks up a sticky note, and tries to paste it on the place to be noted.
  • User 411 repeats the operation and work of pasting a yellow sticky note.
  • The application execution unit 343 detects (predicts) the target object that the user 411 wants to grasp, based on the user 411's operations, routine work, and the like.
  • In this example, the application execution unit 343 detects the yellow sticky note as the target object that the user 411 wants to grasp.
  • the application execution unit 343 generates feedback that presents vibration to the user's hand.
  • the vibration may be a simple sine wave.
  • the feedback may be the presentation of sound rather than vibration.
  • When the user's right hand 412R approaches the yellow sticky note, the application execution unit 343 generates feedback that presents vibration to the user's right hand 412R or the like.
  • The seventh embodiment generates feedback for showing that a target object in the non-visible state that is dangerous to touch (which may cause an accident, a disadvantage, or the like) is in the vicinity of the user's hand.
  • FIG. 27 is a diagram illustrating a seventh embodiment of feedback corresponding to the invisible state.
  • the situation 521 shows a case where the target object approaching / contacting the user's hand 412 is a cup (real object) containing a liquid.
  • Situation 522 shows a case where the target object approaching / contacting the user's hand 412 is a heated kettle (real object).
  • Situation 523 shows the case where the target object approaching / contacting the user's hand 412 is a selected image of the credit card to be used.
  • When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the type of the target object in the non-visible state from the object information (property of the target object) detected by the object detection unit 342. The application execution unit 343 detects whether or not the target object in the non-visible state is dangerous based on the acquired object information.
  • Whether or not the target object is dangerous may be set as object information in advance, or may be determined based on the environment in the real space.
  • the application execution unit 343 generates feedback that presents an image, sound, or vibration (tactile sensation) to the user when the target object in the invisible state is dangerous. This presents, for example, an image, sound, or vibration (tactile sensation) that indicates danger to the user. If the risk level of the target object can be acquired, the application execution unit 343 may generate feedback that presents the user with a sound or vibration having an amplitude corresponding to the risk level of the target object. For example, the higher the risk of the target object, the larger the amplitude of the sound or vibration may be presented to the user. An image of a color or size corresponding to the degree of danger of the target object may be presented to the user. When the target object is a virtual object, the application execution unit 343 may invalidate the operation on the target object.
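  • A minimal sketch of the danger feedback described above, assuming each target object carries a danger flag and a normalized risk level; the field names, the 0-to-1 risk scale, and the amplitude mapping are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TargetObject:
    name: str
    is_virtual: bool
    is_dangerous: bool
    risk_level: float = 0.0   # assumed scale: 0.0 (none) .. 1.0 (highest)

def danger_feedback(obj: TargetObject):
    """Return (vibration_amplitude, operation_enabled) for a non-visible target object."""
    if not obj.is_dangerous:
        return 0.0, True
    # Higher risk -> larger vibration amplitude.
    amplitude = 0.3 + 0.7 * max(0.0, min(1.0, obj.risk_level))
    # Operations on dangerous virtual objects may be invalidated.
    operation_enabled = not obj.is_virtual
    return amplitude, operation_enabled
```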
  • the application execution unit 343 may generate feedback that presents an image, sound, or vibration to the user only when the user performs a specific action.
  • The application execution unit 343 may generate the following two-step feedback. First, in order to present to the user the existence of a dangerous target object in the non-visible state, the application execution unit 343 generates feedback that presents an image, sound, or vibration to the user as simple primary information. After that, when the user shows interest and performs a specific hand action such as "stroking", the application execution unit 343 generates feedback that presents an image, sound, or vibration to the user as further information presentation.
  • The application execution unit 343 may also cause feedback that presents an image, sound, or vibration not only to the user but also to others (to the AR glass systems used by others) via communication by the communication unit 323.
  • Furthermore, the application execution unit 343 may notify a management center from the communication unit 323 via a network or the like so that related parties can be contacted. This makes it possible for people around the user to know that the user is performing a dangerous operation and to provide support.
  • According to the seventh embodiment of feedback corresponding to the non-visible state described above, when the user's hand approaches or touches a dangerous object in the non-visible state, a feedback image, sound, or vibration is presented to the user. That is, the user can recognize, through feedback to the visual, auditory, or tactile sense, that the hand has approached or touched a dangerous object that cannot be seen.
  • For the user, such information would be redundant if the target object were visible; but when the target object is not visible, it is useful information, even if it falls short of the amount of information obtained by visually observing the target object.
  • Such useful information can be presented to the user by feedback to the user's visual, auditory, or tactile sensations.
  • As described above, in the xR space provided to the user, when the user's hand and an object approach or come into contact with each other, normal feedback is executed if the user is visually recognizing the object. In normal feedback, for example, a realistic tactile sensation is presented to the user.
  • When the user is not visually recognizing the object, feedback corresponding to the non-visible state is executed. In this feedback, information that is easily obtained in the visible state but cannot be obtained in the non-visible state is presented to the user through feedback to the user's visual, auditory, or tactile sense.
  • When the target object is not visible, the user often wants to know its properties. Therefore, in the feedback corresponding to the non-visible state, attributes such as the size, number, color, type, orientation, necessity, and danger of the object that the user cannot see are presented to the user. By switching the feedback behavior in this way, depending on whether the object is visible or not when the user's hand approaches or contacts it, the feedback to the user serves both as a means of providing a realistic experience in the xR space and as a means of providing useful information to the user.
  • Patent Document 2 (Japanese Unexamined Patent Publication No. 2019-008798) proposes presenting a tactile sensation according to the position being visually observed.
  • the technique of Patent Document 2 aims to direct the line of sight to a point of interest by stimulating the tactile sensation of the user when the user is looking away from the place (area of interest) that the user wants to see.
  • In Patent Document 2, however, it is not detected whether or not the user is visually recognizing the target object at hand, and information corresponding to the target object is not presented, as is done in the second embodiment of the present information processing apparatus. Therefore, the technical content of the second embodiment of the present information processing apparatus differs significantly from the technique of Patent Document 2.
  • FIG. 28 is a block diagram showing a configuration example of a third embodiment of the information processing apparatus to which the present technology is applied.
  • the same reference numerals are given to the parts common to the information processing apparatus 11 of FIG. 1, and the description thereof will be omitted.
  • The information processing device 601 uses the HMD 73 of FIG. 4 and the controller of FIG. 5 (the controller 75 of FIG. 2) to provide the user with the space/world generated by xR.
  • the information processing device 601 has a sensor unit 21, a control unit 612, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26. Therefore, the information processing device 601 is common to the information processing device 11 of FIG. 1 in that it has a sensor unit 21, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26. However, the information processing apparatus 601 is different from the information processing apparatus 11 of FIG. 1 in that the control unit 612 is provided instead of the control unit 22 of FIG.
  • the information processing device 601 can be built with the hardware 61 shown in FIGS. 2 and 3.
  • the position / attitude acquisition unit 622 acquires the positions and attitudes of the HMD 73, the speaker 74, and the controller 75 (see FIG. 2) as position / attitude information based on the sensor information acquired by the sensor information acquisition unit 621.
  • the position / posture acquisition unit 622 acquires not only the position and posture of the controller itself, but also the position and posture of the user's hand or finger when the controller attached to the user's hand is used. The positions and postures of the user's hands and fingers can also be acquired based on the image taken by the camera 31.
  • The object selection acquisition unit 623 acquires, based on the sensor information obtained from the sensor information acquisition unit 621 and the position/posture information of the hands and fingers obtained from the position/posture acquisition unit 622, whether or not any object existing in the xR space has been selected by the user as an operation target. Object selection is determined to have been executed, for example, when the user brings the index finger and thumb together or closes the hand. When any object is selected, the object selection acquisition unit 623 identifies the selected object.
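  • A minimal sketch of this selection detection, assuming a pinch is recognized when the thumb and index fingertips come within a small distance of each other and the selected object is the one closest to the virtual ray from the hand; the threshold and the simplified ray test are assumptions.

```python
import numpy as np

PINCH_THRESHOLD_M = 0.02  # 2 cm, assumed threshold for the pinch gesture

def is_pinching(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """True when the thumb tip and index fingertip are close enough to count as a pinch."""
    return float(np.linalg.norm(thumb_tip - index_tip)) <= PINCH_THRESHOLD_M

def selected_object(ray_origin, ray_dir, objects, thumb_tip, index_tip):
    """Return the object closest to the virtual ray from the hand when a pinch occurs."""
    if not is_pinching(thumb_tip, index_tip):
        return None
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)   # normalize the ray direction
    best, best_dist = None, float("inf")
    for obj_id, center in objects.items():
        to_center = np.asarray(center, dtype=float) - ray_origin
        along = float(np.dot(to_center, ray_dir))
        if along <= 0:            # object is behind the hand
            continue
        dist = float(np.linalg.norm(to_center - along * ray_dir))  # distance from the ray
        if dist < best_dist:
            best, best_dist = obj_id, dist
    return best
```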
  • the application execution unit 624 creates an xR space to be provided to the user by executing the program of the predetermined application.
  • The application execution unit 624 performs operations such as moving, rotating, and enlarging/reducing the object specified by the object selection acquisition unit 623 (the object selected by the user as the operation target), based on operations such as the user's hand movements in the xR space.
  • The application execution unit 624 generates feedback to the user's vision (images), hearing (sound), and tactile sense according to the attributes (object information) of the object, for the user's operation on the object.
  • the application execution unit 624 has an object information acquisition unit 631 and a selection feedback generation unit 632.
  • the object information acquisition unit 631 acquires the object information tagged with the object selected as the operation target.
  • the object information is tagged in advance for each object existing in the xR space and stored in the storage unit 26.
  • the object information acquisition unit 631 reads the object information tagged with respect to the object selected as the operation target from the storage unit 26.
  • The object information acquisition unit 631 may recognize a real object and extract its object information based on the sensor information acquired by the sensor information acquisition unit 621, or may dynamically extract object information from a virtual object.
  • the selection feedback generation unit 632 has a visual generation unit 641, a sound generation unit 642, and a tactile generation unit 643.
  • the visual generation unit 641 generates an image of the operation line according to the object (attribute) selected as the operation target based on the object information acquired by the object information acquisition unit 631.
  • the operation line is a line connecting the selected object and the hand of the operating user.
  • the operation line can be not only a straight line, but also a dotted line, only the start point and the end point, and a line connecting 3D objects.
  • the sound generation unit 642 generates a sound (sound) according to the selected object (attribute) based on the object information acquired by the object information acquisition unit 631.
  • The tactile sensation generation unit 643 generates a tactile sensation according to the selected object (attribute) based on the object information acquired by the object information acquisition unit 631.
  • The output control unit 625 generates an image, sound, and tactile sensation for presenting the xR space generated by the application execution unit 624 to the user, and outputs them as output signals to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively. As a result, the output control unit 625 controls the visual, auditory, and tactile information presented to the user. When generating the image, sound, and tactile sensation to be presented to the user, the output control unit 625 reflects the image, sound, and tactile sensation generated by the visual generation unit 641, the sound generation unit 642, and the tactile generation unit 643 of the application execution unit 624 as feedback information to the user regarding the operation of the object selected as the operation target.
  • FIG. 29 is a diagram illustrating the operation of the object.
  • the image of the xR space presented to the user's vision by the HMD 73 includes the user's right hand 661R and the left hand 661L.
  • the virtual object 667 which is a human model, is presented as a 3D object. It is assumed that the user performs an operation (operation) of emitting (virtual) light rays from, for example, the right hand 661R and the left hand 661L, and irradiates the virtual object 667 with the light rays.
  • a rectangular frame 672 representing an individually operable range is displayed on the virtual object 667 irradiated with the light beam.
  • the frame 672 that surrounds the entire virtual object 667 is displayed.
  • The positions irradiated by the light rays from the right hand 661R and the left hand 661L are set as connection points 673R and 673L, respectively.
  • the user's right hand 661R and the connection point 673R are connected by the operation line 674R
  • the user's left hand 661L and the connection point 673L are connected by the operation line 674L.
  • the virtual object 667 is selected as the operation target.
  • If the operation lines 674R and 674L are regarded as being made of a highly rigid material, for example, the user can move the right hand 661R and the left hand 661L to perform operations such as moving, rotating, and scaling the virtual object 667. The same operations can also be performed using the controller instead of the hands.
  • the frame 672 may be the surface of the virtual object 667, and the existence of the frame 672 is not shown in the following description.
  • the object to be operated may be a real object instead of a virtual object. However, in the following, the operation target will be described as being a virtual object.
  • FIG. 30 is a flowchart illustrating the processing procedure of the information processing apparatus 601.
  • In step S51, the object selection acquisition unit 623 emits a virtual light ray from the user's hand in response to a predetermined operation or gesture of the user. The user irradiates the virtual object to be operated with the virtual light ray. The process proceeds from step S51 to step S52.
  • In step S52, the user operates a hand or a controller to select the virtual object to be operated. That is, as described with reference to FIG. 29, a virtual object is selected by, for example, performing a pinching operation with the thumb and the index finger.
  • the object selection acquisition unit 623 identifies a virtual object selected as an operation target. The process proceeds from step S52 to step S53.
  • In step S53, the object information acquisition unit 631 of the control unit 612 acquires the object information of the selected object.
  • When only a portion of the object is selected, the object information acquisition unit 631 acquires the object information of the selected portion. The process proceeds from step S53 to step S54.
  • In step S54, the position/posture acquisition unit 622 acquires the positions and postures of the hands and fingers as position/posture information. The process proceeds from step S54 to step S55.
  • In step S55, the selection feedback generation unit 632 of the control unit 612 generates the image, sound, and tactile sensation to be presented to the user as feedback for the user's operation, based on the object information acquired in step S53 and the hand and finger position/posture information acquired in step S54. The process proceeds from step S55 to step S56.
  • In step S56, the output control unit 625 of the control unit 612 outputs the feedback image, sound, and tactile sensation generated in step S55 to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively.
  • the process returns from step S56 to step S54 and repeats from step S54.
  • the process returns to step S51 and repeats from step S51.
  • the selection feedback generation unit 632 of the application execution unit 624 generates feedback to the user of images, sounds, and vibrations such as operation lines for the operation of the virtual object in the xR space. Feedback is generated based on the object information (attribute) of the virtual object selected as the operation target. This gives the user a feeling of operation according to the attributes of the virtual object to be operated.
  • FIG. 31 is a diagram illustrating a virtual object.
  • the portion that can be individually selected as the operation target in this way may be registered in the storage unit 26 as object information in advance, or may be dynamically extracted from the virtual object.
  • the object information acquisition unit 631 of the application execution unit 624 acquires object information for each part of the virtual object selected as the operation target.
  • Object information includes color, hardness, weight, image, sound (sound), size, object elasticity, thickness, hardness / brittleness, material characteristics (glossiness and roughness), heat, and importance. Degree etc. are included.
  • The object information acquisition unit 631 acquires, as object information (attribute types) of the leaf portion 692, information about, for example, color, hardness, weight, image, sound, and vibration data.
  • Regarding the color, the information "green or fresh green" is acquired.
  • Regarding the hardness, the information "soft" is acquired.
  • Regarding the weight, the information "light" is acquired.
  • As the image, "leaf image data" is acquired.
  • As the sound, "the sound of leaves rubbing against each other" is acquired.
  • As the vibration data, "the rough vibration when a leaf hits the hand" is acquired.
  • The selection feedback generation unit 632 (visual generation unit 641) of the application execution unit 624 uses the object information of the leaf portion 692 to generate, as feedback to the user's vision for the operation of the virtual object, an operation line (object) corresponding to the operation lines 674R and 674L shown in FIG. 29.
  • FIG. 32 is a diagram illustrating four forms of an operation line presented to the user when the leaf portion 692 of the virtual object 691 of FIG. 31 is selected as an operation target.
  • the user's right hand 661R and the leaf portion 692 are connected by operation lines 711 to 714 having different forms, respectively.
  • the operation line on the left hand is omitted.
  • both hands and the virtual object to be operated do not necessarily have to be connected by the operation line.
  • In the form 701, the operation line 711 is generated as a solid line. Information on the color, hardness, and weight among the object information of the leaf portion 692 is reflected in the generation of the operation line 711. For example, since the color of the leaf portion 692 is green, the operation line 711 is also generated in green. Since the leaf portion 692 is soft and light, the operation line 711 is generated as a soft (curved) line instead of a straight line. The thickness of the operation line 711 may be changed according to the hardness or weight of the leaf portion 692.
  • the operation line 712 is generated by a dotted line.
  • the operation line 712 differs only in the type of line from the operation line 711 of the form 701.
  • the operation line 712 also reflects the object information of the leaf portion 692 like the operation line 711.
  • the operation line 713 differs only in the form of the line from the operation line 711 of the form 701.
  • the operation line 713 also reflects the object information of the leaf portion 692 like the operation line 711.
  • the operation line 714 is generated by a series of leaves.
  • For the operation line 714, the "leaf image data" related to the image among the object information of the leaf portion 692 is used. Similar to the operation line 711, the operation line 714 reflects information on the hardness and weight of the leaf portion 692.
  • the operation line may be generated by connecting the leaf animation.
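  • A minimal sketch of deriving an operation-line style from the selected part's object information: the color is copied, a soft and light part yields a curved ("soft") line, and the image data can be used for the image-based form; the style fields and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class OperationLineStyle:
    color: str
    curved: bool         # soft and light parts -> curved ("soft") line, otherwise straight
    thickness: float     # could also be scaled with hardness or weight
    texture_image: str   # e.g. leaf or trunk image data for the image-based form (form 704)

def operation_line_style(info) -> OperationLineStyle:
    """info is an ObjectInfo-like record with color/hardness/weight/image fields."""
    soft_and_light = (info.hardness == "soft" and info.weight == "light")
    return OperationLineStyle(
        color=info.color,
        curved=soft_and_light,
        thickness=1.0 if soft_and_light else 3.0,
        texture_image=info.image,
    )
```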
  • the selection feedback generation unit 632 (sound generation unit 642) of the application execution unit 624 generates a sound (sound) using the object information of the leaf portion 692 as feedback to the user's hearing for the operation of the virtual object.
  • The sound generation unit 642 generates the sound presented to the user when the leaf portion 692 is moved by the user's operation, using the sound information included in the object information of the leaf portion 692 (the sound of leaves rubbing against each other).
  • The volume of the sound may be varied according to the movement of the user's hand and the leaf portion 692. For example, when both the user's hand and the leaf portion 692 are stationary, a sound of leaves rustling slightly in the wind is generated. When the user moves the hand to the right and the leaf portion 692 also moves to the right at high speed, a sound like a strong wind hitting the leaves is generated.
  • the selection feedback generation unit 632 (tactile generation unit 643) of the application execution unit 624 generates a tactile sensation (vibration) using the object information of the leaf portion 692 as feedback to the user's tactile sensation for the operation of the virtual object.
  • the tactile generation unit 643 uses the information related to the vibration data included in the object information of the leaf portion 692 to generate a rough vibration when the leaf hits the hand.
  • the tactile generation unit 643 changes the generated vibration according to the movement between the user's hand and the leaf portion 692. For example, the tactile generation unit 643 generates high-frequency vibrations that are repeated about twice a second when the user's hand and the leaf portion 692 are both stationary or slow. The tactile generation unit 643 generates high-frequency vibrations that are repeated about 10 times per second when the movement between the user's hand and the leaf portion 692 is fast.
  • the vibration frequency reflects information on the weight included in the object information of the leaf portion 692.
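  • A minimal sketch of modulating the feedback vibration with the movement speed, following the leaf example above (bursts repeated about 2 times per second when still or slow, about 10 times per second when fast); the carrier frequency and the speed threshold are assumptions.

```python
import numpy as np

SAMPLE_RATE = 1000
CARRIER_FREQ_HZ = 200.0   # assumed high-frequency carrier of the burst
FAST_SPEED_M_S = 0.5      # assumed threshold between "slow" and "fast" movement

def leaf_vibration(speed_m_s: float, duration_s: float = 1.0) -> np.ndarray:
    """Burst repetition rate follows the movement speed of the hand and the leaf part."""
    bursts_per_second = 10.0 if speed_m_s >= FAST_SPEED_M_S else 2.0
    t = np.arange(0, duration_s, 1.0 / SAMPLE_RATE)
    carrier = np.sin(2 * np.pi * CARRIER_FREQ_HZ * t)
    # Gate the carrier on/off to form short bursts at the chosen repetition rate.
    gate = (np.sin(2 * np.pi * bursts_per_second * t) > 0).astype(float)
    return carrier * gate
```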
  • As with the leaf portion 692, the object information acquisition unit 631 acquires, as object information (attribute types) of the trunk portion 693, information about the color, hardness, weight, image, sound, vibration data, and the like. For example, regarding the color, the information "brown" is acquired. Regarding the hardness, the information "hard" is acquired. Regarding the weight, the information "heavy" is acquired. As the image, "image data of the tree trunk" is acquired. As the sound, "the creaking sound of bending an elastic tree branch" is acquired. As the vibration data, "the heavy vibration of bending an elastic tree branch" is acquired.
  • the selection feedback generation unit 632 (visual generation unit 641) of the application execution unit 624 generates an operation line (object) using the object information of the trunk portion 693 as feedback to the user's vision for the operation of the virtual object.
  • FIG. 33 is a diagram illustrating four forms of an operation line presented to the user when the trunk portion 693 of the virtual object 691 of FIG. 31 is selected as an operation target.
  • the forms 701 to 704 of FIG. 33 correspond to the forms 701 to 704 having the same reference numerals in FIG. 32, the methods of generating the operation lines are the same, and the same reference numerals are given to the operation lines.
  • In the form 701, the operation line 711 is generated as a solid line. Information on the color, hardness, and weight among the object information of the trunk portion 693 is reflected in the generation of the operation line 711. For example, since the color of the trunk portion 693 is brown, the operation line 711 is also generated in brown. Since the trunk portion 693 is hard and heavy, the operation line 711 is generated as a straight line.
  • the operation line 712 is generated by a dotted line.
  • the operation line 712 differs only in the type of line from the operation line 711 of the form 701. Similar to the operation line 711, the operation line 712 also reflects the object information of the trunk portion 693.
  • the operation line 713 differs only in the form of the line from the operation line 711 of the form 701. Similar to the operation line 711, the operation line 713 also reflects the object information of the trunk portion 693.
  • the operation line 714 is generated by a series of tree trunks.
  • For the operation line 714, the "image data of the tree trunk" related to the image among the object information of the trunk portion 693 is used. Similar to the operation line 711, the operation line 714 reflects information on the hardness and weight of the trunk portion 693.
  • the operation line may be generated by connecting the animation of the tree trunk.
  • the selection feedback generation unit 632 (sound generation unit 642) of the application execution unit 624 generates a sound (sound) using the object information of the trunk portion 693 as feedback to the user's hearing for the operation of the virtual object.
  • The sound generation unit 642 generates the sound presented to the user when the trunk portion 693 is moved by the user's operation, using the sound information included in the object information of the trunk portion 693 (the creaking sound of bending an elastic tree branch).
  • The volume of the sound may be changed according to the movement of the user's hand and the trunk portion 693. For example, when both the user's hand and the trunk portion 693 are stationary, almost no sound is generated, but when the user's hand and the trunk portion 693 are moving, a strong creaking sound is generated.
  • the selection feedback generation unit 632 (tactile generation unit 643) of the application execution unit 624 generates a tactile sensation (vibration) using the object information of the trunk portion 693 as feedback to the user's tactile sensation for the operation of the virtual object.
  • the tactile generation unit 643 uses the information related to the vibration data included in the object information of the trunk portion 693 to generate a heavy vibration by bending an elastic tree branch.
  • The tactile generation unit 643 changes the generated vibration according to the movement of the user's hand and the trunk portion 693. For example, the tactile generation unit 643 generates low-frequency vibrations that are repeated about once per second when the user's hand and the trunk portion 693 are both stationary or moving slowly. When the user's hand and the trunk portion 693 start to move, the tactile generation unit 643 generates a low-frequency vibration with a large amplitude, and the faster the movement becomes, the faster the repetition, up to high-frequency vibrations repeated about 10 times per second. The vibration frequency reflects the weight information contained in the object information of the trunk portion 693.
  • FIG. 34 is a diagram illustrating a case where the weight in the object information is changed according to the strength of the user.
  • The state 721 in FIG. 34 shows a state in which a user with weak strength is operating the leaf portion 692 with the right hand 661R by means of the operation line 711.
  • The state 722 represents a state in which a user with strong strength is operating the leaf portion 692 with the right hand 661R by means of the operation line 711.
  • In the state 721, a user with weak strength feels the leaf portion 692 to be heavier than a strong user does when moving it. Therefore, when generating the feedback, the selection feedback generation unit 632 changes the information regarding the weight included in the object information of the leaf portion 692 so that it becomes heavier. As a result, the operation line 711 is generated as a straight line, and an appropriate feeling of operation is expressed for a user with weak strength (a sketch of this adjustment follows below).
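One way to express this user-dependent adjustment is sketched below. The normalization of user strength (1.0 for an average user) and the clamping value are assumptions for the example; the embodiment only states that the weight information is changed according to the user's strength.

```python
def adjusted_weight(base_weight: float, user_strength: float) -> float:
    """Scale the weight attribute used for feedback by the user's strength.

    user_strength is assumed to be normalised so that 1.0 is an average user.
    A weaker user (< 1.0) gets a larger effective weight, so the operation line
    straightens and the operation feels appropriately heavy, as in state 721.
    """
    return base_weight / max(user_strength, 0.1)

print(adjusted_weight(2.0, 0.5))   # weak user   -> 4.0 (feels heavier, straight line)
print(adjusted_weight(2.0, 1.5))   # strong user -> about 1.33 (feels lighter)
```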
  • When the object to be operated collides with a virtual or real wall or another object while the user is moving it, the selection feedback generation unit 632 of the application execution unit 624 may change the object information at that moment.
  • FIG. 35 is a diagram showing the state when the object information is changed at the moment the object to be operated collides.
  • The operation line 711 in the state 731 of FIG. 35 is an operation line generated by the same generation method as the operation line 711 of the form 701 of FIG. 32.
  • The state 731 in FIG. 35 represents a case where the leaf portion 692 collides with a wall or the like while the user is operating the operation line 711 with the right hand 661R to move the leaf portion 692.
  • At that moment, the selection feedback generation unit 632 changes the weight information included in the object information of the leaf portion 692 in the direction of increasing the weight, or changes the hardness information in the direction of increasing the hardness.
  • As a result, the operation line 711, which was curved before the leaf portion 692 collided, changes to a straight line at the moment of the collision, so that the occurrence of the collision is appropriately expressed.
  • The operation line 714 in the state 732 of FIG. 35 is an operation line generated by the same generation method as the operation line 714 of the form 704 of FIG. 32.
  • The state 732 of FIG. 35 represents a case where the leaf portion 692 collides with a wall or the like while the user is operating the operation line 714 with the right hand 661R to move the leaf portion 692.
  • At that moment, the selection feedback generation unit 632 changes the weight information included in the object information of the leaf portion 692 in the direction of increasing the weight, or changes the hardness information in the direction of increasing the hardness. As a result, the leaves connecting the operation line 714 are scattered at the moment of the collision, so that the collision is appropriately expressed.
  • FIG. 36 is a diagram showing the state when the object to be operated is placed at a designated position on a surface.
  • FIG. 36 shows the state when the user operates the operation line 711 with the right hand 661R to move the object 741 to be operated through the air and place it at a designated position on a surface such as a desk.
  • While the object 741 is moving through the air, the selection feedback generation unit 632 changes the information regarding the weight in the object information of the object 741 in the direction of making it lighter.
  • As a result, the operation line 711 is generated as a soft, curved line.
  • When the object 741 is to be placed at the designated position on the surface, the selection feedback generation unit 632 changes the information regarding the weight in the object information of the object 741 in the direction of increasing the weight.
  • As a result, the operation line 711 is generated as a straight line. If the operation line 711 is a straight line, it is easier to perform the delicate operation of placing the object 741 at the designated position on the surface (a sketch of these adjustments follows below).
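The collision and placement behaviour described above can be summarised as a modulation of the weight and hardness attributes depending on the operation status. The following sketch shows one possible form of that modulation; the function name, the status flags, and all scale factors are assumptions, not values from the embodiment.

```python
def modulate_attributes(info: dict, in_air: bool, colliding: bool, placing: bool) -> dict:
    """Return a copy of the object information with weight/hardness adjusted
    for the current operation status (moving in the air, colliding, placing)."""
    out = dict(info)
    if in_air and not (colliding or placing):
        out["weight"] = info["weight"] * 0.5                # lighter -> soft curved line
    if colliding:
        out["weight"] = info["weight"] * 3.0                # heavier at the moment of impact
        out["hardness"] = min(1.0, info["hardness"] + 0.5)  # harder -> line snaps straight
    if placing:
        out["weight"] = info["weight"] * 2.0                # heavier -> straight line for fine placement
    return out

leaf = {"weight": 0.2, "hardness": 0.1}
print(modulate_attributes(leaf, in_air=True, colliding=False, placing=False))
print(modulate_attributes(leaf, in_air=False, colliding=True, placing=False))
```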
  • The selection feedback generation unit 632 may also change the information regarding the color in the object information of the operation target according to the environment color seen by the user.
  • FIG. 37 is a diagram illustrating processing for making the operation line easier to see.
  • The state 751 in FIG. 37 is a case where the environment color is dark, and the state 752 is a case where the environment color is bright.
  • When the environment color is dark, the selection feedback generation unit 632 changes the information regarding the color in the object information of the object 741 to be operated to a bright color. As a result, the operation line 711 is generated in a bright color, which makes it easy to see.
  • When the environment color is bright, the selection feedback generation unit 632 changes the information regarding the color in the object information of the object 741 to be operated to a dark color.
  • As a result, the operation line 711 is generated in a dark color, which makes it easier to see (a sketch of this color selection follows below).
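A minimal sketch of such an environment-dependent color choice is shown below. Treating the environment as a single normalised luminance value and using a 0.5 threshold are simplifying assumptions for the example.

```python
def line_color_for_environment(env_luminance: float) -> str:
    """Choose a bright line colour for dark environments and a dark colour for
    bright environments so that the operation line stays easy to see.

    env_luminance is assumed to be normalised to 0.0 (dark) .. 1.0 (bright).
    """
    return "#FFFFFF" if env_luminance < 0.5 else "#202020"

print(line_color_for_environment(0.2))  # dark room   -> "#FFFFFF"
print(line_color_for_environment(0.9))  # bright room -> "#202020"
```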
  • As described above, in the third embodiment of the information processing apparatus, when the user operates an object in the xR space provided to the user, feedback such as an operation line corresponding to the attribute of the object to be operated is presented to the user. This makes it possible for the user to intuitively and easily grasp what kind of attribute the object being operated has and which part of the object is being operated.
  • The series of processes in the information processing device 11, the information processing device 301, or the information processing device 601 described above can be executed by hardware or by software.
  • When the series of processes is executed by software, the programs constituting the software are installed in a computer.
  • Here, the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 38 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes by means of a program.
  • In the computer, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are interconnected by a bus 904.
  • An input / output interface 905 is further connected to the bus 904.
  • An input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a drive 910 are connected to the input / output interface 905.
  • The input unit 906 includes a keyboard, a mouse, a microphone, and the like.
  • The output unit 907 includes a display, a speaker, and the like.
  • The storage unit 908 includes a hard disk, a non-volatile memory, and the like.
  • The communication unit 909 includes a network interface and the like.
  • The drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 901 loads the program stored in the storage unit 908 into the RAM 903 via the input / output interface 905 and the bus 904 and executes it, whereby the above-described series of processes is performed.
  • The program executed by the computer (CPU 901) can be provided by being recorded on the removable medium 911 as package media or the like, for example.
  • The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • The program can be installed in the storage unit 908 via the input / output interface 905 by mounting the removable medium 911 on the drive 910. Further, the program can be received by the communication unit 909 via a wired or wireless transmission medium and installed in the storage unit 908. In addition, the program can be installed in the ROM 902 or the storage unit 908 in advance.
  • The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in the present specification, or a program in which processing is performed in parallel or at necessary timing such as when a call is made.
  • This technology can also take the following configurations.
  • An information processing device having a processing unit that controls the presentation of information to any one or more of the visual, auditory, and tactile senses of a user for an operation of a virtual object, based on an attribute of the virtual object to be operated or the status of the operation on the virtual object.
  • the information processing apparatus according to (1) wherein the operation of the virtual object includes an operation of bringing the user's hand into contact with the virtual object.
  • the processing unit is The information processing device according to (1) or (2), which controls the presentation based on whether or not the user intends to operate the virtual object.
  • the processing unit is The information processing apparatus according to (3), wherein the information is not presented for the operation of the virtual object when the user does not intend to operate the virtual object.
  • the processing unit is The information processing apparatus according to (3) or (4), which presents different information for the operation of the virtual object depending on whether or not the user intends to operate the virtual object.
  • the processing unit is The information processing device according to (3) or (5), wherein the user changes the brightness of the virtual object as the information to the visual sense when the user does not intend to operate the virtual object.
  • the processing unit is The information processing apparatus according to (5) or (6), wherein, when the user does not intend to operate the virtual object, a sound or vibration of a type or amplitude corresponding to the speed of the user's hand is presented as the information to the auditory sense or the tactile sense.
  • the processing unit is The information processing apparatus according to any one of (1) to (7), which controls the presentation based on whether or not the virtual object is within the field of view of the user.
  • the processing unit is The information processing apparatus according to any one of (1) to (8), which controls the presentation based on the orientation of the user's hand with respect to the virtual object.
  • the processing unit is The information processing apparatus according to any one of (1) to (9), which controls the presentation based on whether or not the object is held in the user's hand.
  • the processing unit is The information processing apparatus according to any one of (1) to (10), which controls the presentation based on the direction of the user's line of sight or the direction of the head-mounted display worn by the user.
  • the processing unit is The information processing apparatus according to any one of (1) to (11), which controls the presentation based on the positional relationship between the user and the virtual object.
  • the processing unit is The information processing apparatus according to any one of (1) to (12), which controls the presentation based on the state of the user's hand.
  • the processing unit is The information processing apparatus according to any one of (1) to (13), which controls the presentation based on the relationship with the object visually recognized by the user.
  • the processing unit is The information processing apparatus according to (1) or (2), which presents different information for the operation of the virtual object depending on whether or not the user is visually recognizing the virtual object.
  • the processing unit is When the virtual object does not exist within the field of view of the user or the head-mounted display worn by the user, or when the user has his / her eyes closed, the virtual object is not visually recognized.
  • the processing unit is The information processing apparatus according to (15) or (16), wherein the virtual object is not visually recognized when the virtual object is present in the peripheral visual field of the user.
  • the processing unit is The above (15) to (17), wherein the virtual object is not visually recognized when the virtual object does not exist in the central visual field of the user during a predetermined time from the present to the past. Information processing equipment.
  • the processing unit is The information processing apparatus according to any one of (15) to (18), wherein the virtual object is not visually recognized when the object that shields the virtual object exists.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (19), which controls the presentation based on the attribute of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (20), which controls the presentation based on the size of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (21), which presents an image corresponding to the size of the virtual object as the information to the visual sense.
  • the processing unit is One of the above (1), (2), and (15) to (22) that controls the presentation based on the number of the virtual objects when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (23), wherein the sound or vibration of a waveform corresponding to the number of virtual objects is presented as the information to the auditory sense or the tactile sense.
  • the processing unit is One of (1), (2), and (15) to (24) that controls the presentation based on the color of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (25), which presents sound or vibration having a frequency corresponding to the color of the virtual object as the information to the auditory sense or the tactile sense.
  • the processing unit is One of the above (1), (2), and (15) to (26) that controls the presentation based on the type of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (27), wherein the sound or vibration of the number of times according to the type of the virtual object is presented as the information to the auditory sense or the tactile sense.
  • the processing unit is The presentation is controlled based on the orientation of the virtual object with respect to the user's hand when the user is not visually recognizing the virtual object (1), (2), and (15) to (28). ) Is described in any of the information processing devices.
  • the processing unit is The information processing apparatus according to (29), wherein when the orientation of the virtual object is different with respect to the direction in which the hand of the user has moved, different vibrations are presented as the information to the tactile sensation.
  • the processing unit is The presentation is controlled based on whether or not the virtual object is a virtual object predicted to be operated by the user when the user does not visually recognize the virtual object (1), (2). ), And the information processing apparatus according to any one of (15) to (30).
  • the processing unit is The information processing apparatus according to (1), which presents, as the information to the visual sense, an operation line that connects the virtual object and the hand of the user and is used for operating the virtual object.
  • the processing unit is The information processing apparatus according to (35), wherein the operation line corresponding to the color, hardness, weight, or image as the attribute of the virtual object is presented as the information to the visual sense.
  • the processing unit is The information processing according to (35) or (36), wherein the operation line is presented by a solid line, a dotted line, a line having only a start point portion and an end point portion, or a series of images as the attribute of the virtual object. Device.
  • the processing unit is The information processing apparatus according to any one of (35) to (37), wherein the shape of the operation line is changed according to the hardness as the attribute of the virtual object or the weight.
  • the processing unit is The information processing apparatus according to any one of (35) to (38), wherein the attribute is changed based on the characteristics of the user.
  • the processing unit is The information processing apparatus according to any one of (35) to (39), wherein the attribute is changed when the virtual object collides with another object.
  • the processing unit is The information processing apparatus according to any one of (35) to (40), wherein the attribute is changed depending on whether the virtual object exists in the air or on the object.
  • the processing unit is The information processing apparatus according to any one of (35) to (41), wherein the color of the virtual object as an attribute is changed according to the environment color.
  • the processing unit is The information processing apparatus according to any one of (1) and (35) to (42), which controls the presentation of the sound as the information to the auditory sense based on the sound as the attribute of the virtual object.
  • the processing unit is The information processing apparatus according to (43), wherein the sound as the information to the auditory sense is changed according to the speed of movement of the virtual object.
  • the processing unit is The information processing apparatus according to any one of (1) and (35) to (42), which controls the presentation of the vibration as the information to the tactile sensation based on the vibration as the attribute of the virtual object.
  • the processing unit is The information processing apparatus according to (45), wherein the vibration as the information to the tactile sensation is changed according to the speed of movement of the virtual object.
  • An information processing method in which the processing unit of an information processing apparatus having the processing unit controls the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object, based on the attributes of the virtual object to be operated or the status of the operation on the virtual object.
  • A program for causing a computer to function so as to control the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object, based on the attributes of the virtual object to be operated or the status of the operation on the virtual object.

Abstract

The present technology relates to an information processing device, an information processing method, and a program which make it possible to improve operability in a space provided by x-Reality (xR). Presentation of information to any one of the visual sensation, auditory sensation and tactile sensation of a user with respect to a manipulation of a virtual object is controlled on the basis of an attribute of the manipulated virtual object or the status of the manipulation with respect to the virtual object.

Description

Information processing device, information processing method, and program
 本技術は、情報処理装置、情報処理方法、及び、プログラムに関し、特に、xR(x-Reality)により提供される空間における操作性を向上させるようにした情報処理装置、情報処理方法、及び、プログラムに関する。 This technology relates to information processing devices, information processing methods, and programs, and in particular, information processing devices, information processing methods, and programs designed to improve operability in the space provided by xR (x-Reality). Regarding.
Patent Documents 1 to 4 disclose techniques for improving operability and the like in a virtual space.
Japanese Unexamined Patent Publication No. 2014-092906
Japanese Unexamined Patent Publication No. 2019-008798
Japanese Patent Application Laid-Open No. 2003-067107
Japanese Patent No. 5871345
 xRにより提供される空間・世界では、仮想オブジェクト等に対する操作感等が現実世界とは異なることもあり、操作性の面で改善の余地がある。 In the space / world provided by xR, the operability of virtual objects may differ from that of the real world, and there is room for improvement in terms of operability.
 本技術はこのような状況に鑑みてなされたものであり、xR(x-Reality)により提供される空間における操作性を向上させるようにする。 This technology was made in view of such a situation, and it is intended to improve the operability in the space provided by xR (x-Reality).
The information processing device or program of the present technology is an information processing device having a processing unit that controls the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of a virtual object, based on the attributes of the virtual object to be operated or the situation of the operation on the virtual object, or a program for causing a computer to function as such an information processing device.
 本技術の情報処理方法は、処理部を有する情報処理装置の前記処理部が、操作される仮想物体の属性、又は、前記仮想物体に対する操作の状況に基づいて、前記仮想物体の操作に対するユーザの視覚、聴覚、及び、触覚のうちのいずれか1以上への情報の提示を制御する情報処理方法である。 In the information processing method of the present technology, the information processing unit of the information processing apparatus having the processing unit is operated by the user for the operation of the virtual object based on the attribute of the virtual object to be operated or the operation status of the virtual object. It is an information processing method that controls the presentation of information to any one or more of visual, auditory, and tactile sensations.
 本技術においては、操作される仮想物体の属性、又は、前記仮想物体に対する操作の状況に基づいて、前記仮想物体の操作に対するユーザの視覚、聴覚、及び、触覚のうちのいずれか1以上への情報の提示が制御される。 In the present technology, based on the attribute of the virtual object to be operated or the situation of the operation on the virtual object, one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object can be obtained. The presentation of information is controlled.
It is a block diagram which showed the structural example of the 1st Embodiment of the information processing apparatus to which this technique is applied.
It is a block diagram which shows the 1st configuration example of the hardware in which the information processing apparatus of FIG. 1 is constructed.
It is a block diagram which shows the 2nd configuration example of the hardware in which the information processing apparatus of FIG. 1 is constructed.
It is an external view which exemplifies HMD.
It is a figure exemplifying the arrangement of a sensor and an oscillator in a hand controller.
It is a flowchart illustrating the processing procedure of the information processing apparatus of FIG. 1.
It is a figure explaining the 1st judgment criterion.
It is a figure explaining the 2nd judgment criterion.
It is a figure explaining the 3rd judgment criterion.
It is a figure explaining the 4th judgment criterion.
It is a figure explaining the 5th judgment criterion.
It is a figure explaining the 6th judgment criterion.
It is a figure explaining the 7th judgment criterion.
It is a figure explaining the 8th judgment criterion.
It is a figure explaining the 9th judgment criterion.
It is a figure explaining the feedback corresponding to no intention.
It is a block diagram which showed the structural example of the 2nd Embodiment of the information processing apparatus to which this technique is applied.
It is a flowchart illustrating the processing procedure of the information processing apparatus of FIG. 17.
It is a figure explaining the 1st Example of feedback corresponding to the non-visual state.
It is a figure explaining the 2nd modification of the 1st Example of feedback corresponding to the non-visual state.
It is a figure explaining the 2nd Example of feedback corresponding to the non-visual state.
It is a figure explaining the feedback for presenting the rough number of the target objects.
It is a figure explaining the 4th Example of feedback corresponding to the non-visual state.
It is a figure explaining the 5th Example of feedback corresponding to the non-visual state.
It is a figure explaining the 5th Example of feedback corresponding to the non-visual state.
It is a figure explaining the 6th Example of feedback corresponding to the non-visual state.
It is a figure explaining the 7th Example of feedback corresponding to the non-visual state.
It is a block diagram which showed the structural example of the 3rd Embodiment of the information processing apparatus to which this technique is applied.
It is a figure explaining the operation of an object.
It is a flowchart illustrating the processing procedure of the information processing apparatus of FIG. 28.
It is a figure exemplifying a virtual object.
It is a figure which illustrated the form of the operation line when the leaf part is selected as an operation target.
It is a figure which illustrated the form of the operation line when the trunk part is selected as an operation target.
It is a figure exemplifying the case where the weight of the object information is changed according to the strength of the user.
It is a figure which showed the state when the object information was changed when the object to be operated collided.
It is a figure which showed the state when the object to be operated is placed at the designated position on a surface.
It is a figure explaining the process for making the operation line easy to see.
It is a block diagram which shows the configuration example of the hardware of the computer which executes a series of processing by a program.
 Hereinafter, embodiments of the present technology will be described with reference to the drawings.
<< First embodiment of the information processing apparatus to which this technology is applied >>
FIG. 1 is a block diagram showing a configuration example of a first embodiment of an information processing apparatus to which the present technology is applied.
 図1の情報処理装置11は、HMD(Head Mounted Display)及びコントローラを用いて、ユーザに対してxR(x-Reality)により生成された空間・世界を提供する。xRとは、VR(仮想現実:Virtual Reality)、AR(拡張現実:Augmented Reality)、MR(複合現実:Mixed Reality)、及び、SR(代替現実:Substitution Reality)等を含む技術である。VRは、現実とは異なる空間・世界をユーザに提供する技術である。ARは、現実空間(以下、実空間という)に現実には存在しない情報をユーザに提供する技術である。MRは、現在と過去の実空間と仮想空間とを融合した世界をユーザに提供する技術である。SRは、実空間に過去の映像を映し出して過去の事象が現実に起きているかのように錯覚させる技術である。 The information processing device 11 in FIG. 1 uses an HMD (Head Mounted Display) and a controller to provide a user with a space / world generated by xR (x-Reality). xR is a technology that includes VR (Virtual Reality), AR (Augmented Reality), MR (Mixed Reality), SR (Substitution Reality), and the like. VR is a technology that provides users with a space and world that is different from reality. AR is a technology that provides users with information that does not actually exist in the real space (hereinafter referred to as the real space). MR is a technology that provides users with a world that fuses the real space and virtual space of the present and the past. SR is a technology that projects past images in real space to give the illusion that past events are actually happening.
 The space provided (generated) by xR is referred to as the xR space. The xR space contains at least one of a real object that exists in the real space and a virtual object. In the following, when there is no need to distinguish between an object that exists in the real space and a virtual object, it is simply called an object.
 The HMD is a device equipped with a display that mainly presents images to the user. In the present specification, a display held on the head is referred to as an HMD regardless of how it is mounted on the head. Instead of the HMD, a stationary display such as that of a PC (personal computer), or the display of a mobile terminal such as a notebook PC, a smartphone, or a tablet, may be used.
 The controller is a device for detecting the user's operation, and is not limited to a specific type of controller. For example, there are a controller that the user holds in the hand, a controller that is worn on the user's hand (hand controller), and the like.
 The information processing device 11 has a sensor unit 21, a control unit 22, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26.
 The sensor unit 21 includes various sensors such as a camera 31, a gyro sensor 32, an acceleration sensor 33, an azimuth sensor 34, and a ToF (Time-of-Flight) camera 35. The sensors of the sensor unit 21 are arranged in any of a device such as an HMD or a controller worn by the user, a device such as a controller held by the user, the environment in which the user exists, and the user's body. The types of sensors included in the sensor unit 21 are not limited to those shown in FIG. 1. The sensor unit 21 supplies the sensor information detected by the various sensors to the control unit 22.
 When the sensor unit 21 is distinguished for each device or part where the sensors are arranged, there are a plurality of sensor units 21, but this is not made explicit in the present embodiment. When there are a plurality of sensor units 21, the types of sensors possessed by each sensor unit 21 may differ.
 The camera 31 shoots a subject within its shooting range and supplies the image of the subject as sensor information to the control unit 22. The sensor unit 21 arranged in the HMD 73, which will be described later, may include, for example, an outward-facing camera and an inward-facing camera as the camera 31. The outward-facing camera captures the view in the front direction of the HMD. The inward-facing camera placed on the HMD captures the user's eyes.
 The gyro sensor 32 detects the angular velocity around three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 22. The gyro sensor 32 is arranged in, for example, the HMD or the controller.
 The acceleration sensor 33 detects acceleration in the three orthogonal axis directions, and supplies the detected acceleration to the control unit 22 as sensor information. The acceleration sensor 33 is arranged in the HMD or the controller in combination with the gyro sensor 32, for example.
 The azimuth sensor 34 detects the direction of the geomagnetic field as an azimuth, and supplies the detected azimuth as sensor information to the control unit 22. The azimuth sensor 34 is arranged in the HMD or the controller in combination with the gyro sensor 32 and the acceleration sensor 33, for example.
 The ToF camera 35 detects the distance to a subject (object) and supplies a distance image having the detected distance as a pixel value to the control unit 22 as sensor information.
 The control unit 22 controls the output to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25 based on the sensor information from the sensor unit 21.
 The control unit 22 includes a sensor information acquisition unit 41, a position / posture acquisition unit 42, an object information acquisition unit 43, an intention identification unit 44, a feedback determination unit 45, an output control unit 46, and the like.
 The sensor information acquisition unit 41 acquires the sensor information detected by the various sensors of the sensor unit 21.
 The position / posture acquisition unit 42 acquires the positions and postures of the HMD and the controller as position / posture information based on the sensor information acquired by the sensor information acquisition unit 41. The position / posture acquisition unit 42 acquires not only the position and posture of the controller itself but also the positions and postures of the user's hand and fingers when a controller worn on the user's hand is used. The positions and postures of the user's hands and fingers can also be acquired based on the images taken by the camera 31.
 The object information acquisition unit 43 acquires the object information of the objects existing in the xR space. The object information may be information that is tagged to the object in advance, or may be dynamically extracted from the objects in the xR space presented to the user. The object information includes intention determination information and feedback information in addition to information regarding the shape, position, and posture of the object in the xR space. The intention determination information is information regarding the judgment criteria used for determining the presence or absence of an intention in the intention identification unit 44, which will be described later. The feedback information is information about the contents of the feedback to the user's visual, auditory, and tactile senses for contact with the object.
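To make the structure of this object information concrete, the following is a hedged Python sketch. The split into pose information, intention determination information, and feedback information follows the description above, but every field name, default value, and type is an assumption introduced for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class IntentionCriteria:
    """Assumed thresholds consulted by the intention identification unit 44."""
    max_hand_speed: float = 1.5        # m/s; faster contact could be treated as unintended
    require_object_in_view: bool = True

@dataclass
class FeedbackInfo:
    """Assumed per-object contents of visual / auditory / tactile feedback."""
    contact_sound: str = "default_tap.wav"
    vibration_pattern: str = "short_pulse"
    highlight_color: str = "#FFD700"

@dataclass
class ObjectInfo:
    """Object information for one object in the xR space."""
    shape: str                                     # e.g. a mesh identifier
    position: Tuple[float, float, float]           # metres in the xR space
    posture: Tuple[float, float, float, float]     # orientation as a quaternion
    intention: IntentionCriteria = field(default_factory=IntentionCriteria)
    feedback: FeedbackInfo = field(default_factory=FeedbackInfo)
    tags: Dict[str, str] = field(default_factory=dict)   # optional pre-tagged attributes

cup = ObjectInfo(shape="cup_mesh", position=(0.2, 1.0, 0.5), posture=(0.0, 0.0, 0.0, 1.0))
print(cup.feedback.vibration_pattern)   # -> "short_pulse"
```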
 Based on the position / posture information of the controller (including the information regarding the positions and postures of the user's hand and fingers) acquired by the position / posture acquisition unit 42 and the object information acquired by the object information acquisition unit 43, the intention identification unit 44 identifies (determines) the presence or absence of the user's intention with respect to the contact between the user's hand and the object.
 The feedback determination unit 45 generates (the contents of) the feedback to the user's visual, auditory, and tactile senses with respect to the contact between the user's hand and the object, based on the determination result of the intention identification unit 44 and the object information acquired by the object information acquisition unit 43.
 The output control unit 46 generates an image (video), a sound, and a tactile sensation for presenting to the user the xR space generated by a predetermined program such as an application, and supplies output signals for outputting them to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively. In this way, the output control unit 46 controls the visual, auditory, and tactile information presented to the user. When generating the image, sound, and tactile sensation to be presented to the user, the output control unit 46 reflects the contents of the feedback generated by the feedback determination unit 45.
 The video display unit 23 is a display that displays the image (video) of the output signal supplied from the output control unit 46.
 The sound presentation unit 24 is a speaker that outputs the sound of the output signal supplied from the output control unit 46.
 The tactile presentation unit 25 is a presentation unit that presents the tactile sensation of the output signal supplied from the output control unit 46. In the present embodiment, it is assumed that an oscillator is used as the tactile presentation unit 25, and the tactile sensation is presented to the user by vibration. The tactile presentation unit 25 is not limited to the oscillator, and may be a vibration generator of any type, such as an eccentric motor or a linear vibrator. The tactile presentation unit 25 is not limited to presenting vibration as the tactile sensation, and may be a presentation unit that presents an arbitrary tactile sensation such as pressure.
 The storage unit 26 stores the programs and data used in the processing of the control unit 22.
<First configuration example of hardware>
FIG. 2 is a block diagram showing a first configuration example of the hardware in which the information processing apparatus 11 of FIG. 1 is constructed.
 The hardware 61 of FIG. 2 has a display side device 71, a sensor side device 72, an HMD 73, a speaker 74, a controller 75, and a sensor 76.
 The display side device 71 has a CPU (Central Processing Unit) 81, a memory 82, an input / output I / F (interface) 83, and a communication device 84. These components are connected by a bus so that data can be exchanged with each other.
 The CPU 81 executes a series of instructions included in the program stored in the memory 82.
 The memory 82 includes RAM (Random Access Memory) and storage. The storage includes non-volatile memory such as ROM (Read Only Memory) and a hard disk device. The memory 82 stores programs and data used in the processing of the CPU 81.
 The input / output I / F 83 inputs / outputs signals to / from the HMD 73, the speaker 74, and the controller 75.
 The communication device 84 controls communication with the communication device 94 of the sensor side device 72.
 The sensor side device 72 has a CPU 91, a memory 92, an input / output I / F 93, and a communication device 94. These components are connected by a bus so that data can be exchanged with each other.
 The CPU 91 executes a series of instructions included in the program stored in the memory 92.
 The memory 92 includes RAM and storage. The storage includes non-volatile memory such as ROM and a hard disk device. The memory 92 stores programs and data used in the processing of the CPU 91.
 The input / output I / F 93 inputs / outputs signals to / from the sensor 76.
 The communication device 94 controls communication with the communication device 84 of the display side device 71.
 The HMD 73 is a device having the video display unit 23 of FIG. 1, and is attached to the user's head to present an image (video) of the xR space to the user. In addition to the video display unit 23, the HMD 73 may have the sound presentation unit 24, as well as the camera 31, the gyro sensor 32, the acceleration sensor 33, the azimuth sensor 34, and the ToF camera 35 of the sensor unit 21.
 The speaker 74 is a device provided with the sound presentation unit 24 of FIG. 1, and presents sound to the user. The speaker 74 is arranged, for example, in the environment where the user exists.
 The controller 75 is a device having the tactile presentation unit 25 of FIG. 1. The controller 75 is attached to or gripped by the user's hand. In addition to the tactile presentation unit 25, the controller 75 may have the gyro sensor 32, the acceleration sensor 33, and the azimuth sensor 34 of the sensor unit 21. The controller 75 may also have control buttons.
 The sensor 76 is a device having various sensors of the sensor unit 21 of FIG. 1, and is arranged in the environment where the user exists.
<Second hardware configuration example>
FIG. 3 is a block diagram showing a second configuration example of the hardware in which the information processing apparatus 11 of FIG. 1 is constructed. In the drawings, the same parts as those in FIG. 2 are designated by the same reference numerals, and the description thereof will be omitted.
 The hardware 61 of FIG. 3 has an HMD 73, a speaker 74, a controller 75, a sensor 76, and a display / sensor device 111. Therefore, the hardware 61 of FIG. 3 is common to the case of FIG. 2 in that it has the HMD 73, the speaker 74, the controller 75, and the sensor 76. However, the hardware 61 of FIG. 3 differs from the case of FIG. 2 in that the display / sensor device 111 is provided in place of the display side device 71 and the sensor side device 72 of FIG. 2.
 The display / sensor device 111 of FIG. 3 is a device in which the display side device 71 and the sensor side device 72 of FIG. 2 are integrated. That is, the display / sensor device 111 of FIG. 3 corresponds to the display side device 71 in the case where the processing of the sensor side device 72 in FIG. 2 is performed by the display side device 71.
 The display / sensor device 111 of FIG. 3 is common to the display side device 71 of FIG. 2 in that it has a CPU 121, a memory 122, and an input / output I / F 123. However, the display / sensor device 111 of FIG. 3 differs from the display side device 71 of FIG. 2 in that it does not have the communication device 84 and in that the sensor 76 is connected to the input / output I / F 123.
<Appearance of HMD>
FIG. 4 is an external view illustrating the HMD 73.
 The HMD 73 of FIG. 4 is a glasses-type HMD (AR glasses) compatible with AR or MR. The HMD 73 is fixed to the head of the user 131. The HMD 73 has a video display unit 23A for the right eye and a video display unit 23B for the left eye, corresponding to the video display unit 23 of FIG. 1. The video display units 23A and 23B are arranged in front of the eyes of the user 131.
 The video display units 23A and 23B are, for example, transmissive displays. Virtual AR objects (virtual objects) such as text, figures, or objects having a three-dimensional structure are displayed on the video display units 23A and 23B. As a result, the virtual objects are superimposed and displayed on the scenery of the real space.
 Outward-facing cameras 132A and 132B corresponding to the camera 31 of the sensor unit 21 of FIG. 1 are provided on the right and left edges of the front surface of the HMD 73. The outward-facing cameras 132A and 132B photograph the front direction of the HMD 73. That is, the outward-facing cameras 132A and 132B photograph, as their shooting range, the real space in the direction visually recognized by the user's right eye and left eye. For example, by using the outward-facing cameras 132A and 132B as a stereo camera, the shape, position, and posture of an object existing in the real space (a real object) can be recognized. By superimposing the images of the real space taken by the outward-facing cameras 132A and 132B and the virtual objects and displaying them on the video display units 23A and 23B, the HMD 73 can be used as a video see-through type HMD.
 Inward-facing cameras 132C and 132D corresponding to the camera 31 of the sensor unit 21 of FIG. 1 are provided on the right and left edges of the back side of the HMD 73. The inward-facing cameras 132C and 132D photograph the user's right eye and left eye. From the images taken by the inward-facing cameras 132C and 132D, the position of the user's eyeballs, the position of the pupils, the direction of the line of sight, and the like can be recognized.
 The HMD 73 may have the speaker 74 (or an earphone speaker) and a microphone shown in FIG. 2. The HMD 73 is not limited to the form of FIG. 4, and may be an HMD of an appropriate form corresponding to the type of xR space provided by xR (the type of xR).
<Controller>
FIG. 5 is a diagram illustrating the arrangement of the sensor and the oscillator in the hand controller.
 The IMUs (Inertial Measurement Units) 151A to 151C and the oscillators 153A to 153C of FIG. 5 are held by, for example, a holding body (not shown) of the controller (hand controller), and by attaching the holding body to the user's hand 141 they are arranged at various places on the hand 141. The hand controller corresponds to the controller 75 of FIG. 2. The IMU 151A and the oscillator 153A are arranged on the back of the hand 141. The IMU 151B and the oscillator 153B are arranged on the thumb. The IMU 151C and the oscillator 153C are arranged on the index finger. The IMUs 151A to 151C are, for example, sensor units in which the gyro sensor 32, the acceleration sensor 33, and the azimuth sensor 34 of the sensor unit 21 of FIG. 1 are integrally packaged.
 With the hand controller, the position and posture of the hand, the position and posture of the thumb, and the position and posture of the index finger can be recognized based on the sensor information of the IMUs 151A to 151C. With the hand controller, the oscillators 153A to 153C present a tactile sensation of contact with an object to the user's hand and fingers.
 The IMUs and oscillators corresponding to the IMUs 151B and 151C and the oscillators 153B and 153C are not limited to the thumb and the index finger, and may be arranged on any one or more of the five fingers.
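The embodiment does not specify how the poses are computed from the IMU readings, so the following is only a generic textbook sketch: a complementary filter that blends integrated gyro rates with the tilt implied by the accelerometer to estimate the roll and pitch of one IMU (for example the one on the back of the hand). All parameter values are assumptions.

```python
import math

def update_orientation(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One complementary-filter step estimating the roll and pitch of an IMU.

    gyro  = (gx, gy, gz) angular velocity in rad/s
    accel = (ax, ay, az) acceleration in m/s^2 (including gravity)
    Returns the new (roll, pitch) in radians.
    """
    gx, gy, _ = gyro
    ax, ay, az = accel
    # Short-term estimate: integrate the gyro rates.
    roll_g = roll + gx * dt
    pitch_g = pitch + gy * dt
    # Long-term reference: tilt angles implied by the gravity vector.
    roll_a = math.atan2(ay, az)
    pitch_a = math.atan2(-ax, math.hypot(ay, az))
    # Blend the two so that gyro drift is slowly corrected by the accelerometer.
    return (alpha * roll_g + (1.0 - alpha) * roll_a,
            alpha * pitch_g + (1.0 - alpha) * pitch_a)

roll, pitch = 0.0, 0.0
roll, pitch = update_orientation(roll, pitch, (0.01, 0.0, 0.0), (0.0, 0.0, 9.81), dt=0.01)
print(round(roll, 5), round(pitch, 5))
```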
<Procedure of processing performed by the information processing device 11>
FIG. 6 is a flowchart illustrating the processing procedure of the information processing apparatus 11.
 In step S11, the output control unit 46 supplies to the video display unit 23 an output signal for outputting an image of virtual objects and the like in the xR space generated by the program, and presents the virtual objects in the user's field of view. The process proceeds from step S11 to step S12.
 In step S12, the position / posture acquisition unit 42 acquires the position and posture of the controller 75. The process proceeds from step S12 to step S13.
 In step S13, it is assumed that the user moves his / her hand and touches an object. The process proceeds from step S13 to step S14.
 In step S14, the intention identification unit 44 acquires the object information of the object in contact with the user's hand from the object information acquisition unit 43 of FIG. 1. The intention identification unit 44 determines whether or not the user intends the contact between the user's hand and the object based on the object information (intention determination information). That is, the intention identification unit 44 determines the presence or absence of an intention for the contact between the hand and the object.
 If it is determined in step S14 that there is no intention, the process returns to step S11 and is repeated from step S11.
 If it is determined in step S14 that there is an intention, the process proceeds to step S15.
 In step S15, the feedback determination unit 45 generates feedback to the user for the contact between the user's hand and the object. The feedback for the contact between the user's hand and the object means presenting to the user the fact that the hand has touched the object. The presentation to the user is made to any one or more of the user's visual, auditory, and tactile senses. Generating feedback means generating the contents of the feedback to the user's visual, auditory, and tactile senses. The output control unit 46 presents to the user the image (video), sound, and tactile sensation in which the contents of the feedback generated by the feedback determination unit 45 are reflected, via the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively. The process returns from step S15 to step S11 and is repeated from step S11.
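The flow of steps S11 to S15 can be summarised as a simple processing loop. The skeleton below is only a sketch of that flow; the four collaborator objects stand in for the output control unit 46, the position / posture acquisition unit 42, the intention identification unit 44, and the feedback determination unit 45, and all of their method names are assumptions.

```python
def processing_loop(output_control, pose_acquisition, intention_unit, feedback_unit, running):
    """Skeleton of the processing procedure of FIG. 6 (steps S11 to S15)."""
    while running():
        output_control.present_virtual_objects()              # S11: present the xR space
        hand_pose = pose_acquisition.get_controller_pose()     # S12: controller / hand pose
        contact = intention_unit.detect_contact(hand_pose)     # S13: did the hand touch an object?
        if contact is None:
            continue                                           # nothing touched, back to S11
        if not intention_unit.is_intentional(contact):         # S14: intention presence / absence
            continue                                           # unintended contact: no feedback
        feedback = feedback_unit.generate(contact)             # S15: build the feedback contents
        output_control.apply_feedback(feedback)                # reflect it in video / sound / haptics
```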
<Details of feedback on the contact between the user's hand and the object>
 The present technology may be applied to any space provided to the user by xR. The space provided by xR may be, for example, a space in which virtual objects are superimposed on the real space, a virtual space, or a space in which the real space and a virtual space are fused. In the xR spaces provided by these forms of xR, the present technology can be applied to the feedback for the contact between the user's hand and an object in each of the following combinations: the user's virtual hand and a virtual object, the user's virtual hand and a real object, and the user's real hand and a virtual object. However, in the following, it is not made explicit whether the user's hand is a virtual hand or a real hand; it is simply referred to as the user's hand, and the object that comes into contact with the user's hand is described as a virtual object.
 図1の情報処理装置11の意図識別部44は、ユーザが手を動かして仮想オブジェクトに接触した場合(又は、仮想オブジェクトが動いてユーザの手に接触した場合)に、仮想オブジェクトとの接触に対するユーザの意図の有無を判定(意図有無判定)する。この際、意図識別部44は、オブジェクト情報取得部43により取得された意図判定情報に基づいて意図有無判定を行う。意図判定情報には、意図有無判定での後述の判断基準となる情報が含まれる。判断基準はユーザの手が接触した仮想オブジェクトごとに設定されていてもよいし、仮想オブジェクトにかかわらず共通であってもよい。情報処理装置11のフィードバック判断部45は、意図有無判定により意図有りと判定された場合には、ユーザへのフィードバックを生成する。この場合のフィードバックは、通常のフィードバックである。 The intention identification unit 44 of the information processing apparatus 11 of FIG. 1 responds to contact with a virtual object when the user moves his / her hand and touches the virtual object (or when the virtual object moves and touches the user's hand). Judgment of the presence or absence of the user's intention (determination of the presence or absence of intention). At this time, the intention identification unit 44 determines whether or not there is an intention based on the intention determination information acquired by the object information acquisition unit 43. The intention determination information includes information that serves as a determination criterion described later in the intention presence / absence determination. The judgment criteria may be set for each virtual object that the user's hand touches, or may be common regardless of the virtual object. The feedback determination unit 45 of the information processing apparatus 11 generates feedback to the user when it is determined by the intention presence / absence determination that there is an intention. The feedback in this case is normal feedback.
For normal feedback, the feedback determination unit 45 generates the content of the visual, auditory, and tactile feedback caused by the contact between the user's hand and the virtual object, such as moving the virtual object and generating a contact sound. The content of the feedback is set for each contacted object based on the feedback information acquired by the object information acquisition unit 43 in FIG. 1. When the output control unit 46 generates the image, sound, and tactile stimulus for presenting the xR space to the user, it reflects the feedback content generated by the feedback determination unit 45 in them. The output control unit 46 then supplies the generated image, sound, and tactile stimulus as output signals to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25 in FIG. 1, respectively. In this way, visual, auditory, and tactile feedback on the contact between the user's hand and the virtual object is provided. Note that any well-known feedback can be applied as normal feedback, which is not limited to specific content. The content and processing of normal feedback are therefore not described below.
On the other hand, when the intention determination indicates that there is no intention, the feedback determination unit 45 does not provide feedback to the user. In this case, the feedback determination unit 45 does not generate feedback. Alternatively, instead of generating no feedback when it is determined that there is no intention, the feedback determination unit 45 may generate feedback corresponding to that case (feedback corresponding to no intention) based on the feedback information acquired by the object information acquisition unit 43. Modes of feedback corresponding to no intention are described later.
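As a rough illustration of the branching described above, the following is a minimal sketch in Python. The function name and the feedback_info keys are hypothetical and do not appear in the present disclosure; the sketch only shows how the result of the intention determination selects between normal feedback, no feedback, and the reduced feedback described later.

```python
# A minimal sketch of the intention-dependent branching described above.
# The function name and the feedback_info keys are hypothetical.
from typing import Optional

def handle_contact(has_intention: bool, feedback_info: dict) -> Optional[dict]:
    """Return the feedback content to be reflected by the output control unit
    (image, sound, tactile stimulus), or None when nothing is presented."""
    if has_intention:
        # Normal feedback: e.g. move the object, play a contact sound, vibrate.
        return {
            "visual": feedback_info.get("visual", "move_object"),
            "audio": feedback_info.get("audio", "contact_sound"),
            "haptic": feedback_info.get("haptic", "short_high_frequency_pulse"),
        }
    if not feedback_info.get("no_intention_feedback", False):
        return None  # suppress feedback entirely for an unintended contact
    # Reduced feedback for an unintended contact (described later in the text):
    # a brightness change, a soft sound, a weak low-frequency vibration.
    return {"visual": "change_brightness", "audio": "soft_sound", "haptic": "weak_pulse"}
```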
(Intention presence/absence determination)
The intention identification unit 44 makes the intention presence/absence determination based on the first to ninth determination criteria described below. However, the determination is not necessarily made using all of the first to ninth criteria; it may be made using any one or more of them. When the intention identification unit 44 uses a plurality of criteria and any one of them indicates no intention, it determines that there is no intention as the overall (final) determination result (overall determination result). Alternatively, when any one of the criteria indicates an intention, the intention identification unit 44 may determine that there is an intention as the overall determination result, or it may decide the overall determination result according to the types of criteria that indicated no intention or an intention. For example, suppose that priorities are set in advance for the plurality of criteria. Even when a criterion of a given priority indicates an intention, the intention identification unit 44 may determine that there is no intention as the overall determination result if a criterion of higher priority indicates no intention.
The first to ninth determination criteria used in the intention presence/absence determination are described below in order. They can be summarized as follows.
・First criterion: whether the contact occurs within the field of view (angle of view)
・Second criterion: whether the hand is moving at high speed
・Third criterion: whether the object was touched from the back of the hand
・Fourth criterion: whether another object is being held in the hand
・Fifth criterion: whether the user's line of sight or head ray is on the contacted object
・Sixth criterion: the position of the contacted object relative to the user
・Seventh criterion: the relative distance between the head and the hand
・Eighth criterion: the distance between the index finger and the thumb
・Ninth criterion: the relatedness to the object being viewed
(First criterion)
FIG. 7 is a diagram illustrating the first determination criterion.
In FIG. 7, the user 161 wears the HMD 73 on the head and views the space provided by xR. The HMD 73 displays images (virtual objects) within the field of view (angle-of-view range) 171 to the user 161. Note that the field of view 171 may be regarded not as the field of view covering the entire image displayed by the HMD 73 to the user 161, but as a range limited to, for example, the central visual field in consideration of general human visual characteristics (a range of a predetermined viewing angle).
The virtual object 181 exists outside the field of view 171, and the virtual object 182 exists within the field of view 171.
FIG. 7 shows a situation in which the user 161 moves the hand 162 from outside the field of view 171 to the virtual object 182 within the field of view 171. In this case, the user 161 does not visually recognize anything other than the virtual objects within the field of view 171. Therefore, while the user 161 moves the hand 162 from outside the field of view 171 to inside it, the hand 162 may unintentionally come into contact with the virtual object 181 outside the field of view 171.
The intention identification unit 44 determines that there is no intention when the hand 162 of the user 161 comes into contact with the virtual object 181 existing outside the field of view 171. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact (collision) between the hand 162 and the virtual object 181. The manner in which the feedback determination unit 45 generates feedback corresponding to no intention is described later (the same applies hereinafter).
The intention identification unit 44 determines that there is an intention when the hand 162 of the user 161 comes into contact with the virtual object 182 existing within the field of view 171. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates the content of the feedback for the contact (collision) between the hand 162 and the virtual object 182.
When the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
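One possible implementation of the first criterion is an angle test between the HMD's forward (or gaze) direction and the direction toward the contacted object. The following sketch assumes 3-element vector inputs and an arbitrary placeholder half-angle; neither the representation nor the 30-degree value is taken from the disclosure.

```python
import math

def criterion1_intended(head_pos, head_forward, object_pos, half_angle_deg=30.0):
    """First criterion (sketch): the contact is judged intended only when the
    contacted object lies within a viewing cone of half_angle_deg around the
    HMD's forward direction. head_forward is assumed to be a unit vector, and
    30 degrees is a placeholder for the (possibly central-vision-limited)
    field of view."""
    to_obj = [o - h for o, h in zip(object_pos, head_pos)]
    norm = math.sqrt(sum(c * c for c in to_obj))
    if norm == 0.0:
        return True  # the object coincides with the head position; treat as visible
    cos_angle = sum(f * c for f, c in zip(head_forward, to_obj)) / norm
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle_deg <= half_angle_deg
```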
(Second criterion)
FIG. 8 is a diagram illustrating the second determination criterion.
In FIG. 8, the user 161 wears the HMD 73 on the head and views the space provided by xR. The virtual object 181 and the virtual object 182 exist in that space.
FIG. 8 shows a situation in which the hand 162 touches the virtual object 181 when the user 161 moves the hand 162 from a position away from the virtual object 181 in order to touch the virtual object 182. In this case, the user 161 is considered to move the hand 162 at a relatively high speed until just before touching the virtual object 182 and to slow the hand 162 down just before touching it. Therefore, when the hand 162 touches a virtual object while moving at or above a predetermined threshold speed, it can be considered that the user 161 does not intend to touch that virtual object.
When the hand 162 of the user 161 comes into contact with a virtual object, the intention identification unit 44 compares the moving speed of the hand 162 at that time with a predetermined threshold value. If the moving speed of the hand 162 is equal to or higher than the threshold value, the intention identification unit 44 determines that there is no intention. In FIG. 8, for example, when the hand 162 touches the virtual object 181, it is determined that there is no intention. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
If, as a result of the above comparison, the moving speed of the hand 162 is less than the threshold value, the intention identification unit 44 determines that there is an intention. In FIG. 8, for example, when the hand 162 touches the virtual object 182, it is determined that there is an intention. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
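The second criterion reduces to a simple threshold comparison, sketched below. The default threshold value is an arbitrary placeholder, not a value from the disclosure.

```python
def criterion2_intended(hand_speed: float, speed_threshold: float = 0.5) -> bool:
    """Second criterion (sketch): a contact made while the hand is moving at or
    above the threshold speed is judged unintended. The default of 0.5 m/s is
    a placeholder."""
    return hand_speed < speed_threshold
```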
(Third criterion)
FIG. 9 is a diagram illustrating the third determination criterion.
In the situations 173 and 174 of FIG. 9, the user 161 wears the HMD 73 on the head and views the space provided by xR. The virtual object 183 exists in that space.
Situation 173 shows a case in which the hand 162 touches the virtual object 183 from the back side of the hand when the user 161 moves the hand 162.
Situation 174 shows a case in which the hand 162 touches the virtual object 183 from the palm side of the hand when the user 161 moves the hand 162.
When the user 161 moves the hand 162 in order to touch the virtual object 183, the hand 162 is expected to touch the virtual object 183 from the palm side, as in situation 174. Therefore, when the virtual object 183 is touched from the back side of the hand 162, as in situation 173, it can be considered that the user 161 does not intend to touch the virtual object 183.
The intention identification unit 44 determines that there is no intention when the back side of the hand 162 of the user 161 touches the virtual object 183. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 183.
The intention identification unit 44 determines that there is an intention when the palm side of the hand 162 of the user 161 touches the virtual object 183. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
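One way to decide whether the object was touched from the palm or from the back of the hand is the sign of the dot product between the palm normal and the direction toward the object. The vector representation below is an assumption made for illustration.

```python
def criterion3_intended(palm_normal, contact_direction) -> bool:
    """Third criterion (sketch): intended only when the object is touched from
    the palm side. palm_normal points outward from the palm and
    contact_direction points from the hand toward the contacted object;
    both are 3-element vectors (not necessarily normalized)."""
    dot = sum(n * d for n, d in zip(palm_normal, contact_direction))
    return dot > 0.0  # positive: palm faces the object; otherwise: back of hand
```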
(Fourth criterion)
FIG. 10 is a diagram illustrating the fourth determination criterion.
In FIG. 10, the user 161 wears the HMD 73 on the head and views the space provided by xR. The virtual object 185 exists in that space.
FIG. 10 shows a situation in which the user 161 grasps the virtual object 184 with the hand 162 and moves the hand 162 while holding the virtual object 184 in order to move it to another position. In this case, the hand 162 holding the virtual object 184 may come into contact (collide) with the unintended virtual object 185. In such a case, it can be determined that the user 161 does not intend to touch the virtual object 185.
When the hand 162 of the user 161 comes into contact with the virtual object 185 while holding (grasping) the virtual object 184, the intention identification unit 44 determines that there is no intention. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact (collision) between the hand 162 and the virtual object 185.
When the hand 162 of the user 161 comes into contact with the virtual object 185 while not holding (not grasping) the virtual object 184, the intention identification unit 44 determines that there is an intention. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact (collision) between the hand 162 and the virtual object 185.
(Fifth criterion)
FIG. 11 is a diagram illustrating the fifth determination criterion.
In FIG. 11, the user 161 wears the HMD 73 on the head and views the space provided by xR. The virtual object 186 and the virtual object 187 exist in that space. The line of sight of the user 161, or the head ray indicating the front direction of the HMD 73, is on the virtual object 187 (the virtual object 187 exists in the line-of-sight direction or the head-ray direction).
FIG. 11 shows a situation in which the hand 162 comes into contact with the virtual object 186 when the user 161 moves the hand 162 toward the virtual object 187 in the line-of-sight direction or the head-ray direction. In this case, since the user 161 is not gazing at the virtual object 186, it can be considered that the user 161 does not intend to touch the virtual object 186.
The intention identification unit 44 determines that there is no intention when the hand 162 of the user 161 comes into contact with the virtual object 186, which does not exist in the line-of-sight direction or the head-ray direction of the user 161. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 186.
The intention identification unit 44 determines that there is an intention when the hand 162 of the user 161 comes into contact with the virtual object 187, which exists in the line-of-sight direction or the head-ray direction of the user 161. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object 187.
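Whether the gaze or head ray is "on" the contacted object can be approximated by a ray test against a bounding sphere of the object, as in the sketch below. The bounding-sphere approximation and the parameter names are assumptions.

```python
def criterion5_intended(ray_origin, ray_direction, object_center, object_radius) -> bool:
    """Fifth criterion (sketch): intended when the gaze ray (or the HMD head
    ray) hits the contacted object, approximated here by a bounding sphere.
    ray_direction is assumed to be a unit vector; all vectors have 3 elements."""
    to_center = [c - o for c, o in zip(object_center, ray_origin)]
    t = sum(d * v for d, v in zip(ray_direction, to_center))  # projection on the ray
    if t < 0.0:
        return False  # the object lies behind the user
    closest_sq = sum(v * v for v in to_center) - t * t
    return closest_sq <= object_radius * object_radius
```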
(Sixth criterion)
FIG. 12 is a diagram illustrating the sixth determination criterion.
In FIG. 12, the user 161 wears the HMD 73 on the head and views the space provided by xR. The virtual object 188 and the virtual object 189 exist in that space. The virtual object 188 exists in the area to the right (right area 190R) of the area in front of the user 161 (front area 190C). The virtual object 189 exists in the front area 190C with respect to the user 161. Here, the space in front of the user 161 is assumed to be divided into three areas in the left-right direction by boundary surfaces along the forward direction. Of these three areas, the area having a predetermined width (roughly the shoulder width) in the left-right direction around the central axis of the user 161 is defined as the front area 190C. The area on the right side of the front area 190C is the right area 190R, and the area on the left side is the left area 190L.
FIG. 12 shows a situation in which the right hand 162R touches the virtual object 188 in the right area 190R when the user 161 moves the right hand 162R from the right area 190R toward the virtual object 189 in the front area 190C. When the user 161 touches a virtual object with the right hand 162R, that virtual object is likely to exist in front of the body (front area 190C) or on the left side (left area 190L). That is, when the user 161 is moving the right hand 162R in the right area 190R, it is highly likely that the right hand 162R is on its way to a virtual object existing in the front area 190C or the left area 190L. Similarly, when the user 161 is moving the left hand 162L in the left area 190L, it is highly likely that the left hand 162L is on its way to a virtual object existing in the front area 190C or the right area 190R.
The intention identification unit 44 determines that there is no intention when the right hand 162R of the user 161 touches the virtual object 188 existing in the right area 190R, or when the left hand 162L of the user 161 touches a virtual object existing in the left area 190L. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the right hand 162R or the left hand 162L and the virtual object.
The intention identification unit 44 determines that there is an intention when the right hand 162R of the user 161 touches a virtual object (for example, the virtual object 189) existing in the front area 190C or the left area 190L, or when the left hand 162L of the user 161 touches a virtual object (for example, the virtual object 189) existing in the front area 190C or the right area 190R. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the right hand 162R or the left hand 162L and the virtual object.
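The area classification can be sketched with a signed lateral offset of the contacted object from the user's central axis. The sign convention, the default half-width, and the string labels for the hands are all assumptions introduced only for illustration.

```python
def criterion6_intended(hand: str, lateral_offset: float,
                        front_half_width: float = 0.25) -> bool:
    """Sixth criterion (sketch). lateral_offset is the signed left-right offset
    of the contacted object from the user's central axis (positive to the
    user's right); front_half_width is roughly half the shoulder width. A
    right-hand contact in the right area, or a left-hand contact in the left
    area, is judged unintended."""
    if abs(lateral_offset) <= front_half_width:
        return True  # front area: intended under this criterion
    in_right_area = lateral_offset > 0.0
    if hand == "right":
        return not in_right_area
    return in_right_area  # hand == "left"
```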
(Seventh criterion)
FIG. 13 is a diagram illustrating the seventh determination criterion.
In FIG. 13, the user 161 wears the HMD 73 on the head and views the space provided by xR. The virtual object 191 and the virtual object 192 exist in that space. The virtual object 191 exists at a distance equal to or less than the short-distance boundary 201 from the head (for example, the eyes; the same applies hereinafter) of the user 161. The virtual object 192 exists at a distance farther (larger) than the short-distance boundary 201 and closer (smaller) than the long-distance boundary 202 with respect to the head of the user 161.
The short-distance boundary 201 represents a predetermined distance from the head of the user 161, or the position at that distance. When the user 161 touches a virtual object, the user 161 is considered to keep a certain distance from that virtual object. In consideration of this, the short-distance boundary 201 is set to a distance below which it is presumed that the user would not touch a virtual object.
The long-distance boundary 202 represents a predetermined distance from the head of the user 161, or the position at that distance. The distance of the long-distance boundary 202 is farther (larger) than that of the short-distance boundary 201. For the same reason as the short-distance boundary 201, the distance of the long-distance boundary 202 is set to a distance beyond which it is presumed that the user would not touch a virtual object. Note that the distances of the short-distance boundary 201 and the long-distance boundary 202 do not have to be measured from the head of the user 161; for example, they may be measured from the central axis of the body of the user 161.
FIG. 13 shows a situation in which the hand 162 passes through the virtual object 191 while the user 161 is moving the hand 162 from a distance equal to or less than the short-distance boundary 201 toward the virtual object 192. In this case, since the virtual object 191 exists at a distance equal to or less than the short-distance boundary 201, it can be considered that the user 161 does not intend to touch the virtual object 191. Since the virtual object 192 exists at a distance farther than the short-distance boundary 201 and closer than the long-distance boundary 202, it can be considered that the user 161 intends to touch the virtual object 192.
When the hand 162 of the user 161 comes into contact with a virtual object, the intention identification unit 44 determines that there is no intention if the distance of the contact position from the head of the user 161 is equal to or less than the short-distance boundary 201, or equal to or greater than the long-distance boundary 202. Specifically, the distance of the contact position from the head may be the distance of the hand 162 from the head at the moment the hand 162 and the virtual object come into contact, or the distance of the virtual object from the head. In FIG. 13, the contact between the hand 162 and the virtual object 191 is determined by the intention identification unit 44 to be unintended. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
When the hand 162 of the user 161 comes into contact with a virtual object, the intention identification unit 44 determines that there is an intention if the distance of the contact position from the head of the user 161 is farther than the short-distance boundary 201 and closer than the long-distance boundary 202. In FIG. 13, the contact between the hand 162 and the virtual object 192 is determined by the intention identification unit 44 to be intended. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
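The seventh criterion is a range check on the contact distance, sketched below. The default boundary values are arbitrary placeholders, not values from the disclosure.

```python
def criterion7_intended(contact_distance: float,
                        near_boundary: float = 0.2,
                        far_boundary: float = 0.7) -> bool:
    """Seventh criterion (sketch): intended only when the distance of the
    contact position from the head lies strictly between the short-distance
    boundary and the long-distance boundary."""
    return near_boundary < contact_distance < far_boundary
```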
(Eighth criterion)
FIG. 14 is a diagram illustrating the eighth determination criterion.
In the situations 211 and 212 of FIG. 14, the virtual object 221 exists in the space provided by xR to a user (referred to as the user 161), not shown, who wears the HMD 73 (not shown) on the head.
Situation 211 shows a case in which the distance between the thumb and the index finger is smaller than the size of the virtual object 221 when the hand 162 of the user 161 comes into contact with the virtual object 221.
Situation 212 shows a case in which the distance between the thumb and the index finger is larger than the size of the virtual object 221 when the hand 162 of the user 161 comes into contact with the virtual object 221.
When the user tries to grasp a virtual object, the pre-shaping motion for grasping it makes the distance between the thumb and the index finger equal to or larger than the size of the virtual object. Here, the size of the virtual object is defined as the minimum distance between the thumb and the index finger with which the virtual object can be grasped.
Therefore, in situation 211, it can be considered that the user 161 does not intend the contact between the hand 162 and the virtual object 221. In situation 212, it can be considered that the user 161 intends the contact between the hand 162 and the virtual object 221.
When the hand 162 of the user 161 comes into contact with the virtual object 221, the intention identification unit 44 determines that there is no intention if the distance between the thumb and the index finger of the hand 162 of the user 161 is smaller than the size of the virtual object 221. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 221.
When the hand 162 of the user 161 comes into contact with the virtual object 221, the intention identification unit 44 determines that there is an intention if the distance between the thumb and the index finger of the hand 162 of the user 161 is equal to or larger than the size of the virtual object 221. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object 221.
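The eighth criterion compares the finger opening against the object size, as sketched below. The function and parameter names are assumptions.

```python
def criterion8_intended(thumb_index_distance: float, object_size: float) -> bool:
    """Eighth criterion (sketch): when a grasp is intended, pre-shaping opens
    the thumb and index finger at least as wide as the object size (defined in
    the text as the minimum finger opening with which the object can be
    grasped)."""
    return thumb_index_distance >= object_size
```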
(Ninth criterion)
FIG. 15 is a diagram illustrating the ninth determination criterion.
In FIG. 15, the user 161 wears the HMD 73 on the head and views the space provided by xR. In that space, there exist a television receiver 231, which is a virtual object, a remote controller 232, which is a virtual object associated with the television receiver 231, and a virtual object 233, which is unrelated to the television receiver 231. The line-of-sight direction of the user 161, or the head-ray direction of the HMD 73, is directed toward the television receiver 231.
FIG. 15 shows a situation in which the hand 162 touches the virtual object 233 when the user 161 moves the hand 162 to pick up the remote controller 232 while watching the monitor of the television receiver 231.
When the user 161 moves the hand 162 to grasp some virtual object while watching the monitor of the television receiver 231, the object to be grasped is likely to be the remote controller 232, which is related to the television receiver 231. In this way, when the user 161 moves the hand 162, the user 161 can be considered to intend contact between the hand 162 and a virtual object that is highly related to the virtual object the user was viewing at least immediately before.
When the hand 162 of the user 161 comes into contact with a virtual object, the intention identification unit 44 obtains the degree of relatedness between the contacted virtual object and the virtual object the user 161 is viewing. The virtual object the user 161 is viewing means the virtual object the user 161 is currently viewing, or the virtual object the user 161 was viewing immediately before moving the hand 162. Whether a virtual object is the one the user 161 is viewing can be determined from the line-of-sight direction of the user 161 or the head-ray direction of the HMD 73. The degree of relatedness between virtual objects is set in advance. For example, the relatedness may be set in two levels (high or low), or it may be set so that the more related the objects are, the larger the numerical value becomes.
When the intention identification unit 44 determines that the virtual object touched by the hand 162 of the user 161 has low relatedness to the virtual object the user 161 is viewing, it determines that there is no intention. For example, when the degree of relatedness is set as a continuous or stepwise numerical value, the relatedness is determined to be low when it is smaller than a predetermined threshold value. In FIG. 15, the contact between the hand 162 of the user 161 and the virtual object 233 is determined by the intention identification unit 44 to be unintended. When, taking this determination result into account, the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
When the intention identification unit 44 determines that the virtual object touched by the hand 162 of the user 161 has high relatedness to the virtual object the user 161 is viewing, it determines that there is an intention. For example, when the degree of relatedness is set as a continuous or stepwise numerical value, the relatedness is determined to be high when it is equal to or larger than a predetermined threshold value. In FIG. 15, the contact between the hand 162 of the user 161 and the remote controller 232 is determined by the intention identification unit 44 to be intended. When, taking this determination result into account, the intention identification unit 44 determines that there is an intention as the overall determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the remote controller 232.
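The ninth criterion can be sketched as a lookup into a preset relatedness table followed by a threshold test. The table contents, the object labels, and the default threshold below are hypothetical; the disclosure only states that the relatedness between virtual objects is set in advance, either in two levels or as numerical values.

```python
# Hypothetical preset relatedness table (symmetric pairs of object labels).
RELATEDNESS = {
    frozenset({"tv_receiver", "remote_controller"}): 1.0,
    frozenset({"tv_receiver", "unrelated_object"}): 0.1,
}

def criterion9_intended(viewed_object: str, touched_object: str,
                        threshold: float = 0.5) -> bool:
    """Ninth criterion (sketch): intended when the touched object is
    sufficiently related to the object the user is (or was just) viewing."""
    score = RELATEDNESS.get(frozenset({viewed_object, touched_object}), 0.0)
    return score >= threshold
```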
In the above description of the first to ninth determination criteria, the feedback determination unit 45 does not generate feedback when the intention identification unit 44 determines that there is no intention. However, when the intention identification unit 44 determines that there is no intention, the feedback determination unit 45 may instead generate feedback corresponding to no intention.
(Visual feedback corresponding to no intention)
When the hand and a virtual object come into contact and the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 changes the brightness of the contacted virtual object. For example, when the HMD 73 is an optical see-through type AR glass, the feedback determination unit 45 generates feedback that lowers the brightness of the contacted virtual object. When the HMD 73 is a video see-through type AR glass or a smartphone-based AR glass, the feedback determination unit 45 generates feedback that raises the brightness of the contacted virtual object. The content of the generated feedback is reflected in the image generated by the output control unit 46, and the image is supplied to the video display unit 23. In this way, visual feedback to the user for an unintended contact between the user's hand and the virtual object (feedback corresponding to no intention) is provided.
FIG. 16 is a diagram illustrating feedback corresponding to no intention.
The state 241 in FIG. 16 represents the virtual object 251 visually presented to the user before the user's hand touches it.
The state 242 represents the virtual object 251 visually presented to the user during a predetermined period after the user's hand touches the virtual object 251. When the user's hand touches the virtual object 251, its brightness changes from the state 241 to the state 242. This allows the user to recognize that the hand has touched the virtual object 251 even when the user did not intend to touch it. However, the contact with the hand causes no effect such as movement of the virtual object 251.
The state 243 represents the virtual object 251 visually presented to the user after the predetermined period following the contact of the user's hand with the virtual object 251 has elapsed. When the predetermined period has elapsed after the user's hand touches the virtual object 251, the brightness of the virtual object 251 changes from the state 242 to the state 243. The brightness of the virtual object 251 in the state 243 is the same as in the state 241. Therefore, the change from the state 242 to the state 243 returns the brightness of the virtual object 251 to its original level.
The predetermined period from when the brightness of the virtual object 251 changes from the state 241 to the state 242 until it returns to the original brightness of the state 243 (state 241) may be a predetermined length of time, or it may be the time during which the user's hand is in contact with the virtual object 251.
The brightness of the virtual object in the state 242 may be changed according to the speed at which the user's hand passes through the virtual object 251. For example, when the user's hand passes through the virtual object 251 faster than a predetermined speed, the brightness of the virtual object 251 in the state 242 is lowered by about 80% from that in the state 241. When the user's hand passes through the virtual object 251 slower than the predetermined speed, the brightness of the virtual object 251 in the state 242 is lowered by about 40% from that in the state 241.
The switching of the brightness of the virtual object 251 from the state 241 to the state 242, or from the state 242 to the state 243, may be performed gradually (in steps) over time, so that the brightness of the virtual object 251 is switched in a fade-in/fade-out manner.
The visual feedback corresponding to no intention is not limited to changing the brightness of the contacted virtual object; any element of the display form of the contacted virtual object (such as its color) may be changed.
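A minimal sketch of the speed-dependent brightness change for an unintended contact follows. The approximately-80% and approximately-40% reductions and their speed dependency come from the example above; the speed threshold and the amount by which the brightness is raised on video see-through displays are assumptions.

```python
def unintended_contact_brightness(base_brightness: float, hand_speed: float,
                                  speed_threshold: float = 0.5,
                                  optical_see_through: bool = True) -> float:
    """Sketch of the state-242 brightness for an unintended contact. The
    direction of the change depends on the display type; the 0.5 m/s threshold
    and the brightness-raising factor are placeholders."""
    change = 0.8 if hand_speed > speed_threshold else 0.4
    if optical_see_through:
        return base_brightness * (1.0 - change)        # lower the brightness
    return min(1.0, base_brightness * (1.0 + change))  # raise it instead
```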
(Auditory feedback corresponding to no intention)
When the hand and a virtual object come into contact and the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 may generate auditory feedback that presents a sound at the moment the hand and the virtual object come into contact. As a sound presentation method, the sound may be presented as stereophonic sound (3D audio) so that the user can recognize the contact position from the sound.
As auditory feedback corresponding to no intention, the feedback determination unit 45 generates, for example, a sound of a different type or volume (amplitude) from that of normal feedback. The feedback determination unit 45 may change the type and volume of the sound according to the speed at which the user's hand passes through the virtual object. For example, when the speed at which the user's hand passes through the virtual object is faster than a predetermined speed, the feedback determination unit 45 generates feedback that presents a light sound, such as a swish, at a low volume (first volume). When the speed at which the user's hand passes through the virtual object is slower than the predetermined speed, the feedback determination unit 45 may generate feedback that presents a heavy sound, such as a thud, at a high volume (second volume, louder than the first volume).
(Tactile feedback corresponding to no intention)
When the hand and a virtual object come into contact and the intention identification unit 44 determines that there is no intention as the overall determination result, the feedback determination unit 45 may generate tactile feedback that presents a vibration to the user's hand at the moment the hand and the virtual object come into contact. In normal feedback, when the hand and a virtual object come into contact, a high-frequency vibration is presented to the user's hand for a short period to convey the tactile sensation of colliding with the virtual object. In contrast, the purpose of the feedback corresponding to no intention is to make the user aware of the existence of the virtual object. Therefore, in the feedback corresponding to no intention, for example, the feedback determination unit 45 generates feedback that presents a vibration of a lower frequency than in normal feedback for a short period when the hand and the virtual object come into contact.
As tactile feedback corresponding to no intention, the feedback determination unit 45 may change the tactile stimulus according to the speed at which the user's hand passes through the virtual object. For example, when the speed at which the user's hand passes through the virtual object is faster than a predetermined speed, the feedback determination unit 45 generates feedback that presents a low-frequency vibration for a short period (first period). When the speed at which the user's hand passes through the virtual object is slower than the predetermined speed, the feedback determination unit 45 may generate feedback that presents a low-frequency vibration for a long period (second period, longer than the first period).
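The following sketch combines the speed-dependent auditory and tactile feedback for an unintended contact described in the two preceding paragraphs. The concrete frequency, duration, and volume values are placeholders; the text only specifies the qualitative relations (a light, quiet sound and a short low-frequency pulse for a fast pass; a heavier, louder sound and a longer low-frequency pulse for a slow pass).

```python
def unintended_contact_audio_haptic(hand_speed: float,
                                    speed_threshold: float = 0.5) -> dict:
    """Sketch of the speed-dependent audio and haptic content for an
    unintended contact. All numeric values are placeholders."""
    if hand_speed > speed_threshold:
        return {
            "sound": {"type": "light_swish", "volume": 0.3},          # first volume
            "vibration": {"frequency_hz": 60.0, "duration_s": 0.05},  # first period
        }
    return {
        "sound": {"type": "heavy_thud", "volume": 0.7},               # second volume
        "vibration": {"frequency_hz": 60.0, "duration_s": 0.20},      # second period
    }
```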
According to the first embodiment of the information processing device described above, in the xR space provided to the user, when the user does not intend a contact between the user's hand and an object, the influence of that contact on the object and the like can be reduced. That is, there is a limit to the field of view the user can see with the HMD 73, and many objects may exist in the xR space. Therefore, the user may unintentionally touch another object while moving the hand toward the object the user wants to touch. If an object the user does not intend to touch then flies away or makes a sound, the user becomes confused. In the first embodiment of the information processing device, normal feedback is executed only when the intended object is touched, so feedback that confuses the user is not provided, and it becomes easier for the user to operate in the xR space. In the first embodiment of the information processing device, it is also possible to give the user a certain amount of feedback (feedback corresponding to no intention) when the user's hand unintentionally touches an object. This makes it possible to convey to the user the presence of an object the user is not looking at.
<<Second embodiment of the information processing device to which the present technology is applied>>
FIG. 17 is a block diagram showing a configuration example of the second embodiment of the information processing device to which the present technology is applied.
The information processing device 301 in FIG. 17 provides the user with a space/world generated by AR, using an HMD and controllers. However, like the information processing device 11 of FIG. 1 in the first embodiment, the second embodiment of the information processing device can be applied to an information processing device that provides the user with a space/world generated by xR.
In the following, the space provided (generated) by AR is referred to as the AR space. In the AR space, objects that actually exist in the real space and virtual objects coexist. In the following, when no distinction is made between an object that actually exists in the real space and a virtual object, it is simply referred to as an object.
The information processing device 301 has an AR glass system 311 and a hand controller 312.
The AR glass system 311 is a system built on the HMD 73 as the hardware shown in FIG. 4. The AR glass system 311 has a sensor unit 321, a control unit 322, a communication unit 323, a display unit 324, a speaker 325, and a storage unit 326.
The sensor unit 321 has an inward camera 331, an outward camera 332, a microphone 333, a gyro sensor 334, an acceleration sensor 335, and an azimuth sensor 336.
The inward camera 331 corresponds to the inward cameras 132C and 132D of the HMD 73 in FIG. 4. The inward camera 331 photographs the user's eyes and supplies the captured images to the control unit 322 as sensor information.
The outward camera 332 corresponds to the outward cameras 132A and 132B of the HMD 73 in FIG. 4. The outward camera 332 captures, as its shooting range, the real space in the direction viewed by the user's right and left eyes (the front direction of the HMD 73). The outward camera 332 supplies the captured images to the control unit 322 as sensor information.
The microphone 333 collects sound (sounds and voices), converts it into an electric signal, and supplies it to the control unit 322 as sensor information.
The gyro sensor 334 detects angular velocities around three orthogonal axes and supplies the detected angular velocities to the control unit 322 as sensor information. The gyro sensor 334 detects the angular velocity of the AR glass system 311 (the HMD 73 as hardware).
The acceleration sensor 335 detects accelerations along three orthogonal axes and supplies the detected accelerations to the control unit 322 as sensor information. The acceleration sensor 335 detects the acceleration of the AR glass system 311 (the HMD 73 as hardware).
The azimuth sensor 336 detects the direction of geomagnetism as an azimuth and supplies the detected azimuth to the control unit 322 as sensor information. The azimuth sensor 336 detects the azimuth of the AR glass system 311 (the HMD 73 as hardware).
The control unit 322 performs output control and the like for the display unit 324 (corresponding to the video display unit 23 in FIG. 1), the speaker 325 (one form of the sound presentation unit 24 in FIG. 1), and the vibration presentation unit 354 of the hand controller 312 (corresponding to the tactile presentation unit 25 in FIG. 1), based on the sensor information from the sensor unit 321 and sensor information from the hand controller 312 described later.
The control unit 322 includes a hand/finger position detection unit 341, an object detection unit 342, an application execution unit 343, and an output control unit 344.
The hand/finger position detection unit 341 detects the position and posture of the user's hand and fingers based on the sensor information from the hand controller 312 (corresponding, for example, to the hand controller shown in FIG. 5). The position and posture of the user's hand and fingers are detected based on sensor signals from the gyro sensor 351, the acceleration sensor 352, and the azimuth sensor 353, described later, mounted on the hand controller 312. However, the position and posture of the user's hand and fingers may also be detected based on an image captured by the outward-facing camera 332. For example, the hand/finger position detection unit 341 can detect the shape (posture) of the hand controller 312, or a marker provided on the hand controller 312, from the image of the outward-facing camera 332, and thereby detect the position and posture of the hand and fingers.
The hand/finger position detection unit 341 may also detect the position and posture (shape) of the hand (hand controller 312) and fingers more accurately by integrating the position and posture of the hand and fingers detected from the image of the outward-facing camera 332 with the position and posture of the hand and fingers detected from the sensor signals of the gyro sensor 351, the acceleration sensor 352, and the azimuth sensor 353 mounted on the hand controller 312.
Note that the position and posture of the user's hand and fingers may also be detected based on an image from a camera installed in the environment where the user is present.
The object detection unit 342 detects object information representing the position, shape, and the like of objects in the AR space. When an object is a real object existing in the real space, the object detection unit 342 detects the position and shape of the real object based on the image from the outward-facing camera 332 or a depth image (depth information) obtained by a ToF camera, not shown (corresponding to the ToF camera 35 in FIG. 1). However, the position and shape of the real object may be known in advance, or may be detected based on design information of the environment (a building or the like) where the user is present or on information acquired from another system.
Note that the position and shape of a real object may also be detected based on an image from a camera mounted on the hand controller 312 or from a camera installed in the environment where the user is present. In the approach/contact detection performed by the application execution unit 343 described later, an image from a camera mounted on the hand controller 312 or from a camera installed in the user's environment can also be used to assist in detecting, for example, the presence or absence of a real object near the user's hand.
When an object is a virtual object (virtual image), the object detection unit 342 detects it from the AR space information generated by the application execution unit 343 described later.
The application execution unit 343 generates the AR space provided to the user by executing a program of a predetermined application.
The application execution unit 343 executes approach/contact detection, visual recognition detection, and feedback generation processing.
In the approach/contact detection, the application execution unit 343 detects whether the user's hand has approached or contacted (approached or touched) any object, based on the position and posture of the user's hand detected by the hand/finger position detection unit 341 and the position and shape of each object detected by the object detection unit 342. The application execution unit 343 detects the object that the hand has approached or contacted.
In the visual recognition detection, when the approach/contact detection detects that the user's hand has approached or contacted an object, the application execution unit 343 detects whether the user can see that object (whether the user visually recognizes it). The visual recognition detection is performed based on, for example, the user's line-of-sight direction or the orientation of the AR glass system 311 (the user's head), that is, the head-ray direction. The user's line-of-sight direction can be detected from the sensor information of the inward-facing camera 331. The orientation of the AR glass system 311 can be detected based on the sensor information from the gyro sensor 334, the acceleration sensor 335, and the azimuth sensor 336. The orientation of the AR glass system 311 (the user's head) may also be detected based on an image from a camera installed in the environment where the user is present. Details of the line-of-sight detection will be described later.
In the feedback generation processing, the application execution unit 343 generates (the content of) feedback to the user's visual, auditory, and tactile senses in response to the user's hand approaching or contacting an object.
The output control unit 344 generates the image (video), sound, and tactile sensation for presenting the AR space generated by the application execution unit 343 to the user, and generates output signals to be output by the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively. The output control unit 344 supplies the generated output signals to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312. In this way, the output control unit 344 controls the visual, auditory, and tactile information presented to the user.
When generating the image, sound, and tactile sensation to be presented to the user, the output control unit 344 generates an image, sound, and tactile sensation that reflect the feedback whose content was generated by the feedback generation processing of the application execution unit 343.
The communication unit 323 controls communication with devices, systems, and the like other than the AR glass system 311.
The display unit 324 is a display that displays the image (video) of the output signal supplied from the output control unit 344.
The speaker 325 is a sound presentation unit that outputs the sound of the output signal supplied from the output control unit 344.
The storage unit 326 stores programs executed by the control unit 322 and data.
The hand controller 312 corresponds to the hand controller shown in FIG. 5. The hand controller 312 is worn on or gripped by the user's hand. The hand controller 312 includes a gyro sensor 351, an acceleration sensor 352, an azimuth sensor 353, and a vibration presentation unit 354.
The gyro sensor 351 detects angular velocities around three orthogonal axes and supplies the detected angular velocities to the control unit 322 of the AR glass system 311 as sensor information. The gyro sensor 351 is arranged on the hand and fingers, as in the hand controller of FIG. 5, and detects the angular velocity of the hand and fingers.
The acceleration sensor 352 detects accelerations along three orthogonal axes and supplies the detected accelerations to the control unit 322 of the AR glass system 311 as sensor information. The acceleration sensor 352 is arranged on the hand and fingers, as in the hand controller of FIG. 5, and detects the acceleration of the hand and fingers.
The azimuth sensor 353 detects the direction of geomagnetism as an azimuth and supplies the detected azimuth to the control unit 322 of the AR glass system 311 as sensor information. The azimuth sensor 353 is arranged on the hand and fingers, as in the hand controller of FIG. 5, and detects the orientation of the hand and fingers.
The vibration presentation unit 354 generates vibration according to the output signal supplied from the control unit 322 (output control unit 344) of the AR glass system 311. The vibration presentation unit 354 is arranged on the hand and fingers, as in the hand controller of FIG. 5, and applies vibration to the hand and fingers.
<Procedure of processing performed by the information processing device 301>
FIG. 18 is a flowchart illustrating the processing procedure of the information processing device 301.
In step S31, the application execution unit 343 of FIG. 17 determines, by the approach/contact detection, whether the user's hand is approaching or in contact with an object (whether it is in the approach/contact state).
If it is determined in step S31 that the hand is not in the approach/contact state, the process repeats step S31.
If it is determined in step S31 that the hand is in the approach/contact state, the process proceeds from step S31 to step S32.
In step S32, the application execution unit 343 determines, by the visual recognition detection, whether the user can see the object that the user's hand is approaching or in contact with (whether it is in the visible state or the non-visible state).
If it is determined in step S32 that the object is in the visible state (visible), the process proceeds from step S32 to step S33.
In step S33, the application execution unit 343 generates, by the feedback generation processing, normal (conventional) feedback to the user for the approach/contact between the hand and the object. The normal feedback has the same meaning as the normal feedback described in the first embodiment of the information processing device. In the present embodiment, specifically, as the normal feedback, the application execution unit 343 generates feedback that presents a realistic tactile sensation (vibration) to the user's hand according to the motion of the object and the hand. The output control unit 344 generates a tactile sensation (vibration) reflecting the content of the feedback generated by the application execution unit 343. The output control unit 344 supplies the generated tactile sensation (vibration) to the vibration presentation unit 354 of the hand controller 312 as an output signal. As a result, normal feedback (tactile presentation) to the user's tactile sense corresponding to the visible state is executed for the approach/contact between the user's hand and the object. The process returns from step S33 to step S31 and is repeated from step S31.
If it is determined in step S32 that the object is in the non-visible state (not visible), the process proceeds from step S32 to step S34.
In step S34, the application execution unit 343 detects the object that the user's hand is approaching or in contact with. The process proceeds from step S34 to step S35.
Note that the object that the user's hand is approaching or in contact with is detected at the same time as the approach/contact detection in step S31 detects that the user's hand is approaching or in contact with an object.
In step S35, the application execution unit 343 generates, by the feedback generation processing, feedback to the user's visual, auditory, and tactile senses according to the properties of the object detected in step S34. The output control unit 344 generates an image (video), sound, and tactile sensation reflecting the content of the feedback generated by the application execution unit 343, and supplies that information to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312 as output signals. As a result, feedback (information presentation) to the user's visual, auditory, and tactile senses corresponding to the non-visible state is executed for the approach/contact between the user's hand and the object. The process returns from step S35 to step S31 and is repeated from step S31.
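As an illustrative sketch only, the flow of steps S31 to S35 could be expressed as follows in Python; the object names and helper methods (detect_contact, is_visible, generate_normal_feedback, generate_nonvisible_feedback, present) are assumptions introduced for explanation and are not part of the embodiment.

```python
# A minimal sketch of the loop in FIG. 18 (steps S31-S35), under the assumptions
# stated above.
def control_loop(app, output_ctrl):
    while True:
        # S31: approach/contact detection
        target = app.detect_contact()      # returns the contacted object, or None
        if target is None:
            continue                       # not in the approach/contact state -> repeat S31
        # S32: visual recognition detection
        if app.is_visible(target):
            # S33: normal feedback (realistic tactile sensation matched to the motion)
            feedback = app.generate_normal_feedback(target)
        else:
            # S34-S35: non-visible state -> feedback based on the object's properties
            feedback = app.generate_nonvisible_feedback(target.properties)
        # Output control reflects the generated feedback into image, sound, and vibration
        output_ctrl.present(feedback)
```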
<Details of feedback for the approach/contact between the user's hand and an object>
(Approach/contact processing)
When the user's hand and an object move relative to each other and the user's hand approaches or comes close to the object, the application execution unit 343 of the information processing device 301 in FIG. 17 detects, by the approach/contact processing, that the user's hand has approached or contacted the object. In this approach/contact processing, the application execution unit 343 acquires the position and posture of the user's hand detected by the hand/finger position detection unit 341 and the position and shape (object information) of each object in the AR space detected by the object detection unit 342. Based on the acquired information, the application execution unit 343 determines whether the distance between the user's hand and each object (the distance of the closest portion between the region of the user's hand and the region of each object) is equal to or less than a predetermined threshold value. If, as a result of the determination, there is an object whose distance from the user's hand is equal to or less than the threshold value (hereinafter referred to as a target object), it is detected that the user's hand and that target object have approached or contacted each other.
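As an illustrative sketch only, the distance-threshold determination described above could be written as follows in Python; the point-set representation of the hand and object regions and the dictionary keys are assumptions.

```python
import numpy as np

# A minimal sketch of the approach/contact determination, assuming the hand and
# each object region are approximated by 3D point sets (shape (N, 3)).
def detect_target_object(hand_points, objects, threshold):
    """Return the first object whose closest distance to the hand is <= threshold."""
    for obj in objects:
        # closest distance between the hand region and the object region
        dists = np.linalg.norm(
            hand_points[:, None, :] - obj["points"][None, :, :], axis=-1)
        if dists.min() <= threshold:
            return obj        # target object: hand is approaching or in contact
    return None               # no approach/contact detected
```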
Note that a state in which the user's hand and an object are close to each other may be regarded as a state that can be treated as contact, and approach may be included in contact. In the first embodiment of the information processing device to which the present technology is applied, the feedback for the case where the user's hand contacts an object was described; that contact may be regarded as including approach, as in the second embodiment of the present information processing device.
(Visual recognition detection)
When the application execution unit 343 detects that the user's hand has approached or contacted an object, it detects, by the visual recognition detection, whether the user is looking at the target object being approached or contacted (whether it is in the visible state or the non-visible state). In this visual recognition detection, the application execution unit 343 detects (determines) the visible or non-visible state using first to fifth detection conditions described later. However, the visible or non-visible state need not be detected using all of the first to fifth detection conditions; the visual recognition detection may be performed using any one or more of the first to fifth detection conditions.
When the application execution unit 343 performs the visual recognition detection using a plurality of detection conditions and any one of the detection conditions indicates the non-visible state, the application execution unit 343 sets the overall (final) detection result (overall detection result) to the non-visible state. Alternatively, the application execution unit 343 may set the overall detection result to the visible state when any one of the detection conditions indicates the visible state, or may determine the overall detection result according to which of the plurality of detection conditions indicated the visible or non-visible state. For example, assume that priorities are set in advance for the plurality of detection conditions. Even when the application execution unit 343 detects the visible state under a detection condition of a given priority, it may set the overall detection result to the non-visible state if a detection condition with a higher priority indicates the non-visible state.
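As an illustrative sketch only, the default aggregation policy described above (any non-visible result makes the overall result non-visible) could be expressed as follows in Python; the function name and the boolean representation of each condition's result are assumptions, and a priority-based policy would replace the simple conjunction.

```python
# A minimal sketch of the default aggregation policy, assuming each detection
# condition actually used yields a boolean (True = visible under that condition).
def overall_visibility(condition_results):
    # If any condition used reports the non-visible state, the overall (final)
    # detection result is the non-visible state.
    return all(condition_results)

# Example: the first condition reports visible, the fifth reports non-visible.
# overall_visibility([True, False]) -> False (non-visible state)
```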
The first to fifth detection conditions are as follows. They are shown here as conditions for detecting the non-visible state.
- First detection condition: the target object is not within the user's visual field range (conditions such as the user's eyes being closed or the user being visually impaired also apply).
- Second detection condition: the target object is not within the visual field range of the AR glass system 311 (HMD 73).
- Third detection condition: the target object is within the user's peripheral visual field.
- Fourth detection condition: the target object has never been visually recognized in the user's central visual field.
- Fifth detection condition: the target object is occluded by another object (for example, the target object is inside a box, in hot water, or in smoke).
When any of these first to fifth detection conditions is satisfied, the non-visible state is detected accordingly.
Here, as information that the application execution unit 343 refers to as appropriate when performing the visual recognition detection, the positions and postures of the user's hand and the target object in the AR space are detected by the hand/finger position detection unit 341 and the object detection unit 342. The orientation (head-ray direction) of the AR glass system 311 (HMD 73) is detected based on the sensor information from the gyro sensor 334, the acceleration sensor 335, and the azimuth sensor 336. The user's line-of-sight direction is detected from the image of the user's eyes captured by the inward-facing camera 331.
(First detection condition)
In the visual recognition detection using the first detection condition, the application execution unit 343 specifies the user's visual field range in the AR space based on the user's line-of-sight direction. When the target object is not within the specified visual field range of the user, the application execution unit 343 detects the non-visible state. When the target object is within the user's visual field range, the application execution unit 343 detects the visible state.
However, even when the target object is within the user's visual field range, the application execution unit 343 may treat the state as the non-visible state when the user's eyes are closed. Whether the user's eyes are closed is detected from the image of the inward-facing camera 331.
Even when the target object is within the user's visual field range, the application execution unit 343 may also treat the state as the non-visible state when the user is visually impaired. Information on whether the user is visually impaired can be acquired from another system or the like. When the user is visually impaired, the application execution unit 343 may acquire information on the visual field range that the user can stably see, and treat the state as the non-visible state when the target object is not within that visual field range.
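As an illustrative sketch only, the first detection condition could be approximated as follows in Python, modeling the visual field as a cone around the line-of-sight direction; the half-angle value, the eyes_closed flag, and the vector representation are assumptions and not values defined by the embodiment.

```python
import numpy as np

# A minimal sketch of the first detection condition under the assumptions above.
def is_visible_first_condition(eye_pos, gaze_dir, target_pos,
                               half_angle_deg=50.0, eyes_closed=False):
    if eyes_closed:
        return False                            # treated as the non-visible state
    to_target = target_pos - eye_pos
    to_target = to_target / np.linalg.norm(to_target)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_target), -1.0, 1.0)))
    return angle <= half_angle_deg              # inside the user's visual field range
```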
(Second detection condition)
In the visual recognition detection using the second detection condition, the application execution unit 343 specifies the visual field range of the AR glass system 311 in the AR space based on the orientation (head-ray direction) of the AR glass system 311 (HMD 73), and detects the non-visible state when the target object is not within the visual field range of the AR glass system 311. When the target object is within the visual field range of the AR glass system 311, the application execution unit 343 detects the visible state.
(Third detection condition)
In the visual recognition detection using the third detection condition, the application execution unit 343 specifies the range of the user's peripheral visual field in the AR space based on the user's line-of-sight direction. When the target object is within the specified peripheral visual field of the user, the application execution unit 343 detects the non-visible state. When the target object is not within the user's peripheral visual field, the application execution unit 343 detects the visible state. However, when the target object is in neither the user's central visual field nor the peripheral visual field, the application execution unit 343 regards the state as the non-visible state. Of the user's visual field range, the central visual field is set as the range on which the user focuses, taking general human visual field characteristics into consideration. The peripheral visual field is set as the range outside the central visual field.
(Fourth detection condition)
In the visual recognition detection using the fourth detection condition, the application execution unit 343 records a history of the range of the user's central visual field for a predetermined time. When the position of the target object is not included in the central visual field range anywhere in the recorded history, the application execution unit 343 regards the target object as never having been visually recognized in the user's central visual field and detects the non-visible state. When the position of the target object has been included in the central visual field range at least once in the recorded history (that is, when the target object has been within the central visual field at some point during the predetermined time from the present back into the past), the application execution unit 343 detects the visible state.
(Fifth detection condition)
In the visual recognition detection using the fifth detection condition, the application execution unit 343 detects another object existing between the target object and the user's head. The presence of other objects can be grasped based on design information of the environment (a building or the like) where the user is present or on information from another system. When another object that occludes the target object exists between the target object and the user's head, the application execution unit 343 detects the non-visible state. For example, when the target object is inside a box, in hot water, or in smoke, the application execution unit 343 detects the non-visible state. When no other object that occludes the target object exists between the target object and the user's head, the application execution unit 343 detects the visible state.
(Feedback generation processing)
After performing the visual recognition detection using any one or more of the first to fifth detection conditions described above, the application execution unit 343 generates, by the feedback generation processing, feedback to the user for the approach/contact between the user's hand and the target object.
In the feedback generation processing, the application execution unit 343 generates normal feedback when the visual recognition detection has detected the visible state.
For the normal feedback, as described for the processing in step S33 of the flowchart of FIG. 18, the application execution unit 343 generates feedback that presents a realistic tactile sensation (vibration) to the user's hand according to the motion of the target object and the hand. When generating the image, sound, and tactile sensation for presenting the AR space to the user, the output control unit 344 generates a tactile sensation (vibration) reflecting the content of the feedback generated by the application execution unit 343. The output control unit 344 supplies the generated image, sound, and tactile sensation to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively, as output signals. As a result, normal feedback to the user's tactile sense corresponding to the visible state is executed for the approach/contact between the hand and the target object. Note that the normal feedback is not limited to this.
On the other hand, when the visual recognition detection has detected the non-visible state, the application execution unit 343 generates feedback corresponding to the non-visible state, which differs from the normal feedback. The non-visible state, in which the user does not visually recognize the target object, includes cases where the user has unintentionally brought the hand into contact with (or close to) the target object. Therefore, the feedback corresponding to the non-visible state can be one form of the feedback corresponding to the absence of intention described in the first embodiment of the information processing device. The content of the feedback corresponding to the non-visible state will be described later. When generating the image, sound, and tactile sensation for presenting the AR space to the user, the output control unit 344 generates an image, sound, and tactile sensation (vibration) reflecting the content of the feedback generated by the application execution unit 343. The output control unit 344 supplies the generated image, sound, and tactile sensation to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively, as output signals. As a result, feedback to the user's visual, auditory, and tactile senses corresponding to the non-visible state is executed for the approach/contact between the hand and the target object.
Hereinafter, examples of the feedback corresponding to the non-visible state will be described.
(First example of feedback corresponding to the non-visible state)
The first example is an example of generating feedback for presenting the size of the target object to the user.
FIG. 19 is a diagram illustrating the first example of feedback corresponding to the non-visible state.
In situation 371 and situation 372 of FIG. 19, the target object 381 and the target object 382 each represent a target object that the hand has approached or contacted and that the user cannot see. The target object 381 is smaller than the target object 382.
When generating feedback corresponding to the non-visible state in the feedback generation processing, the application execution unit 343 acquires the size of the target object from the object information (the properties of the target object) detected by the object detection unit 342. The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) with a frequency corresponding to the acquired size of the target object. For example, the application execution unit 343 generates feedback of a sound or vibration (tactile sensation) with a higher frequency as the size of the target object becomes smaller. In FIG. 19, the target object 381 in situation 371 is smaller than the target object 382 in situation 372. The application execution unit 343 therefore makes the frequency of the feedback sound or vibration for the approach/contact between the hand and the target object 381 higher than that for the approach/contact between the hand and the target object 382. Note that both sound and vibration may be presented to the user.
According to this, the user can be made to recognize the size of the target object in the non-visible state by the frequency of the sound or vibration.
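As an illustrative sketch only, the inverse mapping from object size to feedback frequency could look like the following Python function; the size bounds and frequency band are assumptions chosen for illustration, not values disclosed by the embodiment.

```python
# A minimal sketch: smaller target objects map to higher feedback frequencies (Hz).
def size_to_frequency(size_m, f_min=80.0, f_max=400.0,
                      size_small=0.05, size_large=0.50):
    size_m = min(max(size_m, size_small), size_large)   # clamp to the assumed range
    # linear interpolation: size_small -> f_max, size_large -> f_min
    t = (size_m - size_small) / (size_large - size_small)
    return f_max - t * (f_max - f_min)
```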
As a first modification of the first example of feedback corresponding to the non-visible state, the application execution unit 343 may generate feedback of a sound or vibration with an amplitude corresponding to the size of the target object. For example, the application execution unit 343 generates feedback of a sound or vibration with a smaller amplitude as the size of the target object becomes smaller.
According to this, the user can be made to recognize the size of the target object in the non-visible state by the volume of the sound or the magnitude of the vibration.
FIG. 20 is a diagram illustrating a second modification of the first example of feedback corresponding to the non-visible state.
In situation 391 and situation 392 of FIG. 20, the user 411 wears on the head the AR glasses 401, which represent the AR glass system 311 of FIG. 17 as hardware, and views the image 402 of the AR space. The target object 421 and the target object 422 are target objects that the hand 412 of the user 411 has approached or contacted and that the user cannot see. The target object 421 is smaller than the target object 422.
The application execution unit 343 generates feedback that presents to the user an image of a figure enlarged or reduced at a magnification corresponding to the size of the target object. For example, the application execution unit 343 generates feedback of an image representing a circular figure with a smaller enlargement magnification as the size of the target object becomes smaller. In FIG. 20, the target object 421 in situation 391 is smaller than the target object 422 in situation 392. Therefore, the application execution unit 343 makes the enlargement magnification of the circular figure 403, presented superimposed on the image 402 of the AR space as feedback for the approach/contact between the hand and the target object 421, smaller than the enlargement magnification of the circular figure 403 in the feedback for the approach/contact between the hand and the target object 422. Note that the figure presented by the feedback may have a shape other than a circle.
According to this, the user can be made to recognize the size of the target object in the non-visible state by the size of the image (figure).
According to the first example of feedback corresponding to the non-visible state described above, the size of the target object can be recognized through feedback to the user's visual, auditory, or tactile sense. For the user, such feedback would be redundant if the target object were visible; when the target object is not visible, however, there is information that is useful even if it does not amount to the amount of information obtained by looking at the target object directly. Such useful information can be presented to the user through feedback to the user's visual, auditory, or tactile sense.
(Second example of feedback corresponding to the non-visible state)
The second example is an example of generating feedback for presenting to the user the number of target objects in the non-visible state.
FIG. 21 is a diagram illustrating the second example of feedback corresponding to the non-visible state.
Waveforms 441 to 443 in FIG. 21 represent waveforms of the sound or vibration (tactile sensation) presented by the feedback.
When generating feedback corresponding to the non-visible state in the feedback generation processing, the application execution unit 343 acquires the number of target objects in the non-visible state from the object information (the properties of the target objects) detected by the object detection unit 342. The number of target objects means the number of target objects that the user's hand has approached or contacted at the same time.
The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) whose waveform has a number of peaks corresponding to the acquired number of target objects. The presentation time of the feedback sound or vibration is set to a fixed length. The application execution unit 343 sets the frequency of the sound or vibration so that the number of peaks of the sound or vibration waveform within the presentation time corresponds to the number of target objects. For example, the application execution unit 343 generates feedback of a sound or vibration with a waveform (frequency) having fewer peaks as the number of target objects becomes smaller. In FIG. 21, the number of peaks of the sound or vibration waveform presented by the feedback increases in the order of waveform 441, waveform 442, and waveform 443, and the user is thereby informed that the number of target objects increases in the order of waveform 441, waveform 442, and waveform 443.
When the number of target objects becomes large and the frequency of the sound or vibration presented to the user becomes too high, the user can no longer recognize differences in the number. Therefore, when the number of target objects exceeds a predetermined threshold value, the application execution unit 343 generates feedback for presenting a rough number.
FIG. 22 is a diagram illustrating the feedback for presenting a rough number of target objects. In generating this feedback, the application execution unit 343 sets the frequency of the sound or vibration presented to the user to a fixed frequency that is easy for a human to perceive, as in the waveform of FIG. 22. The application execution unit 343 sets the presentation time of the sound or vibration at that fixed frequency to a length corresponding to the number. In this way, when the number of target objects exceeds the threshold value, the application execution unit 343 generates feedback of a sound or vibration with a constant frequency and a presentation time corresponding to the number of target objects.
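As an illustrative sketch only, the choice of waveform parameters from the object count could look like the following Python function; the presentation time, the threshold, and the fixed frequency are assumptions introduced for illustration.

```python
# A minimal sketch of choosing feedback waveform parameters from the number of
# simultaneously contacted target objects, under the assumptions above.
def count_to_waveform(num_objects, presentation_sec=0.5,
                      rough_threshold=8, fixed_hz=40.0):
    if num_objects <= rough_threshold:
        # exact count: one waveform peak per object within the fixed presentation time
        frequency_hz = num_objects / presentation_sec
        duration_sec = presentation_sec
    else:
        # rough count: fixed, easily perceived frequency; duration scales with the count
        frequency_hz = fixed_hz
        duration_sec = presentation_sec * (num_objects / rough_threshold)
    return frequency_hz, duration_sec
```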
According to the second example of feedback corresponding to the non-visible state described above, the number of target objects in the non-visible state can be recognized through feedback to the user's auditory or tactile sense. For the user, such feedback would be redundant if the target objects were visible; when the target objects are not visible, however, there is information that is useful even if it does not amount to the amount of information obtained by looking at the target objects directly. Such useful information can be presented to the user through feedback to the user's auditory or tactile sense.
(Third example of feedback corresponding to the non-visible state)
The third example is an example of generating feedback for presenting to the user the color (including luminance, pattern, and the like) of the target object in the non-visible state.
When generating feedback corresponding to the non-visible state in the feedback generation processing, the application execution unit 343 acquires the color of the target object in the non-visible state from the object information (the properties of the target object) detected by the object detection unit 342.
The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) with a frequency corresponding to the acquired luminance of the target object. For example, the application execution unit 343 generates feedback of a sound or vibration with a higher frequency as the luminance of the target object becomes higher.
According to this, the level of luminance of the target object in the non-visible state is presented to the user by the frequency of the feedback sound or vibration.
As a first modification of the third example of feedback corresponding to the non-visible state, the application execution unit 343 acquires the complexity of the pattern (the level of its spatial frequency) of the target object in the non-visible state from the object information (the properties of the target object) detected by the object detection unit 342.
The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) with a frequency corresponding to the level of the spatial frequency of the acquired pattern of the target object. For example, the application execution unit 343 generates feedback of a sound or vibration with a higher frequency as the spatial frequency of the pattern of the target object becomes higher. As a result, the more complex the pattern of the target object is, the higher the frequency of the sound or vibration presented to the user; the more monotonous the pattern of the target object is, the lower the frequency of the sound or vibration presented to the user.
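As an illustrative sketch only, the mapping from luminance or pattern complexity to a feedback frequency could be written as follows in Python; the frequency band and the crude spectral measure of pattern complexity are assumptions, not part of the embodiment.

```python
import numpy as np

# Minimal sketches: luminance (0-1) or pattern complexity drives the feedback pitch.
def luminance_to_frequency(luminance, f_min=80.0, f_max=400.0):
    """Higher luminance -> higher feedback frequency (Hz)."""
    return f_min + float(np.clip(luminance, 0.0, 1.0)) * (f_max - f_min)

def pattern_to_frequency(texture_gray, f_min=80.0, f_max=400.0):
    """Crudely estimate pattern complexity from the non-DC energy of the texture."""
    spectrum = np.abs(np.fft.fft2(texture_gray))
    non_dc_ratio = 1.0 - spectrum[0, 0] / (spectrum.sum() + 1e-9)
    return f_min + float(np.clip(non_dc_ratio, 0.0, 1.0)) * (f_max - f_min)
```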
According to the third example of feedback corresponding to the non-visible state described above, the complexity of the pattern of the target object in the non-visible state is presented to the user by the frequency of the feedback sound or vibration. That is, the color of the target object (information about its color) can be recognized through feedback to the user's auditory or tactile sense. For the user, such feedback would be redundant if the target object were visible; when the target object is not visible, however, there is information that is useful even if it does not amount to the amount of information obtained by looking at the target object directly. Such useful information can be presented to the user through feedback to the user's auditory or tactile sense.
(Fourth example of feedback corresponding to the non-visible state)
The fourth example is an example of generating feedback for presenting to the user the type of the target object in the non-visible state.
When generating feedback corresponding to the non-visible state in the feedback generation processing, the application execution unit 343 acquires the type of the target object in the non-visible state from the object information (the properties of the target object) detected by the object detection unit 342.
The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) a number of times corresponding to the acquired type of the target object. The number of sounds or vibrations may be the number of peaks of the waveform, as in the second example, or the number of times an intermittent sound or vibration is generated. The number of times corresponding to the type of the target object means the number of sounds or vibrations associated in advance with that type of target object.
FIG. 23 is a diagram illustrating the fourth example of feedback corresponding to the non-visible state.
Target object example 461 in FIG. 23 represents a case where the target object 471 is a pistol. Target object example 462 represents a case where the target object 472 is a knife. The circle mark 481 and the circle marks 482 above the target object 471 and the target object 472 represent the number of sounds or vibrations associated with the types of the target object 471 and the target object 472, respectively. For the target object 471, the number of circle marks 481 is one, so the associated number of sounds or vibrations is one. For the target object 472, the number of circle marks 482 is two, so the associated number of sounds or vibrations is two. In this way, a number of sounds or vibrations is associated in advance with each type of target object.
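As an illustrative sketch only, presenting the object type as a number of vibration pulses could look like the following Python snippet; the vibration_unit.pulse() API, the pulse timings, and the type-to-count table are assumptions (the actual association would be defined in advance as described above).

```python
import time

# A minimal sketch of type-dependent intermittent vibration, under the assumptions above.
TYPE_TO_COUNT = {"pistol": 1, "knife": 2}      # illustrative association only

def present_type_feedback(object_type, vibration_unit,
                          pulse_sec=0.1, gap_sec=0.15):
    count = TYPE_TO_COUNT.get(object_type, 0)
    for _ in range(count):
        vibration_unit.pulse(duration_sec=pulse_sec)   # one intermittent vibration
        time.sleep(gap_sec)
```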
The user may memorize the number of sounds or vibrations associated with each type of target object. The application execution unit 343 may also present images of the circle mark 481 and the circle marks 482 as in FIG. 23 near an object while that object is in the visible state.
According to this, the type of the target object in the non-visible state is presented to the user by the number of feedback sounds or vibrations.
According to the fourth example of feedback corresponding to the non-visible state described above, the type of the target object in the non-visible state is presented to the user by the number of feedback sounds or vibrations. That is, the type of the target object can be recognized through feedback to the user's auditory or tactile sense. For the user, such feedback would be redundant if the target object were visible; when the target object is not visible, however, there is information that is useful even if it does not amount to the amount of information obtained by looking at the target object directly. Such useful information can be presented to the user through feedback to the user's auditory or tactile sense.
(非視認状態に対応したフィードバックの第5実施例)
 第5実施例は、非視認状態の対象オブジェクトの向きを提示するためのフィードバックを生成する実施例である。
(Fifth Example of Feedback Corresponding to the Non-Visible State)
A fifth embodiment is an embodiment that generates feedback for presenting the orientation of the target object in the invisible state.
 図24、及び、図25は、非視認状態に対応したフィードバックの第5実施例を説明する図である。 24 and 25 are diagrams illustrating a fifth embodiment of feedback corresponding to the invisible state.
 図24の状況491、及び、状況492、並び、図25の状況493、及び、状況494において、非視認状態の対象オブジェクト501は、棒状のオブジェクトを表す。 In the situation 491 and the situation 492 of FIG. 24, side by side, the situation 493 and the situation 494 of FIG. 25, the target object 501 in the invisible state represents a rod-shaped object.
 ユーザの手412は対象オブジェクト501に接近・接触している状況が示されている。 The situation where the user's hand 412 is approaching / contacting the target object 501 is shown.
 図24の状況491、及び、状況492では、対象オブジェクト501が手412の平に対して横方向に配置されている。即ち、対象オブジェクト501の軸線方向がユーザの手412の平に沿った方向で、かつ、指の軸線方向に対して垂直方向に配置されている。 In the situation 491 and the situation 492 of FIG. 24, the target object 501 is arranged laterally with respect to the palm of the hand 412. That is, the axis direction of the target object 501 is the direction along the flat of the user's hand 412, and the target object 501 is arranged in the direction perpendicular to the axis direction of the finger.
 図25の状況493、及び、状況494では、対象オブジェクト501が手412の平に対して縦方向に配置されている。即ち、対象オブジェクト501の軸線方向がユーザの手412の平に沿った方向で、かつ、指の軸線方向に平行方向に配置されている。 In the situation 493 and the situation 494 of FIG. 25, the target object 501 is arranged vertically with respect to the palm of the hand 412. That is, the axial direction of the target object 501 is arranged along the flat of the user's hand 412 and parallel to the axial direction of the finger.
When generating feedback corresponding to the invisible state in the feedback generation process, the application execution unit 343 acquires the position and orientation of the target object 501 from the object information (the properties of the target object) detected by the object detection unit 342, and acquires the position and orientation of the user's hand 412 from the finger position detection unit 341. From these, the application execution unit 343 detects whether the target object 501 in the invisible state is placed laterally or longitudinally with respect to the palm of the user's hand 412.
According to the detected orientation of the target object, the application execution unit 343 changes which of the vibrators of the vibration presentation unit 354 of the hand controller 312 presents vibration when the user moves the hand 412. The vibrators are arranged, for example, at the tip of the thumb, the tip of the index finger, and the back of the hand, as shown in FIG. 5, although vibrators may be arranged at more locations.
For example, in a situation where the target object 501 is placed laterally with respect to the palm of the user's hand 412 as in situation 491 of FIG. 24, when the user moves the hand 412 forward, the application execution unit 343 shifts the vibrating position from the vibrators arranged on the fingertip side of the hand 412 toward the vibrators arranged on the base side of the fingers.
In a situation where the target object 501 is placed laterally with respect to the palm of the user's hand 412 as in situation 492 of FIG. 24, when the user moves the hand 412 sideways, the application execution unit 343 does not change the vibrating position and, for example, causes only the vibrators arranged at the fingertips to present vibration.
In a situation where the target object 501 is placed longitudinally with respect to the palm of the user's hand 412 as in situation 493 of FIG. 25, when the user moves the hand 412 forward, the application execution unit 343 does not change the vibrating position and, for example, causes only the vibrators arranged on some of the fingers to present vibration.
In a situation where the target object 501 is placed longitudinally with respect to the palm of the user's hand 412 as in situation 494 of FIG. 25, when the user moves the hand 412 sideways, the application execution unit 343 shifts the vibrating position from the vibrators arranged on the little-finger side of the hand 412 toward the vibrators arranged on the thumb side.
Note that, in the situations 491 to 494 shown in FIGS. 24 and 25, the way the vibrating position is changed is not limited to the above. It is sufficient that differences in the orientation of the invisible target object and differences in the movement of the hand can be recognized as differences in how the vibrating position changes.
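The following is a minimal Python sketch of the orientation-dependent vibrator selection described above. The vibrator names and the helper itself are hypothetical illustrations, not part of the publication; only the four cases of FIGS. 24 and 25 follow the text.

    # Hypothetical vibrator positions on the hand controller (FIG. 5 names only three of them).
    VIBRATORS = ["thumb_tip", "index_tip", "finger_base", "pinky_side", "thumb_side", "back_of_hand"]

    def select_vibration_pattern(object_axis: str, hand_motion: str):
        """Return an ordered list of vibrators to drive, given the rod-shaped object's
        axis relative to the palm ('lateral' or 'longitudinal') and the hand motion
        direction ('forward' or 'sideways')."""
        if object_axis == "lateral" and hand_motion == "forward":
            # Sweep from the fingertip side toward the finger bases (situation 491).
            return ["thumb_tip", "index_tip", "finger_base"]
        if object_axis == "lateral" and hand_motion == "sideways":
            # Keep the position fixed: fingertip vibrators only (situation 492).
            return ["thumb_tip", "index_tip"]
        if object_axis == "longitudinal" and hand_motion == "forward":
            # Keep the position fixed: only some fingers vibrate (situation 493).
            return ["index_tip"]
        if object_axis == "longitudinal" and hand_motion == "sideways":
            # Sweep from the little-finger side toward the thumb side (situation 494).
            return ["pinky_side", "back_of_hand", "thumb_side"]
        return []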
According to the fifth example of feedback corresponding to the invisible state described above, the orientation, relative to the user's hand, of an invisible target object that can be grasped or pinched by the hand is presented to the user by the feedback vibration. That is, the orientation of the target object with respect to the user's hand can be recognized through feedback to the user's sense of touch. Such feedback would be redundant if the user could see the target object, but when the target object is not visible, there is information that is useful even if it does not match the amount of information obtained by looking at the target object directly. Such useful information can be presented to the user through feedback to the user's sense of touch.
(Sixth Example of Feedback Corresponding to the Invisible State)
The sixth example generates feedback for indicating that an invisible target object needed by the user is in the vicinity of the user's hand. For example, a user may want to pick up a necessary tool in a situation where the user cannot take their eyes off some task. In such a case, the user moves a hand to the approximate position where the tool is placed without looking at it. In the sixth example, when the hand moves close to the intended tool, feedback informs the user that the tool is near the hand.
FIG. 26 is a diagram illustrating the sixth example of feedback corresponding to the invisible state.
In FIG. 26, the user 411 wears on the head an AR glass 401, which represents the AR glass system 311 of FIG. 17 as hardware, and is viewing an image of an engine 511 in the AR space. The user 411 is gazing at a point of interest on the engine 511 while pointing at it with a finger of the left hand 412L. Keeping that posture, the user 411 moves the right hand 412R to the place where unused yellow sticky notes (virtual objects) are placed, pinches a sticky note, and attaches it to the point of interest. The user 411 repeats this operation of attaching yellow sticky notes.
From such user actions, a fixed work procedure, or the like, the application execution unit 343 detects (predicts) the target object that the user 411 wants to grasp. In FIG. 26, the yellow sticky note is detected as the target object for the user 411. When, among the multiple existing objects, the hand of the user 411 approaches or touches the target object in the invisible state, the application execution unit 343 generates feedback that presents vibration to the user's hand. The vibration may be a simple sine wave, and the feedback may be presented as sound instead of vibration. In FIG. 26, when the right hand 412R of the user 411 comes close to the yellow sticky note, the application execution unit 343 generates feedback that presents vibration to the user's right hand 412R or the like.
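A minimal sketch of this sixth example follows, assuming hypothetical data structures and callbacks (record_grab, vibrate, is_object_visible); the publication specifies only the behavior of predicting the repeatedly grabbed object and vibrating when the unseen hand is near it.

    from collections import Counter

    class NearbyObjectFeedback:
        def __init__(self, proximity_threshold=0.05):  # meters; threshold value is an assumption
            self.grab_history = Counter()
            self.threshold = proximity_threshold

        def record_grab(self, object_kind: str):
            # Called each time the user grabs an object; repeated grabs drive the prediction.
            self.grab_history[object_kind] += 1

        def predicted_target(self):
            # The most frequently grabbed kind is taken as the predicted target object.
            return self.grab_history.most_common(1)[0][0] if self.grab_history else None

        def update(self, hand_pos, objects, is_object_visible, vibrate):
            target_kind = self.predicted_target()
            for obj in objects:
                if obj["kind"] != target_kind or is_object_visible(obj):
                    continue
                dx, dy, dz = (h - o for h, o in zip(hand_pos, obj["pos"]))
                if (dx * dx + dy * dy + dz * dz) ** 0.5 < self.threshold:
                    vibrate(waveform="sine")  # sound could be presented instead of vibration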
According to the sixth example of feedback corresponding to the invisible state described above, when the user's hand approaches an invisible target object that the user is predicted to grasp based on the user's repeated actions, feedback vibration or sound is presented to the user. That is, the user can be made to recognize, through feedback to the sense of hearing or touch, that the hand has approached or touched the desired object. Such feedback would be redundant if the user could see the target object, but when the target object is not visible, there is information that is useful even if it does not match the amount of information obtained by looking at the target object directly. Such useful information can be presented to the user through feedback to the user's sense of hearing or touch.
(Seventh Example of Feedback Corresponding to the Invisible State)
The seventh example generates feedback for indicating that an invisible target object that is dangerous to touch (one that may cause an accident, a disadvantage, or the like) is in the vicinity of the user's hand.
FIG. 27 is a diagram illustrating the seventh example of feedback corresponding to the invisible state.
In FIG. 27, situation 521 shows a case where the target object approached or touched by the user's hand 412 is a cup containing liquid (a real object). Situation 522 shows a case where the target object is a heated kettle (a real object). Situation 523 shows a case where the target object is a selection image of the credit card to be used. These target objects correspond to dangerous objects.
When generating feedback corresponding to the invisible state in the feedback generation process, the application execution unit 343 acquires the type of the invisible target object from the object information (the properties of the target object) detected by the object detection unit 342. Based on the acquired object information, the application execution unit 343 detects whether or not the invisible target object is dangerous.
Whether or not a target object is dangerous may be set in advance as object information, or may be determined based on the real-space environment.
When the invisible target object is dangerous, the application execution unit 343 generates feedback that presents an image, sound, or vibration (tactile sensation) to the user. As a result, for example, an image, sound, or vibration indicating danger is presented to the user. If the degree of danger of the target object can be acquired, the application execution unit 343 may generate feedback that presents sound or vibration with an amplitude corresponding to the degree of danger; for example, the higher the degree of danger, the larger the amplitude of the sound or vibration presented to the user. An image whose color or size corresponds to the degree of danger may also be presented. When the target object is a virtual object, the application execution unit 343 may invalidate operations on it.
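Below is a minimal sketch of this danger-scaled feedback. The field names ("danger_level", "is_virtual"), the 0.0 to 1.0 danger scale, and the amplitude mapping are assumptions; only the directions (higher danger leads to larger amplitude, dangerous virtual objects may have operations invalidated) follow the text.

    def danger_feedback(target, present_sound, present_vibration):
        """Present danger feedback and return True if operating the object remains allowed."""
        danger = target.get("danger_level", 0.0)   # assumed to be tagged as object information
        if danger <= 0.0:
            return True                            # not dangerous: no feedback, operation allowed
        amplitude = min(1.0, 0.3 + 0.7 * danger)   # higher danger -> larger amplitude
        present_sound(amplitude=amplitude)
        present_vibration(amplitude=amplitude)
        # For a dangerous virtual object, the operation itself may be invalidated.
        return not target.get("is_virtual", False)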
Note that some target objects respond only when the user performs a specific hand motion (gesture), for example only a "stroking" motion or a "stirring" motion. For such target objects, the application execution unit 343 may generate feedback that presents an image, sound, or vibration to the user only when the user performs the specific motion.
The application execution unit 343 may also generate two-stage feedback as follows. For example, to inform the user of the presence of an invisible dangerous target object, the application execution unit 343 first generates feedback that presents an image, sound, or vibration as simple primary information. After that, when the user shows interest and performs a specific hand motion (action) such as "stroking", the application execution unit 343 generates feedback that presents an image, sound, or vibration as further information.
When the invisible target object is dangerous, the application execution unit 343 may cause feedback presenting an image, sound, or vibration to be executed, via communication by the communication unit 323, not only for the user but also for other people (that is, for the AR glass systems used by others). Similarly, when the target object is dangerous, the application execution unit 343 may notify a management center of the danger from the communication unit 323 via a network or the like so that the persons concerned are contacted. This allows people around the user to know that the user is performing a dangerous operation and to provide support.
According to the seventh example of feedback corresponding to the invisible state described above, when the user's hand approaches or touches a dangerous object in the invisible state, a feedback image, sound, or vibration is presented to the user. That is, the user can be made to recognize, through feedback to the senses of sight, hearing, or touch, that the hand has approached or touched a dangerous object without seeing it. Such feedback would be redundant if the user could see the target object, but when the target object is not visible, there is information that is useful even if it does not match the amount of information obtained by looking at the target object directly. Such useful information can be presented to the user through feedback to the user's senses of sight, hearing, or touch.
According to the second embodiment of the information processing apparatus described above, in the xR space provided to the user, when the user's hand approaches or touches an object while the user is visually recognizing that object (the visible state), normal feedback is executed; in normal feedback, for example, a realistic tactile sensation is presented to the user. On the other hand, when the user's hand approaches or touches an object while the user is not visually recognizing it (the invisible state), feedback corresponding to the invisible state is executed. In the feedback corresponding to the invisible state, information that is easily obtained in the visible state but cannot be obtained in the invisible state is presented to the user through feedback to the senses of sight, hearing, or touch. When the target object is invisible, the user often wants to know the properties of the target object, so the feedback corresponding to the invisible state presents attributes of the unseen object, such as its size, number, color, type, orientation, necessity, and danger. By switching the role of the feedback between the visible state and the invisible state in this way, feedback to the user serves both as a means of providing a realistic experience in the xR space and as a means of providing useful information.
Note that Patent Document 2 (Japanese Unexamined Patent Publication No. 2019-008798) proposes presenting a tactile sensation according to the position being viewed. The technique of Patent Document 2 aims to direct the user's line of sight to a point of interest by stimulating the user's sense of touch when the user is looking away from the place (region of interest) the system wants the user to see. The technique of Patent Document 2 does not detect whether the user is visually recognizing the target object at hand, nor does it present information according to the target object, as the second embodiment of the present information processing apparatus does. Therefore, the second embodiment of the present information processing apparatus and the technique of Patent Document 2 differ greatly in technical content.
<<Third Embodiment of the Information Processing Apparatus to Which the Present Technology Is Applied>>
FIG. 28 is a block diagram showing a configuration example of the third embodiment of the information processing apparatus to which the present technology is applied. In the figure, the parts common to the information processing apparatus 11 of FIG. 1 are denoted by the same reference numerals, and their description is omitted.
Like the information processing apparatus 11 of FIG. 1, the information processing apparatus 601 provides the user with a space and world generated by xR, using the HMD 73 of FIG. 4 and the controller of FIG. 5 (the controller 75 of FIG. 2).
The information processing apparatus 601 has a sensor unit 21, a control unit 612, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26. The information processing apparatus 601 is therefore common to the information processing apparatus 11 of FIG. 1 in that it has the sensor unit 21, the video display unit 23, the sound presentation unit 24, the tactile presentation unit 25, and the storage unit 26, but differs from it in that the control unit 612 is provided instead of the control unit 22 of FIG. 1.
The control unit 612 is described below. Note that the information processing apparatus 601 can be built on the hardware 61 shown in FIGS. 2 and 3.
The control unit 612 has a sensor information acquisition unit 621, a position/orientation acquisition unit 622, an object selection acquisition unit 623, an application execution unit 624, and an output control unit 625.
The sensor information acquisition unit 621 acquires sensor information detected by the various sensors of the sensor unit 21.
Based on the sensor information acquired by the sensor information acquisition unit 621, the position/orientation acquisition unit 622 acquires the positions and orientations of the HMD 73, the speaker 74, and the controller 75 (see FIG. 2) as position/orientation information. The position/orientation acquisition unit 622 acquires not only the position and orientation of the controller itself but also, when a controller worn on the user's hand is used, the positions and orientations of the user's hand and fingers. The positions and orientations of the user's hand and fingers can also be acquired based on images captured by the camera 31.
Based on the sensor information acquired from the sensor information acquisition unit 621 and the hand and finger position/orientation information acquired from the position/orientation acquisition unit 622, the object selection acquisition unit 623 acquires whether or not any object present in the xR space has been selected by the user as an operation target. The selection of an object is determined to have been performed when the user brings the index finger and the thumb into contact, or when the user closes the hand. When any object has been selected, the object selection acquisition unit 623 identifies the selected object.
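A minimal sketch of this selection check is shown below. The distance threshold and the helper name are assumptions; the publication only states that selection is judged when the index finger and thumb touch or when the hand is closed.

    def is_selection_gesture(thumb_tip, index_tip, is_fist_closed, touch_threshold=0.015):
        """Return True when the pinch (thumb-index contact) or a closed fist is detected.
        thumb_tip and index_tip are 3D positions in meters; 1.5 cm is an assumed contact threshold."""
        dx, dy, dz = (t - i for t, i in zip(thumb_tip, index_tip))
        pinch = (dx * dx + dy * dy + dz * dz) ** 0.5 < touch_threshold
        return pinch or is_fist_closed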
The application execution unit 624 generates the xR space to be provided to the user by executing the program of a predetermined application.
Based on operations such as the user's hand motions, the application execution unit 624 performs processing in the xR space, such as movement, rotation, and scaling, on the object identified by the object selection acquisition unit 623 (the object selected by the user as the operation target).
In response to the user's operation on an object, the application execution unit 624 generates feedback to the user's senses of sight (visuals), hearing (sound), and touch according to the attributes of the object (the object information).
The application execution unit 624 has an object information acquisition unit 631 and a selection feedback generation unit 632.
The object information acquisition unit 631 acquires the object information tagged to the object selected as the operation target. The object information is tagged in advance to each object present in the xR space and stored in the storage unit 26, and the object information acquisition unit 631 reads from the storage unit 26 the object information tagged to the selected object. The object information acquisition unit 631 may also recognize a real object and extract its object information based on the sensor information acquired by the sensor information acquisition unit 621, or may dynamically extract object information from a virtual object.
The selection feedback generation unit 632 has a visual generation unit 641, a sound generation unit 642, and a tactile generation unit 643.
Based on the object information acquired by the object information acquisition unit 631, the visual generation unit 641 generates an image of an operation line corresponding to (the attributes of) the object selected as the operation target. The operation line is a line connecting the selected object and the hand of the operating user. The operation line is not limited to a straight line; it may be, for example, a dotted line, only a start point and an end point, or a line made of connected 3D objects.
Based on the object information acquired by the object information acquisition unit 631, the sound generation unit 642 generates a sound corresponding to (the attributes of) the selected object.
Based on the object information acquired by the object information acquisition unit 631, the tactile generation unit 643 generates a tactile sensation corresponding to (the attributes of) the selected object.
The output control unit 625 generates an image, sound, and tactile sensation for presenting to the user the xR space generated by the application execution unit 624, and outputs them as output signals to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively. In this way, the output control unit 625 controls the visual, auditory, and tactile information presented to the user. When generating the image, sound, and tactile sensation to be presented to the user, the output control unit 625 reflects the image, sound, and tactile sensation generated by the visual generation unit 641, the sound generation unit 642, and the tactile generation unit 643 of the application execution unit 624 as feedback information to the user regarding the operation of the selected object.
<Overview of Object Operation>
FIG. 29 is a diagram illustrating the operation of an object.
In FIG. 29, the image of the xR space presented to the user's vision by the HMD 73 includes the user's right hand 661R and left hand 661L. A virtual object 667, which is an anatomical model, is presented as a 3D object. Suppose that the user performs an operation (motion) of emitting (virtual) rays from, for example, the right hand 661R and the left hand 661L and directs the rays at the virtual object 667. At this time, a rectangular frame 672 representing an individually operable range is displayed on the virtual object 667 hit by the rays. In FIG. 29, the whole of the virtual object 667 is operated as one unit, so the frame 672 surrounding the entire virtual object 667 is displayed.
Next, when the user performs a pinching motion by bringing the thumb and index finger into contact in each of the right hand 661R and the left hand 661L, the intersections of the rays emitted from the right hand 661R and the left hand 661L at that time with the surface enclosed by the frame 672 are set as connection points 673R and 673L. As a result, the user's right hand 661R is connected to the connection point 673R by an operation line 674R, and the user's left hand 661L is connected to the connection point 673L by an operation line 674L, whereby the virtual object 667 is selected as the operation target. The operation lines 674R and 674L are regarded as being formed of, for example, a highly rigid material, so the user can move, rotate, scale, and otherwise operate the virtual object 667 by moving the right hand 661R and the left hand 661L. The same operations can also be performed with a controller instead of the hands. The frame 672 may be the surface of the virtual object 667, and the frame 672 is not shown in the following description. The object to be operated may be a real object instead of a virtual object; in the following description, however, the operation target is assumed to be a virtual object.
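A minimal sketch of setting a connection point as the intersection of the hand ray with a plane of the frame 672 is shown below; it is plain vector math, and all names are assumptions rather than part of the publication.

    def connection_point(ray_origin, ray_dir, plane_point, plane_normal, eps=1e-9):
        """Return the intersection of the hand ray with the frame plane, or None if there is none."""
        denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
        if abs(denom) < eps:
            return None                       # ray is parallel to the frame surface
        t = sum((p - o) * n for p, o, n in zip(plane_point, ray_origin, plane_normal)) / denom
        if t < 0:
            return None                       # the frame lies behind the hand
        return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))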
<Procedure of Processing Performed by the Information Processing Apparatus 601>
FIG. 30 is a flowchart illustrating an example of the processing procedure of the information processing apparatus 601.
In step S51, the object selection acquisition unit 623 causes a virtual ray to be emitted from the user's hand in response to a predetermined operation or motion by the user. The user directs the virtual ray at the virtual object to be operated. The processing proceeds from step S51 to step S52.
In step S52, the user operates the hand or the controller to select the virtual object to be operated; that is, as described with reference to FIG. 29, the user selects the virtual object by, for example, performing a pinching motion with the thumb and index finger. The object selection acquisition unit 623 identifies the virtual object selected as the operation target. The processing proceeds from step S52 to step S53.
In step S53, the object information acquisition unit 631 of the control unit 612 acquires the object information of the selected object. When a partially selectable object is selected, the object information acquisition unit 631 acquires the object information of the selected part. The processing proceeds from step S53 to step S54.
In step S54, the position/orientation acquisition unit 622 acquires the positions and orientations of the hand and fingers as position/orientation information. The processing proceeds from step S54 to step S55.
In step S55, the selection feedback generation unit 632 of the control unit 612 generates the image, sound, and tactile sensation to be presented to the user as feedback on the user's operation, based on the hand and finger position/orientation information acquired in step S54 and the object information acquired in step S53. The processing proceeds from step S55 to step S56.
In step S56, the output control unit 625 of the control unit 612 outputs the feedback image, sound, and tactile sensation generated in step S55 to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively. The processing then returns from step S56 to step S54 and repeats from step S54. When the user separates the thumb and the index finger to deselect the object, the processing returns to step S51 and repeats from step S51.
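The loop of FIG. 30 can be summarized by the following sketch. The unit objects and their method names are assumptions standing in for the blocks of FIG. 28; only the step order S51 to S56 and the two return paths follow the flowchart.

    def run_selection_loop(object_selection, object_info_store, pose, feedback, output):
        while True:
            object_selection.emit_virtual_ray()                       # S51
            target = object_selection.wait_for_pinch_selection()      # S52
            info = object_info_store.get(target)                      # S53
            while object_selection.is_still_pinching():
                hand_pose = pose.get_hand_and_finger_pose()           # S54
                image, sound, haptics = feedback.generate(info, hand_pose)  # S55
                output.present(image, sound, haptics)                 # S56, then back to S54
            # Pinch released: the selection is cleared, so return to S51.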
<Details of Processing of the Application Execution Unit 624>
The processing of the application execution unit 624 of the information processing apparatus 601 is described below.
The selection feedback generation unit 632 of the application execution unit 624 generates feedback to the user, such as images of operation lines, sounds, and vibrations, in response to the operation of a virtual object in the xR space. The feedback is generated based on the object information (attributes) of the virtual object selected as the operation target, which gives the user an operation feel corresponding to the attributes of the virtual object being operated.
In the following, the processing of the application execution unit 624 is described by taking as an example the case where the virtual object shown in FIG. 31 is the operation target.
FIG. 31 is a diagram illustrating a virtual object.
In FIG. 31, a virtual object 691 represents a 3D object of a tree. The virtual object 691 is divided into parts that can be individually selected as operation targets: for example, a leaf part 692 (hereinafter referred to as the leaf portion 692) and a trunk part 693 (hereinafter referred to as the trunk portion 693).
Such individually selectable parts may be registered in advance in the storage unit 26 as object information, or may be dynamically extracted from the virtual object.
The object information acquisition unit 631 of the application execution unit 624 acquires object information for each part of the virtual object selected as the operation target. The object information includes color, hardness, weight, image, sound, size, elasticity of the object, thickness, hardness/brittleness, material characteristics (glossiness and roughness), heat, importance, and the like.
When the leaf portion 692 of FIG. 31 is selected as the operation target, the object information acquisition unit 631 acquires, as the object information (attribute types) of the leaf portion 692, information on, for example, color, hardness, weight, image, sound, and vibration data. For example, for color, "green, the color of fresh greenery" is acquired; for hardness, "soft"; for weight, "light"; for the image, "image data of leaves"; for the sound, "the sound of leaves rubbing against each other"; and for the vibration data, "the rustling vibration of a leaf touching the hand".
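For illustration, the per-part object information could be held as a structure like the sketch below. The dictionary keys and asset identifiers are assumptions; the attribute values follow the examples given in the text for the leaf portion 692 and (later) the trunk portion 693.

    # Hypothetical tagged object information for the two selectable parts of the tree object 691.
    OBJECT_INFO = {
        "leaf_692": {
            "color": "fresh_green",
            "hardness": "soft",
            "weight": "light",
            "image": "leaf_texture",        # assumed asset identifier for the leaf image data
            "sound": "leaves_rustling",     # leaves rubbing against each other
            "vibration": "rustle_on_touch",
        },
        "trunk_693": {
            "color": "brown",
            "hardness": "hard",
            "weight": "heavy",
            "image": "trunk_texture",
            "sound": "creak_of_bent_branch",
            "vibration": "heavy_bend",
        },
    }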
As feedback to the user's sense of sight for the operation of the virtual object, the selection feedback generation unit 632 (visual generation unit 641) of the application execution unit 624 uses the object information of the leaf portion 692 to generate operation lines (objects) corresponding to the operation lines 674R and 674L shown in FIG. 29.
FIG. 32 is a diagram illustrating four forms of operation line presented to the user when the leaf portion 692 of the virtual object 691 of FIG. 31 is selected as the operation target.
In forms 701 to 704 of FIG. 32, the user's right hand 661R and the leaf portion 692 are connected by operation lines 711 to 714, which differ in form. The operation line for the left hand is omitted. Note that both hands do not necessarily have to be connected to the operation target virtual object by operation lines.
In form 701, the operation line 711 is generated as a solid line. The generation of the operation line 711 reflects the color, hardness, and weight information among the object information of the leaf portion 692. For example, since the color of the leaf portion 692 is green, the operation line 711 is also generated in green; since the leaf portion 692 is soft and light, the operation line 711 is generated not as a straight line but as a soft (curved) line. The thickness of the operation line 711 may also be changed according to the hardness or weight of the leaf portion 692.
In form 702, the operation line 712 is generated as a dotted line. The operation line 712 differs from the operation line 711 of form 701 only in the type of line; like the operation line 711, it reflects the object information of the leaf portion 692.
In form 703, only the start point and end point of the operation line 713 are presented. The operation line 713 differs from the operation line 711 of form 701 only in the form of the line; like the operation line 711, it reflects the object information of the leaf portion 692.
In form 704, the operation line 714 is generated as a chain of leaves. The operation line 714 uses the "image data of leaves" among the object information of the leaf portion 692 and, like the operation line 711, reflects the information on the hardness and weight of the leaf portion 692.
Note that the operation lines are not limited to the operation lines 711 to 714; when an animation of leaves is included as the object information of the leaf portion 692, the operation line may be generated by connecting the leaf animation.
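The mapping from object attributes to the appearance of the operation line can be sketched as follows. The parameter names and the curvature value are assumptions; only the directions described in FIGS. 32 and 33 follow the text (the line takes the object's color, and a soft and light part gives a curved line while a hard and heavy part gives a straight one).

    def operation_line_style(info: dict, style: str = "solid") -> dict:
        """Derive the operation line appearance from the tagged attributes of the selected part."""
        soft_and_light = info["hardness"] == "soft" and info["weight"] == "light"
        return {
            "line_type": style,                            # "solid", "dotted", "endpoints", or "chain"
            "color": info["color"],                        # e.g. green for the leaf, brown for the trunk
            "curvature": 0.4 if soft_and_light else 0.0,   # curved for the leaf, straight for the trunk
            "segment_image": info["image"] if style == "chain" else None,  # chain of leaves / trunks
        }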
As feedback to the user's sense of hearing for the operation of the virtual object, the selection feedback generation unit 632 (sound generation unit 642) of the application execution unit 624 uses the object information of the leaf portion 692 to generate a sound. For example, the sound generation unit 642 generates the sound presented to the user when the leaf portion 692 moves in response to the user's operation, using the sound information included in the object information of the leaf portion 692 (the sound of leaves rubbing against each other). The volume of the sound may be changed according to the movement of the user's hand and the leaf portion 692. For example, when both the user's hand and the leaf portion 692 are stationary, a sound of leaves rustling slightly in the wind is generated; when the user moves the hand to the right and the leaf portion 692 also moves to the right at high speed, a sound as if a strong wind were hitting the leaves is generated.
As feedback to the user's sense of touch for the operation of the virtual object, the selection feedback generation unit 632 (tactile generation unit 643) of the application execution unit 624 uses the object information of the leaf portion 692 to generate a tactile sensation (vibration).
The tactile generation unit 643 uses the vibration data included in the object information of the leaf portion 692 to generate the rustling vibration of a leaf touching the hand.
The tactile generation unit 643 changes the generated vibration according to the movement of the user's hand and the leaf portion 692. For example, when the user's hand and the leaf portion 692 are both stationary or moving slowly, the tactile generation unit 643 generates a high-frequency vibration repeated about twice per second; when they are moving quickly, it generates a high-frequency vibration repeated about ten times per second. The frequency of the vibration reflects the weight information included in the object information of the leaf portion 692.
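A minimal sketch of this speed-dependent haptic rate follows. The numeric mapping (1 m/s treated as "fast") is an assumption; the rest-state rates of about two repetitions per second for the light leaf and about one per second for the heavy trunk, and about ten per second when moving fast, come from the text.

    def vibration_repeat_rate(speed_m_per_s: float, weight: str) -> float:
        """Return how many times per second the tagged vibration pattern is repeated."""
        slow_rate = 2.0 if weight == "light" else 1.0   # at rest: leaf ~2/s, trunk ~1/s
        fast_rate = 10.0                                # when moving fast: ~10/s
        ratio = min(1.0, speed_m_per_s / 1.0)           # 1 m/s treated as "fast" (assumed)
        return slow_rate + (fast_rate - slow_rate) * ratio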
When the trunk portion 693 of FIG. 31 is selected as the operation target, the object information acquisition unit 631 acquires, as the object information (attribute types) of the trunk portion 693, information on color, hardness, weight, image, sound, vibration data, and the like, as with the leaf portion 692. For example, for color, "brown" is acquired; for hardness, "hard"; for weight, "heavy"; for the image, "image data of a tree trunk"; for the sound, "the creaking sound of bending an elastic tree branch"; and for the vibration data, "the heavy vibration of bending an elastic tree branch".
As feedback to the user's sense of sight for the operation of the virtual object, the selection feedback generation unit 632 (visual generation unit 641) of the application execution unit 624 uses the object information of the trunk portion 693 to generate operation lines (objects).
FIG. 33 is a diagram illustrating four forms of operation line presented to the user when the trunk portion 693 of the virtual object 691 of FIG. 31 is selected as the operation target.
Forms 701 to 704 of FIG. 33 correspond to the forms 701 to 704 with the same reference numerals in FIG. 32; the generation methods of the operation lines are the same, and the operation lines are denoted by the same reference numerals.
In form 701, the operation line 711 is generated as a solid line. The generation of the operation line 711 reflects the color, hardness, and weight information among the object information of the trunk portion 693. For example, since the color of the trunk portion 693 is brown, the operation line 711 is also generated in brown; since the trunk portion 693 is hard and heavy, the operation line 711 is generated as a straight line.
In form 702, the operation line 712 is generated as a dotted line. The operation line 712 differs from the operation line 711 of form 701 only in the type of line; like the operation line 711, it reflects the object information of the trunk portion 693.
In form 703, only the start point and end point of the operation line 713 are presented. The operation line 713 differs from the operation line 711 of form 701 only in the form of the line; like the operation line 711, it reflects the object information of the trunk portion 693.
In form 704, the operation line 714 is generated as a chain of tree trunks. The operation line 714 uses the "image data of a tree trunk" among the object information of the trunk portion 693 and, like the operation line 711, reflects the information on the hardness and weight of the trunk portion 693.
Note that the operation lines are not limited to the operation lines 711 to 714; when an animation of a tree trunk is included as the object information of the trunk portion 693, the operation line may be generated by connecting the tree trunk animation.
As feedback to the user's sense of hearing for the operation of the virtual object, the selection feedback generation unit 632 (sound generation unit 642) of the application execution unit 624 uses the object information of the trunk portion 693 to generate a sound. For example, the sound generation unit 642 generates the sound presented to the user when the trunk portion 693 moves in response to the user's operation, using the sound information included in the object information of the trunk portion 693 (the creaking sound of bending an elastic tree branch). The volume of the sound may be changed according to the movement of the user's hand and the trunk portion 693. For example, when both the user's hand and the trunk portion 693 are stationary, almost no sound is generated, but when the user's hand or the trunk portion 693 is moving, a strong creaking sound is generated.
As feedback to the user's sense of touch for the operation of the virtual object, the selection feedback generation unit 632 (tactile generation unit 643) of the application execution unit 624 uses the object information of the trunk portion 693 to generate a tactile sensation (vibration).
The tactile generation unit 643 uses the vibration data included in the object information of the trunk portion 693 to generate the heavy vibration of bending an elastic tree branch.
The tactile generation unit 643 changes the generated vibration according to the movement of the user's hand and the trunk portion 693. For example, when the user's hand and the trunk portion 693 are both stationary or moving slowly, the tactile generation unit 643 generates a low-frequency vibration repeated about once per second. When the user's hand and the trunk portion 693 start to move, it generates a low-frequency vibration with a large amplitude, and as the movement becomes faster, it raises the frequency and generates a high-frequency vibration repeated about ten times per second. The frequency of the vibration reflects the weight information included in the object information of the trunk portion 693.
(Feedback According to the User's Characteristics)
A form in which the way feedback is given is changed according to the user's characteristics is described. When the selection feedback generation unit 632 of the application execution unit 624 uses the weight information included in the object information to generate the image, sound, and tactile sensation presented to the user as feedback, it may change the weight information based on the user's characteristics or the like.
For example, the selection feedback generation unit 632 changes the weight information of the object information according to the user's sex, hand size, body weight, and height. The selection feedback generation unit 632 determines that a tall or heavy user is physically strong and makes the weight in the object information slightly lighter.
FIG. 34 is a diagram illustrating a case where the weight of the object information is changed according to the user's strength.
State 721 of FIG. 34 shows a physically weak user operating the leaf portion 692 with the right hand 661R via the operation line 711. State 722 shows a physically strong user operating the leaf portion 692 with the right hand 661R via the operation line 711.
In state 721, the weak user feels more weight when moving the leaf portion 692 than a strong user does. Therefore, when generating the feedback, the selection feedback generation unit 632 changes the weight information included in the object information of the leaf portion 692 in the heavier direction. As a result, the operation line 711 is generated as a straight line, expressing an operation feel appropriate for a weak user.
In state 722, the strong user feels the leaf portion 692 as lighter when moving it than a weak user does. Therefore, when generating the feedback, the selection feedback generation unit 632 changes the weight information included in the object information of the leaf portion 692 in the lighter direction. As a result, the operation line 711 is generated as a curve, expressing an operation feel appropriate for a strong user.
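The adjustment could be sketched as below, under the assumption that the weight attribute is represented numerically and that strength is roughly estimated from height and body weight; the specific heuristic and constants are not from the publication, only the direction (a stronger user is given a lighter effective weight) is.

    def effective_weight(object_weight: float, height_cm: float, body_weight_kg: float) -> float:
        """Scale the tagged weight by an assumed strength estimate; 1.0 means an average user."""
        strength = (height_cm / 170.0 + body_weight_kg / 65.0) / 2.0   # assumed average values
        return object_weight / max(strength, 0.5)   # a stronger user perceives the object as lighter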
(Feedback According to the Status of the Object)
An example in which the way feedback is given is changed according to the status of the object being operated is described.
 When the user is moving the object to be operated and the object collides with a virtual or real wall or with another object, the selection feedback generation unit 632 of the application execution unit 624 may change the object information at the moment of the collision.
 FIG. 35 is a diagram showing how the object information is changed when the object to be operated collides with something.
 The operation line 711 in state 731 of FIG. 35 is an operation line generated by the same generation method as the operation line 711 of form 701 of FIG. 32. State 731 of FIG. 35 represents a case where the leaf portion 692 collides with a wall or the like while the user operates the operation line 711 with the right hand 661R to move the leaf portion 692.
 When the leaf portion 692 collides with the wall or the like, the selection feedback generation unit 632 changes the weight information included in the object information of the leaf portion 692 in the heavier direction, or the hardness information in the harder direction. As a result, the operation line 711, which was curved before the leaf portion 692 collided, changes to a straight line at the moment of the collision, and the collision is appropriately expressed.
 The operation line 714 in state 732 of FIG. 35 is an operation line generated by the same generation method as the operation line 714 of form 704 of FIG. 32. State 732 of FIG. 35 represents a case where the leaf portion 692 collides with a wall or the like while the user operates the operation line 714 with the right hand 661R to move the leaf portion 692.
 When the leaf portion 692 collides with the wall or the like, the selection feedback generation unit 632 changes the weight information included in the object information of the leaf portion 692 in the heavier direction, or the hardness information in the harder direction. As a result, the leaves forming the operation line 714 scatter at the moment of the collision, and the collision is appropriately expressed.
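 The following Python sketch illustrates one way such a momentary change of the object information could be implemented. ObjectInfo, weight_gain, and hardness_floor are hypothetical names, and the numeric values are placeholders; the embodiment does not specify concrete data structures or amounts of change.

```python
from dataclasses import dataclass, replace

@dataclass
class ObjectInfo:
    """Hypothetical attribute record of the operated object."""
    weight: float
    hardness: float  # 0.0 (soft) .. 1.0 (hard)

def on_collision(info: ObjectInfo,
                 weight_gain: float = 3.0,
                 hardness_floor: float = 0.9) -> ObjectInfo:
    """Attributes to use while the collision is being expressed.

    Making the object temporarily heavier and harder causes the operation
    line to snap from a curve to a straight line (or the chain of leaf
    images to scatter) at the moment of impact.
    """
    return replace(info,
                   weight=info.weight * weight_gain,
                   hardness=max(info.hardness, hardness_floor))

if __name__ == "__main__":
    leaf = ObjectInfo(weight=0.2, hardness=0.1)
    print(on_collision(leaf))  # ObjectInfo(weight=0.6..., hardness=0.9)
```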
 Another embodiment in which the way feedback is given is changed according to the status of the object to be operated will be described.
 FIG. 36 is a diagram showing how the object to be operated is placed at a designated position on a surface.
 FIG. 36 shows the user operating the operation line 711 with the right hand 661R to move the object 741 to be operated through the air and place it at a designated position on a surface such as a desk. In this case, while the object 741 is moving through the air, the selection feedback generation unit 632 changes the weight information in the object information of the object 741 in the lighter direction. As a result, while the object 741 is moving through the air, the operation line 711 is generated as a curved, soft line.
 On the other hand, when the object 741 is placed on the surface, the selection feedback generation unit 632 changes the weight information in the object information of the object 741 in the heavier direction. As a result, when the object 741 is placed on the surface, the operation line 711 is generated as a straight line. A straight operation line 711 makes it easier to perform the delicate operation of placing the object 741 at the designated position on the surface.
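 As a rough sketch of how the adjusted weight and the in-air/on-surface state could be turned into the shape of the operation line, the Python function below samples a quadratic Bezier curve between the hand and the object, with a sag that shrinks as the effective weight grows and disappears while the object rests on a surface. The sag model, the constants, and all names are assumptions made for illustration only, not details taken from the embodiment.

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def lerp(a: Vec3, b: Vec3, t: float) -> Vec3:
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def operation_line_points(hand: Vec3, obj: Vec3, weight: float,
                          on_surface: bool, samples: int = 16) -> List[Vec3]:
    """Sample points of the operation line between the hand and the object.

    A light object being carried through the air gets a sagging quadratic
    Bezier curve; a heavy object, or an object resting on a surface where
    precise placement matters, gets a straight line (zero sag).
    """
    # Treat "placed on a surface" like a large weight: the sag disappears.
    sag = 0.0 if on_surface else min(0.3, 0.03 / max(weight, 0.01))
    mid = lerp(hand, obj, 0.5)
    control = (mid[0], mid[1] - sag, mid[2])  # pull the midpoint downward
    points = []
    for i in range(samples + 1):
        t = i / samples
        # Quadratic Bezier evaluated by repeated linear interpolation.
        points.append(lerp(lerp(hand, control, t), lerp(control, obj, t), t))
    return points

if __name__ == "__main__":
    airborne = operation_line_points((0.0, 1.2, 0.0), (0.6, 1.0, 0.8),
                                     weight=0.1, on_surface=False)
    placed = operation_line_points((0.0, 1.2, 0.0), (0.6, 1.0, 0.8),
                                   weight=0.1, on_surface=True)
    print(airborne[8][1], placed[8][1])  # the airborne midpoint sags lower
```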
(Embodiment for improving the visibility of the operation line)
 An embodiment for making the operation line easier to see will be described.
 If the background color is close to the color of the operation line, the operation line may become hard to see. Therefore, the selection feedback generation unit 632 may change the color information in the object information of the operation target according to the environment color that the user is seeing.
 FIG. 37 is a diagram illustrating processing for making the operation line easier to see. State 751 of FIG. 37 is a case where the environment color is dark, and state 752 is a case where the environment color is bright.
 When the environment color is dark, as in state 751, the selection feedback generation unit 632 changes the color information in the object information of the object 741 to be operated to a brighter color. As a result, when the environment color is dark, the operation line 711 is generated in a bright color and becomes easy to see.
 When the environment color is bright, as in state 752, the selection feedback generation unit 632 changes the color information in the object information of the object 741 to be operated to a darker color. As a result, when the environment color is bright, the operation line 711 is generated in a dark color and becomes easy to see.
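 A simple contrast rule of this kind could look like the Python sketch below, which estimates the brightness of the environment color and switches the line color accordingly. The luminance formula is a common sRGB approximation, and the 0.5 threshold and the two candidate colors are arbitrary assumptions rather than values from the embodiment.

```python
def relative_luminance(rgb) -> float:
    """Rough sRGB luminance estimate from 0-255 channel values (no gamma
    linearization, which is good enough for a bright/dark decision)."""
    r, g, b = (c / 255.0 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def operation_line_color(environment_rgb,
                         bright_color=(240, 240, 240),
                         dark_color=(30, 30, 30)):
    """Bright line against a dark environment, dark line against a bright
    environment, so that the operation line remains visible."""
    return bright_color if relative_luminance(environment_rgb) < 0.5 else dark_color

if __name__ == "__main__":
    print(operation_line_color((20, 25, 30)))     # dark room   -> bright line
    print(operation_line_color((230, 230, 210)))  # bright room -> dark line
```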
 According to the third embodiment of the information processing apparatus described above, when the user operates an object in the xR space provided to the user, feedback such as an operation line corresponding to the attributes of the object to be operated is presented to the user. This allows the user to intuitively and easily grasp what kind of attributes the object being operated has and which part of the object is being operated.
 Note that Patent Document 4 (Japanese Patent No. 5871345) discloses a technique in which a user remotely operates a virtual object by clenching a hand and performing a predetermined gesture. With the technique of Patent Document 4, however, it is difficult to grasp what kind of attributes the object being operated has and which part of the object is being operated. Even if an operation line were displayed, that alone would not make it possible to intuitively grasp the attributes of the object to be operated, as the third embodiment of the present information processing apparatus does.
<Program>
 The series of processes in the information processing device 11, the information processing device 301, or the information processing device 601 described above can be executed by hardware or by software. When the series of processes is executed by software, the programs constituting the software are installed on a computer. Here, the computer includes a computer built into dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 FIG. 38 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes by a program.
 In the computer, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are connected to one another by a bus 904.
 An input/output interface 905 is further connected to the bus 904. An input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a drive 910 are connected to the input/output interface 905.
 The input unit 906 includes a keyboard, a mouse, a microphone, and the like. The output unit 907 includes a display, a speaker, and the like. The storage unit 908 includes a hard disk, a non-volatile memory, and the like. The communication unit 909 includes a network interface and the like. The drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
 In the computer configured as described above, the CPU 901 loads a program stored in, for example, the storage unit 908 into the RAM 903 via the input/output interface 905 and the bus 904 and executes it, whereby the above-described series of processes is performed.
 The program executed by the computer (CPU 901) can be provided by being recorded on the removable medium 911 as a package medium or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
 In the computer, the program can be installed in the storage unit 908 via the input/output interface 905 by mounting the removable medium 911 in the drive 910. The program can also be received by the communication unit 909 via a wired or wireless transmission medium and installed in the storage unit 908. Alternatively, the program can be installed in the ROM 902 or the storage unit 908 in advance.
 The program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at a necessary timing, such as when a call is made.
This technology can also take the following configurations.
(1) An information processing device having a processing unit that controls the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of a virtual object, based on an attribute of the virtual object to be operated or the status of the operation on the virtual object.
(2) The information processing apparatus according to (1), wherein the operation of the virtual object includes an operation of bringing the user's hand into contact with the virtual object.
(3) The processing unit is
The information processing device according to (1) or (2), which controls the presentation based on whether or not the user intends to operate the virtual object.
(4) The processing unit is
The information processing apparatus according to (3), wherein the information is not presented for the operation of the virtual object when the user does not intend to operate the virtual object.
(5) The processing unit is
The information processing apparatus according to (3) or (4), wherein different information is presented for the operation of the virtual object depending on whether the user intends to operate the virtual object or does not intend to operate the virtual object.
(6) The processing unit is
The information processing device according to (3) or (5), wherein the brightness of the virtual object is changed as the information to the visual sense when the user does not intend to operate the virtual object.
(7) The processing unit is
The information processing apparatus according to (3), (5), or (6), wherein, when the user does not intend to operate the virtual object, a sound or vibration of a type or amplitude corresponding to the speed of the user's hand is presented as the information to the auditory sense or the tactile sense.
(8) The processing unit is
The information processing apparatus according to any one of (1) to (7), which controls the presentation based on whether or not the virtual object is within the field of view of the user.
(9) The processing unit is
The information processing apparatus according to any one of (1) to (8), which controls the presentation based on the orientation of the user's hand with respect to the virtual object.
(10) The processing unit is
The information processing apparatus according to any one of (1) to (9), which controls the presentation based on whether or not the object is held in the user's hand.
(11) The processing unit is
The information processing apparatus according to any one of (1) to (10), which controls the presentation based on the direction of the user's line of sight or the direction of the head-mounted display worn by the user.
(12) The processing unit is
The information processing apparatus according to any one of (1) to (11), which controls the presentation based on the positional relationship between the user and the virtual object.
(13) The processing unit is
The information processing apparatus according to any one of (1) to (12), which controls the presentation based on the state of the user's hand.
(14) The processing unit is
The information processing apparatus according to any one of (1) to (13), which controls the presentation based on the relationship with the object visually recognized by the user.
(15) The processing unit is
The information processing apparatus according to (1) or (2), wherein different information is presented for the operation of the virtual object depending on whether or not the user is visually recognizing the virtual object.
(16) The processing unit is
The information processing apparatus according to (15), wherein the virtual object is regarded as not being visually recognized when the virtual object does not exist within the field of view of the user or of the head-mounted display worn by the user, or when the user's eyes are closed.
(17) The processing unit is
The information processing apparatus according to (15) or (16), wherein the virtual object is not visually recognized when the virtual object is present in the peripheral visual field of the user.
(18) The processing unit is
The information processing apparatus according to any one of (15) to (17), wherein the virtual object is regarded as not being visually recognized when the virtual object has not existed in the central visual field of the user for a predetermined time up to the present.
(19) The processing unit is
The information processing apparatus according to any one of (15) to (18), wherein the virtual object is not visually recognized when the object that shields the virtual object exists.
(20) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (19), which controls the presentation based on the attribute of the virtual object when the user is not visually recognizing the virtual object.
(21) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (20), which controls the presentation based on the size of the virtual object when the user is not visually recognizing the virtual object.
(22) The processing unit is
The information processing apparatus according to (21), which presents an image corresponding to the size of the virtual object as the information to the visual sense.
(23) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (22), which controls the presentation based on the number of the virtual objects when the user is not visually recognizing the virtual object.
(24) The processing unit is
The information processing apparatus according to (23), wherein the sound or vibration of a waveform corresponding to the number of virtual objects is presented as the information to the auditory sense or the tactile sense.
(25) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (24), which controls the presentation based on the color of the virtual object when the user is not visually recognizing the virtual object.
(26) The processing unit is
The information processing apparatus according to (25), which presents sound or vibration having a frequency corresponding to the color of the virtual object as the information to the auditory sense or the tactile sense.
(27) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (26), which controls the presentation based on the type of the virtual object when the user is not visually recognizing the virtual object.
(28) The processing unit is
The information processing apparatus according to (27), wherein the sound or vibration of the number of times according to the type of the virtual object is presented as the information to the auditory sense or the tactile sense.
(29) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (28), which controls the presentation based on the orientation of the virtual object with respect to the user's hand when the user is not visually recognizing the virtual object.
(30) The processing unit is
The information processing apparatus according to (29), wherein when the orientation of the virtual object is different with respect to the direction in which the hand of the user has moved, different vibrations are presented as the information to the tactile sensation.
(31) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (30), which controls the presentation based on whether or not the virtual object is a virtual object predicted to be operated by the user, when the user is not visually recognizing the virtual object.
(32) The processing unit is
The information processing apparatus according to (31), wherein the information processing device presents the information to the auditory sense or the tactile sense when the virtual object is a virtual object predicted to be operated by the user.
(33) The processing unit is
The information processing apparatus according to any one of (1), (2), and (15) to (32), which controls the presentation based on whether or not the virtual object is dangerous, or on the degree of danger of the virtual object, when the user is not visually recognizing the virtual object.
(34) The processing unit is
The information processing apparatus according to (33), wherein an image, sound, or vibration indicating the danger of the virtual object is presented as the information to the visual sense, the auditory sense, or the tactile sense.
(35) The processing unit is
The information processing apparatus according to (1), which presents, as the information to the visual sense, an operation line that connects the virtual object and the user's hand and is used to operate the virtual object.
(36) The processing unit is
The information processing apparatus according to (35), wherein the operation line corresponding to the color, hardness, weight, or image as the attribute of the virtual object is presented as the information to the visual sense.
(37) The processing unit is
The information processing apparatus according to (35) or (36), wherein the operation line is presented as a solid line, a dotted line, a line having only a start point portion and an end point portion, or a series of images as the attribute of the virtual object.
(38) The processing unit is
The information processing apparatus according to any one of (35) to (37), wherein the shape of the operation line is changed according to the hardness as the attribute of the virtual object or the weight.
(39) The processing unit is
The information processing apparatus according to any one of (35) to (38), wherein the attribute is changed based on the characteristics of the user.
(40) The processing unit is
The information processing apparatus according to any one of (35) to (39), wherein the attribute is changed when the virtual object collides with another object.
(41) The processing unit is
The information processing apparatus according to any one of (35) to (40), wherein the attribute is changed depending on whether the virtual object exists in the air or on the object.
(42) The processing unit is
The information processing apparatus according to any one of (35) to (41), wherein the color of the virtual object as an attribute is changed according to the environment color.
(43) The processing unit is
The information processing apparatus according to any one of (1) and (35) to (42), which controls the presentation of the sound as the information to the auditory sense based on the sound as the attribute of the virtual object.
(44) The processing unit is
The information processing apparatus according to (43), wherein the sound as the information to the auditory sense is changed according to the speed of movement of the virtual object.
(45) The processing unit is
The information processing apparatus according to any one of (1) and (35) to (42), which controls the presentation of the vibration as the information to the tactile sensation based on the vibration as the attribute of the virtual object.
(46) The processing unit is
The information processing apparatus according to (45), wherein the vibration as the information to the tactile sensation is changed according to the speed of movement of the virtual object.
(47) An information processing method in which a processing unit of an information processing apparatus having the processing unit controls the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of a virtual object, based on an attribute of the virtual object to be operated or the status of the operation on the virtual object.
(48) A program for causing a computer to function as a processing unit that controls the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of a virtual object, based on an attribute of the virtual object to be operated or the status of the operation on the virtual object.
11, 301, 601 Information processing device, 21, 331 Sensor unit, 22, 322, 612 Control unit, 23 Video display unit, 24 Sound presentation unit, 25 Tactile presentation unit, 41, 621 Sensor information acquisition unit, 42, 622 Position and posture acquisition unit, 43, 631 Object information acquisition unit, 44 Intention identification unit, 45 Feedback determination unit, 46, 344, 625 Output control unit, 73 HMD, 311 AR glass system, 312 Hand controller, 341 Hand and finger position detection unit, 342 Object detection unit, 343, 624 Application execution unit, 354 Vibration presentation unit, 623 Object selection acquisition unit, 632 Selection feedback generation unit, 641 Visual generation unit, 642 Sound generation unit, Tactile generation unit

Claims (20)

  1.  An information processing device comprising: a processing unit that controls presentation of information to any one or more of a user's visual, auditory, and tactile senses for an operation of a virtual object, based on an attribute of the virtual object to be operated or a status of the operation on the virtual object.
  2.  The information processing device according to claim 1, wherein the operation of the virtual object includes an operation of bringing the user's hand into contact with the virtual object.
  3.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on whether or not the user intends to operate the virtual object.
  4.  The information processing device according to claim 3, wherein the processing unit does not present the information for the operation of the virtual object when the user does not intend to operate the virtual object.
  5.  The information processing device according to claim 3, wherein the processing unit presents different information for the operation of the virtual object depending on whether or not the user intends to operate the virtual object.
  6.  The information processing device according to claim 3, wherein the processing unit changes the brightness of the virtual object as the information to the visual sense when the user does not intend to operate the virtual object.
  7.  The information processing device according to claim 3, wherein, when the user does not intend to operate the virtual object, the processing unit presents a sound or vibration of a type or amplitude corresponding to the speed of the user's hand as the information to the auditory or tactile sense.
  8.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on whether or not the virtual object is within the field of view of the user.
  9.  The information processing device according to claim 1, wherein the processing unit presents different information for the operation of the virtual object depending on whether or not the user is visually recognizing the virtual object.
  10.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on an attribute of the virtual object when the user is not visually recognizing the virtual object.
  11.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on the size of the virtual object when the user is not visually recognizing the virtual object.
  12.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on the number of the virtual objects when the user is not visually recognizing the virtual object.
  13.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on the color of the virtual object when the user is not visually recognizing the virtual object.
  14.  The information processing device according to claim 1, wherein the processing unit controls the presentation based on the type of the virtual object when the user is not visually recognizing the virtual object.
  15.  The information processing device according to claim 1, wherein the processing unit presents, as the information to the visual sense, an operation line that connects the virtual object and the user's hand and is used to operate the virtual object.
  16.  The information processing device according to claim 15, wherein the processing unit presents, as the information to the visual sense, the operation line corresponding to a color, hardness, weight, or image as the attribute of the virtual object.
  17.  The information processing device according to claim 15, wherein the processing unit presents the operation line as a solid line, a dotted line, a line having only a start point portion and an end point portion, or a series of images as the attribute of the virtual object.
  18.  The information processing device according to claim 15, wherein the processing unit changes the shape of the operation line according to a hardness or weight as the attribute of the virtual object.
  19.  An information processing method in which a processing unit of an information processing device having the processing unit controls presentation of information to any one or more of a user's visual, auditory, and tactile senses for an operation of a virtual object, based on an attribute of the virtual object to be operated or a status of the operation on the virtual object.
  20.  A program for causing a computer to function as a processing unit that controls presentation of information to any one or more of a user's visual, auditory, and tactile senses for an operation of a virtual object, based on an attribute of the virtual object to be operated or a status of the operation on the virtual object.
PCT/JP2021/033620 2020-09-28 2021-09-14 Information processing device, information processing method, and program WO2022065120A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-161753 2020-09-28
JP2020161753 2020-09-28

Publications (1)

Publication Number Publication Date
WO2022065120A1 true WO2022065120A1 (en) 2022-03-31

Family

ID=80845408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/033620 WO2022065120A1 (en) 2020-09-28 2021-09-14 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2022065120A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003067107A (en) * 2001-08-28 2003-03-07 Foundation For Advancement Of Science & Technology Tactile sense presenting device
JP2009069918A (en) * 2007-09-10 2009-04-02 Canon Inc Information processor and information processing method
JP2014092906A (en) * 2012-11-02 2014-05-19 Nippon Hoso Kyokai <Nhk> Tactile force presentation device
WO2020105606A1 (en) * 2018-11-21 2020-05-28 ソニー株式会社 Display control device, display device, display control method, and program


Similar Documents

Publication Publication Date Title
US9367136B2 (en) Holographic object feedback
US20220121344A1 (en) Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
JP7341166B2 (en) Transmode input fusion for wearable systems
CN109804334B (en) System and method for automatic placement of virtual objects in three-dimensional space
EP3332311B1 (en) Hover behavior for gaze interactions in virtual reality
EP3542252B1 (en) Context-sensitive hand interaction
EP3427103B1 (en) Virtual reality
EP3311249B1 (en) Three-dimensional user input
EP2672880B1 (en) Gaze detection in a 3d mapping environment
JP2022535325A (en) Arm Gaze-Driven User Interface Element Gating for Artificial Reality Systems
TW201214266A (en) Three dimensional user interface effects on a display by using properties of motion
US20190272040A1 (en) Manipulation determination apparatus, manipulation determination method, and, program
WO2019187862A1 (en) Information processing device, information processing method, and recording medium
JP6332652B1 (en) Display control apparatus and program
JP2022535182A (en) Artificial reality system with personal assistant elements gating user interface elements
JP2022535322A (en) Gesture-Driven User Interface Element Gating to Identify Corners for Artificial Reality Systems
WO2019010337A1 (en) Volumetric multi-selection interface for selecting multiple entities in 3d space
CN108369451B (en) Information processing apparatus, information processing method, and computer-readable storage medium
WO2022065120A1 (en) Information processing device, information processing method, and program
CN111240483A (en) Operation control method, head-mounted device, and medium
WO2021176861A1 (en) Information processing device and information processing method, computer program, and augmented reality sensing system
JP6922743B2 (en) Information processing equipment, information processing methods and programs
WO2023286316A1 (en) Input device, system, and control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21872250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21872250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP